Tuesday, June 12, 2012

Mask Layers on the Direct3D backends

In the next few posts I'll describe how mask layers are used in the various graphic backends. The Direct3D 9 and 10 backends are pretty similar, so I'll describe them together. The Direct3D 10 backend is the cleanest, so that's where I will start.

A mask layer is loaded from the surface created in SetupMaskLayer into a DX10 texture in LayerD3D10::LoadMaskTexture. This should be fast because we use a DX10 texture as the backend to that surface. In fact, LoadMaskTexture does more than that: it also sets everything up for the shader to use the mask, and returns a flag indicating which kind of shader to use (with a mask or without).
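
Roughly speaking, LoadMaskTexture has the following shape (a simplified C++ sketch, not the real Gecko code; the flag names are made up, and the binding of the texture and quad is filled in below):

    #include <d3d10.h>
    #include <d3d10effect.h>

    // Illustrative return values; the real shader-flag enum is richer.
    enum MaskShaderFlag { SHADER_NO_MASK, SHADER_MASK };

    // Sketch of the shape of LoadMaskTexture: if the layer has a mask,
    // bind the mask's texture (and its bounding quad) to the effect and
    // report that the masking shaders should be used; otherwise fall back
    // to the ordinary shaders.
    MaskShaderFlag
    LoadMaskTextureSketch(ID3D10ShaderResourceView* aMaskSRV,
                          ID3D10Effect* aEffect)
    {
      if (!aMaskSRV) {
        return SHADER_NO_MASK;
      }
      // ... bind aMaskSRV as tMask and set vMaskQuad, as shown below ...
      return SHADER_MASK;
    }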

Each DX10 layer class has a GetAsTexture method which returns a texture snapshot of that layer. Currently it is only implemented for image layers, where it returns the backing texture. LoadMaskTexture calls this method to get the texture for the mask layer. Since GetAsTexture returns a shader resource view (SRV) wrapping the texture memory, LoadMaskTexture can simply set this SRV as a resource on the effect (a DX10 abstraction for a group of shaders and state), under the name tMask ('t' for texture).
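
In terms of the D3D10 effect API, that binding is just a couple of calls (a sketch; tMask is the name of the shader resource variable in the effect file, the rest is standard ID3D10Effect usage):

    #include <d3d10.h>
    #include <d3d10effect.h>

    // Hand the mask layer's texture to the effect so that the masking
    // pixel shaders can sample it. The SRV is the one returned by the
    // mask layer's GetAsTexture.
    void
    BindMaskTexture(ID3D10Effect* aEffect, ID3D10ShaderResourceView* aMaskSRV)
    {
      aEffect->GetVariableByName("tMask")
             ->AsShaderResource()
             ->SetResource(aMaskSRV);
    }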

GetAsTexture also returns the size of the texture. LoadMaskTexture uses this, together with the mask layer's effective transform, to calculate the texture's bounding rectangle in the coordinate space which will be used for rendering; this rectangle is passed to the effect as vMaskQuad ('v' for vector: it is just a vector of length four, containing the position and dimensions of the bounding quad).
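
Calculating and passing the quad boils down to something like this (again a sketch; I'm assuming, for simplicity, that the mask's effective transform is just a 2D scale and translation, so the texture's bounds map to an axis-aligned rectangle):

    #include <d3d10.h>
    #include <d3d10effect.h>

    // Map the mask texture's bounds through an (assumed) scale-and-translate
    // effective transform and pass the resulting rectangle to the effect as
    // vMaskQuad = (x, y, width, height).
    void
    SetMaskQuad(ID3D10Effect* aEffect,
                float aTexWidth, float aTexHeight,
                float aTranslateX, float aTranslateY,
                float aScaleX, float aScaleY)
    {
      float quad[4] = {
        aTranslateX,           // x position of the bounding quad
        aTranslateY,           // y position of the bounding quad
        aTexWidth * aScaleX,   // width after the effective transform
        aTexHeight * aScaleY   // height after the effective transform
      };
      aEffect->GetVariableByName("vMaskQuad")->AsVector()->SetFloatVector(quad);
    }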

When a layer is rendered (RenderLayer), a system of shader flags (including the flag returned from LoadMaskTexture) is used to work out which shader to use. LoadMaskTexture is called as part of this calculation, so by the time the shader flags are ready, the mask and its quad have already been passed to the GPU if they are needed. SelectShader is used to pick the actual shaders (a technique, technically, which is a group of passes, each a combination of vertex and fragment shaders), and the other arguments are passed to the effect. The technique is called and the layer is rendered.
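
The flag juggling amounts to something like this (a sketch with made-up flag and technique names; the real SelectShader handles many more combinations, such as different colour formats and component alpha):

    #include <d3d10.h>
    #include <d3d10effect.h>

    // Illustrative shader flags; the real set in Gecko is richer.
    enum ShaderFlags {
      SHADER_RGBA    = 1 << 0,  // sample an RGBA texture
      SHADER_MASK_2D = 1 << 1,  // modulate by a mask with a 2D transform
      SHADER_MASK_3D = 1 << 2   // modulate by a mask under a 3D transform
    };

    // Pick an effect technique from the accumulated flags, in the spirit
    // of SelectShader. The technique names here are invented.
    ID3D10EffectTechnique*
    SelectTechniqueSketch(ID3D10Effect* aEffect, unsigned aFlags)
    {
      if (aFlags & SHADER_MASK_3D) {
        return aEffect->GetTechniqueByName("RenderRGBALayerMask3D");
      }
      if (aFlags & SHADER_MASK_2D) {
        return aEffect->GetTechniqueByName("RenderRGBALayerMask");
      }
      return aEffect->GetTechniqueByName("RenderRGBALayer");
    }

    // Rendering then looks roughly like:
    //   technique->GetPassByIndex(0)->Apply(0);
    //   device->Draw(4, 0);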

There is one kind of vertex shader and a bunch of different kinds of fragment shaders (for different colour schemes, and so on). When we added mask layers, we introduced vertex shaders for layers with masks and for layers with masks and a 3D transform. We also doubled the number of fragment shaders, adding a version of each for layers with masks (plus a couple of extra ones for 3D). For masks, the vertex shader calculates the location on the mask at which to sample (in the texture's coordinate space), and the fragment shader pretty much just multiplies the alpha value of that sample from the mask with the result of rendering the rest of the layer at that point.
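
The masking part of the fragment shader really is just a multiply; in C++ terms (colours are premultiplied, so every component gets scaled by the mask's alpha):

    // A minimal sketch of what the masking fragment shaders do: take the
    // colour the layer would have produced anyway and scale it by the alpha
    // sampled from the mask at the interpolated mask coordinate.
    struct Colour { float r, g, b, a; };

    Colour ApplyMask(Colour aLayerColour, float aMaskAlpha)
    {
      return { aLayerColour.r * aMaskAlpha,
               aLayerColour.g * aMaskAlpha,
               aLayerColour.b * aMaskAlpha,
               aLayerColour.a * aMaskAlpha };
    }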

The vertex shader operates on each corner of the rendered quad (which is a rectangular window onto a layer). It figures out the location of the corner it is working on in the current coordinate space (that of the containing layer) and uses that to calculate a position in the coordinate space of the texture being rendered (an image of the layer). For mask layers, we do likewise to calculate a position on the mask's texture; the graphics hardware interpolates these positions to find texture coordinates for each pixel.
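
Concretely, for each corner the mask coordinate is just the corner's position expressed relative to the mask's bounding quad, so that (0,0) to (1,1) spans the mask texture; a C++ sketch of the maths (not the shader itself):

    // aCornerX/aCornerY is the corner's position after the layer's transform,
    // in the same coordinate space as the mask quad (x, y, width, height).
    // The result is a normalised texture coordinate which the hardware then
    // interpolates across the quad.
    struct TexCoord { float u, v; };

    TexCoord MaskCoordForCorner(float aCornerX, float aCornerY,
                                float aQuadX, float aQuadY,
                                float aQuadWidth, float aQuadHeight)
    {
      return { (aCornerX - aQuadX) / aQuadWidth,
               (aCornerY - aQuadY) / aQuadHeight };
    }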

For layers with a 3D transform (with perspective), this interpolation could lead to distortion because the graphics hardware does perspective-correct interpolation, but our mask layer was rendered to its texture after being transformed, so interpolation should be linear. We therefore have to do a little dance to 'undo' the perspective-correct interpolation in any vertex and fragment shaders which could be used on layers with 3D transforms.
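
The dance is the standard trick for getting plain screen-space interpolation out of perspective-correct hardware: the vertex shader multiplies the mask coordinate by the vertex's clip-space w (and passes w along as an extra component), and the fragment shader divides by the interpolated w, which cancels the hardware's per-pixel correction. A C++ sketch of the maths:

    // Vertex stage: pre-multiply the mask coordinate by w.
    struct MaskCoord3D { float u, v, w; };

    MaskCoord3D EmitMaskCoord(float aU, float aV, float aClipW)
    {
      return { aU * aClipW, aV * aClipW, aClipW };
    }

    // Fragment stage: divide by the interpolated w to recover a linearly
    // (screen-space) interpolated coordinate.
    void ResolveMaskCoord(const MaskCoord3D& aInterpolated,
                          float* aU, float* aV)
    {
      *aU = aInterpolated.u / aInterpolated.w;
      *aV = aInterpolated.v / aInterpolated.w;
    }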

The DirectX 9 backend works pretty similarly to the DirectX 10 one; the shaders and textures work differently, but the concepts are the same. The management of the mask layers also changes to reflect the way that LayerManager is written, but again the changes are minor. On the DirectX 9 backend we support shadow layers for OMTC, but I'll discuss that when I talk about the OpenGL backend, because that is where more of the OMTC effort went.
