I was taken off the layers refactoring last year to work on off-main-thread animations for Firefox OS (more on that in another post). In the meantime Bas and Nical have been carrying on the refactoring work. As of a few weeks ago I am back on the refactoring, and so is most of the graphics team in some capacity. It has become a high priority for us because it blocks OMTC on Windows, which in turn blocks our Windows Metro browser. I have been converting tiled layers to the refactored setup (more below). Bas has got OMTC going on Windows using Direct3D 11, though it is still early days. There have been some small architectural changes (also below), and work carries on. We're getting there: we hope to merge the graphics branch (where you can follow our progress and contribute; beware, builds can be very shaky) to Mozilla Central around the end of February.
On the architectural side, there are two major changes: textures communicate via their own IPDL protocol, and textures can be changed dynamically. There has also been some renaming - what used to be called BufferHost/Client are now called CompositableHost/Client. Many of the flavours of Texture* and Compositable* have changed as we go for cleaner abstractions rather than trying to closely match existing code.
Textures (and soon Compositables) communicate directly with one another using their own IPDL protocols, rather than using the Layers protocol. Communication mostly still occurs within the Layers transactions, so we avoid any race conditions. The advantage of separate protocols is that each abstraction layer is more fully isolated - the layers don't know what their textures are doing and so forth.
It is a requirement that Textures can be changed dynamically. This is a shame. It would be nice (and sensible) if, once we create a layer, its Textures remained of the same type unless the layer changes them. But this is not the case: for example, async video can change from RGB to YCbCr after the first frame without the layer knowing. So we have to deal with the texture changing under us (that is, under the Layer and Compositable), which is complicated because the Textures use their own communication mechanism. This has led to a lot of code churn, but hopefully we have a solution now. It will be an interesting challenge for our test frameworks to see if they pick up all the bugs!
Personally, I have been concentrating on tiled layers (and tidying up a whole bunch of TODOs and little things). Tiled layers are Thebes layers which are broken up into a grid of tiles. We use tiled layers on Android, but have long-term plans to use them pretty much everywhere. Each tile is a texture, and the tiles are managed by a TiledBuffer which is held by the layer. There is thus an obvious mapping onto the refactored layers system. Unfortunately that didn't work so well; perhaps in the long term we can end up with something like that. For now, the Compositable owns a TiledBuffer, which manages tiles, which in turn hold Textures. This is because the buffer is copied from the rendering thread to the compositing thread, so tiles are required to be lightweight value objects, but Textures are ref-counted heap objects. Once we have an initial landing of the refactoring, we can hopefully change the tiled layers architecture to match the refactoring vision and we'll be sweet (this will also allow tiled layers to work cross-process; currently they only work cross-thread).