derek.long Posted July 31, 2014

Hello,

We're developing an effect where raindrops are applied to a window and cleared by a windscreen wiper. The water drops and trickling are done in a pixel shader and applied as a post effect, using refraction to distort the image. This works quite well and has a minimal performance hit.

To clear the drops with the wiper, we render a white quad (an ObjectMeshDynamic) onto a black texture, set this texture in the shader, and zero out the water drops in the white area. The wiper area and texture can be seen in the top right of the screenshot here.

We use engine.render.renderImage2D to render the simple quad to the image from within our update_scene() and then set the image in the shader. Unfortunately this seems to have a lot of overhead. engine.render.renderImage2D normally renders the entire scene into the texture, but we only want the dynamic quad, so we store a list of all enabled nodes, disable them before calling renderImage2D, and re-enable them afterward. This causes a few glitches, though: if the editor is loaded, enabling/disabling nodes can leave them stuck disabled and unable to be enabled again (perhaps a threading issue). And even with just a quad rendered into the texture, it causes a large hit to our frame rate.

The workaround we have implemented instead is to supply the quad vertices to the pixel shader as parameters and do a point-in-triangle test inside the pixel shader itself. This is a short-term solution and not as flexible as the render-to-texture method, but it will do for now. Unfortunately, doing all that math inside the pixel shader may get close to the instruction limit and incur a performance hit.
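For reference, the point-in-triangle test described above can be done with edge functions (signed areas), which map directly to a few multiply-adds per edge in a pixel shader. This is a generic C++ sketch of that math, not Derek's actual shader code; the function names are hypothetical:

```cpp
struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (b-a) and (p-a);
// positive when p lies to the left of the directed edge a->b.
float edgeFn(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// True when p lies inside (or on the border of) triangle abc.
// Accepting either winding means the caller need not care about order.
bool pointInTriangle(Vec2 p, Vec2 a, Vec2 b, Vec2 c) {
    float e0 = edgeFn(a, b, p);
    float e1 = edgeFn(b, c, p);
    float e2 = edgeFn(c, a, p);
    bool allNonNeg = e0 >= 0.0f && e1 >= 0.0f && e2 >= 0.0f;
    bool allNonPos = e0 <= 0.0f && e1 <= 0.0f && e2 <= 0.0f;
    return allNonNeg || allNonPos;
}

// The wiper quad is covered by two triangles sharing the diagonal q0-q2.
bool pointInQuad(Vec2 p, Vec2 q0, Vec2 q1, Vec2 q2, Vec2 q3) {
    return pointInTriangle(p, q0, q1, q2) || pointInTriangle(p, q0, q2, q3);
}
```

In a shader the branches would typically be replaced with sign/step tricks, but the per-pixel cost is the same handful of instructions, which is why the instruction-count concern is mostly about the surrounding logic rather than the test itself.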
Another solution I have thought of is to put a white square on a black texture and then calculate the texture coordinates needed at the corners of the screen to warp it into the shape of the quad we need. There may be an issue with this if the texture coordinates become extremely large, though: if the wiper were not moving, the coordinates would be calculated at infinity, and if it were moving very slowly they might overflow the texture wrap limit and cause texture glitches.

I noticed there is Unigine::TextureRender in the C++ API. Would this be the preferred method for a simple render to texture, or is there another way to do this effectively inside Unigine Script?
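The degenerate-coordinate concern above can be made concrete. If the wiper quad is treated as a parallelogram (corner plus two edge vectors), mapping a screen point into the white-square texture means inverting a 2x2 system, and the determinant of that system goes to zero exactly when the quad collapses. A hedged sketch with hypothetical names (the real wiper quad may not be a parallelogram, so this only illustrates the failure mode):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Solve origin + u*e0 + v*e1 = p for (u, v), i.e. map screen point p
// into the UV space of a parallelogram with corner `origin` and edge
// vectors `e0`, `e1`. Returns false when the quad is (near-)degenerate:
// the determinant vanishes and the coordinates would blow up toward
// infinity -- the overflow issue described in the post.
bool screenToQuadUV(Vec2 p, Vec2 origin, Vec2 e0, Vec2 e1,
                    float &u, float &v) {
    float det = e0.x * e1.y - e0.y * e1.x;   // determinant of [e0 e1]
    if (std::fabs(det) < 1e-6f)
        return false;                         // degenerate wiper quad
    float dx = p.x - origin.x;
    float dy = p.y - origin.y;
    u = ( e1.y * dx - e1.x * dy) / det;       // Cramer's rule
    v = (-e0.y * dx + e0.x * dy) / det;
    return true;  // p is inside the quad iff 0 <= u <= 1 and 0 <= v <= 1
}
```

Clamping or rejecting frames where the determinant falls below a threshold would avoid feeding extreme coordinates to the texture sampler.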
ulf.schroeter Posted July 31, 2014

Performance-wise, the main drawback of renderImage2D might be the required texture copy operation. The best performance might be achievable with engine.render.renderProcedural() https://developer.unigine.com/en/docs/1.0/cpp_api/reference/api_render_class#c_renderProcedurals_constVectorMaterialPtrn and some custom post-processing material/shaders, as this could avoid the per-frame texture copy to the GPU. There are some Unigine samples using renderProcedural().
unclebob Posted August 15, 2014

Hey Derek! Just use Unigine::TextureRender on the C++ side, as we do with the AppPanorama plugin, for example (it renders 4 textures and composes them into the final image).
derek.long Posted August 19, 2014 (Author)

Ok, cheers. Will look into TextureRender; it sounds like it will do what we need.