
add 2D elements to DOF calculation




Posted

We are trying to add 2D elements to the DOF calculation. We use a live video input to create a chromakey shape. Normally this is a post-processing effect added at CALLBACK_END_POST_MATERIALS.

If we switch to RENDER_CALLBACK_END_TAA and use DOF, this 2D shape is also blurred, which is not desired. The idea is to add this element to the depth texture at a predefined Z-position (later using object tracking), but if we change the texture in the Render::CALLBACK_END_OPACITY_GBUFFER stage we have an ordering problem, because the key shape is calculated later than the GBuffer depth texture modification.

Is there a better way to do this?

Thanks

Ralf

Posted

Hi Ralf,

Could you please elaborate? I am not sure if I understand you correctly.
So, if I follow, you have some plane (for simplicity) that you use as a chromakey mask, and you want to move it closer to/further from the camera. This shape is drawn in a callback, and you try to somehow modify the depth buffer. And the DoF is affecting your mask, but you don't want that?

I'd like some more info on how you write to the depth buffer and what's breaking. Some kind of minimal repro sample would be great.
As for DoF affecting the mask: you can modify the Material Mask GBuffer texture's DOF_BIT. If you write 0, it will disable that pixel's DoF contribution, so I think this is what you are looking for?

Note that you might have to preserve the other bits by sampling a copy of this texture and setting only DOF_BIT to 0 where the shape is.
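For illustration, a minimal sketch of that per-pixel logic, shown as C-like pseudocode rather than exact UUSL; the helper names (getMaterialMaskCopy, isInsideKeyShape, setMaterialMask) are hypothetical placeholders for however you sample and write these textures in your own pass:

    // Sketch only: clear DOF_BIT where the key shape is, keeping all other mask bits.
    int mask = getMaterialMaskCopy(x, y);  // hypothetical: sample the copied Material Mask GBuffer
    if (isInsideKeyShape(x, y))            // hypothetical: your chromakey coverage test
        mask &= ~DOF_BIT;                  // zero out only the DoF bit, preserve the rest
    setMaterialMask(x, y, mask);           // hypothetical: write the modified mask back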
 

Posted

Hi,

Our chromakeyer is currently a fullscreen post-processing effect. We would like to integrate this post effect into an earlier render pipeline stage so that exposure, color correction, and similar effects apply to it, but not the DOF blur.

The idea is to use this black-and-white key signal texture to modify the depth texture. It sounds like a hack, but it could be a way to get better integration with the engine's lighting effects, etc.

Thanks

Ralf

Posted (edited)

The attached files show the difference.

end_camera_effects.jpeg

end_taa.jpeg

Edited by ralf.preininger
Posted

Still not sure what you are talking about.
From those screenshots I can take a guess that RE1 is the physical camera's feed, and "ME 1 PGM" is the mixed feed of Unigine and RE1 inside Unigine itself?
So this means that you are just using some kind of post-process material that removes the chromakey and places RE1's feed into the world? And you want it to be affected by lighting? Is that what you are trying to do?
In that case I could recommend trying a mesh material instead. This way you can control whether to apply DoF or not; just don't forget that you'd have to discard pixels so that the background won't be affected. That's because if you use some kind of alpha blend here, then every pixel the plane touches would still blend with, and affect, the background.
As for the UV input, the Screen UV node in the Material Graph, or float2(IN_POSITION.xy * s_viewport.zw) in code, should suffice if I am not mistaken.

For a mesh material you should flip this checkbox off, which will automatically remove it from DoF accumulation.
image.png

As for lighting, you'll still need to have a depth map of your RE1 feed for this to work.
With a mesh material you can write to the depth buffer of the scene; you'll have to leverage that in order for your RE1 feed to be lit by Unigine's lighting system.
Also, don't forget to rerange RE1's depth map into Unigine's camera depth range.
Like so: rerange(depth, 0.0f, 1.0f, var_re1_range_near, var_re1_range_far), where re1_range_near/far are material parameters ranging from the Unigine camera's near to its far plane.
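In case the behavior isn't obvious, here is a minimal sketch of what such a rerange could look like, assuming a plain linear remap; this is an illustration, not the engine's definitive implementation of that function:

    // Sketch only: linearly remap depth from [from_min, from_max] to [to_min, to_max].
    float rerange(float depth, float from_min, float from_max, float to_min, float to_max)
    {
        float k = (depth - from_min) / (from_max - from_min);  // normalize into [0, 1]
        return lerp(to_min, to_max, k);                        // remap into the target range
    }
    // usage: depth_in_camera = rerange(re1_depth, 0.0f, 1.0f, var_re1_range_near, var_re1_range_far);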


Here are some useful links:
More info on writing mesh materials manually
Unified Unigine Shader Language - Documentation - Unigine Developer
ULON Base Material File Format - Documentation - Unigine Developer

Posted

Sorry for the uncommented images! And yes, RE1 is the live video input camera feed and ME1 PGM is the mixed engine output.

We are using a scriptable material with several render targets to create textures and reuse them in the following shaders, like this:

        {
            renderPass("keyer");          // chroma key extraction
        }

        {
            renderPass("horiz_blur");     // horizontal blur pass
        }

        {
            renderPass("vertical_blur");  // vertical blur pass
        }

        // final image
        renderPassToTexture("composite", engine.render_state.getScreenColorTexture());

I assume this is not possible in mesh_base.

Posted

Your proposal to use screen UV sounds good, but what is the right way to create a planar, screen-oriented plane for this material?

Posted

This is the node-based material (attached as a package); it looks good:

image.thumb.png.d8bc39635f6dda6517f495e06f644f25.png

Our generated mesh:

    ObjectMeshDynamic plane = addToEditor(Unigine::createPlane(1000.0f, 1000.0f, 1000.0f));
    plane.setWorldTransform(Mat4_identity);

looks different:

image.thumb.png.e39be1656b7b915e79bf054021e3dab3.png

image.thumb.png.cec1b2b404066aaa6ca7a8fbbce68dd3.png
key_plane.upackage

Posted

The first issue here is that the plane is always on the ground; it has to cover all of the camera's frustum.
For that, make this mesh a child of the camera so that it always follows it. Set its Z to the camera's near plane and move it a bit further so it doesn't lie exactly on the near plane, otherwise you will get Z-fighting (see the sketch below).
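For illustration, a rough UnigineScript-style sketch of that setup; treat it as a minimal sketch only, since the parenting call, the look axis, and the offset value are assumptions to verify against your SDK version and camera orientation:

    // Sketch only: attach the key plane to the camera, just beyond the near plane.
    Player camera = engine.game.getPlayer();
    plane.setWorldParent(camera);      // hypothetical parenting call; keeps the plane following the camera
    float z_near = camera.getZNear();
    // assuming the camera looks along its local -Z axis; adjust for your setup
    plane.setPosition(vec3(0.0f, 0.0f, -(z_near + 0.01f)));  // small offset to avoid Z-fighting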
The next issue is the scale of the image versus its depth. They are related, because you are adding a Z offset to this plane but not adjusting the size of the image: the further the object is from the camera, the smaller it should appear. You can play around with it, but something like this should give you a starting point:
image.png

As for shadows, the easiest way is to get rid of Screen UVs and make sure that your plane matches the frustum's near plane. That way your UVs won't rely on Screen UV, which should make your shadows work correctly, and you won't need to correct your UVs by scaling them based on depth.
The way to achieve this (see the sketch after this list):
1) Project the 4 vertices of the camera's near plane by multiplying the points (-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0) (if I am not mistaken) by the inverse of the camera's projection matrix.
2) Assign the projected vertices as positions for the plane's vertices.
3) Use the UVs of the plane instead of the Screen UV.
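A hedged UnigineScript-style sketch of steps 1 and 2; the NDC depth convention and the exact ObjectMeshDynamic calls are assumptions, so verify them against your SDK:

    // Sketch only: unproject the 4 near-plane NDC corners into the plane's vertices.
    Player camera = engine.game.getPlayer();
    mat4 iproj = inverse(camera.getProjection());
    vec3 ndc[4] = ( vec3(-1.0f, -1.0f, 0.0f), vec3(1.0f, -1.0f, 0.0f),
                    vec3(1.0f, 1.0f, 0.0f), vec3(-1.0f, 1.0f, 0.0f) );
    forloop(int i = 0; 4) {
        vec4 p = iproj * vec4(ndc[i], 1.0f);                        // assuming NDC depth 0 = near plane
        plane.setVertex(i, vec3(p.x / p.w, p.y / p.w, p.z / p.w));  // perspective divide
    }
    plane.flushVertex();  // assumed flush call so the changed vertices take effect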

As a side note:
For far background elements, or as a generalized rule of thumb, you can try matching an intersection plane of the frustum instead.
The way to achieve this (see the sketch after this list):
1) Project each of the 8 corners of the camera's frustum with the inverse of its projection matrix.
2) Interpolate each vertex of the plane between one projected vertex of the near plane and one projected vertex of the far plane, based on the Z offset of the plane.
Don't forget to normalize your lerp factor:

factor = (ZOffset - Near) / (Far - Near);

Here 0 should give you the near plane (don't forget to add a bit of an offset to avoid Z-fighting),
and 1 should give you the far plane.
3) Use the UVs of the plane instead of the Screen UV.
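Again a hedged UnigineScript-style sketch; unprojectCorner is a hypothetical helper that applies the inverse projection and perspective divide to an NDC corner, exactly as in the previous snippet:

    // Sketch only: place the plane at z_offset by lerping between near and far frustum corners.
    float factor = clamp((z_offset - z_near) / (z_far - z_near), 0.0f, 1.0f);
    forloop(int i = 0; 4) {
        vec3 p_near = unprojectCorner(ndc[i], 0.0f);  // hypothetical: NDC depth 0 = near plane
        vec3 p_far  = unprojectCorner(ndc[i], 1.0f);  // hypothetical: NDC depth 1 = far plane
        plane.setVertex(i, lerp(p_near, p_far, factor));
    }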


As another side note: you might want to look into the RGB To HSV node.
image.png

Hope this helps!

Posted

Thanks a lot, this is a good reference point.

best regards

Ralf 
