For a long time I was interested in making a realtime arch-viz project but the prospect of having to wait for lightmap baking deterred me. (The rendering times pushed me from offline to realtime rendering back in the day.)
Then I finally managed to compile UE 4.21 with VXGI 2.0, a realtime GI solution, so I figured it was time to come up with something. As an artsy twist to a standard arch-viz scene I thought I’d animate most elements in the level.
Nvidia’s voxel-based lighting solution handles realtime GI, reflections and ambient occlusion. The tech doesn’t work particularly well in big, open areas or with foliage, which is why this is an indoor scene.
I wanted to use distance field shadows to get decent penumbras, but I ran into a problem: VXGI only works on stuff covered by cascaded shadow maps from a given light (the sun in this case). If the light only uses DF shadows then the voxelization step seems to be skipped.
After some experimentation I came up with a solution: two sun lights in the scene!
One light, called “Sun_Indirect”, is responsible for the indirect lighting: it has a very low-fidelity CSM setup to force the voxelization and emittance calculations. To prevent it from contributing direct lighting during the main render pass, a simple light function material is applied: it sets the brightness to 0 in the main pass but leaves the light untouched while the voxelization runs.
Every moving piece in the scene is an instanced static mesh. Instances are compact and fast to render and transform, but they have a drawback inherent to the technique: each instance looks the same. This was a problem since I wanted to move small pieces making up a bigger structure – like a concrete wall or hardwood floor – and having them look identical would’ve ruined the visuals. I had to find a way to tweak the UVs on each instance separately so that instances standing next to each other match up and show a much larger, seemingly contiguous surface.
I looked at what data is passed along to the instances: putting custom values into PerInstanceRandom and PerInstanceFadeAmount would’ve required engine changes, which left the transform data. I realized that I wouldn’t really need scaling, so I could sacrifice that vector3: store U and V offsets there and compensate for the change of scale in the material by moving the vertices.
The logic linked to World Position Offset is fairly simple.
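To illustrate the idea outside the material graph, here is a small C++ sketch. The packing scheme is an assumption for the example: the instance “scale” carries the offsets as (1 + U offset, 1 + V offset, 1), keeping values above 1x.

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };
struct Vec2 { double u, v; };

// Hypothetical encoding: the abused scale vector stores the per-instance
// UV offset as (1 + uOffset, 1 + vOffset, 1).
Vec2 decodeUVOffset(const Vec3& scale) {
    return { scale.x - 1.0, scale.y - 1.0 };
}

// Vertex compensation: the instance transform scales a local vertex p to
// (scale * p); adding (p - scale * p) as a position offset restores the
// original, unscaled shape.
Vec3 wpoCompensation(const Vec3& p, const Vec3& scale) {
    return { p.x * (1.0 - scale.x),
             p.y * (1.0 - scale.y),
             p.z * (1.0 - scale.z) };
}
```

Note that World Position Offset is applied in world space, so in the actual material this compensation vector also has to be rotated by the instance rotation before being added.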
There are two main things to keep in mind with this hack. One is that the instance bounds remain scaled regardless of the vertex compensation in the material. This means it is recommended to scale things up, above 1x, and not below, because otherwise culling could remove instances too soon. (When the scaled-down bounds are off screen while the offset vertices should still be in view.) Excessive upscaling is also better avoided so as not to compromise performance too much.
The second problem is that the size-compensating material is not used during shadow calculations, so the shadows fully reflect the scaling. To work around this I have two InstancedStaticMeshComponents in each actor: one is scaled and rendered in the main pass but casts no shadows, while the other is never scaled and not visible, but casts shadows.
For some items, like the couch or the arches, multiple instanced meshes end up with the same transform, on top of each other. Since they look the same, Z-fighting is not visible; however, the multi-bounce indirect lighting gets amplified by each stacked mesh. To fix that, every instance except the first one gets scaled to 0.5x when not moving. In the material I check for this particular scale and, if it’s found, set the diffuse color to black (zero bounced light) and the opacity to 0 to make the instance disappear from the main pass too.
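The scale-as-flag check boils down to something like the following C++ sketch of the material logic (the epsilon is an assumption; material inputs arrive as floats, so an exact compare would be fragile):

```cpp
#include <cassert>
#include <cmath>

// A stacked, currently idle duplicate is marked by scaling it to 0.5x.
// Detect that sentinel with a small tolerance.
bool isParkedDuplicate(double sx, double sy, double sz) {
    const double sentinel = 0.5, eps = 1e-3;
    return std::fabs(sx - sentinel) < eps &&
           std::fabs(sy - sentinel) < eps &&
           std::fabs(sz - sentinel) < eps;
}

struct ShadingInputs { double diffuseMul; double opacity; };

// Black diffuse -> contributes no bounced light to the voxelized GI;
// zero opacity -> hidden in the main pass as well.
ShadingInputs applyDuplicateFix(double sx, double sy, double sz) {
    if (isParkedDuplicate(sx, sy, sz)) return {0.0, 0.0};
    return {1.0, 1.0};
}
```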
On the left: 1 vs 15 stacked vases.
The texturing was fairly straightforward: the textures are from Substance Source, with smaller adjustments done in Photoline. I created texture atlases for ArtEngine to generate a contiguous surface with a much bigger scope. ArtEngine’s Color Match also came in handy when experimenting with different looks.
The placement of the instanced meshes is determined by Transform Generators producing a set of transforms in the world. The system is general and extensible, although I only had time to implement the Grid Generator: it takes a mesh for previewing and, using its dimensions, creates a 3D grid of a given size where the meshes are evenly distributed. Further tweaks are possible, like cell padding, rotation, odd row offset and so on.
That data combined with offset and scale parameters in the material instance allows mapping a texture onto the instances as if they were a single surface.
These TransformGenerators are used by Animators: that class is responsible for the setup, rendering and animation of the mesh instances.
The basic setup includes the mesh asset, count, the list of TransformGenerators the meshes should visit, etc. However, it is also possible to set not just the overall tweening type (Linear, Exponential, Elastic, etc.) but also per-instance tweening parameters: late start, early finish, exponent, and so on. The values can be linked to mesh indices, per-index random, or driven by a texture. Here are a few examples of what can be set up.
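The per-instance timing remap plus easing can be sketched in a few lines of C++. The late start / early finish semantics here are an assumption: they shrink each instance’s active window within the global 0..1 timeline.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Remap the global animation time t (0..1) into this instance's own
// window: lateStart delays the beginning, earlyFinish ends it sooner.
double remapWindow(double t, double lateStart, double earlyFinish) {
    const double end = 1.0 - earlyFinish;
    if (end <= lateStart) return t < lateStart ? 0.0 : 1.0;
    return std::clamp((t - lateStart) / (end - lateStart), 0.0, 1.0);
}

// A couple of tweening curves; the full set also has Elastic and friends.
double easeLinear(double t)                { return t; }
double easeExponential(double t, double e) { return std::pow(t, e); }
```

In use, each instance would feed its own (possibly index- or texture-driven) parameters through `remapWindow` before applying the chosen easing curve.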
The rest of the animation settings are the duration and the sampling rate for the animation baking: a series of transforms for each mesh instance is cached for quick access during playback. The data is saved into the instances on the level, in an editable but hidden variable. (Hidden so the editor won’t freeze for minutes while creating the editor UI for those arrays.)
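Baking and playback reduce to sampling at a fixed rate and interpolating between cached samples. A condensed, hypothetical sketch (one scalar channel stands in for a full transform):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// Bake: evaluate the animation at a fixed sampling rate and cache the
// results for cheap playback.
std::vector<double> bake(const std::function<double(double)>& anim,
                         double duration, double samplesPerSecond) {
    const int count = static_cast<int>(duration * samplesPerSecond) + 1;
    std::vector<double> samples(count);
    for (int i = 0; i < count; ++i)
        samples[i] = anim(i / samplesPerSecond);
    return samples;
}

// Playback: linear interpolation between the two nearest cached samples.
double playback(const std::vector<double>& samples,
                double samplesPerSecond, double time) {
    const double s = time * samplesPerSecond;
    const int i0 = static_cast<int>(s);
    const int i1 = std::min(i0 + 1, static_cast<int>(samples.size()) - 1);
    const double f = s - i0;
    return samples[i0] * (1.0 - f) + samples[i1] * f;
}
```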
The baking itself, just like the animation preview, is driven by a single instance of the EditorTicker class. Its sole purpose is to call a function called EditorTick in every class implementing the EditorTickable interface.
Using this mechanism, the animation baking happens in-editor, with the calculations distributed over several frames in order to keep the editor responsive.
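Stripped of the UE specifics, the pattern is just an interface plus a fan-out call, with the expensive job slicing itself across ticks. An engine-free C++ sketch, with an assumed per-tick work budget:

```cpp
#include <cassert>
#include <vector>

// Anything that wants editor-time updates implements this interface.
struct EditorTickable {
    virtual void EditorTick(double dt) = 0;
    virtual ~EditorTickable() = default;
};

// Single instance; its sole job is fanning the tick out to subscribers.
struct EditorTicker {
    std::vector<EditorTickable*> tickables;
    void Tick(double dt) {
        for (EditorTickable* t : tickables) t->EditorTick(dt);
    }
};

// A baking job that processes a fixed budget of items per tick, so the
// editor stays responsive instead of freezing on one huge loop.
struct BakeJob : EditorTickable {
    int total = 0, done = 0, budgetPerTick = 0;
    void EditorTick(double) override {
        for (int i = 0; i < budgetPerTick && done < total; ++i)
            ++done;  // stand-in for sampling one instance's transform
    }
    bool Finished() const { return done >= total; }
};
```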
Since preview is also driven by the editor tick I can check the animation without starting the game.
The Animators are managed by the Conductor class: it has a list of Animators with a delay associated with each one, and simply fires them off one after the other at the right time.
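The Conductor amounts to a delay-based trigger list; a sketch (the real class drives actual Animator instances rather than ids):

```cpp
#include <cassert>
#include <vector>

// Each cue pairs a start delay (in seconds) with an animator to fire.
struct Cue { double delay; int animatorId; bool fired = false; };

struct Conductor {
    std::vector<Cue> cues;
    double elapsed = 0.0;

    // Advance time and return the animators that should start now.
    std::vector<int> Tick(double dt) {
        elapsed += dt;
        std::vector<int> started;
        for (Cue& c : cues)
            if (!c.fired && elapsed >= c.delay) {
                c.fired = true;
                started.push_back(c.animatorId);
            }
        return started;
    }
};
```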
And finally here is a screenshot from the night version of the scene which I didn’t have time to finish: