It took me two weeks to finish this weekly… fortnightly project because implementing physically based interactions turned out to be much more complicated than I expected.
Most of the level and the base mechanics (shooting a disc that sticks to surfaces, calling it back) were done in two days. Then came endless hours of tweaking and debugging the physics, interspersed with art creation. And it’s still way too difficult to pull off tricks… Oh well… ¯\_(ツ)_/¯
If you want to give it a try you can download it below:
TargetPractice v1.0 (503 MB)
Just unzip the file and start TargetPractice.exe.
This was designed to be a “stand in place” experience so even the smallest SteamVR room setup should be enough, just don’t hit anything while flailing. 🙂
Only the right motion controller is used. The trigger fires the disc if it’s in the gun. While the disc is flying, but before it hits a surface for the first time, you can pull the trigger again for “aftertouch”, where your hand movement is added to the disc’s. Aftertouch is indicated by a pink sparkle trail behind the disc.
After the disc has hit something, the trigger can be held to intensify the magic link. While the link is brighter, the disc can be pulled closer, even upwards to some extent.
Touchpad down spawns a disc into the gun while the menu button resets the scene. Quit with ESC.
It was developed and tested with a 1080 Ti driving a Vive Pro, so your mileage may vary framerate-wise.
I bought an environment pack from the Unreal Marketplace so I only had to create the art assets specific to the project: the gun, the disc and the targets. I used my usual workflow: modeling in Modo, sending the high-poly mesh through my Houdini mesh processor and texturing the game-ready object with Substance Painter.
The shape of the dummy target mirrors its collision hulls closely to keep the physical behaviour and visuals consistent. The disc is still sometimes seen floating near the surface, but that’s due to a trick: the disc has an invisible, physics-driven root component and a visible render mesh. The rigid body sticks onto surfaces, barely touching them. When the constraint is created, an animation starts which moves the render mesh along a vector determined by the hit normal and impact velocity. This logic works fine when smacking head-on into a surface but goes awry when colliding with a corner at a glancing angle.
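A minimal sketch of what that settle animation could look like, assuming a disc actor with a simulated root body and a purely visual child mesh; all names and values here are illustrative, not the project’s actual code:

```cpp
// Illustrative members, assumed declared on the actor:
//   UStaticMeshComponent* RenderMesh;   // visible disc, no collision
//   FVector SettleDirection;
//   float SettleAlpha = 1.f;            // 1 = animation finished
//   float SettleTime = 0.15f, SettleDistance = 2.f;  // made-up values

void ADisc::OnBodyHit(UPrimitiveComponent* HitComp, AActor* OtherActor,
                      UPrimitiveComponent* OtherComp, FVector NormalImpulse,
                      const FHitResult& Hit)
{
	// Blend the inverted surface normal with the travel direction to get the
	// vector the visible mesh slides along while the physics body stays put.
	const FVector TravelDir = GetVelocity().GetSafeNormal();
	SettleDirection = FMath::Lerp(-Hit.ImpactNormal, TravelDir, 0.5f).GetSafeNormal();
	SettleAlpha = 0.f;  // restart the settle animation
}

void ADisc::Tick(float DeltaSeconds)
{
	Super::Tick(DeltaSeconds);
	if (SettleAlpha < 1.f)
	{
		SettleAlpha = FMath::Min(SettleAlpha + DeltaSeconds / SettleTime, 1.f);
		// The offset is expressed in actor space so it follows the stuck body.
		RenderMesh->SetRelativeLocation(SettleAlpha * SettleDistance *
			GetActorTransform().InverseTransformVectorNoScale(SettleDirection));
	}
}
```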
I used forward shading in the project, as recommended for VR. As expected, the most time-consuming aspect of the scene was shadow casting. I used ray traced distance field shadowing and a single, stationary directional light for the sun.
To keep the shadowing cost at a reasonable level I had to be very stingy about what casts dynamic shadows. The red marks on the image to the left show which meshes don’t cast shadows (in addition to all the stone slabs in the ground).
The box with the discs is aligned to the player and the sun in a way that its lack of a cast shadow is only apparent if one bends over it. The two elongated stone “benches” had such short shadows that they remained hidden from the user’s perspective. The wooden cross support’s shadow under the dummy figure didn’t contribute enough to the final picture, so it’s gone. And of course everything shadowed by bigger meshes, like the cliffs, casts no shadows either.
Having the discs fly through my VR representation was unsettling, so I added a body: a skeletal mesh made up of boxes and an invisible sphere as the head. A shadow caster physics asset provided capsules for distance field shadows. As it turns out, just having a (really crude) shadow in the world increases presence and immersion a great deal.
However, I started having weird frame time spikes: 11 ms for a while, then 22 ms for a few seconds. These fluctuations seemingly followed no pattern and were present even in shipping builds. I checked the profiler but the captured data was “CPU stall” all the way down…
It took me a few days to realize that the skeletal mesh capsule shadows were the culprit: if they showed up from one frame to the next (when turning my head), the frame time crept just over 11 ms and SteamVR halved the frame rate. Another spike occurred when the body shadow left the field of view. Looking at the shadow also made the frame rate dips more likely.
As a hackfix I turned off shadows on the skeletal mesh and attached a shadow-casting static mesh to every other bone. It doesn’t look as good (the capsule shadows were blurrier), but it makes the performance issue rare (on my configuration at least).
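For illustration, here is roughly how such a workaround could be wired up at startup. SetCastShadow and bCastHiddenShadow are the stock UPrimitiveComponent toggles; the component names and the hand-picked ShadowBones list are hypothetical:

```cpp
// Assumed illustrative members on the avatar actor:
//   USkeletalMeshComponent* Body;   // the crude box body
//   UStaticMesh* ShadowProxyMesh;   // a simple box used only for shadows
//   TArray<FName> ShadowBones;      // hand-picked subset of the skeleton

void ABodyAvatar::AttachShadowProxies()
{
	Body->SetCastShadow(false);  // the skeletal mesh itself no longer casts

	for (const FName BoneName : ShadowBones)
	{
		UStaticMeshComponent* Proxy = NewObject<UStaticMeshComponent>(this);
		Proxy->SetStaticMesh(ShadowProxyMesh);
		Proxy->SetHiddenInGame(true);     // never rendered directly...
		Proxy->bCastHiddenShadow = true;  // ...but it still casts a shadow
		Proxy->SetCollisionEnabled(ECollisionEnabled::NoCollision);
		Proxy->RegisterComponent();
		Proxy->AttachToComponent(Body,
			FAttachmentTransformRules::SnapToTargetNotIncludingScale, BoneName);
	}
}
```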
I had a lot of trouble with the disc sticking into surfaces: every time I caught the Hit event the disc was some distance away from the surface, so even though I stopped it in its tracks, it still didn’t look like it had ever touched the hit object.
What I failed to understand is that no code update is ever executed at the exact time of impact. The two closest points in time are the last tick and the hit event, but the actual impact occurred sometime between them.
The physics engine determines that a collision happened, then calculates the result of the collision (the actor bounced back a bit) and updates the world to reflect that. Unreal was never in a state where the colliding actors were actually touching: no frame showed that, and no code was run at that precise point in time.
If we need the actor’s location at the time of impact, we have to calculate it ourselves, because sadly the hit data struct doesn’t provide it. A simple solution is, when the hit event fires, to take the hit location and the closest point to it on the actor’s collider. The difference of the two vectors is how much the actor’s location needs to be offset to get it roughly where it should have been at the moment of collision. Of course this doesn’t take rotation into account, so the more the actor spins, the less correct this calculation will be. It worked for my use case but others might need a more elaborate solution.
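A minimal sketch of that correction, assuming it runs in the disc’s hit handler; GetClosestPointOnCollision is the stock UPrimitiveComponent call, the rest of the names are illustrative:

```cpp
void ADisc::OnBodyHit(UPrimitiveComponent* HitComp, AActor* OtherActor,
                      UPrimitiveComponent* OtherComp, FVector NormalImpulse,
                      const FHitResult& Hit)
{
	// Find the point on our own collider that is closest to the impact point.
	FVector PointOnDisc;
	HitComp->GetClosestPointOnCollision(Hit.ImpactPoint, PointOnDisc);

	// The physics engine already bounced us away a bit; shift the actor back
	// by the gap so it sits roughly where it was at the moment of impact.
	// Rotation is ignored, so a fast-spinning disc will land less exactly.
	SetActorLocation(GetActorLocation() + (Hit.ImpactPoint - PointOnDisc));
}
```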
The other important lesson I learned is that directly interfering with the physics system is never a good idea. It’s always a painful spot where “real” physics and game logic “physics” meet, so I made even my avatar and the gun physics objects. This was done by separating the physics representation from the game logic one: for example, my physics-driven gun is constrained to an invisible static mesh directly moved by the motion controller. You can push your hand into a crate but you won’t see the gun doing that.
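Sketched out, that setup could look something like this: a physics constraint drives the simulated gun toward a kinematic target that the motion controller moves directly. All names and drive values are placeholders, not the project’s actual code:

```cpp
// Assumed illustrative members on the gun actor:
//   UStaticMeshComponent* KinematicTarget;   // attached to the motion controller
//   UStaticMeshComponent* GunMesh;           // the visible, simulated gun
//   UPhysicsConstraintComponent* Constraint;

void AGun::SetupPhysicsLink()
{
	KinematicTarget->SetSimulatePhysics(false);  // moved directly, never simulated
	GunMesh->SetSimulatePhysics(true);           // lives in the physics scene

	Constraint->SetConstrainedComponents(KinematicTarget, NAME_None,
	                                     GunMesh, NAME_None);

	// Drive the gun toward the target instead of snapping it there, so the
	// world can still push back: your hand goes into the crate, the gun doesn't.
	Constraint->SetLinearPositionDrive(true, true, true);
	Constraint->SetLinearDriveParams(/*Stiffness=*/ 5000.f, /*Damping=*/ 100.f, 0.f);
	Constraint->SetAngularOrientationDrive(true, true);
	Constraint->SetAngularDriveParams(/*Stiffness=*/ 5000.f, /*Damping=*/ 100.f, 0.f);
}
```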
I tried a few times to directly control physics actors by setting their transforms, but it always ended in tears. The safest (but indirect) way to influence the physics scene is using constraints and applying forces. It takes some getting used to, not having instant and direct control, but it keeps things more consistent.
One useful pattern I found is creating “grabbers”: there is an invisible primitive component in the gun which, when a disc gets close enough, teleports to the disc, “grabs” it by creating a constraint, then moves back to its original location over time, dragging the disc with it. Of course there will be problems if something prevents this; that’s why there are no collisions between the disc and the gun… Other solutions include breakable constraints and line traces to detect obstructions, but I didn’t have time to experiment with them.
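A rough sketch of the grabber pattern, under the same caveats (illustrative names, untested values):

```cpp
// Assumed illustrative members on the gun actor:
//   UPrimitiveComponent* Grabber;              // invisible, never simulated
//   UPhysicsConstraintComponent* GrabConstraint;
//   FVector GrabberRestLocal;                  // rest position inside the gun
//   float RetractSpeed; bool bRetracting;

void AGun::GrabDisc(UPrimitiveComponent* DiscBody)
{
	// Teleport the grabber onto the disc so the constraint forms without offset.
	Grabber->SetWorldLocation(DiscBody->GetComponentLocation(), false,
	                          nullptr, ETeleportType::TeleportPhysics);
	GrabConstraint->SetConstrainedComponents(Grabber, NAME_None, DiscBody, NAME_None);
	bRetracting = true;
}

void AGun::Tick(float DeltaSeconds)
{
	Super::Tick(DeltaSeconds);
	if (bRetracting)
	{
		// Ease the grabber back to its rest position inside the gun; the
		// constraint drags the disc along with it.
		const FVector Home = GetActorTransform().TransformPosition(GrabberRestLocal);
		const FVector Next = FMath::VInterpTo(Grabber->GetComponentLocation(),
		                                      Home, DeltaSeconds, RetractSpeed);
		Grabber->SetWorldLocation(Next);
		bRetracting = !Next.Equals(Home, 1.f);
	}
}
```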
A massive timesink was tweaking physics properties, especially for the recall mechanic: far away and nearby, the disc had different gravity, different linear damping, a different amount of force applied while dragging, and the drag vector’s vertical component was adjusted differently too. Every time I changed something I had to get up, put on the HMD, grab the motion controller, play around, take off the headset, put down the controller, sit down, change a value, rinse, repeat. Rapid iteration it isn’t… As a fix, I’m now planning a generic, project-independent framework for tweaking variables through a simple UI floating in VR, which would make experimentation much cheaper.
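To make the tuning concrete, here is roughly what those distance-dependent recall parameters could look like in code; every threshold and strength below is a made-up placeholder, which is exactly the kind of value the planned in-VR tweaking UI would expose:

```cpp
void AGun::TickRecall(UPrimitiveComponent* DiscBody, float DeltaSeconds)
{
	const FVector ToGun = GetActorLocation() - DiscBody->GetComponentLocation();
	const bool bNear = ToGun.Size() < 150.f;  // placeholder threshold in cm

	// Far away: keep gravity on and damp little. Nearby: float the disc and
	// damp hard so it settles into the hand instead of orbiting it.
	DiscBody->SetEnableGravity(!bNear);
	DiscBody->SetLinearDamping(bNear ? 8.f : 0.5f);

	FVector Drag = ToGun.GetSafeNormal() * (bNear ? 300.f : 900.f);
	Drag.Z *= 0.6f;  // the vertical component gets its own scaling
	DiscBody->AddForce(Drag, NAME_None, /*bAccelChange=*/ true);
}
```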