Neoclip is my currently active personal project. It's still a work-in-progress, but I think it's in a good-enough state to show off. My main goal was to make a game with a totally unique movement mechanic. I was primarily inspired by three things:
Mirio Togata from My Hero Academia, for his creative quirk that allows him to pass through objects while holding his breath, and forcefully pushes him out of objects when he stops.
The original Spider-Man 2 game from 2004 for GameCube/PlayStation 2, for its engaging web-swinging that took skill to master and wasn't afraid to punish the player if they messed up.
Gravity Rush from 2012 for PlayStation Vita, for its never-before-seen movement and excellent game feel.

Mirio Togata | Spider-Man 2 | Gravity Rush

So, I got to work! I originally started the project in Godot 4.2, around the time Unity was having its paid-installs fiasco, and I wanted to try learning something new. Flying around while noclipping lends itself well to a dense urban environment, so I made a small city with some free assets I found on the Godot AssetLib. I'm not much of an animator, so rather than piece together a bunch of preset animations, I opted to experiment with creating an active ragdoll for the player. I also programmed an air-drag system that calculates the exposed surface area of the ragdoll's limbs to apply a counterforce. That took a lot of trial and error to make performant: I initially used a grid of raycasts but eventually migrated it to a compute shader. Lastly, I knew I wanted the game to stand out, so I tried to create an eye-catching shader. Here's a video showing this early version:
While working with Godot was fun overall, its 3D capabilities weren't as fleshed out as I needed them to be. It was especially lacking in the ragdoll department, as the physics were very buggy and I had to fight the engine constantly to get it to behave. Eventually the ragdoll started jittering terribly, and there was nothing I could do to fix it. And so I paused work on the project for a while.
Eventually I decided that, rather than let the project languish until Godot's 3D support improved, I should just remake the game in the engine I'm most familiar with: Unity! It took a little time to get everything set back up, but Unity is a much more mature engine than Godot, and I was able to take advantage of a number of its features to achieve even better gameplay and performance.
I have several ideas for the future of this project, but nothing is set in stone. With themes of looking in from outside of reality, I think it could tell a powerful story about human connection. It could make literal the feeling of sinking into the ground when something has gone terribly wrong. As for the gameplay, travelling from point A to point B in the quickest time, or without jostling yourself too much, could be fun. Combat would be exciting too: activating noclip to avoid an attack, then releasing it to strike, would make for a good back-and-forth.
Of course, there's still a lot of work to be done. None of the shaders, models, or mechanics are finalized yet, and there's no sound at the moment, but moving around feels quite good, in my opinion. With all that said, here's a video showing the current state of Neoclip:
Why does the character look choppy?
I am experimenting with a unique way to animate the character, where the overall movement and rotation is smoothly driven by ragdoll physics, but the actual animations themselves are run at 24 FPS. It's probably not hard to guess that this was inspired by the incredible style of the Spider-Verse movies!
In order to achieve this, I use three separate character renderers. The first one simply runs the chosen animation, the second one is the active ragdoll that targets that animation (more on that later), and the third one updates on a custom timescale to match the ragdoll's pose.
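In sketch form, the stepped sampling looks roughly like this (a simplified illustration, not the project's actual code; it assumes the root transform follows the ragdoll smoothly every frame, and the field names are mine):

```csharp
using UnityEngine;

// Snaps the rendered skeleton to the ragdoll's pose at a fixed 24 FPS,
// while the ragdoll itself keeps simulating at full frame rate.
public class SteppedPoseSampler : MonoBehaviour
{
    public Transform[] ragdollBones;  // bones driven by physics
    public Transform[] displayBones;  // bones on the rendered character
    public float sampleRate = 24f;    // the animation "frame rate"

    float accumulator;

    void LateUpdate()
    {
        accumulator += Time.deltaTime;
        if (accumulator < 1f / sampleRate)
            return;
        accumulator %= 1f / sampleRate;

        // Copy bone-local poses only; root movement stays smooth.
        for (int i = 0; i < ragdollBones.Length; i++)
        {
            displayBones[i].localPosition = ragdollBones[i].localPosition;
            displayBones[i].localRotation = ragdollBones[i].localRotation;
        }
    }
}
```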
Okay then, how does the active ragdoll work?
The ragdoll is generated by my custom helper script that connects each bone together with one of Unity's ConfigurableJoints. The bones of the driver skeleton and the ragdoll skeleton are then mapped together, and some fancy code that I adapted is able to convert local rotational offsets in the driver skeleton to the ConfigurableJoint's targetRotation.
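The core of that conversion is a widely shared community extension; roughly, it looks like this (a sketch, not my exact code):

```csharp
using UnityEngine;

public static class ConfigurableJointExtensions
{
    // Drives the joint toward targetLocalRotation, given the bone's local
    // rotation captured when the joint was created.
    public static void SetTargetRotationLocal(this ConfigurableJoint joint,
        Quaternion targetLocalRotation, Quaternion startLocalRotation)
    {
        // Build the frame defined by the joint's axis and secondary axis.
        Vector3 right = joint.axis;
        Vector3 forward = Vector3.Cross(joint.axis, joint.secondaryAxis).normalized;
        Vector3 up = Vector3.Cross(forward, right).normalized;
        Quaternion worldToJointSpace = Quaternion.LookRotation(forward, up);

        // Counter-rotate relative to the start pose, expressed in joint space.
        joint.targetRotation = Quaternion.Inverse(worldToJointSpace)
            * Quaternion.Inverse(targetLocalRotation)
            * startLocalRotation
            * worldToJointSpace;
    }
}
```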
How does the character move around?
While freefalling or noclipping, using the movement keys will apply a gentle impulse to every bone of the ragdoll. However, while being ejected from an object, the movement keys slightly alter the exit direction vector, allowing you to influence your movement more strongly.
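The freefall half is roughly this simple (a sketch; `moveStrength` and the bone list are illustrative):

```csharp
using UnityEngine;

public class RagdollMovement : MonoBehaviour
{
    public Rigidbody[] bones;        // every rigidbody in the ragdoll
    public float moveStrength = 2f;  // illustrative tuning value

    void FixedUpdate()
    {
        // Read the movement keys (simplified here to world axes
        // rather than camera-relative directions).
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                    Input.GetAxis("Vertical"));
        if (input.sqrMagnitude < 0.01f)
            return;

        // A gentle, mass-independent push on every bone.
        foreach (Rigidbody bone in bones)
            bone.AddForce(input.normalized * moveStrength, ForceMode.Acceleration);
    }
}
```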
Where did you get the city from?
It's a free Asset Store item called Japanese Otaku City, a model of Akihabara, Japan. Eventually I want to make my own city, but for testing purposes, its quality is exceptional!
How are you rendering the insides of objects?
This is actually a very tricky problem to solve, and I'm not totally sold on my current solution. Basically, as soon as the camera is inside something, the material of every model is swapped to one with multiplicative transparency. The skybox is removed and the camera's clear color is set to white. All materials render only in black-and-white, where each pixel's grayscale value is calculated as:
gray = 1.0 - 0.8 * e^(-pixelDistanceToCamera / 100)
This makes nearby materials render darker than far away ones, but far away ones are also more likely to have more models between them and the camera, so they'll also appear darker because of the multiplicative transparency. The result is *pretty* good, but it could still use a lot of tweaking.
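To make the falloff concrete (assuming distances in world units): a surface 10 units from the camera renders at about 1 - 0.8 * e^(-0.1) ≈ 0.28 (dark gray), while one 300 units away renders at about 1 - 0.8 * e^(-3) ≈ 0.96 (near white), before any multiplicative stacking darkens it further.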
How does the air-drag system work?
For each bone of the ragdoll, I generate a mesh that matches the bone's primitive collider. That mesh also has a vertex color that maps to its bone number. I use an orthographic camera aligned with the direction of movement to render those meshes into a render texture. Then I have a compute shader count the number of pixels of each color, and then a script can use that to calculate the approximate surface area of that bone. Finally, I calculate the drag from the density of the surrounding air (the density is much higher while clipping into objects, as it gives the impression that the character is diving into water) and apply force to each bone according to the equation:
F = 0.5 * density * bone_velocity_squared * drag_coefficient * bone_area
To me, the coolest part is that it's actually pretty realistic. Google tells me the average person's mass is ~70kg and their terminal velocity is ~200km/h. My ragdoll's mass is ~62kg, and with the drag applied and a rather arbitrary drag_coefficient of 0.7, its velocity tops out at ~178km/h. Neat!
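Applying that force is the easy half; here's a sketch of the final step, assuming the per-bone areas have already been read back from the compute shader (`boneAreas` is illustrative):

```csharp
using UnityEngine;

public class AirDrag : MonoBehaviour
{
    public Rigidbody[] bones;
    public float[] boneAreas;            // m^2, filled in from the compute shader
    public float dragCoefficient = 0.7f;
    public float airDensity = 1.2f;      // raised while clipping inside objects

    void FixedUpdate()
    {
        for (int i = 0; i < bones.Length; i++)
        {
            Vector3 v = bones[i].velocity;
            float speed = v.magnitude;
            if (speed < 0.01f)
                continue;

            // F = 0.5 * density * v^2 * Cd * A, opposing the velocity.
            float f = 0.5f * airDensity * speed * speed
                      * dragCoefficient * boneAreas[i];
            bones[i].AddForce(-v.normalized * f);
        }
    }
}
```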

The visible drag meshes

The camera cube that renders the drag meshes

Does the character flail around more the closer it is to hitting something?
Yes! I wrote an ImpactTimeEstimator script that projects a few seconds into the future and tries to guess when the ragdoll will collide with something. It repeatedly spherecasts along the predicted trajectory, stepping forward with a large deltaTime. The result is fed through an AnimationCurve that sets a blend value on the character's Animator.
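In sketch form (the names mirror my description above, not necessarily the real script; "Flail" is an illustrative Animator parameter):

```csharp
using UnityEngine;

public class ImpactTimeEstimator : MonoBehaviour
{
    public Rigidbody body;              // the ragdoll's root body
    public Animator animator;
    public AnimationCurve panicByTime;  // maps time-to-impact to a blend value
    public float lookahead = 3f;        // seconds to project into the future
    public float step = 0.25f;          // the "large deltaTime"
    public float radius = 0.5f;

    void Update()
    {
        float timeToImpact = lookahead;
        Vector3 pos = body.position;
        Vector3 vel = body.velocity;

        // March along a simple ballistic prediction, spherecasting each segment.
        for (float t = 0f; t < lookahead; t += step)
        {
            Vector3 next = pos + vel * step;
            vel += Physics.gravity * step;

            Vector3 seg = next - pos;
            if (seg.sqrMagnitude > 1e-6f && Physics.SphereCast(
                    pos, radius, seg.normalized, out RaycastHit hit, seg.magnitude))
            {
                timeToImpact = t + step * (hit.distance / seg.magnitude);
                break;
            }
            pos = next;
        }

        animator.SetFloat("Flail", panicByTime.Evaluate(timeToImpact));
    }
}
```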

Spherecast between each cyan sphere to check for collisions

How does the character know when it's clipping into something?
You might think this is as simple as running Physics.OverlapSphere, and that worked for my hand-made city in the Godot version. But it has one major flaw: it doesn't work inside concave mesh colliders. And with the city I'm using, almost every building is concave! First I thought about approximating the collision shapes with primitives, but I didn't want to spend multiple days doing that for every building. I tried to find plugins that would generate convex colliders for me, but none of them worked quite right. So I finally came up with my own solution.
To test whether something is clipping inside something else, I shoot out 6 rays, one along each axis direction (±X, ±Y, ±Z). If 2 or more of those rays hit a backface (i.e., the dot product between the hit normal and the ray direction is positive), then we assume we're inside something.
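In code, the test looks something like this (a sketch; note that Unity raycasts ignore backfaces by default, so Physics.queriesHitBackfaces has to be enabled for it to work):

```csharp
using UnityEngine;

public static class ClipTest
{
    static readonly Vector3[] Axes =
    {
        Vector3.right, Vector3.left, Vector3.up,
        Vector3.down, Vector3.forward, Vector3.back
    };

    // Returns true if `point` appears to be inside geometry.
    // Requires Physics.queriesHitBackfaces = true so rays can hit backfaces.
    public static bool IsInside(Vector3 point, float maxDistance = 100f)
    {
        int backfaceHits = 0;
        foreach (Vector3 dir in Axes)
        {
            if (Physics.Raycast(point, dir, out RaycastHit hit, maxDistance)
                && Vector3.Dot(hit.normal, dir) > 0f)  // normal agrees with ray: backface
            {
                backfaceHits++;
            }
        }
        return backfaceHits >= 2;
    }
}
```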
How does it know which direction to push the character out?
This was a hard problem to solve, and I don't think there's a clean way to do it without manually sampling many different points to see which are inside something and which aren't.
In the Godot version I used an integration approach. Each frame, a thousand or so evenly distributed positions on the shell of a sphere would be checked to see if they overlapped a collider. If more than 70% of them did, the search radius increased; otherwise, it decreased. It would converge on a good radius for averaging the position of the non-clipping points.
Of course, I can't do easy overlap tests with more complicated geometry. So what did I do? That's right, I used the 6-raycast method again! Now you might think that doing 6 raycasts each for thousands of points would be terrible for performance, and, well, it is. However, Unity has a magic little system called *Jobs*. It wasn't too hard to learn, and once I converted the raycasts to Burst-compiled jobs, they could run in the background and deliver their results on the next frame with absolutely no performance hit and much higher accuracy than the integration method.
Then I just average the position of every clipping point, invert that vector, and boom! That's the direction to push the character. Additionally, clustering the test points closer to the character (rather than spacing them evenly) intrinsically increases the magnitude as the character gets closer to the surface. Essentially, this pushes them out much more strongly, which feels a lot more satisfying.
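Here's a sketch of the batched version (the RaycastCommand constructor has changed across Unity versions; this uses the long-standing simple form, and it still needs backface hits enabled):

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class ExitDirectionSampler : MonoBehaviour
{
    public Vector3[] samplePoints;  // offsets clustered around the character

    static readonly Vector3[] Axes =
    {
        Vector3.right, Vector3.left, Vector3.up,
        Vector3.down, Vector3.forward, Vector3.back
    };

    JobHandle handle;
    NativeArray<RaycastCommand> commands;
    NativeArray<RaycastHit> results;

    void Update()
    {
        // Collect last frame's results before scheduling this frame's batch.
        if (commands.IsCreated)
        {
            handle.Complete();

            Vector3 sum = Vector3.zero;
            int clipping = 0;
            for (int p = 0; p < samplePoints.Length; p++)
            {
                int backfaces = 0;
                for (int a = 0; a < 6; a++)
                {
                    RaycastHit hit = results[p * 6 + a];
                    if (hit.collider != null && Vector3.Dot(hit.normal, Axes[a]) > 0f)
                        backfaces++;
                }
                if (backfaces >= 2) { sum += samplePoints[p]; clipping++; }
            }

            // Average the clipping offsets, then invert for the exit direction.
            Vector3 exitDirection = clipping > 0 ? -(sum / clipping) : Vector3.zero;
            Debug.DrawRay(transform.position, exitDirection, Color.red);
            // ...feed exitDirection into the ejection force here...

            commands.Dispose();
            results.Dispose();
        }

        // Schedule 6 rays per sample point; results are read next frame.
        int n = samplePoints.Length * 6;
        commands = new NativeArray<RaycastCommand>(n, Allocator.TempJob);
        results = new NativeArray<RaycastHit>(n, Allocator.TempJob);
        for (int p = 0; p < samplePoints.Length; p++)
            for (int a = 0; a < 6; a++)
                commands[p * 6 + a] = new RaycastCommand(
                    transform.position + samplePoints[p], Axes[a], 100f);
        handle = RaycastCommand.ScheduleBatch(commands, results, 64);
    }
}
```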

Sampling points to find the average exit direction
