I have some gripes about how Unity lets you apply image effects.
Notably, that you can’t apply image effects to only some layers.
The technical reason for this is that Unity’s Image Effects run on whichever screen buffer is returned by the camera to which the effect is attached. Each camera in the scene, depending on its clear flags, writes on top of what’s there. That means an image effect operates on everything that’s come before it.
We can’t change that. But we can route around it using render textures.
The setup is actually fairly simple: you have one camera that renders a depth pass for the whole scene, plus one camera for each FX chain you want to use. Each FX camera's clear flags are set to Solid Color (black) and its output goes to a render texture. That way, the only things in each camera's buffer are the layers it's set to render.
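As a sketch of that per-layer camera setup (this is my rough reconstruction, not the exact script I used; the field names are placeholders), a small Unity component could configure each FX camera:

```csharp
using UnityEngine;

// Hypothetical helper: restricts a camera to one set of layers and sends
// its output to a render texture, clearing to solid black each frame.
[RequireComponent(typeof(Camera))]
public class LayerFXCamera : MonoBehaviour
{
    public RenderTexture output;      // assumption: one render texture per FX chain
    public LayerMask layersToRender;  // only these layers reach this camera's buffer

    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.black;
        cam.cullingMask = layersToRender; // cull everything outside the chosen layers
        cam.targetTexture = output;       // draw into the render texture, not the screen
    }
}
```

With one of these on each FX camera, any image effect attached to that camera only ever sees its own layers.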
The next problem is that each camera clears the depth buffer along with the pixel buffer, so it can only depth-test the triangles it draws itself. The base-level geometry, for example, won't occlude objects drawn by a different camera, even when they're behind a wall.
We can solve that with an image effect on each FX camera that compares that camera's depth buffer against the shared depth pass and masks out pixels that should be hidden.
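A sketch of the C# side of that effect (the shader itself, and the `_SceneDepth` property name, are assumptions; the real comparison happens in a fragment shader that samples both depth textures):

```csharp
using UnityEngine;

// Hypothetical image effect: masks out pixels in this FX camera's buffer
// that sit behind the scene geometry recorded in the shared depth pass.
[RequireComponent(typeof(Camera))]
public class DepthClipEffect : MonoBehaviour
{
    public Material depthClipMaterial; // assumed shader that compares the two depths
    public RenderTexture sceneDepth;   // depth pass rendered by the depth camera

    void Start()
    {
        // Ask Unity to generate _CameraDepthTexture for this camera so the
        // shader has this camera's own depth to compare against.
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        depthClipMaterial.SetTexture("_SceneDepth", sceneDepth);
        Graphics.Blit(src, dest, depthClipMaterial);
    }
}
```

Wherever the camera's own depth is greater than the scene depth pass (i.e. something in another layer is in front), the shader would write transparent black instead of the source color.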
Now we just need to merge all the layers back together, which I do by putting the render textures on quads in front of an orthographic camera.
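The compositing step above could look something like this (again a reconstruction with placeholder names, not my exact code; it assumes the layer textures have alpha so they blend):

```csharp
using UnityEngine;

// Hypothetical compositor: puts each layer's render texture on a quad
// stacked in front of an orthographic camera so the layers blend back
// into a single image.
[RequireComponent(typeof(Camera))]
public class Compositor : MonoBehaviour
{
    public RenderTexture[] layerTextures; // layerTextures[0] ends up on top

    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.orthographicSize = 0.5f; // a quad is 1 unit tall, so it fills the view height

        for (int i = 0; i < layerTextures.Length; i++)
        {
            var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            // Space the quads slightly apart so draw order is unambiguous;
            // nearer quads (lower i) are drawn over farther ones.
            quad.transform.position = transform.position + transform.forward * (1f + 0.01f * i);
            quad.transform.rotation = transform.rotation;
            // Stretch the quad horizontally to match the camera's aspect ratio.
            quad.transform.localScale = new Vector3(cam.aspect, 1f, 1f);

            var mat = new Material(Shader.Find("Unlit/Transparent"));
            mat.mainTexture = layerTextures[i];
            quad.GetComponent<Renderer>().material = mat;
        }
    }
}
```

The orthographic camera then renders these quads straight to the screen, which is the final composite.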
The setup looks something like:
That's a really basic overview of how I did this. I may come back and rewrite it as a tutorial with code samples. There's also a lot of automation I built for setting up the render textures and quads that would be nice to write up. If you come across this and I haven't elaborated further, get in touch and I'll send you the code I wrote.