Hey I want to ask for some advice on implementing a render pass for my voxel meshes. My hope is that I can make incremental improvements to amethyst_rendy at the same time, and I want those improvements to be in line with the amethyst team’s vision for rendering.
I have read through the Rendering Runtime API Types Rethink topic as well as most of the amethyst_rendy code, and I agree with the general idea that it should be easier for users to define how they want their entities to be encoded for rendering.
Some background about what I have currently…
Here is my project: https://github.com/bonsairobo/handsome-voxels
Right now I am generating meshes using the Surface Nets algorithm. The only vertex attributes I have right now are position and normal. The material UVs and tangents are calculated in the shader using triplanar texturing (a modification I made to the amethyst PBR shader). If one of the voxel chunks has multiple materials present, then the vertices are simply copied into a mesh for each material, where the indices only cover the triangles for that material.
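To make the per-material split concrete, here is a small std-only sketch (names are illustrative, not Amethyst or Rendy API) of the alternative I'd prefer: one set of Surface Nets vertices shared by several per-material index lists, instead of duplicating the vertex data into a mesh per material.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq)]
struct Vertex {
    position: [f32; 3],
    normal: [f32; 3],
}

/// All triangles of a chunk, grouped by material ID, over one shared vertex list.
struct ChunkMesh {
    vertices: Vec<Vertex>,
    /// material id -> index list into `vertices`
    indices_by_material: HashMap<u8, Vec<u32>>,
}

impl ChunkMesh {
    /// Split a flat list of (triangle, material) pairs into per-material index lists.
    fn group_by_material(vertices: Vec<Vertex>, triangles: &[([u32; 3], u8)]) -> Self {
        let mut indices_by_material: HashMap<u8, Vec<u32>> = HashMap::new();
        for &(tri, material) in triangles {
            indices_by_material
                .entry(material)
                .or_default()
                .extend_from_slice(&tri);
        }
        ChunkMesh { vertices, indices_by_material }
    }
}
```

The vertex data exists once on the CPU side here; the current workaround uploads a full copy of it per material.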
I managed to do all of this without having to interact with Rendy, which is pretty cool! All I had to do was implement `Base3DPassDef` with my own shaders and vertex format.
But now I’m at the point where I want to change more stuff about the shaders, and I think it will require more flexibility from the amethyst_rendy module. Just to summarize, there are some kinda hacky things I did to get this far:
- Copied all of the PBR shader code from Amethyst
- I’m pretty sure this was necessary unless I wanted to compile the shaders myself, since the #includes in the shaders are a feature of shaderc. It would be nice if there was an easier way to share shader code.
- Hardcoded some values in the shader which should really be per-instance uniforms or push constants.
- Left `unimplemented!()` in the “skinned” shader definition of my `Base3DPassDef`.
- Copied my vertices into multiple meshes where only the indices differed, resulting in many “unused” copies of the vertices being uploaded to the GPU.
OK so that captures all of the rendering work I’ve done so far. Now, I’d like to describe my future goals and hopefully offer some ideas for changes we could make to amethyst_rendy.
So here are some additional features I want to implement for this render pass:
- Different materials for each of the 3 planes
- Requires multiple materials bound to the pipeline, which is not currently supported without writing my own render plugin.
- Texture splatting
- This also requires multiple materials, and I’ll likely have to extend my vertex format to include “weights” for the materials being blended.
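As a sketch of the splatting idea (illustrative only, not the actual Amethyst `VertexFormat` machinery): each vertex would carry blend weights for up to four materials, which the fragment shader uses to mix the sampled colors. The struct and helper below are hypothetical names.

```rust
#[repr(C)]
#[derive(Clone, Copy, Debug, PartialEq)]
struct SplatVertex {
    position: [f32; 3],
    normal: [f32; 3],
    /// Blend weights for up to four materials; should sum to 1.0.
    material_weights: [f32; 4],
}

/// Normalize raw per-material weights so they sum to 1.0, as the shader expects.
fn normalize_weights(raw: [f32; 4]) -> [f32; 4] {
    let sum: f32 = raw.iter().sum();
    if sum == 0.0 {
        // Fall back to the first material if no weights were set.
        return [1.0, 0.0, 0.0, 0.0];
    }
    [raw[0] / sum, raw[1] / sum, raw[2] / sum, raw[3] / sum]
}
```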
So I think the minimal set of new requirements I have for amethyst_rendy is:
1. binding multiple materials, or sets of materials, to the pipeline
2. custom descriptors for the extra per-instance uniforms I need
3. decoupled vertex and index buffers
For 1, I think I could probably add a `MaterialSet` component, and maybe that would supersede `Material`. Right now, each texture in a material takes up one binding. I think instead we could have a texture array in each binding, with some maximum number of textures per array. The `TwoLevelBatch` code of amethyst_rendy would probably need to change a bit to sort the meshes by material set (rather than just by a single material) in such a way as to minimize bind calls. There would also need to be additional information in the `uniform Material` to determine which indices of the texture arrays to use for an instance.
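Here is a rough, std-only sketch of what keying batches by a whole material set (rather than a single material) might look like; draws that share the same set of bound textures group under one key, so the number of descriptor-set binds equals the number of distinct sets. All names here are hypothetical, not the real `TwoLevelBatch` API.

```rust
use std::collections::BTreeMap;

/// Sorted material ids identifying one descriptor set of textures.
type MaterialSetId = Vec<u32>;
type MeshId = u32;

#[derive(Default)]
struct MaterialSetBatches {
    /// outer key: material set; inner key: mesh; value: instance ids
    batches: BTreeMap<MaterialSetId, BTreeMap<MeshId, Vec<u32>>>,
}

impl MaterialSetBatches {
    fn insert(&mut self, mut materials: Vec<u32>, mesh: MeshId, instance: u32) {
        // Sort so the same set in any order maps to the same batch key.
        materials.sort_unstable();
        self.batches
            .entry(materials)
            .or_default()
            .entry(mesh)
            .or_default()
            .push(instance);
    }

    /// Number of descriptor-set binds needed to draw everything.
    fn bind_count(&self) -> usize {
        self.batches.len()
    }
}
```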
For 2, I mostly just want the ability to set a UV scaling factor per instance. This could be as simple as adding a field to the GLSL `uniform Material` and the Rust `MaterialPrefab`. But I also foresee the need to make arbitrary changes to my uniforms for various shader effects, and this would require a more generic base 3D pass.
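On the Rust side, the simple version of this could look something like the sketch below. The field names are illustrative, not the actual Amethyst `Material` layout, and real code would also need to respect std140 alignment rules when uploading.

```rust
/// Hypothetical per-material uniform data extended with a UV scale factor.
#[repr(C)]
#[derive(Clone, Copy)]
struct MaterialUniform {
    albedo_offset: [f32; 2],
    /// New per-instance scaling factor for the triplanar UVs.
    uv_scale: [f32; 2],
}

/// What the shader would effectively compute with the new field.
fn scaled_uv(uniform: &MaterialUniform, uv: [f32; 2]) -> [f32; 2] {
    [uv[0] * uniform.uv_scale[0], uv[1] * uniform.uv_scale[1]]
}
```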
Right now, the 3D pass has its uniforms encapsulated in `MaterialSub`, which seems to be limited to uniforms only; `SkinningSub` is a bit more complex, as it manages both uniforms and vertex buffers. I think there is probably an abstraction we could make where anyone could implement a submodule that’s used in the base 3D pass. It would need:
- access to the `World` to process the relevant resources
- knowledge about how it binds to pipelines
- buffer management
This seems pretty much the same as the encoder pattern described by @kabergstrom. If there is some incremental refactor we could do towards this path which allows me the flexibility to write my own submodule, even if it’s a really simple one, that would be desirable. I think maybe it would be easiest to find an abstraction similar to `EnvironmentSub`, so that I could basically take a `World` and write out the uniform buffer.
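To make that concrete, here is a very rough sketch of the submodule trait I have in mind. Nothing here exists in amethyst_rendy; `World` is a stand-in for the real ECS world, and a plain byte vector stands in for the GPU-side uniform buffer.

```rust
/// Stand-in for the ECS world a real submodule would read from.
struct World;

/// Hypothetical trait for a base-3D-pass submodule, in the spirit of
/// `EnvironmentSub`: read the world, encode uniform data for upload.
trait PassSub {
    /// Gather data from the world and encode it into a CPU-side staging
    /// buffer; the pass would then upload it and bind the descriptor set.
    fn process(&mut self, world: &World, staging: &mut Vec<u8>);
}

/// A trivial submodule writing a single f32 uniform (e.g. a UV scale).
struct UvScaleSub {
    scale: f32,
}

impl PassSub for UvScaleSub {
    fn process(&mut self, _world: &World, staging: &mut Vec<u8>) {
        staging.clear();
        staging.extend_from_slice(&self.scale.to_le_bytes());
    }
}
```

Even a minimal trait like this would let me keep my custom uniforms out of the engine's own `MaterialSub`.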
For 3, I think this mostly depends on Rendy. It looks like `MeshBuilder::with_vertices` takes anything that converts into a `Cow<'a, [V]>`, so I assume it won’t do any copies until it needs to upload to the GPU. But it would also be nice if I could just make a single vertex buffer on the GPU and only rebind the index buffers. I think this would require aliasing the rendy `Buffer` type. I’m not sure if this is possible, and I would appreciate some advice in this area.
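Conceptually, the aliasing I'm after looks like the std-only sketch below: one shared vertex buffer handle referenced by several index buffers, so switching materials only swaps the index buffer and never re-uploads vertices. `Arc` here just stands in for whatever sharing the rendy `Buffer` type would actually need; these types are hypothetical.

```rust
use std::sync::Arc;

/// Stand-in for a GPU vertex buffer, uploaded once.
struct VertexBuffer(Vec<[f32; 3]>);

/// One draw call: a shared vertex buffer plus its own index buffer.
struct IndexedDraw {
    vertices: Arc<VertexBuffer>,
    indices: Vec<u32>,
}

/// Build one draw per material, all aliasing the same vertex buffer.
fn draws_for_materials(
    vertices: Arc<VertexBuffer>,
    per_material: Vec<Vec<u32>>,
) -> Vec<IndexedDraw> {
    per_material
        .into_iter()
        .map(|indices| IndexedDraw {
            vertices: Arc::clone(&vertices),
            indices,
        })
        .collect()
}
```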
I don’t think there is much I can do about sharing shader code without a lot of work, but I’m open to ideas for improvements I could make.
What do you think? Do these changes seem feasible?