Rendering Runtime API Types Rethink

(Kae) #1

rendy seems to be moving closer to being included in Amethyst, so I would like to propose a rethink of the current public rendering API types. The goal is to make the low-level rendering types more data-driven and flexible, letting the power of the coming Asset Pipeline shine through. The main driver is to make assets more re-usable and composable by avoiding unnecessary coupling of data.

First, let’s see what data is required to render something. Render Team, please let me know if I’m missing anything:

  • List of static vertex channels (device-local GPU buffers) and associated vertex format
  • List of dynamic vertex channels (host memory buffers, requires sync) and associated vertex format
  • List of image views + sampler combinations
  • List of shaders
  • List of constant buffers
  • Blend & stencil config
  • List of buffer/image attachments/outputs
  • Binding metadata for all the elements in all the lists - how do vertex channels, textures and constants bind to things in the shaders?

We can construct a “pure” function that takes these inputs and emits render commands for the graphics backend. It can be used in many different scenarios with different inputs, like drawing UI, 3D objects etc. Each situation is different and may have simplified, cached or ignored parts of these inputs depending on the requirements.
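As a rough sketch of what such a function could look like (every type and name below is purely illustrative, not existing Amethyst or rendy API):

// Hypothetical sketch only; opaque handles stand in for real GPU resources.
type BufferId = u32;
type ImageViewId = u32;
type SamplerId = u32;
type ShaderId = u32;

struct VertexFormat { attributes: Vec<(String, u32)> } // (name, byte size)
struct BlendStencilConfig { alpha_blend: bool, stencil_test: bool }

enum RenderCommand {
    BindPipeline(ShaderId),
    BindVertexBuffer(BufferId),
    BindTexture { view: ImageViewId, sampler: SamplerId, slot: u32 },
    Draw { vertex_count: u32 },
}

struct DrawInputs {
    static_vertices: Vec<(BufferId, VertexFormat)>,  // device-local
    dynamic_vertices: Vec<(BufferId, VertexFormat)>, // host memory, needs sync
    textures: Vec<(ImageViewId, SamplerId)>,
    shaders: Vec<ShaderId>,
    constants: Vec<BufferId>,
    blend_stencil: BlendStencilConfig,
    attachments: Vec<ImageViewId>,
}

/// "Pure" in the sense that the same inputs always yield the same commands;
/// UI, 3D meshes, particles, etc. differ only in the inputs they provide.
fn emit_draw_commands(inputs: &DrawInputs, vertex_count: u32) -> Vec<RenderCommand> {
    let mut cmds = Vec::new();
    if let Some(&shader) = inputs.shaders.first() {
        cmds.push(RenderCommand::BindPipeline(shader));
    }
    for &(buffer, _) in inputs.static_vertices.iter().chain(&inputs.dynamic_vertices) {
        cmds.push(RenderCommand::BindVertexBuffer(buffer));
    }
    for (slot, &(view, sampler)) in inputs.textures.iter().enumerate() {
        cmds.push(RenderCommand::BindTexture { view, sampler, slot: slot as u32 });
    }
    cmds.push(RenderCommand::Draw { vertex_count });
    cmds
}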

I would like for as many of these inputs as possible to be configurable by assets (and thus hot-reloadable). I don’t expect to be able to define everything in data in every situation, but as much as is reasonable given the component-specific constraints would be nice.

The current Texture struct in Amethyst is pretty good as is. Mesh, however, includes a transform matrix, which seems redundant: the purpose of a transformation matrix is to place the object in the world, and it doesn’t make sense to build this into the Mesh since it would require an extra matrix multiplication when rendering any Mesh.

amethyst::Effect contains shaders and constant buffer values. From an asset perspective this doesn’t make much sense: if you can define constant buffer values in a shader asset, then they could probably be real constants within the shader anyway. I would like to create an amethyst::Shader type that represents only a compiled shader program plus metadata for its possible constant bindings. The existing amethyst::Effect could still exist if there’s demand for it, combining amethyst::Shader with the current API for constant values.
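As a minimal sketch of what I have in mind (field names purely illustrative), assuming reflection data is available when the asset is built:

// Hypothetical sketch of the proposed amethyst::Shader.
pub struct Shader {
    /// The compiled shader program (e.g. SPIR-V bytecode).
    pub code: Vec<u8>,
    /// Metadata describing the constants the shader can accept.
    pub constant_bindings: Vec<ConstantBinding>,
}

pub struct ConstantBinding {
    pub name: String,    // e.g. "camera_view"
    pub set: u32,        // descriptor set index
    pub binding: u32,    // binding index within the set
    pub size_bytes: u32, // size of the constant/uniform block
}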

amethyst::Material is probably the most opinionated of the existing Amethyst rendering types. It contains “hard-coded” named constants for textures, both sampler bindings (TextureHandle -> sampler) and constants (TextureOffset -> vec2 constant). I would like to make it more generic by making it contain:

  • Handle<Shader>
  • Vec of named constant buffer values (string + constant buffer value)
  • Vec of named TextureHandles (string + TextureHandle)
  • Blend & stencil config

This would allow Materials to be the primary way to glue together Shaders, Textures and constant buffers while leaving the binding of Mesh and Shader unspecified, making it easier to compose Materials with Meshes.
When constructing the pipeline, we can look at the Shader’s metadata and bind constants & samplers based on the corresponding string. If the strings are interned or hashed, it will be fast.
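A rough sketch of what this could look like, with Handle, Shader, Texture and the blend/stencil config as placeholder stand-ins for the real types:

// Hypothetical sketch; every type here is a placeholder, not existing API.
use std::marker::PhantomData;

pub struct Handle<T>(u64, PhantomData<T>);
pub struct Shader;  // compiled program + reflected binding names
pub struct Texture; // the existing amethyst Texture
pub struct BlendStencilConfig { pub alpha_blend: bool, pub stencil_test: bool }

pub enum ConstantValue {
    Float(f32),
    Vec2([f32; 2]),
    Vec4([f32; 4]),
    Mat4([[f32; 4]; 4]),
}

/// The proposed, more generic Material: pure "glue" between a Shader,
/// named constants, named textures, and fixed-function state.
pub struct Material {
    pub shader: Handle<Shader>,
    /// Named constant buffer values, matched by name (interned/hashed for
    /// speed) against the Shader's reflected metadata at pipeline build time.
    pub constants: Vec<(String, ConstantValue)>,
    /// Named texture bindings, matched the same way.
    pub textures: Vec<(String, Handle<Texture>)>,
    pub blend_stencil: BlendStencilConfig,
}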

I’m not sure how Mesh vertex channels and the Shader should be bound, or whether the relationship should be described with an asset. I would love to get some ideas about this, but generally I think game engines just define a preset vertex attribute enum and bind them automatically. Could we improve upon this somehow? I feel that vertex data handling is the least flexible part of most popular game engine rendering pipelines.

Buffer/image attachments/outputs, global constants, and similar might be interesting to include as assets too. I would love some feedback from the org, primarily from the Rendering Team.

(Théo Degioanni) #2

From what I’m reading, introducing those changes would make more things dynamic and therefore potentially less efficient. Do you have any suggestions regarding deployment-time compilation of these things into more efficient, “hardcoded” values?

(Gray Olson) #3

IMO Material should not be a primary low level rendering primitive, and should rather be a higher level convenience function. The concept of a Material only really translates well in the main shading stage of a forward-rendering pipeline or in the shading stage of a deferred rendering pipeline. Other than that, you’re not often really working with ‘materials’…

What you’ve basically described here is the concept of a PipelineState in gfx (essentially, in Vulkan and DX12, you bind these resources together to create pipelines). I think we should go all the way in just essentially providing a wrapper around this model. The basic idea would be something like

  • amethyst::Shader as you described, except with upgraded metadata: it should be able to use spirv-reflect to analyze the shader code and help build pipeline meta-information
  • amethyst::ConstBuffer, which is much like a Texture but wraps a GPU-allocated buffer. It should be usable on its own to change data if/when needed
  • amethyst::Texture, basically as before
  • amethyst::Mesh, basically as before with the changes you describe, except that each vertex property can be named
  • amethyst::Pipeline, which assembles the pieces: a Shader, a list of named ConstBuffers and Textures, a Mesh, plus blend and stencil config taken directly. It should be able to automatically generate binding metadata from the shader and the connected pieces, and return an Error if they are incompatible.
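A minimal sketch of how the assembly/validation step could work, with every type below being a stand-in rather than real Amethyst or gfx API:

// Hypothetical sketch; the reflected names would come from spirv-reflect.
pub struct Shader { pub reflected_inputs: Vec<String>, pub reflected_bindings: Vec<String> }
pub struct ConstBuffer { pub name: String, pub bytes: Vec<u8> }
pub struct Texture { pub name: String }
pub struct Mesh { pub channels: Vec<(String, Vec<u8>)> } // named vertex properties
pub struct BlendStencil { pub alpha_blend: bool, pub stencil_test: bool }

pub struct Pipeline { /* backend pipeline handle + resolved bindings */ }

#[derive(Debug)]
pub struct IncompatibleError(pub String);

/// Assemble the pieces, generating binding metadata from the shader's
/// reflection data and failing if anything doesn't line up.
pub fn build_pipeline(
    shader: &Shader,
    const_buffers: &[ConstBuffer],
    textures: &[Texture],
    mesh: &Mesh,
    _blend_stencil: &BlendStencil,
) -> Result<Pipeline, IncompatibleError> {
    for input in &shader.reflected_inputs {
        if !mesh.channels.iter().any(|(name, _)| name == input) {
            return Err(IncompatibleError(format!("mesh is missing vertex channel `{}`", input)));
        }
    }
    for binding in &shader.reflected_bindings {
        let bound = const_buffers.iter().any(|c| &c.name == binding)
            || textures.iter().any(|t| &t.name == binding);
        if !bound {
            return Err(IncompatibleError(format!("nothing bound for `{}`", binding)));
        }
    }
    Ok(Pipeline {})
}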
(Gray Olson) #4

These things all need to be created dynamically on the GPU when the application runs anyway, so as long as we are smart about not recreating them except when necessary, it shouldn’t incur any performance penalty.

(Kae) #5

This is great! Thank you for the quick reply.

IMO Material should not be a primary low level rendering primitive, and should rather be a higher level convenience function. The concept of a Material only really translates well in the main shading stage of a forward-rendering pipeline or in the shading stage of a deferred rendering pipeline. Other than that, you’re not often really working with ‘materials’…

I agree that it is a higher level construct - it is “glue” between some of the lower-level primitives, in a sense. The concept of a Material as an asset is mainly useful for artists, letting them easily re-use a rendering “look”.

What you’ve basically described here is the concept of a PipelineState in gfx (essentially, in Vulkan and DX12, you bind these resources together to create pipelines). I think we should go all the way in just essentially providing a wrapper around this model.

Awesome! I agree with this; however, I am also interested in providing artist-friendly, composable asset types that map well to these low-level constructs, and it would be great if the Rendering Team could assist there too.

  • amethyst::ConstBuffer, which is much like a Texture but wraps a GPU-allocated buffer. It should be usable on its own to change data if/when needed

I have a hard time imagining how ConstBuffer relates to an artist workflow. Where would the pieces of data come from? I imagine there could be the following high-level pieces; am I missing something?

  • Material
  • M/VP matrices
  • User-specified shader globals
  • Component/pass specific constants
(Gray Olson) #6

The problem is that an artist workflow doesn’t map so seamlessly to lower level rendering primitives any more. In a very simple rendering pipeline it might, but when you’re composing two, three, five, even ten or twenty passes to make up the actual look of a final frame, the artist workflow for something like a Material relates less to how shaders and textures are bound together in a low level graphics pipeline and more to how out-of-game assets relate to in-game constructs at a much higher level. That higher level will necessarily be more opinionated, but it can give flexibility in how it maps down. For there to be a seamless transition, there needs to be an opinionated level somewhere.

For example, the glTF format provides a standardized set of values that determine the “look” of an asset based on commonly used physically based rendering techniques, but the way you actually map those down into a real rendering pipeline can be done many different ways. We could provide a standardized set of passes which take advantage of the data provided by artists in the glTF format and then render them in a scene. However, the artist-centric data should not be tied directly to the way in which the renderer accomplishes that task.

A const buffer would not (just as a Texture should not) relate directly to the artist workflow, as discussed above. However, a higher level building block could be baked down to one, providing anything from a (set of) colors to a (set of) changing values used to interpolate between stages of a fade-in/out vanishing effect applied to an asset, or many other things like this. It’s really just any input to a shader that isn’t coming directly from vertex data or a texture.
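For example (purely illustrative names), a higher level “fade effect” block could bake down to const buffer bytes roughly like this:

// Hypothetical sketch: an artist-facing parameter block baked down into the
// raw bytes a const buffer would hold.
#[repr(C)]
#[derive(Clone, Copy)]
struct FadeParams {
    tint: [f32; 4],   // a color chosen by the artist
    progress: f32,    // 0.0..1.0, driven by gameplay each frame
    _pad: [f32; 3],   // std140-style padding to 16-byte alignment
}

fn bake_to_const_buffer(params: &FadeParams) -> Vec<u8> {
    // Reinterpret the repr(C) struct as bytes for upload to the GPU buffer.
    let ptr = params as *const FadeParams as *const u8;
    unsafe { std::slice::from_raw_parts(ptr, std::mem::size_of::<FadeParams>()) }.to_vec()
}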

(Kae) #7

Great points!

I think a lot of this “baking down to rendering primitives” can and should be done in the asset pipeline. I agree we (or users) can build opinionated high-level constructs with custom editors/tooling as long as the engine can accept individual pieces of data from the asset pipeline and turn them into (parts of) a render command.

Talking about ConstBuffer though: how do you imagine the struct should look? As I understand it, the actual data layout will depend on the shader it is used with, so if we want a “composable” ConstBuffer it would need to contain named constant buffer values (as I described for Materials) that are then “compiled” for a specific shader.

#8

Most things said here only consider the main render pass.
There can be no Mesh or Material outside of it.

I’d like to add that rendy has its own Mesh and Texture for static vertex data and images, which are typically loaded from assets.
rendy::Mesh is designed in a way that allows gluing it to the shader without a man in the middle, using only shader reflection.

I totally agree that the main render-pass must be data-driven. The pass implementation should be able to read the shader code and understand what data it should get from the World, what to get from other render-nodes, etc.
But the hardest thing here is how.

My initial idea was to register functions into some registry to allow fetching data from the World by name.
Consider the following simple vertex shader.

layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;

layout(location = 0) out vec4 out_color;

layout(set = 0, binding = 0) uniform VertexArgs {
  mat4 transform;
  mat4 camera_view;
};

void main() {
  out_color = color;
  gl_Position = vec4(position, 1.0) * transform * camera_view;
}

Now the render-pass sees that this shader must be applied to an Entity (its handle is somehow associated with it).
It reads that the vertex attributes are position and color. The Mesh stores its format with named attributes. The pass just matches attribute names and sizes, checks whether they occupy one buffer or are stored separately, and generates the VkPipelineVertexInputStateCreateInfo (part of VkGraphicsPipelineCreateInfo) that glues the Mesh and shader together.
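Roughly, the matching step could look like this (stand-in types, not actual rendy or Vulkan structs):

// Hypothetical sketch of matching shader inputs to named Mesh attributes.
struct ShaderAttr { name: String, location: u32, size: u32 }
struct MeshAttr { name: String, size: u32, buffer_index: usize, offset: u32 }

/// Match shader inputs against the Mesh's named attributes, producing the
/// (location, buffer, offset) triples needed to fill the vertex input state.
fn match_vertex_inputs(
    shader: &[ShaderAttr],
    mesh: &[MeshAttr],
) -> Result<Vec<(u32, usize, u32)>, String> {
    shader.iter().map(|s| {
        let m = mesh.iter()
            .find(|m| m.name == s.name)
            .ok_or_else(|| format!("mesh has no attribute `{}`", s.name))?;
        if m.size != s.size {
            return Err(format!("size mismatch for `{}`", s.name));
        }
        Ok((s.location, m.buffer_index, m.offset))
    }).collect()
}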

Next, the render-pass should deal with the uniform buffer definition.
First it must decide whether the buffer exists on its own or should be created and filled with data.
We can use the uniform name for that purpose (I haven’t checked how this data is reflected in SPIR-V though).
Maybe some metadata attached to the Entity could guide the render-pass in that decision.
OK: “VertexArgs” means the render-pass should create and fill the buffer. This means the render-pass reads the fields to see what they are and where to get the actual data. The first field is “transform”. Nice, let’s use the Transform component to fill this one.
But one does not simply hardcode every possible field name (one does not simply walk into Mordor, after all) - it’s exactly the opposite of a data-driven architecture.
Instead, let’s make the render-pass go into a Registry and call fetch("transform"), which returns a function fn(&World, Entity) -> &[u8]. How is this better than hardcoding? For one, it can be augmented from user code.
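A rough sketch of the registry idea (World and Entity are stand-ins here, and I return Vec<u8> instead of &[u8] just to keep lifetimes out of the sketch):

// Hypothetical sketch; not the real specs World/Entity types.
use std::collections::HashMap;

struct World;
#[derive(Clone, Copy)]
struct Entity(u32);

type FetchFn = fn(&World, Entity) -> Vec<u8>;

#[derive(Default)]
struct FetchRegistry {
    fetchers: HashMap<String, FetchFn>,
}

impl FetchRegistry {
    /// User code can register new named fetchers, so the set of recognised
    /// uniform field names is not hardcoded in the engine.
    fn register(&mut self, name: &str, f: FetchFn) {
        self.fetchers.insert(name.to_owned(), f);
    }

    /// The render-pass calls this for each reflected field name, e.g. "transform".
    fn fetch(&self, name: &str) -> Option<&FetchFn> {
        self.fetchers.get(name)
    }
}

fn fetch_transform(_world: &World, _entity: Entity) -> Vec<u8> {
    // A real implementation would read the entity's Transform component
    // and return its matrix as bytes.
    vec![0; 64]
}

fn example(registry: &mut FetchRegistry) {
    registry.register("transform", fetch_transform);
    let world = World;
    if let Some(f) = registry.fetch("transform") {
        let _bytes = f(&world, Entity(0)); // fill the uniform buffer with this
    }
}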
I asked @torkleyy if he sees an opportunity in nitric to solve this with even less code-writing and more data-driving :slight_smile:
Also, the render-pass creates a pipeline layout with a uniform buffer at set = 0, binding = 0, and remembers to write into that set a buffer that will be filled with the data fetched from the World.

Long story short, the render-pass reads all the shaders and creates an appropriate graphics pipeline, or reuses an already-created one for the same set of pipeline-defining data.

To implement this we need to declare which components define a pipeline for a renderable object.
We may settle on the Mesh + Material set. To be more precise, the vertex format of the Mesh plus the shaders, blending op, stencil op, etc. (not the textures) from the Material define the pipeline.
If an Entity has at least one of them, then it is a renderable object.
Some renderable objects can be rendered without a Mesh - sprites, billboards, particles, etc.
Maybe a Mesh without a Material can make sense too.

If you think this idea is worth it, we can discuss it further :wink:
Ask if something is confusing.
Any feedback is appreciated.

(Gray Olson) #9

But this is what I want to avoid. Since not everything makes sense as a combination of Material + Mesh, it seems like there should be a more generic, lower level wrapping of “vertex format + shaders + blending op etc.”, with Material and Mesh simply wrapping these for convenience as a default (for use with the default passes where it makes sense for an entity to have a Material + Mesh in order to be drawn). But it seems to me that sprites, UI, etc. shouldn’t have to be “special cases”; they are relatively common things that tie together very basic building blocks, and it would be very useful to make those blocks more easily usable and composable.

#10

I didn’t think too much about how to store this in the World, actually :slight_smile:
Mesh + Material is just what came to mind first.

(Paweł Grabarz) #11

I’m wondering if using spirv-reflect at runtime might actually be too generic and impose too much of a performance penalty to be useful. Doing hashmap lookups in a tight rendering loop seems costly. Those lookups could potentially be hoisted out of the loop and done only once, but I’m still not sure that a specific data shape should always translate to the same world queries. There are many possibilities for what a constant byte buffer might mean for a particular render pass. Relying on a binding name also seems like a stretch. I’m wondering if we could instead statically generate code based on shader reflection and include it in the pass, then require an “encoder” trait to be implemented by the user which connects a shader input shape to the actual render pass - that is, it queries the world for whatever it needs and returns the shader input data.
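Something like this, as a very rough sketch of the trait shape (World is a stand-in, and the generated code would provide the Input type and the query):

// Hypothetical sketch of the "encoder" trait idea; names are illustrative.
struct World;

/// Implemented by hand or by code generated from shader reflection, turning
/// world data into the exact input layout one render pass expects.
trait PassEncoder {
    /// The plain-old-data struct matching the shader's input shape.
    type Input: Copy;

    /// Query the world and produce one Input per drawable this pass handles.
    fn encode(&self, world: &World, out: &mut Vec<Self::Input>);
}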

#12

In this case the Encoder will basically be a hardcoded render-pass, and the render-pass will not be data-driven at all.
BTW, no map lookups are required. SPIR-V reflection is just a few vectors :slight_smile:
And the process is done only once per shader.

#13

I agree that an entity name usually is not a reliable thing. But in code we rely on string identifiers all the time: any type, field, or variable is named, and you use the name to get data and to know what it is.
Although in a programming language you rely on types more :slight_smile:
But in shaders you often have to use primitive types, so you put the semantics into attribute and field names.
Not to mention that in OpenGL, attributes and uniforms were bound by name because there was no other way to do it.

(Joël Lupien) #14

@kabergstrom I talked about Material in my “render unification” RFC

(Kae) #15

I’ve thought some more about this, and Frizi has recently implemented an “Encoder” concept that takes parts of the World and extracts data into buffers, ready for rendering Passes to process.

Here is an example of Encoders: https://paste.gg/p/anonymous/191b71cf50924d1c85a2f17916c7e7e6
Here is the greatly simplified DrawFlat2D pass: https://paste.gg/p/anonymous/8b4cb528646840abb61fd0a98738ec24

I think this is a great way of extracting data from the world, and it is composable in the sense that a Pass does not need to know about all the components it may get data from, just the buffer format. Ordering of encoded commands for the pass is handled by “post-encoders” that sort the data for the pass. Advantages:

  • Optimized (linear cache behaviour, very few branches)
  • Easily parallelizable (par_join, multiple encoders running in parallel)
  • Decoupled from graphics API specifics (encoders are not coupled to a specific Pass, Pass is not coupled to specific components or specific Encoder)
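Since the pastes aren’t inlined here, a very rough guess at the general shape (field and trait names are mine, not Frizi’s actual code):

// Hypothetical sketch; World is a stand-in for the specs World.
struct World;

/// Per-sprite data laid out exactly as the DrawFlat2D pass consumes it.
#[derive(Clone, Copy)]
struct Flat2DData {
    transform: [[f32; 4]; 4],
    uv_rect: [f32; 4],
    tint: [f32; 4],
}

/// An encoder walks some set of components and appends pass-ready data;
/// several encoders can feed the same buffer and run in parallel.
trait Encoder {
    fn encode(&self, world: &World, out: &mut Vec<Flat2DData>);
}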

This pattern makes it very straightforward to implement a Node for the rendy graph: read the appropriate encoded buffer and emit commands into the command buffer. The Node will be entirely responsible for setting up the graphics pipeline etc.

In terms of getting data from other render nodes: This should be 100% expressed within the node graph.

I think we should provide a ConstBuffer with name:value pairs that assets/users can put arbitrary values in, and make it easy to bind these to a shader, but let the Node handle ConstBuffers in whatever way is appropriate for the pass.

On a different topic, I think it will be possible to make rendy node graph composable at runtime too, with an editor similar to Shader Forge: https://shaderforge.userecho.com/s/attachments/10417/1/527/56fbcbacd7a8875a92a6b34138337c5a.jpg

This would allow users to modify the entire frame graph as an asset with hot reloading support. We’ll have to flesh out the details, but from scanning the docs it looks like it’ll be possible.

#16

That’s not uniform data but actual constants :slight_smile:

The problem I see is that the Flat2DData format is not extensible by the user.
If a user wants to pass more data into the shader, how would they do that? Could you provide pseudo-code showing whether this can be done with the suggested approach?

(Kae) #17

The user would extend the Flat2DData format to extract more data from the World.

I meant that these kinds of name:value pairs can be supplied as an asset and passed by reference in the Encoder-supplied data (like Flat2DData) to provide arbitrary data to the shader.
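As pseudo-code for your question (names purely illustrative): the user defines a new data struct for their pass variant, and a custom encoder fills the extra fields from the asset:

// Hypothetical sketch of a user-extended pass data struct.
#[derive(Clone, Copy)]
struct MyFlat2DData {
    transform: [[f32; 4]; 4],
    uv_rect: [f32; 4],
    tint: [f32; 4],
    // User additions, filled by a custom encoder from a ConstBuffer-style
    // asset of name:value pairs:
    dissolve_amount: f32,
    outline_color: [f32; 4],
}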

#18

So each user extends Flat2DData until it grows so large that a Vec<Flat2DData> doesn’t fit into memory :smile:

(Kae) #19

:smiley:
Each Pass would define its own input data structure from the World. The input should be kept to only what the Pass needs, to keep it optimized. If a user wants to create a new pass, it would have a new data format.

#20

We are talking mostly about the main render-pass here. Others are very different and require their own design.
All objects would be rendered in the main render-pass, regardless of what shaders they use.
Each object may need its own set of data from the World.