Could you clarify where the main render logic starts?

Mar 20, 2013 at 1:40 PM
Edited Mar 20, 2013 at 1:59 PM
Hey, I stumbled on this engine through an article on GameDev, and I like the way you've designed your Renderer and Pipeline, but there are some things I don't quite get yet. The Material system is confusing me. Materials have update, render, and pre-render methods, but they also contain pointers to render tasks (the ones that the entities holding the material participate in?), and they call those same methods on the render tasks inside the material. So is the intention to loop through the Actors in the scene, then for each actor go through its _Body_ (or bodies, if the user tweaks it to have multiple body components), and for each body render the Task referenced in the body's Material? But doesn't that mean that if you render 10 Entities, the material gets set 10 times, instead of being set once, rendering all 10, and then moving on to the next one? Or am I missing something (maybe that's what the State Monitors are for; I guess I'll study them some more).

Maybe it's because I'm so used to the static design of "loop through shaders, for each shader loop through each geometry that will use it, for each geometry loop over each entity that will use it and add it to the instances, then render" + >insert some material-ID-dependent render sorting algorithm here<.
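In code, the pattern I'm describing looks roughly like this (stand-in types, just to make the loop shape concrete - not my actual code):

```cpp
#include <vector>

// Minimal stand-in types for the nested static render loop.
struct Entity   { float transform[16]; };
struct Geometry {
    std::vector<Entity*> entities;
    void Bind() { /* set vertex/index buffers */ }
    void DrawInstanced(size_t count) { /* one draw call for all instances */ (void)count; }
};
struct Shader {
    std::vector<Geometry*> geometries;
    void Bind() { /* set shader state */ }
};

void RenderStatic(const std::vector<Shader*>& shaders)
{
    for (Shader* shader : shaders) {
        shader->Bind();                                 // shader state set once
        for (Geometry* geo : shader->geometries) {
            geo->Bind();                                // IA state set once per mesh
            geo->DrawInstanced(geo->entities.size());   // all entities in one call
        }
    }
}
```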
Coordinator
Mar 20, 2013 at 10:53 PM
Hello, thanks for your interest and comments on the engine. I'll try to summarize what each of the major objects does, and then we can discuss the order in which things happen. First are the scene-based objects:
  1. Entity3D: The basic unit in the engine. Responsible for 3D spatial representation, and references a MaterialDX11 and a PipelineExecutorDX11.
  2. Node3D: A specialization of Entity3D that implements the ability to create a graph of entities.
  3. Actor: A composite object made up of a Node (Node3D) and a Body (Entity3D), which provides easy interactions with the Scene. The Node of an actor can be linked to another node - thus allowing Actor objects to become part of the scene graph.
  4. Camera: Specialized Actor for referencing a SceneRenderTask, and providing camera properties during rendering.
  5. Scene: Collects Actors and at least one Camera with a single Node3D that serves as the root of the scene graph.
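To make those relationships concrete, here is a minimal sketch - these are simplified stand-ins, not the engine's real declarations:

```cpp
#include <vector>

// Simplified stand-ins for the classes listed above.
struct MaterialDX11 {};
struct PipelineExecutorDX11 {};

struct Entity3D {
    MaterialDX11*         pMaterial = nullptr;   // rendered only if both are set
    PipelineExecutorDX11* pExecutor = nullptr;
};

struct Node3D : Entity3D {
    std::vector<Entity3D*> children;             // the graph-building specialization
    void AttachChild(Entity3D* pChild) { children.push_back(pChild); }
};

struct Actor {
    Node3D   node;   // links the actor into the scene graph
    Entity3D body;   // carries the material and pipeline executor
    Actor() { node.AttachChild(&body); }
};

struct Scene {
    Node3D root;                                 // root of the scene graph
    std::vector<Actor*> actors;
    void AddActor(Actor* pActor) {
        actors.push_back(pActor);
        root.AttachChild(&pActor->node);         // actor's node joins the graph
    }
};
```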
That is the overview of the scene-based system. The client program builds a scene out of these objects, and defines customized Actors for special cases. Rendering is performed for each Entity3D which has both a material and a pipeline executor set. We will talk about the rendering process next:
  1. SceneRenderTask: This object (and all its specializations) represent one complete rendering pass. You can think of it like a transformation from a Scene to an output resource (i.e. render targets). This object is set in the Camera, and serves as the main mechanism to perform a rendering.
  2. PipelineExecutorDX11: This object encapsulates all of the rendering pipeline input - vertex and index buffers, primitive topology, etc. - basically the complete input assembler state. It is also responsible for issuing a pipeline execution call that is appropriate for the input state that it sets (such as DrawIndexed or DrawIndexedInstanced).
  3. MaterialDX11: Finally, to your original question :) The material holds an array of RenderEffectDX11 instances, each of which provides a pipeline state configuration. That includes all shaders, constant buffers, and so on - every state except the input assembler state and the output state (which comes from the SceneRenderTask). There is one RenderEffectDX11 for each 'type' of SceneRenderTask. So you can have one type for perspective rendering, one for generating a shadow map, one for generating a paraboloid map, and so on. Each of those requires separate pipeline states, so this is the mechanism that supports them.
As you mentioned, a SceneRenderTask can also be referenced for each of the 'type' configurations. This is to allow easy access to multipass rendering. For example, an object can have a dynamic environment map attached to it, which is itself a type of scene render task. That environment map has to be rendered prior to the main rendering where it will be used, and this is the mechanism by which we accomplish it.
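A sketch of that per-type configuration might look like this (simplified, illustrative names - the real types differ in detail):

```cpp
#include <array>

// Simplified stand-ins for the per-task-type configuration described above.
struct RenderEffectDX11 {};   // shaders, constant buffers - everything but IA/OM state
struct SceneRenderTask  {};   // one complete rendering pass

enum TaskType { Perspective = 0, ShadowMap, ParaboloidMap, NumTaskTypes };

struct MaterialParams {
    RenderEffectDX11* pEffect = nullptr;   // pipeline state used for this task type
    SceneRenderTask*  pTask   = nullptr;   // optional pre-pass (e.g. an env map)
};

struct MaterialDX11 {
    std::array<MaterialParams, NumTaskTypes> params;   // one slot per 'type'
};
```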

So the overall sequence of events goes something like this:
  1. Scene::Render is called, which checks its camera for a SceneRenderTask. If there is one, the scene graph objects are recursively allowed to 'queue' any SceneRenderTasks that they need for the final rendering (these are stored in the material) by calling the PreRender methods.
  2. After the PreRender, then all of the queued SceneRenderTasks are processed one by one. Each one can take the scene and perform the desired rendering task as needed. Most of them behave more or less exactly the same as the SceneRenderTask class does, and set the output merger state before rendering everything.
  3. When the render task calls the render method of the entities, each entity sets the pipeline state for that render task via its material, and then sets the input state and executes the pipeline via its pipeline executor. A condensed sketch of this sequence is below.
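Here is that sequence in miniature, with simplified stand-ins rather than the engine's actual classes or signatures:

```cpp
#include <deque>
#include <vector>

struct Entity3D;

struct SceneRenderTask {
    std::vector<Entity3D*> entities;
    void SetOutputMergerState() { /* bind render targets for this pass */ }
    void Render();
};

struct Entity3D {
    SceneRenderTask* pDependentTask = nullptr;  // e.g. an env-map pre-pass
    void PreRender(std::deque<SceneRenderTask*>& q) {
        if (pDependentTask) q.push_back(pDependentTask);  // step 1: queue it
    }
    void Render() {
        // step 3: material sets pipeline state for this task type, then the
        // pipeline executor sets the IA state and issues the draw call.
    }
};

void SceneRenderTask::Render() {
    SetOutputMergerState();
    for (Entity3D* e : entities) e->Render();
}

void SceneRender(std::vector<Entity3D*>& graph, SceneRenderTask& mainTask) {
    std::deque<SceneRenderTask*> queue;
    for (Entity3D* e : graph) e->PreRender(queue);  // step 1: PreRender pass
    queue.push_back(&mainTask);                     // main pass goes last
    for (SceneRenderTask* t : queue) t->Render();   // step 2: process in order
}
```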
You are right about the state setting - it would be really redundant if you have 10 of the same thing that just gets set over and over again. But that is why there is a pipeline state monitor system to cull out the redundant states. However, the sorting by material or ID would be implemented in the SceneRenderTask - each of these objects can specialize the scene rendering as they see fit. Since the engine is used primarily in small sample programs, this hasn't really been a big issue for me so far (especially with the state monitoring).
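The core of the state monitoring idea is small - conceptually something like this (illustrative only, not the engine's actual monitor classes):

```cpp
#include <cstdint>

// Tracks the currently bound state and culls redundant set calls.
struct ShaderStateMonitor {
    uint32_t currentShaderID = UINT32_MAX;   // sentinel: nothing bound yet

    // Returns true only when the state actually changed and must be (re)set.
    bool SetDesired(uint32_t shaderID) {
        if (shaderID == currentShaderID)
            return false;                    // redundant: skip the API call
        currentShaderID = shaderID;
        return true;                         // caller binds the new shader
    }
};
```

With 10 identical entities, the first call returns true and the next nine return false, so the device only ever sees one actual state change.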

I am really interested to hear your thoughts on the (somewhat convoluted) rendering process. It has evolved over a long period of time, and I'm always open to suggestions or criticisms that could lead to something better. I generally try for flexibility first, then performance as a second priority, but I prefer fast solutions if they are available and possible.

Thanks again for the comments!
Mar 22, 2013 at 6:45 PM
Edited Mar 22, 2013 at 6:53 PM
Ok, I think I get it now. A small suggestion: you could change the way controllers and entities work. Right now each controller has one entity, so if you have 10 rotating cubes, you need 10 controllers to make them rotate, right? What if each controller worked on a list of entities instead of just one, and the scene kept a list of controllers and updated them all each frame from the scene update call, or something like that - roughly like the sketch below. I have something similar in my engine, but I'm facing another problem: I have different types of entities (a render entity with the render data, etc., a physics entity with the physics stuff, used in the physics dispatcher, and a gameplay entity with gameplay-related data for my game). So the only way to get it to work is to have the controller check IEntity->GetType() before casting it to PhysicsEntity, RenderEntity, or whatever. Not sure if that's a good idea performance-wise, though. The alternative is to add virtual proxy methods in IEntity for the controllers to call, but I think the compiler would generate something similar behind the scenes. Then again, most games nowadays are GPU-bound, so some extra branching in the CPU code won't hurt. edit: I like the idea of Node3D, I might add a GraphEntity myself.
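Here's roughly what I mean, as a simplified sketch (made-up names, not my actual code):

```cpp
#include <vector>

enum class EntityType { Render, Physics, Gameplay };

struct IEntity {
    virtual ~IEntity() = default;
    virtual EntityType GetType() const = 0;
};

struct PhysicsEntity : IEntity {
    EntityType GetType() const override { return EntityType::Physics; }
    void Integrate(float dt) { /* physics step */ (void)dt; }
};

// One controller updates a whole list instead of a single entity.
struct PhysicsController {
    std::vector<IEntity*> entities;
    void Update(float dt) {
        for (IEntity* e : entities) {
            if (e->GetType() == EntityType::Physics)            // runtime check...
                static_cast<PhysicsEntity*>(e)->Integrate(dt);  // ...then cast
        }
    }
};
```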
Coordinator
Mar 23, 2013 at 12:56 AM
Your suggestion about the controllers is actually possible with the current system. It would require a specialization of the controller interface to allow multiple entities, but other than that, there is no reason it couldn't be implemented. However, I don't think there would be many situations where it would save much performance - at least not in a generalized way. Most controllers are stateful, meaning they carry some unique state of their own. If I had a single controller, it would apply the same state to all entities it was bound to.
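For illustration, here's a sketch of that trade-off (names are illustrative, not from the engine):

```cpp
#include <vector>

struct Entity3D { float rotation = 0.0f; };

// A single shared controller applies its one piece of state to every bound
// entity - fine for lock-step motion, wrong when each entity needs
// independent state.
struct SharedRotationController {
    float angularSpeed = 1.0f;              // the single piece of shared state
    std::vector<Entity3D*> entities;

    void Update(float dt) {
        for (Entity3D* e : entities)
            e->rotation += angularSpeed * dt;   // every entity rotates identically
    }
};
```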

It is still an option though - thanks for the suggestion!

Regarding your controllers, I would say that each controller should be dedicated to a certain type of entity. Validate the type when the entity is bound to the controller, and then you don't need to worry about it anymore - one-time validation should be enough. With that in place, the controller holds a reference to an entity type that it knows it can process, so once the reference is accepted, you shouldn't have any problem using the data from the entity.

In other words, you can use the type checking capabilities of C++ to automatically ensure that the entity can be used by the specific controller in question. That should simplify the processing of your scene without the need for manual type checking!
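For example, a templated controller pushes the check to compile time - this is just a sketch of the idea, not code from the engine:

```cpp
#include <vector>

struct PhysicsEntity { void Integrate(float dt) { /* ... */ (void)dt; } };

template <typename TEntity>
struct TypedController {
    std::vector<TEntity*> entities;
    void Bind(TEntity* pEntity) { entities.push_back(pEntity); }  // type-checked at compile time
};

struct PhysicsController : TypedController<PhysicsEntity> {
    void Update(float dt) {
        for (PhysicsEntity* e : entities)   // safe: only PhysicsEntity* can be bound
            e->Integrate(dt);               // no GetType() check, no cast
    }
};

// Usage: PhysicsController ctrl; ctrl.Bind(&somePhysicsEntity);
// Passing a RenderEntity* to Bind simply won't compile.
```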