How to design DrawExecutorDX11 for loading skinned mesh of MS3D model

Jan 16, 2014 at 1:31 PM
Edited Jan 16, 2014 at 1:39 PM
Hi Jason Zink,

I'm trying to implement skinning animation from an MS3D model with your framework. As far as I know, MS3D divides its vertices into groups, each of which has its own texture and indices, but all of them share the same vertex buffer. Therefore, I need to call "draw" several times, changing the texture for each group.

I am thinking about creating one instance of DrawExecutorDX11, which contains the vertex buffer, and several instances of DrawSkinnedIndexedExecutorDX11, for example. All of the DrawSkinnedIndexedExecutorDX11 instances would refer to the same DrawExecutorDX11 instance. So each time I draw the model, I would only need to invoke the DrawExecutorDX11 once, and then each DrawSkinnedIndexedExecutorDX11 in turn.

But the problem is that it seems impossible to separate these two kinds of instances.
Could you suggest some ideas for resolving this problem?

Thanks so much!
Jan 17, 2014 at 10:02 PM

If I understand the situation correctly, you want to have a single vertex buffer, a single index buffer, and then multiple draw calls, each of which should have its own set of textures, right? If that is the case, then it may be problematic with the current design. The problem is that the conversion from the parameter system to binding data on the pipeline happens at a higher level of granularity (i.e. at the RenderEffectDX11::ConfigurePipeline level, when it binds its shaders). This means that the textures are determined before the PipelineExecutorDX11::Execute method runs.

You could easily implement multiple draw calls in the current architecture, but having each one use a different texture won't work so well. This is because I have always assumed that textures would be considered part of the material rather than part of the base execution object.

In that case, I would recommend that you use multiple Entity3D instances to represent the mesh. This would allow you to have multiple material instances, each of which would be responsible for its own texture. Each entity instance could then reference a pipeline executor that knows which section of the MS3D model it will be rendering, but they would all reference a common vertex buffer.

This solution may seem like it is excessive (and it is somewhat) but since the pipeline remembers what states have already been set, it won't be too inefficient.

The longer term possibility is if we create a material that can have multiple RenderEffectDX11 instances for a particular view type, and then the Entity3D would cycle through them in its Render method. I considered doing this before, but never ended up implementing it... That is definitely a design gap though - when you know you only want to modify a small subset of the state in between multiple draw calls, there currently isn't a good way to do it.

I'll think about this further and see if there is a decent way we can do something like this.
Marked as answer by khanhhh89 on 1/17/2014 at 7:44 PM
Jan 17, 2014 at 10:03 PM
One other possibility is that you could ensure that the same texture is set for all of the MS3D groups (i.e. combine all textures into one). I know that isn't ideal, but it could give you a quick and dirty solution to the problem...

Just thought I would mention it...
Jan 18, 2014 at 12:23 AM
Edited Jan 18, 2014 at 10:42 AM
It's so great to hear from you :),

I will try to focus on your suggestion about using multiple Entity3D instances to represent an MS3D mesh. As far as I understand from your SkinAndBones demo, the skinned mesh is represented by an Actor (SkinnedActor). Its body is used to store the whole mesh surface, and the remaining nodes store the bone hierarchy.

In my case, I need to represent an MS3D mesh with multiple groups, each of which has its own texture. Based on your advice, I tried to extend the SkinnedActor class into a new actor, TextureSkinnedActor, which will contain multiple "body" entities representing its mesh surface. Each of these bodies will reference a PipelineExecutor and a Material, which holds the texture for the corresponding PipelineExecutor.

So the problem is: how can this new actor plug these bodies into the scene for rendering in the call ViewPerspective::executeTask()?

I also want to ask how I can design one DrawExecutor that stores the vertex buffer, with multiple IndexedDrawExecutors referencing it.

I'm looking forward to your answer!
Many thanks,
Jan 18, 2014 at 2:23 PM
I'm happy to discuss design topics like this :) It's always good to hear about other use cases, and then extend the design for each particular use case. Now on to your questions:

"So the problem is: how can this new actor plug these bodies into the scene for rendering in the call ViewPerspective::executeTask()?"

I think you can just attach the node of the new actor to the scene just like normal, and then the view perspective will render the entities in the order that they get submitted. As long as all of the sub-entities are attached to the node of the actor, then it is like they are part of the scene graph automatically. Is this what you mean, or am I misunderstanding?

"I also want to ask how I can design one DrawExecutor that stores the vertex buffer, with multiple IndexedDrawExecutors referencing it."

You probably just want to have multiple IndexedDrawExecutor instances that all just reference the same vertex buffer. I would do this construction work in the new actor class that you created - for example, in its constructor you can create the vertex buffer accordingly, then create one IndexedDrawExecutor for each group and give it a reference to the common vertex buffer. Then each one will be able to 'set' its entire needed state, but only states that are different will actually be passed to the API. Does that make sense, or is it really necessary to have a separate DrawExecutor?

Also, if you do manage to get this working well, I would be thrilled if you would consider contributing it back to the library for others to use! I never could get the bone animations to load properly in my own attempts, so I definitely have respect for your efforts!
Marked as answer by khanhhh89 on 1/18/2014 at 8:38 AM
Jan 18, 2014 at 3:38 PM
I would be glad to do that.

Right now I am spending most of my time practicing with your framework. It is really invaluable material for a beginner like me, who has just switched from OpenGL to DirectX.

About the first task: I created multiple entities, each of which represents one MS3D group, then added those entities as children of the root node of the new actor, TextureSkinnedActor, and it seems to work well. But I am stuck on a problem, which I posted on Stack Overflow. If you have time, please take a look at it.

For separating the vertex buffer from the index buffer for the sake of performance, I think an IndexedDrawExecutor is itself a DrawExecutor. So is it reasonable to create multiple IndexedDrawExecutors referencing a common DrawExecutor?
I think using GeometryDX11 would be more suitable in this situation. Specifically, I would create multiple instances of GeometryDX11 for the MS3D groups, and all of them would point to the same vertex data, as shown below:
MeshPtr->AddElement( pPositions );
MeshPtr->AddElement( pBoneIDs );
MeshPtr->AddElement( pTexcoords );
MeshPtr->AddElement( pNormals );
For now, though, I have implemented multiple IndexedDrawExecutors, each of which has its own vertex and index buffer and is independent of the others. I will come back to this problem later when I finish.

I am planning to build an MS3D animation program where the user can press keys to choose different animations for the character, and I would be happy if my work were helpful for other people. I also want to make more examples with your framework.
Jan 21, 2014 at 2:38 PM
Actually, I think the GeometryDX11 instances will each create their own GPU resources for their vertex and index buffers - so it will probably not be very efficient. Instead, what you should do is to create a new DrawIndexedExecutor implementation that can share its vertex buffer among multiple instances. This should be relatively easy to do, as you just need to provide access to the vertex buffer variable (just be careful that you account for all the ways that the buffer might be manipulated, including adding new elements). If you have trouble getting it to work, let me know and I could put something together.

Regarding your animation tool, are you planning to use a UI framework to interact with it? I would be really interested to hear which framework you will use if you go this route...
Jan 23, 2014 at 7:37 AM
Yes, I got it. I will try it.
At the moment I am not using any UI framework; I am focusing on DirectX.