How does ParameterContainer::SetXXXXXParameter() work?

Dec 26, 2014 at 8:10 PM
Edited Dec 26, 2014 at 8:12 PM
hi,

I'm looking at the FullscreenActor class and trying to figure out how the color is set. After calling:
m_pMaterial->Parameters.SetVectorParameter( L"ObjectColor", m_Color );
somehow this named vector parameter "ObjectColor" becomes accessible (through a constant buffer) in the shader ("FullscreenColor.hlsl"):
cbuffer FullscreenObjectProperties
{
    float4 ObjectColor;
};
//-----------------------------------------------------------------------------
struct VS_INPUT
{
    float3 position : POSITION;
};

struct VS_OUTPUT
{
    float4 position : SV_POSITION;
    float4 color : COLOR;
};
//-----------------------------------------------------------------------------
VS_OUTPUT VSMAIN( in VS_INPUT v )
{
    VS_OUTPUT o = (VS_OUTPUT)0;

    // Transform the new position to clipspace.
    o.position = float4( v.position, 1.0f );
    o.color = ObjectColor;
            
    return o;
}
//-----------------------------------------------------------------------------
float4 PSMAIN( in VS_OUTPUT input ) : SV_Target
{
    float4 color = input.color;
    
    return( color );
}
I'm just puzzled as to how these two are connected.

More broadly, how does the renderer ("RendererDX11") apply all of the parameters set through the SetXXXXXParameter() functions?

Thanks.
Coordinator
Dec 30, 2014 at 12:40 PM
This is actually one of the subsystems whose ease of use I am really happy with - so I'm glad that you brought it up :)

You can consider two different parts to this setup - the shader side and the CPU side. On the shader side, when a shader is loaded it is inspected with shader reflection to determine which parameters it requires and what their types are. This includes textures, buffers, and cbuffer contents. That information is used later on to create a constant buffer resource big enough for each of the constant buffers declared in the shader.
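For reference, here is a minimal sketch of what that reflection step looks like against the raw D3D11 API - this is just an illustration of the technique, not the actual Hieroglyph 3 code, and error handling is omitted:

#include <d3d11.h>
#include <d3d11shader.h>
#include <d3dcompiler.h>   // D3DReflect

void InspectConstantBuffers( const void* pBytecode, SIZE_T bytecodeSize )
{
    ID3D11ShaderReflection* pReflector = nullptr;
    D3DReflect( pBytecode, bytecodeSize, IID_ID3D11ShaderReflection,
                reinterpret_cast<void**>( &pReflector ) );

    D3D11_SHADER_DESC shaderDesc;
    pReflector->GetDesc( &shaderDesc );

    // Walk each cbuffer declared in the shader, recording its size plus the
    // name, offset, and size of every variable it contains.
    for ( UINT cb = 0; cb < shaderDesc.ConstantBuffers; ++cb )
    {
        ID3D11ShaderReflectionConstantBuffer* pCBuffer =
            pReflector->GetConstantBufferByIndex( cb );

        D3D11_SHADER_BUFFER_DESC cbDesc;
        pCBuffer->GetDesc( &cbDesc );   // e.g. "FullscreenObjectProperties", 16 bytes

        for ( UINT v = 0; v < cbDesc.Variables; ++v )
        {
            D3D11_SHADER_VARIABLE_DESC varDesc;
            pCBuffer->GetVariableByIndex( v )->GetDesc( &varDesc );
            // e.g. varDesc.Name == "ObjectColor", StartOffset == 0, Size == 16
        }
    }

    pReflector->Release();
}

That per-variable name/offset/size information is exactly what lets the renderer match "ObjectColor" in your shader with the value you stored from the CPU side.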

On the CPU side, the individual classes in the application have access to the parameter system. This is a specialized form of a database where named values can be stored. The names are the same as the names used in the shaders, so you can think of this as something like a whiteboard pattern (anyone can write to it and anyone can read from it). During the update phase, any object can write its data into a parameter through the parameter manager. Alternatively, some objects have what is called a ParameterContainer, into which they can write a value that will be written to the parameter manager later on for them. This is the method that you described above - each material, each entity, and even some actors have this parameter container mechanism to simplify the writing process.
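In spirit, the parameter system is just a map from names to typed values. Here is a conceptual sketch with hypothetical names (the real Hieroglyph 3 classes are more involved than this):

#include <map>
#include <string>

struct Vector4f { float x, y, z, w; };   // stand-in for the engine's vector type

class ParameterWhiteboard
{
public:
    // Anyone can write a named value during the update phase...
    void SetVectorParameter( const std::wstring& name, const Vector4f& value )
    {
        m_Vectors[ name ] = value;
    }

    // ...and anyone (e.g. the renderer) can read it back by the same name.
    bool GetVectorParameter( const std::wstring& name, Vector4f& value ) const
    {
        auto it = m_Vectors.find( name );
        if ( it == m_Vectors.end() )
            return false;
        value = it->second;
        return true;
    }

private:
    std::map<std::wstring, Vector4f> m_Vectors;
};

So when your FullscreenActor calls SetVectorParameter( L"ObjectColor", m_Color ), conceptually it is just writing a value onto the whiteboard under that name.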

Then during the render phase, when each shader is bound to the pipeline, the required constant buffers are located, filled with the named data from the parameter system, and then bound to the pipeline. All of that happens with very little effort from the user - it just works behind the scenes.
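A rough sketch of that fill-and-bind step (again illustrative rather than the engine's actual code - ReflectedVariable and FindParameterData are hypothetical names standing in for the reflection data and the parameter lookup):

#include <cstring>
#include <string>
#include <vector>
#include <d3d11.h>

struct ReflectedVariable
{
    std::wstring name;   // name found by shader reflection, e.g. L"ObjectColor"
    UINT offset;         // StartOffset within the cbuffer
    UINT size;           // size in bytes
};

// Hypothetical lookup into the parameter system; returns null if the name
// has never been written.
const void* FindParameterData( const std::wstring& name );

void FillAndBindConstantBuffer( ID3D11DeviceContext* pContext,
                                ID3D11Buffer* pBuffer,   // created with D3D11_USAGE_DYNAMIC
                                const std::vector<ReflectedVariable>& vars )
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    if ( SUCCEEDED( pContext->Map( pBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped ) ) )
    {
        for ( const ReflectedVariable& v : vars )
        {
            // Copy the named value written during update into the spot that
            // reflection says the shader expects it.
            const void* pValue = FindParameterData( v.name );
            if ( pValue != nullptr )
                memcpy( static_cast<char*>( mapped.pData ) + v.offset, pValue, v.size );
        }
        pContext->Unmap( pBuffer, 0 );
    }

    pContext->VSSetConstantBuffers( 0, 1, &pBuffer );   // slot 0 just for illustration
}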

Does that help clarify what is going on?
Dec 30, 2014 at 6:37 PM
Thanks for showing me the big picture of the system. It seems that the input layout is also created by shader reflection (as seen in RendererDX11::CreateInputLayout())? If that's the case, does that mean the input layout is defined by the shader file (.hlsl)?
When I use the "GeometryActor" class to add geometry to the scene, I don't specify the input layout; instead, I set the positions and colors of the vertices. Here's an example I made that shows a rotating cube (similar to the RotatingCube sample project). In Application::Initialize(), I loaded the shaders and defined the cube:
    m_pEffect = new RenderEffectDX11();
    m_pEffect->SetVertexShader(m_pRenderer11->LoadShader(VERTEX_SHADER,
        std::wstring(L"tutorial06.hlsl"),
        std::wstring(L"VSMAIN"),
        std::wstring(L"vs_5_0")));
    m_pEffect->SetPixelShader(m_pRenderer11->LoadShader(PIXEL_SHADER,
        std::wstring(L"tutorial06.hlsl"),
        std::wstring(L"PSMAIN"),
        std::wstring(L"ps_5_0")));

    DepthStencilStateConfigDX11 dsConfig;
    int iDepthStencilState = m_pRenderer11->CreateDepthStencilState(&dsConfig);
    if (iDepthStencilState == -1) {
        Log::Get().Write(L"Failed to create light depth stencil state");
        assert(false);
    }

    BlendStateConfigDX11 blendConfig;
    int iBlendState = m_pRenderer11->CreateBlendState(&blendConfig);
    if (iBlendState == -1) {
        Log::Get().Write(L"Failed to create light blend state");
        assert(false);
    }

    RasterizerStateConfigDX11 rsConfig;
    rsConfig.CullMode = D3D11_CULL_BACK;
    int iRasterizerState = m_pRenderer11->CreateRasterizerState(&rsConfig);
    if (iRasterizerState == -1) {
        Log::Get().Write(L"Failed to create rasterizer state");
        assert(false);
    }

    m_pEffect->m_iBlendState = iBlendState;
    m_pEffect->m_iDepthStencilState = iDepthStencilState;
    m_pEffect->m_iRasterizerState = iRasterizerState;
    m_pEffect->m_uStencilRef = iDepthStencilState;

    m_pMaterial = MaterialPtr(new MaterialDX11());
    m_pMaterial->Params[VT_PERSPECTIVE].bRender = true;
    m_pMaterial->Params[VT_PERSPECTIVE].pEffect = m_pEffect;

    m_pGeometryActor = new GeometryActor();
    // manually create the cube
    Vector3f positions[] =
    {
        Vector3f(-1.0f, 1.0f, -1.0f),
        Vector3f(1.0f, 1.0f, -1.0f),
        Vector3f(1.0f, 1.0f, 1.0f),
        Vector3f(-1.0f, 1.0f, 1.0f),
        Vector3f(-1.0f, -1.0f, -1.0f),
        Vector3f(1.0f, -1.0f, -1.0f),
        Vector3f(1.0f, -1.0f, 1.0f),
        Vector3f(-1.0f, -1.0f, 1.0f),
    };

    Vector4f colors[] =
    {
        Vector4f(0.0f, 0.0f, 1.0f, 1.0f),
        Vector4f(0.0f, 1.0f, 0.0f, 1.0f),
        Vector4f(0.0f, 1.0f, 1.0f, 1.0f),
        Vector4f(1.0f, 0.0f, 0.0f, 1.0f),
        Vector4f(1.0f, 0.0f, 1.0f, 1.0f),
        Vector4f(1.0f, 1.0f, 0.0f, 1.0f),
        Vector4f(1.0f, 1.0f, 1.0f, 1.0f),
        Vector4f(0.0f, 0.0f, 0.0f, 1.0f),
    };

    for (size_t i = 0; i < sizeof(positions) / sizeof(Vector3f); i++) {
        m_pGeometryActor->AddVertex(positions[i], colors[i]);
    }

    UINT indices[] =
    {
        3, 1, 0,
        2, 1, 3,

        0, 5, 4,
        1, 5, 0,

        3, 4, 7,
        0, 4, 3,

        1, 6, 5,
        2, 6, 1,

        2, 7, 6,
        3, 7, 2,

        6, 4, 5,
        7, 4, 6,
    };

    for (size_t i = 0; i < sizeof(indices) / sizeof(UINT); i += 3) {
        m_pGeometryActor->AddIndices(indices[i], indices[i + 1], indices[i + 2]);
    }

    RotationController<Node3D>* pGeometryRotController = new RotationController<Node3D>(Vector3f(0.0f, 1.0f, 0.0f), 0.4f);
    m_pGeometryActor->GetNode()->Controllers.Attach(pGeometryRotController);
    m_pGeometryActor->GetBody()->Visual.SetMaterial(m_pMaterial);

    m_pScene->AddActor(m_pGeometryActor);
Running the code shows nothing, and I can't tell from the debugger what's missing. It shows that the shaders were loaded, with an input layout that looks like this:
slot || semantic name || semantic index || format
0 || POSITION || 0 || DXGI_FORMAT_R32G32B32_FLOAT
1 || NORMAL || 0 || DXGI_FORMAT_R32G32B32_FLOAT
2 || COLOR || 0 || DXGI_FORMAT_R32G32B32A32_FLOAT
3 || TEXCOORD || 0 || DXGI_FORMAT_R32G32_FLOAT

The shader file is like this:
struct VS_INPUT
{
    float3 position: POSITION;
    float3 normal:  NORMAL;
    float4 color : COLOR;
    float2 texcoord : TEXCOORD;
};

struct PS_INPUT
{
    float4 position: SV_POSITION;
    float4 color : COLOR;
};

PS_INPUT VSMAIN(VS_INPUT input)
{
    PS_INPUT output;
    output.position = float4(input.position, 1.0f);
    output.color = input.color;
    return output;
}

float4 PSMAIN(PS_INPUT input) : SV_Target
{
    return input.color;
}
However, even if I change the VS_INPUT struct in the shader to include only position and color, the input layout shown in the debugger still includes all four components. Why is that?
Coordinator
Dec 30, 2014 at 7:53 PM
The input layout is defined by the geometry side - that is, by the vertex layout of the buffers. The input signature of the vertex shader must be a subset of what you have in your vertex buffer, so it is perfectly legal to have more data in the buffer than the shader uses. It just means that the input assembler stage will discard some of the data from the buffer when generating the vertices.

In the case of the GeometryActor, it specifies its vertex type and input layout info on its own - you don't need to add it manually. It assumes the four components that you showed above, as a general format that works for a wide variety of shader types.
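For comparison, the four elements from your debug output correspond to an input layout along these lines - a sketch assuming a single interleaved vertex buffer in slot 0 (the engine's actual declaration may differ, and pDevice and the shader blob variables are assumed to be in scope):

D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// CreateInputLayout validates the layout against the vertex shader's input
// signature: every element the shader consumes must be present, but extra
// elements (like NORMAL and TEXCOORD in your trimmed-down shader) are fine.
ID3D11InputLayout* pLayout = nullptr;
pDevice->CreateInputLayout( layout, ARRAYSIZE( layout ),
                            pVSBytecode, vsBytecodeSize, &pLayout );

That is why trimming VS_INPUT down to position and color doesn't change the layout you see in the debugger - the layout comes from the vertex data, not from the shader.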
Dec 30, 2014 at 9:18 PM
So why is the code not working? If I comment out the line:
m_pGeometryActor->GetBody()->Visual.SetMaterial(m_pMaterial);
I can see a black box rotating. Is there something wrong in setting up the shaders?
Coordinator
Dec 30, 2014 at 10:48 PM
When the sample loads and runs, are there any messages in the debug window? Usually if a shader doesn't compile, or if there is something amiss with the API calls, you will see messages there.

The fact that you get a black rotating cube when you don't set a material would seem to indicate that there is an issue with either the shaders or one of the state configurations. You can check whether your geometry is making it to the render target by changing your pixel shader to just return a solid color (my favorite is green :) ) - that tells you that your geometry is not being clipped or culled away and that it is passing the depth and stencil tests.

I don't see anything else that is obviously wrong, so let's step through it together and figure out what is going on.
Dec 31, 2014 at 1:34 PM
The shaders I loaded seem irrelevant to the final rendering. There are no error messages, other than reports that a dozen other shaders were loaded as well:
[AlphaTestTextured.hlsl][VSMAIN][vs_4_0]
[AlphaTestTextured.hlsl][PSMAIN][ps_4_0]
[AlphaTestTexturedVS.hlsl][VSMAIN][vs_4_0]
[AlphaTestTexturedVS.hlsl][PSMAIN][ps_4_0]
[VertexInstanceColor.hlsl][VSMAIN][vs_4_0]
[VertexInstanceColor.hlsl][PSMAIN][ps_4_0]
[Sprite.hlsl][SpriteVS][vs_4_0]
[Sprite.hlsl][SpritePS][ps_4_0]
[tutorial06.hlsl][VSMAIN][vs_5_0]
[tutorial06.hlsl][PSMAIN][ps_5_0]
[ImmediateGeometrySolid.hlsl][VSMAIN][vs_4_0]
[ImmediateGeometrySolid.hlsl][PSMAIN][ps_4_0]
[ImmediateGeometryTextured.hlsl][VSMAIN][vs_4_0]
[ImmediateGeometryTextured.hlsl][PSMAIN][ps_4_0]

By the way, I put all my code on GitHub. It's easier to see the issue if you download and run the code.
Coordinator
Dec 31, 2014 at 4:37 PM
I checked out your source code, and I am assuming you are talking about tutorial06, right?

After taking a closer look, I found two things that are likely to be wrong, one of which I think will solve your issue. The first problem is that you set your stencil reference to the value of your depth stencil state index - that probably isn't what you wanted (and if you copied it from somewhere in the Hieroglyph 3 samples, then you found a bug!).
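In other words, the stencil reference is the value that gets compared against the stencil buffer during the stencil test, not an index into the renderer's state tables. With a default depth stencil configuration, something like this is the usual choice:

// 0 is a sensible default stencil reference when stenciling isn't used.
m_pEffect->m_uStencilRef = 0;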

The other, more pertinent issue is that you are not transforming your vertices by the WorldViewProj matrix. There are three special matrix parameters that are used by the Actor and Camera classes - the WorldMatrix, ViewMatrix, and ProjMatrix - plus the various combinations thereof. In your current tutorial06 shader you aren't transforming your geometry at all, so the vertices aren't projected to the render target properly. All of the entity objects set their own world matrix, which gets combined with the view/projection matrices automatically and stored as parameters in the parameter system.

I modified the shader to add the relevant projection matrix and multiplication in your vertex shader:
cbuffer Transforms
{
    matrix WorldViewProjMatrix;
};


struct VS_INPUT
{
    float3 position: POSITION;
    float3 normal:  NORMAL;
    float4 color : COLOR;
    float2 texcoord : TEXCOORD;
};

struct PS_INPUT
{
    float4 position: SV_POSITION;
    float4 color : COLOR;
};

PS_INPUT VSMAIN(VS_INPUT input)
{
    PS_INPUT output;
    output.position = mul(float4(input.position, 1.0f), WorldViewProjMatrix);
    output.color = input.color;
    return output;
}

float4 PSMAIN(PS_INPUT input) : SV_Target
{
    return input.color;
}
Does that help to clear up the situation? By the way, I really appreciate you taking the time to build up some tutorials - as you mentioned, they are sorely needed! I am also about to make a commit that changes a couple of project structures that would likely affect your project setup, so I hope it isn't too much of a disruption... I can post the details about what is changing in a separate discussion topic.
Marked as answer by wxz on 1/3/2015 at 6:53 AM
Dec 31, 2014 at 5:58 PM
Yes, that fixed the problem.

The code for setting the stencil reference was copied over from the RotatingCube sample.

It would be great if my GitHub project could stay synced with the Hieroglyph code base, but I'm not sure how to do that other than manually changing files. If there is a better way to do it, please let me know. Thanks.
Coordinator
Jan 2, 2015 at 2:30 PM
I am planning to switch the repository over to Git instead of SVN early in the new year. That might make it easier to use some of Git's mechanisms for handling multiple dependencies (e.g. submodules or something similar). I don't know much about those systems, though.
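For example, once the repository is in Git, a dependent project could pull it in as a submodule along these lines (the URL and path here are placeholders, not a published repository location):

git submodule add <hieroglyph3-repo-url> extern/hieroglyph3
git submodule update --init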

I am also considering the creation of a NuGet package for Hieroglyph 3. Would that be a desirable consumption model for you?
Jan 3, 2015 at 1:44 PM
I have avoided NuGet because it makes the source tree bigger. But it sure helps with syncing the code base.
Coordinator
Jan 3, 2015 at 1:46 PM
You can always delete the 'packages' folder from the source tree, and make sure that your SCM isn't tracking it. Then it will auto-restore when someone downloads and tries to build the project. I have been using it with DirectXTK for a few months now, and I'm pretty happy with it so far.
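For example, with Git that exclusion is a one-line .gitignore entry; NuGet's package restore re-downloads everything at build time:

packages/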