
Just curious - you aren't applying the Camera's Parameters to a parameter system each frame, so that parameter writer (ViewPosition) is deprecated/unused, right?


Coordinator
Apr 13, 2013 at 12:52 PM

It actually does look like that at first, but it is in fact being used. If you look at the Camera class constructor, we get the pointer to the position writer from the Camera's 'Parameters' member (which is of type ParameterContainer). If you follow this method back, you can see that the ParameterContainer provides the SetXXXParameter(...) methods, which serve two purposes.
First, a method will create an instance of the desired parameter writer if the container doesn't already have one by that name. Second, it actually sets the desired value in that writer.
The ParameterContainer essentially keeps a small cache of these writers (whichever ones the using class tries to 'set'), and then at render time they are all used to set parameters at once (for the Camera, this happens in the RenderFrame method). The ParameterContainer class is also used in several other areas related to rendering operations, including the Entity3D and MaterialDX11 classes.
I admit the flow is a bit deceptive. Originally the user had to explicitly request the parameter writer and then set its value. However, this was a repeated pattern everywhere the ParameterContainer was used, so I ended up combining the two steps into one method. Now it is significantly cleaner and easier to use - at least in my opinion.
By the way, if you still want to add your own parameter writers, that is still possible too. Take a look at the VolumeActor class for a good example of creating, configuring, and adding specialized parameter writers that actually do some calculations in addition to writing the values.
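The "create-if-missing, then set" behavior can be sketched roughly like this. Note this is a simplified toy version, not the engine's actual API: the real ParameterContainer manages typed parameter writer objects, while this sketch just caches a single hypothetical vector writer type by name to show the pattern.

```cpp
#include <array>
#include <cassert>
#include <map>
#include <string>

// Illustrative stand-ins; the engine's real types differ.
using Vector4f = std::array<float, 4>;

class VectorParameterWriter {
public:
    void SetValue( const Vector4f& value ) { m_Value = value; }
    const Vector4f& GetValue() const { return m_Value; }
private:
    Vector4f m_Value{};
};

class ParameterContainer {
public:
    // Creates the writer on first use, then sets its value - mirroring
    // the combined SetXXXParameter(...) behavior described above.
    VectorParameterWriter* SetVectorParameter( const std::string& name,
                                               const Vector4f& value )
    {
        // std::map::operator[] default-constructs the writer if the
        // name isn't cached yet, otherwise returns the existing one.
        VectorParameterWriter* pWriter = &m_Writers[name];
        pWriter->SetValue( value );
        return pWriter;
    }
private:
    std::map<std::string, VectorParameterWriter> m_Writers;
};
```

The caller just sets values by name; repeated sets reuse the same cached writer, which a render pass could later iterate over to push all parameters at once.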



OK, thanks, I got it now. But one more question - not sure if I can ask here or should start a new discussion - it's about the geometry generator. I noticed that when you generate indices for a sphere in the double loop here:
// Now the middle rings
for ( unsigned int v = 1; v < VRes - 2; ++v )
{
    const unsigned int top = 1 + ( ( v - 1 ) * URes );
    const unsigned int bottom = top + URes;

    for ( unsigned int u = 0; u < URes; ++u )
    {
        const unsigned int currentU = u;
        const unsigned int nextU = ( u + 1 ) % URes;

        const unsigned int currTop = top + currentU;
        const unsigned int nextTop = top + nextU;
        const unsigned int currBottom = bottom + currentU;
        const unsigned int nextBottom = bottom + nextU;

        _ASSERT( currTop <= NumVerts );
        _ASSERT( currBottom <= NumVerts );
        _ASSERT( nextBottom <= NumVerts );
        _ASSERT( nextTop <= NumVerts );

        face = TriangleIndices( currTop, currBottom, nextBottom );
        pGeometry->AddFace( face );
        face = TriangleIndices( nextBottom, nextTop, currTop );
        pGeometry->AddFace( face );
    }
}
the number of indices grows quickly the denser the sphere is. For an 80x70 resolution sphere, the indices vector ends up with a capacity of 40965 and a size of 32400. Dunno, maybe it's the correct way of doing it, it just seemed strange to me, since it's gigantic compared to the small number of vertices the sphere has.


Coordinator
Apr 13, 2013 at 4:16 PM

It's no problem - normally this should go into another discussion thread, but since we are already here...
An 80x70 sphere is a very fine resolution. If you do the general math for it: with 80 'slices' and 70 'stacks' in your sphere, you end up with ~5600 quads that you have to fill in. Since there are two triangles per quad, you end up with 6 indices per quad - resulting in somewhere around 33,000 indices. So the math checks out.
Generally you can get away with less than 10x10 if you are using per-pixel shading on the geometry, so those resolutions typically aren't required.
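The arithmetic above can be written out as a quick back-of-the-envelope check. This helper is illustrative only - it ignores the generator's exact handling of the pole caps, so it approximates rather than reproduces the engine's actual count:

```cpp
// Rough index-count estimate for a URes x VRes sphere: roughly one
// quad per (slice, stack) cell, two triangles per quad, and three
// indices per triangle.
unsigned int EstimateSphereIndices( unsigned int URes, unsigned int VRes )
{
    const unsigned int quads = URes * VRes;      // ~5600 for 80x70
    const unsigned int indicesPerQuad = 2 * 3;   // two triangles, three indices each
    return quads * indicesPerQuad;               // ~33,600 for 80x70
}
```

For 80x70 this gives 33,600, in the same ballpark as the 32,400 indices you observed; the extra vector capacity (40965) is just the standard library over-allocating as the vector grows.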

