Creating a texture in system memory

Oct 31, 2011 at 7:57 AM

Jason wrote elsewhere:

There is a very small discussion of this in the book on page 88, although it could have a code sample to go along with it. You are basically just providing a system memory copy of the image data to the API and it will be copied into the resource upon creation. This is done by passing a D3D11_SUBRESOURCE_DATA structure to the ID3D11Device::CreateTexture2D() function.

The data layout depends on the texture format you are using. As long as your data is laid out in memory in the same format that your resource will end up in, you simply pass the system memory pointer of your image data to the pSysMem member. I assume you are working with 2D textures, so you will also need to fill out the SysMemPitch member. This is the number of bytes from the start of one row of texture data to the start of the next. So if you haven't padded your image data (which you most likely haven't), your pitch will be the x resolution times the number of bytes per pixel.

Once you do this, and pass the structure to the creation function, then your texture should be initialized with the data. You should quickly find out if there is a mismatch in format or layout, and can debug from there. Of course, if you have further questions then you can post them here or on the Hieroglyph 3 site.

This looks close, if not identical, to what I need.

A simple extension to the Hieroglyph3 ImageProcessing demo would be very welcome. For example, replacing the second texture "fruit.png" with a texture created in system memory would help me tremendously. As a simple example, the in-memory texture could be an array of 640 x 480 float4 items, initialised by C++ code with a uniform color and a grid of horizontal and vertical lines (this is trivial to write in C++). That texture would then be added to the m_pRenderer11 instance by calling an overloaded LoadTexture method where "fruit.png" is currently loaded.

-- Francois Piette

Coordinator
Oct 31, 2011 at 9:52 PM

Based on this suggestion, I have an idea of how to add this type of dual object.  It will allow you both to initialize the resource from a system memory copy and to read the data back from the GPU resource, updating the system memory copy again if necessary.  I'll add this to the queue of things to get done; it sounds like a useful addition to the engine.

As a side note, the creation of textures such as this is already possible with the RendererDX11::CreateTexture2D(...) method.  If you want to experiment with this while I am working out the above mentioned class, then that is one option.

Nov 1, 2011 at 6:31 AM

In your design, make sure that the original resource loaded from system memory into the GPU stays untouched on the GPU. The requirement is to keep the original image unmodified apart from zoom and pan on screen: the user selects one or more filters and sees the result (the filtered image rendered on screen), changes his mind, selects other filters to see if the image is better, and finally, once satisfied with the result after applying a number of filters, wants to save his work. The resulting image is saved as a thumbnail, while the original image stays available for more processing and is also saved to a file (untouched), along with the list of applied filters and their parameters.

As you can see, there is a kind of pipeline for all operations: original image, filters, zoom and pan, final image. Of course every filter, plus zoom and pan, will be implemented in HLSL. Besides taking an image from system memory, the main difference from your ImageProcessing demo is that the image processing is dynamic and cumulative.

One of the main filters is the "grey level window". Technically it is very simple, but it requires processing power to be comfortable for the user. The images are 16-bit grey scale (radiology), which is far more than the human eye can distinguish and more than the rendering engine can show (24-bit color images offer only 8 bits when used for grey scale). In a radiology image, the details are not only in the X/Y plane of the image (small details need to be zoomed to be seen) but also in the Z coordinate (yes, a Z for a 2D image! It is simply the grey level of a pixel). You need to show very low contrast changes. For example, out of the 65536 (16-bit) grey levels, you may need to show the details in a [grey level] region where the contrast changes from 6000 to 6100 (just an example). A simple scaling makes grey level 6000 and below look black and every grey level above 6100 white. The range in between is displayed with the full potential of the renderer (256 grey levels usually). In this example, the grey window has a width of 100 and is centered at grey level 6050.

Of course, it is very difficult for the user to find what he is looking for: small changes in contrast somewhere deep in the pixel values (the Z axis of the 2D image). To make his life easier, I need to provide a mouse-based tool: if the user moves the mouse vertically, he changes the grey window center; if he moves it horizontally, he changes the width of the grey window. This is where the GPU helps: the user wants a highly responsive mouse and yet still have his other filters applied on top of the adjusted grey level. I currently have all this implemented on the CPU and it is painfully slow.

In this application, I also need text and drawings rendered on top of the images. But I think this is classic in games and is probably demonstrated in one of your samples.

Sorry for this long message, but I thought that explaining the real world use, far away from the game world, would help you find the best design.

Regards.

Coordinator
Nov 1, 2011 at 9:36 AM

Hello Francois,

The texture object that I am thinking of will enable this type of processing if you choose to do so.  Instead of being specialized for a single use case, it will allow the user to create multiple copies of the texture (via multiple texture objects); applying a filter can then read one GPU resource and store the result into another.  If you want to do cumulative filtering, then you just need to ping-pong between two texture objects - which is what is currently being done in the ImageProcessing sample.

How the texture results appear on screen (and correspondingly which zoomed region of the texture is shown) is controlled by the rendering setup.  In the ImageProcessing sample, this rendering is done with an entity that has full screen quad geometry attached to it.  If you wanted something that didn't take up the whole screen, you could utilize something like what is produced in the ActorGenerator::GenerateVisualizationTexture2D(...) method.

My Master's thesis was on the processing of MRI images, so I am quite familiar with your use case and the complexities of low contrast image visualization.  Indeed, the GPU is the perfect platform for that type of processing, so it should be suitable for your needs.  In addition, the Hieroglyph 3 application framework also provides events for mouse movement and button states, so you should have all of the tools at your disposal to produce an image processing tool.  If you have questions along the way, just let me know and we can discuss the concepts further.

Especially if you end up using it in a commercial or even a non-commercial application, please let me know!  I would like to keep a running list of applications that use the framework...

- Jason

Nov 1, 2011 at 10:23 AM

I'm trying to use CreateTexture2D but only get an error.

Here is an extract from my code:

	m_IbnImage = new IBN_IMAGE_TYPE;
	LoadIbnImage("../Data/Images/ImageMireLow.ibn", m_IbnImage);
	XMFLOAT4* rawIbnViewFloat4 = static_cast<XMFLOAT4 *>(malloc(uWindowWidth * uWindowHeight * sizeof(rawIbnViewFloat4[0])));
	for (UINT y = 0; y < uWindowHeight; y++) {
		for (UINT x = 0; x < uWindowWidth; x++) {
			rawIbnViewFloat4[x + y * uWindowWidth] = m_IbnImage->RawDataFloat4[x + y * uRawBitmapSizeX];
		}
	}

	Texture2dConfigDX11 IbnConfig;
	IbnConfig.SetColorBuffer( 640, 480 ); 
	IbnConfig.SetFormat(DXGI_FORMAT_R32G32B32A32_FLOAT);
	IbnConfig.SetBindFlags( D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE );

	m_Texture[0] = m_pRenderer11->CreateTexture2D( &IbnConfig, (D3D11_SUBRESOURCE_DATA*)rawIbnViewFloat4 );

In that code, LoadIbnImage() loads an image from a file into a structure which, among other members, contains RawDataFloat4, an XMFLOAT4 array initialised with the pixel colors (1920x1536). The loop after that copies a 640x480 area into the rawIbnViewFloat4 variable so that it is compatible with the ImageProcessing demo. Then I create IbnConfig, initialize some members, and call m_pRenderer11->CreateTexture2D(), which triggers an error in m_pDevice->CreateTexture2D(). The error is an unhandled exception in the MSVC runtime where HeapFree() is called.

Any help appreciated.

 
Nov 1, 2011 at 10:32 AM

I'm glad to know you did your Master's thesis on MRI images and perfectly understand my use case. Currently I'm experimenting with the hope of using Hieroglyph3 in a real world application. I have an advantage: I already have the complete application (without GPU). I have a disadvantage: I'm completely new to DirectX and the learning curve is somewhat long.

-- Francois Piette

Coordinator
Nov 1, 2011 at 12:24 PM
fpiette wrote:

I'm trying to use CreateTexture2D but only get an error. [...] Then I create IbnConfig, initialize some members, and call m_pRenderer11->CreateTexture2D(), which triggers an error in m_pDevice->CreateTexture2D().

The second argument to CreateTexture2D should be a pointer to a structure of type D3D11_SUBRESOURCE_DATA.  Declare one on the stack, then set your rawIbnViewFloat4 pointer as the pSysMem member.  Then set the pitch accordingly (sizeof(rawIbnViewFloat4[0]) * 640), and it should succeed.  If it fails, take a look at the debug output and you should have some hints about where to go...

Nov 1, 2011 at 12:40 PM

Yeah ! It works !

Thanks a lot, really a lot !

-- Francois Piette

 

Coordinator
Nov 1, 2011 at 2:44 PM

I'm happy it worked out for you - hopefully that's the first step toward your next application :)