Volume Rendering #792
Replies: 9 comments 4 replies
-
This might be a good example for vsgExamples, so please submit it for inclusion if you are happy to do this. A shared example would help with debugging. What results are you seeing right now? One thing that might be an issue is that the VSG by default uses reverse depth, so depth fragments go from 0 at the far plane to 1.0 at the near plane, which is the opposite of the OSG default.
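For readers unfamiliar with the reverse-depth convention mentioned here, a minimal sketch of the depth state it implies; `makeReverseDepthState` is a hypothetical helper and the explicit settings may already match the VSG defaults:

```cpp
#include <vsg/all.h>

// Hypothetical helper illustrating the reverse-depth convention: the near plane
// maps to depth 1.0 and the far plane to 0.0, so the depth test needs a
// GREATER-style comparison instead of the conventional LESS.
vsg::ref_ptr<vsg::DepthStencilState> makeReverseDepthState()
{
    auto depthStencilState = vsg::DepthStencilState::create();
    depthStencilState->depthTestEnable = VK_TRUE;
    depthStencilState->depthWriteEnable = VK_TRUE;
    depthStencilState->depthCompareOp = VK_COMPARE_OP_GREATER; // nearer fragments have larger depth values
    return depthStencilState;
}
```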
-
I will try to put together an example in vsgExamples where the code can be refined. Thank you!
-
Thanks for the example. I have merged the PR as the Joaopmoliveira-master branch: /~https://github.com/vsg-dev/vsgExamples/tree/Joaopmoliveira-master I have renamed the directory from osg_reproduction to vsgvolume in the hope that this can evolve into an official example. I have also changed the code so that by default it checks the VSG_FILE_PATH environment variable for the volume.vert and volume.frag shaders, and I have placed these in the vsgExamples/data/shaders directory. This means we can edit the .vert and .frag shaders and just re-run the example. I will now have a tinker to see what might be going on.
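The shader lookup described above typically follows the pattern used elsewhere in vsgExamples; a minimal sketch, assuming the `shaders/volume.vert` and `shaders/volume.frag` filenames and that a GLSL-capable reader is available to vsg::read_cast:

```cpp
#include <vsg/all.h>
#include <iostream>

int main(int, char**)
{
    // Search paths come from the VSG_FILE_PATH environment variable,
    // which vsgExamples points at its data/ directory.
    auto options = vsg::Options::create();
    options->paths = vsg::getEnvPaths("VSG_FILE_PATH");

    // Load the GLSL sources; editing data/shaders/volume.vert|frag and
    // re-running picks up the changes without recompiling the example.
    auto vertexShader = vsg::read_cast<vsg::ShaderStage>("shaders/volume.vert", options);
    auto fragmentShader = vsg::read_cast<vsg::ShaderStage>("shaders/volume.frag", options);
    if (!vertexShader || !fragmentShader)
    {
        std::cerr << "Could not load shaders, check VSG_FILE_PATH." << std::endl;
        return 1;
    }
    return 0;
}
```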
-
I have changed the face culling from the default back-face culling to front-face culling; this is one of the tricks that osgVolume uses to get the start point of the ray to march back towards the eye point, and it wasn't yet implemented in vsgvolume. I now see the brick in the default camera position, but it doesn't yet do the ray tracing correctly. Looking at the vertex and fragment shaders, I now don't think the reverse depth will be an issue or require any special handling, as the ray is generated in object space rather than in clip space.
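A minimal sketch of the front-face culling change, assuming the pipeline state is assembled along the lines of the other vsgExamples (the surrounding pipeline setup is omitted, and the helper name is illustrative):

```cpp
#include <vsg/all.h>

// Cull the front faces of the proxy box so that only its back faces are
// rasterized; each fragment then marks the far end of a ray that can be
// marched back towards the eye point, the trick osgVolume relies on.
vsg::ref_ptr<vsg::RasterizationState> makeVolumeRasterizationState()
{
    auto rasterizationState = vsg::RasterizationState::create();
    rasterizationState->cullMode = VK_CULL_MODE_FRONT_BIT;
    return rasterizationState;
}
```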
-
I have something half reasonable now checked in to the Joaopmoliveira-master branch. I'm currently trying to fix an automated build issue; once that's done I'll merge these changes with vsgExamples master. From there, there is other work that could be done. I've added the ability to load a 3D texture image on the command line, but it doesn't work yet because the vsgXchange DDS ReaderWriter doesn't yet support 3D texture images, and we don't have a DICOM loader like the OSG has. The DDS reader should be a first step to get working. I'm thinking a vsgVolume library that adds a DICOM loader and other volume-related features, like osgVolume does, would be an appropriate thing to do longer term.
-
Good morning @robertosfield
I have looked at the OpenSceneGraph code to implement the transfer function functionality, but for the life of me I can't figure out how to deal with the z-depth problem. Solutions I have looked at:
1. Ordering could be enforced such that all the traditional geometries are rendered first, and only afterwards are the volumes rendered.
2. Should I add a separate render pass, similar to what you implemented with ImGui? Even if this extra render pass is implemented, the interleaving problem mentioned in 1) remains unsolved, to the best of my knowledge.

I would like, if possible, to get your first impression of what I would have to do to have this feature, as well as how you would go about solving it. From there I will try to get a working prototype with your insights so that we can add it to your examples, and hopefully to your amazing VSG library itself. On another note, for the DICOM reader, would you happen to have a preferred library? I currently use ITK for most of my medical applications, but I would understand if the size of the library makes it an undesirable feature to add to vsgXchange.
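A minimal sketch of option 1), using a hypothetical `buildScene` helper; recording the volume last only helps if the volume's pipeline keeps depth testing on, and it does not resolve the interleaving of several transparent or volumetric objects, which is the open problem raised above:

```cpp
#include <vsg/all.h>

// Hypothetical helper: add the volume subgraph after the opaque meshes so the
// volume is recorded last. With depth testing enabled (and depth writes off in
// the volume's pipeline), opaque geometry still occludes the volume.
vsg::ref_ptr<vsg::Group> buildScene(vsg::ref_ptr<vsg::Node> opaqueModels,
                                    vsg::ref_ptr<vsg::Node> volumeSubgraph)
{
    auto scene = vsg::Group::create();
    scene->addChild(opaqueModels);   // traditional geometry, recorded first
    scene->addChild(volumeSubgraph); // ray-marched volume, recorded last
    return scene;
}
```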
-
A long time back, when I did volume rendering work with the OpenSceneGraph, I used a couple of approaches to handle the rendering of 3D meshes in the same scene as a volume:
The transfer function is simply a 1D texture on the GPU; to conveniently set it up I've previously used a std::map<double, vec4> to provide the input data and then interpolated the data to fill in the 1D texture. For the OSG project I wrote a DICOM plugin using the DCMTK library to read the files. I haven't done any serious work on the volume rendering side for a decade, so I don't know what the best solution would be now. The OSG's DICOM plugin could serve as a reference for any new implementation; the basics of image conversion would be similar. All this functionality could be provided as a vsgVolume library, as has happened with vsgPoints, vsgImGui etc. However, it's not something I can personally embark on without funding for the effort put in.
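A minimal sketch of building the 1D transfer-function texture from a std::map of control points, assuming linear interpolation and an RGBA float format; the helper name, float keys and resolution are illustrative, not taken from the thread:

```cpp
#include <vsg/all.h>
#include <iterator>
#include <map>

// Fill a 1D vsg::vec4Array by linearly interpolating a std::map of control
// points (normalised intensity -> colour/opacity), ready to be uploaded as a
// 1D transfer-function texture.
vsg::ref_ptr<vsg::vec4Array> createTransferFunction(const std::map<float, vsg::vec4>& controlPoints,
                                                    uint32_t resolution = 256)
{
    auto texels = vsg::vec4Array::create(resolution);
    texels->properties.format = VK_FORMAT_R32G32B32A32_SFLOAT;

    if (controlPoints.empty()) return texels;

    for (uint32_t i = 0; i < resolution; ++i)
    {
        float intensity = static_cast<float>(i) / static_cast<float>(resolution - 1);

        auto upper = controlPoints.lower_bound(intensity);
        if (upper == controlPoints.begin()) { texels->at(i) = upper->second; continue; }
        if (upper == controlPoints.end()) { texels->at(i) = std::prev(upper)->second; continue; }

        auto lower = std::prev(upper);
        float r = (intensity - lower->first) / (upper->first - lower->first);
        texels->at(i) = lower->second * (1.0f - r) + upper->second * r;
    }
    return texels;
}
```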
-
The example would still be welcome in the project, correct? At least to add to vsgExamples for future inspiration. If not, I understand entirely.
-
You are welcome to continue extending the vsgvolume example, or even to begin work on an open source volume rendering library :-)
-
Good afternoon VSG community,
I have been trying to port the OpenSceneGraph volume rendering shaders into the VSG API, and something is sort of working.
I would like to get some help from people with more experience with the API to add this feature to VSG (if no direct help is possible, general directions on how to implement the feature are welcome).