In article <d1ad669a-040f-4182-94fb-191ea939cd89
@a1g2000hsb.googlegroups.com>,
nv******@gmail.com says...
[ ... ]
> These images are in fact 2D slices of a 3D image of a cube containing
> a set of layered spheres. A layered sphere (called a grain) is a set
> of concentric spheres of different colors. So, in my program, I have
> the coordinates of the centers of the grains, their radii, and the
> colors of each layer.
> As the easiest path, I have used the POV-Ray (www.povray.org/)
> program, writing my grain data to a text file in the format POV-Ray
> requires. This way, I can generate images of up to about 3000 grains;
> beyond that, POV-Ray's tracing becomes too slow.
> I have therefore been looking for more effective approaches. The
> first is to use a free C++ image library that can write images
> containing spheres. The second is to write my data directly to
> pixels. My question is how to realize these two approaches in real
> C++ code. Please help me.
A great deal depends on what you really want. If you want to produce a
display (especially an interactive display), the answer is quite a bit
different than if you want to produce a bitmap file of some sort.
If you want to produce an (interactive) display, you probably want to
use something like OpenGL. This will allow you to send your data to the
display as a set of items with 3D coordinates. You can then pick an
angle of view and the coordinates of the slice you want to view, and
it'll produce a display of the part of the data visible from that point.
If you want to produce an output file, you can create a 3D volume as
(for example) a three-dimensional array (or some sort of matrix) of
numbers signifying the color at any given spot in the volume. Be aware
that this step can take a great deal of memory. Picking a slice then
consists of reading the colors from the matrix where it intersects a
plane of your choosing. You'll typically specify that plane in terms of
polar coordinates (i.e. its angles with respect to some origin) and then
convert to individual pixel coordinates using sine and cosine. If you're
concerned with speed, you can also do a version of something like
Bresenham's algorithm, but for a plane instead of a line.
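To make that concrete, here's a minimal sketch of the voxel-volume
approach. The Grain layout (center plus a list of (radius, color)
shells) is my guess at your data, and voxelize/slice are names I made
up; the slice plane is specified by two angles, with the in-plane basis
vectors built from sines and cosines as described above:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

// One layered sphere ("grain"): a center plus concentric shells, each
// with an outer radius and a color index (assumed data layout).
struct Grain {
    double cx, cy, cz;
    // (outer radius, color), sorted by ascending radius
    std::vector<std::pair<double, std::uint8_t>> shells;
};

// Fill an n*n*n voxel volume: each voxel gets the color of the
// innermost shell containing it, 0 meaning background.
std::vector<std::uint8_t> voxelize(const std::vector<Grain>& grains, int n)
{
    std::vector<std::uint8_t> vol(std::size_t(n) * n * n, 0);
    for (const Grain& g : grains) {
        double r = g.shells.back().first;  // outermost radius
        int x0 = std::max(0, int(g.cx - r)), x1 = std::min(n - 1, int(g.cx + r));
        int y0 = std::max(0, int(g.cy - r)), y1 = std::min(n - 1, int(g.cy + r));
        int z0 = std::max(0, int(g.cz - r)), z1 = std::min(n - 1, int(g.cz + r));
        for (int z = z0; z <= z1; ++z)
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x) {
                    double dx = x - g.cx, dy = y - g.cy, dz = z - g.cz;
                    double d = std::sqrt(dx * dx + dy * dy + dz * dz);
                    for (const auto& s : g.shells)
                        if (d <= s.first) {
                            vol[(std::size_t(z) * n + y) * n + x] = s.second;
                            break;
                        }
                }
    }
    return vol;
}

// Sample a slice plane through (ox, oy, oz) whose normal is given by
// the spherical angles (theta, phi).  u and v are orthonormal vectors
// spanning the plane; each output pixel is a nearest-neighbor lookup.
std::vector<std::uint8_t> slice(const std::vector<std::uint8_t>& vol, int n,
                                double theta, double phi,
                                double ox, double oy, double oz, int size)
{
    double ux = std::cos(theta) * std::cos(phi);
    double uy = std::cos(theta) * std::sin(phi);
    double uz = -std::sin(theta);
    double vx = -std::sin(phi), vy = std::cos(phi), vz = 0.0;
    std::vector<std::uint8_t> img(std::size_t(size) * size, 0);
    for (int j = 0; j < size; ++j)
        for (int i = 0; i < size; ++i) {
            double a = i - size / 2, b = j - size / 2;
            int x = int(std::lround(ox + a * ux + b * vx));
            int y = int(std::lround(oy + a * uy + b * vy));
            int z = int(std::lround(oz + a * uz + b * vz));
            if (x >= 0 && x < n && y >= 0 && y < n && z >= 0 && z < n)
                img[std::size_t(j) * size + i] =
                    vol[(std::size_t(z) * n + y) * n + x];
        }
    return img;
}
```

Limiting the voxel scan to each grain's bounding box keeps the fill
roughly proportional to the total grain volume rather than the cube's;
for finer slices you'd replace the nearest-neighbor lookup with
trilinear interpolation.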
Of course, from that point you'll need to convert the individual pixel
values into a readable file format, but that's mostly a matter of
finding documentation on your format of choice, and writing the
appropriate "stuff" into the header and such, then writing out the
values of the individual pixels. Note that some formats (e.g. JPEG) do
lossy compression on the pixels, but those are probably not very
suitable for your purposes -- from the looks of things, something like
RLE or LZ* compression should work quite well.
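If an uncompressed file is acceptable while you get things working, the
binary PPM ("P6") format is about the simplest target there is: a short
ASCII header followed by raw RGB bytes, so no library is needed and
tools like ImageMagick can convert the result to PNG or TIFF (which do
use the lossless compression mentioned above). The function name here
is mine; a sketch:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write an 8-bit RGB image as a binary PPM ("P6") file.
// `pixels` holds width*height*3 bytes in row-major R,G,B order.
bool write_ppm(const std::string& path, int width, int height,
               const std::vector<std::uint8_t>& pixels)
{
    if (pixels.size() != std::size_t(width) * height * 3)
        return false;
    std::ofstream out(path, std::ios::binary);
    if (!out)
        return false;
    // Header: magic number, dimensions, maximum channel value.
    out << "P6\n" << width << " " << height << "\n255\n";
    // Raster: raw pixel bytes, no padding or compression.
    out.write(reinterpret_cast<const char*>(pixels.data()),
              std::streamsize(pixels.size()));
    return bool(out);
}
```

To use it you'd map each color index from the slice through a palette
to an RGB triple, then hand the resulting buffer to write_ppm.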
--
Later,
Jerry.
The universe is a figment of its own imagination.