Bytes IT Community

PIL Image transform

Okay, so here is the situation. I need to do some on-the-fly image
creation. I have everything working great except for the last part:
applying a perspective-type transform to the image. The transform will take
a rectangular 2D image and render it as a 3D-looking representation in 2D.

Please see the link to see what I am trying to do:
http://seanberry.com/transform.png

I have PIL 1.1.5 installed, which has a PERSPECTIVE transform, but it does
not appear to do what I want -- or I can't figure out how to use it
correctly, because it takes the coefficients of a transform matrix.

Here is the only info I could find on the usage:
http://mail.python.org/pipermail/ima...ry/003198.html

This is for the creation of images to be used in Flash. Originally I was
doing the image processing in Flash, because Flash 8 has a BitmapData class
that handles the basics of image work: copy, transform, etc. To accomplish
the transform I was using an algorithm that approximated the surface with
filled triangles, and it worked really well, but I need the image processing
to be server-side, not client-side.

So, here I am. Anyone have any idea how to accomplish my goal here? Is
there a way to fill a triangle with a bitmap using PIL? What about better
docs on the PERSPECTIVE transform?

Thanks for any and all help on this.

Aug 9 '06 #1
4 Replies


On 2006-08-09, Dean Card <ma****@forums.com> wrote:
> [snip]
>
> Here is the only info I could find on the usage:
> http://mail.python.org/pipermail/ima...ry/003198.html
This looks like a correct description of the sources:

In Image.py:

    elif method == PERSPECTIVE:
        # change argument order to match implementation
        data = (data[2], data[0], data[1],
                data[5], data[3], data[4],
                data[6], data[7])

and then in Geometry.c:

    static int
    perspective_transform(double* xin, double* yin, int x, int y, void* data)
    {
        double* a = (double*) data;
        double a0 = a[0]; double a1 = a[1]; double a2 = a[2];
        double a3 = a[3]; double a4 = a[4]; double a5 = a[5];
        double a6 = a[6]; double a7 = a[7];

        xin[0] = (a0 + a1*x + a2*y) / (a6*x + a7*y + 1);
        yin[0] = (a3 + a4*x + a5*y) / (a6*x + a7*y + 1);

        return 1;
    }
> [snip]
Something like this is almost what you want:

im = im.transform(im.size, Image.PERSPECTIVE, (1, 0, 0, 0, 1, 0, -0.004, 0))

But the problem really is that the top row of the image is at a y of
0 -- I think you want the origin of the image to be in the centre for
this to work properly.

Is there a way to do that in PIL?
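One way to do that (a sketch, not something from PIL's docs): the eight
coefficients are just a 3x3 homography with the bottom-right entry fixed at
1, and homographies compose, so you can conjugate with translations to move
the effective origin to the image centre. The helper names here are my own:

```python
def matmul3(A, B):
    # 3x3 matrix product, plain Python lists
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def centered_coeffs(coeffs, cx, cy):
    """Recentre a PIL-style perspective tuple (a..h) so the transform
    acts about (cx, cy) instead of the top-left corner (0, 0)."""
    a, b, c, d, e, f, g, h = coeffs
    H = [[a, b, c], [d, e, f], [g, h, 1.0]]
    T = [[1, 0, cx], [0, 1, cy], [0, 0, 1]]       # move origin back afterwards
    Tinv = [[1, 0, -cx], [0, 1, -cy], [0, 0, 1]]  # move centre to origin first
    M = matmul3(matmul3(T, H), Tinv)
    s = M[2][2]  # renormalise so the bottom-right entry is 1 again
    return tuple(M[i][j] / s for i in range(3) for j in range(3))[:8]
```

The result should be passable straight to im.transform(..., Image.PERSPECTIVE, ...),
and the distortion will then be applied about the centre rather than the corner.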
Aug 11 '06 #2

> [snip]
>
> Something like this is almost what you want:
>
> im = im.transform(im.size, Image.PERSPECTIVE, (1, 0, 0, 0, 1, 0, -0.004, 0))
>
> But the problem really is that the top row of the image is at a y of
> 0 -- I think you want the origin of the image to be in the centre for
> this to work properly.
>
> Is there a way to do that in PIL?

Thanks for the reply. I have been able to use the Image.PERSPECTIVE
transform via trial and error to get it to work properly for each transform.
What I am really looking for, I guess, is a way to calculate the 8-element
tuple to match the perspective change I am going for. For a given image there
may be 5 elements that need to be 'painted' on with perspective. A database
table will include the transform tuples based on the source image. So, by
passing a starting image and a pattern image, the starting image can be
covered with the pattern. Perhaps the best way to explain is visually....

http://seanberry.com/perspective.png

What I need to know is: how do you take a surface like (P1, P5, P6, P2) and
describe it with the 8-element tuple?

I know that the elements in the transform, a - h, are as follows...
(a, b, c, d, e, f, g, h)
a / e is the ratio of height to width. For a = 2, e = 1 the output is half
the width of the original.
b is the tan of the angle of horizontal skew. d is the vertical skew
equivalent of b.
c and f are the x and y offsets respectively.
g and h are the values that actually distort the image rather than doing an
affine transform... which is where I need the help...

I appreciate any additional insight into this problem.

This is a small step in a massive project I am working on, and I need to get
past this part to move on to the next.

I am also willing to $pay$ for help that results in a success.

Thanks.
Aug 11 '06 #3

On 2006-08-11, Dean Card <ma****@forums.com> wrote:
> [snip]
>
> What I need to know is: how do you take a surface like (P1, P5, P6, P2)
> and describe it with the 8-element tuple?
I think you want to work out normalized normals for each face of your
cube, in 3D. This part is easy (or at least well-known) for any given
set of Euler angles (rotation about each axis) or for an angle about an
arbitrary axis.

A normal can be visualized as a perpendicular spike sticking out of the
centre of each face (for the sake of simplicity assume the centre of the
cube is at the origin).

A normal is "normalized" if dot(n, n) == 1.

So, anyway, for a given face, you work out a normal with 3 components,
n[0], n[1] and n[2].

Then I think it's likely that each of (a, b), (d, e) and (g, h) should
be set to different combinations of two out of the three (maybe with
some minus signs in here and there).

Something like this:

(a, b) <= (n[2], n[1])
(d, e) <= (n[0], n[2])
(g, h) <= (n[0], n[1])

Leaving c and f as zero.

If I get the time I'll try and work it out properly and explain it.
Aug 11 '06 #4

On 2006-08-11, Dean Card <ma****@forums.com> wrote:

> [snip]
>
> What I need to know is: how do you take a surface like (P1, P5, P6, P2)
> and describe it with the 8-element tuple?
You could try asking this in comp.graphics.algorithms. The question is
easily made non-Python-specific, just say you have a function in a
library that does this:

Transform each point {x,y} in an image into a new point {x1,y1} where
{x1,y1} = {(ax + by + c)/(gx + hy + 1), (dx + ey + f)/(gx + hy + 1)}

then ask how to choose the params a to h to achieve the effect
illustrated on http://seanberry.com/perspective.png.
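For what it's worth, choosing a to h from four corner correspondences is
just an 8x8 linear solve -- each point pair gives two equations once you
multiply through by the denominator. A sketch in pure Python (the function
name is my own, and no numpy is needed):

```python
def find_coeffs(target_pts, source_pts):
    """Solve for the 8 coefficients (a..h) such that each output point
    (x, y) in target_pts maps back to the matching input point (X, Y)
    in source_pts under:
        X = (a*x + b*y + c) / (g*x + h*y + 1)
        Y = (d*x + e*y + f) / (g*x + h*y + 1)
    Clearing denominators gives, per point pair:
        a*x + b*y + c - g*x*X - h*y*X = X
        d*x + e*y + f - g*x*Y - h*y*Y = Y
    """
    A, b = [], []
    for (x, y), (X, Y) in zip(target_pts, source_pts):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.append(Y)
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (M[r][n] - sum(M[r][c] * coeffs[c]
                                   for c in range(r + 1, n))) / M[r][r]
    return coeffs
```

To paint a rectangular pattern onto a quad like (P1, P5, P6, P2), pass the
quad's corners as target_pts and the pattern rectangle's corners (in the
same order) as source_pts, then hand the result to Image.transform with
Image.PERSPECTIVE.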
Aug 12 '06 #5

This discussion thread is closed

Replies have been disabled for this discussion.