
image matching algorithms

Hi all,

There are a number of free tools for image matching, but it's not very
easy to pick the actual algorithm out of code that also handles db
management, GUI, etc. I have my own image database and GUI, so all
I need is the algorithm itself, preferably as pseudo code and not in
the form of a research paper (I found plenty of those too, but since
I'm not that interested in the actual science of image recognition,
they seem like overkill).

My understanding of image matching is that it works by first
calculating N real numbers for an image and defining a metric for
pairs of N-tuples. Images are similar if their distance (defined by
the metric) is small.
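In code, the skeleton I have in mind looks something like this (PIL plus a
coarse RGB histogram, just to illustrate the vector-plus-metric framework;
the histogram itself is probably too naive):

    from PIL import Image
    import math

    def signature(path, bins=4):
        # The N = bins**3 numbers for an image: a normalized, coarse RGB histogram.
        im = Image.open(path).convert("RGB").resize((128, 128))
        counts = [0] * (bins ** 3)
        for r, g, b in im.getdata():
            counts[(r * bins // 256) * bins * bins +
                   (g * bins // 256) * bins +
                   (b * bins // 256)] += 1
        total = float(sum(counts))
        return [c / total for c in counts]

    def distance(sig_a, sig_b):
        # Euclidean metric on the N-tuples; a small distance means similar images.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

The idea would be to precompute signature() once per photo, store the
N-tuples in the database, and answer a query by ranking stored tuples by
distance() to the query image's tuple.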

The various free tools differ by their chosen optimization paths and
their degree of specialization. My preference would be,

1. Doesn't really matter how long it takes to compute the N numbers per image
2. Lookups should be fast, consequently N should not be too large (I guess)
3. It should be a generic algorithm working on generic images (everyday photos)
4. PIL should be enough for the implementation

So if anyone knows of a good resource that is close to being pseudo
code I would be very grateful!

Cheers,
Daniel
Mar 10 '08 #1
6 Replies


P: n/a
On Mar 10, 1:32 am, "Daniel Fetchinson" <fetchin...@googlemail.com>
wrote:
[original question quoted in full; snipped]
http://www.idi.ntnu.no/~blake/gbimpdet.htm
"High level features carry information about an image in an abstracted
or propositional form"

It says it constructs a graph about the image's features. Here's the
graph:

Graph components                                               Notes

[A[@id=crot3.77;ext:sqr:aa(1659):mm(19815,148,0,0):
   cg(62,86):cr(255,153,153):pl(-204,574,792,10353)]]          <- leading node with attributes

[[5][@rep= 1 ]]                                                <- relation and strength

[B[@id=crot3.77;ext:sqr:aa(199):mm(17759,244,1,0):
   cg(98,77):cr(153,153,255):pl(966,2,258,-79198)]]$           <- trailing node with attributes
It doesn't say what corner cases it misses; it only hedges with phrases
like "seem to provide" and "seems to be extremely flexible". I like this
feature:

- the equation of the best fitting plane Ax+By+Cz+D=0 to the range
image data masked by the current region;

Where does that get you?
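For what it's worth, that plane fit is a small least-squares problem. A
sketch with NumPy, assuming the range image is a 2-D array and the region
is a boolean mask (both names are just placeholders here):

    import numpy as np

    def fit_plane(range_img, mask):
        # Least-squares fit of z = a*x + b*y + c to the masked pixels,
        # returned as (A, B, C, D) for the form Ax + By + Cz + D = 0.
        ys, xs = np.nonzero(mask)
        zs = range_img[ys, xs]
        M = np.column_stack([xs, ys, np.ones(len(xs))])
        a, b, c = np.linalg.lstsq(M, zs, rcond=None)[0]
        return a, b, -1.0, c   # a*x + b*y - z + c = 0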
Mar 10 '08 #2

Daniel Fetchinson wrote:
Thanks for the info! SIFT really looks like a heavyweight solution,
but do you think the whole concept can be simplified if all I need
is: given a photo, find similar ones? I mean, SIFT first detects
objects in the image and then finds similarities, but I don't need the
detection part at all; all I care about is similarity for the whole
photo. I surely don't understand the big picture fully, but I have
the general feeling that SIFT and other expert tools are overkill
for me, and that a simplified version with a much more comprehensible
core algorithm would be just as good.
Please describe the kind of photos you are dealing with. Are they
identical photos, in, say, different formats or with different metadata?
Or are they rescaled images? Or maybe they are the same photo cropped
differently?

SIFT works by more or less the process you described in your first
post. It basically calculates the N sets of numbers for each image,
representing the unique features of that image and their relative
positions. The rest of the process is up to you: you have to compare the
different sets of numbers to find the image with the minimal difference,
as opposed to comparing the whole image.
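As a rough sketch of that comparison step, assuming OpenCV's SIFT
implementation is available (any extractor that returns a set of
descriptor vectors per image would slot in the same way):

    import cv2

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def descriptors(path):
        # The "sets of numbers": one 128-dimensional descriptor per keypoint.
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        keypoints, desc = sift.detectAndCompute(img, None)
        return desc

    def similarity(desc_a, desc_b, ratio=0.75):
        # Count descriptors of A with a clearly best match in B (Lowe's ratio test).
        # A higher count means more similar; rank the database by this score.
        pairs = matcher.knnMatch(desc_a, desc_b, k=2)
        return sum(1 for p in pairs
                   if len(p) == 2 and p[0].distance < ratio * p[1].distance)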
Mar 10 '08 #3


Please describe the kind of photos you are dealing with. Are they
identical photos, in say different formats or with different metadata?
Or are they rescaled images? Or maybe they are the same photo cropped
differently?
The photos are just coming straight from my digital camera. Same
format (JPEG), varying size (6-10 megapixel) and I would like to be
able to pick one and then query the database for similar ones. For
example: if I pick a photo which is more or less a portrait of someone,
the query should return other photos that are more or less portraits. If I
pick a landscape with a lot of green and a mountain, the query should
return other nature (mostly green) photos. Something along these
lines; of course the matches won't be perfect, because I'm looking for
a simple algorithm.
Great, this sounds very good. I'll give SIFT a try (or at least try to
understand the basic concepts), although at the moment it looks a bit
scary :)
Mar 12 '08 #4

Daniel Fetchinson wrote:
Since you seem to know quite a bit about this topic, what is your
opinion on the apparently 'generic' algorithm described here:
http://grail.cs.washington.edu/projects/query/ ?
So far it seems to me that it does what I'm asking for, it does even
more because it can take a hand drawn sample image and query the
database for similar photos.

There is even a python implementation for it here:
http://members.tripod.com/~edcjones/pycode.html

On the histogram method I agree that it won't work partly because of
what you say and partly because it is terribly slow since it's
comparing every single pixel.
I'm hardly an expert and can't answer authoritatively, but here's my 2c.

I can't comment as to the actual accuracy of the algorithm, since it
will depend on your specific data set (set of photos). The algorithm is
sensitive to spatial and luminance information (because of the YIQ
colorspace), so there are simple ways in which it will fail.

The histogram method uses only color, but has a lot of numbers to
compare. You may find the histogram method insensitive to spatial
relations (a landscape with the mountain on the left and one with the
mountain on the right) compared to the wavelet approach.

This is a relatively old paper, and I've seen other more recent image
retrieval research using wavelets (some cases using only the
high-frequency wavelets for "texture" information instead of the
low-frequency ones used by this paper for "shape") and other information
retrieval-related research using lossy compressed data as the features.
If you have time, you may want to look at other research that cites this
particular paper.

And just a thought: instead of merely cutting off at the m largest wavelets,
why not apply a quantization matrix to all the values?
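For reference, a rough sketch of the truncate-to-m-largest idea from that
paper, greatly simplified (greyscale only, a plain Haar transform, and a
naive overlap count instead of the paper's weighted score); the image size
and the value of M below are just placeholders:

    import numpy as np
    from PIL import Image

    SIZE = 128   # images are rescaled to SIZE x SIZE before the transform
    M = 60       # how many of the largest-magnitude coefficients to keep

    def haar2d(a):
        # Simple 2-D Haar decomposition (unnormalized, good enough for ranking).
        a = a.astype(float)
        n = a.shape[0]
        while n > 1:
            half = n // 2
            avg = (a[:n, 0:n:2] + a[:n, 1:n:2]) / 2.0   # rows
            dif = (a[:n, 0:n:2] - a[:n, 1:n:2]) / 2.0
            a[:n, :half], a[:n, half:n] = avg, dif
            avg = (a[0:n:2, :n] + a[1:n:2, :n]) / 2.0   # columns
            dif = (a[0:n:2, :n] - a[1:n:2, :n]) / 2.0
            a[:half, :n], a[half:n, :n] = avg, dif
            n = half
        return a

    def signature(path):
        # Keep only the positions and signs of the M largest coefficients.
        img = Image.open(path).convert("L").resize((SIZE, SIZE))
        flat = haar2d(np.asarray(img)).ravel()
        top = np.argsort(np.abs(flat))[-M:]
        return set((int(i), 1 if flat[i] > 0 else -1) for i in top)

    def score(sig_a, sig_b):
        # Naive similarity: how many (position, sign) pairs the signatures share.
        return len(sig_a & sig_b)

The quantization-matrix thought would replace the hard M cutoff: divide
each coefficient by a per-scale step size and keep whatever rounds to a
non-zero value, much as JPEG does.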

Let me know how it works out.
Mar 13 '08 #5

And just a thought: instead of merely cutting off at the m largest wavelets,
why not apply a quantization matrix to all the values?
I'm not at all an expert, I've just started to look into image matching, so
I'm not quite sure what you mean. What's a quantization matrix in this
context?
Mar 14 '08 #6

On Mar 14, 10:59 am, "Daniel Fetchinson" <fetchin...@googlemail.com>
wrote:
[Daniel's message quoted in full; snipped]
Hello,

I am also looking for the solution to the same problem. Could you let
me know if you have found something useful so far?

I appreciate your response.

Thanks a lot.

Sengly
Jun 27 '08 #7
