
# nearest neighbor in 2D

 P: n/a I have a list of two-tuples containing x and y coords: (x0, y0), (x1, y1), ..., (xn, yn). Given a new point x,y, I would like to find the point in the list closest to x,y. I have to do this a lot, in an inner loop, and then I add each new point x,y to the list. I know the range of x and y in advance.

One solution that comes to mind is to partition the space into quadrants and store the elements by quadrant. When a new element comes in, identify its quadrant and search only that quadrant for the nearest neighbor. This could be done recursively, a 2D binary search of sorts....

Can anyone point me to some code or module that provides the appropriate data structures and algorithms to handle this task efficiently? The size of the list will likely be in the range of 10-1000 elements. Thanks, John Hunter Jul 18 '05 #1
14 Replies

 P: n/a >>>>> "John" == John Hunter writes:

John> Given a new point x,y, I would like to find the point in the list
John> closest to x,y. I have to do this a lot, in an inner loop, and
John> then I add each new point x,y to the list. I know the range of x
John> and y in advance.

John> One solution that comes to mind is to partition the space into
John> quadrants and store the elements by quadrant. When a new element
John> comes in, identify its quadrant and only search the appropriate
John> quadrant for the nearest neighbor. This could be done recursively,
John> a 2D binary search of sorts....

With recursion your solution would answer queries in O(log n) time, and the construction would take O(n log n) time. Unluckily, it can return the wrong point: the nearest point within the nearest quadrant might not be the nearest point overall. This is a well-studied basic computational geometry problem, although I don't know of any Python code that actually does it. Try searching the web for "Voronoi diagrams" and "radial triangulation" to understand how to solve it properly in the above-mentioned (perhaps randomized) time complexity. Regards, Isaac. Jul 18 '05 #2

 P: n/a John Hunter wrote:

> I have a list of two-tuples containing x and y coords ... Given a new
> point x,y, I would like to find the point in the list closest to x,y.

You could do a for loop, and inside that loop keep a variable smallest_distance. I don't know much geometric mathematics, but I'm pretty sure you can use the Pythagorean theorem to find the length from (Xn, Yn) to (X, Y), using sine, cosine and such. And when the function is finished, it should return smallest_distance. Jul 18 '05 #3
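The brute-force loop described above can be made concrete. This is a minimal sketch (the function name `nearest_neighbor` is mine, not from the thread); note that the Pythagorean distance needs no sines or cosines, just `math.hypot`:

```python
import math

def nearest_neighbor(points, target):
    """Return (point, distance) for the point in `points` closest to
    `target`, by checking every point: O(N) per query."""
    best = None
    best_dist = float("inf")
    tx, ty = target
    for x, y in points:
        d = math.hypot(x - tx, y - ty)  # Euclidean distance
        if d < best_dist:
            best_dist = d
            best = (x, y)
    return best, best_dist
```

This is fine for a single query, but as discussed later in the thread, repeated queries on a growing list justify a smarter data structure.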

 P: n/a John Hunter wrote:

> One solution that comes to mind is to partition the space into quadrants
> and store the elements by quadrant. When a new element comes in, identify
> its quadrant and only search the appropriate quadrant for the nearest
> neighbor. This could be done recursively, a 2D binary search of sorts....

What happens when you put a point near or at the boundary of a quadrant, though? It's possible for the nearest neighbour to lie in a neighbouring quadrant... although you could search over those as well. However, the number of independent areas implied by use of the word 'quadrant' suggests that this would be the same as iterating over all space.... :-) -- Graham Lee Wadham College Oxford Jul 18 '05 #4
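One standard way around the boundary problem Graham raises is to bucket points into a uniform grid and search outward in square rings of cells, stopping only when no unvisited cell can possibly contain a closer point. A rough sketch, not from the thread (the class name and the default cell size are my own choices):

```python
import math
from collections import defaultdict

class GridIndex:
    """Bucket points into square cells of side `cell`, then answer
    nearest-neighbor queries by scanning rings of cells outward."""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def _key(self, x, y):
        return (int(math.floor(x / self.cell)),
                int(math.floor(y / self.cell)))

    def add(self, x, y):
        self.cells[self._key(x, y)].append((x, y))

    def nearest(self, x, y):
        if not self.cells:
            return None
        cx, cy = self._key(x, y)
        best, best_d = None, float("inf")
        r = 0
        while True:
            # Scan the square ring of cells at Chebyshev distance r.
            for i in range(cx - r, cx + r + 1):
                for j in range(cy - r, cy + r + 1):
                    if max(abs(i - cx), abs(j - cy)) != r:
                        continue
                    for px, py in self.cells.get((i, j), ()):
                        d = math.hypot(px - x, py - y)
                        if d < best_d:
                            best, best_d = (px, py), d
            # Any point in ring r+1 or beyond lies at least r*cell away,
            # so once best_d <= r*cell no farther ring can beat it.
            if best is not None and best_d <= r * self.cell:
                return best
            r += 1
```

The key point is the stopping test: finding a candidate in some ring is not enough, because a closer point can hide just across a cell boundary, exactly as described above.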

 P: n/a >>>>> "Ron" == Ron Adam writes:

Ron> I really don't know how you can make this faster. There might be a
Ron> library that has a distance-between-two-points function that could
Ron> speed it up.

If you only had to compare one point to all the other points, then the brute force approach -- check every point -- would work great. This is O(N), and I don't think you can beat it. The idea is that I will be repeatedly checking and adding points to the list, so it is worthwhile at the outset to set up a data structure which allows a more efficient search.

The analogy is a binary search in 1D. If you plan to repeatedly search a (possibly growing) list of numbers to see whether it contains some number, or to find the nearest neighbor to a number, it is worthwhile at the outset to put them in a data structure that allows a binary search. Setting up the initial data structure costs you some time, but subsequent searches are O(log2(N)). See Google for 'binary search' and the Python module bisect.

So roughly, for a list with 1,000,000 elements, the brute force approach requires a million comparisons per search. If the data is set up for binary search, only about 20 comparisons (log2 of a million) will be required. Well worth the effort if you need to do a lot of searches, as I do. John Hunter Jul 18 '05 #6
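John's 1D analogy can be sketched with the bisect module he mentions (the helper names here are hypothetical, not from the thread): one binary search locates the insertion point, and the nearest value must be one of the two values straddling it.

```python
import bisect

def nearest_1d(sorted_list, x):
    """Nearest neighbor in 1D: one binary search, then compare the two
    values that straddle x. O(log n) per query on a sorted list."""
    i = bisect.bisect_left(sorted_list, x)
    candidates = sorted_list[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda v: abs(v - x))

def add_number(sorted_list, x):
    """Insert x while keeping the list sorted, as new points arrive."""
    bisect.insort(sorted_list, x)
```

The open question in the thread is how to get the same effect in 2D, which is what the quadrant/kd-tree discussion is about.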

 P: n/a On Tue, 04 Nov 2003 08:25:36 -0800, David Eppstein wrote:

> In article , Peter Otten <__*******@web.de> wrote:
>> This is of course all premature optimization as the most promising
>> approach is to try hard to reduce the number of candidate points, as
>> David Eppstein seems to have done. But then, he could use complex
>> numbers, too.
> Well, yes, but then my code wouldn't work very well in dimensions
> higher than two...

I rearranged my first example to match the output of yours and used a random number seed to get identical results. Moving the square root to the return line of the find-shortest-distance function increased the speed of my routine about 20%. Then using the p*p form instead of p**2 added another 4%.

With printing turned on there is only a very small difference; of course printing is the bottleneck. Turning off printing resulted in the following. All are best of 3 runs.

1000 points:
Standard loop: 0:00:00.958192
Kdtree: 0:00:00.248096

Quite a difference. I'm not quite sure how kd-trees work (yet), but they can be worthwhile when working with large data sets. The standard loop seems to be better for small lists of < 100 points.

100 points:
Standard loop: 0:00:00.009966
kdtree: 0:00:00.015247

But for really large lists:

10000 points:
Standard loop: 0:01:39.246454
kdtree: 0:00:03.424873

Hehe.... no comparison. The number of calculations the standard loop does:

100 new points: 4,950 distance calculations.
1000 new points: 499,500 distance calculations.
10000 new points: 49,995,000 distance calculations.

And I don't know how to figure it for the kd-tree, but we can estimate it by using the ratio of the speeds.

10000 points: kdtree (3.42/99.25)*49995000 = 1722749.62 est. distance calculations.

There's probably a better way to do that. Python is fun to do this stuff with. Playing around like this with other languages is just too much trouble. _Ron Jul 18 '05 #7
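Ron's timings don't show how a kd-tree actually works, so here is a minimal 2D sketch (this is not the code Ron benchmarked; names and structure are my own): points are split on alternating x/y medians, and the search backtracks into the far subtree only when the splitting plane is closer than the current best match.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_kdtree(points, depth=0):
    """Split on alternating x/y medians; each node stores one point."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def kd_nearest(node, target, best=None):
    """Descend toward target, then backtrack into the far subtree only
    when the splitting plane is closer than the current best point."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or dist(point, target) < dist(best, target):
        best = point
    diff = target[axis] - point[axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = kd_nearest(near, target, best)
    if abs(diff) < dist(best, target):  # can the far side still win?
        best = kd_nearest(far, target, best)
    return best
```

The pruning test on the splitting plane is what turns the O(N) scan into a typically O(log N) search on reasonably distributed points, matching the speedups Ron measured for large lists.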

 P: n/a On Tue, 04 Nov 2003 17:13:58 +0100, Peter Otten <__*******@web.de> wrote:

> Alex Martelli wrote:
>> Andrew Dalke wrote:
>>> Ron Adam:
>>>
>>>     for point in p:
>>>         distance = math.sqrt((new_point[0]-point[0])**2 \
>>>                              +(new_point[1]-point[1])**2)
>>>
>>> I really don't know how you can make this faster. There might be a
>>> library that has a distance between two points function that could
>>> speed it up. An easy way is to move the math.sqrt call outside the
>>> loop, since sqrt(d1) < sqrt(d2) iff d1 < d2 (when d1,d2 >= 0).
>>
>> Hmmm, that's what math.hypot is for, isn't it...?
>>
>> [alex@lancelot Lib]$ timeit.py -c -s'import math; p=1.6,2.5; np=2.4,1.3' 'math.sqrt((np[0]-p[0])**2 + (np[1]-p[1])**2)'
>> 100000 loops, best of 3: 3 usec per loop
>> [alex@lancelot Lib]$ timeit.py -c -s'import math; p=1.6,2.5; np=2.4,1.3' 'math.hypot(np[0]-p[0], np[1]-p[1])'
>> 100000 loops, best of 3: 1.9 usec per loop
>>
>> Yes, omitting the math.sqrt gives the same speed as calling math.hypot,
>> and it's the classic solution to speed up minimum-distance problems. I
>> vaguely suspect you could shave some further fraction of a microsecond
>> by saving those differences as dx and dy and then computing dx*dx+dy*dy
>> -- since another classic tip is that a**2 is slower than a*a. Let's
>> see...:
>>
>> [alex@lancelot Lib]$ timeit.py -c -s'import math; p=1.6,2.5; np=2.4,1.3' 'dx=np[0]-p[0]; dy=np[1]-p[1]; disq=dx*dx+dy*dy'
>> 1000000 loops, best of 3: 1.39 usec per loop
>>
>> ...yep, another small enhancement. Ain't measuring _FUN_?-)
>
> Finally found an application for complex numbers:
>
> ...> timeit.py -s"p=1.6+2.5j; np=2.4+1.3j" "d=abs(p-np)"
> 1000000 loops, best of 3: 0.436 usec per loop
> ...> timeit.py -s"p=1.6,2.5; np=2.4,1.3" "dx=np[0]-p[0]; dy=np[1]-p[1]; d=dx*dx+dy*dy"
> 1000000 loops, best of 3: 1.15 usec per loop
>
> This is of course all premature optimization as the most promising
> approach is to try hard to reduce the number of candidate points, as
> David Eppstein seems to have done. But then, he could use complex
> numbers, too. Peter

I am new to timeit.py, but this is odd.

jim@grendel:~$ /usr/lib/python2.3/timeit.py -c 'p=1.6+2.5j; np=2.4+1.3j; d=abs(p-np)'
100000 loops, best of 3: 3.1 usec per loop

vs

jim@grendel:~$ /usr/lib/python2.3/timeit.py -c -s'import math; p=1.6+2.5j; np=2.4+1.3j; d=abs(p-np)'
10000000 loops, best of 3: 0.141 usec per loop

Is it because the builtin math functions are much slower? -- Jim Richardson http://www.eskimo.com/~warlock Televangelists: The Pro Wrestlers of Religion Jul 18 '05 #9
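The two tricks traded back and forth above -- comparing squared distances so the sqrt is skipped entirely, and storing points as complex numbers so abs() computes the distance in one operation -- can be sketched like this (function names are mine, not from the thread):

```python
def nearest_sq(points, target):
    """Find the nearest 2D point by comparing squared distances:
    sqrt is monotone, so the winner is unchanged, and dx*dx avoids
    both the sqrt and the slower dx**2 form."""
    def d2(p):
        dx, dy = p[0] - target[0], p[1] - target[1]
        return dx * dx + dy * dy
    return min(points, key=d2)

def nearest_complex(points, target):
    """With points stored as complex numbers, abs(p - target) is the
    Euclidean distance computed in a single C-level operation."""
    return min(points, key=lambda p: abs(p - target))
```

As Peter notes, the complex-number form is limited to two dimensions, which is exactly David Eppstein's objection quoted earlier.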

 P: n/a Hello John,

> Can anyone point me to some code or module that provides the appropriate
> data structures and algorithms to handle this task efficiently? The size
> of the list will likely be in the range of 10-1000 elements.

Boost (http://www.boost.org) has a graph library, and it connects well to Python via Boost.Python. HTH, Miki Jul 18 '05 #11

 P: n/a Jim Richardson wrote:

> I am new to timeit.py, but this is odd.
> jim@grendel:~$ /usr/lib/python2.3/timeit.py -c 'p=1.6+2.5j; np=2.4+1.3j; d=abs(p-np)'
> 100000 loops, best of 3: 3.1 usec per loop

The above is actually timed.

> jim@grendel:~$ /usr/lib/python2.3/timeit.py -c -s'import math; p=1.6+2.5j; np=2.4+1.3j; d=abs(p-np)'
> 10000000 loops, best of 3: 0.141 usec per loop

Here you put everything in the setup parameter. As the timed statement defaults to pass, you are essentially timing an empty loop :-(

> Is it because the builtin math functions are much slower?

To really get an abs() function from the math module you would have needed math.abs() or "from math import abs". But:

>>> "abs" in dir(math)
False

Not there, so we cannot compare. Peter Jul 18 '05 #12
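Peter's point about -s can be reproduced with the timeit module used programmatically (a sketch; absolute numbers will vary by machine): the setup string runs once and is not timed, while only the statement string runs inside the timed loop.

```python
import timeit

# Setup (the -s argument on the command line) runs once, untimed;
# only the statement is executed in the timed loop.
setup = "p = 1.6 + 2.5j; np = 2.4 + 1.3j"
stmt = "d = abs(p - np)"

elapsed = timeit.timeit(stmt, setup=setup, number=100000)

# Jim's second command put the whole computation into setup, leaving
# the default statement 'pass' -- so it timed an empty loop.
empty = timeit.timeit("pass", setup=setup + "; d = abs(p - np)",
                      number=100000)
```

Comparing `elapsed` against `empty` makes the mistake obvious: the second measurement reflects loop overhead only, not the cost of the complex abs().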