Hi all,
I have a collection of ordered numerical data in a list. The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
Here is some sample data:
data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40, 0.45, 0.35,
0.10]
In this data, some of the significant points include:
data[0]
data[2]
data[4]
data[6]
data[8]
data[9]
data[13]
data[14]
.....
How do I sort through this data and pull out these points of
significance?
Thanks for your help!
Erik
erikcw wrote:
I have a collection of ordered numerical data in a list. The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
....
How do I sort through this data and pull out these points of
significance?
Get a book on statistics. One idea is as follows. If you expect the points
to be centred around a single value, you can calculate the median or mean
of the points, calculate their standard deviation (aka spread), and remove
points which are more than N-times the standard deviation from the median.
Jeremy
--
Jeremy Sanders http://www.jeremysanders.net/
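A minimal sketch of that idea in plain Python; the cutoff N = 2 and the use of the median as the centre are assumptions chosen for illustration:

data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
        1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40,
        0.45, 0.35, 0.10]

N = 2                                   # arbitrary cutoff for illustration
mean = sum(data) / len(data)
std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
median = sorted(data)[len(data) // 2]

# points more than N standard deviations away from the median
extreme = [(i, x) for i, x in enumerate(data) if abs(x - median) > N * std]
print(extreme)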
"erikcw" wrote:
I have a collection of ordered numerical data in a list. The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
Here is some sample data:
data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40, 0.45, 0.35,
0.10]
silly solution:
for i in range(1, len(data) - 1):
    if data[i-1] < data[i] > data[i+1] or data[i-1] > data[i] < data[i+1]:
        print i
(the above doesn't handle the "edges", but that's easy to fix)
</F>
erikcw <er***********@gmail.com> wrote:
I have a collection of ordered numerical data in a list. The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
I am not sure what you mean by 'ordered' in this context. As
pointed out by Jeremy, you need to find an appropriate statistical test.
The appropriateness depends on how your data is (presumably) distributed
and what exactly you are trying to test. E.g. do the data points come from
different groups of some kind? Or are you just looking for extreme
values (outliers maybe?)?
So it's more of statistical question than a python one.
cu
Philipp
--
Dr. Philipp Pagel Tel. +49-8161-71 2131
Dept. of Genome Oriented Bioinformatics Fax. +49-8161-71 2186
Technical University of Munich http://mips.gsf.de/staff/pagel
erikcw wrote:
Hi all,
I have a collection of ordered numerical data in a list. The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
Here is some sample data:
data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40, 0.45, 0.35,
0.10]
In this data, some of the significant points include:
data[0]
data[2]
data[4]
data[6]
data[8]
data[9]
data[13]
data[14]
....
How do I sort through this data and pull out these points of
significance?
I think you are looking for "extrema":
def w3(items):
    items = iter(items)
    view = None, items.next(), items.next()
    for item in items:
        view = view[1:] + (item,)
        yield view

for i, (a, b, c) in enumerate(w3(data)):
    if a > b < c:
        print i + 1, "min", b
    elif a < b > c:
        print i + 1, "max", b
    else:
        print i + 1, "---", b
Peter
If the order doesn't matter, you can sort the data and remove x * 0.5 *
n where x is the proportion of numbers you want. If you have too many
similar values though, this falls down. I suggest you check out
quantiles in a good statistics book.
Alan.
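One reading of that quantile idea, as a rough sketch: sort a copy of the data and keep only the values in the two tails. The 10% tail fraction is an arbitrary assumption, and, as Alan notes, the original ordering (and hence the positions of the extremes) is lost.

data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
        1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40,
        0.45, 0.35, 0.10]

tail = 0.10                              # fraction to keep from each end
k = max(1, int(len(data) * tail))
ranked = sorted(data)
lows, highs = ranked[:k], ranked[-k:]
print(lows)
print(highs)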
erikcw wrote:
I have a collection of ordered numerical data in a list. The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
In calculus, you identify high and low points by looking where the
derivative changes its sign. When working with discrete samples, you can
look at the sign changes in finite differences:
>>> data = [...]
>>> diff = [data[i + 1] - data[i] for i in range(len(data) - 1)]
>>> map(str, diff)
['0.4', '0.1', '-0.2', '-0.01', '0.11', '0.5', '-0.2', '-0.2', '0.6',
'-0.1', '0.2', '0.1', '0.1', '-0.45', '0.15', '-0.3', '-0.2', '0.1',
'-0.4', '0.05', '-0.1', '-0.25']
The high points are those where diff changes from + to -, and the low
points are those where diff changes from - to +.
HTH,
--
Roberto Bonvallet
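A short sketch of the sign-change test Roberto describes, built on the same finite differences; treating a zero difference as "no change" is a simplifying assumption:

data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
        1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40,
        0.45, 0.35, 0.10]

diff = [data[i + 1] - data[i] for i in range(len(data) - 1)]

# a high at i+1 when the series rises into it and falls after it, and vice versa
highs = [i + 1 for i in range(len(diff) - 1) if diff[i] > 0 and diff[i + 1] < 0]
lows = [i + 1 for i in range(len(diff) - 1) if diff[i] < 0 and diff[i + 1] > 0]
print(highs)
print(lows)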
Jeremy Sanders <je*******************@jeremysanders.net> writes:
>How do I sort through this data and pull out these points of significance?
Get a book on statistics. One idea is as follows. If you expect the points
to be centred around a single value, you can calculate the median or mean
of the points, calculate their standard deviation (aka spread), and remove
points which are more than N-times the standard deviation from the median.
Standard deviation was the first thought that jumped to my mind
too. However, that's not what the OP is after. He seems to be looking for
points where the direction changes.
Ganesan
--
Ganesan Rajagopal
"erikcw" <er***********@gmail.comwrote:
I have a collection of ordered numerical data in a list. The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
I think you want a control chart. A good place to start might be http://en.wikipedia.org/wiki/Control_chart. Even if you don't actually
graph the data, understanding the math behind control charts might help you
with your analysis.
Wow. I think this is the first time I've actually used something I learned
by sitting through those stupid Six Sigma training classes :-)
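A minimal sketch of the control-chart idea: flag points outside mean +/- 3 standard deviations, the classic Shewhart limits. The factor of 3 follows the usual convention but is an assumption here, and real control charts add run rules that are not shown.

data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
        1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40,
        0.45, 0.35, 0.10]

mean = sum(data) / len(data)
std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
ucl, lcl = mean + 3 * std, mean - 3 * std    # upper / lower control limits

out_of_control = [(i, x) for i, x in enumerate(data) if x > ucl or x < lcl]
print(out_of_control)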
erikcw wrote:
Hi all,
I have a collection of ordered numerical data in a list.
Called a "time series" in statistics.
The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
Here is some sample data:
data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40, 0.45, 0.35,
0.10]
In this data, some of the significant points include:
data[0]
data[2]
data[4]
data[6]
data[8]
data[9]
data[13]
data[14]
....
How do I sort through this data and pull out these points of
significance?
The best place to ask about an algorithm for this is not
comp.lang.python -- maybe sci.stat.math would be better. Once you have
an algorithm, coding it in Python should not be difficult. I'd suggest
using the NumPy array rather than the native Python list, which is not
designed for crunching numbers.
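For instance, with NumPy the sign-change test for turning points becomes a couple of vectorised calls. This is only a sketch and assumes NumPy is installed:

import numpy as np

data = np.array([0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
                 1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40,
                 0.45, 0.35, 0.10])

d = np.diff(data)                            # first differences
turning = np.sign(d[:-1]) != np.sign(d[1:])  # direction flips between steps
print(np.nonzero(turning)[0] + 1)            # interior indices of the extrema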
erikcw wrote:
Hi all,
I have a collection of ordered numerical data in a list. The numbers
when plotted on a line chart make a low-high-low-high-high-low (random)
pattern. I need an algorithm to extract the "significant" high and low
points from this data.
Here is some sample data:
data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40, 0.45, 0.35,
0.10]
In this data, some of the significant points include:
data[0]
data[2]
data[4]
data[6]
data[8]
data[9]
data[13]
data[14]
....
How do I sort through this data and pull out these points of
significance?
It's obviously a kind of time series, and you are searching for a "moving_max(data, t, window) > data(t)" / "moving_min(data, t, window) < data(t)" condition: an extremum within a certain (time) window. And obviously your time window is as low as 2 or 3 or so.
Unfortunately a moving_max function is not yet in numpy and probably not achievable from other existing array functions. You have to write slow looping code.
Robert
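A slow but straightforward sketch of the windowed test Robert describes, using plain Python loops; the window of 2 is an assumption taken from his remark:

data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
        1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40,
        0.45, 0.35, 0.10]

window = 2
highs, lows = [], []
for i, x in enumerate(data):
    # a point is a windowed high (low) if it is the max (min) of its neighbourhood
    neighbourhood = data[max(0, i - window):i + window + 1]
    if x == max(neighbourhood):
        highs.append(i)
    if x == min(neighbourhood):
        lows.append(i)
print(highs)
print(lows)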
"robert" <no*****@no-spam-no-spam.invalidwrote in message
news:ej**********@news.albasani.net...
erikcw wrote:
>Hi all,
I have a collection of ordered numerical data in a list. The numbers when plotted on a line chart make a low-high-low-high-high-low (random) pattern. I need an algorithm to extract the "significant" high and low points from this data.
Here is some sample data: data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20, 1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40, 0.45, 0.35, 0.10]
In this data, some of the significant points include: data[0] data[2] data[4] data[6] data[8] data[9] data[13] data[14] ....
How do I sort through this data and pull out these points of significance?
Using zip and map, it's easy to compute first and second derivatives of a
time series of values. The first lambda computes
.... dang touchy keyboard!
Here is some sample data:
data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40, 0.45, 0.35,
0.10]
In this data, some of the significant points include:
data[0]
data[2]
data[4]
data[6]
data[8]
data[9]
data[13]
data[14]
Using the first derivative, and looking for sign changes, finds many of the
values you marked as "significant".
-- Paul
data = [0.10, 0.50, 0.60, 0.40, 0.39, 0.50, 1.00, 0.80, 0.60, 1.20,
1.10, 1.30, 1.40, 1.50, 1.05, 1.20, 0.90, 0.70, 0.80, 0.40, 0.45, 0.35,
0.10]
delta = lambda (x1, x2): x2 - x1
dy_dx = [0] + map(delta, zip(data, data[1:]))
d2y_dx2 = [0] + map(delta, zip(dy_dx, dy_dx[1:]))
sgnChange = lambda (x1, x2): x1 * x2 < 0
sigs = map(sgnChange, zip(dy_dx, dy_dx[1:]))
print [i for i, v in enumerate(sigs) if v]
[2, 4, 6, 8, 9, 10, 13, 14, 15, 17, 18, 19, 20]