Bytes | Software Development & Data Engineering Community
Maximum List size (item number) limit?

Dear Mr. Kern and Members,

Thank you very much for the fast answer; my original question was
over-simplified.

My source code is appended below. It uses two text files (L.txt and
GC.txt) as input and merges them. Please find these two files here:
http://kristonvizi.hu/L.txt
http://kristonvizi.hu/GC.txt

Both L.txt and GC.txt contain 3000 rows. When run, the code stops
with the error message:

'The debugged program raised the exception IndexError "list index out of
range"
File: /home/kvjanos/file.py, Line: 91'

And I noticed that all the lists that should contain 3000 items
contain fewer, as follows:
NIR_mean_l = 1000 items
NIR_stdev_l = 1000 items
R_mean_l = 1000 items
R_stdev_l = 1000 items
G_mean_l = 999 items
G_stdev_l = 999 items
area_l = 999 items

NIR_mean_gc = 1000 items
NIR_stdev_gc = 1000 items
R_mean_gc = 1000 items
R_stdev_gc = 1000 items
G_mean_gc = 999 items
G_stdev_gc = 999 items
area_gc = 999 items

This is why I thought there was a limit on the number of items in a list.
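[Editor's note: a quick sanity check, not part of the original files, shows that a plain Python list has no built-in 999-item limit; it is bounded only by available memory:]

```python
# Grow a list far past the suspected 999-item "limit".
big = []
for n in range(100000):
    big.append(n)
print(len(big))  # prints 100000
```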

Code that's failing:
#*******************************************

import string,sys,os,sets

# Open L, GC txt files and create merged file
inp_file_l = open('/home/kvjanos/L/L.txt')
inp_file_gc = open('/home/kvjanos/GC/GC.txt')
out_file = open('/home/kvjanos/L_GC_merged/merged.txt', 'w')

# Define L lists
NIR_mean_l = []
NIR_stdev_l = []
R_mean_l = []
R_stdev_l = []
G_mean_l = []
G_stdev_l = []
area_l = []

# Define GC lists
NIR_mean_gc = []
NIR_stdev_gc = []
R_mean_gc = []
R_stdev_gc = []
G_mean_gc = []
G_stdev_gc = []
area_gc = []

# Processing L file
line_no_l = 0  # Input L file line number
type_l = 1  # Input L file row type: 1 (row n), 2 (row n+1) or 3 (row n+2)

# Append L values to lists.
for line in inp_file_l.xreadlines():
    line_no_l = line_no_l + 1
    if line_no_l == 1:  # To skip the header row
        continue
    data_l = line.split()  # An L row

    if type_l == 1:
        NIR_mean_l.append(data_l[2])  # Append 3rd item of the row to the list
        NIR_stdev_l.append(data_l[3])  # Append 4th item of the row to the list
        type_l = 2  # Change to row n+1
    elif type_l == 2:
        R_mean_l.append(data_l[2])
        R_stdev_l.append(data_l[3])
        type_l = 3
    else:
        G_mean_l.append(data_l[2])
        G_stdev_l.append(data_l[3])
        area_l.append(data_l[1])
        type_l = 1
inp_file_l.close()

# Processing GC file, the same way as the L file above
line_no_gc = 0
type_gc = 1

for line in inp_file_gc.xreadlines():
    line_no_gc = line_no_gc + 1
    if line_no_gc == 1:
        continue
    data_gc = line.split()

    if type_gc == 1:
        NIR_mean_gc.append(data_gc[2])
        NIR_stdev_gc.append(data_gc[3])
        type_gc = 2
    elif type_gc == 2:
        R_mean_gc.append(data_gc[2])
        R_stdev_gc.append(data_gc[3])
        type_gc = 3
    else:
        G_mean_gc.append(data_gc[2])
        G_stdev_gc.append(data_gc[3])
        area_gc.append(data_gc[1])
        type_gc = 1
inp_file_gc.close()

#############################

# Create output rows from lists
for i in range(len(NIR_mean_l)):  # Process all input rows

    # Filter L rows by 'area_l' values
    area_l_rossz = string.atof(area_l[i])
    if area_l_rossz < 10000 or area_l_rossz > 100000:
        continue

    # Filter GC rows by 'area_gc' values
    area_gc_rossz = string.atof(area_gc[i])
    if area_gc_rossz < 10000 or area_gc_rossz > 200000:
        continue

    # Create output line and write out
    newline = []
    newline.append(str(i + 1))
    # L
    newline.append(NIR_mean_l[i])
    newline.append(NIR_stdev_l[i])
    newline.append(R_mean_l[i])
    newline.append(R_stdev_l[i])
    newline.append(G_mean_l[i])
    newline.append(G_stdev_l[i])
    newline.append(area_l[i])
    # GC
    newline.append(NIR_mean_gc[i])
    newline.append(NIR_stdev_gc[i])
    newline.append(R_mean_gc[i])
    newline.append(R_stdev_gc[i])
    newline.append(G_mean_gc[i])
    newline.append(G_stdev_gc[i])
    newline.append(area_gc[i])
    outline = string.join(newline, '\t') + '\n'
    out_file.write(outline)

out_file.close()

#*******************************************
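[Editor's note: the observed sizes (1000/1000/999 per file) are exactly what the loop above produces: after skipping line 1 as a header, the remaining 2999 lines are dealt round-robin across three row types. A minimal sketch, assuming 3000-line inputs as stated in the post:]

```python
# Simulate the reading loop: skip line 1, deal the remaining
# 2999 lines round-robin into row types 1, 2, 3.
counts = {1: 0, 2: 0, 3: 0}
row_type = 1
for line_no in range(1, 3001):
    if line_no == 1:  # header row skipped
        continue
    counts[row_type] += 1
    row_type = row_type % 3 + 1  # 1 -> 2 -> 3 -> 1
print(counts)  # {1: 1000, 2: 1000, 3: 999}
```

This matches the reported list sizes (NIR_*: 1000, R_*: 1000, G_*/area: 999), so no list-size limit is involved; three lists are simply sharing the rows.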

Thnx again,
Janos

Kriston-Vizi Janos wrote:
> Dear Members,
>
> Is there any possibility to use more than 999 items in a list?

Yes. Of course.

> Cannot append more than 999 items.

Post the code that's failing for you and the error message it generates.

And please read http://www.catb.org/~esr/faqs/smart-questions.html . It will
help us help you.

> The same problem with 'array' type. Is it a result of a default setting
> maybe?

No.

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter


Jan 11 '06 #1
Kriston-Vizi Janos wrote:
> My source code is appended below. It uses two text files (L.txt and
> GC.txt) as input and merges them.
>
> Both L.txt and GC.txt contain 3000 rows. When run, the code stops
> with the error message:
>
> 'The debugged program raised the exception IndexError "list index out of
> range"
> File: /home/kvjanos/file.py, Line: 91'
>
> And I noticed that all the lists that should contain 3000 items
> contain fewer, as follows:
> NIR_mean_l = 1000 items


Looking at the data files, it seems there is no header row to skip.
Skipping the first row appears to cause the discrepancy in vector sizes,
which leads to the IndexError. Should NIR_mean_l[0] be 203 or 25?

As the comments in your code suggest, the code adds values to
NIR_mean_l only from lines 1, 4, 7, ...
R_mean_l only from lines 2, 5, 8, ...
G_mean_l only from lines 3, 6, 9, ...
Try it with 12 lines of input data and see how the vectors
have only 4 elements each before filtering/writing.
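[Editor's note: the 12-line experiment suggested above can be sketched directly; the helper `deal` and its inputs are made up for illustration:]

```python
# Deal 12 hypothetical lines round-robin into three lists, with and
# without skipping the first line as a "header".
lines = ["row %d" % n for n in range(1, 13)]

def deal(lines, skip_header):
    groups = ([], [], [])
    row_type = 0
    for i, line in enumerate(lines):
        if skip_header and i == 0:
            continue
        groups[row_type].append(line)
        row_type = (row_type + 1) % 3
    return [len(g) for g in groups]

print(deal(lines, skip_header=False))  # [4, 4, 4]
print(deal(lines, skip_header=True))   # [4, 4, 3] -- the third list runs short
```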
Jan 11 '06 #2
Juho Schultz wrote:
> NIR_mean_l only from lines 1, 4, 7, ...
> R_mean_l only from lines 2, 5, 8, ...
> G_mean_l only from lines 3, 6, 9, ...

This can be the problem, but it can also be intended.
The following code is shorter and, I hope, cleaner; with it, maybe
Kriston-Vizi Janos can fix his problem.

class ReadData:
    def __init__(self, filename):
        self.NIR_mean = []
        self.NIR_stdev = []
        self.R_mean = []
        self.R_stdev = []
        self.G_mean = []
        self.G_stdev = []
        self.area = []

        for line in file(filename):
            row = line.split()
            self.area.append(row[1])
            self.NIR_mean.append(row[2])
            self.NIR_stdev.append(row[3])
            self.R_mean.append(row[4])
            self.R_stdev.append(row[5])
            self.G_mean.append(row[6])
            self.G_stdev.append(row[7])

# -------------------------------
L = ReadData('L.txt')
GC = ReadData('GC.txt')
out_file = file('merged.txt', 'w')

# Create output rows from lists
for i in xrange(len(L.NIR_mean)):  # Process all input rows

    # Filter L and GC rows by area values (same bounds as the original code)
    if (10000 <= float(L.area[i]) <= 100000) and \
       (10000 <= float(GC.area[i]) <= 200000):

        # Create output line and write out
        newline = [str(i + 1)]
        for obj in L, GC:
            newline.extend([obj.NIR_mean[i], obj.NIR_stdev[i],
                            obj.R_mean[i], obj.R_stdev[i],
                            obj.G_mean[i], obj.G_stdev[i],
                            obj.area[i]])
        outline = '\t'.join(newline) + '\n'
        out_file.write(outline)

out_file.close()
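[Editor's note: the chained comparisons above keep a row only when both area values fall inside their ranges. A self-contained sketch with made-up area values, using the bounds from the original post (10000-100000 for L, 10000-200000 for GC):]

```python
# Hypothetical area values to illustrate the combined filter.
l_areas = [5000.0, 20000.0, 150000.0, 90000.0]
gc_areas = [30000.0, 25000.0, 40000.0, 50000.0]

# Keep only indices where both areas pass their range checks.
kept = [i for i, (a, b) in enumerate(zip(l_areas, gc_areas))
        if 10000 <= a <= 100000 and 10000 <= b <= 200000]
print(kept)  # [1, 3]
```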

Jan 11 '06 #3
be************@lycos.com wrote:
> Juho Schultz wrote:
> > NIR_mean_l only from lines 1, 4, 7, ...
> > R_mean_l only from lines 2, 5, 8, ...
> > G_mean_l only from lines 3, 6, 9, ...
>
> This can be the problem, but it can also be intended.

I guess he is expecting 3000 elements, not 1000, as he wrote:

"And I noticed that all the lists that should contain 3000 items
contain fewer, as follows:
NIR_mean_l = 1000 items"
Jan 11 '06 #4

This thread has been closed and replies have been disabled. Please start a new discussion.
