
Maximum List size (item number) limit?

Dear Mr. Kern, and Members,

Thank you very much for the fast answer; my original question was
over-simplified.

My source code is appended below. It uses two text files (L.txt and
GC.txt) as input and merges them. Please find these two files here:
http://kristonvizi.hu/L.txt
http://kristonvizi.hu/GC.txt

Both L.txt and GC.txt contain 3000 rows. When run, the code stops
with this error message:

'The debugged program raised the exception IndexError "list index out of
range"
File: /home/kvjanos/file.py, Line: 91'

And I noticed that all the lists that should contain 3000 items
contain fewer, as follows:
NIR_mean_l = 1000 items
NIR_stdev_l = 1000 items
R_mean_l = 1000 items
R_stdev_l = 1000 items
G_mean_l = 999 items
G_stdev_l = 999 items
area_l = 999 items

NIR_mean_gc = 1000 items
NIR_stdev_gc = 1000 items
R_mean_gc = 1000 items
R_stdev_gc = 1000 items
G_mean_gc = 999 items
G_stdev_gc = 999 items
area_gc = 999 items

This is why I thought there was a limit on the number of items in a list.
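For what it's worth, a quick check in the interactive interpreter (any stock
CPython behaves this way) shows a list growing far past 999 items without
complaint, so the limit is apparently not in Python itself:

>>> xs = []
>>> for n in range(100000):
...     xs.append(n)
...
>>> len(xs)
100000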

Code that's failing:
#*******************************************

import string,sys,os,sets

# Open L, GC txt files and create merged file
inp_file_l = open('/home/kvjanos/L/L.txt')
inp_file_gc = open('/home/kvjanos/GC/GC.txt')
out_file = open('/home/kvjanos/L_GC_merged/merged.txt', 'w')

# Define L lists
NIR_mean_l = []
NIR_stdev_l = []
R_mean_l = []
R_stdev_l = []
G_mean_l = []
G_stdev_l = []
area_l = []

# Define GC lists
NIR_mean_gc = []
NIR_stdev_gc = []
R_mean_gc = []
R_stdev_gc = []
G_mean_gc = []
G_stdev_gc = []
area_gc = []
# Processing L file
line_no_l = 0  # Input L file line number
type_l = 1  # Input L file row type: 1 (row n), 2 (row n+1) or 3 (row n+2)

# Append L values to lists.
for line in inp_file_l.xreadlines():
    line_no_l = line_no_l + 1
    if line_no_l == 1:  # To skip the header row
        continue
    data_l = []  # An L row
    data_l = line.split()

    if type_l == 1:
        NIR_mean_l.append(data_l[2])  # Append 3rd item of the row to the list
        NIR_stdev_l.append(data_l[3])  # Append 4th item of the row to the list
        type_l = 2  # Change to row n+1
    else:
        if type_l == 2:
            R_mean_l.append(data_l[2])
            R_stdev_l.append(data_l[3])
            type_l = 3
        else:
            G_mean_l.append(data_l[2])
            G_stdev_l.append(data_l[3])
            area_l.append(data_l[1])
            type_l = 1
inp_file_l.close()
# Processing GC file, the same way as L file above
line_no_gc = 0
type_gc = 1

for line in inp_file_gc.xreadlines():
    line_no_gc = line_no_gc + 1
    if line_no_gc == 1:
        continue
    data_gc = []
    data_gc = line.split()

    if type_gc == 1:
        NIR_mean_gc.append(data_gc[2])
        NIR_stdev_gc.append(data_gc[3])
        type_gc = 2
    else:
        if type_gc == 2:
            R_mean_gc.append(data_gc[2])
            R_stdev_gc.append(data_gc[3])
            type_gc = 3
        else:
            G_mean_gc.append(data_gc[2])
            G_stdev_gc.append(data_gc[3])
            area_gc.append(data_gc[1])
            type_gc = 1
inp_file_gc.close()

#############################

# Create output rows from lists
for i in range(len(NIR_mean_l)):  # Process all input rows

    # Filters L rows by 'area_l' values
    area_l_rossz = string.atof(area_l[i])
    if area_l_rossz < 10000:
        continue
    elif area_l_rossz > 100000:
        continue

    # Filters GC rows by 'area_gc' values
    area_gc_rossz = string.atof(area_gc[i])
    if area_gc_rossz < 10000:
        continue
    elif area_gc_rossz > 200000:
        continue

    # Create output line and write out
    newline = []
    newline.append(str(i + 1))
    # L
    newline.append(NIR_mean_l[i])
    newline.append(NIR_stdev_l[i])
    newline.append(R_mean_l[i])
    newline.append(R_stdev_l[i])
    newline.append(G_mean_l[i])
    newline.append(G_stdev_l[i])
    newline.append(area_l[i])
    # GC
    newline.append(NIR_mean_gc[i])
    newline.append(NIR_stdev_gc[i])
    newline.append(R_mean_gc[i])
    newline.append(R_stdev_gc[i])
    newline.append(G_mean_gc[i])
    newline.append(G_stdev_gc[i])
    newline.append(area_gc[i])
    outline = string.join(newline, '\t') + '\n'
    out_file.writelines(outline)

out_file.close()

#*******************************************
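For reference, the item counts above can be reproduced by printing each
list's length just before the merge loop; a diagnostic sketch (not part of
the original run):

# Diagnostic only: report each list's length before merging.
for name, lst in [('NIR_mean_l', NIR_mean_l), ('G_mean_l', G_mean_l),
                  ('area_l', area_l), ('NIR_mean_gc', NIR_mean_gc),
                  ('G_mean_gc', G_mean_gc), ('area_gc', area_gc)]:
    print name, len(lst)
# Any difference between these counts is enough to raise the
# IndexError reported at line 91.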

Thnx again,
Janos

Kriston-Vizi Janos wrote:
> Dear Members,
>
> Is there any possibility to use more than 999 items in a list?

Yes. Of course.

> Cannot append more than 999 items.

Post the code that's failing for you and the error message it generates.

And please read http://www.catb.org/~esr/faqs/smart-questions.html . It will
help us help you.

> The same problem with 'array' type. Is it a result of a default
> setting maybe?

No.

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter


Jan 11 '06 #1
3 Replies


Kriston-Vizi Janos wrote:
> Dear Mr. Kern, and Members,
>
> Thank you very much for the fast answer; my original question was
> over-simplified.
>
> My source code is appended below. It uses two text files (L.txt and
> GC.txt) as input and merges them.
>
> Both L.txt and GC.txt contain 3000 rows. When run, the code stops
> with this error message:
>
> 'The debugged program raised the exception IndexError "list index out of
> range"
> File: /home/kvjanos/file.py, Line: 91'
>
> And I noticed that all the lists that should contain 3000 items
> contain fewer, as follows:
> NIR_mean_l = 1000 items
> [...]
> Code that's failing:
> # Processing L file
> line_no_l = 0  # Input L file line number
> type_l = 1  # Input L file row type: 1 (row n), 2 (row n+1) or 3 (row n+2)
> # Append L values to lists.
> for line in inp_file_l.xreadlines():
>     line_no_l = line_no_l + 1
>     if line_no_l == 1:  # To skip the header row
>         continue
>     data_l = []  # An L row
>     data_l = line.split()
>     if type_l == 1:
>         NIR_mean_l.append(data_l[2])  # Append 3rd item of the row to the list
>         NIR_stdev_l.append(data_l[3])  # Append 4th item of the row to the list
>         type_l = 2  # Change to row n+1
>     else:
>         if type_l == 2:
>             R_mean_l.append(data_l[2])
>             R_stdev_l.append(data_l[3])
>             type_l = 3
>         else:
>             G_mean_l.append(data_l[2])
>             G_stdev_l.append(data_l[3])
>             area_l.append(data_l[1])
>             type_l = 1
> inp_file_l.close()


Looking at the data files, it seems there is no header row to skip.
Skipping the first row seems to cause the discrepancy in vector sizes,
which leads to the IndexError. Should NIR_mean_l[0] be 203 or 25?

As the comments in your code suggest, the code adds values to
NIR_mean_l only from lines 1, 4, 7, ...
R_mean_l only from lines 2, 5, 8, ...
G_mean_l only from lines 3, 6, 9, ...
Try it with 12 lines of input data and see how each vector holds
4 elements before filtering/writing.
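A minimal sketch of that experiment (names shortened, data faked with 12
identical lines; not code from the original script):

# 12 fake data lines, 4 whitespace-separated fields each.
lines = ['0 111 222 333'] * 12

nir_mean, r_mean, g_mean, area = [], [], [], []
row_type = 1
for line in lines:
    data = line.split()
    if row_type == 1:            # row n
        nir_mean.append(data[2])
        row_type = 2
    elif row_type == 2:          # row n+1
        r_mean.append(data[2])
        row_type = 3
    else:                        # row n+2
        g_mean.append(data[2])
        area.append(data[1])
        row_type = 1

print len(nir_mean), len(r_mean), len(g_mean)   # prints: 4 4 4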
Jan 11 '06 #2

Juho Schultz wrote:
> NIR_mean_l only from lines 1, 4, 7, ...
> R_mean_l only from lines 2, 5, 8, ...
> G_mean_l only from lines 3, 6, 9, ...

This may be the problem, but the behaviour may also be intended.
The following code is shorter and, I hope, cleaner; maybe
Kriston-Vizi Janos can use it to fix his problem.

class ReadData:
    def __init__(self, filename):
        self.NIR_mean = []
        self.NIR_stdev = []
        self.R_mean = []
        self.R_stdev = []
        self.G_mean = []
        self.G_stdev = []
        self.area = []

        for line in file(filename):
            row = line.split()
            self.area.append(row[1])
            self.NIR_mean.append(row[2])
            self.NIR_stdev.append(row[3])
            self.R_mean.append(row[4])
            self.R_stdev.append(row[5])
            self.G_mean.append(row[6])
            self.G_stdev.append(row[7])

# -------------------------------
L = ReadData('L.txt')
GC = ReadData('GC.txt')
out_file = file('merged.txt', 'w')

# Create output rows from lists
for i in xrange(len(L.NIR_mean)):  # Process all input rows

    # Filter L and GC rows by area values
    if (10000 <= float(L.area[i]) <= 100000) and \
       (10000 <= float(GC.area[i]) <= 200000):

        # Create output line and write out
        newline = [str(i + 1)]
        for obj in L, GC:
            newline.extend([obj.NIR_mean[i], obj.NIR_stdev[i],
                            obj.R_mean[i], obj.R_stdev[i],
                            obj.G_mean[i], obj.G_stdev[i],
                            obj.area[i]])
        outline = '\t'.join(newline) + '\n'
        out_file.write(outline)

out_file.close()
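One defensive touch worth adding (my suggestion, not in the code above):
check that the two files produced the same number of records before merging,
so a mismatch fails loudly instead of raising IndexError partway through
the loop:

if len(L.NIR_mean) != len(GC.NIR_mean):
    raise ValueError('L.txt and GC.txt differ in record count: '
                     '%d vs %d' % (len(L.NIR_mean), len(GC.NIR_mean)))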

Jan 11 '06 #3

be************@lycos.com wrote:
> Juho Schultz wrote:
>> NIR_mean_l only from lines 1, 4, 7, ...
>> R_mean_l only from lines 2, 5, 8, ...
>> G_mean_l only from lines 3, 6, 9, ...
>
> This may be the problem, but the behaviour may also be intended.

I guess he is expecting 3000 elements, not 1000, as he wrote:

"And I noticed that all the lists that should contain 3000 items
contain fewer, as follows:
NIR_mean_l = 1000 items"
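If every record really does span three physical lines, reading the rows in
groups of three keeps all seven lists in lockstep however the header question
is settled. A sketch of that approach for the L file (my rewrite, reusing the
column positions from the original code):

inp_file_l = open('/home/kvjanos/L/L.txt')
rows = [line.split() for line in inp_file_l if line.strip()]
inp_file_l.close()

NIR_mean_l, NIR_stdev_l = [], []
R_mean_l, R_stdev_l = [], []
G_mean_l, G_stdev_l, area_l = [], [], []

# Walk the rows three at a time: n, n+1 and n+2 form one record.
for n in range(0, len(rows) - len(rows) % 3, 3):
    NIR_mean_l.append(rows[n][2])
    NIR_stdev_l.append(rows[n][3])
    R_mean_l.append(rows[n + 1][2])
    R_stdev_l.append(rows[n + 1][3])
    G_mean_l.append(rows[n + 2][2])
    G_stdev_l.append(rows[n + 2][3])
    area_l.append(rows[n + 2][1])

# Every list now holds exactly len(rows) // 3 items.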
Jan 11 '06 #4
