Bytes | Software Development & Data Engineering Community

getting a submatrix of all true


I have a largish data set (1000 observations x 100 floating point
variables), and some of the of the data are missing. I want to try a
variety of clustering, neural network, etc, algorithms on the data,
and to keep life simple I want to reduce the dimensions of the matrix
so that I have no missing values, since not all the algorithms are
able to handle them and there is sufficient redundancy in the
variables that I can afford to lose some.

I am currently using a hack that works, but it makes me wonder if
there is an optimal solution. I define optimal as the removal of rows
and columns such that there are no missing values and
max(numRows*numCols).

My current approach is to drop rows (observations) that have more than
some prespecified number of missing variables, and then drop the
columns (variables) of the reduced data set that have any missing
values. I chose the threshold for dropping a row by eyeballing the
distribution of the number of missing variables per observation, picking a
number at the low end of the distribution, and dropping the rows that
exceed the threshold.
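In modern Python (plain lists, no Numeric), the threshold hack reads roughly like this; the threshold of 2 is an arbitrary stand-in for the eyeballed value:

```python
# Rough sketch of the threshold approach described above.
# X marks missing values with 1; the threshold is a stand-in for a
# value chosen by eyeballing the per-observation missing counts.
X = [[1, 0, 0, 0, 1],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0]]

threshold = 2
rows = [r for r in X if sum(r) < threshold]           # drop heavily-missing rows
keep = [c for c in range(len(rows[0]))
        if all(row[c] == 0 for row in rows)]          # keep only all-clean columns
clean = [[row[c] for c in keep] for row in rows]
print(len(clean), 'x', len(clean[0]))                 # 4 x 4
```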

Another way of formulating the question: for a sparse boolean matrix
(sparse on True), what is the optimal way to remove rows and columns
so that the total number of elements in the matrix is maximal and
there are no True values left.
Example:

0 0 0
0 0 0      candidate submatrix has 12 elements
0 0 0
0 0 0

1 0 0 0 1
0 0 0 0 0
0 0 0 0 0  candidate submatrix has 15 elements (the three middle rows)
0 0 0 0 0
0 0 1 0 0

0 0
0 0        candidate submatrix has 8 elements
0 0
0 0

I want to programmatically extract the 15-element matrix.

Following the approach described above, I get the desired answer in
the example below, though this is a hack solution and I have the
feeling there is a better one.

from Numeric import nonzero, array, take, sum

X = array([[1, 0, 0, 0, 1],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0]])

goodObsInd = nonzero(sum(X,1)<2)   # observations with < 2 missing variables
X = take(X, goodObsInd)           # drop the bad

goodVarInd = nonzero(sum(X)==0)   # variables with no missing data
X = take(X, goodVarInd, 1)        # drop the bad variables

print X
John Hunter

Jul 18 '05 #1
On Wed, 02 Jul 2003 14:16:57 -0500, John Hunter <jd******@ace.bsd.uchicago.edu> wrote:

I have a largish data set (1000 observations x 100 floating point
variables), and some of the of the data are missing. I want to try a
variety of clustering, neural network, etc, algorithms on the data,
and to keep life simple I want to reduce the dimensions of the matrix
so that I have no missing values, since not all the algorithms are
able to handle them and there is sufficient redundancy in the
variables that I can afford to lose some.

I am currently using a hack that works, but it makes me wonder if
there is an optimal solution. I define optimal as the removal of rows
and columns such that there are no missing values and
max(numRows*numCols).

My current approach is to drop rows (observations) that have more than
some prespecified number of missing variables, and then drop the
columns (variables) of the reduced data set that have any missing
values. I chose the threshold for dropping a row by eyeballing the
distribution of number of missing variables per observation, pick a
number on the low end of the distribution, and dropping the rows that
exceed the threshold.

Another way of formulating the question: for a sparse boolean matrix
(sparse on True), what is the optimal way to remove rows and columns
so that the total number of elements in the matrix is maximal and
there are no True values left.
Example:

0 0 0
0 0 0      candidate submatrix has 12 elements
0 0 0
0 0 0

1 0 0 0 1
0 0 0 0 0
0 0 0 0 0  candidate submatrix has 15 elements (the three middle rows)
0 0 0 0 0
0 0 1 0 0

0 0
0 0        candidate submatrix has 8 elements
0 0
0 0

I want to programmatically extract the 15-element matrix.
If I understand your original optimality definition, that would be suboptimal.
I.e., how about the 16-element matrix? (x's mark the corresponding zeroes)

1 0 0 0 1
x x 0 x x
x x 0 x x
x x 0 x x
x x 1 x x

Or do they have to be adjacent?

Following the approach described above, I get the desired answer in
the example below, though this is a hack solution and I have the
feeling there is a better one.

from Numeric import nonzero, array, take, sum

X = array([[1, 0, 0, 0, 1],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0]])

goodObsInd = nonzero(sum(X,1)<2)   # observations with < 2 missing variables
X = take(X, goodObsInd)           # drop the bad

goodVarInd = nonzero(sum(X)==0)   # variables with no missing data
X = take(X, goodVarInd, 1)        # drop the bad variables

print X


Brute force seems to work on this little example. Maybe it can
be memoized and optimized and/or whatever to handle your larger matrix fast enough?

====< submatrix.py >================================
X = [
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0]
]

def optimrcs(mx, rows=[], cols=[]):
    if not rows or not cols: return 0, [], []
    maxscore = 0
    for r in rows:
        if not mx[r].count(1): continue
        for c in cols:
            if not mx[r][c]: continue
            # eval column submatrix vs row submatrix
            colsxthis = cols[:]; colsxthis.remove(c)
            colscore, rset, cset = optimrcs(mx, rows, colsxthis)
            if colscore > maxscore:
                maxscore, maxrows, maxcols = colscore, rset, cset
            rowsxthis = rows[:]; rowsxthis.remove(r)
            rowscore, rset, cset = optimrcs(mx, rowsxthis, cols)
            if rowscore >= maxscore:
                maxscore, maxrows, maxcols = rowscore, rset, cset
    if not maxscore:
        return len(rows)*len(cols), rows, cols
    return maxscore, maxrows, maxcols

if __name__ == '__main__':
    score, rowsel, colsel = optimrcs(X, range(5), range(5))
    print 'Score = %s, rows = %s, cols = %s' % (score, rowsel, colsel)
    print
    for r in range(5):
        row = X[r]
        for c in range(5):
            if r in rowsel and c in colsel:
                print '<%s>' % row[c],
            else:
                print ' %s ' % row[c],
        print
====================================================
Running it results thus:

[21:24] C:\pywk\clp>submatrix.py
Score = 16, rows = [1, 2, 3, 4], cols = [0, 1, 3, 4]

1 0 0 0 1
<0> <0> 0 <0> <0>
<0> <0> 0 <0> <0>
<0> <0> 0 <0> <0>
<0> <0> 1 <0> <0>

Regards,
Bengt Richter
Jul 18 '05 #2

"John Hunter" <jd******@ace.b sd.uchicago.edu > wrote in message
news:ma******** *************** ***********@pyt hon.org...

I have a largish data set (1000 observations x 100 floating point
variables), and some of the of the data are missing.
All too typical -- missing data are the bane of statistics.
I want to try a
variety of clustering, neural network, etc, algorithms on the data,
and to keep life simple I want to reduce the dimensions of the matrix so that I have no missing values, since not all the algorithms are
able to handle them and there is sufficient redundancy in the
variables that I can afford to lose some.


Statisticians have tried a variety of approaches. Googling for
'statistics "missing data"' will give you some leads if you want.

Terry J. Reedy
Jul 18 '05 #3
>>>>> "Bengt" == Bengt Richter <bo**@oz.net> writes:
Bengt> If I understand your original optimality definition, that
Bengt> would be suboptimal. I.e., how about the 16-element
Bengt> matrix? (x's mark the corresponding zeroes)

Bengt> Or do they have to be adjacent?

No, in general they won't be. I missed that one. Just goes to show
that my "solution" is not one, which I knew. But it does efficiently
deliver good solutions, where good means large but not optimal.

Bengt> Brute force seems to work on this little example. Maybe it
Bengt> can be memoized and optimized and/or whatever to handle
Bengt> your larger matrix fast enough?

Thanks for the example. Unfortunately, it is too slow even for
moderate size matrices (30,10). I've been running it for over two
hours for a 30x10 matrix and it hasn't finished. And my data are
1000x100!

Here is a little code to generate larger matrices for testing....

from Numeric import zeros, array, Int, put, reshape
from RandomArray import uniform

numRows, numCols = 30, 10
numMissing = int(0.05*numRows*numCols)   # 5% missing

X = zeros((numRows*numCols,))
ind = uniform(0, numRows*numCols, (numMissing,)).astype(Int)

put(X, ind, 1)
X = reshape(X, (numRows, numCols))

# turn it into a list for your code....
X = [ [val for val in row] for row in X]
for row in X: print row

Last night, I began to formulate the problem as a logic statement,
hoping this would give me an idea of how to proceed. But no progress
yet. I have come to the conclusion that with P ones, brute force
requires 2^P combinations. With 1000x100 and 5% missing, that gives
me 2^5000. Not good.
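One mitigating observation, for what it's worth: once the kept rows are fixed, the best columns are forced (exactly the columns with no 1 in any kept row), so an exact search only has to walk the 2^numRows row subsets rather than 2^P cell choices. Still hopeless at 1000 rows, but here is a sketch (my own code, not from the thread) that is exact for small matrices:

```python
from itertools import combinations

def best_submatrix(X):
    """Exact max-area all-zero submatrix by exhausting row subsets."""
    nrows, ncols = len(X), len(X[0])
    best = (0, (), ())
    for k in range(nrows, 0, -1):          # try bigger row sets first
        for rows in combinations(range(nrows), k):
            # columns are forced: any column with a 1 in a kept row is out
            cols = tuple(c for c in range(ncols)
                         if all(X[r][c] == 0 for r in rows))
            score = len(rows) * len(cols)
            if score > best[0]:
                best = (score, rows, cols)
    return best

X = [[1, 0, 0, 0, 1],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0]]
print(best_submatrix(X))   # (16, (1, 2, 3, 4), (0, 1, 3, 4))
```

On the 5x5 example this recovers the same 16-element answer as the brute-force recursion, in 2^5 subsets instead of 2^25.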

Thanks for your suggestion, though. If you have any more thoughts,
let me know.

John Hunter


Jul 18 '05 #6
>>>>> "Terry" == Terry Reedy <tj*****@udel.e du> writes:
Terry> Statisticians have tried a variety of approaches. Googling
Terry> for ' statistics "missing data" 'will give you some leads
Terry> if you want.

I have done some searching. I'm familiar with the common methods
(delete every row that contains any missing, replace missing via mean
or regression or something clever) but haven't seen any discussion of
dropping variables and observations together to yield data sets with
no missing values. Have you seen something like this?

John Hunter

Jul 18 '05 #7

"John Hunter" <jd******@ace.b sd.uchicago.edu > wrote in message
news:ma******** *************** **********@pyth on.org...
>> "Terry" == Terry Reedy <tj*****@udel.e du> writes:


Terry> Statisticians have tried a variety of approaches.

Googling Terry> for ' statistics "missing data" 'will give you some leads
Terry> if you want.

I have done some searching. I'm familiar with the common methods
(delete every row that contains any missing, replace missing via mean or regression or something clever) but haven't seen any discussion of dropping variables and observations together to yield data sets with
no missing values. Have you seen something like this?


There are also calculation methods for at least some analyses that
allow for missing data. One of the Google hits is for the book
Statistical Analysis with Missing Data. I have not seen it, but it is
not unique.

As I hinted, there are no really nice solutions to missing data. I
have done both row and column deletion. Sometimes I have done
multiple analyses with different deletion strategies: once with enough
vars deleted so all or most cases are complete, and again with enough
cases deleted so that all or most vars are complete.

I would start by counting (with a program) the number of missing
values for each row and then constructing the frequency distribution
thereof. Then the same for the columns, with the addition of a
correlation table or tables.

One thing one can do with vars is to combine some to make a composite
measure. For instance, if three variables more-or-less measure the
same thing, one can combine (perhaps by the mean of those present) to
make one variable that is present if any of the three are, so it is
only missing for cases (rows) that are missing all three. This type
of work requires that you look at the variables and consider their
meaning, rather than just feeding them into a blind procedure that
considers all vars to be the same.
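A tiny sketch of that composite idea (names and data invented for illustration): take the mean of whichever of the related variables are present, so the composite is missing only when all of them are.

```python
# Composite of several related variables: mean of whichever are present.
# The composite is missing only when every input is missing.
def composite(values, missing=None):
    present = [v for v in values if v is not missing]
    return sum(present) / len(present) if present else missing

rows = [(1.0, 2.0, None),    # two of three present -> mean of those two
        (None, None, None),  # all missing -> composite stays missing
        (4.0, None, 6.0)]
print([composite(r) for r in rows])   # [1.5, None, 5.0]
```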

Terry J. Reedy
Jul 18 '05 #8
Jon
bo**@oz.net (Bengt Richter) wrote in message news:<be**********@216.39.172.122>...
On Wed, 02 Jul 2003 14:16:57 -0500, John Hunter <jd******@ace.bsd.uchicago.edu> wrote:
I am currently using a hack that works, but it makes me wonder if
there is an optimal solution. I define optimal as the removal of rows
and columns such that there are no missing values and
max(numRows*numCols).
There must be! For each missing entry you can choose to drop either
the row or the column - giving a maximum of 2^N possible ways for N
missing elements. Given that the order in which the rows/columns are
deleted is unimportant the number of possibilities to test is less
than that, but the difficulty is that your decision about row or
column for one element affects the decision for other elements if they
share rows or columns. Graph theoretic approaches probably help - it's
similar to reordering matrix elements to avoid fill in when
decomposing sparse matrices. Depending on N, you might be able to test
all possibilities in a reasonable time, but I have a nasty feeling you
need to check all 2^M to be sure of an optimal solution, where M is
the subset of N which share a row or column with another
problematic element.
Another way of formulating the question: for a sparse boolean matrix
(sparse on True), what is the optimal way to remove rows and columns
so that the total number of elements in the matrix is maximal and
there are no True values left.


Sadly I can only give you a method which is certainly not optimal, but
at least finds something which is probably not too bad, and fairly
quickly. Uses the Numeric module to speed up, so that a few thousand
by a few hundred is quickly computable. I'm curious to know if it does
much better or worse than other algorithms.

Interesting problem!

Cheers,

Jon

======================================================================
from Numeric import *

def pickrcs(a, debug=0):
    b = array(a, copy=1)                  # local copy of array is modified
    dr = zeros(b.shape[0])                # Arrays to note deleted rows...
    dc = zeros(b.shape[1])                # ... and columns
    sizeleft = [b.shape[0], b.shape[1]]   # Remaining dimensions
    while 1:                              # Keep deleting till none left
        sr = sum(b, 1)                    # row scores = ones per row
        sc = sum(b, 0)                    # column scores = ones per column
        mr = argmax(sr)                   # index of highest row score
        mc = argmax(sc)                   # index of highest column score
        if sr[mr] == 0 and sc[mc] == 0:
            break                         # stop when nothing to delete
        # pick the row/column so as to delete fewest useful elements
        if sizeleft[1]-sr[mr] > sizeleft[0]-sc[mc]:
            b[:,mc] = 0                   # Zero out column
            dc[mc] = 1                    # tag column as deleted
            sizeleft[1] -= 1
        else:
            b[mr,:] = 0                   # Zero out row
            dr[mr] = 1
            sizeleft[0] -= 1
    # end of deletions loop - should be no missing elements now
    #
    # paranoia!, check if anything was left after deletions
    if sum(dr) < b.shape[0] and sum(dc) < b.shape[1]:
        dr = compress(dr==0, range(b.shape[0]))  # gives return format of
        dc = compress(dc==0, range(b.shape[1]))  # optimrcs posted by Bengt
        score = dr.shape[0]*dc.shape[0]          # Richter
        return score, dr, dc
    else:
        print "Sorry, didn't manage to let anything survive"
        return 0, 0, 0

# test code

X = [
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0]
]

if __name__ == '__main__':
    a = array(X)
    score, rowsel, colsel = pickrcs(a)
    print "score=", score
    for r in range(5):
        row = X[r]
        for c in range(5):
            if r in rowsel and c in colsel:
                print '<%s>' % row[c],
            else:
                print ' %s ' % row[c],
        print
    # now try for a larger problem - use a random pattern
    import RandomArray, time
    start = time.time()
    x0 = 1000                    # problem size
    x1 = 600
    a = RandomArray.random((x0, x1))
    a = where(a > 0.999, 1, 0)   # ~ 0.1% of elements are 1
    print "Using a large random matrix"
    print "Number of non-zeros=", sum(ravel(a)), "in", x0, "x", x1, "matrix"
    score, rowsel, colsel = pickrcs(a)
    print 'Score = %s' % (score)
    print 'Number of rows deleted=', a.shape[0]-rowsel.shape[0]
    print 'Number of columns deleted=', a.shape[1]-colsel.shape[0]
    print time.time()-start, "seconds"
======================================================================

==sample output (of course second part is random!)===================

score= 16
1 0 0 0 1
<0> <0> 0 <0> <0>
<0> <0> 0 <0> <0>
<0> <0> 0 <0> <0>
<0> <0> 1 <0> <0>
Using a large random matrix
Number of non-zeros= 607 in 1000 x 600 matrix
Score = 331776
Number of rows deleted= 424
Number of columns deleted= 24
3.38999998569 seconds
Jul 18 '05 #9
John Hunter <jd******@ace.bsd.uchicago.edu> wrote:
Another way of formulating the question: for a sparse boolean matrix
(sparse on True), what is the optimal way to remove rows and columns
so that the total number of elements in the matrix is maximal and
there are no True values left.


After having read Bengt's code (and scraping the parts of my exploded
head from the floor) and after later getting the hint about the size
of the problem being 2**N, it finally dawned on me that the problem
would be related to getting all possible combinations of a range.

The following code has the advantage that it generates solutions by
index, for example "M[i]" returns the score and the row and column
information for index i (note that this is not the same as finding the
optimal solution, but it still could be handy to be able to generate a
specific one by index)

However for small matrices it is possible to do exhaustive searching
with "max(M)".

Maybe if this were combined with some heuristic algorithm it would
do better (a genetic algorithm?).
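One cheap heuristic in that spirit (my own sketch, not code from the thread): hill-climb on the set of kept rows with random restarts, since the columns are forced once the rows are fixed:

```python
import random

# Random-restart hill climbing on the kept-row set. Columns are forced:
# once the rows are fixed, keep exactly the columns with no 1 in them.
def score(X, keep_rows):
    ncols = len(X[0])
    cols = sum(1 for c in range(ncols)
               if all(X[r][c] == 0 for r in keep_rows))
    return len(keep_rows) * cols

def hill_climb(X, restarts=20, seed=0):
    rng = random.Random(seed)
    nrows = len(X)
    best, best_rows = 0, []
    for _ in range(restarts):
        keep = [r for r in range(nrows) if rng.random() < 0.5]
        improved = True
        while improved:
            improved = False
            for r in range(nrows):             # try toggling each row in/out
                trial = sorted(set(keep) ^ {r})
                if trial and score(X, trial) > score(X, keep):
                    keep, improved = trial, True
        if score(X, keep) > best:
            best, best_rows = score(X, keep), keep
    return best, best_rows

X = [[1, 0, 0, 0, 1],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0]]
print(hill_climb(X))   # (16, [1, 2, 3, 4]) on the 5x5 example
```

No optimality guarantee, of course, but each restart costs only a handful of score evaluations, so it scales to the 1000x100 case where exhaustive search cannot.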

Code below (needs Python 2.3, very lightly tested), I hope this helps,

Anton

class Combinations:

    def __init__(self, n, k):
        self.n, self.k, self.count = n, k, self.noverk(n, k)

    def __getitem__(self, index):
        #combination number index
        if index > self.count - 1: raise IndexError
        res, x, rest = [], self.n, index
        for i in range(self.k, 0, -1):
            while self.noverk(x, i) > rest: x -= 1
            rest -= self.noverk(x, i)
            res.append(x)
        return res

    def noverk(self, n, k):
        return reduce(lambda a, b: a*(n-b)/(b+1), range(k), 1)

class AllCombinations:

    def __init__(self, n):
        self.n, self.count = n, 2**n

    def __getitem__(self, index):
        #combination number index for all k in combinations(n,k)
        if index > self.count - 1: raise IndexError
        n, rest = self.n, index
        for k in range(n+1):
            c = Combinations(n, k)
            if rest - c.count < 0:
                return c[rest]
            rest -= c.count

class ScoreMatrix:

    def __init__(self, X):
        self.AC = AllCombinations(len(X))
        self.X = X

    def __getitem__(self, i):
        #score selected rows and columns by index
        rows = self.AC[i][::-1]
        if rows:
            Y = [self.X[i] for i in rows]
            Z = [(i, z) for i, z in enumerate(zip(*Y)) if 1 not in z]
            cols = [i for i, z in Z]
            score = sum(map(len, [z for i, z in Z]))
            return score, rows, cols
        else: return 0, [], []

def test():
    X = [ [1, 0, 0, 0, 1],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 1, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [0, 0, 1, 0, 0], ]

    M = ScoreMatrix(X)
    sc, rows, cols = max(M)
    print sc
    for i, r in enumerate(X):
        for j, c in enumerate(r):
            if i in rows and j in cols: print c,
            else: print 'x',
        print

if __name__ == '__main__':
    test()

output:

16
x x x x x
0 0 x 0 0
x x x x x
0 0 x 0 0
0 0 x 0 0
0 0 x 0 0
Jul 18 '05 #10
