
C++ vector algorithm

Hello,

This is somewhat OT, but since everybody here seems to have
great knowledge of C++, I thought I'd ask.

I have a vector with n^2 elements (where n is usually
somewhere between 100 and 1000...).
My goal is to loop over these values 5000 times or so
(this is not a real-time computation...): select a
random element from the vector and do some simple
operations on its nearest neighbors - meaning the elements
above, below, left and right of it in the vector.

However, there is a problem - at the edge of the vector,
not all neighbors can be used (since they are not in the vector).

The algorithm is based on a mathematical one which
states that if a neighbor does not exist, use 0 (zero) instead.

But this stuff needs to be fast, even though it's not real-time.

Here's what I've got so far:

S - the vector in question
S_size - total number of elements in S

for(int T = 0; T < T_repeat; T += 10) {
    for(int l = 0; l < l_repeat; l += 10) {
        for(int q = 0; q < q_repeat; q++) {
            xrand = rand() % S_size + 1;
            yrand = rand() % S_size + 1;

            try {
                U1 = (-S_values.at(xrand).at(yrand)) *
                       S_values.at(xrand).at(yrand);
                U2 = S_values.at(xrand).at(yrand) *
                     S_values.at(xrand + 1).at(yrand);
            } catch(...) {
                cout << "borked" << endl;
            }

            /* Do some computing here */
            dU = J * (float)(U2 - U1);
        }
    }
}

See the try/catch stuff? There's my problem.
Can I handle this in some nifty way by letting
the exception tell me which access went out of range,
and at which elements?

It's late at night here and I really should get to bed
- if there's something that's not clear here
I'd be pleased to answer.

Cheers.

-- Pelle
Jul 23 '05 #1
5 Replies


S - the vector in question
S_size - total number of elements in S

for(int T = 0; T < T_repeat; T += 10) {
    for(int l = 0; l < l_repeat; l += 10) {
        for(int q = 0; q < q_repeat; q++) {
            xrand = rand() % S_size + 1;
            yrand = rand() % S_size + 1;


Given an array (or std::vector)
int field[] = {1, 2, 3, 4};
it has 4 elements ... sizeof(field)/sizeof(field[0])

If you use rand() % 4, the possible values are in the range [0..3]. You
then add one and use that as an index into your array, which will throw
an exception whenever rand() % 4 == 3.
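
In other words, the modulus base and the index range have to match. A
minimal sketch, assuming the grid is n by n (so S_size == n * n) and
using a made-up helper name:

#include <cstdlib>

int pick_index(int n)
{
    return std::rand() % n;   // yields 0 .. n-1, always a valid index
}

Drop the "+ 1" and take the modulus with n rather than with the total
element count.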

my 10 cents

--daniel
Jul 23 '05 #2


"Pelle Beckman" <he******@chello.se> wrote in message
news:TXD4e.299$184.89@amstwist00...

I have a vector with n^2 elements (where n is usually
somewhere between 100 and 1000...).
My goal is to loop over these values 5000 times or so
(this is not a real-time computation...): select a
random element from the vector and do some simple
operations on its nearest neighbors - meaning the elements
above, below, left and right of it in the vector.

However, there is a problem - at the edge of the vector,
not all neighbors can be used (since they are not in the vector).


Why not make your vector bigger, so if you want n = 100 you actually use n =
102, setting the extra elements to zero (as required by the math). Then as
you work with your 100 elements, they will all have defined neighbors. Then
you won't throw exceptions, and you won't have to waste time testing for
being near the edge.
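
A minimal sketch of that idea, with made-up names (grid, n) rather than
the original post's variables: allocate (n+2) x (n+2), leave the outer
ring at zero, and draw random indices from 1..n so every neighbor access
stays in range:

#include <cstdlib>
#include <vector>

int main()
{
    const std::size_t n = 100;
    std::vector<std::vector<float> > grid(n + 2,
        std::vector<float>(n + 2, 0.0f));      // outer ring stays at zero

    // Pick a random interior cell: indices 1..n always have four neighbors.
    std::size_t x = 1 + std::rand() % n;
    std::size_t y = 1 + std::rand() % n;

    // No bounds checks and no try/catch needed; border neighbors read as 0.
    float u1 = -grid[x][y] * grid[x][y];
    float u2 =  grid[x][y] * grid[x + 1][y];
    (void)u1; (void)u2;
    return 0;
}

The four neighbors of any interior cell always exist, and the ones
sitting on the border are simply zero, which is exactly what the math
asks for.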
Jul 23 '05 #3

"Pelle Beckman" <he******@chello.se> wrote in message
news:TXD4e.299$184.89@amstwist00...
[...]
My goal is to loop over these values 5000 times or so
(this is not a real-time computation...): select a
random element from the vector and do some simple
operations on its nearest neighbors - meaning the elements
above, below, left and right of it in the vector.

However, there is a problem - at the edge of the vector,
not all neighbors can be used (since they are not in the vector).
[...]
See the try/catch stuff? There's my problem.
Can I handle this in some nifty way by letting
the exception tell me which access went out of range,
and at which elements?
[...]

-- Pelle


If you want to go fast, don't use "at" and don't use exceptions for what
amounts to flow control. There are two reasonable solutions:

1) put in artificial borders filled with zeros, as someone else suggested, and
2) write special loops for use near the edges and corners.

I recently wrote some code which used method 2 and had to write 9 loops: top
left corner, top edge, top right corner, left edge, sweet spot in the
middle, right edge, lower left corner, bottom edge, and lower right corner.
It was a pain in the neck, but it turned out to be quite fast as well as
non-invasive. The loop for the middle part (where most of the data are) can
be coded without any runtime penalty for bounds checking. The code at the
corners does a lot of checking, but it only covers a few elements.
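
A condensed sketch of that split (not the poster's actual nine loops; grid,
x and y are made-up names, and the grid is assumed to be square): one
unchecked helper for the interior, where no bounds test is ever needed, and
one checked helper for the border cells, where a missing neighbor simply
contributes zero:

#include <cstddef>
#include <vector>

// Interior cells: valid only for 1 <= x, y <= n-2, so no bounds tests at all.
float neighbor_sum_interior(const std::vector<std::vector<float> >& grid,
                            std::size_t x, std::size_t y)
{
    return grid[x - 1][y] + grid[x + 1][y]
         + grid[x][y - 1] + grid[x][y + 1];
}

// Border cells: each missing neighbor simply contributes zero.
float neighbor_sum_border(const std::vector<std::vector<float> >& grid,
                          std::size_t x, std::size_t y)
{
    const std::size_t n = grid.size();
    float sum = 0.0f;
    if (x > 0)     sum += grid[x - 1][y];
    if (x + 1 < n) sum += grid[x + 1][y];
    if (y > 0)     sum += grid[x][y - 1];
    if (y + 1 < n) sum += grid[x][y + 1];
    return sum;
}

Most random picks land in the interior and pay nothing for checking; only
the thin band of border cells goes through the checked version.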

--
Cy
http://home.rochester.rr.com/cyhome/
Jul 23 '05 #4

Dana Cartwright wrote:
"Pelle Beckman" <he******@chello.se> wrote in message
news:TXD4e.299$184.89@amstwist00...
I have a vector with n^2 elements (where n is usually
somewhere between 100 and 1000...).
My goal is to loop over these values 5000 times or so
(this is not a real-time computation...): select a
random element from the vector and do some simple
operations on its nearest neighbors - meaning the elements
above, below, left and right of it in the vector.

However, there is a problem - at the edge of the vector,
not all neighbors can be used (since they are not in the vector).

Why not make your vector bigger, so if you want n = 100 you actually use n =
102, setting the extra elements to zero (as required by the math). Then as
you work with your 100 elements, they will all have defined neighbors. Then
you won't throw exceptions, and you won't have to waste time testing for
being near the edge.


It's a perfectly sane idea, but since we're talking about
hundreds of thousands of elements, a "border" would mean
thousands of "unnecessary" elements, which could mean a few
extra megabytes of memory depending on the element type
of the vector.

-- Pelle
Jul 23 '05 #5

"Pelle Beckman" <he******@chello.se> wrote in message
news:_lU4e.341$184.115@amstwist00...
Dana Cartwright skrev:
"Pelle Beckman" <he******@chello.se> wrote in message
news:TXD4e.299$184.89@amstwist00...

Why not make your vector bigger, so if you want n = 100 you actually use
n = 102, setting the extra elements to zero (as required by the math).
Then as you work with your 100 elements, they will all have defined
neighbors. Then you won't throw exceptions, and you won't have to waste
time testing for being near the edge.


It's a perfectly sane idea, but since we're talking about
hundreds of thousands of elements, a "border" would mean
thousands of "unnecessary" elements, which could mean a few
extra megabytes of memory depending on the element type
of the vector.

-- Pelle


How big are your elements? If the maximum size is 1000 by 1000 (as I think
you said in your OP), that's only one million elements. Adding a border of
zeroes only adds about 4000 elements, or roughly 0.4% storage overhead.

You said this needed to be fast. I'm just thinking that a fraction of a
percent of memory overhead is reasonable if you gain a lot of speed.

If your elements are large, then you could put them in memory that's
separate from the vector, and make the vector a vector of pointers to the
elements. Then you need only create a single 'zero' element, because all
of the border pointers can (probably) point at that one element.
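
A minimal sketch of that pointer layout, under the same (n+2) x (n+2)
padding assumption and with illustrative names (grid, values): every border
cell points at one shared zero element, interior cells point at the real
data, and neighbor lookups never need a range check:

#include <cstddef>
#include <vector>

int main()
{
    const std::size_t n = 100;
    static const float zero = 0.0f;                 // the one shared zero
    std::vector<float> values(n * n, 1.0f);         // the real elements

    // (n+2) x (n+2) grid of pointers, all pointing at the shared zero...
    std::vector<std::vector<const float*> > grid(n + 2,
        std::vector<const float*>(n + 2, &zero));

    // ...then re-point the interior cells at the real data.
    for (std::size_t x = 0; x < n; ++x)
        for (std::size_t y = 0; y < n; ++y)
            grid[x + 1][y + 1] = &values[x * n + y];

    // For 1 <= x, y <= n every neighbor access is valid, and border
    // neighbors just read 0 through the shared pointer.
    float up = *grid[0][1];        // a border cell: reads the shared zero
    (void)up;
    return 0;
}

The extra cost of the border is then only a few thousand pointers, not a
few thousand full elements.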
Jul 23 '05 #6
