Bytes IT Community

vector.size() and (signed) int

I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:

int i = //stuff
if (i >= vec.size())
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t). I get similar warnings when assigning the
return value of .size() to an int. However, even on Stroustrup's FAQ
<http://www.research.att.com/~bs/bs_faq2.html> he has:

for (int i = 0; i<v.size(); ++i)
which seems to me that it should be fine. In order to quiet the warning,
I have cast the return value to an int:

if (i >= static_cast<int>(vec.size()))
Is this bad style? Is there a more elegant solution? I recall reading
somewhere that unsigned ints should be avoided due to implicit
conversion issues that may crop up or if the value happens to wrap
around (signed overflow is technically undefined, though I assume most
implementations will just wrap the value), which is why I would
prefer not to use them. For example, in TC++PL:SE, section 16.3.4 has
an example of a weird issue (but he says the compiler will complain if
we are lucky):

Since a vector cannot have a negative number of elements, its size
must be non-negative. This is reflected in the requirement that
vector's size_type must be an unsigned type. This allows a greater
range of vector sizes on some architectures. However, it can also
lead to surprises:

void f(int i)
{
vector<char> vc0(-1); // fairly easy for compiler to warn against
vector<char> vc1(i);
}

void g()
{
f(-1); // trick f() into accepting a large positive number!
}

In the call f(-1), -1 is converted into a (rather large) positive
integer (Sec. C.6.3). If we are lucky, the compiler will find a way
of complaining.
--Bjarne Stroustrup, _The_C++_Programming_Language:_Special_Edition_
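The surprise Stroustrup describes can be seen directly. A minimal sketch (my own illustration, not from the book; `as_unsigned` is a hypothetical name):

```cpp
#include <cassert>
#include <climits>

// The standard defines int -> unsigned conversion modulo 2^n, so -1
// always becomes the largest unsigned int value -- the "rather large
// positive integer" that vc1(i) receives when f(-1) is called.
unsigned int as_unsigned(int i) {
    return static_cast<unsigned int>(i);
}
```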

--
Marcus Kwok
Sep 15 '05 #1
27 Replies



"Marcus Kwok" <ri******@gehennom.net> wrote in message
news:dg**********@news-int2.gatech.edu...
I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:

int i = //stuff
if (i >= vec.size())
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t).


Any reason you're not using unsigned int for i?

(You can also feel free to ignore the warning, if you know it'll never be a
real problem.)

-Howard
Sep 15 '05 #2

Howard wrote:
I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:

int i = //stuff
if (i >= vec.size())
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t).


Any reason you're not using unsigned int for i?


Actually, I'd say he should use int by default. unsigned types are only
useful in special cases. I actually don't understand why size_t is
unsigned.

Sep 15 '05 #3

Rolf Magnus wrote:
Howard wrote:

I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:

int i = //stuff
if (i >= vec.size())
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t).


Any reason you're not using unsigned int for i?

Actually, I'd say he should use int by default. unsigned types are only
useful in special cases. I actually don't understand why size_t is
unsigned.


A man after my own heart. I'd go further and say I don't know why
unsigned types exist.

John
Sep 15 '05 #4

Marcus Kwok wrote:

if (i >= static_cast<int>(vec.size()))
Is this bad style?
Yes.
Is there a more elegant solution?


Turn off the damn warning. The code is legal and its meaning is
well-defined and clear. Or, if you truly believe that the compiler
writer understands your code better than you do, change i to type size_t
or add a cast.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Sep 15 '05 #5

John Harrison wrote:

A man after my own heart. I'd go further and say I don't know why
unsigned types exist.


Try writing a multiple precision integral math package with signed types
(i.e. in Java...). You'll spend far too much time masking high bits out.
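A rough sketch of the point (my own illustrative code; the names and the 32-bit limb width are assumptions): one limb-addition step of a multiple-precision add, where unsigned arithmetic makes carry detection a simple comparison, with no high-bit masking:

```cpp
#include <cstdint>
#include <utility>

// One limb-addition step of a multiple-precision add. Unsigned overflow
// wraps mod 2^32 (well-defined), so a carry-out is detected simply by
// checking whether the wrapped sum came out smaller than an operand.
std::pair<std::uint32_t, std::uint32_t>
add_with_carry(std::uint32_t a, std::uint32_t b, std::uint32_t carry_in) {
    std::uint32_t t = a + b;            // may wrap mod 2^32
    std::uint32_t c1 = (t < a) ? 1u : 0u;
    std::uint32_t sum = t + carry_in;   // may wrap again
    std::uint32_t c2 = (sum < t) ? 1u : 0u;
    return {sum, c1 | c2};              // carry-out is 0 or 1
}
```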

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Sep 15 '05 #6

Howard <al*****@hotmail.com> wrote:
"Marcus Kwok" <ri******@gehennom.net> wrote in message
news:dg**********@news-int2.gatech.edu...
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t).


Any reason you're not using unsigned int for i?

(You can also feel free to ignore the warning, if you know it'll never be a
real problem.)


I listed some reasons in the original post (wrap-around and implicit
conversion issues and gave example code from Stroustrup). I mean, I
know MY code will never make that mistake ;-) but I try to code
defensively when possible. In either case, even if I declare it as an
unsigned, it still warns me that there is a "conversion from size_t to
unsigned int, possible loss of data". Now, I know that the chance of
losing data converting from a size_t (an unsigned type) to an unsigned
int is pretty small, but I like clean compiles with no warnings (please
don't tell me to just disable this warning message :) ). I do not think
using a size_t would be appropriate either, as I am using i for another
purpose.

OK, maybe it will help if I post the ACTUAL code (the code I gave was an
extremely simplified version), and then possibly someone can see a
better way to do it that will avoid the issue. I will simplify less,
and add comments.

The int Xmtr.num is being used to store the index of the transmitter.
Then, we check a vector of receivers to see if we have the data for this
transmitter, and if not then we add a new entry for it, with the same
index. Let me know if this is not clear after reading the code.
void FSBSModel::fsbs_set(const FSBSParameters& params)
{
    // ...
    // params.xmtrs is a std::vector<Xmtr_setq>
    // xmtrs is a std::vector<Xmtr>

    typedef std::vector<Xmtr_setq>::const_iterator XCI;
    XCI xend = params.xmtrs.end();

    // for each Xmtr_setq passed in through params,
    // add it to the list of real xmtrs
    for (XCI xci = params.xmtrs.begin(); xci != xend; ++xci) {
        int num_xmtrs = static_cast<int>(xmtrs.size());

        Xmtr toAdd;

        toAdd.name = xci->name;
        toAdd.num = num_xmtrs;

        // xmtr_fill() will fill in the rest of the Xmtr based on its name
        xmtr_fill(toAdd);
        xmtrs.push_back(toAdd);
    }
}
void FSBSModel::read_prop_files()
{
    // ...

    std::vector<Xmtr>::const_iterator xci;
    std::vector<Rcvr>::iterator ri;

    // ...

    int xmtr_i = xci->num;

    // ri->signal is a std::vector<Matrix> that holds the signal info
    // for a specific transmitter (ri->signal[xmtr_i]) at that receiver
    // location

    // if the below condition is true, then this receiver does not have
    // any signal info for that transmitter, so we add a blank Matrix
    // (it will be filled later)
    if (xmtr_i >= static_cast<int>(ri->signal.size())) {
        Matrix temp999;

        // ...

        ri->signal.push_back(temp999);
    }
}
A somewhat related but tangential question, can anyone see a better
implementation for this function?

// setcol() sets the elements of the colv vector
// to the individual phrases found in input_record, which is tab-delimited.
//
// The column numbers start at 0.
//
// The number of columns found is returned.
int setcol(const std::string& input_record, std::vector<std::string>& colv)
{
    std::string temp;

    colv.clear();

    typedef std::string::const_iterator CI;
    CI end = input_record.end();
    for (CI ci = input_record.begin(); ci != end; ++ci) {
        if (*ci != '\t') {
            temp += *ci;
        }
        else {
            colv.push_back(temp);
            temp.clear();
        }
    }
    colv.push_back(temp);

    return static_cast<int>(colv.size());
}
Originally I implemented it like the below, which I feel is much more
elegant, but I could not figure out how to get it to separate the
phrases only on tabs; it would delimit with any whitespace.

int setcol(const std::string& input_record, std::vector<std::string>& colv)
{
    std::istringstream s(input_record);
    std::string temp;

    colv.clear();
    while (s >> temp)
    {
        colv.push_back(temp);
    }

    return colv.size();
}
Thanks in advance.

--
Marcus Kwok
Sep 15 '05 #7

Marcus Kwok wrote:
Howard <al*****@hotmail.com> wrote:
"Marcus Kwok" <ri******@gehennom.net> wrote in message
news:dg**********@news-int2.gatech.edu...
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t).


Any reason you're not using unsigned int for i?

(You can also feel free to ignore the warning, if you know it'll never be a
real problem.)

I listed some reasons in the original post (wrap-around and implicit
conversion issues and gave example code from Stroustrup). I mean, I
know MY code will never make that mistake ;-) but I try to code
defensively when possible. In either case, even if I declare it as an
unsigned, it still warns me that there is a "conversion from size_t to
unsigned int, possible loss of data". Now, I know that the chance of
losing data converting from a size_t (an unsigned type) to an unsigned
int is pretty small, but I like clean compiles with no warnings (please
don't tell me to just disable this warning message :) ). I do not think
using a size_t would be appropriate either, as I am using i for another
purpose.


That warning is in preparation for 64-bit computing, when size_t will
become a 64-bit quantity but unsigned will (presumably) stay at 32 bits.
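One way to keep the compile clean on both 32- and 64-bit targets (a sketch, with a plain element sum assumed as the loop body): index with the container's own size_type, which tracks size_t:

```cpp
#include <vector>

// Using std::vector's size_type for the index: no signed/unsigned
// mismatch, and no truncation warning when size_t widens to 64 bits.
int sum(const std::vector<int>& v) {
    int total = 0;
    for (std::vector<int>::size_type i = 0; i != v.size(); ++i)
        total += v[i];
    return total;
}
```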

john
Sep 15 '05 #8

Marcus Kwok wrote:
I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:

int i = //stuff
if (i >= vec.size())
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t). I get similar warnings when assigning the
return value of .size() to an int.


Why not use a vector<type>::size_type instead of an int? It is simply a
typedef for an unsigned integral type, and it is there for that exact
purpose.
Rui Maciel
--
Running Kubuntu 5.04 with KDE 3.4.2 and proud of it.
jabber:ru********@jabber.org
Sep 15 '05 #9

John Harrison wrote:
A man after my own heart. I'd go further and say I don't know why
unsigned types exist.


They are extremely valuable and useful. For you to say that, I will take a
wild guess and point out that you never did any numerical analysis work.
Rui Maciel
--
Running Kubuntu 5.04 with KDE 3.4.2 and proud of it.
jabber:ru********@jabber.org
Sep 15 '05 #10

Rui Maciel wrote:
John Harrison wrote:

A man after my own heart. I'd go further and say I don't know why
unsigned types exist.

They are extremely valuable and useful. For you to say that, I will take a
wild guess and point out that you never did any numerical analysis work.


An alternative to having unsigned types is to have unsigned operators.
Java already does this for one of the shift operators. I don't know why
they didn't extend that to other operators.

I did a little numerical analysis at university and I don't remember any
use for unsigned types. In what way are they extremely valuable and useful?

john
Sep 16 '05 #11

John Harrison wrote:
Rolf Magnus wrote:
Howard wrote:

I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:

int i = //stuff
if (i >= vec.size())
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t).

Any reason you're not using unsigned int for i?

Actually, I'd say he should use int by default. unsigned types are only
useful in special cases. I actually don't understand why size_t is
unsigned.


A man after my own heart. I'd go further and say I don't know why
unsigned types exist.


Well,

don't get me started on *signed* ones. The unsigned ones clearly exist
because the signed types are only useful for the positive half of their
admissible values. Outside that range, they even have
implementation-defined multiplicative arithmetic [see 5.6/4]. Also, to avoid
undefined behavior from overflows, you always have to check whether an
overflow *would* occur *before* you add [see 5/5].

The unsigned types instead never overflow and are completely predictable:
they just do arithmetic mod 2^n [see 3.9.1/4], and it is easy to detect a
wrap-around after the fact. If you care about knowing what your code is
actually computing, the unsigned types just rule.
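The "detect a wrap-around after the fact" point can be sketched as (an illustrative helper, not from the post):

```cpp
// Unsigned addition is defined to wrap mod 2^n, so the result is always
// computed, and a wrap is detectable afterwards: the wrapped sum is
// smaller than an operand iff the mathematical sum exceeded the range.
bool add_wrapped(unsigned a, unsigned b, unsigned& result) {
    result = a + b;     // well-defined even when it "overflows"
    return result < a;  // true iff the addition wrapped
}
```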
Best

Kai-Uwe Bux

PS.: I have to admit, though, that the following idiom for counting down to
0 in unsigned types looks funny [creating the impression of looping
forever]:

#include <iostream>

int main()
{
    unsigned size = 10;
    for (unsigned i = size - 1; i < size; --i) {
        std::cout << i << '\n';
    }
}

Sep 16 '05 #12

Kai-Uwe Bux wrote:
John Harrison wrote:

Rolf Magnus wrote:
Howard wrote:

>I am getting warnings when comparing a (regular) int to the value
>returned from std::vector.size() in code similar to the following:
>
>int i = //stuff
>if (i >= vec.size())
>
>
>The compiler gives me a "signed/unsigned mismatch" warning (I am using
>the C++ compiler found in Visual Studio .NET 2003, which tells me that
>the return type is a size_t).

Any reason you're not using unsigned int for i?
Actually, I'd say he should use int by default. unsigned types are only
useful in special cases. I actually don't understand why size_t is
unsigned.


A man after my own heart. I'd go further and say I don't know why
unsigned types exist.

Well,

don't get me started on *signed* ones. The unsigned ones clearly exist
because the signed types are only useful for the positive half of their
admissible values. Outside that range, they even have
implementation-defined multiplicative arithmetic [see 5.6/4]. Also, to avoid
undefined behavior from overflows, you always have to check whether an
overflow *would* occur *before* you add [see 5/5].

The unsigned types instead never overflow and are completely predictable:
they just do arithmetic mod 2^n [see 3.9.1/4], and it is easy to detect a
wrap-around after the fact. If you care about knowing what your code is
actually computing, the unsigned types just rule.


Well yes, but that raises the question of why the signed types don't have
predictable behaviour on overflow. I've never heard a convincing reason
for this and don't believe one exists. Please don't try to claim that
it is to support unusual formats like ones-complement or signed
magnitude; that argument is demolished by the rareness of those formats
and the 'as-if' rule.

john
Sep 16 '05 #13

John Harrison wrote:
Kai-Uwe Bux wrote:
John Harrison wrote:
[snip]
A man after my own heart. I'd go further and say I don't know why
unsigned types exist.

Well,

don't get me started on *signed* ones. The unsigned ones clearly exist
because the signed types are only useful for the positive half of their
admissible values. Outside that range, they even have
implementation-defined multiplicative arithmetic [see 5.6/4]. Also, to
avoid undefined behavior from overflows, you always have to check whether
an overflow *would* occur *before* you add [see 5/5].

The unsigned types instead never overflow and are completely predictable:
they just do arithmetic mod 2^n [see 3.9.1/4], and it is easy to detect a
wrap-around after the fact. If you care about knowing what your code is
actually computing, the unsigned types just rule.


Well yes, but that raises the question of why the signed types don't have
predictable behaviour on overflow. I've never heard a convincing reason
for this and don't believe one exists.


I have no idea as to why the signed integer types were designed to be
useless (at least for arithmetic programming). I just stay away from them as
far as possible. I rarely ever miss them.

Please don't try and claim that
it is to support unusual formats like ones-complement or signed
magnitude, that argument is demolished by the rareness of those formats
and the 'as-if' rule.


Yeah, the standard could simply decree the arithmetic of the integer types
on the abstract machine as it does for unsigned types. But I do not see how
this and the rareness of certain formats invalidates the design goal of C++
supporting native machine types. Could you elaborate on this?
Best

Kai-Uwe Bux
Sep 16 '05 #14

John Harrison wrote:

Well yes, but that raises the question of why the signed types don't have
predictable behaviour on overflow. I've never heard a convincing reason
for this and don't believe one exists. Please don't try and claim that
it is to support unusual formats like ones-complement or signed
magnitude, that argument is demolished by the rareness of those formats
and the 'as-if' rule.


Nevertheless, that's the reason for it. Rare isn't the same as
non-existent, and the magic words "as if" don't remove implementation
overhead.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Sep 16 '05 #15

Kai-Uwe Bux wrote:

I have no idea as to why the signed integer types were designed to be
useless (at least for arithmetic programming).
They're not useless, so there's no need to come up with a reason.


Yeah, the standard could simply decree the arithmetic of the integer types
on the abstract machine as it does for unsigned types.


So you want INT_MAX + 1 to equal 0?

The rules for signed and unsigned arithmetic were written with real
hardware architectures, not abstractions, in mind. When you ignore the
actual cost of fundamental operations you create a system that's
unusable. Java's strict floating-point rules are a good example.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Sep 16 '05 #16

Pete Becker wrote:
Kai-Uwe Bux wrote:

I have no idea as to why the signed integer types were designed to be
useless (at least for arithmetic programming).
They're not useless, so there's no need to come up with a reason.


Don't get all up in arms :)


Yeah, the standard could simply decree the arithmetic of the integer
types on the abstract machine as it does for unsigned types.


So you want INT_MAX + 1 to equal 0?

The rules for signed and unsigned arithmetic were written with real
hardware architectures, not abstractions, in mind. When you ignore the
actual cost of fundamental operations you create a system that's
unusable. Java's strict floating-point rules are a good example.


Well, you snipped the more relevant part:
Yeah, the standard could simply decree the arithmetic of the integer
types on the abstract machine as it does for unsigned types. But I do
not see how this and the rareness of certain formats invalidates
the design goal of C++ supporting native machine types.


When I say, the standard *could* say something, I am not implying it
*should* say so.
Best

Kai-Uwe Bux
Sep 16 '05 #17

Kai-Uwe Bux wrote:

When I say, the standard *could* say something, I am not implying it
*should* say so.


Too subtle. <g>

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Sep 16 '05 #18


Rui Maciel schreef:
Marcus Kwok wrote:
I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:
Why not use a vector<type>::size_type instead of an int? It is simply a
typedef for an unsigned integral type, and it is there for that exact
purpose.


For a number of reasons:

1. integer literals are precisely that, ints and not
std::vector<type>::size_type literals

2. Readability of the code (which is why we need auto i = v.size() )

3. The actual int may be negative. E.g. the following code can make
sense:
bool foo( int i, std::vector<int> v ) { return i > v.size(); }

For i<=0 this should return false. Use unsigned and it will return true
instead.
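One way to get the semantics Michiel intends without the implicit conversion (a hypothetical helper, not from the thread): handle the negative case in the signed domain first, then compare in the unsigned domain:

```cpp
#include <vector>

// Like foo(), but a negative i can never be converted into a huge
// unsigned value: it is rejected before the comparison.
bool greater_than_size(int i, const std::vector<int>& v) {
    return i > 0 && static_cast<std::vector<int>::size_type>(i) > v.size();
}
```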

HTH,
Michiel Salters

Sep 16 '05 #19

In message <dg**********@murdoch.acc.Virginia.EDU>, Kai-Uwe Bux
<jk********@gmx.net> writes
PS.: I have to admit, though, that the following idiom for counting down to
0 in unsigned types looks funny [creating the impression of looping
forever]:

#include <iostream>

int main()
{
    unsigned size = 10;
    for (unsigned i = size - 1; i < size; --i) {
        std::cout << i << '\n';
    }
}


That's not funny, it's *vile* (anagram of "evil"). Try this :

for (unsigned i = size; i-->0;)
{ //...
}

(Extra marks for working out the meaning of the emoticon -->0;)
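Spelled out without the visual pun (a small sketch of the same idiom): the condition is `i-- > 0`, which tests first and then decrements, so an unsigned counter visits size-1 down to 0 and terminates cleanly:

```cpp
#include <vector>

// Countdown via the "goes to zero" idiom: for an unsigned i starting at
// size, `i-- > 0` tests i, then decrements, so the body sees
// size-1, size-2, ..., 0 and the loop ends without wrapping past zero.
std::vector<unsigned> countdown(unsigned size) {
    std::vector<unsigned> out;
    for (unsigned i = size; i-- > 0;)
        out.push_back(i);
    return out;
}
```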

--
Richard Herring
Sep 16 '05 #20

Marcus Kwok wrote:
I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:

int i = //stuff
if (i >= vec.size())
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t). I get similar warnings when assigning the
return value of .size() to an int. However, even on Stroustrup's FAQ
<http://www.research.att.com/~bs/bs_faq2.html> he has:

for (int i = 0; i<v.size(); ++i)
which seems to me that it should be fine. In order to quiet the error,
I have casted the return value to an int:

if (i >= static_cast<int>(vec.size()))
Is this bad style? Is there a more elegant solution?


Why not just use size_t as the type of your loop index?

--
Mike Smith
Sep 16 '05 #21

Rolf Magnus wrote:
Howard wrote:

I am getting warnings when comparing a (regular) int to the value
returned from std::vector.size() in code similar to the following:

int i = //stuff
if (i >= vec.size())
The compiler gives me a "signed/unsigned mismatch" warning (I am using
the C++ compiler found in Visual Studio .NET 2003, which tells me that
the return type is a size_t).


Any reason you're not using unsigned int for i?

Actually, I'd say he should use int by default. unsigned types are only
useful in special cases. I actually don't understand why size_t is
unsigned.


Because a size can never be negative?

--
Mike Smith
Sep 16 '05 #22

Richard Herring wrote:
In message <dg**********@murdoch.acc.Virginia.EDU>, Kai-Uwe Bux
<jk********@gmx.net> writes
PS.: I have to admit, though, that the following idiom for counting down
to 0 in unsigned types looks funny [creating the impression of looping
forever]:

#include <iostream>

int main()
{
    unsigned size = 10;
    for (unsigned i = size - 1; i < size; --i) {
        std::cout << i << '\n';
    }
}


That's not funny, it's *vile* (anagram of "evil"). Try this :

for (unsigned i = size; i-->0;)
{ //...
}

(Extra marks for working out the meaning of the emoticon -->0;)


Wow, so there is a legitimate use for postfix operators after all.
Thanks a bunch.

Kai-Uwe Bux

PS.: Of course the idiom you suggested is way superior: it works regardless
of whether i is signed or unsigned and is therefore *much* safer.
Sep 16 '05 #23

Kai-Uwe Bux <jk********@gmx.net> wrote:
Wow, so there is a legitimate use for postfix operators after all.


What about the old "copying C strings" idiom?

while (*src)
    *dest++ = *src++;

--
Marcus Kwok
Sep 16 '05 #24

Marcus Kwok wrote:
Kai-Uwe Bux <jk********@gmx.net> wrote:
Wow, so there is a legitimate use for postfix operators after all.


What about the old "copying C strings" idiom?

while (*src)
    *dest++ = *src++;


For my taste that is just too much happening in one line. I prefer

while ( *src != 0 ) {
    *dest = *src;
    ++dest;
    ++src;
}

I do not really consider avoiding {}-blocks and a few keystrokes as a
legitimate goal.
What struck me about

for (unsigned i = size; i-->0;)
{ //...
}

is that you cannot easily rewrite it to get rid of the post-decrement while
keeping the for loop with its benefit of a local declaration.

[On second thought, I think you can:

for ( unsigned i = size; i > 0; ) {
    --i;
    ...
}

Oh well, never mind. I just didn't see it.]
Best

Kai-Uwe Bux
Sep 17 '05 #25

msalters wrote:
For a number of reasons:

1. integer literals are precisely that, ints and not
std::vector<type>::size_type literals
vector<type>::size() returns a vector<type>::size_type. If we want to deal
with the size of a container, why shouldn't we use the type that the size
of the container is expressed with?

On top of that, as I mentioned in my previous post, a
vector<type>::size_type is a typedef of an unsigned integral type.

2. Readability of the code (which is why we need auto i = v.size() )
If readability of the code is an issue, then the best thing to do is to use
vector<type>::size_type. When someone is reading the code and stumbles on
the declaration

vector<type>::size_type i;

that person will immediately know that i will be used to handle the sizes of
a vector. That doesn't happen with a

int i;
Besides that, if there is an example to prove that we don't need an auto,
the example you pointed out is precisely it.

3. The actual int may be negative. E.g. the follwoing code can make
sense:
bool foo( int i, std::vector<int> v ) { return i > v.size(); }
and where, exactly, does a vector<type>::size_type make that piece of code
not make sense? As far as I can see, it makes it as much, if not even more,
readable.

For i<=0 this should return false. Use unsigned and it will return true
instead.


being:
vector<type> v;
vector<type>::size_type i;

if i == 0 and v.size() == 0, foo does indeed return false. If the purpose
of i is to compare sizes, I doubt that there is a need to express negative
sizes. Therefore, there isn't a need to use a signed number with that
particular piece of code.
So, exactly where is your problem with the vector<type>::size_type?
Best regards
Rui Maciel
--
Running Kubuntu 5.04 with KDE 3.4.2 and proud of it.
jabber:ru********@jabber.org
Sep 17 '05 #26

Marcus Kwok wrote:
What about the old "copying C strings" idiom?

while (*src)
*dest++ = *src++;


Well, that fails to copy the entire "C string", doesn't it...

while (*dest++ = *src++);

would make the proper '\0' termination too.
-+-Ben-+-
Sep 19 '05 #27

This is from a different thread, but I had asked this question in it and
then realized that I probably should have started a new thread for it.
I came up with the answer, and so I am posting it here. In case anyone
is interested, the solution was to pass the third argument (EOL
character) to std::getline().

Marcus Kwok <ri******@gehennom.net> wrote:
A somewhat related but tangential question, can anyone see a better
implementation for this function?

// setcol() sets the elements of the colv vector
// to the individual phrases found in input_record, which is tab-delimited.
//
// The column numbers start at 0.
//
// The number of columns found is returned.
int setcol(const std::string& input_record, std::vector<std::string>& colv)
{
    std::string temp;

    colv.clear();

    typedef std::string::const_iterator CI;
    CI end = input_record.end();
    for (CI ci = input_record.begin(); ci != end; ++ci) {
        if (*ci != '\t') {
            temp += *ci;
        }
        else {
            colv.push_back(temp);
            temp.clear();
        }
    }
    colv.push_back(temp);

    return static_cast<int>(colv.size());
}
Originally I implemented it like the below, which I feel is much more
elegant, but I could not figure out how to get it to separate the
phrases only on tabs; it would delimit with any whitespace.

int setcol(const std::string& input_record, std::vector<std::string>& colv)
{
    std::istringstream s(input_record);
    std::string temp;

    colv.clear();
    while (s >> temp)
    {
        colv.push_back(temp);
    }

    return colv.size();
}


int setcol(const std::string& input_record, std::vector<std::string>& colv)
{
    std::istringstream s(input_record);
    std::string temp;

    colv.clear();
    while (std::getline(s, temp, '\t')) {
        colv.push_back(temp);
    }

    return static_cast<int>(colv.size());
}

--
Marcus Kwok
Sep 27 '05 #28
