Tokenizer Function (plus rant on strtok documentation)

A couple of days ago I decided to force myself to really learn
exactly what "strtok" does, and how to use it. I figured I'd
just look it up in some book and that would be that.

I figured wrong!

Firstly, Bjarne Stroustrup's "The C++ Programming Language" said:

(nothing)

Ok, how about a C book? Stephen Prata's "C Primer Plus" said:

(nothing)

Aaarrrggg. Ok, how about good old Herb Schildt and his book
"C++: The Complete Reference"? It said:

#include <cstring>
char *strtok(char *str1, const char *str2);
The strtok() function returns a pointer to the next token in
the string pointed to by str1. The characters making up the
string pointed to by str2 are the delimiters that determine
the token. A null pointer is returned when there is no token
to return. To tokenize a string, the first call to strtok()
must have str1 point to the string being tokenized. Subsequent
calls must use a null pointer for str1. In this way, the entire
string can be reduced to its tokens. It is possible to use a
different set of delimiters for each call to strtok().

Ok. But when I tried using the function, it didn't do what I
expected at all. For one thing, it severely alters the contents
of its first argument. Herb Schildt's book doesn't mention that
little factoid at all. :-( Bad Herb!

I had to google this function and find info on it on the web in
order to find out how it really works. Turns out, there are lots
of things missing from Schildt's description. (But hey, at least
he tried. Most other C/C++ authors chicken out and won't even
touch strtok in their books.) This is how this function REALLY
works:

http://www.opengroup.org/onlinepubs/.../strtok_r.html

I wish more authors would cover this useful function in their
books. After all, it IS a part of both the C and C++ standard
libraries. Ok, I'm done ranting now.
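To illustrate what bit me, here's a minimal sketch (my own example, not
from any of the books above). The point is that strtok() writes '\0'
over each delimiter in the buffer you hand it, so the original string
is chopped up in place:

#include <cstdio>
#include <cstring>

int main()
{
   char buffer[] = "alpha,beta;;gamma"; // must be writable; modifying a string literal is undefined behaviour

   // First call: pass the buffer. Subsequent calls: pass NULL.
   for (char* tok = strtok(buffer, ",;"); tok != NULL; tok = strtok(NULL, ",;"))
   {
      printf("token: %s\n", tok);
   }

   // strtok replaced the first delimiter with '\0', so buffer now reads just "alpha":
   printf("buffer afterwards: %s\n", buffer);
   return 0;
}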
For your amusement, here is a function I wrote to break a string
into tokens, given a string of "separator" characters, and put
the tokens in a std::vector<std::string>. I'm sure there's
various ways this could be improved. Comments? Slings? Arrows?
#include <cstring>   // strtok(), memset(), strncpy()
#include <string>
#include <vector>

void
Tokenize
(
   std::string const & RawText,
   std::string const & Delimiters,
   std::vector<std::string> & Tokens
)
{
   // Load raw text into an appropriately-sized dynamic char array:
   size_t StrSize = RawText.size();
   size_t ArraySize = StrSize + 5;
   char* Ptr = new char[ArraySize];
   memset(Ptr, 0, ArraySize);
   strncpy(Ptr, RawText.c_str(), StrSize);

   // Clear the Tokens vector:
   Tokens.clear();

   // Get the tokens from the array and put them in the vector
   // (strtok chews up the local copy, not the caller's string):
   char* TokenPtr = NULL;
   char* TempPtr = Ptr;
   while (NULL != (TokenPtr = strtok(TempPtr, Delimiters.c_str())))
   {
      Tokens.push_back(std::string(TokenPtr));
      TempPtr = NULL;
   }

   // Free memory and scram:
   delete[] Ptr;
   return;
}
--
Cheers,
Robbie Hatley
East Tustin, CA, USA
lone wolf intj at pac bell dot net
(put "[usenet]" in subject to bypass spam filter)
http://home.pacbell.net/earnur/
Jul 11 '06 #1
Robbie Hatley wrote:
A couple of days ago I decided to force myself to really learn
exactly what "strtok" does, and how to use it.
This is how this function REALLY
works:

http://www.opengroup.org/onlinepubs/.../strtok_r.html

I wish more authors would cover this useful function in their
books. After all, it IS a part of both the C and C++ standard
libraries. Ok, I'm done ranting now.
strtok is one of the weird functions that maintain internal state, so
that you cannot tokenize two strings in an interleaved manner or use it
in a multithreaded program. POSIX offers a strtok_r which is somewhat
saner.
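For reference, a minimal sketch of the strtok_r interface (POSIX, not
standard C++; assumes <string.h> on a POSIX system). The caller owns
the save-pointer, so two tokenizations can be interleaved safely:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char a[] = "one two three";
    char b[] = "red,green,blue";
    char *save_a, *save_b;

    /* Each string carries its own save-pointer, so the two loops
       below do not step on each other the way strtok() would. */
    char *ta = strtok_r(a, " ", &save_a);
    char *tb = strtok_r(b, ",", &save_b);
    while (ta != NULL || tb != NULL) {
        if (ta) { printf("a: %s\n", ta); ta = strtok_r(NULL, " ", &save_a); }
        if (tb) { printf("b: %s\n", tb); tb = strtok_r(NULL, ",", &save_b); }
    }
    return 0;
}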
>
For your amusement, here is a function I wrote to break a string
into tokens, given a string of "separator" characters, and put
the tokens in a std::vector<std::string>. [Tokenize() snipped]
I guess tying the tokenizer to vector<string> is not a good idea. If it
took an output iterator it could be used with any container or even
with things like ostream_iterators. Here is my attempt, which also gets
rid of strtok:

#include <string>
using namespace std;
template <class OIter> void tokenize( const string &str,
                                      const string &delim,
                                      OIter oi)
{
    typedef string::size_type Sz;

    Sz begin=0;
    while(begin<str.size()){
        Sz end=str.find_first_of(delim,begin);
        *oi++=str.substr(begin,end-begin);
        begin=str.find_first_not_of(delim,end);
    }
}

I use find_first_not_of in order to be compatible with strtok's
behaviour of treating multiple adjacent delimiters as a single
delimiter. I have not measured the performance of this version against
the strtok version.
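For example (my own usage sketch, assuming the template above is in
scope), it can fill a vector through a back_inserter, or send tokens
straight to a stream:

#include <iostream>
#include <iterator>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> v;
    tokenize("a,b,,c", ",", std::back_inserter(v));  // v holds "a", "b", "c"

    // No container at all -- write each token on its own line:
    tokenize("one two three", " ", std::ostream_iterator<std::string>(std::cout, "\n"));
    return 0;
}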

Jul 11 '06 #2
jmoy wrote:
[ ... ]
Here is my attempt, which also gets rid of strtok:
[output-iterator tokenize() snipped]
Check out http://www.boost.org/libs/tokenizer/index.html
One cool thing about the boost tokenizer is that you can get empty
tokens if you have adjacent separators, which I believe can't be
handled by strtok.
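Roughly what that looks like with char_separator (a sketch; boost
details from memory, so check the docs): the keep_empty_tokens flag is
what preserves the empty token between adjacent separators.

#include <iostream>
#include <string>
#include <boost/tokenizer.hpp>

int main()
{
    std::string text = "a,,b,c";
    boost::char_separator<char> sep(",", "", boost::keep_empty_tokens);
    boost::tokenizer<boost::char_separator<char> > tok(text, sep);

    // Prints "a", "", "b", "c" -- the empty token between the two commas is kept.
    typedef boost::tokenizer<boost::char_separator<char> >::iterator Iter;
    for (Iter it = tok.begin(); it != tok.end(); ++it)
        std::cout << '"' << *it << '"' << '\n';
    return 0;
}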

Thanks and regards
SJ

Jul 11 '06 #3

jmoy wrote:
[ ... ]
Here is my attempt, which also gets rid of strtok:
[output-iterator tokenize() snipped]
I like this implementation, but don't you assume that the space for
the tokens is already pre-allocated? If I use your function with
something like this, I get a segmentation fault:

std::vector<std::string> v;
std::vector<std::string>::iterator it = v.begin();
tokenize<std::vector<std::string>::iterator>("a b c", " ", it);
Jul 11 '06 #4
"jmoy" <jm**********@gmail.comwrote:
strtok is one of the weird functions that maintain internal state, so
that you cannot tokenize two strings in an interleaved manner or use it
in a multithreaded program. POSIX offers a strtok_r which is somewhat
saner.
Ah, sort of like the code my ex-boss left me to maintain after he
got fired. Hundreds of global variables, which he uses to pass
data from function to function, like a dumbass. Of course, since
the program is a complex windows app with timers and interrupts,
the data often gets over-written on its way from one place to
another. ::sigh:: Global variables are the work of Sauron.
I guess tying the tokenizer to vector<string> is not a good idea.
It does limit the user to a std::vector<std::string>, yes. However,
that construct is pretty good for this app. I find it hard to
think of cases which couldn't use that to hold a bunch of tokens.
If it took an output iterator it could be used with any container
or even with things like ostream_iterators.
Provided that the output container was big enough. If you start
with an empty container and try writing to it using output
iterators, you'll get an "illegal memory access" or "general
protection fault" or some such thing. So you'd have to make sure
that the container was huge. I don't like that approach.
#include <string>
using namespace std;
template <class OIter> void tokenize( const string &str,
const string &delim,
OIter oi)
{
typedef string::size_type Sz;

Sz begin=0;
while(begin<str.size()){
Sz end=str.find_first_of(delim,begin);
*oi++=str.substr(begin,end-begin);
begin=str.find_first_not_of(delim,end);
}
}

I use find_first_not_of in order to be compatible with strtok's
behaviour of treating multiple adjacent delimiters as a single
delimiter. I have not measured the performance of this version against
the strtok version.
Alluring in its simplicity, yes. But has two major bugs:

1. Memory corruption danger if used to write to a small container.
2. You don't take into account the fact that the string might START
with one or more delimiters.

Maybe something like THIS might be better:

#include <string>
// using namespace std; // Ewww.
template <class Container>
void
tokenize
(
   const std::string & str,
   const std::string & delim,
   Container & C
)
{
   typedef std::string::size_type Sz;
   Sz begin = 0;
   Sz end   = 0;
   while (begin < str.size())
   {
      begin = str.find_first_not_of (delim, begin);
      end   = str.find_first_of     (delim, begin);
      C.push_back(str.substr(begin, end-begin));
   }
}

I haven't tested that, but I think something like that would work
better. It does require that the container for the tokens have
the push_back() method defined. Other than that, it's pretty
generic.

Note that to take care of the "starts with delimiters" case,
I simply moved your "first_not_of" up to the top of the loop.
That should work nicely.
--
Cheers,
Robbie Hatley
East Tustin, CA, USA
lone wolf intj at pac bell dot net
(put "[usenet]" in subject to bypass spam filter)
http://home.pacbell.net/earnur/
Jul 11 '06 #5
In article <zB********************@newssvr21.news.prodigy.com>,
bo***********@no.spam says...

[ ... ]
Maybe something like THIS might be better:
[tokenize() taking a Container & snipped]
IMO, this is a poor idea. Take an iterator for the output. If the
user wants the data pushed onto the back, they can use back_inserter
to get that. If they want it inserted into something like a set, they
can use inserter to get that.
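A quick sketch of what that looks like in practice (assuming a
tokenize() that takes an output iterator, like the one earlier in the
thread):

#include <iterator>
#include <set>
#include <string>
#include <vector>

// assumes: template <class OIter> void tokenize(const std::string&, const std::string&, OIter);

void demo(const std::string & text)
{
    std::vector<std::string> v;
    tokenize(text, " ,", std::back_inserter(v));     // appends via push_back

    std::set<std::string> s;
    tokenize(text, " ,", std::inserter(s, s.end())); // inserts into the set
}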

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jul 11 '06 #6
"Jerry Coffin" <jc*****@taeus.comwrote:
IMO, this is a poor idea. Take an iterator for the output.
Puts extreme burden on the user to provide the right kind of
container and iterator. Such a function would often get mis-used
and cause memory corruption and program crashes.
If the user wants the data pushed onto the back, they can
use back_inserter to get that.
If they know any better.
If they want it inserted into something like a set, they
can use inserter to get that.
If they know that they should, and if they know how.

So it really depends on which kind of function one wants to write:

1. Something efficient but dangerous, that requires having and
reading and understanding some external documentation to use it
correctly.

or

2. Something safe and easy and self-documenting, but a bit limited.

I can see use for both, actually. But the iterator version will
always be the more dangerous one.

--
Cheers,
Robbie Hatley
East Tustin, CA, USA
lone wolf intj at pac bell dot net
(put "[usenet]" in subject to bypass spam filter)
http://home.pacbell.net/earnur/
Jul 11 '06 #7
In article <I5*******************@newssvr21.news.prodigy.com>,
bo***********@no.spam says...
"Jerry Coffin" <jc*****@taeus.comwrote:
IMO, this is a poor idea. Take an iterator for the output.

Puts extreme burden on the user to provide the right kind of
container and iterator.
IMO, it's not extreme at all. They're going to have to provide the
right kind of container in any case -- but the code you provided will
often _prevent_ them from using the right container. Just for
example, putting the output into a set might well make sense -- but
your code simply won't work with it at all.

[ ... ]
I can see use for both, actually. But the iterator version will
always be the more dangerous one.
The iterator version is the only one that really works. In any case,
for a programmer to become at all proficient in using C++, they need
to learn how to do this anyway -- look through most of the algorithms
in the standard library, and note that they also take an iterator to
tell them where to put the results -- with precisely the same result.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jul 11 '06 #8

Alex Vinokur wrote:
[snip]
Also "Splitting string into vector of vectors":
http://groups.google.com/group/sourc...993fb8841382c8
http://groups.google.com/group/perfo...49a1be3a5c6335
--------------------------------------------------
Instead of
http://groups.google.com/group/perfo...73f4d1a05cfbd1
should be
http://groups.google.com/group/perfo...c775cf7e3cdcf0
Sorry
--------------------------------------------------
[snip]

Alex Vinokur
email: alex DOT vinokur AT gmail DOT com
http://mathforum.org/library/view/10978.html
http://sourceforge.net/users/alexvn

Jul 11 '06 #10
Robbie Hatley wrote:
"jmoy" <jm**********@gmail.comwrote:
#include <string>
using namespace std;
template <class OItervoid tokenize( const string &str,
const string &delim,
OIter oi)
{
typedef string::size_type Sz;

Sz begin=0;
while(begin<str.size()){
Sz end=str.find_first_of(delim,begin);
*oi++=str.substr(begin,end-begin);
begin=str.find_first_not_of(delim,end);
}
}

...
Alluring in its simplicity, yes. But has two major bugs:

1. Memory corruption danger if used to write to a small container.
No. As mentioned by other posters, if you are adding tokens to a
container the right thing is to call the function with something like a
back_inserter in which case there is no memory corruption
2. You don't take into account the fact that the string might START
with one or more delimiters.
You are right. My mistake.
>
Maybe something like THIS might be better:
[tokenize() taking a Container & snipped]
The problem with this is that it fails for the reverse case of a string
ending with a delimiter. Also, I don't like the idea of tying
algorithms to containers; iterators are a much more general concept.
Here is a corrected version of my function:

template <class OIter> void tokenize( const string &str,
                                      const string &delim,
                                      OIter oi)
{
    typedef string::size_type Sz;

    Sz end=0;
    for(;;){
        Sz begin=str.find_first_not_of(delim,end);
        if (begin==string::npos)
            break;
        end=str.find_first_of(delim,begin);
        *oi++=str.substr(begin,end-begin);
    }
}
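A quick check (my own sketch): with the corrected version, leading and
trailing delimiters now fall out naturally and produce no empty tokens.

#include <iterator>
#include <string>
#include <vector>

// assumes the corrected tokenize above is in scope

int main()
{
    std::vector<std::string> v;
    tokenize("  leading and trailing  ", " ", std::back_inserter(v));
    // v now holds exactly "leading", "and", "trailing".
    return 0;
}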

Jul 11 '06 #11
In article <11**********************@m79g2000cwm.googlegroups.com>,
jm**********@gmail.com says...

[ ... ]
[corrected output-iterator tokenize() snipped]
I think I'd also make the character type a template parameter:

#include <string>
using std::basic_string;

template <class charT, class OIter>
void tokenize( basic_string<charT> input,
               basic_string<charT> delim,
               OIter oi)
{
    typedef basic_string<charT> str;
    typedef typename str::size_type Sz;

    Sz end = 0;
    for (;;) {
        Sz begin = input.find_first_not_of(delim, end);
        if (str::npos == begin)
            break;
        end = input.find_first_of(delim, begin);
        *oi++ = input.substr(begin, end-begin);
    }
}

This way, the code can work with strings of either narrow or wide
characters.
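For instance (a sketch, assuming the template above is in scope), the
same call then works on wide strings, with charT deduced as wchar_t:

#include <iterator>
#include <string>
#include <vector>

// assumes the charT-parameterized tokenize above is in scope

int main()
{
    std::wstring text  = L"alpha beta gamma";
    std::wstring delim = L" ";

    std::vector<std::wstring> tokens;
    tokenize(text, delim, std::back_inserter(tokens)); // three wide-string tokens
    return tokens.size() == 3 ? 0 : 1;
}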

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jul 11 '06 #12

Robbie Hatley wrote:
A couple of days ago I decided to force myself to really learn
exactly what "strtok" does, and how to use it.
Every once in a while I also have this masochistic urge to torture
myself for no reason. Eventually, though, I tire of it and move on.

The strtok function is useless. It is insecure, works on insecure
types, and is a general pita to use. There are better options that are
much easier to use and have type safety. Look into string streams as a
much better alternative to strtok.
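If whitespace is the only delimiter you care about, the stringstream
route is roughly this (a minimal sketch):

#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::istringstream in("one two   three");
    std::vector<std::string> tokens;

    // operator>> skips whitespace before each token, so runs of spaces collapse.
    std::string word;
    while (in >> word)
        tokens.push_back(word);

    return tokens.size() == 3 ? 0 : 1; // "one", "two", "three"
}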

Jul 11 '06 #13
On Tue, 11 Jul 2006 07:06:16 GMT, "Robbie Hatley"
<bo***********@no.spam> wrote:
>So it really depends on which kind of function one wants to write:

1. Something efficient but dangerous, that requires having and
reading and understanding some external documentation to use it
correctly.

or

2. Something safe and easy and self-documenting, but a bit limited.
The following tokenize function, derived from John Potter's
implementation
(http://groups.google.com/group/comp....daafacd01ce26),
is IMO safe (no output iterators) and efficient (no dynamic
allocation, no substr). Usable for all STL-like containers (except
map) with bidirectional iterators:

#include <algorithm>

template <typename StringT, typename ContainerT>
size_t tokenize (const StringT& text, const StringT& delim,
                 ContainerT& result) {
   size_t num = 0;
   typename StringT::size_type b = text.find_first_not_of(delim);
   while (b != StringT::npos) {
      typename StringT::size_type e(text.find_first_of(delim, b));
      StringT s (text.c_str() + b, e - b);
      result.insert (result.end(), StringT());
      typename ContainerT::iterator iter = result.end();
      (*--iter).swap (s);
      ++num;
      b = text.find_first_not_of(delim, std::min(e, text.size()));
   }
   return num;
}

For std::vector as the result container, efficiency can be increased
with reserve().

Best wishes,
Roland Pibinger
Jul 11 '06 #14

Roland Pibinger wrote:
On Tue, 11 Jul 2006 07:06:16 GMT, "Robbie Hatley"
<bo***********@no.spam> wrote:
So it really depends on which kind of function one wants to write:

1. Something efficient but dangerous, that requires having and
reading and understanding some external documentation to use it
correctly.

or

2. Something safe and easy and self-documenting, but a bit limited.

The following tokenize function, derived from John Potter's
implementation
(http://groups.google.com/group/comp....daafacd01ce26),
is IMO safe (no output iterators) and efficient (no dynamic
allocation, no substr). Usable for all STL-like containers (except
map) with bidirectional iterators:

#include <algorithm>

template <typename StringT, typename ContainerT>
size_t tokenize (const StringT& text, const StringT& delim,
ContainerT& result) {
size_t num = 0;
typename StringT::size_type b = text.find_first_not_of(delim);
while (b != StringT::npos) {
typename StringT::size_type e(text.find_first_of(delim, b));
I think e can be StringT::npos here and that would cause problems in
the line below.
StringT s (text.c_str() + b, e - b);
result.insert (result.end(), StringT());
typename ContainerT::iterator iter = result.end();
(*--iter).swap (s);
++num;
b = text.find_first_not_of(delim, std::min(e, text.size()));
}
return num;
}

For std::vector as the result container, efficiency can be increased
with reserve().

Best wishes,
Roland Pibinger
Jul 12 '06 #15
jmoy wrote:
>
strtok is one of the weird functions that maintain internal state, so
that you cannot tokenize two strings in an interleaved manner or use it
in a multithreaded program. POSIX offers a strtok_r which is somewhat
saner.
Strtok should have never been allowed to enter the standard. It is
an abysmal function and a relic from the days when C programmers
were pretty lousy software engineers (it's even worse than the
stupid-assed "portable" I/O library that should never have been
allowed to be made into the stdio).
Jul 12 '06 #16
On 11 Jul 2006 22:58:29 -0700, "Me*****@gmail.com" <Me*****@gmail.com>
wrote:
>Roland Pibinger wrote:
>>[tokenize() function snipped]
>>    typename StringT::size_type e(text.find_first_of(delim, b));
>
>I think e can be StringT::npos here and that would cause problems in
>the line below.
>
>>    StringT s (text.c_str() + b, e - b);
You are right. The while loop should rather be:

while (b != StringT::npos) {
   typename StringT::size_type
      e(std::min (text.find_first_of(delim, b), text.size()));
   StringT s (text.c_str() + b, e - b);
   result.insert (result.end(), StringT());
   typename ContainerT::iterator iter = result.end();
   (*--iter).swap (s);
   ++num;
   b = text.find_first_not_of(delim, std::min(e, text.size()));
}
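With that fix dropped into the function from my earlier post, usage is
roughly this (my own sketch; the reserve() figure is just a guess at
the token count):

#include <cstddef>
#include <string>
#include <vector>

// assumes the corrected tokenize() described above is in scope

int main()
{
    std::string text  = "alpha beta gamma";
    std::string delim = " ";

    std::vector<std::string> tokens;
    tokens.reserve(8);                         // optional, avoids reallocation
    size_t n = tokenize(text, delim, tokens);  // returns the number of tokens appended
    return n == 3 ? 0 : 1;
}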

Thank you,
Roland Pibinger
Jul 12 '06 #17
Roland Pibinger wrote:
[ ... ]
>is IMO safe (no output iterators) and efficient (no dynamic
>allocation, no substr). Usable for all STL-like containers (except
>map) with bidirectional iterators
[corrected tokenize() snipped]
I have also found this implementation is not usable with set; you only
mentioned that it does not work with map. Is that intentional?

Jul 13 '06 #18
On 12 Jul 2006 22:35:19 -0700, "Me*****@gmail.com" <Me*****@gmail.com>
wrote:
>I have also found this implementation is not usable with set; you only
>mentioned that it does not work with map. Is that intentional?
Let me add background information to the above implementation. The
performance characteristics for copying a std::string (the
std::basic_string template) are not specified by the C++ Standard.
Copying may be expensive ('deep') or cheap because of an optimization
(ref-counting or small-string optimization); it is implementation-specific.
The above code tries to avoid copying (and assignment) of non-empty
strings (assuming that copying an empty string is always cheap for
reasonable basic_string implementations). As shown in the mentioned
thread with John Potter's original code, an optimized version of the
tokenize function can considerably reduce dynamic memory allocation
for some string implementations and some usage scenarios.
I realize now that it's probably not a good idea to make the container
type a template parameter. That kind of optimization cannot be applied
to all Standard containers in the same way. Elements in a set, for
example, are sorted after insertion and immutable (otherwise the sort
order would be
compromised). Separate non-template implementations for each container
and each string type are preferable.
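The copy-avoidance trick in the function boils down to this idiom (a
minimal sketch, specialized to std::vector):

#include <string>
#include <vector>

// Instead of result.push_back(s), which may deep-copy s:
void append_without_copy(std::vector<std::string> & result, std::string & s)
{
    result.push_back(std::string()); // append a cheap empty string ...
    result.back().swap(s);           // ... then swap the real contents in; no copy of the characters
}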

Best regards,
Roland Pibinger

Jul 13 '06 #19
