why the usage of gets() is dangerous.

Hi all,

Whenever I use the gets() function, the GNU C compiler gives a
warning that it is dangerous to use gets(). Why?

regards,
jayapal.
Nov 16 '07
Malcolm McLean wrote, On 18/11/07 08:39:
>
"CBFalconer" <cb********@yahoo.comwrote in message
>Need not, not must not. gets() (and any replacement) can be
written in standard C and fully satisfy all the specifications. As
evidenced by my ggets() and others.
On top of fgetc().
Or fgets. However, I accept your point.
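A minimal sketch of such a replacement, built only on fgetc() (this is not
CBFalconer's actual ggets(); the name my_getline and its contract are
invented purely for illustration):

#include <stdio.h>

/* Read up to size-1 characters into buf, stopping at newline or EOF,
   and always NUL-terminate.  Characters that do not fit are read and
   discarded rather than written past the end of the buffer.  Returns
   buf, or NULL at end-of-file when nothing was stored. */
static char *my_getline(char *buf, size_t size, FILE *fp)
{
    size_t n = 0;
    int c;

    if (buf == NULL || size == 0)
        return NULL;
    while ((c = fgetc(fp)) != EOF && c != '\n') {
        if (n + 1 < size)
            buf[n++] = (char)c;
    }
    buf[n] = '\0';
    return (c == EOF && n == 0) ? NULL : buf;
}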
>I wish the library had been divided into things implementable
within the language, and things requiring extensions. Although
actually doing it might have been a horrendous task, and it is now
too late to try.
The standard library fits into a small book. I don't think it would take
more than a few minutes to go through ticking each one.

assert (portable)
the isxxxx character macros (portable*)
No, they require system specific knowledge.
tolower (portable*)
toupper (portable*)
math.h functions (portable but)
setjmp (compiler-specific)
longjmp (compiler-specific)
stdarg.h (compiler-specific)
stdio.h - all functions that take a FILE parameter need system calls, as
do those which take an implicit stdin / stdout.
At least two of the IO functions need to use a system specific method,
but the rest can be built on top of the two selected. The system
specific method is not necessarily a system call (on DOS it could be
accessing the keyboard and video HW directly, this might or might not be
a *good* choice but it is possible).
sprintf, vsprintf - (portable with one niggle)
remove, rename, tmpnam - (system call)
Or other system specific method.
tmpfile - interesting one. Needs either a malloc(0) call or access to the
filesystem.
I don't see how malloc(0) helps. You need access to the file system
because you are opening a file that does not already exist so you need
to either get the OS to do it or makes sure that the file you are
opening does not already exist.
atof, atoi, atol, strtod, strtol, strtoul (portable)
rand, srand (portable)
malloc family (in reality system, but theoretically portable except for
a niggle)
abort, exit (system / compiler-specific)
atexit - (portable)
Not really, it depends on how the startup code works.
system - (system)
getenv - (system)
bsearch, qsort - (portable)
abs, labs - (portable)
div, ldiv - (anyone heard of these? compiler-specific)
You could write them in standard C. It might not be the most efficient
method, but...
string functions - (portable, in reality compiler-specific)
clock - (system)
time - (system)
difftime, mktime, ctime, gmtime, localtime, strftime - (portable as long
as you know the internal time structure)
If you need to know the internal structure then it is not portable.
There, more or less done it.
Apart from the bits others disagree with, which shows it is not easy.
The problem is the division doesn't really work. sqrt() was originally a
portable function. Now most larger machines have dedicated root-finding
hardware, which in practice you must use.
That has applied to a number of the maths functions on some
implementations for a lot of years.
tolower and toupper, and the isxxxx macros can be written in C, but to
implement with any efficiency you need to know the execution character
encoding.
Others in the group need knowledge of the execution character set to
implement. For example, how do you implement isprint without knowing
which characters are printable?
Some functions, like longjmp(), do not need to make system calls, but
cannot be implemented without an intimate knowledge of the compiler.
sprintf() can be written completely portably, except for the %p field.
You can implement %p portably: you just write the pointer out a byte at a
time for sizeof(void*) bytes.
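As a sketch of that byte-at-a-time idea (the helper name dump_ptr is
invented; a real implementation can format %p however it likes):

#include <stdio.h>
#include <string.h>

/* Print a pointer by dumping its object representation as hex bytes.
   The textual form of %p is implementation-defined anyway, so a hex
   dump of sizeof(void *) bytes is as portable a choice as any. */
static void dump_ptr(const void *p)
{
    unsigned char bytes[sizeof p];
    size_t i;

    memcpy(bytes, &p, sizeof p);
    for (i = 0; i < sizeof p; i++)
        printf("%02x", (unsigned)bytes[i]);
}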
The string functions can be portable; in reality you'd want to take
advantage of alignment. malloc() realistically needs a system call
on all but the smallest machines that run only one program, but you can
write it using a global arena, except that there is no cast-iron way of
ensuring correct alignment in portable C.
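A sketch of the global-arena idea, with the old union trick used to get an
alignment that is probably, though not provably, sufficient (all names are
invented; C11's max_align_t later closed this particular gap):

#include <stddef.h>

/* A toy bump allocator carved out of a static arena.  The union forces
   the arena, and the rounding unit, to an alignment good enough for the
   listed types. */
union align { long l; double d; void *p; };

#define ARENA_BYTES (64u * 1024u)

static union align arena[ARENA_BYTES / sizeof(union align)];
static size_t arena_used;               /* in units of union align */

static void *arena_alloc(size_t nbytes)
{
    size_t nunits;

    if (nbytes == 0)
        return NULL;
    nunits = (nbytes + sizeof(union align) - 1) / sizeof(union align);
    if (nunits > sizeof arena / sizeof arena[0] - arena_used)
        return NULL;                    /* arena exhausted */
    arena_used += nunits;
    return &arena[arena_used - nunits];
}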
I agree with what I believe your main point is, i.e. that trying to
split the functions into ones that can be implemented portably and ones
that can't is pointless. My disputes of some of your categorisations
just show that it is not as easy to do as some people might think.
--
Flash Gordon
Nov 18 '07 #51

"Richard Tobin" <ri*****@cogsci.ed.ac.ukwrote in message
CBFalconer <cb********@maineline.netwrote:
>>Oh? Then why the frantic efforts right here on c.l.c to implement
sizeof?

As far as I can tell, most recent questions about this seem to be the
result of one particular C programming course, in India.
I'd expect an Indian to ask why sizeof(void) does not equal zero.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Nov 18 '07 #52
Keith Thompson wrote:
CBFalconer wrote:
>Malcolm McLean wrote:
>>"Richard Heathfield" <rj*@see.sig.invalidwrote:

It is possible, though rather difficult, to implement a safe
gets(), that is to say one that always terminates the program
with an error message if the buffer is exceeded.
Show me.
We'll declare that pointers consist of three values - the address,
the start of the object, and the end of the object. Now in the
write-to-array code we specify that if the address exceeds the
end of the object, the program is to terminate with an error
message.

No good. Pointers do not necessarily contain those components. You
have to make it safe within the guarantees provided by the C
standard.

No, he doesn't. You're asking for more than Malcolm claimed.

Malcolm didn't claim that it could be made safe within the guarantees
provided by the C standard. His claim is a much more modest one,
that it's possible for a (hypothetical) C implementation to provide a
"safe" gets() function, and I believe he's correct.
I don't think so.
His solution requires the use of "fat pointers", which are not
Methinks, fat pointers break pointer arithmetic and thus require at
least a new language dialect.

Also, the buffer passed to gets() may not be malloc'ed, but can be an
array, or even a sub-array.
I believe Malcolm's claim as stated is correct. It's not particularly
useful, but he didn't claim that it was; I believe it was merely an
intellectual exercise, not a serious proposal.
I can't see how Malcolm's claim can be correct. The only way I can see is if
the implementation restricts gets() buffer writes to some hard upper
limit, say one less than MAX_GETS_WRITE, then the

char buf[MAX_GETS_WRITE];

gets(buf);

would be safe.

--
Tor <bw****@wvtqvm.vw | tr i-za-h a-z>
Nov 20 '07 #53
Tor Rustad wrote:
Keith Thompson wrote:
....
His solution requires the use of "fat pointers", which are not

Methinks, fat pointers break pointer arithmetic and thus require at
least a new language dialect.
I don't believe that's the case; could you explain what breakage you
would expect?
Nov 20 '07 #54
In article <7a**********************************@d50g2000hsf.googlegroups.com>,
<ja*********@verizon.net> wrote:
>Tor Rustad wrote:
>Keith Thompson wrote:
...
His solution requires the use of "fat pointers", which are not

Methinks, fat pointers break pointer arithmetic and thus require at
least a new language dialect.

I don't believe that's the case; could you explain what breakage you
would expect?
While I agree with the sentiment behind Malcolm's idea (and I'm of the
opinion that most of the naysaying in this thread is of the usual
"usual negative comments" variety as noted by Jacob - i.e., the usual
nonsense), I do see this as being a bit difficult. It boils down to: Is
it possible to keep enough information in the system so that we can
know, for any possible pointer and/or pointer value, how much valid
memory there is after that pointer?

I can't think of any counter-examples off-hand, but that doesn't mean
there aren't any.

Nov 20 '07 #55
Tor Rustad wrote:
Keith Thompson wrote:
>CBFalconer wrote:
>>Malcolm McLean wrote:
"Richard Heathfield" <rj*@see.sig.invalidwrote:

>It is possible, though rather difficult, to implement a safe
>gets(), that is to say one that always terminates the program
>with an error message if the buffer is exceeded.
Show me.
We'll declare that pointers consist of three values - the address,
the start of the object, and the end of the object. Now in the
write-to-array code we specify that if the address exceeds the
end of the object, the program is to terminate with an error
message.

No good. Pointers do not necessarily contain those components. You
have to make it safe within the guarantees provided by the C
standard.

No, he doesn't. You're asking for more than Malcolm claimed.

Malcolm didn't claim that it could be made safe within the guarantees
provided by the C standard. His claim is a much more modest one,
that it's possible for a (hypothetical) C implementation to provide a
"safe" gets() function, and I believe he's correct.

I don't think so.
>His solution requires the use of "fat pointers", which are not

Methinks, fat pointers break pointer arithmetic and thus require at
least a new language dialect.
Hmm. You might be right. I don't care enough about a hypothetical
"safe" gets() to work out the details.
Also, the buffer passed to gets() may not be malloc'ed, but can be an
array, or even a sub-array.
Certainly. My assumption is that *all* pointers would be "fat". For
example, if you take the address of a single declared object, the
resulting pointer value includes the information that it points to a
single object (and dereferencing ptr+1 is therefore not allowed). When
an array name decays to a pointer to its first element, that pointer
value contains bounds information about the array. Pointer arithmetic
preserves and updates the bounds information.

A "fat" pointer might consist of three elements: the address of the
zeroth element of the array that contains the object being pointed to,
the index of the specific element, and the length of the array. All
operations that create pointers must correctly initialize this
information; all operations on pointers must preserve and update it.

I *think* that such an implementation could conform to the C standard,
and could detect many pointer bugs (at a perhaps unacceptable cost in
performance).
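As a rough sketch of the shape such a representation might take (written as
ordinary C purely for illustration; a real implementation would hide this
inside the compiler, and all the names here are invented):

#include <stddef.h>
#include <stdlib.h>

/* A compiler-internal "fat" char pointer: the base of the containing
   array, the index of the element currently designated, and the length
   of the array in elements. */
struct fat_char_ptr {
    char   *base;
    size_t  index;
    size_t  length;
};

/* Pointer arithmetic touches only the index; the bounds information
   travels along with the value. */
static struct fat_char_ptr fat_add(struct fat_char_ptr p, size_t n)
{
    p.index += n;
    return p;
}

/* Dereference with a bounds check: one-past-the-end may be formed but
   not dereferenced, so index must be strictly less than length. */
static char fat_deref(struct fat_char_ptr p)
{
    if (p.index >= p.length)
        abort();            /* the "terminate with an error" behaviour */
    return p.base[p.index];
}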
>I believe Malcolm's claim as stated is correct. It's not particularly
useful, but he didn't claim that it was; I believe it was merely an
intellectual exercise, not a serious proposal.

I can't see how Malcolm's claim can be correct. The only way I can see is if
the implementation restricts gets() buffer writes to some hard upper
limit, say one less than MAX_GETS_WRITE, then the

char buf[MAX_GETS_WRITE];

gets(buf);

would be safe.
Such an implementation of gets() would be non-conforming, since it
wouldn't allow you to read into a buffer bigger than MAX_GETS_WRITE bytes.

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Nov 20 '07 #56
Keith Thompson wrote, On 20/11/07 22:36:
Tor Rustad wrote:
>Keith Thompson wrote:
>>CBFalconer wrote:
Malcolm McLean wrote:
"Richard Heathfield" <rj*@see.sig.invalidwrote:
>
>>It is possible, though rather difficult, to implement a safe
>>gets(), that is to say one that always terminates the program
>>with an error message if the buffer is exceeded.
>Show me.
We'll declare that pointers consist of three values - the address,
the start of the object, and the end of the object. Now in the
write-to-array code we specify that if the address exceeds the
end of the object, the program is to terminate with an error
message.

No good. Pointers do not necessarily contain those components. You
have to make it safe within the guarantees provided by the C
standard.

No, he doesn't. You're asking for more than Malcolm claimed.

Malcolm didn't claim that it could be made safe within the guarantees
provided by the C standard. His claim is a much more modest one,
that it's possible for a (hypothetical) C implementation to provide a
"safe" gets() function, and I believe he's correct.

I don't think so.
>>His solution requires the use of "fat pointers", which are not

Methinks, fat pointers break pointer arithmetic and thus require at
least a new language dialect.

Hmm. You might be right. I don't care enough about a hypothetical
"safe" gets() to work out the details.
>Also, the buffer passed to gets() may not be malloc'ed, but can be an
array, or even a sub-array.

Certainly. My assumption is that *all* pointers would be "fat". For
example, if you take the address of a single declared object, the
resulting pointer value includes the information that it points to a
single object (and dereferencing ptr+1 is therefore not allowed). When
an array name decays to a pointer to its first element, that pointer
value contains bounds information about the array. Pointer arithmetic
preserves and updates the bounds information.

A "fat" pointer might consist of three elements: the address of the
zeroth element of the array that contains the object being pointed to,
the index of the specific element, and the length of the array. All
operations that create pointers must correctly initialize this
information; all operations on pointers must preserve and update it.

I *think* that such an implementation could conform to the C standard,
and could detect many pointer bugs (at a perhaps unacceptable cost in
performance).
Here is a problem for it. Assume that all allocations succeed and each
function is in a separate TU with appropriate headers etc...

char *p1;
char *p2;
char *p3;

char *alloc()
{
return p2 = p3 = malloc(5);
}

char *ralloc(char *orig)
{
return realloc(orig,10);
}

char *foo(void)
{
p1 = ralloc(alloc());
if (p1 == p2)
strcpy(p3,"Hello World");
else
strcpy(p3,"Bye");
}

If realloc has not moved the pointer then how will the size information
in the fat pointer p3 get updated? This is a contrived example, but
there might be more likely situations that would also be a problem, so
there would be a lot of work to nail down everything for a fat pointer
system.
>>I believe Malcolm's claim as stated is correct. It's not particularly
useful, but he didn't claim that it was; I believe it was merely an
intellectual exercise, not a serious proposal.

I can't see how Malcolm's claim can be correct. The only way I can see is if
the implementation restricts gets() buffer writes to some hard upper
limit, say one less than MAX_GETS_WRITE, then the

char buf[MAX_GETS_WRITE];

gets(buf);

would be safe.

Such an implementation of gets() would be non-conforming, since it
wouldn't allow you to read into a buffer bigger than MAX_GETS_WRITE bytes.
Even if it returned NULL to indicate an error?
--
Flash Gordon
Nov 20 '07 #57
Flash Gordon <sp**@flash-gordon.me.uk> writes:
Keith Thompson wrote, On 20/11/07 22:36:
<snip>
>A "fat" pointer might consist of three elements: the address of the
zeroth element of the array that contains the object being pointed
to, the index of the specific element, and the length of the array.
All operations that create pointers must correctly initialize this
information; all operations on pointers must preserve and update it.

I *think* that such an implementation could conform to the C
standard, and could detect many pointer bugs (at a perhaps
unacceptable cost in performance).

Here is a problem for it. Assume that all allocations succeed and each
function is in a separate TU with appropriate headers etc...

char *p1;
char *p2;
char *p3;

char *alloc()
{
return p2 = p3 = malloc(5);
}

char *ralloc(char *orig)
{
return realloc(orig,10);
}

char *foo(void)
{
p1 = ralloc(alloc());
if (p1 == p2)
strcpy(p3,"Hello World");
else
strcpy(p3,"Bye");
}

If realloc has not moved the pointer then how will the size
information in the fat pointer p3 get updated?
I don't think that example is valid. An implementation is allowed to
have realloc always move the object, and that is what I'd do if I were
implementing fat pointers.

--
Ben.
Nov 21 '07 #58
RoS
On Tue, 20 Nov 2007 23:18:50 +0000, Flash Gordon wrote:
>Here is a problem for it. Assume that all allocations succeed and each
function is in a separate TU with appropriate headers etc...

char *p1;
char *p2;
char *p3;

char *alloc()
{
return p2 = p3 = malloc(5);
}

char *ralloc(char *orig)
{
return realloc(orig,10);
}

char *foo(void)
{
p1 = ralloc(alloc());
if (p1 == p2)
strcpy(p3,"Hello World");
else
strcpy(p3,"Bye");
}

If realloc has not moved the pointer then how will the size information
in the fat pointer p3 get updated?
it is updated by realloc

if realloc does not move the pointer, p3 points to a valid address
and its size is updated by realloc

if realloc moves the pointer, p3 points to an invalid address
Nov 21 '07 #59
Tor Rustad <to********@hotmail.com> wrote:
Keith Thompson wrote:
CBFalconer wrote:
Malcolm McLean wrote:
"Richard Heathfield" <rj*@see.sig.invalidwrote:

It is possible, though rather difficult, to implement a safe
gets(), that is to say one that always terminates the program
with an error message if the buffer is exceeded.
Show me.
We'll declare that pointers consist of three values - the address,
the start of the object, and the end of the object. Now in the
write-to-array code we specify that if the address exceeds the
end of the object, the program is to terminate with an error
message.

No good. Pointers do not necessarily contain those components. You
have to make it safe within the guarantees provided by the C
standard.
No, he doesn't. You're asking for more than Malcolm claimed.

Malcolm didn't claim that it could be made safe within the guarantees
provided by the C standard. His claim is a much more modest one,
that it's possible for a (hypothetical) C implementation to provide a
"safe" gets() function, and I believe he's correct.

I don't think so.
In theory, he's correct. In practice, it depends on whether you think
either a predictable crash or predictable loss of data counts as "safe".
It is at least generally safer than having gets() write all over the end
of its target.
His solution requires the use of "fat pointers", which are not

Methinks, fat pointers break pointer arithmetic and thus require at
least a new language dialect.
No, they don't. Pointer arithmetic beyond the bounds of an object has
undefined behaviour anyway, and within an object it works fine with fat
pointers. Adding an integer to a pointer is now a matter of adding it to
a single field of the pointer structure, rather than to a flat index,
but something similar is needed with, e.g., segmented architectures.
Also, the buffer passed to gets() may not be malloc'ed, but can be an
array, or even a sub-array.
So? A sub-array simply has it recorded, in its fat pointer data, that it
is a sub-array, and what of.

Richard
Nov 21 '07 #60
Richard Bos wrote:
Tor Rustad <to********@hotmail.com> wrote:
[...]
>>His solution requires the use of "fat pointers", which are not
Methinks, fat pointers break pointer arithmetic and thus require at
least a new language dialect.

No, they don't.
Hmm... what about volatile, extern and variable-length arrays? Also, I
expect GCs to do low-level pointer magic.
Pointer arithmetic beyond the bounds of an object has
undefined behaviour anyway,
Off-by-one is allowed, see e.g. response to DR #76 and DR #221.

N1256 6.5.6p8:

"Moreover, if the expression P points to the last element of an array
object, the expression (P)+1 points one past the last element of the
array object, and if the expression Q points one past the last element
of an array object, the expression (Q)-1 points to the last element of
the array object. If both the pointer operand and the result point to
elements of the same array object, or one past the last element of the
array object, the evaluation shall not produce an overflow;
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
otherwise, the behavior is undefined. If the result points one past the
last element of the array object, it shall not be used as the operand of
a unary * operator that is evaluated."

For example:

int a[10], *p;

p = (a + 11) - 1; /* UB */
p = a + 10; /* No UB, but off-by-one */
(&*p); /* No UB in C99, but off-by-one */

and within an object it works fine with fat
pointers. Adding an integer to a pointer is now a matter of adding it to
a single field of the pointer structure, rather than to a flat index,
but something similar is needed with, e.g., segmented architectures.
Well, it appears to me that segmented architectures capable of C99
might not be able to have 65535 bytes for an object when using fat
pointers.
>Also, the buffer passed to gets() may not be malloc'ed, but can be an
array, or even a sub-array.

So? A sub-array simply has it recorded, in its fat pointer data, that it
is a sub-array, and what of.
So... the compiler would generate instrumented code then, lots of
run-time checks! This is getting far too theoretical... fat pointers
make sense to me *if* C had such a new pointer type; otherwise everyone needs
to use "the same" compiler, or how do you suggest we link in libraries
in practice?

--
Tor <bw****@wvtqvm.vw | tr i-za-h a-z>
Nov 21 '07 #61
Keith Thompson wrote:
Tor Rustad wrote:
[...]
>the implementation restricts gets() buffer writes to some hard upper
limit, say one less than MAX_GETS_WRITE, then the

char buf[MAX_GETS_WRITE];

gets(buf);

would be safe.

Such an implementation of gets() would be non-conforming, since it
wouldn't allow you to read into a buffer bigger than MAX_GETS_WRITE bytes.
MAX_GETS_WRITE could give some system-dependent limit of the max length
of an input line. IIRC, Dan Pop had such a DOS box in his office once,
which made using gets() perfectly safe on that system.

--
Tor <bw****@wvtqvm.vw | tr i-za-h a-z>
Nov 21 '07 #62

"RoS" <Ro*@not.existwrote in message
In data Tue, 20 Nov 2007 23:18:50 +0000, Flash Gordon scrisse:
>>Here is a problem for it. Assume that all allocations succeed and each
function is in a separate TU with appropriate headers etc...

char *p1;
char *p2;
char *p3;

char *alloc()
{
return p2 = p3 = malloc(5);
}

char *ralloc(char *orig)
{
return realloc(orig,10);
}

char *foo(void)
{
p1 = ralloc(alloc());
if (p1 == p2)
strcpy(p3,"Hello World");
else
strcpy(p3,"Bye");
}

If realloc has not moved the pointer then how will the size information
in the fat pointer p3 get updated?

it is update from realloc

if realloc not move the pointer, p3 point to a valid address
and its size is changed from realloc (updated by realloc)

if realloc move the pointer, p3 point to not valid address
p3 is invalidated by the realloc() call. If it happens to still point to an
area of memory that is physically under the control of the program that is
pure chance.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Nov 21 '07 #63

"Kenny McCormack" <ga*****@xmission.xmission.comwrote in message
I do see this as being a bit difficult. It boils down to: Is
it possible to keep enough information in the system so that we can
know, for any possible pointer and/or pointer value, how much valid
memory there is after that pointer?

I can't think of any counter-examples off-hand, but that doesn't mean
there aren't any.
The difficult situation was given by Ben Bacarisse

struct any
{
char array[10];
int x;
};

struct any list[2];

char *ptr = list[1].array;
struct any *ptr2 = (struct any *) ptr;
struct any *ptr3 = ptr2-1;

ptr3 is defined. So we need an additional "mother" field in our fat pointer,
purely to give the correct bounds to ptr2.

In practice, of course, the standard would have to be tweaked if fat pointers
ever gained currency.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Nov 21 '07 #64
Tor Rustad wrote:
Keith Thompson wrote:
>Tor Rustad wrote:

[...]
>>the implementation restricts gets() buffer writes to some hard upper
limit, say one less than MAX_GETS_WRITE, then the

char buf[MAX_GETS_WRITE];

gets(buf);

would be safe.

Such an implementation of gets() would be non-conforming, since it
wouldn't allow you to read into a buffer bigger than MAX_GETS_WRITE
bytes.

MAX_GETS_WRITE could give some system-dependent limit of the max length
of an input line. IIRC, Dan Pop had such a DOS box in his office once,
which made using gets() perfectly safe on that system.
Ok, good point. A length-limited gets() could be conforming on a system that
already imposes a maximum length on input lines.

But on systems that don't impose such a limit, gets() must be able to read
arbitrarily long lines, as long as the provided buffer is big enough.
(Normally, of course, gets can't tell how big the buffer is, which is why it's
so dangerous in most cases.)
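For contrast, the usual alternative is fgets(): it is told the buffer size,
so it cannot write past the end. A minimal sketch of that pattern:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[128];

    if (fgets(buf, sizeof buf, stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';   /* drop the newline, if present */
        printf("read: %s\n", buf);
    }
    return 0;
}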

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Nov 22 '07 #65
Malcolm McLean wrote:
"RoS" <Ro*@not.existwrote in message
>Flash Gordon scrisse:
(some code compression, ignoring missing #includes)
>>
>>char *p1, *p2, *p3;

char *alloc() {
return p2 = p3 = malloc(5);
}

char *ralloc(char *orig) {
return realloc(orig,10);
}

char *foo(void) {
p1 = ralloc(alloc());
if (p1 == p2) strcpy(p3, "Hello World");
else strcpy(p3, "Bye");
}
Faulty routine - fails to return a value.
>>>
If realloc has not moved the pointer then how will the size
information in the fat pointer p3 get updated?

it is update from realloc

if realloc not move the pointer, p3 point to a valid address
and its size is changed from realloc (updated by realloc)

if realloc move the pointer, p3 point to not valid address

p3 is invalidated by the realloc() call. If it happens to still
point to an area of memory that is physically under the control
of the program that is pure chance.
This illustrates the dangers of keeping copies of allocated
pointers. p3 is not necessarily invalidated. However, if (p1 !=
p2) then p3 has been invalidated. You have no control over this.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 22 '07 #66
RoS
On Wed, 21 Nov 2007 18:35:01 -0800, Keith Thompson wrote:
>Tor Rustad wrote:
>Keith Thompson wrote:
>>Tor Rustad wrote:

[...]
>>>the implementation restricts gets() buffer writes to some hard upper
limit, say one less than MAX_GETS_WRITE, then the

char buf[MAX_GETS_WRITE];

gets(buf);

would be safe.

Such an implementation of gets() would be non-conforming, since it
wouldn't allow you to read into a buffer bigger than MAX_GETS_WRITE
bytes.

MAX_GETS_WRITE could give some system-dependent limit of the max length
of an input line. IIRC, Dan Pop had such a DOS box in his office once,
which made using gets() perfectly safe on that system.

Ok, good point. A length-limited gets() could be conforming on a system that
already imposes a maximum length on input lines.

But on systems that don't impose such a limit, gets() must be able to read
arbitrarily long lines, as long as the provided buffer is big enough.
(Normally, of course, gets can't tell how big the buffer is, which is why it's
so dangerous in most cases.)
If the compiler keeps an array of elements,
each recording a char *where and a size_t size,
describing every memory object returned from malloc or
allocated on the stack,
it is possible to write a routine that says whether an address is allowed to
be read or written; this means it is possible to write a safe gets(),
in the sense that the gets() function cannot overflow the input array.

#include <stddef.h>   /* size_t */
#include <stdio.h>    /* getchar, ferror, stdin */

/* One entry per tracked memory object; the table is terminated by an
   entry whose where member is a null pointer. */
typedef struct { char *where; size_t size; } arrayelement;
extern arrayelement whereitis[];

int isok(char *a)
{
    arrayelement *p = whereitis;
    size_t i;

    for (i = 0; p[i].where != 0; ++i) {
        if (a >= p[i].where && a < p[i].where + p[i].size)
            return 1;
    }
    return 0;
}

size_t sizefromhere(char *a)
{
    arrayelement *p = whereitis;
    size_t i;

    for (i = 0; p[i].where != 0; ++i) {
        if (a >= p[i].where && a < p[i].where + p[i].size)
            return p[i].size - (a - p[i].where);
    }
    return 0;
}

so gets could be something like

char *gets(char *buf)
{
    size_t limit, h;
    int c;

    limit = sizefromhere(buf);      /* 0 means buf is not in a tracked object */
    if (limit == 0)
        return 0;
    h = 0;
    for (;;) {
        c = getchar();
        if (c == EOF) {
            buf[h] = 0;
            return (ferror(stdin) || h == 0) ? 0 : buf;
        }
        if (c == '\n') {            /* the newline is read but not stored */
            buf[h] = 0;
            return buf;
        }
        if (h == limit - 1) {       /* only the terminator would fit now */
            buf[h] = 0;             /* truncate and report failure */
            return 0;
        }
        buf[h++] = (char)c;
    }
}

Nov 22 '07 #67
Tor Rustad <to********@hotmail.com> wrote:
Richard Bos wrote:
Tor Rustad <to********@hotmail.com> wrote:
>His solution requires the use of "fat pointers", which are not
Methinks, fat pointers break pointer arithmetic and thus require at
least a new language dialect.
No, they don't.

Hmm... what about volatile, extern and variable-length arrays?
Why should they be different?
Also, I expect GC's, to do low-level pointer magic.
But GC in itself breaks ISO C compatibility. GC introduces a new
dialect; fat pointers do not.
Pointer arithmetic beyond the bounds of an object has
undefined behaviour anyway,

Off-by-one is allowed, see e.g. response to DR #76 and DR #221.
Of course, but that still doesn't break fat pointers.
and within an object it works fine with fat
pointers. Adding an integer to a pointer is now a matter of adding it to
a single field of the pointer structure, rather than to a flat index,
but something similar is needed with, e.g., segmented architectures.

Well, it appears to me that that segmented architectures capable of C99,
might not be able to have 65535 bytes for an object, when using fat
pointers.
Why on earth not? For starters, who says that segments _must_ be 1980s
Intel 8088-compatible segments?
Also, the buffer passed to gets() may not be malloc'ed, but can be an
array, or even a sub-array.
So? A sub-array simply has it recorded, in its fat pointer data, that it
is a sub-array, and what of.

So... the compiler would generate instrumented code then, lots of
run-time checks!
Yes. That's most of the point of fat pointers. They are generally used in
debugging implementations. I don't know of any normal C implementation
which uses them.
This is getting far too theoretical... fat pointers make sense to me *if*
C had such a new pointer type; otherwise everyone needs to use "the same"
compiler, or how do you suggest we link in libraries in practice?
You misunderstand. Fat pointers are not a new pointer type within ISO C.
Rather, they are a way to implement normal C pointers behind the scenes.
And yes, you do need to link fat-pointer-compiled object files with
fat-pointer-compiled libraries; much as you need to link 64-bit object
files with 64-bit libraries, little-endian object files with little-
endian libraries, and on MS-DOS used to link large memory model object
files with large memory model libraries.

Richard
Nov 22 '07 #68
Malcolm McLean wrote, On 21/11/07 22:35:
>
"RoS" <Ro*@not.existwrote in message
>In data Tue, 20 Nov 2007 23:18:50 +0000, Flash Gordon scrisse:
>>Here is a problem for it. Assume that all allocations succeed and each
function is in a separate TU with appropriate headers etc...

char *p1;
char *p2;
char *p3;

char *alloc()
{
return p2 = p3 = malloc(5);
}

char *ralloc(char *orig)
{
return realloc(orig,10);
}

char *foo(void)
{
p1 = ralloc(alloc());
if (p1 == p2)
I should really have used memcmp here to avoid the problem of evaluating
a possibly invalid pointer. Yes, memcmp could find a difference when the
realloc has not moved the allocated space, but...
>> strcpy(p3,"Hello World");
else
strcpy(p3,"Bye");
}

If realloc has not moved the pointer then how will the size information
in the fat pointer p3 get updated?

it is update from realloc

if realloc not move the pointer, p3 point to a valid address
and its size is changed from realloc (updated by realloc)

if realloc move the pointer, p3 point to not valid address
p3 is invalidated by the realloc() call.
Not necessarily.
If it happens to still point to
an area of memory that is physically under the control of the program
that is pure chance.
I was attempting to be careful to only use p3 if it was still valid.
Although as I point out above I made a mistake.
--
Flash Gordon
Nov 22 '07 #69
Ben Bacarisse wrote, On 21/11/07 00:59:
Flash Gordon <sp**@flash-gordon.me.uk> writes:
>Keith Thompson wrote, On 20/11/07 22:36:
<snip>
>>A "fat" pointer might consist of three elements: the address of the
zeroth element of the array that contains the object being pointed
to, the index of the specific element, and the length of the array.
All operations that create pointers must correctly initialize this
information; all operations on pointers must preserve and update it.

I *think* that such an implementation could conform to the C
standard, and could detect many pointer bugs (at a perhaps
unacceptable cost in performance).
Here is a problem for it. Assume that all allocations succeed and each
function is in a separate TU with appropriate headers etc...
<snip>
>If realloc has not moved the pointer then how will the size
information in the fat pointer p3 get updated?

I don't think that example is valid. An implementation is allowed to
have realloc always move the object, and that is what I'd do if I were
implementing fat pointers.
OK, the implementer can make that choice and thus prevent my idea from
breaking things.
--
Flash Gordon
Nov 22 '07 #70
CBFalconer wrote, On 22/11/07 01:43:
Malcolm McLean wrote:
>"RoS" <Ro*@not.existwrote in message
>>Flash Gordon scrisse:
(some code compression, ignoring missing #includes)
>>>char *p1, *p2, *p3;

char *alloc() {
return p2 = p3 = malloc(5);
}

char *ralloc(char *orig) {
return realloc(orig,10);
}

char *foo(void) {
p1 = ralloc(alloc());
if (p1 == p2) strcpy(p3, "Hello World");
else strcpy(p3, "Bye");
}

Faulty routine - fails to return a value.
Only if the return value is used...

OK, I was going to do some more stuff but ended up not bothering. It is
also irrelevant to the main points.
>>>If realloc has not moved the pointer then how will the size
information in the fat pointer p3 get updated?
it is update from realloc

if realloc not move the pointer, p3 point to a valid address
and its size is changed from realloc (updated by realloc)

if realloc move the pointer, p3 point to not valid address
p3 is invalidated by the realloc() call. If it happens to still
point to an area of memory that is physically under the control
of the program that is pure chance.

This illustrates the dangers of keeping copies of allocated
pointers. p3 is not necessarily invalidated. However, if (p1 !=
p2) then p3 has been invalidated. You have no control over this.
The function checked for this, although the real error that no one
pointed out was not using memcmp for the comparison.
--
Flash Gordon
Nov 22 '07 #71
In article <47*****************@news.xs4all.nl>
Richard Bos <rl*@hoekstra-uitgeverij.nl> wrote:
>... you do need to link fat-pointer-compiled object files with
fat-pointer-compiled libraries; much as you need to link 64-bit object
files with 64-bit libraries, little-endian object files with little-
endian libraries, and on MS-DOS used to link large memory model object
files with large memory model libraries.
All true. Note, however, that "fat-to-thin shims" are easy to
construct: if fat_f() is a function compiled with "fat" pointers,
and it calls thin_g() in a library compiled with "thin" pointers
while passing a pointer, the call need only pass through a "skim
off the fat" layer (fat_g_to_thin_g() perhaps). A compiler could
even generate such a shim "on the fly" at link-time.

Going the other direction -- from thin to fat, including thin_g()'s
return value if it returns a pointer -- is considerably more
difficult. There are several ways to deal with this, with different
tradeoffs.
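A sketch of what such a fat-to-thin shim might look like, assuming the
<currentvalue, base, limit> representation described just below (the function
names follow the ones used above and are purely illustrative):

/* The library routine, compiled with ordinary "thin" pointers. */
extern void thin_g(char *p);

/* The representation used by fat-compiled code. */
struct fat_ptr {
    char *current;
    char *base;
    char *limit;
};

/* "Skim off the fat": fat-compiled callers call this wrapper, which
   forwards only the machine address.  The bounds do not survive the
   call, which is why going back from thin to fat is the hard part. */
static void fat_g_to_thin_g(struct fat_ptr p)
{
    thin_g(p.current);
}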

Contrary to Chuck F's followup, it *is* possible to implement fat
pointers in C, despite cast-conversions and malloc() and different
array sizes and pointer usage and so on. Again, there are multiple
ways to deal with various issues, with different tradeoffs.

In general, the simplest method is to have all "fat pointers"
represented as a triple: <currentvalue, base, limit>. A pointer
is "valid" if its current-value is at least as big as its base and
no bigger than its limit:

if (p.current >= p.base && p.current <= p.limit)
/* pointer value is valid */;
else
/* pointer value is invalid */;

A pointer is valid-for-dereference if it is valid (as above) *and*
strictly less than its limit. This is so that we can compute
&array[N] (where N is the size of the array) but not access the
nonexistent element array[N].

Given this type of "fat pointer", the simplest thin-to-fat conversion
is done like this:

fat_pointer make_fat_pointer(machine_pointer_type value) {
fat_pointer result;

result.current = value;
result.base = (machine_pointer_type) MINIMUM_MACHINE_ADDRESS;
result.limit = (machine_pointer_type) MAXIMUM_MACHINE_ADDRESS;
return result;
}

Clearly this is somewhat undesirable, as it means all "thin-derived"
pointers lose all protection.

When dealing with pointers to objects embedded within larger objects
(such as elements of "struct"s), the simplest method is again to
widen the base-and-limit to encompass the large object. Consider,
e.g.:

% cat derived.c
#include "base.h"
struct derived {
struct base common;
int additional;
};

void basefunc(struct base *, void (*)(struct base *)); /* in base.c */
static void subfunc(struct base *);

void func(void) {
struct derived var;
...
basefunc(&var.common, subfunc);
...
}

static void subfunc(struct base *p0) {
struct derived *p = (struct derived *)p0; /* line X */
... use p->common and p->additional here ...
}

There is clearly no problem in the call to basefunc(), even if we
"narrow" the "fat pointer" to point only to the sub-structure
&var.common. However, when basefunc() "calls back" into subfunc(),
as presumably it will, with a "fat pointer" to the "common" part
of a "struct derived", we will have to "re-widen" the pointer. We
can do that at the point of the cast (line X), or simply avoid
"narrowing" the fat pointer at the call to basefunc(), so that when
basefunc() calls subfunc(), it passes a pointer to the entire
structure "var", rather than just var.common.

(It is probably the case, not that I have thought about it that
much, that we can pass only the "fully widened to entire object"
pointer if we are taking the address of the *first* element of
the structure. That is, C code of the form:

struct multi_inherit {
struct base1 b1;
struct base2 b2;
int additional;
};

static void callback(struct base2 *);

void func(void) {
struct multi_inherit m;
...
b2func(&m.b2, callback);
...
}

static void callback(struct base2 *p0) {
struct multi_inherit *p;

p = (struct multi_inherit *)
((char *)p0 - offsetof(struct multi_inherit, b2)); /* DANGER */
... use p->b1, p->b2, and p->additional ...
}

is "iffy" at the line marked "danger", although it works in practice
on real C compilers. If the call to b2func() passes a pointer
whose base is &m.b2 and whose limit is (&m.b2 + 1), the cast in
callback() must somehow both reduce the base and increase the limit.
The C Standard says that a pointer to the first element of a
structure can be converted back to a pointer to the entire structure,
but says nothing about this kind of tricky subtraction to go from
"middle of structure" to "first element" and thence to "entire
structure". Still, several ways to handle this -- either by
forbidding it, or by recognizing such subtractions embedded within
cast expressions -- are obvious.)

A more complicated way to implement "fat pointers", which also
provides a more useful way to go from "thin" to "fat", is to record
thin-to-fat conversions in one or more runtime tables, and do
lookups as needed:

fat_pointer make_fat_pointer(machine_pointer_type value) {
fat_pointer *p;

p = look_up_in_table(value);
if (p == NULL)
__runtime_exception("invalid pointer");
return *p;
}

The compiler can cache these computed fat pointers, use them
internally, pass them around, or "thin" them (by simply taking
p.current) at any time as needed for compatibility with code compiled
without the fat pointers. But there is a significant runtime cost
whenever going from "thin" to "fat", typically greater than that
added by verifying p.current as needed. (Note also that malloc()
must manipulate the, or a, fat-pointer table, if pointers that come
out of malloc() are ever to be looked-up. Thus, you *can* link
against various thin functions, but never against "thin malloc".)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 23 '07 #72
RoS
On Thu, 22 Nov 2007 08:57:22 +0100, RoS wrote:
>if the compiler has a array of elements
arrayelement{char* where; size_t size}
that point to each memory object in the memory returned from malloc or
allocated in the stack
it is possible to write a routine that says if one address is allow to
read-write or not; this mean it is possible to write a safe gets()
in the sense: make gets() function not overflow the input array

int isok(char* a)
{arrayelement *p=&whereitis;
for(i=0; p[i].where!=0; ++i)
{if(a>p[i].where && a<p[i].where+p[i].size)
return 1;
}
return 0;
}

size_t sizefromhere(char* a)
{arrayelement *p=&whereitis;
for(i=0; p[i].where!=0; ++i)
{if(a>p[i].where && a<p[i].where+p[i].size)
return p[i].size-(a-p[i].where);
}
return 0;
}
so gets could be something like
Yes; gets(), if its argument is stack memory, should always return 0
(why waste cycles and memory space tracking automatic objects?).

If the argument is malloc'd heap memory (which in my case has the list of
available blocks and their sizes), it could be something like below
>char* gets(char* buf)
{char *p;
int c;
size_t limit, h;
p=buf;
limit=sizefromhere(buf);
if(limit==0) return 0;
h=0;
l0:;
if(h==limit)
{l1:;
p[h]=0;
return 0;
}
c=getchar();
if(c==EOF)
{if(ferror(stdin)) goto l1
p[h]=0;
return buf;
}
if(c=='\n')
{p[h]='\n'; /* limit-1 */
p[h+1]=0; /* limit */
return buf;
}
p[h]=c; ++h; goto l0;
}
Nov 23 '07 #73
In article <ku************@news.flash-gordon.me.uk> Flash Gordon <sp**@flash-gordon.me.uk> writes:
CBFalconer wrote, On 23/11/07 03:15:
....
I maintain that 'fat' pointers are incompatible with C in the first
place.

I disagree. They are not easy to implement, but there are ways around
all of the problems if you work hard enough at it.
I think a way would be to have a fat pointer consist of three parts.
The first part would be the traditional pointer, the second part a
pointer to the first element of the memory block, and the third part
the size of the memory block.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 23 '07 #74
Chris Torek wrote:
Richard Bos <rl*@hoekstra-uitgeverij.nl> wrote:
>... you do need to link fat-pointer-compiled object files with
fat-pointer-compiled libraries; much as you need to link 64-bit
object files with 64-bit libraries, little-endian object files
with little-endian libraries, and on MS-DOS used to link large
memory model object files with large memory model libraries.
.... snip ...
>
In general, the simplest method is to have all "fat pointers"
represented as a triple: <currentvalue, base, limit>. A pointer
is "valid" if its current-value is at least as big as its base
and no bigger than its limit:
Just picking on one thing, based on the premise that it has to work
everywhere or it is useless.
original is realloced, it may be totally invalid. The above won't
pick this up.

(If every use of a pointer is accompanied by a grand sweep of
possibilities, things may be possible. However the resultant code
would be totally useless and spend all its time sweeping.)

I have made my share of mistakes in code-generation, and I want
things simple and consistent.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 24 '07 #75
santosh wrote:
CBFalconer <cb********@yahoo.com> wrote:
.... snip ...
>
>Don't forget that these pointers must arise from allocated memory,
static memory, and auto memory. Some operations (such as free)
become invalid on incremented/decremented pointers.

Unless such an improvement can handle EVERY type of occurrence, it
is better to simply not provide the 'improvement'. Now the poor
programmer may even have to think.

I am not knowledgeable enough with C to say whether fat pointers
break its rules sufficiently severely to rule out their inclusion,
but from what I know, I can't see how it would be non-permissible.

Obviously it would require a lot of behind-the-scenes compiler
magic, and is likely to severely degrade performance, but it ought
to be, from what I know, possible. Of course I'm likely to be
proved wrong in a few minutes by an expert here.
I don't think you need too much experience to note the troubles.
Just try creating some sub-system compilers, that handle all sorts
of pointers. I think you will rapidly find that the backup
information required makes things impossible very rapidly. Build
your own structures for keeping track of pointers, and remember to
update them and the handling code whenever you consider another
input version.

The basic problem is that a C pointer is not restricted.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 24 '07 #76
In article <xCU1j.7238$dh.4161@trnddc05>, James Kuyper
<ja*********@verizon.net> wrote on Saturday 24 Nov 2007 5:46 pm:
CBFalconer wrote:
<snip>
You said in your response to Flash Gordon, "I maintain this is not
practical." You are, of course, correct. I don't know of any efficient
way of implementing fat pointers. They are intended to help test for
code correctness; the performance cost of using them would be high,
and would only rarely be acceptable in production code. I say "would
be", because I believe that actual implementations with fat pointers
are rare; possibly non-existent.
Apparently not non-existent, if the following page is to be believed.

<http://www.springerlink.com/content/qhb2pnhvr9w96nfc/>

Also, Cyclone is a well-known language based on C which implements
fat pointers.

Nov 24 '07 #77
James Kuyper wrote:

[snip]
Also, quite frankly, it's virtually impossible to write up something
that long and complicated without making several mistakes, and I prefer
to avoid publishing something like that on a public forum known for the
viciousness with which mistakes are attacked.
I think this is the big problem with this forum, and I have been
trying to fight this for over 2 years now. The only solution is to
ignore the vicious attacks. Keep in mind that those people are
just unable to propose anything positive and their only way to
express themselves is through those attacks.

It suffices to ignore them, and discuss normally.

For instance, the rounding thread was full of
those. I ignored them, and I think my opinion came through
in the end.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Nov 24 '07 #78
santosh wrote:
<ja*********@verizon.net> wrote:

<snip>
>You said in your response to Flash Gordon, "I maintain this is
not practical." You are, of course, correct. I don't know of any
efficient way of implementing fat pointers. They are intended to
help test for code correctness; the performance cost of using
them would be high, and would only rarely be acceptable in
production code. I say "would be", because I believe that actual
implementations with fat pointers are rare; possibly non-existent.

Apparently not non-existent, if the following page is to be believed.

<http://www.springerlink.com/content/qhb2pnhvr9w96nfc/>

Also Cyclone is well known language based on C, which also implements
fat pointers.
However this is about C, not Cyclone. I have no difficulty
implementing proper range checking in Pascal, for example. Having
done so, I have a fair idea of the evil difficulties of adapting to
C pointers.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 24 '07 #79
Flash Gordon wrote:
CBFalconer wrote:
.... snip ...
>
>I disagree. The first requirement is some sweeping statements.
Further investigation may prove or disprove them. But it only
takes one impossibility to make the whole attack impracticable.

So provide such an impossibility then you will have proved your
position.
So here is another. Imagine a routine to upshift a string. One
routine receives a char, and answers with 'this is lower case'.
Another receives a char, and answers by replacing it with the upper
case equivalent. Both are passed pointers.

Assuming the original data is a string, the calling routine will
pass something like:

p = &(s[3]); or p = s + 3; (p is parameter)

For the reading routine, there is no harm in allowing reads from (p
- n), where n can be 0 through 3. For the writing routine, this is
not allowable. How do we separate the actions? Further, the
optional read/write may be in one routine, so the multiple access
allowance must be passed in the parameter to everything. After
all, that access may be passed on to another routine.

Yes, these objections are fairly hazy, and I am not willing to try
and work them all out (especially since I am convinced they can't
be so worked out). The fact that a further access requires further
supervision data (rather than a revision of existing supervision
data) means unlimited storage requirements.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 24 '07 #80
On Sat, 24 Nov 2007 10:16:52 -0500, CBFalconer wrote:
So here is another. Imagine a routine to upshift a string. One routine
receives a char, and answers with 'this is lower case'. Another receives
a char, and answers by replacing it with the upper case equivalent.
Both are passed pointers.

Assuming the original data is a string, the calling routine will pass
something like:

p = &(s[3]); or p = s + 3; (p is parameter)

For the reading routine, there is no harm in allowing reads from (p -
n), where n can be 0 through 3. For the writing routine, this is not
allowable.
Why not? Is it because the behaviour would be undefined, or is it because
the function's actions would be different from its description? If the
former, I'm not seeing it, so could you please explain? If the latter, as
long as it's valid C, there's no reason why an implementation would or
should complain about it.
Nov 24 '07 #81
RoS
On Thu, 22 Nov 2007 08:57:22 +0100, RoS wrote:
>if the compiler has a array of elements
arrayelement{char* where; size_t size}
that point to each memory object in the memory returned from malloc or
allocated in the stack
it is possible to write a routine that says if one address is allow to
read-write or not; this mean it is possible to write a safe gets()
in the sense: make gets() function not overflow the input array

int isok(char* a)
{arrayelement *p=&whereitis;
for(i=0; p[i].where!=0; ++i)
{if(a>p[i].where && a<p[i].where+p[i].size)
return 1;
}
return 0;
}

size_t sizefromhere(char* a)
{arrayelement *p=&whereitis;
for(i=0; p[i].where!=0; ++i)
{if(a>p[i].where && a<p[i].where+p[i].size)
return p[i].size-(a-p[i].where);
}
return 0;
}
so gets could be something like
Yes; gets(), if its argument is stack memory, should always return 0
(why waste cycles and memory space debugging automatic objects?).

If the argument is malloc'd heap memory (which in my case has the list of
available blocks and their sizes), it could be something like below
>char* gets(char* buf)
{char *p;
int c;
size_t limit, h;
p=buf;
limit=sizefromhere(buf);
if(limit==0) return 0;
h=0;
l0:;
if(h==limit)
{l1:;
p[h]=0;
return 0;
}
c=getchar();
if(c==EOF)
{if(ferror(stdin)) goto l1
p[h]=0;
return buf;
}
if(c=='\n')
{p[h]='\n'; /* limit-1 */
p[h+1]=0; /* limit */
return buf;
}
p[h]=c; ++h; goto l0;
}
Nov 24 '07 #82
In article <aL******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> wrote:
>James Kuyper said:

<snip>
>Also, quite frankly, it's virtually impossible to write up something
that long and complicated without making several mistakes, and I prefer
to avoid publishing something like that on a public forum known for the
viciousness with which mistakes are attacked.

On the whole, mistakes in clc articles are identified and corrected, but
not attacked. I have made a great many mistakes in clc articles over the
last few years, but I have never been attacked for them (except by trolls,
and I don't count those).
That's because you are an accepted "regular" and most defer to you.
Or, to put it another way, your definition of "troll" is:

One who has the balls to attack you.

(Therefore, it is definitionally true that the only posters who attack
you are "trolls").
>I have also corrected a great many mistakes in
clc articles over the last few years, but I've never attacked a mistake.
Liar. Two words: Jacob Navia.

Nov 24 '07 #83
CBFalconer wrote, On 24/11/07 15:24:
santosh wrote:
><ja*********@verizon.net> wrote:
<snip fat pointer C implementation discussion>
>>production code. I say "would be", because I believe that actual
implementations with fat pointers are rare; possibly non-existent.
Apparently not non-existent, if the following page is to be believed.

<http://www.springerlink.com/content/qhb2pnhvr9w96nfc/>

Also Cyclone is well known language based on C, which also implements
fat pointers.

However this is about C, not Cyclone. I have no difficulty
implementing proper range checking in Pascal, for example. Having
done so, I have a fair idea of the evil difficulties of adapting to
C pointers.
Read the post again and then read the article. One sentence in
particular from the article is, "This paper describes a memory-safe
implementation of the full ANSI C language."

The reference to Cyclone was a reference to something else related and
potentially interesting, not a reference to the article. Hence the word
"Also" in the post.
--
Flash Gordon
Nov 24 '07 #84
Harald van Dijk wrote, On 24/11/07 17:29:
On Sat, 24 Nov 2007 10:16:52 -0500, CBFalconer wrote:
>So here is another. Imagine a routine to upshift a string. One routine
receives a char, and answere with 'this is lower case'. Another receives
a char, and answers by replacing it with the upper case equivalent.
Both are passed pointers.

Assuming the original data is a string, the calling routine will pass
something like:

p = &(s[3]); or p = s + 3; (p is parameter)

For the reading routine, there is no harm in allowing reads from (p -
n), where n can be 0 through 3. For the writing routine, this is not
allowable.

Why not? Is it because the behaviour would be undefined, or is it because
the function's actions would be different from its description? If the
former, I'm not seeing it, so could you please explain? If the latter, as
long as it's valid C, there's no reason why an implementation would or
should complain about it.
I agree. Further, if the routine that is not allowed to write has the
parameter declared as a pointer to const the compiler will complain
about it.
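A small illustration of that point (the names are invented): the read-only
routine takes a pointer to const, so an attempt to write through its parameter
is a constraint violation that the compiler must diagnose, with no run-time
machinery needed.

#include <ctype.h>

/* Read-only query: writing through p here (e.g. *p = 'X';) would be
   rejected at compile time because p points to const. */
static int is_lower_at(const char *p)
{
    return islower((unsigned char)*p);
}

/* Writing routine: takes a modifiable pointer. */
static void upcase_at(char *p)
{
    *p = (char)toupper((unsigned char)*p);
}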

Someone else has posted a link to an article about a fat pointer
implementation of C thus providing strong evidence that such an
implementation is possible.

My position is that it is difficult rather than impossible.
--
Flash Gordon
Nov 24 '07 #85
jacob navia wrote, On 24/11/07 16:38:
James Kuyper wrote:

[snip]
>Also, quite frankly, it's virtually impossible to write up something
that long and complicated without making several mistakes, and I
prefer to avoid publishing something like that on a public forum known
for the viciousness with which mistakes are attacked.

I think this is the big problem with this forum, and I have been
trying to fight this for over 2 years now. The only solution is to
ignore the vicious attacks.
So we should ignore you? You frequently attack people.
Keep in mind that those people are
just unable to propose anything positive and their only way to
express themselves is through those attacks.
Think about the fact that you frequently attack.
It suffices to ignore them, and discuss normally.
So why do you attack and ascribe to people motives and opinions that
they do not have?
For instance the rounding thread, it was full of
those. I ignore them, and I think my opinion came through
at the end.
Your opinion was clear. It was not sensible to offer code that did not
meet the stated requirements without telling the OP that it did not and
why not, and I would not say that you won the argument either, since
there is no independent arbiter.
--
Flash Gordon
Nov 24 '07 #86
In article <fi********@news2.newsguy.com> I described various
techniques for constructing and testing "fat pointers".

In article <47***************@yahoo.com>,
CBFalconer <cb********@maineline.net> wrote:
>Just picking on one thing, based on it has to work everywhere, or
it is useless.
There is a difference between "only catches some mistakes right away"
and "is useless". Still:
>If a pointer is malloced, and then copied while the original is
realloced, it may be totally invalid. The above won't pick this up.
"The above" was merely the description of a fat pointer.

If you want the runtime system to catch the use of a pointer value
that was once valid due to a malloc(), but is now invalid due to
a later realloc() or free(), you probably want one of the compilers[%]
that "up-converts" a "thin pointer" as needed, using the runtime
tables I described. This has a time penalty:
>(If every use of a pointer is accompanied by a grand sweep of
possibilities, things may be possible. However the resultant code
would be totally useless and spend all its time sweeping.)
It is not "every use", but rather "every case where the compiler
cannot prove that a cached fat pointer is still valid". For
instance, in the code fragment:

char *little, *big;
size_t i, len;
...
len = strlen(little);
for (i = 0; i < len; i++)
big[i] = toupper((unsigned char)little[i]);

the compiler needs to do at most two lookups to validate the pointers
"little" and "big" before the loop starts. (If it is sufficiently
clever, it can even do the bounds-checking at that time as well,
using 0 and len as the minimum and maximum offsets to be applied
to the "current value" of each pointer.)

A not-so-smart compiler has to check each pointer each time, or
we might do something like this:

for (i = 0; i < len; i++) {
uglify(&big, &little);
big[i] = toupper((unsigned char)little[i]);
}

after which, yes, "every" use of the two pointers (in this case)
is "accompanied by a grand sweep of possibilities", which causes
a noticeable slowdown.

As I said, there are tradeoffs.

[% I say this as if there are lots of bounds-checking compilers to
choose from, when in fact they are as rare as correct comp.lang.c
posts. :-) ]
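
To make the description above concrete, here is a very rough sketch of
the kind of runtime table such an implementation might consult. The
names, the fixed-size array and the linear scans are all invented for
illustration; a real bounds-checking compiler would generate something
much faster and would also hook realloc to invalidate moved blocks.

#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: a tiny table of live heap regions. */
#define MAXREGIONS 1024
static struct { uintptr_t base; size_t size; } region[MAXREGIONS];
static size_t nregions;

void *checked_malloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL && nregions < MAXREGIONS) {
        region[nregions].base = (uintptr_t)p;
        region[nregions].size = n;
        nregions++;
    }
    return p;
}

void checked_free(void *p)
{
    size_t i;
    for (i = 0; i < nregions; i++) {
        if (region[i].base == (uintptr_t)p) {
            region[i] = region[--nregions];  /* forget the region */
            break;
        }
    }
    free(p);
}

/* Nonzero if p currently points into a live region recorded above;
   a pointer left dangling by checked_free, or by a reallocation that
   moved the block, fails the test. */
int pointer_is_live(const void *p)
{
    uintptr_t a = (uintptr_t)p;
    size_t i;
    for (i = 0; i < nregions; i++)
        if (a >= region[i].base && a < region[i].base + region[i].size)
            return 1;
    return 0;
}

A fat pointer lets the result of such a lookup be cached in the pointer
itself, which is where the up-conversion trade-off described above comes
from.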
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 24 '07 #87
CBFalconer wrote:
Flash Gordon wrote:
....
>So provide such an impossibility, then you will have proved your
position.

So here is another. Imagine a routine to upshift a string. One
routine receives a char, and answers with 'this is lower case'.
Another receives a char, and answers by replacing it with the upper
case equivalent. Both are passed pointers.

Assuming the original data is a string, the calling routine will
pass something like:

p = &(s[3]); or p = s + 3; (p is parameter)

For the reading routine, there is no harm in allowing reads from (p
- n), where n can be 0 through 3. For the writing routine, this is
not allowable. How do we separate the actions?
Distinguishing readable from writable memory is what 'const' is for. Fat
pointers are for bounds checking.

You don't indicate whether s is a pointer or an array. For simplicity,
I'll assume that it's an array of N chars. Then when s decays into a
pointer, that pointer's limits are set to s and s+N. When s+3 is
evaluated, it inherits those same limits, and they are retained when
that pointer value is stored in p. As a result, any attempt to calculate
p+i for i<-3 or i>N-3 will trigger a failure mechanism. Any attempt to
evaluate p[i] for i<-3 or i>=N-3 will trigger a failure mechanism, and
that's true whether the access is for read or write.
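
A minimal sketch of that behaviour, assuming the <current, base, limit>
representation discussed elsewhere in the thread (the struct and helper
names are invented; this is roughly what generated code could amount to,
not a library interface):

#include <stdio.h>
#include <stdlib.h>

struct fat_char_ptr {
    char *cur;    /* the ordinary pointer value      */
    char *base;   /* start of the parent object      */
    char *limit;  /* one past the end of that object */
};

/* What p[i] might compile to, for reads and writes alike: the
   accessed byte must lie inside [base, limit). */
static char *fat_index(struct fat_char_ptr p, long i)
{
    long off = (p.cur - p.base) + i;   /* offset within the parent */
    if (off < 0 || off >= p.limit - p.base) {
        fprintf(stderr, "bounds violation\n");
        abort();
    }
    return p.base + off;
}

int main(void)
{
    char s[10] = "abcdefghi";
    /* s decays with bounds [s, s+10); p = s + 3 inherits them. */
    struct fat_char_ptr p = { s + 3, s, s + 10 };

    *fat_index(p, -3) = 'A';       /* allowed: still inside s      */
    *fat_index(p, 6) = '!';        /* allowed: last element of s   */
    (void)*fat_index(p, 7);        /* trapped: one past the end    */
    return 0;
}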
Nov 24 '07 #88
Harald van Dijk wrote:
CBFalconer wrote:
>So here is another. Imagine a routine to upshift a string. One
routine receives a char, and answers with 'this is lower case'.
Another receives a char, and answers by replacing it with the
upper case equivalent. Both are passed pointers.

Assuming the original data is a string, the calling routine will
pass something like:

p = &(s[3]); or p = s + 3; (p is parameter)

For the reading routine, there is no harm in allowing reads from
(p - n), where n can be 0 through 3. For the writing routine,
this is not allowable.

Why not? Is it because the behaviour would be undefined, or is
it because the function's actions would be different from its
description? If the former, I'm not seeing it, so could you
please explain? If the latter, as long as it's valid C, there's
no reason why an implementation would or should complain about
it.
Because a char was passed, not an array. So a means has to be
found to separate the two. Remember that the system is pointless
unless it works all the time.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 25 '07 #89
Chris Torek wrote:
>
In article <fi********@news2.newsguy.com> I described various
techniques for constructing and testing "fat pointers".

CBFalconer <cb********@maineline.net> wrote:
>Just picking on one thing, based on it has to work everywhere,
or it is useless.

There is a difference between "only catches some mistakes right
away" and "is useless". Still:
>If a pointer is malloced, and then copied while the original is
realloced, it may be totally invalid. The above won't pick
this up.

"The above" was merely the description of a fat pointer.
For example, I conclude that the only reasonable way to handle C
style pointers is to make them ALL indirect. I.e. the value in the
running code points at a descriptor, which separates auto, static,
allocated pointers and includes limits, etc. Now a modification of
such a pointer requires another descriptor, etc., which in turn
requires storage. However a copy can simply point to the same
descriptor. In theory this can be done, but in practice it would
mean an unusable language.

I admit I have not taken the time to read your proposal. It is
quite possible we have much different views of feasibility.

.... snip ...
It is not "every use", but rather "every case where the compiler
cannot prove that a cached fat pointer is still valid". For
instance, in the code fragment:

char *little, *big;
size_t i, len;
...
len = strlen(little);
for (i = 0; i < len; i++)
big[i] = toupper((unsigned char)little[i]);

the compiler needs to do at most two lookups to validate the
pointers "little" and "big" before the loop starts. (If it is
sufficiently clever, it can even do the bounds-checking at that
time as well, using 0 and len as the minimum and maximum offsets
to be applied to the "current value" of each pointer.)
Point to be made here. The above is user code, calling system
functions with known abilities and requirements (assuming the often
unfollowed directions are followed :-) ). However the common
situation is to call user routines, located in different
compilations, and with no known requirements beyond the argument
types. Now the compiler doesn't even know what to look for. It
doesn't read comments.

And even in the above, what supplied big? How did it find the
storage limits? How does it _always_ know that big is a pointer to
a char 10 bytes from the end of a malloced area? Etc.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 25 '07 #90
CBFalconer wrote, On 24/11/07 22:52:
Harald van Dijk wrote:
>CBFalconer wrote:
>>So here is another. Imagine a routine to upshift a string. One
routine receives a char, and answers with 'this is lower case'.
Another receives a char, and answers by replacing it with the
upper case equivalent. Both are passed pointers.

Assuming the original data is a string, the calling routine will
pass something like:

p = &(s[3]); or p = s + 3; (p is parameter)

For the reading routine, there is no harm in allowing reads from
(p - n), where n can be 0 through 3. For the writing routine,
this is not allowable.
Why not? Is it because the behaviour would be undefined, or is
it because the function's actions would be different from its
description? If the former, I'm not seeing it, so could you
please explain? If the latter, as long as it's valid C, there's
no reason why an implementation would or should complain about
it.

Because a char was passed, not an array.
No, a pointer to char was passed, and that is a pointer to a byte within
a larger object.
So a means has to be
found to separate the two.
No, because C does not have a distinction between a pointer to an
element of an array and a pointer to a single object.
Remember that the system is pointless
unless it works all the time.
That is a good argument for getting rid of *all* safety features
everywhere. So go back to DOS because the memory protection in Windows
does not work all the time (otherwise we would not be talking about
whether fat pointers can improve protection), get rid of lifeboats from
cruise ships because sometimes they fail and so on. Not to mention your
ggets routine being pointless because it fails if someone feeds it a
line longer than it can successfully allocate memory for.
--
Flash Gordon
Nov 25 '07 #91
CBFalconer wrote:
Chris Torek wrote:
>Richard Bos <rl*@hoekstra-uitgeverij.nl> wrote:
>>... you do need to link fat-pointer-compiled object files with
fat-pointer-compiled libraries; much as you need to link 64-bit
object files with 64-bit libraries, little-endian object files
with little-endian libraries, and on MS-DOS used to link large
memory model object files with large memory model libraries.
... snip ...
>In general, the simplest method is to have all "fat pointers"
represented as a triple: <currentvalue, base, limit>. A pointer
is "valid" if its current-value is at least as big as its base
and no bigger than its limit:

Just picking on one thing, based on it has to work everywhere, or
it is useless.
We're talking about use of fat pointers as a device to protect against
memory bounds violation. Protection doesn't have to be perfect to be
useful. A bullet-proof vest won't protect against a head shot, washing
your hands before a meal won't protect you against airborne diseases,
the best available contraceptive measures still occasionally fail. That
doesn't make those protective measures useless.

A simple, relatively straightforward fat pointer implementation that is
only moderately inefficient can't protect against every possible
problem. It could protect against use of pointer arithmetic to create a
pointer to a position before the beginning, or more than one position
after the end, of the relevant array. It can also protect against any
attempt to access positions one past the end of the array. Providing
more complete protection requires a more complicated or inefficient
mechanism; but that doesn't make the simpler protection mechanism useless.

....
I have made my share of mistakes in code-generation, and I want
things simple and consistent.
Fat pointers are not as simple to implement as ordinary pointers, and I
would imagine that retrofitting an implementation originally designed to
use ordinary pointers would be a bug-prone process. However, it's
nowhere near to being the most complicated feature that real C compilers
have implemented successfully.
Nov 25 '07 #92

"Chris Torek" <no****@torek.netwrote in message
>When dealing with pointers to objects embedded within larger objects
(such as elements of "struct"s), the simplest method is again to
widen the base-and-limit to encompass the large object. Consider,
e.g.:

% cat derived.c
#include "base.h"
struct derived {
struct base common;
int additional;
};
void basefunc(struct base *, void (*)(struct base *)); /* in base.c */
static void subfunc(struct base *);
void func(void) {
struct derived var;
...
basefunc(&var.common, subfunc);
...
}
static void subfunc(struct base *p0) {
struct derived *p = (struct derived *)p0; /* line X */
... use p->common and p->additional here ...
}
>There is clearly no problem in the call to basefunc(), even if we
"narrow" the "fat pointer" to point only to the sub-structure
&var.common. However, when basefunc() "calls back" into subfunc(),
as presumably it will, with a "fat pointer" to the "common" part
of a "struct derived", we will have to "re-widen" the pointer. We
can do that at the point of the cast (line X), or simply avoid
"narrowing" the fat pointer at the call to basefunc(), so that when
basefunc() calls subfunc(), it passes a pointer to the entire
structure "var", rather than just var.common.
(It is probably the case, not that I have thought about it that
much, that we can pass only the "fully widened to entire object"
pointer if we are taking the address of the *first* element of
the structure. That is, C code of the form:

You've missed the problem:

struct mother
{
    char child[3];
    int x;
};

struct grandma
{
    struct mother first[3];
    int x;
};

struct grandma grannies[3];

char *fatkid = grannies[1].first[1].child;
/* fatkid ought to have bounds between child and child + 3 */

struct grandma *fatoldlady = (struct grandma *) fatkid;
/* fatoldlady points to the middle of an array; it needs bounds
   grannies[0] to grannies[2] */

It's hard to support this.
You need a "mother pointer" to give the bounds of the containing structure,
purely to support upcasts. That can have its own mother pointer. All in all,
too much trouble purely to support one little-used construct.

struct fatpointer {
    void *base;
    void *upper;
    void *ptr;
    struct fatpointer *mother;
};

The real answer is to ban subtractions or additions to upcast pointers.
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Nov 25 '07 #93
James Kuyper wrote:
CBFalconer wrote:
>Chris Torek wrote:
>>Richard Bos <rl*@hoekstra-uitgeverij.nl> wrote:

... you do need to link fat-pointer-compiled object files with
fat-pointer-compiled libraries; much as you need to link 64-bit
object files with 64-bit libraries, little-endian object files
with little-endian libraries, and on MS-DOS used to link large
memory model object files with large memory model libraries.
... snip ...
>>In general, the simplest method is to have all "fat pointers"
represented as a triple: <currentvalue, base, limit>. A pointer
is "valid" if its current-value is at least as big as its base
and no bigger than its limit:

Just picking on one thing, based on it has to work everywhere, or
it is useless.

We're talking about use of fat pointers as a device to protect against
memory bounds violation. Protection doesn't have to be perfect to be
useful. A bullet-proof vest won't protect against a head shot, washing
your hands before a meal won't protect you against airborne diseases,
the best available contraceptive measures still occasionally fail. That
doesn't make those protective measures useless.

A simple, relatively straightforward fat pointer implementation that is
only moderately inefficient can't protect against every possible
problem. It could protect against use of pointer arithmetic to create a
pointer to a position before the beginning, or more than one position
after the end, of the relevant array. It can also protect against any
attempt to access positions one past the end of the array. Providing
more complete protection requires a more complicated or inefficient
mechanism; but that doesn't make the simpler protection mechanism useless.

...
>I have made my share of mistakes in code-generation, and I want
things simple and consistent.

Fat pointers are not as simple to implement as ordinary pointers, and I
would imagine that retrofitting an implementation originally designed to
use ordinary pointers would be a bug-prone process. However, it's
nowhere near to being the most complicated feature that real C compilers
have implemented successfully.
I think my major point is that, wherever I look, I see continuously
building auxiliary data combined with cpu-eating indirect pointer
accesses.

I also maintain that it has to work all the time. Otherwise the
fact that it passes gives one no confidence whatsoever, and the odd
pass just encourages ignoring the real problem. How many times do
you hear 'It works on my machine' as the excuse/justification for
poor code?

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 25 '07 #94
Flash Gordon wrote:
CBFalconer wrote, On 24/11/07 22:52:
>Harald van Dijk wrote:
>>CBFalconer wrote:

So here is another. Imagine a routine to upshift a string. One
routine receives a char, and answers with 'this is lower case'.
Another receives a char, and answers by replacing it with the
upper case equivalent. Both are passed pointers.

Assuming the original data is a string, the calling routine will
pass something like:

p = &(s[3]); or p = s + 3; (p is parameter)

For the reading routine, there is no harm in allowing reads from
(p - n), where n can be 0 through 3. For the writing routine,
this is not allowable.

Why not? Is it because the behaviour would be undefined, or is
it because the function's actions would be different from its
description? If the former, I'm not seeing it, so could you
please explain? If the latter, as long as it's valid C, there's
no reason why an implementation would or should complain about
it.

Because a char was passed, not an array.

No, a pointer to char was passed, and that is a pointer to a byte
within a larger object.
>So a means has to be found to separate the two.

No, because C does not have a distinction between a pointer to an
element of an array and a pointer to a single object.
Exactly. So the only way to pass that distinction is through the
pointer. This involves revising the pointer on every pass. The
data builds and builds, as does the overhead. Don't forget that
the old pointer must be retained, because the called routine will
presumably return.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 25 '07 #95

"CBFalconer" <cb********@yahoo.comwrote in message
I think my major point is that, wherever I look, I see continuously
building auxiliary data combined with cpu-eating indirect pointer
accesses.

I also maintain that it has to work all the time. Otherwise the
fact that it passes gives one no confidence whatsoever, and the odd
pass just encourages ignoring the real problem. How many times do
you hear 'It works on my machine' as the excuse/justification for
poor code?
A wild pointer read, write or even calculation renders the program
undefined. Undefined by the C standard, that is, not in some philosophical
state of undefinedness.
If we define it as always printing an error message to stderr and
terminating, rather than doing something funny, like corrupting an
instruction in an unrelated part of the program, or appearing to work as
intended, it is much easier for a debugger to catch bugs.
The tool is far more useful if it catches every case. However, this cannot
be achieved in practice. For instance, a recent bug in our code - actually
written in Fortran - went:

double energy[12]; /* set up to statistical free energy levels */

for (i = 0; i < 12; i++)
    if (theta > erange[i] && theta <= erange[i+1])
        index = i;

totenergy += energy[index];

Of course, if theta is not in any of the ranges, UB will result. However,
the ranges went from 0 degrees to 160 degrees, and you tend not to get
straight bonds, so it could run for some time until the calculation went
wrong.
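
Under the kind of checked build being discussed, that access could be
reduced to something like the following. The helper is invented purely
for illustration: if the garbage in index happens to fall outside the
array, the program stops with a message at the point of use instead of
quietly feeding a stray double into the total; if the garbage happens to
land inside the array, no bounds check can help.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative checked read: roughly what energy[index] might become
   in a bounds-checked build. */
static double checked_read(const double *a, size_t n, long i,
                           const char *name)
{
    if (i < 0 || (size_t)i >= n) {
        fprintf(stderr, "%s[%ld]: index out of range 0..%lu\n",
                name, i, (unsigned long)(n - 1));
        exit(EXIT_FAILURE);
    }
    return a[i];
}

/* e.g.  totenergy += checked_read(energy, 12, index, "energy"); */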

After a few debug passes, you run code for real with thin pointers. The
runtime overhead is acceptable for some applications, but not the ones C
tends to be used for.
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Nov 25 '07 #96
CBFalconer wrote:
James Kuyper wrote:
>CBFalconer wrote:
....
>>Just picking on one thing, based on it has to work everywhere, or
it is useless.
We're talking about use of fat pointers as a device to protect against
memory bounds violation. Protection doesn't have to be perfect to be
useful. A bullet-proof vest won't protect against a head shot, ...
....
I think my major point is that, wherever I look, I see continuously
building auxiliary data combined with cpu-eating indirect pointer
accesses.
That depends upon your concept that it has to be perfect to be useful.
Embrace the fact that it can still be useful if imperfect, and a much
simpler implementation becomes possible that merely seriously impairs
efficiency, rather than completely destroying it.

All that's needed to provide substantial protection is pointers that
carry a base address, an end address, and a current address, and a
compiler that sets those addresses appropriately whenever an expression
has a value of pointer type.
I also maintain that it has to work all the time. Otherwise the
fact that it passes gives one no confidence whatsoever, and the odd
pass just encourages ignoring the real problem.
By the same logic, to apply my previous analogy, a bullet proof vest has
to work against all bullets, or it's useless. If it doesn't it gives you
no confidence whatsoever that you are safe from bullets, and the odd
pass just encourages ignoring the real problem, which is that people are
shooting at you. That's a clearly ridiculous argument; and the reason is
the logic, not the analogy.

The fat pointers we're discussing aren't supposed to give you confidence
that your code has no errors. They're supposed to help you detect
certain kinds of errors, so you can deal with them. The fact that they
can't catch all possible errors is perfectly normal for error detection
techniques.
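
For illustration, here is roughly what the arithmetic check could amount
to under the base/end/current representation described above. The names
are invented and this is only a sketch: the invalid pointer is flagged
when it is created rather than when it is later used, and the
one-past-the-end value stays legal.

#include <stdio.h>
#include <stdlib.h>

struct fat_ptr {
    char *cur;    /* current value                     */
    char *base;   /* base address of the parent object */
    char *end;    /* one past the end of that object   */
};

/* What p + i might compile to: the result may range from base up to
   and including the one-past-the-end address; anything outside that
   is reported immediately. */
static struct fat_ptr fat_add(struct fat_ptr p, long i)
{
    long off = (p.cur - p.base) + i;   /* offset within the parent */
    struct fat_ptr r = p;
    if (off < 0 || off > p.end - p.base) {
        fprintf(stderr, "invalid pointer arithmetic\n");
        abort();
    }
    r.cur = p.base + off;
    return r;
}

/* e.g. for char s[10]: producing s+10 is allowed, s+11 or s-1 traps */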
Nov 26 '07 #97
CBFalconer wrote, On 25/11/07 16:20:
Flash Gordon wrote:
>CBFalconer wrote, On 24/11/07 22:52:
>>Harald van Dijk wrote:
CBFalconer wrote:

So here is another. Imagine a routine to upshift a string. One
routine receives a char, and answers with 'this is lower case'.
Another receives a char, and answers by replacing it with the
upper case equivalent. Both are passed pointers.
>
Assuming the original data is a string, the calling routine will
pass something like:
>
p = &(s[3]); or p = s + 3; (p is parameter)
>
For the reading routine, there is no harm in allowing reads from
(p - n), where n can be 0 through 3. For the writing routine,
this is not allowable.
Why not? Is it because the behaviour would be undefined, or is
it because the function's actions would be different from its
description? If the former, I'm not seeing it, so could you
please explain? If the latter, as long as it's valid C, there's
no reason why an implementation would or should complain about
it.
Because a char was passed, not an array.
No, a pointer to char was passed, and that is a pointer to a byte
within a larger object.
>>So a means has to be found to separate the two.
No, because C does not have a distinction between a pointer to an
element of an array and a pointer to a single object.

Exactly. So the only way to pass that distinction is through the
pointer.
No, the only way is to accept that there is no distinction.
This involves revising the pointer on every pass.
No, it involves passing exactly the same data all the way down.
The
data builds and builds, as does the overhead. Don't forget that
the old pointer must be retained, because the called routine will
presumably return.
I honestly cannot see what you are getting at. However many functions
the pointer is passed through, there is no additional overhead because it
is always valid to access any part of the parent object. Anything else
and you are talking about how useful fat pointers would be for a
language other than C, and no one else is talking about anything other
than what a conforming C implementation could do.
--
Flash Gordon
Nov 26 '07 #98

"James Kuyper" <ja*********@verizon.netwrote in message
By the same logic, to apply my previous analogy, a bullet proof vest has
to work against all bullets, or it's useless. If it doesn't it gives you
no confidence whatsoever that you are safe from bullets, and the odd pass
just encourages ignoring the real problem, which is that people are
shooting at you. That's a clearly ridiculous argument; and the reason is
the logic, not the analogy.
If a competing vest offers 100% protection then your vest, offering 99%,
will need to be very much cheaper indeed before it will find a market.
However in the absence of anything better, even 50% protection is much
better than nothing.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Nov 26 '07 #99
Flash Gordon wrote:
CBFalconer wrote:
>Flash Gordon wrote:
.... snip ...
>>
>>No, because C does not have a distinction between a pointer to
an element of an array and a pointer to a single object.

Exactly. So the only way to pass that distinction is through
the pointer.

No, the only way is to accept that there is no distinction.
>This involves revising the pointer on every pass.

No, it involves passing exactly the same data all the way down.
>The data builds and builds, as does the overhead. Don't forget
that the old pointer must be retained, because the called
routine will presumably return.

I honestly cannot see what you are getting at. However many
functions the pointer is passed through there is no additional
overhead because it is always valid to access any part of the
parent object. Anything else and you are talking about how useful
fat pointers would be for a language other than C, and no one
else is talking about anything other than what a conforming C
implementation could do.
Just as an example, imagine some idiot designed a function that
operated on two strings known to be stored DIFF bytes apart. That
function is passed a single pointer, as in:

char foo(char *bar) {
    char ch;

    if (islower((unsigned char)*bar))
        *(bar + DIFF) = (char)toupper((unsigned char)*bar);
    else
        *(bar + DIFF) = *bar;
    /* now exchange chars */
    ch = *bar; *bar = *(bar + DIFF); *(bar + DIFF) = ch;
    return ch;
} /* foo */

Now this silly function has a name that makes it seem to upshift
chars in a string. It passes all tests, because of the use of the
magic value of DIFF. Somebody procedes to use it again. All sorts
of things blow up. The function is ignored, because it passes the
tests in the original, and it is in a library, and never got
recompiled. Don't forget that it has been stamped as VALIDATED in
upper case.

I don't want this form of 'checking'.

NOTE: I am pulling up various weird code to show that there are
problems. I have not tried for any consistency.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


Nov 26 '07 #100
