
Request

As a consequence of a heavy discussion in another thread,
I wrote this program:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p = calloc(65521, 65552);
    if (p)
        printf("BUG!!!!\n");
    return 0;
}

The multiplication will give an erroneous result
on systems with a 32-bit size_t. I have tested this
on several systems with several compilers.
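
To see the size of the error: 65521 * 65552 = 4,295,032,592, which wraps
modulo 2^32 to 65,296, so a calloc with the bug hands back a 65,296-byte
block while the caller believes it obtained about 4 GB. A conforming
calloc has to test for the wrap before multiplying. A minimal sketch of
the check (checked_calloc is a hypothetical name; the real fix belongs
inside the library's calloc):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: refuse any request whose nmemb * size would wrap size_t. */
void *checked_calloc(size_t nmemb, size_t size)
{
    void *p;

    if (nmemb != 0 && size > SIZE_MAX / nmemb)
        return NULL;            /* product would exceed SIZE_MAX */
    p = malloc(nmemb * size);
    if (p != NULL)
        memset(p, 0, nmemb * size);
    return p;
}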

Maybe people with other compilers/implementations
would compile this program and send the results?

The results I have so far are:

LINUX
GCC 2.95
HAS THE BUG

GCC-4.02
NO BUGS

GCC-3.x
NO BUGS

WINDOWS
cygwin gcc 3.4.4
NO BUGS
Pelles C
HAS THE BUG
Microsoft
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.42
for 80x86
NO BUGS (If the multiplication overflows it will trap. Very good
behavior)

AIX
IBM-cc (compiling in 32 bit mode)
NO BUGS
gcc 4.0.0 (compiling in 32 bit mode)
NO BUGS
Thanks in advance for your cooperation

Jan 2 '07

"CBFalconer" <cb********@yahoo.comwrote in message
news:45***************@yahoo.com...
alex bodnaru wrote:
CBFalconer wrote:
"David T. Ashley" wrote:

... snip ...

54135^2 is going to be on the order of 2.5G. That is a pretty
fair hunk of memory.

---------

[nouser@pamc ~]$ cat test3.c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p;
    int i;

    for (i = 65535; i > 0; i--)
    {
        if ((p = calloc(i, i)))
        {
            printf("%d is apparently the largest integer that will succeed.\n",
                   i);
            break;
        }
    }
    return 0;
}
[nouser@pamc ~]$ gcc test3.c
[nouser@pamc ~]$ ./a.out
54135 is apparently the largest integer that will succeed.
[nouser@pamc ~]$

With DJGPP 2.03 (specifies the library) that crashes immediately in
memset. With nmalloc linked in in place of the library it yields
23093.

Advisory cross-post to comp.os.msdos.djgpp, f'ups set to clc.
Even the first malloc is 4 GB, and I doubt you have more ;-)

Which should simply fail. The code is looking for the largest
assignable value and for foul-ups in the calloc multiply operation.
It shouldn't crash. nmalloc fails and returns NULL. DJGPP malloc
claims to succeed, but doesn't, and the memset crashes. This also
shows the need to include calloc in the nmalloc package, to ensure
the same limits are observed.

Why did you override the follow-up I had set? And top-post.
Please go to comp.lang.c for further general discussion of this.
Meanwhile, be aware of the DJGPP bug.
http://groups.google.com/group/comp....f8224d28?hl=en

Chuck,

1) you've encouraged DJGPP (and c.l.c) users to use nmalloc for a few years
with little response
2) you never built DJGPP demand for nmalloc by submitting it to the DJGPP
archives
3) since you've said your health isn't good, you could've asked if someone
on comp.os.msdos.djgpp was willing to prepare and submit nmalloc to the
DJGPP archives for you, but you haven't
4) DJ has never publicly responded to your posts on nmalloc...
5) although you believe in nmalloc, you've never bothered to ask DJ why he's
shown zero interest in nmalloc... Without asking, you'll never know if it
was something simple holding back nmalloc.

Just how serious are you about getting nmalloc into the DJGPP archives or
the DJGPP C library?
Rod Pemberton
PS. Sent to both groups...
Jan 5 '07 #101

"Rod Pemberton" <do*********@bitfoad.cmmwrites:
2) you never built DJGPP demand for nmalloc by submitting it to the DJGPP
archives
He did, there were some buglets that needed tweaking long ago, it's
probably just waiting for someone with write access to review it and
put it in place. Plus backward compatibility (the malloc debug
interface) and all that.
4) DJ has never publicly responded to your posts on nmalloc...
Dude, go WAY back in the djgpp-workers mail archives.
Jan 5 '07 #102
Ben Pfaff wrote:
Mark McIntyre <ma**********@spamcop.net> writes:
.... snip ...
>>
It still doesn't follow. The pointer returned by malloc need not
be a complete absolute reference. Magic can go on behind the
scenes, much as it did with 16-bit pointers used to reference
20-bit memory.

I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data. You could use
implementation-specific extensions to address more than 64 kB of
memory, or you could switch your compiler to a mode where
pointers were 32 bits long, but there was no way to address more
than 64 kB of objects with 16-bit pointers and without using C
extensions.
Turbo C 2.01 says the following. In that mode (with some other
options set) I can simply compile virtually any standard C source.
Things will be unwell if the object code exceeds 64 k, but I can
put up with that. near and far are just things the compiler does
internally.

"Compact model: This model has 64K for code, 64K
for stack and static data, and up to 1 MB for
the heap. All functions are near by default and
all data pointers are far by default."

I suspect that this may blow up if function pointers are converted
to void * and back again. However that is standard conforming.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Jan 5 '07 #103
jacob navia wrote:
>
.... snip ...
>
Obviously each 16 bit pointer could address only 64K!!!
Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16 bit pointers!
Emulating Dan Pop's tact again?

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Jan 5 '07 #104
Richard Heathfield wrote:
>
.... snip ...
>
Many people were oppressed by King Henry VII. More suffered under
King Henry VIII.
<OT>
That may also reflect the world population at the time, and the
available oppression methods. The shrubish leader has exceeded
both, IMO. I suspect Adolph Schickelgruber holds the record.

Record candidates should include Genghis Khan, Napoleon,
Robespierre, Stalin, Cromwell, Queen Mary I, Mao, all spammers,
various Czars, Popes, and Roman Emperors. Maybe the results should
be normalized to world population at the time of oppression.
</OT>

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Jan 5 '07 #105
in 713360 20070105 054050 CBFalconer <cb********@yahoo.com> wrote:
>Richard Heathfield wrote:
>>
.... snip ...
>>
Many people were oppressed by King Henry VII. More suffered under
King Henry VIII.

<OT>
That may also reflect the world population at the time, and the
available oppression methods. The shrubish leader has exceeded
both, IMO. I suspect Adolph Schickelgruber holds the record.

Record candidates should include Genghis Khan, Napoleon,
Robespierre, Stalin, Cromwell, Queen Mary I, Mao, all spammers,
various Czars, Popes, and Roman Emperors. Maybe the results should
be normalized to world population at the time of oppression.
</OT>
So Adam pushing Eve around would be top of the list?
Jan 5 '07 #106
Bob Martin said:
in 713360 20070105 054050 CBFalconer <cb********@yahoo.com> wrote:
>>Richard Heathfield wrote:
>>>
.... snip ...
>>>
Many people were oppressed by King Henry VII. More suffered under
King Henry VIII.

<OT>
That may also reflect the world population at the time, and the
available oppression methods. The shrubish leader has exceeded
both, IMO. I suspect Adolph Schickelgruber holds the record.

Record candidates should include Genghis Khan, Napoleon,
Robespierre, Stalin, Cromwell, Queen Mary I, Mao, all spammers,
various Czars, Popes, and Roman Emperors. Maybe the results should
be normalized to world population at the time of oppression.
</OT>

So Adam pushing Eve around would be top of the list?
Other way round. :-)

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jan 5 '07 #107
Richard Heathfield wrote:
Bob Martin said:
>CBFalconer <cb********@yahoo.com> wrote:
>>Richard Heathfield wrote:

.... snip ...

Many people were oppressed by King Henry VII. More suffered under
King Henry VIII.

<OT>
That may also reflect the world population at the time, and the
available oppression methods. The shrubish leader has exceeded
both, IMO. I suspect Adolph Schickelgruber holds the record.

Record candidates should include Genghis Khan, Napoleon,
Robespierre, Stalin, Cromwell, Queen Mary I, Mao, all spammers,
various Czars, Popes, and Roman Emperors. Maybe the results should
be normalized to world population at the time of oppression.
</OT>

So Adam pushing Eve around would be top of the list?

Other way round. :-)
If we count Cain and Abel, in either case they achieved a NOL
(normalized oppression level) of 25%. And they are tied by Cain.
Maybe we should limit the field to recorded history. Bible
thumpers can opt-out. Or, if we include a factor for the QOO
(quality of oppression) Cain 'wins' since his QOO is 1.0.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Jan 5 '07 #108
CBFalconer <cb********@yahoo.com> writes:
Ben Pfaff wrote:
>Mark McIntyre <ma**********@spamcop.net> writes:
... snip ...
>>>
It still doesn't follow. The pointer returned by malloc need not
be a complete absolute reference. Magic can go on behind the
scenes, much as it did with 16-bit pointers used to reference
20-bit memory.

I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data. You could use
implementation-specific extensions to address more than 64 kB of
memory, or you could switch your compiler to a mode where
pointers were 32 bits long, but there was no way to address more
than 64 kB of objects with 16-bit pointers and without using C
extensions.

Turbo C 2.01 says the following. In that mode (with some other
options set) I can simply compile virtually any standard C source.
Things will be unwell if the object code exceeds 64 k, but I can
put up with that. near and far are just things the compiler does
internally.

"Compact model: This model has 64K for code, 64K
for stack and static data, and up to 1 MB for
the heap. All functions are near by default and
all data pointers are far by default."

I suspect that this may blow up if function pointers are converted
to void * and back again. However that is standard conforming.
I don't think this contradicts anything I wrote. The compact
model uses 32-bit object pointers, like I said. Function
pointers are different and need not have the same form.
--
"When in doubt, treat ``feature'' as a pejorative.
(Think of a hundred-bladed Swiss army knife.)"
--Kernighan and Plauger, _Software Tools_
Jan 5 '07 #109
Ben Pfaff wrote:
CBFalconer <cb********@yahoo.com> writes:
>Ben Pfaff wrote:
>>Mark McIntyre <ma**********@spamcop.net> writes:
... snip ...
>>>>
It still doesn't follow. The pointer returned by malloc need not
be a complete absolute reference. Magic can go on behind the
scenes, much as it did with 16-bit pointers used to reference
20-bit memory.

I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data. You could use
implementation-specific extensions to address more than 64 kB of
memory, or you could switch your compiler to a mode where
pointers were 32 bits long, but there was no way to address more
than 64 kB of objects with 16-bit pointers and without using C
^^^^^^^^^^^^^^^
>>extensions.
^^^^^^^^^^
>>
Turbo C 2.01 says the following. In that mode (with some other
options set) I can simply compile virtually any standard C source.
Things will be unwell if the object code exceeds 64 k, but I can
put up with that. near and far are just things the compiler does
internally.

"Compact model: This model has 64K for code, 64K
for stack and static data, and up to 1 MB for
the heap. All functions are near by default and
all data pointers are far by default."

I suspect that this may blow up if function pointers are converted
to void * and back again. However that is standard conforming.

I don't think this contradicts anything I wrote. The compact
model uses 32-bit object pointers, like I said. Function
pointers are different and need not have the same form.
And my point is that no C extensions are needed. Just feed it
portable standard code. I primarily use it to check I haven't
inadvertently relied on an int size larger than 16 bits.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Jan 6 '07 #110
CBFalconer <cb********@yahoo.com> writes:
Ben Pfaff wrote:
>CBFalconer <cb********@yahoo.com> writes:
>>Ben Pfaff wrote:
Mark McIntyre <ma**********@spamcop.net> writes:

... snip ...
>
It still doesn't follow. The pointer returned by malloc need not
be a complete absolute reference. Magic can go on behind the
scenes, much as it did with 16-bit pointers used to reference
20-bit memory.

I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data. You could use
implementation-specific extensions to address more than 64 kB of
memory, or you could switch your compiler to a mode where
pointers were 32 bits long, but there was no way to address more
than 64 kB of objects with 16-bit pointers and without using C
^^^^^^^^^^^^^^^
>>>extensions.
^^^^^^^^^^
>>>
Turbo C 2.01 says the following. In that mode (with some other
options set) I can simply compile virtually any standard C source.
Things will be unwell if the object code exceeds 64 k, but I can
put up with that. near and far are just things the compiler does
internally.

"Compact model: This model has 64K for code, 64K
for stack and static data, and up to 1 MB for
the heap. All functions are near by default and
all data pointers are far by default."

I suspect that this may blow up if function pointers are converted
to void * and back again. However that is standard conforming.

I don't think this contradicts anything I wrote. The compact
model uses 32-bit object pointers, like I said. Function
pointers are different and need not have the same form.

And my point is that no C extensions are needed. Just feed it
portable standard code. I primarily use it to check I haven't
inadverdently relied on an int size larger than 16 bits.
That's a non sequitur.

I don't think you've actually followed the discussion. The
original point I followed up on was this, posted by Jacob Navia:
Mark McIntyre wrote:
On Wed, 03 Jan 2007 02:45:18 +0100, in comp.lang.c , jacob navia
<ja***@jacob.remcomp.fr> wrote:

>SQL server surely doesn't use calloc for allocating 32GB of
RAM,

I guess we'd need to ask Microsoft instead of guessing.

>>Obviously if that were the case, pointers would have to be larger than
32 bits.

This doesn't actually follow. I can envisage a segmented architecture
which solved this problem.

Excuse me but then the pointer is bigger than 32 bits!!!
Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes. Now you're posting non sequiturs about how 32-bit
pointers can address 2**20 bytes in x86 real mode.
--
"The expression isn't unclear *at all* and only an expert could actually
have doubts about it"
--Dan Pop
Jan 6 '07 #111
On Thu, 04 Jan 2007 15:22:27 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data.
The programming manual for the 286, which I have on my shelf, explains
how it can address 1MB memory using 16-bit pointers.
>You could use
implementation-specific extensions to address more than 64 kB of
memory,
precisely. This doesn't contradict what I said, by the way.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jan 6 '07 #112
On Fri, 05 Jan 2007 20:24:42 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.
The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jan 6 '07 #113
On Fri, 05 Jan 2007 00:39:57 +0100, in comp.lang.c , jacob navia
<ja***@jacob.remcomp.fr> wrote:
>Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16 bit pointers!
Amazing then, that they ever bothered to build machines with more than
64k memory, since nothing could address it.

Perhaps that's why someone or other said he didn't think anyone would
ever need more than 64K. He was wrong too...
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jan 6 '07 #114
Mark McIntyre wrote:
On Fri, 05 Jan 2007 00:39:57 +0100, in comp.lang.c , jacob navia
<ja***@jacob.remcomp.fr> wrote:

>>Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16 bit pointers!


Amazing then, that they ever bothered to build machines with more than
64k memory, since nothing could address it.
Jacob didn't say that; his point was that you could only address 64K in
one block, with a pointer comprising a 16-bit segment and a 16-bit
offset. You can have as many blocks as there are available segment
registers.

--
Ian Collins.
Jan 6 '07 #115
Mark McIntyre wrote:
On Fri, 05 Jan 2007 00:39:57 +0100, in comp.lang.c , jacob navia
<ja***@jacob.remcomp.fr> wrote:
>Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16 bit pointers!

Amazing then, that they ever bothered to build machines with more than
64k memory, since nothing could address it.

Perhaps that's why someone or other said he didn't think anyone would
ever need more than 64K. He was wrong too...

I think that was Bill Gates in 1981 or so commenting on the PC having
640K of memory instead of the lowly 64K of CP/M systems. "Nobody will
ever need more than 640K of memory!" will live in infamy.

My current desktop has 1GB of main memory. I got into this Internet
stuff in 1988 with netcom.com in California. I was al***@netcom.com, one
of the first 100 or so subscribers. Our netcom.com was a Sun workstation
with a 56Kbps link to sun.com just across the street in Silicon Valley.

Disk drives of the day were Seagate SCSI of less than 100 MB capacity.
To have a Gigabyte of disk storage on your Internet server was an
impossible dream. Today we can buy a 200GB drive for $95.

Everybody predicting a limited future always catches it in the neck.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Jan 6 '07 #116
Mark McIntyre <ma**********@spamcop.net> writes:
On Thu, 04 Jan 2007 15:22:27 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>>I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data.

The programming manual for the 286, which I have on my shelf, explains
how it can address 1MB memory using 16-bit pointers.
Yes. But it doesn't explain how you can do so in standard C,
because you can't.
>
>>You could use
implementation-specific extensions to address more than 64 kB of
memory,

precisely. This doesn't contradict what I said, by the way.
By default, we talk about portable standard C here. If you want
to talk about pointers outside that context, please be specific,
so that others won't be confused.
--
"When in doubt, treat ``feature'' as a pejorative.
(Think of a hundred-bladed Swiss army knife.)"
--Kernighan and Plauger, _Software Tools_
Jan 6 '07 #117
Mark McIntyre <ma**********@spamcop.net> writes:
On Fri, 05 Jan 2007 20:24:42 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>>Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.

The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.
A 16-bit standard C pointer cannot address more than 2**16
bytes, period. If you want to talk about nonstandard,
nonportable C constructs, please say so, so that everyone else
knows that you are doing so.
--
Bite me! said C.
Jan 6 '07 #118
Mark McIntyre wrote:
On Thu, 04 Jan 2007 15:22:27 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data.

The programming manual for the 286, which I have on my shelf, explains
how it can address 1MB memory using 16-bit pointers.
Would you mind explaining to us how? I'm afraid you did not grok some
aspect of it very well...

Jan 6 '07 #119
Mark McIntyre wrote:
On Fri, 05 Jan 2007 20:24:42 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.

The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.
But then it is no longer Standard compliant, right?
Jan 6 '07 #120
av
On Sat, 06 Jan 2007 13:55:57 -0500, Joe Wright wrote:
>I think that was Bill Gates in 1981 or so commenting on the PC having
640K of memory instead of the lowly 64K of CP/M systems. "Nobody will
ever need more than 640K of memory!" will live in infamy.

My current desktop has 1GB of main memory. I got into this Internet
stuff in 1988 with netcom.com in California. I was al***@netcom.com, one
of the first 100 or so subscribers. Our netcom.com was a Sun workstation
with a 56Kbps link to sun.com just across the street in Silicon Valley.

Disk drives of the day were Seagate SCSI of less than 100 MB capacity.
To have a Gigabyte of disk storage on your Internet server was an
impossible dream. Today we can buy a 200GB drive for $95.

Everybody predicting a limited future always catches it in the neck.
but how is all that space used?
is it used well?
Jan 6 '07 #121
In article <6g********************************@4ax.com>,
Mark McIntyre <ma**********@spamcop.net> wrote:
>>Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16 bit pointers!
>Amazing then, that they ever bothered to build machines with more than
64k memory, since nothing could address it.
They didn't address it with 16-bit pointers. They addressed it with
16-bit pointers and segments. You can't put that combination in a
16-bit C pointer.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jan 6 '07 #122
Mark McIntyre <ma**********@spamcop.net> writes:
On Fri, 05 Jan 2007 20:24:42 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>>Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.

The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.
Personally, I've never had more than a vague idea of how 286's worked,
nor did I play with them much. Such knowledge shouldn't be a
prerequisite to understanding anything posted here.

A 16-bit pointer cannot, by itself, possibly specify more than 65536
distinct addresses. It's possible, of course, that some additional
context could alter the meanings of the bit patterns of these 16-bit
pointers; if you have 4 bits of such context, you could address 2**20
bytes. But any such context is beyond the scope of standard C --
unless it's implicit in the *type* of the pointer, but even then
conversions to and from void* would have to work properly.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jan 6 '07 #123
Cesar Rabak wrote:
Mark McIntyre wrote:
>On Thu, 04 Jan 2007 15:22:27 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>>I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data.


The programming manual for the 286, which I have on my shelf, explains
how it can address 1MB memory using 16-bit pointers.


Would you mind explaining to us how? I'm afraid you did not grok some
aspect of it very well...
I explained elsethread that it is possible only with a
16-bit segment register and a 16-bit pointer. In this case
the pointer is 16 + 16 -> 32 bits (a FAR pointer, as it
was called in MSDOS).

A NEAR pointer (16 bits) can only address 1 "segment", i.e. 64K.

Mr McIntyre has misunderstood this stuff completely.
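
To make the arithmetic concrete: in real mode the processor forms a
20-bit physical address as segment * 16 + offset, so the two 16-bit
halves of a FAR pointer reach 1 MB, while a lone NEAR offset reaches
only 64K. A small sketch (the example values are just illustrative):

#include <stdio.h>

/* Real-mode address arithmetic: a 16-bit segment and a 16-bit offset
   combine into a 20-bit physical address. */
static unsigned long linear(unsigned seg, unsigned off)
{
    return ((unsigned long)seg << 4) + off;    /* seg * 16 + off */
}

int main(void)
{
    printf("%lX\n", linear(0xF000u, 0xFFFFu)); /* FFFFF: top of the 1 MB */
    printf("%lX\n", linear(0x0000u, 0xFFFFu)); /* FFFF: a NEAR pointer's reach */
    return 0;
}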

Jan 6 '07 #124
Cesar Rabak wrote:
Mark McIntyre wrote:
>Ben Pfaff <bl*@cs.stanford.edu> wrote:
>>I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data.

The programming manual for the 286, which I have on my shelf,
explains how it can address 1MB memory using 16-bit pointers.

Would you mind explaining to us how? I'm afraid you did not grok some
aspect of it very well...
Works just fine if CHAR_BIT is 128. :-)

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Jan 7 '07 #125
CBFalconer <cb********@yahoo.com> writes:
Cesar Rabak wrote:
>Mark McIntyre wrote:
>>Ben Pfaff <bl*@cs.stanford.edu> wrote:

I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data.

The programming manual for the 286, which I have on my shelf,
explains how it can address 1MB memory using 16-bit pointers.

Would you mind explaining to us how? I'm afraid you did not grok some
aspect of it very well...

Works just fine if CHAR_BIT is 128. :-)
As long as we're being silly, you can still only address 64 kilobytes
(but 1024 kilo-octets).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jan 7 '07 #126
On Sat, 06 Jan 2007 10:53:56 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>Mark McIntyre <ma**********@spamcop.net> writes:
>On Fri, 05 Jan 2007 20:24:42 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>>>Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.

The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.

A 16-bit standard C pointer cannot address more than 2**16
bytes, period. If you want to talk about nonstandard,
nonportable C constructs, please say so, so that everyone else
knows that you are doing so.
Personally I think the vast majority of this thread has been about
precisely that. Trying to diss the point now, by bringing it back onto
pure C topicality, is pretty pointless, IMHO. YMMV, E&OE etc.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jan 7 '07 #127
Mark McIntyre <ma**********@spamcop.net> writes:
On Sat, 06 Jan 2007 10:53:56 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
>>Mark McIntyre <ma**********@spamcop.net> writes:
>>On Fri, 05 Jan 2007 20:24:42 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:

Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.

The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.

A 16-bit standard C pointer cannot address more than 2**16
bytes, period. If you want to talk about nonstandard,
nonportable C constructs, please say so, so that everyone else
knows that you are doing so.

Personally I think the vast majority of this thread has been about
precisely that. Trying to diss the point now, by bringing it back onto
pure C topicality, is pretty pointless, IMHO. YMMV, E&OE etc.
Ok, if you don't want to be topical that's more or less ok. But if
you're going to assert that a 16-bit pointer can address more than
2**16 bytes, you should still explain how (without assuming we're all
familiar with the 286 architecture).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jan 7 '07 #128
jacob navia wrote:
I explained elsethread that it is possible only with a
16-bit segment register and a 16-bit pointer. In this case
the pointer is 16 + 16 -> 32 bits (a FAR pointer, as it
was called in MSDOS)
Actually I think it was originally 16 + 16 -> 20 bits,
but that's a story for another day, another newsgroup,
another decade, another century.
--
Steve Summit
sc*@eskimo.com
Jan 8 '07 #129
Keith Thompson wrote:
Mark McIntyre <ma**********@spamcop.net> writes:
>>On Fri, 05 Jan 2007 20:24:42 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:

>>>Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.

The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.


Personally, I've never had more than a vague idea of how 286's worked,
nor did I play with them much. Such knowledge shouldn't be a
prerequisite to understanding anything posted here.

A 16-bit pointer cannot, by itself, possibly specify more than 65536
distinct addresses. It's possible, of course, that some additional
context could alter the meanings of the bit patterns of these 16-bit
pointers; if you have 4 bits of such context, you could address 2**20
bytes. But any such context is beyond the scope of standard C --
unless it's implicit in the *type* of the pointer, but even then
conversions to and from void* would have to work properly.
A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.
Such an implementation would need to have very clever pointer operations,
and it would crash when you exceed the number of pointers allowed at once,
but it would be conforming, wouldn't it?
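
To make the idea concrete, the bookkeeping might look like this toy
sketch (all names are hypothetical, and it shows only the mapping idea,
not a conforming implementation):

#include <stdint.h>

/* Toy handle table: a 32-bit "pointer" is an index into a table of
   wider real addresses, so the representation stays 4 bytes while the
   addressable store can exceed 2**32 bytes. */
#define MAX_HANDLES 1024u

static uint64_t handle_table[MAX_HANDLES]; /* the real, wider addresses */
static uint32_t next_handle = 1;           /* 0 is reserved for "null"  */

static uint32_t make_handle(uint64_t real_addr)
{
    if (next_handle >= MAX_HANDLES)
        return 0;                          /* out of handles: "crash"   */
    handle_table[next_handle] = real_addr;
    return next_handle++;
}

static uint64_t deref(uint32_t h)
{
    return handle_table[h];
}

int main(void)
{
    uint32_t h = make_handle(0x1F0000000ull); /* a 33-bit "address" */
    return deref(h) == 0x1F0000000ull ? 0 : 1;
}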

Best regards,
Yevgen
Jan 8 '07 #130
Yevgen Muntyan wrote:
Keith Thompson wrote:
>Mark McIntyre <ma**********@spamcop.net> writes:
>>On Fri, 05 Jan 2007 20:24:42 -0800, in comp.lang.c , Ben Pfaff
<bl*@cs.stanford.edu> wrote:
Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.
The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.

Personally, I've never had more than a vague idea of how 286's worked,
nor did I play with them much. Such knowledge shouldn't be a
prerequisite to understanding anything posted here.

A 16-bit pointer cannot, by itself, possibly specify more than 65536
distinct addresses. It's possible, of course, that some additional
context could alter the meanings of the bit patterns of these 16-bit
pointers; if you have 4 bits of such context, you could address 2**20
bytes. But any such context is beyond the scope of standard C --
unless it's implicit in the *type* of the pointer, but even then
conversions to and from void* would have to work properly.

A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.
Such an implementation would need to have very clever pointer operations,
and it would crash when you exceed the number of pointers allowed at once,
but it would be conforming, wouldn't it?
A little clarification: the total amount of memory on this ridiculous
implementation is 2**33 bytes (in particular, the maximum size of an
allocated object doesn't exceed this); the maximum number of pointers
allowed at once is 2**32; and the size of a pointer is 4 bytes (8-bit
bytes). Then it seems you can't trick it by allocating a huge array
and stuffing pointers to its bytes into that array.

Yevgen
Jan 8 '07 #131
In article <Latoh.1530$Ul4.664@trnddc05>,
Yevgen Muntyan <mu****************@tamu.edu> wrote:
>A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.
Very creative. But I can write pointers out to a file and read them
back, or print them out and ask the user to type them back in again,
so I can have as many pointers in use as I like.

Of course, the %p format could write out the necessary mapping
information, but I don't have to use %p. I could write out the bytes
of the pointer as integers.
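
For instance, standard C already guarantees that copying a pointer's
bytes out and back in preserves its value; a minimal sketch:

#include <stdio.h>
#include <string.h>

/* Round-trip a pointer through its byte representation: copy the
   bytes out, copy them back, and the reconstructed pointer compares
   equal to the original. */
int main(void)
{
    int obj = 42;
    int *p = &obj, *q = NULL;
    unsigned char bytes[sizeof p];

    memcpy(bytes, &p, sizeof p);   /* "write out" the representation */
    memcpy(&q, bytes, sizeof q);   /* "read it back in" */

    printf("%s\n", p == q ? "same pointer" : "different");
    return 0;
}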

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jan 8 '07 #132
Richard Tobin wrote:
In article <Latoh.1530$Ul4.664@trnddc05>,
Yevgen Muntyan <mu****************@tamu.edu> wrote:

>>A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.


Very creative. But I can write pointers out to a file and read them
back, or print them out and ask the user to type them back in again,
so I can have as many pointers in use as I like.

Of course, the %p format could write out the necessary mapping
information, but I don't have to use %p. I could write out the bytes
of the pointer as integers.
What about an implementation without files? All you have is 2**33 bytes of
memory where program code and everything lives. (yes, ridiculous, but
it's not the point here)

Best regards,
Yevgen
Jan 8 '07 #133
In article <hcAoh.15012$rz3.9253@trnddc03>,
Yevgen Muntyan <mu****************@tamu.edu> wrote:
>What about an implementation without files? All you have is 2**33 bytes of
memory where program code and everything lives. (yes, ridiculous, but
it's not the point here)
Use 2^32 bits of the memory to correspond to the 2^32 possible pointer
representations. Loop through the addresses of the 2^33 bytes,
setting the bit corresponding to the (32-bit) representation of the
pointer. Eventually there will be a collision: you will find that the
bit is already set. You can recover the pointer that caused that bit
to be set - it's the same as the current one - and now you have two
pointers that compare equal but are supposed to be pointers to
different bytes.
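
The argument is pure pigeonhole, and it can be sketched at a reduced
scale (how a real implementation maps addresses to representations is
unknown, so the low-bits mapping below is only an assumption for
illustration; here 2**8 representations face 2**9 addresses):

#include <stdio.h>

/* REPR_BITS-bit representations cannot name more than 2**REPR_BITS
   distinct bytes, so scanning twice that many addresses must reuse
   some representation. */
#define REPR_BITS 8
#define NADDR (1ul << (REPR_BITS + 1))

int main(void)
{
    unsigned char seen[1u << REPR_BITS] = {0};
    unsigned long addr;

    for (addr = 0; addr < NADDR; addr++) {
        unsigned repr = (unsigned)(addr & ((1u << REPR_BITS) - 1));
        if (seen[repr]) {
            printf("collision: address %lu reuses representation %u\n",
                   addr, repr);
            return 0;
        }
        seen[repr] = 1;
    }
    return 0;
}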

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jan 8 '07 #134
Richard Tobin wrote:
In article <hcAoh.15012$rz3.9253@trnddc03>,
Yevgen Muntyan <mu****************@tamu.edu> wrote:
>>What about an implementation without files? All you have is 2**33 bytes of
memory where program code and everything lives. (yes, ridiculous, but
it's not the point here)


Use 2^32 bits of the memory to correspond to the 2^32 possible pointer
representations. Loop through the addresses of the 2^33 bytes,
setting the bit corresponding to the (32-bit) representation of the
pointer. Eventually there will be a collision: you will find that the
bit is already set. You can recover the pointer that caused that bit
to be set - it's the same as the current one - and now you have two
pointers that compare equal but are supposed to be pointers to
different bytes.
You won't have two pointers; you will only know that the given bit
representation denoted some pointer at some point in the past.
But your example may be modified so it stores the bit representation
of the first pointer, and then loops through all possible addresses
until there's a collision. But then again, the implementation may
be SUPER clever, and it may watch what you're doing - it may see
that you stored a pointer (in any form; even if you split it into bits
and shuffle them randomly, it's SUPER clever after all). You won't be
able to store all valid pointers, so it can always find a new bit
representation once you request a new pointer. Super checked pointers.

Yevgen
Jan 9 '07 #135
Richard Tobin wrote:
Yevgen Muntyan <mu****************@tamu.edu> wrote:
>A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.

Very creative. But I can write pointers out to a file and read them
back, or print them out and ask the user to type them back in again,
so I can have as many pointers in use as I like.

Of course, the %p format could write out the necessary mapping
information, but I don't have to use %p. I could write out the bytes
of the pointer as integers.
But first you have to get the pointer. If malloc and friends
refuse to give you such, you don't have those pointers to bandy
about.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Jan 9 '07 #136
CBFalconer wrote:
Richard Tobin wrote:
>>Yevgen Muntyan <mu****************@tamu.edu> wrote:

>>>A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.

Very creative. But I can write pointers out to a file and read them
back, or print them out and ask the user to type them back in again,
so I can have as many pointers in use as I like.

Of course, the %p format could write out the necessary mapping
information, but I don't have to use %p. I could write out the bytes
of the pointer as integers.


But first you have to get the pointer. If malloc and friends
refuse to give you such, you don't have those pointers to bandy
about.
Of course we assume calloc((1 << 31) + 8, 2) succeeds here.

Yevgen
Jan 9 '07 #137
On Tue, 2 Jan 2007 22:40:29 -0500, "David T. Ashley" <dt*@e3ft.com> wrote:
>"David T. Ashley" <dt*@e3ft.com> wrote in message
news:8c******************************@giganews.com...
>It isn't immediately clear to me why the call has to fail.
2^16 * 2^16 is 2^32 (4 gigabytes). My system has more virtual
memory than that.

Well, just out of curiosity, I tried it out to see what the
largest approximate value is. Results below.

54135^2 is going to be on the order of 2.5G. That is a pretty
fair hunk of memory.

---------

[nouser@pamc ~]$ cat test3.c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
char *p;
int i;

for (i=65535; i>0; i--)
{
if (p = calloc(i,i))
{
printf("%d is apparently the largest integer that will succeed.\n",
i);
break;
}
}
}
[nouser@pamc ~]$ gcc test3.c
[nouser@pamc ~]$ ./a.out
54135 is apparently the largest integer that will succeed.
[nouser@pamc ~]$
Watch out for system-specific limits though.

For instance, on a system with a 32-bit size_t type, it may be in
theory possible to represent the size of a single object with a
size of SIZE_MAX, but user-specific limits might kick in a lot
earlier.

<OT>

For instance, on x86 systems the default installation of FreeBSD
shows:

| $ ulimit -a
| core file size (blocks, -c) unlimited
=| data seg size (kbytes, -d) 524288
| file size (blocks, -f) unlimited
| max locked memory (kbytes, -l) unlimited
| max memory size (kbytes, -m) unlimited
| open files (-n) 7149
| pipe size (512 bytes, -p) 1
| stack size (kbytes, -s) 65536
| cpu time (seconds, -t) unlimited
| max user processes (-u) 3574
=| virtual memory (kbytes, -v) unlimited
| $

Note, above, that a system-specific limit for the size of the
data segment of a single process, will prevent a successful
allocation of memory long before you hit the 4 GB limit of a
32-bit size_t.

You can probably use something like the following program:

#include <assert.h>
#include <inttypes.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    size_t currsize, allocsize;
    char *p, *tmp;

    /*
     * Try allocating memory by doubling the size in each step.
     */
    allocsize = 1;
    currsize = 0;
    p = NULL;
    while (1) {
        if (currsize)
            allocsize = 2 * currsize;
        if ((SIZE_MAX / 2) < allocsize) {
            fprintf(stderr,
                "size_t limit would be exceeded!\n");
            free(p);
            exit(EXIT_SUCCESS);
        }
        printf("allocating %10ju", (uintmax_t) allocsize);
        fflush(stdout);
        if (p == NULL) {
            tmp = malloc(allocsize);
        } else {
            tmp = realloc(p, allocsize);
        }
        if (tmp == NULL) {
            printf("\nswitching algorithm at %ju bytes\n",
                (uintmax_t) currsize);
            fflush(stdout);
            break;
        }
        p = tmp;
        currsize = allocsize;

        printf(" zeroing");
        fflush(stdout);
        memset(p, 0, currsize);
        printf(" success\n");
        fflush(stdout);
    }

    if (p == NULL || currsize == 0) {
        fprintf(stderr, "Bummer! No allocation is possible.\n");
        exit(EXIT_FAILURE);
    }
    /*
     * Now try allocating repeatedly with 'allocsize', until we fail.
     * When we do, decrease allocsize and loop back.
     */
    allocsize = currsize;
    while (1) {
        if ((SIZE_MAX - allocsize) < currsize || allocsize < 1) {
            printf("Cannot allocate any more memory.\n");
            fflush(stdout);
            break;
        }
        printf("allocating %10ju+%ju",
            (uintmax_t) currsize, (uintmax_t) allocsize);
        fflush(stdout);

        assert(p != NULL);
        tmp = realloc(p, currsize + allocsize);
        if (tmp == NULL) {
            allocsize /= 2;
            printf(" failed, reducing allocsize to %ju\n",
                (uintmax_t) allocsize);
            continue;
        }
        p = tmp;
        currsize += allocsize;

        printf(" zeroing");
        fflush(stdout);
        memset(p, 0, currsize);
        printf(" success\n");
        fflush(stdout);
    }

    printf("Total memory allocated: %ju bytes\n", (uintmax_t) currsize);
    fflush(stdout);
    free(p);
    return EXIT_SUCCESS;
}

When run with an unlimited virtual memory size user-limit, this
will succeed in allocating a *lot* of memory.

But see what happens when user-limits are in place:

| $ ulimit -v 20000
| $ ulimit -a
| core file size (blocks, -c) unlimited
| data seg size (kbytes, -d) 524288
| file size (blocks, -f) unlimited
| max locked memory (kbytes, -l) unlimited
| max memory size (kbytes, -m) unlimited
| open files (-n) 7149
| pipe size (512 bytes, -p) 1
| stack size (kbytes, -s) 65536
| cpu time (seconds, -t) unlimited
| max user processes (-u) 3574
| virtual memory (kbytes, -v) 20000
| $ ./foo
| allocating 1 zeroing success
| allocating 2 zeroing success
| allocating 4 zeroing success
| allocating 8 zeroing success
| allocating 16 zeroing success
| allocating 32 zeroing success
| allocating 64 zeroing success
| allocating 128 zeroing success
| allocating 256 zeroing success
| allocating 512 zeroing success
| allocating 1024 zeroing success
| allocating 2048 zeroing success
| allocating 4096 zeroing success
| allocating 8192 zeroing success
| allocating 16384 zeroing success
| allocating 32768 zeroing success
| allocating 65536 zeroing success
| allocating 131072 zeroing success
| allocating 262144 zeroing success
| allocating 524288 zeroing success
| allocating 1048576 zeroing success
| allocating 2097152 zeroing success
| allocating 4194304 zeroing success
| allocating 8388608
| switching algorithm at 4194304 bytes
| allocating 4194304+4194304 failed, reducing allocsize to 2097152
| allocating 4194304+2097152 failed, reducing allocsize to 1048576
| allocating 4194304+1048576 failed, reducing allocsize to 524288
| allocating 4194304+524288 failed, reducing allocsize to 262144
| allocating 4194304+262144 failed, reducing allocsize to 131072
| allocating 4194304+131072 failed, reducing allocsize to 65536
| allocating 4194304+65536 failed, reducing allocsize to 32768
| allocating 4194304+32768 failed, reducing allocsize to 16384
| allocating 4194304+16384 failed, reducing allocsize to 8192
| allocating 4194304+8192 failed, reducing allocsize to 4096
| allocating 4194304+4096 failed, reducing allocsize to 2048
| allocating 4194304+2048 failed, reducing allocsize to 1024
| allocating 4194304+1024 failed, reducing allocsize to 512
| allocating 4194304+512 failed, reducing allocsize to 256
| allocating 4194304+256 failed, reducing allocsize to 128
| allocating 4194304+128 failed, reducing allocsize to 64
| allocating 4194304+64 failed, reducing allocsize to 32
| allocating 4194304+32 failed, reducing allocsize to 16
| allocating 4194304+16 failed, reducing allocsize to 8
| allocating 4194304+8 failed, reducing allocsize to 4
| allocating 4194304+4 failed, reducing allocsize to 2
| allocating 4194304+2 failed, reducing allocsize to 1
| allocating 4194304+1 failed, reducing allocsize to 0
| Cannot allocate any more memory.
| Total memory allocated: 4194304 bytes
| $

</OT>

Back to more topical stuff: the size_t type can represent the
size of much much bigger objects, but local configuration
prevents malloc() and realloc() from obtaining so much memory.
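
On POSIX systems a program can at least inspect those limits at run
time with getrlimit(); a rough sketch (note that an unlimited resource
is reported as RLIM_INFINITY, which simply prints as a huge number
here):

#include <stdio.h>
#include <sys/resource.h>

/* Query the data-segment and address-space limits that can make
   malloc() fail long before size_t runs out. */
int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_DATA, &rl) == 0)
        printf("data seg size:  soft %lu, hard %lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
    if (getrlimit(RLIMIT_AS, &rl) == 0)
        printf("address space:  soft %lu, hard %lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
    return 0;
}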

Jan 9 '07 #138
In article <MFCoh.1061$Cn3.1055@trnddc02>,
Yevgen Muntyan <mu****************@tamu.edu> wrote:
>You won't have two pointers; you will only know that the given bit
representation denoted some pointer at some point in the past.
I see nothing in the standard that prevents me from storing a pointer
by any means that I choose. If I store it by setting a bit in a table
of possible representations, how is that different from storing it in
a variable, writing it to a file, or encoding it in an integer, all of
which are generally agreed to be legal?

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jan 9 '07 #139
Richard Tobin wrote:
In article <MFCoh.1061$Cn3.1055@trnddc02>,
Yevgen Muntyan <mu****************@tamu.edu> wrote:

>>You won't have two pointers; you will only know that the given bit
representation denoted some pointer at some point in the past.


I see nothing in the standard that prevents me from storing a pointer
by any means that I choose. If I store it by setting a bit in a table
of possible representations, how is that different from storing it in
a variable, writing it to a file, or encoding it in an integer, all of
which are generally agreed to be legal?
Good point. Last attempt: you have read-only memory; you can read but
you can't write. So you can't use that table. Somehow I still believe
such a weird implementation is possible (not in a practical sense, of
course). The C standard seems to have made pointers different enough
from simple structures or integers for that to be true.
In any case it's impossible to prove or disprove that such an
implementation exists, and it's unlikely someone will actually create
a reasonably non-buggy implementation like that :)

Regards,
Yevgen
Jan 10 '07 #140
av
On Wed, 03 Jan 2007 09:58:35 +0100, av wrote:
>the "problem" is not in calloc or malloc but it is in the mathemaical
model ("modello matematico") used for size_t
possible i'm the only one in think that standard C is wrong on
definition of size_t?
Jan 11 '07 #141
av said:
On Wed, 03 Jan 2007 09:58:35 +0100, av wrote:
>>the "problem" is not in calloc or malloc but it is in the mathemaical
model ("modello matematico") used for size_t

possible i'm the only one in think that standard C is wrong on
definition of size_t?
That's unlikely. There are plenty of people who don't understand C.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jan 11 '07 #142
av wrote:
On Wed, 03 Jan 2007 09:58:35 +0100, av wrote:
>>the "problem" is not in calloc or malloc but it is in the mathemaical
model ("modello matematico") used for size_t

possible i'm the only one in think that standard C is wrong on
definition of size_t?
Yes.

--
Chris "first on the Underground!" Dollin
"No-one here is exactly what he appears." G'kar, /Babylon 5/

Jan 11 '07 #143
In article <2v********************************@4ax.com>, av <av@ala.a> wrote:
>Possibly I'm the only one who thinks that standard C is wrong in its
definition of size_t?
Certainly the modular arithmetic used for unsigned types is usually
inappropriate for sizes, but so are signed types. And presumably the
standards committee would not be willing to invent a new kind of
integer for this purpose.

-- Richard

--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jan 11 '07 #144
Richard Tobin said:
In article <2v********************************@4ax.com>, av <av@ala.a>
wrote:
>>Possibly I'm the only one who thinks that standard C is wrong in its
definition of size_t?

Certainly the modular arithmetic used for unsigned types is usually
inappropriate for sizes,
Why?
but so are signed types. And presumably the
standards committee would not be willing to invent a new kind of
integer for this purpose.
And it's easy to see why not, if you try doing so for yourself. What
characteristics should such an integer type have?

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jan 11 '07 #145
In article <vI******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> wrote:
>Certainly the modular arithmetic used for unsigned types is usually
inappropriate for sizes,
>Why?
Because the space required for A bytes and B bytes is A+B bytes, not
(A+B) mod (SIZE_MAX+1).
>but so are signed types. And presumably the
standards committee would not be willing to invent a new kind of
integer for this purpose.
>And it's easy to see why not
Yes.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jan 11 '07 #146
av
On Thu, 11 Jan 2007 12:31:53 +0100, av wrote:
>On Wed, 03 Jan 2007 09:58:35 +0100, av wrote:
>>the "problem" is not in calloc or malloc but it is in the mathemaical
model ("modello matematico") used for size_t

possible i'm the only one in think that standard C is wrong on
definition of size_t?
Because I have some time to troll, here is the right model for your
"beloved" ssize_t and size_t:
------------------------------------------
Signed
int32_t a, b;

        / INT32_MIN, if a or b (or both) is INT32_MIN
a @ b = | INT32_MIN, if the result is not the usual mathematical result
        \ the mathematically correct result, otherwise

where @ is in { +, -, *, /, % }
If I want to see whether a computation that uses + - * / %
is correct, it is enough to prove that every
result is in [INT32_MIN+1..INT32_MAX]
------------------------------------------
Unsigned
uint32_t a, b;

        / UINT32_MAX, if a or b (or both) is UINT32_MAX
a @ b = | UINT32_MAX, if the result is not the usual mathematical result
        \ the mathematically correct result, otherwise

where @ is in { +, -, *, /, %, <<, >>, &, |, ~ }
If I want to see whether a computation that uses + - * / % << >> & | ~
is correct, it is enough to prove that every
result is in [0..UINT32_MAX-1]
------------------------------------------
I say I am trolling because I am not sure I am right;
I can only say that it seems useful to me.
Jan 11 '07 #147
Richard Tobin said:
In article <vI******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> wrote:
>>Certainly the modular arithmetic used for unsigned types is usually
inappropriate for sizes,
>>Why?

Because the space required for A bytes and B bytes is A+B bytes, not
(A+B) mod (SIZE_MAX+1).
Sure, but at some point you have to accept that A+B doesn't make sense. It
may well be that implementations need to supply a wider size_t than they do
at present, but when all's said and done, there are (according to
Schneier's "Applied Cryptography", 2nd edition) only 2^265 atoms in the
Universe (not including dark matter). If we could store a megabyte of data
in each atom (ha! why not?), that would be 2^285 bytes, right? So if we had
a 285-bit size_t, it would be able to store the sum total of the sizes of
all the objects it is possible to create in a Universe-sized computer, so
there is simply no point in having a size_t wider than that (unless you're
into parallel universes). If we confine ourselves to the Earth, we have a
mere 2^170 atoms to play with, so - even given that rather optimistic
megabyte per atom - a 190-bit size_t would do the trick nicely, and modular
arithmetic would become rather irrelevant.

We live in a real world with real limits, and it's high time that computer
programmers got used to the fact.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jan 11 '07 #148
Richard Heathfield <rj*@see.sig.invalid> writes:
We live in a real world with real limits, and it's high time that computer
programmers got used to the fact.
But integers are *so* much easier!
--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}
Jan 11 '07 #149
In article <eb*********************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> writes:
>Because the space required for A bytes and B bytes is A+B bytes, not
(A+B) mod (SIZE_MAX+1).
>Sure, but at some point you have to accept that A+B doesn't make sense.
I'm not really arguing that size_t ought to be a different type, just
agreeing with someone that it has a problem. And we've seen that this
problem is not illusory, because several implementations have a bug in
calloc() because of it.

There are various ways it could be improved, some more and some less
C-like:

- use an arbitrary-precision type. Handy, but not really C.
- use a larger but fixed-size integer type. This is happening anyway
with 64-bit systems, but is still susceptible to the calloc() mistake.
- use an integer type that is required to signal overflow. This
seems like a plausible extension.

The overflow-signalling solution seems like a good idea to me; I imagine
the main problem is that it is not efficient to implement on all systems.
>We live in a real world with real limits, and it's high time that computer
programmers got used to the fact.
Having programming languages check these limits has advantages.
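
Lacking such a type, the checks can at least be spelled out by hand in
today's C; a minimal sketch of wrap-checked size arithmetic (the helper
names are made up):

#include <stddef.h>
#include <stdint.h>

/* Each helper reports failure instead of silently reducing the result
   modulo SIZE_MAX + 1. */
static int size_add(size_t a, size_t b, size_t *out)
{
    if (a > SIZE_MAX - b)
        return 0;               /* a + b would wrap */
    *out = a + b;
    return 1;
}

static int size_mul(size_t a, size_t b, size_t *out)
{
    if (a != 0 && b > SIZE_MAX / a)
        return 0;               /* a * b would wrap */
    *out = a * b;
    return 1;
}

int main(void)
{
    size_t n;
    /* Detects the calloc case from the start of the thread on a
       32-bit size_t; succeeds where size_t is wider. */
    return size_mul((size_t)65521, (size_t)65552, &n) ? 0 : 1;
}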

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jan 11 '07 #150
