Bytes IT Community

Suggestions for my mallocator

Hi all,

This is my first posting on this forum! I hope it will be the first of
many as I work my way up to be a C guru :)

We had an assignment to write our own storage allocator, and mine was
graded very poorly with comment "Hopelessly naive". Now I can see the
main problem, which is that some calls to myfree() won't actually
release the memory, but that would be pretty difficult to achieve, and
the rest of it seems to work fine to me.

If anyone here can make any constructive comments, that would be great
:)
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define SYSTEM_RAM ( 1<<30 )
#define MAX_TO_USE ( ((SYSTEM_RAM) / 3) * 2 )

static char heap[MAX_TO_USE];
static char *p=heap;
static char *oblivion=heap+MAX_TO_USE;
static char *last;

void *mymalloc(long bytes)
{
    if(p+bytes<oblivion) {
        last=p;
        p+=bytes;
        return (void *) last;
    }
    return NULL;
}

void myfree(void *q)
{
    if(q==NULL)
        return;
    if(q==last) {
        p=last;
        last=NULL;
        return;
    }
    /* need something more sophisticated - we don't have it, so
     * let's just leak the memory :) */
    return;
}

int main()
{
    char *t=mymalloc(14);
    strcpy(t,"Hello, world!");
    printf("%s\n", t);
    myfree(t);
    return 0;
}

Dec 27 '06 #1
39 Replies


nf*********@mailinator.com said:
Hi all,

This is my first posting on this forum! I hope it will be the first of
many as I work my way up to be a C guru :)

We had an assignment to write our own storage allocator, and mine was
graded very poorly with comment "Hopelessly naive". Now I can see the
main problem, which is that some calls to myfree() won't actually
release the memory, but that would be pretty difficult to achieve, and
the rest of it seems to work fine to me.
The job of the deallocator is to make the storage available for re-use by
the program. See pp185ff of K&R2 for an example.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 27 '06 #2

Mine does do that, but admittedly not in certain cases :~

The only problem occurs if mymalloc and myfree calls are intertwined
rather than nested like parentheses. And after all this was just an
exercise - it's not like any production code will use my functions -
and I'd guess that ultimately you need kernel calls for a serious
implementation of malloc (keeping large arrays around the place can't
really be ideal :?)

Anyway, back to my code, it seems that the trouble is that to keep
track of an arbitrary number of calls to mymalloc(), my allocator would
itself need arbitrarily large storage, and it could only get this from
malloc(), which sort of defeats the object. :???:
Richard Heathfield wrote:
nf*********@mailinator.com said:
Hi all,

This is my first posting on this forum! I hope it will be the first of
many as I work my way up to be a C guru :)

We had an assignment to write our own storage allocator, and mine was
graded very poorly with comment "Hopelessly naive". Now I can see the
main problem, which is that some calls to myfree() won't actually
release the memory, but that would be pretty difficult to achieve, and
the rest of it seems to work fine to me.

The job of the deallocator is to make the storage available for re-use by
the program. See pp185ff of K&R2 for an example.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 27 '06 #3


<nf*********@mailinator.com> wrote in message
news:11*********************@i12g2000cwa.googlegroups.com...
Mine does do that, but admittedly not in certain cases :~

The only problem occurs if mymalloc and myfree calls are intertwined
rather than nested like parentheses. And after all this was just an
exercise - it's not like any production code will use my functions -
and I'd guess that ultimately you need kernel calls for a serious
implementation of malloc (keeping large arrays around the place can't
really be ideal :?)

Anyway, back to my code, it seems that the trouble is that to keep
track of an arbitrary number of calls to mymalloc(), my allocator would
itself need arbitrarily large storage, and it could only get this from
malloc(), which sort of defeats the object. :???:
No. It is quite legitimate to allow only a finite, though large, number of
allocations. If you have a megabyte of memory available and allow only 1024
allocations, then unless the average allocation is under 1K the user will
never notice. Plenty of commercial allocators will pad small allocations up
to 1K.

The important thing is that calls to malloc() and free() can be interleaved
in any order (as long as each free() follows its matching malloc()), that
memory, once freed, is available for reallocation, and that the blocks you
allocate do not overlap. This is not trivial to achieve.

For bonus points you want to use your storage area efficiently - avoid
fragmentation - and you want the malloc() and the free() to execute in a
short time.

I've hand-coded mallocs for production systems. These were embedded programs.
The allocator didn't have to run in a special kernel mode. It was simply a
normal C function that accessed a global array.
--
www.personal.leeds.ac.uk/~bgy1mm
freeware games to download.

Dec 27 '06 #4

nf*********@mailinator.com said:
Mine does do that, but admittedly not in certain cases :~

The only problem occurs if mymalloc and myfree calls are intertwined
rather than nested like parentheses.
And yet that is the typical usage, which is why you need to do more work
than just a simple "start here" indicator. I refer you again to K&R2's
example, which gives you a solid introduction to the issues and how to set
about resolving them.
And after all this was just an exercise
...which won't do you much good if you don't do it properly. Sorry, but
there it is.
- it's not like any production code will use my functions -
and I'd guess that ultimately you need kernel calls for a serious
implementation of malloc (keeping large arrays around the place can't
really be ideal :?)
Ultimately, yes, a production-quality memory manager in a protected-mode OS
is going to need to ask the OS for its memory.
Anyway, back to my code, it seems that the trouble is that to keep
track of an arbitrary number of calls to mymalloc(), my allocator would
itself need arbitrarily large storage, and it could only get this from
malloc(), which sort of defeats the object. :???:
No, you need to keep track of memory that's *returned* to you, so that you
can give it out *again*. And that means keeping track of each block you
allocate, rather than just saying "well, everything below offset X has been
allocated".

As I said before, see pp185ff of K&R2 for an example.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 27 '06 #5

nf*********@mailinator.com wrote:
Richard Heathfield wrote:
nf*********@mailinator.com said:
We had an assignment to write our own storage allocator, and mine was
graded very poorly with comment "Hopelessly naive". Now I can see the
main problem, which is that some calls to myfree() won't actually
release the memory, but that would be pretty difficult to achieve, and
the rest of it seems to work fine to me.
The job of the deallocator is to make the storage available for re-use by
the program. See pp185ff of K&R2 for an example.
Mine does do that, but admittedly not in certain cases :~
You can only free the chunk of memory you have allocated last.
The only problem occurs if mymalloc and myfree calls are intertwined
rather than nested like parentheses.
Well, that's like building a car that has no steering wheel, no
accelerator and no brake pedal, just a single button labeled "Go"
which, when you hold it down, makes the car go straight forward.
You could also argue that "the only problem" is that it works only
on a straight street with no other traffic...
Anyway, back to my code, it seems that the trouble is that to keep
track of an arbitrary number of calls to mymalloc(), my allocator would
itself need arbitrarily large storage, and it could only get this from
malloc(), which sort of defeats the object. :???:
If you look up the pages in K&R2 Richard told you about, you will
find one way to deal with that "insurmountable" problem: just put
a structure at the start of each newly "allocated" block of memory
that contains a field with the size and a pointer to the next
block's address. By iterating through the linked list of pointers
you can find the next fitting free block, or the block you're asked
to free. When you free a block you also join it with any free block
before or after it (if one exists) to avoid fragmentation. All that
takes little more than 50 lines of code (if you modify it to use
memory only from a fixed area, as in your case, instead of asking
the OS for more memory when running out of space), and at a "cost"
of about the size of a pointer plus an integer size field per
block. The example code in K&R2 also deals with the alignment of
the memory returned by malloc(), a problem your program does not
even take into consideration.

Regards, Jens
--
\ Jens Thoms Toerring ___ jt@toerring.de
\__________________________ http://toerring.de
Dec 27 '06 #6

Thanks to all on the forum for your thoughts :)

I guess it's a question of trade-offs here between flexibility (how
many allocations do you allow?), speed vs. memory usage (how much
fragmentation is tolerable?), etc. I've come up with what seems like a
pretty good way of doing this, which ensures absolutely no
fragmentation occurs - once a block is freed, the deallocator
immediately tidies things up. An array of pointers keeps references to
all the currently allocated blocks.

The only slight disadvantage of this approach is that it necessitates a
slightly different API for malloc and free, with an extra level of
indirection.
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define SYSTEM_RAM ( 1<<30 )
#define MAX_TO_USE ( ((SYSTEM_RAM) / 3) * 2 )
#define MAX_ALLOCS ( (MAX_TO_USE) >> 10 )

/* set up heap to allocate from */
static char heap[MAX_TO_USE], *top=heap, *oblivion=heap+MAX_TO_USE;

/* pointers to allocated blocks */
static char *blocks[MAX_ALLOCS];

void **mymalloc(long bytes)
{
    char **q;
    if(top+bytes<oblivion) {
        /* find first free pointer in array */
        for(q=blocks; *q; q++)
            if(q==blocks+MAX_ALLOCS)
                return NULL;
        *q=top;
        top+=bytes;
        return (void **) q;
    }
    return NULL;
}

void myfree(void **q)
{
    char **next, **r;
    long size;
    if(q==NULL || *q==NULL)
        return;
    /* OK... let's find the start of the block after the one to be freed */
    next=&top;
    for(r=blocks; r<blocks+MAX_ALLOCS; r++)
        if((char *) *q<*r && *r<*next)
            next=r;
    size=next-(char **) q; /* size of the block */
    /* shift down to fill in block to be freed */
    memmove(*q, *next, MAX_TO_USE-(next-blocks));
    /* move down all pointers above next */
    top=*next-size;
    for(r=blocks; r<blocks+MAX_ALLOCS; r++)
        if(*next<=*r)
            *r-=size;
}

int main()
{
    char **s=(char **) mymalloc(8);
    char **t=(char **) mymalloc(7);
    strcpy(*s,"Hello, ");
    strcpy(*t,"world!");
    printf("%s%s\n", *s, *t);
    myfree((void **) s);
    myfree((void **) t);
    return 0;
}
PS. Thanks for the heads-up on the K&R2 discussion - I've got a pdf of
it coming down as a torrent as we speak, so I'll read it as soon as I
get it :)
Malcolm wrote:
<nf*********@mailinator.com> wrote in message
news:11*********************@i12g2000cwa.googlegroups.com...
Mine does do that, but admittedly not in certain cases :~

The only problem occurs if mymalloc and myfree calls are intertwined
rather than nested like parentheses. And after all this was just an
exercise - it's not like any production code will use my functions -
and I'd guess that ultimately you need kernel calls for a serious
implementation of malloc (keeping large arrays around the place can't
really be ideal :?)

Anyway, back to my code, it seems that the trouble is that to keep
track of an arbitrary number of calls to mymalloc(), my allocator would
itself need arbitrarily large storage, and it could only get this from
malloc(), which sort of defeats the object. :???:
No. It is quite legitimate to allow only a finite, though large, number of
allocations. If you have a megabyte of memory available and allow only 1024
allocations, then unless the average allocation is under 1K the user will
never notice. Plenty of commercial allocators will pad small allocations up
to 1K.

The important thing is that calls to malloc() and free() can be interleaved
in any order (as long as each free() follows its matching malloc()), that
memory, once freed, is available for reallocation, and that the blocks you
allocate do not overlap. This is not trivial to achieve.

For bonus points you want to use your storage area efficiently - avoid
fragmentation - and you want the malloc() and the free() to execute in a
short time.

I've hand-coded mallocs for production systems. These were embedded programs.
The allocator didn't have to run in a special kernel mode. It was simply a
normal C function that accessed a global array.
--
www.personal.leeds.ac.uk/~bgy1mm
freeware games to download.
Dec 27 '06 #7

nf*********@mailinator.com wrote:
This is my first posting on this forum! I hope it will be the first of
many as I work my way up to be a C guru :)

We had an assignment to write our own storage allocator, and mine was
graded very poorly with comment "Hopelessly naive". Now I can see the
main problem, which is that some calls to myfree() won't actually
release the memory, but that would be pretty difficult to achieve, and
the rest of it seems to work fine to me.
Well, the comment is fair. But the problem is that writing a good
allocation system is actually quite difficult.

Writing one that satisfies all possible useful criteria and runs in
O(1) (both malloc and free) is basically impossible. You can do one
that is O(ln(#allocations)), however it is horrendously complicated.
Usually, however, it's straightforward to write an allocation system
that is O(1) *most* of the time, or which is O(1) all the time, but
does not always successfully allocate or recover all of memory
correctly.

The classic "buddy allocator" system has each allocation point to the
two allocations (or edges of a large block) that it is adjacent to.
Each allocated unit would also have header data, like a flag indicating
whether it is allocated or free, and how large the allocation is. Thus
freeing a block is easy: you just switch the state to "free" and
immediately merge with any adjacent block that is free (this is always
at most two steps).

The real problem is how do you do allocations. If you maintain a list
of free entries (with internal references, like a linked list or
something) then you can look through the list for entries that are
large enough to fit your allocation. But then the operation is
O(#allocations) which can be very slow. So the simplest thing to do is
to partition these into lists of particular size groupings, powers of
two are usually good, and just picking the first entry from each list
after you've guaranteed that its in a size range that will sufficiently
satisfy the allocation request. However, you are still stuck with the
problem that unless you look through your entire lists in each category
(thus bringing you back to O(#allocations)) you cannot guarantee "best
fit" allocations. So you will commonly end up allocating too much
memory.

Then there are other strange strategies, like Fibonacci allocators. The
idea is that you start with a block the size of a Fibonacci number, and
you can always subdivide the block into two sub-blocks whose sizes are
also Fibonacci numbers. In this way you can avoid the overhead of
storing the size of the block, since that in a sense is determined by
your offset from the big block. Personally, I don't quite see the
whole logic of this scheme, but people have done such things.

Of course, the other thing about allocation strategies is that they
often entail system-level details. For example, the addresses for new
space from the system may always be adjacent to your previous
allocations. The way this is possible is if you have a dynamic memory
mapping system where physical memory is fetched in distinct pieces, but
which can be mapped into logical memory at a prescribed address. This
is the way the UNIX-style "sbrk()" function works (Windows NT does not
support a similar mechanism; you simply allocate in distinct blocks
that may or may not be adjacent.)

Another system detail has to do with multi-threading. To support
multithreading environments you have to make sure that at most one
thread is calling your memory allocation functions at a time. Without
this kind of mechanism, multi-threading would not even work. This can
often have a profound impact on allocator design -- using distinct
regions with different threads can dramatically raise performance, if
you design things right.

So, in short, writing a good memory allocation system is not something
that can be undertaken lightly, unless you are satisfied with getting
unimpressive answers.

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

Dec 27 '06 #8

nf*********@mailinator.com said:
PS. Thanks for the heads-up on the K&R2 discussion - I've got a pdf of
it coming down as a torrent as we speak, so I'll read it as soon as I
get it :)
I suggest you buy a copy instead. Keeping yourself on the right side of the
law is always a good idea.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 27 '06 #9

On 27 Dec 2006 09:36:03 -0800, nf*********@mailinator.com wrote:
>Thanks to all on the forum for your thoughts :)

I guess it's a question of trade-offs here between flexibility (how
many allocations do you allow?), speed vs. memory usage (how much
fragmentation is tolerable?), etc. I've come up with what seems like a
pretty good way of doing this, which ensures absolutely no
fragmentation occurs - once a block is freed, the deallocator
immediately tidies things up. An array of pointers keeps references to
all the currently allocated blocks.

The only slight disadvantage of this approach is that it necessitates a
slightly different API for malloc and free, with an extra level of
indirection.
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define SYSTEM_RAM ( 1<<30 )
#define MAX_TO_USE ( ((SYSTEM_RAM) / 3) * 2 )
#define MAX_ALLOCS ( (MAX_TO_USE) >> 10 )

/* set up heap to allocate from */
static char heap[MAX_TO_USE], *top=heap, *oblivion=heap+MAX_TO_USE;
For ease of discussion below, let's assume top contains 0x10000 and
oblivion contains 0x20000.
>
/* pointers to allocated blocks */
static char *blocks[MAX_ALLOCS];

void **mymalloc(long bytes)
Let's also assume bytes is 0x1000 for each of two calls.

Why not use size_t for consistency?

A void** is not guaranteed to be able to hold the address of anything
other than a void*.

Since a void* can hold the address of anything, there is no reason not
to use it.
>{
char **q;
if(top+bytes<oblivion) {
/* find first free pointer in array */
for(q=blocks; *q; q++)
On first call, blocks[0] is NULL, *q is false, and loop exits before
first iteration.

On second call, the loop exits when q points to blocks[1].
if(q==blocks+MAX_ALLOCS)
return NULL;
*q=top;
On first call, blocks[0] now contains 0x10000.

On second call, blocks[1] now contains 0x11000.
top+=bytes;
On first call, top now contains 0x11000.

On second call, top now contains 0x12000.
return (void **) q;
Why are you returning the address of an element in blocks when you
could just as easily return an address in heap?

On first call, you return &blocks[0].

On second call, your return &blocks[1].
}
return NULL;
}

void myfree(void **q)
Let's assume we free the block at blocks[1] first.

Let's free the block at blocks[0] second, but pretend the first call never
occurred.
>{
char **next, **r;
long size;
if(q==NULL || *q==NULL)
return;
/* OK... let's find the start of the block after the one to be freed
*/
next=&top;
On first call, top contains 0x12000.

On second call also.
for(r=blocks; r<blocks+MAX_ALLOCS; r++)
if((char *) *q<*r && *r<*next)
The second relation will not be evaluated until the first one is true.
On first call, *q is 0x11000.
On first iteration, *r is 0x10000; if is false.
On second iteration, *r is 0x11000; if is false.
On all subsequent iterations, *r is NULL; if is false.

On second call, *q is 0x10000.
On first iteration, *r is 0x10000; if is false.
On second iteration, *r is 0x11000 and *next is 0x12000; if is true.
On subsequent iterations, *r is NULL; if is false.
next=r;
You seem to be missing an exit from the for loop once you find the
block you are interested in.

On first call, you fall out of the for loop without ever changing next,
and with r pointing one element past the end of blocks.

On second call, next is set to &blocks[1].
size=next-(char **) q; /* size of the block */
This arithmetic does not yield the number of bytes in the block.

On first call, next contains &top and q contains &blocks[1]. The
difference has no meaning, since it is simply a measure (but not in
bytes) of the space between two unrelated objects defined at file
scope.

On second call, next contains &blocks[1] and q contains &blocks[0]. The
difference is guaranteed to always be exactly 1.
/* shift down to fill in block to be freed */
memmove(*q, *next, MAX_TO_USE-(next-blocks));
Your third argument computes the wrong number of bytes. next-blocks
is the number of elements in blocks, not the number of bytes in heap.

Furthermore, by moving data in heap, you have broken the interface
with your users. Once you have returned the address of blocks[N] to a
user and he has dereferenced it to get the address of heap[M] as the
start of his allocated area, you may not move heap[M] to some other
area in heap.
/* move down all pointers above next */
top=*next-size;
for(r=blocks; r<blocks+MAX_ALLOCS; r++)
if(*next<=*r)
*r-=size;
Once you move any of the entries in blocks, you have broken the
interface with your users. Once you have returned the address of
blocks[N] to a user, you may not alter the value contained in
blocks[N] until the user frees it. Your user is entitled to use the
value in blocks[N] as often as he wants to get to the start of his
allocated area.
>}

int main()
{
char **s=(char **) mymalloc(8);
char **t=(char **) mymalloc(7);
strcpy(*s,"Hello, ");
strcpy(*t,"world!");
printf("%s%s\n", *s, *t);
myfree((void **) s);
myfree((void **) t);
return 0;
}
PS. Thanks for the heads-up on the K&R2 discussion - I've got a pdf of
it coming down as a torrent as we speak, so I'll read it as soon as I
get it :)
K&R2 is not legally available as a pdf.
Remove del for email
Dec 27 '06 #10

Thanks for the reply... I think you've misunderstood my approach, which
is understandable, as I didn't give any information apart from a hint
that it breaks the usual malloc/free API...
return (void **) q;

Why are you returning the address of an element in blocks when you
could just as easily return an address in heap?

...

You seem to be missing an exit from the for loop once you find the
block you are interested in.

...

Furthermore, by moving data in heap, you have broken the interface
with your users. Once you have returned the address of blocks[N] to a
user and he has dereferenced it to get the address of heap[M] as the
start of his allocated area, you may not move heap[M] to some other
area in heap.

...

Once you move any of the entries in blocks, you have broken the
interface with your users. Once you have returned the address of
blocks[N] to a user, you may not alter the value contained in
blocks[N] until the user frees it. Your user is entitled to use the
value in blocks[N] as often as he wants to get to the start of his
allocated area.
That's the point of the pointers-to-pointers... in my scheme, the
allocation black-box maintains an array of pointers, each of which
points to a block of memory in the big array. However, the contract
with the caller is as follows. Let's say he calls malloc(N) and is
given p, which is a char **. Then *p is secretly an element of the
array of pointers, so **p points to a block in the big array of size N.
However, the caller isn't promised that *p is a constant pointer, only
that p is a constant pointer and *p always points to the right thing -
in other words, he always has to dereference twice as **p and mustn't
store *p=q, fiddle around, then hope that *q will still be valid.

The pay-off for this is that the allocator is guaranteed to be as
storage-efficient as possible, in the sense that fragmentation is
never allowed to occur.

I agree that this is a slight disadvantage, but once again this is only
an exercise, not a proposal for a genuine library function.
size=next-(char **) q; /* size of the block */

This arithmetic does not yield the number of bytes in the block.
<snip>

I'll have to check the calculations in the rest of the code and debug
:)
K&R2 is not legally available as a pdf.
Maybe that's strictly speaking true, but some considerations:
1. I could wait a week and borrow it from the library - same result,
i.e. I read their bit on allocators, but I've wasted a week.
2. If Kernighan & Ritchie were around today they'd probably be in the
spheres of Open Content, Free Documentation, etc., which is a more
ethical approach to "IP".
3. It's unlikely either of them will go hungry because I don't buy a
copy of their book.

Dec 27 '06 #11

nf*********@mailinator.com said:

<snip>
>K&R2 is not legally available as a pdf.

Maybe that's strictly speaking true, but some considerations:
1. I could wait a week and borrow it from the library - same result,
No, that's a legal source.
i.e. I read their bit on allocators, but I've wasted a week.
You'd spend that whole week sitting on your hands?
2. If Kernighan & Ritchie were around today they'd probably be in the
spheres of Open Content, Free Documentation, etc., which is a more
ethical approach to "IP".
K&R *are* around today, and so is their publisher. And they don't take
kindly to copyright violations.
3. It's unlikely either of them will go hungry because I don't buy a
copy of their book.
So go steal a Ferrari, why don't you? What kind of morality is that?

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 27 '06 #12

we******@gmail.com wrote:
nf*********@mailinator.com wrote:
>This is my first posting on this forum! I hope it will be the
first of many as I work my way up to be a C guru :)

We had an assignment to write our own storage allocator, and mine
was graded very poorly with comment "Hopelessly naive". Now I can
see the main problem, which is that some calls to myfree() won't
actually release the memory, but that would be pretty difficult
to achieve, and the rest of it seems to work fine to me.

Well, the comment is fair. But the problem is that writing a good
allocation system is actually quite difficult.

Writing one that satisfies all possible useful criteria and runs
in O(1) (both malloc and free) is basically impossible. You can
do one that is O(ln(#allocations)), however it is horrendously
complicated. Usually, however, it's straightforward to write an
allocation system that is O(1) *most* of the time, or which is
O(1) all the time, but does not always successfully allocate or
recover all of memory correctly.
It is inherently impossible to make a fragmentation-proof allocator
for C, because of the presence of arbitrary pointers. Any such
would require indirect memory access for everything, which would be
a monstrous performance hit when enforced, together with
specialized compilers (or interpreters). However it is definitely
possible to have O(1) performance for all operations and to always
recover memory. It doesn't even require buddy systems (which have
their own penalties). I have published such a system, which only
requires care in writing.

See <http://cbfalconer.home.att.net/download/nmalloc.zip>

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

Dec 27 '06 #13

It's the morality of common sense and practicality.

Scenario 1. I go to a library and find K&R. Under UK fair-use law I'm
allowed to photocopy 5% for personal use. In this case I copy the pages
on allocators. I end up with several sheets of paper containing the
relevant bits from the book.

Scenario 2. Someone shares a PDF of K&R with me over the internet. No
one has stolen anything: he still has the file himself. I print out the
pages on allocators and delete the file (for argument's sake). I end up
with several sheets of paper containing the relevant bits from the
book.

If you're so anal that you worry about these different means to obtain
an identical result, then I pity you.

Copyright law is completely screwed up. I have a dual-boot
Windows/Linux machine. I take my favourite DVD, and if I boot into
Windows then I can legally play it, whereas if I boot into Linux then
there is no legal way I can obtain the same result (reading the disc,
displaying the movie on the screen). That may tell you that I should
submit myself to this idiocy and never watch DVDs in Linux. It tells
*me* however that the law is an ass and life is too short.
Richard Heathfield wrote:
nf*********@mailinator.com said:

<snip>
K&R2 is not legally available as a pdf.
Maybe that's strictly speaking true, but some considerations:
1. I could wait a week and borrow it from the library - same result,

No, that's a legal source.
i.e. I read their bit on allocators, but I've wasted a week.

You'd spend that whole week sitting on your hands?
2. If Kernighan & Ritchie were around today they'd probably be in the
spheres of Open Content, Free Documentation, etc., which is a more
ethical approach to "IP".

K&R *are* around today, and so is their publisher. And they don't take
kindly to copyright violations.
3. It's unlikely either of them will go hungry because I don't buy a
copy of their book.

So go steal a Ferrari, why don't you? What kind of morality is that?

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 27 '06 #14

On 27 Dec 2006 13:20:51 -0800, nf*********@mailinator.com wrote:
snip
>
>K&R2 is not legally available as a pdf.

Maybe that's strictly speaking true, but some considerations:
1. I could wait a week and borrow it from the library - same result,
i.e. I read their bit on allocators, but I've wasted a week.
2. If Kernighan & Ritchie were around today they'd probably be in the
spheres of Open Content, Free Documentation, etc., which is a more
ethical approach to "IP".
3. It's unlikely either of them will go hungry because I don't buy a
copy of their book.
I make my living writing software. I have no interest in helping you
steal from someone else who does the same.
Remove del for email
Dec 27 '06 #15

CBFalconer wrote:
we******@gmail.com wrote:
nf*********@mailinator.com wrote:
This is my first posting on this forum! I hope it will be the
first of many as I work my way up to be a C guru :)

We had an assignment to write our own storage allocator, and mine
was graded very poorly with comment "Hopelessly naive". Now I can
see the main problem, which is that some calls to myfree() won't
actually release the memory, but that would be pretty difficult
to achieve, and the rest of it seems to work fine to me.
Well, the comment is fair. But the problem is that writing a good
allocation system is actually quite difficult.

Writing one that satisfies all possible useful criteria and runs
in O(1) (both malloc and free) is basically impossible. You can
do one that is O(ln(#allocations)) however it is horrendously
complicated. Usually, however, its straight forward to write an
allocation system that is O(1) *most* of the time, or which is
O(1) all the time, but does not always successfully allocate or
recover all of memory correctly.

It is inherently impossible to make a fragmentation proof allocator
for C, because of the presence of arbitrary pointers.
Well, more importantly, you can choose the ordering in which a sequence
of allocations is freed, and therefore can always create whatever kind
of fragmentation you like. So this clearly is not one of the criteria
a C-style allocator can have.
[...] Any such
would require indirect memory access for everything, which would be
a monstrous performance hit when enforced, together with
specialized compilers (or interpreters). However it is definitely
possible to have O(1) performance for all operations and to always
recover memory.
Ok, but what about also always being able to successfully allocate?
[...] It doesn't even require buddy systems (which have
their own penalties). I have published such a system, which only
requires care in writing.

See <http://cbfalconer.home.att.net/download/nmalloc.zip>
I noticed you don't supply any separate documentation for it. Can I
assume the comments in the code are accurate? From what I can peruse:

1) You seem to be neglecting at least one merge scenario. When you
split off the excess from an allocation, you are not merging it with
any possible adjacent free entries. In a sense this causes unnecessary
fragmentation of the remaining free entries. Remember with your
strategy, you want your free entries to be as large as possible. Try
writing a benchmark with a lot of reallocs in it, and you will see the
difference this makes.

2) Your function size2bucket() is a linear search (though, of course,
its count is bounded above by sizeof(ulong)*CHAR_BIT). Think about it
as bits, separated from the mathematical properties, for a second.
You can re-implement it using a binary search, which means on a real
system you would have an effective loop count of 5 or 6.

3) You are also giving up as soon as your algorithm is unable to find
memory using its "fast method". If you run out of memory, you are
typically in "screwedzville" anyway. You can simply go back to your
free list and check the bucket one lower, and start scanning linearly
for some fixed amount before truly bailing. This is important for
certain scenarios where you are allocating near the system limit in
terms of amount of memory, but nearly all your allocations are the
exact same size (this scenario is very typical). With your strategy as
coded, you can "run out of memory" even though you actually have plenty
of memory available -- you just aren't willing to scan for it. It's
very difficult to implement something which works robustly all the
time here, but you can capture typical cases without too much effort.

In any event, it is as I said. You can implement something with O(1),
but it ends up not always being able to allocate all available memory
successfully.

--
Paul Hsieh
http://www.pobox.com/~qed
http://bstring.sf.net/

Dec 28 '06 #16

>>>>> "n" == nfwavzrdvwl <nf*********@mailinator.com> writes:

nIf Kernighan & Ritchie were around today they'd probably be in
nthe spheres of Open Content, Free Documentation, etc., which is
na more ethical approach to "IP".

Kernighan & Ritchie *are* around today, and if they wanted their book
released under a permissive license they could do so.

nIt's unlikely either of them will go hungry because I don't buy
na copy of their book.

It's unlikely you'll get any help from people who make a living with
intellectual property, now that you've demonstrated your cavalier
attitude towards it.

Charlton

--
Charlton Wilbur
cw*****@chromatico.net
Dec 28 '06 #17

nf*********@mailinator.com wrote:
It's the morality of common sense and practicality.
Yes, common sense, you're right.
>
Scenario 1. I go to a library and find K&R. Under UK fair-use law I'm
allowed to photocopy 5% for personal use. In this case I copy the pages
on allocators. I end up with several sheets of paper containing the
relevant bits from the book.
Or you can buy the whole book and reward the authors' effort.
>
Scenario 2. Someone shares a PDF of K&R with me over the internet. No
one has stolen anything: he still has the file himself. I print out the
pages on allocators and delete the file (for argument's sake). I end up
with several sheets of paper containing the relevant bits from the
book.
Are you sure there's a legal pdf copy of K&R2 on the Internet?
Also, personal use doesn't authorize you or others to share it
over the Internet - personal means you, not half the world. The
copyright system is not broken... you're probably thinking of the
patent system being broken, which is something different. Broken
or not, the law is the way it is. Until it gets changed, you are
breaking the law.
If you're so anal that you worry about these different means to obtain
an identical result, then I pity you.
So if I shoot you, I should be safe because you would die
eventually anyway and I would still get my goal accomplished (identical
result). Screw the law, it's broken... The point is: we live by
the law, broken or not. You can fight it (FSF vs. patents), fight
to change it, but you don't break it until it's changed.
>
Copyright law is completely screwed up. I have a dual-boot
Windows/Linux machine. I take my favourite DVD, and if I boot into
Windows then I can legally play it, whereas if I boot into Linux then
there is no legal way I can obtain the same result (reading the disc,
displaying the movie on the screen). That may tell you that I should
submit myself to this idiocy and never watch DVDs in Linux. It tells
*me* however that the law is an ass and life is too short.
Patents, again, not copyright. You're getting it wrong. It
doesn't play on Linux because there is no program under Linux to
play it: the people distributing the software respect the law by
not shipping things they aren't allowed to, whether they agree
with it or not. You don't seem to get that.
--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)
Dec 28 '06 #18

nf*********@mailinator.com writes:
[...]
>K&R2 is not legally available as a pdf.

Maybe that's strictly speaking true,
Another word for "strictly speaking true" is "true", and you can drop
the "maybe".
but some considerations:
1. I could wait a week and borrow it from the library - same result,
i.e. I read their bit on allocators, but I've wasted a week.
2. If Kernighan & Ritchie were around today they'd probably be in the
spheres of Open Content, Free Documentation, etc., which is a more
ethical approach to "IP".
3. It's unlikely either of them will go hungry because I don't buy a
copy of their book.
If they were around today? Not very bright, are you?

If you're going to commit a crime, you probably shouldn't announce it
to the world in a manner that allows you to be traced.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 28 '06 #19

Nelu said:

<snip>
Are you sure there's a legal pdf copy of K&R2 on the Internet?
There isn't, as Prentice Hall will confirm if you ask them.

<snip>

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 28 '06 #20

we******@gmail.com wrote:
CBFalconer wrote:
>we******@gmail.com wrote:
>>nf*********@mailinator.com wrote:

This is my first posting on this forum! I hope it will be the
first of many as I work my way up to be a C guru :)

We had an assigment to write our own storage allocator, and mine
was graded very poorly with comment "Hopelessly naive". Now I can
see the main problem, which is that some calls to myfree() won't
actually release the memory, but that would be pretty difficult
to achieve, and the rest of it seems to work fine to me.

Well, the comment is fair. But the problem is that writing a good
allocation system is actually quite difficult.

Writing one that satisfies all possible useful criteria and runs
in O(1) (both malloc and free) is basically impossible. You can
do one that is O(ln(#allocations)) however it is horrendously
complicated. Usually, however, its straight forward to write an
allocation system that is O(1) *most* of the time, or which is
O(1) all the time, but does not always successfully allocate or
recover all of memory correctly.

It is inherently impossible to make a fragmentation proof allocator
for C, because of the presence of arbitrary pointers.

Well more importantly, you can choose the ordering in which a sequence
of allocations is freed, and therefore can always create whatever kind
of fragmentation you like. So this clearly is not one of the criteria
a C-style allocator can have.
If we could move the storage (see next para) this can be handled.
In ISO Pascal we know there are no extant pointers other than those
assigned by new, so indirection and repacking is possible.
>
>[...] Any such
would require indirect memory access for everything, which would be
a monstrous performance hit when enforced, together with
specialized compilers (or interpreters). However it is definitely
possible to have O(1) performance for all operations and to always
recover memory.

Ok, but what about also always being able to successfully allocate?
>[...] It doesn't even require buddy systems (which have
their own penalties). I have published such a system, which only
requires care in writing.

See <http://cbfalconer.home.att.net/download/nmalloc.zip>

I noticed you don't supply any seperate documentation for it. Can I
assume the comments in the code are accurate? From what I can peruse:
As far as I know they are accurate. Not the memalign portion,
which is nonsense as published (and so labelled).
>
1) You seem to be neglecting at least one merge scenario. When you
split off the excess from an allocation, you are not merging it with
any possible adjacent free entries. In a sense this causes unnecessary
fragmentation of the remaining free entries. Remember with your
strategy, you want your free entries to be as large as possible. Try
writing a benchmark with a lot of reallocs in it, and you will see the
difference this makes.
Not so. The split off portion is always merged if possible. If
too small to be of use it is included in the assigned memory. One
of us is wrong :-) Just such a test exists in the testing code.
>
2) Your function size2bucket() is a linear search (though, of course,
its count is bounded above by sizeof(ulong)*CHAR_BITS). Think about it
as bits but seperated from the mathematical properties for a second.
You can re-implement it using a binary search, which means on real
system you would have an effective loop count of 5 or 6.
For a search of roughly 16 or 20 items (at most), this is not
useful. Notice that the search begins at the first guaranteed
suitable bucket, and there is no need to search the list of bucket
items. This makes it approximately a first fit algorithm.
>
3) You are also giving up as soon as your algorithm is unable to find
memory using its "fast method". If you run out of memory, you are in
typically in "screwedzville" anyways. You can simply go back to your
free list and check the bucket one lower, and start scanning linearly
for some fixed amount before truly bailing. This is important for
certain scenarios where you are allocating near the system limit in
terms of amount of memory, but nearly all your allocations are the
exact same size (this scenario is very typical). With your strategy as
coded, you can "run out of memory" even though you actually have plenty
of memory available -- you just aren't willing to scan for it. Its
very difficult to implement something which works robustly and all the
time here, but you can capture typical cases without too much effort.
It won't be 'plenty'. It may exist in the lower bucket, but that
would involve searching a list of unknown length, requiring unknown
time. The system hasn't aborted, it just returns NULL, so the user
has an opportunity to take corrective action. If the system has
reached that point, total inability is near anyhow, but the
capability of making smaller assignments MAY still be present, to
assist in recovery.

Notice that when that mechanism fails the opportunity to grab more
memory from the OS still exists, via sbrk.
>
In any event, it is as I said. You can implement something with O(1),
but it ends up not always being able to allocate all available memory
successfully.
And, as I said, doing such requires moving previously assigned
memory and repacking. The fundamentals of C make this
impracticable.

The whole thing should port easily to any system where alignment
depends on the low order bits of the addresses and arithmetic on
such addresses is valid. The other dependency is on the sbrk
routine. Thus it should work for most *IX systems. The variadic
development debuggery macros require gcc to avoid bombing even when
they are switched off.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

Dec 28 '06 #21

CBFalconer wrote:
we******@gmail.com wrote:
CBFalconer wrote:
we******@gmail.com wrote:
nf*********@mailinator.com wrote:

This is my first posting on this forum! I hope it will be the
first of many as I work my way up to be a C guru :)

We had an assigment to write our own storage allocator, and mine
was graded very poorly with comment "Hopelessly naive". Now I can
see the main problem, which is that some calls to myfree() won't
actually release the memory, but that would be pretty difficult
to achieve, and the rest of it seems to work fine to me.

Well, the comment is fair. But the problem is that writing a good
allocation system is actually quite difficult.

Writing one that satisfies all possible useful criteria and runs
in O(1) (both malloc and free) is basically impossible. You can
do one that is O(ln(#allocations)) however it is horrendously
complicated. Usually, however, its straight forward to write an
allocation system that is O(1) *most* of the time, or which is
O(1) all the time, but does not always successfully allocate or
recover all of memory correctly.

It is inherently impossible to make a fragmentation proof allocator
for C, because of the presence of arbitrary pointers.
Well more importantly, you can choose the ordering in which a sequence
of allocations is freed, and therefore can always create whatever kind
of fragmentation you like. So this clearly is not one of the criteria
a C-style allocator can have.

If we could move the storage (see next para) this can be handled.
In ISO Pascal we know there are no extant pointers other that those
assigned by new, so indirection and repacking is possible.
[...] Any such
would require indirect memory access for everything, which would be
a monstrous performance hit when enforced, together with
specialized compilers (or interpreters). However it is definitely
possible to have O(1) performance for all operations and to always
recover memory.
Ok, but what about also always being able to successfully allocate?
[...] It doesn't even require buddy systems (which have
their own penalties). I have published such a system, which only
requires care in writing.

See <http://cbfalconer.home.att.net/download/nmalloc.zip>
I noticed you don't supply any seperate documentation for it. Can I
assume the comments in the code are accurate? From what I can peruse:

As far as I know they are accurate. Not the memalign portion,
which is nonsense as published (and so labelled).
1) You seem to be neglecting at least one merge scenario. When you
split off the excess from an allocation, you are not merging it with
any possible adjacent free entries. In a sense this causes unnecessary
fragmentation of the remaining free entries. Remember with your
strategy, you want your free entries to be as large as possible. Try
writing a benchmark with a lot of reallocs in it, and you will see the
difference this makes.

Not so. The split off portion is always merged if possible. If
too small to be of use it is included in the assigned memory. One
of us is wrong :-) Just such a test exists in the testing code.
Hmm ... you threw the combine call into mv2freelist() but didn't
document the fact that a merge was taking place. You need better
comments -- including one that explains where mv2freelist() is called
from and why it only needs to perform one direction of combine instead
of testing for both.
2) Your function size2bucket() is a linear search (though, of course,
its count is bounded above by sizeof(ulong)*CHAR_BITS). Think about it
as bits but seperated from the mathematical properties for a second.
You can re-implement it using a binary search, which means on real
system you would have an effective loop count of 5 or 6.

For a search of roughly 16 or 20 items (at most), this is not
useful.
You don't notice that 16/6 = 2.666 < c < 20/5 = 4, which is a lot larger
than 1? Look, I've written my own very similar allocation scheme, I
threw a profiler on it, and this calculation always turns out to be a
white-hot hot spot.
[...] Notice that the search begins at the first guaranteed
suitable bucket, and there is no need to search the list of bucket
items.
Huh? It starts at sz and shifts off bits one at a time. I don't know
what you are talking about here.
[...] This makes it approximately a first fit algorithm.
Whatever, you are trying to perform a ceil(lg2()), and you aren't doing
it very well.
3) You are also giving up as soon as your algorithm is unable to find
memory using its "fast method". If you run out of memory, you are in
typically in "screwedzville" anyways. You can simply go back to your
free list and check the bucket one lower, and start scanning linearly
for some fixed amount before truly bailing. This is important for
certain scenarios where you are allocating near the system limit in
terms of amount of memory, but nearly all your allocations are the
exact same size (this scenario is very typical). With your strategy as
coded, you can "run out of memory" even though you actually have plenty
of memory available -- you just aren't willing to scan for it. Its
very difficult to implement something which works robustly and all the
time here, but you can capture typical cases without too much effort.

It won't be 'plenty'.
It can be arbitrarily plenty. Look, simply allocate a fixed sized
element over and over until you run out of memory. Then just free
every other allocation (so no merging takes place, but basically half
your memory is reclaimed.) Then simply try to allocate 1 entry again
with that fixed size. You will fail because the free list only
contains elements from the tightly fitting bucket rather than the
bucket one size greater.
[...] It may exist in the lower bucket, but that
would involve searching a list of unknown length, requiring unknown
time.
To guarantee it, yes -- so don't provide a guarantee. Just search a
small finite number of elements before giving up. The point is that
the vast majority of scenarios where the memory is being pushed this
far, the size of the memory dominating data structures are still likely
to be uniform.
[...] The system hasn't aborted, it just returns NULL, so the user
has an opportunity to take corrective action.
My point is that it's returning NULL too soon. And as you well know,
corrective action is typically "exit(EXIT_FAILURE ...)". Staving off
the need should be a priority.
[...] If the system has
reached that point total inability is near anyhow, but the
capability of making smaller assignments MAY still be present, to
assist in recovery.

Notice that when that mechanism fails the opportunity to grab more
memory from the OS still exists, via sbrk.
I am talking about the scenario after sbrk() runs out of more system
memory as well.

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

Dec 28 '06 #22

we******@gmail.com wrote:
CBFalconer wrote:
>we******@gmail.com wrote:
.... snip critique of
<http://cbfalconer.home.att.net/download/nmalloc.zip> ...
>>
>>3) You are also giving up as soon as your algorithm is unable to find
memory using its "fast method". If you run out of memory, you are in
typically in "screwedzville" anyways. You can simply go back to your
free list and check the bucket one lower, and start scanning linearly
for some fixed amount before truly bailing. This is important for
certain scenarios where you are allocating near the system limit in
terms of amount of memory, but nearly all your allocations are the
exact same size (this scenario is very typical). With your strategy as
coded, you can "run out of memory" even though you actually have plenty
of memory available -- you just aren't willing to scan for it. Its
very difficult to implement something which works robustly and all the
time here, but you can capture typical cases without too much effort.

It won't be 'plenty'.

It can be arbitrarily plenty. Look, simply allocate a fixed sized
element over and over until you run out of memory. Then just free
every other allocation (so no merging takes place, but basically half
your memory is reclaimed.) Then simply try to allocate 1 entry again
with that fixed size. You will fail because the free list only
contains elements from the tightly fitting bucket rather than the one
one greater.
>[...] It may exist in the lower bucket, but that would involve
searching a list of unknown length, requiring unknown time.

To guarantee it, yes -- so don't provide a guarantee. Just search a
small finite number of elements before giving up. The point is that
the vast majority of scenarios where the memory is being pushed this
far, the size of the memory dominating data structures are still
likely to be uniform.
>[...] The system hasn't aborted, it just returns NULL, so the user
has an opportunity to take corrective action.

My point is that its returning NULL too soon. And as you well know,
corrective action is typically "exit(EXIT_FAILURE ...)". Staving off
the need should be a priority.
Yes, that is a reasonable possibility (the finite search). I guess
the case could arise from implementing a list with header nodes,
and destroying alternate entries. I think it is well enough
structured that the place to inject such code is obvious.

Another eventual improvement might be to separate out entirely all
allocations under, say, 16 or so bytes, and perform them from a
separate chunk. The purpose being to avoid the overhead of the
control field for small allocations. This is more complicated.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Dec 28 '06 #23

You seem to be under a misapprehension. Nothing is being stolen.
Barry Schwarz wrote:
On 27 Dec 2006 13:20:51 -0800, nf*********@mailinator.com wrote:
snip
K&R2 is not legally available as a pdf.
Maybe that's strictly speaking true, but some considerations:
1. I could wait a week and borrow it from the library - same result,
i.e. I read their bit on allocators, but I've wasted a week.
2. If Kernighan & Ritchie were around today they'd probably be in the
spheres of Open Content, Free Documentation, etc., which is a more
ethical approach to "IP".
3. It's unlikely either of them will go hungry because I don't buy a
copy of their book.

I make my living writing software. I have no interest in helping you
steal from someone else who does the same.
Remove del for email
Dec 28 '06 #24

Or you can buy the whole book and reward the authors' effort.

I think that's an excellent thing to do, and I happily buy books...
however, I don't have unlimited funds to buy books. When I need the
information in a book I can't afford, I can get it from my library, or
get it from the big world-wide library of the web.

I find the buying and selling of information per se to be highly
distasteful: sure, buy and sell books because it's nice to have a bound
printed reference to work from, but don't try to lock up the ideas,
because information should be free (everyone gains from access to
information), and one way or another information will be free.

Notice that none of this is to say that authors shouldn't be
remunerated, and very well remunerated, for their work - just that the
old-fashioned publishing model is broken beyond repair in the
electronic age (at least for technical works - probably not for
fiction).
Also, personal use doesn't authorize you or others to share it
over the Internet - personal means you not half the world. The
copyright system is not broken... you probably talk about the
patent system being broken which is something different.
No, it's the copyright conditions that forbid converting it to another
medium (in this case electronic). Recording TV programs to VHS (or DVD
nowadays) is also generally illegal as it isn't expressly permitted by
copyright holders who routinely "reserve all rights" unless they're
explicitly granted, but thankfully a huge majority of people have the
common sense to realize that this is idiotic.
So if I shoot you I should be safe because you would die anyway
eventually and I would still get my goal accomplished (identical
result). Screw the law, it's broken...
The obvious difference being that shooting me has a negative side
effect (namely my immediate death itself), whereas my two routes to
getting printed copies of K&R on allocations have no bad side-effects.
Patents, again, not copyright. You're getting it wrong. It
doesn't play on Linux because there is no program under Linux to
run it, because the guys distributing software respect the law by
not using stuff they don't agree with. You don't seem to get that.
Yes, that's a patent issue. I'd hardly say they respect the law - more
that they fear it. I seem to recall my distro came with a program that when run
gave a message like "We can't legally distribute DeCSS for absurd legal
reasons. This program will download it from a server based somewhere
it's legal - do this at your own risk." Quite clearly they don't think
it's wrong for people to download DeCSS - they're just afraid of being
sued if they distribute it themselves.

Dec 28 '06 #25

Kernighan & Ritchie *are* around today, and if they wanted their book
released under a permissive license they could do so.
That's extremely unlikely - almost always publishers require authors to
transfer copyright to them. It would be interesting to know what their
view of information sharing...
It's unlikely you'll get any help from people who make a living with
intellectual property, now that you've demonstrated your cavalier
attitude towards it.
...and "intellectual property" is, though. Of course there's a school of
thought that says the term "IP" is an extremely unfortunate one:
http://www.gnu.org/philosophy/not-ipr.xhtml

Dec 28 '06 #26

nf*********@mailinator.com said:
You seem to be under a misapprehension. Nothing is being stolen.
You seem to be under the misapprehension that we care about your
twisty-turny "I'm not dishonest, even though I'm breaking copyright law"
self-justification. A crook is a crook, whether the law calls it stealing
or copyright violation or whatever.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 28 '06 #27

nf*********@mailinator.com wrote:
You seem to be under a misapprehension. Nothing is being stolen.
No, _you_ seem to be under several misapprehensions.

First, top-posting is _not_ acceptable except on junk newsgroups.

Second, it doesn't matter whether you call it theft, piracy, or
liberating the oppressed electrons. You're still a freeloader.

Third, nobody likes a freeloader, and nobody wants to help you.

Richard
Dec 28 '06 #28

nf*********@mailinator.com wrote:
I find the buying and selling of information per se to be highly
distasteful: sure, buy and sell books because it's nice to have a bound
printed reference to work from, but don't try to lock up the ideas,
because information should be free (everyone gains from access to
information), and one way or another information will be free.
You know, every time someone trots out this argument, I wonder whether
they'd be willing to put their SSN, bank account number, PIN codes,
passwords, date of birth, and mother's maiden name on a web page for
everybody to see. That's information, after all, and information must be
free.

Richard
Dec 28 '06 #29

nf*********@mailinator.com wrote:
>Or you can buy the whole book and reward the authors' effort.

I think that's an excellent thing to do, and I happily buy books...
however, I don't have unlimited funds to buy books. When I need the
information in a book I can't afford, I can get it from my library, or
get it from the big world-wide library of the web.
I don't either, so until I could afford to buy K&R2 (which is
relatively recent), I borrowed it from the library. And, yes, I
waited for it, every time. (Even worse, when I really needed it I
couldn't find it anywhere). I found a website once advertising a
free K&R(1) pdf and I downloaded it. I was told it was not legal and
I deleted it.
I find the buying and selling of information per se to be highly
distasteful: sure, buy and sell books because it's nice to have a bound
printed reference to work from, but don't try to lock up the ideas,
because information should be free (everyone gains from access to
information), and one way or another information will be free.
Tough luck. The law says you have to pay, in this case. Other people
find it distasteful when people use their work without paying for
it. The law is on their side for now.
>
Notice that none of this is to say that authors shouldn't be
remunerated, and very well remunerated, for their work - just that the
old-fashioned publishing model is broken beyond repair in the
electronic age (at least for technical works - probably not for
fiction).
>Also, personal use doesn't authorize you or others to share it
over the Internet - personal means you not half the world. The
copyright system is not broken... you probably talk about the
patent system being broken which is something different.

No, it's the copyright conditions that forbid converting it to another
medium (in this case electronic). Recording TV programs to VHS (or DVD
nowadays) is also generally illegal as it isn't expressly permitted by
copyright holders who routinely "reserve all rights" unless they're
explicitly granted, but thankfully a huge majority of people have the
common sense to realize that this is idiotic.
Fight to change it if you don't like it. Yes, a huge majority of
people are switching to relative morality... That may not be the
best thing ever. In my opinion it's even worse than broken laws.
Also, if I don't share 'illegal' content, it's really difficult
for anyone to know what I do with my personal copy.
>
>So if I shoot you I should be safe because you would die anyway
eventually and I would still get my goal accomplished (identical
result). Screw the law, it's broken...

The obvious difference being that shooting me has a negative side
effect (namely my immediate death itself), whereas my two routes to
getting printed copies of K&R on allocations have no bad side-effects.
How can you tell? What are writers supposed to live on?
Concerts??? Who are you to tell them they can't make money on
their work? If they sell one book and that gets shared over the
Internet chances are they won't make too much. It's like abusing
them. If they want to do it differently that's fine.
>Patents, again, not copyright. You're getting it wrong. It
doesn't play on Linux because there is no program under Linux to
run it, because the guys distributing software respect the law by
not using stuff they don't agree with. You don't seem to get that.

Yes, that's a patent issue. I'd hardly say they respect the law - more
fear it. I seem to recall my distro came with a program that when run
gave a message like "We can't legally distribute DeCSS for absurd legal
reasons. This program will download it from a server based somewhere
it's legal - do this at your own risk." Quite clearly they don't think
it's wrong for people to download DeCSS - they're just afraid of being
sued if they distribute it themselves.
I didn't say they agree with it, I said that they respect it. A
lot of people respect the law out of fear. That's why not
everybody has the courage to shop-lift.

--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)
Dec 28 '06 #30

Nelu wrote:
I don't either, so until I could afford to buy K&R2 (which is
relatively recent), I borrowed it from the library. And, yes, I
waited for it, every time. (Even worse, when I really needed it I
couldn't find it anywhere). I found a website once advertising a
free K&R(1) pdf and I downloaded. I was told it was not legal and
I deleted it.
If that makes you sleep easier at night, then good for you!
Also, if I don't share 'illegal' content, it's really difficult
for anyone to know what I do with my personal copy.
Is that supposed to mean that you think it's OK to make a PDF of a book
you own for personal use as long as you don't share it? Even though
it's against the law?
How can you tell? What are writers supposed to live on?
Concerts??? Who are you to tell them they can't make money on
their work? If they sell one book and that gets shared over the
Internet chances are they won't make too much. It's like abusing
them. If they want to do it differently that's fine.
This sounds like a compelling argument for closing all the libraries in
the world with immediate effect.

I find it incredible that this sort of FUD is still being trotted out
from intelligent people. "Oh, no one can make money from free
software!" Try telling that to all the employees of the FSF and Red Hat
and Sourceforge and the hundreds of little companies that make free
software.

The idea that there's no way to make money as an author or publisher
other than by restricting distribution is now years and years out of
date - O'Reilly are a living, breathing case in point of a company
making it work right now.

Once again I say: of course authors should be very well remunerated for
the work they do. The main obstacle to that is the short-term greed and
vested interests of fat-cat publishers, not those who share information
for personal use.

Dec 28 '06 #31

nf*********@mailinator.com wrote:
Nelu wrote:
>Also, if I don't share 'illegal' content, it's really difficult
for anyone to know what I do with my personal copy.

Is that supposed to mean that you think it's OK to make a PDF of a book
you own for personal use as long as you don't share it? Even though
it's against the law?
No. The point is that if you're doing something illegal you
should shut up unless you're prepared to suffer the consequences.
>
>How can you tell? What are writers supposed to live on?
Concerts??? Who are you to tell them they can't make money on
their work? If they sell one book and that gets shared over the
Internet chances are they won't make too much. It's like abusing
them. If they want to do it differently that's fine.

This sounds like a compelling argument for closing all the libraries in
the world with immediate effect.
The libraries are buying the books. They are not doing something
illegal. And because they buy the book you have free access to
information. Most of the libraries I know have free membership
and the others have a very low price for the membership that is
*not* used to restrict your access to the information. They obey
the law.
>
I find it incredible that this sort of FUD is still being trotted out
from intelligent people. "Oh, no one can make money from free
software!" Try telling that to all the employees of the FSF and Red Hat
and Sourceforge and the hundreds of little companies that make free
software.
I didn't say that you can't make money in other ways and I am a
supporter of open source software although I have to use
proprietary software from time to time because there may not be
good open source alternatives. At the same time I recognize the
fact that there are people that don't want to do things the same
way and the law allows them to do things differently. If you
don't like it stay away from their software and books.
>
The idea that there's no way to make money as an author or publisher
other than by restricting distribution is now years and years out of
date - O'Reilly are a living, breathing case in point of a company
making it work right now.
Good for them. Other people may not want to go the same route if
they think they can make more money in other ways that are legal.
>
Once again I say: of course authors should be very well remunerated for
the work they do. The main obstacle to that is the short-term greed and
vested interests of fat-cat publishers, not those who share information
for personal use.
Good. If you think the authors should be very well remunerated
for their work then go and buy K&R. Stealing it won't bring the
authors any money.

--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)
Dec 28 '06 #32

Nelu wrote:
nf*********@mailinator.com wrote:
<snip>
>No, it's the copyright conditions that forbid converting it to another
medium (in this case electronic). Recording TV programs to VHS (or DVD
nowadays) is also generally illegal as it isn't expressly permitted by
copyright holders who routinely "reserve all rights" unless they're
explicitly granted, but thankfully a huge majority of people have the
common sense to realize that this is idiotic.

Fight to change it if you don't like it. Yes, a huge majority of
people are switching to relative morality... That may not be the
best thing ever. In my opinion it's even worse than broken laws.
Also, if I don't share 'illegal' content, it's really difficult
for anyone to know what I do with my personal copy.
<snip>
I didn't say they agree with it, I said that they respect it. A
lot of people respect the law out of fear. That's why not
everybody has the courage to shop-lift.
You seem to be promoting the view that legality defines morality, or
that the two are somehow equal. Three examples to disprove this:

1. Recreational drug users of say, cannabis, feel what they do is
entirely moral. The law disagrees.
2. Others feel that the government making lots of money from sales of
something which kills many thousands every year (say, tobacco) is
immoral. The law disagrees.
3. Ripping a CD to your iPod so you can listen to it on the move is
generally considered a moral actions. UK law, at least, disagrees.
Andrew Sidwell
--
My email address changes monthly because of Too Much Spam. It's always
the first three letters of the month (in English), followed by the last
two digits of the current year. For example, de***@entai.co.uk.
Dec 28 '06 #33

Nelu wrote:
The libraries are buying the books. They are not doing something
illegal. And because they buy the book you have free access to
information.
I'm sorry, but where in the process of buying a book (to scan and upload
it as a PDF) is the book not bought?
Andrew Sidwell
--
My email address changes monthly because of Too Much Spam. It's always
the first three letters of the month (in English), followed by the last
two digits of the current year. For example, de***@entai.co.uk.
Dec 28 '06 #34

CBFalconer wrote:
we******@gmail.com wrote:
>CBFalconer wrote:
.... snip ...
>>>
See <http://cbfalconer.home.att.net/download/nmalloc.zip>

I noticed you don't supply any seperate documentation for it.
Can I assume the comments in the code are accurate? From what I
can peruse:

As far as I know they are accurate. Not the memalign portion,
which is nonsense as published (and so labelled).
Actually there is fairly good documentation in nmalloc.txh (which
is intended to be a component of a larger library documentation)
together with instructions in readme.txt for converting that to a
usable html documentation file.

--
Merry Christmas, Happy Hanukah, Happy New Year
Joyeux Noel, Bonne Annee.
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Dec 28 '06 #35

Andrew Sidwell wrote:
Nelu wrote:
>The libraries are buying the books. They are not doing something
illegal. And because they buy the book you have free access to
information.

I'm sorry, but where in the process of buying a book (to scan and upload
it as a PDF) is the book not bought?
I guess the periods should have been commas. Sorry about that.
--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)
Dec 28 '06 #36

Andrew Sidwell wrote:
Nelu wrote:
>nf*********@mailinator.com wrote:
<snip>
>>No, it's the copyright conditions that forbid converting it to another
medium (in this case electronic). Recording TV programs to VHS (or DVD
nowadays) is also generally illegal as it isn't expressly permitted by
copyright holders who routinely "reserve all rights" unless they're
explicitly granted, but thankfully a huge majority of people have the
common sense to realize that this is idiotic.
Fight to change it if you don't like it. Yes, a huge majority of
people are switching to relative morality... That may not be the
best thing ever. In my opinion it's even worse than broken laws.
Also, if I don't share 'illegal' content, it's really difficult
for anyone to know what I do with my personal copy.
<snip>
>I didn't say they agree with it, I said that they respect it. A
lot of people respect the law out of fear. That's why not
everybody has the courage to shop-lift.

You seem to be promoting the view that legality defines morality, or
that the two are somehow equal. Three examples to disprove this:

1. Recreational drug users of say, cannabis, feel what they do is
entirely moral. The law disagrees.
Other people may disagree as well.
2. Others feel that the government making lots of money from sales of
something which kills many thousands every year (say, tobacco) is
immoral. The law disagrees.
Smokers disagree, in general.
3. Ripping a CD to your iPod so you can listen to it on the move is
generally considered a moral actions. UK law, at least, disagrees.
Singers may disagree.

Shop-lifters may think that what they are doing is moral.
This is moral relativism. The law is a very slow follower of a
morality shared by the majority. Until the new majority manages
to make changes whatever you think is moral may break the law.
The law usually reflects a morality shared by the majority at
some point in time, not necessarily the present.

--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)
Dec 28 '06 #37

CBFalconer <cb********@yahoo.comwrites:
[...]
Another eventual improvement might be to separate out entirely all
allocations under, say, 16 or so bytes, and perform them from a
separate chunk. The purpose being to avoid the overhead of the
control field for small allocations. This is more complicated.
It seems to me that that could cause problems if the implementation
has to make assumptions about how much memory will be needed for small
allocations vs. large ones.

A more general approach might be to allocate large chunks on demand,
and provide small allocations from within those large chunks.

This is just off the top of my head; there may well be flaws in my
reasoning.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 29 '06 #38

nf*********@mailinator.com writes:
>Kernighan & Ritchie *are* around today, and if they wanted their book
released under a permissive license they could do so.

That's extremely unlikely - almost always publishers require authors to
transfer copyright to them. It would be interesting to know what their
view of information sharing...
Under that assumption (which is likely to be correct), Prentice Hall
owns the copyright, since Kernighan and Ritchie sold it to them -- as
they were perfectly entitled to do. If Prentice Hall wanted their
book released under a permissive license, they could do so. If
Kernighan and Ritchie wanted the book they wrote released under a
permissive license, they could have done so themselves rather than
selling it to Prentice Hall. They might still be able to make some
sort of deal with Prentice Hall to allow it, though the final decision
would be up to Prentice Hall. They have not chosen to do so.
>It's unlikely you'll get any help from people who make a living with
intellectual property, now that you've demonstrated your cavalier
attitude towards it.

...and "intellectual property" is though. Of course there's a school of
thought that says the term "IP" is an extremely unfortunate one:
http://www.gnu.org/philosophy/not-ipr.xhtml
Yes, there is. But I don't recall the FSF ever advocating violation
of existing copyrights. They would probably prefer that K&R had
released their book under a free license in the first place, but they
didn't choose to do so.

Kernighan and Ritchie happen to be highly respected individuals around
here; Mr. Ritchie even posts here every now and then. And even if
they weren't, it wouldn't make any difference. You are violating the
laws under which they make their living, and you are bragging about
it. We are not interested in your excuses. Don't expect any help
here in the future.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 29 '06 #39

nf*********@mailinator.com writes:
[...]
The idea that there's no way to make money as an author or publisher
other than by restricting distribution is now years and years out of
date - O'Reilly are a living, breathing case in point of a company
making it work right now.
[...]

What makes you think that mentioning O'Reilly helps your case?

O'Reilly sells books. If you were to download a PDF copy of an
O'Reilly book without their permission, you would be violating their
copyright.

They also provide *some* materials free of charge, and others over the
Internet *for money*.

Until recently, O'Reilly offered a "CD Bookshelf" series, packages of
several of their books in HTML format on a CD-ROM. They've stopped
doing this, supposedly because it made their books too easy to
redistribute without authorization. In other words, *I* can no longer
obtain these valuable publications because of the actions of people
like *you*.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 29 '06 #40
