Garbage collection


In an interview with Dr. Dobb's, Paul Jansen explains which languages are
gaining in popularity and which are not:

<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple
explanation for this. Languages without automated garbage collection are
getting out of fashion. The chance of running into all kinds of memory
problems is gradually outweighing the performance penalty you have to
pay for garbage collection.
<end quote>

lcc-win has been distributing a GC since 2004.

It really helps.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jun 27 '08 #1

"jacob navia" <ja***@nospam.comwrote in message
news:fu**********@aioe.org...
>
In an interview with Dr. Dobb's, Paul Jansen explains which languages are
gaining in popularity and which are not:

<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple explanation
for this. Languages without automated garbage collection are getting out
of fashion. The chance of running into all kinds of memory problems is
gradually outweighing the performance penalty you have to pay for garbage
collection.
<end quote>

lcc-win has been distributing a GC since 2004.

It really helps.
I agree...

I have also been using garbage collection in my projects for years with good
success...
I also face people who condemn both garbage collection and C, but I really
like C personally (I am less of a fan of Java, even if it has
gained a lot of ground...).

C# may also become a big player, and may in time overpower Java (there are
variables though; I suspect .NET is both a gain and a potential
hindrance at this point in time).

C will likely remain around for a while, but the high point for C++ may be
starting to pass (C++ is a heavy beast and, losing some of its major selling
points to other languages, may start to fall into disuse much faster than C
does...).

>
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32

Jun 27 '08 #2
I don't see C dying out any time soon. The problem with automatic
garbage collection is not just in performance penalty, but that it
introduces uncertainty to the code. It becomes difficult to predict at
what time the garbage collector will start running. In some cases this
behavior simply cannot be tolerated.
Jun 27 '08 #3
Yunzhong wrote:
I don't see C dying out any time soon. The problem with automatic
garbage collection is not just in performance penalty, but that it
introduces uncertainty to the code. It becomes difficult to predict at
what time the garbage collector will start running. In some cases this
behavior simply cannot be tolerated.
In Boehm's GC it will start ONLY when you call malloc().
If you do not call malloc() nothing can happen...
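
As a rough sketch (these are the stock Boehm names from <gc.h>; in the
lcc-win setup the same work is wired into malloc() itself, so take GC_MALLOC
as illustrative), the collector only runs inside an allocation call, never
asynchronously:

#include <gc.h>      /* Boehm-Demers-Weiser collector */
#include <stdio.h>

int main(void)
{
    GC_INIT();                          /* set up the collector */

    for (int i = 0; i < 1000000; i++) {
        /* Each GC_MALLOC may run a collection before it returns;
           between allocation calls the collector stays idle. */
        char *p = GC_MALLOC(1024);
        p[0] = (char)i;                 /* use the block; it is never freed by hand */
    }

    printf("heap size: %lu bytes\n", (unsigned long)GC_get_heap_size());
    return 0;
}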
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jun 27 '08 #4
On Apr 25, 5:16 am, jacob navia <ja...@nospam.com> wrote:
In an interview with Dr. Dobb's, Paul Jansen explains which languages are
gaining in popularity and which are not:
Who's Paul Jansen?
<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple
explanation for this. Languages without automated garbage collection are
getting out of fashion. The chance of running into all kinds of memory
problems is gradually outweighing the performance penalty you have to
pay for garbage collection.
<end quote>

lcc-win has been distributing a GC since 2004.

It really helps.
Is lcc-win outselling Microsoft or Intel's compilers?

I guess that most C work is at the embedded level today. I doubt if
we will have garbage collectors running in our toasters any time soon.
Jun 27 '08 #5
>lcc-win has been distributing a GC since 2004.
>
It really helps.
In what ways does that implementation violate the standard?

My bet is that it will incorrectly free pieces of allocated memory
when the only references to that memory are in a file (written by
that process and later read back in by the same process). If lcc-win
actually handles this, its performance likely sucks if it has to
scan gigabyte (or worse, terabyte) files for pointer references.
I think the standard also allows, under the right circumstances,
for pointers to be *encrypted*, then stored in a file, and later
read back, decrypted, and used.

Oh, yes, to count as GC, it has to occasionally actually free
something eligible to be freed.

I don't consider this to be a fatal flaw for GC in general or this
implementation in particular, as storing pointers in files is
relatively unusual. But a standards-compliant GC has to deal with
it.
Jun 27 '08 #6
user923005 wrote:
On Apr 25, 5:16 am, jacob navia <ja...@nospam.com> wrote:
>In an interview with Dr. Dobb's, Paul Jansen explains which languages are
gaining in popularity and which are not:

Who's Paul Jansen?
Google has lots of hits for "Paul Jansen," starting with
a firm that makes piano benches. Adding "Dobbs" to the query
leads us to a Paul Jansen who's the managing director of TIOBE
Software. Never heard of them, but they have something called
the "TIOBE Programming Community Index" that apparently tries
to measure the "popularity of programming languages," but does
so by estimating "their web presence." This they do by running
searches of the form "X programming" for various language names,
and counting the hits.

Sounds like phrenology, doesn't it? Or perhaps astrology?
Maybe we need a new word for this sort of research, something
like "gullibology," or better yet "apology."

Interesting factoid: According to Jansen, COBOL is not
among the top ten most popular languages, having been edged
out by Python -- within the last five years! The Amazing
Grace had more staying power than people thought ...
>lcc-win has been distributing a GC since 2004.

It really helps.

Is lcc-win outselling Microsoft or Intel's compilers?

I guess that most C work is at the embedded level today. I doubt if
we will have garbage collectors running in our toasters any time soon.
That's why the toast crumbs keep accumulating at the bottom.

--
Er*********@sun.com
Jun 27 '08 #7

Eric Sosman <Er*********@sun.com> writes:
>I guess that most C work is at the embedded level today. I doubt if
we will have garbage collectors running in our toasters any time soon.

That's why the toast crumbs keep accumulating at the bottom.
Heh. That's funny - no garbage collection. Good one.
Jun 27 '08 #8
Gordon Burditt wrote:
>>lcc-win has been distributing a GC since 2004.

It really helps.

In what ways does that implementation violate the standard?

My bet is that it will incorrectly free pieces of allocated memory
when the only references to that memory are in a file (written by
that process and later read back in by the same process). If lcc-win
actually handles this, its performance likely sucks if it has to
scan gigabyte (or worse, terabyte) files for pointer references.
I think the standard also allows, under the right circumstances,
for pointers to be *encrypted*, then stored in a file, and later
read back, decrypted, and used.

Oh, yes, to count as GC, it has to occasionally actually free
something eligible to be freed.

I don't consider this to be a fatal flaw for GC in general or this
implementation in particular, as storing pointers in files is
relatively unusual. But a standards-compliant GC has to deal with
it.
As GC hasn't been defined by the standard yet, we can't say. For all we
know WG14 might decide to excuse GC from scanning for pointers in files
and similar stuff. Right now using a GC is non-conforming simply
because the standard attempts no definition for it.

Jun 27 '08 #9
user923005 <dc*****@connx.com> wrote:
On Apr 25, 5:16 am, jacob navia <ja...@nospam.com> wrote:
In an interview with Dr. Dobb's, Paul Jansen explains which languages are
gaining in popularity and which are not:
Who's Paul Jansen?
<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple
explanation for this. Languages without automated garbage collection are
getting out of fashion. The chance of running into all kinds of memory
problems is gradually outweighing the performance penalty you have to
pay for garbage collection.
<end quote>

lcc-win has been distributing a GC since 2004.

It really helps.

Is lcc-win outselling Microsoft or Intel's compilers?

I guess that most C work is at the embedded level today. I doubt if
we will have garbage collectors running in our toasters any time soon.
When people merely say "embedded", I think it confuses the issue.

The places where C continues to be used, and will continue to be used, are
places where the problem is very well defined, and the solution amenable to
a fixed interface. This happens to be the case for embedded hardware
products, as well as for many elements of more general purpose computing
platforms: SQL databases, libraries or applications implementing
well-defined specifications (XML, HTTP), and other "optimizing" parts of
applications in higher-level languages, not to mention virtual
machines, etc. C is a simple language, and it can express these solutions
quite readily.

Because of the way software systems are constructed (especially "edge"
systems, like web applications), and because of the growth of the IT sector
(particularly in number of programmers), it's inevitable that C will become
_relatively_ diminished. By itself, of course, this means very little for C.
And I think that when people characterize the C community as "mostly
embedded developers", they fall into the trap of excusing or explaining a
supposed shift; but I don't see a shift in C usage much at all.

Before Java and C# there was Visual Basic and Delphi. There's still Visual
Basic. There are untold numbers of people programming in PHP. I remember
when Cold Fusion came out. Many corporations jumped on it because it
promised managers the ability to repurpose "functional" personnel for doing
"technical" work; i.e. turn your average Joe into a "programmer". There's
nothing wrong w/ that, but when it happens it doesn't mean C is becoming
less popular in the sense that its role is being overcome by other
languages. And there's no reason to think that C is "retreating" to the
hidden world of toaster software anymore than it always has; which is to
say: it's not.

All that's happening is that there are myriad new _types_ of applications,
many of which C is not well suited for. The old _types_ are still there, and
usage is growing in absolute terms. There's no need for a bunker mentality.

Jun 27 '08 #10
"jacob navia" <ja***@nospam.comwrote in message
news:fu**********@aioe.org...
>
In an interview with Dr. Dobb's, Paul Jansen explains which languages are
gaining in popularity and which are not:

<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple explanation
for this. Languages without automated garbage collection are getting out
of fashion. The chance of running into all kinds of memory problems is
gradually outweighing the performance penalty you have to pay for garbage
collection.
<end quote>

lcc-win has been distributing a GC since 2004.

It really helps.
If you need GC, well yes, it would help out a whole lot indeed. As long as
it's not mandatory, I see absolutely nothing wrong with including a
full-blown GC in your compiler package.

Jun 27 '08 #11
>I don't consider this to be a fatal flaw for GC in general or this
>implementation in particular, as storing pointers in files is
relatively unusual. But a standards-compliant GC has to deal with
it.

As GC hasn't been defined by the standard yet, we can't say. For all we
Yes, we can. GC doesn't have to do anything different from the perspective
of a C program, although for 'real' GC it has to be able to collect some
garbage.
>know WG14 might decide to excuse GC from scanning for pointers in files
and similar stuff. Right now using a GC is non-conforming simply
because the standard attempts no definition for it.
No, using GC is allowed under 'as if' rules provided it doesn't make
any mistakes. Similar issues apply to swapping and paging - also not
specifically mentioned by the standard. There are
no rules against things like long pauses, poor performance, or similar
things. However, if GC mistakenly releases memory in use, then
reallocates it or otherwise lets it get scribbled on, it's in violation
of the 'as if' rule.

Jun 27 '08 #12
santosh <sa*********@gmail.com> writes:
Gordon Burditt wrote:
>jacob navia wrote (and Gordon Burditt rudely snipped the attribution):
>>>lcc-win has been distributing a GC since 2004.

It really helps.

In what ways does that implementation violate the standard?

My bet is that it will incorrectly free pieces of allocated memory
when the only references to that memory are in a file (written by
that process and later read back in by the same process). If lcc-win
actually handles this, its performance likely sucks if it has to
scan gigabyte (or worse, terabyte) files for pointer references.
I think the standard also allows, under the right circumstances,
for pointers to be *encrypted*, then stored in a file, and later
read back, decrypted, and used.

Oh, yes, to count as GC, it has to occasionally actually free
something eligible to be freed.

I don't consider this to be a fatal flaw for GC in general or this
implementation in particular, as storing pointers in files is
relatively unusual. But a standards-compliant GC has to deal with
it.

As GC hasn't been defined by the standard yet, we can't say. For all we
know WG14 might decide to excuse GC from scanning for pointers in files
and similar stuff. Right now using a GC is non-conforming simply
because the standard attempts no definition for it.
But a given GC implementation might cause a C implementation that uses
it to become non-conforming because it causes that implementation to
violate the requirements that the standard *does* define.

For example, it's perfectly legal to take a pointer object, break its
representation down into bytes, and store those bytes separately, then
erase the pointer object's value. You can later reconstitute the
pointer value from the bytes and use it. A typical GC implementation,
such as Boehm's, will detect that there is no pointer currently in
memory that refers to the referenced block of memory, and collect it.
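
A contrived but (as far as the current standard is concerned) legal
illustration of that pattern is sketched below; gc_malloc here is just a
placeholder for whatever allocation call the collector provides, not a real
library name. With a conservative collector the block may be reclaimed at
the marked point, so the final dereference can misbehave:

#include <stddef.h>
#include <string.h>
#include <stdio.h>

extern void *gc_malloc(size_t);   /* placeholder for the collector's allocator */

int main(void)
{
    unsigned char bytes[sizeof(void *)];
    int *p = gc_malloc(sizeof *p);
    *p = 42;

    memcpy(bytes, &p, sizeof p);              /* save the representation        */
    for (size_t i = 0; i < sizeof p; i++)
        bytes[i] ^= 0xFFu;                    /* obscure it                     */
    p = NULL;                                 /* erase the only visible pointer */

    /* If a collection runs here, nothing in memory looks like a pointer to
       the block, so a tracing collector is entitled to reclaim it. */

    for (size_t i = 0; i < sizeof p; i++)
        bytes[i] ^= 0xFFu;                    /* undo                           */
    memcpy(&p, bytes, sizeof p);              /* reconstitute the pointer       */
    printf("%d\n", *p);                       /* fine in standard C, unsafe under GC */
    return 0;
}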

I'm not claiming that this is an insurmountable problem. You're
free to use an almost-but-not-quite-conforming C implementation if
it's useful to do so, and if the actions that would expose the
nonconformance are rare and easy to avoid. And if GC were
incorporated into a future C standard, the rules would probably be
changed to allow for this kind of thing (by rendering the behavior of
breaking down and reconstituting a pointer like this undefined), at
least in programs for which GC is enabled.

(I do not give permission to quote this article, or any other article
I post to Usenet, without attribution.)

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 27 '08 #13
"Chris Thomasson" <cr*****@comcast.netwrites:
"jacob navia" <ja***@nospam.comwrote in message
news:fu**********@aioe.org...
>>
In an interviw with Dr Dobbs, Paul Jansen explains which languages
are gaining in popularity and which not:

<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple
explanation for this. Languages without automated garbage collection
are getting out of fashion. The chance of running into all kinds of
memory problems is gradually outweighing the performance penalty you
have to pay for garbage collection.
<end quote>

lcc-win has been distributing a GC since 2004.

It really helps.

If you need GC, well yes, it would help out a whole lot indeed. As
long as its not mandatory, I see absolutely nothing wrong with
including a full-blown GC in your compiler package.
I also see nothing wrong with it. However, users need to be aware
that if they write code that depends on GC, they're going to have
problems if they want to use it with an implementation that doesn't
support GC. This is a problem with any extension, no matter how
useful.

(I think it's possible to use the Boehm GC with other compilers. I
don't know how difficult it is.)

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 27 '08 #14
Keith Thompson wrote:
But a given GC implementation might cause a C implementation that uses
it to become non-conforming because it causes that implementation to
violate the requirements that the standard *does* define.

For example, it's perfectly legal to take a pointer object, break its
representation down into bytes, and store those bytes separately, then
erase the pointer object's value. You can later reconstitute the
pointer value from the bytes and use it. A typical GC implementation,
such as Boehm's, will detect that there is no pointer currently in
memory that refers to the referenced block of memory, and collect it.
Then, there is only ONE solution for you and all people like you:

DO NOT USE A GC.

Then, you will happily be able to xor your pointers, store them in files,
whatever.

For the other people that do care about memory management, they can go
on using the GC.

The only reaction from the "regulars" are this kind of very practical
arguments.

Nobody is forbidding anyone to store pointers in files. The only ones you
can't store are those that the GC manages. Other pointers can be stored in
files at will, since malloc is still available!

This is typical of their arguments: No substance, just form. There
could be a pointer stored in a file and the standard says at paragraph
blah... blah... blah...

The practical advantages of a GC for a programming language, the pros
and cons... they do not care
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jun 27 '08 #15
In article <fu**********@aioe.org>, jacob navia <ja***@nospam.org> wrote:
>But a given GC implementation might cause a C implementation that uses
it to become non-conforming because it causes that implementation to
violate the requirements that the standard *does* define.
>Then, there is only ONE solution for you and all people like you:

DO NOT USE A GC.

>Then, you will happily be able to xor your pointers, store them in files,
whatever.

For the other people that do care about memory management, they can go
on using the GC.
Be reasonable, Jacob. You deleted the rest of Keith's article, where
he essentially agrees with you that this problem could be overcome
by changing C's rules when GC is enabled. You could perfectly well
have followed up with more discussion of this without flaming him.

-- Richard
--
:wq
Jun 27 '08 #16
jacob navia <ja***@nospam.com> writes:
Keith Thompson wrote:
>But a given GC implementation might cause a C implementation that uses
it to become non-conforming because it causes that implementation to
violate the requirements that the standard *does* define.

For example, it's perfectly legal to take a pointer object, break its
representation down into bytes, and store those bytes separately, then
erase the pointer object's value. You can later reconstitute the
pointer value from the bytes and use it. A typical GC implementation,
such as Boehm's, will detect that there is no pointer currently in
memory that refers to the referenced block of memory, and collect it.

Then, there is only ONE solution for you and all people like you:

DO NOT USE A GC.

Then, you will happily be able to xor your pointers, store them in files,
whatever.

For the other people that do care about memory management, they can go
on using the GC.

The only reaction from the "regulars" are this kind of very practical
arguments.

Nobody is forbidding anyone to store pointers in files. The only ones you
can't store are those that the GC manages. Other pointers can be stored in
files at will, since malloc is still available!

This is typical of their arguments: No substance, just form. There
could be a pointer stored in a file and the standard says at paragraph
blah... blah... blah...

The practical advantages of a GC for a programming language, the pros
and cons... they do not care
jacob, I can only assume that you didn't bother to read what I
actually wrote before you responded.

I did not argue against GC. I pointed out a possible problem that
might be associated with the use of GC, particularly in the context of
conformance to the C standard.

Here's part of what I wrote that you didn't quote in your followup:

| I'm not claiming that this is an insurmountable problem. You're
| free to use an almost-but-not-quite-conforming C implementation if
| it's useful to do so, and if the actions that would expose the
| nonconformance are rare and easy to avoid. And if GC were
| incorporated into a future C standard, the rules would probably be
| changed to allow for this kind of thing (by rendering the behavior of
| breaking down and reconstituting a pointer like this undefined), at
| least in programs for which GC is enabled.

The C standard was not written with GC in mind. You might want to
argue that it should have been, or that it should be modified so that
it allows for GC, but it's a fact that the current standard doesn't
allow for GC. Because of this, it's not at all surprising that there
might be some corner cases where GC and C standard conformance might
collide.

I pointed out a *minor* issue that *might* cause a problem in some
rare cases. I also mentioned how to avoid that issue. And somehow
you interpreted this as an attack on GC and/or on you personally.

I. Did. Not. Argue. Against. GC.

Please re-read what I actually wrote however many times it takes until
you understand this.

I use languages other than C that depend on built-in garbage
collection, and it's extremely useful. I haven't had an opportunity
to use a C implementation that provides GC, so I can't really comment
on how useful it would be in that context.

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 27 '08 #17
William Ahern wrote, On 25/04/08 20:33:
user923005 <dc*****@connx.com> wrote:
>On Apr 25, 5:16 am, jacob navia <ja...@nospam.com> wrote:
>>In an interview with Dr. Dobb's, Paul Jansen explains which languages are
gaining in popularity and which are not:
>Who's Paul Jansen?
>><quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple
explanation for this. Languages without automated garbage collection are
getting out of fashion. The chance of running into all kinds of memory
problems is gradually outweighing the performance penalty you have to
pay for garbage collection.
<end quote>

lcc-win has been distributing a GC since 2004.

It really helps.
Is lcc-win outselling Microsoft or Intel's compilers?

I guess that most C work is at the embedded level today. I doubt if
we will have garbage collectors running in our toasters any time soon.

When people merely say "embedded", I think it confuses the issue.
True, and not just for the reasons you give. Sometimes an "embedded"
processor turns out to be a full-blown computer running either Linux or
some version of Windows.
The places where C continues to be used, and will continue to be used, are
places where the problem is very well defined, and the solution amenable to
a fixed interface. This happens to be the case for embedded hardware
products, as well as for many elements of more general purpose computing
platforms: SQL databases, libraries or applications implementing
<snip>
And I think that when people characterize the C community as "mostly
embedded developers", they fall into the trap of excusing or explaining a
supposed shift; but I don't see a shift in C usage much at all.
There are also applications currently written in C where, as bits need to
be updated, they are replaced with code written in other languages. Thus in
at least some areas (at the very least in some companies) the number of
lines of C code in use is decreasing in absolute, not just relative, terms.

<snip>
All that's happening is that there are myriad new _types_ of applications,
many of which C is not well suited for. The old _types_ are still there, and
usage is growing in absolute terms. There's no need for a bunker mentality.
I agree there is no need for a bunker mentality. Other languages which
are older than me are still alive and well (even if not used as much)
and I'm sure that there will be enough C work around to keep all those
who both want to be C programmers and have the aptitude/attitude to be C
programmers busy for a long time to come.
--
Flash Gordon
Jun 27 '08 #18
jacob navia wrote:
Keith Thompson wrote:
>But a given GC implementation might cause a C implementation that uses
it to become non-conforming because it causes that implementation to
violate the requirements that the standard *does* define.

For example, it's perfectly legal to take a pointer object, break its
representation down into bytes, and store those bytes separately, then
erase the pointer object's value. You can later reconstitute the
pointer value from the bytes and use it. A typical GC implementation,
such as Boehm's, will detect that there is no pointer currently in
memory that refers to the referenced block of memory, and collect it.

Then, there is only ONE solution for you and all people like you:

DO NOT USE A GC.
Um, er, isn't that what he said in the part you snipped?
It went like this (emphasis mine):

:[...] And if GC were
:incorporated into a future C standard, the rules would probably be
:changed to allow for this kind of thing (by rendering the behavior of
:breaking down and reconstituting a pointer like this undefined), at
^^
:least in programs for which GC is enabled.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For the other people that do care about memory management, they can go
on using the GC.
The people who "care about memory management" don't need GC:
They manage their memory themselves *because* they care about it.
GC is for the people who choose not to manage their memory; just
dropping memory on the floor and letting GC sweep it up is prima-
facie evidence that the programmer does *not* "care about memory
management" and chooses to leave it to someone else.
The practical advantages of a GC for a programming language, the pros
and cons... they do not care
GC has practical advantages for some programming languages;
nobody disputes that. I cannot imagine LISP without it, nor
SNOBOL, nor Java, and others I've never used. But the fact that
languages X,Y,Z benefit from garbage collection does not imply
that all languages would, nor even that any particular other
language would. The same could be said for pretty much any
other language feature you care to name: FORTRAN benefits from
built-in complex numbers, but does that mean awk needs them?
Perl benefits from built-in regular expression machinery; does
that mean assembly language should have it, too?

In my opinion, it is not helpful to C to try to lard it up
with every cute feature of every programming environment every
enthusiast ever sighed over.

--
Er*********@sun.com
Jun 27 '08 #19
Keith Thompson wrote:
>
I did not argue against GC. I pointed out a possible problem that
might be associated with the use of GC, particularly in the context of
conformance to the C standard.
Please, that stuff appears every time I mention the GC here.

"I *could* store pointers in disk and read it later, so the GC
is not conforming"

You know that this has no practical significance at all. Since a
conforming program that stores pointers in the disk is using
malloc/free and that system is still available, those programs
would CONTINUE TO RUN!

The only case where the program would NOT run is if they use the GC
allocated pointers and put them in the disk!
But this is the same as saying that

I stored the socket descriptor in the disk and then, next day, when I
turned on the machine the network connection would not work...

Don't you think that it would be much more substantial to address the
issues that the GC poses in a substantive way?

Or the issues about the debugging support that Paul Hsieh proposes?

That would be more interesting than the eternal "GC-and-storing
pointers in the disk" discussion!

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jun 27 '08 #20

"Yunzhong" <ga*********@gmail.comwrote in message
news:4e**********************************@b64g2000 hsa.googlegroups.com...
>I don't see C dying out any time soon. The problem with automatic
garbage collection is not just in performance penalty, but that it
introduces uncertainty to the code. It becomes difficult to predict at
what time the garbage collector will start running. In some cases this
behavior simply cannot be tolerated.
I wasn't saying it would die out "soon". what I wrote about could easily
take 10 or 15 years to play out at the current rates...
but, my speculation is like this:
C will fall out of favor for mainstream app development (it already largely
has), but will likely retain a stronghold in low-level systems, namely:
embedded systems, OS-kernels, drivers, domain-specific languages (ok, likely
C-variants), libraries, VMs, ...

meanwhile, C++ is currently at its high-point, and I will guess will start
to decline at a faster rate than C (in 15 years, it may have largely fallen
into disuse). a partial reason for this being a "high cost of maintenance"
(for both code and implementations, the language less likely to overtake C's
strongholds, it falls into decline).

Java is likely to overtake C++, and for a while become the dominant language
for app development.
C# is likely to grow at a faster rate than Java, at some point (say, 5-10
years) overtaking both, however. in this time, to make a guess, .NET will
either be abandoned or heavily mutated (for example, .GNU, which in time may
likely mutate in a divergent path).

at the current rates, in this time period, Windows will also fall behind,
with most likely the mainstream OS being Linux (thus, C# would become the
dominant language, on Linux...).

however, this is predicated on the language and apps being able to
effectively make the transition (the fall of Windows majorly hurting, but
not likely killing, the language).
and in the course of all this, new languages emerge that further begin to
overtake the current generation (IMO, there will never be an "ultimate
language", only new ones, most changing what they will, but otherwise being
very conservative).

the mainstream language in 20 years could very well be a hybrid of C#,
JavaScript, and features from many other languages...

by this time, I also expect notable mutations in terms of kernel
architecture and filesystems, us likely facing the demise of both processes
and hierarchical filesystems... Linux either facing replacement, or being
internally restructured into something almost unrecognizable...

and so on...
as noted, this is mostly all just speculation here...

Jun 27 '08 #21
On Apr 25, 3:34 pm, "cr88192" <cr88...@NOSPAM.hotmail.com> wrote:
"Yunzhong" <gaoyunzh...@gmail.com> wrote in message

news:4e**********************************@b64g2000hsa.googlegroups.com...
I don't see C dying out any time soon. The problem with automatic
garbage collection is not just in performance penalty, but that it
introduces uncertainty to the code. It becomes difficult to predict at
what time the garbage collector will start running. In some cases this
behavior simply cannot be tolerated.

I wasn't saying it would die out "soon". what I wrote about could easily
take 10 or 15 years to play out at the current rates...

but, my speculation is like this:
C will fall out of favor for mainstream app development (it already largely
has), but will likely retain a stronghold in low-level systems, namely:
embedded systems, OS-kernels, drivers, domain-specific languages (ok, likely
C-variants), libraries, VMs, ...

meanwhile, C++ is currently at its high-point, and I will guess will start
to decline at a faster rate than C (in 15 years, it may have largely fallen
into disuse). a partial reason for this being a "high cost of maintenance"
(for both code and implementations, the language less likely to overtake C's
strongholds, it falls into decline).

Java is likely to overtake C++, and for a while become the dominant language
for app development.

C# is likely to grow at a faster rate than Java, at some point (say, 5-10
years) overtaking both, however. in this time, to make a guess, .NET will
either be abandoned or heavily mutated (for example, .GNU, which in time may
likely mutate in a divergent path).

at the current rates, in this time period, Windows will also fall behind,
with most likely the mainstream OS being Linux (thus, C# would become the
dominant language, on Linux...).

however, this is predicated on the language and apps being able to
effectively make the transition (the fall of Windows majorly hurting, but
not likely killing, the language).

and in the course of all this, new languages emerge that further begin to
overtake the current generation (IMO, there will never be an "ultimate
language", only new ones, most changing what they will, but otherwise being
very conservative).

the mainstream language in 20 years could very well be a hybrid of C#,
JavaScript, and features from many other languages...

by this time, I also expect notable mutations in terms of kernel
architecture and filesystems, us likely facing the demise of both processes
and heirarchical filesystems... Linux either facing replacement, or being
internally restructured into something almost unrecognizable...

and so on...

as noted, this is mostly all just speculation here...
Looking at the past is always easy.
Prophecy proves more difficult.

Jun 27 '08 #22

"Eric Sosman" <Er*********@sun.comwrote in message
news:1209149152.181452@news1nwk...
user923005 wrote:
>I guess that most C work is at the embedded level today. I doubt if
we will have garbage collectors running in our toasters any time soon.

That's why the toast crumbs keep accumulating at the bottom.
There might be a little tray that you can slide out to remove the crumbs
easily.

--
Bart
Jun 27 '08 #23

"jacob navia" <ja***@nospam.comwrote in message
news:fu**********@aioe.org...
>
In an interview with Dr. Dobb's, Paul Jansen explains which languages are
gaining in popularity and which are not:

<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple explanation
for this. Languages without automated garbage collection are getting out
of fashion. The chance of running into all kinds of memory problems is
gradually outweighing the performance penalty you have to pay for garbage
collection.
<end quote>

lcc-win has been distributing a GC since 2004.
I've little idea about GC or how it works (not about strategies and things
but how it knows what goes on in the program).

What happens in this code fragment:

void fn(void) {
    int *p = malloc(1236);
}

when p goes out of scope? (Presumably some replacement for malloc() is
used.)

What about a linked list when it's no longer needed? How does GC know that,
do you tell it? And it knows where all the pointers in each node are. I can
think of a dozen ways to confuse a GC so it all sounds like magic to me, if
it works.

--
Bart
Jun 27 '08 #24
>Please, that stuff appears every time I mention the GC here.
>
"I *could* store pointers in disk and read it later, so the GC
is not conforming"

You know that this has no practical significance at all. Since a
I asked for what rules the GC breaks. Anyone using your GC ought
to know this. The response seems to be a complete denial that it's
an issue at all.

Some of the consequences are not of much practical import (like
storing pointers on disk). Some of the consequences might be
significant (like not being allowed to use signal handlers AT ALL,
or not being allowed to use dynamically-allocated memory (I didn't
say allocate or free, which you can't do anyway, I said *use*)).
Does your implementation have this problem? I don't know. I'd be
VERY concerned if 2+2 occasionally came out 5.
>conforming program that stores pointers in the disk is using
malloc/free and that system is still available, those programs
would CONTINUE TO RUN!
Not if GC freed what the pointer points at, and it got re-used for
something else.
>The only case where the program would NOT run is if they use the GC
allocated pointers and put them in the disk!
Which is perfectly legitimate. I even wrote a program that did
this (without GC) because a giant table wouldn't fit in memory but
the NAMES of things would. So pieces of the table got "swapped"
(CP/M didn't have real swapping, so it was done manually to a file)
to (floppy) disk, and the name strings stayed put, and the pointers
to the names existed mostly in the temporary file.
>But this is the same as saying that

I stored the socket descriptor in the disk and then, next day, when I
turned on the machine the network connection would not work...
No, it's not the same. C makes no guarantees about sockets at all,
but it does guarantee that THE SAME INSTANCE OF A RUNNING PROGRAM
may store and retrieve pointers and expect them to be valid.

I believe POSIX also allows you to store socket descriptor numbers on disk
and then retrieve them IN THE SAME PROCESS (and assuming it's still open).
>Don't you think that it would be much more substantial to address the
issues that the GC poses in a substantive way?
I don't know what else your implementation breaks, and it's not at
all obvious. Does it break all use of signal handlers? Does it
break all use (not malloc and free, *use*) of dynamically allocated
memory in signal handlers? Does it break all use of data pointers
in signal handlers?
>Or the issues about the debugging support that Paul Hsieh proposes?
The C standard doesn't require debugging support.
>That would be more interesting than the eternal "GC-and-storing
pointers in the disk" discussion!
I recommend you just admit that GC breaks if you store and later retrieve
the only references to dynamically allocated memory. It's not a practical
problem as very, very few programs do this. If GC is optional (as in
a command-line flag to the compiler) it's very easy: document the kinds
of things programs CAN'T do with GC, and say that these programs should
turn GC off.
Now: WHAT *ELSE* DOES IT BREAK?

Jun 27 '08 #25
Paul Hsieh <we******@gmail.com> wrote:
>
If C was merely augmented with total memory consumption statistic
functions and/or heap walking mechanisms and heap checking, all the
main complaints about C's memory management problems would be
immediately addressed.
And a number of very fast and very useful allocation methodologies
would no longer be valid.

-Larry Jones

I hate it when they look at me that way. -- Calvin
Jun 27 '08 #26
In article <Z%*******************@text.news.virginmedia.com>
Bartc <bc@freeuk.com> wrote:
>I've little idea about GC or how it works ...
What happens when [the only pointer to a successful malloc() result]
goes out of scope?
>What about a linked list when it's no longer needed? How does GC know that,
do you tell it? And it knows where all the pointers in each node are. ...
There are a bunch of different ways to implement garbage collectors,
with different tradeoffs. To make one that works for all cases,
though, including circularly-linked data structures, one ultimately
has to compute a sort of "reachability graph".

Suppose, for instance, we have a table somewhere of "all allocated
blocks" (whether this is a list of malloc()s made into an actual
table or tree or whatever, or a set of pages for a page-based
collector, just picture it mentally as a regular old table, as if
it were written out on a big sheet of paper). The table has whatever
entries it needs to keep track of addresses and sizes -- if it is
a malloc() table, that might well be all it needs -- except that
it has a single extra bit, which we can call "referenced".

To do a full GC, we run through the table and clear all the
"referenced" bits. Then, we use some sort of System Magic to find
all "root pointers". In C, this could be "every address register,
plus every valid address" (on a machine that has those), or "those
parts used by the C compiler, ignoring things that might be used
by other languages" (this will probably speed up the GC -- if we
know the C compiler never stores a pointer in registers 5, 6, and
7, we can ignore them; if we know the C compiler never stores
pointers in "text segments" we can skip those, and so on). In
non-C languages, it may be much easier to find all root pointers
(for instance a Lisp system may have just one or a few such values).
Then we hit the really tricky part: for each root pointer, if it
points into some entry in the table, we set the "referenced" bit
to mark that entry as used, and -- recursively -- check the entire
memory area described by that table entry for more pointers. (If
the referenced bit is already set, we assume that the memory area
has been scanned earlier, and skip this scan. This prevents infinite
recursion, if a malloc()ed block contains pointers to itself.)

When we have finished marking each entry according to each root
pointer and all "reachable pointers" found via that pointer, we
make one final pass through the table, freeing those table entries
whose bits are still clear.

In other words, in code form, and making even more assumptions
(that there is just one "kind" of pointer -- if there are three
kinds, VALID_PTR() and/or find_entry_containing() become complicated):

void __scan(TABLE_ENTRY *);
TABLE_ENTRY *__find_entry_containing(POINTER_TYPE);

void __GC(void) {
    TABLE_ENTRY *e;
    POINTER_TYPE p;

    for (e in all table entries)
        e->ref = 0;
    for (p in all possible root pointers)
        if (VALID_PTR(p) && (e = __find_entry_containing(p)) != NULL)
            __scan(e);
    for (e in all table entries)
        if (!e->ref)
            __internal_free(e);
}

void __scan(TABLE_ENTRY *e) {
    TABLE_ENTRY *inner;
    POINTER_TYPE p;

    if (e->ref)
        return; /* nothing to do */
    e->ref = 1;
    for (p taking on all possible valid pointers based on e)
        if ((inner = __find_entry_containing(p)) != NULL)
            __scan(inner);
}

As you might imagine, all of these are pretty tricky to implement
correctly, and the scanning can take a very long time. These are
why there are so many kinds of GC systems.
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: gmail (figure it out) http://web.torek.net/torek/index.html
Jun 27 '08 #27
On Apr 25, 5:01 pm, lawrence.jo...@siemens.com wrote:
Paul Hsieh <websn...@gmail.com> wrote:
If C was merely augmented with total memory consumption statistic
functions and/or heap walking mechanisms and heap checking, all the
main complaints about C's memory management problems would be
immediately addressed.

And a number of very fast and very useful allocation methodologies
would no longer be valid.
It's easy to talk, isn't it? If you can perform free() and realloc()
correctly, that means you, in some sense, *know* what those memory
contents are. All the credible strategies I have looked at (bitmaps,
buddy systems, and plain header/footer linkage systems) can, one way or
another, support *all* the useful relevant extensions with minimal
performance impact.

People are voting with their feet with this programming language, and
it's because of false dichotomies like this. If there's some marginal,
crazy system that can't support this, it's not going to be updated to
the latest standard anyway. It's like you guys didn't take away any
lesson from the lack of C99 adoption.
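
To make the idea concrete, the interface I have in mind looks roughly like
this (the names are purely illustrative; they are not part of the standard
or of any existing library):

/* Hypothetical heap-instrumentation extension; names are illustrative only. */
#include <stddef.h>

typedef struct heap_stats {
    size_t bytes_in_use;   /* total size of all live allocations       */
    size_t block_count;    /* number of live blocks                    */
    size_t peak_bytes;     /* high-water mark since program start      */
} heap_stats;

/* Fill *out with current allocator statistics; nonzero on failure. */
int heap_get_stats(heap_stats *out);

/* Call cb once per live block, in unspecified order. */
void heap_walk(void (*cb)(void *block, size_t size, void *ctx), void *ctx);

/* Check allocator invariants (headers, boundary tags, free lists);
   return nonzero if corruption is detected. */
int heap_check(void);

Any allocator that already knows enough to implement free() and realloc()
correctly can expose this kind of information cheaply.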

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
Jun 27 '08 #28
jacob navia said:

<snip>
C programmers do not need debuggers and can debug programs by phone
without the source.

Dixit...

This attitude about debugging (debugging is only for stupid people that
have bugs. I do not have any bugs since I am the best programmer in the
world) permeates all the philosophy of the language, and memory
management is no exception.
It is plain from the above that you are guilty of misrepresentation, either
maliciously or accidentally. If it's malicious, that makes you a troll. If
it's accidental, it makes you an idiot (because it has already been
explained to you several times *why* it is a misrepresentation).

If you want garbage collection, you know where to find it. I can't see
anything that stops you from using it if you want to. After all, you never
use platforms where it isn't available, right?

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Jun 27 '08 #29

"user923005" <dc*****@connx.comwrote in message
news:e7**********************************@l64g2000 hse.googlegroups.com...
>On Apr 25, 3:34 pm, "cr88192" <cr88...@NOSPAM.hotmail.comwrote:
>"Yunzhong" <gaoyunzh...@gmail.comwrote in message
<snip>
>>
and so on...

as noted, this is mostly all just speculation here...

Looking at the past is always easy.
Prophecy proves more difficult.
yes, however, IME it is not always so difficult...

my approach consists of taking the current observed trends, and "running
them forwards".

the reason I can try to predict so far ahead is the incredibly slow
rate of change WRT programming languages.
now, my claim here is not accuracy, only a vague guess...
however, even vague guesses can be worthwhile...
something different could happen, so then one has to consider, how likely is
it that it will happen this way, and how likely it is that it will happen
some other way...

my understanding of the trends, is because I have lived through many of the
changes, and have watched them progress at the terribly slow rate at which
they are going, and so, can make a guess as to what will happen within a
related timeframe.

I think, the next 10 years are not too hard to guess (ignoring any
catastrophic events or revolutionary changes, but these are rare enough to
be ignored).

moving much outside this, say, 15 or 20 years, requires a good deal more
speculation.
the reason for a decline in windows and rise of linux would be due to the
current decline in windows and current rise in linux. I had just assumed
that these continue at the current rates.

the reason for a decline in the process model would be because of the rise
of VM-like developments (however, it is very possible that processes may
remain as a vestigial feature, and thus not go away).

as for filesystems, the prediction is that they move from being pure
hierarchies, to being annotated with so much metadata that they are, in
effect, based on the network model, rather than hierarchical.
other guesses:

I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.

estimating from current curves, computers will likely have around 256GB RAM,
and HDs will be around 125TB...

the model doesn't hold up so well WRT processor power, a crude guess is that
there will be around 64 cores, processing power being somewhere between 1.1
and 1.5 TFLOP (assuming only a gradual increase in terms of per-core power
and that the per-core complexity will go up a little faster than the
transistor density). an uncertainty here is in the scope and nature of
future vector extensions, so my guess is likely to be conservative...

3D lithography and Stack-chips, along with reaching a maximal transistor
density, are other factors that could throw predictions (currently based
simply on mental curve approximation).

Jun 27 '08 #30
On Apr 25, 8:41 pm, "cr88192" <cr88...@NOSPAM.hotmail.com> wrote:
"user923005" <dcor...@connx.com> wrote in message

news:e7**********************************@l64g2000hse.googlegroups.com...
On Apr 25, 3:34 pm, "cr88192" <cr88...@NOSPAM.hotmail.com> wrote:
"Yunzhong" <gaoyunzh...@gmail.com> wrote in message

<snip>
and so on...
as noted, this is mostly all just speculation here...
Looking at the past is always easy.
Prophecy proves more difficult.

yes, however, IME it is not always so difficult...

my approach consists of taking the current observed trends, and "running
them forwards".

the reason I can try to predict so far ahead, is given the incredibly slow
rate of change WRT programming languages.
Exactly. COBOL and Fortran are still going strong.
now, my claim here is not accuracy, only a vague guess...
however, even vague guesses can be worthwhile...
That, and $3.75 will get you a cup of coffee.
something different could happen, so then one has to consider, how likely is
it that it will happen this way, and how likely it is that it will happen
some other way...

my understanding of the trends, is because I have lived through many of the
changes, and have watched them progress at the terribly slow rate at which
they are going, and so, can make a guess as to what will happen within a
related timeframe.

I think, the next 10 years are not too hard to guess (ignoring any
catastrophic events or revolutionary changes, but these are rare enough to
be ignored).
I think that the next ten years will be an even bigger surprise than
the previous ten years, and the surprises will accelerate. Since
total knowledge doubles every 5 years now, and the trend is going
exponential-exponential, I think that projecting what will be is much
harder than you think.
moving much outside this, say, 15 or 20 years, requires a good deal more
speculation.
I think that 6 months from now is difficult, even for stock prices,
much less industry trends. We can mathematically project these things
but you will see that the prediction and confidence intervals bell out
in an absurd manner after the last data point.

the reason for a decline in windows and rise of linux would be due to the
current decline in windows and current rise in linux. I had just assumed
that these continue at the current rates.
I guess that 15 years from now Windows will still dominate, unless the
Mac takes over or Linux does really well. Linux is not a desktop
force, but it is making server inroads.
the reason for a decline in the process model would be because of the rise
of VM-like developments (however, it is very possible that processes may
remain as a vestigial feature, and thus not go away).
VM developments are nice, but we lose horsepower. I'm not at all
sure it is the right model for computation.
as for filesystems, the prediction is that they move from being pure
heirarchies, to being annotated with so much metadata that they are, in
effect, based on the network-model, rather than heirarchical.
I think that file systems should be database systems, like the AS/400
and even OpenVMS have.
other guesses:

I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.
I guess five years, but maybe not. We are going 64 bit here now, both
with machines and OS, but it's still a few weeks off. However, I
guess that our office is atypical.
estimating from current curves, computers will likely have around 256GB RAM,
and HDs will be around 125TB...
256G Ram is way too low for 10 years from now. My guess would be 1 TB
give or take a factor of 2.

I suspect we won't be using platter based hard drive technology ten
years from now. I know of some alternate technologies that will store
1 TB on one square cm and so something like that will take over once
the disk drive manufacturers have depreciated their current capital.
the model doesn't hold up so well WRT processor power, a crude guess is that
there will be around 64 cores, processing power being somewhere between 1.1
and 1.5 TFLOP (assuming only a gradual increase in terms of per-core power
and that the per-core complexity will go up a little faster than the
transistor density). an uncertainty here is in the scope and nature of
future vector extensions, so my guess is likely to be conservative...
I don't know if core count will be the thing that pushes technology or
some completely new idea.
It went like this before:
Mechanical switch
Relay
Vacuum tube
Transistor
IC
VLSI
3D lithography and Stack-chips, along with reaching a maximal transistor
density, are other factors that could throw predictions (currently based
simply on mental curve approximation).
I guess something new will come along right about the time that
silicon can no longer keep up. It always has before.

We get ourselves into trouble when we try to predict the future. Look
at the economic analysis by Marx and Engels. It was very good. But
their predictions on what the future held were not so good.

I think that we have a hard time knowing for sure about tomorrow.
Next week is a stretch. Next year is a guess. Five years is
speculation. Ten years is fantasy. Twenty years is just being silly.

IMO-YMMV

Now, is there some C content in here somewhere? Oh, right -- C is
going to lose force. I predict:
1. C will lose force.
OR
2. C will gain force.
OR
3. C will stay the same.

That's trichotomy for you. And I can guarantee that I am right about
it.

Jun 27 '08 #31
jacob navia <ja***@nospam.com> writes:
Keith Thompson wrote:
>I did not argue against GC. I pointed out a possible problem that
might be associated with the use of GC, particularly in the context of
conformance to the C standard.

Please, that stuff appears every time I mention the GC here.
I posted it in direct response to a discussion about whether GC breaks
C standard conformance.
"I *could* store pointers in disk and read it later, so the GC
is not conforming"
Storing pointers on disk is not the only case I mentioned.

I can easily imagine a program that encrypts some in-memory data
structure and decrypts it later, to protect against snooping by other
users on the machine. That data structure could easily include the
only pointers to some chunk of memory outside the encrypted region.
You know that this has no practical significance at all. Since a
conforming program that stores pointers in the disk is using
malloc/free and that system is still available, those programs
would CONTINUE TO RUN!

The only case where the program would NOT run is if they use the GC
allocated pointers and put them in the disk!
It's something you have to think about when deciding to use GC.
That's all.

Now that you mention it, I think the point about malloc vs. gc_malloc
(sp?) was brought up in this context before. If memory allocated by
malloc isn't garbage-collected, then there may not be a conformance
issue; the behavior of gc_malloc and of memory allocated by it is
implementation-defined, and the standard doesn't impose the same
requirements on it as it does on malloc-created pointers and what they
point to. So I may have been mistaken. I've participated in a lot of
discussions here; I do occasionally forget some of the details.
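
For what it's worth, that split looks roughly like this with the Boehm
collector's names standing in (I don't know lcc-win's exact interface, so
treat this as a sketch); one caveat is that the collector does not, by
default, scan blocks obtained from the system malloc for pointers into the
collected heap:

#include <gc.h>        /* Boehm collector interface */
#include <stdlib.h>

struct node { struct node *next; int value; };

int main(void)
{
    GC_INIT();

    /* Collected allocation: never freed by hand; reclaimed when unreachable. */
    struct node *a = GC_MALLOC(sizeof *a);
    a->value = 1;
    a->next  = NULL;

    /* Ordinary allocation: the usual C semantics apply, including free(). */
    int *b = malloc(100 * sizeof *b);
    if (b) {
        b[0] = 2;
        free(b);
    }

    /* Caveat: storing the only pointer to a collected block inside plain
       malloc'd memory hides it from the (default) collector. */
    return 0;
}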

Even so, a programmer using GC still has to be aware that there are
some (admittedly exotic) things that he can't do with the resulting
pointers.

And even if I made a minor error, that is no excuse for your
insufferable rudeness. If you're bored by my bringing it up in
response to somebody else's question, then DON'T ANSWER. I wasn't
talking to you anyway.

[snip]

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 27 '08 #32
cr88192 wrote:
>
I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.
It already is, at least on the laptop, desktop and low to mid range
server market. You'd be hard pushed to find a new 32 bit machine these
days.

--
Ian Collins.
Jun 27 '08 #33
rio
"jacob navia" <ja***@nospam.comha scritto nel messaggio
news:fu**********@aioe.org...
Re: Garbage collection
garbage collection is the wrong direction;
it is better to have a malloc/free system that allows checks on boundaries
and checks for memory leaks too, because that makes it easy to find errors.

the ideal language is one that sounds the alarm
(and writes it down in a log file)
when there is an error

Buon Giorno
r_io

Jun 27 '08 #34

"user923005" <dc*****@connx.comwrote in message
news:3b**********************************@y21g2000 hsf.googlegroups.com...
On Apr 25, 8:41 pm, "cr88192" <cr88...@NOSPAM.hotmail.comwrote:
>
>the reason I can try to predict so far ahead, is given the incredibly
slow
rate of change WRT programming languages.

Exactly. COBOL and Fortran are still going strong.
yes, but they are still less dominant than they were...

assembler is still going strong as well FWIW...

>now, my claim here is not accuracy, only a vague guess...
however, even vague guesses can be worthwhile...

That, and $3.75 will get you a cup of coffee.
depends, I have not really been to cafes.
I can get steel cans of coffee 2 for $1...

I think it is $0.75 for coffee in a plastic bottle.
all this is at a nearby convenience store...

>>
I think, the next 10 years are not too hard to guess (ignoring any
catastrophic events or revolutionary changes, but these are rare enough
to
be ignored).

I think that the next ten years will be an even bigger surprise than
the previous ten years, and the surprises will accelerate. Since
total knowledge doubles every 5 years now, and the trend is going
exponential-exponential, I think that projecting what will be is much
harder than you think.
yeah.

well, one can wait and see I guess...

>moving much outside this, say, 15 or 20 years, requires a good deal more
speculation.

I think that 6 months from now is difficult, even for stock prices,
much less industry trends. We can mathematically project these things
but you will see that the prediction and confidence intervals bell out
in an absurd manner after the last data point.
stocks are likely to be much harder, I think, considering that they vary
chaotically and have strong feedback properties, rather than changing gradually over
a long period of time...

my guess is that predictions get far worse the further one moves into the
future, which is why I can speculate 10 years, but I don't really make any
solid claims for 20 years.

in the reverse direction, I can assert with pretty good confidence that
within 6 months or 1 year that the programming-language landscape will be
much the same as it is right now.

>the reason for a decline in windows and rise of linux would be due to the
current decline in windows and current rise in linux. I had just assumed
that these continue at the current rates.

I guess that 15 years from now Windows will still dominate, unless the
Mac takes over or Linux does really well. Linux is not a desktop
force, but it is making server inroads.
Windows dominance is possible, but unless recent trends change, I don't
expect it to last.

I also personally doubt that Mac is going to regain dominance (though it
can't really be ruled out either).
the great problem with the Mac is that there are not many good reasons for people
TO use it, so it seems most likely that they will not.

at this point, Linux is not a desktop force (or even a very good desktop
OS); however, the Linux driver-support situation is steadily improving, and
open-source apps are gaining a strong footing vs commercial apps (for
example, at my college they use OpenOffice rather than MS Office, ...).

as such, it may not be too long before Windows and Linux are "comparable" in
many respects, and if MS throws out much more total crap (like they have
done with Vista, ...), they may just end up convincing a lot more people to
make the switch...

>the reason for a decline in the process model would be because of the
rise
of VM-like developments (however, it is very possible that processes may
remain as a vestigial feature, and thus not go away).

VM developments are nice, but we lose horsepower. I'm not at all
sure it is the right model for computation.
I agree, technically.

VMs are not necessarily the "best" idea, but they may well continue to gain
sway over more conventional static-compilation approaches in many cases.
however, this was more in my longer-term speculation (I still expect static
compilation will be dominant in 10 years, but VMs and hybrid approaches may
become common as well).

I expect there will be a lot more hybrid apps/VMs (usually, with part of the
app being statically compiled, and part of the app running in a VM), than
either purely statically compiled, or purely VM-based apps.

however, I don't really know in which direction this will go (or which exact
form things will take).

>as for filesystems, the prediction is that they move from being pure
hierarchies, to being annotated with so much metadata that they are, in
effect, based on the network model, rather than hierarchical.

I think that file systems should be database systems, like the AS/400
and even OpenVMS have.
I doubt, however, that something like the AS/400 or OpenVMS will gain sway;
more likely, it will be conventional filesystems with some huge amount of
crufty metadata tacked on, probably with features like union directories,
... fouling up what had formerly been a clean hierarchical filesystem.

some of this has already been added to Linux, and the main change may be
that such things become "standard" filesystem features, in much the same way as
long filenames, Unicode filenames, and symbolic links...

notice on a modern Linux distro just how long the list of mounts can end up
being...

>other guesses:

I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.

I guess five years, but maybe not. We are going 64 bit here now, both
with machines and OS, but it's still a few weeks off. However, I
guess that our office is atypical.
possibly, or at least this will be when the "bulk" transition is
completed...

32-bit OSs have been the norm for about 12 years now, but we still
have some 16-bit apps floating around...

in 10 years, likely x86-64 will still be dominant, but maybe there will be a
competitor by this point or a transition to a new major replacement...

>estimating from current curves, computers will likely have around 256GB
RAM,
and HDs will be around 125TB...

256G Ram is way too low for 10 years from now. My guess would be 1 TB
give or take a factor of 2.
it was a rough guess based on vague memory.

in 1998, a new desktop came with something like 64MB RAM and a P2; now a desktop can
come with 4GB.
so, my estimate simply extrapolated that growth rate...

I suspect we won't be using platter based hard drive technology ten
years from now. I know of some alternate technologies that will store
1 TB on one square cm and so something like that will take over once
the disk drive manufacturers have depreciated their current capital.
yeah. I was ignoring the technology, again, guessing based on linear
estimation from past trends.
as for current magnetic recording, AFAIK they are using tricks like vertical
polarization and have multi-layer magnetic recording, ...

I was just making a crude guess that they continue increasing the density on
magnetic platters much as they have done for the past few decades. never mind
that by the time magnetic platters could pull off densities like this,
the underlying recording will probably work very differently.

for example, rather than polarizing specific points in the medium, the
recording will consist of a good number of likely overlapping magnetic
fields, and likely be geometric rather than linear (for example, each point
represents a 3D vector or unit-quaternion value rather than a scalar). the
encoding and decoding electronics then go about converting chunks of data
into masses of overlapping geometric waveforms (data being encoded in terms
of both the orientation and the nature of the paths taken around the surface
of a 4D unit sphere...).

note that a single head would be used, but would likely consist of 4+
directional magnetic sensors (a tripod and a vertical sensor), the
orientation measured via triangulation or similar (note that a transform
would be needed, since the sensor axes would be non-orthogonal, ...).
additional sensors could allow a vertical component to be added as well
(basically, we could then have 6 axes per track, XYZW VL, where XYZW form a
Quaternion, V represents a limited-range vertical component, possibly
allowing encoding arcs or spirals, or stacking spheres, or similar, and L
represents the linear position along the track).

so, I suspect that, yes, we can squeeze a little more density out of these
here magnetic platters (currently, we only exploit 2 magnetic axes, or 3 in
some newer drives...).
>the model doesn't hold up so well WRT processor power, a crude guess is
that
there will be around 64 cores, processing power being somewhere between
1.1
and 1.5 TFLOP (assuming only a gradual increase in terms of per-core
power
and that the per-core complexity will go up a little faster than the
transistor density). an uncertainty here is in the scope and nature of
future vector extensions, so my guess is likely to be conservative...

I don't know if core count will be the thing that pushes technology or
some completely new idea.
It went like this before:
Mechanical switch
Relay
Vacuum tube
Transistor
IC
VLSI
I don't really know here...

>3D lithography and Stack-chips, along with reaching a maximal transistor
density, are other factors that could throw predictions (currently based
simply on mental curve approximation).

I guess something new will come along right about the time that
silicon can no longer keep up. It always has before.
yeah.

We get ourselves into trouble when we try to predict the future. Look
at the economic analysis by Marx and Engels. It was very good. But
their predictions on what the future held were not so good.
IMO, they screwed up really hard in that they analyzed everything, and then
set out to rework everything in their own image; that necessarily does not
go over so well...

personally I have a lot more respect for Nietzsche...

I think that we have a hard time knowing for sure about tomorrow.
Next week is a stretch. Next year is a guess. Five years is
speculation. Ten years is fantasy. Twenty years is just being silly.
yes, but even speculation can have at least some merit...

of course, note that I personally tend to use bottom-up planning, rather
than top-down planning, so my approaches tend to be a lot more tolerant of
prediction errors (in my case, I often assume that predictions are likely to
be incorrect anyway, so it is always good to keep a good fallback handy in
case things work out differently than expected...).

IMO-YMMV

Now, is there some C content in here somewhere? Oh, right -- C is
going to lose force. I predict:
1. C will lose force.
OR
2. C will gain force.
OR
3. C will stay the same.

That's trichotomy for you. And I can guarantee that I am right about
it.
2 is unlikely (it is already near the top), as is 3 (the conditions that made
C so strong are gradually becoming less the case).

assuming 1, we can then guess among the most likely available options.

Jun 27 '08 #35
"Bartc" <bc@freeuk.comwrites:
>
What happens in this code fragment:

void fn(void) {
int *p = malloc(1236);
}

when p goes out of scope? (Presumably some replacement for malloc() is
used.)
I think this one is a really simple example, and I would be very
surprised if any GC failed on this simple thing. As you can see, p
can never be reached after leaving the function, so it's clear that
it's "free" to be collected.

The point with GC is that you are better off with some language support,
and C does not support you with anything. As Ken has pointed out, you can
take apart a pointer and reconstruct it (hopefully one knows how to
do that), but I cannot see how the simple cases can be a
problem; if they were, then the Boehm-Weiser GC could not work at
all. But it works quite well.

So I'd argue that as long as you do not play "dirty" tricks with your
pointers you can assume that the GC works. Of course, if anyone who is
against GC would post a very simple example of where this
fails, I really would be curious to learn about it.

For my part, I can see that the Boehm-Weiser GC is used in very
many different packages and it seems to work flawlessly. If I had the
chance to use it I would not hesitate to do so.

For me this is one part of language development where the "machine"
can do better than me, and I'm very happy to give that task to it.
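
For what it's worth, here is a minimal sketch of Bartc's fn() example rewritten
for the Boehm-Weiser collector. It assumes the usual <gc.h> interface
(GC_INIT, GC_MALLOC, GC_gcollect, GC_get_heap_size); treat it as a sketch,
not a statement about any particular installation:

#include <stdio.h>
#include <gc.h>                      /* Boehm-Weiser collector */

static void fn(void)
{
    int *p = GC_MALLOC(1236);        /* collected allocation, never freed by hand */
    if (p != NULL)
        p[0] = 1;
}                                    /* p goes out of scope; the block is unreachable */

int main(void)
{
    int i;

    GC_INIT();
    for (i = 0; i < 100000; i++)
        fn();                        /* garbage piles up ...                      */
    GC_gcollect();                   /* ... and is reclaimed, because nothing
                                        reachable points at those blocks any more */
    printf("heap size: %lu\n", (unsigned long)GC_get_heap_size());
    return 0;
}

Nothing else in the program has to change; the calls to free() simply disappear.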
>
What about a linked list when it's no longer needed? How does GC know that,
do you tell it? And it knows where all the pointers in each node are. I can
think of a dozen ways to confuse a GC so it all sounds like magic to me, if
it works.
Well, if you keep track of every allocation, then you know where your
allocations are; after that it's "just" a matter of traversing all the pointers,
and if you find blocks that are no longer reachable it's safe to throw them
away. Of course bugs can and will happen, but I'd argue that after nearly
fifty years of GCs in diverse languages the big bummers are gone. For my
part, I would trust an application "controlled by GC" more than any somewhat
larger new C program doing all of that by "hand".
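
To make the linked-list case concrete, here is a small sketch, again assuming
the Boehm <gc.h> interface. Nobody tells the collector the list is dead; once
the head pointer is overwritten, no node is reachable any more, so every node
becomes eligible for collection:

#include <gc.h>

struct node {
    int          value;
    struct node *next;
};

static struct node *head;            /* a root the collector can see */

static void build_list(int n)
{
    int i;
    for (i = 0; i < n; i++) {
        struct node *nd = GC_MALLOC(sizeof *nd);
        if (nd == NULL)
            return;
        nd->value = i;
        nd->next  = head;            /* each node keeps the rest of the chain alive */
        head      = nd;
    }
}

int main(void)
{
    GC_INIT();
    build_list(1000);
    head = NULL;                     /* drop the only entry point: the whole list
                                        is now unreachable and may be collected   */
    GC_gcollect();
    return 0;
}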

But the nice thing is that you have the choice (well, sometimes you
don't). If I think of a well-known OS and its many "memory
allocation schemes" I start to shiver. It's a miracle that it
works, and it's IMHO a big achievement of the people involved in
programming to get it into that state... However, I doubt
anyone knows how much time they spent hunting down resource problems,
and how much time and effort they spent on developing tools for
helping them find bugs. At least it seems a not-so-small industry has
lived well (and still lives) on hunting down such resource
problems. And I can't remember having seen this in any context other than C
and its derivatives or Pascal and its derivatives, languages without GC from the start.

Regards
Friedrich
--
Please remove just-for-news- to reply via e-mail.
Jun 27 '08 #36

"Ian Collins" <ia******@hotmail.comwrote in message
news:67*************@mid.individual.net...
cr88192 wrote:
>>
I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.
It already is, at least on the laptop, desktop and low to mid range
server market. You'd be hard pushed to find a new 32 bit machine these
days.
yes, and it will probably remain so for a while.
in not too long probably the software will start to catch up...

--
Ian Collins.

Jun 27 '08 #37
cr88192 wrote:
"Ian Collins" <ia******@hotmail.comwrote in message
news:67*************@mid.individual.net...
>cr88192 wrote:
>>I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.
It already is, at least on the laptop, desktop and low to mid range
server market. You'd be hard pushed to find a new 32 bit machine these
days.

yes, and it will probably remain so for a while.
in not too long probably the software will start to catch up...
I caught up a long time ago, at least in the Unix world.

--
Ian Collins.
Jun 27 '08 #38
On Fri, 25 Apr 2008 12:16:30 UTC, jacob navia <ja***@nospam.com>
wrote:

lcc-win has been distributing a GC since 2004.
Fool, stop spamming for your proprietary product that has nothing to
do with the topic of this group!

jacob navia <ja***@nospam.com> constantly ignores the topic of clc to
spam for his faulty compiler and for things that have nothing to do with
the topic of this group. Spammer, go away!

jacob navia <ja***@nospam.com> knows nothing about the standard, as he
constantly proves with his spam - or where in the standard is GC
mentioned? Giving something away for free does not make it acceptable to spam for
it anyway!

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of German eComStation
eComStation 1.2R in German is here!
Jun 27 '08 #39
On Fri, 25 Apr 2008 22:13:13 UTC, jacob navia <ja***@nospam.com>
wrote:
>
Surely my intention wasn't to make an alternative out of GC or
not GC! There are many other possibilities.
There is absolutely no cause to trust a compiler whose author is
unable to understand the standard and likes to ignore it anyway.

So jacob navia <ja***@nospam.com> proves himself again and again a
fool, not only by spamming but by ignoring the standard too.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of German eComStation
eComStation 1.2R in German is here!
Jun 27 '08 #40
Ian Collins <ia******@hotmail.comwrites:
cr88192 wrote:
>>
I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.
It already is, at least on the laptop, desktop and low to mid range
server market. You'd be hard pushed to find a new 32 bit machine these
days.
For new machines maybe (maybe!), but the fact is that people do not
upgrade that often anymore - the curve has flattened IMO.
Jun 27 '08 #41
On Apr 25, 9:34 pm, gordonb.8s...@burditt.org (Gordon Burditt) wrote:
lcc-win has been distributing a GC since 2004.
It really helps.

In what ways does that implementation violate the standard?

My bet is that it will incorrectly free pieces of allocated memory
when the only references to that memory are in a file (written by
that process and later read back in by the same process). If lcc-win
actually handles this, its performance likely sucks if it has to
scan gigabyte (or worse, terabyte) files for pointer references.
I think the standard also allows, under the right circumstances,
for pointers to be *encrypted*, then stored in a file, and later
read back, decrypted, and used.
It is possible in C99.
You convert the pointer to intptr_t, encrypt the integer value, write it
to the file, and later reverse the process.
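
A rough sketch of that round trip under C99; the XOR "encryption" and the use
of tmpfile() are just stand-ins for illustration, not anything prescribed by
the thread:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define KEY ((intptr_t)0x5A5A5A5A)

int main(void)
{
    int *p = malloc(sizeof *p);      /* plain malloc, not GC-allocated */
    if (p == NULL)
        return 1;
    *p = 42;

    /* convert to intptr_t, "encrypt", and park it in a file */
    intptr_t hidden = (intptr_t)(void *)p ^ KEY;
    FILE *f = tmpfile();
    if (f == NULL)
        return 1;
    fwrite(&hidden, sizeof hidden, 1, f);
    p = NULL;                        /* no in-memory copy of the pointer remains */

    /* later: read it back, decrypt, and use it again */
    rewind(f);
    fread(&hidden, sizeof hidden, 1, f);
    int *q = (void *)(hidden ^ KEY);
    printf("%d\n", *q);              /* still valid with malloc/free; a collector
                                        could have reclaimed the block meanwhile  */
    free(q);
    fclose(f);
    return 0;
}

This is exactly the sort of program Gordon described: to stay safe, a collector
would have to treat the file contents as possible roots, which no realistic
collector does.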
Jun 27 '08 #42
"Richard" <de***@gmail.comwrote in message.
Ian Collins <ia******@hotmail.comwrites:
>cr88192 wrote:
>>>
I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.
It already is, at least on the laptop, desktop and low to mid range
server market. You'd be hard pushed to find a new 32 bit machine these
days.

For new machines maybe (maybe!), but the fact is that people do not
upgrade that often anymore - the curve has flattened IMO.
A non-technical person, or even a technical person, isn't really interested
in how fast his processor is or how much memory it has installed. He wants
to know "what can I do with this computer?"
At the moment the answer is that five-year-old hardware will play video
clips on the web perfectly happily, will do word processing with ease, and will
run a nice 3D game. So there's no reason to upgrade except maybe for the
game, and not everyone plays 3D video games.

The really interesting question is what new things we will find to do with
computers.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jun 27 '08 #43
vi******@gmail.com wrote, On 26/04/08 10:43:
On Apr 25, 9:34 pm, gordonb.8s...@burditt.org (Gordon Burditt) wrote:
>>lcc-win has been distributing a GC since 2004.
It really helps.
In what ways does that implementation violate the standard?

My bet is that it will incorrectly free pieces of allocated memory
when the only references to that memory are in a file (written by
that process and later read back in by the same process). If lcc-win
actually handles this, its performance likely sucks if it has to
scan gigabyte (or worse, terabyte) files for pointer references.
I think the standard also allows, under the right circumstances,
for pointers to be *encrypted*, then stored in a file, and later
read back, decrypted, and used.
It is possible in C99.
You convert the pointer to intptr_t, encrypt the integer value, write it
to the file, and later reverse the process.
It's possible in C89 as well. You just treat it as an array of unsigned
char!
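
A C89-flavoured sketch of that approach: the pointer's bytes are copied into an
unsigned char array, written out, read back, and reassembled. tmpfile() here is
just a stand-in for whatever file a real program would use:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int *p;
    int *q;
    unsigned char bytes[sizeof p];
    FILE *f;

    p = malloc(sizeof *p);
    if (p == NULL)
        return 1;
    *p = 7;

    memcpy(bytes, &p, sizeof p);     /* the pointer as an array of unsigned char */
    f = tmpfile();
    if (f == NULL)
        return 1;
    fwrite(bytes, 1, sizeof bytes, f);
    p = NULL;                        /* the only copy now lives in the file */

    rewind(f);
    fread(bytes, 1, sizeof bytes, f);
    memcpy(&q, bytes, sizeof q);     /* reconstruct the pointer */
    printf("%d\n", *q);              /* fine with malloc/free; a GC might already
                                        have reclaimed the block                  */
    free(q);
    fclose(f);
    return 0;
}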

I can actually see potentially good reasons for us (my company) to store
some pointers in a temporary table in a database on occasion; probably
the only reason we do not do this is that the original author did not
think of it. The database is written in something close to standard C
(record locking being the main issue for standard C).
--
Flash Gordon
Jun 27 '08 #44

"Chris Torek" <no****@torek.netwrote in message
news:fu*********@news5.newsguy.com...
In article <Z%*******************@text.news.virginmedia.com >
Bartc <bc@freeuk.comwrote:
>>I've little idea about GC or how it works ...
What happens when [the only pointer to a successful malloc() result]
goes out of scope?
>>What about a linked list when it's no longer needed? How does GC know
that,
do you tell it? And it knows where all the pointers in each node are. ...
Suppose, for instance, we have a table somewhere of "all allocated
blocks" (whether this is a list of malloc()s made into an actual . . .
To do a full GC, we run through the table and clear all the
"referenced" bits. Then, we use some sort of System Magic to find
all "root pointers". In C, this could be "every address register,
Probably I haven't fully understood, but are you saying something like:

"Scan through all the memory areas that C can store values in (static areas,
automatic (eg. stack) areas, and you're saying registers too), but ignore
heap areas at first.

And look for a value that, if it happens to be a pointer, points into one of
the allocated areas in this table. Then you can extend the search to that
block too."

OK, the bit I have trouble with is: how does GC know if a value in memory is
a pointer, and not an int, character string, float etc that by coincidence
has a bit-pattern that is in the address range of an allocated block?

--
Bartc
Jun 27 '08 #45
"Bartc" <bc@freeuk.comwrites:
OK, the bit I have trouble with is: how does GC know if a value in memory is
a pointer, and not an int, character string, float etc that by coincidence
has a bit-pattern that is in the address range of an allocated
block?
Mostly (in C) they don't. Anything that looks like it *might* be a
pointer is treated as if it were one. The term "conservative GC" is
sometimes used for this behaviour.
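
A toy sketch of the idea; the block table and the names below are invented for
illustration and are not how the Boehm collector is really organised. During
the mark phase, every word found while scanning a root region is tested against
the table of live allocations, and any block it might point into is kept:

#include <stddef.h>
#include <stdint.h>

/* hypothetical allocation table kept by the collector */
struct block {
    char  *base;      /* start of the allocation   */
    size_t size;      /* its length in bytes       */
    int    marked;    /* set during the mark phase */
};

static struct block blocks[1024];
static size_t       nblocks;

/* Treat 'word' as a possible pointer: if it lands inside any
   recorded block, conservatively mark that block as reachable. */
static void mark_if_pointer(uintptr_t word)
{
    size_t i;
    for (i = 0; i < nblocks; i++) {
        uintptr_t lo = (uintptr_t)blocks[i].base;
        uintptr_t hi = lo + blocks[i].size;
        if (word >= lo && word < hi) {
            blocks[i].marked = 1;    /* might be a pointer, so keep the block */
            break;
        }
    }
}

/* Scan a region (a stack, static data, or an already-marked block)
   word by word, feeding every value to the test above.             */
static void scan_region(const void *start, const void *end)
{
    const uintptr_t *p = start;
    const uintptr_t *q = end;
    while (p < q)
        mark_if_pointer(*p++);
}

This is also why a conservative collector can retain garbage: an int or a float
whose bit pattern happens to fall inside an allocated block pins that block
just as a real pointer would.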

--
Ben.
Jun 27 '08 #46
santosh wrote:
Gordon Burditt wrote:
.... snip unattributed quotes about GC systems ...
>
>My bet is that it will incorrectly free pieces of allocated memory
when the only references to that memory are in a file (written by
that process and later read back in by the same process). If
lcc-win actually handles this, its performance likely sucks if it
has to scan gigabyte (or worse, terabyte) files for pointer
references. I think the standard also allows, under the right
circumstances, for pointers to be *encrypted*, then stored in a
file, and later read back, decrypted, and used.

Oh, yes, to count as GC, it has to occasionally actually free
something eligible to be freed.

I don't consider this to be a fatal flaw for GC in general or
this implementation in particular, as storing pointers in files
is relatively unusual. But a standards-compliant GC has to
deal with it.

As GC hasn't been defined by the standard yet, we can't say. For
all we know WG14 might decide to excuse GC from scanning for
pointers in files and similar stuff. Right now using a GC is
non-conforming simply because the standard attempts no
definition for it.
Actually the available systems are conforming programs, because the complete
source code for the GC mechanism is published, and can be written
in purely standard C. However, the resulting mechanism is not a
replacement for the malloc/free system, even though it uses it,
because substituting it for malloc/free would yield a non-conforming system.

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.
** Posted from http://www.teranews.com **
Jun 27 '08 #47
"Chris Thomasson" <cr*****@comcast.netwrote in message
news:45******************************@comcast.com. ..
"jacob navia" <ja***@nospam.comwrote in message
news:fu**********@aioe.org...
>>
In an interviw with Dr Dobbs, Paul Jansen explains which languages are
gaining in popularity and which not:

<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple explanation
for this. Languages without automated garbage collection are getting out
of fashion. The chance of running into all kinds of memory problems is
gradually outweighing the performance penalty you have to pay for garbage
collection.
<end quote>

lcc-win has been distributing a GC since 2004.

It really helps.

If you need GC, well yes, it would help out a whole lot indeed. As long as
it's not mandatory, I see absolutely nothing wrong with including a
full-blown GC in your compiler package.
As long as it is possible to turn it off, I have no problem with it either.
On the other hand, if it just runs *period* I have a big problem with it.
** Posted from http://www.teranews.com **
Jun 27 '08 #48

"Ben Bacarisse" <be********@bsb.me.ukwrote in message
news:87************@bsb.me.uk...
"Bartc" <bc@freeuk.comwrites:
>OK, the bit I have trouble with is: how does GC know if a value in memory
is
a pointer, and not an int, character string, float etc that by
coincidence
has a bit-pattern that is in the address range of an allocated
block?

Mostly (in C) they don't. Anything that looks like it *might* be a
pointer is treated as if it were one. The term "conservative GC" is
sometimes used for this behaviour.
And I thought it was doing something clever!

--
Bartc
Jun 27 '08 #49
"Bartc" <bc@freeuk.comwrote in message
news:Z%*******************@text.news.virginmedia.c om...
>
"jacob navia" <ja***@nospam.comwrote in message
news:fu**********@aioe.org...
>>
In an interviw with Dr Dobbs, Paul Jansen explains which languages are
gaining in popularity and which not:

<quote>
DDJ: Which languages seem to be losing ground?

PJ: C and C++ are definitely losing ground. There is a simple explanation
for this. Languages without automated garbage collection are getting out
of fashion. The chance of running into all kinds of memory problems is
gradually outweighing the performance penalty you have to pay for garbage
collection.
<end quote>

lcc-win has been distributing a GC since 2004.

I've little idea about GC or how it works (not about strategies and things
but how it knows what goes on in the program).

What happens in this code fragment:

void fn(void) {
int *p = malloc(1236);
}

when p goes out of scope? (Presumably some replacement for malloc() is
used.)
A compiler can optimize this away. Even if you did:

#include <stdlib.h>

extern void fne(void * const);

void fn(void) {
    void * const buf1 = malloc(1236);
    if (buf1) {
        void * const buf2 = malloc(1236);
        if (buf2) {
L1:         fne(buf1);
L2:         fne(buf2);
        }
    }
}
If the compiler can detect that 'fne()' does not cache a pointer somewhere, it
can possibly detect that 'buf1' is no longer needed after the call at 'L1'. The GC would need
to run a cycle to act on this information. It would be nice if a compiler
could inject calls to free:
void fn(void) {
    void * const buf1 = malloc(1236);
    if (buf1) {
        void * const buf2 = malloc(1236);
        if (buf2) {
L1:         fne(buf1);
            free(buf1);   /* injected: buf1 is not used after L1 */
L2:         fne(buf2);
            free(buf2);   /* injected: buf2 is not used after L2 */
        } else {
            free(buf1);   /* injected: the second allocation failed */
        }
    }
}
The compiler would then create a new file, and I could inspect the injected calls
to free to see whether it did a good job before I compile the result. Would that be
feasible?

What about a linked list when it's no longer needed? How does GC know
that, do you tell it? And it knows where all the pointers in each node
are. I can think of a dozen ways to confuse a GC so it all sounds like
magic to me, if it works.
Some GC implementations will actually scan the stacks of all application
threads for pointers into their managed heaps, and attempt to follow all the
links until they reach NULL. If there are no global pointers to an object and no local
pointers in any thread's stack, well, it means that it has been rendered
into a persistent quiescent state.

Jun 27 '08 #50
