Bytes | Developer Community
Are C++ features a subset of Java features?

Want to do OOP. Does C++ have all the abilities of Java, or is it some
subset?

Thanks...

Jan 19 '07
IR
andrewmcdonagh wrote:
[snip]
Hmmm... this is what I thought....

Java Finalizers are not destructors. They are not even remotely
equivalent and so we are comparing apples and pears.
For the record, I was answering to:

Cesar Rabak wrote:
Ian Collins escreveu:
>C++ doesn't require GC, Java does. So how can it be described as
a beneficial feature?
To answer that you need to ask yourself whether this feature allows
easier, safer, less error-prone programming.
I showed that both approaches (RAII vs GC) are as easy/safe in the
general case, that is, just fire and forget.

This becomes false as soon as you need determinism, in which case GC
becomes a nuisance: the responsibility of freeing a resource in time
is then transferred to the user of the resource.

IMO this is *not* easier or less error prone, and it can even affect
safety (eg. does automatically rolling back a DB transaction on
exception ring a bell?).

One could argue that to handle this latter example, he could use a
try/finally block. Again, this is a shift of responsibility onto the
user of the transaction object.
So a language that *requires* you to use GC renders itself less safe
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.

And I have yet to find any advantage in favor of GC, as long as
one doesn't play around with naked pointers...
Cheers,
--
IR
Jan 22 '07 #101
IR wrote:
I showed that both approaches (RAII vs GC) are as easy/safe in the
general case, that is, just fire and forget.
Almost any program using graphs or trees with reuse is a counterexample.
For example, an abstract syntax tree:

http://www.codecodex.com/wiki/index....tle=Derivative

Look at the final OCaml implementation. The first two functions simplify
symbolic expressions as they are built:

let ( +: ) f g = match f, g with
  | `Int n, `Int m -> `Int (n + m)
  | `Int 0, f | f, `Int 0 -> f
  | f, g -> `Add(f, g)

let ( *: ) f g = match f, g with
  | `Int n, `Int m -> `Int (n * m)
  | `Int 0, _ | _, `Int 0 -> `Int 0
  | `Int 1, f | f, `Int 1 -> f
  | f, g -> `Mul(f, g)

The next function computes the symbolic derivative:

let rec d e x = match e with
  | `Int n -> `Int 0
  | `Add(f, g) -> d f x +: d g x
  | `Mul(f, g) -> f *: d g x +: g *: d f x
  | `Var v -> `Int (if v = x then 1 else 0)

Skip to the end (the rest is a pretty printer) and it constructs and
differentiates a symbolic expression:

let x = `Var "x" and a = `Var "a" and b = `Var "b" and c = `Var "c"

let e = a *: x *: x +: b *: x +: c

d e "x"

Note that the subexpression x = `Var "x" is a tree that is referred to
three times by reference.

My question is simply: how can RAII and smart pointers be used to handle
deallocation of a simple graph like this as simply and efficiently as this
GC-based program does?

The simplest solution that I know of in C++ (short of battling with Boehm)
is to disallow sharing and explicitly copy all subtrees so that the
root "owns" its entire tree. This is obviously very wasteful and slow.
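For the acyclic case at least, sharing by itself is not the obstacle in C++. A minimal reference-counted sketch (hypothetical node type, not from the thread) shares a subtree between two parents exactly as the OCaml code does; the cost is reference-count traffic, and it still fails on cycles:

```cpp
#include <cassert>
#include <memory>

// Hypothetical shared expression node: an Int leaf, a Var leaf, or an
// Add over two reference-counted children.
struct Expr {
    enum Kind { Int, Var, Add } kind;
    int n;                          // payload for Int
    std::shared_ptr<Expr> l, r;     // children for Add
    explicit Expr(int v) : kind(Int), n(v) {}
    explicit Expr(Kind k) : kind(k), n(0) {}
    Expr(std::shared_ptr<Expr> a, std::shared_ptr<Expr> b)
        : kind(Add), n(0), l(std::move(a)), r(std::move(b)) {}
};

// The subtree x is referred to by two parents without copying; it is
// destroyed exactly once, when the last shared_ptr to it drops.
long shared_subtree_count() {
    auto x  = std::make_shared<Expr>(Expr::Var);
    auto e1 = std::make_shared<Expr>(x, std::make_shared<Expr>(1));
    auto e2 = std::make_shared<Expr>(x, std::make_shared<Expr>(2));
    return x.use_count();           // x itself + e1->l + e2->l
}
```

This handles DAG-style sharing; a reference cycle, like the z = 3 + 2z example below, would keep itself alive, which is where weak_ptr or a tracing collector comes in.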
This becomes false as soon as you need determinism, in which case GC
becomes a nuisance:
You simply resort to RAII when you must. GC is then irrelevant, not
a "nuisance".
the responsibility of freeing a resource in time is then transferred to the
user of the resource.
GCs are designed specifically for memory management.
IMO this is *not* easier or less error prone, and it can even affect
safety (eg. does automatically rolling back a DB transaction on
exception ring a bell?).
Can you elaborate on this?
One could argue that to handle this latter example, he could use a
try/finally block. Again, this is a shift of responsibility onto the
user of the transaction object.
The try..finally block would be in the library code, just as the destructor
is in the library code in C++.
So a language that *requires* you to use GC renders itself less safe
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.
Why can the library not handle them?
And I have yet to find any advantage in favor of GC, as long as
one doesn't play around with naked pointers...
Easier to write graph algorithms.

Can have closures.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #102
* Jon Harrop:
>
My question is simply: how can RAII and smart pointers be used to handle
deallocation of a simple graph like this as simply and efficiently as this
GC-based program does?
E.g. boost::shared_ptr.

Will be included in the next standard as std::shared_ptr.

There are many other smart pointers designed for sharing.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jan 23 '07 #103
Jon Harrop wrote:
IMO this is *not* easier or less error prone, and it can even affect
safety (eg. does automatically rolling back a DB transaction on
exception ring a bell?).

Can you elaborate on this?
One could argue that to handle this latter example, he could use a
try/finally block. Again, this is a shift of responsibility onto the
user of the transaction object.

The try..finally block would be in the library code, just as the destructor
is in the library code in C++.
So a language that *requires* you to use GC renders itself less safe
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.

Why can the library not handle them?
Consider a class in C++ that uses RAII to rollback a database
transaction in the destructor. You would use it something like this:

void f()
{
    Transaction t ;

    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;

    t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:

class Whatever
{
    public static void f()
    {
        Transaction t = null ;
        try
        {
            t = new Transaction() ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;

            t.commit() ;
        }
        finally
        {
            if (t != null)
                t.rollback() ;
        }
    }
}

Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.

--
Alan Johnson

Jan 23 '07 #104
Alf P. Steinbach wrote:
* Jon Harrop:
>>
My question is simply: how can RAII and smart pointers be used to handle
deallocation of a simple graph like this as simply and efficiently as
this GC-based program does?

E.g. boost::shared_ptr.

Will be included in the next standard as std::shared_ptr.
I may be wrong, but I thought that shared_ptr<> uses reference counting and
will therefore not reclaim resources properly for graphs that contain
directed cycles.

There are many other smart pointers designed for sharing.
Do you know of one that copes with cycles?
Best

Kai-Uwe Bux
Jan 23 '07 #105

This is exactly what I thought they were getting at.

Alan Johnson wrote:
Note that the responsibility of rolling back was transferred from the
Transaction class to the client code.
In this case, yes.
One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC.
Absolutely. That is an abuse of the GC (using it for non-memory resource).
It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.
Certainly in ML you do exactly what I said before. There will be a Java
equivalent, of course, but it will be much more verbose.

Essentially, instead of writing a transaction class with a destructor in a
C++ library, the library writer must write a higher-order function that
unwinds the transaction before it returns in the event that an exception
was raised. This gives exactly the same semantics as RAII with the same
amount of code.

I'm sure someone in a Java forum could explain how to do this in Java.
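For comparison, the same higher-order shape can even be written in C++; this is a sketch with an invented Transaction stand-in (none of these names are from a real library): the library function owns the unwinding, and the caller supplies only the body, which is the same division of labour RAII gives.

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for a real transaction handle.
struct Transaction {
    bool committed = false;
    bool rolled_back = false;
    void add_query(const std::string&) {}  // would queue SQL in real code
    void commit()   { committed = true; }
    void rollback() { rolled_back = true; }
};

// Library-side higher-order function: run the body, commit on success,
// roll back and re-raise on any exception.
Transaction with_transaction(const std::function<void(Transaction&)>& body) {
    Transaction t;
    try {
        body(t);
        t.commit();
    } catch (...) {
        t.rollback();
        throw;
    }
    return t;
}
```

Client code then reads with_transaction([](Transaction& t){ t.add_query(sql); }); and can forget neither the rollback nor the commit. (Here the library commits; the thread's Java examples leave commit to the body, which works the same way.)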

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #106
Alf P. Steinbach wrote:
* Jon Harrop:
>>
My question is simply: how can RAII and smart pointers be used to handle
deallocation of a simple graph like this as simply and efficiently as
this GC-based program does?

E.g. boost::shared_ptr.

Will be included in the next standard as std::shared_ptr.
Using such a smart pointer is not as simple (you must explicitly use it) and
not as fast (reference counting is one of the slowest forms of GC).
There are many other smart pointers designed for sharing.
Most importantly, none of Boost's smart pointers can collect cyclic graphs.
So they are more verbose, slower and less powerful than GC.

The AST I gave before was not a cyclic graph, but z=3+2z is:

let rec z = `Add(`Int 3, `Mul(`Int 2, z))

A GC has no problem collecting such data structures. C++ with Boost's smart
pointers will leak memory until it dies.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #107
* Kai-Uwe Bux:
Alf P. Steinbach wrote:
>* Jon Harrop:
>>My question is simply: how can RAII and smart pointers be used to handle
deallocation of a simple graph like this as simply and efficiently as
this GC-based program does?
E.g. boost::shared_ptr.

Will be included in the next standard as std::shared_ptr.

I may be wrong, but I thought that shared_ptr<> uses reference counting and
will therefore not reclaim resources properly for graphs that contain
directed cycles.
Right, a simplistic application of shared_ptr<> has that problem. Jon
Harrop didn't say the graph had cycles but instead kept going on about
shared expression trees. If he had mentioned cyclic graphs I'd add that
for the cyclic portions weak_ptr (a companion class to shared_ptr) may
in many cases be employed to break the cycles.

>There are many other smart pointers designed for sharing.

Do you know of one that copes with cycles?
Not directly and automagically; it requires design. The general case of
completely arbitrary graphs with portions reused in other graphs, and so
on, appears to be difficult. The question is whether there's any
problem that requires that degree of unrestricted linkup of objects.
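A minimal sketch of that weak_ptr design (hypothetical node type): the back edge is non-owning, so the parent/child pair is destroyed as soon as the last external shared_ptr drops, with no cycle leak.

```cpp
#include <cassert>
#include <memory>

// Owning edge downward, non-owning edge upward: the classic way to
// break a parent<->child reference cycle with shared_ptr/weak_ptr.
struct Node {
    std::shared_ptr<Node> child;   // keeps the child alive
    std::weak_ptr<Node>   parent;  // observes the parent, owns nothing
};

std::weak_ptr<Node> build_and_drop() {
    auto p = std::make_shared<Node>();
    p->child = std::make_shared<Node>();
    p->child->parent = p;              // back edge; count unaffected
    return p->child;                   // watch the child from outside
}   // p goes out of scope here: both nodes are freed despite the "cycle"
```

After the call, the returned weak_ptr reports expired(), confirming that nothing leaked.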

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jan 23 '07 #108
* Jon Harrop:
>
Essentially, instead of writing a transaction class with a destructor in a
C++ library, the library writer must write a higher-order function that
unwinds the transaction before it returns in the event that an exception
was raised. This gives exactly the same semantics as RAII with the same
amount of code.
Applying the template pattern to exception safety / transactions is a
good idea, but (sad to say) unfamiliar to me. But hey, learned
something new! :-)

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jan 23 '07 #109
* Jon Harrop:
Alf P. Steinbach wrote:
>* Jon Harrop:
>>My question is simply: how can RAII and smart pointers be used to handle
deallocation of a simple graph like this as simply and efficiently as
this GC-based program does?
E.g. boost::shared_ptr.

Will be included in the next standard as std::shared_ptr.

Using such a smart pointer is not as simple (you must explicitly use it) and
not as fast (reference counting is one of the slowest forms of GC).
>There are many other smart pointers designed for sharing.

Most importantly, none of Boost's smart pointers can collect cyclic graphs.
So they are more verbose, slower and less powerful than GC.

The AST I gave before was not a cyclic graph, but z=3+2z is:

let rec z = `Add(`Int 3, `Mul(`Int 2, z))

A GC has no problem collecting such data structures. C++ with Boost's smart
pointers will leak memory until it dies.
As mentioned else-subthread, weak_ptr, or simply a raw pointer, can in
many cases be employed to "break" the graph. The recursive rhs
reference could here just be a weak_ptr. But there are examples that
are not so easily dealt with, and also as stated else-thread, the
question is whether such arbitrary linkups occur in practice, with no
reasonable alternative.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jan 23 '07 #110
I V
On Mon, 22 Jan 2007 18:04:12 -0800, Alan Johnson wrote:
Jon Harrop wrote:
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.

Why can the library not handle them?

Consider a class in C++ that uses RAII to rollback a database
transaction in the destructor. You would use it something like this:

void f()
{
Transaction t ;

// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;

t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:
[snip]
>
Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.
I think you could do something like this:

class SaferWhatever
{
    public static void f()
    {
        DbConnection.doTransaction(new DatabaseAction() {
            public void doAction(Transaction t) {
                t.add_query(sql);
                // Do some stuff that could throw exceptions.
                t.add_query(sql);
                // Do some stuff that could throw exceptions.
                t.add_query(sql);
                t.commit();
            }
        });
    }
}

Where the library provides something like:

abstract class DatabaseAction
{
    public abstract void doAction(Transaction t);
}

class DbConnection
{
    public static void doTransaction(DatabaseAction a)
    {
        Transaction t = null;
        try
        {
            t = new Transaction();
            a.doAction(t);
        }
        finally
        {
            if (t != null)
                t.rollback();
        }
    }
}

And the syntax might be a bit nicer in a language with first-class
anonymous functions.
Jan 23 '07 #111
I V wrote:
On Mon, 22 Jan 2007 18:04:12 -0800, Alan Johnson wrote:
>Jon Harrop wrote:
>>>and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.
Why can the library not handle them?
Consider a class in C++ that uses RAII to rollback a database
transaction in the destructor. You would use it something like this:

void f()
{
Transaction t ;

// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;

t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:

[snip]
>Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.

I think you could do something like this:

class SaferWhatever
{
public static void f()
{
DbConnection.doTransaction(new DatabaseAction() {
public doAction(Transaction t) {
t.add_query(sql);
// Do some stuff that could throw exceptions.
t.add_query(sql);
// Do some stuff that could throw exceptions.
t.add_query(sql);
t.commit();
}
});
}
}

Where the library provides something like:

class DatabaseAction
{
public abstract void doAction(Transaction t);
}

class DbConnection
{
public void doTransaction(DatabaseAction a)
{
Transaction t = null;
try
{
t = new Transaction();
a.doAction(t);
}
finally
{
if( t != null )
t.rollback();
}
}
}

And the syntax might be a bit nicer in a language with first-class
anonymous functions.
This is clever, but is there any non-masochistic way of expanding it to
an arbitrary number of resources? Consider when you have two resources
from different libraries. First, the RAII way in C++:

#include <iostream>

class Resource1
{
public:
    void doSomething()
    {
        std::cout << "Doing something with Resource1." << std::endl ;
    }

    ~Resource1()
    {
        std::cout << "Resource1 released." << std::endl ;
    }
} ;

class Resource2
{
public:
    void doSomething()
    {
        std::cout << "Doing something with Resource2." << std::endl ;
    }

    ~Resource2()
    {
        std::cout << "Resource2 released." << std::endl ;
    }
} ;

int main()
{
    Resource1 r1 ;
    Resource2 r2 ;
    r1.doSomething() ;
    r2.doSomething() ;
}
Now let's do the same thing in Java, transferring responsibility to the
client code (aka the "bad" way):

public class Resource1
{
    public void doSomething()
    {
        System.out.println("Doing something with Resource1.") ;
    }

    public void release()
    {
        System.out.println("Resource1 released.") ;
    }
}

public class Resource2
{
    public void doSomething()
    {
        System.out.println("Doing something with Resource2.") ;
    }

    public void release()
    {
        System.out.println("Resource2 released.") ;
    }
}

public class Whatever
{
    public static void main(String args[])
    {
        Resource1 r1 = null ;
        Resource2 r2 = null ;
        try
        {
            r1 = new Resource1() ;
            r2 = new Resource2() ;
            r1.doSomething() ;
            r2.doSomething() ;
        }
        finally
        {
            if (r1 != null)
                r1.release() ;
            if (r2 != null)
                r2.release() ;
        }
    }
}

And finally, let's apply the idiom you propose to make this safer:

public class Resource1Manager
{
    public static void use(Resource1Action a)
    {
        Resource1 r1 = null ;
        try
        {
            r1 = new Resource1() ;
            a.doAction(r1) ;
        }
        finally
        {
            if (r1 != null)
                r1.release() ;
        }
    }
}

public abstract class Resource1Action
{
    public abstract void doAction(Resource1 r1) ;
}

public class Resource2Manager
{
    public static void use(Resource2Action a)
    {
        Resource2 r2 = null ;
        try
        {
            r2 = new Resource2() ;
            a.doAction(r2) ;
        }
        finally
        {
            if (r2 != null)
                r2.release() ;
        }
    }
}

public abstract class Resource2Action
{
    public abstract void doAction(Resource2 r2) ;
}

public class SaferWhatever
{
    public static void main(String args[])
    {
        Resource1Manager.use(new Resource1Action()
        {
            public void doAction(final Resource1 r1)
            {
                Resource2Manager.use(new Resource2Action()
                {
                    public void doAction(Resource2 r2)
                    {
                        r1.doSomething() ;
                        r2.doSomething() ;
                    }
                }) ;
            }
        }) ;
    }
}
Even if you had a first-class anonymous functions, this is clearly going
to quickly depart from the simplicity and straightforwardness of RAII.
(And as a side issue, how do you negotiate the return type in this idiom?)

--
Alan Johnson
Jan 23 '07 #112
Safety = no dangling pointers, no buffer overruns, no segmentation faults...

only "internal null pointer expections" ;)
Jan 23 '07 #113

Jon Harrop wrote:
sean wrote:
Just so no one gets confused on the 'not cross platform' thing: Linux
was written in C/C++

C. Not C++. I'm not saying that C isn't cross platform - it ports well. I'm
saying that C++ (using all the bells and whistles) is very
platform/compiler specific in my experience - it doesn't port well.
What platforms does C++ not port well to? I'm curious as our company
has a huge codebase in C++ (16000 cpp and hpp files), and we have had
more problems porting our Java base (the size of which I don't know)
than our C++ base.

/Peter

Jan 23 '07 #114
Sascha Bohnenkamp wrote:
>Safety = no dangling pointers, no buffer overruns, no segmentation
faults...

only "internal null pointer expections" ;)
Yes, exceptions are regarded as safe. Even if they're called that! :-)

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #115
IR
I don't have much to add to the answers given to your post, as the other
people have already explained my point quite clearly.

But...

Jon Harrop wrote:
IR wrote:
[...]
>And I have yet to find any advantage in the favor of GC, as long
as one doesn't play around with naked pointers...

Easier to write graph algorithms.

Can have closures.
I don't see how closures and GC are related?

FWIW, closures exist in C++. Granted, not as part of the language, but
as libraries. So even without GC you can have closures...
Cheers,
--
IR
Jan 23 '07 #116


On Jan 23, 2:04 am, "Alan Johnson" <a...@yahoo.com> wrote:
Jon Harrop wrote:
IMO this is *not* easier or less error prone, and it can even affect
safety (eg. does automatically rolling back a DB transaction on
exception ring a bell?).
Can you elaborate on this?
One could argue that to handle this latter example, he could use a
try/finally block. Again, this is a shift of responsibility onto the
user of the transaction object.
The try..finally block would be in the library code, just as the destructor
is in the library code in C++.
So a language that *requires* you to use GC renders itself less safe
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.
Why can the library not handle them?

Consider a class in C++ that uses RAII to rollback a database
transaction in the destructor. You would use it something like this:

void f()
{
Transaction t ;

// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;

t.commit() ;

}

If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:

class Whatever
{
public static void f()
{
Transaction t = null ;
try
{
t = new Transaction() ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;

t.commit() ;
}
finally
{
if (t != null)
t.rollback() ;
}
}

}

Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.

--
Alan Johnson
If we are writing the Transaction class, then there is nothing stopping
us putting the rollback inside a try/finally block within the
add_query() method.

So we'd end up with:

void f()
{
    Transaction t = new Transaction();

    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;

    t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back.

If we can't (because it's a 3rd party class or widely used and we'd be
breaking its known behaviour), then yes, this is where a Java
programmer mind-set (aka idiom) comes into play (just like the C++ RAII
idiom).

We'd decorate the Transaction class to provide the same safety.

void f()
{
    Transaction t = new AutoRollbackTransaction(new Transaction()) ;

    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;

    t.commit() ;
}

// If at any point an exception is thrown, the transaction is rolled back.

Where AutoRollbackTransaction derives from Transaction but wraps a
try/finally block around each call to its delegate that was passed at
construction time.

This is actually a cleaner design (for Java) as it separates the
responsibilities of AutoRollback from Transaction.

Andrew

Jan 23 '07 #117


On Jan 23, 8:29 am, Alan Johnson <a...@yahoo.com> wrote:
I V wrote:
On Mon, 22 Jan 2007 18:04:12 -0800, Alan Johnson wrote:
Jon Harrop wrote:
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.
Why can the library not handle them?
Consider a class in C++ that uses RAII to rollback a database
transaction in the destructor. You would use it something like this:
void f()
{
Transaction t ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
t.commit() ;
}
If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:
[snip]
Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.
I think you could do something like this:
class SaferWhatever
{
public static void f()
{
DbConnection.doTransaction(new DatabaseAction() {
public doAction(Transaction t) {
t.add_query(sql);
// Do some stuff that could throw exceptions.
t.add_query(sql);
// Do some stuff that could throw exceptions.
t.add_query(sql);
t.commit();
}
});
}
}
Where the library provides something like:
class DatabaseAction
{
public abstract void doAction(Transaction t);
}
class DbConnection
{
public void doTransaction(DatabaseAction a)
{
Transaction t = null;
try
{
t = new Transaction();
a.doAction(t);
}
finally
{
if( t != null )
t.rollback();
}
}
}
And the syntax might be a bit nicer in a language with first-class
anonymous functions.

This is clever, but is there any non-masochistic way of expanding it to
an arbitrary number of resources? Consider when you have two resources
from different libraries. First, the RAII way in C++:

#include <iostream>

class Resource1
{
public:
void doSomething()
{
std::cout << "Doing something with Resource1." << std::endl ;
}

~Resource1()
{
std::cout << "Resource1 released." << std::endl ;
}

} ;

class Resource2
{
public:
void doSomething()
{
std::cout << "Doing something with Resource2." << std::endl ;
}

~Resource2()
{
std::cout << "Resource2 released." << std::endl ;
}

} ;

int main()
{
Resource1 r1 ;
Resource2 r2 ;
r1.doSomething() ;
r2.doSomething() ;

}

Now let's do the same thing in Java, transferring responsibility to the
client code (aka the "bad" way):
snipped....
Alan Johnson

So here's how I'd do it in java.....

public class Whatever
{
    public static void main(String args[])
    {
        Resource1 r1 = new AutoReleasingResource1(new Resource1()) ;
        Resource2 r2 = new AutoReleasingResource2(new Resource2()) ;
        r1.doSomething() ;
        r2.doSomething() ;
    }
}

class AutoReleasingResource1 extends Resource1
{
    private Resource1 delegate ;

    public AutoReleasingResource1(Resource1 delegate) {
        this.delegate = delegate;
    }

    public void doSomething()
    {
        try {
            delegate.doSomething();
        } finally {
            delegate.release();
        }
    }
}

class AutoReleasingResource2 extends Resource2
{
    private Resource2 delegate ;

    public AutoReleasingResource2(Resource2 delegate) {
        this.delegate = delegate;
    }

    public void doSomething()
    {
        try {
            delegate.doSomething();
        } finally {
            delegate.release();
        }
    }
}
Do note, I've explicitly chosen to pass the delegate object into the
decorator to show its using a delegate(not to mention that it makes
unit testing so much easier), but there is nothing stopping us from
creating this object within each decorator itself instead of passing it
in.

Andrew

Jan 23 '07 #118
Alf P. Steinbach wrote:
* Jon Harrop:
>A GC has no problem collecting such data structures. C++ with Boost's
smart pointers will leak memory until it dies.

As mentioned else-subthread, weak_ptr, or simply a raw pointer, can in
many cases be employed to "break" the graph.
Then you are reimplementing the garbage collector.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #119
Alf P. Steinbach wrote:
* Kai-Uwe Bux:
>Alf P. Steinbach wrote:
>>There are many other smart pointers designed for sharing.

Do you know of one that copes with cycles?

Not directly and automagically, it requires design. The general case of
completely arbitrary graphs with portions reused in other graphs, and so
on, appears to be difficult. The question is whether there's any
problem that requires that degree of unrestricted linkup of object.
Almost any problem domain that uses graph theory. For me, this includes
compilers, interpreters, graphics, numerical, symbolic, parallel and many
parts of scientific computing.

I used C++ for many years, and one of the most important improvements
made by modern high-level languages is the ease with which they handle
these kinds of problems.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #120
IR wrote:
>Can have closures.

I don't see how closures and GC are related?
1. A closure can capture values from its environment.

2. A closure can be returned from a function.

=> a closure can extend the lifetime of a value arbitrarily, so you need GC.
FWIW, closures exist in C++. Granted, not as part of the language, but
as libraries. So even without GC you can have closures...
Closures in C++ are so limited that most people wouldn't even call them
closures. Try translating some of the examples into languages with native
support for first class lexical closures and you'll see what I mean. With
an expressive type system, errors are much easier to understand.
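For what it's worth, a C++11-style closure (which post-dates this thread) can capture state by value and outlive its scope without a GC, though as a library-level std::function rather than a first-class language construct. A sketch with invented names:

```cpp
#include <cassert>
#include <functional>
#include <memory>

// make_counter returns a closure whose captured state (the count) is
// owned by the closure itself via shared_ptr, so the state survives the
// enclosing call frame -- the lifetime extension described above, done
// with reference counting instead of a tracing GC.
std::function<int()> make_counter() {
    auto n = std::make_shared<int>(0);
    return [n] { return ++*n; };    // capture by value keeps n alive
}
```

Each counter carries its own state: calling one returned closure repeatedly increments its count, while a freshly made counter starts from zero again.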

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #121
* Jon Harrop:
Alf P. Steinbach wrote:
>* Jon Harrop:
>>A GC has no problem collecting such data structures. C++ with Boost's
smart pointers will leak memory until it dies.
As mentioned else-subthread, weak_ptr, or simply a raw pointer, can in
many cases be employed to "break" the graph.

Then you are reimplementing the garbage collector.
Not really. E.g. your example concerned a self-referential expression.
When the self-referential nature becomes a problem, e.g. not easily
modelled via weak or raw pointers, then that indicates that the
structure used to represent the problem domain is not well chosen, i.e.
will likely lead to problems also in other processing (not just garbage
collection). It's like an infinitely recursive function. Some
languages (e.g. data flow languages) can handle infinite recursion
easily, but that doesn't mean it's a good idea: it's usually an
indication of some thinko, a much less than perfect implementation.

But still you have a point.

C++ RAII is a trade-off, involving some (design) work being delegated to
the programmer. I think that's generally a Good Thing(TM), because when
the language is capable of dealing with any messy structure, messy
structures are likely to abound. However, I grant that in some problem
domains such structures /may/ be practically necessary, and the question
is still whether some example can be found where that is so.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jan 23 '07 #122
Alan Johnson wrote:
If at any point an exception is thrown, the transaction is rolled back
when the destructor is called.
Can this be written in a purely functional style? Perhaps that is easier...

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #123
Alf P. Steinbach wrote:
* Jon Harrop:
>Then you are reimplementing the garbage collector.

Not really. E.g. your example concerned a self-referential expression.
When the self-referential nature becomes a problem, e.g. not easily
modelled via weak or raw pointers, then that indicates that the
structure used to represent the problem domain is not well chosen, i.e.
will likely lead to problems also in other processing (not just garbage
collection).
Graph theory is a very elegant branch of pure mathematics. Garbage collected
languages can represent graphs and graph-theoretic algorithms with
simplicity, clarity and robustness.

The fact that C++ cannot do this in no way diminishes graph theory as a
useful tool. Graph theory remains of fundamental importance in computer
science.
It's like an infinitely recursive function.
Not "infinitely" recursive. A recursive function is an ideal example of
something that would be most elegantly represented by a cyclic graph. I did
exactly this in my example interpreter here:

http://www.ffconsultancy.com/free/oc...terpreter.html

The pattern match in the "eval" function creates a cyclic graph in the host
language to represent a recursive function in the target language:

| ELetRec(var, arg, body, rest) ->
let rec vars = (var, VClosure(arg, vars, body)) :: vars in
eval vars rest
Some
languages (e.g. data flow languages) can handle infinite recursion
easily,
Almost all languages (including C++) can handle recursion.
but that doesn't mean it's a good idea: it's usually an
indication of some thinko, a much less than perfect implementation.
If you believe it is imperfect then how do you think it can be improved?
C++ RAII is a trade-off, involving some (design) work being delegated to
the programmer. I think that's generally a Good Thing(TM), because when
the language is capable of dealing with any messy structure, messy
structures are likely to abound.
You are implying that C++ can deal with structures that other languages
cannot. Can you elaborate?

The examples I've given don't look messy to me.
However, I grant that in some problem
domains such structures /may/ be practically necessary, and the question
is still whether some example can be found where that is so.
Here are some examples from various disciplines:

You already mentioned the example of representing recursive functions in
compilers and interpreters, e.g. for register allocation:

http://citeseer.ist.psu.edu/492550.html

In graphics, meshes are most elegantly represented as co-cyclic graphs of
vertex, edge and triangle connectivity. Functions that act upon meshes are
ubiquitous in graphics programming:

http://www.actapress.com/PDFViewer.aspx?paperId=29408

Scene graphs are the most common high-level representation of 3D worlds and
these can be cyclic:

http://www.techfak.uni-bielefeld.de/...and-graphs.pdf

Higher-dimensional multiresolution representations and adaptive subdivisions
can also benefit from cyclicity.

In symbolic computing, computer algebra packages have very similar structure
to compilers and interpreters and must be able to rewrite cyclic graphs
representing symbolic expressions:

http://journals.cambridge.org/produc...ltextid=455532

In biological scientific computing, any relationship network (e.g. metabolic
pathways, gene expression relationships) can be cyclic and algorithms for
storing and manipulating these must be able to handle cyclic graphs:

http://216.239.59.104/search?q=cache...&cd=8&ie=UTF-8

In physical scientific computing, graph theory is used in many subjects,
e.g. to study the topology of amorphous materials at the atomic scale, e.g.
Franzblau's shortest path ring statistics applied to the structure of
silica glass in my own PhD thesis:

http://www.ffconsultancy.com/free/thesis.html

Finally, I should probably mention that implementing a garbage collector is
another application of graph theory. Although this is a funny example to
bring up, it underlines my point that writing code to collect graphs in C++
is literally writing your own garbage collector.

As you can see, cyclic-graph theory and computer programming have many
overlaps. Consequently, it is beneficial to use a language that allows such
things to be represented succinctly and efficiently.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 23 '07 #124
* Jon Harrop:
Alf P. Steinbach wrote:
>* Jon Harrop:
>>Then you are reimplementing the garbage collector.
Not really. E.g. your example concerned a self-referential expression.
When the self-referential nature becomes a problem, e.g. not easily
modelled via weak or raw pointers, then that indicates that the
structure used to represent the problem domain is not well chosen, i.e.
will likely lead to problems also in other processing (not just garbage
collection).

Graph theory is a very elegant branch of pure mathematics. Garbage collected
languages can represent graphs and graph-theoretic algorithms with
simplicity, clarity and robustness.
Doesn't mean anything to me, sorry. Not that I disagree. It just
doesn't connect with anything.

The fact that C++ cannot do this in no way diminishes graph theory as a
useful tool.
Whatever it is you think is a fact, it hasn't been established as fact.
There are two issues: clarifying what "this" is, and establishing the
statement about "this" and C++ as fact. I'll leave you to it.

Graph theory remains of fundamental importance in computer
science.
Yep, but that's off-topic in this group, sorry.

>It's like an infinitely recursive function.

Not "infinitely" recursive.
I wrote it, so I'm the one deciding what it should be.

A recursive function is an ideal example of
something that would be most elegantly represented by a cyclic graph.
In C++, most elegantly as a recursive function.

I did
exactly this in my example interpreter here:

http://www.ffconsultancy.com/free/oc...terpreter.html

The pattern match in the "eval" function creates a cyclic graph in the host
language to represent a recursive function in the target language:

| ELetRec(var, arg, body, rest) ->
let rec vars = (var, VClosure(arg, vars, body)) :: vars in
eval vars rest
>Some
languages (e.g. data flow languages) can handle infinite recursion
easily,

Almost all languages (including C++) can handle recursion.
Yes, green plants are green.

However, most languages, including C++, can't handle infinite recursion.

>but that doesn't mean it's a good idea: it's usually an
indication of some thinko, a much less than perfect implementation.

If you believe it is imperfect then how do you think it can be improved?
When you end up with infinite recursion, think first about completely
different ways of solving the problem. If it then still seems that
recursion is needed, introduce one or more base cases. It's a simple
principle, but applied to a complex situation, may still be complex.

>C++ RAII is a trade-off, involving some (design) work being delegated to
the programmer. I think that's generally a Good Thing(TM), because when
the language is capable of dealing with any messy structure, messy
structures are likely to abound.

You are implying that C++ can deal with structures that other languages
cannot.
No.

>Can you elaborate?
Yes, see above.

The examples I've given don't look messy to me.
Beauty is in the eye of the beholder... ;-)

>However, I grant that in some problem
domains such structures /may/ be practically necessary, and the question
is still whether some example can be found where that is so.

Here are some examples from various disciplines:

You already mentioned the example of representing recursive functions in
compilers and interpreters, e.g. for register allocation:

http://citeseer.ist.psu.edu/492550.html

In graphics, meshes are most elegantly represented as co-cyclic graphs of
vertex, edge and triangle connectivity. Functions that act upon meshes are
ubiquitous in graphics programming:

http://www.actapress.com/PDFViewer.aspx?paperId=29408

Scene graphs are the most common high-level representation of 3D worlds and
these can be cyclic:

http://www.techfak.uni-bielefeld.de/...and-graphs.pdf

Higher-dimensional multiresolution representations and adaptive subdivisions
can also benefit from cyclicity.
Interestingly, all these problems that are allegedly incompatible with
lack of a language-supported garbage collector have corresponding C++
programs: the basic libraries for handling such things are typically
implemented in C and/or C++.

In symbolic computing, computer algebra packages have very similar structure
to compilers and interpreters and must be able to rewrite cyclic graphs
representing symbolic expressions:

http://journals.cambridge.org/produc...ltextid=455532
Ah, computer algebra packages, don't know anything about them, but the
term reminds me of Mathematica.

In biological scientific computing, any relationship network (e.g. metabolic
pathways, gene expression relationships) can be cyclic and algorithms for
storing and manipulating these must be able to handle cyclic graphs:

http://216.239.59.104/search?q=cache...&cd=8&ie=UTF-8

In physical scientific computing, graph theory is used in many subjects,
e.g. to study the topology of amorphous materials at the atomic scale, e.g.
Franzblau's shortest path ring statistics applied to the structure of
silica glass in my own PhD thesis:

http://www.ffconsultancy.com/free/thesis.html
Interestingly, all these problems that are allegedly incompatible with
lack of a language-supported garbage collector have corresponding C++
programs: the basic libraries for handling such things are typically
implemented in C and/or C++.

Finally, I should probably mention that implementing a garbage collector is
another application of graph theory.
Ah, yes, and garbage collectors are typically implemented in... what
language(s), do you think?

Although this is a funny example to
bring up, it underlines my point that writing code to collect graphs in C++
is literally writing your own garbage collector.
Can't be contested: at some level there is always a similarity between a
thing and another thing, if nothing else, both are things.

As you can see, cyclic-graph theory and computer programming have many
overlaps. Consequently, it is beneficial to use a language that allows such
things to be represented succinctly and efficiently.
The premise seems good. The conclusion, disconnected from the premise.
This group's FAQ does discuss the choice of programming language, I
suggest reading that FAQ item.

Cheers,

- Alf

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jan 23 '07 #125


On Jan 23, 11:11 am, "andrewmcdonagh" <andrewmcdon...@gmail.comwrote:
public class Whatever
{
    public static void main(String args[])
    {
        Resource1 r1 = new AutoReleasingResource1(new Resource1());
        Resource2 r2 = new AutoReleasingResource2(new Resource2());
        r1.doSomething();
        r2.doSomething();
    }
}

class AutoReleasingResource1 extends Resource1 {
    private Resource1 delegate;

    public AutoReleasingResource1(Resource1 delegate) {
        this.delegate = delegate;
    }

    public void doSomething()
    {
        try {
            delegate.doSomething();
        } finally {
            delegate.release();
        }
    }
}

class AutoReleasingResource2 extends Resource2 {
    private Resource2 delegate;

    public AutoReleasingResource2(Resource2 delegate) {
        this.delegate = delegate;
    }

    public void doSomething()
    {
        try {
            delegate.doSomething();
        } finally {
            delegate.release();
        }
    }
}

Do note, I've explicitly chosen to pass the delegate object into the
decorator to show it's using a delegate (not to mention that it makes
unit testing so much easier), but there is nothing stopping us from
creating this object within each decorator itself instead of passing it
in.

Andrew
This solution is not exception safe. Specifically, r2 is never
released if r1.doSomething() throws an exception. Also, what do I do
if I want to use more than one method of the resource object? That is ...

r1.doSomething() ;
r2.doSomething() ;
r1.doSomethingElse() ;
r2.doSomethingElse() ;

--
Alan Johnson

Jan 24 '07 #126


Alf P. Steinbach wrote:
>Higher-dimensional multiresolution representations and adaptive
subdivisions can also benefit from cyclicity.

Interestingly, all these problems that are allegedly incompatible with
lack of a language-supported garbage collector have corresponding C++
programs: the basic libraries for handling such things are typically
implemented in C and/or C++.
Greenspun: Each of these programs contains an ad-hoc, informally-specified,
bug-ridden, slow implementation of a garbage collector.

Why bother reimplementing the garbage collector when you could just use a
language that has one built-in, e.g. any high-level language?

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 24 '07 #128
Can you name one Java application which behaves better than its C++
counterparts?

Most Java apps I've seen just throw null-pointer exceptions or refuse
to work well on the installed VM.
For this reason I normally avoid using Java applications.
(and they are often slow)

Jon Harrop schrieb:
Sascha Bohnenkamp wrote:
>>Safety = no dangling pointers, no buffer overruns, no segmentation
faults...
only "internal null pointer expections" ;)

Yes, exceptions are regarded as safe. Even if they're called that! :-)
Jan 24 '07 #129


On Jan 23, 10:44 pm, Jon Harrop <j...@ffconsultancy.comwrote:
Alf P. Steinbach wrote:
* Jon Harrop:
Then you are reimplementing the garbage collector.
Not really. E.g. your example concerned a self-referential expression.
When the self-referential nature becomes a problem, e.g. not easily
modelled via weak or raw pointers, then that indicates that the
structure used to represent the problem domain is not well chosen, i.e.
will likely lead to problems also in other processing (not just garbage
collection).

Graph theory is a very elegant branch of pure mathematics. Garbage collected
languages can represent graphs and graph-theoretic algorithms with
simplicity, clarity and robustness.

The fact that C++ cannot do this in no way diminishes graph theory as a
useful tool. Graph theory remains of fundamental importance in computer
science.
I think you are making the basic mistake of assuming that the
only possible representation of graphs has to use pointers.

--
Mirek Fidler
U++ team leader. http://www.ultimatepp.org

Jan 24 '07 #130
Sascha Bohnenkamp wrote:
Can you name one Java application which behaves better than its C++
counterparts?
Most Java apps I've seen just throw null-pointer exceptions or refuse
to work well on the installed VM.
For this reason I normally avoid using Java applications.
(and they are often slow)
I agree. The safe languages that I use don't have null pointer exceptions,
so that isn't a problem.

As for programs written in Java that work well, Tribal Trouble might be a
good example. It's the only Java program that I've used significantly and it
is noticeably more stable than many other games (presumably written in
C++). However, Tribal Trouble doesn't exactly push the limits of my
graphics card. :-)

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 24 '07 #131
peter koch wrote:
What platforms does C++ not port well to? I'm curious as our company
has a huge codebase in C++ (16000 cpp and hpp files), and we have had
more problems porting our Java base (the size of which I don't know)
than our C++ base.
We had so many problems trying to port a relatively small (but template
heavy) program to IRIX that we gave up. We've had fewer but still
significant problems with Intel's compilers. Not always template related,
one irritating problem was a difference in the typing of -- in the presence
of const. I had used a particular pattern hundreds of times in my code and
would have had to change every occurrence by hand to get around this
discrepancy in the compilers' understanding of the C++ language.

A friend of mine put a lot of effort into a template-heavy partially
specialising image processing library in C++ only to find that Microsoft's
VCC would leak memory and die before it could compile even the simplest
program. Microsoft's support team advised him to "avoid templates", and
presumably other language features.

We have even had problems porting from GCC (v 2) to GCC (v 3). We had two
people in a team (I was one of them) working concurrently on two different
aspects of a program. I developed in the current Debian compiler of the
time, GCC 2. My colleague developed in GCC 3. I used some features (I
forget which) that worked perfectly in GCC 2 but not in 3 and he used some
features (string streams) that worked in GCC 3 but not 2. Reconciling the
problems before we could combine the code was a big waste of time.

Perhaps the most portable languages I've used lately are Mathematica and
OCaml. We commercialised a Mathematica notebook that works transparently
(with the exception of installation paths) between Linux, Mac OS and
Windows. All of the example programs from my book on OCaml compile with no
changes under Linux, Mac OS X and Windows. I'm waiting to see if F#
provides the same portability via Mono.

I've collaborated with Java GUI developers in the past. Them using Windows,
me using Linux at the time. I was very impressed with Java's portability. I
could always compile and run their code with no changes from the latest
repository.

To be fair, I have made little use of C++ over the past 3 years (having
moved on to much friendlier languages), so maybe the situation has changed.
Maybe templates don't produce pages of incomprehensible errors. Maybe C++
programs can no longer segfault. However, I doubt it and I'm not about to
risk my company by moving back. :-)

What problems have you had porting Java?

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 24 '07 #132
Tribal Trouble is nice, but needs a much more powerful computer than
better / prettier RTS games need,
like Warcraft 3, Combat Mission and many more.
It might not crash as often as most Java apps ;) But it's much slower than
non-Java apps ...

Just compare those graphics with Black&White and try to run it on a
system where Black&White was playable without framerate issues.
(Hint: Nvidia GeForce 256 ddr, 600MHz Pentium 3, 512 MB RAM)
No way to play Tribal Trouble on such a system ... without stuttering
As for programs written in Java that work well, Tribal Trouble might be a
good example. It's the only Java program that I've used significantly and it
is noticeably more stable than many other games (presumably written in
C++). However, Tribal Trouble doesn't exactly push the limits of my
graphics card. :-)
Jan 25 '07 #133
Maybe you would have fewer problems porting C++ from one system to
another if you try to stick to the standard as closely as possible?
Normally you get problems if you violate the standard (or expect too
much), even without knowing it.
We ported some large application from SGI to SUN to Windows .. no real
problems.
Today we use gcc to cross-test our windows sources.

This helps much!
Jan 25 '07 #134
Sascha Bohnenkamp wrote:
Maybe you would have fewer problems porting C++ from one system to
another if you try to stick to the standard as closely as possible?
Normally you get problems if you violate the standard (or expect too
much), even without knowing it.
We ported some large application from SGI to SUN to Windows .. no real
problems.
Today we use gcc to cross-test our windows sources.

This helps much!
Please keep the context you are replying to.

--
Ian Collins.
Jan 25 '07 #135
sorry next time I will
Please keep the context you are replying to.
Jan 25 '07 #136
Sascha Bohnenkamp wrote:
sorry next time I will
And don't top-post!
>>Please keep the context you are replying to.

--
Ian Collins.
Jan 25 '07 #137
Sascha Bohnenkamp wrote:
Maybe you would have fewer problems porting C++ from one system to
another if you try to stick to the standard as closely as possible?
Heh, standard. :-)

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 25 '07 #138


On 24 Jan, 20:10, Jon Harrop <j...@ffconsultancy.comwrote:

All of the example programs from my book on OCaml compile with no
changes under Linux, Mac OS X and Windows.
Just out of curiosity, how many OCaml compilers are there?

regards
Andy Little

Jan 25 '07 #139
kwikius wrote:
On 24 Jan, 20:10, Jon Harrop <j...@ffconsultancy.comwrote:
>All of the example programs from my book on OCaml compile with no
changes under Linux, Mac OS X and Windows.

Just out of curiosity, how many OCaml compilers are there?
Depends what counts as a different compiler. There are many open source
compilers for bytecode, native code and several related languages but they
all have some code in common. As for unrelated compilers, there's the F#
compiler fsc but that compiles a different (but related) language.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 25 '07 #140


On 25 Jan, 14:22, Jon Harrop <j...@ffconsultancy.comwrote:
kwikius wrote:
On 24 Jan, 20:10, Jon Harrop <j...@ffconsultancy.comwrote:
All of the example programs from my book on OCaml compile with no
changes under Linux, Mac OS X and Windows.
Just out of curiosity, how many OCaml compilers are there?

Depends what counts as a different compiler. There are many open source
compilers for bytecode, native code and several related languages, but they
all have some code in common.
Let's just restrict this to OCaml.

How many different software houses are producing OCaml compilers?

regards
Andy Little

Jan 25 '07 #141
kwikius wrote:
Let's just restrict this to OCaml.

How many different software houses are producing OCaml compilers?
None outside academia.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 25 '07 #142


On 25 Jan, 17:20, Jon Harrop <j...@ffconsultancy.comwrote:
kwikius wrote:
Let's just restrict this to OCaml.
How many different software houses are producing OCaml compilers?
None outside academia.

OK. How many 'academic' software houses are producing OCaml compilers?

regards
Andy Little

Jan 25 '07 #143
kwikius wrote:
On 25 Jan, 17:20, Jon Harrop <j...@ffconsultancy.comwrote:
>kwikius wrote:
Let's just restrict this to OCaml.
How many different software houses are producing OCaml compilers?
None outside academia.

OK. How many 'academic' software houses are producing OCaml compilers?
The main one is INRIA in France:

http://caml.inria.fr

There are probably a few dozen places around the world working on OCaml
compilers. There's MetaOCaml in the states. GCaml and JCaml. OCamil
for .NET. I've seen an OCaml compiler targetting the JVM. I just met a guy
in Cambridge who is writing an OCaml compiler with an extended type system
for web programming security. There's Ocsigen and CDuce. You might also
count the theorem provers like Coq.

Of course, academics share source code and there is no OCaml language
definition. So this isn't comparable to Intel and Microsoft developing C
compilers independently. This is more like asking "how many software houses
are producing Linux kernels?".

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 26 '07 #144


On 26 Jan, 05:20, Jon Harrop <j...@ffconsultancy.comwrote:
kwikius wrote:
On 25 Jan, 17:20, Jon Harrop <j...@ffconsultancy.comwrote:
kwikius wrote:
Let's just restrict this to OCaml.
How many different software houses are producing OCaml compilers?
None outside academia.
OK. How many 'academic' software houses are producing OCaml compilers?
The main one is INRIA in France:

http://caml.inria.fr
According to this, OCaml is nearly as old as C++. Why do you think it
never took off?

regards
Andy Little

Jan 26 '07 #145
kwikius wrote:
On 26 Jan, 05:20, Jon Harrop <j...@ffconsultancy.comwrote:
>The main one is INRIA in France:

http://caml.inria.fr

According to this, OCaml is nearly as old as C++.
OCaml was 1996, IIRC.
Why do you think it never took off?
Like calculus hasn't taken off compared to Sudoku?

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Jan 26 '07 #146
On Jan 26, 9:58 am, Jon Harrop <j...@ffconsultancy.comwrote:
kwikius wrote:
Why do you think it never took off?

Like calculus hasn't taken off compared to Sudoku?
That's not really a fair comparison -- Sudoku and Calculus aren't meant
to solve similar problems. For example, Sudoku can't be used to solve
physics problems.

However, OCaml and C++ are both computer languages and can be used to
solve similar problems even if their strengths lie in different areas.
More appropriate questions could include:
- Why hasn't Sudoku taken off compared to regular old crossword
puzzles?
- Why hasn't K-theory taken off comapred to calculus?
- Why hasn't Haskel taken off compared to C++?

So, I think it is fair to ask why OCaml hasn't taken off compared to
other computer languages. The answer may be 'OCaml is best at a
different set of problems, and those problems occur less frequently
than the set of problems C++ is good at' -- or it could be something
else -- but I think it's a fair question to ask.

- Kevin Hall

Jan 26 '07 #147

Jon Harrop wrote:
kwikius wrote:
On 26 Jan, 05:20, Jon Harrop <j...@ffconsultancy.comwrote:
The main one is INRIA in France:

http://caml.inria.fr
According to this, OCaml is nearly as old as C++.

OCaml was 1996, IIRC.
Why do you think it never took off?

Like calculus hasn't taken off compared to Sudoku?
You mean that unless you can understand advanced maths there is no
point in even attempting to learn it? If so, I guess that answers
the question!

regards
Andy Little

Jan 26 '07 #148


On 26 Jan., 14:00, "kwikius" <a...@servocomm.freeserve.co.ukwrote:
On 26 Jan, 05:20, Jon Harrop <j...@ffconsultancy.comwrote:
kwikius wrote:
On 25 Jan, 17:20, Jon Harrop <j...@ffconsultancy.comwrote:
>kwikius wrote:
Let's just restrict this to OCaml.
How many different software houses are producing OCaml compilers?
None outside academia.
OK. How many 'academic' software houses are producing OCaml compilers?
The main one is INRIA in France:
http://caml.inria.fr
According to this, OCaml is nearly as old as C++. Why do you think it
never took off?
There are probably lots of good programming languages that have never
taken off for reasons unrelated to their ability to solve what they
were meant to solve. One reason C++ took off was (apart from it being
a nice language to use) that it was based on C and thus widely
available (it only required Cfront to produce a working program) and
also easy to learn. But the C heritage is also responsible for the
many quirks of the language and a lot of its complexity.

/Peter

Jan 27 '07 #149

This discussion thread is closed

Replies have been disabled for this discussion.
