benchmarks? java vs .net

The shootout site has benchmarks comparing different languages. It
includes C# Mono vs Java but not C# .NET vs Java. So I went through
all the benchmarks on the site ...

http://kingrazi.blogspot.com/2008/05...enchmarks.html

Just to keep the post on topic for my friends at comp.lang.c++, how do
I play default windows sounds with C++?

Jun 27 '08
On Tue, 03 Jun 2008 20:05:33 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>Java is 2x slower than .NET on my AMD64 for this benchmark. What CPU are you
using?
AMD 3-core Phenom

Are you sure you used this version?

http://shootout.alioth.debian.org/gp...lang=java&id=4

$ time partialsums 2500000 (.NET)
3.000000000 (2/3)^k
3160.817621887 k^-0.5
0.999999600 1/k(k+1)
30.314541510 Flint Hills
42.995233998 Cookson Hills
15.309017155 Harmonic
1.644933667 Riemann Zeta
0.693146981 Alternating Harmonic
0.785398063 Gregory

real 0m1.530s
user 0m0.000s
sys 0m0.031s

$ time java -server partialsums 2500000
3.000000000 (2/3)^k
3160.817621887 k^-0.5
0.999999600 1/k(k+1)
30.314541510 Flint Hills
42.995233998 Cookson Hills
15.309017155 Harmonic
1.644933667 Riemann Zeta
0.693146981 Alternating Harmonic
0.785398063 Gregory

real 0m1.697s
user 0m0.000s
sys 0m0.031s

Jun 27 '08 #51
Razii <kl*****@mail.com> wrote:
On Tue, 3 Jun 2008 19:50:21 +0100, Jon Skeet [C# MVP]
<sk***@pobox.com> wrote:
Um, the C# version (as posted on the site you've linked to below) uses
unbuffered IO while the Java version doesn't. They're just not doing
the same thing.

What benchmark are you talking about?
The mandelbrot one. Look back in the thread from where I first talked
about IO, and you'll see it's the last one referenced.
Changing the line to

using (StreamReader r = new StreamReader(new
BufferedStream(Console.OpenStandardInput())))

didn't make a difference in sum-file benchmark.
The benchmark I wasn't referring to? Oh.
By the way, StreamTokenizer is not buffered. It reads a byte at a time
from the stream. It may be that System.in is buffered by default in
Java.
Which benchmark are you talking about now? sum-file uses BufferedReader
but not StreamTokenizer as far as I can see.

It would be interesting to know what the sum-file benchmark is trying
to measure - there are six things off the top of my head:

1) File IO
2) Conversion from binary to text
3) Splitting a stream of textual data into lines
4) Parsing text into integers
5) Integer addition
6) Writing the result to the console

The benchmark gives no information as to which of those is the
bottleneck for any particular platform.
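
For reference, a minimal C# sketch of a sum-file along those lines --
assuming the task is "sum one integer per line from stdin"; the buffer
size and structure here are illustrative, not the shootout entry:

using System;
using System.IO;

class SumFile
{
    static void Main()
    {
        // (1) buffered IO over stdin; (2) byte-to-text decoding via StreamReader
        using (var r = new StreamReader(
                   new BufferedStream(Console.OpenStandardInput(), 1 << 16)))
        {
            long total = 0;
            string line;
            while ((line = r.ReadLine()) != null) // (3) line splitting
                total += int.Parse(line);         // (4) parsing, (5) addition
            Console.WriteLine(total);             // (6) console output
        }
    }
}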

I note, by the way, that the Java version of sum-col assumes that using
the platform default encoding is good enough. The C# version always
uses UTF-8. Depending on what the default encoding is, that may or may
not be significant - but it does show that the programs are not doing
the same thing.
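
For comparison, a sketch of pinning the encoding explicitly in C#
(Encoding.Default here stands in for "platform default"; the file name is
illustrative):

using System;
using System.IO;
using System.Text;

class EncodingPin
{
    static void Main()
    {
        // The C# entry read UTF-8; matching Java's behaviour would mean
        // reading with the platform default encoding instead.
        using (var utf8 = new StreamReader("input.txt", Encoding.UTF8))
        using (var platform = new StreamReader("input.txt", Encoding.Default))
        {
            Console.WriteLine(utf8.ReadLine());
            Console.WriteLine(platform.ReadLine());
        }
    }
}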
How did you measure your .NET results, by the way? By starting the
process thousands of times, as shown on the other web site, or by
running the same code within the process many times?

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #52
Razii <kl*****@mail.com> wrote:
$ time partialsums 2500000 (.NET)
3.000000000 (2/3)^k
3160.817621887 k^-0.5
0.999999600 1/k(k+1)
30.314541510 Flint Hills
42.995233998 Cookson Hills
15.309017155 Harmonic
1.644933667 Riemann Zeta
0.693146981 Alternating Harmonic
0.785398063 Gregory

real 0m1.530s
user 0m0.000s
sys 0m0.031s
That looks very much like Unix output rather than Windows. Are you
running under cygwin or something similar?

It would really help if you gave a full explanation of how *you* (as
opposed to the shoot-out site) are running your tests.

(I'd also argue that benchmarks taking only a second and a half to
complete aren't likely to have a good startup vs running program
balance.)

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #53
Jon Skeet [C# MVP] wrote:
Mark Thornton <mt*******@optrak.co.uk> wrote:
>>I haven't used it, but I've heard about Doug Lea's Fork/Join
framework:
http://gee.cs.oswego.edu/dl/papers/fj.pdf
I have used it and find it works quite well even without closures.

Cool - that's good to hear. I'd be very surprised if closures didn't
make it even simpler to use (possibly with a bit of refactoring towards
single-method interfaces if necessary).

Given the rest of Doug Lea's work, I'd expect it to be good :)
>I haven't used .NET so can't compare them.

Here's a little example from a Mandelbrot benchmark I was writing a few
weeks ago (oh the irony).

Simple initial code:

public override void Generate()
{
int index = 0;
for (int row = 0; row < Height; row++)
{
for (int col = 0; col < Width; col++)
{
Data[index++] = ComputeMandelbrotIndex(row, col);
}
}
}

Now using Parallel Extensions, and the lambda expressions of C# 3:

public override void Generate()
{
Parallel.For(0, Height, row =>
{
int index = row * Width;
for (int col = 0; col < Width; col++)
{
Data[index++] = ComputeMandelbrotIndex(row, col);
}
});
}

I can't imagine many transformations being simpler than that - and the
results are great.

See http://preview.tinyurl.com/58vfav for rather more :)
>For more information and downloads of the package:
http://g.oswego.edu/dl/concurrency-interest/

Thanks, will take a look.
There are some FJ examples here:
http://artisans-serverintellect-com....efault.asp?W32

The recursive matrix multiplication is my contribution.

Mark Thornton
Jun 27 '08 #54
Mark Thornton <mt*******@optrak.co.uk> wrote:

<snip>
There are some FJ examples here:
http://artisans-serverintellect-com....efault.asp?W32

The recursive matrix multiplication is my contribution.
Right. It still looks a bit involved - which is only to be expected,
really. I haven't taken it in fully yet though - will aim to do so at a
later date. Might try porting to Parallel Extensions...

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #55
On Tue, 03 Jun 2008 20:07:00 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>So having to write your own basic trig functions in Java so that you can be
only 2x slower than other languages doesn't bother you?
It's not 2x slower with my trig function on my computer.

On the other hand, C# results are not accurate for trig functions.

using System;
using System.IO;

class Test
{
static void Main(){
Console.WriteLine(Math.Sin (1e15));
}
}
The answer I get from C# is: 0.858272132476373

with Java I get 0.8582727931702359

Checking the answer with maple, I get
0.8582727931702358355238863908484066466002034

How are you going to fix C# in this case?

Does it bother you that C# gave wrong answer?

Jun 27 '08 #56
On Tue, 3 Jun 2008 20:11:45 +0100, Jon Skeet [C# MVP]
<sk***@pobox.com> wrote:
>Now, you're using *those results* to form conclusions, right? If not,
there was little point in posting them. However, those results are of
Mono, not .NET.
Is this guy Jon Skeet really this stupid?

Let me know so I can add him to ignore list.


Jun 27 '08 #57
On Tue, 3 Jun 2008 20:27:19 +0100, Jon Skeet [C# MVP]
<sk***@pobox.com> wrote:
>That looks very much like Unix output rather than Windows. Are you
running under cygwin or something similar?
Yes, I am running cygwin.
Jun 27 '08 #58
On Tue, 03 Jun 2008 20:03:49 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>You said ".NET is twice slower in four benchmarks: binarytrees, mandelbrot,
regexdna, sumcol.". I have already corrected two of your results (mandelbrot
and regexdna) and provided two more counter examples where .NET is >2x
faster than Java and explained why binarytrees is flawed.
Yes, you did correct two of them. I added recursive to the list, so
there are 3 more to go.

With FastMath, C# is not much faster in partialsums on my computer.
You haven't posted the other prime benchmark yet so I can't comment on
that.

Jun 27 '08 #59
On Tue, 3 Jun 2008 20:23:08 +0100, Jon Skeet [C# MVP]
<sk***@pobox.com> wrote:
>The mandelbrot one. Look back in the thread from where I first talked
about IO, and you'll see it's the last one referenced.
Yes, that was fixed by Harpo a while ago.
>Which benchmark are you talking about now? sum-file uses BufferedReader
but not StreamTokenizer as far as I can see.
I suspect System.in is already buffered by default.
>1) File IO
2) Conversion from binary to text
3) Splitting a stream of textual data into lines
4) Parsing text into integers
5) Integer addition
6) Writing the result to the console
(1), (2), (4), (5), (6) ?
Jun 27 '08 #60
Razii <kl*****@mail.com> wrote:
That looks very much like Unix output rather than Windows. Are you
running under cygwin or something similar?

Yes, I am running cygwin.
Has it occurred to you that that may have an effect on the results?

The benchmarks would be *much* better in my view if each benchmark
started up a single process and ran with it for a significant amount of
time. After all, that's far more realistic in terms of how almost all
software is actually run in the real world.
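
A minimal sketch of that style of measurement, assuming the benchmark body
can be wrapped in a delegate (the names and iteration count are
illustrative):

using System;
using System.Diagnostics;

class Harness
{
    static void Time(string label, int iterations, Action work)
    {
        work(); // one warm-up call so JIT compilation stays out of the numbers
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            work();
        sw.Stop();
        Console.WriteLine("{0}: {1:F1} ms/iteration",
                          label, sw.ElapsedMilliseconds / (double)iterations);
    }

    static void Main()
    {
        Time("partialsums", 10, () => { /* run the benchmark body here */ });
    }
}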

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #61
Razii <kl*****@mail.com> wrote:
The mandelbrot one. Look back in the thread from where I first talked
about IO, and you'll see it's the last one referenced.

Yes, that was fixed by Harpo a while ago.
And yet you seem to be completely ignoring my point: a shootout which
has had such a trivial but significant flaw for a significant time is
suspect in its methodology. (Heck, the measurement style is dodgy to
start with. Including the startup cost in the test is pretty crazy.)
Which benchmark are you talking about now? sum-file uses BufferedReader
but not StreamTokenizer as far as I can see.

I suspect System.in is already buffered by default.
In what way does your sentence relate to my two?
1) File IO
2) Conversion from binary to text
3) Splitting a stream of textual data into lines
4) Parsing text into integers
5) Integer addition
6) Writing the result to the console

(1), (2), (4), (5), (6) ?
What is your list of numbers meant to signify?

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #62
Starting new thread, since that thread got too large (same applies to
C++).

On Tue, 03 Jun 2008 20:07:00 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>So having to write your own basic trig functions in Java so that you can be
only 2x slower than other languages doesn't bother you?
[same Math.Sin(1e15) accuracy example as in #56 above]
Jun 27 '08 #63
Jon Skeet [C# MVP] wrote:
Mark Thornton <mt*******@optrak.co.uk> wrote:

<snip>
>There are some FJ examples here:
http://artisans-serverintellect-com....efault.asp?W32

The recursive matrix multiplication is my contribution.

Right. It still looks a bit involved - which is only to be expected,
really. I haven't taken it in fully yet though - will aim to do so at a
later date. Might try porting to Parallel Extensions...
Frameworks like Fork/Join can only go so far in easing tasks like matrix
multiplication. Given that my Java code won't be making best use of
SSE, I was quite pleased with the performance of the result.

Mark Thornton
Jun 27 '08 #64
Razii wrote:
Is this guy Jon Skeet really this stupid?

Let me know so I can add him to ignore list.
Razii, if you hope to ever have any positive effect in your
communication - e.g. convince anyone that you have a clue - you'll have
to work much harder not to look like a naive 13 year old with
undeveloped social skills.

Plonk etc.

-- Barry

--
http://barrkel.blogspot.com/
Jun 27 '08 #65
Jon Skeet wrote:
Now using Parallel Extensions, and the lambda expressions of C# 3:

public override void Generate()
{
Parallel.For(0, Height, row =>
{
int index = row * Width;
for (int col = 0; col < Width; col++)
{
Data[index++] = ComputeMandelbrotIndex(row, col);
}
});
}

I can't imagine many transformations being simpler than that - and the
results are great.
How have you found Parallel.For to perform?

I've found that my own naive (i.e. quick hack with little thought)
implementations of Parallel.For that dispatch over a fixed number of
threads (one thread per core) outperform the ones in the TPL by quite a
bit - TPL seemed to have had around ~20% overhead in the particular
microbenchmark I was trying to scale over my quad core.

I must dig it out and analyse the discrepancy, and blog about it, when I
find time.
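
For concreteness, a sketch of the kind of naive fixed-thread dispatch being
described -- a block-partitioned quick hack under stated assumptions, not
Barry's actual code:

using System;
using System.Threading;

static class NaiveParallel
{
    // Splits [from, to) into one contiguous block per core and joins.
    public static void For(int from, int to, Action<int> body)
    {
        int workers = Environment.ProcessorCount;
        int chunk = (to - from + workers - 1) / workers;
        var threads = new Thread[workers];
        for (int w = 0; w < workers; w++)
        {
            int start = from + w * chunk;
            int end = Math.Min(start + chunk, to);
            threads[w] = new Thread(() =>
            {
                for (int i = start; i < end; i++)
                    body(i);
            });
            threads[w].Start();
        }
        foreach (var t in threads)
            t.Join();
    }
}

Used as NaiveParallel.For(0, Height, row => ...). Static partitioning like
this has no work stealing, which is exactly the load-balancing caveat raised
a few posts down.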

-- Barry

--
http://barrkel.blogspot.com/
Jun 27 '08 #66
On Tue, 03 Jun 2008 22:27:12 +0100, Barry Kelly
<ba***********@gmail.com> wrote:
>Plonk etc.
You must be a newbie, moron. No one ever plonks me. You can try
though, as you did.

As for Jon Skeet, what kind of reply do you expect? I explained to him
that I don't have Mono and that the benchmarks I am using are from the
shootout site (it's the site that uses Mono, not me). The link to the
blog I posted also makes that clear, yet the guy continued with the same
line that I am comparing Mono. Is he stupid or does he just not take the
time to read carefully? Take your pick.
Jun 27 '08 #67
Barry Kelly <ba***********@gmail.com> wrote:
I can't imagine many transformations being simpler than that - and the
results are great.

How have you found Parallel.For to perform?
Very well - see my last few blog posts.
I've found that my own naive (i.e. quick hack with little thought)
implementations of Parallel.For that dispatch over a fixed number of
threads (one thread per core) outperform the ones in the TPL by quite a
bit - TPL seemed to have had around ~20% overhead in the particular
microbenchmark I was trying to scale over my quad core.

I must dig it out and analyse the discrepancy, and blog about it, when I
find time.
I wouldn't be surprised if some individual test cases did badly in
microbenchmarks. Without even looking at your code, my guess is that
you may find your code doesn't work as well when there's other load on
the processor (e.g. effectively taking up one core with a different
process) whereas work stealing should (I believe) cope with that
reasonably well.

But as I point out on the most recent blog entry, accurately
benchmarking this kind of thing and being able to interpret the results
sensibly is basically beyond my current understanding (and time to
spend on it).

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #68
On Tue, 3 Jun 2008 21:04:47 +0100, Jon Skeet [C# MVP]
<sk***@pobox.com> wrote:
>Has it occurred to you that that may have an effect on the results?
No, it won't have a major effect.

Jun 27 '08 #69
On Tue, 03 Jun 2008 15:14:52 -0500, Razii <kl*****@mail.com> wrote:
>The answer I get from C# is: 0.858272132476373

with Java I get 0.8582727931702359

Checking the answer with maple, I get
0.8582727931702358355238863908484066466002034

How are you going to fix C# in this case?

Does it bother you that C# gave wrong answer?
Console.WriteLine(Math.Sin (1e7));

0.420547793190771 (C# with .NET)
0.42054779319078249129850658974095 (right answer)

Console.WriteLine(Math.Sin (1e10));

-0.48750602507627 (C# with .NET)
-0.48750602508751069152779429434811 (right answer)

Console.WriteLine(Math.Sin (1e15));

0.858272132476373 (C# with .NET)
0.8582727931702358355238863908484 (right answer)

So C# doesn't get 15-17 digit accuracy of double for sin and cos.
That's the only reason it's faster in partialsums benchmark. By using
the FastMath class, the times for both are about the same on my computer.

On the other hand, you still need to 'optimize' 4 benchmarks where
.NET is much slower: binarytrees, sum-col, recursive, revcomp.
Jun 27 '08 #70
Lew
Jon Skeet [C# MVP] wrote:
One problem with creating anything like Parallel Extensions (which I
assume is what you meant - AFAIK the TPL is just part of Parallel
Extensions, although I could be misunderstanding you completely) for
Java is that it doesn't (currently) have closures other than anonymous
inner classes which are ugly as hell.
I personally like anonymous (and named) inner classes in lieu of closures, myself.

I take more the attitude of the maintenance programmer than the initial
developer. The very verbosity of inner classes helps me understand what's
going on.
The state of closures in Java 7 is currently up in the air.
In part because a lot of Java mavens feel similarly, or they subscribe to Josh
Bloch's opinion of, "When in doubt, leave it out", or they have similar
concerns as he about what the closure idiom would do to other long-standing
Java idioms and semantics.

Some of those in favor of closures, by my general impression of other threads
that have discussed this, tend to take as axiomatic their value equations that
make closures seem desirable, and to dismiss the concerns of those opposed to
closures as somehow benighted. The trouble with that is that both sides then
flame each other for failing to understand their values.

I see the value of closures, and while they are marginally more powerful than
inner classes, their cost to the Java zeitgeist could be as high as Mr. Bloch
and others feel, and I also find in practice that they are too terse for the
maintenance programmer, compared to the inner class idiom. YMMV, of course,
since this seems to me a matter of style.

The opponents to closures in Java point out that inner classes get the job
done in practice, so that the need for closures is small, while the complexity
and difficulties of it are large.

It's harder to retrofit such a feature to a language that was deliberately
designed to exclude it than to put it in the beginning into a language that
was designed to use it. It's also unnecessary to add closures to Java.

--
Lew
Jun 27 '08 #71
Lew
Mark Thornton wrote:
Given that my Java code won't be making best use of SSE
Are you sure of that?

If true, it's one more area for Hotspot authors to approach.

--
Lew
Jun 27 '08 #72
On Jun 3, 11:15 pm, Razii <klgf...@mail.com> wrote:
<sk...@pobox.com> wrote:
Has it occurred to you that that may have an effect on the results?

No, it won't have a major effect.
And you know this because...?

Just claiming that it won't have an effect isn't exactly compelling
evidence.

Jon
Jun 27 '08 #73
On Jun 4, 1:29 am, Barry Kelly <barry.j.ke...@gmail.com> wrote:

<snip>
Of course, the whole point of my naive algorithm is that it is naive, so
I didn't do any of this. It's a 30-minute hack. I just find it's
interesting that TPL can, on occasion, take close to 30% more time than
a straightforward naive implementation, depending on work chunk size.
Out of interest, which release of Parallel Extensions did you measure
against? Have you installed the CTP from Monday yet? I'd be interested
to hear whether that is better or worse than the December one in your
case.

Also, have you posted on the PFX forum about this? I'm sure the team
would be interested in examining it.

Jon
Jun 27 '08 #74
Jon Skeet [C# MVP] wrote:
Razii <kl*****@mail.com> wrote:
>The mandelbrot one. Look back in the thread from where I first talked
about IO, and you'll see it's the last one referenced.

Yes, that was fixed by Harpo a while ago.

And yet you seem to be completely ignoring my point: a shootout which
has had such a trivial but significant flaw for a significant time is
suspect in its methodology. (Heck, the measurement style is dodgy to
start with. Including the startup cost in the test is pretty crazy.)
Yes. The problems with Mandelbrot are tiny compared to the problems with
binarytrees though. If you want to benchmark properly then you need to put
a lot more effort into creating the tasks than simply timing a one-liner
you downloaded from the 'net without making any attempt to optimize it.

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #75
On Jun 4, 2:52 am, Lew <con...@lewscanon.com.invalid> wrote:
One problem with creating anything like Parallel Extensions (which I
assume is what you meant - AFAIK the TPL is just part of Parallel
Extensions, although I could be misunderstanding you completely) for
Java is that it doesn't (currently) have closures other than anonymous
inner classes which are ugly as hell.

I personally like anonymous (and named) inner classes in lieu of closures, myself.

I take more the attitude of the maintenance programmer than the initial
developer. The very verbosity of inner classes help me understand what's
going on.
I can't see why - it's just more fluff that gets in the way of the
actual logic.

For instance, I find it much easier to understand this:

people.Where(person => person.Age > 18);

than this:

people.where(new Predicate()
{
public boolean match(Person person)
{
return person.Age > 18;
}
});

It's entirely reasonable to chain together several method calls with
lambda expressions and still end up with readable code. By the time
you've got more than a couple of anonymous inner classes in a single
statement, I think the readability ends up being lost.
The state of closures in Java 7 is currently up in the air.

In part because a lot of Java mavens feel similarly, or they subscribe to Josh
Bloch's opinion of, "When in doubt, leave it out", or they have similar
concerns as he about what the closure idiom would do to other long-standing
Java idioms and semantics.
I have concerns over some of the proposals under consideration. I
personally don't like the idea of closures being able to return beyond
their own call, if you see what I mean - I think of closures as a way
of expressing logic for "normal" methods more easily, rather than a
way of introducing whole new control structures.

None of the proposals currently on the table is entirely satisfactory
to me, and I'd like to see some more with possibly more limited scope
but simpler syntax. However, I *do* find closures incredibly useful as
a tool to have available, and I really don't like all the extra cruft
that anonymous inner classes requires.
Some of those in favor of closures, by my general impression of other threads
that have discussed this, tend to take as axiomatic their value equations that
make closures seem desirable, and to dismiss the concerns of those opposed to
closures as somehow benighted. The trouble with that is that both sides then
flame each other for failing to understand their values.
Not having been in the discussion, I can certainly see how that would
happen. There's truth in Blub's Paradox, but there's also certainly
value in keeping things simple. It's highly unlikely that everyone is
going to end up happy, I'd say.
I see the value of closures, and while they are marginally more powerful than
inner classes, their cost to the Java zeitgeist could be as high as Mr. Bloch
and others feel, and I also find in practice that they are too terse for the
maintenance programmer, compared to the inner class idiom. YMMV, of course,
since this seems to me a matter of style.
It's absolutely a matter of style - but I think it opens up a style
which ends up simplifying code significantly precisely *because* it's
terse.

It certainly takes a little bit of extra up-front learning for
developers (whether maintenance or not) but I personally think the pay-
back is massive.
The opponents to closures in Java point out that inner classes get the job
done in practice, so that the need for closures is small, while the complexity
and difficulties of it are large.
Anonymous inner classes get the job done in a way which discourages a
lot of the more effective ways of using closures, IMO. Yes, they
achieve largely the same goals (I need to consider the value of
capturing variables vs values, compared with the cost in complexity -
it *does* trip people up in C#) but it's the extra cruft that I have a
problem with.
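
For readers who haven't hit it, a minimal sketch of the variables-vs-values
trip-up: a C# lambda captures the loop variable itself, not its value at
that iteration.

using System;
using System.Collections.Generic;

class CaptureDemo
{
    static void Main()
    {
        var captured = new List<Action>();
        for (int i = 0; i < 3; i++)
            captured.Add(() => Console.Write(i + " ")); // captures the variable i itself
        foreach (var a in captured)
            a(); // prints "3 3 3": every lambda sees the final value of i

        var copied = new List<Action>();
        for (int i = 0; i < 3; i++)
        {
            int copy = i; // a fresh variable per iteration captures the value
            copied.Add(() => Console.Write(copy + " "));
        }
        foreach (var a in copied)
            a(); // prints "0 1 2"
    }
}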
It's harder to retrofit such a feature to a language that was deliberately
designed to exclude it than to put it in the beginning into a language that
was designed to use it.
Certainly. But taking C# as an example, closures weren't available *at
all* in C# 1 (not even in an anonymous inner class manner) - and yet
have been introduced in a very elegant manner.
It's also unnecessary to add closures to Java.
It was also "unnecessary" to add generics, or the enhanced for loop,
or enums though. "Unnecessary" != "not useful".

This is now wildly off the topic of Razii's benchmarks, but I'd be
very interested in further discussions, particularly across both C#
and Java. It's been a while since I've been a regular poster on
comp.lang.java.programmer, but do you think a new thread discussing
closures and cross-posted between
microsoft.public.dotnet.languages.csharp and comp.lang.java.programmer
would be useful? If the topic's been done to death on the Java groups
already, that's fine - I just wonder whether the C# crowd might have
some interesting perspectives to bring in. There's always the *risk*
that it would turn into yet another language oneupmanship contest of
course, but we could try to make it clear that that's not the point of
the thread...

Alternatively, if you'd be happy to take this up on email instead,
please drop me a mail (sk***@pobox.com)

If you're sick of the whole debate, I quite understand - and thanks
for the previous post.

Jon
Jun 27 '08 #76
Razii wrote:
Console.WriteLine(Math.Sin (1e15));

0.858272132476373 (C# with .NET)
0.8582727931702358355238863908484 (right answer)

So C# doesn't get 15-17 digit accuracy of double for sin and cos.
For double precision floating point your argument of 1e15 to sin has already
lost all precision because it is fourteen orders of magnitude outside the
primary domain of this trig function.
That's the only reason it's faster in partialsums benchmark.
That is pure speculation.
By using the FastMath class, the times for both are about the same on my computer.
Java is still 2x slower here on this and other benchmarks.

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #77
On Jun 4, 9:12 am, Jon Harrop <j...@ffconsultancy.com> wrote:

<snip>
I can't imagine many transformations being simpler than that - and the
results are great.
How have you found Parallel.For to perform?

Yes, but you have to write your code properly and all of the freely
available examples I have seen (like Jon's above) are poorly written.
Can you elaborate on that? I'm not going to try to claim the example I
gave is going to be optimal. However, I find it important that it's a
*very* simple transformation. I suspect that most of the more optimal
transformations are also more complicated - and thus likely to be
beyond a large segment of the developer population. (I'm not trying to
be patronising - it seems to me to be pretty obvious that the general
level of understanding of threading complexities is very low. I'd
consider myself to know "a bit more than most" but I still get very
concerned when things get complicated.) I like the idea of "here's a
really simple transformation which can give you pretty good
parallelisation - more powerful and complex building blocks are also
available if you need to squeeze every bit of performance out".

Of course, if you've got examples of transformations which are about
as simple, but give even better results, that would be fabulous.

Jon
Jun 27 '08 #78
On Fri, 30 May 2008 22:28:07 -0500, King Raz <kl*********@mail.com>
wrote, quoted or indirectly quoted someone who said :
>Just to keep the post on topic for my friends at comp.lang.c++, how do
I play default windows sounds with C++?
see http://mindprod.com/products.html#HONK

C++ source code included.
--

Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com
Jun 27 '08 #79
On Jun 4, 1:38 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
Doesn't that just mean you have to apply it selectively?

Not quite, no. It means you must automate the selective application so that
it can be done at run time, i.e. make it adaptive.
Sometimes that's the case - but in many business cases, I believe it
can be statically decided, making development simpler at the risk that
in some cases you may not take the optimal course.
I wouldn't start replacing every for loop with Parallel.For, or every
foreach loop with Parallel.ForEach - I'd do it based on evidence and
thought.

The problem is that part of the "evidence" is a function of the problem
being solved and, therefore, is only available at run time. So you cannot
do it statically (with changes to the source code) to good effect.
In some cases. In others it's pretty obvious, IMO. Take the Mandelbrot
case: on a single core, I don't believe Parallel.For will hurt much (a
case I should test some time...) but obviously it won't help. On
multiple cores, it will improve things significantly vs a non-parallel
implementation. There may be other solutions which will improve the
performance more, but only by introducing more complexity. The balance
between complexity and improvement depends on many things, of course -
including the comfort zone of the developers involved.
I suspect you'd get a lot more subscriptions if you'd give away a few
good sample articles - as well as that being a Good Thing in general.

I suspect we would get fewer subscriptions if we made even more content
freely available.
When you say "even more" content, it's not like the web site is
currently bursting at the seams with free samples. With so much free
content (much of it pretty good) available online these days, I'd want
more evidence than a single F# tutorial before spending £39 on a
subscription. Obviously it's your call though.
It really depends on what you're doing. As ever, one size doesn't fit
all in software engineering.

There is still a lot of room for improvement though.
Sure - but I believe that *just* Parallel.For and Parallel.ForEach is
a quite staggering improvement for the relatively parallelism-scared
developer who wants to get more bang-for-the-buck with the minimum of
effort.

Jon
Jun 27 '08 #80
Jon Skeet [C# MVP] wrote:
On Jun 4, 1:29 am, Barry Kelly <barry.j.ke...@gmail.com> wrote:

<snip>
Of course, the whole point of my naive algorithm is that it is naive, so
I didn't do any of this. It's a 30-minute hack. I just find it's
interesting that TPL can, on occasion, take close to 30% more time than
a straightforward naive implementation, depending on work chunk size.

Out of interest, which release of Parallel Extensions did you measure
against? Have you installed the CTP from Monday yet?
Yes, that's what I installed last night to test with.

I do note this blog posting:

http://blogs.msdn.com/pfxteam/archiv...2/8179013.aspx
I'd be interested
to hear whether that is better or worse than the December one in your
case.
The two aren't compatible when installed side by side, unfortunately, so
I'd have to uninstall and reinstall. If I write it up I'll do that.
Also, have you posted on the PFX forum about this? I'm sure the team
would be interested in examining it.
I'm doing that now.

-- Barry

--
http://barrkel.blogspot.com/
Jun 27 '08 #81
I wrote a reply, but it never showed up here in my newsreader. It's
online, however:

http://groups.google.com/group/micro...8e9162b5bae717

(Apologies if other folks are seeing my message repeatedly.)

-- Barry

--
http://barrkel.blogspot.com/
Jun 27 '08 #82
Jon Skeet [C# MVP] wrote:
On Jun 4, 1:38 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
Doesn't that just mean you have to apply it selectively?

Not quite, no. It means you must automate the selective application so
that it can be done at run time, i.e. make it adaptive.

Sometimes that's the case - but in many business cases, I believe it
can be statically decided, making development simpler at the risk that
in some cases you may not take the optimal course.
Again, the problem isn't that you fail to take the optimal course. The
problem is that you silently introduce massive performance degradations
because you don't understand when Parallel.For is slow.

At the very least you want to document the pathological cases but you can't
even do that without more information.
I wouldn't start replacing every for loop with Parallel.For, or every
foreach loop with Parallel.ForEach - I'd do it based on evidence and
thought.

The problem is that part of the "evidence" is a function of the problem
being solved and, therefore, is only available at run time. So you cannot
do it statically (with changes to the source code) to good effect.

In some cases. In others it's pretty obvious, IMO. Take the Mandelbrot
case: on a single core, I don't believe Parallel.For will hurt much (a
case I should test some time...) but obviously it won't help. On
multiple cores, it will improve things significantly vs a non-parallel
implementation.
No. That is exactly what I was saying was wrong. Your implementation has
introduced massive performance degradations and you don't even know when
they appear. That is a serious problem, IMHO.
There may be other solutions which will improve the
performance more, but only by introducing more complexity. The balance
between complexity and improvement depends on many things, of course -
including the comfort zone of the developers involved.
Yes.
I suspect you'd get a lot more subscriptions if you'd give away a few
good sample articles - as well as that being a Good Thing in general.

I suspect we would get fewer subscriptions if we made even more content
freely available.

When you say "even more" content, it's not like the web site is
currently bursting at the seams with free samples. With so much free
content (much of it pretty good) available online these days, I'd want
more evidence than a single F# tutorial before spending £39 on a
subscription. Obviously it's your call though.
May I ask which articles you would most like for free?

http://www.ffconsultancy.com/product...roduction.html
http://www.ffconsultancy.com/dotnet/...e30/index.html
http://www.ffconsultancy.com/dotnet/...pot/index.html
http://www.ffconsultancy.com/dotnet/...cer/index.html
http://www.ffconsultancy.com/dotnet/...oku/index.html

I'll also upload the code from my forthcoming book F# for Scientists ASAP.
It really depends on what you're doing. As ever, one size doesn't fit
all in software engineering.

There is still a lot of room for improvement though.

Sure - but I believe that *just* Parallel.For and Parallel.ForEach is
a quite staggering improvement for the relatively parallelism-scared
developer who wants to get more bang-for-the-buck with the minimum of
effort.
They must be made aware of when Parallel.For kills performance or they will
surely start going in the wrong direction.

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #83
On Jun 4, 3:11 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
Not quite, no. It means you must automate the selective application so
that it can be done at run time, i.e. make it adaptive.
Sometimes that's the case - but in many business cases, I believe it
can be statically decided, making development simpler at the risk that
in some cases you may not take the optimal course.

Again, the problem isn't that you fail to take the optimal course. The
problem is that you silently introduce massive performance degradations
because you don't understand when Parallel.For is slow.
I know that it will introduce significant degradations when the body
of the loop is very quick, thus dwarfed by the time taken to invoke a
delegate and the time taken to schedule threads etc.

Are there other situations you know about where the performance is
terrible? In many cases (not all by a long way, but many) the
developer will *know* that the body of the loop is a significant
amount of work.
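
A sketch of that quick-body case, shown with the System.Threading.Tasks.Parallel
API that later shipped (the CTP namespace differed); the point is that a
delegate call plus an interlocked add dwarfs a single addition:

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class TrivialBody
{
    const int N = 10000000;

    static void Main()
    {
        long total = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            total += i;                              // trivial body: one addition
        Console.WriteLine("sequential: {0} ms ({1})", sw.ElapsedMilliseconds, total);

        total = 0;
        sw = Stopwatch.StartNew();
        // Per-iteration delegate invocation and contention swamp the real work.
        Parallel.For(0, N, i => Interlocked.Add(ref total, i));
        Console.WriteLine("parallel:   {0} ms ({1})", sw.ElapsedMilliseconds, total);
    }
}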
In some cases. In others it's pretty obvious, IMO. Take the Mandelbrot
case: on a single core, I don't believe Parallel.For will hurt much (a
case I should test some time...) but obviously it won't help. On
multiple cores, it will improve things significantly vs a non-parallel
implementation.

No. That is exactly what I was saying was wrong. Your implementation has
introduced massive performance degradations and you don't even know when
they appear. That is a serious problem, IMHO.
So do *you* know when they appear? In the Mandelbrot case, I'd expect
the ParallelFor version to be a lot slower for situations where
computing each row is a trivial amount of work compared to the
scheduling involved - i.e. when the number of columns is tiny. I'm
happy enough to ignore that case - any Mandelbrot image with so few
columns is going to be uninteresting anyway.
When you say "even more" content, it's not like the web site is
currently bursting at the seams with free samples. With so much free
content (much of it pretty good) available online these days, I'd want
more evidence than a single F# tutorial before spending £39 on a
subscription. Obviously it's your call though.

May I ask which articles you would most like for free?
The ParallelFX and SciMark articles would be nice. Are the paid
articles longer/meatier than the free ones?
I'll also upload the code from my forthcoming book F# for Scientists ASAP.
Cool.
Sure - but I believe that *just* Parallel.For and Parallel.ForEach is
a quite staggering improvement for the relatively parallelism-scared
developer who wants to get more bang-for-the-buck with the minimum of
effort.

They must be made aware of when Parallel.For kills performance or they will
surely start going in the wrong direction.
Agreed. If that really *is* a case of "don't do it when the body of
the loop is going to execute really quickly anyway" then that's easy
enough to understand. If there are more complicated situations where
using Parallel.For introduces degradations (rather than just not being
optimal) then I'd be really interested to hear about them.

Jon
Jun 27 '08 #84
On Wed, 04 Jun 2008 09:04:01 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>That's the only reason it's faster in partialsums benchmark.

That is pure speculation.
It's not. I posted the evidence. C# gets wrong result.
>By using the FastMath class, the times for both are about the same on my computer.

Java is still 2x slower here on this and other benchmarks.
It's not on my comp when using the FastMath class. It's not on the shootout
site when they are using the FastMath class. You are the only one making
the claim.

http://shootout.alioth.debian.org/gp...lsums&lang=all

C++ GNU g++ 4.05 seconds
Java 6 4.89 seconds.

You haven't posted the other "benchmark" yet so there is no way to
confirm your claim. Also, you have yet to optimize 4 benchmarks where
.NET is much slower: binarytrees, sum-col, recursive, revcomp.

Jun 27 '08 #85
On Tue, 3 Jun 2008 22:57:40 -0700 (PDT), "Jon Skeet [C# MVP]"
<sk***@pobox.com> wrote:
>And you know this because...?
The one who asserts must prove. Post proof that any of these
benchmarks gives different timings on Cygwin than in the Windows console.
Until you do that, I will assume it has no effect.

Jun 27 '08 #86
On Wed, 04 Jun 2008 08:11:54 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>Yes. The problems with Mandelbrot are tiny compared to the problems with
binarytrees though. If you want to benchmark properly then you need to put
a lot more effort into creating the tasks than simply timing a one-liner
you downloaded from the 'net without making any attempt to optimize it.
How about you optimize them? Here are 4 benchmarks where .NET is much
slower: binarytrees, sum-col, recursive, revcomp.

Jun 27 '08 #87
Razii wrote:
On Tue, 3 Jun 2008 22:57:40 -0700 (PDT), "Jon Skeet [C# MVP]"
<sk***@pobox.com> wrote:
>>And you know this because...?

The one who asserts must prove. Post proof that any of these
benchmarks gives different timings on Cygwin than in the Windows console.
Until you do that, I will assume it has no effect.
Oh, wait a minute. You're not even using .NET properly? No wonder your results
are all screwed.

.NET is now several times faster than Java on many of these benchmarks (and
I have many others where that is also the case).

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #88
Razii wrote:
On the other hand, you still need to 'optimize' 4 benchmarks where
.NET is much slower: binarytrees, sum-col, recursive, revcomp.
I have already addressed binarytrees elsewhere. I have now taken a look at
sumcol and recursive too. The recursive benchmark in F# is only 20% slower
than Java, so C# is being inefficient here and that has nothing to do
with .NET itself.

As for sumcol you, again, have a Java program that is doing something
completely different to the C# program. Specifically, the Java is using a
lexer to parse the input whereas the C# is reading lines in as strings and
parsing them, which incurs a huge amount of allocation that is completely
unnecessary.

So I wrote a simple lexer in F# that is 2.5x faster than Java (1s vs 2.5s)
on this benchmark as well. Here is my code:

let rec parse (ch : System.IO.FileStream) total accu =
    let c = ch.ReadByte()
    if c >= 48 && c <= 57 then
        parse ch total (10 * accu + c - 48)
    else
        if c = 45 then parse_neg ch (total + accu) 0 else
        if c >= 0 then parse ch (total + accu) 0 else total
and parse_neg ch total accu =
    let c = ch.ReadByte()
    if c >= 48 && c <= 57 then
        parse_neg ch total (10 * accu + c - 48)
    else
        if c = 45 then parse_neg ch (total - accu) 0 else
        if c >= 0 then parse ch (total - accu) 0 else total

do
    use ch = System.IO.File.OpenRead(@"input.txt")
    printf "%d\n" (parse ch 0 0)

As an aside, you cannot even write that directly in Java because it still
lacks tail calls.

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #89
Razii wrote:
On Wed, 04 Jun 2008 08:11:54 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>>Yes. The problems with Mandelbrot are tiny compared to the problems with
binarytrees though. If you want to benchmark properly then you need to put
a lot more effort into creating the tasks than simply timing a one-liner
you downloaded from the 'net without making any attempt to optimize it.

How about you optimize them?
I already did. You are just ignoring my improvements because you don't want
to concede that Java is now much slower on several benchmarks.
Here are 4 benchmarks where .NET is much
slower: binarytrees, sum-col, recursive, revcomp.
Java is 2.5x slower on sumcol as well, .NET is only 20% slower on recursive,
binarytrees is still ill-defined and I haven't looked at revcomp yet.

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #90
Razii wrote:
On Wed, 04 Jun 2008 09:04:01 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>>That's the only reason it's faster in partialsums benchmark.

That is pure speculation.

It's not. I posted the evidence. C# gets wrong result.
That says nothing about why it is faster.
>>By using the FastMath class, the times for both are about the same on my
computer.

Java is still 2x slower here on this and other benchmarks.

It's not on my comp when using the FastMath class. It's not on the shootout
site when they are using the FastMath class. You are the only one making
the claim.

http://shootout.alioth.debian.org/gp...lsums&lang=all
>
C++ GNU g++ 4.05 seconds
Java 6 4.89 seconds.

You haven't posted the other "benchmark" yet so there is no way to
confirm your claim.
Mersenne Twister is freely available. Just time the generation of 10^8
int32s:

C: 0.93s
F#: 1.8s
Java: 3.5s

and note how Java is uniquely slow.
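
The benchmark code itself wasn't posted; as a stand-in, here is a compact
C# MT19937 with the kind of timing loop presumably meant (my reconstruction,
not Jon's code):

using System;
using System.Diagnostics;

class MT19937
{
    const int N = 624, M = 397;
    const uint MatrixA = 0x9908b0df, UpperMask = 0x80000000, LowerMask = 0x7fffffff;
    readonly uint[] mt = new uint[N];
    int mti;

    public MT19937(uint seed)
    {
        mt[0] = seed;
        for (mti = 1; mti < N; mti++)
            mt[mti] = 1812433253u * (mt[mti - 1] ^ (mt[mti - 1] >> 30)) + (uint)mti;
    }

    public uint Next()
    {
        if (mti >= N) // regenerate the 624-word state block
        {
            for (int k = 0; k < N; k++)
            {
                uint x = (mt[k] & UpperMask) | (mt[(k + 1) % N] & LowerMask);
                mt[k] = mt[(k + M) % N] ^ (x >> 1) ^ ((x & 1) != 0 ? MatrixA : 0u);
            }
            mti = 0;
        }
        uint y = mt[mti++];
        y ^= y >> 11;                 // tempering
        y ^= (y << 7) & 0x9d2c5680;
        y ^= (y << 15) & 0xefc60000;
        y ^= y >> 18;
        return y;
    }

    static void Main()
    {
        var rng = new MT19937(5489u);
        uint sink = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100000000; i++)  // 10^8 int32s
            sink ^= rng.Next();       // xor keeps the loop from being optimized away
        Console.WriteLine("{0} ms (checksum {1:x8})", sw.ElapsedMilliseconds, sink);
    }
}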
Also, you have yet to optimize 4 benchmarks where
.NET is much slower: binarytrees, sum-col, recursive, revcomp.
binarytrees: benchmark is flawed so I can make it as fast as you like.

sumcol: .NET is 2.5x faster than Java.

recursive: .NET is only 20% slower than Java.

revcomp: I have not yet examined. Perhaps this will turn out to be the only
benchmark where Java is significantly faster (but I seriously doubt it).

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #91
On Jun 4, 5:43 pm, Razii <nikjhlgf...@mail.com> wrote:
<sk...@pobox.com> wrote:
And you know this because...?

The one who asserts must prove.
I absolutely agree. Here's what I wrote:

<quote>Has it occurred to you that that may have an effect on the
results?</quote>

I don't see an assertion there - merely speculation that it *may* have
an effect.

Now here's what you wrote:

<quote>No, it won't have a major effect.</quote>

*That's* an assertion. Therefore by your own instructions, it's up to
you to prove your assertion.

If you're running a program in a non-standard way (and running .NET
under Cygwin certainly isn't the usual environment) then it's up to
you to investigate whether or not that has any effect.

Jon
Jun 27 '08 #92
Jon Harrop wrote:
revcomp: I have not yet examined. Perhaps this will turn out to be the
only benchmark where Java is significantly faster (but I seriously doubt
it).
I just had another look at revcomp and it has exactly the same problems as
mandelbrot and sumcol. Specifically, it is reading inefficiently from stdin
when idiomatic .NET reads from a file (which is 6x faster) and then it
outputs to an unbuffered stream (which is 2x slower).
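
The buffered-output half of that fix is a one-liner; a minimal sketch
(buffer size illustrative):

using System;
using System.IO;

class BufferedOut
{
    static void Main()
    {
        // Console.Out flushes eagerly; a BufferedStream wrapper coalesces the
        // many small writes a benchmark like revcomp produces into large ones.
        using (var w = new StreamWriter(
                   new BufferedStream(Console.OpenStandardOutput(), 1 << 16)))
        {
            for (int i = 0; i < 1000000; i++)
                w.WriteLine(i); // stand-in for the reversed-complement output
        } // disposing the writer flushes the buffer
    }
}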

No doubt if you make these trivial optimizations again the .NET will kick
the crap out of Java on this benchmark as well...

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #93
Jon Skeet [C# MVP] wrote:
If you're running a program in a non-standard way (and running .NET
under Cygwin certainly isn't the usual environment) then it's up to
you to investigate whether or not that has any effect.
I have a horrible feeling he isn't running .NET at all. It seems far more
likely that he is running Mono, which explains why his results are all
outliers...

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
Jun 27 '08 #94
On Wed, 04 Jun 2008 17:53:19 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>Razii wrote:
>On Wed, 04 Jun 2008 09:04:01 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>>>That's the only reason it's faster in partialsums benchmark.

That is pure speculation.

It's not. I posted the evidence. C# gets wrong result.

That says nothing about why it is faster.
It does.

http://blogs.sun.com/jag/entry/trans...tal_meditation
For many years, the JDK on x86 platforms has used the hardware
fsin/fcos x87 instructions in the range [-pi/4, pi/4], a range which
encompasses about half of all representable floating-point values.
Therefore, in that range the performance of the JDK's transcendental
functions should be nearly the same as the performance of the
transcendental functions in C, C++, etc. that are using those same
fsin/fcos instructions. Benchmarks which focus on testing the trig
performance of large values, such as almabench, present a skewed
portrait of Java's trigonometric performance. The next question is why
don't we just use fsin/fcos over the entire floating-point range? The
simple answer is that fsin/fcos can deliver answers that are
arbitrarily wrong for the most straightforward way of measuring error
in the result.

Every finite real number, no matter how large, has a well-defined
value for sin/cos. Ideally, the floating-point result returned for
sin/cos would be the representable floating-point number closest to
the mathematically defined result for the floating-point input. A
floating-point library having this property is called correctly
rounded, which is equivalent to saying the library has an error bound
less than or equal to 1/2 an ulp (unit in the last place). For
sin/cos, writing a correctly rounding implementation that runs at a
reasonable speed is still something of a research problem so in
practice platforms often use a library with a 1 ulp error bound
instead, which means either of the floating-point numbers adjacent to
the true result can be returned. This is the implementation criteria
the Java Math library has to meet. The implementation challenge is
that sin/cos are implemented using argument reduction whereby any
input is mapped into a corresponding input in the [-pi/4, pi/4] range.
Since the period of sin/cos is pi and pi is transcendental, this
amounts to having to compute a remainder from the division by a
transcendental number, which is non-obvious. A few years after the x87
was designed, people figured out how to do this division as if by an
exact value of pi. Instead the x87 fsin/fcos use a particular
approximation to pi, which effectively means the period of the
function is changed, which can lead to large errors outside [-pi/4,
pi/4]. For example the value of sine for the floating-point number
Math.PI is around

1.2246467991473532E-16

while the computed value from fsin is

1.2246063538223773E-16

In other words, instead of getting the full 15-17 digit accuracy of
double, the returned result is only correct to about 5 decimal digits.
In terms of ulps, the error is about 1.64e11 ulps, over *ten billion*
ulps. With some effort, I'm confident I could find results with the
wrong sign, etc. There is a rationale which can justify this behavior;
however, it was much more compelling before the argument reduction
problem was solved.
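
The blog's numbers are easy to check on any runtime; a minimal probe
(expected values copied from the quote above):

using System;

class SinCheck
{
    static void Main()
    {
        // Correctly rounded:  1.2246467991473532E-16
        // Raw x87 fsin gives: 1.2246063538223773E-16 (per the quoted post)
        Console.WriteLine("{0:R}", Math.Sin(Math.PI));
    }
}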


Jun 27 '08 #95
On Wed, 04 Jun 2008 17:53:19 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>Mersenne Twister is freely available. Just time the generation of 10^8
int32s:
Post the link if it's freely available.
>Also, you have yet to optimize 4 benchmarks where
.NET is much slower: binarytrees, sum-col, recursive, revcomp.

binarytrees: benchmark is flawed so I can make it as fast as you like.
It's flawed because C# is twice as slow? Nice logic. Both programs are
doing the same thing. If it's flawed, it's flawed for both sides.
>sumcol: .NET is 2.5x faster than Java.
I don't get that result even after changing it to BufferedStream. It's
2x slower.
>recursive: .NET is only 20% slower than Java.

revcomp: I have not yet examined. Perhaps this will turn out to be the only
benchmark where Java is significantly faster (but I seriously doubt it).
you have yet to optimize 4 benchmarks where .NET is much slower:
binarytrees, sum-col, recursive, revcomp.

Jun 27 '08 #96
On Wed, 04 Jun 2008 17:46:18 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>So I wrote a simple lexer in F# that is 2.5x faster than Java (1s vs 2.5s)
on this benchmark as well. Here is my code:
StreamTokenizer is part of Java's API. The rules of the benchmark are
that you can't write custom parsers. You need to use the library.

I can make Java 3x faster by writing a custom parser.

The following version is 3x faster than StreamTokenizer

http://shootout.alioth.debian.org/gp...lang=java&id=3

import java.io.*;

public final class sumcol {

static final byte[] buf = new byte [18432];
final static InputStream in = System.in;

public static void main(String[] args) throws Exception {

System.out.println(sum());
}

private static int sum() throws Exception
{
int total = 0, num=0, j, neg = 1;
while ((j = in.read(buf)) > 0)
{
for (int i = 0; i < j; i++)
{
int c = buf[i];
if (c >= '0' && c <= '9')
num = num * 10 + c - '0';
else if (c == '-')
neg = -1;
else {
total += (num * neg);
num = 0;
neg = 1;
}
}

}
return total;
}
}


Jun 27 '08 #97
On Wed, 04 Jun 2008 12:21:39 -0500, Razii <ni*******@mail.com> wrote:
>>sumcol: .NET is 2.5x faster than Java.
Now that I read your other post, I see where you got that from: by
writing a custom parser that breaks the benchmark rule that programs must
use the standard library for reading and parsing.

[same custom-parser code as posted in #97 above]

Jun 27 '08 #98
On Wed, 4 Jun 2008 10:00:57 -0700 (PDT), "Jon Skeet [C# MVP]"
<sk***@pobox.com> wrote:
><quote>No, it won't have a major effect.</quote>

*That's* an assertion. Therefore by your own instructions, it's up to
you to prove your assertion.
There is no reason to believe that it "may have" any effect. Cygwin
offers an easy way to time things with the time command. If you can time
all the benchmarks as easily on Windows and get different results, let us
know.
Jun 27 '08 #99
On Wed, 04 Jun 2008 17:45:53 +0100, Jon Harrop <jo*@ffconsultancy.com>
wrote:
>Oh, wait a minute. Your not even using .NET properly? No wonder your results
are all screwed.
No, I am using .NET very properly -- exactly the same way as Java.

Jun 27 '08 #100
