Bytes IT Community

Release is slower than debug... What's happening?

Hi,
I have a Windows service that simply loads a CSV file and imports its data,
using SqlBulkCopy, into a newly created SQL Server 2005 table in batches of
25,000 rows.

If I build the service in debug mode and run it, I get decent performance.
If I build it in release mode, I would expect it to be at least as fast,
and maybe faster, but I actually lose about 10% performance.

How can this happen?

Here's what could be involved:
- SqlBulkCopy class, using an IDataReader implementation as a source whose
performance has been verified
- <SqlDbTypes>.Parse(...)
- Array indexing
- Boxing and unboxing of objects
- Generic collections (List, Dictionary, Hashtable) searched by key

Thanks

ThunderMusic
Jun 1 '07 #1
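[Editor's note: for context, a minimal sketch of the setup described above — bulk-copying from an IDataReader source in 25,000-row batches — might look like the following. The destination table name is a hypothetical stand-in, not taken from the poster's code.]

```csharp
using System.Data;
using System.Data.SqlClient;

static class CsvImporter
{
    // Streams rows from any IDataReader implementation into a SQL Server
    // table. WriteToServer pulls rows from the reader one at a time, so the
    // reader's Read/GetValue code sits on the hot path, which is why its
    // performance is listed among the suspects above.
    public static void BulkLoad(IDataReader source, string connectionString)
    {
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.ImportedRows"; // hypothetical
            bulkCopy.BatchSize = 25000;   // the batch size mentioned above
            bulkCopy.BulkCopyTimeout = 0; // no timeout for very large loads
            bulkCopy.WriteToServer(source);
        }
    }
}
```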
11 Replies


Oops,
- SqlBulkCopy Class a IDataReader implementation as a source which
performance has been verified.

should read

- SqlBulkCopy class using an IDataReader implementation as a source whose
performance has been verified.
Jun 1 '07 #2

"ThunderMusic" <No*************************@NoSpAm.com> wrote in message
news:OR**************@TK2MSFTNGP05.phx.gbl...
> How can this happen?
Are you measuring first run performance, or steady state?

The first run requires that the JIT compiler be run, and in release mode the
JIT has to look for optimizations, hence taking longer. After the code is
compiled, however, the incremental cost of rerunning it is usually greatly
reduced by the optimizations.

Jun 1 '07 #3

Even after many runs, it's still slower.

"Ben Voigt [C++ MVP]" <rb*@nospam.nospam> wrote in message
news:u3**************@TK2MSFTNGP06.phx.gbl...
> Are you measuring first run performance, or steady state?
Jun 1 '07 #4

ThunderMusic,
This simply "does not compute". Are you actually installing this Windows
service after building it in the debug configuration, and comparing its
performance against a separate installation of the service built in the
release configuration?

What I'd do (unless you want to get into some heavy-duty profiling) is use
the Stopwatch class to take some start / finish timings around critical
operations that occur when your service runs, and write the results as
appended lines to a text file.

Examining this, if you have instrumented it properly, should give more
insight.
Peter

--
Site: http://www.eggheadcafe.com
UnBlog: http://petesbloggerama.blogspot.com
Short urls & more: http://ittyurl.net
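[Editor's note: a minimal sketch of the instrumentation suggested above — Stopwatch timings appended to a text file — with an illustrative log path and a sleep standing in for the real work:]

```csharp
using System;
using System.Diagnostics;
using System.IO;

class TimingDemo
{
    static void Main()
    {
        // Time one coarse phase of the service's work and append the result
        // to a log file. Timing coarse phases (rather than inner-loop
        // iterations) keeps the measurement overhead negligible.
        Stopwatch sw = Stopwatch.StartNew();

        System.Threading.Thread.Sleep(100); // stand-in for the real workload

        sw.Stop();
        File.AppendAllText("timings.log",
            DateTime.Now + "  import phase: " + sw.ElapsedMilliseconds
            + " ms" + Environment.NewLine);
    }
}
```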


Jun 1 '07 #5

On Jun 1, 10:33 pm, Peter Bromberg [C# MVP]
<pbromb...@yahoo.yabbadabbadoo.com> wrote:
> What I'd do (unless you want to get into some heavy-duty profiling) is use
> the Stopwatch class to take some start / finish timings around critical
> operations that occur when your service runs, and write the results as
> appended lines to a text file.
I was using the same kind of functionality, but in the reverse direction:
loading data from SQL to a CRM server and then to MSMQ, and again from the
queue to a CSV file. The performance is considerably better when I build the
Windows service in release mode. I found that out using the aforementioned
Stopwatch class, and I'd venture to suggest the same.

Jun 1 '07 #6

Hi,
Thanks for the answers...

1) Yes, it's the same service I'm testing, in debug and then in release (or
release then debug; the order makes no difference, I get the same results),
and always on the same computer.
2) The thing is, we are already logging (with timestamps) when the
application reaches important points, so using Stopwatch here would only add
overhead to the process. Actually, we are taking a 4,000,000-row CSV and
copying it to a SQL table while making sure the data types are correct. We
optimized our process and managed to get good performance (by replacing most
of the generics with native arrays, using temporary variables instead of
always reading the current value from the file reader, et al.). Maybe we
optimized "too much" and the JIT cannot find a way to optimize
intelligently... Would that be possible?

Thanks

ThunderMusic

"Aneesh P" <an******@gmail.com> wrote in message
news:11**********************@z28g2000prd.googlegroups.com...
> The performance is considerably better when I build the Windows service in
> release mode. I found that out using the aforementioned Stopwatch class,
> and I'd venture to suggest the same.

Jun 1 '07 #7

> If I build the service in debug mode and run it, I get decent performance.
> If I build it in release mode, I would expect it to be at least as fast,
> and maybe faster, but I actually lose about 10% performance.
>
> How can this happen?
A stab in the dark... Do you handle many exceptions? My experience is that
executing a 'try' gives no noticeable performance hit, but executing a
'catch' is very noticeable. So maybe the non-catch part of your code runs
slower in debug (as intuition dictates), and maybe the catch parts of your
code run much slower in release (not sure why; perhaps there is more to
untangle). As I said, a stab in the dark.
Jun 2 '07 #8

AMercer <AM*****@discussions.microsoft.com> wrote:
> A stab in the dark... Do you handle many exceptions? My experience is that
> executing a 'try' gives no noticeable performance hit, but executing a
> 'catch' is very noticeable.
Catching an exception *in the debugger* takes a vast amount of time.
Catching an exception in a release build, not running in the debugger,
takes relatively little time. Unless you're throwing and catching
thousands and thousands of exceptions, there shouldn't be any
significant performance hit.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 2 '07 #9

No, I'm not throwing many exceptions. Actually, in the test we ran, none
were thrown, so it must be in the non-catch part... ;)

We still experience the problem independently of whether the service runs
right after a reboot or after running for a while. We tried to measure with
Stopwatch, but nothing relevant came out of it, because when run in a tight
loop Stopwatch tends to add overhead; it almost doubled our processing time.
(In the file we are testing, we do a 4,000,000-iteration 'for' over the
rows/records with an inner 13-iteration 'for' over the columns/fields, so if
we try to measure the total time spent parsing, the total time spent writing
into the array sent to the bulk copy, and the total time of everything else,
the Stopwatch calls make the measurements go awry.)

Would you have any other ideas to help us find what is going on?

Thanks

ThunderMusic

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP********************@msnews.microsoft.com...
> Catching an exception *in the debugger* takes a vast amount of time.
> Catching an exception in a release build, not running in the debugger,
> takes relatively little time.

Jun 4 '07 #10

"ThunderMusic" <No*************************@NoSpAm.com> wrote in message
news:uN**************@TK2MSFTNGP04.phx.gbl...
> We tried to measure with Stopwatch, but nothing relevant came out of it,
> because when run in a tight loop Stopwatch tends to add overhead; it
> almost doubled our processing time.

What I don't get is this: if using Stopwatch doubles the processing time, it
looks like you have your Stopwatch calls in the wrong places. I'm curious
where you inserted the Stopwatch calls; my guess is around each iteration,
which is wrong when each iteration executes only a few statements. Mind
posting some code?
Also, did you compare the GC counters, like Time Spent in GC and the number
of collections done for each generational heap, when running both the
release and debug versions?

Willy.
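[Editor's note: the per-generation collection counts asked about above can be read from the .NET CLR Memory performance counters, or sampled directly in code. A minimal sketch, with an allocation-heavy loop standing in for the real workload:]

```csharp
using System;
using System.Collections.Generic;

class GcCounterDemo
{
    static void Main()
    {
        // Snapshot per-generation collection counts before and after a phase
        // of work. A marked difference between debug and release builds would
        // point at allocation behavior rather than the JIT-compiled code.
        int gen0 = GC.CollectionCount(0);
        int gen1 = GC.CollectionCount(1);
        int gen2 = GC.CollectionCount(2);

        List<byte[]> buffers = new List<byte[]>(); // stand-in workload
        for (int i = 0; i < 200000; i++)
            buffers.Add(new byte[64]);

        Console.WriteLine("Gen0: {0}  Gen1: {1}  Gen2: {2}",
            GC.CollectionCount(0) - gen0,
            GC.CollectionCount(1) - gen1,
            GC.CollectionCount(2) - gen2);
    }
}
```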

Jun 4 '07 #11

"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:66**********************************@microsoft.com...
> Also, did you compare the GC counters, like Time Spent in GC and the
> number of collections done for each generational heap, when running both
> the release and debug versions?
For the Stopwatch, we tried to nail down a section of the code where the
processing is slower in release, but because of the structure of the code we
cannot place it anywhere other than inside a loop, since all the work is
done in loops. So it's really hard to know exactly where to look.

I can't post code; it would be a violation of my contract, and I can't just
simplify the code, mainly because the problem could be anywhere in it.

We didn't look at the GC counters... I'll look into that and come back with
the results.

Thanks

ThunderMusic
Jun 4 '07 #12

This discussion thread is closed

Replies have been disabled for this discussion.