Bytes | Software Development & Data Engineering Community

TDS and character encoding

I've seen a dump of the TDS traffic going from my webserver to the SQL
Server database, and it appears to be encoded as Unicode (two bytes per
character). It seems performance would improve considerably if each
character travelled in one byte. Why might this be?

rj

Aug 30 '07 #1
(ra***************@yahoo.com) writes:
I've seen a dump of the TDS traffic going from my webserver to the SQL
Server database and it seems encoded in Unicode (it has two bytes per
char). Seems it would have a huge impact on performance if it
travelled in one byte. Why might this be?
I have never eavesdropped on TDS, but Unicode is indeed the character
set of SQL Server. You are perfectly able to name your tables in
Cyrillic or Hindi characters if you feel like it. And of course character
strings may include all sorts of characters. So a batch of SQL statements
that is sent over the wire must be Unicode. That is beyond dispute.

However, you don't encode something "in Unicode". Unicode is the character
set, and there are several encodings available, of which the most popular
are UTF-16 and UTF-8. In UTF-16, each character in the base plane takes up
2 bytes, and characters beyond it take up 4 bytes. (The base plane
covers the vast majority of living languages.) In UTF-8, ASCII characters
take up one byte, other characters in the Latin, Greek and Cyrillic
scripts take up two bytes, and Chinese and Japanese characters take up
three bytes.
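These sizes are easy to verify with a short Python sketch (any characters
will do; the treble clef is one character from outside the base plane):

```python
# Bytes per character in the two most common Unicode encodings.
# "utf-16-le" is used so the two-byte byte-order mark is not counted.
for ch in ("A", "é", "€", "中", "𝄞"):
    print(ch, len(ch.encode("utf-8")), len(ch.encode("utf-16-le")))
# ASCII stays at 1 byte in UTF-8; € and 中 cost 3 bytes there but only
# 2 in UTF-16; 𝄞 (outside the base plane) costs 4 bytes in both.
```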

SQL Server uses UTF-16 exclusively. It is true that for network traffic
in the western world it would be more efficient if TDS used UTF-8, but
as you can see, that is not necessarily the case in the Far East. And had
TDS used UTF-8, both ends of the wire would have had to convert to
UTF-16, so any reduction in network traffic could be eaten up by extra CPU
time.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Aug 30 '07 #2
Snooping into the TDS would be the very last place I would look when trying
to improve performance. It would be like polishing a clean mirror to remove
one's zits.

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant, Dad, Grandpa
Microsoft MVP
INETA Speaker
www.betav.com
www.betav.com/blog/billva
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
-----------------------------------------------------------------------------------------------------------------------

Aug 30 '07 #3
Well William, that is clearly not the case where you have a REAL
database with REAL traffic. By REAL, I mean a 25Mbps stream
between the IIS servers and SQL Server... Shaving off about
10Mbps of unneeded traffic does not seem like polishing to me...
I can guarantee you that this is having a serious impact on performance,
and when you dig really deeply into it (things like TCP/IP slow
start...), you really get to know why it has a huge impact for the
client, the DB server and performance.
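The slow-start point deserves a number. A toy model (assuming a 1460-byte
MSS, an initial window of one segment, window doubling every round trip,
and no loss; real TCP stacks differ in the details):

```python
# Toy model of TCP slow start: count the round trips needed to push
# a payload when the congestion window starts at 1 segment and
# doubles after every round trip.
def round_trips(payload_bytes, mss=1460):
    sent, cwnd, rtts = 0, 1, 0
    while sent < payload_bytes:
        sent += cwnd * mss   # everything the current window allows
        cwnd *= 2            # slow start: window doubles per RTT
        rtts += 1
    return rtts

print(round_trips(100_000))  # 7 round trips
print(round_trips(50_000))   # 6 round trips
```

The cost is roughly logarithmic in the payload, which is why chatty
request/response patterns hurt far more on a WAN than raw volume does.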
rj

On 30 Aug, 23:16, "William Vaughn" <billvaNoS...@betav.com> wrote:
Snooping into the TDS would be the very last place I would look when trying
to improve performance. It would be like polishing a clean mirror to remove
one's zits.

Aug 31 '07 #4
Given that SQL Server has the highest TPC-E benchmarks in the industry,
don't you think that the SQL Server team has made the TDS stream as
efficient as possible? IMHO, it's not the line protocol or the lowest layers
of the interface that should be the focus of performance tuning; the
applications, database designs and query methodologies should dominate
your attempts to improve throughput and scalability. Reducing the traffic on
the TDS channel will only go a long way to improving performance if you
really have to move that much volume over the wire to make a difference.

SQL Server Holds Record for TPC-E Database Benchmark
by Brian Moran, br***@solidqualitylearning.com

SQL Server now holds every conceivable world record for the TPC-E database
benchmark. That news would be slightly more impressive if TPC-E scores
existed for any database besides SQL Server, but heck, winning a race with
just one runner doesn't mean that runner did a bad job. I first wrote about
TPC-E, the latest benchmark from the Transaction Processing Performance
Council, in my commentary "TPC's New Benchmark Strives for Realism," October
2006, InstantDoc ID 93955.

Microsoft became the first database vendor to have a published TPC-E result
when Unisys published a TPC-E score on July 12 using SQL Server 2005 on a
dual-core 16-processor ES7000. IBM followed suit with a dual-core
2-processor server two weeks later, and Dell posted a dual-core 4-processor
result on August 24. Both IBM's and Dell's results used SQL Server, so SQL
Server is currently the only database vendor listed, meaning SQL Server
currently holds all the top scores. Sane vendors don't post TPC-E scores
that make them look bad, but I suspect it's only a matter of time before IBM
and Oracle post TPC-E scores for their database products that leapfrog the
latest SQL Server scores, which will in turn be bested by Microsoft in the
never-ending game of benchmark leapfrog.
Read the full article at:
http://lists.sqlmag.com/t?ctl=642B5:...B50D3688BDE645

--
William (Bill) Vaughn
Microsoft MVP
Aug 31 '07 #5
On 31 Aug, 18:54, "William (Bill) Vaughn"
<billvaRemoveT...@betav.com> wrote:
Given that SQL Server has the highest TPC-E benchmarks in the industry,
don't you think that the SQL Server team has made the TDS stream as
efficient as possible? IMHO, it's not the line protocol or the lowest layers
of the interface that should be the focus of performance tuning, but the
applications, database designs and query methodologies that should dominate
your attempts to improve throughput and scalability. Reducing the traffic on
the TDS channel will go a long way to improving performance if you have to
move that much volume over the wire to make a difference.
Don't know a lot about TPC-E benchmarks. Are they measured over a
network?

rj

Sep 1 '07 #6
Don't know a lot about TPC-E benchmarks. Are they measured over a
network?
Database benchmarks are typically done with a dedicated database server and
remote client(s). You can download the results disclosure reports from
http://www.tpc.org/tpce/tpce_perf_results.asp to get details of the actual
configurations used. Looking at the specs of the network gear, it doesn't
look to me like the benchmark sponsors were too concerned about network
performance.

I agree with the others in this thread that the application and database
design are by far the biggest contributing factors to overall performance.
A little common sense, like filtering data on the server rather than the
client, goes a long way towards improving scalability and performance.
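A toy illustration of that point (the numbers are made up: a 100,000-row
table where 1% of rows match the filter):

```python
# Filtering on the server sends only matching rows over the wire;
# filtering on the client drags the entire table across first.
rows_in_table = 100_000
matching_rows = int(rows_in_table * 0.01)

rows_over_wire_server_filter = matching_rows   # WHERE clause runs on the server
rows_over_wire_client_filter = rows_in_table   # filter runs in the application

print(rows_over_wire_server_filter, rows_over_wire_client_filter)
```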
--
Hope this helps.

Dan Guzman
SQL Server MVP

Sep 1 '07 #8
Erland Sommarskog (es****@sommarskog.se) writes:
(ra***************@yahoo.com) writes:
>>And, yes, while you would have seen a gross cut if TDS was UTF-8 on
the wire and not UTF-16, a Chinese user would have seen an increase
instead.

Wouldn't it be great to have an option?

If you think so, submit this suggestion on
http://connect.microsoft.com/SqlServer/Feedback.
Personally, I don't think it is worth the pain, and it would also
require changes in the client APIs. And all it would affect is query
batches sent to SQL Server and metadata sent back. If the query batches
sent to SQL Server are killing your network, maybe you should look into
using stored procedures.
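A rough sense of the saving (the table, columns and procedure name below
are hypothetical; only the statement text is compared, ignoring TDS framing):

```python
# An ad-hoc batch re-sends the full statement text as UTF-16 on every
# call; an RPC call to a stored procedure sends little more than the
# procedure name plus the parameter values.
adhoc_batch = ("SELECT OrderID, OrderDate, Total FROM dbo.Orders "
               "WHERE CustomerID = @cust AND Status = @status")
rpc_call = "dbo.GetOrdersByCustomer"

print(len(adhoc_batch.encode("utf-16-le")), "bytes of statement text per call")
print(len(rpc_call.encode("utf-16-le")), "bytes of procedure name per call")
```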
Thinking of it, rather than having an option to select the character
encoding, it would be better if the option were for compression of the
network traffic in general.

But I find it difficult to believe that this would be a good option
for the traffic between a web server and an SQL Server that are on
the same LAN. It could possibly be an option if you are on a slow connection
over VPN. In general, I have a feeling that the network considerations
for SQL Server assume LAN connections, because that is surely the most
common scenario.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 1 '07 #9
And as I've noted in another thread, SqlClient does not do this when
you use it plainly. Run a plain ExecuteReader or DataAdapter.Fill and
you will not see it. There was a link posted that led to an article
pointing out the problem with the CommandBuilder.
Thanks

Stephen Howe
Sep 1 '07 #10
In addition to agreeing with others' statements about refactoring the
application to reduce network traffic
Sure. I don't disagree. I could arrange to get 5 records back from a stored
procedure query instead of 1 record back per query.
And it may well be that 5 records fit in a network packet.
That is 1 round-trip rather than 5 round-trips.
But it could be that I could squeeze in 10 records or 20 records, etc.
How do we determine this without endlessly going round some design cycle,
trying magic numbers which immediately change if the fields in the query
change?
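One way out of the magic-number loop is a back-of-envelope estimate from
the row width, re-derived whenever the fields change. A sketch (assuming
the default 4096-byte network packet size and a flat per-row overhead;
real TDS adds headers and per-column metadata, so treat the result as a
ceiling, not a promise):

```python
# Estimate how many rows of a given payload width fit in one packet.
PACKET_BYTES = 4096   # SQL Server's default network packet size
ROW_OVERHEAD = 10     # assumed per-row token overhead

def rows_per_packet(column_bytes):
    """Rows whose columns total the given byte widths, per packet."""
    return PACKET_BYTES // (sum(column_bytes) + ROW_OVERHEAD)

# e.g. int (4) + datetime (8) + nvarchar(30) at 2 bytes/char (60)
print(rows_per_packet([4, 8, 60]))  # -> 49 rows with these widths
```

When a column is added or widened, the estimate updates itself instead of
restarting the design cycle.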

Thanks

Stephen Howe
Sep 1 '07 #11

"William Vaughn" <bi**********@betav.com> wrote in message
news:eV**************@TK2MSFTNGP02.phx.gbl...
As cynical as this sounds, I'm with Stephen here. ADO.NET has (IMHO)
regressed in functionality in some respects from ADO classic. I would
embrace opening up the interface to see what it's doing so we could get
around some of the "won't fix" or "can't fix" issues.
<snipped>

Does anyone really believe with ODBC, DAO, RDO, ADO, ADO.NET - each
generation has been an improvement?
They are all interfaces on the same set of database technology - which
does not change.

Stephen Howe
Yes I believe you can see a clear trail of the improvements.

I am somewhat timid in submitting a post in mild* favor of DAO, ADO, and
Microsoft's data access technologies in general. After all, Messrs. Howe and
Vaughn have likely forgotten more about data access than I ever knew. <g>

But I believe it is frequently overlooked that any data access library is
merely one layer in a stack of technologies. For example...
[ADO | DAO] <-[OLE DB | ODBC] <-ClientAPI <-DatabaseEngine.
If one actually attempts to trace what goes on with even the simplest of
queries - it quickly becomes amazing that it works at all.

"They are all interfaces on the same set of database technology - which does
not change." No, they are not. Database engines change, OLE implementations
change. Microsoft has not only presented us with the data access programming
interfaces, it has also made amazing changes and improvements all down the
line and back again.

Also Microsoft (who was talking to IBM in those days <g>) was the primary
sponsor and the major reason ODBC was adopted. ODBC took us out of the dark
ages of when every datasource, vendor, and language platform, had its own
access "libraries". I don't think MS gets nearly the credit it deserves.
Doesn't anyone remember the nightmare it used to be?

But sure, ODBC had a flaw. It depended on everything looking like tables with
a hierarchical structure. This made working with non-table datasources
difficult. So ADO came about. ADO's stated goal was Universal Access. And it
accomplished that. We think nothing of using ADO to connect to anything.
"Give me a provider and I can move the world." <g>

But it is not my intention to provide a history lesson. Wikipedia can do
that.

Now to the question of why there are mysteries?

With ADO and SQL Server this seems strange - as Steve pointed out, "You
would think they would have it down pat by now". Especially when you
consider that ADO, OLE DB and SQL Server were until recently the
responsibility of the same team. But the why comes down to three problems.
1) ADO is so prevalent that any fundamental changes would require massive
regression testing.
2) The specter of the 1998 "United States v. Microsoft" civil suit. While
it didn't happen, MS came close to being broken up. As it is, they have to be
very careful to share their application programming interfaces with
third-party companies, and not make any 'improvements' that effectively
break anything else, or, perhaps even more important, provide an obvious
advantage for themselves.
And 3) the sheer complexity of the problem, in the sense: what is optimal?
We all know there is more than one way in programming, and no single choice
is always the best choice.

Since the dawn of programming several things have always been paramount,
picking the best solution, you never knew until you tested it, and the next
situation was likely different.

Yes, ADO.NET has flaws; so did ADO, and so did DAO, and so did... But all of
them had their strengths as well. Contrary to popular belief, NONE was ever
a 100% optimal replacement for the other.
Which is why I always laugh when someone comes out and says things like "ADO
is the only way to go." Even today, DAO with the right engine and server will
thrash ADO. (e.g., Jet and DB2)

Anyway, this is too long. My apologies to everyone.
[*"mild" - I wouldn't want anyone to think I'm an MS bigot. I'm still pretty
miffed over what they did to VB, and the clumsy half-a** PITA that is Vista.
Plus, IMHO, if they would listen to me and do things the way I would have
done them, they would be better off. After all, everything goes smoother when
people just do things MY way. <g>]

-ralph

Sep 3 '07 #12
"They are all interfaces on the same set of database technology - which does
not change." No, they are not. Database engines change, OLE implementations
change. Microsoft has not only presented us with the programming data
access, they have also made amazing changes and improvements all down the
line and back again.
They do not change.
Yes, database engines do change, there are improvements.
There are improvements to drivers. I don't disagree.
But the access methods have not changed.

We still have
Rowsets and Cursors
Stored Procedures
Input/Output parameters

Has that access method list got longer or changed in nature? No!
So why keep reinventing yet another interface to exactly the same set of
methods?
How many different ways can you call a Stored Procedure?
Also Microsoft (who was talking to IBM in those days <g>) was the primary
sponsor and the major reason ODBC was adopted. ODBC took us out of the dark
ages of when every datasource, vendor, and language platform had its own
access "libraries". I don't think MS gets nearly the credit it deserves.
Doesn't anyone remember the nightmare it used to be?
Yes it is a good thing.
A universal method of access.
Insulate programmers from the specifics of each database source - which can
vary in detail.

[snip rest]

- Yes I guess I mostly agree with you. But I am weary.

Thanks for the comments

Stephen Howe
Sep 3 '07 #13
William Vaughn (bi**********@betav.com) writes:
As cynical as this sounds, I'm with Stephen here. ADO.NET has (IMHO)
regressed in functionality in some respects from ADO classic. I would
embrace opening up the interface to see what it's doing so we could get
around some of the "won't fix" or "can't fix" issues.
I'm in the opposite camp. I have difficulty using "classic" when
speaking of old ADO. "Classic" indicates that it was actually good and
useful, but it wasn't. I also have difficulty avoiding the word
"crap" when I talk about ADO (see, I failed this time too). Old ADO
performs too many tricks behind your back, and gets in the way too often.
I can't update that field, because it's a computed column? So what, I'm
going to update through a stored procedure. And then there is the bug where
it submits a query with SET FMTONLY ON, and the procedure bombs and the
transaction is rolled back - and ADO drops this error on the floor.

ADO.NET and SqlClient, on the other hand, form a very clean interface,
particularly if you stick to the basics for data access. No tricks behind
your back (well, save the use of sp_executesql for parameterised queries),
and errors are caught. And if you run with FireInfoMessageEventOnUserErrors
you can catch all errors and result sets even if they come mixed with
each other. Yes, there is a funny thing with the rowcount, but it
rarely matters.

It's almost as clean as DB-Library.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 3 '07 #14
I (personally) have to take the blame for some of this churn. As I said any
number of times in my books (especially the early ones), the JET/DAO
interface was designed as a native interface to JET and made a terrible
interface to SQL Server. ODBC, the first one-size-fits-all (OSFA) generic
interface, was supposed to bring utopia to the data access world. It ignored
the fact that even if you put in a door that anyone with a key could
open, once you got inside the room nothing was the same. So what if you
could open a connection to SQL Server or Oracle (or even JET) using the same
interface method? Most aspects of doing so were different. Once connected,
one was an ISAM engine, the next was best accessed with stored procedures,
and the next, while it could use stored procedures, they weren't coded the
same or didn't behave the same. We were right back where we started, but
running the train from the caboose via teletype, twelve cars back from the
engine.

RDO was my (and about three others in the VB team) attempt to get a better,
cheaper, lighter interface to Microsoft SQL Server. It worked (to a great
extent). As an intended side-effect it got the JET team to fix the first
versions of ADO to work correctly with stored procedures. 9 or 10 versions
later they're still trying to get it right. Each time they change it, the
apps that depend on the MDAC stack creak, bend or snap. Thank the stars
they've now decoupled the stack for SQL Server (SNAC)--everyone else is
still pooched.

According to the data access gurus, the downside to ODBC was perceived by
(... don't get me started on the other Mr. V) to be only suitable for
"relational" databases and MS needed more. They wanted the data access layer
to ALSO access flat data, round data, object data and corkscrewed up
data--thus OLE DB was hatched. It added layer upon layer and when all things
were done it was no faster and actually slower than ODBC and DAO--but you
could access virtually any kind of data. All you needed was a rocket
scientist to create the OLE DB provider and a language other than VB to do
it with. In their infinite wisdom they created a DAL that could not be
accessed with the most popular language on the planet (outside Redmond).

ADO.NET on the other hand was more of a start-from-scratch approach to let
the objects get closer to the native interfaces like DB-Lib and PL/SQL. It
was supposed to be very light (too light in my humble opinion) and fast and
not an OSFA. Each provider implemented similar base functionality in the
.NET provider but left you in the same place if you tried to create a single
application to access more than one backend.

At this point in time, I'm a bit miffed that we had to abandon some pretty
serious (and powerful, but difficult-to-get-right) architectures in ADO
classic like server-side cursors, fully async ops (including async Open and
fetch) and others. Instead the team has been focused on more and more
client-side library work like LINQ and TableAdapter code generators.

Now we hear that LINQ is (dramatically?) slower than not... I'm not a bit
surprised--is anyone? Of course the processors now-a-days are faster so the
real difference is smaller--as long as you're running a Quad Core Duo at
3GHz with a RAID 5 array.

--
William (Bill) Vaughn
Microsoft MVP

Sep 3 '07 #15

raymond_b_jime...@yahoo.com wrote:
I've seen a dump of the TDS traffic going from my webserver to the SQL
Server database and it seems encoded in Unicode (it has two bytes per
char). Seems it would have a huge impact on performance if it
travelled in one byte. Why might this be?

rj
Hi,
I have exactly the same trouble ... My app responds fine on a LAN,
but when I go over a WAN it gets horrible!
From 5 seconds to 7 minutes ... The SQL time is the same, so I looked at
network usage with a network analyser and found the same result as you!

Something strange: in some cases, when strings are passed as parameters,
they use only one byte per character instead of two!

I'm wondering if you found any solution ...

Sep 27 '07 #16
bzh_29 (xi****@free.Fr) writes:
It's with this analyser that I saw that when I send one byte I am in fact
sending two ... I understand the reason you explained before, but as my app
will never need to handle Japanese or Chinese, I'm a little sad not to be
able to avoid such things ...
But maybe you need the oe digraph? Or the euro character? Those characters
are not in Latin-1.
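
For what it's worth, you can check this from T-SQL itself. A quick sketch
(the exact varchar result depends on the code page of the collation in
effect, so this is illustrative rather than definitive):

```sql
-- nvarchar (UTF-16) always has room for these characters; whether varchar
-- does depends on the code page of the collation in effect.
DECLARE @s nvarchar(10) = N'œ€';

SELECT DATALENGTH(@s)            AS utf16_bytes,       -- 4: two bytes per character
       CONVERT(varchar(10), @s) AS varchar_roundtrip;
-- Under a code page that lacks these characters, the varchar conversion
-- silently degrades them to '?' or a best-fit substitute.
```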

I don't know your business, but even if you are not aiming at the Far
Eastern market, you may expand into Poland or Hungary one day. That's
enough reason to use Unicode.

Developing for Unicode from the start is cheap. Changing to Unicode after
the fact is expensive.

And if you use varchar in your application, what is really your problem? The
only Unicode you need to send is the names of the stored procedures you call.
Or are you sending query batches from the application? Now, if you do that,
there are some bytes you can save by using stored procedures instead.
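
To put rough numbers on it, here is a sketch of the difference (the table and
values are made up; DATALENGTH gives the payload bytes only — TDS framing
adds overhead on top):

```sql
-- An ad-hoc batch travels as UTF-16 text: two bytes per character,
-- for the entire statement.
SELECT DATALENGTH(N'SELECT * FROM Orders WHERE CustomerID = 42') AS batch_bytes;

-- varchar data in an RPC (stored procedure) call travels one byte per
-- character; only the procedure name itself needs to be Unicode.
SELECT DATALENGTH('42')  AS varchar_param_bytes,   -- 2
       DATALENGTH(N'42') AS nvarchar_param_bytes;  -- 4
```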


--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Sep 28 '07 #17
On Thu, 27 Sep 2007 08:54:30 -0700, bzh_29 <xi****@free.Fr> wrote:
>I have exactly the same trouble ... My app responds fine on a LAN, but when
it goes over a WAN it gets horrible!
>From 5 seconds to 7 minutes ... The SQL time is the same, so I looked at
network usage with a network analyser and found the same result as you!
Just to make sure one basic point is covered, do all the stored
procedures have SET NOCOUNT ON right at the beginning? Leaving that
out can magnify network issues.
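
For anyone following along, that means starting each procedure like this
(dbo.GetOrders and the Orders table are just made-up names for illustration):

```sql
CREATE PROCEDURE dbo.GetOrders
    @CustomerID int
AS
BEGIN
    -- Suppress the "(N row(s) affected)" messages that otherwise travel
    -- back to the client after every statement, adding bytes on the wire.
    SET NOCOUNT ON;

    SELECT OrderID, OrderDate
    FROM   dbo.Orders
    WHERE  CustomerID = @CustomerID;
END
```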

Roy Harvey
Beacon Falls, CT
Sep 28 '07 #18

This thread has been closed and replies have been disabled. Please start a new discussion.
