
Best practices for moving large amounts of data using WCF ...

P: n/a
Hello everyone:

I am looking for everyone's thoughts on moving large amounts of data (actually, not
very large, but large enough that I'm getting exceptions with the default
configuration).

We're doing a proof-of-concept on WCF in which we have a Windows Forms client
and a server. Our server is a middle tier that interfaces with our SQL Server 2005
database server.

Using the netTcpBinding (with the default configuration ... no special
adjustments to buffer size, buffer pool size, etc.), we invoke a
call to our server, which runs a stored procedure and returns the query
result. We then take the rows in the result set, "package" them
into a hashtable, and return the hashtable to the calling client.

Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the real problem ... although it is
getting reported that way. It turns out it was the amount of data.

The query should return ~11,000 records from the database. From our
experimentation we've noticed we can return only 95 of the rows before
we get an "exceeded buffer size" exception. With the default values in our
app.config file, that size is 65,536.

Not that moving 11,000 records is smart, but being limited to 64 KB per
message seems overly restrictive. We can change the value from the
default, but first I wanted to ask what others are doing to work with larger
amounts of data in WCF.

Are you simply "turning up" the buffer size? Using some kind of
paging technique? Some other strategy?? I'm having a tough time finding answers
on this.
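To make the question concrete, here's roughly what that config change looks like on our end (the binding name and the 10 MB figures are just placeholders, not our actual config):

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- raise the 64 KB defaults so a larger response fits in one message -->
      <binding name="largeMessageBinding"
               maxReceivedMessageSize="10485760"
               maxBufferSize="10485760">
        <readerQuotas maxArrayLength="10485760" />
      </binding>
    </netTcpBinding>
  </bindings>
</system.serviceModel>
```

As I understand it, in buffered mode maxBufferSize has to match maxReceivedMessageSize, and the endpoint has to reference the named binding configuration for any of this to take effect.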

Greatly appreciate any and all comments on this,

Thanks
--
Stay Mobile
Jan 31 '07 #1
7 Replies


P: n/a
Hi, MobileMan:

You're having problems moving large amounts of data? Our SocketPro at
www.udaparts.com solves this problem completely in a simple and elegant
way: non-blocking socket communication. SocketPro is a package of
software components built around batching, asynchrony, and parallel
computation, with many attractive and critical features to help you
easily and quickly develop secure, internet-enabled distributed applications
that run on all Windows platforms and smart devices with high performance
and scalability.

See the attached tutorial three. Let me give you some code here.

protected void GetManyItems()
{
    int nRtn = 0;
    m_UQueue.SetSize(0);
    PushNullException();

    while (m_Stack.Count > 0)
    {
        // a client may either shut down the socket connection
        // or call IUSocket::Cancel
        if (nRtn == SOCKET_NOT_FOUND || nRtn == REQUEST_CANCELED)
            break;

        CTestItem Item = (CTestItem)m_Stack.Pop();
        Item.SaveTo(m_UQueue);

        // Send at least 20 KB per batch, but don't make batches too large:
        // oversized batches cost more memory and reduce concurrency when
        // online compression is enabled. Test to find an optimal value.
        if (m_UQueue.GetSize() > 20480)
        {
            nRtn = SendReturnData(TThreeConst.idGetBatchItemsCTThree, m_UQueue);
            m_UQueue.SetSize(0);
            PushNullException();
        }
    }

    if (nRtn == SOCKET_NOT_FOUND || nRtn == REQUEST_CANCELED)
    {
        // connection gone or request canceled: nothing more to send
    }
    else if (m_UQueue.GetSize() > sizeof(int))
    {
        // flush the final partial batch
        nRtn = SendReturnData(TThreeConst.idGetBatchItemsCTThree, m_UQueue);
    }
}

There are a lot of samples in SocketPro that demonstrate how to move
large numbers of database records, large files, and large collections of
items across machines. See
http://www.udaparts.com/document/articles/dialupdb.htm

"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:EA**********************************@microsof t.com...
Hello everyone:

I am looking for everyone's thoughts on moving large amounts (actually,
not
very large, but large enough that I'm throwing exceptions using the
default
configurations).

We're doing a proof-of-concept on WCF whereby we have a Windows form
client
and a Server. Our server is a middle-tier that interfaces with our SQL 05
database server.

Using the "netTcpBindings" (using the default config ... no special
adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
a
call to our server, which invokes a stored procedure and returns the query
result. At this point we take the rows in the result set and "package"
them
into a hashtable, then return the hashtable to the calling client.

Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the problem .... although it is
getting reported that way. Turns out it was the amount of data.

The query should return ~11,000 records from the database. From our
experimentation we've noticed we can only return 95 of the rows back
before
we throw a "exceded buffer size" exception. Using the default values in
our
app.config file that size is 65,536.

Not that moving 11,000 records is smart, but to be limited to only 64Kb in
a
communication seems overly restrictive. We can change the value from the
default, but I wanted to ask what other's are doing to work with larger
amounts of data with WCF first?

Are you simply "turning up" the size of the buffer size? Some kind of
paging technique? Some other strategy?? Having a tough time finding
answers
on this.

Greatly appreciate any and all comments on this,

Thanks
--
Stay Mobile

Jan 31 '07 #2

P: n/a
Dear "msgroup":

Thanks for the sales pitch .....

Actually, we're interested in best practices as they pertain to WCF. We've
been doing socket-level programming for far too long - that's the point of
moving up to a higher-level abstraction, isn't it??

Sounds like a nice product, though.
--
Stay Mobile
"msgroup" wrote:
Hi, MobileMan:

You get problems for moving large amounts of data? Our SocketPro at
www.udaparts.com solves this problem completely with very simple and elegant
way, non-blocking socket communication. SocketPro is a package of
revolutionary software components written from batching, asynchrony and
parallel computation with many attractive and critical features to help you
easily and quickly develop secured internet-enabled distributed applications
running on all of window platforms and smart devices with super performance
and scalability.

See the attached tutorial three. Let me give you some code here.

protected void GetManyItems()

{

int nRtn = 0;

m_UQueue.SetSize(0);

PushNullException();

while (m_Stack.Count 0)

{

//a client may either shut down the socket
connection or call IUSocket::Cancel

if (nRtn == SOCKET_NOT_FOUND || nRtn ==
REQUEST_CANCELED)

break;

CTestItem Item = (CTestItem)m_Stack.Pop();

Item.SaveTo(m_UQueue);

//20 kbytes per batch at least

//also shouldn't be too large.

//If the size is too large, it will cost
more memory resource and reduce conccurency if online compressing is
enabled.

//for an opimal value, you'd better test it
by yourself

if (m_UQueue.GetSize() 20480)

{

nRtn =
SendReturnData(TThreeConst.idGetBatchItemsCTThree, m_UQueue);

m_UQueue.SetSize(0);

PushNullException();

}

}

if (nRtn == SOCKET_NOT_FOUND || nRtn == REQUEST_CANCELED)

{

}

else if (m_UQueue.GetSize() sizeof(int))

{

nRtn =
SendReturnData(TThreeConst.idGetBatchItemsCTThree, m_UQueue);

}

}

There are a lot of samples inside our SocketPro to demonstrate how to
move a lot of database records, large files, a large collection of items
across machines. See the site
http://www.udaparts.com/document/articles/dialupdb.htm

"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:EA**********************************@microsof t.com...
Hello everyone:

I am looking for everyone's thoughts on moving large amounts (actually,
not
very large, but large enough that I'm throwing exceptions using the
default
configurations).

We're doing a proof-of-concept on WCF whereby we have a Windows form
client
and a Server. Our server is a middle-tier that interfaces with our SQL 05
database server.

Using the "netTcpBindings" (using the default config ... no special
adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
a
call to our server, which invokes a stored procedure and returns the query
result. At this point we take the rows in the result set and "package"
them
into a hashtable, then return the hashtable to the calling client.

Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the problem .... although it is
getting reported that way. Turns out it was the amount of data.

The query should return ~11,000 records from the database. From our
experimentation we've noticed we can only return 95 of the rows back
before
we throw a "exceded buffer size" exception. Using the default values in
our
app.config file that size is 65,536.

Not that moving 11,000 records is smart, but to be limited to only 64Kb in
a
communication seems overly restrictive. We can change the value from the
default, but I wanted to ask what other's are doing to work with larger
amounts of data with WCF first?

Are you simply "turning up" the size of the buffer size? Some kind of
paging technique? Some other strategy?? Having a tough time finding
answers
on this.

Greatly appreciate any and all comments on this,

Thanks
--
Stay Mobile


Jan 31 '07 #3

P: n/a
Too bad you can't ask Juval Lowy, the guy who worked with MS to develop
WCF. You could always check out his web site and see if there's any contact
info. I saw him talk about WCF yesterday at the Vista Launch in SF. Pretty
cool stuff. http://www.idesign.net

Good luck.

Robin S.
-------------------------------------------
"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:EA**********************************@microsof t.com...
Hello everyone:

I am looking for everyone's thoughts on moving large amounts (actually,
not
very large, but large enough that I'm throwing exceptions using the
default
configurations).

We're doing a proof-of-concept on WCF whereby we have a Windows form
client
and a Server. Our server is a middle-tier that interfaces with our SQL
05
database server.

Using the "netTcpBindings" (using the default config ... no special
adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
a
call to our server, which invokes a stored procedure and returns the
query
result. At this point we take the rows in the result set and "package"
them
into a hashtable, then return the hashtable to the calling client.

Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the problem .... although it is
getting reported that way. Turns out it was the amount of data.

The query should return ~11,000 records from the database. From our
experimentation we've noticed we can only return 95 of the rows back
before
we throw a "exceded buffer size" exception. Using the default values in
our
app.config file that size is 65,536.

Not that moving 11,000 records is smart, but to be limited to only 64Kb
in a
communication seems overly restrictive. We can change the value from the
default, but I wanted to ask what other's are doing to work with larger
amounts of data with WCF first?

Are you simply "turning up" the size of the buffer size? Some kind of
paging technique? Some other strategy?? Having a tough time finding
answers
on this.

Greatly appreciate any and all comments on this,

Thanks
--
Stay Mobile

Jan 31 '07 #4

P: n/a
Yeah, we've seen their site ... some really good stuff. We're pretty new to
all this, but I haven't seen anybody else who seems to be as "fluent" as they
are. They've obviously put in some serious time on the subject to come up
with all that.

Wish I was there to see him speak too.

From what we've gathered so far (which admittedly isn't much ... not a lot of
people doing this), the way to handle this is to use stream-based transfers
instead of the default buffered mode. We could go through and change some
of the settings in the config file, but the question would remain: did you make
the buffer / max message size "big enough" to handle all possibilities?
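From what we can tell, switching to streamed transfers is mostly a binding setting plus a contract that works in terms of System.IO.Stream. Roughly like this (the name and size here are placeholders, not our actual config):

```xml
<netTcpBinding>
  <!-- stream the message body instead of buffering it whole;
       maxReceivedMessageSize still caps the total stream length -->
  <binding name="streamedBinding"
           transferMode="Streamed"
           maxReceivedMessageSize="2147483647" />
</netTcpBinding>
```

The catch, as we understand it, is that a streamed operation can only take or return a Stream, a Message, or a type with a single Stream body member, so our hashtable-of-rows approach would have to change.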

I'm not sure just how much of an issue it would be, but conceptually the
idea of taking a setting that defaults to 64 KB and changing it to something
like 75 MB, 150 MB, or even larger ... just to handle those few-and-far-between
situations that only come up once in a blue moon ... seems wrong somehow.
Don't get me wrong, though: if that really is the best way to handle this,
then we'll be changing the settings! We'd love to hear from someone who's
really using WCF with large amounts of data and the strategy they've
employed.

I'll drop a note to Juval and maybe get lucky. No matter what, I'll
post back and let you know what we went with and what the "real world"
results work out like. WCF seems to hold A LOT of promise .... we all just
need more communication about it.

Thanks Robin.

--
Stay Mobile
"RobinS" wrote:
Too bad you can't ask Juval Lowy, the guy who worked with MS to develop
WCF. You could always check out his web site and see if there's any contact
info. I saw him talk about WCF yesterday at the Vista Launch in SF. Pretty
cool stuff. http://www.idesign.net

Good luck.

Robin S.
-------------------------------------------
"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:EA**********************************@microsof t.com...
Hello everyone:

I am looking for everyone's thoughts on moving large amounts (actually,
not
very large, but large enough that I'm throwing exceptions using the
default
configurations).

We're doing a proof-of-concept on WCF whereby we have a Windows form
client
and a Server. Our server is a middle-tier that interfaces with our SQL
05
database server.

Using the "netTcpBindings" (using the default config ... no special
adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
a
call to our server, which invokes a stored procedure and returns the
query
result. At this point we take the rows in the result set and "package"
them
into a hashtable, then return the hashtable to the calling client.

Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the problem .... although it is
getting reported that way. Turns out it was the amount of data.

The query should return ~11,000 records from the database. From our
experimentation we've noticed we can only return 95 of the rows back
before
we throw a "exceded buffer size" exception. Using the default values in
our
app.config file that size is 65,536.

Not that moving 11,000 records is smart, but to be limited to only 64Kb
in a
communication seems overly restrictive. We can change the value from the
default, but I wanted to ask what other's are doing to work with larger
amounts of data with WCF first?

Are you simply "turning up" the size of the buffer size? Some kind of
paging technique? Some other strategy?? Having a tough time finding
answers
on this.

Greatly appreciate any and all comments on this,

Thanks
--
Stay Mobile


Feb 1 '07 #5

P: n/a
They're not just "fluent". Like I said before, Juval actually helped MS
design the WCF stuff. That kind of takes fluent to a whole new level. ;-)
I haven't used it, so unfortunately, I can't help you specifically.

However, here's something that should help. There is a newsgroup
specifically for WCF. WCF used to be called Indigo before the Marketing
people got their claws into it. So I recommend that you post your query to
this newsgroup:

microsoft.public.windows.developer.winfx.indigo

Someone there can probably be very helpful.

Good luck.
Robin S.
-------------------------------------------------
"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:BF**********************************@microsof t.com...
Yea, we've seen their site ... some really good stuff. We're pretty new
to
all this, but I haven't seen anybody else who seems to be as "fluent" as
they
are. They've obviously put in some serious time on the subject to come
up
with all that.

Wish I was there to see him speak too.

From what we've gathered so far (which admitedly isn't much ... not a lot
of
people doing this) the way to handle this is using stream-based
connections
instead of using buffered - the default. We could go through and change
some
of the settings in the config file, but the issue would be did you make
the
buffer / max message size "big enough" to handle all possibilities?

I'm not sure just how much of an issue it would be, but conceptually the
idea of taking a setting that defaults at 64Kb and changing it to
something
like 75MB, 150MB, or even larger ... just to handle those
few-and-far-between
situations that only come up once in a blue moon ... seems wrong somehow.
Dont' get me wrong, though, if that is really the best way to handle this
then we'll be changing the settings! We'd love to hear from someone
who's
really using WCF - using large amount of data - and the strategy they've
employed.

I'll drop a note to Juval and maybe get lucky. No matter what, I'll
post-back and let you know what we went with and what the "real world"
results work out like. WCF seems to hold A LOT of promise .... we all
just
need more communication about it.

Thanks Robin.

--
Stay Mobile
"RobinS" wrote:
>Too bad you can't ask Juval Lowy, the guy who worked with MS to develop
WCF. You could always check out his web site and see if there's any
contact
info. I saw him talk about WCF yesterday at the Vista Launch in SF.
Pretty
cool stuff. http://www.idesign.net

Good luck.

Robin S.
-------------------------------------------
"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:EA**********************************@microso ft.com...
Hello everyone:

I am looking for everyone's thoughts on moving large amounts
(actually,
not
very large, but large enough that I'm throwing exceptions using the
default
configurations).

We're doing a proof-of-concept on WCF whereby we have a Windows form
client
and a Server. Our server is a middle-tier that interfaces with our
SQL
05
database server.

Using the "netTcpBindings" (using the default config ... no special
adjustments to buffer size, buffer pool size, etc., etc.) we are
invoking
a
call to our server, which invokes a stored procedure and returns the
query
result. At this point we take the rows in the result set and
"package"
them
into a hashtable, then return the hashtable to the calling client.

Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the problem .... although it
is
getting reported that way. Turns out it was the amount of data.

The query should return ~11,000 records from the database. From our
experimentation we've noticed we can only return 95 of the rows back
before
we throw a "exceded buffer size" exception. Using the default values
in
our
app.config file that size is 65,536.

Not that moving 11,000 records is smart, but to be limited to only
64Kb
in a
communication seems overly restrictive. We can change the value from
the
default, but I wanted to ask what other's are doing to work with
larger
amounts of data with WCF first?

Are you simply "turning up" the size of the buffer size? Some kind of
paging technique? Some other strategy?? Having a tough time finding
answers
on this.

Greatly appreciate any and all comments on this,

Thanks
--
Stay Mobile



Feb 1 '07 #6

P: n/a
Bravo!

--
Stay Mobile
"RobinS" wrote:
They're not just "fluent". Like I said before, Juval actually helped MS
design the WCF stuff. That kind of takes fluent to a whole new level. ;-)
I haven't used it, so unfortunately, I can't help you specifically.

However, here's something that should help. There is a newsgroup
specifically for WCF. WCF used to be called Indigo before the Marketing
people got their claws into it. So I recommend that you post your query to
this newsgroup:

microsoft.public.windows.developer.winfx.indigo

Someone there can probably be very helpful.

Good luck.
Robin S.
-------------------------------------------------
"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:BF**********************************@microsof t.com...
Yea, we've seen their site ... some really good stuff. We're pretty new
to
all this, but I haven't seen anybody else who seems to be as "fluent" as
they
are. They've obviously put in some serious time on the subject to come
up
with all that.

Wish I was there to see him speak too.

From what we've gathered so far (which admitedly isn't much ... not a lot
of
people doing this) the way to handle this is using stream-based
connections
instead of using buffered - the default. We could go through and change
some
of the settings in the config file, but the issue would be did you make
the
buffer / max message size "big enough" to handle all possibilities?

I'm not sure just how much of an issue it would be, but conceptually the
idea of taking a setting that defaults at 64Kb and changing it to
something
like 75MB, 150MB, or even larger ... just to handle those
few-and-far-between
situations that only come up once in a blue moon ... seems wrong somehow.
Dont' get me wrong, though, if that is really the best way to handle this
then we'll be changing the settings! We'd love to hear from someone
who's
really using WCF - using large amount of data - and the strategy they've
employed.

I'll drop a note to Juval and maybe get lucky. No matter what, I'll
post-back and let you know what we went with and what the "real world"
results work out like. WCF seems to hold A LOT of promise .... we all
just
need more communication about it.

Thanks Robin.

--
Stay Mobile
"RobinS" wrote:
Too bad you can't ask Juval Lowy, the guy who worked with MS to develop
WCF. You could always check out his web site and see if there's any
contact
info. I saw him talk about WCF yesterday at the Vista Launch in SF.
Pretty
cool stuff. http://www.idesign.net

Good luck.

Robin S.
-------------------------------------------
"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:EA**********************************@microsof t.com...
Hello everyone:

I am looking for everyone's thoughts on moving large amounts
(actually,
not
very large, but large enough that I'm throwing exceptions using the
default
configurations).

We're doing a proof-of-concept on WCF whereby we have a Windows form
client
and a Server. Our server is a middle-tier that interfaces with our
SQL
05
database server.

Using the "netTcpBindings" (using the default config ... no special
adjustments to buffer size, buffer pool size, etc., etc.) we are
invoking
a
call to our server, which invokes a stored procedure and returns the
query
result. At this point we take the rows in the result set and
"package"
them
into a hashtable, then return the hashtable to the calling client.

Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the problem .... although it
is
getting reported that way. Turns out it was the amount of data.

The query should return ~11,000 records from the database. From our
experimentation we've noticed we can only return 95 of the rows back
before
we throw a "exceded buffer size" exception. Using the default values
in
our
app.config file that size is 65,536.

Not that moving 11,000 records is smart, but to be limited to only
64Kb
in a
communication seems overly restrictive. We can change the value from
the
default, but I wanted to ask what other's are doing to work with
larger
amounts of data with WCF first?

Are you simply "turning up" the size of the buffer size? Some kind of
paging technique? Some other strategy?? Having a tough time finding
answers
on this.

Greatly appreciate any and all comments on this,

Thanks
--
Stay Mobile


Feb 1 '07 #7

P: n/a
Hi, All:

See the site at
http://www.udaparts.com/document/Tut...orialThree.htm for how to move
large files, large record sets, large collections of items, and large
whatever with our SocketPro at www.udaparts.com.

We see a lot of similar problems posted on various discussion groups and
web sites. Our SocketPro is able to solve this type of challenging problem
with much more elegant and simpler code. This tutorial sample is good
testimony to the quality of SocketPro. You can also see the source code
for our remote Windows file and database services.

We publish this message to help you and also as advertisement on the
internet. SocketPro solves many challenging problems in daily
programming in a unique way: batching, asynchrony, and parallel
computation.

Regards,

"MobileMan" <Mo*******@discussions.microsoft.comwrote in message
news:EA**********************************@microsof t.com...
Hello everyone:

I am looking for everyone's thoughts on moving large amounts (actually,
not
very large, but large enough that I'm throwing exceptions using the
default
configurations).

We're doing a proof-of-concept on WCF whereby we have a Windows form
client
and a Server. Our server is a middle-tier that interfaces with our SQL 05
database server.

Using the "netTcpBindings" (using the default config ... no special
adjustments to buffer size, buffer pool size, etc., etc.) we are invoking
a
call to our server, which invokes a stored procedure and returns the query
result. At this point we take the rows in the result set and "package"
them
into a hashtable, then return the hashtable to the calling client.

Our original exception was a time-out exception, but with some
experimentation we've learned that wasn't the problem .... although it is
getting reported that way. Turns out it was the amount of data.

The query should return ~11,000 records from the database. From our
experimentation we've noticed we can only return 95 of the rows back
before
we throw a "exceded buffer size" exception. Using the default values in
our
app.config file that size is 65,536.

Not that moving 11,000 records is smart, but to be limited to only 64Kb in
a
communication seems overly restrictive. We can change the value from the
default, but I wanted to ask what other's are doing to work with larger
amounts of data with WCF first?

Are you simply "turning up" the size of the buffer size? Some kind of
paging technique? Some other strategy?? Having a tough time finding
answers
on this.

Greatly appreciate any and all comments on this,

Thanks
--
Stay Mobile

Feb 3 '07 #8

This discussion thread is closed

Replies have been disabled for this discussion.