Bytes IT Community

Problem with TCP Network Programming

P: 4

I am currently programming a TCP socket to transfer large files. My client is a .NET application and my server is running on Java.

I am using a TCP socket to transfer a file over the network, and I am only able to do it if I segment the file myself and send it one segment at a time. Why do I have to do this, when TCP is supposed to handle the whole file for me? I thought TCP would automatically segment the data, send it in sequence, and do error checking. However, in my program, whenever I hand a large file to the TCP socket, only the first part is transferred, up to whatever size the network can handle in one go. The rest is lost.
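The receiving side has the same issue: on a TCP stream, a single read call returns only whatever bytes happen to have arrived so far, not the whole file. A minimal sketch of a receive loop in Java (the helper name `readFully` is my own; `InputStream.read(byte[], int, int)` is the standard API):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyExample {
    // Keeps calling read() until len bytes have arrived or the peer
    // closes the stream. A single read() on a TCP socket may return
    // only the bytes currently buffered, so looping is required.
    static int readFully(InputStream in, byte[] buf, int len) throws IOException {
        int off = 0;
        while (off < len) {
            int n = in.read(buf, off, len - off);
            if (n < 0) break; // end of stream: peer closed the connection
            off += n;
        }
        return off; // number of bytes actually read
    }
}
```

The same loop works for a socket's input stream, since `ByteArrayInputStream` and a socket stream share the `InputStream` contract.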

My .NET application is using Socket.Send(byte array), while my Java server reads the data into a byte array.

Can anyone explain why TCP is behaving like UDP in this case? I did set the protocol type of both the client and the server to TCP.

Thank you!
Sep 14 '08 #1
1 Reply

Expert 5K+
P: 7,872
Well, the Send() function returns an integer value for a reason. It tells you how many bytes were actually sent, so you can loop and continue sending from where it left off.
This is normal behavior.
You can't just dump a 10 MB file into the socket in one call and expect it all to go through.
Sep 15 '08 #2
