My experience (of which I have quite a lot) of receiving data on a serial port on embedded devices is that missing data usually has one of two causes: either the baud rate is too fast for the device, or the device is not servicing its receive interrupt in a timely fashion.
It is actually quite hard to tell the difference between these two faults: you have missed data, but did it arrive too fast or did you read it too slowly? They are two sides of the same coin. So you have to assume the second, and if you are still losing data once you have done everything you can, then you try a lower baud rate.
One of the normal assumptions is that the data will not be a constant stream; that is, there will be gaps in the data, during which time the embedded application can perform processing on the received data. The problem is quite different if the data stream is constant.
If you are not already using interrupt-driven receive on the UART then you should be: an interrupt helps to ensure that when data is received by the UART, the firmware actually reads it promptly.
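A minimal sketch of what that looks like, assuming a hypothetical UART whose received byte is read from a single data register (here mocked as a plain variable, `UART_RX_REG`, so the logic is testable; substitute your part's actual registers and interrupt vector syntax):

```c
#include <stdint.h>

#define RX_BUF_SIZE 256u          /* must be a power of two for the masking below */

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint16_t rx_head;  /* written only by the ISR       */
static volatile uint16_t rx_tail;  /* written only by the main loop */

volatile uint8_t UART_RX_REG;      /* mock of the UART receive data register */

/* The ISR does the bare minimum: read the data register (on many parts
 * this read also clears the interrupt flag) and store the byte. */
void uart_rx_isr(void)
{
    uint8_t byte = UART_RX_REG;
    uint16_t next = (uint16_t)((rx_head + 1u) & (RX_BUF_SIZE - 1u));
    if (next != rx_tail) {         /* drop the byte if the buffer is full */
        rx_buf[rx_head] = byte;
        rx_head = next;
    }
}

/* Called from the main loop; returns -1 when no data is waiting. */
int uart_rx_get(void)
{
    if (rx_tail == rx_head)
        return -1;
    uint8_t byte = rx_buf[rx_tail];
    rx_tail = (uint16_t)((rx_tail + 1u) & (RX_BUF_SIZE - 1u));
    return byte;
}
```

The single-writer split (ISR owns `rx_head`, main loop owns `rx_tail`) means no locking is needed on most single-core parts, provided the index type can be read atomically.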
So this leads to the following ways the software could cause the loss of data.
- Failure to service the interrupt routine. An interrupt routine can be prevented from running either by another interrupt routine (on a platform that does not allow nested interrupts, which is a large portion of them) or by interrupts being disabled for too long. For instance, in the last year I was using a platform with a particularly useless OS on it: the task scheduler ran in an interrupt routine and took longer to run than the time to receive a character, which resulted in lost data when receiving large blocks, until I realised the problem.
- Buffers too small. You are using a circular buffer, which is a common and fine thing to do, but by no means the only option. However, whatever buffering scheme you use, if your buffer is too small for your largest use case (the largest amount of data you are likely to receive in one block), then you are going to lose characters when you run out of space. Unless what you have to do with the data is extremely simple, or you have an extremely powerful processor, you will spend more time processing the data than it takes to receive it. That is, you normally rely on the gaps in the data stream to give the embedded processor time to catch up with processing, so you have to be able to buffer enough data to span the largest expected burst between those gaps. As a rule of thumb, I try to guesstimate this size and then double it.
- Program logic error. The code has a logic error in it that results in it losing data or state.
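The buffer-sizing rule of thumb above can be turned into arithmetic. The figures here are illustrative: with 8N1 framing each byte costs 10 bit times on the wire (start bit + 8 data bits + stop bit), so e.g. 115200 baud delivers at most 11520 bytes per second.

```c
#include <stdint.h>

/* 8N1 framing: 10 bits on the wire per data byte. */
uint32_t bytes_per_second(uint32_t baud)
{
    return baud / 10u;
}

/* Rule of thumb from the text: guesstimate the largest burst you will
 * receive between gaps in the stream, then double it. */
uint32_t suggested_buffer_size(uint32_t largest_burst_bytes)
{
    return largest_burst_bytes * 2u;
}
```

So if your worst-case burst is 1024 bytes, a 2048-byte buffer is a reasonable starting point, and at 115200 baud that burst takes roughly 90 ms to arrive.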
It is unlikely that a correctly written circular buffer would cause a problem in itself, as long as it is large enough for the job in hand.
It is important that you don't try to perform all your data processing in your receive routine; you are unlikely to have time. Buffer the data for later processing.
Checking that your interrupt routine is being executed in a timely fashion is quite hard to do. IIRC, last time I had to do this I set a GPIO line high at the start of the interrupt and low at the end, then used an oscilloscope to follow the value of the pin. That led to the discovery of the processing gap, and doing the same for each of the other interrupts in turn led me to the interrupt controlling task scheduling.
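The scope trick is only a few lines of code. A sketch, with the GPIO output register mocked as a plain variable (`DEBUG_PORT`) and an arbitrary spare pin; on real hardware these writes would go to the port's set/clear registers:

```c
#include <stdint.h>

volatile uint8_t DEBUG_PORT;      /* mock of a GPIO output register */
#define DEBUG_PIN (1u << 3)       /* arbitrary spare output pin     */

void debug_pin_high(void) { DEBUG_PORT |= DEBUG_PIN; }
void debug_pin_low(void)  { DEBUG_PORT &= (uint8_t)~DEBUG_PIN; }

void uart_rx_isr(void)
{
    debug_pin_high();             /* rising edge: ISR entered */

    /* ... normal receive handling goes here ... */

    debug_pin_low();              /* pulse width on the scope = ISR run time */
}
```

On the scope, the pulse width shows how long the ISR takes, and the gaps between pulses show how often it runs; a pulse that stretches or a gap that vanishes points straight at the offending interrupt.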
Remember that interrupt routines should be short and simple; if they are more than a page or two of code, or contain complex control structures (switches or loops), then you should start being suspicious.