Bytes IT Community

limit Python CGI's frequency of calls to a database?

P: 7
I've got a Python CGI script that pulls data from a GPS service; I'd like this information to be updated on the webpage about once every 10 s (the fastest rate the GPS service's TOS allows). But there could be, say, 100 users viewing the webpage at once, all calling the script.

I think the users' requests should grab data from a buffer page that itself updates only once every ten seconds. How can I make this buffer page auto-update when no one is directly viewing the content (and hence no one is calling the CGI)? Are there better ways to accomplish this? A database? A server-side cron job? I'm very new to these topics, but these are ideas I've heard.
Dec 16 '09 #1
5 Replies


gits
Expert Mod 5K+
P: 5,390
A server-side cron is a good start: let the cron job write a file every 10 s, then write a script that reads that file. The webpage just calls the 'reader-script', so you'd have your 'buffer-page' ...
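In CGI terms, the 'reader-script' can be just a few lines: it prints whatever the cron-driven fetcher last wrote. A minimal sketch, assuming the cron job writes JSON to a cache file (the path and fallback message here are hypothetical):

```python
#!/usr/bin/env python3
"""CGI 'reader-script': serves the file the cron job last wrote."""

CACHE_PATH = "/tmp/gps_cache.json"  # hypothetical; wherever the cron job writes

def read_cache(path=CACHE_PATH):
    """Return the last payload written by the cron job, or a fallback."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return '{"error": "no data yet"}'

if __name__ == "__main__":
    # Standard CGI response: headers, blank line, body.
    print("Content-Type: application/json")
    print()
    print(read_cache())
```

Because this script never contacts the GPS service itself, any number of simultaneous viewers stay within the TOS.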

kind regards
Dec 16 '09 #2

P: 7
gits -- thanks. I checked my host's TOS and they don't allow cron jobs to run more than once every 15 minutes. What if I had a local script pull GPS data every 10 s and upload a new file to the server whenever the data changes? That seems inefficient, but I'm struggling to think of another option.
Dec 16 '09 #3

gits
Expert Mod 5K+
P: 5,390
You could even have a server-side script that reads the GPS data and writes it to a file. When that's done, write a lockfile, and let subsequent requests read from the data file. After 10 s, remove the lockfile, read from the GPS service again, and write a new file. Just to give you another idea ...
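A sketch of this lockfile scheme (the paths, the fetch function, and the fallback message are hypothetical): the first request to find the cache stale creates the lockfile atomically and refreshes the data; any request arriving while the lockfile exists keeps serving the old copy instead of hitting the GPS service again.

```python
import os
import time

CACHE = "/tmp/gps_cache.json"   # hypothetical paths
LOCK = CACHE + ".lock"
MAX_AGE = 10                    # seconds, per the GPS service's TOS

def fetch_gps():
    """Stand-in for the real request to the GPS service."""
    return '{"lat": 0.0, "lon": 0.0}'

def get_data(fetch=fetch_gps):
    # Serve the cached file while it is fresh.
    try:
        fresh = time.time() - os.path.getmtime(CACHE) <= MAX_AGE
    except OSError:
        fresh = False           # no cache file yet
    if fresh:
        with open(CACHE) as f:
            return f.read()
    # Stale: try to take the lockfile atomically. O_EXCL guarantees only
    # one request wins; everyone else keeps serving the old data.
    try:
        os.close(os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
    except FileExistsError:
        try:
            with open(CACHE) as f:
                return f.read()  # another request is refreshing right now
        except OSError:
            return '{"error": "warming up"}'
    try:
        data = fetch()
        with open(CACHE, "w") as f:
            f.write(data)
    finally:
        os.remove(LOCK)         # always release the lock
    return data
```

Creating the lockfile with `os.O_CREAT | os.O_EXCL` rather than a plain `open()` matters: the call fails atomically if the file already exists, so two simultaneous requests can't both decide to refresh.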

kind regards
Dec 19 '09 #4

acoder
Expert Mod 15k+
P: 16,027
Another possibility: use a database as a cache, storing the results along with the time they were retrieved. If the interval has elapsed (current time > retrieval time + 10 s), make a new request; otherwise serve the cached results. If no users view the page for a length of time, there's no need to keep making a request every 10 seconds: you just make the next request when the next user views the page, by which time 10 seconds will have passed.
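A sketch of this cache using SQLite (the single-row table layout and names are assumptions for illustration, not from the thread):

```python
import sqlite3
import time

MAX_AGE = 10  # seconds between requests to the GPS service

def init_db(conn):
    # One-row cache table: the retrieval time and the raw payload.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS gps_cache ("
        " id INTEGER PRIMARY KEY CHECK (id = 1),"
        " fetched_at REAL, payload TEXT)"
    )

def get_data(conn, fetch):
    """Serve cached data if fresh; otherwise call `fetch` and re-cache.

    `fetch` is a stand-in for the real GPS-service request.
    """
    row = conn.execute(
        "SELECT fetched_at, payload FROM gps_cache WHERE id = 1"
    ).fetchone()
    now = time.time()
    if row and now - row[0] <= MAX_AGE:
        return row[1]            # cached result is still fresh
    payload = fetch()            # stale or missing: hit the GPS service
    conn.execute(
        "INSERT OR REPLACE INTO gps_cache (id, fetched_at, payload)"
        " VALUES (1, ?, ?)", (now, payload)
    )
    conn.commit()
    return payload
```

This has the property acoder describes: with no visitors, nothing runs at all, and the first request after a quiet spell simply finds the cache stale and refreshes it.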
Dec 22 '09 #5

P: 7
acoder, thanks, this is exactly what I did. The script checks the data's timestamp first and only fetches new data if ten seconds have elapsed; after a new fetch it rewrites the data file. Thanks again; here's the code I used.
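A minimal sketch of the timestamp check described above (the file path and fetch function are hypothetical stand-ins, not the poster's actual code):

```python
import os
import time

DATA_FILE = "/tmp/gps_data.txt"  # hypothetical path
MAX_AGE = 10                     # seconds, per the TOS

def current_data(fetch):
    """Return GPS data, hitting the service only when the file is stale.

    `fetch` stands in for the real GPS-service request.
    """
    # First thing: check the data file's timestamp.
    try:
        stale = time.time() - os.path.getmtime(DATA_FILE) > MAX_AGE
    except OSError:
        stale = True             # no data file yet: always fetch
    if stale:
        with open(DATA_FILE, "w") as f:
            f.write(fetch())
    with open(DATA_FILE) as f:
        return f.read()
```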

I guess my last worry is about read/write interference if the system takes on a lot of users at once. Maybe I'll try gits' write-lock idea...
Dec 26 '09 #6
