Bytes | Software Development & Data Engineering Community

Pausing/Waiting in C

Hi everyone,
I'm developing an application in C; basically a linked list, with a series
of "events" to be popped off, separated by a command to pause reading off
the next event in the list. It has been some time since I last did C, and
that was the first K&R version! Is there a command to pause an app for
a period of time? All the commands I am familiar with specify pauses
for integer numbers of seconds, and what I would like is fractions of a
second, preferably milliseconds if possible.

TIA

Paul

--
----
Home: http://www.paullee.com
Woes: http://www.dr_paul_lee.btinternet.co.uk/zzq.shtml
Jan 10 '07

"Nelu" <sp*******@gmail.com>
<snip>
int abuseSTDIN()
No, do not abuse stdin. LS

Jan 11 '07 #11
On Wed, 10 Jan 2007 16:06:01 -0800, Keith Thompson <ks***@mib.org>
wrote:
>Kwebway Konongo <pa**@pNaOuSlPlAeMe.com> writes:
>I'm developing an application in C; basically a linked list, with a series
>of "events" to be popped off, separated by a command to pause reading off
>the next event in the list. It has been some time since I last did C, and
>that was the first K&R version! Is there a command to pause an app for
>a period of time, as all the commands I am familiar with specify pauses
>for integer numbers of seconds, and what I would like is fractions of a
>second, preferably milliseconds if possible

There is no good portable way to do this.

(The clock() function returns an indication of the amount of CPU time
your program has consumed. You might be tempted to write a loop that
executes until the result of the clock() function reaches a certain
value. Resist this temptation. Though it uses only standard C
features, it has at least two major drawbacks: it measures CPU time,
not wall clock time, and this kind of busy loop causes your program to
waste CPU time, possibly affecting other programs on the system.)
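To make the pitfall concrete, the kind of busy loop being warned against might look like this (the function name is my own, not from the thread); it uses only standard C, but it spins on CPU time instead of yielding the processor:

```c
#include <assert.h>
#include <time.h>

/* Busy-wait for roughly `ms` milliseconds of CPU time.
   Shown only to illustrate the pitfall: clock() measures CPU time,
   not wall-clock time, and the loop wastes the processor while it spins. */
void busy_wait_ms(long ms)
{
    clock_t end = clock() + (clock_t)(ms * (CLOCKS_PER_SEC / 1000.0));
    while (clock() < end)
        ;  /* spin: antisocial on any multitasking system */
}
```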

However, most operating systems will provide a good way to do this.
Ask in a newsgroup that's specific to whatever OS you're using, such
as comp.unix.programmer or comp.os.ms-windows.programmer.win32 -- but
see if you can find an answer in the newsgroup's FAQ first.
s/most operating systems/most implementations/

The OP mentioned nothing about using an OS, and not using an OS
is entirely conformant to the C Standard. Many, if not most,
implementations targeted at embedded processors with no OS running
still provide good ways to do this.

Best regards
--
jay
Jan 11 '07 #12
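As the answers above note, the portable-C toolbox stops here. What the system-specific route usually looks like is something along these lines (a sketch assuming POSIX nanosleep() on Unix-like systems and Win32 Sleep() elsewhere; neither function is part of standard C):

```c
#include <assert.h>
#include <errno.h>
#include <time.h>
#ifdef _WIN32
#include <windows.h>
#endif

/* Sleep for `ms` milliseconds using whatever the platform offers.
   Returns 0 on success, -1 on error. Not standard C. */
int msleep(long ms)
{
#ifdef _WIN32
    Sleep((DWORD)ms);              /* Win32, millisecond argument */
    return 0;
#else
    struct timespec ts;
    ts.tv_sec  = ms / 1000;
    ts.tv_nsec = (ms % 1000) * 1000000L;
    while (nanosleep(&ts, &ts) == -1) {  /* restart if a signal interrupts */
        if (errno != EINTR)
            return -1;
    }
    return 0;
#endif
}
```

Unlike a busy loop, both calls hand the CPU back to the scheduler for the duration of the wait.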

"Nelu" <sp*******@gmail.com> wrote in message
news:50*************@mid.individual.net...
Keith Thompson <ks***@mib.org> wrote:
Nelu <sp*******@gmail.com> writes:
Keith Thompson <ks***@mib.org> wrote:
Victor Silva <vf******@gmail.com> writes:
Kwebway Konongo wrote:
>>>I'm developing an application in C; basically a linked list, with a series
>>>of "events" to be popped off, separated by a command to pause reading off
>>>the next event in the list. It has been some time since I last did C, and
>>>that was the first K&R version! Is there a command to pause an app for
>>>a period of time, as all the commands I am familiar with specify pauses
>>>for integer numbers of seconds, and what I would like is fractions of a
>>>second, preferably milliseconds if possible

Maybe you can use something like sleep().

Maybe he can, but there is no sleep() function in the C standard
library (and the system-specific sleep() functions I'm familiar with
don't meet his requirements).

If the phrase "something like sleep()" is intended to exclude sleep()
itself, then you're probably right, but it's still system-specific.
(Hint: a system's documentation for sleep() might have links to other
similar functions.)
Is it ok to use stdin like this:

int abuseSTDIN(void) {
    char a[2];
    if (EOF == ungetc('\n', stdin) || EOF == ungetc('a', stdin)) {
        return EOF;
    }
    if (NULL == fgets(a, 2, stdin)) {
        return EOF;
    }
    return !EOF;
}

?
Is it ok? I'd say definitely not (which isn't *necessarily* meant to
imply that it wouldn't work).

The standard guarantees only one character of pushback. I suppose you
could ungetc() a single '\n' character and read it back with fgets().

I wasn't sure whether pushing '\n' back to stdin would make fgets
return.

What is the result supposed to indicate? Since EOF is non-zero, !EOF
is just 0.

Just EOF if it failed. I could've used 0, I thought it was more
suggestive this way.
If yes, this can be run for a number of seconds, in a loop with calls
to mktime and difftime to get some kind of sub-second resolution (like
CLOCKS_PER_SEC). That approximation can be used to implement a kind of
a sleep function using milliseconds later (unless the function returns
EOF in which case the function may not work).
Also, by using the I/O system it may be more CPU friendly although
it will by no means replace a system-specific sleep function.
I think your idea is to execute this function a large number of times,
checking the value of time() before and after the loop, and using the
results for calibration, to estimate the time the function takes to
execute. I don't see how mktime() applies here. There's no guarantee
that the function will take the same time to execute each time you
call it.

I meant to say time(). I know there are no guarantees that's why I said
approximation.
The value returned by time() on most(?) implementations is in seconds.
Half of your statement seems to be about time(), the other half about
clock().
>
Pushing back a character with ungetc() and then reading it with
fgets() is not likely to cause any physical I/O to take place, so this
method is likely to be as antisocially CPU-intensive as any other busy
loop.

Yes, you're right.

--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)

Jan 11 '07 #13
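The single guaranteed pushback Keith mentions can be demonstrated on its own (my sketch, not code from the thread): push one '\n' back onto stdin and read it straight back with fgets().

```c
#include <assert.h>
#include <stdio.h>

/* Push back one '\n' -- the only amount of pushback the standard
   guarantees -- and read it again with fgets().
   Returns 0 on success, EOF on failure. */
int one_char_pushback(void)
{
    char buf[2];
    if (ungetc('\n', stdin) == EOF)
        return EOF;
    if (fgets(buf, sizeof buf, stdin) == NULL)
        return EOF;
    return (buf[0] == '\n') ? 0 : EOF;
}
```

Since fgets() stops at the newline, no physical read from the terminal or file should be needed.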
Barry <ba****@nullhighstream.net> wrote:
>
>"Nelu" <sp*******@gmail.com> wrote in message
>news:50*************@mid.individual.net...
>>Keith Thompson <ks***@mib.org> wrote:
<snip>
>>I meant to say time(). I know there are no guarantees that's why I said
>>approximation.
>
>The value returned by time() on most(?) implementations is in seconds.
>Half of your statement seems to be about time(), the other half about
>clock().
No, I was talking about counting how many times the function gets called
in a number of seconds. It will likely be called many more times than
there are seconds, so you get a sub-second approximation for one call
or for a number of calls, and it will give you something similar to
CLOCKS_PER_SEC; you can name it CALLS_PER_SEC, although it's just a very
bad approximation.
--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)

Jan 11 '07 #14
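For what it's worth, the calibration Nelu describes (counting calls between two values of time()) might be sketched like this; the names are mine, and as Keith points out the estimate is unreliable because each call can take a different amount of time:

```c
#include <assert.h>
#include <time.h>

static void noop(void) { /* stands in for the function being timed */ }

/* Count how many times `fn` can be called in `secs` whole seconds,
   yielding a crude CALLS_PER_SEC figure. A very rough estimate:
   per-call time varies with system load, caching, and scheduling. */
unsigned long calls_per_sec(void (*fn)(void), unsigned secs)
{
    time_t start = time(NULL);
    while (time(NULL) == start)      /* align to a second boundary */
        ;
    start = time(NULL);
    unsigned long n = 0;
    while (difftime(time(NULL), start) < secs) {
        fn();
        n++;
    }
    return n / secs;
}
```

Note that the calibration loop itself is a busy loop, so it has the same CPU-hogging problem the rest of the thread warns about.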

David T. Ashley wrote:
"Kwebway Konongo" <pa**@pNaOuSlPlAeMe.com> wrote in message
news:97*************@bt.com...
Hi everyone,
I'm developing an application in C; basically a linked list, with a series
of "events" to be popped off, separated by a command to pause reading off
the next event in the list. It has been some time since I last did C, and
that was the first K&R version! Is there a command to pause an app for
a period of time, as all the commands I am familiar with specify pauses
for integer numbers of seconds, and what I would like is fractions of a
second, preferably milliseconds if possible

OS-dependent question.

Any form of spin-wait is bad programming practice (but I suppose it would
work).
Sounds like the OP is asking about something I'm trying to do...

Incidentally, why is "spin-wait" bad? In my case, I'm trying to do a
pseudo-realtime application on a dedicated server, with latencies until
the next event is due to occur.
Sadly, the boss doesn't realise that my realtime app development
experience is precisely zero!

Jan 11 '07 #15
"Trev" <t.*********@btinternet.com> wrote in message
news:11**************@77g2000hsv.googlegroups.com...
>
David T. Ashley wrote:
>"Kwebway Konongo" <pa**@pNaOuSlPlAeMe.com> wrote in message
>news:97*************@bt.com...
<snip>

OS-dependent question.

Any form of spin-wait is bad programming practice (but I suppose it would
work).

Sounds like the OP is asking about something I'm trying to do...

Incidentally, why is "spin-wait" bad? In my case, I'm trying to do a
pseudo-realtime application on a dedicated server, with latencies until
the next event is due to occur.
Sadly, the boss doesn't realise that my realtime app development
experience is precisely zero!
Spin-wait is bad on a server because it chews up (i.e. consumes towards no
productive purpose) CPU bandwidth that would best be returned to the
operating system.

For example, on Linux, there might be a daemon that needs to wait for 10
minutes. "sleep(600) ;" allows the OS to do other things for 10 minutes. A
spin-wait will consume a large fraction of CPU bandwidth--potentially as
much as 100%--to do nothing but repeatedly check the time. On a shared
system -- and any server is shared, at least between your process and the
operating system -- it is horribly inefficient.

Now, on an embedded system, spin-wait may be valid. In fact, the most
common software architecture for small systems is just to spin-wait until
the next time tick and then do the things you need to do. That is OK
because you're the only process on the system, and there is nobody else who
can make better use of the CPU.
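The tick-driven architecture described above can be sketched as follows; on real hardware the tick counter would be bumped by a timer interrupt, which is simulated here so the sketch runs on a hosted system:

```c
#include <assert.h>

volatile unsigned long tick = 0;   /* on real hardware: incremented by a timer ISR */

/* Stand-in for the timer interrupt so the sketch runs anywhere. */
static void simulated_timer_isr(void) { tick++; }

/* Main loop of a small tick-driven system: spin-wait until the next
   tick, then do the periodic work. Returns how many ticks were handled. */
int run_ticks(int n)
{
    unsigned long last_tick = tick;
    int handled = 0;
    while (handled < n) {
        simulated_timer_isr();     /* asynchronous on real hardware */
        while (tick == last_tick)  /* the spin-wait on the tick counter */
            ;
        last_tick = tick;
        handled++;                 /* stands in for the real periodic work */
    }
    return handled;
}
```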

If you have any further questions or observations, please write me directly
at dt*@e3ft.com and answer my SPAM filtering system's automatic reply. I
might know one or two things about small embedded systems.
Jan 11 '07 #16

"Nelu" <sp*******@gmail.com> wrote in message
news:50*************@mid.individual.net...
>Barry <ba****@nullhighstream.net> wrote:
<snip>
>No, I was talking about counting how many times the function gets called
>in a number of seconds. It will likely be called a lot more times than
>the number of seconds so you get under a second approximations for a
>call or a number of calls and it will give you something similar to
>CLOCKS_PER_SEC, you can name it CALLS_PER_SEC, although it's just a very
>bad approximation.
I only read your post and gave you too much
credit. Your explanation is worse than what I thought
you were doing.
Jan 11 '07 #17
Barry <ba****@nullhighstream.net> wrote:
>
>"Nelu" <sp*******@gmail.com> wrote in message
>news:50*************@mid.individual.net...
<snip>
>I only read your post and gave you too much
>credit. Your explanation is worse than what I thought
>you were doing.
What did you think I was doing?

--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)

Jan 11 '07 #18

"David T. Ashley" <dt*@e3ft.com> wrote in message
news:Wb**********************@giganews.com...
<snip>
>Now, on an embedded system, spin-wait may be valid. In fact, the most
>common software architecture for small systems is just to spin-wait until
>the next time tick and then do the things you need to do. That is OK
>because you're the only process on the system, and there is nobody else
>who can make better use of the CPU.
Of course we have gotten so far off topic for clc that it doesn't matter.
But every embedded system I have worked on (and you have used all
of them, directly or indirectly :-)) also responds to hardware interrupts.
Jan 11 '07 #19
Barry wrote:
>
>"user923005" <dc*****@connx.com> wrote in message
>news:11**************@i56g2000hsf.googlegroups.com...
>>Kwebway Konongo wrote:
<snip>
>>From the C-FAQ:
>>[snip]
>Most of your response has nothing to do with C. You should have
>just referred the OP to an appropriate newsgroup.
He gave the answer that's in the comp.lang.c FAQ on the matter. Try to
pay attention.


Brian

Jan 11 '07 #20
