Hi all,
Suppose a text file is given and we have to write a program that will
return a random line from the file. This can be easily done. But what
if the text file is too big and can't fit into the main memory
completely? In this case, how will we modify our code?
Also, if we are given a stream, instead of a file, then what changes
are required?
Thanks,
Anunay
Anunay said: Hi all,
Suppose a text file is given and we have to write a program that will return a random line from the file. This can be easily done. But what if the text file is too big and can't fit into the main memory completely? In this case, how will we modify our code?
Not at all, if you write it properly first time. You just need storage for
two lines: the currently "saved" line, and the most recently read line.
When you read line N into memory, copy it into the "saved" buffer with
probability 1/N. At the end of this process, the "saved" buffer will
contain a randomly selected line.
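In C, Heathfield's scheme might be sketched like this (a sketch only: the function name random_line and the MAXLINE cap are my own illustrative choices, not from the thread, and a real version would grow the buffer for arbitrarily long lines):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAXLINE 4096   /* arbitrary cap; a real version would grow the buffer */

/* Return the selected line (malloc'd, caller frees), or NULL on empty
   input. Each of the N lines ends up selected with probability 1/N. */
char *random_line(FILE *fp)
{
    char buf[MAXLINE];
    char *saved = NULL;
    unsigned long n = 0;

    while (fgets(buf, sizeof buf, fp) != NULL) {
        n++;
        /* Overwrite the saved line with probability 1/n; for n == 1
           this always fires, so saved is never NULL afterwards. */
        if (rand() / (RAND_MAX + 1.0) < 1.0 / n) {
            char *copy = malloc(strlen(buf) + 1);
            if (copy == NULL)
                break;               /* out of memory: keep what we have */
            strcpy(copy, buf);
            free(saved);
            saved = copy;
        }
    }
    return saved;
}
```

Call srand() once at startup, then random_line(stdin) serves equally well for a redirected file or a pipe, which also answers the stream half of the question.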
Also, if we are given a stream, instead of a file, then what changes are required?
None at all, if the way you wrote the first program was to take the input
from stdin if no command line argument was forthcoming.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
"Anunay" <an*********@gmail.com> writes: Hi all,
Suppose a text file is given and we have to write a program that will return a random line from the file. This can be easily done. But what if the text file is too big and can't fit into the main memory completely?
What is the problem? You read the file line by line till you are at
your random number.
In this case, how will we modify our code?
I do not have to change a line of code. Also, if we are given a stream, instead of a file, then what changes are required?
What do you mean by this?
Friedrich
--
Please remove just-for-news- to reply via e-mail.
Friedrich Dominicus <ju*****************@q-software-solutions.de> wrote: "Anunay" <an*********@gmail.com> writes:
Hi all,
Suppose a text file is given and we have to write a program that will return a random line from the file. This can be easily done. But what if the text file is too big and can't fit into the main memory completely? What is the problem? You read the file line by line till you are at your random number.
Requiring two passes of file reading: one to count the number of lines,
one to choose a random line. This can be used when reading a file
(although not optimal, of course), but is not possible with streams. Also, if we are given a stream, instead of a file, then what changes are required?
What do you mean by this?
I assume the OP means reading from standard input, instead of a file.
--
:wq
^X^Cy^K^X^C^C^C^C
On Wed, 17 May 2006 07:07:01 +0000,
Richard Heathfield <in*****@invalid.invalid> wrote
in Msg. <O-******************************@bt.com>
[...]
homework assignment DONE.
robert
Richard Heathfield wrote: Not at all, if you write it properly first time. You just need storage for two lines: the currently "saved" line, and the most recently read line. When you read line N into memory, copy it into the "saved" buffer with probability 1/N. At the end of this process, the "saved" buffer will contain a randomly selected line.
Hi Richard,
Can you please elaborate a little on this? I am confused about what
will happen at the fopen() function call. As the file is too big to
fit fully into main memory, what will fopen() return?
If fopen() returns SUCCESS, then where does the portion of the file
that could not be loaded into main memory get stored?
Thanks.
Robert Latest said: On Wed, 17 May 2006 07:07:01 +0000, Richard Heathfield <in*****@invalid.invalid> wrote in Msg. <O-******************************@bt.com>
[...]
homework assignment DONE.
No, I just did the thinking. He still gets to do the coding. If he can't,
and it looks very much as if this is the case, then presumably he still
won't get the marks.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Anunay said: Richard Heathfield wrote: Not at all, if you write it properly first time. You just need storage for two lines: the currently "saved" line, and the most recently read line. When you read line N into memory, copy it into the "saved" buffer with probability 1/N. At the end of this process, the "saved" buffer will contain a randomly selected line. Hi Richard,
Can you please elaborate a little on this?
I am very reluctant to do that.
I am confused about what will happen at the fopen() function call.
Either the file will open, or fopen will return NULL.
As the file is too big to fit fully into main memory, what will fopen() return?
You seem to be confused about the purpose of fopen. It does not read a file
into main memory.
If fopen() returns SUCCESS, then where does the portion of the file that could not be loaded into main memory get stored?
fopen() never returns SUCCESS. It returns either NULL or a pointer to an
in-memory semi-opaque data structure describing an open file.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Richard Heathfield wrote: As the file is too big to fit fully into main memory, what will fopen() return?
You seem to be confused about the purpose of fopen. It does not read a file into main memory.
Okay, that was a bad question. I meant to ask where the remaining
portion of the file will be stored, since it cannot be loaded in one
go.
Anunay said: Richard Heathfield wrote: As the file is too big to fit fully into main memory, what will fopen() return?
You seem to be confused about the purpose of fopen. It does not read a file into main memory.
Okay, that was a bad question. I meant to ask where the remaining portion of the file will be stored, since it cannot be loaded in one go.
On the disk, where it is already. That's what disks are for.
You just load one line of the file at a time. Having read it, you either
keep it or discard it, as I explained earlier, and then you read the next
one. And so on.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Richard Heathfield wrote: Anunay said: Suppose a text file is given and we have to write a program that will return a random line from the file. This can be easily done. But what if the text file is too big and can't fit into the main memory completely? In this case, how will we modify our code?
Not at all, if you write it properly first time. You just need storage for two lines: the currently "saved" line, and the most recently read line. When you read line N into memory, copy it into the "saved" buffer with probability 1/N. At the end of this process, the "saved" buffer will contain a randomly selected line.
Also, if we are given a stream, instead of a file, then what changes are required?
None at all, if the way you wrote the first program was to take the input from stdin if no command line argument was forthcoming.
Using Richard's clever algorithm and my ggets routine, you can
probably reduce it to a few lines plus a suitable function to decide
the "probability 1/N" result.
char *saved = NULL;
char *buffer;
size_t N = 0, savedlinenum = 0;

while (0 == ggets(&buffer)) {
    N++;
    if (probfunction(N)) {        /* true with probability 1/N */
        free(saved);
        saved = buffer;
        savedlinenum = N;
    }
    else
        free(buffer);
}
if (saved)
    puts(saved);
For probfunction, read the C FAQ. You can find ggets at:
<http://cbfalconer.home.att.net/download/ggets.zip>
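For what it's worth, a probfunction along the lines the C FAQ suggests (scaling rand() into a range rather than using rand() % n, which favours low-order bits on some implementations) might look like this; the name is just to match the fragment above, and this is a sketch rather than the FAQ's exact code:

```c
#include <stdlib.h>

/* Return nonzero with probability 1/n, for n >= 1. */
int probfunction(size_t n)
{
    /* Map rand() uniformly into [0, n) and test for a single slot. */
    return (size_t)(rand() / (RAND_MAX + 1.0) * n) == 0;
}
```

Note that for n == 1 it always returns nonzero, so the very first line is always saved, as the algorithm requires.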
--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
Ico wrote: Friedrich Dominicus <ju*****************@q-software-solutions.de> wrote: "Anunay" <an*********@gmail.com> writes:
Hi all,
Suppose a text file is given and we have to write a program that will return a random line from the file. This can be easily done. But what if the text file is too big and can't fit into the main memory completely? What is the problem? You read the file line by line till you are at your random number.
Requiring two passes of file reading: one to count the number of lines, one to choose a random line. This can be used when reading a file (although not optimal, of course), but is not possible with streams.
Since the random distribution was not specified, you don't need to do two
passes. This might mean there is no possibility of returning lines
towards the end of the file, or, if the file is only two lines, a high
probability of returning the last line, but it is still random. Also, if we are given a stream, instead of a file, then what changes are required?
What do you mean by this?
I assume the OP means reading from standard input, instead of a file.
Pick a probability for returning the current line, and keep going until
you either hit EOF or randomly decide to return the line.
--
Flash Gordon, living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro: http://clc-wiki.net/wiki/Intro_to_clc
Flash Gordon wrote: Ico wrote:
Friedrich Dominicus <ju*****************@q-software-solutions.de> wrote:
"Anunay" <an*********@gmail.com> writes:
Hi all,
Suppose a text file is given and we have to write a program that will return a random line from the file. This can be easily done. But what if the text file is too big and can't fit into the main memory completely?
What is the problem? You read the file line by line till you are at your random number.
Requiring two passes of file reading: one to count the number of lines, one to choose a random line. This can be used when reading a file (although not optimal, of course), but is not possible with streams.
Since the random distribution was not specified, you don't need to do two passes. This might mean there is no possibility of returning lines towards the end of the file, or, if the file is only two lines, a high probability of returning the last line, but it is still random.
Also, if we are given a stream, instead of a file, then what changes are required?
What do you mean by this?
I assume the OP means reading from standard input, instead of a file.
Pick a probability for returning the current line, and keep going until you either hit EOF or randomly decide to return the line.
Read Richard Heathfield's initial post on this thread
again; you'll find that his algorithm (1) requires no advance
knowledge of the line count, (2) makes just one pass over the
input, and (3) has equal probability of selecting any particular
line. It will, however, make one complete pass over all the
input even if it eventually selects the very first line read;
there is no "shortcut."
Point (3) may be a little difficult to grasp at first, so
let me offer a couple simple examples and then a formal proof.
What is the probability that the very last line is chosen? If
there are N lines altogether, the last line is chosen with
probability 1/N, as desired. How about the first line? It goes
into the "saved" buffer as soon as it's read (with probability
1/1), and then it survives there with probability
(1-1/2)*(1-1/3)*(1-1/4)*...*(1-1/N)
= (1/2)*(2/3)*(3/4)*...*((N-1)/N)
= 1/N
Now for the formal proof, by induction. If the file has
only one line, that line is selected with probability 1/1.
That's the "ground" step. Now suppose that the algorithm
works for N-line files, that is, that it chooses one of the
N lines with probability 1/N; we wish to show that it works
for files of N+1 lines. Imagine the situation just before
the final line is read: the first N lines have been processed
and one of them resides in the "saved" buffer. The final step
chooses the last line with probability 1/(N+1) -- which is as
it should be -- or chooses the already-saved line with probability
N/(N+1). Since the first N lines were processed by the N-line
algorithm (known to be correct), any particular one of those first
N lines is "saved" with probability 1/N, hence the probability that
that line is chosen from among N+1 is (1/N)*(N/(N+1)) = 1/(N+1).
The algorithm makes an unbiased "choice" from a 1-line file,
and if it works for N lines it also works for N+1. Therefore,
it works for all positive N. (To the limits of the random number
generator's precision, of course -- but that's an altogether
deeper topic.)
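The telescoping product can also be checked mechanically. This little function (my own naming, not from the thread) computes, for line K of an N-line file, exactly the product described above: entry probability 1/K times the survival factors (j-1)/j for each later line j.

```c
/* Probability that line k of an n-line file ends up selected:
   it enters the buffer with probability 1/k, then survives the
   challenge from each later line j with probability (j-1)/j. */
double select_prob(int k, int n)
{
    int j;
    double p = 1.0 / k;

    for (j = k + 1; j <= n; j++)
        p *= (double)(j - 1) / j;
    return p;   /* telescopes to 1.0 / n for every k */
}
```

For any k from 1 to 10 with n = 10, this comes out at 0.1 to within rounding, matching the 1/N claim.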
--
Eric Sosman es*****@acm-dot-org.invalid
On 2006-05-17, Anunay <an*********@gmail.com> wrote: Hi all,
Suppose a text file is given and we have to write a program that will return a random line from the file. This can be easily done. But what if the text file is too big and can't fit into the main memory completely? In this case, how will we modify our code?
Also, if we are given a stream, instead of a file, then what changes are required?
Thanks, Anunay
http://minnie.tuhs.org/UnixTree/V7/u...fortune.c.html
Note that, of course, '(1+(double)RAND_MAX)' needs to be used instead of
'32768.' in that code for modern systems, and it could stand to be
otherwise updated. But the idea is there.
Anunay wrote: Richard Heathfield wrote: As the file is too big to fit fully into main memory, what will fopen() return?
You seem to be confused about the purpose of fopen. It does not read a file into main memory.
Okay, that was a bad question. I meant to ask where the remaining portion of the file will be stored, since it cannot be loaded in one go.
As Richard pointed out, you are confused about the purpose of fopen().
The problem you are describing might be exemplified in the
following poorly designed program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

const char *path = "/tmp/some_file";

int
main(void)
{
    FILE *ifp;
    struct stat stat_buf;
    unsigned char *data_buf;

    /* fopen() doesn't care how big the file is. */
    if ((ifp = fopen(path, "r")) == NULL)
        perror(path), exit(EXIT_FAILURE);

    /* stat() will tell you how big the file is. */
    if (stat(path, &stat_buf) == -1)
        perror(NULL), exit(EXIT_FAILURE);

    /*
     * If you want to read the file all at once,
     * you'll need to allocate space. This will
     * fail if the file is too big.
     */
    if ((data_buf = malloc(stat_buf.st_size)) == NULL)
        fprintf(stderr, "No memory\n"), exit(EXIT_FAILURE);

    /* Process the file here. (look up fread()) */
    return EXIT_SUCCESS;
}
The reason this is poorly designed is precisely because
of the problem you are trying to describe. It is fairly stupid
to read the entire file in at once. Instead, you should only
read in chunks and process them as you go.
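A sketch of that better approach, with the function name and the 4096-byte buffer as arbitrary choices of mine: only one chunk is resident at a time, so the file's size no longer matters.

```c
#include <stdio.h>

/* Read fp to EOF in fixed-size chunks, returning the total byte
   count; only one chunk is ever in memory, however large the file. */
size_t process_chunks(FILE *fp)
{
    unsigned char chunk[4096];
    size_t n, total = 0;

    while ((n = fread(chunk, 1, sizeof chunk, fp)) > 0)
        total += n;   /* a real program would process chunk[0..n) here */
    return total;
}
```

The same shape works line by line with fgets() instead of fread(), which is all the random-line algorithm needs.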
Richard Heathfield wrote: Anunay said: Richard Heathfield wrote:
As the file is too big to be fully fit into main memory, what will fopen() return?
You seem to be confused about the purpose of fopen. It does not read a file into main memory.
Okay, that was a bad question. I meant to ask that where will the remaining portion of file be stored as it could not get loaded in one go?
On the disk, where it is already. That's what disks are for.
You just load one line of the file at a time. Having read it, you either keep it or discard it, as I explained earlier, and then you read the next one. And so on.
I think it is time to refer Anunay to K&R II and to his elementary
computers textbook.
Analogy for Anunay: When you open a book, do you immediately
transfer its content to your brain? Or do you start at page 1 and
read a sentence? Later do you read another sentence?
--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
On Wed, 17 May 2006 07:08:21 -0400,
CBFalconer <cb********@yahoo.com> wrote
in Msg. <44***************@yahoo.com> Analogy for Anunay: When you open a book
He doesn't. Or at least not C books.
robert
"Eric Sosman"> "Anunay" <an*********@gmail.com> writes: Suppose a text file is given and we have to write a program that will > return a random line from the file. This can be easily done. But what > if the text file is too big and can't fit into the main memory > completely?
[snipped Flash Gordon for thematic considerations]
[Heathfield:]
Not at all, if you write it properly first time. You just need storage for
two lines: the currently "saved" line, and the most recently read line.
When you read line N into memory, copy it into the "saved" buffer with
probability 1/N. At the end of this process, the "saved" buffer will
contain a randomly selected line.
Read Richard Heathfield's initial post on this thread again; you'll find that his algorithm (1) requires no advance knowledge of the line count, (2) makes just one pass over the input, and (3) has equal probability of selecting any particular line. It will, however, make one complete pass over all the input even if it eventually selects the very first line read; there is no "shortcut."
Point (3) may be a little difficult to grasp at first, so let me offer a couple simple examples and then a formal proof. What is the probability that the very last line is chosen? If there are N lines altogether, the last line is chosen with probability 1/N, as desired. How about the first line? It goes into the "saved" buffer as soon as it's read (with probability 1/1), and then it survives there with probability
(1-1/2)*(1-1/3)*(1-1/4)*...*(1-1/N) = (1/2)*(2/3)*(3/4)*...*((N-1)/N) = 1/N
Now for the formal proof, by induction. If the file has only one line, that line is selected with probability 1/1. That's the "ground" step. Now suppose that the algorithm works for N-line files, that is, that it chooses one of the N lines with probability 1/N; we wish to show that it works for files of N+1 lines. Imagine the situation just before the final line is read: the first N lines have been processed and one of them resides in the "saved" buffer. The final step chooses the last line with probability 1/(N+1) -- which is as it should be -- or chooses the already-saved line with probability N/(N+1). Since the first N lines were processed by the N-line algorithm (known to be correct), any particular one of those first N lines is "saved" with probability 1/N, hence the probability that that line is chosen from among N+1 is (1/N)*(N/(N+1)) = 1/(N+1).
The algorithm makes an unbiased "choice" from a 1-line file, and if it works for N lines it also works for N+1. Therefore, it works for all positive N. (To the limits of the random number generator's precision, of course -- but that's an altogether deeper topic.)
This proof is at the point where the other people in the room either shake
or nod their head and comment. First, I find it notable that there exist
proofs in theoretical comp sci and that this continues to be fertile ground
for unfolding scientific truth.
I find no error in the induction. One of the reasons that I find no error
could be that I've all but forgotten conditional probability. If that is
sound, the telescoping looks correct. I've taken the liberty of sending
this to a professor with whom I'm acquainted who authored a book in
statistical paradoxes to ask for comment. There will always be a
counterexample somewhere; the question is whether it's relevant and how far
afield. If relevant, one would hand Mr. Sosman the chalk back to elaborate.
Mr. Heathfield's algorithm would make the act of observation. Whenever that
happens, the Law of Unintended Consequences gets a phone call. joe
Joe Smith said:
<snip> Mr. Heathfield's algorithm [...]
Um, I think it's in Knuth somewhere (although I had a look just now, and
couldn't find it in TAOCP2.)
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
"Richard Heathfield" <in*****@invalid.invalid> wrote in message
news:P_******************************@bt.com... Joe Smith said:
<snip>
Mr. Heathfield's algorithm [...]
Um, I think it's in Knuth somewhere (although I had a look just now, and couldn't find it in TAOCP2.)
I wish I had that handy, as I recall that he offers something like $2.56 for
each erratum. I believe that relatively close to the beginning, Knuth
claims that Leonardo de Pisa gives the following numbers: 0, 1, 1, 2, 3, 5,
8, 13 and so on. In point of fact, Leonardo in _Liber Abaci_ gives 0, 1,
2, 3, 5...., omitting one of the ones. I wouldn't be surprised if
Professor Knuth has already heard of this, but if he hasn't and I read the
passage correctly, then .... joe
Richard Heathfield wrote: Joe Smith said:
<snip>
Mr. Heathfield's algorithm [...]
Um, I think it's in Knuth somewhere (although I had a look just now, and couldn't find it in TAOCP2.)
Algorithm 3.4.2R is a more general version, selecting
a sample of size n possibly greater than 1. But be sure
to read the third edition of TAOCP II; the second edition
also has an Algorithm 3.4.2R for the same purpose, but it's
a different algorithm!
--
Eric Sosman es*****@acm-dot-org.invalid
Joe Smith wrote: Eric Sosman wrote: Now for the formal proof, by induction. If the file has only one line, that line is selected with probability 1/1. That's the "ground" step. Now suppose that the algorithm works for N-line files, that is, that it chooses one of the N lines with probability 1/N; we wish to show that it works for files of N+1 lines. Imagine the situation just before the final line is read: the first N lines have been processed and one of them resides in the "saved" buffer. The final step chooses the last line with probability 1/(N+1) -- which is as it should be -- or chooses the already-saved line with probability N/(N+1). Since the first N lines were processed by the N-line algorithm (known to be correct), any particular one of those first N lines is "saved" with probability 1/N, hence the probability that that line is chosen from among N+1 is (1/N)*(N/(N+1)) = 1/(N+1).
The algorithm makes an unbiased "choice" from a 1-line file, and if it works for N lines it also works for N+1. Therefore, it works for all positive N. (To the limits of the random number generator's precision, of course -- but that's an altogether deeper topic.)
I find no error in the induction. One of the reasons that I find no error could be that I've all but forgotten conditional probability. If that is sound, the telescoping looks correct. I've taken the liberty of sending this to a professor with whom I'm acquainted who authored a book in statistical paradoxes to ask for comment.
There is no error. This is a great example of how good induction is.
Another way of writing the inductive step is to note that all of
the first N lines have equal probability of being selected; and
selecting or not selecting the new last line does not affect this,
so the new probability of one of the first N lines remaining selected
is (1 - P(new line)) / N .
Old Wolf wrote: Joe Smith wrote:
Eric Sosman wrote:
Now for the formal proof, by induction. If the file has only one line, that line is selected with probability 1/1. That's the "ground" step. Now suppose that the algorithm works for N-line files, that is, that it chooses one of the N lines with probability 1/N; we wish to show that it works for files of N+1 lines. Imagine the situation just before the final line is read: the first N lines have been processed and one of them resides in the "saved" buffer. The final step chooses the last line with probability 1/(N+1) -- which is as it should be -- or chooses the already-saved line with probability N/(N+1). Since the first N lines were processed by the N-line algorithm (known to be correct), any particular one of those first N lines is "saved" with probability 1/N, hence the probability that that line is chosen from among N+1 is (1/N)*(N/(N+1)) = 1/(N+1).
The algorithm makes an unbiased "choice" from a 1-line file, and if it works for N lines it also works for N+1. Therefore, it works for all positive N. (To the limits of the random number generator's precision, of course -- but that's an altogether deeper topic.)
I find no error in the induction. One of the reasons that I find no error could be that I've all but forgotten conditional probability. If that is sound, the telescoping looks correct. I've taken the liberty of sending this to a professor with whom I'm acquainted who authored a book in statistical paradoxes to ask for comment.
There is no error. This is a great example of how good induction is.
Another way of writing the inductive step is to note that all of the first N lines have equal probability of being selected; and selecting or not selecting the new last line does not affect this, so the new probability of one of the first N lines remaining selected is (1 - P(new line)) / N .
The proof can also be done non-inductively, by considering
the probability that any particular line is selected. The Kth
line is chosen if and only if
- When it is read it goes into the "saved" buffer, and
- It survives in the "saved" buffer as all the remaining
lines are read.
The Kth line enters the "saved" buffer with probability 1/K,
survives the challenge from the next line with probability
K/(K+1), from the line after that with probability (K+1)/(K+2),
and so on, surviving the final line's challenge with probability
(N-1)/N. Multiply the probabilities together, and each denominator
except the last cancels with the numerator of the fraction right
after it, leaving 1/N as the probability that the Kth line is
chosen, for any 0 < K <= N.
The non-inductive proof is shorter, and may be more appealing
to some. To me, though, induction seems a more natural way to
prove things about iterative processes; the correspondence of
the inductive step with the iterative step is usually compelling.
--
Eric Sosman es*****@acm-dot-org.invalid
"Old Wolf" <ol*****@inspire.net.nz> wrote in message
news:11**********************@j33g2000cwa.googlegroups.com... Joe Smith wrote: Eric Sosman wrote: Now for the formal proof, by induction. If the file has only one line, that line is selected with probability 1/1. That's the "ground" step. Now suppose that the algorithm works for N-line files, that is, that it chooses one of the N lines with probability 1/N; we wish to show that it works for files of N+1 lines. Imagine the situation just before the final line is read: the first N lines have been processed and one of them resides in the "saved" buffer. The final step chooses the last line with probability 1/(N+1) -- which is as it should be -- or chooses the already-saved line with probability N/(N+1). Since the first N lines were processed by the N-line algorithm (known to be correct), any particular one of those first N lines is "saved" with probability 1/N, hence the probability that that line is chosen from among N+1 is (1/N)*(N/(N+1)) = 1/(N+1).
The algorithm makes an unbiased "choice" from a 1-line file, and if it works for N lines it also works for N+1. Therefore, it works for all positive N. (To the limits of the random number generator's precision, of course -- but that's an altogether deeper topic.)
I find no error in the induction. One of the reasons that I find no error could be that I've all but forgotten conditional probability. If that is sound, the telescoping looks correct. I've taken the liberty of sending this to a professor with whom I'm acquainted who authored a book in statistical paradoxes to ask for comment.
There is no error. This is a great example of how good induction is.
Another way of writing the inductive step is to note that all of the first N lines have equal probability of being selected; and selecting or not selecting the new last line does not affect this, so the new probability of one of the first N lines remaining selected is (1 - P(new line)) / N .
I would call that more an appeal to strong induction, whereas Mr. Sosman's
is to weak induction. Of course, that wouldn't change the outcome, just
what is "known" during the inductive step. BTW, you were right elsewhere
that I was mistakenly talking about the Euclidean Algorithm as the Reverse.
Why you think public key cryptography is OT here is beyond me, except that
that might mean you only discuss it on your Own Terms. joe