Bytes IT Community

maximum file size of FOPEN

I tried to open some large files on my computer (512 MB of RAM).

1. Is there a limit on the size of file that fopen() can open?

2. If I have a file that is larger than the maximum size, how can I
open it?

Nov 14 '05 #1
15 Replies


In article <11**********************@z14g2000cwz.googlegroups.com>,
uremae <ur****@gmail.com> wrote:
I tried to open some large files in my computer. (ram 512MB)
1. is there limitation of file size FOPEN?
No, not unless there is something unusual and system-specific.

There could, for example, potentially be issues with
opening a device or pseudo-device, if the device driver for some
reason decided it needed to take a snapshot into memory
(e.g., opening /dev/core might take a snapshot of kernel state).
Any such behaviour would be outside the bounds of C: such
behaviour does not happen for plain files.

2. if I have a file that is larger than the maximum size, how can I
open the file?


Any such matter would be system specific. fopen() is all that
C itself provides.

It isn't usually a problem to *open* an existing large file: the
problem is usually in *reading* the large file. Some systems
are only able to read to about the 2 gigabyte mark, or are
allowed to read indefinitely but cannot position (ftell/fseek)
beyond 2 gigabytes without using system-specific calls.
--
"Never install telephone wiring during a lightning storm." -- Linksys
Nov 14 '05 #2


"uremae" <ur****@gmail.com> wrote

1. is there limitation of file size FOPEN?

2. if I have a file that is larger than the maximum size, how can I
open the file?

There's usually some limit, though it might be so large that there is no
chance of exceeding it.
If fopen() fails to open a huge file, or the read functions won't let you
read it all, the best solution is to look for lower-level, system-specific
calls. Sometimes you might be able to redesign your files so that they fall
within the limit.

However, make sure that the size of the file really is the problem, and
not that there is some hardware problem with the disk drive, or that the
file is corrupt in some way.
Nov 14 '05 #3

In article <d6**********@nwrdmz02.dmz.ncs.ea.ibs-infra.bt.com>,
Malcolm <re*******@btinternet.com> wrote:
"uremae" <ur****@gmail.com> wrote
1. is there limitation of file size FOPEN?

There's usually some limit, though it might be so large that there is no
chance of exceeding it.


Could you give us an example of a system with such a limit, Malcolm ?

I have certainly run into systems whose filesize was limited
(with different limits for different filesystem types), but I
cannot think of anything in C or POSIX.1 or any implementation
that I have -encountered- that would fopen() for opening a
[plain] file for read once the file had made it on to the
filesystem.

Ability to read all of a large file is questionable, though:
the internal file position counter can overflow (e.g., if one
is NFS or SMB'ing over the contents of a file which resides on
a remote filesystem that supports much larger files than the local
system expects.)

And ability to position/ reposition in a large file is quite
questionable seeing as fseek() is limited to taking a 'long' for the
offset...
--
Beware of bugs in the above code; I have only proved it correct,
not tried it. -- Donald Knuth
Nov 14 '05 #4

On 19 May 2005 21:57:05 GMT, in comp.lang.c ,
ro******@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
In article <d6**********@nwrdmz02.dmz.ncs.ea.ibs-infra.bt.com>,
Malcolm <re*******@btinternet.com> wrote:
"uremae" <ur****@gmail.com> wrote

1. is there limitation of file size FOPEN?

There's usually some limit, though it might be so large that there is no
chance of exceeding it.


Could you give us an example of a system with such a limit, Malcolm ?

I have certainly run into systems whose filesize was limited
(with different limits for different filesystem types), but I
cannot think of anything in C or POSIX.1 or any implementation
that I have -encountered- that would fopen() for opening a
[plain] file for read once the file had made it on to the
filesystem.


The file could exist on network attached storage which supports larger
file sizes than the local OS - f'rexample you could have Win95
locally, and be mapping a W2k3 or *nix drive.
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>
Nov 14 '05 #5

Then how can I read a file that is larger than a gigabyte, and with
what system-specific calls, on Linux or Unix?

Nov 14 '05 #6

In article <p7********************************@4ax.com>,
Mark McIntyre <ma**********@spamcop.net> wrote:
On 19 May 2005 21:57:05 GMT, in comp.lang.c ,
ro******@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
In article <d6**********@nwrdmz02.dmz.ncs.ea.ibs-infra.bt.com>,
Malcolm <re*******@btinternet.com> wrote:
"uremae" <ur****@gmail.com> wrote 1. is there limitation of file size FOPEN? There's ususally some limit, though it might be so large that there is no
chance of exceeding it.
Could you give us an example of a system with such a limit, Malcolm ?

The file could exist on network attached storage which supports larger
file sizes than the local OS


I see that Malcolm was correct. Checking the opengroup.org fopen() man
page, I see that one of the defined error returns is

[EOVERFLOW]
The named file is a regular file and the size of the file
cannot be represented correctly in an object of type off_t.
--
"Never install telephone wiring during a lightning storm." -- Linksys
Nov 14 '05 #7

In article <11*********************@g43g2000cwa.googlegroups.com>,
uremae <ur****@gmail.com> wrote:
then how can I read the large file that is larger than a gigabyte with
what system-specific calls in linux or unix.


You are guaranteed up to (2 gigabytes - 1) in any conforming
unix system (provided the filesystem supports files that large).

There is no standard unix interface for reading files whose
size cannot be represented as off_t .

In Linux, if your version of Linux has Large Files System support,
then you can open() using the O_LARGEFILE flag.
--
"Mathematics? I speak it like a native." -- Spike Milligan
Nov 14 '05 #8

Walter Roberson wrote:
Ability to read all of a large file is questionable, though:
the internal file position counter can overflow (e.g., if one
is NFS or SMB'ing over the contents of a file which resides on
a remote filesystem that supports much larger files than the local
system expects.)
AFAICS, the counter can't overflow per se (this implies undefined
behaviour). Rather, an attempt to get the current file position can
fail.
And ability to position/ reposition in a large file is quite
questionable seeing as fseek() is limited to taking a 'long'
for the offset...


You are allowed to perform multiple seeks and keep your own file
position data.

--
Peter

Nov 14 '05 #9

uremae <ur****@gmail.com> wrote:
then how can I read the large file that is larger than a gigabyte with
what system-specific calls in linux or unix.


The folks over in comp.os.linux.development.apps and comp.unix.programmer
would be more helpful since this is a really platform dependent thing.

On more recent Linux platforms you would typically define _FILE_OFFSET_BITS
to 64 before you include _any_ standard or system header. This would then
automagically select the 64 bit interface allowing for > 2GB file access for
those interfaces which normally wouldn't be able to handle files that large.

For example:

gcc -D_FILE_OFFSET_BITS=64 -o main main.c

- Bill
Nov 14 '05 #10

1. It depends on the system. If the file can exist on your computer,
FOPEN can open it.

For FREAD, FWRITE, etc., the limitation on file size is size_t, which also
depends on the system. On most systems, size_t is a 32-bit
unsigned integer; on such a system, the limitation is 4 GB - 1.

Nov 14 '05 #11


In article <11**********************@g14g2000cwa.googlegroups.com>,
<zh********@163.com> wrote:
For FREAD, FWRITE, etc., the limitation on file size is size_t, which also
depends on the system. On most systems, size_t is a 32-bit
unsigned integer; on such a system, the limitation is 4 GB - 1.


According to opengroup.org the limit is off_t, which is a type
specifically for file sizes -- and it is a signed type.
lseek() can return (off_t)-1 when the seek is not meaningful, so
off_t cannot be an unsigned type.

opengroup.org classifies off_t as an "extended signed integral type".
An "extended" integral type isn't limited to the usual short/int/long
types, provided that the type has "the same properties".
[For this purpose, it helps to remember that the UNIX98 specification
predates C99's formalization of the existence of 'long long'.]
--
Oh, to be a Blobel!
Nov 14 '05 #13

uremae wrote:
then how can I read the large file that is larger than a gigabyte with
what system-specific calls in linux or unix.


It's also worth mentioning that if stream processing is all
you're after then you can read a file of any size through a pipe.
For instance:

cat hugefile | processit > outfile

should work no matter how large hugefile is.
Processit just reads from stdin and writes to stdout
using standard C I/O functions. This assumes that
"cat" can read the huge file, but I've not yet
encountered a linux/unix system where cat cannot
do so.

Regards,

David Mathog
ma****@caltech.edu
Nov 14 '05 #14

On 19 May 2005 21:57:05 GMT, ro******@ibd.nrc-cnrc.gc.ca (Walter
Roberson) wrote:
In article <d6**********@nwrdmz02.dmz.ncs.ea.ibs-infra.bt.com>,
Malcolm <re*******@btinternet.com> wrote:
"uremae" <ur****@gmail.com> wrote
1. is there limitation of file size FOPEN?

There's usually some limit, though it might be so large that there is no
chance of exceeding it.


Could you give us an example of a system with such a limit, Malcolm ?

I have certainly run into systems whose filesize was limited
(with different limits for different filesystem types), but I
cannot think of anything in C or POSIX.1 or any implementation
that I have -encountered- that would fopen() for opening a


IAYM (I assume you mean) it would _fail_ or _reject_ or somesuch on the fopen.
[plain] file for read once the file had made it on to the
filesystem.

Not an actual example, but one idea of how someone might conceivably
get into such a fix:

Multics had normal files limited to 256K words = 1 MB (of 9-bit bytes), and
there was no easy way to increase this because files were tied to (most)
segments in the virtual memory. For people who needed bigger there
was a feature/kludge called "multi segment files" which treated a
directory -- IIRC specially marked -- containing multiple actual files
as one big file, but support for this feature was apparently uneven,
working in some programs and not others. (My information on this is
secondhand; I was a lowly student user so my quota didn't reach even
to one full segment much less many and my access didn't extend to any,
er, "interesting" file areas.)

Now I don't know if there was (ever) a C for Multics -- especially
considering its origin -- but if there was or were to be I can imagine
that it might fail for MSFs, and thus apparently for "large files".

- David.Thompson1 at worldnet.att.net
Nov 14 '05 #15

There was a C for Multics, late in its life: 1986, Multics release 12.0,
documented in Honeywell manual HH07.

You are mistaken about files on Multics. Language-library-provided
files for PL/I, FORTRAN, COBOL, and C were all supported by the Multics
library I/O module vfile_, which supported various access modes on
objects stored in the file system, called "files." Small instances of
files were indeed stored as "single segment files" and larger ones as
"multi segment files." The system and the runtime library attempted to
make the transition from little files to big invisible to the
application program. As in many systems, it was possible to look behind
the curtain and write code that accessed underlying storage directly in
nonstandard ways. But standard conforming programs in the application
languages would work fine with files of sizes larger than 1MB.

Nov 14 '05 #16

This discussion thread is closed; replies have been disabled.