I'm writing a C++ program that has many (100+) threads reading and writing
files simultaneously. It works well if efficiency is not a concern, but
file I/O seems to be the bottleneck.
This is my code to read from and write to files:
#include <fstream>
#include <sstream>
#include <string>
using namespace std;
bool write(const string &path, const string &contents,
ios::openmode mode)
{
ofstream out;
bool status;
out.open(path.c_str(), mode);
if (!out.fail()) {
out << contents;
}
status = !out.fail();
out.close();
return status;
}
bool read(const string &path, string &contents)
{
ifstream in;
stringstream ss;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
ss << in.rdbuf();
contents = ss.str();
}
status = !in.fail();
in.close();
return status;
}
I have few clues about how to optimize my code; any direction would be
greatly appreciated.
bool read(const string &path, string &contents)
{
ifstream in;
stringstream ss;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
ss << in.rdbuf();
contents = ss.str();
}
status = !in.fail();
in.close();
return status;
}
I just tried to use fopen(), fread(), fclose() to read file contents:
bool read(const string &path, string &contents)
{
FILE *fp;
char buf[2048];
fp = fopen(path.c_str(), "r");
if (fp) {
while (fread(buf, 2048, 1, fp)) {
contents += buf;
}
fclose(fp);
return true;
} else {
return false;
}
}
This runs 4 times faster (literally) than the previous C++ version. Is
it possible to get the C++ version close to this speed?
Gan Quan wrote:
>bool read(const string &path, string &contents)
{
ifstream in;
stringstream ss;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
ss << in.rdbuf();
contents = ss.str();
}
status = !in.fail();
in.close();
return status;
}
I just tried to use fopen(), fread(), fclose() to read file contents:
bool read(const string &path, string &contents)
{
FILE *fp;
char buf[2048];
fp = fopen(path.c_str(), "r");
if (fp) {
while (fread(buf, 2048, 1, fp)) {
contents += buf;
}
fclose(fp);
return true;
} else {
return false;
}
}
This runs 4 times faster (literally) than the previous C++ version. Is
it possible to get the C++ version close to this speed?
I don't know about speed, but you could try:
#include <iterator>
#include <iostream>
#include <fstream>
#include <string>
#include <iosfwd>
bool read_1 ( std::string const & path,
std::string & contents )
{
std::ifstream in;
bool status;
in.open( path.c_str(), std::ios::in );
if ( in ) {
std::string buffer ( std::istreambuf_iterator<char>( in ),
(std::istreambuf_iterator<char>()) );
contents.swap( buffer );
}
status = !in.fail();
in.close();
return status;
}
At least, this avoids the detour through a stringstream. As for the
performance, you will just have to measure. But I take it, that you have
already a framework in place for doing that. I would be interested in the
comparison.
Best
Kai-Uwe Bux
On Thu, 12 Oct 2006 14:16:41 +0800, Gan Quan <vi******@gmail.com> wrote:
>bool read(const string &path, string &contents)
{
ifstream in;
stringstream ss;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
ss << in.rdbuf();
contents = ss.str();
}
status = !in.fail();
in.close();
return status;
}
I just tried to use fopen(), fread(), fclose() to read file contents:
bool read(const string &path, string &contents)
{
FILE *fp;
char buf[2048];
fp = fopen(path.c_str(), "r");
if (fp) {
while (fread(buf, 2048, 1, fp)) {
contents += buf;
}
fclose(fp);
return true;
} else {
return false;
}
}
This runs 4 times faster (literally) than the previous C++ version. Is
it possible to get the C++ version close to this speed?
You should reduce the number of disk operations for performance.
For each file, you can calculate its size first and malloc a buffer big
enough to get the data in one read operation.
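A rough sketch of that idea (hypothetical helper name, minimal error handling):

#include <cstdio>
#include <cstdlib>
#include <string>

// Size the file first, then pull it in with a single fread().
bool read_whole(const std::string &path, std::string &contents)
{
    std::FILE *fp = std::fopen(path.c_str(), "rb");
    if (!fp)
        return false;

    std::fseek(fp, 0, SEEK_END);
    long size = std::ftell(fp);              // file size in bytes
    std::fseek(fp, 0, SEEK_SET);
    if (size < 0) {
        std::fclose(fp);
        return false;
    }

    char *buf = static_cast<char*>(std::malloc(size ? size : 1));
    if (!buf) {
        std::fclose(fp);
        return false;
    }

    std::size_t n = std::fread(buf, 1, static_cast<std::size_t>(size), fp);
    bool ok = (std::ferror(fp) == 0);
    std::fclose(fp);

    contents.assign(buf, n);                  // copy exactly the bytes read
    std::free(buf);
    return ok;
}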
--
Regards
On 11 Oct 2006 23:16:41 -0700, "Gan Quan" <vi******@gmail.com> wrote:
>I just tried to use fopen(), fread(), fclose() to read file contents:
Well, there is always a way to speed up things ...
>bool read(const string &path, string &contents) {
Reserve enough space for 'contents' to read the entire file without
reallocation (see thread 'How do i copy an entire file into a
string'). But verify that your string implementation really uses the
reserved capacity. Some string implementations deallocate the reserved
space on assignment.
FILE *fp;
char buf[2048];
fp = fopen(path.c_str(), "r");
if (fp) {
while (fread(buf, 2048, 1, fp)) {
contents += buf;
I guess you need to terminate the read chars in buf with '\0' (fread
returns the number of chars read).
}
fclose(fp);
return true;
bool success = (ferror (fp) == 0);
fclose(fp); // return value can be ignored for read
return success;
} else {
return false;
} }
This runs 4 times faster (literally) than the previous C++ version. Is it possible to get the C++ version close to this speed?
It's no secret that iostreams are slow. Some people also don't like
their design and interfaces. But they have one advantage: operator<<
which makes it very convenient to trace information. For other IO
tasks I prefer functions that encapsulate stdio functions similar to
the one you have implemented.
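Putting the two corrections above together, such a read function might look
roughly like this (a sketch with a hypothetical name, not your original code):

#include <cstdio>
#include <string>

bool read_fixed(const std::string &path, std::string &contents)
{
    std::FILE *fp = std::fopen(path.c_str(), "rb");
    if (!fp)
        return false;

    contents.clear();
    char buf[2048];
    std::size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, fp)) > 0)
        contents.append(buf, n);          // append exactly n bytes; no '\0' needed

    bool success = (std::ferror(fp) == 0);
    std::fclose(fp);                      // return value can be ignored for a read
    return success;
}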
Best wishes,
Roland Pibinger
Kai-Uwe Bux wrote:
Gan Quan wrote:
bool read(const string &path, string &contents)
{
ifstream in;
stringstream ss;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
ss << in.rdbuf();
contents = ss.str();
}
status = !in.fail();
in.close();
return status;
}
I just tried to use fopen(), fread(), fclose() to read file contents:
bool read(const string &path, string &contents)
{
FILE *fp;
char buf[2048];
fp = fopen(path.c_str(), "r");
if (fp) {
while (fread(buf, 2048, 1, fp)) {
contents += buf;
}
fclose(fp);
return true;
} else {
return false;
}
}
This runs 4 times faster (literally) than the previous C++ version. Is
it possible to get the C++ version close to this speed?
I don't know about speed, but you could try:
#include <iterator>
#include <iostream>
#include <fstream>
#include <string>
#include <iosfwd>
bool read_1 ( std::string const & path,
std::string & contents )
{
std::ifstream in;
bool status;
in.open( path.c_str(), std::ios::in );
if ( in ) {
std::string buffer ( std::istreambuf_iterator<char>( in ),
(std::istreambuf_iterator<char>()) );
contents.swap( buffer );
}
status = !in.fail();
in.close();
return status;
}
At least, this avoids the detour through a stringstream. As for the
performance, you will just have to measure. But I take it, that you have
already a framework in place for doing that. I would be interested in the
comparison.
Best
Kai-Uwe Bux
Interesting, I did a simple test on the following 3 methods:
bool read_1(const string &path, string &contents)
{
ifstream in;
stringstream ss;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
ss << in.rdbuf();
contents = ss.str();
}
status = !in.fail();
in.close();
return status;
}
bool read_2(const string &path, string &contents)
{
ifstream in;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
string buffer(istreambuf_iterator<char>(in),
(istreambuf_iterator<char>()));
contents.swap(buffer);
}
status = !in.fail();
in.close();
return status;
}
bool read_3(const string &path, string &contents)
{
FILE *fp;
char *snippet, *buffer;
bool status = false;
size_t i;
if (fp = fopen(path.c_str(), "r")) {
fseek(fp, 0, SEEK_END);
buffer = new char[ftell(fp)+1];
fseek(fp, 0, SEEK_SET);
snippet = buffer;
while (i = fread(snippet, sizeof(char), 2048, fp)) {
snippet += i;
}
*snippet = '\0';
status = feof(fp) ? true : false;
fclose(fp);
contents.clear();
contents.assign(buffer);
delete []buffer;
}
return status;
}
int main(int argc, char* argv[])
{
string path("/path/to/file");
string contents;
int i, j, k;
clock_t c1, c2;
i = j = k = 100;
c1 = clock();
while (i--) {
read_1(path, contents);
}
c2 = clock();
cout << "read_1(): " << ((double)c2 - c1) / CLOCKS_PER_SEC <<
endl;
c1 = clock();
while (j--) {
read_2(path, contents);
}
c2 = clock();
cout << "read_2(): " << ((double)c2 - c1) / CLOCKS_PER_SEC <<
endl;
c1 = clock();
while (k--) {
read_3(path, contents);
}
c2 = clock();
cout << "read_3(): " << ((double)c2 - c1) / CLOCKS_PER_SEC <<
endl;
return 0;
}
Each method was called 100 times to read the contents of the same file,
the file size is 2,395,008 bytes, clock() was used to measure time,
and each method was tested 3 times. The average times consumed by each
method are as follows:
read_1(): 80.49s
read_2(): 99.907s
read_3(): 13.578s
clock() is not so accurate, but since the differences are fairly
obvious, I think it's good enough for this test.
Gan Quan wrote:
>
Kai-Uwe Bux wrote:
[snip]
>I don't know about speed, but you could try:
#include <iterator>
#include <iostream>
#include <fstream>
#include <string>
#include <iosfwd>
bool read_1 ( std::string const & path,
std::string & contents )
{
std::ifstream in;
bool status;
in.open( path.c_str(), std::ios::in );
if ( in ) {
std::string buffer ( std::istreambuf_iterator<char>( in ),
(std::istreambuf_iterator<char>()) );
contents.swap( buffer );
}
status = !in.fail();
in.close();
return status;
}
At least, this avoids the detour through a stringstream. As for the
performance, you will just have to measure. But I take it that you have
already a framework in place for doing that. I would be interested in the
comparison.
Best
Kai-Uwe Bux
Interesting, I did a simple test on the following 3 methods:
bool read_1(const string &path, string &contents)
{
ifstream in;
stringstream ss;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
ss << in.rdbuf();
contents = ss.str();
}
status = !in.fail();
in.close();
return status;
}
bool read_2(const string &path, string &contents)
{
ifstream in;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
string buffer(istreambuf_iterator<char>(in),
(istreambuf_iterator<char>()));
contents.swap(buffer);
}
status = !in.fail();
in.close();
return status;
}
bool read_3(const string &path, string &contents)
{
FILE *fp;
char *snippet, *buffer;
bool status = false;
size_t i;
if (fp = fopen(path.c_str(), "r")) {
fseek(fp, 0, SEEK_END);
buffer = new char[ftell(fp)+1];
fseek(fp, 0, SEEK_SET);
snippet = buffer;
while (i = fread(snippet, sizeof(char), 2048, fp)) {
snippet += i;
}
*snippet = '\0';
status = feof(fp) ? true : false;
fclose(fp);
contents.clear();
contents.assign(buffer);
delete []buffer;
}
return status;
}
int main(int argc, char* argv[])
{
string path("/path/to/file");
string contents;
int i, j, k;
clock_t c1, c2;
i = j = k = 100;
c1 = clock();
while (i--) {
read_1(path, contents);
}
c2 = clock();
cout << "read_1(): " << ((double)c2 - c1) / CLOCKS_PER_SEC <<
endl;
c1 = clock();
while (j--) {
read_2(path, contents);
}
c2 = clock();
cout << "read_2(): " << ((double)c2 - c1) / CLOCKS_PER_SEC <<
endl;
c1 = clock();
while (k--) {
read_3(path, contents);
}
c2 = clock();
cout << "read_3(): " << ((double)c2 - c1) / CLOCKS_PER_SEC <<
endl;
return 0;
}
Each method was called 100 times to read the contents of the same file,
the file size is 2,395,008 in bytes, clock() was used to measure time,
and each method was tested 3 times, the average times consumed by each
method are as follow:
read_1(): 80.49s
read_2(): 99.907s
read_3(): 13.578s
clock() is not so accurate, but since the differences are fairly
obvious, I think it's good enough for this test.
You got lucky. On my machine, the numbers from your measurement code are
devastating for the istreambuf_iterator approach:
read_1(): 0.09
read_2(): 2.68
read_3(): 0.02
This is somewhat strange, because with a good STL implementation, method
read_2 could be really fast: one would need an overload for the string
constructor from istreambuf_iterator and an implementation of
istreambuf_iterator that allows measuring the size of the file (that would
be an extension for internal use by the implementation). Then the string
could allocate the right amount of memory and with one read dump the file
just into the right place. No further copying in memory should be
necessary. But apparently, the g++ implementation is very non-smart about
istreambuf_iterators: they managed to make it 27 times slower than a string
stream approach and more than 100 times slower than FILE* based IO. This
sucks :-(
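For what it's worth, the "allocate the right amount and read once" idea can be
approximated by hand with plain iostreams; a minimal sketch (it assumes
contiguous string storage, which implementations provide in practice and
C++11 guarantees):

#include <fstream>
#include <string>

bool read_once(const std::string &path, std::string &contents)
{
    std::ifstream in(path.c_str(), std::ios::in | std::ios::binary);
    if (!in)
        return false;

    in.seekg(0, std::ios::end);
    std::string buffer(static_cast<std::size_t>(in.tellg()), '\0');
    in.seekg(0, std::ios::beg);

    if (!buffer.empty())
        in.read(&buffer[0], static_cast<std::streamsize>(buffer.size()));

    if (in.fail())
        return false;
    contents.swap(buffer);   // one allocation, one read, no extra copy
    return true;
}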
Thanks
Kai-Uwe Bux
On 12 Oct 2006 03:27:38 -0700, "Gan Quan" wrote:
>I did a simple test on the following 3 methods:
[...]
>bool read_3(const string &path, string &contents) {
FILE *fp;
char *snippet, *buffer;
bool status = false;
size_t i;
if (fp = fopen(path.c_str(), "r")) {
fseek(fp, 0, SEEK_END);
buffer = new char[ftell(fp)+1];
fseek(fp, 0, SEEK_SET);
snippet = buffer;
while (i = fread(snippet, sizeof(char), 2048, fp)) {
snippet += i;
}
*snippet = '\0';
status = feof(fp) ? true : false;
fclose(fp);
contents.clear();
contents.assign(buffer);
delete []buffer;
}
return status; }
What about :
#include <stdio.h>
#include <string>
inline bool read_4 (const std::string &path, std::string &contents) {
bool status = false;
contents.resize(0); // may deallocate string buffer
FILE* fp = fopen(path.c_str(), "r");
if (fp) {
fseek(fp, 0, SEEK_END);
long len = ftell(fp);
if (len > 0) {
contents.reserve (len);
}
fseek(fp, 0, SEEK_SET);
char buf[BUFSIZ] = "";
size_t numread = 0;
while ((numread = fread (buf, 1, sizeof (buf), fp)) > 0) {
contents.append (buf, numread);
}
status = feof(fp) != 0 && ferror(fp) == 0;
fclose(fp);
}
return status;
}
>Each method was called 100 times to read the contents of the same file,
the file size is 2,395,008 bytes, clock() was used to measure time,
and each method was tested 3 times. The average times consumed by each
method are as follows:
read_1(): 80.49s
read_2(): 99.907s
read_3(): 13.578s
clock() is not so accurate, but since the differences are fairly
obvious, I think it's good enough for this test.
Be aware though that you measure the performance of various caches and
buffers (disk, OS, program).
Best wishes,
Roland Pibinger
* Gan Quan:
I'm writing a c++ program that has many (100+) threads read/write files
simultaneously. It works well if not considering the efficiency. The
file i/o seems to be the bottleneck.
Try calling sync_with_stdio(false).
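That is a single static call, made once near the start of main() before any
other I/O; a minimal sketch:

#include <iostream>

int main()
{
    std::ios::sync_with_stdio(false);   // decouple iostreams from the C stdio buffers
    // ... rest of the program ...
}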
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
In article <11**********************@e3g2000cwe.googlegroups.com>,
Gan Quan <vi******@gmail.com> wrote:
>I'm writing a C++ program that has many (100+) threads reading and writing
files simultaneously. It works well if efficiency is not a concern, but
file I/O seems to be the bottleneck.
This is my code to read from and write to files:
#include <fstream>
#include <sstream>
#include <string>
using namespace std;
bool write(const string &path, const string &contents,
ios::openmode mode) {
ofstream out;
bool status;
Why do you declare variables uninitialised at the top of
your function like that?
bool status will be unsafe.
ofstream out calls a default constructor that is not
needed and unsafe. This wastes a few CPU cycles, but
more importantly, you have created an invalid (well,
kind of invalid) object. Google RAII for more details.
out.open(path.c_str(), mode);
Replace both lines with:
ofstream out(path.c_str(), mode);
if (!out.fail()) {
out << contents;
}
if( out ) {
out << contents;
}
status = !out.fail();
Not needed; why not return !out.fail() directly?
out.close();
Not needed; the ofstream destructor will close the file.
return status;
return !out.fail();
or
return out.good();
The << operator might be quite slow since it does formatting
and all kinds of other things under the hood.
fstream.read() and fstream.write() might be faster, but
you will need to profile on your own platform.
For real optimisation and multithreading performance, you
will have to do more than just optimise reading and writing
to one file. File locking and multiple threads accessing
the same file might be a real cause of concern.
For example, you could use a solution with a file object
knowing its own internal state, including potentially lock,
mutex, share status, buffering, etc. Depending on your
platform and your usage pattern, you may find memory-mapped
files beneficial.
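For example, on a POSIX platform a memory-mapped read might look roughly like
the sketch below (platform-specific, shown only to illustrate the idea):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <string>

bool read_mapped(const std::string &path, std::string &contents)
{
    int fd = open(path.c_str(), O_RDONLY);
    if (fd == -1)
        return false;

    struct stat st;
    if (fstat(fd, &st) == -1) {
        close(fd);
        return false;
    }
    if (st.st_size == 0) {               // mmap of length 0 is not allowed
        close(fd);
        contents.clear();
        return true;
    }

    void *p = mmap(0, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                           // the mapping stays valid after close
    if (p == MAP_FAILED)
        return false;

    contents.assign(static_cast<const char*>(p), st.st_size);
    munmap(p, st.st_size);
    return true;
}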
Yan
Gan Quan wrote:
>bool read(const string &path, string &contents)
{
ifstream in;
stringstream ss;
bool status;
contents.clear();
in.open(path.c_str(), ios::in);
if (in) {
ss << in.rdbuf();
contents = ss.str();
}
status = !in.fail();
in.close();
return status;
}
I just tried to use fopen(), fread(), fclose() to read file contents:
bool read(const string &path, string &contents)
{
FILE *fp;
char buf[2048];
fp = fopen(path.c_str(), "r");
if (fp) {
while (fread(buf, 2048, 1, fp)) {
contents += buf;
}
fclose(fp);
return true;
} else {
return false;
}
}
This runs 4 times faster (literally) than the previous C++ version. Is
it possible to get the C++ version close to this speed?
Yes.
You used the formatted IO operators (<< and >>) in your first
example, but you used the raw functions (fread and fwrite)
in your second example.
Try using fstream.read() and fstream.write(); these are
the raw functions corresponding to fread() and fwrite().
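A minimal sketch of what that could look like for reading (assumed helper
name; gcount() tells you how many bytes each read() actually delivered):

#include <fstream>
#include <string>

bool read_unformatted(const std::string &path, std::string &contents)
{
    std::ifstream in(path.c_str(), std::ios::in | std::ios::binary);
    if (!in)
        return false;

    contents.clear();
    char buf[4096];
    while (in) {
        in.read(buf, sizeof buf);                              // raw read, no formatting
        contents.append(buf, static_cast<std::size_t>(in.gcount()));
    }
    return in.eof();   // hitting end-of-file (rather than a read error) counts as success
}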
Roland Pibinger wrote:
What about:
#include <stdio.h>
#include <string>
inline bool read_4 (const std::string &path, std::string &contents) {
bool status = false;
contents.resize(0); // may deallocate string buffer
Indeed your solution runs faster. I thought it was because the buffer is
appended to the string during the loop, which avoided the big .assign(),
but it turned out to be the .resize(0) that made the big difference.
Commenting out the .resize(0) line made the function run 3 times
slower. It's really odd; .reserve() will resize the string anyway, so
what's the difference between .resize() and .reserve()?
Yannick Tremblay wrote:
Why do you declare variable uninitialised at the top of
your function like that?
bool status will be unsafe.
ofstream out calls a default constructor that is not
needed and unsafe. This waste a few CPU cycles but
more importantly, you have created an invalid (well,
kind of invalid) object. google RAII for more details.
I just thought it would make the code clearer.
status = !out.fail();
not needed, why not return !out.fail() directly
out.close();
not needed the ofstream destructor will close the file.
I was used to writing a close() right after the open() call, and I assumed
that out.fail() would be invalid after out.close() was executed, hence
the status variable.
>
<< operator might be quite slow since it does formatting
and all kind of other things under the scene.
fstream.read() and fstream.write() might be faster but
you will need to profile your own platform.
For real optimisation and multithreading performance, you
will have to do more than just optimise reading and writng
to one file. File locking and multiple thread accessing
the same file might be a real cause of concern.
For example, you could use a solution with a file object
knowing its own internal state, including potentially lock,
mutex, share status, buffering, etc. Depending of your
platform and your usage pattern, you may find the memory
mapping files beneficial.
Larry Smith wrote:
>
Yes.
You used the formatted IO operators (<< and >>) in your first
example, but you used the raw functions (fread and fwrite)
in your second example.
Try using fstream.read() and fstream.write(); these are
the raw functions corresponding to fread() and fwrite().
Thanks for the good input, I'm working on replacing the << / >> operators
with read() and write() calls.
I have a new version of i/o functions now, comments interspersed:
bool read_cpp(const string &path, string &contents)
{
ifstream in(path.c_str(), ios::in);
char buf[BUFSIZ] = "";
bool status = false;
ios::sync_with_stdio(false); // commenting out this line
// didn't make much difference
contents.resize(0);
in.seekg(0, ios::end);
contents.reserve(in.tellg()); // surprisingly, commenting out
// this line didn't make much
// difference either
in.seekg(0, ios::beg);
while (in.good()) {
in.read(buf, BUFSIZ);
contents.append(buf, in.gcount()); // append only the bytes actually read
}
status = in.eof();
in.close();
return status;
}
bool write_cpp(const string &path, const string &contents)
{
ofstream out(path.c_str(), ios::out);
bool status = false;
if (out) {
out.write(contents.c_str(), contents.length());
}
status = !out.fail();
out.close();
return status;
}
read_cpp() and write_cpp() run (almost) as fast as their
fread()/fwrite()-implemented counterparts.
Thanks to all you guys.
On 12 Oct 2006 19:23:19 -0700, "Gan Quan" <vi******@gmail.com> wrote:
>Roland Pibinger wrote:
>#include <stdio.h>
#include <string>
inline bool read_4 (const std::string &path, std::string &contents) {
bool status = false;
contents.resize(0); // may deallocate string buffer
contents.resize(0); just clears the contents of 'contents'.
Unfortunately some std::string implementations thereby deallocate the
internal buffer (std::string internals are not standardized).
>Indeed your solution runs faster. I thought it's because the buffer is appended to the string during the loop, this avoided the big .assign(), but it turned out to be the .resize(0) that made the big difference. comment out the .resize(0) line just made the function runs 3 times slower. It's really odd, .reserve() will resize the string anyway, what's the difference between .resize() and .reserve()?
After .resize(n) the new .size() of the string is n, after .reserve(n)
only the .capacity() is n (not the .size()). It's unspecified how long
the reserved capacity will remain.
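A small illustration of the difference (the capacity values shown are
typical, not guaranteed):

#include <iostream>
#include <string>

int main()
{
    std::string s;

    s.reserve(100);   // capacity grows to at least 100, size stays 0
    std::cout << s.size() << ' ' << s.capacity() << '\n';   // e.g. "0 100"

    s.resize(100);    // size becomes 100, the new characters are '\0'
    std::cout << s.size() << ' ' << s.capacity() << '\n';   // e.g. "100 100"
}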
Best wishes,
Roland Pibinger
Roland Pibinger wrote:
After .resize(n) the new .size() of the string is n, after .reserve(n)
only the .capacity() is n (not the .size()). It's unspecified how long
the reserved capacity will remain.
Thanks, but still, why does .resize(0) make such a huge difference in
performance?
Without contents.resize(0); my code runs 3-4 times slower.
On 12 Oct 2006 22:15:02 -0700, "Gan Quan" <vi******@gmail.com> wrote:
>I have a new version of i/o functions now, comments interspersed:
bool write_cpp(const string &path, const string &contents) {
ofstream out(path.c_str(), ios::out);
bool status = false;
if (out) {
out.write(contents.c_str(), contents.length());
}
status = !out.fail();
out.close();
You must check the success of .close(), otherwise you will not detect
some errors.
return status; }
read_cpp() and write_cpp() runs as (almost) fast as their fread()/fwrite()-implemented counterparts.
Since you have written an encapsulated function that's merely an
implementation detail.
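For example, the check could be folded into the function like this (one
possible arrangement, not your final version):

#include <fstream>
#include <string>

bool write_checked(const std::string &path, const std::string &contents)
{
    std::ofstream out(path.c_str(), std::ios::out | std::ios::binary);
    if (!out)
        return false;

    out.write(contents.c_str(), static_cast<std::streamsize>(contents.length()));
    out.close();           // flushes; errors surfacing here set the stream state
    return !out.fail();    // check the state after close, per the note above
}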
Best wishes,
Roland Pibinger