Bytes IT Community

automatically remove unused #includes from C source?

P: n/a
Hi All,

Does anyone know of a tool that can automatically analyze C source to
remove unused #includes?

Thanks,
Sean

Oct 20 '06 #1
31 Replies


P: n/a
In article <11*********************@k70g2000cwa.googlegroups.com>,
<sm*********@gmail.com> wrote:
>Does anyone know of a tool that can automatically analyze C source to
remove unused #includes?
Tricky.

A #define in include1.h might be used in a #define in include2.h that
might be used to build a type in include3.h that might be needed by a
function declaration brought in by include5.h that is #include'd by
include4.h, and the function name might be in a disguised array
initialization form in include6.h and the analyzer would have to
analyze your source to see whether you refer to that function directly
or if you use the array initialization...

In other words, such a tool would have to pretty much be a C compiler
itself, but one that kept track of all the "influences" that went
to build up every token, and figured out what wasn't used after all.

It might be easier just to start commenting out #include's and
seeing if any compile problems came up.
--
If you lie to the compiler, it will get its revenge. -- Henry Spencer
Oct 20 '06 #2

P: n/a
On Fri, 20 Oct 2006 17:39:25 +0000 (UTC), Walter Roberson wrote:
<sm*********@gmail.com> wrote:
>>Does anyone know of a tool that can automatically analyze C source to
remove unused #includes?

It might be easier just to start commenting out #include's and
seeing if any compile problems came up.
Automate that and you have the requested tool!

Best wishes,
Roland Pibinger
Oct 20 '06 #3

P: n/a
In article <45**************@news.utanet.at>,
Roland Pibinger <rp*****@yahoo.com> wrote:
>On Fri, 20 Oct 2006 17:39:25 +0000 (UTC), Walter Roberson wrote:
><sm*********@gmail.com> wrote:
>>>Does anyone know of a tool that can automatically analyze C source to
remove unused #includes?
>>It might be easier just to start commenting out #include's and
seeing if any compile problems came up.
>Automate that and you have the requested tool!
including a particular file can end up changing the meaning of
something else, but the code might compile fine without it.

For example, you might have an include file that contained

#define _use_search_heuristics 1

Then the code might have

#if defined(_use_search_heuristics)
/* do it one way */
#else
/* do it a different way */
#endif

where the code is valid either way.

Thus in order to test whether any particular #include is really
needed by checking the compile results, you need to analyze the
compiled object, strip out symbol tables and debug information and
compile timestamps and so on, and compare the generated code.
--
There are some ideas so wrong that only a very intelligent person
could believe in them. -- George Orwell
Oct 20 '06 #4

P: n/a
On Fri, 20 Oct 2006 19:47:42 +0000 (UTC), Walter Roberson wrote:
>including a particular file can end up changing the meaning of
something else, but the code might compile fine without it.

For example, you might have an include file that contained

#define _use_search_heuristics 1

Then the code might have

#if defined(_use_search_heuristics)
/* do it one way */
#else
/* do it a different way */
#endif

where the code is valid either way.
You are right in theory. But that kind of include-file dependency
(an include-order dependency) is usually considered bad style.
>Thus in order to test whether any particular #include is really
needed by checking the compile results, you need to analyze the
compiled object, strip out symbol tables and debug information and
compile timestamps and so on, and compare the generated code.
IMO, this is overdone. You have to test your application after code
changes anyway.

Best regards,
Roland Pibinger
Oct 20 '06 #5

P: n/a
On Fri, 20 Oct 2006 20:30:35 GMT, rp*****@yahoo.com (Roland Pibinger)
wrote:
>On Fri, 20 Oct 2006 19:47:42 +0000 (UTC), Walter Roberson wrote:
>>including a particular file can end up changing the meaning of
something else, but the code might compile fine without it.

For example, you might have an include file that contained

#define _use_search_heuristics 1

Then the code might have

#if defined(_use_search_heuristics)
/* do it one way */
#else
/* do it a different way */
#endif

where the code is valid either way.

You are right in theory. But that kind of include file dependencies
(include order dependencies) is usually considered bad style.
No, he's right in practice. There's no guarantee that a body of
existing code will conform to your (or anyone's) rules of good style.
>
>>Thus in order to test whether any particular #include is really
needed by checking the compile results, you need to analyze the
compiled object, strip out symbol tables and debug information and
compile timestamps and so on, and compare the generated code.

IMO, this is overdone. You have to test your application after code
changes anyway.

Best regards,
Roland Pibinger
--
Al Balmer
Sun City, AZ
Oct 20 '06 #6

P: n/a
In article <45**************@news.utanet.at>,
Roland Pibinger <rp*****@yahoo.com> wrote:
>On Fri, 20 Oct 2006 19:47:42 +0000 (UTC), Walter Roberson wrote:
>>including a particular file can end up changing the meaning of
something else, but the code might compile fine without it.
>>For example, you might have an include file that contained
#define _use_search_heuristics 1
Then the code might have
#if defined(_use_search_heuristics)
/* do it one way */
#else
/* do it a different way */
#endif
where the code is valid either way.
>You are right in theory. But that kind of include file dependencies
(include order dependencies) is usually considered bad style.
It happens often with large projects with automake and
system dependencies. The included file that changes the meaning
of the rest is a "hints" file.

For example, on the OS I use most often, for a well
known large project (perl as I recall) the autoconfigure
detects that the OS has library entries and include entries
for a particular feature. Unfortunately that particular feature
doesn't work very well in the OS -- broken -and- very inefficient.
So the OS hints file basically says, "Yes I know you've detected
that, but don't use it." So the large project goes ahead and
compiles in the code that performs the task using more standardized
system calls instead of the newer less-standardized API.
>IMO, this is overdone. You have to test your application after code
changes
Conformance tests can take 3 days per build, and if you
are checking whether a project with 1500 #includes (distributed
over the source) can survive deleting one particular include
out of one particular module, then you need up to pow(2,1500)
complete builds and conformance tests. Even if each *complete*
application conformance test took only 1 second, it'd take
10^444 CPU years to complete the testing. *Much* faster to break
it into chunks (e.g., by source file) and check to see whether
each chunk still produces the same code after removal of a
particular include: the timing then becomes proportional to
the sum of pow(2,includes_in_this_chunk) instead of the product
of those as would be the case with what you propose.
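The arithmetic behind that figure can be checked on the back of an envelope (a sketch: 3.156e7 is roughly the number of seconds in a year):

```shell
# log10(2^1500) = 1500 * log10(2) ~ 451.5, i.e. 2^1500 one-second tests;
# subtracting log10(seconds per year) converts seconds to CPU years.
awk 'BEGIN {
    log10_tests = 1500 * log(2) / log(10)
    log10_year  = log(3.156e7) / log(10)
    printf "about 10^%d CPU years\n", log10_tests - log10_year
}'
```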
--
"It is important to remember that when it comes to law, computers
never make copies, only human beings make copies. Computers are given
commands, not permission. Only people can be given permission."
-- Brad Templeton
Oct 20 '06 #7

P: n/a
Walter Roberson wrote:
Conformance tests can take 3 days per build, and if you
are checking whether a project with 1500 #includes (distributed
over the source) can survive deleting one particular include
out of one particular module, then you need up to pow(2,1500)
complete builds and conformance tests. Even if each *complete*
application conformance test took only 1 second, it'd take
10^444 CPU years to complete the testing. *Much* faster to break
it into chunks (e.g., by source file) and check to see whether
each chunk still produces the same code after removal of a
particular include: the timing then becomes proportional to
the sum of pow(2,includes_in_this_chunk) instead of the product
of those as would be the case with what you propose.
That is a good idea: selectively removing #include statements, and then
simply seeing if the resulting object code file changes.

Otherwise, a customized C compiler could absolutely tell if there were
any dependencies on a particular #include file.
Oct 21 '06 #8

P: n/a

"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote in message news:eh**********@canopus.cc.umanitoba.ca...
In article <45**************@news.utanet.at>,
Roland Pibinger <rp*****@yahoo.com> wrote:
>>On Fri, 20 Oct 2006 17:39:25 +0000 (UTC), Walter Roberson wrote:
>><sm*********@gmail.com> wrote:
Does anyone know of a tool that can automatically analyze C source to
remove unused #includes?
>>>It might be easier just to start commenting out #include's and
seeing if any compile problems came up.
>>Automate that and you have the requested tool!

including a particular file can end up changing the meaning of
something else, but the code might compile fine without it.

For example, you might have an include file that contained

#define _use_search_heuristics 1

Then the code might have

#if defined(_use_search_heuristics)
/* do it one way */
#else
/* do it a different way */
#endif

where the code is valid either way.

Thus in order to test whether any particular #include is really
needed by checking the compile results, you need to analyze the
compiled object, strip out symbol tables and debug information and
compile timestamps and so on, and compare the generated code.
Then, analyze it to make sure you don't delete the #include of "seems_unused.h" in this:

seems_unused.h:
-----------------
#define MIGHT_NEED 1

somefile.c:
----------
#ifdef DEFINED_WITH_MINUS_D
int var = MIGHT_NEED;
#endif

-- so that next week, when somebody does gcc -DDEFINED_WITH_MINUS_D, the code still builds.
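That failure mode can be demonstrated mechanically (a sketch: the file names mirror the example above, and gcc is assumed):

```shell
# Recreate the two files from the example.
cat > seems_unused.h <<'EOF'
#define MIGHT_NEED 1
EOF

cat > somefile.c <<'EOF'
#include "seems_unused.h"
#ifdef DEFINED_WITH_MINUS_D
int var = MIGHT_NEED;
#endif
int main(void) { return 0; }
EOF

# Drop the #include and try both build configurations.
grep -v 'seems_unused\.h' somefile.c > trial.c

gcc -c trial.c -o trial.o 2>/dev/null \
    && echo "plain build: header looks unused"
gcc -DDEFINED_WITH_MINUS_D -c trial.c -o trial.o 2>/dev/null \
    || echo "-DDEFINED_WITH_MINUS_D build: header was needed after all"
```

A naive checker that only tries the default configuration would happily delete the #include and break next week's -D build.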
Oct 21 '06 #9

P: n/a
sm*********@gmail.com wrote:
Hi All,

Does anyone know of a tool that can automatically analyze C source to
remove unused #includes?

Thanks,
Sean
doesn't PC-LINT give you a list of unused includes?
Oct 21 '06 #10

P: n/a
On Fri, 20 Oct 2006 21:42:07 +0000 (UTC), Walter Roberson wrote:
>Conformance tests can take 3 days per build, and if you
are checking whether a project with 1500 #includes (distributed
over the source) can survive deleting one particular include
out of one particular module, then you need up to pow(2,1500)
complete builds and conformance tests. Even if each *complete*
application conformance test took only 1 second, it'd take
10^444 CPU years to complete the testing.
That calculation is quite contrived. I wonder how you would make changes
in your code base besides removing an #include, not to speak of
refactoring.
>*Much* faster to break
it into chunks (e.g., by source file) and check to see whether
each chunk still produces the same code after removal of a
particular include:
... and if it compiles but produces different object code then you
have found an include order dependency bug ;-)

Best regards,
Roland Pibinger
Oct 21 '06 #11

P: n/a
On Fri, 20 Oct 2006 21:48:43 -0700, Walter Bright
<wa****@digitalmars-nospamm.com> wrote:
>Otherwise, a customized C compiler could absolutely tell if there were
any dependencies on a particular #include file.
BTW, there is a huge demand for static code analysis tools in C and
C++ (also in a commercial sense). For most of those code analysis
tasks you need to have a fully-fledged (customized) compiler. So, if I
had that compiler ...

Best regards,
Roland Pibinger
Oct 21 '06 #12

P: n/a
Roland Pibinger wrote:
BTW, there is a huge demand for static code analysis tools in C and
C++ (also in a commercial sense). For most of those code analysis
tasks you need to have a fully-fledged (customized) compiler. So, if I
had that compiler ...
True, I've seen some amazingly high prices quoted for static code
analysis. There's nothing stopping someone from approaching Digital Mars
or other compiler vendors and offering to purchase a license for the
compiler to get into that business.
Oct 21 '06 #13

P: n/a
On Sat, 21 Oct 2006 01:11:22 -0700, Walter Bright
<wa****@digitalmars-nospamm.com> wrote:
>True, I've seen some amazingly high prices quoted for static code
analysis. There's nothing stopping someone from approaching Digital Mars
or other compiler vendors and offering to purchase a license for the
compiler to get into that business.
What is stopping you?
Oct 21 '06 #14

P: n/a
On Sat, 21 Oct 2006 06:09:05 GMT, Neil <Ne*******@worldnet.att.net>
wrote:
>sm*********@gmail.com wrote:
>Hi All,

Does anyone know of a tool that can automatically analyze C source to
remove unused #includes?

Thanks,
Sean

doesn't PC-LINT give you a list of unused includes?
Yes.

--
jay
Oct 21 '06 #15

P: n/a
Roland Pibinger wrote:
On Fri, 20 Oct 2006 21:48:43 -0700, Walter Bright
<wa****@digitalmars-nospamm.com> wrote:
>>Otherwise, a customized C compiler could absolutely tell if there were
any dependencies on a particular #include file.


BTW, there is a huge demand for static code analysis tools in C and
C++ (also in a commercial sense). For most of those code analysis
tasks you need to have a fully-fledged (customized) compiler. So, if I
had that compiler ...
Due to extensions, such a tool can really only be part of the compiler
suite.

--
Ian Collins.
Oct 21 '06 #16

P: n/a
Roland Pibinger said:
On Sat, 21 Oct 2006 01:11:22 -0700, Walter Bright
<wa****@digitalmars-nospamm.com> wrote:
>>True, I've seen some amazingly high prices quoted for static code
analysis. There's nothing stopping someone from approaching Digital Mars
or other compiler vendors and offering to purchase a license for the
compiler to get into that business.

What is stopping you?
I don't think Walter Bright needs to approach /anyone/ to purchase a licence
for the Digital Mars compiler. :-)

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Oct 21 '06 #17

P: n/a
Walter Bright wrote:
Walter Roberson wrote:
>Conformance tests can take 3 days per build, and if you
are checking whether a project with 1500 #includes (distributed
over the source) can survive deleting one particular include
out of one particular module, then you need up to pow(2,1500)
complete builds and conformance tests. Even if each *complete*
application conformance test took only 1 second, it'd take
10^444 CPU years to complete the testing. *Much* faster to break
it into chunks (e.g., by source file) and check to see whether
each chunk still produces the same code after removal of a
particular include: the timing then becomes proportional to
the sum of pow(2,includes_in_this_chunk) instead of the product
of those as would be the case with what you propose.

That is a good idea: selectively removing #include statements, and
then simply seeing if the resulting object code file changes.

Otherwise, a customized C compiler could absolutely tell if there
were any dependencies on a particular #include file.
Such an operation would need C99 specs, otherwise the use of
implicit int would foul the results. It might be enough to tell the
compiler to insist on prototypes.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Oct 21 '06 #18

P: n/a
In article <45*************@news.utanet.at>,
Roland Pibinger <rp*****@yahoo.com> wrote:
>On Fri, 20 Oct 2006 21:42:07 +0000 (UTC), Walter Roberson wrote:
>>Conformance tests can take 3 days per build, and if you
are checking whether a project with 1500 #includes (distributed
over the source) can survive deleting one particular include
out of one particular module, then you need up to pow(2,1500)
complete builds and conformance tests. Even if each *complete*
application conformance test took only 1 second, it'd take
10^444 CPU years to complete the testing.
>That calculaton is quite contrived.
Contrived? Well, yes, in the sense that any large project is likely
to have much *more* than 1500 #include statements. For example, I just
ran a count against the trn4 source (which is less than 1 megabyte
when gzip'd), and it has 1659 #include statements. openssl 0.9.7e
has 4679 #include statements (it's about 3 megabytes gzip'd).
>I wonder how you would do changes
in you code base besides removing an #include, not to speak of
refactoring.
You seem to have forgotten that you yourself proposed,
"Automate that and you have the requested tool!" in response to my
saying, "It might be easier just to start commenting out #include's".
When I indicated that it is more complex than that and that
comparing object code is necessary (not just looking to see if
looking for compile errors), you said,
"You have to test your application after code changes anyway."

Taken in context, your remark about testing after code changes
must be considered to apply to the *automated* tool you proposed.
And the difficulty with automated tools along these lines is that they
are necessarily dumb: if removing #include file1.h gives you a
compile error, then the tool cannot assume that file1.h is a -necessary-
dependency (i.e., a tool that could test in linear time): the tool
would have to assume the possibility that removing file1.h
only gave an error because of something in file2.h --- and yes,
there can be backwards dependencies, in which file1.h is needed to
complete something included -before- that point. Thus, in this kind
of automated tool that doesn't know how to parse the C code itself,
full dependency checking can only be done by trying every -possible-
combination of #include files, which is a 2^N process.

Do you feel that 1 second to "test your application after code changes"
is significantly longer than is realistic? It probably takes longer
than that just to compile and link the source each time.

>I wonder how you would do changes
in you code base besides removing an #include, not to speak of
refactoring.
I don't mechanically automate the code change and test process.

>>*Much* faster to break
it into chunks (e.g., by source file) and check to see whether
each chunk still produces the same code after removal of a
particular include:
>... and if it compiles but produces different object code then you
have found an include order dependency bug ;-)
Include order dependencies are not bugs unless the prevailing
development paradigm for the project has declared them to be so.

Once you get beyond standard C into POSIX or system dependencies,
it is *common* for #include files to be documented as being order
dependent upon something else. Better system developers hide
that by #include'ing the dependencies and ensuring, as far as
is reasonable, that each system include file has guards against
multiple inclusion, but that's a matter of Quality of Implementation,
not part of the standards.
Still, it is true that in the case of multiple source files that
together have 1500 #include, that you would not need to do pow(2,1500)
application tests, if you are using a compiler that supports
independent compilation and later linking. If you do have independent
compilation, then within each source file it is a 2^N process
to find all the #include combinations that will compile, but most of
the combinations will not. Only the versions that will compile need
to go into the pool for experimental linkage; linkage experiments
would be the product of the number of eligible compilations for each
source. Only the linkages that survived would need to go on for testing.
The number of cases that will make it to testing is not possible to
estimate without statistical information about the probability that any
given #include might turn out to be unneeded.

--
There are some ideas so wrong that only a very intelligent person
could believe in them. -- George Orwell
Oct 21 '06 #19

P: n/a
Roland Pibinger wrote:
On Sat, 21 Oct 2006 01:11:22 -0700, Walter Bright
<wa****@digitalmars-nospamm.com> wrote:
>True, I've seen some amazingly high prices quoted for static code
analysis. There's nothing stopping someone from approaching Digital Mars
or other compiler vendors and offering to purchase a license for the
compiler to get into that business.

What is stopping you?
I'm pretty overloaded already.
Oct 21 '06 #20

P: n/a
Walter Roberson wrote:
Still, it is true that in the case of multiple source files that
together have 1500 #include, that you would not need to do pow(2,1500)
application tests, if you are using a compiler that supports
independent compilation and later linking. If you do have independent
compilation, then within each source file it is a 2^N process
to find all the #include combinations that will compile, but most of
the combinations will not. Only the versions that will compile need
to go into the pool for experimental linkage; linkage experiments
would be the product of the number of eligible compilations for each
source. Only the linkages that survived would need to go on for testing.
The number of cases that will make it to testing is not possible to
estimate without statistical information about the probability that any
given #include might turn out to be unneeded.
You don't need to do linking or testing if you start with a known good
object file, and then remove #include's, compile, and do a bit compare
of the resulting object file.
Oct 21 '06 #21

P: n/a
In article <ov******************************@comcast.com>,
Walter Bright <wa****@digitalmars-nospamm.com> wrote:
>You don't need to do linking or testing if you start with a known good
object file, and then remove #include's, compile, and do a bit compare
of the resulting object file.
Right, after stripping out symbol tables and debug information and
timestamps and anything else that can cause the object file to differ
for identical generated code.

Which is what I'd said before, but which Roland said was
"overdone" on the basis that you have to test your code after code
changes. So this subthread has been exploring the feasibility of
his proposal to *not* use the "did it generate the same code"
approach and instead use the "does the final executable test out the
same" approach.

--
Prototypes are supertypes of their clones. -- maplesoft
Oct 21 '06 #22

P: n/a
In article <xb******************************@comcast.com>,
Don Porges <po****@comcast.net> wrote:
>"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote in message news:eh**********@canopus.cc.umanitoba.ca...
>Thus in order to test whether any particular #include is really
needed by checking the compile results, you need to analyze the
compiled object, strip out symbol tables and debug information and
compile timestamps and so on, and compare the generated code.
>Then, analyze it to make sure you don't delete the #include of "seems_unused.h" in this:
>#ifdef DEFINED_WITH_MINUS_D
>-- so that next week, when somebody does gcc -DDEFINED_WITH_MINUS_D, the code still builds.

True -- and perhaps more difficult is the situation where there is
an environmental test such as checking the OS. If there is, for example,

#ifdef _solaris
#include <sys/some_solaris_only_include.h>
#endif

then unless one asserts _solaris, as far as the proposed automated
tester is concerned, the #include is "unused"... and if one -does-
assert _solaris then unless one has set up a strange and wonderful
cross-compilation environment, <sys/some_solaris_only_include.h>
probably doesn't exist locally so you won't be able to test whether
it is -really- needed for Solaris ...
--
"No one has the right to destroy another person's belief by
demanding empirical evidence." -- Ann Landers
Oct 21 '06 #23

P: n/a
does it work with #undef too?
Oct 21 '06 #24

P: n/a
In article <eh**********@canopus.cc.umanitoba.ca>,
Walter Roberson <ro******@ibd.nrc-cnrc.gc.ca> wrote:
>In article <ov******************************@comcast.com>,
Walter Bright <wa****@digitalmars-nospamm.com> wrote:
>>You don't need to do linking or testing if you start with a known good
object file, and then remove #include's, compile, and do a bit compare
of the resulting object file.

Right, after stripping out symbol tables and debug information and
timestamps and anything else that can cause the object file to differ
for identical generated code.
Trying to compare object files is just asking for non-determinism.

What occurred to me, upon reading this thread originally, was that you
could probably make it work by compiling to assembler (-S option) and
then comparing the assembly files. That should get you pretty close.

Yet, now that I think about it, maybe all you have to do is run it
through the preprocessor (-E option). That *might* be good enough.

Oct 22 '06 #25

P: n/a
>>Right, after stripping out symbol tables and debug information and
>>timestamps and anything else that can cause the object file to differ
for identical generated code.

Trying to compare object files is just asking for non-determinism.
I believe OSF/1 put a timestamp in every object file, so consecutive
compilations of the same source files always mismatched.
>What occurred to me, upon reading this thread originally, was that you
could probably make it work by compiling to assembler (-S option) and
then comparing the assembly files. That should get you pretty close.
Maybe.
>Yet, now that I think about it, maybe all you have to do is run it
through the preprocessor (-E option). That *might* be good enough.
I doubt it. The preprocessor will show unnecessary structure definitions,
unnecessary typedefs, and unnecessary function declarations. Also,
preprocessors tend to insert a lot of blank lines and stuff made from
implicit #line directives containing info about where in the source the
code came from.

Oh, and if you have unnecessary global variable declarations in
your unnecessary include file, they might show up in the assembly
file.

Incidentally, just what is an UNNECESSARY include file? There are
a variety of types of preprocessor symbols which may require special
handling:

- Environment testing symbols: things like __FreeBSD__ which are
used to test things about the environment it's going to be compiled/run
on. Alternative environments which might define or undefine these
symbols aren't available to examine. Or maybe there's a configuration
file with alternate versions that might be used on Windows, Linux,
MacOS, MS-DOS, etc.

- Program configuration symbols: Symbols that may be enabled by
build procedures (e.g. compiler command line) to make different
versions of the program: the debug version, the release version,
the corporate version, etc.

- NDEBUG. This is referenced by the assert() macro somehow and is
intended to be defined or undefined to build a different version
of the program.

Oct 22 '06 #26

P: n/a
Paul Connolly wrote:
>
does it work with #undef too?
Does what work? And "too" in addition to what? There is a reason
for quotes in usenet articles.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Oct 22 '06 #27

P: n/a

"CBFalconer" <cb********@yahoo.com> wrote in message
news:45***************@yahoo.com...
Paul Connolly wrote:
>>
does it work with #undef too?
Does what work? Interpret too. There is a reason for quotes in
usenet articles.
Firstly I feel I must say that I like most of what you write in this group -
eminent sense.

Sorry to be so inarticulate - I was drunk - typically Irish - thought there
was a thread and you might cotton on (-;

A more specific (but less general) question is:
if in my foo.c file I #undef all definitions in header xyz.h then will
PC-Lint be able to detect that xyz.h need not be included?
(this should not be construed as my approval of #undef - I believe in one
true definition and no muddying of the water - typically Catholic)
Oct 23 '06 #28

P: n/a
Paul Connolly wrote:
>
... snip ...
>
A more specific (but less general) question is:
if in my foo.c file I #undef all definitions in header xyz.h then
will PC-Lint be able to detect that xyz.h need not be included?
(this should not be construed as my approval of #undef - I believe
in one true definition and no muddying of the water - typically
Catholic)
No, because there may be many necessary things left. I can
immediately think of prototypes, typedefs, externs, struct
definitions, not to mention the things that should not normally be
included in header files anyhow.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

Oct 24 '06 #29

P: n/a

"CBFalconer" <cb********@yahoo.com> wrote in message
news:45***************@yahoo.com...
Paul Connolly wrote:
>>
... snip ...
>>
A more specific (but less general) question is:
if in my foo.c file I #undef all definitions in header xyz.h then
will PC-Lint be able to detect that xyz.h need not be included?
(this should not be construed as my approval of #undef - I believe
in one true definition and no muddying of the water - typically
Catholic)

No, because there may be many necessary things left. I can
immediately think of prototypes, typedefs, externs, struct
definitions, not to mention the things that should not normally be
included in header files anyhow.
This answers a different question to the one I asked.
If all (the all here is all-important) the definitions in xyz.h were
#undef-ed in foo.c (where xyz.h is included) then will PC-Lint be able to
detect that xyz.h need not be included in foo.c?


Oct 25 '06 #30

P: n/a
Paul Connolly wrote:
"CBFalconer" <cb********@yahoo.com> wrote in message
>Paul Connolly wrote:
>>>
... snip ...
>>>
A more specific (but less general) question is:
if in my foo.c file I #undef all definitions in header xyz.h then
will PC-Lint be able to detect that xyz.h need not be included ?
(this should be not construed as my approval of #undef - I believe
in one true definition and no muddying of the water - typically
Catholic)

No, because there may be many necessary things left. I can
immediately think of prototypes, typedefs, externs, struct
definitions, not to mention the things that should not normally be
included in header files anyhow.

This answers a different question to the one I asked.
If all (the all here is all-important) the definitions in xyz.h were
#undef-ed in foo.c (where xyz.h is included) then will PC-Lint be able to
detect that xyz.h need not be included in foo.c?
You can't undef everything. For example, a prototype. You can do
an xref of both the files and see if there is anything in common.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Oct 25 '06 #31

P: n/a

"CBFalconer" <cb********@yahoo.com> wrote in message
news:45***************@yahoo.com...
Paul Connolly wrote:
>"CBFalconer" <cb********@yahoo.com> wrote in message
>>Paul Connolly wrote:

... snip ...

A more specific (but less general) question is:
if in my foo.c file I #undef all definitions in header xyz.h then
will PC-Lint be able to detect that xyz.h need not be included?
(this should not be construed as my approval of #undef - I believe
in one true definition and no muddying of the water - typically
Catholic)

No, because there may be many necessary things left. I can
immediately think of prototypes, typedefs, externs, struct
definitions, not to mention the things that should not normally be
included in header files anyhow.

This answers a different question to the one I asked.
If all (the all here is all-important) the definitions in xyz.h were
#undef-ed in foo.c (where xyz.h is included) then will PC-Lint be able to
detect that xyz.h need not be included in foo.c?

You can't undef everything. For example, a prototype. You can do
an xref of both the files and see if there is anything in common.
This answer assumes that xyz.h has a form that is not indicated by the
question.

You can #undef everything in xyz.h if everything in xyz.h is #undefable
(tautology) - it is not mandatory that xyz.h contains something that is not
#undefable - my question was a conditional (if) on all the definitions in
xyz.h being #undefed - so xyz.h must be in that class of headers in which
all the definitions can be #undefed - there exist many headers that satisfy
that condition.

Strictly (as you obviously know from your fine erudition in this forum),
prototypes are declarations, not definitions. My question was about
definitions - I was rather vaguely assuming that foo.c didn't refer to any
functions prototyped in xyz.h - I thought the behaviour of PC-Lint wrt
referring to functions prototyped in the header had already been
established - I was wanting to extend that knowledge to understanding how
PC-Lint dealt with #undef in this particular case (I do want to know more
generally as well, but since you thought this was too broad a question to
ask, I just asked a specific question - hoping it might give me some
indication of what PC-Lint is able to do with code that has #undefs)

So here is my question again, but refined for what I thought was the
already-understood behaviour of PC-Lint wrt to prototypes:
If all (the all here is all-important) the definitions in xyz.h were
#undef-ed in foo.c (where xyz.h is included)
-- the new bit ------
and foo.c does not refer to any function prototyped in xyz.h
----------------------
then will PC-Lint be able to
detect that xyz.h need not be included in foo.c?

The Jesuits would still see three flaws in my question and, since it might
amuse the readers of this forum, and I'm feeling Jesuitical, I'll tell you
two (-;

1. PC-Lint might be able to "detect" (your sloppy usage eejit Connolly) the
header need not be included - surely the point is what PC-Lint "reports" -
it might know, but not tell you.

2. The wording "need not be included" (your sloppy usage eejit Connolly) is
problematic for many - take as an example (just an example - there are many
more cases) abc.c with header abc.h that declares only the prototypes of
functions in abc.c, where the functions in abc.c are decalared in an order
such that no function refers to any function later in the text of the file
abc.c, and the declarations in abc.h agree with the definitions in abc.c,
then abc.h "need not be included" in abc.c - but many would argue that the
inclusion of abc.h in abc.c is "desirable", to keep the declarations
consistent with the definitions, in case the definition of the function
changes in the future.

3. An exercise for the reader.

Oct 27 '06 #32

This discussion thread is closed

Replies have been disabled for this discussion.