
code portability

My question is more generic, but it involves what I consider ANSI standard C
and portability.

I happen to be a system admin for multiple platforms and as such a lot of
the applications that my users request are a part of the OpenSource
community. Many if not most of those applications strongly require the
presence of the GNU compiling suite to work properly. My assumption is that
this is due to the author/s creating the applications with the GNU suite.
Many of the tools requested/required are GNU replacements for make,
configure, the loader, and lastly the C compiler itself. Where I'm going
with this is, has the OpenSource community as a whole committed itself to at
the very least encouraging its contributing members to conform to ANSI
standards of programming?

My concern is that, as an admin, I am sometimes compelled to port these
applications to multiple platforms running the same OS, and as the user
community becomes more and more insistent on OpenSource applications, will
gotchas appear due to a lack of portability in the coding? I fully realize that
independent developers may or may not conform to standards, but again, is it
at least encouraged?

Question 11.32 of the FAQ seemed to at least outline the crux of what I am
asking. If I loaded up my home machine to the gills with all the open source
compiler applications (gcc, imake, autoconfig, etc.), would the applications
that I compile, link, and load conform?
Aug 1 '06
On 2006-08-04, Ian Collins <ia******@hotmail.com> wrote:
Andrew Poelstra wrote:
>On 2006-08-03, Keith Thompson <ks***@mib.org> wrote:
>>>Richard Heathfield <in*****@invalid.invalid> writes:

The introduction of long long int was, in my continued opinion, a mistake.
All the ISO guys had to do was - nothing at all! Any implementation that
wanted to support 64-bit integers could simply have made long int rather
longer than before - such a system would have continued to be fully
conforming to C90. And if it broke code, well, so what? Any code that
wrongly assumes long int is precisely 32 bits is already broken, and needs
fixing.

That's true, but 64 bits is the effective limit for this. The
following:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
is a reasonable set of types, but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).


1) This isn't really a problem; you can use a 32-bit variable to store
16-bit values; if you really need 16 bits you might need some debug
macros to artificially constrain the range.

Just beware of overflows!
Yes, that would require more than debug macros to fix, since you'd want
the overflow behavior to be the same whether or not you are debugging!
(It's a bit of a pain, I admit, but there aren't too many times when you
absolutely need an exact number of bits.)
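As a rough illustration of the kind of debug macro being described, here is a minimal sketch; the macro name and the example value are invented for this reply, not taken from any real code:

#include <assert.h>
#include <stdio.h>

/* Hypothetical debug macro: in a debug build, verify that a value held
 * in a wider variable still fits in the minimal signed 16-bit range.
 * Defining NDEBUG removes the check from a release build. */
#define ASSERT_FITS_INT16(v) assert((v) >= -32767 && (v) <= 32767)

int main(void)
{
    long counter = 30000;        /* stored in a type wider than 16 bits */
    ASSERT_FITS_INT16(counter);  /* passes; 40000 would trip the check */
    printf("%ld\n", counter);
    return 0;
}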
>2) If you've got a 128-bit processor, IMHO, you shouldn't be insisting
on using 8-bit types. That just sounds inefficient. [OT]
Unless your (possibly externally imposed) data happens to be 8 bit.
If I were ISO, I'd consider adding a new specifier to scanf and friends
to read a specified number of bytes (or probably bits, although that
could be a lot harder to implement) into an already defined type. So, if
you wanted to read 8 bits into an int (which is 32 bits on this particular
system), you'd do:
fscanf (fhandle, "%d8b", &charvar);
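For comparison, standard C as it stands already lets you pull an externally imposed 8-bit quantity into a wider int without any new conversion specifier; a hedged sketch (the function name read_u8 is made up for illustration):

#include <stdio.h>

/* Sketch: read one externally imposed 8-bit field from a binary stream
 * into an ordinary int, using only standard C. fgetc() already returns
 * the byte as an unsigned char value widened to int, or EOF. */
int read_u8(FILE *fhandle, int *out)
{
    int c = fgetc(fhandle);
    if (c == EOF)
        return 0;   /* end of file or read error */
    *out = c;       /* value is in the range 0..255 */
    return 1;
}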

Since I'm not ISO, nor will they create such a change, I'd stick to
avoiding arbitrary data widths (in general, stick with text files as
long as you can spare the space), and don't worry about changing data
widths: most companies don't switch compilers too often.

If you have Data of a Certain Width imposed on you, you're probably
going to have to fiddle with stuff when changing compilers anyway,
so suddenly having long twice as wide should be an expected problem.

--
Andrew Poelstra <http://www.wpsoftware. net/projects>
To reach me by email, use `apoelstra' at the above domain.
"Do BOTH ends of the cable need to be plugged in?" -Anon.
Aug 4 '06 #51
Richard Heathfield wrote:
Keith Thompson said:
>>"Malcolm" <re*******@btin ternet.comwrite s:
>>>There is also the problem of "good enough" portability, for instance
assuming ASCII and two's complement integers.
<snip>
>>As for two's complement, I typically don't care about that either.
Numbers are numbers. If I need to do bit-twiddling, I use unsigned.

Indeed. And, on a related note, I find it very difficult to understand this
fascination with integers that have a particular number of bits. If I need
8 bits, I'll use char (or a flavour thereof). If I need 9 to 16 bits, I'll
use int (or unsigned). If I need 17 to 32 bits, I'll use long (or unsigned
long). And if I need more than 32 bits, I'll use a bit array. I see
absolutely no need for int_leastthis, int_fastthat, and int_exacttheother.
That depends on your area of application.
If you are in an embedded environment with two's complement and exact-width
integers of 8, 16, and 32 bits, you often want the 16-bit type because it is
sufficient for storing your data and saves memory you really need, even if it
is only provided as a compiler extension.
You compute values "through overflow" to save auxiliary variables and time,
and do other things that would not be necessary in a less restricted
environment.
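A minimal sketch of that embedded style, assuming a target where the optional exact-width types exist (all names here are invented): the 16-bit element halves the table's RAM, and unsigned 16-bit wraparound is well defined, so the "overflow" can be relied on rather than masked by hand.

#include <stdint.h>

#define NSAMPLES 1024

static uint16_t samples[NSAMPLES];   /* 2 KiB instead of 4 KiB */
static uint16_t write_pos;

void push_sample(uint16_t s)
{
    samples[write_pos % NSAMPLES] = s;
    write_pos++;                      /* wraps to 0 after 65535 */
}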

For other applications, I agree with you. However, I _would_ have
liked to have a clean naming scheme implying what the standard
says instead of nondescript identifiers.
short <- int_least16_t
int   <- int_fast16_t
long  <- int_least32_t
obviously does not give that in a convenient manner. int16, int32,
intFast16, intExact16 probably would have been a better starting
point for easy extensibility.
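Such a naming layer can be sketched today on top of <stdint.h>; none of these names are standard, they are just the suggested spellings above mapped onto the C99 types:

#include <stdint.h>

/* Hypothetical naming layer over <stdint.h>, following the scheme
 * suggested above. These are simply aliases for the C99 types. */
typedef int_least16_t int16;       /* "at least 16 bits" */
typedef int_fast16_t  intFast16;   /* "fastest type with at least 16 bits" */
typedef int16_t       intExact16;  /* "exactly 16 bits" (optional type) */
typedef int_least32_t int32;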
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Aug 4 '06 #52
Richard Heathfield wrote:
The introduction of long long int was, in my continued opinion, a mistake.
All the ISO guys had to do was - nothing at all! Any implementation that
wanted to support 64-bit integers could simply have made long int rather
longer than before - such a system would have continued to be fully
conforming to C90. And if it broke code, well, so what? Any code that
wrongly assumes long int is precisely 32 bits is already broken, and needs
fixing.
So what if it is broken? I wouldn't want to fix it. Let it run as it was
till now.

Leave long as it was and use a new type for 64 bits. This was the
decision of Microsoft.

Gcc decided otherwise. Long becomes 64 bits, and long long stays 64
bits, making this type completely useless.

For lcc-win64 I thought about

char 8 bits
short 16 bits
int 32 bits
long 64 bits
long long 128 bits

but then... I would have been incompatible
with both gcc AND MSVC.

So I decided to follow MSVC under windows-64 and gcc
under unix-64.
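Whichever model a compiler picks (LP64 as with gcc, LLP64 as with MSVC), portable code can state and verify its width assumptions at compile time instead of guessing; a hedged sketch using the pre-C11 negative-array-size trick (the macro name is invented):

#include <limits.h>

/* Fails compilation if a condition is false; with C11 one would
 * use _Static_assert instead. */
#define STATIC_CHECK(cond, name) typedef char name[(cond) ? 1 : -1]

STATIC_CHECK(sizeof(long) * CHAR_BIT >= 32, long_is_at_least_32_bits);
STATIC_CHECK(sizeof(long long) * CHAR_BIT >= 64, long_long_is_at_least_64_bits);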
Aug 4 '06 #53
Andrew Poelstra <ap*******@false.site> writes:
On 2006-08-04, Ian Collins <ia******@hotmail.com> wrote:
>Andrew Poelstra wrote:
>>On 2006-08-03, Keith Thompson <ks***@mib.org> wrote:

Richard Heathfield <in*****@invalid.invalid> writes:

>The introduction of long long int was, in my continued opinion, a mistake.
>All the ISO guys had to do was - nothing at all! Any implementation that
>wanted to support 64-bit integers could simply have made long int rather
>longer than before - such a system would have continued to be fully
>conforming to C90. And if it broke code, well, so what? Any code that
>wrongly assumes long int is precisely 32 bits is already broken, and needs
>fixing.

That's true, but 64 bits is the effective limit for this. The
following:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
is a reasonable set of types, but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).

1) This isn't really a problem; you can use a 32-bit variable to store
16-bit values; if you really need 16 bits you might need some debug
macros to artificially constrain the range.

Just beware of overflows!

Yes, that would require more than debug macros to fix, since you'd want
the overflow behavior to be the same whether or not you are debugging!
(It's a bit of a pain, I admit, but there aren't too many times when you
absolutely need an exact number of bits.)
It's not the fact that you absolutely *must* have the bits; rather, it's that
you want defined, repeatable behaviour.

The simplest test condition in the world would be cause for concern if the
programmer didn't care whether it was 16 or 32 bits:

if(!x=func(y))
...;

How many times this would really be a problem, I wouldn't venture to say.
Aug 4 '06 #54
Ian Collins posted:
Keith Thompson wrote:
>>
My objection to C's integer type system is that the names are
arbitrary: "char", "short", "int", "long", "long long", "ginormous
long". I'd like to see a system where the type names follow a regular
pattern, and if you want to have a dozen distinct types the names are
clear and obvious. I have a few ideas, but since this will never
happen in any language called "C" I won't go into any more detail.
Isn't that why we now have (u)int32_t and friends? I tend to use int or
unsigned if I don't care about the size and one of the exact-size types
if I do.

I use "int unsigned" when I want to store a positive integer.

I use "int signed" when the integer value might be negative.

If the unsigned number might exceed 65535, or if the signed number might
not fit in the range -32767 to +32767, then I'll consider using "int long
unsigned" or "int long signed", or perhaps "int long long unsigned" or "int
long long signed".

I only use "plain" char when I'm dealing with characters.

I only use "unsigned char" when I'm playing with bytes.

I've never used "short", but I'd consider using it if I had a large array
of integers whose value wouldn't exceed 65535.

--

Frederick Gotham
Aug 4 '06 #55
Richard <rg****@gmail.com> writes:
It's not the fact that you absolutely *must* have the bits; rather, it's that
you want defined, repeatable behaviour.

The simplest test condition in the world would be cause for concern if the
programmer didn't care whether it was 16 or 32 bits:

if(!x=func(y))
...;
Why should I care whether func() returns 16 or 32 bits? I only
want to know whether it returned a nonzero value, and the
specific value, be it 1 or 2 or -5 or 0xffffffff, doesn't matter.

(That won't compile, by the way. You forgot a set of parentheses.)
--
"...Almost makes you wonder why Heisenberg didn't include postinc/dec operators
in the uncertainty principle. Which of course makes the above equivalent to
Schrodinger's pointer..."
--Anthony McDonald
Aug 4 '06 #56
Ben Pfaff <bl*@cs.stanford.edu> writes:
Richard <rg****@gmail.com> writes:
>It's not the fact that you absolutely *must* have the bits; rather, it's that
you want defined, repeatable behaviour.

The simplest test condition in the world would be cause for concern if the
programmer didn't care whether it was 16 or 32 bits:

if(!(x=func(y)))
...;

Why should I care whether func() returns 16 or 32 bits? I only
want to know whether it returned a nonzero value, and the
specific value, be it 1 or 2 or -5 or 0xffffffff, doesn't matter.
because 0x10000 won't be zero when 32 bits hold it, but 16 bits
will wrap around to 0. Or?
Aug 4 '06 #57
Richard <rg****@gmail.com> writes:
Ben Pfaff <bl*@cs.stanford.edu> writes:
>Richard <rg****@gmail.com> writes:
>>It's not the fact that you absolutely *must* have the bits; rather, it's that
you want defined, repeatable behaviour.

The simplest test condition in the world would be cause for concern if the
programmer didn't care whether it was 16 or 32 bits:

if(!(x=func(y)))
...;

Why should I care whether func() returns 16 or 32 bits? I only
want to know whether it returned a nonzero value, and the
specific value, be it 1 or 2 or -5 or 0xffffffff, doesn't matter.

because 0x10000 won't be zero when 32 bits hold it, but 16 bits
will wrap around to 0. Or?
func() should return the proper type for its caller to interpret
it? If it doesn't, then the caller is not going to be able to
interpret correctly. If it does, then the condition makes sense
regardless of the type.
--
"In My Egotistical Opinion, most people's C programs should be indented six
feet downward and covered with dirt." -- Blair P. Houghton
Aug 4 '06 #58
Ben Pfaff wrote:
Richard <rg****@gmail.com> writes:
Ben Pfaff <bl*@cs.stanford.edu> writes:
Richard <rg****@gmail.com> writes:

It's not the fact that you absolutely *must* have the bits; rather, it's that
you want defined, repeatable behaviour.

The simplest test condition in the world would be cause for concern if the
programmer didn't care whether it was 16 or 32 bits:

if(!(x=func(y)))
...;

Why should I care whether func() returns 16 or 32 bits? I only
want to know whether it returned a nonzero value, and the
specific value, be it 1 or 2 or -5 or 0xffffffff, doesn't matter.
because 0x10000 won't be zero when 32 bits hold it, but 16 bits
will wrap around to 0. Or?

func() should return the proper type for its caller to interpret
it? If it doesn't, then the caller is not going to be able to
interpret correctly. If it does, then the condition makes sense
regardless of the type.
!(x=func(y)) doesn't test func's return value. It tests func's return
value converted to the type of x. If x is narrower than func(), even
nonzero return values may cause the expression to evaluate to 1.

char ch; while((ch = getchar()) != EOF) { /* ... */ }
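For reference, the corrected form of that classic bug keeps the result of getchar() in an int, so that EOF (a negative value) stays distinguishable from every possible byte value; a minimal sketch:

#include <stdio.h>

int main(void)
{
    int ch;                          /* int, not char */
    while ((ch = getchar()) != EOF) {
        putchar(ch);
    }
    return 0;
}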

Aug 4 '06 #59
"Harald van DD3k" <tr*****@gmail. comwrites:
Ben Pfaff wrote:
>Richard <rg****@gmail.com> writes:
Ben Pfaff <bl*@cs.stanford.edu> writes:

Richard <rg****@gmail.com> writes:

It's not the fact that you absolutely *must* have the bits; rather, it's that
you want defined, repeatable behaviour.

The simplest test condition in the world would be cause for concern if the
programmer didn't care whether it was 16 or 32 bits:

if(!(x=func(y)))
...;

Why should I care whether func() returns 16 or 32 bits? I only
want to know whether it returned a nonzero value, and the
specific value, be it 1 or 2 or -5 or 0xffffffff, doesn't matter.

because 0x10000 won't be zero when 32 bits hold it, but 16 bits
will wrap around to 0. Or?

func() should return the proper type for its caller to interpret
it? If it doesn't, then the caller is not going to be able to
interpret correctly. If it does, then the condition makes sense
regardless of the type.

!(x=func(y)) doesn't test func's return value. It tests func's return
value converted to the type of x. If x is narrower than func(), even
nonzero return values may cause the expression to evaluate to 1.
True. I missed that. But the point stands: you should be
assigning it to the proper type. Again, this is important
regardless of the width of the type in question. I don't care if
"int" is 16 or 32 bits as long as "int" is the type that func()
returns.
--
"I ran it on my DeathStation 9000 and demons flew out of my nose." --Kaz
Aug 4 '06 #60
