
Boost process and C

Hi,

Is there any group in the manner of the C++ Boost group that works on
the evolution of the C language? Or is there any group that performs an
equivalent function?

Thanks,
-vs

Apr 29 '06
we******@gmail.com wrote:
Flash Gordon wrote:
we******@gmail.com wrote:
Flash Gordon wrote:
we******@gmail.com wrote:
> Ben C wrote:
>> On 2006-05-03, we******@gmail.com <we******@gmail.com> wrote:
>>> CBFalconer wrote:
>>>> we******@gmail.com wrote:
>>>>> CBFalconer wrote:
>>>> ... snip ...
>>>>>> The last time I took an (admittedly cursory) look at Bstrlib, I
>>>>>> found it cursed with non-portabilities
>>>>> You perhaps would like to name one?
>>>> I took another 2 minute look, and was immediately struck by the use
>>>> of int for sizes, rather than size_t. This limits reliably
>>>> available string length to 32767.
>> [snip]
>>
>>>> [...] I did find an explanation and
>>>> justification for this. Conceded, such a size is probably adequate
>>>> for most usage, but the restriction is not present in standard C
>>>> strings.
>>> You're going to need to concede on more grounds than that. There is a
>>> reason many UNIX systems tried to add a ssize_t type, and why TR 24731
>>> has added rsize_t to their extension. (As a side note, I strongly
>>> suspect that Microsoft, in fact, added this whole rsize_t thing to TR
>>> 24731 when they realized that Bstrlib, or things like it, actually has
>>> far better real world safety because of its use of ints for string
>>> lengths.) Using a long would be incorrect since there are some systems
>>> where a long value can exceed a size_t value (and thus lead to falsely
>>> sized mallocs.) There is also the matter of trying to codify
>>> read-only and constant strings and detecting errors efficiently
>>> (negative lengths fit the bill.) Using ints is the best choice
>>> because at worst it's giving up things (super-long strings) that nobody
>>> cares about,
>> I think it's fair to expect the possibility of super-long strings in a
>> general-purpose string library.
> Ok, so you can name a single application of such a thing right?
Handling an RTF document that you will be writing to a variable length
record in a database. Yes, I do have good reason for doing this. No, I
can't stream the document into the database so I do have to have it all
in memory. Yes, RTF documents are encoded as text. Yes, they can be
extremely large, especially if they have graphics embedded in them
encoded as text.
So now name the platform where its *possible* to deal with this, but
where Bstrlib fails to be able to deal with them due to its design
choices.

If the DOS port hadn't been dropped then depending on the compiler we
might have hit this. A significant portion of the SW I'm thinking of
originated on DOS, so it could have hit it.


Oh ... I think of DOS as exactly the case where this *can't* happen.
Single objects in 16bit DOS have a size limit of 64K (size_t is just
unsigned which is 16 bits), so these huge RTF files you are talking
about *have* to be streamed, or split over multiple allocations
anyways.


Strangely enough there have been ways of having objects larger than 64K
in DOS. At least, given a 386 and some extensions.
>>> it allows in an efficient way for all desirable encoding scenarios,
>>> and it avoids any wrap around anomalies causing under-allocations.
>> What anomalies? Are these a consequence of using signed long, or
>> size_t?
> I am describing what int does (*BOTH* the encoding scenarios and
> avoiding anomalies). Using a long int would allow for arithmetic on
> numbers that exceed the maximum value of size_t on some systems (that
> actually *exist*), so when there was an attempt to malloc or realloc on
> such sizes, there would be a wrap around to some value that would just
> make it screw up. And if I used a size_t, then there would be no
> simple space of encodings that can catch errors, constants and write
> protected strings.
Is an extra byte (or word, or double word) for a flags field really that
big an overhead?
I need two *bits* for flags, and I want large ranges to catch errors in
the scalar fields (this is a *safe* library). An extra struct entry is
the wrong way to do this because it doesn't help me catch errors in the
scalar fields, and it's space inefficient.

ssize_t would have been a reasonable *functional* choice, but it's not
standard. size_t is no good because it can't go negative. long int is
no good because there are plenty of real platforms where long int is
larger than size_t. int solves all the main real problems, and as a
bonus the compiler is designed to make sure it's the fastest scalar
primitive available.
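
For concreteness, here is a minimal sketch of the kind of layout being
described -- written from the explanation above, not from Bstrlib's actual
source, so the struct and helper names are illustrative only:

#include <stddef.h>

/* Illustrative only -- not Bstrlib's real definitions.  The point is that
   signed int fields leave the negative range free to encode states such as
   "write-protected" or "invalid", which a size_t field cannot express. */
struct sstring {
    int mlen;             /* capacity; a negative value marks a constant or
                             write-protected string                        */
    int slen;             /* current length; negative values flag errors   */
    unsigned char *data;
};

/* Every operation can cheaply validate its argument before touching memory. */
static int sstring_ok(const struct sstring *s)
{
    return s != NULL && s->data != NULL && s->slen >= 0 &&
           (s->mlen < 0 || s->mlen >= s->slen);
}

static int sstring_writable(const struct sstring *s)
{
    return sstring_ok(s) && s->mlen > 0;   /* mlen < 0 => write-protected */
}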

Strangely enough, when a previous developer on the code I'm dealing with
thought he could limit size to a "valid" range and assert if it was out
of range, we found that the asserts kept getting triggered. However, it
was always triggered incorrectly because the size was actually valid!


And how is this connected with Bstrlib? The library comes with a test
that, if you run in a 16 bit environment, will exercise length
overflowing. So you have some reasonable assurance that Bstrlib does
not make obvious mistakes with size computations.


You are assuming I won't want an object larger than can be represented
in an int. That is an artificial limitation.
[...] So I'll stick to not artificially limiting sizes.


And how do you deal with the fact that the language limits your sizes
anyways?


You are artificially reducing the limit below what the language allows
for. The language is not artificially reducing it below what the
language allows.
[...] If the administrator of a
server the SW is installed on wants then s/he can use system specific
means to limit the size of a process.


What? You think the administrator is in charge of how the compiler
works?


No, but the SW I'm dealing with is run on systems where the
administrator can limit process size, maximum CPU usage and lots of
other good stuff. Or the administrator can leave it unlimited (i.e.
limited by available resources). You really should try an OS that gives
real power and flexibility one day.
--
Flash Gordon, living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro:
http://clc-wiki.net/wiki/Intro_to_clc

May 4 '06 #241
Richard Tobin wrote:
In article <44**************@jacob.remcomp.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
mid_date = (start_date + end_date) / 2;

Excuse me but what does it mean

Sep-25-1981 + Dec-22-2000


Just because the sum of two dates is not a date doesn't mean that
it doesn't mean anything.
You obviously meant:

mid_date = (end_date - start_date)/2


No I didn't. That is something completely different.
The *subtraction* of two dates yields a time interval


True, and (end_date - start_date) / 2 would give me half the interval
between the dates, but that is not what I wanted. I wanted the
average of the dates, which is a date.

(Sep-25-1981 + Dec-22-2000) / 2 would be the date mid-way between
Sep-25-1981 and Dec-22-2000, just as (45 + 78) / 2 is the integer
mid-way between 45 and 78.

-- Richard


Adding date values is nonsense. Subtracting one date from another to
yield integer days between two dates is very handy. Adding (or
subtracting) integer days to (or from) a date yielding a date is handy
too. Look at this ..

set century on // prints 1981 instead of 81
dbeg = ctod("09/25/1981") // convert character string to date type
dend = ctod("12/22/2000")
diff = dend - dbeg // 7028 days between two dates
? dbeg, dend, diff
dmid = dbeg + diff / 2 // begin date + 3514 days, yielding date type
? dmid // 05/10/1991

... in xBase, the language of dBASE, FoxPro, Clipper and xHarbour. While
C is my favorite language, my employer pays for xBase. I have a hobby
project to translate some of the more useful xBase stuff into C.

Note that ? is a print command in xBase. It prints a leading newline and
then the values of its arguments, separated by a space character.
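
As a rough illustration of what that hobby translation might look like in C
(my own sketch, not Joe Wright's code; the helper name make_date and the
assumption that time_t counts seconds -- true on POSIX, not guaranteed by the
C standard -- are mine):

#include <stdio.h>
#include <time.h>

/* Build a time_t from a calendar date (local time); midday avoids most
   DST edge cases. */
static time_t make_date(int year, int month, int day)
{
    struct tm tm = {0};
    tm.tm_year = year - 1900;
    tm.tm_mon  = month - 1;
    tm.tm_mday = day;
    tm.tm_hour = 12;
    return mktime(&tm);
}

int main(void)
{
    time_t dbeg = make_date(1981, 9, 25);
    time_t dend = make_date(2000, 12, 22);
    double diff = difftime(dend, dbeg);        /* an interval, in seconds   */
    time_t dmid = dbeg + (time_t)(diff / 2);   /* date + half the interval  */
    char buf[32];

    strftime(buf, sizeof buf, "%m/%d/%Y", localtime(&dmid));
    printf("days between: %.0f\nmidpoint: %s\n", diff / 86400.0, buf);
    return 0;
}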

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
May 5 '06 #242
REH

"Ben C" <sp******@spam. eggs> wrote in message
news:sl******** *************@b owser.marioworl d...
Yes I know. But you do get constructors, destructors and references, so
you can fit explicit memory management "under the hood" of operator
overloading.
I can understand people's dislike of references (though I don't agree with
the reasons), but what is wrong with constructors and destructors?

Show me the string example, and hopefully either you will get my point
or I will get yours :)


I'd rather understand what you think is wrong with constructors. My
previous example can be written with constructors and will generate code
that is as efficient, if not more so, than without.

REH
May 5 '06 #243
Ed Jensen wrote:
CBFalconer <cb********@yahoo.com> wrote:
And, if you write the library in truly portable C, without any
silly extensions and/or entanglements, you just compile the library
module. All the compiler vendor needs to do is meet the
specifications of the C standard.

Simple, huh?
That all depends on the license under which the source code was
released. Linking a bunch of C libraries under various licenses can
involve non-trivial amounts of legal hassle to ensure compliance.


If you publish your source under GPL, there is very little chance
of conflicts. In the case of things I have originated, all you
have to do is contact me to negotiate other licenses. I can be
fairly reasonable in months with an 'R' in them.

Also, there's something to be said for having features built into the
standard library. Besides making things easier from a legal point of
view, it means you can spend that much less time evaluating multiple
solutions, since most of the time, you'll just use the implementation
already available in the standard library.

I know it's unpopular around these parts to utter such heresy, but I,
for one, would love it if the standard C library included support for
smarter strings, hash tables, and linked lists.
No, there is nothing wrong with expanding the standard library.
Nothing forces anyone to use such components anyhow. There is
provision in the standard for "future library expansion". This is
a far cry from bastardizing the language with overloaded operators
and peculiar non-standard syntax, as recommended by some of the
unwashed.

Then again, I'm certainly NOT advocating these things should be added
to the standard C library. I recognize C for what it is, and use it
where it's appropriate. There are other languages that offer those
features. But that doesn't stop me from wanting those features in C.


Go ahead and advocate. I would certainly like to see at least
strlcpy/cat in the next standard, with gets removed, and possibly
my own hashlib and ggets added. What all of those things are is
completely described in terms of the existing C standards, so the
decisions can be fairly black and white.
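
For readers who have not met them, strlcpy/strlcat are the OpenBSD-style
bounded copy functions; a rough sketch of strlcpy's commonly cited behaviour
(my own illustration, not proposed standard wording) looks like this:

#include <string.h>

/* Sketch of OpenBSD-style strlcpy semantics: copy at most size-1 bytes,
   always NUL-terminate when size > 0, and return strlen(src) so the caller
   can detect truncation.  Illustrative code, not a drop-in replacement. */
size_t my_strlcpy(char *dst, const char *src, size_t size)
{
    size_t srclen = strlen(src);

    if (size > 0) {
        size_t n = srclen < size - 1 ? srclen : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;   /* truncation occurred if this is >= size */
}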

--
"If you want to post a followup via groups.google.c om, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell. org/google/>
Also see <http://www.safalra.com/special/googlegroupsrep ly/>
May 5 '06 #244
Chris Torek wrote:
... snip ...
Indeed. Suppose the String data structure is much like Paul Hsieh's
favorite, but perhaps with a few more bells and whistles (I have not
looked at his implementation):


Neither have I, beyond a cursory glance. However I did see that
the fundamental object involved is a struct, which contains a
length, a capacity, and a pointer to actual string data as an array
of char. This is an organization that has been in use for many
years in GNU Pascal. There are still awkwardnesses in its use,
such as the equivalent of a union of two strings, and how to handle
the capacity value. GPC does this by making such a union an actual
structure, with separate fields. But, by and large, it is a
familiar organization.

Any of these so-called advanced organizations has to give up
something, be it code compactness, efficiency, or other limitations.
There are very few limitations to the null terminated string, which
is why it has endured. There are, however, many traps for the
unwary. This is the hallmark of virtually all C code.

You pays your money and you takes your choice.

--
"If you want to post a followup via groups.google.c om, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell. org/google/>
Also see <http://www.safalra.com/special/googlegroupsrep ly/>
May 5 '06 #245
Flash Gordon wrote:
we******@gmail.com wrote:
Flash Gordon wrote:
we******@gmail.com wrote:
Flash Gordon wrote:
> we******@gmail.com wrote:
>> Ben C wrote:
>>> On 2006-05-03, we******@gmail.com <we******@gmail.com> wrote:
>>>> CBFalconer wrote:
>>>>> we******@gmail.com wrote:
>>>>>> CBFalconer wrote:
>>>>> ... snip ...
>>>>>>> The last time I took an (admittedly cursory) look at Bstrlib, I
>>>>>>> found it cursed with non-portabilities
>>>>>> You perhaps would like to name one?
>>>>> I took another 2 minute look, and was immediately struck by the use
>>>>> of int for sizes, rather than size_t. This limits reliably
>>>>> available string length to 32767.
>>> [snip]
>>>
>>>>> [...] I did find an explanation and
>>>>> justification for this. Conceded, such a size is probably adequate
>>>>> for most usage, but the restriction is not present in standard C
>>>>> strings.
>>>> You're going to need to concede on more grounds than that. There is a
>>>> reason many UNIX systems tried to add a ssize_t type, and why TR 24731
>>>> has added rsize_t to their extension. (As a side note, I strongly
>>>> suspect that Microsoft, in fact, added this whole rsize_t thing to TR
>>>> 24731 when they realized that Bstrlib, or things like it, actually has
>>>> far better real world safety because of its use of ints for string
>>>> lengths.) Using a long would be incorrect since there are some systems
>>>> where a long value can exceed a size_t value (and thus lead to falsely
>>>> sized mallocs.) There is also the matter of trying to codify
>>>> read-only and constant strings and detecting errors efficiently
>>>> (negative lengths fit the bill.) Using ints is the best choice
>>>> because at worst it's giving up things (super-long strings) that nobody
>>>> cares about,
>>> I think it's fair to expect the possibility of super-long strings in a
>>> general-purpose string library.
>> Ok, so you can name a single application of such a thing right?
> Handling an RTF document that you will be writing to a variable length
> record in a database. Yes, I do have good reason for doing this. No, I
> can't stream the document into the database so I do have to have it all
> in memory. Yes, RTF documents are encoded as text. Yes, they can be
> extremely large, especially if they have graphics embedded in them
> encoded as text.
So now name the platform where its *possible* to deal with this, but
where Bstrlib fails to be able to deal with them due to its design
choices.
If the DOS port hadn't been dropped then depending on the compiler we
might have hit this. A significant portion of the SW I'm thinking of
originated on DOS, so it could have hit it.


Oh ... I think of DOS as exactly the case where this *can't* happen.
Single objects in 16bit DOS have a size limit of 64K (size_t is just
unsigned which is 16 bits), so these huge RTF files you are talking
about *have* to be streamed, or split over multiple allocations
anyways.


Strangely enough there have been ways of having objects larger than 64K
in DOS. At least, given a 386 and some extensions.


For actual storage, you need go no further than an 8086, which could be
equipped with up to 640K of memory without issue. But of course,
that's not what's at issue here. It's a question of what size_t is on
those platforms. In all the 16 bit mode compilers I am aware of,
size_t (and int) is a 16 bit unsigned integer, which, per the C
standard, means a single object cannot be more than 64K. This is a real
issue when you realize that if you perform a strcat on two strings each
greater than 32K, you get an undefined result (because the C
specification is just as worthless in this respect).
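
To make that wrap-around concrete: on such a 16-bit target, la + lb + 1
computed in size_t can silently wrap past 65535 and under-allocate. A sketch
of a concatenation that checks for this first (my own illustrative helper,
not code from either library under discussion):

#include <stdlib.h>
#include <string.h>

/* Concatenate two strings into fresh storage, refusing if the combined
   length would wrap around size_t -- the failure mode described above for
   16-bit targets where SIZE_MAX is 65535. */
char *concat_checked(const char *a, const char *b)
{
    size_t la = strlen(a);
    size_t lb = strlen(b);
    char *r;

    if (la > (size_t)-1 - 1 - lb)     /* would la + lb + 1 overflow? */
        return NULL;

    r = malloc(la + lb + 1);
    if (r != NULL) {
        memcpy(r, a, la);
        memcpy(r + la, b, lb + 1);    /* copies b's terminating '\0' too */
    }
    return r;
}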

If you want to use the 32 bit instruction x86 sets and a DOS extender,
you can use one of the 32 bit compilers, but here size_t is a 32 bit
unsigned integer (as is int.)

Perhaps you might want to refrain from chiming in about things you know
very little about; I mean seriously, are *YOU* trying to tell *ME* how
DOS works? Are you kidding me?
>>>> it allows in an efficient way for all desirable encoding scenarios,
>>>> and it avoids any wrap around anomalies causing under-allocations.
>>> What anomalies? Are these a consequence of using signed long, or
>>> size_t?
>> I am describing what int does (*BOTH* the encoding scenarios and
>> avoiding anomalies). Using a long int would allow for arithmetic on
>> numbers that exceed the maximum value of size_t on some systems (that
>> actually *exist*), so when there was an attempt to malloc or realloc on
>> such sizes, there would be a wrap around to some value that would just
>> make it screw up. And if I used a size_t, then there would be no
>> simple space of encodings that can catch errors, constants and write
>> protected strings.
> Is an extra byte (or word, or double word) for a flags field really that
> big an overhead?
I need two *bits* for flags, and I want large ranges to catch errors in
the scalar fields (this is a *safe* library). An extra struct entry is
the wrong way to do this because it doesn't help me catch errors in the
scalar fields, and it's space inefficient.

ssize_t would have been a reasonable *functional* choice, but it's not
standard. size_t is no good because it can't go negative. long int is
no good because there are plenty of real platforms where long int is
larger than size_t. int solves all the main real problems, and as a
bonus the compiler is designed to make sure it's the fastest scalar
primitive available.
Strangely enough, when a previous developer on the code I'm dealing with
thought he could limit size to a "valid" range and assert if it was out
of range, we found that the asserts kept getting triggered. However, it
was always triggered incorrectly because the size was actually valid!


And how is this connected with Bstrlib? The library comes with a test
that, if you run in a 16 bit environment, will exercise length
overflowing. So you have some reasonable assurance that Bstrlib does
not make obvious mistakes with size computations.


You are assuming I won't want an object larger than can be represented
in an int. That is an artificial limitation.


size_t is also a similar artificial limitation. The fact that arrays
can only take certain kinds of scalars as index parameters is also an
artificial limitation. But it turns out that basically every language
and every array-like or string-like type (with the notable exceptions of Lua
and Python) has a similar kind of limitation.
[...] So I'll stick to not artificially limiting sizes.


And how do you deal with the fact that the language limits your sizes
anyways?


You are artificially reducing the limit below what the language allows
for. The language is not artificially reducing it below what the
language allows.


One of these statements is circular reasoning. See if you can figure
out which one it is.

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

May 5 '06 #246
jacob navia <ja***@jacob.remcomp.fr> wrote:
Besides, I think that using the addition operator to "add" strings is an
ABOMINATION because:

a+b != b+a
"Hello" + "World" != "World" + "Hello"

It just makes NO SENSE.


Quaternions must have come as a shock to you. Or does a*b != b*a somehow
make more sense to you than a+b != b+a?

Richard
May 5 '06 #247
On 2006-05-04, Richard Tobin <ri*****@cogsci.ed.ac.uk> wrote:
In article <44***********************@news.wanadoo.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
>mid_date = (start_date + end_date) / 2;

Ahh ok, you mean then

mid_date = start_date + (end_date-start_date)/2


Your attitude is baffling. You deny that adding dates makes sense,
and when I post an example where adding dates makes perfect sense, you
respond by asserting that I mean some other expression that achieves
that same effect. The mere fact that you were able to post another
expression with the same meaning refutes your original claim.


Mr Navia's attitude makes sense if you think of dates in "homogeneous
coordinates".

It's common in 3D graphics to use 4-vectors to represent positions and
directions. A position has a 1 in its last element, and a direction has a
0.

I say directions, but the vectors are not necessarily normalized, so they
are "directions with magnitude".

Positions implicitly mean "the place you get to if you start at the
origin and add the 3D part of the vector".

Directions-with-magnitude are not implicitly based at the origin. You
can add a d-with-m to a position to get to a new position.

[a0, a1, a2, 1] + [m0, m1, m2, 0] = [b0, b1, b2, 1]

If we do this as a 4D vector add, the result ends up correctly with a 1
in the 4th element-- it's a position.

Other implementation conveniences arise from this approach-- you can use
the last column of a 4D matrix to represent a translation. Applying the
matrix to a vector will rotate and then translate positions, but will
just rotate and not translate d-with-ms, because the 0 in the 4th
element will select out the last column in the matrix multiply.

Using this system, you should be able to do everything with straight 4D
matrix arithmetic, and if you ever end up with a 2 or a -1, or anything
that isn't 0 or 1 in the 4th element of a vector, you've done something
wrong.

Adding two positions, for example, gives you a 2 in that 4th element.
And, thinking of it geometrically, it doesn't make a lot of sense
because positions are implicitly "translations from the origin", so you
can't translate one position from another position.

Well, we can represent time in a 1D space and use 2D "homogeneous
coordinates":

[100, 0] means "100 seconds forwards"
[-100, 0] means "100 seconds ago"
[100, 1] means "100 seconds since 1970-01-01T00:00"

In exactly the same way we distinguish between a length of time, and a
length of time that implicitly starts at the origin.

start_date + (end_date - start_date) / 2

doesn't generate any invalid last-elements in any intermediate results,
but

(start_date + end_date) / 2

does.
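
A plain-C sketch of that 2D scheme (the names are mine, and the assert-based
validity check stands in for "you've done something wrong"):

#include <assert.h>

/* Second component: 1 = a date (anchored at the epoch), 0 = a duration.
   After any legal computation it must still be 0 or 1. */
struct htime {
    double t;   /* seconds */
    int    w;
};

static struct htime ht_add(struct htime a, struct htime b)
{
    a.t += b.t;
    a.w += b.w;
    return a;
}

static struct htime ht_sub(struct htime a, struct htime b)
{
    a.t -= b.t;
    a.w -= b.w;
    return a;
}

static struct htime ht_scale(struct htime a, double k)
{
    assert(a.w == 0);   /* only durations can be scaled */
    a.t *= k;
    return a;
}

/* ht_add(start, ht_scale(ht_sub(end, start), 0.5)) ends with w == 1, a date;
   ht_scale(ht_add(start, end), 0.5) trips the assert because the
   intermediate sum has w == 2. */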

In Python's datetime module, subtracting two dates returns a "timedelta"
object, which can be added to a date. But two dates cannot be added.

This seems a sensible way to do it, and if you wanted to do it in C++, I
think you'd overload global operators, not member function operators:

Timedelta operator-(const Date& a, const Date& b);
Date operator+(const Date& a, const Timedelta& delta);
Timedelta operator+(const Timedelta& a, const Timedelta& b);

etc. You could make a perfectly usable system this way, and I'd say that
using operators for dates is no more or less sane or insane than using
them for matrices and vectors.
May 5 '06 #248
On 2006-05-05, REH <me@you.com> wrote:

"Ben C" <sp******@spam. eggs> wrote in message
news:sl******** *************@b owser.marioworl d...
Yes I know. But you do get constructors, destructors and references, so
you can fit explicit memory management "under the hood" of operator
overloading.
I can understand people's dislike of references (though I don't agree with
the reasons), but what is wrong with constructors and destructors?


Nothing, I like constructors and destructors.
May 5 '06 #249
jacob navia <ja***@jacob.remcomp.fr> wrote:
2) Operator overloading does NOT need any constructors, nor destructors
nor the GC if we use small objects:

int128 a,b,c,d;

a = (b+c)/(b-d);


You keep repeating this as one of the prime examples (in fact, the only
consistent example) of why overloading is so useful in your suite. Don't
you realise that C99 allows any implementation to define any size
integers without requiring overloading at all?
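
For instance, with C99's long long (or an implementation-provided extended
integer type of any width), jacob navia's expression already works with the
built-in operators -- a small sketch with made-up values:

#include <stdio.h>

int main(void)
{
    long long a, b = 1234567890123LL, c = 987654321987LL, d = 111111111LL;

    a = (b + c) / (b - d);   /* ordinary operators, no overloading needed */
    printf("%lld\n", a);
    return 0;
}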

Richard
May 5 '06 #250

