Re: C++0x: release sequence

On Jun 16, 3:09 pm, Anthony Williams <anthony....@gmail.com> wrote:
Note that the
use of fences in the C++0x WP has changed this week from
object-specific fences to global fences. See Peter Dimov's paper
N2633: http://www.open-std.org/JTC1/SC22/WG...008/n2633.html

Yes, I've already read this. It's just GREAT! It's far more useful and
intuitive.
And it contains a clear and simple binding to the memory model, i.e. the
relations between acquire/release fences, and between acquire/release
fences and acquire/release operations.

Is it already generally approved by the memory model working group?
For atomic_fence() I'm not worried :) But what about compiler_fence()?

Btw, I see some problems in Peter Dimov's proposal.
First, it's possible to write:

x.store(1, std::memory_order_relaxed);
std::atomic_compiler_fence(std::memory_order_release);
y.store(1, std::memory_order_relaxed);

But it's not possible to write:

x.store(1, std::memory_order_relaxed);
y.store(1, std::memory_order_relaxed_but_compiler_order_release);
// or just y.store(1, std::compiler_order_release);

I.e. it's not possible to use compiler ordering when using acquire/
release operations. It's a bit inconsistent, especially taking into
account that acquire/release operations are primary and standalone
bidirectional fences are supplementary.

Second, a more important point. It's possible to write:

//thread 1:
data = 1;
std::atomic_memory_fence(std::memory_order_release);
x.store(1, std::memory_order_relaxed);

//thread 2:
if (x.load(std::memory_order_acquire))
    assert(1 == data);

But it's not possible to write:

//thread 1:
data = 1;
z.store(1, std::memory_order_release);
x.store(1, std::memory_order_relaxed);

//thread 2:
if (x.load(std::memory_order_acquire))
    assert(1 == data);

From the point of view of Peter Dimov's proposal, this code contains a race
on 'data'.

I think there must be the following statements:

- a release operation *is a* release fence
- an acquire operation *is an* acquire fence

So this:
z.store(1, std::memory_order_release);
basically transforms to:
std::atomic_memory_fence(std::memory_order_release);
z.store(1, std::memory_order_release);

Then the second example would be legal. What do you think?

Dmitriy V'jukov
Jun 27 '08 #1
"Dmitriy V'jukov" <dv*****@gmail. comwrites:
On Jun 16, 3:09 pm, Anthony Williams <anthony....@gm ail.comwrote:
>Note that the
use of fences in the C++0x WP has changed this week from
object-specific fences to global fences. See Peter Dimov's paper
N2633: http://www.open-std.org/JTC1/SC22/WG...008/n2633.html


Yes, I've already read this. It's just GREAT! It's far more useful and
intuitive.
And it contains a clear and simple binding to the memory model, i.e. the
relations between acquire/release fences, and between acquire/release
fences and acquire/release operations.

Is it already generally approved by the memory model working group?
For atomic_fence() I'm not worried :) But what about compiler_fence()?
Yes. It's been approved to be applied to the WP with minor renamings
(atomic_memory_fence -> atomic_thread_fence, atomic_compiler_fence ->
atomic_signal_fence)
Btw, I see some problems in Peter Dimov's proposal.
First, it's possible to write:

x.store(1, std::memory_order_relaxed);
std::atomic_compiler_fence(std::memory_order_release);
y.store(1, std::memory_order_relaxed);

But it's not possible to write:

x.store(1, std::memory_order_relaxed);
y.store(1, std::memory_order_relaxed_but_compiler_order_release);
// or just y.store(1, std::compiler_order_release);

I.e. it's not possible to use compiler ordering when using acquire/
release operations. It's a bit inconsistent, especially taking into
account that acquire/release operations are primary and standalone
bidirectional fences are supplementary.
You're right that you can't do this. I don't think it's a problem as
compiler orderings are not really the same as the inter-thread
orderings.
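
For illustration, here's a minimal sketch of the kind of case a compiler-only fence is meant for: ordering a thread against a signal handler running in the same thread. It uses the renamed atomic_signal_fence, and the names (publish, on_signal, payload, ready) are made up for the example.

#include <atomic>
#include <csignal>

int payload;                           // plain data, shared only with a handler in this thread
volatile std::sig_atomic_t ready = 0;
int seen;

extern "C" void on_signal(int)
{
    if (ready) {
        // The signal fence in publish() stops the *compiler* from reordering
        // the write to 'payload' past the write to 'ready'; no hardware
        // barrier is needed, since the handler runs in the same thread.
        seen = payload;
    }
}

void publish()
{
    payload = 42;
    std::atomic_signal_fence(std::memory_order_release); // compiler-only ordering
    ready = 1;
}
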
Second, a more important point. It's possible to write:

//thread 1:
data = 1;
std::atomic_memory_fence(std::memory_order_release);
x.store(1, std::memory_order_relaxed);

//thread 2:
if (x.load(std::memory_order_acquire))
    assert(1 == data);

But it's not possible to write:

//thread 1:
data = 1;
z.store(1, std::memory_order_release);
x.store(1, std::memory_order_relaxed);

//thread 2:
if (x.load(std::memory_order_acquire))
    assert(1 == data);

From the point of view of Peter Dimov's proposal, this code contains a race
on 'data'.
Yes. Fences are global, whereas ordering on individual objects is
specific. The fence version is equivalent to:

// thread 1
data = 1;
x.store(1, std::memory_order_release);
I think there must be the following statements:

- a release operation *is a* release fence
- an acquire operation *is an* acquire fence

So this:
z.store(1, std::memory_order_release);
basically transforms to:
std::atomic_memory_fence(std::memory_order_release);
z.store(1, std::memory_order_release);

Then the second example would be legal. What do you think?
I think that compromises the model, because it makes release
operations contagious. The fence transformation is precisely the
reverse of this, which I think is correct.
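
To make the direction concrete, here's a sketch of the two fragments side by side (using the renamed atomic_thread_fence; the function names are only illustrative):

#include <atomic>

std::atomic<int> x{0}, z{0};
int data = 0;

// Allowed direction: a release fence followed by a relaxed store is at
// least as strong as a release store on x, so an acquire load that sees
// x == 1 also sees data == 1.
void fence_version()
{
    data = 1;
    std::atomic_thread_fence(std::memory_order_release);
    x.store(1, std::memory_order_relaxed);
}

// The suggested reverse direction would make the release on z also cover
// the later relaxed store to x; under the WP it does not, so an acquire
// load of x here does not guarantee visibility of 'data'.
void operation_version()
{
    data = 1;
    z.store(1, std::memory_order_release);
    x.store(1, std::memory_order_relaxed);
}
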

Anthony
--
Anthony Williams | Just Software Solutions Ltd
Custom Software Development | http://www.justsoftwaresolutions.co.uk
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL
Jun 27 '08 #2
On Jun 16, 11:47 pm, Anthony Williams <anthony....@gmail.com> wrote:
Yes, I've already read this. It's just GREAT! It's far more useful and
intuitive.
And it contains a clear and simple binding to the memory model, i.e. the
relations between acquire/release fences, and between acquire/release
fences and acquire/release operations.
Is it already generally approved by the memory model working group?
For atomic_fence() I'm not worried :) But what about compiler_fence()?

Yes. It's been approved to be applied to the WP with minor renamings
(atomic_memory_fence -> atomic_thread_fence, atomic_compiler_fence ->
atomic_signal_fence)
COOL!

Looking forward to the next draft. Btw, what about dependent memory
ordering (memory_order_consume)? Is it going to be accepted?

Btw, I see some problems in Peter Dimov's proposal.
First, it's possible to write:
x.store(1, std::memory_order_relaxed);
std::atomic_compiler_fence(std::memory_order_release);
y.store(1, std::memory_order_relaxed);
But it's not possible to write:
x.store(1, std::memory_order_relaxed);
y.store(1, std::memory_order_relaxed_but_compiler_order_release);
// or just y.store(1, std::compiler_order_release);
I.e. it's not possible to use compiler ordering when using acquire/
release operations. It's a bit inconsistent, especially taking into
account that acquire/release operations are primary and standalone
bidirectional fences are supplementary.

You're right that you can't do this. I don't think it's a problem as
compiler orderings are not really the same as the inter-thread
orderings.
Yes, but why can I do both inter-thread ordering and compiler ordering
with stand-alone fences, but only inter-thread ordering with operations?
Why are stand-alone fences more 'powerful'?

Second, a more important point. It's possible to write:
//thread 1:
data = 1;
std::atomic_memory_fence(std::memory_order_release);
x.store(1, std::memory_order_relaxed);
//thread 2:
if (x.load(std::memory_order_acquire))
    assert(1 == data);
But it's not possible to write:
//thread 1:
data = 1;
z.store(1, std::memory_order_release);
x.store(1, std::memory_order_relaxed);
//thread 2:
if (x.load(std::memory_order_acquire))
    assert(1 == data);
From the point of view of Peter Dimov's proposal, this code contains a race
on 'data'.

Yes. Fences are global, whereas ordering on individual objects is
specific.
Hmmm... need to think some more on this...

The fence version is equivalent to:

// thread 1
data = 1;
x.store(1, std::memory_order_release);
I think there must be the following statements:
- a release operation *is a* release fence
- an acquire operation *is an* acquire fence
So this:
z.store(1, std::memory_order_release);
basically transforms to:
std::atomic_memory_fence(std::memory_order_release);
z.store(1, std::memory_order_release);
Then the second example would be legal. What do you think?

I think that compromises the model, because it makes release
operations contagious...
... and this will interfere with efficient implementation on some
hardware. Right? Or are there some 'logical' reasons for this (why you
don't want to make release operations contagious)?

Dmitriy V'jukov
Jun 27 '08 #3
"Dmitriy V'jukov" <dv*****@gmail. comwrites:
On Jun 16, 11:47 pm, Anthony Williams <anthony....@gm ail.comwrote:
Yes, I've already read this. It's just GREAT! It's far more useful and
intuitive.
And it contains a clear and simple binding to the memory model, i.e. the
relations between acquire/release fences, and between acquire/release
fences and acquire/release operations.
Is it already generally approved by the memory model working group?
For atomic_fence() I'm not worried :) But what about compiler_fence()?

Yes. It's been approved to be applied to the WP with minor renamings
(atomic_memory_fence -> atomic_thread_fence, atomic_compiler_fence ->
atomic_signal_fence)

COOL!

Looking forward to the next draft. Btw, what about dependent memory
ordering (memory_order_consume)? Is it going to be accepted?
Yes. That's been voted in too.
Btw, I see some problems in Peter Dimov's proposal.
First, it's possible to write:
x.store(1, std::memory_order_relaxed);
std::atomic_compiler_fence(std::memory_order_release);
y.store(1, std::memory_order_relaxed);
But it's not possible to write:
x.store(1, std::memory_order_relaxed);
y.store(1, std::memory_order_relaxed_but_compiler_order_release);
// or just y.store(1, std::compiler_order_release);
I.e. it's not possible to use compiler ordering when using acquire/
release operations. It's a bit inconsistent, especially taking into
account that acquire/release operations are primary and standalone
bidirectional fences are supplementary.

You're right that you can't do this. I don't think it's a problem as
compiler orderings are not really the same as the inter-thread
orderings.

Yes, but why can I do both inter-thread ordering and compiler ordering
with stand-alone fences, but only inter-thread ordering with operations?
Why are stand-alone fences more 'powerful'?
Stand-alone fences affect all data touched by the executing thread, so
they are inherently more 'powerful'.
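
For example (a sketch using the renamed atomic_thread_fence; the names are only illustrative), a single stand-alone release fence lets either of the later relaxed stores act as the publishing store, which a single release operation on one object would not do:

#include <atomic>

std::atomic<int> x{0}, y{0};
int data = 0;

void producer()
{
    data = 1;
    std::atomic_thread_fence(std::memory_order_release); // one fence...
    x.store(1, std::memory_order_relaxed);               // ...covers this store
    y.store(1, std::memory_order_relaxed);               // ...and this one
}

void consumer_via_x()
{
    if (x.load(std::memory_order_acquire)) {
        int r = data;   // guaranteed to be 1
        (void)r;
    }
}

void consumer_via_y()
{
    if (y.load(std::memory_order_acquire)) {
        int r = data;   // guaranteed to be 1 as well
        (void)r;
    }
}
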
>The fence version is equivalent to:

// thread 1
data = 1;
x.store(1, std::memory_order_release);
I think there must be the following statements:
- a release operation *is a* release fence
- an acquire operation *is an* acquire fence
So this:
z.store(1, std::memory_order_release);
basically transforms to:
std::atomic_memory_fence(std::memory_order_release);
z.store(1, std::memory_order_release);
Then the second example would be legal. What do you think?

I think that compromises the model, because it makes release
operations contagious...

... and this will interfere with efficient implementation on some
hardware. Right? Or are there some 'logical' reasons for this (why you
don't want to make release operations contagious)?
It affects where you put the memory barrier instruction. The whole
point of relaxed operations is that they don't have memory barriers,
but if you make the release contagious the compiler might have to add
extra memory barriers in some cases. N2633 shows how you can
accidentally end up with full barriers all over the place.

Anthony
--
Anthony Williams | Just Software Solutions Ltd
Custom Software Development | http://www.justsoftwaresolutions.co.uk
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL
Jun 27 '08 #4
On Jun 17, 00:27, Anthony Williams <anthony....@gmail.com> wrote:
Looking forward to the next draft. Btw, what about dependent memory
ordering (memory_order_consume)? Is it going to be accepted?

Yes. That's been voted in too.
Oooo, that's bad news. I've only just started understanding the current
"1.10", and they've changed it almost completely! :)

The latest proposal about dependent ordering is:
http://open-std.org/jtc1/sc22/wg21/d...008/n2556.html
Right?

And what about the syntax with double square brackets,
[[carries_dependency]]? It's quite an unusual syntax addition for
C/C++...

Btw, I see some problems in Peter Dimov's proposal.
First, it's possible to write:
x.store(1, std::memory_order_relaxed);
std::atomic_compiler_fence(std::memory_order_release);
y.store(1, std::memory_order_relaxed);
But it's not possible to write:
x.store(1, std::memory_order_relaxed);
y.store(1, std::memory_order_relaxed_but_compiler_order_release);
// or just y.store(1, std::compiler_order_release);
I.e. it's not possible to use compiler ordering when using acquire/
release operations. It's a bit inconsistent, especially taking into
account that acquire/release operations are primary and standalone
bidirectional fences are supplementary.
You're right that you can't do this. I don't think it's a problem as
compiler orderings are not really the same as the inter-thread
orderings.
Yes, but why can I do both inter-thread ordering and compiler ordering
with stand-alone fences, but only inter-thread ordering with operations?
Why are stand-alone fences more 'powerful'?

Stand-alone fences affect all data touched by the executing thread, so
they are inherently more 'powerful'.
I'm starting to understand. Initially I was thinking that these are just
two forms of saying the same thing (a stand-alone fence and an acquire/
release operation). It turns out that's not true. Ok.

Dmitriy V'jukov
Jun 27 '08 #5
"Dmitriy V'jukov" <dv*****@gmail. comwrites:
On 17 июн, 00:27, Anthony Williams <anthony....@gm ail.comwrote:
Looking forward to the next draft. Btw, what about dependent memory
ordering (memory_order_consume)? Is it going to be accepted?

Yes. That's been voted in too.

Oooo, that's bad news. I've only just started understanding the current
"1.10", and they've changed it almost completely! :)
It's all additions, so it's not too bad. The key thing is that the
paper adds memory_order_consume and dependency ordering, which
provides an additional mechanism for introducing a happens-before
relationship between threads.
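
A minimal sketch of the pattern it enables (the usual pointer-publication example; the names are made up, and it assumes the consume semantics as voted in):

#include <atomic>

struct Node { int payload; };
std::atomic<Node*> head{nullptr};

void producer(Node* n)
{
    n->payload = 42;
    head.store(n, std::memory_order_release);
}

int consumer()
{
    Node* n = head.load(std::memory_order_consume);
    if (n)
        return n->payload;  // data-dependent on the consumed pointer, so it is
                            // ordered after the producer's write without needing
                            // a full acquire barrier on weakly-ordered hardware
    return 0;
}
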
The latest proposal about dependent ordering is:
http://open-std.org/jtc1/sc22/wg21/d...008/n2556.html
Right?
That's the latest pre-meeting paper. The latest (which is what was
voted on) is N2664 which is currently only available on the committee
site. It should be in the post-meeting mailing.
And what about the syntax with double square brackets,
[[carries_dependency]]? It's quite an unusual syntax addition for
C/C++...
That's the new attribute syntax. This part of the proposal has not
been included for now.
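
For reference, the proposed usage looked roughly like this (a sketch based on the attribute wording in the proposal, not on anything in the WP at this point; the function names are invented):

#include <atomic>

struct Node { int payload; };
std::atomic<Node*> head{nullptr};

// The attribute on the declaration says the dependency from the consumed
// load is carried out through the return value...
[[carries_dependency]] Node* lookup()
{
    return head.load(std::memory_order_consume);
}

// ...and here that it is carried in through the parameter, so the compiler
// need not promote the consume to an acquire at the call boundary.
int read_payload([[carries_dependency]] Node* n)
{
    return n->payload;
}
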

Anthony
--
Anthony Williams | Just Software Solutions Ltd
Custom Software Development | http://www.justsoftwaresolutions.co.uk
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL
Jun 27 '08 #6
On Jun 17, 10:48 am, Anthony Williams <anthony....@gmail.com> wrote:
"Dmitriy V'jukov" <dvyu...@gmail.com> writes:
On Jun 17, 00:27, Anthony Williams <anthony....@gmail.com> wrote:
Looking forward to the next draft. Btw, what about dependent memory
ordering (memory_order_consume)? Is it going to be accepted?
Yes. That's been voted in too.
Oooo, that's bad news. I've only just started understanding the current
"1.10", and they've changed it almost completely! :)

It's all additions, so it's not too bad. The key thing is that the
paper adds memory_order_consume and dependency ordering, which
provides an additional mechanism for introducing a happens-before
relationship between threads.
The latest proposal about dependent ordering is:
http://open-std.org/jtc1/sc22/wg21/d...008/n2556.html
Right?

That's the latest pre-meeting paper. The latest (which is what was
voted on) is N2664 which is currently only available on the committee
site. It should be in the post-meeting mailing.

I hope that the 'happens before' definition is changed in N2664, because
right now I can't understand it.
For example, in the following code:

int data;
std::atomic<int> x;

thread 1:
data = 1;
x.store(1, std::memory_order_release); (A)

thread 2:
if (x.load(std::memory_order_consume)) (B)
    assert(1 == data); (C)

A is dependency-ordered before B, and B is sequenced before C.
So according to the definition of 'happens before' in N2556, A happens-
before C.
According to my understanding, this is simply wrong. There is no data
dependency between B and C, so A must not happen-before C. (There is a
control dependency, but currently C++0x doesn't respect control
dependencies.)

------------------------------------
Another point:

An evaluation A carries a dependency to an evaluation B if
* the value of A is used as an operand of B, and:
o B is not an invocation of any specialization of
std::kill_dependency, and
o A is not the left operand to the comma (',') operator,

I think the last clause must say 'built-in comma operator'. Consider the
following example:

#include <atomic>

struct X
{
    int data;
};

void operator , (int y, X& x)
{
    x.data = y;
}

std::atomic<int> a;

int main()
{
    int y = a.load(std::memory_order_consume);
    X x;
    y, x; // here 'carries a dependency' is broken, because 'y' is the
          // left operand of a comma operator
    int z = x.data; // but I think 'z' still must be in the
                    // 'dependency tree' rooted at 'y'
}
Where am I wrong this time? :)
Dmitriy V'jukov
Jun 27 '08 #7
"Dmitriy V'jukov" <dv*****@gmail. comwrites:
On Jun 17, 10:48 am, Anthony Williams <anthony....@gm ail.comwrote:
>"Dmitriy V'jukov" <dvyu...@gmail. comwrites:
On 17 июн, 00:27, Anthony Williams <anthony....@gm ail.comwrote:
Looking forward to the next draft. Btw, what about dependent memory
ordering (memory_order_consume)? Is it going to be accepted?
>Yes. That's been voted in too.
Oooo, that's bad news. I've only just started understanding the current
"1.10", and they've changed it almost completely! :)

It's all additions, so it's not too bad. The key thing is that the
paper adds memory_order_consume and dependency ordering, which
provides an additional mechanism for introducing a happens-before
relationship between threads.
The latest proposal about dependent ordering is:
http://open-std.org/jtc1/sc22/wg21/d...008/n2556.html
Right?

That's the latest pre-meeting paper. The latest (which is what was
voted on) is N2664 which is currently only available on the committee
site. It should be in the post-meeting mailing.


I hope that the 'happens before' definition is changed in N2664, because
right now I can't understand it.
N2664 is almost the same as N2556.
For example, in the following code:

int data;
std::atomic<int> x;

thread 1:
data = 1;
x.store(1, std::memory_order_release); (A)

thread 2:
if (x.load(std::memory_order_consume)) (B)
    assert(1 == data); (C)

A is dependency-ordered before B, and B is sequenced before C.
Yes.
So according to the definition of 'happens before' in N2556, A happens-
before C.
No. happens-before is no longer transitive if one of the legs is a
dependency ordering.

N2664 says:

"An evaluation A inter-thread happens before an evaluation B if,

* A synchronizes with B, or
* A is dependency-ordered before B, or
* for some evaluation X,
o A synchronizes with X and X is sequenced before B, or
o A is sequenced before X and X inter-thread happens before B, or
o A inter-thread happens before X and X inter-thread happens before B."

"An evaluation A happens before an evaluation B if:

* A is sequenced before B, or
* A inter-thread happens before B."

A is dependency-ordered before B, so A inter-thread happens-before B,
and A happens-before B.

However A neither synchronizes with B nor C, nor is sequenced before B, so
the only way A could inter-thread-happen-before C is if B
inter-thread-happens-before C. Since C is not atomic, B cannot
synchronize with C or be dependency-ordered before C. Thus A does not
inter-thread-happen-before C, and A does not happen-before C.
According to my understanding, this is simply wrong. There is no data
dependency between B and C, so A must not happen-before C. (There is a
control dependency, but currently C++0x doesn't respect control
dependencies.)
You're right in your analysis, but N2664 agrees with you.
------------------------------------
Another point:

An evaluation A carries a dependency to an evaluation B if
* the value of A is used as an operand of B, and:
o B is not an invocation of any specialization of
std::kill_dependency, and
o A is not the left operand to the comma (',') operator,

I think the last clause must say 'built-in comma operator'. Consider the
following example:
Yes. That's fixed in N2664:

"An evaluation A carries a dependency to an evaluation B if

* the value of A is used as an operand of B, unless:
o B is an invocation of any specialization of std::kill_dependency (29.1), or
o A is the left operand of a built-in logical AND ('&&', see 5.14) or logical OR ('||', see 5.15) operator, or
o A is the left operand of a conditional ('?:') operator (5.16), or
o A is the left operand of the built-in comma (',') operator (5.18);
or
* A writes a scalar object or bit-field M, B reads the value written by A from M, and A is sequenced before B, or
* for some evaluation X, A carries a dependency to X, and X carries a dependency to B."
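
As an illustration of the kill_dependency clause (a sketch; the names are made up):

#include <atomic>

std::atomic<int*> guard{nullptr};
int table[64];

int read_entry()
{
    int* p = guard.load(std::memory_order_consume);
    if (!p)
        return 0;
    int i = std::kill_dependency(*p);  // the value still comes through the
                                       // dependency on 'p', but further uses
                                       // of 'i' no longer carry a dependency
    return table[i % 64];              // so this access is not dependency-ordered
}
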

Anthony
--
Anthony Williams | Just Software Solutions Ltd
Custom Software Development | http://www.justsoftwaresolutions.co.uk
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL
Jun 27 '08 #8
