Bytes | Software Development & Data Engineering Community
Is *all* thread cached data flushed at MemoryBarrier

Obviously wrapping a critical section around access to some set of
shared state variables flushes any cached data, etc so that the threads
involved don't see a stale copy. What I was wondering is *what*
exactly gets flushed. Does the compiler some how determine the data
that is accessible from that thread, and flush just that set? (Seems
unlikely to me). Is it all data cached in registers etc? Or am I
overthinking this and instead it's more along the lines that a memory
barrier is just invalidating pages of memory such that when another
thread goes to access that memory it checks first to see if that page
needs to be refetched from main memory?

Thanks for any insights,
Tom

Aug 9 '06 #1
Hi,

I do not clearly understand what your question is. MemoryBarrier (according
to MSDN) is only significant on Itanium processors; I'm not sure .NET
is even ported to the Itanium, to be honest.

My suggestion is to try to find the equivalent in the unmanaged
world.
--
Ignacio Machin,
ignacio.machin AT dot.state.fl.us
Florida Department Of Transportation


Aug 9 '06 #2
When a lock is taken (or Monitor.Enter/Exit is performed), an implicit
read and write memory barrier is performed to ensure that the current
thread does not see a "stale" value (one that was sitting in a
cache/register/etc.). This is the reason (for example) that you cannot
poll a simple boolean in a loop, waiting for it to be changed by
another thread: the "watching" thread is likely to keep looping
after the boolean has changed value because it is seeing a stale value.
My question is: when this memory barrier is performed, what is the set
of data that gets flushed, or invalidated (forcing a read-through),
or written through, or whatever?
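As a minimal sketch of the pattern being described (all names here are made up for illustration), guarding the flag with lock means Monitor.Enter/Exit supply the barriers, so the watching thread cannot keep seeing a stale value:

```csharp
using System.Threading;

// Hypothetical illustration: a worker that polls a shared stop flag.
class Worker
{
    private bool _stop;
    private readonly object _gate = new object();

    public void RequestStop()
    {
        lock (_gate) { _stop = true; }   // Monitor.Exit performs a write barrier
    }

    public void Loop()
    {
        while (true)
        {
            lock (_gate)                 // Monitor.Enter performs a read barrier
            {
                if (_stop) return;       // guaranteed to see the latest value
            }
            // ... do a unit of work ...
        }
    }
}
```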

Tom

Aug 9 '06 #3
NO***********@lycos.com wrote:
My question is, when this memory barrier is performed, what is the set
of data that gets flushed or gets invalidated (forcing a readthrough)
or gets written-through, or whatever.
It's defined by the hardware architecture. In the case of x86, the amount
of memory flushed is 0, because x86 processors have strong cache-coherency
guarantees. On other architectures it will be different, but in all cases,
following a memory barrier, all writes issued before the barrier will be
visible to all CPUs. Whether that's done by cache invalidation, updating
other caches, etc., is defined by the hardware architecture and generally
not visible to the programmer.
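A sketch of what "all writes issued before the barrier will be visible" buys you in practice, using the classic publish/consume pattern (the type and member names are illustrative, not from this thread):

```csharp
using System.Threading;

// Illustrative only: one thread publishes a payload, another consumes it.
class Publisher
{
    private int _payload;
    private bool _ready;

    public void Publish(int value)
    {
        _payload = value;
        Thread.MemoryBarrier(); // the payload write cannot be reordered past the flag write
        _ready = true;
    }

    public bool TryConsume(out int value)
    {
        bool ready = _ready;
        Thread.MemoryBarrier(); // the payload read cannot be reordered before the flag read
        value = _payload;
        return ready;
    }
}
```

If TryConsume returns true, the paired barriers guarantee that value holds the published payload, whichever architecture the threads ran on.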

-cd
Aug 9 '06 #4
OK.

So my example of watching a boolean is only unsafe on x86 if
instruction (re)ordering is an issue, not because multiple threads
will see different values for that variable.

That wasn't completely clear to me before.

Thanks!
Tom

Aug 9 '06 #5
Tom,

I have to ask, why are you using MemoryBarrier instead of the lock
statement?

--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com


Aug 9 '06 #6
To be honest, the original question was for informational purposes
only.

I'm one of those people who always wants to know the "why", not just the
"how".

Any place where I'm forced to say to myself "I know that if I do this it
will work, but I don't really completely know *why* it does" is a place
where I start buying books, downloading articles, and hitting Google.

On this topic, I've found dedicated books on advanced concurrency to be
thin at best in the .NET world. Java, on the other hand, which has a less
feature-rich set of concurrency options, has a number of excellent
texts available. If anyone can recommend a few highly detailed books
on the topic (NOT books with just a chapter or two on it),
please let me know!

Tom
Nicholas Paldino [.NET/C# MVP] wrote:
Tom,

I have to ask, why are you using MemoryBarrier instead of the lock
statement?

Aug 9 '06 #7
Tom,

While I can't really recommend any FULL books on the topic, I can tell
you that for the most part you will want to use the lock statement (which
is really a call to Monitor.Enter/Monitor.Exit under the hood) over
MemoryBarrier. Monitor.Enter/Monitor.Exit is specified in the spec as
having to work, and you should always be able to depend on that.
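For comparison, here is the shape of that recommendation as a sketch (Counter is a hypothetical type): lock, i.e. Monitor.Enter/Monitor.Exit, provides both mutual exclusion and the implicit memory barriers, so you never have to reason about fences directly:

```csharp
using System.Threading;

// Hypothetical example: lock gives atomicity *and* the implicit barriers.
class Counter
{
    private int _count;
    private readonly object _gate = new object();

    public void Increment()
    {
        lock (_gate) { _count++; }
    }

    public int Read()
    {
        lock (_gate) { return _count; }
    }
}
```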
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com


Aug 9 '06 #8
Tom,

No, I do not believe it is safe. And even if it technically were, I
certainly wouldn't bank on it, because you may later port the code to
another framework version or hardware platform.

Maybe I'm wrong, but as I understand it the x86 memory model only
guarantees that writes cannot move with respect to other writes; it
doesn't make any guarantees about reads. So it seems to me that your
example is unsafe. But I bet you'd have a hard time reproducing the
issue in reality. You'd almost certainly need an SMP system to
see it.

Here are some excellent links regarding memory barriers and the .NET
framework.

<http://blogs.msdn.com/cbrumme/archive/2003/05/17/51445.aspx>
<http://discuss.develop.com/archives/wa.exe?A2=ind0203B&L=DOTNET&P=R375>
<http://www.yoda.arachsys.com/csharp/threads/volatility.shtml>
<http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/>

Brian

Aug 9 '06 #9
<NO***********@lycos.com> wrote:
So my example of watching a boolean is only unsafe on x86 if
instruction (re)ordering is an issue, not because multiple threads
will see different values for that variable.

That wasn't completely clear to me before.
Reordering *is* an issue, however. Memory barriers are about preventing
*effective* reordering, whether that's done by the JIT or due to caches
etc.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Aug 9 '06 #10
Brian Gideon <br*********@yahoo.com> wrote:
Maybe I'm wrong, but as I understand it the x86 memory model only
guarantees that writes cannot move with respect to other writes, but it
doesn't make any guarantees about reads. So it seems to me that your
example is unsafe. But, I bet you'd have a hard time reproducing the
issue in reality. You'd almost certainly have to have an SMP system to
see it.
I thought that, but it's very easy to see "effective" memory read moves
- where a value is basically only read once instead of being reread
each time through a loop:

using System;
using System.Threading;

public class Test
{
    static volatile bool stop;

    static void Main()
    {
        ThreadStart job = new ThreadStart(ThreadJob);
        Thread thread = new Thread(job);
        thread.Start();

        // Let the thread start running
        Thread.Sleep(500);

        // Now tell it to stop counting
        stop = true;

        thread.Join();
    }

    static void ThreadJob()
    {
        int count = 0;
        while (!stop)
        {
            count++;
        }
    }
}

That stops half a second after you start it. Take the "volatile" bit
out, and it'll run forever (at least it does on my single processor P4,
when compiled with optimisation enabled).

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Aug 9 '06 #11
Jon Skeet [C# MVP] wrote:

I thought that, but it's very easy to see "effective" memory read moves
- where a value is basically only read once instead of being reread
each time through a loop:


That stops half a second after you start it. Take the "volatile" bit
out, and it'll run forever (at least it does on my single processor
P4, when compiled with optimisation enabled).
But this is just due to code hoisting by the JIT and has nothing to do with
the memory model at the CLR or CPU level. The volatile modifier inhibits
the hoisting of the read out of the loop, so the thread stops like you'd
expect. Without volatile, the read is hoisted and the variable is only read
once, since the compiler can easily prove that nothing in the loop affects
the value of the variable.

-cd
Aug 10 '06 #12
Carl Daniel [VC++ MVP]
<cp*****************************@mvps.org.nospam> wrote:

<snip>
That stops half a second after you start it. Take the "volatile" bit
out, and it'll run forever (at least it does on my single processor
P4, when compiled with optimisation enabled).

But this is just due to code hoisting by the JIT and has nothing to do with
the memory model at the CLR or CPU level. The volatile modifier inhibits
the hoisting of the read out of the loop, so the thread stops like you'd
expect. Without volatile, the read is hoisted and the variable only read
once, since the compiler can easily prove that nothing in the loop affects
the value of the variable.
But it's the memory model which specifies what the JIT can do. That's
what I'm saying - regardless of CPU architecture, the JIT can do
optimisations which change the "apparent" read time of a variable. The
optimisations it's able to do are controlled by the memory model at the
CLR level.
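One way to express that constraint without marking the field volatile, sketched here with the Thread.VolatileRead/VolatileWrite helpers (the Poller type is made up for illustration): each call is a real memory access that the JIT is not allowed to hoist out of the loop:

```csharp
using System.Threading;

// Illustrative: polling a flag through VolatileRead defeats hoisting.
class Poller
{
    private int _stop; // 0 = keep running, 1 = stop

    public void Run()
    {
        // Each iteration performs an actual read; the JIT cannot
        // cache the value in a register across iterations.
        while (Thread.VolatileRead(ref _stop) == 0)
        {
            // ... busy work ...
        }
    }

    public void Stop()
    {
        Thread.VolatileWrite(ref _stop, 1);
    }
}
```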

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Aug 10 '06 #13
First off, thanks to everyone contributing to the thread... this is why
I post here!

I have never used MemoryBarrier in any code other than tests for my own
education; as I said, the original question was more theoretical.

By the way, I have seen caching/reordering (it's often hard to tell
which, effectively) in common environments.

This is the kind of thing I was investigating. While I am well aware of
"how" to prevent it (and I of course do so), I wanted to know more
about what is going on under the covers.

And the fact that a simple question on the underlying behavior of a
memory barrier has blossomed into this debate only underlines what I
was saying before: there seems to be nothing authoritative out there on
this topic. If there is room for debate, then there is room for error
and misunderstanding.

As an example of what I'd like to see, I do a lot of P/Invoke and COM
interop, and the text ".NET and COM - The Complete Interoperability
Guide" is my idea of a great book on that topic. I can only hope such
a volume is created for concurrency on the .NET / Windows
platform.

Thanks again,
Tom

Aug 10 '06 #14

Jon wrote:
That stops half a second after you start it. Take the "volatile" bit
out, and it'll run forever (at least it does on my single processor P4,
when compiled with optimisation enabled).

Which framework version were you using? I tried it with 1.1 and 2.0 on
my dual core laptop and I could only see it run forever with 2.0. I
guess 2.0 is more aggressive in its optimizations. At the very least
this proves that those who naively rely on it being safe in 1.1 will
get burned when they port their code to 2.0.

Aug 10 '06 #15
Brian Gideon wrote:
Jon wrote:
That stops half a second after you start it. Take the "volatile" bit
out, and it'll run forever (at least it does on my single processor P4,
when compiled with optimisation enabled).

Which framework version were you using? I tried it with 1.1 and 2.0 on
my dual core laptop and I could only see it run forever with 2.0. I
guess 2.0 is more aggressive in its optimizations. At the very least
this proves that those who naively rely on it being safe in 1.1 will
get burned when they port their code to 2.0.
I only tried it with 2.0 yesterday, but I think I've tried similar
programs with 1.1 before. I wouldn't like to swear to it though...

Jon

Aug 10 '06 #16
Tom,

One thing I should point out, which Jon already alluded to, is that we
code against the CLR memory model. The hardware memory model is mostly
irrelevant from a .NET developer's perspective because the CLR sits on
top of it. So your example is certainly unsafe, because the CLR
specification says it is. We shouldn't be too concerned with the
differences between the x86, AMD64, IA64, etc. architectures; that's
the job of the CLR. But I do share your interest in learning exactly
what is going on behind the scenes.
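A short sketch of that idea (SafeCounter is a made-up name): by sticking to CLR-level primitives such as Interlocked, which are specified to be atomic and, as I understand it, act as full fences on Microsoft's implementation, the x86/AMD64/IA64 differences stay the CLR's problem rather than yours:

```csharp
using System.Threading;

// Illustrative: CLR-level primitives hide the hardware memory model.
class SafeCounter
{
    private int _count;

    public void Increment()
    {
        Interlocked.Increment(ref _count); // atomic and fenced on every architecture
    }

    public int Read()
    {
        // CompareExchange with identical comparand and value is a fenced read.
        return Interlocked.CompareExchange(ref _count, 0, 0);
    }
}
```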

Brian

Aug 10 '06 #17
This little sample shows reordering (or some type of caching) on 1.1:

using System;
using System.Threading;

class ConcurrencyTest
{
    /// <summary>
    /// The main entry point for the application.
    /// </summary>
    [STAThread]
    static void Main(string[] args)
    {
        ConcurrencyTest test = new ConcurrencyTest();
        test.Start();
    }

    private uint m_First;
    private uint m_Second;
    private Thread m_Incrementor;
    private Thread m_Inspector;

    public void Start()
    {
        Console.WriteLine("Test running");

        m_Incrementor = new Thread(new ThreadStart(Increment));
        m_Inspector = new Thread(new ThreadStart(CheckValues));
        m_Incrementor.Start();
        m_Inspector.Start();
    }

    private void Increment()
    {
        while (true)
        {
            m_First++;
            m_Second++;
        }
    }

    private void CheckValues()
    {
        while (true)
        {
            uint first = m_First;
            uint second = m_Second;

            if (first < second)
            {
                Console.WriteLine("First is {0} and Second is {1}", first, second);
                Thread.Sleep(1000);
            }
        }
    }
}


Aug 10 '06 #18
NO***********@lycos.com wrote:
This little sample shows reordering (or some type of caching) on 1.1:
I don't think so. It would be masked by the race condition between the
reads of m_First and m_Second. m_First could be read and m_Second
incremented several times before it is eventually read.

Aug 10 '06 #19
<NO***********@lycos.com> wrote:
This little sample shows reordering (or some type of caching) on 1.1:
I don't see anything - what do you see?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Aug 10 '06 #20
That makes perfect sense. So much for my attempt at an example!

Aug 10 '06 #21

