
another thread on Python threading

Hi,

I've recently been working on an application[1] which does quite a bit
of searching through large data structures and string matching, and I
was thinking that it would help to put some of this CPU-intensive work
in another thread, but of course this won't work because of Python's
GIL.

There's a lot of past discussion on this, and I want to bring it up
again because with the work on Python 3000, I think it is worth trying
to take a look at what can be done to address portions of the problem
through language changes.

Also, the recent hardware trend towards multicore processors is
another reason I think it is worth taking a look at the problem again.

= dynamic objects, locking and __slots__ =

I remember reading (though I can't find it now) one person's attempt
at true multithreaded programming involved adding a mutex to all
object access. The obvious question though is - why don't other true
multithreaded languages like Java need to lock an object when making
changes? The answer is that they don't support adding random
attributes to objects; in other words, they default to the equivalent
of __slots__.

== Why hasn't __slots__ been successful? ==

I very rarely see Python code use __slots__. I think there are
several reasons for this. The first is that a lot of programs don't
need to optimize on this level. The second is that it's annoying to
use, because it means you have to type your member variables *another*
time (in addition to __init__ for example), which feels very un-
Pythonic.
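
For example, a small illustrative class ends up naming each attribute
twice:

class Point(object):
    __slots__ = ('x', 'y')         # named once here...

    def __init__(self, x, y):
        self.x = x                 # ...and once more here
        self.y = y

p = Point(1, 2)
p.z = 3   # AttributeError: 'Point' object has no attribute 'z'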

== Defining object attributes ==

In my Python code, one restriction I try to follow is to set all the
attributes I use for an object in __init__. You could do this as
class member variables, but often I want to set them in __init__
anyways from constructor arguments, so "defining" them in __init__
means I only type them once, not twice.

One random idea for Python 3000 is to make the equivalent of
__slots__ the default, *but* gather the set of attributes from all
member variables set in __init__. For example, if I write:

class Foo(object):
    def __init__(self, bar=None):
        self.__baz = 20
        if bar:
            self.__bar = bar
        else:
            self.__bar = time.time()

f = Foo()
f.otherattr = 40  # this would be an error! Can't add random attributes not defined in __init__

I would argue that the current Python default of supporting adding
random attributes is almost never what you really want. If you *do*
want to set random attributes, you almost certainly want to be using a
dictionary or a subclass of one, not an object. What's nice about the
current Python is that you don't need to redundantly type things, and
we should preserve that while still allowing more efficient
implementation strategies.
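
As an aside, the attribute-freezing behaviour (though not the memory
savings of __slots__) can be approximated today with a __setattr__
guard; a rough sketch:

import time

class FixedAttributes(object):
    # Illustrative only, not a real language feature: once an instance
    # is "sealed" at the end of __init__, new attribute names are refused.
    _sealed = False

    def __setattr__(self, name, value):
        if self._sealed and name not in self.__dict__:
            raise AttributeError("cannot add attribute %r after __init__"
                                 % name)
        object.__setattr__(self, name, value)

class Foo(FixedAttributes):
    def __init__(self, bar=None):
        self.baz = 20
        if bar:
            self.bar = bar
        else:
            self.bar = time.time()
        self._sealed = True        # seal at the end of __init__

f = Foo()
f.otherattr = 40   # raises AttributeError, as in the proposal above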

= Limited threading =

Now, I realize there are a ton of other things the GIL protects other
than object dictionaries; with true threading you would have to touch
the importer, the garbage collector, verify all the C extension
modules, etc. Obviously non-trivial. What if, as an initial push
towards real threading, Python had support for "restricted threads"?
Essentially, restricted threads would be limited to a subset of the
standard library that had been verified for thread safety, would not
be able to import new modules, etc.

Something like this:

def datasearcher(list, queue):
    for item in list:
        if item.startswith('foo'):
            queue.put(item)
    queue.done()

vals = ['foo', 'bar']
queue = queue.Queue()
threading.start_restricted_thread(datasearcher, vals, queue)

def print_item(item):
    print item
queue.set_callback(print_item)

I know I'm making up the API above, but the point here is that
"datasearcher" could pretty easily run in a true thread and touch very
little of the interpreter; only support for atomic reference counting
and a concurrent garbage collector would be needed.
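
For comparison, here is roughly the same flow written against today's
threading and Queue modules; it runs, but under the GIL it only
illustrates the plumbing, not a CPU speedup:

import threading
import Queue

def datasearcher(items, queue):
    for item in items:
        if item.startswith('foo'):
            queue.put(item)
    queue.put(None)                 # sentinel in place of queue.done()

def print_item(item):
    print item

vals = ['foo', 'bar']
queue = Queue.Queue()
worker = threading.Thread(target=datasearcher, args=(vals, queue))
worker.start()
for item in iter(queue.get, None):  # consuming loop in place of set_callback
    print_item(item)
worker.join()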

Thoughts?

[1] http://submind.verbum.org/hotwire/wiki

Jun 3 '07 #1
On Jun 3, 5:52 pm, Steve Howell <showel...@yahoo.com> wrote:
The pitfall here is that to reduce code duplication,
you might initialize certain variables in a method
called by __init__, because your object might want to
return to its initial state.
This is a good point. I was thinking that this analysis would
occur during module loading; i.e. it would be the equivalent of Java's
classloading.

What if the startup code analysis just extended to methods called
during __init__?
That seems like a relatively straightforward extension. Remember we
aren't actually *executing*
the startup code in my proposal; we are just analyzing it for all
possible execution paths which cause
a member variable assignment.
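
Something like this sketch of the analysis, using the ast module from
newer Pythons (a real implementation would also have to follow calls
made from __init__):

import ast

def attributes_assigned_in_init(source):
    # Collect the attribute names assigned to self inside __init__.
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name == '__init__':
            for sub in ast.walk(node):
                if (isinstance(sub, ast.Attribute)
                        and isinstance(sub.ctx, ast.Store)
                        and isinstance(sub.value, ast.Name)
                        and sub.value.id == 'self'):
                    names.add(sub.attr)
    return names

src = """
class Foo(object):
    def __init__(self, bar=None):
        self.baz = 20
        if bar:
            self.bar = bar
"""
print attributes_assigned_in_init(src)   # e.g. set(['baz', 'bar'])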

Jun 3 '07 #2
cg*******@gmail.com wrote:
I've recently been working on an application[1] which does quite a bit
of searching through large data structures and string matching, and I
was thinking that it would help to put some of this CPU-intensive work
in another thread, but of course this won't work because of Python's
GIL.
If you are doing string searching, implement the algorithm in C, and
call out to the C (remembering to release the GIL).
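
As an aside, ctypes (in the standard library since Python 2.5)
releases the GIL for the duration of a foreign call, so even a quick
sketch like the following can run its searches in parallel threads.
It assumes a Unix libc:

import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strstr.restype = ctypes.c_char_p
libc.strstr.argtypes = [ctypes.c_char_p, ctypes.c_char_p]

def contains(haystack, needle):
    # The scan itself runs in C with the GIL released.
    return libc.strstr(haystack, needle) is not None

print contains("foobar", "foo")   # True
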
There's a lot of past discussion on this, and I want to bring it up
again because with the work on Python 3000, I think it is worth trying
to take a look at what can be done to address portions of the problem
through language changes.
Not going to happen. All Python 3000 PEPs had a due-date at least a
month ago (possibly even 2), so you are too late to get *any*
substantial change in.
I remember reading (though I can't find it now) one person's attempt
at true multithreaded programming involved adding a mutex to all
object access. The obvious question though is - why don't other true
multithreaded languages like Java need to lock an object when making
changes?
From what I understand, the Java runtime uses fine-grained locking on
all objects. You just don't notice it because you don't need to write
the acquire()/release() calls. It is done for you. (in a similar
fashion to Python's GIL acquisition/release when switching threads)
They also have a nice little decorator-like thingy (I'm not a Java guy,
so I don't know the name exactly) called 'synchronize', which locks and
unlocks the object when accessing it through a method.
- Josiah
Jun 4 '07 #3
On Jun 4, 3:10 am, Josiah Carlson <josiah.carl...@sbcglobal.net> wrote:
cgwalt...@gmail.com wrote:
I've recently been working on an application[1] which does quite a bit
of searching through large data structures and string matching, and I
was thinking that it would help to put some of this CPU-intensive work
in another thread, but of course this won't work because of Python's
GIL.

If you are doing string searching, implement the algorithm in C, and
call out to the C (remembering to release the GIL).
There's a lot of past discussion on this, and I want to bring it up
again because with the work on Python 3000, I think it is worth trying
to take a look at what can be done to address portions of the problem
through language changes.

Not going to happen. All Python 3000 PEPs had a due-date at least a
month ago (possibly even 2), so you are too late to get *any*
substantial change in.
I remember reading (though I can't find it now) one person's attempt
at true multithreaded programming involved adding a mutex to all
object access. The obvious question though is - why don't other true
multithreaded languages like Java need to lock an object when making
changes?

From what I understand, the Java runtime uses fine-grained locking on
all objects. You just don't notice it because you don't need to write
the acquire()/release() calls. It is done for you. (in a similar
fashion to Python's GIL acquisition/release when switching threads)
The problem is CPython's reference counting. Access to reference
counts must be synchronized.

Java, IronPython and Jython use another scheme for the garbage
collector and do not need a GIL.

Changing CPython's garbage collection from reference counting to a
generational GC will be a major undertaking. There are also pros and
cons to using reference counts instead of 'modern' garbage collectors.
For example, unless there are cyclic references, one can always know
when an object is garbage collected. One also avoids periodic delays
when garbage is collected, and memory use can be more modest when a
lot of small temporary objects are being used.

Also beware that the GIL is only a problem for CPU-bound code.
I/O-bound code is not slowed by the GIL. The Python runtime itself is
a bigger problem for CPU-bound code.

In C or Fortran, writing parallel algorithms for multiprocessor
systems typically involves using OpenMP or MPI. Parallelizing
algorithms using manual threading should be discouraged. It is far
better to insert a compiler directive (#pragma omp) and let an OpenMP
compiler do the job.

There are a number of different options for exploiting multiple CPUs
from CPython, including:

- MPI (e.g. mpi4py or PyMPI)
- PyPar
- os.fork() on Linux or Unix (a rough sketch follows below)
- subprocess.Popen
- C extensions that use OpenMP
- C extensions that spawn threads (should be discouraged!)
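
As a tiny illustration of the os.fork() option (Unix-only sketch; the
child simply prints its matches rather than sending them back):

import os

def search(items):
    # CPU-bound work in a separate process is not serialized by the GIL.
    for item in items:
        if item.startswith('foo'):
            print item

pid = os.fork()
if pid == 0:                       # child process
    search(['foo', 'bar', 'foobar'])
    os._exit(0)
os.waitpid(pid, 0)                 # parent waits for the child
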
They also have a nice little decorator-like thingy (I'm not a Java guy,
so I don't know the name exactly) called 'synchronize', which locks and
unlocks the object when accessing it through a method.
A similar Python 'synchronized' function decorator may look like this:

def synchronized(fun):
    from threading import RLock
    rl = RLock()
    def decorator(*args, **kwargs):
        with rl:
            retv = fun(*args, **kwargs)
        return retv
    return decorator
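
Hypothetical usage, with a single reentrant lock shared by every call
to the decorated function:

import threading

counter = [0]

@synchronized
def bump():
    # Only one thread at a time executes this body.
    counter[0] += 1

threads = [threading.Thread(target=bump) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print counter[0]                   # 10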

It is not possible to define a 'synchronized' block though, as Python
does not have Lisp macros :(

Jun 4 '07 #4
On Jun 3, 9:10 pm, Josiah Carlson <josiah.carl...@sbcglobal.net> wrote:
If you are doing string searching, implement the algorithm in C, and
call out to the C (remembering to release the GIL).
I considered that, but...ick! The whole reason I'm writing this
program in Python in the first place is so I don't have to deal with
the mess that is involved when you do string matching and data
structure traversal in C.

On the other hand, there are likely C libraries out there for
searching the kinds of data structures I use; I'll investigate.
There's a lot of past discussion on this, and I want to bring it up
again because with the work on Python 3000, I think it is worth trying
to take a look at what can be done to address portions of the problem
through language changes.

Not going to happen. All Python 3000 PEPs had a due-date at least a
month ago (possibly even 2), so you are too late to get *any*
substantial change in.
=( Too bad. It might be possible to do these changes in a
backwards-compatible way, though less elegantly. For example, the
object change could be denoted by inheriting from "fixedobject" or
something.

Jun 4 '07 #5
sturlamolden wrote:
On Jun 4, 3:10 am, Josiah Carlson <josiah.carl...@sbcglobal.net> wrote:
From what I understand, the Java runtime uses fine-grained locking on
all objects. You just don't notice it because you don't need to write
the acquire()/release() calls. It is done for you. (in a similar
fashion to Python's GIL acquisition/release when switching threads)

The problem is CPython's reference counting. Access to reference
counts must be synchronized.
Java, IronPython and Jython use another scheme for the garbage
collector and do not need a GIL.
There was a discussion regarding this in the python-ideas list recently.
You *can* attach a lock to every object, and use fine-grained locking
to handle refcounts. Alternatively, you can use platform-specific
atomic increments and decrements, or even a secondary 'owner thread'
refcount that doesn't need locking because only its owning thread
touches it.

It turns out that atomic updates are slow, and I wasn't able to get any
sort of productive results using 'owner threads' (seemed generally
negative, and was certainly more work to make happen). I don't believe
anyone bothered to test fine-grained locking on objects.

However, locking isn't just for refcounts, it's to make sure that thread
A isn't mangling your object while thread B is traversing it. With
object locking (coarse via the GIL, or fine via object-specific locks),
you get the same guarantees, with the problem being that verifying the
absence of deadlocks is about a bazillion times more difficult with
fine-grained locking than with a GIL-based approach.

Changing CPython's garbage collection from reference counting to a
generational GC will be a major undertaking. There are also pros and
cons to using reference counts instead of 'modern' garbage collectors.
For example, unless there are cyclic references, one can always know
when an object is garbage collected. One also avoids periodic delays
when garbage is collected, and memory use can be more modest when a
lot of small temporary objects are being used.
It was done a while ago. The results? On a single-processor machine,
Python code ran at roughly 1/4 to 1/3 the speed of the original
runtime. When using 4+ processors, there were some gains in threaded
code, but nothing substantial at that point.

There are a number of different options for exploiting multiple CPUs
from CPython, including:
My current favorite is the processing package (available from the Python
cheeseshop). You get much of the same API as threading, only you are
using processes instead. It works on Windows, OSX, and *nix.
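
A rough sketch of the idea, spelled with the multiprocessing module
(which is what the processing package later became in the standard
library; the package itself exposes essentially the same API):

from multiprocessing import Process, Queue

def datasearcher(items, queue):
    # Runs in a separate process, so the GIL does not serialize it.
    for item in items:
        if item.startswith('foo'):
            queue.put(item)
    queue.put(None)                # sentinel: no more results

if __name__ == '__main__':
    queue = Queue()
    p = Process(target=datasearcher,
                args=(['foo', 'bar', 'foobar'], queue))
    p.start()
    for item in iter(queue.get, None):
        print item
    p.join()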

def synchronized(fun):
    from threading import RLock
    rl = RLock()
    def decorator(*args, **kwargs):
        with rl:
            retv = fun(*args, **kwargs)
        return retv
    return decorator

It is not possible to define a 'synchronized' block though, as Python
does not have Lisp macros :(
Except that you just used the precise mechanism necessary to get a
synchronized block in Python:

lock = threading.Lock()

with lock:
    # synchronized block!
    pass
- Josiah
Jun 4 '07 #6
On Jun 4, 10:11 pm, Josiah Carlson <josiah.carl...@sbcglobal.net> wrote:
lock = threading.Lock()

with lock:
    # synchronized block!
    pass
True, except that the lock has to be shared among the threads. This
explicit initialization of a reentrant lock is avoided in a Java
synchronized block.



Jun 4 '07 #7
On Jun 4, 10:11 pm, Josiah Carlson <josiah.carl...@sbcglobal.net> wrote:
However, locking isn't just for refcounts, it's to make sure that thread
A isn't mangling your object while thread B is traversing it. With
object locking (coarse via the GIL, or fine via object-specific locks),
you get the same guarantees, with the problem being that verifying the
absence of deadlocks is about a bazillion times more difficult with
fine-grained locking than with a GIL-based approach.
I think this is just as much a question of what the runtime should
guarantee. One doesn't need a guarantee that two threads are not
mangling the same object simultaneously. Instead, the runtime could
leave it to the programmer to use explicit locks on the object or
synchronized blocks to guarantee this for himself.

It was done a while ago. The results? On a single-processor machine,
Python code ran at roughly 1/4 to 1/3 the speed of the original
runtime. When using 4+ processors, there were some gains in threaded
code, but nothing substantial at that point.
I am not surprised. Reference counts are quite efficient, contrary to
common belief. The problem with reference counts is cyclic references
involving objects that define a __del__ method. As these objects are
not eligible for cyclic garbage collection, this can produce resource
leaks.

My current favorite is the processing package (available from the Python
cheeseshop).
Thanks. I'll take a look at that.


Jun 4 '07 #8
sturlamolden wrote:
On Jun 4, 10:11 pm, Josiah Carlson <josiah.carl...@sbcglobal.net> wrote:
lock = threading.Lock()

with lock:
    # synchronized block!
    pass

True, except that the lock has to be shared among the threads. This
explicit initialization of a reentrant lock is avoided in a Java
synchronized block.
You toss the lock creation into the global namespace of the module
for which you would like to synchronize access.
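
For instance (a minimal sketch):

# somemodule.py
import threading

_lock = threading.Lock()           # one lock, shared by the whole module

def update_shared_state():
    with _lock:
        # code that must never run in two threads at once
        pass
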
- Josiah
Jun 5 '07 #9
sturlamolden wrote:
On Jun 4, 10:11 pm, Josiah Carlson <josiah.carl...@sbcglobal.net> wrote:
However, locking isn't just for refcounts, it's to make sure that thread
A isn't mangling your object while thread B is traversing it. With
object locking (coarse via the GIL, or fine via object-specific locks),
you get the same guarantees, with the problem being that verifying the
absence of deadlocks is about a bazillion times more difficult with
fine-grained locking than with a GIL-based approach.

I think this is just as much a question of what the runtime should
guarantee. One doesn't need a guarantee that two threads are not
mangling the same object simultaneously. Instead, the runtime could
leave it to the programmer to use explicit locks on the object or
synchronized blocks to guarantee this for himself.
Why? Right now we have a language where the only thing that doing silly
things with threads can get you is perhaps a deadlock, perhaps
incorrect execution, or maybe some data corruption if you are working
with files. If we forced all thread users to synchronize everything
themselves, we get an uglier language, and incorrectly written code
could potentially cause crashes (though all of the earlier drawbacks
still apply). In the "safety vs. speed" or "easy vs. fast" arenas,
Python has already chosen safe and easy rather than fast. I doubt it is
going to change any time soon.

It was done a while ago. The results? On a single-processor machine,
Python code ran at roughly 1/4 to 1/3 the speed of the original
runtime. When using 4+ processors, there were some gains in threaded
code, but nothing substantial at that point.

I am not surprised. Reference counts are quite efficient, contrary to
common belief. The problem with reference counts is cyclic references
involving objects that define a __del__ method. As these objects are
not eligible for cyclic garbage collection, this can produce resource
leaks.
There was a discussion about removing __del__ within the last couple
weeks. I didn't pay much attention to it (having learned never to use
__del__), but I believe it involved some sort of weakref-based cleanup.
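
The idiom looks roughly like this sketch (with a hypothetical Resource
class; note the callback captures only the data it needs, never the
object itself):

import weakref

_cleanup_refs = []                 # keeps the weakref objects alive

class Resource(object):
    def __init__(self, name):
        self.name = name
        def callback(ref, name=name):   # closes over the name, not self
            print 'releasing', name
        _cleanup_refs.append(weakref.ref(self, callback))

r = Resource('db-handle')
del r                              # prints: releasing db-handle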

- Josiah
Jun 5 '07 #10
