
optimizing memory utilization

Hello all,

I'm hoping for some guidance here... I am a c/c++ "expert", but a
complete python virgin. I'm trying to create a program that loads in the
entire FreeDB database (excluding the CDDBID itself) and uses this
"database" for other subsequent processing. The problem is, I'm running
out of memory on a Linux RH8 box with 512MB. The FreeDB database that I'm
trying to load presently consists of two "CSV" files. The first file
contains a "list" of albums with artist name and an arbitrary sequential
album ID and the CDDBID (an ASCII-hex representation of a 32-bit value).
The second file contains a list of all of the tracks on each of the
albums, cross-referenced via the album ID. When I load it into memory, I
create a python list where each entry in the list is itself a list
representing the data for a given album. The album data list consists of
a small handful of text items like the album title, author, genre, and
year, as well as a list which itself contains a list for each of the tracks
on the album.

[[<Alb1ID#>, '<Alb1Artist>', '<Alb1Title>', '<Alb1Genre>', '<Alb1Year>',
[["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]],
[<Alb2ID#>, '<Alb2Artist>', '<Alb2Title>', '<Alb2Genre>', '<Alb2Year>',
[["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]],
...
[<AlbNID#>, '<AlbNArtist>', '<AlbNTitle>', '<AlbNGenre>', '<AlbNYear>',
[["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]]]

So the problem I'm having is that I want to load it all in memory (the two
files total about 250MB of raw data) but just loading the first 50,000
lines of tracks (about 25MB of raw data) consumes 75MB of RAM. If the
approximation is fairly accurate, I'd need >750MB of available RAM just to
load my in-memory database.

The bottom line is, is there a more memory-efficient way to load all this
arbitrary-field-length, arbitrary-field-count data into RAM? I can already see
that creating a separate list for the group of tracks on an album is
probably wasteful versus just appending them to the album list, but I
doubt that will yield the desired level of optimization of memory usage?

Any data structures suggestions for this application? BTW, the later
accesses to this database would not benefit in any way from being
presorted, so no need to warn me in advance about concepts like
presorting the albums list to facilitate faster look-up later...

Thanks,
Wes
Jul 18 '05 #1

anon> [[<Alb1ID#>, '<Alb1Artist>', '<Alb1Title>', '<Alb1Genre>', '<Alb1Year>',
anon> [["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]],
anon> [<Alb2ID#>, '<Alb2Artist>', '<Alb2Title>', '<Alb2Genre>', '<Alb2Year>',
anon> [["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]],
anon> ...
anon> [<AlbNID#>, '<AlbNArtist>', '<AlbNTitle>', '<AlbNGenre>', '<AlbNYear>',
anon> [["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]]]

anon> So the problem I'm having is that I want to load it all in memory
anon> (the two files total about 250MB of raw data) but just loading the
anon> first 50,000 lines of tracks (about 25MB of raw data) consumes
anon> 75MB of RAM. If the approximation is fairly accurate, I'd need
anon> >750MB of available RAM just to load my in-memory database.

anon> The bottom line is, is there a more memory efficient way to load
anon> all this arbitrary field length and count type data into RAM?

Sure, assuming you know what your keys are, store them in a db file. Let's
assume you want to search by artist. Do your csv thing, but store the
records in a shelve keyed by the AlbNArtist field:

import shelve
import csv

reader = csv.reader(open("file1.csv", "rb"))
db = shelve.open("file1.db")

for row in reader:
    stuff = db.get(row[1], [])
    stuff.append(row)
    db[row[1]] = stuff
db.close()

I'm not sure I've interpreted your sample csv quite right, but I think
you'll get the idea. You can of course have multiple db files, each keyed
by a different field (or part of a field).

Obviously, using a db file will be slower than an in-memory dictionary, but
if memory is a bottleneck, this will likely help. You can also avoid
initializing the db file on subsequent program runs if the csv file is older
than the db file, probably resulting in faster startup.
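
For that staleness check, something like this (untested) should do:

import os

def csv_is_newer(csvname, dbname):
    # rebuild if the shelve file doesn't exist yet or the csv changed since
    if not os.path.exists(dbname):
        return True
    return os.path.getmtime(csvname) > os.path.getmtime(dbname)

if csv_is_newer("file1.csv", "file1.db"):
    pass   # run the loop above to (re)build file1.db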

Skip
Jul 18 '05 #2

"Anon" <an**@ymous.com > wrote in message
news:pa******** *************** *****@ymous.com ...
Any data structures suggestions for this application? BTW, the later
accesses to this database would not benefit in any way from being
presorted,


You keep saying it yourself: Use a Database ;-).

Databases "knows" about stuff in memory, as well as any searching and
sorting you might dream up later.

Python supports several, the simplest is probably PySQLite:
http://sourceforge.net/projects/pysqlite/
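
Roughly, the DB-API pattern looks like this (sketched with the sqlite3
module that later Python versions ship in the standard library; PySQLite's
own spelling differs a little, and the table layout is only an illustration):

import sqlite3
import csv

conn = sqlite3.connect("freedb.db")
conn.execute("""CREATE TABLE albums
                (id INTEGER PRIMARY KEY, artist TEXT, title TEXT,
                 genre TEXT, year TEXT)""")
conn.execute("""CREATE TABLE tracks
                (album_id INTEGER, num INTEGER, title TEXT)""")

reader = csv.reader(open("albums.csv", "rb"))
conn.executemany("INSERT INTO albums VALUES (?, ?, ?, ?, ?)",
                 (tuple(row[:5]) for row in reader))
conn.commit()

# later: let the database do the lookups instead of holding lists in RAM
for artist, title in conn.execute(
        "SELECT artist, title FROM albums WHERE artist = ?", ("Some Artist",)):
    print artist, title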
Jul 18 '05 #3
"Frithiof Andreas Jensen" <frithiof.jense n@die_spammer_d ie.ericsson.com > wrote in message news:<ci******* ***@newstree.wi se.edt.ericsson .se>...
"Anon" <an**@ymous.com > wrote in message
news:pa******** *************** *****@ymous.com ...
Any data structures suggestions for this application? BTW, the later
accesses to this database would not benefit in any way from being
presorted,


You keep saying it yourself: Use a Database ;-).

Databases "knows" about stuff in memory, as well as any searching and
sorting you might dream up later.

Python supports several, the simplest is probably PySQLite:
http://sourceforge.net/projects/pysqlite/


Thanks for the specific suggestion, but it's an approach I'd hoped to
avoid. I'd rather get 2Gig of RAM if I was confident I could make it
work that way... The problem with a file-based database (versus an
in-memory data-structure-based database as I was meaning) is
performance.

Maybe someone has a better suggestion if I give a little more info: I
want to iterate through about 10,000 strings (each ~256 characters or
less) looking for occurrences (within those 10,000 strings) of any one
of about 500,000 smaller (~30-50 characters average) strings. Then I
generate an XML doc that records which of the 500,000 strings were
found within each of the 10,000 strings.
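
In other words, the processing step is conceptually just this (with dummy
data standing in for the real lists):

# ~10,000 of these in reality, each ~256 characters
big_strings = ["Some Artist - Some Album - 01 - Some Track.mp3"]
# ~500,000 of these in reality, ~30-50 characters each
small_strings = ["Some Track", "Another Track"]

for big in big_strings:
    for small in small_strings:
        if small in big:
            print "found %r in %r" % (small, big)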

I guess I am hoping to optimize memory usage in order to avoid
thrashing my drives to death (due to using a file-based database). I
can't believe that Python requires 200% memory "overhead" to store my
data as lists of lists. I gotta believe I've chosen the wrong
data type or have misapplied it somehow?

BTW, my illustration wasn't my input data, it was literally the output
of "print Albums" (with only a couple albums worth of dummy data
loaded). Albums is a list containing a list for each album, which in
turn contains another list for each track on the album, which in-turn
contains a couple of items of data about the given track.

Bottom line question here: Is a 'list' the most efficient data type
to use for storing "arrays" of "arrays" of "arrays" and "strings",
given that I'm happy to iterate and don't need any sort of lookup or
search functionality?
Thanks again,
Wes
Jul 18 '05 #4
On 14 Sep 2004 14:55:56 -0700, wc******@sandc. com (Wes Crucius) wrote:
"Frithiof Andreas Jensen" <frithiof.jense n@die_spammer_d ie.ericsson.com > wrote in message news:<ci******* ***@newstree.wi se.edt.ericsson .se>...
"Anon" <an**@ymous.com > wrote in message
news:pa******** *************** *****@ymous.com ...
> Any data structures suggestions for this application? BTW, the later
> accesses to this database would not benefit in any way from being
> presorted,
You keep saying it yourself: Use a Database ;-).

Databases "knows" about stuff in memory, as well as any searching and
sorting you might dream up later.

Python supports several, the simplest is probably PySQLite:
http://sourceforge.net/projects/pysqlite/


Thanks for the specific suggestion, but it's an approach I'd hoped to
avoid. I'd rather get 2Gig of RAM if I was confident I could make it
work that way... The problem with a file-based database (versus an
in-memory data-structure-based database as I was meaning) is
performance.

Maybe someone has a better suggestion if I give a little more info: I
want to iterate through about 10,000 strings (each ~256 characters or
less) looking for occurrences (within those 10,000 strings) of any one
of about 500,000 smaller (~30-50 characters average) strings. Then I
generate an XML doc that records which of the 500,000 strings were
found within each of the 10,000 strings.


Need example of raw input and desired output ;-)

Patterns starting at any character? What if you have overlapping matches? I.e.,
if you had a string 'abbcde123', and were looking for 'bb' and 'bc'
would it count as two matches? Would longer matches have priority?
E.g., matching 'bbcd' overrides 'bb' and 'cd' as separate matches?

Anyway, let's see what a brute-force simple approach does. E.g., just put the
500,000 small strings in a small tree of dictionaries based on, say, string length, then
the string (you could tier it again with some leading characters and the rest, but I'd
try the easy thing first). But anyway, the idea (practically untested) is something like:

This finds repeating and overlapping patterns, since it was easy. That may not be what
you want, but you didn't say ;-)

----< s500kin10k.py >---------------------------------------
def main(path500k, path10k):
    root = {}
    for line in file(path500k):
        line = line.rstrip()
        if not line: continue
        root.setdefault(len(line), {})[line] = None
    lengths = root.keys()
    lengths.sort()
    print lengths
    print root
    # and then walk through your 10k strings of ~256 characters

    for nl, line in enumerate(file(path10k)):
        line = line.rstrip()
        hdr = 'Line %s: %r\n' % (nl, line)
        nc = len(line)
        for length in lengths:
            if length > nc: break
            tier1 = root[length]
            for nstep in xrange(nc+1-length): # walk a window of length chars
                if line[nstep:nstep+length] in tier1:
                    print '%s %r' % (hdr, line[nstep:nstep+length])
                    hdr = ''

if __name__ == '__main__':
    import sys
    if len(sys.argv) != 3: raise SystemExit, """
Usage: s500kin10k.py path500k path10k
    where path500k has 500k lines of one string each, and
    path10k has 10k lines to search for the 500k in any position.
"""
    main(*sys.argv[1:])
------------------------------------------------------------

I guess I am hoping to optimize memory usage in order to avoid
thrashing my drives to death (due to using a file-based database). I
can't believe that Python requires 200% memory "overhead" to store my
data as lists of lists. I gotta believe I've chosen the wrong
data type or have misapplied it somehow?

BTW, my illustration wasn't my input data, it was literally the output
of "print Albums" (with only a couple albums worth of dummy data
loaded). Albums is a list containing a list for each album, which in
turn contains another list for each track on the album, which in-turn
contains a couple of items of data about the given track.

Bottom line question here: Is a 'list' the most efficient data type
to use for storing "arrays" of "arrays" of "arrays" and "strings",
given that I'm happy to iterate and don't need any sort of lookup or
search functionality?

I suspect that you won't be that happy iterating through just arrays,
unless they are arrays of pattern-matching tables like IIRC flex generates
to do parsing. Basically doing tricky parallel pattern matching by keeping
track of which patterns are alive as you go from character to character.
But 500k patterns are a lot of patterns...


See if the above runs you out of memory. Or time ;-)
I'd try it on shorter files first. You didn't say what you wanted
your final output to look like. Small examples of input => output communicate well ;-)

Regards,
Bengt Richter
Jul 18 '05 #5
On Wed, 15 Sep 2004 02:20:38 +0000, Bengt Richter wrote:
Need example of raw input and desired output ;-)

The input is:
- one "csv" file containing a list of albums with their artist and an AlbumID
- one "csv" file containing a list of track names and associated AlbumID
- one big file containing fully-qualified paths to mp3 files

The desired output is:
an XML file with the "proposed" artist, album, and track name for each
file. The file name would be the top-level element for a given XML node,
and then there would be element groupings of "proposed" artist, album, and
track name for each file.

The last step is to edit the XML file and "select" the desired fields from
the list of candidates for each field, and then use the XML as a work order
for tagging (and possibly renaming) all of the identified files.
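
To make the output concrete, I'd emit something roughly like this per file
(element names are only a sketch, not settled yet):

from xml.sax.saxutils import escape, quoteattr

def write_candidates(out, filename, candidates):
    # candidates: list of (artist, album, track) guesses for one mp3 file
    out.write('  <file name=%s>\n' % quoteattr(filename))
    for artist, album, track in candidates:
        out.write('    <candidate>\n')
        out.write('      <artist>%s</artist>\n' % escape(artist))
        out.write('      <album>%s</album>\n' % escape(album))
        out.write('      <track>%s</track>\n' % escape(track))
        out.write('    </candidate>\n')
    out.write('  </file>\n')

out = open("workorder.xml", "w")
out.write('<?xml version="1.0"?>\n<files>\n')
write_candidates(out, "/mp3/Some Artist - Some Track.mp3",
                 [("Some Artist", "Some Album", "Some Track")])
out.write('</files>\n')
out.close()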

Patterns starting at any character? What if you have overlapping
matches? I.e., if you had a string 'abbcde123', and were looking for
'bb' and 'bc' would it count as two matches? Would longer matches have
priority? E.g., matching 'bbcd' overrides 'bb and 'cd' as separate
matches?
Yes, yes, and yes...

Anyway, let's see what a brute-force simple approach does. E.g., just put
the 500,000 small strings in a small tree of dictionaries based on, say,
string length, then the string (you could tier it again with some
leading characters and the rest, but I'd try the easy thing first). But
anyway, the idea (practically untested) is something like:
I had planned to sort the list based upon string length (and probably
apply some minimum length requirement too), but the tree of dictionaries
surely seems a bit more elegant.

This finds repeating and overlapping patterns, since it was easy. That
may not be what you want, but you didn't say ;-)

----< s500kin10k.py >---------------------------------------
def main(path500k, path10k):
    root = {}
    for line in file(path500k):
        line = line.rstrip()
        if not line: continue
        root.setdefault(len(line), {})[line] = None
    lengths = root.keys()
    lengths.sort()
    print lengths
    print root
    # and then walk through your 10k strings of ~256 characters

    for nl, line in enumerate(file(path10k)):
        line = line.rstrip()
        hdr = 'Line %s: %r\n' % (nl, line)
        nc = len(line)
        for length in lengths:
            if length > nc: break
            tier1 = root[length]
            for nstep in xrange(nc+1-length): # walk a window of length chars
                if line[nstep:nstep+length] in tier1:
                    print '%s %r' % (hdr, line[nstep:nstep+length])
                    hdr = ''

if __name__ == '__main__':
    import sys
    if len(sys.argv) != 3: raise SystemExit, """
Usage: s500kin10k.py path500k path10k
    where path500k has 500k lines of one string each, and
    path10k has 10k lines to search for the 500k in any position.
"""
    main(*sys.argv[1:])
------------------------------------------------------------

Gee, thanks. That's a nice example!
I suspect that you won't be that happy iterating through just arrays,
unless they are arrays of pattern-matching tables like IIRC flex
generates to do parsing. Basically doing tricky parallel pattern
matching by keeping track of which patterns are alive as you go from
character to character. But 500k patterns are a lot of patterns...
Since I only want to do it once in a rare while, execution time isn't a huge
deal... I could readily write this code in C and make my data fit within
a reasonable amount of RAM, but I'm trying to use the project as an
opportunity to learn Python.

See if the above runs you out of memory. Or time ;-) I'd try it on
shorter files first. You didn't say what you wanted your final output to
look like. Small examples of input => output communicate well ;-)


Well, basically, I'm trying to use the FreeDB database as a list of valid
artists, albums, and tracks to locate these strings within the file names
of mp3 files and then use those results to apply ID tags to the mp3 files.
I'm assuming that I'll have multiple matches for any given file, so my plan
was to output to an XML file, hand-"clean" that, and then use the XML as
the input for the actual tagging operation.

Thanks,
Wes
Jul 18 '05 #6
[Anon]:

Well, basically, I'm trying to use the FreeDB database as a list
of valid artists, albums, and tracks to locate these strings
within the file names of mp3 files and then use those results to
apply ID tags to the mp3 files.


have you considered using musicbrainz instead? it computes a checksum
of the music, not the bits and bytes.

http://www.musicbrainz.org/
--
Kjetil T.
Jul 18 '05 #7

"Wes Crucius" <wc******@sandc .com> wrote in message
news:1d******** *************** ***@posting.goo gle.com...
Thanks for the specific suggestion, but it's an approach I'd hoped to
avoid. I'd rather get 2Gig of RAM if I was confident I could make it
work that way... The problem with a file-based database (versus an
in-memory data-structure-based database as I was meaning) is
performance.


I am not convinced ;-)

The people designing databases have run into exactly that kind of problem,
and they have spent a long time studying algorithms, efficient
data structures, and writing conference papers on how to cleverly map and
search through said data structures that may not fit in memory - to the
point where I think one will have a hard time doing it better by hand.

PySQLite will try to keep everything in memory, so if you have enough, it
will only hit the disk on opening the database and on committing a change.
Commit can take a long time though.
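
And if you want to be certain it never touches the disk during a run, SQLite
can keep the whole database in RAM (again sketched with the later sqlite3
module spelling; PySQLite's own import differs):

import sqlite3

# ":memory:" holds the entire database in RAM for the life of the connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (album_id INTEGER, num INTEGER, title TEXT)")
conn.executemany("INSERT INTO tracks VALUES (?, ?, ?)",
                 [(1, 1, "Track1"), (1, 2, "Track2")])

for (title,) in conn.execute("SELECT title FROM tracks WHERE album_id = ?", (1,)):
    print title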
Jul 18 '05 #8
can't believe that Python requires 200% memory "overhead" to store my
data as lists of lists.


And it almost certainly doesn't.

On the one hand there could be a sizeable overhead when running
your program even when loading a single data line,
so don't extrapolate from small datasets.

But what I believe is happening to you is that you keep references
alive and end up with data duplication.

For example, if you do:

lines = file('whatever').readlines()

and then say, for example:

stripped_lines = [ line.strip() for line in lines ]

you've just made your program use twice the memory
at that point of the code.

You may also inadvertently keep your 'lines' variable
in the scope of the full program, in which case
you'll effectively end up with a lot more memory consumption
overall.

Check and plug all these holes.
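
For instance, the same step done without ever holding a second copy (or the
raw lines) in memory:

# build the stripped list in a single pass; no full raw copy is kept
stripped_lines = []
for line in file('whatever'):
    stripped_lines.append(line.strip())

# or, if you really do want readlines(), drop the raw list as soon
# as you are done with it:
lines = file('whatever').readlines()
stripped_lines = [ line.strip() for line in lines ]
del lines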

Istvan.
Jul 18 '05 #9
On Tue, Sep 14, 2004 at 04:39:49AM +0000, Anon wrote:
Hello all,

I'm hoping for some guidance here... I am a c/c++ "expert", but a
complete python virgin. I'm trying to create a program that loads in the
entire FreeDB database (excluding the CDDBID itself) and uses this
"database" for other subsequent processing. The problem is, I'm running
out of memory on a Linux RH8 box with 512MB. The FreeDB database that I'm
trying to load presently consists of two "CSV" files. The first file
contains a "list" of albums with artist name and an arbitrary sequential
album ID and the CDDBID (an ASCII-hex representation of a 32-bit value).
The second file contains a list of all of the tracks on each of the
albums, cross-referenced via the album ID. When I load it into memory, I
create a python list where each entry in the list is itself a list
representing the data for a given album. The album data list consists of
a small handful of text items like the album title, author, genre, and
year, as well as a list which itself contains a list for each of the tracks
on the album.

[[<Alb1ID#>, '<Alb1Artist>', '<Alb1Title>', '<Alb1Genre>', '<Alb1Year>',
[["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]],
[<Alb2ID#>, '<Alb2Artist>', '<Alb2Title>', '<Alb2Genre>', '<Alb2Year>',
[["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]],
...
[<AlbNID#>, '<AlbNArtist>', '<AlbNTitle>', '<AlbNGenre>', '<AlbNYear>',
[["Track1", 1], ["Track2", 2], ["Track3", 3], ..., ["TrackN", N]]]]


silly question: have you looked into using tuples instead of lists for
the inner objects? They're supposed to be more memory efficient,
although I have found that that isn't necessarily the case.
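
i.e. something along these lines for the inner records (untested; dummy values):

# tuples instead of lists for the per-album and per-track records;
# only the outer container stays a list so it can still be appended to
Albums = []
album = (1, 'Some Artist', 'Some Title', 'Some Genre', '1999',
         (('Track1', 1), ('Track2', 2), ('Track3', 3)))
Albums.append(album)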

--
John Lenton (jo**@grulic.or g.ar) -- Random fortune:
You're a card which will have to be dealt with.


Jul 18 '05 #10
