
Memory Error while constructing Compound Dictionary

Hello.

I attempted to build a compound dictionary:

len(Lst)=1000
len(nuerLst)=250
len(nuestLst)=500

Dict={}

for s in Lst:
    Dict[s] = {}

for s in Lst:
    for t in nuerLst:
        Dict[s][t] = {}

for s in Lst:
    for t in nuerLst:
        for r in nuestLst:
            Dict[s][t][r] = {}

I got the following error:

Traceback (most recent call last):
File "<pyshell#8 9>", line 5, in -toplevel-
Dict[s][t][r]=[]
MemoryError
Specs:

Python 2.3.4
XPpro
4 GB RAM
Python was utilizing 2.0 GB when the error was generated. I have
attempted this task twice with different data sets. I got the same
error both times.

Thanks in advance for your feedback,

Benjamin Scott
Jul 18 '05 #1
If you are using Linux on x86, there is a kernel configuration option
that might give more than 2GiB of address space to each process. This
will require that you compile your own kernel.

If you aren't using Linux on x86, then you are still probably hitting an
OS limitation. If you have access to an Itanium, G5, or newer MIPS (SGI)
CPU, that *might* help.

If your keys are sequential integers, you will likely find numerical
python <http://numpy.sf.net> useful.
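
A minimal sketch of what that might look like, assuming the
factory/date/widget symbols are first mapped to small integer indices
(all names here are invented, and the modern numpy package name is used):
a dense 1000 x 250 x 500 array of 32-bit counters costs about 500 MB,
versus 2 GB+ of empty dicts.

import numpy

# Illustrative symbol tables (stand-ins for the real Lst/nuerLst/nuestLst).
factories = {'Factory1': 0, 'Factory2': 1}
dates = {'2004-09-07': 0}
widgets = {'WidgetA': 0}

# Dense 3D counter array; at full size (1000 x 250 x 500) this is
# 125e6 * 4 bytes = 500 MB of int32.
counts = numpy.zeros((len(factories), len(dates), len(widgets)),
                     dtype=numpy.int32)

# One increment per data row.
counts[factories['Factory1'], dates['2004-09-07'], widgets['WidgetA']] += 1
print(counts[factories['Factory1'], dates['2004-09-07'], widgets['WidgetA']])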

On Tue, Sep 07, 2004 at 10:47:22AM -0700, Benjamin Scott wrote:
[original post quoted in full, snipped]

Jul 18 '05 #2
You are asking Python to create 125,000,000
(yes, that's 125 million) empty dictionaries. The hash
(key) of each dictionary entry takes up at least a couple
of bytes, plus the amount of space taken up by an
empty dictionary object at each entry (125 million
of them). I'm not surprised; it adds up pretty quickly.
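
A back-of-the-envelope check (a sketch; the exact per-object overhead
depends on the Python version and platform, and 16 bytes is the
conservative lower bound cited later in this thread):

# Number of innermost slots in the compound dictionary.
slots = 1000 * 250 * 500          # 125,000,000

# Lower bound: ~16 bytes per empty container object, ignoring the
# hash-table overhead of the enclosing dictionaries.
low = slots * 16                  # 2,000,000,000 bytes

print("slots: %d" % slots)                 # slots: 125000000
print("at least: %.1f GB" % (low / 1e9))   # at least: 2.0 GB, right where the process died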

HTH,
Larry Bates
Syscon, Inc.

"Benjamin Scott" <my************ **@hotmail.com> wrote in message
news:cc******** *************** ***@posting.goo gle.com...
Hello.

I attempted to build a compound dictionary:

len(Lst)=1000
len(nuerLst)=25 0
len(nuestLst)=5 00

Dict={}

for s in Lst:
Dict[s]={}

for s in Lst:
for t in nuerLst:
Dict[s][t]={}

for s in Lst:
for t in nuerLst:
for r in nuestLst:
Dict[s][t][r]={}

I got the following error:

Traceback (most recent call last):
File "<pyshell#8 9>", line 5, in -toplevel-
Dict[s][t][r]=[]
MemoryError
Specs:

Python 2.3.4
XPpro
4 GB RAM
Python was utilizing 2.0 GB when the error was generated. I have
attempted this task twice with different data sets. I got the same
error both times.

Thanks in advance for your feedback,

Benjamin Scott

Jul 18 '05 #3
Benjamin Scott <my**************@hotmail.com> wrote:
...
len(Lst)=1000
len(nuerLst)=250
len(nuestLst)=500

So you want 1000*250*500 = 125 million dictionaries...?
Specs:

Python 2.3.4
XPpro
4 GB RAM
Python was utilizing 2.0 GB when the error was generated. I have


So you've found out that a dictionary takes at least (about) 16 bytes
even when empty -- not surprising since 16 bytes is typically the least
slice of memory the system will allocate at a time. And you've found
out that XP so-called pro doesn't let a user program have more than 2GB
to itself -- I believe there are slight workarounds for that, as in
costly hacks that may let you have 3GB or so, but it's not going to help
if you want to put any information in those dictionaries, even a tiny
amount of info per dict will easily bump each dict's size to 32 bytes
and overwhelm your 32-bit processor's addressing capabilities (I'm
assuming you have a 32-bit CPU -- you don't say, but few people use
64-bitters yet).

What problem are you really trying to solve? Unless you can splurge
into a 64-bit CPU with an adequate OS (e.g., AMD 64 with a Linux for it,
or a G5-based Mac) anything requiring SO many gigabytes probably needs a
radical rethink of your intended architecture/strategy, and it's hard to
give suggestions without knowing what problem you need to solve.
Alex
Jul 18 '05 #4
Thanks for the replies.

First I will make a minor correction to the code I originally posted
and then I will describe the original problem I am trying to solve,
per Alex's request.

Correction:

for s in Lst:
    for t in nuerLst:
        for r in nuestLst:
            Dict[s][t][r] = {}

...should actually be...

for s in Lst:
    for t in nuerLst:
        for r in nuestLst:
            Dict[s][t][r] = []

That is, the object accessed by 3 keys is a list, not a 4th
dictionary.
The Original Problem:

The data set: 3 Columns and at least 100,000 rows. However, it can
be up to 1,000,000 rows.

For the purpose of illustration let's suppose that the first column
has the name of 1,000 "Factories", i.e. there are 1,000 unique symbols
in the first column. Likewise, suppose the second column contains a
"production date" or just a date; there are 250 unique dates in the
second column. Finally, suppose the third column contains a
description of a "widget type"; there are 500 unique widget
descriptions.

*** i.e. each row contains the name of one factory which produced one
widget type on a particular date. If a factory produced more than one
widget on a given date it is reflected in the data as a new row. ***

The motivation to construct the mentioned compound dictionary comes
from the fact that I need quick access to the following data sets:

Data Set for Factory #1:
Column#1: time 1, time 2, ... , time 250
Column#2: #widgets, #widgets, ... , #widgets <- same widget types

Data Set for Factory #2:
Column#1: time 1, time 2, ... , time 250
Column#2: #widgets, #widgets, ... , #widgets <- same widget types

..
..
..

Data Set for Factory #1000:
Column#1: time 1, time 2, ... , time 250
Column#2: #widgets, #widgets, ... , #widgets <- same widget types

Note that if the compound dictionary was created, it would be easy to
construct these data sets like so:

File = open('Data Set', 'r')
Lst = [line.split() for line in File.readlines()]   # one [factory, date, widget] per row
..
..
..

len(Lst[n]) = 3

Lst[n][0] = "Factory"
Lst[n][1] = "date"
Lst[n][2] = "WidgetType"

for s in Lst:
    Dict[s[0]][s[1]][s[2]].append('1')
..
..
..

len(Dict["Factory"]["date"]["WidgetType"]) = #Widgets of some type
produced at a Factory on a given date.

The idea here is that I will be graphing a handful of the data sets at
a time; they will then be discarded and a new handful will be
graphed... etc.

What I might attempt next is to construct the required data in R (or
NumPy) since an array object seems better suited for the task.
However, I'm not sure this will avert the memory error. So, does
anyone know how to increase the RAM limit for a process? Other
suggestions are also welcome.

-Benjamin Scott


al*****@yahoo.com (Alex Martelli) wrote in message
news:<1gjroty.tp0ndj1vwu3e7N%al*****@yahoo.com>...

[Alex's reply quoted in full, snipped]

Jul 18 '05 #5
On 7 Sep 2004 20:46:18 -0700, my**************@hotmail.com (Benjamin
Scott) declaimed the following in comp.lang.python:
The data set: 3 Columns and at least 100,000 rows. However, it can
be up to 1,000,000 rows.

[quoted description of the data layout and desired per-factory data sets snipped]
For some reason I'd suggest living with the speed of a decent
RDBMS (maybe MySQL) and generating the queries as needed.

Or, maybe, go to some fancier RDBMS -- one that supports "views"
or some other "predefined query" capability, so the result set doesn't
have to be computed each time.
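
A sketch of that approach using Python's sqlite3 module (which postdates
this thread's Python 2.3, so treat it as illustrative; the table and
column names are invented):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE production (factory TEXT, date TEXT, widget TEXT)')

# One row per widget produced (made-up data).
rows = [('Factory1', '2004-09-07', 'WidgetA'),
        ('Factory1', '2004-09-07', 'WidgetA'),
        ('Factory2', '2004-09-08', 'WidgetB')]
conn.executemany('INSERT INTO production VALUES (?, ?, ?)', rows)

# The database computes only the groups that actually occur,
# so the sparsity problem never arises.
for factory, date, widget, n in conn.execute(
        'SELECT factory, date, widget, COUNT(*) '
        'FROM production GROUP BY factory, date, widget'):
    print('%s %s %s -> %d' % (factory, date, widget, n))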

-- ============================================================== <
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG <
wu******@dm.net | Bestiaria Support Staff <
============================================================== <
Home Page: <http://www.dm.net/~wulfraed/> <
Overflow Page: <http://wlfraed.home.netcom.com/> <

Jul 18 '05 #6
On Tue, 07 Sep 2004 20:46:18 -0700, Benjamin Scott wrote:
The Original Problem:

The data set: 3 Columns and at least 100,000 rows. However, it can
be up to 1,000,000 rows.

[description of the 1,000 factories, 250 dates, and 500 widget types snipped]

Based on these numbers, aren't the majority of your lists empty? In that
case, key off of a tuple as you need them. For some "factory", "date",
"widget" and "whatever" (the data, which you don't specify):

Dict.setdefault((factory, date, widget), []).append(whatever)

(You may actually benefit from changing the setdefault around to some
"try"-based scheme; you'd have to benchmark the various choices if it
matters to you. Also, depending on your data, consider Psyco.)

No extra memory. Some potentially significant slowdown due to tuple
construction and empty list construction.
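
A minimal sketch of this tuple-keyed layout, with the try-based variant
mentioned above shown for comparison (the column values are invented):

Dict = {}

# setdefault flavor: create the list lazily on first use.
def add_setdefault(factory, date, widget, whatever):
    Dict.setdefault((factory, date, widget), []).append(whatever)

# try flavor: often faster when most keys already exist.
def add_try(factory, date, widget, whatever):
    key = (factory, date, widget)
    try:
        Dict[key].append(whatever)
    except KeyError:
        Dict[key] = [whatever]

add_setdefault('Factory1', '2004-09-07', 'WidgetA', 1)
add_try('Factory1', '2004-09-07', 'WidgetA', 1)
print(len(Dict['Factory1', '2004-09-07', 'WidgetA']))   # 2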

(Depending on what you are doing, C++ and the STL can be not *that* much
harder to program in, and it should support hash tables with tuple keys,
although I don't know the magic incantations off the top of my head. If
you're just adding some numbers or something, this is also likely to be
enough faster that it may even be faster to develop in, since the program
will almost certainly execute much, much faster, potentially making the
write-compile-test-debug cycle still faster than Python. Of course, if
this is a prelude to complicated meta-hack programming of the type Python
makes easy, never mind; I'm kinda guessing you're just storing numbers on
the grounds that you can't store much else in the machine at once. If
that's a bad assumption, you are in some trouble, as even C++ can only be
100%-epsilon efficient and it will likely be affected by the same memory
limit, in which case you either need a larger machine or a database.

Actually, you probably should go to a database now; these problems have a
habit of doubling or tripling in size when someone decides it'd be neat to
add Just One More attribute and you already pretty much can't afford that.)
Jul 18 '05 #7
Benjamin Scott <my**************@hotmail.com> wrote:
Thanks for the replies.

First I will make a minor correction to the code I originally posted
and then I will describe the original problem I am trying to solve,
per Alex's request.

Correction:

for s in Lst:
    for t in nuerLst:
        for r in nuestLst:
            Dict[s][t][r] = {}

...should actually be...

for s in Lst:
    for t in nuerLst:
        for r in nuestLst:
            Dict[s][t][r] = []

That is, the object accessed by 3 keys is a list, not a 4th
dictionary.
OK, unfortunately that doesn't change memory requirements, as 16 bytes
is still a minimum allocation for an object.


The Original Problem:

The data set: 3 Columns and at least 100,000 rows. However, it can
be up to 1,000,000 rows.
Aha -- a sparse 3D matrix, VERY sparse, no more than 1 million "true"
entries out of 125 million slots, and all the rest just
"placeholders"...

[description of the 1,000 factories, 250 dates, and 500 widget types snipped]
Sure, quite clear.

*** i.e. each row contains the name of one factory which produced one
widget type on a particular date. If a factory produced more than one
widget on a given date it is reflected in the data as a new row. ***

The motivation to construct the mentioned compound dictionary comes
from the fact that I need quick access to the following data sets: ...

len(Lst[n]) = 3

Lst[n][0]="Factory"
Lst[n][1]="date"
Lst[n][2]="WidgetType "

for s in Lst:
Dict[s[0]][s[1]][s[2]].append('1')
.
.
.

len(Dict["Factory"]["date"]["WidgetType "]) = #Widgets of some type
produced at a Factory on a given date.

The idea here is that I will be graphing a handful of the data sets at
a time; they will then be discarded and a new handful will be
graphed... etc.

What I might attempt next is to construct the required data in R (or
NumPy) since an array object seems better suited for the task.
However, I'm not sure this will avert the memory error. So, does
When you represent a sparse matrix as if it was a dense one, you hit a
typical wall of combinatorial explosion: you need memory proportional to
the product of all the dimensions, and for a matrix of high
dimensionality it's a horrid prospect.
anyone know how to increase the RAM limit for a process? Other
With a 32-bit CPU you're SOL, IMHO. One option is to change machines:
Apple has just announced a very reasonably priced iMac G5, a 64-bit
machine intended for the home; or, you can do as I did, and look for a
little-used, well-reconditioned, guaranteed PowerMac G5 -- the latter
can use 8 GB of physical RAM and more importantly the address space is
only bounded by the amount of disk available, so a few hundred GBs may
be handled if you're in no hurry. While these are wonderful machines,
however, I think you can do better. Consider...:
suggestions are also welcome.


The Null Object Design Pattern is more likely to be what you want (a
fancy name for what in this case is quite simple, read on...):

Start by placing in each slot of the compound dictionary the SAME
object, which is just a placeholder. So you'll still have 125 million
slots, but all initially will point at the same placeholder: so you're
spending only 125 million times the size of a SLOT, about 4 bytes, for a
total of 500 megabytes -- plus something because dictionaries, being hash
tables, are always "overdimensioned" a bit, but you should fit very
comfortably in your 2GB anyway.

Now, as the data come in, you ADD 1 instead of APPENDING a string of '1'
to the appropriate slot. THEN and only then do those relatively few
cells of the 3D matrix take up space for a new object.

Moreover with the operations you appear to need you don't need to make a
special null object, I think: just the integer 0 will do, and you will
not call len() at the end since the integer is already stored in the
cell. If you wanted to store more info in each cell or somehow keep
track more directly of what cells are non-empty, etc etc, then you would
go for a more complete Null Object DP. But for your problem as stated,
the following might suffice:

1. initialize your dictionary with:

for s in Lst:
    for t in nuerLst:
        for r in nuestLst:
            Dict[s][t][r] = 0

2. update it on each incoming datum with:

for s in Lst:
    Dict[s[0]][s[1]][s[2]] += 1

3. consult it when done with:

Dict["Factory"]["date"]["WidgetType"] == #Widgets of some type
produced at a Factory on a given date.
Hope this helps -- if you do need a bit more, write about it here and
I'll happily show you a richer Null Object Design Pattern variant!
Alex
Jul 18 '05 #8
Jeremy Bowers <je**@jerf.org> wrote:
...
Based on these numbers, aren't the majority of your lists empty? In that
case, key off of a tuple as you need them. For some "factory", "date",
"widget" and "whatever" (the data, which you don't specify):

Dict.setdefault((factory, date, widget), []).append(whatever)
Well, he does specify a '1' as the datum, which is why I suggested
keeping the matrix dense (minimal change to his program) and just using
numbers instead of lists (he only needs the length of the lists, he
might as well be keeping the counts differently).

But, yes, indexing with the tuple would probably cut down memory
consumption further, since he's got no more than a million entries; each
entry may take a few tens of bytes (if it needs to uniquely keep a tuple
and the strings in it) so his memory consumption would plummet. Also,
he would save the long initialization phase entirely.

The parentheses around the tuple are not needed when subscripting, and
for the "keeping counts" approach he should be updating the counts with:

Dict[factory, date, widget] = 1 + Dict.get((factory, date, widget), 0)

Also, Dict.get((factory, date, widget), 0) is what he would be using to
check his Dict at the end. It's no doubt worth making this prettier...:

class SparseMatrix(dict):
    def __getitem__(self, key): return self.get(key, 0)

so the update becomes:
Dict[factory, date, widget] += 1
and the check at the end just Dict[factory, date, widget].
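
Putting those pieces together, a minimal runnable sketch (the row data
is invented for illustration):

class SparseMatrix(dict):
    def __getitem__(self, key):
        return self.get(key, 0)

Dict = SparseMatrix()

# One count per (factory, date, widget) row; missing keys read as 0.
rows = [('Factory1', '2004-09-07', 'WidgetA'),
        ('Factory1', '2004-09-07', 'WidgetA'),
        ('Factory2', '2004-09-08', 'WidgetB')]
for factory, date, widget in rows:
    Dict[factory, date, widget] += 1

print(Dict['Factory1', '2004-09-07', 'WidgetA'])   # 2
print(Dict['Factory9', '2004-09-09', 'WidgetZ'])   # 0, and no KeyError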

(You may actually benefit from changing the setdefault around to some
"try"-based scheme; you'd have to benchmark the various choices if it
matters to you. Also, depending on your data, consider Psyco.)
If you're trying to SAVE memory, do NOT, repeat NOT, consider Psyco,
which eats it for breakfast;-).
No extra memory. Some potentially significant slowdown due to tuple
construction and empty list construction.
Nah, he's saving 125 M operations at initialization, and doing at most
about 1M updates, no way this will slow things down. But then it
appears he doesn't need the lists.
(Depending on what you are doing, C++ and the STL can be not *that* much
harder to program in, and it should support hash tables with tuple keys,
although I don't know the magic incantations off the top of my head. If
I doubt they'll beat Python dicts, which are a wonder of speed.
you're just adding some numbers or something, this is also likely to be
enough faster that it may even be faster to develop in, since the program
will almost certainly execute much, much faster, potentially making the
I think you're wrong here. C++ in terms of standard only has
order-based containers, whose performance just can't compare to hash
tables. I do believe the STL, which has been evolving independently
from C++ for many years now, has added hash tables (and surely you can
find good ones on www.boost.org, you can find EVERYTHING there and if
you're programming in C++ it should be your second home).
Actually, you probably should go to a database now; these problems have a
habit of doubling or tripling in size when someone decides it'd be neat to
add Just One More attribute and you already pretty much can't afford that.)


With a tuple-indexed SparseMatrix of numbers, his program will be taking
up a tiny fraction of the memory it's burning now, and he will have room
to spare for enhancements. And adding an attribute will cost little,
say a few tens of megabytes if he has a million entries and the new
attribute is typically a string a few tens of bytes long.

Relational databases are cool because you can make impromptu queries
(if you speak SQL) and with clever indexing the cost of reading in the
dataset is only paid once. But for his problem as stated he may well be
happier staying in Python and just using the tricks suggested in these
posts.
Suppose he did need lists and mere counts would not do. A fuller Null
Object DP can help. E.g., consider:

In [5]: class Nullo:
   ...:     def __iadd__(self, wot): return wot
   ...:     def __len__(self): return 0
   ...:     def __iter__(self): return iter(())
   ...:

In [6]: na = Nullo()

In [7]: alist = [na] * 33

In [8]: alist[7] += [23]

In [9]: alist[7] += [45]

In [10]: alist[12] += [3]
In [12]: [(i,x) for i,x in enumerate(alist) if x]
Out[12]: [(7, [23, 45]), (12, [3])]
Some might consider this an abuse of __iadd__ (shouldn't it "always"
return self?!) but I consider it a case of "practicality beats purity".

Use one (ONE) instance of this Nullo class as the default object
returned by the above-shown class SparseMatrix, and you have a sparse
matrix of "lists" (most of them are that one Nullo instance, which
behaves, ducktyping-like, as an empty list, for the requested uses; the
non-empty ones are bona fide lists)...
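
Combining the two pieces as described (a sketch; the class names come
from the posts above, the sample keys are invented):

class Nullo:
    def __iadd__(self, wot): return wot
    def __len__(self): return 0
    def __iter__(self): return iter(())

NULLO = Nullo()   # one (ONE) shared instance

class SparseMatrix(dict):
    def __getitem__(self, key):
        return self.get(key, NULLO)

Dict = SparseMatrix()
# The first += replaces the shared Nullo with a real list; later ones extend it.
Dict['Factory1', '2004-09-07', 'WidgetA'] += ['serial-001']
Dict['Factory1', '2004-09-07', 'WidgetA'] += ['serial-002']

print(len(Dict['Factory1', '2004-09-07', 'WidgetA']))   # 2
print(len(Dict['Factory9', '2004-09-09', 'WidgetZ']))   # 0, empty-list-like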
Alex
Jul 18 '05 #9
