Bytes | Software Development & Data Engineering Community

efficient data loading with Python, is that possible?

Hi, I am pretty new to Python and trying to use it for a relatively
simple problem of loading a 5 million line text file and converting it
into a few binary files. The text file has a fixed format (like a
punchcard). The columns contain integer, real, and date values. The
output files are the same values in binary. I have to parse the values
and write the binary tuples out into the correct file based on a given
column. It's a little more involved but that's not important.

I have a C++ prototype of the parsing code and it loads a 5 Mline file
in about a minute. I was expecting the Python version to be 3-4 times
slower and I can live with that. Unfortunately, it's 20 times slower
and I don't see how I can fix that.

The fundamental difference is that in C++, I create a single object (a
line buffer) that's reused for each input line and column values are
extracted straight from that buffer without creating new string
objects. In python, new objects must be created and destroyed by the
million which must incur serious memory management overhead.

Correct me if I am wrong but

1) for line in file: ...
will create a new string object for every input line

2) line[start:end]
will create a new string object as well

3) int(time.mktime(time.strptime(s, "%m%d%y%H%M%S")))
will create 10 objects (since struct_time has 8 fields)

4) a simple test: line[i:j] + line[m:n] in hash
creates 3 strings and there is no way to avoid that.

I thought arrays would help but I can't load an array without creating
a string first: ar(line, start, end) is not supported.
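
To make that concrete, the per-line work I have in mind looks roughly like
this (the column positions and field meanings below are just an
illustration, not my real layout):

from time import mktime, strptime

def parse_line(line):
    # hypothetical fixed-width layout: id, price, timestamp
    rec_id = int(line[0:8])             # slice -> new string -> new int
    price = int(line[8:16]) / 100.0     # another string, another float
    # timestamp -> struct_time -> float -> int: several temporaries per call
    ts = int(mktime(strptime(line[16:28], "%m%d%y%H%M%S")))
    return rec_id, price, ts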

I hope I am missing something. I really like Python but if there is no
way to process data efficiently, that seems to be a problem.

Thanks,
igor

Dec 12 '07 #1
On Dec 12, 5:48 pm, igor.tatari...@gmail.com wrote:
[the original post, quoted in full, snipped]
20 times slower because of garbage collection sounds kinda fishy.
Posting some actual code usually helps; it's hard to tell for sure
otherwise.

George
Dec 12 '07 #2
On Dec 12, 4:03 pm, John Machin <sjmac...@lexicon.net> wrote:
Inside your function
[you are doing all this inside a function, not at global level in a
script, aren't you?], do this:
from time import mktime, strptime # do this ONCE
...
blahblah = int(mktime(strptime(s, "%m%d%y%H%M%S")))

It would help if you told us what platform, what version of Python,
how much memory, how much swap space, ...

Cheers,
John
I am using a global 'from time import ...'. I will try to do that within
the function and see if it makes a difference.

The computer I am using has 8G of RAM. It's a Linux dual-core AMD or
something like that. Python 2.4

Here is some of my code. Tell me what's wrong with it :)

def loadFile(inputFile, loader):
    # .zip files don't work with zlib
    f = popen('zcat ' + inputFile)
    for line in f:
        loader.handleLine(line)
    ...

In Loader class:

def handleLine(self, line):
    # filter out 'wrong' lines
    if not self._dataFormat(line): return

    # add a new output record
    rec = self.result.addRecord()

    for col in self._dataFormat.colFormats:
        value = parseValue(line, col)
        rec[col.attr] = value

And here is parseValue (will using a hash-based dispatch make it much
faster?):

def parseValue(line, col):
    s = line[col.start:col.end+1]
    # no switch in python
    if col.format == ColumnFormat.DATE:
        return Format.parseDate(s)
    if col.format == ColumnFormat.UNSIGNED:
        return Format.parseUnsigned(s)
    if col.format == ColumnFormat.STRING:
        # and-or trick (no x ? y:z in python 2.4)
        return not col.strip and s or rstrip(s)
    if col.format == ColumnFormat.BOOLEAN:
        return s == col.arg and 'Y' or 'N'
    if col.format == ColumnFormat.PRICE:
        return Format.parseUnsigned(s)/100.

And here is Format.parseDate() as an example:

def parseDate(s):
    # missing (infinite) value ?
    if s.startswith('999999') or s.startswith('000000'): return -1
    return int(mktime(strptime(s, "%y%m%d")))

Hopefully, this should be enough to tell what's wrong with my code.

Thanks again,
igor
Dec 13 '07 #3
On Dec 13, 11:44 am, igor.tatari...@gmail.com wrote:
[quoted text from the previous post snipped]
I have to go out now, so here's a quick overview: too many goddam dots
and too many goddam method calls.
1. do
   colfmt = col.format # ONCE
   if colfmt == ...
2. No switch so put most frequent at the top
3. What is ColumnFormat? What is Format? I think you have gone
   class-crazy, and there's more overhead than working code ...
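
For example, applied to the parseValue you posted (same names as your
snippet; which format is actually most frequent is a guess you'd need to
check against your data):

def parseValue(line, col):
    s = line[col.start:col.end+1]
    colfmt = col.format                    # fetch the attribute ONCE per call
    # most common format first (guessing UNSIGNED here)
    if colfmt == ColumnFormat.UNSIGNED:
        return Format.parseUnsigned(s)
    if colfmt == ColumnFormat.DATE:
        return Format.parseDate(s)
    if colfmt == ColumnFormat.STRING:
        return not col.strip and s or rstrip(s)
    if colfmt == ColumnFormat.BOOLEAN:
        return s == col.arg and 'Y' or 'N'
    if colfmt == ColumnFormat.PRICE:
        return Format.parseUnsigned(s)/100.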

Cheers,
John
Dec 13 '07 #4
On Wed, 12 Dec 2007 14:48:03 -0800, igor.tatarinov wrote:
Hi, I am pretty new to Python and trying to use it for a relatively
simple problem of loading a 5 million line text file and converting it
into a few binary files. The text file has a fixed format (like a
punchcard). The columns contain integer, real, and date values. The
output files are the same values in binary. I have to parse the values
and write the binary tuples out into the correct file based on a given
column. It's a little more involved but that's not important.
I suspect that this actually is important, and that your slowdown has
everything to do with the stuff you dismiss and nothing to do with
Python's object model or execution speed.

I have a C++ prototype of the parsing code and it loads a 5 Mline file
in about a minute. I was expecting the Python version to be 3-4 times
slower and I can live with that. Unfortunately, it's 20 times slower and
I don't see how I can fix that.
I've run a quick test on my machine with a mere 1GB of RAM, reading the
entire file into memory at once, and then doing some quick processing on
each line:

>>> def make_big_file(name, size=5000000):
...     fp = open(name, 'w')
...     for i in xrange(size):
...         fp.write('here is a bunch of text with a newline\n')
...     fp.close()
...
>>> make_big_file('BIG')

>>> def test(name):
...     import time
...     start = time.time()
...     fp = open(name, 'r')
...     for line in fp.readlines():
...         line = line.strip()
...         words = line.split()
...     fp.close()
...     return time.time() - start
...
>>> test('BIG')
22.53150200843811

Twenty two seconds to read five million lines and split them into words.
I suggest the other nineteen minutes and forty-odd seconds your code is
taking has something to do with your code and not Python's execution
speed.

Of course, I wouldn't normally read all 5M lines into memory in one big
chunk. Replace the code

for line in fp.readlines():

with

for line in fp:

and the time drops from 22 seconds to 16.

--
Steven
Dec 13 '07 #5
Back about 8 yrs ago, on pc hardware, I was reading twin 5 Mb files
and doing a 'fancy' diff between the 2, in about 60 seconds. Granted,
your file is likely bigger, but so is modern hardware and 20 mins does
seem a bit high.

Can't talk about the rest of your code, but some parts of it may be
optimized

def parseValue(line, col):
    s = line[col.start:col.end+1]
    # no switch in python
    if col.format == ColumnFormat.DATE:
        return Format.parseDate(s)
    if col.format == ColumnFormat.UNSIGNED:
        return Format.parseUnsigned(s)

How about taking the big if clause out? That would require making all
the formatters into functions, rather than in-lining some of them, but
it may clean things up.

#prebuilding a lookup of functions vs. expected formats...
#This is done once.
#Remember, you have to position this dict's computation _after_ all
#the Format.parseXXX declarations. Don't worry, Python _will_ complain
#if you don't.

dict_format_func = {ColumnFormat.DATE: Format.parseDate,
                    ColumnFormat.UNSIGNED: Format.parseUnsigned,
                    ....
                    }

def parseValue(line, col):
    s = line[col.start:col.end+1]

    #get applicable function, apply it to s
    return dict_format_func[col.format](s)

Also...

if col.format == ColumnFormat.STRING:
    # and-or trick (no x ? y:z in python 2.4)
    return not col.strip and s or rstrip(s)

Watch out! 'col.strip' here is not the result of stripping the
column, it is the strip _function_ itself, bound to the col object, so
it will always be true. I get caught by those things all the time :-(

I agree that taking out the dot.dot.dots would help, but I wouldn't
expect it to matter that much, unless it was in an incredibly tight
loop.

It might be that.

if s.startswith('999999') or s.startswith('000000'): return -1

would be better as...

#outside of loop, define a set of values for which you want to return -1
set_return = set(['999999', '000000'])

#lookup first 6 chars in your set
def parseDate(s):
    if s[0:6] in set_return:
        return -1
    return int(mktime(strptime(s, "%y%m%d")))

Bottom line: Python built-in data objects, such as dictionaries and
sets, are very much optimized. Relying on them, rather than writing a
lot of ifs and doing weird data structure manipulations in Python
itself, is a good approach to try. Try to build those objects outside
of your main processing loops.

Cheers

Douhet-did-suck

Dec 13 '07 #6
On Wed, 12 Dec 2007 16:44:01 -0800, igor.tatarinov wrote:
Here is some of my code. Tell me what's wrong with it :)

def loadFile(inputFile, loader):
    # .zip files don't work with zlib
Pardon?
    f = popen('zcat ' + inputFile)
    for line in f:
        loader.handleLine(line)
Do you really need to compress the file? Five million lines isn't a lot.
It depends on the length of each line, naturally, but I'd be surprised if
it were more than 100MB.
...

In Loader class:
def handleLine(self, line):
    # filter out 'wrong' lines
    if not self._dataFormat(line): return

Who knows what the _dataFormat() method does? How complicated is it? Why
is it a private method?

    # add a new output record
    rec = self.result.addRecord()
Who knows what this does? How complicated is it?

    for col in self._dataFormat.colFormats:
Hmmm... a moment ago, _dataFormat seemed to be a method, or at least a
callable. Now it has grown a colFormats attribute. Complicated and
confusing.

        value = parseValue(line, col)
        rec[col.attr] = value

And here is parseValue (will using a hash-based dispatch make it much
faster?):
Possibly, but not enough to reduce 20 minutes to one or two.

But you know something? Your code looks like a bad case of over-
generalisation. I assume it's a translation of your C++ code -- no wonder
it takes an entire minute to process the file! (Oh lord, did I just say
that???) Object-oriented programming is a useful tool, but sometimes you
don't need a HyperDispatcherLoaderManagerCreator, you just need a hammer.

In your earlier post, you gave the data specification:

"The text file has a fixed format (like a punchcard). The columns contain
integer, real, and date values. The output files are the same values in
binary."

Easy-peasy. First, some test data:
fp = open('BIG', 'w')
for i in xrange(5000000):
    anInt = i % 3000
    aBool = ['TRUE', 'YES', '1', 'Y', 'ON',
             'FALSE', 'NO', '0', 'N', 'OFF'][i % 10]
    aFloat = ['1.12', '-3.14', '0.0', '7.42'][i % 4]
    fp.write('%s %s %s\n' % (anInt, aBool, aFloat))
    if i % 45000 == 0:
        # Write a comment and a blank line.
        fp.write('# this is a comment\n \n')

fp.close()

Now let's process it:
import struct

# Define converters for each type of value to binary.
def fromBool(s):
    """String to boolean byte."""
    s = s.upper()
    if s in ('TRUE', 'YES', '1', 'Y', 'ON'):
        return struct.pack('b', True)
    elif s in ('FALSE', 'NO', '0', 'N', 'OFF'):
        return struct.pack('b', False)
    else:
        raise ValueError('not a valid boolean')

def fromInt(s):
    """String to integer bytes."""
    return struct.pack('l', int(s))

def fromFloat(s):
    """String to floating point bytes."""
    return struct.pack('f', float(s))

# Assume three fields...
DEFAULT_FORMAT = [fromInt, fromBool, fromFloat]

# And three files...
OUTPUT_FILES = ['ints.out', 'bools.out', 'floats.out']

def process_line(s, format=DEFAULT_FORMAT):
    s = s.strip()
    fields = s.split()  # I assume the fields are whitespace separated
    assert len(fields) == len(format)
    return [f(x) for (x, f) in zip(fields, format)]

def process_file(infile, outfiles=OUTPUT_FILES):
    out = [open(f, 'wb') for f in outfiles]
    for line in file(infile, 'r'):
        # ignore leading/trailing whitespace and comments
        line = line.strip()
        if line and not line.startswith('#'):
            fields = process_line(line)
            # now write the fields to the files
            for x, fp in zip(fields, out):
                fp.write(x)
    for f in out:
        f.close()

And now let's use it and see how long it takes:
>>> import time
>>> s = time.time(); process_file('BIG'); time.time() - s
129.58465385437012
Naturally if your converters are more complex (e.g. date-time), or if you
have more fields, it will take longer to process, but then I've made no
effort at all to optimize the code.
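
A date field converter in the same style might look something like this
(the '%y%m%d' layout is only a guess at igor's actual format):

import struct, time

def fromDate(s):
    """Date string like '071213' (YYMMDD assumed) to packed integer seconds."""
    return struct.pack('l', int(time.mktime(time.strptime(s, '%y%m%d'))))

# It would simply be appended to the converter list, e.g.
# DEFAULT_FORMAT = [fromInt, fromBool, fromFloat, fromDate]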

--
Steven.
Dec 13 '07 #7
igor:
The fundamental difference is that in C++, I create a single object (a
line buffer) that's reused for each input line and column values are
extracted straight from that buffer without creating new string
objects. In python, new objects must be created and destroyed by the
million which must incur serious memory management overhead.
Python creates indeed many objects (as I think Tim once said "it
allocates memory at a ferocious rate"), but the management of memory
is quite efficient. And you may use the JIT Psyco (that's currently
1000 times more useful than PyPy, despite sadly not being developed
anymore) that in some situations avoids data copying (example: in
slices). Python is designed for string processing, and from my
experience string processing Psyco programs may be faster than similar
not-optimized-to-death C++/D programs (you can see that with manually
crafted code, or with code generated by ShedSkin, which is often slower
than Psyco at string processing). But in every language I know to gain performance
you need to know the language, and Python isn't C++, so other kinds of
tricks are necessary.
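
Using Psyco is usually just a couple of lines at the top of the script;
you can also bind only the hot functions:

import psyco
psyco.full()                 # JIT-compile everything it can
# or, more selectively:
# psyco.bind(parseValue)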

The following advice is useful too:

DouhetSukd:
>Bottom line: Python built-in data objects, such as dictionaries and
sets, are very much optimized. Relying on them, rather than writing a
lot of ifs and doing weird data structure manipulations in Python
itself, is a good approach to try. Try to build those objects outside
of your main processing loops.<

Bye,
bearophile
Dec 13 '07 #8
On 2007-12-13, ig************@gmail.com <ig************@gmail.com> wrote:
[quoted text from igor's earlier post snipped]
An inefficient parsing technique is probably to blame. You first
inspect the line to make sure it is valid, then you inspect it
(number of columns) times to discover what data type it
contains, and then you inspect it *again* to finally translate
it.
And here is parseValue (will using a hash-based dispatch make
it much faster?):
Not much.

You should be able to validate, recognize and translate all in
one pass. Get pyparsing to help, if need be.
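
A rough sketch of that single-pass idea, with the column table built once
outside the loop (the slice positions and converters below are invented,
just to show the shape):

from time import mktime, strptime

def to_date(s):
    return int(mktime(strptime(s, "%y%m%d")))

# Built ONCE: one (slice, converter) pair per column.
COLUMNS = [(slice(0, 8), int),
           (slice(8, 16), float),
           (slice(16, 22), to_date)]

def translate(line, columns=COLUMNS):
    # one pass over the line: slice each field and convert it directly;
    # a malformed field raises ValueError, which doubles as validation
    return [convert(line[field]) for field, convert in columns]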

What does your data look like?

--
Neil Cerutti
Dec 13 '07 #9
Neil Cerutti wrote:
An inefficient parsing technique is probably to blame. You first
inspect the line to make sure it is valid, then you inspect it
(number of columns) times to discover what data type it
contains, and then you inspect it *again* to finally translate
it.
I was thinking just that. It is much more "pythonic" to simply attempt
to convert the values in whatever fashion they are supposed to be
converted, and handle errors in data format by means of exceptions.
IMO, of course. In the "trivial" case, where there are no errors in the
data file, this is a heck of a lot faster.
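
In code, that "just try it" style looks something like this (the field
layout is invented):

def convert(line):
    try:
        # convert first, without pre-checking the layout
        return int(line[0:8]), float(line[8:16])
    except ValueError:
        # a malformed line is handled only when it actually occurs
        return None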

-- Chris.

Dec 14 '07 #10
