Advice sought on best way to get data into a Linux db2 8.1.4 database

Hi all,

I'm looking for some advice on how best to implement storage of access
logs in a DB2 8.1.4 database running on a RH 7.2 system.

I have 5 (squid) web caches running here that service the whole
university. All access to external web sites must go through these
caches. Each cache generates a gzipped access log of about 100 MB
every night.

At the moment I'm ftp'ing them to a central system each night for
processing, which generates a set of HTML stat files totalling about
1.2 GB - per night. Needless to say, at that rate the 36 GB of disk
space I've assigned doesn't last very long. I'm therefore looking for
a way of transferring the data into a back-end database that I can
access via a web interface built on stored procedures, Java beans and
JSP pages. I probably don't have to dump the data into the db in real
time, so I could just post-process the existing access log files every
night. Having said that, updating the database in real time would save
a lot of disk space.

I can create a named pipe on the Linux box that the squid caching
process writes to, and have the other end connected to a process that
munges the data and writes it into a database. The code I've got (not
mine) is written in Perl and writes the data to a MySQL database; it
would, I think, be a trivial task to make it write to a DB2 back end
instead.

The other option is to cat an existing log through the same program
and just update the info offline.
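For illustration, the reading end of the pipe would look something
like this - a rough sketch only, not the code I actually have; the
table and column names are invented and it assumes the DBD::DB2
driver:

    #!/usr/bin/perl -w
    # Read squid access log lines from a named pipe and insert them
    # into DB2, committing in batches rather than per row.
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:DB2:weblogs', 'dbuser', 'secret',
                           { AutoCommit => 0, RaiseError => 1 });
    my $sth = $dbh->prepare('INSERT INTO squid.access_log ' .
                            '(ts, client, bytes, url) VALUES (?, ?, ?, ?)');

    open my $pipe, '<', '/var/log/squid/access.pipe' or die "pipe: $!";
    my $rows = 0;
    while (<$pipe>) {
        # native squid log: time elapsed client action/code size method url ...
        my ($ts, $elapsed, $client, $action, $size, $method, $url) = split ' ';
        next unless defined $url;              # skip truncated lines
        $sth->execute($ts, $client, $size, $url);
        $dbh->commit if ++$rows % 1000 == 0;   # batch the commits
    }
    $dbh->commit;
    $dbh->disconnect;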

All the web caches are running RH 8.0 with 4 GB of RAM and two
Gbit/s network links in a trunked configuration, so the DB2 server
would be receiving input from all five caches simultaneously.

I suppose what I'm asking is: what would be the quickest way of
getting the data into the database? Stick with Perl/DBI? A Java
program to process the input (that doesn't feel like it would be the
quickest way of doing things)? Piping a data file through the DB2 CLI?
Or something else?

Any help or suggestions appreciated.

alex
Nov 12 '05 #1
It is pretty quick to use Perl/DBI to load data into DB2. I loaded
about 5 GB of data (CSV format) into a back-end Linux DB2 in just 10
hours (on hardware slightly slower than yours). The only disadvantage
is transaction control: MySQL doesn't support transactions by default,
so you will have to add commit logic to the Perl code yourself when
you port it to DB2.

An alternative is to use the DB2 CLP to do batch loading. The LOAD
facility is very quick; in my case it took only 3 hours to finish the
load plus the integrity check (this depends on your table definitions
and the size of your data). But if you're not confident in the raw
data, you'd better use IMPORT, which inserts data row by row; that
took me more than 12 hours to complete the same task.
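For reference, the two variants look roughly like this (the file,
schema and table names are invented):

    db2 "LOAD FROM access.del OF DEL INSERT INTO squid.access_log"
    db2 "SET INTEGRITY FOR squid.access_log IMMEDIATE CHECKED"

    db2 "IMPORT FROM access.del OF DEL COMMITCOUNT 1000 INSERT INTO squid.access_log"

LOAD can leave the table in check-pending state when constraints are
defined, hence the SET INTEGRITY step; IMPORT's COMMITCOUNT keeps the
transaction log from filling up during a long run.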

Hope this helps.

Bing
Alex wrote:
[original post quoted in full; snipped]

Nov 12 '05 #2
Off the topic - make sure that you sit on a supported Linux distro. We
validated RH 7.2, but Red Hat took it off support (and the same is true
for RH 8 and RH 9). If you are not on a RHEL version they are not
willing to give you any support, and we cannot help from the DB2 side
with fixing their kernel issues (even if we would like to sometimes).

Boris
"Alex" <A.******@hull.ac.uk> wrote in message
news:b2**************************@posting.google.com...
[original post quoted in full; snipped]


Nov 12 '05 #3
Ken
A.******@hull.ac.uk (Alex) wrote in message news:<b2**************************@posting.google.com>...
I suppose what I'm asking is: what would be the quickest way of
getting the data into the database? Stick with Perl/DBI? A Java
program to process the input (that doesn't feel like it would be the
quickest way of doing things)? Piping a data file through the DB2 CLI?
Or something else?


I'd recommend a typical warehouse ETL solution. However, rather than
go the route of commercial ETL tools (especially for such a small
project), I'd implement it with simple commodity components:
- sftp the gzipped files to the warehouse server
- transform the files from extracts into load images using an
easily-maintained language such as python, or whatever works best for
you (see the sketch after this list)
- use the db2 load utility for loading into the base fact tables
- archive both the extract and load files in gzipped form in case you
want to recover or change your data model
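A rough sketch of the transform step (shown in perl since that's
what's already in use here; field positions follow squid's native log
format, and the output layout is invented):

    #!/usr/bin/perl -w
    # Turn a gzipped squid access log into a comma-delimited load
    # image that the db2 load utility can consume. Sketch only.
    use strict;

    my $log = shift or die "usage: $0 access.log.gz\n";
    open my $in, "gzip -dc $log |" or die "can't read $log: $!";

    while (<$in>) {
        # native squid log: time elapsed client action/code size
        #                   method url ident hierarchy/peer type
        my ($ts, $elapsed, $client, $action, $size,
            $method, $url, $ident, $hier, $type) = split ' ';
        next unless defined $type;              # skip truncated lines
        print join(',', $ts, $elapsed, $client, $action, $size,
                        $method, qq{"$url"}, $hier, $type), "\n";
    }
    close $in;

Run it as "transform.pl access.log.gz > access.del" and hand
access.del to the load utility.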

If you go this route then there are only two challenges initially:
- learning the ins & outs of the db2 load utility
- process management

The load utility isn't that bad - you just need to get familiar with
how it locks the table, how load recovery works, etc. And it's very
fast - a small box using load can easily hit 20,000 rows/sec - far
faster than anything you could do with java, perl, etc.

Process management is trickier, but if you only need to do loads once
a day it isn't too tough. I normally keep the extract, transport,
transform, and load processes completely separate - each run once a
minute via cron, each ensuring that only a single copy of itself is
running, and each interfacing with the others only via flat files. You
need to ensure that files aren't picked up until they're complete - so
a signal file should be written, or a file rename performed, at the
conclusion of a successful step (see the sketch below).
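For instance (a minimal sketch of the single-copy lock and the
rename-on-completion convention; the paths are invented):

    #!/usr/bin/perl -w
    # Skeleton for a cron-driven pipeline step: exit quietly if another
    # copy is already running, write to a temp name, and rename only on
    # success so downstream steps never see a partial file.
    use strict;
    use Fcntl qw(:flock);

    open my $lock, '>', '/var/run/etl_transform.lock' or die "lock: $!";
    exit 0 unless flock($lock, LOCK_EX | LOCK_NB);  # another copy is running

    my $out = '/data/stage/access.del';
    open my $tmp, '>', "$out.tmp" or die "tmp: $!";
    # ... do the real work, writing to $tmp ...
    close $tmp or die "close: $!";
    rename("$out.tmp", $out) or die "rename: $!";   # atomic within a filesystem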

However, an initial implementation (given just 5 files a day) could
be far simpler - based on a static schedule, with an alerting process
in case those files aren't available when the downstream processes get
kicked off. Developed in this fashion, a simple log fact table
solution could be built in 1-3 days, and easily enhanced later if you
wanted.

ken
Nov 12 '05 #4
