Bytes | Software Development & Data Engineering Community

perl hash doubt

Hi,
I have two text files as below:

file1.txt (4 columns):

test1 1000 2000 +
test2 1000 2000 -
test1 1000 2000 +
test3 1000 2000 +
test1 1000 2000 -
test2 2000 3000 +
test1 1000 2000 +
test1 1000 3000 -
file2.txt also contains very similar data.

The first step in the processing is to collect all the data. I want to use column 1 as the key and columns 2, 3, and 4 as the values.

For test1 (and every other key), the other three columns should be added after removing duplicates. If two lines like the ones below appear, only one value should be added:
test1 1000 2000 -
test1 1000 2000 -
key = {test1} and value = {1000 2000 -}
Until now I have always used a Perl hash with two columns: the first column as the key and the second as the value, and adding them to the hash removed duplicate values of the second column at the same time. I want to know how that approach can be applied to the above dataset: column 1 as the key and columns 2, 3, and 4 as the values (basically to remove duplicates). Can I concatenate them into one string with some pattern such as "1000:2000:-"? Please let me know.
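For illustration, the joined-string idea might look like this minimal sketch (the sample data is inlined via __DATA__ to keep it self-contained; in a real script the lines would come from file1.txt):

```perl
use strict;
use warnings;

# Build a hash of hashes: column 1 is the outer key, and columns 2-4
# joined with ":" form the inner key, so duplicate rows collapse
# automatically when assigned again.
my %data;
while (my $line = <DATA>) {
    chomp $line;
    next unless $line =~ /\S/;            # skip blank lines
    my ($name, @cols) = split ' ', $line;
    $data{$name}{ join ':', @cols }++;    # duplicates land on the same key
}

# Print each unique record once.
for my $name (sort keys %data) {
    for my $value (sort keys %{ $data{$name} }) {
        print "$name => $value\n";
    }
}

__DATA__
test1 1000 2000 +
test1 1000 2000 +
test1 1000 2000 -
```

This prints only two lines for test1, because the repeated "1000 2000 +" row maps to the same inner key.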

My second query would be how to compare perl hash1 (data from file1) and hash2(data from file2).

For example, for test1 (every key), I have to compare the values between the two hashes.

Please let me know as I am not familiar with perl hash of hash. Do I have to use that?

My basic motivation is to remove duplicates from the two files and then compare the two hashes to find how many of the column2:column3:column4 values are present in both files, as well as the ones that are unique to each data set. Or is there any other way to handle the data?
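One way to get exactly those counts is to key each hash by the whole record string and compare the key sets with exists. A minimal sketch (the file contents are inlined here as strings so the example is self-contained; in practice you would open file1.txt and file2.txt):

```perl
use strict;
use warnings;

# Inlined stand-ins for the two input files.
my $file1 = "test1 1000 2000 +\ntest2 1000 2000 -\ntest1 1000 2000 +\n";
my $file2 = "test1 1000 2000 +\ntest5 5000 7000 +\n";

# Deduplicate one file's contents into a hash keyed by the full record.
sub load_records {
    my ($text) = @_;
    my %seen;
    for my $line (split /\n/, $text) {
        next unless $line =~ /\S/;
        $seen{ join ' ', split ' ', $line } = 1;   # duplicates collapse here
    }
    return \%seen;
}

my $h1 = load_records($file1);
my $h2 = load_records($file2);

# Records in both files, only in file1, and only in file2.
my @common = grep { exists $h2->{$_} }  keys %$h1;
my @only1  = grep { !exists $h2->{$_} } keys %$h1;
my @only2  = grep { !exists $h1->{$_} } keys %$h2;

printf "common: %d, only in file1: %d, only in file2: %d\n",
    scalar @common, scalar @only1, scalar @only2;
```

With the sample data above this prints "common: 1, only in file1: 1, only in file2: 1".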

It is all highly confusing. A small example would make it easy for me to proceed.

Thanks in advance.
Oct 27 '09 #1
toolic
I do not understand what you are trying to accomplish.

If you also post a small example of your 2nd file, along
with the exact output you are looking for, I might be able to
create some example code for you.

A hash-of-hashes data structure is very useful
and may be appropriate for this task.
Oct 27 '09 #2
lilly07
Hi Toolic, thanks for your response. My file2.txt contains data in the same format as file1.txt:

test1 1000 4000 +
test3 1000 2000 -
test1 1000 2000 +
test5 5000 7000 +
test1 1000 2000 -
test2 2000 4000 +
test3 1000 6000 +
test1 1000 3000 -
As I mentioned earlier, I need to use test1 (column 1) as the key, and columns 2, 3, and 4 have to be collected as values after removing the duplicates. For example, if there are two records as below:
test1 1000 2000 -
test1 1000 2000 -
Then only one pair should be added into the hash: test1 as the key and "1000 2000 -" as the value. This removes the duplicates of columns 2, 3, and 4.

And the next step would be comparing two files.

As the data from file1 and file2 are collected in two different hashes, I want to check, for every key in the file2 hash (i.e. test1), whether each of its values also exists under the same key in the file1 hash. That is, keys common to both hashes are selected and their values are compared. Is that feasible?
I know this is highly confusing. Sorry and thanks again.

Regards
Oct 28 '09 #3
nithinpes
The approach would be to remove duplicate lines first.
Then split each unique line on whitespace into an array and create a hash of arrays: the first column in the file becomes the hash key, and the remaining values are pushed into an array that is the value for that key. Whenever you come across a key that already exists, append the rest of the elements to the existing value for that key.
I am giving only the approach since you haven't posted the code that you tried. If you face any issues, post your code so that we can correct or modify it.
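As a rough sketch of that approach (sample data inlined via __DATA__ so it runs as-is; with a real file you would open file1.txt instead), with a %seen hash guarding the push so only unique lines land in the arrays:

```perl
use strict;
use warnings;

# Hash-of-arrays: column 1 becomes the key, and a reference to the
# remaining columns is pushed onto that key's array. The %seen hash
# skips duplicate lines before they are pushed.
my (%seen, %data);
while (my $line = <DATA>) {
    chomp $line;
    next unless $line =~ /\S/;
    next if $seen{$line}++;               # skip duplicate lines
    my ($name, @cols) = split ' ', $line;
    push @{ $data{$name} }, \@cols;       # hash of arrays
}

# $data{test1} now holds one array ref per unique test1 record.
for my $name (sort keys %data) {
    for my $cols (@{ $data{$name} }) {
        print "$name: @$cols\n";
    }
}

__DATA__
test1 1000 2000 +
test1 1000 2000 +
test1 1000 3000 -
test2 2000 3000 +
```

The duplicate "test1 1000 2000 +" line is pushed only once, so test1 ends up with two array refs and test2 with one.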
Oct 29 '09 #4
lilly07
Yes. Thank you so much for your response.

I managed to remove duplicates as below. A very simple way.

%seen = ();
while (<>) {
    $seen{$_}++;
    next if $seen{$_} > 1;
    print $_;
}
As I don't know how to compare two hashes, I couldn't proceed further. I will definitely follow your suggestion of pushing the data into a hash keyed by column 1 with the rest in an array. Thanks.
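For the hash comparison, one sketch of the per-key check: assume each file has been collected into a hash of hashes where the outer key is column 1 and the inner keys are the joined "col2:col3:col4" strings (%h1 and %h2 below are hypothetical names with hand-filled data standing in for the two files). Then exists on both levels tells you whether a value is shared:

```perl
use strict;
use warnings;

# Hand-filled stand-ins for the deduplicated data from file1 and file2.
my %h1 = (
    test1 => { '1000:2000:+' => 1, '1000:3000:-' => 1 },
    test2 => { '2000:3000:+' => 1 },
);
my %h2 = (
    test1 => { '1000:2000:+' => 1, '1000:4000:+' => 1 },
);

# For every key in %h1, check which of its values also exist in %h2.
for my $key (sort keys %h1) {
    for my $value (sort keys %{ $h1{$key} }) {
        if (exists $h2{$key} && exists $h2{$key}{$value}) {
            print "$key $value is in both hashes\n";
        }
        else {
            print "$key $value is only in hash1\n";
        }
    }
}
```

Looping over %h2 the same way (with the roles swapped) finds the values unique to file2. The `exists $h2{$key}` guard matters: testing `$h2{$key}{$value}` directly on a missing key would autovivify an empty inner hash in %h2.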
Oct 30 '09 #5

