
perl hash doubt

P: 89
Hi,
I have two text files, as below:

file1.txt (4 columns)
test1 1000 2000 +
test2 1000 2000 -
test1 1000 2000 +
test3 1000 2000 +
test1 1000 2000 -
test2 2000 3000 +
test1 1000 2000 +
test1 1000 3000 -
file2.txt also contains very similar data.

The first step in the processing is to collect all the data. Hence I want to use column 1 as the key and columns 2, 3 and 4 as the values.

For all the test1 entries, the other three columns should be added after removing duplicates. If two lines like the ones below are present, then only one value should be added:
test1 1000 2000 -
test1 1000 2000 -
key = {test1} and value = {1000 2000 -}
So far I have only ever used a Perl hash with two columns, taking the first column as the key and the second as the value; storing the second column in the hash removed its duplicates at the same time. I want to know how the same idea applies to the dataset above: column 1 as the key and columns 2, 3 and 4 as the values (basically to remove duplicates). Can I concatenate them into one string with some pattern, such as "1000:2000:-"? Please let me know.

My second query would be how to compare Perl hash1 (data from file1) and hash2 (data from file2).

For example, for test1 (indeed for every key), I have to compare the values between the two hashes.

Please let me know, as I am not familiar with a Perl hash of hashes. Do I have to use that?

My basic motivation is to remove duplicates from the two files and then compare the two hashes to find how many of the column2:column3:column4 values are present in both files, as well as the ones that are unique to each data set. Or is there another way to handle the data?
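What I picture is something like this (again my own untested sketch, assuming each file has been read into its own hash, %data1 and %data2, the way I sketched above for file1):

for my $name (keys %data1) {
    for my $triple (keys %{ $data1{$name} }) {
        if (exists $data2{$name} && exists $data2{$name}{$triple}) {
            print "in both files: $name $triple\n";
        }
        else {
            print "only in file1: $name $triple\n";
        }
    }
}
# a symmetric loop over %data2 would list the triples unique to file2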

But it is all highly confusing. A small example would make it easy for me to proceed.

Thanks in advance.
Oct 27 '09 #1
4 Replies


Expert
P: 70
I do not understand what you are trying to accomplish.

If you also post a small example of your 2nd file, along with the exact output you are looking for, I might be able to create some example code for you.

A hash-of-hashes data structure is very useful and may be appropriate for this task.
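For instance, with the joined columns as inner keys, duplicates collapse on their own (a toy sketch, not tied to your real data):

my %h;
$h{test1}{'1000:2000:+'} = 1;      # autovivifies the inner hash
$h{test1}{'1000:2000:-'} = 1;
$h{test1}{'1000:2000:+'} = 1;      # same inner key again: still stored once
print scalar keys %{ $h{test1} };  # prints 2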
Oct 27 '09 #2

P: 89
Hi Toolic, thanks for your response. My file2.txt also contains data in the same format as file1.txt:

test1 1000 4000 +
test3 1000 2000 -
test1 1000 2000 +
test5 5000 7000 +
test1 1000 2000 -
test2 2000 4000 +
test3 1000 6000 +
test1 1000 3000 -
As I mentioned earlier, I need to collect column 1 (e.g. test1) as the key, with columns 2, 3 and 4 collected as the values after removing duplicates. For example, if there are two records as below:
test1 1000 2000 -
test1 1000 2000 -
Then only one pair should be added into the hash: test1 as the {key} and "1000 2000 -" as the {value}. This removes the duplicates of columns 2, 3 and 4.

And the next step would be comparing the two files.

As the file1 and file2 data are collected in two different hashes, for every key in the file2 hash (e.g. test1), I want to check whether each of its values also exists under the same key in the file1 hash. That is, keys shared between the two hashes are matched up and the values stored under them are compared. Is that feasible?
I know this is highly confusing. Sorry and thanks again.

Regards
Oct 28 '09 #3

nithinpes
Expert 100+
P: 410
The approach would be to remove the duplicate lines first.
Then split each unique line on whitespace into an array and build a hash of arrays: the first column in the file becomes the hash key, and the remaining values are pushed into an array, which is the value for that key. Whenever you come across a key that already exists, append the rest of the elements to that key's existing array.
Since you haven't posted the code that you tried, I am keeping this at the level of an outline. If you face any issues, post your code so that we can correct or modify it.
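In code, that outline might look roughly like this (an untested sketch; the file name and the ':' separator are placeholders to adapt):

use strict;
use warnings;

my (%seen, %hoa);
open my $fh, '<', 'file1.txt' or die "Cannot open file1.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    next if $seen{$line}++;                 # skip duplicate lines
    my ($key, @rest) = split ' ', $line;    # first column becomes the hash key
    push @{ $hoa{$key} }, join ':', @rest;  # remaining columns become one value
}
close $fh;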
Oct 29 '09 #4

P: 89
Yes. Thank you so much for your response.

I managed to remove duplicates as below, in a very simple way.

use strict;
use warnings;

my %seen;
while (<>) {
    next if $seen{$_}++;   # count every line; skip all but its first occurrence
    print;                 # print the line the first time it is seen
}
As I don't know how to compare two hashes, I couldn't proceed further. I will definitely follow your suggestion of pushing the data into a Perl hash of arrays. Thanks.
Oct 30 '09 #5
