parsing pipe delimited txt file

Hi All,


In file1.txt I have the following data.

1B13C1311945038FE0440003BA1DC817-PLC13|1B13C1311945038FE0440003BA1DC817-PLC130.18063115540216856
7A7789C8-448E-11DD-8F9E-D9EE09CAC2E2|7A7789C8-448E-11DD-8F9E-D9EE09CAC2E20.29695003763007266
7A7789C8-448E-11DD-8F9E-D9EE09CAC2E2|7A7789C8-448E-11DD-8F9E-D9EE09CAC2E20.7591754537315574


All of the data is pipe delimited. The first field is the question id and the second field is the answer id.
If two or more rows have the same question id (like the 2nd and 3rd lines above), I want all of their answer ids appended onto a single line, like the following:

1B13C1311945038FE0440003BA1DC817-PLC13|1B13C1311945038FE0440003BA1DC817-PLC130.18063115540216856
7A7789C8-448E-11DD-8F9E-D9EE09CAC2E2|7A7789C8-448E-11DD-8F9E-D9EE09CAC2E20.29695003763007266|7A7789C8-448E-11DD-8F9E-D9EE09CAC2E20.7591754537315574

I have the following code....
my %ques_row = ();
my @army;
my $i = 0;
open (FH, "<file.txt") or die "Cannot open remove_duplicate file for reading: $!";
while (my $each_row = <FH>) {
    chomp($each_row);
    my @row = split(/\|/, $each_row);
    $ques_row{$row[0]}[$i] = $row[1];
    $i++;
}
close FH;
The problem with this code is that $i keeps increasing across every row of the file, so when the answers are stored in the hash of arrays, each question's array ends up with empty elements at all the unused index positions. How can I remove all of the empty elements from the arrays, or avoid creating them in the first place? (A sketch of one possible fix follows this post.)

Please help me to correct my logic or code.

Thanks,
Chiku
Aug 29 '08 #1
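
For reference, a minimal sketch of one way to avoid the empty elements entirely: push each answer onto the array for its question id instead of writing to index $i with a shared counter, then join each question's answers with "|" when printing. This assumes the input file is named file.txt and is laid out exactly as in the question; it is not necessarily the approach the poster ended up using.

#!/usr/bin/perl
use strict;
use warnings;

my %answers_for;   # question id => reference to array of its answer ids
my @order;         # question ids in the order they are first seen

open my $in, '<', 'file.txt'
    or die "Cannot open file.txt for reading: $!";

while ( my $line = <$in> ) {
    chomp $line;
    next if $line =~ /^\s*$/;                  # skip blank lines
    my ( $question, $answer ) = split /\|/, $line;
    next unless defined $answer;               # skip lines with no pipe

    # Remember the order in which question ids first appear.
    push @order, $question unless exists $answers_for{$question};

    # Append the answer; no shared counter, so no empty slots are created.
    push @{ $answers_for{$question} }, $answer;
}
close $in;

# Print one line per question id, with all of its answers joined by "|".
for my $question (@order) {
    print join( '|', $question, @{ $answers_for{$question} } ), "\n";
}

With the sample data above, this prints the first line unchanged and merges the 2nd and 3rd lines into a single line whose answer ids are separated by pipes.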
1 Reply
Hi,

I made it work using a different way. Sorry to bother.

Thanks,
Chiku
Aug 29 '08 #2
