Performance of XSLT

Hello,

There were some posts about this, but nothing I found was useful.
I have a large XML file (80 MB) and need to extract certain information from it.
I thought I could use XSLT with a fairly simple transformation:
....
<xsl:for-each select="/values/STRING/item[I=10]">
  <tr class="own">
    <td><xsl:value-of select="A"/></td>
    <td><xsl:value-of select="B"/></td>
  </tr>
</xsl:for-each>
<tr class="header">
  <td><xsl:value-of select="format-number(sum(/values/STRING/item/A), '###,###')"/></td>
  <td><xsl:value-of select="format-number(sum(/values/STRING/item/B), '###,###')"/></td>
</tr>
....
but the performance is miserable (5-6 hours at least!).
How do I solve this problem? Is there a fast XML parser that can do
the job? After all, it's just a straightforward read of a file.

Kind Regards,
Chris

Feb 22 '07 #1
* starlight wrote in comp.text.xml:
>How do I solve this problem? Is there a fast XML parser that can do
>the job? After all, it's just a straightforward read of a file.
Well, which processor did you use up until now? Generally speaking, you
might want to try MSXML and Saxon; Saxon and xsltproc also allow you to
do some tracing and performance analysis. If you reduce the input size
and let them analyze the transformation, you might find out why it's so slow.
Other than that, you've shown too little of the transformation and the
document to give better advice.
--
Björn Höhrmann · mailto:bj****@hoehrmann.de · http://bjoern.hoehrmann.de
Weinh. Str. 22 · Telefon: +49(0)621/4309674 · http://www.bjoernsworld.de
68309 Mannheim · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/
Feb 22 '07 #2
On Feb 22, 11:17 am, "starlight" <hristoma...@yahoo.com>
wrote:
There were some posts about this, but nothing I found
was useful. I have a large XML file (80 MB) and need
to extract certain information from it. I thought I could
use XSLT with a fairly simple transformation:
[simple XSLT fragment]
but the performance is miserable (5-6 hours at
least!). How do I solve this problem? Is there a fast
XML parser that can do the job? After all, it's just a
straightforward read of a file.
What XSLT processor are you using? I ran a quick test on a
~50MB test file filled with junk data (with simple and
regular XML structure), copying one tenth of the records
with predicates and summing the values of all numeric
fields, using xsltproc (libxslt). It took about ten
seconds.

I think it has to be one of three possible problems:

- either your XSLT processor is really slow;
- or the stuff you're doing is quite a bit more complex
than your example suggests;
- or you're doing something very inefficiently.

--
Pavel Lepin

Feb 22 '07 #3
XSLT is a programming language. Like any language, its performance
depends on a combination of how well your code is written and how well
the processor can optimize it.

Your example, as written, scans through the entire document three times
-- once in the for-each, then twice in calculating the sums. You didn't
show the context, but if the sequence you've shown us was itself
embedded in another loop...
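
For what it's worth, here is a minimal sketch of that rework (assuming
the fragment quoted above; untested): evaluating the item path once into
a variable lets the loop and both sums reuse the same node-set instead
of re-walking the tree from the root each time. How much this helps
depends on how much caching your processor already does.

<!-- Evaluate /values/STRING/item once; reuse the node-set three times -->
<xsl:variable name="items" select="/values/STRING/item"/>

<xsl:for-each select="$items[I=10]">
  <tr class="own">
    <td><xsl:value-of select="A"/></td>
    <td><xsl:value-of select="B"/></td>
  </tr>
</xsl:for-each>
<tr class="header">
  <td><xsl:value-of select="format-number(sum($items/A), '###,###')"/></td>
  <td><xsl:value-of select="format-number(sum($items/B), '###,###')"/></td>
</tr>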

Also: Because XSLT supports random access to a document's contents, it
normally operates by reading the entire document into memory and
processing it there. With larger documents that can drive you into
swapping, at which point your PC's performance immediately falls through
the floor. Different processors use different in-memory models which can
exacerbate or reduce this problem. (A few, such as the custom XSLT
processor in the Datapower/IBM "network appliance", can perform some
streaming analysis and *seriously* reduce the read-process-write
overhead for a subset of XSLT; I'm not sure whether their algorithm
would stream this particular example or not.)

Outside of trying different processors in a search for one that's
happier with your example (I'd try Apache Xalan, but I'm biased...), the
thing I'd suggest is that you consider hand-coding this as a SAX
application. The example you've shown us, if that's all you're doing,
could indeed be fully streamed and would then be speed-limited only by
the parser, the serializer, and the rate at which you can get data into
and out of it... and ought to deliver the kind of performance you're
looking for. (Again, being biased, I'd suggest Apache Xerces as the
parser/serializer package if you're working in C or Java, but since SAX
is pretty well standardized you can fairly easily experiment with
different parsers if you want to spend the time on that.)
--
() ASCII Ribbon Campaign | Joe Kesselman
/\ Stamp out HTML e-mail! | System architexture and kinetic poetry
Feb 22 '07 #4
Good point, Pavel. Only 80MB? Unless the querent has a massively
inadequate or overloaded machine (which is possible), that really
shouldn't be a problem; by today's bloated standards that's a smallish
file. We're missing some information.

My suggestion of switching to a SAX-based approach might still make
sense, but I think it's appropriate to spend more time figuring out
where the bottleneck is in what was actually attempted.

--
() ASCII Ribbon Campaign | Joe Kesselman
/\ Stamp out HTML e-mail! | System architexture and kinetic poetry
Feb 22 '07 #5
On Feb 22, 3:40 pm, Joe Kesselman
<keshlam-nos...@comcast.net> wrote:
Only 80MB? Unless the querent has a massively inadequate
or overloaded machine (which is possible), that really
shouldn't be a problem
Well, I was running the test on one of our data-crunchers,
which is *both* inadequate and overloaded. Still wasn't a
problem.
My suggestion of switching to a SAX-based approach might still make
sense
It always does when the problem seems well-suited for
streaming solutions, doesn't it? Granted, typical XSLT
processors are not at all bad at that kind of stuff either,
but coding it in C using a fast SAX parser is probably
going to result in an order of magnitude increase in
performance. Which might be just what was needed for smooth
operation.

--
Pavel Lepin

Feb 22 '07 #6
In article <af******************************@comcast.com>,
Joe Kesselman <ke************@comcast.net> wrote:
>Good point, Pavel. Only 80MB?
80MB is not huge, but there's a big difference between 80MB of
lightly-marked-up text, and 80MB of <a>24</a><a>23.4</a><a>... In the
latter case, it could easily expand greatly when parsed.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Feb 22 '07 #7
Richard Tobin wrote:
80MB is not huge, but there's a big difference between 80MB of
lightly-marked-up text, and 80MB of <a>24</a><a>23.4</a><a>... In the
latter case, it could easily expand greatly when parsed.
Depends on what the underlying data model is -- which is why we invented
DTM for the Xalan processor; making every node a Java object would
indeed have been hugely wasteful of memory.

--
Joe Kesselman / Beware the fury of a patient man. -- John Dryden
Feb 22 '07 #8
On Feb 22, 6:18 pm, Joseph Kesselman <keshlam-nos...@comcast.net>
wrote:
Richard Tobin wrote:
80MB is not huge, but there's a big difference between 80MB of
lightly-marked-up text, and 80MB of <a>24</a><a>23.4</a><a>... In the
latter case, it could easily expand greatly when parsed.

Depends on what the underlying data model is -- which is why we invented
DTM for the Xalan processor; making every node a Java object would
indeed have been hugely wasteful of memory.
Hi, sorry for the late reply!
Richard, that's exactly the problem!

I have 80 MB of XML with the following structure:
....
<suppliers>
  <item>
    <ID>21</ID>
    <N>Super Duper Computer store</N>
    <A>24</A>
    <B>18</B>
    <Z>1</Z>
  </item>
  <item>
    <ID>21</ID>
    <N>Get 1 Pay 2 Computer store</N>
    <A>24</A>
    <B>18</B>
    <Z>2</Z>
  </item>
  ....
</suppliers>
....
<articles>
  <item>
    <ID>3</ID>
    <SID>21</SID>
    <A>24</A>
    <B>18</B>
  </item>
  <item>
    <ID>4</ID>
    <SID>22</SID>
    <A>24</A>
    <B>16</B>
  </item>
  ....
</articles>
....
I'm (ahem... was) using MSXML DOM.
The weird thing is that I don't know how to deal with the problem. Here
is what I am supposed to do:
- Find all suppliers (in 90% of cases only one) for an article. To do
this, use <SID> in "articles", which corresponds to <ID> in
"suppliers", but only for those where <Z> in "suppliers" has the value 2.
(<A> and <B> in "articles" are the prices.)
I didn't invent the XML! It's weird!!

Now I don't know how to deal with the case. I tried SAX and DOM, but
the code got ugly so fast that I gave up yesterday. XPath sounded like
a good option, but its performance is dreadful. ...and by the
way, when using DOM with Java, I got rid of the OutOfMemoryError
when I set the JVM max memory to 1024 MB.
Any ideas?

Feb 24 '07 #9
starlight wrote:
- Find all suppliers (in 90% of cases only one) for an article. To do
this, use <SID> in "articles", which corresponds to <ID> in
"suppliers", but only for those where <Z> in "suppliers" has the value 2.
(<A> and <B> in "articles" are the prices.)
The items in your XML data seem to be a simple
list of "records". This kind of data can be
processed efficiently with languages that map
the SAX approach to their internal control flow.
One example of this kind of XML processing is
described in a small booklet that we wrote for
XMLgawk, the XML extension of GNU Awk. XMLgawk
is known to process large numbers of large files
in a short time. But you won't get a DOM:

http://home.vrweb.de/~juergen.kahrs/...with-XML-paths
way, when using DOM with Java, I got rid of the OutOfMemoryError
when I set the JVM max memory to 1024 MB.
Any ideas?
Try XMLgawk. 80 MB of the data that you describe should
be processed in less than one minute (assuming the
algorithm is as simple as what you described).
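
If staying in XSLT is also an option, the cross-reference itself can be
made cheap with xsl:key, which builds the supplier index once instead of
rescanning "suppliers" for every article. Here is a minimal sketch,
assuming the element names from the sample above (untested against the
real data); most processors implement key() as a hash lookup, so each
article then costs one lookup rather than a full scan:

<!-- Index only the suppliers whose Z element has the value 2 -->
<xsl:key name="supplier-by-id" match="suppliers/item[Z='2']" use="ID"/>

<xsl:template match="articles/item">
  <!-- Remember the article; inside the for-each the supplier is current -->
  <xsl:variable name="article" select="."/>
  <xsl:for-each select="key('supplier-by-id', SID)">
    <xsl:value-of select="$article/ID"/>
    <xsl:text>;</xsl:text>
    <xsl:value-of select="N"/>
    <xsl:text>;</xsl:text>
    <xsl:value-of select="$article/A"/>
    <xsl:text>;</xsl:text>
    <xsl:value-of select="$article/B"/>
    <xsl:text>&#10;</xsl:text>
  </xsl:for-each>
</xsl:template>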
Feb 24 '07 #10
