I'm trying to build a couple of classes that would allow writing XSL
transforms against data that is not originally XML. I've got an XmlReader
implementation that seems to work well. Because of some issues between it and
transforms, I then wrapped it up in a custom XPathNavigator implementation.
Everything works fine from the transform-functionality perspective, but it's
not scaling well to larger amounts of data, and I would like to find out
whether these are framework limitations or the result of my own lack of
knowledge.
Specifically, when I pass my XPathNavigator object into the Transform method of
an XslTransform object along with a StreamWriter instance, I expect the
transform to stream the result to the StreamWriter without needing
to finish the entire transform first. I can see some issues depending
on the XSL options used, but in a perfect world, with a perfect stylesheet, is
this at all possible?

Also, I'm noticing that the XPathNavigator instance gets cloned
constantly, at least once per node, creating millions of instances during the
transform. Can someone explain this to me?
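For context, the call looks roughly like this. CustomNavigator is a
placeholder name for my own XPathNavigator subclass, and the file names are
made up; the Transform(XPathNavigator, XsltArgumentList, TextWriter) overload
is the one I'm using:

```csharp
using System.IO;
using System.Xml.XPath;
using System.Xml.Xsl;

class Example
{
    static void Run()
    {
        // CustomNavigator is a stand-in for my XPathNavigator
        // implementation over the non-XML data source.
        XPathNavigator nav = new CustomNavigator();

        XslTransform xslt = new XslTransform();
        xslt.Load("transform.xsl");

        // My expectation: output is written to the StreamWriter as the
        // transform progresses, rather than buffered until the whole
        // transform has completed.
        using (StreamWriter writer = new StreamWriter("output.txt"))
        {
            xslt.Transform(nav, null, writer);
        }
    }
}
```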