I'm frequently running into performance issues when I use XSLT to transform large amounts of data into HTML. This data is usually just a couple of very large tables of roughly this form:
<table>
  <record>
    <group>1</group>
    <data>abc</data>
  </record>
  <record>
    <group>1</group>
    <data>def</data>
  </record>
  <record>
    <group>2</group>
    <data>ghi</data>
  </record>
</table>
During transformation, I want to visually group the records like this:
+--------------+
| Group 1      |
+--------------+
| abc          |
| def          |
+--------------+
| Group 2      |
+--------------+
| ghi          |
+--------------+
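For reference, the HTML output I'm after is roughly of this shape (simplified sketch; the real markup has many more columns, so the group row actually spans them, but the structure here is only illustrative):
<table>
  <tr><th>Group 1</th></tr>
  <tr><td>abc</td></tr>
  <tr><td>def</td></tr>
  <tr><th>Group 2</th></tr>
  <tr><td>ghi</td></tr>
</table>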
A silly implementation looks like this (set is the EXSLT set extension from http://exslt.org; the actual implementation is a bit different, this is just an example):
<xsl:for-each select="set:distinct(/table/record/group)">
  <xsl:variable name="group" select="."/>
  <!-- This access needs to be made faster: -->
  <xsl:for-each select="/table/record[group = $group]">
    <!-- Do the table stuff -->
  </xsl:for-each>
</xsl:for-each>
It's easy to see that this tends toward O(n^2) complexity: for every distinct group, the inner for-each scans all records again, so 5000 records split into 5000 groups already mean roughly 25 million comparisons. It gets even worse because there are lots of fields in every record. The data operated on can reach several dozen MB, and the number of records can go up to 5000. In the worst case, every record has its own group and 50 fields. And to make things even worse, there is yet another possible level of grouping, making this O(n^3).
Now there are quite a few options:
- I could find a Java solution to this involving maps and nested data structures. But I want to improve my XSLT skills, so that's actually the last option.
- Maybe I'm simply unaware of a nice feature in Xerces/Xalan/EXSLT that can handle grouping much better.
- I could maybe build an index of some sort for /table/record/group (see the sketch after this list).
- You could prove to me that the <xsl:apply-templates/> approach is decidedly faster in this use case than the <xsl:for-each/> approach.
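To make the index option a bit more concrete, here is an untested sketch of what I imagine such an index could look like in XSLT 1.0, using <xsl:key/> and the Muenchian grouping idiom (the key name and the template modes are just placeholders, and it also happens to use <xsl:apply-templates/> instead of <xsl:for-each/>):

<!-- Index all records by the string value of their group element -->
<xsl:key name="records-by-group" match="record" use="group"/>

<xsl:template match="table">
  <table>
    <!-- Select one representative record per distinct group -->
    <xsl:apply-templates mode="group-header"
        select="record[generate-id() = generate-id(key('records-by-group', group)[1])]"/>
  </table>
</xsl:template>

<xsl:template match="record" mode="group-header">
  <tr><th>Group <xsl:value-of select="group"/></th></tr>
  <!-- All members of this group come from the key lookup,
       not from a predicate scan over all of /table/record -->
  <xsl:apply-templates mode="group-member"
      select="key('records-by-group', group)"/>
</xsl:template>

<xsl:template match="record" mode="group-member">
  <tr><td><xsl:value-of select="data"/></td></tr>
</xsl:template>

Whether Xalan actually turns key() into something like a hash lookup is exactly the kind of thing I don't know, which is part of why I'm asking.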
So, how do you think this O(n^2) complexity can be reduced?