lxml.etree is a very fast XML library. Most of this is due to the speed of libxml2, e.g. the parser and serialiser, or the XPath engine. Other areas of lxml were specifically written for high performance in high-level operations, such as the tree iterators.
On the other hand, the simplicity of lxml sometimes hides internal operations that are more costly than the API suggests. If you are not aware of these cases, lxml may not always perform as you expect. A common example in the Python world is the Python list type. New users often expect it to be a linked list, while it actually is implemented as an array, which results in a completely different complexity for common operations.
Similarly, the tree model of libxml2 is more complex than what lxml's ElementTree API projects into Python space, so some operations may show unexpected performance. Rest assured that most lxml users will not notice this in real life, as lxml is very fast in absolute numbers. It is definitely fast enough for most applications, so lxml is probably somewhere between 'fast enough' and 'the best choice' for yours. Read some messages from happy users to see what we mean.
This text describes where lxml.etree (abbreviated to 'lxe') excels, gives hints on some performance traps and compares the overall performance to the original ElementTree (ET) and cElementTree (cET) libraries by Fredrik Lundh. The cElementTree library is a fast C-implementation of the original ElementTree.
First thing to say: there is an overhead involved in having a DOM-like C library mimic the ElementTree API. As opposed to ElementTree, lxml has to generate Python representations of tree nodes on the fly when asked for them, and the internal tree structure of libxml2 results in a higher maintenance overhead than the simpler top-down structure of ElementTree. What this means is: the more of your code runs in Python, the less you can benefit from the speed of lxml and libxml2. Note, however, that this is true for most performance-critical Python applications. No one would implement Fourier transforms in pure Python when NumPy is available.
The upside is that lxml provides powerful tools like tree iterators, XPath and XSLT that can handle complex operations at the speed of C. Their pythonic API in lxml makes them so flexible that most applications can easily benefit from them.
The statements made here are backed by the (micro-)benchmark scripts bench_etree.py, bench_xpath.py and bench_objectify.py that come with the lxml source distribution. They are distributed under the same BSD license as lxml itself, and the lxml project would like to promote them as a general benchmarking suite for all ElementTree implementations. New benchmarks are very easy to add as tiny test methods, so if you write a performance test for a specific part of the API yourself, please consider sending it to the lxml mailing list.
The timings cited below compare lxml 2.2 (with libxml2 2.7.3) to the February 2009 SVN versions of ElementTree (1.3alpha2) and cElementTree (1.0.6). They were run single-threaded on a 1.8 GHz Intel Core Duo machine under Ubuntu Linux 8.10 (Intrepid). The C libraries were compiled with the same platform-specific optimisation flags. The Python interpreter (2.6.1) was manually compiled for the platform. Note that many of the following ElementTree timings are therefore better than what a normal Python installation with the standard library (c)ElementTree modules would yield.
The scripts run a number of simple tests on the different libraries, using different XML tree configurations: different tree sizes (T1-4), with or without attributes (-/A), without text, with ASCII string text or with unicode text (-/S/U), and either against a tree or its serialised XML form (T/X). In the result extracts cited below, T1 refers to a 3-level tree with many children at the third level, T2 is swapped around to have many children below the root element, T3 is a deep tree with few children at each level and T4 is a small tree, slightly broader than deep. If repetition is involved, this usually means running the benchmark in a loop over all children of the tree root, otherwise, the operation is run on the root node (C/R).
As an example, the character code (SATR T1) states that the benchmark was run against tree T1, with plain string text (S) and attributes (A). It was run against the root element (R), on the in-memory tree structure of the data (T) rather than its serialised form.
Note that very small operations are repeated in integer loops to make them measurable. It is therefore not always possible to compare the absolute timings of, say, a single access benchmark (which usually loops) and a 'get all in one step' benchmark, which already takes enough time to be measurable and is therefore measured as is. An example is the index access to a single child, which cannot be compared to the timings for getchildren(). Take a look at the concrete benchmarks in the scripts to understand how the numbers compare.
Serialisation is an area where lxml excels. The reason is that it executes entirely at the C level, without any interaction with Python code. The results are rather impressive, especially for UTF-8, which is native to libxml2. While 20 to 40 times faster than (c)ElementTree 1.2 (which has been part of the standard library since Python 2.5), lxml is still more than 7 times as fast as the much improved ElementTree 1.3:
lxe: tostring_utf16 (SATR T1)   22.4042 msec/pass
cET: tostring_utf16 (SATR T1)  184.5090 msec/pass
ET : tostring_utf16 (SATR T1)  182.4350 msec/pass
lxe: tostring_utf16 (UATR T1)   23.1769 msec/pass
cET: tostring_utf16 (UATR T1)  188.6780 msec/pass
ET : tostring_utf16 (UATR T1)  186.7781 msec/pass
lxe: tostring_utf16 (S-TR T2)   21.8501 msec/pass
cET: tostring_utf16 (S-TR T2)  200.0139 msec/pass
ET : tostring_utf16 (S-TR T2)  190.8720 msec/pass
lxe: tostring_utf8  (S-TR T2)   17.1690 msec/pass
cET: tostring_utf8  (S-TR T2)  192.3709 msec/pass
ET : tostring_utf8  (S-TR T2)  189.7140 msec/pass
lxe: tostring_utf8  (U-TR T3)    4.9832 msec/pass
cET: tostring_utf8  (U-TR T3)   60.2911 msec/pass
ET : tostring_utf8  (U-TR T3)   57.8101 msec/pass
The same applies to plain text serialisation. Note that cElementTree does not currently support this, as it is a new feature in ET 1.3 and lxml.etree 2.0:
lxe: tostring_text_ascii (S-TR T1)    4.3709 msec/pass
ET : tostring_text_ascii (S-TR T1)   83.9939 msec/pass
lxe: tostring_text_ascii (S-TR T3)    1.3590 msec/pass
ET : tostring_text_ascii (S-TR T3)   26.6340 msec/pass
lxe: tostring_text_utf16 (S-TR T1)    6.2978 msec/pass
ET : tostring_text_utf16 (S-TR T1)   84.7399 msec/pass
lxe: tostring_text_utf16 (U-TR T1)    7.7510 msec/pass
ET : tostring_text_utf16 (U-TR T1)   79.9279 msec/pass
Unlike ElementTree, the tostring() function in lxml also supports serialisation to a Python unicode string object:
lxe: tostring_text_unicode (S-TR T1)    4.6940 msec/pass
lxe: tostring_text_unicode (U-TR T1)    6.3069 msec/pass
lxe: tostring_text_unicode (S-TR T3)    1.3652 msec/pass
lxe: tostring_text_unicode (U-TR T3)    2.0702 msec/pass
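For illustration, here is a rough sketch of the tostring() calls that the benchmarks above exercise. The tiny document is made up, and the encoding=unicode spelling matches the Python 2 setup used for these timings:

from lxml import etree

root = etree.XML("<root><child>some text</child></root>")  # hypothetical tiny tree

utf8_bytes  = etree.tostring(root, encoding='UTF-8')    # encoded byte string
utf16_bytes = etree.tostring(root, encoding='UTF-16')   # encoded byte string
text_only   = etree.tostring(root, method='text', encoding=unicode)  # plain text as a Python unicode string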
For parsing, on the other hand, the advantage is clearly with cElementTree. The (c)ET libraries use a very thin layer on top of the expat parser, which is known to be extremely fast:
lxe: parse_stringIO (SAXR T1)   50.0100 msec/pass
cET: parse_stringIO (SAXR T1)   19.3238 msec/pass
ET : parse_stringIO (SAXR T1)  318.2330 msec/pass
lxe: parse_stringIO (S-XR T3)    6.1851 msec/pass
cET: parse_stringIO (S-XR T3)    5.7080 msec/pass
ET : parse_stringIO (S-XR T3)   83.5931 msec/pass
lxe: parse_stringIO (UAXR T3)   34.4319 msec/pass
cET: parse_stringIO (UAXR T3)   28.8520 msec/pass
ET : parse_stringIO (UAXR T3)  164.5968 msec/pass
While lxml is about as fast as cET for smaller documents, the expat parser allows cET to be up to two times faster than lxml on plain parser performance for large input documents. Similar timings can be observed for the iterparse() function:
lxe: iterparse_stringIO (SAXR T1)   57.8308 msec/pass
cET: iterparse_stringIO (SAXR T1)   23.8140 msec/pass
ET : iterparse_stringIO (SAXR T1)  349.5209 msec/pass
lxe: iterparse_stringIO (UAXR T3)   37.2162 msec/pass
cET: iterparse_stringIO (UAXR T3)   30.2329 msec/pass
ET : iterparse_stringIO (UAXR T3)  171.4060 msec/pass
However, if you benchmark the complete round-trip of a serialise-parse cycle, the numbers will look similar to these:
lxe: write_utf8_parse_stringIO (S-TR T1)   60.2388 msec/pass
cET: write_utf8_parse_stringIO (S-TR T1)  314.9750 msec/pass
ET : write_utf8_parse_stringIO (S-TR T1)  616.4260 msec/pass
lxe: write_utf8_parse_stringIO (UATR T2)   71.7540 msec/pass
cET: write_utf8_parse_stringIO (UATR T2)  364.4099 msec/pass
ET : write_utf8_parse_stringIO (UATR T2)  684.5109 msec/pass
lxe: write_utf8_parse_stringIO (S-TR T3)   10.7441 msec/pass
cET: write_utf8_parse_stringIO (S-TR T3)  103.3869 msec/pass
ET : write_utf8_parse_stringIO (S-TR T3)  179.5921 msec/pass
lxe: write_utf8_parse_stringIO (SATR T4)    1.1981 msec/pass
cET: write_utf8_parse_stringIO (SATR T4)    7.0901 msec/pass
ET : write_utf8_parse_stringIO (SATR T4)   10.4899 msec/pass
For applications that require a high parser throughput of large files, and that do little to no serialisation, cET is the best choice. It is also the better choice for iterparse applications that extract small amounts of data or aggregate information from large XML data sets that do not fit into memory. When it comes to round-trip performance, however, lxml tends to be multiple times faster in total. So, whenever the input documents are not considerably larger than the output, lxml is the clear winner.
Regarding HTML parsing, Ian Bicking has done some benchmarking on lxml's HTML parser, comparing it to a number of other famous HTML parser tools for Python. lxml wins this contest by quite a length. To give an idea, the numbers suggest that lxml.html can run a couple of parse-serialise cycles in the time that other tools need for parsing alone. The comparison even shows some very favourable results regarding memory consumption.
Liza Daly has written an article that presents a couple of tweaks to get the most out of lxml's parser for very large XML documents. She quite favourably positions lxml.etree as a tool for high-performance XML parsing.
Finally, xml.com has a couple of publications about XML parser performance. Farwick and Hafner have written two interesting articles that compare the parser of libxml2 to some major Java-based XML parsers. One deals with event-driven parser performance, the other presents benchmark results comparing DOM parsers. Both comparisons suggest that libxml2's parser performance is largely superior to all commonly used Java parsers in almost all cases. Note that the C parser benchmark results are based on xmlbench, which uses a simpler setup for libxml2 than lxml does.
Since all three libraries implement the same API, their performance is easy to compare in this area. A major disadvantage for lxml's performance is the different tree model that underlies libxml2. It allows lxml to provide parent pointers for elements and full XPath support, but also increases the overhead of tree building and restructuring. This can be seen from the tree setup times of the benchmark (given in seconds):
lxe:      --      S-      U-      -A      SA      UA
T1:   0.0502  0.0572  0.0613  0.0494  0.0575  0.0615
T2:   0.0602  0.0691  0.0747  0.0651  0.0745  0.0796
T3:   0.0145  0.0157  0.0176  0.0392  0.0411  0.0415
T4:   0.0003  0.0003  0.0003  0.0008  0.0008  0.0008

cET:      --      S-      U-      -A      SA      UA
T1:   0.0092  0.0094  0.0094  0.0094  0.0096  0.0093
T2:   0.0152  0.0151  0.0152  0.0156  0.0154  0.0154
T3:   0.0079  0.0080  0.0079  0.0106  0.0107  0.0134
T4:   0.0001  0.0001  0.0001  0.0001  0.0001  0.0001

ET :      --      S-      U-      -A      SA      UA
T1:   0.1017  0.1715  0.1962  0.1080  0.2470  0.1049
T2:   0.3130  0.3324  0.1130  0.3897  0.1158  0.4246
T3:   0.0341  0.0323  0.0338  0.0358  0.3965  0.0359
T4:   0.0006  0.0005  0.0006  0.0006  0.0007  0.0006
While lxml is still a lot faster than ET in most cases, cET can be up to five times faster than lxml here. One of the reasons is that lxml must encode incoming string data and tag names into UTF-8, and additionally discard the created Python elements after their use, when they are no longer referenced. ET and cET represent the tree itself through these objects, which reduces the overhead in creating them.
The same reason makes operations like collecting children as in list(element) more costly in lxml. Where ET and cET can quickly create a shallow copy of their list of children, lxml has to create a Python object for each child and collect them in a list:
lxe: root_list_children (--TR T1)    0.0148 msec/pass
cET: root_list_children (--TR T1)    0.0050 msec/pass
ET : root_list_children (--TR T1)    0.0219 msec/pass
lxe: root_list_children (--TR T2)    0.1719 msec/pass
cET: root_list_children (--TR T2)    0.0260 msec/pass
ET : root_list_children (--TR T2)    0.3390 msec/pass
This handicap is also visible when accessing single children:
lxe: first_child (--TR T2)    0.1879 msec/pass
cET: first_child (--TR T2)    0.1760 msec/pass
ET : first_child (--TR T2)    0.8099 msec/pass
lxe: last_child  (--TR T1)    0.1910 msec/pass
cET: last_child  (--TR T1)    0.1872 msec/pass
ET : last_child  (--TR T1)    0.8099 msec/pass
... unless you also add the time to find a child index in a bigger list. ET and cET use Python lists here, which are based on arrays. The data structure used by libxml2 is a linked tree, and thus, a linked list of children:
lxe: middle_child (--TR T1)    0.2189 msec/pass
cET: middle_child (--TR T1)    0.1779 msec/pass
ET : middle_child (--TR T1)    0.8030 msec/pass
lxe: middle_child (--TR T2)    2.4071 msec/pass
cET: middle_child (--TR T2)    0.1781 msec/pass
ET : middle_child (--TR T2)    0.8039 msec/pass
As opposed to ET, libxml2 has a notion of documents that each element must be in. This results in a major performance difference for creating independent Elements that end up in independently created documents:
lxe: create_elements (--TC T2)    2.1949 msec/pass
cET: create_elements (--TC T2)    0.1941 msec/pass
ET : create_elements (--TC T2)    1.2760 msec/pass
Therefore, it is always preferable to create Elements for the document they are supposed to end up in, either as SubElements of an Element or using the explicit Element.makeelement() call:
lxe: makeelement        (--TC T2)    1.8370 msec/pass
cET: makeelement        (--TC T2)    0.3200 msec/pass
ET : makeelement        (--TC T2)    1.5380 msec/pass
lxe: create_subelements (--TC T2)    1.6761 msec/pass
cET: create_subelements (--TC T2)    0.2329 msec/pass
ET : create_subelements (--TC T2)    3.0999 msec/pass
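To make the three creation patterns concrete, here is a minimal sketch (with invented tag names) of what the benchmarks above measure:

from lxml import etree

root = etree.Element("root")

# slowest in lxml: an independent Element() lives in its own (hidden) document,
# so the later append() has to move it into root's document
child = etree.Element("child")
root.append(child)

# better: let the target document create the element directly
child = root.makeelement("child")
root.append(child)

# best: create and attach in one step
child = etree.SubElement(root, "child")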
So, if the main performance bottleneck of an application is creating large XML trees in memory through calls to Element and SubElement, cET is the best choice. Note, however, that the serialisation performance may even out this advantage, especially for smaller trees and trees with many attributes.
A critical action for lxml is moving elements between document contexts. It requires lxml to do recursive adaptations throughout the moved tree structure.
The following benchmark appends all root children of the second tree to the root of the first tree:
lxe: append_from_document (--TR T1,T2)    3.4299 msec/pass
cET: append_from_document (--TR T1,T2)    0.2639 msec/pass
ET : append_from_document (--TR T1,T2)    1.1489 msec/pass
lxe: append_from_document (--TR T3,T4)    0.0429 msec/pass
cET: append_from_document (--TR T3,T4)    0.0169 msec/pass
ET : append_from_document (--TR T3,T4)    0.0780 msec/pass
Although these are fairly small numbers compared to parsing, this easily shows the different performance classes for lxml and (c)ET. Where the latter do not have to care about parent pointers and tree structures, lxml has to deep traverse the appended tree. The performance difference therefore increases with the size of the tree that is moved.
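As a rough sketch of what the append_from_document benchmark does (the two documents here are invented), moving the children of one tree into another looks like this:

from lxml import etree

root1 = etree.XML("<root1><a/><b/><c/></root1>")   # hypothetical documents
root2 = etree.XML("<root2><x/><y/><z/></root2>")

# each append() moves the element into root1's document: lxml has to adapt
# parent pointers, namespaces and document references throughout the moved subtree
for child in list(root2):
    root1.append(child)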
This difference is not always as visible, but applies to most parts of the API, like inserting newly created elements:
lxe: insert_from_document (--TR T1,T2)    6.1119 msec/pass
cET: insert_from_document (--TR T1,T2)    0.4129 msec/pass
ET : insert_from_document (--TR T1,T2)    1.4160 msec/pass
or replacing the child slice by a newly created element:
lxe: replace_children_element (--TC T1)    0.1769 msec/pass
cET: replace_children_element (--TC T1)    0.0250 msec/pass
ET : replace_children_element (--TC T1)    0.1538 msec/pass
as opposed to replacing the slice with an existing element from the same document:
lxe: replace_children (--TC T1)    0.0169 msec/pass
cET: replace_children (--TC T1)    0.0119 msec/pass
ET : replace_children (--TC T1)    0.0758 msec/pass
While these numbers are too small to provide a major performance impact in practice, you should keep this difference in mind when you merge very large trees.
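In case the difference between the last two benchmarks is unclear, here is a small sketch of the two slice assignments (tag names invented). The first replacement element is newly created and must be moved into root's document, the second one already lives there:

from lxml import etree

root = etree.XML("<root><a/><b/><c/></root>")

# replace_children_element: new element from a separate (hidden) document
root[:] = [etree.Element("new")]

# replace_children: element that already belongs to the same document
existing = root[0]
root[:] = [existing]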
Deep copying a tree is fast in lxml:
lxe: deepcopy_all (--TR T1)    10.0670 msec/pass
cET: deepcopy_all (--TR T1)   115.8700 msec/pass
ET : deepcopy_all (--TR T1)   866.8201 msec/pass
lxe: deepcopy_all (-ATR T2)    12.4321 msec/pass
cET: deepcopy_all (-ATR T2)   130.1000 msec/pass
ET : deepcopy_all (-ATR T2)   901.1638 msec/pass
lxe: deepcopy_all (S-TR T3)     2.6951 msec/pass
cET: deepcopy_all (S-TR T3)    28.9950 msec/pass
ET : deepcopy_all (S-TR T3)   218.7109 msec/pass
So, for example, if you have a database-like scenario where you parse in a large tree and then search and copy independent subtrees from it for further processing, lxml is by far the best choice here.
Another area where lxml is very fast is iteration for tree traversal. If your algorithms can benefit from step-by-step traversal of the XML tree and especially if few elements are of interest or the target element tag name is known, lxml is a good choice:
lxe: getiterator_all     (--TR T1)    4.7209 msec/pass
cET: getiterator_all     (--TR T1)   45.8400 msec/pass
ET : getiterator_all     (--TR T1)   22.9480 msec/pass
lxe: getiterator_islice  (--TR T2)    0.0398 msec/pass
cET: getiterator_islice  (--TR T2)    0.3798 msec/pass
ET : getiterator_islice  (--TR T2)    0.1900 msec/pass
lxe: getiterator_tag     (--TR T2)    0.0160 msec/pass
cET: getiterator_tag     (--TR T2)    0.8149 msec/pass
ET : getiterator_tag     (--TR T2)    0.3560 msec/pass
lxe: getiterator_tag_all (--TR T2)    0.6580 msec/pass
cET: getiterator_tag_all (--TR T2)   46.3769 msec/pass
ET : getiterator_tag_all (--TR T2)   20.3989 msec/pass
This translates directly into similar timings for Element.findall():
lxe: findall     (--TR T2)    6.7198 msec/pass
cET: findall     (--TR T2)   51.2750 msec/pass
ET : findall     (--TR T2)   26.9110 msec/pass
lxe: findall     (--TR T3)    1.4520 msec/pass
cET: findall     (--TR T3)   14.2760 msec/pass
ET : findall     (--TR T3)    8.4310 msec/pass
lxe: findall_tag (--TR T2)    0.7401 msec/pass
cET: findall_tag (--TR T2)   46.5961 msec/pass
ET : findall_tag (--TR T2)   20.3760 msec/pass
lxe: findall_tag (--TR T3)    0.3331 msec/pass
cET: findall_tag (--TR T3)   11.5960 msec/pass
ET : findall_tag (--TR T3)    5.4510 msec/pass
Note that all three libraries currently use the same Python implementation for .findall(), except for their native tree iterator (element.iter()).
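As a usage sketch, tag-filtered iteration and the equivalent findall() call look roughly like this (the tag names are invented):

from lxml import etree

root = etree.XML("<root><item>1</item><other/><item>2</item></root>")

# iterates at C level, instantiating only the matching elements in Python
for element in root.iter("item"):
    print(element.text)

# same selection through the (Python-implemented) findall() path API
items = root.findall(".//item")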
The following timings are based on the benchmark script bench_xpath.py.
This part of lxml does not have an equivalent in ElementTree. However, lxml provides more than one way of accessing it, and you should take care which part of the lxml API you use. The most straightforward way is to call the xpath() method on an Element or ElementTree:
lxe: xpath_method (--TC T1)    1.5750 msec/pass
lxe: xpath_method (--TC T2)   20.9570 msec/pass
lxe: xpath_method (--TC T3)    0.1199 msec/pass
lxe: xpath_method (--TC T4)    1.0121 msec/pass
This is well suited for testing and when the XPath expressions are as diverse as the trees they are called on. However, if you have a single XPath expression that you want to apply to a larger number of different elements, the XPath class is the most efficient way to do it:
lxe: xpath_class (--TC T1)    0.6301 msec/pass
lxe: xpath_class (--TC T2)    2.6128 msec/pass
lxe: xpath_class (--TC T3)    0.0498 msec/pass
lxe: xpath_class (--TC T4)    0.1400 msec/pass
Note that this still allows you to use variables in the expression, so you can parse it once and then adapt it through variables at call time. In other cases, where you have a fixed Element or ElementTree and want to run different expressions on it, you should consider the XPathEvaluator:
lxe: xpath_element (--TR T1)    0.2739 msec/pass
lxe: xpath_element (--TR T2)   10.8800 msec/pass
lxe: xpath_element (--TR T3)    0.0660 msec/pass
lxe: xpath_element (--TR T4)    0.2739 msec/pass
While it looks slightly slower, creating an XPath object for each of the expressions generates a much higher overhead here:
lxe: xpath_class_repeat (--TC T1)    1.5399 msec/pass
lxe: xpath_class_repeat (--TC T2)   20.5159 msec/pass
lxe: xpath_class_repeat (--TC T3)    0.1178 msec/pass
lxe: xpath_class_repeat (--TC T4)    0.9880 msec/pass
... based on lxml 1.3.
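In code, the three access paths compared above look roughly like this (the expression and element names are only examples):

from lxml import etree

root = etree.XML("<root><a><b>x</b></a><a><b>y</b></a></root>")

# 1) ad-hoc evaluation: the expression is compiled and evaluated on each call
results = root.xpath("//b[text() = $value]", value="x")

# 2) compiled expression, reusable across many elements and trees;
#    variables let you adapt it at call time without recompiling
find_b = etree.XPath("//b[text() = $value]")
results = find_b(root, value="x")

# 3) evaluator bound to one tree, good for running many different expressions on it
evaluator = etree.XPathEvaluator(root)
results = evaluator("//b")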
A while ago, Uche Ogbuji posted a benchmark proposal that would read in a 3MB XML version of the Old Testament of the Bible and look for the word begat in all verses. Apparently, it is contained in 120 out of almost 24000 verses. This is easy to implement in ElementTree using findall(). However, the fastest and most memory friendly way to do this is obviously iterparse(), as most of the data is not of any interest.
Now, Uche's original proposal was more or less the following:
def bench_ET():
    tree = ElementTree.parse("ot.xml")
    result = []
    for v in tree.findall("//v"):
        text = v.text
        if 'begat' in text:
            result.append(text)
    return len(result)
which takes about one second on my machine today. The faster iterparse() variant looks like this:
def bench_ET_iterparse():
    result = []
    for event, v in ElementTree.iterparse("ot.xml"):
        if v.tag == 'v':
            text = v.text
            if 'begat' in text:
                result.append(text)
        v.clear()   # free the element again to keep memory usage low
    return len(result)
The improvement is about 10%. At the time I first tried (early 2006), lxml didn't have iterparse() support, but the findall() variant was already faster than ElementTree. This changes immediately when you switch to cElementTree. The latter only needs 0.17 seconds to do the trick today, and an impressive 0.10 seconds when running the iterparse version. And even back then, it was quite a bit faster than what lxml could achieve.
Since then, lxml has matured a lot and has gotten much faster. The iterparse variant now runs in 0.14 seconds, and if you remove the v.clear(), it is even a little faster (which isn't the case for cElementTree).
One of the many great tools in lxml is XPath, a Swiss army knife for finding things in XML documents. It is possible to move the whole thing to a pure XPath implementation, which looks like this:
def bench_lxml_xpath_all():
    tree = etree.parse("ot.xml")
    result = tree.xpath("//v[contains(., 'begat')]/text()")
    return len(result)
This runs in about 0.13 seconds and is about the shortest possible implementation (in lines of Python code) that I could come up with. Now, this is already a rather complex XPath expression compared to the simple "//v" ElementPath expression we started with. Since this is also valid XPath, let's try this instead:
def bench_lxml_xpath():
    tree = etree.parse("ot.xml")
    result = []
    for v in tree.xpath("//v"):
        text = v.text
        if 'begat' in text:
            result.append(text)
    return len(result)
This gets us down to 0.12 seconds, thus showing that a generic XPath evaluation engine cannot always compete with a simpler, tailored solution. However, since this is not much different from the original findall variant, we can remove the complexity of the XPath call completely and just go with what we had in the beginning. Under lxml, this runs in the same 0.12 seconds.
But there is one thing left to try. We can replace the simple ElementPath expression with a native tree iterator:
def bench_lxml_getiterator():
    tree = etree.parse("ot.xml")
    result = []
    for v in tree.getiterator("v"):
        text = v.text
        if 'begat' in text:
            result.append(text)
    return len(result)
This implements the same thing, just without the overhead of parsing and evaluating a path expression. And this makes it another bit faster, down to 0.11 seconds. For comparison, cElementTree runs this version in 0.17 seconds.
So, what have we learned?
The following timings are based on the benchmark script bench_objectify.py.
Objectify is a data-binding API for XML based on lxml.etree, that was added in version 1.1. It uses standard Python attribute access to traverse the XML tree. It also features ObjectPath, a fast path language based on the same meme.
Just like lxml.etree, lxml.objectify creates Python representations of elements on the fly. To save memory, the normal Python garbage collection mechanisms will discard them when their last reference is gone. In cases where deeply nested elements are frequently accessed through the objectify API, the create-discard cycles can become a bottleneck, as elements have to be instantiated over and over again.
ObjectPath can be used to speed up the access to elements that are deep in the tree. It avoids step-by-step Python element instantiations along the path, which can substantially improve the access time:
lxe: attribute        (--TR T1)    6.9990 msec/pass
lxe: attribute        (--TR T2)   29.2060 msec/pass
lxe: attribute        (--TR T4)    6.9048 msec/pass
lxe: objectpath       (--TR T1)    3.5410 msec/pass
lxe: objectpath       (--TR T2)   24.9801 msec/pass
lxe: objectpath       (--TR T4)    3.5069 msec/pass
lxe: attributes_deep  (--TR T1)   16.9580 msec/pass
lxe: attributes_deep  (--TR T2)   39.8140 msec/pass
lxe: attributes_deep  (--TR T4)   16.9699 msec/pass
lxe: objectpath_deep  (--TR T1)    9.4180 msec/pass
lxe: objectpath_deep  (--TR T2)   31.7512 msec/pass
lxe: objectpath_deep  (--TR T4)    9.4421 msec/pass
Note, however, that parsing ObjectPath expressions is not for free either, so this is most effective for frequently accessing the same element.
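As an illustration, here is roughly what the two access styles look like in lxml.objectify (the element names are invented):

from lxml import objectify

root = objectify.fromstring("<root><a><b><c>42</c></b></a></root>")

# normal attribute access: instantiates a Python proxy object for each step
value = root.a.b.c

# ObjectPath: parse the path once, then apply it to many elements
find_c = objectify.ObjectPath("root.a.b.c")
value = find_c(root)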
A way to improve the normal attribute access time is static instantiation of the Python objects, thus trading memory for speed. Just create a cache dictionary and run:
cache[root] = list(root.iter())
after parsing and:

del cache[root]
when you are done with the tree. This will keep the Python element representations of all elements alive and thus avoid the overhead of repeated Python object creation. You can also consider using filters or generator expressions to be more selective. By choosing the right trees (or even subtrees and elements) to cache, you can trade memory usage against access speed:
lxe: attribute_cached        (--TR T1)    5.1420 msec/pass
lxe: attribute_cached        (--TR T2)   27.0739 msec/pass
lxe: attribute_cached        (--TR T4)    5.1429 msec/pass
lxe: attributes_deep_cached  (--TR T1)    7.0908 msec/pass
lxe: attributes_deep_cached  (--TR T2)   29.5591 msec/pass
lxe: attributes_deep_cached  (--TR T4)    7.1721 msec/pass
lxe: objectpath_deep_cached  (--TR T1)    2.2731 msec/pass
lxe: objectpath_deep_cached  (--TR T2)   23.1631 msec/pass
lxe: objectpath_deep_cached  (--TR T4)    2.3179 msec/pass
Things to note: you cannot currently use weakref.WeakKeyDictionary objects for this as lxml's element objects do not support weak references (which are costly in terms of memory). Also note that new element objects that you add to these trees will not turn up in the cache automatically and will therefore still be garbage collected when all their Python references are gone, so this is most effective for largely immutable trees. You should consider using a set instead of a list in this case and add new elements by hand.
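A minimal sketch of such a keep-alive cache, here using a set and a hypothetical filter on a 'value' tag, could look like this:

from lxml import objectify

root = objectify.fromstring("<root><item><value>1</value></item></root>")  # invented example data

cache = {}
# keep only the elements that are accessed repeatedly alive, e.g. all 'value' elements
cache[root] = set(el for el in root.iter() if el.tag == 'value')

# ... repeated fast attribute access on the cached elements ...

del cache[root]   # allow the Python proxies to be collected again when done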
Here are some more things to try if optimisation is required:
Note that none of these measures is guaranteed to speed up your application. As usual, you should prefer readable code over premature optimisations and profile your expected use cases before bothering to apply optimisations at random.