In the last few months, I have been reading claims from a wide variety of vendors and open-source XACML projects that they have the world's fastest 100% standards-based XACML engine.
This reminds me of the heated debates, involving national pride and engineering feats, when the French and the Japanese went head-to-head in designing the world's fastest trains. The Chinese have since caught up. The Wikipedia article is a trove of trivia when it comes to speed and how records were achieved (or what they actually mean). In the table summary, I can spot quite a few 'current world record' labels… If by the time you finish reading the article your head is not spinning one way or the other, then you're ready to take on the 'fastest XACML engine' claim.
So who's the fastest? How can one claim to be the fastest? After all, what does it mean? Don't you need to compare against other engines? To date, only one such comparison has been done by academics, and only on open-source engines [1].
In pure absolute terms, the fastest engine is the one that can process the most XACML access control requests in a given time, usually normalized to requests per second for comparison purposes. That statement, however, should raise a few questions:
- Are these live access control requests?
The requests being sent are live requests. There is no caching involved (no decision caching, no attribute caching). The PDP engine is processing each and every request and is returning a decision.
- Are the requests actually using XACML?
The requests should be XACML-conformant. Sadly, not many implementations out there claim to conform to the XACML 2.0 standard or the XACML 3.0 specification. Check that the solution you go for does. If a solution achieves high access control request rates by cutting corners or not respecting the standard, it will fall apart when scaling or when evolving towards new, more demanding scenarios.
- How are the requests being transported?
The form factor of the request matters: in a performance test or benchmark used to produce those figures, any form factor can be used. If one were to use XACML expressed in its XML form inside a SAML assertion inside a SOAP message, I would wager the figures would rate poorly. SOAP's middle name is not 'Speedy Gonzales'. Typically, a performance test would therefore use XACML's in-memory representation in code (Java, C, or C#, depending again on the implementation; SunXACML, for instance, uses Java).
- Do the requests & policies used in benchmarks accurately represent business complexity?
The requests and the policies should be meaningful and represent varying levels of complexity. When you are presented with a benchmark, ask to see how the results were computed and how real-world complexity was simulated. A fellow vendor likes to use his kids in examples. I don't have any, so I'll use my niece as an example – she can splutter out 'no' so fast that I'd reckon she might well be the fastest decision engine in the world. True, she's not XACML-conformant, and besides, she's got a deny-all policy. That's not very useful. So remember: ask how complex a request is and how large the policy set being evaluated is.
- What is the architecture being used?
Performance can be degraded or enhanced by several architectural considerations: how many engines are used? How are engines combined? Can engines scale horizontally? Vertically?
- Do performance figures represent real production data?
Or have they simply been estimated in a lab environment? In the end, performance needs to be proven in real time and in real-world use case scenarios. What actual experience and evidence from production systems have you been given?
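To make the throughput metric above concrete, here is a minimal, hypothetical sketch in Java. None of these names come from SunXACML or any real engine: `PdpBenchmark`, `Rule`, `Request`, and the first-applicable `evaluate` loop are all stand-ins, assumed only to illustrate what 'decisions per second' actually counts – every request evaluated live, no decision cache, with results depending on policy size and request mix.

```java
import java.util.List;

// Hypothetical stand-in for a PDP: a first-applicable evaluation over a
// small in-memory rule set. Real engines expose a similar
// evaluate(request) -> decision call, but against full XACML policies.
public class PdpBenchmark {

    enum Decision { PERMIT, DENY, NOT_APPLICABLE }

    record Rule(String role, String action, Decision effect) {}
    record Request(String role, String action) {}

    // First-applicable combining: return the effect of the first rule
    // whose role and action both match the request.
    static Decision evaluate(List<Rule> policy, Request req) {
        for (Rule r : policy) {
            if (r.role().equals(req.role()) && r.action().equals(req.action())) {
                return r.effect();
            }
        }
        return Decision.NOT_APPLICABLE;
    }

    // Live throughput: every single request is evaluated; no decision caching.
    static double requestsPerSecond(List<Rule> policy, List<Request> requests, int iterations) {
        long permits = 0; // consume the results so the work cannot be discarded
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            for (Request req : requests) {
                if (evaluate(policy, req) == Decision.PERMIT) {
                    permits++;
                }
            }
        }
        double elapsedSeconds = (System.nanoTime() - start) / 1e9;
        return (double) iterations * requests.size() / elapsedSeconds;
    }

    public static void main(String[] args) {
        List<Rule> policy = List.of(
            new Rule("manager", "approve", Decision.PERMIT),
            new Rule("employee", "approve", Decision.DENY));
        List<Request> requests = List.of(
            new Request("manager", "approve"),
            new Request("employee", "approve"),
            new Request("intern", "view"));
        System.out.printf("~%.0f decisions/sec%n",
            requestsPerSecond(policy, requests, 100_000));
    }
}
```

A real benchmark would also need warm-up runs, realistic policy sets, and proper measurement hygiene; the point of the sketch is only to show what the headline number is measuring, and why a two-rule policy makes it meaningless.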
It's easy to claim one is the fastest. But one should tread carefully and not take any such statement at face value. Ask to understand how performance levels are achieved. Ask how such performance levels can be sustained and made resilient to attacks or simply to highly volatile levels of traffic.
PS: don’t forget to come visit us at Gartner in San Diego later this month where we will illustrate fine-grained access control in the cloud with best-of-breed vendors Radiant Logic (attribute virtualization) and Layer 7 (SOA security – see their flyer here: http://www.layer7tech.com/beachparty/).
[1] Fatih Turkmen and Bruno Crispo. 2008. Performance Evaluation of XACML PDP Implementations. In Proceedings of the 2008 ACM Workshop on Secure Web Services (SWS '08). ACM, New York, NY, USA, 37–44. DOI: 10.1145/1456492.1456499. http://doi.acm.org/10.1145/1456492.1456499