Benchmark Setup
Test Scenario
Users connect again and again… forever. The full user profile (name, address, phone, etc.) is requested on each connection.
This is the preferred mode of testing because it is the most intensive and it resembles the load seen during peak hours.
Bottlenecks can be identified and unreliable components will fall apart:
- Disk, CPU and network are pushed to their limits (until one becomes the bottleneck)
- Blocking or inefficient I/O destroys performance
- Logs saturate disks when not managed properly
- Memory leaks (if present) pile up until the application crashes
- Race conditions are more likely to happen and to be detected
- Bad resource management (e.g. not freeing files and connections) is fatal
- Several of these issues can add up fast and kill the application outright
Test Setup
The authentication service is a web service exposing various APIs (REST, OAuth, SAML…) used by all the applications throughout the organisation to authenticate users and get information about them. Nothing deals with LDAP directly, except the authentication service.
While we could authenticate directly against the LDAP server for performance testing, we explicitly DO NOT want to do that. The performance of a single isolated LDAP server makes little sense and is of limited interest. We care about the performance of the full authentication chain, in which the LDAP server is an important factor.
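For illustration, here is a minimal sketch of one iteration of the scenario in Python. The real tests are driven by JMeter, and the base URL, endpoints, payload and response fields below are hypothetical placeholders, since the authentication service API is internal and not published.

```python
# Minimal sketch of the test scenario (the real tests are driven by JMeter).
# The base URL, endpoints and field names are hypothetical placeholders.
import requests

AUTH_SERVICE = "https://auth.example.internal"  # hypothetical base URL

def login_and_fetch_profile(username, password):
    """One iteration of the scenario: authenticate, then read the full profile."""
    with requests.Session() as session:
        # Authenticate through the service (which itself talks to the LDAP server).
        resp = session.post(f"{AUTH_SERVICE}/api/authenticate",
                            json={"username": username, "password": password},
                            timeout=10)
        resp.raise_for_status()
        token = resp.json()["token"]  # hypothetical response field

        # Request the full user profile (name, address, phone, ...) on each connection.
        profile = session.get(f"{AUTH_SERVICE}/api/users/{username}",
                              headers={"Authorization": f"Bearer {token}"},
                              timeout=10)
        profile.raise_for_status()
        return profile.json()

if __name__ == "__main__":
    # Users connect again and again... forever.
    while True:
        login_and_fetch_profile("jdoe", "secret")
```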
Software and Configuration
- CentOS 6.5
- OpenDJ 2.6
- OpenLDAP 2.4 (hdb)
- Symas Silver (OpenLDAP mdb)
- ApacheDS
All testing is done with 100 000 users in the LDAP server.
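As an aside, a data set of that size is easy to generate. Below is a rough sketch of how such a test population could be produced as an LDIF file; the base DN, attribute values and password scheme are made up for the example and are not the actual benchmark data set. The resulting file can then be loaded with each server's own import tooling.

```python
# Illustrative sketch: generate an LDIF file with 100,000 test users.
# The base DN, attributes and value patterns are made up for this example.
BASE_DN = "ou=people,dc=example,dc=com"  # hypothetical

with open("users-100k.ldif", "w") as ldif:
    for i in range(100_000):
        uid = f"user{i:06d}"
        ldif.write(
            f"dn: uid={uid},{BASE_DN}\n"
            "objectClass: inetOrgPerson\n"
            f"uid: {uid}\n"
            f"cn: Test User {i}\n"
            f"sn: User{i}\n"
            f"mail: {uid}@example.com\n"
            f"telephoneNumber: +33 1 00 {i:06d}\n"
            f"postalAddress: {i} Benchmark Street\n"
            f"userPassword: password{i}\n"
            "\n"
        )
```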
All applications are optimized, configured, tweaked and tuned for maximum performance. We went through all the official documentation, the performance optimization guide (when there is one), and all the first-page Google results for “Tuning <ldap server name>” and similar search strings.
We ran the performance tests dozens of times to find the best settings for each server. All the work of optimizing indexes, tuning cache settings, database size, connection pools, logging levels… and much more, was done.
Settings are as close to production as possible. That implies one notable rule: we did not enable any of the ‘production unsafe’ settings, such as changing database syncing or data-consistency behaviour. Those are the kind of settings that give a performance boost at the cost of eventually destroying all user data on a power loss. Definitely not acceptable for production, and if it’s not acceptable for production, it’s not worth testing.
Pre-Testing – Development Machine
Machine Specifications
HP Laptop:
- Windows 7 Enterprise
- CPU i5-2520M (2 cores, 4 threads)
- Memory 8 GB
- HDD 320 GB 7200 rpm
- VMware Player
Each server runs inside a dedicated VM (1 core, 2 GB). The scenario itself runs in JMeter on the host.
Test Results
[Chart: logins per second by number of concurrent clients, for each server]
Note: OpenLDAP is at 0 because it crashes.
Pre-Production – Servers
Everything is virtualised on VMware ESXi servers; unfortunately I can’t fully disclose the physical hardware of the hosts. Each server runs inside a dedicated VM.
- CPU: 4 cores of a Xeon v3 (2015 generation)
- Memory: 4 GB
- Networking: 10 Gbit
- Shared disks: 16 SAS HDDs, 10k rpm, in RAID
- Shared disks: multiple 4 GB battery-backed RAID controllers
Test Results
[Chart: logins per second by number of concurrent clients, for each server]
Note: OpenLDAP (hdb) is missing because it failed the preliminary tests on the laptop environment miserably.
Conclusion
OpenLDAP & ApacheDS
They have poor write performance and mediocre read-only performance. They both use a BerkeleyDB internally and exhibit similar behaviour. OpenLDAP crashes under load. ApacheDS had to be configured with a special option (no write sync) to load the initial users, or it would have taken an entire week. They are not satisfactory. It looks like there is some sort of internal locking in the LDAP server or the database which blocks access to entries and results in shitty performance.
Symas OpenLDAP
Symas OpenLDAP has good performance, yet it lacks a proper administration interface, configuration tools and instructions (same as bare OpenLDAP). The claims around the Internet that it is 3-10 times faster than OpenLDAP while using 3-10 times less memory are about right (though it can be tough at times to compare <number> to <crashed>).
We believe that high-traffic sites admitting to using OpenLDAP are actually using the Symas edition, either directly via a paid subscription or indirectly by grabbing the open source code and packages and rebuilding it themselves.
The top-end version is actually quite cheap; I remember seeing 75k€ somewhere for a site license (one deployment in one company, unlimited computers, unlimited cores). It makes sense that someone (e.g. a telecom company with 2M users) would start with classic OpenLDAP, get disappointed by it, and then transition to the Symas edition, which can take the load and seems reliable enough.
OpenDJ
OpenDJ has the best performance, the best administration tools (both graphical and command line), and the best documentation. The multi-site replication is designed for worldwide deployment with scalability, high availability and high performance in mind. This is the best LDAP server by an order of magnitude, whether in features, reliability or technical details.
The license costs money though. If you have the budget then you should go for it. It is worth every penny and it can even save you money in the long run.
So far OpenDJ has been a great experience to use and manage. I wish every other LDAP server would die, especially OpenLDAP. Then everyone would be forced to use this one and it would be awesome 😀
Open Source vs Proprietary | Free vs Paid
(I will not get into a philosophical debate about those, I do not care, I take the tool which can do the job for the budget I have.)
Most people tend to equate Open Source with Free (as in ‘no money’). This is a common mistake, but a mistake nonetheless. There are old sayings like “You get what you pay for” and “If it sounds too good to be true, it probably is”. They turn out to be especially true in the case of LDAP servers.
All of the above servers are (mostly) open source. Yet the two working ones (Symas and OpenDJ) are not free at all. They will charge you a license fee and an optional support subscription to get and use their packages.
You still have the option to grab some of the source and try to build everything by yourself. That can satisfy the cheapskate hacker in you, but it isn’t necessarily the best option once you realise your day of work costs $1000 and you have some serious critical stuff to run (e.g. a telecom company with 2M users).
Comments
This benchmarking analysis is far from comprehensive. 100k entries is a relatively small data set. Also, were they testing the memory cache, or the on-disk performance? For example, if you log in user 1 a thousand times, you are not really testing the LDAP server. How long were these tests run for? Minutes? Hours? Days? Also, the totals and averages are not enough information: if one server does 10x more authentications but is down for 10 minutes a day, you may feel more heat as an admin. For example, with larger data sets you need to give OpenDJ more memory and it becomes more challenging to manage garbage collection times (OpenDJ may become unresponsive). Also, OpenDJ has no proxy functionality, so there is no way to split up the data into multiple LDAP servers, which is essential if you want to reduce replication traffic. OpenLDAP has the back_meta plugin which enables you to do some very smart routing. My advice: this is interesting info, but do your own benchmarking, with your own requirements. Take a look at SLAMD, which has some excellent tools for generating distributed load and collecting data: http://dl.thezonemanager.com/slamd/
> This benchmarking analysis is far from comprehensive.
It is part of a much larger and more detailed research effort (which is confidential and which we may not publish).
> 100k entries is a relatively small data set.
This article covers the early tests. There is no need to test millions of users when most of the software already breaks at 100k.
See this article for OpenDJ tests with 10M users:
https://thehftguy.wordpress.com/2015/10/23/10-millions-users-accounts-with-ldap-yes-we-can/
> Also, were they testing the memory cache, or the off disk performance?
From what we remember: the servers had 4 GB of memory; the RAID array had 4 GB of cache (with write caching enabled); the data set for 100k users is 50-500 MB on disk. The Linux filesystem uses the free memory to cache all data after the first read (there is enough memory to cache the entire database). The LDAP server cache settings were configured as advised by the official tuning guides.
Given all of this, all user accounts should be cached after the first authentication.
Note: from our experience, it seems that none of the LDAP servers preload accounts on startup.
> [duration]
Each test is run for a fixed duration, ranging from 5 to 20 minutes (tests with more concurrent clients need more time to run).
During a test, new clients are created linearly over time up to the maximum number configured (the X axis of most graphs). The duration is carefully chosen so that getting all clients running takes AT MOST 5% of the total duration.
When the OpenDJ benchmark shows ~560 logins/s on the graph (200 concurrent requests, 20-minute duration), that means ~672,000 authentications were performed during this test, i.e. every account was authenticated roughly 6-7 times (the load is spread rather evenly).
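A quick back-of-the-envelope check of those figures (the numbers are the ones quoted above; only the variable names are mine):

```python
# Back-of-the-envelope check of the figures quoted above.
clients = 200           # concurrent clients at the end of the ramp-up
duration_s = 20 * 60    # 20-minute test
throughput = 560        # logins per second read from the OpenDJ graph
accounts = 100_000      # user accounts in the directory

total_logins = throughput * duration_s        # ~672,000 authentications
logins_per_account = total_logins / accounts  # ~6.7 per account

# Ramp-up rule: all clients must be running within 5% of the total duration,
# i.e. within 60 seconds here.
max_rampup_s = 0.05 * duration_s
ramp_rate = clients / max_rampup_s            # ~3.3 new clients started per second

print(total_logins, round(logins_per_account, 1), max_rampup_s, round(ramp_rate, 1))
```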
> [account sequence – and cache impact]
The 100k accounts are specified in a CSV file (one user account per line). The lines are reordered randomly when the test starts, the resulting account sequence is split into N equal subsets (one per injector) and the subsets are distributed to the injectors. An injector runs the test scenario, cycling through its subset of accounts.
Given the same CSV file and the same system setup (which is the case here), the generated sequence of 100k users is always the same (hence “consistent randomness”). It is configured this way on purpose: it ensures that all user accounts are used, and it is cache friendly across tests (i.e. two test runs will use the accounts in the same order, allowing the second run to benefit from the caching generated by the first one).
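A minimal sketch of that sequence generation, assuming a fixed seed and four injectors (both are illustrative; the real harness is JMeter and its exact parameters are not published):

```python
# Sketch of the account sequence described above: deterministic shuffle of the
# CSV, split into one subset per injector, each injector cycling through its subset.
# The seed, injector count, file name and CSV layout are illustrative placeholders.
import csv
import itertools
import random

N_INJECTORS = 4
SEED = 42  # fixed seed => the same "consistent randomness" on every run

def load_accounts(path):
    """Read one account (username, password, ...) per CSV line."""
    with open(path, newline="") as f:
        return [tuple(row) for row in csv.reader(f)]

def build_injector_subsets(accounts):
    """Shuffle deterministically, then split into N roughly equal subsets."""
    rng = random.Random(SEED)
    shuffled = list(accounts)
    rng.shuffle(shuffled)  # same order across test runs
    return [shuffled[i::N_INJECTORS] for i in range(N_INJECTORS)]

def injector_loop(subset):
    """Each injector cycles endlessly through its own subset of accounts."""
    for username, password, *_ in itertools.cycle(subset):
        login_and_fetch_profile(username, password)  # the scenario sketched earlier

if __name__ == "__main__":
    subsets = build_injector_subsets(load_accounts("accounts-100k.csv"))
```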
> If one server does 10x more authentications, but is down for 10 minutes a day
The whole point of this research is to test these systems for reliability AND scalability AND performance. It is as much a stress test as it is a performance test.
OpenLDAP and ApacheDS are unstable and unresponsive.
– OpenLDAP couldn’t do the tests, it just kept crashing.
– ApacheDS had to be run with special flags which are unsafe to run in production.
Symas OpenLDAP (paid edition, mdb) and OpenDJ just work.
– OpenDJ is flawless. It never gave any error, no matter the load.
– Symas works fine. Not sure whether it could scale as well to larger systems with multiple 10-core CPUs and 100 GB of memory.
> [erroneous judgement about OpenDJ performance, memory and replication]
OpenDJ is the ONLY LDAP server supporting multi-master AND multi-site replication.
It is the fastest, the most reliable and the most scalable of all LDAP servers.
Would you care to elaborate on what you are running? How many user accounts? How many servers? What are the specifications? What’s your SLA? What’s your traffic?
> Take a look at SLAMD which has some excellent tools for generating distributed load, and collecting data: http://dl.thezonemanager.com/slamd/
Coincidence: that tool was written by the OpenDJ authors 😀
> [conclusion]
Hope all this additional information helps you. It took a while to write.
Hello, do you have a template of the JMeter project?
I don’t see what version of OpenLDAP you were using, but I’ve used stock OpenLDAP on enterprise systems for years without the issue you report. And the JMeter project template would be useful, as others have requested.
It was many years ago, I don’t have the versions or the test projects anymore.