From March 2011
Feature:

Performance and Scalability in Drupal 7


With the longest release cycle of any major Drupal release to date, Drupal 7 contains more new features and architectural changes than any previous version. Over 50 contributed modules have been moved into core, several subsystems have been re-factored, and many thousands of smaller changes have been committed. We’ll review some of the challenges that face Drupal 6 sites, fixes and new features available in Drupal 7, and what new features may be in store for Drupal 8.

A note on performance and scalability:

Performance is about speed - how fast your application can handle a single request.

Scalability is about quantity - how many requests your application can handle at the same time, or the amount of information it can store and process.

Drupal 6 Challenges

With Drupal 6, many of the hardest-to-solve issues are caused by subsystems that do not scale well:

  • Higher memory and CPU usage with large numbers of modules installed - sometimes referred to as the ‘contrib module all you can eat buffet’.
  • High PHP peak memory usage from loading a large number of records into memory, like full taxonomy trees, or module and menu rebuilds.
  • Slow queries from the forum, taxonomy, or tracker modules with large numbers of nodes, comments and taxonomy terms.

In practice, many issues affect both performance and scalability. Web server processes and CPU can be tied up by inefficient use of PHP functions, lowering the overall capacity of the server.

Nearly all scalability issues begin with systems that work fine when they are only handling a few things - whether that’s articles or modules - but which are not able to cope when the number of items increases far beyond what was originally expected. As Drupal is used to build larger and more complex sites, more and more sites report issues after pushing one or more of these systems beyond their limits. With each release, Drupal finds its way into larger and larger deployments and the process repeats itself.

Drupal 7

Less SQL.

One of the most commonly mentioned performance issues in Drupal 5 and 6 was the number of queries executed per page, with most of these coming from path alias lookups. Path aliases let example.com/about and example.com/node/1 stand in for each other automatically: incoming requests for the alias resolve to the internal path, and outgoing links to the internal path are rewritten to the alias. What is less known is that these queries were introduced in Drupal 4.7 to fix a scalability issue in earlier versions of Drupal.

In Drupal 4.6, all path aliases in the system were loaded into memory via a single query on every request, then this array was consulted when resolving paths to aliases.

For sites with a few dozen aliases this worked fine. Later, people built Drupal sites with tens or hundreds of thousands of path aliases, something which became easier to do with the popularity of the pathauto module. Loading all these aliases into an array could take several seconds, exceed PHP’s memory limit, and make Drupal more or less unusable on these sites.

In the Drupal 4.7 release, this mechanism was changed to load aliases one by one from the database when requested. This scaled no matter how many aliases there were. However, while each individual query was very fast, latency between web and database servers and the expense of preparing and executing each individual query quickly added up. Content rich pages could potentially query aliases individually for several hundred URLs.

Paths, round trips and mass transit

In Drupal 7 two large changes to the path system were introduced:

  1. The path system maintains a whitelist of which kinds of paths to look up aliases for, to avoid making round trips when there is nothing there to find - if there are no aliases for user/* paths, it doesn’t bother looking for them.
  2. Each page builds a cache of the paths it requested, refreshed every 24 hours or whenever caches are cleared. This allows all of those aliases to be loaded in a single query on the next request for that page. If a path isn’t in the cache, it’s looked up individually until the next time the cache is refreshed.

The combination of these two optimizations means that on cache misses only a subset of paths are queried individually, which usually outweighs the cost of building the per-page cache compared to an equivalent request against Drupal 6. On a cache hit, all aliases for the page can be retrieved with 2-3 database queries and one cache_get(). The main trade-off here is storage, since the space taken by per-page caches grows linearly with the number of pages on a site. However, this can be resolved with a caching backend such as memcache, which allocates a fixed amount of memory to each cache bin and evicts entries on an LRU (least recently used) basis.
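To make the lookup order concrete, here is a simplified sketch of the strategy, using a hypothetical helper function rather than the actual core code (the real logic lives in drupal_lookup_path()):

  <?php
  // Simplified illustration of Drupal 7's alias lookup strategy;
  // not the actual core implementation.
  function example_lookup_alias($path, array $whitelist, array &$page_cache) {
    // 1. Whitelist check: if no alias exists for this kind of path
    // (e.g. 'user/*'), skip the database round trip entirely.
    $first_segment = strtok($path, '/');
    if (empty($whitelist[$first_segment])) {
      return FALSE;
    }
    // 2. Per-page cache: on a cache hit, every alias this page needed
    // last time was already bulk-loaded in a single query.
    if (isset($page_cache[$path])) {
      return $page_cache[$path];
    }
    // 3. Cache miss: fall back to an individual query, and remember the
    // result so it can be bulk-loaded on the next request.
    $alias = db_query(
      "SELECT alias FROM {url_alias} WHERE source = :source",
      array(':source' => $path)
    )->fetchField();
    $page_cache[$path] = $alias;
    return $alias;
  }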

Entities on the wagon

Another frequently encountered performance issue in Drupal 6 and earlier was the expense of calling node_load() and user_load(). A single node can take several database queries to build, and many pages are built from multiple nodes and users - for example a list of blog posts or an image gallery.

In Drupal 7, we introduced the Entity API and the concept of ‘multiple load’. The entity_load() function takes an array of entity IDs so database queries and hooks can all act on the list of entities at the same time. If a single node takes five database queries to build, it will only take five queries to build 30 nodes in Drupal 7, compared to 150 queries for the same objects in Drupal 6.
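For example, loading thirty nodes for a listing looks like this (the arrays of IDs are placeholders):

  <?php
  // Drupal 6 style: one node_load() per node, so the same queries and
  // hook invocations are repeated for every node on the page.
  $nodes = array();
  foreach ($nids as $nid) {
    $nodes[$nid] = node_load($nid);
  }

  // Drupal 7: load the whole set at once; queries and hook_node_load()
  // implementations operate on all of the nodes in a single pass.
  $nodes = node_load_multiple($nids);

  // The same pattern works for any entity type via the Entity API.
  $users = entity_load('user', $uids);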

The entity API also goes a long way toward standardizing the behavior of the different ‘things’ in a Drupal site. Nodes, users, taxonomy terms, comments and files, which in Drupal 6 were completely different systems, are now more or less standardized around the entity system. This was a late addition in Drupal 7 and there’s a lot of work remaining to complete the process in Drupal 8. The entity system also allows for pluggable entity loading, allowing the contributed Entity Cache module to put your entities into memcache for example.

This API and its associated changes, along with the caching in the cache_bootstrap bin of data needed on every request, enable memcache to serve a full page to authenticated users in Drupal 7 without hitting the database at all.

Big data, big queries

As Drupal has become more popular, the number of Drupal sites storing large amounts of data has increased. Drupal 7 introduces a number of measures to deal with this.

The new database layer and query builder allow drivers to specify case-insensitive operators. This lets MySQL use LIKE for autocomplete and user name lookups instead of LOWER(), so these queries can use indexes rather than doing a full table scan.
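For example, a user name autocomplete query can be written as a prefix match with the LIKE operator, which MySQL can satisfy from the index on the name column ($prefix stands in for the user’s input):

  <?php
  // db_like() escapes any wildcard characters in the supplied string;
  // the trailing '%' turns it into an index-friendly prefix match.
  $matches = db_select('users')
    ->fields('users', array('name'))
    ->condition('name', db_like($prefix) . '%', 'LIKE')
    ->range(0, 10)
    ->execute()
    ->fetchCol();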

There is also built-in master/slave support in the new database layer, although Drupal can’t yet set up your database replication for you (or fold your laundry).
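Replica support is configured in settings.php and used per query by specifying a target; a sketch, with placeholder host names:

  <?php
  // settings.php: the default connection plus one or more slave servers.
  $databases['default']['default'] = array(
    'driver' => 'mysql',
    'database' => 'drupal',
    'username' => 'drupal',
    'password' => 'secret',
    'host' => 'db-master.example.com',
  );
  $databases['default']['slave'][] = array(
    'driver' => 'mysql',
    'database' => 'drupal',
    'username' => 'drupal',
    'password' => 'secret',
    'host' => 'db-slave.example.com',
  );

  // In module code, a read-only query can opt in to the slave target.
  // If no slave is defined, Drupal falls back to the default connection.
  $nids = db_query(
    'SELECT nid FROM {node} WHERE status = 1',
    array(),
    array('target' => 'slave')
  )->fetchCol();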

Forum, taxonomy and tracker modules now build de-normalized tables containing the data needed to build listings. Queries that used to require conditions and sorts across multiple tables can now be run against a single, fully indexed table.

InnoDB is enabled by default, providing much better safety and performance than MyISAM, particularly in situations with a large number of writes.

Fields and NoSQL

These de-normalized tables help with specific pain points encountered by Drupal 6 sites using those core features. However, a bigger issue facing Drupal sites is the combination of highly flexible storage via modules like CCK and Flag with the need to display that data in many different ways (often using the equally flexible Views module). This flexibility tends to lead to slow queries.

Drupal sites often need to list ten articles in a particular category or any of its subcategories, ordered by the number of votes cast on a Tuesday. While pages like this can be built either by hand with a custom module or via Views without any programming knowledge at all, in both cases there is rarely a performant solution while the data is stored across different tables. Apart from the specific core modules already mentioned, Drupal 7 won’t be any better in this regard. The new fields system stores each field in its own table, something the CCK module does in Drupal 6 when a field is shared between multiple content types or may have unlimited values. This means much more simplicity when creating or updating fields, but also that all data is now stored in multiple different tables - the ‘per content type’ storage of Drupal 6 no longer exists.

The simplicity of the default SQL storage is balanced by the fact that it’s possible to ignore it completely and write your own field storage backend. This allows for the creation of alternative SQL storage models, or for putting the data outside SQL altogether, giving developers the means to solve the inherent performance problems of this default, highly normalized storage.
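A custom backend announces itself via hook_field_storage_info() and then implements the field storage hooks; a minimal skeleton might look like this (the module and backend names are made up for illustration):

  <?php
  /**
   * Implements hook_field_storage_info().
   */
  function mymodule_field_storage_info() {
    return array(
      'mymodule_storage' => array(
        'label' => t('Example field storage'),
        'description' => t('Stores field data somewhere other than the default SQL tables.'),
      ),
    );
  }

  /**
   * Implements hook_field_storage_load().
   *
   * Reads field data for the given entities from the alternative store
   * and attaches it to the entity objects.
   */
  function mymodule_field_storage_load($entity_type, $entities, $age, $fields, $options) {
    // Fetch the stored values for each entity and assign them, e.g.
    // $entities[$id]->{$field_name}[$langcode] = $values;
  }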

At the moment, the most viable storage backend for high-performance sites is the MongoDB project. There are contrib modules that provide a field storage backend for MongoDB, with each entity stored as a single document and one collection per entity type (i.e. a collection for nodes, another for users). Since MongoDB stores documents in BSON (binary JSON), there is no up-front need to define a schema in the way relational databases require. Adding a new field to an entity simply means an extra key in the BSON document - no new tables, no ALTER TABLE operations. MongoDB also supports B-Tree indexes in the same way as SQL, and since all of an entity’s data lives in a single document, it’s possible to write queries with conditions and sorts against any property of a node, user or taxonomy term, all while using an index and with no complicated de-normalization. This allowed Examiner.com, a top 100 website in the US, to launch on Drupal 7 with several million nodes and taxonomy terms without any page caching.

MongoDB is still a young technology, and working with it takes some adjustment from MySQL both in terms of administration and using the API. However, a late addition to Drupal 7 makes API adjustment easier. EntityFieldQuery, an API for building storage-agnostic queries against entity properties and field data, allows field storage backends to convert these queries into their own syntax. This is only an API in Drupal 7, but there is already an EntityFieldQuery Views Backend project - meaning the same view could be used to query either SQL or MongoDB depending on the field storage of a site.
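For example, fetching the ten most recently created published articles tagged with a given term is expressed the same way regardless of which backend stores the field data (the field name ‘field_tags’ is an assumption about the site’s configuration):

  <?php
  $query = new EntityFieldQuery();
  $query->entityCondition('entity_type', 'node')
    ->entityCondition('bundle', 'article')
    ->propertyCondition('status', 1)
    // Whatever taxonomy term reference field the site uses.
    ->fieldCondition('field_tags', 'tid', $tid)
    ->propertyOrderBy('created', 'DESC')
    ->range(0, 10);
  $result = $query->execute();

  if (!empty($result['node'])) {
    $nodes = node_load_multiple(array_keys($result['node']));
  }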

A plug for pluggability

In addition to the pluggable watchdog, cache and session backends in Drupal 6, Drupal 7 added pluggable lock and queue APIs. This means the storage for these systems can be moved out of SQL to MongoDB, Memcached, APC, Beanstalkd and other technologies better suited to handle these tasks at high volume.
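Code that uses these APIs doesn’t change when the backend does; for example (the lock name, queue name and payload are placeholders):

  <?php
  // Lock API: make sure only one process runs an expensive rebuild.
  if (lock_acquire('mymodule_rebuild', 30)) {
    // ... do the expensive work ...
    lock_release('mymodule_rebuild');
  }
  else {
    // Another process holds the lock; wait for it to finish instead.
    lock_wait('mymodule_rebuild');
  }

  // Queue API: defer work to cron or a separate worker process.
  $queue = DrupalQueue::get('mymodule_tasks');
  $queue->createItem(array('nid' => 123));

  // A worker claims items, processes them, and deletes them when done.
  while ($item = $queue->claimItem()) {
    // ... process $item->data ...
    $queue->deleteItem($item);
  }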

Different cache bins can now be sent to different cache backends, and Drupal 7 includes a new cache_bootstrap bin for the data needed to serve a full bootstrap - allowing sites, even on a single server with limited memory, to pull that single bin out of SQL and into a backend such as APC, saving several SQL hits in return for a few megabytes of shared memory.
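With the contributed APC module, for example, moving just that bin into shared memory is a couple of lines in settings.php (the include path and class name below follow the module’s documentation and may vary between versions):

  <?php
  // settings.php: serve the bootstrap bin from APC shared memory,
  // leaving every other bin on its default backend.
  $conf['cache_backends'][] = 'sites/all/modules/apc/drupal_apc_cache.inc';
  $conf['cache_class_cache_bootstrap'] = 'DrupalAPCCache';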

Additionally, Drupal 7 now sends correct cache headers for reverse proxies, and avoids creating anonymous user sessions until one is explicitly requested. These changes make integration with reverse caching proxies and CDNs much easier for cached pages, letting those requests avoid hitting PHP altogether. Even without a caching proxy, if you’re using memcache as a cache backend, you can speed up cached page delivery by using the ‘page_cache_without_database’ feature. See settings.php for the gory details.
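As a sketch, a memcache-backed setup with database-free cached page delivery looks something like this in settings.php (the include path assumes the usual location of the memcache module):

  <?php
  // settings.php: use memcache for all cache bins except forms, which
  // need durable storage, and serve cached pages without touching SQL.
  $conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
  $conf['cache_default_class'] = 'MemCacheDrupal';
  $conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
  $conf['page_cache_without_database'] = TRUE;
  $conf['page_cache_invoke_hooks'] = FALSE;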

Files and front end

Alongside swapping out SQL, Drupal 7 also lets you swap out your local filesystem much more easily, both for saving and for serving files. Core file handling now uses PHP stream wrappers, meaning any file storage method supported by a stream wrapper can be used for file operations. Support already exists in contributed modules for Amazon S3 and the PHP hash wrapper. CDN support also got easier, with the CDN project providing drop-in support from contrib.
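Because file locations are now URIs handled by stream wrappers, the same API call works whether the target is the local public directory or a remote backend registered by a contributed module (the s3:// scheme below is an example of such a wrapper, not something core provides):

  <?php
  // Save to the default public files directory on the local filesystem.
  $local = file_save_data($data, 'public://summary.txt', FILE_EXISTS_REPLACE);

  // Save to a remote backend; only the scheme changes, assuming a
  // contrib module has registered an 's3://' stream wrapper.
  $remote = file_save_data($data, 's3://summary.txt', FILE_EXISTS_REPLACE);

  // Generating a URL to the file works the same way for either scheme.
  $url = file_create_url('public://summary.txt');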

The CSS and JavaScript aggregation already present in Drupal 6 was heavily re-factored in Drupal 7 to address issues found in the Drupal 6 logic. Drupal 6 attempted to bundle files into a single large aggregate file to minimize the number of HTTP requests. While this looks good when viewing a single page in YSlow or Firebug, as users browsed through a site they would often have to download a significant number of different large aggregate files due to minor variations in the CSS and JavaScript files aggregated between pages. This was rewritten so that files are split into groups, resulting in more aggregates per page, but with a much higher chance that any one aggregate will be fetched from the browser cache on the next request. For those who are uncomfortable with the extra HTTP requests when first visiting a site, there are also experimental contributed projects that attempt to deal with that, either by learning which files are likely to appear on every page and combining those into a single aggregate, or via parallel JavaScript loaders like LABjs.
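The grouping is controlled by the ‘group’ and ‘every_page’ options when files are added; stylesheets flagged as appearing on every page join a shared aggregate that stays in the browser cache across the whole site (the module and file names are placeholders):

  <?php
  // A stylesheet used on every page joins the sitewide aggregate ...
  drupal_add_css(drupal_get_path('module', 'mymodule') . '/mymodule.css', array(
    'group' => CSS_DEFAULT,
    'every_page' => TRUE,
  ));

  // ... while a page-specific stylesheet goes into a separate, smaller
  // aggregate, so it doesn't invalidate the shared one.
  drupal_add_css(drupal_get_path('module', 'mymodule') . '/mymodule.admin.css', array(
    'group' => CSS_DEFAULT,
    'every_page' => FALSE,
  ));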

So is it faster?

During the Drupal 7 release cycle, we performed benchmarks and profiling comparing stock installs of Drupal 6 and Drupal 7. On very simple pages, such as viewing a node with a few comments, the answer is often a resounding no - raw Drupal 7 performance, in terms of the CPU and memory required to generate a simple page, is often worse than Drupal 6.

This is the main trade-off for additional features, abstraction and flexibility - the processing involved in rendering an article and some comments gets more complex and slower. There’s been no such degradation in the baseline performance of cached pages though. Additionally, by the time a site has several contrib modules and has been set up to serve pages to real people, it’s likely that in many cases it will perform better in Drupal 7. This means the real answer is “it depends”.

Looking forward to Drupal 8

While many subsystems were overhauled in Drupal 7, several were left broadly the same as Drupal 6, for example the bootstrap, module and theme systems. There is also plenty of work that wasn’t fully completed before the Drupal 7 code freeze, such as the Entity API and the conversion of core storage to Fields.

As with all major releases, Drupal 8 offers the opportunity to build on the work done in Drupal 7, and while it is hard to predict what will happen during a core release cycle, two targets have already been identified.

Memory usage from global site caches for modules, themes, schema and fields still grows linearly as more modules, database tables or fields are added. The amount of code loaded when running lots of modules on hosting without a PHP opcode cache also increases memory and CPU usage. While some great fixes for these issues made it into Drupal 7, many remain largely the same despite some valiant attempts to fix them.

Another piece of unfinished business is the request routing system. Drupal 7 began the process of supporting different rendering pipelines, but there is still code, such as RSS feeds and image generation, that executes a page callback, then prints and exits halfway through the request to avoid the actual page rendering process kicking in. Drupal is increasingly used to serve requests that don’t want a fully rendered HTML page, but our core mechanisms for serving these haven’t caught up yet.

Drupal 8 offers several opportunities to improve this: understanding our memory footprint better via tools such as XHProf - which was open-sourced only a few months before the Drupal 7 code freeze - and re-factoring the bootstrap system to allow Drupal to serve requests such as image generation or ESI callbacks without invoking the overhead of a full page request. Work hadn’t officially begun at the time of writing, but there are plenty of plans in the Drupal 8 issue queue, and at groups.drupal.org/butler, for those who are interested.

Drupal 7 performance and scalability projects to watch:
http://drupal.org/project/agrcache
http://drupal.org/project/apc
http://drupal.org/project/boost
http://drupal.org/project/cdn
http://drupal.org/project/core_library
http://drupal.org/project/entitycache
http://drupal.org/project/efq_views
http://drupal.org/project/hash_wrapper
http://drupal.org/project/labjs
http://drupal.org/project/headjs
http://drupal.org/project/media_amazon
http://drupal.org/project/memcache
http://drupal.org/project/mongodb
http://drupal.org/project/performance_hacks