Cache Engine Comparison (1/3)

June 12, 2012
(Original Japanese article translated on December 19, 2012)

To handle client access efficiently, major sites deliver content by operating some form of reverse proxy in addition to origin servers that store the original content. First, we examine the content cache functions (cache servers) of products that can be used as reverse proxies.

The Expected Role of Cache Servers

The following roles are expected of cache servers for the delivery of content.

  1. Retain content as cache data
  2. Manage cache data
  3. Balance load by reducing requests to origin servers

Most cache servers retain cached content as cache objects with a TTL (Time To Live) set. However, they also feature functions for purging the cache to manage content more flexibly when you want to update cache objects regardless of their TTL. Some products also provide a selection of algorithms to manage cache objects that spill over from the cache region.
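The behavior described above — TTL-based expiry, explicit purging, and eviction when the cache region overflows — can be sketched in a few lines of Python. This is a hypothetical minimal model for illustration only; the class and method names (`TTLCache`, `purge`, and so on) are invented here and do not correspond to any particular product, and real cache servers use far more sophisticated storage.

```python
import time
from collections import OrderedDict

# Illustrative sketch of a cache that stores objects with a TTL and
# evicts the least recently used entry when capacity is exceeded.
class TTLCache:
    def __init__(self, capacity=100, default_ttl=60.0):
        self.capacity = capacity
        self.default_ttl = default_ttl
        self._store = OrderedDict()  # key -> (expires_at, value)

    def set(self, key, value, ttl=None):
        expires_at = time.time() + (ttl if ttl is not None else self.default_ttl)
        self._store[key] = (expires_at, value)
        self._store.move_to_end(key)
        while len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        expires_at, value = item
        if time.time() >= expires_at:   # TTL expired: treat as a miss
            del self._store[key]
            return None
        self._store.move_to_end(key)    # mark as recently used
        return value

    def purge(self, key):
        # Explicit purge: drop the object regardless of its remaining TTL.
        self._store.pop(key, None)
```

The `purge` method corresponds to the cache-purging functions mentioned above, and the eviction loop in `set` stands in for the selectable overflow-management algorithms some products offer.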

When delivering content that is not cached on a cache server, or content with a short TTL such as live streaming video, connections often bypass cache servers and access the origin server. To remedy this, some products implement a feature that consolidates identical requests on the cache server side to reduce access to the origin server (request consolidation). Another approach is to first query other cache servers for content that is not available locally, so that requests do not bypass a cache server holding the relevant cache object and reach the origin server unnecessarily.
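The request-consolidation idea can be sketched as follows. This is an illustrative Python sketch, not any product's actual implementation: when many clients request the same uncached URL at once, only the first request (the "leader") is forwarded to the origin, while the rest wait for and reuse its result. The names `Consolidator` and `fetch_from_origin` are assumptions made for this example.

```python
import threading
import time

# Sketch of request consolidation: identical concurrent requests are
# merged so the origin server sees only one fetch per URL.
class Consolidator:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin
        self._lock = threading.Lock()
        self._inflight = {}   # url -> Event signaling the fetch finished
        self._results = {}    # url -> fetched content

    def get(self, url):
        with self._lock:
            if url in self._results:       # already cached
                return self._results[url]
            event = self._inflight.get(url)
            if event is None:              # we are the first requester
                event = threading.Event()
                self._inflight[url] = event
                leader = True
            else:
                leader = False
        if leader:
            body = self._fetch(url)        # the single origin fetch
            with self._lock:
                self._results[url] = body
                del self._inflight[url]
            event.set()
            return body
        event.wait()                       # followers reuse the result
        return self._results[url]
```

However many clients request the same URL concurrently, the origin is contacted only once, which is exactly the load reduction described above.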

The Main Cache Engines

Squid is a well-established name in the cache server products business. Other products such as the mod_cache module for the long-standing Apache HTTP Server are also popular. However, a variety of other products that can be used as cache servers have recently been released.

Below we list some of the typical products that can be used as cache servers.

  • Varnish Cache
  • Apache Traffic Server (ATS)
  • nginx
  • IIS Application Request Routing (ARR)

Of the above, Varnish Cache and Apache Traffic Server (ATS) were, like Squid, created purely as cache servers, while nginx and IIS Application Request Routing (ARR) can be used as high-performance cache servers by complementing their Web server functions with additional modules.

Recently, the high performance of Varnish Cache has attracted a lot of attention. The syntax used in its configuration files is of particular note: because Varnish Cache configuration files are written in VCL, a language similar to C, experienced programmers are likely to feel right at home.
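To give a feel for the C-like syntax, here is an illustrative VCL fragment in the Varnish 3-era dialect. The backend address and TTL values are made-up examples, not a recommended configuration.

```vcl
# Define the origin server that cache misses are fetched from.
backend origin {
    .host = "192.0.2.10";
    .port = "80";
}

sub vcl_recv {
    # Serve only GET/HEAD from the cache; pass everything else through.
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }
    return (lookup);
}

sub vcl_fetch {
    # Override the cache lifetime of image responses.
    if (beresp.http.Content-Type ~ "^image/") {
        set beresp.ttl = 1h;
    }
    return (deliver);
}
```

As the braces, `if` statements, and operators show, anyone comfortable with C-family languages can read a VCL file at a glance.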

Apache Traffic Server (ATS) was originally developed at Yahoo! (the U.S. corporation), so it has a proven track record from its use in Yahoo! services. As you can tell from the "Apache" in its name, development has now been transferred to the Apache Software Foundation. Development continues to progress at a rapid pace, with version 3.2.0 released recently.

The nginx Web server is used by high-traffic Russian sites such as Yandex and Rambler. In recent surveys, it has increased its presence among products used as Web servers (although it is still not on the level of Apache HTTPD). Like Apache HTTPD, nginx also makes it possible to supplement its standard Web server functions or add new functions using modules. It can also be used as a cache server via the proxy module that is provided as standard.
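For illustration, a minimal nginx configuration using the standard proxy module as a cache might look like the following. The paths, zone name, origin hostname, and cache durations here are arbitrary examples, not values taken from any real deployment.

```nginx
http {
    # Define the cache storage region on disk and its in-memory key zone.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_pass http://origin.example.com;   # the origin server
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;  # cache successful responses
            proxy_cache_valid 404      1m;  # cache misses briefly
        }
    }
}
```

With just these directives, nginx acts as a reverse proxy that caches origin responses according to the TTLs given in `proxy_cache_valid`.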

IIS is well-known as a Web server product that can be used with Windows Server, and its extended functions include Application Request Routing (ARR). The product name may give the impression that it is an application gateway, but it can also handle content caching. Its performance rivals the OSS products mentioned above, so it is a candidate worth considering when you are already operating a server environment based around Windows Server.

As this shows, each cache server product has strengths and weaknesses stemming from differences in implementation and design philosophy, so it is important to select the optimal combination based on the characteristics of your service when introducing them.

From the next part of this article we will present the results of tests carried out by IIJ comparing the Varnish Cache, Apache Traffic Server, and nginx cache engines.

Michikazu Watanabe

Author Profile

Michikazu Watanabe

Content Delivery Engineering Section, Core Product Development Department, Product Division, IIJ
Mr. Watanabe joined IIJ in 2011. He is involved in operations and development for the IIJ Contents Delivery Service, and lives by the motto, "do a lot with a little."

Related Links

  • IIJ Technology "Cache Engine Comparison (2/3)"
    In the first part of this article, we explained the roles expected of cache servers, and gave an overview of cache servers and cache engines. Here we go into further detail, and discuss tests carried out by IIJ comparing the Varnish Cache, Apache Traffic Server, and nginx cache engines. (June 19, 2012)
  • IIJ Technology "Cache Engine Comparison (3/3)"
    In the first and second parts of this article, we examined the role of cache servers, and gave an overview of cache servers and cache engines. We also discussed the results of comparative tests that IIJ carried out, with a focus on Varnish Cache and Apache Traffic Server. In this final part, we cover nginx. (June 26, 2012)
  • IIJ Technology "The Architecture of the Mighttpd High-Speed Web Server"
    The IIJ-II Research Laboratory began development of a Web server called Mighttpd (pronounced "mighty") in Fall of 2009, and has released it as open source. Through its implementation, we arrived at an architecture that has enhanced multi-core performance while maintaining code simplicity. Here we take a look at each architecture one at a time. (May 29, 2012)
