Cache Engine Comparison (2/3)

June 19, 2012
(Original Japanese article translated on December 19, 2012)

In the first part of this article, we explained the roles expected of cache servers, and gave an overview of cache servers and cache engines. Here we go into further detail, and discuss tests carried out by IIJ comparing the Varnish Cache, Apache Traffic Server, and nginx cache engines.

Cache Engine Comparison

First, here is a comparative chart listing the functions of Varnish Cache, Apache Traffic Server, and nginx.

Function                              Varnish Cache   Apache Traffic Server   nginx
Thread-based                          Yes             Yes                     No
Multi-process                         Yes             No                      Yes
Event-driven                          Yes             Yes                     Yes
Cache purging                         Yes             Yes                     No
Internet Cache Protocol (ICP)         No              Yes                     No
Edge Side Includes (ESI)              Yes             Yes                     No
Request consolidation                 Yes             Yes                     Yes
Multiple origin servers specifiable   Yes             No                      Yes

Varnish Cache

Varnish Cache has the following characteristics, stemming from the VCL configuration language mentioned in the first part of this article.

  • Dynamic loading of configuration files
  • Configuration using in-line C
  • Extension using Varnish Modules (VMODs)
  • Health checks for origin servers using probe objects

First, Varnish Cache translates the configuration file written in VCL into C. The resulting C code is then compiled into a shared library. Finally, Varnish Cache links this shared library, and the settings take effect. This implementation makes it possible to write highly flexible configuration files for Varnish Cache, much like writing a program. Because the compiled result is linked in, it also has the advantage of improving overall speed.
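As a sketch of this programmability (the backend address, health-check path, and cookie name below are hypothetical; the syntax follows the Varnish 3-era VCL of this article's time), a configuration that defines a probe object for origin health checks and bypasses the cache for session traffic might look like:

```vcl
# Hypothetical origin address and health-check path -- adjust for your setup.
probe healthcheck {
    .url = "/status";
    .interval = 5s;
}

backend origin {
    .host = "192.0.2.10";
    .port = "80";
    .probe = healthcheck;
}

sub vcl_recv {
    # Bypass the cache for requests carrying a session cookie.
    if (req.http.Cookie ~ "sessionid") {
        return (pass);
    }
}
```

Because this whole file is translated to C and compiled, arbitrary conditional logic like the `vcl_recv` routine above runs at native speed.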

Furthermore, a variety of tools are available for Varnish Cache, enabling use of features such as cache hit ratio monitoring and a CLI console by simply compiling them from source code.

However, while other cache server products maintain cache persistence, Varnish Cache has no persistent cache by default. In other words, when the Varnish Cache process is restarted, the data cached up to that point is lost. A persistent option for cache storage is available for implementing a persistent cache, but it is not very convenient to use.
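For illustration (paths and sizes below are hypothetical), the storage backend is selected at startup with the -s option of varnishd:

```
# Default-style non-persistent storage: the cache is lost on restart.
varnishd -f /etc/varnish/default.vcl -s malloc,1G

# The "persistent" storage backend (experimental in the Varnish 3 era)
# keeps cached objects across restarts, with operational caveats.
varnishd -f /etc/varnish/default.vcl -s persistent,/var/lib/varnish/cache.bin,10G
```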

Apache Traffic Server (ATS)

ATS has the following features in addition to those introduced in the first part of this article.

  • Split DNS
  • Use of RAW disks (unformatted drives) for the cache region
  • Cache operations via Web UI
  • Configuration changes from the CLI
  • Internet Cache Protocol (ICP) support

Split DNS enables you to configure separate DNS servers dedicated to ATS when specifying origin servers, rather than using the same DNS servers as the system. This increases name lookup overhead, but helps reduce operation load since it eliminates the need for ATS-side handling when changing origin servers. A function called Host DB has also been implemented to alleviate name lookup overhead.
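As an illustrative sketch (the internal domain and resolver address are hypothetical), Split DNS is enabled in records.config and the dedicated resolvers are listed in splitdns.config:

```
# records.config -- enable the Split DNS feature
CONFIG proxy.config.dns.splitDNS.enabled INT 1

# splitdns.config -- send lookups for this (hypothetical) internal
# origin domain to a dedicated resolver instead of the system DNS
dest_domain=origin.example.internal named=10.0.0.53
```

With this in place, moving an origin server only requires updating the dedicated DNS zone; no ATS configuration change is needed.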

Additionally, unformatted disk drives can be specified as the storage area for cached content by using them as RAW disks. Using RAW disks eliminates filesystem overhead, providing consistent processing speed and making the device's full capacity available as cache space. The effect is most apparent when using high-speed devices such as SSDs as RAW disks.
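Cache storage is declared in storage.config; a minimal sketch (device names and sizes are hypothetical) might look like:

```
# storage.config -- cache storage areas

# A directory on a formatted filesystem, with an explicit size:
/var/cache/trafficserver 256G

# A raw, unformatted device: the entire device is used for the cache.
/dev/sdb
```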

ATS provides a basic Web UI for monitoring settings and operational status, and this can be used to search for and purge cached objects. One problem with ATS is that the management of settings can become complicated, because there are a number of different types of configuration file. The "traffic_line" CLI command resolves this by enabling you to change settings and update configuration files without stopping ATS. However, some settings do not support this command, making it necessary to rewrite the configuration file directly.
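For example (the configuration variable below is illustrative), typical traffic_line usage looks like:

```
# Read the current value of a setting:
traffic_line -r proxy.config.http.cache.http

# Change the setting on the running server:
traffic_line -s proxy.config.http.cache.http -v 1

# Re-read the configuration files without stopping ATS:
traffic_line -x
```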

ATS is also the only product among those introduced here that supports the Internet Cache Protocol (ICP). ICP is a system for maintaining caches efficiently by sharing cache status between nodes belonging to the same peer group. A clustering function that provides further cache efficiency is also implemented.

This concludes our overview of the results of IIJ's comparative tests regarding Varnish Cache and Apache Traffic Server. We cover nginx in the last part of this article.

Michikazu Watanabe

Author Profile

Michikazu Watanabe

Content Delivery Engineering Section, Core Product Development Department, Product Division, IIJ
Mr. Watanabe joined IIJ in 2011. He is involved in operations and development for the IIJ Contents Delivery Service, and lives by the motto, "do a lot with a little."

Related Links

  • IIJ Technology "Cache Engine Comparison (1/3)"
    To handle client access efficiently, major sites deliver content by operating some form of reverse proxy in addition to origin servers that store the original content. First, we examine the content cache functions (cache servers) of products that can be used as reverse proxies. (June 12, 2012)
  • IIJ Technology "Cache Engine Comparison (3/3)"
    In the first and second parts of this article, we examined the role of cache servers, and gave an overview of cache servers and cache engines. We also discussed the results of comparative tests that IIJ carried out, with a focus on Varnish Cache and Apache Traffic Server. In this final part, we cover nginx. (June 26, 2012)
  • IIJ Technology "The Architecture of the Mighttpd High-Speed Web Server"
    The IIJ-II Research Laboratory began development of a Web server called Mighttpd (pronounced "mighty") in the fall of 2009, and has released it as open source. Through its implementation, we arrived at an architecture that enhances multi-core performance while maintaining code simplicity. Here we take a look at this architecture one element at a time. (May 29, 2012)
