
Cache Engine Comparison (3/3)

June 26, 2012
(Original Japanese article translated on December 19, 2012)

In the first and second parts of this article, we examined the role of cache servers, and gave an overview of cache servers and cache engines. We also discussed the results of comparative tests that IIJ carried out, with a focus on Varnish Cache and Apache Traffic Server. In this final part, we cover nginx.

nginx

Compared with the cache engines we have already introduced, nginx has the following features.

  • Asynchronous, event-driven, high-speed processing ability
  • Binaries can be updated without stopping the service
  • A wide range of third party modules is available

Like the other cache engines we have introduced here, nginx processes connections using an asynchronous, event-driven model, but its higher processing capacity gives it a significant advantage over the others when operating sites with heavy traffic.
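
To illustrate the model in general terms (not nginx's actual implementation, which is written in C on top of mechanisms such as epoll), the following minimal Python sketch handles every connection on a single event loop instead of dedicating a thread or process to each client; the port number and response are placeholders.

    # Minimal sketch of an asynchronous, event-driven server: one process,
    # one event loop, many concurrent connections. Illustrative only; nginx
    # itself is implemented in C, not with asyncio.
    import asyncio

    async def handle(reader, writer):
        await reader.readuntil(b"\r\n\r\n")   # read the request headers
        body = b"cached response\n"           # placeholder payload
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body) + body)
        await writer.drain()
        writer.close()

    async def main():
        # All clients share the same loop; no per-connection thread is created.
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

The point of the pattern is that an idle connection costs only a small amount of memory rather than a blocked thread, which is what allows a single worker process to multiplex a very large number of simultaneous clients.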

Additionally, the nginx worker processes that handle connections and the master process that manages them are designed to accept a range of process signals. One example of this is on-the-fly binary updates. Many products do not require a process restart when reloading a configuration file, but as far as the author is aware, nginx is the only one that does not require a restart when updating the binary itself. The on-the-fly binary update function in nginx works as follows.

  1. After the old binary is replaced with a new binary, a signal (USR2) for on-the-fly updating is sent to the master process
  2. The new binary starts (both old and new processes continue to accept connections at this point)
  3. A signal (WINCH) for stopping new connections is sent to the master process running on the old binary
  4. Worker processes managed by the master process for the old binary shut down once they complete the connections they are handling
  5. Once all worker processes have shut down, a signal (QUIT) is sent telling the master process running on the old binary to shut down
  6. All processes started by the old binary are shut down, leaving only processes started by the new binary

(Figure: Nginx On-The-Fly Binary Updates)
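
This sequence can be driven by a small script. The sketch below, in Python, assumes the PID file is at /var/run/nginx.pid (the actual path depends on how nginx was built or configured); after USR2 is received, nginx renames that file to nginx.pid.oldbin, and the new master writes a fresh nginx.pid.

    # Sketch of the on-the-fly binary update sequence, driven by signals.
    # PID_FILE is an assumed path; adjust it to match your installation.
    import os
    import signal
    import time

    PID_FILE = "/var/run/nginx.pid"
    OLD_PID_FILE = PID_FILE + ".oldbin"     # written by nginx after USR2

    def read_pid(path):
        with open(path) as f:
            return int(f.read().strip())

    # Steps 1-2: the old binary has already been replaced on disk; USR2 asks
    # the running master to start the new binary. Both masters now accept
    # connections.
    os.kill(read_pid(PID_FILE), signal.SIGUSR2)
    while not os.path.exists(OLD_PID_FILE):
        time.sleep(0.1)
    old_master = read_pid(OLD_PID_FILE)

    # Steps 3-4: WINCH tells the old master to gracefully stop its workers;
    # they finish in-flight connections and then exit.
    os.kill(old_master, signal.SIGWINCH)
    time.sleep(5)   # in practice, wait until the old workers are gone

    # Steps 5-6: QUIT shuts down the old master, leaving only the new
    # binary's processes.
    os.kill(old_master, signal.SIGQUIT)

If the new binary turns out to be faulty before the old master is shut down, the change can be rolled back by sending HUP to the old master (which starts its worker processes again) and then QUIT to the new master.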

This means binaries can be switched while nginx continues to handle connections, making it possible to apply updates flexibly, such as when a security fix is released during operation. Furthermore, nginx benefits from a wide range of third party modules that supplement its built-in functions, so it is easy to expand functionality as necessary, and a rebuilt binary that includes additional modules can be rolled out without downtime using the on-the-fly upgrade described above.

The comparative chart in the second part of this article may appear to suggest that nginx falls short as a caching proxy server, but the features described here more than make up for those shortcomings.

In Closing

As demonstrated here, IIJ continues to release new services and enhance existing ones by constantly following new products in established fields and incorporating our own distinctive flair. The information presented here is the result of studies on peripheral technologies aimed at enhancing our existing services. We look forward to providing even better services in the future.

Michikazu Watanabe

Author Profile

Michikazu Watanabe

Content Delivery Engineering Section, Core Product Development Department, Product Division, IIJ
Mr. Watanabe joined IIJ in 2011. He is involved in operations and development for the IIJ Contents Delivery Service, and lives by the motto, "do a lot with a little."

Related Links

  • "IIJ Technology "Cache Engine Comparison (1/3)"
    To handle client access efficiently, major sites deliver content by operating some form of reverse proxy in addition to origin servers that store the original content. First, we examine the content cache functions (cache servers) of products that can be used as reverse proxies. (June 12, 2012)
  • "IIJ Technology "Cache Engine Comparison (2/3)"
    In the first part of this article, we explained the roles expected of cache servers, and gave an overview of cache servers and cache engines. Here we go into further detail, and discuss tests carried out by IIJ comparing the Varnish Cache, Apache Traffic Server, and nginx cache engines. (June 19, 2012)
  • "IIJ Technology "The Architecture of the Mighttpd High-Speed Web Server"
    The IIJ-II Research Laboratory began development of a Web server called Mighttpdblank (pronounced "mighty") in Fall of 2009, and has released it as open source. Through its implementation, we arrived at an architecture that has enhanced multi-core performance while maintaining code simplicity. Here we take a look at each architecture one at a time. (May 29, 2012)
