Overview
The P1TS software system receives raw IMSA trackside timing and scoring data and transforms it into information that assists race teams with real-time race strategy. Recent enhancements to the (beta) software increased network traffic, resulting in network timeouts. This post describes the system and how the timeouts were resolved by configuring the web server with HTTP compression.
Background
P1TS is designed for the web, though it mostly runs locally on a private network, and as a web product it has two major components:
- A web service, using Apache Tomcat, that listens to raw IMSA trackside timing and scoring data (over 4 MB in 2 hr 30 min) and transforms it into information made available to web applications through a RESTful API in JSON format (a sketch of such an endpoint follows this list).
- Web applications, running in the Google Chrome web browser, that are responsible for the human-facing information displays. Each not only requests and receives information from the web service, but also performs additional calculations and tertiary data transformations itself.
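To make the division of labor concrete, here is a minimal sketch of what a JSON endpoint on the web service side might look like (the servlet class, URL pattern, and payload are hypothetical, not P1TS's actual API):

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint and payload; P1TS's real API is not published here.
@WebServlet("/api/standings")
public class StandingsServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The Content-Type set here matters later: Tomcat decides whether to
        // compress a response by matching this value against the Connector's
        // compressableMimeType attribute.
        resp.setContentType("application/json;charset=UTF-8");
        resp.getWriter().write("{\"car\":\"01\",\"lastLap\":\"1:38.456\"}");
    }
}
```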
While engineering this software, it has both surprised and delighted me that we are still able to run on proven but antiquated hardware. We are using Panasonic CF-18 Toughbooks (Windows XP, Pentium, 900 MHz, 1 GB RAM, 802.11b) to perform our on-track computing duties.
P1TS software has recently been enhanced to compare a target car's lap performance against both its own previous laps and its competitors'. This has increased both the number of network requests and the size of each network response from the web service.
Timeouts
The enhanced P1TS system was stress tested by playing back a full race using the following:
- 1 Web service on 1 Toughbook CF-18
- 1 Web application on each of 3 Toughbook CF-18s
- 2 Web applications on 1 ThinkPad T500
- 1 Web application on an Apple iPad
- 1 Web application on a Samsung Note 2 smartphone
This resulted in sporadic HTTP timeouts surfacing in the web applications' JavaScript consoles in Google Chrome. Most of the timeouts were from requests whose responses are normally about 55 KB in size, requested every second. Multiplied across the 7 web applications running simultaneously, that is about 385 KB of data per second, or roughly 3 Mbps, still below the hardware's meager 11 Mbps nominal 802.11b data rate (though real-world 802.11b throughput, after protocol overhead, is considerably lower than the nominal rate).
Although I had known of HTTP compression, there was no need for it in earlier versions of P1TS, which provided less information. Fortunately, enabling HTTP compression in Tomcat has eliminated the timeouts, though figuring out how to configure it in Apache Tomcat took some hunting around.
Tomcat server.xml
Tomcat's conf/server.xml file manages most of its configuration. At first glance, adding compression="on" would seem to be enough to trigger compression; however, it was not in my case. Here is the successful server.xml modification, followed by explanations.
```xml
<Connector port="80"
           protocol="HTTP/1.1"
           compression="on"
           compressableMimeType="application/json;charset=UTF-8,text/html"
           connectionTimeout="20000"
           redirectPort="8443" />
```
- port="80" was changed from the default "8080" for user convenience.
- compression="on" was added to enable compression.
- compressableMimeType="application/json;charset=UTF-8,text/html" was added. This attribute was the key, particularly application/json. This was not immediately obvious, but in hindsight makes sense: if omitted, the default is "text/html,text/xml,text/plain", which does not include the JSON format that P1TS uses.
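One quick way to confirm the configuration took effect is to request a resource while advertising gzip support, then inspect the response headers. Here is a minimal sketch using java.net.HttpURLConnection; the URL is a placeholder for one of your own endpoints:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class CompressionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: substitute an endpoint from your own deployment.
        URL url = new URL("http://localhost/p1ts/api/standings");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Tomcat compresses only when the client advertises gzip support.
        conn.setRequestProperty("Accept-Encoding", "gzip");

        // "gzip" means the Connector settings took effect; null means the
        // response went out uncompressed.
        System.out.println("Content-Encoding: " + conn.getHeaderField("Content-Encoding"));
        System.out.println("Content-Type:     " + conn.getHeaderField("Content-Type"));
        conn.disconnect();
    }
}
```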
After making the changes to server.xml and restarting Tomcat, the 55 KB responses were reduced to 9 KB, about 16% of the uncompressed size. When the stress tests were run again, the timeouts disappeared. According to Windows Task Manager, the web service Toughbook's CPU runs at about 40%-60%, so the additional CPU cost of compression is not yet a concern.
If you are writing web services on Tomcat that serve JSON responses and haven't implemented HTTP compression, give it a try now that you know how.
Let me know what you think of this article and your experiences.