New Trends in Application Monitoring Systems from Velocity 2016

Adil Aijaz on July 14, 2016

Anomaly detection is the gold standard for application performance monitoring (APM) systems. The idea is that if your APM tool can tell you when something is wrong, it multiplies the effectiveness of your site reliability team: unnecessary alerts are never thrown, and humans are not stuck watching dashboards.

So, why is anomaly detection not that common? First, it is really, really hard. Second, its value goes down drastically if it doesn’t control for false positives (incorrectly marked anomalies).

At this year’s O’Reilly Velocity 2016, there were many interesting sessions on anomaly detection in APM. In particular, two of my ex-colleagues from LinkedIn, Ritesh Maheshwari and Yang Yang, gave a fantastic talk on ‘Anomaly Detection for Real User Monitoring Data’. While a video is not yet available, you can see the slides here.

First, a quick explanation of Real User Monitoring (RUM). It is the idea that performance optimizations, whether deep in the backend or in the UI layer, should result in a faster experience for the end user. Hence, it is important to measure performance from the perspective of a real user. Companies achieve this by inserting RUM JavaScript libraries into their apps that measure page load time, client render time, and so on, broken down by dimensions like CDN PoP, geography, and page type.
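To make this concrete, here is a minimal sketch of what a single RUM measurement record might look like. The field names are hypothetical, chosen only to reflect the timings and dimensions mentioned above:

```python
from dataclasses import dataclass

# Hypothetical shape of one RUM beacon: timing metrics captured in the
# user's browser, tagged with the dimensions used for slicing the data.
@dataclass
class RumBeacon:
    page_type: str            # e.g. "profile", "feed"
    geo: str                  # user geography
    cdn_pop: str              # CDN point of presence that served the request
    connection_time_ms: float # time to establish the connection
    first_byte_time_ms: float # time to first byte from the server
    page_load_time_ms: float  # total page load time
    client_render_time_ms: float  # time spent rendering in the browser

beacon = RumBeacon("profile", "US", "sjc1", 85.0, 310.0, 1240.0, 420.0)
```

Aggregating beacons like this per dimension (per PoP, per geo, per page type) is what makes the anomaly detection and root-cause analysis described below possible.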

I’ve highlighted two important topics from their presentation:

#1 Their anomaly detection algorithm was simple yet powerful in detecting sustained anomalies (an anomaly that lasts for a while). Engineers learn from experience that threshold-based anomaly detection is broken: yesterday’s threshold is today’s normal. Ritesh and Yang used a sign test to detect whether, say, page load times today were anomalous compared to yesterday or the same time a week ago. Besides its simplicity, the approach leads to adaptive sustained-anomaly detection, which controls false positives better.
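A minimal sketch of the idea, not their exact implementation: pair each time window today with the same window in the baseline period, count how often today was slower, and compute a one-sided sign-test p-value under the null hypothesis that neither period is systematically slower (so each pair is a fair coin flip):

```python
import math

def sign_test_p_value(today, baseline):
    """One-sided sign test: is today's metric systematically higher
    than the baseline (e.g. the same windows yesterday)?

    today, baseline: equal-length lists of paired samples,
    e.g. median page load time per one-minute window.
    """
    # Ties carry no directional information, so drop them.
    diffs = [t - b for t, b in zip(today, baseline) if t != b]
    n = len(diffs)
    if n == 0:
        return 1.0
    k = sum(1 for d in diffs if d > 0)  # windows where today was slower
    # Under the null, each pair is slower with probability 0.5,
    # so k ~ Binomial(n, 0.5). One-sided p-value for >= k successes:
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Example: median page load times (ms) for 10 one-minute windows.
today = [520, 540, 510, 560, 530, 545, 555, 525, 550, 535]
yesterday = [500, 505, 495, 510, 500, 498, 505, 502, 508, 501]
if sign_test_p_value(today, yesterday) < 0.01:
    print("sustained anomaly: today is significantly slower than yesterday")
```

Because the test only uses the *sign* of each difference, it makes no assumptions about the distribution of load times, and a single noisy window cannot trigger an alert: only a sustained run of slower windows drives the p-value down. That is what makes it adaptive where a fixed threshold is not.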

#2 By connecting RUM with anomaly detection, they were able to quickly determine a high-level root cause. For instance, if the anomaly was in connection time, they could be confident that the problem lay in their network, down to the region or PoP where the problem occurred. Similarly, if the anomaly was in first byte time or page download time, they could be confident that the problem lay on the server side (CDN origin).
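This triage logic can be sketched as a simple lookup from the anomalous RUM component to a likely layer of the stack. The metric names and the fallback case are my own illustration; only the connection-time and server-side mappings come from the talk:

```python
# Hypothetical mapping from the RUM timing component showing an anomaly
# to the layer most likely at fault.
LIKELY_ROOT_CAUSE = {
    "connection_time": "network path (drill down into region / CDN PoP)",
    "first_byte_time": "server side (CDN origin)",
    "page_download_time": "server side (CDN origin)",
}

def triage(anomalous_metric: str) -> str:
    """Return a first guess at the root cause for an anomalous metric."""
    return LIKELY_ROOT_CAUSE.get(anomalous_metric,
                                 "unknown: inspect all layers")
```

The payoff is speed: instead of paging every team, the on-call engineer starts with the layer that the anomalous component already implicates.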

In summary, combining RUM with their anomaly detection approach is very promising, and it offers modern engineering teams an interesting new way to analyze performance.
