
Jaeger vs Zipkin – OpenTracing Distributed Tracers

In the previous three parts of our OpenTracing series, we provided an overview of OpenTracing – what it is, how it works, and what it aims to achieve – then looked at Zipkin, a popular open-source distributed tracer, and finally at Jaeger, a newer open-source distributed tracer developed under the CNCF umbrella. In this blog post – the last part of the OpenTracing series – we compare Jaeger and Zipkin side by side!

Prefer PDFs? Get the whole OpenTracing series as PDF: free OpenTracing eBook. Alternatively, follow @sematext if you are into observability in general.

Distributed tracers: how to debug through a complex workflow

As organizations embrace the cloud-native movement and migrate their applications from monolithic to microservice architectures, the need for visibility and observability into software behavior becomes essential. Once a monolithic code base is split into multiple independent services, each running in its own process and potentially scaled across many instances, diagnosing the latency of a client's HTTP request turns from a trivial task into a serious undertaking.

To be fulfilled, a request has to propagate through load balancers, routers, and gateways, cross machine boundaries to reach other microservices, send asynchronous messages to message brokers, and so on. Any component along this pipeline can introduce a bottleneck, contention, or a communication issue.
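What makes it possible to stitch these hops into a single trace is that each service forwards the trace context with every outgoing request. The sketch below is a toy illustration, not any tracer's real client code; it uses Zipkin-style B3 header names (which do exist in Zipkin's propagation format) to show how the trace ID survives a process boundary while each service mints its own span ID:

```python
import uuid

# Zipkin-style B3 headers carry the trace context across process boundaries.
TRACE_HEADER = "X-B3-TraceId"
SPAN_HEADER = "X-B3-SpanId"

def inject(trace_id, span_id, headers):
    """Attach the current trace context to an outgoing request's headers."""
    headers[TRACE_HEADER] = trace_id
    headers[SPAN_HEADER] = span_id
    return headers

def extract(headers):
    """Receiving side: continue the same trace, but start a fresh child span
    whose parent is the caller's span."""
    trace_id = headers.get(TRACE_HEADER) or uuid.uuid4().hex[:16]
    parent_id = headers.get(SPAN_HEADER)
    child_id = uuid.uuid4().hex[:16]
    return {"trace_id": trace_id, "span_id": child_id, "parent_id": parent_id}

# Service A makes a request...
outgoing = inject("4bf92f3577b34da6", "00f067aa0ba902b7", {})
# ...and service B continues the same trace with a new child span.
ctx = extract(outgoing)
```

Because every hop repeats this inject/extract dance, the backend can later reassemble all spans sharing a trace ID into one end-to-end view of the request.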

Debugging such a complex workflow would not be feasible without some kind of tracing/instrumentation mechanism, which is why distributed tracers like Zipkin, Jaeger, or Appdash were born (most of them inspired by Dapper, Google's large-scale distributed tracing platform). All of these tracers help engineering and operations teams understand system behavior as the complexity of the infrastructure grows.

Tracers expose the source of truth for interactions originating within the system. Every transaction, if properly instrumented, might reveal performance anomalies at an early stage, as new services are introduced by (probably) independent teams with polyglot software stacks and continuous deployments.

However, each tracer has historically stuck with its own proprietary API and other peculiarities, making it costly for developers to switch between tracer implementations. Since adding instrumentation points requires code modification, OSS services, application frameworks, and other platforms would have a hard time tying themselves to a single tracer vendor.
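This lock-in is exactly what OpenTracing addresses: instrumentation code targets a neutral span/tracer API, and the concrete tracer (Jaeger, Zipkin via a bridge, etc.) is bound once at startup. The toy sketch below uses hypothetical classes, not the real opentracing package, purely to illustrate the shape of that API:

```python
import time

class Span:
    """Toy stand-in for an OpenTracing span: a named, timed unit of work."""
    def __init__(self, operation_name):
        self.operation_name = operation_name
        self.start_time = time.time()
        self.tags = {}
        self.finished = False

    def set_tag(self, key, value):
        self.tags[key] = value
        return self

    def finish(self):
        self.duration = time.time() - self.start_time
        self.finished = True

class NoopTracer:
    """Any OpenTracing-compatible tracer could be swapped in behind this
    interface without touching the instrumented application code."""
    def start_span(self, operation_name):
        return Span(operation_name)

tracer = NoopTracer()  # in production, a Jaeger or Zipkin-backed tracer
span = tracer.start_span("fetch_user")
span.set_tag("http.status_code", 200)
span.finish()
```

The point is the indirection: application code only ever calls `start_span`, `set_tag`, and `finish`, so replacing the tracer implementation is a one-line change at initialization.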

 

“Tracers expose the source of truth for interactions originated within the system. Every transaction, if properly instrumented, might reflect performance anomalies early.”

Jaeger vs Zipkin: comparison matrix

Jaeger vs Zipkin, or Zipkin vs Jaeger? The following comparison matrix puts the two tracing systems side by side. As the table below shows, Jaeger has better OpenTracing support and a greater diversity of OpenTracing-compatible clients for different programming languages. This is due to Jaeger's decision to adhere to the OpenTracing initiative from its inception.

|                                | JAEGER | ZIPKIN |
|--------------------------------|--------|--------|
| OpenTracing compatibility      | Yes | Yes |
| OpenTracing-compatible clients | Python, Go, Node, Java, C++, C#, Ruby *, PHP *, Rust * | Go, Java, Ruby *, C++, Python (work in progress) |
| Storage support                | In-memory, Cassandra, Elasticsearch, ScyllaDB (work in progress) | In-memory, MySQL, Cassandra, Elasticsearch |
| Sampling                       | Dynamic sampling rate (supports rate-limiting and probabilistic sampling strategies) | Fixed sampling rate (supports probabilistic sampling strategy) |
| Span transport                 | UDP, HTTP | HTTP, Kafka, Scribe, AMQP |
| Docker ready                   | Yes | Yes |

* non-official OpenTracing clients
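To make the sampling comparison more concrete: a probabilistic sampler admits each trace independently with a fixed probability, while a rate-limiting sampler caps the number of sampled traces per second regardless of traffic volume. The sketch below is illustrative only; the class names are invented and do not mirror Jaeger's or Zipkin's actual sampler implementations:

```python
import random
import time

class ProbabilisticSampler:
    """Admit each new trace independently with a fixed probability."""
    def __init__(self, rate):
        self.rate = rate  # e.g. 0.001 keeps roughly 0.1% of traces

    def is_sampled(self):
        return random.random() < self.rate

class RateLimitingSampler:
    """Token bucket: admit at most max_per_second traces per second."""
    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.balance = float(max_per_second)  # start with a full bucket
        self.last = time.time()

    def is_sampled(self):
        now = time.time()
        # Refill tokens proportionally to the elapsed time, capped at the max.
        self.balance = min(self.max_per_second,
                           self.balance + (now - self.last) * self.max_per_second)
        self.last = now
        if self.balance >= 1:
            self.balance -= 1
            return True
        return False

always = ProbabilisticSampler(1.0)  # samples every trace
never = ProbabilisticSampler(0.0)   # samples no traces
limiter = RateLimitingSampler(1)    # roughly one trace per second
```

A "dynamic sampling rate" in this context means the tracer can adjust such parameters at runtime (for example, per service or operation) rather than fixing them at startup.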

 

“Despite Zipkin being around for a while longer and being more mature, Jaeger has seen some good adoption thanks to several factors, such as good language coverage of OpenTracing-compatible clients, low memory footprint, and a modern, scalable design.”

OpenTracing: Supported instrumentation

Many frameworks and libraries ship with native OpenTracing instrumentation support or have extension points that add tracing capabilities.

Other tracers compatible with OpenTracing

Tracer – designed after Dapper, not production ready.

Lightstep – cloud-based commercial tracing instrumentation platform.

Appdash – based on Zipkin and Dapper. Limited client availability (Go, Python, and Ruby).

Instana – commercial product. Focused on APM and distributed tracing.



Free OpenTracing eBook

Want useful how-to instructions and copy-paste code for tracer registration? We've prepared an OpenTracing eBook which puts all the key OpenTracing information at your fingertips: it introduces OpenTracing, explains what it is and how it works, covers Zipkin and then Jaeger, and finally compares Jaeger vs. Zipkin. Download yours.


Conclusion

In an era where software complexity is increasingly overwhelming, containers have added yet another layer of complexity with their highly dynamic and ephemeral nature. In such environments, where deployments are pushed to production on a daily basis, it is crucial to have full operational visibility into the complete infrastructure stack. In this blog post series, we described the importance and benefits of tracing – one of the core pillars of modern application observability – in the context of OpenTracing. We also explored how distributed tracers such as Zipkin and Jaeger collect and store traces while revealing inefficient portions of our applications. Finally, we compared Jaeger and Zipkin. Despite Zipkin being around longer and being more mature, Jaeger has seen good adoption thanks to several factors, such as broad language coverage of OpenTracing-compatible clients, a low memory footprint, and a modern, scalable design.

 

2 thoughts on “Jaeger vs Zipkin – OpenTracing Distributed Tracers”

  1. Hi,

    A very interesting article for newbies in the distributed tracing area.
    A bit curious to know about the memory footprint comparison between Jaeger and Zipkin. Could you explain how it was confirmed, and help us understand the test execution and test environment?

    Thanks,
    Teja

    1. Hi Tejaswini,

      Zipkin has to deal with the overhead of the JVM. For example, spinning up the Zipkin server on my machine takes around 500MB of RSS. We know that's because the JVM preallocates memory for the heap, internal data structures, and the metaspace. Conversely, the Jaeger collector takes only around 12MB of memory and is able to process a considerable volume of traces without skyrocketing RAM usage. Ideally, the Jaeger collector shouldn't consume more than 20-30 MB of RAM. Hope this answers your question.
