

The Three Pillars of Observability


What Are the Three Pillars of Observability?

The three pillars of observability refer to logs, metrics, and traces: three types of data output that give you complete visibility into the state of your systems, hybrid infrastructures, or applications. They help you figure out how your system failed and track problems back to their source. Together, the three pillars can boost DevOps teams' output and improve the system's usability for end users.


Logs

Log files are the historical records of your systems. They are timestamped and usually come in binary, plain-text, or structured format. Structured logs combine text with metadata to facilitate faster querying.

When a system errors or malfunctions, admins look at the logs first to see what went wrong. Logs track events and can reveal anomalous activity, such as a security breach in your infrastructure's load balancers or caching systems. They can also help answer who accessed a resource, as well as what, when, and how it was accessed.
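As a rough illustration of how structured logs combine text with queryable metadata, here is a minimal sketch using Python's standard logging module. The field names (user_id, component) and the "shop" logger are hypothetical, not part of any particular logging standard:

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line: the message plus searchable metadata."""
    def format(self, record):
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "message": record.getMessage(),
            # extra metadata attached via the `extra=` kwarg, if present
            "user_id": getattr(record, "user_id", None),
            "component": getattr(record, "component", None),
        }
        return json.dumps(payload)

logger = logging.getLogger("shop")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits a single JSON line that a log management tool can index and query
logger.info("login failed", extra={"user_id": "u42", "component": "auth"})
```

Because every line is valid JSON with consistent keys, a query like "all ERROR events for component=auth" becomes a simple filter rather than a free-text search.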

Benefits of Logs

Logs are typically used for debugging and troubleshooting, but they provide more benefits than that.

  • Alerting and monitoring: You can set alerts on specific logs or logging patterns so that issues are flagged before users are affected. Monitoring features also let engineers explore event data via search and curated views.
  • Resource management and troubleshooting: Admins use log data to monitor events and patterns across systems, spot real-time anomalies or inactivity that reflect system health, detect configuration or performance issues, and perform troubleshooting and root cause analysis.
  • Regulatory compliance and SIEM: Automating the collection, analysis, and correlation of log data from numerous security systems and devices helps IT departments meet regulatory compliance requirements and boosts productivity. SIEM systems manage logs, alerts, real-time analysis, and workflow.
  • Business analytics: Vital business data, such as business process health, customer service level agreements (SLAs), or hourly revenue, can be mined from log data whenever specific business goals or criteria are met.
  • Marketing insights: Log analysis can give digital marketers insight into their campaigns' visibility, traffic, conversions, and sales. For example, it can show how bots explore your website, surfacing new SEO opportunities such as distinguishing helpful from meaningless content or revealing which pages Google has and hasn't crawled. It can also improve forecasting and decision-making, alert on critical events or patterns, and make websites easier to track.

To fully leverage the benefits of logs and avoid their limitations, read the article on log management and monitoring best practices.

Limitations of Logs

While logs come with a long list of benefits, they also have their limitations.

  • Large volume of data: Although logging every event helps explain the system's present condition, it also increases data storage needs. This is especially true for microservices-heavy systems, where logs don't reflect concurrency across services. To get the most out of the events, businesses need a log management tool that helps collect only critical log data.
  • Increased cost: Keeping extensive logs for an extended period can be expensive because of the space needed to store all the data. For example, to manage increased client activity, adding new containers or instances increases logging and storage costs.
  • Performance-related issues: While generating a log is simple, excessive logging may slow down the application unless the logging library supports dynamic sampling. In addition, if no protocol is in place for delivering event logs, they may get lost in the system. For example, if an online store loses event logs containing payment or other time-sensitive information, it becomes hard to keep track of members' subscriptions or placed orders.
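One common way to tame log volume, assuming a severity-based policy, is to keep every high-severity event but only a sample of routine ones. A minimal sketch follows; the class name and the sampling rate are illustrative, not a real library's API:

```python
import random

class SampledLogs:
    """Keep every WARNING/ERROR event, but only a fraction of routine INFO
    events, to cap storage growth as traffic increases."""
    def __init__(self, info_sample_rate=0.1, seed=None):
        self.rate = info_sample_rate
        self.rng = random.Random(seed)   # seeded for a reproducible demo
        self.kept = []

    def record(self, level, message):
        if level in ("WARNING", "ERROR") or self.rng.random() < self.rate:
            self.kept.append((level, message))

logs = SampledLogs(info_sample_rate=0.01, seed=0)
for i in range(10_000):
    logs.record("INFO", f"request {i} served")
logs.record("ERROR", "payment gateway timeout")

# Roughly 1% of the INFO lines survive, plus every ERROR
print(len(logs.kept))
```

The trade-off is visible in the code: storage drops by two orders of magnitude for routine events, while nothing critical is lost, because severity is checked before the sampling roll.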


Metrics

Metrics are numerical values that show how well a system is doing over a period of time. A metric's default structure includes a set of attributes: name, value, labels, and timestamp. This structure facilitates faster and easier querying and optimizes storage, allowing you to retain metrics longer.

Metrics usually serve as key performance indicators: for example, CPU usage, memory usage, error rate, network latency, or anything else that gives insight into the health and performance of your system. When monitoring a website, response time, peak load, and the number of HTTP requests served are some of the KPIs that can be tracked.

Metrics can help identify problem severity, provide visibility and insight, and trigger alerts when attributes exceed a threshold. These indicators assist teams in measuring system performance and seriousness. For instance, while monitoring the website's service requests per second, you notice a sudden spike in the system's traffic. Metrics can help explain the surge, which might be due to incorrect service configuration, malicious behavior, or design issues, thus assisting in identifying and assessing the problem.
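The name/value/labels/timestamp structure described above can be sketched as a tiny counter metric. The metric name http_requests_total and its labels are illustrative (loosely following common monitoring conventions), not tied to any specific tool:

```python
import time
from collections import defaultdict

class Counter:
    """A minimal metric: each data point is (name, value, labels, timestamp)."""
    def __init__(self, name):
        self.name = name
        self.values = defaultdict(float)   # one running total per label set

    def inc(self, amount=1.0, **labels):
        key = tuple(sorted(labels.items()))
        self.values[key] += amount

    def collect(self):
        """Snapshot all series with a shared timestamp, ready to ship."""
        now = time.time()
        return [(self.name, v, dict(k), now) for k, v in self.values.items()]

http_requests = Counter("http_requests_total")
http_requests.inc(method="GET", status="200")
http_requests.inc(method="GET", status="200")
http_requests.inc(method="POST", status="500")

for name, value, labels, ts in http_requests.collect():
    print(name, value, labels)
```

Note how compact the storage is: thousands of requests collapse into a handful of numeric series, which is why metrics stay cheap to keep and query as traffic grows.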

Benefits of Metrics

Metrics mainly help alert DevOps and other teams to possible errors, and they provide the following benefits:

  • Improve user experience: Metrics can measure user engagement with your brand and products. Measures like revenue per user (RPU), average order value (AOV), cost per install (CPI), and others can provide insight into user satisfaction, involvement, and loyalty.
  • Business execution: Metrics are easy to store and query because they are numerical and don't put a heavy load on the system. As a result, they are great for dashboards of historical data, especially for monitoring KPIs in real time and analyzing data over time for patterns.
  • Lower cost: Metrics are more cost-effective than logs since their price does not rise according to the number of users or other system activity that generates a large amount of data.
  • Fast processing despite increased storage: Metrics do not increase disk utilization, processing complexity, visualization speed, or operational costs like logs do when application traffic increases. Client-side aggregation can ensure that metric traffic doesn’t grow at the same rate as user traffic.
  • Alerts: Metrics are better for sending alerts because running queries against an in-memory, time-series database is more efficient, reliable, and cost-efficient than running queries against a distributed system like Elasticsearch and then adding up the results to decide if an alert should be sent.
  • System health and performance insight: Metrics quantify system performance over time and provide valuable, actionable insights. They also enable real-time system monitoring via alerts, for example when the system is down or overloaded. They can even detect anomalies and set new baselines for future goals.
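Threshold-based alerting on metrics, as described in the bullets above, can be as simple as the following sketch. The metric names and the 5.0 threshold are made up for illustration:

```python
def check_threshold(samples, threshold):
    """Return an alert for every sample exceeding the threshold.
    Deliberately simple; real alerting adds time windows and hysteresis."""
    return [(name, value) for name, value in samples if value > threshold]

# A snapshot of error rates (%) per service, as numeric metric samples
samples = [("checkout_error_rate", 0.4), ("search_error_rate", 7.2)]
alerts = check_threshold(samples, threshold=5.0)
print(alerts)  # [('search_error_rate', 7.2)]
```

Because the comparison runs over a handful of numbers rather than a distributed log store, this kind of check is cheap enough to evaluate every few seconds, which is exactly why metrics are preferred for alerting.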

Limitations of Metrics

Like the other observability pillars, metrics have their own limitations. Here are some of them:

  • System scoped: Application logs and metrics are system-scoped, making it difficult to determine what's happening outside a single system. Metrics can also be request-scoped, but this requires more labels, which in turn requires more metric storage.
  • High cardinality slows performance: In a system with thousands or millions of users, a single user identity is high-cardinality data. High-cardinality data is harder to query and may slow down your monitoring tools. Tags with high cardinality can have thousands of combinations, and the computing power and storage needed to hold all that data can make the system costly.
  • Multi-format data storage: Using metrics can be challenging because of the wide variety of data and analysis methods involved. There's also the question of how to store information obtained from various sources and in multiple formats. Nonetheless, the time spent working out how to apply the data is well rewarded.
  • Limited diagnosis capabilities: It is challenging to diagnose an event using only metrics. For example, in a real-world scenario where a tag could have hundreds of values, it is difficult to group metrics by tags, filter tags, and iterate on the filtering process. In addition, adding tags significantly increases the cardinality of your data set.

Metrics vs. Logs

Although both logs and metrics are timestamped, searchable, and provide actionable insights, they differ significantly in terms of the value they provide. Metrics can monitor progress, flag noteworthy occurrences, and foretell failures. They are better for alerting than logs because they are numerical and can apply a simple threshold.

While debugging is the most common use for logs, there are many other situations in which they can help, such as user behavior analysis and monitoring application performance. Logs can tell you what's going on if you know where to look. Metrics excel at proactive searches and alerts; logs are fantastic for reactive searches and digging deeper into individual systems. In short, metrics are great for spotting outliers and analyzing trends, whereas logs are best for forensic analysis. For instance, if a resource has stopped working, logs can offer better insight into why it stopped.

In the case of an online store, the inclusion of metrics in a dashboard can enhance both data visualization and the capacity for problem-solving. Unlike logs, they don’t require more space or resources as the number of customers increases. Metrics may grow when you add containers or instances, but their compact structure makes them more cost-effective in comparison to logs.

Distributed Traces

The last of the three observability pillars, traces, are a way to record the actions a user performs while using your application or service. Distributed tracing is a method of observing requests as they move through distributed environments. For instance, a distributed trace follows a request from the user interface (frontend) to the backend systems and back to the user. In other words, traces record every step a request takes, from the moment a user opens a tab in your app to the response rendered in the GUI.

Traces aid in finding bottlenecks in an application and can be used to detect, characterize, and rank them so that they may be optimized. In addition, they help debug and monitor applications that use multiple resources (mutex, disk, network). Because they provide context for the other pillars, traces are an essential part of observability. For instance, site reliability engineers (SREs) and other ITOps and DevOps teams can analyze a trace to find the most useful metrics or logs related to the issue at hand.
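The frontend-to-backend flow described above can be sketched as trace context propagation: every span in a request carries the same trace_id, and each child span records its parent's span_id so the full path can be reconstructed. All names here are hypothetical:

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class Span:
    trace_id: str            # shared by every span in one request
    span_id: str             # unique to this operation
    parent_id: Optional[str] # links the span to its caller
    name: str

def start_trace(name):
    """Root span: a fresh trace_id and no parent."""
    return Span(uuid.uuid4().hex, uuid.uuid4().hex[:16], None, name)

def child_span(parent, name):
    """A downstream service continues the same trace, linking via parent_id."""
    return Span(parent.trace_id, uuid.uuid4().hex[:16], parent.span_id, name)

frontend = start_trace("GET /checkout")
api = child_span(frontend, "checkout-service")
db = child_span(api, "INSERT order")

# Every hop shares one trace_id, so the whole request can be stitched together
assert frontend.trace_id == api.trace_id == db.trace_id
```

In a real system, trace_id and parent span_id would travel between services in request headers; the shared trace_id is what lets a tracing backend reassemble frontend, API, and database work into one timeline.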

Benefits of Traces

DevOps, ITOps engineers and SREs use distributed tracing to achieve the following benefits:

  • Resolve user complaints: If a customer reports a slow or faulty application feature, the support team can check the distributed traces. Engineers can then use an end-to-end distributed tracing tool to study frontend performance issues from the same platform.
  • Service relationships: Distributed traces help developers optimize service performance by understanding cause-and-effect interactions. For example, looking at a span made by a call to a database might show that adding a new database entry causes a service further upstream to run slowly.
  • Track user activity: Engineers can use traces to measure the time it takes to accomplish critical user actions like payment. In addition, they help identify backend bottlenecks and issues that hurt user experience.
  • Improve productivity and collaboration: In microservice architectures, the services that make up a request may be run by different teams. Distributed tracing shows which team is responsible for fixing an error.
  • Maintain service level agreements (SLAs): SLAs are performance agreements with clients or internal teams in most companies. Distributed tracing technologies aggregate service performance data to help teams assess SLA compliance.

Limitations of Traces

Though traces can help improve user experience and pinpoint the root cause of a problem at the application level, they have their own limitations when it comes to performance and implementation:

  • Manual code modification: Some distributed tracing platforms require manual code modification to start tracing requests. Manual instrumentation costs engineering time and introduces errors, yet it is often unavoidable depending on the language or framework you wish to instrument. Inconsistent instrumentation across services can also result in missing traces.
  • Head-based sampling: Traditional platforms for tracing take random samples of traces right when each request starts. Unfortunately, this method leaves incomplete traces. In addition, this head-based sampling can miss significant business traces, such as high-value transactions or enterprise client requests.
  • Only backend coverage: If you don’t use an end-to-end distributed tracing platform, a trace ID for a request is only made when it reaches the first backend service. Frontend user sessions aren’t visible. As a result, it’s more work to pinpoint a request’s fundamental cause and whether a frontend or backend team should fix it.
  • Different programming languages: Tracing can be difficult if your system uses multiple programming languages or frameworks. Developers may need a tracing solution for each language or framework in the system, since trace context must be propagated through every service. Heterogeneous systems are therefore harder to retrofit with tracing.
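To see why head-based sampling can miss significant traces, consider this sketch: the keep/drop decision is made when the request starts, before its outcome is known, so error traces are dropped at the same rate as everything else. The rates and seed are arbitrary, for a reproducible demo:

```python
import random

def head_based_keep(rng, sample_rate=0.1):
    """Head-based sampling decides at the first span, before the request's
    outcome is known, so a later error cannot influence the decision."""
    return rng.random() < sample_rate

rng = random.Random(7)
kept_errors = 0
total_errors = 0
for _ in range(1000):
    keep = head_based_keep(rng, sample_rate=0.1)   # decided up front
    request_failed = rng.random() < 0.02           # 2% error out later
    if request_failed:
        total_errors += 1
        kept_errors += keep

print(kept_errors, "of", total_errors, "error traces survived sampling")
```

Tail-based sampling addresses this by buffering spans and deciding after the request completes, at the cost of extra memory and infrastructure.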

Traces vs. Logs

Logging is the practice of gathering and storing the data an app produces during regular operation so that error reports and related data can be monitored centrally. It focuses on program activity. As a result, system administrators can troubleshoot application faults with detailed logs.

Distributed tracing contextually follows a single transaction from endpoint to endpoint. It seeks to pinpoint the exact site of an error, and it helps teams see service interdependencies by highlighting service communication. As a result, teams spend less time locating the source of a problem and more of their troubleshooting time actually fixing it. Additionally, tracing helps companies find and repair app performance issues before customers notice them.

Traces provide context better than logs but don't reveal source-code errors. Tracing tools can generate visualizations automatically, speeding up problem identification and resolution. Traces probe distributed systems more deeply than logs but are less adaptable and adjustable.


How Do Logs, Metrics, and Traces Work Together?

Logs record discrete events, metrics give insight into your system's general health and performance, and traces connect the individual operations behind a request. Together, these three pillars form the backbone of any observable system.

Each of these three components provides essential insight into the system, but the real value lies in monitoring their interdependence through a unified analytics dashboard. Furthermore, observability becomes both more important and harder to establish as a system grows more complex.

A holistic observability strategy based on the three pillars lets teams monitor and fix system issues in real time. They can be notified promptly if a metric deviates from its norm, and they can respond efficiently to alerts and customer reports by analyzing high-cardinality traces and high-granularity logs. Having everything in one place allows them to address issues more rapidly.
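One concrete way the pillars connect, assuming logs and spans share a trace_id field (as W3C-style trace context encourages), is to pivot from a metric alert to the offending trace and from there to the exact log lines. Everything below, from field names to values, is illustrative:

```python
# A shared trace_id ties a log line, a metric increment, and a span together
trace_id = "4bf92f3577b34da6"

log_line = {"timestamp": "2024-03-01T12:00:00Z", "level": "ERROR",
            "message": "payment declined", "trace_id": trace_id}
metric = ("payment_errors_total", 1, {"service": "checkout"})
span = {"trace_id": trace_id, "name": "POST /pay", "duration_ms": 4210}

def find_logs_for_trace(logs, tid):
    """Given a trace_id pulled from a slow or failed span, return the
    matching log lines for root-cause analysis."""
    return [entry for entry in logs if entry.get("trace_id") == tid]

# Workflow: the metric fires an alert, the trace shows which request was
# slow, and the trace_id then surfaces the exact error log
print(find_logs_for_trace([log_line], span["trace_id"]))
```

The key design choice is simply stamping every log line with the active trace_id at write time; without that shared key, correlating the three pillars falls back to error-prone timestamp matching.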

Challenges of the Three Pillars of Observability

Even after implementing all three pillars, a company will still not have solved its most pressing business issues or reached its observability goals. This is due to the constraints of each pillar. Some of the common observability challenges are as follows:

  • Too many potential sources of trouble exist: Problems across several microservices can be traced back to a complex system's interdependencies. Metrics that operate outside of recognized thresholds can flood a monitoring dashboard with an overwhelming number of alarms. As a result, administrators require tools to filter out irrelevant warnings and resolve issues faster.
  • Microservice drill-down: The issue with a complex stack of microservices is that each may depend on its own set of further microservices to function correctly. Whatever observability tool is available, if any of those fail, you have to drill down past the top-level microservices to find where the real problem lies.
  • Need reporting tools for better visualization: A dashboard of metrics, logs, and traces can reveal useful things, such as how measurements distribute across percentiles. However, percentile reporting can hide problems at the tail end of distributions, and developers may only detect such faults by visualizing the complete data set. Reporting tools can bridge the gap between data and visualization.
  • Data stacks: Complex IT environments with many data sources and various logging and monitoring solutions are common in many enterprises. Unfortunately, these unstandardized stacks make it difficult to correlate and analyze data, filter out the right information, and provide situational context for interpretation. Standardizing data can prevent this.
  • Alert fatigue: Observability tools generate many alerts to bring potential problems to the team’s attention. Even though this makes troubleshooting easier, it could also be challenging. Alert fatigue and operational noise can make your staff disregard even the most important notifications. Organizations need a comprehensive alerting system with many alarm paths to avoid this.

In the end, merging all three pillars of observability into a single platform like Sematext can help DevOps teams get an in-depth view of their systems and gain visibility into them. This allows companies to handle issues faster, meet or exceed service level agreements (SLAs), and collaborate to design, release, and enhance systems and applications more quickly.
