
How to Stop Memory Leaks Before They Crash Your Linux System

February 5, 2025


Imagine you’ve got a leaky faucet in your kitchen. 

At first, it’s just a drip here and there—annoying, sure, but not enough to ruin your day. But leave it unchecked, and soon that drip turns into a steady trickle. Your water bill skyrockets, the sink overflows, and before you know it, you’re ankle-deep in chaos.

Now, replace that faucet with a Linux system, and you’ve got a memory leak. 

At first, you might not even notice, as you will see in one of the screenshots further down.

But as time passes, that “drip” of wasted memory builds up. Your system slows down, processes start to crash, and eventually, everything grinds to a halt.

Why do memory leaks matter? 

Because they’re silent troublemakers that can wreak havoc if ignored. 

For businesses, this means downtime, frustrated users, and sleepless nights for whoever’s on call. Not to mention the stress of figuring out what went wrong while everything was on fire.

But here’s the good news: just like fixing that leaky faucet, memory leaks are preventable. All it takes is the right tools, a proactive mindset, and a bit of know-how.

In this guide, we’ll tackle memory leaks head-on. We’ll cover everything from spotting the first signs of trouble with basic tools like top to diving deep into memory usage with gdb and memleax.

Plus, we’ll explore how to prevent leaks before they even start—because wouldn’t it be nice to avoid the mess altogether?

Understanding Memory Leaks

Before we dive into fixing memory leaks, let’s make sure we fully understand the culprit. 

What Causes Memory Leaks?

Think of memory in a system as a shared workspace. 

Every program grabs the tools (memory) it needs to do its job. A well-behaved program will clean up after itself, putting the tools back when it’s done. But sometimes, a program forgets to return what it borrowed—whether because of a bug, poor memory management, or just bad habits. That’s a memory leak.

Some common causes include:

  • Code Bugs: Oversights where developers forget to free allocated memory or hold on to resources unnecessarily.
  • Long-Running Processes: Services that accumulate memory over time without resetting or releasing it.
  • Inefficient Memory Management: Mismanagement of memory allocation, particularly in languages like C or C++ where developers handle memory manually.

It’s not always a developer’s fault. 

Even a well-written program can leak memory if it’s working with buggy libraries or gets stuck in a weird edge case.

Symptoms of Memory Leaks

So, how do you know you’ve got a memory leak? Here are a few unmistakable signs:

1. Gradual Memory Usage Increase

Your system’s memory usage grows steadily, even if workloads stay the same. It’s like watching your gas tank slowly empty while your car is parked.

Here is a real-life example of a process that gradually uses more and more memory over time. 

The charts below are from Sematext Cloud, which will help you expose such memory-leaking processes. Pay attention to the RSS Memory chart. 

This is a process that uses more and more memory. It leaks memory. 

You see how after about 4 days of slow growth the memory suddenly drops? 

See how those memory drops correlate to the OOM Killer events? 

Those RSS Memory drops are caused by the Linux kernel OOM Killer that kicks in and kills this process. The process then restarts, but because it still has a memory leak, its RSS Memory usage starts slowly going up again. 

So this is what you want to look for in your monitoring. Note how the rise in memory usage is very gradual and very slow. This makes it easy to miss – it doesn’t trip any anomaly detection alerts.

The lesson here is that you want to look for OOM Killer events. Hosts with such events either simply don’t have enough memory or they are running processes/services/applications that leak memory. See Linux events in Sematext for more info.
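If you don’t have such charts in place yet, even a tiny shell loop can approximate this kind of tracking. Here’s a minimal sketch (the PID 12345 is hypothetical) that logs a process’s RSS once a minute, so you can later eyeball the log for slow growth:

while true; do
  # ps reports RSS in kilobytes; record it with a timestamp
  echo "$(date '+%F %T') $(ps -o rss= -p 12345)" >> rss-log.txt
  sleep 60
done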

2. Performance Degradation

Over time, processes slow down as the system struggles to allocate enough memory.

3. Unexpected Crashes

When the system runs out of memory, it may terminate processes—sometimes abruptly—leading to application failures or even a system crash. If you’ve ever had a critical process killed by the OOM killer, you know how frustrating it can be.

Understanding these symptoms is the first step in diagnosing the problem.

Tools and Techniques for Monitoring Memory Leaks

As shown in the previous section, the scalable way to find memory-leaking processes is with modern monitoring tools that capture this sort of information and can alert you to events like the OOM Killer killing a process.

For example, those 2 OOM Killer events you saw above can be seen in the list of events in Sematext. This is the sort of screen you may want to look at periodically for your infra.

That said, here is a list of good Linux tools for monitoring memory usage and how to look for memory leaks with these tools.

Basic Monitoring with top and htop

When you need a quick overview of your system’s performance, top and htop are invaluable tools for monitoring real-time system resource usage, including memory consumption.

  • top: A straightforward tool that provides essential system metrics.

  • htop: A more modern, user-friendly alternative with a colorful interface, better navigation, and customization options. It allows you to sort processes with a single keystroke and tailor the view to your needs.

Key Metrics to Keep an Eye On:

  • RES (Resident Memory): The physical memory actively used by a process. A steady increase in this value could indicate a memory leak.
  • VIRT (Virtual Memory): The total memory accessible to the process, including swapped-out and reserved memory.
  • %MEM: The percentage of total physical memory consumed by a process.

Whether you stick with top or switch to htop, these tools provide vital insights into system performance and can help identify potential issues like memory leaks quickly and efficiently.
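For example, to land directly on the biggest memory consumers, you can start either tool pre-sorted by memory. The exact flags can vary between versions, but on procps-ng top and recent htop builds they look like this:

top -o %MEM
htop --sort-key PERCENT_MEM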

Deep Dive into the /proc Filesystem

If tools like top and htop are your binoculars, the /proc filesystem is your microscope. It allows you to closely examine individual processes and understand how they use memory.

How It Works

Here’s a mini tutorial: we’ll start a background sleep command and then inspect its details through /proc:

Step 1: Run a Background Sleep Command

sleep 600 &

Take note of the Process ID (PID) shown (e.g., 12345).
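If the PID scrolls by too fast, note that bash keeps the PID of the most recent background job in $!:

echo $!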

Step 2: Verify the Process

Check that the process is running:

ps aux | grep sleep

Step 3: Inspect the Process Using /proc

cat /proc/<PID>/status

Replace <PID> with the process ID you noted in Step 1.
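The status file is long, so it helps to filter for just the memory-related fields:

grep -E '^(VmRSS|VmSize|VmData)' /proc/<PID>/status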

Key Metrics to Explore

  • VmRSS (Resident Set Size): The amount of physical memory currently in use by the process.
  • VmSize: The total virtual memory allocated to the process.
  • VmData: Memory consumed by the heap and global variables—often a key area to investigate for potential memory leaks.

Memory Mapping with pmap

If /proc gives you raw data, pmap organizes it into a structured memory map for each process. It displays the memory layout, breaking it into segments like heap, stack, and shared libraries. This is useful because memory leaks often occur in specific regions (e.g., heap or anonymous memory), and pmap helps identify where the growth is happening.

How It Works

Run the following command: 

pmap <PID> | grep total

This gives a quick summary of a process’s memory usage. Combine this with periodic checks to spot unusual growth patterns.
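A simple way to do those periodic checks is a small logging loop. This sketch (again with a hypothetical PID of 12345) appends a timestamped total every five minutes:

while true; do
  # grab the "total" line from pmap and tag it with the current time
  echo "$(date '+%F %T') $(pmap 12345 | grep total)" >> pmap-log.txt
  sleep 300
done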

Advanced Analysis with smem

For a deeper understanding of memory usage, smem is the tool to reach for.

Unlike other tools, smem differentiates between unique and shared memory. It calculates:

  • USS (Unique Set Size): Memory exclusively used by a process.
  • PSS (Proportional Set Size): Shared memory divided proportionally among sharing processes.
  • RSS (Resident Set Size): Physical memory used, including shared memory.

This makes smem great for figuring out which processes are the real memory hogs versus those just sharing space.

How It Works

Run the following command:

smem -tk

This shows a breakdown of memory usage for all processes. You can sort by USS to find the greediest ones, as shown below.
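For example, most smem builds accept sort options, so you can list the biggest unique-memory consumers first:

# sort by USS, reversed so the greediest processes come first
smem -tk -s uss -r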


Detecting and Stopping Memory Leaks in Real-Time

Now that you know how to monitor memory usage, it’s time to shift gears and focus on detecting leaks in real time, so you can stop potential memory disasters in their tracks.

Automated Detection with Valgrind

Valgrind is like having a plumber constantly checking for leaks in your pipes. It’s a powerful tool for identifying memory leaks and tracking how your program manages memory.

How It Works:

Valgrind runs your program on a synthetic CPU, intercepting memory-related calls like malloc and free. It checks whether all allocated memory is eventually freed and reports any leaks or errors.

Installation:

On Debian-based distributions such as Ubuntu, you can install it easily:

sudo apt install valgrind

Usage:

To detect memory leaks, run your program through Valgrind:

valgrind --leak-check=full ./your_program

Replace ./your_program with the executable of the application you want to debug.

Valgrind outputs detailed reports about memory leaks, including the size of the leaked memory and the exact code line where the allocation happened.
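If the default report isn’t detailed enough, a couple of extra flags make the output easier to act on. In the leak summary, the “definitely lost” entries are the ones that almost always point to real leaks:

# --show-leak-kinds=all also lists still-reachable blocks;
# --track-origins=yes explains uninitialized-value errors (at the cost of slower runs)
valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes ./your_program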

Debugging with gdb

For deeper insights and debugging, use the GNU Debugger (gdb). It’s especially useful for inspecting memory and understanding what’s happening under the hood.

How it Works

Step 1: Attach to a Running Process:

Attach gdb to a process with:

gdb -p <PID>

Replace <PID> with the process ID of the application.

Step 2: Inspect the Heap:

Once attached, you can view glibc’s heap statistics (casting the call to void keeps gdb happy when it has no debug symbols for the function’s return type):

(gdb) call (void) malloc_stats()

This prints the state of memory allocation and usage to the target process’s stderr, so look for the output there rather than in the gdb session.

Step 3: Set Breakpoints for Memory Operations:

Use breakpoints to monitor memory-related functions like malloc or free:

(gdb) break malloc
(gdb) break free

This allows you to trace how memory is being allocated and freed in real time.
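You can take this further by putting those breakpoints in a script so gdb prints a short backtrace at every allocation. Fair warning: breaking on every malloc is very slow, so treat this as a sketch for short debugging sessions (the PID is hypothetical):

cat > trace-malloc.gdb <<'EOF'
break malloc
commands
  bt 3
  continue
end
continue
EOF
gdb -p 12345 -x trace-malloc.gdb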

Proactive Monitoring and Prevention

Detecting memory leaks is great, but what we really want is to not have them in production at all. In this section, we’ll explore strategies and tools for catching memory leaks before they ever reach production.

Regular Testing and Logging

Regular testing in a non-prod environment can simulate production-like conditions and expose leaks early.

  • Load Testing: Use tools like Apache JMeter, Locust, or stress to simulate heavy workloads on your applications. Monitor memory usage during these tests to spot gradual increases that indicate leaks.
  • Memory Metric Monitoring: Track memory usage metrics to detect potential leaks and inefficiencies. Key indicators to monitor include:
    • Gradual memory growth: A steady increase in memory consumption over time without corresponding decreases.
    • Sudden spikes in memory usage: Sharp increases during specific operations.
    • High memory utilization trends: Persistent high memory usage that doesn’t normalize, signaling a potential issue.

Pro Tip: Combine load testing with performance/memory usage metrics monitoring tools like Sematext to gain deeper insights into memory usage under stress. 
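As a quick illustration, the stress tool mentioned above can put artificial memory pressure on a box while you watch your metrics. This run (the numbers are just an example) spins up two workers that each allocate and touch 256 MB for one minute:

stress --vm 2 --vm-bytes 256M --timeout 60s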

System Configurations

Sometimes, a little system-level tweaking can go a long way in mitigating memory leaks. Linux provides several tools to enforce limits and prevent runaway processes from consuming all available memory.

  • ulimit: Set per-process resource limits from the shell. For example, you can cap the maximum virtual memory a process can use (in kilobytes):
ulimit -v <max-memory-in-kilobytes>
  • cgroups: Use control groups to limit and isolate resource usage for specific processes. For example, create a cgroup that limits memory (the commands below target cgroup v1; a cgroup v2 variant is sketched after this list):
cgcreate -g memory:/mygroup
echo 500M > /sys/fs/cgroup/memory/mygroup/memory.limit_in_bytes

Then, attach processes to this cgroup to enforce the limit.

cgclassify -g memory:/mygroup <PID>
  • Monitoring with Observability tools: Once system-level controls are configured, you can set up real-time monitoring and alerts for memory usage trends using tools like Sematext that can do process monitoring.
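Note that the cgcreate commands above target cgroup v1. If your distribution uses cgroup v2 (most recent ones do), the equivalent looks roughly like this, assuming the unified hierarchy is mounted at /sys/fs/cgroup and the memory controller is enabled:

sudo mkdir /sys/fs/cgroup/mygroup
# hard memory limit for the group
echo 500M | sudo tee /sys/fs/cgroup/mygroup/memory.max
# attach a process to the group
echo <PID> | sudo tee /sys/fs/cgroup/mygroup/cgroup.procs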

Best Practices in Development

Most memory leaks can be traced back to code. Writing efficient, leak-free code is your best long-term defense.

Here are some effective practices:

  • Free memory explicitly in languages like C and C++ (use free()).
  • Use smart pointers in modern C++ (e.g., std::unique_ptr, std::shared_ptr) or rely on garbage-collected languages like Java.
  • Avoid holding onto references unnecessarily in languages like Python or JavaScript.
  • Use static analysis tools that can detect potential memory leaks during development, like cppcheck and SonarQube.
  • Peer reviews can help catch bad memory practices. Pair this with unit tests that track memory usage over time.

Treat memory leaks like pests—eliminate them early during development rather than waiting until they infest your system.

Conclusion

Memory leaks may start small, but their impact can snowball into significant issues if ignored. By combining basic tools like top and htop with advanced options like Valgrind and proactive strategies like load testing and efficient coding, you can identify and prevent leaks early.

For deeper visibility, monitoring tools like Sematext provide comprehensive performance monitoring, helping you track memory usage trends and spot anomalies before they escalate. Staying vigilant and adopting these practices will keep your Linux systems stable, efficient, and ready for any workload. Fix the leaks before they cause damage—and save yourself a lot of unnecessary headaches.
