

17 Linux Log Files You Must Be Monitoring

Imagine waking up to a critical system failure that has brought your business operations to a standstill. 

Panic sets in as you scramble to understand what went wrong. 

You sift through various error messages, desperately trying to pinpoint the issue. 

But what if you could have detected the problem before it escalated?

Linux log files are crucial for maintaining system health. 

They act like the “black box” of an airplane, recording vital system activities and events. Through Linux log analysis, admins can monitor log files for signs of trouble, ensuring smooth operation and enhanced security.

This post will introduce the most essential Linux log files you must monitor. Understanding these logs can help you troubleshoot and debug issues more effectively. 

We will explore various types of logs in Linux, from error logs to security logs, along with the tools and commands necessary for effective Linux log monitoring and management.

You’ll become familiar with key commands like tail and journalctl, understand how to use Linux Logwatch, and know which log files are critical for identifying failed login attempts and other security issues.

17 Linux Log Files You Must Be Monitoring

1. /var/log/syslog or /var/log/messages

Description

The syslog or messages log captures a broad range of system messages, including those from daemons, system processes, and the kernel. These logs serve as a comprehensive record of the system’s activity.

Importance

Syslog and messages are crucial for general Linux log analysis because they contain information about system errors, warnings, and other significant events. These logs help in diagnosing issues that affect the system’s stability and performance.

Usage Tips

To view these logs, you can use the cat, less, or tail -f commands. For example, tail -f /var/log/syslog lets you follow the log in real time. Linux log analysis tools can help you filter and search for specific events.
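
If you just want a quick look at recent problems, a simple grep filter works well; the keywords below are only examples, so adjust them to what you care about:

grep -iE "error|warning" /var/log/syslog | tail -n 50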

2. /var/log/auth.log or /var/log/secure

Description

The auth.log or secure log records authentication-related events, such as successful and failed login attempts, changes to user privileges, and other authentication activity.

Importance

Monitoring Linux security logs like auth.log and secure is vital for identifying unauthorized access attempts and potential security breaches. They provide insight into who accessed the system and when.

Usage Tips

Use grep to search for specific entries, such as failed login attempts. For example, grep “Failed password” /var/log/auth.log. Setting up a Linux log monitoring tool can help automate the detection of suspicious activities.
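
To get a quick sense of where failed SSH logins come from, you can tally source addresses. This one-liner assumes the standard sshd “Failed password … from <IP> port <port>” message format, so the field position may differ on your system:

grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head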

3. /var/log/kern.log

Description

kern.log contains messages generated by the Linux kernel. It includes information about hardware events, driver messages, and other kernel-related activities.

Importance

Kernel logs are essential for diagnosing issues related to hardware and drivers, which are critical for system stability and performance.

Usage Tips

View the kernel log using tail -f /var/log/kern.log to monitor messages in real time. Tools like dmesg also provide access to kernel messages, and you can use dmesg | grep to filter for specific keywords.
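
For a quick scan of recent kernel trouble, filter the log for common problem keywords (just examples; tune them as needed):

grep -iE "error|fail|warn" /var/log/kern.log | tail -n 20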

4. /var/log/boot.log

Description

boot.log records events related to the system’s boot process, including services that start up and their statuses.

Importance

The boot log is crucial for troubleshooting boot issues. It helps identify services that failed to start, delays in the boot process, and other startup-related problems.

Usage Tips

Review the boot log using less /var/log/boot.log. Look for lines marked with “FAILED” or “ERROR” to pinpoint issues quickly. Regular monitoring ensures smooth system startups.
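
For example, to surface failed services at a glance:

grep -E "FAILED|ERROR" /var/log/boot.log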

5. /var/log/dmesg

Description

The dmesg log contains messages from the kernel ring buffer, which include information about hardware components, drivers, and kernel initialization.

Importance

This log is valuable for hardware diagnostics and monitoring system performance. It helps identify hardware failures and performance bottlenecks.

Usage Tips

Access the dmesg log using the dmesg command. For specific information, use dmesg | grep [keyword]. Tools like dmesg -w allow real time monitoring of kernel messages.
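
If you run the util-linux version of dmesg, you can combine human-readable timestamps with severity filtering, for example:

dmesg -T --level=err,warn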

6. /var/log/cron

Description

The cron log records the execution of cron jobs, which are scheduled tasks on the system.

Importance

Cron logs are vital for ensuring that scheduled tasks run smoothly. They help diagnose issues with task scheduling and execution.

Usage Tips

View cron logs using less /var/log/cron. To check for specific cron job executions, use grep with keywords or job identifiers. Review the log regularly to ensure all critical jobs execute as planned.
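
For example, to confirm that a particular job ran (backup.sh here is just a placeholder for your own job name or script):

grep "backup.sh" /var/log/cron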

7. /var/log/maillog or /var/log/mail.log

Description

The maillog or mail.log file captures events related to mail server activity, including email delivery and errors.

Importance

Monitoring mail logs is essential for mail server administration and troubleshooting email delivery issues. They help ensure reliable communication within and outside the organization.

Usage Tips

Check mail logs with tail -f /var/log/maillog. Look for lines containing “error” or “failed” to identify issues. Linux log analysis tools can help automate and streamline the process.
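
A quick way to surface delivery problems is to filter for common failure keywords; these vary by mail server, so treat the list below as a starting point:

grep -iE "error|failed|deferred|bounced" /var/log/maillog | tail -n 50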

8. /var/log/httpd/access.log or /var/log/apache2/access.log

Description

This log records all access requests to the web server, including details like IP addresses, request types, and response statuses.

Importance

Access logs are crucial for monitoring web traffic and identifying potential security threats. They provide insights into visitor behavior and can help in optimizing website performance.

Usage Tips

Use tail -f /var/log/httpd/access.log to monitor access in real time. Analyze the logs to identify patterns, such as high traffic periods or repeated access attempts from suspicious IPs.
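
For instance, to list the busiest client IPs (assuming the default Common/Combined Log Format, where the client address is the first field):

awk '{print $1}' /var/log/httpd/access.log | sort | uniq -c | sort -rn | head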

9. /var/log/httpd/error.log or /var/log/apache2/error.log

Description

This log captures errors encountered by the Apache web server, including issues with Apache server configuration, application errors, and client-related problems.

Importance

Error logs are essential for diagnosing issues with the web server and the applications running on it. They help ensure the smooth operation of websites and web services.

Usage Tips

Review error logs with less /var/log/httpd/error.log. Look for recurring errors to identify underlying issues. Regular monitoring helps quickly resolve problems and maintain uptime.
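
One way to spot recurring problems is to count entries per module and severity; this assumes the Apache 2.4 error-log format with [module:level] tags:

grep -oE "\[[a-z_0-9]+:[a-z]+\]" /var/log/httpd/error.log | sort | uniq -c | sort -rn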

10. /var/log/nginx/access.log

Description

This log records all access requests to the NGINX web server, including details like IP addresses, request methods, response statuses, and user agents.

Importance

Monitoring NGINX access logs is crucial for understanding web traffic and user behavior and for identifying potential security threats such as DDoS attacks or unauthorized access attempts.

Usage Tips

Use tail -f /var/log/nginx/access.log to monitor access in real time. Analyzing these logs can help optimize web performance and improve security measures. Tools like goaccess can provide real-time web log analysis and visual reports.
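
For example, you can list the most requested paths (assuming the default combined log format, where the request path is the seventh field), or generate an HTML report with goaccess if you have it installed:

awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head

goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html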

Learn more: Best NGINX log analyzer tools in 2024

11. /var/log/nginx/error.log

Description

This log captures errors encountered by the NGINX web server, including configuration issues, server errors, and client-related problems.

Importance

Error logs are essential for diagnosing issues with the web server and the applications it hosts. They help ensure the smooth operation of your websites and web services by providing insights into what went wrong and when.

Usage Tips

Review error logs using less /var/log/nginx/error.log. Look for recurring errors to identify and resolve underlying issues. Regular monitoring helps maintain the high availability and performance of your web services.
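
A quick way to gauge how serious recent problems are is to count entries by severity; this assumes the default NGINX error-log format, where the level appears in square brackets:

grep -oE "\[(emerg|alert|crit|error|warn)\]" /var/log/nginx/error.log | sort | uniq -c | sort -rn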

12. /var/log/mysql.log or /var/log/mysql/error.log

Description

These logs record activities and errors related to the MySQL database server, including queries, connections, and performance issues.

Importance

Monitoring MySQL logs is crucial for database administration, troubleshooting issues, and ensuring efficient database operations.

Usage Tips

Use tail -f /var/log/mysql/error.log to follow the log in real-time. Analyze slow queries and connection errors to optimize database performance and reliability.
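
For example, to surface only error-level entries (MySQL and MariaDB typically tag severity in square brackets, though the exact format depends on your version and configuration):

grep -i "\[ERROR\]" /var/log/mysql/error.log | tail -n 20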

13. /var/log/ufw.log

Description

The UFW log records events related to the Uncomplicated Firewall (UFW), including allowed and denied connection attempts.

Importance

Firewall logs are vital for monitoring network security and detecting unauthorized access attempts. They help maintain a secure network environment.

Usage Tips

Check UFW logs with tail -f /var/log/ufw.log. Look for repeated denied attempts from the same IP, which may indicate a security threat. Regular review helps ensure firewall rules are effective.
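
To see which sources are blocked most often (assuming UFW’s default logging format with [UFW BLOCK] entries and SRC= fields):

grep "\[UFW BLOCK\]" /var/log/ufw.log | grep -oE "SRC=[0-9a-fA-F.:]+" | sort | uniq -c | sort -rn | head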

14. /var/log/audit/audit.log

Description

The audit log contains detailed records from the audit daemon, capturing a wide range of system events for security auditing and compliance purposes.

Importance

Audit logs are essential for detailed security analysis and compliance with regulations. They provide a comprehensive view of system activities and changes.

Usage Tips

Use ausearch and aureport tools to search and generate reports from audit logs. Regular audits help ensure system security and compliance with policies.
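
For example, to list today’s failed logins and get a summary of failed authentications:

ausearch -m USER_LOGIN --success no -ts today

aureport --auth --failed --summary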

15. /var/log/daemon.log

Description

The daemon log records messages from system daemons, which are background services running on the system.

Importance

Daemon logs are crucial for monitoring the health and performance of background services. They help troubleshoot issues with service operations.

Usage Tips

View daemon logs using less /var/log/daemon.log. Check for service-specific entries to diagnose issues. Regular monitoring ensures all services run smoothly.
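
To focus on a single service, filter by its name (NetworkManager here is just an example; substitute the daemon you are interested in):

grep -i "NetworkManager" /var/log/daemon.log | tail -n 20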

16. /var/log/btmp

Description

The btmp log records failed login attempts, providing a record of unauthorized access attempts.

Importance

Btmp is vital for security monitoring. It helps detect and respond to unauthorized access attempts, enhancing system security.

Usage Tips

Use the lastb command to view failed login attempts. Regularly review this log to identify and address potential security threats promptly.
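
Reading btmp usually requires root privileges. For a rough tally of failed attempts per source (the third column is typically the originating host, though the layout can vary):

sudo lastb | awk '{print $3}' | sort | uniq -c | sort -rn | head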

17. /var/log/wtmp

Description

The wtmp log records login and logout events, tracking user activity on the system.

Importance

Wtmp is important for tracking user behavior and understanding system usage patterns. It helps in auditing user activities and detecting anomalies.

Usage Tips

Use the last command to view login history. Analyze patterns to ensure users are following expected behavior and to detect any suspicious activity.
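
For example, to see the most recent sessions and the reboot history:

last -n 20

last reboot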

How to Access Linux Logs

Accessing Linux log files is a fundamental skill for system administrators. Understanding how to use various commands to view and analyze these logs is crucial for effective Linux log management.

#1 Access Linux logs locally (command line method)

1. Using cat

The cat command is one of the simplest ways to view the contents of a log file. By typing cat /path/to/logfile, you can display the entire file. For example, to view the system log, you can use:

cat /var/log/syslog

This approach is quick and straightforward, making it ideal for small log files. However, it can be less effective when dealing with large logs, as it will dump the entire content on your screen at once, making it hard to navigate through the information.

2. Using less

For larger log files, less is more practical. This command allows you to view the file one page at a time and scroll up and down. To open a log file with less, type:

less /var/log/auth.log

Inside less, use the space key to scroll down, b to scroll up, and q to quit. This makes it easier to navigate through extensive logs without getting overwhelmed. The only downside is that you need to manually search and navigate through the content to find specific information.

3. Using grep

If you are looking for specific entries within a log file, grep is your go-to tool. It searches for patterns in the file. For instance, to find all occurrences of the word “error” in the syslog, you can use:

grep "error" /var/log/syslog

This command filters out only the lines that contain the word “error,” making it easier to find relevant information. The challenge with grep is that you need to know what you’re searching for, and it may miss relevant lines if the exact search term is not used.

4. Using tail

To monitor logs in real time, tail is extremely useful. The tail command shows the last part of a file. By using the -f option, tail will follow the log file and display new entries as they are added. For example, to follow the syslog, you can use:

tail -f /var/log/syslog

This is particularly helpful for tracking ongoing issues as they occur, giving you a live feed of log entries. However, it’s less useful for analyzing historical data or older log entries, as it only shows the most recent lines.

5. Using journalctl

For systems using systemd, journalctl is used to access and query the systemd journal logs. To view all logs, simply use:

journalctl

If you are interested in logs for a specific unit, use the -u option followed by the unit name, such as:

journalctl -u ssh

To view logs in real-time, similar to tail -f, you can use:

journalctl -f

Additionally, for a more detailed view with explanations, journalctl -xe can be used. This command provides a comprehensive and detailed log analysis, but it might be overwhelming for beginners due to its extensive output and numerous options.
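
For instance, to see only error-level (and more severe) messages from the last hour, a handy combination when triaging a recent incident:

journalctl -p err --since "1 hour ago"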

#2 Access Linux logs with custom scripts and automation 

While manual log inspection using commands like cat, less, grep, and journalctl is essential, automating log monitoring can significantly enhance efficiency and responsiveness. 

Custom scripts and automation tools allow you to set up proactive alerts, perform regular log analysis, and ensure continuous system health monitoring without constant manual intervention.

Custom Scripts

Writing custom scripts in languages like Bash or Python can help automate the process of log file monitoring and analysis. For instance, a simple Bash script can be created to check for specific error patterns in logs and send email alerts if any are found. Here’s an example of a basic Bash script that monitors the syslog for critical errors:

#!/bin/bash
# Watch the syslog for critical keywords and email an alert for each match.
LOGFILE="/var/log/syslog"
KEYWORDS="error|critical|failed"
# -F keeps following the file across log rotation; --line-buffered flushes each match immediately.
tail -F "$LOGFILE" | grep --line-buffered -E "$KEYWORDS" | while read -r line; do
    echo "Critical issue detected: $line" | mail -s "Log Alert" admin@example.com
done

This script continuously follows the syslog and searches for lines containing “error,” “critical,” or “failed.” When it finds a match, it sends an email alert. This approach ensures that critical issues are detected and reported promptly, enhancing the system’s reliability and security.

Automation Tools

For more sophisticated log management, you can use automation tools such as Logrotate, cron jobs, and centralized logging platforms like Sematext.

  • Logrotate: Automates the rotation, compression, and removal of log files, preventing them from consuming excessive disk space. You can configure logrotate by creating a configuration file in /etc/logrotate.d/ to specify how and when logs should be rotated; a minimal example follows below.
  • Cron Jobs: Scheduling regular log analysis tasks with cron jobs can ensure consistent monitoring and maintenance. For example, you can set up a cron job to back up the /var/log/auth.log file daily:
0 0 * * * cp /var/log/auth.log /backup/auth.log.$(date +\%F)

This cron job runs at midnight every day, creating a timestamped backup of the authentication log.
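
As mentioned in the Logrotate bullet above, rotation is configured with small files in /etc/logrotate.d/. Here is a minimal sketch; /var/log/myapp/*.log is a placeholder path and the retention settings are only examples:

/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}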

#3 Log Analysis Tools

While manual methods and custom scripts are valuable for Linux log monitoring, using comprehensive log analysis tools can significantly streamline the process. 

These tools offer centralized logging, real-time analysis, and alerting capabilities and are indispensable for modern sysadmins.

Here are a few popular log monitoring and analysis tools:

  • Sematext
  • Datadog
  • Splunk
  • New Relic
  • Elastic Stack
  • Dynatrace

Sematext stands out as a powerful yet cost-effective solution for Linux log analysis. It offers real-time log aggregation, search, visualization, and alerting. 

With Sematext, you can monitor all your logs from a single, intuitive dashboard. It transforms raw log data into actionable insights with automated log parsing and enrichment. Best of all, it delivers comprehensive log management for 1/4 of the price of its competitors.

Learn more: Explore the best Linux Monitoring tools in 2024 

Centralizing Linux Logs 

Instead of sifting through logs on individual machines, centralized logging aggregates logs from multiple sources into a single location. 

This approach simplifies monitoring, enhances security, and streamlines troubleshooting. 

Step 1: Create a free Sematext Account

First, you’ll need to create an account on Sematext. Visit Sematext and sign up. 

Step 2: Set Up Your Logging App

In your account add a new Logs App. This will be the central place where all your logs will be collected and analyzed.

  1. Navigate to the Logs section in Sematext.
  2. Click on Create App.
  3. Choose the type of logs you will be collecting (e.g., Syslog, Application Logs…).
  4. Name your Logs App and choose a plan based on your needs. If you’ve never used Sematext before, use the Pro plan. It’s free during the trial.

Step 3: Install Sematext Agent

The Sematext Agent is a lightweight agent you install to set up log shipping to your Logs App.

Step 4: Configure Log Shippers (Optional)

Depending on your environment, you may need to configure additional log shippers like Logstash or Fluentd to forward logs to Sematext. Alternatively – and this is recommended – set up log shipping via the Logs Discovery UI.

Step 5: Set Up Alerts and Notifications

Sematext comes with several useful default alerts. You can disable, edit, or delete them and, of course, set up additional alerts and notifications for specific log events:

  1. In Sematext, navigate to the Alerts section.
  2. Create a new alert rule.
  3. Define the conditions for the alert (e.g., when the word “error” appears in the logs more than N times in M minutes).
  4. Configure the notification channels (e.g., email, Slack).

Step 6: Create Dashboards and Visualizations

Sematext provides several out-of-the-box dashboards for each integration. To gain additional insights from your logs, create custom dashboards or reports:

  1. Go to the Dashboards section in Sematext.
  2. Create a new dashboard and add components to visualize your log data.

Step 7: Regularly Review and Maintain

Regularly review your log data and dashboards to ensure everything is functioning correctly. Make adjustments to your configurations and alerts as needed to adapt to any changes in your infrastructure.

Conclusion

Monitoring Linux log files is vital for system admins. These logs provide insights into system activities, security events, and performance metrics. Regular log monitoring helps identify issues before they escalate, enhancing system stability and security.

Use log monitoring and analysis tools like Sematext, which simplify log management with features such as log enrichment, anomaly detection, and visualization, making it easier to detect and troubleshoot issues.
