Generic Logs Integration
Log data is received through the Elasticsearch API as well as through a variety of Syslog protocols.
The Elasticsearch API lets you:
- send log events directly from your application, using any Elasticsearch library
- send log events using a "log shipper" application such as Logstash, rsyslog, Apache Flume, Fluentd, or anything that can output to Elasticsearch
- search for logs from your own application, or by configuring/adapting existing Elasticsearch UIs, such as Kibana
- optionally define custom mappings for your log types, so you can tweak the way your logs are indexed
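As a minimal sketch of the first bullet, the snippet below builds an Elasticsearch-style index request for the Sematext receiver. It assumes the receiver host shown later on this page (logsene-receiver.sematext.com) and the convention, also used later in the FireLens configuration, that the Logs token serves as the index name; the `build_index_request` helper and the `example` log type are illustrative, not part of any official client.

```python
import json
from urllib.request import Request, urlopen  # urlopen is only needed if you actually ship the event

LOGS_TOKEN = "<LOGS_TOKEN>"  # your Sematext Logs token
RECEIVER = "https://logsene-receiver.sematext.com"

def build_index_request(token, log_type, event):
    """Build an Elasticsearch-style index request: the URL encodes the
    token as the index name, the body is the JSON-serialized log event."""
    url = f"{RECEIVER}/{token}/{log_type}"
    body = json.dumps(event).encode("utf-8")
    return url, body

url, body = build_index_request(LOGS_TOKEN, "example",
                                {"message": "hello", "severity": "info"})
# To actually ship the event, POST it:
# urlopen(Request(url, data=body, headers={"Content-Type": "application/json"}))
```

Any Elasticsearch client library can be pointed at the same receiver URL instead of hand-building requests.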
Syslog Protocols¶
We accept Syslog messages using any log shipper and any Syslog library, as long as they either contain a valid token or the source IP is authorized.
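For the token-based variant, a sketch of formatting a token-carrying RFC 5424 syslog line is shown below. The placement of the token in the APP-NAME field is an assumption for illustration; check your integration instructions for where the token actually goes in your shipper's output.

```python
from datetime import datetime, timezone

def format_rfc5424(token, hostname, message, pri=14):
    """Format a minimal RFC 5424 syslog line.
    Using the Logs token as APP-NAME is an assumption, not a documented
    requirement -- confirm the expected placement for your setup."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    # <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
    return f"<{pri}>1 {ts} {hostname} {token} - - - {message}"

line = format_rfc5424("<LOGS_TOKEN>", "web-01", "user login ok")
```

A log shipper would emit lines like this over TCP/TLS to the syslog receiver endpoint.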
Journald¶
We accept Journald logs using the systemd-journal-remote package. All you need to do is point systemd-journal-remote to send Journald logs to Sematext Logs.
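In practice this typically means configuring the systemd-journal-upload side of the package. The fragment below is a sketch of /etc/systemd/journal-upload.conf; the receiver endpoint and token placement are placeholders, so use the exact values from your Sematext Logs integration instructions.

```ini
# /etc/systemd/journal-upload.conf -- sketch only; the endpoint below is a
# placeholder, not a documented Sematext hostname.
[Upload]
URL=https://<JOURNALD_RECEIVER_ENDPOINT>/<LOGS_TOKEN>
```

After editing the file, restart the upload service (systemctl restart systemd-journal-upload) so it starts forwarding the journal.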
Log Shippers¶
- Logagent - cross-platform, smart and lightweight log parser and log shipper written in Node.js
- rsyslog - easy to get started with, very fast and very light on resources, though the docs can be hard for beginners to navigate
- Logstash - cross platform, very simple to set up, well documented, but a little heavy on resource usage
- Filebeat - cross platform, much lighter on resource usage, requires a Logstash instance to aggregate logs
- syslog-ng - very fast and very light on resources, good docs, available as both free and paid version
- syslogd - quite old, light on resources, not very feature rich
- Fluentd - cross platform, easy to get started, horizontally scalable, available as both free and paid version
- Fluent Bit - open source, specialized data collector; provides built-in metrics and general-purpose output interfaces for centralized collectors such as Fluentd
- NXLog - cross platform but mostly used on Windows, easy to get started, available as both free and paid version
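As a quick-start sketch for the first shipper in the list, Logagent can run as a container; the invocation below mirrors the image name, environment variables (LOGS_TOKEN, REGION), and Docker socket mount used in the ECS Task Definition later on this page. The container name is illustrative.

```shell
docker run -d --name st-logagent \
  -e LOGS_TOKEN=<LOGS_TOKEN> \
  -e REGION=US \
  -v /var/run/docker.sock:/var/run/docker.sock \
  sematext/logagent:latest
```

With the socket mounted, Logagent discovers containers on the host and forwards their logs.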
Programming Languages¶
Operating Systems¶
Containers¶
Cloud IaaS / PaaS¶
- AWS S3
- AWS CloudTrail
- AWS CloudWatch
- AWS VPC Flow Logs
- Heroku
- Cloud Foundry
- Google App Engine
- GitHub Webhook Events
- Vercel
iOS¶
Android¶
AWS EC2¶
If you're an EC2 user, you can ship logs to Sematext from your instances by setting up a log shipper, just as you would on any other physical or virtual machine.
AWS ECS on AWS Fargate With FireLens¶
We recommend you use the AWS ECS Logs Integration to get more detailed out-of-the-box reports.
There are two main ways you can forward logs from containers running in Fargate to Sematext. They rely on two different log drivers.
- AWS FireLens - awsfirelens
- AWS Logs - awslogs
We suggest you use AWS FireLens to avoid additional CloudWatch costs.
With FireLens you can route logs to another AWS service, like Firehose, or use Fluentd or Fluent Bit. AWS provides the Fluentd / Fluent Bit image; you only need to configure the output module.
1. Enable FireLens¶
In the ECS Task Definition, check the Enable FireLens integration checkbox. Choose Fluent Bit and AWS will populate the image name for you.
AWS will add an additional container called log_router to the list of containers in your Task Definition.
2. Configure the FireLens Log Driver¶
Next, in the same Task Definition but for your own container (not the log_router), configure the logConfiguration like this:
"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Type": "ecs",
        "Port": "443",
        "Host": "logsene-receiver.sematext.com",
        "Index": "<LOGS_TOKEN>",
        "TLS": "On",
        "Match": "*",
        "Name": "es"
    }
}
Note: If you are using the EU region of Sematext you should set the Host like this:
"Host": "logsene-receiver.eu.sematext.com"
This will forward all container logs to Sematext.
AWS ECS on AWS Fargate With AWS Logs¶
This log driver will forward all logs to CloudWatch. From there you can configure a Lambda function to collect the logs and forward them to Sematext.
1. Enable forwarding to CloudWatch¶
Your ECS task configuration JSON will contain this snippet:
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/ecs-service-name",
        "awslogs-region": "eu-central-1",
        "awslogs-stream-prefix": "ecs"
    }
}
2. Set up a Lambda function pipeline to collect and forward CloudWatch logs to Sematext¶
Once forwarding to CloudWatch is configured, you need to set up a Lambda function to collect these logs from CloudWatch and send them to Sematext. You do this by following this guide. Or, if you already know how, here is the code for the Lambda pipeline so you can deploy right away.
All you need to do is edit the secrets to add your Sematext LOGS_TOKEN and LOGS_RECEIVER_URL. Also, don't forget to edit the PREFIX to match your ECS containers, e.g.:
"PREFIX": "/ecs/ecs-service-name"
AWS ECS on AWS EC2¶
We recommend you use the AWS ECS Logs Integration to get more detailed out-of-the-box reports.
When using EC2 container instances you can configure Logagent to forward container logs.
1. Set env vars¶
In the ECS Task Definition you need to make sure you have these two environment variables configured:
- LOGS_TOKEN - set to your Logs token
- REGION - either US or EU, based on the region you are using
In JSON it looks like this:
{
    "requiresCompatibilities": [ "EC2" ],
    ...
    "containerDefinitions": [
        {
            "name": "st-logagent",
            "image": "sematext/logagent:latest",
            ...
            "environment": [
                { "name": "LOGS_TOKEN", "value": "9c63d337-xxxx-xxxx-xxxx-abcc87342d47" },
                { "name": "REGION", "value": "US" }
            ],
            ...
        }
    ]
    ...
}
2. Set volumes¶
To enable log collection you must bind the Docker Socket volume from the EC2 container instance to the Logagent container.
The /var/run/docker.sock path on the host must be bound to the /var/run/docker.sock path in the container.
In JSON it looks like this:
{
    "requiresCompatibilities": [ "EC2" ],
    ...
    "containerDefinitions": [
        {
            "name": "st-logagent",
            "image": "sematext/logagent:latest",
            ...
            "environment": [
                { "name": "LOGS_TOKEN", "value": "9c63d337-xxxx-xxxx-xxxx-abcc87342d47" },
                { "name": "REGION", "value": "US" }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "docker-socket",
                    "containerPath": "/var/run/docker.sock",
                    "readOnly": ""
                }
            ]
            ...
        }
    ],
    "volumes": [
        {
            "host": { "sourcePath": "/var/run/docker.sock" },
            "name": "docker-socket"
        }
    ]
    ...
}
3. Run the Logagent Task Definition as a Daemon Service type¶
When creating the Logagent service make sure to set the Launch type
as EC2 and Service type
as DAEMON.
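If you prefer the AWS CLI, the same service can be created with the flags below; the cluster, service, and task definition names are illustrative placeholders.

```shell
aws ecs create-service \
  --cluster my-cluster \
  --service-name st-logagent \
  --task-definition st-logagent \
  --launch-type EC2 \
  --scheduling-strategy DAEMON
```

The DAEMON scheduling strategy runs exactly one Logagent task on each EC2 container instance in the cluster.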
AWS S3 (CloudTrail, Flow Logs, ELB access logs, etc.)¶
If you have logs stored in S3, you can ship them to Sematext via this AWS Lambda function. This method also works for when you periodically upload logs to S3 buckets, like Amazon CloudTrail does.
AWS CloudWatch Logs¶
If you want to ship CloudWatch logs, you can use another AWS Lambda function. If the logs are VPC Flow Logs, the Lambda function will also parse them and add GeoIP information for the source IP addresses.
Centralized Logging for AWS Lambda¶
If you want to automatically subscribe to AWS Lambda log streams you can use this CloudFormation stack.
It lets you run a single command to set up log group subscriptions, funnel all CloudWatch logs to Kinesis, and use a dedicated Lambda function to ship these logs to Sematext.
Read the full tutorial on our blog!