Debug School

Palani S Ramadoss


Datadog - 12-Sept-23 (Day 2): Assignment 1

Write down the steps to collect Apache metrics in Datadog

To start gathering your Apache metrics and logs, you need to:

Install the Agent on your Apache servers.

Install mod_status on your Apache servers and enable ExtendedStatus.
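How mod_status is enabled varies by distribution (on Debian/Ubuntu, `a2enmod status` does it for you). As a rough sketch, the relevant httpd.conf directives look like the following — the module path and the access rule are assumptions for your environment:

```apacheconf
# Load mod_status and expose extended metrics (file paths vary by distribution)
LoadModule status_module modules/mod_status.so
ExtendedStatus On

<Location "/server-status">
    SetHandler server-status
    # Keep the status page private; the Agent scrapes it from localhost
    Require local
</Location>
```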

Configuration
Host

To configure this check for an Agent running on a host:
Metric collection

Edit the apache.d/conf.yaml file in the conf.d/ folder at the root of your Agent's configuration directory to start collecting your Apache metrics. See the sample apache.d/conf.yaml for all available configuration options.

init_config:

instances:
  ## @param apache_status_url - string - required
  ## Status url of your Apache server.
  #
  - apache_status_url: http://localhost/server-status?auto

Restart the Agent.
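The Agent scrapes the machine-readable `?auto` variant of the status page configured above, which is plain `Key: value` lines. A minimal sketch of parsing that format, useful for sanity-checking the endpoint by hand (the sample values are made up):

```python
def parse_auto_status(text):
    """Parse Apache mod_status '?auto' output into a dict of raw string values."""
    metrics = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            metrics[key.strip()] = value.strip()
    return metrics

# Sample '?auto' output (values are illustrative)
sample = """\
Total Accesses: 66
Total kBytes: 73
Uptime: 100
BusyWorkers: 1
IdleWorkers: 4
"""

print(parse_auto_status(sample)["BusyWorkers"])  # → 1
```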

Log collection

Available for Agent versions >6.0

Collecting logs is disabled by default in the Datadog Agent. Enable it in datadog.yaml:

logs_enabled: true

Add this configuration block to your apache.d/conf.yaml file to start collecting your Apache logs, adjusting the path and service values to configure them for your environment:

logs:
  - type: file
    path: /path/to/your/apache/access.log
    source: apache
    service: apache
    sourcecategory: http_web_access

  - type: file
    path: /path/to/your/apache/error.log
    source: apache
    service: apache
    sourcecategory: http_web_error

See the sample apache.d/conf.yaml for all available configuration options.

Restart the Agent.

Docker

To configure this check for an Agent running on a container:
Metric collection

Set Autodiscovery Integrations Templates as Docker labels on your application container:

LABEL "com.datadoghq.ad.check_names"='["apache"]'
LABEL "com.datadoghq.ad.init_configs"='[{}]'
LABEL "com.datadoghq.ad.instances"='[{"apache_status_url": "http://%%host%%/server-status?auto"}]'

Log collection

Collecting logs is disabled by default in the Datadog Agent. To enable it, see Docker Log Collection.

Then, set Log Integrations as Docker labels:

LABEL "com.datadoghq.ad.logs"='[{"source": "apache", "service": "<YOUR_APP_NAME>"}]'

Kubernetes

To configure this check for an Agent running on Kubernetes:
Metric collection

Set Autodiscovery Integrations Templates as pod annotations on your application container. Aside from this, templates can also be configured with a file, a configmap, or a key-value store.

Annotations v1 (for Datadog Agent < v7.36)

apiVersion: v1
kind: Pod
metadata:
  name: apache
  annotations:
    ad.datadoghq.com/apache.check_names: '["apache"]'
    ad.datadoghq.com/apache.init_configs: '[{}]'
    ad.datadoghq.com/apache.instances: |
      [
        {
          "apache_status_url": "http://%%host%%/server-status?auto"
        }
      ]
spec:
  containers:
    - name: apache

Annotations v2 (for Datadog Agent v7.36+)

apiVersion: v1
kind: Pod
metadata:
  name: apache
  annotations:
    ad.datadoghq.com/apache.checks: |
      {
        "apache": {
          "init_config": {},
          "instances": [
            {
              "apache_status_url": "http://%%host%%/server-status?auto"
            }
          ]
        }
      }
spec:
  containers:
    - name: apache

Log collection

Collecting logs is disabled by default in the Datadog Agent. To enable it, see Kubernetes Log Collection.

Then, set Log Integrations as pod annotations. This can also be configured with a file, a configmap, or a key-value store.

Annotations v1/v2

apiVersion: v1
kind: Pod
metadata:
  name: apache
  annotations:
    ad.datadoghq.com/apache.logs: '[{"source":"apache","service":"<YOUR_APP_NAME>"}]'
spec:
  containers:
    - name: apache

ECS

To configure this check for an Agent running on ECS:
Metric collection

Set Autodiscovery Integrations Templates as Docker labels on your application container:

{
  "containerDefinitions": [{
    "name": "apache",
    "image": "apache:latest",
    "dockerLabels": {
      "com.datadoghq.ad.check_names": "[\"apache\"]",
      "com.datadoghq.ad.init_configs": "[{}]",
      "com.datadoghq.ad.instances": "[{\"apache_status_url\": \"http://%%host%%/server-status?auto\"}]"
    }
  }]
}

Log collection

Collecting logs is disabled by default in the Datadog Agent. To enable it, see ECS Log Collection.

Then, set Log Integrations as Docker labels:

{
  "containerDefinitions": [{
    "name": "apache",
    "image": "apache:latest",
    "dockerLabels": {
      "com.datadoghq.ad.logs": "[{\"source\":\"apache\",\"service\":\"<YOUR_APP_NAME>\"}]"
    }
  }]
}

Validation

Run the Agent's status subcommand and look for apache under the Checks section.

Write down the steps to collect Tomcat metrics in Datadog

The Tomcat check is included in the Datadog Agent package, so you don't need to install anything else on your Tomcat servers.

This check is JMX-based, so you need to enable JMX Remote on your Tomcat servers. Follow the instructions in Monitoring and Managing Tomcat.
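On a host, JMX Remote is typically enabled through CATALINA_OPTS in $CATALINA_BASE/bin/setenv.sh. A hedged sketch follows — the port (9012) and the unauthenticated, localhost-only settings are assumptions; harden them before using in production:

```shell
# bin/setenv.sh -- illustrative JMX settings (port and auth choices are assumptions)
export CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9012 \
  -Dcom.sun.management.jmxremote.rmi.port=9012 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=localhost"
```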
Configuration
Host

To configure this check for an Agent running on a host:

Edit the tomcat.d/conf.yaml file in the conf.d/ folder at the root of your Agent's configuration directory to collect Tomcat metrics and logs. See the sample tomcat.d/conf.yaml for all available configuration options.

Restart the Agent.

See the JMX Check documentation for a list of configuration options usable by all JMX-based checks.
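Putting those pieces together, a minimal tomcat.d/conf.yaml instance looks roughly like this — the port is an assumption and must match the JMX port you enabled, and the credentials are only needed when JMX authentication is on:

```yaml
init_config:
  is_jmx: true
  collect_default_metrics: true

instances:
    # Hostname and JMX remote port of the Tomcat server
    # (9012 is an assumption; use the port you enabled)
  - host: localhost
    port: 9012
    # Uncomment if JMX authentication is enabled
    # user: <USERNAME>
    # password: <PASSWORD>
```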
List of metrics

The conf parameter is a list of metrics to be collected by the integration. Only two keys are allowed:

include (mandatory): A dictionary of filters. Any attribute that matches these filters is collected unless it also matches the exclude filters (see below).
exclude (optional): A dictionary of filters. Attributes that match these filters are not collected.

For a given bean, metrics get tagged in the following manner:

mydomain:attr0=val0,attr1=val1

In this example, your metric is mydomain (or some variation depending on the attribute inside the bean) and has the tags attr0:val0, attr1:val1, and domain:mydomain.
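In other words, the bean's domain and each of its properties become tags. That tagging rule can be sketched with a small helper (my own illustration, not Agent code):

```python
def bean_tags(bean_name):
    """Derive Datadog-style tags from a JMX bean name like 'mydomain:attr0=val0,attr1=val1'."""
    domain, _, props = bean_name.partition(":")
    tags = [f"{key}:{value}" for key, value in
            (prop.split("=", 1) for prop in props.split(","))]
    tags.append(f"domain:{domain}")
    return tags

print(bean_tags("mydomain:attr0=val0,attr1=val1"))
# → ['attr0:val0', 'attr1:val1', 'domain:mydomain']
```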

If you specify an alias in an include key that is formatted as camel case, it is converted to snake case. For example, MyMetricName is shown in Datadog as my_metric_name.
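The conversion can be approximated with a one-line regex — a sketch of the rule, not the Agent's actual implementation:

```python
import re

def camel_to_snake(name):
    """Convert a camel-case alias such as MyMetricName to snake case."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(camel_to_snake("MyMetricName"))  # → my_metric_name
```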
The attribute filter

The attribute filter can accept two types of values:

A dictionary whose keys are attribute names (see below). In this case, you can specify an alias for the metric that becomes the metric name in Datadog. You can also specify the metric type as a gauge or counter. If you choose counter, a rate per second is computed for the metric.

conf:
  - include:
    attribute:
      maxThreads:
        alias: tomcat.threads.max
        metric_type: gauge
      currentThreadCount:
        alias: tomcat.threads.count
        metric_type: gauge
      bytesReceived:
        alias: tomcat.bytes_rcvd
        metric_type: counter

A list of attribute names (see below). In this case, the metric type is gauge, and the metric name is jmx.[DOMAIN_NAME].[ATTRIBUTE_NAME].

conf:
  - include:
    domain: org.apache.cassandra.db
    attribute:
      - BloomFilterDiskSpaceUsed
      - BloomFilterFalsePositives
      - BloomFilterFalseRatio
      - Capacity
      - CompressionRatio
      - CompletedTasks
      - ExceptionCount
      - Hits
      - RecentHitRate

Older versions

Lists of filters are only supported in Datadog Agent > 5.3.0. If you are using an older version, use singletons and multiple include statements instead.

Datadog Agent > 5.3.0

conf:
  - include:
      domain: domain_name
      bean:
        - first_bean_name
        - second_bean_name

Older Datadog Agent versions

conf:
  - include:
      domain: domain_name
      bean: first_bean_name
  - include:
      domain: domain_name
      bean: second_bean_name

Log collection

To submit logs to Datadog, Tomcat uses the log4j logger. For versions of Tomcat before 8.0, log4j is configured by default. For Tomcat 8.0+, you must configure Tomcat to use log4j; see Using Log4j. In the first step of those instructions, edit the log4j.properties file in the $CATALINA_BASE/lib directory as follows:

  log4j.rootLogger = INFO, CATALINA

  # Define all the appenders
  log4j.appender.CATALINA = org.apache.log4j.DailyRollingFileAppender
  log4j.appender.CATALINA.File = /var/log/tomcat/catalina.log
  log4j.appender.CATALINA.Append = true

  # Roll-over the log once per day
  log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout
  log4j.appender.CATALINA.layout.ConversionPattern = %d{yyyy-MM-dd HH:mm:ss} %-5p [%t] %c{1}:%L - %m%n

  log4j.appender.LOCALHOST = org.apache.log4j.DailyRollingFileAppender
  log4j.appender.LOCALHOST.File = /var/log/tomcat/localhost.log
  log4j.appender.LOCALHOST.Append = true
  log4j.appender.LOCALHOST.layout = org.apache.log4j.PatternLayout
  log4j.appender.LOCALHOST.layout.ConversionPattern = %d{yyyy-MM-dd HH:mm:ss} %-5p [%t] %c{1}:%L - %m%n

  log4j.appender.MANAGER = org.apache.log4j.DailyRollingFileAppender
  log4j.appender.MANAGER.File = /var/log/tomcat/manager.log
  log4j.appender.MANAGER.Append = true
  log4j.appender.MANAGER.layout = org.apache.log4j.PatternLayout
  log4j.appender.MANAGER.layout.ConversionPattern = %d{yyyy-MM-dd HH:mm:ss} %-5p [%t] %c{1}:%L - %m%n

  log4j.appender.HOST-MANAGER = org.apache.log4j.DailyRollingFileAppender
  log4j.appender.HOST-MANAGER.File = /var/log/tomcat/host-manager.log
  log4j.appender.HOST-MANAGER.Append = true
  log4j.appender.HOST-MANAGER.layout = org.apache.log4j.PatternLayout
  log4j.appender.HOST-MANAGER.layout.ConversionPattern = %d{yyyy-MM-dd HH:mm:ss} %-5p [%t] %c{1}:%L - %m%n

  log4j.appender.CONSOLE = org.apache.log4j.ConsoleAppender
  log4j.appender.CONSOLE.layout = org.apache.log4j.PatternLayout
  log4j.appender.CONSOLE.layout.ConversionPattern = %d{yyyy-MM-dd HH:mm:ss} %-5p [%t] %c{1}:%L - %m%n

  # Configure which loggers log to which appenders
  log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost] = INFO, LOCALHOST
  log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager] =\
    INFO, MANAGER
  log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager] =\
    INFO, HOST-MANAGER

Then follow the remaining steps in the Tomcat docs for configuring log4j.

By default, Datadog's integration pipeline supports the following conversion patterns:

  %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
  %d [%t] %-5p %c - %m%n

Clone and edit the integration pipeline if you have a different format. See Logging in Tomcat for details on Tomcat logging capabilities.
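To check whether your own Tomcat lines would match the first supported pattern before touching the pipeline, you can approximate it as a regex — both the regex and the sample line below are my assumptions, not Datadog's actual pipeline rules:

```python
import re

# Rough regex equivalent of: %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
LINE = re.compile(
    r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} "  # %d{yyyy-MM-dd HH:mm:ss}
    r"[A-Z]+\s+"                              # %-5p (log level)
    r"\S+:\d+ - "                             # %c{1}:%L -
    r".*"                                     # %m
)

sample = "2023-09-12 10:15:30 INFO  Catalina:123 - Server startup in 1234 ms"
print(bool(LINE.match(sample)))  # → True
```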

Collecting logs is disabled by default in the Datadog Agent. Enable it in your datadog.yaml file:

logs_enabled: true

Add this configuration block to your tomcat.d/conf.yaml file to start collecting your Tomcat logs:

logs:
  - type: file
    path: /var/log/tomcat/*.log
    source: tomcat
    service: "<SERVICE>"
    # To handle multi-line logs that start with yyyy-mm-dd, use the following pattern
    #log_processing_rules:
    #  - type: multi_line
    #    name: log_start_with_date
    #    pattern: \d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])

Change the path and service parameter values and configure them for your environment. See the sample tomcat.yaml for all available configuration options.

Restart the Agent.
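The commented-out multi_line pattern shown in the configuration above groups continuation lines (such as stack traces) with the dated line that precedes them. A quick check of what it does and does not match:

```python
import re

# The multi_line pattern from the config above (yyyy-mm-dd at line start)
PATTERN = re.compile(r"\d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])")

print(bool(PATTERN.match("2023-09-12 10:15:30 INFO Catalina - started")))   # → True
print(bool(PATTERN.match("    at java.lang.Thread.run(Thread.java:748)")))  # → False
```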

Containerized

For containerized environments, see the Autodiscovery with JMX guide.
Validation

Run the Agent's status subcommand and look for tomcat under the Checks section.

Write down the steps to collect Docker metrics in Datadog

Make sure that Docker is installed and running on your server.

Add the user running the Agent to Docker's group.

usermod -a -G docker dd-agent

Configure the Agent to connect to Docker.
Edit conf.d/docker.d/docker_daemon.yaml:

init_config:

instances:
    - url: "unix://var/run/docker.sock"

Restart the Agent
Execute the Agent status command and verify that the integration check has passed. Look for docker under the Checks section.

Write down the steps to collect MySQL metrics in Datadog

To install Database Monitoring for MySQL, select your hosting solution in the Database Monitoring documentation for instructions.

Proceed with the following steps in this guide only if you are installing the standard integration alone.

On each MySQL server, create a database user for the Datadog Agent.

The following instructions grant the Agent permission to log in from any host using datadog@'%'. You can restrict the datadog user so it can log in only from localhost by using datadog@'localhost'. See MySQL Adding Accounts, Assigning Privileges, and Dropping Accounts for more information.

Create the datadog user with the following command:

mysql> CREATE USER 'datadog'@'%' IDENTIFIED BY '<UNIQUEPASSWORD>';
Query OK, 0 rows affected (0.00 sec)

Verify the user was created successfully using the following commands, replacing <UNIQUEPASSWORD> with the password you created above:

mysql -u datadog --password=<UNIQUEPASSWORD> -e "show status" | \
grep Uptime && echo -e "\033[0;32mMySQL user - OK\033[0m" || \
echo -e "\033[0;31mCannot connect to MySQL\033[0m"

The Agent needs a few privileges to collect metrics. Grant the datadog user only the following limited privileges.

For MySQL versions 5.6 and 5.7, grant replication client and set max_user_connections with the following command:

mysql> GRANT REPLICATION CLIENT ON *.* TO 'datadog'@'%' WITH MAX_USER_CONNECTIONS 5;
Query OK, 0 rows affected, 1 warning (0.00 sec)

For MySQL 8.0 or greater, grant replication client and set max_user_connections with the following commands:

mysql> GRANT REPLICATION CLIENT ON *.* TO 'datadog'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> ALTER USER 'datadog'@'%' WITH MAX_USER_CONNECTIONS 5;
Query OK, 0 rows affected (0.00 sec)

Grant the datadog user the process privilege:

mysql> GRANT PROCESS ON *.* TO 'datadog'@'%';
Query OK, 0 rows affected (0.00 sec)

Verify the replication client. Replace <UNIQUEPASSWORD> with the password you created above:

mysql -u datadog --password=<UNIQUEPASSWORD> -e "show slave status" && \
echo -e "\033[0;32mMySQL grant - OK\033[0m" || \
echo -e "\033[0;31mMissing REPLICATION CLIENT grant\033[0m"

If enabled, metrics can be collected from the performance_schema database by granting an additional privilege:

mysql> show databases like 'performance_schema';
+--------------------------------+
| Database (performance_schema)  |
+--------------------------------+
| performance_schema             |
+--------------------------------+
1 row in set (0.00 sec)

mysql> GRANT SELECT ON performance_schema.* TO 'datadog'@'%';
Query OK, 0 rows affected (0.00 sec)

Configuration

Follow the instructions below to configure this check for an Agent running on a host. For containerized environments, see the Docker, Kubernetes, or ECS sections.
Host

To configure this check for an Agent running on a host:

Edit the mysql.d/conf.yaml file in the conf.d/ folder at the root of your Agent's configuration directory to start collecting your MySQL metrics and logs. See the sample mysql.d/conf.yaml for all available configuration options.
Metric collection

Add this configuration block to your mysql.d/conf.yaml to collect your MySQL metrics:

init_config:

instances:
  - host: 127.0.0.1
    username: datadog
    password: "<YOUR_CHOSEN_PASSWORD>" # from the CREATE USER step earlier
    port: "<YOUR_MYSQL_PORT>" # e.g. 3306
    options:
      replication: false
      galera_cluster: true
      extra_status_metrics: true
      extra_innodb_metrics: true
      extra_performance_metrics: true
      schema_size_metrics: false
      disable_innodb_metrics: false

Note: Wrap your password in single quotes in case a special character is present.

To collect extra_performance_metrics, your MySQL server must have performance_schema enabled - otherwise set extra_performance_metrics to false. For more information on performance_schema, see MySQL Performance Schema Quick Start.
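Note that performance_schema cannot be toggled at runtime; it has to be set at server startup. A minimal my.cnf fragment (the file location varies by distribution):

```ini
# my.cnf -- performance_schema is read-only at runtime; set it at startup
[mysqld]
performance_schema = ON
```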

Note: The datadog user should be set up in the MySQL integration configuration as host: 127.0.0.1 instead of localhost. Alternatively, you may also use sock.

See the sample mysql.yaml for all available configuration options, including those for custom metrics.

Restart the Agent to start sending MySQL metrics to Datadog.
Log collection

Available for Agent versions >6.0

By default, MySQL logs everything to /var/log/syslog, which requires root access to read. To make the logs more accessible, follow these steps:

    Edit /etc/mysql/conf.d/mysqld_safe_syslog.cnf and remove or comment out its lines.

    Edit /etc/mysql/my.cnf and add following lines to enable general, error, and slow query logs:

      [mysqld_safe]
      log_error = /var/log/mysql/mysql_error.log

      [mysqld]
      general_log = on
      general_log_file = /var/log/mysql/mysql.log
      log_error = /var/log/mysql/mysql_error.log
      slow_query_log = on
      slow_query_log_file = /var/log/mysql/mysql_slow.log
      long_query_time = 2

    Save the file and restart MySQL with: service mysql restart

    Make sure the Agent has read access on the /var/log/mysql directory and all of the files within. Double-check your logrotate configuration to make sure those files are taken into account and that the permissions are correctly set there as well.

    In /etc/logrotate.d/mysql-server there should be something similar to:

      /var/log/mysql.log /var/log/mysql/mysql.log /var/log/mysql/mysql_slow.log {
              daily
              rotate 7
              missingok
              create 644 mysql adm
              compress
      }

Collecting logs is disabled by default in the Datadog Agent. Enable it in your datadog.yaml file:

logs_enabled: true

Add this configuration block to your mysql.d/conf.yaml file to start collecting your MySQL logs:

logs:
  - type: file
    path: "<ERROR_LOG_FILE_PATH>"
    source: mysql
    service: "<SERVICE_NAME>"

  - type: file
    path: "<SLOW_QUERY_LOG_FILE_PATH>"
    source: mysql
    service: "<SERVICE_NAME>"
    log_processing_rules:
      - type: multi_line
        name: new_slow_query_log_entry
        pattern: "# Time:"
        # If mysqld was started with `--log-short-format`, use:
        # pattern: "# Query_time:"
        # If using mysql version <5.7, use the following rules instead:
        # - type: multi_line
        #   name: new_slow_query_log_entry
        #   pattern: "# Time|# User@Host"
        # - type: exclude_at_match
        #   name: exclude_timestamp_only_line
        #   pattern: "# Time:"

  - type: file
    path: "<GENERAL_LOG_FILE_PATH>"
    source: mysql
    service: "<SERVICE_NAME>"
    # For multiline logs, if they start by the date with the format yyyy-mm-dd uncomment the following processing rule
    # log_processing_rules:
    #   - type: multi_line
    #     name: new_log_start_with_date
    #     pattern: \d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])
    # If the logs start with a date with the format yymmdd but include a timestamp with each new second, rather than with each log, uncomment the following processing rule
    # log_processing_rules:
    #   - type: multi_line
    #     name: new_logs_do_not_always_start_with_timestamp
    #     pattern: \t\t\s*\d+\s+|\d{6}\s+\d{,2}:\d{2}:\d{2}\t\s*\d+\s+
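
The slow-query multi_line rule above works because every slow-log entry begins with a "# Time:" header line; the lines that follow are folded into that entry. A quick illustration (the sample log lines are made up):

```python
import re

# Each match starts a new log entry; non-matching lines are appended to it
entry_start = re.compile(r"# Time:")

slow_log = [
    "# Time: 2023-09-12T10:15:30.123456Z",
    "# User@Host: datadog[datadog] @ localhost []",
    "# Query_time: 2.5  Lock_time: 0.0  Rows_sent: 1",
    "SELECT SLEEP(2.5);",
]
print([bool(entry_start.match(line)) for line in slow_log])
# → [True, False, False, False]
```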

See the sample mysql.yaml for all available configuration options, including those for custom metrics.

Restart the Agent.

Docker

To configure this check for an Agent running on a container:
Metric collection

Set Autodiscovery Integration Templates as Docker labels on your application container:

LABEL "com.datadoghq.ad.check_names"='["mysql"]'
LABEL "com.datadoghq.ad.init_configs"='[{}]'
LABEL "com.datadoghq.ad.instances"='[{"server": "%%host%%", "username": "datadog", "password": "<UNIQUEPASSWORD>"}]'

See Autodiscovery template variables for details on supplying these values as environment variables instead of labels.
Log collection

Collecting logs is disabled by default in the Datadog Agent. To enable it, see Docker Log Collection.

Then, set Log Integrations as Docker labels:

LABEL "com.datadoghq.ad.logs"='[{"source":"mysql","service":"mysql"}]'

Kubernetes

To configure this check for an Agent running on Kubernetes:
Metric collection

Set Autodiscovery Integrations Templates as pod annotations on your application container. Alternatively, you can configure templates with a file, configmap, or key-value store.

Annotations v1 (for Datadog Agent < v7.36)

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  annotations:
    ad.datadoghq.com/mysql.check_names: '["mysql"]'
    ad.datadoghq.com/mysql.init_configs: '[{}]'
    ad.datadoghq.com/mysql.instances: |
      [
        {
          "server": "%%host%%",
          "username": "datadog",
          "password": "<UNIQUEPASSWORD>"
        }
      ]
  labels:
    name: mysql
spec:
  containers:
    - name: mysql

Annotations v2 (for Datadog Agent v7.36+)

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  annotations:
    ad.datadoghq.com/mysql.checks: |
      {
        "mysql": {
          "instances": [
            {
              "server": "%%host%%",
              "username": "datadog",
              "password": "<UNIQUEPASSWORD>"
            }
          ]
        }
      }
  labels:
    name: mysql
spec:
  containers:
    - name: mysql

See Autodiscovery template variables for details on supplying these values as environment variables instead of annotations.
Log collection

Collecting logs is disabled by default in the Datadog Agent. To enable it, see Kubernetes Log Collection.

Then, set Log Integrations as pod annotations. Alternatively, you can configure this with a file, configmap, or key-value store.

Annotations v1/v2

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  annotations:
    ad.datadoghq.com/mysql.logs: '[{"source": "mysql", "service": "mysql"}]'
  labels:
    name: mysql

ECS

To configure this check for an Agent running on ECS:
Metric collection

Set Autodiscovery Integrations Templates as Docker labels on your application container:

{
  "containerDefinitions": [{
    "name": "mysql",
    "image": "mysql:latest",
    "dockerLabels": {
      "com.datadoghq.ad.check_names": "[\"mysql\"]",
      "com.datadoghq.ad.init_configs": "[{}]",
      "com.datadoghq.ad.instances": "[{\"server\": \"%%host%%\", \"username\": \"datadog\", \"password\": \"<UNIQUEPASSWORD>\"}]"
    }
  }]
}

See Autodiscovery template variables for details on supplying these values as environment variables instead of labels.
Log collection

Available for Agent versions >6.0

Collecting logs is disabled by default in the Datadog Agent. To enable it, see ECS Log Collection.

Then, set Log Integrations as Docker labels:

{
  "containerDefinitions": [{
    "name": "mysql",
    "image": "mysql:latest",
    "dockerLabels": {
      "com.datadoghq.ad.logs": "[{\"source\":\"mysql\",\"service\":\"mysql\"}]"
    }
  }]
}

Validation

Run the Agent's status subcommand and look for mysql under the Checks section.
