What is Alertmanager?

The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.

Alertmanager

Installing the Alertmanager

To install Alertmanager, we first need to create a configuration file and then mount it into our Docker container.
The Alertmanager config file is in YAML format; below is an example config file named alertmanager.yml
(you need to provide the sender and receiver email addresses in the configuration to send and receive alerts).
Create the Alertmanager configuration file named alertmanager.yml in your home directory:

vim ~/alertmanager.yml

route:
  group_by: ['alertname']
  # Send all notifications to me.
  receiver: email-me

receivers:
- name: email-me
  email_configs:
  - to: receiver@gmail.com
    from: sender@gmail.com
    smarthost: smtp.gmail.com:587
    auth_username: "sender@gmail.com"
    auth_identity: "sender@gmail.com"
    auth_password: "**********************"     
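
Optionally, you can validate the configuration before starting the container. Below is a minimal sketch using amtool (the Alertmanager CLI bundled in the prom/alertmanager image; invoking it via an entrypoint override as shown is an assumption about how you prefer to run it):

# validate the mounted config file with amtool
docker run --rm -v ~/alertmanager.yml:/etc/alertmanager/alertmanager.yml --entrypoint amtool prom/alertmanager check-config /etc/alertmanager/alertmanager.yml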
  

We will run Alertmanager using Docker, as we did for Prometheus and Grafana.

docker run -d -p 9093:9093 -v ~/alertmanager.yml:/etc/alertmanager/alertmanager.yml prom/alertmanager

Now open your server's IP address on port 9093 in a browser to access the Alertmanager dashboard.

alertmanager dashboard
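
To quickly confirm from the command line that Alertmanager is up, you can also query its health endpoint (replace <server-ip> with your server's address):

curl http://<server-ip>:9093/-/healthy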

Now, when your application goes down, you should receive an alert at the email address you configured in the configuration file.

Sending An Alert

The Alertmanager has two APIs, v1 and v2, both listening for alerts. The scheme for v1 is described in the code snippet below. The scheme for v2 is specified as an OpenAPI specification that can be found in the Alertmanager repository. Clients are expected to continuously re-send alerts as long as they are still active (usually on the order of 30 seconds to 3 minutes). Clients can push a list of alerts to Alertmanager via a POST request.

The labels of each alert are used to identify identical instances of an alert and to perform deduplication. The annotations are always set to those received most recently and are not identifying an alert.

Both startsAt and endsAt timestamps are optional. If startsAt is omitted, the current time is assigned by the Alertmanager. endsAt is only set if the end time of an alert is known. Otherwise it will be set to a configurable timeout period from the time since the alert was last received.

The generatorURL field is a unique back-link which identifies the causing entity of this alert in the client.

[
  {
    "labels": {
      "alertname": "<requiredAlertName>",
      "<labelname>": "<labelvalue>",
      ...
    },
    "annotations": {
      "<labelname>": "<labelvalue>",
    },
    "startsAt": "<rfc3339>",
    "endsAt": "<rfc3339>",
    "generatorURL": "<generator_url>"
  },
  ...
]
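
As a concrete illustration, here is a sketch of pushing a single alert to the v2 API with curl (the label and annotation values and the generatorURL are made up for the example; replace <server-ip> with your Alertmanager host). startsAt is omitted, so Alertmanager assigns the current time:

# push one alert to the Alertmanager v2 API
curl -X POST http://<server-ip>:9093/api/v2/alerts \
  -H "Content-Type: application/json" \
  -d '[
    {
      "labels": {
        "alertname": "InstanceDown",
        "instance": "myapp:8080",
        "severity": "critical"
      },
      "annotations": {
        "summary": "myapp is unreachable"
      },
      "generatorURL": "http://prometheus.example.com/graph"
    }
  ]'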
    

We have created an alerting rule and referenced it from our prometheus.yml. The rule is shown below:

groups:
- name: example
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 1m
  

This means that when our application goes down (so the up metric for its target becomes 0), Prometheus waits for 1 minute, and if the instance is still down after that, it fires the InstanceDown alert, which Alertmanager then delivers to our email.
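
For completeness, here is a minimal sketch of how this rule could be wired into prometheus.yml, assuming the rule group above is saved as a separate file mounted at /etc/prometheus/alert.rules.yml and that Alertmanager is reachable at <server-ip>:9093 (both the path and the target address are assumptions for illustration):

# load the alerting rules defined above
rule_files:
  - /etc/prometheus/alert.rules.yml

# tell Prometheus where to send fired alerts
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - <server-ip>:9093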
