Labels

Adding labels to your Prometheus alert rules provides context and metadata that Alertmanager can use for routing, grouping, and filtering. Here are some useful labels to consider adding:

  1. severity: Indicates the importance of the alert, such as critical, warning, info, etc.

  2. alert_type: Specifies the type of alert, like resource_usage, performance, availability, security, etc.

  3. resource: Indicates the resource involved, such as cpu, memory, disk, network, etc.

  4. job: Denotes the job or service name from which the alert originates (inherited from Prometheus metrics, such as node_exporter, api_server).

  5. instance: Represents the specific instance (server) where the alert is firing, like server1.example.com.

  6. environment: Specifies the environment in which the alert occurs, such as production, staging, development.

  7. region: Indicates the geographical region, such as us-east-1, eu-west-1, etc., useful for multi-region monitoring.

  8. datacenter: Specifies the data center where the instance is hosted, e.g., dc1, dc2, dc3.

  9. service: Denotes the specific service or application generating the alert, like database, webserver, cache.

  10. team: Specifies the responsible team, such as ops, dev, networking, so alerts can be routed to the right team (see the routing sketch after this list).

  11. priority: Helps indicate urgency, using levels such as P1, P2, P3.

  12. sla: Indicates the Service Level Agreement (SLA) tier associated with the affected service, which helps prioritize response.

  13. owner: Specifies the individual or team responsible for the resource, useful for routing alerts to the correct owner.

  14. cluster: Helps in multi-cluster environments to identify which cluster the alert is related to.

  15. component: Represents a specific component of an application or infrastructure, such as frontend, backend, database.

  16. impact: Describes the impact level of the alert, e.g., high, medium, low, useful for prioritizing alerts.

  17. customer: Indicates a specific customer or tenant, useful in multi-tenant environments for filtering and alert management.

  18. function: Represents the specific function within the service, such as authentication, logging, api_gateway.

  19. mode: Describes the operational mode of the alert, e.g., primary, replica, standby, useful in clustered or HA systems.

  20. cause: Indicates a known cause for the alert if identifiable, e.g., network_issue, hardware_failure, high_traffic, to provide context.
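
Several of these labels exist mainly so Alertmanager can route on them. Below is a minimal routing sketch, assuming Alertmanager v0.22+ matcher syntax; the receiver names (ops-pager, dev-slack) and webhook URLs are placeholders for illustration, not part of any standard setup:

route:
  receiver: default
  group_by: ['alertname', 'cluster', 'service']
  routes:
    # Critical alerts owned by the ops team go to the pager receiver.
    - matchers:
        - team="ops"
        - severity="critical"
      receiver: ops-pager
    # Everything else labelled for the dev team goes to a chat channel.
    - matchers:
        - team="dev"
      receiver: dev-slack

receivers:
  - name: default
    webhook_configs:
      - url: 'http://example.internal/alerts'   # placeholder endpoint
  - name: ops-pager
    webhook_configs:
      - url: 'http://example.internal/pager'    # placeholder endpoint
  - name: dev-slack
    webhook_configs:
      - url: 'http://example.internal/slack'    # placeholder endpoint

Grouping on alertname, cluster, and service (labels 9 and 14 above) keeps related notifications in a single message instead of one per instance.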

Example Usage with New Labels

Here’s how you could incorporate some of these labels in an alert rule:

- alert: HighMemoryUsage
  expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100 > 90
  for: 2m
  labels:
    severity: warning
    alert_type: resource_usage
    resource: memory
    environment: production
    region: us-east-1
    team: ops
    priority: P2
  annotations:
    summary: "High Memory Usage Detected on {{ $labels.instance }}"
    description: "Memory usage is above 90% for more than 2 minutes on instance {{ $labels.instance }} in the {{ $labels.environment }} environment (job: {{ $labels.job }})."

Adding such labels will provide better context in the alert details, making it easier to route, filter, and analyze alerts.
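
The same labels can also drive inhibition. As a rough sketch (again assuming Alertmanager v0.22+ matcher syntax), a firing critical alert can mute the warning-level version of the same alert on the same instance:

inhibit_rules:
  # If a critical alert is firing for an instance, suppress the
  # warning-level alert with the same name on that instance.
  - source_matchers:
      - severity="critical"
    target_matchers:
      - severity="warning"
    equal: ['alertname', 'instance']

For ad-hoc filtering, the labels work as matchers on the command line as well, e.g. amtool alert query team=ops environment=production (assuming amtool is configured to reach your Alertmanager).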

