Describes the process of configuring your Grafana, Prometheus and Alertmanager instances to monitor your Replex deployments.
Quick Note
This documentation is intended for on-premises Replex installations.
Clients hosted on *.replex.io do not need to follow it: all of the alerts and configurations described here are managed for you.
Dependencies
Setting up the monitoring stack requires Grafana, Prometheus, and Alertmanager to be installed on the target cluster.
Ensure that Grafana 5.3.4 or later is installed, as the dashboard JSON specification used by our charts is only compatible from that version onward.
Setting up Prometheus
We use Prometheus for metrics: our core applications expose their metrics in the Prometheus exposition format.
Our deployment specs already carry the required scrape annotations (as shown below), so your Prometheus instance will pick the metrics up automatically.
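The annotations follow the standard prometheus.io convention; the sketch below shows what they look like on a pod template. The port and path values here are illustrative placeholders only, not the actual Replex values.

```yaml
# Sketch of Prometheus scrape annotations on a Deployment's pod template.
# The port and path values are illustrative placeholders; the Replex
# deployment spec already ships with the correct values.
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"    # opt the pod in to scraping
        prometheus.io/port: "8080"      # port serving the metrics endpoint
        prometheus.io/path: "/metrics"  # HTTP path of the metrics endpoint
```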
To complete the configuration and make the metrics accessible, add the scrape config named kubernetes-pods from here to your prometheus.yml file.
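For reference, such a job typically follows the widely used kubernetes-pods example from the Prometheus project. The sketch below is based on that example (treat the linked configuration as the authoritative version); it keys off the prometheus.io annotations shown above and produces the kubernetes_namespace and kubernetes_pod_name labels referenced by the alert rules later in this document.

```yaml
# prometheus.yml (scrape_configs section) -- sketch based on the standard
# Prometheus Kubernetes example configuration.
- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Only scrape pods annotated with prometheus.io/scrape: "true".
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Honour a custom metrics path if the pod declares one.
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Scrape the port declared in the prometheus.io/port annotation.
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    # Copy pod labels and add the namespace/pod name labels used by the alerts.
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
```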
You can skip this step if your Prometheus installation already ships with an equivalent configuration.
Configuring Alertmanager (Optional)
Alerts can also be configured based on certain metrics exposed to your Prometheus instance.
This step is only necessary if you are installing Alertmanager for the first time; there is a very good guide on getting it set up and configuring your receivers here.
Installing Alertmanager itself is outside the scope of this document.
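For orientation only, a minimal alertmanager.yml with a single receiver might look like the sketch below; the receiver name, grouping labels, and Slack webhook URL are placeholders you would replace with your own values.

```yaml
# alertmanager.yml -- minimal sketch; the receiver name, grouping labels,
# and webhook URL are illustrative placeholders.
route:
  receiver: default-receiver                   # all alerts fall through to this receiver
  group_by: [alertname, kubernetes_namespace]  # group notifications per alert and namespace
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
receivers:
  - name: default-receiver
    slack_configs:
      - api_url: https://hooks.slack.com/services/<your-webhook-id>
        channel: '#alerts'
```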
The first step in setting up alerts is to confirm that your Prometheus instance is configured to point to Alertmanager correctly.
We use the following configuration for our instance:
```yaml
# prometheus.yml
rule_files:
  - /etc/prometheus-rules/rules  # This points to where the rules are stored
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager.<namespace>.svc.cluster.local:9093  # FQDN to your Alertmanager instance
```
Restarting your Prometheus instance is required after changing this configuration
You can check out this article about alerting rules and pointing Prometheus to Alertmanager.
Setting up Alerts
Once Alertmanager is properly configured with Prometheus, add the rules specified here to your Prometheus rules configuration.
You can copy the template below and modify it to fit your use cases or preferred alerting messages:
```yaml
# /etc/prometheus-rules/rules
groups:
  - name: uptime
    rules:
      - alert: CAdvisorHostDown
        expr: up{job="kubernetes-cadvisor"} == 0
        for: 1m
        labels:
          severity: high
        annotations:
          summary: CAdvisor reports Host {{ $labels.instance }} is down, investigate immediately!
      - alert: NodeExporterNodeDown
        expr: up{job="kubernetes-nodes"} == 0
        for: 1m
        labels:
          severity: high
        annotations:
          summary: NodeExporter reports {{ $labels.instance }} is down, investigate immediately!
      - alert: APIServerDown
        expr: up{job="kubernetes-apiservers"} == 0
        for: 1m
        labels:
          severity: high
        annotations:
          summary: APIServer {{ $labels.instance }} is down, investigate immediately!
      - alert: PodDown
        expr: up{job="kubernetes-pods"} == 0
        for: 1m
        labels:
          severity: high
        annotations:
          summary: Pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} is down, investigate immediately!
  - name: pvc
    rules:
      - alert: VolumeRequestThresholdExceeded
        expr: (kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes) > 0.9
        for: 1m
        labels:
          severity: high
        annotations:
          summary: Volume {{ $labels.persistentvolumeclaim }} in namespace {{ $labels.namespace }} on node {{ $labels.kubernetes_io_hostname }} exceeded threshold capacity of 90%
      - alert: UnboundedPV
        expr: kube_persistentvolume_status_phase{phase!="Bound"} == 1
        for: 1d
        labels:
          severity: high
        annotations:
          summary: PV {{ $labels.persistentvolume }} has been in phase {{ $labels.phase }} for more than 1 day.
      - alert: UnboundedPVC
        expr: kube_persistentvolumeclaim_status_phase{phase!="Bound"} == 1
        for: 5m
        labels:
          severity: high
        annotations:
          summary: PVC {{ $labels.persistentvolumeclaim }} in namespace {{ $labels.namespace }} is currently in phase {{ $labels.phase }}.
  - name: replex
    rules:
      - alert: ServerErrorAlert
        expr: sum by (kubernetes_namespace, kubernetes_pod_name) (changes(server_http_request_duration_seconds_count{job="kubernetes-pods",status_code=~"5.*"}[1m])) > 0
        for: 30s
        labels:
          severity: medium
        annotations:
          summary: 5xx errors on {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} for url {{ $labels.url }} exceeded threshold of 1 requests in 1 minute
      - alert: PushGatewayError
        expr: sum by (kubernetes_namespace, kubernetes_pod_name) (changes(pushgateway_push_requests_duration_seconds_count{job="kubernetes-pods", status=~"5.*"}[30m])) > 0
        for: 30s
        labels:
          severity: medium
        annotations:
          summary: 5xx errors on pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} exceeded threshold of 1 requests within 30 minutes
      - alert: PricingAPIError
        expr: sum by (kubernetes_namespace, kubernetes_pod_name) (changes(pricingapi_http_request_duration_seconds_count{job="kubernetes-pods",status_code=~"5.*"}[1m])) > 0
        for: 30s
        labels:
          severity: medium
        annotations:
          summary: 5xx errors on pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} exceeded threshold of 1 requests in 1 minute
      - alert: AggregatorErrors
        expr: changes(aggregator_aggregation_duration_seconds_count{job="kubernetes-pods",status="0"}[15m]) > 0
        for: 15m
        labels:
          severity: medium
        annotations:
          summary: Failed aggregation on pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} exceeded threshold of 1 requests in 15 minutes
  - name: database
    rules:
      - alert: ReplicationStopped
        expr: pg_repl_stream_active{job="kubernetes-pods"} == 0
        for: 10s
        labels:
          severity: high
        annotations:
          summary: Replication slot {{ $labels.slot_name }} for server {{ $labels.server }} is no longer active
      - alert: PushGateWayDatabaseUnavailable
        expr: pushgateway_database_status{job="kubernetes-pods"} == 0
        for: 1m
        labels:
          severity: high
        annotations:
          summary: Database connection for pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} is no longer active
          description: Database connection for the pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} is no longer active, consider restarting the pod
      - alert: AggregatorDatabaseUnavailable
        expr: aggregator_database_status{job="kubernetes-pods"} == 0
        for: 1m
        labels:
          severity: high
        annotations:
          summary: Database connection for pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} is no longer active
          description: Database connection for the pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }} is no longer active, consider restarting the pod
      - alert: FailedAggregationEvent
        expr: changes(aggregator_query_duration_seconds_count{job="kubernetes-pods",status="0"}[15m]) > 0
        for: 1m
        labels:
          severity: high
        annotations:
          summary: "{{ $labels.aggregation_type }} cron job failed on pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }}"
          description: A failed aggregation event has occurred on {{ $labels.aggregation_type }} on pod {{ $labels.kubernetes_namespace }}/{{ $labels.kubernetes_pod_name }}
      - alert: HighNumberOfConnections
        expr: pg_stat_database_numbackends{datname="postgres"} > 30
        for: 1m
        labels:
          severity: high
        annotations:
          summary: "More than 30 connections to database {{ $labels.datname }} on server {{ $labels.server }}"
          description: "More than 30 connections to database {{ $labels.datname }} on server {{ $labels.server }}"
```
After copying the template above into your Prometheus rules file (and modifying it if necessary), check the Alerts page of your Prometheus dashboard to verify that the alerts are registered.
Restarting your Prometheus instance is required after editing the rules
Finishing with Grafana
Once Prometheus is configured, you can then proceed to install the Grafana charts.
The charts are hosted and maintained publicly on Grafana.
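If you manage Grafana as configuration rather than importing dashboards through the UI, one option is to download the dashboard JSON files and register them with a file-based dashboard provider. The sketch below assumes the JSON files have been copied to a path mounted on the Grafana instance; the provider name, folder, and path are illustrative placeholders.

```yaml
# /etc/grafana/provisioning/dashboards/replex.yaml -- sketch; the provider
# name, folder, and dashboard path are illustrative placeholders.
apiVersion: 1
providers:
  - name: replex-dashboards
    folder: Replex        # Grafana folder the dashboards appear under
    type: file
    options:
      path: /var/lib/grafana/dashboards/replex  # where the JSON files live
```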
Provided the metrics from the Replex components are exposed correctly, you should see dashboards similar to this: