
Deploying Prometheus Monitoring on Kubernetes

Contents

1. Introduction to Prometheus

Prometheus Architecture

Component Functions

2. Deploying Prometheus in Kubernetes

1) Download the resources needed to deploy Prometheus

2) Log in to Grafana

3) Import dashboards

4) Access the Prometheus server

3. Monitoring Usage Example

1) Create a monitored project

2) Adjust monitoring


1. Introduction to Prometheus

Prometheus is an open-source service monitoring system and time-series database.

It provides a generic data model together with interfaces for fast metric collection, storage, and querying.

Its core component, the Prometheus server, periodically pulls data from statically configured targets, or from targets configured automatically via service discovery.

When newly pulled data outgrows the configured in-memory buffer, it is persisted to the storage device.
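As a sketch of the pull model described above, a minimal standalone scrape configuration could look like the following (the job name and target address are illustrative):

```yaml
# prometheus.yml (sketch): scrape one statically configured target
global:
  scrape_interval: 15s                     # how often targets are pulled
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["192.168.1.10:9100"]     # a node_exporter endpoint
```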

Prometheus Architecture

Component Functions

  • Monitoring agents, such as node_exporter: collect host metrics across many dimensions, including load average, CPU, memory, disk, and network.

  • kubelet (cAdvisor): collects container metrics, the core metrics source in K8S; per-container data includes CPU usage and limits, filesystem read/write limits, memory usage and limits, and network packet send, receive, and drop rates.

  • API Server: collects API Server performance metrics, including work-queue performance, request rates, and latency.

  • etcd: collects metrics about the etcd storage cluster.

  • kube-state-metrics: derives many Kubernetes-related metrics, mainly counters and metadata about resource types, including the total count of objects of a given type, resource quotas, container status, and Pod resource label series.

  • Every monitored host can expose its metrics through a dedicated exporter program and wait for the Prometheus server to scrape them periodically.

  • If alerting rules exist, scraped data is evaluated against them; when an alert condition is met, an alert is generated and sent to Alertmanager for aggregation and routing.

  • When a monitored target needs to push data actively, the Pushgateway component can receive and temporarily store the data until the Prometheus server collects it.

  • Any monitored target must first be registered with the monitoring system before its time-series data can be collected, stored, alerted on, and displayed.

  • Targets can be specified statically in the configuration, or managed dynamically by Prometheus through service discovery.
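In Kubernetes, the dynamic service-discovery path mentioned above is usually driven by the Prometheus Operator: a ServiceMonitor object selects Services by label and tells Prometheus which port and path to scrape. A sketch (all names here are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app              # illustrative name
  namespace: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      app: example-app           # matches Services carrying this label
  endpoints:
    - port: metrics              # named port on the Service
      path: /metrics
      interval: 30s
```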

2. Deploying Prometheus in Kubernetes

1) Download the resources needed to deploy Prometheus

# Add the Prometheus chart repository to helm
[root@k8s-master ~]# cd helm/
[root@k8s-master helm]# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
# Pull the kube-prometheus-stack chart
[root@k8s-master helm]# helm pull prometheus-community/kube-prometheus-stack
[root@k8s-master helm]# ls
helm-push_0.10.4_linux_amd64.tar.gz  linux-amd64                    nginx-18.1.15.tgz  zx-0.2.0.tgz
helm-v3.15.4-linux-amd64.tar.gz      nginx                          zx
kube-prometheus-stack-62.7.0.tgz     nginx-1.27.1-debian-12-r2.tar  zx-0.1.0.tgz
# Unpack the chart archive
[root@k8s-master helm]# tar zxf kube-prometheus-stack-62.7.0.tgz 
[root@k8s-master helm]# cd kube-prometheus-stack/
[root@k8s-master kube-prometheus-stack]# ls
Chart.lock  charts  Chart.yaml  CONTRIBUTING.md  README.md  templates  values.yaml

[root@k8s-master kube-prometheus-stack]# vim values.yaml 
227   imageRegistry: "reg.zx.org"

# Pull the container images referenced by each chart's values.yaml and push them to the Harbor registry
[root@k8s-master helm]# docker load -i prometheus-62.6.0.tar 
[root@k8s-master helm]# docker tag quay.io/prometheus/prometheus:v2.54.1 reg.zx.org/prometheus/prometheus:v2.54.1
[root@k8s-master helm]# docker tag quay.io/thanos/thanos:v0.36.1 reg.zx.org/thanos/thanos:v0.36.1
[root@k8s-master helm]# docker tag quay.io/prometheus/alertmanager:v0.27.0 reg.zx.org/prometheus/alertmanager:v0.27.0
[root@k8s-master helm]# docker tag quay.io/prometheus-operator/admission-webhook:v0.76.1 reg.zx.org/prometheus-operator/admission-webhook:v0.76.1
[root@k8s-master helm]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6 reg.zx.org/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
[root@k8s-master helm]# docker tag quay.io/prometheus-operator/prometheus-operator:v0.76.1 reg.zx.org/prometheus-operator/prometheus-operator:v0.76.1
[root@k8s-master helm]# docker tag quay.io/prometheus-operator/prometheus-config-reloader:v0.76.1 reg.zx.org/prometheus-operator/prometheus-config-reloader:v0.76.1

[root@k8s-master kube-prometheus-stack]# docker push reg.zx.org/prometheus/prometheus:v2.54.1
[root@k8s-master kube-prometheus-stack]# docker push reg.zx.org/thanos/thanos:v0.36.1
[root@k8s-master kube-prometheus-stack]# docker push reg.zx.org/prometheus/alertmanager:v0.27.0
[root@k8s-master kube-prometheus-stack]# docker push reg.zx.org/prometheus-operator/admission-webhook:v0.76.1
[root@k8s-master kube-prometheus-stack]# docker push reg.zx.org/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
[root@k8s-master kube-prometheus-stack]# docker push reg.zx.org/prometheus-operator/prometheus-operator:v0.76.1
[root@k8s-master kube-prometheus-stack]# docker push reg.zx.org/prometheus-operator/prometheus-config-reloader:v0.76.1
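The repetitive tag commands above all follow one rule: replace the upstream registry host with reg.zx.org (or prefix it for Docker Hub shorthand names). A small helper function (hypothetical, not part of the chart) that derives the local name:

```shell
# Hypothetical helper: derive the reg.zx.org image name used in the
# docker tag/push commands above. The first path component is treated
# as a registry host if it contains "." or ":" (or is "localhost"),
# mirroring Docker's own reference parsing.
retag() {
  local src="$1" first="${1%%/*}"
  case "$first" in
    *.*|*:*|localhost) echo "reg.zx.org/${src#*/}" ;;  # replace registry host
    *)                 echo "reg.zx.org/${src}"    ;;  # Docker Hub shorthand
  esac
}

for img in \
  quay.io/prometheus/prometheus:v2.54.1 \
  grafana/grafana:11.2.0 \
  registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0
do
  retag "$img"    # feed this into: docker tag "$img" "$(retag "$img")"
done
```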

[root@k8s-master kube-prometheus-stack]# cd charts/
[root@k8s-master charts]# cd grafana/
[root@k8s-master grafana]# ls
Chart.yaml  ci  dashboards  README.md  templates  values.yaml
[root@k8s-master grafana]# vim values.yaml 
"""
 1     global:
 2     # -- Overrides the Docker registry globally for all images
 3     imageRegistry: "reg.zx.org"    # modified

 414   image:
 415     # -- The Docker registry
 416     registry: docker.io
 417     repository: library/busyboxplus
 418     tag: "latest"

"""

[root@k8s-master helm]# docker load -i grafana-11.2.0.tar 
[root@k8s-master helm]# docker tag grafana/grafana:11.2.0 reg.zx.org/grafana/grafana:11.2.0
[root@k8s-master helm]# docker tag quay.io/kiwigrid/k8s-sidecar:1.27.4 reg.zx.org/kiwigrid/k8s-sidecar:1.27.4
[root@k8s-master helm]# docker tag grafana/grafana-image-renderer:latest reg.zx.org/grafana/grafana-image-renderer:latest
[root@k8s-master helm]# docker tag bats/bats:v1.4.1 reg.zx.org/bats/bats:v1.4.1
[root@k8s-master helm]# docker push reg.zx.org/bats/bats:v1.4.1
[root@k8s-master helm]# docker push reg.zx.org/grafana/grafana:11.2.0
[root@k8s-master grafana]# docker push reg.zx.org/kiwigrid/k8s-sidecar:1.27.4
[root@k8s-master grafana]# docker push reg.zx.org/grafana/grafana-image-renderer:latest

[root@k8s-master charts]# cd kube-state-metrics/
[root@k8s-master kube-state-metrics]# vim values.yaml 
  3 image:
  4   registry: reg.zx.org
 29   imageRegistry: "reg.zx.org"
[root@k8s-master helm]# docker load -i kube-state-metrics-2.13.0.tar 
[root@k8s-master helm]# docker tag registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0 reg.zx.org/kube-state-metrics/kube-state-metrics:v2.13.0
[root@k8s-master helm]# docker tag quay.io/brancz/kube-rbac-proxy:v0.18.0 reg.zx.org/brancz/kube-rbac-proxy:v0.18.0
[root@k8s-master helm]# docker push reg.zx.org/kube-state-metrics/kube-state-metrics:v2.13.0
[root@k8s-master helm]# docker push reg.zx.org/brancz/kube-rbac-proxy:v0.18.0

[root@k8s-master helm]# cd kube-prometheus-stack/
[root@k8s-master kube-prometheus-stack]# cd charts/
[root@k8s-master charts]# cd prometheus-node-exporter/
[root@k8s-master prometheus-node-exporter]# vim values.yaml 
  4 image:
  5   registry: reg.zx.org
 36   imageRegistry: "reg.zx.org"

[root@k8s-master helm]# docker load -i node-exporter-1.8.2.tar 
[root@k8s-master helm]# docker tag quay.io/prometheus/node-exporter:v1.8.2 reg.zx.org/prometheus/node-exporter:v1.8.2
[root@k8s-master helm]# docker tag quay.io/brancz/kube-rbac-proxy:v0.18.0 reg.zx.org/brancz/kube-rbac-proxy:v0.18.0
[root@k8s-master helm]# docker push reg.zx.org/prometheus/node-exporter:v1.8.2
[root@k8s-master helm]# docker push reg.zx.org/brancz/kube-rbac-proxy:v0.18.0

# Install Prometheus with helm
[root@k8s-master kube-prometheus-stack]# kubectl create namespace kube-prometheus-stack
namespace/kube-prometheus-stack created
[root@k8s-master kube-prometheus-stack]# helm  -n kube-prometheus-stack install kube-prometheus-stack  .

# Check that all pods are running
[root@k8s-master kube-prometheus-stack]# kubectl --namespace kube-prometheus-stack get pods
# List the Services
[root@k8s-master kube-prometheus-stack]# kubectl -n kube-prometheus-stack get svc

# Change the service exposure type to "type: LoadBalancer"
[root@k8s-master kube-prometheus-stack]# kubectl -n kube-prometheus-stack edit svc kube-prometheus-stack-grafana

[root@k8s-master kube-prometheus-stack]# kubectl -n kube-prometheus-stack edit svc kube-prometheus-stack-prometheus
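The interactive edits above flip a single field; the relevant fragment of each Service spec after the change looks like this (MetalLB, set up separately, then assigns the EXTERNAL-IP):

```yaml
spec:
  type: LoadBalancer    # was ClusterIP before the edit
```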

Purpose of each Service:

alertmanager-operated: alert management

kube-prometheus-stack-grafana: visualizes the metrics Prometheus collects

kube-prometheus-stack-prometheus-node-exporter: collects node-level metrics

kube-prometheus-stack-prometheus: the Prometheus server itself

2) Log in to Grafana

[root@k8s-master helm]# kubectl -n kube-prometheus-stack get secrets kube-prometheus-stack-grafana -o yaml
apiVersion: v1
data:
  admin-password: cHJvbS1vcGVyYXRvcg==
  admin-user: YWRtaW4=
  ldap-toml: ""
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: kube-prometheus-stack
    meta.helm.sh/release-namespace: kube-prometheus-stack
  creationTimestamp: "2024-09-21T11:12:48Z"
  labels:
    app.kubernetes.io/instance: kube-prometheus-stack
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 11.2.0
    helm.sh/chart: grafana-8.5.1
  name: kube-prometheus-stack-grafana
  namespace: kube-prometheus-stack
  resourceVersion: "5292"
  uid: 23028e9c-7b0a-4a78-94a4-5eedbad313ac
type: Opaque
[root@k8s-master helm]# echo -n "cHJvbS1vcGVyYXRvcg==" | base64 -d
prom-operator
[root@k8s-master helm]# echo "YWRtaW4=" | base64 -d
admin
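The decoding above can be sanity-checked offline; `-n` matters when re-encoding, since a trailing newline would change the base64 value. A pure-shell round trip (no cluster needed):

```shell
# Decode the Grafana admin credentials as Kubernetes stores them in the
# Secret's data fields (base64 of the raw bytes), then re-encode to
# confirm the round trip reproduces the stored value.
user=$(echo -n 'YWRtaW4=' | base64 -d)               # -> admin
pass=$(echo -n 'cHJvbS1vcGVyYXRvcg==' | base64 -d)   # -> prom-operator
echo "$user:$pass"                                   # admin:prom-operator
echo -n "$pass" | base64                             # cHJvbS1vcGVyYXRvcg==
```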

See the earlier post "Microservices in k8s" (CSDN blog): MetalLB assigns the VIP for LoadBalancer Services.

3) Import dashboards

4) Access the Prometheus server

[root@k8s-master ~]# kubectl -n kube-prometheus-stack get svc kube-prometheus-stack-prometheus
NAME                               TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                         AGE
kube-prometheus-stack-prometheus   LoadBalancer   10.110.149.210   172.25.254.50   9090:32154/TCP,8080:30562/TCP   102m

3. Monitoring Usage Example

1) Create a monitored project

# Pull the helm chart needed for this example
[root@k8s-master ~]# helm  pull  bitnami/nginx --version 18.1.11
[root@k8s-master ~]# tar zxf nginx-18.1.11.tgz 
[root@k8s-master ~]# docker load -i nginx-exporter-1.3.0-debian-12-r2.tar 
[root@k8s-master ~]# docker push reg.zx.org/bitnami/nginx-exporter:1.3.0-debian-12-r2

[root@k8s-master ~]# cd nginx/
[root@k8s-master nginx]# vim values.yaml     # edit the chart to enable metrics collection
  13   imageRegistry: "reg.zx.org"
 925 metrics:
 926   ## @param metrics.enabled Start a Prometheus exporter sidecar container
 927   ##
 928   enabled: true        # modified
1015   serviceMonitor:
1016     ## @param metrics.serviceMonitor.enabled Creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`)
1017     ##
1018     enabled: true        # modified
1019     ## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
1020     ##
1021     namespace: "kube-prometheus-stack"    # modified

[root@k8s-master nginx]# helm install zx .
[root@k8s-master nginx]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
zx-nginx-6f4fb4d585-6mjvj   2/2     Running   0          72s

[root@k8s-master nginx]# kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                     AGE
kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP                                     160m
testpod      LoadBalancer   10.102.151.211   172.25.254.52   80:31865/TCP                                49m
zx-nginx     LoadBalancer   10.103.78.231    172.25.254.53   80:30671/TCP,443:32701/TCP,9113:32403/TCP   113s

[root@k8s-master nginx]# curl 172.25.254.53
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

[root@k8s-master ~]# kubectl -n kube-prometheus-stack get servicemonitors.monitoring.coreos.com --show-labels

# Load test
[root@k8s-master nginx]# ab -c 5 -n 100 http://172.25.254.53/index.html
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 172.25.254.53 (be patient).....done


Server Software:        nginx
Server Hostname:        172.25.254.53
Server Port:            80

Document Path:          /index.html
Document Length:        615 bytes

Concurrency Level:      5
Time taken for tests:   0.446 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      87000 bytes
HTML transferred:       61500 bytes
Requests per second:    224.19 [#/sec] (mean)
Time per request:       22.302 [ms] (mean)
Time per request:       4.460 [ms] (mean, across all concurrent requests)
Transfer rate:          190.47 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    8   8.5      3      40
Processing:     2   13  11.2     10      51
Waiting:        1   13  11.2      9      50
Total:          3   21  15.1     21      72

Percentage of the requests served within a certain time (ms)
  50%     21
  66%     26
  75%     32
  80%     34
  90%     42
  95%     47
  98%     64
  99%     72
 100%     72 (longest request)
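The burst of requests generated above should then be visible in the Prometheus UI; assuming the metric names exposed by the nginx exporter sidecar, a query such as the following plots the per-second request rate:

```promql
rate(nginx_http_requests_total[1m])
```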

2) Adjust monitoring


Original article: https://blog.csdn.net/weixin_68256171/article/details/142419846
