The Prometheus monitoring system and time series database.

Overview

Prometheus

Visit prometheus.io for the full documentation, examples and guides.

Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

The features that distinguish Prometheus from other metrics and monitoring systems are:

  • A multi-dimensional data model (time series defined by metric name and set of key/value dimensions)
  • PromQL, a powerful and flexible query language to leverage this dimensionality (see the example query after this list)
  • No dependency on distributed storage; single server nodes are autonomous
  • An HTTP pull model for time series collection
  • Pushing time series is supported via an intermediary gateway for batch jobs
  • Targets are discovered via service discovery or static configuration
  • Multiple modes of graphing and dashboarding support
  • Support for hierarchical and horizontal federation
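
As a quick illustration of the data model and PromQL, the following queries the HTTP API for the per-second rate of a hypothetical http_requests_total metric filtered by its job and status labels (the metric and label names are examples, and a server listening on localhost:9090 is assumed):

$ curl 'http://localhost:9090/api/v1/query' \
    --data-urlencode 'query=rate(http_requests_total{job="api-server",status="500"}[5m])'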

Architecture overview

Install

There are various ways of installing Prometheus.

Precompiled binaries

Precompiled binaries for released versions are available in the download section on prometheus.io. Using the latest production release binary is the recommended way of installing Prometheus. See the Installing chapter in the documentation for all the details.
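
For example, a sketch of downloading and running a release on Linux amd64 (assuming the 2.32.1 release here; substitute the release and platform you actually want):

$ wget https://github.com/prometheus/prometheus/releases/download/v2.32.1/prometheus-2.32.1.linux-amd64.tar.gz
$ tar xvf prometheus-2.32.1.linux-amd64.tar.gz
$ cd prometheus-2.32.1.linux-amd64
$ ./prometheus --config.file=prometheus.yml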

Docker images

Docker images are available on Quay.io or Docker Hub.

You can launch a Prometheus container for trying it out with

$ docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus

Prometheus will now be reachable at http://localhost:9090/.
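
To use your own configuration, you can mount it over the image's default config path, /etc/prometheus/prometheus.yml (a sketch, assuming your file is at ./prometheus.yml):

$ docker run --name prometheus -d -p 127.0.0.1:9090:9090 \
    -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus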

Building from source

To build Prometheus from source code, first ensure that you have a working Go environment with version 1.14 or greater installed. You also need Node.js and npm installed in order to build the frontend assets.

You can directly use the go tool to download and install the prometheus and promtool binaries into your GOPATH:

$ GO111MODULE=on go install github.com/prometheus/prometheus/cmd/...
$ prometheus --config.file=your_config.yml

However, when using go install to build Prometheus, Prometheus will expect to be able to read its web assets from local filesystem directories under web/ui/static and web/ui/templates. In order for these assets to be found, you will have to run Prometheus from the root of the cloned repository. Note also that these directories do not include the new experimental React UI unless it has been built explicitly using make assets or make build.

An example of the above configuration file can be found here.
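
For instance, a minimal your_config.yml that scrapes Prometheus itself might look like the following sketch (the example configuration referenced above is more complete):

$ cat > your_config.yml <<'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
EOF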

You can also clone the repository yourself and build using make build, which will compile in the web assets so that Prometheus can be run from anywhere:

$ mkdir -p $GOPATH/src/github.com/prometheus
$ cd $GOPATH/src/github.com/prometheus
$ git clone https://github.com/prometheus/prometheus.git
$ cd prometheus
$ make build
$ ./prometheus --config.file=your_config.yml

The Makefile provides several targets:

  • build: build the prometheus and promtool binaries (includes building and compiling in web assets)
  • test: run the tests
  • test-short: run the short tests
  • format: format the source code
  • vet: check the source code for common errors
  • assets: build the new experimental React UI

Building the Docker image

The make docker target is designed for use in our CI system. You can build a docker image locally with the following commands:

$ make promu
$ promu crossbuild -p linux/amd64
$ make npm_licenses
$ make common-docker-amd64

NB: if you are on a Mac, you will need gnu-tar.
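
If you use Homebrew, it can be installed with, for example:

$ brew install gnu-tar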

React UI Development

For more information on building, running, and developing on the new React-based UI, see the React app's README.md.

More information

Contributing

Refer to CONTRIBUTING.md

License

Apache License 2.0, see LICENSE.

Issues
  • Remote storage

    Prometheus needs to be able to interface with a remote and scalable data store for long-term storage/retrieval.

    kind/enhancement 
    opened by juliusv 170
  • TSDB data import tool for OpenMetrics format.

    Created a tool to import data formatted according to the Prometheus exposition format. The tool can be accessed via the TSDB CLI.

    closes prometheus/prometheus#535

    Signed-off-by: Dipack P Panjabi [email protected]

    (Port of https://github.com/prometheus/tsdb/pull/671)

    opened by dipack95 126
  • Add mechanism to perform bulk imports

    Currently the only way to bulk-import data is a hacky one involving client-side timestamps and scrapes with multiple samples per time series. We should offer an API for bulk import. This relies on https://github.com/prometheus/prometheus/issues/481.

    EDIT: It probably won't be a web-based API in Prometheus, but a command-line tool.

    kind/enhancement priority/P2 component/tsdb 
    opened by juliusv 112
  • Create a section ANNOTATIONS with user-defined payload and generalize RUNBOOK, DESCRIPTION, SUMMARY into fields therein.

    RUNBOOK was added in a hurry in #843 for an internal demo of one of our users, which didn't give it enough time to be fully discussed. The demo has been done, so we can reconsider this.

    I think we should revert this change, and remove RUNBOOK:

    • Our general policy is that if it can be done with labels, do it with labels
    • All notification methods in the alertmanager will need extra code to deal with this
    • In future, all alertmanager notification templates will need extra code to deal with this
    • In general, all user code touching the alertmanager will need extra code to deal with this
    • This presumes a certain workflow in that you have something called a "runbook" (and not any other name - playbook is also common) and that you have exactly one of them

    Runbooks are not a fundamental aspect of an alert, are not in use by all of our users and thus I don't believe they meet the bar for first-class support within prometheus. This is especially true considering that they don't add anything that isn't already possible with labels.

    opened by brian-brazil 102
  • Implement strategies to limit memory usage.

    Currently, Prometheus simply limits the chunks in memory to a fixed number.

    However, this number doesn't directly imply the total memory usage as many other things take memory as well.

    Prometheus could measure its own memory consumption and (optionally) evict chunks early if it needs too much memory.

    It's non-trivial to measure "actual" memory consumption in a platform independent way.

    kind/enhancement 
    opened by beorn7 90
  • '@ <timestamp>' modifier

    This PR implements the @ <timestamp> modifier as per this design doc.

    An example query:

    rate(process_cpu_seconds_total[1m]) 
      and
    topk(7, rate(process_cpu_seconds_total[1h] @ 1234))
    

    which ranks based on the last 1h rate with respect to Unix timestamp 1234, but actually plots the 1m rate.

    Closes #7903

    This PR is to be followed up with an easier way to represent the start, end, range of a query in PromQL so that we could do @ <end>, metric[<range>] easily.

    opened by codesome 88
  • Add option to log slow queries and recording rules

    opened by AlekSi 86
  • Add offset to selectParams

    This adds the Offset from the promql.EvalStmt to the selectParams which is sent to the querier during Select()

    Fixes #4224

    opened by jacksontj 80
  • Port isolation from old TSDB PR

    The original PR was https://github.com/prometheus/tsdb/pull/306 .

    I tried to carefully adjust to the new world order, but please give this a very careful review, especially around iterator reuse (marked with a TODO).

    On the bright side, I definitely found and fixed a bug in txRing.

    prombench 
    opened by beorn7 78
  • 2.3.0 significant memory usage increase.

    Bug Report

    What did you do? Upgraded to 2.3.0

    What did you expect to see? General improvements.

    What did you see instead? Under which circumstances? Memory usage, possibly driven by queries, has increased considerably. The upgrade happened at 09:27; the drops in memory usage on the graph after that point are from container restarts due to OOM.

    container_memory_usage_bytes

    Environment

    Prometheus in kubernetes 1.9

    • System information: Standard docker containers, on docker kubelet on linux.

    • Prometheus version: 2.3.0

    kind/bug 
    opened by tcolgate 77
  • Missing metadata for some metrics

    What did you do? Deployed an example application emitting Prometheus metrics in a Kubernetes cluster alongside a Prometheus deployment.

    What did you expect to see? All metrics with properly formatted HELP and TYPE strings have their metadata available in the Prometheus UI and API.

    What did you see instead? Under which circumstances?

    When querying for metrics in the graph UI, I am able to see the metric time series plotted. However, the metric metadata in the autocomplete tooltip is missing.

    I went to the metadata API at /api/v1/metadata and saw the metadata was missing there as well.

    I verified the target metrics endpoint looks to be exporting properly-formatted metrics.

    Environment

    • System information:

    Linux 5.4.144+ x86_64

    • Prometheus version:
    prometheus, version 2.32.1 (branch: HEAD, revision: 41f1a8125e664985dd30674e5bdf6b683eff5d32)
      build user:       [email protected]
      build date:       20211217-22:08:06
      go version:       go1.17.5
      platform:         linux/amd64
    
    • Alertmanager version:

    N/A

    • Prometheus configuration file:
    global:
      scrape_interval: 30s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: prometheus
      honor_timestamps: true
      scrape_interval: 30s
      scrape_timeout: 10s
      metrics_path: /metrics
      scheme: http
      follow_redirects: true
      static_configs:
      - targets:
        - localhost:9090
    - job_name: prom-example
      honor_timestamps: true
      scrape_interval: 30s
      scrape_timeout: 10s
      metrics_path: /metrics
      scheme: http
      follow_redirects: true
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        separator: ;
        regex: o11y-demo
        replacement: $1
        action: keep
      - source_labels: [__meta_kubernetes_namespace]
        separator: ;
        regex: (.*)
        target_label: namespace
        replacement: $1
        action: replace
      - source_labels: [__meta_kubernetes_pod_name, __meta_kubernetes_pod_container_port_name]
        separator: ;
        regex: (.+);(.+)
        target_label: instance
        replacement: $1:$2
        action: replace
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        separator: ;
        regex: metrics
        replacement: $1
        action: keep
      kubernetes_sd_configs:
      - role: pod
        kubeconfig_file: ""
        follow_redirects: true
    
    • Alertmanager configuration file: N/A

    • Logs: N/A

    Please let me know if there's any more detail I can provide and thank you!

    opened by pintohutch 0
  • Add a time_tz(tz) function to promql

    This function aims to solve problems for those who need to account for DST in their queries.

    opened by sylr 6
  • Error sending alert: http2: unsupported scheme

    What did you do? I've updated prometheus and alertmanager.

    What did you expect to see? alerts being sent to alertmanager.

    What did you see instead? Under which circumstances? No alerts are sent to Alertmanager, and an error is logged by Prometheus.

    Environment docker + docker-compose in an Ubuntu 20.04.3 LTS VM running on VMware vSphere ESXi.

    There is an nginx instance installed on the Ubuntu host machine, which provides HTTPS and a reverse proxy for internal and external requests to service-name.domain.tld.

    • System information:

      Linux 5.4.0-94-generic x86_64

    • Prometheus version:

      from version v2.28.1 to v2.32.1

    • Alertmanager version:

      from version v0.22.2 to v0.23.0

    • Docker Compose file:

    version: '3.7'
    
    volumes:
      prometheus_data: {}
    
    networks:
      default:
        driver: bridge
        ipam:
          driver: default
          config:
            - subnet: 172.19.13.0/24
    
      front-tier:
      default:
        driver: bridge
        ipam:
          driver: default
          config:
            - subnet: 172.27.173.0/24
    
      back-tier:
      default:
        driver: bridge
        ipam:
          driver: default
          config:
            - subnet: 172.31.191.0/24
    
    services:
      prometheus:
    #   https://hub.docker.com/r/prom/prometheus/tags
        image: prom/prometheus:v2.32.1
        volumes:
          - ./prometheus/:/etc/prometheus/
          - prometheus_data:/prometheus
        command:
          - '--config.file=/etc/prometheus/prometheus.yml'
          - '--storage.tsdb.path=/prometheus'
          - '--storage.tsdb.retention.size=200GB'
          - '--storage.tsdb.retention.time=5y'
          - '--web.console.libraries=/usr/share/prometheus/console_libraries'
          - '--web.console.templates=/usr/share/prometheus/consoles'
          - '--web.external-url=https://prometheus.domain.tld/'
          - '--web.enable-lifecycle'
          - '--web.enable-admin-api'
          - '--enable-feature=promql-at-modifier'
        ports:
          - 9090:9090
        networks:
          - back-tier
        restart: always
        logging:
          driver: json-file
          options:
            max-size: "10m"
            max-file: "3"
    
      alertmanager:
    #   https://hub.docker.com/r/prom/alertmanager/tags
        image: prom/alertmanager:v0.23.0
        ports:
          - 9093:9093
        volumes:
          - ./alertmanager/etc/alertmanager/:/etc/alertmanager/
          - ./alertmanager/data/:/data/
        networks:
          - back-tier
        restart: always
        command:
          - '--config.file=/etc/alertmanager/config.yml'
          - '--storage.path=/data'
          - '--web.external-url=https://alertmanager.domain.tld/'
    
    • Prometheus configuration file:
    global:
      scrape_interval:     15s # By default, scrape targets every 15 seconds.
      evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
      # scrape_timeout is set to the global default (10s).
    
      # Attach these labels to any time series or alerts when communicating with
      # external systems (federation, remote storage, Alertmanager).
    #  external_labels:
    #      monitor: 'my-project'
    
    # Load and evaluate rules in this file every 'evaluation_interval' seconds.
    rule_files:
      - 'alerts/*.yml'
      # - "first.rules"
      # - "second.rules"
    
    # alert
    alerting:
      alertmanagers:
      - scheme: http
        proxy_url: "https://alertmanager.domain.tld"
        static_configs:
        - targets:
          - "alertmanager:9093"
    
    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
    
      - job_name: 'prometheus'
        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 5s
        scrape_timeout: 4s
        static_configs:
          - targets: ['localhost:9090']
    
      - job_name: 'alertmanager'
        scrape_interval: 10s
        scrape_timeout: 9s
        static_configs:
          - targets: ['alertmanager:9093']
    
    • Alertmanager configuration file:
    # https://www.prometheus.io/docs/alerting/latest/configuration
    
    global:
      smtp_smarthost: 'mailserver.domain.tld:25'
      smtp_from: '[email protected]'
      smtp_hello: 'redacted.domain.tld'
      smtp_require_tls: false
      http_config:
        proxy_url: 'https://alertmanager.domain.tld'
    
    templates:
      - '/etc/alertmanager/template/*.tmpl'
    
    # A route block defines a node in a routing tree and its children.
    #
    # Every alert enters the routing tree at the configured top-level route, 
    # which must match all alerts (i.e. not have any configured matchers). 
    # It then traverses the child nodes. If continue is set to false, it stops 
    # after the first matching child. 
    # If continue is true on a matching node, the alert will continue matching 
    # against subsequent siblings. 
    # If an alert does not match any children of a node 
    # (no matching child nodes, or none exist), the alert is handled based 
    # on the configuration parameters of the current node.
    route:
    
      # The labels by which incoming alerts are grouped together. For example,
      # multiple alerts coming in for cluster=A and alertname=LatencyHigh would
      # be batched into a single group.
      #
      # To aggregate by all possible labels use the special value '...' as the sole label name, for example:
      # group_by: ['...']
      # This effectively disables aggregation entirely, passing through all
      # alerts as-is. This is unlikely to be what you want, unless you have
      # a very low alert volume or your upstream notification system performs
      # its own grouping.
      group_by: ['alertname', 'instance', 'service']
    
      # How long to initially wait to send a notification for a group of alerts. 
      # Allows to wait for an inhibiting alert to arrive or collect more initial 
      # alerts for the same group. (Usually ~0s to few minutes.)
      group_wait: 30s
    
      # How long to wait before sending a notification about new alerts that are 
      # added to a group of alerts for which an initial notification has 
      # already been sent.
      group_interval: 5m
    
      # How long to wait before sending a notification again if it has 
      # already been sent successfully for an alert.
      #repeat_interval: 8736h
      receiver: mail-alerts
    
      # Zero or more child routes.
      # 
      # Routing tree visualizer / debugger:
      # https://www.prometheus.io/webtools/alerting/routing-tree-editor/
      routes:
      - matchers:
          - 'severity="critical"'
          - 'alertname=~"HP_iLO_Unhealthy_Temparature|Temperature_RZ1|Temperature_RZ2|Temperature_RZ3|Temperature_USV_Room"'
        receiver: mail-alerts-critical-temperature
      - match:
          severity: critical
        receiver: mail-alerts-critical
      - match:
          severity: warning
        receiver: mail-alerts-warning
      - match:
          severity: test
        receiver: tils-test-mail
        repeat_interval: 3000m
    
    # An inhibition rule mutes an alert (target) matching a set of matchers 
    # when an alert (source) exists that matches another set of matchers. 
    # Both target and source alerts must have the same label values for 
    # the label names in the equal list.
    #
    # Semantically, a missing label and a label with an empty value 
    # are the same thing. 
    # Therefore, if all the label names listed in equal are missing from 
    # both the source and target alerts, the inhibition rule will apply.
    #
    # To prevent an alert from inhibiting itself, an alert that matches 
    # both the target and the source side of a rule cannot be inhibited 
    # by alerts for which the same is true (including itself). 
    # However, we recommend to choose target and source matchers 
    # in a way that alerts never match both sides. 
    # It is much easier to reason about and does not trigger this special case.
    inhibit_rules:
      # We use this to mute any warning-level notifications if the same alert is 
      # already critical.
      - source_match:
          severity: 'critical'
        target_match:
          severity: 'warning'
        # Apply inhibition if the alertname is the same.
        # CAUTION: 
        #   If all label names listed in `equal` are missing 
        #   from both the source and target alerts,
        #   the inhibition rule will apply!
        equal: ['alertname', 'instance', 'service']
    
    # Receiver is a named configuration of one or more notification integrations.
    receivers:
      - name: 'mail-alerts'
        email_configs:
          - to: '[email protected]'
            send_resolved: true
      - name: 'mail-alerts-critical'
        email_configs:
          - to: '[email protected]'
            send_resolved: true
      - name: 'mail-alerts-critical-temperature'
        email_configs:
          - to: '[email protected]'
            from: '[email protected]'
            send_resolved: true
      - name: 'mail-alerts-warning'
        email_configs:
          - to: '[email protected]'
            send_resolved: true
      - name: 'tils-test-mail'
        email_configs:
         - to: '[email protected]' 
    
    • Logs: ** Alertmanager
    alertmanager_1              | level=info ts=2022-01-13T10:52:01.667Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)"
    alertmanager_1              | level=info ts=2022-01-13T10:52:01.667Z caller=main.go:226 build_context="(go=go1.16.7, [email protected], date=20210825-10:48:55)"
    alertmanager_1              | level=info ts=2022-01-13T10:52:01.668Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=172.18.0.7 port=9094
    alertmanager_1              | level=info ts=2022-01-13T10:52:01.671Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." interval=2s
    alertmanager_1              | level=info ts=2022-01-13T10:52:01.713Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/config.yml
    alertmanager_1              | level=info ts=2022-01-13T10:52:01.714Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/config.yml
    alertmanager_1              | level=info ts=2022-01-13T10:52:01.717Z caller=main.go:518 msg=Listening address=:9093
    alertmanager_1              | level=info ts=2022-01-13T10:52:01.718Z caller=tls_config.go:191 msg="TLS is disabled." http2=false
    alertmanager_1              | level=info ts=2022-01-13T10:52:03.672Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.001061322s
    alertmanager_1              | level=info ts=2022-01-13T10:52:11.674Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" elapsed=10.002424442s
    

    ** Prometheus

    prometheus_1                | ts=2022-01-12T14:02:20.367Z caller=main.go:171 level=info msg="Experimental promql-at-modifier enabled"
    prometheus_1                | ts=2022-01-12T14:02:20.388Z caller=main.go:515 level=info msg="Starting Prometheus" version="(version=2.32.1, branch=HEAD, revision=41f1a8125e664985dd30674e5bdf6b683eff5d32)"
    prometheus_1                | ts=2022-01-12T14:02:20.388Z caller=main.go:520 level=info build_context="(go=go1.17.5, [email protected], date=20211217-22:08:06)"
    prometheus_1                | ts=2022-01-12T14:02:20.388Z caller=main.go:521 level=info host_details="(Linux 5.4.0-94-generic #106-Ubuntu SMP Thu Jan 6 23:58:14 UTC 2022 x86_64 7388d9fb57e3 (none))"
    prometheus_1                | ts=2022-01-12T14:02:20.388Z caller=main.go:522 level=info fd_limits="(soft=1048576, hard=1048576)"
    prometheus_1                | ts=2022-01-12T14:02:20.388Z caller=main.go:523 level=info vm_limits="(soft=unlimited, hard=unlimited)"
    prometheus_1                | ts=2022-01-12T14:02:20.399Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
    prometheus_1                | ts=2022-01-12T14:02:20.420Z caller=main.go:924 level=info msg="Starting TSDB ..."
    prometheus_1                | ts=2022-01-12T14:02:20.422Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false
    prometheus_1                | ts=2022-01-12T14:02:20.453Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1628877600000 maxt=1630627200000 ulid=01FEMSK0DPKSPK61B3N5NV81KW
    prometheus_1                | ts=2022-01-12T14:02:20.472Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1630627200798 maxt=1632376800000 ulid=01FG95GBM38BPARV9N0KYM4W4C
    prometheus_1                | ts=2022-01-12T14:02:20.487Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1632376800041 maxt=1634126400000 ulid=01FHX9SBN5YPSY96SHT79TC5N2
    prometheus_1                | ts=2022-01-12T14:02:20.504Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1634126400238 maxt=1635876000000 ulid=01FKHE8NED6Z7C93GAATRBERZX
    prometheus_1                | ts=2022-01-12T14:02:20.525Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1635876000053 maxt=1637625600000 ulid=01FN5JSJEB0H78ZT4SWX6RAC7N
    prometheus_1                | ts=2022-01-12T14:02:20.537Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1637625600217 maxt=1639375200000 ulid=01FPSQB7WB1R16X4DZJTXQY5QA
    prometheus_1                | ts=2022-01-12T14:02:20.545Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1639375200239 maxt=1641124800000 ulid=01FRDVWP47TH57SG8EMB9PT9NC
    prometheus_1                | ts=2022-01-12T14:02:20.555Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1641124800283 maxt=1641708000000 ulid=01FRZ7ZRC6NS8NXNVPT38T2XRJ
    prometheus_1                | ts=2022-01-12T14:02:20.565Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1641708000283 maxt=1641902400000 ulid=01FS51BDM8HAEJ11VM5YPADC22
    prometheus_1                | ts=2022-01-12T14:02:20.578Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1641967200250 maxt=1641974400000 ulid=01FS6R7WXGQAWGHR896BP9DYC1
    prometheus_1                | ts=2022-01-12T14:02:20.592Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1641974400250 maxt=1641981600000 ulid=01FS6Z3M5FS2Z37NHJ4755MR0Z
    prometheus_1                | ts=2022-01-12T14:02:20.597Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1641902400250 maxt=1641967200000 ulid=01FS6Z4FNG91MT4RD6NGPHV8Z9
    prometheus_1                | ts=2022-01-12T14:02:20.604Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1641981600250 maxt=1641988800000 ulid=01FS75ZBE92JMGSDY3SD7VQB3S
    prometheus_1                | ts=2022-01-12T14:02:25.347Z caller=head.go:488 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
    prometheus_1                | ts=2022-01-12T14:02:25.806Z caller=head.go:522 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=459.603976ms
    prometheus_1                | ts=2022-01-12T14:02:25.806Z caller=head.go:528 level=info component=tsdb msg="Replaying WAL, this may take a while"
    prometheus_1                | ts=2022-01-12T14:04:40.966Z caller=head.go:564 level=info component=tsdb msg="WAL checkpoint loaded"
    prometheus_1                | ts=2022-01-12T14:04:46.848Z caller=head.go:599 level=info component=tsdb msg="WAL segment loaded" segment=17459 maxSegment=17464
    prometheus_1                | ts=2022-01-12T14:04:54.006Z caller=head.go:599 level=info component=tsdb msg="WAL segment loaded" segment=17460 maxSegment=17464
    prometheus_1                | ts=2022-01-12T14:04:59.462Z caller=head.go:599 level=info component=tsdb msg="WAL segment loaded" segment=17461 maxSegment=17464
    prometheus_1                | ts=2022-01-12T14:05:08.929Z caller=head.go:599 level=info component=tsdb msg="WAL segment loaded" segment=17462 maxSegment=17464
    prometheus_1                | ts=2022-01-12T14:05:10.172Z caller=head.go:599 level=info component=tsdb msg="WAL segment loaded" segment=17463 maxSegment=17464
    prometheus_1                | ts=2022-01-12T14:05:10.173Z caller=head.go:599 level=info component=tsdb msg="WAL segment loaded" segment=17464 maxSegment=17464
    prometheus_1                | ts=2022-01-12T14:05:10.173Z caller=head.go:605 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=2m15.159957052s wal_replay_duration=29.20616551s total_replay_duration=2m44.825789043s
    prometheus_1                | ts=2022-01-12T14:05:10.908Z caller=main.go:945 level=info fs_type=EXT4_SUPER_MAGIC
    prometheus_1                | ts=2022-01-12T14:05:10.909Z caller=main.go:948 level=info msg="TSDB started"
    prometheus_1                | ts=2022-01-12T14:05:10.909Z caller=main.go:1129 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
    prometheus_1                | ts=2022-01-12T14:05:11.125Z caller=main.go:1166 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=216.873395ms db_storage=1.723µs remote_storage=1.968µs web_handler=680ns query_engine=1.303µs scrape=291.882µs scrape_sd=591.151µs notify=28.149µs notify_sd=16.694µs rules=212.727311ms
    prometheus_1                | ts=2022-01-12T14:05:11.125Z caller=main.go:897 level=info msg="Server is ready to receive web requests."
    prometheus_1                | ts=2022-01-12T14:06:21.791Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:07:36.787Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:08:51.787Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:10:06.786Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:10:39.063Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:11:14.319Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:11:21.788Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:11:54.068Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:12:29.319Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:12:36.787Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:12:36.790Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:13:09.064Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:13:44.319Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:13:51.788Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:13:51.791Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    prometheus_1                | ts=2022-01-12T14:14:24.064Z caller=notifier.go:526 level=error component=notifier alertmanager=http://alertmanager:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://alertmanager:9093/api/v2/alerts\": http2: unsupported scheme"
    
    opened by compilenix 0
  • Nits after PR 10051 merge

    This adds a few improvements to the already merged PR https://github.com/prometheus/prometheus/pull/10051

    opened by replay 0
  • Allow remote read without external labels

    Proposal

    Use case. Why is this important? In our current setup, we have a single Prometheus instance, no external_labels configured, and remote storage (InfluxDB) that already has months of data.

    global:
    
      scrape_interval: 60s
      scrape_timeout: 30s
    
    scrape_configs:
    [...]
    remote_write: 
      - url: xxx
    
    remote_read: 
      - url: yyy
    

    Now we need to set an external_label. However, when we do that, we can't retrieve data from remote_read (https://github.com/prometheus/prometheus/issues/7992).

    There should be an option to ignore external_labels in remote_read in order to keep reading data that has already been written.

    help wanted kind/enhancement component/remote storage priority/P3 
    opened by marcelo-giraldi 2
  • create a component to handle the search bar with debounce

    opened by Nexucis 0
  • Is it possible to add http client configure to disable HTTP2 in Remote_Write Configure?

    Proposal

    Use case. Why is this important? “Nice to have” is not a good use case. :)

    Currently, if remote write receives a stream error HTTP1.1_Required, it retries over HTTP/2 and never succeeds.

    opened by siyyang-ms 4
  • Prometheus Agent Stops Scraping all its Targets after some time without any error logs.

    Prometheus with --enable-feature=agent stops scraping all its targets after some time, and when we check the terminal logs there are no log entries related to scraping stopping. Does anyone have any idea about this? Image: prometheus:v2.32.0. (Screenshot of stopped targets omitted.)

    opened by Rahuly360 5
  • build(deps): bump sanitize-html from 2.6.0 to 2.6.1 in /web/ui

    Bumps sanitize-html from 2.6.0 to 2.6.1.

    Changelog

    Sourced from sanitize-html's changelog.

    2.6.1 (2021-12-08)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies javascript 
    opened by dependabot[bot] 0
Releases (v2.33.0-rc.0)