# Metrics in go-libp2p

# Introduction

go-libp2p is the core networking component for many Go-based implementations of projects such as Kubo (IPFS), Lotus (Filecoin), Prysm (the Ethereum Beacon Chain), and more. As maintainers of go-libp2p, we want to be able to observe the state of libp2p components and enable our users to do the same in their production systems. To that end, we've added instrumentation to collect metrics from various components over the last few months. In fact, these metrics have already helped us debug some nuanced go-libp2p issues and supported the development of new features (discussed in detail below). Today, we'd like to share some of the choices we made, our learnings, and point you to resources that will help you monitor your deployments of go-libp2p.

Check out the public dashboards to see metrics from different libp2p components running in production.

# Why Prometheus?

We were first faced with the question of choosing a metrics collection and monitoring system. Among our choices were Prometheus, OpenCensus, and OpenTelemetry. The details of the discussion can be found here.

We noticed that OpenCensus creates a lot of allocations, which would lead to increased GC pressure. OpenTelemetry's metrics API is still unstable as of this writing. In contrast, Prometheus is performant (zero-alloc) and ubiquitous. This allows us to add metrics without sacrificing performance, even on frequently exercised code paths. We also added ready-to-use Grafana dashboards, since Grafana is the preferred visualization tool for many of our users.
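
To give a feel for what this costs on a hot path, here is a minimal sketch of Prometheus-style instrumentation in Go. The metric name and helper below are illustrative, not something go-libp2p actually exports:

```go
import "github.com/prometheus/client_golang/prometheus"

// Illustrative only: a counter in the style of go-libp2p's instrumentation.
// Incrementing a counter like this does not allocate, so it is cheap enough
// for frequently exercised code paths.
var connsOpened = prometheus.NewCounter(prometheus.CounterOpts{
	Namespace: "libp2p",
	Subsystem: "example",
	Name:      "connections_opened_total",
	Help:      "Total connections opened (illustrative metric).",
})

func init() {
	prometheus.MustRegister(connsOpened)
}

func onConnectionOpened() {
	connsOpened.Inc() // zero-allocation increment on the hot path
}
```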

# How Users can enable Metrics

Metrics are enabled by default since go-libp2p v0.26.0. All you need to do is set up a Prometheus exporter for the collected metrics.


```go
import (
	"net/http"
	"github.com/libp2p/go-libp2p"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	http.Handle("/metrics", promhttp.Handler())
	go func() {
		http.ListenAndServe(":2112", nil) // Any port is fine
	}()
	host, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer host.Close()
	// ... rest of your application
}
```

Now just point your Prometheus instance to scrape from :2112/metrics.

By default, metrics are registered with the default Prometheus Registerer. To use a different Registerer, pass the libp2p.PrometheusRegisterer option.


```go
import (
	"net/http"
	"github.com/libp2p/go-libp2p"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	reg := prometheus.NewRegistry()
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	go func() {
		http.ListenAndServe(":2112", nil) // Any port is fine
	}()
	host, err := libp2p.New(
		libp2p.PrometheusRegisterer(reg),
	)
	if err != nil {
		panic(err)
	}
	defer host.Close()
	// ... rest of your application
}
```
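
One nice side effect of passing your own registry is that your application's metrics and go-libp2p's metrics end up behind the same /metrics endpoint. A small sketch, with an illustrative metric name:

```go
// Illustrative: register an application metric on the same registry
// that go-libp2p reports into.
requestsServed := prometheus.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_served_total",
	Help: "Total requests served by the application.",
})
reg.MustRegister(requestsServed)
requestsServed.Inc()
```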

# Discovering which Metrics are available

go-libp2p provides metrics and Grafana dashboards for all its major subsystems out of the box. You can check https://github.com/libp2p/go-libp2p/tree/master/dashboards for the available Grafana dashboards. Another great way to discover available metrics is to open the Prometheus UI and type libp2p_(libp2p-package-name)_ to find the available metrics from the autocomplete suggestions. For example, libp2p_autonat_ gives you the list of all metrics exported by AutoNAT.
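
If you prefer the command line over the UI, you can also scrape the exporter endpoint directly and filter by prefix. A small sketch, assuming the exporter from the previous section is listening on :2112 (the helper name is ours):

```go
import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

// listAutoNATMetrics prints every AutoNAT metric the node currently exports.
func listAutoNATMetrics() error {
	resp, err := http.Get("http://localhost:2112/metrics")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "libp2p_autonat_") {
			fmt.Println(line)
		}
	}
	return scanner.Err()
}
```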


To see the dashboards in action, check the Public Dashboards. For an example of setting up an app with metrics collection and dashboards, check the Metrics and Dashboards example in the go-libp2p repo.

# Local development and debugging setup

We've made it extremely easy to get started with metrics for local development. You can use the Docker setup provided in https://github.com/libp2p/go-libp2p/tree/master/dashboards to spin up a Grafana and Prometheus instance configured with all the available dashboards.

First, add these lines to your code. This exposes a metrics collection endpoint at http://localhost:5001/debug/metrics/prometheus.

import "github.com/prometheus/client_golang/prometheus/promhttp"

go func() {
    http.Handle("/debug/metrics/prometheus", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":5001", nil))
}()

Now run docker compose up and view your application's metrics in Grafana at http://localhost:3000.

# How are Metrics useful?

I'll share two cases where having metrics was extremely helpful for us in go-libp2p: one where they helped us debug a memory leak, and one where adding two new metrics helped us develop a new feature.

# Debugging with Metrics

We were excited about adding metrics because it gave us the opportunity to observe exactly what was happening within the system. One of the first systems we added metrics to was the event bus. The event bus is used to pass event notifications between different libp2p components. When we added event bus metrics, we were immediately able to see a discrepancy between two of our metrics, EvtLocalReachabilityChanged and EvtLocalAddressesUpdated. You can see the details in the GitHub issue.

[Graphs: EvtLocalReachabilityChanged and EvtLocalAddressesUpdated]

Ideally, when a node's reachability changes, its addresses should also change as it tries to obtain a relay reservation. This pointed us to an issue with AutoNAT. Upon debugging, we realised that we were emitting reachability-changed events when reachability had not actually changed; only the address to which the AutoNAT dial succeeded had changed.

The graph for event EvtLocalProtocolsUpdated pointed us to another problem.

[Graph: EvtLocalProtocolsUpdated]

A node's supported protocols shouldn't change if its reachability has not changed. Once we became aware of the issue, finding the root cause was simple enough: there was a problem with cleaning up the relay service used by the relay manager. The details of the issue and the subsequent fix can be found here.
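
As an aside, the events behind these graphs are part of go-libp2p's public event bus API, so applications can watch them too. A minimal sketch (the watchEvents helper is ours, not part of go-libp2p):

```go
import (
	"log"

	"github.com/libp2p/go-libp2p/core/event"
	"github.com/libp2p/go-libp2p/core/host"
)

// watchEvents logs the same events the event bus metrics count.
// Close the returned subscription when shutting down.
func watchEvents(h host.Host) (event.Subscription, error) {
	sub, err := h.EventBus().Subscribe([]interface{}{
		new(event.EvtLocalReachabilityChanged),
		new(event.EvtLocalAddressesUpdated),
	})
	if err != nil {
		return nil, err
	}
	go func() {
		for e := range sub.Out() {
			switch evt := e.(type) {
			case event.EvtLocalReachabilityChanged:
				log.Printf("reachability changed: %v", evt.Reachability)
			case event.EvtLocalAddressesUpdated:
				log.Printf("addresses updated: %d current", len(evt.Current))
			}
		}
	}()
	return sub, nil
}
```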

# Development using Metrics

In go-libp2p v0.28.0 we introduced smart dialing. When connecting to a peer, instead of dialing all of the peer's addresses in parallel, we now prioritise QUIC dials. This significantly reduces dial cancellations and unnecessary load on the network. Check the smart dialing PR for more information on the algorithm used and the impact of smart dialing.
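
To make the idea concrete, here is a toy sketch of address ranking that simply prefers QUIC addresses; go-libp2p's actual ranking logic is more involved (see the PR above):

```go
import (
	"sort"

	ma "github.com/multiformats/go-multiaddr"
)

// rankAddrs is a toy illustration, not go-libp2p's dial ranking: it moves
// QUIC addresses to the front so they are tried before the rest, instead
// of dialing everything in parallel.
func rankAddrs(addrs []ma.Multiaddr) []ma.Multiaddr {
	isQUIC := func(a ma.Multiaddr) bool {
		if _, err := a.ValueForProtocol(ma.P_QUIC_V1); err == nil {
			return true
		}
		_, err := a.ValueForProtocol(ma.P_QUIC)
		return err == nil
	}
	ranked := append([]ma.Multiaddr(nil), addrs...)
	sort.SliceStable(ranked, func(i, j int) bool {
		return isQUIC(ranked[i]) && !isQUIC(ranked[j])
	})
	return ranked
}
```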

Not dialing all addresses in parallel increases connection-establishment latency when the first dial doesn't succeed. We wanted to ensure that most connections succeeded with no additional latency. To help us better gauge the impact, we added two metrics:

  1. Dial ranking delay. This metric tracks the latency in connection establishment introduced by the dial prioritisation logic.
  2. Dials per connection. This metric counts the number of addresses dialed before a connection was established with the peer.

Dials per connection measured the benefit of introducing the smart dialing mechanism, and dial ranking delay gave us the assurance that the vast majority of dials saw no added latency.
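
For illustration, here is roughly how two such metrics could be defined with the Prometheus client library; the names and buckets are our own sketch, not the exact metrics go-libp2p exports:

```go
import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Illustrative definitions in the spirit of the two smart-dialing metrics;
// the real go-libp2p metric names and buckets differ.
var (
	dialRankingDelay = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "example_dial_ranking_delay_seconds",
		Help:    "Extra connection setup latency introduced by dial prioritisation.",
		Buckets: prometheus.ExponentialBuckets(0.001, 2, 12), // 1ms .. ~4s
	})
	dialsPerConn = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "example_dials_per_connection",
		Help:    "Number of addresses dialed before a connection was established.",
		Buckets: prometheus.LinearBuckets(1, 1, 10), // 1 .. 10 dials
	})
)

func init() {
	prometheus.MustRegister(dialRankingDelay, dialsPerConn)
}

// Record both values once per successfully established connection.
func recordDialOutcome(rankingDelay time.Duration, addrsDialed int) {
	dialRankingDelay.Observe(rankingDelay.Seconds())
	dialsPerConn.Observe(float64(addrsDialed))
}
```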

[Graphs: smart dialing metrics]

# Resources

Check out our Grafana dashboards: https://github.com/libp2p/go-libp2p/tree/master/dashboards

To create custom dashboards, the Prometheus and Grafana docs are great resources.

# Get Involved

To learn more about libp2p generally, check out:

You can reach out to us and stay tuned for our next event announcement by joining our various communication channels, joining the discussion forum, following us on Twitter, or saying hi in the #libp2p-implementers channel in the Filecoin public Slack.