Monitoring System Keynotes¶
This blog introduces some keynotes about the monitoring system, which consists of grafana
, prometheus
, and so forth. We focus on outlining their concepts and some internal details.
Along the way, the blog shares some interesting implementation details which are nice to know.
Try Monitoring Locally¶
Running the servers is extremely simple, as we can install them through homebrew
and then run them by command:
brew services start prometheus
# restart is used to apply the changes
brew services restart prometheus
Status/Command-Line Flags
shows the configuration file path, as my Stack Overflow answer about the configuration file path mentions. Note that prometheus doesn't reload the configuration automatically when the file changes, so we need to trigger the reload or restart the prometheus server manually.
Prometheus can reload its configuration at runtime. If the new configuration is not well-formed, the changes will not be applied. A configuration reload is triggered by sending a SIGHUP to the Prometheus process
The reload is done with the help of SIGHUP
, which is a common approach; one of my blogs, blog: cli with restarting feature, mentions it as well.
Data Flow¶
Grafana is a server that helps visualize data from data sources, and a prometheus server is one such data source. Hence, the data flows from the service to the prometheus server via HTTP calls triggered by prometheus (the pull model), and then grafana pulls the data from prometheus.
Prometheus¶
Prometheus is prestigious and powerful, but it's not the main topic here; the concepts are. We can summarize its features into three parts:
- multi-dimensional data model with time series
- retrieving(push or pull) data and storage
- visualization and analysis support
The importance decreases from top to bottom, as the concepts gradually move further away from developers.
The fundamental concept is metrics
, which are, in layperson's terms, numerical measurements. The term time series refers to recording changes over time.
Prometheus Server Side¶
Data Model¶
According to the data model:
Prometheus fundamentally stores all data as time series: streams of timestamped values belonging to the same metric and the same set of labeled dimensions.
It mentions three core concepts for a time series: "streams", "timestamped values", and "the same metric and the same set of labeled dimensions". We can imagine a line chart for better understanding.
A "timestamped value" is the coordinate of a point, where the y-axis is the value and the x-axis is the timestamp. "Streams" refers to the fact that there are many such points, so they can be connected into a line. And "the same metric and the same set of labeled dimensions" means only data with the same identity should be aggregated together into the same line.
Every time series is uniquely identified by its metric name and optional key-value pairs called labels.
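For example, using a hypothetical http_requests_total metric, the same metric name with different label sets identifies distinct time series:

```
http_requests_total{method="GET", handler="/api"}   # one time series
http_requests_total{method="POST", handler="/api"}  # a different time series
http_requests_total{method="GET", handler="/home"}  # yet another one
```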
Metric Types¶
There are four kinds of metric types, which are provided by the client-side SDK; the server doesn't respect them. The types are Counter
, Gauge
, Histogram
, and Summary
.
The following block shows each kind of metric returned by a service's /metrics
http handler.
Response of /metrics API at a point in time
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 2.6251e-05
go_gc_duration_seconds{quantile="0.25"} 0.00014325
go_gc_duration_seconds{quantile="0.5"} 0.000224416
go_gc_duration_seconds{quantile="0.75"} 0.000395459
go_gc_duration_seconds{quantile="1"} 0.027332792
go_gc_duration_seconds_sum 0.193264274
go_gc_duration_seconds_count 223
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 33
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.21109833e+08
# HELP prometheus_http_response_size_bytes Histogram of response size for HTTP requests.
# TYPE prometheus_http_response_size_bytes histogram
prometheus_http_response_size_bytes_bucket{handler="/",le="100"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="1000"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="10000"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="100000"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+06"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+07"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+08"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+09"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="+Inf"} 5
prometheus_http_response_size_bytes_sum{handler="/"} 145
prometheus_http_response_size_bytes_count{handler="/"} 5
Client SDK Metrics¶
Overall¶
While the server defines a metric in a time series by a name along with labels, the client SDK offers more fine-grained metric types. We need to highlight again that the server side doesn't respect these definitions and flattens all data into untyped time series under the same metric name.
We refer to the response of the /metrics
API above to affirm that the server doesn't really care about the types.
For example, the histogram metric prometheus_http_response_size_bytes_bucket
consists of several pieces of data with different label values, output by the HistogramVec
metric.
prometheus_http_response_size_bytes_bucket{handler="/",le="100"} 5
prometheus_http_response_size_bytes_bucket{handler="/",le="1000"} 5
However, technically, a gauge with a le
label defined by the user could do the same thing.
The diverse metric types defined on the client SDK side can be treated as utilities that benefit developers' work.
Fundamental and Vector Metrics Concept¶
Fundamental metric types and their respective vector types are provided by the prometheus Go client, and its documentation reads:
In addition to the fundamental metric types Gauge, Counter, Summary, and Histogram, a very important part of the Prometheus data model is the partitioning of samples along dimensions called labels, which results in metric vectors. The fundamental types are GaugeVec, CounterVec, SummaryVec, and HistogramVec.
We can understand a vector as a fundamental metric enhanced with labels, even though the internal details are more complex. Their constructors reveal this idea as well:
package prometheus
func NewCounter(opts CounterOpts) Counter
func NewCounterVec(opts CounterOpts, labelNames []string) *CounterVec
Of course, the management of labels is not easy, as we will see from the complexities later.
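To illustrate the relationship, here is a stdlib-only sketch of what a vector adds on top of a fundamental metric: one child metric per distinct combination of label values. The types below are simplified stand-ins for illustration, not the real SDK implementation.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// counter is a minimal stand-in for a fundamental Counter metric.
type counter struct{ val float64 }

func (c *counter) Inc() { c.val++ }

// counterVec sketches the idea behind CounterVec: one metric name
// partitioned into child counters, keyed by their label values.
type counterVec struct {
	mu       sync.Mutex
	children map[string]*counter
}

func newCounterVec() *counterVec {
	return &counterVec{children: make(map[string]*counter)}
}

// WithLabelValues returns the child counter for the given label values,
// creating it on first use, as the real SDK does.
func (v *counterVec) WithLabelValues(lvs ...string) *counter {
	key := strings.Join(lvs, "\xff") // the real SDK hashes the values instead
	v.mu.Lock()
	defer v.mu.Unlock()
	c, ok := v.children[key]
	if !ok {
		c = &counter{}
		v.children[key] = c
	}
	return c
}

func main() {
	v := newCounterVec()
	v.WithLabelValues("GET").Inc()
	v.WithLabelValues("GET").Inc()
	v.WithLabelValues("POST").Inc()
	// two distinct label values -> two child counters
	fmt.Println(len(v.children))
}
```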
Client SDK Interfaces¶
There are four important interfaces in the prometheus SDK implementation.
- Metric is the basic data representation, which contains the identity and value of a metric.
- Collector helps to collect the metrics.
- Registerer manages the collectors, which can collect metrics from wherever metrics are.
- Gatherer is the facade of the prometheus SDK library, which generates the well-prepared data for the prometheus collector through http (the /metrics
http path) or another protocol.
The workflow between them is:
Metrics are exposed through a Collector, and Collectors are submitted to a Registerer, which means the Registerer manages many Collectors through which it can retrieve metrics.
The Gatherer
, which is triggered when the prometheus server collects through http or another protocol, will collect the metrics and report them in the required format. In the default implementation provided by the prometheus SDK, the Registry
implements both Gatherer
and Registerer
, so once Gather is called, the Registry walks all Collectors and retrieves all available metrics.
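The workflow above can be sketched with simplified stand-ins for the four roles. This is illustration only; the real interfaces carry descriptors, channels of Metric, and protobuf output, but the shape of the collaboration is the same.

```go
package main

import "fmt"

// Metric is the identity and value of a sample at a point in time.
type Metric interface{ Write() string }

// Collector exposes metrics by sending them on a channel.
type Collector interface{ Collect(ch chan<- Metric) }

// Registry plays both the Registerer role (managing collectors) and the
// Gatherer role (producing the final exposition for the scrape handler).
type Registry struct{ collectors []Collector }

func (r *Registry) MustRegister(c Collector) {
	r.collectors = append(r.collectors, c)
}

func (r *Registry) Gather() []string {
	ch := make(chan Metric)
	go func() {
		for _, c := range r.collectors {
			c.Collect(ch)
		}
		close(ch)
	}()
	var out []string
	for m := range ch {
		out = append(out, m.Write())
	}
	return out
}

// gauge is a toy metric that is also its own collector.
type gauge struct {
	name string
	val  float64
}

func (g *gauge) Write() string            { return fmt.Sprintf("%s %g", g.name, g.val) }
func (g *gauge) Collect(ch chan<- Metric) { ch <- g }

func main() {
	reg := &Registry{}
	reg.MustRegister(&gauge{name: "go_goroutines", val: 33})
	fmt.Println(reg.Gather()) // [go_goroutines 33]
}
```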
Metrics Interface¶
The Metric
interface handles the data exported to the server at a point in time. As the comment reads, the "meta data" refers to labels.
A Metric models a single sample value with its meta data being exported to Prometheus. Implementations of Metric in this package are Gauge, Counter, Histogram, Summary, and Untyped.
At the prometheus SDK level, the Metric
interface doesn't care about the metric types, because when we report the data, the types defined on the client side don't matter. Note that the respective vector versions don't satisfy Metric
, as a vector is essentially not a single metric. We will discuss this topic later.
Counter and CounterVec¶
The structure counter
implements the Counter
interface, and its fields hold the count and labels. It's a simple counter metric, which can represent a metric at any point in time; here the counter's identity is the counter (metric) name. The simple counter reports labels as well, even though it doesn't carry any. Again, this is a limitation of the prometheus client library, and the server side doesn't distinguish them.
However, the CounterVec
is different from counter, as it doesn't satisfy the Metric
interface. When we look at the code, it shows clearly that the CounterVec
stores the underlying metric desc, a constructor, and an internal map to store metrics.
func (v2) NewCounterVec(opts CounterVecOpts) *CounterVec {
// ignore lines, added by blog author
return &CounterVec{
MetricVec: NewMetricVec(desc, func(lvs ...string) Metric {
if len(lvs) != len(desc.variableLabels.names) {
panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels.names, lvs))
}
result := &counter{desc: desc, labelPairs: MakeLabelPairs(desc, lvs), now: opts.now}
result.init(result) // Init self-collection.
result.createdTs = timestamppb.New(opts.now())
return result
}),
}
}
Based on the code, we know that the CounterVec
stores all metrics sharing the same metric name and label names, while their label (dimensional) values and their values vary.
Because a plain counter doesn't have any labels but only a value, every operation on the counter generates new data in the same time series. As the label values vary, counters with labels are collected into different time series, and this is clearly revealed by the implementation of GetMetricWithLabelValues: a time series is identified by the hash of all provided label values, calculated by the method hashLabelValues
.
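As a sketch of that idea, the identity of a series can be derived by hashing the label values with FNV-1a and a separator byte, which is close in spirit to what hashLabelValues does; this is a simplified illustration, not the SDK's exact code.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hashLabelValues sketches how a time-series identity can be derived
// from label values: FNV-1a over each value, with a separator byte so
// ("ab","c") and ("a","bc") never hash alike.
func hashLabelValues(vals ...string) uint64 {
	h := fnv.New64a()
	for _, v := range vals {
		h.Write([]byte(v))
		h.Write([]byte{0xff}) // separator between label values
	}
	return h.Sum64()
}

func main() {
	// same label values -> same series identity
	fmt.Println(hashLabelValues("GET", "/") == hashLabelValues("GET", "/"))
	// different label values -> different series identity
	fmt.Println(hashLabelValues("GET", "/") == hashLabelValues("POST", "/"))
}
```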
Interesting Source Code Details¶
Uint and Float in Counter Addition¶
The logic inside counter
is interesting: it manages the integer and float parts separately and adds the values together during reporting.
Uint and Float in Counter Addition
```go
func (c *counter) Add(v float64) {
	if v < 0 {
		panic(errors.New("counter cannot decrease in value"))
	}

	ival := uint64(v)
	if float64(ival) == v {
		atomic.AddUint64(&c.valInt, ival)
		return
	}

	for {
		oldBits := atomic.LoadUint64(&c.valBits)
		newBits := math.Float64bits(math.Float64frombits(oldBits) + v)
		if atomic.CompareAndSwapUint64(&c.valBits, oldBits, newBits) {
			return
		}
	}
}
```
Integer handling and rounding error¶
The first step, which converts the float into a uint64 and casts it back, filters out:
- floats with a decimal part, e.g., 1.1 or 2.2
- overflowing values, whose rounding is platform-dependent behavior anyway
For overflowing values, an M1 Mac may report the converted value as equal, while Linux reports it as different instead.
You can find more about the rounding in another blog of mine.
Possible Overflow¶
The current implementation might overflow in certain cases, as the newly added test cases show.
I opened a PR to discuss it, and I will fix it in the future if the idea is accepted.
Float Handling¶
Then, if the value has a decimal part or overflows an integer, it is added to a float directly. The prometheus SDK uses a uint64 to store the bit representation of a float64.
type counter struct {
// valBits contains the bits of the represented float64 value, while
// valInt stores values that are exact integers. Both have to go first
// in the struct to guarantee alignment for atomic operations.
// http://golang.org/pkg/sync/atomic/#pkg-note-BUG
valBits uint64
valInt uint64
// ignore lines, added by blog author xieyuschen
}
Value Reporting¶
Finally, when the metric is collected through the Write
method, the counter values are added together:
func (c *counter) get() float64 {
fval := math.Float64frombits(atomic.LoadUint64(&c.valBits))
ival := atomic.LoadUint64(&c.valInt)
return fval + float64(ival)
}
Open hash strategy for metrics¶
The vector manages all metrics with the same name and labels(note that the label values are excluded).
As each metrics identifies itself through the hash, it avoids the collisions by open hash strategy.
func (m *metricMap) getOrCreateMetricWithLabels(
hash uint64, labels Labels, curry []curriedLabelValue,
) Metric {
m.mtx.RLock()
metric, ok := m.getMetricWithHashAndLabels(hash, labels, curry)
m.mtx.RUnlock()
if ok {
return metric
}
m.mtx.Lock()
defer m.mtx.Unlock()
metric, ok = m.getMetricWithHashAndLabels(hash, labels, curry)
if !ok {
lvs := extractLabelValues(m.desc, labels, curry)
metric = m.newMetric(lvs...)
m.metrics[hash] = append(m.metrics[hash], metricWithLabelValues{values: lvs, metric: metric})
}
return metric
}
Besides this, let's focus on the mutex usage here:
1. Take the read lock and check existence.
2. Take the write lock, check existence again, and add the value if it still doesn't exist.
Between releasing the read lock and acquiring the write lock, another goroutine may insert the same hash. So even though the read-locked check finds that a hash doesn't exist, it can be added right after the read lock is released.
Hence, we must check the existence of the hash again under the write lock before adding new elements to the map.
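The same double-checked pattern can be isolated into a small sketch (a simplified cache, not the SDK's code):

```go
package main

import (
	"fmt"
	"sync"
)

// cache demonstrates the read-lock / write-lock / re-check pattern
// used by getOrCreateMetricWithLabels.
type cache struct {
	mu sync.RWMutex
	m  map[uint64]string
}

func (c *cache) getOrCreate(hash uint64, create func() string) string {
	// Fast path: most lookups hit an existing entry, so take only the
	// read lock, allowing concurrent readers.
	c.mu.RLock()
	v, ok := c.m[hash]
	c.mu.RUnlock()
	if ok {
		return v
	}

	// Slow path: between RUnlock and Lock another goroutine may have
	// inserted the entry, so re-check under the write lock.
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.m[hash]; ok {
		return v
	}
	v = create()
	c.m[hash] = v
	return v
}

func main() {
	c := &cache{m: make(map[uint64]string)}
	fmt.Println(c.getOrCreate(42, func() string { return "created" }))
	// second call finds the entry and never runs the constructor
	fmt.Println(c.getOrCreate(42, func() string { return "should not run" }))
}
```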