The signal-to-metrics connector (`signaltometrics`) produces metrics from all signal types (traces, logs, metrics, or profiles).
| Status | |
|---|---|
| Distributions | contrib |
| Code Owners | @ChrsMark, @lahsivjar |
| Exporter Pipeline Type | Receiver Pipeline Type | Stability Level |
|---|---|---|
| traces | metrics | alpha |
| logs | metrics | alpha |
| metrics | metrics | alpha |
| profiles | metrics | alpha |
The component can produce metrics from spans, datapoints (for metrics), log records, and profiles. At least one metric for at least one signal type MUST be configured correctly for the component to work.
All signal types can be configured to produce metrics with the same configuration structure. For example, the below configuration will produce delta temporality counters counting the number of events for each of the configured signals:

```yaml
signaltometrics:
  spans:
    - name: span.count
      description: Count of spans
      sum:
        value: Int(AdjustedCount()) # Count of total spans represented by each span
  datapoints:
    - name: datapoint.count
      description: Count of datapoints
      sum:
        value: "1" # increment by 1 for each datapoint
  logs:
    - name: logrecord.count
      description: Count of log records
      sum:
        value: "1" # increment by 1 for each log record
  profiles:
    - name: profile.count
      description: Count of profiles
      sum:
        value: "1" # increment by 1 for each profile
```

The `error_mode` configuration option determines how the connector handles errors that occur while evaluating OTTL expressions:
- `error_mode` (optional): Determines how errors returned from OTTL expressions are handled. Valid values are `propagate`, `ignore`, and `silent`.
  - `propagate` (default): Errors cause the entire batch to fail and be returned up the pipeline. This will result in the payload being dropped from the collector.
  - `ignore`: Errors are logged and the specific record that caused the error is skipped, but processing continues for the rest of the batch.
  - `silent`: Errors are not logged and the specific record that caused the error is skipped, but processing continues for the rest of the batch.
Example with error handling:

```yaml
signaltometrics:
  error_mode: ignore # Log errors but continue processing other records
  spans:
    - name: span.count
      description: Count of spans
      sum:
        value: Int(AdjustedCount())
```

The connector produces a variety of metric types by utilizing OTTL to extract the relevant data for a metric type from the incoming data. The component can produce the following metric types for each signal type: `sum`, `gauge`, `histogram`, and `exponential_histogram`.
The component does NOT perform any stateful or time-based aggregations. The metric types are aggregated over the payload received in each `Consume*` call, and the resulting metric is then sent forward in the pipeline.
Sum metrics have the following configuration:

```yaml
sum:
  value: <ottl_value_expression>
```

- [Required] `value` represents an OTTL expression to extract a value from the incoming data. Only OTTL expressions that return a value are accepted. The returned value determines the value type of the `sum` metric (`int` or `double`). OTTL converters can be used to transform the data.
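As an illustrative sketch (the metric name is arbitrary), a double-valued sum could accumulate span durations:

```yaml
signaltometrics:
  spans:
    - name: span.duration.sum
      description: Total span duration in seconds
      sum:
        # Seconds() returns a double, so this produces a double sum
        value: Double(Seconds(end_time - start_time))
```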
Gauge metrics aggregate the last value of a signal and have the following configuration:

```yaml
gauge:
  value: <ottl_value_expression>
```

- [Required] `value` represents an OTTL expression to extract a numeric value from the signal. Only OTTL expressions that return a value are accepted. The returned value determines the value type of the `gauge` metric (`int` or `double`).
  - For logs: use e.g. `ExtractGrokPatterns` with a single key selector (see below).
  - For other signals: use a field such as `value_int` or `value_double`, or a valid OTTL expression.
Examples:

Logs (with Grok pattern):

```yaml
signaltometrics:
  logs:
    - name: logs.memory_mb
      description: Extract memory_mb from log records
      gauge:
        value: ExtractGrokPatterns(body, "Memory usage %{NUMBER:memory_mb:int}MB")["memory_mb"]
```

Traces:

```yaml
signaltometrics:
  spans:
    - name: span.duration.gauge
      description: Span duration as gauge
      gauge:
        value: Int(Seconds(end_time - start_time))
```

Histogram metrics have the following configuration:
```yaml
histogram:
  buckets: []float64
  count: <ottl_value_expression>
  value: <ottl_value_expression>
```

- [Optional] `buckets` represents the buckets to be used for the histogram. If no buckets are configured then it defaults to `[]float64{2, 4, 6, 8, 10, 50, 100, 200, 400, 800, 1000, 1400, 2000, 5000, 10_000, 15_000}`.
- [Optional] `count` represents an OTTL expression to extract the count to be recorded in the histogram from the incoming data. If no expression is provided then it defaults to the count of the signal. OTTL converters can be used to transform the data. For spans, a special converter, `AdjustedCount`, is provided to help calculate the span's adjusted count.
- [Required] `value` represents an OTTL expression to extract the value to be recorded in the histogram from the incoming data. OTTL converters can be used to transform the data.
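As a sketch (the metric name and bucket boundaries are arbitrary), a span-duration histogram with custom buckets might look like:

```yaml
signaltometrics:
  spans:
    - name: span.duration.histogram
      description: Span duration histogram in milliseconds
      histogram:
        buckets: [10, 100, 250, 500, 1000, 5000]
        count: Int(AdjustedCount()) # account for sampling when counting spans
        value: Milliseconds(end_time - start_time)
```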
Exponential histogram metrics have the following configuration:

```yaml
exponential_histogram:
  max_size: <int64>
  count: <ottl_value_expression>
  value: <ottl_value_expression>
```

- [Optional] `max_size` represents the maximum number of buckets per positive or negative number range. Defaults to `160`.
- [Optional] `count` represents an OTTL expression to extract the count to be recorded in the exponential histogram from the incoming data. If no expression is provided then it defaults to the count of the signal. OTTL converters can be used to transform the data. For spans, a special converter, `AdjustedCount`, is provided to help calculate the span's adjusted count.
- [Required] `value` represents an OTTL expression to extract the value to be recorded in the exponential histogram from the incoming data. OTTL converters can be used to transform the data.
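For illustration (the metric name and `max_size` are arbitrary), a span-duration exponential histogram could be configured as:

```yaml
signaltometrics:
  spans:
    - name: span.duration.exphistogram
      description: Span duration as an exponential histogram
      exponential_histogram:
        max_size: 100 # fewer buckets than the default of 160
        count: Int(AdjustedCount())
        value: Seconds(end_time - start_time)
```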
The component can produce metrics categorized by attributes (span attributes for traces, datapoint attributes for metrics, or log record attributes for logs) from the incoming data by configuring `attributes` for the configured metrics. If no attributes are configured then the metrics are produced without any attributes.
```yaml
attributes:
  - key: datapoint.foo
  - key: datapoint.bar
    default_value: bar
  - key: datapoint.baz
    optional: true
```

If attributes are specified then a separate metric will be generated for each unique set of attribute values. There are three behaviors that can be configured for an attribute:

- Without any extra parameters: `datapoint.foo` in the above YAML is an example of such configuration. Only the signals which have the said attribute are processed, with the attribute's value as one of the attributes of the output metric. If the attribute is missing then the signal is not processed.
- With `default_value`: `datapoint.bar` in the above YAML is an example of such configuration. All the signals are processed irrespective of whether the attribute is present in the input signal. The output metric is categorized per the incoming value of the attribute, and an extra bucket exists with the attribute set to the configured default value for all the signals that were missing the configured attribute.
- With `optional` set to `true`: `datapoint.baz` in the above YAML is an example of such configuration. If an attribute configured as `optional` is present in the incoming signal then it will be added directly to the output metric. If it is absent then a new metric without the attribute will be created. In addition, `optional` attributes do not affect whether a signal is processed: even if the `optional` attributes are not present in the incoming signal, the signal will be processed and will produce a metric, provided all other non-optional attributes are present or have a default value defined.
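Putting this together, a sketch of a complete metric configured with an attribute (names are illustrative) might look like:

```yaml
signaltometrics:
  datapoints:
    - name: datapoint.count.by.foo
      description: Count of datapoints grouped by the datapoint.foo attribute
      attributes:
        - key: datapoint.foo # only datapoints with this attribute are counted
      sum:
        value: "1"
```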
Note that resource attributes are handled differently; check the resource attributes section for more details. Think of `attributes` as conditional filters for choosing which attributes should be included in the output metric, whereas `include_resource_attributes` is an include list for customizing the resource attributes of the output metric.
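As a sketch (attribute names are illustrative), `attributes` and `include_resource_attributes` can be combined on a single metric:

```yaml
signaltometrics:
  datapoints:
    - name: datapoint.count.per.host
      description: Count of datapoints per host and datapoint.foo attribute
      include_resource_attributes:
        - key: host.name # keep only host.name from the resource attributes
      attributes:
        - key: datapoint.foo # group by the datapoint-level foo attribute
      sum:
        value: "1"
```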
Conditions are an optional list of OTTL conditions that are evaluated on the incoming data and are ORed together. For example:
```yaml
signaltometrics:
  datapoints:
    - name: datapoint.bar.sum
      description: Count total number of datapoints as per datapoint.bar attribute
      conditions:
        - resource.attributes["foo"] != nil
        - resource.attributes["bar"] != nil
      sum:
        value: "1"
```

The above configuration will produce sum metrics from datapoints with either the `foo` OR the `bar` resource attribute defined.
Conditions can also be ANDed together, for example:
```yaml
signaltometrics:
  datapoints:
    - name: gauge.to.exphistogram
      conditions:
        - metric.type == 1 AND resource.attributes["resource.foo"] != nil
      exponential_histogram:
        count: "1" # 1 count for each datapoint
        value: Double(value_int) + value_double # handle both int and double
```

The above configuration produces an exponential histogram from gauge metrics with the resource attribute `resource.foo` set.
The component allows customizing the resource attributes of the produced metrics by specifying a list of attributes that should be included in the final metrics. If no attributes are specified for `include_resource_attributes` then no filtering is performed, i.e. all resource attributes of the incoming data are considered.
```yaml
include_resource_attributes:
  - key: resource.foo # Include resource.foo attribute if present
  - key: resource.bar # Always include resource.bar attribute, default to bar
    default_value: bar
  - key: resource.baz # Optional resource.baz attribute is added if present
    optional: true
```

With the above configuration the produced metrics would have the following resource attributes:

- `resource.foo` will be present on the produced metrics if the incoming data also has the attribute defined. If the attribute is missing in the incoming data the output metric will be produced without the said attribute.
- `resource.bar` will always be present because of the `default_value`. If the incoming data does not have a resource attribute named `resource.bar` then the configured `default_value` of `bar` will be used.
- `resource.baz` will behave exactly the same as `resource.foo`. Since resource attributes are basically an include list, the `optional` option is a no-op, i.e. a resource attribute with `optional` set to `true` behaves identically to an attribute configured without `default_value` or `optional`.
Metrics data streams MUST obey the single-writer principle. However, since the signaltometrics component produces metrics from all signal types and also allows customizing the resource attributes, there is a possibility of violating the single-writer principle. To keep the single-writer principle intact, the component adds collector instance information as resource attributes. The following resource attribute is added to each produced metric:

```
signaltometrics.service.instance.id: <service_instance_id_of_the_otel_collector>
```

The component implements the following custom OTTL functions:

- `AdjustedCount`: a converter capable of calculating the adjusted count for a span.