hopefully now linux

Signed-off-by: Frank Davidson <davidfr@americas.manulife.net>
Signed-off-by: Frank Davidson <ffdavidson@gmail.com>
Frank Davidson 2020-03-16 13:03:05 -04:00 committed by Frank Davidson
parent 77e8e78a88
commit a455a8ad64
601 changed files with 296683 additions and 296683 deletions

CHANGELOG.md
@@ -1,286 +1,286 @@
## 0.15.0 / 2020-03-05

* [ENHANCEMENT] Allow setting granularity for summary metrics ([#290](https://github.com/prometheus/statsd_exporter/pull/290))
* [ENHANCEMENT] Support a random-replacement cache invalidation strategy ([#281](https://github.com/prometheus/statsd_exporter/pull/281))
To facilitate the expanded settings for summaries, the configuration format changes from

```yaml
mappings:
- match: …
  timer_type: summary
  quantiles:
  - quantile: 0.99
    error: 0.001
  - quantile: 0.95
    error: 0.01
```

to

```yaml
mappings:
- match: …
  timer_type: summary
  summary_options:
    quantiles:
    - quantile: 0.99
      error: 0.001
    - quantile: 0.95
      error: 0.01
    max_summary_age: 30s
    summary_age_buckets: 3
    stream_buffer_size: 1000
```

For consistency, the format for histogram buckets also changes from

```yaml
mappings:
- match: …
  timer_type: histogram
  buckets: [ 0.01, 0.025, 0.05, 0.1 ]
```

to

```yaml
mappings:
- match: …
  timer_type: histogram
  histogram_options:
    buckets: [ 0.01, 0.025, 0.05, 0.1 ]
```

Transitionally, the old format will still work but is *deprecated*. The new
settings are optional.

For users of the [mapper](https://pkg.go.dev/github.com/prometheus/statsd_exporter/pkg/mapper?tab=doc)
as a library, this is a breaking change. To adjust your code, replace
`mapping.Buckets` with `mapping.HistogramOptions.Buckets` and
`mapping.Quantiles` with `mapping.SummaryOptions.Quantiles`.
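As a rough sketch of that adjustment (assuming the exported `HistogramOptions` struct and its `Buckets` field as named above; obtaining the mapping from an actual mapper lookup is elided, so the literal value here is purely illustrative — consult the package documentation for the exact types):

```go
package main

import (
	"fmt"

	"github.com/prometheus/statsd_exporter/pkg/mapper"
)

func main() {
	// A *mapper.MetricMapping normally comes out of a MetricMapper lookup;
	// a literal value is used here only to show the new field access.
	m := mapper.MetricMapping{
		HistogramOptions: &mapper.HistogramOptions{
			Buckets: []float64{0.01, 0.025, 0.05, 0.1},
		},
	}

	// Before 0.15.0 (removed): m.Buckets and m.Quantiles
	// From 0.15.0 on:
	fmt.Println(m.HistogramOptions.Buckets)
}
```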
## 0.14.1 / 2020-01-13

* [BUGFIX] Mapper cache poisoning when name is variable ([#286](https://github.com/prometheus/statsd_exporter/pull/286))
* [BUGFIX] nil pointer dereference in UDP listener ([#287](https://github.com/prometheus/statsd_exporter/pull/287))

Thank you to everyone who reported these, and @bakins for the mapper cache fix!

## 0.14.0 / 2020-01-10

* [CHANGE] Switch logging to go-kit ([#283](https://github.com/prometheus/statsd_exporter/pull/283))
* [CHANGE] Rename existing metric for mapping cache size ([#284](https://github.com/prometheus/statsd_exporter/pull/284))
* [ENHANCEMENT] Add metrics for mapping cache hits ([#280](https://github.com/prometheus/statsd_exporter/pull/280))

Logs are more structured now. The `fatal` log level no longer exists; use `--log.level=error` instead. The valid log formats are `logfmt` and `json`.

The metric `statsd_exporter_cache_length` is now called `statsd_metric_mapper_cache_length`.

## 0.13.0 / 2019-12-06

* [ENHANCEMENT] Support sampling factors for all statsd metric types ([#264](https://github.com/prometheus/statsd_exporter/issues/250))
* [ENHANCEMENT] Support Librato and InfluxDB labeling formats ([#267](https://github.com/prometheus/statsd_exporter/pull/267))

## 0.12.2 / 2019-07-25

* [BUGFIX] Fix Unix socket handler ([#252](https://github.com/prometheus/statsd_exporter/pull/252))
* [BUGFIX] Fix panic under high load ([#253](https://github.com/prometheus/statsd_exporter/pull/253))

Thank you to everyone who reported and helped debug these issues!

## 0.12.1 / 2019-07-08

* [BUGFIX] Renew TTL when a metric receives updates ([#246](https://github.com/prometheus/statsd_exporter/pull/246))
* [CHANGE] Reload on SIGHUP instead of watching the file ([#243](https://github.com/prometheus/statsd_exporter/pull/243))

## 0.11.2 / 2019-06-14

* [BUGFIX] Fix TCP handler ([#235](https://github.com/prometheus/statsd_exporter/pull/235))

## 0.11.1 / 2019-06-14

* [ENHANCEMENT] Batch event processing for improved ingestion performance ([#227](https://github.com/prometheus/statsd_exporter/pull/227))
* [ENHANCEMENT] Switch Prometheus client to promhttp, freeing the standard HTTP metrics ([#233](https://github.com/prometheus/statsd_exporter/pull/233))

With #233, the exporter no longer exports metrics about its own HTTP status. These were not helpful since you could not get them when scraping fails. This allows mapping to metric names like `http_requests_total` that are useful as application metrics.

## 0.10.6 / 2019-06-07

* [BUGFIX] Fix mapping collision for metrics with different types, but the same name ([#229](https://github.com/prometheus/statsd_exporter/pull/229))

## 0.10.5 / 2019-05-27

* [BUGFIX] Fix "Error: inconsistent label cardinality: expected 0 label values but got N in prometheus.Labels" ([#224](https://github.com/prometheus/statsd_exporter/pull/224))

## 0.10.4 / 2019-05-20

* [BUGFIX] Revert #218 due to a race condition ([#221](https://github.com/prometheus/statsd_exporter/pull/221))

## 0.10.3 / 2019-05-17

* [ENHANCEMENT] Reduce allocations when escaping metric names ([#217](https://github.com/prometheus/statsd_exporter/pull/217))
* [ENHANCEMENT] Reduce allocations when handling packets ([#218](https://github.com/prometheus/statsd_exporter/pull/218))
* [ENHANCEMENT] Optimize label sorting ([#219](https://github.com/prometheus/statsd_exporter/pull/219))

This release is entirely powered by @claytono. Kudos!

## 0.10.2 / 2019-05-17

* [CHANGE] Do not run as root in the Docker container by default ([#202](https://github.com/prometheus/statsd_exporter/pull/202))
* [FEATURE] Add metric for count of events by action ([#193](https://github.com/prometheus/statsd_exporter/pull/193))
* [FEATURE] Add metric for count of distinct metric names ([#200](https://github.com/prometheus/statsd_exporter/pull/200))
* [FEATURE] Add UNIX socket listener support ([#199](https://github.com/prometheus/statsd_exporter/pull/199))
* [FEATURE] Accept Datadog [distributions](https://docs.datadoghq.com/graphing/metrics/distributions/) ([#211](https://github.com/prometheus/statsd_exporter/pull/211))
* [ENHANCEMENT] Add a health check to the Docker container ([#182](https://github.com/prometheus/statsd_exporter/pull/182))
* [ENHANCEMENT] Allow inconsistent label sets ([#194](https://github.com/prometheus/statsd_exporter/pull/194))
* [ENHANCEMENT] Speed up sanitization of metric names ([#197](https://github.com/prometheus/statsd_exporter/pull/197))
* [ENHANCEMENT] Enable pprof endpoints ([#205](https://github.com/prometheus/statsd_exporter/pull/205))
* [ENHANCEMENT] DogStatsD tag parsing is faster ([#210](https://github.com/prometheus/statsd_exporter/pull/210))
* [ENHANCEMENT] Cache mapped metrics ([#198](https://github.com/prometheus/statsd_exporter/pull/198))
* [BUGFIX] Fix panic if a mapping resulted in an empty name ([#192](https://github.com/prometheus/statsd_exporter/pull/192))
* [BUGFIX] Ensure that there are always default quantiles if using summaries ([#212](https://github.com/prometheus/statsd_exporter/pull/212))
* [BUGFIX] Prevent ingesting conflicting metric types that would make scraping fail ([#213](https://github.com/prometheus/statsd_exporter/pull/213))

With #192, the count of events rejected because of negative counter increments has moved into the `statsd_exporter_events_error_total` metric, instead of being lumped in with the different kinds of successful events.

## 0.9.0 / 2019-03-11

* [ENHANCEMENT] Update the Prometheus client library to 0.9.2 ([#171](https://github.com/prometheus/statsd_exporter/pull/171))
* [FEATURE] Metrics can now be expired with a per-mapping TTL ([#164](https://github.com/prometheus/statsd_exporter/pull/164))
* [CHANGE] Timers that mapped to a summary are scaled to seconds, just like histograms ([#178](https://github.com/prometheus/statsd_exporter/pull/178))

If you are using summaries, all your quantiles and `_total` will change by a factor of 1000.
Adjust your queries and dashboards, or consider switching to histograms altogether.

## 0.8.1 / 2018-12-05

* [BUGFIX] Expose the counter for unmapped matches ([#161](https://github.com/prometheus/statsd_exporter/pull/161))
* [BUGFIX] Unsuccessful backtracking does not clobber captures ([#169](https://github.com/prometheus/statsd_exporter/pull/169), fixes [#168](https://github.com/prometheus/statsd_exporter/issues/168))

## 0.8.0 / 2018-10-12

* [ENHANCEMENT] Speed up glob matching ([#157](https://github.com/prometheus/statsd_exporter/pull/157))

This release replaces the implementation of the glob matching mechanism,
speeding it up significantly. In certain sub-optimal configurations, a warning
is logged.

This major enhancement was contributed by [Wangchong Zhou](https://github.com/fffonion).

## 0.7.0 / 2018-08-22

This is a breaking release, but the migration is easy: command line flags now
require two dashes (`--help` instead of `-help`). The previous flag library
already accepts this form, so if necessary you can migrate the flags first
before upgrading.

The deprecated `--statsd.listen-address` flag has been removed, use
`--statsd.listen-udp` instead.

* [CHANGE] Switch to Kingpin for flags, fixes setting log level ([#141](https://github.com/prometheus/statsd_exporter/pull/141))
* [ENHANCEMENT] Allow matching on specific metric types ([#136](https://github.com/prometheus/statsd_exporter/pulls/136))
* [ENHANCEMENT] Summary quantiles can be configured ([#135](https://github.com/prometheus/statsd_exporter/pulls/135))
* [BUGFIX] Fix panic if an invalid regular expression is supplied ([#126](https://github.com/prometheus/statsd_exporter/pulls/126))

## 0.6.0 / 2018-01-17

* [ENHANCEMENT] Add a drop action ([#115](https://github.com/prometheus/statsd_exporter/pulls/115))
* [ENHANCEMENT] Allow templating metric names ([#117](https://github.com/prometheus/statsd_exporter/pulls/117))

## 0.5.0 / 2017-11-16

NOTE: This release breaks backward compatibility. `statsd_exporter` now uses
a YAML configuration file. You must convert your mappings configuration to
the new format when you upgrade. For example, the configuration

```
test.dispatcher.*.*.*
name="dispatcher_events_total"
processor="$1"
action="$2"
outcome="$3"
job="test_dispatcher"

*.signup.*.*
name="signup_events_total"
provider="$2"
outcome="$3"
job="${1}_server"
```

now has the format

```yaml
mappings:
- match: test.dispatcher.*.*.*
  help: "The total number of events handled by the dispatcher."
  name: "dispatcher_events_total"
  labels:
    processor: "$1"
    action: "$2"
    outcome: "$3"
    job: "test_dispatcher"
- match: *.signup.*.*
  name: "signup_events_total"
  help: "The total number of signup events."
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
```

The help field is optional.

There is a [tool](https://github.com/bakins/statsd-exporter-convert) available to help with this conversion.

* [CHANGE] Replace the overloaded "packets" metric ([#106](https://github.com/prometheus/statsd_exporter/pulls/106))
* [CHANGE] Removed `-statsd.add-suffix` option flag [#99](https://github.com/prometheus/statsd_exporter/pulls/99). Users should remove
  this flag when upgrading. Metrics will no longer automatically include the
  suffixes `_timer` or `counter`. You may need to adjust any graphs that used
  metrics with these suffixes.
* [CHANGE] Reduce log levels [#92](https://github.com/prometheus/statsd_exporter/pulls/92). Many log events have been changed from error
  to debug log level.
* [CHANGE] Use YAML for configuration file [#66](https://github.com/prometheus/statsd_exporter/pulls/66). See note above about file format
  conversion.
* [ENHANCEMENT] Allow help text to be customized [#87](https://github.com/prometheus/statsd_exporter/pulls/87)
* [ENHANCEMENT] Add support for regex mappers [#85](https://github.com/prometheus/statsd_exporter/pulls/85)
* [ENHANCEMENT] Add TCP listener support [#71](https://github.com/prometheus/statsd_exporter/pulls/71)
* [ENHANCEMENT] Allow histograms for timer metrics [#66](https://github.com/prometheus/statsd_exporter/pulls/66)
* [ENHANCEMENT] Added support for sampling factor on timing events [#28](https://github.com/prometheus/statsd_exporter/pulls/28)
* [BUGFIX] Conflicting label sets no longer crash the exporter and will be
  ignored. Restart to clear the remembered label set. [#72](https://github.com/prometheus/statsd_exporter/pulls/72)

## 0.4.0 / 2017-05-12

* [ENHANCEMENT] Improve mapping configuration parser [#61](https://github.com/prometheus/statsd_exporter/pulls/61)
* [ENHANCEMENT] Add increment/decrement support to Gauges [#65](https://github.com/prometheus/statsd_exporter/pulls/65)
* [BUGFIX] Tolerate more forms of broken lines from StatsD [#48](https://github.com/prometheus/statsd_exporter/pulls/48)
* [BUGFIX] Skip metrics with invalid utf8 [#50](https://github.com/prometheus/statsd_exporter/pulls/50)
* [BUGFIX] ListenAndServe now fails on exit [#58](https://github.com/prometheus/statsd_exporter/pulls/58)

## 0.3.0 / 2016-05-05

* [CHANGE] Drop `_count` suffix for `loaded_mappings` metric ([#41](https://github.com/prometheus/statsd_exporter/pulls/41))
* [ENHANCEMENT] Use common's log and version packages, and add -version flag ([#44](https://github.com/prometheus/statsd_exporter/pulls/44))
* [ENHANCEMENT] Add flag to disable metric type suffixes ([#37](https://github.com/prometheus/statsd_exporter/pulls/37))
* [BUGFIX] Increase receivable UDP datagram size to 65535 bytes ([#36](https://github.com/prometheus/statsd_exporter/pulls/36))
* [BUGFIX] Warn, not panic when negative number counter is submitted ([#33](https://github.com/prometheus/statsd_exporter/pulls/33))

## 0.2.0 / 2016-03-19

NOTE: This release renames `statsd_bridge` to `statsd_exporter`

* [CHANGE] New Dockerfile using alpine-golang-make-onbuild base image ([#17](https://github.com/prometheus/statsd_exporter/pulls/17))
* [ENHANCEMENT] Allow configuration of UDP read buffer ([#22](https://github.com/prometheus/statsd_exporter/pulls/22))
* [BUGFIX] allow metrics with dashes when mapping ([#24](https://github.com/prometheus/statsd_exporter/pulls/24))
* [ENHANCEMENT] add root endpoint with redirect ([#25](https://github.com/prometheus/statsd_exporter/pulls/25))
* [CHANGE] rename bridge to exporter ([#26](https://github.com/prometheus/statsd_exporter/pulls/26))

## 0.1.0 / 2015-04-17

* Initial release

CONTRIBUTING.md
@@ -1,18 +1,18 @@
# Contributing

Prometheus uses GitHub to manage reviews of pull requests.

* If you have a trivial fix or improvement, go ahead and create a pull request,
  addressing (with `@...`) the maintainer of this repository (see
  [MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.

* If you plan to do something more involved, first discuss your ideas
  on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
  This will avoid unnecessary work and surely give you and us a good deal
  of inspiration.

* Relevant coding style guidelines are the [Go Code Review
  Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments)
  and the _Formatting and style_ section of Peter Bourgon's [Go: Best
  Practices for Production
  Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).

Dockerfile
@@ -1,13 +1,13 @@
ARG ARCH="amd64"
ARG OS="linux"
FROM quay.io/prometheus/busybox-${OS}-${ARCH}:latest
LABEL maintainer="The Prometheus Authors <prometheus-developers@googlegroups.com>"

ARG ARCH="amd64"
ARG OS="linux"
COPY .build/${OS}-${ARCH}/statsd_exporter /bin/statsd_exporter

USER nobody
EXPOSE 9102 9125 9125/udp
HEALTHCHECK CMD wget --spider -S "http://localhost:9102/metrics" -T 60 2>&1 || exit 1
ENTRYPOINT [ "/bin/statsd_exporter" ]

LICENSE
@@ -1,201 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

MAINTAINERS.md
@@ -1 +1 @@
* Matthias Rampke <mr@soundcloud.com>

Makefile
@@ -1,28 +1,28 @@
# Copyright 2013 The Prometheus Authors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Needs to be defined before including Makefile.common to auto-generate targets
DOCKER_ARCHS ?= amd64 armv7 arm64

include Makefile.common

STATICCHECK_IGNORE =

DOCKER_IMAGE_NAME ?= statsd-exporter

.PHONY: bench
bench:
	@echo ">> running all benchmarks"
	$(GO) test -bench . -race $(pkgs)

all: bench

Makefile.common
@@ -1,289 +1,289 @@
# Copyright 2018 The Prometheus Authors # Copyright 2018 The Prometheus Authors
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # You may obtain a copy of the License at
# #
# http://www.apache.org/licenses/LICENSE-2.0 # http://www.apache.org/licenses/LICENSE-2.0
# #
# Unless required by applicable law or agreed to in writing, software # Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, # distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# A common Makefile that includes rules to be reused in different prometheus projects. # A common Makefile that includes rules to be reused in different prometheus projects.
# !!! Open PRs only against the prometheus/prometheus/Makefile.common repository! # !!! Open PRs only against the prometheus/prometheus/Makefile.common repository!
# Example usage : # Example usage :
# Create the main Makefile in the root project directory. # Create the main Makefile in the root project directory.
# include Makefile.common # include Makefile.common
# customTarget: # customTarget:
# @echo ">> Running customTarget" # @echo ">> Running customTarget"
# #
# Ensure GOBIN is not set during build so that promu is installed to the correct path # Ensure GOBIN is not set during build so that promu is installed to the correct path
unexport GOBIN unexport GOBIN
GO ?= go GO ?= go
GOFMT ?= $(GO)fmt GOFMT ?= $(GO)fmt
FIRST_GOPATH := $(firstword $(subst :, ,$(shell $(GO) env GOPATH))) FIRST_GOPATH := $(firstword $(subst :, ,$(shell $(GO) env GOPATH)))
GOOPTS ?= GOOPTS ?=
GOHOSTOS ?= $(shell $(GO) env GOHOSTOS) GOHOSTOS ?= $(shell $(GO) env GOHOSTOS)
GOHOSTARCH ?= $(shell $(GO) env GOHOSTARCH) GOHOSTARCH ?= $(shell $(GO) env GOHOSTARCH)
GO_VERSION ?= $(shell $(GO) version) GO_VERSION ?= $(shell $(GO) version)
GO_VERSION_NUMBER ?= $(word 3, $(GO_VERSION)) GO_VERSION_NUMBER ?= $(word 3, $(GO_VERSION))
PRE_GO_111 ?= $(shell echo $(GO_VERSION_NUMBER) | grep -E 'go1\.(10|[0-9])\.') PRE_GO_111 ?= $(shell echo $(GO_VERSION_NUMBER) | grep -E 'go1\.(10|[0-9])\.')
GOVENDOR := GOVENDOR :=
GO111MODULE := GO111MODULE :=
ifeq (, $(PRE_GO_111)) ifeq (, $(PRE_GO_111))
ifneq (,$(wildcard go.mod)) ifneq (,$(wildcard go.mod))
# Enforce Go modules support just in case the directory is inside GOPATH (and for Travis CI). # Enforce Go modules support just in case the directory is inside GOPATH (and for Travis CI).
GO111MODULE := on GO111MODULE := on
ifneq (,$(wildcard vendor)) ifneq (,$(wildcard vendor))
# Always use the local vendor/ directory to satisfy the dependencies. # Always use the local vendor/ directory to satisfy the dependencies.
GOOPTS := $(GOOPTS) -mod=vendor GOOPTS := $(GOOPTS) -mod=vendor
endif endif
endif endif
else else
ifneq (,$(wildcard go.mod)) ifneq (,$(wildcard go.mod))
ifneq (,$(wildcard vendor)) ifneq (,$(wildcard vendor))
$(warning This repository requires Go >= 1.11 because of Go modules) $(warning This repository requires Go >= 1.11 because of Go modules)
$(warning Some recipes may not work as expected as the current Go runtime is '$(GO_VERSION_NUMBER)') $(warning Some recipes may not work as expected as the current Go runtime is '$(GO_VERSION_NUMBER)')
endif endif
else else
# This repository isn't using Go modules (yet). # This repository isn't using Go modules (yet).
GOVENDOR := $(FIRST_GOPATH)/bin/govendor GOVENDOR := $(FIRST_GOPATH)/bin/govendor
endif endif
endif endif
PROMU := $(FIRST_GOPATH)/bin/promu
pkgs = ./...
ifeq (arm, $(GOHOSTARCH))
GOHOSTARM ?= $(shell GOARM= $(GO) env GOARM)
GO_BUILD_PLATFORM ?= $(GOHOSTOS)-$(GOHOSTARCH)v$(GOHOSTARM)
else
GO_BUILD_PLATFORM ?= $(GOHOSTOS)-$(GOHOSTARCH)
endif
GOTEST := $(GO) test
GOTEST_DIR :=
ifneq ($(CIRCLE_JOB),)
ifneq ($(shell which gotestsum),)
GOTEST_DIR := test-results
GOTEST := gotestsum --junitfile $(GOTEST_DIR)/unit-tests.xml --
endif
endif
PROMU_VERSION ?= 0.5.0
PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_VERSION)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM).tar.gz
GOLANGCI_LINT :=
GOLANGCI_LINT_OPTS ?=
GOLANGCI_LINT_VERSION ?= v1.18.0
# golangci-lint only supports linux, darwin and windows platforms on i386/amd64.
# windows isn't included here because of the path separator being different.
ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin))
ifeq ($(GOHOSTARCH),$(filter $(GOHOSTARCH),amd64 i386))
GOLANGCI_LINT := $(FIRST_GOPATH)/bin/golangci-lint
endif
endif
PREFIX ?= $(shell pwd)
BIN_DIR ?= $(shell pwd)
DOCKER_IMAGE_TAG ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD))
DOCKERFILE_PATH ?= ./Dockerfile
DOCKERBUILD_CONTEXT ?= ./
DOCKER_REPO ?= prom
DOCKER_ARCHS ?= amd64
BUILD_DOCKER_ARCHS = $(addprefix common-docker-,$(DOCKER_ARCHS))
PUBLISH_DOCKER_ARCHS = $(addprefix common-docker-publish-,$(DOCKER_ARCHS))
TAG_DOCKER_ARCHS = $(addprefix common-docker-tag-latest-,$(DOCKER_ARCHS))
ifeq ($(GOHOSTARCH),amd64)
ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux freebsd darwin windows))
# Only supported on amd64
test-flags := -race
endif
endif
# This rule is used to forward a target like "build" to "common-build". This
# allows a new "build" target to be defined in a Makefile which includes this
# one and override "common-build" without override warnings.
%: common-% ;
.PHONY: common-all
common-all: precheck style check_license lint unused build test
.PHONY: common-style
common-style:
	@echo ">> checking code style"
	@fmtRes=$$($(GOFMT) -d $$(find . -path ./vendor -prune -o -name '*.go' -print)); \
	if [ -n "$${fmtRes}" ]; then \
		echo "gofmt checking failed!"; echo "$${fmtRes}"; echo; \
		echo "Please ensure you are using $$($(GO) version) for formatting code."; \
		exit 1; \
	fi
.PHONY: common-check_license
common-check_license:
	@echo ">> checking license header"
	@licRes=$$(for file in $$(find . -type f -iname '*.go' ! -path './vendor/*') ; do \
		awk 'NR<=3' $$file | grep -Eq "(Copyright|generated|GENERATED)" || echo $$file; \
	done); \
	if [ -n "$${licRes}" ]; then \
		echo "license header checking failed:"; echo "$${licRes}"; \
		exit 1; \
	fi
.PHONY: common-deps
common-deps:
	@echo ">> getting dependencies"
ifdef GO111MODULE
	GO111MODULE=$(GO111MODULE) $(GO) mod download
else
	$(GO) get $(GOOPTS) -t ./...
endif
.PHONY: common-test-short
common-test-short: $(GOTEST_DIR)
	@echo ">> running short tests"
	GO111MODULE=$(GO111MODULE) $(GOTEST) -short $(GOOPTS) $(pkgs)
.PHONY: common-test
common-test: $(GOTEST_DIR)
	@echo ">> running all tests"
	GO111MODULE=$(GO111MODULE) $(GOTEST) $(test-flags) $(GOOPTS) $(pkgs)
$(GOTEST_DIR):
	@mkdir -p $@
.PHONY: common-format
common-format:
	@echo ">> formatting code"
	GO111MODULE=$(GO111MODULE) $(GO) fmt $(pkgs)
.PHONY: common-vet
common-vet:
	@echo ">> vetting code"
	GO111MODULE=$(GO111MODULE) $(GO) vet $(GOOPTS) $(pkgs)
.PHONY: common-lint
common-lint: $(GOLANGCI_LINT)
ifdef GOLANGCI_LINT
	@echo ">> running golangci-lint"
ifdef GO111MODULE
# 'go list' needs to be executed before staticcheck to prepopulate the modules cache.
# Otherwise staticcheck might fail randomly for some reason not yet explained.
	GO111MODULE=$(GO111MODULE) $(GO) list -e -compiled -test=true -export=false -deps=true -find=false -tags= -- ./... > /dev/null
	GO111MODULE=$(GO111MODULE) $(GOLANGCI_LINT) run $(GOLANGCI_LINT_OPTS) $(pkgs)
else
	$(GOLANGCI_LINT) run $(pkgs)
endif
endif
# For backward-compatibility.
.PHONY: common-staticcheck
common-staticcheck: lint
.PHONY: common-unused
common-unused: $(GOVENDOR)
ifdef GOVENDOR
	@echo ">> running check for unused packages"
	@$(GOVENDOR) list +unused | grep . && exit 1 || echo 'No unused packages'
else
ifdef GO111MODULE
	@echo ">> running check for unused/missing packages in go.mod"
	GO111MODULE=$(GO111MODULE) $(GO) mod tidy
ifeq (,$(wildcard vendor))
	@git diff --exit-code -- go.sum go.mod
else
	@echo ">> running check for unused packages in vendor/"
	GO111MODULE=$(GO111MODULE) $(GO) mod vendor
	@git diff --exit-code -- go.sum go.mod vendor/
endif
endif
endif
.PHONY: common-build
common-build: promu
	@echo ">> building binaries"
	GO111MODULE=$(GO111MODULE) $(PROMU) build --prefix $(PREFIX) $(PROMU_BINARIES)
.PHONY: common-tarball
common-tarball: promu
	@echo ">> building release tarball"
	$(PROMU) tarball --prefix $(PREFIX) $(BIN_DIR)
.PHONY: common-docker $(BUILD_DOCKER_ARCHS)
common-docker: $(BUILD_DOCKER_ARCHS)
$(BUILD_DOCKER_ARCHS): common-docker-%:
	docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" \
		-f $(DOCKERFILE_PATH) \
		--build-arg ARCH="$*" \
		--build-arg OS="linux" \
		$(DOCKERBUILD_CONTEXT)
.PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS)
common-docker-publish: $(PUBLISH_DOCKER_ARCHS)
$(PUBLISH_DOCKER_ARCHS): common-docker-publish-%:
	docker push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)"
.PHONY: common-docker-tag-latest $(TAG_DOCKER_ARCHS)
common-docker-tag-latest: $(TAG_DOCKER_ARCHS)
$(TAG_DOCKER_ARCHS): common-docker-tag-latest-%:
	docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest"
.PHONY: common-docker-manifest
common-docker-manifest:
	DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(DOCKER_IMAGE_TAG))
	DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)"
.PHONY: promu
promu: $(PROMU)
$(PROMU):
	$(eval PROMU_TMP := $(shell mktemp -d))
	curl -s -L $(PROMU_URL) | tar -xvzf - -C $(PROMU_TMP)
	mkdir -p $(FIRST_GOPATH)/bin
	cp $(PROMU_TMP)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM)/promu $(FIRST_GOPATH)/bin/promu
	rm -r $(PROMU_TMP)
.PHONY: proto
proto:
	@echo ">> generating code from proto files"
	@./scripts/genproto.sh
ifdef GOLANGCI_LINT
$(GOLANGCI_LINT):
	mkdir -p $(FIRST_GOPATH)/bin
	curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/$(GOLANGCI_LINT_VERSION)/install.sh \
		| sed -e '/install -d/d' \
		| sh -s -- -b $(FIRST_GOPATH)/bin $(GOLANGCI_LINT_VERSION)
endif
ifdef GOVENDOR
.PHONY: $(GOVENDOR)
$(GOVENDOR):
	GOOS= GOARCH= $(GO) get -u github.com/kardianos/govendor
endif
.PHONY: precheck
precheck::
define PRECHECK_COMMAND_template =
precheck:: $(1)_precheck
PRECHECK_COMMAND_$(1) ?= $(1) $$(strip $$(PRECHECK_OPTIONS_$(1)))
.PHONY: $(1)_precheck
$(1)_precheck:
	@if ! $$(PRECHECK_COMMAND_$(1)) 1>/dev/null 2>&1; then \
		echo "Execution of '$$(PRECHECK_COMMAND_$(1))' command failed. Is $(1) installed?"; \
		exit 1; \
	fi
endef

NOTICE
View file

@@ -1,5 +1,5 @@
StatsD-to-Prometheus exporter
Copyright 2013-2015 The Prometheus Authors
This product includes software developed at
SoundCloud Ltd. (http://soundcloud.com/).

README.md
View file

@@ -1,474 +1,474 @@
# statsd exporter [![Build Status](https://travis-ci.org/prometheus/statsd_exporter.svg)][travis]
[![CircleCI](https://circleci.com/gh/prometheus/statsd_exporter/tree/master.svg?style=shield)][circleci]
[![Docker Repository on Quay](https://quay.io/repository/prometheus/statsd-exporter/status)][quay]
[![Docker Pulls](https://img.shields.io/docker/pulls/prom/statsd-exporter.svg)][hub]
`statsd_exporter` receives StatsD-style metrics and exports them as Prometheus metrics.
## Overview
### With StatsD
To pipe metrics from an existing StatsD environment into Prometheus, configure
StatsD's repeater backend to repeat all received metrics to a `statsd_exporter`
process. This exporter translates StatsD metrics to Prometheus metrics via
configured mapping rules.

    +----------+                         +-------------------+                        +--------------+
    |  StatsD  |---(UDP/TCP repeater)--->|  statsd_exporter  |<---(scrape /metrics)---|  Prometheus  |
    +----------+                         +-------------------+                        +--------------+

### Without StatsD
Since the StatsD exporter uses the same line protocol as StatsD itself, you can
also configure your applications to send StatsD metrics directly to the exporter.
In that case, you don't need to run a StatsD server anymore.
We recommend this only as an intermediate solution and recommend switching to
[native Prometheus instrumentation](http://prometheus.io/docs/instrumenting/clientlibs/)
in the long term.
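For a quick test without a client library, you can send a metric line by hand.
A minimal sketch using `netcat`, assuming the exporter runs locally on its
default UDP port 9125 (the metric name is a hypothetical example):

    $ echo "deploys.test.myservice:1|c" | nc -w 1 -u localhost 9125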
### Tagging Extensions
The exporter supports Librato, InfluxDB, and DogStatsD-style tags,
which will be converted into Prometheus labels.
Librato-style tags must be appended to the metric name with a
delimiting `#`, like so:
```
metric.name#tagName=val,tag2Name=val2:0|c
```
See the [statsd-librato-backend README](https://github.com/librato/statsd-librato-backend#tags)
for a more complete description.
InfluxDB-style tags must be appended to the metric name with a
delimiting comma, like so:
```
metric.name,tagName=val,tag2Name=val2:0|c
```
See [this InfluxDB blog post](https://www.influxdata.com/blog/getting-started-with-sending-statsd-metrics-to-telegraf-influxdb/#introducing-influx-statsd)
for a larger overview.
DogStatsD-style tags are appended as a `|#`-delimited section at the
end of the metric, like so:
```
metric.name:0|c|#tagName:val,tag2Name:val2
```
See [Tags](https://docs.datadoghq.com/developers/dogstatsd/data_types/#tagging)
in the DogStatsD documentation for the concept description and
[Datagram Format](https://docs.datadoghq.com/developers/dogstatsd/datagram_shell/) for the syntax.
If you encounter problems, note that this tagging style is incompatible with
the original `statsd` implementation.
Be aware: if you mix tag styles (e.g., Librato/InfluxDB with DogStatsD), the
exporter will consider this an error and the sample will be discarded. Also,
tags without values (`#some_tag`) are not supported and will be ignored.
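For example, the following (hypothetical) line mixes InfluxDB- and
DogStatsD-style tags and would therefore be discarded:
```
metric.name,tagName=val:0|c|#tag2Name:val2
```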
## Building and Running
NOTE: Version 0.7.0 switched to the [kingpin](https://github.com/alecthomas/kingpin) flags library. With this change, flag behaviour is POSIX-ish:
* long flags start with two dashes (`--version`)
* multiple short flags can be combined (but there currently is only one)
* flag processing stops at the first `--`
```
$ go build
$ ./statsd_exporter --help
usage: statsd_exporter [<flags>]

Flags:
  -h, --help               Show context-sensitive help (also try --help-long and --help-man).
      --web.listen-address=":9102"
                           The address on which to expose the web interface and generated Prometheus metrics.
      --web.telemetry-path="/metrics"
                           Path under which to expose metrics.
      --statsd.listen-udp=":9125"
                           The UDP address on which to receive statsd metric lines. "" disables it.
      --statsd.listen-tcp=":9125"
                           The TCP address on which to receive statsd metric lines. "" disables it.
      --statsd.listen-unixgram=""
                           The Unixgram socket path to receive statsd metric lines in datagram. "" disables it.
      --statsd.unixsocket-mode="755"
                           The permission mode of the unix socket.
      --statsd.mapping-config=STATSD.MAPPING-CONFIG
                           Metric mapping configuration file name.
      --statsd.read-buffer=STATSD.READ-BUFFER
                           Size (in bytes) of the operating system's transmit read buffer associated with the UDP or Unixgram connection. Please make sure the kernel parameter net.core.rmem_max is set to
                           a value greater than the value specified.
      --statsd.cache-size=1000  Maximum size of your metric mapping cache. Relies on least recently used replacement policy if max size is reached.
      --statsd.event-queue-size=10000
                           Size of internal queue for processing events.
      --statsd.event-flush-threshold=1000
                           Number of events to hold in queue before flushing.
      --statsd.event-flush-interval=200ms
                           Maximum time between event queue flushes.
      --debug.dump-fsm=""  The path to dump internal FSM generated for glob matching as Dot file.
      --log.level="info"   Only log messages with the given severity or above. Valid levels: [debug, info, warn, error, fatal]
      --log.format="logger:stderr"
                           Set the log target and format. Example: "logger:syslog?appname=bob&local=7" or "logger:stdout?json=true"
      --version            Show application version.
```
## Tests

    $ go test
## Metric Mapping and Configuration
The `statsd_exporter` can be configured to translate specific dot-separated StatsD
metrics into labeled Prometheus metrics via a simple mapping language. The config
file is reloaded on SIGHUP.
A mapping definition starts with a line matching the StatsD metric in question,
with `*`s acting as wildcards for each dot-separated metric component. The
lines following the matching expression must contain one `label="value"` pair
each, and at least define the metric name (label name `name`). The Prometheus
metric is then constructed from these labels. `$n`-style references in the
label value are replaced by the n-th wildcard match in the matching line,
starting at 1. Multiple matching definitions are separated by one or more empty
lines. The first mapping rule that matches a StatsD metric wins.
Metrics that don't match any mapping in the configuration file are translated
into Prometheus metrics without any labels and with any non-alphanumeric
characters, including periods, translated into underscores.
In general, the different metric types are translated as follows:

    StatsD gauge   -> Prometheus gauge
    StatsD counter -> Prometheus counter
    StatsD timer   -> Prometheus summary                    <-- indicates timer quantiles
                   -> Prometheus counter (suffix `_total`)  <-- indicates total time spent
                   -> Prometheus counter (suffix `_count`)  <-- indicates total number of timer events

An example mapping configuration:
```yaml
mappings:
- match: "test.dispatcher.*.*.*"
  name: "dispatcher_events_total"
  labels:
    processor: "$1"
    action: "$2"
    outcome: "$3"
    job: "test_dispatcher"
- match: "*.signup.*.*"
  name: "signup_events_total"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
```
This would transform these example StatsD metrics into Prometheus metrics as
follows:

    test.dispatcher.FooProcessor.send.success
     => dispatcher_events_total{processor="FooProcessor", action="send", outcome="success", job="test_dispatcher"}
    foo_product.signup.facebook.failure
     => signup_events_total{provider="facebook", outcome="failure", job="foo_product_server"}
    test.web-server.foo.bar
     => test_web_server_foo_bar{}

Each mapping in the configuration file must define a `name` for the metric. The
metric's name can contain `$n`-style references to be replaced by the n-th
wildcard match in the matching line. That allows for dynamic rewrites, such as:
```yaml
mappings:
- match: "test.*.*.counter"
  name: "${2}_total"
  labels:
    provider: "$1"
```
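With a mapping like this, a metric would be rewritten as follows (the input
name is a hypothetical example):

    test.aws.uploads.counter
     => uploads_total{provider="aws"}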
The metric name can also contain references to regex matches. The mapping above
could be written as:
```yaml
mappings:
- match: "test\\.(\\w+)\\.(\\w+)\\.counter"
  match_type: regex
  name: "${2}_total"
  labels:
    provider: "$1"
```
Be aware of YAML escape rules, as a mapping like the following will not work:
```yaml
mappings:
- match: "test\.(\w+)\.(\w+)\.counter"
  match_type: regex
  name: "${2}_total"
  labels:
    provider: "$1"
```
Please note that metrics with the same name must also have the same set of
label names.
If the default metric help text is insufficient for your needs, you may use the YAML
configuration to specify a custom help text for each mapping:
```yaml
mappings:
- match: "http.request.*"
  help: "Total number of http requests"
  name: "http_requests_total"
  labels:
    code: "$1"
```
### StatsD timers
By default, statsd timers are represented as a Prometheus summary with
quantiles. You may optionally configure the [quantiles and acceptable
error](https://prometheus.io/docs/practices/histograms/#quantiles), as
well as adjust how the summary metric is aggregated:
```yaml
mappings:
- match: "test.timing.*.*.*"
  timer_type: summary
  name: "my_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
  summary_options:
    quantiles:
      - quantile: 0.99
        error: 0.001
      - quantile: 0.95
        error: 0.01
      - quantile: 0.9
        error: 0.05
      - quantile: 0.5
        error: 0.005
    max_summary_age: 30s
    summary_age_buckets: 3
    stream_buffer_size: 1000
```
The default quantiles are 0.99, 0.9, and 0.5.
The default summary age is 10 minutes, the default number of buckets
is 5, and the default buffer size is 500. See also the
[`golang_client` docs](https://godoc.org/github.com/prometheus/client_golang/prometheus#SummaryOpts).
The `max_summary_age` corresponds to `SummaryOptions.MaxAge`, `summary_age_buckets`
to `SummaryOptions.AgeBuckets`, and `stream_buffer_size` to `SummaryOptions.BufCap`.
In the configuration, one may also set the timer type to "histogram". The
default is "summary", as in the plain text configuration format. For example,
to set the timer type for a single metric:
```yaml
mappings:
- match: "test.timing.*.*.*"
  timer_type: histogram
  histogram_options:
    buckets: [ 0.01, 0.025, 0.05, 0.1 ]
  name: "my_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
```
Note that timers will be accepted with the `ms`, `h`, and `d` statsd types. The first two are timers and histograms and the `d` type is for DataDog's "distribution" type. The distribution type is treated identically to timers and histograms.
Note that whereas timers in StatsD expect the unit of timing data to be milliseconds,
Prometheus expects the unit to be seconds. Hence, the exporter converts all timers to seconds
before exporting them.
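For example, a timing sample arriving in milliseconds is observed in seconds
(hypothetical input):

    foo_timer:250|ms
     => an observation of 0.25 on the summary or histogram mapped for foo_timer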
### DogStatsD Client Behavior
#### `timed()` decorator
If you are using the DogStatsD client's [timed](https://datadogpy.readthedocs.io/en/latest/#datadog.threadstats.base.ThreadStats.timed) decorator,
note that it emits the metric in seconds; set [use_ms](https://datadogpy.readthedocs.io/en/latest/index.html?highlight=use_ms) to `True` to fix this.
### Regular expression matching
Another capability when using YAML configuration is the ability to define matches
using raw regular expressions as opposed to the default globbing style of match.
This allows for pulling structured data from otherwise poorly named statsd
metrics and for more precise targeting of match rules. When no `match_type`
parameter is specified, the default value of `glob` is assumed:
```yaml
mappings:
- match: "(.*)\\.(.*)--(.*)\\.status\\.(.*)\\.count"
  match_type: regex
  name: "request_total"
  labels:
    hostname: "$1"
    exec: "$2"
    protocol: "$3"
    code: "$4"
```
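Under this mapping, a (hypothetical) metric name is pulled apart as follows:

    host01.myapp--https.status.500.count
     => request_total{hostname="host01", exec="myapp", protocol="https", code="500"}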
Note that one may also set the histogram buckets. If not set, the default
[Prometheus client values](https://godoc.org/github.com/prometheus/client_golang/prometheus#pkg-variables) are used: `[.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]`. `+Inf` is added
automatically.
`timer_type` is only used when the statsd metric type is a timer. `buckets` is
only used when the statsd metric type is a timer and the `timer_type` is set to
"histogram."
### Global defaults
One may also set defaults for the timer type, buckets or quantiles, and match_type. These will be used
by all mappings that do not define them.
An option that can only be configured in `defaults` is `glob_disable_ordering`, which is `false` if omitted. By setting this to `true`, the `glob` match type will not honor the occurrence of rules in the mapping rules file and will always treat `*` as lower priority than a literal string.
```yaml
defaults:
  timer_type: histogram
  buckets: [ .005, .01, .025, .05, .1, .25, .5, 1, 2.5 ]
  match_type: glob
  glob_disable_ordering: false
  ttl: 0 # metrics do not expire
mappings:
# This will be a histogram using the buckets set in `defaults`.
- match: "test.timing.*.*.*"
  name: "my_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
# This will be a summary timer.
- match: "other.timing.*.*.*"
  timer_type: summary
  name: "other_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server_other"
```
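With these defaults in place, the first mapping above would, for a hypothetical
input, produce a histogram using the buckets from `defaults`:

    test.timing.web.get.ok
     => my_timer{provider="get", outcome="ok", job="web_server"}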
### Choosing between glob or regex match type
Although glob matching lacks the flexibility of regular expressions for matching
and formatting labels, it is optimized to perform better than regex in certain
use cases. In short, glob performs best when the number of rules is large and
the captures (uses of `*`) in a single rule are few. Whether or not ordering is
disabled in glob won't have a noticeable effect on performance in general use
cases. In edge cases like the one below, however, disabling ordering will be
beneficial:

    a.*.*.*.*
    a.b.*.*.*
    a.b.c.*.*
    a.b.c.d.*

The reason is that the list assignment of captures (uses of `*`) is the most
expensive operation in glob. Honoring ordering will result in up to 10 list
assignments, while without ordering at most 4 are needed.
For details, see [pkg/mapper/fsm/README.md](pkg/mapper/fsm/README.md).
Running `go test -bench .` in the **pkg/mapper** directory will produce
a detailed comparison between the two match types.
### `drop` action
You may also drop metrics by specifying a "drop" action on a match. For
example:
```yaml
mappings:
# This metric would match as normal.
- match: "test.timing.*.*.*"
  name: "my_timer"
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
# Any metric not matched will be dropped because "." matches all metrics.
- match: "."
  match_type: regex
  action: drop
  name: "dropped"
```
You can drop any metric using the normal match syntax.
The default action is "map", which does the normal metrics mapping.
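For example, with the rules above (hypothetical inputs):

    test.timing.web.get.ok  => my_timer{provider="get", outcome="ok", job="web_server"}
    some.other.metric       => dropped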
### Explicit metric type mapping
StatsD allows emitting different metric types under the same metric name,
but the Prometheus client library can't merge those. For this use case, the
mapping definition allows you to specify which metric type to match:
```yaml
mappings:
- match: "test.foo.*"
  name: "test_foo"
  match_metric_type: counter
  labels:
    provider: "$1"
```
Possible values for `match_metric_type` are `gauge`, `counter` and `timer`.
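With this mapping, only the counter flavor of the name is matched (hypothetical
inputs):

    test.foo.bar:2|c  => matched, mapped to test_foo{provider="bar"}
    test.foo.bar:3|g  => not matched by this rule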
### Mapping cache size and cache replacement policy
The exporter caches the result of mapping each statsd metric name, which can greatly improve performance.
By default, the cache holds up to 1000 unique statsd metric name -> Prometheus metric mappings.
This maximum can be adjusted using the `statsd.cache-size` flag.
If the maximum is reached, entries are by default rotated using the [least recently used replacement policy](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)). This strategy is optimal when memory is constrained, as only the most recently used entries are retained.
Alternatively, you can choose a [random-replacement cache strategy](https://en.wikipedia.org/wiki/Cache_replacement_policies#Random_replacement_(RR)). This is less optimal if the cache is smaller than the cacheable set, but requires less locking. Use this for very high throughput, but make sure to allow for a cache that holds all metrics.
The optimal cache size is determined by the cardinality of the _incoming_ metrics.
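For example, to size the cache for a workload with more unique metric names
(the value is purely illustrative):

    $ ./statsd_exporter --statsd.cache-size=10000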
### Time series expiration
The `ttl` parameter can be used to define the expiration time for stale metrics.
The value is a time duration with valid time units: "ns", "us" (or "µs"),
"ms", "s", "m", "h". For example, `ttl: 1m20s`. A value of `0` indicates
metrics that do not expire.
TTL configuration is stored for each mapped metric name/labels combination
whenever new samples are received. This means that you cannot immediately
expire a metric only by changing the mapping configuration. At least one
sample must be received for updated mappings to take effect.
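As a sketch, a mapping that lets its time series expire after 45 seconds without
new samples might look like this (assuming the per-mapping `ttl` setting,
analogous to the `ttl` shown under `defaults` above):
```yaml
mappings:
- match: "test.timing.*.*.*"
  name: "my_timer"
  ttl: 45s
  labels:
    provider: "$2"
    outcome: "$3"
    job: "${1}_server"
```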
### Event flushing configuration
Internally, `statsd_exporter` runs a goroutine for each network listener (UDP, TCP & Unix Socket). Each of these receives and parses incoming metric lines into events. For performance purposes, these events are queued internally and flushed to the main exporter goroutine periodically in batches. The size of this queue and the flush criteria can be tuned with the `--statsd.event-queue-size`, `--statsd.event-flush-threshold` and `--statsd.event-flush-interval` flags. However, the defaults should perform well even for very high traffic environments.
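For illustration, a start-up line tuning all three settings (the values are
arbitrary examples, not recommendations):

    $ ./statsd_exporter --statsd.event-queue-size=20000 --statsd.event-flush-threshold=2000 --statsd.event-flush-interval=100ms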
## Using Docker
You can deploy this exporter using the [prom/statsd-exporter](https://registry.hub.docker.com/r/prom/statsd-exporter) Docker image.
For example:
```bash
docker pull prom/statsd-exporter
docker run -d -p 9102:9102 -p 9125:9125 -p 9125:9125/udp \
    -v $PWD/statsd_mapping.yml:/tmp/statsd_mapping.yml \
    prom/statsd-exporter --statsd.mapping-config=/tmp/statsd_mapping.yml
```
[travis]: https://travis-ci.org/prometheus/statsd_exporter
[circleci]: https://circleci.com/gh/prometheus/statsd_exporter
[quay]: https://quay.io/repository/prometheus/statsd-exporter
[hub]: https://hub.docker.com/r/prom/statsd-exporter/

View file

@@ -1 +1 @@
0.15.0

View file

@@ -1,444 +1,444 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"reflect"
	"testing"

	"github.com/go-kit/kit/log"

	"github.com/prometheus/statsd_exporter/pkg/event"
	"github.com/prometheus/statsd_exporter/pkg/listener"
)
func TestHandlePacket(t *testing.T) {
	scenarios := []struct {
		name string
		in   string
		out  event.Events
	}{
		{
			name: "empty",
		}, {
			name: "simple counter",
			in:   "foo:2|c",
			out: event.Events{
				&event.CounterEvent{
					CMetricName: "foo",
					CValue:      2,
					CLabels:     map[string]string{},
				},
			},
		}, {
			name: "simple gauge",
			in:   "foo:3|g",
			out: event.Events{
				&event.GaugeEvent{
					GMetricName: "foo",
					GValue:      3,
					GLabels:     map[string]string{},
				},
			},
		}, {
			name: "gauge with sampling",
			in:   "foo:3|g|@0.2",
			out: event.Events{
				&event.GaugeEvent{
					GMetricName: "foo",
					GValue:      3,
					GLabels:     map[string]string{},
				},
			},
		}, {
			name: "gauge decrement",
			in:   "foo:-10|g",
			out: event.Events{
				&event.GaugeEvent{
					GMetricName: "foo",
					GValue:      -10,
					GRelative:   true,
					GLabels:     map[string]string{},
				},
			},
		}, {
			name: "simple timer",
			in:   "foo:200|ms",
			out: event.Events{
				&event.TimerEvent{
					TMetricName: "foo",
					TValue:      200,
					TLabels:     map[string]string{},
				},
			},
		}, {
			name: "simple histogram",
			in:   "foo:200|h",
			out: event.Events{
				&event.TimerEvent{
					TMetricName: "foo",
					TValue:      200,
					TLabels:     map[string]string{},
				},
			},
		}, {
			name: "simple distribution",
			in:   "foo:200|d",
			out: event.Events{
				&event.TimerEvent{
					TMetricName: "foo",
					TValue:      200,
					TLabels:     map[string]string{},
				},
			},
		}, {
			name: "distribution with sampling",
			in:   "foo:0.01|d|@0.2|#tag1:bar,#tag2:baz",
			out: event.Events{
				&event.TimerEvent{
					TMetricName: "foo",
					TValue:      0.01,
					TLabels:     map[string]string{"tag1": "bar", "tag2": "baz"},
				},
				&event.TimerEvent{
					TMetricName: "foo",
					TValue:      0.01,
					TLabels:     map[string]string{"tag1": "bar", "tag2": "baz"},
				},
				&event.TimerEvent{
					TMetricName: "foo",
					TValue:      0.01,
					TLabels:     map[string]string{"tag1": "bar", "tag2": "baz"},
				},
				&event.TimerEvent{
					TMetricName: "foo",
					TValue:      0.01,
					TLabels:     map[string]string{"tag1": "bar", "tag2": "baz"},
				},
				&event.TimerEvent{
					TMetricName: "foo",
					TValue:      0.01,
					TLabels:     map[string]string{"tag1": "bar", "tag2": "baz"},
				},
			},
		}, {
name: "librato tag extension", name: "librato tag extension",
in: "foo#tag1=bar,tag2=baz:100|c", in: "foo#tag1=bar,tag2=baz:100|c",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, CLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
}, },
}, { }, {
name: "librato tag extension with tag keys unsupported by prometheus", name: "librato tag extension with tag keys unsupported by prometheus",
in: "foo#09digits=0,tag.with.dots=1:100|c", in: "foo#09digits=0,tag.with.dots=1:100|c",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"_09digits": "0", "tag_with_dots": "1"}, CLabels: map[string]string{"_09digits": "0", "tag_with_dots": "1"},
}, },
}, },
}, { }, {
name: "influxdb tag extension", name: "influxdb tag extension",
in: "foo,tag1=bar,tag2=baz:100|c", in: "foo,tag1=bar,tag2=baz:100|c",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, CLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
}, },
}, { }, {
name: "influxdb tag extension with tag keys unsupported by prometheus", name: "influxdb tag extension with tag keys unsupported by prometheus",
in: "foo,09digits=0,tag.with.dots=1:100|c", in: "foo,09digits=0,tag.with.dots=1:100|c",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"_09digits": "0", "tag_with_dots": "1"}, CLabels: map[string]string{"_09digits": "0", "tag_with_dots": "1"},
}, },
}, },
}, { }, {
name: "datadog tag extension", name: "datadog tag extension",
in: "foo:100|c|#tag1:bar,tag2:baz", in: "foo:100|c|#tag1:bar,tag2:baz",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, CLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
}, },
}, { }, {
name: "datadog tag extension with # in all keys (as sent by datadog php client)", name: "datadog tag extension with # in all keys (as sent by datadog php client)",
in: "foo:100|c|#tag1:bar,#tag2:baz", in: "foo:100|c|#tag1:bar,#tag2:baz",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, CLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
}, },
}, { }, {
name: "datadog tag extension with tag keys unsupported by prometheus", name: "datadog tag extension with tag keys unsupported by prometheus",
in: "foo:100|c|#09digits:0,tag.with.dots:1", in: "foo:100|c|#09digits:0,tag.with.dots:1",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"_09digits": "0", "tag_with_dots": "1"}, CLabels: map[string]string{"_09digits": "0", "tag_with_dots": "1"},
}, },
}, },
}, { }, {
name: "datadog tag extension with valueless tags: ignored", name: "datadog tag extension with valueless tags: ignored",
in: "foo:100|c|#tag_without_a_value", in: "foo:100|c|#tag_without_a_value",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{}, CLabels: map[string]string{},
}, },
}, },
}, { }, {
name: "datadog tag extension with valueless tags (edge case)", name: "datadog tag extension with valueless tags (edge case)",
in: "foo:100|c|#tag_without_a_value,tag:value", in: "foo:100|c|#tag_without_a_value,tag:value",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"tag": "value"}, CLabels: map[string]string{"tag": "value"},
}, },
}, },
}, { }, {
name: "datadog tag extension with empty tags (edge case)", name: "datadog tag extension with empty tags (edge case)",
in: "foo:100|c|#tag:value,,", in: "foo:100|c|#tag:value,,",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 100, CValue: 100,
CLabels: map[string]string{"tag": "value"}, CLabels: map[string]string{"tag": "value"},
}, },
}, },
}, { }, {
name: "datadog tag extension with sampling", name: "datadog tag extension with sampling",
in: "foo:100|c|@0.1|#tag1:bar,#tag2:baz", in: "foo:100|c|@0.1|#tag1:bar,#tag2:baz",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 1000, CValue: 1000,
CLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, CLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
}, },
}, { }, {
name: "librato/dogstatsd mixed tag styles without sampling", name: "librato/dogstatsd mixed tag styles without sampling",
in: "foo#tag1=foo,tag3=bing:100|c|#tag1:bar,#tag2:baz", in: "foo#tag1=foo,tag3=bing:100|c|#tag1:bar,#tag2:baz",
out: event.Events{}, out: event.Events{},
}, { }, {
name: "influxdb/dogstatsd mixed tag styles without sampling", name: "influxdb/dogstatsd mixed tag styles without sampling",
in: "foo,tag1=foo,tag3=bing:100|c|#tag1:bar,#tag2:baz", in: "foo,tag1=foo,tag3=bing:100|c|#tag1:bar,#tag2:baz",
out: event.Events{}, out: event.Events{},
}, { }, {
name: "mixed tag styles with sampling", name: "mixed tag styles with sampling",
in: "foo#tag1=foo,tag3=bing:100|c|@0.1|#tag1:bar,#tag2:baz", in: "foo#tag1=foo,tag3=bing:100|c|@0.1|#tag1:bar,#tag2:baz",
out: event.Events{}, out: event.Events{},
}, { }, {
name: "histogram with sampling", name: "histogram with sampling",
in: "foo:0.01|h|@0.2|#tag1:bar,#tag2:baz", in: "foo:0.01|h|@0.2|#tag1:bar,#tag2:baz",
out: event.Events{ out: event.Events{
&event.TimerEvent{ &event.TimerEvent{
TMetricName: "foo", TMetricName: "foo",
TValue: 0.01, TValue: 0.01,
TLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, TLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
&event.TimerEvent{ &event.TimerEvent{
TMetricName: "foo", TMetricName: "foo",
TValue: 0.01, TValue: 0.01,
TLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, TLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
&event.TimerEvent{ &event.TimerEvent{
TMetricName: "foo", TMetricName: "foo",
TValue: 0.01, TValue: 0.01,
TLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, TLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
&event.TimerEvent{ &event.TimerEvent{
TMetricName: "foo", TMetricName: "foo",
TValue: 0.01, TValue: 0.01,
TLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, TLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
&event.TimerEvent{ &event.TimerEvent{
TMetricName: "foo", TMetricName: "foo",
TValue: 0.01, TValue: 0.01,
TLabels: map[string]string{"tag1": "bar", "tag2": "baz"}, TLabels: map[string]string{"tag1": "bar", "tag2": "baz"},
}, },
}, },
}, { }, {
name: "datadog tag extension with multiple colons", name: "datadog tag extension with multiple colons",
in: "foo:100|c|@0.1|#tag1:foo:bar", in: "foo:100|c|@0.1|#tag1:foo:bar",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 1000, CValue: 1000,
CLabels: map[string]string{"tag1": "foo:bar"}, CLabels: map[string]string{"tag1": "foo:bar"},
}, },
}, },
}, { }, {
name: "datadog tag extension with invalid utf8 tag values", name: "datadog tag extension with invalid utf8 tag values",
in: "foo:100|c|@0.1|#tag:\xc3\x28invalid", in: "foo:100|c|@0.1|#tag:\xc3\x28invalid",
}, { }, {
name: "datadog tag extension with both valid and invalid utf8 tag values", name: "datadog tag extension with both valid and invalid utf8 tag values",
in: "foo:100|c|@0.1|#tag1:valid,tag2:\xc3\x28invalid", in: "foo:100|c|@0.1|#tag1:valid,tag2:\xc3\x28invalid",
}, { }, {
name: "multiple metrics with invalid datadog utf8 tag values", name: "multiple metrics with invalid datadog utf8 tag values",
in: "foo:200|c|#tag:value\nfoo:300|c|#tag:\xc3\x28invalid", in: "foo:200|c|#tag:value\nfoo:300|c|#tag:\xc3\x28invalid",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 200, CValue: 200,
CLabels: map[string]string{"tag": "value"}, CLabels: map[string]string{"tag": "value"},
}, },
}, },
}, { }, {
name: "combined multiline metrics", name: "combined multiline metrics",
in: "foo:200|ms:300|ms:5|c|@0.1:6|g\nbar:1|c:5|ms", in: "foo:200|ms:300|ms:5|c|@0.1:6|g\nbar:1|c:5|ms",
out: event.Events{ out: event.Events{
&event.TimerEvent{ &event.TimerEvent{
TMetricName: "foo", TMetricName: "foo",
TValue: 200, TValue: 200,
TLabels: map[string]string{}, TLabels: map[string]string{},
}, },
&event.TimerEvent{ &event.TimerEvent{
TMetricName: "foo", TMetricName: "foo",
TValue: 300, TValue: 300,
TLabels: map[string]string{}, TLabels: map[string]string{},
}, },
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 50, CValue: 50,
CLabels: map[string]string{}, CLabels: map[string]string{},
}, },
&event.GaugeEvent{ &event.GaugeEvent{
GMetricName: "foo", GMetricName: "foo",
GValue: 6, GValue: 6,
GLabels: map[string]string{}, GLabels: map[string]string{},
}, },
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "bar", CMetricName: "bar",
CValue: 1, CValue: 1,
CLabels: map[string]string{}, CLabels: map[string]string{},
}, },
&event.TimerEvent{ &event.TimerEvent{
TMetricName: "bar", TMetricName: "bar",
TValue: 5, TValue: 5,
TLabels: map[string]string{}, TLabels: map[string]string{},
}, },
}, },
}, { }, {
name: "timings with sampling factor", name: "timings with sampling factor",
in: "foo.timing:0.5|ms|@0.1", in: "foo.timing:0.5|ms|@0.1",
out: event.Events{ out: event.Events{
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
&event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}}, &event.TimerEvent{TMetricName: "foo.timing", TValue: 0.5, TLabels: map[string]string{}},
}, },
}, { }, {
name: "bad line", name: "bad line",
in: "foo", in: "foo",
}, { }, {
name: "bad component", name: "bad component",
in: "foo:1", in: "foo:1",
}, { }, {
name: "bad value", name: "bad value",
in: "foo:1o|c", in: "foo:1o|c",
}, { }, {
name: "illegal sampling factor", name: "illegal sampling factor",
in: "foo:1|c|@bar", in: "foo:1|c|@bar",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 1, CValue: 1,
CLabels: map[string]string{}, CLabels: map[string]string{},
}, },
}, },
}, { }, {
name: "zero sampling factor", name: "zero sampling factor",
in: "foo:2|c|@0", in: "foo:2|c|@0",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "foo", CMetricName: "foo",
CValue: 2, CValue: 2,
CLabels: map[string]string{}, CLabels: map[string]string{},
}, },
}, },
}, { }, {
name: "illegal stat type", name: "illegal stat type",
in: "foo:2|t", in: "foo:2|t",
}, },
{ {
name: "empty metric name", name: "empty metric name",
in: ":100|ms", in: ":100|ms",
}, },
{ {
name: "empty component", name: "empty component",
in: "foo:1|c|", in: "foo:1|c|",
}, },
{ {
name: "invalid utf8", name: "invalid utf8",
in: "invalid\xc3\x28utf8:1|c", in: "invalid\xc3\x28utf8:1|c",
}, },
{ {
name: "some invalid utf8", name: "some invalid utf8",
in: "valid_utf8:1|c\ninvalid\xc3\x28utf8:1|c", in: "valid_utf8:1|c\ninvalid\xc3\x28utf8:1|c",
out: event.Events{ out: event.Events{
&event.CounterEvent{ &event.CounterEvent{
CMetricName: "valid_utf8", CMetricName: "valid_utf8",
CValue: 1, CValue: 1,
CLabels: map[string]string{}, CLabels: map[string]string{},
}, },
}, },
}, },
} }
for k, l := range []statsDPacketHandler{&listener.StatsDUDPListener{nil, nil, log.NewNopLogger()}, &mockStatsDTCPListener{listener.StatsDTCPListener{nil, nil, log.NewNopLogger()}, log.NewNopLogger()}} { for k, l := range []statsDPacketHandler{&listener.StatsDUDPListener{nil, nil, log.NewNopLogger()}, &mockStatsDTCPListener{listener.StatsDTCPListener{nil, nil, log.NewNopLogger()}, log.NewNopLogger()}} {
events := make(chan event.Events, 32) events := make(chan event.Events, 32)
l.SetEventHandler(&event.UnbufferedEventHandler{C: events}) l.SetEventHandler(&event.UnbufferedEventHandler{C: events})
for i, scenario := range scenarios { for i, scenario := range scenarios {
l.HandlePacket([]byte(scenario.in), udpPackets, linesReceived, eventsFlushed, *sampleErrors, samplesReceived, tagErrors, tagsReceived) l.HandlePacket([]byte(scenario.in), udpPackets, linesReceived, eventsFlushed, *sampleErrors, samplesReceived, tagErrors, tagsReceived)
le := len(events) le := len(events)
// Flatten actual events. // Flatten actual events.
actual := event.Events{} actual := event.Events{}
for i := 0; i < le; i++ { for i := 0; i < le; i++ {
actual = append(actual, <-events...) actual = append(actual, <-events...)
} }
if len(actual) != len(scenario.out) { if len(actual) != len(scenario.out) {
t.Fatalf("%d.%d. Expected %d events, got %d in scenario '%s'", k, i, len(scenario.out), len(actual), scenario.name) t.Fatalf("%d.%d. Expected %d events, got %d in scenario '%s'", k, i, len(scenario.out), len(actual), scenario.name)
} }
for j, expected := range scenario.out { for j, expected := range scenario.out {
if !reflect.DeepEqual(&expected, &actual[j]) { if !reflect.DeepEqual(&expected, &actual[j]) {
t.Fatalf("%d.%d.%d. Expected %#v, got %#v in scenario '%s'", k, i, j, expected, actual[j], scenario.name) t.Fatalf("%d.%d.%d. Expected %#v, got %#v in scenario '%s'", k, i, j, expected, actual[j], scenario.name)
} }
} }
} }
} }
} }
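The sampling scenarios above all encode the same rule: a sample rate `@r` scales a counter by `1/r` (so `foo:100|c|@0.1` becomes a counter event of 1000) and replays a timer event `1/r` times (so `foo.timing:0.5|ms|@0.1` yields ten events of 0.5), while an illegal (`@bar`) or zero rate leaves the value untouched. A minimal, self-contained sketch of that arithmetic, assuming nothing beyond what the scenarios assert (this is not the exporter's actual parser):

```go
package main

import "fmt"

// scaleCounter applies the inverse of the sample rate, falling back to the
// raw value when the rate is unusable, mirroring the "illegal sampling
// factor" and "zero sampling factor" scenarios.
func scaleCounter(value, rate float64) float64 {
	if rate <= 0 || rate > 1 {
		return value
	}
	return value / rate
}

func main() {
	fmt.Println(scaleCounter(100, 0.1)) // 1000, as in "datadog tag extension with sampling"
	fmt.Println(scaleCounter(2, 0))     // 2, as in "zero sampling factor"
}
```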


@@ -1,81 +1,81 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"testing"
	"time"

	"github.com/prometheus/statsd_exporter/pkg/clock"
	"github.com/prometheus/statsd_exporter/pkg/event"
)

func TestEventThresholdFlush(t *testing.T) {
	c := make(chan event.Events, 100)
	// We're not going to flush during this test, so the duration doesn't matter.
	eq := event.NewEventQueue(c, 5, time.Second, eventsFlushed)
	e := make(event.Events, 13)
	go func() {
		eq.Queue(e, &eventsFlushed)
	}()

	batch := <-c
	if len(batch) != 5 {
		t.Fatalf("Expected event batch to be 5 elements, but got %v", len(batch))
	}
	batch = <-c
	if len(batch) != 5 {
		t.Fatalf("Expected event batch to be 5 elements, but got %v", len(batch))
	}
	batch = <-c
	if len(batch) != 3 {
		t.Fatalf("Expected event batch to be 3 elements, but got %v", len(batch))
	}
}

func TestEventIntervalFlush(t *testing.T) {
	// Mock a time.NewTicker
	tickerCh := make(chan time.Time)
	clock.ClockInstance = &clock.Clock{
		TickerCh: tickerCh,
	}
	clock.ClockInstance.Instant = time.Unix(0, 0)

	c := make(chan event.Events, 100)
	eq := event.NewEventQueue(c, 1000, time.Second*1000, eventsFlushed)
	e := make(event.Events, 10)
	eq.Queue(e, &eventsFlushed)

	if eq.Len() != 10 {
		t.Fatal("Expected 10 events to be queued, but got", eq.Len())
	}
	if len(eq.C) != 0 {
		t.Fatal("Expected 0 events in the event channel, but got", len(eq.C))
	}

	// Tick time forward to trigger a flush
	clock.ClockInstance.Instant = time.Unix(10000, 0)
	clock.ClockInstance.TickerCh <- time.Unix(10000, 0)

	events := <-eq.C
	if eq.Len() != 0 {
		t.Fatal("Expected 0 events to be queued, but got", eq.Len())
	}
	if len(events) != 10 {
		t.Fatal("Expected 10 events in the event channel, but got", len(events))
	}
}
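For code that feeds the queue directly, here is a compact usage sketch mirroring `TestEventThresholdFlush`; it assumes only the `NewEventQueue` and `Queue` signatures visible above, and the counter name is hypothetical:

```go
package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"

	"github.com/prometheus/statsd_exporter/pkg/event"
)

func main() {
	flushed := prometheus.NewCounter(prometheus.CounterOpts{Name: "example_flushes_total"})
	c := make(chan event.Events, 100)
	// Flush every 5 events or every second, whichever comes first.
	eq := event.NewEventQueue(c, 5, time.Second, flushed)
	go eq.Queue(make(event.Events, 13), &flushed)
	for i := 0; i < 3; i++ {
		fmt.Println(len(<-c)) // 5, 5, 3: the same batches the test asserts
	}
}
```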


@@ -1,157 +1,157 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"fmt"
	"testing"

	"github.com/go-kit/kit/log"

	"github.com/prometheus/statsd_exporter/pkg/event"
	"github.com/prometheus/statsd_exporter/pkg/exporter"
	"github.com/prometheus/statsd_exporter/pkg/listener"
	"github.com/prometheus/statsd_exporter/pkg/mapper"
)

func benchmarkUDPListener(times int, b *testing.B) {
	input := []string{
		"foo1:2|c",
		"foo2:3|g",
		"foo3:200|ms",
		"foo4:100|c|#tag1:bar,tag2:baz",
		"foo5:100|c|#tag1:bar,#tag2:baz",
		"foo6:100|c|#09digits:0,tag.with.dots:1",
		"foo10:100|c|@0.1|#tag1:bar,#tag2:baz",
		"foo11:100|c|@0.1|#tag1:foo:bar",
		"foo15:200|ms:300|ms:5|c|@0.1:6|g\nfoo15a:1|c:5|ms",
		"some_very_useful_metrics_with_quite_a_log_name:13|c",
	}
	bytesInput := make([]string, len(input)*times)
	for run := 0; run < times; run++ {
		for i := 0; i < len(input); i++ {
			bytesInput[run*len(input)+i] = fmt.Sprintf("run%d%s", run, input[i])
		}
	}
	for n := 0; n < b.N; n++ {
		// there are more events than input lines, need bigger buffer
		events := make(chan event.Events, len(bytesInput)*times*2)
		l := listener.StatsDUDPListener{EventHandler: &event.UnbufferedEventHandler{C: events}}
		for i := 0; i < times; i++ {
			for _, line := range bytesInput {
				l.HandlePacket([]byte(line), udpPackets, linesReceived, eventsFlushed, *sampleErrors, samplesReceived, tagErrors, tagsReceived)
			}
		}
	}
}

func BenchmarkUDPListener1(b *testing.B) {
	benchmarkUDPListener(1, b)
}
func BenchmarkUDPListener5(b *testing.B) {
	benchmarkUDPListener(5, b)
}
func BenchmarkUDPListener50(b *testing.B) {
	benchmarkUDPListener(50, b)
}

func BenchmarkExporterListener(b *testing.B) {
	events := event.Events{
		&event.CounterEvent{ // simple counter
			CMetricName: "counter",
			CValue: 2,
		},
		&event.GaugeEvent{ // simple gauge
			GMetricName: "gauge",
			GValue: 10,
		},
		&event.TimerEvent{ // simple timer
			TMetricName: "timer",
			TValue: 200,
		},
		&event.TimerEvent{ // simple histogram
			TMetricName: "histogram.test",
			TValue: 200,
		},
		&event.CounterEvent{ // simple_tags
			CMetricName: "simple_tags",
			CValue: 100,
			CLabels: map[string]string{
				"alpha": "bar",
				"bravo": "baz",
			},
		},
		&event.CounterEvent{ // slightly different tags
			CMetricName: "simple_tags",
			CValue: 100,
			CLabels: map[string]string{
				"alpha": "bar",
				"charlie": "baz",
			},
		},
		&event.CounterEvent{ // and even more different tags
			CMetricName: "simple_tags",
			CValue: 100,
			CLabels: map[string]string{
				"alpha": "bar",
				"bravo": "baz",
				"golf": "looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong",
			},
		},
		&event.CounterEvent{ // datadog tag extension with complex tags
			CMetricName: "foo",
			CValue: 100,
			CLabels: map[string]string{
				"action": "test",
				"application": "testapp",
				"application_component": "testcomp",
				"application_role": "test_role",
				"category": "category",
				"controller": "controller",
				"deployed_to": "production",
				"kube_deployment": "deploy",
				"kube_namespace": "kube-production",
				"method": "get",
				"version": "5.2.8374",
				"status": "200",
				"status_range": "2xx",
			},
		},
	}
	config := `
mappings:
- match: histogram.test
  timer_type: histogram
  name: "histogram_test"
`
	testMapper := &mapper.MetricMapper{}
	err := testMapper.InitFromYAMLString(config, 0)
	if err != nil {
		b.Fatalf("Config load error: %s %s", config, err)
	}
	ex := exporter.NewExporter(testMapper, log.NewNopLogger())
	for i := 0; i < b.N; i++ {
		ec := make(chan event.Events, 1000)
		go func() {
			for i := 0; i < 1000; i++ {
				ec <- events
			}
			close(ec)
		}()
		ex.Listen(ec, eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
	}
}
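Both benchmark families run with the standard Go tooling, for example `go test -bench=UDPListener -benchmem` for the listener path and `go test -bench=ExporterListener` for the mapping and export path; `-benchmem` adds per-operation allocation figures, which are the interesting numbers on these hot paths.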

File diff suppressed because it is too large

go.mod

@@ -1,15 +1,15 @@
module github.com/prometheus/statsd_exporter

require (
	github.com/go-kit/kit v0.9.0
	github.com/hashicorp/golang-lru v0.5.1
	github.com/kr/pretty v0.1.0 // indirect
	github.com/prometheus/client_golang v1.0.0
	github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90
	github.com/prometheus/common v0.7.0
	gopkg.in/alecthomas/kingpin.v2 v2.2.6
	gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect
	gopkg.in/yaml.v2 v2.2.2
)

go 1.13

main.go

@@ -1,414 +1,414 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"bufio"
	"net"
	"net/http"
	_ "net/http/pprof"
	"os"
	"os/signal"
	"strconv"
	"syscall"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"github.com/prometheus/common/promlog"
	"github.com/prometheus/common/promlog/flag"
	"github.com/prometheus/common/version"
	"gopkg.in/alecthomas/kingpin.v2"

	"github.com/prometheus/statsd_exporter/pkg/event"
	"github.com/prometheus/statsd_exporter/pkg/exporter"
	"github.com/prometheus/statsd_exporter/pkg/listener"
	"github.com/prometheus/statsd_exporter/pkg/mapper"
	"github.com/prometheus/statsd_exporter/pkg/util"
)

const (
	defaultHelp = "Metric autogenerated by statsd_exporter."
	regErrF = "Failed to update metric"
)

var (
	eventStats = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "statsd_exporter_events_total",
			Help: "The total number of StatsD events seen.",
		},
		[]string{"type"},
	)
	eventsFlushed = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_event_queue_flushed_total",
			Help: "Number of times events were flushed to exporter",
		},
	)
	eventsUnmapped = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "statsd_exporter_events_unmapped_total",
		Help: "The total number of StatsD events no mapping was found for.",
	})
	udpPackets = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_udp_packets_total",
			Help: "The total number of StatsD packets received over UDP.",
		},
	)
	tcpConnections = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_tcp_connections_total",
			Help: "The total number of TCP connections handled.",
		},
	)
	tcpErrors = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_tcp_connection_errors_total",
			Help: "The number of errors encountered reading from TCP.",
		},
	)
	tcpLineTooLong = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_tcp_too_long_lines_total",
			Help: "The number of lines discarded due to being too long.",
		},
	)
	unixgramPackets = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_unixgram_packets_total",
			Help: "The total number of StatsD packets received over Unixgram.",
		},
	)
	linesReceived = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_lines_total",
			Help: "The total number of StatsD lines received.",
		},
	)
	samplesReceived = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_samples_total",
			Help: "The total number of StatsD samples received.",
		},
	)
	sampleErrors = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "statsd_exporter_sample_errors_total",
			Help: "The total number of errors parsing StatsD samples.",
		},
		[]string{"reason"},
	)
	tagsReceived = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_tags_total",
			Help: "The total number of DogStatsD tags processed.",
		},
	)
	tagErrors = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_exporter_tag_errors_total",
			Help: "The number of errors parsing DogStatsD tags.",
		},
	)
	configLoads = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "statsd_exporter_config_reloads_total",
			Help: "The number of configuration reloads.",
		},
		[]string{"outcome"},
	)
	mappingsCount = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "statsd_exporter_loaded_mappings",
		Help: "The current number of configured metric mappings.",
	})
	conflictingEventStats = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "statsd_exporter_events_conflict_total",
			Help: "The total number of StatsD events with conflicting names.",
		},
		[]string{"type"},
	)
	errorEventStats = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "statsd_exporter_events_error_total",
			Help: "The total number of StatsD events discarded due to errors.",
		},
		[]string{"reason"},
	)
	eventsActions = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "statsd_exporter_events_actions_total",
			Help: "The total number of StatsD events by action.",
		},
		[]string{"action"},
	)
	metricsCount = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "statsd_exporter_metrics_total",
			Help: "The total number of metrics.",
		},
		[]string{"type"},
	)
)

func init() {
	prometheus.MustRegister(version.NewCollector("statsd_exporter"))
	prometheus.MustRegister(eventStats)
	prometheus.MustRegister(eventsFlushed)
	prometheus.MustRegister(eventsUnmapped)
	prometheus.MustRegister(udpPackets)
	prometheus.MustRegister(tcpConnections)
	prometheus.MustRegister(tcpErrors)
	prometheus.MustRegister(tcpLineTooLong)
	prometheus.MustRegister(unixgramPackets)
	prometheus.MustRegister(linesReceived)
	prometheus.MustRegister(samplesReceived)
	prometheus.MustRegister(sampleErrors)
	prometheus.MustRegister(tagsReceived)
	prometheus.MustRegister(tagErrors)
	prometheus.MustRegister(configLoads)
	prometheus.MustRegister(mappingsCount)
	prometheus.MustRegister(conflictingEventStats)
	prometheus.MustRegister(errorEventStats)
	prometheus.MustRegister(eventsActions)
	prometheus.MustRegister(metricsCount)
}

// uncheckedCollector wraps a Collector but its Describe method yields no Desc.
// This allows incoming metrics to have inconsistent label sets
type uncheckedCollector struct {
	c prometheus.Collector
}

func (u uncheckedCollector) Describe(_ chan<- *prometheus.Desc) {}
func (u uncheckedCollector) Collect(c chan<- prometheus.Metric) {
	u.c.Collect(c)
}
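// Illustrative use of the wrapper above (a hypothetical sketch, not code from
// this file): registering a collector as unchecked lets it expose metrics
// whose label sets vary between scrapes, which the registry would otherwise
// reject as inconsistent:
//
//	cv := prometheus.NewCounterVec(prometheus.CounterOpts{Name: "statsd_example"}, []string{"a"})
//	prometheus.MustRegister(uncheckedCollector{cv})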
func serveHTTP(listenAddress, metricsEndpoint string, logger log.Logger) {
	http.Handle(metricsEndpoint, promhttp.Handler())
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`<html>
			<head><title>StatsD Exporter</title></head>
			<body>
			<h1>StatsD Exporter</h1>
			<p><a href="` + metricsEndpoint + `">Metrics</a></p>
			</body>
			</html>`))
	})
	level.Error(logger).Log("msg", http.ListenAndServe(listenAddress, nil))
	os.Exit(1)
}

func configReloader(fileName string, mapper *mapper.MetricMapper, cacheSize int, logger log.Logger, option mapper.CacheOption) {
	signals := make(chan os.Signal, 1)
	signal.Notify(signals, syscall.SIGHUP)
	for s := range signals {
		if fileName == "" {
			level.Warn(logger).Log("msg", "Received signal but no mapping config to reload", "signal", s)
			continue
		}
		level.Info(logger).Log("msg", "Received signal, attempting reload", "signal", s)
		err := mapper.InitFromFile(fileName, cacheSize, option)
		if err != nil {
			level.Info(logger).Log("msg", "Error reloading config", "error", err)
			configLoads.WithLabelValues("failure").Inc()
		} else {
			level.Info(logger).Log("msg", "Config reloaded successfully")
			configLoads.WithLabelValues("success").Inc()
		}
	}
}

func dumpFSM(mapper *mapper.MetricMapper, dumpFilename string, logger log.Logger) error {
	f, err := os.Create(dumpFilename)
	if err != nil {
		return err
	}
	level.Info(logger).Log("msg", "Start dumping FSM", "file_name", dumpFilename)
	w := bufio.NewWriter(f)
	mapper.FSM.DumpFSM(w)
	w.Flush()
	f.Close()
	level.Info(logger).Log("msg", "Finish dumping FSM")
	return nil
}

func main() {
	var (
		listenAddress = kingpin.Flag("web.listen-address", "The address on which to expose the web interface and generated Prometheus metrics.").Default(":9102").String()
		metricsEndpoint = kingpin.Flag("web.telemetry-path", "Path under which to expose metrics.").Default("/metrics").String()
		statsdListenUDP = kingpin.Flag("statsd.listen-udp", "The UDP address on which to receive statsd metric lines. \"\" disables it.").Default(":9125").String()
		statsdListenTCP = kingpin.Flag("statsd.listen-tcp", "The TCP address on which to receive statsd metric lines. \"\" disables it.").Default(":9125").String()
		statsdListenUnixgram = kingpin.Flag("statsd.listen-unixgram", "The Unixgram socket path to receive statsd metric lines in datagram. \"\" disables it.").Default("").String()
		// not using Int here because flag displays default in decimal, 0755 will show as 493
		statsdUnixSocketMode = kingpin.Flag("statsd.unixsocket-mode", "The permission mode of the unix socket.").Default("755").String()
		mappingConfig = kingpin.Flag("statsd.mapping-config", "Metric mapping configuration file name.").String()
		readBuffer = kingpin.Flag("statsd.read-buffer", "Size (in bytes) of the operating system's transmit read buffer associated with the UDP or Unixgram connection. Please make sure the kernel parameter net.core.rmem_max is set to a value greater than the value specified.").Int()
		cacheSize = kingpin.Flag("statsd.cache-size", "Maximum size of your metric mapping cache. Relies on least recently used replacement policy if max size is reached.").Default("1000").Int()
		cacheType = kingpin.Flag("statsd.cache-type", "Metric mapping cache type. Valid options are \"lru\" and \"random\"").Default("lru").Enum("lru", "random")
		eventQueueSize = kingpin.Flag("statsd.event-queue-size", "Size of internal queue for processing events").Default("10000").Int()
		eventFlushThreshold = kingpin.Flag("statsd.event-flush-threshold", "Number of events to hold in queue before flushing").Default("1000").Int()
		eventFlushInterval = kingpin.Flag("statsd.event-flush-interval", "Maximum time between event flushes").Default("200ms").Duration()
		dumpFSMPath = kingpin.Flag("debug.dump-fsm", "The path to dump internal FSM generated for glob matching as Dot file.").Default("").String()
	)

	promlogConfig := &promlog.Config{}
	flag.AddFlags(kingpin.CommandLine, promlogConfig)
	kingpin.Version(version.Print("statsd_exporter"))
	kingpin.HelpFlag.Short('h')
	kingpin.Parse()
	logger := promlog.New(promlogConfig)

	cacheOption := mapper.WithCacheType(*cacheType)

	if *statsdListenUDP == "" && *statsdListenTCP == "" && *statsdListenUnixgram == "" {
		level.Error(logger).Log("At least one of UDP/TCP/Unixgram listeners must be specified.")
		os.Exit(1)
	}

	level.Info(logger).Log("msg", "Starting StatsD -> Prometheus Exporter", "version", version.Info())
	level.Info(logger).Log("msg", "Build context", "context", version.BuildContext())
	level.Info(logger).Log("msg", "Accepting StatsD Traffic", "udp", *statsdListenUDP, "tcp", *statsdListenTCP, "unixgram", *statsdListenUnixgram)
	level.Info(logger).Log("msg", "Accepting Prometheus Requests", "addr", *listenAddress)

	go serveHTTP(*listenAddress, *metricsEndpoint, logger)

	events := make(chan event.Events, *eventQueueSize)
	defer close(events)
	eventQueue := event.NewEventQueue(events, *eventFlushThreshold, *eventFlushInterval, eventsFlushed)

	if *statsdListenUDP != "" {
		udpListenAddr, err := util.UDPAddrFromString(*statsdListenUDP)
		if err != nil {
			level.Error(logger).Log("msg", "invalid UDP listen address", "address", *statsdListenUDP, "error", err)
			os.Exit(1)
		}
		uconn, err := net.ListenUDP("udp", udpListenAddr)
		if err != nil {
			level.Error(logger).Log("msg", "failed to start UDP listener", "error", err)
			os.Exit(1)
		}

		if *readBuffer != 0 {
			err = uconn.SetReadBuffer(*readBuffer)
			if err != nil {
				level.Error(logger).Log("msg", "error setting UDP read buffer", "error", err)
				os.Exit(1)
			}
		}

		ul := &listener.StatsDUDPListener{Conn: uconn, EventHandler: eventQueue, Logger: logger}
		go ul.Listen(udpPackets, linesReceived, eventsFlushed, *sampleErrors, samplesReceived, tagErrors, tagsReceived)
	}

	if *statsdListenTCP != "" {
		tcpListenAddr, err := util.TCPAddrFromString(*statsdListenTCP)
		if err != nil {
			level.Error(logger).Log("msg", "invalid TCP listen address", "address", *statsdListenTCP, "error", err)
			os.Exit(1)
		}
		tconn, err := net.ListenTCP("tcp", tcpListenAddr)
		if err != nil {
			level.Error(logger).Log("msg", err)
			os.Exit(1)
		}
		defer tconn.Close()

		tl := &listener.StatsDTCPListener{Conn: tconn, EventHandler: eventQueue, Logger: logger}
		go tl.Listen(linesReceived, eventsFlushed, tcpConnections, tcpErrors, tcpLineTooLong, *sampleErrors, samplesReceived, tagErrors, tagsReceived)
	}

	if *statsdListenUnixgram != "" {
		var err error
		if _, err = os.Stat(*statsdListenUnixgram); !os.IsNotExist(err) {
			level.Error(logger).Log("msg", "Unixgram socket already exists", "socket_name", *statsdListenUnixgram)
			os.Exit(1)
		}
		uxgconn, err := net.ListenUnixgram("unixgram", &net.UnixAddr{
			Net: "unixgram",
			Name: *statsdListenUnixgram,
		})
		if err != nil {
			level.Error(logger).Log("msg", "failed to listen on Unixgram socket", "error", err)
			os.Exit(1)
		}
		defer uxgconn.Close()

		if *readBuffer != 0 {
			err = uxgconn.SetReadBuffer(*readBuffer)
			if err != nil {
				level.Error(logger).Log("msg", "error setting Unixgram read buffer", "error", err)
				os.Exit(1)
			}
		}

		ul := &listener.StatsDUnixgramListener{Conn: uxgconn, EventHandler: eventQueue, Logger: logger}
		go ul.Listen(unixgramPackets, linesReceived, eventsFlushed, *sampleErrors, samplesReceived, tagErrors, tagsReceived)

		// if it's an abstract unix domain socket, it won't exist on fs
		// so we can't chmod it either
		if _, err := os.Stat(*statsdListenUnixgram); !os.IsNotExist(err) {
			defer os.Remove(*statsdListenUnixgram)
			// convert the string to octal
			perm, err := strconv.ParseInt("0"+string(*statsdUnixSocketMode), 8, 32)
			if err != nil {
				level.Warn(logger).Log("Bad permission %s: %v, ignoring\n", *statsdUnixSocketMode, err)
			} else {
				err = os.Chmod(*statsdListenUnixgram, os.FileMode(perm))
				if err != nil {
					level.Warn(logger).Log("Failed to change unixgram socket permission: %v", err)
				}
			}
		}
	}

	mapper := &mapper.MetricMapper{MappingsCount: mappingsCount}
	if *mappingConfig != "" {
		err := mapper.InitFromFile(*mappingConfig, *cacheSize, cacheOption)
		if err != nil {
			level.Error(logger).Log("msg", "error loading config", "error", err)
			os.Exit(1)
		}
		if *dumpFSMPath != "" {
			err := dumpFSM(mapper, *dumpFSMPath, logger)
			if err != nil {
				level.Error(logger).Log("msg", "error dumping FSM", "error", err)
				// Failure to dump the FSM is an error (the user asked for it and it
				// didn't happen) but not fatal (the exporter is fully functional
				// afterwards).
			}
		}
	} else {
		mapper.InitCache(*cacheSize, cacheOption)
	}

	go configReloader(*mappingConfig, mapper, *cacheSize, logger, cacheOption)

	exporter := exporter.NewExporter(mapper, logger)

	signals := make(chan os.Signal, 1)
	signal.Notify(signals, os.Interrupt, syscall.SIGTERM)
	go exporter.Listen(events, eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)

	<-signals
}
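As `configReloader` above implies, a running exporter re-reads the file given by `--statsd.mapping-config` when it receives SIGHUP (for example via `kill -HUP <pid>`); every attempt is counted in `statsd_exporter_config_reloads_total` with `outcome="success"` or `outcome="failure"`.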


@@ -1,41 +1,41 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package clock

import (
    "time"
)

var ClockInstance *Clock

type Clock struct {
    Instant  time.Time
    TickerCh chan time.Time
}

func Now() time.Time {
    if ClockInstance == nil {
        return time.Now()
    }
    return ClockInstance.Instant
}

func NewTicker(d time.Duration) *time.Ticker {
    if ClockInstance == nil || ClockInstance.TickerCh == nil {
        return time.NewTicker(d)
    }
    return &time.Ticker{
        C: ClockInstance.TickerCh,
    }
}
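The clock package above is a small seam for testing: when `ClockInstance` is set, `Now` and `NewTicker` read from it instead of the real clock, so tests can fire ticks deterministically. A minimal sketch of a test driving one tick by hand:

```go
package clock_test

import (
    "testing"
    "time"

    "github.com/prometheus/statsd_exporter/pkg/clock"
)

func TestManualTick(t *testing.T) {
    clock.ClockInstance = &clock.Clock{
        Instant:  time.Unix(0, 0),
        TickerCh: make(chan time.Time, 1),
    }
    defer func() { clock.ClockInstance = nil }()

    ticker := clock.NewTicker(time.Second)          // stubbed: C is the shared channel
    clock.ClockInstance.TickerCh <- time.Unix(1, 0) // fire one tick by hand
    if got := <-ticker.C; !got.Equal(time.Unix(1, 0)) {
        t.Fatalf("unexpected tick time: %v", got)
    }
}
```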

View file

@ -1,133 +1,133 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
    "sync"
    "time"

    "github.com/prometheus/statsd_exporter/pkg/clock"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
)

type Event interface {
    MetricName() string
    Value() float64
    Labels() map[string]string
    MetricType() mapper.MetricType
}

type CounterEvent struct {
    metricName string
    value      float64
    labels     map[string]string
}

func (c *CounterEvent) MetricName() string            { return c.metricName }
func (c *CounterEvent) Value() float64                { return c.value }
func (c *CounterEvent) Labels() map[string]string     { return c.labels }
func (c *CounterEvent) MetricType() mapper.MetricType { return mapper.MetricTypeCounter }

type GaugeEvent struct {
    metricName string
    value      float64
    relative   bool
    labels     map[string]string
}

func (g *GaugeEvent) MetricName() string            { return g.metricName }
func (g *GaugeEvent) Value() float64                { return g.value }
func (c *GaugeEvent) Labels() map[string]string     { return c.labels }
func (c *GaugeEvent) MetricType() mapper.MetricType { return mapper.MetricTypeGauge }

type TimerEvent struct {
    metricName string
    value      float64
    labels     map[string]string
}

func (t *TimerEvent) MetricName() string            { return t.metricName }
func (t *TimerEvent) Value() float64                { return t.value }
func (c *TimerEvent) Labels() map[string]string     { return c.labels }
func (c *TimerEvent) MetricType() mapper.MetricType { return mapper.MetricTypeTimer }

type Events []Event

type eventQueue struct {
    c              chan Events
    q              Events
    m              sync.Mutex
    flushThreshold int
    flushTicker    *time.Ticker
}

type eventHandler interface {
    queue(event Events)
}

func newEventQueue(c chan Events, flushThreshold int, flushInterval time.Duration) *eventQueue {
    ticker := clock.NewTicker(flushInterval)
    eq := &eventQueue{
        c:              c,
        flushThreshold: flushThreshold,
        flushTicker:    ticker,
        q:              make([]Event, 0, flushThreshold),
    }
    go func() {
        for {
            <-ticker.C
            eq.flush()
        }
    }()
    return eq
}

func (eq *eventQueue) queue(events Events) {
    eq.m.Lock()
    defer eq.m.Unlock()
    for _, e := range events {
        eq.q = append(eq.q, e)
        if len(eq.q) >= eq.flushThreshold {
            eq.flushUnlocked()
        }
    }
}

func (eq *eventQueue) flush() {
    eq.m.Lock()
    defer eq.m.Unlock()
    eq.flushUnlocked()
}

func (eq *eventQueue) flushUnlocked() {
    eq.c <- eq.q
    eq.q = make([]Event, 0, cap(eq.q))
    eventsFlushed.Inc()
}

func (eq *eventQueue) len() int {
    eq.m.Lock()
    defer eq.m.Unlock()
    return len(eq.q)
}

type unbufferedEventHandler struct {
    c chan Events
}

func (ueh *unbufferedEventHandler) queue(events Events) {
    ueh.c <- events
}

View file

@ -1,134 +1,134 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package event

import (
    "sync"
    "time"

    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/clock"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
)

type Event interface {
    MetricName() string
    Value() float64
    Labels() map[string]string
    MetricType() mapper.MetricType
}

type CounterEvent struct {
    CMetricName string
    CValue      float64
    CLabels     map[string]string
}

func (c *CounterEvent) MetricName() string            { return c.CMetricName }
func (c *CounterEvent) Value() float64                { return c.CValue }
func (c *CounterEvent) Labels() map[string]string     { return c.CLabels }
func (c *CounterEvent) MetricType() mapper.MetricType { return mapper.MetricTypeCounter }

type GaugeEvent struct {
    GMetricName string
    GValue      float64
    GRelative   bool
    GLabels     map[string]string
}

func (g *GaugeEvent) MetricName() string            { return g.GMetricName }
func (g *GaugeEvent) Value() float64                { return g.GValue }
func (c *GaugeEvent) Labels() map[string]string     { return c.GLabels }
func (c *GaugeEvent) MetricType() mapper.MetricType { return mapper.MetricTypeGauge }

type TimerEvent struct {
    TMetricName string
    TValue      float64
    TLabels     map[string]string
}

func (t *TimerEvent) MetricName() string            { return t.TMetricName }
func (t *TimerEvent) Value() float64                { return t.TValue }
func (c *TimerEvent) Labels() map[string]string     { return c.TLabels }
func (c *TimerEvent) MetricType() mapper.MetricType { return mapper.MetricTypeTimer }

type Events []Event

type EventQueue struct {
    C              chan Events
    q              Events
    m              sync.Mutex
    flushThreshold int
    flushTicker    *time.Ticker
}

type EventHandler interface {
    Queue(event Events, eventsFlushed *prometheus.Counter)
}

func NewEventQueue(c chan Events, flushThreshold int, flushInterval time.Duration, eventsFlushed prometheus.Counter) *EventQueue {
    ticker := clock.NewTicker(flushInterval)
    eq := &EventQueue{
        C:              c,
        flushThreshold: flushThreshold,
        flushTicker:    ticker,
        q:              make([]Event, 0, flushThreshold),
    }
    go func() {
        for {
            <-ticker.C
            eq.Flush(eventsFlushed)
        }
    }()
    return eq
}

func (eq *EventQueue) Queue(events Events, eventsFlushed *prometheus.Counter) {
    eq.m.Lock()
    defer eq.m.Unlock()
    for _, e := range events {
        eq.q = append(eq.q, e)
        if len(eq.q) >= eq.flushThreshold {
            eq.FlushUnlocked(*eventsFlushed)
        }
    }
}

func (eq *EventQueue) Flush(eventsFlushed prometheus.Counter) {
    eq.m.Lock()
    defer eq.m.Unlock()
    eq.FlushUnlocked(eventsFlushed)
}

func (eq *EventQueue) FlushUnlocked(eventsFlushed prometheus.Counter) {
    eq.C <- eq.q
    eq.q = make([]Event, 0, cap(eq.q))
    eventsFlushed.Inc()
}

func (eq *EventQueue) Len() int {
    eq.m.Lock()
    defer eq.m.Unlock()
    return len(eq.q)
}

type UnbufferedEventHandler struct {
    C chan Events
}

func (ueh *UnbufferedEventHandler) Queue(events Events, eventsFlushed *prometheus.Counter) {
    ueh.C <- events
}

View file

@ -1,134 +1,134 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package event

import (
    "sync"
    "time"

    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/clock"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
)

type Event interface {
    MetricName() string
    Value() float64
    Labels() map[string]string
    MetricType() mapper.MetricType
}

type CounterEvent struct {
    CMetricName string
    CValue      float64
    CLabels     map[string]string
}

func (c *CounterEvent) MetricName() string            { return c.CMetricName }
func (c *CounterEvent) Value() float64                { return c.CValue }
func (c *CounterEvent) Labels() map[string]string     { return c.CLabels }
func (c *CounterEvent) MetricType() mapper.MetricType { return mapper.MetricTypeCounter }

type GaugeEvent struct {
    GMetricName string
    GValue      float64
    GRelative   bool
    GLabels     map[string]string
}

func (g *GaugeEvent) MetricName() string            { return g.GMetricName }
func (g *GaugeEvent) Value() float64                { return g.GValue }
func (c *GaugeEvent) Labels() map[string]string     { return c.GLabels }
func (c *GaugeEvent) MetricType() mapper.MetricType { return mapper.MetricTypeGauge }

type TimerEvent struct {
    TMetricName string
    TValue      float64
    TLabels     map[string]string
}

func (t *TimerEvent) MetricName() string            { return t.TMetricName }
func (t *TimerEvent) Value() float64                { return t.TValue }
func (c *TimerEvent) Labels() map[string]string     { return c.TLabels }
func (c *TimerEvent) MetricType() mapper.MetricType { return mapper.MetricTypeTimer }

type Events []Event

type EventQueue struct {
    C              chan Events
    q              Events
    m              sync.Mutex
    flushThreshold int
    flushTicker    *time.Ticker
}

type EventHandler interface {
    Queue(event Events, eventsFlushed *prometheus.Counter)
}

func NewEventQueue(c chan Events, flushThreshold int, flushInterval time.Duration, eventsFlushed prometheus.Counter) *EventQueue {
    ticker := clock.NewTicker(flushInterval)
    eq := &EventQueue{
        C:              c,
        flushThreshold: flushThreshold,
        flushTicker:    ticker,
        q:              make([]Event, 0, flushThreshold),
    }
    go func() {
        for {
            <-ticker.C
            eq.Flush(eventsFlushed)
        }
    }()
    return eq
}

func (eq *EventQueue) Queue(events Events, eventsFlushed *prometheus.Counter) {
    eq.m.Lock()
    defer eq.m.Unlock()
    for _, e := range events {
        eq.q = append(eq.q, e)
        if len(eq.q) >= eq.flushThreshold {
            eq.FlushUnlocked(*eventsFlushed)
        }
    }
}

func (eq *EventQueue) Flush(eventsFlushed prometheus.Counter) {
    eq.m.Lock()
    defer eq.m.Unlock()
    eq.FlushUnlocked(eventsFlushed)
}

func (eq *EventQueue) FlushUnlocked(eventsFlushed prometheus.Counter) {
    eq.C <- eq.q
    eq.q = make([]Event, 0, cap(eq.q))
    eventsFlushed.Inc()
}

func (eq *EventQueue) Len() int {
    eq.m.Lock()
    defer eq.m.Unlock()
    return len(eq.q)
}

type UnbufferedEventHandler struct {
    C chan Events
}

func (ueh *UnbufferedEventHandler) Queue(events Events) {
    ueh.C <- events
}
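Both copies of the event package implement the same batching contract: `Queue` buffers incoming events and flushes the batch to the channel as soon as `flushThreshold` is reached, while a background goroutine flushes whatever is pending on every ticker tick. A minimal usage sketch against the exported API above (the counter name is illustrative):

```go
package main

import (
    "fmt"
    "time"

    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/event"
)

func main() {
    flushed := prometheus.NewCounter(prometheus.CounterOpts{
        Name: "event_queue_flushes_total", // illustrative name
        Help: "Number of event queue flushes.",
    })
    ch := make(chan event.Events, 1)
    eq := event.NewEventQueue(ch, 2, 10*time.Second, flushed)

    // Two queued events hit the threshold of 2, so this flushes immediately
    // without waiting for the ticker.
    eq.Queue(event.Events{
        &event.CounterEvent{CMetricName: "foo", CValue: 1},
        &event.CounterEvent{CMetricName: "foo", CValue: 1},
    }, &flushed)

    batch := <-ch
    fmt.Println("flushed batch of", len(batch), "events")
}
```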

View file

@ -1,173 +1,173 @@
package exporter

import (
    "os"
    "time"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/clock"
    "github.com/prometheus/statsd_exporter/pkg/event"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
    "github.com/prometheus/statsd_exporter/pkg/registry"
)

const (
    defaultHelp = "Metric autogenerated by statsd_exporter."
    regErrF     = "Failed to update metric"
)

type Exporter struct {
    Mapper   *mapper.MetricMapper
    Registry *registry.Registry
    Logger   log.Logger
}

// Listen handles all events sent to the given channel sequentially. It
// terminates when the channel is closed.
func (b *Exporter) Listen(e <-chan event.Events, eventsActions *prometheus.CounterVec, eventsUnmapped prometheus.Counter,
    errorEventStats *prometheus.CounterVec, eventStats *prometheus.CounterVec, conflictingEventStats *prometheus.CounterVec, metricsCount *prometheus.GaugeVec) {
    removeStaleMetricsTicker := clock.NewTicker(time.Second)
    for {
        select {
        case <-removeStaleMetricsTicker.C:
            b.Registry.RemoveStaleMetrics()
        case events, ok := <-e:
            if !ok {
                level.Debug(b.Logger).Log("msg", "Channel is closed. Break out of Exporter.Listener.")
                removeStaleMetricsTicker.Stop()
                return
            }
            for _, event := range events {
                b.handleEvent(event, eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
            }
        }
    }
}

// handleEvent processes a single Event according to the configured mapping.
func (b *Exporter) handleEvent(thisEvent event.Event, eventsActions *prometheus.CounterVec, eventsUnmapped prometheus.Counter,
    errorEventStats *prometheus.CounterVec, eventStats *prometheus.CounterVec, conflictingEventStats *prometheus.CounterVec, metricsCount *prometheus.GaugeVec) {
    mapping, labels, present := b.Mapper.GetMapping(thisEvent.MetricName(), thisEvent.MetricType())
    if mapping == nil {
        mapping = &mapper.MetricMapping{}
        if b.Mapper.Defaults.Ttl != 0 {
            mapping.Ttl = b.Mapper.Defaults.Ttl
        }
    }
    if mapping.Action == mapper.ActionTypeDrop {
        eventsActions.WithLabelValues("drop").Inc()
        return
    }
    metricName := ""
    help := defaultHelp
    if mapping.HelpText != "" {
        help = mapping.HelpText
    }
    prometheusLabels := thisEvent.Labels()
    if present {
        if mapping.Name == "" {
            level.Debug(b.Logger).Log("msg", "The mapping generates an empty metric name", "metric_name", thisEvent.MetricName(), "match", mapping.Match)
            errorEventStats.WithLabelValues("empty_metric_name").Inc()
            return
        }
        metricName = mapper.EscapeMetricName(mapping.Name)
        for label, value := range labels {
            prometheusLabels[label] = value
        }
        eventsActions.WithLabelValues(string(mapping.Action)).Inc()
    } else {
        eventsUnmapped.Inc()
        metricName = mapper.EscapeMetricName(thisEvent.MetricName())
    }
    switch ev := thisEvent.(type) {
    case *event.CounterEvent:
        // We don't accept negative values for counters. Incrementing the counter with a negative number
        // will cause the exporter to panic. Instead we will warn and continue to the next event.
        if thisEvent.Value() < 0.0 {
            level.Debug(b.Logger).Log("msg", "counter must be non-negative value", "metric", metricName, "event_value", thisEvent.Value())
            errorEventStats.WithLabelValues("illegal_negative_counter").Inc()
            return
        }
        counter, err := b.Registry.GetCounter(metricName, prometheusLabels, help, mapping, metricsCount)
        if err == nil {
            counter.Add(thisEvent.Value())
            eventStats.WithLabelValues("counter").Inc()
        } else {
            level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
            conflictingEventStats.WithLabelValues("counter").Inc()
        }
    case *event.GaugeEvent:
        gauge, err := b.Registry.GetGauge(metricName, prometheusLabels, help, mapping, metricsCount)
        if err == nil {
            if ev.GRelative {
                gauge.Add(thisEvent.Value())
            } else {
                gauge.Set(thisEvent.Value())
            }
            eventStats.WithLabelValues("gauge").Inc()
        } else {
            level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
            conflictingEventStats.WithLabelValues("gauge").Inc()
        }
    case *event.TimerEvent:
        t := mapper.TimerTypeDefault
        if mapping != nil {
            t = mapping.TimerType
        }
        if t == mapper.TimerTypeDefault {
            t = b.Mapper.Defaults.TimerType
        }
        switch t {
        case mapper.TimerTypeHistogram:
            histogram, err := b.Registry.GetHistogram(metricName, prometheusLabels, help, mapping, metricsCount)
            if err == nil {
                histogram.Observe(thisEvent.Value() / 1000) // prometheus presumes seconds, statsd millisecond
                eventStats.WithLabelValues("timer").Inc()
            } else {
                level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
                conflictingEventStats.WithLabelValues("timer").Inc()
            }
        case mapper.TimerTypeDefault, mapper.TimerTypeSummary:
            summary, err := b.Registry.GetSummary(metricName, prometheusLabels, help, mapping, metricsCount)
            if err == nil {
                summary.Observe(thisEvent.Value() / 1000) // prometheus presumes seconds, statsd millisecond
                eventStats.WithLabelValues("timer").Inc()
            } else {
                level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
                conflictingEventStats.WithLabelValues("timer").Inc()
            }
        default:
            level.Error(b.Logger).Log("msg", "unknown timer type", "type", t)
            os.Exit(1)
        }
    default:
        level.Debug(b.Logger).Log("msg", "Unsupported event type")
        eventStats.WithLabelValues("illegal").Inc()
    }
}

func NewExporter(mapper *mapper.MetricMapper, logger log.Logger) *Exporter {
    return &Exporter{
        Mapper:   mapper,
        Registry: registry.NewRegistry(mapper),
        Logger:   logger,
    }
}
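Wiring this exporter variant together takes a mapper, a logger, a channel of parsed events, and the exporter's self-telemetry metrics. A minimal sketch under two assumptions: the metric names and label sets here are illustrative rather than the exporter's canonical ones, and `InitCache` is assumed to accept just a size, with its cache options left at defaults:

```go
package main

import (
    "time"

    "github.com/go-kit/kit/log"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/event"
    "github.com/prometheus/statsd_exporter/pkg/exporter"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
)

func main() {
    logger := log.NewNopLogger()

    m := &mapper.MetricMapper{}
    m.InitCache(1000) // assumed: size-only call, default cache options

    newCV := func(name string) *prometheus.CounterVec {
        return prometheus.NewCounterVec(prometheus.CounterOpts{Name: name, Help: name}, []string{"reason"})
    }
    // All metric names below are illustrative.
    eventsActions := newCV("events_actions_total")
    errorEventStats := newCV("events_error_total")
    eventStats := newCV("events_total")
    conflictingEventStats := newCV("events_conflict_total")
    eventsUnmapped := prometheus.NewCounter(prometheus.CounterOpts{Name: "events_unmapped_total", Help: "Unmapped events."})
    metricsCount := prometheus.NewGaugeVec(prometheus.GaugeOpts{Name: "metrics_count", Help: "Current metrics."}, []string{"type"})

    events := make(chan event.Events)
    exp := exporter.NewExporter(m, logger)
    go exp.Listen(events, eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)

    events <- event.Events{&event.GaugeEvent{GMetricName: "bar", GValue: 42}}
    time.Sleep(100 * time.Millisecond) // crude: give Listen a moment before exiting
}
```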

View file

@ -1,172 +1,172 @@
package exporter

import (
    "os"
    "time"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/clock"
    "github.com/prometheus/statsd_exporter/pkg/event"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
    "github.com/prometheus/statsd_exporter/pkg/registry"
)

const (
    defaultHelp = "Metric autogenerated by statsd_exporter."
    regErrF     = "Failed to update metric"
)

type Exporter struct {
    Mapper   *mapper.MetricMapper
    Registry *registry.Registry
    Logger   log.Logger
}

// Listen handles all events sent to the given channel sequentially. It
// terminates when the channel is closed.
func (b *Exporter) Listen(e <-chan event.Events, thisEvent event.Event, eventsActions prometheus.GaugeVec, eventsUnmapped prometheus.Gauge,
    errorEventStats prometheus.GaugeVec, eventStats prometheus.GaugeVec, conflictingEventStats prometheus.GaugeVec, metricsCount prometheus.GaugeVec, l func(string, log.Logger)) {
    removeStaleMetricsTicker := clock.NewTicker(time.Second)
    for {
        select {
        case <-removeStaleMetricsTicker.C:
            b.Registry.RemoveStaleMetrics()
        case events, ok := <-e:
            if !ok {
                level.Debug(b.Logger).Log("msg", "Channel is closed. Break out of Exporter.Listener.")
                removeStaleMetricsTicker.Stop()
                return
            }
            for _, event := range events {
                b.handleEvent(event, eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount, l)
            }
        }
    }
}

// handleEvent processes a single Event according to the configured mapping.
func (b *Exporter) handleEvent(thisEvent event.Event, eventsActions prometheus.GaugeVec, eventsUnmapped prometheus.Gauge,
    errorEventStats prometheus.GaugeVec, eventStats prometheus.GaugeVec, conflictingEventStats prometheus.GaugeVec, metricsCount prometheus.GaugeVec, l func(string, log.Logger)) {
    mapping, labels, present := b.Mapper.GetMapping(thisEvent.MetricName(), thisEvent.MetricType())
    if mapping == nil {
        mapping = &mapper.MetricMapping{}
        if b.Mapper.Defaults.Ttl != 0 {
            mapping.Ttl = b.Mapper.Defaults.Ttl
        }
    }
    if mapping.Action == mapper.ActionTypeDrop {
        eventsActions.WithLabelValues("drop").Inc()
        return
    }
    metricName := ""
    help := defaultHelp
    if mapping.HelpText != "" {
        help = mapping.HelpText
    }
    prometheusLabels := thisEvent.Labels()
    if present {
        if mapping.Name == "" {
            level.Debug(b.Logger).Log("msg", "The mapping generates an empty metric name", "metric_name", thisEvent.MetricName(), "match", mapping.Match)
            errorEventStats.WithLabelValues("empty_metric_name").Inc()
            return
        }
        metricName = mapper.EscapeMetricName(mapping.Name)
        for label, value := range labels {
            prometheusLabels[label] = value
        }
        eventsActions.WithLabelValues(string(mapping.Action)).Inc()
    } else {
        eventsUnmapped.Inc()
        metricName = mapper.EscapeMetricName(thisEvent.MetricName())
    }
    switch ev := thisEvent.(type) {
    case *event.CounterEvent:
        // We don't accept negative values for counters. Incrementing the counter with a negative number
        // will cause the exporter to panic. Instead we will warn and continue to the next event.
        if thisEvent.Value() < 0.0 {
            level.Debug(b.Logger).Log("msg", "counter must be non-negative value", "metric", metricName, "event_value", thisEvent.Value())
            errorEventStats.WithLabelValues("illegal_negative_counter").Inc()
            return
        }
        counter, err := b.Registry.GetCounter(metricName, prometheusLabels, help, mapping, &metricsCount)
        if err == nil {
            counter.Add(thisEvent.Value())
            eventStats.WithLabelValues("counter").Inc()
        } else {
            level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
            conflictingEventStats.WithLabelValues("counter").Inc()
        }
    case *event.GaugeEvent:
        gauge, err := b.Registry.GetGauge(metricName, prometheusLabels, help, mapping, &metricsCount)
        if err == nil {
            if ev.GRelative {
                gauge.Add(thisEvent.Value())
            } else {
                gauge.Set(thisEvent.Value())
            }
            eventStats.WithLabelValues("gauge").Inc()
        } else {
            level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
            conflictingEventStats.WithLabelValues("gauge").Inc()
        }
    case *event.TimerEvent:
        t := mapper.TimerTypeDefault
        if mapping != nil {
            t = mapping.TimerType
        }
        if t == mapper.TimerTypeDefault {
            t = b.Mapper.Defaults.TimerType
        }
        switch t {
        case mapper.TimerTypeHistogram:
            histogram, err := b.Registry.GetHistogram(metricName, prometheusLabels, help, mapping, &metricsCount)
            if err == nil {
                histogram.Observe(thisEvent.Value() / 1000) // prometheus presumes seconds, statsd millisecond
                eventStats.WithLabelValues("timer").Inc()
            } else {
                level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
                conflictingEventStats.WithLabelValues("timer").Inc()
            }
        case mapper.TimerTypeDefault, mapper.TimerTypeSummary:
            summary, err := b.Registry.GetSummary(metricName, prometheusLabels, help, mapping, &metricsCount)
            if err == nil {
                summary.Observe(thisEvent.Value() / 1000) // prometheus presumes seconds, statsd millisecond
                eventStats.WithLabelValues("timer").Inc()
            } else {
                level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
                conflictingEventStats.WithLabelValues("timer").Inc()
            }
        default:
            level.Error(b.Logger).Log("msg", "unknown timer type", "type", t)
            os.Exit(1)
        }
    default:
        level.Debug(b.Logger).Log("msg", "Unsupported event type")
        eventStats.WithLabelValues("illegal").Inc()
    }
}

func NewExporter(mapper *mapper.MetricMapper, logger log.Logger) *Exporter {
    return &Exporter{
        Mapper:   mapper,
        Registry: registry.NewRegistry(mapper),
        Logger:   logger,
    }
}

View file

@ -1,241 +1,241 @@
package line

import (
    "fmt"
    "strconv"
    "strings"
    "unicode/utf8"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/event"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
)

func buildEvent(statType, metric string, value float64, relative bool, labels map[string]string) (event.Event, error) {
    switch statType {
    case "c":
        return &event.CounterEvent{
            CMetricName: metric,
            CValue:      float64(value),
            CLabels:     labels,
        }, nil
    case "g":
        return &event.GaugeEvent{
            GMetricName: metric,
            GValue:      float64(value),
            GRelative:   relative,
            GLabels:     labels,
        }, nil
    case "ms", "h", "d":
        return &event.TimerEvent{
            TMetricName: metric,
            TValue:      float64(value),
            TLabels:     labels,
        }, nil
    case "s":
        return nil, fmt.Errorf("no support for StatsD sets")
    default:
        return nil, fmt.Errorf("bad stat type %s", statType)
    }
}

func parseTag(component, tag string, separator rune, labels map[string]string, tagErrors prometheus.Counter, logger log.Logger) {
    // Entirely empty tag is an error
    if len(tag) == 0 {
        tagErrors.Inc()
        level.Debug(logger).Log("msg", "Empty name tag", "component", component)
        return
    }
    for i, c := range tag {
        if c == separator {
            k := tag[:i]
            v := tag[i+1:]
            if len(k) == 0 || len(v) == 0 {
                // Empty key or value is an error
                tagErrors.Inc()
                level.Debug(logger).Log("msg", "Malformed name tag", "k", k, "v", v, "component", component)
            } else {
                labels[mapper.EscapeMetricName(k)] = v
            }
            return
        }
    }
    // Missing separator (no value) is an error
    tagErrors.Inc()
    level.Debug(logger).Log("msg", "Malformed name tag", "tag", tag, "component", component)
}

func parseNameTags(component string, labels map[string]string, tagErrors prometheus.Counter, logger log.Logger) {
    lastTagEndIndex := 0
    for i, c := range component {
        if c == ',' {
            tag := component[lastTagEndIndex:i]
            lastTagEndIndex = i + 1
            parseTag(component, tag, '=', labels, tagErrors, logger)
        }
    }
    // If we're not off the end of the string, add the last tag
    if lastTagEndIndex < len(component) {
        tag := component[lastTagEndIndex:]
        parseTag(component, tag, '=', labels, tagErrors, logger)
    }
}

func trimLeftHash(s string) string {
    if s != "" && s[0] == '#' {
        return s[1:]
    }
    return s
}

func ParseDogStatsDTags(component string, labels map[string]string, tagErrors prometheus.Counter, logger log.Logger) {
    lastTagEndIndex := 0
    for i, c := range component {
        if c == ',' {
            tag := component[lastTagEndIndex:i]
            lastTagEndIndex = i + 1
            parseTag(component, trimLeftHash(tag), ':', labels, tagErrors, logger)
        }
    }
    // If we're not off the end of the string, add the last tag
    if lastTagEndIndex < len(component) {
        tag := component[lastTagEndIndex:]
        parseTag(component, trimLeftHash(tag), ':', labels, tagErrors, logger)
    }
}

func parseNameAndTags(name string, labels map[string]string, tagErrors prometheus.Counter, logger log.Logger) string {
    for i, c := range name {
        // `#` delimits start of tags by Librato
        // https://www.librato.com/docs/kb/collect/collection_agents/stastd/#stat-level-tags
        // `,` delimits start of tags by InfluxDB
        // https://www.influxdata.com/blog/getting-started-with-sending-statsd-metrics-to-telegraf-influxdb/#introducing-influx-statsd
        if c == '#' || c == ',' {
            parseNameTags(name[i+1:], labels, tagErrors, logger)
            return name[:i]
        }
    }
    return name
}

func LineToEvents(line string, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter, logger log.Logger) event.Events {
    events := event.Events{}
    if line == "" {
        return events
    }
    elements := strings.SplitN(line, ":", 2)
    if len(elements) < 2 || len(elements[0]) == 0 || !utf8.ValidString(line) {
        sampleErrors.WithLabelValues("malformed_line").Inc()
        level.Debug(logger).Log("msg", "Bad line from StatsD", "line", line)
        return events
    }
    labels := map[string]string{}
    metric := parseNameAndTags(elements[0], labels, tagErrors, logger)
    var samples []string
    if strings.Contains(elements[1], "|#") {
        // using DogStatsD tags
        // don't allow mixed tagging styles
        if len(labels) > 0 {
            sampleErrors.WithLabelValues("mixed_tagging_styles").Inc()
            level.Debug(logger).Log("msg", "Bad line (multiple tagging styles) from StatsD", "line", line)
            return events
        }
        // disable multi-metrics
        samples = elements[1:]
    } else {
        samples = strings.Split(elements[1], ":")
    }
samples:
    for _, sample := range samples {
        samplesReceived.Inc()
        components := strings.Split(sample, "|")
        samplingFactor := 1.0
        if len(components) < 2 || len(components) > 4 {
            sampleErrors.WithLabelValues("malformed_component").Inc()
            level.Debug(logger).Log("msg", "Bad component", "line", line)
            continue
        }
        valueStr, statType := components[0], components[1]
        var relative = false
        if strings.Index(valueStr, "+") == 0 || strings.Index(valueStr, "-") == 0 {
            relative = true
        }
        value, err := strconv.ParseFloat(valueStr, 64)
        if err != nil {
            level.Debug(logger).Log("msg", "Bad value", "value", valueStr, "line", line)
            sampleErrors.WithLabelValues("malformed_value").Inc()
            continue
        }
        multiplyEvents := 1
        if len(components) >= 3 {
            for _, component := range components[2:] {
                if len(component) == 0 {
                    level.Debug(logger).Log("msg", "Empty component", "line", line)
                    sampleErrors.WithLabelValues("malformed_component").Inc()
                    continue samples
                }
            }
            for _, component := range components[2:] {
                switch component[0] {
                case '@':
                    samplingFactor, err = strconv.ParseFloat(component[1:], 64)
                    if err != nil {
                        level.Debug(logger).Log("msg", "Invalid sampling factor", "component", component[1:], "line", line)
                        sampleErrors.WithLabelValues("invalid_sample_factor").Inc()
                    }
                    if samplingFactor == 0 {
                        samplingFactor = 1
                    }
                    if statType == "g" {
                        continue
                    } else if statType == "c" {
                        value /= samplingFactor
                    } else if statType == "ms" || statType == "h" || statType == "d" {
                        multiplyEvents = int(1 / samplingFactor)
                    }
                case '#':
                    ParseDogStatsDTags(component[1:], labels, tagErrors, logger)
                default:
                    level.Debug(logger).Log("msg", "Invalid sampling factor or tag section", "component", components[2], "line", line)
                    sampleErrors.WithLabelValues("invalid_sample_factor").Inc()
                    continue
                }
            }
        }
        if len(labels) > 0 {
            tagsReceived.Inc()
        }
        for i := 0; i < multiplyEvents; i++ {
            event, err := buildEvent(statType, metric, value, relative, labels)
            if err != nil {
                level.Debug(logger).Log("msg", "Error building event", "line", line, "error", err)
                sampleErrors.WithLabelValues("illegal_event").Inc()
                continue
            }
            events = append(events, event)
        }
    }
    return events
}
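To see the parser end to end: a line like `deploys:1|c|@0.1|#env:prod` is split into name, value, type, sampling factor, and DogStatsD tags, and a counter's value is scaled up by the sampling factor. A minimal sketch calling the exported entry point above (counter names are illustrative, and the import path assumes this file lives at pkg/line alongside the other packages):

```go
package main

import (
    "fmt"

    "github.com/go-kit/kit/log"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/line"
)

func main() {
    logger := log.NewNopLogger()
    newC := func(name string) prometheus.Counter {
        return prometheus.NewCounter(prometheus.CounterOpts{Name: name, Help: name})
    }
    sampleErrors := prometheus.NewCounterVec(
        prometheus.CounterOpts{Name: "sample_errors_total", Help: "Parse errors."}, // illustrative
        []string{"reason"},
    )

    // One counter sample at 10% sampling with a DogStatsD tag; the parser
    // divides the value by 0.1, so the event carries 10, not 1.
    events := line.LineToEvents(
        "deploys:1|c|@0.1|#env:prod",
        *sampleErrors,
        newC("samples_total"), newC("tag_errors_total"), newC("tags_total"),
        logger,
    )
    for _, e := range events {
        fmt.Println(e.MetricName(), e.Value(), e.Labels()) // deploys 10 map[env:prod]
    }
}
```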

View file

@ -1,241 +1,241 @@
package line

import (
    "fmt"
    "strconv"
    "strings"
    "unicode/utf8"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/event"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
)

func buildEvent(statType, metric string, value float64, relative bool, labels map[string]string) (event.Event, error) {
    switch statType {
    case "c":
        return &event.CounterEvent{
            CMetricName: metric,
            CValue:      float64(value),
            CLabels:     labels,
        }, nil
    case "g":
        return &event.GaugeEvent{
            GMetricName: metric,
            GValue:      float64(value),
            GRelative:   relative,
            GLabels:     labels,
        }, nil
    case "ms", "h", "d":
        return &event.TimerEvent{
            TMetricName: metric,
            TValue:      float64(value),
            TLabels:     labels,
        }, nil
    case "s":
        return nil, fmt.Errorf("no support for StatsD sets")
    default:
        return nil, fmt.Errorf("bad stat type %s", statType)
    }
}

func parseTag(component, tag string, separator rune, labels map[string]string, tagErrors prometheus.Counter, logger log.Logger) {
    // Entirely empty tag is an error
    if len(tag) == 0 {
        tagErrors.Inc()
        level.Debug(logger).Log("msg", "Empty name tag", "component", component)
        return
    }

    for i, c := range tag {
        if c == separator {
            k := tag[:i]
            v := tag[i+1:]

            if len(k) == 0 || len(v) == 0 {
                // Empty key or value is an error
                tagErrors.Inc()
                level.Debug(logger).Log("msg", "Malformed name tag", "k", k, "v", v, "component", component)
            } else {
                labels[mapper.EscapeMetricName(k)] = v
            }
            return
        }
    }

    // Missing separator (no value) is an error
    tagErrors.Inc()
    level.Debug(logger).Log("msg", "Malformed name tag", "tag", tag, "component", component)
}

func parseNameTags(component string, labels map[string]string, tagErrors prometheus.Counter, logger log.Logger) {
    lastTagEndIndex := 0
    for i, c := range component {
        if c == ',' {
            tag := component[lastTagEndIndex:i]
            lastTagEndIndex = i + 1
            parseTag(component, tag, '=', labels, tagErrors, logger)
        }
    }

    // If we're not off the end of the string, add the last tag
    if lastTagEndIndex < len(component) {
        tag := component[lastTagEndIndex:]
        parseTag(component, tag, '=', labels, tagErrors, logger)
    }
}

func trimLeftHash(s string) string {
    if s != "" && s[0] == '#' {
        return s[1:]
    }
    return s
}

func parseDogStatsDTags(component string, labels map[string]string, tagErrors prometheus.Counter, logger log.Logger) {
    lastTagEndIndex := 0
    for i, c := range component {
        if c == ',' {
            tag := component[lastTagEndIndex:i]
            lastTagEndIndex = i + 1
            parseTag(component, trimLeftHash(tag), ':', labels, tagErrors, logger)
        }
    }

    // If we're not off the end of the string, add the last tag
    if lastTagEndIndex < len(component) {
        tag := component[lastTagEndIndex:]
        parseTag(component, trimLeftHash(tag), ':', labels, tagErrors, logger)
    }
}

func parseNameAndTags(name string, labels map[string]string, tagErrors prometheus.Counter, logger log.Logger) string {
    for i, c := range name {
        // `#` delimits start of tags by Librato
        // https://www.librato.com/docs/kb/collect/collection_agents/stastd/#stat-level-tags
        // `,` delimits start of tags by InfluxDB
        // https://www.influxdata.com/blog/getting-started-with-sending-statsd-metrics-to-telegraf-influxdb/#introducing-influx-statsd
        if c == '#' || c == ',' {
            parseNameTags(name[i+1:], labels, tagErrors, logger)
            return name[:i]
        }
    }
    return name
}

func LineToEvents(line string, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter, logger log.Logger) event.Events {
    events := event.Events{}
    if line == "" {
        return events
    }

    elements := strings.SplitN(line, ":", 2)
    if len(elements) < 2 || len(elements[0]) == 0 || !utf8.ValidString(line) {
        sampleErrors.WithLabelValues("malformed_line").Inc()
        level.Debug(logger).Log("msg", "Bad line from StatsD", "line", line)
        return events
    }

    labels := map[string]string{}
    metric := parseNameAndTags(elements[0], labels, tagErrors, logger)

    var samples []string
    if strings.Contains(elements[1], "|#") {
        // using DogStatsD tags

        // don't allow mixed tagging styles
        if len(labels) > 0 {
            sampleErrors.WithLabelValues("mixed_tagging_styles").Inc()
            level.Debug(logger).Log("msg", "Bad line (multiple tagging styles) from StatsD", "line", line)
            return events
        }

        // disable multi-metrics
        samples = elements[1:]
    } else {
        samples = strings.Split(elements[1], ":")
    }

samples:
    for _, sample := range samples {
        samplesReceived.Inc()
        components := strings.Split(sample, "|")
        samplingFactor := 1.0

        if len(components) < 2 || len(components) > 4 {
            sampleErrors.WithLabelValues("malformed_component").Inc()
            level.Debug(logger).Log("msg", "Bad component", "line", line)
            continue
        }

        valueStr, statType := components[0], components[1]

        var relative = false
        if strings.Index(valueStr, "+") == 0 || strings.Index(valueStr, "-") == 0 {
            relative = true
        }

        value, err := strconv.ParseFloat(valueStr, 64)
        if err != nil {
            level.Debug(logger).Log("msg", "Bad value", "value", valueStr, "line", line)
            sampleErrors.WithLabelValues("malformed_value").Inc()
            continue
        }

        multiplyEvents := 1
        if len(components) >= 3 {
            for _, component := range components[2:] {
                if len(component) == 0 {
                    level.Debug(logger).Log("msg", "Empty component", "line", line)
                    sampleErrors.WithLabelValues("malformed_component").Inc()
                    continue samples
                }
            }

            for _, component := range components[2:] {
                switch component[0] {
                case '@':
                    samplingFactor, err = strconv.ParseFloat(component[1:], 64)
                    if err != nil {
                        level.Debug(logger).Log("msg", "Invalid sampling factor", "component", component[1:], "line", line)
                        sampleErrors.WithLabelValues("invalid_sample_factor").Inc()
                    }
                    if samplingFactor == 0 {
                        samplingFactor = 1
                    }

                    if statType == "g" {
                        continue
                    } else if statType == "c" {
                        value /= samplingFactor
                    } else if statType == "ms" || statType == "h" || statType == "d" {
                        multiplyEvents = int(1 / samplingFactor)
                    }
                case '#':
                    parseDogStatsDTags(component[1:], labels, tagErrors, logger)
                default:
                    level.Debug(logger).Log("msg", "Invalid sampling factor or tag section", "component", component, "line", line)
                    sampleErrors.WithLabelValues("invalid_sample_factor").Inc()
                    continue
                }
            }
        }

        if len(labels) > 0 {
            tagsReceived.Inc()
        }

        for i := 0; i < multiplyEvents; i++ {
            event, err := buildEvent(statType, metric, value, relative, labels)
            if err != nil {
                level.Debug(logger).Log("msg", "Error building event", "line", line, "error", err)
                sampleErrors.WithLabelValues("illegal_event").Inc()
                continue
            }
            events = append(events, event)
        }
    }
    return events
}
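
To make the parsing rules above concrete, here is a minimal, hypothetical driver (not part of this commit) that runs a single DogStatsD-style line through `LineToEvents`. The throwaway counter names are arbitrary; the import paths are the ones this repository uses.

```go
package main

import (
    "fmt"

    "github.com/go-kit/kit/log"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/line"
)

func main() {
    // Throwaway metrics; in the exporter these are registered and exposed.
    sampleErrors := prometheus.NewCounterVec(prometheus.CounterOpts{Name: "errors_total"}, []string{"reason"})
    samplesReceived := prometheus.NewCounter(prometheus.CounterOpts{Name: "samples_total"})
    tagErrors := prometheus.NewCounter(prometheus.CounterOpts{Name: "tag_errors_total"})
    tagsReceived := prometheus.NewCounter(prometheus.CounterOpts{Name: "tags_total"})

    // One timer sample at 25% sampling with a DogStatsD tag section.
    events := line.LineToEvents("api.latency:320|ms|@0.25|#region:us",
        *sampleErrors, samplesReceived, tagErrors, tagsReceived, log.NewNopLogger())

    // multiplyEvents = int(1 / 0.25), so four TimerEvents come back,
    // each carrying the label region="us".
    fmt.Println(len(events)) // 4
}
```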


@@ -1,138 +1,138 @@
package listener

import (
    "bufio"
    "io"
    "net"
    "os"
    "strings"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/event"
    pkgLine "github.com/prometheus/statsd_exporter/pkg/line"
)

type StatsDUDPListener struct {
    Conn         *net.UDPConn
    EventHandler event.EventHandler
    Logger       log.Logger
}

func (l *StatsDUDPListener) SetEventHandler(eh event.EventHandler) {
    l.EventHandler = eh
}

func (l *StatsDUDPListener) Listen(udpPackets prometheus.Counter, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    buf := make([]byte, 65535)
    for {
        n, _, err := l.Conn.ReadFromUDP(buf)
        if err != nil {
            // https://github.com/golang/go/issues/4373
            // ignore net: errClosing error as it will occur during shutdown
            if strings.HasSuffix(err.Error(), "use of closed network connection") {
                return
            }
            level.Error(l.Logger).Log("error", err)
            return
        }
        l.HandlePacket(buf[0:n], udpPackets, linesReceived, eventsFlushed, sampleErrors, samplesReceived, tagErrors, tagsReceived)
    }
}

func (l *StatsDUDPListener) HandlePacket(packet []byte, udpPackets prometheus.Counter, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    udpPackets.Inc()
    lines := strings.Split(string(packet), "\n")
    for _, line := range lines {
        linesReceived.Inc()
        l.EventHandler.Queue(pkgLine.LineToEvents(line, sampleErrors, samplesReceived, tagErrors, tagsReceived, l.Logger), &eventsFlushed)
    }
}

type StatsDTCPListener struct {
    Conn         *net.TCPListener
    EventHandler event.EventHandler
    Logger       log.Logger
}

func (l *StatsDTCPListener) SetEventHandler(eh event.EventHandler) {
    l.EventHandler = eh
}

func (l *StatsDTCPListener) Listen(linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, tcpConnections prometheus.Counter, tcpErrors prometheus.Counter, tcpLineTooLong prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    for {
        c, err := l.Conn.AcceptTCP()
        if err != nil {
            // https://github.com/golang/go/issues/4373
            // ignore net: errClosing error as it will occur during shutdown
            if strings.HasSuffix(err.Error(), "use of closed network connection") {
                return
            }
            level.Error(l.Logger).Log("msg", "AcceptTCP failed", "error", err)
            os.Exit(1)
        }
        go l.HandleConn(c, linesReceived, eventsFlushed, tcpConnections, tcpErrors, tcpLineTooLong, sampleErrors, samplesReceived, tagErrors, tagsReceived)
    }
}

func (l *StatsDTCPListener) HandleConn(c *net.TCPConn, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, tcpConnections prometheus.Counter, tcpErrors prometheus.Counter, tcpLineTooLong prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    defer c.Close()

    tcpConnections.Inc()

    r := bufio.NewReader(c)
    for {
        line, isPrefix, err := r.ReadLine()
        if err != nil {
            if err != io.EOF {
                tcpErrors.Inc()
                level.Debug(l.Logger).Log("msg", "Read failed", "addr", c.RemoteAddr(), "error", err)
            }
            break
        }
        if isPrefix {
            tcpLineTooLong.Inc()
            level.Debug(l.Logger).Log("msg", "Read failed: line too long", "addr", c.RemoteAddr())
            break
        }
        linesReceived.Inc()
        l.EventHandler.Queue(pkgLine.LineToEvents(string(line), sampleErrors, samplesReceived, tagErrors, tagsReceived, l.Logger), &eventsFlushed)
    }
}

type StatsDUnixgramListener struct {
    Conn         *net.UnixConn
    EventHandler event.EventHandler
    Logger       log.Logger
}

func (l *StatsDUnixgramListener) SetEventHandler(eh event.EventHandler) {
    l.EventHandler = eh
}

func (l *StatsDUnixgramListener) Listen(unixgramPackets prometheus.Counter, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    buf := make([]byte, 65535)
    for {
        n, _, err := l.Conn.ReadFromUnix(buf)
        if err != nil {
            // https://github.com/golang/go/issues/4373
            // ignore net: errClosing error as it will occur during shutdown
            if strings.HasSuffix(err.Error(), "use of closed network connection") {
                return
            }
            level.Error(l.Logger).Log("error", err)
            os.Exit(1)
        }
        l.HandlePacket(buf[:n], unixgramPackets, linesReceived, eventsFlushed, sampleErrors, samplesReceived, tagErrors, tagsReceived)
    }
}

func (l *StatsDUnixgramListener) HandlePacket(packet []byte, unixgramPackets prometheus.Counter, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    unixgramPackets.Inc()
    lines := strings.Split(string(packet), "\n")
    for _, line := range lines {
        linesReceived.Inc()
        l.EventHandler.Queue(pkgLine.LineToEvents(line, sampleErrors, samplesReceived, tagErrors, tagsReceived, l.Logger), &eventsFlushed)
    }
}
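
The packet path can also be driven by hand without a socket. The sketch below is hypothetical and not part of this commit: it assumes `event.EventHandler` is satisfied by the two-argument `Queue(event.Events, *prometheus.Counter)` method that the calls above use, and the stub handler and counter names are illustrative only.

```go
package main

import (
    "fmt"

    "github.com/go-kit/kit/log"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/event"
    "github.com/prometheus/statsd_exporter/pkg/listener"
)

// countingHandler stubs the event sink: it only counts queued events.
type countingHandler struct{ n int }

func (h *countingHandler) Queue(evs event.Events, flushed *prometheus.Counter) {
    h.n += len(evs)
}

func main() {
    counter := func(name string) prometheus.Counter {
        return prometheus.NewCounter(prometheus.CounterOpts{Name: name})
    }
    sampleErrors := prometheus.NewCounterVec(prometheus.CounterOpts{Name: "errors_total"}, []string{"reason"})

    h := &countingHandler{}
    l := &listener.StatsDUDPListener{Logger: log.NewNopLogger()}
    l.SetEventHandler(h)

    // One datagram, two lines, no network needed.
    l.HandlePacket([]byte("foo:1|c\nbar:2|g"),
        counter("packets"), counter("lines"), counter("flushed"),
        *sampleErrors, counter("samples"), counter("tag_errors"), counter("tags"))

    fmt.Println(h.n) // 2 events queued
}
```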


@@ -1,138 +1,138 @@
package listener

import (
    "bufio"
    "io"
    "net"
    "os"
    "strings"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
    "github.com/prometheus/client_golang/prometheus"

    "github.com/prometheus/statsd_exporter/pkg/event"
    pkgLine "github.com/prometheus/statsd_exporter/pkg/line"
)

type StatsDUDPListener struct {
    Conn         *net.UDPConn
    EventHandler event.EventHandler
    Logger       log.Logger
}

func (l *StatsDUDPListener) SetEventHandler(eh event.EventHandler) {
    l.EventHandler = eh
}

func (l *StatsDUDPListener) Listen(udpPackets prometheus.Counter, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    buf := make([]byte, 65535)
    for {
        n, _, err := l.Conn.ReadFromUDP(buf)
        if err != nil {
            // https://github.com/golang/go/issues/4373
            // ignore net: errClosing error as it will occur during shutdown
            if strings.HasSuffix(err.Error(), "use of closed network connection") {
                return
            }
            level.Error(l.Logger).Log("error", err)
            return
        }
        l.handlePacket(buf[0:n], udpPackets, linesReceived, eventsFlushed, sampleErrors, samplesReceived, tagErrors, tagsReceived)
    }
}

func (l *StatsDUDPListener) handlePacket(packet []byte, udpPackets prometheus.Counter, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    udpPackets.Inc()
    lines := strings.Split(string(packet), "\n")
    for _, line := range lines {
        linesReceived.Inc()
        l.EventHandler.Queue(pkgLine.LineToEvents(line, sampleErrors, samplesReceived, tagErrors, tagsReceived, l.Logger), &eventsFlushed)
    }
}

type StatsDTCPListener struct {
    Conn         *net.TCPListener
    EventHandler event.EventHandler
    Logger       log.Logger
}

func (l *StatsDTCPListener) SetEventHandler(eh event.EventHandler) {
    l.EventHandler = eh
}

func (l *StatsDTCPListener) Listen(linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, tcpConnections prometheus.Counter, tcpErrors prometheus.Counter, tcpLineTooLong prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    for {
        c, err := l.Conn.AcceptTCP()
        if err != nil {
            // https://github.com/golang/go/issues/4373
            // ignore net: errClosing error as it will occur during shutdown
            if strings.HasSuffix(err.Error(), "use of closed network connection") {
                return
            }
            level.Error(l.Logger).Log("msg", "AcceptTCP failed", "error", err)
            os.Exit(1)
        }
        go l.handleConn(c, linesReceived, eventsFlushed, tcpConnections, tcpErrors, tcpLineTooLong, sampleErrors, samplesReceived, tagErrors, tagsReceived)
    }
}

func (l *StatsDTCPListener) handleConn(c *net.TCPConn, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, tcpConnections prometheus.Counter, tcpErrors prometheus.Counter, tcpLineTooLong prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    defer c.Close()

    tcpConnections.Inc()

    r := bufio.NewReader(c)
    for {
        line, isPrefix, err := r.ReadLine()
        if err != nil {
            if err != io.EOF {
                tcpErrors.Inc()
                level.Debug(l.Logger).Log("msg", "Read failed", "addr", c.RemoteAddr(), "error", err)
            }
            break
        }
        if isPrefix {
            tcpLineTooLong.Inc()
            level.Debug(l.Logger).Log("msg", "Read failed: line too long", "addr", c.RemoteAddr())
            break
        }
        linesReceived.Inc()
        l.EventHandler.Queue(pkgLine.LineToEvents(string(line), sampleErrors, samplesReceived, tagErrors, tagsReceived, l.Logger), &eventsFlushed)
    }
}

type StatsDUnixgramListener struct {
    Conn         *net.UnixConn
    EventHandler event.EventHandler
    Logger       log.Logger
}

func (l *StatsDUnixgramListener) SetEventHandler(eh event.EventHandler) {
    l.EventHandler = eh
}

func (l *StatsDUnixgramListener) Listen(unixgramPackets prometheus.Counter, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    buf := make([]byte, 65535)
    for {
        n, _, err := l.Conn.ReadFromUnix(buf)
        if err != nil {
            // https://github.com/golang/go/issues/4373
            // ignore net: errClosing error as it will occur during shutdown
            if strings.HasSuffix(err.Error(), "use of closed network connection") {
                return
            }
            level.Error(l.Logger).Log("error", err)
            os.Exit(1)
        }
        l.handlePacket(buf[:n], unixgramPackets, linesReceived, eventsFlushed, sampleErrors, samplesReceived, tagErrors, tagsReceived)
    }
}

func (l *StatsDUnixgramListener) handlePacket(packet []byte, unixgramPackets prometheus.Counter, linesReceived prometheus.Counter, eventsFlushed prometheus.Counter, sampleErrors prometheus.CounterVec, samplesReceived prometheus.Counter, tagErrors prometheus.Counter, tagsReceived prometheus.Counter) {
    unixgramPackets.Inc()
    lines := strings.Split(string(packet), "\n")
    for _, line := range lines {
        linesReceived.Inc()
        l.EventHandler.Queue(pkgLine.LineToEvents(line, sampleErrors, samplesReceived, tagErrors, tagsReceived, l.Logger), &eventsFlushed)
    }
}


@@ -1,42 +1,42 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mapper

import "fmt"

type ActionType string

const (
    ActionTypeMap     ActionType = "map"
    ActionTypeDrop    ActionType = "drop"
    ActionTypeDefault ActionType = ""
)

func (t *ActionType) UnmarshalYAML(unmarshal func(interface{}) error) error {
    var v string

    if err := unmarshal(&v); err != nil {
        return err
    }

    switch ActionType(v) {
    case ActionTypeDrop:
        *t = ActionTypeDrop
    case ActionTypeMap, ActionTypeDefault:
        *t = ActionTypeMap
    default:
        return fmt.Errorf("invalid action type %q", v)
    }
    return nil
}
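
Downstream, this type is what gives mapping rules their `action` key: `drop` discards a matching metric, while `map` (or an empty value) applies the rule. A minimal, illustrative mapping sketch, with placeholder metric names:

```yaml
mappings:
- match: "myapp.internal.*"
  action: drop             # silently discard matching metrics
- match: "myapp.requests.*"
  name: "myapp_requests_total"
  action: map              # optional; "" defaults to map
  labels:
    handler: $1
```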


@@ -1,74 +1,74 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mapper

import (
    "strings"
    "unicode/utf8"
)

// EscapeMetricName replaces invalid characters in the metric name with "_"
// Valid characters are a-z, A-Z, 0-9, and _
func EscapeMetricName(metricName string) string {
    metricLen := len(metricName)
    if metricLen == 0 {
        return ""
    }

    escaped := false
    var sb strings.Builder
    // If a metric starts with a digit, allocate the memory and prepend an
    // underscore.
    if metricName[0] >= '0' && metricName[0] <= '9' {
        escaped = true
        sb.Grow(metricLen + 1)
        sb.WriteByte('_')
    }

    // This is a character replacement method optimized for this limited
    // use case. It is much faster than using a regex.
    offset := 0
    for i, c := range metricName {
        // Seek forward, skipping valid characters until we find one that needs
        // to be replaced, then add all the characters we've seen so far to the
        // strings.Builder.
        if (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
            (c >= '0' && c <= '9') || (c == '_') {
            // Character is valid, so skip over it without doing anything.
        } else {
            if !escaped {
                // Up until now we've been lazy and avoided actually allocating
                // memory. Unfortunately we've now determined this string needs
                // escaping, so allocate the buffer for the whole string.
                escaped = true
                sb.Grow(metricLen)
            }
            sb.WriteString(metricName[offset:i])
            offset = i + utf8.RuneLen(c)
            sb.WriteByte('_')
        }
    }

    if !escaped {
        // This is the happy path where nothing had to be escaped, so we can
        // avoid doing anything.
        return metricName
    }

    if offset < metricLen {
        sb.WriteString(metricName[offset:])
    }
    return sb.String()
}


@@ -1,56 +1,56 @@
// Copyright 2020 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mapper

import "testing"

func TestEscapeMetricName(t *testing.T) {
    scenarios := map[string]string{
        "clean":                   "clean",
        "0starts_with_digit":      "_0starts_with_digit",
        "with_underscore":         "with_underscore",
        "with.dot":                "with_dot",
        "with😱emoji":             "with_emoji",
        "with.*.multiple":         "with___multiple",
        "test.web-server.foo.bar": "test_web_server_foo_bar",
        "":                        "",
    }

    for in, want := range scenarios {
        if got := EscapeMetricName(in); want != got {
            t.Errorf("expected `%s` to be escaped to `%s`, got `%s`", in, want, got)
        }
    }
}

func BenchmarkEscapeMetricName(b *testing.B) {
    scenarios := []string{
        "clean",
        "0starts_with_digit",
        "with_underscore",
        "with.dot",
        "with😱emoji",
        "with.*.multiple",
        "test.web-server.foo.bar",
        "",
    }

    for _, s := range scenarios {
        b.Run(s, func(b *testing.B) {
            for n := 0; n < b.N; n++ {
                EscapeMetricName(s)
            }
        })
    }
}


@@ -1,132 +1,132 @@
# FSM Mapping

## Overview

This package implements a fast and efficient algorithm for generic glob-style
string matching using a finite state machine (FSM).

### Source Hierarchy

```
'-- fsm
    '-- dump.go       // functionality to dump the FSM to a Dot file
    '-- formatter.go  // format glob templates using captured * groups
    '-- fsm.go        // building and searching the FSM
    '-- minmax.go     // min() and max() functions for integers
```

## FSM Explained

Per [Wikipedia](https://en.wikipedia.org/wiki/Finite-state_machine):

> A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata),
> finite automaton, or simply a state machine, is a mathematical model of
> computation. It is an abstract machine that can be in exactly one of a finite
> number of states at any given time. The FSM can change from one state to
> another in response to some external inputs; the change from one state to
> another is called a transition. An FSM is defined by a list of its states, its
> initial state, and the conditions for each transition.

In our use case, each *state* is one of the substrings produced by splitting the
input StatsD metric name on `.`.

### Add state to FSM

`func (f *FSM) AddState(match string, matchMetricType string,
maxPossibleTransitions int, result interface{}) int`

At first, the FSM contains only three states, representing the three possible
metric types:

           ____ [gauge]
          /
    (start)---- [counter]
          \
           '--- [ timer ]

Adding a rule `client.*.request.count` with type `counter` extends the FSM to:

           ____ [gauge]
          /
    (start)---- [counter] -- [client] -- [*] -- [request] -- [count] -- {R1}
          \
           '--- [timer]

`{R1}` is short for result 1, the match result for `client.*.request.count`.

Adding a rule `client.*.*.size` with type `counter` extends the FSM to:

           ____ [gauge]                      __ [request] -- [count] -- {R1}
          /                                 /
    (start)---- [counter] -- [client] -- [*]
          \                                 \__ [*] -- [size] -- {R2}
           '--- [timer]

### Finding a result state in FSM

`func (f *FSM) GetMapping(statsdMetric string, statsdMetricType string)
(*mappingState, []string)`

For example, when mapping `client.aaa.request.count` with `counter` type in the
FSM, the `^1` to `^7` markers show how the FSM traverses its tree:

           ____ [gauge]                      __ [request] -- [count] -- {R1}
          /                                 /   ^5           ^6         ^7
    (start)---- [counter] -- [client] -- [*]
      ^1  \      ^2           ^3            \__ [*] -- [size] -- {R2}
           '--- [timer]                          ^4

To map `client.bbb.request.size`, the FSM backtracks:

           ____ [gauge]                      __ [request] -- [count] -- {R1}
          /                                 /   ^5           ^6
    (start)---- [counter] -- [client] -- [*]
      ^1  \      ^2           ^3            \__ [*] -- [size] -- {R2}
           '--- [timer]                          ^4
                                                 ^7      ^8         ^9
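
In code, the traversal above corresponds roughly to the following sketch. This is hypothetical usage, not part of the package: the import path, the `maxPossibleTransitions` value of 16, and the `"R1"`/`"R2"` result payloads are assumptions for illustration.

```go
package main

import (
    "fmt"

    "github.com/prometheus/statsd_exporter/pkg/mapper/fsm"
)

func main() {
    // Three metric-type start states, as in the diagrams above.
    f := fsm.NewFSM([]string{"counter", "gauge", "timer"}, 16, false)
    f.AddState("client.*.request.count", "counter", 16, "R1")
    f.AddState("client.*.*.size", "counter", 16, "R2")

    // Walk counter -> client -> * -> request -> count, capturing "aaa".
    state, captures := f.GetMapping("client.aaa.request.count", "counter")
    if state != nil && state.Result != nil {
        fmt.Println(state.Result, captures) // expected: R1 [aaa]
    }
}
```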
## Debugging

To see all the states of the current FSM, use `func (f *FSM) DumpFSM(w io.Writer)`
to dump it into a Dot file. The Dot file can then be rendered into an image using:

```shell
$ dot -Tpng dump.dot > dump.png
```

In the StatsD exporter, one could use the following:

```shell
$ statsd_exporter --statsd.mapping-config=statsd.rules --debug.dump-fsm=dump.dot
$ dot -Tpng dump.dot > dump.png
```

For example, the following rules:

```yaml
mappings:
- match: client.*.request.count
  name: request_count
  match_metric_type: counter
  labels:
    client: $1

- match: client.*.*.size
  name: sizes
  match_metric_type: counter
  labels:
    client: $1
    direction: $2
```

will be rendered as:

![FSM](fsm.png)

The `dot` program is part of [Graphviz](https://www.graphviz.org/) and is
available for most popular operating systems.


@@ -1,48 +1,48 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package fsm

import (
    "fmt"
    "io"
)

// DumpFSM accepts an io.Writer and writes the current FSM in Dot file format.
func (f *FSM) DumpFSM(w io.Writer) {
    idx := 0
    states := make(map[int]*mappingState)
    states[idx] = f.root

    w.Write([]byte("digraph g {\n"))
    w.Write([]byte("rankdir=LR\n"))                                                    // lay out the graph left to right
    w.Write([]byte("node [ label=\"\",style=filled,fillcolor=white,shape=circle ]\n")) // remove label of node

    for idx < len(states) {
        for field, transition := range states[idx].transitions {
            states[len(states)] = transition
            w.Write([]byte(fmt.Sprintf("%d -> %d  [label = \"%s\"];\n", idx, len(states)-1, field)))
            if idx == 0 {
                // color for metric types
                w.Write([]byte(fmt.Sprintf("%d [color=\"#D6B656\",fillcolor=\"#FFF2CC\"];\n", len(states)-1)))
            } else if transition.transitions == nil || len(transition.transitions) == 0 {
                // color for end state
                w.Write([]byte(fmt.Sprintf("%d [color=\"#82B366\",fillcolor=\"#D5E8D4\"];\n", len(states)-1)))
            }
        }
        idx++
    }
    // color for start state
    w.Write([]byte(fmt.Sprintf("0 [color=\"#a94442\",fillcolor=\"#f2dede\"];\n")))
    w.Write([]byte("}"))
}
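
The same dump can be produced programmatically; a minimal sketch, assuming the import path below and a trivially small FSM:

```go
package main

import (
    "os"

    "github.com/prometheus/statsd_exporter/pkg/mapper/fsm"
)

func main() {
    // Build a tiny FSM with a single rule, then write it out as Dot.
    f := fsm.NewFSM([]string{"counter", "gauge", "timer"}, 4, false)
    f.AddState("client.*.request.count", "counter", 4, "R1")

    out, err := os.Create("dump.dot")
    if err != nil {
        panic(err)
    }
    defer out.Close()
    f.DumpFSM(out) // render with: dot -Tpng dump.dot > dump.png
}
```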


@@ -1,76 +1,76 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package fsm

import (
    "fmt"
    "regexp"
    "strconv"
    "strings"
)

var (
    templateReplaceCaptureRE = regexp.MustCompile(`\$\{?([a-zA-Z0-9_\$]+)\}?`)
)

type TemplateFormatter struct {
    captureIndexes []int
    captureCount   int
    fmtString      string
}

// NewTemplateFormatter instantiates a TemplateFormatter
// from the given template string and the maximum number of captures.
func NewTemplateFormatter(template string, captureCount int) *TemplateFormatter {
    matches := templateReplaceCaptureRE.FindAllStringSubmatch(template, -1)
    if len(matches) == 0 {
        // if no regex reference found, keep it as it is
        return &TemplateFormatter{captureCount: 0, fmtString: template}
    }

    var indexes []int
    valueFormatter := template
    for _, match := range matches {
        idx, err := strconv.Atoi(match[len(match)-1])
        if err != nil || idx > captureCount || idx < 1 {
            // if index larger than captured count or using unsupported named capture group,
            // replace with empty string
            valueFormatter = strings.Replace(valueFormatter, match[0], "", -1)
        } else {
            valueFormatter = strings.Replace(valueFormatter, match[0], "%s", -1)
            // note: the regex reference variable $? starts from 1
            indexes = append(indexes, idx-1)
        }
    }
    return &TemplateFormatter{
        captureIndexes: indexes,
        captureCount:   len(indexes),
        fmtString:      valueFormatter,
    }
}

// Format accepts a list containing captured strings and returns the formatted
// string using the template stored in the current TemplateFormatter.
func (formatter *TemplateFormatter) Format(captures []string) string {
    if formatter.captureCount == 0 {
        // no label substitution, keep as it is
        return formatter.fmtString
    }
    indexes := formatter.captureIndexes
    vargs := make([]interface{}, formatter.captureCount)
    for i, idx := range indexes {
        vargs[i] = captures[idx]
    }
    return fmt.Sprintf(formatter.fmtString, vargs...)
}
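
As a hypothetical illustration (assuming the import path below), note that braces around the capture index matter: in `client_$1_$2` the regexp above would greedily read `1_$2` as one reference (both `_` and `$` are in its character class), fail `Atoi`, and drop it, while the braced form substitutes cleanly.

```go
package main

import (
    "fmt"

    "github.com/prometheus/statsd_exporter/pkg/mapper/fsm"
)

func main() {
    // Two braced capture references, so captureCount is 2.
    tf := fsm.NewTemplateFormatter("client_${1}_${2}", 2)
    fmt.Println(tf.Format([]string{"web", "get"})) // client_web_get

    // A reference beyond captureCount is replaced with the empty string.
    fmt.Println(fsm.NewTemplateFormatter("host_$3", 2).Format(nil)) // host_
}
```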


@@ -1,326 +1,326 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package fsm

import (
	"regexp"
	"strings"

	"github.com/prometheus/common/log"
)
type mappingState struct {
	transitions        map[string]*mappingState
	minRemainingLength int
	maxRemainingLength int
	// the Result fields stay nil unless a metric mapping ends at this state
	Result         interface{}
	ResultPriority int
}

type fsmBacktrackStackCursor struct {
	fieldIndex     int
	captureIndex   int
	currentCapture string
	state          *mappingState
	prev           *fsmBacktrackStackCursor
	next           *fsmBacktrackStackCursor
}

type FSM struct {
	root               *mappingState
	metricTypes        []string
	statesCount        int
	BacktrackingNeeded bool
	OrderingDisabled   bool
}

// NewFSM creates a new FSM instance
func NewFSM(metricTypes []string, maxPossibleTransitions int, orderingDisabled bool) *FSM {
	fsm := FSM{}
	root := &mappingState{}
	root.transitions = make(map[string]*mappingState, len(metricTypes))

	for _, field := range metricTypes {
		state := &mappingState{}
		state.transitions = make(map[string]*mappingState, maxPossibleTransitions)
		root.transitions[string(field)] = state
	}
	fsm.OrderingDisabled = orderingDisabled
	fsm.metricTypes = metricTypes
	fsm.statesCount = 0
	fsm.root = root
	return &fsm
}

// AddState adds a mapping rule to the existing FSM.
// The maxPossibleTransitions parameter sets the expected count of remaining transitions.
// The result parameter is the value returned when GetMapping finds a match.
func (f *FSM) AddState(match string, matchMetricType string, maxPossibleTransitions int, result interface{}) int {
	// first split the match by "."
	matchFields := strings.Split(match, ".")
	// fill into our FSM
	roots := []*mappingState{}
	// the first state is the metric type
	if matchMetricType == "" {
		// if the metric type is not specified, connect the start state to all three types
		for _, metricType := range f.metricTypes {
			roots = append(roots, f.root.transitions[string(metricType)])
		}
	} else {
		roots = append(roots, f.root.transitions[matchMetricType])
	}
	var captureCount int
	var finalStates []*mappingState
	// iterate over the start states (one per metric type)
	for _, root := range roots {
		captureCount = 0
		// for each start state, connect from start state to end state
		for i, field := range matchFields {
			state, prs := root.transitions[field]
			if !prs {
				// create the state if it does not yet exist in the fsm
				state = &mappingState{}
				state.transitions = make(map[string]*mappingState, maxPossibleTransitions)
				state.maxRemainingLength = len(matchFields) - i - 1
				state.minRemainingLength = len(matchFields) - i - 1
				root.transitions[field] = state
				// if this is the last field, set the result to the current mapping instance
				if i == len(matchFields)-1 {
					root.transitions[field].Result = result
				}
			} else {
				state.maxRemainingLength = max(len(matchFields)-i-1, state.maxRemainingLength)
				state.minRemainingLength = min(len(matchFields)-i-1, state.minRemainingLength)
			}
			if field == "*" {
				captureCount++
			}
			// advance to the next state
			root = state
		}
		finalStates = append(finalStates, root)
	}
	for _, state := range finalStates {
		state.ResultPriority = f.statesCount
	}
	f.statesCount++
	return captureCount
}
// GetMapping uses the fsm to find a rule matching the given statsdMetric and statsdMetricType.
// If it finds a match, the final state and the captured strings are returned;
// if there is no match, nil and an empty list are returned.
func (f *FSM) GetMapping(statsdMetric string, statsdMetricType string) (*mappingState, []string) {
	matchFields := strings.Split(statsdMetric, ".")
	currentState := f.root.transitions[statsdMetricType]

	// the cursor/pointer into the backtrack stack, implemented as a doubly linked list
	var backtrackCursor *fsmBacktrackStackCursor
	resumeFromBacktrack := false

	// the return variable
	var finalState *mappingState

	captures := make([]string, len(matchFields))
	finalCaptures := make([]string, len(matchFields))
	// keep track of the capture count so we don't need to append() to captures
	captureIdx := 0
	fieldsCount := len(matchFields)
	i := 0
	var state *mappingState
	for { // the loop for backtracking
		for { // the loop for a single "depth only" search
			var present bool
			// if we resumed from a backtrack, skip this branch:
			// the state at the end of it was already saved
			if !resumeFromBacktrack {
				if len(currentState.transitions) > 0 {
					field := matchFields[i]
					state, present = currentState.transitions[field]
					fieldsLeft := fieldsCount - i - 1
					// also compare the length upfront to avoid an unnecessary loop or backtrack
					if !present || fieldsLeft > state.maxRemainingLength || fieldsLeft < state.minRemainingLength {
						state, present = currentState.transitions["*"]
						if !present || fieldsLeft > state.maxRemainingLength || fieldsLeft < state.minRemainingLength {
							break
						} else {
							captures[captureIdx] = field
							captureIdx++
						}
					} else if f.BacktrackingNeeded {
						// if backtracking is needed, also check for the alternative transition, i.e. *
						altState, present := currentState.transitions["*"]
						if present && fieldsLeft <= altState.maxRemainingLength && fieldsLeft >= altState.minRemainingLength {
							// push onto the backtracking stack
							newCursor := fsmBacktrackStackCursor{prev: backtrackCursor, state: altState,
								fieldIndex:     i,
								captureIndex:   captureIdx,
								currentCapture: field,
							}
							// if this is not the first time, connect to the previous cursor
							if backtrackCursor != nil {
								backtrackCursor.next = &newCursor
							}
							backtrackCursor = &newCursor
						}
					}
				} else {
					// no more transitions for this state
					break
				}
			} // backtrack will resume from here

			// have we reached a final state?
			if state.Result != nil && i == fieldsCount-1 {
				if f.OrderingDisabled {
					finalState = state
					return finalState, captures
				} else if finalState == nil || finalState.ResultPriority > state.ResultPriority {
					// if we care about ordering, try to find the result with the highest priority
					finalState = state
					// do a deep copy to preserve the current captures
					copy(finalCaptures, captures)
				}
				break
			}

			i++
			if i >= fieldsCount {
				break
			}

			resumeFromBacktrack = false
			currentState = state
		}
		if backtrackCursor == nil {
			// we are not backtracking, or all paths have been traversed
			break
		} else {
			// pop one cursor from the stack
			state = backtrackCursor.state
			currentState = state
			i = backtrackCursor.fieldIndex
			captureIdx = backtrackCursor.captureIndex + 1
			// put the * capture back
			captures[captureIdx-1] = backtrackCursor.currentCapture
			backtrackCursor = backtrackCursor.prev
			if backtrackCursor != nil {
				// deref for GC
				backtrackCursor.next = nil
			}
			resumeFromBacktrack = true
		}
	}
	return finalState, finalCaptures
}
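To make the walk above concrete, here is a minimal sketch (not part of the diff) that builds an FSM with one glob rule and resolves a metric against it; the `"rule-0"` payload stands in for the `*MetricMapping` the mapper package normally stores in `Result`:

```go
package main

import (
	"fmt"

	"github.com/prometheus/statsd_exporter/pkg/mapper/fsm"
)

func main() {
	f := fsm.NewFSM([]string{"counter", "gauge", "timer"}, 1, false)
	// each "*" in the match becomes one capture; the result payload is opaque to the FSM
	f.AddState("client.*.request.count", "counter", 1, "rule-0")

	state, captures := f.GetMapping("client.abc.request.count", "counter")
	if state != nil && state.Result != nil {
		fmt.Println(state.Result, captures[0]) // rule-0 abc
	}
}
```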
// TestIfNeedBacktracking tests whether backtracking is needed for the given
// list of mappings and whether ordering is disabled.
func TestIfNeedBacktracking(mappings []string, orderingDisabled bool) bool {
	backtrackingNeeded := false
	// a rule A that contains * forces backtracking when other transitions
	// leave the same state: a greedy walk may take a literal edge and dead-end
	ruleByLength := make(map[int][]string)
	ruleREByLength := make(map[int][]*regexp.Regexp)

	// first group the rules by length
	for _, mapping := range mappings {
		l := len(strings.Split(mapping, "."))
		ruleByLength[l] = append(ruleByLength[l], mapping)

		metricRe := strings.Replace(mapping, ".", "\\.", -1)
		metricRe = strings.Replace(metricRe, "*", "([^.]*)", -1)
		regex, err := regexp.Compile("^" + metricRe + "$")
		if err != nil {
			log.Warnf("invalid match %s. cannot compile regex in mapping: %v", mapping, err)
		}
		// append regardless of error; nil entries are skipped later
		ruleREByLength[l] = append(ruleREByLength[l], regex)
	}

	for l, rules := range ruleByLength {
		if len(rules) == 1 {
			continue
		}
		rulesRE := ruleREByLength[l]
		for i1, r1 := range rules {
			currentRuleNeedBacktrack := false
			re1 := rulesRE[i1]
			if re1 == nil || !strings.Contains(r1, "*") {
				continue
			}
			// if rule r1 is A.B.C.*.E.*, is there a rule r2 like A.B.C.D.x.x or A.B.C.*.E.F? (x is any string or *)
			// if such an r2 exists, then matching r1 requires backtracking
			for index := 0; index < len(r1); index++ {
				if r1[index] != '*' {
					continue
				}
				// translate the substring of r1 from 0 to the index of the current * into a regex
				// A.B.C.*.E.* becomes ^A\.B\.C\. and ^A\.B\.C\.\*\.E\.
				reStr := strings.Replace(r1[:index], ".", "\\.", -1)
				reStr = strings.Replace(reStr, "*", "\\*", -1)
				re := regexp.MustCompile("^" + reStr)
				for i2, r2 := range rules {
					if i2 == i1 {
						continue
					}
					if len(re.FindStringSubmatchIndex(r2)) > 0 {
						currentRuleNeedBacktrack = true
						break
					}
				}
			}

			for i2, r2 := range rules {
				if i2 != i1 && len(re1.FindStringSubmatchIndex(r2)) > 0 {
					// log if we care about ordering and the superset occurs first
					if !orderingDisabled && i1 < i2 {
						log.Warnf("match \"%s\" is a superset of match \"%s\" but in a lower order, "+
							"the first will never be matched", r1, r2)
					}
					currentRuleNeedBacktrack = false
				}
			}
			for i2, re2 := range rulesRE {
				if i2 == i1 || re2 == nil {
					continue
				}
				// if r1 is a subset of another rule, we don't need to backtrack:
				// either ordering is enabled,
				// or ordering is disabled and we can't match it even with backtracking
				if len(re2.FindStringSubmatchIndex(r1)) > 0 {
					currentRuleNeedBacktrack = false
				}
			}

			if currentRuleNeedBacktrack {
				log.Warnf("backtracking required because of match \"%s\", "+
					"matching performance may be degraded", r1)
				backtrackingNeeded = true
			}
		}
	}

	// backtracking is always needed if ordering of rules is not disabled,
	// since transitions are stored in an (unordered) map
	// note: don't move this branch to the beginning of this function,
	// since we need the logs for superset rules
	return !orderingDisabled || backtrackingNeeded
}
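A small sketch of the rule-interaction logic (illustrative, not part of the diff): `a.b.d` shares the prefix `a.` with the glob rule `a.*.c`, so a greedy walk can take the literal `b` edge, dead-end at `d`, and need to back up to the `*` branch.

```go
package main

import (
	"fmt"

	"github.com/prometheus/statsd_exporter/pkg/mapper/fsm"
)

func main() {
	// a literal rule diverges under the same prefix as a glob rule
	fmt.Println(fsm.TestIfNeedBacktracking([]string{"a.*.c", "a.b.d"}, true)) // true

	// a single rule can never conflict with itself
	fmt.Println(fsm.TestIfNeedBacktracking([]string{"a.*.c"}, true)) // false
}
```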


@ -1,30 +1,30 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package fsm

// min and max implementations for integers
func min(x, y int) int {
	if x < y {
		return x
	}
	return y
}

func max(x, y int) int {
	if x > y {
		return x
	}
	return y
}


@ -1,386 +1,386 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mapper

import (
	"fmt"
	"io/ioutil"
	"regexp"
	"sync"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/log"

	"github.com/prometheus/statsd_exporter/pkg/mapper/fsm"
	yaml "gopkg.in/yaml.v2"
)

var (
	statsdMetricRE    = `[a-zA-Z_](-?[a-zA-Z0-9_])+`
	templateReplaceRE = `(\$\{?\d+\}?)`

	metricLineRE = regexp.MustCompile(`^(\*\.|` + statsdMetricRE + `\.)+(\*|` + statsdMetricRE + `)$`)
	metricNameRE = regexp.MustCompile(`^([a-zA-Z_]|` + templateReplaceRE + `)([a-zA-Z0-9_]|` + templateReplaceRE + `)*$`)
	labelNameRE  = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]+$`)
)
type mapperConfigDefaults struct {
	TimerType           TimerType         `yaml:"timer_type"`
	Buckets             []float64         `yaml:"buckets"`
	Quantiles           []metricObjective `yaml:"quantiles"`
	MatchType           MatchType         `yaml:"match_type"`
	GlobDisableOrdering bool              `yaml:"glob_disable_ordering"`
	Ttl                 time.Duration     `yaml:"ttl"`
}

type MetricMapper struct {
	Defaults mapperConfigDefaults `yaml:"defaults"`
	Mappings []MetricMapping      `yaml:"mappings"`
	FSM      *fsm.FSM
	doFSM    bool
	doRegex  bool
	cache    MetricMapperCache
	mutex    sync.RWMutex

	MappingsCount prometheus.Gauge
}

type MetricMapping struct {
	Match            string `yaml:"match"`
	Name             string `yaml:"name"`
	nameFormatter    *fsm.TemplateFormatter
	regex            *regexp.Regexp
	Labels           prometheus.Labels `yaml:"labels"`
	labelKeys        []string
	labelFormatters  []*fsm.TemplateFormatter
	TimerType        TimerType         `yaml:"timer_type"`
	LegacyBuckets    []float64         `yaml:"buckets"`
	LegacyQuantiles  []metricObjective `yaml:"quantiles"`
	MatchType        MatchType         `yaml:"match_type"`
	HelpText         string            `yaml:"help"`
	Action           ActionType        `yaml:"action"`
	MatchMetricType  MetricType        `yaml:"match_metric_type"`
	Ttl              time.Duration     `yaml:"ttl"`
	SummaryOptions   *SummaryOptions   `yaml:"summary_options"`
	HistogramOptions *HistogramOptions `yaml:"histogram_options"`
}

type SummaryOptions struct {
	Quantiles  []metricObjective `yaml:"quantiles"`
	MaxAge     time.Duration     `yaml:"max_age"`
	AgeBuckets uint32            `yaml:"age_buckets"`
	BufCap     uint32            `yaml:"buf_cap"`
}

type HistogramOptions struct {
	Buckets []float64 `yaml:"buckets"`
}

type metricObjective struct {
	Quantile float64 `yaml:"quantile"`
	Error    float64 `yaml:"error"`
}

var defaultQuantiles = []metricObjective{
	{Quantile: 0.5, Error: 0.05},
	{Quantile: 0.9, Error: 0.01},
	{Quantile: 0.99, Error: 0.001},
}
func (m *MetricMapper) InitFromYAMLString(fileContents string, cacheSize int, options ...CacheOption) error {
	var n MetricMapper

	if err := yaml.Unmarshal([]byte(fileContents), &n); err != nil {
		return err
	}

	if len(n.Defaults.Buckets) == 0 {
		n.Defaults.Buckets = prometheus.DefBuckets
	}

	if len(n.Defaults.Quantiles) == 0 {
		n.Defaults.Quantiles = defaultQuantiles
	}

	if n.Defaults.MatchType == MatchTypeDefault {
		n.Defaults.MatchType = MatchTypeGlob
	}

	remainingMappingsCount := len(n.Mappings)

	n.FSM = fsm.NewFSM([]string{string(MetricTypeCounter), string(MetricTypeGauge), string(MetricTypeTimer)},
		remainingMappingsCount, n.Defaults.GlobDisableOrdering)

	for i := range n.Mappings {
		remainingMappingsCount--
		currentMapping := &n.Mappings[i]

		// check that the label names are valid
		for k := range currentMapping.Labels {
			if !labelNameRE.MatchString(k) {
				return fmt.Errorf("invalid label key: %s", k)
			}
		}

		if currentMapping.Name == "" {
			return fmt.Errorf("line %d: metric mapping didn't set a metric name", i)
		}

		if !metricNameRE.MatchString(currentMapping.Name) {
			return fmt.Errorf("metric name '%s' doesn't match regex '%s'", currentMapping.Name, metricNameRE)
		}

		if currentMapping.MatchType == "" {
			currentMapping.MatchType = n.Defaults.MatchType
		}

		if currentMapping.Action == "" {
			currentMapping.Action = ActionTypeMap
		}

		if currentMapping.MatchType == MatchTypeGlob {
			n.doFSM = true
			if !metricLineRE.MatchString(currentMapping.Match) {
				return fmt.Errorf("invalid match: %s", currentMapping.Match)
			}

			captureCount := n.FSM.AddState(currentMapping.Match, string(currentMapping.MatchMetricType),
				remainingMappingsCount, currentMapping)

			currentMapping.nameFormatter = fsm.NewTemplateFormatter(currentMapping.Name, captureCount)

			labelKeys := make([]string, len(currentMapping.Labels))
			labelFormatters := make([]*fsm.TemplateFormatter, len(currentMapping.Labels))
			labelIndex := 0
			for label, valueExpr := range currentMapping.Labels {
				labelKeys[labelIndex] = label
				labelFormatters[labelIndex] = fsm.NewTemplateFormatter(valueExpr, captureCount)
				labelIndex++
			}
			currentMapping.labelFormatters = labelFormatters
			currentMapping.labelKeys = labelKeys
		} else {
			if regex, err := regexp.Compile(currentMapping.Match); err != nil {
				return fmt.Errorf("invalid regex %s in mapping: %v", currentMapping.Match, err)
			} else {
				currentMapping.regex = regex
			}
			n.doRegex = true
		}

		if currentMapping.TimerType == "" {
			currentMapping.TimerType = n.Defaults.TimerType
		}

		if currentMapping.LegacyQuantiles != nil &&
			(currentMapping.SummaryOptions == nil || currentMapping.SummaryOptions.Quantiles != nil) {
			log.Warn("using the top-level quantiles is deprecated. Please use quantiles in the summary_options hierarchy")
		}

		if currentMapping.LegacyBuckets != nil &&
			(currentMapping.HistogramOptions == nil || currentMapping.HistogramOptions.Buckets != nil) {
			log.Warn("using the top-level buckets is deprecated. Please use buckets in the histogram_options hierarchy")
		}

		if currentMapping.SummaryOptions != nil &&
			currentMapping.LegacyQuantiles != nil &&
			currentMapping.SummaryOptions.Quantiles != nil {
			return fmt.Errorf("cannot use quantiles in both the top level and summary options at the same time in %s", currentMapping.Match)
		}

		if currentMapping.HistogramOptions != nil &&
			currentMapping.LegacyBuckets != nil &&
			currentMapping.HistogramOptions.Buckets != nil {
			return fmt.Errorf("cannot use buckets in both the top level and histogram options at the same time in %s", currentMapping.Match)
		}

		if currentMapping.TimerType == TimerTypeHistogram {
			if currentMapping.SummaryOptions != nil {
				return fmt.Errorf("cannot use histogram timer and summary options at the same time")
			}
			if currentMapping.HistogramOptions == nil {
				currentMapping.HistogramOptions = &HistogramOptions{}
			}
			if len(currentMapping.LegacyBuckets) != 0 {
				currentMapping.HistogramOptions.Buckets = currentMapping.LegacyBuckets
			}
			if len(currentMapping.HistogramOptions.Buckets) == 0 {
				currentMapping.HistogramOptions.Buckets = n.Defaults.Buckets
			}
		}

		if currentMapping.TimerType == TimerTypeSummary {
			if currentMapping.HistogramOptions != nil {
				return fmt.Errorf("cannot use summary timer and histogram options at the same time")
			}
			if currentMapping.SummaryOptions == nil {
				currentMapping.SummaryOptions = &SummaryOptions{}
			}
			if len(currentMapping.LegacyQuantiles) != 0 {
				currentMapping.SummaryOptions.Quantiles = currentMapping.LegacyQuantiles
			}
			if len(currentMapping.SummaryOptions.Quantiles) == 0 {
				currentMapping.SummaryOptions.Quantiles = n.Defaults.Quantiles
			}
		}

		if currentMapping.Ttl == 0 && n.Defaults.Ttl > 0 {
			currentMapping.Ttl = n.Defaults.Ttl
		}
	}

	m.mutex.Lock()
	defer m.mutex.Unlock()

	m.Defaults = n.Defaults
	m.Mappings = n.Mappings

	m.InitCache(cacheSize, options...)

	if n.doFSM {
		var mappings []string
		for _, mapping := range n.Mappings {
			if mapping.MatchType == MatchTypeGlob {
				mappings = append(mappings, mapping.Match)
			}
		}
		n.FSM.BacktrackingNeeded = fsm.TestIfNeedBacktracking(mappings, n.FSM.OrderingDisabled)

		m.FSM = n.FSM
		m.doRegex = n.doRegex
	}
	m.doFSM = n.doFSM

	if m.MappingsCount != nil {
		m.MappingsCount.Set(float64(len(n.Mappings)))
	}

	return nil
}
func (m *MetricMapper) InitFromFile(fileName string, cacheSize int, options ...CacheOption) error {
	mappingStr, err := ioutil.ReadFile(fileName)
	if err != nil {
		return err
	}

	return m.InitFromYAMLString(string(mappingStr), cacheSize, options...)
}

func (m *MetricMapper) InitCache(cacheSize int, options ...CacheOption) {
	if cacheSize == 0 {
		m.cache = NewMetricMapperNoopCache()
	} else {
		o := cacheOptions{
			cacheType: "lru",
		}
		for _, f := range options {
			f(&o)
		}

		var (
			cache MetricMapperCache
			err   error
		)
		switch o.cacheType {
		case "lru":
			cache, err = NewMetricMapperCache(cacheSize)
		case "random":
			cache, err = NewMetricMapperRRCache(cacheSize)
		default:
			err = fmt.Errorf("unsupported cache type %q", o.cacheType)
		}

		if err != nil {
			log.Fatalf("Unable to setup metric cache. Caused by: %s", err)
		}
		m.cache = cache
	}
}
func (m *MetricMapper) GetMapping(statsdMetric string, statsdMetricType MetricType) (*MetricMapping, prometheus.Labels, bool) {
	m.mutex.RLock()
	defer m.mutex.RUnlock()

	result, cached := m.cache.Get(statsdMetric, statsdMetricType)
	if cached {
		return result.Mapping, result.Labels, result.Matched
	}

	// glob matching
	if m.doFSM {
		finalState, captures := m.FSM.GetMapping(statsdMetric, string(statsdMetricType))
		if finalState != nil && finalState.Result != nil {
			v := finalState.Result.(*MetricMapping)
			result := copyMetricMapping(v)
			result.Name = result.nameFormatter.Format(captures)

			labels := prometheus.Labels{}
			for index, formatter := range result.labelFormatters {
				labels[result.labelKeys[index]] = formatter.Format(captures)
			}

			m.cache.AddMatch(statsdMetric, statsdMetricType, result, labels)

			return result, labels, true
		} else if !m.doRegex {
			// if there is no regex match type, return immediately
			m.cache.AddMiss(statsdMetric, statsdMetricType)
			return nil, nil, false
		}
	}

	// regex matching
	for _, mapping := range m.Mappings {
		// a rule without the regex match type leaves the regex field unset
		if mapping.regex == nil {
			continue
		}
		matches := mapping.regex.FindStringSubmatchIndex(statsdMetric)
		if len(matches) == 0 {
			continue
		}

		mapping.Name = string(mapping.regex.ExpandString(
			[]byte{},
			mapping.Name,
			statsdMetric,
			matches,
		))

		if mt := mapping.MatchMetricType; mt != "" && mt != statsdMetricType {
			continue
		}

		labels := prometheus.Labels{}
		for label, valueExpr := range mapping.Labels {
			value := mapping.regex.ExpandString([]byte{}, valueExpr, statsdMetric, matches)
			labels[label] = string(value)
		}

		m.cache.AddMatch(statsdMetric, statsdMetricType, &mapping, labels)

		return &mapping, labels, true
	}

	m.cache.AddMiss(statsdMetric, statsdMetricType)
	return nil, nil, false
}

// make a shallow copy so that we do not overwrite the name,
// since multiple metric names can match the same mapping
func copyMetricMapping(in *MetricMapping) *MetricMapping {
	out := *in
	return &out
}
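End to end, the mapper above is typically driven as in this sketch (standalone, not part of the diff; the mapping follows the `test.dispatcher.*.*` example from the project README, and a cache size of 0 selects the noop cache):

```go
package main

import (
	"fmt"

	"github.com/prometheus/statsd_exporter/pkg/mapper"
)

func main() {
	m := &mapper.MetricMapper{}
	config := `mappings:
- match: test.dispatcher.*.*
  name: "dispatcher_events_total"
  labels:
    processor: "$1"
    outcome: "$2"`
	if err := m.InitFromYAMLString(config, 0); err != nil {
		panic(err)
	}

	mapping, labels, ok := m.GetMapping("test.dispatcher.FooProcessor.send", mapper.MetricTypeCounter)
	fmt.Println(mapping.Name, labels, ok)
	// dispatcher_events_total map[outcome:send processor:FooProcessor] true
}
```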

File diff suppressed because it is too large


@ -1,198 +1,198 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mapper

import (
	"sync"

	lru "github.com/hashicorp/golang-lru"

	"github.com/prometheus/client_golang/prometheus"
)
var (
	cacheLength = prometheus.NewGauge(
		prometheus.GaugeOpts{
			Name: "statsd_metric_mapper_cache_length",
			Help: "The count of unique metrics currently cached.",
		},
	)
	cacheGetsTotal = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_metric_mapper_cache_gets_total",
			Help: "The count of total metric cache gets.",
		},
	)
	cacheHitsTotal = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "statsd_metric_mapper_cache_hits_total",
			Help: "The count of total metric cache hits.",
		},
	)
)

type cacheOptions struct {
	cacheType string
}

type CacheOption func(*cacheOptions)

func WithCacheType(cacheType string) CacheOption {
	return func(o *cacheOptions) {
		o.cacheType = cacheType
	}
}
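A short sketch of wiring the option in (illustrative, not part of the diff; `mapping.yml` is a placeholder path): passing `WithCacheType("random")` to the init functions selects the random-replacement cache below instead of the default LRU.

```go
package main

import (
	"log"

	"github.com/prometheus/statsd_exporter/pkg/mapper"
)

func main() {
	m := &mapper.MetricMapper{}
	// cache up to 1000 entries, evicting randomly instead of least-recently-used
	if err := m.InitFromFile("mapping.yml", 1000, mapper.WithCacheType("random")); err != nil {
		log.Fatalf("mapper init failed: %v", err)
	}
}
```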
type MetricMapperCacheResult struct {
	Mapping *MetricMapping
	Matched bool
	Labels  prometheus.Labels
}

type MetricMapperCache interface {
	Get(metricString string, metricType MetricType) (*MetricMapperCacheResult, bool)
	AddMatch(metricString string, metricType MetricType, mapping *MetricMapping, labels prometheus.Labels)
	AddMiss(metricString string, metricType MetricType)
}

type MetricMapperLRUCache struct {
	MetricMapperCache
	cache *lru.Cache
}

type MetricMapperNoopCache struct {
	MetricMapperCache
}

func NewMetricMapperCache(size int) (*MetricMapperLRUCache, error) {
	cacheLength.Set(0)
	cache, err := lru.New(size)
	if err != nil {
		return &MetricMapperLRUCache{}, err
	}
	return &MetricMapperLRUCache{cache: cache}, nil
}

func (m *MetricMapperLRUCache) Get(metricString string, metricType MetricType) (*MetricMapperCacheResult, bool) {
	cacheGetsTotal.Inc()
	if result, ok := m.cache.Get(formatKey(metricString, metricType)); ok {
		cacheHitsTotal.Inc()
		return result.(*MetricMapperCacheResult), true
	} else {
		return nil, false
	}
}

func (m *MetricMapperLRUCache) AddMatch(metricString string, metricType MetricType, mapping *MetricMapping, labels prometheus.Labels) {
	go m.trackCacheLength()
	m.cache.Add(formatKey(metricString, metricType), &MetricMapperCacheResult{Mapping: mapping, Matched: true, Labels: labels})
}

func (m *MetricMapperLRUCache) AddMiss(metricString string, metricType MetricType) {
	go m.trackCacheLength()
	m.cache.Add(formatKey(metricString, metricType), &MetricMapperCacheResult{Matched: false})
}

func (m *MetricMapperLRUCache) trackCacheLength() {
	cacheLength.Set(float64(m.cache.Len()))
}

func formatKey(metricString string, metricType MetricType) string {
	return string(metricType) + "." + metricString
}
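Cache keys prefix the metric string with its statsd type, e.g. `counter.test.dispatcher.FooProcessor.send`, so the same metric name observed as different types occupies distinct cache entries.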
func NewMetricMapperNoopCache() *MetricMapperNoopCache {
	cacheLength.Set(0)
	return &MetricMapperNoopCache{}
}

func (m *MetricMapperNoopCache) Get(metricString string, metricType MetricType) (*MetricMapperCacheResult, bool) {
	return nil, false
}

func (m *MetricMapperNoopCache) AddMatch(metricString string, metricType MetricType, mapping *MetricMapping, labels prometheus.Labels) {
}

func (m *MetricMapperNoopCache) AddMiss(metricString string, metricType MetricType) {
}
type MetricMapperRRCache struct {
	MetricMapperCache
	lock  sync.RWMutex
	size  int
	items map[string]*MetricMapperCacheResult
}

func NewMetricMapperRRCache(size int) (*MetricMapperRRCache, error) {
	cacheLength.Set(0)
	c := &MetricMapperRRCache{
		items: make(map[string]*MetricMapperCacheResult, size+1),
		size:  size,
	}
	return c, nil
}

func (m *MetricMapperRRCache) Get(metricString string, metricType MetricType) (*MetricMapperCacheResult, bool) {
	key := formatKey(metricString, metricType)

	m.lock.RLock()
	result, ok := m.items[key]
	m.lock.RUnlock()

	return result, ok
}

func (m *MetricMapperRRCache) addItem(metricString string, metricType MetricType, result *MetricMapperCacheResult) {
	go m.trackCacheLength()

	key := formatKey(metricString, metricType)

	m.lock.Lock()
	m.items[key] = result

	// evict an arbitrary entry if we are over capacity; Go's unspecified map
	// iteration order makes this a cheap random-replacement policy
	if len(m.items) > m.size {
		for k := range m.items {
			delete(m.items, k)
			break
		}
	}

	m.lock.Unlock()
}

func (m *MetricMapperRRCache) AddMatch(metricString string, metricType MetricType, mapping *MetricMapping, labels prometheus.Labels) {
	e := &MetricMapperCacheResult{Mapping: mapping, Matched: true, Labels: labels}
	m.addItem(metricString, metricType, e)
}

func (m *MetricMapperRRCache) AddMiss(metricString string, metricType MetricType) {
	e := &MetricMapperCacheResult{Matched: false}
	m.addItem(metricString, metricType, e)
}

func (m *MetricMapperRRCache) trackCacheLength() {
	m.lock.RLock()
	length := len(m.items)
	m.lock.RUnlock()
	cacheLength.Set(float64(length))
}

func init() {
	prometheus.MustRegister(cacheLength)
	prometheus.MustRegister(cacheGetsTotal)
	prometheus.MustRegister(cacheHitsTotal)
}

File diff suppressed because it is too large


@ -1,41 +1,41 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mapper

import "fmt"

type MatchType string

const (
	MatchTypeGlob    MatchType = "glob"
	MatchTypeRegex   MatchType = "regex"
	MatchTypeDefault MatchType = ""
)

func (t *MatchType) UnmarshalYAML(unmarshal func(interface{}) error) error {
	var v string
	if err := unmarshal(&v); err != nil {
		return err
	}

	switch MatchType(v) {
	case MatchTypeRegex:
		*t = MatchTypeRegex
	case MatchTypeGlob, MatchTypeDefault:
		*t = MatchTypeGlob
	default:
		return fmt.Errorf("invalid match type %q", v)
	}
	return nil
}
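A minimal sketch of the custom unmarshalling (standalone, not part of the diff; the same pattern applies to `MetricType` and `TimerType` below): the empty string falls through to the glob default.

```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"

	"github.com/prometheus/statsd_exporter/pkg/mapper"
)

func main() {
	var mt mapper.MatchType
	if err := yaml.Unmarshal([]byte(`regex`), &mt); err != nil {
		panic(err)
	}
	fmt.Println(mt == mapper.MatchTypeRegex) // true

	// an empty string selects the default, glob
	var def mapper.MatchType
	if err := yaml.Unmarshal([]byte(`""`), &def); err != nil {
		panic(err)
	}
	fmt.Println(def == mapper.MatchTypeGlob) // true
}
```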


@ -1,43 +1,43 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mapper

import "fmt"

type MetricType string

const (
	MetricTypeCounter MetricType = "counter"
	MetricTypeGauge   MetricType = "gauge"
	MetricTypeTimer   MetricType = "timer"
)

func (m *MetricType) UnmarshalYAML(unmarshal func(interface{}) error) error {
	var v string
	if err := unmarshal(&v); err != nil {
		return err
	}

	switch MetricType(v) {
	case MetricTypeCounter:
		*m = MetricTypeCounter
	case MetricTypeGauge:
		*m = MetricTypeGauge
	case MetricTypeTimer:
		*m = MetricTypeTimer
	default:
		return fmt.Errorf("invalid metric type '%s'", v)
	}
	return nil
}


@@ -1,41 +1,41 @@
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mapper

import "fmt"

type TimerType string

const (
    TimerTypeHistogram TimerType = "histogram"
    TimerTypeSummary   TimerType = "summary"
    TimerTypeDefault   TimerType = ""
)

func (t *TimerType) UnmarshalYAML(unmarshal func(interface{}) error) error {
    var v string
    if err := unmarshal(&v); err != nil {
        return err
    }

    switch TimerType(v) {
    case TimerTypeHistogram:
        *t = TimerTypeHistogram
    case TimerTypeSummary, TimerTypeDefault:
        *t = TimerTypeSummary
    default:
        return fmt.Errorf("invalid timer type '%s'", v)
    }
    return nil
}


@@ -1,54 +1,54 @@
package metrics

import (
    "time"

    "github.com/prometheus/client_golang/prometheus"
)

type MetricType int

const (
    CounterMetricType MetricType = iota
    GaugeMetricType
    SummaryMetricType
    HistogramMetricType
)

type NameHash uint64

type ValueHash uint64

type LabelHash struct {
    // This is a hash over the label names
    Names NameHash
    // This is a hash over the label names + label values
    Values ValueHash
}

type MetricHolder interface{}

type VectorHolder interface {
    Delete(label prometheus.Labels) bool
}

type Vector struct {
    Holder   VectorHolder
    RefCount uint64
}

type Metric struct {
    MetricType MetricType
    // Vectors key is the hash of the label names
    Vectors map[NameHash]*Vector
    // Metrics key is a hash of the label names + label values
    Metrics map[ValueHash]*RegisteredMetric
}

type RegisteredMetric struct {
    LastRegisteredAt time.Time
    Labels           prometheus.Labels
    TTL              time.Duration
    Metric           MetricHolder
    VecKey           NameHash
}
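// The function below is an editorial sketch, not part of the upstream file:
// it shows how the two-level hash is intended to tie these pieces together.
// All series sharing a label-name set share one Vector (here a
// *prometheus.CounterVec, which satisfies VectorHolder via its Delete method),
// while each concrete label-value combination gets its own RegisteredMetric
// keyed by the values hash. The literal hash values are placeholders; the
// real ones come from pkg/registry's HashLabels.
func exampleMetricWiring() Metric {
    vec := prometheus.NewCounterVec(
        prometheus.CounterOpts{Name: "example_total", Help: "An example counter."},
        []string{"app"},
    )
    counter := vec.With(prometheus.Labels{"app": "web"})

    names := NameHash(1)   // placeholder: hash over the label names
    values := ValueHash(2) // placeholder: hash over the label names + values

    return Metric{
        MetricType: CounterMetricType,
        Vectors:    map[NameHash]*Vector{names: {Holder: vec, RefCount: 1}},
        Metrics: map[ValueHash]*RegisteredMetric{
            values: {
                LastRegisteredAt: time.Now(),
                Labels:           prometheus.Labels{"app": "web"},
                TTL:              time.Minute,
                Metric:           counter,
                VecKey:           names,
            },
        },
    }
}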


@@ -1,42 +1,42 @@
package metrics

import (
    "github.com/prometheus/client_golang/prometheus"
)

type metricType int

const (
    CounterMetricType metricType = iota
    GaugeMetricType
    SummaryMetricType
    HistogramMetricType
)

type nameHash uint64

type valueHash uint64

type labelHash struct {
    // This is a hash over the label names
    names nameHash
    // This is a hash over the label names + label values
    values valueHash
}

type metricHolder interface{}

type vectorHolder interface {
    Delete(label prometheus.Labels) bool
}

type vector struct {
    holder   vectorHolder
    refCount uint64
}

type metric struct {
    metricType metricType
    // Vectors key is the hash of the label names
    vectors map[nameHash]*vector
    // Metrics key is a hash of the label names + label values
    metrics map[valueHash]*registeredMetric
}


@@ -1,370 +1,370 @@
package registry

import (
    "bytes"
    "fmt"
    "hash"
    "hash/fnv"
    "sort"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/common/model"

    "github.com/prometheus/statsd_exporter/pkg/clock"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
    "github.com/prometheus/statsd_exporter/pkg/metrics"
)

// uncheckedCollector wraps a Collector but its Describe method yields no Desc.
// This allows incoming metrics to have inconsistent label sets.
type uncheckedCollector struct {
    c prometheus.Collector
}

func (u uncheckedCollector) Describe(_ chan<- *prometheus.Desc) {}
func (u uncheckedCollector) Collect(c chan<- prometheus.Metric) {
    u.c.Collect(c)
}

type Registry struct {
    Metrics map[string]metrics.Metric
    Mapper  *mapper.MetricMapper
    // The value and label buffers below are allocated in the registry struct
    // so that we don't have to allocate them every time we have to compute a
    // label hash.
    ValueBuf, NameBuf bytes.Buffer
    Hasher            hash.Hash64
}

func NewRegistry(mapper *mapper.MetricMapper) *Registry {
    return &Registry{
        Metrics: make(map[string]metrics.Metric),
        Mapper:  mapper,
        Hasher:  fnv.New64a(),
    }
}

func (r *Registry) MetricConflicts(metricName string, metricType metrics.MetricType) bool {
    vector, hasMetrics := r.Metrics[metricName]
    if !hasMetrics {
        // No metric with this name exists.
        return false
    }

    if vector.MetricType == metricType {
        // We've found a copy of this metric with this type, but different
        // labels, so it's safe to create a new one.
        return false
    }

    // The metric exists, but it's of a different type than we're trying to
    // create.
    return true
}

func (r *Registry) StoreCounter(metricName string, hash metrics.LabelHash, labels prometheus.Labels, vec *prometheus.CounterVec, c prometheus.Counter, ttl time.Duration) {
    r.Store(metricName, hash, labels, vec, c, metrics.CounterMetricType, ttl)
}

func (r *Registry) StoreGauge(metricName string, hash metrics.LabelHash, labels prometheus.Labels, vec *prometheus.GaugeVec, g prometheus.Gauge, ttl time.Duration) {
    r.Store(metricName, hash, labels, vec, g, metrics.GaugeMetricType, ttl)
}

func (r *Registry) StoreHistogram(metricName string, hash metrics.LabelHash, labels prometheus.Labels, vec *prometheus.HistogramVec, o prometheus.Observer, ttl time.Duration) {
    r.Store(metricName, hash, labels, vec, o, metrics.HistogramMetricType, ttl)
}

func (r *Registry) StoreSummary(metricName string, hash metrics.LabelHash, labels prometheus.Labels, vec *prometheus.SummaryVec, o prometheus.Observer, ttl time.Duration) {
    r.Store(metricName, hash, labels, vec, o, metrics.SummaryMetricType, ttl)
}

func (r *Registry) Store(metricName string, hash metrics.LabelHash, labels prometheus.Labels, vh metrics.VectorHolder, mh metrics.MetricHolder, metricType metrics.MetricType, ttl time.Duration) {
    metric, hasMetrics := r.Metrics[metricName]
    if !hasMetrics {
        metric.MetricType = metricType
        metric.Vectors = make(map[metrics.NameHash]*metrics.Vector)
        metric.Metrics = make(map[metrics.ValueHash]*metrics.RegisteredMetric)

        r.Metrics[metricName] = metric
    }

    v, ok := metric.Vectors[hash.Names]
    if !ok {
        v = &metrics.Vector{Holder: vh}
        metric.Vectors[hash.Names] = v
    }

    rm, ok := metric.Metrics[hash.Values]
    if !ok {
        rm = &metrics.RegisteredMetric{
            Labels: labels,
            TTL:    ttl,
            Metric: mh,
            VecKey: hash.Names,
        }
        metric.Metrics[hash.Values] = rm
        v.RefCount++
    }
    now := clock.Now()
    rm.LastRegisteredAt = now
    // Update the TTL from the mapping.
    rm.TTL = ttl
}

func (r *Registry) Get(metricName string, hash metrics.LabelHash, metricType metrics.MetricType) (metrics.VectorHolder, metrics.MetricHolder) {
    metric, hasMetric := r.Metrics[metricName]
    if !hasMetric {
        return nil, nil
    }
    if metric.MetricType != metricType {
        return nil, nil
    }

    rm, ok := metric.Metrics[hash.Values]
    if ok {
        now := clock.Now()
        rm.LastRegisteredAt = now
        return metric.Vectors[hash.Names].Holder, rm.Metric
    }

    vector, ok := metric.Vectors[hash.Names]
    if ok {
        return vector.Holder, nil
    }

    return nil, nil
}

func (r *Registry) GetCounter(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping, metricsCount *prometheus.GaugeVec) (prometheus.Counter, error) {
    hash, labelNames := r.HashLabels(labels)
    vh, mh := r.Get(metricName, hash, metrics.CounterMetricType)
    if mh != nil {
        return mh.(prometheus.Counter), nil
    }

    if r.MetricConflicts(metricName, metrics.CounterMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }

    var counterVec *prometheus.CounterVec
    if vh == nil {
        metricsCount.WithLabelValues("counter").Inc()
        counterVec = prometheus.NewCounterVec(prometheus.CounterOpts{
            Name: metricName,
            Help: help,
        }, labelNames)

        if err := prometheus.Register(uncheckedCollector{counterVec}); err != nil {
            return nil, err
        }
    } else {
        counterVec = vh.(*prometheus.CounterVec)
    }

    var counter prometheus.Counter
    var err error
    if counter, err = counterVec.GetMetricWith(labels); err != nil {
        return nil, err
    }
    r.StoreCounter(metricName, hash, labels, counterVec, counter, mapping.Ttl)

    return counter, nil
}

func (r *Registry) GetGauge(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping, metricsCount *prometheus.GaugeVec) (prometheus.Gauge, error) {
    hash, labelNames := r.HashLabels(labels)
    vh, mh := r.Get(metricName, hash, metrics.GaugeMetricType)
    if mh != nil {
        return mh.(prometheus.Gauge), nil
    }

    if r.MetricConflicts(metricName, metrics.GaugeMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }

    var gaugeVec *prometheus.GaugeVec
    if vh == nil {
        metricsCount.WithLabelValues("gauge").Inc()
        gaugeVec = prometheus.NewGaugeVec(prometheus.GaugeOpts{
            Name: metricName,
            Help: help,
        }, labelNames)

        if err := prometheus.Register(uncheckedCollector{gaugeVec}); err != nil {
            return nil, err
        }
    } else {
        gaugeVec = vh.(*prometheus.GaugeVec)
    }

    var gauge prometheus.Gauge
    var err error
    if gauge, err = gaugeVec.GetMetricWith(labels); err != nil {
        return nil, err
    }
    r.StoreGauge(metricName, hash, labels, gaugeVec, gauge, mapping.Ttl)

    return gauge, nil
}

func (r *Registry) GetHistogram(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping, metricsCount *prometheus.GaugeVec) (prometheus.Observer, error) {
    hash, labelNames := r.HashLabels(labels)
    vh, mh := r.Get(metricName, hash, metrics.HistogramMetricType)
    if mh != nil {
        return mh.(prometheus.Observer), nil
    }

    if r.MetricConflicts(metricName, metrics.HistogramMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_sum", metrics.HistogramMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_count", metrics.HistogramMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_bucket", metrics.HistogramMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }

    var histogramVec *prometheus.HistogramVec
    if vh == nil {
        metricsCount.WithLabelValues("histogram").Inc()
        buckets := r.Mapper.Defaults.Buckets
        if mapping.HistogramOptions != nil && len(mapping.HistogramOptions.Buckets) > 0 {
            buckets = mapping.HistogramOptions.Buckets
        }
        histogramVec = prometheus.NewHistogramVec(prometheus.HistogramOpts{
            Name:    metricName,
            Help:    help,
            Buckets: buckets,
        }, labelNames)

        if err := prometheus.Register(uncheckedCollector{histogramVec}); err != nil {
            return nil, err
        }
    } else {
        histogramVec = vh.(*prometheus.HistogramVec)
    }

    var observer prometheus.Observer
    var err error
    if observer, err = histogramVec.GetMetricWith(labels); err != nil {
        return nil, err
    }
    r.StoreHistogram(metricName, hash, labels, histogramVec, observer, mapping.Ttl)

    return observer, nil
}

func (r *Registry) GetSummary(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping, metricsCount *prometheus.GaugeVec) (prometheus.Observer, error) {
    hash, labelNames := r.HashLabels(labels)
    vh, mh := r.Get(metricName, hash, metrics.SummaryMetricType)
    if mh != nil {
        return mh.(prometheus.Observer), nil
    }

    if r.MetricConflicts(metricName, metrics.SummaryMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_sum", metrics.SummaryMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_count", metrics.SummaryMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }

    var summaryVec *prometheus.SummaryVec
    if vh == nil {
        metricsCount.WithLabelValues("summary").Inc()
        quantiles := r.Mapper.Defaults.Quantiles
        if mapping != nil && mapping.SummaryOptions != nil && len(mapping.SummaryOptions.Quantiles) > 0 {
            quantiles = mapping.SummaryOptions.Quantiles
        }
        summaryOptions := mapper.SummaryOptions{}
        if mapping != nil && mapping.SummaryOptions != nil {
            summaryOptions = *mapping.SummaryOptions
        }
        objectives := make(map[float64]float64)
        for _, q := range quantiles {
            objectives[q.Quantile] = q.Error
        }
        // In the case of no mapping file, explicitly define the default quantiles.
        if len(objectives) == 0 {
            objectives = map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001}
        }
        summaryVec = prometheus.NewSummaryVec(prometheus.SummaryOpts{
            Name:       metricName,
            Help:       help,
            Objectives: objectives,
            MaxAge:     summaryOptions.MaxAge,
            AgeBuckets: summaryOptions.AgeBuckets,
            BufCap:     summaryOptions.BufCap,
        }, labelNames)

        if err := prometheus.Register(uncheckedCollector{summaryVec}); err != nil {
            return nil, err
        }
    } else {
        summaryVec = vh.(*prometheus.SummaryVec)
    }

    var observer prometheus.Observer
    var err error
    if observer, err = summaryVec.GetMetricWith(labels); err != nil {
        return nil, err
    }
    r.StoreSummary(metricName, hash, labels, summaryVec, observer, mapping.Ttl)

    return observer, nil
}

func (r *Registry) RemoveStaleMetrics() {
    now := clock.Now()
    // Delete timeseries whose TTL has expired.
    for _, metric := range r.Metrics {
        for hash, rm := range metric.Metrics {
            if rm.TTL == 0 {
                continue
            }
            if rm.LastRegisteredAt.Add(rm.TTL).Before(now) {
                metric.Vectors[rm.VecKey].Holder.Delete(rm.Labels)
                metric.Vectors[rm.VecKey].RefCount--
                delete(metric.Metrics, hash)
            }
        }
    }
}

// HashLabels calculates one hash over the label names alone and one over the
// label names and values together.
func (r *Registry) HashLabels(labels prometheus.Labels) (metrics.LabelHash, []string) {
    r.Hasher.Reset()
    r.NameBuf.Reset()
    r.ValueBuf.Reset()
    labelNames := make([]string, 0, len(labels))

    for labelName := range labels {
        labelNames = append(labelNames, labelName)
    }
    sort.Strings(labelNames)

    r.ValueBuf.WriteByte(model.SeparatorByte)
    for _, labelName := range labelNames {
        r.ValueBuf.WriteString(labels[labelName])
        r.ValueBuf.WriteByte(model.SeparatorByte)

        r.NameBuf.WriteString(labelName)
        r.NameBuf.WriteByte(model.SeparatorByte)
    }

    lh := metrics.LabelHash{}
    r.Hasher.Write(r.NameBuf.Bytes())
    lh.Names = metrics.NameHash(r.Hasher.Sum64())

    // Now add the values to the names we've already hashed.
    r.Hasher.Write(r.ValueBuf.Bytes())
    lh.Values = metrics.ValueHash(r.Hasher.Sum64())

    return lh, labelNames
}
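// The function below is an editorial usage sketch, not part of the upstream
// file: it wires a Registry to an empty mapper, registers a counter with a
// one-minute TTL, and shows where RemoveStaleMetrics fits. The metricsCount
// gauge vec stands in for the bookkeeping metric the exporter itself passes in.
func exampleRegistryUsage() error {
    metricsCount := prometheus.NewGaugeVec(
        prometheus.GaugeOpts{Name: "example_metrics_total", Help: "Metrics held, by type."},
        []string{"type"},
    )

    r := NewRegistry(&mapper.MetricMapper{})
    mapping := &mapper.MetricMapping{Ttl: time.Minute}

    c, err := r.GetCounter("deploys_total", prometheus.Labels{"app": "web"}, "Deploys seen.", mapping, metricsCount)
    if err != nil {
        return err
    }
    c.Inc()

    // Call periodically to drop series whose TTL elapsed since the last update.
    r.RemoveStaleMetrics()
    return nil
}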


@@ -1,356 +1,356 @@
package registry

import (
    "bytes"
    "fmt"
    "hash"
    "hash/fnv"
    "sort"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/common/model"

    "github.com/prometheus/statsd_exporter/pkg/clock"
    "github.com/prometheus/statsd_exporter/pkg/mapper"
)

type registeredMetric struct {
    lastRegisteredAt time.Time
    labels           prometheus.Labels
    ttl              time.Duration
    metric           metricHolder
    vecKey           nameHash
}

type registry struct {
    metrics map[string]metric
    mapper  *mapper.MetricMapper
    // The value and label buffers below are allocated in the registry struct
    // so that we don't have to allocate them every time we have to compute a
    // label hash.
    valueBuf, nameBuf bytes.Buffer
    hasher            hash.Hash64
}

func NewRegistry(mapper *mapper.MetricMapper) *registry {
    return &registry{
        metrics: make(map[string]metric),
        mapper:  mapper,
        hasher:  fnv.New64a(),
    }
}

func (r *registry) MetricConflicts(metricName string, metricType metricType) bool {
    vector, hasMetric := r.metrics[metricName]
    if !hasMetric {
        // No metric with this name exists.
        return false
    }

    if vector.metricType == metricType {
        // We've found a copy of this metric with this type, but different
        // labels, so it's safe to create a new one.
        return false
    }

    // The metric exists, but it's of a different type than we're trying to
    // create.
    return true
}

func (r *registry) StoreCounter(metricName string, hash labelHash, labels prometheus.Labels, vec *prometheus.CounterVec, c prometheus.Counter, ttl time.Duration) {
    r.Store(metricName, hash, labels, vec, c, CounterMetricType, ttl)
}

func (r *registry) StoreGauge(metricName string, hash labelHash, labels prometheus.Labels, vec *prometheus.GaugeVec, g prometheus.Gauge, ttl time.Duration) {
    r.Store(metricName, hash, labels, vec, g, GaugeMetricType, ttl)
}

func (r *registry) StoreHistogram(metricName string, hash labelHash, labels prometheus.Labels, vec *prometheus.HistogramVec, o prometheus.Observer, ttl time.Duration) {
    r.Store(metricName, hash, labels, vec, o, HistogramMetricType, ttl)
}

func (r *registry) StoreSummary(metricName string, hash labelHash, labels prometheus.Labels, vec *prometheus.SummaryVec, o prometheus.Observer, ttl time.Duration) {
    r.Store(metricName, hash, labels, vec, o, SummaryMetricType, ttl)
}

func (r *registry) Store(metricName string, hash labelHash, labels prometheus.Labels, vh vectorHolder, mh metricHolder, metricType metricType, ttl time.Duration) {
    metric, hasMetric := r.metrics[metricName]
    if !hasMetric {
        metric.metricType = metricType
        metric.vectors = make(map[nameHash]*vector)
        metric.metrics = make(map[valueHash]*registeredMetric)

        r.metrics[metricName] = metric
    }

    v, ok := metric.vectors[hash.names]
    if !ok {
        v = &vector{holder: vh}
        metric.vectors[hash.names] = v
    }

    rm, ok := metric.metrics[hash.values]
    if !ok {
        rm = &registeredMetric{
            labels: labels,
            ttl:    ttl,
            metric: mh,
            vecKey: hash.names,
        }
        metric.metrics[hash.values] = rm
        v.refCount++
    }
    now := clock.Now()
    rm.lastRegisteredAt = now
    // Update the TTL from the mapping.
    rm.ttl = ttl
}

func (r *registry) Get(metricName string, hash labelHash, metricType metricType) (vectorHolder, metricHolder) {
    metric, hasMetric := r.metrics[metricName]
    if !hasMetric {
        return nil, nil
    }
    if metric.metricType != metricType {
        return nil, nil
    }

    rm, ok := metric.metrics[hash.values]
    if ok {
        now := clock.Now()
        rm.lastRegisteredAt = now
        return metric.vectors[hash.names].holder, rm.metric
    }

    vector, ok := metric.vectors[hash.names]
    if ok {
        return vector.holder, nil
    }

    return nil, nil
}

// metricsCount is assumed to be a package-level *prometheus.GaugeVec tracking
// the number of metrics by type, as in the exporter's main package; it is not
// defined in this snapshot of the file.
func (r *registry) GetCounter(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping) (prometheus.Counter, error) {
    hash, labelNames := r.HashLabels(labels)
    vh, mh := r.Get(metricName, hash, CounterMetricType)
    if mh != nil {
        return mh.(prometheus.Counter), nil
    }

    if r.MetricConflicts(metricName, CounterMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }

    var counterVec *prometheus.CounterVec
    if vh == nil {
        metricsCount.WithLabelValues("counter").Inc()
        counterVec = prometheus.NewCounterVec(prometheus.CounterOpts{
            Name: metricName,
            Help: help,
        }, labelNames)

        if err := prometheus.Register(uncheckedCollector{counterVec}); err != nil {
            return nil, err
        }
    } else {
        counterVec = vh.(*prometheus.CounterVec)
    }

    var counter prometheus.Counter
    var err error
    if counter, err = counterVec.GetMetricWith(labels); err != nil {
        return nil, err
    }
    r.StoreCounter(metricName, hash, labels, counterVec, counter, mapping.Ttl)

    return counter, nil
}

func (r *registry) GetGauge(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping) (prometheus.Gauge, error) {
    hash, labelNames := r.HashLabels(labels)
    vh, mh := r.Get(metricName, hash, GaugeMetricType)
    if mh != nil {
        return mh.(prometheus.Gauge), nil
    }

    if r.MetricConflicts(metricName, GaugeMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }

    var gaugeVec *prometheus.GaugeVec
    if vh == nil {
        metricsCount.WithLabelValues("gauge").Inc()
        gaugeVec = prometheus.NewGaugeVec(prometheus.GaugeOpts{
            Name: metricName,
            Help: help,
        }, labelNames)

        if err := prometheus.Register(uncheckedCollector{gaugeVec}); err != nil {
            return nil, err
        }
    } else {
        gaugeVec = vh.(*prometheus.GaugeVec)
    }

    var gauge prometheus.Gauge
    var err error
    if gauge, err = gaugeVec.GetMetricWith(labels); err != nil {
        return nil, err
    }
    r.StoreGauge(metricName, hash, labels, gaugeVec, gauge, mapping.Ttl)

    return gauge, nil
}

func (r *registry) GetHistogram(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping) (prometheus.Observer, error) {
    hash, labelNames := r.HashLabels(labels)
    vh, mh := r.Get(metricName, hash, HistogramMetricType)
    if mh != nil {
        return mh.(prometheus.Observer), nil
    }

    if r.MetricConflicts(metricName, HistogramMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_sum", HistogramMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_count", HistogramMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_bucket", HistogramMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }

    var histogramVec *prometheus.HistogramVec
    if vh == nil {
        metricsCount.WithLabelValues("histogram").Inc()
        buckets := r.mapper.Defaults.Buckets
        if mapping.HistogramOptions != nil && len(mapping.HistogramOptions.Buckets) > 0 {
            buckets = mapping.HistogramOptions.Buckets
        }
        histogramVec = prometheus.NewHistogramVec(prometheus.HistogramOpts{
            Name:    metricName,
            Help:    help,
            Buckets: buckets,
        }, labelNames)

        if err := prometheus.Register(uncheckedCollector{histogramVec}); err != nil {
            return nil, err
        }
    } else {
        histogramVec = vh.(*prometheus.HistogramVec)
    }

    var observer prometheus.Observer
    var err error
    if observer, err = histogramVec.GetMetricWith(labels); err != nil {
        return nil, err
    }
    r.StoreHistogram(metricName, hash, labels, histogramVec, observer, mapping.Ttl)

    return observer, nil
}

func (r *registry) GetSummary(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping) (prometheus.Observer, error) {
    hash, labelNames := r.HashLabels(labels)
    vh, mh := r.Get(metricName, hash, SummaryMetricType)
    if mh != nil {
        return mh.(prometheus.Observer), nil
    }

    if r.MetricConflicts(metricName, SummaryMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_sum", SummaryMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }
    if r.MetricConflicts(metricName+"_count", SummaryMetricType) {
        return nil, fmt.Errorf("metric with name %s is already registered", metricName)
    }

    var summaryVec *prometheus.SummaryVec
    if vh == nil {
        metricsCount.WithLabelValues("summary").Inc()
        quantiles := r.mapper.Defaults.Quantiles
        if mapping != nil && mapping.SummaryOptions != nil && len(mapping.SummaryOptions.Quantiles) > 0 {
            quantiles = mapping.SummaryOptions.Quantiles
        }
        summaryOptions := mapper.SummaryOptions{}
        if mapping != nil && mapping.SummaryOptions != nil {
            summaryOptions = *mapping.SummaryOptions
        }
        objectives := make(map[float64]float64)
        for _, q := range quantiles {
            objectives[q.Quantile] = q.Error
        }
        // In the case of no mapping file, explicitly define the default quantiles.
        if len(objectives) == 0 {
            objectives = map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001}
        }
        summaryVec = prometheus.NewSummaryVec(prometheus.SummaryOpts{
            Name:       metricName,
            Help:       help,
            Objectives: objectives,
            MaxAge:     summaryOptions.MaxAge,
            AgeBuckets: summaryOptions.AgeBuckets,
            BufCap:     summaryOptions.BufCap,
        }, labelNames)

        if err := prometheus.Register(uncheckedCollector{summaryVec}); err != nil {
            return nil, err
        }
    } else {
        summaryVec = vh.(*prometheus.SummaryVec)
    }

    var observer prometheus.Observer
    var err error
    if observer, err = summaryVec.GetMetricWith(labels); err != nil {
        return nil, err
    }
    r.StoreSummary(metricName, hash, labels, summaryVec, observer, mapping.Ttl)

    return observer, nil
}

func (r *registry) RemoveStaleMetrics() {
    now := clock.Now()
    // Delete timeseries whose TTL has expired.
    for _, metric := range r.metrics {
        for hash, rm := range metric.metrics {
            if rm.ttl == 0 {
                continue
            }
            if rm.lastRegisteredAt.Add(rm.ttl).Before(now) {
                metric.vectors[rm.vecKey].holder.Delete(rm.labels)
                metric.vectors[rm.vecKey].refCount--
                delete(metric.metrics, hash)
            }
        }
    }
}

// HashLabels calculates one hash over the label names alone and one over the
// label names and values together.
func (r *registry) HashLabels(labels prometheus.Labels) (labelHash, []string) {
    r.hasher.Reset()
    r.nameBuf.Reset()
    r.valueBuf.Reset()
    labelNames := make([]string, 0, len(labels))

    for labelName := range labels {
        labelNames = append(labelNames, labelName)
    }
    sort.Strings(labelNames)

    r.valueBuf.WriteByte(model.SeparatorByte)
    for _, labelName := range labelNames {
        r.valueBuf.WriteString(labels[labelName])
        r.valueBuf.WriteByte(model.SeparatorByte)

        r.nameBuf.WriteString(labelName)
        r.nameBuf.WriteByte(model.SeparatorByte)
    }

    lh := labelHash{}
    r.hasher.Write(r.nameBuf.Bytes())
    lh.names = nameHash(r.hasher.Sum64())

    // Now add the values to the names we've already hashed.
    r.hasher.Write(r.valueBuf.Bytes())
    lh.values = valueHash(r.hasher.Sum64())

    return lh, labelNames
}


@@ -1,53 +1,53 @@
package util

import (
    "fmt"
    "net"
    "strconv"
)

func IPPortFromString(addr string) (*net.IPAddr, int, error) {
    host, portStr, err := net.SplitHostPort(addr)
    if err != nil {
        return nil, 0, fmt.Errorf("bad StatsD listening address: %s", addr)
    }

    if host == "" {
        host = "0.0.0.0"
    }
    ip, err := net.ResolveIPAddr("ip", host)
    if err != nil {
        return nil, 0, fmt.Errorf("unable to resolve %s: %s", host, err)
    }

    port, err := strconv.Atoi(portStr)
    if err != nil || port < 0 || port > 65535 {
        return nil, 0, fmt.Errorf("bad port %s: %s", portStr, err)
    }

    return ip, port, nil
}

func UDPAddrFromString(addr string) (*net.UDPAddr, error) {
    ip, port, err := IPPortFromString(addr)
    if err != nil {
        return nil, err
    }
    return &net.UDPAddr{
        IP:   ip.IP,
        Port: port,
        Zone: ip.Zone,
    }, nil
}

func TCPAddrFromString(addr string) (*net.TCPAddr, error) {
    ip, port, err := IPPortFromString(addr)
    if err != nil {
        return nil, err
    }
    return &net.TCPAddr{
        IP:   ip.IP,
        Port: port,
        Zone: ip.Zone,
    }, nil
}
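// Editorial usage sketch, not part of the upstream file: resolving a listener
// address and opening the UDP socket a StatsD reader would consume from.
func exampleListenUDP(addr string) (*net.UDPConn, error) {
    udpAddr, err := UDPAddrFromString(addr) // e.g. ":9125" listens on all interfaces
    if err != nil {
        return nil, err
    }
    return net.ListenUDP("udp", udpAddr)
}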


@@ -1,53 +1,53 @@
package util

import (
    "fmt"
    "net"
    "strconv"
)

func IPPortFromString(addr string) (*net.IPAddr, int, error) {
    host, portStr, err := net.SplitHostPort(addr)
    if err != nil {
        return nil, 0, fmt.Errorf("bad StatsD listening address: %s", addr)
    }

    if host == "" {
        host = "0.0.0.0"
    }
    ip, err := net.ResolveIPAddr("ip", host)
    if err != nil {
        return nil, 0, fmt.Errorf("unable to resolve %s: %s", host, err)
    }

    port, err := strconv.Atoi(portStr)
    if err != nil || port < 0 || port > 65535 {
        return nil, 0, fmt.Errorf("bad port %s: %s", portStr, err)
    }

    return ip, port, nil
}

func UDPAddrFromString(addr string) (*net.UDPAddr, error) {
    ip, port, err := IPPortFromString(addr)
    if err != nil {
        return nil, err
    }
    return &net.UDPAddr{
        IP:   ip.IP,
        Port: port,
        Zone: ip.Zone,
    }, nil
}

func TCPAddrFromString(addr string) (*net.TCPAddr, error) {
    ip, port, err := IPPortFromString(addr)
    if err != nil {
        return nil, err
    }
    return &net.TCPAddr{
        IP:   ip.IP,
        Port: port,
        Zone: ip.Zone,
    }, nil
}


@@ -1,27 +1,27 @@
Copyright (c) 2012 The Go Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

   * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
   * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,25 +1,25 @@
# Go's `text/template` package with newline elision

This is a fork of Go 1.4's [text/template](http://golang.org/pkg/text/template/) package with one addition: a backslash immediately after a closing delimiter will delete all subsequent newlines until a non-newline.

For example:

```
{{if true}}\
hello
{{end}}\
```

will result in:

```
hello\n
```

rather than:

```
\n
hello\n
\n
```
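As a quick sanity check, here is a minimal sketch of the elision behaviour described above (assuming the fork is imported under its module path, `github.com/alecthomas/template`, as declared later in this diff):

```go
package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// The backslash after each closing }} delimiter elides the
	// newlines that follow it, so only "hello\n" is printed.
	t := template.Must(template.New("demo").Parse("{{if true}}\\\nhello\n{{end}}\\\n"))
	if err := t.Execute(os.Stdout, nil); err != nil {
		panic(err)
	}
}
```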

@@ -1,406 +1,406 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

/*
Package template implements data-driven templates for generating textual output.

To generate HTML output, see package html/template, which has the same interface
as this package but automatically secures HTML output against certain attacks.

Templates are executed by applying them to a data structure. Annotations in the
template refer to elements of the data structure (typically a field of a struct
or a key in a map) to control execution and derive values to be displayed.
Execution of the template walks the structure and sets the cursor, represented
by a period '.' and called "dot", to the value at the current location in the
structure as execution proceeds.

The input text for a template is UTF-8-encoded text in any format.
"Actions"--data evaluations or control structures--are delimited by
"{{" and "}}"; all text outside actions is copied to the output unchanged.
Actions may not span newlines, although comments can.

Once parsed, a template may be executed safely in parallel.

Here is a trivial example that prints "17 items are made of wool".

	type Inventory struct {
		Material string
		Count    uint
	}
	sweaters := Inventory{"wool", 17}
	tmpl, err := template.New("test").Parse("{{.Count}} items are made of {{.Material}}")
	if err != nil { panic(err) }
	err = tmpl.Execute(os.Stdout, sweaters)
	if err != nil { panic(err) }

More intricate examples appear below.

Actions

Here is the list of actions. "Arguments" and "pipelines" are evaluations of
data, defined in detail below.

*/
//	{{/* a comment */}}
//		A comment; discarded. May contain newlines.
//		Comments do not nest and must start and end at the
//		delimiters, as shown here.
/*

	{{pipeline}}
		The default textual representation of the value of the pipeline
		is copied to the output.

	{{if pipeline}} T1 {{end}}
		If the value of the pipeline is empty, no output is generated;
		otherwise, T1 is executed. The empty values are false, 0, any
		nil pointer or interface value, and any array, slice, map, or
		string of length zero.
		Dot is unaffected.

	{{if pipeline}} T1 {{else}} T0 {{end}}
		If the value of the pipeline is empty, T0 is executed;
		otherwise, T1 is executed. Dot is unaffected.

	{{if pipeline}} T1 {{else if pipeline}} T0 {{end}}
		To simplify the appearance of if-else chains, the else action
		of an if may include another if directly; the effect is exactly
		the same as writing
			{{if pipeline}} T1 {{else}}{{if pipeline}} T0 {{end}}{{end}}

	{{range pipeline}} T1 {{end}}
		The value of the pipeline must be an array, slice, map, or channel.
		If the value of the pipeline has length zero, nothing is output;
		otherwise, dot is set to the successive elements of the array,
		slice, or map and T1 is executed. If the value is a map and the
		keys are of basic type with a defined order ("comparable"), the
		elements will be visited in sorted key order.

	{{range pipeline}} T1 {{else}} T0 {{end}}
		The value of the pipeline must be an array, slice, map, or channel.
		If the value of the pipeline has length zero, dot is unaffected and
		T0 is executed; otherwise, dot is set to the successive elements
		of the array, slice, or map and T1 is executed.

	{{template "name"}}
		The template with the specified name is executed with nil data.

	{{template "name" pipeline}}
		The template with the specified name is executed with dot set
		to the value of the pipeline.

	{{with pipeline}} T1 {{end}}
		If the value of the pipeline is empty, no output is generated;
		otherwise, dot is set to the value of the pipeline and T1 is
		executed.

	{{with pipeline}} T1 {{else}} T0 {{end}}
		If the value of the pipeline is empty, dot is unaffected and T0
		is executed; otherwise, dot is set to the value of the pipeline
		and T1 is executed.

Arguments

An argument is a simple value, denoted by one of the following.

	- A boolean, string, character, integer, floating-point, imaginary
	  or complex constant in Go syntax. These behave like Go's untyped
	  constants, although raw strings may not span newlines.
	- The keyword nil, representing an untyped Go nil.
	- The character '.' (period):
		.
	  The result is the value of dot.
	- A variable name, which is a (possibly empty) alphanumeric string
	  preceded by a dollar sign, such as
		$piOver2
	  or
		$
	  The result is the value of the variable.
	  Variables are described below.
	- The name of a field of the data, which must be a struct, preceded
	  by a period, such as
		.Field
	  The result is the value of the field. Field invocations may be
	  chained:
		.Field1.Field2
	  Fields can also be evaluated on variables, including chaining:
		$x.Field1.Field2
	- The name of a key of the data, which must be a map, preceded
	  by a period, such as
		.Key
	  The result is the map element value indexed by the key.
	  Key invocations may be chained and combined with fields to any
	  depth:
		.Field1.Key1.Field2.Key2
	  Although the key must be an alphanumeric identifier, unlike with
	  field names they do not need to start with an upper case letter.
	  Keys can also be evaluated on variables, including chaining:
		$x.key1.key2
	- The name of a niladic method of the data, preceded by a period,
	  such as
		.Method
	  The result is the value of invoking the method with dot as the
	  receiver, dot.Method(). Such a method must have one return value (of
	  any type) or two return values, the second of which is an error.
	  If it has two and the returned error is non-nil, execution terminates
	  and an error is returned to the caller as the value of Execute.
	  Method invocations may be chained and combined with fields and keys
	  to any depth:
		.Field1.Key1.Method1.Field2.Key2.Method2
	  Methods can also be evaluated on variables, including chaining:
		$x.Method1.Field
	- The name of a niladic function, such as
		fun
	  The result is the value of invoking the function, fun(). The return
	  types and values behave as in methods. Functions and function
	  names are described below.
	- A parenthesized instance of one of the above, for grouping. The result
	  may be accessed by a field or map key invocation.
		print (.F1 arg1) (.F2 arg2)
		(.StructValuedMethod "arg").Field

Arguments may evaluate to any type; if they are pointers the implementation
automatically indirects to the base type when required.

If an evaluation yields a function value, such as a function-valued
field of a struct, the function is not invoked automatically, but it
can be used as a truth value for an if action and the like. To invoke
it, use the call function, defined below.

A pipeline is a possibly chained sequence of "commands". A command is a simple
value (argument) or a function or method call, possibly with multiple arguments:

	Argument
		The result is the value of evaluating the argument.
	.Method [Argument...]
		The method can be alone or the last element of a chain but,
		unlike methods in the middle of a chain, it can take arguments.
		The result is the value of calling the method with the
		arguments:
			dot.Method(Argument1, etc.)
	functionName [Argument...]
		The result is the value of calling the function associated
		with the name:
			function(Argument1, etc.)
		Functions and function names are described below.

Pipelines

A pipeline may be "chained" by separating a sequence of commands with pipeline
characters '|'. In a chained pipeline, the result of each command is
passed as the last argument of the following command. The output of the final
command in the pipeline is the value of the pipeline.

The output of a command will be either one value or two values, the second of
which has type error. If that second value is present and evaluates to
non-nil, execution terminates and the error is returned to the caller of
Execute.

Variables

A pipeline inside an action may initialize a variable to capture the result.
The initialization has syntax

	$variable := pipeline

where $variable is the name of the variable. An action that declares a
variable produces no output.

If a "range" action initializes a variable, the variable is set to the
successive elements of the iteration. Also, a "range" may declare two
variables, separated by a comma:

	range $index, $element := pipeline

in which case $index and $element are set to the successive values of the
array/slice index or map key and element, respectively. Note that if there is
only one variable, it is assigned the element; this is opposite to the
convention in Go range clauses.

A variable's scope extends to the "end" action of the control structure ("if",
"with", or "range") in which it is declared, or to the end of the template if
there is no such control structure. A template invocation does not inherit
variables from the point of its invocation.

When execution begins, $ is set to the data argument passed to Execute, that is,
to the starting value of dot.

Examples

Here are some example one-line templates demonstrating pipelines and variables.
All produce the quoted word "output":

	{{"\"output\""}}
		A string constant.
	{{`"output"`}}
		A raw string constant.
	{{printf "%q" "output"}}
		A function call.
	{{"output" | printf "%q"}}
		A function call whose final argument comes from the previous
		command.
	{{printf "%q" (print "out" "put")}}
		A parenthesized argument.
	{{"put" | printf "%s%s" "out" | printf "%q"}}
		A more elaborate call.
	{{"output" | printf "%s" | printf "%q"}}
		A longer chain.
	{{with "output"}}{{printf "%q" .}}{{end}}
		A with action using dot.
	{{with $x := "output" | printf "%q"}}{{$x}}{{end}}
		A with action that creates and uses a variable.
	{{with $x := "output"}}{{printf "%q" $x}}{{end}}
		A with action that uses the variable in another action.
	{{with $x := "output"}}{{$x | printf "%q"}}{{end}}
		The same, but pipelined.

Functions

During execution functions are found in two function maps: first in the
template, then in the global function map. By default, no functions are defined
in the template but the Funcs method can be used to add them.

Predefined global functions are named as follows.

	and
		Returns the boolean AND of its arguments by returning the
		first empty argument or the last argument, that is,
		"and x y" behaves as "if x then y else x". All the
		arguments are evaluated.
	call
		Returns the result of calling the first argument, which
		must be a function, with the remaining arguments as parameters.
		Thus "call .X.Y 1 2" is, in Go notation, dot.X.Y(1, 2) where
		Y is a func-valued field, map entry, or the like.
		The first argument must be the result of an evaluation
		that yields a value of function type (as distinct from
		a predefined function such as print). The function must
		return either one or two result values, the second of which
		is of type error. If the arguments don't match the function
		or the returned error value is non-nil, execution stops.
	html
		Returns the escaped HTML equivalent of the textual
		representation of its arguments.
	index
		Returns the result of indexing its first argument by the
		following arguments. Thus "index x 1 2 3" is, in Go syntax,
		x[1][2][3]. Each indexed item must be a map, slice, or array.
	js
		Returns the escaped JavaScript equivalent of the textual
		representation of its arguments.
	len
		Returns the integer length of its argument.
	not
		Returns the boolean negation of its single argument.
	or
		Returns the boolean OR of its arguments by returning the
		first non-empty argument or the last argument, that is,
		"or x y" behaves as "if x then x else y". All the
		arguments are evaluated.
	print
		An alias for fmt.Sprint
	printf
		An alias for fmt.Sprintf
	println
		An alias for fmt.Sprintln
	urlquery
		Returns the escaped value of the textual representation of
		its arguments in a form suitable for embedding in a URL query.

The boolean functions take any zero value to be false and a non-zero
value to be true.

There is also a set of binary comparison operators defined as
functions:

	eq
		Returns the boolean truth of arg1 == arg2
	ne
		Returns the boolean truth of arg1 != arg2
	lt
		Returns the boolean truth of arg1 < arg2
	le
		Returns the boolean truth of arg1 <= arg2
	gt
		Returns the boolean truth of arg1 > arg2
	ge
		Returns the boolean truth of arg1 >= arg2

For simpler multi-way equality tests, eq (only) accepts two or more
arguments and compares the second and subsequent to the first,
returning in effect

	arg1==arg2 || arg1==arg3 || arg1==arg4 ...

(Unlike with || in Go, however, eq is a function call and all the
arguments will be evaluated.)

The comparison functions work on basic types only (or named basic
types, such as "type Celsius float32"). They implement the Go rules
for comparison of values, except that size and exact type are
ignored, so any integer value, signed or unsigned, may be compared
with any other integer value. (The arithmetic value is compared,
not the bit pattern, so all negative integers are less than all
unsigned integers.) However, as usual, one may not compare an int
with a float32 and so on.

Associated templates

Each template is named by a string specified when it is created. Also, each
template is associated with zero or more other templates that it may invoke by
name; such associations are transitive and form a name space of templates.

A template may use a template invocation to instantiate another associated
template; see the explanation of the "template" action above. The name must be
that of a template associated with the template that contains the invocation.

Nested template definitions

When parsing a template, another template may be defined and associated with the
template being parsed. Template definitions must appear at the top level of the
template, much like global variables in a Go program.

The syntax of such definitions is to surround each template declaration with a
"define" and "end" action.

The define action names the template being created by providing a string
constant. Here is a simple example:

	`{{define "T1"}}ONE{{end}}
	{{define "T2"}}TWO{{end}}
	{{define "T3"}}{{template "T1"}} {{template "T2"}}{{end}}
	{{template "T3"}}`

This defines two templates, T1 and T2, and a third T3 that invokes the other two
when it is executed. Finally it invokes T3. If executed this template will
produce the text

	ONE TWO

By construction, a template may reside in only one association. If it's
necessary to have a template addressable from multiple associations, the
template definition must be parsed multiple times to create distinct *Template
values, or must be copied with the Clone or AddParseTree method.

Parse may be called multiple times to assemble the various associated templates;
see the ParseFiles and ParseGlob functions and methods for simple ways to parse
related templates stored in files.

A template may be executed directly or through ExecuteTemplate, which executes
an associated template identified by name. To invoke our example above, we
might write,

	err := tmpl.Execute(os.Stdout, "no data needed")
	if err != nil {
		log.Fatalf("execution failed: %s", err)
	}

or to invoke a particular template explicitly by name,

	err := tmpl.ExecuteTemplate(os.Stdout, "T2", "no data needed")
	if err != nil {
		log.Fatalf("execution failed: %s", err)
	}

*/
package template
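To tie together the action, pipeline, and variable rules documented above, here is a small runnable sketch (the `upper` function and `demo` name are illustrative; it uses the standard `text/template` import, and this fork behaves identically):

```go
package main

import (
	"os"
	"strings"
	"text/template"
)

func main() {
	// Register a custom function, then chain it in a pipeline and
	// capture the result in a variable inside a "with" action.
	funcs := template.FuncMap{"upper": strings.ToUpper}
	t := template.Must(template.New("demo").Funcs(funcs).Parse(
		`{{with $x := .Material | upper}}{{printf "%d items of %s" $.Count $x}}{{end}}`))
	data := struct {
		Material string
		Count    uint
	}{"wool", 17}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Prints: 17 items of WOOL
}
```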

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1 +1 @@
module github.com/alecthomas/template

@@ -1,108 +1,108 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Helper functions to make constructing templates easier.

package template

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// Functions and methods to parse templates.

// Must is a helper that wraps a call to a function returning (*Template, error)
// and panics if the error is non-nil. It is intended for use in variable
// initializations such as
//	var t = template.Must(template.New("name").Parse("text"))
func Must(t *Template, err error) *Template {
	if err != nil {
		panic(err)
	}
	return t
}

// ParseFiles creates a new Template and parses the template definitions from
// the named files. The returned template's name will have the (base) name and
// (parsed) contents of the first file. There must be at least one file.
// If an error occurs, parsing stops and the returned *Template is nil.
func ParseFiles(filenames ...string) (*Template, error) {
	return parseFiles(nil, filenames...)
}

// ParseFiles parses the named files and associates the resulting templates with
// t. If an error occurs, parsing stops and the returned template is nil;
// otherwise it is t. There must be at least one file.
func (t *Template) ParseFiles(filenames ...string) (*Template, error) {
	return parseFiles(t, filenames...)
}

// parseFiles is the helper for the method and function. If the argument
// template is nil, it is created from the first file.
func parseFiles(t *Template, filenames ...string) (*Template, error) {
	if len(filenames) == 0 {
		// Not really a problem, but be consistent.
		return nil, fmt.Errorf("template: no files named in call to ParseFiles")
	}
	for _, filename := range filenames {
		b, err := ioutil.ReadFile(filename)
		if err != nil {
			return nil, err
		}
		s := string(b)
		name := filepath.Base(filename)
		// First template becomes return value if not already defined,
		// and we use that one for subsequent New calls to associate
		// all the templates together. Also, if this file has the same name
		// as t, this file becomes the contents of t, so
		//	t, err := New(name).Funcs(xxx).ParseFiles(name)
		// works. Otherwise we create a new template associated with t.
		var tmpl *Template
		if t == nil {
			t = New(name)
		}
		if name == t.Name() {
			tmpl = t
		} else {
			tmpl = t.New(name)
		}
		_, err = tmpl.Parse(s)
		if err != nil {
			return nil, err
		}
	}
	return t, nil
}

// ParseGlob creates a new Template and parses the template definitions from the
// files identified by the pattern, which must match at least one file. The
// returned template will have the (base) name and (parsed) contents of the
// first file matched by the pattern. ParseGlob is equivalent to calling
// ParseFiles with the list of files matched by the pattern.
func ParseGlob(pattern string) (*Template, error) {
	return parseGlob(nil, pattern)
}

// ParseGlob parses the template definitions in the files identified by the
// pattern and associates the resulting templates with t. The pattern is
// processed by filepath.Glob and must match at least one file. ParseGlob is
// equivalent to calling t.ParseFiles with the list of files matched by the
// pattern.
func (t *Template) ParseGlob(pattern string) (*Template, error) {
	return parseGlob(t, pattern)
}

// parseGlob is the implementation of the function and method ParseGlob.
func parseGlob(t *Template, pattern string) (*Template, error) {
	filenames, err := filepath.Glob(pattern)
	if err != nil {
		return nil, err
	}
	if len(filenames) == 0 {
		return nil, fmt.Errorf("template: pattern matches no files: %#q", pattern)
	}
	return parseFiles(t, filenames...)
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,218 +1,218 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package template

import (
	"fmt"
	"reflect"

	"github.com/alecthomas/template/parse"
)

// common holds the information shared by related templates.
type common struct {
	tmpl map[string]*Template
	// We use two maps, one for parsing and one for execution.
	// This separation makes the API cleaner since it doesn't
	// expose reflection to the client.
	parseFuncs FuncMap
	execFuncs  map[string]reflect.Value
}

// Template is the representation of a parsed template. The *parse.Tree
// field is exported only for use by html/template and should be treated
// as unexported by all other clients.
type Template struct {
	name string
	*parse.Tree
	*common
	leftDelim  string
	rightDelim string
}

// New allocates a new template with the given name.
func New(name string) *Template {
	return &Template{
		name: name,
	}
}

// Name returns the name of the template.
func (t *Template) Name() string {
	return t.name
}

// New allocates a new template associated with the given one and with the same
// delimiters. The association, which is transitive, allows one template to
// invoke another with a {{template}} action.
func (t *Template) New(name string) *Template {
	t.init()
	return &Template{
		name:       name,
		common:     t.common,
		leftDelim:  t.leftDelim,
		rightDelim: t.rightDelim,
	}
}

func (t *Template) init() {
	if t.common == nil {
		t.common = new(common)
		t.tmpl = make(map[string]*Template)
		t.parseFuncs = make(FuncMap)
		t.execFuncs = make(map[string]reflect.Value)
	}
}

// Clone returns a duplicate of the template, including all associated
// templates. The actual representation is not copied, but the name space of
// associated templates is, so further calls to Parse in the copy will add
// templates to the copy but not to the original. Clone can be used to prepare
// common templates and use them with variant definitions for other templates
// by adding the variants after the clone is made.
func (t *Template) Clone() (*Template, error) {
	nt := t.copy(nil)
	nt.init()
	nt.tmpl[t.name] = nt
	for k, v := range t.tmpl {
		if k == t.name { // Already installed.
			continue
		}
		// The associated templates share nt's common structure.
		tmpl := v.copy(nt.common)
		nt.tmpl[k] = tmpl
	}
	for k, v := range t.parseFuncs {
		nt.parseFuncs[k] = v
	}
	for k, v := range t.execFuncs {
		nt.execFuncs[k] = v
	}
	return nt, nil
}

// copy returns a shallow copy of t, with common set to the argument.
func (t *Template) copy(c *common) *Template {
	nt := New(t.name)
	nt.Tree = t.Tree
	nt.common = c
	nt.leftDelim = t.leftDelim
	nt.rightDelim = t.rightDelim
	return nt
}

// AddParseTree creates a new template with the name and parse tree
// and associates it with t.
func (t *Template) AddParseTree(name string, tree *parse.Tree) (*Template, error) {
	if t.common != nil && t.tmpl[name] != nil {
		return nil, fmt.Errorf("template: redefinition of template %q", name)
	}
	nt := t.New(name)
	nt.Tree = tree
	t.tmpl[name] = nt
	return nt, nil
}

// Templates returns a slice of the templates associated with t, including t
// itself.
func (t *Template) Templates() []*Template {
	if t.common == nil {
		return nil
	}
	// Return a slice so we don't expose the map.
	m := make([]*Template, 0, len(t.tmpl))
	for _, v := range t.tmpl {
		m = append(m, v)
	}
	return m
}

// Delims sets the action delimiters to the specified strings, to be used in
// subsequent calls to Parse, ParseFiles, or ParseGlob. Nested template
// definitions will inherit the settings. An empty delimiter stands for the
// corresponding default: {{ or }}.
// The return value is the template, so calls can be chained.
func (t *Template) Delims(left, right string) *Template {
	t.leftDelim = left
	t.rightDelim = right
	return t
}

// Funcs adds the elements of the argument map to the template's function map.
// It panics if a value in the map is not a function with appropriate return
// type. However, it is legal to overwrite elements of the map. The return
// value is the template, so calls can be chained.
func (t *Template) Funcs(funcMap FuncMap) *Template {
	t.init()
	addValueFuncs(t.execFuncs, funcMap)
	addFuncs(t.parseFuncs, funcMap)
	return t
}

// Lookup returns the template with the given name that is associated with t,
// or nil if there is no such template.
func (t *Template) Lookup(name string) *Template {
	if t.common == nil {
		return nil
	}
	return t.tmpl[name]
}

// Parse parses a string into a template. Nested template definitions will be
// associated with the top-level template t. Parse may be called multiple times
// to parse definitions of templates to associate with t. It is an error if a
// resulting template is non-empty (contains content other than template
// definitions) and would replace a non-empty template with the same name.
// (In multiple calls to Parse with the same receiver template, only one call
// can contain text other than space, comments, and template definitions.)
func (t *Template) Parse(text string) (*Template, error) {
	t.init()
	trees, err := parse.Parse(t.name, text, t.leftDelim, t.rightDelim, t.parseFuncs, builtins)
	if err != nil {
		return nil, err
	}
	// Add the newly parsed trees, including the one for t, into our common structure.
	for name, tree := range trees {
		// If the name we parsed is the name of this template, overwrite this template.
		// The associate method checks it's not a redefinition.
		tmpl := t
		if name != t.name {
			tmpl = t.New(name)
		}
		// Even if t == tmpl, we need to install it in the common.tmpl map.
		if replace, err := t.associate(tmpl, tree); err != nil {
			return nil, err
		} else if replace {
			tmpl.Tree = tree
		}
		tmpl.leftDelim = t.leftDelim
		tmpl.rightDelim = t.rightDelim
	}
	return t, nil
}

// associate installs the new template into the group of templates associated
// with t. It is an error to reuse a name except to overwrite an empty
// template. The two are already known to share the common structure.
// The boolean return value reports whether to store this tree as t.Tree.
func (t *Template) associate(new *Template, tree *parse.Tree) (bool, error) {
	if new.common != t.common {
		panic("internal error: associate not common")
	}
	name := new.name
	if old := t.tmpl[name]; old != nil {
		oldIsEmpty := parse.IsEmptyTree(old.Root)
		newIsEmpty := parse.IsEmptyTree(tree.Root)
		if newIsEmpty {
			// Whether old is empty or not, new is empty; no reason to replace old.
			return false, nil
		}
		if !oldIsEmpty {
			return false, fmt.Errorf("template: redefinition of template %q", name)
		}
	}
	t.tmpl[name] = new
	return true, nil
}
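Since Clone is the sanctioned way to reuse a base template with variant definitions, a brief sketch (the names are illustrative; note that, per associate above, Parse may only replace an *empty* template, so the base defines an empty block for variants to fill in):

```go
package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// The base invokes a "content" block that is defined empty.
	base := template.Must(template.New("page").Parse(
		`[{{template "content" .}}]{{define "content"}}{{end}}`))

	// Clone first, then define "content" on the copy only.
	variant := template.Must(template.Must(base.Clone()).Parse(
		`{{define "content"}}hello{{end}}`))

	base.Execute(os.Stdout, nil)    // prints []
	variant.Execute(os.Stdout, nil) // prints [hello]
}
```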

@@ -1,19 +1,19 @@
Copyright (C) 2014 Alec Thomas

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -1,11 +1,11 @@
# Units - Helpful unit multipliers and functions for Go

The goal of this package is to have functionality similar to the [time](http://golang.org/pkg/time/) package.

It allows for code like this:

```go
n, err := ParseBase2Bytes("1KB")
// n == 1024
n = units.Mebibyte * 512
```
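Expanding the snippet above into a complete program (the output comments reflect the `String` formatting implemented further down in this diff):

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	n, err := units.ParseBase2Bytes("1KB") // old-style suffix, still base-2
	if err != nil {
		panic(err)
	}
	fmt.Println(int64(n))               // 1024
	fmt.Println(units.Mebibyte * 512)   // 512MiB
	fmt.Println(units.Base2Bytes(2048)) // 2KiB
}
```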

@@ -1,83 +1,83 @@
package units

// Base2Bytes is the old non-SI power-of-2 byte scale (1024 bytes in a kilobyte,
// etc.).
type Base2Bytes int64

// Base-2 byte units.
const (
    Kibibyte Base2Bytes = 1024
    KiB                 = Kibibyte
    Mebibyte            = Kibibyte * 1024
    MiB                 = Mebibyte
    Gibibyte            = Mebibyte * 1024
    GiB                 = Gibibyte
    Tebibyte            = Gibibyte * 1024
    TiB                 = Tebibyte
    Pebibyte            = Tebibyte * 1024
    PiB                 = Pebibyte
    Exbibyte            = Pebibyte * 1024
    EiB                 = Exbibyte
)

var (
    bytesUnitMap    = MakeUnitMap("iB", "B", 1024)
    oldBytesUnitMap = MakeUnitMap("B", "B", 1024)
)

// ParseBase2Bytes supports both iB and B in base-2 multipliers. That is, KB
// and KiB are both 1024.
func ParseBase2Bytes(s string) (Base2Bytes, error) {
    n, err := ParseUnit(s, bytesUnitMap)
    if err != nil {
        n, err = ParseUnit(s, oldBytesUnitMap)
    }
    return Base2Bytes(n), err
}

func (b Base2Bytes) String() string {
    return ToString(int64(b), 1024, "iB", "B")
}

var (
    metricBytesUnitMap = MakeUnitMap("B", "B", 1000)
)

// MetricBytes are SI byte units (1000 bytes in a kilobyte).
type MetricBytes SI

// SI base-10 byte units.
const (
    Kilobyte MetricBytes = 1000
    KB                   = Kilobyte
    Megabyte             = Kilobyte * 1000
    MB                   = Megabyte
    Gigabyte             = Megabyte * 1000
    GB                   = Gigabyte
    Terabyte             = Gigabyte * 1000
    TB                   = Terabyte
    Petabyte             = Terabyte * 1000
    PB                   = Petabyte
    Exabyte              = Petabyte * 1000
    EB                   = Exabyte
)

// ParseMetricBytes parses base-10 metric byte units. That is, KB is 1000 bytes.
func ParseMetricBytes(s string) (MetricBytes, error) {
    n, err := ParseUnit(s, metricBytesUnitMap)
    return MetricBytes(n), err
}

func (m MetricBytes) String() string {
    return ToString(int64(m), 1000, "B", "B")
}

// ParseStrictBytes supports both iB and B suffixes for base 2 and metric,
// respectively. That is, KiB represents 1024 and KB represents 1000.
func ParseStrictBytes(s string) (int64, error) {
    n, err := ParseUnit(s, bytesUnitMap)
    if err != nil {
        n, err = ParseUnit(s, metricBytesUnitMap)
    }
    return int64(n), err
}
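
The two parsers above differ only in how they treat the ambiguous B suffixes, which the doc comments state but do not demonstrate side by side. A minimal sketch, assuming the package is imported from `github.com/alecthomas/units` as vendored here:

```go
package main

import (
    "fmt"

    "github.com/alecthomas/units"
)

func main() {
    // ParseBase2Bytes treats KB and KiB identically: both mean 1024 bytes.
    n, err := units.ParseBase2Bytes("1KB")
    if err != nil {
        panic(err)
    }
    fmt.Println(int64(n)) // 1024

    // ParseStrictBytes distinguishes them: KiB is base-2, KB is base-10.
    strict, _ := units.ParseStrictBytes("1KiB")
    metric, _ := units.ParseStrictBytes("1KB")
    fmt.Println(strict, metric) // 1024 1000
}
```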


@@ -1,13 +1,13 @@
// Package units provides helpful unit multipliers and functions for Go.
//
// The goal of this package is to have functionality similar to the time [1] package.
//
//
// [1] http://golang.org/pkg/time/
//
// It allows for code like this:
//
//     n, err := ParseBase2Bytes("1KB")
//     // n == 1024
//     n = units.Mebibyte * 512
package units


@@ -1 +1 @@
module github.com/alecthomas/units


@@ -1,26 +1,26 @@
package units

// SI units.
type SI int64

// SI unit multiples.
const (
    Kilo SI = 1000
    Mega    = Kilo * 1000
    Giga    = Mega * 1000
    Tera    = Giga * 1000
    Peta    = Tera * 1000
    Exa     = Peta * 1000
)

func MakeUnitMap(suffix, shortSuffix string, scale int64) map[string]float64 {
    return map[string]float64{
        shortSuffix:  1,
        "K" + suffix: float64(scale),
        "M" + suffix: float64(scale * scale),
        "G" + suffix: float64(scale * scale * scale),
        "T" + suffix: float64(scale * scale * scale * scale),
        "P" + suffix: float64(scale * scale * scale * scale * scale),
        "E" + suffix: float64(scale * scale * scale * scale * scale * scale),
    }
}
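
MakeUnitMap is the hook for building unit families beyond bytes: it maps the short base suffix to 1 and each SI prefix to the corresponding power of the scale. An illustrative sketch with a hypothetical bits-per-second map (the `bps` names are invented for this example, not part of the package):

```go
package main

import (
    "fmt"

    "github.com/alecthomas/units"
)

func main() {
    // "bps" is the base unit; each K/M/G/... prefix multiplies by 1000.
    bps := units.MakeUnitMap("bps", "bps", 1000)

    // ParseUnit (defined in util.go below) consumes number/unit pairs.
    n, err := units.ParseUnit("2.5Mbps", bps)
    if err != nil {
        panic(err)
    }
    fmt.Println(n) // 2500000
}
```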


@@ -1,138 +1,138 @@
package units

import (
    "errors"
    "fmt"
    "strings"
)

var (
    siUnits = []string{"", "K", "M", "G", "T", "P", "E"}
)

func ToString(n int64, scale int64, suffix, baseSuffix string) string {
    mn := len(siUnits)
    out := make([]string, mn)
    for i, m := range siUnits {
        if n%scale != 0 || i == 0 && n == 0 {
            s := suffix
            if i == 0 {
                s = baseSuffix
            }
            out[mn-1-i] = fmt.Sprintf("%d%s%s", n%scale, m, s)
        }
        n /= scale
        if n == 0 {
            break
        }
    }
    return strings.Join(out, "")
}

// Below code ripped straight from http://golang.org/src/pkg/time/format.go?s=33392:33438#L1123
var errLeadingInt = errors.New("units: bad [0-9]*") // never printed

// leadingInt consumes the leading [0-9]* from s.
func leadingInt(s string) (x int64, rem string, err error) {
    i := 0
    for ; i < len(s); i++ {
        c := s[i]
        if c < '0' || c > '9' {
            break
        }
        if x >= (1<<63-10)/10 {
            // overflow
            return 0, "", errLeadingInt
        }
        x = x*10 + int64(c) - '0'
    }
    return x, s[i:], nil
}

func ParseUnit(s string, unitMap map[string]float64) (int64, error) {
    // [-+]?([0-9]*(\.[0-9]*)?[a-z]+)+
    orig := s
    f := float64(0)
    neg := false

    // Consume [-+]?
    if s != "" {
        c := s[0]
        if c == '-' || c == '+' {
            neg = c == '-'
            s = s[1:]
        }
    }

    // Special case: if all that is left is "0", this is zero.
    if s == "0" {
        return 0, nil
    }
    if s == "" {
        return 0, errors.New("units: invalid " + orig)
    }

    for s != "" {
        g := float64(0) // this element of the sequence

        var x int64
        var err error

        // The next character must be [0-9.]
        if !(s[0] == '.' || ('0' <= s[0] && s[0] <= '9')) {
            return 0, errors.New("units: invalid " + orig)
        }

        // Consume [0-9]*
        pl := len(s)
        x, s, err = leadingInt(s)
        if err != nil {
            return 0, errors.New("units: invalid " + orig)
        }
        g = float64(x)
        pre := pl != len(s) // whether we consumed anything before a period

        // Consume (\.[0-9]*)?
        post := false
        if s != "" && s[0] == '.' {
            s = s[1:]
            pl := len(s)
            x, s, err = leadingInt(s)
            if err != nil {
                return 0, errors.New("units: invalid " + orig)
            }
            scale := 1.0
            for n := pl - len(s); n > 0; n-- {
                scale *= 10
            }
            g += float64(x) / scale
            post = pl != len(s)
        }
        if !pre && !post {
            // no digits (e.g. ".s" or "-.s")
            return 0, errors.New("units: invalid " + orig)
        }

        // Consume unit.
        i := 0
        for ; i < len(s); i++ {
            c := s[i]
            if c == '.' || ('0' <= c && c <= '9') {
                break
            }
        }
        u := s[:i]
        s = s[i:]
        unit, ok := unitMap[u]
        if !ok {
            return 0, errors.New("units: unknown unit " + u + " in " + orig)
        }

        f += g * unit
    }

    if neg {
        f = -f
    }
    if f < float64(-1<<63) || f > float64(1<<63-1) {
        return 0, errors.New("units: overflow parsing unit")
    }
    return int64(f), nil
}
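
The comment at the top of ParseUnit gives the accepted grammar, `[-+]?([0-9]*(\.[0-9]*)?[a-z]+)+`: several number/unit terms may be concatenated, in the style of time.ParseDuration's "1h30m", and the terms are summed. A small sketch of that behavior via ParseBase2Bytes, under the same vendored import path:

```go
package main

import (
    "fmt"

    "github.com/alecthomas/units"
)

func main() {
    // Two concatenated terms: 1 MiB + 512 KiB.
    v, err := units.ParseBase2Bytes("1MiB512KiB")
    if err != nil {
        panic(err)
    }
    fmt.Println(int64(v)) // 1572864 (1048576 + 524288)
}
```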


@@ -1,20 +1,20 @@
Copyright (C) 2013 Blake Mizerany

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

File diff suppressed because it is too large


@@ -1,316 +1,316 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile

import (
    "math"
    "sort"
)

// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
    Value float64 `json:",string"`
    Width float64 `json:",string"`
    Delta float64 `json:",string"`
}

// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample

func (a Samples) Len() int           { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }

type invariant func(s *stream, r float64) float64

// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
    ƒ := func(s *stream, r float64) float64 {
        return 2 * epsilon * r
    }
    return newStream(ƒ)
}

// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
    ƒ := func(s *stream, r float64) float64 {
        return 2 * epsilon * (s.n - r)
    }
    return newStream(ƒ)
}

// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targetMap map[float64]float64) *Stream {
    // Convert map to slice to avoid slow iterations on a map.
    // ƒ is called on the hot path, so converting the map to a slice
    // beforehand results in significant CPU savings.
    targets := targetMapToSlice(targetMap)
    ƒ := func(s *stream, r float64) float64 {
        var m = math.MaxFloat64
        var f float64
        for _, t := range targets {
            if t.quantile*s.n <= r {
                f = (2 * t.epsilon * r) / t.quantile
            } else {
                f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile)
            }
            if f < m {
                m = f
            }
        }
        return m
    }
    return newStream(ƒ)
}

type target struct {
    quantile float64
    epsilon  float64
}

func targetMapToSlice(targetMap map[float64]float64) []target {
    targets := make([]target, 0, len(targetMap))
    for quantile, epsilon := range targetMap {
        t := target{
            quantile: quantile,
            epsilon:  epsilon,
        }
        targets = append(targets, t)
    }
    return targets
}

// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
    *stream
    b      Samples
    sorted bool
}

func newStream(ƒ invariant) *Stream {
    x := &stream{ƒ: ƒ}
    return &Stream{x, make(Samples, 0, 500), true}
}

// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
    s.insert(Sample{Value: v, Width: 1})
}

func (s *Stream) insert(sample Sample) {
    s.b = append(s.b, sample)
    s.sorted = false
    if len(s.b) == cap(s.b) {
        s.flush()
    }
}

// Query returns the computed qth percentile value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
    if !s.flushed() {
        // Fast path when there hasn't been enough data for a flush;
        // this also yields better accuracy for small sets of data.
        l := len(s.b)
        if l == 0 {
            return 0
        }
        i := int(math.Ceil(float64(l) * q))
        if i > 0 {
            i -= 1
        }
        s.maybeSort()
        return s.b[i].Value
    }
    s.flush()
    return s.stream.query(q)
}

// Merge merges samples into the underlying stream's samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
    sort.Sort(samples)
    s.stream.merge(samples)
}

// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
    s.stream.reset()
    s.b = s.b[:0]
}

// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
    if !s.flushed() {
        return s.b
    }
    s.flush()
    return s.stream.samples()
}

// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
    return len(s.b) + s.stream.count()
}

func (s *Stream) flush() {
    s.maybeSort()
    s.stream.merge(s.b)
    s.b = s.b[:0]
}

func (s *Stream) maybeSort() {
    if !s.sorted {
        s.sorted = true
        sort.Sort(s.b)
    }
}

func (s *Stream) flushed() bool {
    return len(s.stream.l) > 0
}

type stream struct {
    n float64
    l []Sample
    ƒ invariant
}

func (s *stream) reset() {
    s.l = s.l[:0]
    s.n = 0
}

func (s *stream) insert(v float64) {
    s.merge(Samples{{v, 1, 0}})
}

func (s *stream) merge(samples Samples) {
    // TODO(beorn7): This tries to merge not only individual samples, but
    // whole summaries. The paper doesn't mention merging summaries at
    // all. Unittests show that the merging is inaccurate. Find out how to
    // do merges properly.
    var r float64
    i := 0
    for _, sample := range samples {
        for ; i < len(s.l); i++ {
            c := s.l[i]
            if c.Value > sample.Value {
                // Insert at position i.
                s.l = append(s.l, Sample{})
                copy(s.l[i+1:], s.l[i:])
                s.l[i] = Sample{
                    sample.Value,
                    sample.Width,
                    math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
                    // TODO(beorn7): How to calculate delta correctly?
                }
                i++
                goto inserted
            }
            r += c.Width
        }
        s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
        i++
    inserted:
        s.n += sample.Width
        r += sample.Width
    }
    s.compress()
}

func (s *stream) count() int {
    return int(s.n)
}

func (s *stream) query(q float64) float64 {
    t := math.Ceil(q * s.n)
    t += math.Ceil(s.ƒ(s, t) / 2)
    p := s.l[0]
    var r float64
    for _, c := range s.l[1:] {
        r += p.Width
        if r+c.Width+c.Delta > t {
            return p.Value
        }
        p = c
    }
    return p.Value
}

func (s *stream) compress() {
    if len(s.l) < 2 {
        return
    }
    x := s.l[len(s.l)-1]
    xi := len(s.l) - 1
    r := s.n - 1 - x.Width
    for i := len(s.l) - 2; i >= 0; i-- {
        c := s.l[i]
        if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
            x.Width += c.Width
            s.l[xi] = x
            // Remove element at i.
            copy(s.l[i:], s.l[i+1:])
            s.l = s.l[:len(s.l)-1]
            xi -= 1
        } else {
            x = c
            xi = i
        }
        r -= c.Width
    }
}

func (s *stream) samples() Samples {
    samples := make(Samples, len(s.l))
    copy(samples, s.l)
    return samples
}
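
The package comment describes the algorithm abstractly; a minimal usage sketch of a targeted stream, assuming the conventional vendored import path `github.com/beorn7/perks/quantile`:

```go
package main

import (
    "fmt"
    "math/rand"

    "github.com/beorn7/perks/quantile"
)

func main() {
    // Track the median and the 99th percentile, each with its own
    // absolute error bound, as documented on NewTargeted.
    q := quantile.NewTargeted(map[float64]float64{
        0.50: 0.005,
        0.99: 0.001,
    })
    for i := 0; i < 100000; i++ {
        q.Insert(rand.NormFloat64())
    }
    fmt.Println("count:", q.Count())
    fmt.Println("p50:", q.Query(0.50))
    fmt.Println("p99:", q.Query(0.99))
}
```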

vendor/github.com/go-kit/kit/LICENSE

@@ -1,22 +1,22 @@
The MIT License (MIT)

Copyright (c) 2015 Peter Bourgon

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,151 +1,151 @@
# package log

`package log` provides a minimal interface for structured logging in services.
It may be wrapped to encode conventions, enforce type-safety, provide leveled
logging, and so on. It can be used for both typical application log events,
and log-structured data streams.

## Structured logging

Structured logging is, basically, conceding to the reality that logs are
_data_, and warrant some level of schematic rigor. Using a stricter,
key/value-oriented message format for our logs, containing contextual and
semantic information, makes it much easier to get insight into the
operational activity of the systems we build. Consequently, `package log` is
of the strong belief that "[the benefits of structured logging outweigh the
minimal effort involved](https://www.thoughtworks.com/radar/techniques/structured-logging)".

Migrating from unstructured to structured logging is probably a lot easier
than you'd expect.

```go
// Unstructured
log.Printf("HTTP server listening on %s", addr)

// Structured
logger.Log("transport", "HTTP", "addr", addr, "msg", "listening")
```

## Usage

### Typical application logging

```go
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
logger.Log("question", "what is the meaning of life?", "answer", 42)

// Output:
// question="what is the meaning of life?" answer=42
```

### Contextual Loggers

```go
func main() {
    var logger log.Logger
    logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
    logger = log.With(logger, "instance_id", 123)

    logger.Log("msg", "starting")
    NewWorker(log.With(logger, "component", "worker")).Run()
    NewSlacker(log.With(logger, "component", "slacker")).Run()
}

// Output:
// instance_id=123 msg=starting
// instance_id=123 component=worker msg=running
// instance_id=123 component=slacker msg=running
```

### Interact with stdlib logger

Redirect stdlib logger to Go kit logger.

```go
import (
    "os"
    stdlog "log"
    kitlog "github.com/go-kit/kit/log"
)

func main() {
    logger := kitlog.NewJSONLogger(kitlog.NewSyncWriter(os.Stdout))
    stdlog.SetOutput(kitlog.NewStdlibAdapter(logger))
    stdlog.Print("I sure like pie")
}

// Output:
// {"msg":"I sure like pie","ts":"2016/01/01 12:34:56"}
```

Or, if, for legacy reasons, you need to pipe all of your logging through the
stdlib log package, you can redirect Go kit logger to the stdlib logger.

```go
logger := kitlog.NewLogfmtLogger(kitlog.StdlibWriter{})
logger.Log("legacy", true, "msg", "at least it's something")

// Output:
// 2016/01/01 12:34:56 legacy=true msg="at least it's something"
```

### Timestamps and callers

```go
var logger log.Logger
logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)

logger.Log("msg", "hello")

// Output:
// ts=2016-01-01T12:34:56Z caller=main.go:15 msg=hello
```

## Levels

Log levels are supported via the [level package](https://godoc.org/github.com/go-kit/kit/log/level).

## Supported output formats

- [Logfmt](https://brandur.org/logfmt) ([see also](https://blog.codeship.com/logfmt-a-log-format-thats-easy-to-read-and-write))
- JSON

## Enhancements

`package log` is centered on the one-method Logger interface.

```go
type Logger interface {
    Log(keyvals ...interface{}) error
}
```

This interface, and its supporting code, is the product of much iteration
and evaluation. For more details on the evolution of the Logger interface,
see [The Hunt for a Logger Interface](http://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide#1),
a talk by [Chris Hines](https://github.com/ChrisHines).
Also, please see
[#63](https://github.com/go-kit/kit/issues/63),
[#76](https://github.com/go-kit/kit/pull/76),
[#131](https://github.com/go-kit/kit/issues/131),
[#157](https://github.com/go-kit/kit/pull/157),
[#164](https://github.com/go-kit/kit/issues/164), and
[#252](https://github.com/go-kit/kit/pull/252)
to review historical conversations about package log and the Logger interface.

Value-add packages and suggestions,
like improvements to [the leveled logger](https://godoc.org/github.com/go-kit/kit/log/level),
are of course welcome. Good proposals should

- Be composable with [contextual loggers](https://godoc.org/github.com/go-kit/kit/log#With),
- Not break the behavior of [log.Caller](https://godoc.org/github.com/go-kit/kit/log#Caller) in any wrapped contextual loggers, and
- Be friendly to packages that accept only an unadorned log.Logger.

## Benchmarks & comparisons

There are a few Go logging benchmarks and comparisons that include Go kit's package log.

- [imkira/go-loggers-bench](https://github.com/imkira/go-loggers-bench) includes kit/log
- [uber-common/zap](https://github.com/uber-common/zap), a zero-alloc logging library, includes a comparison with kit/log


@@ -1,116 +1,116 @@
// Package log provides a structured logger.
//
// Structured logging produces logs easily consumed later by humans or
// machines. Humans might be interested in debugging errors, or tracing
// specific requests. Machines might be interested in counting interesting
// events, or aggregating information for off-line processing. In both cases,
// it is important that the log messages are structured and actionable.
// Package log is designed to encourage both of these best practices.
//
// Basic Usage
//
// The fundamental interface is Logger. Loggers create log events from
// key/value data. The Logger interface has a single method, Log, which
// accepts a sequence of alternating key/value pairs, which this package names
// keyvals.
//
//     type Logger interface {
//         Log(keyvals ...interface{}) error
//     }
//
// Here is an example of a function using a Logger to create log events.
//
//     func RunTask(task Task, logger log.Logger) string {
//         logger.Log("taskID", task.ID, "event", "starting task")
//         ...
//         logger.Log("taskID", task.ID, "event", "task complete")
//     }
//
// The keys in the above example are "taskID" and "event". The values are
// task.ID, "starting task", and "task complete". Every key is followed
// immediately by its value.
//
// Keys are usually plain strings. Values may be any type that has a sensible
// encoding in the chosen log format. With structured logging it is a good
// idea to log simple values without formatting them. This practice allows
// the chosen logger to encode values in the most appropriate way.
//
// Contextual Loggers
//
// A contextual logger stores keyvals that it includes in all log events.
// Building appropriate contextual loggers reduces repetition and aids
// consistency in the resulting log output. With and WithPrefix add context to
// a logger. We can use With to improve the RunTask example.
//
//     func RunTask(task Task, logger log.Logger) string {
//         logger = log.With(logger, "taskID", task.ID)
//         logger.Log("event", "starting task")
//         ...
//         taskHelper(task.Cmd, logger)
//         ...
//         logger.Log("event", "task complete")
//     }
//
// The improved version emits the same log events as the original for the
// first and last calls to Log. Passing the contextual logger to taskHelper
// enables each log event created by taskHelper to include the task.ID even
// though taskHelper does not have access to that value. Using contextual
// loggers this way simplifies producing log output that enables tracing the
// life cycle of individual tasks. (See the Contextual example for the full
// code of the above snippet.)
//
// Dynamic Contextual Values
//
// A Valuer function stored in a contextual logger generates a new value each
// time an event is logged. The Valuer example demonstrates how this feature
// works.
//
// Valuers provide the basis for consistently logging timestamps and source
// code location. The log package defines several valuers for that purpose.
// See Timestamp, DefaultTimestamp, DefaultTimestampUTC, Caller, and
// DefaultCaller. A common logger initialization sequence that ensures all log
// entries contain a timestamp and source location looks like this:
//
//     logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
//     logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)
//
// Concurrent Safety
//
// Applications with multiple goroutines want each log event written to the
// same logger to remain separate from other log events. Package log provides
// two simple solutions for concurrent safe logging.
//
// NewSyncWriter wraps an io.Writer and serializes each call to its Write
// method. Using a SyncWriter has the benefit that the smallest practical
// portion of the logging logic is performed within a mutex, but it requires
// the formatting Logger to make only one call to Write per log event.
//
// NewSyncLogger wraps any Logger and serializes each call to its Log method.
// Using a SyncLogger has the benefit that it guarantees each log event is
// handled atomically within the wrapped logger, but it typically serializes
// both the formatting and output logic. Use a SyncLogger if the formatting
// logger may perform multiple writes per log event.
//
// Error Handling
//
// This package relies on the practice of wrapping or decorating loggers with
// other loggers to provide composable pieces of functionality. It also means
// that Logger.Log must return an error because some
// implementations—especially those that output log data to an io.Writer—may
// encounter errors that cannot be handled locally. This in turn means that
// Loggers that wrap other loggers should return errors from the wrapped
// logger up the stack.
//
// Fortunately, the decorator pattern also provides a way to avoid the
// necessity to check for errors every time an application calls Logger.Log.
// An application required to panic whenever its Logger encounters
// an error could initialize its logger as follows.
//
//     fmtlogger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
//     logger := log.LoggerFunc(func(keyvals ...interface{}) error {
//         if err := fmtlogger.Log(keyvals...); err != nil {
//             panic(err)
//         }
//         return nil
//     })
package log


@@ -1,89 +1,89 @@
package log

import (
    "encoding"
    "encoding/json"
    "fmt"
    "io"
    "reflect"
)

type jsonLogger struct {
    io.Writer
}

// NewJSONLogger returns a Logger that encodes keyvals to the Writer as a
// single JSON object. Each log event produces no more than one call to
// w.Write. The passed Writer must be safe for concurrent use by multiple
// goroutines if the returned Logger will be used concurrently.
func NewJSONLogger(w io.Writer) Logger {
    return &jsonLogger{w}
}

func (l *jsonLogger) Log(keyvals ...interface{}) error {
    n := (len(keyvals) + 1) / 2 // +1 to handle case when len is odd
    m := make(map[string]interface{}, n)
    for i := 0; i < len(keyvals); i += 2 {
        k := keyvals[i]
        var v interface{} = ErrMissingValue
        if i+1 < len(keyvals) {
            v = keyvals[i+1]
        }
        merge(m, k, v)
    }
    return json.NewEncoder(l.Writer).Encode(m)
}

func merge(dst map[string]interface{}, k, v interface{}) {
    var key string
    switch x := k.(type) {
    case string:
        key = x
    case fmt.Stringer:
        key = safeString(x)
    default:
        key = fmt.Sprint(x)
    }
    // We want json.Marshaler and encoding.TextMarshaler to take priority over
    // err.Error() and v.String(). But json.Marshal (called later) does that by
    // default, so we force a no-op if it's one of those two cases.
    switch x := v.(type) {
    case json.Marshaler:
    case encoding.TextMarshaler:
    case error:
        v = safeError(x)
    case fmt.Stringer:
        v = safeString(x)
    }
    dst[key] = v
}

func safeString(str fmt.Stringer) (s string) {
    defer func() {
        if panicVal := recover(); panicVal != nil {
            if v := reflect.ValueOf(str); v.Kind() == reflect.Ptr && v.IsNil() {
                s = "NULL"
            } else {
                panic(panicVal)
            }
        }
    }()
    s = str.String()
    return
}

func safeError(err error) (s interface{}) {
    defer func() {
        if panicVal := recover(); panicVal != nil {
            if v := reflect.ValueOf(err); v.Kind() == reflect.Ptr && v.IsNil() {
                s = nil
            } else {
                panic(panicVal)
            }
        }
    }()
    s = err.Error()
    return
}
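
A short sketch of the Log behavior implemented above: an odd number of keyvals falls back to ErrMissingValue, and error values pass through safeError. The import path is the vendored one; the exact output assumes go-kit's ErrMissingValue renders as "(MISSING)":

```go
package main

import (
    "errors"
    "os"

    "github.com/go-kit/kit/log"
)

func main() {
    logger := log.NewJSONLogger(log.NewSyncWriter(os.Stdout))

    // The error value is converted to its message by safeError.
    logger.Log("msg", "opening store", "err", errors.New("no such file"))

    // A dangling key gets ErrMissingValue as its value.
    logger.Log("dangling-key")

    // Likely output (JSON key order is unspecified):
    // {"err":"no such file","msg":"opening store"}
    // {"dangling-key":"(MISSING)"}
}
```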


@@ -1,22 +1,22 @@
// Package level implements leveled logging on top of Go kit's log package. To
// use the level package, create a logger as per normal in your func main, and
// wrap it with level.NewFilter.
//
//     var logger log.Logger
//     logger = log.NewLogfmtLogger(os.Stderr)
//     logger = level.NewFilter(logger, level.AllowInfo()) // <--
//     logger = log.With(logger, "ts", log.DefaultTimestampUTC)
//
// Then, at the callsites, use one of the level.Debug, Info, Warn, or Error
// helper methods to emit leveled log events.
//
//     logger.Log("foo", "bar") // as normal, no level
//     level.Debug(logger).Log("request_id", reqID, "trace_data", trace.Get())
//     if value > 100 {
//         level.Error(logger).Log("value", value)
//     }
//
// NewFilter allows precise control over what happens when a log event is
// emitted without a level key, or if a squelched level is used. Check the
// Option functions for details.
package level
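
The fragments in this doc comment assemble into a complete program; a minimal runnable sketch under the same imports:

```go
package main

import (
    "os"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
)

func main() {
    var logger log.Logger
    logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
    logger = level.NewFilter(logger, level.AllowInfo()) // squelch debug events
    logger = log.With(logger, "ts", log.DefaultTimestampUTC)

    level.Debug(logger).Log("msg", "filtered out by AllowInfo")
    level.Info(logger).Log("msg", "passes the filter")
    logger.Log("msg", "no level key; passed through unmodified by default")
}
```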


@@ -1,205 +1,205 @@
package level package level
import "github.com/go-kit/kit/log" import "github.com/go-kit/kit/log"
// Error returns a logger that includes a Key/ErrorValue pair. // Error returns a logger that includes a Key/ErrorValue pair.
func Error(logger log.Logger) log.Logger { func Error(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), ErrorValue()) return log.WithPrefix(logger, Key(), ErrorValue())
} }
// Warn returns a logger that includes a Key/WarnValue pair. // Warn returns a logger that includes a Key/WarnValue pair.
func Warn(logger log.Logger) log.Logger {
    return log.WithPrefix(logger, Key(), WarnValue())
}

// Info returns a logger that includes a Key/InfoValue pair.
func Info(logger log.Logger) log.Logger {
    return log.WithPrefix(logger, Key(), InfoValue())
}

// Debug returns a logger that includes a Key/DebugValue pair.
func Debug(logger log.Logger) log.Logger {
    return log.WithPrefix(logger, Key(), DebugValue())
}

// NewFilter wraps next and implements level filtering. See the commentary on
// the Option functions for a detailed description of how to configure levels.
// If no options are provided, all leveled log events created with Debug,
// Info, Warn or Error helper methods are squelched and non-leveled log
// events are passed to next unmodified.
func NewFilter(next log.Logger, options ...Option) log.Logger {
    l := &logger{
        next: next,
    }
    for _, option := range options {
        option(l)
    }
    return l
}

type logger struct {
    next           log.Logger
    allowed        level
    squelchNoLevel bool
    errNotAllowed  error
    errNoLevel     error
}

func (l *logger) Log(keyvals ...interface{}) error {
    var hasLevel, levelAllowed bool
    for i := 1; i < len(keyvals); i += 2 {
        if v, ok := keyvals[i].(*levelValue); ok {
            hasLevel = true
            levelAllowed = l.allowed&v.level != 0
            break
        }
    }
    if !hasLevel && l.squelchNoLevel {
        return l.errNoLevel
    }
    if hasLevel && !levelAllowed {
        return l.errNotAllowed
    }
    return l.next.Log(keyvals...)
}

// Option sets a parameter for the leveled logger.
type Option func(*logger)

// AllowAll is an alias for AllowDebug.
func AllowAll() Option {
    return AllowDebug()
}

// AllowDebug allows error, warn, info and debug level log events to pass.
func AllowDebug() Option {
    return allowed(levelError | levelWarn | levelInfo | levelDebug)
}

// AllowInfo allows error, warn and info level log events to pass.
func AllowInfo() Option {
    return allowed(levelError | levelWarn | levelInfo)
}

// AllowWarn allows error and warn level log events to pass.
func AllowWarn() Option {
    return allowed(levelError | levelWarn)
}

// AllowError allows only error level log events to pass.
func AllowError() Option {
    return allowed(levelError)
}

// AllowNone allows no leveled log events to pass.
func AllowNone() Option {
    return allowed(0)
}

func allowed(allowed level) Option {
    return func(l *logger) { l.allowed = allowed }
}

// ErrNotAllowed sets the error to return from Log when it squelches a log
// event disallowed by the configured Allow[Level] option. By default,
// ErrNotAllowed is nil; in this case the log event is squelched with no
// error.
func ErrNotAllowed(err error) Option {
    return func(l *logger) { l.errNotAllowed = err }
}

// SquelchNoLevel instructs Log to squelch log events with no level, so that
// they don't proceed through to the wrapped logger. If SquelchNoLevel is set
// to true and a log event is squelched in this way, the error value
// configured with ErrNoLevel is returned to the caller.
func SquelchNoLevel(squelch bool) Option {
    return func(l *logger) { l.squelchNoLevel = squelch }
}

// ErrNoLevel sets the error to return from Log when it squelches a log event
// with no level. By default, ErrNoLevel is nil; in this case the log event is
// squelched with no error.
func ErrNoLevel(err error) Option {
    return func(l *logger) { l.errNoLevel = err }
}
// NewInjector wraps next and returns a logger that adds a Key/level pair to
// the beginning of log events that don't already contain a level. In effect,
// this gives a default level to logs without a level.
func NewInjector(next log.Logger, level Value) log.Logger {
    return &injector{
        next:  next,
        level: level,
    }
}

type injector struct {
    next  log.Logger
    level interface{}
}

func (l *injector) Log(keyvals ...interface{}) error {
    for i := 1; i < len(keyvals); i += 2 {
        if _, ok := keyvals[i].(*levelValue); ok {
            return l.next.Log(keyvals...)
        }
    }
    kvs := make([]interface{}, len(keyvals)+2)
    kvs[0], kvs[1] = key, l.level
    copy(kvs[2:], keyvals)
    return l.next.Log(kvs...)
}
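A matching sketch for the injector, under the same import-path assumption: it supplies a default level so that a downstream filter does not squelch unleveled events.

```go
package main

import (
    "os"

    "github.com/go-kit/kit/log"
    "github.com/go-kit/kit/log/level"
)

func main() {
    var logger log.Logger
    logger = log.NewLogfmtLogger(os.Stderr)
    // Events without an explicit level get level=info prepended.
    logger = level.NewInjector(logger, level.InfoValue())

    logger.Log("msg", "unleveled") // level=info msg=unleveled
}
```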
// Value is the interface that each of the canonical level values implement.
// It contains unexported methods that prevent types from other packages from
// implementing it, guaranteeing that NewFilter can distinguish the levels
// defined in this package from all other values.
type Value interface {
    String() string
    levelVal()
}

// Key returns the unique key added to log events by the loggers in this
// package.
func Key() interface{} { return key }

// ErrorValue returns the unique value added to log events by Error.
func ErrorValue() Value { return errorValue }

// WarnValue returns the unique value added to log events by Warn.
func WarnValue() Value { return warnValue }

// InfoValue returns the unique value added to log events by Info.
func InfoValue() Value { return infoValue }

// DebugValue returns the unique value added to log events by Debug.
func DebugValue() Value { return debugValue }

var (
    // key is of type interface{} so that it allocates once during package
    // initialization and avoids allocating every time the value is added to a
    // []interface{} later.
    key interface{} = "level"

    errorValue = &levelValue{level: levelError, name: "error"}
    warnValue  = &levelValue{level: levelWarn, name: "warn"}
    infoValue  = &levelValue{level: levelInfo, name: "info"}
    debugValue = &levelValue{level: levelDebug, name: "debug"}
)

type level byte

const (
    levelDebug level = 1 << iota
    levelInfo
    levelWarn
    levelError
)

type levelValue struct {
    name string
    level
}

func (v *levelValue) String() string { return v.name }
func (v *levelValue) levelVal()      {}


@ -1,135 +1,135 @@
package log

import "errors"

// Logger is the fundamental interface for all log operations. Log creates a
// log event from keyvals, a variadic sequence of alternating keys and values.
// Implementations must be safe for concurrent use by multiple goroutines. In
// particular, any implementation of Logger that appends to keyvals or
// modifies or retains any of its elements must make a copy first.
type Logger interface {
    Log(keyvals ...interface{}) error
}

// ErrMissingValue is appended to keyvals slices with odd length to substitute
// the missing value.
var ErrMissingValue = errors.New("(MISSING)")

// With returns a new contextual logger with keyvals prepended to those passed
// to calls to Log. If logger is also a contextual logger created by With or
// WithPrefix, keyvals is appended to the existing context.
//
// The returned Logger replaces all value elements (odd indexes) containing a
// Valuer with their generated value for each call to its Log method.
func With(logger Logger, keyvals ...interface{}) Logger {
    if len(keyvals) == 0 {
        return logger
    }
    l := newContext(logger)
    kvs := append(l.keyvals, keyvals...)
    if len(kvs)%2 != 0 {
        kvs = append(kvs, ErrMissingValue)
    }
    return &context{
        logger: l.logger,
        // Limiting the capacity of the stored keyvals ensures that a new
        // backing array is created if the slice must grow in Log or With.
        // Using the extra capacity without copying risks a data race that
        // would violate the Logger interface contract.
        keyvals:   kvs[:len(kvs):len(kvs)],
        hasValuer: l.hasValuer || containsValuer(keyvals),
    }
}

// WithPrefix returns a new contextual logger with keyvals prepended to those
// passed to calls to Log. If logger is also a contextual logger created by
// With or WithPrefix, keyvals is prepended to the existing context.
//
// The returned Logger replaces all value elements (odd indexes) containing a
// Valuer with their generated value for each call to its Log method.
func WithPrefix(logger Logger, keyvals ...interface{}) Logger {
    if len(keyvals) == 0 {
        return logger
    }
    l := newContext(logger)
    // Limiting the capacity of the stored keyvals ensures that a new
    // backing array is created if the slice must grow in Log or With.
    // Using the extra capacity without copying risks a data race that
    // would violate the Logger interface contract.
    n := len(l.keyvals) + len(keyvals)
    if len(keyvals)%2 != 0 {
        n++
    }
    kvs := make([]interface{}, 0, n)
    kvs = append(kvs, keyvals...)
    if len(kvs)%2 != 0 {
        kvs = append(kvs, ErrMissingValue)
    }
    kvs = append(kvs, l.keyvals...)
    return &context{
        logger:    l.logger,
        keyvals:   kvs,
        hasValuer: l.hasValuer || containsValuer(keyvals),
    }
}

// context is the Logger implementation returned by With and WithPrefix. It
// wraps a Logger and holds keyvals that it includes in all log events. Its
// Log method calls bindValues to generate values for each Valuer in the
// context keyvals.
//
// A context must always have the same number of stack frames between calls to
// its Log method and the eventual binding of Valuers to their value. This
// requirement comes from the functional requirement to allow a context to
// resolve application call site information for a Caller stored in the
// context. To do this we must be able to predict the number of logging
// functions on the stack when bindValues is called.
//
// Two implementation details provide the needed stack depth consistency.
//
//  1. newContext avoids introducing an additional layer when asked to
//     wrap another context.
//  2. With and WithPrefix avoid introducing an additional layer by
//     returning a newly constructed context with a merged keyvals rather
//     than simply wrapping the existing context.
type context struct {
    logger    Logger
    keyvals   []interface{}
    hasValuer bool
}

func newContext(logger Logger) *context {
    if c, ok := logger.(*context); ok {
        return c
    }
    return &context{logger: logger}
}

// Log replaces all value elements (odd indexes) containing a Valuer in the
// stored context with their generated value, appends keyvals, and passes the
// result to the wrapped Logger.
func (l *context) Log(keyvals ...interface{}) error {
    kvs := append(l.keyvals, keyvals...)
    if len(kvs)%2 != 0 {
        kvs = append(kvs, ErrMissingValue)
    }
    if l.hasValuer {
        // If no keyvals were appended above then we must copy l.keyvals so
        // that future log events will reevaluate the stored Valuers.
        if len(keyvals) == 0 {
            kvs = append([]interface{}{}, l.keyvals...)
        }
        bindValues(kvs[:len(l.keyvals)])
    }
    return l.logger.Log(kvs...)
}

// LoggerFunc is an adapter to allow use of ordinary functions as Loggers. If
// f is a function with the appropriate signature, LoggerFunc(f) is a Logger
// object that calls f.
type LoggerFunc func(...interface{}) error

// Log implements Logger by calling f(keyvals...).
func (f LoggerFunc) Log(keyvals ...interface{}) error {
    return f(keyvals...)
}
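A hedged sketch of this API, assuming the same `go-kit/kit/log` import path: `With` builds a contextual logger, odd-length keyvals are padded with `ErrMissingValue`, and `LoggerFunc` adapts a plain function to the `Logger` interface.

```go
package main

import (
    "fmt"
    "os"

    "github.com/go-kit/kit/log"
)

func main() {
    logger := log.NewLogfmtLogger(os.Stderr)

    // "component" is prepended to every subsequent event.
    ctx := log.With(logger, "component", "listener")
    ctx.Log("msg", "started") // component=listener msg=started
    ctx.Log("orphan-key")     // the missing value becomes ErrMissingValue

    // An ordinary function as a Logger.
    counter := log.LoggerFunc(func(keyvals ...interface{}) error {
        fmt.Println(len(keyvals)/2, "pairs logged")
        return nil
    })
    counter.Log("a", 1, "b", 2) // 2 pairs logged
}
```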


@ -1,62 +1,62 @@
package log

import (
    "bytes"
    "io"
    "sync"

    "github.com/go-logfmt/logfmt"
)

type logfmtEncoder struct {
    *logfmt.Encoder
    buf bytes.Buffer
}

func (l *logfmtEncoder) Reset() {
    l.Encoder.Reset()
    l.buf.Reset()
}

var logfmtEncoderPool = sync.Pool{
    New: func() interface{} {
        var enc logfmtEncoder
        enc.Encoder = logfmt.NewEncoder(&enc.buf)
        return &enc
    },
}

type logfmtLogger struct {
    w io.Writer
}

// NewLogfmtLogger returns a logger that encodes keyvals to the Writer in
// logfmt format. Each log event produces no more than one call to w.Write.
// The passed Writer must be safe for concurrent use by multiple goroutines if
// the returned Logger will be used concurrently.
func NewLogfmtLogger(w io.Writer) Logger {
    return &logfmtLogger{w}
}

func (l logfmtLogger) Log(keyvals ...interface{}) error {
    enc := logfmtEncoderPool.Get().(*logfmtEncoder)
    enc.Reset()
    defer logfmtEncoderPool.Put(enc)

    if err := enc.EncodeKeyvals(keyvals...); err != nil {
        return err
    }

    // Add newline to the end of the buffer
    if err := enc.EndRecord(); err != nil {
        return err
    }

    // The Logger interface requires implementations to be safe for concurrent
    // use by multiple goroutines. For this implementation that means making
    // only one call to l.w.Write() for each call to Log.
    if _, err := l.w.Write(enc.buf.Bytes()); err != nil {
        return err
    }
    return nil
}
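For illustration only (not vendored source), a sketch of the logfmt logger in use; the comment shows the approximate output line:

```go
package main

import (
    "os"

    "github.com/go-kit/kit/log"
)

func main() {
    // NewSyncWriter makes os.Stdout safe for concurrent use, as the
    // NewLogfmtLogger contract asks of its Writer.
    logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
    logger.Log("level", "info", "msg", "listening", "addr", ":9102")
    // Output: level=info msg=listening addr=:9102
}
```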


@ -1,8 +1,8 @@
package log

type nopLogger struct{}

// NewNopLogger returns a logger that doesn't do anything.
func NewNopLogger() Logger { return nopLogger{} }

func (nopLogger) Log(...interface{}) error { return nil }


@ -1,116 +1,116 @@
package log

import (
    "io"
    "log"
    "regexp"
    "strings"
)

// StdlibWriter implements io.Writer by invoking the stdlib log.Print. It's
// designed to be passed to a Go kit logger as the writer, for cases where
// it's necessary to redirect all Go kit log output to the stdlib logger.
//
// If you have any choice in the matter, you shouldn't use this. Prefer to
// redirect the stdlib log to the Go kit logger via NewStdlibAdapter.
type StdlibWriter struct{}

// Write implements io.Writer.
func (w StdlibWriter) Write(p []byte) (int, error) {
    log.Print(strings.TrimSpace(string(p)))
    return len(p), nil
}

// StdlibAdapter wraps a Logger and allows it to be passed to the stdlib
// logger's SetOutput. It will extract date/timestamps, filenames, and
// messages, and place them under relevant keys.
type StdlibAdapter struct {
    Logger
    timestampKey string
    fileKey      string
    messageKey   string
}

// StdlibAdapterOption sets a parameter for the StdlibAdapter.
type StdlibAdapterOption func(*StdlibAdapter)

// TimestampKey sets the key for the timestamp field. By default, it's "ts".
func TimestampKey(key string) StdlibAdapterOption {
    return func(a *StdlibAdapter) { a.timestampKey = key }
}

// FileKey sets the key for the file and line field. By default, it's "caller".
func FileKey(key string) StdlibAdapterOption {
    return func(a *StdlibAdapter) { a.fileKey = key }
}

// MessageKey sets the key for the actual log message. By default, it's "msg".
func MessageKey(key string) StdlibAdapterOption {
    return func(a *StdlibAdapter) { a.messageKey = key }
}

// NewStdlibAdapter returns a new StdlibAdapter wrapper around the passed
// logger. It's designed to be passed to log.SetOutput.
func NewStdlibAdapter(logger Logger, options ...StdlibAdapterOption) io.Writer {
    a := StdlibAdapter{
        Logger:       logger,
        timestampKey: "ts",
        fileKey:      "caller",
        messageKey:   "msg",
    }
    for _, option := range options {
        option(&a)
    }
    return a
}

func (a StdlibAdapter) Write(p []byte) (int, error) {
    result := subexps(p)
    keyvals := []interface{}{}
    var timestamp string
    if date, ok := result["date"]; ok && date != "" {
        timestamp = date
    }
    if time, ok := result["time"]; ok && time != "" {
        if timestamp != "" {
            timestamp += " "
        }
        timestamp += time
    }
    if timestamp != "" {
        keyvals = append(keyvals, a.timestampKey, timestamp)
    }
    if file, ok := result["file"]; ok && file != "" {
        keyvals = append(keyvals, a.fileKey, file)
    }
    if msg, ok := result["msg"]; ok {
        keyvals = append(keyvals, a.messageKey, msg)
    }
    if err := a.Logger.Log(keyvals...); err != nil {
        return 0, err
    }
    return len(p), nil
}

const (
    logRegexpDate = `(?P<date>[0-9]{4}/[0-9]{2}/[0-9]{2})?[ ]?`
    logRegexpTime = `(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?)?[ ]?`
    logRegexpFile = `(?P<file>.+?:[0-9]+)?`
    logRegexpMsg  = `(: )?(?P<msg>.*)`
)

var (
    logRegexp = regexp.MustCompile(logRegexpDate + logRegexpTime + logRegexpFile + logRegexpMsg)
)

func subexps(line []byte) map[string]string {
    m := logRegexp.FindSubmatch(line)
    if len(m) < len(logRegexp.SubexpNames()) {
        return map[string]string{}
    }
    result := map[string]string{}
    for i, name := range logRegexp.SubexpNames() {
        result[name] = string(m[i])
    }
    return result
}
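A sketch of wiring the adapter into the stdlib logger, same import-path assumption as above; the keys and exact output depend on the stdlib flags in effect.

```go
package main

import (
    stdlog "log"
    "os"

    kitlog "github.com/go-kit/kit/log"
)

func main() {
    logger := kitlog.NewLogfmtLogger(os.Stderr)

    // Route all stdlib log output through the Go kit logger. The adapter
    // parses the date/time prefix and file:line into the ts/caller/msg keys.
    stdlog.SetOutput(kitlog.NewStdlibAdapter(logger))
    stdlog.SetFlags(stdlog.LstdFlags | stdlog.Lshortfile)

    stdlog.Println("hello from the stdlib")
    // e.g. ts="2020/03/16 13:03:05" caller=main.go:19 msg="hello from the stdlib"
}
```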


@ -1,116 +1,116 @@
package log

import (
    "io"
    "sync"
    "sync/atomic"
)

// SwapLogger wraps another logger that may be safely replaced while other
// goroutines use the SwapLogger concurrently. The zero value for a SwapLogger
// will discard all log events without error.
//
// SwapLogger serves well as a package global logger that can be changed by
// importers.
type SwapLogger struct {
    logger atomic.Value
}

type loggerStruct struct {
    Logger
}

// Log implements the Logger interface by forwarding keyvals to the currently
// wrapped logger. It does not log anything if the wrapped logger is nil.
func (l *SwapLogger) Log(keyvals ...interface{}) error {
    s, ok := l.logger.Load().(loggerStruct)
    if !ok || s.Logger == nil {
        return nil
    }
    return s.Log(keyvals...)
}

// Swap replaces the currently wrapped logger with logger. Swap may be called
// concurrently with calls to Log from other goroutines.
func (l *SwapLogger) Swap(logger Logger) {
    l.logger.Store(loggerStruct{logger})
}

// NewSyncWriter returns a new writer that is safe for concurrent use by
// multiple goroutines. Writes to the returned writer are passed on to w. If
// another write is already in progress, the calling goroutine blocks until
// the writer is available.
//
// If w implements the following interface, so does the returned writer.
//
//	interface {
//		Fd() uintptr
//	}
func NewSyncWriter(w io.Writer) io.Writer {
    switch w := w.(type) {
    case fdWriter:
        return &fdSyncWriter{fdWriter: w}
    default:
        return &syncWriter{Writer: w}
    }
}

// syncWriter synchronizes concurrent writes to an io.Writer.
type syncWriter struct {
    sync.Mutex
    io.Writer
}

// Write writes p to the underlying io.Writer. If another write is already in
// progress, the calling goroutine blocks until the syncWriter is available.
func (w *syncWriter) Write(p []byte) (n int, err error) {
    w.Lock()
    n, err = w.Writer.Write(p)
    w.Unlock()
    return n, err
}

// fdWriter is an io.Writer that also has an Fd method. The most common
// example of an fdWriter is an *os.File.
type fdWriter interface {
    io.Writer
    Fd() uintptr
}

// fdSyncWriter synchronizes concurrent writes to an fdWriter.
type fdSyncWriter struct {
    sync.Mutex
    fdWriter
}

// Write writes p to the underlying io.Writer. If another write is already in
// progress, the calling goroutine blocks until the fdSyncWriter is available.
func (w *fdSyncWriter) Write(p []byte) (n int, err error) {
    w.Lock()
    n, err = w.fdWriter.Write(p)
    w.Unlock()
    return n, err
}

// syncLogger provides concurrent safe logging for another Logger.
type syncLogger struct {
    mu     sync.Mutex
    logger Logger
}

// NewSyncLogger returns a logger that synchronizes concurrent use of the
// wrapped logger. When multiple goroutines use the SyncLogger concurrently
// only one goroutine will be allowed to log to the wrapped logger at a time.
// The other goroutines will block until the logger is available.
func NewSyncLogger(logger Logger) Logger {
    return &syncLogger{logger: logger}
}

// Log logs keyvals to the underlying Logger. If another log is already in
// progress, the calling goroutine blocks until the syncLogger is available.
func (l *syncLogger) Log(keyvals ...interface{}) error {
    l.mu.Lock()
    err := l.logger.Log(keyvals...)
    l.mu.Unlock()
    return err
}
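A sketch of `SwapLogger` as a replaceable package-global logger, again under the `go-kit/kit/log` import-path assumption:

```go
package main

import (
    "os"

    "github.com/go-kit/kit/log"
)

// pkgLogger may be used and swapped from any goroutine; its zero value
// silently discards events.
var pkgLogger log.SwapLogger

func main() {
    pkgLogger.Log("msg", "discarded by the zero value")

    pkgLogger.Swap(log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr)))
    pkgLogger.Log("msg", "now visible") // msg="now visible"
}
```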


@ -1,110 +1,110 @@
package log

import (
    "runtime"
    "strconv"
    "strings"
    "time"
)

// A Valuer generates a log value. When passed to With or WithPrefix in a
// value element (odd indexes), it represents a dynamic value which is
// re-evaluated with each log event.
type Valuer func() interface{}

// bindValues replaces all value elements (odd indexes) containing a Valuer
// with their generated value.
func bindValues(keyvals []interface{}) {
    for i := 1; i < len(keyvals); i += 2 {
        if v, ok := keyvals[i].(Valuer); ok {
            keyvals[i] = v()
        }
    }
}

// containsValuer returns true if any of the value elements (odd indexes)
// contain a Valuer.
func containsValuer(keyvals []interface{}) bool {
    for i := 1; i < len(keyvals); i += 2 {
        if _, ok := keyvals[i].(Valuer); ok {
            return true
        }
    }
    return false
}

// Timestamp returns a timestamp Valuer. It invokes the t function to get the
// time; unless you are doing something tricky, pass time.Now.
//
// Most users will want to use DefaultTimestamp or DefaultTimestampUTC, which
// are TimestampFormats that use the RFC3339Nano format.
func Timestamp(t func() time.Time) Valuer {
    return func() interface{} { return t() }
}

// TimestampFormat returns a timestamp Valuer with a custom time format. It
// invokes the t function to get the time to format; unless you are doing
// something tricky, pass time.Now. The layout string is passed to
// Time.Format.
//
// Most users will want to use DefaultTimestamp or DefaultTimestampUTC, which
// are TimestampFormats that use the RFC3339Nano format.
func TimestampFormat(t func() time.Time, layout string) Valuer {
    return func() interface{} {
        return timeFormat{
            time:   t(),
            layout: layout,
        }
    }
}

// A timeFormat represents an instant in time and a layout used when
// marshaling to a text format.
type timeFormat struct {
    time   time.Time
    layout string
}

func (tf timeFormat) String() string {
    return tf.time.Format(tf.layout)
}

// MarshalText implements encoding.TextMarshaler.
func (tf timeFormat) MarshalText() (text []byte, err error) {
    // The following code adapted from the standard library time.Time.Format
    // method. Using the same undocumented magic constant to extend the size
    // of the buffer as seen there.
    b := make([]byte, 0, len(tf.layout)+10)
    b = tf.time.AppendFormat(b, tf.layout)
    return b, nil
}

// Caller returns a Valuer that returns a file and line from a specified depth
// in the callstack. Users will probably want to use DefaultCaller.
func Caller(depth int) Valuer {
    return func() interface{} {
        _, file, line, _ := runtime.Caller(depth)
        idx := strings.LastIndexByte(file, '/')
        // using idx+1 below handles both of following cases:
        // idx == -1 because no "/" was found, or
        // idx >= 0 and we want to start at the character after the found "/".
        return file[idx+1:] + ":" + strconv.Itoa(line)
    }
}

var (
    // DefaultTimestamp is a Valuer that returns the current wallclock time,
    // respecting time zones, when bound.
    DefaultTimestamp = TimestampFormat(time.Now, time.RFC3339Nano)

    // DefaultTimestampUTC is a Valuer that returns the current time in UTC
    // when bound.
    DefaultTimestampUTC = TimestampFormat(
        func() time.Time { return time.Now().UTC() },
        time.RFC3339Nano,
    )

    // DefaultCaller is a Valuer that returns the file and line where the Log
    // method was invoked. It can only be used with log.With.
    DefaultCaller = Caller(3)
)
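A sketch combining the default Valuers with `log.With`; the timestamp and line number in the comments are illustrative, not exact output.

```go
package main

import (
    "os"

    "github.com/go-kit/kit/log"
)

func main() {
    logger := log.NewLogfmtLogger(os.Stderr)

    // Valuers are re-bound on every Log call, so each event carries a
    // fresh timestamp and its own call site.
    logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)

    logger.Log("msg", "first")  // e.g. ts=2020-03-16T17:03:05Z caller=main.go:17 msg=first
    logger.Log("msg", "second") // a later ts and a different line number
}
```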


@ -1,4 +1,4 @@
_testdata/
_testdata2/
logfmt-fuzz.zip
logfmt.test.exe


@ -1,16 +1,16 @@
language: go
sudo: false
go:
  - "1.7.x"
  - "1.8.x"
  - "1.9.x"
  - "1.10.x"
  - "1.11.x"
  - "tip"
before_install:
  - go get github.com/mattn/goveralls
  - go get golang.org/x/tools/cmd/cover
script:
  - goveralls -service=travis-ci


@ -1,41 +1,41 @@
# Changelog
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.4.0] - 2018-11-21
### Added
- Go module support by [@ChrisHines]
- CHANGELOG by [@ChrisHines]
### Changed
- Drop invalid runes from keys instead of returning ErrInvalidKey by [@ChrisHines]
- On panic while printing, attempt to print panic value by [@bboreham]

## [0.3.0] - 2016-11-15
### Added
- Pool buffers for quoted strings and byte slices by [@nussjustin]
### Fixed
- Fuzz fix, quote invalid UTF-8 values by [@judwhite]

## [0.2.0] - 2016-05-08
### Added
- Encoder.EncodeKeyvals by [@ChrisHines]

## [0.1.0] - 2016-03-28
### Added
- Encoder by [@ChrisHines]
- Decoder by [@ChrisHines]
- MarshalKeyvals by [@ChrisHines]

[0.4.0]: https://github.com/go-logfmt/logfmt/compare/v0.3.0...v0.4.0
[0.3.0]: https://github.com/go-logfmt/logfmt/compare/v0.2.0...v0.3.0
[0.2.0]: https://github.com/go-logfmt/logfmt/compare/v0.1.0...v0.2.0
[0.1.0]: https://github.com/go-logfmt/logfmt/commits/v0.1.0

[@ChrisHines]: https://github.com/ChrisHines
[@bboreham]: https://github.com/bboreham
[@judwhite]: https://github.com/judwhite
[@nussjustin]: https://github.com/nussjustin


@ -1,22 +1,22 @@
The MIT License (MIT)

Copyright (c) 2015 go-logfmt

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@ -1,33 +1,33 @@
[![GoDoc](https://godoc.org/github.com/go-logfmt/logfmt?status.svg)](https://godoc.org/github.com/go-logfmt/logfmt)
[![Go Report Card](https://goreportcard.com/badge/go-logfmt/logfmt)](https://goreportcard.com/report/go-logfmt/logfmt)
[![TravisCI](https://travis-ci.org/go-logfmt/logfmt.svg?branch=master)](https://travis-ci.org/go-logfmt/logfmt)
[![Coverage Status](https://coveralls.io/repos/github/go-logfmt/logfmt/badge.svg?branch=master)](https://coveralls.io/github/go-logfmt/logfmt?branch=master)

# logfmt

Package logfmt implements utilities to marshal and unmarshal data in the [logfmt
format](https://brandur.org/logfmt). It provides an API similar to
[encoding/json](http://golang.org/pkg/encoding/json/) and
[encoding/xml](http://golang.org/pkg/encoding/xml/).

The logfmt format was first documented by Brandur Leach in [this
article](https://brandur.org/logfmt). The format has not been formally
standardized. The most authoritative public specification to date has been the
documentation of a Go Language [package](http://godoc.org/github.com/kr/logfmt)
written by Blake Mizerany and Keith Rarick.

## Goals

This project attempts to conform as closely as possible to the prior art, while
also removing ambiguity where necessary to provide well behaved encoder and
decoder implementations.

## Non-goals

This project does not attempt to formally standardize the logfmt format. In the
event that logfmt is standardized this project would take conforming to the
standard as a goal.

## Versioning

Package logfmt publishes releases via [semver](http://semver.org/) compatible Git tags prefixed with a single 'v'.
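For orientation (this note and sketch are editorial, not part of the upstream README), the marshaling side of that API looks like the following, using the `MarshalKeyvals` helper defined later in this vendor tree:

```go
package main

import (
    "fmt"

    "github.com/go-logfmt/logfmt"
)

func main() {
    // Keys and values alternate, as in the go-kit Logger interface.
    b, err := logfmt.MarshalKeyvals("method", "GET", "path", "/metrics", "status", 200)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s\n", b) // method=GET path=/metrics status=200
}
```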


@ -1,237 +1,237 @@
package logfmt

import (
    "bufio"
    "bytes"
    "fmt"
    "io"
    "unicode/utf8"
)

// A Decoder reads and decodes logfmt records from an input stream.
type Decoder struct {
    pos     int
    key     []byte
    value   []byte
    lineNum int
    s       *bufio.Scanner
    err     error
}

// NewDecoder returns a new decoder that reads from r.
//
// The decoder introduces its own buffering and may read data from r beyond
// the logfmt records requested.
func NewDecoder(r io.Reader) *Decoder {
    dec := &Decoder{
        s: bufio.NewScanner(r),
    }
    return dec
}

// ScanRecord advances the Decoder to the next record, which can then be
// parsed with the ScanKeyval method. It returns false when decoding stops,
// either by reaching the end of the input or an error. After ScanRecord
// returns false, the Err method will return any error that occurred during
// decoding, except that if it was io.EOF, Err will return nil.
func (dec *Decoder) ScanRecord() bool {
    if dec.err != nil {
        return false
    }
    if !dec.s.Scan() {
        dec.err = dec.s.Err()
        return false
    }
    dec.lineNum++
    dec.pos = 0
    return true
}

// ScanKeyval advances the Decoder to the next key/value pair of the current
// record, which can then be retrieved with the Key and Value methods. It
// returns false when decoding stops, either by reaching the end of the
// current record or an error.
func (dec *Decoder) ScanKeyval() bool {
    dec.key, dec.value = nil, nil
    if dec.err != nil {
        return false
    }

    line := dec.s.Bytes()

    // garbage
    for p, c := range line[dec.pos:] {
        if c > ' ' {
            dec.pos += p
            goto key
        }
    }
    dec.pos = len(line)
    return false

key:
    const invalidKeyError = "invalid key"

    start, multibyte := dec.pos, false
    for p, c := range line[dec.pos:] {
        switch {
        case c == '=':
            dec.pos += p
            if dec.pos > start {
                dec.key = line[start:dec.pos]
                if multibyte && bytes.IndexRune(dec.key, utf8.RuneError) != -1 {
                    dec.syntaxError(invalidKeyError)
                    return false
                }
            }
            if dec.key == nil {
                dec.unexpectedByte(c)
                return false
            }
            goto equal
        case c == '"':
            dec.pos += p
            dec.unexpectedByte(c)
            return false
        case c <= ' ':
            dec.pos += p
            if dec.pos > start {
                dec.key = line[start:dec.pos]
                if multibyte && bytes.IndexRune(dec.key, utf8.RuneError) != -1 {
                    dec.syntaxError(invalidKeyError)
                    return false
                }
            }
            return true
        case c >= utf8.RuneSelf:
            multibyte = true
        }
    }
    dec.pos = len(line)
    if dec.pos > start {
        dec.key = line[start:dec.pos]
        if multibyte && bytes.IndexRune(dec.key, utf8.RuneError) != -1 {
            dec.syntaxError(invalidKeyError)
            return false
        }
    }
    return true

equal:
    dec.pos++
    if dec.pos >= len(line) {
        return true
    }
    switch c := line[dec.pos]; {
    case c <= ' ':
        return true
    case c == '"':
        goto qvalue
    }

    // value
    start = dec.pos
    for p, c := range line[dec.pos:] {
        switch {
        case c == '=' || c == '"':
            dec.pos += p
            dec.unexpectedByte(c)
            return false
        case c <= ' ':
            dec.pos += p
            if dec.pos > start {
                dec.value = line[start:dec.pos]
            }
            return true
        }
    }
    dec.pos = len(line)
    if dec.pos > start {
        dec.value = line[start:dec.pos]
    }
    return true

qvalue:
    const (
        untermQuote  = "unterminated quoted value"
        invalidQuote = "invalid quoted value"
    )

    hasEsc, esc := false, false
    start = dec.pos
    for p, c := range line[dec.pos+1:] {
        switch {
        case esc:
            esc = false
        case c == '\\':
            hasEsc, esc = true, true
        case c == '"':
            dec.pos += p + 2
            if hasEsc {
                v, ok := unquoteBytes(line[start:dec.pos])
                if !ok {
                    dec.syntaxError(invalidQuote)
                    return false
                }
                dec.value = v
            } else {
                start++
                end := dec.pos - 1
                if end > start {
                    dec.value = line[start:end]
                }
            }
            return true
        }
    }
    dec.pos = len(line)
    dec.syntaxError(untermQuote)
    return false
}

// Key returns the most recent key found by a call to ScanKeyval. The returned
// slice may point to internal buffers and is only valid until the next call
// to ScanRecord. It does no allocation.
func (dec *Decoder) Key() []byte {
    return dec.key
}

// Value returns the most recent value found by a call to ScanKeyval. The
// returned slice may point to internal buffers and is only valid until the
// next call to ScanRecord. It does no allocation when the value has no
// escape sequences.
func (dec *Decoder) Value() []byte {
    return dec.value
}

// Err returns the first non-EOF error that was encountered by the Scanner.
func (dec *Decoder) Err() error {
    return dec.err
}

func (dec *Decoder) syntaxError(msg string) {
    dec.err = &SyntaxError{
        Msg:  msg,
        Line: dec.lineNum,
        Pos:  dec.pos + 1,
    }
}

func (dec *Decoder) unexpectedByte(c byte) {
    dec.err = &SyntaxError{
        Msg:  fmt.Sprintf("unexpected %q", c),
        Line: dec.lineNum,
        Pos:  dec.pos + 1,
    }
}

// A SyntaxError represents a syntax error in the logfmt input stream.
type SyntaxError struct {
    Msg  string
    Line int
    Pos  int
}

func (e *SyntaxError) Error() string {
    return fmt.Sprintf("logfmt syntax error at pos %d on line %d: %s", e.Pos, e.Line, e.Msg)
}


@ -1,6 +1,6 @@
// Package logfmt implements utilities to marshal and unmarshal data in the
// logfmt format. The logfmt format records key/value pairs in a way that
// balances readability for humans and simplicity of computer parsing. It is
// most commonly used as a more human friendly alternative to JSON for
// structured logging.
package logfmt


@ -1,322 +1,322 @@
package logfmt

import (
    "bytes"
    "encoding"
    "errors"
    "fmt"
    "io"
    "reflect"
    "strings"
    "unicode/utf8"
)

// MarshalKeyvals returns the logfmt encoding of keyvals, a variadic sequence
// of alternating keys and values.
func MarshalKeyvals(keyvals ...interface{}) ([]byte, error) {
    buf := &bytes.Buffer{}
    if err := NewEncoder(buf).EncodeKeyvals(keyvals...); err != nil {
        return nil, err
    }
    return buf.Bytes(), nil
}
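A quick sketch of this one-shot helper: output order follows the argument order, and values containing spaces are quoted by the machinery further down in this file.

```go
package main

import (
	"fmt"

	"github.com/go-logfmt/logfmt"
)

func main() {
	b, err := logfmt.MarshalKeyvals("level", "info", "dur", 1.23, "msg", "hello world")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // level=info dur=1.23 msg="hello world"
}
```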
// An Encoder writes logfmt data to an output stream.
type Encoder struct {
    w       io.Writer
    scratch bytes.Buffer
    needSep bool
}

// NewEncoder returns a new encoder that writes to w.
func NewEncoder(w io.Writer) *Encoder {
    return &Encoder{
        w: w,
    }
}

var (
    space   = []byte(" ")
    equals  = []byte("=")
    newline = []byte("\n")
    null    = []byte("null")
)

// EncodeKeyval writes the logfmt encoding of key and value to the stream. A
// single space is written before the second and subsequent keys in a record.
// Nothing is written if a non-nil error is returned.
func (enc *Encoder) EncodeKeyval(key, value interface{}) error {
    enc.scratch.Reset()
    if enc.needSep {
        if _, err := enc.scratch.Write(space); err != nil {
            return err
        }
    }
    if err := writeKey(&enc.scratch, key); err != nil {
        return err
    }
    if _, err := enc.scratch.Write(equals); err != nil {
        return err
    }
    if err := writeValue(&enc.scratch, value); err != nil {
        return err
    }
    _, err := enc.w.Write(enc.scratch.Bytes())
    enc.needSep = true
    return err
}
// EncodeKeyvals writes the logfmt encoding of keyvals to the stream. Keyvals
// is a variadic sequence of alternating keys and values. Keys of unsupported
// type are skipped along with their corresponding value. Values of
// unsupported type or that cause a MarshalerError are replaced by their error
// but do not cause EncodeKeyvals to return an error. If a non-nil error is
// returned some key/value pairs may not have been written.
func (enc *Encoder) EncodeKeyvals(keyvals ...interface{}) error {
    if len(keyvals) == 0 {
        return nil
    }
    if len(keyvals)%2 == 1 {
        keyvals = append(keyvals, nil)
    }
    for i := 0; i < len(keyvals); i += 2 {
        k, v := keyvals[i], keyvals[i+1]
        err := enc.EncodeKeyval(k, v)
        if err == ErrUnsupportedKeyType {
            continue
        }
        if _, ok := err.(*MarshalerError); ok || err == ErrUnsupportedValueType {
            v = err
            err = enc.EncodeKeyval(k, v)
        }
        if err != nil {
            return err
        }
    }
    return nil
}
// MarshalerError represents an error encountered while marshaling a value.
type MarshalerError struct {
    Type reflect.Type
    Err  error
}

func (e *MarshalerError) Error() string {
    return "error marshaling value of type " + e.Type.String() + ": " + e.Err.Error()
}

// ErrNilKey is returned by Marshal functions and Encoder methods if a key is
// a nil interface or pointer value.
var ErrNilKey = errors.New("nil key")

// ErrInvalidKey is returned by Marshal functions and Encoder methods if, after
// dropping invalid runes, a key is empty.
var ErrInvalidKey = errors.New("invalid key")

// ErrUnsupportedKeyType is returned by Encoder methods if a key has an
// unsupported type.
var ErrUnsupportedKeyType = errors.New("unsupported key type")

// ErrUnsupportedValueType is returned by Encoder methods if a value has an
// unsupported type.
var ErrUnsupportedValueType = errors.New("unsupported value type")

func writeKey(w io.Writer, key interface{}) error {
    if key == nil {
        return ErrNilKey
    }
    switch k := key.(type) {
    case string:
        return writeStringKey(w, k)
    case []byte:
        if k == nil {
            return ErrNilKey
        }
        return writeBytesKey(w, k)
    case encoding.TextMarshaler:
        kb, err := safeMarshal(k)
        if err != nil {
            return err
        }
        if kb == nil {
            return ErrNilKey
        }
        return writeBytesKey(w, kb)
    case fmt.Stringer:
        ks, ok := safeString(k)
        if !ok {
            return ErrNilKey
        }
        return writeStringKey(w, ks)
    default:
        rkey := reflect.ValueOf(key)
        switch rkey.Kind() {
        case reflect.Array, reflect.Chan, reflect.Func, reflect.Map, reflect.Slice, reflect.Struct:
            return ErrUnsupportedKeyType
        case reflect.Ptr:
            if rkey.IsNil() {
                return ErrNilKey
            }
            return writeKey(w, rkey.Elem().Interface())
        }
        return writeStringKey(w, fmt.Sprint(k))
    }
}

// keyRuneFilter returns r for all valid key runes, and -1 for all invalid key
// runes. When used as the mapping function for strings.Map and bytes.Map
// functions it causes them to remove invalid key runes from strings or byte
// slices respectively.
func keyRuneFilter(r rune) rune {
    if r <= ' ' || r == '=' || r == '"' || r == utf8.RuneError {
        return -1
    }
    return r
}

func writeStringKey(w io.Writer, key string) error {
    k := strings.Map(keyRuneFilter, key)
    if k == "" {
        return ErrInvalidKey
    }
    _, err := io.WriteString(w, k)
    return err
}

func writeBytesKey(w io.Writer, key []byte) error {
    k := bytes.Map(keyRuneFilter, key)
    if len(k) == 0 {
        return ErrInvalidKey
    }
    _, err := w.Write(k)
    return err
}

func writeValue(w io.Writer, value interface{}) error {
    switch v := value.(type) {
    case nil:
        return writeBytesValue(w, null)
    case string:
        return writeStringValue(w, v, true)
    case []byte:
        return writeBytesValue(w, v)
    case encoding.TextMarshaler:
        vb, err := safeMarshal(v)
        if err != nil {
            return err
        }
        if vb == nil {
            vb = null
        }
        return writeBytesValue(w, vb)
    case error:
        se, ok := safeError(v)
        return writeStringValue(w, se, ok)
    case fmt.Stringer:
        ss, ok := safeString(v)
        return writeStringValue(w, ss, ok)
    default:
        rvalue := reflect.ValueOf(value)
        switch rvalue.Kind() {
        case reflect.Array, reflect.Chan, reflect.Func, reflect.Map, reflect.Slice, reflect.Struct:
            return ErrUnsupportedValueType
        case reflect.Ptr:
            if rvalue.IsNil() {
                return writeBytesValue(w, null)
            }
            return writeValue(w, rvalue.Elem().Interface())
        }
        return writeStringValue(w, fmt.Sprint(v), true)
    }
}

func needsQuotedValueRune(r rune) bool {
    return r <= ' ' || r == '=' || r == '"' || r == utf8.RuneError
}

func writeStringValue(w io.Writer, value string, ok bool) error {
    var err error
    if ok && value == "null" {
        _, err = io.WriteString(w, `"null"`)
    } else if strings.IndexFunc(value, needsQuotedValueRune) != -1 {
        _, err = writeQuotedString(w, value)
    } else {
        _, err = io.WriteString(w, value)
    }
    return err
}

func writeBytesValue(w io.Writer, value []byte) error {
    var err error
    if bytes.IndexFunc(value, needsQuotedValueRune) != -1 {
        _, err = writeQuotedBytes(w, value)
    } else {
        _, err = w.Write(value)
    }
    return err
}

// EndRecord writes a newline character to the stream and resets the encoder
// to the beginning of a new record.
func (enc *Encoder) EndRecord() error {
    _, err := enc.w.Write(newline)
    if err == nil {
        enc.needSep = false
    }
    return err
}
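EncodeKeyval and EndRecord together support streaming one record per line without buffering whole records. A hedged sketch, with error handling elided for brevity:

```go
package main

import (
	"os"

	"github.com/go-logfmt/logfmt"
)

func main() {
	enc := logfmt.NewEncoder(os.Stdout)
	for i := 0; i < 3; i++ {
		// Errors ignored for brevity; each call reports write failures.
		enc.EncodeKeyval("iter", i)
		enc.EncodeKeyval("status", "ok")
		enc.EndRecord() // writes "\n" and clears the separator state
	}
}
```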
// Reset resets the encoder to the beginning of a new record.
func (enc *Encoder) Reset() {
    enc.needSep = false
}

func safeError(err error) (s string, ok bool) {
    defer func() {
        if panicVal := recover(); panicVal != nil {
            if v := reflect.ValueOf(err); v.Kind() == reflect.Ptr && v.IsNil() {
                s, ok = "null", false
            } else {
                s, ok = fmt.Sprintf("PANIC:%v", panicVal), false
            }
        }
    }()
    s, ok = err.Error(), true
    return
}

func safeString(str fmt.Stringer) (s string, ok bool) {
    defer func() {
        if panicVal := recover(); panicVal != nil {
            if v := reflect.ValueOf(str); v.Kind() == reflect.Ptr && v.IsNil() {
                s, ok = "null", false
            } else {
                s, ok = fmt.Sprintf("PANIC:%v", panicVal), true
            }
        }
    }()
    s, ok = str.String(), true
    return
}

func safeMarshal(tm encoding.TextMarshaler) (b []byte, err error) {
    defer func() {
        if panicVal := recover(); panicVal != nil {
            if v := reflect.ValueOf(tm); v.Kind() == reflect.Ptr && v.IsNil() {
                b, err = nil, nil
            } else {
                b, err = nil, fmt.Errorf("panic when marshalling: %s", panicVal)
            }
        }
    }()
    b, err = tm.MarshalText()
    if err != nil {
        return nil, &MarshalerError{
            Type: reflect.TypeOf(tm),
            Err:  err,
        }
    }
    return
}


@ -1,126 +1,126 @@
// +build gofuzz

package logfmt

import (
    "bufio"
    "bytes"
    "fmt"
    "io"
    "reflect"

    kr "github.com/kr/logfmt"
)

// Fuzz checks reserialized data matches
func Fuzz(data []byte) int {
    parsed, err := parse(data)
    if err != nil {
        return 0
    }
    var w1 bytes.Buffer
    if err = write(parsed, &w1); err != nil {
        panic(err)
    }
    parsed, err = parse(w1.Bytes())
    if err != nil {
        panic(err)
    }
    var w2 bytes.Buffer
    if err = write(parsed, &w2); err != nil {
        panic(err)
    }
    if !bytes.Equal(w1.Bytes(), w2.Bytes()) {
        panic(fmt.Sprintf("reserialized data does not match:\n%q\n%q\n", w1.Bytes(), w2.Bytes()))
    }
    return 1
}

// FuzzVsKR checks go-logfmt/logfmt against kr/logfmt
func FuzzVsKR(data []byte) int {
    parsed, err := parse(data)
    parsedKR, errKR := parseKR(data)
    // github.com/go-logfmt/logfmt is a stricter parser. It returns errors for
    // more inputs than github.com/kr/logfmt. Ignore any inputs that have a
    // strict error.
    if err != nil {
        return 0
    }

    // Fail if the more forgiving parser finds an error not found by the
    // stricter parser.
    if errKR != nil {
        panic(fmt.Sprintf("unmatched error: %v", errKR))
    }

    if !reflect.DeepEqual(parsed, parsedKR) {
        panic(fmt.Sprintf("parsers disagree:\n%+v\n%+v\n", parsed, parsedKR))
    }
    return 1
}

type kv struct {
    k, v []byte
}

func parse(data []byte) ([][]kv, error) {
    var got [][]kv
    dec := NewDecoder(bytes.NewReader(data))
    for dec.ScanRecord() {
        var kvs []kv
        for dec.ScanKeyval() {
            kvs = append(kvs, kv{dec.Key(), dec.Value()})
        }
        got = append(got, kvs)
    }
    return got, dec.Err()
}

func parseKR(data []byte) ([][]kv, error) {
    var (
        s   = bufio.NewScanner(bytes.NewReader(data))
        err error
        h   saveHandler
        got [][]kv
    )
    for err == nil && s.Scan() {
        h.kvs = nil
        err = kr.Unmarshal(s.Bytes(), &h)
        got = append(got, h.kvs)
    }
    if err == nil {
        err = s.Err()
    }
    return got, err
}

type saveHandler struct {
    kvs []kv
}

func (h *saveHandler) HandleLogfmt(key, val []byte) error {
    if len(key) == 0 {
        key = nil
    }
    if len(val) == 0 {
        val = nil
    }
    h.kvs = append(h.kvs, kv{key, val})
    return nil
}

func write(recs [][]kv, w io.Writer) error {
    enc := NewEncoder(w)
    for _, rec := range recs {
        for _, f := range rec {
            if err := enc.EncodeKeyval(f.k, f.v); err != nil {
                return err
            }
        }
        if err := enc.EndRecord(); err != nil {
            return err
        }
    }
    return nil
}


@ -1,3 +1,3 @@
module github.com/go-logfmt/logfmt

require github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515


@ -1,2 +1,2 @@
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515 h1:T+h1c/A9Gawja4Y9mFVWj2vyii2bbUNDw3kt9VxK2EY=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=


@ -1,277 +1,277 @@
package logfmt

import (
    "bytes"
    "io"
    "strconv"
    "sync"
    "unicode"
    "unicode/utf16"
    "unicode/utf8"
)

// Taken from Go's encoding/json and modified for use here.

// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

var hex = "0123456789abcdef"

var bufferPool = sync.Pool{
    New: func() interface{} {
        return &bytes.Buffer{}
    },
}

func getBuffer() *bytes.Buffer {
    return bufferPool.Get().(*bytes.Buffer)
}

func poolBuffer(buf *bytes.Buffer) {
    buf.Reset()
    bufferPool.Put(buf)
}

// NOTE: keep in sync with writeQuotedBytes below.
func writeQuotedString(w io.Writer, s string) (int, error) {
    buf := getBuffer()
    buf.WriteByte('"')
    start := 0
    for i := 0; i < len(s); {
        if b := s[i]; b < utf8.RuneSelf {
            if 0x20 <= b && b != '\\' && b != '"' {
                i++
                continue
            }
            if start < i {
                buf.WriteString(s[start:i])
            }
            switch b {
            case '\\', '"':
                buf.WriteByte('\\')
                buf.WriteByte(b)
            case '\n':
                buf.WriteByte('\\')
                buf.WriteByte('n')
            case '\r':
                buf.WriteByte('\\')
                buf.WriteByte('r')
            case '\t':
                buf.WriteByte('\\')
                buf.WriteByte('t')
            default:
                // This encodes bytes < 0x20 except for \n, \r, and \t.
                buf.WriteString(`\u00`)
                buf.WriteByte(hex[b>>4])
                buf.WriteByte(hex[b&0xF])
            }
            i++
            start = i
            continue
        }
        c, size := utf8.DecodeRuneInString(s[i:])
        if c == utf8.RuneError {
            if start < i {
                buf.WriteString(s[start:i])
            }
            buf.WriteString(`\ufffd`)
            i += size
            start = i
            continue
        }
        i += size
    }
    if start < len(s) {
        buf.WriteString(s[start:])
    }
    buf.WriteByte('"')
    n, err := w.Write(buf.Bytes())
    poolBuffer(buf)
    return n, err
}
// NOTE: keep in sync with writeQuotedString above.
func writeQuotedBytes(w io.Writer, s []byte) (int, error) {
    buf := getBuffer()
    buf.WriteByte('"')
    start := 0
    for i := 0; i < len(s); {
        if b := s[i]; b < utf8.RuneSelf {
            if 0x20 <= b && b != '\\' && b != '"' {
                i++
                continue
            }
            if start < i {
                buf.Write(s[start:i])
            }
            switch b {
            case '\\', '"':
                buf.WriteByte('\\')
                buf.WriteByte(b)
            case '\n':
                buf.WriteByte('\\')
                buf.WriteByte('n')
            case '\r':
                buf.WriteByte('\\')
                buf.WriteByte('r')
            case '\t':
                buf.WriteByte('\\')
                buf.WriteByte('t')
            default:
                // This encodes bytes < 0x20 except for \n, \r, and \t.
                buf.WriteString(`\u00`)
                buf.WriteByte(hex[b>>4])
                buf.WriteByte(hex[b&0xF])
            }
            i++
            start = i
            continue
        }
        c, size := utf8.DecodeRune(s[i:])
        if c == utf8.RuneError {
            if start < i {
                buf.Write(s[start:i])
            }
            buf.WriteString(`\ufffd`)
            i += size
            start = i
            continue
        }
        i += size
    }
    if start < len(s) {
        buf.Write(s[start:])
    }
    buf.WriteByte('"')
    n, err := w.Write(buf.Bytes())
    poolBuffer(buf)
    return n, err
}
// getu4 decodes \uXXXX from the beginning of s, returning the hex value,
// or it returns -1.
func getu4(s []byte) rune {
    if len(s) < 6 || s[0] != '\\' || s[1] != 'u' {
        return -1
    }
    r, err := strconv.ParseUint(string(s[2:6]), 16, 64)
    if err != nil {
        return -1
    }
    return rune(r)
}
func unquoteBytes(s []byte) (t []byte, ok bool) {
    if len(s) < 2 || s[0] != '"' || s[len(s)-1] != '"' {
        return
    }
    s = s[1 : len(s)-1]

    // Check for unusual characters. If there are none,
    // then no unquoting is needed, so return a slice of the
    // original bytes.
    r := 0
    for r < len(s) {
        c := s[r]
        if c == '\\' || c == '"' || c < ' ' {
            break
        }
        if c < utf8.RuneSelf {
            r++
            continue
        }
        rr, size := utf8.DecodeRune(s[r:])
        if rr == utf8.RuneError {
            break
        }
        r += size
    }
    if r == len(s) {
        return s, true
    }

    b := make([]byte, len(s)+2*utf8.UTFMax)
    w := copy(b, s[0:r])
    for r < len(s) {
        // Out of room? Can only happen if s is full of
        // malformed UTF-8 and we're replacing each
        // byte with RuneError.
        if w >= len(b)-2*utf8.UTFMax {
            nb := make([]byte, (len(b)+utf8.UTFMax)*2)
            copy(nb, b[0:w])
            b = nb
        }
        switch c := s[r]; {
        case c == '\\':
            r++
            if r >= len(s) {
                return
            }
            switch s[r] {
            default:
                return
            case '"', '\\', '/', '\'':
                b[w] = s[r]
                r++
                w++
            case 'b':
                b[w] = '\b'
                r++
                w++
            case 'f':
                b[w] = '\f'
                r++
                w++
            case 'n':
                b[w] = '\n'
                r++
                w++
            case 'r':
                b[w] = '\r'
                r++
                w++
            case 't':
                b[w] = '\t'
                r++
                w++
            case 'u':
                r--
                rr := getu4(s[r:])
                if rr < 0 {
                    return
                }
                r += 6
                if utf16.IsSurrogate(rr) {
                    rr1 := getu4(s[r:])
                    if dec := utf16.DecodeRune(rr, rr1); dec != unicode.ReplacementChar {
                        // A valid pair; consume.
                        r += 6
                        w += utf8.EncodeRune(b[w:], dec)
                        break
                    }
                    // Invalid surrogate; fall back to replacement rune.
                    rr = unicode.ReplacementChar
                }
                w += utf8.EncodeRune(b[w:], rr)
            }
        // Quote, control characters are invalid.
        case c == '"', c < ' ':
            return
        // ASCII
        case c < utf8.RuneSelf:
            b[w] = c
            r++
            w++
        // Coerce to well-formed UTF-8.
        default:
            rr, size := utf8.DecodeRune(s[r:])
            r += size
            w += utf8.EncodeRune(b[w:], rr)
        }
    }
    return b[0:w], true
}
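Since unquoteBytes is unexported, any demonstration must live in package logfmt. A test-style sketch (not part of the upstream suite) of the surrogate-pair path handled above:

```go
package logfmt

import (
	"bytes"
	"testing"
)

// A \uD83D\uDE00 surrogate pair should decode to the single rune U+1F600.
func TestUnquoteSurrogatePair(t *testing.T) {
	got, ok := unquoteBytes([]byte(`"\uD83D\uDE00"`))
	if !ok {
		t.Fatal("unquoteBytes reported invalid input")
	}
	if want := []byte("\U0001F600"); !bytes.Equal(got, want) {
		t.Fatalf("got %q, want %q", got, want)
	}
}
```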


@ -1,3 +1,3 @@
# This source code refers to The Go Authors for copyright purposes.
# The master list of authors is in the main Go distribution,
# visible at http://tip.golang.org/AUTHORS.


@ -1,3 +1,3 @@
# This source code was written by the Go contributors.
# The master list of contributors is in the main Go distribution,
# visible at http://tip.golang.org/CONTRIBUTORS.


@ -1,28 +1,28 @@
Copyright 2010 The Go Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

   * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
   * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,253 +1,253 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// Protocol buffer deep copy and merge.
// TODO: RawMessage.

package proto

import (
    "fmt"
    "log"
    "reflect"
    "strings"
)

// Clone returns a deep copy of a protocol buffer.
func Clone(src Message) Message {
    in := reflect.ValueOf(src)
    if in.IsNil() {
        return src
    }
    out := reflect.New(in.Type().Elem())
    dst := out.Interface().(Message)
    Merge(dst, src)
    return dst
}
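A hedged sketch of Clone in use; pb.Example stands in for any generated message type and is an assumption, not something defined in this package:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"

	pb "example.com/project/pb" // hypothetical generated package
)

func main() {
	orig := &pb.Example{Name: proto.String("a")}
	dup := proto.Clone(orig).(*pb.Example) // deep copy; safe to mutate
	dup.Name = proto.String("b")
	fmt.Println(orig.GetName(), dup.GetName()) // a b
}
```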
// Merger is the interface representing objects that can merge messages of the same type.
type Merger interface {
    // Merge merges src into this message.
    // Required and optional fields that are set in src will be set to that value in dst.
    // Elements of repeated fields will be appended.
    //
    // Merge may panic if called with a different argument type than the receiver.
    Merge(src Message)
}
// generatedMerger is the custom merge method that generated protos will have.
// We must add this method since a generated Merge method will conflict with
// many existing protos that have a Merge data field already defined.
type generatedMerger interface {
    XXX_Merge(src Message)
}
// Merge merges src into dst.
// Required and optional fields that are set in src will be set to that value in dst.
// Elements of repeated fields will be appended.
// Merge panics if src and dst are not the same type, or if dst is nil.
func Merge(dst, src Message) {
    if m, ok := dst.(Merger); ok {
        m.Merge(src)
        return
    }

    in := reflect.ValueOf(src)
    out := reflect.ValueOf(dst)
    if out.IsNil() {
        panic("proto: nil destination")
    }
    if in.Type() != out.Type() {
        panic(fmt.Sprintf("proto.Merge(%T, %T) type mismatch", dst, src))
    }
    if in.IsNil() {
        return // Merge from nil src is a noop
    }
    if m, ok := dst.(generatedMerger); ok {
        m.XXX_Merge(src)
        return
    }
    mergeStruct(out.Elem(), in.Elem())
}
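And a matching sketch of the merge semantics spelled out above, again with the hypothetical pb.Example: scalars set in src overwrite, repeated fields append.

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"

	pb "example.com/project/pb" // hypothetical generated package, as above
)

func main() {
	dst := &pb.Example{Tags: []string{"a"}}
	src := &pb.Example{Name: proto.String("x"), Tags: []string{"b"}}
	proto.Merge(dst, src)
	fmt.Println(dst.GetName(), dst.GetTags()) // x [a b]
}
```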
func mergeStruct(out, in reflect.Value) {
    sprop := GetProperties(in.Type())
    for i := 0; i < in.NumField(); i++ {
        f := in.Type().Field(i)
        if strings.HasPrefix(f.Name, "XXX_") {
            continue
        }
        mergeAny(out.Field(i), in.Field(i), false, sprop.Prop[i])
    }

    if emIn, err := extendable(in.Addr().Interface()); err == nil {
        emOut, _ := extendable(out.Addr().Interface())
        mIn, muIn := emIn.extensionsRead()
        if mIn != nil {
            mOut := emOut.extensionsWrite()
            muIn.Lock()
            mergeExtension(mOut, mIn)
            muIn.Unlock()
        }
    }

    uf := in.FieldByName("XXX_unrecognized")
    if !uf.IsValid() {
        return
    }
    uin := uf.Bytes()
    if len(uin) > 0 {
        out.FieldByName("XXX_unrecognized").SetBytes(append([]byte(nil), uin...))
    }
}

// mergeAny performs a merge between two values of the same type.
// viaPtr indicates whether the values were indirected through a pointer (implying proto2).
// prop is set if this is a struct field (it may be nil).
func mergeAny(out, in reflect.Value, viaPtr bool, prop *Properties) {
    if in.Type() == protoMessageType {
        if !in.IsNil() {
            if out.IsNil() {
                out.Set(reflect.ValueOf(Clone(in.Interface().(Message))))
            } else {
                Merge(out.Interface().(Message), in.Interface().(Message))
            }
        }
        return
    }
    switch in.Kind() {
    case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
        reflect.String, reflect.Uint32, reflect.Uint64:
        if !viaPtr && isProto3Zero(in) {
            return
        }
        out.Set(in)
    case reflect.Interface:
        // Probably a oneof field; copy non-nil values.
        if in.IsNil() {
            return
        }
        // Allocate destination if it is not set, or set to a different type.
        // Otherwise we will merge as normal.
        if out.IsNil() || out.Elem().Type() != in.Elem().Type() {
            out.Set(reflect.New(in.Elem().Elem().Type())) // interface -> *T -> T -> new(T)
        }
        mergeAny(out.Elem(), in.Elem(), false, nil)
    case reflect.Map:
        if in.Len() == 0 {
            return
        }
        if out.IsNil() {
            out.Set(reflect.MakeMap(in.Type()))
        }
        // For maps with value types of *T or []byte we need to deep copy each value.
        elemKind := in.Type().Elem().Kind()
        for _, key := range in.MapKeys() {
            var val reflect.Value
            switch elemKind {
            case reflect.Ptr:
                val = reflect.New(in.Type().Elem().Elem())
                mergeAny(val, in.MapIndex(key), false, nil)
            case reflect.Slice:
                val = in.MapIndex(key)
                val = reflect.ValueOf(append([]byte{}, val.Bytes()...))
            default:
                val = in.MapIndex(key)
            }
            out.SetMapIndex(key, val)
        }
    case reflect.Ptr:
        if in.IsNil() {
            return
        }
        if out.IsNil() {
            out.Set(reflect.New(in.Elem().Type()))
        }
        mergeAny(out.Elem(), in.Elem(), true, nil)
    case reflect.Slice:
        if in.IsNil() {
            return
        }
        if in.Type().Elem().Kind() == reflect.Uint8 {
            // []byte is a scalar bytes field, not a repeated field.

            // Edge case: if this is in a proto3 message, a zero length
            // bytes field is considered the zero value, and should not
            // be merged.
            if prop != nil && prop.proto3 && in.Len() == 0 {
                return
            }

            // Make a deep copy.
            // Append to []byte{} instead of []byte(nil) so that we never end up
            // with a nil result.
            out.SetBytes(append([]byte{}, in.Bytes()...))
            return
        }
        n := in.Len()
        if out.IsNil() {
            out.Set(reflect.MakeSlice(in.Type(), 0, n))
        }
        switch in.Type().Elem().Kind() {
        case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
            reflect.String, reflect.Uint32, reflect.Uint64:
            out.Set(reflect.AppendSlice(out, in))
        default:
            for i := 0; i < n; i++ {
                x := reflect.Indirect(reflect.New(in.Type().Elem()))
                mergeAny(x, in.Index(i), false, nil)
                out.Set(reflect.Append(out, x))
            }
        }
    case reflect.Struct:
        mergeStruct(out, in)
    default:
        // unknown type, so not a protocol buffer
        log.Printf("proto: don't know how to copy %v", in)
    }
}

func mergeExtension(out, in map[int32]Extension) {
    for extNum, eIn := range in {
        eOut := Extension{desc: eIn.desc}
        if eIn.value != nil {
            v := reflect.New(reflect.TypeOf(eIn.value)).Elem()
            mergeAny(v, reflect.ValueOf(eIn.value), false, nil)
            eOut.value = v.Interface()
        }
        if eIn.enc != nil {
            eOut.enc = make([]byte, len(eIn.enc))
            copy(eOut.enc, eIn.enc)
        }
        out[extNum] = eOut
    }
}


@ -1,427 +1,427 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

package proto

/*
 * Routines for decoding protocol buffer data to construct in-memory representations.
 */

import (
    "errors"
    "fmt"
    "io"
)

// errOverflow is returned when an integer is too large to be represented.
var errOverflow = errors.New("proto: integer overflow")

// ErrInternalBadWireType is returned by generated code when an incorrect
// wire type is encountered. It does not get returned to user code.
var ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof")

// DecodeVarint reads a varint-encoded integer from the slice.
// It returns the integer and the number of bytes consumed, or
// zero if there is not enough.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func DecodeVarint(buf []byte) (x uint64, n int) {
    for shift := uint(0); shift < 64; shift += 7 {
        if n >= len(buf) {
            return 0, 0
        }
        b := uint64(buf[n])
        n++
        x |= (b & 0x7F) << shift
        if (b & 0x80) == 0 {
            return x, n
        }
    }

    // The number is too large to represent in a 64-bit value.
    return 0, 0
}
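A worked example of the wire format the comment describes: 300 is 0b100101100, so the low seven bits (0x2C) go first with the continuation bit set, followed by 0x02.

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	x, n := proto.DecodeVarint([]byte{0xAC, 0x02}) // 0xAC = 0x2C | 0x80
	fmt.Println(x, n) // 300 2

	x, n = proto.DecodeVarint([]byte{0xAC}) // truncated: continuation bit set, no next byte
	fmt.Println(x, n) // 0 0
}
```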
func (p *Buffer) decodeVarintSlow() (x uint64, err error) {
    i := p.index
    l := len(p.buf)

    for shift := uint(0); shift < 64; shift += 7 {
        if i >= l {
            err = io.ErrUnexpectedEOF
            return
        }
        b := p.buf[i]
        i++
        x |= (uint64(b) & 0x7F) << shift
        if b < 0x80 {
            p.index = i
            return
        }
    }

    // The number is too large to represent in a 64-bit value.
    err = errOverflow
    return
}

// DecodeVarint reads a varint-encoded integer from the Buffer.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func (p *Buffer) DecodeVarint() (x uint64, err error) {
    i := p.index
    buf := p.buf

    if i >= len(buf) {
        return 0, io.ErrUnexpectedEOF
    } else if buf[i] < 0x80 {
        p.index++
        return uint64(buf[i]), nil
    } else if len(buf)-i < 10 {
        return p.decodeVarintSlow()
    }

    var b uint64
    // we already checked the first byte
    x = uint64(buf[i]) - 0x80
    i++

    b = uint64(buf[i])
    i++
    x += b << 7
    if b&0x80 == 0 {
        goto done
    }
    x -= 0x80 << 7

    b = uint64(buf[i])
    i++
    x += b << 14
    if b&0x80 == 0 {
        goto done
    }
    x -= 0x80 << 14

    b = uint64(buf[i])
    i++
    x += b << 21
    if b&0x80 == 0 {
        goto done
    }
    x -= 0x80 << 21

    b = uint64(buf[i])
    i++
    x += b << 28
    if b&0x80 == 0 {
        goto done
    }
    x -= 0x80 << 28

    b = uint64(buf[i])
    i++
    x += b << 35
    if b&0x80 == 0 {
        goto done
    }
    x -= 0x80 << 35

    b = uint64(buf[i])
    i++
    x += b << 42
    if b&0x80 == 0 {
        goto done
    }
    x -= 0x80 << 42

    b = uint64(buf[i])
    i++
    x += b << 49
    if b&0x80 == 0 {
        goto done
    }
    x -= 0x80 << 49

    b = uint64(buf[i])
    i++
    x += b << 56
    if b&0x80 == 0 {
        goto done
    }
    x -= 0x80 << 56

    b = uint64(buf[i])
    i++
    x += b << 63
    if b&0x80 == 0 {
        goto done
    }

    return 0, errOverflow

done:
    p.index = i
    return x, nil
}
// DecodeFixed64 reads a 64-bit integer from the Buffer.
// This is the format for the
// fixed64, sfixed64, and double protocol buffer types.
func (p *Buffer) DecodeFixed64() (x uint64, err error) {
	// x, err already 0
	i := p.index + 8
	if i < 0 || i > len(p.buf) {
		err = io.ErrUnexpectedEOF
		return
	}
	p.index = i

	x = uint64(p.buf[i-8])
	x |= uint64(p.buf[i-7]) << 8
	x |= uint64(p.buf[i-6]) << 16
	x |= uint64(p.buf[i-5]) << 24
	x |= uint64(p.buf[i-4]) << 32
	x |= uint64(p.buf[i-3]) << 40
	x |= uint64(p.buf[i-2]) << 48
	x |= uint64(p.buf[i-1]) << 56
	return
}
// DecodeFixed32 reads a 32-bit integer from the Buffer.
// This is the format for the
// fixed32, sfixed32, and float protocol buffer types.
func (p *Buffer) DecodeFixed32() (x uint64, err error) {
	// x, err already 0
	i := p.index + 4
	if i < 0 || i > len(p.buf) {
		err = io.ErrUnexpectedEOF
		return
	}
	p.index = i

	x = uint64(p.buf[i-4])
	x |= uint64(p.buf[i-3]) << 8
	x |= uint64(p.buf[i-2]) << 16
	x |= uint64(p.buf[i-1]) << 24
	return
}
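Fixed-width fields are plain little-endian, least-significant byte first. A minimal sketch using NewBuffer from this package:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	// fixed32 value 1 on the wire: least-significant byte first.
	p := proto.NewBuffer([]byte{0x01, 0x00, 0x00, 0x00})
	x, err := p.DecodeFixed32()
	fmt.Println(x, err) // 1 <nil>
}
```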
// DecodeZigzag64 reads a zigzag-encoded 64-bit integer
// from the Buffer.
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) DecodeZigzag64() (x uint64, err error) {
	x, err = p.DecodeVarint()
	if err != nil {
		return
	}
	x = (x >> 1) ^ uint64((int64(x&1)<<63)>>63)
	return
}
// DecodeZigzag32 reads a zigzag-encoded 32-bit integer
// from the Buffer.
// This is the format used for the sint32 protocol buffer type.
func (p *Buffer) DecodeZigzag32() (x uint64, err error) {
	x, err = p.DecodeVarint()
	if err != nil {
		return
	}
	x = uint64((uint32(x) >> 1) ^ uint32((int32(x&1)<<31)>>31))
	return
}
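Zigzag coding maps signed integers onto unsigned ones so that small negative values still produce short varints: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, and so on. A round-trip sketch, relying on the matching encoder that appears later in this diff and on the Buffer's read cursor starting at offset 0:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	p := proto.NewBuffer(nil)
	// -3 zigzags to 5, which encodes as a single varint byte.
	_ = p.EncodeZigzag64(uint64(int64(-3)))
	x, _ := p.DecodeZigzag64() // reads back from offset 0
	fmt.Println(int64(x))      // -3
}
```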
// DecodeRawBytes reads a count-delimited byte buffer from the Buffer.
// This is the format used for the bytes protocol buffer
// type and for embedded messages.
func (p *Buffer) DecodeRawBytes(alloc bool) (buf []byte, err error) {
	n, err := p.DecodeVarint()
	if err != nil {
		return nil, err
	}

	nb := int(n)
	if nb < 0 {
		return nil, fmt.Errorf("proto: bad byte length %d", nb)
	}
	end := p.index + nb
	if end < p.index || end > len(p.buf) {
		return nil, io.ErrUnexpectedEOF
	}

	if !alloc {
		// TODO: check whether more callers could use alloc=false.
		buf = p.buf[p.index:end]
		p.index += nb
		return
	}

	buf = make([]byte, nb)
	copy(buf, p.buf[p.index:])
	p.index += nb
	return
}
// DecodeStringBytes reads an encoded string from the Buffer.
// This is the format used for the proto2 string type.
func (p *Buffer) DecodeStringBytes() (s string, err error) {
	buf, err := p.DecodeRawBytes(false)
	if err != nil {
		return
	}
	return string(buf), nil
}
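bytes and string fields share the length-delimited wire form: a varint byte count followed by that many raw bytes. For example:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	// Varint length prefix 5, then the five bytes of "hello".
	p := proto.NewBuffer([]byte{0x05, 'h', 'e', 'l', 'l', 'o'})
	s, err := p.DecodeStringBytes()
	fmt.Printf("%q %v\n", s, err) // "hello" <nil>
}
```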
// Unmarshaler is the interface representing objects that can
// unmarshal themselves. The argument points to data that may be
// overwritten, so implementations should not keep references to the
// buffer.
// Unmarshal implementations should not clear the receiver.
// Any unmarshaled data should be merged into the receiver.
// Callers of Unmarshal that do not want to retain existing data
// should Reset the receiver before calling Unmarshal.
type Unmarshaler interface {
	Unmarshal([]byte) error
}

// newUnmarshaler is the interface representing objects that can
// unmarshal themselves. The semantics are identical to Unmarshaler.
//
// This exists to support protoc-gen-go generated messages.
// The proto package will stop type-asserting to this interface in the future.
//
// DO NOT DEPEND ON THIS.
type newUnmarshaler interface {
	XXX_Unmarshal([]byte) error
}
// Unmarshal parses the protocol buffer representation in buf and places the
// decoded result in pb. If the struct underlying pb does not match
// the data in buf, the results can be unpredictable.
//
// Unmarshal resets pb before starting to unmarshal, so any
// existing data in pb is always removed. Use UnmarshalMerge
// to preserve and append to existing data.
func Unmarshal(buf []byte, pb Message) error {
	pb.Reset()
	if u, ok := pb.(newUnmarshaler); ok {
		return u.XXX_Unmarshal(buf)
	}
	if u, ok := pb.(Unmarshaler); ok {
		return u.Unmarshal(buf)
	}
	return NewBuffer(buf).Unmarshal(pb)
}
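A usage sketch; pb.Example is a hypothetical protoc-gen-go generated message type, not part of this package:

```go
// decode is a minimal sketch, assuming pb.Example is a generated message.
func decode(data []byte) (*pb.Example, error) {
	msg := new(pb.Example) // hypothetical generated type
	// Unmarshal resets msg first; use proto.UnmarshalMerge(data, msg)
	// to merge into existing contents instead.
	if err := proto.Unmarshal(data, msg); err != nil {
		return nil, err
	}
	return msg, nil
}
```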
// UnmarshalMerge parses the protocol buffer representation in buf and
// writes the decoded result to pb. If the struct underlying pb does not match
// the data in buf, the results can be unpredictable.
//
// UnmarshalMerge merges into existing data in pb.
// Most code should use Unmarshal instead.
func UnmarshalMerge(buf []byte, pb Message) error {
	if u, ok := pb.(newUnmarshaler); ok {
		return u.XXX_Unmarshal(buf)
	}
	if u, ok := pb.(Unmarshaler); ok {
		// NOTE: The history of proto has unfortunately been inconsistent
		// about whether Unmarshaler should or should not implicitly clear itself.
		// Some implementations do, most do not.
		// Thus, calling this here may or may not do what people want.
		//
		// See https://github.com/golang/protobuf/issues/424
		return u.Unmarshal(buf)
	}
	return NewBuffer(buf).Unmarshal(pb)
}
// DecodeMessage reads a count-delimited message from the Buffer.
func (p *Buffer) DecodeMessage(pb Message) error {
	enc, err := p.DecodeRawBytes(false)
	if err != nil {
		return err
	}
	return NewBuffer(enc).Unmarshal(pb)
}
// DecodeGroup reads a tag-delimited group from the Buffer.
// The StartGroup tag is already consumed. This function consumes the
// EndGroup tag.
func (p *Buffer) DecodeGroup(pb Message) error {
	b := p.buf[p.index:]
	x, y := findEndGroup(b)
	if x < 0 {
		return io.ErrUnexpectedEOF
	}
	err := Unmarshal(b[:x], pb)
	p.index += y
	return err
}
// Unmarshal parses the protocol buffer representation in the
// Buffer and places the decoded result in pb. If the struct
// underlying pb does not match the data in the buffer, the results can be
// unpredictable.
//
// Unlike proto.Unmarshal, this does not reset pb before starting to unmarshal.
func (p *Buffer) Unmarshal(pb Message) error {
	// If the object can unmarshal itself, let it.
	if u, ok := pb.(newUnmarshaler); ok {
		err := u.XXX_Unmarshal(p.buf[p.index:])
		p.index = len(p.buf)
		return err
	}
	if u, ok := pb.(Unmarshaler); ok {
		// NOTE: The history of proto has unfortunately been inconsistent
		// about whether Unmarshaler should or should not implicitly clear itself.
		// Some implementations do, most do not.
		// Thus, calling this here may or may not do what people want.
		//
		// See https://github.com/golang/protobuf/issues/424
		err := u.Unmarshal(p.buf[p.index:])
		p.index = len(p.buf)
		return err
	}

	// Slow workaround for messages that aren't Unmarshalers.
	// This includes some hand-coded .pb.go files and
	// bootstrap protos.
	// TODO: fix all of those and then add Unmarshal to
	// the Message interface. Then:
	// The cast above and code below can be deleted.
	// The old unmarshaler can be deleted.
	// Clients can call Unmarshal directly (can already do that, actually).
	var info InternalMessageInfo
	err := info.Unmarshal(pb, p.buf[p.index:])
	p.index = len(p.buf)
	return err
}


@ -1,63 +1,63 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2018 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

package proto

import "errors"

// Deprecated: do not use.
type Stats struct{ Emalloc, Dmalloc, Encode, Decode, Chit, Cmiss, Size uint64 }

// Deprecated: do not use.
func GetStats() Stats { return Stats{} }

// Deprecated: do not use.
func MarshalMessageSet(interface{}) ([]byte, error) {
	return nil, errors.New("proto: not implemented")
}

// Deprecated: do not use.
func UnmarshalMessageSet([]byte, interface{}) error {
	return errors.New("proto: not implemented")
}

// Deprecated: do not use.
func MarshalMessageSetJSON(interface{}) ([]byte, error) {
	return nil, errors.New("proto: not implemented")
}

// Deprecated: do not use.
func UnmarshalMessageSetJSON([]byte, interface{}) error {
	return errors.New("proto: not implemented")
}

// Deprecated: do not use.
func RegisterMessageSetType(Message, int32, string) {}


@ -1,350 +1,350 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2017 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

package proto

import (
	"fmt"
	"reflect"
	"strings"
	"sync"
	"sync/atomic"
)
type generatedDiscarder interface {
	XXX_DiscardUnknown()
}

// DiscardUnknown recursively discards all unknown fields from this message
// and all embedded messages.
//
// When unmarshaling a message with unrecognized fields, the tags and values
// of such fields are preserved in the Message. This allows a later call to
// marshal to be able to produce a message that continues to have those
// unrecognized fields. To avoid this, DiscardUnknown is used to
// explicitly clear the unknown fields after unmarshaling.
//
// For proto2 messages, the unknown fields of message extensions are only
// discarded from messages that have been accessed via GetExtension.
func DiscardUnknown(m Message) {
	if m, ok := m.(generatedDiscarder); ok {
		m.XXX_DiscardUnknown()
		return
	}
	// TODO: Dynamically populate an InternalMessageInfo for legacy messages,
	// but the master branch has no implementation for InternalMessageInfo,
	// so it would be more work to replicate that approach.
	discardLegacy(m)
}
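A usage sketch of the typical pattern: unmarshal, discard the preserved unknown fields, then re-marshal so they are not echoed back. pb.Example is again a hypothetical generated type:

```go
// stripUnknown is a minimal sketch, assuming pb.Example is a
// protoc-gen-go generated message type.
func stripUnknown(data []byte) ([]byte, error) {
	msg := new(pb.Example)
	if err := proto.Unmarshal(data, msg); err != nil {
		return nil, err
	}
	proto.DiscardUnknown(msg) // drop preserved unknown fields
	return proto.Marshal(msg)
}
```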
// DiscardUnknown recursively discards all unknown fields.
func (a *InternalMessageInfo) DiscardUnknown(m Message) {
	di := atomicLoadDiscardInfo(&a.discard)
	if di == nil {
		di = getDiscardInfo(reflect.TypeOf(m).Elem())
		atomicStoreDiscardInfo(&a.discard, di)
	}
	di.discard(toPointer(&m))
}

type discardInfo struct {
	typ reflect.Type

	initialized int32 // 0: only typ is valid, 1: everything is valid
	lock        sync.Mutex

	fields       []discardFieldInfo
	unrecognized field
}

type discardFieldInfo struct {
	field   field // Offset of field, guaranteed to be valid
	discard func(src pointer)
}

var (
	discardInfoMap  = map[reflect.Type]*discardInfo{}
	discardInfoLock sync.Mutex
)

func getDiscardInfo(t reflect.Type) *discardInfo {
	discardInfoLock.Lock()
	defer discardInfoLock.Unlock()
	di := discardInfoMap[t]
	if di == nil {
		di = &discardInfo{typ: t}
		discardInfoMap[t] = di
	}
	return di
}
func (di *discardInfo) discard(src pointer) {
	if src.isNil() {
		return // Nothing to do.
	}

	if atomic.LoadInt32(&di.initialized) == 0 {
		di.computeDiscardInfo()
	}

	for _, fi := range di.fields {
		sfp := src.offset(fi.field)
		fi.discard(sfp)
	}

	// For proto2 messages, only discard unknown fields in message extensions
	// that have been accessed via GetExtension.
	if em, err := extendable(src.asPointerTo(di.typ).Interface()); err == nil {
		// Ignore lock since DiscardUnknown is not concurrency safe.
		emm, _ := em.extensionsRead()
		for _, mx := range emm {
			if m, ok := mx.value.(Message); ok {
				DiscardUnknown(m)
			}
		}
	}

	if di.unrecognized.IsValid() {
		*src.offset(di.unrecognized).toBytes() = nil
	}
}
func (di *discardInfo) computeDiscardInfo() {
	di.lock.Lock()
	defer di.lock.Unlock()
	if di.initialized != 0 {
		return
	}
	t := di.typ
	n := t.NumField()

	for i := 0; i < n; i++ {
		f := t.Field(i)
		if strings.HasPrefix(f.Name, "XXX_") {
			continue
		}

		dfi := discardFieldInfo{field: toField(&f)}
		tf := f.Type

		// Unwrap tf to get its most basic type.
		var isPointer, isSlice bool
		if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 {
			isSlice = true
			tf = tf.Elem()
		}
		if tf.Kind() == reflect.Ptr {
			isPointer = true
			tf = tf.Elem()
		}
		if isPointer && isSlice && tf.Kind() != reflect.Struct {
			panic(fmt.Sprintf("%v.%s cannot be a slice of pointers to primitive types", t, f.Name))
		}

		switch tf.Kind() {
		case reflect.Struct:
			switch {
			case !isPointer:
				panic(fmt.Sprintf("%v.%s cannot be a direct struct value", t, f.Name))
			case isSlice: // E.g., []*pb.T
				di := getDiscardInfo(tf)
				dfi.discard = func(src pointer) {
					sps := src.getPointerSlice()
					for _, sp := range sps {
						if !sp.isNil() {
							di.discard(sp)
						}
					}
				}
			default: // E.g., *pb.T
				di := getDiscardInfo(tf)
				dfi.discard = func(src pointer) {
					sp := src.getPointer()
					if !sp.isNil() {
						di.discard(sp)
					}
				}
			}
		case reflect.Map:
			switch {
			case isPointer || isSlice:
				panic(fmt.Sprintf("%v.%s cannot be a pointer to a map or a slice of map values", t, f.Name))
			default: // E.g., map[K]V
				if tf.Elem().Kind() == reflect.Ptr { // Proto struct (e.g., *T)
					dfi.discard = func(src pointer) {
						sm := src.asPointerTo(tf).Elem()
						if sm.Len() == 0 {
							return
						}
						for _, key := range sm.MapKeys() {
							val := sm.MapIndex(key)
							DiscardUnknown(val.Interface().(Message))
						}
					}
				} else {
					dfi.discard = func(pointer) {} // Noop
				}
			}
		case reflect.Interface:
			// Must be a oneof field.
			switch {
			case isPointer || isSlice:
				panic(fmt.Sprintf("%v.%s cannot be a pointer to a interface or a slice of interface values", t, f.Name))
			default: // E.g., interface{}
				// TODO: Make this faster?
				dfi.discard = func(src pointer) {
					su := src.asPointerTo(tf).Elem()
					if !su.IsNil() {
						sv := su.Elem().Elem().Field(0)
						if sv.Kind() == reflect.Ptr && sv.IsNil() {
							return
						}
						switch sv.Type().Kind() {
						case reflect.Ptr: // Proto struct (e.g., *T)
							DiscardUnknown(sv.Interface().(Message))
						}
					}
				}
			}
		default:
			continue
		}
		di.fields = append(di.fields, dfi)
	}

	di.unrecognized = invalidField
	if f, ok := t.FieldByName("XXX_unrecognized"); ok {
		if f.Type != reflect.TypeOf([]byte{}) {
			panic("expected XXX_unrecognized to be of type []byte")
		}
		di.unrecognized = toField(&f)
	}

	atomic.StoreInt32(&di.initialized, 1)
}
func discardLegacy(m Message) {
	v := reflect.ValueOf(m)
	if v.Kind() != reflect.Ptr || v.IsNil() {
		return
	}
	v = v.Elem()
	if v.Kind() != reflect.Struct {
		return
	}
	t := v.Type()

	for i := 0; i < v.NumField(); i++ {
		f := t.Field(i)
		if strings.HasPrefix(f.Name, "XXX_") {
			continue
		}
		vf := v.Field(i)
		tf := f.Type

		// Unwrap tf to get its most basic type.
		var isPointer, isSlice bool
		if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 {
			isSlice = true
			tf = tf.Elem()
		}
		if tf.Kind() == reflect.Ptr {
			isPointer = true
			tf = tf.Elem()
		}
		if isPointer && isSlice && tf.Kind() != reflect.Struct {
			panic(fmt.Sprintf("%T.%s cannot be a slice of pointers to primitive types", m, f.Name))
		}

		switch tf.Kind() {
		case reflect.Struct:
			switch {
			case !isPointer:
				panic(fmt.Sprintf("%T.%s cannot be a direct struct value", m, f.Name))
			case isSlice: // E.g., []*pb.T
				for j := 0; j < vf.Len(); j++ {
					discardLegacy(vf.Index(j).Interface().(Message))
				}
			default: // E.g., *pb.T
				discardLegacy(vf.Interface().(Message))
			}
		case reflect.Map:
			switch {
			case isPointer || isSlice:
				panic(fmt.Sprintf("%T.%s cannot be a pointer to a map or a slice of map values", m, f.Name))
			default: // E.g., map[K]V
				tv := vf.Type().Elem()
				if tv.Kind() == reflect.Ptr && tv.Implements(protoMessageType) { // Proto struct (e.g., *T)
					for _, key := range vf.MapKeys() {
						val := vf.MapIndex(key)
						discardLegacy(val.Interface().(Message))
					}
				}
			}
		case reflect.Interface:
			// Must be a oneof field.
			switch {
			case isPointer || isSlice:
				panic(fmt.Sprintf("%T.%s cannot be a pointer to a interface or a slice of interface values", m, f.Name))
			default: // E.g., test_proto.isCommunique_Union interface
				if !vf.IsNil() && f.Tag.Get("protobuf_oneof") != "" {
					vf = vf.Elem() // E.g., *test_proto.Communique_Msg
					if !vf.IsNil() {
						vf = vf.Elem()   // E.g., test_proto.Communique_Msg
						vf = vf.Field(0) // E.g., Proto struct (e.g., *T) or primitive value
						if vf.Kind() == reflect.Ptr {
							discardLegacy(vf.Interface().(Message))
						}
					}
				}
			}
		}
	}

	if vf := v.FieldByName("XXX_unrecognized"); vf.IsValid() {
		if vf.Type() != reflect.TypeOf([]byte{}) {
			panic("expected XXX_unrecognized to be of type []byte")
		}
		vf.Set(reflect.ValueOf([]byte(nil)))
	}

	// For proto2 messages, only discard unknown fields in message extensions
	// that have been accessed via GetExtension.
	if em, err := extendable(m); err == nil {
		// Ignore lock since discardLegacy is not concurrency safe.
		emm, _ := em.extensionsRead()
		for _, mx := range emm {
			if m, ok := mx.value.(Message); ok {
				discardLegacy(m)
			}
		}
	}
}


@ -1,203 +1,203 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

package proto

/*
 * Routines for encoding data into the wire format for protocol buffers.
 */

import (
	"errors"
	"reflect"
)
var (
	// errRepeatedHasNil is the error returned if Marshal is called with
	// a struct with a repeated field containing a nil element.
	errRepeatedHasNil = errors.New("proto: repeated field has nil element")

	// errOneofHasNil is the error returned if Marshal is called with
	// a struct with a oneof field containing a nil element.
	errOneofHasNil = errors.New("proto: oneof field has nil value")

	// ErrNil is the error returned if Marshal is called with nil.
	ErrNil = errors.New("proto: Marshal called with nil")

	// ErrTooLarge is the error returned if Marshal is called with a
	// message that encodes to >2GB.
	ErrTooLarge = errors.New("proto: message encodes to over 2 GB")
)
// The fundamental encoders that put bytes on the wire.
// Those that take integer types all accept uint64 and are
// therefore of type valueEncoder.

const maxVarintBytes = 10 // maximum length of a varint

// EncodeVarint returns the varint encoding of x.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
// Not used by the package itself, but helpful to clients
// wishing to use the same encoding.
func EncodeVarint(x uint64) []byte {
	var buf [maxVarintBytes]byte
	var n int
	for n = 0; x > 127; n++ {
		buf[n] = 0x80 | uint8(x&0x7F)
		x >>= 7
	}
	buf[n] = uint8(x)
	n++
	return buf[0:n]
}
// EncodeVarint writes a varint-encoded integer to the Buffer.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func (p *Buffer) EncodeVarint(x uint64) error {
	for x >= 1<<7 {
		p.buf = append(p.buf, uint8(x&0x7f|0x80))
		x >>= 7
	}
	p.buf = append(p.buf, uint8(x))
	return nil
}
// SizeVarint returns the varint encoding size of an integer.
func SizeVarint(x uint64) int {
	switch {
	case x < 1<<7:
		return 1
	case x < 1<<14:
		return 2
	case x < 1<<21:
		return 3
	case x < 1<<28:
		return 4
	case x < 1<<35:
		return 5
	case x < 1<<42:
		return 6
	case x < 1<<49:
		return 7
	case x < 1<<56:
		return 8
	case x < 1<<63:
		return 9
	}
	return 10
}
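A sketch tying the two helpers together: EncodeVarint produces the bytes, and SizeVarint predicts their count without allocating:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	b := proto.EncodeVarint(300)
	fmt.Printf("% x\n", b)             // ac 02
	fmt.Println(proto.SizeVarint(300)) // 2, matches len(b)
}
```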
// EncodeFixed64 writes a 64-bit integer to the Buffer.
// This is the format for the
// fixed64, sfixed64, and double protocol buffer types.
func (p *Buffer) EncodeFixed64(x uint64) error {
	p.buf = append(p.buf,
		uint8(x),
		uint8(x>>8),
		uint8(x>>16),
		uint8(x>>24),
		uint8(x>>32),
		uint8(x>>40),
		uint8(x>>48),
		uint8(x>>56))
	return nil
}
// EncodeFixed32 writes a 32-bit integer to the Buffer.
// This is the format for the
// fixed32, sfixed32, and float protocol buffer types.
func (p *Buffer) EncodeFixed32(x uint64) error {
	p.buf = append(p.buf,
		uint8(x),
		uint8(x>>8),
		uint8(x>>16),
		uint8(x>>24))
	return nil
}
// EncodeZigzag64 writes a zigzag-encoded 64-bit integer
// to the Buffer.
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) EncodeZigzag64(x uint64) error {
	// use signed number to get arithmetic right shift.
	return p.EncodeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}

// EncodeZigzag32 writes a zigzag-encoded 32-bit integer
// to the Buffer.
// This is the format used for the sint32 protocol buffer type.
func (p *Buffer) EncodeZigzag32(x uint64) error {
	// use signed number to get arithmetic right shift.
	return p.EncodeVarint(uint64((uint32(x) << 1) ^ uint32((int32(x) >> 31))))
}
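The encoder side of the zigzag mapping shown earlier. Note that generated code passes the signed value reinterpreted as uint64; Bytes is the Buffer accessor defined elsewhere in this vendored package:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	p := proto.NewBuffer(nil)
	_ = p.EncodeZigzag64(uint64(int64(-1))) // -1 zigzags to 1
	_ = p.EncodeZigzag64(uint64(int64(1)))  // 1 zigzags to 2
	fmt.Printf("% x\n", p.Bytes())          // 01 02
}
```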
// EncodeRawBytes writes a count-delimited byte buffer to the Buffer.
// This is the format used for the bytes protocol buffer
// type and for embedded messages.
func (p *Buffer) EncodeRawBytes(b []byte) error {
	p.EncodeVarint(uint64(len(b)))
	p.buf = append(p.buf, b...)
	return nil
}

// EncodeStringBytes writes an encoded string to the Buffer.
// This is the format used for the proto2 string type.
func (p *Buffer) EncodeStringBytes(s string) error {
	p.EncodeVarint(uint64(len(s)))
	p.buf = append(p.buf, s...)
	return nil
}
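The encoder side of the length-delimited form decoded earlier: a varint length prefix followed by the raw bytes:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	p := proto.NewBuffer(nil)
	_ = p.EncodeStringBytes("hello")
	fmt.Printf("% x\n", p.Bytes()) // 05 68 65 6c 6c 6f
}
```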
// Marshaler is the interface representing objects that can marshal themselves.
type Marshaler interface {
	Marshal() ([]byte, error)
}

// EncodeMessage writes the protocol buffer to the Buffer,
// prefixed by a varint-encoded length.
func (p *Buffer) EncodeMessage(pb Message) error {
	siz := Size(pb)
	p.EncodeVarint(uint64(siz))
	return p.Marshal(pb)
}
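EncodeMessage is the writer half of DecodeMessage from earlier in this diff, producing a varint length prefix followed by the marshaled body, which is useful for framing several messages in one stream. A round-trip sketch with the hypothetical generated type pb.Example; Bytes is defined elsewhere in this vendored package:

```go
// roundTrip is a minimal sketch, assuming pb.Example is a
// protoc-gen-go generated message type.
func roundTrip(in *pb.Example) (*pb.Example, error) {
	p := proto.NewBuffer(nil)
	if err := p.EncodeMessage(in); err != nil { // length prefix + body
		return nil, err
	}
	out := new(pb.Example)
	if err := proto.NewBuffer(p.Bytes()).DecodeMessage(out); err != nil {
		return nil, err
	}
	return out, nil
}
```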
// All protocol buffer fields are nillable, but be careful.
func isNil(v reflect.Value) bool {
	switch v.Kind() {
	case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
		return v.IsNil()
	}
	return false
}
