Compare commits

...

297 commits

Author SHA1 Message Date
Matthias Rampke 58769c7b4d
Release 0.26.1
maintenance release, rolling up all the version updates.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2024-03-22 16:04:13 +00:00
Matthias Rampke 9c4dfce4e0
Merge pull request #550 from prometheus/mr/shorten-readme
Remove `--help` output from README
2024-03-22 15:57:54 +00:00
Matthias Rampke 29c77f407c
Remove --help output from README
I have no idea whether it is even up to date. The README file was too large for
Docker Hub's limits; this removal brings us just under the [25KB
limit](https://github.com/docker/hub-feedback/issues/238).

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2024-03-22 15:52:23 +00:00
Matthias Rampke 233d74d0f7
Merge pull request #549 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2024-03-22 15:41:43 +00:00
prombot 3f985fa9ac Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2024-03-22 15:28:27 +00:00
Matthias Rampke 479379a908
Merge pull request #548 from prometheus/dependabot/go_modules/google.golang.org/protobuf-1.33.0
Bump google.golang.org/protobuf from 1.32.0 to 1.33.0
2024-03-22 15:20:11 +00:00
Matthias Rampke 6e02dfcaae
Merge pull request #547 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2024-03-22 15:12:42 +00:00
dependabot[bot] 2c4ffa9620
Bump google.golang.org/protobuf from 1.32.0 to 1.33.0
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-13 23:37:16 +00:00
prombot ac0ef06e65 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2024-03-04 17:48:01 +00:00
Matthias Rampke 337849188c
Merge pull request #545 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2024-03-03 22:03:49 +01:00
prombot 4b21c8e662 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2024-03-03 17:47:54 +00:00
Matthias Rampke 91ccdb962a
Merge pull request #543 from prometheus/dependabot/go_modules/github.com/prometheus/client_golang-1.19.0
Bump github.com/prometheus/client_golang from 1.18.0 to 1.19.0
2024-03-03 14:57:34 +00:00
dependabot[bot] d93cb36b75
Bump github.com/prometheus/client_golang from 1.18.0 to 1.19.0
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.18.0 to 1.19.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/v1.19.0/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.18.0...v1.19.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-03 14:55:12 +00:00
Matthias Rampke 6483ce0ffe
Merge pull request #542 from prometheus/dependabot/go_modules/github.com/prometheus/client_model-0.6.0
Bump github.com/prometheus/client_model from 0.5.0 to 0.6.0
2024-03-03 14:53:30 +00:00
Matthias Rampke d5473a0f96
Merge pull request #540 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2024-03-03 14:53:17 +00:00
dependabot[bot] b9ee6639ce
Bump github.com/prometheus/client_model from 0.5.0 to 0.6.0
Bumps [github.com/prometheus/client_model](https://github.com/prometheus/client_model) from 0.5.0 to 0.6.0.
- [Release notes](https://github.com/prometheus/client_model/releases)
- [Commits](https://github.com/prometheus/client_model/compare/v0.5.0...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_model
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-01 10:58:30 +00:00
prombot 80810d614e Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2024-02-23 17:47:48 +00:00
Ben Kochie cae614397a
Merge pull request #534 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2024-02-22 22:47:33 +01:00
Ben Kochie e729f64ef3
Merge pull request #537 from prometheus/dependabot/go_modules/github.com/prometheus/common-0.46.0
Bump github.com/prometheus/common from 0.45.0 to 0.46.0
2024-02-22 22:47:16 +01:00
dependabot[bot] adeacdd760
Bump github.com/prometheus/common from 0.45.0 to 0.46.0
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.45.0 to 0.46.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.45.0...v0.46.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-01 10:15:22 +00:00
prombot a19a729111 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2024-01-16 17:47:57 +00:00
Ben Kochie 4c268bcdf7
Merge pull request #529 from prometheus/dependabot/go_modules/github.com/prometheus/exporter-toolkit-0.11.0
Bump github.com/prometheus/exporter-toolkit from 0.10.0 to 0.11.0
2024-01-16 15:41:46 +01:00
dependabot[bot] 8ec3225483
Bump github.com/prometheus/exporter-toolkit from 0.10.0 to 0.11.0
Bumps [github.com/prometheus/exporter-toolkit](https://github.com/prometheus/exporter-toolkit) from 0.10.0 to 0.11.0.
- [Release notes](https://github.com/prometheus/exporter-toolkit/releases)
- [Changelog](https://github.com/prometheus/exporter-toolkit/blob/master/CHANGELOG.md)
- [Commits](https://github.com/prometheus/exporter-toolkit/compare/v0.10.0...v0.11.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/exporter-toolkit
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-15 22:03:46 +00:00
Ben Kochie 7d28d2145f
Merge pull request #527 from prometheus/dependabot/go_modules/golang.org/x/crypto-0.17.0
Bump golang.org/x/crypto from 0.14.0 to 0.17.0
2024-01-15 23:02:23 +01:00
dependabot[bot] 48b0038897
Bump golang.org/x/crypto from 0.14.0 to 0.17.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-15 21:53:36 +00:00
Matthias Rampke abc3a1f95c
Merge pull request #533 from prometheus/mr/issue-523/test-existing
Add more tests for gauge increment/decrement
2024-01-15 22:52:05 +01:00
Ben Kochie 4a0e88e27b
Merge pull request #528 from prometheus/dependabot/go_modules/github.com/prometheus/client_golang-1.18.0
Bump github.com/prometheus/client_golang from 1.17.0 to 1.18.0
2024-01-15 22:51:07 +01:00
Matthias Rampke 7188ed4292
Add more tests for gauge increment/decrement
Validate the behavior for relative gauge events as per [the statsd metric type documentation](https://github.com/statsd/statsd/blob/master/docs/metric_types.md#gauges).

Reaffirms #523.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2024-01-15 22:44:07 +01:00
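
For reference, the relative gauge syntax these tests cover, as given in the linked statsd documentation (the metric name is illustrative):

gaugor:333|g    (set the gauge to 333)
gaugor:-10|g    (decrement by 10)
gaugor:+4|g     (increment by 4)
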
dependabot[bot] c48fa1bfa7
Bump github.com/prometheus/client_golang from 1.17.0 to 1.18.0
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.17.0 to 1.18.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.17.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-15 21:35:46 +00:00
Ben Kochie 5d3e63295a
Merge pull request #526 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2024-01-15 22:35:30 +01:00
Matthias Rampke 8fdc626bfc
Merge pull request #532 from prometheus/superq/update_build
Update build
2024-01-15 21:25:53 +01:00
SuperQ 8adea73c00
Update build
* Update Go to 1.21.
* Clean up unnecessary build flags.
* Update minimum Go version to 1.20.

Fixes: https://github.com/prometheus/statsd_exporter/issues/531

Signed-off-by: SuperQ <superq@gmail.com>
2024-01-09 21:07:42 +01:00
prombot a853c2b0b2 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-12-07 17:47:56 +00:00
Matthias Rampke 2c7fd1edd4
Release 0.26.0
with documentation for #521 and changelog.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-12-06 09:52:31 +00:00
Matthias Rampke 1e89c26ad6
Merge pull request #521 from rabenhorst/mapping-honor-labels
Add configuration for keeping original labels in mappings
2023-12-06 09:44:26 +00:00
Matthias Rampke 413f482751
Merge pull request #520 from prometheus/dependabot/go_modules/github.com/prometheus/common-0.45.0
Bump github.com/prometheus/common from 0.44.0 to 0.45.0
2023-12-06 09:42:32 +00:00
Matthias Rampke f9d238dbdd
Merge pull request #524 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-12-06 09:42:19 +00:00
Matthias Rampke ef104085db
Merge pull request #525 from prometheus/dependabot/go_modules/github.com/alecthomas/kingpin/v2-2.4.0
Bump github.com/alecthomas/kingpin/v2 from 2.3.2 to 2.4.0
2023-12-06 09:42:06 +00:00
dependabot[bot] 0750ae0346
Bump github.com/alecthomas/kingpin/v2 from 2.3.2 to 2.4.0
Bumps [github.com/alecthomas/kingpin/v2](https://github.com/alecthomas/kingpin) from 2.3.2 to 2.4.0.
- [Release notes](https://github.com/alecthomas/kingpin/releases)
- [Commits](https://github.com/alecthomas/kingpin/compare/v2.3.2...v2.4.0)

---
updated-dependencies:
- dependency-name: github.com/alecthomas/kingpin/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-01 10:51:20 +00:00
prombot 51cc8e4659 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-11-20 17:48:20 +00:00
dependabot[bot] a9b63f9e5f
Bump github.com/prometheus/common from 0.44.0 to 0.45.0
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.44.0 to 0.45.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.44.0...v0.45.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-20 09:03:36 +00:00
Matthias Rampke 736b8b676b
Merge pull request #522 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-11-20 09:03:00 +00:00
Matthias Rampke 5b8dfc866d
Merge pull request #519 from prometheus/dependabot/go_modules/github.com/prometheus/client_model-0.5.0
Bump github.com/prometheus/client_model from 0.4.1-0.20230718164431-9a2bf3000d16 to 0.5.0
2023-11-20 09:02:17 +00:00
prombot 3c4f1af3ff Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-11-03 17:48:57 +00:00
Sebastian Rabenhorst 4392beadc3
Added config to honor labels in mappings
Signed-off-by: Sebastian Rabenhorst <sebastian.rabenhorst@shopify.com>
2023-11-02 14:15:07 +01:00
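
A minimal sketch of this option, assuming the mapping key added in #521 is `honor_labels` as documented in the README (metric and mapping names are illustrative):

mappings:
- match: "airflow.*"
  name: "airflow_events"
  honor_labels: true

As I understand the feature, with honor_labels set the labels already attached to the incoming event (for example dogstatsd tags) are kept instead of being overridden by the mapping.
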
dependabot[bot] 22381d60c5
Bump github.com/prometheus/client_model
Bumps [github.com/prometheus/client_model](https://github.com/prometheus/client_model) from 0.4.1-0.20230718164431-9a2bf3000d16 to 0.5.0.
- [Release notes](https://github.com/prometheus/client_model/releases)
- [Commits](https://github.com/prometheus/client_model/commits/v0.5.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_model
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-01 10:13:09 +00:00
Matthias Rampke 56fe4f51cd
Fix up changelog for 0.25.0
- thank the contributors
- fix typo

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-10-24 08:18:11 +00:00
Matthias Rampke 3855b3e8c9
Merge pull request #518 from prometheus/mr/release-0.25.0
Release 0.25.0
2023-10-23 18:29:35 +02:00
Matthias Rampke 3ea12e444f
Release 0.25.0
with changelog. Call out that the monitoring for dropped events needs to
adapt to continue to be useful.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-10-23 16:25:08 +00:00
Matthias Rampke fb1a02cf4a
Merge pull request #513 from prometheus/dependabot/go_modules/github.com/prometheus/client_golang-1.17.0
Bump github.com/prometheus/client_golang from 1.16.0 to 1.17.0
2023-10-23 18:19:55 +02:00
dependabot[bot] 8373cb382e
Bump github.com/prometheus/client_golang from 1.16.0 to 1.17.0
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.16.0 to 1.17.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.16.0...v1.17.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-23 16:06:10 +00:00
Matthias Rampke 3d8cac0b30
Merge pull request #514 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-10-23 18:05:47 +02:00
Matthias Rampke 2337d38124
Merge pull request #516 from prometheus/dependabot/go_modules/golang.org/x/net-0.17.0
Bump golang.org/x/net from 0.10.0 to 0.17.0
2023-10-23 18:04:27 +02:00
Matthias Rampke 12d14bb1d8
Merge pull request #511 from kullanici0606/master
Performance improvement - process udp packets in separate goroutine in order to minimize packet drops
2023-10-23 18:04:04 +02:00
kullanici0606 04ed8c69a9
fix typo
Co-authored-by: Matthias Rampke <matthias.rampke@googlemail.com>
Signed-off-by: kullanici0606 <35795498+kullanici0606@users.noreply.github.com>
2023-10-23 10:08:22 +03:00
kullanici0606 c246633aec add counter for dropped UDP packets. Remove unnecessary and misleading comment
Signed-off-by: kullanici0606 <yakup.turgut@btk.gov.tr>
2023-10-23 10:01:52 +03:00
dependabot[bot] 8e326ef3f3
Bump golang.org/x/net from 0.10.0 to 0.17.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.10.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.10.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 22:30:37 +00:00
prombot b98a84c3f6 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-10-02 17:48:56 +00:00
Matthias Rampke 5a947bd1a3
Merge pull request #512 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-09-26 20:51:09 +02:00
prombot ee8b05b646 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-09-25 17:49:17 +00:00
Matthias Rampke 3dcff5fed5
Merge pull request #509 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-09-25 15:50:54 +02:00
kullanici0606 aaaf26c074 avoid making copies of slices since we need to minimize the time spent in packet read in order not to drop packets
Signed-off-by: kullanici0606 <yakup.turgut@btk.gov.tr>
2023-09-20 10:37:53 +03:00
kullanici0606 b0c6d983e1 fix udp metrics increment
Signed-off-by: kullanici0606 <yakup.turgut@btk.gov.tr>
2023-09-20 10:37:53 +03:00
kullanici0606 ccb3eb6277 remove unnecessary log
Signed-off-by: kullanici0606 <yakup.turgut@btk.gov.tr>
2023-09-20 10:37:53 +03:00
kullanici0606 03255bf218 process udp packets in a separate goroutine so that udp packets are not dropped
Signed-off-by: kullanici0606 <yakup.turgut@btk.gov.tr>
2023-09-20 10:37:53 +03:00
Matthias Rampke 871e2d8df1
Merge pull request #510 from sumeshpremraj/spremraj/improve-network-debug-logging
Convert line payloads to string for debug logging
2023-07-10 22:39:30 +02:00
Sumesh Premraj 5123a1caa6 Convert line payload debug logs to string
Signed-off-by: Sumesh Premraj <sumeshpremraj@gmail.com>
2023-07-10 11:26:39 +02:00
prombot 3f8b98f180 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-07-07 17:48:32 +00:00
Matthias Rampke 886c18f947
Merge pull request #508 from prometheus/dependabot/go_modules/github.com/prometheus/client_golang-1.16.0
Bump github.com/prometheus/client_golang from 1.15.1 to 1.16.0
2023-07-07 15:42:49 +02:00
Matthias Rampke c76a152291
Merge pull request #507 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-07-07 15:42:13 +02:00
dependabot[bot] 69bc83fc47
Bump github.com/prometheus/client_golang from 1.15.1 to 1.16.0
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.15.1 to 1.16.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.15.1...v1.16.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-01 10:07:32 +00:00
prombot b1094b9061 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-06-27 17:49:01 +00:00
Matthias Rampke 8ffcdb3d5e
Merge pull request #506 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-06-19 06:56:45 +00:00
prombot 029cd6bca5 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-06-17 17:48:56 +00:00
Matthias Rampke e91ce201d5
Release 0.24.0
with changelogs for #504 and #499

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-06-02 07:49:36 +00:00
Matthias Rampke ddf7e0d31d
Merge pull request #504 from prometheus/superq/landing-page
Use exporter-toolkit landing page
2023-06-02 09:47:26 +02:00
Matthias Rampke 80e119a781
Merge pull request #499 from ebracho/master
Add `scale` field to mapping config
2023-06-02 09:46:01 +02:00
Matthias Rampke 191b07124c
Bump to 0.23.3
Accidentally based 0.23.2 off an older commit. Trying again.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-06-02 07:43:20 +00:00
Matthias Rampke f73cc98fd7
Release 0.23.2
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-06-02 07:39:14 +00:00
SuperQ 7f5125abdd
Use exporter-toolkit landing page
Use the exporter-toolkit to render the exporter homepage.
* Also allow metricsEndpoint to be on `/`.

Signed-off-by: SuperQ <superq@gmail.com>
2023-06-02 08:43:57 +02:00
Eddie Bracho 61f201130e document scale parameter
Signed-off-by: Eddie Bracho <eddiebracho@gmail.com>
2023-06-01 16:35:05 -07:00
Matthias Rampke 2813a69578
Merge pull request #501 from prometheus/dependabot/go_modules/github.com/prometheus/common-0.44.0
Bump github.com/prometheus/common from 0.42.0 to 0.44.0
2023-06-01 19:39:18 +00:00
dependabot[bot] 2a8dab9248
Bump github.com/prometheus/common from 0.42.0 to 0.44.0
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.42.0 to 0.44.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.42.0...v0.44.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-01 19:22:13 +00:00
Matthias Rampke 5daeef52d8
Merge pull request #502 from prometheus/dependabot/go_modules/github.com/prometheus/client_model-0.4.0
Bump github.com/prometheus/client_model from 0.3.0 to 0.4.0
2023-06-01 19:21:24 +00:00
dependabot[bot] c87a7ac5eb
Bump github.com/prometheus/client_model from 0.3.0 to 0.4.0
Bumps [github.com/prometheus/client_model](https://github.com/prometheus/client_model) from 0.3.0 to 0.4.0.
- [Release notes](https://github.com/prometheus/client_model/releases)
- [Commits](https://github.com/prometheus/client_model/compare/v0.3.0...v0.4.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_model
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-01 19:06:52 +00:00
Matthias Rampke d747c05579
Merge pull request #503 from prometheus/dependabot/go_modules/github.com/prometheus/client_golang-1.15.1
Bump github.com/prometheus/client_golang from 1.14.0 to 1.15.1
2023-06-01 19:04:35 +00:00
Matthias Rampke 08c4bccb64
Update maintainer email
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-06-01 18:54:30 +00:00
dependabot[bot] a0f4bcff5c
Bump github.com/prometheus/client_golang from 1.14.0 to 1.15.1
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.14.0 to 1.15.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.14.0...v1.15.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-01 10:59:29 +00:00
Eddie Bracho fdc8b5f852 Add scale field to mapping config
This field allows configuring unit conversions in mappings. For example:

mappings:
- match: foo.latency_ms
  name: foo_latency_seconds
  scale: 0.001
- match: bar.processed_kb
  name: bar_processed_bytes
  scale: 1024

Signed-off-by: Eddie Bracho <eddiebracho@gmail.com>
2023-05-29 19:54:43 -07:00
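
A worked illustration of the intent (my arithmetic, not from the commit): with the first mapping, a foo.latency_ms observation of 250 is exported on foo_latency_seconds as 250 * 0.001 = 0.25; with the second, a bar.processed_kb value of 3 becomes 3 * 1024 = 3072 bytes.
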
Matthias Rampke c3752cf30f
Merge pull request #493 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-04-17 12:43:09 +00:00
prombot a5b0ed182c Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-03-21 17:49:21 +00:00
Matthias Rampke 972cb24750
Merge pull request #491 from grafana/gaantunes/mapperConfigDefaults-public
Making mapperConfigDefaults public
2023-03-16 17:56:08 +00:00
Gabriel Antunes 24e2fc9980 Making mapperConfigDefaults and metricObjective public, so that Grafana Agent can instantiate MetricMapper without using InitFromYAMLString directly.
Signed-off-by: Gabriel Antunes <g.amaral.antunes@gmail.com>
2023-03-15 16:34:31 -06:00
Matthias Rampke aa341da7c4
Merge pull request #492 from prometheus/dependabot/go_modules/google.golang.org/protobuf-1.29.1
Bump google.golang.org/protobuf from 1.29.0 to 1.29.1
2023-03-15 19:58:33 +00:00
dependabot[bot] fb5d639dbf
Bump google.golang.org/protobuf from 1.29.0 to 1.29.1
Bumps [google.golang.org/protobuf](https://github.com/protocolbuffers/protobuf-go) from 1.29.0 to 1.29.1.
- [Release notes](https://github.com/protocolbuffers/protobuf-go/releases)
- [Changelog](https://github.com/protocolbuffers/protobuf-go/blob/master/release.bash)
- [Commits](https://github.com/protocolbuffers/protobuf-go/compare/v1.29.0...v1.29.1)

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-14 23:06:26 +00:00
Matthias Rampke 14d18d8e70
Release 0.23.1
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-03-08 18:50:23 +00:00
Matthias Rampke bbba726e66
Merge pull request #490 from prometheus/mr/update-orb
Update Prometheus common orb
2023-03-08 18:47:59 +00:00
Matthias Rampke f0e9a8e2f8
Merge pull request #489 from prometheus/mr/update-kingpin
Update Go module dependencies
2023-03-08 18:47:48 +00:00
Matthias Rampke 0bdeac2162
Update Prometheus common orb
to fix issues with binfmt after the CircleCI infrastructure migration

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-03-08 18:21:13 +00:00
Matthias Rampke 9a980af22f
Update Go module dependencies
and switch to github.com/alecthomas/kingpin/v2 matching the changes in
prometheus/common

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2023-03-08 18:13:33 +00:00
Matthias Rampke 052676c42d
Merge pull request #485 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-02-20 22:27:47 +01:00
prombot 21a1763baf Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-02-20 17:48:46 +00:00
Matthias Rampke 832c5a5316
Merge pull request #484 from attenuation/master
Fix signalfx tag doc link
2023-01-27 14:53:50 +00:00
Jun Ouyang e22a86f5f9 fix 4040 link
Signed-off-by: Jun Ouyang <ouyangjun1999@gmail.com>
2023-01-27 20:18:43 +08:00
Matthias Rampke d73608eb14
Merge pull request #483 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-01-26 09:33:59 +00:00
prombot 9303da56a3 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2023-01-20 17:48:32 +00:00
Matthias Rampke 2caee689db
Merge pull request #478 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2023-01-05 09:11:02 +00:00
Matthias Rampke 07cf2fa0b1
Merge pull request #479 from prometheus/dependabot/go_modules/github.com/prometheus/common-0.39.0
Bump github.com/prometheus/common from 0.37.0 to 0.39.0
2023-01-05 09:10:47 +00:00
dependabot[bot] f6b2a21fbd
Bump github.com/prometheus/common from 0.37.0 to 0.39.0
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.37.0 to 0.39.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.37.0...v0.39.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-01 10:02:07 +00:00
prombot c2447be23b Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-12-22 17:48:49 +00:00
Matthias Rampke b8ace4ae2c
Merge pull request #477 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-12-17 09:07:23 +00:00
prombot fea420de84 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-12-12 17:48:43 +00:00
Matthias Rampke 9c6e245521
Changelog for #474 & release
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-12-07 10:07:56 +00:00
Matthias Rampke f6ab38f75e
Merge pull request #474 from pedro-stanaka/feat/update-golang-prom-client-native-histograms
Update client_golang to enable native histograms
2022-12-07 09:59:01 +00:00
Pedro Tanaka 559fc9c1af
Fixing error in example YAML
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-12-06 09:51:45 +01:00
Pedro Tanaka 507b0a20dd
Adding docs for new options
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-12-06 09:49:25 +01:00
Pedro Tanaka f77011fd34
Allow creation of native histograms
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-12-06 09:49:24 +01:00
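
A hedged sketch of opting a mapping into native histograms after this change. The option names below are my assumption, based on client_golang's terminology (native_histogram_bucket_factor, native_histogram_max_buckets); the docs commit above is authoritative:

mappings:
- match: "foo.latency"
  observer_type: histogram
  histogram_options:
    native_histogram_bucket_factor: 1.1
    native_histogram_max_buckets: 256
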
Ben Kochie fbfa209c87
Merge pull request #476 from prometheus/superq/bump_go
Update Go
2022-11-29 07:28:08 -08:00
SuperQ fc1d834d59
Update Go
* Update to Go 1.19.
* Update some Go modules.

Signed-off-by: SuperQ <superq@gmail.com>
2022-11-29 15:33:57 +01:00
Matthias Rampke 81884eb3d8
Merge pull request #473 from prometheus/dependabot/go_modules/github.com/prometheus/client_model-0.3.0
Bump github.com/prometheus/client_model from 0.2.0 to 0.3.0
2022-11-07 18:00:07 +01:00
dependabot[bot] f231009802
Bump github.com/prometheus/client_model from 0.2.0 to 0.3.0
Bumps [github.com/prometheus/client_model](https://github.com/prometheus/client_model) from 0.2.0 to 0.3.0.
- [Release notes](https://github.com/prometheus/client_model/releases)
- [Commits](https://github.com/prometheus/client_model/compare/v0.2.0...v0.3.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_model
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-01 10:17:54 +00:00
Matthias Rampke dbdf4d9349
Merge pull request #468 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-09-22 23:26:45 +01:00
Matthias Rampke c74db12eaf
Merge pull request #470 from prometheus/matthiasr-patch-1
Changelog for #469
2022-09-22 23:26:16 +01:00
Matthias Rampke fd4ea19c21
Update VERSION
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-09-22 23:25:53 +01:00
Matthias Rampke 1bcb9d1a97
Changelog for #469
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-09-22 23:24:44 +01:00
Matthias Rampke a92aa4556e
Merge pull request #469 from don-philipe/Print-version-to-stdout
Print version, help etc to stdout
2022-09-22 23:22:24 +01:00
don-philipe 00d82daaf2
Print version, help etc to stdout
The default behavior of the kingpin library is to print to stderr. This is changed by using the os.Stdout writer, so information like the version and help text is now printed to stdout.

Signed-off-by: don-philipe <don_philipe@gmx.de>
2022-09-21 23:05:42 +02:00
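
A small practical consequence (shell example of mine, not from the commit): after this change, `statsd_exporter --version` writes to stdout, so the output can be piped or captured without redirecting stderr.
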
prombot 65e7877ab3 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-09-21 17:54:26 +00:00
Matthias Rampke aecad1a2fa
Release 0.22.8
with changelog for #461 & #463

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-09-13 14:48:04 +00:00
Matthias Rampke 5d28e6d898
Merge pull request #463 from prometheus/dependabot/go_modules/github.com/prometheus/client_golang-1.13.0
Bump github.com/prometheus/client_golang from 1.12.2 to 1.13.0
2022-09-13 14:44:33 +00:00
Matthias Rampke edaa33860e
Merge pull request #461 from pedro-stanaka/fix/histogram-and-counters-clash
Counter with histogram suffix will clash with histogram
2022-09-13 14:43:11 +00:00
dependabot[bot] 89e7668d8d
Bump github.com/prometheus/client_golang from 1.12.2 to 1.13.0
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.12.2 to 1.13.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.12.2...v1.13.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-01 10:15:29 +00:00
Pedro Tanaka da06fabcb6
Use same error message
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-08-29 08:53:50 +02:00
Pedro Tanaka 83cec219c8
Checking conflict for gauges as well and refactoring into function
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-08-29 08:51:09 +02:00
Pedro Tanaka 31b05ef232
Fixing typo
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-08-26 11:58:41 +02:00
Pedro Tanaka c41fa101f5
Bring back test cases after rebase problem
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-08-26 11:57:57 +02:00
Pedro Tanaka 7afa060ce4
Fixing clash problem when a counter clashes with a previously registered histogram
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-08-26 11:40:22 +02:00
Pedro Tanaka 30c3e31574
Improving isolation of metric conflict test
Signed-off-by: Pedro Tanaka <pedro.stanaka@gmail.com>
2022-08-26 11:40:18 +02:00
Matthias Rampke e85098da3f
Merge pull request #455 from prometheus/dependabot/go_modules/github.com/prometheus/common-0.37.0
Bump github.com/prometheus/common from 0.35.0 to 0.37.0
2022-08-06 13:17:18 +00:00
dependabot[bot] d0585ec62b
Bump github.com/prometheus/common from 0.35.0 to 0.37.0
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.35.0 to 0.37.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.35.0...v0.37.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-01 10:11:13 +00:00
Matthias Rampke af84364004
Merge pull request #453 from prometheus/mr/guard-logger-fsm
mapper: Make sure we have a logger before backtracking check
2022-07-29 15:06:37 +00:00
Matthias Rampke 1e801bc499
Merge pull request #454 from inosato/replace-ioutil
Remove ioutil
2022-07-29 15:06:24 +00:00
inosato b43a60e9c8 Remove ioutil
Signed-off-by: inosato <si17_21@yahoo.co.jp>
2022-07-29 23:39:29 +09:00
Matthias Rampke bc43c5606d
mapper: Make sure we have a logger before backtracking check
The FSM backtracking detection issues a log line. Make sure there is a
non-nil logger before doing that.

Avoids the crash in prometheus/graphite_exporter#197.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-07-29 11:46:57 +00:00
Matthias Rampke c2505cf91e
Release 0.22.7
more housekeeping!

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-07-08 13:39:38 +00:00
Matthias Rampke 8f351a5577
Merge pull request #450 from prometheus/superq/update_build
Update build
2022-07-08 13:36:27 +00:00
SuperQ b1068d058a
Update build
* Update Go to 1.18.
* Update Go modules to 1.17 format.
* Enable dependabot.
* Add yamllint config.

Signed-off-by: SuperQ <superq@gmail.com>
2022-07-08 15:25:18 +02:00
Matthias Rampke 3b9ef1fef5
Update changelog & release 0.22.6
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-07-08 12:51:22 +00:00
Matthias Rampke 9d4eeda5b1
Merge pull request #449 from prometheus/mr/dependency-update
housekeeping: update dependencies
2022-07-08 12:48:37 +00:00
Matthias Rampke c2c1af12ab
housekeeping: update dependencies
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-07-08 12:41:14 +00:00
Matthias Rampke 9115f0fa39
Merge pull request #447 from pedro-stanaka/feat/doc-special-regex-match
Adding documentation about special regex match group
2022-07-08 10:59:00 +00:00
Pedro Tanaka 6ef0b3b4e8 Changing issue number for tracking support for globbing
Signed-off-by: Pedro Tanaka <pedro.stanaka@gmail.com>
2022-06-29 10:23:33 +02:00
Pedro Tanaka 1a148215de Adding note about behavior of .+ regex
Signed-off-by: Pedro Tanaka <pedro.stanaka@gmail.com>
2022-06-29 10:22:12 +02:00
Pedro Tanaka f935a9c869 Adding documentation about special regex match group
Signed-off-by: Pedro Tanaka <pedro.stanaka@gmail.com>
2022-06-29 10:22:12 +02:00
Matthias Rampke 3a63a4b86c
Merge pull request #445 from SandroJijavadze/fix-443-flaky-test
fix flaky test gosched hack
2022-06-28 20:18:13 +02:00
Sandro Jijavadze 2ab624a917 fix flaky test gosched hack
Signed-off-by: Sandro Jijavadze <sandrojijavadze@protonmail.com>
2022-06-22 15:31:41 +04:00
Matthias Rampke 848212e028
Merge pull request #440 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-06-18 11:58:53 +02:00
Matthias Rampke d90c8ff92d
Merge pull request #441 from pedro-stanaka/fix/benchmarks-mapper
Fixing broken benchmark for mapper
2022-06-18 10:17:07 +02:00
Pedro Tanaka 6e341dd805
Fixing broken benchmark for mapper
Signed-off-by: Pedro Tanaka <pedro.stanaka@gmail.com>
2022-06-17 12:14:29 +02:00
prombot c753dfaf76 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-06-14 19:50:38 +00:00
Augustin Husson 0d22f85f04
Merge pull request #436 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-06-14 00:27:42 +02:00
prombot b8504edbe4 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-06-02 19:50:41 +00:00
Matthias Rampke c714dcdf2e
Add #434 to release notes
just in time for 0.22.5 :)

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-05-06 12:01:02 +00:00
Matthias Rampke dcddbc4234
Merge pull request #434 from pedro-stanaka/relay-new-line-count-metric
New metric for relayed lines
2022-05-06 11:56:27 +00:00
Matthias Rampke d46243aa66
Update Year 2022-05-06 11:56:04 +00:00
Matthias Rampke 4d07e28b7c
Release 0.22.5
no material changes, only a maintenance release.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-05-06 11:50:40 +00:00
Matthias Rampke ada7248ccb
Merge pull request #432 from prometheus/mr/go-image
Switch to cimg/go for build+test
2022-05-06 11:35:00 +00:00
Pedro Tanaka 0de518d3d5
Adding new metric to keep track of total relayed lines
* Adding new simple test to relay package
* Adding metric testing on relay
* Adding new metric to relay
* Adding new test section to check on metrics
* Fixing linting problems on new code
* Adding license headers on file

Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-05-06 13:06:46 +02:00
Matthias Rampke 3427e36d50
Switch to cimg/go for build+test
To get off the [deprecated] circleci/golang.

Fixes #424.

[deprecated]: https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-05-06 10:52:24 +00:00
Matthias Rampke 99a94fa8b4
Merge pull request #433 from prometheus/revert-429-relay-new-line-count-metric
Revert "New metric for total lines relayed"
2022-05-06 10:52:03 +00:00
Matthias Rampke 4b6983811e
Revert "New metric for total lines relayed"
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-05-06 10:50:13 +00:00
Matthias Rampke e416ec14d3
Merge pull request #429 from pedro-stanaka/relay-new-line-count-metric
New metric for total lines relayed
2022-05-06 10:43:17 +00:00
Ben Kochie f8fc001bba
Merge pull request #431 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-05-05 22:24:34 +02:00
prombot 542ee7cfa9 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-05-05 19:50:36 +00:00
Ben Kochie 0cb34c771d
Merge pull request #430 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-05-03 22:17:13 +02:00
prombot 549a96a54c Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-05-03 19:50:38 +00:00
Pedro Tanaka fbf7837387
Fixing linting problems on new code
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-04-26 21:07:02 +02:00
Pedro Tanaka d015cda365
Adding new test section to check on metrics
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-04-22 14:58:21 +02:00
Pedro Tanaka 2d339b0853
Adding new metric to relay
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-04-21 11:47:19 +02:00
Pedro Tanaka 60742e1bd6
Adding metric testing on relay
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-04-21 11:43:58 +02:00
Pedro Tanaka 78dcd3b7b2
Adding new simple test to relay package
Signed-off-by: Pedro Tanaka <pedro.tanaka@shopify.com>
2022-04-21 08:27:16 +02:00
Matthias Rampke 4ad11fcafa
Merge pull request #426 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-03-31 22:12:52 +02:00
prombot 2f5add464e Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-03-31 19:50:48 +00:00
Matthias Rampke 4a086bc25b
Merge pull request #423 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-03-14 21:20:08 +01:00
prombot 55e536bc15 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-03-14 19:50:37 +00:00
Matthias Rampke 161f982ae2
Merge pull request #422 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-03-12 21:58:56 +01:00
prombot 5654505569 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-03-10 19:50:26 +00:00
Matthias Rampke 1a1dd2d8a3
Merge pull request #421 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-03-09 09:38:18 +01:00
prombot b8462d09ec Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-03-08 19:50:28 +00:00
Matthias Rampke c481b95155
Merge pull request #419 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2022-03-08 09:45:11 +01:00
prombot b1222831b6 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2022-03-03 19:50:41 +00:00
Matthias Rampke db4a4f2fbe
Merge pull request #415 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-12-19 12:03:11 +01:00
prombot 83180ae37f Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-12-15 00:01:52 +00:00
Matthias Rampke 5c6c8a28d0
Merge pull request #412 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-11-27 10:02:07 +01:00
prombot a32a9da527 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-11-27 00:01:51 +00:00
Matthias Rampke 7e2fe6c2e6
Bump version & add changelog for #409 #410
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-11-26 10:32:37 +00:00
Matthias Rampke 7ba3f6a841
Merge pull request #409 from aleksandr-vin/aleksandr-vin-patch-1
Set numeric user to comply runAsNonRoot k8s policy
2021-11-26 11:22:06 +01:00
Matthias Rampke 6a5a089597
Merge pull request #410 from mattdurham/use_instance_registry
Use the instance registry instead of the prometheus.registry
2021-11-26 11:20:40 +01:00
Matthias Rampke 6564725760
Merge pull request #411 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-11-26 11:13:56 +01:00
prombot bfdf4ddee0 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-11-26 00:01:50 +00:00
Matt Durham aadb43ed02 Use the instance registry instead of the prometheus.registry
Signed-off-by: Matt Durham <mattdurham@ppog.org>
2021-11-17 11:37:39 -05:00
Aleksandr Vinokurov 258c8c0b2c
Set numeric user to comply runAsNonRoot k8s policy
When running in Kubernetes, if the container has the `runAsNonRoot` policy and the image specifies a non-numeric user (nobody), the deployment fails because Kubernetes cannot verify that the user is non-root.

Closes #406

Signed-off-by: Aleksandr Vinokurov <aleksandr.vin@gmail.com>
2021-11-16 17:48:56 +01:00
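
For context, the policy in question is the standard Kubernetes securityContext setting (not specific to this repo):

securityContext:
  runAsNonRoot: true

Kubernetes can only verify this when the image declares a numeric UID, e.g. USER 65534 in the Dockerfile (the uid conventionally used for nobody); with a symbolic USER nobody it refuses to start the container, which is what this change addresses.
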
Matthias Rampke fae515f739
Merge pull request #405 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-11-12 16:57:43 +01:00
prombot 339c8efb33 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-11-09 00:01:47 +00:00
Matthias Rampke 16d4f317a5
Release 0.22.3
with changelog for #402

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-10-26 11:24:33 +00:00
Matthias Rampke 6c0283942c
Merge pull request #402 from agneum/389-allow-multiple-dashes
feat: handle multiple dashes in unmapped metrics (#389)
2021-10-26 13:13:52 +02:00
Matthias Rampke d0d948ee05
Merge pull request #404 from agneum/364-enable-more-linters
feat: enable more linters and fix warnings (#364)
2021-10-26 13:12:42 +02:00
Matthias Rampke 69b17ee390
Merge pull request #403 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-10-26 13:09:38 +02:00
akartasov 697eb33e2c
fix: remove vendor and disable-all options, disable the errcheck linter
Signed-off-by: akartasov <agneum@gmail.com>
2021-10-25 09:13:44 +07:00
akartasov b60989291b
feat: enable more linters and fix warnings (#364)
Signed-off-by: akartasov <agneum@gmail.com>
2021-10-24 21:27:31 +07:00
prombot f699c4ae94 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-10-24 00:01:42 +00:00
akartasov b6fc5ded9f
fix linter warnings: goimports, wsl
Signed-off-by: akartasov <agneum@gmail.com>
2021-10-24 00:13:53 +07:00
akartasov 2ab2c442cf
feat: handle double dashes in unmapped metrics (#389)
Signed-off-by: akartasov <agneum@gmail.com>
2021-10-23 23:53:47 +07:00
Ben Kochie 95d34445ed
Merge pull request #401 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-10-23 10:25:49 +02:00
prombot 83513ab186 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-10-23 00:01:50 +00:00
Matthias Rampke 5c8622abc1
Merge pull request #400 from prometheus/beorn7/deprecation
Add deprecation note to pkg directory
2021-10-14 13:22:22 +02:00
beorn7 4946fc865c Add deprecation note to pkg directory
Signed-off-by: beorn7 <beorn@grafana.com>
2021-10-13 14:40:09 +02:00
Ben Kochie d4e89cd38c
Merge pull request #398 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-09-12 12:46:18 +02:00
prombot f0c6a23b8c Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-09-12 00:02:12 +00:00
Ben Kochie f7894137b8
Merge pull request #395 from prometheus/superq/release_0.22.2
Release 0.22.2
2021-09-10 12:06:29 +02:00
SuperQ 9200507034
Release 0.22.2
* [ENHANCEMENT] Add metrics to relay ([#393](https://github.com/prometheus/statsd_exporter/pull/393))

Signed-off-by: SuperQ <superq@gmail.com>
2021-09-10 10:31:53 +02:00
Ben Kochie 884a8c0a21
Merge pull request #393 from prometheus/superq/relay_metrics
Add metrics to relay
2021-09-10 10:26:42 +02:00
SuperQ ae26c506f5
Add metrics to relay
Add some metrics to the statsd relay.
* The number of StatsD packets relayed.
* The number of lines that were too long to relay.

Also convert main.go to use promauto for metric registration.

Signed-off-by: SuperQ <superq@gmail.com>
2021-09-02 11:57:39 +02:00
Matthias Rampke f39c9c3645
Merge pull request #392 from prometheus/mr/update-scenarios
Update transition scenarios
2021-09-01 17:51:24 +02:00
Matthias Rampke f5c6a55c0d
Update transition scenarios
Recommend running the exporter as a sidecar, now that the relay feature
makes this easy.

Re-order the overview section to put the final state first, then explain
transition and try-out options.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-09-01 15:59:07 +02:00
Matthias Rampke 9b31bb8533
Add changelog entry for #390 & #388
fix ordering, and prepare to release today

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-09-01 15:20:47 +02:00
Matthias Rampke e86e67a7cf
Merge pull request #388 from prometheus/superq/relay
Add statsd relay
2021-09-01 15:17:44 +02:00
SuperQ 0b84893d4a
Add statsd relay
Add a simple statsd packet output relay that buffers and forwards raw
statsd lines.
* Only supports UDP output.
* Has a hard-coded buffering timeout of 1 second.

Closes: https://github.com/prometheus/statsd_exporter/issues/95

Signed-off-by: SuperQ <superq@gmail.com>
2021-09-01 15:14:01 +02:00
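
A hedged usage sketch (flag name from my recollection of the README, --statsd.relay.address):

statsd_exporter --statsd.relay.address=statsd-host:8125

With this set, the exporter still maps and exposes the metrics it receives, and additionally forwards the raw statsd lines over UDP to the given address, subject to the one-second buffering noted above.
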
Matthias Rampke b04eba273a
Merge pull request #391 from prometheus/superq/go1.17
Update build for Go 1.17
2021-09-01 15:12:12 +02:00
Matthias Rampke 6c43ebbc79
Merge pull request #390 from royeo/feature/replace-level-pkg
refactor: replace level pkg
2021-09-01 15:08:40 +02:00
SuperQ 37da8dd118
Update build for Go 1.17
* Update to Go 1.17.
* Update Go mods.

Signed-off-by: SuperQ <superq@gmail.com>
2021-09-01 14:29:08 +02:00
davinlu 6329577a6b refactor: replace level pkg
Signed-off-by: davinlu <davinlu@tencent.com>
2021-09-01 18:55:45 +08:00
Matthias Rampke d57c8b11f3
Changelog for #381
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-08-31 17:56:35 +02:00
Matthias Rampke a9c883a298
Merge pull request #381 from evandam/allow-multi-dash
allow multiple dashes in StatsD metric names
2021-08-31 17:55:00 +02:00
Matthias Rampke f9fea18a92
Skip version 0.22.0
it failed the license check.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-08-31 17:49:20 +02:00
Matthias Rampke e84990974b
Merge pull request #382 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-08-31 17:34:33 +02:00
Matthias Rampke 42158f5cca
Manually sync Makefile.common
to unstick the repo-sync PR

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-08-31 12:51:32 +02:00
prombot 1f53a743c7
Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-08-31 12:42:35 +02:00
Matthias Rampke eea2c0f734
Back out of release 0.22.0
the main change (#385) is missing license headers

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-08-31 12:41:32 +02:00
Matthias Rampke 828d50cea5
Merge pull request #387 from prometheus/revert-385-master
Revert "replace level pkg"
2021-08-31 12:40:34 +02:00
Matthias Rampke c4f435e140
Revert "replace level pkg" 2021-08-31 12:39:34 +02:00
Matthias Rampke efdf11343c
Release 0.22.0
with changelog for #385 #386

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-08-31 12:24:24 +02:00
Matthias Rampke e395798230
Merge pull request #386 from royeo/feature/pprof
use http.DefaultServeMux to support pprof
2021-08-31 12:05:55 +02:00
Matthias Rampke 4c8028e865
Merge pull request #385 from royeo/master
replace level pkg
2021-08-31 12:05:27 +02:00
royeo c3a818ef62 replace level pkg
Signed-off-by: royeo <ljn6176@gmail.com>
2021-08-17 23:27:27 +08:00
davinlu f4fe58dcf2 use http.DefaultServeMux to support pprof
Signed-off-by: davinlu <davinlu@tencent.com>
2021-08-16 22:24:08 +08:00
Evan Van Dam abb7ec0afb allow multiple dashes in StatsD metric names
Signed-off-by: Evan Van Dam <evan.vandam@wonolo.com>
2021-06-17 09:43:40 -07:00
Matthias Rampke ef6627b9f0
Changelog & version bump for #379
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-06-10 07:23:17 +00:00
Matthias Rampke 5ea682c01f
Merge pull request #379 from sagikazarmark/update-prom-libs
Update prom libs
2021-06-09 19:05:31 +02:00
Márk Sági-Kazár 04bc13d73c
refactor: use structured logging
Co-authored-by: Ben Kochie <superq@gmail.com>
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2021-06-09 14:44:55 +02:00
Mark Sagi-Kazar 2852aa2a5b
chore(deps): update common package
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2021-06-07 16:34:48 +02:00
Mark Sagi-Kazar 278a862099
refactor: use go-kit logger instead of deprecated common log package
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2021-06-07 16:00:08 +02:00
Mark Sagi-Kazar 5f419f96dd
chore(deps): update client_golang
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2021-06-07 15:39:54 +02:00
Matthias Rampke 2d1face4e0
Changelog for #378 & release 0.20.3
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-06-04 08:02:05 +00:00
Matthias Rampke 7e3a7a5068
Merge pull request #378 from howardjohn/go-kit-log
Update dependencies and drop large go-kit update
2021-06-04 09:57:29 +02:00
John Howard c5113b732c Update dependencies and drop large go-kit update
Pulling in the same changes as done in
https://github.com/prometheus/common/issues/255

Signed-off-by: John Howard <howardjohn@google.com>
2021-06-03 08:00:27 -07:00
Matthias Rampke 1a86938882
Merge pull request #374 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-05-06 11:52:31 +02:00
Matthias Rampke cd27e25711
Merge pull request #376 from royeo/master
Fix comment
2021-05-06 11:51:57 +02:00
royeo 393b4ca495 Fix comment
Signed-off-by: royeo <ljn6176@gmail.com>
2021-05-06 16:08:29 +08:00
Matthias Rampke ea25c7a114
Release 0.20.2
with changelog for #375

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-05-03 15:25:02 +00:00
Matthias Rampke 2a6bbd7827
Merge pull request #375 from bakins/lru-cache
Use groupcache for LRU cache
2021-05-03 17:19:39 +02:00
Brian Akins a8f8067315 Use groupcache for LRU cache
Signed-off-by: Brian Akins <205350+bakins@users.noreply.github.com>
2021-04-28 12:15:19 -04:00
prombot c552b8b84b Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-04-14 00:01:41 +00:00
Matthias Rampke ce538cbae2
Merge pull request #373 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-04-01 08:31:59 +02:00
prombot b8d1fde4e4 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-03-31 00:01:37 +00:00
Matthias Rampke 2b5239a67f
Changelog for #365, cut bugfix release
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-03-26 17:23:09 +00:00
Matthias Rampke a8c09aaa97
Merge pull request #365 from glightfoot/segment-numbers
Support metric name segments starting with numbers
2021-03-26 18:20:30 +01:00
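
An illustrative (made-up) line exercising this: api.2xx.responses:1|c. The 2xx segment begins with a digit, which the previous line-matching regex rejected.
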
Matthias Rampke 5793e05795
Add additional tests for #365
testing more edge cases.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-03-26 16:54:29 +00:00
Matthias Rampke 5f7d52f795 Changelog for #363 & version bump 2021-03-26 09:12:50 +00:00
Matthias Rampke 84e68332f3
Merge pull request #366 from prometheus/matthiasr/gitpod-setup
Add configuration for Gitpod
2021-03-26 10:02:24 +01:00
Matthias Rampke 12281a972e
Merge pull request #363 from glightfoot/cache-interface
Better mapper cache interface
2021-03-26 10:00:30 +01:00
Matthias Rampke 8b496dfab5
Merge pull request #372 from matthiasr/mr/fix-yaml-backslash-quoting
Escape backslashes in regex examples
2021-03-23 18:30:24 +01:00
Matthias Rampke da69d725d0 Escape backslashes in regex examples
In double-quoted strings, the backslash is [special to YAML](http://blogs.perl.org/users/tinita/2018/03/strings-in-yaml---to-quote-or-not-to-quote.html).
Make sure that the examples escape it correctly.
Fixes the wonky rendering on github.com.

Signed-off-by: Matthias Rampke <mr@soundcloud.com>
2021-03-23 17:05:56 +00:00
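
A sketch of the distinction, with a made-up mapping: in YAML double quotes the backslash is an escape character and must be doubled, while single quotes pass it through literally. Both forms below yield the regex foo\.(.+)\.bar:

- match: "foo\\.(.+)\\.bar"
  match_type: regex
- match: 'foo\.(.+)\.bar'
  match_type: regex
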
Matthias Rampke e462a256b1
Merge pull request #370 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-03-23 07:12:54 +01:00
prombot ad8d919409 Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-03-22 00:02:18 +00:00
Matthias Rampke 0151d93b46
Merge pull request #369 from prometheus/superq/vendor
Update build
2021-03-18 15:08:04 +01:00
Ben Kochie 8e28d7acca
Update build
* Drop /vendor.
* Bump Go modules.
* Bump build orb.

Signed-off-by: Ben Kochie <superq@gmail.com>
2021-03-18 11:18:02 +01:00
Ben Kochie ad0c3a62e1
Merge pull request #368 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-03-18 11:12:46 +01:00
Ben Kochie 094f3cd9e5
Bump Go to 1.16.
Signed-off-by: Ben Kochie <superq@gmail.com>
2021-03-18 10:47:05 +01:00
prombot 4962fd01ae Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-03-18 00:02:33 +00:00
Matthias Rampke b3aafad8f0
Merge pull request #367 from prometheus/repo_sync
Synchronize common files from prometheus/prometheus
2021-03-17 23:09:41 +01:00
prombot b1045e33ac Update common Prometheus files
Signed-off-by: prombot <prometheus-team@googlegroups.com>
2021-03-17 00:02:33 +00:00
Matthias Rampke f1ca904c98 Fully automate dev setup with Gitpod
This commit implements a fully-automated development setup using Gitpod.io, an
online IDE for GitLab, GitHub, and Bitbucket that enables Dev-Environments-As-Code.
This makes it easy for anyone to get a ready-to-code workspace for any branch,
issue or pull request almost instantly with a single click.

Similar to prometheus/prometheus#7749.

Signed-off-by: Matthias Rampke <mr@soundcloud.com>
2021-03-09 21:38:17 +00:00
glightfoot a197834f64 Add comments about thread-safety
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-02-19 14:13:03 -05:00
glightfoot 8b306c8c76 remove noop cache, add helper function for tests
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-02-19 14:08:53 -05:00
glightfoot 0c4a66e6a1 rework metricLineRE to support matching segments that start with a number
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-02-07 11:21:48 -05:00
glightfoot aa529c8884 rename mapper_cache to mappercache
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-02-06 23:27:11 -05:00
glightfoot 940e653ea6 cleanup
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-02-06 23:21:49 -05:00
glightfoot 32c612d5e8 add license
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-02-06 23:03:46 -05:00
glightfoot de9a51c863 fix TestMultipleMatches mapping indent
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-02-06 23:00:06 -05:00
glightfoot 0099df7f71 move lru and rr mapper caches to their own package and make mapper_cache a better interface for implementing externally
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-02-06 22:50:17 -05:00
Matthias Rampke fbcadbf71b
Changelog for #361 & release 0.20.0
add an extra note about the deprecated attributes; eventually I want to
get rid of them.

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2021-02-05 16:53:35 +00:00
Matthias Rampke eeeedce405
Merge pull request #361 from glightfoot/defaults2
Allow setting defaults for histogram and summary options
2021-02-05 16:32:22 +00:00
glightfoot f8bba00868 format and clarify alias type comment
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-01-28 14:24:41 -05:00
glightfoot 66a63a8c1f use summary options defaults in registry
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-01-28 14:16:40 -05:00
glightfoot 58aed41ff2 remove deprecated fields from mapper defaults config
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-01-28 13:59:42 -05:00
glightfoot 06411336c7 add support for setting histogram_options and summary_options in defaults
Signed-off-by: glightfoot <glightfoot@rsglab.com>
2021-01-28 13:58:52 -05:00
795 changed files with 2952 additions and 265810 deletions


@ -1,54 +1,48 @@
---
version: 2.1
orbs:
prometheus: prometheus/prometheus@0.5.0
prometheus: prometheus/prometheus@0.17.1
executors:
# Whenever the Go version is updated here, .promu.yml should also be updated.
golang:
docker:
- image: circleci/golang:1.15
- image: cimg/go:1.21
jobs:
test:
executor: golang
steps:
- prometheus/setup_environment
- run: make
- run: git diff --exit-code
- prometheus/store_artifact:
file: statsd_exporter
- prometheus/setup_environment
- run: make
- run: git diff --exit-code
- prometheus/store_artifact:
file: statsd_exporter
workflows:
version: 2
statsd_exporter:
jobs:
- test:
filters:
tags:
only: /.*/
- prometheus/build:
name: build
filters:
tags:
only: /.*/
- prometheus/publish_master:
context: org-context
requires:
- test
- build
filters:
branches:
only: master
- prometheus/publish_release:
context: org-context
requires:
- test
- build
filters:
tags:
only: /^v[0-9]+(\.[0-9]+){2}(-.+|[^-.]*)$/
branches:
ignore: /.*/
- test:
filters:
tags:
only: /.*/
- prometheus/build:
name: build
filters:
tags:
only: /.*/
- prometheus/publish_master:
context: org-context
requires:
- test
- build
filters:
branches:
only: master
- prometheus/publish_release:
context: org-context
requires:
- test
- build
filters:
tags:
only: /^v[0-9]+(\.[0-9]+){2}(-.+|[^-.]*)$/
branches:
ignore: /.*/

6
.github/dependabot.yml vendored Normal file

@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "monthly"


@ -0,0 +1,52 @@
---
name: Push README to Docker Hub
on:
push:
paths:
- "README.md"
- ".github/workflows/container_description.yml"
branches: [ main, master ]
permissions:
contents: read
jobs:
PushDockerHubReadme:
runs-on: ubuntu-latest
name: Push README to Docker Hub
if: github.repository_owner == 'prometheus' || github.repository_owner == 'prometheus-community' # Don't run this workflow on forks.
steps:
- name: git checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- name: Set docker hub repo name
run: echo "DOCKER_REPO_NAME=$(make docker-repo-name)" >> $GITHUB_ENV
- name: Push README to Dockerhub
uses: christian-korneck/update-container-description-action@d36005551adeaba9698d8d67a296bd16fa91f8e8 # v1
env:
DOCKER_USER: ${{ secrets.DOCKER_HUB_LOGIN }}
DOCKER_PASS: ${{ secrets.DOCKER_HUB_PASSWORD }}
with:
destination_container_repo: ${{ env.DOCKER_REPO_NAME }}
provider: dockerhub
short_description: ${{ env.DOCKER_REPO_NAME }}
readme_file: 'README.md'
PushQuayIoReadme:
runs-on: ubuntu-latest
name: Push README to quay.io
if: github.repository_owner == 'prometheus' || github.repository_owner == 'prometheus-community' # Don't run this workflow on forks.
steps:
- name: git checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- name: Set quay.io org name
run: echo "DOCKER_REPO=$(echo quay.io/${GITHUB_REPOSITORY_OWNER} | tr -d '-')" >> $GITHUB_ENV
- name: Set quay.io repo name
run: echo "DOCKER_REPO_NAME=$(make docker-repo-name)" >> $GITHUB_ENV
- name: Push README to quay.io
uses: christian-korneck/update-container-description-action@d36005551adeaba9698d8d67a296bd16fa91f8e8 # v1
env:
DOCKER_APIKEY: ${{ secrets.QUAY_IO_API_TOKEN }}
with:
destination_container_repo: ${{ env.DOCKER_REPO_NAME }}
provider: quay
readme_file: 'README.md'

38
.github/workflows/golangci-lint.yml vendored Normal file

@ -0,0 +1,38 @@
---
# This action is synced from https://github.com/prometheus/prometheus
name: golangci-lint
on:
push:
paths:
- "go.sum"
- "go.mod"
- "**.go"
- "scripts/errcheck_excludes.txt"
- ".github/workflows/golangci-lint.yml"
- ".golangci.yml"
pull_request:
permissions: # added using https://github.com/step-security/secure-repo
contents: read
jobs:
golangci:
permissions:
contents: read # for actions/checkout to fetch code
pull-requests: read # for golangci/golangci-lint-action to fetch pull requests
name: lint
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- name: install Go
uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0
with:
go-version: 1.22.x
- name: Install snmp_exporter/generator dependencies
run: sudo apt-get update && sudo apt-get -y install libsnmp-dev
if: github.repository == 'prometheus/snmp_exporter'
- name: Lint
uses: golangci/golangci-lint-action@3cfe3a4abbb849e10058ce4af15d205b6da42804 # v4.0.0
with:
version: v1.56.2

1
.gitignore vendored

@ -5,3 +5,4 @@ dependencies-stamp
/.release
/.tarballs
*~
/vendor

6
.gitpod.Dockerfile vendored Normal file

@ -0,0 +1,6 @@
FROM gitpod/workspace-full
RUN sudo apt-get update && \
sudo apt-get install -y netcat-traditional socat && \
sudo apt-get clean && \
sudo rm -rf /var/lib/apt/lists/*

13
.gitpod.yml Normal file

@ -0,0 +1,13 @@
tasks:
- init: go get
command: go build . && ./statsd_exporter
- command: printf 'Try:\n\n\techo "test.gauge:42|g" | socat - TCP:127.0.0.1:9125\n\n'
ports:
- port: 9102
onOpen: open-preview
- port: 9125
onOpen: ignore
image:
file: .gitpod.Dockerfile


@ -1,8 +1,27 @@
run:
modules-download-mode: vendor
# Run only staticcheck for now. Additional linters will be enabled one-by-one.
issues-exit-code: 1
tests: true
linters:
disable:
- errcheck
enable:
- staticcheck
disable-all: true
- deadcode
- goimports
- gosimple
- govet
- ineffassign
- nakedret
- staticcheck
- structcheck
- unused
- varcheck
- whitespace
linters-settings:
govet:
check-shadowing: true
issues:
exclude-use-default: false
exclude:
- 'shadow: declaration of "err" shadows declaration at line (\d+)'
max-issues-per-linter: 0
max-same-issues: 0


@ -1,11 +1,10 @@
go:
# Whenever the Go version is updated here, .circle/config.yml should also
# be updated.
version: 1.15
version: 1.21
repository:
path: github.com/prometheus/statsd_exporter
build:
flags: -mod=vendor -a -tags 'netgo static_build'
ldflags: |
-X github.com/prometheus/common/version.Version={{.Version}}
-X github.com/prometheus/common/version.Revision={{.Revision}}

25
.yamllint Normal file

@ -0,0 +1,25 @@
---
extends: default
ignore: |
ui/react-app/node_modules
rules:
braces:
max-spaces-inside: 1
level: error
brackets:
max-spaces-inside: 1
level: error
commas: disable
comments: disable
comments-indentation: disable
document-start: disable
indentation:
spaces: consistent
indent-sequences: consistent
key-duplicates:
ignore: |
config/testdata/section_key_dup.bad.yml
line-length: disable
truthy:
check-keys: false


@ -1,3 +1,132 @@
## 0.26.1 / 2024-03-22
* [SECURITY] Update dependencies, including `google.golang.org/protobuf` for CVE-2024-24786
This is a maintenance release.
## 0.26.0 / 2023-12-06
* [CHANGE] Update dependencies: prometheus/common, client model, and Go 1.21
* [FEATURE] Add option to honor original labels from event tags over labels specified in mapping configuration ([#521](https://github.com/prometheus/statsd_exporter/pull/521))
Thank you @rabenhorst for the `honor_labels` contribution!
## 0.25.0 / 2023-10-23
* [CHANGE] Update `client_golang` ([#508](https://github.com/prometheus/statsd_exporter/pull/508), [#513](https://github.com/prometheus/statsd_exporter/pull/513))
* [ENHANCEMENT] Process UDP packets asynchronously ([#511](https://github.com/prometheus/statsd_exporter/pull/511))
* [BUGFIX] Debug-log incoming lines in cleartext ([#510](https://github.com/prometheus/statsd_exporter/pull/510))
* [SECURITY] Update `golang.org/x/net` ([#516](https://github.com/prometheus/statsd_exporter/pull/516))
This release is less likely to drop UDP packets under very high traffic.
Additionally, when it does, it now attempts to record that this happened in the metric `statsd_exporter_udp_packet_drops_total`, where previously this could only be detected from operating system metrics.
If you are already monitoring for OS-level UDP packet drops, you _must_ also monitor this metric.
The exporter will pull packets from the UDP socket queue much more quickly and queue them internally before processing.
Existing monitoring for packet drops will no longer be sufficient to detect dropped events, but attribution to the exporter is easier with this new metric.
Many thanks to @sumeshpremraj and @kullanici0606 for their contributions, and @pedro-stanaka for helping with the async UDP processing!
## 0.24.0 / 2023-06-02
* [FEATURE] Improve the landing page experience ([#504](https://github.com/prometheus/statsd_exporter/pull/504))
* [FEATURE] Support scaling parameter in mapping ([#499](https://github.com/prometheus/statsd_exporter/pull/499))
## 0.23.3 / 2023-06-02
* [SECURITY] Maintenance release, updating dependencies
* [ENHANCEMENT][library] Allow instantiating configuration without going through YAML ([#491](https://github.com/prometheus/statsd_exporter/pull/491))
Version 0.23.2 was mistagged and thus skipped.
## 0.23.1 / 2023-03-08
* [SECURITY] Update all dependencies ([#489](https://github.com/prometheus/statsd_exporter/pull/489))
## 0.23.0 / 2022-12-07
* [CHANGE] Print help and version to standard out ([#469](https://github.com/prometheus/statsd_exporter/pull/469))
* [FEATURE] Support experimental native histograms ([#474](https://github.com/prometheus/statsd_exporter/pull/474))
## 0.22.8 / 2022-09-13
* [BUGFIX] Prevent poisoning with gauge/distribution naming collision ([#461](https://github.com/prometheus/statsd_exporter/pull/461))
* [CHANGE] Update `client_golang` dependency ([#463](https://github.com/prometheus/statsd_exporter/pull/463))
## 0.22.7 / 2022-07-08
* [CHANGE] Build with Go 1.18 ([#450](https://github.com/prometheus/statsd_exporter/pull/450))
## 0.22.6 / 2022-07-08
* [CHANGE] Update dependencies ([#449](https://github.com/prometheus/statsd_exporter/pull/449))
This is another housekeeping release.
## 0.22.5 / 2022-05-06
* [ENHANCEMENT] Add metric for total lines relayed ([#434](https://github.com/prometheus/statsd_exporter/pull/434))
This release is built with Go 1.17.9, to address security issues in Go.
## 0.22.4 / 2021-11-26
* [BUGFIX] Make Docker image compatible with the runAsNonRoot setting in Kubernetes pods ([#409](https://github.com/prometheus/statsd_exporter/pull/409))
* [BUGFIX] Library: fix support for custom Registerers with histograms and summaries ([#410](https://github.com/prometheus/statsd_exporter/pull/410))
## 0.22.3 / 2021-10-26
* [BUGFIX] Accept metrics with multiple dashes even if not mapped ([#402](https://github.com/prometheus/statsd_exporter/pull/402))
## 0.22.2 / 2021-09-10
* [ENHANCEMENT] Add metrics to relay ([#393](https://github.com/prometheus/statsd_exporter/pull/393))
## 0.22.1 / 2021-09-01
* [ENHANCEMENT] Accept incoming metrics with multiple dashes (with mapping) ([#381](https://github.com/prometheus/statsd_exporter//pull/381))
* [ENHANCEMENT] Allow forwarding messages to statsd for easier transition ([#388](https://github.com/prometheus/statsd_exporter/pull/388))
* [BUGFIX] Actually expose pprof endpoints ([#386](https://github.com/prometheus/statsd_exporter/pull/386))
* [BUGFIX] Fix performance regression on metric ingestion ([#390](https://github.com/prometheus/statsd_exporter/pull/390))
## 0.21.0 / 2021-06-10
* [ENHANCEMENT] Update dependencies & switch to go-kit/log ([#379](https://github.com/prometheus/statsd_exporter/pull/379))
This release changes the log format to be more structured, in line with other Prometheus projects.
## 0.20.3 / 2021-06-04
* [ENHANCEMENT] Use extracted go-kit/log to reduce transitive dependencies ([#378](https://github.com/prometheus/statsd_exporter/pull/378))
Once again there is no functional change.
For library users, the dependency tree shrinks considerably.
See [prometheus/common#255](https://github.com/prometheus/common/issues/255) for more details.
## 0.20.2 / 2021-05-03
* [BUGFIX] Remove copyleft licensed dependency ([#375](https://github.com/prometheus/statsd_exporter/pull/375))
There is no functional change for exporter users.
Removing this dependency reduces uncertainty for anyone reusing the mapping code.
## 0.20.1 / 2021-03-26
* [CHANGE] [library] Split mapper caches out from mapper ([#363](https://github.com/prometheus/statsd_exporter/pull/363))
* [BUGFIX] Accept metric segments that start with numbers ([#365](https://github.com/prometheus/statsd_exporter/pull/365))
## 0.20.0 / 2021-02-05
* [ENHANCEMENT] Support full defaults for summaries and histograms ([#361](https://github.com/prometheus/statsd_exporter/pull/361))
This completes support for `summary_options` and `histogram_options`.
Change the legacy configuration attributes throughout the mapping configuration as follows:
* `quantiles: …` to `summary_options: { quantiles: … }`
* `buckets: …` to `histogram_options: { buckets: … }`
* `timer_type` to `observer_type`.
Support for the deprecated attributes will be removed in a future release.
## 0.19.1 / 2021-01-29
* [BUGFIX] Don't return empty responses to lifecycle api requests ([#360](https://github.com/prometheus/statsd_exporter/pull/360))
@ -330,7 +459,6 @@ NOTE: This release renames `statsd_bridge` to `statsd_exporter`
* [ENHANCEMENT] add root endpoint with redirect ([#25](https://github.com/prometheus/statsd_exporter/pulls/25))
* [CHANGE] rename bridge to exporter ([#26](https://github.com/prometheus/statsd_exporter/pulls/26))
## 0.1.0 / 2015-04-17
* Initial release


@ -1,3 +1,3 @@
## Prometheus Community Code of Conduct
# Prometheus Community Code of Conduct
Prometheus follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
Prometheus follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).


@ -16,3 +16,5 @@ Prometheus uses GitHub to manage reviews of pull requests.
and the _Formatting and style_ section of Peter Bourgon's [Go: Best
Practices for Production
Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).
* [![Gitpod ready-to-code](https://img.shields.io/badge/Gitpod-ready--to--code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/prometheus/statsd_exporter)


@ -7,7 +7,7 @@ ARG ARCH="amd64"
ARG OS="linux"
COPY .build/${OS}-${ARCH}/statsd_exporter /bin/statsd_exporter
USER nobody
USER 65534
EXPOSE 9102 9125 9125/udp
HEALTHCHECK CMD wget --spider -S "http://localhost:9102/metrics" -T 60 2>&1 || exit 1
ENTRYPOINT [ "/bin/statsd_exporter" ]


@ -1 +1 @@
* Matthias Rampke <mr@soundcloud.com>
* Matthias Rampke <matthias@prometheus.io>


@ -36,29 +36,6 @@ GO_VERSION ?= $(shell $(GO) version)
GO_VERSION_NUMBER ?= $(word 3, $(GO_VERSION))
PRE_GO_111 ?= $(shell echo $(GO_VERSION_NUMBER) | grep -E 'go1\.(10|[0-9])\.')
GOVENDOR :=
GO111MODULE :=
ifeq (, $(PRE_GO_111))
ifneq (,$(wildcard go.mod))
# Enforce Go modules support just in case the directory is inside GOPATH (and for Travis CI).
GO111MODULE := on
ifneq (,$(wildcard vendor))
# Always use the local vendor/ directory to satisfy the dependencies.
GOOPTS := $(GOOPTS) -mod=vendor
endif
endif
else
ifneq (,$(wildcard go.mod))
ifneq (,$(wildcard vendor))
$(warning This repository requires Go >= 1.11 because of Go modules)
$(warning Some recipes may not work as expected as the current Go runtime is '$(GO_VERSION_NUMBER)')
endif
else
# This repository isn't using Go modules (yet).
GOVENDOR := $(FIRST_GOPATH)/bin/govendor
endif
endif
PROMU := $(FIRST_GOPATH)/bin/promu
pkgs = ./...
@ -72,23 +49,32 @@ endif
GOTEST := $(GO) test
GOTEST_DIR :=
ifneq ($(CIRCLE_JOB),)
ifneq ($(shell which gotestsum),)
ifneq ($(shell command -v gotestsum 2> /dev/null),)
GOTEST_DIR := test-results
GOTEST := gotestsum --junitfile $(GOTEST_DIR)/unit-tests.xml --
endif
endif
PROMU_VERSION ?= 0.7.0
PROMU_VERSION ?= 0.15.0
PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_VERSION)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM).tar.gz
SKIP_GOLANGCI_LINT :=
GOLANGCI_LINT :=
GOLANGCI_LINT_OPTS ?=
GOLANGCI_LINT_VERSION ?= v1.18.0
# golangci-lint only supports linux, darwin and windows platforms on i386/amd64.
GOLANGCI_LINT_VERSION ?= v1.56.2
# golangci-lint only supports linux, darwin and windows platforms on i386/amd64/arm64.
# windows isn't included here because of the path separator being different.
ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin))
ifeq ($(GOHOSTARCH),$(filter $(GOHOSTARCH),amd64 i386))
GOLANGCI_LINT := $(FIRST_GOPATH)/bin/golangci-lint
ifeq ($(GOHOSTARCH),$(filter $(GOHOSTARCH),amd64 i386 arm64))
# If we're in CI and there is an Actions file, that means the linter
# is being run in Actions, so we don't need to run it here.
ifneq (,$(SKIP_GOLANGCI_LINT))
GOLANGCI_LINT :=
else ifeq (,$(CIRCLE_JOB))
GOLANGCI_LINT := $(FIRST_GOPATH)/bin/golangci-lint
else ifeq (,$(wildcard .github/workflows/golangci-lint.yml))
GOLANGCI_LINT := $(FIRST_GOPATH)/bin/golangci-lint
endif
endif
endif
@ -105,6 +91,8 @@ BUILD_DOCKER_ARCHS = $(addprefix common-docker-,$(DOCKER_ARCHS))
PUBLISH_DOCKER_ARCHS = $(addprefix common-docker-publish-,$(DOCKER_ARCHS))
TAG_DOCKER_ARCHS = $(addprefix common-docker-tag-latest-,$(DOCKER_ARCHS))
SANITIZED_DOCKER_IMAGE_TAG := $(subst +,-,$(DOCKER_IMAGE_TAG))
ifeq ($(GOHOSTARCH),amd64)
ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux freebsd darwin windows))
# Only supported on amd64
@ -118,7 +106,7 @@ endif
%: common-% ;
.PHONY: common-all
common-all: precheck style check_license lint unused build test
common-all: precheck style check_license lint yamllint unused build test
.PHONY: common-style
common-style:
@ -144,32 +132,25 @@ common-check_license:
.PHONY: common-deps
common-deps:
@echo ">> getting dependencies"
ifdef GO111MODULE
GO111MODULE=$(GO111MODULE) $(GO) mod download
else
$(GO) get $(GOOPTS) -t ./...
endif
$(GO) mod download
.PHONY: update-go-deps
update-go-deps:
@echo ">> updating Go dependencies"
@for m in $$($(GO) list -mod=readonly -m -f '{{ if and (not .Indirect) (not .Main)}}{{.Path}}{{end}}' all); do \
$(GO) get $$m; \
$(GO) get -d $$m; \
done
GO111MODULE=$(GO111MODULE) $(GO) mod tidy
ifneq (,$(wildcard vendor))
GO111MODULE=$(GO111MODULE) $(GO) mod vendor
endif
$(GO) mod tidy
.PHONY: common-test-short
common-test-short: $(GOTEST_DIR)
@echo ">> running short tests"
GO111MODULE=$(GO111MODULE) $(GOTEST) -short $(GOOPTS) $(pkgs)
$(GOTEST) -short $(GOOPTS) $(pkgs)
.PHONY: common-test
common-test: $(GOTEST_DIR)
@echo ">> running all tests"
GO111MODULE=$(GO111MODULE) $(GOTEST) $(test-flags) $(GOOPTS) $(pkgs)
$(GOTEST) $(test-flags) $(GOOPTS) $(pkgs)
$(GOTEST_DIR):
@mkdir -p $@
@ -177,25 +158,34 @@ $(GOTEST_DIR):
.PHONY: common-format
common-format:
@echo ">> formatting code"
GO111MODULE=$(GO111MODULE) $(GO) fmt $(pkgs)
$(GO) fmt $(pkgs)
.PHONY: common-vet
common-vet:
@echo ">> vetting code"
GO111MODULE=$(GO111MODULE) $(GO) vet $(GOOPTS) $(pkgs)
$(GO) vet $(GOOPTS) $(pkgs)
.PHONY: common-lint
common-lint: $(GOLANGCI_LINT)
ifdef GOLANGCI_LINT
@echo ">> running golangci-lint"
ifdef GO111MODULE
# 'go list' needs to be executed before staticcheck to prepopulate the modules cache.
# Otherwise staticcheck might fail randomly for some reason not yet explained.
GO111MODULE=$(GO111MODULE) $(GO) list -e -compiled -test=true -export=false -deps=true -find=false -tags= -- ./... > /dev/null
GO111MODULE=$(GO111MODULE) $(GOLANGCI_LINT) run $(GOLANGCI_LINT_OPTS) $(pkgs)
else
$(GOLANGCI_LINT) run $(pkgs)
$(GOLANGCI_LINT) run $(GOLANGCI_LINT_OPTS) $(pkgs)
endif
.PHONY: common-lint-fix
common-lint-fix: $(GOLANGCI_LINT)
ifdef GOLANGCI_LINT
@echo ">> running golangci-lint fix"
$(GOLANGCI_LINT) run --fix $(GOLANGCI_LINT_OPTS) $(pkgs)
endif
.PHONY: common-yamllint
common-yamllint:
@echo ">> running yamllint on all YAML files in the repository"
ifeq (, $(shell command -v yamllint 2> /dev/null))
@echo "yamllint not installed so skipping"
else
yamllint .
endif
# For backward-compatibility.
@ -203,38 +193,29 @@ endif
common-staticcheck: lint
.PHONY: common-unused
common-unused: $(GOVENDOR)
ifdef GOVENDOR
@echo ">> running check for unused packages"
@$(GOVENDOR) list +unused | grep . && exit 1 || echo 'No unused packages'
else
ifdef GO111MODULE
common-unused:
@echo ">> running check for unused/missing packages in go.mod"
GO111MODULE=$(GO111MODULE) $(GO) mod tidy
ifeq (,$(wildcard vendor))
$(GO) mod tidy
@git diff --exit-code -- go.sum go.mod
else
@echo ">> running check for unused packages in vendor/"
GO111MODULE=$(GO111MODULE) $(GO) mod vendor
@git diff --exit-code -- go.sum go.mod vendor/
endif
endif
endif
.PHONY: common-build
common-build: promu
@echo ">> building binaries"
GO111MODULE=$(GO111MODULE) $(PROMU) build --prefix $(PREFIX) $(PROMU_BINARIES)
$(PROMU) build --prefix $(PREFIX) $(PROMU_BINARIES)
.PHONY: common-tarball
common-tarball: promu
@echo ">> building release tarball"
$(PROMU) tarball --prefix $(PREFIX) $(BIN_DIR)
.PHONY: common-docker-repo-name
common-docker-repo-name:
@echo "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)"
.PHONY: common-docker $(BUILD_DOCKER_ARCHS)
common-docker: $(BUILD_DOCKER_ARCHS)
$(BUILD_DOCKER_ARCHS): common-docker-%:
docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" \
docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" \
-f $(DOCKERFILE_PATH) \
--build-arg ARCH="$*" \
--build-arg OS="linux" \
@ -243,19 +224,19 @@ $(BUILD_DOCKER_ARCHS): common-docker-%:
.PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS)
common-docker-publish: $(PUBLISH_DOCKER_ARCHS)
$(PUBLISH_DOCKER_ARCHS): common-docker-publish-%:
docker push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)"
docker push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)"
DOCKER_MAJOR_VERSION_TAG = $(firstword $(subst ., ,$(shell cat VERSION)))
.PHONY: common-docker-tag-latest $(TAG_DOCKER_ARCHS)
common-docker-tag-latest: $(TAG_DOCKER_ARCHS)
$(TAG_DOCKER_ARCHS): common-docker-tag-latest-%:
docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest"
docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:v$(DOCKER_MAJOR_VERSION_TAG)"
docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest"
docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:v$(DOCKER_MAJOR_VERSION_TAG)"
.PHONY: common-docker-manifest
common-docker-manifest:
DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(DOCKER_IMAGE_TAG))
DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)"
DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(SANITIZED_DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(SANITIZED_DOCKER_IMAGE_TAG))
DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(SANITIZED_DOCKER_IMAGE_TAG)"
.PHONY: promu
promu: $(PROMU)
@ -280,12 +261,6 @@ $(GOLANGCI_LINT):
| sh -s -- -b $(FIRST_GOPATH)/bin $(GOLANGCI_LINT_VERSION)
endif
ifdef GOVENDOR
.PHONY: $(GOVENDOR)
$(GOVENDOR):
GOOS= GOARCH= $(GO) get -u github.com/kardianos/govendor
endif
.PHONY: precheck
precheck::

207
README.md

@ -8,26 +8,38 @@
## Overview
### With StatsD
The StatsD exporter is a drop-in replacement for StatsD.
This exporter translates StatsD metrics to Prometheus metrics via configured mapping rules.
To pipe metrics from an existing StatsD environment into Prometheus, configure
StatsD's repeater backend to repeat all received metrics to a `statsd_exporter`
process. This exporter translates StatsD metrics to Prometheus metrics via
configured mapping rules.
We recommend using the exporter only as an intermediate solution, and switching to [native Prometheus instrumentation](http://prometheus.io/docs/instrumenting/clientlibs/) in the long term.
While it is common to run centralized StatsD servers, the exporter works best as a [sidecar](https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar).
### Transitioning from an existing StatsD setup
The relay feature allows for a gradual transition.
Introduce the exporter by adding it as a sidecar alongside the application instances.
In Kubernetes, this means adding it to the [pod](https://kubernetes.io/docs/concepts/workloads/pods/).
Use the `--statsd.relay.address` to forward metrics to your existing StatsD UDP endpoint.
Relaying forwards statsd events unmodified, preserving the original metric name and tags in any format.
+-------------+    +----------+                  +------------+
| Application +--->| Exporter +----------------->|   StatsD   |
+-------------+    +----------+                  +------------+
                        ^
                        |                        +------------+
                        +------------------------+ Prometheus |
                                                 +------------+
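As a minimal sketch (the StatsD address and mapping file name are placeholders, not recommendations), a sidecar invocation could look like this:
```
# Accept statsd traffic on the default ports, expose /metrics on :9102,
# and forward the unmodified lines to the existing StatsD server.
./statsd_exporter \
  --statsd.mapping-config=statsd_mapping.yml \
  --statsd.relay.address=statsd.example.com:8125
```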
### Relaying from StatsD
To pipe metrics from an existing StatsD environment into Prometheus, configure StatsD's repeater backend to repeat all received metrics to a `statsd_exporter` process.
+----------+                         +-------------------+                        +--------------+
|  StatsD  |---(UDP/TCP repeater)--->|  statsd_exporter  |<---(scrape /metrics)---|  Prometheus  |
+----------+                         +-------------------+                        +--------------+
### Without StatsD
Since the StatsD exporter uses the same line protocol as StatsD itself, you can
also configure your applications to send StatsD metrics directly to the exporter.
In that case, you don't need to run a StatsD server anymore.
We recommend this only as an intermediate solution and recommend switching to
[native Prometheus instrumentation](http://prometheus.io/docs/instrumenting/clientlibs/)
in the long term.
This allows trying out the exporter with minimal effort, but does not provide the per-instance metrics of the sidecar pattern.
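For a quick local smoke test (assuming the exporter is running with its default listeners; the metric name and the use of `socat` are illustrative):
```
# Send one counter increment to the default UDP listener...
echo "test.counter:1|c" | socat - UDP:127.0.0.1:9125
# ...and check that it shows up on the metrics endpoint.
curl -s http://127.0.0.1:9102/metrics | grep test_counter
```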
### Tagging Extensions
@ -68,7 +80,7 @@ in the DogStatsD documentation for the concept description and
If you encounter problems, note that this tagging style is incompatible with
the original `statsd` implementation.
For [SignalFX dimension](https://docs.signalfx.com/en/latest/integrations/agent/monitors/collectd-statsd.html#adding-dimensions-to-statsd-metrics), add the tags to the metric name in square brackets, as so:
For [SignalFX dimension](https://github.com/signalfx/signalfx-agent/blob/main/docs/monitors/collectd-statsd.md#adding-dimensions-to-statsd-metrics), add the tags to the metric name in square brackets, as so:
```
metric.name[tagName=val,tag2Name=val2]:0|c
@ -85,6 +97,9 @@ The exporter parses all tagging formats by default, but individual tagging forma
--no-statsd.parse-signalfx-tags
```
By default, labels explicitly specified in configuration take precedence over labels from tags.
To set the label from the statsd event tag, use [`honor_labels`](#honor-labels).
## Building and Running
NOTE: Version 0.7.0 switched to the [kingpin](https://github.com/alecthomas/kingpin) flags library. With this change, flag behaviour is POSIX-ish:
@ -93,73 +108,17 @@ NOTE: Version 0.7.0 switched to the [kingpin](https://github.com/alecthomas/king
* boolean long flags are disabled by prefixing with no (`--flag-name` is true, `--no-flag-name` is false)
* multiple short flags can be combined (but there currently is only one)
* flag processing stops at the first `--`
```
usage: statsd_exporter [<flags>]
Flags:
-h, --help Show context-sensitive help (also try
--help-long and --help-man).
--web.listen-address=":9102"
The address on which to expose the web interface
and generated Prometheus metrics.
--web.enable-lifecycle Enable shutdown and reload via HTTP request.
--web.telemetry-path="/metrics"
Path under which to expose metrics.
--statsd.listen-udp=":9125"
The UDP address on which to receive statsd
metric lines. "" disables it.
--statsd.listen-tcp=":9125"
The TCP address on which to receive statsd
metric lines. "" disables it.
--statsd.listen-unixgram=""
The Unixgram socket path to receive statsd
metric lines in datagram. "" disables it.
--statsd.unixsocket-mode="755"
The permission mode of the unix socket.
--statsd.mapping-config=STATSD.MAPPING-CONFIG
Metric mapping configuration file name.
--statsd.read-buffer=STATSD.READ-BUFFER
Size (in bytes) of the operating system's
transmit read buffer associated with the UDP or
Unixgram connection. Please make sure the kernel
parameters net.core.rmem_max is set to a value
greater than the value specified.
--statsd.cache-size=1000 Maximum size of your metric mapping cache.
Relies on least recently used replacement policy
if max size is reached.
--statsd.cache-type=lru Metric mapping cache type. Valid options are
"lru" and "random"
--statsd.event-queue-size=10000
Size of internal queue for processing events
--statsd.event-flush-threshold=1000
Number of events to hold in queue before
flushing
--statsd.event-flush-interval=200ms
Maximum time between event queue flushes.
--debug.dump-fsm="" The path to dump internal FSM generated for
glob matching as Dot file.
--check-config Check configuration and exit.
--statsd.parse-dogstatsd-tags
Parse DogStatsd style tags. Enabled by default.
--statsd.parse-influxdb-tags
Parse InfluxDB style tags. Enabled by default.
--statsd.parse-librato-tags
Parse Librato style tags. Enabled by default.
--statsd.parse-signalfx-tags
Parse SignalFX style tags. Enabled by default.
--log.level=info Only log messages with the given severity or
above. One of: [debug, info, warn, error]
--log.format=logfmt Output format of log messages. One of: [logfmt,
json]
--version Show application version.
```
* see `--help` for a full list of flags
## Lifecycle API
The `statsd_exporter` has an optional lifecycle API (disabled by default) that can be used to reload or quit the exporter
by sending a `PUT` or `POST` request to the `/-/reload` or `/-/quit` endpoints.
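For example, assuming the exporter was started with `--web.enable-lifecycle` and the default listen address, a reload can be triggered with:
```
# Ask the running exporter to re-read its mapping configuration.
curl -X POST http://localhost:9102/-/reload
```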
## Relay
The `statsd_exporter` has an optional mode that will buffer and relay incoming statsd lines to a remote server. This is useful to "tee" the data when migrating to using the exporter. The relay will flush the buffer at least once per second to avoid delaying delivery of metrics.
## Tests
$ go test
@ -276,8 +235,7 @@ mappings:
name: "${2}_total"
labels:
provider: "$1"
mappings:
- match: "(.*)\.(.*)--(.*)\.status\.(.*)\.count"
- match: "(.*)\\.(.*)--(.*)\\.status\.(.*)\\.count"
match_type: regex
name: "request_total"
labels:
@ -290,13 +248,34 @@ mappings:
Be aware of YAML escape rules, as a mapping like the following one will not work.
```yaml
mappings:
- match: "test\.(\w+)\.(\w+)\.counter"
- match: "test\\.(\w+)\\.(\w+)\\.counter"
match_type: regex
name: "${2}_total"
labels:
provider: "$1"
```
#### Special match groups
When using regex, the match group `0` is the full match and can be used to attach labels to the metric.
Example:
```yaml
mappings:
- match: ".+"
match_type: regex
name: "$0"
labels:
statsd_metric_name: "$0"
```
If a metric `my.statsd_counter` is received, the metric name will **still** be mapped to `my_statsd_counter` (Prometheus compatible name).
But the metric will also have the label `statsd_metric_name` with the value `my.statsd_counter` (unchanged value).
Note: If you use a `match` like in the example (i.e. `.+`), be aware that it is a "catch-all" block, so it should come at the very end of the mapping list.
The same is not achievable with glob matching; for more details, check [this issue](https://github.com/prometheus/statsd_exporter/issues/444).
### Naming, labels, and help
Please note that metrics with the same name must also have the same set of
@ -314,6 +293,13 @@ mappings:
code: "$1"
```
### Honor labels
By default, labels specified in the mapping configuration take precedence over tags in the statsd event.
To set the label value to the original tag value, if present, specify `honor_labels: true` in the mapping configuration.
In this case, the label specified in the mapping acts as a default.
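A minimal sketch (the metric pattern, label, and file name below are made up for illustration): with `honor_labels: true`, a `region` tag on the incoming statsd event overrides the configured `region` label, which then only serves as a fallback.
```
# Hypothetical mapping file: the configured "region" label is only a default,
# because honor_labels lets a region tag on the event take precedence.
cat > honor_labels_example.yml <<'EOF'
mappings:
  - match: "myapp.requests.*"
    name: "myapp_requests_total"
    honor_labels: true
    labels:
      region: "unknown"
EOF
./statsd_exporter --statsd.mapping-config=honor_labels_example.yml
```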
### StatsD timers and distributions
By default, statsd timers and distributions (collectively "observers") are
@ -341,9 +327,9 @@ mappings:
error: 0.05
- quantile: 0.5
error: 0.005
max_summary_age: 30s
summary_age_buckets: 3
stream_buffer_size: 1000
max_age: 30s
age_buckets: 3
buf_cap: 1000
```
The default quantiles are 0.99, 0.9, and 0.5.
@ -362,6 +348,8 @@ mappings:
observer_type: histogram
histogram_options:
buckets: [ 0.01, 0.025, 0.05, 0.1 ]
native_histogram_bucket_factor: 1.1
native_histogram_max_buckets: 256
name: "my_timer"
labels:
provider: "$2"
@ -375,6 +363,11 @@ values](https://godoc.org/github.com/prometheus/client_golang/prometheus#pkg-var
are used for the histogram buckets:
`[.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]`.
`+Inf` is added automatically.
If your Prometheus server can scrape native histograms (v2.40.0+),
you can set `native_histogram_bucket_factor` to configure the precision of the
buckets in the sparse histogram. More about this in the original [client_golang docs](https://github.com/prometheus/client_golang/blob/449b46435075e6e069e05af920fe028b941033cf/prometheus/histogram.go#L399-L430).
The maximum number of buckets can also be set with `native_histogram_max_buckets`; this
keeps the histograms from growing too large in memory. More about this in the original [client_golang docs](https://github.com/prometheus/client_golang/blob/449b46435075e6e069e05af920fe028b941033cf/prometheus/histogram.go#L443-L467).
`observer_type` is only used when the statsd metric type is a timer, histogram, or distribution.
`buckets` is only used when the statsd metric type is one of these, and the `observer_type` is set to `histogram`.
@ -402,7 +395,7 @@ parameter is specified the default value of `glob` will be assumed:
```yaml
mappings:
- match: "(.*)\.(.*)--(.*)\.status\.(.*)\.count"
- match: "(.*)\\.(.*)--(.*)\\.status\\.(.*)\\.count"
match_type: regex
name: "request_total"
labels:
@ -414,16 +407,36 @@ mappings:
### Global defaults
One may also set defaults for the observer type, buckets or quantiles, and match type.
One may also set defaults for the observer type, histogram options, summary options, and match type.
These will be used by all mappings that do not define them.
An option that can only be configured in `defaults` is `glob_disable_ordering`, which is `false` if omitted.
By setting this to `true`, the `glob` match type will not honor the order in which rules occur in the mapping rules file and will always treat `*` as lower priority than a concrete string.
Setting `buckets` or `quantiles` in the defaults is deprecated in favor of `histogram_options` and `summary_options`, which will override the deprecated values.
If `summary_options` is present in a mapping config, it will only override the fields set in the mapping. Unset fields in the mapping will take the values from the defaults.
```yaml
defaults:
observer_type: histogram
buckets: [.005, .01, .025, .05, .1, .25, .5, 1, 2.5 ]
histogram_options:
buckets: [.005, .01, .025, .05, .1, .25, .5, 1, 2.5 ]
native_histogram_bucket_factor: 1.1
native_histogram_max_buckets: 256
summary_options:
quantiles:
- quantile: 0.99
error: 0.001
- quantile: 0.95
error: 0.01
- quantile: 0.9
error: 0.05
- quantile: 0.5
error: 0.005
max_age: 5m
age_buckets: 2
buf_cap: 1000
match_type: glob
glob_disable_ordering: false
ttl: 0 # metrics do not expire
@ -435,7 +448,7 @@ mappings:
provider: "$2"
outcome: "$3"
job: "${1}_server"
# This will be a summary timer.
# This will be a summary using the summary_options set in `defaults`
- match: "other.distribution.*.*.*"
observer_type: summary
name: "other_distribution"
@ -490,7 +503,7 @@ Possible values for `match_metric_type` are `gauge`, `counter` and `observer`.
There is a cache for metric mapping results, which can greatly improve performance.
The cache has a default maximum of 1000 unique statsd metric names -> prometheus metrics mappings that it can store.
This maximum can be adjust using the `statsd.cache-size` flag.
This maximum can be adjusted using the `statsd.cache-size` flag.
If the maximum is reached, entries are by default rotated using the [least recently used replacement policy](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)). This strategy is optimal when memory is constrained as only the most recent entries are retained.
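As a sketch, the cache could be tuned like this (the values shown are arbitrary):
```
# Allow 5000 cached mappings and use random replacement instead of LRU.
./statsd_exporter --statsd.cache-size=5000 --statsd.cache-type=random
```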
@ -510,7 +523,24 @@ metrics that do not expire.
expire a metric only by changing the mapping configuration. At least one
sample must be received for updated mappings to take effect.
### Event flushing configuration
### Unit conversions
The `scale` parameter can be used to define unit conversions for metric values. The value is a floating point number to scale metric values by. This can be useful for converting non-base units (e.g. milliseconds, kilobytes) to base units (e.g. seconds, bytes) as recommended in [prometheus best practices](https://prometheus.io/docs/practices/naming/).
```yaml
mappings:
- match: foo.latency_ms
name: foo_latency_seconds
scale: 0.001
- match: bar.processed_kb
name: bar_processed_bytes
scale: 1024
- match: baz.latency_us
name: baz_latency_seconds
scale: 1e-6
```
### Event flushing configuration
Internally `statsd_exporter` runs a goroutine for each network listener (UDP, TCP & Unix Socket). Each of these receives and parses incoming metric lines into events. For performance purposes, these events are queued internally and flushed to the main exporter goroutine periodically in batches. The size of this queue and the flush criteria can be tuned with the `--statsd.event-queue-size`, `--statsd.event-flush-threshold` and `--statsd.event-flush-interval` flags. However, the defaults should perform well even for very high traffic environments.
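Should tuning ever be needed, a sketch of adjusting these flags (values are arbitrary examples, not recommendations):
```
# Larger internal queue, bigger flush batches, less frequent time-based flushes.
./statsd_exporter \
  --statsd.event-queue-size=50000 \
  --statsd.event-flush-threshold=5000 \
  --statsd.event-flush-interval=500ms
```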
@ -539,7 +569,6 @@ Semantic versioning of the exporter is based on the impact on users of the expor
We encourage re-use of these packages and welcome [issues](https://github.com/prometheus/statsd_exporter/issues?q=is%3Aopen+is%3Aissue+label%3Alibrary) related to their usability as a library.
[travis]: https://travis-ci.org/prometheus/statsd_exporter
[circleci]: https://circleci.com/gh/prometheus/statsd_exporter
[quay]: https://quay.io/repository/prometheus/statsd-exporter
[hub]: https://hub.docker.com/r/prom/statsd-exporter/


@ -3,4 +3,4 @@
The Prometheus security policy, including how to report vulnerabilities, can be
found here:
https://prometheus.io/docs/operating/security/
<https://prometheus.io/docs/operating/security/>


@ -1 +1 @@
0.19.1
0.26.1


@ -20,7 +20,7 @@ import (
"testing"
"time"
"github.com/go-kit/kit/log"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
@ -81,6 +81,58 @@ func TestHandlePacket(t *testing.T) {
GLabels: map[string]string{},
},
},
}, {
name: "gauge increment",
in: "foo:+10|g",
out: event.Events{
&event.GaugeEvent{
GMetricName: "foo",
GValue: 10,
GRelative: true,
GLabels: map[string]string{},
},
},
}, {
name: "gauge set negative",
in: "foo:0|g\nfoo:-1|g",
out: event.Events{
&event.GaugeEvent{
GMetricName: "foo",
GValue: 0,
GRelative: false,
GLabels: map[string]string{},
},
&event.GaugeEvent{
GMetricName: "foo",
GValue: -1,
GRelative: true,
GLabels: map[string]string{},
},
},
}, {
// Test the sequence given here https://github.com/statsd/statsd/blob/master/docs/metric_types.md#gauges
name: "gauge up and down",
in: "gaugor:333|g\ngaugor:-10|g\ngaugor:+4|g",
out: event.Events{
&event.GaugeEvent{
GMetricName: "gaugor",
GValue: 333,
GRelative: false,
GLabels: map[string]string{},
},
&event.GaugeEvent{
GMetricName: "gaugor",
GValue: -10,
GRelative: true,
GLabels: map[string]string{},
},
&event.GaugeEvent{
GMetricName: "gaugor",
GValue: 4,
GRelative: true,
GLabels: map[string]string{},
},
},
}, {
name: "simple timer",
in: "foo:200|ms",
@ -543,6 +595,7 @@ func TestHandlePacket(t *testing.T) {
Logger: log.NewNopLogger(),
LineParser: parser,
UDPPackets: udpPackets,
UDPPacketDrops: udpPacketDrops,
LinesReceived: linesReceived,
EventsFlushed: eventsFlushed,
SampleErrors: *sampleErrors,
@ -572,7 +625,7 @@ func TestHandlePacket(t *testing.T) {
le := len(events)
// Flatten actual events.
actual := event.Events{}
for i := 0; i < le; i++ {
for j := 0; j < le; j++ {
actual = append(actual, <-events...)
}
@ -650,7 +703,7 @@ mappings:
`
// Create mapper from config and start an Exporter with a synchronous channel
testMapper := &mapper.MetricMapper{}
err := testMapper.InitFromYAMLString(config, 0)
err := testMapper.InitFromYAMLString(config)
if err != nil {
t.Fatalf("Config load error: %s %s", config, err)
}


@ -17,9 +17,10 @@ import (
"fmt"
"testing"
"github.com/go-kit/kit/log"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/statsd_exporter/pkg/event"
"github.com/prometheus/statsd_exporter/pkg/exporter"
"github.com/prometheus/statsd_exporter/pkg/line"
@ -64,6 +65,7 @@ func benchmarkUDPListener(times int, b *testing.B) {
// there are more events than input lines, need bigger buffer
events := make(chan event.Events, len(bytesInput)*times*2)
udpChan := make(chan []byte, len(bytesInput)*times*2)
l := listener.StatsDUDPListener{
EventHandler: &event.UnbufferedEventHandler{C: events},
@ -73,6 +75,7 @@ func benchmarkUDPListener(times int, b *testing.B) {
LinesReceived: linesReceived,
SamplesReceived: samplesReceived,
TagsReceived: tagsReceived,
UdpPacketQueue: udpChan,
}
// resume benchmark timer
@ -167,7 +170,7 @@ mappings:
`
testMapper := &mapper.MetricMapper{}
err := testMapper.InitFromYAMLString(config, 0)
err := testMapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}

45
go.mod

@ -1,19 +1,36 @@
module github.com/prometheus/statsd_exporter
go 1.20
require (
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d // indirect
github.com/go-kit/kit v0.10.0
github.com/golang/protobuf v1.4.2 // indirect
github.com/hashicorp/golang-lru v0.5.4
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_golang v1.6.0
github.com/prometheus/client_model v0.2.0
github.com/prometheus/common v0.10.0
github.com/sirupsen/logrus v1.6.0 // indirect
golang.org/x/sys v0.0.0-20200523222454-059865788121 // indirect
google.golang.org/protobuf v1.24.0 // indirect
gopkg.in/alecthomas/kingpin.v2 v2.2.6
gopkg.in/yaml.v2 v2.3.0
github.com/alecthomas/kingpin/v2 v2.4.0
github.com/go-kit/log v0.2.1
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da
github.com/prometheus/client_golang v1.19.0
github.com/prometheus/client_model v0.6.0
github.com/prometheus/common v0.48.0
github.com/prometheus/exporter-toolkit v0.11.0
github.com/stvp/go-udp-testing v0.0.0-20201019212854-469649b16807
gopkg.in/yaml.v2 v2.4.0
)
go 1.13
require (
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/go-logfmt/logfmt v0.6.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/jpillora/backoff v1.0.0 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/xhit/go-str2duration/v2 v2.1.0 // indirect
golang.org/x/crypto v0.18.0 // indirect
golang.org/x/net v0.20.0 // indirect
golang.org/x/oauth2 v0.16.0 // indirect
golang.org/x/sync v0.5.0 // indirect
golang.org/x/sys v0.16.0 // indirect
golang.org/x/text v0.14.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.33.0 // indirect
)

494
go.sum

@ -1,448 +1,82 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc h1:cAKDfWh5VpdgMhJosfJnn5/FoN2SRZ4p7fJNX58YPaU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf h1:qet1QNfXsQxTZqLG4oE62mJzwPIB8+Tee4RNCL9ulrY=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4 h1:Hs82Z41s6SdL1CELW+XaDYmOH4hkBN4/N9og/AsOv7E=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d h1:UQZhZ2O0vMHr2cI+DC1Mbh0TJxzA3RcLoMsFw+aXw7E=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/aryann/difflib v0.0.0-20170710044230-e206f873d14a/go.mod h1:DAHtR1m6lCRdSC2Tm3DSWRPvIPr6xNKyeHdqDQSQT+A=
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0 h1:HWo1m869IqiPhD389kmkxeTalrjNbbJTC8LXupb+sl0=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/alecthomas/kingpin/v2 v2.4.0 h1:f48lwail6p8zpO1bC4TxtqACaGqHYA22qkHjHpqDjYY=
github.com/alecthomas/kingpin/v2 v2.4.0/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4safvEdbitLhGGK48rN6g=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0 h1:Wz+5lgoB0kkuqLEc6NVmwRknTKP6dTGbSqvhZtBI/j0=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0 h1:wDJmvq38kDhkVxi50ni9ykkdUr1PKgqKOoi01fa0Mdk=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.10.0 h1:dXFJfIHVvUcpSgDOV+Ne6t7jXri8Tfv2uOLHUZ2XNuo=
github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o=
github.com/go-logfmt/logfmt v0.3.0 h1:8HUsc87TaSWLKwrnumgC8/YconD2fJQsRJAsWaPg2ic=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0 h1:MP4Eh7ZCb31lleYCFuwm0oe4/YGak+5l1vA2NOE80nA=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0 h1:TrB8swr/68K7m9CcGut2g3UOihhbcbiMAYiuTXdEih4=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3 h1:CE8S1cTafDpPvMhIxNJKvHsGVBgn1xWYf1NbHQhywc8=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515 h1:T+h1c/A9Gawja4Y9mFVWj2vyii2bbUNDw3kt9VxK2EY=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM=
github.com/lightstep/lightstep-tracer-go v0.18.1/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4=
github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nats-io/jwt v0.3.0/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg=
github.com/nats-io/jwt v0.3.2/go.mod h1:/euKqTS1ZD+zzjYrY7pseZrTtWQSjujC7xjPc8wL6eU=
github.com/nats-io/nats-server/v2 v2.1.2/go.mod h1:Afk+wRZqkMQs/p45uXdrVLuab3gwv3Z8C4HTBu8GD/k=
github.com/nats-io/nats.go v1.9.1/go.mod h1:ZjDU1L/7fJ09jvUSRVBR2e7+RnLiiIQyqyzEE/Zbp4w=
github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nkeys v0.1.3/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/oklog/oklog v0.3.2/go.mod h1:FCV+B7mhrz4o+ueLpx+KqkyXRGMWOYEvfiXtdGtbWGs=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/opentracing-contrib/go-observer v0.0.0-20170622124052-a52f23424492/go.mod h1:Ngi6UdF0k5OKD5t5wlmGhe/EDKPoUM3BXZSSfIuJbis=
github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74=
github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/openzipkin-contrib/zipkin-go-opentracing v0.4.5/go.mod h1:/wsWhb9smxSfWAKL3wpBW7V8scJMt8N8gnaMCS9E/cA=
github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
github.com/openzipkin/zipkin-go v0.2.1/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
github.com/openzipkin/zipkin-go v0.2.2/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
github.com/pact-foundation/pact-go v1.0.4/go.mod h1:uExwJY4kCzNPcHRj+hCR/HBbOOIwwtUjcrb0b5/5kLM=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v1.0.0 h1:vrDKnkGzuGvhNAL56c7DBz29ZL+KxnoR0x7enabFceM=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
github.com/prometheus/client_golang v1.6.0 h1:YVPodQOcK15POxhgARIvnDRVpLcuK8mglnMrWfyrw6A=
github.com/prometheus/client_golang v1.6.0/go.mod h1:ZLOG9ck3JLRdB5MgO8f+lLTe83AXG6ro35rLTxvnIl4=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910 h1:idejC8f05m9MGOsuEi1ATq9shN03HrxNkD/luQvxCv8=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 h1:S/YWwWx/RA8rT8tKFRuGUZhuA90OyIBpPCXkcbwU8DE=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1 h1:K0MGApIoQvMw27RTdJkPbr3JZ7DNbtxQNyi5STVM6Kw=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2 h1:6LJUbpNm42llc4HRCuvApCSWB/WfhuNo9K98Q9sNGfs=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.0.11 h1:DhHlBtkHWPYi8O2y31JkK0TF+DGM+51OopZjH/Ia5qI=
github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0 h1:juTguoYk5qI21pwyTXY3B3Y5cOTH3ZUyZCg1v/mihuo=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/sony/gobreaker v0.4.1/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5JnDBl6z3cMAg/SywNDC5ABu5ApDIw6lUbRmI=
github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU=
github.com/prometheus/client_golang v1.19.0/go.mod h1:ZRM9uEAypZakd+q/x7+gmsvXdURP+DABIEIjnmDdp+k=
github.com/prometheus/client_model v0.6.0 h1:k1v3CzpSRUTrKMppY35TLwPvxHqBu0bYgxZzqGIgaos=
github.com/prometheus/client_model v0.6.0/go.mod h1:NTQHnmxFpouOD0DpvP4XujX3CdOAGQPoaGhyTchlyt8=
github.com/prometheus/common v0.48.0 h1:QO8U2CdOzSn1BBsmXJXduaaW+dY/5QLjfB8svtSzKKE=
github.com/prometheus/common v0.48.0/go.mod h1:0/KsvlIEfPQCQ5I2iNSAWKPZziNCvRs5EC6ILDTlAPc=
github.com/prometheus/exporter-toolkit v0.11.0 h1:yNTsuZ0aNCNFQ3aFTD2uhPOvr4iD7fdBvKPAEGkNf+g=
github.com/prometheus/exporter-toolkit v0.11.0/go.mod h1:BVnENhnNecpwoTLiABx7mrPB/OLRIgN74qlQbV+FK1Q=
github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo=
github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793 h1:u+LnwYTOOW7Ukr/fppxEb1Nwz0AtPflrblfvUudpo+I=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stvp/go-udp-testing v0.0.0-20201019212854-469649b16807 h1:LUsDduamlucuNnWcaTbXQ6aLILFcLXADpOzeEH3U+OI=
github.com/stvp/go-udp-testing v0.0.0-20201019212854-469649b16807/go.mod h1:7jxmlfBCDBXRzr0eAQJ48XC1hBu1np4CS5+cHEYfwpc=
github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc=
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/crypto v0.18.0 h1:PGVlW0xEltQnzFZ55hkuX5+KLyrMYhHld1YHO4AKcdc=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f h1:Bl/8QSvNqXvPGPGXa2z5xUTmV7VDcZyvRZ+QQXkXTZQ=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5 h1:mzjBh+S5frKOsOBobWIMAbXavqjmgO17k/2puhcFR94=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/net v0.20.0 h1:aCL9BSgETF1k+blQaYUBx9hJ9LOGP3gAVemcZlf1Kpo=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
golang.org/x/oauth2 v0.16.0 h1:aDkGMBSYxElaoP81NpoUoz2oo2R2wHdZpGToUxfyQrQ=
golang.org/x/oauth2 v0.16.0/go.mod h1:hqZ+0LWXsiVoZpeld6jVt06P3adbS2Uu911W1SsJv2o=
golang.org/x/sync v0.5.0 h1:60k92dhOjHxJkrqnwsfl8KuaHbn/5dl0lUPUklKo3qE=
golang.org/x/sync v0.5.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894 h1:Cz4ceDQGXuKRnVBDTS23GTn/pU5OE2C0WrNTOYK1Uuc=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200420163511-1957bb5e6d1f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121 h1:rITEj+UZHYC927n8GT97eC3zrpzXdb/voyeOuVKS46o=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.22.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0 h1:4MY060fB1DLGMB/7MBTLnwQUY6+F09GEiz6SsrNqyzM=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0 h1:UhZDfRO8JRQru4/+LlLE0BRKGF8L+PICnvYZmx/fEGA=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gcfg.v1 v1.2.3/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sourcegraph.com/sourcegraph/appdash v0.0.0-20190731080439-ebfcffb1b5c0/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=


@ -16,7 +16,7 @@ package main
import (
"testing"
"github.com/go-kit/kit/log"
"github.com/go-kit/log"
"github.com/prometheus/statsd_exporter/pkg/line"
)

main.go

@ -24,141 +24,147 @@ import (
"strconv"
"syscall"
"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
"github.com/alecthomas/kingpin/v2"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/prometheus/common/promlog"
"github.com/prometheus/common/promlog/flag"
"github.com/prometheus/common/version"
"gopkg.in/alecthomas/kingpin.v2"
"github.com/prometheus/exporter-toolkit/web"
"github.com/prometheus/statsd_exporter/pkg/address"
"github.com/prometheus/statsd_exporter/pkg/event"
"github.com/prometheus/statsd_exporter/pkg/exporter"
"github.com/prometheus/statsd_exporter/pkg/level"
"github.com/prometheus/statsd_exporter/pkg/line"
"github.com/prometheus/statsd_exporter/pkg/listener"
"github.com/prometheus/statsd_exporter/pkg/mapper"
)
const (
defaultHelp = "Metric autogenerated by statsd_exporter."
regErrF = "Failed to update metric"
"github.com/prometheus/statsd_exporter/pkg/mappercache/lru"
"github.com/prometheus/statsd_exporter/pkg/mappercache/randomreplacement"
"github.com/prometheus/statsd_exporter/pkg/relay"
)
var (
eventStats = prometheus.NewCounterVec(
eventStats = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_events_total",
Help: "The total number of StatsD events seen.",
},
[]string{"type"},
)
eventsFlushed = prometheus.NewCounter(
eventsFlushed = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_event_queue_flushed_total",
Help: "Number of times events were flushed to exporter",
},
)
eventsUnmapped = prometheus.NewCounter(
eventsUnmapped = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_events_unmapped_total",
Help: "The total number of StatsD events no mapping was found for.",
})
udpPackets = prometheus.NewCounter(
udpPackets = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_udp_packets_total",
Help: "The total number of StatsD packets received over UDP.",
},
)
tcpConnections = prometheus.NewCounter(
udpPacketDrops = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_udp_packet_drops_total",
Help: "The total number of dropped StatsD packets which received over UDP.",
},
)
tcpConnections = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_tcp_connections_total",
Help: "The total number of TCP connections handled.",
},
)
tcpErrors = prometheus.NewCounter(
tcpErrors = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_tcp_connection_errors_total",
Help: "The number of errors encountered reading from TCP.",
},
)
tcpLineTooLong = prometheus.NewCounter(
tcpLineTooLong = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_tcp_too_long_lines_total",
Help: "The number of lines discarded due to being too long.",
},
)
unixgramPackets = prometheus.NewCounter(
unixgramPackets = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_unixgram_packets_total",
Help: "The total number of StatsD packets received over Unixgram.",
},
)
linesReceived = prometheus.NewCounter(
linesReceived = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_lines_total",
Help: "The total number of StatsD lines received.",
},
)
samplesReceived = prometheus.NewCounter(
samplesReceived = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_samples_total",
Help: "The total number of StatsD samples received.",
},
)
sampleErrors = prometheus.NewCounterVec(
sampleErrors = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_sample_errors_total",
Help: "The total number of errors parsing StatsD samples.",
},
[]string{"reason"},
)
tagsReceived = prometheus.NewCounter(
tagsReceived = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_tags_total",
Help: "The total number of DogStatsD tags processed.",
},
)
tagErrors = prometheus.NewCounter(
tagErrors = promauto.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_tag_errors_total",
Help: "The number of errors parsing DogStatsD tags.",
},
)
configLoads = prometheus.NewCounterVec(
configLoads = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_config_reloads_total",
Help: "The number of configuration reloads.",
},
[]string{"outcome"},
)
mappingsCount = prometheus.NewGauge(prometheus.GaugeOpts{
mappingsCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "statsd_exporter_loaded_mappings",
Help: "The current number of configured metric mappings.",
})
conflictingEventStats = prometheus.NewCounterVec(
conflictingEventStats = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_events_conflict_total",
Help: "The total number of StatsD events with conflicting names.",
},
[]string{"type"},
)
errorEventStats = prometheus.NewCounterVec(
errorEventStats = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_events_error_total",
Help: "The total number of StatsD events discarded due to errors.",
},
[]string{"reason"},
)
eventsActions = prometheus.NewCounterVec(
eventsActions = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_events_actions_total",
Help: "The total number of StatsD events by action.",
},
[]string{"action"},
)
metricsCount = prometheus.NewGaugeVec(
metricsCount = promauto.NewGaugeVec(
prometheus.GaugeOpts{
Name: "statsd_exporter_metrics_total",
Help: "The total number of metrics.",
@ -167,46 +173,12 @@ var (
)
)
func init() {
prometheus.MustRegister(version.NewCollector("statsd_exporter"))
prometheus.MustRegister(eventStats)
prometheus.MustRegister(eventsFlushed)
prometheus.MustRegister(eventsUnmapped)
prometheus.MustRegister(udpPackets)
prometheus.MustRegister(tcpConnections)
prometheus.MustRegister(tcpErrors)
prometheus.MustRegister(tcpLineTooLong)
prometheus.MustRegister(unixgramPackets)
prometheus.MustRegister(linesReceived)
prometheus.MustRegister(samplesReceived)
prometheus.MustRegister(sampleErrors)
prometheus.MustRegister(tagsReceived)
prometheus.MustRegister(tagErrors)
prometheus.MustRegister(configLoads)
prometheus.MustRegister(mappingsCount)
prometheus.MustRegister(conflictingEventStats)
prometheus.MustRegister(errorEventStats)
prometheus.MustRegister(eventsActions)
prometheus.MustRegister(metricsCount)
}
// uncheckedCollector wraps a Collector but its Describe method yields no Desc.
// This allows incoming metrics to have inconsistent label sets
type uncheckedCollector struct {
c prometheus.Collector
}
func (u uncheckedCollector) Describe(_ chan<- *prometheus.Desc) {}
func (u uncheckedCollector) Collect(c chan<- prometheus.Metric) {
u.c.Collect(c)
}
func serveHTTP(mux http.Handler, listenAddress string, logger log.Logger) {
level.Error(logger).Log("msg", http.ListenAndServe(listenAddress, mux))
os.Exit(1)
}
func sighupConfigReloader(fileName string, mapper *mapper.MetricMapper, cacheSize int, logger log.Logger, option mapper.CacheOption) {
func sighupConfigReloader(fileName string, mapper *mapper.MetricMapper, logger log.Logger) {
signals := make(chan os.Signal, 1)
signal.Notify(signals, syscall.SIGHUP)
@ -218,12 +190,12 @@ func sighupConfigReloader(fileName string, mapper *mapper.MetricMapper, cacheSiz
level.Info(logger).Log("msg", "Received signal, attempting reload", "signal", s)
reloadConfig(fileName, mapper, cacheSize, logger, option)
reloadConfig(fileName, mapper, logger)
}
}
func reloadConfig(fileName string, mapper *mapper.MetricMapper, cacheSize int, logger log.Logger, option mapper.CacheOption) {
err := mapper.InitFromFile(fileName, cacheSize, option)
func reloadConfig(fileName string, mapper *mapper.MetricMapper, logger log.Logger) {
err := mapper.InitFromFile(fileName)
if err != nil {
level.Info(logger).Log("msg", "Error reloading config", "error", err)
configLoads.WithLabelValues("failure").Inc()
@ -247,6 +219,29 @@ func dumpFSM(mapper *mapper.MetricMapper, dumpFilename string, logger log.Logger
return nil
}
func getCache(cacheSize int, cacheType string, registerer prometheus.Registerer) (mapper.MetricMapperCache, error) {
var cache mapper.MetricMapperCache
var err error
if cacheSize == 0 {
return nil, nil
} else {
switch cacheType {
case "lru":
cache, err = lru.NewMetricMapperLRUCache(registerer, cacheSize)
case "random":
cache, err = randomreplacement.NewMetricMapperRRCache(registerer, cacheSize)
default:
err = fmt.Errorf("unsupported cache type %q", cacheType)
}
if err != nil {
return nil, err
}
}
return cache, nil
}
func main() {
var (
listenAddress = kingpin.Flag("web.listen-address", "The address on which to expose the web interface and generated Prometheus metrics.").Default(":9102").String()
@ -261,7 +256,7 @@ func main() {
readBuffer = kingpin.Flag("statsd.read-buffer", "Size (in bytes) of the operating system's transmit read buffer associated with the UDP or Unixgram connection. Please make sure the kernel parameters net.core.rmem_max is set to a value greater than the value specified.").Int()
cacheSize = kingpin.Flag("statsd.cache-size", "Maximum size of your metric mapping cache. Relies on least recently used replacement policy if max size is reached.").Default("1000").Int()
cacheType = kingpin.Flag("statsd.cache-type", "Metric mapping cache type. Valid options are \"lru\" and \"random\"").Default("lru").Enum("lru", "random")
eventQueueSize = kingpin.Flag("statsd.event-queue-size", "Size of internal queue for processing events.").Default("10000").Int()
eventQueueSize = kingpin.Flag("statsd.event-queue-size", "Size of internal queue for processing events.").Default("10000").Uint()
eventFlushThreshold = kingpin.Flag("statsd.event-flush-threshold", "Number of events to hold in queue before flushing.").Default("1000").Int()
eventFlushInterval = kingpin.Flag("statsd.event-flush-interval", "Maximum time between event queue flushes.").Default("200ms").Duration()
dumpFSMPath = kingpin.Flag("debug.dump-fsm", "The path to dump internal FSM generated for glob matching as Dot file.").Default("").String()
@ -270,14 +265,23 @@ func main() {
influxdbTagsEnabled = kingpin.Flag("statsd.parse-influxdb-tags", "Parse InfluxDB style tags. Enabled by default.").Default("true").Bool()
libratoTagsEnabled = kingpin.Flag("statsd.parse-librato-tags", "Parse Librato style tags. Enabled by default.").Default("true").Bool()
signalFXTagsEnabled = kingpin.Flag("statsd.parse-signalfx-tags", "Parse SignalFX style tags. Enabled by default.").Default("true").Bool()
relayAddr = kingpin.Flag("statsd.relay.address", "The UDP relay target address (host:port)").String()
relayPacketLen = kingpin.Flag("statsd.relay.packet-length", "Maximum relay output packet length to avoid fragmentation").Default("1400").Uint()
udpPacketQueueSize = kingpin.Flag("statsd.udp-packet-queue-size", "Size of internal queue for processing UDP packets.").Default("10000").Int()
)
promlogConfig := &promlog.Config{}
flag.AddFlags(kingpin.CommandLine, promlogConfig)
kingpin.Version(version.Print("statsd_exporter"))
kingpin.CommandLine.UsageWriter(os.Stdout)
kingpin.HelpFlag.Short('h')
kingpin.Parse()
logger := promlog.New(promlogConfig)
if err := level.SetLogLevel(promlogConfig.Level.String()); err != nil {
level.Error(logger).Log("msg", "failed to set log level", "error", err)
os.Exit(1)
}
prometheus.MustRegister(version.NewCollector("statsd_exporter"))
parser := line.NewParser()
if *dogstatsdTagsEnabled {
@ -293,8 +297,6 @@ func main() {
parser.EnableSignalFXParsing()
}
cacheOption := mapper.WithCacheType(*cacheType)
level.Info(logger).Log("msg", "Starting StatsD -> Prometheus Exporter", "version", version.Info())
level.Info(logger).Log("msg", "Build context", "context", version.BuildContext())
@ -302,15 +304,23 @@ func main() {
defer close(events)
eventQueue := event.NewEventQueue(events, *eventFlushThreshold, *eventFlushInterval, eventsFlushed)
mapper := &mapper.MetricMapper{Registerer: prometheus.DefaultRegisterer, MappingsCount: mappingsCount}
thisMapper := &mapper.MetricMapper{Registerer: prometheus.DefaultRegisterer, MappingsCount: mappingsCount, Logger: logger}
cache, err := getCache(*cacheSize, *cacheType, thisMapper.Registerer)
if err != nil {
level.Error(logger).Log("msg", "Unable to setup metric mapper cache", "error", err)
os.Exit(1)
}
thisMapper.UseCache(cache)
if *mappingConfig != "" {
err := mapper.InitFromFile(*mappingConfig, *cacheSize, cacheOption)
err := thisMapper.InitFromFile(*mappingConfig)
if err != nil {
level.Error(logger).Log("msg", "error loading config", "error", err)
os.Exit(1)
}
if *dumpFSMPath != "" {
err := dumpFSM(mapper, *dumpFSMPath, logger)
err := dumpFSM(thisMapper, *dumpFSMPath, logger)
if err != nil {
level.Error(logger).Log("msg", "error dumping FSM", "error", err)
// Failure to dump the FSM is an error (the user asked for it and it
@ -318,17 +328,25 @@ func main() {
// afterwards).
}
}
} else {
mapper.InitCache(*cacheSize, cacheOption)
}
exporter := exporter.NewExporter(prometheus.DefaultRegisterer, mapper, logger, eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
exporter := exporter.NewExporter(prometheus.DefaultRegisterer, thisMapper, logger, eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
if *checkConfig {
level.Info(logger).Log("msg", "Configuration check successful, exiting")
return
}
var relayTarget *relay.Relay
if *relayAddr != "" {
var err error
relayTarget, err = relay.NewRelay(logger, *relayAddr, *relayPacketLen)
if err != nil {
level.Error(logger).Log("msg", "Unable to create relay", "err", err)
os.Exit(1)
}
}
level.Info(logger).Log("msg", "Accepting StatsD Traffic", "udp", *statsdListenUDP, "tcp", *statsdListenTCP, "unixgram", *statsdListenUnixgram)
level.Info(logger).Log("msg", "Accepting Prometheus Requests", "addr", *listenAddress)
@ -357,18 +375,23 @@ func main() {
}
}
udpPacketQueue := make(chan []byte, *udpPacketQueueSize)
ul := &listener.StatsDUDPListener{
Conn: uconn,
EventHandler: eventQueue,
Logger: logger,
LineParser: parser,
UDPPackets: udpPackets,
UDPPacketDrops: udpPacketDrops,
LinesReceived: linesReceived,
EventsFlushed: eventsFlushed,
Relay: relayTarget,
SampleErrors: *sampleErrors,
SamplesReceived: samplesReceived,
TagErrors: tagErrors,
TagsReceived: tagsReceived,
UdpPacketQueue: udpPacketQueue,
}
go ul.Listen()
@ -394,6 +417,7 @@ func main() {
LineParser: parser,
LinesReceived: linesReceived,
EventsFlushed: eventsFlushed,
Relay: relayTarget,
SampleErrors: *sampleErrors,
SamplesReceived: samplesReceived,
TagErrors: tagErrors,
@ -439,6 +463,7 @@ func main() {
UnixgramPackets: unixgramPackets,
LinesReceived: linesReceived,
EventsFlushed: eventsFlushed,
Relay: relayTarget,
SampleErrors: *sampleErrors,
SamplesReceived: samplesReceived,
TagErrors: tagErrors,
@ -463,20 +488,29 @@ func main() {
}
}
}
}
mux := http.NewServeMux()
mux := http.DefaultServeMux
mux.Handle(*metricsEndpoint, promhttp.Handler())
mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(`<html>
<head><title>StatsD Exporter</title></head>
<body>
<h1>StatsD Exporter</h1>
<p><a href="` + *metricsEndpoint + `">Metrics</a></p>
</body>
</html>`))
})
if *metricsEndpoint != "/" && *metricsEndpoint != "" {
landingConfig := web.LandingConfig{
Name: "StatsD Exporter",
Description: "Prometheus Exporter for converting StatsD to Prometheus metrics",
Version: version.Info(),
Links: []web.LandingLinks{
{
Address: *metricsEndpoint,
Text: "Metrics",
},
},
}
landingPage, err := web.NewLandingPage(landingConfig)
if err != nil {
level.Error(logger).Log("err", err)
os.Exit(1)
}
mux.Handle("/", landingPage)
}
quitChan := make(chan struct{}, 1)
@ -489,7 +523,7 @@ func main() {
return
}
level.Info(logger).Log("msg", "Received lifecycle api reload, attempting reload")
reloadConfig(*mappingConfig, mapper, *cacheSize, logger, cacheOption)
reloadConfig(*mappingConfig, thisMapper, logger)
}
})
mux.HandleFunc("/-/quit", func(w http.ResponseWriter, r *http.Request) {
@ -518,7 +552,7 @@ func main() {
go serveHTTP(mux, *listenAddress, logger)
go sighupConfigReloader(*mappingConfig, mapper, *cacheSize, logger, cacheOption)
go sighupConfigReloader(*mappingConfig, thisMapper, logger)
go exporter.Listen(events)
signals := make(chan os.Signal, 1)
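The main.go changes above drop the hand-maintained init()/prometheus.MustRegister block in favour of promauto constructors, which register each metric with the default registerer at construction time. A minimal sketch of that pattern outside this repository — the metric name and help text are taken from the diff above, everything else is illustrative:

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// promauto.NewCounter registers the counter with prometheus.DefaultRegisterer
// as a side effect, so no separate init()/MustRegister call is needed.
var linesReceived = promauto.NewCounter(prometheus.CounterOpts{
	Name: "statsd_exporter_lines_total",
	Help: "The total number of StatsD lines received.",
})

func main() {
	linesReceived.Inc()
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9102", nil))
}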

pkg/README.md

@ -0,0 +1,3 @@
The `pkg` directory is deprecated.
Please do not add new packages to this directory.
Existing packages will be moved elsewhere eventually.


@ -84,5 +84,4 @@ func TestEventIntervalFlush(t *testing.T) {
if len(events) != 10 {
t.Fatal("Expected 10 events in the event channel, but got", len(events))
}
}


@ -17,11 +17,12 @@ import (
"os"
"time"
"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/statsd_exporter/pkg/clock"
"github.com/prometheus/statsd_exporter/pkg/event"
"github.com/prometheus/statsd_exporter/pkg/level"
"github.com/prometheus/statsd_exporter/pkg/mapper"
"github.com/prometheus/statsd_exporter/pkg/registry"
)
@ -54,7 +55,6 @@ type Exporter struct {
// Listen handles all events sent to the given channel sequentially. It
// terminates when the channel is closed.
func (b *Exporter) Listen(e <-chan event.Events) {
removeStaleMetricsTicker := clock.NewTicker(time.Second)
for {
@ -76,7 +76,6 @@ func (b *Exporter) Listen(e <-chan event.Events) {
// handleEvent processes a single Event according to the configured mapping.
func (b *Exporter) handleEvent(thisEvent event.Event) {
mapping, labels, present := b.Mapper.GetMapping(thisEvent.MetricName(), thisEvent.MetricType())
if mapping == nil {
mapping = &mapper.MetricMapping{}
@ -106,6 +105,10 @@ func (b *Exporter) handleEvent(thisEvent event.Event) {
}
metricName = mapper.EscapeMetricName(mapping.Name)
for label, value := range labels {
if _, ok := prometheusLabels[label]; mapping.HonorLabels && ok {
continue
}
prometheusLabels[label] = value
}
b.EventsActions.WithLabelValues(string(mapping.Action)).Inc()
@ -114,19 +117,24 @@ func (b *Exporter) handleEvent(thisEvent event.Event) {
metricName = mapper.EscapeMetricName(thisEvent.MetricName())
}
eventValue := thisEvent.Value()
if mapping.Scale.Set {
eventValue *= mapping.Scale.Val
}
switch ev := thisEvent.(type) {
case *event.CounterEvent:
// We don't accept negative values for counters. Incrementing the counter with a negative number
// will cause the exporter to panic. Instead we will warn and continue to the next event.
if thisEvent.Value() < 0.0 {
level.Debug(b.Logger).Log("msg", "counter must be non-negative value", "metric", metricName, "event_value", thisEvent.Value())
if eventValue < 0.0 {
level.Debug(b.Logger).Log("msg", "counter must be non-negative value", "metric", metricName, "event_value", eventValue)
b.ErrorEventStats.WithLabelValues("illegal_negative_counter").Inc()
return
}
counter, err := b.Registry.GetCounter(metricName, prometheusLabels, help, mapping, b.MetricsCount)
if err == nil {
counter.Add(thisEvent.Value())
counter.Add(eventValue)
b.EventStats.WithLabelValues("counter").Inc()
} else {
level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
@ -138,9 +146,9 @@ func (b *Exporter) handleEvent(thisEvent event.Event) {
if err == nil {
if ev.GRelative {
gauge.Add(thisEvent.Value())
gauge.Add(eventValue)
} else {
gauge.Set(thisEvent.Value())
gauge.Set(eventValue)
}
b.EventStats.WithLabelValues("gauge").Inc()
} else {
@ -161,7 +169,7 @@ func (b *Exporter) handleEvent(thisEvent event.Event) {
case mapper.ObserverTypeHistogram:
histogram, err := b.Registry.GetHistogram(metricName, prometheusLabels, help, mapping, b.MetricsCount)
if err == nil {
histogram.Observe(thisEvent.Value())
histogram.Observe(eventValue)
b.EventStats.WithLabelValues("observer").Inc()
} else {
level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)
@ -171,7 +179,7 @@ func (b *Exporter) handleEvent(thisEvent event.Event) {
case mapper.ObserverTypeDefault, mapper.ObserverTypeSummary:
summary, err := b.Registry.GetSummary(metricName, prometheusLabels, help, mapping, b.MetricsCount)
if err == nil {
summary.Observe(thisEvent.Value())
summary.Observe(eventValue)
b.EventStats.WithLabelValues("observer").Inc()
} else {
level.Debug(b.Logger).Log("msg", regErrF, "metric", metricName, "error", err)

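For orientation, the handleEvent changes above wire in two mapping options that the tests further down exercise: `scale` multiplies the event value before it is recorded, and `honor_labels` lets labels carried by the incoming event override the mapping's static labels. A minimal mapping sketch in the style of the test configs (names and values are illustrative only):

mappings:
- match: "foo.processed_kilobytes"
  name: "processed_bytes"
  scale: 1024        # a counter event of 100 is recorded as 102400
  honor_labels: true # an incoming "service" label wins over the static label below
  labels:
    service: "foo"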
View file

@ -19,7 +19,7 @@ import (
"testing"
"time"
"github.com/go-kit/kit/log"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
@ -56,6 +56,12 @@ var (
Help: "The total number of StatsD packets received over UDP.",
},
)
udpPacketDrops = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_udp_packet_drops_total",
Help: "The total number of dropped StatsD packets which received over UDP.",
},
)
tcpConnections = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_tcp_connections_total",
@ -74,12 +80,6 @@ var (
Help: "The number of lines discarded due to being too long.",
},
)
unixgramPackets = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_unixgram_packets_total",
Help: "The total number of StatsD packets received over Unixgram.",
},
)
linesReceived = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "statsd_exporter_lines_total",
@ -111,17 +111,6 @@ var (
Help: "The number of errors parsing DogStatsD tags.",
},
)
configLoads = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_config_reloads_total",
Help: "The number of configuration reloads.",
},
[]string{"outcome"},
)
mappingsCount = prometheus.NewGauge(prometheus.GaugeOpts{
Name: "statsd_exporter_loaded_mappings",
Help: "The current number of configured metric mappings.",
})
conflictingEventStats = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_events_conflict_total",
@ -182,7 +171,6 @@ func TestNegativeCounter(t *testing.T) {
prev := getTelemetryCounterValue(errorCounter)
testMapper := mapper.MetricMapper{}
testMapper.InitCache(0)
ex := NewExporter(prometheus.DefaultRegisterer, &testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Listen(events)
@ -260,7 +248,7 @@ mappings:
name: "histogram_test"
`
testMapper := &mapper.MetricMapper{}
err := testMapper.InitFromYAMLString(config, 0)
err := testMapper.InitFromYAMLString(config)
if err != nil {
t.Fatalf("Config load error: %s %s", config, err)
}
@ -289,7 +277,7 @@ mappings:
// TestLabelParsing verifies that labels getting parsed out of metric
// names are being properly created.
func TestLabelParsing(t *testing.T) {
codes := [2]string{"200", "300"}
codes := [3]string{"200", "300", "400"}
events := make(chan event.Events)
go func() {
@ -304,6 +292,11 @@ func TestLabelParsing(t *testing.T) {
CValue: 1,
CLabels: make(map[string]string),
},
&event.CounterEvent{
CMetricName: "counter.test.400",
CValue: 1,
CLabels: map[string]string{"code": "should be overwritten"},
},
}
events <- c
close(events)
@ -318,7 +311,7 @@ mappings:
`
testMapper := &mapper.MetricMapper{}
err := testMapper.InitFromYAMLString(config, 0)
err := testMapper.InitFromYAMLString(config)
if err != nil {
t.Fatalf("Config load error: %s %s", config, err)
}
@ -341,6 +334,51 @@ mappings:
}
}
func TestHonorLabels(t *testing.T) {
metricName := "some_counter"
events := make(chan event.Events)
go func() {
c := event.Events{
&event.CounterEvent{
CMetricName: metricName,
CValue: 1,
CLabels: map[string]string{"some_label": "bar"},
},
}
events <- c
close(events)
}()
config := `
mappings:
- match: .*
match_type: regex
name: $0
labels:
some_label: foo
honor_labels: true
`
testMapper := &mapper.MetricMapper{
Logger: log.NewNopLogger(),
}
err := testMapper.InitFromYAMLString(config)
if err != nil {
t.Fatalf("Config load error: %s %s", config, err)
}
ex := NewExporter(prometheus.DefaultRegisterer, testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Listen(events)
metrics, err := prometheus.DefaultGatherer.Gather()
if err != nil {
t.Fatalf("Cannot gather from DefaultGatherer: %v", err)
}
if getFloat64(metrics, metricName, map[string]string{"some_label": "bar"}) == nil {
t.Fatalf("Could not find metrics for %s with label set", metricName)
}
}
// TestConflictingMetrics validates that the exporter will not register metrics
// of different types that have overlapping names.
func TestConflictingMetrics(t *testing.T) {
@ -419,6 +457,76 @@ func TestConflictingMetrics(t *testing.T) {
},
},
},
{
name: "histogram vs counter with count suffix",
expected: []float64{2},
in: event.Events{
&event.ObserverEvent{
OMetricName: "histogram_test1",
OValue: 2,
},
&event.CounterEvent{
CMetricName: "histogram_test1_count",
CValue: 1,
},
},
},
{
name: "histogram vs counter with sum suffix",
expected: []float64{2},
in: event.Events{
&event.ObserverEvent{
OMetricName: "histogram_test1",
OValue: 2,
},
&event.CounterEvent{
CMetricName: "histogram_test1_sum",
CValue: 1,
},
},
},
{
name: "histogram vs counter with bucket suffix",
expected: []float64{2},
in: event.Events{
&event.ObserverEvent{
OMetricName: "histogram_test1",
OValue: 2,
},
&event.CounterEvent{
CMetricName: "histogram_test1_bucket",
CValue: 1,
},
},
},
{
name: "histogram vs gauge with sum suffix",
expected: []float64{2},
in: event.Events{
&event.ObserverEvent{
OMetricName: "histogram_test1",
OValue: 2,
},
&event.GaugeEvent{
GMetricName: "histogram_test1_sum",
GValue: 1,
},
},
},
{
name: "histogram vs gauge with count suffix",
expected: []float64{2},
in: event.Events{
&event.ObserverEvent{
OMetricName: "histogram_test1",
OValue: 2,
},
&event.GaugeEvent{
GMetricName: "histogram_test1_count",
GValue: 1,
},
},
},
{
name: "counter vs histogram",
expected: []float64{1},
@ -528,7 +636,7 @@ mappings:
for _, s := range scenarios {
t.Run(s.name, func(t *testing.T) {
testMapper := &mapper.MetricMapper{}
err := testMapper.InitFromYAMLString(config, 0)
err := testMapper.InitFromYAMLString(config)
if err != nil {
t.Fatalf("Config load error: %s %s", config, err)
}
@ -538,10 +646,11 @@ mappings:
events <- s.in
close(events)
}()
ex := NewExporter(prometheus.DefaultRegisterer, testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
reg := prometheus.NewRegistry()
ex := NewExporter(reg, testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Listen(events)
metrics, err := prometheus.DefaultGatherer.Gather()
metrics, err := reg.Gather()
if err != nil {
t.Fatalf("Cannot gather from DefaultGatherer: %v", err)
}
@ -585,7 +694,7 @@ mappings:
name: "${1}"
`
testMapper := &mapper.MetricMapper{}
err := testMapper.InitFromYAMLString(config, 0)
err := testMapper.InitFromYAMLString(config)
if err != nil {
t.Fatalf("Config load error: %s %s", config, err)
}
@ -630,6 +739,7 @@ func TestInvalidUtf8InDatadogTagValue(t *testing.T) {
Logger: log.NewNopLogger(),
LineParser: parser,
UDPPackets: udpPackets,
UDPPacketDrops: udpPacketDrops,
LinesReceived: linesReceived,
EventsFlushed: eventsFlushed,
SampleErrors: *sampleErrors,
@ -658,7 +768,6 @@ func TestInvalidUtf8InDatadogTagValue(t *testing.T) {
}()
testMapper := mapper.MetricMapper{}
testMapper.InitCache(0)
ex := NewExporter(prometheus.DefaultRegisterer, &testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Listen(events)
@ -672,7 +781,6 @@ func TestSummaryWithQuantilesEmptyMapping(t *testing.T) {
events := make(chan event.Events)
go func() {
testMapper := mapper.MetricMapper{}
testMapper.InitCache(0)
ex := NewExporter(prometheus.DefaultRegisterer, &testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Listen(events)
@ -717,7 +825,6 @@ func TestHistogramUnits(t *testing.T) {
events := make(chan event.Events)
go func() {
testMapper := mapper.MetricMapper{}
testMapper.InitCache(0)
ex := NewExporter(prometheus.DefaultRegisterer, &testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Mapper.Defaults.ObserverType = mapper.ObserverTypeHistogram
ex.Listen(events)
@ -754,7 +861,6 @@ func TestCounterIncrement(t *testing.T) {
events := make(chan event.Events)
go func() {
testMapper := mapper.MetricMapper{}
testMapper.InitCache(0)
ex := NewExporter(prometheus.DefaultRegisterer, &testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Listen(events)
}()
@ -796,6 +902,115 @@ func TestCounterIncrement(t *testing.T) {
}
}
// Test case from https://github.com/statsd/statsd/blob/master/docs/metric_types.md#gauges
func TestGaugeIncrementDecrement(t *testing.T) {
// Start exporter with a synchronous channel
events := make(chan event.Events)
go func() {
testMapper := mapper.MetricMapper{}
ex := NewExporter(prometheus.DefaultRegisterer, &testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Listen(events)
}()
// Synchronously send a statsd event to wait for handleEvent execution.
// Then close events channel to stop a listener.
name := "gaugor"
c := event.Events{
&event.GaugeEvent{
GMetricName: "gaugor",
GValue: 333,
GRelative: false,
GLabels: map[string]string{},
},
&event.GaugeEvent{
GMetricName: "gaugor",
GValue: -10,
GRelative: true,
GLabels: map[string]string{},
},
&event.GaugeEvent{
GMetricName: "gaugor",
GValue: 4,
GRelative: true,
GLabels: map[string]string{},
},
}
events <- c
// Push empty event so that we block until the first event is consumed.
events <- event.Events{}
close(events)
// Check gauge value
metrics, err := prometheus.DefaultGatherer.Gather()
if err != nil {
t.Fatalf("Cannot gather from DefaultGatherer: %v", err)
}
value := getFloat64(metrics, name, nil)
if value == nil {
t.Fatal("gauge value should not be nil")
}
if *value != 327 {
t.Fatalf("gauge wasn't incremented and decremented properly")
}
}
func TestScaledMapping(t *testing.T) {
events := make(chan event.Events)
testMapper := mapper.MetricMapper{}
config := `mappings:
- match: foo.processed_kilobytes
name: processed_bytes
scale: 1024
labels:
service: foo`
err := testMapper.InitFromYAMLString(config)
if err != nil {
t.Fatalf("Config load error: %s %s", config, err)
}
// Start exporter with a synchronous channel
go func() {
ex := NewExporter(prometheus.DefaultRegisterer, &testMapper, log.NewNopLogger(), eventsActions, eventsUnmapped, errorEventStats, eventStats, conflictingEventStats, metricsCount)
ex.Listen(events)
}()
// Synchronously send a statsd event to wait for handleEvent execution.
// Then close events channel to stop a listener.
statsdName := "foo.processed_kilobytes"
statsdLabels := map[string]string{}
promName := "processed_bytes"
promLabels := map[string]string{"service": "foo"}
c := event.Events{
&event.CounterEvent{
CMetricName: statsdName,
CValue: 100,
CLabels: statsdLabels,
},
&event.CounterEvent{
CMetricName: statsdName,
CValue: 200,
CLabels: statsdLabels,
},
}
events <- c
// Push empty event so that we block until the first event is consumed.
events <- event.Events{}
close(events)
// Check counter value
metrics, err := prometheus.DefaultGatherer.Gather()
if err != nil {
t.Fatalf("Cannot gather from DefaultGatherer: %v", err)
}
value := getFloat64(metrics, promName, promLabels)
if value == nil {
t.Fatal("Counter value should not be nil")
}
if *value != 300*1024 {
t.Fatalf("Counter wasn't incremented properly")
}
}
type statsDPacketHandler interface {
HandlePacket(packet []byte)
SetEventHandler(eh event.EventHandler)
@ -857,7 +1072,7 @@ mappings:
`
// Create mapper from config and start an Exporter with a synchronous channel
testMapper := &mapper.MetricMapper{}
err := testMapper.InitFromYAMLString(config, 0)
err := testMapper.InitFromYAMLString(config)
if err != nil {
t.Fatalf("Config load error: %s %s", config, err)
}

pkg/level/level.go (new file, 97 lines)

@ -0,0 +1,97 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package level
import (
"fmt"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
)
var logLevel = LevelInfo
// A Level is a logging priority. Higher levels are more important.
type Level int
const (
// LevelDebug logs are typically voluminous, and are usually disabled in
// production.
LevelDebug Level = iota
// LevelInfo is the default logging priority.
LevelInfo
// LevelWarn logs are more important than Info, but don't need individual
// human review.
LevelWarn
// LevelError logs are high-priority. If an application is running smoothly,
// it shouldn't generate any error-level logs.
LevelError
)
var emptyLogger = &EmptyLogger{}
type EmptyLogger struct{}
func (l *EmptyLogger) Log(keyvals ...interface{}) error {
return nil
}
// SetLogLevel sets the log level.
func SetLogLevel(level string) error {
switch level {
case "debug":
logLevel = LevelDebug
case "info":
logLevel = LevelInfo
case "warn":
logLevel = LevelWarn
case "error":
logLevel = LevelError
default:
return fmt.Errorf("unrecognized log level %s", level)
}
return nil
}
// Error returns a logger that includes a Key/ErrorValue pair.
func Error(logger log.Logger) log.Logger {
if logLevel <= LevelError {
return level.Error(logger)
}
return emptyLogger
}
// Warn returns a logger that includes a Key/WarnValue pair.
func Warn(logger log.Logger) log.Logger {
if logLevel <= LevelWarn {
return level.Warn(logger)
}
return emptyLogger
}
// Info returns a logger that includes a Key/InfoValue pair.
func Info(logger log.Logger) log.Logger {
if logLevel <= LevelInfo {
return level.Info(logger)
}
return emptyLogger
}
// Debug returns a logger that includes a Key/DebugValue pair.
func Debug(logger log.Logger) log.Logger {
if logLevel <= LevelDebug {
return level.Debug(logger)
}
return emptyLogger
}
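A minimal usage sketch for this new package, assuming a go-kit logfmt logger: SetLogLevel decides which of the leveled wrappers forward to the underlying logger and which return the no-op EmptyLogger.

package main

import (
	"os"

	"github.com/go-kit/log"

	"github.com/prometheus/statsd_exporter/pkg/level"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stderr)
	if err := level.SetLogLevel("warn"); err != nil {
		panic(err)
	}
	// Below the configured level: Info returns the EmptyLogger, so nothing is written.
	level.Info(logger).Log("msg", "suppressed")
	// At or above the configured level: delegates to go-kit's level.Warn.
	level.Warn(logger).Log("msg", "emitted")
}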

pkg/level/level_test.go (new file, 110 lines)

@ -0,0 +1,110 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package level
import (
"bytes"
"strings"
"testing"
"github.com/go-kit/log"
)
func TestSetLogLevel(t *testing.T) {
tests := []struct {
name string
level string
logLevel Level
wantErr bool
}{
{"wrong level", "foo", LevelInfo, true},
{"level debug", "debug", LevelDebug, false},
{"level info", "info", LevelInfo, false},
{"level warn", "warn", LevelWarn, false},
{"level error", "error", LevelError, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := SetLogLevel(tt.level); (err != nil) != tt.wantErr {
t.Fatalf("Expected log level to be set successfully, but got %v", err)
}
if tt.logLevel != logLevel {
t.Fatalf("Expected log level %v, but got %v", tt.logLevel, logLevel)
}
})
}
}
func TestVariousLevels(t *testing.T) {
tests := []struct {
name string
level string
want string
}{
{
"level debug",
"debug",
strings.Join([]string{
"level=debug log=debug",
"level=info log=info",
"level=warn log=warn",
"level=error log=error",
}, "\n"),
},
{
"level info",
"info",
strings.Join([]string{
"level=info log=info",
"level=warn log=warn",
"level=error log=error",
}, "\n"),
},
{
"level warn",
"warn",
strings.Join([]string{
"level=warn log=warn",
"level=error log=error",
}, "\n"),
},
{
"level error",
"error",
strings.Join([]string{
"level=error log=error",
}, "\n"),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var buf bytes.Buffer
logger := log.NewLogfmtLogger(&buf)
if err := SetLogLevel(tt.level); err != nil {
t.Fatalf("Expected log level to be set successfully, but got %v", err)
}
Debug(logger).Log("log", "debug")
Info(logger).Log("log", "info")
Warn(logger).Log("log", "warn")
Error(logger).Log("log", "error")
got := strings.TrimSpace(buf.String())
if tt.want != got {
t.Fatalf("Expected log output %v, but got %v", tt.want, got)
}
})
}
}

View file

@ -19,11 +19,11 @@ import (
"strings"
"unicode/utf8"
"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/statsd_exporter/pkg/event"
"github.com/prometheus/statsd_exporter/pkg/level"
"github.com/prometheus/statsd_exporter/pkg/mapper"
)
@ -239,7 +239,6 @@ samples:
for _, sample := range samples {
samplesReceived.Inc()
components := strings.Split(sample, "|")
samplingFactor := 1.0
if len(components) < 2 || len(components) > 4 {
sampleErrors.WithLabelValues("malformed_component").Inc()
level.Debug(logger).Log("msg", "Bad component", "line", line)
@ -273,7 +272,7 @@ samples:
switch component[0] {
case '@':
samplingFactor, err = strconv.ParseFloat(component[1:], 64)
samplingFactor, err := strconv.ParseFloat(component[1:], 64)
if err != nil {
level.Debug(logger).Log("msg", "Invalid sampling factor", "component", component[1:], "line", line)
sampleErrors.WithLabelValues("invalid_sample_factor").Inc()

View file

@ -17,7 +17,7 @@ import (
"reflect"
"testing"
"github.com/go-kit/kit/log"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/statsd_exporter/pkg/event"

View file

@ -20,11 +20,12 @@ import (
"os"
"strings"
"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/statsd_exporter/pkg/event"
"github.com/prometheus/statsd_exporter/pkg/level"
"github.com/prometheus/statsd_exporter/pkg/relay"
)
type Parser interface {
@ -37,12 +38,15 @@ type StatsDUDPListener struct {
Logger log.Logger
LineParser Parser
UDPPackets prometheus.Counter
UDPPacketDrops prometheus.Counter
LinesReceived prometheus.Counter
EventsFlushed prometheus.Counter
Relay *relay.Relay
SampleErrors prometheus.CounterVec
SamplesReceived prometheus.Counter
TagErrors prometheus.Counter
TagsReceived prometheus.Counter
UdpPacketQueue chan []byte
}
func (l *StatsDUDPListener) SetEventHandler(eh event.EventHandler) {
@ -51,6 +55,7 @@ func (l *StatsDUDPListener) SetEventHandler(eh event.EventHandler) {
func (l *StatsDUDPListener) Listen() {
buf := make([]byte, 65535)
go l.ProcessUdpPacketQueue()
for {
n, _, err := l.Conn.ReadFromUDP(buf)
if err != nil {
@ -62,16 +67,38 @@ func (l *StatsDUDPListener) Listen() {
level.Error(l.Logger).Log("error", err)
return
}
l.HandlePacket(buf[0:n])
l.EnqueueUdpPacket(buf, n)
}
}
func (l *StatsDUDPListener) EnqueueUdpPacket(packet []byte, n int) {
l.UDPPackets.Inc()
packetCopy := make([]byte, n)
copy(packetCopy, packet)
select {
case l.UdpPacketQueue <- packetCopy:
// do nothing
default:
l.UDPPacketDrops.Inc()
}
}
func (l *StatsDUDPListener) ProcessUdpPacketQueue() {
for {
packet := <-l.UdpPacketQueue
l.HandlePacket(packet)
}
}
func (l *StatsDUDPListener) HandlePacket(packet []byte) {
l.UDPPackets.Inc()
lines := strings.Split(string(packet), "\n")
for _, line := range lines {
level.Debug(l.Logger).Log("msg", "Incoming line", "proto", "udp", "line", line)
l.LinesReceived.Inc()
if l.Relay != nil && len(line) > 0 {
l.Relay.RelayLine(line)
}
l.EventHandler.Queue(l.LineParser.LineToEvents(line, l.SampleErrors, l.SamplesReceived, l.TagErrors, l.TagsReceived, l.Logger))
}
}
@ -83,6 +110,7 @@ type StatsDTCPListener struct {
LineParser Parser
LinesReceived prometheus.Counter
EventsFlushed prometheus.Counter
Relay *relay.Relay
SampleErrors prometheus.CounterVec
SamplesReceived prometheus.Counter
TagErrors prometheus.Counter
@ -127,13 +155,16 @@ func (l *StatsDTCPListener) HandleConn(c *net.TCPConn) {
}
break
}
level.Debug(l.Logger).Log("msg", "Incoming line", "proto", "tcp", "line", line)
level.Debug(l.Logger).Log("msg", "Incoming line", "proto", "tcp", "line", string(line))
if isPrefix {
l.TCPLineTooLong.Inc()
level.Debug(l.Logger).Log("msg", "Read failed: line too long", "addr", c.RemoteAddr())
break
}
l.LinesReceived.Inc()
if l.Relay != nil && len(line) > 0 {
l.Relay.RelayLine(string(line))
}
l.EventHandler.Queue(l.LineParser.LineToEvents(string(line), l.SampleErrors, l.SamplesReceived, l.TagErrors, l.TagsReceived, l.Logger))
}
}
@ -146,6 +177,7 @@ type StatsDUnixgramListener struct {
UnixgramPackets prometheus.Counter
LinesReceived prometheus.Counter
EventsFlushed prometheus.Counter
Relay *relay.Relay
SampleErrors prometheus.CounterVec
SamplesReceived prometheus.Counter
TagErrors prometheus.Counter
@ -179,6 +211,9 @@ func (l *StatsDUnixgramListener) HandlePacket(packet []byte) {
for _, line := range lines {
level.Debug(l.Logger).Log("msg", "Incoming line", "proto", "unixgram", "line", line)
l.LinesReceived.Inc()
if l.Relay != nil && len(line) > 0 {
l.Relay.RelayLine(line)
}
l.EventHandler.Queue(l.LineParser.LineToEvents(line, l.SampleErrors, l.SamplesReceived, l.TagErrors, l.TagsReceived, l.Logger))
}
}
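The UDP path above now decouples reading from parsing: Listen only copies each datagram into UdpPacketQueue, ProcessUdpPacketQueue drains it in a separate goroutine, and a full queue increments UDPPacketDrops instead of blocking the read loop. The queue itself is a buffered channel supplied by the caller. A standalone sketch of the non-blocking enqueue pattern (the buffer size and the plain counter are stand-ins, not project defaults):

package main

import "fmt"

func main() {
	queue := make(chan []byte, 2) // deliberately tiny buffer so the drop path is visible
	drops := 0

	enqueue := func(p []byte) {
		packetCopy := make([]byte, len(p))
		copy(packetCopy, p)
		select {
		case queue <- packetCopy:
			// queued for the processing goroutine
		default:
			drops++ // queue full: count a drop rather than block the read loop
		}
	}

	for i := 0; i < 4; i++ {
		enqueue([]byte{byte(i)})
	}
	fmt.Println("queued:", len(queue), "dropped:", drops) // queued: 2 dropped: 2
}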

View file

@ -39,6 +39,9 @@ func EscapeMetricName(metricName string) string {
// This is a character replacement method optimized for this limited
// use case. It is much faster than using a regex.
offset := 0
var prevChar rune
for i, c := range metricName {
// Seek forward, skipping valid characters until we find one that needs
// to be replaced, then add all the characters we've seen so far to the
@ -47,6 +50,13 @@ func EscapeMetricName(metricName string) string {
(c >= '0' && c <= '9') || (c == '_') {
// Character is valid, so skip over it without doing anything.
} else {
// Double-dashes are allowed if there is a corresponding mapping.
// For consistency, double-dashes should also be allowed in the default case.
if c == '-' && prevChar == '-' {
offset = i + utf8.RuneLen(c)
continue
}
if !escaped {
// Up until now we've been lazy and avoided actually allocating
// memory. Unfortunately we've now determined this string needs
@ -58,6 +68,8 @@ func EscapeMetricName(metricName string) string {
offset = i + utf8.RuneLen(c)
sb.WriteByte('_')
}
prevChar = c
}
if !escaped {

View file

@ -20,6 +20,8 @@ func TestEscapeMetricName(t *testing.T) {
"clean": "clean",
"0starts_with_digit": "_0starts_with_digit",
"with_underscore": "with_underscore",
"with--doubledash": "with_doubledash",
"with---multiple-dashes": "with_multiple_dashes",
"with.dot": "with_dot",
"with😱emoji": "with_emoji",
"with.*.multiple": "with___multiple",
@ -39,6 +41,8 @@ func BenchmarkEscapeMetricName(b *testing.B) {
"clean",
"0starts_with_digit",
"with_underscore",
"with--doubledash",
"with---multiple-dashes",
"with.dot",
"with😱emoji",
"with.*.multiple",

View file

@ -43,6 +43,6 @@ func (f *FSM) DumpFSM(w io.Writer) {
idx++
}
// color for start state
w.Write([]byte(fmt.Sprintf("0 [color=\"#a94442\",fillcolor=\"#f2dede\"];\n")))
w.Write([]byte(fmt.Sprintln("0 [color=\"#a94442\",fillcolor=\"#f2dede\"];")))
w.Write([]byte("}"))
}

View file

@ -17,7 +17,9 @@ import (
"regexp"
"strings"
"github.com/prometheus/common/log"
"github.com/go-kit/log"
"github.com/prometheus/statsd_exporter/pkg/level"
)
type mappingState struct {
@ -232,7 +234,7 @@ func (f *FSM) GetMapping(statsdMetric string, statsdMetricType string) (*mapping
// TestIfNeedBacktracking tests if backtrack is needed for given list of mappings
// and whether ordering is disabled.
func TestIfNeedBacktracking(mappings []string, orderingDisabled bool) bool {
func TestIfNeedBacktracking(mappings []string, orderingDisabled bool, logger log.Logger) bool {
backtrackingNeeded := false
// A has * in its rules, but there are other transitions at the same state,
// this makes A the cause of backtracking
@ -248,7 +250,7 @@ func TestIfNeedBacktracking(mappings []string, orderingDisabled bool) bool {
metricRe = strings.Replace(metricRe, "*", "([^.]*)", -1)
regex, err := regexp.Compile("^" + metricRe + "$")
if err != nil {
log.Warnf("invalid match %s. cannot compile regex in mapping: %v", mapping, err)
level.Warn(logger).Log("msg", "Invalid match, cannot compile regex in mapping", "mapping", mapping, "err", err)
}
// put into the array regardless of errors; we will skip later if the regex is nil
ruleREByLength[l] = append(ruleREByLength[l], regex)
@ -291,8 +293,7 @@ func TestIfNeedBacktracking(mappings []string, orderingDisabled bool) bool {
if i2 != i1 && len(re1.FindStringSubmatchIndex(r2)) > 0 {
// log if we care about ordering and the superset occurs before
if !orderingDisabled && i1 < i2 {
log.Warnf("match \"%s\" is a super set of match \"%s\" but in a lower order, "+
"the first will never be matched", r1, r2)
level.Warn(logger).Log("msg", "match is a super set of match but in a lower order, the first will never be matched", "first_match", r1, "second_match", r2)
}
currentRuleNeedBacktrack = false
}
@ -310,8 +311,7 @@ func TestIfNeedBacktracking(mappings []string, orderingDisabled bool) bool {
}
if currentRuleNeedBacktrack {
log.Warnf("backtracking required because of match \"%s\", "+
"matching performance may be degraded", r1)
level.Warn(logger).Log("msg", "backtracking required because of match. Performance may be degraded", "match", r1)
backtrackingNeeded = true
}
}

View file

@ -15,29 +15,35 @@ package mapper
import (
"fmt"
"io/ioutil"
"os"
"regexp"
"sync"
"time"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"gopkg.in/yaml.v2"
"github.com/prometheus/statsd_exporter/pkg/level"
"github.com/prometheus/statsd_exporter/pkg/mapper/fsm"
yaml "gopkg.in/yaml.v2"
)
var (
statsdMetricRE = `[a-zA-Z_](-?[a-zA-Z0-9_])*`
templateReplaceRE = `(\$\{?\d+\}?)`
// The first segment of a match cannot start with a number
statsdMetricRE = `[a-zA-Z_]([a-zA-Z0-9_\-])*`
// The subsequent segments of a match can start with a number
// See https://github.com/prometheus/statsd_exporter/issues/328
statsdMetricSubsequentRE = `[a-zA-Z0-9_]([a-zA-Z0-9_\-])*`
templateReplaceRE = `(\$\{?\d+\}?)`
metricLineRE = regexp.MustCompile(`^(\*\.|` + statsdMetricRE + `\.)+(\*|` + statsdMetricRE + `)$`)
metricLineRE = regexp.MustCompile(`^(\*|` + statsdMetricRE + `)(\.\*|\.` + statsdMetricSubsequentRE + `)*$`)
metricNameRE = regexp.MustCompile(`^([a-zA-Z_]|` + templateReplaceRE + `)([a-zA-Z0-9_]|` + templateReplaceRE + `)*$`)
labelNameRE = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]+$`)
)
type MetricMapper struct {
Registerer prometheus.Registerer
Defaults mapperConfigDefaults `yaml:"defaults"`
Defaults MapperConfigDefaults `yaml:"defaults"`
Mappings []MetricMapping `yaml:"mappings"`
FSM *fsm.FSM
doFSM bool
@ -46,43 +52,53 @@ type MetricMapper struct {
mutex sync.RWMutex
MappingsCount prometheus.Gauge
Logger log.Logger
}
type SummaryOptions struct {
Quantiles []metricObjective `yaml:"quantiles"`
Quantiles []MetricObjective `yaml:"quantiles"`
MaxAge time.Duration `yaml:"max_age"`
AgeBuckets uint32 `yaml:"age_buckets"`
BufCap uint32 `yaml:"buf_cap"`
}
type HistogramOptions struct {
Buckets []float64 `yaml:"buckets"`
Buckets []float64 `yaml:"buckets"`
NativeHistogramBucketFactor float64 `yaml:"native_histogram_bucket_factor"`
NativeHistogramMaxBuckets uint32 `yaml:"native_histogram_max_buckets"`
}
type metricObjective struct {
type MetricObjective struct {
Quantile float64 `yaml:"quantile"`
Error float64 `yaml:"error"`
}
var defaultQuantiles = []metricObjective{
var defaultQuantiles = []MetricObjective{
{Quantile: 0.5, Error: 0.05},
{Quantile: 0.9, Error: 0.01},
{Quantile: 0.99, Error: 0.001},
}
func (m *MetricMapper) InitFromYAMLString(fileContents string, cacheSize int, options ...CacheOption) error {
func (m *MetricMapper) InitFromYAMLString(fileContents string) error {
var n MetricMapper
if err := yaml.Unmarshal([]byte(fileContents), &n); err != nil {
return err
}
if n.Defaults.Buckets == nil || len(n.Defaults.Buckets) == 0 {
n.Defaults.Buckets = prometheus.DefBuckets
if len(n.Defaults.HistogramOptions.Buckets) == 0 {
n.Defaults.HistogramOptions.Buckets = prometheus.DefBuckets
}
if n.Defaults.HistogramOptions.NativeHistogramBucketFactor == 0 {
n.Defaults.HistogramOptions.NativeHistogramBucketFactor = 1.1
}
if n.Defaults.HistogramOptions.NativeHistogramMaxBuckets <= 0 {
n.Defaults.HistogramOptions.NativeHistogramMaxBuckets = 256
}
if n.Defaults.Quantiles == nil || len(n.Defaults.Quantiles) == 0 {
n.Defaults.Quantiles = defaultQuantiles
if len(n.Defaults.SummaryOptions.Quantiles) == 0 {
n.Defaults.SummaryOptions.Quantiles = defaultQuantiles
}
if n.Defaults.MatchType == MatchTypeDefault {
@ -143,7 +159,6 @@ func (m *MetricMapper) InitFromYAMLString(fileContents string, cacheSize int, op
}
currentMapping.labelFormatters = labelFormatters
currentMapping.labelKeys = labelKeys
} else {
if regex, err := regexp.Compile(currentMapping.Match); err != nil {
return fmt.Errorf("invalid regex %s in mapping: %v", currentMapping.Match, err)
@ -159,12 +174,12 @@ func (m *MetricMapper) InitFromYAMLString(fileContents string, cacheSize int, op
if currentMapping.LegacyQuantiles != nil &&
(currentMapping.SummaryOptions == nil || currentMapping.SummaryOptions.Quantiles != nil) {
log.Warn("using the top level quantiles is deprecated. Please use quantiles in the summary_options hierarchy")
level.Warn(m.Logger).Log("msg", "using the top level quantiles is deprecated. Please use quantiles in the summary_options hierarchy")
}
if currentMapping.LegacyBuckets != nil &&
(currentMapping.HistogramOptions == nil || currentMapping.HistogramOptions.Buckets != nil) {
log.Warn("using the top level buckets is deprecated. Please use buckets in the histogram_options hierarchy")
level.Warn(m.Logger).Log("msg", "using the top level buckets is deprecated. Please use buckets in the histogram_options hierarchy")
}
if currentMapping.SummaryOptions != nil &&
@ -190,7 +205,7 @@ func (m *MetricMapper) InitFromYAMLString(fileContents string, cacheSize int, op
currentMapping.HistogramOptions.Buckets = currentMapping.LegacyBuckets
}
if currentMapping.HistogramOptions.Buckets == nil || len(currentMapping.HistogramOptions.Buckets) == 0 {
currentMapping.HistogramOptions.Buckets = n.Defaults.Buckets
currentMapping.HistogramOptions.Buckets = n.Defaults.HistogramOptions.Buckets
}
}
@ -205,22 +220,38 @@ func (m *MetricMapper) InitFromYAMLString(fileContents string, cacheSize int, op
currentMapping.SummaryOptions.Quantiles = currentMapping.LegacyQuantiles
}
if currentMapping.SummaryOptions.Quantiles == nil || len(currentMapping.SummaryOptions.Quantiles) == 0 {
currentMapping.SummaryOptions.Quantiles = n.Defaults.Quantiles
currentMapping.SummaryOptions.Quantiles = n.Defaults.SummaryOptions.Quantiles
}
if currentMapping.SummaryOptions.MaxAge == 0 {
currentMapping.SummaryOptions.MaxAge = n.Defaults.SummaryOptions.MaxAge
}
if currentMapping.SummaryOptions.AgeBuckets == 0 {
currentMapping.SummaryOptions.AgeBuckets = n.Defaults.SummaryOptions.AgeBuckets
}
if currentMapping.SummaryOptions.BufCap == 0 {
currentMapping.SummaryOptions.BufCap = n.Defaults.SummaryOptions.BufCap
}
}
if currentMapping.Ttl == 0 && n.Defaults.Ttl > 0 {
currentMapping.Ttl = n.Defaults.Ttl
}
}
m.mutex.Lock()
defer m.mutex.Unlock()
if m.Logger == nil {
m.Logger = log.NewNopLogger()
}
m.Defaults = n.Defaults
m.Mappings = n.Mappings
m.InitCache(cacheSize, options...)
// Reset the cache since this function can be used to reload config
if m.cache != nil {
m.cache.Reset()
}
if n.doFSM {
var mappings []string
@ -229,7 +260,7 @@ func (m *MetricMapper) InitFromYAMLString(fileContents string, cacheSize int, op
mappings = append(mappings, mapping.Match)
}
}
n.FSM.BacktrackingNeeded = fsm.TestIfNeedBacktracking(mappings, n.FSM.OrderingDisabled)
n.FSM.BacktrackingNeeded = fsm.TestIfNeedBacktracking(mappings, n.FSM.OrderingDisabled, m.Logger)
m.FSM = n.FSM
m.doRegex = n.doRegex
@ -239,56 +270,40 @@ func (m *MetricMapper) InitFromYAMLString(fileContents string, cacheSize int, op
if m.MappingsCount != nil {
m.MappingsCount.Set(float64(len(n.Mappings)))
}
return nil
}
func (m *MetricMapper) InitFromFile(fileName string, cacheSize int, options ...CacheOption) error {
mappingStr, err := ioutil.ReadFile(fileName)
func (m *MetricMapper) InitFromFile(fileName string) error {
mappingStr, err := os.ReadFile(fileName)
if err != nil {
return err
}
return m.InitFromYAMLString(string(mappingStr), cacheSize, options...)
return m.InitFromYAMLString(string(mappingStr))
}
func (m *MetricMapper) InitCache(cacheSize int, options ...CacheOption) {
if cacheSize == 0 {
m.cache = NewMetricMapperNoopCache(m.Registerer)
} else {
o := cacheOptions{
cacheType: "lru",
}
for _, f := range options {
f(&o)
}
var (
cache MetricMapperCache
err error
)
switch o.cacheType {
case "lru":
cache, err = NewMetricMapperCache(m.Registerer, cacheSize)
case "random":
cache, err = NewMetricMapperRRCache(m.Registerer, cacheSize)
default:
err = fmt.Errorf("unsupported cache type %q", o.cacheType)
}
if err != nil {
log.Fatalf("Unable to setup metric cache. Caused by: %s", err)
}
m.cache = cache
}
// UseCache tells the mapper to use a cache that implements the MetricMapperCache interface.
// This cache MUST be thread-safe!
func (m *MetricMapper) UseCache(cache MetricMapperCache) {
m.mutex.Lock()
defer m.mutex.Unlock()
m.cache = cache
}
func (m *MetricMapper) GetMapping(statsdMetric string, statsdMetricType MetricType) (*MetricMapping, prometheus.Labels, bool) {
m.mutex.RLock()
defer m.mutex.RUnlock()
result, cached := m.cache.Get(statsdMetric, statsdMetricType)
if cached {
return result.Mapping, result.Labels, result.Matched
// only use a cache if one is present
if m.cache != nil {
result, cached := m.cache.Get(formatKey(statsdMetric, statsdMetricType))
if cached {
r := result.(MetricMapperCacheResult)
return r.Mapping, r.Labels, r.Matched
}
}
// glob matching
if m.doFSM {
finalState, captures := m.FSM.GetMapping(statsdMetric, string(statsdMetricType))
@ -302,12 +317,23 @@ func (m *MetricMapper) GetMapping(statsdMetric string, statsdMetricType MetricTy
labels[result.labelKeys[index]] = formatter.Format(captures)
}
m.cache.AddMatch(statsdMetric, statsdMetricType, result, labels)
r := MetricMapperCacheResult{
Mapping: result,
Matched: true,
Labels: labels,
}
// add match to cache
if m.cache != nil {
m.cache.Add(formatKey(statsdMetric, statsdMetricType), r)
}
return result, labels, true
} else if !m.doRegex {
// if there's no regex match type, return immediately
m.cache.AddMiss(statsdMetric, statsdMetricType)
// Add miss to cache
if m.cache != nil {
m.cache.Add(formatKey(statsdMetric, statsdMetricType), MetricMapperCacheResult{})
}
return nil, nil, false
}
}
@ -340,19 +366,29 @@ func (m *MetricMapper) GetMapping(statsdMetric string, statsdMetricType MetricTy
labels[label] = string(value)
}
m.cache.AddMatch(statsdMetric, statsdMetricType, &mapping, labels)
r := MetricMapperCacheResult{
Mapping: &mapping,
Matched: true,
Labels: labels,
}
// Add Match to cache
if m.cache != nil {
m.cache.Add(formatKey(statsdMetric, statsdMetricType), r)
}
return &mapping, labels, true
}
m.cache.AddMiss(statsdMetric, statsdMetricType)
// Add Miss to cache
if m.cache != nil {
m.cache.Add(formatKey(statsdMetric, statsdMetricType), MetricMapperCacheResult{})
}
return nil, nil, false
}
// make a shallow copy so that we do not overwrite name
// as multiple names can be matched by same mapping
func copyMetricMapping(in *MetricMapping) *MetricMapping {
var out MetricMapping
out = *in
out := *in
return &out
}
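With the cache options removed from InitFromYAMLString and InitFromFile, caching is now opt-in via UseCache and the implementations under pkg/mappercache. A hedged wiring sketch, assuming the MetricTypeCounter constant from pkg/mapper and an LRU cache of 1000 entries (both choices are illustrative):

package main

import (
	"fmt"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"

	"github.com/prometheus/statsd_exporter/pkg/mapper"
	"github.com/prometheus/statsd_exporter/pkg/mappercache/lru"
)

func main() {
	config := `
mappings:
- match: "test.dispatcher.*.*"
  name: "dispatcher_events_total"
  labels:
    processor: "$1"
    outcome: "$2"
`
	m := &mapper.MetricMapper{Logger: log.NewNopLogger()}
	if err := m.InitFromYAMLString(config); err != nil {
		panic(err)
	}

	// Caching is optional now; without UseCache the mapper simply skips the cache.
	cache, err := lru.NewMetricMapperLRUCache(prometheus.NewRegistry(), 1000)
	if err != nil {
		panic(err)
	}
	m.UseCache(cache)

	mapping, labels, ok := m.GetMapping("test.dispatcher.emailer.sent", mapper.MetricTypeCounter)
	if ok {
		fmt.Println(mapping.Name, labels) // dispatcher_events_total map[outcome:sent processor:emailer]
	}
}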

View file

@ -17,6 +17,11 @@ import (
"fmt"
"math/rand"
"testing"
"github.com/go-kit/log"
"github.com/prometheus/statsd_exporter/pkg/mappercache/lru"
"github.com/prometheus/statsd_exporter/pkg/mappercache/randomreplacement"
)
var (
@ -105,7 +110,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -169,7 +174,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -239,8 +244,10 @@ mappings:
"foo.bar.baz",
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
mapper := MetricMapper{
Logger: log.NewNopLogger(),
}
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -304,7 +311,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -331,7 +338,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -359,7 +366,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -385,7 +392,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -412,7 +419,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -438,7 +445,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -465,7 +472,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -502,7 +509,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -540,7 +547,7 @@ mappings:
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -561,7 +568,7 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -582,9 +589,10 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
}
for _, cacheType := range []string{"lru", "random"} {
mapper := newTestMapperWithCache(cacheType, 1000)
b.Run(cacheType, func(b *testing.B) {
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 1000, WithCacheType(cacheType))
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -609,7 +617,7 @@ mappings:` + duplicateRules(10, ruleTemplateSingleMatchRegex)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -632,9 +640,19 @@ mappings:` + duplicateRules(10, ruleTemplateSingleMatchRegex)
}
for _, cacheType := range []string{"lru", "random"} {
mapper := MetricMapper{}
var cache MetricMapperCache
switch cacheType {
case "lru":
cache, _ = lru.NewMetricMapperLRUCache(mapper.Registerer, 1000)
case "random":
cache, _ = randomreplacement.NewMetricMapperRRCache(mapper.Registerer, 1000)
}
mapper.UseCache(cache)
b.Run(cacheType, func(b *testing.B) {
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 1000, WithCacheType(cacheType))
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -657,7 +675,7 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -678,9 +696,10 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
}
for _, cacheType := range []string{"lru", "random"} {
mapper := newTestMapperWithCache(cacheType, 1000)
b.Run(cacheType, func(b *testing.B) {
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 1000, WithCacheType(cacheType))
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -703,7 +722,7 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -726,7 +745,7 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -749,7 +768,7 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchRegex)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -772,7 +791,7 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchRegex)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -793,7 +812,7 @@ mappings:` + duplicateRules(100, ruleTemplateMultipleMatchGlob)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -814,9 +833,10 @@ mappings:` + duplicateRules(100, ruleTemplateMultipleMatchGlob)
}
for _, cacheType := range []string{"lru", "random"} {
mapper := newTestMapperWithCache(cacheType, 1000)
b.Run(cacheType, func(b *testing.B) {
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 1000, WithCacheType(cacheType))
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -841,7 +861,7 @@ mappings:` + duplicateRules(100, ruleTemplateMultipleMatchRegex)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -864,7 +884,7 @@ mappings:` + duplicateRules(100, ruleTemplateMultipleMatchRegex)
}
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 0)
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -887,9 +907,10 @@ mappings:` + duplicateRules(100, ruleTemplateMultipleMatchRegex)
}
for _, cacheType := range []string{"lru", "random"} {
mapper := newTestMapperWithCache(cacheType, 1000)
b.Run(cacheType, func(b *testing.B) {
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 1000, WithCacheType(cacheType))
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -914,14 +935,15 @@ func duplicateMetrics(count int, template string) []string {
func BenchmarkGlob100RulesCached100Metrics(b *testing.B) {
config := `---
mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
mappings:` + duplicateRules(101, ruleTemplateSingleMatchGlob)
mappings := duplicateMetrics(100, "metric100")
for _, cacheType := range []string{"lru", "random"} {
mapper := newTestMapperWithCache(cacheType, 1000)
b.Run(cacheType, func(b *testing.B) {
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 1000, WithCacheType(cacheType))
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -947,9 +969,10 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
mappings := duplicateMetrics(100, "metric100")
for _, cacheType := range []string{"lru", "random"} {
mapper := newTestMapperWithCache(cacheType, 1000)
b.Run(cacheType, func(b *testing.B) {
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 50, WithCacheType(cacheType))
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}
@ -982,9 +1005,9 @@ mappings:` + duplicateRules(100, ruleTemplateSingleMatchGlob)
})
for _, cacheType := range []string{"lru", "random"} {
mapper := newTestMapperWithCache(cacheType, 50)
b.Run(cacheType, func(b *testing.B) {
mapper := MetricMapper{}
err := mapper.InitFromYAMLString(config, 50, WithCacheType(cacheType))
err := mapper.InitFromYAMLString(config)
if err != nil {
b.Fatalf("Config load error: %s %s", config, err)
}

View file

@ -14,9 +14,6 @@
package mapper
import (
"sync"
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
)
@ -56,155 +53,22 @@ func NewCacheMetrics(reg prometheus.Registerer) *CacheMetrics {
return &m
}
type cacheOptions struct {
cacheType string
}
type CacheOption func(*cacheOptions)
func WithCacheType(cacheType string) CacheOption {
return func(o *cacheOptions) {
o.cacheType = cacheType
}
}
type MetricMapperCacheResult struct {
Mapping *MetricMapping
Matched bool
Labels prometheus.Labels
}
// MetricMapperCache MUST be thread-safe and should be instrumented with CacheMetrics
type MetricMapperCache interface {
Get(metricString string, metricType MetricType) (*MetricMapperCacheResult, bool)
AddMatch(metricString string, metricType MetricType, mapping *MetricMapping, labels prometheus.Labels)
AddMiss(metricString string, metricType MetricType)
}
type MetricMapperLRUCache struct {
MetricMapperCache
cache *lru.Cache
metrics *CacheMetrics
}
type MetricMapperNoopCache struct {
MetricMapperCache
metrics *CacheMetrics
}
func NewMetricMapperCache(reg prometheus.Registerer, size int) (*MetricMapperLRUCache, error) {
metrics := NewCacheMetrics(reg)
cache, err := lru.New(size)
if err != nil {
return &MetricMapperLRUCache{}, err
}
return &MetricMapperLRUCache{metrics: metrics, cache: cache}, nil
}
func (m *MetricMapperLRUCache) Get(metricString string, metricType MetricType) (*MetricMapperCacheResult, bool) {
m.metrics.CacheGetsTotal.Inc()
if result, ok := m.cache.Get(formatKey(metricString, metricType)); ok {
m.metrics.CacheHitsTotal.Inc()
return result.(*MetricMapperCacheResult), true
} else {
return nil, false
}
}
func (m *MetricMapperLRUCache) AddMatch(metricString string, metricType MetricType, mapping *MetricMapping, labels prometheus.Labels) {
go m.trackCacheLength()
m.cache.Add(formatKey(metricString, metricType), &MetricMapperCacheResult{Mapping: mapping, Matched: true, Labels: labels})
}
func (m *MetricMapperLRUCache) AddMiss(metricString string, metricType MetricType) {
go m.trackCacheLength()
m.cache.Add(formatKey(metricString, metricType), &MetricMapperCacheResult{Matched: false})
}
func (m *MetricMapperLRUCache) trackCacheLength() {
m.metrics.CacheLength.Set(float64(m.cache.Len()))
// Get a cached result
Get(metricKey string) (interface{}, bool)
// Add a statsd MetricMapperResult to the cache
Add(metricKey string, result interface{}) // Add an item to the cache
// Reset clears the cache for config reloads
Reset()
}
func formatKey(metricString string, metricType MetricType) string {
return string(metricType) + "." + metricString
}
func NewMetricMapperNoopCache(reg prometheus.Registerer) *MetricMapperNoopCache {
return &MetricMapperNoopCache{metrics: NewCacheMetrics(reg)}
}
func (m *MetricMapperNoopCache) Get(metricString string, metricType MetricType) (*MetricMapperCacheResult, bool) {
return nil, false
}
func (m *MetricMapperNoopCache) AddMatch(metricString string, metricType MetricType, mapping *MetricMapping, labels prometheus.Labels) {
return
}
func (m *MetricMapperNoopCache) AddMiss(metricString string, metricType MetricType) {
return
}
type MetricMapperRRCache struct {
MetricMapperCache
lock sync.RWMutex
size int
items map[string]*MetricMapperCacheResult
metrics *CacheMetrics
}
func NewMetricMapperRRCache(reg prometheus.Registerer, size int) (*MetricMapperRRCache, error) {
metrics := NewCacheMetrics(reg)
c := &MetricMapperRRCache{
items: make(map[string]*MetricMapperCacheResult, size+1),
size: size,
metrics: metrics,
}
return c, nil
}
func (m *MetricMapperRRCache) Get(metricString string, metricType MetricType) (*MetricMapperCacheResult, bool) {
key := formatKey(metricString, metricType)
m.lock.RLock()
result, ok := m.items[key]
m.lock.RUnlock()
return result, ok
}
func (m *MetricMapperRRCache) addItem(metricString string, metricType MetricType, result *MetricMapperCacheResult) {
go m.trackCacheLength()
key := formatKey(metricString, metricType)
m.lock.Lock()
m.items[key] = result
// evict an item if needed
if len(m.items) > m.size {
for k := range m.items {
delete(m.items, k)
break
}
}
m.lock.Unlock()
}
func (m *MetricMapperRRCache) AddMatch(metricString string, metricType MetricType, mapping *MetricMapping, labels prometheus.Labels) {
e := &MetricMapperCacheResult{Mapping: mapping, Matched: true, Labels: labels}
m.addItem(metricString, metricType, e)
}
func (m *MetricMapperRRCache) AddMiss(metricString string, metricType MetricType) {
e := &MetricMapperCacheResult{Matched: false}
m.addItem(metricString, metricType, e)
}
func (m *MetricMapperRRCache) trackCacheLength() {
m.lock.RLock()
length := len(m.items)
m.lock.RUnlock()
m.metrics.CacheLength.Set(float64(length))
}
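The cache contract in pkg/mapper is now just the small Get/Add/Reset interface above; the LRU and random-replacement implementations live under pkg/mappercache (shown further down). Purely to illustrate the contract, not shipped code, an unbounded map-backed cache could look like this:

package naivecache

import "sync"

// Cache is an illustrative, unbounded implementation of the mapper's
// Get/Add/Reset cache contract. The real implementations also bound
// their size and register cache metrics.
type Cache struct {
	mu    sync.RWMutex
	items map[string]interface{}
}

func New() *Cache {
	return &Cache{items: map[string]interface{}{}}
}

func (c *Cache) Get(metricKey string) (interface{}, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.items[metricKey]
	return v, ok
}

func (c *Cache) Add(metricKey string, result interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[metricKey] = result
}

func (c *Cache) Reset() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items = map[string]interface{}{}
}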

View file

@ -15,20 +15,31 @@ package mapper
import "time"
type mapperConfigDefaults struct {
type MapperConfigDefaults struct {
ObserverType ObserverType `yaml:"observer_type"`
MatchType MatchType `yaml:"match_type"`
GlobDisableOrdering bool `yaml:"glob_disable_ordering"`
Ttl time.Duration `yaml:"ttl"`
SummaryOptions SummaryOptions `yaml:"summary_options"`
HistogramOptions HistogramOptions `yaml:"histogram_options"`
}
// mapperConfigDefaultsAlias is used to unmarshal the yaml config into MapperConfigDefaults and allows deprecated fields
type mapperConfigDefaultsAlias struct {
ObserverType ObserverType `yaml:"observer_type"`
TimerType ObserverType `yaml:"timer_type,omitempty"` // DEPRECATED - field only present to preserve backwards compatibility in configs. Always empty
Buckets []float64 `yaml:"buckets"`
Quantiles []metricObjective `yaml:"quantiles"`
TimerType ObserverType `yaml:"timer_type,omitempty"` // DEPRECATED - field only present to preserve backwards compatibility in configs
Buckets []float64 `yaml:"buckets"` // DEPRECATED - field only present to preserve backwards compatibility in configs
Quantiles []MetricObjective `yaml:"quantiles"` // DEPRECATED - field only present to preserve backwards compatibility in configs
MatchType MatchType `yaml:"match_type"`
GlobDisableOrdering bool `yaml:"glob_disable_ordering"`
Ttl time.Duration `yaml:"ttl"`
SummaryOptions SummaryOptions `yaml:"summary_options"`
HistogramOptions HistogramOptions `yaml:"histogram_options"`
}
// UnmarshalYAML is a custom unmarshal function to allow use of deprecated config keys
// observer_type will override timer_type
func (d *mapperConfigDefaults) UnmarshalYAML(unmarshal func(interface{}) error) error {
type mapperConfigDefaultsAlias mapperConfigDefaults
func (d *MapperConfigDefaults) UnmarshalYAML(unmarshal func(interface{}) error) error {
var tmp mapperConfigDefaultsAlias
if err := unmarshal(&tmp); err != nil {
return err
@ -36,16 +47,26 @@ func (d *mapperConfigDefaults) UnmarshalYAML(unmarshal func(interface{}) error)
// Copy defaults
d.ObserverType = tmp.ObserverType
d.Buckets = tmp.Buckets
d.Quantiles = tmp.Quantiles
d.MatchType = tmp.MatchType
d.GlobDisableOrdering = tmp.GlobDisableOrdering
d.Ttl = tmp.Ttl
d.SummaryOptions = tmp.SummaryOptions
d.HistogramOptions = tmp.HistogramOptions
// Use deprecated TimerType if necessary
if tmp.ObserverType == "" {
d.ObserverType = tmp.TimerType
}
// Use deprecated quantiles if necessary
if len(tmp.SummaryOptions.Quantiles) == 0 && len(tmp.Quantiles) > 0 {
d.SummaryOptions = SummaryOptions{Quantiles: tmp.Quantiles}
}
// Use deprecated buckets if necessary
if len(tmp.HistogramOptions.Buckets) == 0 && len(tmp.Buckets) > 0 {
d.HistogramOptions = HistogramOptions{Buckets: tmp.Buckets}
}
return nil
}
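The custom UnmarshalYAML above is what keeps old configuration files working: a deprecated top-level timer_type, buckets, or quantiles under defaults is folded into observer_type, histogram_options, and summary_options. The two spellings below are equivalent (values are illustrative only):

# deprecated spelling, still accepted
defaults:
  timer_type: histogram
  buckets: [0.01, 0.05, 0.1]
  quantiles:
  - quantile: 0.99
    error: 0.001

# current spelling
defaults:
  observer_type: histogram
  histogram_options:
    buckets: [0.01, 0.05, 0.1]
  summary_options:
    quantiles:
    - quantile: 0.99
      error: 0.001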

File diff suppressed because it is too large.

View file

@ -28,12 +28,13 @@ type MetricMapping struct {
nameFormatter *fsm.TemplateFormatter
regex *regexp.Regexp
Labels prometheus.Labels `yaml:"labels"`
HonorLabels bool `yaml:"honor_labels"`
labelKeys []string
labelFormatters []*fsm.TemplateFormatter
ObserverType ObserverType `yaml:"observer_type"`
TimerType ObserverType `yaml:"timer_type,omitempty"` // DEPRECATED - field only present to preserve backwards compatibility in configs. Always empty
LegacyBuckets []float64 `yaml:"buckets"`
LegacyQuantiles []metricObjective `yaml:"quantiles"`
LegacyQuantiles []MetricObjective `yaml:"quantiles"`
MatchType MatchType `yaml:"match_type"`
HelpText string `yaml:"help"`
Action ActionType `yaml:"action"`
@ -41,6 +42,7 @@ type MetricMapping struct {
Ttl time.Duration `yaml:"ttl"`
SummaryOptions *SummaryOptions `yaml:"summary_options"`
HistogramOptions *HistogramOptions `yaml:"histogram_options"`
Scale MaybeFloat64 `yaml:"scale"`
}
// UnmarshalYAML is a custom unmarshal function to allow use of deprecated config keys
@ -56,6 +58,7 @@ func (m *MetricMapping) UnmarshalYAML(unmarshal func(interface{}) error) error {
m.Match = tmp.Match
m.Name = tmp.Name
m.Labels = tmp.Labels
m.HonorLabels = tmp.HonorLabels
m.ObserverType = tmp.ObserverType
m.LegacyBuckets = tmp.LegacyBuckets
m.LegacyQuantiles = tmp.LegacyQuantiles
@ -66,6 +69,7 @@ func (m *MetricMapping) UnmarshalYAML(unmarshal func(interface{}) error) error {
m.Ttl = tmp.Ttl
m.SummaryOptions = tmp.SummaryOptions
m.HistogramOptions = tmp.HistogramOptions
m.Scale = tmp.Scale
// Use deprecated TimerType if necessary
if tmp.ObserverType == "" {
@ -74,3 +78,25 @@ func (m *MetricMapping) UnmarshalYAML(unmarshal func(interface{}) error) error {
return nil
}
type MaybeFloat64 struct {
Set bool
Val float64
}
func (m *MaybeFloat64) MarshalYAML() (interface{}, error) {
if m.Set {
return m.Val, nil
}
return nil, nil
}
func (m *MaybeFloat64) UnmarshalYAML(unmarshal func(interface{}) error) error {
var tmp float64
if err := unmarshal(&tmp); err != nil {
return err
}
m.Val = tmp
m.Set = true
return nil
}

pkg/mappercache/lru/lru.go (new file, 103 lines)

@ -0,0 +1,103 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package lru
import (
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/golang/groupcache/lru"
"github.com/prometheus/statsd_exporter/pkg/mappercache"
)
type metricMapperLRUCache struct {
cache *lruCache
metrics *mappercache.CacheMetrics
}
func NewMetricMapperLRUCache(reg prometheus.Registerer, size int) (*metricMapperLRUCache, error) {
if size <= 0 {
return nil, nil
}
metrics := mappercache.NewCacheMetrics(reg)
cache := newLruCache(size)
return &metricMapperLRUCache{metrics: metrics, cache: cache}, nil
}
func (m *metricMapperLRUCache) Get(metricKey string) (interface{}, bool) {
m.metrics.CacheGetsTotal.Inc()
if result, ok := m.cache.Get(metricKey); ok {
m.metrics.CacheHitsTotal.Inc()
return result, true
} else {
return nil, false
}
}
func (m *metricMapperLRUCache) Add(metricKey string, result interface{}) {
go m.trackCacheLength()
m.cache.Add(metricKey, result)
}
func (m *metricMapperLRUCache) trackCacheLength() {
m.metrics.CacheLength.Set(float64(m.cache.Len()))
}
func (m *metricMapperLRUCache) Reset() {
m.cache.Clear()
m.metrics.CacheLength.Set(0)
}
type lruCache struct {
cache *lru.Cache
lock sync.RWMutex
}
func newLruCache(maxEntries int) *lruCache {
return &lruCache{
cache: lru.New(maxEntries),
}
}
func (l *lruCache) Get(key string) (interface{}, bool) {
l.lock.RLock()
defer l.lock.RUnlock()
return l.cache.Get(key)
}
func (l *lruCache) Add(key string, value interface{}) {
l.lock.Lock()
defer l.lock.Unlock()
l.cache.Add(key, value)
}
func (l *lruCache) Len() int {
l.lock.RLock()
defer l.lock.RUnlock()
return l.cache.Len()
}
func (l *lruCache) Clear() {
l.lock.Lock()
defer l.lock.Unlock()
l.cache.Clear()
}
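
The wrapper above guards groupcache's non-thread-safe `lru.Cache` with an `RWMutex` and reports size, gets, and hits through the shared `CacheMetrics`. A rough caller-side sketch, assuming the cache is keyed by the raw statsd metric name; the `cachedMapping` type and key format are assumptions for the sketch, not part of the exporter's API:

```go
// Illustrative wiring of the LRU mapper cache into a lookup path.
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"

	"github.com/prometheus/statsd_exporter/pkg/mappercache/lru"
)

// cachedMapping is a placeholder for whatever the caller stores per key.
type cachedMapping struct{ promName string }

func main() {
	reg := prometheus.NewRegistry()

	// A positive size returns a ready-to-use cache; size <= 0 returns nil, nil.
	cache, err := lru.NewMetricMapperLRUCache(reg, 1000)
	if err != nil {
		panic(err)
	}

	key := "foo.bar.baz"
	if hit, ok := cache.Get(key); ok {
		fmt.Println("cache hit:", hit.(*cachedMapping).promName)
		return
	}

	// Cache miss: resolve the mapping the slow way, then remember it.
	cache.Add(key, &cachedMapping{promName: "foo_bar_baz"})
}
```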

View file

@@ -0,0 +1,52 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mappercache
import "github.com/prometheus/client_golang/prometheus"
type CacheMetrics struct {
CacheLength prometheus.Gauge
CacheGetsTotal prometheus.Counter
CacheHitsTotal prometheus.Counter
}
func NewCacheMetrics(reg prometheus.Registerer) *CacheMetrics {
var m CacheMetrics
m.CacheLength = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "statsd_metric_mapper_cache_length",
Help: "The count of unique metrics currently cached.",
},
)
m.CacheGetsTotal = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "statsd_metric_mapper_cache_gets_total",
Help: "The count of total metric cache gets.",
},
)
m.CacheHitsTotal = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "statsd_metric_mapper_cache_hits_total",
Help: "The count of total metric cache hits.",
},
)
if reg != nil {
reg.MustRegister(m.CacheLength)
reg.MustRegister(m.CacheGetsTotal)
reg.MustRegister(m.CacheHitsTotal)
}
return &m
}

View file

@@ -0,0 +1,83 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package randomreplacement
import (
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/statsd_exporter/pkg/mappercache"
)
type metricMapperRRCache struct {
lock sync.RWMutex
size int
items map[string]interface{}
metrics *mappercache.CacheMetrics
}
func NewMetricMapperRRCache(reg prometheus.Registerer, size int) (*metricMapperRRCache, error) {
if size <= 0 {
return nil, nil
}
metrics := mappercache.NewCacheMetrics(reg)
c := &metricMapperRRCache{
items: make(map[string]interface{}, size+1),
size: size,
metrics: metrics,
}
return c, nil
}
func (m *metricMapperRRCache) Get(metricKey string) (interface{}, bool) {
m.lock.RLock()
result, ok := m.items[metricKey]
m.lock.RUnlock()
return result, ok
}
func (m *metricMapperRRCache) Add(metricKey string, result interface{}) {
go m.trackCacheLength()
m.lock.Lock()
m.items[metricKey] = result
// evict an item if needed
if len(m.items) > m.size {
for k := range m.items {
delete(m.items, k)
break
}
}
m.lock.Unlock()
}
func (m *metricMapperRRCache) Reset() {
m.lock.Lock()
defer m.lock.Unlock()
m.items = make(map[string]interface{}, m.size+1)
m.metrics.CacheLength.Set(0)
}
func (m *metricMapperRRCache) trackCacheLength() {
m.lock.RLock()
length := len(m.items)
m.lock.RUnlock()
m.metrics.CacheLength.Set(float64(length))
}
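
Eviction in this cache leans on Go's randomized map iteration order: once the map exceeds `size`, the first key produced by `range` is deleted, which approximates random replacement with no extra bookkeeping. A tiny standalone illustration of that idiom, with arbitrary values:

```go
// Minimal illustration of the eviction idiom used above: delete the first
// key that range yields once the map is over capacity. Which key goes is
// effectively random because Go randomizes map iteration order.
package main

import "fmt"

func main() {
	const size = 3
	items := map[string]int{"a": 1, "b": 2, "c": 3}

	items["d"] = 4 // over capacity now
	if len(items) > size {
		for k := range items {
			delete(items, k) // evict an arbitrary entry
			break
		}
	}
	fmt.Println(len(items), items) // 3 entries remain; which three varies
}
```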

View file

@@ -19,10 +19,12 @@ import (
"hash"
"hash/fnv"
"sort"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/model"
"github.com/prometheus/statsd_exporter/pkg/clock"
"github.com/prometheus/statsd_exporter/pkg/mapper"
"github.com/prometheus/statsd_exporter/pkg/metrics"
@@ -164,6 +166,11 @@ func (r *Registry) GetCounter(metricName string, labels prometheus.Labels, help
return nil, fmt.Errorf("metric with name %s is already registered", metricName)
}
err := r.checkHistogramNameCollision(metricName)
if err != nil {
return nil, err
}
var counterVec *prometheus.CounterVec
if vh == nil {
metricsCount.WithLabelValues("counter").Inc()
@@ -180,7 +187,6 @@ func (r *Registry) GetCounter(metricName string, labels prometheus.Labels, help
}
var counter prometheus.Counter
var err error
if counter, err = counterVec.GetMetricWith(labels); err != nil {
return nil, err
}
@@ -189,6 +195,18 @@ func (r *Registry) GetCounter(metricName string, labels prometheus.Labels, help
return counter, nil
}
func (r *Registry) checkHistogramNameCollision(metricName string) error {
histogramSuffixes := []string{"_bucket", "_count", "_sum"}
for _, suffix := range histogramSuffixes {
if strings.HasSuffix(metricName, suffix) {
if r.MetricConflicts(strings.TrimSuffix(metricName, suffix), metrics.CounterMetricType) {
return fmt.Errorf("metric with name %s is already registered", metricName)
}
}
}
return nil
}
func (r *Registry) GetGauge(metricName string, labels prometheus.Labels, help string, mapping *mapper.MetricMapping, metricsCount *prometheus.GaugeVec) (prometheus.Gauge, error) {
hash, labelNames := r.HashLabels(labels)
vh, mh := r.Get(metricName, hash, metrics.GaugeMetricType)
@@ -200,6 +218,11 @@ func (r *Registry) GetGauge(metricName string, labels prometheus.Labels, help st
return nil, fmt.Errorf("metrics.Metric with name %s is already registered", metricName)
}
err := r.checkHistogramNameCollision(metricName)
if err != nil {
return nil, fmt.Errorf("metrics.Metric with name %s is already registered", metricName)
}
var gaugeVec *prometheus.GaugeVec
if vh == nil {
metricsCount.WithLabelValues("gauge").Inc()
@@ -216,7 +239,6 @@ func (r *Registry) GetGauge(metricName string, labels prometheus.Labels, help st
}
var gauge prometheus.Gauge
var err error
if gauge, err = gaugeVec.GetMetricWith(labels); err != nil {
return nil, err
}
@@ -248,17 +270,29 @@ func (r *Registry) GetHistogram(metricName string, labels prometheus.Labels, hel
var histogramVec *prometheus.HistogramVec
if vh == nil {
metricsCount.WithLabelValues("histogram").Inc()
buckets := r.Mapper.Defaults.Buckets
buckets := r.Mapper.Defaults.HistogramOptions.Buckets
if mapping.HistogramOptions != nil && len(mapping.HistogramOptions.Buckets) > 0 {
buckets = mapping.HistogramOptions.Buckets
}
bucketFactor := r.Mapper.Defaults.HistogramOptions.NativeHistogramBucketFactor
if mapping.HistogramOptions != nil && mapping.HistogramOptions.NativeHistogramBucketFactor > 0 {
bucketFactor = mapping.HistogramOptions.NativeHistogramBucketFactor
}
maxBuckets := r.Mapper.Defaults.HistogramOptions.NativeHistogramMaxBuckets
if mapping.HistogramOptions != nil && mapping.HistogramOptions.NativeHistogramMaxBuckets > 0 {
maxBuckets = mapping.HistogramOptions.NativeHistogramMaxBuckets
}
histogramVec = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Name: metricName,
Help: help,
Buckets: buckets,
Name: metricName,
Help: help,
Buckets: buckets,
NativeHistogramBucketFactor: bucketFactor,
NativeHistogramMaxBucketNumber: maxBuckets,
}, labelNames)
if err := prometheus.Register(uncheckedCollector{histogramVec}); err != nil {
if err := r.Registerer.Register(uncheckedCollector{histogramVec}); err != nil {
return nil, err
}
} else {
@@ -295,14 +329,21 @@ func (r *Registry) GetSummary(metricName string, labels prometheus.Labels, help
var summaryVec *prometheus.SummaryVec
if vh == nil {
metricsCount.WithLabelValues("summary").Inc()
quantiles := r.Mapper.Defaults.Quantiles
quantiles := r.Mapper.Defaults.SummaryOptions.Quantiles
if mapping != nil && mapping.SummaryOptions != nil && len(mapping.SummaryOptions.Quantiles) > 0 {
quantiles = mapping.SummaryOptions.Quantiles
}
summaryOptions := mapper.SummaryOptions{}
summaryOptions := mapper.SummaryOptions{
MaxAge: r.Mapper.Defaults.SummaryOptions.MaxAge,
AgeBuckets: r.Mapper.Defaults.SummaryOptions.AgeBuckets,
BufCap: r.Mapper.Defaults.SummaryOptions.BufCap,
}
if mapping != nil && mapping.SummaryOptions != nil {
summaryOptions = *mapping.SummaryOptions
}
objectives := make(map[float64]float64)
for _, q := range quantiles {
objectives[q.Quantile] = q.Error
@@ -320,7 +361,7 @@ func (r *Registry) GetSummary(metricName string, labels prometheus.Labels, help
BufCap: summaryOptions.BufCap,
}, labelNames)
if err := prometheus.Register(uncheckedCollector{summaryVec}); err != nil {
if err := r.Registerer.Register(uncheckedCollector{summaryVec}); err != nil {
return nil, err
}
} else {
@@ -354,7 +395,7 @@ func (r *Registry) RemoveStaleMetrics() {
}
}
// Calculates a hash of both the label names and the label names and values.
// Calculates a hash of both the label names and values.
func (r *Registry) HashLabels(labels prometheus.Labels) (metrics.LabelHash, []string) {
r.Hasher.Reset()
r.NameBuf.Reset()
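
With this change, `GetHistogram` forwards a native-histogram bucket factor and maximum bucket count from the mapping (or the mapper defaults) into client_golang. A condensed sketch of the `HistogramOpts` this ends up producing; the metric name and numbers here are placeholders, not the exporter's defaults:

```go
// Sketch of the native-histogram options the GetHistogram change passes
// through to client_golang; values are placeholders for illustration.
package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	histogramVec := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "request_duration_seconds",
		Help:    "Observer example with native histogram settings.",
		Buckets: prometheus.DefBuckets, // classic buckets still apply alongside

		// Native histogram: consecutive bucket boundaries grow by at most this
		// factor, and client_golang caps the bucket count at the number below.
		NativeHistogramBucketFactor:    1.1,
		NativeHistogramMaxBucketNumber: 100,
	}, []string{"job"})

	reg := prometheus.NewRegistry()
	reg.MustRegister(histogramVec)
	histogramVec.WithLabelValues("example").Observe(0.42)
}
```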

167
pkg/relay/relay.go Normal file
View file

@@ -0,0 +1,167 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package relay
import (
"bytes"
"fmt"
"net"
"strings"
"time"
"github.com/prometheus/statsd_exporter/pkg/clock"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prometheus/statsd_exporter/pkg/level"
)
type Relay struct {
addr *net.UDPAddr
bufferChannel chan []byte
conn *net.UDPConn
logger log.Logger
packetLength uint
packetsTotal prometheus.Counter
longLinesTotal prometheus.Counter
relayedLinesTotal prometheus.Counter
}
var (
relayPacketsTotal = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_relay_packets_total",
Help: "The number of StatsD packets relayed.",
},
[]string{"target"},
)
relayLongLinesTotal = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_relay_long_lines_total",
Help: "The number lines that were too long to relay.",
},
[]string{"target"},
)
relayLinesRelayedTotal = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "statsd_exporter_relay_lines_relayed_total",
Help: "The number of lines that were buffered to be relayed.",
},
[]string{"target"},
)
)
// NewRelay creates a statsd UDP relay. It can be used to send copies of statsd raw
// lines to a separate service.
func NewRelay(l log.Logger, target string, packetLength uint) (*Relay, error) {
addr, err := net.ResolveUDPAddr("udp", target)
if err != nil {
return nil, fmt.Errorf("unable to resolve target %s, err: %w", target, err)
}
conn, err := net.ListenUDP("udp", nil)
if err != nil {
return nil, fmt.Errorf("unable to listen on UDP, err: %w", err)
}
c := make(chan []byte, 100)
r := Relay{
addr: addr,
bufferChannel: c,
conn: conn,
logger: l,
packetLength: packetLength,
packetsTotal: relayPacketsTotal.WithLabelValues(target),
longLinesTotal: relayLongLinesTotal.WithLabelValues(target),
relayedLinesTotal: relayLinesRelayedTotal.WithLabelValues(target),
}
// Startup the UDP sender.
go r.relayOutput()
return &r, nil
}
// relayOutput buffers statsd lines and sends them to the relay target.
func (r *Relay) relayOutput() {
var buffer bytes.Buffer
var err error
relayInterval := clock.NewTicker(1 * time.Second)
defer relayInterval.Stop()
for {
select {
case <-relayInterval.C:
err = r.sendPacket(buffer.Bytes())
if err != nil {
level.Error(r.logger).Log("msg", "Error sending UDP packet", "error", err)
return
}
// Clear out the buffer.
buffer.Reset()
case b := <-r.bufferChannel:
if uint(len(b)+buffer.Len()) > r.packetLength {
level.Debug(r.logger).Log("msg", "Buffer full, sending packet", "length", buffer.Len())
err = r.sendPacket(buffer.Bytes())
if err != nil {
level.Error(r.logger).Log("msg", "Error sending UDP packet", "error", err)
return
}
// Seed the new buffer with the new line.
buffer.Reset()
buffer.Write(b)
} else {
level.Debug(r.logger).Log("msg", "Adding line to buffer", "line", string(b))
buffer.Write(b)
}
}
}
}
// sendPacket sends a single relay line to the destination target.
func (r *Relay) sendPacket(buf []byte) error {
if len(buf) == 0 {
level.Debug(r.logger).Log("msg", "Empty buffer, nothing to send")
return nil
}
level.Debug(r.logger).Log("msg", "Sending packet", "length", len(buf), "data", string(buf))
_, err := r.conn.WriteToUDP(buf, r.addr)
r.packetsTotal.Inc()
return err
}
// RelayLine processes a single statsd line and forwards it to the relay target.
func (r *Relay) RelayLine(l string) {
lineLength := uint(len(l))
if lineLength == 0 {
level.Debug(r.logger).Log("msg", "Empty line, not relaying")
return
}
if lineLength > r.packetLength-1 {
level.Warn(r.logger).Log("msg", "line too long, not relaying", "length", lineLength, "max", r.packetLength)
r.longLinesTotal.Inc()
return
}
level.Debug(r.logger).Log("msg", "Relaying line", "line", string(l))
if !strings.HasSuffix(l, "\n") {
l = l + "\n"
}
r.relayedLinesTotal.Inc()
r.bufferChannel <- []byte(l)
}
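
`NewRelay` resolves the target address once, opens a UDP socket, and starts a goroutine that flushes buffered lines every second or whenever the next line would overflow the configured packet length. A rough caller-side sketch; the target address and packet length are arbitrary example values:

```go
// Illustrative use of the relay added above; the target and packet size
// are placeholder values, not recommended settings.
package main

import (
	"os"

	"github.com/go-kit/log"

	"github.com/prometheus/statsd_exporter/pkg/relay"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stderr)

	// Forward raw statsd lines to another statsd listener over UDP.
	r, err := relay.NewRelay(logger, "localhost:9125", 1400)
	if err != nil {
		logger.Log("msg", "failed to create relay", "err", err)
		os.Exit(1)
	}

	// Each received line gets copied to the relay target as well.
	r.RelayLine("foo.bar:1|c")
}
```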

176
pkg/relay/relay_test.go Normal file
View file

@@ -0,0 +1,176 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package relay
import (
"fmt"
"runtime"
"testing"
"time"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/statsd_exporter/pkg/clock"
"github.com/stvp/go-udp-testing"
)
func TestRelay_RelayLine(t *testing.T) {
type args struct {
lines []string
expected string
}
tests := []struct {
name string
args args
}{
{
name: "multiple lines",
args: args{
lines: []string{"foo5:100|c|#tag1:bar,#tag2:baz", "foo2:200|c|#tag1:bar,#tag2:baz"},
expected: "foo5:100|c|#tag1:bar,#tag2:baz\n",
},
},
}
for _, tt := range tests {
udp.SetAddr(":1160")
t.Run(tt.name, func(t *testing.T) {
tickerCh := make(chan time.Time)
clock.ClockInstance = &clock.Clock{
TickerCh: tickerCh,
}
clock.ClockInstance.Instant = time.Unix(0, 0)
logger := log.NewNopLogger()
r, err := NewRelay(
logger,
"localhost:1160",
200,
)
if err != nil {
t.Errorf("Did not expect error while creating relay.")
}
udp.ShouldReceive(t, tt.args.expected, func() {
for _, line := range tt.args.lines {
r.RelayLine(line)
}
for goSchedTimes := 0; goSchedTimes < 1000; goSchedTimes++ {
if len(r.bufferChannel) == 0 {
break
}
runtime.Gosched()
}
// Tick time forward to trigger a packet send.
clock.ClockInstance.Instant = time.Unix(1, 10)
clock.ClockInstance.TickerCh <- time.Unix(0, 0)
})
metrics, err := prometheus.DefaultGatherer.Gather()
if err != nil {
t.Fatalf("Cannot gather from DefaultGatherer: %v", err)
}
metricNames := map[string]float64{
"statsd_exporter_relay_long_lines_total": 0,
"statsd_exporter_relay_lines_relayed_total": float64(len(tt.args.lines)),
}
for metricName, expectedValue := range metricNames {
metric := getFloat64(metrics, metricName, prometheus.Labels{"target": "localhost:1160"})
if metric == nil {
t.Fatalf("Could not find time series with first label set for metric: %s", metricName)
}
if *metric != expectedValue {
t.Errorf("Expected metric %s to be %f, got %f", metricName, expectedValue, *metric)
}
}
prometheus.Unregister(relayLongLinesTotal)
prometheus.Unregister(relayLinesRelayedTotal)
})
}
}
// getFloat64 searches for a metric by name in the array of MetricFamily and then searches for a value by labels.
// The method returns a value or nil if the metric is not found.
func getFloat64(metrics []*dto.MetricFamily, name string, labels prometheus.Labels) *float64 {
var metricFamily *dto.MetricFamily
for _, m := range metrics {
if *m.Name == name {
metricFamily = m
break
}
}
if metricFamily == nil {
return nil
}
var metric *dto.Metric
labelStr := fmt.Sprintf("%v", labels)
for _, m := range metricFamily.Metric {
l := labelPairsAsLabels(m.GetLabel())
ls := fmt.Sprintf("%v", l)
if labelStr == ls {
metric = m
break
}
}
if metric == nil {
return nil
}
var value float64
if metric.Gauge != nil {
value = metric.Gauge.GetValue()
return &value
}
if metric.Counter != nil {
value = metric.Counter.GetValue()
return &value
}
if metric.Histogram != nil {
value = metric.Histogram.GetSampleSum()
return &value
}
if metric.Summary != nil {
value = metric.Summary.GetSampleSum()
return &value
}
if metric.Untyped != nil {
value = metric.Untyped.GetValue()
return &value
}
panic(fmt.Errorf("collected a non-gauge/counter/histogram/summary/untyped metric: %s", metric))
}
func labelPairsAsLabels(pairs []*dto.LabelPair) (labels prometheus.Labels) {
labels = prometheus.Labels{}
for _, pair := range pairs {
if pair.Name == nil {
continue
}
value := ""
if pair.Value != nil {
value = *pair.Value
}
labels[*pair.Name] = value
}
return
}

View file

@@ -1,27 +0,0 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View file

@@ -1,25 +0,0 @@
# Go's `text/template` package with newline elision
This is a fork of Go 1.4's [text/template](http://golang.org/pkg/text/template/) package with one addition: a backslash immediately after a closing delimiter will delete all subsequent newlines until a non-newline.
eg.
```
{{if true}}\
hello
{{end}}\
```
Will result in:
```
hello\n
```
Rather than:
```
\n
hello\n
\n
```

View file

@@ -1,406 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package template implements data-driven templates for generating textual output.
To generate HTML output, see package html/template, which has the same interface
as this package but automatically secures HTML output against certain attacks.
Templates are executed by applying them to a data structure. Annotations in the
template refer to elements of the data structure (typically a field of a struct
or a key in a map) to control execution and derive values to be displayed.
Execution of the template walks the structure and sets the cursor, represented
by a period '.' and called "dot", to the value at the current location in the
structure as execution proceeds.
The input text for a template is UTF-8-encoded text in any format.
"Actions"--data evaluations or control structures--are delimited by
"{{" and "}}"; all text outside actions is copied to the output unchanged.
Actions may not span newlines, although comments can.
Once parsed, a template may be executed safely in parallel.
Here is a trivial example that prints "17 items are made of wool".
type Inventory struct {
Material string
Count uint
}
sweaters := Inventory{"wool", 17}
tmpl, err := template.New("test").Parse("{{.Count}} items are made of {{.Material}}")
if err != nil { panic(err) }
err = tmpl.Execute(os.Stdout, sweaters)
if err != nil { panic(err) }
More intricate examples appear below.
Actions
Here is the list of actions. "Arguments" and "pipelines" are evaluations of
data, defined in detail below.
*/
// {{/* a comment */}}
// A comment; discarded. May contain newlines.
// Comments do not nest and must start and end at the
// delimiters, as shown here.
/*
{{pipeline}}
The default textual representation of the value of the pipeline
is copied to the output.
{{if pipeline}} T1 {{end}}
If the value of the pipeline is empty, no output is generated;
otherwise, T1 is executed. The empty values are false, 0, any
nil pointer or interface value, and any array, slice, map, or
string of length zero.
Dot is unaffected.
{{if pipeline}} T1 {{else}} T0 {{end}}
If the value of the pipeline is empty, T0 is executed;
otherwise, T1 is executed. Dot is unaffected.
{{if pipeline}} T1 {{else if pipeline}} T0 {{end}}
To simplify the appearance of if-else chains, the else action
of an if may include another if directly; the effect is exactly
the same as writing
{{if pipeline}} T1 {{else}}{{if pipeline}} T0 {{end}}{{end}}
{{range pipeline}} T1 {{end}}
The value of the pipeline must be an array, slice, map, or channel.
If the value of the pipeline has length zero, nothing is output;
otherwise, dot is set to the successive elements of the array,
slice, or map and T1 is executed. If the value is a map and the
keys are of basic type with a defined order ("comparable"), the
elements will be visited in sorted key order.
{{range pipeline}} T1 {{else}} T0 {{end}}
The value of the pipeline must be an array, slice, map, or channel.
If the value of the pipeline has length zero, dot is unaffected and
T0 is executed; otherwise, dot is set to the successive elements
of the array, slice, or map and T1 is executed.
{{template "name"}}
The template with the specified name is executed with nil data.
{{template "name" pipeline}}
The template with the specified name is executed with dot set
to the value of the pipeline.
{{with pipeline}} T1 {{end}}
If the value of the pipeline is empty, no output is generated;
otherwise, dot is set to the value of the pipeline and T1 is
executed.
{{with pipeline}} T1 {{else}} T0 {{end}}
If the value of the pipeline is empty, dot is unaffected and T0
is executed; otherwise, dot is set to the value of the pipeline
and T1 is executed.
Arguments
An argument is a simple value, denoted by one of the following.
- A boolean, string, character, integer, floating-point, imaginary
or complex constant in Go syntax. These behave like Go's untyped
constants, although raw strings may not span newlines.
- The keyword nil, representing an untyped Go nil.
- The character '.' (period):
.
The result is the value of dot.
- A variable name, which is a (possibly empty) alphanumeric string
preceded by a dollar sign, such as
$piOver2
or
$
The result is the value of the variable.
Variables are described below.
- The name of a field of the data, which must be a struct, preceded
by a period, such as
.Field
The result is the value of the field. Field invocations may be
chained:
.Field1.Field2
Fields can also be evaluated on variables, including chaining:
$x.Field1.Field2
- The name of a key of the data, which must be a map, preceded
by a period, such as
.Key
The result is the map element value indexed by the key.
Key invocations may be chained and combined with fields to any
depth:
.Field1.Key1.Field2.Key2
Although the key must be an alphanumeric identifier, unlike with
field names they do not need to start with an upper case letter.
Keys can also be evaluated on variables, including chaining:
$x.key1.key2
- The name of a niladic method of the data, preceded by a period,
such as
.Method
The result is the value of invoking the method with dot as the
receiver, dot.Method(). Such a method must have one return value (of
any type) or two return values, the second of which is an error.
If it has two and the returned error is non-nil, execution terminates
and an error is returned to the caller as the value of Execute.
Method invocations may be chained and combined with fields and keys
to any depth:
.Field1.Key1.Method1.Field2.Key2.Method2
Methods can also be evaluated on variables, including chaining:
$x.Method1.Field
- The name of a niladic function, such as
fun
The result is the value of invoking the function, fun(). The return
types and values behave as in methods. Functions and function
names are described below.
- A parenthesized instance of one of the above, for grouping. The result
may be accessed by a field or map key invocation.
print (.F1 arg1) (.F2 arg2)
(.StructValuedMethod "arg").Field
Arguments may evaluate to any type; if they are pointers the implementation
automatically indirects to the base type when required.
If an evaluation yields a function value, such as a function-valued
field of a struct, the function is not invoked automatically, but it
can be used as a truth value for an if action and the like. To invoke
it, use the call function, defined below.
A pipeline is a possibly chained sequence of "commands". A command is a simple
value (argument) or a function or method call, possibly with multiple arguments:
Argument
The result is the value of evaluating the argument.
.Method [Argument...]
The method can be alone or the last element of a chain but,
unlike methods in the middle of a chain, it can take arguments.
The result is the value of calling the method with the
arguments:
dot.Method(Argument1, etc.)
functionName [Argument...]
The result is the value of calling the function associated
with the name:
function(Argument1, etc.)
Functions and function names are described below.
Pipelines
A pipeline may be "chained" by separating a sequence of commands with pipeline
characters '|'. In a chained pipeline, the result of each command is
passed as the last argument of the following command. The output of the final
command in the pipeline is the value of the pipeline.
The output of a command will be either one value or two values, the second of
which has type error. If that second value is present and evaluates to
non-nil, execution terminates and the error is returned to the caller of
Execute.
Variables
A pipeline inside an action may initialize a variable to capture the result.
The initialization has syntax
$variable := pipeline
where $variable is the name of the variable. An action that declares a
variable produces no output.
If a "range" action initializes a variable, the variable is set to the
successive elements of the iteration. Also, a "range" may declare two
variables, separated by a comma:
range $index, $element := pipeline
in which case $index and $element are set to the successive values of the
array/slice index or map key and element, respectively. Note that if there is
only one variable, it is assigned the element; this is opposite to the
convention in Go range clauses.
A variable's scope extends to the "end" action of the control structure ("if",
"with", or "range") in which it is declared, or to the end of the template if
there is no such control structure. A template invocation does not inherit
variables from the point of its invocation.
When execution begins, $ is set to the data argument passed to Execute, that is,
to the starting value of dot.
Examples
Here are some example one-line templates demonstrating pipelines and variables.
All produce the quoted word "output":
{{"\"output\""}}
A string constant.
{{`"output"`}}
A raw string constant.
{{printf "%q" "output"}}
A function call.
{{"output" | printf "%q"}}
A function call whose final argument comes from the previous
command.
{{printf "%q" (print "out" "put")}}
A parenthesized argument.
{{"put" | printf "%s%s" "out" | printf "%q"}}
A more elaborate call.
{{"output" | printf "%s" | printf "%q"}}
A longer chain.
{{with "output"}}{{printf "%q" .}}{{end}}
A with action using dot.
{{with $x := "output" | printf "%q"}}{{$x}}{{end}}
A with action that creates and uses a variable.
{{with $x := "output"}}{{printf "%q" $x}}{{end}}
A with action that uses the variable in another action.
{{with $x := "output"}}{{$x | printf "%q"}}{{end}}
The same, but pipelined.
Functions
During execution functions are found in two function maps: first in the
template, then in the global function map. By default, no functions are defined
in the template but the Funcs method can be used to add them.
Predefined global functions are named as follows.
and
Returns the boolean AND of its arguments by returning the
first empty argument or the last argument, that is,
"and x y" behaves as "if x then y else x". All the
arguments are evaluated.
call
Returns the result of calling the first argument, which
must be a function, with the remaining arguments as parameters.
Thus "call .X.Y 1 2" is, in Go notation, dot.X.Y(1, 2) where
Y is a func-valued field, map entry, or the like.
The first argument must be the result of an evaluation
that yields a value of function type (as distinct from
a predefined function such as print). The function must
return either one or two result values, the second of which
is of type error. If the arguments don't match the function
or the returned error value is non-nil, execution stops.
html
Returns the escaped HTML equivalent of the textual
representation of its arguments.
index
Returns the result of indexing its first argument by the
following arguments. Thus "index x 1 2 3" is, in Go syntax,
x[1][2][3]. Each indexed item must be a map, slice, or array.
js
Returns the escaped JavaScript equivalent of the textual
representation of its arguments.
len
Returns the integer length of its argument.
not
Returns the boolean negation of its single argument.
or
Returns the boolean OR of its arguments by returning the
first non-empty argument or the last argument, that is,
"or x y" behaves as "if x then x else y". All the
arguments are evaluated.
print
An alias for fmt.Sprint
printf
An alias for fmt.Sprintf
println
An alias for fmt.Sprintln
urlquery
Returns the escaped value of the textual representation of
its arguments in a form suitable for embedding in a URL query.
The boolean functions take any zero value to be false and a non-zero
value to be true.
There is also a set of binary comparison operators defined as
functions:
eq
Returns the boolean truth of arg1 == arg2
ne
Returns the boolean truth of arg1 != arg2
lt
Returns the boolean truth of arg1 < arg2
le
Returns the boolean truth of arg1 <= arg2
gt
Returns the boolean truth of arg1 > arg2
ge
Returns the boolean truth of arg1 >= arg2
For simpler multi-way equality tests, eq (only) accepts two or more
arguments and compares the second and subsequent to the first,
returning in effect
arg1==arg2 || arg1==arg3 || arg1==arg4 ...
(Unlike with || in Go, however, eq is a function call and all the
arguments will be evaluated.)
The comparison functions work on basic types only (or named basic
types, such as "type Celsius float32"). They implement the Go rules
for comparison of values, except that size and exact type are
ignored, so any integer value, signed or unsigned, may be compared
with any other integer value. (The arithmetic value is compared,
not the bit pattern, so all negative integers are less than all
unsigned integers.) However, as usual, one may not compare an int
with a float32 and so on.
Associated templates
Each template is named by a string specified when it is created. Also, each
template is associated with zero or more other templates that it may invoke by
name; such associations are transitive and form a name space of templates.
A template may use a template invocation to instantiate another associated
template; see the explanation of the "template" action above. The name must be
that of a template associated with the template that contains the invocation.
Nested template definitions
When parsing a template, another template may be defined and associated with the
template being parsed. Template definitions must appear at the top level of the
template, much like global variables in a Go program.
The syntax of such definitions is to surround each template declaration with a
"define" and "end" action.
The define action names the template being created by providing a string
constant. Here is a simple example:
`{{define "T1"}}ONE{{end}}
{{define "T2"}}TWO{{end}}
{{define "T3"}}{{template "T1"}} {{template "T2"}}{{end}}
{{template "T3"}}`
This defines two templates, T1 and T2, and a third T3 that invokes the other two
when it is executed. Finally it invokes T3. If executed this template will
produce the text
ONE TWO
By construction, a template may reside in only one association. If it's
necessary to have a template addressable from multiple associations, the
template definition must be parsed multiple times to create distinct *Template
values, or must be copied with the Clone or AddParseTree method.
Parse may be called multiple times to assemble the various associated templates;
see the ParseFiles and ParseGlob functions and methods for simple ways to parse
related templates stored in files.
A template may be executed directly or through ExecuteTemplate, which executes
an associated template identified by name. To invoke our example above, we
might write,
err := tmpl.Execute(os.Stdout, "no data needed")
if err != nil {
log.Fatalf("execution failed: %s", err)
}
or to invoke a particular template explicitly by name,
err := tmpl.ExecuteTemplate(os.Stdout, "T2", "no data needed")
if err != nil {
log.Fatalf("execution failed: %s", err)
}
*/
package template

View file

@@ -1,845 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package template
import (
"bytes"
"fmt"
"io"
"reflect"
"runtime"
"sort"
"strings"
"github.com/alecthomas/template/parse"
)
// state represents the state of an execution. It's not part of the
// template so that multiple executions of the same template
// can execute in parallel.
type state struct {
tmpl *Template
wr io.Writer
node parse.Node // current node, for errors
vars []variable // push-down stack of variable values.
}
// variable holds the dynamic value of a variable such as $, $x etc.
type variable struct {
name string
value reflect.Value
}
// push pushes a new variable on the stack.
func (s *state) push(name string, value reflect.Value) {
s.vars = append(s.vars, variable{name, value})
}
// mark returns the length of the variable stack.
func (s *state) mark() int {
return len(s.vars)
}
// pop pops the variable stack up to the mark.
func (s *state) pop(mark int) {
s.vars = s.vars[0:mark]
}
// setVar overwrites the top-nth variable on the stack. Used by range iterations.
func (s *state) setVar(n int, value reflect.Value) {
s.vars[len(s.vars)-n].value = value
}
// varValue returns the value of the named variable.
func (s *state) varValue(name string) reflect.Value {
for i := s.mark() - 1; i >= 0; i-- {
if s.vars[i].name == name {
return s.vars[i].value
}
}
s.errorf("undefined variable: %s", name)
return zero
}
var zero reflect.Value
// at marks the state to be on node n, for error reporting.
func (s *state) at(node parse.Node) {
s.node = node
}
// doublePercent returns the string with %'s replaced by %%, if necessary,
// so it can be used safely inside a Printf format string.
func doublePercent(str string) string {
if strings.Contains(str, "%") {
str = strings.Replace(str, "%", "%%", -1)
}
return str
}
// errorf formats the error and terminates processing.
func (s *state) errorf(format string, args ...interface{}) {
name := doublePercent(s.tmpl.Name())
if s.node == nil {
format = fmt.Sprintf("template: %s: %s", name, format)
} else {
location, context := s.tmpl.ErrorContext(s.node)
format = fmt.Sprintf("template: %s: executing %q at <%s>: %s", location, name, doublePercent(context), format)
}
panic(fmt.Errorf(format, args...))
}
// errRecover is the handler that turns panics into returns from the top
// level of Parse.
func errRecover(errp *error) {
e := recover()
if e != nil {
switch err := e.(type) {
case runtime.Error:
panic(e)
case error:
*errp = err
default:
panic(e)
}
}
}
// ExecuteTemplate applies the template associated with t that has the given name
// to the specified data object and writes the output to wr.
// If an error occurs executing the template or writing its output,
// execution stops, but partial results may already have been written to
// the output writer.
// A template may be executed safely in parallel.
func (t *Template) ExecuteTemplate(wr io.Writer, name string, data interface{}) error {
tmpl := t.tmpl[name]
if tmpl == nil {
return fmt.Errorf("template: no template %q associated with template %q", name, t.name)
}
return tmpl.Execute(wr, data)
}
// Execute applies a parsed template to the specified data object,
// and writes the output to wr.
// If an error occurs executing the template or writing its output,
// execution stops, but partial results may already have been written to
// the output writer.
// A template may be executed safely in parallel.
func (t *Template) Execute(wr io.Writer, data interface{}) (err error) {
defer errRecover(&err)
value := reflect.ValueOf(data)
state := &state{
tmpl: t,
wr: wr,
vars: []variable{{"$", value}},
}
t.init()
if t.Tree == nil || t.Root == nil {
var b bytes.Buffer
for name, tmpl := range t.tmpl {
if tmpl.Tree == nil || tmpl.Root == nil {
continue
}
if b.Len() > 0 {
b.WriteString(", ")
}
fmt.Fprintf(&b, "%q", name)
}
var s string
if b.Len() > 0 {
s = "; defined templates are: " + b.String()
}
state.errorf("%q is an incomplete or empty template%s", t.Name(), s)
}
state.walk(value, t.Root)
return
}
// Walk functions step through the major pieces of the template structure,
// generating output as they go.
func (s *state) walk(dot reflect.Value, node parse.Node) {
s.at(node)
switch node := node.(type) {
case *parse.ActionNode:
// Do not pop variables so they persist until next end.
// Also, if the action declares variables, don't print the result.
val := s.evalPipeline(dot, node.Pipe)
if len(node.Pipe.Decl) == 0 {
s.printValue(node, val)
}
case *parse.IfNode:
s.walkIfOrWith(parse.NodeIf, dot, node.Pipe, node.List, node.ElseList)
case *parse.ListNode:
for _, node := range node.Nodes {
s.walk(dot, node)
}
case *parse.RangeNode:
s.walkRange(dot, node)
case *parse.TemplateNode:
s.walkTemplate(dot, node)
case *parse.TextNode:
if _, err := s.wr.Write(node.Text); err != nil {
s.errorf("%s", err)
}
case *parse.WithNode:
s.walkIfOrWith(parse.NodeWith, dot, node.Pipe, node.List, node.ElseList)
default:
s.errorf("unknown node: %s", node)
}
}
// walkIfOrWith walks an 'if' or 'with' node. The two control structures
// are identical in behavior except that 'with' sets dot.
func (s *state) walkIfOrWith(typ parse.NodeType, dot reflect.Value, pipe *parse.PipeNode, list, elseList *parse.ListNode) {
defer s.pop(s.mark())
val := s.evalPipeline(dot, pipe)
truth, ok := isTrue(val)
if !ok {
s.errorf("if/with can't use %v", val)
}
if truth {
if typ == parse.NodeWith {
s.walk(val, list)
} else {
s.walk(dot, list)
}
} else if elseList != nil {
s.walk(dot, elseList)
}
}
// isTrue reports whether the value is 'true', in the sense of not the zero of its type,
// and whether the value has a meaningful truth value.
func isTrue(val reflect.Value) (truth, ok bool) {
if !val.IsValid() {
// Something like var x interface{}, never set. It's a form of nil.
return false, true
}
switch val.Kind() {
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
truth = val.Len() > 0
case reflect.Bool:
truth = val.Bool()
case reflect.Complex64, reflect.Complex128:
truth = val.Complex() != 0
case reflect.Chan, reflect.Func, reflect.Ptr, reflect.Interface:
truth = !val.IsNil()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
truth = val.Int() != 0
case reflect.Float32, reflect.Float64:
truth = val.Float() != 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
truth = val.Uint() != 0
case reflect.Struct:
truth = true // Struct values are always true.
default:
return
}
return truth, true
}
func (s *state) walkRange(dot reflect.Value, r *parse.RangeNode) {
s.at(r)
defer s.pop(s.mark())
val, _ := indirect(s.evalPipeline(dot, r.Pipe))
// mark top of stack before any variables in the body are pushed.
mark := s.mark()
oneIteration := func(index, elem reflect.Value) {
// Set top var (lexically the second if there are two) to the element.
if len(r.Pipe.Decl) > 0 {
s.setVar(1, elem)
}
// Set next var (lexically the first if there are two) to the index.
if len(r.Pipe.Decl) > 1 {
s.setVar(2, index)
}
s.walk(elem, r.List)
s.pop(mark)
}
switch val.Kind() {
case reflect.Array, reflect.Slice:
if val.Len() == 0 {
break
}
for i := 0; i < val.Len(); i++ {
oneIteration(reflect.ValueOf(i), val.Index(i))
}
return
case reflect.Map:
if val.Len() == 0 {
break
}
for _, key := range sortKeys(val.MapKeys()) {
oneIteration(key, val.MapIndex(key))
}
return
case reflect.Chan:
if val.IsNil() {
break
}
i := 0
for ; ; i++ {
elem, ok := val.Recv()
if !ok {
break
}
oneIteration(reflect.ValueOf(i), elem)
}
if i == 0 {
break
}
return
case reflect.Invalid:
break // An invalid value is likely a nil map, etc. and acts like an empty map.
default:
s.errorf("range can't iterate over %v", val)
}
if r.ElseList != nil {
s.walk(dot, r.ElseList)
}
}
func (s *state) walkTemplate(dot reflect.Value, t *parse.TemplateNode) {
s.at(t)
tmpl := s.tmpl.tmpl[t.Name]
if tmpl == nil {
s.errorf("template %q not defined", t.Name)
}
// Variables declared by the pipeline persist.
dot = s.evalPipeline(dot, t.Pipe)
newState := *s
newState.tmpl = tmpl
// No dynamic scoping: template invocations inherit no variables.
newState.vars = []variable{{"$", dot}}
newState.walk(dot, tmpl.Root)
}
// Eval functions evaluate pipelines, commands, and their elements and extract
// values from the data structure by examining fields, calling methods, and so on.
// The printing of those values happens only through walk functions.
// evalPipeline returns the value acquired by evaluating a pipeline. If the
// pipeline has a variable declaration, the variable will be pushed on the
// stack. Callers should therefore pop the stack after they are finished
// executing commands depending on the pipeline value.
func (s *state) evalPipeline(dot reflect.Value, pipe *parse.PipeNode) (value reflect.Value) {
if pipe == nil {
return
}
s.at(pipe)
for _, cmd := range pipe.Cmds {
value = s.evalCommand(dot, cmd, value) // previous value is this one's final arg.
// If the object has type interface{}, dig down one level to the thing inside.
if value.Kind() == reflect.Interface && value.Type().NumMethod() == 0 {
value = reflect.ValueOf(value.Interface()) // lovely!
}
}
for _, variable := range pipe.Decl {
s.push(variable.Ident[0], value)
}
return value
}
func (s *state) notAFunction(args []parse.Node, final reflect.Value) {
if len(args) > 1 || final.IsValid() {
s.errorf("can't give argument to non-function %s", args[0])
}
}
func (s *state) evalCommand(dot reflect.Value, cmd *parse.CommandNode, final reflect.Value) reflect.Value {
firstWord := cmd.Args[0]
switch n := firstWord.(type) {
case *parse.FieldNode:
return s.evalFieldNode(dot, n, cmd.Args, final)
case *parse.ChainNode:
return s.evalChainNode(dot, n, cmd.Args, final)
case *parse.IdentifierNode:
// Must be a function.
return s.evalFunction(dot, n, cmd, cmd.Args, final)
case *parse.PipeNode:
// Parenthesized pipeline. The arguments are all inside the pipeline; final is ignored.
return s.evalPipeline(dot, n)
case *parse.VariableNode:
return s.evalVariableNode(dot, n, cmd.Args, final)
}
s.at(firstWord)
s.notAFunction(cmd.Args, final)
switch word := firstWord.(type) {
case *parse.BoolNode:
return reflect.ValueOf(word.True)
case *parse.DotNode:
return dot
case *parse.NilNode:
s.errorf("nil is not a command")
case *parse.NumberNode:
return s.idealConstant(word)
case *parse.StringNode:
return reflect.ValueOf(word.Text)
}
s.errorf("can't evaluate command %q", firstWord)
panic("not reached")
}
// idealConstant is called to return the value of a number in a context where
// we don't know the type. In that case, the syntax of the number tells us
// its type, and we use Go rules to resolve. Note there is no such thing as
// a uint ideal constant in this situation - the value must be of int type.
func (s *state) idealConstant(constant *parse.NumberNode) reflect.Value {
// These are ideal constants but we don't know the type
// and we have no context. (If it was a method argument,
// we'd know what we need.) The syntax guides us to some extent.
s.at(constant)
switch {
case constant.IsComplex:
return reflect.ValueOf(constant.Complex128) // incontrovertible.
case constant.IsFloat && !isHexConstant(constant.Text) && strings.IndexAny(constant.Text, ".eE") >= 0:
return reflect.ValueOf(constant.Float64)
case constant.IsInt:
n := int(constant.Int64)
if int64(n) != constant.Int64 {
s.errorf("%s overflows int", constant.Text)
}
return reflect.ValueOf(n)
case constant.IsUint:
s.errorf("%s overflows int", constant.Text)
}
return zero
}
func isHexConstant(s string) bool {
return len(s) > 2 && s[0] == '0' && (s[1] == 'x' || s[1] == 'X')
}
func (s *state) evalFieldNode(dot reflect.Value, field *parse.FieldNode, args []parse.Node, final reflect.Value) reflect.Value {
s.at(field)
return s.evalFieldChain(dot, dot, field, field.Ident, args, final)
}
func (s *state) evalChainNode(dot reflect.Value, chain *parse.ChainNode, args []parse.Node, final reflect.Value) reflect.Value {
s.at(chain)
// (pipe).Field1.Field2 has pipe as .Node, fields as .Field. Eval the pipeline, then the fields.
pipe := s.evalArg(dot, nil, chain.Node)
if len(chain.Field) == 0 {
s.errorf("internal error: no fields in evalChainNode")
}
return s.evalFieldChain(dot, pipe, chain, chain.Field, args, final)
}
func (s *state) evalVariableNode(dot reflect.Value, variable *parse.VariableNode, args []parse.Node, final reflect.Value) reflect.Value {
// $x.Field has $x as the first ident, Field as the second. Eval the var, then the fields.
s.at(variable)
value := s.varValue(variable.Ident[0])
if len(variable.Ident) == 1 {
s.notAFunction(args, final)
return value
}
return s.evalFieldChain(dot, value, variable, variable.Ident[1:], args, final)
}
// evalFieldChain evaluates .X.Y.Z possibly followed by arguments.
// dot is the environment in which to evaluate arguments, while
// receiver is the value being walked along the chain.
func (s *state) evalFieldChain(dot, receiver reflect.Value, node parse.Node, ident []string, args []parse.Node, final reflect.Value) reflect.Value {
n := len(ident)
for i := 0; i < n-1; i++ {
receiver = s.evalField(dot, ident[i], node, nil, zero, receiver)
}
// Now if it's a method, it gets the arguments.
return s.evalField(dot, ident[n-1], node, args, final, receiver)
}
func (s *state) evalFunction(dot reflect.Value, node *parse.IdentifierNode, cmd parse.Node, args []parse.Node, final reflect.Value) reflect.Value {
s.at(node)
name := node.Ident
function, ok := findFunction(name, s.tmpl)
if !ok {
s.errorf("%q is not a defined function", name)
}
return s.evalCall(dot, function, cmd, name, args, final)
}
// evalField evaluates an expression like (.Field) or (.Field arg1 arg2).
// The 'final' argument represents the return value from the preceding
// value of the pipeline, if any.
func (s *state) evalField(dot reflect.Value, fieldName string, node parse.Node, args []parse.Node, final, receiver reflect.Value) reflect.Value {
if !receiver.IsValid() {
return zero
}
typ := receiver.Type()
receiver, _ = indirect(receiver)
// Unless it's an interface, need to get to a value of type *T to guarantee
// we see all methods of T and *T.
ptr := receiver
if ptr.Kind() != reflect.Interface && ptr.CanAddr() {
ptr = ptr.Addr()
}
if method := ptr.MethodByName(fieldName); method.IsValid() {
return s.evalCall(dot, method, node, fieldName, args, final)
}
hasArgs := len(args) > 1 || final.IsValid()
// It's not a method; must be a field of a struct or an element of a map. The receiver must not be nil.
receiver, isNil := indirect(receiver)
if isNil {
s.errorf("nil pointer evaluating %s.%s", typ, fieldName)
}
switch receiver.Kind() {
case reflect.Struct:
tField, ok := receiver.Type().FieldByName(fieldName)
if ok {
field := receiver.FieldByIndex(tField.Index)
if tField.PkgPath != "" { // field is unexported
s.errorf("%s is an unexported field of struct type %s", fieldName, typ)
}
// If it's a function, we must call it.
if hasArgs {
s.errorf("%s has arguments but cannot be invoked as function", fieldName)
}
return field
}
s.errorf("%s is not a field of struct type %s", fieldName, typ)
case reflect.Map:
// If it's a map, attempt to use the field name as a key.
nameVal := reflect.ValueOf(fieldName)
if nameVal.Type().AssignableTo(receiver.Type().Key()) {
if hasArgs {
s.errorf("%s is not a method but has arguments", fieldName)
}
return receiver.MapIndex(nameVal)
}
}
s.errorf("can't evaluate field %s in type %s", fieldName, typ)
panic("not reached")
}
var (
errorType = reflect.TypeOf((*error)(nil)).Elem()
fmtStringerType = reflect.TypeOf((*fmt.Stringer)(nil)).Elem()
)
// evalCall executes a function or method call. If it's a method, fun already has the receiver bound, so
// it looks just like a function call. The arg list, if non-nil, includes (in the manner of the shell), arg[0]
// as the function itself.
func (s *state) evalCall(dot, fun reflect.Value, node parse.Node, name string, args []parse.Node, final reflect.Value) reflect.Value {
if args != nil {
args = args[1:] // Zeroth arg is function name/node; not passed to function.
}
typ := fun.Type()
numIn := len(args)
if final.IsValid() {
numIn++
}
numFixed := len(args)
if typ.IsVariadic() {
numFixed = typ.NumIn() - 1 // last arg is the variadic one.
if numIn < numFixed {
s.errorf("wrong number of args for %s: want at least %d got %d", name, typ.NumIn()-1, len(args))
}
} else if numIn < typ.NumIn()-1 || !typ.IsVariadic() && numIn != typ.NumIn() {
s.errorf("wrong number of args for %s: want %d got %d", name, typ.NumIn(), len(args))
}
if !goodFunc(typ) {
// TODO: This could still be a confusing error; maybe goodFunc should provide info.
s.errorf("can't call method/function %q with %d results", name, typ.NumOut())
}
// Build the arg list.
argv := make([]reflect.Value, numIn)
// Args must be evaluated. Fixed args first.
i := 0
for ; i < numFixed && i < len(args); i++ {
argv[i] = s.evalArg(dot, typ.In(i), args[i])
}
// Now the ... args.
if typ.IsVariadic() {
argType := typ.In(typ.NumIn() - 1).Elem() // Argument is a slice.
for ; i < len(args); i++ {
argv[i] = s.evalArg(dot, argType, args[i])
}
}
// Add final value if necessary.
if final.IsValid() {
t := typ.In(typ.NumIn() - 1)
if typ.IsVariadic() {
t = t.Elem()
}
argv[i] = s.validateType(final, t)
}
result := fun.Call(argv)
// If we have an error that is not nil, stop execution and return that error to the caller.
if len(result) == 2 && !result[1].IsNil() {
s.at(node)
s.errorf("error calling %s: %s", name, result[1].Interface().(error))
}
return result[0]
}
// canBeNil reports whether an untyped nil can be assigned to the type. See reflect.Zero.
func canBeNil(typ reflect.Type) bool {
switch typ.Kind() {
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return true
}
return false
}
// validateType guarantees that the value is valid and assignable to the type.
func (s *state) validateType(value reflect.Value, typ reflect.Type) reflect.Value {
if !value.IsValid() {
if typ == nil || canBeNil(typ) {
// An untyped nil interface{}. Accept as a proper nil value.
return reflect.Zero(typ)
}
s.errorf("invalid value; expected %s", typ)
}
if typ != nil && !value.Type().AssignableTo(typ) {
if value.Kind() == reflect.Interface && !value.IsNil() {
value = value.Elem()
if value.Type().AssignableTo(typ) {
return value
}
// fallthrough
}
// Does one dereference or indirection work? We could do more, as we
// do with method receivers, but that gets messy and method receivers
// are much more constrained, so it makes more sense there than here.
// Besides, one is almost always all you need.
switch {
case value.Kind() == reflect.Ptr && value.Type().Elem().AssignableTo(typ):
value = value.Elem()
if !value.IsValid() {
s.errorf("dereference of nil pointer of type %s", typ)
}
case reflect.PtrTo(value.Type()).AssignableTo(typ) && value.CanAddr():
value = value.Addr()
default:
s.errorf("wrong type for value; expected %s; got %s", typ, value.Type())
}
}
return value
}
func (s *state) evalArg(dot reflect.Value, typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
switch arg := n.(type) {
case *parse.DotNode:
return s.validateType(dot, typ)
case *parse.NilNode:
if canBeNil(typ) {
return reflect.Zero(typ)
}
s.errorf("cannot assign nil to %s", typ)
case *parse.FieldNode:
return s.validateType(s.evalFieldNode(dot, arg, []parse.Node{n}, zero), typ)
case *parse.VariableNode:
return s.validateType(s.evalVariableNode(dot, arg, nil, zero), typ)
case *parse.PipeNode:
return s.validateType(s.evalPipeline(dot, arg), typ)
case *parse.IdentifierNode:
return s.evalFunction(dot, arg, arg, nil, zero)
case *parse.ChainNode:
return s.validateType(s.evalChainNode(dot, arg, nil, zero), typ)
}
switch typ.Kind() {
case reflect.Bool:
return s.evalBool(typ, n)
case reflect.Complex64, reflect.Complex128:
return s.evalComplex(typ, n)
case reflect.Float32, reflect.Float64:
return s.evalFloat(typ, n)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return s.evalInteger(typ, n)
case reflect.Interface:
if typ.NumMethod() == 0 {
return s.evalEmptyInterface(dot, n)
}
case reflect.String:
return s.evalString(typ, n)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return s.evalUnsignedInteger(typ, n)
}
s.errorf("can't handle %s for arg of type %s", n, typ)
panic("not reached")
}
func (s *state) evalBool(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.BoolNode); ok {
value := reflect.New(typ).Elem()
value.SetBool(n.True)
return value
}
s.errorf("expected bool; found %s", n)
panic("not reached")
}
func (s *state) evalString(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.StringNode); ok {
value := reflect.New(typ).Elem()
value.SetString(n.Text)
return value
}
s.errorf("expected string; found %s", n)
panic("not reached")
}
func (s *state) evalInteger(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.NumberNode); ok && n.IsInt {
value := reflect.New(typ).Elem()
value.SetInt(n.Int64)
return value
}
s.errorf("expected integer; found %s", n)
panic("not reached")
}
func (s *state) evalUnsignedInteger(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.NumberNode); ok && n.IsUint {
value := reflect.New(typ).Elem()
value.SetUint(n.Uint64)
return value
}
s.errorf("expected unsigned integer; found %s", n)
panic("not reached")
}
func (s *state) evalFloat(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.NumberNode); ok && n.IsFloat {
value := reflect.New(typ).Elem()
value.SetFloat(n.Float64)
return value
}
s.errorf("expected float; found %s", n)
panic("not reached")
}
func (s *state) evalComplex(typ reflect.Type, n parse.Node) reflect.Value {
if n, ok := n.(*parse.NumberNode); ok && n.IsComplex {
value := reflect.New(typ).Elem()
value.SetComplex(n.Complex128)
return value
}
s.errorf("expected complex; found %s", n)
panic("not reached")
}
func (s *state) evalEmptyInterface(dot reflect.Value, n parse.Node) reflect.Value {
s.at(n)
switch n := n.(type) {
case *parse.BoolNode:
return reflect.ValueOf(n.True)
case *parse.DotNode:
return dot
case *parse.FieldNode:
return s.evalFieldNode(dot, n, nil, zero)
case *parse.IdentifierNode:
return s.evalFunction(dot, n, n, nil, zero)
case *parse.NilNode:
// NilNode is handled in evalArg, the only place that calls here.
s.errorf("evalEmptyInterface: nil (can't happen)")
case *parse.NumberNode:
return s.idealConstant(n)
case *parse.StringNode:
return reflect.ValueOf(n.Text)
case *parse.VariableNode:
return s.evalVariableNode(dot, n, nil, zero)
case *parse.PipeNode:
return s.evalPipeline(dot, n)
}
s.errorf("can't handle assignment of %s to empty interface argument", n)
panic("not reached")
}
// indirect returns the item at the end of indirection, and a bool to indicate if it's nil.
// We indirect through pointers and empty interfaces (only) because
// non-empty interfaces have methods we might need.
func indirect(v reflect.Value) (rv reflect.Value, isNil bool) {
for ; v.Kind() == reflect.Ptr || v.Kind() == reflect.Interface; v = v.Elem() {
if v.IsNil() {
return v, true
}
if v.Kind() == reflect.Interface && v.NumMethod() > 0 {
break
}
}
return v, false
}
// printValue writes the textual representation of the value to the output of
// the template.
func (s *state) printValue(n parse.Node, v reflect.Value) {
s.at(n)
iface, ok := printableValue(v)
if !ok {
s.errorf("can't print %s of type %s", n, v.Type())
}
fmt.Fprint(s.wr, iface)
}
// printableValue returns the (possibly indirected) interface value inside v that
// is best suited for a call to a formatted printer.
func printableValue(v reflect.Value) (interface{}, bool) {
if v.Kind() == reflect.Ptr {
v, _ = indirect(v) // fmt.Fprint handles nil.
}
if !v.IsValid() {
return "<no value>", true
}
if !v.Type().Implements(errorType) && !v.Type().Implements(fmtStringerType) {
if v.CanAddr() && (reflect.PtrTo(v.Type()).Implements(errorType) || reflect.PtrTo(v.Type()).Implements(fmtStringerType)) {
v = v.Addr()
} else {
switch v.Kind() {
case reflect.Chan, reflect.Func:
return nil, false
}
}
}
return v.Interface(), true
}
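printValue and printableValue ultimately hand the value to fmt, so a type that implements fmt.Stringer (or error) renders through that method. A minimal sketch, assuming the fork keeps text/template's New/Parse/Execute API; the temp type and the template name are purely illustrative:

package main

import (
    "fmt"
    "os"

    "github.com/alecthomas/template"
)

// temp is a hypothetical type used only for this sketch; its String method
// is what fmt.Fprint (and therefore printValue) picks up.
type temp float64

func (t temp) String() string { return fmt.Sprintf("%.1f°C", float64(t)) }

func main() {
    t := template.Must(template.New("print").Parse(`{{.}}`))
    if err := t.Execute(os.Stdout, temp(21.5)); err != nil {
        panic(err)
    }
    // Prints: 21.5°C
}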
// Types to help sort the keys in a map for reproducible output.
type rvs []reflect.Value
func (x rvs) Len() int { return len(x) }
func (x rvs) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
type rvInts struct{ rvs }
func (x rvInts) Less(i, j int) bool { return x.rvs[i].Int() < x.rvs[j].Int() }
type rvUints struct{ rvs }
func (x rvUints) Less(i, j int) bool { return x.rvs[i].Uint() < x.rvs[j].Uint() }
type rvFloats struct{ rvs }
func (x rvFloats) Less(i, j int) bool { return x.rvs[i].Float() < x.rvs[j].Float() }
type rvStrings struct{ rvs }
func (x rvStrings) Less(i, j int) bool { return x.rvs[i].String() < x.rvs[j].String() }
// sortKeys sorts (if it can) the slice of reflect.Values, which is a slice of map keys.
func sortKeys(v []reflect.Value) []reflect.Value {
if len(v) <= 1 {
return v
}
switch v[0].Kind() {
case reflect.Float32, reflect.Float64:
sort.Sort(rvFloats{v})
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
sort.Sort(rvInts{v})
case reflect.String:
sort.Sort(rvStrings{v})
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
sort.Sort(rvUints{v})
}
return v
}
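sortKeys is what makes {{range}} over a map visit its keys in a stable order. A standalone sketch of the same idea for the common string-keyed case, using sort.Slice instead of the unexported helpers above:

package main

import (
    "fmt"
    "reflect"
    "sort"
)

// sortedStringKeys mirrors sortKeys for string-keyed maps: collect the
// reflect.Values of the keys and order them so iteration is reproducible.
func sortedStringKeys(m interface{}) []reflect.Value {
    keys := reflect.ValueOf(m).MapKeys()
    sort.Slice(keys, func(i, j int) bool { return keys[i].String() < keys[j].String() })
    return keys
}

func main() {
    m := map[string]int{"b": 2, "a": 1, "c": 3}
    for _, k := range sortedStringKeys(m) {
        fmt.Println(k.String(), m[k.String()]) // a 1, b 2, c 3 — always in this order
    }
}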


@@ -1,598 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package template
import (
"bytes"
"errors"
"fmt"
"io"
"net/url"
"reflect"
"strings"
"unicode"
"unicode/utf8"
)
// FuncMap is the type of the map defining the mapping from names to functions.
// Each function must have either a single return value, or two return values of
// which the second has type error. In that case, if the second (error)
// return value evaluates to non-nil during execution, execution terminates and
// Execute returns that error.
type FuncMap map[string]interface{}
var builtins = FuncMap{
"and": and,
"call": call,
"html": HTMLEscaper,
"index": index,
"js": JSEscaper,
"len": length,
"not": not,
"or": or,
"print": fmt.Sprint,
"printf": fmt.Sprintf,
"println": fmt.Sprintln,
"urlquery": URLQueryEscaper,
// Comparisons
"eq": eq, // ==
"ge": ge, // >=
"gt": gt, // >
"le": le, // <=
"lt": lt, // <
"ne": ne, // !=
}
var builtinFuncs = createValueFuncs(builtins)
// createValueFuncs turns a FuncMap into a map[string]reflect.Value
func createValueFuncs(funcMap FuncMap) map[string]reflect.Value {
m := make(map[string]reflect.Value)
addValueFuncs(m, funcMap)
return m
}
// addValueFuncs adds to values the functions in funcs, converting them to reflect.Values.
func addValueFuncs(out map[string]reflect.Value, in FuncMap) {
for name, fn := range in {
v := reflect.ValueOf(fn)
if v.Kind() != reflect.Func {
panic("value for " + name + " not a function")
}
if !goodFunc(v.Type()) {
panic(fmt.Errorf("can't install method/function %q with %d results", name, v.Type().NumOut()))
}
out[name] = v
}
}
// addFuncs adds to values the functions in funcs. It does no checking of the input -
// call addValueFuncs first.
func addFuncs(out, in FuncMap) {
for name, fn := range in {
out[name] = fn
}
}
// goodFunc checks that the function or method has the right result signature.
func goodFunc(typ reflect.Type) bool {
// We allow functions with 1 result or 2 results where the second is an error.
switch {
case typ.NumOut() == 1:
return true
case typ.NumOut() == 2 && typ.Out(1) == errorType:
return true
}
return false
}
// findFunction looks for a function in the template's function map and then in the global map.
func findFunction(name string, tmpl *Template) (reflect.Value, bool) {
if tmpl != nil && tmpl.common != nil {
if fn := tmpl.execFuncs[name]; fn.IsValid() {
return fn, true
}
}
if fn := builtinFuncs[name]; fn.IsValid() {
return fn, true
}
return reflect.Value{}, false
}
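findFunction consults the template's own function map before the builtins listed above, so a function registered via Funcs can shadow a builtin of the same name. A hedged usage sketch, assuming the fork keeps text/template's New/Funcs/Parse/Execute API; the "upper" function is an arbitrary example:

package main

import (
    "os"
    "strings"

    "github.com/alecthomas/template"
)

func main() {
    // Functions registered via Funcs are looked up before the builtins above.
    funcs := template.FuncMap{"upper": strings.ToUpper}
    t := template.Must(template.New("demo").Funcs(funcs).Parse(`{{upper .}} has length {{len .}}`))
    if err := t.Execute(os.Stdout, "hello"); err != nil {
        panic(err)
    }
    // Prints: HELLO has length 5
}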
// Indexing.
// index returns the result of indexing its first argument by the following
// arguments. Thus "index x 1 2 3" is, in Go syntax, x[1][2][3]. Each
// indexed item must be a map, slice, or array.
func index(item interface{}, indices ...interface{}) (interface{}, error) {
v := reflect.ValueOf(item)
for _, i := range indices {
index := reflect.ValueOf(i)
var isNil bool
if v, isNil = indirect(v); isNil {
return nil, fmt.Errorf("index of nil pointer")
}
switch v.Kind() {
case reflect.Array, reflect.Slice, reflect.String:
var x int64
switch index.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
x = index.Int()
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
x = int64(index.Uint())
default:
return nil, fmt.Errorf("cannot index slice/array with type %s", index.Type())
}
if x < 0 || x >= int64(v.Len()) {
return nil, fmt.Errorf("index out of range: %d", x)
}
v = v.Index(int(x))
case reflect.Map:
if !index.IsValid() {
index = reflect.Zero(v.Type().Key())
}
if !index.Type().AssignableTo(v.Type().Key()) {
return nil, fmt.Errorf("%s is not index type for %s", index.Type(), v.Type())
}
if x := v.MapIndex(index); x.IsValid() {
v = x
} else {
v = reflect.Zero(v.Type().Elem())
}
default:
return nil, fmt.Errorf("can't index item of type %s", v.Type())
}
}
return v.Interface(), nil
}
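In template syntax, index reads as chained subscripting: {{index x 1 2}} is x[1][2] in Go. A small example under the same API assumptions as above; the "letters" key is illustrative:

package main

import (
    "os"

    "github.com/alecthomas/template"
)

func main() {
    // {{index . "letters" 1}} is dot["letters"][1] in Go syntax.
    t := template.Must(template.New("idx").Parse(`{{index . "letters" 1}}`))
    data := map[string][]string{"letters": {"a", "b", "c"}}
    if err := t.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
    // Prints: b
}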
// Length
// length returns the length of the item, with an error if it has no defined length.
func length(item interface{}) (int, error) {
v, isNil := indirect(reflect.ValueOf(item))
if isNil {
return 0, fmt.Errorf("len of nil pointer")
}
switch v.Kind() {
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
return v.Len(), nil
}
return 0, fmt.Errorf("len of type %s", v.Type())
}
// Function invocation
// call returns the result of evaluating the first argument as a function.
// The function must return 1 result, or 2 results, the second of which is an error.
func call(fn interface{}, args ...interface{}) (interface{}, error) {
v := reflect.ValueOf(fn)
typ := v.Type()
if typ.Kind() != reflect.Func {
return nil, fmt.Errorf("non-function of type %s", typ)
}
if !goodFunc(typ) {
return nil, fmt.Errorf("function called with %d args; should be 1 or 2", typ.NumOut())
}
numIn := typ.NumIn()
var dddType reflect.Type
if typ.IsVariadic() {
if len(args) < numIn-1 {
return nil, fmt.Errorf("wrong number of args: got %d want at least %d", len(args), numIn-1)
}
dddType = typ.In(numIn - 1).Elem()
} else {
if len(args) != numIn {
return nil, fmt.Errorf("wrong number of args: got %d want %d", len(args), numIn)
}
}
argv := make([]reflect.Value, len(args))
for i, arg := range args {
value := reflect.ValueOf(arg)
// Compute the expected type. Clumsy because of variadics.
var argType reflect.Type
if !typ.IsVariadic() || i < numIn-1 {
argType = typ.In(i)
} else {
argType = dddType
}
if !value.IsValid() && canBeNil(argType) {
value = reflect.Zero(argType)
}
if !value.Type().AssignableTo(argType) {
return nil, fmt.Errorf("arg %d has type %s; should be %s", i, value.Type(), argType)
}
argv[i] = value
}
result := v.Call(argv)
if len(result) == 2 && !result[1].IsNil() {
return result[0].Interface(), result[1].Interface().(error)
}
return result[0].Interface(), nil
}
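call is the escape hatch for invoking a function value carried in the data rather than one registered in a FuncMap. A hedged sketch; the Add field is a hypothetical example:

package main

import (
    "os"

    "github.com/alecthomas/template"
)

type data struct {
    // Add is a hypothetical function-valued field; call invokes it with the
    // remaining arguments, so {{call .Add 2 3}} evaluates to .Add(2, 3).
    Add func(a, b int) int
}

func main() {
    t := template.Must(template.New("call").Parse(`{{call .Add 2 3}}`))
    d := data{Add: func(a, b int) int { return a + b }}
    if err := t.Execute(os.Stdout, d); err != nil {
        panic(err)
    }
    // Prints: 5
}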
// Boolean logic.
func truth(a interface{}) bool {
t, _ := isTrue(reflect.ValueOf(a))
return t
}
// and computes the Boolean AND of its arguments, returning
// the first false argument it encounters, or the last argument.
func and(arg0 interface{}, args ...interface{}) interface{} {
if !truth(arg0) {
return arg0
}
for i := range args {
arg0 = args[i]
if !truth(arg0) {
break
}
}
return arg0
}
// or computes the Boolean OR of its arguments, returning
// the first true argument it encounters, or the last argument.
func or(arg0 interface{}, args ...interface{}) interface{} {
if truth(arg0) {
return arg0
}
for i := range args {
arg0 = args[i]
if truth(arg0) {
break
}
}
return arg0
}
// not returns the Boolean negation of its argument.
func not(arg interface{}) (truth bool) {
truth, _ = isTrue(reflect.ValueOf(arg))
return !truth
}
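Because and and or return one of their arguments rather than a plain boolean, or doubles as a default-value idiom in templates. A small example under the same API assumptions:

package main

import (
    "os"

    "github.com/alecthomas/template"
)

func main() {
    // or returns its first true (non-empty) argument, so an empty Name
    // falls through to the literal fallback.
    t := template.Must(template.New("or").Parse(`Hello, {{or .Name "anonymous"}}!`))
    d := struct{ Name string }{}
    if err := t.Execute(os.Stdout, d); err != nil {
        panic(err)
    }
    // Prints: Hello, anonymous!
}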
// Comparison.
// TODO: Perhaps allow comparison between signed and unsigned integers.
var (
errBadComparisonType = errors.New("invalid type for comparison")
errBadComparison = errors.New("incompatible types for comparison")
errNoComparison = errors.New("missing argument for comparison")
)
type kind int
const (
invalidKind kind = iota
boolKind
complexKind
intKind
floatKind
integerKind
stringKind
uintKind
)
func basicKind(v reflect.Value) (kind, error) {
switch v.Kind() {
case reflect.Bool:
return boolKind, nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return intKind, nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return uintKind, nil
case reflect.Float32, reflect.Float64:
return floatKind, nil
case reflect.Complex64, reflect.Complex128:
return complexKind, nil
case reflect.String:
return stringKind, nil
}
return invalidKind, errBadComparisonType
}
// eq evaluates the comparison a == b || a == c || ...
func eq(arg1 interface{}, arg2 ...interface{}) (bool, error) {
v1 := reflect.ValueOf(arg1)
k1, err := basicKind(v1)
if err != nil {
return false, err
}
if len(arg2) == 0 {
return false, errNoComparison
}
for _, arg := range arg2 {
v2 := reflect.ValueOf(arg)
k2, err := basicKind(v2)
if err != nil {
return false, err
}
truth := false
if k1 != k2 {
// Special case: Can compare integer values regardless of type's sign.
switch {
case k1 == intKind && k2 == uintKind:
truth = v1.Int() >= 0 && uint64(v1.Int()) == v2.Uint()
case k1 == uintKind && k2 == intKind:
truth = v2.Int() >= 0 && v1.Uint() == uint64(v2.Int())
default:
return false, errBadComparison
}
} else {
switch k1 {
case boolKind:
truth = v1.Bool() == v2.Bool()
case complexKind:
truth = v1.Complex() == v2.Complex()
case floatKind:
truth = v1.Float() == v2.Float()
case intKind:
truth = v1.Int() == v2.Int()
case stringKind:
truth = v1.String() == v2.String()
case uintKind:
truth = v1.Uint() == v2.Uint()
default:
panic("invalid kind")
}
}
if truth {
return true, nil
}
}
return false, nil
}
// ne evaluates the comparison a != b.
func ne(arg1, arg2 interface{}) (bool, error) {
// != is the inverse of ==.
equal, err := eq(arg1, arg2)
return !equal, err
}
// lt evaluates the comparison a < b.
func lt(arg1, arg2 interface{}) (bool, error) {
v1 := reflect.ValueOf(arg1)
k1, err := basicKind(v1)
if err != nil {
return false, err
}
v2 := reflect.ValueOf(arg2)
k2, err := basicKind(v2)
if err != nil {
return false, err
}
truth := false
if k1 != k2 {
// Special case: Can compare integer values regardless of type's sign.
switch {
case k1 == intKind && k2 == uintKind:
truth = v1.Int() < 0 || uint64(v1.Int()) < v2.Uint()
case k1 == uintKind && k2 == intKind:
truth = v2.Int() >= 0 && v1.Uint() < uint64(v2.Int())
default:
return false, errBadComparison
}
} else {
switch k1 {
case boolKind, complexKind:
return false, errBadComparisonType
case floatKind:
truth = v1.Float() < v2.Float()
case intKind:
truth = v1.Int() < v2.Int()
case stringKind:
truth = v1.String() < v2.String()
case uintKind:
truth = v1.Uint() < v2.Uint()
default:
panic("invalid kind")
}
}
return truth, nil
}
// le evaluates the comparison a <= b.
func le(arg1, arg2 interface{}) (bool, error) {
// <= is < or ==.
lessThan, err := lt(arg1, arg2)
if lessThan || err != nil {
return lessThan, err
}
return eq(arg1, arg2)
}
// gt evaluates the comparison a > b.
func gt(arg1, arg2 interface{}) (bool, error) {
// > is the inverse of <=.
lessOrEqual, err := le(arg1, arg2)
if err != nil {
return false, err
}
return !lessOrEqual, nil
}
// ge evaluates the comparison a >= b.
func ge(arg1, arg2 interface{}) (bool, error) {
// >= is the inverse of <.
lessThan, err := lt(arg1, arg2)
if err != nil {
return false, err
}
return !lessThan, nil
}
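The comparison builtins operate on basic kinds only, with the single special case of mixed signed and unsigned integers handled above. A short example, again assuming the text/template-style API:

package main

import (
    "os"

    "github.com/alecthomas/template"
)

func main() {
    t := template.Must(template.New("cmp").Parse(
        `{{if lt .Count .Limit}}under limit{{else}}at or over limit{{end}}`))
    d := map[string]int{"Count": 3, "Limit": 5}
    if err := t.Execute(os.Stdout, d); err != nil {
        panic(err)
    }
    // Prints: under limit
}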
// HTML escaping.
var (
htmlQuot = []byte("&#34;") // shorter than "&quot;"
htmlApos = []byte("&#39;") // shorter than "&apos;" and apos was not in HTML until HTML5
htmlAmp = []byte("&amp;")
htmlLt = []byte("&lt;")
htmlGt = []byte("&gt;")
)
// HTMLEscape writes to w the escaped HTML equivalent of the plain text data b.
func HTMLEscape(w io.Writer, b []byte) {
last := 0
for i, c := range b {
var html []byte
switch c {
case '"':
html = htmlQuot
case '\'':
html = htmlApos
case '&':
html = htmlAmp
case '<':
html = htmlLt
case '>':
html = htmlGt
default:
continue
}
w.Write(b[last:i])
w.Write(html)
last = i + 1
}
w.Write(b[last:])
}
// HTMLEscapeString returns the escaped HTML equivalent of the plain text data s.
func HTMLEscapeString(s string) string {
// Avoid allocation if we can.
if strings.IndexAny(s, `'"&<>`) < 0 {
return s
}
var b bytes.Buffer
HTMLEscape(&b, []byte(s))
return b.String()
}
// HTMLEscaper returns the escaped HTML equivalent of the textual
// representation of its arguments.
func HTMLEscaper(args ...interface{}) string {
return HTMLEscapeString(evalArgs(args))
}
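HTMLEscape and its wrappers replace only the five characters listed above, preferring the shorter numeric entities. For example, calling the exported helper directly (import path taken from the module's go.mod):

package main

import (
    "fmt"

    "github.com/alecthomas/template"
)

func main() {
    fmt.Println(template.HTMLEscapeString(`<b>&'"`))
    // Prints: &lt;b&gt;&amp;&#39;&#34;
}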
// JavaScript escaping.
var (
jsLowUni = []byte(`\u00`)
hex = []byte("0123456789ABCDEF")
jsBackslash = []byte(`\\`)
jsApos = []byte(`\'`)
jsQuot = []byte(`\"`)
jsLt = []byte(`\x3C`)
jsGt = []byte(`\x3E`)
)
// JSEscape writes to w the escaped JavaScript equivalent of the plain text data b.
func JSEscape(w io.Writer, b []byte) {
last := 0
for i := 0; i < len(b); i++ {
c := b[i]
if !jsIsSpecial(rune(c)) {
// fast path: nothing to do
continue
}
w.Write(b[last:i])
if c < utf8.RuneSelf {
// Quotes, slashes and angle brackets get quoted.
// Control characters get written as \u00XX.
switch c {
case '\\':
w.Write(jsBackslash)
case '\'':
w.Write(jsApos)
case '"':
w.Write(jsQuot)
case '<':
w.Write(jsLt)
case '>':
w.Write(jsGt)
default:
w.Write(jsLowUni)
t, b := c>>4, c&0x0f
w.Write(hex[t : t+1])
w.Write(hex[b : b+1])
}
} else {
// Unicode rune.
r, size := utf8.DecodeRune(b[i:])
if unicode.IsPrint(r) {
w.Write(b[i : i+size])
} else {
fmt.Fprintf(w, "\\u%04X", r)
}
i += size - 1
}
last = i + 1
}
w.Write(b[last:])
}
// JSEscapeString returns the escaped JavaScript equivalent of the plain text data s.
func JSEscapeString(s string) string {
// Avoid allocation if we can.
if strings.IndexFunc(s, jsIsSpecial) < 0 {
return s
}
var b bytes.Buffer
JSEscape(&b, []byte(s))
return b.String()
}
func jsIsSpecial(r rune) bool {
switch r {
case '\\', '\'', '"', '<', '>':
return true
}
return r < ' ' || utf8.RuneSelf <= r
}
// JSEscaper returns the escaped JavaScript equivalent of the textual
// representation of its arguments.
func JSEscaper(args ...interface{}) string {
return JSEscapeString(evalArgs(args))
}
// URLQueryEscaper returns the escaped value of the textual representation of
// its arguments in a form suitable for embedding in a URL query.
func URLQueryEscaper(args ...interface{}) string {
return url.QueryEscape(evalArgs(args))
}
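JSEscape backslash-quotes the special ASCII characters, hex-escapes angle brackets, and \u-escapes non-printable runes, while URLQueryEscaper simply defers to url.QueryEscape after formatting its arguments. For example:

package main

import (
    "fmt"

    "github.com/alecthomas/template"
)

func main() {
    fmt.Println(template.JSEscapeString(`</script>`))
    // Prints: \x3C/script\x3E

    fmt.Println(template.URLQueryEscaper("a b", "&", "c"))
    // Prints: a+b%26c
}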
// evalArgs formats the list of arguments into a string. It is therefore equivalent to
// fmt.Sprint(args...)
// except that each argument is indirected (if a pointer), as required,
// using the same rules as the default string evaluation during template
// execution.
func evalArgs(args []interface{}) string {
ok := false
var s string
// Fast path for simple common case.
if len(args) == 1 {
s, ok = args[0].(string)
}
if !ok {
for i, arg := range args {
a, ok := printableValue(reflect.ValueOf(arg))
if ok {
args[i] = a
} // else let fmt do its thing
}
s = fmt.Sprint(args...)
}
return s
}


@@ -1 +0,0 @@
module github.com/alecthomas/template


@@ -1,108 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Helper functions to make constructing templates easier.
package template
import (
"fmt"
"io/ioutil"
"path/filepath"
)
// Functions and methods to parse templates.
// Must is a helper that wraps a call to a function returning (*Template, error)
// and panics if the error is non-nil. It is intended for use in variable
// initializations such as
// var t = template.Must(template.New("name").Parse("text"))
func Must(t *Template, err error) *Template {
if err != nil {
panic(err)
}
return t
}
// ParseFiles creates a new Template and parses the template definitions from
// the named files. The returned template's name will have the (base) name and
// (parsed) contents of the first file. There must be at least one file.
// If an error occurs, parsing stops and the returned *Template is nil.
func ParseFiles(filenames ...string) (*Template, error) {
return parseFiles(nil, filenames...)
}
// ParseFiles parses the named files and associates the resulting templates with
// t. If an error occurs, parsing stops and the returned template is nil;
// otherwise it is t. There must be at least one file.
func (t *Template) ParseFiles(filenames ...string) (*Template, error) {
return parseFiles(t, filenames...)
}
// parseFiles is the helper for the method and function. If the argument
// template is nil, it is created from the first file.
func parseFiles(t *Template, filenames ...string) (*Template, error) {
if len(filenames) == 0 {
// Not really a problem, but be consistent.
return nil, fmt.Errorf("template: no files named in call to ParseFiles")
}
for _, filename := range filenames {
b, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
s := string(b)
name := filepath.Base(filename)
// First template becomes return value if not already defined,
// and we use that one for subsequent New calls to associate
// all the templates together. Also, if this file has the same name
// as t, this file becomes the contents of t, so
// t, err := New(name).Funcs(xxx).ParseFiles(name)
// works. Otherwise we create a new template associated with t.
var tmpl *Template
if t == nil {
t = New(name)
}
if name == t.Name() {
tmpl = t
} else {
tmpl = t.New(name)
}
_, err = tmpl.Parse(s)
if err != nil {
return nil, err
}
}
return t, nil
}
// ParseGlob creates a new Template and parses the template definitions from the
// files identified by the pattern, which must match at least one file. The
// returned template will have the (base) name and (parsed) contents of the
// first file matched by the pattern. ParseGlob is equivalent to calling
// ParseFiles with the list of files matched by the pattern.
func ParseGlob(pattern string) (*Template, error) {
return parseGlob(nil, pattern)
}
// ParseGlob parses the template definitions in the files identified by the
// pattern and associates the resulting templates with t. The pattern is
// processed by filepath.Glob and must match at least one file. ParseGlob is
// equivalent to calling t.ParseFiles with the list of files matched by the
// pattern.
func (t *Template) ParseGlob(pattern string) (*Template, error) {
return parseGlob(t, pattern)
}
// parseGlob is the implementation of the function and method ParseGlob.
func parseGlob(t *Template, pattern string) (*Template, error) {
filenames, err := filepath.Glob(pattern)
if err != nil {
return nil, err
}
if len(filenames) == 0 {
return nil, fmt.Errorf("template: pattern matches no files: %#q", pattern)
}
return parseFiles(t, filenames...)
}
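Must, ParseFiles, and ParseGlob combine into the usual load-templates-at-startup idiom. A hedged sketch: the templates/*.tmpl pattern and the page.tmpl name are hypothetical, and ExecuteTemplate is assumed to exist as in text/template:

package main

import (
    "os"

    "github.com/alecthomas/template"
)

// tmpl panics at startup if the glob matches no files or any file fails to
// parse, which is exactly the use case Must is meant for.
var tmpl = template.Must(template.ParseGlob("templates/*.tmpl"))

func main() {
    // ExecuteTemplate (assumed, as in text/template) selects one of the
    // parsed files by its base name.
    if err := tmpl.ExecuteTemplate(os.Stdout, "page.tmpl", nil); err != nil {
        panic(err)
    }
}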


@@ -1,556 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package parse
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
// item represents a token or text string returned from the scanner.
type item struct {
typ itemType // The type of this item.
pos Pos // The starting position, in bytes, of this item in the input string.
val string // The value of this item.
}
func (i item) String() string {
switch {
case i.typ == itemEOF:
return "EOF"
case i.typ == itemError:
return i.val
case i.typ > itemKeyword:
return fmt.Sprintf("<%s>", i.val)
case len(i.val) > 10:
return fmt.Sprintf("%.10q...", i.val)
}
return fmt.Sprintf("%q", i.val)
}
// itemType identifies the type of lex items.
type itemType int
const (
itemError itemType = iota // error occurred; value is text of error
itemBool // boolean constant
itemChar // printable ASCII character; grab bag for comma etc.
itemCharConstant // character constant
itemComplex // complex constant (1+2i); imaginary is just a number
itemColonEquals // colon-equals (':=') introducing a declaration
itemEOF
itemField // alphanumeric identifier starting with '.'
itemIdentifier // alphanumeric identifier not starting with '.'
itemLeftDelim // left action delimiter
itemLeftParen // '(' inside action
itemNumber // simple number, including imaginary
itemPipe // pipe symbol
itemRawString // raw quoted string (includes quotes)
itemRightDelim // right action delimiter
itemElideNewline // elide newline after right delim
itemRightParen // ')' inside action
itemSpace // run of spaces separating arguments
itemString // quoted string (includes quotes)
itemText // plain text
itemVariable // variable starting with '$', such as '$' or '$1' or '$hello'
// Keywords appear after all the rest.
itemKeyword // used only to delimit the keywords
itemDot // the cursor, spelled '.'
itemDefine // define keyword
itemElse // else keyword
itemEnd // end keyword
itemIf // if keyword
itemNil // the untyped nil constant, easiest to treat as a keyword
itemRange // range keyword
itemTemplate // template keyword
itemWith // with keyword
)
var key = map[string]itemType{
".": itemDot,
"define": itemDefine,
"else": itemElse,
"end": itemEnd,
"if": itemIf,
"range": itemRange,
"nil": itemNil,
"template": itemTemplate,
"with": itemWith,
}
const eof = -1
// stateFn represents the state of the scanner as a function that returns the next state.
type stateFn func(*lexer) stateFn
// lexer holds the state of the scanner.
type lexer struct {
name string // the name of the input; used only for error reports
input string // the string being scanned
leftDelim string // start of action
rightDelim string // end of action
state stateFn // the next lexing function to enter
pos Pos // current position in the input
start Pos // start position of this item
width Pos // width of last rune read from input
lastPos Pos // position of most recent item returned by nextItem
items chan item // channel of scanned items
parenDepth int // nesting depth of ( ) exprs
}
// next returns the next rune in the input.
func (l *lexer) next() rune {
if int(l.pos) >= len(l.input) {
l.width = 0
return eof
}
r, w := utf8.DecodeRuneInString(l.input[l.pos:])
l.width = Pos(w)
l.pos += l.width
return r
}
// peek returns but does not consume the next rune in the input.
func (l *lexer) peek() rune {
r := l.next()
l.backup()
return r
}
// backup steps back one rune. Can only be called once per call of next.
func (l *lexer) backup() {
l.pos -= l.width
}
// emit passes an item back to the client.
func (l *lexer) emit(t itemType) {
l.items <- item{t, l.start, l.input[l.start:l.pos]}
l.start = l.pos
}
// ignore skips over the pending input before this point.
func (l *lexer) ignore() {
l.start = l.pos
}
// accept consumes the next rune if it's from the valid set.
func (l *lexer) accept(valid string) bool {
if strings.IndexRune(valid, l.next()) >= 0 {
return true
}
l.backup()
return false
}
// acceptRun consumes a run of runes from the valid set.
func (l *lexer) acceptRun(valid string) {
for strings.IndexRune(valid, l.next()) >= 0 {
}
l.backup()
}
// lineNumber reports which line we're on, based on the position of
// the previous item returned by nextItem. Doing it this way
// means we don't have to worry about peek double counting.
func (l *lexer) lineNumber() int {
return 1 + strings.Count(l.input[:l.lastPos], "\n")
}
// errorf returns an error token and terminates the scan by passing
// back a nil pointer that will be the next state, terminating l.nextItem.
func (l *lexer) errorf(format string, args ...interface{}) stateFn {
l.items <- item{itemError, l.start, fmt.Sprintf(format, args...)}
return nil
}
// nextItem returns the next item from the input.
func (l *lexer) nextItem() item {
item := <-l.items
l.lastPos = item.pos
return item
}
// lex creates a new scanner for the input string.
func lex(name, input, left, right string) *lexer {
if left == "" {
left = leftDelim
}
if right == "" {
right = rightDelim
}
l := &lexer{
name: name,
input: input,
leftDelim: left,
rightDelim: right,
items: make(chan item),
}
go l.run()
return l
}
// run runs the state machine for the lexer.
func (l *lexer) run() {
for l.state = lexText; l.state != nil; {
l.state = l.state(l)
}
}
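The lexer is a textbook state-function design: each stateFn does some work, emits items on the channel, and returns the next state, while run drives the loop in its own goroutine and nextItem receives on the other end. A self-contained toy version of the same pattern, independent of this package:

package main

import (
    "fmt"
    "unicode/utf8"
)

// miniLexer and miniState sketch the same stateFn-over-a-channel design used
// above, reduced to splitting the input into space-separated words.
type miniLexer struct {
    input string
    start int
    pos   int
    items chan string
}

type miniState func(*miniLexer) miniState

func lexWords(l *miniLexer) miniState {
    for l.pos < len(l.input) {
        r, w := utf8.DecodeRuneInString(l.input[l.pos:])
        if r == ' ' {
            if l.pos > l.start {
                l.items <- l.input[l.start:l.pos]
            }
            l.pos += w
            l.start = l.pos
            continue
        }
        l.pos += w
    }
    if l.pos > l.start {
        l.items <- l.input[l.start:l.pos]
    }
    close(l.items)
    return nil // no next state: the run loop below stops.
}

func main() {
    l := &miniLexer{input: "lex with state functions", items: make(chan string)}
    go func() {
        for state := lexWords; state != nil; {
            state = state(l) // same shape as l.state = l.state(l) in run above
        }
    }()
    for word := range l.items {
        fmt.Println(word)
    }
}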
// state functions
const (
leftDelim = "{{"
rightDelim = "}}"
leftComment = "/*"
rightComment = "*/"
)
// lexText scans until an opening action delimiter, "{{".
func lexText(l *lexer) stateFn {
for {
if strings.HasPrefix(l.input[l.pos:], l.leftDelim) {
if l.pos > l.start {
l.emit(itemText)
}
return lexLeftDelim
}
if l.next() == eof {
break
}
}
// Correctly reached EOF.
if l.pos > l.start {
l.emit(itemText)
}
l.emit(itemEOF)
return nil
}
// lexLeftDelim scans the left delimiter, which is known to be present.
func lexLeftDelim(l *lexer) stateFn {
l.pos += Pos(len(l.leftDelim))
if strings.HasPrefix(l.input[l.pos:], leftComment) {
return lexComment
}
l.emit(itemLeftDelim)
l.parenDepth = 0
return lexInsideAction
}
// lexComment scans a comment. The left comment marker is known to be present.
func lexComment(l *lexer) stateFn {
l.pos += Pos(len(leftComment))
i := strings.Index(l.input[l.pos:], rightComment)
if i < 0 {
return l.errorf("unclosed comment")
}
l.pos += Pos(i + len(rightComment))
if !strings.HasPrefix(l.input[l.pos:], l.rightDelim) {
return l.errorf("comment ends before closing delimiter")
}
l.pos += Pos(len(l.rightDelim))
l.ignore()
return lexText
}
// lexRightDelim scans the right delimiter, which is known to be present.
func lexRightDelim(l *lexer) stateFn {
l.pos += Pos(len(l.rightDelim))
l.emit(itemRightDelim)
if l.peek() == '\\' {
l.pos++
l.emit(itemElideNewline)
}
return lexText
}
// lexInsideAction scans the elements inside action delimiters.
func lexInsideAction(l *lexer) stateFn {
// Either number, quoted string, or identifier.
// Spaces separate arguments; runs of spaces turn into itemSpace.
// Pipe symbols separate and are emitted.
if strings.HasPrefix(l.input[l.pos:], l.rightDelim+"\\") || strings.HasPrefix(l.input[l.pos:], l.rightDelim) {
if l.parenDepth == 0 {
return lexRightDelim
}
return l.errorf("unclosed left paren")
}
switch r := l.next(); {
case r == eof || isEndOfLine(r):
return l.errorf("unclosed action")
case isSpace(r):
return lexSpace
case r == ':':
if l.next() != '=' {
return l.errorf("expected :=")
}
l.emit(itemColonEquals)
case r == '|':
l.emit(itemPipe)
case r == '"':
return lexQuote
case r == '`':
return lexRawQuote
case r == '$':
return lexVariable
case r == '\'':
return lexChar
case r == '.':
// special look-ahead for ".field" so we don't break l.backup().
if l.pos < Pos(len(l.input)) {
r := l.input[l.pos]
if r < '0' || '9' < r {
return lexField
}
}
fallthrough // '.' can start a number.
case r == '+' || r == '-' || ('0' <= r && r <= '9'):
l.backup()
return lexNumber
case isAlphaNumeric(r):
l.backup()
return lexIdentifier
case r == '(':
l.emit(itemLeftParen)
l.parenDepth++
return lexInsideAction
case r == ')':
l.emit(itemRightParen)
l.parenDepth--
if l.parenDepth < 0 {
return l.errorf("unexpected right paren %#U", r)
}
return lexInsideAction
case r <= unicode.MaxASCII && unicode.IsPrint(r):
l.emit(itemChar)
return lexInsideAction
default:
return l.errorf("unrecognized character in action: %#U", r)
}
return lexInsideAction
}
// lexSpace scans a run of space characters.
// One space has already been seen.
func lexSpace(l *lexer) stateFn {
for isSpace(l.peek()) {
l.next()
}
l.emit(itemSpace)
return lexInsideAction
}
// lexIdentifier scans an alphanumeric.
func lexIdentifier(l *lexer) stateFn {
Loop:
for {
switch r := l.next(); {
case isAlphaNumeric(r):
// absorb.
default:
l.backup()
word := l.input[l.start:l.pos]
if !l.atTerminator() {
return l.errorf("bad character %#U", r)
}
switch {
case key[word] > itemKeyword:
l.emit(key[word])
case word[0] == '.':
l.emit(itemField)
case word == "true", word == "false":
l.emit(itemBool)
default:
l.emit(itemIdentifier)
}
break Loop
}
}
return lexInsideAction
}
// lexField scans a field: .Alphanumeric.
// The . has been scanned.
func lexField(l *lexer) stateFn {
return lexFieldOrVariable(l, itemField)
}
// lexVariable scans a Variable: $Alphanumeric.
// The $ has been scanned.
func lexVariable(l *lexer) stateFn {
if l.atTerminator() { // Nothing interesting follows -> "$".
l.emit(itemVariable)
return lexInsideAction
}
return lexFieldOrVariable(l, itemVariable)
}
// lexFieldOrVariable scans a field or variable: [.$]Alphanumeric.
// The . or $ has been scanned.
func lexFieldOrVariable(l *lexer, typ itemType) stateFn {
if l.atTerminator() { // Nothing interesting follows -> "." or "$".
if typ == itemVariable {
l.emit(itemVariable)
} else {
l.emit(itemDot)
}
return lexInsideAction
}
var r rune
for {
r = l.next()
if !isAlphaNumeric(r) {
l.backup()
break
}
}
if !l.atTerminator() {
return l.errorf("bad character %#U", r)
}
l.emit(typ)
return lexInsideAction
}
// atTerminator reports whether the input is at a valid termination character to
// appear after an identifier. Breaks .X.Y into two pieces. Also catches cases
// like "$x+2" not being acceptable without a space, in case we decide one
// day to implement arithmetic.
func (l *lexer) atTerminator() bool {
r := l.peek()
if isSpace(r) || isEndOfLine(r) {
return true
}
switch r {
case eof, '.', ',', '|', ':', ')', '(':
return true
}
// Does r start the delimiter? This can be ambiguous (with delim=="//", $x/2 will
// succeed but should fail) but only in extremely rare cases caused by willfully
// bad choice of delimiter.
if rd, _ := utf8.DecodeRuneInString(l.rightDelim); rd == r {
return true
}
return false
}
// lexChar scans a character constant. The initial quote is already
// scanned. Syntax checking is done by the parser.
func lexChar(l *lexer) stateFn {
Loop:
for {
switch l.next() {
case '\\':
if r := l.next(); r != eof && r != '\n' {
break
}
fallthrough
case eof, '\n':
return l.errorf("unterminated character constant")
case '\'':
break Loop
}
}
l.emit(itemCharConstant)
return lexInsideAction
}
// lexNumber scans a number: decimal, octal, hex, float, or imaginary. This
// isn't a perfect number scanner - for instance it accepts "." and "0x0.2"
// and "089" - but when it's wrong the input is invalid and the parser (via
// strconv) will notice.
func lexNumber(l *lexer) stateFn {
if !l.scanNumber() {
return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
}
if sign := l.peek(); sign == '+' || sign == '-' {
// Complex: 1+2i. No spaces, must end in 'i'.
if !l.scanNumber() || l.input[l.pos-1] != 'i' {
return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
}
l.emit(itemComplex)
} else {
l.emit(itemNumber)
}
return lexInsideAction
}
func (l *lexer) scanNumber() bool {
// Optional leading sign.
l.accept("+-")
// Is it hex?
digits := "0123456789"
if l.accept("0") && l.accept("xX") {
digits = "0123456789abcdefABCDEF"
}
l.acceptRun(digits)
if l.accept(".") {
l.acceptRun(digits)
}
if l.accept("eE") {
l.accept("+-")
l.acceptRun("0123456789")
}
// Is it imaginary?
l.accept("i")
// Next thing mustn't be alphanumeric.
if isAlphaNumeric(l.peek()) {
l.next()
return false
}
return true
}
// lexQuote scans a quoted string.
func lexQuote(l *lexer) stateFn {
Loop:
for {
switch l.next() {
case '\\':
if r := l.next(); r != eof && r != '\n' {
break
}
fallthrough
case eof, '\n':
return l.errorf("unterminated quoted string")
case '"':
break Loop
}
}
l.emit(itemString)
return lexInsideAction
}
// lexRawQuote scans a raw quoted string.
func lexRawQuote(l *lexer) stateFn {
Loop:
for {
switch l.next() {
case eof, '\n':
return l.errorf("unterminated raw quoted string")
case '`':
break Loop
}
}
l.emit(itemRawString)
return lexInsideAction
}
// isSpace reports whether r is a space character.
func isSpace(r rune) bool {
return r == ' ' || r == '\t'
}
// isEndOfLine reports whether r is an end-of-line character.
func isEndOfLine(r rune) bool {
return r == '\r' || r == '\n'
}
// isAlphaNumeric reports whether r is an alphabetic, digit, or underscore.
func isAlphaNumeric(r rune) bool {
return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
}


@@ -1,834 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Parse nodes.
package parse
import (
"bytes"
"fmt"
"strconv"
"strings"
)
var textFormat = "%s" // Changed to "%q" in tests for better error messages.
// A Node is an element in the parse tree. The interface is trivial.
// The interface contains an unexported method so that only
// types local to this package can satisfy it.
type Node interface {
Type() NodeType
String() string
// Copy does a deep copy of the Node and all its components.
// To avoid type assertions, some XxxNodes also have specialized
// CopyXxx methods that return *XxxNode.
Copy() Node
Position() Pos // byte position of start of node in full original input string
// tree returns the containing *Tree.
// It is unexported so all implementations of Node are in this package.
tree() *Tree
}
// NodeType identifies the type of a parse tree node.
type NodeType int
// Pos represents a byte position in the original input text from which
// this template was parsed.
type Pos int
func (p Pos) Position() Pos {
return p
}
// Type returns itself and provides an easy default implementation
// for embedding in a Node. Embedded in all non-trivial Nodes.
func (t NodeType) Type() NodeType {
return t
}
const (
NodeText NodeType = iota // Plain text.
NodeAction // A non-control action such as a field evaluation.
NodeBool // A boolean constant.
NodeChain // A sequence of field accesses.
NodeCommand // An element of a pipeline.
NodeDot // The cursor, dot.
nodeElse // An else action. Not added to tree.
nodeEnd // An end action. Not added to tree.
NodeField // A field or method name.
NodeIdentifier // An identifier; always a function name.
NodeIf // An if action.
NodeList // A list of Nodes.
NodeNil // An untyped nil constant.
NodeNumber // A numerical constant.
NodePipe // A pipeline of commands.
NodeRange // A range action.
NodeString // A string constant.
NodeTemplate // A template invocation action.
NodeVariable // A $ variable.
NodeWith // A with action.
)
// Nodes.
// ListNode holds a sequence of nodes.
type ListNode struct {
NodeType
Pos
tr *Tree
Nodes []Node // The element nodes in lexical order.
}
func (t *Tree) newList(pos Pos) *ListNode {
return &ListNode{tr: t, NodeType: NodeList, Pos: pos}
}
func (l *ListNode) append(n Node) {
l.Nodes = append(l.Nodes, n)
}
func (l *ListNode) tree() *Tree {
return l.tr
}
func (l *ListNode) String() string {
b := new(bytes.Buffer)
for _, n := range l.Nodes {
fmt.Fprint(b, n)
}
return b.String()
}
func (l *ListNode) CopyList() *ListNode {
if l == nil {
return l
}
n := l.tr.newList(l.Pos)
for _, elem := range l.Nodes {
n.append(elem.Copy())
}
return n
}
func (l *ListNode) Copy() Node {
return l.CopyList()
}
// TextNode holds plain text.
type TextNode struct {
NodeType
Pos
tr *Tree
Text []byte // The text; may span newlines.
}
func (t *Tree) newText(pos Pos, text string) *TextNode {
return &TextNode{tr: t, NodeType: NodeText, Pos: pos, Text: []byte(text)}
}
func (t *TextNode) String() string {
return fmt.Sprintf(textFormat, t.Text)
}
func (t *TextNode) tree() *Tree {
return t.tr
}
func (t *TextNode) Copy() Node {
return &TextNode{tr: t.tr, NodeType: NodeText, Pos: t.Pos, Text: append([]byte{}, t.Text...)}
}
// PipeNode holds a pipeline with optional declaration
type PipeNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
Decl []*VariableNode // Variable declarations in lexical order.
Cmds []*CommandNode // The commands in lexical order.
}
func (t *Tree) newPipeline(pos Pos, line int, decl []*VariableNode) *PipeNode {
return &PipeNode{tr: t, NodeType: NodePipe, Pos: pos, Line: line, Decl: decl}
}
func (p *PipeNode) append(command *CommandNode) {
p.Cmds = append(p.Cmds, command)
}
func (p *PipeNode) String() string {
s := ""
if len(p.Decl) > 0 {
for i, v := range p.Decl {
if i > 0 {
s += ", "
}
s += v.String()
}
s += " := "
}
for i, c := range p.Cmds {
if i > 0 {
s += " | "
}
s += c.String()
}
return s
}
func (p *PipeNode) tree() *Tree {
return p.tr
}
func (p *PipeNode) CopyPipe() *PipeNode {
if p == nil {
return p
}
var decl []*VariableNode
for _, d := range p.Decl {
decl = append(decl, d.Copy().(*VariableNode))
}
n := p.tr.newPipeline(p.Pos, p.Line, decl)
for _, c := range p.Cmds {
n.append(c.Copy().(*CommandNode))
}
return n
}
func (p *PipeNode) Copy() Node {
return p.CopyPipe()
}
// ActionNode holds an action (something bounded by delimiters).
// Control actions have their own nodes; ActionNode represents simple
// ones such as field evaluations and parenthesized pipelines.
type ActionNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
Pipe *PipeNode // The pipeline in the action.
}
func (t *Tree) newAction(pos Pos, line int, pipe *PipeNode) *ActionNode {
return &ActionNode{tr: t, NodeType: NodeAction, Pos: pos, Line: line, Pipe: pipe}
}
func (a *ActionNode) String() string {
return fmt.Sprintf("{{%s}}", a.Pipe)
}
func (a *ActionNode) tree() *Tree {
return a.tr
}
func (a *ActionNode) Copy() Node {
return a.tr.newAction(a.Pos, a.Line, a.Pipe.CopyPipe())
}
// CommandNode holds a command (a pipeline inside an evaluating action).
type CommandNode struct {
NodeType
Pos
tr *Tree
Args []Node // Arguments in lexical order: Identifier, field, or constant.
}
func (t *Tree) newCommand(pos Pos) *CommandNode {
return &CommandNode{tr: t, NodeType: NodeCommand, Pos: pos}
}
func (c *CommandNode) append(arg Node) {
c.Args = append(c.Args, arg)
}
func (c *CommandNode) String() string {
s := ""
for i, arg := range c.Args {
if i > 0 {
s += " "
}
if arg, ok := arg.(*PipeNode); ok {
s += "(" + arg.String() + ")"
continue
}
s += arg.String()
}
return s
}
func (c *CommandNode) tree() *Tree {
return c.tr
}
func (c *CommandNode) Copy() Node {
if c == nil {
return c
}
n := c.tr.newCommand(c.Pos)
for _, c := range c.Args {
n.append(c.Copy())
}
return n
}
// IdentifierNode holds an identifier.
type IdentifierNode struct {
NodeType
Pos
tr *Tree
Ident string // The identifier's name.
}
// NewIdentifier returns a new IdentifierNode with the given identifier name.
func NewIdentifier(ident string) *IdentifierNode {
return &IdentifierNode{NodeType: NodeIdentifier, Ident: ident}
}
// SetPos sets the position. NewIdentifier is a public method so we can't modify its signature.
// Chained for convenience.
// TODO: fix one day?
func (i *IdentifierNode) SetPos(pos Pos) *IdentifierNode {
i.Pos = pos
return i
}
// SetTree sets the parent tree for the node. NewIdentifier is a public method so we can't modify its signature.
// Chained for convenience.
// TODO: fix one day?
func (i *IdentifierNode) SetTree(t *Tree) *IdentifierNode {
i.tr = t
return i
}
func (i *IdentifierNode) String() string {
return i.Ident
}
func (i *IdentifierNode) tree() *Tree {
return i.tr
}
func (i *IdentifierNode) Copy() Node {
return NewIdentifier(i.Ident).SetTree(i.tr).SetPos(i.Pos)
}
// VariableNode holds a list of variable names, possibly with chained field
// accesses. The dollar sign is part of the (first) name.
type VariableNode struct {
NodeType
Pos
tr *Tree
Ident []string // Variable name and fields in lexical order.
}
func (t *Tree) newVariable(pos Pos, ident string) *VariableNode {
return &VariableNode{tr: t, NodeType: NodeVariable, Pos: pos, Ident: strings.Split(ident, ".")}
}
func (v *VariableNode) String() string {
s := ""
for i, id := range v.Ident {
if i > 0 {
s += "."
}
s += id
}
return s
}
func (v *VariableNode) tree() *Tree {
return v.tr
}
func (v *VariableNode) Copy() Node {
return &VariableNode{tr: v.tr, NodeType: NodeVariable, Pos: v.Pos, Ident: append([]string{}, v.Ident...)}
}
// DotNode holds the special identifier '.'.
type DotNode struct {
NodeType
Pos
tr *Tree
}
func (t *Tree) newDot(pos Pos) *DotNode {
return &DotNode{tr: t, NodeType: NodeDot, Pos: pos}
}
func (d *DotNode) Type() NodeType {
// Override method on embedded NodeType for API compatibility.
// TODO: Not really a problem; could change API without effect but
// api tool complains.
return NodeDot
}
func (d *DotNode) String() string {
return "."
}
func (d *DotNode) tree() *Tree {
return d.tr
}
func (d *DotNode) Copy() Node {
return d.tr.newDot(d.Pos)
}
// NilNode holds the special identifier 'nil' representing an untyped nil constant.
type NilNode struct {
NodeType
Pos
tr *Tree
}
func (t *Tree) newNil(pos Pos) *NilNode {
return &NilNode{tr: t, NodeType: NodeNil, Pos: pos}
}
func (n *NilNode) Type() NodeType {
// Override method on embedded NodeType for API compatibility.
// TODO: Not really a problem; could change API without effect but
// api tool complains.
return NodeNil
}
func (n *NilNode) String() string {
return "nil"
}
func (n *NilNode) tree() *Tree {
return n.tr
}
func (n *NilNode) Copy() Node {
return n.tr.newNil(n.Pos)
}
// FieldNode holds a field (identifier starting with '.').
// The names may be chained ('.x.y').
// The period is dropped from each ident.
type FieldNode struct {
NodeType
Pos
tr *Tree
Ident []string // The identifiers in lexical order.
}
func (t *Tree) newField(pos Pos, ident string) *FieldNode {
return &FieldNode{tr: t, NodeType: NodeField, Pos: pos, Ident: strings.Split(ident[1:], ".")} // [1:] to drop leading period
}
func (f *FieldNode) String() string {
s := ""
for _, id := range f.Ident {
s += "." + id
}
return s
}
func (f *FieldNode) tree() *Tree {
return f.tr
}
func (f *FieldNode) Copy() Node {
return &FieldNode{tr: f.tr, NodeType: NodeField, Pos: f.Pos, Ident: append([]string{}, f.Ident...)}
}
// ChainNode holds a term followed by a chain of field accesses (identifier starting with '.').
// The names may be chained ('.x.y').
// The periods are dropped from each ident.
type ChainNode struct {
NodeType
Pos
tr *Tree
Node Node
Field []string // The identifiers in lexical order.
}
func (t *Tree) newChain(pos Pos, node Node) *ChainNode {
return &ChainNode{tr: t, NodeType: NodeChain, Pos: pos, Node: node}
}
// Add adds the named field (which should start with a period) to the end of the chain.
func (c *ChainNode) Add(field string) {
if len(field) == 0 || field[0] != '.' {
panic("no dot in field")
}
field = field[1:] // Remove leading dot.
if field == "" {
panic("empty field")
}
c.Field = append(c.Field, field)
}
func (c *ChainNode) String() string {
s := c.Node.String()
if _, ok := c.Node.(*PipeNode); ok {
s = "(" + s + ")"
}
for _, field := range c.Field {
s += "." + field
}
return s
}
func (c *ChainNode) tree() *Tree {
return c.tr
}
func (c *ChainNode) Copy() Node {
return &ChainNode{tr: c.tr, NodeType: NodeChain, Pos: c.Pos, Node: c.Node, Field: append([]string{}, c.Field...)}
}
// BoolNode holds a boolean constant.
type BoolNode struct {
NodeType
Pos
tr *Tree
True bool // The value of the boolean constant.
}
func (t *Tree) newBool(pos Pos, true bool) *BoolNode {
return &BoolNode{tr: t, NodeType: NodeBool, Pos: pos, True: true}
}
func (b *BoolNode) String() string {
if b.True {
return "true"
}
return "false"
}
func (b *BoolNode) tree() *Tree {
return b.tr
}
func (b *BoolNode) Copy() Node {
return b.tr.newBool(b.Pos, b.True)
}
// NumberNode holds a number: signed or unsigned integer, float, or complex.
// The value is parsed and stored under all the types that can represent the value.
// This simulates in a small amount of code the behavior of Go's ideal constants.
type NumberNode struct {
NodeType
Pos
tr *Tree
IsInt bool // Number has an integral value.
IsUint bool // Number has an unsigned integral value.
IsFloat bool // Number has a floating-point value.
IsComplex bool // Number is complex.
Int64 int64 // The signed integer value.
Uint64 uint64 // The unsigned integer value.
Float64 float64 // The floating-point value.
Complex128 complex128 // The complex value.
Text string // The original textual representation from the input.
}
func (t *Tree) newNumber(pos Pos, text string, typ itemType) (*NumberNode, error) {
n := &NumberNode{tr: t, NodeType: NodeNumber, Pos: pos, Text: text}
switch typ {
case itemCharConstant:
rune, _, tail, err := strconv.UnquoteChar(text[1:], text[0])
if err != nil {
return nil, err
}
if tail != "'" {
return nil, fmt.Errorf("malformed character constant: %s", text)
}
n.Int64 = int64(rune)
n.IsInt = true
n.Uint64 = uint64(rune)
n.IsUint = true
n.Float64 = float64(rune) // odd but those are the rules.
n.IsFloat = true
return n, nil
case itemComplex:
// fmt.Sscan can parse the pair, so let it do the work.
if _, err := fmt.Sscan(text, &n.Complex128); err != nil {
return nil, err
}
n.IsComplex = true
n.simplifyComplex()
return n, nil
}
// Imaginary constants can only be complex unless they are zero.
if len(text) > 0 && text[len(text)-1] == 'i' {
f, err := strconv.ParseFloat(text[:len(text)-1], 64)
if err == nil {
n.IsComplex = true
n.Complex128 = complex(0, f)
n.simplifyComplex()
return n, nil
}
}
// Do integer test first so we get 0x123 etc.
u, err := strconv.ParseUint(text, 0, 64) // will fail for -0; fixed below.
if err == nil {
n.IsUint = true
n.Uint64 = u
}
i, err := strconv.ParseInt(text, 0, 64)
if err == nil {
n.IsInt = true
n.Int64 = i
if i == 0 {
n.IsUint = true // in case of -0.
n.Uint64 = u
}
}
// If an integer extraction succeeded, promote the float.
if n.IsInt {
n.IsFloat = true
n.Float64 = float64(n.Int64)
} else if n.IsUint {
n.IsFloat = true
n.Float64 = float64(n.Uint64)
} else {
f, err := strconv.ParseFloat(text, 64)
if err == nil {
n.IsFloat = true
n.Float64 = f
// If a floating-point extraction succeeded, extract the int if needed.
if !n.IsInt && float64(int64(f)) == f {
n.IsInt = true
n.Int64 = int64(f)
}
if !n.IsUint && float64(uint64(f)) == f {
n.IsUint = true
n.Uint64 = uint64(f)
}
}
}
if !n.IsInt && !n.IsUint && !n.IsFloat {
return nil, fmt.Errorf("illegal number syntax: %q", text)
}
return n, nil
}
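newNumber records a constant under every representation that fits, approximating Go's ideal constants: the integer parses run first (base 0, so hex and octal literals are caught) and the float parse fills in the rest. A stripped-down, standalone sketch of that cascade using the same strconv calls (it omits the promotion logic above):

package main

import (
    "fmt"
    "strconv"
)

// classify reports which basic representations accept the literal text,
// mirroring the ParseUint/ParseInt/ParseFloat cascade in newNumber.
func classify(text string) (isInt, isUint, isFloat bool) {
    if _, err := strconv.ParseUint(text, 0, 64); err == nil {
        isUint = true
    }
    if _, err := strconv.ParseInt(text, 0, 64); err == nil {
        isInt = true
    }
    if _, err := strconv.ParseFloat(text, 64); err == nil {
        isFloat = true
    }
    return
}

func main() {
    for _, s := range []string{"-12", "0x7f", "1e3"} {
        isInt, isUint, isFloat := classify(s)
        fmt.Printf("%6s  int=%t uint=%t float=%t\n", s, isInt, isUint, isFloat)
    }
}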
// simplifyComplex pulls out any other types that are represented by the complex number.
// These all require that the imaginary part be zero.
func (n *NumberNode) simplifyComplex() {
n.IsFloat = imag(n.Complex128) == 0
if n.IsFloat {
n.Float64 = real(n.Complex128)
n.IsInt = float64(int64(n.Float64)) == n.Float64
if n.IsInt {
n.Int64 = int64(n.Float64)
}
n.IsUint = float64(uint64(n.Float64)) == n.Float64
if n.IsUint {
n.Uint64 = uint64(n.Float64)
}
}
}
func (n *NumberNode) String() string {
return n.Text
}
func (n *NumberNode) tree() *Tree {
return n.tr
}
func (n *NumberNode) Copy() Node {
nn := new(NumberNode)
*nn = *n // Easy, fast, correct.
return nn
}
// StringNode holds a string constant. The value has been "unquoted".
type StringNode struct {
NodeType
Pos
tr *Tree
Quoted string // The original text of the string, with quotes.
Text string // The string, after quote processing.
}
func (t *Tree) newString(pos Pos, orig, text string) *StringNode {
return &StringNode{tr: t, NodeType: NodeString, Pos: pos, Quoted: orig, Text: text}
}
func (s *StringNode) String() string {
return s.Quoted
}
func (s *StringNode) tree() *Tree {
return s.tr
}
func (s *StringNode) Copy() Node {
return s.tr.newString(s.Pos, s.Quoted, s.Text)
}
// endNode represents an {{end}} action.
// It does not appear in the final parse tree.
type endNode struct {
NodeType
Pos
tr *Tree
}
func (t *Tree) newEnd(pos Pos) *endNode {
return &endNode{tr: t, NodeType: nodeEnd, Pos: pos}
}
func (e *endNode) String() string {
return "{{end}}"
}
func (e *endNode) tree() *Tree {
return e.tr
}
func (e *endNode) Copy() Node {
return e.tr.newEnd(e.Pos)
}
// elseNode represents an {{else}} action. Does not appear in the final tree.
type elseNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
}
func (t *Tree) newElse(pos Pos, line int) *elseNode {
return &elseNode{tr: t, NodeType: nodeElse, Pos: pos, Line: line}
}
func (e *elseNode) Type() NodeType {
return nodeElse
}
func (e *elseNode) String() string {
return "{{else}}"
}
func (e *elseNode) tree() *Tree {
return e.tr
}
func (e *elseNode) Copy() Node {
return e.tr.newElse(e.Pos, e.Line)
}
// BranchNode is the common representation of if, range, and with.
type BranchNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
Pipe *PipeNode // The pipeline to be evaluated.
List *ListNode // What to execute if the value is non-empty.
ElseList *ListNode // What to execute if the value is empty (nil if absent).
}
func (b *BranchNode) String() string {
name := ""
switch b.NodeType {
case NodeIf:
name = "if"
case NodeRange:
name = "range"
case NodeWith:
name = "with"
default:
panic("unknown branch type")
}
if b.ElseList != nil {
return fmt.Sprintf("{{%s %s}}%s{{else}}%s{{end}}", name, b.Pipe, b.List, b.ElseList)
}
return fmt.Sprintf("{{%s %s}}%s{{end}}", name, b.Pipe, b.List)
}
func (b *BranchNode) tree() *Tree {
return b.tr
}
func (b *BranchNode) Copy() Node {
switch b.NodeType {
case NodeIf:
return b.tr.newIf(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
case NodeRange:
return b.tr.newRange(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
case NodeWith:
return b.tr.newWith(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
default:
panic("unknown branch type")
}
}
// IfNode represents an {{if}} action and its commands.
type IfNode struct {
BranchNode
}
func (t *Tree) newIf(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *IfNode {
return &IfNode{BranchNode{tr: t, NodeType: NodeIf, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
}
func (i *IfNode) Copy() Node {
return i.tr.newIf(i.Pos, i.Line, i.Pipe.CopyPipe(), i.List.CopyList(), i.ElseList.CopyList())
}
// RangeNode represents a {{range}} action and its commands.
type RangeNode struct {
BranchNode
}
func (t *Tree) newRange(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *RangeNode {
return &RangeNode{BranchNode{tr: t, NodeType: NodeRange, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
}
func (r *RangeNode) Copy() Node {
return r.tr.newRange(r.Pos, r.Line, r.Pipe.CopyPipe(), r.List.CopyList(), r.ElseList.CopyList())
}
// WithNode represents a {{with}} action and its commands.
type WithNode struct {
BranchNode
}
func (t *Tree) newWith(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *WithNode {
return &WithNode{BranchNode{tr: t, NodeType: NodeWith, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
}
func (w *WithNode) Copy() Node {
return w.tr.newWith(w.Pos, w.Line, w.Pipe.CopyPipe(), w.List.CopyList(), w.ElseList.CopyList())
}
// TemplateNode represents a {{template}} action.
type TemplateNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
Name string // The name of the template (unquoted).
Pipe *PipeNode // The command to evaluate as dot for the template.
}
func (t *Tree) newTemplate(pos Pos, line int, name string, pipe *PipeNode) *TemplateNode {
return &TemplateNode{tr: t, NodeType: NodeTemplate, Pos: pos, Line: line, Name: name, Pipe: pipe}
}
func (t *TemplateNode) String() string {
if t.Pipe == nil {
return fmt.Sprintf("{{template %q}}", t.Name)
}
return fmt.Sprintf("{{template %q %s}}", t.Name, t.Pipe)
}
func (t *TemplateNode) tree() *Tree {
return t.tr
}
func (t *TemplateNode) Copy() Node {
return t.tr.newTemplate(t.Pos, t.Line, t.Name, t.Pipe.CopyPipe())
}


@@ -1,700 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package parse builds parse trees for templates as defined by text/template
// and html/template. Clients should use those packages to construct templates
// rather than this one, which provides shared internal data structures not
// intended for general use.
package parse
import (
"bytes"
"fmt"
"runtime"
"strconv"
"strings"
)
// Tree is the representation of a single parsed template.
type Tree struct {
Name string // name of the template represented by the tree.
ParseName string // name of the top-level template during parsing, for error messages.
Root *ListNode // top-level root of the tree.
text string // text parsed to create the template (or its parent)
// Parsing only; cleared after parse.
funcs []map[string]interface{}
lex *lexer
token [3]item // three-token lookahead for parser.
peekCount int
vars []string // variables defined at the moment.
}
// Copy returns a copy of the Tree. Any parsing state is discarded.
func (t *Tree) Copy() *Tree {
if t == nil {
return nil
}
return &Tree{
Name: t.Name,
ParseName: t.ParseName,
Root: t.Root.CopyList(),
text: t.text,
}
}
// Parse returns a map from template name to parse.Tree, created by parsing the
// templates described in the argument string. The top-level template will be
// given the specified name. If an error is encountered, parsing stops and an
// empty map is returned with the error.
func Parse(name, text, leftDelim, rightDelim string, funcs ...map[string]interface{}) (treeSet map[string]*Tree, err error) {
treeSet = make(map[string]*Tree)
t := New(name)
t.text = text
_, err = t.Parse(text, leftDelim, rightDelim, treeSet, funcs...)
return
}
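The package-level Parse is the entry point the template packages use, and it can also be called directly to inspect a parse tree. A hedged sketch, assuming the parse package lives under the module path in the go.mod above:

package main

import (
    "fmt"

    "github.com/alecthomas/template/parse"
)

func main() {
    // Empty delimiters select the defaults ("{{" and "}}"); no function maps
    // are needed because the template uses only a field, not an identifier.
    trees, err := parse.Parse("demo", "Hello, {{.Name}}!", "", "")
    if err != nil {
        panic(err)
    }
    // Root.String() re-renders the parse tree as template text.
    fmt.Println(trees["demo"].Root.String())
    // Prints: Hello, {{.Name}}!
}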
// next returns the next token.
func (t *Tree) next() item {
if t.peekCount > 0 {
t.peekCount--
} else {
t.token[0] = t.lex.nextItem()
}
return t.token[t.peekCount]
}
// backup backs the input stream up one token.
func (t *Tree) backup() {
t.peekCount++
}
// backup2 backs the input stream up two tokens.
// The zeroth token is already there.
func (t *Tree) backup2(t1 item) {
t.token[1] = t1
t.peekCount = 2
}
// backup3 backs the input stream up three tokens
// The zeroth token is already there.
func (t *Tree) backup3(t2, t1 item) { // Reverse order: we're pushing back.
t.token[1] = t1
t.token[2] = t2
t.peekCount = 3
}
// peek returns but does not consume the next token.
func (t *Tree) peek() item {
if t.peekCount > 0 {
return t.token[t.peekCount-1]
}
t.peekCount = 1
t.token[0] = t.lex.nextItem()
return t.token[0]
}
// nextNonSpace returns the next non-space token.
func (t *Tree) nextNonSpace() (token item) {
for {
token = t.next()
if token.typ != itemSpace {
break
}
}
return token
}
// peekNonSpace returns but does not consume the next non-space token.
func (t *Tree) peekNonSpace() (token item) {
for {
token = t.next()
if token.typ != itemSpace {
break
}
}
t.backup()
return token
}
// Parsing.
// New allocates a new parse tree with the given name.
func New(name string, funcs ...map[string]interface{}) *Tree {
return &Tree{
Name: name,
funcs: funcs,
}
}
// ErrorContext returns a textual representation of the location of the node in the input text.
// The receiver is only used when the node does not have a pointer to the tree inside,
// which can occur in old code.
func (t *Tree) ErrorContext(n Node) (location, context string) {
pos := int(n.Position())
tree := n.tree()
if tree == nil {
tree = t
}
text := tree.text[:pos]
byteNum := strings.LastIndex(text, "\n")
if byteNum == -1 {
byteNum = pos // On first line.
} else {
byteNum++ // After the newline.
byteNum = pos - byteNum
}
lineNum := 1 + strings.Count(text, "\n")
context = n.String()
if len(context) > 20 {
context = fmt.Sprintf("%.20s...", context)
}
return fmt.Sprintf("%s:%d:%d", tree.ParseName, lineNum, byteNum), context
}
// errorf formats the error and terminates processing.
func (t *Tree) errorf(format string, args ...interface{}) {
t.Root = nil
format = fmt.Sprintf("template: %s:%d: %s", t.ParseName, t.lex.lineNumber(), format)
panic(fmt.Errorf(format, args...))
}
// error terminates processing.
func (t *Tree) error(err error) {
t.errorf("%s", err)
}
// expect consumes the next token and guarantees it has the required type.
func (t *Tree) expect(expected itemType, context string) item {
token := t.nextNonSpace()
if token.typ != expected {
t.unexpected(token, context)
}
return token
}
// expectOneOf consumes the next token and guarantees it has one of the required types.
func (t *Tree) expectOneOf(expected1, expected2 itemType, context string) item {
token := t.nextNonSpace()
if token.typ != expected1 && token.typ != expected2 {
t.unexpected(token, context)
}
return token
}
// unexpected complains about the token and terminates processing.
func (t *Tree) unexpected(token item, context string) {
t.errorf("unexpected %s in %s", token, context)
}
// recover is the handler that turns panics into returns from the top level of Parse.
func (t *Tree) recover(errp *error) {
e := recover()
if e != nil {
if _, ok := e.(runtime.Error); ok {
panic(e)
}
if t != nil {
t.stopParse()
}
*errp = e.(error)
}
return
}
// startParse initializes the parser, using the lexer.
func (t *Tree) startParse(funcs []map[string]interface{}, lex *lexer) {
t.Root = nil
t.lex = lex
t.vars = []string{"$"}
t.funcs = funcs
}
// stopParse terminates parsing.
func (t *Tree) stopParse() {
t.lex = nil
t.vars = nil
t.funcs = nil
}
// Parse parses the template definition string to construct a representation of
// the template for execution. If either action delimiter string is empty, the
// default ("{{" or "}}") is used. Embedded template definitions are added to
// the treeSet map.
func (t *Tree) Parse(text, leftDelim, rightDelim string, treeSet map[string]*Tree, funcs ...map[string]interface{}) (tree *Tree, err error) {
defer t.recover(&err)
t.ParseName = t.Name
t.startParse(funcs, lex(t.Name, text, leftDelim, rightDelim))
t.text = text
t.parse(treeSet)
t.add(treeSet)
t.stopParse()
return t, nil
}
// add adds tree to the treeSet.
func (t *Tree) add(treeSet map[string]*Tree) {
tree := treeSet[t.Name]
if tree == nil || IsEmptyTree(tree.Root) {
treeSet[t.Name] = t
return
}
if !IsEmptyTree(t.Root) {
t.errorf("template: multiple definition of template %q", t.Name)
}
}
// IsEmptyTree reports whether this tree (node) is empty of everything but space.
func IsEmptyTree(n Node) bool {
switch n := n.(type) {
case nil:
return true
case *ActionNode:
case *IfNode:
case *ListNode:
for _, node := range n.Nodes {
if !IsEmptyTree(node) {
return false
}
}
return true
case *RangeNode:
case *TemplateNode:
case *TextNode:
return len(bytes.TrimSpace(n.Text)) == 0
case *WithNode:
default:
panic("unknown node: " + n.String())
}
return false
}
// parse is the top-level parser for a template, essentially the same
// as itemList except it also parses {{define}} actions.
// It runs to EOF.
func (t *Tree) parse(treeSet map[string]*Tree) (next Node) {
t.Root = t.newList(t.peek().pos)
for t.peek().typ != itemEOF {
if t.peek().typ == itemLeftDelim {
delim := t.next()
if t.nextNonSpace().typ == itemDefine {
newT := New("definition") // name will be updated once we know it.
newT.text = t.text
newT.ParseName = t.ParseName
newT.startParse(t.funcs, t.lex)
newT.parseDefinition(treeSet)
continue
}
t.backup2(delim)
}
n := t.textOrAction()
if n.Type() == nodeEnd {
t.errorf("unexpected %s", n)
}
t.Root.append(n)
}
return nil
}
// parseDefinition parses a {{define}} ... {{end}} template definition and
// installs the definition in the treeSet map. The "define" keyword has already
// been scanned.
func (t *Tree) parseDefinition(treeSet map[string]*Tree) {
const context = "define clause"
name := t.expectOneOf(itemString, itemRawString, context)
var err error
t.Name, err = strconv.Unquote(name.val)
if err != nil {
t.error(err)
}
t.expect(itemRightDelim, context)
var end Node
t.Root, end = t.itemList()
if end.Type() != nodeEnd {
t.errorf("unexpected %s in %s", end, context)
}
t.add(treeSet)
t.stopParse()
}
// itemList:
// textOrAction*
// Terminates at {{end}} or {{else}}, returned separately.
func (t *Tree) itemList() (list *ListNode, next Node) {
list = t.newList(t.peekNonSpace().pos)
for t.peekNonSpace().typ != itemEOF {
n := t.textOrAction()
switch n.Type() {
case nodeEnd, nodeElse:
return list, n
}
list.append(n)
}
t.errorf("unexpected EOF")
return
}
// textOrAction:
// text | action
func (t *Tree) textOrAction() Node {
switch token := t.nextNonSpace(); token.typ {
case itemElideNewline:
return t.elideNewline()
case itemText:
return t.newText(token.pos, token.val)
case itemLeftDelim:
return t.action()
default:
t.unexpected(token, "input")
}
return nil
}
// elideNewline:
// Remove newlines trailing rightDelim if \\ is present.
func (t *Tree) elideNewline() Node {
token := t.peek()
if token.typ != itemText {
t.unexpected(token, "input")
return nil
}
t.next()
stripped := strings.TrimLeft(token.val, "\n\r")
diff := len(token.val) - len(stripped)
if diff > 0 {
// This is a bit nasty. We mutate the token in-place to remove
// preceding newlines.
token.pos += Pos(diff)
token.val = stripped
}
return t.newText(token.pos, token.val)
}
// Action:
// control
// command ("|" command)*
// Left delim is past. Now get actions.
// First word could be a keyword such as range.
func (t *Tree) action() (n Node) {
switch token := t.nextNonSpace(); token.typ {
case itemElse:
return t.elseControl()
case itemEnd:
return t.endControl()
case itemIf:
return t.ifControl()
case itemRange:
return t.rangeControl()
case itemTemplate:
return t.templateControl()
case itemWith:
return t.withControl()
}
t.backup()
// Do not pop variables; they persist until "end".
return t.newAction(t.peek().pos, t.lex.lineNumber(), t.pipeline("command"))
}
// Pipeline:
// declarations? command ('|' command)*
func (t *Tree) pipeline(context string) (pipe *PipeNode) {
var decl []*VariableNode
pos := t.peekNonSpace().pos
// Are there declarations?
for {
if v := t.peekNonSpace(); v.typ == itemVariable {
t.next()
// Since space is a token, we need 3-token look-ahead here in the worst case:
// in "$x foo" we need to read "foo" (as opposed to ":=") to know that $x is an
// argument variable rather than a declaration. So remember the token
// adjacent to the variable so we can push it back if necessary.
tokenAfterVariable := t.peek()
if next := t.peekNonSpace(); next.typ == itemColonEquals || (next.typ == itemChar && next.val == ",") {
t.nextNonSpace()
variable := t.newVariable(v.pos, v.val)
decl = append(decl, variable)
t.vars = append(t.vars, v.val)
if next.typ == itemChar && next.val == "," {
if context == "range" && len(decl) < 2 {
continue
}
t.errorf("too many declarations in %s", context)
}
} else if tokenAfterVariable.typ == itemSpace {
t.backup3(v, tokenAfterVariable)
} else {
t.backup2(v)
}
}
break
}
pipe = t.newPipeline(pos, t.lex.lineNumber(), decl)
for {
switch token := t.nextNonSpace(); token.typ {
case itemRightDelim, itemRightParen:
if len(pipe.Cmds) == 0 {
t.errorf("missing value for %s", context)
}
if token.typ == itemRightParen {
t.backup()
}
return
case itemBool, itemCharConstant, itemComplex, itemDot, itemField, itemIdentifier,
itemNumber, itemNil, itemRawString, itemString, itemVariable, itemLeftParen:
t.backup()
pipe.append(t.command())
default:
t.unexpected(token, context)
}
}
}
func (t *Tree) parseControl(allowElseIf bool, context string) (pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) {
defer t.popVars(len(t.vars))
line = t.lex.lineNumber()
pipe = t.pipeline(context)
var next Node
list, next = t.itemList()
switch next.Type() {
case nodeEnd: //done
case nodeElse:
if allowElseIf {
// Special case for "else if". If the "else" is followed immediately by an "if",
// the elseControl will have left the "if" token pending. Treat
// {{if a}}_{{else if b}}_{{end}}
// as
// {{if a}}_{{else}}{{if b}}_{{end}}{{end}}.
// To do this, parse the if as usual and stop at its {{end}}; the subsequent {{end}}
// is assumed. This technique works even for long if-else-if chains.
// TODO: Should we allow else-if in with and range?
if t.peek().typ == itemIf {
t.next() // Consume the "if" token.
elseList = t.newList(next.Position())
elseList.append(t.ifControl())
// Do not consume the next item - only one {{end}} required.
break
}
}
elseList, next = t.itemList()
if next.Type() != nodeEnd {
t.errorf("expected end; found %s", next)
}
}
return pipe.Position(), line, pipe, list, elseList
}
// If:
// {{if pipeline}} itemList {{end}}
// {{if pipeline}} itemList {{else}} itemList {{end}}
// If keyword is past.
func (t *Tree) ifControl() Node {
return t.newIf(t.parseControl(true, "if"))
}
// Range:
// {{range pipeline}} itemList {{end}}
// {{range pipeline}} itemList {{else}} itemList {{end}}
// Range keyword is past.
func (t *Tree) rangeControl() Node {
return t.newRange(t.parseControl(false, "range"))
}
// With:
// {{with pipeline}} itemList {{end}}
// {{with pipeline}} itemList {{else}} itemList {{end}}
// With keyword is past.
func (t *Tree) withControl() Node {
return t.newWith(t.parseControl(false, "with"))
}
// End:
// {{end}}
// End keyword is past.
func (t *Tree) endControl() Node {
return t.newEnd(t.expect(itemRightDelim, "end").pos)
}
// Else:
// {{else}}
// Else keyword is past.
func (t *Tree) elseControl() Node {
// Special case for "else if".
peek := t.peekNonSpace()
if peek.typ == itemIf {
// We see "{{else if ... " but in effect rewrite it to {{else}}{{if ... ".
return t.newElse(peek.pos, t.lex.lineNumber())
}
return t.newElse(t.expect(itemRightDelim, "else").pos, t.lex.lineNumber())
}
// Template:
// {{template stringValue pipeline}}
// Template keyword is past. The name must be something that can evaluate
// to a string.
func (t *Tree) templateControl() Node {
var name string
token := t.nextNonSpace()
switch token.typ {
case itemString, itemRawString:
s, err := strconv.Unquote(token.val)
if err != nil {
t.error(err)
}
name = s
default:
t.unexpected(token, "template invocation")
}
var pipe *PipeNode
if t.nextNonSpace().typ != itemRightDelim {
t.backup()
// Do not pop variables; they persist until "end".
pipe = t.pipeline("template")
}
return t.newTemplate(token.pos, t.lex.lineNumber(), name, pipe)
}
// command:
// operand (space operand)*
// space-separated arguments up to a pipeline character or right delimiter.
// we consume the pipe character but leave the right delim to terminate the action.
func (t *Tree) command() *CommandNode {
cmd := t.newCommand(t.peekNonSpace().pos)
for {
t.peekNonSpace() // skip leading spaces.
operand := t.operand()
if operand != nil {
cmd.append(operand)
}
switch token := t.next(); token.typ {
case itemSpace:
continue
case itemError:
t.errorf("%s", token.val)
case itemRightDelim, itemRightParen:
t.backup()
case itemPipe:
default:
t.errorf("unexpected %s in operand; missing space?", token)
}
break
}
if len(cmd.Args) == 0 {
t.errorf("empty command")
}
return cmd
}
// operand:
// term .Field*
// An operand is a space-separated component of a command,
// a term possibly followed by field accesses.
// A nil return means the next item is not an operand.
func (t *Tree) operand() Node {
node := t.term()
if node == nil {
return nil
}
if t.peek().typ == itemField {
chain := t.newChain(t.peek().pos, node)
for t.peek().typ == itemField {
chain.Add(t.next().val)
}
// Compatibility with original API: If the term is of type NodeField
// or NodeVariable, just put more fields on the original.
// Otherwise, keep the Chain node.
// TODO: Switch to Chains always when we can.
switch node.Type() {
case NodeField:
node = t.newField(chain.Position(), chain.String())
case NodeVariable:
node = t.newVariable(chain.Position(), chain.String())
default:
node = chain
}
}
return node
}
// term:
// literal (number, string, nil, boolean)
// function (identifier)
// .
// .Field
// $
// '(' pipeline ')'
// A term is a simple "expression".
// A nil return means the next item is not a term.
func (t *Tree) term() Node {
switch token := t.nextNonSpace(); token.typ {
case itemError:
t.errorf("%s", token.val)
case itemIdentifier:
if !t.hasFunction(token.val) {
t.errorf("function %q not defined", token.val)
}
return NewIdentifier(token.val).SetTree(t).SetPos(token.pos)
case itemDot:
return t.newDot(token.pos)
case itemNil:
return t.newNil(token.pos)
case itemVariable:
return t.useVar(token.pos, token.val)
case itemField:
return t.newField(token.pos, token.val)
case itemBool:
return t.newBool(token.pos, token.val == "true")
case itemCharConstant, itemComplex, itemNumber:
number, err := t.newNumber(token.pos, token.val, token.typ)
if err != nil {
t.error(err)
}
return number
case itemLeftParen:
pipe := t.pipeline("parenthesized pipeline")
if token := t.next(); token.typ != itemRightParen {
t.errorf("unclosed right paren: unexpected %s", token)
}
return pipe
case itemString, itemRawString:
s, err := strconv.Unquote(token.val)
if err != nil {
t.error(err)
}
return t.newString(token.pos, token.val, s)
}
t.backup()
return nil
}
// hasFunction reports if a function name exists in the Tree's maps.
func (t *Tree) hasFunction(name string) bool {
for _, funcMap := range t.funcs {
if funcMap == nil {
continue
}
if funcMap[name] != nil {
return true
}
}
return false
}
// popVars trims the variable list to the specified length
func (t *Tree) popVars(n int) {
t.vars = t.vars[:n]
}
// useVar returns a node for a variable reference. It errors if the
// variable is not defined.
func (t *Tree) useVar(pos Pos, name string) Node {
v := t.newVariable(pos, name)
for _, varName := range t.vars {
if varName == v.Ident[0] {
return v
}
}
t.errorf("undefined variable %q", v.Ident[0])
return nil
}
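
For reference, a minimal sketch of driving the exported Parse entry point above. It assumes the vendored import path github.com/alecthomas/template/parse (the path used by the template package later in this diff); the template body is illustrative only.

```go
package main

import (
	"fmt"

	"github.com/alecthomas/template/parse"
)

func main() {
	// Parse returns one parse.Tree per template: the top-level template
	// plus one tree for each {{define}} block found in the input.
	const text = `Hello, {{.Name}}!{{define "footer"}}bye{{end}}`

	// Empty delimiters select the defaults, "{{" and "}}".
	trees, err := parse.Parse("page", text, "", "")
	if err != nil {
		panic(err)
	}
	for name, tree := range trees {
		fmt.Printf("%s => %s\n", name, tree.Root)
	}
}
```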

View file

@@ -1,218 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package template
import (
"fmt"
"reflect"
"github.com/alecthomas/template/parse"
)
// common holds the information shared by related templates.
type common struct {
tmpl map[string]*Template
// We use two maps, one for parsing and one for execution.
// This separation makes the API cleaner since it doesn't
// expose reflection to the client.
parseFuncs FuncMap
execFuncs map[string]reflect.Value
}
// Template is the representation of a parsed template. The *parse.Tree
// field is exported only for use by html/template and should be treated
// as unexported by all other clients.
type Template struct {
name string
*parse.Tree
*common
leftDelim string
rightDelim string
}
// New allocates a new template with the given name.
func New(name string) *Template {
return &Template{
name: name,
}
}
// Name returns the name of the template.
func (t *Template) Name() string {
return t.name
}
// New allocates a new template associated with the given one and with the same
// delimiters. The association, which is transitive, allows one template to
// invoke another with a {{template}} action.
func (t *Template) New(name string) *Template {
t.init()
return &Template{
name: name,
common: t.common,
leftDelim: t.leftDelim,
rightDelim: t.rightDelim,
}
}
func (t *Template) init() {
if t.common == nil {
t.common = new(common)
t.tmpl = make(map[string]*Template)
t.parseFuncs = make(FuncMap)
t.execFuncs = make(map[string]reflect.Value)
}
}
// Clone returns a duplicate of the template, including all associated
// templates. The actual representation is not copied, but the name space of
// associated templates is, so further calls to Parse in the copy will add
// templates to the copy but not to the original. Clone can be used to prepare
// common templates and use them with variant definitions for other templates
// by adding the variants after the clone is made.
func (t *Template) Clone() (*Template, error) {
nt := t.copy(nil)
nt.init()
nt.tmpl[t.name] = nt
for k, v := range t.tmpl {
if k == t.name { // Already installed.
continue
}
// The associated templates share nt's common structure.
tmpl := v.copy(nt.common)
nt.tmpl[k] = tmpl
}
for k, v := range t.parseFuncs {
nt.parseFuncs[k] = v
}
for k, v := range t.execFuncs {
nt.execFuncs[k] = v
}
return nt, nil
}
// copy returns a shallow copy of t, with common set to the argument.
func (t *Template) copy(c *common) *Template {
nt := New(t.name)
nt.Tree = t.Tree
nt.common = c
nt.leftDelim = t.leftDelim
nt.rightDelim = t.rightDelim
return nt
}
// AddParseTree creates a new template with the name and parse tree
// and associates it with t.
func (t *Template) AddParseTree(name string, tree *parse.Tree) (*Template, error) {
if t.common != nil && t.tmpl[name] != nil {
return nil, fmt.Errorf("template: redefinition of template %q", name)
}
nt := t.New(name)
nt.Tree = tree
t.tmpl[name] = nt
return nt, nil
}
// Templates returns a slice of the templates associated with t, including t
// itself.
func (t *Template) Templates() []*Template {
if t.common == nil {
return nil
}
// Return a slice so we don't expose the map.
m := make([]*Template, 0, len(t.tmpl))
for _, v := range t.tmpl {
m = append(m, v)
}
return m
}
// Delims sets the action delimiters to the specified strings, to be used in
// subsequent calls to Parse, ParseFiles, or ParseGlob. Nested template
// definitions will inherit the settings. An empty delimiter stands for the
// corresponding default: {{ or }}.
// The return value is the template, so calls can be chained.
func (t *Template) Delims(left, right string) *Template {
t.leftDelim = left
t.rightDelim = right
return t
}
// Funcs adds the elements of the argument map to the template's function map.
// It panics if a value in the map is not a function with appropriate return
// type. However, it is legal to overwrite elements of the map. The return
// value is the template, so calls can be chained.
func (t *Template) Funcs(funcMap FuncMap) *Template {
t.init()
addValueFuncs(t.execFuncs, funcMap)
addFuncs(t.parseFuncs, funcMap)
return t
}
// Lookup returns the template with the given name that is associated with t,
// or nil if there is no such template.
func (t *Template) Lookup(name string) *Template {
if t.common == nil {
return nil
}
return t.tmpl[name]
}
// Parse parses a string into a template. Nested template definitions will be
// associated with the top-level template t. Parse may be called multiple times
// to parse definitions of templates to associate with t. It is an error if a
// resulting template is non-empty (contains content other than template
// definitions) and would replace a non-empty template with the same name.
// (In multiple calls to Parse with the same receiver template, only one call
// can contain text other than space, comments, and template definitions.)
func (t *Template) Parse(text string) (*Template, error) {
t.init()
trees, err := parse.Parse(t.name, text, t.leftDelim, t.rightDelim, t.parseFuncs, builtins)
if err != nil {
return nil, err
}
// Add the newly parsed trees, including the one for t, into our common structure.
for name, tree := range trees {
// If the name we parsed is the name of this template, overwrite this template.
// The associate method checks it's not a redefinition.
tmpl := t
if name != t.name {
tmpl = t.New(name)
}
// Even if t == tmpl, we need to install it in the common.tmpl map.
if replace, err := t.associate(tmpl, tree); err != nil {
return nil, err
} else if replace {
tmpl.Tree = tree
}
tmpl.leftDelim = t.leftDelim
tmpl.rightDelim = t.rightDelim
}
return t, nil
}
// associate installs the new template into the group of templates associated
// with t. It is an error to reuse a name except to overwrite an empty
// template. The two are already known to share the common structure.
// The boolean return value reports whether to store this tree as t.Tree.
func (t *Template) associate(new *Template, tree *parse.Tree) (bool, error) {
if new.common != t.common {
panic("internal error: associate not common")
}
name := new.name
if old := t.tmpl[name]; old != nil {
oldIsEmpty := parse.IsEmptyTree(old.Root)
newIsEmpty := parse.IsEmptyTree(tree.Root)
if newIsEmpty {
// Whether old is empty or not, new is empty; no reason to replace old.
return false, nil
}
if !oldIsEmpty {
return false, fmt.Errorf("template: redefinition of template %q", name)
}
}
t.tmpl[name] = new
return true, nil
}
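
A minimal sketch of the association behaviour implemented above, where nested {{define}} blocks become templates associated with the top-level one. It assumes the vendored import path github.com/alecthomas/template and uses only the methods shown in this file.

```go
package main

import (
	"fmt"

	"github.com/alecthomas/template"
)

func main() {
	// Parsing installs both the top-level template and any {{define}}
	// blocks into the shared "common" structure.
	root, err := template.New("root").Parse(`{{define "greet"}}Hello{{end}}main body`)
	if err != nil {
		panic(err)
	}

	// Lookup finds an associated template by name, or returns nil.
	if t := root.Lookup("greet"); t != nil {
		fmt.Println("found:", t.Name())
	}

	// Templates lists every associated template, including root itself.
	for _, t := range root.Templates() {
		fmt.Println("associated:", t.Name())
	}
}
```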

View file

@@ -1,19 +0,0 @@
Copyright (C) 2014 Alec Thomas
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View file

@@ -1,11 +0,0 @@
# Units - Helpful unit multipliers and functions for Go
The goal of this package is to have functionality similar to the [time](http://golang.org/pkg/time/) package.
It allows for code like this:
```go
n, err := ParseBase2Bytes("1KB")
// n == 1024
n = units.Mebibyte * 512
```

View file

@@ -1,85 +0,0 @@
package units
// Base2Bytes is the old non-SI power-of-2 byte scale (1024 bytes in a kilobyte,
// etc.).
type Base2Bytes int64
// Base-2 byte units.
const (
Kibibyte Base2Bytes = 1024
KiB = Kibibyte
Mebibyte = Kibibyte * 1024
MiB = Mebibyte
Gibibyte = Mebibyte * 1024
GiB = Gibibyte
Tebibyte = Gibibyte * 1024
TiB = Tebibyte
Pebibyte = Tebibyte * 1024
PiB = Pebibyte
Exbibyte = Pebibyte * 1024
EiB = Exbibyte
)
var (
bytesUnitMap = MakeUnitMap("iB", "B", 1024)
oldBytesUnitMap = MakeUnitMap("B", "B", 1024)
)
// ParseBase2Bytes supports both iB and B in base-2 multipliers. That is, KB
// and KiB are both 1024.
// However "kB", which is the correct SI spelling of 1000 Bytes, is rejected.
func ParseBase2Bytes(s string) (Base2Bytes, error) {
n, err := ParseUnit(s, bytesUnitMap)
if err != nil {
n, err = ParseUnit(s, oldBytesUnitMap)
}
return Base2Bytes(n), err
}
func (b Base2Bytes) String() string {
return ToString(int64(b), 1024, "iB", "B")
}
var (
metricBytesUnitMap = MakeUnitMap("B", "B", 1000)
)
// MetricBytes are SI byte units (1000 bytes in a kilobyte).
type MetricBytes SI
// SI base-10 byte units.
const (
Kilobyte MetricBytes = 1000
KB = Kilobyte
Megabyte = Kilobyte * 1000
MB = Megabyte
Gigabyte = Megabyte * 1000
GB = Gigabyte
Terabyte = Gigabyte * 1000
TB = Terabyte
Petabyte = Terabyte * 1000
PB = Petabyte
Exabyte = Petabyte * 1000
EB = Exabyte
)
// ParseMetricBytes parses base-10 metric byte units. That is, KB is 1000 bytes.
func ParseMetricBytes(s string) (MetricBytes, error) {
n, err := ParseUnit(s, metricBytesUnitMap)
return MetricBytes(n), err
}
// TODO: represents 1000B as uppercase "KB", while SI standard requires "kB".
func (m MetricBytes) String() string {
return ToString(int64(m), 1000, "B", "B")
}
// ParseStrictBytes supports both iB and B suffixes for base 2 and metric,
// respectively. That is, KiB represents 1024 and kB, KB represent 1000.
func ParseStrictBytes(s string) (int64, error) {
n, err := ParseUnit(s, bytesUnitMap)
if err != nil {
n, err = ParseUnit(s, metricBytesUnitMap)
}
return int64(n), err
}
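
A short usage sketch of the three parsers defined above, assuming the package is imported as github.com/alecthomas/units; the values in the comments follow from the unit maps in this file.

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	// Base-2 parsing: "KiB" and the legacy "KB" both mean 1024 here.
	n, err := units.ParseBase2Bytes("2KiB")
	if err != nil {
		panic(err)
	}
	fmt.Println(int64(n)) // 2048

	// Metric parsing: "kB"/"KB" mean 1000.
	m, err := units.ParseMetricBytes("2KB")
	if err != nil {
		panic(err)
	}
	fmt.Println(int64(m)) // 2000

	// Strict parsing keeps the two apart: KiB is 1024, kB/KB is 1000.
	s, err := units.ParseStrictBytes("1.5KiB")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // 1536

	// String renders using binary suffixes.
	fmt.Println(units.Base2Bytes(1536)) // 1KiB512B
}
```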

View file

@@ -1,13 +0,0 @@
// Package units provides helpful unit multipliers and functions for Go.
//
// The goal of this package is to have functionality similar to the time [1] package.
//
//
// [1] http://golang.org/pkg/time/
//
// It allows for code like this:
//
// n, err := ParseBase2Bytes("1KB")
// // n == 1024
// n = units.Mebibyte * 512
package units

View file

@@ -1,3 +0,0 @@
module github.com/alecthomas/units
require github.com/stretchr/testify v1.4.0

View file

@@ -1,11 +0,0 @@
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=

View file

@@ -1,50 +0,0 @@
package units
// SI units.
type SI int64
// SI unit multiples.
const (
Kilo SI = 1000
Mega = Kilo * 1000
Giga = Mega * 1000
Tera = Giga * 1000
Peta = Tera * 1000
Exa = Peta * 1000
)
func MakeUnitMap(suffix, shortSuffix string, scale int64) map[string]float64 {
res := map[string]float64{
shortSuffix: 1,
// see below for "k" / "K"
"M" + suffix: float64(scale * scale),
"G" + suffix: float64(scale * scale * scale),
"T" + suffix: float64(scale * scale * scale * scale),
"P" + suffix: float64(scale * scale * scale * scale * scale),
"E" + suffix: float64(scale * scale * scale * scale * scale * scale),
}
// Standard SI prefixes use lowercase "k" for kilo = 1000.
// For compatibility, and to be fool-proof, we accept both "k" and "K" in metric mode.
//
// However, official binary prefixes are always capitalized - "KiB" -
// and we specifically never parse "kB" as 1024B because:
//
// (1) people pedantic enough to use lowercase according to SI are unlikely to abuse "k" to mean 1024 :-)
//
// (2) Use of capital K for 1024 was an informal tradition predating IEC prefixes:
// "The binary meaning of the kilobyte for 1024 bytes typically uses the symbol KB, with an
// uppercase letter K."
// -- https://en.wikipedia.org/wiki/Kilobyte#Base_2_(1024_bytes)
// "Capitalization of the letter K became the de facto standard for binary notation, although this
// could not be extended to higher powers, and use of the lowercase k did persist.[13][14][15]"
// -- https://en.wikipedia.org/wiki/Binary_prefix#History
// See also the extensive https://en.wikipedia.org/wiki/Timeline_of_binary_prefixes.
if scale == 1024 {
res["K"+suffix] = float64(scale)
} else {
res["k"+suffix] = float64(scale)
res["K"+suffix] = float64(scale)
}
return res
}

View file

@@ -1,138 +0,0 @@
package units
import (
"errors"
"fmt"
"strings"
)
var (
siUnits = []string{"", "K", "M", "G", "T", "P", "E"}
)
func ToString(n int64, scale int64, suffix, baseSuffix string) string {
mn := len(siUnits)
out := make([]string, mn)
for i, m := range siUnits {
if n%scale != 0 || i == 0 && n == 0 {
s := suffix
if i == 0 {
s = baseSuffix
}
out[mn-1-i] = fmt.Sprintf("%d%s%s", n%scale, m, s)
}
n /= scale
if n == 0 {
break
}
}
return strings.Join(out, "")
}
// Below code ripped straight from http://golang.org/src/pkg/time/format.go?s=33392:33438#L1123
var errLeadingInt = errors.New("units: bad [0-9]*") // never printed
// leadingInt consumes the leading [0-9]* from s.
func leadingInt(s string) (x int64, rem string, err error) {
i := 0
for ; i < len(s); i++ {
c := s[i]
if c < '0' || c > '9' {
break
}
if x >= (1<<63-10)/10 {
// overflow
return 0, "", errLeadingInt
}
x = x*10 + int64(c) - '0'
}
return x, s[i:], nil
}
func ParseUnit(s string, unitMap map[string]float64) (int64, error) {
// [-+]?([0-9]*(\.[0-9]*)?[a-z]+)+
orig := s
f := float64(0)
neg := false
// Consume [-+]?
if s != "" {
c := s[0]
if c == '-' || c == '+' {
neg = c == '-'
s = s[1:]
}
}
// Special case: if all that is left is "0", this is zero.
if s == "0" {
return 0, nil
}
if s == "" {
return 0, errors.New("units: invalid " + orig)
}
for s != "" {
g := float64(0) // this element of the sequence
var x int64
var err error
// The next character must be [0-9.]
if !(s[0] == '.' || ('0' <= s[0] && s[0] <= '9')) {
return 0, errors.New("units: invalid " + orig)
}
// Consume [0-9]*
pl := len(s)
x, s, err = leadingInt(s)
if err != nil {
return 0, errors.New("units: invalid " + orig)
}
g = float64(x)
pre := pl != len(s) // whether we consumed anything before a period
// Consume (\.[0-9]*)?
post := false
if s != "" && s[0] == '.' {
s = s[1:]
pl := len(s)
x, s, err = leadingInt(s)
if err != nil {
return 0, errors.New("units: invalid " + orig)
}
scale := 1.0
for n := pl - len(s); n > 0; n-- {
scale *= 10
}
g += float64(x) / scale
post = pl != len(s)
}
if !pre && !post {
// no digits (e.g. ".s" or "-.s")
return 0, errors.New("units: invalid " + orig)
}
// Consume unit.
i := 0
for ; i < len(s); i++ {
c := s[i]
if c == '.' || ('0' <= c && c <= '9') {
break
}
}
u := s[:i]
s = s[i:]
unit, ok := unitMap[u]
if !ok {
return 0, errors.New("units: unknown unit " + u + " in " + orig)
}
f += g * unit
}
if neg {
f = -f
}
if f < float64(-1<<63) || f > float64(1<<63-1) {
return 0, errors.New("units: overflow parsing unit")
}
return int64(f), nil
}
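
MakeUnitMap and ParseUnit can also be combined for unit families beyond bytes. A sketch, assuming the same import path; the "Hz" table is a made-up example, not something the package provides.

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	// Build a custom SI (1000x) unit table keyed on the "Hz" suffix:
	// Hz, kHz/KHz, MHz, GHz, and so on.
	hz := units.MakeUnitMap("Hz", "Hz", 1000)

	// ParseUnit consumes number+suffix groups and sums them.
	n, err := units.ParseUnit("1.5MHz", hz)
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 1500000

	// Groups may be chained, duration-style, as in "2kHz500Hz".
	n, err = units.ParseUnit("2kHz500Hz", hz)
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 2500
}
```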

View file

@@ -1,20 +0,0 @@
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

File diff suppressed because it is too large

View file

@@ -1,316 +0,0 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targetMap map[float64]float64) *Stream {
// Convert map to slice to avoid slow iterations on a map.
// ƒ is called on the hot path, so converting the map to a slice
// beforehand results in significant CPU savings.
targets := targetMapToSlice(targetMap)
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for _, t := range targets {
if t.quantile*s.n <= r {
f = (2 * t.epsilon * r) / t.quantile
} else {
f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
type target struct {
quantile float64
epsilon float64
}
func targetMapToSlice(targetMap map[float64]float64) []target {
targets := make([]target, 0, len(targetMap))
for quantile, epsilon := range targetMap {
t := target{
quantile: quantile,
epsilon: epsilon,
}
targets = append(targets, t)
}
return targets
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed qth percentile value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(math.Ceil(float64(l) * q))
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying stream's samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unittests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}
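
A minimal usage sketch of the Stream API above (NewTargeted, Insert, Query, Count), assuming the vendored import path github.com/beorn7/perks/quantile; the target quantiles and input distribution are arbitrary.

```go
package main

import (
	"fmt"
	"math/rand"

	"github.com/beorn7/perks/quantile"
)

func main() {
	// Track the median and the 99th percentile, each with its own
	// absolute error bound, as described for NewTargeted above.
	q := quantile.NewTargeted(map[float64]float64{
		0.50: 0.005,
		0.99: 0.001,
	})

	// Feed the stream one observation at a time.
	for i := 0; i < 10000; i++ {
		q.Insert(rand.NormFloat64()*10 + 100)
	}

	fmt.Println("count:", q.Count())
	fmt.Println("p50 ~", q.Query(0.50))
	fmt.Println("p99 ~", q.Query(0.99))
}
```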

View file

@@ -1,8 +0,0 @@
language: go
go:
- "1.x"
- master
env:
- TAGS=""
- TAGS="-tags purego"
script: go test $TAGS -v ./...

View file

@@ -1,22 +0,0 @@
Copyright (c) 2016 Caleb Spare
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View file

@@ -1,67 +0,0 @@
# xxhash
[![GoDoc](https://godoc.org/github.com/cespare/xxhash?status.svg)](https://godoc.org/github.com/cespare/xxhash)
[![Build Status](https://travis-ci.org/cespare/xxhash.svg?branch=master)](https://travis-ci.org/cespare/xxhash)
xxhash is a Go implementation of the 64-bit
[xxHash](http://cyan4973.github.io/xxHash/) algorithm, XXH64. This is a
high-quality hashing algorithm that is much faster than anything in the Go
standard library.
This package provides a straightforward API:
```
func Sum64(b []byte) uint64
func Sum64String(s string) uint64
type Digest struct{ ... }
func New() *Digest
```
The `Digest` type implements hash.Hash64. Its key methods are:
```
func (*Digest) Write([]byte) (int, error)
func (*Digest) WriteString(string) (int, error)
func (*Digest) Sum64() uint64
```
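A brief sketch showing both the one-shot and streaming forms of this API, assuming the v2 import path described under Compatibility below:
```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	// One-shot hashing of a byte slice or a string.
	fmt.Printf("%016x\n", xxhash.Sum64([]byte("hello")))
	fmt.Printf("%016x\n", xxhash.Sum64String("hello"))

	// Streaming: Digest implements hash.Hash64, so data can be
	// written incrementally.
	d := xxhash.New()
	d.WriteString("hello, ")
	d.Write([]byte("world"))
	fmt.Printf("%016x\n", d.Sum64())
}
```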
This implementation provides a fast pure-Go implementation and an even faster
assembly implementation for amd64.
## Compatibility
This package is in a module and the latest code is in version 2 of the module.
You need a version of Go with at least "minimal module compatibility" to use
github.com/cespare/xxhash/v2:
* 1.9.7+ for Go 1.9
* 1.10.3+ for Go 1.10
* Go 1.11 or later
I recommend using the latest release of Go.
## Benchmarks
Here are some quick benchmarks comparing the pure-Go and assembly
implementations of Sum64.
| input size | purego | asm |
| --- | --- | --- |
| 5 B | 979.66 MB/s | 1291.17 MB/s |
| 100 B | 7475.26 MB/s | 7973.40 MB/s |
| 4 KB | 17573.46 MB/s | 17602.65 MB/s |
| 10 MB | 17131.46 MB/s | 17142.16 MB/s |
These numbers were generated on Ubuntu 18.04 with an Intel i7-8700K CPU using
the following commands under Go 1.11.2:
```
$ go test -tags purego -benchtime 10s -bench '/xxhash,direct,bytes'
$ go test -benchtime 10s -bench '/xxhash,direct,bytes'
```
## Projects using this package
- [InfluxDB](https://github.com/influxdata/influxdb)
- [Prometheus](https://github.com/prometheus/prometheus)
- [FreeCache](https://github.com/coocood/freecache)

View file

@@ -1,3 +0,0 @@
module github.com/cespare/xxhash/v2
go 1.11

View file

View file

@@ -1,236 +0,0 @@
// Package xxhash implements the 64-bit variant of xxHash (XXH64) as described
// at http://cyan4973.github.io/xxHash/.
package xxhash
import (
"encoding/binary"
"errors"
"math/bits"
)
const (
prime1 uint64 = 11400714785074694791
prime2 uint64 = 14029467366897019727
prime3 uint64 = 1609587929392839161
prime4 uint64 = 9650029242287828579
prime5 uint64 = 2870177450012600261
)
// NOTE(caleb): I'm using both consts and vars of the primes. Using consts where
// possible in the Go code is worth a small (but measurable) performance boost
// by avoiding some MOVQs. Vars are needed for the asm and also are useful for
// convenience in the Go code in a few places where we need to intentionally
// avoid constant arithmetic (e.g., v1 := prime1 + prime2 fails because the
// result overflows a uint64).
var (
prime1v = prime1
prime2v = prime2
prime3v = prime3
prime4v = prime4
prime5v = prime5
)
// Digest implements hash.Hash64.
type Digest struct {
v1 uint64
v2 uint64
v3 uint64
v4 uint64
total uint64
mem [32]byte
n int // how much of mem is used
}
// New creates a new Digest that computes the 64-bit xxHash algorithm.
func New() *Digest {
var d Digest
d.Reset()
return &d
}
// Reset clears the Digest's state so that it can be reused.
func (d *Digest) Reset() {
d.v1 = prime1v + prime2
d.v2 = prime2
d.v3 = 0
d.v4 = -prime1v
d.total = 0
d.n = 0
}
// Size always returns 8 bytes.
func (d *Digest) Size() int { return 8 }
// BlockSize always returns 32 bytes.
func (d *Digest) BlockSize() int { return 32 }
// Write adds more data to d. It always returns len(b), nil.
func (d *Digest) Write(b []byte) (n int, err error) {
n = len(b)
d.total += uint64(n)
if d.n+n < 32 {
// This new data doesn't even fill the current block.
copy(d.mem[d.n:], b)
d.n += n
return
}
if d.n > 0 {
// Finish off the partial block.
copy(d.mem[d.n:], b)
d.v1 = round(d.v1, u64(d.mem[0:8]))
d.v2 = round(d.v2, u64(d.mem[8:16]))
d.v3 = round(d.v3, u64(d.mem[16:24]))
d.v4 = round(d.v4, u64(d.mem[24:32]))
b = b[32-d.n:]
d.n = 0
}
if len(b) >= 32 {
// One or more full blocks left.
nw := writeBlocks(d, b)
b = b[nw:]
}
// Store any remaining partial block.
copy(d.mem[:], b)
d.n = len(b)
return
}
// Sum appends the current hash to b and returns the resulting slice.
func (d *Digest) Sum(b []byte) []byte {
s := d.Sum64()
return append(
b,
byte(s>>56),
byte(s>>48),
byte(s>>40),
byte(s>>32),
byte(s>>24),
byte(s>>16),
byte(s>>8),
byte(s),
)
}
// Sum64 returns the current hash.
func (d *Digest) Sum64() uint64 {
var h uint64
if d.total >= 32 {
v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = d.v3 + prime5
}
h += d.total
i, end := 0, d.n
for ; i+8 <= end; i += 8 {
k1 := round(0, u64(d.mem[i:i+8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if i+4 <= end {
h ^= uint64(u32(d.mem[i:i+4])) * prime1
h = rol23(h)*prime2 + prime3
i += 4
}
for i < end {
h ^= uint64(d.mem[i]) * prime5
h = rol11(h) * prime1
i++
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return h
}
const (
magic = "xxh\x06"
marshaledSize = len(magic) + 8*5 + 32
)
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (d *Digest) MarshalBinary() ([]byte, error) {
b := make([]byte, 0, marshaledSize)
b = append(b, magic...)
b = appendUint64(b, d.v1)
b = appendUint64(b, d.v2)
b = appendUint64(b, d.v3)
b = appendUint64(b, d.v4)
b = appendUint64(b, d.total)
b = append(b, d.mem[:d.n]...)
b = b[:len(b)+len(d.mem)-d.n]
return b, nil
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (d *Digest) UnmarshalBinary(b []byte) error {
if len(b) < len(magic) || string(b[:len(magic)]) != magic {
return errors.New("xxhash: invalid hash state identifier")
}
if len(b) != marshaledSize {
return errors.New("xxhash: invalid hash state size")
}
b = b[len(magic):]
b, d.v1 = consumeUint64(b)
b, d.v2 = consumeUint64(b)
b, d.v3 = consumeUint64(b)
b, d.v4 = consumeUint64(b)
b, d.total = consumeUint64(b)
copy(d.mem[:], b)
b = b[len(d.mem):]
d.n = int(d.total % uint64(len(d.mem)))
return nil
}
func appendUint64(b []byte, x uint64) []byte {
var a [8]byte
binary.LittleEndian.PutUint64(a[:], x)
return append(b, a[:]...)
}
func consumeUint64(b []byte) ([]byte, uint64) {
x := u64(b)
return b[8:], x
}
func u64(b []byte) uint64 { return binary.LittleEndian.Uint64(b) }
func u32(b []byte) uint32 { return binary.LittleEndian.Uint32(b) }
func round(acc, input uint64) uint64 {
acc += input * prime2
acc = rol31(acc)
acc *= prime1
return acc
}
func mergeRound(acc, val uint64) uint64 {
val = round(0, val)
acc ^= val
acc = acc*prime1 + prime4
return acc
}
func rol1(x uint64) uint64 { return bits.RotateLeft64(x, 1) }
func rol7(x uint64) uint64 { return bits.RotateLeft64(x, 7) }
func rol11(x uint64) uint64 { return bits.RotateLeft64(x, 11) }
func rol12(x uint64) uint64 { return bits.RotateLeft64(x, 12) }
func rol18(x uint64) uint64 { return bits.RotateLeft64(x, 18) }
func rol23(x uint64) uint64 { return bits.RotateLeft64(x, 23) }
func rol27(x uint64) uint64 { return bits.RotateLeft64(x, 27) }
func rol31(x uint64) uint64 { return bits.RotateLeft64(x, 31) }

View file

@@ -1,13 +0,0 @@
// +build !appengine
// +build gc
// +build !purego
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
//
//go:noescape
func Sum64(b []byte) uint64
//go:noescape
func writeBlocks(d *Digest, b []byte) int

View file

@@ -1,215 +0,0 @@
// +build !appengine
// +build gc
// +build !purego
#include "textflag.h"
// Register allocation:
// AX h
// CX pointer to advance through b
// DX n
// BX loop end
// R8 v1, k1
// R9 v2
// R10 v3
// R11 v4
// R12 tmp
// R13 prime1v
// R14 prime2v
// R15 prime4v
// round reads from and advances the buffer pointer in CX.
// It assumes that R13 has prime1v and R14 has prime2v.
#define round(r) \
MOVQ (CX), R12 \
ADDQ $8, CX \
IMULQ R14, R12 \
ADDQ R12, r \
ROLQ $31, r \
IMULQ R13, r
// mergeRound applies a merge round on the two registers acc and val.
// It assumes that R13 has prime1v, R14 has prime2v, and R15 has prime4v.
#define mergeRound(acc, val) \
IMULQ R14, val \
ROLQ $31, val \
IMULQ R13, val \
XORQ val, acc \
IMULQ R13, acc \
ADDQ R15, acc
// func Sum64(b []byte) uint64
TEXT ·Sum64(SB), NOSPLIT, $0-32
// Load fixed primes.
MOVQ ·prime1v(SB), R13
MOVQ ·prime2v(SB), R14
MOVQ ·prime4v(SB), R15
// Load slice.
MOVQ b_base+0(FP), CX
MOVQ b_len+8(FP), DX
LEAQ (CX)(DX*1), BX
// The first loop limit will be len(b)-32.
SUBQ $32, BX
// Check whether we have at least one block.
CMPQ DX, $32
JLT noBlocks
// Set up initial state (v1, v2, v3, v4).
MOVQ R13, R8
ADDQ R14, R8
MOVQ R14, R9
XORQ R10, R10
XORQ R11, R11
SUBQ R13, R11
// Loop until CX > BX.
blockLoop:
round(R8)
round(R9)
round(R10)
round(R11)
CMPQ CX, BX
JLE blockLoop
MOVQ R8, AX
ROLQ $1, AX
MOVQ R9, R12
ROLQ $7, R12
ADDQ R12, AX
MOVQ R10, R12
ROLQ $12, R12
ADDQ R12, AX
MOVQ R11, R12
ROLQ $18, R12
ADDQ R12, AX
mergeRound(AX, R8)
mergeRound(AX, R9)
mergeRound(AX, R10)
mergeRound(AX, R11)
JMP afterBlocks
noBlocks:
MOVQ ·prime5v(SB), AX
afterBlocks:
ADDQ DX, AX
// Right now BX has len(b)-32, and we want to loop until CX > len(b)-8.
ADDQ $24, BX
CMPQ CX, BX
JG fourByte
wordLoop:
// Calculate k1.
MOVQ (CX), R8
ADDQ $8, CX
IMULQ R14, R8
ROLQ $31, R8
IMULQ R13, R8
XORQ R8, AX
ROLQ $27, AX
IMULQ R13, AX
ADDQ R15, AX
CMPQ CX, BX
JLE wordLoop
fourByte:
ADDQ $4, BX
CMPQ CX, BX
JG singles
MOVL (CX), R8
ADDQ $4, CX
IMULQ R13, R8
XORQ R8, AX
ROLQ $23, AX
IMULQ R14, AX
ADDQ ·prime3v(SB), AX
singles:
ADDQ $4, BX
CMPQ CX, BX
JGE finalize
singlesLoop:
MOVBQZX (CX), R12
ADDQ $1, CX
IMULQ ·prime5v(SB), R12
XORQ R12, AX
ROLQ $11, AX
IMULQ R13, AX
CMPQ CX, BX
JL singlesLoop
finalize:
MOVQ AX, R12
SHRQ $33, R12
XORQ R12, AX
IMULQ R14, AX
MOVQ AX, R12
SHRQ $29, R12
XORQ R12, AX
IMULQ ·prime3v(SB), AX
MOVQ AX, R12
SHRQ $32, R12
XORQ R12, AX
MOVQ AX, ret+24(FP)
RET
// writeBlocks uses the same registers as above except that it uses AX to store
// the d pointer.
// func writeBlocks(d *Digest, b []byte) int
TEXT ·writeBlocks(SB), NOSPLIT, $0-40
// Load fixed primes needed for round.
MOVQ ·prime1v(SB), R13
MOVQ ·prime2v(SB), R14
// Load slice.
MOVQ b_base+8(FP), CX
MOVQ b_len+16(FP), DX
LEAQ (CX)(DX*1), BX
SUBQ $32, BX
// Load vN from d.
MOVQ d+0(FP), AX
MOVQ 0(AX), R8 // v1
MOVQ 8(AX), R9 // v2
MOVQ 16(AX), R10 // v3
MOVQ 24(AX), R11 // v4
// We don't need to check the loop condition here; this function is
// always called with at least one block of data to process.
blockLoop:
round(R8)
round(R9)
round(R10)
round(R11)
CMPQ CX, BX
JLE blockLoop
// Copy vN back to d.
MOVQ R8, 0(AX)
MOVQ R9, 8(AX)
MOVQ R10, 16(AX)
MOVQ R11, 24(AX)
// The number of bytes written is CX minus the old base pointer.
SUBQ b_base+8(FP), CX
MOVQ CX, ret+32(FP)
RET

View file

@@ -1,76 +0,0 @@
// +build !amd64 appengine !gc purego
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
func Sum64(b []byte) uint64 {
// A simpler version would be
// d := New()
// d.Write(b)
// return d.Sum64()
// but this is faster, particularly for small inputs.
n := len(b)
var h uint64
if n >= 32 {
v1 := prime1v + prime2
v2 := prime2
v3 := uint64(0)
v4 := -prime1v
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
v3 = round(v3, u64(b[16:24:len(b)]))
v4 = round(v4, u64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = prime5
}
h += uint64(n)
i, end := 0, len(b)
for ; i+8 <= end; i += 8 {
k1 := round(0, u64(b[i:i+8:len(b)]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if i+4 <= end {
h ^= uint64(u32(b[i:i+4:len(b)])) * prime1
h = rol23(h)*prime2 + prime3
i += 4
}
for ; i < end; i++ {
h ^= uint64(b[i]) * prime5
h = rol11(h) * prime1
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return h
}
func writeBlocks(d *Digest, b []byte) int {
v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
n := len(b)
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
v3 = round(v3, u64(b[16:24:len(b)]))
v4 = round(v4, u64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
d.v1, d.v2, d.v3, d.v4 = v1, v2, v3, v4
return n - len(b)
}

View file

@@ -1,15 +0,0 @@
// +build appengine
// This file contains the safe implementations of otherwise unsafe-using code.
package xxhash
// Sum64String computes the 64-bit xxHash digest of s.
func Sum64String(s string) uint64 {
return Sum64([]byte(s))
}
// WriteString adds more data to d. It always returns len(s), nil.
func (d *Digest) WriteString(s string) (n int, err error) {
return d.Write([]byte(s))
}

View file

@@ -1,46 +0,0 @@
// +build !appengine
// This file encapsulates usage of unsafe.
// xxhash_safe.go contains the safe implementations.
package xxhash
import (
"reflect"
"unsafe"
)
// Notes:
//
// See https://groups.google.com/d/msg/golang-nuts/dcjzJy-bSpw/tcZYBzQqAQAJ
// for some discussion about these unsafe conversions.
//
// In the future it's possible that compiler optimizations will make these
// unsafe operations unnecessary: https://golang.org/issue/2205.
//
// Both of these wrapper functions still incur function call overhead since they
// will not be inlined. We could write Go/asm copies of Sum64 and Digest.Write
// for strings to squeeze out a bit more speed. Mid-stack inlining should
// eventually fix this.
// Sum64String computes the 64-bit xxHash digest of s.
// It may be faster than Sum64([]byte(s)) by avoiding a copy.
func Sum64String(s string) uint64 {
var b []byte
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
bh.Len = len(s)
bh.Cap = len(s)
return Sum64(b)
}
// WriteString adds more data to d. It always returns len(s), nil.
// It may be faster than Write([]byte(s)) by avoiding a copy.
func (d *Digest) WriteString(s string) (n int, err error) {
var b []byte
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
bh.Len = len(s)
bh.Cap = len(s)
return d.Write(b)
}

22 vendor/github.com/go-kit/kit/LICENSE generated vendored
View file

@@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015 Peter Bourgon
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View file

@@ -1,151 +0,0 @@
# package log
`package log` provides a minimal interface for structured logging in services.
It may be wrapped to encode conventions, enforce type-safety, provide leveled
logging, and so on. It can be used for both typical application log events,
and log-structured data streams.
## Structured logging
Structured logging is, basically, conceding to the reality that logs are
_data_, and warrant some level of schematic rigor. Using a stricter,
key/value-oriented message format for our logs, containing contextual and
semantic information, makes it much easier to get insight into the
operational activity of the systems we build. Consequently, `package log` is
of the strong belief that "[the benefits of structured logging outweigh the
minimal effort involved](https://www.thoughtworks.com/radar/techniques/structured-logging)".
Migrating from unstructured to structured logging is probably a lot easier
than you'd expect.
```go
// Unstructured
log.Printf("HTTP server listening on %s", addr)
// Structured
logger.Log("transport", "HTTP", "addr", addr, "msg", "listening")
```
## Usage
### Typical application logging
```go
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
logger.Log("question", "what is the meaning of life?", "answer", 42)
// Output:
// question="what is the meaning of life?" answer=42
```
### Contextual Loggers
```go
func main() {
var logger log.Logger
logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
logger = log.With(logger, "instance_id", 123)
logger.Log("msg", "starting")
NewWorker(log.With(logger, "component", "worker")).Run()
NewSlacker(log.With(logger, "component", "slacker")).Run()
}
// Output:
// instance_id=123 msg=starting
// instance_id=123 component=worker msg=running
// instance_id=123 component=slacker msg=running
```
### Interact with stdlib logger
Redirect stdlib logger to Go kit logger.
```go
import (
"os"
stdlog "log"
kitlog "github.com/go-kit/kit/log"
)
func main() {
logger := kitlog.NewJSONLogger(kitlog.NewSyncWriter(os.Stdout))
stdlog.SetOutput(kitlog.NewStdlibAdapter(logger))
stdlog.Print("I sure like pie")
}
// Output:
// {"msg":"I sure like pie","ts":"2016/01/01 12:34:56"}
```
Or, if for legacy reasons you need to pipe all of your logging through the
stdlib log package, you can redirect the Go kit logger to the stdlib logger.
```go
logger := kitlog.NewLogfmtLogger(kitlog.StdlibWriter{})
logger.Log("legacy", true, "msg", "at least it's something")
// Output:
// 2016/01/01 12:34:56 legacy=true msg="at least it's something"
```
### Timestamps and callers
```go
var logger log.Logger
logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)
logger.Log("msg", "hello")
// Output:
// ts=2016-01-01T12:34:56Z caller=main.go:15 msg=hello
```
## Levels
Log levels are supported via the [level package](https://godoc.org/github.com/go-kit/kit/log/level).
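For example, a minimal leveled setup might look like this (a sketch using only identifiers from the level package documented later in this diff):

```go
var logger log.Logger
logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
logger = level.NewFilter(logger, level.AllowInfo()) // squelch debug events
level.Info(logger).Log("msg", "starting")
level.Debug(logger).Log("msg", "this event is filtered out")

// Output:
// level=info msg=starting
```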
## Supported output formats
- [Logfmt](https://brandur.org/logfmt) ([see also](https://blog.codeship.com/logfmt-a-log-format-thats-easy-to-read-and-write))
- JSON
## Enhancements
`package log` is centered on the one-method Logger interface.
```go
type Logger interface {
Log(keyvals ...interface{}) error
}
```
This interface, and its supporting code, is the product of much iteration
and evaluation. For more details on the evolution of the Logger interface,
see [The Hunt for a Logger Interface](http://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide#1),
a talk by [Chris Hines](https://github.com/ChrisHines).
Also, please see
[#63](https://github.com/go-kit/kit/issues/63),
[#76](https://github.com/go-kit/kit/pull/76),
[#131](https://github.com/go-kit/kit/issues/131),
[#157](https://github.com/go-kit/kit/pull/157),
[#164](https://github.com/go-kit/kit/issues/164), and
[#252](https://github.com/go-kit/kit/pull/252)
to review historical conversations about package log and the Logger interface.
Value-add packages and suggestions,
like improvements to [the leveled logger](https://godoc.org/github.com/go-kit/kit/log/level),
are of course welcome. Good proposals should
- Be composable with [contextual loggers](https://godoc.org/github.com/go-kit/kit/log#With),
- Not break the behavior of [log.Caller](https://godoc.org/github.com/go-kit/kit/log#Caller) in any wrapped contextual loggers, and
- Be friendly to packages that accept only an unadorned log.Logger.
## Benchmarks & comparisons
There are a few Go logging benchmarks and comparisons that include Go kit's package log.
- [imkira/go-loggers-bench](https://github.com/imkira/go-loggers-bench) includes kit/log
- [uber-common/zap](https://github.com/uber-common/zap), a zero-alloc logging library, includes a comparison with kit/log


@ -1,116 +0,0 @@
// Package log provides a structured logger.
//
// Structured logging produces logs easily consumed later by humans or
// machines. Humans might be interested in debugging errors, or tracing
// specific requests. Machines might be interested in counting interesting
// events, or aggregating information for off-line processing. In both cases,
// it is important that the log messages are structured and actionable.
// Package log is designed to encourage both of these best practices.
//
// Basic Usage
//
// The fundamental interface is Logger. Loggers create log events from
// key/value data. The Logger interface has a single method, Log, which
// accepts a sequence of alternating key/value pairs, which this package names
// keyvals.
//
// type Logger interface {
// Log(keyvals ...interface{}) error
// }
//
// Here is an example of a function using a Logger to create log events.
//
// func RunTask(task Task, logger log.Logger) string {
// logger.Log("taskID", task.ID, "event", "starting task")
// ...
// logger.Log("taskID", task.ID, "event", "task complete")
// }
//
// The keys in the above example are "taskID" and "event". The values are
// task.ID, "starting task", and "task complete". Every key is followed
// immediately by its value.
//
// Keys are usually plain strings. Values may be any type that has a sensible
// encoding in the chosen log format. With structured logging it is a good
// idea to log simple values without formatting them. This practice allows
// the chosen logger to encode values in the most appropriate way.
//
// Contextual Loggers
//
// A contextual logger stores keyvals that it includes in all log events.
// Building appropriate contextual loggers reduces repetition and aids
// consistency in the resulting log output. With and WithPrefix add context to
// a logger. We can use With to improve the RunTask example.
//
// func RunTask(task Task, logger log.Logger) string {
// logger = log.With(logger, "taskID", task.ID)
// logger.Log("event", "starting task")
// ...
// taskHelper(task.Cmd, logger)
// ...
// logger.Log("event", "task complete")
// }
//
// The improved version emits the same log events as the original for the
// first and last calls to Log. Passing the contextual logger to taskHelper
// enables each log event created by taskHelper to include the task.ID even
// though taskHelper does not have access to that value. Using contextual
// loggers this way simplifies producing log output that enables tracing the
// life cycle of individual tasks. (See the Contextual example for the full
// code of the above snippet.)
//
// Dynamic Contextual Values
//
// A Valuer function stored in a contextual logger generates a new value each
// time an event is logged. The Valuer example demonstrates how this feature
// works.
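//
// As a minimal sketch of that idea (the logger variable from the earlier
// snippets is assumed), a Valuer that numbers log events can be attached
// with With; a real counter would need synchronization:
//
//	var count int
//	logger = log.With(logger, "count", log.Valuer(func() interface{} {
//		count++
//		return count
//	}))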
//
// Valuers provide the basis for consistently logging timestamps and source
// code location. The log package defines several valuers for that purpose.
// See Timestamp, DefaultTimestamp, DefaultTimestampUTC, Caller, and
// DefaultCaller. A common logger initialization sequence that ensures all log
// entries contain a timestamp and source location looks like this:
//
// logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
// logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)
//
// Concurrent Safety
//
// Applications with multiple goroutines want each log event written to the
// same logger to remain separate from other log events. Package log provides
// two simple solutions for concurrent safe logging.
//
// NewSyncWriter wraps an io.Writer and serializes each call to its Write
// method. Using a SyncWriter has the benefit that the smallest practical
// portion of the logging logic is performed within a mutex, but it requires
// the formatting Logger to make only one call to Write per log event.
//
// NewSyncLogger wraps any Logger and serializes each call to its Log method.
// Using a SyncLogger has the benefit that it guarantees each log event is
// handled atomically within the wrapped logger, but it typically serializes
// both the formatting and output logic. Use a SyncLogger if the formatting
// logger may perform multiple writes per log event.
//
// Error Handling
//
// This package relies on the practice of wrapping or decorating loggers with
// other loggers to provide composable pieces of functionality. It also means
// that Logger.Log must return an error because some
// implementations—especially those that output log data to an io.Writer—may
// encounter errors that cannot be handled locally. This in turn means that
// Loggers that wrap other loggers should return errors from the wrapped
// logger up the stack.
//
// Fortunately, the decorator pattern also provides a way to avoid the
// necessity to check for errors every time an application calls Logger.Log.
// An application required to panic whenever its Logger encounters
// an error could initialize its logger as follows.
//
// fmtlogger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
// logger := log.LoggerFunc(func(keyvals ...interface{}) error {
// if err := fmtlogger.Log(keyvals...); err != nil {
// panic(err)
// }
// return nil
// })
package log


@ -1,91 +0,0 @@
package log
import (
"encoding"
"encoding/json"
"fmt"
"io"
"reflect"
)
type jsonLogger struct {
io.Writer
}
// NewJSONLogger returns a Logger that encodes keyvals to the Writer as a
// single JSON object. Each log event produces no more than one call to
// w.Write. The passed Writer must be safe for concurrent use by multiple
// goroutines if the returned Logger will be used concurrently.
func NewJSONLogger(w io.Writer) Logger {
return &jsonLogger{w}
}
func (l *jsonLogger) Log(keyvals ...interface{}) error {
n := (len(keyvals) + 1) / 2 // +1 to handle case when len is odd
m := make(map[string]interface{}, n)
for i := 0; i < len(keyvals); i += 2 {
k := keyvals[i]
var v interface{} = ErrMissingValue
if i+1 < len(keyvals) {
v = keyvals[i+1]
}
merge(m, k, v)
}
enc := json.NewEncoder(l.Writer)
enc.SetEscapeHTML(false)
return enc.Encode(m)
}
func merge(dst map[string]interface{}, k, v interface{}) {
var key string
switch x := k.(type) {
case string:
key = x
case fmt.Stringer:
key = safeString(x)
default:
key = fmt.Sprint(x)
}
// We want json.Marshaler and encoding.TextMarshaler to take priority over
// err.Error() and v.String(). But json.Marshal (called later) does that by
// default, so we force a no-op if it's one of those two cases.
switch x := v.(type) {
case json.Marshaler:
case encoding.TextMarshaler:
case error:
v = safeError(x)
case fmt.Stringer:
v = safeString(x)
}
dst[key] = v
}
func safeString(str fmt.Stringer) (s string) {
defer func() {
if panicVal := recover(); panicVal != nil {
if v := reflect.ValueOf(str); v.Kind() == reflect.Ptr && v.IsNil() {
s = "NULL"
} else {
panic(panicVal)
}
}
}()
s = str.String()
return
}
func safeError(err error) (s interface{}) {
defer func() {
if panicVal := recover(); panicVal != nil {
if v := reflect.ValueOf(err); v.Kind() == reflect.Ptr && v.IsNil() {
s = nil
} else {
panic(panicVal)
}
}
}()
s = err.Error()
return
}
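
A brief usage sketch for the constructor above; because the keyvals are collected into a map, the JSON encoder emits keys in sorted order:

```go
logger := log.NewJSONLogger(os.Stdout)
logger.Log("transport", "HTTP", "addr", ":8080", "msg", "listening")

// Output:
// {"addr":":8080","msg":"listening","transport":"HTTP"}
```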


@ -1,22 +0,0 @@
// Package level implements leveled logging on top of Go kit's log package. To
// use the level package, create a logger as per normal in your func main, and
// wrap it with level.NewFilter.
//
// var logger log.Logger
// logger = log.NewLogfmtLogger(os.Stderr)
// logger = level.NewFilter(logger, level.AllowInfo()) // <--
// logger = log.With(logger, "ts", log.DefaultTimestampUTC)
//
// Then, at the callsites, use one of the level.Debug, Info, Warn, or Error
// helper methods to emit leveled log events.
//
// logger.Log("foo", "bar") // as normal, no level
// level.Debug(logger).Log("request_id", reqID, "trace_data", trace.Get())
// if value > 100 {
// level.Error(logger).Log("value", value)
// }
//
// NewFilter allows precise control over what happens when a log event is
// emitted without a level key, or if a squelched level is used. Check the
// Option functions for details.
package level


@ -1,205 +0,0 @@
package level
import "github.com/go-kit/kit/log"
// Error returns a logger that includes a Key/ErrorValue pair.
func Error(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), ErrorValue())
}
// Warn returns a logger that includes a Key/WarnValue pair.
func Warn(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), WarnValue())
}
// Info returns a logger that includes a Key/InfoValue pair.
func Info(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), InfoValue())
}
// Debug returns a logger that includes a Key/DebugValue pair.
func Debug(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), DebugValue())
}
// NewFilter wraps next and implements level filtering. See the commentary on
// the Option functions for a detailed description of how to configure levels.
// If no options are provided, all leveled log events created with Debug,
// Info, Warn or Error helper methods are squelched and non-leveled log
// events are passed to next unmodified.
func NewFilter(next log.Logger, options ...Option) log.Logger {
l := &logger{
next: next,
}
for _, option := range options {
option(l)
}
return l
}
type logger struct {
next log.Logger
allowed level
squelchNoLevel bool
errNotAllowed error
errNoLevel error
}
func (l *logger) Log(keyvals ...interface{}) error {
var hasLevel, levelAllowed bool
for i := 1; i < len(keyvals); i += 2 {
if v, ok := keyvals[i].(*levelValue); ok {
hasLevel = true
levelAllowed = l.allowed&v.level != 0
break
}
}
if !hasLevel && l.squelchNoLevel {
return l.errNoLevel
}
if hasLevel && !levelAllowed {
return l.errNotAllowed
}
return l.next.Log(keyvals...)
}
// Option sets a parameter for the leveled logger.
type Option func(*logger)
// AllowAll is an alias for AllowDebug.
func AllowAll() Option {
return AllowDebug()
}
// AllowDebug allows error, warn, info and debug level log events to pass.
func AllowDebug() Option {
return allowed(levelError | levelWarn | levelInfo | levelDebug)
}
// AllowInfo allows error, warn and info level log events to pass.
func AllowInfo() Option {
return allowed(levelError | levelWarn | levelInfo)
}
// AllowWarn allows error and warn level log events to pass.
func AllowWarn() Option {
return allowed(levelError | levelWarn)
}
// AllowError allows only error level log events to pass.
func AllowError() Option {
return allowed(levelError)
}
// AllowNone allows no leveled log events to pass.
func AllowNone() Option {
return allowed(0)
}
func allowed(allowed level) Option {
return func(l *logger) { l.allowed = allowed }
}
// ErrNotAllowed sets the error to return from Log when it squelches a log
// event disallowed by the configured Allow[Level] option. By default,
// ErrNotAllowed is nil; in this case the log event is squelched with no
// error.
func ErrNotAllowed(err error) Option {
return func(l *logger) { l.errNotAllowed = err }
}
// SquelchNoLevel instructs Log to squelch log events with no level, so that
// they don't proceed through to the wrapped logger. If SquelchNoLevel is set
// to true and a log event is squelched in this way, the error value
// configured with ErrNoLevel is returned to the caller.
func SquelchNoLevel(squelch bool) Option {
return func(l *logger) { l.squelchNoLevel = squelch }
}
// ErrNoLevel sets the error to return from Log when it squelches a log event
// with no level. By default, ErrNoLevel is nil; in this case the log event is
// squelched with no error.
func ErrNoLevel(err error) Option {
return func(l *logger) { l.errNoLevel = err }
}
// NewInjector wraps next and returns a logger that adds a Key/level pair to
// the beginning of log events that don't already contain a level. In effect,
// this gives a default level to logs without a level.
func NewInjector(next log.Logger, level Value) log.Logger {
return &injector{
next: next,
level: level,
}
}
type injector struct {
next log.Logger
level interface{}
}
func (l *injector) Log(keyvals ...interface{}) error {
for i := 1; i < len(keyvals); i += 2 {
if _, ok := keyvals[i].(*levelValue); ok {
return l.next.Log(keyvals...)
}
}
kvs := make([]interface{}, len(keyvals)+2)
kvs[0], kvs[1] = key, l.level
copy(kvs[2:], keyvals)
return l.next.Log(kvs...)
}
// Value is the interface that each of the canonical level values implement.
// It contains unexported methods that prevent types from other packages from
// implementing it, guaranteeing that NewFilter can distinguish the levels
// defined in this package from all other values.
type Value interface {
String() string
levelVal()
}
// Key returns the unique key added to log events by the loggers in this
// package.
func Key() interface{} { return key }
// ErrorValue returns the unique value added to log events by Error.
func ErrorValue() Value { return errorValue }
// WarnValue returns the unique value added to log events by Warn.
func WarnValue() Value { return warnValue }
// InfoValue returns the unique value added to log events by Info.
func InfoValue() Value { return infoValue }
// DebugValue returns the unique value added to log events by Debug.
func DebugValue() Value { return debugValue }
var (
// key is of type interface{} so that it allocates once during package
// initialization and avoids allocating every time the value is added to a
// []interface{} later.
key interface{} = "level"
errorValue = &levelValue{level: levelError, name: "error"}
warnValue = &levelValue{level: levelWarn, name: "warn"}
infoValue = &levelValue{level: levelInfo, name: "info"}
debugValue = &levelValue{level: levelDebug, name: "debug"}
)
type level byte
const (
levelDebug level = 1 << iota
levelInfo
levelWarn
levelError
)
type levelValue struct {
name string
level
}
func (v *levelValue) String() string { return v.name }
func (v *levelValue) levelVal() {}
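
Putting the pieces above together, a hedged sketch of composing NewInjector and NewFilter (imports and error handling omitted for brevity):

```go
var logger log.Logger
logger = log.NewLogfmtLogger(os.Stderr)
logger = level.NewInjector(logger, level.InfoValue()) // default level for unleveled events
logger = level.NewFilter(logger, level.AllowWarn())   // only warn and error pass the filter

level.Error(logger).Log("err", "boom") // allowed: emitted as level=error
level.Debug(logger).Log("msg", "nope") // squelched by the filter (returns nil)
logger.Log("msg", "no level set")      // passes the filter unleveled; the injector adds level=info
```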


@ -1,135 +0,0 @@
package log
import "errors"
// Logger is the fundamental interface for all log operations. Log creates a
// log event from keyvals, a variadic sequence of alternating keys and values.
// Implementations must be safe for concurrent use by multiple goroutines. In
// particular, any implementation of Logger that appends to keyvals or
// modifies or retains any of its elements must make a copy first.
type Logger interface {
Log(keyvals ...interface{}) error
}
// ErrMissingValue is appended to keyvals slices with odd length to substitute
// the missing value.
var ErrMissingValue = errors.New("(MISSING)")
// With returns a new contextual logger with keyvals prepended to those passed
// to calls to Log. If logger is also a contextual logger created by With or
// WithPrefix, keyvals is appended to the existing context.
//
// The returned Logger replaces all value elements (odd indexes) containing a
// Valuer with their generated value for each call to its Log method.
func With(logger Logger, keyvals ...interface{}) Logger {
if len(keyvals) == 0 {
return logger
}
l := newContext(logger)
kvs := append(l.keyvals, keyvals...)
if len(kvs)%2 != 0 {
kvs = append(kvs, ErrMissingValue)
}
return &context{
logger: l.logger,
// Limiting the capacity of the stored keyvals ensures that a new
// backing array is created if the slice must grow in Log or With.
// Using the extra capacity without copying risks a data race that
// would violate the Logger interface contract.
keyvals: kvs[:len(kvs):len(kvs)],
hasValuer: l.hasValuer || containsValuer(keyvals),
}
}
// WithPrefix returns a new contextual logger with keyvals prepended to those
// passed to calls to Log. If logger is also a contextual logger created by
// With or WithPrefix, keyvals is prepended to the existing context.
//
// The returned Logger replaces all value elements (odd indexes) containing a
// Valuer with their generated value for each call to its Log method.
func WithPrefix(logger Logger, keyvals ...interface{}) Logger {
if len(keyvals) == 0 {
return logger
}
l := newContext(logger)
// Limiting the capacity of the stored keyvals ensures that a new
// backing array is created if the slice must grow in Log or With.
// Using the extra capacity without copying risks a data race that
// would violate the Logger interface contract.
n := len(l.keyvals) + len(keyvals)
if len(keyvals)%2 != 0 {
n++
}
kvs := make([]interface{}, 0, n)
kvs = append(kvs, keyvals...)
if len(kvs)%2 != 0 {
kvs = append(kvs, ErrMissingValue)
}
kvs = append(kvs, l.keyvals...)
return &context{
logger: l.logger,
keyvals: kvs,
hasValuer: l.hasValuer || containsValuer(keyvals),
}
}
// context is the Logger implementation returned by With and WithPrefix. It
// wraps a Logger and holds keyvals that it includes in all log events. Its
// Log method calls bindValues to generate values for each Valuer in the
// context keyvals.
//
// A context must always have the same number of stack frames between calls to
// its Log method and the eventual binding of Valuers to their value. This
// requirement comes from the need to allow a context to
// resolve application call site information for a Caller stored in the
// context. To do this we must be able to predict the number of logging
// functions on the stack when bindValues is called.
//
// Two implementation details provide the needed stack depth consistency.
//
// 1. newContext avoids introducing an additional layer when asked to
// wrap another context.
// 2. With and WithPrefix avoid introducing an additional layer by
// returning a newly constructed context with a merged keyvals rather
// than simply wrapping the existing context.
type context struct {
logger Logger
keyvals []interface{}
hasValuer bool
}
func newContext(logger Logger) *context {
if c, ok := logger.(*context); ok {
return c
}
return &context{logger: logger}
}
// Log replaces all value elements (odd indexes) containing a Valuer in the
// stored context with their generated value, appends keyvals, and passes the
// result to the wrapped Logger.
func (l *context) Log(keyvals ...interface{}) error {
kvs := append(l.keyvals, keyvals...)
if len(kvs)%2 != 0 {
kvs = append(kvs, ErrMissingValue)
}
if l.hasValuer {
// If no keyvals were appended above then we must copy l.keyvals so
// that future log events will reevaluate the stored Valuers.
if len(keyvals) == 0 {
kvs = append([]interface{}{}, l.keyvals...)
}
bindValues(kvs[:len(l.keyvals)])
}
return l.logger.Log(kvs...)
}
// LoggerFunc is an adapter to allow use of ordinary functions as Loggers. If
// f is a function with the appropriate signature, LoggerFunc(f) is a Logger
// object that calls f.
type LoggerFunc func(...interface{}) error
// Log implements Logger by calling f(keyvals...).
func (f LoggerFunc) Log(keyvals ...interface{}) error {
return f(keyvals...)
}
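
To make the ordering concrete, a small sketch of With versus WithPrefix (output shown for the logfmt logger defined elsewhere in this package):

```go
base := log.NewLogfmtLogger(os.Stdout)

logger := log.With(base, "a", 1)
logger = log.With(logger, "b", 2) // appended after the existing context
logger.Log("msg", "hi")
// Output: a=1 b=2 msg=hi

logger = log.WithPrefix(logger, "c", 3) // prepended before the existing context
logger.Log("msg", "hi")
// Output: c=3 a=1 b=2 msg=hi
```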


@ -1,62 +0,0 @@
package log
import (
"bytes"
"io"
"sync"
"github.com/go-logfmt/logfmt"
)
type logfmtEncoder struct {
*logfmt.Encoder
buf bytes.Buffer
}
func (l *logfmtEncoder) Reset() {
l.Encoder.Reset()
l.buf.Reset()
}
var logfmtEncoderPool = sync.Pool{
New: func() interface{} {
var enc logfmtEncoder
enc.Encoder = logfmt.NewEncoder(&enc.buf)
return &enc
},
}
type logfmtLogger struct {
w io.Writer
}
// NewLogfmtLogger returns a logger that encodes keyvals to the Writer in
// logfmt format. Each log event produces no more than one call to w.Write.
// The passed Writer must be safe for concurrent use by multiple goroutines if
// the returned Logger will be used concurrently.
func NewLogfmtLogger(w io.Writer) Logger {
return &logfmtLogger{w}
}
func (l logfmtLogger) Log(keyvals ...interface{}) error {
enc := logfmtEncoderPool.Get().(*logfmtEncoder)
enc.Reset()
defer logfmtEncoderPool.Put(enc)
if err := enc.EncodeKeyvals(keyvals...); err != nil {
return err
}
// Add newline to the end of the buffer
if err := enc.EndRecord(); err != nil {
return err
}
// The Logger interface requires implementations to be safe for concurrent
// use by multiple goroutines. For this implementation that means making
// only one call to l.w.Write() for each call to Log.
if _, err := l.w.Write(enc.buf.Bytes()); err != nil {
return err
}
return nil
}


@ -1,8 +0,0 @@
package log
type nopLogger struct{}
// NewNopLogger returns a logger that doesn't do anything.
func NewNopLogger() Logger { return nopLogger{} }
func (nopLogger) Log(...interface{}) error { return nil }


@ -1,116 +0,0 @@
package log
import (
"io"
"log"
"regexp"
"strings"
)
// StdlibWriter implements io.Writer by invoking the stdlib log.Print. It's
// designed to be passed to a Go kit logger as the writer, for cases where
// it's necessary to redirect all Go kit log output to the stdlib logger.
//
// If you have any choice in the matter, you shouldn't use this. Prefer to
// redirect the stdlib log to the Go kit logger via NewStdlibAdapter.
type StdlibWriter struct{}
// Write implements io.Writer.
func (w StdlibWriter) Write(p []byte) (int, error) {
log.Print(strings.TrimSpace(string(p)))
return len(p), nil
}
// StdlibAdapter wraps a Logger and allows it to be passed to the stdlib
// logger's SetOutput. It will extract date/timestamps, filenames, and
// messages, and place them under relevant keys.
type StdlibAdapter struct {
Logger
timestampKey string
fileKey string
messageKey string
}
// StdlibAdapterOption sets a parameter for the StdlibAdapter.
type StdlibAdapterOption func(*StdlibAdapter)
// TimestampKey sets the key for the timestamp field. By default, it's "ts".
func TimestampKey(key string) StdlibAdapterOption {
return func(a *StdlibAdapter) { a.timestampKey = key }
}
// FileKey sets the key for the file and line field. By default, it's "caller".
func FileKey(key string) StdlibAdapterOption {
return func(a *StdlibAdapter) { a.fileKey = key }
}
// MessageKey sets the key for the actual log message. By default, it's "msg".
func MessageKey(key string) StdlibAdapterOption {
return func(a *StdlibAdapter) { a.messageKey = key }
}
// NewStdlibAdapter returns a new StdlibAdapter wrapper around the passed
// logger. It's designed to be passed to log.SetOutput.
func NewStdlibAdapter(logger Logger, options ...StdlibAdapterOption) io.Writer {
a := StdlibAdapter{
Logger: logger,
timestampKey: "ts",
fileKey: "caller",
messageKey: "msg",
}
for _, option := range options {
option(&a)
}
return a
}
func (a StdlibAdapter) Write(p []byte) (int, error) {
result := subexps(p)
keyvals := []interface{}{}
var timestamp string
if date, ok := result["date"]; ok && date != "" {
timestamp = date
}
if time, ok := result["time"]; ok && time != "" {
if timestamp != "" {
timestamp += " "
}
timestamp += time
}
if timestamp != "" {
keyvals = append(keyvals, a.timestampKey, timestamp)
}
if file, ok := result["file"]; ok && file != "" {
keyvals = append(keyvals, a.fileKey, file)
}
if msg, ok := result["msg"]; ok {
keyvals = append(keyvals, a.messageKey, msg)
}
if err := a.Logger.Log(keyvals...); err != nil {
return 0, err
}
return len(p), nil
}
const (
logRegexpDate = `(?P<date>[0-9]{4}/[0-9]{2}/[0-9]{2})?[ ]?`
logRegexpTime = `(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?)?[ ]?`
logRegexpFile = `(?P<file>.+?:[0-9]+)?`
logRegexpMsg = `(: )?(?P<msg>.*)`
)
var (
logRegexp = regexp.MustCompile(logRegexpDate + logRegexpTime + logRegexpFile + logRegexpMsg)
)
func subexps(line []byte) map[string]string {
m := logRegexp.FindSubmatch(line)
if len(m) < len(logRegexp.SubexpNames()) {
return map[string]string{}
}
result := map[string]string{}
for i, name := range logRegexp.SubexpNames() {
result[name] = string(m[i])
}
return result
}


@ -1,116 +0,0 @@
package log
import (
"io"
"sync"
"sync/atomic"
)
// SwapLogger wraps another logger that may be safely replaced while other
// goroutines use the SwapLogger concurrently. The zero value for a SwapLogger
// will discard all log events without error.
//
// SwapLogger serves well as a package global logger that can be changed by
// importers.
type SwapLogger struct {
logger atomic.Value
}
type loggerStruct struct {
Logger
}
// Log implements the Logger interface by forwarding keyvals to the currently
// wrapped logger. It does not log anything if the wrapped logger is nil.
func (l *SwapLogger) Log(keyvals ...interface{}) error {
s, ok := l.logger.Load().(loggerStruct)
if !ok || s.Logger == nil {
return nil
}
return s.Log(keyvals...)
}
// Swap replaces the currently wrapped logger with logger. Swap may be called
// concurrently with calls to Log from other goroutines.
func (l *SwapLogger) Swap(logger Logger) {
l.logger.Store(loggerStruct{logger})
}
// NewSyncWriter returns a new writer that is safe for concurrent use by
// multiple goroutines. Writes to the returned writer are passed on to w. If
// another write is already in progress, the calling goroutine blocks until
// the writer is available.
//
// If w implements the following interface, so does the returned writer.
//
// interface {
// Fd() uintptr
// }
func NewSyncWriter(w io.Writer) io.Writer {
switch w := w.(type) {
case fdWriter:
return &fdSyncWriter{fdWriter: w}
default:
return &syncWriter{Writer: w}
}
}
// syncWriter synchronizes concurrent writes to an io.Writer.
type syncWriter struct {
sync.Mutex
io.Writer
}
// Write writes p to the underlying io.Writer. If another write is already in
// progress, the calling goroutine blocks until the syncWriter is available.
func (w *syncWriter) Write(p []byte) (n int, err error) {
w.Lock()
n, err = w.Writer.Write(p)
w.Unlock()
return n, err
}
// fdWriter is an io.Writer that also has an Fd method. The most common
// example of an fdWriter is an *os.File.
type fdWriter interface {
io.Writer
Fd() uintptr
}
// fdSyncWriter synchronizes concurrent writes to an fdWriter.
type fdSyncWriter struct {
sync.Mutex
fdWriter
}
// Write writes p to the underlying io.Writer. If another write is already in
// progress, the calling goroutine blocks until the fdSyncWriter is available.
func (w *fdSyncWriter) Write(p []byte) (n int, err error) {
w.Lock()
n, err = w.fdWriter.Write(p)
w.Unlock()
return n, err
}
// syncLogger provides concurrent safe logging for another Logger.
type syncLogger struct {
mu sync.Mutex
logger Logger
}
// NewSyncLogger returns a logger that synchronizes concurrent use of the
// wrapped logger. When multiple goroutines use the SyncLogger concurrently
// only one goroutine will be allowed to log to the wrapped logger at a time.
// The other goroutines will block until the logger is available.
func NewSyncLogger(logger Logger) Logger {
return &syncLogger{logger: logger}
}
// Log logs keyvals to the underlying Logger. If another log is already in
// progress, the calling goroutine blocks until the syncLogger is available.
func (l *syncLogger) Log(keyvals ...interface{}) error {
l.mu.Lock()
err := l.logger.Log(keyvals...)
l.mu.Unlock()
return err
}
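
A short sketch of the two pieces above working together: SwapLogger as a replaceable global, NewSyncLogger to serialize access to the wrapped logger:

```go
var swap log.SwapLogger    // zero value: log events are discarded
swap.Log("msg", "dropped") // no-op, returns nil

swap.Swap(log.NewSyncLogger(log.NewLogfmtLogger(os.Stderr)))
swap.Log("msg", "now serialized through the wrapped logger")
```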


@ -1,110 +0,0 @@
package log
import (
"runtime"
"strconv"
"strings"
"time"
)
// A Valuer generates a log value. When passed to With or WithPrefix in a
// value element (odd indexes), it represents a dynamic value which is re-
// evaluated with each log event.
type Valuer func() interface{}
// bindValues replaces all value elements (odd indexes) containing a Valuer
// with their generated value.
func bindValues(keyvals []interface{}) {
for i := 1; i < len(keyvals); i += 2 {
if v, ok := keyvals[i].(Valuer); ok {
keyvals[i] = v()
}
}
}
// containsValuer returns true if any of the value elements (odd indexes)
// contain a Valuer.
func containsValuer(keyvals []interface{}) bool {
for i := 1; i < len(keyvals); i += 2 {
if _, ok := keyvals[i].(Valuer); ok {
return true
}
}
return false
}
// Timestamp returns a timestamp Valuer. It invokes the t function to get the
// time; unless you are doing something tricky, pass time.Now.
//
// Most users will want to use DefaultTimestamp or DefaultTimestampUTC, which
// are TimestampFormats that use the RFC3339Nano format.
func Timestamp(t func() time.Time) Valuer {
return func() interface{} { return t() }
}
// TimestampFormat returns a timestamp Valuer with a custom time format. It
// invokes the t function to get the time to format; unless you are doing
// something tricky, pass time.Now. The layout string is passed to
// Time.Format.
//
// Most users will want to use DefaultTimestamp or DefaultTimestampUTC, which
// are TimestampFormats that use the RFC3339Nano format.
func TimestampFormat(t func() time.Time, layout string) Valuer {
return func() interface{} {
return timeFormat{
time: t(),
layout: layout,
}
}
}
// A timeFormat represents an instant in time and a layout used when
// marshaling to a text format.
type timeFormat struct {
time time.Time
layout string
}
func (tf timeFormat) String() string {
return tf.time.Format(tf.layout)
}
// MarshalText implements encoding.TextMarshaler.
func (tf timeFormat) MarshalText() (text []byte, err error) {
// The following code adapted from the standard library time.Time.Format
// method. Using the same undocumented magic constant to extend the size
// of the buffer as seen there.
b := make([]byte, 0, len(tf.layout)+10)
b = tf.time.AppendFormat(b, tf.layout)
return b, nil
}
// Caller returns a Valuer that returns a file and line from a specified depth
// in the callstack. Users will probably want to use DefaultCaller.
func Caller(depth int) Valuer {
return func() interface{} {
_, file, line, _ := runtime.Caller(depth)
idx := strings.LastIndexByte(file, '/')
// using idx+1 below handles both of following cases:
// idx == -1 because no "/" was found, or
// idx >= 0 and we want to start at the character after the found "/".
return file[idx+1:] + ":" + strconv.Itoa(line)
}
}
var (
// DefaultTimestamp is a Valuer that returns the current wallclock time,
// respecting time zones, when bound.
DefaultTimestamp = TimestampFormat(time.Now, time.RFC3339Nano)
// DefaultTimestampUTC is a Valuer that returns the current time in UTC
// when bound.
DefaultTimestampUTC = TimestampFormat(
func() time.Time { return time.Now().UTC() },
time.RFC3339Nano,
)
// DefaultCaller is a Valuer that returns the file and line where the Log
// method was invoked. It can only be used with log.With.
DefaultCaller = Caller(3)
)
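
For instance, a hedged sketch of wiring a custom timestamp layout and caller information into a contextual logger:

```go
logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
logger = log.With(logger,
	"ts", log.TimestampFormat(time.Now, time.RFC822), // custom layout instead of RFC3339Nano
	"caller", log.DefaultCaller,                      // file:line of the Log call site
)
logger.Log("msg", "hello")
// Output (timestamp and caller will differ):
// ts="01 Jan 20 12:34 UTC" caller=main.go:15 msg=hello
```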


@ -1 +0,0 @@
.vscode/


@ -1,18 +0,0 @@
language: go
sudo: false
go:
- "1.7.x"
- "1.8.x"
- "1.9.x"
- "1.10.x"
- "1.11.x"
- "1.12.x"
- "1.13.x"
- "tip"
before_install:
- go get github.com/mattn/goveralls
- go get golang.org/x/tools/cmd/cover
script:
- goveralls -service=travis-ci


@ -1,48 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.5.0] - 2020-01-03
### Changed
- Remove the dependency on github.com/kr/logfmt by [@ChrisHines]
- Move fuzz code to github.com/go-logfmt/fuzzlogfmt by [@ChrisHines]
## [0.4.0] - 2018-11-21
### Added
- Go module support by [@ChrisHines]
- CHANGELOG by [@ChrisHines]
### Changed
- Drop invalid runes from keys instead of returning ErrInvalidKey by [@ChrisHines]
- On panic while printing, attempt to print panic value by [@bboreham]
## [0.3.0] - 2016-11-15
### Added
- Pool buffers for quoted strings and byte slices by [@nussjustin]
### Fixed
- Fuzz fix, quote invalid UTF-8 values by [@judwhite]
## [0.2.0] - 2016-05-08
### Added
- Encoder.EncodeKeyvals by [@ChrisHines]
## [0.1.0] - 2016-03-28
### Added
- Encoder by [@ChrisHines]
- Decoder by [@ChrisHines]
- MarshalKeyvals by [@ChrisHines]
[0.5.0]: https://github.com/go-logfmt/logfmt/compare/v0.4.0...v0.5.0
[0.4.0]: https://github.com/go-logfmt/logfmt/compare/v0.3.0...v0.4.0
[0.3.0]: https://github.com/go-logfmt/logfmt/compare/v0.2.0...v0.3.0
[0.2.0]: https://github.com/go-logfmt/logfmt/compare/v0.1.0...v0.2.0
[0.1.0]: https://github.com/go-logfmt/logfmt/commits/v0.1.0
[@ChrisHines]: https://github.com/ChrisHines
[@bboreham]: https://github.com/bboreham
[@judwhite]: https://github.com/judwhite
[@nussjustin]: https://github.com/nussjustin


@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015 go-logfmt
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@ -1,33 +0,0 @@
[![GoDoc](https://godoc.org/github.com/go-logfmt/logfmt?status.svg)](https://godoc.org/github.com/go-logfmt/logfmt)
[![Go Report Card](https://goreportcard.com/badge/go-logfmt/logfmt)](https://goreportcard.com/report/go-logfmt/logfmt)
[![TravisCI](https://travis-ci.org/go-logfmt/logfmt.svg?branch=master)](https://travis-ci.org/go-logfmt/logfmt)
[![Coverage Status](https://coveralls.io/repos/github/go-logfmt/logfmt/badge.svg?branch=master)](https://coveralls.io/github/go-logfmt/logfmt?branch=master)
# logfmt
Package logfmt implements utilities to marshal and unmarshal data in the [logfmt
format](https://brandur.org/logfmt). It provides an API similar to
[encoding/json](http://golang.org/pkg/encoding/json/) and
[encoding/xml](http://golang.org/pkg/encoding/xml/).
The logfmt format was first documented by Brandur Leach in [this
article](https://brandur.org/logfmt). The format has not been formally
standardized. The most authoritative public specification to date has been the
documentation of a Go Language [package](http://godoc.org/github.com/kr/logfmt)
written by Blake Mizerany and Keith Rarick.
## Goals
This project attempts to conform as closely as possible to the prior art, while
also removing ambiguity where necessary to provide well behaved encoder and
decoder implementations.
## Non-goals
This project does not attempt to formally standardize the logfmt format. In the
event that logfmt is standardized this project would take conforming to the
standard as a goal.
## Versioning
Package logfmt publishes releases via [semver](http://semver.org/) compatible Git tags prefixed with a single 'v'.
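
As a brief sketch of the encoding side of that API (NewEncoder, EncodeKeyvals, and EndRecord are the same functions used by go-kit's logfmt logger earlier in this diff; error returns are ignored here for brevity):

```go
package main

import (
	"os"

	"github.com/go-logfmt/logfmt"
)

func main() {
	enc := logfmt.NewEncoder(os.Stdout)
	enc.EncodeKeyvals("level", "info", "msg", "hello world", "answer", 42)
	enc.EndRecord()
	// Output:
	// level=info msg="hello world" answer=42
}
```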

Some files were not shown because too many files have changed in this diff.