We send the slice off to be processed, but potentially start overwriting
the previous content straight away. Just start again with a fresh slice.
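A minimal sketch of the pattern, with illustrative names rather than the exporter's actual identifiers: once a batch has been handed to the consumer over the channel, the sender starts over with a fresh slice instead of appending into the same backing array.

```go
package main

import "fmt"

type event struct{ name string }

// sendBatches hands each accumulated batch to the consumer over ch. Because
// the consumer may still be reading the slice it received, the sender must
// not reuse the same backing array (e.g. via batch = batch[:0]); it starts
// again with a fresh slice instead.
func sendBatches(ch chan<- []event, inputs [][]event) {
	var batch []event
	for _, in := range inputs {
		batch = append(batch, in...)
		ch <- batch
		batch = nil // fresh slice; the receiver owns the old one now
	}
	close(ch)
}

func main() {
	ch := make(chan []event)
	go sendBatches(ch, [][]event{{{name: "a"}}, {{name: "b"}}})
	for batch := range ch {
		fmt.Println(len(batch))
	}
}
```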
Fixes #247
Signed-off-by: Tristan Colgate <tristan@qubit.com>
The lastRegistredAt element was only set when a time series was first created.
Now it is updated every time a new event is seen for that time series.
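A minimal sketch of the idea, assuming a registry keyed by series name; the type and field names are illustrative, not the exporter's exact ones.

```go
package main

import "time"

type series struct {
	lastRegisteredAt time.Time
}

type registry struct {
	byName map[string]*series
}

// observe is called for every incoming event. It refreshes the timestamp on
// every event, not only when the series is first created, so an actively
// updated series is never removed by the TTL sweep.
func (r *registry) observe(name string) {
	s, ok := r.byName[name]
	if !ok {
		s = &series{}
		r.byName[name] = s
	}
	s.lastRegisteredAt = time.Now()
}

func main() {
	r := &registry{byName: map[string]*series{}}
	r.observe("api_requests_total")
	r.observe("api_requests_total") // second event refreshes the timestamp too
}
```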
Signed-off-by: Cedric Den Haese <cedric.den.haese@gmail.com>
They can start with a *, which makes YAML barf if not quoted. Instead of
figuring out individually what does and does not need to be quoted,
always quote.
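To illustrate the failure mode in a standalone sketch (not the exporter's code, and assuming the values in question are glob-style match patterns like those in the mapping config): a leading unquoted * makes the parser treat the value as a YAML alias, so the document fails to load, while the quoted form parses as a plain string.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	var doc map[string]string

	// Unquoted: the leading * makes the parser try to read an alias, and
	// parsing fails with an error.
	err := yaml.Unmarshal([]byte(`match: *.web.*.latency`), &doc)
	fmt.Println(err)

	// Quoted: parsed as a plain string, as intended.
	err = yaml.Unmarshal([]byte(`match: "*.web.*.latency"`), &doc)
	fmt.Println(err, doc["match"])
}
```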
Signed-off-by: Matthias Rampke <mr@soundcloud.com>
This is more portable (works on any system that has UNIX signals) and
removes the dependency on fsnotify, which doesn't compile on AIX. It
also reduces the probability of failed reloads due to partial writes.
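A minimal sketch of the signal-driven reload, with a placeholder loadConfig standing in for the exporter's real mapping-config loader:

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// loadConfig stands in for the exporter's mapping-config loader.
func loadConfig(path string) error {
	// ... parse the file and atomically swap in the new mapping ...
	return nil
}

func main() {
	const path = "statsd_mapping.yml"

	hup := make(chan os.Signal, 1)
	signal.Notify(hup, syscall.SIGHUP)

	go func() {
		for range hup {
			if err := loadConfig(path); err != nil {
				log.Printf("error reloading config, keeping the old one: %v", err)
				continue
			}
			log.Println("mapping config reloaded")
		}
	}()

	select {} // the rest of the exporter would run here
}
```

Because the reload only happens on an explicit signal, a half-written file is less likely to be picked up mid-write than with a watch-based reload.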
Fixes #241.
Signed-off-by: Matthias Rampke <mr@soundcloud.com>
- Statsd allows users to provide a metric with the same name but
differing types (counter, gauge, timer)
- The exporter allows this by letting users specify a
"match_metric_type" in the mapping config
- However, the mapper cache does not look at the metric type, so it would
return a MetricMapperCacheResult for the type of the first metric with
that name that the exporter saw
- Add MetricType to the signature for the cache and format the metric
name with the type to provide unique keys for metrics with the same
name but differing types (see the sketch below)
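A minimal sketch of the keying scheme; the type names are illustrative rather than the exporter's actual cache types:

```go
package main

import "fmt"

// The key combines the metric type with the metric name, so "foo" seen as a
// counter and "foo" seen as a timer get separate cache entries.
type metricType string

type cacheResult struct {
	mappedName string
	labels     map[string]string
}

type mapperCache map[string]cacheResult

func cacheKey(metricName string, mt metricType) string {
	return string(mt) + "." + metricName
}

func (c mapperCache) add(metricName string, mt metricType, r cacheResult) {
	c[cacheKey(metricName, mt)] = r
}

func (c mapperCache) get(metricName string, mt metricType) (cacheResult, bool) {
	r, ok := c[cacheKey(metricName, mt)]
	return r, ok
}

func main() {
	c := mapperCache{}
	c.add("foo", "counter", cacheResult{mappedName: "foo_total"})
	c.add("foo", "timer", cacheResult{mappedName: "foo_duration_seconds"})
	r, _ := c.get("foo", "timer")
	fmt.Println(r.mappedName) // foo_duration_seconds, not the counter's result
}
```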
Signed-off-by: Andy Paine <andy.paine@digital.cabinet-office.gov.uk>
At high traffic levels, the locking around sending on channels can cause
a large amount of blocking and CPU usage. This adds an event queue
mechanism so that events are queued for a short period of time, and
flushed in batches to the main exporter goroutine periodically.
The default is to flush every 1000 events, or every 200ms, whichever
happens first.
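A minimal sketch of the queueing, not the exporter's actual implementation: events are buffered and flushed to the consumer channel once the batch is full or the flush interval elapses, whichever comes first.

```go
package main

import (
	"sync"
	"time"
)

type event struct{ name string }

type eventQueue struct {
	mu        sync.Mutex
	buf       []event
	batchSize int
	out       chan<- []event
}

func newEventQueue(out chan<- []event, batchSize int, flushInterval time.Duration) *eventQueue {
	q := &eventQueue{batchSize: batchSize, out: out}
	go func() {
		for range time.Tick(flushInterval) {
			q.flush()
		}
	}()
	return q
}

// queue buffers one event and flushes early once the batch is full.
func (q *eventQueue) queue(e event) {
	q.mu.Lock()
	q.buf = append(q.buf, e)
	full := len(q.buf) >= q.batchSize
	q.mu.Unlock()
	if full {
		q.flush()
	}
}

// flush hands the current batch to the consumer and starts a fresh slice.
func (q *eventQueue) flush() {
	q.mu.Lock()
	batch := q.buf
	q.buf = nil // the consumer owns the old slice now
	q.mu.Unlock()
	if len(batch) > 0 {
		q.out <- batch
	}
}

func main() {
	out := make(chan []event, 1)
	q := newEventQueue(out, 1000, 200*time.Millisecond)
	q.queue(event{name: "api.requests"})
	<-out // arrives after at most ~200ms even though the batch isn't full
}
```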
Signed-off-by: Clayton O'Neill <claytono@github.com>
This was attempting to clean up any vectors that no longer have metrics
associated with them. Unfortunately this isn't really possible because
the prometheus client registry allows the second registration of the
same metric, and then throws an error when gathering the metrics.
Another option would be to unregister the metric with the client
library, but that's not implemented for unchecked collectors.
Unfortunately this means that if we have a lot of metrics with differing
label sets appearing and disappearing, we'll leak some amount of memory
for vectors that are never used again, but this is also how the old
metric tracking code worked.
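A standalone sketch of the registry behaviour described above: two unchecked collectors that emit the same series both register successfully, and the conflict only surfaces as an error when gathering.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// dup is an unchecked collector: Describe sends no descriptors, so the
// registry cannot reject a duplicate registration up front.
type dup struct{}

func (dup) Describe(chan<- *prometheus.Desc) {}

func (dup) Collect(ch chan<- prometheus.Metric) {
	g := prometheus.NewGauge(prometheus.GaugeOpts{Name: "demo_gauge", Help: "demo"})
	g.Set(1)
	ch <- g
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(dup{}, dup{}) // both registrations succeed

	_, err := reg.Gather()
	fmt.Println(err) // the duplicate series is only reported here
}
```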
Signed-off-by: Clayton O'Neill <claytono@github.com>
This adds two tests.
The first test is to validate that two successive counter events will
increment a counter. This is more about ensuring we can look up the
same metric twice in a row than about checking increment functionality.
The second test verifies that the hashLabels function returns different
results when labels are changed.
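A self-contained sketch of the second test's idea (placed in a _test.go file), using a toy FNV-based hashLabels rather than the exporter's real implementation:

```go
package main

import (
	"hash/fnv"
	"sort"
	"testing"
)

// hashLabels is a stand-in with the same intent as the exporter's function:
// produce a stable hash over a label set.
func hashLabels(labels map[string]string) uint64 {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0xff})
		h.Write([]byte(labels[k]))
		h.Write([]byte{0xff})
	}
	return h.Sum64()
}

func TestHashLabelsChangesWhenLabelsChange(t *testing.T) {
	a := hashLabels(map[string]string{"app": "api", "env": "prod"})
	b := hashLabels(map[string]string{"app": "api", "env": "staging"})
	if a == b {
		t.Fatalf("expected different hashes for different label values")
	}
}
```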
Signed-off-by: Clayton O'Neill <claytono@github.com>
This reworks/rewrites the way that metric registration and tracking are
handled across all of statsd_exporter. The goal here is to reduce
memory and CPU usage, but also to reduce complexity by unifying metric
registration with the TTL tracking for expiration.
Some high level notes:
* Previously metric names and labels were being hashed three times for
every event accepted: in the container code, the save label set code and
again in the prometheus client libraries. This unifies the first two
and caches the results of `GetMetricWith` to avoid the third.
* This optimizes the label hashing to reduce CPU overhead and memory
allocations. The label hashing code previously showed up prominently in
all CPU and memory allocation profiles (see the sketch below).
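A minimal sketch of the GetMetricWith caching for a single CounterVec, with an illustrative FNV hash standing in for the optimized hashing; the real exporter does this generically across metric types:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"

	"github.com/prometheus/client_golang/prometheus"
)

// hashNameAndLabels computes one hash over the metric name and its sorted
// labels, so the name/label set is hashed a single time per lookup.
func hashNameAndLabels(name string, labels prometheus.Labels) uint64 {
	h := fnv.New64a()
	h.Write([]byte(name))
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(labels[k]))
	}
	return h.Sum64()
}

// cachedCounterVec caches the Counter returned by GetMetricWith so repeated
// events for the same name/label set skip the client library's own lookup.
type cachedCounterVec struct {
	vec   *prometheus.CounterVec
	cache map[uint64]prometheus.Counter
}

func (c *cachedCounterVec) get(name string, labels prometheus.Labels) (prometheus.Counter, error) {
	key := hashNameAndLabels(name, labels)
	if ctr, ok := c.cache[key]; ok {
		return ctr, nil
	}
	ctr, err := c.vec.GetMetricWith(labels)
	if err != nil {
		return nil, err
	}
	c.cache[key] = ctr
	return ctr, nil
}

func main() {
	vec := prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "statsd_events_total", Help: "demo"},
		[]string{"app"},
	)
	c := &cachedCounterVec{vec: vec, cache: map[uint64]prometheus.Counter{}}
	for i := 0; i < 3; i++ {
		ctr, err := c.get("statsd_events_total", prometheus.Labels{"app": "api"})
		if err != nil {
			panic(err)
		}
		ctr.Inc() // only the first call goes through GetMetricWith
	}
	fmt.Println("done")
}
```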
Using the BenchmarkExporterListener benchmark, the improvement looks
like this:
Before:
cpu: 11,341,797 ns/op
memory allocated: 1,731,119 B/op
memory allocations: 58,028 allocs/op
After:
cpu: 7,084,651 ns/op
memory allocated: 906,556 B/op
memory allocations: 42,026 allocs/op
Signed-off-by: Clayton O'Neill <claytono@github.com>