Why there are both counters and gauges in Prometheus if gauges can act as counters?

From a conceptual point of view, gauges and counters have different purposes. A gauge typically represents a state, usually with the purpose of detecting saturation. The absolute value of a counter is not really meaningful; the real purpose is rather to compute an evolution (usually a utilization) with functions like irate()/rate(), increase() … Those evolution … Read more
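As a minimal sketch of the distinction, using the official prometheus_client library (the metric names here are made up for illustration): a counter only ever goes up and is meant to be rated, while a gauge tracks a current value that can move in both directions.

```python
from prometheus_client import CollectorRegistry, Counter, Gauge, generate_latest

registry = CollectorRegistry()

# Counter: monotonically increasing; you query rate()/increase() over it,
# not its absolute value.
requests_total = Counter(
    "http_requests_total", "Total HTTP requests served", registry=registry
)
requests_total.inc()
requests_total.inc()

# Gauge: a current state (e.g. in-flight requests); can go up, down, or be set.
in_flight = Gauge(
    "http_requests_in_flight", "Requests currently being handled", registry=registry
)
in_flight.inc()
in_flight.dec()
in_flight.set(3)

# Render the metrics in the Prometheus text exposition format.
print(generate_latest(registry).decode())
```

On the query side you would then use something like `rate(http_requests_total[5m])` for the counter, but read `http_requests_in_flight` directly.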

Software development metrics and reporting [closed]

A tale from personal experience. Apologies for the length. A few years ago our development group tried setting “proper” measurable objectives for individuals and team leaders. The experiment lasted for just one year, because hard metrics didn’t really work very well for individual objectives (see my question on the subject for some links and further … Read more

/actuator/prometheus missing in @SpringBootTest

Update: @Thierry mentioned @AutoConfigureMetrics is deprecated and one needs to use the @AutoConfigureObservability annotation instead. See his post! Original post: I faced the same issue. After some tracing through spring-context ConditionEvaluator, I found that the newly introduced @ConditionalOnEnabledMetricsExport(“prometheus”) condition on PrometheusMetricsExportAutoConfiguration prevented the endpoint from loading. This is intended behavior due to https://github.com/spring-projects/spring-boot/pull/21658 and impacts … Read more

Macro VS Micro VS Weighted VS Samples F1 Score

The question is about the meaning of the average parameter in sklearn.metrics.f1_score. As you can see from the code: average=micro tells the function to compute F1 from the total true positives, false negatives, and false positives (regardless of which label each prediction belongs to); average=macro tells the function to compute F1 for … Read more
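A small worked comparison (the labels and predictions below are made up for illustration): micro pools true/false positives and negatives across all classes, while macro averages the per-class F1 scores without weighting.

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# micro: pool TP/FP/FN over all classes; for single-label multiclass
# this equals plain accuracy (here 2 correct out of 6).
print(f1_score(y_true, y_pred, average="micro"))    # 0.333...

# macro: unweighted mean of per-class F1 (here class 0 scores 0.8,
# classes 1 and 2 score 0.0, so the mean is 0.8 / 3).
print(f1_score(y_true, y_pred, average="macro"))    # 0.266...

# weighted: per-class F1 weighted by each class's support.
print(f1_score(y_true, y_pred, average="weighted"))
```

Note how micro and macro diverge as soon as the classes perform unevenly, which is exactly why the choice of average matters on imbalanced datasets.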

Is sklearn.metrics.mean_squared_error the larger the better (negated)?

The actual function mean_squared_error has nothing negative about it. But the scorer used when you request ‘neg_mean_squared_error’ returns a negated version of the score. You can check the source code to see how it is defined: neg_mean_squared_error_scorer = make_scorer(mean_squared_error, greater_is_better=False) Observe how the param greater_is_better is set to False. Now … Read more
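A quick sketch of the sign flip, on tiny made-up data: the metric itself is non-negative, while the scorer retrieved by name returns its negation so that "higher is better" holds uniformly across scorers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import get_scorer, mean_squared_error

# Toy regression data (illustrative values only).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.1, 1.9, 3.2])

model = LinearRegression().fit(X, y)

# The raw metric: always >= 0, lower is better.
mse = mean_squared_error(y, model.predict(X))

# The named scorer: negated, so higher is better (as model selection expects).
neg = get_scorer("neg_mean_squared_error")(model, X, y)

print(mse, neg)  # neg is exactly -mse
```

This is why cross_val_score with scoring="neg_mean_squared_error" reports negative numbers: the value closest to zero is the best one.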

other open source alternatives to codahale’s “metrics”? [closed]

Some suggestions: Perf4J: Perf4J is a set of utilities for calculating and displaying performance statistics for Java code. ERMA: ERMA (Extremely Reusable Monitoring API) is an instrumentation API that has been designed to be applicable for all monitoring needs. javasimon: Java Simon is a simple monitoring API that allows you to follow and better understand … Read more