Very precise and to-the-point video. It's very helpful! You could probably add the Java version and Gradle version used to the README file as well. Including those details may save some debugging effort for anyone who just wants to try this example as-is. I had the latest version of Java, i.e. 14, and the latest Gradle version, but it wasn't working with that; there were certain compatibility issues. When I downgraded to Java 11 it worked fine.
@TomGregoryTech 4 years ago
Hi Gaurav. Thanks for the comment, and sorry you had some problems with the repo. I do include the Gradle wrapper with the project, which pins the Gradle version at 5.2.1. I'll make a note to look into the Java version, so thanks for pointing that out.
@TomGregoryTech 4 years ago
Gradle updated to 6.4.1 and the project is now compiling with Java 14. See you in the next video.
@arunr761 4 years ago
Great Explanations
@ChandanKumar-ou9fr a year ago
Super content 👌
@DmitriyBlokhin 2 years ago
Thank you.
@branosvk666 4 years ago
great explanation! Thanks
@xiaoduoxu272 4 years ago
Thanks mate! I love it.
@maxidc1 4 years ago
Great video! Thanks!
@TomGregoryTech 4 years ago
You're welcome!
@krisorsmso5094 3 years ago
Amazing video!
@benjamine.ndugga729 4 years ago
Thanks Tom. I am trying to do something similar but with a lot more detail. I want to see values from sales made in real time.
@TomGregoryTech 4 years ago
You're welcome. Maybe you could have a counter for the number of sales and one for the total sale value? You can then graph the rate.
@benjamine.ndugga729 4 years ago
@@TomGregoryTech thank you mate. I was looking all over for a solution, you just made it easy for me.
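Tom's two-counter suggestion could be sketched in plain Python like this (counter names and sale values are made up; in practice Prometheus computes rate() server-side from scraped samples, so the manual rate calculation here is just to show the idea):

```python
# Sketch: one counter for the number of sales, one for the total sale value.
sales_total = 0.0        # counter: number of sales
sale_value_total = 0.0   # counter: sum of all sale values

def record_sale(value):
    """Increment both counters when a sale happens."""
    global sales_total, sale_value_total
    sales_total += 1
    sale_value_total += value

t0, v0 = 0.0, sale_value_total           # first "scrape"
for price in (9.99, 25.00, 14.01):
    record_sale(price)
t1, v1 = 10.0, sale_value_total          # second "scrape", 10s later

# Equivalent of rate(sale_value_total[10s]): increase over elapsed time.
rate = (v1 - v0) / (t1 - t0)
print(sales_total, round(rate, 2))       # 3.0 4.9
```

Graphing the rate of `sale_value_total` then shows revenue per second in real time, while `sales_total` gives the sales count.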
@wexwexexort 4 years ago
Thanks for the video and useful information. That said, I want to share that your choice of camera angle irritated me. Just some feedback.
@TomGregoryTech 4 years ago
Thanks, and I know what you mean. New setup has an improved camera position.
@wayne1435 3 years ago
Informative, great video!
@puru2901is 3 years ago
Really a helpful video!!!
@rrr00bb1 3 years ago
I'm puzzled as to how I should be modelling the kinds of metrics you need for large file content. I basically want the Universal Scalability Law, which is centered around analyzing a correctly weighted histogram of (load, throughput). When a file download starts, you know that the load went up by 1 (arrival counter), but you don't yet know the transfer rate. When the download stops, you know that the load went down by 1 (departure counter), the number of bytes involved, and the duration of the request. From there, you can work out how much the server's transfer rate had gone up between arrival and departure. You can report that as (start, stop, bytes), or equivalently a timestamped (duration, rate). Because the OVERLAPS are preserved, you can calculate an astonishing number of important, physically meaningful metrics. In particular, you can measure concurrency and predict when you can get no more throughput out of the system. I think I'd have to report a timestamped (bytes, duration) tuple to derive a USL, or at least report throughput bucketed by load (i.e. concurrent requests).
@TomGregoryTech 3 years ago
Hi rrr00bb, thanks for the question but it's not something I have experience with. Please post back here though if you find a way forward.
@rrr00bb1 3 years ago
@@TomGregoryTech I am working around it by using two kinds of counters: bytes_total{load=2} and sec_total{load=2}, i.e. keeping counters for when I have a certain number of clients in the system. I don't use rate, but query bytes_total/sec_total. This gives the average rate that a user experiences AT a given load - the times do not overlap, which represents the concurrency. That's different from the query rate(bytes_total[1m]) with a stack of graphs by load; in that case it tells you which bytes went up while under each load, but you can't tell what percentage of the time you were AT that load. The Universal Scalability Law is all about getting a scatter plot of (load, throughput) pairs and fitting a curve. It will forecast when you should stop trying to deal with the load by adding more servers, how many concurrent requests you can support, and the throughput that a user will experience at a given load; there are related calculations for expected response time from the same data.
@rrr00bb1 3 years ago
i.e. bytes_total/sec_total is what an individual user experiences - demand. rate(bytes_total) is what the servers supply in throughput. Compare a million users downloading from Google at 10MB/s vs 1 user downloading at 100MB/s from a simple web server. At any given time, there's a 2D point of (load, throughput). You can calculate this exactly if at the end of every request you get a (timestamp, duration, bytes) tuple. The real information is in how the durations overlap. Simple counters don't really model it right.
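The per-load counter workaround described above could be sketched in plain Python as follows (the load levels, byte counts, and durations are invented purely for illustration):

```python
# Sketch: keep bytes_total and sec_total counters labelled by concurrent
# load, then compute bytes_total / sec_total per load level to get the
# average rate a user experiences AT that load.
from collections import defaultdict

bytes_total = defaultdict(float)  # {load: bytes transferred while at that load}
sec_total = defaultdict(float)    # {load: seconds spent at that load}

def record(load, byte_count, seconds):
    """At the end of a request, attribute its bytes and time to a load level."""
    bytes_total[load] += byte_count
    sec_total[load] += seconds

# Simulated samples: per-user rate drops as load rises (the USL shape).
record(1, 100_000_000, 1.0)   # 1 client
record(2, 150_000_000, 2.0)   # 2 clients
record(4, 200_000_000, 4.0)   # 4 clients

per_user_rate = {load: bytes_total[load] / sec_total[load]
                 for load in sorted(bytes_total)}
print(per_user_rate)
# {1: 100000000.0, 2: 75000000.0, 4: 50000000.0}
```

The resulting (load, per-user rate) pairs are exactly the scatter-plot input the commenter wants for fitting a USL curve.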
@purshoth.k5027 2 years ago
Hi, that's a great video. Can you help me find a solution for absent metrics? We keep receiving false alerts on absent metrics. Is there a way to stop these false alerts? 1) Is there an alternative function to absent() that does the same job? 2) Does silencing an alert stop only the absent-metric alert, or does it stop genuine alerts as well? 3) What happens if we reboot Alertmanager - will the false/inactive alerts disappear? We use Prometheus version 2.10.1.
@itaco8066 4 years ago
Awesome
@yxd00181 3 years ago
Is there a video explaining the math behind these metrics with stream data?
@TomGregoryTech 3 years ago
If only...
@ArunKumar-xw6iw 3 years ago
Thanks for the wonderful video. I learnt a lot. 09:41 - What is request_duration_bucket, and what does the value in the result indicate? Does it mean the sum of the request durations of the 5 slowest requests in the last 5 minutes is 9.625?
@TomGregoryTech 3 years ago
Hi Arun. The "bucket" has an "le" label. Each bucket metric has a different label e.g. request_duration_bucket{le="10.0",} which means how many times was there a request with duration less than the "le" value (10 in this case). Importantly, each request can fall into multiple buckets. With the data captured in this format you can then query it in Prometheus with the histogram_quantile function.
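The cumulative-bucket behaviour Tom describes can be sketched in plain Python (the bucket boundaries and request durations below are made up):

```python
# Sketch: Prometheus histogram buckets are cumulative. Each observation
# increments EVERY bucket whose "le" (less-than-or-equal) boundary it
# falls under, so one request lands in multiple buckets.
buckets = {0.1: 0, 0.5: 0, 1.0: 0, 10.0: 0, float("inf"): 0}

def observe(duration):
    for le in buckets:
        if duration <= le:
            buckets[le] += 1

for d in (0.05, 0.3, 0.7, 4.2):   # four request durations in seconds
    observe(d)

# e.g. the le="0.5" bucket counts BOTH the 0.05s and the 0.3s requests.
print(buckets)
# {0.1: 1, 0.5: 2, 1.0: 3, 10.0: 4, inf: 4}
```

Each bucket answers "how many requests took at most this long?", which is the data histogram_quantile then works from.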
@anoopsidhu3437 3 years ago
@@TomGregoryTech Thanks Tom for the explanation. I was wondering how histogram_quantile works in relation to the multiple-buckets scenario in this example. Are we saying we take the 90th percentile value in each bucket and then sum them up to come up with the 9.625 value? I am a bit confused about how we apply the histogram_quantile function to the individual buckets.
@TomGregoryTech 3 years ago
@@anoopsidhu3437 Hi again! Here's how it's explained in the docs "The histogram_quantile() function interpolates quantile values by assuming a linear distribution within a bucket." You can read more here prometheus.io/docs/prometheus/latest/querying/functions/#histogram_quantile
@anoopsidhu3437 3 years ago
@@TomGregoryTech Thanks. If we have many buckets, do we interpolate quantile values in each bucket and then add them all, or do we select one bucket based on the quantile and derive the value from that? The link provided is a bit unclear on that.
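On the question of one bucket vs all buckets: per the interpolation rule quoted from the docs, histogram_quantile selects the ONE cumulative bucket the target quantile rank falls into, then interpolates linearly within it - it does not interpolate in every bucket and sum. A plain-Python sketch of that logic, with made-up bucket boundaries and counts:

```python
# Sketch of histogram_quantile(): find the single bucket containing the
# quantile rank, then assume observations are spread linearly inside it.
def histogram_quantile(q, cumulative_buckets):
    """cumulative_buckets: sorted list of (le, cumulative_count) pairs."""
    total = cumulative_buckets[-1][1]
    rank = q * total  # how many observations lie at or below the answer
    prev_le, prev_count = 0.0, 0
    for le, count in cumulative_buckets:
        if rank <= count:
            # Linear interpolation within this one bucket only.
            fraction = (rank - prev_count) / (count - prev_count)
            return prev_le + (le - prev_le) * fraction
        prev_le, prev_count = le, count

buckets = [(0.1, 10), (0.5, 60), (1.0, 90), (10.0, 100)]
print(histogram_quantile(0.9, buckets))  # rank 90 -> top of the le=1.0 bucket
print(histogram_quantile(0.5, buckets))  # rank 50 -> inside the le=0.5 bucket
```

Note this is an illustrative reimplementation of the documented behaviour, not the actual Prometheus source.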
@hadisoleimany6271 2 years ago
great
@9939364566 4 years ago
Hi, how can we query Prometheus for a custom COUNTER metric value using the Prometheus HTTP API? (We don't want to use the Prometheus server UI.)
@TomGregoryTech 4 years ago
Hi. Try some of the examples in the HTTP API docs e.g. curl 'localhost:9090/api/v1/query?query=up&time=2015-07-01T20:10:51.781Z' prometheus.io/docs/prometheus/latest/querying/api/
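The same /api/v1/query endpoint works for a custom counter; here is a stdlib-Python sketch of building that request (the metric name my_requests_total and the localhost:9090 address are placeholder assumptions):

```python
# Sketch: build a Prometheus HTTP API instant-query URL for a custom counter.
from urllib.parse import urlencode
# from urllib.request import urlopen  # uncomment to actually send it

base = "http://localhost:9090/api/v1/query"
params = {"query": "my_requests_total"}   # any PromQL expression works here
url = base + "?" + urlencode(params)
print(url)
# http://localhost:9090/api/v1/query?query=my_requests_total

# Against a running Prometheus you would then do:
# with urlopen(url) as resp:
#     print(resp.read())  # JSON body: {"status":"success","data":{...}}
```

urlencode takes care of escaping, which matters once the PromQL expression contains label matchers like `{job="app"}`.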
@ignazioc 4 years ago
Thanks for the info here. I was having trouble finding examples of custom metrics. Only one suggestion: don't read :D
@naveen6655 3 years ago
How can I calculate the max TPS in a day for an application from Prometheus? Could someone please let me know?
@TomGregoryTech 3 years ago
Depends what you mean by TPS. Look into the max_over_time function prometheus.io/docs/prometheus/latest/querying/functions/
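If "TPS" means transactions per second, the idea behind a max-over-time query can be sketched in plain Python: bucket request timestamps by second and take the largest count (the timestamps below are invented):

```python
# Sketch: compute the busiest second from raw request timestamps, which is
# roughly what max_over_time over a per-second rate would surface.
from collections import Counter
from math import floor

timestamps = [0.1, 0.2, 0.9, 1.4, 2.0, 2.3, 2.8, 2.9]  # seconds since midnight

per_second = Counter(floor(t) for t in timestamps)  # {second: request count}
max_tps = max(per_second.values())
print(max_tps)  # 4 (second 2 saw four requests)
```

In Prometheus you would not collect raw timestamps; a counter plus a subquery along the lines of max_over_time(rate(requests_total[1m])[1d:]) expresses the same question.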
@kiranpreetkaur8516 4 years ago
Nice, can you please let me know how to set up Prometheus for Parse-server + MongoDB?
@TomGregoryTech 4 years ago
Hi Kiranpreet. I haven't used Parse before, but for MongoDB maybe you could use this Prometheus exporter github.com/percona/mongodb_exporter ?
@maxi1kian0 4 years ago
Hello, what IDE are you using? Thanks
@TomGregoryTech 4 years ago
Hi Max. I use IntelliJ IDEA :)
@maxi1kian0 4 years ago
@@TomGregoryTech thank u
@DmC944 4 years ago
A bit late to the party, but how can request count be 22.5?
@TomGregoryTech 4 years ago
Party's just starting. This is a very good question, and I'm not really sure how the decimal value got in there. In the Prometheus Java client you can in fact increment a counter by any double value, although you can see in this code that I'm incrementing by an integer: github.com/tkgregory/metric-types/blob/master/src/main/java/com/tom/controller/CounterController.java I can only imagine that whilst recording this video I was trying out some non-integer increments, hence you're seeing 22.5. Sorry for the confusion!
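The double-increment behaviour Tom mentions can be illustrated with a minimal counter in plain Python (this mirrors the described client behaviour, it is not the Java client itself):

```python
# Sketch: a counter that, like a Prometheus counter, accepts any
# non-negative double increment - which is how a value like 22.5 can arise.
class Counter:
    def __init__(self):
        self.value = 0.0

    def inc(self, amount=1.0):
        if amount < 0:
            raise ValueError("counters can only go up")
        self.value += amount

requests_total = Counter()
for _ in range(22):
    requests_total.inc()   # the usual integer path
requests_total.inc(0.5)    # a fractional increment is legal too

print(requests_total.value)  # 22.5
```

A single inc(0.5) mixed in with integer increments is enough to leave a counter at a non-integer value.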
@amirkazemi7895 2 years ago
The intro looks like one of those get-rich-quick scams :)
@NagendraVaraPrasad-ks5nw a month ago
Can you please give me the git repo?
@TomGregoryTech a month ago
Hi. You can check it out, but note that I don't maintain it any more: github.com/tkgregory/metric-types