What’s the best way to monitor your REST API? [closed]

Start by identifying the core needs you expect monitoring to address. Try to answer the two questions “What do I want to know?” and “How do I want to act on that information?”.

Examples of “What do I want to know?”

  • Performance over time
  • Largest API users
  • Most commonly used API features
  • Error occurrence in the API

Examples of “How do I want to act on that information?”

  • Review a dashboard of known measurements
  • Be alerted when something changes beyond expected bounds
  • Trace execution that led to that state
  • Review measurements for the entire lifetime of the system

If you can answer those questions, you can either find the right third-party solution that captures the metrics you’re interested in, or inject monitoring probes into the right sections of your API to tell you what you need to know. I noticed that you’re primarily a Laravel user, so many of the metrics you want can likely be captured by adding before ( Registering Before Filters On a Controller ) and after ( Registering an After Application Filter ) filters to your application, to measure response time and successful completion of each response. This is where the answers to the first set of questions ( “What do I want to know?” ) matter most, as they will guide where and what you measure in your app.
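As a rough illustration, here is a minimal sketch assuming a Laravel 4-style application using its before/after filters; the metric fields and the choice of logging destination are illustrative, not prescriptive:

    <?php

    // app/filters.php — sketch only; adapt to what you actually want to know.

    App::before(function($request)
    {
        // Record when the request started so the after filter can compute duration.
        $request->attributes->set('monitor.start', microtime(true));
    });

    App::after(function($request, $response)
    {
        $duration = microtime(true) - $request->attributes->get('monitor.start');

        // Capture the measurements you decided you care about: which endpoint
        // was hit, how long it took, and whether it completed successfully.
        Log::info('api.request', array(
            'path'     => $request->path(),
            'method'   => $request->method(),
            'status'   => $response->getStatusCode(),
            'duration' => round($duration * 1000) . 'ms',
        ));
    });

From there, swapping the log call for a push to whatever metrics backend you choose is a small change.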

Once you know where you can capture the data, selecting the right tool becomes a matter of choosing between (roughly) two classes of monitoring applications: highly specialized monitoring apps that are tightly bound to the operation of your application, and generalized monitoring software that is more akin to a time series database.

There are no popular open source examples of the highly specialized case that I’m aware of. Many commercial solutions do exist, however: NewRelic, Ruxit, DynaTrace, and so on. Their function could be described as similar to a remote profiler, with many other functions besides. (Also, don’t forget that a more traditional profiler may be useful for collecting some of the information you need – while it definitely won’t supplant monitoring your application, a lot of valuable information can be gleaned from profiling even before you go to production.)

On the general side of things, there are many more open source options that I’m aware of. The longest-lived is Graphite (a great intro to which may be read here: Measure Anything, Measure Everything), which is in fairly common use. Graphite is far from the only option, however; you can find plenty of others, like InfluxDB or Kibana (on top of Elasticsearch), should you wish to self-host.
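For a sense of how lightweight the reporting side can be, here is a minimal sketch of sending a timing metric to a StatsD daemon relaying into Graphite (the setup described in the “Measure Anything, Measure Everything” post). It assumes a StatsD instance listening on localhost:8125; the metric name and host are placeholders:

    <?php

    // Send a timing metric to StatsD over UDP. StatsD speaks a tiny plaintext
    // protocol: "metric.name:value|type" (here "|ms" for a timing in milliseconds).
    function statsd_timing($metric, $milliseconds)
    {
        $payload = sprintf('%s:%d|ms', $metric, $milliseconds);

        // Fire-and-forget: a UDP write won't block the request if StatsD is down.
        $socket = @fsockopen('udp://127.0.0.1', 8125, $errno, $errstr, 1);
        if ($socket !== false) {
            fwrite($socket, $payload);
            fclose($socket);
        }
    }

    // For example, called from the after filter sketched earlier:
    // statsd_timing('api.users.index.response_time', round($duration * 1000));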

Many of these open source tools also have hosted versions available from several providers. Additionally, there are many entirely commercial options in this camp (I’m the founder of one, in fact 🙂 – Instrumental).

Most of these commercial options exist because application owners have found it pretty onerous to run their own monitoring infrastructure on top of running their actual application; maintaining availability of yet another distributed system is not high on many ops personnel’s wishlists. 🙂
