A helpful rule of thumb is that monads tend to show up when you see values in a context; each monad can be seen as layering an “effect” onto plain values:
- Maybe: partiality (uses: computations that can fail)
- Either: short-circuiting errors (uses: error/exception handling)
- [] (the list monad): nondeterminism (uses: list generation, filtering, …)
- State: a single mutable reference (uses: state)
- Reader: a shared environment (uses: variable bindings, common information, …)
- Writer: a “side-channel” output or accumulation (uses: logging, maintaining a write-only counter, …)
- Cont: non-local control-flow (uses: too numerous to list)
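To make a couple of these concrete, here is a small sketch (the function names are illustrative, not from any particular library) showing Maybe layering partiality and the list monad layering nondeterminism:

```haskell
-- Maybe: a computation that can fail.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Chaining two partial computations; any Nothing aborts the whole chain.
twoDivs :: Int -> Int -> Int -> Maybe Int
twoDivs x y z = safeDiv x y >>= \q -> safeDiv q z

-- []: all pairs (i, j) with i < j, expressed as nondeterministic choice.
pairs :: Int -> [(Int, Int)]
pairs n = do
  i <- [1 .. n]
  j <- [i + 1 .. n]
  return (i, j)
```

Here `twoDivs 12 3 2` is `Just 2`, while `twoDivs 1 0 5` is `Nothing`; `pairs 3` is `[(1,2),(1,3),(2,3)]`.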
Usually, you should design your monad by layering monad transformers from the standard Monad Transformer Library (MTL), which let you combine the above effects into a single monad. Together, these handle the majority of monads you might want to use. There are some additional monads not included in the MTL, such as the probability and supply monads.
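As a sketch of how such layering looks in practice (the names `App`, `step`, and `runApp` are hypothetical, and the particular stack is just one plausible choice), here is a Reader environment, a State counter, and Either-style errors combined with MTL transformers:

```haskell
import Control.Monad (when)
import Control.Monad.Reader
import Control.Monad.State
import Control.Monad.Except

-- A hypothetical application monad: a read-only limit, a mutable
-- counter, and String errors, stacked with MTL transformers.
type App a = ReaderT Int (StateT Int (Except String)) a

step :: App ()
step = do
  limit <- ask                                      -- Reader: shared environment
  n     <- get                                      -- State: mutable reference
  when (n >= limit) $ throwError "limit exceeded"   -- Except: short-circuiting error
  put (n + 1)

-- Peel the layers off in the reverse order they were stacked.
runApp :: Int -> Int -> App a -> Either String (a, Int)
runApp limit start app =
  runExcept (runStateT (runReaderT app limit) start)
```

With a limit of 2 and a start of 0, `runApp 2 0 (step >> step)` succeeds with `Right ((), 2)`, while a third `step` short-circuits to `Left "limit exceeded"`.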
As far as developing an intuition for whether a newly-defined type is a monad, and how it behaves as one, you can think of it by going up from Functor to Monad:
- Functor lets you transform values with pure functions.
- Applicative lets you embed pure values and express application — (<*>) lets you go from an embedded function and its embedded argument to an embedded result.
- Monad lets the structure of embedded computations depend on the values of previous computations.
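The same climb from Functor to Monad can be illustrated with Maybe (a sketch; the `ex` names are just for illustration):

```haskell
-- Functor: transform an embedded value with a pure function.
ex1 :: Maybe Int
ex1 = fmap (+ 1) (Just 2)                -- Just 3

-- Applicative: embed pure values (pure) and apply an embedded function
-- to an embedded argument ((<*>)); the shape of the computation is
-- fixed before it runs.
ex2 :: Maybe Int
ex2 = pure (+) <*> Just 1 <*> Just 2     -- Just 3

-- Monad: the second computation is *chosen* by inspecting the result
-- of the first, which Applicative alone cannot express.
ex3 :: Maybe Int
ex3 = Just 2 >>= \x -> if even x then Just (x * 10) else Nothing
```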
The easiest way to understand this is to look at the type of join:
join :: (Monad m) => m (m a) -> m a
This means that if you have an embedded computation whose result is a new embedded computation, you can create a computation that executes the result of that computation. So you can use monadic effects to create a new computation based on values of previous computations, and transfer control flow to that computation.
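A few small evaluations make this concrete (a sketch; `dependent` is an illustrative name):

```haskell
import Control.Monad (join)

-- join collapses one layer of structure: the outer computation's
-- result *is* the computation to run next.
collapsedMaybe :: Maybe Int
collapsedMaybe = join (Just (Just 3))      -- Just 3

collapsedList :: [Int]
collapsedList = join [[1, 2], [3]]         -- [1, 2, 3]

-- Transferring control: each value n determines which computation we
-- continue with (here, which list is generated next).
dependent :: [Int]
dependent = join (fmap (\n -> replicate n n) [1, 2, 3])
-- [1, 2, 2, 3, 3, 3]
```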
Interestingly, this can be a weakness of structuring things monadically: with Applicative, the structure of the computation is static (i.e. a given Applicative computation has a fixed structure of effects that cannot change based on intermediate values), whereas with Monad it is dynamic. This can restrict the optimisations you can perform; for instance, applicative parsers are less powerful than monadic ones (well, this isn’t strictly true, but it effectively is), but they can be optimised better.
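The flip side of a static structure is that it can be analysed without being run. A minimal sketch (the `Needs` functor and `field` are invented for this example, in the spirit of the `Const` functor): a Const-style Applicative that records which "fields" a computation would read, before any input exists.

```haskell
-- An Applicative that never produces a value; it only accumulates a
-- static description of the effects (field reads) it would perform.
newtype Needs a = Needs [String]

instance Functor Needs where
  fmap _ (Needs fs) = Needs fs

instance Applicative Needs where
  pure _ = Needs []
  Needs fs <*> Needs gs = Needs (fs ++ gs)

field :: String -> Needs String
field name = Needs [name]

-- The full list of fields is known statically, before running anything.
form :: Needs (String, String)
form = (,) <$> field "user" <*> field "email"

neededFields :: Needs a -> [String]
neededFields (Needs fs) = fs
-- neededFields form == ["user", "email"]
```

No lawful Monad instance is possible for `Needs`: `(>>=)` would have to run its continuation on a value that does not exist, which is exactly why monadic structure resists this kind of static analysis.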
Note that (>>=) can be defined as
m >>= f = join (fmap f m)
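As a sketch of this style of definition, here is a hand-rolled identity monad (named `Id` to avoid clashing with the standard `Identity`) where `(>>=)` is derived from `fmap` and a `join`-like function:

```haskell
newtype Id a = Id a deriving (Eq, Show)

instance Functor Id where
  fmap f (Id x) = Id (f x)

instance Applicative Id where
  pure = Id
  Id f <*> Id x = Id (f x)

-- The join for Id: strip one layer of wrapping.
joinId :: Id (Id a) -> Id a
joinId (Id m) = m

-- Monad defined via return (pure) and join, exactly as above.
instance Monad Id where
  m >>= f = joinId (fmap f m)
```

With this, `Id 2 >>= \x -> Id (x + 1)` evaluates to `Id 3`, as the monad laws require.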
and so a monad can be defined simply with return and join (assuming it’s a Functor; all monads are applicative functors, but Haskell’s typeclass hierarchy unfortunately doesn’t require this for historical reasons).
As an additional note, you probably shouldn’t focus too heavily on monads, no matter what kind of buzz they get from misguided non-Haskellers. There are many typeclasses that represent meaningful and powerful patterns, and not everything is best expressed as a monad. Applicative, Monoid, Foldable… which abstraction to use depends entirely on your situation. And, of course, just because something is a monad doesn’t mean it can’t be other things too; being a monad is just another property of a type.
So, you shouldn’t think too much about “identifying monads”; the questions are more like:
- Can this code be expressed in a simpler monadic form? With which monad?
- Is this type I’ve just defined a monad? What generic patterns encoded by the standard functions on monads can I take advantage of?
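The first question often plays out like this (a sketch with hypothetical lookups, using `Data.Map` from the containers package): explicitly nested case analysis on Maybe collapses into the Maybe monad.

```haskell
import qualified Data.Map as M

-- Before: explicit nested cases threading the Nothing checks by hand.
addrOf :: M.Map String String -> M.Map String Int -> String -> Maybe Int
addrOf names houses person =
  case M.lookup person names of
    Nothing     -> Nothing
    Just street -> case M.lookup street houses of
      Nothing -> Nothing
      Just n  -> Just n

-- After: the same logic in the Maybe monad; partiality is the effect,
-- and (>>=) does the plumbing.
addrOf' :: M.Map String String -> M.Map String Int -> String -> Maybe Int
addrOf' names houses person = do
  street <- M.lookup person names
  M.lookup street houses
```

Both versions behave identically; the monadic one simply lets the Maybe instance handle the short-circuiting.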