> I have trouble understanding how this actually helps more than just storing the results.
When a method is marked with the `async` modifier, the compiler automatically transforms the underlying method into a state machine, as Stephan demonstrates in the previous slides. This means that calling the first method will always trigger the creation of a `Task`.
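For concreteness, here's a minimal sketch of what the first (async) version could look like; the `HttpClient` usage and the class name are my own assumptions, not necessarily what the slides show:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class Downloader
{
    private static readonly HttpClient s_client = new HttpClient();

    // The async modifier makes the compiler generate a state machine for
    // this method; every call allocates a fresh Task<string>, even when
    // the same url is requested over and over.
    public static async Task<string> GetContentsAsync(string url)
    {
        return await s_client.GetStringAsync(url);
    }
}
```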
In the second example, notice that Stephan removed the `async` modifier, and the signature of the method is now `public static Task<string> GetContentsAsync(string url)`. This means the responsibility for creating the `Task` now falls on the implementer of the method rather than on the compiler. By caching the `Task<string>`, the only "penalty" of creating the `Task` (actually two tasks, as `ContinueWith` will also create one) is paid when it's unavailable in the cache, and not on each method call.
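A minimal sketch of that cached variant, assuming a `ConcurrentDictionary<string, Task<string>>` as the cache and `HttpClient` for the download (the slides may well differ in the details):

```csharp
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

public static class CachingDownloader
{
    private static readonly HttpClient s_client = new HttpClient();

    // url -> download task; a cache hit returns the same Task<string>
    // instance, so nothing new is allocated.
    private static readonly ConcurrentDictionary<string, Task<string>> s_cache =
        new ConcurrentDictionary<string, Task<string>>();

    // No async modifier, so no compiler-generated state machine: the
    // implementer, not the compiler, decides when a Task is created.
    public static Task<string> GetContentsAsync(string url)
    {
        Task<string> task;
        if (!s_cache.TryGetValue(url, out task))
        {
            task = s_client.GetStringAsync(url);

            // ContinueWith allocates a second task; both allocations are
            // paid only on a cache miss. Only successful downloads are
            // added to the cache.
            task.ContinueWith(
                t => s_cache.TryAdd(url, t),
                TaskContinuationOptions.OnlyOnRanToCompletion);
        }
        return task;
    }
}
```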
The point in this particular example, IMO, wasn't to re-use a network operation that is already in flight when the first task executes; it was simply to reduce the number of allocated `Task` objects.
> how do we know when to cache tasks?
Think of caching a `Task` as you would caching anything else, and the question becomes broader: when should I cache something? The answer is broad, but I think the most common use case is when you have an expensive operation on the hot path of your application. Should you always cache tasks? Definitely not. The overhead of the state-machine allocation is usually negligible. If needed, profile your app, and then (and only then) consider whether caching would help in your particular use case.
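As a hypothetical illustration of a hot-path case where caching tasks does pay off: a method that returns one of a few fixed values can hand out pre-completed tasks instead of allocating a new one per call (`Task.FromResult` allocates each time it's invoked). The `FeatureFlags` name is made up for the example:

```csharp
using System.Threading.Tasks;

public static class FeatureFlags
{
    // Two pre-completed tasks, allocated once and reused on every call.
    private static readonly Task<bool> s_true = Task.FromResult(true);
    private static readonly Task<bool> s_false = Task.FromResult(false);

    public static Task<bool> IsEnabledAsync(bool enabled)
    {
        return enabled ? s_true : s_false;
    }
}
```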