From the perspective of a non-expert, there are two things that to me make the case for WF — one unique to workflow platforms, the other perhaps more of a convenience thing.
The convenience feature is the ability to create new ways of composing activities. Imperative programming provides only a limited repertoire of composition primitives: basically sequencing, if-else and loops. WF lets you build your own composition operators: interleaved execution, parallel execution, first past the post, and so on. And of course it has the sophisticated composition mechanism of state machines built in.
I say this is a convenience feature because you can build all of these operators in an imperative language like C#: in fact, that’s how the WF operators themselves are built. But WF makes custom compositions easy to write and to read, whereas in C# you’d rapidly go down in a hail of lambda expressions. So if you have complex orchestration requirements — that is, if the way your activities fit together is more complicated than sequences, if-else and loops — then WF may make your program easier to express and to understand.
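To give a flavour of what declarative composition looks like, here’s a minimal sketch, assuming WF4’s System.Activities API (the stock Pick activity is WF’s built-in “first past the post” operator; the triggers and messages are invented for illustration):

```csharp
using System;
using System.Activities;
using System.Activities.Statements;

class PickDemo
{
    static void Main()
    {
        // "First past the post": Pick starts every branch's trigger and
        // continues with whichever branch's trigger completes first;
        // the losing branches are cancelled.
        Activity workflow = new Pick
        {
            Branches =
            {
                new PickBranch
                {
                    Trigger = new Delay { Duration = TimeSpan.FromSeconds(1) },
                    Action  = new WriteLine { Text = "Fast branch won" }
                },
                new PickBranch
                {
                    Trigger = new Delay { Duration = TimeSpan.FromSeconds(5) },
                    Action  = new WriteLine { Text = "Slow branch won" }
                }
            }
        };

        // Run the composed activity synchronously.
        WorkflowInvoker.Invoke(workflow);
    }
}
```

The point isn’t the toy example; it’s that Pick, Parallel and friends are themselves just activities, so your own composition operators slot in alongside the built-in ones.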
The unique feature is durability. This is where the Shukla and Schmidt book starts, and what it keeps coming back to. An imperative program written in C# or VB can run for hours, days, weeks if it’s lucky, maybe even months… but eventually IIS is going to cycle the app pool, or the admins are going to want to install the latest security updates, or somebody is going to trip over the power cord. How then does your program remember, “okay, I’ve got the purchase order from Unimaginative Company Names R Us, and I’m waiting on a credit approval from Bank of Breadheads Inc., and when I get that I can send the confirmation email”?
In a conventional imperative program, when the process dies, the execution state dies with it. You can start a new process, but it will start at the beginning of the program. Sure, you can create a database and use that to store flags like “got purchase order” and “got credit approval.” But now you have to write application-specific code to save and query state, and to jump back to the right point in the program depending on that state. And you have to design a new database and new save/restore/jump logic for every single long-running application.
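To make that concrete, here’s a hypothetical sketch of the hand-rolled approach (every name here is invented for illustration, and the database read is stubbed out):

```csharp
using System;

// Each long-running app ends up with its own state enum, its own
// persistence code, and its own jump-back-to-the-right-point logic.
enum OrderState { AwaitingPurchaseOrder, AwaitingCreditApproval, ReadyToConfirm }

class OrderProcess
{
    static void Main()
    {
        Resume(Guid.NewGuid());
    }

    static void Resume(Guid orderId)
    {
        OrderState state = LoadState(orderId);

        // Jump to the right point in the "program" by hand.
        switch (state)
        {
            case OrderState.AwaitingPurchaseOrder:
                Console.WriteLine("Waiting for purchase order...");
                goto case OrderState.AwaitingCreditApproval;
            case OrderState.AwaitingCreditApproval:
                Console.WriteLine("Waiting for credit approval...");
                goto case OrderState.ReadyToConfirm;
            case OrderState.ReadyToConfirm:
                Console.WriteLine("Sending confirmation email.");
                break;
        }
    }

    // Stub: in a real system this would query the application's database.
    static OrderState LoadState(Guid orderId) => OrderState.AwaitingCreditApproval;
}
```

And that’s the easy version: real code also has to write the flags back at each step, and cope with the process dying halfway through one.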
Durable workflow is about tackling this problem. If you write your program as a workflow of activities, then WF will take care of persisting its state, including where it is in its execution flow. The machine running the program may catch fire and burn down your data centre, but when the response from the bank comes in, WF will wake up your program in your other data centre, and it will start running at the right place and with the right data.
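In WF4 terms, durable execution looks roughly like this (again a sketch, not the book’s WF 3.x API; the connection string and the workflow itself are placeholders):

```csharp
using System;
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.Statements;

class DurableDemo
{
    static void Main()
    {
        Activity definition = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Got purchase order" },
                // Stand-in for "wait days for the bank to respond".
                new Delay { Duration = TimeSpan.FromDays(3) },
                new WriteLine { Text = "Sending confirmation email" }
            }
        };

        var app = new WorkflowApplication(definition)
        {
            // SQL-backed persistence; placeholder connection string.
            InstanceStore = new SqlWorkflowInstanceStore(
                "Server=.;Database=WFInstanceStore;Integrated Security=True"),

            // When the workflow goes idle (in the Delay), persist it and
            // unload it from memory. The host process can now die safely.
            PersistableIdle = e => PersistableIdleAction.Unload
        };

        Guid id = app.Id;   // the key for reloading this instance later
        app.Run();
        Console.ReadLine(); // keep the host alive for the demo

        // Later, in another process (or another data centre):
        //   var resumed = new WorkflowApplication(definition)
        //       { InstanceStore = /* same store */ };
        //   resumed.Load(id);
        //   resumed.Run();
    }
}
```

The key move is that the execution state, including the fact that we’re sitting inside the Delay, lives in the instance store rather than in the process.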
That to me is the “strong case” for WF. In many cases, you don’t need it: applications are sufficiently short-lived that failure isn’t a significant concern, and restarting from scratch is a viable recovery strategy. But for long-lived applications, such as those orchestrating external systems that may take hours to respond, or business processes involving humans who may take days to respond, durability may be the killer feature for WF.
Disclaimer: I am not a WF programmer and have never built a real-world WF system. I’m coming at this from a BizTalk background, and from what I’ve read about WF, so this assessment is kinda theoretical. Hope it helps all the same!