Real World F# Programming Part 2: Types

Ran into a situation last week that showed some more of the differences facing OO programmers moving to F#.

So I’ve got two directories. The program’s job is to take the files from one directory, do some stuff, then put the new file into the destination directory. This is a fairly common pattern.

To kick things off, I find the files. Then I try to figure out which files are in the source directory but not in the destination directory. Those are the ones I need to process. The code goes something like this:

doStuff, the initial version
let doStuff (opts:RipProcessedPagesProgramConfig) =
    let sourceDir = new System.IO.DirectoryInfo(opts.sourceDirectory.parameterValue)
    let filesThatMightNeedProcessing = sourceDir.GetFiles()
    let targetDir = new System.IO.DirectoryInfo(opts.destinationDirectory.parameterValue)
    let filesAlreadyProcessed = targetDir.GetFiles()
    let filesToProcess = filesThatMightNeedProcessing |> Array.filter(fun x->
        (filesAlreadyProcessed |> Array.exists(fun y->x.Name=y.Name)
        )
    )
    // DO THE "REAL WORK" HERE
    printfn "%i files to process" filesToProcess.Length
    ()

So I plopped this code into a couple of apps I’ll code later, then I went to work on something else for a while. Since it’s all live — but not necessarily visible to anybody — a few days later I took a look to see if the app thought it had any files to process.

It did not.

Now, of course, I can see that my Array.filter is actually backwards. I want to take the filesThatMightNeedProcessing and eliminate the filesAlreadyProcessed. What’s remaining are the filesToProcess. Instead, I’m keeping the files from the first set that also exist in the second. Since nothing has been processed yet, none of them do, so the program never thinks there is anything to do. Instead of Array.exists, I really need something like Array.doesNotExist.

So is this a bug?

I’m not trying to be cute here, but I think that’s a matter of opinion. It’s like writing SQL. I described a transform. The computer ran it correctly. Did I describe the correct transform? Nope. But the code itself is acting correctly. I simply don’t know how many files might need processing. There is no way to add a test in here. Tests, in this case, would exist at the Operating System/DevOps level. So let’s put off testing for a bit, because it shouldn’t happen here. If your description of a transform is incorrect, it’s just incorrect.

So I need to take one array and “subtract” out another array — take all the items in the first array and remove those items that exist in the second array. Is there something called Array.doesNotExist?

No there is not.

Meh.

OK. What kind of array do I have? IntelliSense tells me it’s a System.IO.FileInfo[].

My first thought: this cannot be something that nobody else has seen. I’m not the first person doing this. These are just basic set operations. So I start googling. After a while, I come across this beautiful class called, oddly enough, “Set”. It’s in Microsoft.FSharp.Collections. Damn, it’s a sweet-looking class. It’s got superset, subset, contains, difference (which I want). It’s got everything.

So, being the “hack it until it works” kind of guy that I am, I look at what I have: an array of these FileInfo things. I look at what I want: a set. Can’t I just pipe one to the other? Something like this?

[Screenshot from 2014-09-15: the attempted conversion and the compiler complaining about System.IComparable]
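Roughly what I typed, reconstructed from memory (so treat the exact error text as approximate):

let alreadyProcessedSet = filesAlreadyProcessed |> Set.ofArray
// error FS0001: The type 'System.IO.FileInfo' does not support the 'comparison' constraint.
// For example, it does not support the 'System.IComparable' interface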

What the hell? What’s this thing about System.IComparable?

In order for the Set module to work correctly, it needs to be able to compare items inside your set. How can it tell if one thing equals another? All it has is a big bag of whatever you threw in there. Could be UI elements, like buttons. How would you sort buttons? By color? By size? There’s no right way. Integers, sure. Strings? Easy. But complex objects, like FileInfo?

Not so much.
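If I’d been holding plain strings instead of FileInfo objects, Set would have worked right out of the gate. A throwaway sketch (not code from the app):

let sourceNames = set [ "a.html"; "b.html"; "c.html" ]
let processedNames = set [ "b.html" ]
let needProcessing = Set.difference sourceNames processedNames
// set ["a.html"; "c.html"]

Strings are comparable out of the box, which is exactly what FileInfo isn’t.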

As it turns out, this is a common pattern. In the OO world, we start with creating a type, say a Plain Old Java Object, or POJO. It’s got a constructor, 2 or 3 private members, some getters and setters, and maybe a few methods. Life is good.

But then we want to do things. Bad things. Things it was never meant to do. Things involving other libraries. Things like serialize our object, compare it to others, add two objects together. It’s not enough that we have a new type. We need to start fleshing out that type by supporting all sorts of standard methods (interfaces). If we support the right interface, our object will magically work with the libraries other people write to do the things we want.

Welcome to life in the world of I-want-to-make-a-new-type. Remember that class you had with three fields? Say you want to serialize it? You add in the interface ISerializable. Now you have a couple more methods to fill out. Have some resources that must be cleaned up? Gotta add in IDisposable. Now you have another method to complete. Handling a list of something somebody else might want to walk? Plop in IEnumerable. Now you have even more methods to complete.

This is life in OO-land and frankly, I like it. There’s nothing as enjoyable as creating a new type and then fleshing it all out with the things needed to make it part of the ecosystem. Copy constructors, operator overloads, implicit conversion constructors. I can, and have, spent all day or a couple of days creating a fully-formed, beautiful new type for the world, as good as any of the CLR types. Rock solid stuff.

But.

Funny thing, I’m not actually solving anybody’s problem while I’m doing this. I’m just fulfilling my own personal need to create order in the world. Might be nice for a hobby, but not so much when I’m supposed to stay focused on value.

There’s also the issue of dependencies which is the basis for much of the pain and suffering in OO world. Now that my simple POJO has a dozen interfaces and 35 methods, what the hell is going on with the class when I create method Foo and start calling it? Now I’ve got all these new internal fields like isDirty or versionNum that are connected to everything else.

You make complex objects, you gotta do TDD. Otherwise, you’re just playing with fire. Try putting a dozen or so of these things together. It works this time? Yay! Will it work next time? Who knows?

This is the bad part of OO — complex, hidden interdependencies that cause the code to be quite readable but the state of the system completely unknown to a maintenance programmer. (Ever go down 12 levels in an object graph while debugging to figure out what state something is in? Fun times.)

So my OO training, my instinct, and libraries themselves, they all want me to create my own type and start globbing stuff on there. This is simply the way things are done.

DO NOT DO THIS.

Instead, FP asks us a couple of questions: First, do I really need to change my data structures? Because that’s going to be painful.

No. Files are put into directories based on filename. You can’t have two files in the same directory with the same name. So I already have the data I need to sort things out. Just can’t figure out how to get to it.

Second: What is the simplest function I can write to get what I need?

Beats me, FP. Why do you keep asking questions? Look, I need to take what I have and only get part of the list out.

I spent a good hour thrashing here. You get used to this. It’s a quiet time. A time of introspection. I stared out the window at a dog licking its butt. I wanted to go online and find somebody who was wrong and get into a flame war, but I resisted. At some point I may have started drooling.

In OO you’re always figuring out where things go and wiring stuff up. Damn you’re a busy little beaver! Stuff has to go places! Once you do all the structuring and wiring? The code itself is usually pretty simple.

In FP you laser directly in on the hard part: the code needed to fix the problem. Aside from making sure you have the data you need, the hell with structure. That’s for refactoring. But this means that all the parts are there at one time. Let me repeat that. THE ENTIRE PROBLEM IS THERE AT ONE TIME. This is a different kind of feeling for an OO guy used to everything being in its place. You have to think in terms of data structure and function structure at the same time. For the first few months, I told folks I felt like I was carrying a linker around in my head. (I still do at times)

Eventually I was reduced to muttering to myself “Need to break up the set. Need to break up the set.”

So I do what I always do when I’m sitting there with a dumb look on my face and Google has failed me: I start bringing up library classes, hitting the “dot” key, and letting the IDE show me what each class can do.

I am not proud of my skills. But they suffice.

Hey look, the Array module also has Array.partition, which splits up an array. Isn’t that what I want? I need to split an array into two parts: the part I want and the part I do not want. I could have two loops. On the outside loop, I’ll spin through all the files in the input directory. In the inside loop, I’ll see if there’s already a file with the same name in the output directory. The Array.partition function will split my array into two pieces. I only care about those that exist in the input but not the output. Something like this:

Hey! It works!
let doStuff (opts:RipProcessedPagesProgramConfig) =
    let sourceDir = new System.IO.DirectoryInfo(opts.sourceDirectory.parameterValue)
    let filesThatMightNeedProcessing = sourceDir.GetFiles()
    let targetDir = new System.IO.DirectoryInfo(opts.destinationDirectory.parameterValue)
    let filesAlreadyProcessed = targetDir.GetFiles()
    let filesToProcessSplit = filesThatMightNeedProcessing |> Array.partition(fun x->
        (filesAlreadyProcessed |> Array.exists(fun y->y.Name=x.Name))
        )
    let filesToProcess = snd filesToProcessSplit

    // DO THE "REAL WORK" HERE
    printfn "%i files to process" filesToProcess.Length
    ()

Well I’ll be danged. Freaking A. That’s what I needed all along. I didn’t need a new class and a big honking type system hooked into it. I just needed to describe what I wanted using the stuff I already had available. My instinct to set up structures and start wiring stuff would have led me to OO/FP interop hell. Let’s not go there.

So if I’m not chasing things down to nail them in exactly one spot, how much should I “clean up”, anyway?

First, there’s Don’t Repeat Yourself, or DRY. Everything you write should be functionally-decomposed. There’s no free ride here. The real question is not whether to code it correctly, it’s how much to genericize it. All those good programming skills? They don’t go anywhere. In fact, your coding skills are going to get a great workout with FP.

I have three levels of re-use.

First, I’ll factor something out into a local structure/function in the main file I’m working with. I’ll use it there for some time — at least until I’m happy it can handle different callers under different conditions. (Remember it’s pure FP. It’s just describing a transform. Complexity is bare bones here. If you’re factoring out 50-line functions, you’re probably doing something wrong.)

Second, once I’m happy I might use it elsewhere, and it needs more maturing, I’ll move it up to my shared “Utils” module, which lives across all my projects. Then it gets pounded on a lot more, usually telling me things like I should name my parameters better, or handle weird OS error conditions in a reasonable way callers would expect. (You get a very nuanced view of errors as an FP programmer. It’s not black and white.)

Finally, I’ll attach it to a type somewhere. Would that be some kind of special FileInfo subtype that I created to do set operations?

Hell no.

As I mature the function, it becomes generic, so I end up with something that subtracts one kind of thing from another. In fact, let’s do that now, at least locally. That’s an easy refactor. I just need a source array, an array to subtract, and a function that can tell me which items match.

subtractArrays. Good enough for now.
let subtractArrays sourceArray arrayToSubtract f =
    let itemSplit = sourceArray |> Array.partition(fun x->
        (arrayToSubtract |> Array.exists(fun y->(f x y)))
        )
    snd itemSplit

let doStuff (opts:RipFullPagesProgramConfig) =
    let sourceDir = new System.IO.DirectoryInfo(opts.sourceDirectory.parameterValue)
    let filesThatMightNeedProcessing = sourceDir.GetFiles()
    printfn "%i files that might need processing" filesThatMightNeedProcessing.Length
    let targetDir = new System.IO.DirectoryInfo(opts.destinationDirectory.parameterValue)
    let filesAlreadyProcessed = targetDir.GetFiles()
    printfn "%i files already processed" filesAlreadyProcessed.Length
    let filesToProcess = subtractArrays filesThatMightNeedProcessing filesAlreadyProcessed (fun x y->x.Name=y.Name)
    printfn "%i files to process" filesToProcess.Length
    ()


Note the lack of types. Do I care what kind of array either the source or the one to subtract is? No. I do not. All I care is if I can distinguish the items in them. Hell, for all I care one array can be that System.IO.FileInfo thing, the other array can be filenames. What does it matter to the problem I’m solving?
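To make that concrete, here’s a quick hypothetical (the names are made up, not from the app): one array of FileInfo objects, one plain array of file names, and the same subtractArrays handles both:

let files = (new System.IO.DirectoryInfo("/some/source/dir")).GetFiles()
let alreadyDone = [| "index.html"; "about.html" |]
let remaining = subtractArrays files alreadyDone (fun (f:System.IO.FileInfo) name -> f.Name = name)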

What’s that sound? It’s the sound of some other FP guy busy at his computer, sending me a comment about how you could actually do what I wanted in 1 line of code. That’s fine. That’s the way these things work — and it’s why you don’t roll things up into types right away. Give it time. The important thing was that I stayed pure FP — no new data, no mutable fields, no for/next loops. I didn’t even use closures. As long as I stay clean, the code will continue to “collapse down” as it matures. Fun stuff. A different kind of fun than OO.
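For the record, that one-liner presumably looks something like this (just negate the exists check I originally got backwards):

let filesToProcess =
    filesThatMightNeedProcessing
    |> Array.filter(fun x-> not (filesAlreadyProcessed |> Array.exists(fun y->y.Name=x.Name)))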

So where would this code end up, assuming it lives to become something useful and re-usable? In the array type, of course. Over time, functions migrate up into CLR types. If I want a random item from an array? I just ask it for one. Here’s the code for that.

Make Arrays give you random items
type 'a ``[]`` with
    member x.randomItem =
        let rnd = new System.Random()
        let idx = rnd.Next(x.Length)
        x.[idx]


Let me tell you, that was a painful function to work through! Happy I don’t have to ever worry about it again. Likewise, if I need to know how many times one string is inside another? I’ve got a string method for that. Basically, anything I need to use a lot, I’ve automated.
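Using the array extension is then just [| "heads"; "tails" |].randomItem. The string-counting one isn’t worth its own post, but the shape of it is something like this (a sketch with a made-up name, not the actual method):

type System.String with
    member x.CountOccurrences (target:string) =
        if System.String.IsNullOrEmpty(target) then 0
        else (x.Length - x.Replace(target, "").Length) / target.Length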

Over time, this gives me 40-50 symbols to manipulate in my head to solve any kind of problem. So while the coding part makes my brain hurt more with FP, maintenance and understanding of existing code is actually much, much easier. And with pure FP, everything I need is right there coming into the function. No dependency hell when I debug. It’s all right there in the IDE. Not that I debug using the IDE that much.

So does that mean I never create new types? Not at all! But that’s a story for another day…

 

 

September 15, 2014

Real World F# Programming Part 1: Structuring Your Solution

I have several “side project” apps I work on throughout the year. One of those is Newspaper23.com. I have a problem with spending too much time online. Newspaper23 is supposed to go to all the sites I might visit, pull down the headlines and a synopsis of the article text. No ads, no votes, no comments, no email sign-ups. Just a quick overview of what’s happening. If I want more I can click through. (Although I might remove external links at a later date)

Right now newspaper23.com pulls down the headlines from the sites I visit and attempts to get text from the articles. It’s not always successful — it probably gets enough text for my purposes about 70% of the time. There’s some complicated stuff going on. And it doesn’t get the full text or collapse it into a synopsis yet. That’s coming down the road. But it’s a start. It goes through about 1600 articles a day from about a hundred sites. People are always saying F# isn’t a language for production, and that you have to have some kind of framework/toolset to do anything useful, but that’s not true. I thought I’d show you how you can do really useful things with a very small toolset.

I’m running F# on Mono. The front-end is HTML5 and jQuery. There is no back-end. Or rather, the back end is text files. Right now it’s mostly static and supports less than a thousand users, but I plan on making it interactive and being able to scale up past 100K users. Although I have a brutally-minimal development environment, I don’t see any need to change the stack to scale up massively. Note that this app is a part-time hobby where I code for a few days just a couple of times a year. I don’t see my overall involvement changing as the system scales either. Server cost is about 40 bucks a month.

I come from an OO background so all of this is directed at all you JAVA/.Net types. You know who you are :)

 

Code Snippet
module Types
    open HtmlAgilityPack
    open System.Text.RegularExpressions
    type 'a ``[]`` with
        member x.randomItem =
            let rnd = new System.Random()
            let idx = rnd.Next(x.Length)
            x.[idx]
    type System.String with
        member x.ContainsAny (possibleMatches:string[]) =
            let ret = possibleMatches |> Array.tryFind(fun y->
                x.Contains(y)
                )
            ret.IsSome
        member x.ContainsAnyRegex(possibleRegexMatches:string[]) =
            let ret = possibleRegexMatches |> Array.tryFind(fun y->
                let rg = new System.Text.RegularExpressions.Regex(y)
                rg.IsMatch(x)
                )
            ret.IsSome

(Yes, there will be code in this series)
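To give a feel for how those string extensions get used, here’s a made-up example (not a line from the real app):

let url = "http://example.com/sports/article-123"
let skipPatterns = [| "/video/"; "/gallery/"; "/sponsored/" |]
if url.ContainsAny(skipPatterns) then printfn "skipping %s" url
if url.ContainsAnyRegex([| @"article-\d+" |]) then printfn "%s looks like an article page" url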

The game here is small, composable executables. FP and small functions mean less code. Less code means fewer bugs. Deploy that smaller amount of code in smaller chunks and that means less maintenance. Little stand-alone things are inherently scalable. Nobody wonders whether they can deploy the “directory function” on multiple servers. Or the FTP program. Big, intertwined things are not. Start playing around sometime with sharding databases and load balancers. Ask some folks at the local BigCorp if they can actually deploy their enterprise software on a new set of servers easily.

I’ve found that you’ll write 4 or 5 good functions and you’re done with a stand-alone executable. In OO world you’ll spend forever wiring things up and testing the crap out of stuff to write the same code spread out over 8 classes. Then, because you’ve already created those classes, you’ll have a natural starting point for whatever new functionality you want. Which is exactly the opposite direction FP takes you in. In OO, the more you try to do, the more structure you have, the more structure, the more places to put new code, the more new code, the more brittle your solution. (And I’m not talking buggy, I’m talking brittle. This is not a testing or architecture issue. Composable things have composable architectures. Static graphs do not. [Yes, you can get there in OO, but many have ventured down this path. Few have arrived.])

The O/S or shell joins it all together. That’s right, you’re writing batch files. Just like a monad handles what happens between the lines of code in an imperative program, the shell is going to handle what happens between the composable programs in your project. A program runs. It creates output. The shell runs it at the appropriate time, the shell moves that output to where it can be processed as input by the next program, the shell monitors that the executable ran correctly. There is no difference between programming at the O/S level and at the application level. You work equally and in the same fashion in both.

Ever delete your production directory? You will. Having it all scripted makes this a non-event. You make this mistake once. Then you never make it again. DevOps isn’t some fashionable add-on; it’s just the natural way to create solutions. Automate early, and continue automating rigorously as you go along. DRY applies at all levels up and down the stack.

Each program has its own script file. There are config files, and cron hooks it all up. This means CPU load, data size, and problem difficulty are all separate issues. Some of you might be thinking, isn’t this just the bad old batch days? Are we killing transactional databases? No. In a way, the “bad old days” never left us. We just had little atomic batches of 1 that took an indeterminate amount of time because they had to traverse various object graphs across various systems. We can still have batches of 1 that run immediately — or batches of 10 that run every minute — or batches of n that run every 5 minutes. It’s just that we’re in control. We’re simply fine-tuning things.

Note that I’m not saying don’t have a framework, database, or transactions. I’m saying be sure that you need one before you add one in. Too often the overhead from our tools outweighs the limited value we get, especially in apps, startups, and hobby projects. If you’re going to have a coder or network guy tweaking this system 3 years from now, it’s a different kind of math than if you’re writing an app on spec to put in the app store. Start simple. Add only the minimum amount you need, and only when you need it.

One of the things you’re going to need that’s non-negotiable is a progressive logging system that’s tweaked from the command line. Basically just a printfn. I ended up creating something with about 20 LOC that works fine. Batch mode? All you need to see is a start/stop/error summary. Command line debug? You might need to see everything the program is doing. Remember the goal: as much as possible, you should never touch the code or recompile. Every time you open the code up is a failure. If you can make major changes to the way the system works from the shell, you’re doing it right. If you can’t, you’re not.
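Mine isn’t worth showing verbatim, but the shape of that 20-line logger is roughly this (a sketch; the names and levels are made up):

type LogLevel = Quiet = 0 | Summary = 1 | Verbose = 2

// configuredLevel typically comes straight off a command-line flag
let log (configuredLevel:LogLevel) (messageLevel:LogLevel) (msg:string) =
    if int messageLevel <= int configuredLevel then
        printfn "%s %s" (System.DateTime.Now.ToString("s")) msg

// partially apply it once the command line is parsed, then use it everywhere:
// let logIt = log LogLevel.Verbose
// logIt LogLevel.Summary "program start"

Parse the flag once at startup, partially apply the configured level, and every code path gets a one-liner it can call.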

Many of you might think that you’re turning your code into a mess of printf statements like the bad old days. But if that’s what you’re doing, you’re doing it wrong. Each program shouldn’t have that many flags or things to kick out. Remember: you’re only doing a very small thing. For most apps, I’d shoot for less than 300 LOC, and no more than 10-20 code paths. All code paths can be logged off or on from the command line. To make this workable in a command-line format without creating a “War and Peace” of option flags means cyclomatic complexity must be fairly low. Which also keeps bugs down. Natch.

Of course Windows and various OO programs and frameworks have logging features too, but they run counter to the way things are done here. Usually you have to plug into a logging subsystem, throw things out, then go to some other tool, read the messages, do some analysis, and so on. In linux with command-line stuff, the logging, the analysis, and changing the code to stop the problem all happen from the same place, the command line. There’s no context-switching. Remember, most errors should be just configuration issues, not coding issues.

One of the ways I judge whether I’m on-track or not is the degree of futzing around I have to do when something goes wrong. Can I, from a cold start (after weeks of non-programming), take a look at the problem, figure out what’s happening, and fix it — all within 10 minutes or so? I should be able to. And, with only a few exceptions, that’s the way it has been working.

The way I do this is that I have the system monitor itself and report back to me whether each program is running and producing the expected output. Every 5 minutes it creates a web page (which is just another output file) that tells me how things are going. In the FP world, each program is run several different ways with different inputs. Common error conditions have been hammered out in development. Most times it runs fine in all but one or two cases, so before I even start I can identify the small piece of code and the type of data causing problems. The biggest part of debugging is done by simply looking at a web page while I eat my corn flakes.
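The page generator itself is nothing fancy. A toy version looks something like this (a sketch, not the real code):

let writeStatusPage (outputPath:string) (checks:(string * bool) list) =
    let rows =
        checks
        |> List.map (fun (name, ok) ->
            sprintf "<tr><td>%s</td><td>%s</td></tr>" name (if ok then "OK" else "FAILED"))
        |> String.concat "\n"
    System.IO.File.WriteAllText(outputPath, sprintf "<html><body><table>%s</table></body></html>" rows)

Feed it a list of (program name, produced-the-expected-output) pairs every five minutes and breakfast-time debugging mostly takes care of itself.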

System tests run all the time, against everything. It’s not just test-first development. It’s test-all-the-time deployment. Fixing a bug could easily involve writing a test in F#, writing another test at the shell level, and writing a monitor that shows me if there’s ever a problem again. Whatever you do, don’t trade off monolithic OO apps for monolithic shell apps. FP skills and thinking are really critical up and down the toolchain here.

It’s interesting that there’s no distinction between app and shell programming. This is good for both worlds. Once we started creating silos for application deployment, we started losing our way.

Instead of rock-solid apps, failure is expected, and I’m not just talking about defensive programming. Pattern-matching should make you think about all paths through a program. Failure at the app level should fit seamlessly into the DevOps ecosystem, with logging, fallbacks, reporting, resilience. Can’t open a file? Can’t write to a file? Who cares? Life goes on. There are a thousand other files. Log it and keep rolling. It could be a transient error. 99% of the time we don’t work with all-or-nothing situations.

As much as possible, you should check for file system and other common errors before you even start the work. My pattern is to load the command line parameters, load the config file (if there is one), then check to make sure I have all the external pieces I need — the actual files exist, the directories are there, and so on. This is stuff I can check early, before the “real” code, so I do it. That way I’m not in the middle of some really complex function only to realize I forgot to check whether I even have an input file. By the time I get to the work, I know that all of my tools are in place and ready. This allows me to structure solutions much better.
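In practice that pre-flight check is just a small function at the top of the program, something along these lines (a sketch; the real one checks whatever that particular program cares about):

// returns a list of error messages; an empty list means we're clear to do the real work
let preFlightChecks (sourceDir:string) (destDir:string) =
    [ System.IO.Directory.Exists(sourceDir), sprintf "source directory missing: %s" sourceDir
      System.IO.Directory.Exists(destDir), sprintf "destination directory missing: %s" destDir ]
    |> List.filter (fun (ok, _) -> not ok)
    |> List.map snd

If the list comes back empty, do the real work; otherwise log the messages and exit.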

[Screenshot: 2014-09 metrics page]

 

For instance I woke up this morning and looked at the stats. Looks like pulling things from medium.com isn’t working — and hasn’t been working for several days. That’s fine. It’s an issue with the WebClient request headers. So what? I’ll fix it when I feel like it. Compare this to waking up this morning with a monolithic server app and realizing the entire thing doesn’t run because of intricate (and cognitively hidden) dependencies between functions that cause the failure of WebClient in one part to prevent the writing of new data for the entire app in another part. Just getting started with the debugging from a cold start could take hours.

Note that a lot of DevOps sounds overwhelming. The temptation is to stop and plan everything out. Wrong answer. It’s an elephant. Planning is great, but it’s critical that you eat a little bit of the elephant at a time. Never go big-bang, especially in this pattern, because you really don’t know the shape of the outcome at the DevOps level. [Although there may be some corporate patterns to follow. Insert long discussion here]

Next time I’ll talk about how the actual programming works: how code evolves over time, how to keep things simple, and how not to make dumb OO mistakes in your FP code.

September 8, 2014

My Agile 2014 Book Report

[Image: Agile 2014, Weasels: For or Against]

I don’t do conferences. The last Agile conference I was at was five years ago, in 2009. So although I’ve been engaged in the work, I haven’t spent much time with large groups of Agile practitioners in quite a while. I thought it might be useful to folks if I wrote down my observations about the changes in five years.

The Good

  • We’re starting to engage with BigCorp environments in a more meaningful way. There’s still a lot of anger and over-promising going on, but the community is grudgingly accepting the fact that most Agile projects exist inside a larger corporate structure. If we want to have a trusting, healthy work environment, we’re going to need to be good partners.

  • Had one person come up to me and say something like “You know, you’re not the asshole I thought you were from reading you online.” It would do well for all of us to remember that for the most part, folks in the community are there to help others. It’s easy to be misunderstood online. It’s difficult to always assume kindness. Being snarky is just too much fun sometimes, and people don’t like having their baby called ugly. In fact, it’s probably impossible to fully engage with people online the way we do in person. We should know this! :)

  • I’m continuing to see creative things emerge from the community. This is the coolest part about the Agile community: because we don’t have it all figured out, there is a huge degree of experimentation going on. Good stuff.

The Bad

  • In many ways, Agile has lost its way. What began as a response by developers to the environments they found themselves in became a victim of its own success. It’s no longer developers finding new ways of developing software. It’s becoming Agile Everything. I don’t have a problem with that — after all, my 2009 session was on Agile in Non-Standard Teams — but there’s going to be a lot of growing pains.

  • The dirty secret is that in most cases (except for perhaps the biz track?) the rooms fill up with folks who already agree with the speaker. But speakers spend time justifying their position anyway. For such a large group, there was quite a bit of clannishness. Sessions were already full of cheerleaders. It might be good to clearly understand whether we’re presenting something to the community for their consideration — or presenting something they already love and showing how to get others to like it. These are incompatible goals for the same session.

  • Maybe it was just me, but for such a relaxed group of facilitators, there was quite a bit of tension just under the surface. For a lot of folks, the conference meant a big chance to do something: to get the next gig, to meet X and become friends, to hire for the next year, to start a conversation with a key lead. It was all fun and games, but every so often the veil would slip a bit and you’d see the stress involved. I wish all of those folks much luck.

The Culture

  • Dynamic Open Jam areas were awesome. Even though nobody cared about my proposed session on Weasels, I thoroughly enjoyed them.

  • I saw something very interesting in Open Jam on Wednesday. We were all doing presentation karaoke. A big crowd had formed to watch and participate; perhaps 40 folks. But our time was up. So the leader of the freeform session said “Our time is up, we should respect the next person, who is here to talk about X.” The guy gets up and somebody from the crowd says “Hey! Why don’t we just combine the two things?” So we spent another five minutes doing both presentation karaoke and talking about the new topic. That way, we maximized the number of people that stayed involved, while at the same time switching speakers. It was a nice example of both being respectful and adapting to changing conditions.

  • The party on the last night was most enjoyable. I think this was the most relaxed state that I saw folks in. Not sure if the alcohol had anything to do with that :) Lots of great conversations going on.

  • Where did all the developers go? Maybe it was just me, but it seemed like there was a lot more “meta” stuff presented. It didn’t seem like there was as much technical stuff.



Budgeting? Strategic alignment? Huh? Who let the managers into this place?

Good and Bad

  • People really hate SAFe (the Scaled Agile Framework, a detailed guide supposedly describing how to run teams of teams in an Agile manner) — to the point that some speakers had a shtick of openly mocking it. I’m process agnostic — I don’t hate anything and all I want is to help folks. SAFe, like anything else, has good and bad parts. Keep the good parts, ditch the bad parts. But for some, SAFe seems like a step backwards.


    What concerns me about watching both sides of this is the emotional investment both groups have in already knowing how things are going to turn out without the necessarily huge sample size it would take to determine this for the industry as a whole. One group might think “Why of course Agile is going to have to evolve into more traditional product management. How else would it work?” The other might think “Why of course we would never put so much structure into what we do. That’s what prompted us to become Agile in the first place.”


    Look, I don’t know. Give me 1,000 examples of SAFe actually being deployed — not some arcane discussion about what the textbook says but how it actually works in the real world — and I can start drawing some conclusions. Until then? This is just a lot of ugliness that I’m not sure serves a greater purpose. Sad.


  • UX, or figuring out what to build, is making waves. Some folks love it, some folks think we’re back to imposing waterfall on the process. I tend to think a) because it takes the team closer to value creation it’s probably the most important thing the community has going right now, and b) it’s just not baked enough yet. At least not for the rest of us. (I don’t mean that practitioners don’t know what they are doing. My point is that it is not formed in such a way that the Agile community can easily digest it.) That’s fine with me, but not so much with others. I’m really excited about seeing more growth in this area.

Summary

We are realizing that any kind of role definition in an organization can be a huge source of impediment for that organization growing and adapting. You’re better off training engineers to do other things than you are bringing in folks who do other things and expecting them to work with engineers. So much of everything can be automated, and whatever your role is, you should be automating it away.

Having said that, I don’t think anybody really knows what to do with this information. We already have a huge workforce with predefined roles. What to do with them? Nobody wants to say it directly, but there it is: we have the wrong workforce for the types of problems we need to be solving.

Finally, it’s very difficult to be excited about new things you’re trying and at the same time be a pragmatist about using only what works. It’s possible, but it’s tough. If Agile is only love and goodness, then you’re probably doing it the wrong way. Agile is useful because the shared values lead us into exploring areas we are emotionally uncomfortable with. Not because it’s a new religion or philosophy to beat other people over the head with. It should be telling you to try things you don’t like. If not, you’re not doing it the right way. Enough philosophy (grin).

August 5, 2014  Leave a comment

Agile Memes, Part 1

Over the last couple of weeks, I’ve had some fun combining Agile concepts, humor, and memes. I thought these might be something you could find useful to share with your team.

  • Burn down in a straight line
  • Didn’t deliver all the stories we promised; we’ll have to ferret that out
  • Wanted more agility, got more tools
  • What if I told you that Agile practices…
  • Your kanban has got kanban in it
  • You just stood in the stand-up and told us how you’re doing; Y U NO update your task card?
  • You can’t just change the acceptance criteria the day before the demo

May 26, 2014

Words Don’t Actually Mean Anything

(The following is an excerpt from the upcoming “Backlogs 2” e-book)


Aristotle would have made a terrible programmer because words don’t actually mean anything. Philosophers in general are a big pain in the ass. They’re also responsible for civilization and modern life.

That’s the conclusion you’ve reached after spending all day Saturday at the library. After the pounding your brain has taken, you are looking forward to some down time.

Aristotle lived over two thousand years ago. Back in the day, philosophers like Aristotle were a sort of cross between motivational speaker, law professor and librarian. They could read and write; and they were where you sent your kids when you wanted them to learn how to live a good life, what good and bad were, and how to succeed.

The Sophists were big back then. Sophists believed that justice and meaning were relative and that it was the art of social communication and persuasion that was important. They taught their students, sons of rich and affluent parents, how to argue both sides of an argument, manipulate the emotions of listeners, and handle themselves with an air of dignity. Basically, they taught all the skills you’d need to become and stay powerful in ancient Greece.

Sophists started off very popular, but by Aristotle’s time most people didn’t like them. First, they charged exorbitant amounts of money. Second, they championed the wishy-washy nature of things instead of actually taking a firm stand on anything. “Man is the measure of all things,” said a famous sophist. Folks wondered if sophists really stood for anything at all except for how to look good and become powerful.

Aristotle didn’t like them much either. He created his own school of philosophy, didn’t charge for it, and along the way invented science.

He thought that things had to have meaning. There was some external universal reality that we could learn, categorize, and reason about. It wasn’t all relative. You could use your senses to gain intuitive knowledge of the world around you. Knowledge could allow you to deduce how things work everywhere. There is a real world, with real cause and effect. Learning about universal truths gave us knowledge we could apply everywhere.

Things that we observe have attributes and actions. It was the reasoning about these attributes and actions over a few examples that gave rise to understanding for all the others of the same type. We can share this understanding by creating a body of knowledge that describes the universe. There was a universal idea of, say a cow. Aristotle called this universal idea, the truest version of something possible, a “universal form”. Once we understood things about the universal form of a cow, we could then apply that knowledge to all actual cows we might meet.

Attributes helped us create a master categorization system. You divided the world into categories depending on what attributes things had. This is an animal. This is a plant. Animals have these attributes. Plants have these. This kind of plant has bark. This kind of plant has none. This kind of plant has five leaves. This kind of plant has three. By simply listing the things each plant owns, its attributes, we could start coming up with categories about what kinds of plants they were. Same goes for animals, and everything else, for that matter.

Same goes for actions. This rock falls from a great height. This dandelion seed floats away. Things we observe DO things. They have actions they perform, sometimes on their own, sometimes in response to stimulus. They behave in certain ways. Just like with attributes, by listing the types of actions various things could do, we continue developing our categorization system.

Eventually, once we’ve described all the attributes and actions of something, we determine exactly what it is, its universal form. Then we can use logic and deduction to figure out why it works the way it does. Once we know how the universal form works, we know how all the examples of universal forms we see in life work. If I know how one rock of coal acts when it is set on fire, I know how all rocks of coal will act. This is common sense to us, but Aristotle was the first to come up with it.

Aristotle’s lesson: there’s a master definition of things, a set of universal forms. We discover the meaning of the world by discovering the attributes and actions of things to figure out exactly what they are. If we understand what something is, and we understand how the universal form of that thing behaves, we understand the thing itself. Once we have exact definitions, we can reason about universal forms instead of having to talk about each item separately.

Categorizing by attributes and actions, understanding by deductive logic, having exact definitions, working with abstract universal forms — those ideas grew into everything we call science. Thousands of scientific disciplines came from Aristotle. Quite a legacy for some old Greek guy.

Categorizing the attributes and actions of things was fairly straightforward and could be used no matter what you were talking about. Creating master categorization systems and dictionaries wasn’t so hard. Deductive logic, on the other hand, got more and more complicated the more we picked at it.

Systems of reasoning greatly increased the growth of math. Throughout the centuries, smart people have longed for an answer to the question: if I know certain things, what other things can I use logic to figure out? They created different rigid and formal ways of reasoning about things that depended on universal forms. Each system had pros and cons. There was set theory, formal logic, symbolic logic, predicate calculus, non-Euclidean geometry, and a lot more.

Philosophers devised hundreds of systems of reasoning about all sorts of things. They kept creating whole new branches of science as they went along. Some people were interested in how surfaces relate to each other. Some people were fascinated by the relationship between math and physical reality. Some people wanted to know more about what right and wrong were. Some people wanted to find out about diseases. Some people wanted to know about the relationship between truth and beauty. Each of these used the same core Aristotelian principles of categorization of universal forms using exact definitions, then reasoning about those forms.

Highly abstract, formal logic, in particular, invented by another philosopher guy named Bertrand Russell around 1900, led to something called Von Neumann machines. Von Neumann machines led to modern computers.

That’s right, aside from creating science in general, Aristotle’s ideas about deductive logic using universal forms led to an emphasis on logic, then formal logic, then the creation of computers — machines that operate based on rigid universal rules about what is true and false and how to act based on what is true or false.

This science thing was turning out to be a big hit. Meanwhile you never hear much about the sophists. “Sophist” is commonly used to describe somebody who has no sense of right and wrong and uses lots of weasel words.

But everything wasn’t skittles and beer.

First we had a problem with this thing that Aristotle did when he set things up, where he stepped outside of himself and reasoned about things at a universal level. There was a land above science, a meta-science, and philosophers were the folks who operated outside of science asking useful questions. Because Aristotle asked questions about important things, universal truths, we consider him one of the first great philosophers.

The problem was that using reason and logic to work at this higher, universal level caused as much confusion as positive change in the world.

Not to put too fine of a point on things, philosophy through the centuries has been full of really smart people with one or two good ideas that spend their entire lives making those ideas more and more complex and unwieldy. In a simple world, philosopher X comes up with a good idea about why the sky is blue. In reality, philosopher X comes up with a good idea about why the sky is blue, then spends 40 years and writes 200 papers (including 12 books) on the nature of the sky, what kind of blue is important, and how the blue in the sky is actually the key to fish migration in Eastern Tibet and 50 other odd and vaguely understood concepts which he feels you have to know because they are the key to truly understanding his work.

These were not simple-minded or crazy people. They were the smartest of their day. They were simply taking Aristotle’s idea that there are universal truths that we can discover using reason and logic and trying to take it to the next level.

So instead of simple ideas that spawned new sciences, it was more the case that philosophers came up with extremely complex and delicate theories about things that couldn’t be measured or talked about — except by other philosophers. The useful philosopher who spawned a new science was an oddball, and even in that case, most of the time he created as much confusion among later scientists and philosophers as he shined light on anything.

There was confusion over what different terms mean, over which parts of which philosophies apply to which things, if one philosopher actually agrees or disagrees with another one, and even what the philosopher meant when he was writing all that stuff. Frankly this gave most people the impression that all of philosophy was bullshit. That was a shame, because there was some really good stuff in there too. Every now and then it changed the world.

The confusion in terms, meaning, scope, and method of reasoning got philosophers asking questions about philosophy itself and how we actually knew stuff. Then things got really screwy. Philosophers asked if they could really know whether they existed or not, or whether if they went to a swamp and were replaced by an exact duplicate if that new philosopher would be the same person. We had philosophers pushing fat people in front of trolleys and all sorts of other mental experiment shenanigans. There was a lot of smoke but not much fire. Made for some great science fiction, though.

Second, and this was worse, the idea that we could figure out the mechanism of things took a quick hit, and it never recovered. As it turned out, almost all of the time, we could not figure out why things work the way they do. Remember Isaac Newton? He saw an apple fall from a tree and came up with the Law of Universal Gravitation. This was an astounding mathematical accomplishment that allowed us to do all kinds of neat things, like shoot rockets full of men to the moon. There was only one tiny problem: the law didn’t say _why_ gravity worked, it just gave equations for _how_ it would act. For all we knew there were tiny little gerbils running around inside of anything that had mass. Maybe magic dust. Newton didn’t know, and to a large degree, we still don’t know.

Or medicine. Doctors noticed certain patterns in their observations, such as the fact that chimney sweeps in 18th century England caught scrotal cancer at alarmingly high rates. Some doctors speculated that chimneys caused cancer. They stopped people from working as chimney sweeps. Scrotal cancer dropped. New observations were made, rules guessed at, hypotheses tested. Over time the terms of the debate got finer and finer, finally settling on something approximating “We believe repeated chemical insults to cells by certain agents over time can cause some cells to spontaneously mutate, at times becoming new creatures which can survive and thrive in the host and take their life.” But we don’t know for sure. We don’t know why. There’s a ton of things we don’t know. All we can do is keep making more refined, tentative models that we then test. As these models get more refined, they get more useful for doing practical stuff in the real world, like reducing cancer rates. But we still don’t know the mechanism, the why.

There is a provisional guess, based on the observation of a lot of data. We keep gathering more data, creating possible rules that might explain the data, then creating testable hypotheses to test the rules, then testing the hypotheses, building a model. Then we start over again. This process loop of science is called abduction, deduction, and induction, and the guy who explained how it all worked is probably the most creative and insightful philosopher-scientist you’ve never heard of.

Charles Sanders Peirce was the smartest thinker in America in the late 1800s, and even fewer people know of him than know of Frederick Winslow Taylor. Peirce was an unknown hero, a rogue, an outsider, discovering things years or decades before others, but rarely getting the credit for it because he never got attention for his work. In the 1880s, he was the first to realize that logical operations could be carried out by electrical circuits — beating the guy who got credit for it by almost 50 years. He was one of the founders of statistics, creating and using techniques that decades later were to be known as Bayesian Statistics. He was the grandfather of symbolic logic, and much debate still exists as to where Bertrand Russell got all the ideas he had when he went about creating formal logic. Many believe Peirce was shortchanged.

But, like many smart people, Peirce was also ill-tempered and prone to tick others off. His only academic job was at Johns Hopkins, where he lectured in logic for a few years. The enemies he made there followed him the rest of his life. When his marriage didn’t work out and he started seeing another woman, they had enough evidence to get him fired. A trustee at Johns Hopkins was informed that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married. And that, as they say, was the end of that. For one of the greatest thinkers of the late 19th century, it was the end of his academic career. He tried the rest of his life to find jobs at other academic institutions without success.

He lived as a poor man, purchasing an old farm in Pennsylvania with inheritance money but never making any money from it. He was always in debt. Family members repeatedly had to bail him out. Although he wrote prolifically, he was unpopular among the scientists of the day, so his work received no recognition. Only as time passed, as other famous scientists and philosophers slowly started giving credit to Peirce many decades later, did it finally become known what a great thinker he was. His “rehabilitation” continues to this day, as the study of Peirce has become an academic endeavor of its own. The man who couldn’t get a job at an academic institution and was considered a crank or crackpot now has scholars devoting their careers to studying his work.

Because of his history, Peirce had a unique outsider’s view. Although he was a philosopher, he always thought of himself as just a working scientist. As such, he saw his job as trying to make sense of the foundations of science, just like the philosophers. In his case, though, it was just to get work done.

To start organizing his work, he began by grouping the way we work with knowledge into two parts: speculative, which is the study of the abstract nature of things, and practical, which is the study of how we can gain control over things. Theoretical physics? Speculative. Applied physics? Practical. Philosophy? Speculative. Peirce taught that although the two types of investigations looked almost the same, they were completely different types of experiences. The scientist should never confuse the two.

This led him to creating a new science called semiotics, which was interested in how organisms work with symbols and signs, external and internal, to do things. Every living thing uses signs and symbols to understand and manipulate the world, but nobody had ever studied how they did it.

Thinking about the importance of practical knowledge and manipulating the world around us led to his famous Pragmatic Maxim: “To ascertain the meaning of an intellectual conception one should consider what practical consequences might result from the truth of that conception — and the sum of these consequences constitute the entire meaning of the conception.”

That is, when approaching some system of symbols, reasoning, or meaning, we have to be prepared to abandon everything, philosophy, tradition, reason, pre-existing rules, and what-not, and ask ourselves: what can we use this for? Because at the end of the day, if we can’t use knowledge to manipulate the world around us effectively, it has no value. And, in fact, the only value knowledge has is the ability it provides for us to use it to manipulate the real world. Newton figured out how to model gravity but not the mechanism, and that’s okay. It’s far more important that we have models with practical uses than it is to debate correctness of speculative systems. In fact, we can’t reason at all about speculative systems. The only value a system of symbols can have is being able to change things.

Mental ideas we work with have to _do something_. Pragmatists believe that most philosophical topics, such as the nature of language, knowledge, concepts, belief, meaning, and science — are best looked at in terms of their practical uses, not in terms of how “accurate” they are. Speculative thinking is nonsense in the philosophical sense. That is, we are unable to have any intelligent conversation about it one way or another.

The great pragmatists that followed Peirce took pragmatism everywhere: education, epistemology, metaphysics, the philosophy of science, indeterminism, philosophy of religion, instincts, history, democracy, journalism — the list goes on and on. As always, the sign of a true philosophical breakthrough is one which changes the universe, and Peirce’s pragmatism certainly qualifies.

Peirce’s lesson: in order for us to make sense of the chaos and uncertainty of science and philosophy, we have to hold both to a high standard of only concerning ourselves with the things we can use to change the world around us. This was far more important than arguing about being “right”.

This sounds very familiar to people in the Agile community. The dang thing has to work, consistently, no matter whether you did the process right or not. Right or wrong has nothing to do with it. It’s all about usefulness. A specification means nothing. It has no immediate effect. A test, on the other hand, is useful because it constantly tells us by passing or failing whether or not our output is acceptable. It has use.

Nobody ever heard of Peirce, laboring away in his farmhouse producing tons of papers they never read (at least until 100 years later, in some cases), but everybody knew Bertrand Russell and his star pupil Ludwig Wittgenstein, living large at the opposite end of the spectrum. They lived at the same time but in different worlds. Russell was a member of Britain’s nobility, well-respected, rich, and widely admired. Everybody considered Wittgenstein to be a genius and he never wanted for anything.

Russell invented formal logic. He is considered the most influential philosopher of the 20th century. Wittgenstein wasn’t much of a slouch, either. Russell took reasoning to the highest levels man has ever reached with formal logic, yet, like always, there were a lot of pieces that didn’t fit together. Reasoning about right or wrong inside speculative systems was a waste of time, no matter how rigorous they were, as Peirce had shown decades earlier, but no one heard him.

Wittgenstein took it on himself to fix it. So he wrote a big book, the Tractatus, that he felt was the final solution to all the problems philosophy faced. He told his friends there wasn’t much more for him to do. Wittgenstein wasn’t much on the humble side, although better than most. (“Annoying genius philosopher guy” seems to be a recurring theme in these stories.)

After thinking about things some more, Wittgenstein realized that, surprise, he might have made some mistakes in the Tractatus. So he wrote another book, Philosophical Investigations, that wasn’t published until after his death in 1953. There was a good reason philosophy kept getting tangled up around itself. There was a good reason that we had difficulty separating the speculative and the practical. There was a good reason that philosophers had a good idea or two, then, by trying to tease it out using systems of logic, always ended up out in the weeds somewhere.

Wittgenstein solved the problem from the other end, assuming formal abstract systems had value and looking for fault elsewhere. What he came up with blew your mind. No wonder Collier was stumped by the consultants.

The problem here wasn’t science, or knowledge, or logic, or reason. It was both simpler and more profound. It was language. Human language. Human language was much more slippery than people realized. It wasn’t logic and reason that were broken, it was the idea of a universal categorization system and universal forms based on language. Human language gives us a sense of certainty in meaning that simply does not exist.

It can’t be made to exist, either. Aristotle’s universal forms might still be valid, but they’re not precisely approachable using human language.

Although we do it every day, we do not truly understand how people communicate with each other. In speculative efforts, where it was all theory, this led to much confusion, as one time a term was used to mean one thing, and another time it meant something slightly different, imperceptibly different. In our everyday lives, people unconsciously looked at whatever result they were shooting for instead of the language itself, so they didn't notice. The exact meanings of words didn't matter to them.

The reason this was important, the reason Collier sent you here, was that there was a special case where lots of theory and speculative talk ran headlong into a wall of whether it was useful or not, and did so on a regular basis: technology development. In technology development, business, marketing, and sales people spoke in abstract, fluffy, speculative terms, much like philosophers did. But at some point, that language had to translate into something practical, just like things in the practical sciences. And so the problems philosophy had been experiencing over and over again across years, decades, and centuries, where there were subtle differences in terms and different ways of looking at things that didn't agree with each other? Technology development teams experienced those same problems in time periods measured in days, weeks, or months.

Technology development is a microcosm of philosophy and science, all rolled into a small period of intense effort. It’s science on steroids.

To illustrate the nature of language, Wittgenstein suggested a thought experiment.

Let’s assume you go to work as a helper for a local carpenter. Neither of you know any language at all, but he knows how to build houses and you want to help. The first day you show up, you begin to play what’s known as a “language game”. He points to a rod with a weight on the end. He says “hammer”. You grab the hammer and hand it to him. This game, of him pointing, naming nouns, and you bringing those things to him, results in your knowledge of what words like “hammer” mean: they mean to pick up a certain type of object and bring it over. Maybe it’s a red object.

The fact is, you don’t know what hammer means, at least not a lot of it. You only know enough to do the job you’ve been given. Another person playing a language game with another carpenter might think of “hammer” as being a rod with a black piece of metal at the end. As Peirce would remind us, that’s good enough. We have results. We have each played the language game to the point where we can gain usefulness from it. Later on, the carpenter might add to the game, showing you different locations and trying two-word sentences “hammer toolbox” or “hammer here.”

Each time you play the game, and each time the game evolves, you learn more and more about what some arbitrary symbol “hammer” means — in that particular language, that particular social construct.

The problem scientists, philosophers, and many others were having was given to us by Aristotle. Turns out science was a gag gift. The assumption was that one word has the same meaning for everybody. That language could represent some unchanging universal form. But the reality is that meaning in language is inherently individualistic, and based on a thousand interactions over the years in various language games that person has played. Sure, for 99% of us, we could identify a hammer based on a picture. That’s because most of us have a high degree of fidelity in our language games around the concept of “hammer.” But even then, some might think “claw hammer”, or “ball peen hammer” while to others there would be no distinction. Would you recognize every possible picture of a hammer as a hammer? Probably not.

Words don’t mean exactly the same between different people. The kicker, the reason this has went on so long without folks noticing is that most of the time, it doesn’t matter. Also, we pick up language games as infants and use them constantly without thinking about it all of our lives. It’s part of our nature.

Where it does matter is when it comes time to convert the philosophical ideas of what might exist into the concrete reality of computer programs, which are built on formal logic — into a system of symbols that assumes that there are universal forms and that things are rigid and relate to each other in rigid and predefined ways.

Let’s say you were asked to write a customer account management program for a large retailer. Given a title “customer account management program”, would you have enough knowledge to write the program? Of course not. You would need more detail.

Being good little Taylorites, over the years we have tried to solve this problem by breaking it down into smaller and smaller processes which can then be optimized. It’s never worked very well. Now you know why.

Just like with Taylor’s Scientific Management and creative processes, it seemed that we could break down behavior and meaning into infinite detail, and there still would be lots of ambiguity remaining. There’s always some critical detail we left out. There’s always some industry context or overloaded jargon that gets omitted.

Suppose you have a spreadsheet. A stranger walks in the room and asks you to create a list with columns to account for customer information, then leaves you alone. Could you complete that task? Of course not. You would need more context. So how much would be enough?

You could look up “customer” in a dictionary, but that wouldn’t help. You could talk to programmers on the internet about what they keep track of for customers. That might help some, but it would be nowhere near being correct. In fact, there is no definition for customer in terms of formal computer logic for your particular application. There is no definition because it hasn’t been created yet. You and the stranger haven’t played any language games to make one. The term “customer” is nonsense to you, meaningless.
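
To make this concrete in the house language of this blog, here's a minimal F# sketch of how two different teams, after playing two different language games, might both end up with a perfectly reasonable "customer". Every type name and field below is invented for illustration; nothing here claims to be the right model for any real retailer.

// Two teams, two language games, two "customers".
// All names and fields are illustrative assumptions, not a spec.
type RetailCustomer =
    { customerId    : int
      name          : string
      email         : string
      loyaltyPoints : int }        // the marketing team's game cares about this

type WholesaleCustomer =
    { accountNumber  : string
      billingAddress : string
      creditLimit    : decimal     // the finance team's game cares about this
      contacts       : string list }

Both compile. Both are "customer". Neither is wrong; they just come out of different language games, and only playing the game with your particular stranger tells you which one, if either, you actually need.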

Agile had the concept of bringing the end-user (or Product Owner) in with the team to describe things as needed as close to when the work happens as possible. It was to remove waste, or rework; but in reality it was the best solution to a problem that was not fully understood: meanings are subjective and depend on the language games involved in creating them. We get the guy in the room with us because the team needs to play language games right up until the last possible moment. Just like with the carpenter, you play the game until it’s good enough. “Good enough” is vastly different for every team, every Product Owner, and every problem.

Wittgenstein's lesson: Communities of humans play language games all the time; it's a state of nature. Language is inherently flexible, vague, and slippery. It gains meaning only by being "good enough" inside that particular community and only for particular uses unique to that community. Nothing means anything until we play language games to make it mean something. We can't reason about universal forms. Instead, we have to deal with each item separately. Meaning is relative, highly dependent on a person's experiences, and created by social interactions.

Looks like the sophists weren’t so stupid after all. Aristotle would not be happy. There was no way in hell that Mr. Collier was going to like this.

Aristotle said that there was a universal form for everything, and that by having exact definitions and using formal systems of logic on it we could deduce things about the universal forms that would then apply to everything in the real world. Peirce said that formal abstract systems are by nature speculative, and that speculative systems are nonsense. Unless it can change things in the real world, it is impossible to reason whether things in these systems are correct or incorrect. Computers can change the world using a formal system of logic, so technically, they might be the first devices ever able to translate abstract concepts into real-world effects. Wittgenstein said that didn’t matter: that natural human languages will never, ever match up with the universal forms that all systems of reasoning are built on anyway, so although the computer can work with universal abstract ideas, you’d never get the actual things people spoke about translated directly into the computer.

Science works because at heart it is based on probability, not because it is based on reason and logic.

Language games are terribly non-intuitive for folks brought up to believe that to find the meaning to a word you simply go to the dictionary, or for folks brought up in the scientific method school of thought that says that language can rigorously describe something so that any listener would receive the same meaning. Heck, it would even drive most programmers crazy, with their concept of language being so closely associated with formal systems of logic.

Believing that language can describe reality exactly has sent millions of projects off the rails, and thousands of philosophers to the old folks’ home early. (Wittgenstein grew to loathe philosophy, declaring that it was much more useful as a form of therapy rather than a quest for truth. We should use philosophy the way we would use a conversation at a bar with a particularly smart person, a conversation with a therapist, or a counseling session with a priest: a useful system of beliefs to help us move through life with more understanding and less pain.)

Instead, your imaginary spreadsheet project would proceed like this: you would form a group with people who had initial diverse internal definitions to the thousand or so words surrounding “customer information”. You would play various games, just like the carpenter and helper, until the group came to a common consensus as to what all the words mean and how they relate to one another. This wouldn’t be a formal process — language games seldom are — but it would be a process, a social process, and it would take time.

What you couldn't do, because it's impossible, is capture all the terms, definitions, idioms and such required for the project and convey it to another person's brain. At some far degree of descriptive hell you might get close, maybe close enough, but you'd never achieve the same results as if you just sat down and had everybody naturally create the language on their way to the solution. And even if you managed to somehow describe enough for some initial, tightly-circumscribed work, you'd never cover changes, modifications, re-planning — all the parts of "living the problem" that occur as part of natural social interaction using language games.

It would be like trying to prepare somebody for a trip to an alien planet and alien civilization using only a book written in English. Could you cover the language, the culture, how to properly communicate during all of the things that might happen? It’s impossible. The best you could hope for would be to get the person into a state where they could begin their own language games and become one with the culture once they got there. Everybody knows this, yet when we talk about specifications and backlogs for technology development, we forget it, and act as if we can join up the abstract and concrete using more and more words as glue. Maybe special pictures. If only we had the right system, we think. If only we included something about fish migration in Tibet. We are all philosophers at heart.

This was why Jones found that throwing away his backlog and re-doing it — restating the backlog — had value. This was why teams that worked problems involving the entire backlog understood and executed on the domain better than teams who were given a few bits at a time. This was why standing teams in an organization had such an advantage. They were playing language games that removed ambiguity and increased performance over time.

You don’t evolve the backlog to be able to work with any team; you evolve a particular team to be able to work with a particular backlog. It’s not the backlog that matures, though lots of things might be added to it. It’s the team that matures through language games.

Solving technology problems always involves creating a new language among the people creating the solution.

Words don’t mean anything, Aristotle would have been a lousy programmer, philosophers were a pain in the ass, Collier was going to kill you, and you were late for your date.

May 14, 2014

Dear Agile Friends: Please Stop It With The Pointless Bickering


Every couple of weeks, it’s more bickering.

Should teams co-locate? I don't know, I don't think there's a universal answer for all teams, and I want to work from home but I can't do that and do my job effectively. Does that stop us from arguing? Heck no! One bunch will line up saying co-location is the only thing that works, another bunch will line up and say co-location is 19th-century thinking that should be abolished.

And away we go.

Should TDD be used? Once again, no universal answer, I have my own view, and the way I’d like things to be and the way they are in my current work are different. So let’s all line up and start bickering over whether TDD is dead or not.

How about some of these new program-level Agile constructs, like SAFe? Same answers. Program-level Agility is just now getting some real traction and good anecdotal feedback from the field. Much too early to generalize, and who knows how much generalization would be useful anyway? But, we can go around the mulberry bush a few times on that.

What is it with the bickering? There’s a moderator on a popular LinkedIn Agile forum that decided that anybody who posted a blog link would have the post characterized as “promotions” and sent off to nobody-reads-it-land. I wasn’t crazy about that decision, made my case, then encouraged him to do what he thought was best.

That was over a month ago. He’s still on the same thread arguing about why his policy is the only sane one, and how we should all agree. Good grief!

Seems like a lot of us Agile guys are really good at arguing. Makes you wonder how much fun it would be working alongside them in a team.

Of all the Agile material I’ve consumed over the years, I like the Manifesto the most. The reason I like it was that there was a room full of guys who were all making money with various recipe books for making good software happen — and they managed to agree on what values should be important no matter what processes you were using. This was a moment of sanity.

Then many of them went out and created new branded recipe books and went back to bickering with each other. (I exaggerate, but only by a little. It’s more accurate to say their adherents did this. Many of the original gang have settled down. Not all.)

I know this drives people crazy, but after watching hundreds of teams doing every kind of futzed up process you can imagine, I only care about one thing: how well is the team evolving using the values as a guide?

I don’t care if they use tools, if they all have funny haircuts, if they wear uniforms and salute each other. Are they using a good system of values and changing up how they do things over time? If so, then they’ll be fine. If not, and this is important, even if they were doing the “right” things, I’d have little faith they knew what the hell they were doing.

A bunch of us yahoos coming in and bickering over whether story points should be normalized or not is not helpful — if we do it the wrong way. If we’re having some sort of “future of Agile” discussion, where the end times are upon us and we must turn back now and go back to the old ways? Probably not so useful. If, however, we’re sharing experiences and talking about why some things might work better than others, while acknowledging that many people do things many different ways? Probably much more useful.

We — and I mean me as much as anybody else — tend to make the internet some kind of drama-of-the-week contest. Everything is a disaster. We are all emoting. Well, I’ve got news for you: it’s not. Encourage using the values as a way to establish trust. Insist on teams continuing to try new things and learn. Have some humility about what we actually know as an industry and what we don’t. The rest will work itself out. I promise. :)

May 7, 2014

Dear Agile, It’s Time to Grow Up


Dear Agile. I’ve been with you a long time. Heck, I was there back in the 80s and 90s, even before you were born. Remember Fast Cycle Time? RAD? Or any of the other systems that emphasized rapid development in close association with the users? Remember all the fun we had before you had a name, when we were doing weekly code drops of tested code? Customers loved us. Developers were happy. It was a good thing. We were all happy then.

And it was even good after you were born. We had a whole bunch of things we could drop into the Agile bucket. There was eXtreme Programming. There was Agile Data modeling. There was even an Agile Unified Process. Seemed like the more we thought about it, the more other things besides development also needed to be Agile. Marketing, Sales, Startups, Program Management. The Agile manifesto was such a good set of values that wherever we applied them, we found improvement.

And that’s the problem.

Agile, it’s time to grow up. When you were born, it was easy to believe in developers as sitting in some small room, customer by their side, banging out great solutions. But that was then, that was your childhood. Now that you’ve gotten older, you have to be aware that most of the time these developers exist as part of some bigger ecosystem: a cross-functional team to develop a new product line, a team of vendors bidding on a government project, or a lean startup team practicing customer development.

It’s not just software any more, if it ever was. Now we’re expecting you to play in the bigger world with a broader view of how developers actually are employed. You have to grow up.

I know what you’re thinking: why can’t everybody just be like Spotify? One big company with a bunch of kick-ass Agile teams in it? Why can’t we just get paid, without having to think about where the money comes from? If the company/customer/client wants us to work in a large group of developers, why can’t we just tell them no? Why should we do estimates? Why do we have to add in all this other stuff anyway? Why can’t it just be the way it used to be, with you and your pair-programming buddy writing killer software for one guy you had to make happy?

I don’t blame you. Growing up is tough. Nobody wants to do it. It’s tough to look out on the world and realize that there are so many other important things besides just the things you’ve been used to. It’s tough realizing so many other people want part of your time, and you have duties and responsibilities in life that might not be so much fun.

I understand this.

But the world needs you. You see, the values you emphasized and the techniques that resulted from them are applicable across a lot more than just software. The world is becoming digital, and the digital world needs Agile to help guide it.

Sure, the world outside the team room has a lot of dysfunctional behavior. There’s management principles that are outdated, there are ideas and models that are harmful to the people who hold them. Many companies are being run in a way that actively destroys morale. Aside from product and intellectual property development, which was our old homestead, there’s also manufacturing and service work. Those activities play by their own rules, and although our value preferences stay the same, the practices and techniques change. That can be very confusing to you, I know. I’ve seen you struggle with trying to apply product development ideas to services and manufacturing. Sometimes it works. Sometimes it hurts.

But I want you to know, I believe in you. I’ve seen you overcome big odds, and the people using Agile principles today are some of the smartest people in the world.

You’re going to do fine. But enough with the whining and insisting on living in a tiny piece of the world. It’s time to grow up.

April 21, 2014

Agile’s Business Problem

Agile has a business problem.

I was watching a video of Uncle Bob Martin a while back, and he said something that struck a nerve.

[Paraphrased] “When we sat down to do the Agile Manifesto, several of us wanted everybody to be sure that the purpose here was re-establishing trust with the business.”

At the time, he was making a case for TDD — that if we keep writing code in Sprint 1 that breaks in Sprint 3, or delivering buggy software, or not being reliable, we lose the trust of the business. But I think his observation says something about the technology world in general.

Given a choice, we don’t want to be part of the business. Many technology developers work at an arm’s length or more from the actual business. After all, one of the big “benefits” of Scrum is that the Product Owner brings in a list of things for the team to do. The team isn’t expected to know business things, only deliver on a to-do list from the PO.

Most small company development teams have sales, marketing, and management out in the field bringing in the money. They just deliver stuff. Most BigCorp teams are so far away from value creation that I doubt many on the team could describe the relative importance of what they're delivering or how it fits into the corporate strategy. They just deliver stuff. Non-profit development teams deal with inter-office politics: their goal is to produce something the non-profit likes. They just deliver stuff. Everybody — except for startups — just delivers stuff.

And why not? Let’s face it: business is messy. The amount of work you put into a product is not directly related to the value it might create. There are many factors outside your control. There is no single person to make happy. And other businesses are trying to do the same thing as you are — many of them with more experience and capability than you have.

Wouldn’t you rather just work from a to-do list too?

We tell ourselves what we want to hear, what’s popular. So we focus on our backlog, how well we create things using it, and how to improve our little world. We don’t worry about the larger one.

The problem is: once you draw the line at the Product Owner, once you make the trade-off that, for purposes of "all that business stuff", the Product Owner is the magic guy, then it becomes a lot easier to start making other arbitrary compromises. Wherever there's a business consideration, why not just change the rules to eliminate it?

Is the Product Owner asking you for estimates on when you'll be done? Stop estimating! Do several recent corporate acquisitions mean that the new product is going to have 8 teams all working together? Insist on firing everybody until you get one team. After all, the organization should adapt to Agile, not the other way around. Rituals like demos and sprint planning bugging you? Hell, just go to kanban/flow-based work and who cares about cadence, calendar, or commitments?

Sometimes these conversations make me want to laugh. Sometimes I wonder: is there something there? Am I missing something that’s going to provide value for folks down the road? I intuitively like a lot of what I hear: things like flow-based work systems. But I think that’s the problem: we’re confusing the theory in our heads and what we’d like to hear or not with what actually works.

We’re resistant to philosophical change. I know there’s a lot of people in the Agile community who already have everything figured out. As a coach and technologist with 30 years of experience and eyes-on hundreds of teams, back when I started blogging and writing, I thought others would be happy to have me share with them.

I was mistaken. Instead, amazingly, most coaches and practitioners — even those with a year or two of experience and a 2-day class under their belt — already know everything. The old cranky farts are worse, because, well, they’ve been around the block a time or two. Selection bias is a powerful thing, and if you decided you figured something out back in 1995, you’ve had decades of time to convince yourself there isn’t much more to learn. We get the general idea of something in our head, then we insist on the rest of the world conforming to that general idea.

The Agile community is an echo chamber. Instead of viewing this entire enterprise as an evolutionary effort spanning many decades, we view it as a brand-based deal where we’re all competing for market share. People don’t worry which parts of Scrum to adapt here or there, they worry about whether Scrum is still “pure” or not. They worry about whether we have “true” Agile or it has gotten corrupted.

These ideas, instead of being just ideas that we’re supposed to take and learn from, have become some weird kind of Platonic form. There is a “true” Scrum, and if we could only stay in the light of true Scrum, everything will be fine.

We’re not using ideas as stepping stones; we’re coming up with things we like hearing and then branding them. Then we have arguments over branding. This is dumb.

The Agile community continues to have a crisis of conscience: are we part of a system that is trying to learn better ways of providing real value to real people? Or are we trying to create a system of value for the way we work that we will then expect the world to conform to? It’s an important question, and it’s important to determine which side of the fence you come down on.

To me, Agile is a series of ideas about things that mostly work in incremental and iterative development. None of it is perfect, none of it is a magic bullet, and everything I’ve seen so far has places where it works better and places where it doesn’t. I don’t say that to dismiss ideas like Scrum, XP, and so forth. I say it to put them in context. But it’s nothing if it doesn’t do something real for somebody, if it doesn’t deliver.

I expect this body of work to grow, and hopefully I can come up with a thought or two that might last a while. But if it’s going to grow, it’s going to need to keep becoming more and more a part of business and less of its own separate universe.

Note: I didn’t say the entire Agile community. I still have faith that this is the best place to be for people who care about happy and productive teams. My point is that the future is in startups and in adapting what we do in order to create real value (instead of just cranking out technical solutions), not in taking the tiny bit we’ve figured out so far, over-generalizing, and then trying to make the world conform to it.

March 30, 2014

Why I Don’t Care About Agile, Lean, Kanban, Scrum, and XP

Lately there has been quite a donnybrook in the Agile community. Is it time to leave Agile behind?

As it turns out, the suits have taken over, taking all that great Agile goodness and turning it into just more command and control, top-down, recipe-book dogmatic nonsense.

So some folks say we should just “return to REAL Agile”. Some folks say the brand Agile is dead: it has been killed by entities intent on adopt, extend, extinguish. Some folks say that Agile was never any good to start with — either it was too wishy-washy and meaningless, unlike Scrum, or that it was too emotional and not logical enough, unlike DAD (or many others).

As for me? I just like collecting shiny objects.

Agile is a brand. It’s a brand without an owner. That means that each of us has to come up with some kind of working definition for what the brand stands for. For me, that brand represents something like “best practices in iterative and incremental development”. If you’ve got one of those, you can publish it under Agile. Works for me.

Of course, people go off the deep end just on that formulation. What do we mean by "best practices"? Isn't that prescriptive? Are we saying that we know the perfect answer to how teams should act?

sigh.

I’ve been around this shrubbery many times, and I think it comes down to what you want to do in life.

I’m in this to help developers. I really don’t care what you call it, or how it makes me feel. I don’t even care if it all fits together in some master plan, or whether it has a class or not. If it helps developers, I want to do it.

Some folks are in this to change the world. To them, Agile is a movement. It exists to show what is wrong and what needs to be fixed.

Some folks are in this for a theory of life, the universe, and everything. They want a philosophical grounding that they can use for their development, their team — even the organization.

I believe these last two groups are expecting a bit much. Sadly, though, it’s these powerful emotional constructs that drive adoption of things. So they’re always going to be with us.

I also believe that Nietzsche was right: any sort of structured system of belief is going to be self-contradictory. In other words, no matter what you do, once you make a system out of it, it’s broken. And we technologists love to make systems out of things. Even things that are supposedly non-prescriptive. After watching this industry for decades, I believe this is unavoidable.

All of this has led me to believe that fighting over the term “Agile” is a fool’s game. If you only care about what works, it doesn’t matter what folks call it, and no matter what comes along next, the community is just going to do to it what it did to Agile. If you care so much about terms, then get your head out of your ass and start focusing on what’s important. We have work to do. Give up on allying yourself to a certain brand or term — or proclaiming that you’re opposed to a certain brand or term — and just collect things that make developers’ lives better. Then we can share.

March 11, 2014

How We Got To Agile

Frederick Winslow Taylor

1900. Allentown, Pennsylvania. The dawn of a new century. Manufacturing looked nothing like today.

For one thing, there were no managers. There were owners. There were overseers. Foremen. But not pocket-protectored MBAs wandering the shop floor, clipboard in hand. Back then owners were upper class, landowners, holders of property. They didn't bother hanging out with riff-raff. Sure, if necessary, you could find the owner or one of his lackeys in the main office. An overseer, a shop foreman or two were on the floor keeping the men sober and busy. But aside from production goals and tradition, the workers were left to their work.

And why not? For skilled work, after completing a lengthy apprenticeship, the job was handled by craftsmen, many times from a guild. They made their own decisions about how their work was performed. Simple work? Workers were hired off the street and paid by how much output was desired. No matter the type of work, each job was a special creation, a snowflake, even if it was just something that fitted into a larger whole.

For unskilled work, workers didn’t do anything. Or rather, they did everything, but weren’t particularly good at any one thing or another. Need more output? Threaten more. Promise more. Workers were miserable.

Craftsmen were paid the same amount no matter how much was produced, and centuries of the apprenticeship system taught workers the wisdom that the more they did, the worse things got. The more they did, the more mistakes they would make. The more they did, the more they would stand out. The more they did, the more they would work themselves out of a job.

Let’s say you got a wild hair up your ass and came in one week and killed it. What do you think would happen next week, kid? I’ll tell you what. Next week the bosses would expect you to work at the same rate, that’s what. And they’d expect everybody else to work at that rate too. Do you have any idea what kind of living hell they’d make for us if we were all constantly working at top speed?

Nope, twice the people working at half efficiency was much better for all concerned than all of us working as hard as we can. It was even better for the company, because then there were extra people around to take up the slack when people got sick. The main job here, if you were smart, was convincing management that you were working as hard as humanly possible — while slacking off as much as you could without getting caught. More people, more money, more leisure time, more security, more everything.

It was, after all, us against them. We have to create our own slack. We can’t trust the business to do it for us.

The early 1900s were the sunset of the Victorian Age, and owners really didn’t much care. They were owners. They were obviously better educated and more hard-charging than the average knuckle-dragger on the shop floor. Foremen provided the raw material, said some encouraging words from time-to-time, and mostly just left workers to sort things out the best they could. Fire the slow workers. Make sure nobody was goofing off. The beatings will continue until morale improves. This is the way it is. This is the way it always was.

It was, after all, us against them. We have to use a stern hand on the workers or nothing gets done.

And so what if things ran 10% better this month than last month? The company made money through political connections, through inside contacts, through an old-boy network. Not by making a better widget.

It sounds odd today, but back then nobody actually organized their work in the abstract. Sure, work was orderly, but nobody sat down and looked at how it all fitted together. There was no master plan or detailed architecture at the task level. There was no optimized design. For management, it was beneath them. How would they know how to do things better than the workers? For workers, using rough guesses and rules of thumb was the way of organizing the work handed down from person to person. Your mentor guessed his material needs a certain way. This was the way you guessed your material needs. Your mentor paced his prep work a certain way. This is the way you paced your prep work — if either of you thought about pacing at all. Or prep.

One mentor taught it one way, another a different way. The standard was that there was no standard.

It was a true collection of artisans, and because everybody was an artisan, nobody was. True skills in such an environment were social ones. Get along with each other. Be respected by your peers. To owners, in terms of measured technical expertise, all the workers looked mostly the same. Other than that, it was all reputation.

Taylor fretted. How much activity was being wasted? How much material over-ordered and never used? How many workers were doing jobs they had no ability in? How many men were living lives of pointlessness, either working hard and not smart, or not working at all while nobody was the wiser?

No. No matter what kind of happy face you wanted to put on it, whether you were owners or workers, this arrangement was killing morale, efficiency, and most importantly, led to distrust, resentment, and a bitter us-versus-them attitude for all involved.

And what of the larger picture? What kind of impact did this work structure have on factories? On the nation as a whole? On the families of the people caught up in it?

Just because we had always done it this way didn’t mean it always had to be done this way.

It was unacceptable. Something had to change.


A hundred years later and it had changed, massively, but somehow we ended up just trading one misery for another.

2001. Snowbird, Utah. A small gaggle of software developers, consultants, and process wonks — nerds, really — met to discuss the state of technology development.

It wasn’t that good. Over the past couple of decades, as more and more people entered the workforce as programmers, dozens of different methods for improving productivity had been tried. Books written. Whitepapers prepared. Doctoral theses defended. People were certified. People set up huge organizational systems to manage technology creation. People honed and tweaked. They lived lives of quiet desperation inside huge corporate and governmental machines dedicated to making technology happen.

For skilled work, nobody did anything, or rather, they were busy, but little got accomplished. Workers were miserable.

Early on, people developing technology started with a simple premise: things happen in a certain order. It’s sequential. People need something. They tell us. We write some code. We debug. We give it to them. They are happy. One thing naturally leads to another. Work naturally happens in a clear sequence.

So it looked like developing technology was just a matter of listing out each step and doing each step correctly, right? Pieces of work come in at a high level, then cascade down to product coming out the other end. It was a big waterfall. You got on at the top, you got off at the bottom. This idea that sequencing was the most important part of the structure of technology development and that we should order our work as some version of a large linear sequence was called “waterfall methodologies” or just “waterfall”.

In theory, waterfall was natural and intuitive. Only problem, in practice it didn’t work so well. In practice, there wasn’t always a single path production could take. There were branches, loops. Once product was released, it might enter the system again as people needed to fix problems or add new features. Instead of a simple waterfall with a few stages from “beginning” to “end”, technology development started looking more and more like a flowchart. Then it started looking like a really complicated flowchart.

The idea that technology development is a flowchart with cycles, not a straight line, over it’s entire lifetime is called a Software Development Life Cycle, or SDLC. The idea is that whatever you’re doing in a modern technology development shop, it’s just work inside a little node on a flowchart somewhere called the SDLC. Your node has inputs and outputs. Rules of operation. Quality criteria. You worked the node. Or the node worked you. Depended on how you looked at it.

Lightweight SDLC by deliverable — documents to be delivered. Each deliverable would have its own complex flowchart of processes to create it. Projects started in the upper left and ended in the lower right.

Even at this level of complexity things weren’t complete, though. Or rather, no matter how complicated we made our flowchart, there was always something new that needed to be added, and it was important to add it. After all, companies in this world made money by being as precise and efficient as possible, by making a better widget. Chasing the correct SDLC, tweaking it and making it better, looked a lot like something that over time created diagrams of infinite complexity.

So practitioners doubled back. Instead of looking at things over a lifespan, how about if we changed focus? Looked at things in small increments? You'd do the same things over and over again, releasing a new little bit of stuff when it was ready. Instead of focusing on the tiny details of what you are doing, couldn't you just tell me what you can do by next Friday? Maybe technology development wasn't a flowchart, maybe it was just a string of little nodes all the same, each one representing a unit of time.

Focusing on delivering pieces of a whole in small units of time was called incremental and iterative development, and it’s mostly stuck with us. It helped. But not as much as we hoped.

First, we still had the question of process. What did we do inside of each release, or increment? Nobody knew. How long should an increment be? Should there be a standard increment length? Should we just work until we think we’re ready? Obviously if we wanted any chance of improvement, we had to break down, define, and optimize each piece of our work, right?

People added in details to answer all of these questions, and, just like before, a simple idea of doing small increments over and over again in an iterative fashion became many super-huge systems of how to program, how to report your time, how to create one of the standard project management plans, what sorts of status reports were necessary and when, and which of the 83 standard artifacts you needed to complete for the upcoming release meeting next Thursday afternoon. Yes, the nodes were all the same. They were just little boxes of time. Inside each node, though, we crammed a miniature SDLC. And that SDLC could get just as complicated and unwieldy as the old ones did.

It was the dawn of the Age of Dilbert, and workers led days that were full of meetings, mostly management meetings, mostly asking why things weren’t completed yet. Each and every bit of their time was tracked. Everything had a check and a cross-check. There was a standard structure and system for everything they did and everything was meticulously measured and tracked. We define. We measure. We optimize. Productivity was king.

But productivity did not increase. Instead, it plummeted. There was plenty of activity, mind you, at least on paper, but nothing of any value got released in any reasonable time frame. Developers were miserable. Managers were miserable. Customers were miserable.

Trust was lost between the business and the workers. Promises were never kept. Deadlines always slipped. No matter what kind of happy face you wanted to put on it, whether you were owners or workers, this arrangement was killing morale, efficiency, and most importantly, led to distrust, resentment, and a bitter us-versus-them attitude for all involved. Eventually each side recognized the game. It was the same old one. It was, after all, still us against them.

Looking back, some extremely smart people across the industry had worked at this problem of how to organize technology development work over a period of many decades. All sorts of various systems had been erected, diagrams created, but no matter where we started, no matter the philosophy and rules used, technology development always ended up a complex mess of rules, contracts, meetings, flowcharts, late hours, unhappy spouses, and kids with missing-in-action parents.

It was unacceptable. Something had to change.

Early "modern" production line. It was found that when workers were required to take "tests" during their work, productivity magically improved.

Back in 1900, Taylor agreed. Something had to change. He looked at the shop floor. The company was spending millions of dollars on labor, and yet nobody was using any kind of system to keep track of how things were done. It beggared belief. Surely we could learn something from technology and science and apply it to the most important thing we do during our day — our work. Surely work had to be something more valuable than simply a force of wills between owners and workers.

What we needed was a scientific method for managing our work.

1. Replace the old rule-of-thumb work methods with methods based on a scientific study of the tasks.

2. Scientifically select, train, and develop each worker rather than passively leaving them to train themselves.

3. Cooperate with the workers to ensure that scientifically developed methods are being followed.

4. Divide work nearly equally between managers and workers, so that the managers apply scientific management principles to planning the work and the workers actually perform the tasks.

In short, we needed to emphasize science, reason, and hands-on management over tradition, old wives’ tales, and class separation. Science was the way forward for all of us. Science was our hope. It was insane not to use science to help us improve the way we created things. In Taylor’s mind, it wasn’t that class distinctions would go away, lord no. You’d never take a lowly worker bee and make much of a thinker out of him. That wasn’t even in the cards. We couldn’t change reality. We couldn’t change attitudes and values, but we could, and should, change our processes and tools; in fact we should break down, define and tweak each and every thing we do during our work day to make sure it makes sense to us and is providing the maximum amount of value.

Looking out over Allentown, Taylor knew our emphasis was correct; we just needed to plan, measure, and change every little action we took during the day, using the scientific method to meet our goals better.

Which is exactly the opposite conclusion the Agile Consultants reached a century later after living in the world Taylor created.


Looking out over the state of technology industry from Snowbird, no matter where people started, no matter what philosophy and rules they used, technology development always ended up a complex mess of rules, contracts, meetings, flowcharts, and late hours. The business always lost trust in the workers. The workers always lost trust in the business. The customers lost trust in both. Everybody ended up unhappy.

It wasn’t that any of the specific things we did or tools we used were bad, really. In many cases there were really good reasons for folks using them. It just always seemed that somehow things got out of hand, and processes and tools that were supposed to be supporting the work and helping things run smoother instead became prisons that people lived and worked inside of. It wasn’t enjoyable. It didn’t even accomplish the things it was supposed to.

We couldn’t always change our processes and tools. Many times they were the best we had, and we had to use something. But we could, and should, change our attitudes and values.

We needed a change in emphasis. Thus the Agile Manifesto.

The Agile Manifesto is one of those things that started off as a few bullet points in a conference somewhere in the backwoods of industry and somehow gained a life of its own. Sure, there were a lot of other things that came out of the conference — they decided on the buzzword “Agile”, they came up with a set of principles, there were various structured systems of work that spun off. They drank beer. There was a lot of writing and marketing. But nothing with as much staying power as the Manifesto itself.

The reason the Agile Manifesto resonated was because the Manifesto was not just another system of telling folks how to run technology teams. Instead, it was a reaction, a reaction to the hundreds of methods that had already been proposed and tried (and billions wasted) developing technology. It wasn’t a set of instructions. It was a set of values. A polite primordial scream. People who read the manifesto could choose whether to agree with the values or not. If they agreed, they were then free to apply them in any way they desired. They were asked to agree on where our values had gone astray, not about the perfect way to run our next technology development program. There was always plenty of that.

The Agile Manifesto went something like this:

“We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.”

Might want to read that list again, mentally comparing the items on the left to the items on the right.

Pretty simple stuff, and it might have gone without much notice, but the timing was good. There was a website. People saw it and liked what it said. A movement was born.

Then came the hard part: proving that the principles had practical value and weren't just more happy-talk, management-speak, or sophistry. Which is exactly where Frederick Winslow Taylor found himself back in 1900 when he started spreading the gospel of Scientific Management.

The very first "light paintings" were created by time and motion pioneers interested in defining and optimizing how workers worked.

At first they hated his ideas, and some of the ones that hated it the most were the ones you’d least suspect: factory owners.

That’s right, because although we associate management today with close oversight and control over the workers, early 20th century owners already had all the control and oversight they wanted. They could hire and fire at will. They were comfortable. They had tradition.

In many cases they even enjoyed very close, paternalistic relationships with folks on the floor. There was rarely outright hatred. A good worker was loved by his boss and he loved his boss too — all while he slacked off as much as possible. All while the boss planned draconian incentives to keep the workers honest. No hard feelings. It’s just the way things have always been. Upper class cracked the whip. Lower class pulled the oxcart.

Owners hated Taylor's Scientific Management ideas. Here come these guys, these Management Consultants, carrying this radical departure from tradition. Required a lot of oversight and work that looked silly, and at the end of the day, what do you have? Same old lower class, uneducated riff-raff, god bless 'em, that you had to begin with. You couldn't make a silk purse out of a sow's ear. These ideas amounted to nothing much more than an unnatural mingling. Of ideas. Of classes. Pointless bean counting.

He was the first person to ever use a stopwatch on the factory floor. Who would use a stopwatch for menial tasks? Those were for sporting events. Train schedules. But Taylor would. He would sit and time people as they did their jobs, breaking down each task and cataloging it, watching closely how long it took to accomplish, what the averages looked like, and how much variance there was. Which methods worked better than which others.

Scientific Management: the idea that careful definition, measurement, planning, and architecting could make a huge difference overall.

Taylor watched some men moving pig iron. On average, they moved 12.5 tons per day. If necessary, the company could offer huge sums of money to get them to move more, say 47.5 tons, but experience showed that the only thing that would happen would be that the men would try as hard as they could for a while and then give up, exhausted.

If, on the other hand, a specially trained overseer, a real manager, conducted experiments to determine how much rest was required and at what intervals, then the manager could synchronize the lifting and resting among all the men such that every man could move 47.5 tons per day, every day, all without becoming exhausted. Manager does the thinking. Worker does the working.

Don’t just whip the horse harder and pull on his reins. Take apart everything the horse does, climb inside his brain, and optimize each motion. Think for the horse.

Of course, not every worker would be capable of moving that much pig iron. Perhaps only 1/8 of the general population could. Not all of our skills and weaknesses are the same. Some folks have attributes that make them very good at moving pig iron. Most do not. The company would be doing both the men and society a favor by selecting only those people for which moving pig iron came naturally. This could be accomplished with physical fitness tests. Science helped the managers define the work, plan the work, measure the work, and select the workers for the work.

No detail was too small for Scientific Management to analyze. Taylor studied shovel design. How long does it take to operate a shovel? How does that interval change depending on how much weight the shovel is carrying? What is the optimum weight for an operator to lift and move using a shovel in order to maximize throughput?

Based on time and motion studies, he determined that the optimum weight a worker could lift using a shovel was 21 pounds. Since various materials being shoveled had various densities, obviously shovels should be sized according to the type of material being moved so that they would, at their fullest, weigh 21 pounds. Companies started providing workers with appropriate shovels. Prior to that, workers brought their own shovels to work. Substantial productivity gains were realized — and more importantly, concretely proven — by recording productivity metrics before and after the change. No more running things by anecdote or tradition.
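
The arithmetic behind the shovel sizing is simple enough to sketch. Here's a tiny F# snippet; the bulk densities are rough, made-up figures for illustration only, not Taylor's actual measurements. All it does is divide the 21-pound target load by a material's density to get a scoop size.

// Taylor's rule of thumb: a full shovel should weigh about 21 pounds.
let optimalLoadLbs = 21.0

// (material, approximate bulk density in pounds per cubic foot)
// These densities are ballpark figures, purely illustrative.
let materials = [ "coal", 50.0; "iron ore", 150.0 ]

for (name, densityLbsPerCuFt) in materials do
    let scoopCuFt = optimalLoadLbs / densityLbsPerCuFt
    printfn "%s: scoop of roughly %.2f cubic feet" name scoopCuFt

Lighter material, bigger scoop; denser material, smaller scoop. The point was never the exact number, it was that the number got measured, recorded, and compared before and after the change.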

Science rocks.

As this caught on, others joined in, including a husband and wife team that used new and highly-experimental motion picture apparatus to study motions of bricklayers. More dramatic improvements were proven.

As it turns out, Scientific Management was something that any educated person with discipline and a scientific mindset could get in on. Just required a new way of looking at things. Which is exactly the way the Agile Manifesto consultants wanted things a hundred years later. A new way of looking at things. It was just common sense: we should share values. Emphasize those. No-brainer.


But it wasn’t a no-brainer. People hated the Agile Manifesto in 2000 just like they hated Scientific Management in 1900. And surprisingly, many of those who hated it were also the ones you’d least suspect: developers living in old oppressive systems.

Why should they like it? They had a complex, unwieldy system, sure, but it was their system. It was something they had spent many years putting together and tweaking to optimum performance. Sure, some of the new guys thought the system sucked, was overly complex, and ate people’s souls, but that’s just because they were new guys. Hippies. Slackers. They didn’t understand the reasons why things had to be the way they were because they weren’t looking at all of the things that could go wrong.

And here come these new schmucks, these…Agile Consultants? What do they offer? A bunch of hot air and loose talk to encourage youngsters and all those already unhappy with the way things are going. These Agile Consultant guys would be better off spending their time and energy explaining why things had to be the way they were instead of building fantasy castles in the air. Perhaps doing some real research on processes and procedures we might have missed. Give us more detail. Provide practical advice instead of regurgitated marketing speak.

And yes, things were taking a lot longer than they used to, but that’s not important. What’s important is that we’re finally beginning to operate this place like an engineering shop instead of a mob. Optimization comes later, after standardization and baselining. Everybody knows that. We’re becoming professionals. We have a system. We make complex things, therefore our system should be naturally complex. We are just now getting around to adopting some intricate yet practical and important processes, and these guys are suggesting things that sound like they were invented last week in a beer hall and could be explained on a page or two in the back of a comic book.

But the really crazy thing? The thing that drove us nuts? Most of the time, the practices developed from the Agile Values? They worked.

The daily stand-up is practiced by hundreds of thousands of technology teams around the world.

For example, they had this accomplished, committed, needs-help cycle. Once a day, each person would go through a brief communication cycle that covered — and only covered — their accomplished, committed, and needed-help items. The entire team would stand in a circle and toss around a Nerf ball. When you caught the ball, you communicated to the rest of the team — not a manager! — what you accomplished over the last day. You communicated commitments about what you thought you could accomplish over the coming day. Finally, and this was bizarre, you communicated what you were weak or worried about and asked for help from the others. And the team provided it! Without being told! On their own!

The same thing happened at the next level up, the team. The team repeated this cycle every so often, demonstrating what they had accomplished, making a commitment to do more, admitting their weaknesses, then talking about where they needed help. Then management would be directed, by the team, no less, where they should help. By the team!

It was something you could train people on in five minutes. Heck, just reading about it, you’re already trained. Yet every team that honestly gave it a shot found productivity increasing, happiness increasing, participation increasing. Some even called this communication cycle the “five minute miracle.”
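
If it helps to see the shape of that cycle, here is a minimal F# sketch of one person’s update: nothing more than three lists. The type and every name in it are mine, invented for illustration; they come from no framework and aren’t an official Agile artifact of any kind.

standupUpdate, a sketch
type StandupUpdate =
    { Accomplished : string list   // what I finished since the last stand-up
      Committed    : string list   // what I think I can finish before the next one
      NeedsHelp    : string list } // where I am weak or worried and want help

// one team member's update for today
let myUpdate =
    { Accomplished = [ "Finished the import routine" ]
      Committed    = [ "Start on the export routine" ]
      NeedsHelp    = [ "Unsure how to handle duplicate records" ] }

printfn "Asking the team for help with: %A" myUpdate.NeedsHelp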

Then some of the other Agile guys had this idea of the team being the unit of work delivery. Instead of assigning work to a person, as has been done since the dawn of time, the idea was to let the team, as a unit, decide and commit to what it could accomplish. Then the team, as a unit, was responsible for delivering it, not a specific person. All the problems and variances among different people still existed, of course, but it was up to the team to handle them. From the outside there was the team, and the team was all there was. Work went into the team, the team did work, and work came out of the team. What people did inside the team was of no concern to anybody but those in the team. Insanity.

But genius, too. This freed each team to dynamically optimize its work patterns depending on the project, the team members, the type of work, perhaps even the individual work item or task. The company had a trade to make: time and money to the team for delivered product back to the company. The team had the freedom to make the value creation happen as productively as possible inside its commitment time box, which somebody decided to start calling a sprint.

Teams working in this fashion, with no hangups or baggage carried over from previous systems, customizing their optimizations as they went along, were reporting some astounding successes. So much so that most outsiders found it hard to believe. Two, three, ten times faster? Happier workers? Customers singing praises? Technology development, and Management Consulting for that matter, had never seen that kind of performance improvement. That kind of excitement. Ever.

Then there was the whole Product Owner and Backlogs thing. Since the team was the unit of production, what was going to be the unit of value for the team to produce? There had to be some way of enumerating and measuring the value of the things the team was working on. There had to be some kind of interface between the team and the outside world. After all, how would the team know what to work on? And so the Product Owner was born. The Product Owner made a list of things that had business value. The Product Owner ordered this list from most valuable to least valuable. The Product Owner gave the list to the team, and the Product Owner was always there to answer questions about the list and the items in it.

This list, the list of things that had business value, was called the Backlog. The Backlog was the Product Owner’s. The Product Owner and his list represented things the business was willing to pay for, things that had real value. Not a lot of explanation was forthcoming at first about what went in the backlog, or what exactly the Product Owner did all day, or how the business knew these things were valuable, or what made a good or bad item on the backlog, but teams had a guy responsible for telling them what to do, and a list of stuff to do, and that seemed good enough for most folks.
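
To make that concrete, here is a minimal F# sketch of a backlog as nothing more than an ordered list of valued items. Every name below, including the orderBacklog function and the idea of scoring value as an integer, is something I made up for illustration; it is a sketch of the paragraph above, not a definition from Scrum or any other framework.

backlogItem, a sketch
type BacklogItem =
    { Title         : string
      BusinessValue : int }   // however the Product Owner chooses to score value

// The Backlog is just the Product Owner's list, ordered most valuable first
let orderBacklog (items : BacklogItem list) =
    items |> List.sortBy (fun item -> -item.BusinessValue)

let backlog =
    orderBacklog
        [ { Title = "Customers can pay online";    BusinessValue = 90 }
          { Title = "Email receipts";              BusinessValue = 60 }
          { Title = "Nicer fonts on the homepage"; BusinessValue = 10 } ]

// The team pulls work from the top of the list
printfn "Next up: %s" (List.head backlog).Title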

A Product Owner and a Backlog eliminated long sessions with users who didn’t know what they wanted. Instead, the Product Owner could handle that. It eliminated huge hunks of time where people dickered about which thing was more important than which other thing. The Product Owner could handle that too. Why should the team care? Give them a list, a person they can talk to, and they can make things happen. We solved the problem by assigning it to somebody.

Big swaths of time that used to be spent coordinating and meeting were returned to the developers to do productive work. Tough problems were made simple. Developers made happy.

Now if only it would take off.


Frederick Taylor had the same struggle with Scientific Management Theory, but he didn’t give up. After working at a few places validating his ideas, he retired at age 50 and spent the rest of his life writing books and proving his ideas correct to anybody who would listen. And because people could see the results, he quickly gained a following. More and more managers saw the natural connection between work, management, and science.

An early assembly line

Scientific Management went everywhere. Taylor single-handedly invented the job of Management Consultant. Almost all of modern management theory is based in one way or another on Scientific Management Theory. Name a famous management consultant: Deming, Drucker, McKinsey. Talk to them or read their work, and pretty soon they’re talking Frederick Winslow Taylor. Name a famous management theory: Six Sigma, TQM, MBO, Balanced Scorecard. It all goes back to Scientific Management Theory.

That’s right. Frederick Winslow Taylor is the guy responsible for just about every management fad you’ve ever had to suffer through.

His ideas of time and motion measurement and optimization, along with his use of the scientific method for process improvement, are used everywhere. In fact, it’s impossible to find anywhere they are not used. The entirety of modern life, with all the great increases in technology and efficiency, is built on Scientific Management Theory.

Even facing stiff initial resistance, the logic of Scientific Management quickly outstripped the emotional appeal of not rocking the boat.

Agile was slow going at first, too, but eventually took off for completely the opposite reason. While Taylor supporters had to argue science and logic, Agile adherents could just jump up on a soapbox and hold forth on the cruelty of the existing system. You couldn’t support Taylorism without supporting precise measurement and studied action, but you could support Agile without actually taking a stand about anything at all. You could just take a stand against stuff. It was enough to know that the system was broken. The system was broken! Tear it all down!

Early Agile adopters faced a ton of friction from the Scientific Management crowd, mostly in the upper echelons, but the emotional appeal of Agile Values quickly outstripped the logical appeal of Taylor’s measurement, planning, and optimization, at least for the people doing the work.

And as much as it sounded like another feel-good management fad, applied Agile had the distinct benefit of actually working. But it worked in a probabilistic, not a scientific, manner. People could take the values that the manifesto described and create various concrete processes to try in their teams. Then they could compare notes. Over time, some processes were found to work well most of the time. Some only rarely. But practitioners could collect these process tricks, or games, like postage stamps, trading with other practitioners and selecting the appropriate ones depending on what kind of work was going on. This wasn’t about the right or perfectly optimized way to do things. This was about the way to do things that worked better than others most of the time. This difference turned out to be critical. Agile, at least the way the community applied it, wasn’t a standard; it was a heuristic.

Hundreds of books sprang up around Agile. Pick an area of technology development: coding, databases, hardware, the Internet. Successful, proven Agile techniques are there and continue to grow there as people organize new processes and structures of work and then place those processes, games, katas, or tricks inside the larger controlling Agile value system.

Two opposite philosophies. Each deeply involved in all progress and organized activity on the planet. Each looking at the world in a completely different way. Scientific Management, which makes sense to anybody creating and running the machine of business. Agile, which makes sense to anybody making technology solutions happen.

Scientific Management tells us that life is rational, causal, and works according to logic and reason. If we define what we are doing and break it down into smaller pieces, we can optimize those pieces, thereby optimizing the whole. One set of people plans the work and another set of people implements the work. In fact, this separation between planning and execution, combined with scrupulous measurement and adaptation, is the only sane way to organize our work.

Agile tells us that no matter how we break down and optimize our work, without our values being in the right place we end up creating a living hell for ourselves and others. That progress involving ideas is many times surreptitious, not directly causal. That you can only break down things so far. That trust is more important than measurements.

It’s as if there’s one world full of business people with suits, stopwatches and blueprints, measuring each little thing and determining how a little tweak here or there will change things by a percentage point or two. Another world is full of technologists with flip-flops, games, and Nerf balls, not talking about measurements or machines yet creating new products that change reality for the rest of us. And doing it on a regular basis.

And backlogs? Right at the boundary, in the hand of the Product Owner.

February 11, 2014