Potemkin Village Agile



An interesting trend I’m noticing as Agile becomes more popular is teams that say and do things that look good, but don’t seem to be accomplishing much or going anywhere. There are two types of these teams: Cargo Cult teams and Potemkin Village teams. You’ve probably heard of the first one, but the second one might be new to you.

“Potemkin Village Agile” is where everything is set up to appear as if the team has a high level of agility, but in fact it’s just another old-school project. Note that this is different from Cargo Cult Agile. Cargo Cult Agile is where teams go through the motions because they believe that, even without understanding what they’re doing, somehow magic will happen and things will get better.

Potemkin Village Agile is under no such illusions. The goal here is just to create a nice-looking facade so that managers or whoever else will get off the team’s back so they can “get some real work done”.

These are the guys that show up at the nightclub in the cool car and expensive clothes — all of it rented. They’re the ones who want to appear smart online on a certain topic so they Google something and just rehash what Wikipedia says. These are Agile posers. In some ways, as long as they’re honest with themselves, they’re actually much better off than the Cargo Cult guys, because at least they’re under no illusion that any of it is going to amount to much.

Cargo Culters, on the other hand, are true believers who want to make it look like X because once it looks like X all of our problems will be solved. Potemkin Villagers are just putting on a puppet show for anybody who comes by to visit.

    In either case, it’s not unusual to be presented with a team that looks like it’s doing the right things but where performance sucks and folks outside the team don’t understand why. I thought it would be interesting to put together a quick list to sort out the Cargo Cult/Potemkin Village folks from the folks who may just be having a bad sprint or two. [Standard Disclaimer: As with any other list, I'm not saying teams have to conform to any of this. I'm not saying these things describe the perfect Agile team.] If you’re presented with a team that’s supposed to have a lot of agility but isn’t getting the results they’re supposed to get, take a look at this list and see what matches up and what doesn’t. You might have a CC or PVA situation on your hands.

  1. Is the team physically working alongside each other, or are they retreating to cubes or solo areas?

  2. During whatever daily chat the team has, are they focused on advancing the work, or finding work to fit whatever job roles each of them thinks they have?

  3. Do people code together, writing tests before they write solutions? And by “together”, I mean: is there a customer in the room working along with everybody else?

  4. Is the room quiet, like a library, where everybody is in their own cave, or is there a constant low-key burble while people are having a good time?

  5. Are these people you would want to hang out with?

  6. Does the team talk about important things like helping people, or are they focused on tools and process? Does most of the conversation revolve around solutions and benefits?

  7. Is everybody automatically “sharpening the saw” — making the build faster, refactoring old code, figuring out how not to have the same config issue twice — or is everybody just doing whatever is directly put in front of them without thinking of the bigger picture?

  8. Does this team look like a team that should be trusted by the people who are paying them? James Bonds or stapler guys?

  9. Does one person dominate everything, or do team members switch off during the day, leading or following as necessary to keep momentum going?

  10. Do team members easily admit ignorance and weakness to each other?

If you’re working with a Cargo Cult team, there’s a ton of literature out there about things to do. No need to rehash that here.

Oddly enough, I’ve seen several PVA teams that eventually turned out to be teams embracing real agility. It kinda goes like this: if they’re honest enough to make a goal of simply making it look good without lying to themselves or others, then they usually end up doing some things like standups or mobbing to keep up appearances. Funny thing, if you have an open mind and don’t actually expect anything, good or bad, pretty soon good things will happen — and you’ll be in a good mental place to take advantage of them. Whereas if you’re a Cargo Cult team, you’re thinking very rigidly. Many times you’re unable to see good stuff right in front of you because you’re focused on the wrong things.

If you’re working with a potential PVA team, you need to take some time to figure out whether the whole team is BSing all outsiders, i.e. a true PVA situation, or whether some folks are true believers and some folks are Potemkin Villagers. The two groups have completely different ideas of where they are and what needs to happen — and each group needs to be treated differently. Complicating things is the fact that it’s not unusual to have a mixed team. (A naive suggestion would be something like “Well, just sit them down and ask them all to identify which group they’re in”. Problem: PVA folks, by definition, just want to make it look good to outsiders. They are unlikely to want to confront other team members, much less some outsider, with their belief that it’s all just a charade.) Fun times.

And if you’re on a team full of posers? The trick with being a poser is just to have fun and be honest about it! The more honesty and fun you bring to it, the better chance you might accidentally end up doing some pretty cool stuff.

August 26, 2015

Real World F# Programming Part 2: Types

Ran into a situation last week that showed some more of the differences facing OO programmers moving to F#.

So I’ve got two directories. The program’s job is to take the files from one directory, do some stuff, then put the new file into the destination directory. This is a fairly common pattern.

To kick things off, I find the files. Then I try to figure out which files are in the source directory but not in the destination directory. Those are the ones I need to process. The code goes something like this:

doStuff, the initial version
let doStuff (opts:RipProcessedPagesProgramConfig) =
    let sourceDir = new System.IO.DirectoryInfo(opts.sourceDirectory.parameterValue)
    let filesThatMightNeedProcessing = sourceDir.GetFiles()
    let targetDir = new System.IO.DirectoryInfo(opts.destinationDirectory.parameterValue)
    let filesAlreadyProcessed = targetDir.GetFiles()
    let filesToProcess = filesThatMightNeedProcessing |> Array.filter(fun x->
        (filesAlreadyProcessed |> Array.exists(fun y->x.Name=y.Name)
        )
    )
    // DO THE "REAL WORK" HERE
    printfn "%i files to process" filesToProcess.Length
    ()

So I plopped this code into a couple of apps I’ll code later, then I went to work on something else for a while. Since it’s all live — but not necessarily visible to anybody — a few days later I took a look to see if the app thought it had any files to process.

It did not.

Now, of course, I can see that my Array.filter is actually backwards. I want to take the filesThatMightNeedProcessing and eliminate the filesAlreadyProcessed. What’s remaining are the filesToProcess. Instead, I check to see if the second set exists in the first. It does not, so the program never thinks there is anything to do. Instead of Array.exists, I really need something like Array.doesNotExist.

So is this a bug?

I’m not trying to be cute here, but I think that’s a matter of opinion. It’s like writing SQL. I described a transform. The computer ran it correctly. Did I describe the correct transform? Nope. But the code itself is acting correctly. I simply don’t know how many files might need processing. There is no way to add a test in here. Tests, in this case, would exist at the Operating System/DevOps level. So let’s put off testing for a bit, because it shouldn’t happen here. If your description of a transform is incorrect, it’s just incorrect.

So I need to take one array and “subtract” out another array — take all the items in the first array and remove those items that exist in the second array. Is there something called Array.doesNotExist?

No there is not.

Meh.

Ok. What kind of array do I have? Intellisense tells me it’s a System.IO.FileInfo[].

My first thought: this cannot be something that nobody else has seen. I’m not the first person doing this. This is just basic set operations. So I start googling. After a while, I come across this beautiful class called, oddly enough, “Set”. It’s in Microsoft.FSharp.Collections. Damn, it’s a sweet-looking class. It’s got superset, subset, contains, difference (which is what I want). It’s got everything.

So, being the “hack it until it works” kind of guy that I am, I look at what I have: an array of these FileInfo things. I look at what I want: a set. Can’t I just pipe one to the other? Something like this?

[Screenshot: 2014-09-15 fsharp 1 — piping the FileInfo array straight into a Set and getting a compiler error about System.IComparable]

What the hell? What’s this thing about System.IComparable?

In order for the Set module to work correctly, it needs to be able to compare items inside your set. How can it tell if one thing equals another? All it has is a big bag of whatever you threw in there. Could be UI elements, like buttons. How would you sort buttons? By color? By size? There’s no right way. Integers, sure. Strings? Easy. But complex objects, like FileInfo?

Not so much.
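
Here’s a quick sketch of what that constraint looks like in practice (not the actual program code). File names are plain strings, and strings compare just fine; the same call on the FileInfo array won’t even compile:

// Strings support comparison, so this compiles and runs:
let processedNames = [| "a.html"; "b.html" |] |> Set.ofArray

// System.IO.FileInfo doesn't implement System.IComparable, so the equivalent
// call on the FileInfo array is rejected at compile time (error wording approximate):
//     let processedSet = filesAlreadyProcessed |> Set.ofArray
//     // error FS0001: the type 'FileInfo' does not support the 'comparison' constraint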

As it turns out, this is a common pattern. In the OO world, we start by creating a type, say a Plain Old Java Object, or POJO. It’s got a constructor, 2 or 3 private members, some getters and setters, and maybe a few methods. Life is good.

But then we want to do things. Bad things. Things it was never meant to do. Things involving other libraries. Things like serializing our object, comparing it to others, adding two objects together. It’s not enough that we have a new type. We need to start fleshing out that type by supporting all sorts of standard methods (interfaces). If we support the right interfaces, our object will magically work with the libraries other people write to do the things we want.

Welcome to life in the world of I-want-to-make-a-new-type. Remember that class you had with three fields? Say you want to serialize it? You add in the interface IPersist. Now you have a couple more methods to fill out. Have some resources that must be cleaned up? Gotta add in IDisposable. Now you have another method to complete. Handling a list of something somebody else might want to walk? Plop in IEnumerable. Now you have even more methods to complete.

This is life in OO-land and frankly, I like it. There’s nothing as enjoyable as creating a new type and then fleshing it all out with the things needed to make it part of the ecosystem. Copy constructors, operator overrides, implicit conversion constructors. I can, and have, spent all day or a couple of days creating a fully-formed, beautiful new type for the world, as good as any of the CLR types. Rock solid stuff.

But.

Funny thing, I’m not actually solving anybody’s problem while I’m doing this. I’m just fulfilling my own personal need to create order in the world. Might be nice for a hobby, but not so much when I’m supposed to stay focused on value.

There’s also the issue of dependencies which is the basis for much of the pain and suffering in OO world. Now that my simple POJO has a dozen interfaces and 35 methods, what the hell is going on with the class when I create method Foo and start calling it? Now I’ve got all these new internal fields like isDirty or versionNum that are connected to everything else.

You make complex objects, you gotta do TDD. Otherwise, you’re just playing with fire. Try putting a dozen or so of these things together. It works this time? Yay! Will it work next time? Who knows?

This is the bad part of OO — complex, hidden interdependencies that cause the code to be quite readable but the state of the system completely unknown to a maintenance programmer. (Ever go down 12 levels in an object graph while debugging to figure out what state something is in? Fun times.)

So my OO training, my instinct, and libraries themselves, they all want me to create my own type and start globbing stuff on there. This is simply the way things are done.

DO NOT DO THIS.

Instead, FP asks us a couple of questions: First, do I really need to change my data structures? Because that’s going to be painful.

No. Files are put into directories based on filename. You can’t have two files in the same directory with the same name. So I already have the data I need to sort things out. Just can’t figure out how to get to it.

Second: What is the simplest function I can write to get what I need?

Beats me, FP. Why do you keep asking questions? Look, I need to take what I have and only get part of the list out.

I spent a good hour thrashing here. You get used to this. It’s a quiet time. A time of introspection. I stared out the window at a dog licking its butt. I wanted to go online and find somebody who was wrong and get into a flame war, but I resisted. At some point I may have started drooling.

In OO you’re always figuring out where things go and wiring stuff up. Damn you’re a busy little beaver! Stuff has to go places! Once you do all the structuring and wiring? The code itself is usually pretty simple.

In FP you laser directly in on the hard part: the code needed to fix the problem. Aside from making sure you have the data you need, the hell with structure. That’s for refactoring. But this means that all the parts are there at one time. Let me repeat that. THE ENTIRE PROBLEM IS THERE AT ONE TIME. This is a different kind of feeling for an OO guy used to everything being in its place. You have to think in terms of data structure and function structure at the same time. For the first few months, I told folks I felt like I was carrying a linker around in my head. (I still do at times)

Eventually I was reduced to muttering to myself “Need to break up the set. Need to break up the set.”

So I do what I always do when I’m sitting there with a dumb look on my face and Google has failed me: I start bringing up library classes, hitting the “dot” button, and letting the IDE show me what each class can do.

I am not proud of my skills. But they suffice.

Hey look, the Array class also has Array.partition, which splits up an array. Isn’t that what I want? I need to split up an array into two parts: the part I want and the part I do not want. I could have two loops. On the outside loop, I’ll spin through all the files in the input directory. In the inside loop, I’ll see if there’s already a file with the same name in the output directory. The Array.partition function will split my array in two pieces. I only care about those that exist in the input but not the output. Something like this:

Hey! It works!
let doStuff (opts:RipProcessedPagesProgramConfig) =
    let sourceDir = new System.IO.DirectoryInfo(opts.sourceDirectory.parameterValue)
    let filesThatMightNeedProcessing = sourceDir.GetFiles()
    let targetDir = new System.IO.DirectoryInfo(opts.destinationDirectory.parameterValue)
    let filesAlreadyProcessed = targetDir.GetFiles()
    let filesToProcessSplit = filesThatMightNeedProcessing |> Array.partition(fun x->
        (filesAlreadyProcessed |> Array.exists(fun y->y.Name=x.Name))
        )
    // partition: fst = files already in the target directory, snd = files not there yet
    let filesToProcess = snd filesToProcessSplit

    // DO THE "REAL WORK" HERE
    printfn "%i files to process" filesToProcess.Length
    ()

Well I’ll be danged. Freaking A. That’s what I needed all along. I didn’t need a new class and a big honking type system hooked into it. I just needed to describe what I wanted using the stuff I already had available. My instinct to set up structures and start wiring stuff would have led me to OO/FP interop hell. Let’s not go there.

So if I’m not chasing things down to nail them in exactly one spot, how much should I “clean up”, anyway?

First, there’s Don’t Repeat Yourself, or DRY. Everything you write should be functionally-decomposed. There’s no free ride here. The real question is not whether to code it correctly, it’s how much to genericize it. All those good programming skills? They don’t go anywhere. In fact, your coding skills are going to get a great workout with FP.

I have three levels of re-use.

First, I’ll factor something out into a local structure/function in the main file I’m working with. I’ll use it there for some time — at least until I’m happy it can handle different callers under different conditions. (Remember it’s pure FP. It’s just describing a transform. Complexity is bare bones here. If you’re factoring out 50-line functions, you’re probably doing something wrong.)

Second, once I’m happy I might use it elsewhere, and it needs more maturing, I’ll move it up to my shared “Utils” module, which lives across all my projects. Then it gets pounded on a lot more, usually telling me things like I should name my parameters better, or handle weird OS error conditions in a reasonable way callers would expect. (You get a very nuanced view of errors as an FP programmer. It’s not black and white.)

Finally, I’ll attach it to a type somewhere. Would that be some kind of special FileInfo subtype that I created to do set operations?

Hell no.

As I mature the function, it becomes generic, so I end up with something that subtracts one kind of thing from another. In fact, let’s do that now, at least locally. That’s an easy refactor. I just need a source array, an array to subtract, and a function that can tell me which items match.

subtractArrays. Good enough for now.
let subtractArrays sourceArray arrayToSubtract f =
    // Keep only the items in sourceArray that have no match (per f) in arrayToSubtract.
    let itemSplit = sourceArray |> Array.partition(fun x->
        (arrayToSubtract |> Array.exists(fun y->(f x y)))
        )
    snd itemSplit

let doStuff (opts:RipFullPagesProgramConfig) =
    let sourceDir = new System.IO.DirectoryInfo(opts.sourceDirectory.parameterValue)
    let filesThatMightNeedProcessing = sourceDir.GetFiles()
    printfn "%i files that might need processing" filesThatMightNeedProcessing.Length
    let targetDir = new System.IO.DirectoryInfo(opts.destinationDirectory.parameterValue)
    let filesAlreadyProcessed = targetDir.GetFiles()
    printfn "%i files already processed" filesAlreadyProcessed.Length
    let filesToProcess = subtractArrays filesThatMightNeedProcessing filesAlreadyProcessed (fun x y->x.Name=y.Name)
    printfn "%i files to process" filesToProcess.Length
    ()


Note the lack of types. Do I care what kind of array either the source or the one to subtract is? No. I do not. All I care is if I can distinguish the items in them. Hell, for all I care one array can be that System.IO.FileInfo thing, the other array can be filenames. What does it matter to the problem I’m solving?
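
To show what that buys you, here’s a little usage sketch (the path and names are made up): one array holds FileInfo, the other holds plain strings, and subtractArrays doesn’t care.

// One array of FileInfo, one plain array of names; the matcher bridges the two.
let sourceDir = new System.IO.DirectoryInfo("/var/newspaper23/input")   // hypothetical path
let alreadyDoneNames = [| "index.html"; "about.html" |]                 // made-up names
let stillToDo =
    subtractArrays (sourceDir.GetFiles()) alreadyDoneNames
        (fun (f:System.IO.FileInfo) name -> f.Name = name)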

What’s that sound? It’s the sound of some other FP guy busy at his computer, sending me a comment about how you could actually do what I wanted in 1 line of code. That’s fine. That’s the way these things work — and it’s why you don’t roll things up into types right away. Give it time. The important thing was that I stayed pure FP — no new data, no mutable fields, no for/next loops. I didn’t even use closures. As long as I stay clean, the code will continue to “collapse down” as it matures. Fun stuff. A different kind of fun than OO.

So where would this code end up, assuming it lives to become something useful and re-usable? In the array type, of course. Over time, functions migrate up into CLR types. If I want a random item from an array? I just ask it for one. Here’s the code for that.

Make Arrays give you random items
  1. type 'a “[]“ with
  2.     member x.randomItem =
  3.         let rnd = new System.Random()
  4.         let idx = rnd.Next(x.Length)
  5.         x.[idx]
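
Usage reads like this (the values are made up):

// Pick a random user agent for the next page request.
let userAgents = [| "Mozilla/5.0"; "Wget/1.20"; "curl/7.68.0" |]
let agent = userAgents.randomItem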


Let me tell you, that was a painful function to work through! Happy I don’t have to ever worry about it again. Likewise, if I need to know how many times one string is inside another? I’ve got a string method for that. Basically anything I need to use a lot, I’ve automated it.
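
The shape of that kind of string extension is roughly this (a sketch; the member name here is hypothetical, not the one I actually use):

type System.String with
    // Count non-overlapping occurrences of sub inside this string.
    member x.CountOccurrences (sub:string) =
        if System.String.IsNullOrEmpty(sub) then 0
        else
            let rec loop (fromIndex:int) count =
                match x.IndexOf(sub, fromIndex) with
                | -1 -> count
                | i -> loop (i + sub.Length) (count + 1)
            loop 0 0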

Over time, this gives me 40-50 symbols to manipulate in my head to solve any kind of problem. So while the coding part makes my brain hurt more with FP, maintenance and understanding of existing code is actually much, much easier. And with pure FP, everything I need is right there coming into the function. No dependency hell when I debug. It’s all right there in the IDE. Not that I debug using the IDE that much.

So does that mean I never create new types? Not at all! But that’s a story for another day…

 

 

September 15, 2014

Real World F# Programming Part 1: Structuring Your Solution

I have several “side project” apps I work on throughout the year. One of those is Newspaper23.com. I have a problem with spending too much time online. Newspaper23 is supposed to go to all the sites I might visit, pull down the headlines and a synopsis of the article text. No ads, no votes, no comments, no email sign-ups. Just a quick overview of what’s happening. If I want more I can click through. (Although I might remove external links at a later date)

Right now newspaper23.com pulls down the headlines from the sites I visit and attempts to get text from the articles. It’s not always successful — it probably gets enough text for my purposes about 70% of the time. There’s some complicated stuff going on. And it doesn’t get the full text or collapse it into a synopsis yet. That’s coming down the road. But it’s a start. It goes through about 1600 articles a day from about a hundred sites. People are always saying F# isn’t a language for production, and that you have to have some kind of framework/toolset to do anything useful, but that’s not true. I thought I’d show you how you can do really useful things with a very small toolset.

I’m running F# on Mono. The front-end is HTML5 and jQuery. There is no back-end. Or rather, the back end is text files. Right now it’s mostly static and supports less than a thousand users, but I plan on making it interactive and scaling up past 100K users. Although I have a brutally-minimal development environment, I don’t see any need to change the stack to scale up massively. Note that this app is a part-time hobby where I code for a few days just a couple of times a year. I don’t see my overall involvement changing as the system scales either. Server cost is about 40 bucks a month.

I come from an OO background so all of this is directed at all you Java/.NET types. You know who you are :)

 

Code Snippet
module Types
    open HtmlAgilityPack
    open System.Text.RegularExpressions
    type 'a ``[]`` with
        member x.randomItem =
            let rnd = new System.Random()
            let idx = rnd.Next(x.Length)
            x.[idx]
    type System.String with
        member x.ContainsAny (possibleMatches:string[]) =
            let ret = possibleMatches |> Array.tryFind(fun y->
                x.Contains(y)
                )
            ret.IsSome
        member x.ContainsAnyRegex(possibleRegexMatches:string[]) =
            let ret = possibleRegexMatches |> Array.tryFind(fun y->
                let rg = new System.Text.RegularExpressions.Regex(y)
                rg.IsMatch(x)
                )
            ret.IsSome

(Yes, there will be code in this series)
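
Here’s roughly how those extensions read at a call site, assuming the Types module above is open (the URL and patterns are made up for illustration):

let url = "http://example.com/story?id=123"

// Skip obvious non-article links.
let skipIt = url.ContainsAny [| "/login"; "/signup"; "mailto:" |]        // false here

// Keep links that look like article pages.
let looksLikeArticle = url.ContainsAnyRegex [| @"story\?id=\d+" |]       // true here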

The game here is small, composable executables. FP and small functions mean less code. Less code means fewer bugs. Deploy that smaller amount of code in smaller chunks and that means less maintenance. Little stand-alone things are inherently scalable. Nobody wonders whether they can deploy the “directory function” on multiple servers. Or the FTP program. Big, intertwined things are not. Start playing around sometime with sharding databases and load balancers. Ask some folks at the local BigCorp if they can actually deploy their enterprise software on a new set of servers easily.

I’ve found that you’ll write 4 or 5 good functions and you’re done with a stand-alone executable. In OO world you’ll spend forever wiring things up and testing the crap out of stuff to write the same code spread out over 8 classes. Then, because you’ve already created those classes, you’ll have a natural starting point for whatever new functionality you want. Which is exactly the opposite direction FP takes you in. In OO, the more you try to do, the more structure you have, the more structure, the more places to put new code, the more new code, the more brittle your solution. (And I’m not talking buggy, I’m talking brittle. This is not a testing or architecture issue. Composable things have composable architectures. Static graphs do not. [Yes, you can get there in OO, but many have ventured down this path. Few have arrived.])

The O/S or shell joins it all together. That’s right, you’re writing batch files. Just like a monad handles what happens between the lines of code in an imperative program, the shell is going to handle what happens between the composable programs in your project. A program runs. It creates output. The shell runs it at the appropriate time, the shell moves that output to where it can be processed as input by the next program, the shell monitors that the executable ran correctly. There is no difference between programming at the O/S level and at the application level. You work equally and in the same fashion in both.

Ever delete your production directory? You will. Having it all scripted makes this a non-event. You make this mistake once. Then you never make it again. DevOps isn’t some fashionable add-on; it’s just the natural way to create solutions. Automate early, and continue automating rigorously as you go along. DRY applies at all levels up and down the stack.

Each executable has its own script file. There are config files, and cron hooks it all up. This means CPU load, data size, and problem difficulty are all separate issues. Some of you might be thinking, isn’t this just the bad old batch days? Are we killing transactional databases? No. In a way, the “bad old days” never left us. We just had little atomic batches of 1 that took an indeterminate amount of time because they had to traverse various object graphs across various systems. We can still have batches of 1 that run immediately — or batches of 10 that run every minute — or batches of n that run every 5 minutes. It’s just that we’re in control. We’re simply fine-tuning things.

Note that I’m not saying don’t have a framework, database, or transactions. I’m saying be sure that you need one before you add one in. Too often the overhead from our tools outweighs the limited value we get, especially in apps, startups, and hobby projects. If you’re going to have a coder or network guy tweaking this system 3 years from now, it’s a different kind of math than if you’re writing an app on spec to put in the app store. Start simple. Add only the minimum amount you need, and only when you need it.

One of the things you’re going to need that’s non-negotiable is a progressive logging system that can be tweaked from the command line. Basically just a printfn. I ended up creating something with about 20 LOC that works fine. Batch mode? All you need to see is a start/stop/error summary. Command-line debug? You might need to see everything the program is doing. Remember the goal: as much as possible, you should never touch the code or recompile. Every time you open the code up is a failure. If you can make major changes to the way the system works from the shell, you’re doing it right. If you can’t, you’re not.
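
Here’s a minimal sketch of the idea (the type and names are illustrative, not the actual 20 lines):

type Verbosity = Quiet | Normal | Verbose

// Print a message only when its level is within what the command line asked for.
let log (configured:Verbosity) (level:Verbosity) (msg:string) =
    let rank = function Quiet -> 0 | Normal -> 1 | Verbose -> 2
    if rank level <= rank configured then printfn "%s" msg

// Usage: partially apply with whatever verbosity the command line specified.
// let logit = log Verbose
// logit Normal "1600 articles pulled"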

Many of you might think that you’re turning your code into a mess of printf statements like the bad old days. But if that’s what you’re doing, you’re doing it wrong. Each program shouldn’t have that many flags or things to kick out. Remember: you’re only doing a very small thing. For most apps, I’d shoot for less than 300 LOC, and no more than 10-20 code paths. All code paths can be logged off or on from the command line. To make this workable in a command-line format without creating a “War and Peace” of option flags means cyclomatic complexity must be fairly low. Which also keeps bugs down. Natch.

Of course Windows and various OO programs and frameworks have logging features too, but they run counter to the way things are done here. Usually you have to plug into a logging subsystem, throw things out, then go to some other tool, read the messages, do some analysis, and so on. In Linux with command-line stuff, the logging, the analysis, and changing the code to stop the problem all happen from the same place: the command line. There’s no context-switching. Remember, most errors should be just configuration issues, not coding issues.

One of the ways I judge whether I’m on-track or not is the degree of futzing around I have to do when something goes wrong. Can I, from a cold start (after weeks of non-programming), take a look at the problem, figure out what’s happening, and fix it — all within 10 minutes or so? I should be able to. And, with only a few exceptions, that’s the way it has been working.

The way I do this is that I have the system monitor itself and report back to me whether each program is running and producing the expected output. Every 5 minutes it creates a web page (which is just another output file) that tells me how things are going. In the FP world, each program is run several different ways with different inputs. Common error conditions have been hammered out in development. So most times, it runs fine in all but one or two cases. So before I even start I can identify the small piece of code and the type of data causing problems. The biggest part of debugging is done by simply looking at a web page while I eat my corn flakes.

System tests run all the time, against everything. It’s not just test-first development. It’s test-all-the-time deployment. Fixing a bug could easily involve writing a test in F#, writing another test at the shell level, and writing a monitor that shows me if there’s ever a problem again. Whatever you do, don’t trade monolithic OO apps for monolithic shell apps. FP skills and thinking are really critical up and down the toolchain here.

It’s interesting that there’s no distinction between app and shell programming. This is good for both worlds. Once we started creating silos for application deployment, we started losing our way.

Instead of aiming for rock-solid apps, failure is expected, and I’m not just talking about defensive programming. Pattern-matching should make you think about all paths through a program. Failure at the app level should fit seamlessly into the DevOps ecosystem, with logging, fallbacks, reporting, resilience. Can’t open a file? Can’t write to a file? Who cares? Life goes on. There are a thousand other files. Log it and keep rolling. It could be a transient error. 99% of the time we don’t work with all-or-nothing situations.
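
In practice that’s nothing fancier than wrapping the per-file work so one bad file can’t take down the run. A sketch (processFile stands in for whatever the real work is):

// One bad file gets logged and skipped; the other 999 keep flowing.
let tryProcessFile (processFile: System.IO.FileInfo -> unit) (file: System.IO.FileInfo) =
    try
        processFile file
        true
    with ex ->
        printfn "skipped %s: %s" file.Name ex.Message
        false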

As much as possible, you should check for file system and other common errors before you even start the work. My pattern is to load the command-line parameters, load the config file (if there is one), then check to make sure I have all the external pieces I need — the actual files exist, the directories are there, and so on. This is stuff I can check early, before the “real” code, so I do it. That way I’m not in the middle of some really complex function only to discover I forgot to check whether I even have an input file. By the time I get to the work, I know that all of my tools are in place and ready. This allows me to structure solutions much better.
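
A rough sketch of that preflight idea (the directory parameters are placeholders, not the real config types):

// Check the external pieces up front; only run the real work if they're all there.
let preflight (sourceDir:string) (destinationDir:string) =
    [ "source directory exists",      System.IO.Directory.Exists sourceDir
      "destination directory exists", System.IO.Directory.Exists destinationDir ]

let runIfReady sourceDir destinationDir realWork =
    let missing = preflight sourceDir destinationDir |> List.filter (snd >> not)
    if missing.IsEmpty then realWork ()
    else missing |> List.iter (fst >> printfn "preflight failed: %s")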

[Screenshot: 2014-09 metrics page]

 

For instance I woke up this morning and looked at the stats. Looks like pulling things from medium.com isn’t working — and hasn’t been working for several days. That’s fine. It’s an issue with the WebClient request headers. So what? I’ll fix it when I feel like it. Compare this to waking up this morning with a monolithic server app and realizing the entire thing doesn’t run because of intricate (and cognitively hidden) dependencies between functions that cause the failure of WebClient in one part to prevent the writing of new data for the entire app in another part. Just getting started with the debugging from a cold start could take hours.

Note that a lot of this DevOps stuff can sound overwhelming. The temptation is to stop and plan everything out. Wrong answer. It’s an elephant. Planning is great, but it’s critical that you eat a little bit of the elephant at a time. Never go big-bang, especially in this pattern, because you really don’t know the shape of the outcome at the DevOps level. [Although there may be some corporate patterns to follow. Insert long discussion here]

Next time I’ll talk about how the actual programming works: how code evolves over time, how to keep things simple, and how not to make dumb OO mistakes in your FP code.

September 8, 2014

My Agile 2014 Book Report

[Photo: Agile 2014 — Weasels: For or Against]

I don’t do conferences. The last Agile conference I was at was five years ago, in 2009. So although I’ve been engaged in the work, I haven’t spent much time with large groups of Agile practitioners in some while. I thought it might be useful to folks if I wrote down my observations about the changes in five years.

The Good

  • We’re starting to engage with BigCorp environments in a more meaningful way. There’s still a lot of anger and over-promising going on, but the community is grudgingly accepting the fact that most Agile projects exist inside a larger corporate structure. If we want to have a trusting, healthy work environment, we’re going to need to be good partners.

  • Had one person come up to me and say something like “You know, you’re not the asshole I thought you were from reading you online.” It would do well for all of us to remember that for the most part, folks in the community are there to help others. It’s easy to be misunderstood online. It’s difficult to always assume kindness. Being snarky is just too much fun sometimes, and people don’t like having their baby called ugly. In fact, it’s probably impossible to fully engage with people online the way we do in person. We should know this! :)

  • I’m continuing to see creative things emerge from the community. This is the coolest part about the Agile community: because we don’t have it all figured out, there is a huge degree of experimentation going on. Good stuff.

The Bad

  • In many ways, Agile has lost its way. What began as a response by developers to the environments they found themselves in became a victim of its own success. It’s no longer developers finding new ways of developing software. It’s becoming Agile Everything. I don’t have a problem with that — after all, my 2009 session was on Agile in Non-Standard Teams — but there’s going to be a lot of growing pains.

  • The dirty secret is that in most cases (except for perhaps the biz track?) the rooms fill with folks who already agree with the speaker. But speakers spend time justifying their position anyway. For such a large group, there was quite a bit of clannishness. Sessions were already full of cheerleaders. It might be good to clearly understand whether we’re presenting something to the community for their consideration — or presenting something they already love and showing how to get others to like it. These are incompatible goals for the same session.

  • Maybe it was just me, but for such a relaxed group of facilitators, there was quite a bit of tension just under the surface. For a lot of folks, the conference meant a big chance to do something: to get the next gig, to meet X and become friends, to hire for the next year, to start a conversation with a key lead. It was all fun and games, but every so often the veil would slip a bit and you’d see the stress involved. I wish all of those folks much luck.

The Culture

  • Dynamic Open Jam areas were awesome. Even though nobody cared about my proposed session on Weasels, I thoroughly enjoyed them.

  • I saw something very interesting in Open Jam on Wednesday. We were all doing presentation karaoke. A big crowd had formed to watch and participate; perhaps 40 folks. But our time was up. So the leader of the freeform session said, “Our time is up, we should respect the next person, who is here to talk about X.” The guy gets up, and somebody from the crowd says, “Hey! Why don’t we just combine the two things?” So we spent another five minutes doing both presentation karaoke and talking about the new topic. That way, we maximized the number of people that stayed involved, while at the same time switching speakers. It was a nice example of both being respectful and adapting to changing conditions.

  • The party on the last night was most enjoyable. I think this was the most relaxed state that I saw folks in. Not sure if the alcohol had anything to do with that :) Lots of great conversations going on.

  • Where did all the developers go? Maybe it was just me, but it seemed like there was a lot more “meta” stuff presented. It didn’t seem like there was as much technical stuff.



Budgeting? Strategic alignment? Huh? Who let the managers into this place?

Good and Bad

  • People really hate SAFe (the Scaled Agile Framework, a detailed guide supposedly describing how to run teams of teams in an Agile manner) — to the point that some speakers had a shtick of openly mocking it. I’m process agnostic — I don’t hate anything, and all I want is to help folks. SAFe, like anything else, has good and bad parts. Keep the good parts, ditch the bad parts. But for some, SAFe seems like a step backwards.


    What concerns me about watching both sides of this is the emotional investment both groups have in already knowing how things are going to turn out without the necessarily huge sample size it would take to determine this for the industry as a whole. One group might think “Why of course Agile is going to have to evolve into more traditional product management. How else would it work?” The other might think “Why of course we would never put so much structure into what we do. That’s what prompted us to become Agile in the first place.”


    Look, I don’t know. Give me 1,000 examples of SAFe actually being deployed — not some arcane discussion about what the textbook says but how it actually works in the real world — and I can start drawing some conclusions. Until then? This is just a lot of ugliness that I’m not sure serves a greater purpose. Sad.


  • UX, or figuring out what to build, is making waves. Some folks love it, some folks think we’re back to imposing waterfall on the process. I tend to think a) because it takes the team closer to value creation it’s probably the most important thing the community has going right now, and b) it’s just not baked enough yet. At least not for the rest of us. (I don’t mean that practitioners don’t know what they are doing. My point is that it is not formed in such a way that the Agile community can easily digest it.) That’s fine with me, but not so much with others. I’m really excited about seeing more growth in this area.

Summary

We are realizing that any kind of role definition in an organization can be a huge source of impediment for that organization growing and adapting. You’re better off training engineers to do other things than you are bringing in folks who do other things and expecting them to work with engineers. So much of everything can be automated, and whatever your role is, you should be automating it away.

Having said that, I don’t think anybody really knows what to do with this information. We already have a huge workforce with predefined roles. What to do with them? Nobody wants to say it directly, but there it is: we have the wrong workforce for the types of problems we need to be solving.

Finally, it’s very difficult to be excited about new things you’re trying and at the same time be a pragmatist about using only what works. It’s possible, but it’s tough. If Agile is only love and goodness, then you’re probably doing it the wrong way. Agile is useful because the shared values lead us into exploring areas we are emotionally uncomfortable with. Not because it’s a new religion or philosophy to beat other people over the head with. It should be telling you to try things you don’t like. If not, you’re not doing it the right way. Enough philosophy (grin).

August 5, 2014

Agile Memes, Part 1

Over the last couple of weeks, I’ve had some fun combining Agile concepts, humor, and memes. I thought this might be something you could find useful sharing with your team.

  • “Burn down in a straight line”

  • “Didn’t deliver all the stories we promised; we’ll have to ferret that out”

  • “Wanted more agility, got more tools”

  • “What if I told you that Agile practices…”

  • “Your kanban has got kanban in it”

  • “You just stood in the stand-up and told us how you’re doing; y u no update your task card?”

  • “You can’t just change the acceptance criteria the day before the demo”

May 26, 2014

Words Don’t Actually Mean Anything

(The following is an excerpt from the upcoming “Backlogs 2” e-book)


Aristotle would have made a terrible programmer because words don’t actually mean anything. Philosophers in general are a big pain in the ass. They’re also responsible for civilization and modern life.

That’s the conclusion you’ve reached after spending all day Saturday at the library. After the pounding your brain has taken, you are looking forward to some down time.

Aristotle lived over two thousand years ago. Back in the day, philosophers like Aristotle were a sort of cross between motivational speaker, law professor and librarian. They could read and write; and they were where you sent your kids when you wanted them to learn how to live a good life, what good and bad were, and how to succeed.

The Sophists were big back then. Sophists believed that justice and meaning were relative and that it was the art of social communication and persuasion that was important. They taught their students, sons of rich and affluent parents, how to argue both sides of an argument, manipulate the emotions of listeners, and handle themselves with an air of dignity. Basically, they taught all the skills you’d need to become and stay powerful in ancient Greece.

Sophists started off very popular, but by Aristotle’s time most people didn’t like them. First, they charged exorbitant amounts of money. Second, they championed the wishy-washy nature of things instead of actually taking a firm stand on anything. “Man is the measure of all things,” said a famous sophist. Folks wondered if sophists really stood for anything at all except for how to look good and become powerful.

Aristotle didn’t like them much either. He created his own school of philosophy, didn’t charge for it, and along the way invented science.

He thought that things had to have meaning. There was some external universal reality that we could learn, categorize, and reason about. It wasn’t all relative. You could use your senses to gain intuitive knowledge of the world around you. Knowledge could allow you to deduce how things work everywhere. There is a real world, with real cause and effect. Learning about universal truths gave us knowledge we could apply everywhere.

Things that we observe have attributes and actions. It was the reasoning about these attributes and actions over a few examples that gave rise to understanding of all the others of the same type. We can share this understanding by creating a body of knowledge that describes the universe. There was a universal idea of, say, a cow. Aristotle called this universal idea, the truest version of something possible, a “universal form”. Once we understood things about the universal form of a cow, we could then apply that knowledge to all actual cows we might meet.

Attributes helped us create a master categorization system. You divided the world into categories depending on what attributes things had. This is an animal. This is a plant. Animals have these attributes. Plants have these. This kind of plant has bark. This kind of plant has none. This kind of plant has five leaves. This kind of plant has three. By simply listing the things each plant owns, its attributes, we could start coming up with categories about what kinds of plants they were. Same goes for animals, and everything else, for that matter.

Same goes for actions. This rock falls from a great height. This dandelion seed floats away. Things we observe DO things. They have actions they perform, sometimes on their own, sometimes in response to stimulus. They behave in certain ways. Just like with attributes, by listing the types of actions various things could do, we continue developing our categorization system.

Eventually, once we’ve described all the attributes and actions of something, we determine exactly what it is, its universal form. Then we can use logic and deduction to figure out why it works the way it does. Once we know how the universal form works, we know how all the examples of universal forms we see in life work. If I know how one rock of coal acts when it is set on fire, I know how all rocks of coal will act. This is common sense to us, but Aristotle was the first to come up with it.

Aristotle’s lesson: there’s a master definition of things, a set of universal forms. We discover the meaning of the world by discovering the attributes and actions of things to figure out exactly what they are. If we understand what something is, and we understand how the universal form of that thing behaves, we understand the thing itself. Once we have exact definitions, we can reason about universal forms instead of having to talk about each item separately.

Categorizing by attributes and actions, understanding by deductive logic, having exact definitions, working with abstract universal forms — those ideas grew into everything we call science. Thousands of scientific disciplines came from Aristotle. Quite a legacy for some old Greek guy.

Categorizing the attributes and actions of things was fairly straightforward and could be used no matter what you were talking about. Creating master categorization systems and dictionaries wasn’t so hard. Deductive logic, on the other hand, got more and more complicated the more we picked at it.

Systems of reasoning greatly increased the growth of math. Throughout the centuries, smart people have longed for an answer to the question: if I know certain things, what other things can I use logic to figure out? They created different rigid and formal ways of reasoning about things that depended on universal forms. Each system had pros and cons. There was set theory, formal logic, symbolic logic, predicate calculus, non-euclidean geometry, and a lot more.

Philosophers devised hundreds of systems of reasoning about all sorts of things. They kept creating whole new branches of science as they went along. Some people were interested in how surfaces relate to each other. Some people were fascinated by the relationship between math and physical reality. Some people wanted to know more about what right and wrong were. Some people wanted to find out about diseases. Some people wanted to know about the relationship between truth and beauty. Each of these used the same core Aristotelian principles of categorization of universal forms using exact definitions, then reasoning about those forms.

Highly abstract formal logic in particular, invented by another philosopher guy named Bertrand Russell around 1900, led to something called Von Neumann machines. Von Neumann machines led to modern computers.

That’s right, aside from creating science in general, Aristotle’s ideas about deductive logic using universal forms led to an emphasis on logic, then formal logic, then the creation of computers — machines that operate based on rigid universal rules about what is true and false and how to act based on what is true or false.

This science thing was turning out to be a big hit. Meanwhile you never hear much about the sophists. “Sophist” is commonly used to describe somebody who has no sense of right and wrong and uses lots of weasel words.

But everything wasn’t skittles and beer.

First we had a problem with this thing that Aristotle did when he set things up, where he stepped outside of himself and reasoned about things at a universal level. There was a land above science, a meta-science, and philosophers were the folks who operated outside of science asking useful questions. Because Aristotle asked questions about important things, universal truths, we consider him one of the first great philosophers.

The problem was that using reason and logic to work at this higher, universal level caused as much confusion as positive change in the world.

Not to put too fine of a point on things, philosophy through the centuries has been full of really smart people with one or two good ideas that spend their entire lives making those ideas more and more complex and unwieldy. In a simple world, philosopher X comes up with a good idea about why the sky is blue. In reality, philosopher X comes up with a good idea about why the sky is blue, then spends 40 years and writes 200 papers (including 12 books) on the nature of the sky, what kind of blue is important, and how the blue in the sky is actually the key to fish migration in Eastern Tibet and 50 other odd and vaguely understood concepts which he feels you have to know because they are the key to truly understanding his work.

These were not simple-minded or crazy people. They were the smartest of their day. They were simply taking Aristotle’s idea that there are universal truths that we can discover using reason and logic and trying to take it to the next level.

So instead of simple ideas that spawned new sciences, it was more the case that philosophers came up with extremely complex and delicate theories about things that couldn’t be measured or talked about — except by other philosophers. The useful philosopher who spawned a new science was an oddball, and even in that case, most of the time he created as much confusion among later scientists and philosophers as he shined light on anything.

There was confusion over what different terms mean, over which parts of which philosophies apply to which things, if one philosopher actually agrees or disagrees with another one, and even what the philosopher meant when he was writing all that stuff. Frankly this gave most people the impression that all of philosophy was bullshit. That was a shame, because there was some really good stuff in there too. Every now and then it changed the world.

The confusion in terms, meaning, scope, and method of reasoning got philosophers asking questions about philosophy itself and how we actually knew stuff. Then things got really screwy. Philosophers asked if they could really know whether they existed or not, or whether, if they went to a swamp and were replaced by an exact duplicate, that new philosopher would be the same person. We had philosophers pushing fat people in front of trolleys and all sorts of other thought-experiment shenanigans. There was a lot of smoke but not much fire. Made for some great science fiction, though.

Second, and this was worse, the idea that we could figure out the mechanism of things took a quick hit, and it never recovered. As it turned out, almost all of the time, we could not figure out why things work the way they do. Remember Isaac Newton? He saw an apple fall from a tree and came up with the Law of Universal Gravitation. This was an astounding mathematical accomplishment that allowed us to do all kinds of neat things, like shoot rockets full of men to the moon. There was only one tiny problem: the law didn’t say _why_ gravity worked, it just gave equations for _how_ it would act. For all we knew there were tiny little gerbils running around inside of anything that had mass. Maybe magic dust. Newton didn’t know, and to a large degree, we still don’t know.

Or medicine. Doctors noticed certain patterns of observations, such as chimney sweeps in 18th-century England having unusually high rates of scrotal cancer. Some doctors speculated that chimney soot caused cancer. They stopped boys from working as chimney sweeps. The cancer rates dropped. New observations were made, rules guessed at, hypotheses tested. Over time the terms of the debate got finer and finer, finally settling on something approximating “We believe repeated chemical insults to cells by certain agents over time can cause some cells to spontaneously mutate, at times becoming new creatures which can survive and thrive in the host and take their life.” But we don’t know for sure. We don’t know why. There’s a ton of things we don’t know. All we can do is keep making more refined, tentative models that we then test. As these models get more refined, they get more useful for doing practical stuff in the real world, like reducing cancer rates. But we still don’t know the mechanism, the why.

There is a provisional guess, based on the observation of a lot of data. We keep gathering more data, creating possible rules that might explain the data, then creating testable hypotheses to test the rules, then testing the hypotheses, building a model. Then we start over again. This process loop of science is called abduction, deduction, and induction, and the guy who explained how it all worked is probably the most creative and insightful philosopher-scientist you’ve never heard of.

Charles Sanders Peirce was the smartest thinker in America in the late 1800s, and even fewer people know him than Frederick Winslow Taylor. Peirce was an unknown hero, a rogue, an outsider, discovering things years or decades before others, but rarely getting the credit for it because he never got attention for his work. In the 1880s, he was the first to realize that logical operations could be carried out by electrical circuits — beating the guy who got credit for it by almost 50 years. He was one of the founders of statistics, creating and using techniques that decades later were to be known as Bayesian Statistics. He was the grandfather of symbolic logic, and much debate still exists as to where Bertrand Russell got all the ideas he had when he went about creating formal logic. Many believe Peirce was shortchanged.

But, like many smart people, Peirce was also ill-tempered and prone to tick others off. His only academic job was at Johns Hopkins, where he lectured in logic for a few years. The enemies he made there followed him the rest of his life. When his marriage didn’t work out and he started seeing another woman, they had enough evidence to get him fired. A trustee at Johns Hopkins was informed that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married. And that, as they say, was the end of that. For one of the greatest thinkers of the late 19th century, it was the end of his academic career. He tried the rest of his life to find jobs at other academic institutions without success.

He lived as a poor man, purchasing an old farm in Pennsylvania with inheritance money but never making any money from it. He was always in debt. Family members repeatedly had to bail him out. Although he wrote prolifically, he was unpopular among the scientists of the day, so his work received no recognition. Only as time passed, as other famous scientists and philosophers slowly started giving credit to Peirce many decades later, did it finally become known what a great thinker he was. His “rehabilitation” continues to this day, as the study of Peirce has become an academic endeavor of its own. The man who couldn’t get a job at an academic institution and was considered a crank or crackpot now has scholars devoting their careers to studying his work.

Because of his history, Peirce had a unique outsider’s view. Although he was a philosopher, he always thought of himself as just a working scientist. As such, he saw his job as trying to make sense of the foundations of science, just like the philosophers. In his case, though, it was just to get work done.

To start organizing his work, he began by grouping the way we work with knowledge into two parts: speculative, which is the study of the abstract nature of things, and practical, which is the study of how we can gain control over things. Theoretical physics? Speculative. Applied physics? Practical. Philosophy? Speculative. Peirce taught that although the two types of investigations looked almost the same, they were completely different types of experiences. The scientist should never confuse the two.

This led him to create a new science called semiotics, which was interested in how organisms work with symbols and signs, external and internal, to do things. Every living thing uses signs and symbols to understand and manipulate the world, but nobody had ever studied how they did it.

Thinking about the importance of practical knowledge and manipulating the world around us led to his famous Pragmatic Maxim: “To ascertain the meaning of an intellectual conception one should consider what practical consequences might result from the truth of that conception — and the sum of these consequences constitute the entire meaning of the conception.”

That is, when approaching some system of symbols, reasoning, or meaning, we have to be prepared to abandon everything (philosophy, tradition, reason, pre-existing rules, and what-not) and ask ourselves: what can we use this for? Because at the end of the day, if we can’t use knowledge to manipulate the world around us effectively, it has no value. And, in fact, the only value knowledge has is the ability it provides for us to use it to manipulate the real world. Newton figured out how to model gravity but not the mechanism, and that’s okay. It’s far more important that we have models with practical uses than it is to debate the correctness of speculative systems. In fact, we can’t reason at all about speculative systems. The only value a system of symbols can have is being able to change things.

Mental ideas we work with have to _do something_. Pragmatists believe that most philosophical topics (the nature of language, knowledge, concepts, belief, meaning, and science) are best looked at in terms of their practical uses, not in terms of how “accurate” they are. Speculative thinking is nonsense in the philosophical sense. That is, we are unable to have any intelligent conversation about it one way or another.

The great pragmatists that followed Peirce took pragmatism everywhere: education, epistemology, metaphysics, the philosophy of science, indeterminism, philosophy of religion, instincts, history, democracy, journalism — the list goes on and on. As always, the sign of a true philosophical breakthrough is one which changes the universe, and Peirce’s pragmatism certainly qualifies.

Peirce’s lesson: in order for us to make sense of the chaos and uncertainty of science and philosophy, we have to hold both to a high standard of only concerning ourselves with the things we can use to change the world around us. This was far more important than arguing about being “right”.

This sounds very familiar to people in the Agile community. The dang thing has to work, consistently, no matter whether you did the process right or not. Right or wrong has nothing to do with it. It’s all about usefulness. A specification means nothing. It has no immediate effect. A test, on the other hand, is useful because it constantly tells us, by passing or failing, whether or not our output is acceptable. It has use.
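To make that concrete, here’s a minimal sketch in Python. Everything in it (the loyalty-discount rule, the numbers, the function names) is invented for illustration, not taken from any real project. The point is only the shape of the thing: a paragraph of specification describing the rule just sits there, while the tests run and answer “acceptable or not?” every single time.

```python
# Hypothetical example: a made-up loyalty discount rule, used only to show that
# a test has practical consequences (it passes or fails right now), while a
# written specification of the same rule has no immediate effect.

def loyalty_discount(order_total: float, years_as_customer: int) -> float:
    """Made-up rule: 5% off per year as a customer, capped at 20%."""
    rate = min(0.05 * years_as_customer, 0.20)
    return round(order_total * (1 - rate), 2)


def test_discount_is_capped_at_twenty_percent():
    # Ten years of loyalty still only earns the 20% cap.
    assert loyalty_discount(100.0, 10) == 80.0


def test_new_customers_pay_full_price():
    assert loyalty_discount(100.0, 0) == 100.0
```

Run it under pytest and you get a yes or a no. That’s the Pragmatic Maxim wearing a programmer’s clothes.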

Nobody ever heard of Peirce, laboring away in his farmhouse producing tons of papers they never read (at least until 100 years later, in some cases), but everybody knew Bertrand Russell and his star pupil Ludwig Wittgenstein, living large at the opposite end of the spectrum. They lived at the same time but in different worlds. Russell was a member of Britain’s nobility, well-respected, rich, and widely admired. Everybody considered Wittgenstein to be a genius, and he never wanted for anything.

Russell invented formal logic. He is considered the most influential philosopher of the 20th century. Wittgenstein wasn’t much of a slouch, either. Russell took reasoning to the highest levels man has ever reached with formal logic, yet, like always, there were a lot of pieces that didn’t fit together. Reasoning about right or wrong inside speculative systems was a waste of time, no matter how rigorous they were, as Peirce had shown decades earlier, but no one heard him.

Wittgenstein took it on himself to fix it. So he wrote a big book, the Tractatus, that he felt was the final solution to all the problems philosophy faced. He told his friends there wasn’t much more for him to do. Wittgenstein wasn’t much on the humble side, although better than most. (“Annoying genius philosopher guy” seems to be a recurring theme in these stories.)

After thinking about things some more, Wittgenstein realized that, surprise, he might have made some mistakes in the Tractatus. So he wrote another book, Philosophical Investigations, that wasn’t published until after his death in 1953. There was a good reason philosophy kept getting tangled up around itself. There was a good reason that we had difficulty separating the speculative and the practical. There was a good reason that philosophers had a good idea or two, then, by trying to tease it out using systems of logic, always ended up out in the weeds somewhere.

Wittgenstein solved the problem from the other end, assuming formal abstract systems had value and looking for fault elsewhere. What he came up with blew your mind. No wonder Collier was stumped by the consultants.

The problem here wasn’t science, or knowledge, or logic, or reason. It was both simpler and more profound. It was language. Human language. Human language was much more slippery than people realized. It wasn’t logic and reason that were broken, it was the idea of a universal categorization system and universal forms based on language. Human language gives us a sense of certainty in meaning that simply does not exist.

It can’t be made to exist, either. Aristotle’s universal forms might still be valid, but they’re not precisely approachable using human language.

Although we do it every day, we do not truly understand how people communicate with each other. In speculative efforts, where it was all theory, this led to much confusion, as one time a term was used to mean one thing, and another time it meant something slightly different, imperceptibly different. In our everyday lives, people unconsciously looked at whatever result they were shooting for instead of the language itself, so they didn’t notice. The exact meanings of words didn’t matter to them.

The reason this was important, the reason Collier sent you here, was that there was a special case where lots of theory and speculative talk ran headlong into a wall of whether it was useful or not, and did so on a regular basis: technology development. In technology development, business, marketing, and sales people spoke in abstract, fluffy, speculative terms, much like philosophers did. But at some point, that language had to translate into something practical, just like things in the practical sciences. And so the problems philosophy had been experiencing over and over again across years, decades, and centuries, where there were subtle differences in terms and different ways of looking at things that didn’t agree with each other? Technology development teams experienced those same problems in time periods measured in days, weeks, or months.

Technology development is a microcosm of philosophy and science, all rolled into a small period of intense effort. It’s science on steroids.

To illustrate the nature of language, Wittgenstein suggested a thought experiment.

Let’s assume you go to work as a helper for a local carpenter. Neither of you knows any language at all, but he knows how to build houses and you want to help. The first day you show up, you begin to play what’s known as a “language game”. He points to a rod with a weight on the end. He says “hammer”. You grab the hammer and hand it to him. This game, of him pointing, naming nouns, and you bringing those things to him, results in your knowledge of what words like “hammer” mean: they mean to pick up a certain type of object and bring it over. Maybe, to you, it means the red object.

The fact is, you don’t know what hammer means, at least not a lot of it. You only know enough to do the job you’ve been given. Another person playing a language game with another carpenter might think of “hammer” as being a rod with a black piece of metal at the end. As Peirce would remind us, that’s good enough. We have results. We have each played the language game to the point where we can gain usefulness from it. Later on, the carpenter might add to the game, showing you different locations and trying two-word sentences “hammer toolbox” or “hammer here.”

Each time you play the game, and each time the game evolves, you learn more and more about what some arbitrary symbol “hammer” means — in that particular language, that particular social construct.

The problem scientists, philosophers, and many others were having was given to us by Aristotle. Turns out science was a gag gift. The assumption was that one word has the same meaning for everybody. That language could represent some unchanging universal form. But the reality is that meaning in language is inherently individualistic, based on a thousand interactions over the years in the various language games that person has played. Sure, 99% of us could identify a hammer from a picture. That’s because most of us have a high degree of fidelity in our language games around the concept of “hammer.” But even then, some might think “claw hammer” or “ball peen hammer”, while to others there would be no distinction. Would you recognize every possible picture of a hammer as a hammer? Probably not.

Words don’t mean exactly the same things to different people. The kicker, the reason this has gone on so long without folks noticing, is that most of the time it doesn’t matter. Also, we pick up language games as infants and use them constantly, without thinking about it, all our lives. It’s part of our nature.

Where it does matter is when it comes time to convert the philosophical ideas of what might exist into the concrete reality of computer programs, which are built on formal logic — into a system of symbols that assumes that there are universal forms and that things are rigid and relate to each other in rigid and predefined ways.

Let’s say you were asked to write a customer account management program for a large retailer. Given a title “customer account management program”, would you have enough knowledge to write the program? Of course not. You would need more detail.

Being good little Taylorites, over the years we have tried to solve this problem by breaking it down into smaller and smaller processes which can then be optimized. It’s never worked very well. Now you know why.

Just like with Taylor’s Scientific Management and creative processes, it seemed that we could break down behavior and meaning into infinite detail, and there still would be lots of ambiguity remaining. There’s always some critical detail we left out. There’s always some industry context or overloaded jargon that gets omitted.

Suppose you have a spreadsheet. A stranger walks in the room and asks you to create a list with columns to account for customer information, then leaves you alone. Could you complete that task? Of course not. You would need more context. So how much would be enough?

You could look up “customer” in a dictionary, but that wouldn’t help. You could talk to programmers on the internet about what they keep track of for customers. That might help some, but it would be nowhere near being correct. In fact, there is no definition for customer in terms of formal computer logic for your particular application. There is no definition because it hasn’t been created yet. You and the stranger haven’t played any language games to make one. The term “customer” is nonsense to you, meaningless.
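To see how little the bare word buys you, consider a sketch. Both of the definitions below are invented for illustration; neither is “the” customer, and neither came from any real system. They are simply what two different teams might end up with after playing two different sets of language games.

```python
# Hypothetical example: two teams, two sets of language games, two incompatible
# answers to "what is a customer?". Every field name here is invented; neither
# class is more "correct" than the other, each is merely good enough for the
# conversations its team has actually had.
from dataclasses import dataclass, field


@dataclass
class RetailCustomer:
    # What "customer" came to mean to a storefront team.
    loyalty_number: str
    email: str
    default_shipping_address: str
    marketing_opt_in: bool = False


@dataclass
class FreightCustomer:
    # What "customer" came to mean to a logistics team.
    billing_account_id: str
    credit_limit_cents: int
    hazmat_certified: bool
    dock_delivery_windows: list = field(default_factory=list)
```

No dictionary gets you from one of these to the other. Only the conversations each team actually had, with its own stranger in its own room, could.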

Agile had the concept of bringing the end-user (or Product Owner) in with the team to describe things as needed, as close to when the work happens as possible. It was meant to remove waste and rework, but in reality it was the best solution to a problem that was not fully understood: meanings are subjective and depend on the language games involved in creating them. We get the guy in the room with us because the team needs to play language games right up until the last possible moment. Just like with the carpenter, you play the game until it’s good enough. “Good enough” is vastly different for every team, every Product Owner, and every problem.

Wittgenstein’s lesson: communities of humans play language games all the time; it’s a state of nature. Language is inherently flexible, vague, and slippery. It gains meaning only by being “good enough” inside a particular community, and only for the particular uses unique to that community. Nothing means anything until we play language games to make it mean something. We can’t reason about universal forms. Instead, we have to deal with each item separately. Meaning is relative, highly dependent on a person’s experiences, and created by social interaction.

Looks like the sophists weren’t so stupid after all. Aristotle would not be happy. There was no way in hell that Mr. Collier was going to like this.

Aristotle said that there was a universal form for everything, and that by having exact definitions and using formal systems of logic on it we could deduce things about the universal forms that would then apply to everything in the real world. Peirce said that formal abstract systems are by nature speculative, and that speculative systems are nonsense. Unless it can change things in the real world, it is impossible to reason whether things in these systems are correct or incorrect. Computers can change the world using a formal system of logic, so technically, they might be the first devices ever able to translate abstract concepts into real-world effects. Wittgenstein said that didn’t matter: that natural human languages will never, ever match up with the universal forms that all systems of reasoning are built on anyway, so although the computer can work with universal abstract ideas, you’d never get the actual things people spoke about translated directly into the computer.

Science works because at heart it is based on probability, not because it is based on reason and logic.

Language games are terribly non-intuitive for folks brought up to believe that to find the meaning of a word you simply go to the dictionary, or for folks brought up in the scientific-method school of thought that says language can rigorously describe something so that any listener receives the same meaning. Heck, it would even drive most programmers crazy, with their concept of language being so closely associated with formal systems of logic.

Believing that language can describe reality exactly has sent millions of projects off the rails, and thousands of philosophers to the old folks’ home early. (Wittgenstein grew to loathe philosophy, declaring that it was much more useful as a form of therapy than as a quest for truth. We should use philosophy the way we would use a conversation at a bar with a particularly smart person, a conversation with a therapist, or a counseling session with a priest: a useful system of beliefs to help us move through life with more understanding and less pain.)

Instead, your imaginary spreadsheet project would proceed like this: you would form a group with people who each had diverse initial internal definitions for the thousand or so words surrounding “customer information”. You would play various games, just like the carpenter and helper, until the group came to a common consensus as to what all the words mean and how they relate to one another. This wouldn’t be a formal process — language games seldom are — but it would be a process, a social process, and it would take time.

What you couldn’t do, because it’s impossible, is capture all the terms, definitions, idioms and such required for the project and convey it to another person’s brain. At some far degree of descriptive hell you might get close, maybe close enough, but you’d never achieve the same results as if you just sat down and had everybody naturally create the language on their way to the solution. And even if you managed to somehow describe enough for some initial, tightly-circumscribed work, you’d never cover changes, modifications, re-planning — all the parts of “living the problem” that occur as part of natural social interaction using language games.

It would be like trying to prepare somebody for a trip to an alien planet and alien civilization using only a book written in English. Could you cover the language, the culture, how to properly communicate during all of the things that might happen? It’s impossible. The best you could hope for would be to get the person into a state where they could begin their own language games and become one with the culture once they got there. Everybody knows this, yet when we talk about specifications and backlogs for technology development, we forget it, and act as if we can join up the abstract and concrete using more and more words as glue. Maybe special pictures. If only we had the right system, we think. If only we included something about fish migration in Tibet. We are all philosophers at heart.

This was why Jones found that throwing away his backlog and re-doing it — restating the backlog — had value. This was why teams that worked problems involving the entire backlog understood and executed on the domain better than teams who were given a few bits at a time. This was why standing teams in an organization had such an advantage. They were playing language games that removed ambiguity and increased performance over time.

You don’t evolve the backlog to be able to work with any team; you evolve a particular team to be able to work with a particular backlog. It’s not the backlog that matures, though lots of things might be added to it. It’s the team that matures through language games.

Solving technology problems always involves creating a new language among the people creating the solution.

Words don’t mean anything, Aristotle would have been a lousy programmer, philosophers were a pain in the ass, Collier was going to kill you, and you were late for your date.

May 14, 2014

Dear Agile Friends: Please Stop It With The Pointless Bickering


Every couple of weeks, it’s more bickering.

Should teams co-locate? I don’t know, I don’t think there’s a universal answer for all teams, and I want to work from home but I can’t do that and do my job effectively. Does that stop us from arguing? Heck no! One bunch will line up saying co-location is the only thing that works, another bunch will line up and say co-location is 19th-century thinking that should be abolished.

And away we go.

Should TDD be used? Once again, no universal answer, I have my own view, and the way I’d like things to be and the way they are in my current work are different. So let’s all line up and start bickering over whether TDD is dead or not.

How about some of these new program-level Agile constructs, like SAFe? Same answers. Program-level Agility is just now getting some real traction and good anecdotal feedback from the field. Much too early to generalize, and who knows how much generalization would be useful anyway? But, we can go around the mulberry bush a few times on that.

What is it with the bickering? There’s a moderator on a popular LinkedIn Agile forum who decided that anybody who posted a blog link would have the post characterized as “promotions” and sent off to nobody-reads-it-land. I wasn’t crazy about that decision, made my case, then encouraged him to do what he thought was best.

That was over a month ago. He’s still on the same thread arguing about why his policy is the only sane one, and how we should all agree. Good grief!

Seems like a lot of us Agile guys are really good at arguing. Makes you wonder how much fun it would be working alongside them in a team.

Of all the Agile material I’ve consumed over the years, I like the Manifesto the most. The reason I like it is that there was a room full of guys who were all making money with various recipe books for making good software happen — and they managed to agree on what values should be important no matter what processes you were using. This was a moment of sanity.

Then many of them went out and created new branded recipe books and went back to bickering with each other. (I exaggerate, but only by a little. It’s more accurate to say their adherents did this. Many of the original gang have settled down. Not all.)

I know this drives people crazy, but after watching hundreds of teams doing every kind of futzed up process you can imagine, I only care about one thing: how well is the team evolving using the values as a guide?

I don’t care if they use tools, if they all have funny haircuts, if they wear uniforms and salute each other. Are they using a good system of values and changing up how they do things over time? If so, then they’ll be fine. If not, and this is important, even if they were doing the “right” things, I’d have little faith they knew what the hell they were doing.

A bunch of us yahoos coming in and bickering over whether story points should be normalized or not is not helpful — if we do it the wrong way. If we’re having some sort of “future of Agile” discussion, where the end times are upon us and we must turn back now and go back to the old ways? Probably not so useful. If, however, we’re sharing experiences and talking about why some things might work better than others, while acknowledging that many people do things many different ways? Probably much more useful.

We — and I mean me as much as anybody else — tend to make the internet some kind of drama-of-the-week contest. Everything is a disaster. We are all emoting. Well, I’ve got news for you: it’s not. Encourage using the values as a way to establish trust. Insist on teams continuing to try new things and learn. Have some humility about what we actually know as an industry and what we don’t. The rest will work itself out. I promise. :)

May 7, 2014

Dear Agile, It’s Time to Grow Up


Dear Agile. I’ve been with you a long time. Heck, I was there back in the 80s and 90s, even before you were born. Remember Fast Cycle Time? RAD? Or any of the other systems that emphasized rapid development in close association with the users? Remember all the fun we had before you had a name, when we were doing weekly code drops of tested code? Customers loved us. Developers were happy. It was a good thing. We were all happy then.

And it was even good after you were born. We had a whole bunch of things we could drop into the Agile bucket. There was eXtreme Programming. There was Agile Data modeling. There was even an Agile Unified Process. Seemed like the more we thought about it, the more other things besides development also needed to be Agile. Marketing, Sales, Startups, Program Management. The Agile manifesto was such a good set of values that wherever we applied them, we found improvement.

And that’s the problem.

Agile, it’s time to grow up. When you were born, it was easy to picture developers sitting in some small room, customer by their side, banging out great solutions. But that was then, that was your childhood. Now that you’ve gotten older, you have to be aware that most of the time these developers exist as part of some bigger ecosystem: a cross-functional team to develop a new product line, a team of vendors bidding on a government project, or a lean startup team practicing customer development.

It’s not just software any more, if it ever was. Now we’re expecting you to play in the bigger world with a broader view of how developers actually are employed. You have to grow up.

I know what you’re thinking: why can’t everybody just be like Spotify? One big company with a bunch of kick-ass Agile teams in it? Why can’t we just get paid, without having to think about where the money comes from? If the company/customer/client wants us to work in a large group of developers, why can’t we just tell them no? Why should we do estimates? Why do we have to add in all this other stuff anyway? Why can’t it just be the way it used to be, with you and your pair-programming buddy writing killer software for one guy you had to make happy?

I don’t blame you. Growing up is tough. Nobody wants to do it. It’s tough to look out on the world and realize that there are so many other important things besides just the things you’ve been used to. It’s tough realizing so many other people want part of your time, and you have duties and responsibilities in life that might not be so much fun.

I understand this.

But the world needs you. You see, the values you emphasized and the techniques that resulted from them are applicable across a lot more than just software. The world is becoming digital, and the digital world needs Agile to help guide it.

Sure, the world outside the team room has a lot of dysfunctional behavior. There are management principles that are outdated, and there are ideas and models that are harmful to the people who hold them. Many companies are being run in a way that actively destroys morale. Aside from product and intellectual property development, which was our old homestead, there’s also manufacturing and service work. Those activities play by their own rules, and although our value preferences stay the same, the practices and techniques change. That can be very confusing to you, I know. I’ve seen you struggle with trying to apply product development ideas to services and manufacturing. Sometimes it works. Sometimes it hurts.

But I want you to know, I believe in you. I’ve seen you overcome big odds, and the people using Agile principles today are some of the smartest people in the world.

You’re going to do fine. But enough with the whining and insisting on living in a tiny piece of the world. It’s time to grow up.

April 21, 2014

Agile’s Business Problem

Agile has a business problem.

I was watching a video of Uncle Bob Martin a while back, and he said something that struck a nerve.

[Paraphrased] “When we sat down to do the Agile Manifesto, several of us wanted everybody to be sure that the purpose here was re-establishing trust with the business.”

At the time, he was making a case for TDD: if we keep writing code in Sprint 1 that breaks in Sprint 3, or delivering buggy software, or not being reliable, we lose the trust of the business. But I think his observation says something about the technology world in general.

Given a choice, we don’t want to be part of the business. Many technology developers work at an arm’s length or more from the actual business. After all, one of the big “benefits” of Scrum is that the Product Owner brings in a list of things for the team to do. The team isn’t expected to know business things, only deliver on a to-do list from the PO.

Most small company development teams have sales, marketing, and management out in the field bringing in the money. They just deliver stuff. Most BigCorp teams are so far away from value creation that I doubt many on the team could describe the relative importance of what they’re delivering or how it fits into the corporate strategy. They just deliver stuff. Non-profit development teams deal with inter-office politics: their goal is to produce something the non-profit likes. They just deliver stuff. Everybody — except for startups — just delivers stuff.

And why not? Let’s face it: business is messy. The amount of work you put into a product is not directly related to the value it might create. There are many factors outside your control. There is no single person to make happy. And other businesses are trying to do the same thing as you are — many of them with more experience and capability than you have.

Wouldn’t you rather just work from a to-do list too?

We tell ourselves what we want to hear, what’s popular. So we focus on our backlog, how well we create things using it, and how to improve our little world. We don’t worry about the larger one.

The problem is: once you draw the line at the Product Owner, once you make the trade-off that, for purposes of “all that business stuff”, the Product Owner is the magic guy, then it becomes a lot easier to start making other arbitrary compromises. Wherever there’s a business consideration, why not just change the rules to eliminate it?

Is the Product Owner asking you for estimates on when you’ll be done? Stop estimating! Do several recent corporate acquisitions mean that the new product is going to have 8 teams all working together? Insist on firing everybody until you get one team. After all, the organization should adapt to Agile, not the other way around. Rituals like demos and sprint planning bugging you? Hell, just go to kanban/flow-based work and who cares about cadence, calendar, or commitments?

Sometimes these conversations make me want to laugh. Sometimes I wonder: is there something there? Am I missing something that’s going to provide value for folks down the road? I intuitively like a lot of what I hear: things like flow-based work systems. But I think that’s the problem: we’re confusing the theory in our heads and what we’d like to hear or not with what actually works.

We’re resistant to philosophical change. I know there’s a lot of people in the Agile community who already have everything figured out. As a coach and technologist with 30 years of experience and eyes-on hundreds of teams, back when I started blogging and writing, I thought others would be happy to have me share with them.

I was mistaken. Instead, amazingly, most coaches and practitioners — even those with a year or two of experience and a 2-day class under their belt — already know everything. The old cranky farts are worse, because, well, they’ve been around the block a time or two. Selection bias is a powerful thing, and if you decided you figured something out back in 1995, you’ve had decades of time to convince yourself there isn’t much more to learn. We get the general idea of something in our head, then we insist on the rest of the world conforming to that general idea.

The Agile community is an echo chamber. Instead of viewing this entire enterprise as an evolutionary effort spanning many decades, we view it as a brand-based deal where we’re all competing for market share. People don’t worry which parts of Scrum to adapt here or there, they worry about whether Scrum is still “pure” or not. They worry about whether we have “true” Agile or it has gotten corrupted.

These ideas, instead of being just ideas that we’re supposed to take and learn from, have become some weird kind of Platonic form. There is a “true” Scrum, and if we could only stay in the light of true Scrum, everything will be fine.

We’re not using ideas as stepping stones; we’re coming up with things we like hearing and then branding them. Then we have arguments over branding. This is dumb.

The Agile community continues to have a crisis of conscience: are we part of a system that is trying to learn better ways of providing real value to real people? Or are we trying to create a system of value for the way we work that we will then expect the world to conform to? It’s an important question, and it’s important to determine which side of the fence you come down on.

To me, Agile is a series of ideas about things that mostly work in incremental and iterative development. None of it is perfect, none of it is a magic bullet, and everything I’ve seen so far has places where it works better and places where it doesn’t. I don’t say that to dismiss ideas like Scrum, XP, and so forth. I say it to put them in context. But it’s nothing if it doesn’t do something real for somebody, if it doesn’t deliver.

I expect this body of work to grow, and hopefully I can come up with a thought or two that might last a while. But if it’s going to grow, it’s going to need to keep becoming more and more a part of business and less of its own separate universe.

Note: I didn’t say the entire Agile community. I still have faith that this is the best place to be for people who care about happy and productive teams. My point is that the future is in startups and in adapting what we do in order to create real value (instead of just cranking out technical solutions), not in taking the tiny bit we’ve figured out so far, over-generalizing, and then trying to make the world conform to it.

March 30, 2014

Why I Don’t Care About Agile, Lean, Kanban, Scrum, and XP

Lately there has been quite a donnybrook in the Agile community. Is it time to leave Agile behind?

As it turns out, the suits have taken over, taking all that great Agile goodness and turning it into just more command and control, top-down, recipe-book dogmatic nonsense.

So some folks say we should just “return to REAL Agile”. Some folks say the brand Agile is dead: it has been killed by entities intent on adopt, extend, extinguish. Some folks say that Agile was never any good to start with — either it was too wishy-washy and meaningless, unlike Scrum, or that it was too emotional and not logical enough, unlike DAD (or many others).

As for me? I just like collecting shiny objects.

Agile is a brand. It’s a brand without an owner. That means that each of us has to come up with some kind of working definition for what the brand stands for. For me, that brand represents something like “best practices in iterative and incremental development”. If you’ve got one of those, you can publish it under Agile. Works for me.

Of course, people go off the deep end just on that formulation. What do we mean by “best practices”? Isn’t that prescriptive? Are we saying that we know the perfect answer to how teams should act?

sigh.

I’ve been around this shrubbery many times, and I think it comes down to what you want to do in life.

I’m in this to help developers. I really don’t care what you call it, or how it makes me feel. I don’t even care if it all fits together in some master plan, or whether it has a class or not. If it helps developers, I want to do it.

Some folks are in this to change the world. To them, Agile is a movement. It exists to show what is wrong and what needs to be fixed.

Some folks are in this for a theory of life, the universe, and everything. They want a philosophical grounding that they can use for their development, their team — even the organization.

I believe these last two groups are expecting a bit much. Sadly, though, it’s these powerful emotional constructs that drive adoption of things. So they’re always going to be with us.

I also believe that Nietzsche was right: any sort of structured system of belief is going to be self-contradictory. In other words, no matter what you do, once you make a system out of it, it’s broken. And we technologists love to make systems out of things. Even things that are supposedly non-prescriptive. After watching this industry for decades, I believe this is unavoidable.

All of this has led me to believe that fighting over the term “Agile” is a fool’s game. If you only care about what works, it doesn’t matter what folks call it, and no matter what comes along next, the community is just going to do to it what it did to Agile. If you care so much about terms, then get your head out of your ass and start focusing on what’s important. We have work to do. Give up on allying yourself to a certain brand or term — or proclaiming that you’re opposed to a certain brand or term — and just collect things that make developers’ lives better. Then we can share.

March 11, 2014
