Failure Is Not An Option (In F#)

Why all the hate on Option types? You said you wouldn’t pass them from a function. Why not? What’s wrong with options?[1]

Programming should be fun. All you need is a good set of values, some skill, and the right attitude. F# is the most fun I’ve had in a programming language in years. This essay series is about that: having fun. Most books and essays are about writing awesome code. There’s not a lot of material about writing code awesomely. The code you’ll find here has bugs! Just like your code. Can you find them? If you’re looking for the final version of the code, you can find it in the project’s GitHub page. You can find the story of how all of this started on the series index page.

There’s nothing wrong with options. I love options. I use options all of the time. Options are my friend.

I just use them in the right place. Understanding where the right place is? That’s the purpose of this essay.

But first we gotta talk about the Unix Philosophy and Total Programming. Or at least talk enough about them that the option stuff makes sense. Warning: I am going to vastly over-simplify a bunch of stuff so that this essay isn’t 40,000 words long.

When I started writing true microservices, I learned a lot of things I wouldn’t have learned otherwise. Microservices need to run at a certain time. They need to work with one another. They need to use the same data types and storage/transfer mechanism. (Some folks use a database for this but there are all kinds of problems with that. I will not go into them here.) Heck, they need to stop running — no matter what happens.

Most of the things I learned rested on a weak version of the Unix Philosophy and Total Programming. Instead of my trying to convince you that the things I learned were good, I’ll just point you to those.

The Unix Philosophy. The usual summary of the Unix Philosophy goes like this:

  • Write programs that do one thing and do it well.
  • Write programs to work together.
  • Write programs to handle text streams, because that is a universal interface.

There’s a lot more in the linked article. I have a much simpler working definition: Make it work like the unix command “ls”. That is, when I type “ls” into a linux prompt? It just works. I can’t think of any time it’s failed. It always does something. And I can join ls together with a bunch of other linux commands to do useful things. It’s just one program, but when combined with other programs it becomes tremendously more valuable than it is by itself. It does one thing, does it well, never fails, and infinitely connects to other programs to make more useful stuff than I could have imagined while building it.

That’s what I want in my code. Tiny pieces of rock-solid stuff that I can assemble later into various things I might need without having to write (much) code.

Total Programming. This is another rabbit hole you can dive down if you have a lot of time on your hands. The dumbed-down version goes like this: your program has to provably stop running at some point. When you start it up, you have to know — without a doubt — that it is going to stop. Once again, it doesn’t have to do anything. It just can’t hang.

This spins off into all kinds of type and category theory stuff. I am not a Computer Science person. I’m just some old fart that likes to code, so the dumb version is enough for me. It looks like there’s some cool stuff there, though, if a person wanted to study up on it. Where you end up is mathematically creating a type system that is deterministic. Put differently, you stick all the rules, flows, validation, and the rest of it into the type system so that it is impossible for the program not to complete in some way. That’s where we’re headed. (But that’s not where we’re starting.)

We’re not laser-focused on the type system. Instead, like we said in the last essay, we’re always focusing on output. Behavior. Flow. What are you doing for me? Not structure. Structure is always derivative — but that’s material from the Info-Ops book and too much to go into here. For now, if we can remember never to focus on structure, to instead do things that “force” ourselves to create structure based on other constraints? We’ll go far.

But wait! What the heck are you going on about, Daniel?!? Console apps? Text streams? I don’t want to write no stinking console apps! Linux commands? What the heck? I’m doing modern web development. It’s not the dark ages anymore. This command-line crap doesn’t look like it has anything at all to do with my day-to-day work. What’s next, building our own C compiler out of coconuts?

It’s a fair point, but misguided. The purpose is to structure your code as if it were a command-line app. Not that it’s deployed that way. I would even make it work as a command-line app for testing purposes, no matter where it went. This philosophy, along with the onion, will tell you where everything goes, no matter what kind of architecture you’re using. Like we said in the last essay: if you’re doing it right, the architecture doesn’t matter. Carpenters don’t spend all day staring at their hammer.

Remember the onion?

This is not about command-line apps. It’s about how good programs are constructed. How the pieces go to live in various places is not relevant here. In fact, if it’s relevant, you’re probably focusing on the wrong thing. The command-line just provides the simplest and easiest-to-use first platform for the code to live on. I can take an appropriately-structured program from the command-line and run it anywhere, on dozens of platforms. All I need to do is add some shell code, the grunt-work stuff in layer 1. (Which ends up being reusable. Yay!) And if I can’t? Then I haven’t structured the program correctly.

This is the entire point of programming, right? Write stuff once, then use it in a bunch of places. I’m lazy. I want to get the maximum value for the minimum amount of work. Don’t you?

Let’s make up some dummy example to walk through this. Let’s say I have a text file that’s supposed to have lines where there’s a value which equals something, like this:

Our data
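The exact contents don’t matter much; the shape does. An invented stand-in with the same name-equals-number shape, junk lines included, might look like:

```text
A=1
A=4
B=2
this line is garbage
B=
C=7
```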

The job is simple. Group together things by letter and total up the numbers. Then display the results to the console. The first thing I do is create a program that takes one command-line parameter for an input file. I’ll use that boilerplate stuff I wrote years ago.

open Utils
/// Command-line parameters for this particular (OptionExample) program
type OptionExampleProgramConfig =
    {
        // record fields reconstructed from how the config is used below
        configBase:ConfigBase
        inputFile:ConfigEntry<string * System.IO.FileInfo option>
    }
    member this.printThis() =
        printfn "OptionExample Parameters Provided"
let programHelp = [|"This is an example program for talking about option types."|]
let defaultBaseOptions = createNewBaseOptions "optionExample" "Does some thing with some stuff" programHelp defaultVerbosity
let defaultInputFile = 
    createNewConfigEntry "I" "Input File (Optional)" 
        [|"/I:<filename> -> full name of the file to use for input."|]
        ("OptionEssayExampleFile.txt", Option<System.IO.FileInfo>.None)
let loadConfigFromCommandLine (args:string []):OptionExampleProgramConfig =
    if args.Length>0 && (args.[0]="?"||args.[0]="/?"||args.[0]="-?"||args.[0]="--?"||args.[0]="help"||args.[0]="/help"||args.[0]="-help"||args.[0]="--help") then raise (UserNeedsHelp args.[0]) else
    let newVerbosity = ConfigEntry<_>.populateValueFromCommandLine(defaultVerbosity, args)
    let newConfigBase = {defaultBaseOptions with verbose=newVerbosity}
    let newInputFile = ConfigEntry<_>.populateValueFromCommandLine(defaultInputFile, args)
    {configBase = newConfigBase; inputFile=newInputFile}
let doStuff (opts:OptionExampleProgramConfig) =
    ()
[<EntryPoint>]
let main argv = 
    try
        let opts = loadConfigFromCommandLine argv                
        commandLinePrintWhileEnter opts.configBase (opts.printThis)
        doStuff opts
        0 // remember to return an integer exit code
    with
        | :? UserNeedsHelp as hex ->
            printfn "%s: %s" defaultBaseOptions.programName hex.Data0
            printfn "========================"
            printfn "Command Line Options:"
            // Manually list program config entries here
            0
        | :? System.Exception as ex ->
            System.Console.WriteLine ("Program terminated abnormally " + ex.Message)
            System.Console.WriteLine (ex.StackTrace)
            if not (isNull ex.InnerException) then
                System.Console.WriteLine("---   Inner Exception   ---")
                System.Console.WriteLine (ex.InnerException.Message)
                System.Console.WriteLine (ex.InnerException.StackTrace)
            1

This took me an hour! Why? Is my toolkit overblown? Do I not know anything at all about coding? Am I a moron? (Please don’t answer that last question). No. I had indented the first line one way and the rest of the code another. It took me 10-20 minutes to put the code in. The next 40 minutes I spent trying to figure out why my code wasn’t working.

My code was working. The spacing was off. Sometimes I say very unkind things about F#.

Looks like it’s working

And look! We’re not even a third of the way into the boilerplate code and there’s an Option type. It’s this line:

("OptionEssayExampleFile.txt", Option<System.IO.FileInfo>.None)

Why do I need an option when all I’m doing is getting the input file? Because the input file might not exist. Why not just fail? Because that’s not the job of this code. This code just gets command-line parameters for whatever programs might need them. The programs themselves may fail — or not. That’s a program decision, not a decision for this library.

I’m working the outside of the onion, the part where my application touches the rest of the world. The rest of the world has unknowns and empty values! So the option type accurately reflects what might happen when I interact.

What’s next? Well, I have my parameters loading up, and I know I’m working from the outside of the onion inwards. What if the input file doesn’t exist?

That’s a decision that’s not part of the outer layer. It’s part of layer 2. At this point I consume the option and decide what I want to do. Maybe I provide default data. Maybe I just go away. It varies — so it’s not an outside layer question.

    let inputFileDoesntExist = 
        (snd opts.inputFile.parameterValue).IsNone
        || (System.IO.File.Exists(fst opts.inputFile.parameterValue) = false)
    if inputFileDoesntExist then () else (doStuff opts)

Now I’m transitioning from the nature of the outside world to how I want my program to run. And I want my program to run like a clock. No muss, no fuss. When I type in “ls” on the linux command line, it works. That’s what I want.

I’ve found that allowing option types past level 2 is basically a way of deferring important decisions until I’m in the middle of doing something else. This complicates things and is always a bad idea. I’m working on this other thing. Why the heck should I be concerned right now about whether the file is there or not? You get four or five option types floating around? They could take a simple five-line method and turn it into a 40-line logic monstrosity. That’s no fun. And it leads to crappy, muddled code with mixed responsibilities.

What’s the next step? Well I’m not going to do anything if there’s no file. What if there’s a file with spaces? Or bad lines? Or lines without the name-equals-number format?

Now I’m fully in level 2. I have successfully interacted with the outside world. I have some hunk of stuff in my hand that I have to do something with. Now I need to decide on how to clean, filter, sort, or replace data I don’t like. I’m transforming the outside world data into my application data.

There’s no right or wrong answer here. It’s up to you and your app. But you have to decide. Once we leave level 2, failure is not an option. That is, you only have stuff that you know you can process. So let’s add a little more code around our “doStuff” function. (It’s very important to use names that describe things. Ha!)

type OptionExampleFileLines = string[]
let makeStringListToProcess fileName :OptionExampleFileLines =
    try
        let textLines = System.IO.File.ReadAllLines fileName
        // throw away any lines that don't have an equals sign in them
        let textLinesWithAnEquals = 
            textLines |> Array.filter(fun x->x.Contains("="))
        // throw away any lines that have an equals sign but no number on the end
        let textLinesWithAnEqualsAndAValueOnTheEnd = 
            textLinesWithAnEquals |> Array.filter(fun x->
                let splitText = x.Split([|'='|])
                splitText.Length = 2
                && fst (System.Int64.TryParse (splitText.[1])))
        textLinesWithAnEqualsAndAValueOnTheEnd
    with
        | :? System.Exception ->
            printfn "I am loading the file to process. I should never fail here, just return an empty array"
            Array.empty
let doStuff (opts:OptionExampleProgramConfig) =
    let linesToProcess = makeStringListToProcess (fst opts.inputFile.parameterValue)
    ()

A few things to notice. First, I’ve added a type, OptionExampleFileLines. It doesn’t have a lot around it, but the day is still young. We’re just getting started. At level two we’re translating into our application types — so we need application types.
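Once the cleaning is done, the actual job (group by letter, total the numbers) is almost embarrassingly small. Here’s a rough sketch of what that inner piece might look like. The function name is mine, not something from the repo:

```fsharp
// A sketch of the inner layer: by the time we get here, every line is
// guaranteed to be "name=number", so there are no options left to check.
let processCleanLines (lines:string[]) =
    lines
    |> Array.map (fun line ->
        let parts = line.Split([|'='|])
        (parts.[0].Trim(), System.Int64.Parse (parts.[1].Trim())))
    |> Array.groupBy fst
    |> Array.map (fun (name, entries) -> (name, entries |> Array.sumBy snd))

// processCleanLines [|"a=1"; "b=2"; "a=3"|] gives [|("a", 4L); ("b", 2L)|]
```

No null checks, no options, no try/with. That’s the payoff for doing all the cleaning at level two.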

When I mentioned I was going to write about option types today, a friend said, “I use them in smart constructors.”

Smart constructors are a way to control how types are created so that you have more control over being sure the type isn’t going to blow up later on. (Apologies if I missed some details here.)
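In case you haven’t run into them, here’s a minimal sketch of the idea (my illustration, not my friend’s actual code): the union case is private, so the only door into the type is a function that hands back an option.

```fsharp
// A value that is guaranteed non-empty once it exists. The private
// constructor means callers can't build an invalid one directly.
type NonEmptyName = private NonEmptyName of string

module NonEmptyName =
    // The smart constructor: the option lives here, at the boundary
    let create (s:string) : NonEmptyName option =
        if System.String.IsNullOrWhiteSpace s then None
        else Some (NonEmptyName (s.Trim()))
    // Unwrap for display or serialization
    let value (NonEmptyName s) = s
```

Once a NonEmptyName exists, nothing downstream ever has to re-check it.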

The crazy thing is, we’re saying the same thing. My friend is saying, “Look! You can make constructors such that you always know your types will run on your application. You have tight control, and by adding it to the type system you’re creating a program that cannot fail.” Whereas I am saying, “Look! Once you begin interacting with the outside world, you’ll get messy things like null values and bad data. The first thing you have to do is add code to make sure your program cannot blow up.”

This is one of these things where you could end up violently agreeing. What you have to know is 1) It’s the same goal, and 2) You don’t have to choose one or the other. In fact, use both! There’s cleaning data to protect the program from the outside world, and then there’s cleaning data to protect the type system from bad data. Write some code to clean the data in general, then figure out where it should go.

After all, once I leave level 2, I want strongly typed data that I know won’t blow up. We’re headed the same way, the only difference is that my friend is looking at it from a type perspective and I’m looking at it from a data flow perspective. Remember! I’m always focusing on output. What are you doing for me? That drives structure, not the other way around.

The other thing to notice is my huge names and wordy code. Couldn’t I collapse that? Isn’t functional programming always supposed to look like “foo |> bar |> foobar”?

Yes and no. Functional programming can look dang near like anything you want it to. The computer doesn’t care. The important thing is whether or not you can look at a piece of code and immediately understand what it’s doing. When I visited my TDD guru friends Bob and James, one of the problems I noticed was that as good programmers, they’d almost immediately start refactoring, collapsing stuff, making the code cleaner.

That was a bad idea, because with FP, it all kinda collapses into nothingness. I end up losing track of what I’m doing. What’s the code supposed to do for people? Instead I’m taking some function and making it disappear. (It’s a nice trick. It just doesn’t help me reason about usefulness or not. Instead I’m wallowing around in how cool FP is.) Could I take that “makeStringListToProcess” code and make it smaller? How about adding the program type right there, having it return a name/value collection? Move it off to a generic function that takes any file and only returns the name/value parts of it?
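For instance, one possible collapse of that generic-function idea might look like this. This is a sketch under my own names, not anything from the repo:

```fsharp
// A generic reader: take any file, return only the name/value pairs in it.
// Everything else -- missing file, junk lines, bad numbers -- just disappears.
let readNameValuePairs (fileName:string) : (string * int64)[] =
    if System.IO.File.Exists fileName then
        System.IO.File.ReadAllLines fileName
        |> Array.choose (fun line ->
            match line.Split([|'='|]) with
            | [|name; value|] ->
                match System.Int64.TryParse (value.Trim()) with
                | true, number -> Some (name.Trim(), number)
                | _ -> None
            | _ -> None)
    else Array.empty
```

Terse, generic, reusable... and completely silent about the half-dozen decisions it just made for you. That’s the trade.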

The collapsing/refactoring game can go on almost forever, pushing outward towards generic IO functions, inwards towards new language additions, and up the type chain to a more structured program type system.

These are all wonderful and great things, and I’ll be pushing the hell out of this code — once it starts doing something useful. Then I’ll use what it has to do (that’s useful) as a guideline for what to clean up first and how to clean it up. (Pushing as much as possible into the type system before you start is how you get Domain-Driven coding). Until then, however, I want to read what I’m doing in nice, human language. I especially want to think through all the outer onion issues around process data. I am a distracted, busy, forgetful, lazy programmer. A month from now I don’t want to load up the IDE and see something that looks like Klingon. Instead, I’ll refactor as I go and over time it all works out the same.

Finally, why not carry options into layer 3? What if I wanted to take the lines that had non-numeric values on them and output them to another file? What if I wanted to write a report for numeric entries and another for alpha entries?

This is where I got hung up a lot as an OO guy. What I was doing was focusing on the structure instead of the behavior. “I have this structure to read these files I want to do four or five things with. So I’ll keep the structure and just add in branches to do the other stuff.”

Nope nope nope nope nope. Remember the Unix Philosophy. “Write programs that do one thing and do it well.” What I was doing was trying to be lazy and force re-use by taking the same structure and making it do multiple things. A divided house cannot stand, and it’s enough to do one thing and do it well. Then do the next thing.

And surprise! We get re-use, just like we wanted! We just get it by using shared libraries that we develop over time. In fact, I have never seen code reusability work so well as I have using F#, and each time I re-use it, it keeps getting better, because I’m developing and factoring various functions based on several real-world behaviors they have to support.

That’s extremely cool.

Option types are great, and there are multiple ways of looking at programming that are all valid: type-driven, flow, constraint-driven, test-driven, and so on. At the end of the day, however, everybody’s trying to do the same thing. So when you see somebody who does things one way talk about something, you can bet the same thing happens when you’re doing it another way. Coding is coding. The important thing is not to forget any of this important stuff simply because you’ve decided to use one method of coding over another.

Most of all, have fun! And make stuff people want.

I hate to do a crummy commercial, but this essay is already huge and I really haven’t explained some critical things. If you’re interested in why I choose the approach of looking at F# coding in terms of behavior, value, and flow instead of types, you should read my Info-Ops book. Programming is one of many forms of project information. Once you learn how to organize all of your project information, not only will you be a better programmer, your paperwork and BS reports and meetings will decrease. Plus you’ll have more fun and make better stuff.

Daniel Markham sucks at programming, but he still loves it. What he’s good at is helping groups of technical people make stuff people want. He teaches them to do that using an understanding of value creation, maximizing time with users, minimal tool and paperwork overhead, and good technical practices, including things like ATDD and TDD. He’s been a fan of F# since it first came out.

1: This is paraphrased from a comment on a previous essay.

July 12, 2018

Project Management Charts I Have Known

This is an elephant

I love charts and graphs. I remember one of the coolest programs I had for DOS was “Harvard Graphics”, which just created graphs. Fun times.

But then I became an independent business owner writing software for people. And then a contractor, tech lead, architect, and architect lead. Finally, I was doing some technical PM work! Yay! More graphs! Back then, you weren’t nothin’ if you didn’t have a GANTT chart to show somebody.

Anybody remember what GANTT stands for? (Nothing. It’s named after Henry L. Gantt.)

GANTT charts were really all I needed until people started wanting to do “formal” Agile — that’s Agile with all the names, roles, rituals, and such. (Back before agile was Agile, everybody worked, everybody helped one another, and we had stuff we promised the client we would do every week. Obviously this was too simple?)

Once Agile came along, everybody was talking about Scrum, Sprints, and Burn-down charts. Who can forget this guy?

An old friend, the sprint burndown

The burndown worked on a simple yet brilliant principle: if every day everybody guesses how much they have left, you can spot problems in complex work way ahead of the time you would if you interviewed each person separately to try to figure out what was going on. And when the guesses reach zero? There’s no more work to be had. Individually the guesses may be crap, but over many people, the overall numbers tend to be useful — even though they may not mean anything! The change is what you’re looking for, along with the linear regression that shows where “done” is.
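That linear regression isn’t anything fancy, by the way. It’s a least-squares line through the daily guesses, extended out to where it crosses zero. A quick sketch; the function name and numbers are mine:

```fsharp
// Fit the daily "hours left" guesses to a straight line, then project
// where that line hits zero -- the predicted "done" day.
let projectDoneDay (guesses:(float * float)[]) =
    // guesses are (dayNumber, hoursLeft) pairs
    let n = float guesses.Length
    let sumX = guesses |> Array.sumBy fst
    let sumY = guesses |> Array.sumBy snd
    let sumXX = guesses |> Array.sumBy (fun (x,_) -> x * x)
    let sumXY = guesses |> Array.sumBy (fun (x,y) -> x * y)
    let slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX)
    let intercept = (sumY - slope * sumX) / n
    // remaining work is zero where slope * day + intercept = 0
    -intercept / slope

// A team burning down 10 hours a day from 100, starting on day 1:
// projectDoneDay [|(1.0, 100.0); (2.0, 90.0); (3.0, 80.0)|] gives day 11.0
```

The individual guesses can be junk; the line through them is the useful part.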

It’s not engineering or a science. It’s a hack. But done correctly, it works. Plus it’s a nice little graph.

A personal favorite

Assuming the team and nature of the work remains the same, there are two major things that change in any project: how much stuff you have left to do and how good you are at doing stuff. The burn-up captures both of those. It’s a personal favorite, although I don’t see it in the wild as much as I’d like to. It’s not difficult to do, it involves guesses as we’ve said before, and over time it eventually starts being as dead-accurate as if it were a math problem. (Part of this might be the team psyching itself out that the chart accurately reflects where things are — but that’s a story for another day.)

I was teaching a coaching team many years ago and had a young coach bring me this graph that one of his Scrum Masters had put together.

The Cross-Fire

Once every morning, these guys were guessing how many hours they had left to do; then the SM was assembling it all into a graph of “How much we guess we have left”, “How much we budgeted”, “Actual hours we used”, and “How much budget we have remaining”.

Wow! That’s a bunch of lines! It was a matrixed environment. That means that while there were teams, each person on the team was also on several other teams at the same time. Sounds crazy? It is. But it gives people the impression that there are a bunch of projects all being worked on at once without folks having to have difficult conversations about what’s important and what’s not.

(Each person also had multiple managers they reported to. One for each project, one for their department, and one “people manager”. Did I mention that these people needed to be agile in a big way? It was great. Once we showed up, the first thing they wanted to do was make a complicated set of diagrams and instruction manuals for exactly how to be agile. But I digress.)

I would never want a team to standardize something like this, but I liked what happened here. This team kept having a problem with delivering, even though they had 3-week sprints and could have just changed how much they promised to do and everything would be fine. But somehow they kept disappointing the customer and nobody knew why.

So the SM did this chart, which over several sprints showed that nobody worked much on the project the first week, just a little the second week, and then they tried to cram it all in on the last week. This was probably because everybody was busy on other projects doing the same thing.

They did a chart for a few sprints, figured out what was wrong, then stopped doing the chart.

I’m going to skip a bunch of Statistical Process Control charts — man, do those guys love charts! But I have to show this one.

The great and wonderful CFD

The (somewhat) new guys on the agile block are the Kanban folks. They take out the guessing entirely and just break stuff into really small pieces and track how fast the pieces get done. With large enough numbers, it’s the same difference. The CFD diagram is probably the key diagram to get all of the PM goodness you want without having to ask people to estimate anything.

There are also more program-level charts, showing how dependencies are managed and how complex releases happen. I’m scoping those out too — there are a lot of PMs that do projects. Not as many that do programs. Some of it actually scales out nicely. If you can do a project burn-up, you can do a program burn-up. Same goes for CFDs.

But it’s time we talk about the elephant in the room. Or rather, the elephant in this post.

What is the most important and useful graph on this page?

It’s the elephant.

That’s because none of these graphs are useful at all. They’re just graphs. Pieces of paper with images on them. At least the elephant is honest about it. Do these graphs tell you something? They might — but as a manager, you’re not the person who needs to know stuff. The team is. And handing them pieces of paper — or showing something on a screen — probably won’t get much attention or buy-in.

Management is about people. If you say you’re a manager, I want to see your mouth moving, your ears listening, and your body in close proximity to the people you’re trying to help. After all, the primary goal of management is the elimination of obstacles. (Secondary is the coordination with outside constraints). If you’re not physically participating in important conversations, you’re not managing. You may be dictating. But you’re not managing.

Here’s somebody who’s managing.

Drawing. With a pencil. Oh the pain!

Every day when the team meets, after their five-minute standup, Shaunte draws a couple of graphs; whatever the team is interested in tracking that sprint. They always do burn-down because somebody is always asking “Are you done yet?” and that isn’t important enough to interrupt their work. They add in more as-needed, depending on the problems they’re working.

This is called publish-subscribe. If a bunch of people want something from me, instead of each of them bringing me into a meeting, I publish it in one spot. Then “subscribers” can come and get the information as needed. Saves a ton of time.

The team stands there while the graphs are updated. After all, they are the ones that own the data. The purpose of the graphs is to save time later on. It’s their graph, not hers. Sometimes they rotate around who does the graphs.

“Well!” I hear you say. “Certainly these graphs should be put into a tool! We have all kinds of great tools to handle this kind of grunt work! People can come online and see the graphs from anywhere! And the graphs are much nicer too! None of this crappy schoolhouse craft show.”

But the purpose of PM graphs isn’t to create data to move around, it’s to assist in management — conversations. Project Management is not bits and bytes. It’s people. Yes, you do reports that have these kinds of things in them — but that’s not the work. The graph alone is worthless because you don’t have the context and can’t engage in the conversation. In fact, it’s worse than worthless because it gives you the impression that you know something now that you didn’t before. If you’re four steps removed from the team, that’s fine. You don’t have to know a lot. If you’re only a step or two? It’s not. Do your job.

These graphs are hand-created because the team has to own and talk about them as they evolve. That’s their purpose. You don’t make a graph so that somebody else can read it and swoop into the team to announce all the things they’ve done wrong. You make a graph so that as you update it, the people affected can learn more and have better conversations about what’s happening.

They’re physically put on the wall because the team owns this. It isn’t some report you get when you push a button. It’s the replacement for a bunch of meetings. If you’re interested in how the team is doing? Come by and look at the chart. Got questions? Cool! There’s the team. You now have data plus context plus access to the people you need to do your job. This also saves you time. In fact, the first thing you should do as a program manager or higher is a “Wall walk” every morning. Before folks get there, walk around and let each team tell you how it’s going. Then you can pick and choose where to go listen and help out based on the need they report.

Isn’t that a lot better than a management-level meeting where a bunch of folks look at charts made by some tool that don’t mean anything and are a hassle for everybody to update?

I’ve known a lot of Project Management charts in my time. But I’ve only known four tools that I’ve seen consistently kick ass over and over again: your feet, your eyes, your ears, and good conversations. You use them in that order.

July 11, 2018

Teachers Of Functional Programming: Stop Driving Me Crazy With Math Problems!

“Actually, I’m starting not to like functional programming…if I see one more Fibonacci or factorial coding example in an F# tutorial or textbook, I’m going to stab the next nerd I find”

I feel this pain. When I learned F#, I’d start reading books and the first thing I saw was stuff that looked like this:

let rec fact x =
    if x < 1 then 1
    else x * fact (x - 1)

Hey look! Now I can do factorials!

Who the heck wants to do factorials?

Like most OO programmers, I struggled a lot learning F# and functional programming. I kept at it, slogging through a bunch of books with a lot of math problems disguised as coding. I had been programming for a long time, and I’ve never coded a math problem. People rarely come to business programmers and ask them to do math.

And then I would talk to some super-smart people, like the guys at Google. I’d ask them what their code does, and I’d get the weirdest answer.

Well really it just does some matrix math and assembles the results.

Now this code may be running on 100,000 servers worldwide, it may have fault-protection and do all sorts of wondrous things, but to them it’s all about some kind of math problem.

What was I missing? It didn’t make sense to me. Sure, I got the math coding part. I understood how to do it. But when was I going to start form validation? Business logic coding? Important stuff?

So I jumped into my first project. It was a common, plain windows application. I had written something like this in C, C++, VB, C#, maybe a couple of other languages. Using F#, I coded up some windows.

I decided to add in some mailbox stuff, which is kinda cool. And then I went functional-reactive, which sort of fit naturally. Then I wanted to do some other stuff…..

It was a mess. I ended up with code that looked like this. (I may have actually copied this from somewhere. Beats me. It’s been a while.)

open System.Threading

/// <remarks>Worker bee class for doing long-running threaded work</remarks>
type AsyncWorker<'T>(jobs: seq<Async<'T>>) =
    /// <summary>Capture the synchronization context to allow us to
    /// raise events back on the GUI thread</summary>
    let syncContext =
        let x = System.Threading.SynchronizationContext.Current
        if x = null then new System.Threading.SynchronizationContext() else x
    let cancellationCapability = new CancellationTokenSource()
    // Each of these lines declares an F# event that we can raise
    let allCompleted    = new Event<'T[]>()
    let error           = new Event<System.Exception>()

I had a disk utilities class, a class for writing html, a class for handling gating. I had a class for everything that I thought was important.

It’s not that I didn’t get it working — I did. The problem was that I was spending all of my time being an OO programmer in a functional world. My thinking was not in alignment with the tool I was using. I was interested in class libraries that could do cool stuff, which class libraries I would need to create, which code went in which class. In my mind I was assembling a large structure with a lot of little building blocks. Some of them I made on my own. Some I got elsewhere.

I did that a few times. Heck, I’d probably still be doing that if I hadn’t decided that I wanted to learn startups more than I wanted to learn functional programming. To learn startups, I started writing various web pages — no programming if I could get away with it. In fact, many times I would compete with myself to see if I could take a weekend and write something up on the web that would make money. Anything. The programming part didn’t matter. Just make something useful.

I did all kinds of cool stuff. Eventually I landed on serverless web applications a few years ahead of everybody else. Fun times.

But with those serverless applications, eventually I wanted to add functionality. How would I do that as directly as possible? Pick up a framework?

That’d be tough to do in a weekend, and I was all about making stuff people want and not losing focus on that. So why not just write an F# app that runs on the server as a plain-old CGI app? It’d just be an F# console app for linux. It’d read the header info of the page POSTing or GETing the data, then write html directly back out to the client. This is about as simple as I can possibly make it.

I wrote that up and it worked fine. It was a very small program! But it did what I wanted and I never had to maintain it. It just worked.

I did that a few times with various apps. I started to notice a pattern emerging. First I would “clean up” the incoming data: make sure it was valid, ignore bad data, take care of any option types, make it as correct as possible. Then I would process it. And the processing would always end up being some kind of simple sorting, matching, or recursing through various data structures.

In short, a math problem.
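That clean-then-compute shape can be sketched in a few lines. This is a hedged sketch, not code from any real app: the `Reading` record and the field names are made up purely to show the pattern.

```fsharp
// The pattern: scrub the option-laden input at the edge,
// so the inner processing is a plain math problem over clean data.
type Reading = { Label: string; Value: float }

// Outer layer: take care of the option types and bad data here
let clean (raw: (string option * float option) list) : Reading list =
    raw
    |> List.choose (fun (label, value) ->
        match label, value with
        | Some l, Some v when not (System.Double.IsNaN v) ->
            Some { Label = l; Value = v }
        | _ -> None)   // ignore bad data; no options escape inward

// Inner layer: pure and trivial. Just math.
let process' (readings: Reading list) =
    readings |> List.sumBy (fun r -> r.Value)

[ (Some "a", Some 1.0); (None, Some 2.0); (Some "b", Some 3.0) ]
|> clean
|> process'   // 4.0
```

The options live and die in `clean`; by the time `process'` runs, the problem is trivial.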

The light was beginning to come on. As I moved to microservices, I saw the pattern again: clean the data, take care of the housekeeping. By the time you finish cleaning things up the problem is trivial. My microservices never got more than around 100 lines of code — and they did very useful things! With very little maintenance!

I found that I needed the strongest types on the “outside” of the program, where it touched the rest of the world. But those types were all about I/O and data movement. As I moved inward, I wrote more and more generic code. This code I captured and put in a utilities library. I also refused to make a class until I was absolutely sure it was doing something useful, that is, that I had persisted state and methods that belonged together and could be used across several applications.

I went for a long time before I had good reason to make a class.

/// Parameterized type to allow command-line argument processing without a lot of extra coder work
    /// Instantiate the type with the type of value you want. Make a default entry in case nothing is found
    /// Then call the populate method. Will pull from args and return a val and args with the found value (if any consumed)
    type ConfigEntry<'A> =
        {
            commandLineParameterSymbol: string
            commandLineParameterName: string
            parameterHelpText: string[]
            parameterValue: 'A
        } with
            // Scans args for this entry's switch and returns a new
            // ConfigEntry with any value found (the default otherwise)
            static member populateValueFromCommandLine(defaultConfig:ConfigEntry<'A>, args:string[]) : ConfigEntry<'A> =
                defaultConfig // switch-scanning logic elided

I finally found some code that belonged in a class! It was a class to handle a problem I would have over and over again: taking care of the argument list passed in from the command line. I made a nice generic type that took care of what I wanted; a programmer using the type could make it all work with just a few dozen lines of code.

// Create a type that will handle your program config
    // Use configBase to handle common stuff
    // Then put whatever you want, inheriting from ConfigEntry
    type MyAppConfig =
        {
            ConfigBase: ConfigBase
            InputFile: ConfigEntry<ConfigFileType>
            OutputFile: ConfigEntry<ConfigFileType>
        }
        member this.printThis() =
            printfn "MyProgram Parameters Provided"
            printfn "Input File Exists: %b" this.InputFile.parameterValue.FileInfoOption.IsSome
            printfn "Output File Exists: %b" this.OutputFile.parameterValue.FileInfoOption.IsSome
    // Add any help text you want
    let programHelp = [|
                        "Here's some program help."
                        ;"and some more.. as much as you want to provide,"
                      |]
    // Add in default values
    let defaultBaseOptions = createNewBaseOptions "myapp" "gets new links from a site list" programHelp defaultVerbosity
    let defaultInputFileName="myAppInput.json"
    let defaultInputFileExists=System.IO.File.Exists(defaultInputFileName)
    let defaultInputFileInfo = if defaultInputFileExists then Some (new System.IO.FileInfo(defaultInputFileName)) else None
    let defaultInputFile= createNewConfigEntry "I" "Input File (Optional)" [|"/I:<fullName> -> full name of the file having program input."|] ({FileName=defaultInputFileName; FileInfoOption=defaultInputFileInfo})
    let defaultOutputFileName="linkLibrary.json"
    let defaultOutputFileExists=System.IO.File.Exists(defaultOutputFileName)
    let defaultOutputFileInfo = if defaultOutputFileExists then Some (new System.IO.FileInfo(defaultOutputFileName)) else None
    let defaultOutputFile = createNewConfigEntry "O" "Output File (Optional)" [|"/O:<fullName> -> full name of the file where program output will be deployed."|] ({FileName=defaultOutputFileName; FileInfoOption=defaultOutputFileInfo})
    // Do the actual loading
    // This returns back a MyAppConfig structure
    // to be used by the caller
    let loadConfigFromCommandLine (args:string []):MyAppConfig =
        if args.Length>0 && (args.[0]="?"||args.[0]="/?"||args.[0]="-?"||args.[0]="--?"||args.[0]="help"||args.[0]="/help"||args.[0]="-help"||args.[0]="--help") then raise (UserNeedsHelp args.[0]) else
        let newVerbosity = ConfigEntry<_>.populateValueFromCommandLine(defaultVerbosity, args)
        let newInputFile = ConfigEntry<_>.populateValueFromCommandLine(defaultInputFile, args)
        let newOutputFile = ConfigEntry<_>.populateValueFromCommandLine(defaultOutputFile, args)
        let newConfigBase = {defaultBaseOptions with verbose=newVerbosity}
        {
            ConfigBase = newConfigBase
            InputFile = newInputFile
            OutputFile = newOutputFile
        }

That’s a bit more setup than I like, but it’s all fairly straightforward. You should be able to read the comments and figure out what it does. It’s easy to use, and really? It’s not part of the app. It’s a thing that handles getting args from the command line. It’s become a standard part of my toolkit. I don’t count it as part of the code I write. I instantiate the type with a few things…and it just works.

This way of working outside-in is called an “Onion Architecture”. People have been doing it for years. Done correctly, it prevents the program from failing. No matter what, the program runs — which is exactly what you want in a true Unix-Philosophy microservices approach. (There’s a long discussion to be had about the Unix Philosophy that we’ll save for another day)

Welcome to the Onion Architecture. Much more to come about this in later essays.

And I finally figured out why everybody kept trying to give me math problems when I was learning! Functional programming to me is focused on output, not structure. What are you doing for me? Everything revolves around that, and it should be the driver of any coding or architectural decision you make. Once the type system was keeping out bad data and controlling program flow, the only major thing left was understanding simple semantics and how first-class functions were an entirely different animal from the methods I was used to in C#. And math is probably the easiest way of understanding that.

They weren’t trying to teach me math. They were trying to teach me function construction and composition. FP is based on math, and it was the easiest thing they had handy to use as an illustration. Otherwise, to do the “real” stuff I was asking about, the authors would have had to guide me through the entire onion process. That’s too much overhead for talking about something simple like recursion or pattern-matching.

The reason those guys at Google said their code just added some matrices is that the rest of it doesn’t matter. That outside-of-the-onion stuff is just grunt work, most of which you can reuse (as opposed to the “reuse” promised to us in OO, which never panned out). In the diagram above, it’s levels 2 and 3 where all the “interesting” stuff happens. In fact, it’s mostly level 3 — and it’s mostly not that much stuff. And just like in OO, once you start thinking the right way, with the grain of the language, the problem kind of “falls apart” and becomes trivial. It’s a matrix problem. The only difference is that it falls apart in an entirely different way in FP than in OO.

July 8, 2018

Function Overloading Five Ways in F#

Ever want to overload a function in F# like you do in other languages? So you have a function “foo” that sometimes might take a string, sometimes an int? You write two different functions with the same name that take different parameters, and then you just call them. The compiler figures out which one to use.

Yeah, that’s not going to work in F#.

I’ll explain why, and how to fix it, but my real purpose in today’s post is to talk about programmer workflow. When you’re learning F# and you get hung up, what should you do?

Yesterday I decided to start writing every now and then about F#. For my first post, I included a few little code snippets I’ve picked up over the years. I also wanted to do function overloading. People say you can’t do it, but you can! I just knew it!

I was wrong. It didn’t work like I thought it would. In fact, I couldn’t get it to work easily at all. So I had to find out why.
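Here’s roughly the naive version I tried, reduced to a minimal sketch (not my exact code):

```fsharp
// Naive attempt -- what I expected to work:
//
//   let foo (x:int)    = printfn "The int is %d" x
//   let foo (x:string) = printfn "The string is %s" x
//
// In a compiled module the second binding is an error
// ("Duplicate definition of value 'foo'"). In a script the second
// binding just silently shadows the first, so after it, foo only
// takes strings. Either way, no overloading.
let foo (x:int) = sprintf "The int is %d" x
foo 42 |> printfn "%s"   // fine: exactly one foo, one inferred signature
// foo "weasels"         // error: expected int, got string
```

Two `let` bindings with the same name never coexist, so the compiler never gets the chance to pick between them.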

First thing I did was find the easiest cut-and-paste answer I could, like any good programmer. It uses a type and static methods. Turns out you can overload methods inside a type.

// Function overloading five ways
// First way. Overload a type member with static methods
type FooType =
    static member foo(x:int) = printfn "The int is %d" x
    static member foo(x:string) = printfn "The string is %s" x
FooType.foo 9
FooType.foo "weasels"

That kinda-sorta worked, but it sucked having to always stick that type name on the front of all of the calls. While I’d use it in a pinch, it just didn’t feel right. So I looked around some more.

As it turns out, you can use the “inline” keyword to point to a static member on a type. Then the compiler figures out which of those members to call. This gets you pretty close to something that’s function overloading.

// Second way. Overload a type member and use inline to resolve to it
type FooInt = { thing: int } with 
    static member foo (x:FooInt) = printfn "The int is %d" x.thing
type FooString = { label: string } with
    static member foo (x:FooString) = printfn "The string is %s" x.label
let inline foo (x:^t) =
    (^t: (static member foo: ^t -> unit) (x))
foo { thing = 98 } 
{ label = "Car" } |> foo 

Well heck. Now we don’t have to worry about carrying the type name around everywhere. But we have to instantiate some stupid dummy-type first just to get the inline magic working. I still don’t like that. Isn’t there some way to use the inline function to do the overloading, but then have the type and such constructed behind the scenes? Let’s try this one.

// Third way. Ugly! Figure it out at runtime
// The problem is that not only will it take
// our parameters, it'll take _any_ parameters.
// Ouch!
let Bar (x:obj) =
    match x with
        | :? string as x -> printfn "The string is %s" x
        | :? int as x -> printfn "The int is %d" x
        | _ -> failwith "We don't do that"
Bar 98
Bar "cats"

The usage looks perfect. Just type in the function name and throw a parameter at it, just like you’d expect. But heck, you can throw anything at that function. There’s no safety at all. The only way to fix that is to return an option. And I don’t want to return options from functions unless I’m being held hostage at gunpoint. It’s a sign of my not solving a problem and just foisting it on somebody else. That’s a code smell — and I’m not releasing anything into production that has that smell if I can help it.

Ok. Can I check for the type in the inline function, perhaps in this case only taking strings and ints, the types I know to work?

Looking around, I figured out how to check for a subtype. I couldn’t figure out how to test for one of multiple subtypes. We’re venturing here into type programming, where you’re writing code that controls which types you’ll let do which things. You can look at it as an extremely advanced form of polymorphism.

What we need, I think, is called a “typeclass”, but F# doesn’t have those. We could check and see if a method is present, like we did in our second example.

// Bonus fourth way. Vastly more ugly!
// Flag the type that it's okay, then check during compile
// and reference using inline. Uglier, but safer
type FooBar = {o:obj} with
    static member bar (x:FooBar) =
        match x.o with
            | :? string as x -> printfn "The string is %s" x
            | :? int as x -> printfn "The int is %d" x
            | _ -> failwith "We should never, ever get here"
type System.String with static member DoesFoo()=true
type System.Int32 with static member DoesFoo()=true
let inline barfoo (x: ^a when ^a:(static member DoesFoo:unit->bool)) =
    {o=x} |> FooBar.bar
barfoo 6
barfoo "elephants"

Well that’s a big old hunk-o-fun. You know you’re having fun coding when you start changing the way the system types work. I would get out a flamethrower, but I don’t have one handy.

Way ugly. But look! Now we have type safety and a usage pattern that looks like it should. I guess this is probably the best compromise. Overload all you want inside a nested type, then use an inline call to handle the dispatch and mark your handled types for function safety. After all, this is meant for custom types. You shouldn’t be using it with string and int as shown here.

Is there no way to get rid of this nested type/static nonsense? How about Active Patterns? I do some googling around and play around a bit.

// Play with some ideas
let Fun1(x)=printfn "s"
let Fun2(x)=printfn "q"
let Pick1 (|Fun1|Fun2|) (x:int)() = (if x<10 then Fun1 else Fun2)()
let Pick2 (x:int) = ((Pick1 x) x)()
Pick2 4 //outputs "s"
Pick2 15 // outputs "q"

That’s pretty cool. Still ugly as crap, but no nested types. Instead Active Patterns handle the switching. I can use them instead of inlining, but I’m back to the dynamic type problem: to make overloading work, I’m forced to take an obj type and then figure out what to do with it. I could fix that with my type annotation trick from example four.

Meh. Here’s what I have so far:

// Fifth way
// Combine what we have so far:
// Overload all you want, just add type in at end of func name
let FooFooString(x) = printfn "The string is %s" x
let FooFooInt(x) = printfn "The int is %d" x
// then make an active pattern to manage it that takes an obj
let FooFooAP (|FooFooString|FooFooInt|) (x:obj)() =
        match x with
            | :? string as x->FooFooString(x)
            | :? int as x->FooFooInt(x)
            |_->failwith "Back to dynamic typing problem, but without inlines"
// finally wrap it. I must be doing something wrong to have to use
// so much syntactic junk
let FooFoo (x:obj) = ((FooFooAP x) x)()
FooFoo 42
FooFoo "dasdf"

Wow, that’s some ugly syntax, isn’t it? I had to write a second function just to wrap the first one. In fact, I probably missed some things here. There has to be a way to clean that up.

And here’s a good place to stop. After researching and playing around with the code, I now have a better understanding of what’s going on. F# has type inference, which means that it’s always trying to figure out the appropriate types for you, instead of your having to put types on everything like you do in C or C#.

Type inference does not play well with function overloading. As soon as you type the first “foo” into the code, the compiler figures out what types it has to take. It’s locked in. If you type another “foo” in with a different signature, it doesn’t match the first. The type inference system blows up. You can work around this with static members inside a type in your code — probably because F# keeps track of more stuff when you explicitly and statically add a method to your own type.

Because F# tries to figure out all the type stuff behind the scenes for you, you’re never going to do function overloading like you would in other languages. That’s why the answer will always involve “moving up” and horsing around with the type system itself. Now you know.

The Program.fs file is itself a module, a kind of hidden module. I screwed up thinking overloading would be easy because in my mind I figured I could just add static methods there. But it doesn’t work like that. Instead, after a couple of hours of poking around, I now have several options if I want to overload. And that’s plenty.

Who’s got time to chase down this stuff? I’ve got code to get out the door! I hear you. This is the kind of thing you do to “sharpen the saw”, to better understand how the tool you’re using works. When I’m in a production environment, it’s the kind of thing I might do once or twice a week: take 2-3 hours and dig in to figure out how things work. I wouldn’t consider myself a professional otherwise.

Note the difference here. There are programmers who want to be uber-nerds. They’ll learn everything possible about DoodleSquat 7.1 and they’ll be able to tell you the difference between 7.0 and 7.1. Then there are “hack and slash” programmers (we’ve all been there), who are just looking to google something quickly, copy-and-paste, and move on.

Here we’re picking our battles, making sure every week there is at least one battle to fight, and then diving deep enough to understand the issues and be able to make several reasonable suggestions for solutions, then moving on. It’s not that I’m nerdy or not nerdy, I’m just nerdy enough.

And remember: F# is a mixed-mode language. If you come from an OO world and get stuck, fall back on your OO skills; just translate them into F#. Then, for later in the week after you get the work done, schedule a “clean-up” pass where you go pure functional. Because it’s mixed-mode, F# teaches you good functional programming; it doesn’t mandate it.

I love writing in F# because it teaches me to be a better programmer. If you don’t let it teach you, you should probably just stick with whatever you’re doing now.

Thanks for hanging out! If you’re interested in watching Uncle Bob (Clean Code), James Grenning (Embedded TDD), and myself horse around with learning F#, check out these videos.

July 6, 2018

F# Tips

What the heck is the deal with the Microsoft RegEx object? It says it returns a MatchCollection, but it doesn’t look like any collection I’ve ever seen.

As I understand it, the MS regex object goes back — way back. Way back before folks stuck enumerators on things. So it’s a collection. Kinda. I don’t like extending System types willy-nilly, but this looked to me like a good place for one. So I do this:

    type System.Text.RegularExpressions.MatchCollection with
        member this.toSeq =
            seq {for i = 0 to this.Count - 1 do yield this.[i]}
        member this.toArray =
            [|for i = 0 to this.Count - 1 do yield this.[i] |]
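In use it looks like this. The extension is recapped so the snippet stands alone; the input string and pattern are just made-up examples:

```fsharp
open System.Text.RegularExpressions

// The toSeq extension from above, plus a quick usage check
type MatchCollection with
    member this.toSeq =
        seq { for i = 0 to this.Count - 1 do yield this.[i] }

// Find every "at" and ask where each match starts
let matches = Regex.Matches("the cat in the hat", "at")
matches.toSeq
|> Seq.map (fun m -> m.Index)
|> Seq.toList   // [5; 16]
```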

I have an array and I’d like to write a function that picks a random item from it. How do I do that?

This is another type extension, and it shows the danger of writing type extensions. In this case, the extension assumes that the array is not empty. I could have written around that by returning an option type or something, but then I’d kind of defeat the purpose of writing the extension in the first place.

    type 'a ``[]`` with
        member x.randomItem =
            let rnd = new System.Random()
            let idx = rnd.Next(0,x.Length)
            x.[idx]
    type System.Random with
        /// Generates an infinite sequence of random numbers within the given range.
        member this.GetValues(minValue, maxValue) =
            Seq.initInfinite (fun _ -> this.Next(minValue, maxValue))

I also included how to get an infinite sequence of random numbers.
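Pulled together in one script, with a quick sanity check, it might look like this. The extensions are repeated so the snippet stands alone, and the array contents are just made-up examples:

```fsharp
open System

// Array extension: pick a random element (assumes a non-empty array)
type 'a ``[]`` with
    member x.randomItem =
        let rnd = Random()
        x.[rnd.Next(0, x.Length)]

// Random extension: an infinite sequence in [minValue, maxValue)
type Random with
    member this.GetValues(minValue, maxValue) =
        Seq.initInfinite (fun _ -> this.Next(minValue, maxValue))

let animals = [| "cat"; "dog"; "weasel" |]
printfn "Today's animal: %s" animals.randomItem
let rolls = Random().GetValues(1, 7) |> Seq.take 5 |> Seq.toList
printfn "Five die rolls: %A" rolls
```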

I hope you’ve enjoyed these tips!

July 5, 2018

Top Five Reasons You’re Wrong About Needing a Large Backlog

Most of these happen because people are confused about what, exactly, a backlog should be in an Agile environment.

The future will be awesome! And really messy too. Does it have to be that way?

  • But we have a lot of work to do! – You are confusing activity with value. Backlogs measure value, not activity.

    The old way to do project management was to use a Work-Breakdown Structure (WBS). It took a lot of work and broke it into smaller pieces which would be assigned to individual team members to accomplish. This was great if one person was able to completely understand everything required to break down the work, but that is not true in technology development. In fact, the reason you hire people instead of robots is that you can’t predict how they will solve certain problems. So you give them the problems and let them work it out. Instead of a Work-Breakdown Structure (WBS), you need to create a Goal-Breakdown Structure (GBS). If that sounds like the same thing just with different words? Then you don’t get it.

    Goals are tests that describe value, not actions. They are the problems you are giving people to solve. So a backlog is a series of progressively-described tests. When the tests pass, the goals are met. How the tests get to the passing state is up to the workers. Your job as backlog owner is to create tests with more and more detail the closer you get to actually doing the work, not pre-digest work. Remember: the backlog is about useful things you’re providing the users, not useful things you’re doing. If your focus is on yourself, it’s in the wrong place.

  • Without a huge backlog, we’ll lose track of important details! – You make this mistake when everything is a blade of grass, but nothing is the lawn. It stems from the belief that lots of tiny things have more value than one big thing. If your backlog is goals, not tasks, then goals have natural hierarchies, right? A lot of detail about those goals goes higher up in the hierarchy so it doesn’t have to get repeated everywhere. (In fact, if you think about it, the only extra information needed for the backlog item should be the last few bits of information to make this goal different from similar ones under the same parent. The rest of the information is already in place. DRY.)

    You can store as much detail about anything you like, just not in the backlog. That’s not what it’s for. People stick all kinds of crap into their backlogs: database tables, to-do lists, design notes.

    Another way of looking at it is this: the backlog exists to easily let the people-solving-the-problems work with the people-they’re-trying-to-help to make sure they’re solving the right problems in the right order. Once it becomes incomprehensible because of size, it ceases to fulfill its mission. Instead it becomes an artifact, a deliverable, a paperwork tiger, an obstacle, not a facilitator to rapid value delivery. Backlogs, like all the rest of your tools, are supposed to make you go faster, not slower.

  • How else can I keep track of what people are doing? – I don’t know. Maybe ask them? A lot of teams do a daily standup where they talk about how the value delivery is going. (Poor teams focus on “Important stuff I did yesterday”. Good teams focus on “I am trying to deliver this value and I could use some help”.) Hang out there and listen to what’s said.

    There are a lot of Agile folks who will wave their hands around and say something like “Stop being a micro-manager”, but I get it: somebody should be looking at how big teams are, where teams should be formed or shut down, who should go to which division, and so on. There is a people-management skill that’s required in businesses of any size. It’s just that backlogs aren’t the tool to do people management. There’s a recurring theme here: the minute you stop focusing on value delivery and start focusing on activity, you end up with a lot of activity and very little value delivery.

    So whatever your management needs are, you should handle them in some other way. (Which is outside the scope of this essay)

  • But we need to coordinate at a low level with Team X! – In a lot of corporate environments, the work is both pre-digested for the teams and then split out among several “expert” teams.

    Wow, does that create a mess. Nobody can do anything without coordinating with another team. It’s all one big hunk of work.

    At some point, your job is either to create value or go down a to-do list. If they’re giving you a to-do list, do the to-do list. You will find no value in this essay. But remember: if your job can be done by robots? It will be done by robots. Having a job where everything can effectively be broken up 3 months ahead of time by one really smart person should be setting off alarm bells. Something is wrong somewhere.

  • But we have this huge list of bugs/defects/change requests/feature requests/…! – Yes. I understand. You have this big list of stuff, right? And you also have this tool that handles big lists of stuff! How cool is that?

    Aside from the fact that you’ve destroyed the entire purpose of a backlog by making it incomprehensible in toto, here are a couple of things to consider.

    Perhaps a big list of anything is just going to lead to endless boring meetings, loss of morale, and a feeling that things are hopeless. Perhaps big lists of things fit naturally into a simple and easy-to-understand hierarchy along with everything else. Perhaps the only things you should be worried about are the things you’re working on and the things you’re getting ready to work on — the rest is useless overhead. Perhaps the entire purpose of managing a backlog is to prevent this very scenario.

In 1961, a U.S. President stated that he wanted to see, in that decade, a man walking on the moon. Did he deliver the specifications for the Saturn V moon rocket? Of course not. Do you think every week he received details on how hard the guidance team down in Huntsville was working? Of course not. He stated a test, a high-level goal he wanted accomplished. He received periodic updates as new goals were accomplished: man living in orbit, capsules docking, and so forth. People followed along as each of these sub-goals was met, knowing that with each new capability the moon came closer.

Backlogs can be easy, intuitive, fun, and positive motivators for a team! Yay! Or they can be a micro-management death march. It all depends on what you’re willing to learn — and what you’re willing to un-learn.

Interested in learning more? I wrote the book on effective information management in your project. As it turns out, looking at your processes and workflows in terms of information management can help you figure out where things are going wrong. It’s called “Info-Ops”.

July 4, 2018

The Two Kinds Of Technology Thinkers

Edward de Bono had his “Six Thinking Hats” to describe the different kinds of thinking that go into solving problems. Those are great, but there are two kinds of thinking that happen on every technology team that are far more important: Platonic thinking and Pragmatic thinking.

Although most folks use both types of thinking, people have a favorite they rely on when problem-solving. It’s important to know what your “default setting” is. It’s also important to call out and identify each type of thinking as it’s used. Each type has huge benefits and huge drawbacks.

Platonic thinkers like to think of things in the abstract, in their pure form. They’re conceptual thinkers who work with the pure and generic form of things, the way they should be. Once they have oriented themselves in the abstract, they take what they’ve learned and try to make it work with real things.

Pragmatic thinkers may never orient themselves. In fact, some resist the idea that orienting yourself against abstractions is even a laudable goal. Instead, they think of things in the concrete, cause and effect. If under these circumstances I do this one thing? This other thing seems to happen a lot. I don’t know why. I probably don’t even have time to figure it out. I can use that implied causal relationship to make the other thing happen. Now I can move to the next problem.

Ever get excited about a new software platform, download and install it, only to have popular things work without a hitch and oddball things totally flake out? If so, you’ve been a victim of Platonic thinking. The general idea was good. The concepts are in place to do a great number of useful things. It’s just the actual application of those cool ideas was limited to things the developers thought were most important. What you have is a beautiful system of organization that looks like it might work but not all the little pieces required to prove that it actually does work.

Ever work with a piece of code that’s been around so long that nobody knows exactly what it does? Somebody asks for something trivial, like a new field on a report, and the team estimates it might take six months. Six months! What you have is a bunch of little pieces that all work separately that aren’t organized into anything that’s easy to understand and maintain.

You find this kind of thinking everywhere, not just in tech teams. Pick up some political essays about any random topic from any year in the past. Some essays will argue platonically: these are our values, these are the reasons for these things existing, these reasons come together in this way to create/limit another high-level concept. Some essays will argue pragmatically: we do this certain set of things because they fix these other things. Sure, they are inconsistent with one another, but they’ve been working up until now. Sometimes we’re not even sure why they work, but they do.

There is no right or wrong way of thinking. It’s important to be able to use both of them seamlessly when working with systems of any complexity. As an example, in TDD and OO code, we first start with a pragmatic question: what’s the smallest thing that we want this thing to do? Then we write a test. Then we make the test pass. Finally we refactor, making sure everything is in the right place. We’ve started pragmatically and moved to platonic thinking once we’ve established value.
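That pragmatic-to-platonic rhythm can be sketched in a few lines. This is a hedged sketch with hypothetical names and no test framework, just the shape of the cycle:

```fsharp
// Step 2, make it pass (in practice you'd write the tests below
// first, watch them fail, then write this):
let cartTotal (items: decimal list) = List.sum items

// Step 1, pragmatic: the smallest things we want, stated as tests
let testSingleItem () = cartTotal [ 5.0m ] = 5.0m
let testEmptyCart  () = cartTotal []       = 0.0m

// Step 3, platonic: refactor until everything is in the right place.
// A fold over prices is already the "pure form" here, so we stop.
[ testSingleItem (); testEmptyCart () ] |> List.forall id   // true
```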

Oddly enough, there are all sorts of problems trying to do it backwards, starting with platonic ideas of what your code “should” look like, what the pure and perfect form is, then moving to the nuts and bolts of things. Back when I first started OO I managed to do it several times, but more often than not working from platonic to pragmatic ends up with architecture astronaut syndrome and software that promises everything for everybody and ends up doing almost nothing for a very few people.

I’m no master of functional programming, but I suspect in true functional programming this pattern might be reversed. First we ask the platonic question, what’s the smallest/simplest structure/types that support the next thing we want to do? Then we ask the pragmatic question, how can I make that work without increasing code paths? (Run from the ZOMBIES!). Finally we get extremely pragmatic by covering our code paths with tests.

We get the word platonic from the Greek philosopher Plato. Plato believed in a universal set of truths. There is a chair. There is another chair. There exists a universal idea of “chair” that all chairs reflect. It is this universal set of ideas where truth lies. Everything else, all that we see, are merely shadows of that universal truth.

Plato’s top student, Aristotle, disagreed. He was more concerned with things in the real world: watching them, understanding them, creating catalogs of what he saw. He was more concerned with understanding each thing he experienced than with speculating on what the universal pure form of something might be. And if he kept organizing his thoughts as he worked, didn’t he end up in basically the same place Plato was, only getting there from the bottom-up instead of the top-down?

So when you have these conversations, you are not the first. You are not alone. This conflict has been going on since the dawn of recorded history. I’ve always said that creating technology is applied philosophy. You walk into a new domain. You have to understand it and the people who live there. You have to take that understanding and create a workable and provable set of hypotheses that drive out an executable theory of operations to deliver real value. Then you code it and begin the science of “making and keeping users happy” that’s different for every domain/product pairing.

When creating a new science, both kinds of thinking are critical. Don’t get stuck in a rut — and always remember the weakness of whichever one you’re currently using.

Hi, I’m Daniel Markham. I wrote a book called Info-Ops that talks about how to have conversations and organize what we’re doing so that we build the right thing without a lot of the usual BS. As it turns out, looking at what we do from an information standpoint tells us a lot more than simply talking about what activities people do every day or how various tools are configured or used.

June 13, 2018

Why Don’t Organizations Use Their Own Defect-Tracking Systems?

Picture this: you’re working on a critical application for your company, used by countless people around the world. This morning, as the new update rolls out, a user in Detroit pushes the main button on your app — and nothing. Your app hangs. Something went wrong. Now nobody can use your app.

Now picture this: you walk into a new team. You’re the person who knows WhizzBang 7.0, and the team desperately needs your help. The new update is failing! You grab a seat and ask for a working computer.

But they can’t give you a working computer. Security policy says only the person assigned to each computer is allowed to use it. So you ask for your own computer. But that’s going to take a couple of days to sort out — the infrastructure staff is slammed right now with customer complaints from some kind of app deployment problem. You ask if it’s possible to get a new login on an existing computer. It’s possible, but usually takes a few hours. So then you ask if somebody could code on their computer while you tell them what to do. That works, but it’s technically against policy. Pair-programming has not been approved yet for all dev teams.

Both of these situations involve defects. The first defect is about a deployed product. People are expecting value from your product and are only finding frustration. Everybody’s familiar with that one. The second defect is about a broken organization. The people who make things happen in your organization are expecting to keep creating value and grow your business…but they are only finding frustration.

Why Don’t Organizations Use Their Own Defect-Tracking Systems?

I can only come up with three reasons:

  • Organizations actively avoid unpleasant conversations – Logging organization defects would require pointing out a lot of places where the emperor has no clothes. Then somebody would have to work on each item. There would be meetings, discussions. They would not be fun.
  • Organizations are lax about making people responsible for stuff – If you have an app in your hand and the button doesn’t work, there’s a team (or teams) responsible for that app. There’s probably even a person responsible for that button — or at least somebody who’s an expert in helping you meet your goals. In good organizations, everybody is either responsible for everything or they are clearly assigned to help users or developers meet specific and defined goals. In poor organizations, problems are swept up, organized by some metric orthogonal to value, like technology or architectural tier, given a title, and assigned to somebody. What does Joe do? Joe’s the DBA Team Lead. That’s fine, but what does Joe do? Things that involve databases? Without a connection to value delivery, how do I work with that? In this situation, everybody is responsible for nothing.
  • There’s no defect taxonomy – This one is a little better. Say you decide to start logging org defects. How do you group and classify them? It’s not an impossible thing, but it’s something only a few organizations are familiar with.

Why are defect-tracking systems only good for the users and not ourselves? Isn’t it much more important to make sure the organization is running well? Doesn’t fixing that prevent a lot of the defects that our users end up finding downstream? Isn’t it much more important to fix and optimize the machine that makes stuff people want instead of constantly playing whack-a-mole with the bad stuff it’s made?


May 28, 2018

Technical Debt Edge Cases

Is Technical Debt always bad?

Everybody talks about Technical Debt. Most of the time it’s considered to be A. Bad. Thing.

Nobody talks about when it might not be a bad thing, or when some folks think Technical Debt exists and it doesn’t.


April 4, 2016

Technical Story Slicing 3 of 3


Too many times user stories and backlogs are taught at such a high level of abstraction that folks can’t get value from them. So let’s take a real project, developed on AWS using Microservices, and walk through how the backlog is created, prioritized, and delivered — the whole thing. Including code. Due to space limitations, it will be a personal project done as a hobby over a few weeks.

The last of our videos runs about 35 minutes. It’s the final entry in a three-part series, all published on this blog. Technologies covered to some degree include F#, Mono, Ubuntu, AWS, TDD, DevOps, Error Handling, scoping, MVP, and debugging. (Because this is a front-to-back real-world example, none of these are covered in depth. Although we see quite a bit of code, the videos are suitable both for programmers and for business people who have to interact with programmers.)

November 6, 2015
