Words Don’t Actually Mean Anything

(The following is an excerpt from the upcoming “Backlogs 2” e-book)


Aristotle would have made a terrible programmer because words don’t actually mean anything. Philosophers in general are a big pain in the ass. They’re also responsible for civilization and modern life.

That’s the conclusion you’ve reached after spending all day Saturday at the library. After the pounding your brain has taken, you are looking forward to some down time.

Aristotle lived over two thousand years ago. Back in the day, philosophers like Aristotle were a sort of cross between motivational speaker, law professor, and librarian. They could read and write, and they were where you sent your kids when you wanted them to learn how to live a good life, what good and bad were, and how to succeed.

The Sophists were big back then. Sophists believed that justice and meaning were relative and that it was the art of social communication and persuasion that mattered. They taught their students, sons of rich and powerful parents, how to argue both sides of any question, manipulate the emotions of listeners, and handle themselves with an air of dignity. Basically, they taught all the skills you’d need to become and stay powerful in ancient Greece.

Sophists started off very popular, but by Aristotle’s time most people didn’t like them. First, they charged exorbitant amounts of money. Second, they championed the wishy-washy nature of things instead of actually taking a firm stand on anything. “Man is the measure of all things,” said Protagoras, the most famous sophist of them all. Folks wondered if sophists really stood for anything at all except how to look good and become powerful.

Aristotle didn’t like them much either. He created his own school of philosophy, didn’t charge for it, and along the way invented science.

He thought that things had to have meaning. There was some external universal reality that we could learn, categorize, and reason about. It wasn’t all relative. You could use your senses to gain knowledge of the world around you, and that knowledge let you deduce how things work everywhere. There is a real world, with real cause and effect, and learning its universal truths gave us knowledge we could apply anywhere.

Things that we observe have attributes and actions. It was the reasoning about these attributes and actions over a few examples that gave rise to understanding of all the others of the same type. We can share this understanding by creating a body of knowledge that describes the universe. There was a universal idea of, say, a cow. Aristotle called this universal idea, the truest version of something possible, a “universal form”. Once we understood things about the universal form of a cow, we could then apply that knowledge to all actual cows we might meet.

Attributes helped us create a master categorization system. You divided the world into categories depending on what attributes things had. This is an animal. This is a plant. Animals have these attributes. Plants have these. This kind of plant has bark. This kind of plant has none. This kind of plant has five leaves. This kind of plant has three. By simply listing the things each plant owns, its attributes, we could start coming up with categories about what kinds of plants they were. Same goes for animals, and everything else, for that matter.

Same goes for actions. This rock falls from a great height. This dandelion seed floats away. Things we observe DO things. They have actions they perform, sometimes on their own, sometimes in response to stimulus. They behave in certain ways. Just like with attributes, by listing the types of actions various things could do, we continued developing our categorization system.

Eventually, once we’ve described all the attributes and actions of something, we determine exactly what it is, its universal form. Then we can use logic and deduction to figure out why it works the way it does. Once we know how the universal form works, we know how all the examples of that form we meet in life work. If I know how one lump of coal acts when it is set on fire, I know how all lumps of coal will act. This is common sense to us, but Aristotle was the first to come up with it.

Aristotle’s lesson: there’s a master definition of things, a set of universal forms. We discover the meaning of the world by discovering the attributes and actions of things to figure out exactly what they are. If we understand what something is, and we understand how the universal form of that thing behaves, we understand the thing itself. Once we have exact definitions, we can reason about universal forms instead of having to talk about each item separately.

Categorizing by attributes and actions, understanding by deductive logic, having exact definitions, working with abstract universal forms — those ideas grew into everything we call science. Thousands of scientific disciplines came from Aristotle. Quite a legacy for some old Greek guy.

Categorizing the attributes and actions of things was fairly straightforward and could be used no matter what you were talking about. Creating master categorization systems and dictionaries wasn’t so hard. Deductive logic, on the other hand, got more and more complicated the more we picked at it.

Systems of reasoning greatly accelerated the growth of math. Throughout the centuries, smart people have longed for an answer to the question: if I know certain things, what other things can I use logic to figure out? They created different rigid and formal ways of reasoning about things that depended on universal forms. Each system had pros and cons. There was set theory, formal logic, symbolic logic, predicate calculus, non-Euclidean geometry, and a lot more.

Philosophers devised hundreds of systems of reasoning about all sorts of things. They kept creating whole new branches of science as they went along. Some people were interested in how surfaces relate to each other. Some people were fascinated by the relationship between math and physical reality. Some people wanted to know more about what right and wrong were. Some people wanted to find out about diseases. Some people wanted to know about the relationship between truth and beauty. Each of these used the same core Aristotelian principles of categorization of universal forms using exact definitions, then reasoning about those forms.

Highly abstract formal logic in particular, invented by another philosopher guy named Bertrand Russell around 1900, led to something called von Neumann machines. Von Neumann machines led to modern computers.

That’s right, aside from creating science in general, Aristotle’s ideas about deductive logic using universal forms led to an emphasis on logic, then formal logic, then the creation of computers — machines that operate based on rigid universal rules about what is true and false and how to act based on what is true or false.

This science thing was turning out to be a big hit. Meanwhile you never hear much about the sophists. “Sophist” is commonly used to describe somebody who has no sense of right and wrong and uses lots of weasel words.

But everything wasn’t beer and skittles.

First we had a problem with this thing that Aristotle did when he set things up, where he stepped outside of himself and reasoned about things at a universal level. There was a land above science, a meta-science, and philosophers were the folks who operated outside of science asking useful questions. Because Aristotle asked questions about important things, universal truths, we consider him one of the first great philosophers.

The problem was that using reason and logic to work at this higher, universal level caused as much confusion as positive change in the world.

Not to put too fine a point on it, philosophy through the centuries has been full of really smart people with one or two good ideas who spend their entire lives making those ideas more and more complex and unwieldy. In a simple world, philosopher X comes up with a good idea about why the sky is blue. In reality, philosopher X comes up with a good idea about why the sky is blue, then spends 40 years and writes 200 papers (including 12 books) on the nature of the sky, what kind of blue is important, and how the blue in the sky is actually the key to fish migration in Eastern Tibet and 50 other odd and vaguely understood concepts which he feels you have to know because they are the key to truly understanding his work.

These were not simple-minded or crazy people. They were the smartest of their day. They were simply taking Aristotle’s idea that there are universal truths that we can discover using reason and logic and trying to take it to the next level.

So instead of simple ideas that spawned new sciences, it was more the case that philosophers came up with extremely complex and delicate theories about things that couldn’t be measured or talked about — except by other philosophers. The useful philosopher who spawned a new science was an oddball, and even in that case, most of the time he created as much confusion among later scientists and philosophers as he shed light.

There was confusion over what different terms meant, over which parts of which philosophies applied to which things, whether one philosopher actually agreed or disagreed with another one, and even what the philosopher meant when he was writing all that stuff. Frankly, this gave most people the impression that all of philosophy was bullshit. That was a shame, because there was some really good stuff in there too. Every now and then it changed the world.

The confusion in terms, meaning, scope, and method of reasoning got philosophers asking questions about philosophy itself and how we actually knew stuff. Then things got really screwy. Philosophers asked whether they could really know if they existed or not, or whether, if they went to a swamp and were replaced by an exact duplicate, that new philosopher would be the same person. We had philosophers pushing fat people in front of trolleys and all sorts of other thought-experiment shenanigans. There was a lot of smoke but not much fire. Made for some great science fiction, though.

Second, and this was worse, the idea that we could figure out the mechanism of things took a quick hit, and it never recovered. As it turned out, almost all of the time, we could not figure out why things work the way they do. Remember Isaac Newton? He saw an apple fall from a tree and came up with the Law of Universal Gravitation. This was an astounding mathematical accomplishment that allowed us to do all kinds of neat things, like shoot rockets full of men to the moon. There was only one tiny problem: the law didn’t say _why_ gravity worked, it just gave equations for _how_ it would act. For all we knew there were tiny little gerbils running around inside of anything that had mass. Maybe magic dust. Newton didn’t know, and to a large degree, we still don’t know.

Or medicine. Doctors noticed certain patterns of observations, such as chimney sweeps in 18th-century England developing scrotal cancer at alarming rates. Some doctors speculated that chimney soot caused cancer. People stopped working as chimney sweeps. The cancer rates dropped. New observations were made, rules guessed at, hypotheses tested. Over time the terms of the debate got finer and finer, finally settling on something approximating “We believe repeated chemical insults to cells by certain agents over time can cause some cells to spontaneously mutate, at times becoming new creatures which can survive and thrive in the host and take its life.” But we don’t know for sure. We don’t know why. There’s a ton of things we don’t know. All we can do is keep making more refined, tentative models that we then test. As these models get more refined, they get more useful for doing practical stuff in the real world, like reducing cancer rates. But we still don’t know the mechanism, the why.

There is a provisional guess, based on the observation of a lot of data. We keep gathering more data, creating possible rules that might explain the data, then creating testable hypotheses to test the rules, then testing the hypotheses, building a model. Then we start over again. The process loop of science runs on three moves called abduction, deduction, and induction, and the guy who explained how it all worked is probably the most creative and insightful philosopher-scientist you’ve never heard of.
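If you squint, that loop reads almost like pseudocode. Here is a minimal sketch in Python, purely illustrative: every function name is invented for this example, not taken from anyone's actual method.

```python
def science_loop(observations, abduce, deduce, induce, rounds=10):
    """Toy sketch of the abduction/deduction/induction cycle.

    All names here are invented for illustration; this is just the
    shape of the process described above, not a real algorithm.
    """
    model = None
    for _ in range(rounds):             # in real science, the loop never ends
        model = abduce(observations)    # abduction: guess a rule that explains the data
        hypotheses = deduce(model)      # deduction: what must follow if the rule is true?
        new_data = induce(hypotheses)   # induction: test the predictions against the world
        observations = observations + new_data
    return model                        # always provisional, never final
```

The point of the sketch is the return value: whatever comes out is a model you keep testing, never a finished truth.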

Charles Sanders Peirce was the smartest thinker in America in the late 1800s, and even fewer people know of him than of Frederick Winslow Taylor. Peirce was an unknown hero, a rogue, an outsider, discovering things years or decades before others, but rarely getting the credit for it because he never got attention for his work. In the 1880s, he was the first to realize that logical operations could be carried out by electrical circuits — beating the guy who got credit for it by almost 50 years. He was one of the founders of modern statistics, creating and using techniques, like randomized experiments, decades before they became standard practice. He was the grandfather of symbolic logic, and much debate still exists as to where Bertrand Russell got all the ideas he had when he went about creating formal logic. Many believe Peirce was shortchanged.

But, like many smart people, Peirce was also ill-tempered and prone to tick others off. His only academic job was at Johns Hopkins, where he lectured in logic for a few years. The enemies he made there followed him the rest of his life. When his marriage didn’t work out and he started seeing another woman, they had enough evidence to get him fired. A trustee at Johns Hopkins was informed that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married. And that, as they say, was the end of that. For one of the greatest thinkers of the late 19th century, it was the end of his academic career. He tried the rest of his life to find jobs at other academic institutions, without success.

He lived as a poor man, purchasing an old farm in Pennsylvania with inheritance money but never making any money from it. He was always in debt. Family members repeatedly had to bail him out. Although he wrote prolifically, he was unpopular among the scientists of the day, so his work received no recognition. Only as time passed, as other famous scientists and philosophers slowly started giving credit to Peirce many decades later, did it finally become known what a great thinker he was. His “rehabilitation” continues to this day, as the study of Peirce has become an academic endeavor of its own. The man who couldn’t get a job at an academic institution and was considered a crank or crackpot now has scholars devoting their careers to studying his work.

Because of his history, Peirce had a unique outsider’s view. Although he was a philosopher, he always thought of himself as just a working scientist. As such, he saw his job as trying to make sense of the foundations of science, just like the philosophers. In his case, though, it was just to get work done.

To start organizing his work, he began by grouping the way we work with knowledge into two parts: speculative, which is the study of the abstract nature of things, and practical, which is the study of how we can gain control over things. Theoretical physics? Speculative. Applied physics? Practical. Philosophy? Speculative. Peirce taught that although the two types of investigations looked almost the same, they were completely different types of experiences. The scientist should never confuse the two.

This led him to create a new science called semiotics, which was interested in how organisms work with symbols and signs, external and internal, to do things. Every living thing uses signs and symbols to understand and manipulate the world, but nobody had ever studied how they did it.

Thinking about the importance of practical knowledge and manipulating the world around us led to his famous Pragmatic Maxim: “To ascertain the meaning of an intellectual conception one should consider what practical consequences might result from the truth of that conception — and the sum of these consequences will constitute the entire meaning of the conception.”

That is, when approaching some system of symbols, reasoning, or meaning, we have to be prepared to abandon everything, philosophy, tradition, reason, pre-existing rules, and what-not, and ask ourselves: what can we use this for? Because at the end of the day, if we can’t use knowledge to manipulate the world around us effectively, it has no value. And, in fact, the only value knowledge has is the ability it provides for us to use it to manipulate the real world. Newton figured out how to model gravity but not the mechanism, and that’s okay. It’s far more important that we have models with practical uses than it is to debate correctness of speculative systems. In fact, we can’t reason at all about speculative systems. The only value a system of symbols can have is being able to change things.

Mental ideas we work with have to _do something_. Pragmatists believe that most philosophical topics, such as the nature of language, knowledge, concepts, belief, meaning, and science, are best looked at in terms of their practical uses, not in terms of how “accurate” they are. Speculative thinking is nonsense in the philosophical sense. That is, we are unable to have any intelligent conversation about it one way or another.

The great pragmatists that followed Peirce took pragmatism everywhere: education, epistemology, metaphysics, the philosophy of science, indeterminism, philosophy of religion, instincts, history, democracy, journalism — the list goes on and on. As always, the sign of a true philosophical breakthrough is that it changes the universe, and Peirce’s pragmatism certainly qualifies.

Peirce’s lesson: in order for us to make sense of the chaos and uncertainty of science and philosophy, we have to hold both to a high standard of only concerning ourselves with the things we can use to change the world around us. This was far more important than arguing about being “right”.

This sounds very familiar to people in the Agile community. The dang thing has to work, consistently, no matter whether you did the process right or not. Right or wrong has nothing to do with it. It’s all about usefulness. A specification means nothing. It has no immediate effect. A test, on the other hand, is useful because it constantly tells us, by passing or failing, whether or not our output is acceptable. It has use.
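To make the contrast concrete, here is a minimal sketch in Python. The function and its behavior are invented for illustration; the point is only that the test, unlike the prose specification, can actually pass or fail.

```python
# The "specification" is just prose. It can't act on anything.
SPEC = "The total charged must include sales tax."

def total_with_tax(subtotal, tax_rate):
    """Hypothetical code under test."""
    return round(subtotal * (1 + tax_rate), 2)

def test_total_includes_tax():
    # The test has practical consequences: it passes or it fails,
    # and either way it tells us something about the real output.
    assert total_with_tax(100.00, 0.08) == 108.00

test_total_includes_tax()  # silence means the model of "total" still holds
```

By Peirce's maxim, the test carries meaning precisely because it has consequences; the sentence in SPEC carries none until somebody turns it into something that can act.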

Nobody ever heard of Peirce, laboring away in his farmhouse producing tons of papers they never read (at least until 100 years later, in some cases), but everybody knew Bertrand Russell and his star pupil Ludwig Wittgenstein, living large at the opposite end of the spectrum. They lived at the same time but in different worlds. Russell was a member of Britain’s nobility, well-respected, rich, and widely admired. Everybody considered Wittgenstein to be a genius, and he never wanted for anything.

Russell invented formal logic. He is considered one of the most influential philosophers of the 20th century. Wittgenstein was no slouch, either. Russell took reasoning to the highest levels man has ever reached with formal logic, yet, like always, there were a lot of pieces that didn’t fit together. Reasoning about right or wrong inside speculative systems was a waste of time, no matter how rigorous they were, as Peirce had shown decades earlier, but no one heard him.

Wittgenstein took it on himself to fix it. So he wrote a big book, the Tractatus, that he felt was the final solution to all the problems philosophy faced. He told his friends there wasn’t much more for him to do. Wittgenstein wasn’t much on the humble side, although better than most. (“Annoying genius philosopher guy” seems to be a recurring theme in these stories.)

After thinking about things some more, Wittgenstein realized that, surprise, he might have made some mistakes in the Tractatus. So he wrote another book, Philosophical Investigations, that wasn’t published until after his death in 1953. There was a good reason philosophy kept getting tangled up around itself. There was a good reason that we had difficulty separating the speculative and the practical. There was a good reason that philosophers had a good idea or two, then, by trying to tease it out using systems of logic, always ended up out in the weeds somewhere.

Wittgenstein solved the problem from the other end, assuming formal abstract systems had value and looking for fault elsewhere. What he came up with blew your mind. No wonder Collier was stumped by the consultants.

The problem here wasn’t science, or knowledge, or logic, or reason. It was both simpler and more profound. It was language. Human language. Human language was much more slippery than people realized. It wasn’t logic and reason that were broken, it was the idea of a universal categorization system and universal forms based on language. Human language gives us a sense of certainty in meaning that simply does not exist.

It can’t be made to exist, either. Aristotle’s universal forms might still be valid, but they’re not precisely approachable using human language.

Although we do it every day, we do not truly understand how people communicate with each other. In speculative efforts, where it was all theory, this led to much confusion, as one time a term was used to mean one thing, and another time it meant something slightly different, imperceptibly different. In our everyday lives, people unconsciously looked at whatever result they were shooting for instead of the language itself, so they didn’t notice. The exact meanings of words didn’t matter to them.

The reason this was important, the reason Collier sent you here, was that there was a special case where lots of theory and speculative talk ran headlong into a wall of whether it was useful or not, and did so on a regular basis: technology development. In technology development, business, marketing, and sales people spoke in abstract, fluffy, speculative terms, much like philosophers did. But at some point, that language had to translate into something practical, just like in the practical sciences. And the problems philosophy had been experiencing over and over again across years, decades, and centuries, where subtle differences in terms and different ways of looking at things refused to line up? Technology development teams experienced those same problems in time periods measured in days, weeks, or months.

Technology development is a microcosm of philosophy and science, all rolled into a small period of intense effort. It’s science on steroids.

To illustrate the nature of language, Wittgenstein suggested a thought experiment.

Let’s assume you go to work as a helper for a local carpenter. Neither of you knows any language at all, but he knows how to build houses and you want to help. The first day you show up, you begin to play what’s known as a “language game”. He points to a rod with a weight on the end. He says “hammer”. You grab the hammer and hand it to him. This game, of him pointing, naming nouns, and you bringing those things to him, results in your knowledge of what words like “hammer” mean: they mean to pick up a certain type of object and bring it over. Maybe it’s a red object.

The fact is, you don’t know most of what “hammer” means. You only know enough to do the job you’ve been given. Another person playing a language game with another carpenter might think of “hammer” as being a rod with a black piece of metal at the end. As Peirce would remind us, that’s good enough. We have results. We have each played the language game to the point where we can gain usefulness from it. Later on, the carpenter might add to the game, showing you different locations and trying two-word sentences: “hammer toolbox” or “hammer here.”

Each time you play the game, and each time the game evolves, you learn more and more about what some arbitrary symbol “hammer” means — in that particular language, that particular social construct.

The problem scientists, philosophers, and many others were having was given to us by Aristotle. Turns out science was a gag gift. The assumption was that one word has the same meaning for everybody, that language could represent some unchanging universal form. But the reality is that meaning in language is inherently individualistic, based on the thousand interactions in the various language games that person has played over the years. Sure, 99% of us could identify a hammer from a picture. That’s because most of us have a high degree of fidelity in our language games around the concept of “hammer.” But even then, some might think “claw hammer” or “ball peen hammer” while to others there would be no distinction. Would you recognize every possible picture of a hammer as a hammer? Probably not.

Words don’t mean exactly the same thing between different people. The kicker, the reason this has gone on so long without folks noticing, is that most of the time, it doesn’t matter. Also, we pick up language games as infants and use them constantly, without thinking about it, all our lives. It’s part of our nature.

Where it does matter is when it comes time to convert the philosophical ideas of what might exist into the concrete reality of computer programs, which are built on formal logic — into a system of symbols that assumes there are universal forms and that things are rigid and relate to each other in predefined ways.

Let’s say you were asked to write a customer account management program for a large retailer. Given a title “customer account management program”, would you have enough knowledge to write the program? Of course not. You would need more detail.

Being good little Taylorites, over the years we have tried to solve this problem by breaking it down into smaller and smaller processes which can then be optimized. It’s never worked very well. Now you know why.

Just like with Taylor’s Scientific Management and creative processes, it seemed that we could break down behavior and meaning into infinite detail, and there still would be lots of ambiguity remaining. There’s always some critical detail we left out. There’s always some industry context or overloaded jargon that gets omitted.

Suppose you have a spreadsheet. A stranger walks in the room and asks you to create a list with columns to account for customer information, then leaves you alone. Could you complete that task? Of course not. You would need more context. So how much would be enough?

You could look up “customer” in a dictionary, but that wouldn’t help. You could talk to programmers on the internet about what they keep track of for customers. That might help some, but it would be nowhere near correct. In fact, there is no definition of “customer” in terms of formal computer logic for your particular application. There is no definition because it hasn’t been created yet. You and the stranger haven’t played any language games to make one. The term “customer” is nonsense to you, meaningless.
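You can see the problem the moment you try to pin the word down in code. Here is a hedged sketch in Python; every field name below is invented, which is exactly the point: some team's language games, not a dictionary, would have to produce them.

```python
from dataclasses import dataclass

# What a billing team might mean by "customer" after their language games:
@dataclass
class BillingCustomer:
    account_number: str
    billing_address: str
    credit_limit: float

# What a web team might mean by the very same word:
@dataclass
class WebCustomer:
    email: str
    display_name: str
    marketing_opt_in: bool
```

The compiler demands one rigid, exact form, Aristotle-style; the word "customer" never had one. Until you and the stranger play enough games to agree on the fields, neither sketch is right or wrong; they are just two different communities' meanings.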

Agile had the concept of bringing the end-user (or Product Owner) in with the team to describe things as needed, as close to when the work happens as possible. The stated reason was to remove waste and rework, but in reality it was the best solution to a problem that was not fully understood: meanings are subjective and depend on the language games involved in creating them. We get the guy in the room with us because the team needs to play language games right up until the last possible moment. Just like with the carpenter, you play the game until it’s good enough. “Good enough” is vastly different for every team, every Product Owner, and every problem.

Wittgenstein’s lesson: communities of humans play language games all the time; it’s a state of nature. Language is inherently flexible, vague, and slippery. It gains meaning only to be “good enough” inside a particular community, and only for particular uses unique to that community. Nothing means anything until we play language games to make it mean something. We can’t reason about universal forms. Instead, we have to deal with each item separately. Meaning is relative, highly dependent on a person’s experiences, and created by social interactions.

Looks like the sophists weren’t so stupid after all. Aristotle would not be happy. There was no way in hell that Mr. Collier was going to like this.

Aristotle said that there was a universal form for everything, and that by having exact definitions and using formal systems of logic on it we could deduce things about the universal forms that would then apply to everything in the real world. Peirce said that formal abstract systems are by nature speculative, and that speculative systems are nonsense. Unless it can change things in the real world, it is impossible to reason whether things in these systems are correct or incorrect. Computers can change the world using a formal system of logic, so technically, they might be the first devices ever able to translate abstract concepts into real-world effects. Wittgenstein said that didn’t matter: that natural human languages will never, ever match up with the universal forms that all systems of reasoning are built on anyway, so although the computer can work with universal abstract ideas, you’d never get the actual things people spoke about translated directly into the computer.

Science works because at heart it is based on probability, not because it is based on reason and logic.

Language games are terribly non-intuitive for folks brought up to believe that to find the meaning of a word you simply go to the dictionary, or for folks brought up in the scientific-method school of thought that says language can rigorously describe something so that any listener receives the same meaning. Heck, they would even drive most programmers crazy, given how closely programmers associate language with formal systems of logic.

Believing that language can describe reality exactly has sent millions of projects off the rails, and thousands of philosophers to the old folks’ home early. (Wittgenstein grew to loathe philosophy, declaring that it was much more useful as a form of therapy rather than a quest for truth. We should use philosophy the way we would use a conversation at a bar with a particularly smart person, a conversation with a therapist, or a counseling session with a priest: a useful system of beliefs to help us move through life with more understanding and less pain.)

Instead, your imaginary spreadsheet project would proceed like this: you would form a group with people who had diverse initial internal definitions for the thousand or so words surrounding “customer information”. You would play various games, just like the carpenter and helper, until the group came to a consensus as to what all the words mean and how they relate to one another. This wouldn’t be a formal process — language games seldom are — but it would be a process, a social process, and it would take time.

What you couldn’t do, because it’s impossible, is capture all the terms, definitions, idioms, and such required for the project and convey it to another person’s brain. At some far degree of descriptive hell you might get close, maybe close enough, but you’d never achieve the same results as if you just sat down and had everybody naturally create the language on their way to the solution. And even if you managed to somehow describe enough for some initial, tightly-circumscribed work, you’d never cover changes, modifications, re-planning — all the parts of “living the problem” that occur as part of natural social interaction using language games.

It would be like trying to prepare somebody for a trip to an alien planet and alien civilization using only a book written in English. Could you cover the language, the culture, how to properly communicate during all of the things that might happen? It’s impossible. The best you could hope for would be to get the person into a state where they could begin their own language games and become one with the culture once they got there. Everybody knows this, yet when we talk about specifications and backlogs for technology development, we forget it, and act as if we can join up the abstract and concrete using more and more words as glue. Maybe special pictures. If only we had the right system, we think. If only we included something about fish migration in Tibet. We are all philosophers at heart.

This was why Jones found that throwing away his backlog and re-doing it — restating the backlog — had value. This was why teams that worked problems involving the entire backlog understood and executed on the domain better than teams who were given a few bits at a time. This was why standing teams in an organization had such an advantage. They were playing language games that removed ambiguity and increased performance over time.

You don’t evolve the backlog to be able to work with any team; you evolve a particular team to be able to work with a particular backlog. It’s not the backlog that matures, though lots of things might be added to it. It’s the team that matures through language games.

Solving technology problems always involves creating a new language among the people creating the solution.

Words don’t mean anything, Aristotle would have been a lousy programmer, philosophers were a pain in the ass, Collier was going to kill you, and you were late for your date.

