Crowdsourcing a topic (part 2)

Which of the ideas here might be worth developing from half-baked to fully-baked?


 

“The philosophers have only interpreted the world, in various ways. The point, however, is to change it.” – Marx

So what might XP learn from FP? Some speculation.

Part of XP’s heritage – from both object-orientation and patterns – is an emphasis on modeling the static structure of some world. Let me be careful here: I’m not claiming that current practice is as crude as the (very) old “underline all the nouns in the requirements; those are your classes” advice. And I understand that not all classes in the system may seem to be about something in the business domain: see for example Ward Cunningham’s story of Advancers, where the programmers were quite happy to work with a class that seemed to have no external meaning because it still let them solve business problems.

Note, however, the happy ending to the Advancers story: the connection between Advancers and the business domain had been there all along! The coders had, via an indirect route, discovered something true about the (human-constructed) outside world. That appeals to all us heirs of Plato, who want categories (and classes) that “carve nature at its joints”.

But that platonic view has been challenged, notably by Lakoff’s Women, Fire, and Dangerous Things: What Categories Reveal About the Mind, which argues that categories as used by humans have fuzzy boundaries and often lack any property shared by all members. Other sources, such as Kahneman’s Thinking, Fast and Slow, suggest that what we traditionally think of as reasoning – manipulating mental representations in order to create plans for manipulation of the external world – is actually a fallback strategy that our brains use only when cheaper strategies won’t do. Others, like those who work in embodied cognition and ecological psychology, are expanding our understanding of how behaviors of living organisms that seem to require internal representations or categories can be accomplished without them. For example, people catching balls don’t need to perform expensive mental computations on data that the human perceptual system is in any case unable to provide; instead, they can just maintain simple visual invariants.

The incompatibility between what our programming languages assume and what’s actually true has been discussed in OO circles for a long time, but my sense is that people have never really come to grips with it. For example, there’s still often a sneaking dissatisfaction with terminology that has only internal meaning: wouldn’t it be better if we spoke only the ubiquitous language? (The answer is probably “yes”, but what do we do when we can’t?)

The model-the-world approach has been under even more pressure recently, it seems to me, because web apps don’t have time to model the world. A lot of our habits were formed in the days when objects persisted. Apps got launched. Then objects were created, often corresponding to some part of the world, and an awful lot of them lived as long as the app did (or as long as the corresponding part of the modeled world did). Nowadays, objects live only for the duration of an HTTP Request/Response cycle. They can’t live longer because that would screw up horizontal scalability.
FP languages are a better fit for this modern world. They tend to operate on various flavors of key/value pairs (maps, hashmaps, dictionaries) that heterogeneously lump together data useful to a particular transaction script. That is: they needn’t ever work on classes that have some external meaning.

The heterogeneous structures are passed through a series of processing stages, much like this:

(figure: data flowing through a series of processing stages)

The stages tack on new values as needed, without necessarily much concern for whether they’re changing the meaning of the clump-of-data (what it represents). Put another way: adding a new key/value pair to data flowing through a stage doesn’t/shouldn’t inspire any agonized reflection over whether its type has changed: who cares?
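
To make that concrete, here is a minimal sketch of the idea in Python (standing in for any of the FP languages mentioned; the stage names and keys are invented for illustration, not taken from any real system):

    # A hypothetical HTTP request handled as a plain dictionary flowing through stages.
    # Each stage returns a new dict with a few extra keys; nothing models a domain class.

    def parse_params(req):
        # pretend these were pulled out of the raw request
        return {**req, "account_id": 42, "amount": 19.99}

    def load_account(req):
        # tack on whatever this particular transaction script happens to need
        return {**req, "balance": 100.00, "currency": "USD"}

    def authorize(req):
        return {**req, "authorized": req["balance"] >= req["amount"]}

    def render_response(req):
        return {"status": 200 if req["authorized"] else 402,
                "body": {"authorized": req["authorized"]}}

    def handle(raw_request):
        result = raw_request
        for stage in (parse_params, load_account, authorize, render_response):
            result = stage(result)
        return result   # everything else about the request is discarded here

    print(handle({"path": "/charge"}))   # {'status': 200, 'body': {'authorized': True}}

Adding another key in one of the stages changes nothing for the others; there is no class whose meaning has to be renegotiated.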

And when the processing is done, (most) everything is discarded. There’s no attempt to maintain any coherent “God’s-eye view” picture of the world. Given the increasing use of eventual consistency, it’s not even clear whether a “God’s-eye view” of “the state of the/a world” makes sense.

As a bit of summary: XP has been (to some extent) hampered by a legacy of world-modeling. That attitude is “baked in” to the process by virtue of its history. FP could allow an alternative point of view: that programs don’t model the world. Rather they (pace Marx) change it by operating on particular bundles of inputs to produce particular bundles of outputs. It’s fine for a program to have no model of the world but still send instructions to actuators that physically poke at it.

 


Agile Created the Extrovert Star

So, is Technology still an industry which is primarily suited to introverts? And if so, where do collaboration and communication fit in? Are XP and Agile bad news for introverts?

My Experience Report, which I’ll be presenting at this year’s conference (Wed, 10:30-12, Duddingston, JMCC building), is on the topic of Women and XP*. My own story is that working with XP has made my life easier as a woman, and a lot of that is because of communication and collaboration.

But in this post, “Agile Killed the Introvert Star”, an ex-colleague of mine discusses the effect of Agile on the introvert:

With the adoption of Agile, the previous plentiful habitats of the atypical introvert are under threat. Pair and mob programming have eroded the landscape even further, making programming an ‘almost’ sociable activity, reducing the opportunities for introverts to recharge.

(cartoon: “Hamster Ball”, courtesy of xkcd: http://imgs.xkcd.com/comics/hamster_ball.png)

This is something I’ve been thinking about a lot lately. At the end of David’s article, he links to a quiz which allows you to define yourself as either introvert or extrovert. It gives the impression that everyone is hard-wired to be one or the other. Maybe I am unusual, but I found myself fitting squarely into both camps. I enjoy spending time alone, and I answered “Strongly Disagree” to this one:

Being alone is boring and depressing.

…but I also answered “Strongly Agree” to this one:

I like speaking to large crowds of people and mingling.

When I first embarked on a career in computing, one of the big attractions for me was the ability to spend my working day sitting quietly in a corner, conversing with nothing other than a computer and largely avoiding people altogether. If I needed to talk to anyone, I fired off an email or an IM – and I resented people who turned up unannounced at my elbow. Work fed the introverted side of me, and my social life fed my extrovert side.

When I heard about pair programming, I laughed in derision. Seriously? Two people sitting at one computer? Quite apart from the apparent glaring inefficiency, it sounded like the height of discomfort. What if you didn’t like them? What if they were annoying? What if they just got everything wrong?

But over a period of months and years I discovered the benefits of all the different types of collaboration that XP offers. Not only was I gradually convinced of the advantages, I believe I became a slightly different person.

(image: “Man in a Box” by Keith Allison: https://commons.wikimedia.org/wiki/File:Man_in_a_box.jpg)

Whether or not it’s actually true (and according to some research it might be all in our heads), and whether it’s caused by nature or nurture, women are perceived to be more sociable and better communicators. The stereotype of the computer geek as the socially-awkward loner who prefers computers to people may be one of the things that discourages women from playing and working with technology.

Recruitment and education are in some areas taking a while to catch up, but the fact is that communication and collaboration are key skills for professionals working with XP. And if my experience is anything to go by, it’s not just that we need to recruit for a slightly different type of person. Existing professionals are becoming slightly different people.

Initially I wanted to stay in my bubble, be in total control of every line of code, and not rely on anyone but myself. I resented the idea of asking constantly for feedback, and I was nervous about letting somebody watch all my mistakes over my shoulder. I confess, when I was at school the phrase “working in teams” was the one I dreaded the most.

But when I lowered my barriers, I learnt how much better – and how much more satisfying – my work was when it was produced in close collaboration with others. Not only that, I discovered how much more comfortable my working day was when I could ask for help without feeling inadequate. When I could stand up at the beginning of each day and be honest about my progress, without fearing censure or ridicule.

Agile and XP have taught me how to confine my introversion to the places where it is genuinely enjoyable and useful, and here is my theory: The vast majority of people, whether they admit it or not, experience an urge to belong. In my own case, because I felt like I didn’t belong, I redirected that urge so that I could belong to the misfits. But it was still a form of belonging. If computing is something that excites someone, and if the stereotype of the computer geek is that of somebody who’s not that great at socialising or communicating, maybe that encourages people to exaggerate – even fetishise – those aspects of themselves.

But one of the many fantastic things about human beings is how malleable we are. We learn, we change, we evolve constantly. And maybe XP is teaching us all, bit by bit, that it’s OK to talk. It’s even rather nice.

But. Maybe I’m down-playing the fact that I’ve always had an extrovert side to me. Maybe, for people who are more introverted, the amount of collaboration involved in (for instance) pair programming and mob programming is just painful? And that’s a permanent problem?

(photo: mob programming – unicorns and demons)

I don’t know the answer to that, but I do know that when I suggested an experiment in mob programming to my team at @LateRooms, I expected them to be reluctant. But in the end, none of us wanted to stop.

*I’m also facilitating a goldfish-bowl debate on the topic, “Can XP Close the Gender Gap?” (Thurs, 14:15, Pentland East, JMCC). I’m eager to get as many different perspectives as possible, so do please come along and contribute.

@ClareSudbery (https://medium.com/a-woman-in-technology) (http://engineering.laterooms.com/)


Crowdsourcing a topic (part 1)

My topic at XP2016 will be about what XP can learn from functional programming (FP) and vice-versa. I hope to have some discussion before the conference, so that the discussion at the conference is productive.

First question is: why should I think that a style of teamwork would have anything much to do with a class of programming languages? Why wouldn’t it be that you could do all of XP with no reference to whether the code was being written in Smalltalk, Java, Ruby, Haskell, Clojure, or Elixir?

One reason is that history matters. As Jerry Weinberg has said, “things are the way they are because they got that way.” This note sketches what I know of relevant history. I expect to be corrected by readers.


 

My potted history of XP goes something like this: It started with Smalltalk, which introduced a particular set of people to a language+environment that was highly manipulable: that encouraged the “molding” or “growth” of programs. For reasons including coincidence, Smalltalk led to the patterns movement, which had some interesting characteristics:

  • Reasoning tended toward the metaphorical rather than the mathematical. (Lots of talk about “resolving forces” and “generative patterns”.)
  • Alexander’s A Pattern Language was an explicitly rhetorical work, one whose goal was partly to persuade you that the pattern being described was something you already “had” but hadn’t thought about explicitly. Patterns were written with a particular – though loose – structure that, importantly, wasn’t slavishly followed.
  • Patterns were anti-innovative in the sense that pattern writers were supposed to be collecting solutions that were “in the air” among the experts. They were about making certain tacit knowledge more explicit. As such, the patterns movement was ill-fitted for academia.

Smalltalk also inspired a particular exodus of the Knights of the Square Bracket. Smalltalk seemed to transition rather quickly from something supported by research labs like Xerox PARC and Tektronix Labs to a language that was used in particular odd and unfashionable niches such as insurance companies, tractor manufacturers, and the like. In such cases, you’d expect the researchers to shrug and move on to the next hot thing. Instead, a core set of people moved with their language into environments where technologies (like SQL!) were ugly, and the business logic was… not at all logical. Not even close. Unbearably messy.

XP, I claim, was in part an attempt by a particular set of ambitious, thoughtful, and inventive people to come to grips with a new and forbidding environment, bringing to bear attitudes and habits developed via Smalltalk and patterns.


 

Meanwhile, over in the land of Functional Programming…

Here’s where I make a disclaimer. I was peripherally involved with the Smalltalk -> Patterns -> XP world, mostly as a result of being attached to Ralph Johnson’s patterns reading group. Over at my day job, I was for a time even more peripherally attached to the Lisp world because I was involved in writing a virtual machine for Common Lisp during the AI Boom of the early 1980’s.

Although Lisp and functional programming are lumped together today, and were even in the 1980’s (with a series of conferences on Lisp and Functional Programming), they didn’t really seem so to me. Lisp looked upward to a messy domain (intelligence, artificial or not) and system-building, whereas functional languages looked to the clarity of mathematics for inspiration. The sentence “Functional languages are easier to reason about” was common even back then, and it was even more clear than now that “reason about” entailed “prove theorems about”.

After the AI Boom turned into the AI Winter, Lisp (together with its fancy and expensive personal workstations) fell greatly out of favor. It (in the form of Scheme) was used in teaching, but production uses were limited to die-hards in niches (much like Smalltalk).

Meanwhile, the functional languages chugged along in academia. Because of their mathematical slant and emphasis on innovative applications of core ideas, they could encyst themselves there until a favorable environment allowed a sudden epidemic breakout. It’s perhaps early to say, but we may be in the midst of a global pandemic. I hope so, because – for an awful lot of problems, especially today’s problems – FP programs really are easier to reason about, and not just in the “prove properties of” sense.

When I refer to “functional languages” in this series of posts, I’ll be referring to ones from the academic tradition. I believe that even Clojure, which uses Lisp syntax, is more akin – in style and features and, perhaps, community – to them than it is to the Lisp tradition that culminated in Common Lisp. That’s unfortunate for my talk, in that I don’t have the experience in Haskell or F# programming that I should.


The missing piece in my story is Erlang. Its adoption was long hampered by an obscure syntax. Elixir, which puts a Ruby-like syntax on top of the Erlang virtual machine, is now a valid choice for early adopters, especially those writing applications like those for which Ruby on Rails is usually used. Unfortunately, I know little about Erlang’s history and have only a very novice understanding of its style and idioms (and that only through Elixir).


So, historians: what have I gotten all wrong?

 


Innovation

We live in a different world than existed when Kent Beck’s book Extreme Programming Explained was published in the year 2000. To understand technology today, it is interesting to examine the origins of the biggest innovations that software has brought us in the last 16 years. Here are my candidates for the top five:

1. The Cloud
2. Big Data
3. Antifragile Systems
4. Content Platforms
5. Mobile Apps

The Cloud

In 2003 Nicholas Carr’s controversial article “IT Doesn’t Matter” was published in Harvard Business Review. He claimed that “the core functions of IT – data storage, data processing, and data transport” had become commodities, just like electricity, and they no longer provided differentiation. It’s amazing how right – and how wrong – that article turned out to be. At the time, perhaps 70% of an IT budget was allocated to infrastructure, and that infrastructure rarely offered a competitive advantage. On the other hand, since there was nowhere to purchase IT infrastructure as if it were electricity, there was a huge competitive advantage awaiting the company that figured out how to package and sell such infrastructure.


At the time, IT infrastructure was a big problem – especially for rapidly growing companies like Amazon.com. Amazon had started out with the standard enterprise architecture: a big front end coupled to a big back end. But the company was growing much faster than this architecture could support. CEO Jeff Bezos believed that the only way to scale to the level he had in mind was to create small autonomous teams. Thus by 2003, Amazon had restructured its digital organization into small (two-pizza) teams, each with end-to-end responsibility for a service. Individual teams were responsible for their own data, code, infrastructure, reliability, and customer satisfaction.


Amazon’s infrastructure was not set up to deal with the constant demands of multiple small teams, so things got chaotic for the operations department. This led Chris Pinkham, head of Amazon’s global infrastructure, to propose developing a capability that would let teams manage their own infrastructure – a capability that might eventually be sold to outside companies. As the proposal was being considered, Pinkham decided to return to South Africa where he had gone to school, so in 2004 Amazon gave him the funding to hire a team in South Africa and work on his idea. By 2006 the team’s product, Elastic Compute Cloud (EC2), was ready for release. It formed the kernel of what would become Amazon Web Services (AWS), which has since grown into a multi-billion-dollar business.


Amazon has consistently added software services on top of the hardware infrastructure – services like databases, analytics, access control, content delivery, containers, data streaming, and many others. It’s sort of like an IT department in a box, where almost everything you might need is readily available. Of course Amazon isn’t the only cloud company – it has several competitors.


So back to Carr’s article – Does IT matter?  Clearly the portion of a company’s IT that could be provided by AWS or similar cloud services does not provide differentiation, so from a competitive perspective, it doesn’t matter. If a company can’t provide infrastructure that matches the capability, cost, accessibility, reliability, and scalability of the cloud, then it may as well outsource its infrastructure to the cloud.


Outsourcing used to be considered a good cost reduction strategy, but often there was no clear distinction between undifferentiated context (that didn’t matter) and core competencies (that did). So companies frequently outsourced the wrong things – critical capabilities that nurtured innovation and provided competitive advantage. Today it is easier to tell the difference between core and context: if a cloud service provides it then anybody can buy it, so it’s probably context; what’s left is all that’s available to provide differentiation. In fact, one reason why “outsourcing” as we once knew it has fallen into disfavor is that today, much of the outsourcing is handled by cloud providers. 


The idea that infrastructure is context and the rest is core helps explain why internet companies do not have IT departments. For the last two decades, technology startups have chosen to divide their businesses along core and infrastructure lines rather than along technology lines. They put differentiating capabilities in the line business units rather than relegating them to cost centers, which generally works a lot better. In fact, many IT organizations might work better if they were split into two sections, one (infrastructure) treated as a commodity and the rest moved into (or changed into) a line organization. 

Big Data

In 2001 Doug Cutting released Lucene, a text indexing and search program, under the Apache software license. Cutting and Mike Cafarella then wrote a web crawler called Nutch to collect interesting data for Lucene to index. But now they had a problem – the web crawler could index 100 million pages before it filled up the terabyte of storage they could easily fit on one machine. At the time, managing large amounts of data across multiple machines was not a solved problem; most large enterprises stored their critical data in a single database running on a very large computer.


But the web was growing exponentially, and when companies like Google and Yahoo set out to collect all of the information available on the web, the computers and databases available at the time were not even close to big enough to store and analyze all of that data. So they had to solve the problem of using multiple machines for data storage and analysis.


One of the bigger problems with using multiple machines is the increased probability that one of the machines will fail. Early in its history, Google decided to accept the fact that at its scale, hardware failure was inevitable, so it should be managed rather than avoided. This was accomplished by software which monitored each computer and disk drive in a data center, detected failure, kicked the failed component out of the system, and replaced it with a new component. This process required keeping multiple copies of all data, so when hardware failed the data it held was available in another location. Since recovering from a big failure carried more risk than recovering from a small failure, the data centers were stocked with inexpensive PC components that would experience many small failures. The software needed to detect and quickly recover from these “normal” hardware failures was perfected as the company grew.


In 2003 Google employees published two seminal papers describing how the company dealt with the massive amounts of data it collected and managed. Web Search for a Planet: The Google Cluster Architecture by Luiz André Barroso, Jeffrey Dean, and Urs Hölzle described how Google managed its data centers with inexpensive components. The Google File System by Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung described how the data was managed by dividing it into small chunks and maintaining multiple copies (typically three) of each chunk across the hardware. I remember that my reaction to these papers was “So that’s how they do it!” And I admired Google for sharing these sophisticated technical insights.
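
As a rough illustration of the chunk-and-replicate idea from that paper, here is a hedged sketch in Python (the chunk size matches the paper, but the placement and recovery logic below is deliberately simplified and invented for illustration, not GFS's actual algorithm):

    import itertools

    CHUNK_SIZE = 64 * 1024 * 1024   # the paper describes 64 MB chunks
    REPLICAS = 3                    # typically three copies of each chunk

    def split_into_chunks(data):
        return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

    def place_chunks(chunks, servers):
        # simplistic round-robin placement, purely for illustration
        ring = itertools.cycle(servers)
        return {chunk_id: [next(ring) for _ in range(REPLICAS)]
                for chunk_id in range(len(chunks))}

    def handle_server_failure(placement, dead_server, live_servers):
        # re-replicate any chunk that lost a copy, so three copies always exist
        for holders in placement.values():
            if dead_server in holders:
                holders.remove(dead_server)
                holders.append(next(s for s in live_servers if s not in holders))
        return placement

    placement = place_chunks([b"chunk-a", b"chunk-b"], ["cs-1", "cs-2", "cs-3", "cs-4"])
    print(handle_server_failure(placement, "cs-2", ["cs-1", "cs-3", "cs-4"]))

The real system adds a master that tracks chunk locations, leases, and checksums; the point here is only that recovering from failure reduces to keeping the replica count up.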


Cutting and Cafarella had approximately the same reaction. Using the Google File System as a model, they spent 2004 working on a distributed file system for Nutch. The system abstracted a cluster of storage into a single file system running on commodity hardware, used relaxed consistency, and hid the complexity of load balancing and failure recovery from users. 


In the fall of 2004, the next piece of the puzzle – analyzing massive amounts of stored data – was addressed by another Google paper: MapReduce: Simplified Data Processing on Large Clusters by Jeffrey Dean and Sanjay Ghemawat. Cutting and Cafarella spent 2005 rewriting Nutch and adding MapReduce, which they released as Apache Hadoop in 2006. At the same time, Yahoo decided it needed to develop something like MapReduce, and settled on hiring Cutting and building Apache Hadoop into software that could handle its massive scale. Over the next couple of years, Yahoo devoted a lot of effort to converting Apache Hadoop – open source software – from a system that could handle a few servers to a system capable of dealing with web-scale databases. In the process, their data scientists and business people discovered that Hadoop was as useful for business analysis as it was for web search.
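
The programming model the MapReduce paper describes is easiest to see with the canonical word-count example. Here is a minimal single-machine sketch in Python (Hadoop distributes the map and reduce phases across a cluster and does the shuffle for you; this only shows the shape of the model):

    from collections import defaultdict

    def map_phase(document):
        # emit a (key, value) pair for every word
        for word in document.split():
            yield (word.lower(), 1)

    def shuffle(pairs):
        # group values by key - the work the framework does between the two phases
        grouped = defaultdict(list)
        for key, value in pairs:
            grouped[key].append(value)
        return grouped

    def reduce_phase(key, values):
        return key, sum(values)

    documents = ["the cloud changed IT", "big data changed the cloud"]
    pairs = (pair for doc in documents for pair in map_phase(doc))
    counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
    print(counts)   # {'the': 2, 'cloud': 2, 'changed': 2, 'it': 1, 'big': 1, 'data': 1}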


By 2008, most web scale companies in Silicon Valley – Twitter, Facebook, LinkedIn, etc. – were using Apache Hadoop and contributing their improvements. Then startups like Cloudera were founded to help enterprises use Hadoop to analyze their data. What made Hadoop so attractive? Until that time, useful data had to be structured in a relational database and stored on one computer. Space was limited, so you only kept the current value of any data element. Hadoop could take unlimited quantities of unstructured data stored on multiple servers and make it available for data scientists and software programs to analyze. It was like moving from a small village to a megalopolis – Hadoop opened up a vast array of possibilities that are just beginning to be explored.


In 2011 Yahoo found that its Hadoop engineers were being courted by the emerging Big Data companies, so it spun off Hortonworks to give the Hadoop engineering team their own Big Data startup to grow. By 2012, Apache Hadoop (still open source) had so many data processing appendages built on top of the core software that MapReduce was split off from the underlying distributed file system. The cluster resource management that used to be in MapReduce was replaced by YARN (Yet Another Resource Negotiator). This gave Apache Hadoop another growth spurt, as MapReduce joined a growing number of analytical capabilities that run on top of YARN. Apache Spark is one of those analytical layers which supports data analysis tools that are more sophisticated and easier to use than MapReduce. Machine learning and analytics on data streams are just two of the many capabilities that Spark offers – and there are certainly more Hadoop tools to come. The potential of Big Data is just beginning to be tapped. 


In the early 1990’s Tim Berners-Lee worked to ensure that CERN made his underlying code for HTML, HTTP and URLs available on a royalty-free basis, and because of that we have the World Wide Web. Ever since, software engineers have understood that the most influential technical advances come from sharing ideas across organizations, allowing the best minds in the industry to come together and solve tough technical problems. Big Data is as capable as it is because Google and Yahoo and many other companies were willing to share their technical breakthroughs rather than keep them proprietary. In the software industry we understand that we do far better as individual companies when the industry as a whole experiences major technical advances.

Antifragile Systems

It used to be considered unavoidable that as software systems grew in age and complexity, they became increasingly fragile. Every new release was accompanied by fear of unintended consequences, which triggered extensive testing and longer periods between releases. However, the “failure is not an option” approach is not viable at internet scale – because things will go wrong in any very large system. Ignoring the possibility of failure – and focusing on trying to prevent it – simply makes the system fragile. When the inevitable failure occurs, a fragile system is likely to break down catastrophically.[1]  


Rather than prevent failure, it is much more important to identify and contain failure, then recover with a minimum of inconvenience for consumers. Every large internet company has figured this out. Amazon, Google, Etsy, Facebook, Netflix and many others have written or spoken about their approach to failure. Each of these companies has devoted a lot of effort to creating robust systems that can deal gracefully with unexpected and unpredictable situations.


Perhaps the most striking among these is Netflix, which has a good number of reliability engineers despite the fact that it has no data centers. Netflix’s approach was described in 2013 by Ariel Tseitlin in the article The Antifragile Organization: Embracing Failure to Improve Resilience and Maximize Availability. The main way Netflix increases the resilience of its systems is by regularly inducing failure with a “Simian Army” of monkeys: Chaos Monkey does some damage twice an hour, Latency Monkey simulates instances that are sick but still working, Conformity Monkey shuts down instances that don’t adhere to best practices, Security Monkey looks for security holes, Janitor Monkey cleans up clutter, Chaos Gorilla simulates failure of an AWS availability zone, and Chaos Kong might take a whole Amazon region offline. I was not surprised to hear that during a recent failure of an Amazon region, Netflix customers experienced very little disruption.
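
The core of the idea is simple enough to sketch. The following is illustrative Python, not Netflix’s actual Simian Army code; the instance names and calls are stand-ins:

    import random
    import time

    def running_instances():
        # in reality this list would come from the cloud provider's API
        return ["web-1", "web-2", "web-3", "worker-1", "worker-2"]

    def terminate(instance_id):
        # in reality, an API call that really does kill the instance
        print(f"chaos: terminating {instance_id}")

    def chaos_monkey(interval_seconds=1800):
        # with a 30-minute interval, this does damage twice an hour
        while True:
            terminate(random.choice(running_instances()))
            time.sleep(interval_seconds)

Because failure is induced constantly, the recovery paths get exercised on ordinary days rather than only during real outages.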


A Simian Army isn’t the only way to induce failure. Facebook’s motto “Move Fast and Break Things” is another approach to stressing a system. In 2015, Ben Maurer of Facebook published Fail at Scale – a good summary of how internet companies keep very large systems reliable despite failure induced by constant change, traffic surges, and hardware failures. 


Maurer notes that the primary goal for very large systems is not to prevent failure – this is both impossible and dangerous. The objective is to find the pathologies that amplify failure and keep them from occurring. Facebook has identified three failure-amplifying pathologies: 


1. Rapidly deployed configuration changes

Human error is amplified by rapid changes, but rather than decrease the number of deployments, companies with antifragile systems move small changes through a release pipeline. Here changes are checked for known errors and run in a limited environment. The system quickly reverts to a known good configuration if (when) problems are found. Because the changes are small and gradually introduced into the overall system under constant surveillance, catastrophic failures are unlikely. In fact, the pipeline increases the robustness of the system over time.
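
A hedged sketch of that pattern in Python – known-error checks, limited exposure, automatic revert – with placeholder checks and host names rather than any particular vendor’s pipeline:

    import itertools

    def passes_static_checks(change):
        # placeholder: reject changes that match known-bad patterns
        return "drop_all_traffic" not in change

    def apply_to(hosts, change):
        print(f"applying {change!r} to {hosts}")

    def healthy(hosts):
        # placeholder: in reality, watch error rates and latency for a while
        return True

    def batches_of(hosts, size):
        it = iter(hosts)
        while batch := list(itertools.islice(it, size)):
            yield batch

    def deploy_config_change(change, canary_hosts, all_hosts, last_known_good):
        if not passes_static_checks(change):           # 1. check for known errors
            return "rejected"
        apply_to(canary_hosts, change)                 # 2. limited environment first
        if not healthy(canary_hosts):
            apply_to(canary_hosts, last_known_good)    #    quick revert
            return "rolled back at canary"
        for batch in batches_of(all_hosts, size=10):   # 3. gradual, watched rollout
            apply_to(batch, change)
            if not healthy(batch):
                apply_to(all_hosts, last_known_good)
                return "rolled back during rollout"
        return "fully deployed"

    print(deploy_config_change("cache_ttl=60", ["canary-1"],
                               [f"web-{i}" for i in range(30)], "previous_config"))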


2. Hard dependencies on core services

Core services fail just like anything else, so code has to be written with that in mind. Generally, hardened APIs that include best practices are used to invoke these services. Core services and their APIs are gradually improved by intentionally injecting failure into a core service to expose weaknesses that are then corrected as failure modes are identified.
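
What a “hardened” call to a core service might look like, as an illustrative Python sketch (the service, retry counts, and fallback are invented; the point is bounded retries with backoff and graceful degradation rather than a hard dependency):

    import random
    import time

    class CoreServiceError(Exception):
        pass

    def call_core_service(request):
        # stand-in for a real RPC; fails some of the time, as any core service will
        if random.random() < 0.3:
            raise CoreServiceError("injected failure")
        return {"ok": True, "echo": request}

    def hardened_call(request, retries=2, backoff_seconds=0.1, fallback=None):
        for attempt in range(retries + 1):
            try:
                return call_core_service(request)
            except CoreServiceError:
                time.sleep(backoff_seconds * (2 ** attempt))   # back off, don't hammer
        return fallback   # degrade gracefully instead of propagating the outage

    print(hardened_call({"user": 1}, fallback={"ok": False, "cached": True}))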


3. Increased latency and resource exhaustion

Best practices for avoiding the well-known problem of resource exhaustion include managing server queues wisely and having clients track outstanding requests. It’s not that these strategies are unknown, it’s that they must become common practice for all software engineers in the organization. 
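
Both practices fit in a few lines. This is an illustrative Python sketch (the queue size and limits are invented numbers): the server keeps its queue bounded and sheds load rather than letting latency grow without bound, and the client caps how many requests it will have outstanding at once.

    import queue

    # Server side: a bounded queue; when it is full, shed load instead of queueing forever.
    work_queue = queue.Queue(maxsize=100)

    def accept_request(request):
        try:
            work_queue.put_nowait(request)
            return "accepted"
        except queue.Full:
            return "503: shed"   # a fast rejection beats unbounded queueing delay

    # Client side: track outstanding requests and back off instead of piling on.
    class BoundedClient:
        def __init__(self, max_outstanding=10):
            self.max_outstanding = max_outstanding
            self.outstanding = 0

        def send(self, request):
            if self.outstanding >= self.max_outstanding:
                return "backed off"   # the server is struggling; don't add to it
            self.outstanding += 1
            try:
                return accept_request(request)   # simplified to a synchronous call
            finally:
                self.outstanding -= 1

    print(BoundedClient().send({"path": "/home"}))   # "accepted"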


Well-designed dashboards, effective incident response, and after-action reviews that implement countermeasures to prevent recurrence round out Facebook’s toolkit for keeping its very large systems reliable.


We now know that fault tolerant systems are not only more robust, but also less risky than systems which we attempt to make failure-free. Therefore, common practice for assuring the reliability of large-scale software systems is moving toward software-managed release pipelines which orchestrate frequent small releases, in conjunction with failure induction and incident analysis to produce hardened infrastructure.

Content Platforms

Video is not new; television has been around for a long time, film for even longer. As revolutionary as film and TV have been, they push content to a mass audience; they do not inspire engagement. An early attempt at visual engagement was the PicturePhone of the 1970’s – a textbook example of a technical success and a commercial disaster. They got the PicturePhone use case wrong – not many people really wanted to be seen during a phone call. Videoconferencing did not fare much better – because few people understood that video is not about improving communication, it’s about sharing experience. 


In 2005, amidst a perfect storm of increasing bandwidth, decreasing cost of storage, and emerging video standards, three entrepreneurs – Chad Hurley, Steve Chen, and Jawed Karim – tried out an interesting use case for video: a dating site. But they couldn’t get anyone to submit “dating videos,” so they accepted any video clips people wanted to upload. They were surprised at the videos they got: interesting experiences, impressive skills, how-to lessons – not what they expected, but at least it was something. The YouTube founders quickly added a search capability. This time they got the use case right and the rest is history. Video is the printing press of experience, and YouTube became the distributor of experience. Today, if you want to learn the latest unicycle tricks or how to get the back seat out of your car, you can find it on YouTube.


YouTube was not the first successful content platform. Blogs date back to the late 1990’s, when they began as diaries on personal web sites shared with friends and family. Then media companies began posting breaking news on their web sites to get their stories out before their competitors. Blogger, one of the earliest blog platforms, was launched just before Y2K and acquired by Google in 2003 – the same year WordPress was launched. As blogging popularity grew over the next few years, the use case shifted from diaries and news articles to ideas and opinions – and blogs increasingly resembled magazine articles. Those short diary entries meant for friends were more like scrapbooks; they came to be called tumblelogs or microblogs. And – no surprise – separate platforms for these microblogs emerged: Tumblr in 2006 and Twitter in 2007.


One reason why blogs drifted away from diaries and scrapbooks is that alternative platforms emerged aimed at a very similar use case – which came to be called social networking. MySpace was launched in 2003 and became wildly popular over the next few years, only to be overtaken by Facebook, which was launched in 2004. 


Many other public content platforms have come (and gone) over the last decade; after all, a successful platform can usually be turned into a significant revenue stream. But the lessons learned by the founders of those early content platforms remain best practices for two-sided platforms today:

1. Get the use case right on both sides of the platform. 

Very few founders got both use cases exactly right to begin with, but the successful ones learned fast and adapted quickly. 

2. Attract a critical mass to both sides of the platform. 

Attracting enough traffic to generate network effects requires a dead simple contributor experience and an addictive consumer experience, plus a receptive audience for the initial release.

3. Take responsibility for content even if you don’t own it. 

In 2007 YouTube developed ContentID to identify copyrighted audio clips embedded in videos and make it easy for contributors to comply with attribution and licensing requirements. 

4. Be prepared for and deal effectively with stress. 

Some of the best antifragile patterns came from platform providers coping with extreme stress such as the massive traffic spikes at Twitter during natural disasters or hectic political events.

In short, successful platforms require insight, flexibility, discipline, and a lot of luck. Of course, this is the formula for most innovation. But don’t forget – no matter how good your process is, you still need the luck part. 

Mobile Apps

It’s hard to imagine what life was like without mobile apps, but they did not exist a mere eight years ago. In 2008 both Apple and Google released content platforms that allowed developers to get apps directly into the hands of smartphone owners with very little investment and few intermediaries. By 2014 (give or take a year, depending on whose data you look at) mobile apps had surpassed desktops as the path people take to the internet. It is impossible to ignore the importance of the platforms that make mobile apps possible, or the importance of the paradigm shift those apps have brought about in software engineering.


Mobile apps tend to be small and focused on doing one thing well – after all, a consumer has to quickly understand what the app does. By and large, mobile apps do not communicate with each other, and when they do it is through a disciplined exchange mediated by the platform. Their relatively small size and isolation make it natural for each individual app to be owned by a single, relatively small team that accepts the responsibility for its success. As we saw earlier, Amazon moved to small autonomous teams a long time ago, but it took a significant architectural shift for those teams to be effective. Mobile apps provide a critical architectural shift that makes small independent teams practical, even in monolithic organizations. And they provide an ecosystem that allows small startups to compete effectively with those organizations.  


The nature of mobile apps changes the software development paradigm in other ways as well. As one bank manager told me, “We did our first mobile app as a project, so we thought that when the app was released, it was done. But every time there was an operating system update, we had to update the app. That was a surprise! There are so many phones to test and new features coming out that our apps are in a constant state of development. There is no such thing as maintenance – or maybe it’s all maintenance.”


The small teams, constant updates, and direct access to the deployed app have created a new dynamic in the IT world: software engineers have an immediate connection with the results of their work. App teams can track usage, observe failures, and monitor metrics – then make changes accordingly. More than any other technology, mobile platforms have fostered the growth of small, independent product teams – with end-to-end responsibility – that use short feedback loops to constantly improve their offering.


Let’s return to luck. If you have a large innovation effort, it probably has a 20% chance of success at best. If you have five small, separate innovation efforts, each with a 20% chance of success, you have a much better chance that at least one of them will succeed – as long as they are truly autonomous and are not tied to an inflexible back end or flawed use case. Mobile apps create an environment where it can be both practical and advisable to break products into small, independent experiments, each owned by its own “full stack” team.[2] The more of these teams you have pursuing interesting ideas, the more likely it is that some of the ideas will become the innovative offerings that propel your company into the future.
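
The arithmetic behind that claim, assuming the five efforts are independent and each really does have a 20% chance:

    p_single = 0.20
    p_at_least_one_of_five = 1 - (1 - p_single) ** 5
    print(round(p_at_least_one_of_five, 3))   # 0.672 - roughly a two-in-three chance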

What about “XP” and “Agile”?

You might notice that “XP” and “Agile” are not on my list of innovations. And yet, XP and agile values are found in every major software innovation since XP Explained was published sixteen years ago. XP and Agile development do not cause innovation; they create the conditions necessary for innovation: flexibility and discipline, customer understanding and rapid feedback, small teams with end-to-end responsibility. No software development process can manufacture insight or create luck. That is what people do.

Mary Poppendieck

March 23, 2016

____________________________

Footnotes:

1.    “the problem with artificially suppressed volatility is not just that the system tends to become extremely fragile; it is that, at the same time, it exhibits no visible risks… Such environments eventually experience massive blowups… catching everyone off guard and undoing years of stability or, in almost all cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm…”  Antifragile, Nassim Taleb p 106

2.   A full stack team contains all the people necessary to make things happen in not only the full technology stack, but also in the full stack of business capabilities necessary for the team to be successful.
