Product managers and software developers once wrote requirements documents that read more like technical specs than like anything connected to human needs.  Agile development changed that, demonstrating that a more user-centric approach leads to better software.  Agile frames requirements as user stories, in which an actual action by an actual user guides development work (e.g., “User can leave comments in the margin,” or “User can export a spreadsheet as a PDF”).  More recently, Clayton Christensen (pioneering author of The Innovator’s Dilemma) and others have put forth a “jobs to be done” framework that describes how humans “hire” products to do something for them, and how that insight might change how we develop and innovate.

As Christensen and his colleagues write in the Harvard Business Review,

We all have many jobs to be done in our lives. Some are little (pass the time while waiting in line); some are big (find a more fulfilling career). Some surface unpredictably (dress for an out-of-town business meeting after the airline lost my suitcase); some regularly (pack a healthful lunch for my daughter to take to school). When we buy a product, we essentially “hire” it to help us do a job. If it does the job well, the next time we’re confronted with the same job, we tend to hire that product again.

In this framework, technologies get adopted because users have a job opening that needs to be filled: I need to back my files up, so I hire Carbonite.  I need to compare prices on airline tickets and hotels, so I hire Expedia.  There was a time when I didn’t even realize I needed a way to carry my music collection with me on my runs, but when the iPod came along, I hired it to do just that.

I find Christensen’s framework helpful, and I use it in my work life.  But in an era of automation, redundancy, and outsourcing (not to mention anxiety about our very purpose and our ability to earn a living wage), questions of who gets hired to do what are fraught.  It’s hardly surprising that humans are building machines to do their work for them, but it’s hard to know whether to embrace automation or fear it.  Perhaps this is where Christensen’s framework surfaces important questions: Who’s hiring whom?  Whose jobs are they?  And ultimately, who is acting and who is acted upon?

When we hire a technology (TurboTax to do our taxes, or even a dishwasher to do our dishes), we may be replacing a human (sometimes ourselves) with a machine that can duplicate the outcomes of human labor.  Yet it’s worth remembering that long before we were scared of robots taking our jobs, we had the opposite fear: 19th- and 20th-century critics of modernization saw the automation of human labor as rendering humans themselves robotic, mere automatons.  When machines do our dishes, or robots replace assembly-line jobs, or computers do our record keeping, who’s to say that humans weren’t temporarily occupying roles that had always been designed for robots?

Perhaps it’s an accident of history that humans were doing these jobs in the first place.  In a way, the Fordist rationalization of human labor attempted to transform humans into machines.  The human (a machine operator or bookkeeper or dishwasher) was a temporary placeholder, occupying a place on the assembly line, in a kitchen, or hunched over ledgers only so long as machines weren’t smart enough to operate themselves and spreadsheets couldn’t auto-populate.  We should have known that this was bound to end.  In retrospect, it’s obvious that robots would build their skills and take those jobs back.


Automatons were a cultural fascination of modernist Europe.  Building mechanized dolls capable of performing human tasks (in this case, serving drinks) reflected anxieties about humanity losing itself in the face of rote, homogenized labor.

Indeed, the mind-numbing, repetitive toil of modern labor has traditionally been regarded as something to be liberated from.  From Marx onward, activists dedicated to the liberation of humans from the misery of hard labor and the unjust distributions of its fruits have been deeply ambivalent (or just confused) about technology’s capacity to do work for humans.

Will being freed from machine-like labor free us to do what humans do best?  Tim O’Reilly (see Part I of this entry) singles out creating and caring as needs where humans have unique capacities, and where social needs cry out to be filled.  The utopian in me wonders what an education system would look like with tenfold the teachers, and what elder care would be like with tenfold the caregivers.  Yet the ed tech industry and robotic aids for the elderly suggest that we need to intervene if we’re to maintain space, and establish economic viability, for the essentially human (and humane).  We need to decide what we value and how to preserve humanity’s unique capacities, lest we confirm the dystopians’ vision of losing them to automation and forgetting what it means to be human in the first place.


Agency, Automation & Reciprocity, Part II: Who’s Stealing Whose Jobs?