I used to think tech is best when it’s human. Now I’m not so sure. The first line of my professional bio used to read something like this: humanity in tech usually follows one of two trajectories:
Tech as a perception-melting tool for cosmic awe, or
Tech as a thing that gets something done for you without making you think too hard.
The point about cosmic awe still holds true. People use digital tools to make more awe-inspiring work than ever, even if most of that art has moved from idiosyncratic blogs and quirky personal sites into some doom-scrolling strip mall of a social network. It’s the second point I have trouble with now, the idea that tech tools should be useful without making me think too hard. For one thing, effort is sometimes a good thing, which I’ve written about in previous editions of the newsletter. For another, the sentence “technology is best when it’s human” was meant to tell my readers that I’m hip to human-centred design, the nearly universal ethical framework of “design values” in tech. Unquestioning adherence to the principles of human-centred design is a mandatory philosophical stance for working as a designer in tech today. I don’t think that’s good enough.
From punch cards to inclusion
It makes sense to start with a few definitions. Bear with me. I promise not to navel gaze too hard. Let’s start with design. Design is rendering intent. That’s the best definition to serve my purposes for now. It means that a designer is anyone who intends something and does something to or with or through something or someone else in order to make that intent a reality. Most (probably all) people design, even if they’re not professional designers: bureaucrats, middle managers, cooks, systems analysts, business owners, carpenters, parents, horticulturalists, musicians, mimes; all designers.
Now, what about human-centred design? In technology, there’s a bit of a false narrative here that’s worth unpacking. It goes like this: in the bad old days of punch cards, terminals, and command lines, tech was hard to use. Then the personal computing “revolution” of graphical user interfaces, windows, point-and-click, WYSIWYG, iPods, etc. made technology accessible for everyone because those things were designed by geniuses who put “real people” at the centre of their designs for the first time. The implication behind this version of events is that “human-centred” meant all humans.

The truth, though, is that technology has always been human-centred; it’s the definition of which humans we centre that has changed. Mainframes, command lines, and punch cards were still designed to be used by humans, just very few, very specific, technically trained humans. What usually gets passed off as “human-centred” tech from your garden-variety Silicon Valley-style tech company in the last 25 years or so is actually just “user-friendly” (a term which will cause physical pain to any UX designer who hears it), and the users to whom that tech is friendly have tended to look just like the people who designed it. Wanna guess what those guys look(ed) like?
What most people think of as “human-centred” design is actually a slight expansion of the definition of who tech is for, from expert technician humans, to mostly middle-class, mostly white, mostly male, mostly cis-het, mostly anglophone, mostly North American humans. That’s still a massive increase in what we in startup land would call the “total addressable market” of consumer tech, from a few thousand technicians to hundreds of millions of normative North Americans. After some translation and expanded telecom infrastructure, a global industry of technologists expanded the “humans” we’re centring to billions of people in the able-bodied, relatively affluent global middle class. That’s still not all the humans, though.
Academics and tech experts used to talk about the “digital divide” between these “users” to whom the first few generations of personal computing tech have been friendly and the rest of humanity. Tech experts now call the people excluded from tech “the last billion.” Those people usually have access to neither the telecoms infrastructure nor the wealth to use digital consumer products. Even someone with enough money to buy the products still needs to be literate, numerate, educated, and have a brain and skin and hands and fingers and eyes and ears that look and work in the normative way a tech team in an affluent megacity assumed they “should” work. Activists all over the world, many of whom self-identify as designers, have been working painstakingly, in a massive and sustained effort, to expand the definition of which humans we centre in tech through what usually gets called accessibility and inclusion. What tech inclusion activists like Eriol or Angie think about as human-centred tech probably gets pretty close to what most people mean when they say “human-centred” or “user-centred.”
This pan-human scope for design is where my head was at until I really started to grapple with the fact that we also live in an age of mass extinction. It turns out that being overly human-centred is destroying the planet.
The sky-watching anthropos
We live in an age of mass extinction. Just let that one breathe for a minute.

We live in an age of mass extinction. Call it climate change, climate crisis, ecological collapse, whatever. I won’t sermonize by dumping a bunch of factoids on your head. I will say, though, that the same framework of human-centred values that gave us the iPhone also created a global climate crisis: anthropocentrism. The term anthropos is just academic speak for humans, as in anthropology, the academic study of human cultures. Anthropocentrism is a way of seeing the world in which humans are central, like human-centred design but for everything. The human point of view, human bodies, human flourishing, and human expansion are the most important things in the universe; that’s anthropocentrism. Where anthropocentrism meets “the environment,” by which we usually mean non-human lifeforms arranged according to non-human logic, we get what Timothy Morton calls “agrilogistics”: the manipulation of the world to serve the dominant form of human agriculture. It’s the thought process where someone looks at something, whether a command line on a computer terminal or an open field, and asks some version of the question, “How can I manipulate this to human ends?” That process, carried out by billions of humans since the beginning of the Neolithic around 10,000 BCE, has created the final anthropo-buzzword of this essay: the Anthropocene. The suffix “cene” is what geologists use to delineate a geological age. Paul J. Crutzen popularized the term to describe the age in which human activity has impacted the Earth enough to constitute a new geological epoch. The Anthropocene means anthropocentrism at a global scale, which is a geologist’s way of saying mass extinction.
It goeth before a fall
I wish there were a way to narrowly “humanize” the world without destroying it, but we just don’t work that way. The problem is that human beings think in a reductive way. We can’t help it. Our brains like to create categories and boundaries. We like to simplify, generalize, and unify. See: I just did it. The existentialist philosopher, novelist, and playwright Albert Camus describes this yearning for unity in his book The Myth of Sisyphus:
"If thought discovered in the shimmering mirrors of phenomena eternal relations capable of summing them up and summing themselves up in a single principle, then would be seen an intellectual joy of which the myth of the blessed would be but a ridiculous imitation."
All of this striving for understanding is fine so long as we recognize that, impressive though humans are, we are extremely limited lifeforms: limited in time scale, limited in perspective, limited in so many, many ways. We’ve been telling ourselves stories about hubris for thousands of years to remind ourselves of these limitations. When we forget them, that’s when we get into trouble.
As Morton says, “Nothing can be accessed all at once in its entirety.” A person can never truly grasp something. Further than that: human thought is by no means the only way to grasp a thing. It’s not even the primary access mode. A dog sniffing something is just as valid a means of grasping it as me thinking about it. Nothing can fully grasp anything; the mystery of things is irreducible. The word “mystery” comes from the Greek muein, which means to close your lips. Things are “unspeakable.” The fact that they exist doesn’t mean you can access them. Morton sums this up in the phrase, “The context of relevance is structurally incomplete.” When you want to grasp a thing, the relevance of that thing spirals out into a broken mesh infinitely in every direction, bigger and smaller; everything is relevant to everything else. What he calls “ecological awareness” isn’t some special, nirvana-like state of seeing the connections between everything, because not caring and unawareness are part of that forever-spiralling context too. “There can be no one context to rule them all.” Given all this, being merely human-centred is a liability.
Camus talks about this tension, the desire to reduce and unify a chaotic, irreducible universe, as the fundamental conflict in human existence, a conflict which we are powerless to resolve.
"The absurd man thus catches sight of a burning and frigid, transparent and limited universe in which nothing is possible but everything is given, and beyond which all is collapse and nothingness. He can then decide to accept such a universe and draw from it his strength, his refusal to hope, and the unyielding evidence of a life without consolation."
Even though this “life without consolation” might be a beautiful thing for Camus, it scares most of us. So instead of recognizing the limitations of a human-centred viewpoint, we revert to anthropocentrism, to agrilogistics, and draw imaginary boundaries beyond which we label everything an “externality,” i.e. all the things we don’t know and don’t care about. This is how a company in Southern California building a human-centred, profitable way to listen to music, the iPod, led, in less than one human generation, to a global supply chain of rare earth mineral mining and e-waste dumps filled with obsolete smartphones. These are unintended consequences that have, at their root, the prioritization of an individualistic human context over every other context. The last generation of technology development relied on greater human-centricity as a way forward, but when it comes to mass extinction, becoming more human-centred will not help.

In technology we’ve just reached the point where we’ve (mostly) stopped labelling other humans as externalities. But we’re not done. We need one more, big expansion of who and what we value: non-human lifeforms and the biosphere. To do that we’ll need to do away with centring altogether.
In the next issue of the newsletter, I’ll explain how thinking about tentacles might give us a way out of anthropocentric design.
Further reading
In addition to being a great book for thinking about the biosphere and how we are tangled in it, Timothy Morton’s book Being Ecological is also a pretty approachable introduction to “being isn’t presence” and the whole framework of object-oriented ontology with only some of the inscrutable German words.
Dr. Manhattan from Alan Moore’s graphic novel Watchmen is a beautiful depiction of what would happen were a human being given omnipotent, god-like powers to be everywhere simultaneously. He is the closest character in fiction that I can think of for whom “the context of relevance is structurally incomplete” does not apply. Even still, he gets entangled in human concerns and fucks everything up. I also have to mention Watchmen as a teaser for the next newsletter because of the importance of a giant squid monster to that story.
You should probably watch the HBO miniseries sequel to the original Watchmen graphic novel, both because of its broad cultural relevance and because it thematizes hubris a whole lot and has a killer soundtrack.
A friend gave me The Myth of Sisyphus last month because it’s one of her favourites. I was not expecting to find a bunch of connections between a book of existential philosophy written during World War Two and a book about ecology from 2018, but here we are.
Music
Here’s a Spotify playlist of every interesting song I (re)discovered in April.
Share the newsletter
If you know anyone who would enjoy the newsletter, please let them know about it. I’m grateful for every single reader here.
Thanks for reading and for your support.