From a story on newyorker.com by Kyle Chayka headlined “The Birth of the Personal Computer”:
In 1979, two M.I.T. computer-science alumni and a Harvard Business School graduate launched a new piece of computer software for the Apple II machine, an early home computer. Called VisiCalc, short for “visible calculator,” it was a spreadsheet, with an unassuming interface of monochrome numerals and characters. But it was a dramatic upgrade from the paper-based charts traditionally used to project business revenue or manage a budget. VisiCalc could perform calculations and update figures across columns and rows in real time, based on formulas that the user programmed in. No more writing out numbers painstakingly by hand.
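To make the mechanics concrete, here is a minimal, hypothetical sketch of spreadsheet-style recalculation in Python—not VisiCalc's actual design or code, just an illustration of the idea the paragraph describes: cells hold either numbers or formulas over other cells, and changing one value ripples through every dependent figure.

```python
# Toy illustration of spreadsheet-style recalculation (hypothetical; not VisiCalc's code).
# Cells hold either a number or a formula over other cells; formulas are re-evaluated
# on every read, so editing one value updates every dependent figure "in real time."

class Sheet:
    def __init__(self):
        self.values = {}    # e.g. {"A1": 1200.0}
        self.formulas = {}  # e.g. {"A3": lambda get: get("A1") + get("A2")}

    def set_value(self, cell, number):
        self.values[cell] = number

    def set_formula(self, cell, formula):
        self.formulas[cell] = formula

    def get(self, cell):
        # Recompute a formula cell from its inputs; return a plain value otherwise.
        if cell in self.formulas:
            return self.formulas[cell](self.get)
        return self.values[cell]

sheet = Sheet()
sheet.set_value("A1", 1200.0)                               # projected January revenue
sheet.set_value("A2", 1350.0)                               # projected February revenue
sheet.set_formula("A3", lambda get: get("A1") + get("A2"))  # running total
print(sheet.get("A3"))  # 2550.0
sheet.set_value("A1", 1500.0)                               # revise January upward...
print(sheet.get("A3"))  # ...and the total updates to 2850.0, no hand re-totalling
```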
VisiCalc sold more than seven hundred thousand copies in its first six years, and almost single-handedly demonstrated the utility of the Apple II, which retailed for more than a thousand dollars at the time (the equivalent of more than five thousand dollars in 2023). Before the mid-seventies, computers were centralized machines—occupying entire rooms—that academics and hobbyists shared or rented time on. They were more boiler-room infrastructure than life-style accessory, attended to by experts away from the public eye. With the VisiCalc software, suddenly, purchasing a very costly and sophisticated machine for the use of a single employee made sense. The computer began moving into daily life, into its stalwart position on top of our desks.
As Laine Nooney, a professor of media and information industries at New York University, writes in their recent book “The Apple II Age: How the Computer Became Personal” (University of Chicago Press), VisiCalc kicked off the process of “ ‘computerizing’ business.” By now, of course, everything else has been computerized as well. Computers today are not just personal devices but intimate appendages, almost extensions of our own bodies and brains.
We use our smartphones in practically every aspect of life: working, socializing, dating, shopping, reading. Nooney’s book tracks the pivotal years of the shift toward personal computing, epitomized by the Apple II and sped along by consumer software—not just VisiCalc’s spreadsheets but adventure games and greeting-card-design tools—that made the computer a useful and convenient partner in daily life outside of the office, too.
Before there was the personal computer (which gave us the shorthand PC), the term of art for a domestic computing machine was “microcomputer.” The Altair 8800, which débuted in 1975, was the first “microcomputer kit” to sell more than a few hundred units, according to Nooney. Made by a New Mexico company called MITS, which also sold kits for building rockets and calculators, the Altair emerged from a decentralized community of American computer hobbyists who were accustomed to building their own machines out of components from radios and televisions. “The Altair did not invent the idea of a computer one could personally own. Rather, it tapped into an ambient desire for ownership and individualized use,” Nooney writes.
The goal was “to create a technological world fashioned to one’s own desires.” Steve Wozniak and Steve Jobs, the co-founders of Apple, created the next wave of popular microcomputers with their first Apple computer, in 1976, and then the Apple II, in 1977. The latter was the company’s first commercial breakthrough; it went on to sell more than five million units. Wozniak’s technical innovations, such as designing circuits that were able to display different colors on a monitor, were matched by Jobs’s talent for creating a salable consumer product. He insisted that the Apple II be housed in a plastic casing, making it more elegant and approachable than hobbyists’ severe industrial boxes.
The development of the personal computer was iterative and contingent; it was not a matter of destiny but of experimentation in many different directions at once. The Apple II beat out its competitors, including the Commodore PET 2001 and the Tandy Corporation’s TRS-80 Model I, in part because of its open-endedness. Coming from the hobbyist community, Wozniak was used to designing computer hardware for expandability and modification. With the Apple II, purchasing a product off the shelf wasn’t conceived as an end point but as the start of a user’s process of customizing her own machine.
The Apple II looked a bit like a typewriter, with a keyboard extending off a sloped front. Accessories like monitors and drives were stacked on top like children’s building blocks. The user chose her own operating system and display monitor, and whichever appendages she desired, such as a modem or game controllers. To add RAM, she had to open the housing and plug in a microchip card. “Installing the memory expansion card is easy,” the Apple II manual cheerily promised, above a photo of the exposed guts of the computer. As the demands of software and equipment evolved, Apple II owners found that their machines had the flexibility to keep up.
Nooney’s book tells the story of how computers became irrevocably personal, but what’s most striking, revisiting the history of the Apple II, is how much less personalizable our machines have become. Computers today, small enough to fit in the palms of our hands, require much less work on the part of the user. Apple’s iPhones all look more or less the same. Their cases are sealed; when they break or glitch, or when an upgrade is required, we tend to replace them outright and discard the old one. We control their superficial traits—choosing between rose-gold or alpine-green case covers—but make few decisions about how they function. Customizable computer towers, like the Mac Pro, are the domain of professionals and experts—a video editor who needs extra horsepower, for example. The rest of us just flip open our laptops and expect everything to run on its own.
Whatever customization we do engage in has moved to the realm of the digital. We can load apps on our iPhones at the press of a button, but only those that Apple allows into its App Store, which has rigid rules around monetary transactions and content. Some new platforms, such as Mastodon and Urbit, allow users to run their own customizable iterations of social-media software, but doing so requires its own forms of expertise. Otherwise, the likes of Facebook, Instagram, and TikTok dictate our digital experiences in ways we can’t change.
Nooney recounts how, over the seventies and eighties, investment from institutions and venture capitalists began pouring into technology and software companies. Hobbyists and independent, small-scale firms that sold software in Ziploc bags were gradually crowded out by formalized, well-funded corporations whose products were advertised in glossy magazines and stocked by department stores.
Computers today are unavoidable fixtures of our lives, but instead of co-creators—modifying, hacking, and programming—we are mere consumers. Our lack of agency is a boon for Silicon Valley companies, which profit from herding us frictionlessly through their gated infrastructure. Through digital data surveillance, we even become bulk products for the companies to sell in turn. Customization leads to diversity; diversity is less scalable, less easily packaged, and thus less profitable. Our computers may be personal, but they are not solely devoted to serving our needs.
—
Also see the March 29th story on cjr.org by Jem Bartholomew headlined “Q&A: Kyle Chayka on his ‘cultural investigations.’”
Kyle Chayka, a staff writer at The New Yorker, where he writes the “Infinite Scroll” column, thinks we should all spend less time on Twitter—not just because it’s good for us, but because the internet is changing. “We have to be less obsessed with the major public spaces and more engaged with smaller-scale ones,” Chayka said, when I asked him recently how journalists should cover the internet of the 2020s. “I think we’ve passed ‘peak platform.’” As users, “we’re realizing that we want smaller, more private spaces online, we want more direct connections to creators. And so new platforms are emerging to serve that.” He namechecked Substack, Patreon, and Discord.
Chayka, who is thirty-four, started writing Infinite Scroll in 2021 after almost a decade freelancing across digital media. He wrote about the aesthetics of Airbnb, our yearning for “conversation pits,” and, for CJR, about the aftermath of Artforum’s #MeToo scandal. In 2015, he co-founded Study Hall, a freelance community. In 2020, he wrote a book, The Longing for Less: Living with Minimalism (Bloomsbury). He is now working on another one, Filterworld: How Algorithms Flattened Culture (Doubleday), which is due out in January 2024.
Chayka’s style of writing can read like tilting a Léger painting ninety degrees, accentuating forms that we’ve always known were there but couldn’t quite see clearly. Compared to other reporting on digital media, which often sticks to the topsoil of internet-trend reporting, Chayka burrows deeper until he hits groundwater, revealing the currents that flow between seemingly disparate topics. This week, I spoke with Chayka about how culture and journalism are mediated by algorithms, why AI-generated art is clichéd, and how he’s adapting to the institutional voice at The New Yorker.
JB: You’re just finishing up a book on algorithms and culture. Can you tell us a little bit more about what you’re trying to say and explore with that book?
KC: So the book is called Filterworld: How Algorithms Flattened Culture. I came up with “filterworld” as a term to describe this media environment where everything we consume is mediated by algorithmic recommendations. We’re constantly being fed things that we’re supposed to like, being judged on past engagements. The thesis is that media and culture have been produced to fit these algorithmic feeds and digital platforms, so digital platforms have shaped what kinds of culture exist now—how journalism finds audiences, how musicians find listeners, how authors find readers. Any creator of content is forced to reckon with the dynamics of algorithmic recommendations. The consequence, in my opinion, is this flattening of culture. Things become more homogeneous, less deeply compelling, less challenging, less subtle, and more flat and dull and inoffensive.
You’ve written a lot about how algorithmic digital platforms influence how we socialize, receive news, consume culture, find jobs, perform labor, spend money. Do you think we reckon with this impact on our culture enough?
I don’t think we reckon with it enough yet. From my reporting, digital platforms and feeds started to get much more mediated by recommendations around 2015, 2016, so it hasn’t actually been that long since we’ve started to experience these super algorithmically curated content feeds. Over the past eight years or so, we’ve come to understand how much it sucks, how bad it feels that they’re everywhere. So I think we’re just beginning to realize that the situation is bad, we’re beginning to feel the long-term effects. But I don’t think we’ve reckoned with, like, what the answer is to that situation, or how we get ourselves out of it.
What about newsrooms—we had the pivot to video around 2014, and then there seemed to be some reluctance to pivot to TikTok when that burst onto the scene a few years ago. How is journalism today shaped by these algorithmic forces?
I think it’s deeply shaped. There was that moment when Dean Baquet [the executive editor of the New York Times from 2014 to 2022] had to say, Twitter is not our assigning editor, we do not write for Twitter. And yet so much journalism and commentary was produced over the 2010s and into the 2020s as a response to what was going viral.
Newsrooms have been chasing algorithmic promotion for quite a while now. Whether that’s covering Twitter discourse, or things that happen on TikTok as if they’re happening in the physical world, or reporting on memes, which has become such a cottage industry in the media. Content has been produced that follows the dynamics of algorithmic feeds. If media companies want to reach audiences where they are—which media companies always want to do—they have to be on TikTok, they have to work through Spotify podcast recommendations, they have to work through YouTube recommendation dynamics if they want to build an audience there. The priorities and obsessions of many publications have been shaped by this.
Everyone is suddenly talking about AI. The marketing and much of the media coverage suggests that this is going to change everything. But you’ve written about how these systems lack originality, authenticity, insight. How do you see AI influencing culture?
We’re living through a big change. If the early 2010s was the rise of algorithmic feeds, then probably the early 2020s is going to be the rise of generative AI tools. It’s very early to see what the effects are. But my theory, for now, is that algorithmic feeds forced a lot of people to conform to a standard by only recommending certain kinds of content. On Twitter, a prompt tweet—asking people what they have for breakfast—always works well. People repeated those tricks and found a solution to the platform. But AI tools tend to generate that average solution immediately. Because they’re always reaching back to generic archetypes and clichés.
I think the kind of culture that you see AI tools producing is mostly very dull and bad so far. There are ways that artists and writers can use AI in challenging ways. But that requires a critical approach to that technology. Whereas the millions of users who are getting onboarded into AI tools, they’re not using them with that kind of critical sensibility. They’re using them to serve their most immediate impulses—generated fan-fiction text, or images of Lord of the Rings elves dressed like Star Wars characters. It’s not a particularly inspiring body of culture yet.
In Infinite Scroll, you’ve said that the internet, in its quest to optimize digital experience, “denies texture.” As someone who has covered art and aesthetics, which obviously embrace texture, how do you find covering the internet for the column?
I always feel like I have a lot of ideas floating around and interesting subjects to tackle. But I try to cover the stuff that rewards more thinking. I try to think through, What will have some enduring impact? And, What’s a thought that I can communicate to people that will still be relevant in a week, or a month, or God forbid, a year or ten years? I think all the time about how Walter Benjamin wrote newspaper columns, just about stuff he observed in Berlin, and on the radio and new technologies of the nineteen thirties.
Do you go and read those today as preparation?
Yeah, I do. I’ve got a few anthologies of his work in various forms. He’s always been an inspiring writer to me—particularly this idea that you can write a column about technology, about changing cultural dynamics, in a moment, and, if you capture it well enough, it’ll still be relevant or useful to someone a hundred years later. I don’t know if you can do that about the internet. But I hope I aspire to that.
You’ve said in your writing that you try to relive what it feels like to interact with technology, because, compared to other beats, it’s people looking at their laptops or phones. How do you find a cinematic way of covering that?
That’s a good question. My sensibility comes from art history and art criticism, where you’re not covering a narrative. There’s nothing active happening—it’s not like a true-crime story, there’s not much suspense. But the drama of writing about art comes from describing the thing in front of you in the best way possible. And I find that really useful for the internet, because it offers a lot of visual and multimedia experiences. You can get a lot of drama and interest out of describing those. What does it feel like to be on TikTok? What purpose does it serve for me to listen to this podcast, or watch a YouTube video?
You come at stories in a slightly offbeat way. I’m curious what responses you get from people working in tech or other tech reporters?
A lot of different reactions. Tech and marketing people often appreciate when I frame something in a new way, because they kind of operationalize it. Like, they take my negative insight where I say “direct-to-consumer is ending the millennial aesthetic, blah, blah, blah,” and then they take that insight and monetize it, put it in their pitch deck and use it to frame their next startup.
How does that make you feel?
I feel super ambivalent about that. I mean, I appreciate it because it’s meaningful to them, they find some insight in it. But also I’m like, Guys, can’t you appreciate it as literature? [Laughs.] I’m trying to illuminate an experience, not, like, offer you a business model.
Tech reporting is a very broad space. There are people who do internet reporting a lot better than I do—Taylor Lorenz and Ryan Broderick, who have this granular insight into the internet, or tech reporters in Silicon Valley, like Mike Isaac and Ryan Mac, who are sourced up at Facebook or Google or whatever. I hope I can occupy a position of framing things and contextualizing them and offering the large-scale, more conceptual narrative of what’s happening. It piggybacks off the work of a lot of other people. My favorite response to pieces is, I’m a user of this technology and this has made me better understand my own relationship to it. That’s my goal, to bring up these collective experiences that we’re having and don’t understand yet.
When you were freelancing you said that you liked to “sneak in and blow things up,” by questioning a tech trend in a tech magazine. But now you have one of the most institutional voices, at The New Yorker. How has that transition been as a writer?
The column format gives me a structure to respond to in a way. The institutional voice is hard—it’s certainly a negotiation. The demographic of The New Yorker is not just a twenty-five-year-old TikTok person. [Laughs.] You have to speak to a very broad range of people. I’ve found the need to make my writing approachable, logical, and clear has been super helpful, because it makes me think about what the reader may or may not understand or how they need to be slowly introduced to concepts, rather than just referencing a TikTok meme as if everyone knows what it is. It’s very instructive for my writing.
Something that’s changed in my writing over fifteen years is that I used to think, What essay do I want to write? And now I think, How do I reach the reader where they are? How do I connect with the person at the other end of the screen? It’s good. It’s a little ego death. It’s creative in a different way.
You said a few years ago that writing is like “turning inside out your own obsessions.” Is that still true, and does that method drive or hinder you?
I think it’s a great method, it still works very well for me. What I find is that I have a dawning feeling about something I observe, I start seeing something everywhere. Gradually I can start forming that obsession into a piece of writing that brings other people and readers through that same process of epiphany. Like the vibes essay I wrote for The New Yorker in 2021 was literally just like, Wow so many people are saying “vibes.” Let me dig into this and understand what it’s about. I really enjoyed doing that. And I think it resonated with a lot of people. It’s like cultural investigations.