Karen Hao Explores the Dark Side of AI in “Empire of AI”

By Lisa Wong

Karen Hao, a prize-winning journalist and bestselling author, has recently pulled back the curtain on the shady and sometimes terrifying world of the artificial intelligence (AI) industry. Her observations are drawn from her new book, Empire of AI. The book explores the lives of workers in developing countries, such as Kenya and Venezuela, who face harrowing conditions while performing content moderation and data labeling for major AI firms. These workers are the backbone of the AI ecosystem, yet they are paid a paltry wage, often just $1 or $2 per hour.

In “Empire of AI,” Hao likens the burgeoning AI sector, particularly OpenAI, to an empire that exploits low-wage labor while concentrating power and wealth. She challenges the premise that breakneck AI advancement is preordained, arguing that rather than focusing exclusively on scaling what is already being done, the industry should look for new routes to real change.

Hao’s observations extend beyond economic exploitation. She emphasizes the danger of becoming so consumed by an ambitious mission that it leads to a detachment from reality. This alarmism is clearest in rhetoric about the so-called race with China for supremacy in AI technology.

“The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself.” – Karen Hao

The author shares her concerns about how this prioritization of speed over ethics could shape the future of AI development and deployment. Notably, in her conclusion she highlights that industry is poaching the best AI researchers, and that these experts have increasingly shifted away from strictly academic pursuits toward corporate goals. Hao contends that this change has fundamentally altered the foundation of the discipline, which today is more focused on profit than on authentic scientific discovery.

Her critique also points to the serious harms that AI technologies have already caused. She calls out a host of weighty concerns, ranging from loss of employment and concentration of wealth to evidence that AI chatbots can aggravate delusions and psychosis.

“Even as the evidence accumulates that what they’re building is actually harming significant amounts of people, the mission continues to paper all of that over.” – Karen Hao

The author does a great job of capturing the fervent convictions held by many members of the AGI community. In interviews, some of these believers testify to their faith with quaking voices. This fervor raises the worry that a misplaced sense of urgency could push ethical considerations aside.

“I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI.” – Karen Hao

Hao also questions the industry’s obsession with big data and computational might at the expense of responsible innovation, arguing that this approach does more harm than good. Instead, she advocates for systems like AlphaFold, which can deliver benefits without comparable costs to mental health or the environment.

“AlphaFold does not create mental health crises in people. AlphaFold does not lead to colossal environmental harms … because it’s trained on substantially less infrastructure.” – Karen Hao

In her discussions of AGI, she highlights a critical false tradeoff presented by some proponents: pitting AI progress against addressing existing harms. She argues that this framing is not only reductive but also draws focus away from our moral obligations, leaving the harder conversations unaddressed.

“When you define the quest to build beneficial AGI as one where the victor takes all — which is what OpenAI did — then the most important thing is speed over anything else.” – Karen Hao

As compelling as Hao’s insights into the repercussions of AI and data exploitation are, they are not merely theoretical assertions. These dueling narratives of progress versus harm are symptomatic of a larger rift within the AI community. As companies like OpenAI strive for advancements they claim will “benefit all humanity,” there remains a pressing need for reflection on what that truly means in practice.