The Problematic Nature Of Generative AI


Dan Turchin is the Chief Executive Officer of PeopleReign, the system of intelligence for IT and HR employee service.

The first time I used Waze in 2007 was like seeing the future. Waze combined navigation with machine learning and user-generated content to create magic. It was impossible to explain everything Waze knew. It was also impossible to foresee where AI would lead in less than two decades. What we’re experiencing now with generative AI is a future that is hard to comprehend except through the realization that we should question everything we thought we knew about how the world works.

I’ve been thinking about what it means to be human in the age of large language models. Below are thoughts about where we are and where we’re headed.

A Big Intellectual Property Heist

It was October 1999 when I discovered 30 years of bootlegged Jerry Garcia recordings. It was tantalizingly simple. Weeks into pirating songs I loved, I realized I didn’t own anything I was downloading. As fun as it was to amass free collections of my favorite songs, the whole process made me feel dirty. I don’t shoplift or rob banks. I don’t cheat at blackjack or hijack planes. It didn’t feel right to steal music just because it was convenient and consequence-free and everyone else was doing it. History repeats itself.

ChatGPT is hoovering up other people’s stuff, mashing it together and serving up derived versions, using large language models (LLMs) to turn text prompts into new work. We’re just starting to see copyright owners object to LLMs being trained on their work. Soon, AI services will either require paid subscriptions or be ad-supported. Generative AI vendors will need to publish a bibliography attributing content to its original authors. The technology behind ChatGPT is phenomenal, but even innovation is subject to the rules of business and the principles of ethics.

Generative AI Is Causing Content Quality To Revert To The Mean

We’re less than 12 months into generative AI euphoria, and already every image on the web looks like a stock photo version of astronauts riding ponies in the style of Bauhaus. When everyone uses the same tools and text prompts, and AI is asked to generate text or images from the same set of bland content scraped from the 2021 version of the public internet, everything we consume ends up looking … the same.

Text has the same monotonous personality devoid of bold ideas. Images use the same palette of colors and familiar faces. Weak content creators can now deliver mediocre work. Strong content creators can now deliver mediocre work faster. It’s the creator economy equivalent of communism—everyone does fine, and nobody is exceptional. It’s the digital economy version of the Model T Ford—any color you want as long as it’s black. Be prepared for a rapid reversion to the mean. The self-learning behavior of large language models like GPT will only accelerate this as they learn from the bland content they’ve created.

Think of LLMs as systems that are exceptionally well-designed to pattern match at scale. When fed large quantities of content they’ve generated, the scope of what they’ll generate will continue to narrow. Consider the implications of this before relying on generative AI to influence decisions that should be left to humans. For example, we should immediately restrict the use of generative AI when deciding who gets incarcerated, who gets a loan, who gets hired or who gets medical treatment.

In future posts, I’ll explore the new shape of responsible AI, a topic that needs to be discussed openly and frequently.

Unique Ideas Will Stand Out More

The decade ahead will create unique opportunities for humans with great ideas to have an impact on the world.

Resist the temptation to take credit for patchwork combinations of other people’s work. If you’re great, prove it by articulating unique ideas in unique ways. Never before has there been a better opportunity for ambitious thinkers to achieve greatness. When others are creating and consuming synthetically generated content like flying drones in a perpetual hover state, there’s an opportunity for non-drones to fly higher, farther and faster.

For example, students manipulating the system by having ChatGPT write essays miss an opportunity to learn, demonstrate a dangerously poor understanding of ethics and prove they’re no better than everyone else. Students who learn on their own, articulate original ideas and share a passion for a subject will outperform the machines by an increasingly wide margin.

The Path Forward

The future of humans is a fusion of what machines and people do best. What can be predicted or regurgitated should be left to machines, but what requires judgment or rational thinking should be left to us. Generative AI isn’t a crutch, it’s not a panacea, and it’s not a threat to humans. We’re the only species capable of synthesizing ideas, forming opinions and making decisions based on ethical principles. Let’s use this moment in history to embrace the future while investing in our humanity.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
