AI gets smarter, more capable, and more world-transforming every day. Here’s why that might not be a good thing.

By Kelsey Piper Updated Nov 28, 2022, 6:53am EST
This story is part of The Highlight, Vox’s home for ambitious stories that explain our world.
In 2018 at the World Economic Forum in Davos, Google CEO Sundar Pichai had something to say: “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” Pichai’s comment was met with a healthy dose of skepticism. But nearly five years later, it’s looking more and more prescient.

AI translation is now so advanced that it’s on the brink of obviating language barriers on the internet among the most widely spoken languages. College professors are tearing their hair out because AI text generators can now write essays as well as the typical undergraduate — making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fair art competitions. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind’s AlphaFold system, which uses AI to predict the 3D structure of just about every protein in existence, was so impressive that the journal Science named it 2021’s Breakthrough of the Year.

You can even see it in the first paragraph of this story, which was largely generated for me by the OpenAI language model GPT-3.

While innovation in other technological fields can feel sluggish — as anyone waiting for the metaverse would know — AI is moving full steam ahead. The rapid pace of progress is feeding on itself, with more companies pouring more resources into AI development and computing power.

Of course, handing over huge sectors of our society to black-box algorithms that we barely understand creates a lot of problems, which has already begun to spark a regulatory response around the current challenges of AI discrimination and bias. But given the speed of development in the field, it’s long past time to move beyond a reactive mode, one where we only address AI’s downsides once they’re clear and present. We can’t only think about today’s systems, but about where the entire enterprise is headed.

Read more: https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
