The Opportunities and Risks of ‘Superintelligent’ AI
In just a few years, artificial intelligence models are likely to become dramatically more powerful than current programs like OpenAI's ChatGPT, with human-level or superhuman capabilities that will allow the technology to work alongside you, much as a coworker or assistant might.
While AI technology can now "basically talk like humans," its capacity to learn and reason will skyrocket, according to Leopold Aschenbrenner, a former OpenAI engineer who recently published a long and provocative essay that has become a must-read text in Silicon Valley.
"The pace of deep learning progress in the last decade has simply been extraordinary. A mere decade ago it was revolutionary for a deep learning system to identify simple images," Aschenbrenner wrote in the five-part essay titled Situational Awareness. "We're literally running out of benchmarks."
Aschenbrenner recently founded an investment firm focused on artificial general intelligence. He previously worked on the Superalignment team at OpenAI.
He sees the scale-up of AI technologies as "growing rapidly," so much so that he believes there can be "Transformer-like breakthroughs with even bigger gains." He believes humans will have built a superintelligent AI by the end of the decade. By the end of the 2030s, "the world will have been utterly, unrecognizably transformed," he wrote.
Meanwhile, OpenAI CEO Sam Altman has hinted at similarly steep advances in his company's own products, saying the most recent ChatGPT model, GPT-4o, is already "way better than I thought it would be at this point."
With GPT-5, the next iteration of OpenAI's flagship model currently in development, Altman predicts an "impressive leap" that will allow one person to do far more than they can with the model available now.
Altman likened AI's current state to the early days of the iPhone. The first iPhone was essentially a cell phone and iPod combined, with some basic web-browsing abilities. Today's iPhones are more like personal assistants. (Apple recently announced it would integrate ChatGPT into its new iOS for the first time.)
The concept of superintelligence has been debated in AI circles for years. The Swedish philosopher Nick Bostrom, whose book Superintelligence, published a decade ago, remains the leading text on the subject, sees it as a source of both opportunity and existential risk.
In a brief interview with Newsweek, Bostrom laid out what he sees as three of the big perils of a superintelligent AI: humans losing control of the technology, the misuse of the technology by bad actors, and a more abstract risk of "digital minds with moral status being mistreated."
Bostrom also acknowledged that there is still no expert consensus on the exact risks.
"I think it would be unfortunate if people's initial impressions harden into unshakable dogmas this early in the process while we still know so little about this emerging technology," he said.
That is at least in part why Altman has co-founded the AI Ethics Council, a non-binding AI governance body that could find itself with the thorny task of building guardrails for a highway that is still under construction.
Angela Williams, the CEO and president of the international nonprofit network United Way, who serves on the council's board, takes an optimistic view of the opportunities of advanced AI.
"I believe technology and tools will enhance how we as human beings live on this Earth and steward this Earth," Williams told Newsweek in an interview.
"I also have a sense of hope and belief in the human spirit, which artificial intelligence does not have, and therefore we as human beings, as long was we continue to have the open dialogue, as long as we continue to ask the right questions, as long as we continue to wrestle with the tensions of the new, we will still be ok."
Williams sees advanced AI as "unlocking" new opportunities for people. She pointed to the disability services nonprofit Easterseals using robotic technology built for the military to help a paralyzed person move their arms.
That sort of AI-human fusion is indeed already happening. Last week, the neurotechnology startup Synchron announced it had integrated ChatGPT into its brain-computer interface to allow patients with severe neurodegenerative diseases, such as ALS, to better communicate using their thoughts.
"This is critical for individuals with neurological disorders, who may otherwise have trouble generating complex responses contextual to their environment," Synchron said in a statement.
Altman, for his part, is also an investor in another brain-implant venture, Elon Musk's privately held Neuralink. Earlier this year, the first human patient was implanted with a Neuralink chip, allowing him to move a computer mouse using only his thoughts.
"That's here now, but that's an example of technology in the future for other uses can be redeployed for our daily work," Williams told Newsweek.
"I believe when we continue to study its uses and test it with people with disabilities or underrepresented individuals or children that are in low preforming schools, that we can learn from that and fast forward progress."
Williams says she hopes the AI Ethics Council can help guide that development without getting in its way.
"I want us to be able to sit at the table with a group of cross functional leaders with varying expertise, backgrounds, experiences to be in a room to ask the questions, to ask the critical questions that prompt us to say 'oh I haven't thought about that,' whether it's for good or whether it creates something that's disruptive and not good for society," Williams said.
The council says its foremost goal is putting ethical guidelines in place around artificial intelligence while the technology is still so new. It is looking into how to ensure that AI's advances are not only transformative but also ethical and inclusive.
Williams said the goal of this work is to make AI "as truthful and useful for the everyday person" as possible.
"It's up to those of us with the good intensions to push for the right response to the use of the tools in the appropriate way that we don't overregulate but regulate appropriately," Williams said.