Recently, OpenAI’s CEO Sam Altman, CTO Greg Brockman, and chief scientist Ilya Sutskever published a blog post outlining the company’s position on the development and governance of “superintelligence.”
The company, widely recognized as the current industry leader in generative artificial intelligence (AI) technologies, believes that it would be riskier not to develop superhuman AI than to continue with its efforts:
“Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.”
Whether AI systems can achieve human-level intelligence (commonly referred to as artificial general intelligence, or AGI), or even, as OpenAI warns, surpass expert-level human intelligence, is the subject of extensive debate.
Many experts argue that it is far from inevitable that machines will eventually match or exceed human cognitive abilities.
Altman, Brockman, and Sutskever appear to prefer erring on the side of caution. Notably, however, their version of a cautious approach does not require restraint.
The blog post recommends increased government oversight, public participation in decision-making, and enhanced collaboration between developers and companies in the space.
These points echo the answers Altman gave to questions from members of a Senate subcommittee during a recent congressional hearing.
The blog post also argues that it would be “counterintuitively dangerous and difficult” to prevent the development of superintelligence.
The post’s conclusion is, “We must get it right.” In explaining the apparent paradox, the authors suggest that a global surveillance system would be required to prevent the supposedly inevitable creation of superintelligent AI.
“And even that,” they write, “isn’t guaranteed to work.” Ultimately, the authors conclude that OpenAI must continue working toward superintelligent AI in order to develop the controls and governance mechanisms needed to protect humanity from it.
As the global debate over how these technologies and their development should be governed and regulated continues, the cryptocurrency, blockchain, and Web3 communities continue to exist in a familiar regulatory limbo.
AI has permeated every technology sector, including fintech.
With cryptocurrency trading bots built on ChatGPT and the GPT API, and countless exchanges integrating AI into their analytics and customer service platforms, any regulation affecting consumer-facing AI products such as ChatGPT could disrupt both industries.
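To make the trading-bot connection concrete, here is a minimal, hypothetical sketch of how such a bot might lean on a language model: it asks the model to classify a headline's sentiment and maps the one-word reply to a trading action. The function names, the prompt wording, and the model name in the commented-out wiring are illustrative assumptions, not details from the article; the actual API call (via OpenAI's Python client) is shown only as a comment because it requires an API key.

```python
# Hypothetical sketch: a sentiment-based trading signal built on a GPT model.
# All names here (build_prompt, parse_signal, the prompt text, "gpt-4o-mini")
# are illustrative assumptions, not from the article.

def build_prompt(headline: str) -> str:
    """Compose an instruction asking the model to classify a headline."""
    return (
        "Classify the sentiment of this crypto news headline as "
        "BULLISH, BEARISH, or NEUTRAL. Reply with one word.\n\n"
        f"Headline: {headline}"
    )

def parse_signal(reply: str) -> str:
    """Map the model's one-word reply to a trading action."""
    word = reply.strip().upper()
    return {"BULLISH": "buy", "BEARISH": "sell"}.get(word, "hold")

# Wiring this to the GPT API would look roughly like the following
# (commented out because it needs network access and an API key):
#
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",  # assumed model name
#     messages=[{"role": "user", "content": build_prompt(headline)}],
# )
# action = parse_signal(resp.choices[0].message.content)

print(parse_signal("BULLISH"))  # buy
print(parse_signal("neutral"))  # hold
```

The point of keeping prompt construction and reply parsing as pure functions is that they can be tested without touching the API, while the model call itself remains a thin, swappable layer.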