Focusing on AI's Mirage: Why Are We Thirsty in the Desert of Now?
Reorienting the Existential Discourse on AI
There's no shortage of dystopian narratives surrounding artificial intelligence (AI) in our discourse. From tech luminaries to the media, the conversation often veers toward apocalyptic scenarios: superintelligent AI obliterating humanity or reducing us to obsolescent relics. While such existential threats can't be discounted, are they stealing the limelight from AI's immediate and tangible challenges?
Let's be clear: the existential risks posed by AI are real and warrant serious consideration. Visionaries like OpenAI's Sam Altman are not erring by bringing these threats to our attention. The work being done by AI alignment teams is critical and could guide us toward a future where AI coexists harmoniously with humanity. The imbalance lies in fixating on these future scenarios while overlooking the urgent issues at hand.
Today, we are dealing with AI systems like GPT-4 which, despite their impressive capabilities, can "hallucinate": generate outputs detached from reality or fact, making them potential tools for misinformation. This issue, although not an existential risk, is an immediate problem requiring attention now.
Then there are concerns about discrimination caused by biased algorithms, privacy violations, and widening healthcare disparities driven by uneven access to AI technologies. While these problems may not threaten the existence of the entire human race, they are damaging individual lives today. And as AI continues to permeate daily life, the damage could spread to a significant portion of humanity.
Many in the field argue that resolving these immediate issues could, as a byproduct, help mitigate the larger existential risks. The term "naturally emergent" is often used in this context: the notion that by addressing algorithmic bias, improving transparency, and ensuring respect for human values, we are inherently constructing robust defenses against future catastrophic AI risks.
Why is this shift in focus essential? The answer lies in the corridors of Capitol Hill. With the United States Congress taking AI regulation seriously, the window to address AI's immediate concerns through legislation is open now. Given its legislative cycle and the realities of the political calendar, Congress has limited opportunities to reassess and modify such regulations. If the legislative focus remains fixed on the distant future, we risk failing to address the issues at hand.
Moreover, the rapid evolution of AI technology underlines the urgency to act. A year ago, the capabilities we see today were hard to imagine. That pace of advancement leaves little room for regulatory error, reinforcing the case for prioritizing tangible, immediate challenges over potential, existential ones.
The argument here is not against considering existential risks, but rather a call to rebalance our approach. The legislative discourse must pivot from the uncertain future to the concrete challenges of the present. As we strive for a harmonious AI-infused future, let's not lose sight of today's pressing problems. It's time we directed our focus and legislative might toward mitigating these immediate concerns. Our existential future may depend on it.