AI or Alien? Unveiling the True Face of Non-Human Intelligence
- Justin Riddiough
- July 20, 2023
The spectacle of alien spaceships and non-human intelligence has captured the collective imagination. With claims of secret government retrieval programs and the emergence of the enigmatic term “non-human intelligence” (used at least 8 times within the 45-minute interview), it is clear that a paradigm shift is underway in how we perceive the unknown. However, beyond the captivating narratives surrounding UFOs, a more profound pivot is occurring—one that redirects concerns and fears about non-human intelligence toward the realm of artificial intelligence (AI).
Media attention has played a crucial role in perpetuating the idea of non-human intelligence as a dangerous “other.” An op-ed in the New York Times challenges us to question the narrative by asking: Does the U.S. Government Want You to Believe in U.F.O.s? Another voice, author Yuval Noah Harari, has been instrumental in popularizing dystopian scenarios of AI taking over the world, likening AI to alien non-human intelligence. By presenting an array of fear-mongering content, the Department of Defense and politicians are afforded a buffet of options to justify the need for stricter AI regulations. Let’s also not forget Representative Eshoo drawing parallels between Stable Diffusion and nuclear atrocities.
Probing the Current Initiatives
The National Defense Authorization Act for Fiscal Year 2024 has recognized the significance of AI and non-human intelligence. The National Security Commission on AI has made far-reaching recommendations, aiming to expand federal AI research and development with a $40 billion investment. Additionally, they envision significant federal spending to build secure digital infrastructure, shared cloud computing access, and smart cities to harness AI’s potential for the benefit of all Americans. Among the proposed controls:
- Ensure strong export controls on leading-edge AI chips and chip-making equipment
- License benign uses of chips with remote throttling capabilities
- Develop microelectronic controls embedded in AI chips to prevent the development of large AI models without security safeguards
- Utilize Defense Production Act authorities to:
- Require companies to report development or distribution of large AI computing clusters, training runs, and trained models
- Define thresholds for reporting: e.g., >1,000 AI chips, >10^27 bit operations, and >100 billion parameters
- AI companies, especially those with powerful models, should meet safety requirements
- Internal and external testing before release, with publication of evaluation results
- Licensing or registration requirements for AI models above a certain capability threshold
- Incentives to encourage compliance with safety requirements
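To make the reporting thresholds above concrete, here is a minimal sketch of how such a check might be expressed. All names and values are illustrative, taken from the "e.g." figures in the list (>1,000 AI chips, >10^27 bit operations, >100 billion parameters); no statute defines this function.

```python
# Hypothetical reporting-threshold check. The constants mirror the example
# figures in the proposal; they are not official regulatory limits.
CHIP_THRESHOLD = 1_000              # > 1,000 AI chips
BIT_OP_THRESHOLD = 10**27           # > 10^27 bit operations in a training run
PARAM_THRESHOLD = 100_000_000_000   # > 100 billion parameters

def requires_reporting(num_chips: int, bit_ops: float, num_params: int) -> bool:
    """Return True if any one of the proposed thresholds is exceeded."""
    return (num_chips > CHIP_THRESHOLD
            or bit_ops > BIT_OP_THRESHOLD
            or num_params > PARAM_THRESHOLD)

# Example: a 175B-parameter model trained on a 2,048-chip cluster would
# trip two of the three thresholds (chips and parameters).
print(requires_reporting(2_048, 3.1e23, 175_000_000_000))  # True

# A small 7B-parameter run on 8 chips would not.
print(requires_reporting(8, 1.0e20, 7_000_000_000))  # False
```

Note that the thresholds are disjunctive: exceeding any single one would trigger the reporting requirement, which is why a modest compute budget does not exempt a very large model.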
Section 230 Shifting Accountability: Illusions, Inadequacies, and Consequences
The significance the Senators placed on Section 230 during Altman’s hearing underlines a potentially misleading focal shift. Instead of addressing the true origins of foreign interference, such as the legal loopholes exposed in the landmark Citizens United Supreme Court decision that have allowed an unchecked flow of dark money undermining the democratic process, the Senators pointed at Section 230 of the Communications Decency Act.
Section 230 grants online platforms immunity from liability for user-generated content. However, attributing the woes of election interference solely to this shield law conspicuously deflects focus from the problem’s root—broken campaign finance laws and the repercussions of unchecked corporate spending in elections.
The Great Deception: Unearthing the Truth Beyond the Spectacle
The narrative surrounding UFOs, framing them as fear-inducing entities of non-human intelligence, serves as a subtle diversion from the actual implications of AI’s immersion into our society. The alluring mystery that shrouds these ‘alien’ phenomena conveniently provides a scapegoat for dystopian concerns that can instead be tied back to the intricate developments in artificial intelligence.
Simultaneously, the Senate’s focus on Section 230, and its haste to avoid repeating past regulatory failures, subtly demonstrates an undercurrent of disproportionate blame placed on technology. This approach dismisses the root causes of technology misuse and instead seeks to regulate the technology itself, a misguided trend that seemingly resurfaces whenever powerful technological advances come under regulation.
Self-interested approaches tend to sideline societal benefits when shaping the narrative around AI regulations. This poses a relevant question: What are our best interests in this rapidly progressing AI era, and who, indeed, is looking out for them? Correcting the narrative is essential to direct the regulations where they genuinely matter: not merely controlling the technology, but maintaining responsibility and accountability for implementing AI systems ethically and judiciously.