Blind Navigation by Autonomous Vehicles

San Francisco saw a distressing incident this month: a woman was severely injured after being struck by a human driver and thrown into the path of one of the many self-driving cars traversing the city’s streets. In recent testimony, San Francisco’s fire chief, Jeanine Nicholson, said that autonomous vehicles had interfered with firefighting operations 55 times this year through August. And since 2019, Tesla’s Autopilot driver-assistance software has been implicated in 736 crashes and 17 deaths nationwide.

Amid the ongoing debate over whether artificial intelligence could one day threaten humanity, there has been conspicuously little discussion of the ways it already does. Self-driving cars, navigating streets with scant oversight, are a prominent example.

The absence of federal software safety-testing standards for autonomous vehicles has created a significant loophole, one that companies such as Tesla, General Motors and Waymo have exploited to deploy thousands of cars. In the United States, the National Highway Traffic Safety Administration (NHTSA) regulates car hardware: components like windshield wipers, airbags and mirrors. Licensing drivers, by contrast, is decentralized, with the responsibility left to the states. To earn the privilege of operating a motor vehicle, most people must pass a series of assessments: a vision test, a written exam and a practical driving test.

The AI that takes the wheel, however, undergoes no comparable scrutiny. In California, companies can obtain a permit to run driverless cars simply by declaring that their vehicles have been tested and that the “manufacturer has reasonably determined that it is safe to operate the vehicle.”

That leaves a glaring gap: who, exactly, licenses a computer driver? Does the authority lie with the NHTSA, or with the state? The lines of responsibility remain blurred and unresolved, notes Missy Cummings, a professor and the director of the Mason Autonomy and Robotics Center at George Mason University.

The prevailing narrative about the rise of artificial intelligence fixates on the fear that machines will surpass human intelligence and seize control of our world. Today’s reality is closer to the opposite: computers frequently exhibit a lamentable lack of discernment, and their lapses inflict real harm on the humans around them.

The autonomous-vehicle companies maintain that, notwithstanding the highly publicized malfunctions, their software outperforms human drivers. That is plausible: autonomous vehicles don’t get tired, drunk or distracted by texting. But there is not yet enough data to support the claim definitively. And when autonomous cars do make mistakes, they make them differently from human drivers, with consequences of their own: halting in ways that block emergency vehicles on their way to provide urgent medical assistance, or trapping crash victims and worsening already dire circumstances. Such incidents expose the imperfections that persist in autonomous transportation.

In a recent letter, Representatives Nancy Pelosi and Kevin Mullin urged the NHTSA to intensify its efforts to gather comprehensive data on autonomous-vehicle incidents, particularly those in which stopped vehicles obstruct emergency responders. Better comparison data on human-driven crashes would help as well; what the NHTSA provides now is limited to crash estimates derived from sampling.

In the meantime, why can’t we act on the information we already gather?

After all, artificial intelligence often fails in unexpected ways. One of GM’s Cruise autonomous vehicles collided with a bus last year after mispredicting the bus’s path; GM responded with a software update. In 2017, a driverless car attempting a left turn slammed on its brakes, apparently anticipating that an oncoming vehicle might swerve into its path. Instead, the oncoming car crashed into the suddenly halted driverless car, injuring passengers in both vehicles.

The computer-vision systems in these cars are extremely fragile. “They will fail in ways that we simply do not understand,” writes Dr. Cummings, who argues that AI should be subject to licensing requirements akin to the eyesight and performance tests that pilots and human drivers must pass. Automobiles are not the only concern. Artificial-intelligence chatbots fail in new and surprising ways every day, fabricating legal cases or sexually harassing their human users. And we have struggled with the shortcomings of AI recommendation systems for years, whether pushing ideologically biased content on YouTube or promoting gun components and drug paraphernalia on Amazon despite the company’s ban on such items. Yet despite all these concrete harms, many regulators remain preoccupied with the hypothetical and, to some, far-fetched disaster scenarios promoted by the AI doomers: prominent tech experts and executives who claim their biggest fear is the possibility of human extinction in the distant future. According to Politico, such pessimists are in charge of Britain’s AI task force ahead of the country’s AI Safety Summit in November.

In the United States Congress, a diverse range of AI legislation has emerged, much of it likewise centered on doomsday scenarios: prohibiting AI systems from controlling nuclear-launch decisions, for example, and imposing licensing and registration requirements on certain high-risk AI models.

Heidy Khlaaf, a software safety engineer and engineering director at the technical security firm Trail of Bits, dismisses the doomer theories as a distraction, a tactic that keeps people perpetually chasing an endless array of hypothetical perils. In a recent paper, Dr. Khlaaf argues for a more targeted approach: safety testing tailored to the specific domain in which an AI system operates. A chatbot like ChatGPT that is intended for use by legal professionals, for instance, should be tested against the demands of legal work.

The problem of AI safety, in other words, is solvable, and we should address it promptly with the resources already at our disposal. Experts in each domain should evaluate the AI deployed in their fields. A sensible starting point: subjecting fleets of autonomous vehicles to rigorous vision and driving tests, the better to expose the risks of putting AI behind the wheel.

Safety, admittedly, is unglamorous work: seasoned professionals methodically running tests and writing comprehensive assessments. But it is paramount, and it is urgent.