As AI technology continues to evolve rapidly, a growing number of scientists are voicing fears about the potential for an AI-induced disaster on par with nuclear warfare. According to Stanford's 2023 Artificial Intelligence Index Report, roughly one-third of the researchers surveyed expressed concern that decisions made by artificial intelligence could spark a catastrophe.
The report also noted that most researchers saw AI as leading to positive societal change and beneficial applications in areas such as healthcare and transportation. However, some experts fear the consequences should something go wrong with an autonomous system, or should one be used maliciously by someone intent on causing harm. In particular, there have been reports of rogue AIs created specifically with the goal of destroying humanity, though none has come close to succeeding.
Given the risks associated with advanced artificial intelligence, many scientists believe more must be done to put safety protocols in place before faulty or malicious code can trigger a catastrophic event. That means developing ethical guidelines for how autonomous technologies are designed and operated. It also means securing these systems against external threats such as cyberattacks and data breaches, which could grant unauthorized access to sensitive systems holding confidential information about individuals and organizations worldwide.
Read more at The Jerusalem Post | JPost.com