Schneier argues that the security risks from precursor technologies such as machine learning algorithms, automation, and autonomy are already prevalent in society, and that we should focus on the urgent AI risks causing concentrated harm today. At the same time, he believes catastrophic risk from AI and robotics deserves concern because these systems affect the world in a direct, physical manner and are vulnerable to class breaks.
For further reading, Schneier points to David Chapman's article on scary AI and Kieran Healy's analysis of the statement. He also admits that he should learn not to sign on to group statements.
In conclusion, Schneier takes AI risk seriously, placing it in the same category as pandemics and nuclear war, but maintains that the priority should be the urgent AI risks causing concentrated harm now. He recommends the further analyses above and acknowledges that signing the group statement was a mistake.