This is the Age of Artificial Intelligence (AI). Although it may seem like a newly arrived phenomenon, the AI Revolution has been developing for years. The public has only recently become aware of large generative pre-trained transformer (GPT) models such as ChatGPT, whose capabilities have finally crossed the threshold at which AI becomes visible in everyday life. Suddenly, the evolution of AI is at the forefront of public attention, as if it had sprung up overnight, but it is an ongoing process which cannot easily be halted.
The implications of AI are widespread and complex, touching upon social, business, political, economic, and many other areas. For instance, OpenAI's research into the potential labor market effects of Large Language Models (LLMs) suggests that around 19% of workers may see at least 50% of their tasks impacted. Here, however, we limit our discussion to the cybersecurity, privacy, and ethical implications of GPTs and LLMs.
In November 2022, ChatGPT was made available for public use. It quickly became apparent that it was relatively easy to bypass the safety guardrails designed to prevent misuse, a process known as jailbreaking. In March 2023, OpenAI released GPT-4, which could accept both image and text inputs and respond to them with text. It was also capable of generating code and creating convincing phishing emails.
The use of AI to harm others is theoretically prevented by internal guardrails. Unfortunately, these guardrails have so far proven inadequate. Security experts are divided on the chances of ever creating a GPT model that cannot be abused. Some are hopeful that a deeper understanding of how jailbreaking works, combined with robust security policies, can reduce the chances of misuse. Others are less certain that abuse of AI technologies can be prevented at all.
Despite the potential for misuse, AI can also be used for good. Diffblue's AI product, Diffblue Cover, automatically generates unit tests to improve code quality. GPT-4 can also help visually impaired individuals interpret their surroundings. Although the potential for misuse can never be completely eliminated, steps can be taken to reduce the risks and to steer AI toward beneficial purposes.
In conclusion, the development of AI is an ongoing process, and its potential implications for society and the economy are both vast and complex. GPTs and LLMs have already been used for malicious purposes, and the potential for future abuse is concerning. It is important to stay informed about the development of AI and its implications for society, and to implement policies and guardrails that ensure it is used for beneficial purposes.