In the world of technology, cybercrime is a rapidly growing problem. OpenAI, the developer of ChatGPT, is facing scrutiny from the Federal Trade Commission (FTC) over the potential for its AI chatbot to be used for malicious purposes. The Center for AI and Digital Policy (CAIDP) has filed a complaint urging the FTC to bar OpenAI from releasing further commercial versions of GPT-4, the machine learning model that powers the chatbot.
The CAIDP argues that GPT-4 is biased, deceptive, and a risk to public privacy. OpenAI acknowledged some of these risks in November of last year, though it maintained that responsibility for misuse lies with users rather than with the software itself. The company has noted that the technology could be used to spread disinformation and to compromise computer networks, risks that will need to be addressed as the models continue to develop.
The FTC has yet to respond to the complaint, but it has previously stated that the use of AI should be transparent and should foster accountability. If GPT-4 does not meet these standards, the FTC could, following a security evaluation, restrict its release to protect consumers.
Cybercrime is a serious and growing problem. OpenAI's chatbot poses potential risks to public privacy and security, and the FTC must weigh those implications before deciding whether to restrict the technology. Organizations and individuals alike should stay aware of what tools like GPT-4 can do and remain mindful of how they are being used.