OpenAI Bans Accounts Exploiting ChatGPT for Malicious Surveillance Operations
OpenAI announced Friday that it had banned several accounts, likely originating from China, that used ChatGPT to develop an AI-powered surveillance system. The tool, reportedly built on Meta's Llama model, was designed to monitor anti-China protests in the West, collect data from social media, and share the findings with Chinese authorities. The operation, which OpenAI codenamed Peer Review, also involved analyzing posts related to Uyghur rights protests, though the authenticity of some of the images remains unclear. OpenAI additionally disrupted other malicious campaigns, including a North Korea-linked job scam, anti-U.S. content generation in Latin America, and Iranian influence operations.
The rise of AI-driven disinformation underscores growing concern about bad actors weaponizing tools like ChatGPT for cybercrime. OpenAI is urging stronger monitoring, tighter access controls, and clearer policies to detect and block malicious activity. Users are advised to keep their security practices up to date, report suspicious content, and work with cybersecurity experts to improve AI safety.
Source: https://thehackernews.com/2025/02/openai-bans-accounts-misusing-chatgpt.html