Researchers Discover Security Gaps Putting AI Chatbots at Risk
Cybersecurity experts at Tenable have found seven security flaws in OpenAI’s GPT-4o and GPT-5 models that could let attackers steal users’ personal data or chat histories. OpenAI has already fixed some of the issues.
These flaws enable prompt injection attacks, where hackers hide malicious instructions that trick ChatGPT into revealing information or performing unintended actions. Examples include:
- Hidden commands inside websites or web comments.
- Dangerous links disguised as Bing ads or search results.
- “One-click” or “zero-click” links that automatically trigger harmful prompts.
- Bugs that let attackers hide code or corrupt ChatGPT’s memory.
Similar attacks have been reported in other AI tools like Claude, GitHub Copilot, and Microsoft 365 Copilot, showing how AI systems connected to the web are at higher risk.
Tenable warned that prompt injection is a general problem with large AI models and can’t be fully fixed yet. However, users can take precautions.
How to Stay Safe When Using ChatGPT
- Avoid clicking suspicious links shared in chats or prompts, especially ones with strange domains or parameters (e.g., ?q=...).
- Avoid pasting private or sensitive information into ChatGPT conversations unless you trust the environment.
- Be cautious when asking ChatGPT to summarize web pages — avoid unknown or untrusted sites.
- Review your chat history and memory settings to ensure no unwanted data is stored.
- Report strange or unsafe behavior directly to OpenAI if ChatGPT gives odd or unexpected replies.
- Keep browser extensions and plugins updated, and avoid unverified third-party ChatGPT integrations.
Source: https://thehackernews.com/2025/11/researchers-find-chatgpt.html