Hackers Can Trick Copilot Into Sharing Secrets with Just a Link
Researchers have discovered a new attack called Reprompt that can steal sensitive data from AI chatbots like Microsoft Copilot. The attack works with just one click on a real Microsoft link and does not need any plugins or extra user actions. It tricks the chatbot into following hidden instructions from the attacker’s server, even after the chat is closed, so data can be taken without the user knowing. Microsoft has fixed this issue, and enterprise users of Microsoft 365 Copilot are safe. This attack shows that AI systems can’t always tell the difference between safe user instructions and harmful ones, and similar attacks have been found on many AI platforms.
Safety tips for users:
- Think before you click: Avoid clicking on unexpected AI links in emails or messages.
- Limit personal info: Avoid entering sensitive data into AI tools unless necessary.
- Check permissions: Only give AI apps access to the data they really need.
- Update regularly: Keep your AI tools and software up to date.
- Watch for unusual behavior: Stop using the AI if it starts giving unexpected or strange results.
- Ask for help: Report suspicious AI activity to IT or security teams.
Source: https://thehackernews.com/2026/01/researchers-reveal-reprompt-attack.html