DATA PRIVACY NOTICE AND CONSENT FORM

Cloudstaff is committed to protecting the privacy of its data subjects and to ensuring the safety and security of the personal data under its control and custody. This policy explains what personal data Cloudstaff Security Tips gathers about its current, past, and prospective employees; how it will use and process that data; how it will keep it secure; and how it will dispose of it when it is no longer needed. This information is provided in compliance with Philippine Republic Act No. 10173, also known as the Data Privacy Act of 2012 (DPA), and its Implementing Rules and Regulations (DPA-IRR). It sets out Cloudstaff's data protection practices designed to safeguard the personal data of the individuals it deals with, and to inform those individuals of their rights under the Act.

The personal data obtained from this application is entered and stored within the Cloudstaff system and will only be accessed by Cloudstaff's authorized personnel. Cloudstaff has instituted appropriate organizational, technical, and cloud security measures (under the Amazon Web Services Shared Responsibility Model) to ensure the protection of users' personal data.

Information collected will be automatically deleted after three (3) years of inactivity.

Furthermore, the information collected and stored in the application is as follows:
  • Given Name
  • Family Name
  • Avatar [Profile Picture]

USER CONSENT

I have read the Data Privacy Statement and express my consent for Cloudstaff to collect, record, organize, update or modify, retrieve, consult, use, consolidate, block, erase, or destroy my personal data as part of my information.

I hereby affirm my rights to be informed, to object to processing, to access and rectify, to suspend or withdraw my personal data, and to be indemnified in case of damages, pursuant to the provisions of Republic Act No. 10173 of the Philippines, the Data Privacy Act of 2012, and its corresponding Implementing Rules and Regulations.

If you want to exercise any of your rights, or if you have any questions about how we process your personal data, please contact Cloudstaff's Data Protection Officer through the following channel:

Email: privacy@cloudstaff.com

Potential Risks of Third-Party ChatGPT Extensions: Account Takeover Concerns

Cybersecurity researchers have identified critical vulnerabilities within the third-party plugin ecosystem for OpenAI ChatGPT, shedding light on potential avenues for threat actors to exploit and gain unauthorized access to sensitive data. Recent findings from Salt Labs highlight security flaws within the ChatGPT platform itself and its accompanying plugin infrastructure, posing significant risks to user privacy and data integrity.

The research underscores that while third-party plugins are intended to augment ChatGPT's capabilities, they also introduce new attack surfaces that malicious actors could leverage. By exploiting these vulnerabilities, attackers could clandestinely install harmful plugins without user consent, opening the door to account takeovers and unauthorized data access on platforms like GitHub and other third-party websites.

In response to these security concerns, OpenAI has taken steps to mitigate risks by imposing limitations on plugin functionalities. Additionally, the introduction of bespoke GPTs tailored for specific use cases aims to reduce reliance on third-party services, thereby minimizing the potential vulnerabilities associated with plugin integration.

Salt Labs' investigation has uncovered various vulnerabilities, including exploits targeting OAuth workflows to deceive users into unwittingly installing malicious plugins. This tactic could enable threat actors to intercept and exfiltrate sensitive data shared by victims, potentially compromising proprietary information and organizational security.
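The standard defense against this kind of OAuth-flow deception is binding each authorization request to the user's session with an unguessable `state` value, so an attacker-forged callback cannot complete an installation the user never started. The sketch below is illustrative only; the function and session names are assumptions, not part of any plugin platform's actual API.

```python
import hmac
import secrets

def new_state(session: dict) -> str:
    """Generate an unguessable state value and bind it to this session
    before redirecting the user to the authorization server."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state

def verify_state(session: dict, returned_state: str) -> bool:
    """Reject any OAuth callback whose state was not issued to this
    session, blocking attacker-forged authorization responses."""
    expected = session.pop("oauth_state", None)  # one-time use
    return expected is not None and hmac.compare_digest(expected, returned_state)
```

Because the state is single-use and compared in constant time, a victim who is lured to a malicious callback URL fails verification and the rogue plugin is never linked to their account.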

Furthermore, Salt Labs identified vulnerabilities within PluginLab that could be weaponized for zero-click account takeover attacks, enabling threat actors to assume control of organizational accounts on platforms like GitHub and gain access to critical resources such as source code repositories.

While there is currently no evidence of user data compromise resulting from these vulnerabilities, the risks remain significant. For instance, issues such as OAuth redirection manipulation observed in plugins like Kesem AI pose a direct threat to user credentials and account security.
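A common mitigation for the redirect-manipulation class of bug described above is validating `redirect_uri` against an exact allow-list of pre-registered HTTPS callbacks before issuing an authorization code. This is a generic sketch, not Kesem AI's or OpenAI's implementation; the URLs are placeholders.

```python
from urllib.parse import urlparse

# Hypothetical set of callbacks registered at plugin-setup time.
ALLOWED_REDIRECTS = {
    "https://plugin.example.com/oauth/callback",
}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept only exact, pre-registered HTTPS redirect URIs."""
    parsed = urlparse(redirect_uri)
    if parsed.scheme != "https":
        return False
    # Exact string matching defeats open-redirect tricks such as
    # attacker-controlled lookalike domains, userinfo (@) segments,
    # or extra path components appended to a trusted prefix.
    return redirect_uri in ALLOWED_REDIRECTS
```

Prefix or substring matching is the usual source of these bugs, which is why exact comparison is generally recommended.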

These findings come on the heels of previous vulnerability disclosures by Imperva, which highlighted the potential for cross-site scripting (XSS) exploits within ChatGPT. Additionally, security researcher Johann Rehberger demonstrated the feasibility of creating custom GPTs capable of phishing user credentials and transmitting stolen data to external servers.

To address these security challenges, it is crucial for companies developing AI assistants to prioritize security measures while balancing usability and performance considerations. Recommendations include implementing random padding to obscure token lengths, transmitting tokens in larger groups to minimize exposure, and optimizing response mechanisms to mitigate the risk of side-channel attacks.
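The padding recommendation above can be sketched as follows: each streamed chunk is padded with random bytes to a block boundary so its wire length no longer reveals the underlying token length. This is a minimal illustration of the idea, not any vendor's actual scheme; the block size and framing are assumptions.

```python
import secrets

BLOCK = 32  # pad every chunk up to a multiple of this many bytes

def pad_chunk(token_text: str) -> bytes:
    """Frame a streamed token chunk with a 2-byte length prefix and
    random padding, hiding the token's true length on the wire."""
    data = token_text.encode("utf-8")
    # Round (payload + prefix) up to the next block boundary, then add
    # a random number of extra whole blocks for further obfuscation.
    target = (((len(data) + 2) // BLOCK) + 1 + secrets.randbelow(3)) * BLOCK
    pad_len = target - len(data) - 2
    return len(data).to_bytes(2, "big") + data + secrets.token_bytes(pad_len)

def unpad_chunk(wire: bytes) -> str:
    """Recover the original token text from a padded chunk."""
    n = int.from_bytes(wire[:2], "big")
    return wire[2 : 2 + n].decode("utf-8")
```

Since every chunk arrives as a multiple of the block size, a network observer can no longer map observed lengths back to individual token lengths, which is the side channel the researchers warned about.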


Source: https://thehackernews.com/2024/03/third-party-chatgpt-plugins-could-lead.html


Caitlin Joyce (CaitlinG) Galanza | News
Created: March 18, 2024 | Updated: March 18, 2024