How a researcher hacked ChatGPT’s memory to reveal a major security flaw
9 mins read

How a researcher hacked ChatGPT’s memory to reveal a major security flaw

ChatGPT is an amazing tooland its developer, OpenAI, continues to add new features from time to time.

Recently, the company introduced a new memory feature in ChatGPT, which essentially allows it to remember things about you. For example, it can remember your age, gender, philosophical beliefs and pretty much anything else.

These memories are supposed to remain private, but a researcher recently showed how ChatGPT is artificial intelligence memory functions can be manipulated, raising privacy and security issues.

I’M GIVING AWAY A $500 HOLIDAY GIFT CARD

ChatGPT Hack 1

ChatGPT introduction screen. (Kurt “CyberGuy” Knutsson)

What is ChatGPT’s memory function?

ChatGPT’s memory feature is designed to make the chatbot more personal to you. It remembers information that may be useful for future conversations and tailors responses based on that information, even if you open another chat. For example, if you mention that you are a vegetarian, the next time you ask for a recipe, it will only provide vegetarian options.

WHAT IS ARTIFICIAL INTELLIGENCE (AI)?

You can also train it to remember specific details about you, like saying, “Remember I like to watch classic movies.” In future interactions, it will tailor recommendations accordingly. You are in control ChatGPT’s memory. You can reset it, clear specific memories or all memories, or turn this feature off completely in your settings.

ChatGPT hack 2

A prompt on ChatGPT. (Kurt “CyberGuy” Knutsson)

WINDOWS FLAW ALLOWS HACKERS TO SNEAK INTO YOUR PC OVER WI-FI

The security vulnerability in ChatGPT

As reported by Arstechnicasecurity researcher Johann Rehberger found that it is possible to trick the AI ​​into remembering false information through a method called indirect flash injection. This means that AI can be manipulated to accept instructions from untrusted sources such as email or blog posts.

For example, Rehberger showed that he could trick ChatGPT into thinking that a certain user was 102 years old, lived in a fictional place called the Matrix, and believed that the Earth was flat. After the AI ​​accepts this fabricated information, it will transfer it to all future chats with that user. These fake memories can be implanted by using tools like Google Drive or Microsoft OneDrive to store files, upload images or even browse a website like Bing – all of which can be manipulated by a hacker.

Rehberger submitted a follow-up report that included a proof of concept, showing how he could exploit the flaw in the ChatGPT app for macOS. He showed that by tricking the AI ​​into opening a web link containing a malicious image, he could get it to send everything a user wrote and all the AI’s responses to a server he controlled. This meant that if an attacker could manipulate the AI ​​in this way, they could monitor all conversations between the user and ChatGPT.

Rehberger’s proof-of-concept exploit showed that the vulnerability could be used to exfiltrate all user input in perpetuity. The attack is not possible via the ChatGPT web interface, thanks to an API OpenAI rolled out last year. However, it was still possible through the ChatGPT app for macOS.

When Rehberger privately reported the finding to OpenAI in May, the company took it seriously and mitigated the problem by making sure the model doesn’t follow any links generated in its own responses, such as those involving memory and similar functions.

HOW TO REMOVE YOUR PRIVATE DATA FROM THE INTERNET

ChatGPT hack 3

Johann Rehberger’s ChatGPT conversation. (Johann Rehberger)

CYBER FRAUDSTERS USE AI TO MANIPULATE GOOGLE SEARCH RESULTS

OpenAI’s response

After Rehberger shared his proof of concept, OpenAI engineers took action and released a patch to fix this vulnerability. They released a new version of the ChatGPT macOS application (version 1.2024.247) that encrypts conversations and fixes the security flaw.

So while OpenAI has taken steps to address the immediate security flaw, there are still potential vulnerabilities related to memory manipulation and the need for constant vigilance when using AI tools with memory capabilities. The incident underscores the changing nature of security challenges in AI systems.

The company says, “It is important to note that fast injection into large language models is an area of ​​ongoing research. As new techniques emerge, we address them at the model layer via instruction hierarchy or application layer defenses such as those mentioned.”

How do I disable ChatGPT memory?

If you’re not cool with ChatGPT keeping things about you or the chance that it might let a bad actor access your data, you can just turn this feature off in the settings.

  • Open ChatGPT app or website on your computer or smartphone.
  • Click on profile icon in the upper right corner of the screen.
  • Go to Settings and then select Personalization.
  • Change memory options of, and you are done.

This disables ChatGPT’s ability to retain information between conversations, giving you full control over what it remembers or forgets.

GET FOX BUSINESS ON THE REST BY CLICKING HERE

ChatGPT hack 4

A man using ChatGPT on his laptop (Kurt “CyberGuy” Knutsson)

DON’T LET NEARBY SNOOPS LISTEN TO YOUR VOICEMAIL WITH THIS QUICK TIP

Cybersecurity Best Practices: Protecting Your Data in the Age of AI

As AI technologies like ChatGPT become more widespread, it is critical to follow cybersecurity best practices to protect your personal information. Here are some tips to improve your cyber security:

1. Regularly review privacy settings: Stay informed about what data is collected. Check and adjust privacy settings regularly on AI platforms like ChatGPT and others to ensure you’re only sharing information you’re comfortable with.

2. Be careful about sharing sensitive information: Less is more when it comes to personal data. Avoid revealing sensitive details such as your full name, address or financial information in conversations with AI.

3. Use strong, unique passwords: Create passwords that are at least 12 characters long, combine letters, numbers and symbols, and avoid reusing them on different accounts. Consider using one the password manager to generate and store complex passwords.

4. Enable two-factor authentication (2FA): Add an extra layer of security to your ChatGPT and other AI accounts. By requiring a second form of verification, such as a text message code, you significantly reduce the risk of unauthorized access.

5. Keep software and applications up to date: Stay ahead of vulnerabilities. Regular updates often contain security patches that protect against newly discovered threats, so enable automatic updates whenever possible.

6. Have a strong antivirus program: In an age where AI is everywhere, protecting your data from cyber threats is more important than ever. Adding a strong antivirus program to your devices adds a critical layer of protection. The best way to protect yourself from malicious links that install malware, potentially accessing your private information, is to have a strong antivirus program installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2024 antivirus protection winners for your Windows, Mac, Android, and iOS devices.

7. Monitor your accounts regularly: Catch problems early. Frequently check bank statements and online accounts for unusual activity, which can help you identify potential violations Quickly.

Kurt’s key takeaways

As AI tools like ChatGPT become smarter and more personalized, it’s quite interesting to think about how they can tailor conversations to us. But as Johann Rehberger’s findings remind us, there are some real risks, especially when it comes to privacy and security. While OpenAI can mitigate these issues as they arise, it also shows that we need to keep an eye on how these features work. It’s about finding that sweet spot between innovation and keeping our data secure.

CLICK HERE TO GET THE FOX NEWS APP

What are your thoughts on AI remembering personal details – do you find it useful or does it raise privacy concerns for you? Let us know by writing to us at Cyberguy.com/Contact

For more of my tech tips and security warnings, subscribe to my free CyberGuy Report Newsletter by going to Cyberguy.com/Newsletter

Ask Kurt a question or let us know what stories you want us to cover.

Follow Kurt on his social channels:

Answers to the most frequently asked CyberGuy questions:

News from Kurt:

Copyright 2024 CyberGuy.com. All rights reserved.