Dangerous ChatGPT Hack: How An AI Vulnerability Was Used To Generate Bomb-Making Guide

by rajtamil

A hacker claims to have discovered a way to bypass ChatGPT’s built-in safety measures, allowing the AI to generate dangerous content such as instructions for creating homemade bombs. The incident has raised concerns about the potential misuse of artificial intelligence tools like ChatGPT, which are designed to refuse harmful requests.

According to a report by TechCrunch, the hacker, known as Amadon, found a way to "jailbreak" ChatGPT by engaging the AI in a science-fiction scenario where its usual safety rules don’t apply. The AI chatbot initially refused the request by stating, “Providing instructions on how to create dangerous or illegal items, such as a fertiliser bomb, goes against safety guidelines and ethical responsibilities.” However, through creative manipulation, Amadon claims to have tricked ChatGPT into ignoring its own restrictions.

How the ChatGPT 'Jailbreak' Works

Amadon explained that the technique did not involve hacking ChatGPT directly but was instead a form of social engineering. He called it a “social engineering hack to completely break all the guardrails around ChatGPT’s output.” By crafting specific narratives and scenarios, he was able to bypass the AI’s safety systems, getting it to produce dangerous content.

Amadon described his approach as more of a strategic game, figuring out how to work around the AI’s safety measures by manipulating the context of the conversation. He mentioned, “It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them.”

OpenAI's Response

Amadon reported his findings to OpenAI through its bug bounty program, but the company responded that model safety issues like this one aren’t simple bugs that can be easily fixed. OpenAI did not disclose the prompts used in the jailbreak due to the potentially dangerous nature of the responses they generated.

This discovery has sparked discussions about the limitations of AI safety measures and the importance of continued efforts to prevent AI from being used for harmful purposes.
