ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, AI Prompt Engineering, Adversarial Machine Learning.
Collection of leaked ChatGPT system prompts, documenting base tools, conditional features, and specialized assistants. Provides insights into the internal structure and behavior of different ChatGPT components and their activation conditions.
MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge, Impossible GPT Security, Prompts Cybersecurity, Prompting Vulnerabilities, FlowGPT, Secure Prompting, Secure LLMs, Prompt Hacker, Cutting-edge AI Security, Unbreakable GPT Agent, Anti GPT Leak, System Prompt Security.
Small collection of scripts to build datasets for LLMs.
A small collection of AI system prompt cracking resources.