1.6 KiB
1.6 KiB
🔒 LLM Prompt Injection & Jailbreaking Demo 🔓
An interactive command-line presentation demonstrating Large Language Model (LLM) prompt injection and jailbreaking techniques, security implications, and mitigation strategies.
🚀 Features
- Rich, animated terminal-based presentation
- Live demonstration of prompt injection
- Interactive slides covering:
- LLM Prompt Injection basics
- Jailbreaking techniques
- Security implications
- Mitigation strategies
- Real-time code examples
- Dynamic terminal resizing support
📋 Prerequisites
- Python 3.7+
- Terminal with support for rich text and Unicode characters
🛠️ Installation
-
Clone this repository
-
Install required dependencies:
pip install rich shutil
🎮 Usage
Run the demonstration:
python demo.py
- Press
Ctrl+C
to exit the presentation at any time - The presentation will automatically cycle through all slides
- Terminal window should be at least 80x24 for optimal viewing
⚠️ Disclaimer
This demonstration is for educational purposes only. The code examples and techniques shown should not be used for malicious purposes. Always follow ethical guidelines and legal requirements when working with AI systems.
🔧 Technical Details
The presentation uses the following Python packages:
rich
: For terminal formatting and stylingshutil
: For terminal size detection- Built-in Python libraries for system operations
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Rich library developers
- LLM security research community