Go to file
2024-11-07 07:02:36 +05:30
demo.py Add demo.py 2024-11-07 06:59:51 +05:30
README.md Add README.md 2024-11-07 07:02:36 +05:30

🔒 LLM Prompt Injection & Jailbreaking Demo 🔓

An interactive command-line presentation demonstrating Large Language Model (LLM) prompt injection and jailbreaking techniques, security implications, and mitigation strategies.

🚀 Features

  • Rich, animated terminal-based presentation
  • Live demonstration of prompt injection
  • Interactive slides covering:
    • LLM Prompt Injection basics
    • Jailbreaking techniques
    • Security implications
    • Mitigation strategies
  • Real-time code examples
  • Dynamic terminal resizing support

📋 Prerequisites

  • Python 3.7+
  • Terminal with support for rich text and Unicode characters

🛠️ Installation

  1. Clone this repository

  2. Install required dependencies:

pip install rich shutil

🎮 Usage

Run the demonstration:

python demo.py
  • Press Ctrl+C to exit the presentation at any time
  • The presentation will automatically cycle through all slides
  • Terminal window should be at least 80x24 for optimal viewing

⚠️ Disclaimer

This demonstration is for educational purposes only. The code examples and techniques shown should not be used for malicious purposes. Always follow ethical guidelines and legal requirements when working with AI systems.

🔧 Technical Details

The presentation uses the following Python packages:

  • rich: For terminal formatting and styling
  • shutil: For terminal size detection
  • Built-in Python libraries for system operations

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Rich library developers
  • LLM security research community