AI Security Village

Provide hands-on experience in exploiting and securing intentionally vulnerable AI applications with repeated key content to accommodate rotating groups.

AI Security Workshop with LLMs

Overview of AI Security Threats

A brief introduction to key security threats in AI applications, including:

  • Prompt Injection
  • Poison Training Data
  • Model Theft

Key Principles of Secure Coding and Data Protection

Demo 1: Exploiting Prompt Injection

Learn how to exploit prompt injection in an intentionally vulnerable chatbot application.

Hands-on: Identifying and mitigating injection vulnerabilities.

Understanding AI Application Interfaces

An overview of common web vulnerabilities affecting AI applications, such as:

  • XSS (Cross-Site Scripting)
  • RCE (Remote Code Execution)

Demo 2: Exploiting XSS

Exploiting Cross-Site Scripting in an AI-driven web interface.

Hands-on: Fixing output encoding issues.

Data Poisoning in AI Applications

Understand how malicious data can corrupt AI model training.

Demo 3: Poisoning a Training Dataset

Poison a training dataset to manipulate model behavior.

Hands-on: Securing the training pipeline.

Model Asset Compromise

An overview of model theft and unauthorized access risks.

Demo 4: Exploiting Model Extraction

Learn to exploit model extraction through excessive API responses.

Hands-on: Implementing rate-limiting and secure API practices.

Insecure Output Handling

Risks of unvalidated outputs in AI applications.

Demo 5: Sensitive Information Disclosure

Discover sensitive information disclosure via insecure output.

Hands-on: Applying secure output validation techniques.

Sandbox Bypass and Remote Code Execution

An overview of sandbox vulnerabilities and RCE risks.

Demo 6: Exploiting Sandbox Bypass

Exploit sandbox bypass in a vulnerable AI application.

Hands-on: Hardening sandbox configurations.

Unleashing LLMs for High-Level Java Vulnerability Detection

Demo 7: Let’s Learn LLM

How can we enable pentesters to automate vulnerability scanning in Java code at a high level.

What went behind building the tool – choosing a problem statement, sourcing datasets, preprocessing them, and identifying the right pre-trained model.

Building a tool to identify major vulnerabilities in Java code, in crucial domains such as Path Traversal, Input Validation, Resource Management, SQL Injection, Command Injection, and Logic Issues.

Demo, conclusion, and future developments for the current project.

 

Day 1 Schedule

Focus: Introduction to AI/ML security concepts and common attack techniques.

Slot 1: 9:00 AM – 10:00 AM

Slot 2: 10:15 AM – 11:15 AM

Slot 3: 11:30 AM – 12:30 PM

Slot 4: 2:30 PM – 3:30 PM

 

Day 2 Schedule

Focus: Advanced attack techniques and comprehensive security strategies.

Slot 5: 9:00 AM – 10:00 AM

Slot 6: 10:15 AM – 11:15 AM

Slot 7: 11:30 AM – 12:30 PM

  • Adversarial Attacks and Model Theft
    • Techniques to protect against adversarial inputs and theft.
    • Demo 7:
      • Adversarial attack demonstration on an AI image classifier.
      • Hands-on: Implementing adversarial defense techniques.

Slot 8: 2:30 PM – 4:30 PM

  • Capture the Flag (CTF) Event
    • Participants compete to exploit vulnerabilities in an intentionally vulnerable AI application.
    • Objective: Apply all learned concepts to identify and mitigate threats.
  1. Repetition Across Slots: Each slot covers the same core content and demos, ensuring all participants receive consistent training regardless of rotation.
  2. Hands-On Focus: Each session includes demos and practical exercises, emphasizing real-world applications.
  3. CTF Challenge: Culminates in a CTF event to consolidate learning and foster engagement.

Village Leaders

Swati Laxmi

Founder CRAC LEARNING

Ganduri Annapurna Sastry

Senior Security testing Engineer - EPAM Systems India Pvt Ltd

Aditya Bharathi

Curious Learner | Tech Enthusiast | Student of Artificial Intelligence

Kannan Ramamoorthy

Principal Security Engineer - Microsoft