Offensive AI (Workshop)

By: Arjen Wiersma

As AI is built into our apps, new ways for attackers to get in are created. In this hands-on workshop, you won’t just hear about theories, you’ll learn to think and act like an attacker. I will give you the tools to test, break, and ultimately, make your AI systems more secure.

This workshop is all about doing. I will show you how to write prompts, not to create things, but to break them. You will learn how to attack the weak spots where AI connects to other apps and how to trick AI into doing things it’s not supposed to do. By thinking like an attacker, you’ll learn to find security flaws before they cause real problems.

In this workshop, you will:

– learn prompt injection: Try out different ways to trick an AI system into ignoring its main rules.
– attack app connections: Find and attack the weak spots where AI connects to other parts of an application.
– think like an attacker: Learn how to hunt for and test security holes that are unique to AI.
– leverage AI for research: Turn the tables and use AI as a powerful assistant to find security weaknesses in any code base, including your own.
– build stronger defenses: Use what you find to build real, working security protections for your AI apps.

This workshop is for developers, security experts, or anyone who wants to get hands-on with AI security. You’ll leave with the skills and a new point of view to help make future AI tools safe.