AI-assisted pentesting: Doing it right.

By: Remco van der Meer

AI is increasingly used to support penetration testing, from report generation to vulnerability analysis and retesting. However, most AI-assisted approaches rely on cloud-based LLMs, requiring sensitive internal data to be sent to third-party providers. For security professionals, this creates a serious conflict between productivity and data sovereignty.

This talk explores the risks of using AI in pentesting without proper architectural controls and presents a fully local, Dockerized solution designed around a strict security-first principle.

The solution connects a secure web interface with an n8n automation backend and locally hosted LLMs served via Ollama. Its architecture enforces strict isolation between public and internal services, with segmented networks, TLS-secured communication, encryption at rest, sandboxed execution environments, and human-in-the-loop validation at every critical stage.
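As a rough illustration, the public/internal segmentation described above could be expressed in a Docker Compose file along these lines. Service names, images, and network names here are assumptions for illustration, not the talk's actual stack:

```yaml
# Hypothetical compose sketch: only the TLS-terminating proxy touches the
# public network; the web UI, n8n, and Ollama live on an internal-only network.
services:
  proxy:                       # TLS termination; the only public-facing service
    image: nginx:alpine
    ports:
      - "443:443"
    networks: [public, internal]
  webui:                       # secure web interface, reachable only via proxy
    image: example/webui:latest
    networks: [internal]
  n8n:                         # automation backend
    image: n8nio/n8n:latest
    networks: [internal]
  ollama:                      # locally hosted LLMs; never exposed publicly
    image: ollama/ollama:latest
    networks: [internal]

networks:
  public: {}
  internal:
    internal: true             # no routing beyond this Docker network
```

With `internal: true`, containers on the internal network cannot reach or be reached from outside the Docker host, so model traffic and pentest data never transit a third-party provider.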

The AI workflow follows a planner–supervisor–agent execution model, enabling structured and controlled automation. Multiple local LLM instances can be shared securely within teams. Supported use cases include pentesting workflows, retesting, report imports, structured report generation, and custom markdown-based agents.
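To make the planner–supervisor–agent model concrete, here is a minimal Python sketch of the control flow, including a human-in-the-loop gate before each step. All function names and the step format are illustrative assumptions; in the real system the planner and agent would call local LLMs and sandboxed tools:

```python
# Hypothetical sketch of a planner-supervisor-agent loop with
# human-in-the-loop validation at every critical stage.

def planner(goal: str) -> list[str]:
    # Break the goal into ordered steps; a local LLM would do this in practice.
    return [f"enumerate {goal}", f"test {goal}", f"report on {goal}"]

def agent(step: str) -> str:
    # Execute one step inside a sandbox; here we just echo a result.
    return f"completed: {step}"

def human_approves(step: str) -> bool:
    # Human-in-the-loop gate; a real system would prompt an operator here.
    return True

def supervisor(goal: str) -> list[tuple[str, str]]:
    # The supervisor walks the plan, enforcing approval before execution.
    results = []
    for step in planner(goal):
        if not human_approves(step):
            results.append((step, "skipped: rejected by operator"))
            continue
        results.append((step, agent(step)))
    return results
```

The key property is that no step reaches the agent without passing the supervisor's approval gate, which is what keeps the automation structured and controlled.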

Attendees will learn how to integrate AI into offensive security workflows without compromising confidentiality, compliance, or control.