This full-day, hands-on workshop equips developers, architects, and technical leaders with the knowledge and skills to secure AI systems end-to-end — from model interaction to production deployment. Participants learn how to recognize and mitigate AI-specific threats such as prompt injection, data leakage, model exfiltration, and unsafe tool execution.
Through a series of focused labs, attendees build, test, and harden AI agents and Model Context Protocol (MCP) services using modern defensive strategies, including guardrails, policy enforcement, authentication, auditing, and adversarial testing.
The training emphasizes real-world implementation over theory, using preconfigured environments in GitHub Codespaces for instant, reproducible results. By the end of the day, participants will have created a working secure AI pipeline that demonstrates best practices for trustworthy AI operations and resilient agent architectures.
The course blends short conceptual discussions with deep, hands-on practice across eight structured labs, each focusing on a key area of AI security. Labs can be completed in sequence within GitHub Codespaces, requiring no local setup.
1. Lab 1 – Mapping AI Security Risks
Identify the unique attack surfaces of AI systems, including LLMs, RAG pipelines, and agents. Learn how to perform a structured threat model and pinpoint where vulnerabilities typically occur.
2. Lab 2 – Securing Prompts and Contexts
Implement defensive prompting, context isolation, and sanitization to mitigate prompt injection, hidden instructions, and data leakage risks.
3. Lab 3 – Implementing Guardrails
Use open-source frameworks (e.g., Guardrails.ai, LlamaGuard) to validate LLM outputs, enforce content policies, and intercept unsafe completions before delivery.
4. Lab 4 – Hardening MCP Servers and Tools
Configure FastMCP servers with authentication, scoped tokens, and restricted tool manifests. Examine how to isolate and monitor server–client interactions to prevent privilege escalation.
5. Lab 5 – Auditing and Observability for Agents
Integrate structured logging, trace identifiers, and telemetry into AI pipelines. Learn how to monitor for suspicious tool calls and enforce explainability through audit trails.
6. Lab 6 – Adversarial Testing and Red-Teaming
Simulate common AI attacks—prompt injection, model hijacking, and context poisoning—and apply mitigation patterns using controlled experiments.
7. Lab 7 – Policy-Driven Governance
Introduce a “security-as-code” approach using policy files that define allowed tools, query types, and data scopes. Enforce runtime governance directly within your agent’s workflow.
8. Lab 8 – Secure Deployment and Lifecycle Management
Apply DevSecOps practices to containerize, sign, and deploy AI systems safely. Incorporate secrets management, vulnerability scanning, and compliance checks before release.
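To make the “security-as-code” idea from Lab 7 concrete, here is a minimal, hypothetical sketch of an agent-side tool allow-list in Java. The tool names and the `ToolPolicy` API are invented for illustration and are not taken from any framework used in the labs; real setups would load the policy from a versioned file.

```java
import java.util.Set;

// Hypothetical policy gate: the agent consults a static allow-list before
// dispatching any tool call. Names here are illustrative only.
class ToolPolicy {
    // In practice this set would be loaded from a policy file under version control.
    private static final Set<String> ALLOWED_TOOLS =
            Set.of("search_docs", "summarize", "translate");

    static boolean isAllowed(String toolName) {
        return ALLOWED_TOOLS.contains(toolName);
    }

    static String invoke(String toolName, String input) {
        if (!isAllowed(toolName)) {
            // A real agent would also emit an audit-log event here.
            throw new SecurityException("Tool not permitted by policy: " + toolName);
        }
        // ... dispatch to the real tool implementation here ...
        return "invoked " + toolName;
    }
}
```

Denied calls fail closed, which is the property the governance lab aims for: the policy file, not the model's output, decides what the agent may execute.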
Outcome:
Participants finish the day with a secure, auditable, and policy-controlled AI system built from the ground up. They leave with practical experience defending agents, MCP servers, and model workflows—plus guidance for integrating security-by-design principles into future projects.
This condensed hands-on session provides developers and technical leaders with a practical foundation in AI system security — from understanding the unique attack surfaces of LLMs and agents to applying effective guardrails, validation, and monitoring.
Participants explore key security principles across LLM pipelines, agent architectures, and Model Context Protocol (MCP) environments.
Through five focused labs, attendees learn how to detect vulnerabilities, prevent data leakage, and implement safe execution patterns for AI-driven workflows.
By the end of the session, participants will have a working understanding of common AI attack vectors, defensive design patterns, and secure deployment practices for agents and MCP-based systems.
The workshop combines rapid conceptual overviews with practical, short labs:
1. Lab 1 – Understanding AI Threat Surfaces
Explore how AI systems differ from traditional apps: prompt injection, training data poisoning, model exfiltration, and output manipulation.
2. Lab 2 – Secure Prompt and Context Handling
Implement techniques for input sanitization, instruction filtering, and chain-of-thought isolation in LLM and agent pipelines.
3. Lab 3 – Guardrails and Policy Enforcement
Apply open-source guardrail frameworks (e.g., Guardrails.ai or LlamaGuard) to validate responses and prevent unsafe completions.
4. Lab 4 – Securing Agent Tool Use
Configure tools and connectors with least-privilege access and safe error handling. Examine how to restrict and audit agent actions.
5. Lab 5 – Securing MCP Interactions
Learn how to authenticate, authorize, and scope MCP server calls. Practice securing endpoints and preventing untrusted tool injection.
Outcome:
Participants leave with an actionable framework for assessing AI application risk, implementing safeguards, and integrating secure development practices into their LLM and agent workflows.
Learning and understanding AI concepts is satisfying and rewarding, but the fun part is learning how to work with AI yourself. In this half-day workshop, author, trainer, and experienced technologist Brent Laster will help you do both! We’ll explain why and how to run AI models locally, cover the basic ideas of agents and RAG, and show how to assemble a simple AI agent in Python that leverages RAG and uses a local model through Ollama. You'll follow along with hands-on labs and produce your own instance running in a GitHub Codespace.
In this workshop, we'll walk you through what it means to run models locally, how to interact with them, and how to use them as the brain for an agent. Then, we'll enable them to access and use data from a PDF via retrieval-augmented generation (RAG) to make the results more relevant and meaningful. And you'll do all of this hands-on in a ready-made environment with no extra installs required.
No prior experience with these technologies is needed, although we do assume a basic understanding of LLMs.
Attendees will need the following to do the hands-on labs:
Java has quietly grown into a more expressive, flexible, and modern language — but many developers haven’t kept up with the latest features. This two-part workshop explores the most useful additions to Java from recent releases, with hands-on examples and real-world scenarios.
Whether you’re still catching up from Java 8 or already using Java 21+, this series will give you a practical edge in writing cleaner, more modern Java code.
Topics include sealed classes, records, and switch expressions.
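As a taste of the material, here is a minimal sketch of sealed interfaces, records, and switch expressions working together. It assumes Java 21+ for pattern matching in switch, and the `Shape` types are invented for illustration.

```java
// Sealed interface + record implementations + an exhaustive switch expression.
class ShapeDemo {
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    static double area(Shape s) {
        // No default branch needed: the sealed hierarchy tells the compiler
        // that Circle and Rectangle are the only possible cases.
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
        };
    }
}
```

Adding a new permitted subtype turns the switch into a compile error rather than a silent runtime gap, which is exactly the safety the combination buys you.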
REST APIs often fall into a cycle of constant refactoring and rewrites, leading to wasted time, technical debt, and endless rework. This is especially difficult when you don't control the API clients.
But what if this could be your last major API refactor? In this session, we’ll dive into strategies for designing and refactoring REST APIs with long-term sustainability in mind—ensuring that your next refactor sets you up for the future.
You’ll learn how to design APIs that can adapt to changing business requirements and scale effectively without requiring constant rewrites. We’ll explore principles like extensibility, versioning, and decoupling, all aimed at future-proofing your API while keeping backward compatibility intact. Along the way, we’ll examine real-world examples of incremental API refactoring, where breaking the cycle of endless rewrites is possible.
This session is perfect for API developers, architects, and tech leads who are ready to stop chasing their tails and want to invest in designing APIs that will stand the test of time—so they can focus on building great features instead of constantly rewriting code.
Architectural decisions are often influenced by blindspots, biases, and unchecked assumptions, which can lead to significant long-term challenges in system design. In this session, we’ll explore how these cognitive traps affect decision-making, leading to architectural blunders that could have been avoided with a more critical, holistic approach.
You’ll learn how common biases—such as confirmation bias and anchoring—can cloud judgment, and how to counteract them through problem-space thinking and reflective feedback loops. We’ll dive into real-world examples of architectural failures caused by biases or narrow thinking, and discuss strategies for expanding your perspective and applying critical thinking to system design.
Whether you’re an architect, developer, or technical lead, this session will provide you with tools to recognize and mitigate the impact of biases and blindspots, helping you make more informed, thoughtful architectural decisions that stand the test of time.
Agile has become an overused and overloaded buzzword, so let's go back to first principles. Agile is the 12 principles. Agile is founded on fast feedback and embraces change. Agile is about making the right decisions at the right time while constantly learning and growing.
Architecture, on the other hand, seems to be the opposite. Famously described by Grady Booch as “the stuff that's hard to change,” architecture comes with overwhelming pressure to get it “right” early on, since the inevitable rework will be costly at best and fatal at worst. But too much complexity, too early, can be just as costly or fatal. A truly practical approach to agile architecture is long overdue.
This session introduces a new approach to architecture that enables true agility and unprecedented evolvability in the architectures we design and build. Whether you are already a seasoned architect or just beginning that path, this session will fundamentally change the way you think about and approach software architecture.
Security problems empirically fall into two categories: bugs and flaws. Roughly half of the problems we encounter in the wild are bugs, and about half are design flaws. A significant number of the bugs can be found through automated testing tools, which frees you up to focus on the more pernicious design issues.
In addition to detecting the presence of common bugs, however, we can also imagine automating the application of corrective refactoring. In this talk, I will discuss using OpenRewrite to fix common security issues and keep them from coming back.
In this talk we will focus on:
Using OpenRewrite to automatically identify and fix known security vulnerabilities.
Integrating security scans with OpenRewrite for continuous improvement.
Freeing up your time to address larger concerns by automating away the pedestrian but time-consuming security bugs.
One of the nice operational features of the REST architectural style as an approach to API design is that it allows for separate evolution of the client and server. Depending on the design choices a team makes, however, you may be putting a higher burden on your clients than you intend when you introduce breaking changes.
By taking advantage of the capabilities of OpenRewrite, we can start to manage the process of independent evolution while minimizing the impact. Code migration and refactoring can be used to transition existing clients away from older or deprecated APIs and toward new versions with less effort than trying to do it by hand.
In this talk we will focus on:
Managing API lifecycle changes by automating the migration from deprecated to supported APIs.
Discussing API evolution strategies and when they require assisted refactoring and when they don’t.
Integrating OpenRewrite into API-first development to keep client code up to date with ease.
When Eliyahu Goldratt wrote The Goal, he showed how local optimizations (like adding robots to a factory line) can actually decrease overall performance. Today, AI threatens to repeat that mistake in software. We’re accelerating coding without improving flow. In this talk, Michael Carducci explores what it means to architect for the goal: continuous delivery of value through systems designed for flow.
Drawing insights from Architecture for Flow, Domain-Driven Design, Team Topologies, and his own Tailor-Made Architecture Model, Carducci shows how to align business strategy, architecture, and teams around shared constraints and feedback loops. You’ll discover how to turn automation into advantage, orchestrate AI within the system of work, and build socio-technical architectures that evolve—not just accelerate.
As code generation becomes increasingly automated, our role as developers and architects is evolving. The challenge ahead isn’t how to get AI to write more code, it’s how to guide it toward coherent, maintainable, and purposeful systems.
In this session, Michael Carducci reframes software architecture for the era of intelligent agents. You’ll learn how architectural constraints, composition, and trade-offs provide the compass for orchestrating AI tools effectively. Using principles from the Tailor-Made Architecture Model, Carducci introduces practical mental models to help you think architecturally, communicate intent clearly to your agents, and prevent automation from accelerating entropy. This talk reveals how the enduring discipline of architecture becomes the key to harnessing AI—not by replacing human creativity, but by amplifying it.
Everyone’s talking about AI models, but almost no one is talking about the data architecture that makes them intelligent. Today’s AI systems are brittle because they lack context, semantics, and shared understanding. In this session, Michael Carducci explores how linked data, RDF, ontologies, and knowledge graphs solve the very problems that leave the industry floundering: hallucination, inconsistency, and lack of interoperability.
Drawing from real-world examples, Carducci connects decades of overlooked research in semantic web technologies to the challenges of modern AI and agentic systems. You’ll see how meaning itself can be modeled, linked, and reasoned over; and why the future of AI depends not on bigger models, but on smarter data.
Everyone’s talking about AI models, but almost no one is talking about the data architecture that makes them intelligent. Today’s AI systems are brittle because they lack context, semantics, and shared understanding. In this session, Michael Carducci explores how linked data, RDF, ontologies, and knowledge graphs solve the very problems that leave the industry floundering: hallucination, inconsistency, and lack of interoperability.
Drawing from real-world examples, Carducci connects decades of overlooked research in semantic web technologies to the challenges of modern AI and agentic systems. You’ll see how meaning itself can be modeled, linked, and reasoned over; and why the future of AI depends not on bigger models, but on smarter data.
Microservices architecture has become a buzzword in the tech industry, promising unparalleled agility, scalability, and resilience. Yet, according to Gartner, more than 90% of organizations attempting to adopt microservices will fail. How can you ensure you're part of the successful 10%?
Success begins with looking beyond the superficial topology and understanding the unique demands this architectural style places on the teams, the organization, and the environment. These demands must be balanced against the current business needs and organizational realities while maintaining a clear and pragmatic path for incremental evolution.
In this session, Michael will share some real-world examples, practical insights, and proven techniques to balance both the power and complexities of microservices. Whether you're considering adopting microservices or already on the journey and facing challenges, this session will equip you with the knowledge and tools to succeed.
Since 1994, the original Gang of Four book, “Design Patterns: Elements of Reusable Object-Oriented Software,” has helped developers recognize common patterns in development. The book's examples were originally written in C++, but many books since have translated the patterns into other languages. One feature of the Gang of Four patterns that has particularly stuck with me is testability: with the exception of Singleton, the patterns are all unit-testable. Design patterns are also our common developer language. When a developer says, “Let's use the Decorator pattern,” we know what is meant.
What's new, though, is functional programming, so we will also discuss how these patterns change in our new modern functional programming world. For example, functional currying in place of the builder pattern, using an enum for a singleton, and reconstructing the state pattern using sealed interfaces. We will cover so much more, and I think you will be excited about this topic and putting it into practice on your codebase.
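One possible rendering of those functional reworkings is sketched below. This is illustrative only, with invented example types, not the workshop's actual code: an enum singleton, currying in place of a builder, and a state pattern built on a sealed interface (assumes Java 21+).

```java
import java.util.function.Function;

class ModernPatterns {
    // Enum singleton: the JVM guarantees exactly one instance, lazily and safely.
    enum Config {
        INSTANCE;
        String get(String key) { return "value-for-" + key; }
    }

    // Currying in place of a builder: each "builder step" is one function application.
    static final Function<String, Function<Integer, String>> connection =
            host -> port -> host + ":" + port;

    // State pattern via a sealed interface: the set of states is closed,
    // so a switch over them is exhaustive with no default branch.
    sealed interface OrderState permits Placed, Shipped {}
    record Placed() implements OrderState {}
    record Shipped(String trackingId) implements OrderState {}

    static String describe(OrderState state) {
        return switch (state) {
            case Placed p -> "order placed";
            case Shipped s -> "shipped: " + s.trackingId();
        };
    }
}
```

Note that every piece here is trivially unit-testable, including the singleton, which is the through-line of the session.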
Setup Requirements:
If you do not have the following, or prefer not to use it, we can use GitHub Codespaces or Gitpod.io; both provide a VS Code instance online.
In this half-day workshop, we’ll practice Test-Driven Development (TDD) by solving a real problem step by step. You’ll learn how to think in tests, write clean code through refactoring, and use your IDE and AI tools effectively. We’ll also explore how modern Java features (like lambdas and streams) enhance testability, and discuss what’s worth testing — and what’s not.
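As a flavor of the exercise, a tiny red-green-refactor result might look like the sketch below. This is a hypothetical kata, not necessarily the workshop's actual problem; the assertions were written first, and this is the minimal implementation that makes them pass.

```java
import java.util.List;
import java.util.stream.IntStream;

class Fizz {
    // Written to satisfy tests that were authored first (test-driven).
    static String say(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }

    // Streams keep the driver code declarative, and say() stays a pure
    // function that is trivial to test in isolation.
    static List<String> upTo(int max) {
        return IntStream.rangeClosed(1, max).mapToObj(Fizz::say).toList();
    }
}
```

The split between a pure `say` and a thin stream-based `upTo` illustrates the workshop's point about what's worth testing: exhaustive cases for the logic, a single smoke test for the plumbing.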
If you ask the typical technologist how to build a secure system, they will include encryption in the solution space. While this is a crucial security feature, in and of itself, it is an insufficient part of the plan. Additionally, there are a hundred ways it could go wrong. How do you know if you're doing it right? How do you know if you're getting the protections you expect?
Encryption isn't a single thing. It is a collection of tools combined to solve problems of secrecy, authentication, integrity, and more. Sometimes those tools are deprecated because they no longer provide the protections that they once did. Technology changes. Attacks change. Who in your organization is tracking and validating your encryption strategy? How are quantum computing advancements going to change the game? No background will be assumed, and not much math will be shown.
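To ground the “collection of tools” point: authenticated encryption such as AES-GCM, available in the standard javax.crypto API, provides secrecy and integrity in one construction. A minimal round-trip sketch (illustrative only; real code would manage keys outside the method and never reuse a nonce with the same key):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

class GcmRoundTrip {
    static String roundTrip(String message) throws Exception {
        // Fresh 256-bit AES key (in practice, keys come from a key-management system).
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];              // 96-bit nonce; must never repeat per key
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = enc.doFinal(message.getBytes(StandardCharsets.UTF_8));

        // Decryption verifies the 128-bit authentication tag; tampering throws.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(dec.doFinal(ct), StandardCharsets.UTF_8);
    }
}
```

Even this short snippet embeds several of the decisions the session examines: cipher mode, nonce size, tag length, and key handling, any one of which can silently undermine the rest.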
If you are getting tired of the appearance of new types of databases… too bad. We are increasingly relying on a variety of data storage and retrieval systems for specific purposes. Data does not have a single shape, and indexing strategies that work for one are not necessarily good fits for others. So after hierarchical, relational, object, graph, column-oriented, document, temporal, append-only, and everything else, get ready for vector databases to assist in the systematization of machine learning systems.
This will be an overview of the benefits of vector databases as well as an introduction to the major players.
We will focus on open source versus commercial players, hosted versus local deployments, and the attempts to add vector search capabilities to existing storage systems.
We will cover:
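For a glimpse of the core operation a vector database optimizes, here is a brute-force nearest-neighbor search by cosine similarity. This is a sketch of the idea only; production systems replace the linear scan with approximate indexes such as HNSW or IVF.

```java
// Store embedding vectors, answer "which stored vector is closest to the query?"
class VectorSearch {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Linear scan over the whole index: O(n * d), the cost that real
    // vector databases exist to avoid at scale.
    static int nearest(double[][] index, double[] query) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < index.length; i++) {
            double score = cosine(index[i], query);
            if (score > bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }
}
```

Everything a dedicated vector database adds, approximate indexing, filtering, persistence, and hybrid keyword search, is layered on top of this one primitive.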