Speaker Topics - No Fluff Just Stuff

Securing LLMs: DevSecOps in the Age of AI

As enterprises rush to embed large language models (LLMs) into apps and platforms, a new AI-specific attack surface has emerged. Prompt injections, model hijacking, vector database poisoning, and jailbreak exploits aren’t covered by traditional DevSecOps playbooks.

This full-day, hands-on workshop gives architects, platform engineers, and security leaders a blueprint to secure AI-powered applications end to end. You’ll master the OWASP LLM Top 10, integrate AI-specific controls into CI/CD pipelines, and run live red-team vs. blue-team exercises to build real defensive muscle.

Bottom line: if your job involves deploying, securing, or governing AI systems, this workshop shows you how to do it safely—before attackers do it for you.

What You’ll Learn

  • Where LLM vulnerabilities arise—and how attackers exploit them
  • How to apply the OWASP LLM Top 10 to enterprise pipelines
  • Building AI-specific guardrails: input sanitization, output filters, role controls
  • Embedding AI-aware scans and tests into GitHub/GitLab CI/CD workflows
  • Securing RAG systems, vector databases, and multi-agent environments
  • Red-team tactics (prompt injection, vector poisoning) and defensive countermeasures
  • Metrics and frameworks to prove AI security posture to executives and regulators

Who Should Attend

  • Software Architects designing AI-powered systems
  • Platform Engineers & DevSecOps Leads embedding LLMs into pipelines
  • Security Engineers assessing AI attack surfaces
  • CTOs, CISOs & Product Owners accountable for safety, trust, and compliance

Takeaways

  • OWASP LLM Top 10 → Mitigation Playbooks
  • Templates with AI-aware guardrails
  • Risk scoring model for AI attack surfaces
  • Red-/Blue-team lab scripts to rerun in your org
  • Executive briefing deck to align security with compliance & business impact

Agenda

Module 1 – The New AI Attack Surface

  • Anatomy of an LLM-powered app (prompts, RAG, embeddings, agents)
  • Why traditional DevSecOps misses AI-native risks
  • Mapping AI threats to enterprise trust boundaries

Module 2 – OWASP LLM Top 10 Deep Dive

  • Prompt injection & jailbreak exploits
  • Training data leakage & poisoning
  • Excessive agency in autonomous agents
  • Vector database & plugin/toolchain exploits
  • Model theft, shadow prompting, and output handling flaws
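
To make the injection risks above concrete, here is a minimal, hypothetical input filter of the kind a defense-in-depth pipeline might start with. The pattern list and function name are illustrative assumptions, not the workshop's lab code; real deployments layer ML classifiers and allow-lists on top of anything this simple.

```python
import re

# Illustrative patterns only; production systems pair naive filters like
# this with ML-based classifiers and strict output contracts.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?developer mode",
    r"reveal (your )?(hidden )?system prompt",
]

def is_suspicious(user_input: str) -> bool:
    """Flag inputs that match known prompt-injection phrasings."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

# A classic jailbreak opener trips the filter; a benign question does not.
is_suspicious("Please ignore all previous instructions.")  # matches
is_suspicious("What is the capital of France?")            # does not
```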

Module 3 – DevSecOps Patterns for LLMs

  • Designing input/output filters and schema validation
  • Prompt fuzzing, red teaming, and adversarial testing
  • Embedding AI guardrails into GitHub/GitLab CI/CD workflows
  • AI firewalls, inference governance, and runtime monitoring
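
As one sketch of the output-filtering and schema-validation pattern, the snippet below rejects any model response that doesn't conform to a fixed JSON contract. The field names and types are assumptions chosen for illustration.

```python
import json

# Hypothetical output contract: every model response must be JSON
# containing exactly these fields with these types.
REQUIRED_FIELDS = {"answer": str, "citations": list}

def validate_output(raw: str) -> dict:
    """Parse a model response and enforce the expected schema,
    dropping any extra fields the model invented."""
    data = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    # Return only the contracted fields.
    return {field: data[field] for field in REQUIRED_FIELDS}
```

Validating structure before any downstream tool call is what keeps a manipulated model from smuggling instructions or payloads past the application layer.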

Module 4 – Real-World Threat Simulations

  • Live prompt injection on an AI agent
  • Poisoning a vector database to manipulate RAG retrieval
  • Detection strategies for abnormal prompts and outputs
  • Hands-on ethical hacking tools for LLMs
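
One crude detection idea for vector poisoning can be sketched with nothing but cosine similarity: flag embeddings that sit far from the corpus centroid. The threshold and function names here are illustrative assumptions, not the workshop's lab scripts.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_outliers(vectors, centroid, threshold=0.5):
    """Return indices of embeddings suspiciously far from the centroid,
    a naive heuristic for spotting possibly poisoned entries."""
    return [i for i, v in enumerate(vectors)
            if cosine(v, centroid) < threshold]

# A vector pointing away from the centroid is flagged; a near-duplicate is not.
flag_outliers([[1.0, 0.1], [-1.0, 0.0]], centroid=[1.0, 0.0])
```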

Module 5 – Business Impact & Mitigation Framework

  • Risk scoring and prioritization for AI systems
  • Aligning AI security with KPIs: trust, uptime, compliance, brand protection
  • NIST AI RMF, ISO/IEC 42001, and EU AI Act readiness
  • Delivering an executive-ready AI security scorecard
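
A weighted-factor risk score of the kind discussed here might, in toy form, look like the following; the factor names and weights are invented for illustration, and a real model would be calibrated to the organization's threat profile.

```python
# Hypothetical risk factors; weights are assumptions and must sum to 1.0.
WEIGHTS = {"exposure": 0.40, "data_sensitivity": 0.35, "agency": 0.25}

def risk_score(ratings: dict) -> float:
    """Combine 0-10 factor ratings into a single 0-10 risk score."""
    return round(sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS), 2)

# An internet-facing agent handling sensitive data with broad tool
# access scores near the top of the scale.
risk_score({"exposure": 9, "data_sensitivity": 8, "agency": 10})
```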

About Rohit Bhardwaj

Rohit Bhardwaj is a Director of Architecture at Salesforce. Rohit has extensive experience architecting multi-tenant, cloud-native solutions built on resilient microservice and service-oriented architectures using the AWS stack. In addition, Rohit has a proven record of designing solutions and delivering transformational programs that reduce costs and increase efficiency.

As a trusted advisor, leader, and collaborator, Rohit applies problem-resolution, analytical, and operational skills to every initiative, developing strategic requirements and solution analysis through all stages of the project life cycle, from product readiness to execution.
Rohit excels at designing scalable cloud microservice architectures using Spring Boot and Netflix OSS technologies on AWS and Google Cloud. As a security ninja, Rohit looks for ways to resolve application security vulnerabilities through ethical hacking and threat modeling. Rohit is excited about architecting cloud solutions using Docker, Redis, NGINX, RightScale, RabbitMQ, Apigee, Azul Zing, Actuate BIRT reporting, Chef, Splunk, Rest-Assured, SoapUI, Dynatrace, and EnterpriseDB. In addition, Rohit has developed lambda-architecture solutions using Apache Spark, Cassandra, and Camel for real-time analytics and integration projects.

Rohit holds an MBA in Corporate Entrepreneurship from Babson College and a Master's in Computer Science from Boston University and Harvard University. Rohit is a regular speaker at No Fluff Just Stuff, UberConf, RichWeb, GIDS, and other international conferences.

Rohit loves to connect at http://www.productivecloudinnovation.com, on LinkedIn at http://linkedin.com/in/rohit-bhardwaj-cloud, or on Twitter at rbhardwaj1.
