
SECURITY EDITION
April 10, 2026
Computer History Museum
00
Days
00
Hours
00
Minutes
00
Seconds
About AWS Community Day
Security isn't just a "nice to have"—it's the foundation of everything we build.
The cloud is moving faster than ever. With the rise of Generative and Agentic AI, the stakes for security have never been higher. A single misconfiguration in your Amazon Bedrock setup or an over-privileged AI agent can lead to a data breach at machine speed.
Join us at the AWS Community Day Bay Area for a day dedicated to education, awareness, and community. We're moving past the hype and into the utility phase—learning how to build secure-by-design AI workloads and leveraging AI-driven tools to protect our infrastructure.
Why you should be at the AWS Community Day
Security for AI: Learn to harden your AI workloads and secure the LLM supply chain.
AI for Security: See how autonomous agents and Amazon Q help detect and remediate threats before they escalate.
Expert-Led Learning: Dive into technical workshops and hands-on labs that go beyond the basics.
The Bay Area Network: Connect with industry leaders and fellow enthusiasts who are building the next generation of the cloud.
Whether you're an experienced architect or a cloud newcomer, join us to make your skills—and our world—smarter, safer, and more connected.

Christopher Rae: Head of AI Security Go-to-Market, AWS
Christopher Rae leads AI Security Go-to-Market for the AWS Worldwide Specialist Organization, where he defines global strategy for securing AI workloads and advancing AI-powered security capabilities. His work focuses on helping customers adopt AI on AWS securely by embedding secure-by-design and defense-in-depth principles across services such as Amazon Bedrock, Amazon SageMaker, Amazon Q, and open-source AI solutions.
With deep expertise spanning cybersecurity, artificial intelligence, and emerging technologies, Christopher brings a rare blend of technical architecture and business strategy. He is a frequent advisor, speaker, and thought leader on AI security, engaging with executive leadership, field teams, and the broader community to turn security into a competitive advantage while enabling innovation at scale.

Sarah Currey: Principal Practice Manager for AWS Security
Sarah Currey is a Principal Practice Manager for AWS Security, where she works closely with AWS leadership to shape and strengthen security practices, culture, and strategy across the organization. Partnering directly with the AWS Security VP, Sarah focuses on building long-term security programs that protect customers and internal teams while fostering a blame-free, learning-driven security culture.
Her work spans three core areas: developing forward-looking security strategy and leadership capability, building scalable mechanisms that improve security readiness and resilience, and driving meaningful community impact through security initiatives and sponsorships. With a deep commitment to continuous improvement and innovation, Sarah brings a practical, human-centered perspective to security that resonates far beyond technology alone.

Anton Babenko
Betajob
Streamlining Compliance: Leveraging Open-Source Terraform AWS modules [Advanced]
Are you navigating the complexities of compliance frameworks like SOC2, CIS, and HIPAA and seeking a more efficient path? This talk breaks down these frameworks simply and shows you a time-saving trick, making it perfect for anyone wanting to make their organization's compliance journey much easier. I'll start by outlining the basics of these frameworks and highlighting the challenges businesses face in implementing them. As the creator and maintainer of the terraform-aws-modules projects, I'll be excited to share how using these open-source Terraform AWS modules can streamline the compliance process. I'll walk you through real-life examples showing how such solutions significantly reduce the effort and time required for compliance. At the end of the talk, attendees will get actionable insights on using Terraform AWS modules for efficient compliance management.

Ishneet Kaur Dua
Senior Solutions Architect @AWS
Securing Large Language Models: Best Practices for Prompt Engineering and Mitigating Prompt Injection Attacks [Beginner]
The rapid adoption of large language models (LLMs) in enterprise IT environments has introduced new challenges in security, responsible AI, and privacy. One critical risk is the vulnerability to prompt injection attacks, where malicious actors manipulate input prompts to influence the LLM's outputs and introduce biases or harmful outcomes. This guide outlines security guardrails for mitigating prompt engineering and prompt injection attacks. The authors present a comprehensive approach to enhancing the prompt-level security of LLM-powered applications, including robust authentication mechanisms, encryption protocols, and optimized prompt designs. These measures aim to significantly improve the reliability and trustworthiness of AI-generated outputs, while maintaining high accuracy for non-malicious queries. The proposed security guardrails are compatible with various model providers and prompt templates, but require additional customization for specific models. By implementing these best practices, organizations can instill higher trust and credibility in the use of generative AI-based solutions, maintain uninterrupted system operations, and enable in-house data scientists and prompt engineers to uphold responsible AI practices.

Manas Satpahti
Principal Technical Account Manager @ AWS
Simplify Security Events Log Analysis with Amazon Q [Advanced]
Discover how to build security-focused applications with Amazon Q to analyze AWS accounts for compliance and vulnerabilities. Use automation to centralize security logs and events from AWS services, partner solutions, and open-source tools, and analyze using an intuitive chatbot interface. Through practical examples, explore how Generative AI enhances security analysis, delivering a richer experience with queries in natural language.

Parth Girish Patel
Sr AI/ML Architect @ AWS
Securing Large Language Models: Best Practices for Prompt Engineering and Mitigating Prompt Injection Attacks [Beginner]
The rapid adoption of large language models (LLMs) in enterprise IT environments has introduced new challenges in security, responsible AI, and privacy. One critical risk is the vulnerability to prompt injection attacks, where malicious actors manipulate input prompts to influence the LLM's outputs and introduce biases or harmful outcomes. This guide outlines security guardrails for mitigating prompt engineering and prompt injection attacks. The authors present a comprehensive approach to enhancing the prompt-level security of LLM-powered applications, including robust authentication mechanisms, encryption protocols, and optimized prompt designs. These measures aim to significantly improve the reliability and trustworthiness of AI-generated outputs, while maintaining high accuracy for non-malicious queries. The proposed security guardrails are compatible with various model providers and prompt templates, but require additional customization for specific models. By implementing these best practices, organizations can instill higher trust and credibility in the use of generative AI-based solutions, maintain uninterrupted system operations, and enable in-house data scientists and prompt engineers to uphold responsible AI practices.

Peter Sankauskas
AWS Community Hero @ Answers for AWS
Everything you didn't want to know about IAM [Beginner]
If you have used AWS, you have seen an error message stating "x is not authorized to perform y". This is a annoying fact. But how do you solve these? In this talk, Peter will walk though how IAM is designed, different types of policies and when they are useful. You will leave with techniques for understanding and debugging those access issues you wish didn't exist.

Sandeep Mohanty
Sr. Solutions Architect @ AWS
Simplify Security Events Log Analysis with Amazon Q [Advanced]
Discover how to build security-focused applications with Amazon Q to analyze AWS accounts for compliance and vulnerabilities. Use automation to centralize security logs and events from AWS services, partner solutions, and open-source tools, and analyze using an intuitive chatbot interface. Through practical examples, explore how Generative AI enhances security analysis, delivering a richer experience with queries in natural language.

Satish Jipster
Security specialist at SNOW Upgrade
Securing Generative AI applications using AWS Services [Business Focused]
Securing generative AI applications using AWS services involves implementing robust strategies to protect data, models, and infrastructure. This presentation explores how AWS tools like Identity and Access Management (IAM), AWS Key Management Service (KMS), and Amazon SageMaker enable secure model development, training, and deployment. Topics include safeguarding sensitive data with encryption, ensuring network security through Virtual Private Clouds (VPCs), and mitigating threats using services like AWS Shield and AWS WAF. Best practices for monitoring AI workloads with Amazon CloudWatch and addressing compliance requirements through AWS Audit Manager will also be discussed. Attendees will gain actionable insights to build and maintain secure, scalable, and resilient generative AI applications on AWS.

Shivansh Singh
Technical Leader, AWS Solutions Architecture
Creating secure code with Amazon Q Developer [Beginner]
In this session you will learn how to use Amazon Q Developer to create secure code. Write unit tests, optimize code, and scan for vulnerabilities, and discover how Amazon Q Developer suggests remediations that help fix your code instantaneously. Also, learn how you can use Amazon Q Developer security scanning to outperform other publicly benchmarkable tools on detection across popular programming languages.

Teri Radichel
Founder/ Principal Pentester, Researcher, Author
Threat Modeling a Batch Job System on AWS [Advanced]
I’ve been blogging about building a batch job system on AWS for about two years now as time allows, documented at https://medium.com/cloud-security/automating-cybersecurity-metrics-890dfabb6198. Initially I was “just” going to quickly show how to use batch jobs to run tools to analyze security in AWS accounts. For example, I run Prowler and other proprietary tools on AWS penetration tests and I can run those tools as batch jobs. But it turned into a much bigger endeavor as I considered how to deploy and run those jobs ~ securely ~ in a production environment. In this presentation, I’ll walk through some of the threats, mitigations, and I’ll talk about some unpublished developments.
| Time | Session Details | |||
|---|---|---|---|---|
Morning Sessions | ||||
08:00 AM - 4:00 PM | Badge pick up, Assisted Registration, Information Desk - Grand Lobby | |||
08:30 AM - 09:20 AM 50 minutes | Breakfast and Networking - Grand Hall Closes 10 minutes before Keynote. | |||
09:30 AM - 10:00 AM 30 minutes | Welcome, Introductions and Sponsors Parade - John Varghese - AWS Hero - Hahn Auditorium | |||
10:00 AM - 10:45 AM 45 minutes | Keynote - Everything starts with Security - Christopher Rae, Sarah Currey - Hahn Auditorium | |||
10:45 AM - 11:15 AM 30 minutes | Tea/coffee break and Networking - Grand Hall Sponsored by AWS | |||
Tracks | Hahn Auditorium | Lovelace | Boole | Glass rooms |
11:15 AM - 11:45 AM 30 minutes | Builder Cards --Shivansh Singh | |||
11:50 AM - 12:30 PM 40 minutes | ||||
12:20 PM - 1:20 PM 1 hour | Lunch and Networking - Grand Hall SPONSORS WANTED!! Also Brain Date | |||
Post Lunch Sessions | ||||
Tracks | Hahn Auditorium | Lovelace | Boole | Brain Date topics |
1:30 PM - 1:55 PM 25 minutes | Threat Modeling Batch Jobs --Teri Radichel | Securing GenAI with AWS --Satish Jipster | Brain Date --Conference Attendees | |
2:00 PM - 2:35 PM 35 minutes | Terraform for Compliance --Anton Babenko | |||
2:30 PM - 2:55 PM 25 minutes | Afternoon Tea break SPONSORS WANTED!! Also Brain Date | |||
Tracks | Hahn Auditorium | Lovelace | Boole | Glass rooms |
3:00 PM - 3:25 PM 25 minutes | Securing LLMs --Parth Patel and Ishneet Dua | Open Discussion | ||
3:30 PM - 3:55 PM 25 minutes | Security Log Analysis with Amazon Q --Manas Satpathi & Sandeep Mohanty | IAM Deep Dive --Peter Sankauskas | Secure Code with Amazon Q --Shivansh Singh | Open Discussion |
3:55 PM - 4:05 PM 10 minutes | Raffle & Closing Note - Hahn Auditorium | |||

AWS
Amazon Web Services (AWS) is the secure foundation for the global cloud, providing over 200 fully featured services designed to meet the most stringent security requirements of the world's leading organizations. For this Security Edition, AWS is highlighting the shift to autonomous defense, featuring new AI Security Agents and the Amazon Bedrock AgentCore to proactively neutralize threats before they reach production. By integrating zero-trust architectures and automated remediation into every layer of the stack, AWS empowers the community to innovate with "Shielded Velocity," ensuring that the fastest-growing startups and largest enterprises alike remain secure by design.

Intel AI
Intel offers comprehensive AI solutions through its Tiber™ AI Cloud and Tiber™ AI Studio, providing cutting-edge hardware and software platforms for scalable AI development and deployment. These services enable enterprises to efficiently build, optimize, and manage AI models across various industries, leveraging Intel’s advanced CPUs, GPUs, and AI accelerators. With a focus on reducing complexity and enhancing productivity, Intel empowers organizations to harness AI’s full potential.

Workato
Workato is the first Production-Ready Agentic Hub, offering a battle-tested implementation of the Model Context Protocol (MCP) that allows AI agents to securely trigger actions across 1,200+ systems today. While legacy platforms are still navigating "agentic roadmaps," Workato’s Agent Studio already powers autonomous workflows with built-in identity propagation and "secure-by-default" encryption. By transforming static APIs into intelligent, callable skills in minutes, Workato is winning the race to define the infrastructure of the AI-driven enterprise.

Sonrai Security
Sonrai Security delivers the industry’s only Cloud Permissions Firewall, moving beyond passive alerts to provide instant, one-click least privilege enforcement across multi-cloud environments. By automatically neutralizing 99% of unused identities and toxic permissions without breaking DevOps workflows, Sonrai eliminates the "identity debt" that legacy CSPM tools leave behind. Trusted by the Fortune 100, Sonrai ensures that sensitive data stays invisible to attackers while empowering developers to innovate at the speed of the cloud.

NOVAworks
NOVAworks is at the heart of our community’s workforce success, offering free, personalized career navigation and training services to individuals 17 and up in San Mateo and northern Santa Clara counties. We don’t just connect people with jobs—we connect them with opportunities to thrive. We fund internships that spark careers, advanced training that empowers workers to reimagine their futures, and innovative workforce solutions that fuel local businesses and communities.
Computer History Museum
1401 N Shoreline Blvd,
Mountain View, CA 94043