About the Instructor

Prashant Kulkarni
Lead AI Security Research Engineer, Google Cloud
Master of Science in Applied Data Science, University of Chicago
Professional Background
With over 20 years of cybersecurity experience, Prashant Kulkarni is a leading expert in AI security and machine learning safety. As a Lead AI Security Research Engineer at Google Cloud, he focuses on securing modern AI systems, including large language models, and collaborates extensively with customers on AI system security.
His expertise spans the intersection of cybersecurity and artificial intelligence, positioning him to teach trustworthy machine learning principles that are both theoretically sound and practically applicable in today's rapidly evolving AI landscape.
Research & Expertise
Prashant's research interests and professional focus areas include:
🛡️ AI Security
- Securing large language models and generative AI systems
- Threat modeling for AI/ML pipelines
- Security-by-design for AI applications
🔒 Adversarial Defenses
- Robust training techniques against adversarial attacks
- Certified defense mechanisms
- Real-world robustness evaluation
🔐 Privacy-Preserving ML
- Differential privacy in production systems
- Federated learning security
- Privacy-utility trade-offs in AI systems
⚖️ Ethical AI Deployment
- Responsible AI practices in enterprise environments
- Bias detection and mitigation at scale
- AI governance and compliance frameworks
Teaching Philosophy
"Simplifying complex concepts and fostering hands-on learning, enabling participants to grasp intricate topics in responsible AI"
Prashant is passionate about empowering adult learners and believes in making complex technical concepts accessible through:
- Practical Application: Every theoretical concept is paired with hands-on implementation
- Real-World Context: Examples drawn from actual industry challenges and solutions
- Interactive Learning: Encouraging questions, discussions, and collaborative problem-solving
- Industry Relevance: Focus on skills and knowledge directly applicable to professional work
Academic Credentials
Master of Science in Applied Data Science
University of Chicago
Prashant's advanced degree in Applied Data Science provides him with both the theoretical foundation and practical experience necessary to teach the mathematical and statistical underpinnings of trustworthy ML while maintaining focus on real-world applications.
Professional Experience
Current Role: Lead AI Security Research Engineer
Google Cloud | Present
- Leads research initiatives in AI security and safety
- Develops security frameworks for enterprise AI deployments
- Collaborates with product teams on secure AI system design
- Works directly with customers on AI security implementations
20+ Years in Cybersecurity
Throughout his career, Prashant has:
- Developed security solutions for complex, large-scale systems
- Led teams in implementing robust security practices
- Advised organizations on emerging security threats
- Published research on cybersecurity and AI safety
Course: Trustworthy Machine Learning
UCLA Extension | Fall 2025
Prashant teaches Trustworthy Machine Learning at UCLA Extension, providing a comprehensive introduction to building AI systems that are fair, robust, transparent, and secure.
The course combines:
- Theoretical Foundations: Mathematical frameworks for trustworthy AI
- Practical Implementation: Hands-on labs with industry-standard tools
- Real-World Case Studies: Examples from actual deployments and failures
- Current Research: Latest developments in the field
Course Highlights
- Small class sizes for personalized attention
- Industry-relevant assignments and projects
- Guest speakers from leading tech companies
- Networking opportunities with professionals in the field
Industry Impact
Prashant's work has contributed to:
- Secure AI Frameworks used by enterprise customers at Google Cloud
- Best Practices for AI security in production environments
- Research Publications on trustworthy AI and security
- Training Programs for AI practitioners and security professionals
Public Talks & Presentations
Prashant regularly shares his expertise through speaking engagements at conferences, workshops, and industry events:
Featured Presentations
Federated Learning in Production: Security & Privacy Challenges
Flower AI Summit | 2025
Deep dive into security and privacy considerations when deploying federated learning systems at scale on Google Cloud. Prashant demonstrates how to fine-tune Gemma 3 using Flower on GKE.
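For readers curious what such a demo involves, the sketch below shows the generic Flower client pattern used for federated fine-tuning. It is not code from the talk: the tiny PyTorch model and random tensors stand in for Gemma 3 and a real dataset, and the server address is a placeholder for the Flower server that would run on GKE.

```python
"""Illustrative Flower client sketch for federated fine-tuning (not the talk's code)."""
from collections import OrderedDict

import flwr as fl
import torch
import torch.nn as nn


class TinyModel(nn.Module):
    """Placeholder model; the real demo would load Gemma 3 instead."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)


class FineTuneClient(fl.client.NumPyClient):
    def __init__(self, model):
        self.model = model
        # Placeholder local data; each federated client would hold its own shard.
        self.x = torch.randn(64, 16)
        self.y = torch.randint(0, 2, (64,))

    def get_parameters(self, config):
        # Serialize model weights as NumPy arrays for the Flower server.
        return [v.cpu().numpy() for v in self.model.state_dict().values()]

    def set_parameters(self, parameters):
        keys = self.model.state_dict().keys()
        state = OrderedDict({k: torch.tensor(v) for k, v in zip(keys, parameters)})
        self.model.load_state_dict(state, strict=True)

    def fit(self, parameters, config):
        # Receive global weights, run a short local training pass, return updates.
        self.set_parameters(parameters)
        opt = torch.optim.SGD(self.model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(5):
            opt.zero_grad()
            loss = loss_fn(self.model(self.x), self.y)
            loss.backward()
            opt.step()
        return self.get_parameters(config), len(self.x), {}

    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        with torch.no_grad():
            loss = nn.CrossEntropyLoss()(self.model(self.x), self.y).item()
        return loss, len(self.x), {}


if __name__ == "__main__":
    # Hypothetical address; on GKE this would point at the Flower server service.
    fl.client.start_numpy_client(
        server_address="127.0.0.1:8080", client=FineTuneClient(TinyModel())
    )
```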
Agentic AI Security
UCLAx Open | 2025
Comprehensive overview of security challenges in agentic AI systems, covering threat models, attack vectors, and defense strategies for autonomous AI agents. Explores the unique security considerations when AI systems can take actions in the real world.
Speaking Topics
- AI Security & Safety: Securing large language models and generative AI systems
- Trustworthy ML in Enterprise: Practical implementation strategies
- Adversarial Defense: Real-world robustness challenges and solutions
- Privacy-Preserving AI: Differential privacy and federated learning
- Regulatory Compliance: Navigating AI governance frameworks
Upcoming Engagements
- Check back for upcoming speaking opportunities and conference presentations
For speaking requests and availability, please contact through professional channels.
Connect & Learn More
Professional Profiles
- LinkedIn: Connect with Prashant for professional updates and insights
- Google Scholar: Follow research publications
- GitHub: View open-source contributions and AI security implementations
Speaking & Consulting
Prashant is available for:
- Conference presentations on AI security and trustworthy ML
- Potential research collaborations
- Guest lectures at academic institutions
Course Information
Interested in learning from Prashant?
📚 Enroll in Trustworthy Machine Learning at UCLA Extension
- Format: Online only
- Duration: 11-week intensive course
- Prerequisites: Foundational statistics, calculus, linear algebra, ML knowledge (deterministic models and neural networks), Python programming
- Certification: UCLA Extension certificate upon completion
Research & Publications
Prashant's research contributions span cybersecurity, AI safety, and trustworthy machine learning. His work bridges the gap between theoretical advances and practical implementations in enterprise environments.
Research Areas
- AI Security & Safety: Securing large language models and generative AI systems
- Adversarial Machine Learning: Robust defenses against adversarial attacks
- Privacy-Preserving ML: Differential privacy and federated learning
- Trustworthy AI: Fairness, accountability, and transparency in AI systems
Selected Publications
Google Scholar Profile
For a complete list of publications and citation metrics, visit Prashant's Google Scholar profile.
Recent Research Focus:
- Security frameworks for enterprise AI deployments
- Security of Large Language Models and Agentic AI
- Practical implementations of differential privacy
- Adversarial robustness in production ML systems
- Bias detection and mitigation at scale
Research Impact
- Developed security guidelines adopted by industry practitioners
- Contributed to open-source tools for trustworthy AI
- Published research on practical AI safety implementations
- Collaborated with academic institutions on AI security research
For speaking engagements, consulting inquiries, or course-related questions, please use the contact information available through UCLA Extension or connect via professional networks.