June 25, 2024 The Hibernia San Francisco, CA
CONNECT + COLLABORATE ON EVOLVING AI QUALITY
What is AIQCON?
AIQCon is not your average AI conference—we’re not here to gatekeep or bore you with the same conversations you’ve already had.
Our goal is to harness the power of our community to develop new GOLD STANDARDS for AI quality, bringing together renowned leaders and professionals from across the industry to build reliable AI quality solutions and standards. With talks led by top industry speakers who understand the problems you’re facing daily, space for authentic networking, and entertainment you’ll actually enjoy, we guarantee this conference will be the most fun you’ve ever had while working.
Whether you’re leading teams or an individual contributor, we know you’ll walk away with a deeper knowledge of evolving AI and tools on the market, along with new friends and collaborators to add to your network.
ENGAGE WITH AI/ML PROS FROM:
Featured Speakers
Real Practitioners. Real Stories.
WHY ATTEND AIQCON
CONNECT
Everyone is welcome! The world of machine learning can be intimidating, so we invite you to come as you are—hand over imposter syndrome to the computers. Come make new friends and build your professional network with folks who are as passionate as you are.
LEARN + INNOVATE
Create your own journey through three comprehensive, specialized tracks focused on rigorous, scalable, high-quality AI. Come together with speakers and fellow attendees, feel inspired to approach old problems in new ways, and share hard-earned lessons. Bring your notebooks for in-person collaboration and problem-solving!
HAVE FUN
We’re serious about learning and business, but we aren’t serious people, so we’ve built real opportunities for fun into the agenda. Lighten up with a stand-up comedy set, grab a guitar to get your creative juices flowing, and mix and mingle with new connections.
VALUE
It’s our responsibility to host in-person events that deliver value you can’t replicate online. For AIQCon, we’ve curated a comprehensive agenda that goes deep on evolving AI quality in one jam-packed day. Meet and engage with speakers who understand the problems you’re facing daily, and learn firsthand about products that can help you achieve your quality goals. You’ll even get access to recorded sessions after the conference.
“Amazing event! Thank you for all this ultra-high density of knowledge and passion on the topic!!!” - Isabela C.
JAM-PACKED AGENDA
INNOVATIVE TALKS + PANELS
- Main Hall
- Autonomous & Robotics
- Foundational/LLM/GenAI
10:00 AM - 10:30 AM
NEW QUALITY STANDARDS FOR AUTONOMOUS DRIVING
Mo Elshenawy, CTO & President, Cruise
A fireside chat featuring Mo Elshenawy, President and CTO of Cruise Automation, and Mohamed Elgendy, CEO and Co-founder of Kolena. In this discussion, Mo will delve into the comprehensive AI philosophy that drives Cruise Automation, sharing unique insights into how Cruise is developing its quality standards from the ground up, with a particular focus on defining and achieving “perfect” driving. This fireside chat offers valuable perspectives on the rigorous processes involved in teaching autonomous vehicles to navigate with precision and safety.
-
10:30 AM - 11:00 AM
PANEL: AI AND GOVERNMENT REGULATION
Gerrit De Vynck
Tech Reporter, The Washington Post
Gerrit De Vynck of The Washington Post will moderate a panel delving into NIST, government-implemented standards, and their roles in developing AI.
-
11:00 AM - 11:30 AM
AI QUALITY STANDARDS
Gordon Hart
CPO & Co-founder, Kolena
-
11:30 AM - 12:00 PM
THE FUTURE OF TRUST IN LLMS
Richard Socher
CEO & Founder, You.com
Richard Socher, CEO and founder of You.com and AIX Ventures, will share insights from his decade-long journey in AI and NLP: from the invention of prompt engineering to founding You.com, an AI assistant that was the first to integrate an LLM with live web access for accurate, up-to-date answers with citations. Richard will discuss tackling the biggest challenges facing LLMs, from hallucinations to generic responses, and offer insight into how these advancements could be adopted by other LLM-based platforms.
-
12:00 PM - 12:30 PM
PANEL: THE DOLLARS AND CENTS BEHIND THE AI VC BOOM
Natasha Mascarenhas, Reporter, The Information
Natasha will moderate a panel of leading VCs who have backed the top AI companies and understand the correction within the boom. They’ll cover the flight to quality, what happens when OpenAI eats your lunch, how founders should think about giving big tech a spot on their cap tables, and, more generally, how to invest at the speed of innovation right now.
-
2:00 PM - 2:30 PM
Setting the Standard: Safety and Quality Benchmarks for Autonomous Systems
-
2:30 PM - 3:00 PM
EIGHTY-THOUSAND POUND ROBOTS: AI DEVELOPMENT AND DEPLOYMENT AT KODIAK SPEED
Collin Otis
Director of Autonomy, Kodiak Robotics
Kodiak is on a mission to automate the driving of every commercial vehicle in the world. Today, Kodiak operates a nationwide autonomous trucking network 24x7x365: on the highway, in the dirt, and everywhere in between. We also release and deploy software about 30 times per day across this fleet—software that is not just mission-critical but safety-critical. Our AI development process must match this criticality and speed, providing fast engineering iteration while guaranteeing the high level of quality that safety requires. In this talk, we’ll share the details of that process, from how the system is architected, trained, and evaluated to the validation CI/CD pipeline that is the lifeblood of the development flywheel. We’ll talk about how we collect cases, how we iterate on models, and how we handle quality assurance, data, and release management, all in a way that seamlessly keeps our robots truckin’ across the US.
-
3:00 PM - 3:30 PM
The Future of Simulation: Building Virtual Worlds for AI Training and Testing
-
4:00 PM - 4:30 PM
Workshop: Building a Robust Testing Framework for Your Autonomous System
-
4:30 PM - 5:00 PM
Avoiding Self-Driving Disasters: Lessons Learned
-
1:00 PM - 1:30 PM
TO RAG OR NOT TO RAG?
AMR AWADALLAH, CEO, Co-Founder, Vectara
Retrieval-Augmented Generation (RAG) is a powerful technique for reducing hallucinations from Large Language Models (LLMs) in GenAI applications. However, large context windows (e.g., 1M tokens for Gemini 1.5 Pro) can be a potential alternative to the RAG approach. This talk contrasts both approaches and highlights when a large context window is a better option than RAG, and vice versa.
-
1:30 PM - 2:00 PM
THE ERA OF GENERATIVE AI
LUKAS BIEWALD, Cofounder & CEO, Weights & Biases
Weights & Biases CEO and Co-Founder Lukas Biewald will share his perspective on the generative AI industry: where we’ve come from, where we are today, and where we’re headed.
-
2:00 PM - 2:30 PM
PANEL: A BLUEPRINT FOR SCALABLE & RELIABLE ENTERPRISE AI/ML SYSTEMS
Hira Dangol
VP AI/ML & Automation, Bank of America
Rama Akkiraju
VP Enterprise AI/ML, NVIDIA
Nitin Aggarwal
Head of AI Services, Google
Steven Eliuk
VP AI & Governance, IBM
Enterprise AI leaders continue to explore productivity solutions that solve business problems, mitigate risks, and increase efficiency. Building reliable and secure AI/ML systems requires following industry standards, an operating framework, and best practices that accelerate and streamline scalable architectures capable of producing the expected business outcomes.
This session, featuring veteran practitioners, focuses on building scalable, reliable, quality AI and ML systems for the enterprise.
- An operating framework for AI/ML use cases
- Standards and best practices in building scalable and automated AI systems
- Governance workflows, modernization tools, and the total experience journey
-
2:30 PM - 3:00 PM
IF YOU LIKE SENTENCES SO MUCH, NAME EVERY SINGLE SENTENCE
Linus Lee
Research Engineer, Notion
What do AI models see when they read and generate text and images? What are the units of meaning they use to understand the world? I’ll share some encouraging updates from my continuing exploration of how models process their input and generate data, enabled by recent breakthroughs in interpretability research. I’ll also discuss, and share some demos of, how this work opens up possibilities for radically different, more natural interfaces for working with generative AI models.
-
3:00 PM - 3:30 PM
THE NEW AI STACK WITH FOUNDATION MODELS
Chip Huyen
VP of AI & OSS, Voltron Data
How has the ML engineering stack changed with foundation models? While the generative AI landscape is still rapidly evolving, some patterns have emerged, and this talk discusses them. Spoiler: the principles of deploying ML models into production remain the same, but we’re seeing many new challenges and new approaches. This talk is the result of Chip Huyen’s survey of 900+ open-source AI repos and discussions with many ML platform teams, both big and small.
-
3:30 PM - 4:00 PM
SIMPLE, PROVEN METHODS FOR IMPROVING AI QUALITY IN PRODUCTION
SHREYA RAJPAL, CEO, Guardrails AI
In this talk, Shreya will share a candid look back at a year dedicated to developing reliable AI tools in the open-source community. The talk will explore which tools and techniques have proven effective and which ones have not, providing valuable insights from real-world experiences. Additionally, Shreya will offer predictions on the future of AI tooling, identifying emerging trends and potential breakthroughs. This presentation is designed for anyone interested in the practical aspects of AI development and the evolving landscape of open-source technology, offering both reflections on past lessons and forward-looking perspectives.
-
4:00 PM - 4:30 PM
FROM PREDICTIVE TO GENERATIVE: UBER'S JOURNEY
Kai Wang
Lead PM, AI Platform, Uber
Raajay Viswanathan
Software Engineer, Uber
Today, Machine Learning (ML) plays a key role in Uber’s business, powering business-critical decisions like ETA prediction, rider-driver matching, Eats homefeed ranking, and fraud detection. As Uber’s centralized ML platform, Michelangelo has been instrumental in driving Uber’s ML evolution since it was first introduced in 2016. It offers a comprehensive set of features covering the end-to-end ML lifecycle, empowering Uber’s ML practitioners to develop and productionize high-quality ML applications at scale.
-
4:30 PM - 5:00 PM
INTEGRATING LLMs INTO PRODUCTS
EMMANUEL AMEISEN, Research Engineer, Anthropic
Learn best practices for integrating Large Language Models (LLMs) into product development. We will discuss the strengths of modern LLMs like Claude and how they can be leveraged to enable and enhance various applications. The presentation will cover simple prompting strategies and design patterns that facilitate the effective incorporation of LLMs into products.
-
5:00 PM - 5:30 PM
BUILDING SAFER AI: BALANCING DATA PRIVACY WITH INNOVATION
Stephanie Kirmer
Senior Machine Learning Engineer, DataGrail
The balance between AI innovation and data security and privacy is a major challenge for ML practitioners today. In this talk, I’ll discuss the policy and ethical considerations that matter for those of us building ML and AI solutions, particularly around data security, and describe ways to make sure your work doesn’t create unnecessary risks for your organization. With planning and thoughtful development strategies, it is possible to create incredible advances in AI without risking breaches of sensitive data or damaging customer confidence.