Thank you for making AIQCon a success!
June 25, 2024, 8:30 AM | The Hibernia, San Francisco, CA
CONNECT + COLLABORATE ON EVOLVING AI QUALITY
What is AIQCon?
Brought to you by Kolena and MLOps Community, AIQCon (the AI Quality Conference) is not your average AI conference—we're not here to repeat the same old conversations you've already had.
In this jam-packed event, you'll engage with the industry leaders and builders who are creating the gold standard of AI Quality. With multiple tracks led by dozens of actual practitioners, we'll cover the essentials of AI quality: accuracy, transparency, generalization, bias mitigation, efficiency, cost, scalability, ethical considerations, and robustness.
With speaker sessions sharing the latest advancements, you'll acquire new skills, tools, and techniques for AI quality assessment, testing, and validation.
We guarantee you'll gain more insights and value from this day than from any other event you'll attend this year. Join us!
ENGAGE WITH AI/ML PROS FROM:
Featured Speakers
Real Practitioners. Real Stories.
WHY ATTEND AIQCON
LEARN + INNOVATE
Journey through multiple talks focused on building quality AI that is reliable, robust, and scalable. Learn from trusted, innovative leaders and veteran practitioners tackling the biggest challenges in AI. We don't want to brag, but you should take a look at the speakers (and we're announcing more!). Come together with fellow attendees and speakers in our workshops for in-person collaboration and problem-solving. Bring your notebooks!
VALUE
An in-person conference should deliver value you can't find online. We've curated a comprehensive and jam-packed agenda that goes deep on AI quality. Learn about real use cases, tactics, and tools to help you build quality, scalable AI that provides sustainable organizational value and achieves your business objectives. You'll also get access to recorded sessions after the conference.
CONNECT
Attending AIQCon offers you the invaluable opportunity to connect with world-class experts and fellow AI/ML creators. This is your chance to engage directly with pioneers and leading professionals who are at the forefront of AI development. You'll gain exclusive insights, share knowledge, and discuss innovative ideas with those who share your passion for advancing AI technology.
JAM-PACKED AGENDA
INNOVATIVE TALKS + PANELS
- Morning Session
- Afternoon Session
NEW QUALITY STANDARDS FOR AUTONOMOUS DRIVING
Mo Elshenawy, President & CTO, Cruise
Fireside chat featuring Mo Elshenawy, President and CTO of Cruise Automation, and Mohamed Elgendy, CEO and Co-founder of Kolena. In this discussion, Mo Elshenawy will delve into the comprehensive AI philosophy that drives Cruise Automation. He will share unique insights into how Cruise is developing its quality standards from the ground up, with a particular focus on defining and achieving “perfect” driving. This fireside chat offers valuable perspectives on the rigorous processes involved in teaching autonomous vehicles to navigate with precision and safety.
PANEL: AI AND GOVERNMENT REGULATION
Gerrit De Vynck
Tech Reporter, The Washington Post
Gerrit De Vynck of The Washington Post will moderate a panel exploring NIST, government-implemented standards, and their roles in the development of AI.
AI QUALITY STANDARDS
Gordon Hart
CPO & Co-founder, Kolena
PANEL: THE DOLLARS AND CENTS BEHIND THE AI VC BOOM
Natasha Mascarenhas, Reporter, The Information
Natasha will moderate a panel of leading VCs who have backed the top AI companies and understand the correction within the boom. They'll discuss the flight to quality, what happens when OpenAI eats your lunch, how founders should think about giving big tech a spot on their cap tables, and, more generally, how to invest at the speed of innovation right now.
TO RAG OR NOT TO RAG?
Amr Awadallah, CEO & Co-Founder, Vectara
Retrieval-Augmented Generation (RAG) is a powerful technique for reducing hallucinations from Large Language Models (LLMs) in GenAI applications. However, large context windows (e.g., 1M tokens for Gemini 1.5 Pro) can be a potential alternative to the RAG approach. This talk contrasts both approaches and highlights when a large context window is a better option than RAG, and vice versa.
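If the trade-off is new to you, here is a minimal, hypothetical sketch of the two patterns; the helper names and toy keyword retriever are illustrative assumptions, not part of the talk or any vendor's stack:

```python
# Minimal sketch contrasting RAG with a long-context approach.
# `call_llm` is a stand-in for any LLM API client, and the retriever is a
# toy keyword-overlap ranker, not real vector search.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return f"<answer generated from a {len(prompt)}-character prompt>"

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer_with_rag(query: str, documents: list[str]) -> str:
    # Only the top-k relevant passages go into the prompt, which keeps the
    # prompt small and grounds the model to reduce hallucinations.
    context = "\n\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

def answer_with_long_context(query: str, documents: list[str]) -> str:
    # With a very large window (e.g., on the order of 1M tokens), you can
    # skip retrieval and pass everything, trading indexing complexity for
    # prompt size, latency, and per-call cost.
    everything = "\n\n".join(documents)
    prompt = f"Documents:\n{everything}\n\nQuestion: {query}"
    return call_llm(prompt)
```

In practice the choice hinges on corpus size, latency, and cost, which is exactly the trade-off this session explores.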
THE ERA OF GENERATIVE AI
Lukas Biewald, Co-founder & CEO, Weights & Biases
Weights & Biases CEO and Co-founder Lukas Biewald will share his perspective on the Generative AI industry: where we've come from, where we are today, and where we're headed.
PANEL: A BLUEPRINT FOR SCALABLE & RELIABLE ENTERPRISE AI/ML SYSTEMS
Hira Dangol
VP AI/ML & Automation, Bank of America
Rama Akkiraju
VP Enterprise AI/ML, NVIDIA
Nitin Aggarwal
Head of AI Services, Google
Steven Eliuk
VP AI & Governance, IBM
Enterprise AI leaders continue to explore the best productivity solutions that solve business problems, mitigate risks, and increase efficiency. Building reliable and secure AI/ML systems requires following industry standards, an operating framework, and best practices that can accelerate and streamline a scalable architecture capable of producing the expected business outcomes.
This session, featuring veteran practitioners, focuses on building scalable, reliable, and quality AI and ML systems for the enterprise:
- An operating framework for AI/ML use cases
- Standards and best practices for building scalable and automated AI systems
- Governance workflows, modernization tools, and the total experience journey
IF YOU LIKE SENTENCES SO MUCH, NAME EVERY SINGLE SENTENCE
Linus Lee
Research Engineer, Notion
What do AI models see when they read and generate text and images? What are the units of meaning they use to understand the world? I’ll share some encouraging updates from my continuing exploration of how models process their inputs and generate data, enabled by recent breakthroughs in interpretability research. I’ll also discuss and share some demos of how this work opens up possibilities for radically different, more natural interfaces for working with generative AI models.
THE NEW AI STACK WITH FOUNDATION MODELS
Chip Huyen
VP of AI & OSS, Voltron Data
How has the ML engineering stack changed with foundation models? While the generative AI landscape is still rapidly evolving, some patterns have emerged. This talk discusses these patterns. Spoilers: the principles of deploying ML models into production remain the same, but we’re seeing many new challenges and new approaches. This talk is the result of Chip Huyen's survey of 900+ open source AI repos and discussions with many ML platform teams, both big and small.
FROM PREDICTIVE TO GENERATIVE: UBER'S JOURNEY
Kai Wang
Lead PM, AI Platform, Uber
Raajay Viswanathan
Software Engineer, Uber
Today, machine learning (ML) plays a key role in Uber’s business, powering business-critical decisions such as ETAs, rider-driver matching, Eats home feed ranking, and fraud detection. As Uber’s centralized ML platform, Michelangelo has been instrumental in driving Uber’s ML evolution since it was first introduced in 2016. It offers a comprehensive set of features that cover the end-to-end ML lifecycle, empowering Uber’s ML practitioners to develop and productize high-quality ML applications at scale.
INTEGRATING LLMs INTO PRODUCTS
Emmanuel Ameisen
Research Engineer, Anthropic
Learn about best practices for integrating Large Language Models (LLMs) into product development. We will discuss the strengths of modern LLMs like Claude and how they can be leveraged to enable and enhance various applications. The presentation will cover simple prompting strategies and design patterns that facilitate the effective incorporation of LLMs into products.
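As a taste of the kind of design pattern the session refers to, here is a minimal, hypothetical sketch of a templated prompt with explicit instructions, one few-shot example, and a constrained output format; the names and task are illustrative assumptions, not material from the talk:

```python
# Minimal sketch of a common prompting pattern: clear instructions, a
# few-shot example, and a fixed output format. The LLM call is injected
# so the template stays independent of any particular provider SDK.

FEW_SHOT_EXAMPLE = (
    'Review: "The app crashes every time I upload a photo."\n'
    'JSON: {"sentiment": "negative", "topic": "stability"}'
)

def build_prompt(review: str) -> str:
    """Assemble the templated classification prompt for one review."""
    return (
        "You classify customer reviews.\n"
        'Return only JSON with keys "sentiment" and "topic".\n\n'
        f"Example:\n{FEW_SHOT_EXAMPLE}\n\n"
        f'Review: "{review}"\nJSON:'
    )

def classify_review(review: str, llm_call) -> str:
    """`llm_call` is whatever client function your product already uses."""
    return llm_call(build_prompt(review))
```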
See who's coming
Meet some of our attendees!
Connie Yang
Managing Principal of Machine Learning and Data Science, DesignMind
Shane Morris
Senior Executive Advisor, Devis
Aishwarya Reganti
Tech Lead, AWS
Sam Partee
CTO, Arcade AI
Catherine Nelson
Data Scientist, Freelance
James Lamb
Sr. Software Engineer - RAPIDS, NVIDIA
Jon Yeo
Technical PM, Pinterest
Deb Gardner
Associate Business Analyst, Booz Allen Hamilton
Joseph Sandoval
Principal Product Manager, Adobe
David Nunez
Partner, Abstract Group