Online event


May 2, 2024

You will be guided through a step-by-step demonstration of how to work with vision-language models.

What to expect ➡️

  • Discover refinement techniques: Learn how to refine your vision-language model for enhanced precision and adaptability.

  • Focus on fine-tuning: Utilize parameter-efficient fine-tuning methods to boost your model’s performance.

  • Interactive learning: Participate in a fast-paced, interactive session designed to elevate your modeling skills.

  • Unlock potential: Unleash the full potential of your vision-language model in this short community session.

🚀 Missed our live demo? No worries! We've got the recording. ⬇️

Dive into the innovative uses of the LLaVA model for vision and language tasks.
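The parameter-efficient fine-tuning mentioned above is commonly done with low-rank adapters (LoRA): the pretrained weights stay frozen, and only a small low-rank update is trained. As a minimal sketch of that idea (a toy NumPy illustration, not the session's actual LLaVA training code; all names and sizes here are made up):

```python
import numpy as np

# Toy sketch of the LoRA idea behind parameter-efficient fine-tuning.
# Real adapters wrap attention/projection layers of a large model such
# as LLaVA; here we use one small frozen weight matrix for illustration.

rng = np.random.default_rng(0)

d_in, d_out, r = 8, 8, 2                  # r << d: the low-rank bottleneck
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight

# Trainable low-rank factors; B starts at zero so the adapter is
# initially a no-op and training begins from the pretrained behavior.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 4.0                               # scaling hyperparameter

def forward(x, A, B):
    # frozen path plus scaled low-rank update: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y0 = forward(x, A, B)                     # identical to the frozen model
assert np.allclose(y0, W @ x)

# "Fine-tuning" updates only A and B; W never changes.
B = B + 0.1
y1 = forward(x, A, B)

trainable = A.size + B.size               # 2*r*d parameters instead of d*d
print(trainable, W.size)                  # 32 vs 64 here; the gap grows with d
```

The payoff is the parameter count in the last lines: the adapter trains 2·r·d values instead of d², which is what makes fine-tuning a large vision-language model feasible on modest hardware.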

Who will be leading you?

Laura Funderburk
Senior Developer Advocate
Laura holds a B.Sc. in Mathematics from Simon Fraser University and has over three years of experience as a professional data scientist. She is enthusiastic about using open source for MLOps and DataOps and is passionate about outreach and education. In her day-to-day work, Laura creates written content on building end-to-end, scalable LLM pipelines with streaming data.
Sina Rafati
Lead AI Technologist at Boeing
An AI engineer and lead engineer with over 10 years of experience, Sina excels at leading engineering teams and has done pioneering work in fields such as generative AI, machine learning, deep learning, and satellite telecommunications.