Generative AI may feel like the new kid on the block, but it’s not as different from traditional machine learning as you might think.
In this episode of Talking AI, we’re joined by Simba Khadder, Co-Founder & CEO of Featureform, and Omar Shanti, CTO of Hatchworks AI, to discuss how generative AI builds upon established ML principles and infrastructure. We explore the misconception that generative AI exists on an island of its own, when in reality there is clear continuity between classic ML and newer AI technologies.
Simba and Omar break down the unique aspects of GenAI, such as prompting and large language models (LLMs), while also highlighting the shared foundations with traditional ML. They also touch on the evolution of retrieval-augmented generation (RAG), how the ML lifecycle compares to the GenAI lifecycle, and what MLOps teams should consider when it comes to driving value in a business.
Want to stay ahead in the fast-moving world of AI? Tune in to Talking AI where we break down the latest trends and tools in Generative AI, MLOps, and more. Subscribe now, and don’t miss out on practical insights from industry experts. Let’s transform your data into real business impact!
- The similarities and differences between generative AI and traditional machine learning
- How prompts differentiate LLMs from earlier transformer models
- The challenges of implementing AI in production versus creating pilot projects
- How RAG has evolved and what it could mean for the future
- The importance of data as the core foundation for both traditional ML and generative AI
- How feature engineering in ML relates to embeddings in LLMs
- How an ML lifecycle differs from a GenAI one