
Generative AI Is Not an Island: ML’s Core Principles

Generative AI may feel like the new kid on the block, but it’s not as different from traditional machine learning as you might think.

In this episode of Talking AI, we’re joined by Simba Khadder, Co-Founder & CEO of Featureform, and Omar Shanti, CTO of Hatchworks AI, to discuss how generative AI builds upon established ML principles and infrastructure. We explore the misconception that it exists as its own little island, when in reality, we see a clear continuity between classic ML and newer AI technologies.

Simba and Omar break down the aspects unique to GenAI, such as prompting and large language models (LLMs), while also highlighting its shared foundations with traditional ML. They also touch on the evolution of retrieval-augmented generation (RAG), how the ML lifecycle compares with the GenAI lifecycle, and what MLOps teams should consider when it comes to driving business value.
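The RAG pattern the episode touches on can be sketched in a few lines: embed the documents, retrieve the closest ones for a query, and splice them into the prompt. The sketch below uses a toy bag-of-words "embedding" in place of a real embedding model, and stops at prompt assembly rather than calling an actual LLM; all names and documents are illustrative, not from the episode.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts.
    Real systems use a learned embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context to the question -- the 'augmented'
    prompt that would be sent to an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Feature stores manage features for traditional ML models.",
    "Embeddings map text into vectors for similarity search.",
    "Bounce rate measures visitors who leave after one page.",
]
print(build_prompt("How do embeddings relate to features?", docs))
```

The point of the sketch is the continuity the guests describe: retrieval over embeddings plays a role analogous to feature lookup in classic ML pipelines, with the prompt taking the place of an engineered feature vector.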

Want to stay ahead in the fast-moving world of AI? Tune in to Talking AI where we break down the latest trends and tools in Generative AI, MLOps, and more. Subscribe now, and don’t miss out on practical insights from industry experts. Let’s transform your data into real business impact!

Key Moments:
  • The similarities and differences between generative AI and traditional machine learning
  • How prompts differentiate LLMs from earlier transformer models
  • The challenges of implementing AI in production versus creating pilot projects
  • How RAG has evolved and what it could mean for the future
  • The importance of data as the core foundation for both traditional ML and generative AI
  • How feature engineering in ML relates to embeddings in LLMs
  • How an ML lifecycle differs from a GenAI one
Key links: