👋 Hey, I am Deepak and welcome to another edition of my newsletter. I deep dive into topics around building products and driving growth.
If you're new here, do check out some of the popular posts I've written recently, if you haven't already
10,000+ smart, curious folks have subscribed to the Growth Catalyst newsletter so far. To receive the newsletter weekly in your email, consider subscribing 👇
Dear Reader,
We launched a new program on Product Sense and Strategy yesterday. Here are the slides from the session conducted for the shortlisted candidates for the program. The response has been great, with 80% of the seats filling up in less than 24 hours. If interested, you can apply here.
Over to the topic at hand,
While learning about AI product management, it is important to address practical questions and challenges along the way. A key question that every PM will face while building AI products is whether supervised models are still relevant in the era of LLMs.
Going by the perception of large language models such as GPT, people tend to believe that LLMs can solve every AI problem. The truth is far from this perception, and that gap is what this post focuses on.
The Relevance of LLMs
LLMs excel at natural language tasks, which require understanding and generating natural language text. Prime examples are ChatGPT and Bard; both have a text-first interface.
Over time, LLMs will become multi-modal, meaning they can handle multiple forms of input and output: text, images, voice, video, etc.
You can read this series of posts if you are interested in understanding LLMs more deeply -
But as you understand them better, you realise that they aren’t useful for every problem out there.
Supervised models continue to work well in scenarios where labeled data is abundant and we need to predict specific outcomes accurately. Let’s delve deeper into it.
The Supervised Models
For PMs, understanding the strengths and limitations of supervised models is crucial for identifying use-cases where they excel.
For example, Customer churn prediction (predicting which customer will churn out of product in a given time period) can be better done by supervised models. LLMs aren't suitable for all predictive tasks in businesses. The reason behind it is that while LLMs are getting better at high school maths, they are designed for words, not numbers. Analysing and predicting numerical data often requires specific mathematical operations, statistical techniques, and algorithms.
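To make this concrete, here is a minimal sketch of churn prediction framed as a supervised learning problem, using scikit-learn. The features, labels, and data are all hypothetical, made up purely for illustration:

```python
# A minimal churn-prediction sketch using scikit-learn.
# Features, labels, and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_users = 1000

# Hypothetical behavioural features: sessions in the last 30 days,
# days since last login, and support tickets raised.
X = np.column_stack([
    rng.poisson(10, n_users),       # sessions_last_30d
    rng.integers(0, 60, n_users),   # days_since_last_login
    rng.poisson(1, n_users),        # support_tickets
])
# Hypothetical label: churned if inactive for a month, with some noise.
y = ((X[:, 1] > 30) ^ (rng.random(n_users) < 0.1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point is the shape of the problem: structured numerical features in, a calibrated probability out. That is exactly the territory where supervised models remain the default choice.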
Supervised Models > LLMs
Here are a few cases where supervised models can outperform LLMs.
Small Datasets: LLMs require vast amounts of data for training, and they may underperform when faced with small datasets. For example, in credit card fraud detection, companies like Feedzai use supervised models trained on historical transaction data to identify fraudulent patterns and behaviours, applying anomaly detection at a user level.
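To illustrate the general pattern (this is not Feedzai's actual system, just a sketch under my own assumptions), a supervised classifier can learn from a few hundred labelled transactions, using a per-user deviation feature as the anomaly signal:

```python
# Illustrative fraud-detection sketch on a small labelled dataset.
# This is NOT Feedzai's system; the features and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300  # a small labelled dataset, far too little to train an LLM on

# Hypothetical features: transaction amount, plus how far the amount
# deviates from that user's historical average (a per-user anomaly signal).
amount = rng.lognormal(3, 1, n)
deviation = rng.normal(0, 1, n)
is_fraud = ((deviation > 2) | (rng.random(n) < 0.02)).astype(int)

X = np.column_stack([amount, deviation])
clf = LogisticRegression().fit(X, is_fraud)

# Score a new transaction that deviates sharply from the user's norm.
print("Fraud probability:", clf.predict_proba([[500.0, 3.1]])[0, 1])
```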
Content Moderation: LLMs struggle with content moderation for a specific community, where adherence to specific guidelines or regulations is essential. Here is a paper detailing that if you are interested — https://arxiv.org/pdf/2309.14517.pdf.
From the research paper: "One potential reason for this is that while LLMs are able to reason in a 'forward direction,' by interpreting rules and examining if content falls within those rules, CrossMod is a 'reverse direction' tool which starts from actual content removals and learns patterns that can be applied to other communities."
On the other hand, LLMs are better than supervised models at NLP tasks such as:
Natural Language Understanding (NLU) such as sentiment analysis, language translation, and question-answering (a minimal sketch follows this list)
Text Generation tasks like content creation
Semantic Search: Microsoft's Bing search engine employs LLMs like GPT-4 to improve search relevance and answer user queries more accurately.
Speech Recognition
Chatbots
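As an example of the NLU case above, here is a minimal sketch of sentiment classification through the OpenAI API. The model name, prompt, and helper function are illustrative assumptions, not a recommended setup:

```python
# A minimal sketch of LLM-based sentiment analysis via the OpenAI API.
# Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_sentiment(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model works here
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text as "
                        "exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

print(classify_sentiment("The onboarding was smooth, but support is slow."))
```

Notice there is no training step at all: the task is specified in plain language, which is precisely where LLMs shine and supervised models struggle.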
Supervised Models + LLMs
In some cases, a hybrid approach that combines the strengths of supervised models with the capabilities of LLMs can yield optimal results. For example, we can use LLMs to classify natural language feedback from users as positive, negative, or neutral. This sentiment rating can then be fed into supervised models to predict churn better.
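A sketch of that hybrid pipeline, under my own assumptions, might look like the following. The helper, features, and toy data are all hypothetical, and classify_sentiment() is a stand-in for the LLM call from the earlier sketch:

```python
# Hybrid sketch: an LLM-derived sentiment label becomes one more
# feature in a supervised churn model. Everything here is illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

SENTIMENT_SCORE = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}

def classify_sentiment(text: str) -> str:
    # Stand-in for the LLM call from the earlier sketch; swap in the
    # real API-backed classify_sentiment() to run this end to end.
    return "negative" if "mess" in text else "positive"

def build_features(user: dict) -> list:
    sentiment = classify_sentiment(user["latest_feedback"])  # LLM step
    return [
        user["sessions_last_30d"],
        user["days_since_last_login"],
        SENTIMENT_SCORE.get(sentiment, 0.0),  # becomes one more feature
    ]

# A tiny, hypothetical labelled history: behaviour + feedback text.
users = [
    {"sessions_last_30d": 2, "days_since_last_login": 45,
     "latest_feedback": "I can't find anything, the new UI is a mess."},
    {"sessions_last_30d": 25, "days_since_last_login": 1,
     "latest_feedback": "Love the new dashboard, huge time saver."},
]
churned = [1, 0]

X = np.array([build_features(u) for u in users])
model = GradientBoostingClassifier().fit(X, churned)
```

The LLM handles the unstructured text; the supervised model handles the prediction. Each does the part it is good at.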
Another example is domain-specific tasks. LLMs are trained on general text available on the internet, so they may struggle with tasks that require specialised knowledge or industry-specific terminology. For example, analysing a legal document well will require both labelled-dataset training (supervised) and LLMs.
This combination of supervised models and LLMs will outperform LLMs alone by leveraging domain-specific knowledge and terminology unique to the legal field.
Cost
The current cost of LLMs is prohibitive enough to rule them out as a viable option for many small and medium companies. In fact, the cost can become unsustainable even for a company like Google. From an Ars Technica article:
Exactly how many billions of Google's $60 billion in yearly net income will be sucked up by a chatbot is up for debate. One estimate in the Reuters report is from Morgan Stanley, which tacks on a $6 billion yearly cost increase for Google if a "ChatGPT-like AI were to handle half the queries it receives with 50-word answers." Another estimate from consulting firm SemiAnalysis claims it would cost $3 billion.
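It is worth sanity-checking what those estimates imply per query. The back-of-envelope below assumes a daily search volume of around 8.5 billion queries, a commonly cited but unofficial figure; everything here is rough:

```python
# Back-of-envelope: what per-answer cost do the quoted estimates imply?
# The daily search volume is an assumption (figures around 8-9 billion
# queries/day are commonly cited); all numbers here are rough.
queries_per_day = 8.5e9
llm_share = 0.5                      # "half the queries it receives"
llm_answers_per_year = queries_per_day * llm_share * 365

for estimate_usd in (6e9, 3e9):      # Morgan Stanley vs SemiAnalysis
    per_answer = estimate_usd / llm_answers_per_year
    print(f"${estimate_usd / 1e9:.0f}B/yr -> ${per_answer:.4f} per answer")
```

Even a fraction of a cent per answer, multiplied across billions of daily queries, adds up to billions of dollars a year.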
Cost should be an important consideration for any business, and that is another factor that keeps supervised models relevant.
Summary
To summarise: while LLMs are amazing for NLP and NLP-adjacent tasks, they are not well suited to numerical, predictive tasks. This is particularly true in specialised domains, or where the dataset is small. On top of that, cost is an important consideration when evaluating supervised models vs LLMs.
An interesting thing to watch would be using both together to solve cases that aren't possible with either alone :)
That would be all for this week.
Check out the video on this topic
Thank you for reading :)
If you found it interesting, you will also love my