My PM Interview® - Preparation for Success

Designing the Brain of Your AI Product

How to Use Prompts, RAG, Fine-Tuning, and Transfer Learning to Make LLMs Actually Useful

My PM Interview
Nov 24, 2025

Most AI products don’t fail because the model is “not powerful enough.”
They fail because the context you give the model is wrong or weak.

  • Wrong or vague instructions → garbage outputs

  • Missing or outdated data → hallucinations

  • Overloaded prompts → confused responses

  • Premature fine-tuning → cost with no real gain

This is the territory of context engineering: everything you do to control what the model sees and how it thinks before it produces an answer.

You can think of it as the AI equivalent of product requirements: if you feed chaos in, you get chaos out.

This article goes deep into:

  1. Why context engineering matters

  2. Prompt engineering: how to give good instructions

  3. Retrieval-Augmented Generation (RAG): how to feed the right data

  4. Fine-tuning: when you really need to change the model itself

  5. Transfer learning & distillation: making small models smarter

  6. A decision framework: prompt vs RAG vs fine-tuning vs transfer learning

  7. A practical checklist for AI product managers


Why context engineering matters

Modern large language models (LLMs) are trained on vast text corpora and learn to predict the next token in a sequence. They’re extremely good at:

  • Understanding natural language

  • Following instructions (if they’re clear)

  • Combining knowledge from many domains

But they have two fundamental limits:

  1. They only know what was in their training data (plus any updatable memory).

  2. They only see what fits inside the context window for each request.

To work around those constraints, you need to control three things:

  • Instructions – what exactly you’re asking it to do (prompt engineering)

  • External data – what information it should base its answer on (RAG)

  • Internal parameters – what knowledge and behaviour are “baked into” the model (fine-tuning, transfer learning)

Doing this well is what turns “a clever chatbot” into a reliable product.


Prompt Engineering

Prompt engineering is the art of telling the model:

  • Who it should act as

  • What it should do

  • How it should format the answer

  • What constraints it must obey

Modern guides describe prompt engineering as a key way to customise model behaviour without retraining, using patterns like zero-shot, few-shot, and chain-of-thought prompting.

1. Core principles

You don’t need 1,000 “magic prompts.” You need a few principles:

  1. Be explicit about the role

    • “Act as a senior product manager for B2B SaaS…”

    • “Act as an expert meeting note-taker…”
      Roles help the model adopt the right style and level of detail.

  2. Specify the task clearly

    • “Summarise this meeting transcript in 150–200 words.”

    • “Extract all action items with owners and due dates.”

    • “Propose 3 product ideas and evaluate them on impact vs effort.”
      Vague requests (“Help me with this”) = vague answers.

  3. Define the output format

    • JSON schema for APIs

    • Tables for dashboards

    • Bullet lists for summaries

    • Headings and sections for reports
      This makes integration and evaluation much easier.

  4. Constrain behaviour

    • “If information is missing, say ‘I don’t know’ rather than guessing.”

    • “Do not fabricate URLs or email addresses.”

    • “Only use facts present in the provided context.”
      Constraints are how you reduce hallucinations.

  5. Encourage step-by-step thinking
    Techniques like chain-of-thought (CoT) ask the model to reason in steps. Research shows CoT prompting improves performance on multi-step reasoning tasks by explicitly asking the model to explain its thinking before giving the final answer.

    • “First list out the steps you would take, then provide the final answer.”

    • “Think step-by-step. Don’t skip any steps.”

  6. Use examples (few-shot prompting)
    Few-shot prompting gives a couple of examples to demonstrate the pattern you want. Studies and practitioner guides show that providing 2–5 well-chosen examples can significantly improve the model’s consistency for structured tasks; a sketch follows this list.

    • Show one or two “good” Q&A pairs

    • Show an example of the exact format you expect

  7. Iterate like an engineer
    Treat prompts as code:

    • Observe failure modes (hallucination, verbosity, missing fields)

    • Add constraints or clarifications

    • Refactor into reusable templates
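
To make the few-shot principle (item 6) concrete, here is a minimal sketch of a few-shot prompt for a hypothetical ticket-triage task; the categories and example tickets are invented for illustration. Two worked examples pin down both the labels and the exact output format:

# A few-shot prompt sketch for a hypothetical ticket-triage task.
# The two worked examples demonstrate the exact output format expected.
FEW_SHOT_PROMPT = """\
You are a support-ticket triage assistant.
Classify the ticket into exactly one of: bug, billing, feature_request.
If the category is unclear, answer: unknown. Do not guess.

Ticket: "The export button crashes the app on Safari."
Category: bug

Ticket: "I was charged twice for my March subscription."
Category: billing

Ticket: "{ticket_text}"
Category:"""

def build_triage_prompt(ticket_text: str) -> str:
    return FEW_SHOT_PROMPT.format(ticket_text=ticket_text)

print(build_triage_prompt("Please add dark mode to the dashboard."))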

2. A reusable prompt skeleton

For many product tasks, you can use a basic structure like:

Role: “You are a [role]…”
Task: “Your job is to [task].”
Input: “Here is the input: …”
Rules: “Follow these rules: 1) … 2) … 3) …”
Output format: “Respond in this exact JSON/table/markdown format: …”
Reasoning: “Think step-by-step, then provide the final answer only in the specified format.”
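
Because prompts should be treated like code (principle 7), the skeleton can live in your codebase as a single template function. A minimal sketch in Python; every field value shown is illustrative:

# A reusable prompt template mirroring the skeleton above.
PROMPT_SKELETON = """\
Role: You are a {role}.
Task: Your job is to {task}.
Input: Here is the input: {input_text}
Rules: Follow these rules: {rules}
Output format: Respond in this exact format: {output_format}
Reasoning: Think step-by-step, then provide the final answer only in the specified format."""

def render_prompt(role, task, input_text, rules, output_format):
    numbered_rules = " ".join(f"{i}) {rule}" for i, rule in enumerate(rules, 1))
    return PROMPT_SKELETON.format(
        role=role, task=task, input_text=input_text,
        rules=numbered_rules, output_format=output_format,
    )

print(render_prompt(
    role="senior product manager for B2B SaaS",
    task="summarise this meeting transcript in 150-200 words",
    input_text="[transcript goes here]",
    rules=["Do not fabricate names or dates.",
           "If information is missing, say 'I don't know'."],
    output_format="a markdown bullet list",
))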

Once you have a handful of such templates, you can plug them into your product for:

  • Meeting summaries

  • Ticket triage

  • Requirements generation

  • User interview synthesis

  • Experiment analysis

Prompt engineering is always the first lever you should pull before touching data pipelines or model weights.


Retrieval-Augmented Generation (RAG)

Prompting only uses what’s already in the model’s head. But most products need to answer questions based on your own data:

  • Internal docs and wikis

  • Notion / Confluence spaces

  • CRM or ticketing data

  • Legal and policy documents

  • Domain-specific knowledge bases

You could stuff everything into the prompt, but:

  • Context windows are limited

  • Quality degrades when context gets very long

  • You don’t want to send sensitive or irrelevant data every time

This is where Retrieval-Augmented Generation (RAG) comes in.

1. What is RAG?

RAG connects an LLM to an external knowledge base. When a user asks a question:

  1. You retrieve the most relevant pieces of information

  2. You augment the prompt with those pieces

  3. You ask the model to generate an answer that uses that context

Cloud, infrastructure, and AI vendors describe RAG as a pattern that grounds LLM outputs in authoritative, up-to-date data, reducing hallucinations and letting the system answer from your own docs or databases.

2. How a RAG pipeline really works

A typical pipeline has two phases:

A. Building the knowledge base

  1. Ingest sources

    • PDFs, docs, HTML, transcripts, spreadsheets, database rows, etc.

  2. Chunk the content

    • Split long documents into smaller, meaningful chunks (e.g., paragraphs, sections, code blocks)

    • Choose chunk size and overlap based on the domain (code vs marketing copy vs legal docs)

  3. Embed the chunks

    • Convert each chunk into a vector embedding using an embedding model

    • Embeddings map semantically similar text to nearby points in a vector space

  4. Store in a vector database

    • Save each embedding + original text + metadata (doc ID, section, timestamp, permissions)

    • Use a vector store that supports similarity search, filtering, and access control

Guides from vector DB providers and cloud platforms suggest that chunking strategy, embedding model choice, and metadata design significantly affect retrieval quality. A sketch of this indexing phase follows.
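
Here is a minimal sketch of phase A in plain Python. The embed function is a deliberately toy stand-in (hashed character trigrams) so the example runs with no external services; in production you would call a real embedding model and persist vectors in a real vector database:

import math
from dataclasses import dataclass

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    # Overlapping character windows; tune size/overlap per domain
    # (code vs marketing copy vs legal docs).
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str, dims: int = 64) -> list[float]:
    # Toy stand-in for an embedding model: hashed character trigrams,
    # unit-normalised so similar strings land near each other.
    vec = [0.0] * dims
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

@dataclass
class Chunk:
    doc_id: str
    text: str
    vector: list[float]
    metadata: dict  # e.g. {"doc_type": ..., "timestamp": ..., "owner": ...}

index: list[Chunk] = []  # stand-in for a real vector store

def ingest(doc_id: str, text: str, metadata: dict) -> None:
    for piece in chunk_text(text):
        index.append(Chunk(doc_id, piece, embed(piece), metadata))

ingest("privacy-policy-v3",
       "We collect usage data to improve the product. ...",
       {"doc_type": "policy", "timestamp": "2025-11-01"})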

B. Answering a query

  1. User asks a question

    • “What are the key changes in the latest privacy policy?”

    • “What are my top three priorities for tomorrow, based on my tasks and meetings?”

  2. Embed the query

    • Convert the user’s query into a vector using the same embedding model

  3. Retrieve similar chunks

    • Search the vector database for the most similar embeddings (top-k retrieval)

    • Optionally filter by metadata (e.g., only that user’s docs, or only last 3 months)

  4. Construct the augmented prompt

    • System instructions +

    • Retrieved chunks as “context” +

    • The user’s query

    Example structure:

“Answer the question using only the context below. If the answer isn’t present, say you don’t know.

Context:
[chunk 1]
[chunk 2]
[chunk 3]

Question:
[user’s question]”

  5. Call the LLM

    • The model reads the context + query and generates an answer grounded in that context

This lets you keep your knowledge base outside the model, update it frequently, and still give the model just enough context to be smart.
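
Continuing the toy index from phase A, query time ties steps 2–5 together: embed the query with the same model, run top-k similarity search (optionally filtered by metadata), assemble the augmented prompt with the guardrail instruction, and send it to the LLM. A minimal sketch:

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))  # vectors are already unit-normalised

def retrieve(query: str, k: int = 3, doc_type: str | None = None) -> list[Chunk]:
    # Top-k similarity search with optional metadata filtering.
    candidates = [c for c in index
                  if doc_type is None or c.metadata.get("doc_type") == doc_type]
    query_vec = embed(query)
    return sorted(candidates,
                  key=lambda c: cosine(query_vec, c.vector),
                  reverse=True)[:k]

def build_rag_prompt(query: str) -> str:
    context = "\n\n".join(c.text for c in retrieve(query))
    return ("Answer the question using only the context below. "
            "If the answer isn't present, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion:\n{query}")

# The resulting string is what you send to the LLM in step 5.
print(build_rag_prompt("What are the key changes in the latest privacy policy?"))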

3. Common mistakes and best practices

Real-world teams report that most RAG implementations fail to reach production because of poor retrieval design, not model quality. (kapa.ai)

Key pitfalls:

  1. Bad chunking

    • Chunks that are too small lose context

    • Chunks that are too big waste tokens and dilute relevance

    • Fix: tune chunk size per use case and use overlap

  2. No metadata or filtering

    • Searching across the entire corpus for every query is slow and often irrelevant

    • Fix: tag chunks with doc type, owner, date, product area, etc. Use filters.

  3. No retrieval evaluation

    • Teams look only at final answers, not whether retrieved documents were relevant

    • Fix: evaluate retrieval independently (precision / recall / relevance scores; a sketch follows this list)

  4. No guardrails on context usage

    • If you don’t tell the model to only use the provided context, it may “mix in” its own prior knowledge

    • Fix: explicit instructions (“Answer using only the context. If missing, say you don’t know.”)

  5. Ignoring user experience

    • RAG isn’t just backend; UX matters:

      • Show citations or links to source documents

      • Let users inspect the retrieved context

      • Make it easy to correct wrong answers
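
For pitfall 3, retrieval can be evaluated without looking at generated answers at all. A minimal sketch, reusing the toy retrieve function above and assuming a small hand-labelled set of queries with known relevant doc IDs (all queries and IDs here are invented):

def precision_recall_at_k(retrieved_ids: list[str],
                          relevant_ids: set[str],
                          k: int) -> tuple[float, float]:
    # Precision@k and recall@k for a single query.
    hits = sum(1 for doc_id in retrieved_ids[:k] if doc_id in relevant_ids)
    return hits / k, (hits / len(relevant_ids) if relevant_ids else 0.0)

# Tiny hand-labelled evaluation set; queries and doc IDs are illustrative.
eval_set = [
    {"query": "refund policy for annual plans",
     "relevant": {"billing-faq", "refund-policy"}},
    {"query": "how do I rotate my API keys",
     "relevant": {"security-guide"}},
]

for case in eval_set:
    retrieved_ids = [c.doc_id for c in retrieve(case["query"], k=5)]
    p, r = precision_recall_at_k(retrieved_ids, case["relevant"], k=5)
    print(f"{case['query']!r}: precision@5={p:.2f} recall@5={r:.2f}")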

4. Hierarchical and advanced RAG patterns

Some tasks, like summarising multiple long documents, break naive RAG:

  • The query “Summarise this entire folder” doesn’t match any specific chunk

  • You don’t want random patches of “summary” text; you want a coherent overview

Hence hierarchical RAG patterns:

  • First summarise each document individually

  • Then summarise the collection using these per-document summaries as context

  • Optionally “zoom in” into sections based on follow-up questions

Cloud vendors and practitioners recommend such multi-step, hierarchical strategies for large corpora to keep context size manageable while preserving high-level meaning. (Microsoft Learn)
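
The pattern maps naturally onto a map-reduce style pipeline. A minimal sketch, with call_llm as a hypothetical placeholder for whatever model call your stack actually uses:

def call_llm(prompt: str) -> str:
    # Placeholder for your actual model call (hosted API, open-weights model, ...).
    raise NotImplementedError

def summarise_document(doc_text: str) -> str:
    return call_llm(f"Summarise this document in 5 bullet points:\n\n{doc_text}")

def summarise_collection(documents: list[str]) -> str:
    # Map step: summarise each document individually, so no single prompt
    # has to hold the entire corpus.
    per_doc = [summarise_document(d) for d in documents]
    # Reduce step: build one coherent overview from the per-document
    # summaries, which now fit comfortably in a single context window.
    joined = "\n\n".join(f"Document {i + 1} summary:\n{s}"
                         for i, s in enumerate(per_doc))
    return call_llm("Write a coherent overview of this document collection, "
                    "using only the per-document summaries below:\n\n" + joined)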

As an AI PM, you don’t need to implement all of this yourself, but you must understand when simple “top-k chunk retrieval” is not enough and when you need multi-step pipelines.


Fine-tuning
