3️⃣ Build a proof of concept

Assumed audience: You’re a product designer working on an LLM-powered feature. You’re familiar with how LLMs work. Use this as a starting point for further exploration and innovation.

👋 Introduction

You have a solution in mind; now it’s time to test feasibility and support your engineering team. Here’s how to bring the required product context into the building process.


1️⃣ Inputs

Identify the sources for the LLM to pull from.

Some questions to get you started—

  1. What type of queries would you anticipate from users?

  2. Do you have a knowledge base that will act as a source? Any other form of documentation?

  3. Does it need additional interaction-level information about the user’s behaviour?

  4. Is there going to be a collaborative back-and-forth between the user and the LLM? If so, how should you retain context in an LLM<>user collaboration session? (One common approach is sketched just after this list.)
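A common way to retain context is to replay the running message history on every request. Here’s a minimal sketch using the OpenAI Python SDK; the model name, product, and prompt content are placeholder assumptions, not recommendations.

```python
# Minimal sketch: retain context across turns by replaying the full
# message history on every request. Model name and product details
# are placeholders — swap in your own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are a helpful assistant for Acme's billing product."},
]

def ask(user_input: str) -> str:
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,    # full history = retained context
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("How do refunds work?"))
print(ask("And how long do they take?"))  # "they" resolves via the retained history
```

For long sessions you’ll eventually need to truncate or summarise the history to stay within the model’s context window.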


2️⃣ Outputs

Articulate what a meaningful response looks like for your use case. Identify what kind of fine-tuning or prompting you’d need to produce this response reliably.

Create a dataset of ideal responses. Crafting good training data is a design and copy responsibility: high-quality example data is critical to building the best experiences. Learn more about crafting responses here.
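To make this concrete, here’s a minimal sketch of such a dataset in the chat-message JSONL format that OpenAI’s fine-tuning endpoint accepts; all the example content is invented and should be replaced with responses your team has vetted.

```python
# Minimal sketch: store vetted ideal responses as JSONL, one example
# per line, in OpenAI's chat fine-tuning format. All content below is
# invented — replace it with your own examples.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise, friendly support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'. You'll get a confirmation email within a few minutes."},
        ]
    },
    # ...more examples, written and reviewed with design and copy input
]

with open("ideal_responses.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```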

Using an LLM

Bring in stakeholders who have product context to collaborate on the prompts and examples.

Iterate on prompts

Keep them specific and context-rich. Prompt engineering uses natural language, so it’s accessible to non-engineers and core to the product experience. If you’re using OpenAI, see how far you can get by testing your idea in the Playground before you start pulling from all your sources.
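As a sketch of what “specific and context-rich” can mean in practice, here’s a system prompt carrying product context, sent via the OpenAI Python SDK. The product facts and model name are assumptions for illustration; the same prompt text can be pasted into the Playground before you write any code.

```python
# Minimal sketch: a specific, context-rich system prompt. The product
# facts, tone guidance, and model name are placeholders for your own.
from openai import OpenAI

client = OpenAI()

context = (
    "Product: Acme Invoicing, used by freelancers. "
    "Tone: plain and friendly, no jargon. "
    "Policy: refunds are processed within 5 business days."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": "Why hasn't my refund arrived yet?"},
    ],
)
print(response.choices[0].message.content)
```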

Fine-tune your model

Do this if prompting alone is not effective. Fine-tuning can improve accuracy, give the model a deeper understanding of your product space, and let it respond to a wider variety of inputs.

📣 “Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, letting you achieve better results on a wide number of tasks. Once a model has been fine-tuned, you won't need to provide as many examples in the prompt. This saves costs and enables lower-latency requests.”

From Fine-tuning, OpenAI API documentation
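If you go this route, starting a job is a short script. Here’s a minimal sketch with the OpenAI Python SDK, assuming the `ideal_responses.jsonl` dataset from earlier; the base model name is a placeholder, so check the fine-tuning docs for currently supported models.

```python
# Minimal sketch: upload the dataset of ideal responses and start a
# fine-tuning job. The base model is a placeholder — check OpenAI's
# fine-tuning docs for supported models.
from openai import OpenAI

client = OpenAI()

# Upload the JSONL dataset created earlier.
training_file = client.files.create(
    file=open("ideal_responses.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off the fine-tune.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder base model
)
print(job.id)  # track progress via the API or the OpenAI dashboard
```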

