4️⃣ LLM Inputs

Assumed audience: You’re a product designer working on an LLM-powered feature, and you’re familiar with how LLMs work. Use this as a starting point for further exploration and innovation.

👋 Introduction

When designing for LLMs, there are a few additional states a designer needs to consider. These come from peculiarities specific to how LLMs work (e.g. slower load times, streaming capability) and from common interaction patterns we’re noticing across products (e.g. empty states that double down on educating users, explicit triggers, feedback). We’ll dive into the details of each state in this article.


👀 Discovery

The user finds your feature in a context where it’s useful to them. Common signifiers for AI are sparkles (✨) and bots (🤖).

🤞 Try to layer your LLM capability onto an existing feature. This ensures the user discovers it in the right context.

🤞 If your LLM feature is stand-alone, use an existing pattern, or ensure your AI signifier stands out to pique interest.

🤞 Maybe you don’t need discovery, user input, or a trigger at all. You can weave your LLM response into an action the user already takes frequently, and improve the experience there.

🤔 Empty state

When users click the entry point, the empty state should help them understand what they can do with the feature. This is important to get right: LLMs are a new technology, and your users may not be familiar with what they can do.

🤞 Give examples of what the user can ask. Context-specific examples are more useful than generic ones.
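
As a minimal sketch of how context-specific examples might be wired up (the contexts, prompt strings, and function names here are hypothetical placeholders, not a prescribed API):

```typescript
// Sketch: choose empty-state example prompts based on where the feature
// lives in the product, falling back to generic ones.
const EXAMPLES_BY_CONTEXT: Record<string, string[]> = {
  email: ["Draft a polite follow-up", "Summarize this thread"],
  docs: ["Outline a project brief", "Make this paragraph more concise"],
};

const GENERIC_EXAMPLES = ["Ask me anything about this page"];

function emptyStateExamples(context: string): string[] {
  // Prefer examples tailored to the surface the user is on.
  return EXAMPLES_BY_CONTEXT[context] ?? GENERIC_EXAMPLES;
}
```

The point of the fallback is that even an unrecognized context still teaches the user something, rather than showing a blank panel.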

✍🏻 Query & Intent

The user types their query. As they type, the system continually tries to identify what type of result it should provide.

🤞 Understand how intent detection is going to work in your specific implementation. Play out the different scenarios to stress-test the pattern you choose.

💥 Trigger

The user hits “enter” or the “send” icon. It’s a good idea to give LLM features an explicit trigger rather than firing an API call on every keystroke: calls to your model are relatively expensive, and your results aren’t going to show up instantaneously anyway.
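
The explicit-trigger pattern can be sketched as follows (`callModel` is a hypothetical stand-in for your model API, and the handlers are simplified):

```typescript
// Sketch: keystrokes only update local state; the expensive model call
// happens once, on an explicit trigger (Enter / send).
let draft = "";
let apiCalls = 0;

async function callModel(prompt: string): Promise<string> {
  apiCalls += 1; // stand-in for an expensive network request
  return `response to: ${prompt}`;
}

function onKeystroke(char: string): void {
  draft += char; // no API call here — typing is free
}

async function onSend(): Promise<string> {
  return callModel(draft); // exactly one call per explicit trigger
}
```

Contrast this with search-style typeahead, where a call per keystroke is cheap enough to be worth the responsiveness; with LLMs, the cost and latency profile flips that trade-off.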
