Emily Campbell - AI UX Deep Dive

By Dive Club 🤿

Summary

## Key takeaways

- **Shift to Human-in-the-Loop Design**: Designers now guide the relationship between user and AI model, helping communicate intent, check understanding, and adapt to needs, shifting from designers in the loop to human in the loop. [01:34], [01:48]
- **Wayfinders for Continuous Onboarding**: Wayfinders like sample galleries show how others use AI, and prompts provide starting places; important for onboarding and as the AI learns the user over time. [02:04], [02:31]
- **Tuners Enhance Prompts Pre-Submission**: Tuners like preset styles and AI-improved prompts let users confirm the model's understanding before submitting, ensuring intent is captured without wasted time. [04:19], [04:37]
- **Trust via Show-Your-Work Streams**: Stream of thought shows the AI's logic inline ("This is what I'm doing"), mimicking an intern showing their work to build trust before autonomous action. [26:02], [26:34]
- **Co-founder Inverts Onboarding Trust**: Co-founder analyzes Gmail to generate a sample email in the user's voice immediately, proving capability at step one and hooking the user before a deeper relationship. [12:38], [13:30]
- **Hire Curious Brand Designers**: Prioritize curiosity, self-direction to build, taste developed from sampling, and brand designers shaping AI personality for trust, like the Poke app's sardonic humor. [37:13], [39:16]

Topics Covered

  • Designers Guide Human-AI Intent
  • Wayfinders Enable Continuous Onboarding
  • Tuners Confirm Intent Pre-Prompt
  • Trust Via Show Work Patterns
  • Curiosity Trumps All Skills

Full Transcript

In one of the most popular episodes yet, Vitaly Friedman talked about what's next for AI design patterns. And in that episode, he frequently referenced Shape of AI, which is an incredible database of AI design patterns. So, I wanted to

get straight to the source and go deep with the creator, Emily Campbell, who's the VP of design at HackerRank. And she's going to teach us in this episode how to design great AI experiences because she's studied these products

more than just about anyone that I've ever seen. You know, if we think about our traditional like software interaction patterns, historically, it's been us as designers or product people making a guess about what somebody needs to do and then

putting that out there as some piece of software, some service that they use and then, you know, 99% of the time we're at least a little bit wrong. And so we want to learn faster. And so the whole iteration loop of creativity has been

around us trying to represent what somebody else is trying to do, render that intent, figure out how wrong we are, learn, and then improve it. And

there's always a lag. And what's happened now with AI entering our world is the people using our products actually get to interact with the system itself. And so the way that we think about what

then does our software need to do? What do our interfaces and our interactions need to enable? It's how do we help them communicate their intent to the model?

Figure out if the model understood their intent effectively and then adapt to their needs. And so the designer is really now guiding that relationship,

that experience, helping the user get the right context, the right input to the model, and then guide the model to meet the user's needs and constraints

and so on. And so it's like we've almost shifted from designers in the loop to now this like human-in-the-loop model. And so this is what I've been using to define and start to bucket the patterns that I'm seeing emerge into categories that help me then

translate that to like the user experience. So um you know first we've got what I've been calling wayfinders and these are the things that help me understand how to get started. So these are really important during onboarding,

but we also know that there's a continuous onboarding inherent in these experiences. As AI is getting to know you, it opens up new ways to interact with it that maybe wouldn't have been there or wouldn't have made sense to introduce early on.
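The continuous-onboarding idea can be sketched in code. This is a hypothetical model (the `Wayfinder` and `OnboardingState` names and the familiarity threshold are invented for illustration, not taken from any product discussed here): new starting places unlock as the AI learns the user.

```python
from dataclasses import dataclass, field

@dataclass
class Wayfinder:
    """A starting place surfaced to the user (e.g. a sample-gallery entry)."""
    label: str
    min_familiarity: int = 0   # how well the AI must know the user first

@dataclass
class OnboardingState:
    familiarity: int = 0       # grows as the AI learns the user over time
    wayfinders: list = field(default_factory=list)

    def visible(self):
        # Continuous onboarding: more wayfinders unlock as familiarity grows.
        return [w.label for w in self.wayfinders
                if w.min_familiarity <= self.familiarity]

state = OnboardingState(wayfinders=[
    Wayfinder("Browse the sample gallery"),
    Wayfinder("Replay a community prompt"),
    Wayfinder("Pick up your usual workflow", min_familiarity=3),
])

print(state.visible())   # day one: only the generic starting places
state.familiarity = 3
print(state.visible())   # later: a personalized entry point appears
```

On day one only the generic entries show; once familiarity crosses a threshold, personalized entry points appear alongside them.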

And so like for example, if I pop over into Shape of AI, which is where I've been cataloging all of the patterns that I'm seeing, some of the examples of wayfinders are like being able to see a sample gallery. Like, how are other

people using this AI? What prompts are they using? Can I actually go in and see how they got to this result so that I can then try and get to this result and then have a starting place where I can move forward. >> Real quick message and then we can jump

back into it. If you're still designing in Figma and rebuilding in Framer, then you're doing twice the work. With Framer's design pages, you no longer have to jump between tools. In the last page that I made for the dive website, I

explored and built entirely in Framer. You can sketch, iterate, structure, and publish to the web all from the same place. Framer isn't just a site builder.

It's a design tool for your entire workflow. And you can start creating today for free at framer.com. And if you use the code rid, you can

unlock a free month of Framer Pro. Big news: animations just launched in Mobbin.

So you can see how world-class apps use motion to guide, delight, and create seamless experiences. It's just another reason why Mobbin is an absolute cheat

code for your entire design team. We use it all the time, and I can't wait to start sending animation ideas to the rest of the team. So head to dive.club/mobbin

to check it out today. That's m-o-b-b-i-n. Okay, now on to the episode.

>> Prompt building is really hard. It's actually one of the most limiting aspects to interacting with AI because how much do you say? Are you saying enough? And there are things that are happening now that are like maybe you've

seen the pattern where AI can actually improve your prompt for you. >> Yeah.

>> Right. So, I've been calling these tuners. These include things like having preset styles. Yeah. Being able to say, here's kind of what I'm thinking that I want to do, and then having AI say back to you, okay, this is what I actually understand this action to be. Is this right? Should

I take this or do you want to modify it before you move forward? The user is now working with the model to make sure the model understands their intent before they even submit their initial prompt. And there's all sorts of reasons for

that we can get into. Um, but you know, this is why this flow has been so fascinating to me because it helps us understand that what we're doing is not just building software for humans. We're actually building a

meeting place between a human and something synthetic, something else, and then helping to guide that experience to be positive, to be efficient, to have

low friction and low cost on both sides. What I'm providing is a framework that I use to interpret what I'm seeing to try and work backwards, develop the language that I can use, my team can use so that we're all kind of starting with a common

language, a common understanding, knowing it's going to evolve. So this

model of like I come in, I've got some intent, I submit it, I figure out how close I got to it, I continue to see it work through a workflow or iterate through multiple versions of something, and I just spend a lot of time

iterating. Over time, what's happening is that's building my trust that the AI understands my intent, that I'm building an understanding of its capabilities and its functionality, and so I can go deeper. And that's where moving right in

my flowchart head, the actual interactivity with AI goes so much deeper than the surface. And all of our conversations about like, you know, hey, are we overusing the chatbot or, you know, what are the other services that

we should be thinking about? If we abstract away from software for a moment and we think about like what if I wasn't hiring AI to do something? What if I was

hiring a person to, you know, generate a draft of all of this user research that I just dropped into its window? Well, I don't necessarily expect to just give this to a human I've never worked with and then have them come back and give me

a good result. If I'm getting to know somebody, my first step is, hey, why don't you take a few of these and come back and show me what you've done and then we'll go a little bit deeper. Let me verify your work up front. And so,

you know, when we're first getting started with some AI product, we're using the interface a lot. We're directly saying, "Hey, this is what I want you to do." And then we're verifying that it actually understood

it. And so, we spend a lot of time here. And the chat interface is a very useful way of doing that because conversation carries a lot of data. Like you and I, we don't know each other that well, but we could get to know each other really

efficiently by just talking and um having you tell me about your history and you know me sharing my screen and all these crazy flowcharts I keep in my head. It's a very efficient way of building an understanding, building a

shared context, a shared language that you can branch off of. But as we start to get to know each other, we start to communicate in more nuanced ways. We

start to communicate through context. So, you know, you might make a face as I'm presenting something and that tells me that, oh, okay, what I'm saying is really boring or really interesting or we're starting to pick up on inferred cues of interactivity that we might even be unconscious of. Our user interface

kind of goes away. And so there's this skeuomorphic aspect to AI interaction and that even just at like these surface levels as we're thinking about the design, we shouldn't just be thinking about what is the right set of buttons

or fields or forms or whatever. We're also thinking about, hey, how quickly can we get to a place where the AI is actually able to get to that deeper

contextual understanding of the human and then start to show its understanding and let the human say, "No, actually, it's this. Okay, cool." And then eventually I can kind of get out of the way and let the AI go and do its thing.
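That hand-off, from confirming intent up front to stepping out of the way, can be sketched as a confirmation loop. Every name here is a hypothetical stand-in, not a real product's API: the model echoes its reading of the intent, and only a confirmation lets it act.

```python
def confirm_intent(raw_intent, interpret, execute, ask_user):
    """Echo the model's reading of the user's intent back to them and only
    execute once they confirm; any other answer is treated as a correction
    and re-enters the loop."""
    while True:
        reading = interpret(raw_intent)
        answer = ask_user(f"I understood: {reading!r}. Proceed?")
        if answer == "yes":
            return execute(reading)
        raw_intent = answer   # user corrected the intent; re-interpret it

# Toy stand-ins for the model and the UI, just to exercise the loop.
result = confirm_intent(
    "make the hero image pop",
    interpret=lambda s: f"plan for: {s}",
    execute=lambda plan: f"done ({plan})",
    ask_user=lambda q: "yes",
)
print(result)
```

The point of the shape is that execution is gated on the human's confirmation, which is exactly where trust gets built before the AI runs on its own.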

And that's what we see here is that these become really important up front.

I'm going to tune. I'm going to prompt. I'm gonna give some input. But over time, the AI starts to pick up on my logic. And my job actually becomes more like

observing, collaborating, overseeing, verifying, and maybe at some point even completely stepping away and letting AI run autonomously. The very notion of an individual user's ability to tell some model, no, you don't understand what I'm

trying to get at, actually do this instead. I don't need to wait for a design team to give me a call and do great discovery and go and ship it through the agile process and then release it and invite me to their webinar. I can just do it. And so, as we talk about like all this stuff with

generative UI and all the amazing ideas that brings about, we're kind of already there in a subtle way. People can >> directly interact with the program that

they're using, and that to me, like, alone is revolutionary. >> Where that leads me then is I naturally start thinking about, okay, well then how do the deliverables that we even associate with the professional role of UX designer change? You know, there's

a tangibility to that user interface level that almost provides a level of comfort because it's like, yeah, I know what I bring to the table. There's these boxes and annotations and flowcharts. And by exposing more of the system and

letting users interact with the system and mold and shape it, it gets a little bit more hazy in my mind of like what designers even own? How far do we go into that system? If we have ideas for how to improve the system and how users

interact at that level, what does that deliverable even look like? What are we creating? And I don't know, I have more questions than answers still at that

level. So, I started documenting these patterns in the fall of 2023. And what I was already noticing is that there were some places where things were starting to converge, but there were more places where things were divergent. And that's

kind of still the case. When we think about things like taste, like we keep talking about how designers need to have their own taste. You develop taste not just by having a sense of your own aesthetic and your own conviction and

opinion, but also by sampling as much as you can to understand what works and what doesn't and why because some things work in some cases and some work in others and they aren't always interchangeable. And so like one of the

framings that I've used recently is like having great taste isn't just knowing that food is good. It's knowing whether or not it needs a little more salt. And

the only way you can know that is if you've sampled it in all of its different variations: oversalted, undersalted, with this side dish, with this wine. And only then do you actually have the true taste of saying, it just

needs a little bit more of something. And I know what that something is. And

so I started to just catalog everything I was seeing for myself. So this is this table inside of my Notion. And I've got a whole bunch of

these that are just a mess of stuff. Anytime I see some new product pop up, um, if it looks interesting, if it looks like, oh, there's something to this that I

I haven't seen, I just throw it here. And then every week I go through 20 or so of these and I start to catalog everything I'm seeing. This is currently

my desktop. These are all clips that I've been capturing. So here's an example of a product I recently went through. Have you heard of this

co-founder? co-founder.co, I think? >> No. No. >> So their whole thing is they create agentive-styled workflows through plain language instead of

building them out like you would with the n8n product or Zapier and so on. So, you come in and you add your Gmail and it immediately starts to give you

context about you. I thought this was fascinating. Like, this was, >> oh, this is cool.

>> Isn't this cool? I'd never seen another product go about it this way. A lot of times when companies are onboarding you in, they get your information and then they just start asking you questions because they're trying to build context about you. But what they've done is they've inverted that and said, "This is

what I think I know about you. Let me just prove to you that I'm good at what I'm doing at step one of onboarding." And so I put in my URL and it just

immediately starts to spit stuff back. Then it connects to Gmail. Okay, cool. I

do that with Notion. I've done that with ChatGPT. I'm thinking this is going to allow me to like pull up a doc from my Notion and then connect it to a

workflow. No, man. It immediately told me how I email. So, it took this information from my actual inbox and then just created a sample email in

my voice and then I can edit this. So, right off the bat, it's going through that iterative loop where it's saying based off the content and context you've provided, this is how I can serve your needs. Is this accurate? And if not,

let's figure that out as soon as possible before we go any deeper into this relationship. Like, I am hooked. I am already hooked in this onboarding

process because now I want to know what's behind this. I want to know how you can keep up this context layer. And then um it does this through the

calendar. It explains its memory and then it puts you out into the actual workflow builder and you describe what you want to do in plain language. And

again going back to this idea of like there's a skeuomorphic aspect to this new interactive language. This is how I would interact with somebody that I was

interviewing to be a personal assistant, right? I wouldn't expect them to go out and email my accountant or my best friend in my voice before I had a chance

to see their work. You know, hey, how do you interpret this? Um, by the way, I don't sign off that way. I actually prefer to sign off this way. But this AI is already doing it. And so, it's emulating that human experience of show me your work. Let me build trust. And then I'm going to give you more things

to do. And then I'm going to give you more context and more data. And that allows me to essentially like grow with the AI's context of me, like grow in that depth of interaction as opposed to it taking all this time to get set up and

then hitting me up with a, hey, go and fill out five forms about your personal tone of voice. No man, go to my email. My email contains my tone of voice. It's

such a simple mental model, but hearing you kind of create this mental picture of as a designer, you are creating a meeting space, facilitating this interaction between the user and then it's kind of weird to say, but like a

real person, like an assistant, and what would you do? How would you facilitate that interaction? There's a clarity about that that I I really appreciate. Actually,

>> it's also important for us to think through this because it helps us understand the risk associated with it as well. I have a 10-year-old son who had downloaded this app that I thought was about K-pop Demon Hunters.

And the next thing you know, he's running into my room crying because the main person in K-pop Demon Hunters was trying to date him. And it was just this

wake-up moment for me that the incentive model of these products is to get data about you and to build a relationship of trust so that it can go deeper and deeper into your ecosystem and into your world. Now in a business context, that's

really great. Oh my gosh, I suddenly have this personal assistant that totally understands how I schedule my meetings and I don't need to go and tell it. It just knows. That's a remarkable step forward. But when you

translate that onto a consumer use case, when you translate that into a situation where somebody who's looking for a little bit of warmth or a little bit of

information starts to find this model that really gets them, you can end up in some really dark places too. And so coming back to this mental model here, what happens at the interface and the context that we shared affects things we can't see. And then

even further, it's like, what happens when these agents start interacting with each other within their own content, in their own context, in their own languages? Which is actually happening now in research labs: synthetic stuff interacting with synthetic stuff. How do we design for that? So you said earlier we have more

questions than answers. Like, I think we have to, because we are at the very early phases of a massive transformation in how we share this digital world, which is

our world, essentially. There really isn't a digital-physical barrier anymore. How do

we share that with synthetic stuff? >> Well, I kind of want to just tap into your perspective as somebody who gosh, I mean, you're putting a lot of effort into keeping up to date with everything that's happening and studying these

patterns and what's working and what's not working and some of the trends and how we're evolving the way we think about interface design with all of these crazy capabilities that we're still wrapping our head around. What are some of the things that you find interesting or some of the more sophisticated

patterns where you're like, "Oh, you know, like that's something worth double clicking on or leaning further into." And if it looks like us just doing a bunch of screen sharing and popping through examples, I think that would be

amazing. I've been designing products every day for the last 15 years, but in the last 6 months, everything has changed. With AI in the mix, I'm cranking out ideas faster than ever. But none of that matters if I can't get the

feedback that I need to get the team aligned. And right now, getting async feedback still kind of sucks. So, I'm building the product I've always wanted, and it's called Inflight. I use it every day to share ideas and get feedback from

the team, and it's totally changing the way that I work. So, I'm excited to show you. Right now, I'm only giving access to DiveClub listeners. So head to dive.club/inflight

to claim your spot. >> Anything that gives humans control, and particularly gives control to humans who aren't super technical. That's the most interesting thing to me right now

because that affects first of all just how do you keep non-technical people from getting completely subsumed? Actually, it's not even just how do you help people not get subsumed by these models? And then how do you help

them feel like they are the ones always in charge? So, in this like tuner category, um, a couple that stand out. So, I mentioned the prompt enhancer: removing the sense that I always need to have the answers when I'm starting to

interact with AI. I can come in and I can say, "Hey, what do you want to create?" And I can just give a really high-level overview. And then if I hit

enhance prompt, it actually writes essentially a PRD for me. So Replit and Bolt and Cursor and like their planning modes. Um, they are all starting to

emulate this idea that you don't need to necessarily be the product manager for AI. Your job is to just say this is what I'm looking for. But AI is going to say

hey, let me just show you what I'm going to do before we go any further. Number

one, so you don't waste your time and tokens on something that's actually not what you're looking for. But also, hey, if you want to modify this or if you want to do this again, it gives that agency over to that person who's like, okay, I actually know what a good prompt looks like now. I don't need to go and

follow some influencer on LinkedIn and buy their, you know, prompt workbook. I

can literally just go to the source. I can just go to the model. We're seeing

this with like this is florafauna.ai and they built this really early on into their nodes. And it's brilliant because again, when I'm in this creative mode, the idea that I would leave this creative mode that I'm

in to go into some analytical mode to go and like construct the perfect prompt, it doesn't make sense. Instead, meet me where I'm at. That's the human experience part of this. Just kind of give me enough for me to run with it, and then we can keep iterating and move it to where you want to go. So, that's

one that's really, really interesting to me. Um, these parameters, I don't know if you've seen these start to pop up. This idea that I can like adjust the temperature on something is really fascinating to me. So, like if I'm in ElevenLabs, for example, I can describe some sound and then I can actually say, I want my prompt to highly influence the outcome, or I want you to just kind of use this as a general nudge and then I want you to run with it and go and

create something out of it. Um, Midjourney was one of the first products to start to introduce these in the interface. So, I can say, "Hey, I want this to have a lot of variety or a little bit of variety, a lot of the Midjourney stylization or my personal stylization, or keep it pretty low-key."

So, these are all examples of parameter selectors that I've been collecting over the whatever year and a half or so that I've had this folder. And you'll notice some of these are like they're not literal temperature sliders. They'll

just give you these defaults, but others are like this one's really interesting.

So this one's Airtable. So if I'm having AI like generate um some prompt that's going to roll through my table, then I want to generate the prompt and then it's going to autofill it down the table. I can go beyond just

saying here's what I want you to go and do. I can actually say, hey, I want you to have some variability in this. So, for example, if you're developing a product and I don't know, maybe it's like an internal tool or maybe you're

creating personas out of user data, like that's a use case I'm really fascinated by is how do you convert analytical data and translate it into something that I can interact with, like what is the spectre of this person, the digital twin

of some data footprint that exists in my analytics somewhere. Well, I might want to have this be, you know, pretty varied. Like, I want different personalities. I want it to be true to the data. But then in terms

of coloring in the rest of the box, like please create a really colorful set, or maybe I really just want you to stay true to the data and not try and give me

other variability. I can start to control that inside of the interface.

And so it basically takes this idea, like, again, instead of having to write the perfect prompt up front, I can convey just enough intent and then have AI tell me, okay, this is what I think you're saying. I can communicate kind of directionally where I want to go and then I can give it some guidance. The Replit example is so interesting to me because I think a lot of the times I'm looking at sliders and I'm trying to figure out what the differences are between the options, but I haven't seen it presented like this where you have almost like a feature list where it's really clear what is changing from each option.
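The presentation being praised here, presets that spell out exactly which features change between options, can be sketched as explicit feature sets plus a diff. The option names and features are invented for illustration, not taken from any real product.

```python
# Each preset spells out its feature set, so the UI can show exactly
# what changes between options instead of an unlabeled slider stop.
OPTIONS = {
    "rough prototype": {"pages": 1, "auth": False, "database": False},
    "working draft":   {"pages": 3, "auth": True,  "database": False},
    "full build":      {"pages": 5, "auth": True,  "database": True},
}

def diff(a, b):
    """Features that differ between two options, as (from, to) pairs."""
    return {k: (a[k], b[k]) for k in a if a[k] != b[k]}

print(diff(OPTIONS["rough prototype"], OPTIONS["full build"]))
```

Rendering the diff next to each option is what turns an opaque slider into the "feature list" presentation described above.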

>> Yes, anything that can give that kind of context. Here, I'll give you one more example. Just being able to select a model. So there was this whole brouhaha

when GPT-5 dropped and instead of being able to select from the broad assortment of models, they introduced an automatic model router, which is now

taken on. Um, you're going to find it in a lot of these products. >> Mhm.
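An automatic model router with a manual override might look roughly like this sketch. The routing keywords and model names are placeholder assumptions, not how any real product's router actually decides.

```python
def route_model(task, override=None):
    """Pick a model from coarse signals in the task description, but always
    let the user override the automatic choice."""
    if override:
        return override            # the human stays in charge of the choice
    task = task.lower()
    if "photo" in task:
        return "image-model-photoreal"   # photographic accuracy
    if "artwork" in task or "image" in task:
        return "image-model-stylized"    # more analog, generative looks
    return "general-text-model"

print(route_model("photoreal portrait of a diver"))
print(route_model("abstract artwork"))
print(route_model("summarize this transcript"))
```

The override parameter is the interesting part: the router saves the user from needing to read eval reports, but never takes the choice away from them.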

>> How do I know what model to choose? Like if I'm working with um, maybe I'm working with text, maybe I'm working with images. Like, Krea does a really good job with this of just telling you, hey, use this model if

you're looking for human accuracy in photography, but if you are producing more like generative artwork that's a little more analog, maybe use this other model. So that's really interesting to me too is just how do we help people see

the stuff that they just don't know because they're not reading all of the eval reports coming out of these labs. You know, >> one of the topics that I was hoping to

get your take on is just the category of trust and transparency as we're working on more agentic systems even and I'm curious if there are certain patterns or trends that you're seeing. >> AI can only serve my needs if it has

access to my content. I'm only going to give it access to my content and my context and who I am and who I know and how I interact with them if I trust it.

It's almost like the new usability becomes how quickly can you build trust in a um legible way. So the user knows okay something's happening that I can

understand. And so this is like my high level. We want to be able to show that the AI can meet the user's needs. The user gives us data. Like, really solid onboarding, you get a little bit of data, immediately that context is derived by the AI and it's able to return something a little more personal

or a little more adaptable to them. The better this adaptive experience is, the more they trust it. And so it's a combination of like how the model performs, but also its wrapper, the experience and the interface and so on. So some of the patterns that stand out to me, this is this category of

governors, which is where you see a lot of trust. And then I've also got these, literally, trust builders. Um, so I'll just talk about a couple of these that I'm seeing. Um,

we've all become pretty accustomed to this idea of seeing stream of thought, like stream of consciousness. This is the number one thing um that I'm paying attention to right now because it's changing really fast and

it's changing subtly. For example, when you used to use ChatGPT, it would just say something like, "Hey, I'm thinking. I'm searching, and now I'm thinking, and now I'm searching for something else." And then um the same

week that OpenAI dropped Atlas, they moved all of this logic into an inline place inside of the actual interface. So, it's actually not just saying, "Hey, this is what I'm doing," but it's telling you up front in

actual like words, "This is what I'm doing. This is where I'm looking. This

is what I'm learning. Again, if we abstract it to that skeuomorphic lens, like if I hired an intern, before I trust that intern, I want to see it. I want to see their work. So, I'm going to meet with them daily. Hey, show me what you

were working on. Show your work. Okay, interesting. I see that you did this.

Listen, let me talk to you a little bit about, you know, border radii or something. I'm going to teach you something. Then go back out, come back

to me, show me that you learned it. But after I've seen that a few times, I'm going to start stepping away. So when you think about like these agentive

browsers like ChatGPT Atlas, before I get to a point where I'm going to just let AI run wild inside of my life, I need to make sure that it's doing something legible, that it's doing something good. And so being able to

actually show the work up front becomes really important. The idea of planning mode is another one of those. And again, it's not just trust because of the

logical side. So, Replit does, I think, the best job out of any of the generators at this particular pattern. If you tell it what you want to build, before it ever builds something, it'll give you the option of: okay, I can go and create a really rough prototype, or review my plan and I'm going to go build the actual thing. So,

it's telling you its logic. It's saying, hey, this is my plan of action. Like,

this is how I'm going to go build this thing. But it also says, do you want to just kind of see where I'm going before I go and spend all these tokens creating this thing? And so you are constantly in the director's

chair. You have the ability to go, "Oh, wait a minute. I don't actually want you to do this, or I want to revise the way you're thinking about this before you actually go in and build out, you know, whatever this web app or these changes to my application are." That's how we trust people. And so

we should think about that in terms of these interactions too. But

then there are other layers of trust, because there's the "I trust you to go do something on my behalf." But this is back to that whole idea that we're not just talking about humans designing for humans anymore. We're now

designing for a world where humans are interacting with nonhumans. The

synthetic stuff. So, patterns like consent: how do you know that something is using your data to potentially build a contextual understanding of you if you are not the person who is directing that thing? And the way we've been

approaching it, honestly, is pretty bad. Very few companies do this well, especially these audio recorders and transcribers. A few of

them, like Fireflies, send an email ahead of time with an opt-out form. But a lot of these are consent in the moment. They'll just say, "Hey, we're using this. Just so you know, I guess you can choose not to join this

interview if you don't want to be recorded." But that's not really consent. That's, you know... >> And they just offload it to the user, too. I basically at this point just assume that every meeting that I'm in is being recorded with Granola, which is

crazy, right? That happens so quickly. And then you think about wearables: the Limitless Pendant originally had this incredible feature where it would only record once it actually heard consent from another

person in the conversation, even if they didn't know you had this pendant. And they've removed this by default. It's still available as an option, but they've removed it by default, which I think is telling anyway. So there are the privacy concerns that have always existed, but now we have this additional thing where these models are constantly collecting data, mapping it to other information

about people. If somebody's wearing Meta glasses, they know me, because I've had photos on Facebook since it launched in 2006. And if you come up to me at a party and you have glasses on and you're talking to me, and that data is going

back into their models, then information about me, where I am, who I'm talking to, what I'm wearing, what I'm drinking, you name it, is feeding back into their models and most likely entering their graph, where it can be used to send me

advertisements that I never consented to and had no idea were even in the ether. And not to get too dark, but this is where it's such an incredibly powerful and high-agency experience that does all these amazing things, and it's also so dark and scary. And it just points to the

importance of us as designers, just to kind of put a bow on it, like knowing what's below the surface. Knowing that when we're designing these great experiences that help the person who bought the AI product go and do

something, the data being collected has other impacts that may affect people well outside of that initial experience, but ultimately we are responsible for it. >> I think I want to take this opportunity to zoom all the way out then, because

given everything we're talking about and how quickly the world is changing and the stakes attached to the modern practice of design, how is this shaping the way that you show up as a design leader and the way that you even

think about managing and leading and investing into an org.

>> I work at HackerRank. We help people find jobs: you come into our platform and you demonstrate your skills, and there's a lot of concern about people coming in and cheating, you know, or doing things, intentionally or not, that could be seen as influencing the results in an improper way. So one idea was a co-pilot that essentially could be your personal proctor, there just to say, hey, just so you know, when you switch tabs looking for syntax, it's actually

going to be registered in this negative way and so you might not want to do it.

And the way that we approached this problem was by first saying, "What does a great experience look like with a real proctor? What does it look like when I get started? What do I want to hear? What am I afraid of? If they say something, how could I misinterpret it? What happens if they need to intervene?

What types of questions might I ask? Would they be able to answer that question?" And so we actually created a service map of what an amazing human-centered experience could look like. And then we said, how do we translate this into software? It creates this new framing: we've been building software as a service, and now it's almost like, design the service first, and then say, how do we translate this to software? So that's one thing we've been doing: we've just been abstracting a lot, and that's, you know,

back into this skeuomorphic element. So, going through that and actually thinking about the service first, and then saying, okay, what is the software layer? What is

the AI layer? What context do we already have? How can we collect it in the most seamless way? That's something that we're doing a lot of. Another

thing is realizing that the design of that experience is not limited to the interface. The way that the actual prompt of the software itself, or some feature, is configured is going to dramatically change that user experience. Understanding how different types or lengths of context, or things shared at certain times, affect how the model responds to you and how easily it can adapt to you and give you the right options. You know,

that all affects the user experience. So, we've been trying to get designs into code as fast as possible. Not just because of this whole "should designers code" thing, but actually more because the model itself is now part of

the experience. It's actually a party to the experience. And so we need to understand not just how the user intersects with this thing we're creating, but how this third party affects their experience, and how do we

design for them, or at least design for the user's ability to direct it more effectively. >> I want to double click on the piece where you talked about how you're trying to get into code a little bit more quickly, because that's a theme that I've been hearing. But I'd like to understand how that is changing the

design process and the way that designers in your org collaborate with different stakeholders. What are some of the deltas that exist, given that change?

>> There's a little bit of "past is prologue" here, because I don't know that we're that far off from where we were 15 years ago, when we didn't have all these incredible prototyping tools, and so designers often had to get to a good-enough

version of something in HTML, CSS, and JavaScript. Designers over a certain age can all tell you: I have rudimentary CSS, HTML, and JavaScript, because it was the most effective way for me to communicate my intent upfront.

And so now we're seeing that be abstracted into these prototyping tools.

It's imperfect. I'm just going to go ahead and say it. Every single team is operating differently. We have different levels of conviction and understanding. And so it really does look different from team to team.

But on the teams where we have either a lot more green space to play with, so AI is a little bit more native to the experience, or where we have a lot more conviction and understanding about the market and who we're

designing for. Um, yeah, we're moving pretty quickly into at least some sort of living prototype. And the tools we're using there are mostly Figma Make and Lovable. Those are the two that most people are using, just

because of the convenience factor. I was talking to a designer on my team this morning who's working on something for our AI data product. And

he got to a point where he was like, it's just so much easier for me to push this into Figma Make and then show the engineer, hey, this is kind of how I'm thinking about this interaction, than trying to prototype it. Nobody thinks

that that's going to be the final version. We're not even going to bother going into Dev Mode. But it does create the ability to just convey what you have in your head a lot faster. And so that's where those tools are

fitting in. In terms of actually working within the codebase, we're just starting to tiptoe into that. And a lot of the reason is that if you're not set up to do that from scratch, individual teams, individual front-end,

you know, design engineers or front-end engineers working closely with a designer can get pretty far. But because we work in an enterprise context with really strict accessibility standards and so on, we want to move forward together. And so right now we're doing a lot of the operational groundwork to let us be able to move more of design into our actual development tools. So we have a goal of all designers being in Cursor by the end of 2026. >> Given all the uncertainty with workflows and tooling and how collaboration is changing, how does this shift what you're prioritizing when you're thinking about the types of designers that you want to hire to,

you know, set off on this journey with? >> That's a deep question, because it really depends; design is now important in a lot of different ways. I love it when people come to an interview and can tell me with their eyes lighting up, like, this is something I vibe coded. This is something I'm building. The most important skill is curiosity right now. Curiosity, followed very quickly by a go-get-them attitude, you know. So, I was curious about something, and then I went and learned it, and then I got stuck. Cool. I

want to have that conversation. But the fact that you have the self-direction to say, "I am hungry to learn something new, and then I'm going to go and try and figure it out," that's the most important skill set. So I tell

younger designers: just go build. Just go and find something and go and build it. And it's okay if it doesn't really work. It's okay if you would never put your personal credentials inside of it, because what you're doing is showing how you can start to shape and mold the clay of this new thing and begin to develop your own understanding. So that's a big part of it. We

talk a lot about taste, but again, it comes down to not just having an opinion. There are a lot of people with great aesthetics who really struggle to translate beyond that aesthetic, or beyond the immediate problems that they've had to apply it to. And so we are spiking on visual design. It's

just a must-have, like really strong portfolios. There's no excuse not to do the basics. But then beyond that, I just want people who have started to develop their own language of what's working and what's not. So someone who can say, "Hey, I spent the last week just trying out all these different products, and I want to tell you why this one worked better than that one." That's another

signal that this person has really high agency and can start to see beyond their own lens or their own experience. So it gets back to that curiosity piece. We're

definitely leaning into brand, but particularly brand designers who are thinking about a wider experience, because brand doesn't stop at the website or at a really awesome header on some social

media site. Brand translates into that trust layer. What is the personality of AI when you first meet it during onboarding? That's brand. The

Poke app from Interaction. I started using that earlier in the summer, and it changed my mind about how we think about chat-based interactions, because it showed

how the personality of a model can represent a company, a company's humor, a company's sense of the world. This is not some tool that's just trying to burrow into my personal data. Like, maybe it is, but it's fun, and it's

going to try and understand me and like I like to be around people like that. I

like the kind of sardonic humor, so heck yes, I'll give you my money. Um, this

was before everybody was getting it down to $1 a month, so I'm a little bummed about that. But to me, brand designers who are thinking about every single touchpoint, who are thinking about using the product as part of a community, it's part of being invited into this vibe club, where yes, I do want to give you my data. I do want to give you my context, because I trust

you because I want to be part of this group. That's brand. And so we need brand designers to be thinking beyond just the canvas in front of them. And I

guess the last one I'd say is just people who are comfortable with ambiguity and are comfortable inviting others into ambiguity. But I feel like that's always been a necessary part of design. Whenever I have

stakeholders telling me, this is what we should do, this is my opinion, my next step is: great, let's go sketch it out. Let's go do eight-ups. I'm going to go spend 90 minutes with you, and we are going to come up with as many different ideas for how to approach this as possible, so your voice is at the table. People

realize pretty darn fast how hard it is to actually come up with viable concepts and play them through a journey. And so it gives them a deep understanding of you and how you're working, but it also starts to create that shared language and that shared view of like what are we actually trying to do? What is the

actual endgame here? It's not your opinion versus mine. It's: who is this person we're serving, and how can we best serve them? And so being a designer who is comfortable inviting people into the mess, holding space for the mess, and not drowning in it becomes just a superhero capability right now.

>> I love hearing you talk about curiosity, because it's so evident that you're putting it into practice too. You know, you have all of these folders and screenshots, and as a design leader, you know, you might be responsible

for fewer pixels than a lot of the people who are making these interfaces, and still say, you know what, I'm going to play with all of these and develop an opinion on them, and even hone my taste not at the interface level, but

at the model level and how these more natural language interactions look and what they feel like. And so I've just really enjoyed hearing more about how you think and approach the practice of design and how it's all changing and

just appreciate your perspective. So thank you so much for coming on and sharing it with us today, Emily. >> Yeah. Yeah. No, this was really fun.

It's fun to invite people into my own little mess, I guess.

>> Before I let you go, I want to take just one minute to run you through my favorite products because I'm constantly asked what's in my stack. Framer is how I build websites. Genway is how I do research. Granola is how I take notes

during crit. Jitter is how I animate my designs. Lovable is how I build my ideas in code. Mobbin is how I find design inspiration. Paper is how I design like a creative. And Raycast is my shortcut every step of the way. Now, I've hand-selected these companies so that I can do these episodes full-time. So, by far the number one way to support the show is to check them out. You can find the

full list at dive.com/partners.
