Podcast: Play in new window | Download
Think AI is just a magic button that spits out perfect results? Think again! The problem isn’t the tool, it’s the way we talk to it. Dustin Stout, founder of Magai, shares game-changing strategies for crafting smarter AI prompts that yield more tailored results.
Mastering AI Prompts for Better Results Episode Transcript
Rich: My next guest is an entrepreneur, AI enthusiast, and founder at Magai. Since starting his blog in 2011, he has built a successful career as a full-time digital marketing consultant, speaker, and has created numerous products that help digital creators accomplish more. His current focus is Magai, the world’s best AI tools in one place.
Today we’re going to be talking about how you can improve your AI prompting game with Dustin W. Stout. Dustin, welcome to the podcast.
Dustin: Thanks, Rich. Happy to be here.
Rich: Now, before we jump into better prompting, I want to talk for a moment about your company, Magai. Now, I have explained this to other people as a smorgasbord for AI, where people can test and compare different AI models side by side, both on the LLM, the large language model side, as well as the image and even video generation.
Would you say that is an accurate description, or would you like to add anything to that?
Dustin: I will say it’s the most unique description I’ve ever heard. I have not heard ‘smorgasbord’ yet, but it is accurate in that we do… you know, another way I’ve heard people explain it is it’s an aggregator of AI tools. The way that I tend to explain it is it’s a bundle. Just like you would sort of bundle your…
Rich: Please don’t say, ‘home and auto’.
Dustin: I was going to say your home internet and TV subscription cable, but it’s really much more than that.
So the way that I’ve explained it in the past is, imagine if you could get Netflix, Prime Video, Paramount+, Max, and all the top streaming services in one app with one subscription. That would be the equivalent of Magai, because we bring you the world’s best tools, the same exact ones you’re paying for, but we wrap them in our beautiful interface so that you don’t have to relearn a different thing or re-log into a different thing every time. You get all of them in the same place, with the same exact powers and abilities, all wrapped in a single beautiful package.
Rich: And I will say that I have almost every one of those streaming services you mentioned through my Roku, but I have to pay for each one. And you have one set fee that gives you access to all those different tools. So if anybody’s at home going, well, I could just do that myself. Well, no, you can’t. This is a much better deal.
But so that’s Magai. Definitely recommend people check that out. But I want to shift over to talking about prompting, better prompt engineering. And let’s start with the LLMs, the large language models like Claude and ChatGPT and Gemini.
Now, I hear experts talking about prompt engineering. That can sound intimidating, especially if someone hasn’t had a lot of hands-on experience with AI tools. Can you give us some examples of how to transform a simple or basic prompt, like “write me a blog post about attic insulation”, into something that would actually create significantly better output?
Dustin: Yeah. So there’s a lot to it, and I’ll do my best to make sure it’s as simple as possible. Because I think the biggest problem that people have when it comes to interacting with AI models is they want to treat it like a computer thing.
And what I mean by that is we’ve been trained over the years to speak a certain way to Google search. You know, we type in one thing, and we expect to get a bunch of results, or we have apps that give us buttons to click, and we expect to click one single button and get everything we need out of that one single button click. But generative AI, particularly conversational AI like ChatGPT or Claude or Gemini, is predicated on the idea that AI understands natural language in the same way that a human being understands natural language.
And it’s built to respond to you and have these back-and-forth dialogues. It is not meant to be a single click easy button to give you whatever you need. And there’s many reasons why that doesn’t work and why people get frustrated because they think, oh, I asked it to write me this social media post and it was really garbage. Didn’t sound anything like me. It wasn’t any good. And that’s because the AI doesn’t know anything about you. If you don’t give it the proper context, if you don’t give it the proper backstory, and if you don’t give it specific direction, you’re going to get generic garbage content out of it.
And so really it comes down to this. Understand what it is you need to provide it in order to get that result that you’re looking for in the same way that you would give say an employee, the backstory. What I ask people to do is sit down and think about if I were giving this task to a person.
What would I need to preface this task with before they can do it? Right? I wouldn’t walk up to a random stranger on the street and say, write a blog post about thermodynamics, right? Like, that doesn’t make sense. Even if I were to hire the smartest college student straight out of college, he’s got his doctorate in general studies, he knows everything about everything. I wouldn’t sit him down and say, okay, first task, write me a blog post about thermodynamics. That’d be a really poor direction. That would be really bad to just give them that simple, shallow instruction.
What you do when you hire an employee is you sit them down, you give them the employee handbook. This is our business. This is the name of our company. This is the product or service that we deliver. This is the customer that we serve. These are our guiding principles. This is what our objective is. And this is your role in the company. You would give it a role like you give that person a role. You are the director of content creation or whatever that role is. And you do these specific things to help us achieve these specific outcomes.
If you don’t give AI that same level of attention and backstory, guess what? What you get out of that AI is going to be pretty much unusable. And you’re either going to have to do a lot of work to massage it into something usable, or you’re just not going to use it at all. And then you’re going to go back to your old ways. So the premise of everything when it comes to AI is, understand you need to be detailed and give it the backstory it needs for it to do its job.
So where do you start? The first thing I always say is, first and foremost, you need to start with the role. AI, by its nature, is trained on billions of parameters, all this data in the world. So when you ask it a question, it’s drawing on all that knowledge, generalizing all of it, and just trying to come up with the best answer based on what you’ve given it.
However, there’s a trick that many high-level AI experts have learned: first tell the AI what role it’s playing, or ask it to put itself in the shoes of a specific knowledge expert, say, a PhD in thermodynamics, for our weird example. Don’t know why I picked that one. So if you were to say, “You are a PhD in thermodynamics, and you have a background in writing very easy-to-read content about thermodynamics,” you’re starting off at a much different place than with a generic AI.
So what happens when you give the AI a role? Essentially what you’re doing is you’re telling it instead of drawing from this wide gamut of training, let’s focus your knowledge on these specific areas. And so when the AI responds, because it does listen to you, it does a good job of playing the role you give it. It’s narrowed its focus, so to speak. And so it’s the difference between setting a wide focus and then setting a narrow focus for it to pull from. The results are much better, right?
So first and foremost, we want to start with what role does the AI need to play in order to accomplish the task that we’re giving it? That’s number one. Number two is, what backstory does it need in order to accomplish this task? Does it need to know about your business? Because it doesn’t by default. Does it need to know the services you provide? Does it need to know the customers that you serve? And so you really want to ask yourself, what does the AI actually need to know about me and my business in order to effectively complete this task.
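Dustin’s role-plus-backstory structure can be sketched as a reusable template. This is a hypothetical illustration, not anything from Magai; the helper name and all the business details are made up for the example:

```python
# A minimal sketch of the role + backstory + task prompt structure.
# Every name and business detail here is a hypothetical placeholder.

def build_prompt(role: str, backstory: str, task: str) -> str:
    """Assemble a prompt that gives the model a role and context before the task."""
    return (
        f"You are {role}.\n\n"
        f"Background you should keep in mind:\n{backstory}\n\n"
        f"Your task:\n{task}"
    )

prompt = build_prompt(
    role="a PhD in thermodynamics with a background in writing easy-to-read content",
    backstory=(
        "Our company, Acme Insulation (hypothetical), sells attic insulation "
        "to homeowners in cold climates. Our guiding principle is plain, "
        "jargon-free advice."
    ),
    task="Brainstorm talking points for a blog post about attic insulation.",
)
print(prompt)
```

The point of the template is only ordering: role first, then context, then the task, so the model has narrowed its focus before it reads what you want done.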
And so: role, context, and then it’s a matter of writing a prompt that actually articulates your given task. Now, if it’s writing a blog post, one thing you have to remember about AI is that it can be really good if you give it a role and a backstory, but it’s still not great at churning out a bunch of content at once. All these AI models have limits, and one of those limits tends to be the output length of what it’s giving you. If you ask it for a lot of output, say a full blog post of 1,500 words or so, it might attempt that task for you without questioning it.
But what you’ll notice is it’s very thin and shallow content because it has an output limit. It’s trying to stay within that output limit. So it’s going to try and do the whole thing in one shot, and it’s going to be as thin as possible for each section.
Now, those of us who’ve been doing blog post writing and content writing and high-level SEO for a long time will tell you that thin, non-deep content is not going to do you any favors. You want in depth content that actually drives value, that actually answers questions and shows deep authority on a given subject. And so, in order to do that, you can’t just ask it for a single output. Again, going back to this idea that AI is better in a conversational environment.
So what I often tell people to do is understand what your task is and then break that task down into as many sub-tasks as you think is necessary. So for the example of a blog post, don’t just say, “write me a blog post on this topic”. Even if you’ve given it the role, even if you’ve given it the backstory, first have it write an outline and take a systematic approach, the way an actual human writer would approach that task.
First, they would write an outline. Or maybe even before that, they might just brainstorm some talking points. That’s what I often do with AI when I’m coming up with a blog post. I start with, what are some talking points around this topic of thermodynamics? What are some really interesting or needed things that we should talk about if we’re writing about thermodynamics? And then have the AI brainstorm with you.
Once that’s done, you can say, okay, now turn this into a full outline for a blog post, and then it’ll write the outline. What you’re doing here is allowing it the full space of its processing power to get as deep as possible on small tasks that it can do in small chunks, rather than 1,500 words at once.
And so what I do after that is once I’ve kind of looked at the outline and maybe I have a few suggestions, I’ll say change this, add that, take away that, conversationally, back and forth. As you do this back and forth, the AI gets better and better. Then you want to step by step through the outline. So, all right, the outline’s great. Now let’s write the introduction. Okay. That looks good. Now let’s write section two. Okay. Change this, change that. That’s good. Now write section three, step by step through the process.
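The step-by-step workflow Dustin describes is, mechanically, just a conversation history that grows with each request. Here is a sketch of that shape; `call_model` is a stand-in for whatever chat API you actually use, and its reply here is faked so the example is self-contained:

```python
# Sketch of the step-by-step blog workflow as a growing conversation.
# `call_model` is a placeholder, not a real API: a real implementation
# would send `messages` to an LLM and return its reply.

def call_model(messages):
    # Fake reply that just echoes the latest request.
    return f"[model reply to: {messages[-1]['content']}]"

messages = [{"role": "system", "content": "You are an experienced content writer."}]

steps = [
    "Brainstorm talking points for a post on thermodynamics.",
    "Turn those talking points into a full outline.",
    "Write the introduction based on the outline.",
    "Now write section two.",
]

for step in steps:
    messages.append({"role": "user", "content": step})
    reply = call_model(messages)  # each call sees the whole history so far
    messages.append({"role": "assistant", "content": reply})

print(len(messages))  # → 9: one system message plus four user/assistant pairs
```

Because every call carries the full history, each new section is written with the outline and the earlier sections in view, which is what makes the back-and-forth refinement work.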
Rich: So by doing that, would you also suggest that maybe it read what it’s already written before it goes on to the next section? Because when I’ve broken it apart in times like that, sometimes I find that it starts hitting the same notes over and over again. It’s repeating itself. Unless I suggest, take a look at what you’ve written before and then continue on building on what you’ve already said.
Dustin: That could be helpful. It depends on what model you’re using, I suppose. In my experience, it hasn’t done that. There have been times where the outline has been kind of repetitive. So I think maybe it comes down to making sure the outline and its bullet points are tight, that the AI sticks to that original outline, and that there’s no repetition possible within that outline.
But as long as your step-by-step instructions are very clear, we’re going to go step by step through this, you’re going to write each section one at a time, that way it doesn’t treat each section as its own self-contained blog post. So it might just come down to the prompting there. But if you’re finding that it’s doing that, all you have to do is say, “Hey, you’re repeating yourself, stop doing that”, and have it correct course.
But that’s my experience anyways, I tend to use Claude as my go-to writing model for anything that’s going to be shared publicly. And Claude seems to be very good at doing what I want it to do.
Rich: Now, if we’re doing a lot of these, where these prompts can obviously get very long, giving the context, telling it what you want it to accomplish. You also may want to be giving examples of your voice or your tone or talk about who you’re writing for. Because if you’re writing for a business owner versus an intern, you’re going to say different things to that person.
So would you recommend that we just have really long prompts? Or is this the opportunity to start using things like custom GPTs and Claude Projects or Gemini Gems, or whatever they’re called?
Dustin: Yeah. So I believe very much in the idea of custom context. And so there’s two technical things at play here. So there’s the prompts that you go back and forth inside of the chat. But there’s also something that most users aren’t aware of, and they don’t know about, and that’s called the system instruction or the system message. I think OpenAI just changed the phrase to developer message.
But essentially every AI model that you use, whether it’s Claude or Gemini or OpenAI, has a system-level set of instructions. So they have a role to play; they’re given a role by default. And so when you’re in there prompting it, even if you say in the prompt, “I want you to take on this role of a content writer”, that system instruction it has is its number one priority. So you’re filtering your role; it has to filter it through the lens of its system instruction.
And that’s where GPTs or Projects or custom instructions, or in Magai, what we call personas, come into play. The GPT or the persona or the project, the custom instruction, replaces the system instruction. So effectively what you’re doing is changing the AI’s generic system instructions to be customized to your needs. And that is far more effective than giving the AI a role to play in a typical prompting method. So I highly recommend that.
And when you build these custom instructions, you want to really outline all of the things that you want it to keep in mind at all times. Now here’s the other benefit of custom instructions or personas or GPTs: this system-level instruction is always within the AI’s memory. So it’s front and center every time, even if your conversation goes outside the parameters of the AI’s memory limit. Because AI does have memory limits. It may not tell you it does, but it does.
It’ll start forgetting things at some point. The system instructions are never forgotten, because they are present every single time. Every time you send a message, the AI has to filter it through them. So it’s very important to take that methodology and really customize the AI’s role and personality within those custom instructions.
Now you can put anything you want in there. You can put the role in there. You can put the background history about your business or whatever it needs to have in its context. You can put your brand voice stuff in there. Anything that you think is necessary for the AI to hold front and center, whenever it produces something for you.
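One way to picture why the system instruction is “never forgotten”: many chat front-ends, when the conversation outgrows the context window, drop the oldest turns but keep the system message pinned. This sketch mimics that common pattern; it is an illustration of the idea, not how any particular product implements it:

```python
# Sketch: why a pinned system message survives when older turns are trimmed.
# The persona text below is a hypothetical example.

def trim_history(messages, max_turns):
    """Keep the system message plus only the most recent non-system turns."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

history = [
    {"role": "system", "content": "You are Acme's brand-voice writer."},  # persona
    {"role": "user", "content": "old question 1"},
    {"role": "assistant", "content": "old answer 1"},
    {"role": "user", "content": "recent question"},
]

trimmed = trim_history(history, max_turns=2)
print(trimmed[0]["role"])  # → system
```

However far the conversation drifts, the trimming step always re-attaches the system message first, which is exactly the “front and center every time” behavior Dustin describes.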
Now, one thing that you do also want to keep in mind is you don’t need to load it with too much information. So one of the things that I often tell people is minimum viable context. What is the minimum viable context that this AI needs to know in order to complete a specific task? And so one thing that you need to keep in mind is that every bit of context, every message, every prompt gets added to that context of the conversation, and it will influence the output. So if you have a bunch of context in there that is irrelevant to your current goal or objective, that irrelevant content is still influencing the output. So you don’t want to put stuff in there that isn’t necessary. You don’t want conversations or chat threads to go on longer than they need to for a specific task.
You know, a lot of people in the beginning, and I still hear about people doing this today, have one chat thread, and they think by staying in that chat thread, they’re teaching the AI to be better and better at what it does. And while it may improve slightly over time, what you’re doing is flooding it with a bunch of context that is probably steering your output in directions you may not need it to go, and it might be diluting the efficacy of a given task.
So if you started with customer support type stuff, and then you switch over to, I want to have you write social media content, and then you switch it over to administrative questions, and then you switch it over to business growth questions, you’re then flooding the AI with all this context that is going to color every output. It’s much more effective if you start a new conversation for every single given task. You’re going to get more specific output and you’re going to not dilute it with unnecessary information. So that’s something you should keep in mind when having conversations, but also when you’re building your custom context for these AI models. Anything you put in there is going to color the output, so be wise.
Rich: Yeah, and I’ve noticed over time, sometimes on ChatGPT, I’ll get that ‘memory updated’ thing when I’m just writing in a chat. And sometimes it remembers things from one conversation even though I’m in a new thread. Which I didn’t realize until all of a sudden one day I was asking it for hot sauce recipe ideas, and it started reminding me about the different types of hot peppers that I had grown in the garden. And I’m like, wait a sec, this is a brand-new thread. How does it remember that?
But it does remember certain things, which also then made me realize that sometimes when I talk about one client, even if I’m starting a new chat, it may remember some things that have been coloring the brand of another one. So I do think those custom instructions that you talked about, whether it’s custom GPTs or gems or projects, critically important to kind of keep them separated, as the old song goes.
Dustin: It’s also important to note that that’s not a feature that you have to have turned on. This was something new that OpenAI introduced, I don’t know how many months ago at this point, but when they first introduced it, they introduced it as this revolutionary new feature, and it’s going to be extremely helpful.
I will tell you as someone who has had tens of thousands of conversations with AI and different models, that feature is completely unhelpful. I would turn it off immediately, because you can turn it off and make it so that the AI is not remembering all your conversations.
But again, going back to the efficacy and the bleed through of different conversations, you absolutely don’t want that. I have never seen a scenario where having that activated is actually helpful. Unless you are really specific about all your conversations, or about your business, or about specific clients, you don’t want that bleed over unnecessarily.
So I advise people to turn that off. And even in Magai, we don’t have that feature, because I think it’s just completely unhelpful. If you have stuff that you wanted to keep in mind, put that in context.
Rich: I have different people on my team who are specialists in Google Ads or SEO or social media. And although there’s overlap in their understanding, there’s one person I go to if I have a Google Ads question, and somebody else I go to if I have an SEO question, and that really helps. In other businesses, it might be that you have account managers. So thinking about AI as an entire team that you can implement, it’s better to have those specialists than a team of generalists.
So you’ve given us a lot to think about and a lot to work on when it comes to these prompts for LLMs. I’d like to shift just for a little bit and talk about some of the image generation tools. Because Magai also has a slew of image and even video generation tools on this smorgasbord that you’ve created.
Now I’ll be honest, I do love image generation in AI. I found some uses for it, especially when it comes to putting together slide decks and things like that, or background images for some other things I have. But I’m also very frustrated sometimes with the output of these tools no matter how much or how little information I give it. Things like, the character is facing the camera, or the character is facing us, and then four out of four images, the character has the back of his head to us. So what recommendations do you have for prompt engineering when it comes to image generation tools?
Dustin: So I would want to ask what model you were using and what interface you’re using when that happened. Because one of the things that most people aren’t aware of, is apps like ChatGPT will actually change your prompt without you knowing it. They have their own prompt enhancing mechanisms running in the background that they may or may not tell you about, and they may be completely rewriting your prompt based on what they think is best.
In Magai, we have a prompt enhanced feature, but it’s completely within your control. If you just type in your prompt and hit ‘send’, your prompt is sent to the model exactly as you wrote it. But we have a button if you want the AI to enhance that prompt and make it better, you can click the button, and it rewrites your prompt, and then you can look at it, read it, and edit if you see fit. Or try it again.
So, first thing to know is, some of these tools aren’t following your instructions, they may think they have a better idea of what your instruction should be. That’s the caution I would have. So this is especially true with Dall-E and Dall-E has this built in by default. So even in Magai, we cannot stop it from rewriting the prompts for Dall-E images. It just always does it.
So the real thing that you want to do is always be as specific as possible. As much detail as you can muster up put into that prompt, so that the AI cannot mistakenly decide it’s going to add things that maybe you didn’t describe. And this is the bit Dall-E seems to have the biggest problem with. I think because it’s the prompt rewriting that’s built in.
But other models, such as Flux, are very good. Flux is probably the leading model in realism right now, apart from Midjourney. And Flux is very good at sticking to your prompt and not creating things that you didn’t include there. Now, I will caveat that with: if your prompts are not descriptive enough, it will have to fill in some of the blanks.
So if you just say, maybe a picture of a cat playing cards, that’s sort of my go-to, a cat playing cards. If you don’t specify the breed of cat, what environment that cat is in, what style of cat that is, it’s going to have to make that stuff up because it has to fill in those blanks in order to make the picture.
So there are three things you want to keep in mind. The subject: what is it that you want it to create in your image? The setting: where is this subject, what environment are they in? And the style. Subject, setting, and style. The style refers to: is this a photograph? Is this an illustration? Is it anime? Is it an oil painting? What specific type of artistic style do you want applied? So long as you have those three elements in your prompt for images, you can really get something that’s specific to your needs.
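The subject, setting, and style checklist can be made mechanical with a tiny helper that refuses to build a prompt when any of the three is missing. The function and example values are made up for illustration:

```python
# Dustin's three image-prompt ingredients: subject, setting, style.
# A hypothetical helper that insists all three are present.

def image_prompt(subject: str, setting: str, style: str) -> str:
    for name, value in [("subject", subject), ("setting", setting), ("style", style)]:
        if not value.strip():
            raise ValueError(f"Missing {name}: include subject, setting, and style")
    return f"{style} of {subject}, {setting}"

print(image_prompt(
    subject="a tabby cat playing cards",
    setting="at a green felt poker table in a dim speakeasy",
    style="photorealistic photograph",
))
```

The more detail you pack into each of the three slots, the fewer blanks the model has to invent on its own.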
And even within those three areas, again, filling in some very specific details will help you get exactly what you’re looking for if you have a specific vision in your mind. And then the other challenge there is that some models are better at some styles than others. So like I said, Flux tends to be really good at photorealism, whereas something like Ideogram can be really good at illustrative-type styles.
And then you have a company like Leonardo, which we’ve integrated with, that has a bunch of fine-tuned image models that are for very specific use cases like anime, or realism, or CGI, that kind of thing. So finding the right model for the right job is very good. Just like different artists are good at different styles, different AI models are good at different styles. And so having the ability to sort of play with different ones and see what they come up with is really beneficial in this wild world of AI, because you get to explore and say, okay, this is the one that’s best for my needs and I’ll stick with that one.
Rich: One of the challenges that I’ve had in the past is consistency between images. So if I want to have a specific character doing a few different things, I can never seem to make it look like the same character image to image. I know some people have had success with that.
What tips, like if we have a mascot or if we have just a character that we want to use for a series of images in a blog post or a slide deck or whatever, what are some tips or what is the best platform, the best tool to use to create that consistency in characters?
Dustin: Yeah, so consistent characters have always been a challenge because of the nature of image generation. The whole point, or the whole makeup, of how image generation works is that it’s creating something brand new every time. Even when you use an image as part of your prompt that it’s supposed to learn from, these AI models, because of how they’re trained, are created to make something different, yet similar.
So there’s still a lot of controversy around how these image models are made, training themselves on copyrighted material without permission. And so I think part of the hard coding into these models is to not recreate something that maybe they weren’t allowed to have trained on in the first place, and so they have to create something new every time by default. But we do have the ability with some models to train them on our own custom data. The technical term some people use for this is a “LoRA”.
So Flux in particular is a company that makes creating LoRAs, custom-trained image models, or fine-tuned models, really easy to do. We have this integration in Magai. So essentially what you can do is come up with a character, maybe one pose, and try to train it on that character. Although it’s not very good if you only use one image.
The methodology you would want to use, if you did want a consistent character that’s as flexible as possible, able to be put in different scenarios, is to create a character sheet. And this is something that animators have used for a long time. It’s essentially one image, but it has multiple stances and positions and angles of that same character. You can then produce several different images from that, put that into a training model and say, train yourself on this character, and then produce that custom image model, which will create that character consistently every time.
A lot of people use this for not just mascots, but also for AI avatars, for doing their own photo shoots. And that was one of the things that Magai users have been raving about since we released it, is the ability to do their own AI photo shoots of themselves, upload 10 or 20 images of themselves, and then put themselves in all kinds of cool, fancy situations.
So that’s really the best way to do it. There are other apps that might make it a little bit easier. But as of now, the technology is just not good enough without specifically training a custom model.
Rich: Yeah. I’m thinking about our Agents of Change, the characters that we use for our stuff, and I would love to see them in different situations. But what I’m hearing is I need to go back to my illustrator who created them in the first place and hire him to create a character sheet for each of these. It was Flux that you mentioned, what was the tool that you…?
Dustin: Yeah, Flux is the one that we have embedded into Magai for people to create their own custom image models. There are other apps out there that, in all honesty, are probably using Flux, but they’re not telling you because they want it to seem proprietary. But yeah, that’s the one we use and the one that I’ve seen the best results with.
Rich: Awesome. Josh, if you’re listening, I’m probably going to be sending more work your way.
I wanted to wrap up on a question about prompt library. So, can you just briefly explain the benefits of a prompt library for a company or an organization? And any tips to help keeping that prompt library as easy to access, easy to update as possible?
Dustin: Yeah, so this is a big challenge, right? So keeping all your AI stuff systematized and organized is a challenge of any organization. As soon as you get past five or six people, keeping everything in one place becomes difficult because everybody has their own system.
So I think it comes down to really deciding on what the SOP is: where are we storing these things? Where do we house them? And what is the standard operating procedure for how we implement them? I am not a huge fan of saved prompts, even though I built it into our product. It’s part of our product, but I find myself just kind of freewheeling it every time, just because I know how to use AI.
But there are probably people in your company that don’t know how to prompt AI. And so you definitely want to come up with a system, a place to store these prompts, and then maybe just do a quick training. Say, hey, when you’re doing this type of task, here’s where the saved prompts are for this task. These are the procedural prompts. Start with this one, go to that one. You know, even if it’s a single Google doc, I’ve seen some people use that.
You might have a tool out there that actually helps you save these prompts. Like I said, Magai has a way to save prompts and put them into folders and share them. So I’m biased in that aspect, but there are all sorts of ways you could use them. But being very clear and specific about how your organization uses them, I think, is crucial, as is taking the time to really flesh out which prompts work. Spend some time exploring which one works better. Maybe with a few tweaks here, it got a little bit better, so now we’ll use this one. But then just having them accessible in, say, a Google Doc or something is probably the best route.
Rich: Awesome. I want to wrap up. For someone who’s listening right now, doesn’t have a lot of experience with this advanced prompting, what is one thing that you could recommend they do today to kind of start the process?
Dustin: So two quick tips here. What tends to be the hardest part for people is breaking down a task into smaller tasks. So one thing you can do is start with this. Say, here’s what I need your help accomplishing, here’s the task: what questions do I need to answer in order for you to accomplish this task? And then simply wait for the AI to ask you the questions it needs to fill in the blanks. Because oftentimes we’ll take for granted the things that we know that maybe the AI won’t. And so by having it ask you questions to fill in those blanks, you’ll unlock a whole lot of good context that you need to feed it.
And one little bonus tip there: have it ask you one question at a time, again going back and forth. Say, “Ask me each question one at a time. I will answer, and then you give me the next question.” By doing a conversational back and forth, the AI is learning with each new message, each new answer you give it. And it may come up with different ideas of what other questions it might need to ask you based on your answers.
Whereas if you have it just say, “Ask me all the questions you need to ask me”, it’s going to give you a list of questions, you’re going to answer all those questions, it’s going to go, “Okay”, and that’ll be it. But if you have that back and forth, if you take the time to go one at a time, it learns and gets better and better. And it maybe unlocks some more potential areas where it needs to find out more information to really nail that task.
So those are my two things. Ask me all the questions you need to ask me. Ask me one at a time.
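Those two closing tips can be captured as a single kick-off template you reuse for any task. The wording below is a paraphrase of Dustin’s technique, not an official template from him or Magai:

```python
# A reusable kick-off prompt combining Dustin's two closing tips:
# have the AI ask you the questions it needs, one at a time.
# The wording is a paraphrase, not an official template.

KICKOFF_TEMPLATE = (
    "Here's what I need your help accomplishing: {task}\n\n"
    "Before you start, what questions do you need me to answer in order "
    "to accomplish this task? Ask me each question one at a time; I will "
    "answer, and then you give me the next question."
)

prompt = KICKOFF_TEMPLATE.format(task="write a launch email for our new course")
print("one at a time" in prompt)  # → True
```

Swap in your own task description and paste the result as the first message of a fresh thread, so the question-and-answer exchange builds the context before any writing begins.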
Rich: Dustin, this has been fantastic. For people who want to learn more about you, more about Magai, where can we send them online?
Dustin: You can go to magai.co, it’s m a g a i dot c o, and everything you need is right there.
Rich: Awesome. Dustin, thank you so much. I really appreciate your time today.
Dustin: You’re welcome.
Show Notes:
Dustin W. Stout is the founder of Magai, a powerful AI platform designed to simplify and enhance the way people interact with AI tools. A longtime digital marketing expert and speaker, he’s passionate about helping creators and businesses harness AI more effectively through better prompting and workflow strategies.
Rich Brooks is the President of flyte new media, a web design & digital marketing agency in Portland, Maine, and founder of the Agents of Change. He’s passionate about helping small businesses grow online and has put his 25+ years of experience into the book, The Lead Machine: The Small Business Guide to Digital Marketing.