Example input
EXAMPLE 1:
Blog Post URL: "https://letsenhance.io/blog/article/ai-text-prompt-guide/"
Blog Post Content: "Image Prompts
In some cases, you will be asked to upload images to an AI generator, since some rendering platforms need visuals to make new renderings. If you’re looking for a rendering that closely resembles the technique and style of a particular creation, an image prompt would be more accurate than a textual one.
Original: Girl with a Pearl Earring by Johannes Vermeer | Outpainting by: August Kamp
One of the best examples of image prompts is DALL.E’s Outpainting AI. This AI rendering platform takes an existing visual and renders the visuals that it assumes would be its continuation. The image above is a demonstration of how DALL.E’s AI created the surroundings of Johannes Vermeer’s famous painting, Girl with a Pearl Earring. The square in the middle shows the original painting and everything on the outside was rendered by the DALL.E Outpainting AI.
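If you’d rather script this kind of outpainting yourself, here’s a minimal sketch using OpenAI’s image-edit endpoint, which fills in any transparent regions of an uploaded image. The file names and the prompt are placeholders, and this isn’t necessarily how the Outpainting tool works under the hood.

```python
from PIL import Image
from openai import OpenAI

# Paste the original painting onto a larger transparent canvas; the
# edit endpoint fills in transparent pixels, i.e. it outpaints.
# File names and the prompt below are placeholders.
source = Image.open("girl_with_a_pearl_earring.png").convert("RGBA")
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(source, ((1024 - source.width) // 2, (1024 - source.height) // 2))
canvas.save("canvas.png")

client = OpenAI()  # reads OPENAI_API_KEY from the environment
result = client.images.edit(
    model="dall-e-2",
    image=open("canvas.png", "rb"),
    prompt="a 17th-century Dutch room surrounding the girl with a pearl earring",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the outpainted result
```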
Both Image and Text Prompts
In some cases, you will be asked to mix the two prompt types together to render new visuals. This is done to ensure that the rendered image is as close to what the prompt engineer was looking to generate as possible.
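As a rough illustration of mixing the two prompt types, the sketch below uses the open-source diffusers library’s img2img pipeline, where a reference image and a text prompt steer the output together. The model, file paths, and prompt are placeholder choices, not any particular platform’s setup.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a publicly available model (a placeholder choice). The fp16
# settings below assume a CUDA GPU.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The image prompt: a reference picture whose style and composition the
# output should stay close to. "reference.png" is a placeholder path.
init_image = Image.open("reference.png").convert("RGB").resize((512, 512))

# `strength` balances the two prompt types: lower values stay closer to
# the image prompt, higher values give the text prompt more influence.
result = pipe(
    prompt="an oil painting of a lighthouse at dusk",
    image=init_image,
    strength=0.6,
).images[0]
result.save("mixed_prompt_result.png")
```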
The Anatomy of Image Generation Prompts
To effectively render images using AI, you must first understand what that particular AI rendering platform specializes in. The DALL.E series of rendering platforms specialize in rendering photo-realistic visuals, whereas Midjourney favors more digital art or illustration formats. Our own Let’s Enhance Image Generator works well with both illustrations and photorealism, but can also render images that resemble 3D sculpted models.
Use At Least 3-7 Words
There is no single correct way of writing text prompts for generative AI platforms, but most prompt engineers would agree that textual prompts should be at least 3-7 words long if you’re looking to render something more detailed and less abstract.
You don’t need to strictly abide by these rules, but if you’re looking for a more detailed and complex rendering, a 3-7 word text prompt is your best bet. The more descriptive your text prompt, the easier it is for the AI to understand what you’re looking for.
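If you generate prompts programmatically, the guideline is trivial to check; the helper below is purely illustrative.

```python
# Illustrative only: a quick word-count check for the 3-7 word guideline.
def within_guideline(prompt: str) -> bool:
    return 3 <= len(prompt.split()) <= 7

print(within_guideline("raccoon reading a book in a library"))  # True: 7 words
```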
Subject: The Who and What of Text Prompts
As with human artists, AI-image generators need a subject to render. This can be a person, object, or location that will be the main focus of your rendered image. You can use prompts with more than one subject, but to keep things simple for now, let’s keep it at 1 subject per text prompt.
Pro tip: Avoid using abstract concepts (love, hate, justice, infinity, joy) as subjects. You can most certainly render something using these keywords, but the results will be very inconsistent in what they depict. Use concrete nouns (human, cup, dog, planet, headphones) as the subject of your prompt for more accurate results.
Similar to sentences in English, the subject must be a noun. Some simple examples would be human, car, forest, apple, living room interior, and soda can. The AI can successfully generate an image from any of these nouns alone, but to get more detailed and complex AI renderings, you need to create a more descriptive text prompt.
Descriptors: The Doing What, Where, and How
To add more complexity to your rendering and help the AI narrow down what images to use as references, you need to utilize descriptors. While any word that describes the subject of the text prompt can be considered a descriptor, these tend to be verbs and adjectives that answer questions like:
What is happening?
What is the subject doing?
How is the subject doing this?
What’s happening around the subject?
What does the subject look like?
To illustrate this point, we did a little experiment with our Let’s Enhance Image Generator and tried to render a raccoon that is reading. The image on the left was rendered when we simply entered the text prompt “raccoon reading”. And while the AI was able to generate the subject and one descriptor successfully, the result is still rather simple.
Rendered using Let’s Enhance Image Generator.
On the right, however, the prompt contains two additional descriptors: “a book” and “in a library”. These additional descriptors allowed the AI to narrow down its reference photos, thus rendering an image far more complex than the one produced by the simpler text prompt.
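If you want to reproduce this kind of experiment in code, here’s a minimal sketch using the publicly available Stable Diffusion model through the diffusers library. It’s a stand-in, not our generator’s actual code.

```python
import torch
from diffusers import StableDiffusionPipeline

# Publicly available model as a stand-in; assumes a CUDA GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Same subject, increasing numbers of descriptors.
for prompt in ("raccoon reading", "raccoon reading a book in a library"):
    image = pipe(prompt).images[0]
    image.save(prompt.replace(" ", "_") + ".png")
```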
Pro tip: Experiment with descriptors to see how they affect different aspects of the image. Descriptors are not an exact science and may yield wildly different results, so mix and match them to see what the results look like.
Keep in mind that whether you use additional descriptors, and how many, is really a matter of preference and necessity. If you’re looking for a simple rendering of a finch, simply typing “finch” into the text prompt is going to be enough for most AI image generators.
Rendered using Stable Diffusion.
But the far more descriptive text prompt of the example on the right demonstrates how accurately the AI can render a more detailed visual thanks to additional descriptors such as “a tiny”, “on a branch”, and “with spring flowers on the background”.
Aesthetic and Style: How the Rendering Will Look
Finally, the last part of a common text prompt for generative AI platforms consists of the keywords and phrases that add the finishing touches to the rendering. These last few words determine the style, framing, and overall aesthetic of the composition. Keywords like “photo”, “oil painting”, or “3D sculpture” work really well in this section. You can also use prompts like “close up”, “wide shot”, or “portrait” for additional framing options.
And there’s also the option of choosing an art style, as well as naming specific artists whose work you wish the AI to imitate. In the example above, the Let’s Enhance Image Generator was told to render an impressionist painting in the style of Vincent Van Gogh of the Batmobile stuck in LA traffic.
Rendered using Let’s Enhance Image Generator.
You can see that the two iterations in the left column emphasized the wide shot, as both compositions show a more distant photo of the traffic and don’t focus on the subject as much. However, all 4 renders were made to imitate impressionist paintings, and more specifically, the style of Van Gogh. With those additional aesthetic prompts at the end of our text prompts, we were able to successfully stylize our rendered images, making them more unique.
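To make the subject + descriptors + aesthetic structure concrete, here’s a tiny illustrative helper that assembles a prompt from those three parts. The function and its argument names are invented for this post, not part of any generator’s API.

```python
# Illustrative helper only: not part of any platform's API.
def build_prompt(subject: str, descriptors: list[str], aesthetics: list[str]) -> str:
    """Assemble subject + descriptors + aesthetic keywords into one prompt."""
    return ", ".join([subject] + descriptors + aesthetics)

prompt = build_prompt(
    subject="the Batmobile",
    descriptors=["stuck in LA traffic"],
    aesthetics=["impressionist painting", "in the style of Vincent Van Gogh", "wide shot"],
)
print(prompt)
# the Batmobile, stuck in LA traffic, impressionist painting,
# in the style of Vincent Van Gogh, wide shot
```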
Rendered using Lexica.
Here’s another example of how the same prompt with different aesthetic keywords can change the entire style of the rendered image. The rendering on the left used a simple prompt with a subject and some additional descriptors. The rendering on the right, however, thanks to keywords such as “vaporwave aesthetic” and “product photography”, has a more defined visual aesthetic, with the gradient in the background mimicking product photography on top of the neon vaporwave look.
How Let’s Enhance Image Generator Bypasses a Common Problem
One of the limitations we come across when rendering with AI is resolution caps. The processing power required to render hundreds of thousands of high-resolution images all at once would be immense.
So to avoid overloading their platforms, most AI generators are engineered with resolution caps. This means the images you generate have a maximum resolution that can usually only be raised by purchasing a premium account, though in some cases the higher cap is available for free.
Resolution caps for the biggest AI image generation platforms.
Lucky for us here at Let’s Enhance, we already have another AI capable of taking images with smaller resolutions and upscaling them to larger sizes. This is why we integrated our upscale AI into the image generation AI and bypassed this limitation.
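As a rough sketch of that generate-then-upscale idea, the snippet below chains a public text-to-image model with a public 4x upscaler from the diffusers library. This isn’t our upscale AI; treat it as an analogous open-source pipeline only.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionUpscalePipeline

# Public models as stand-ins for a capped generator plus an upscaler.
generate = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
upscale = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a tiny finch on a branch with spring flowers in the background"
low_res = generate(prompt).images[0]  # native 512 x 512 output

# 4x super-resolution: 512 x 512 -> 2048 x 2048 (very VRAM-hungry at
# this input size; smaller inputs are more practical on consumer GPUs).
high_res = upscale(prompt=prompt, image=low_res).images[0]
high_res.save("finch_2048.png")
```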
After the image is rendered, you have the option of receiving it in its native 512 by 512px or upscaling it 4x to 2048 by 2048px without a drop in quality. This is a higher resolution than what most generative AI platforms put out today, even with the most expensive service packages. This is a great feature for those who work with AI image generators and are looking for high-quality visuals with a flexible price model."
EXAMPLE 2:
Blog Post URL: "https://online.rmit.edu.au/blog/why-ux-design-so-important-business"
Blog Post Content/Summary: "UX measures how a consumer feels when interacting with a system. It’s the art, and science, of trying to fulfil the user’s needs.
RMIT Online
Running a business without good User Experience Design (UX) is like driving a car without tires. It’ll run, perhaps. But not well. And you’re doing damage to your brand every time you take it out in public. UX measures how a consumer feels when interacting with a system. It’s the art, and science, of trying to fulfil the user’s needs, which, in turn, leads to improved business performance: better loyalty, less attrition, more conversions, more interaction, more revenue.
The current state of UX adoption
UX Design is commonplace these days, although specific rates of adoption vary from company to company.
When InVision surveyed over 2000 businesses across 24 industries, they found that 77% of them recorded improved customer satisfaction as a direct result of good design. 81% said design had also improved product usability.
“We found that those dominating their industries are the ones treating the screen like the most important place on Earth,” the report said. “In fact, companies with high design maturity see cost savings, revenue gains, and brand and market position improvements as a result of their design efforts.”
Interestingly, while the benefits of UX have been known for a while now (the discipline dates back to the 19th century, although it didn’t really get moving till the 1990s), InVision also found that only 5% of companies are empowering designers to reap the maximum benefit. In other words, there’s a lot of room for improvement, even within companies that claim to champion good user design.
To measure how good your company may or may not be at UX, we first have to look at something called ‘Design Maturity’.
What is Design Maturity?
Design Maturity, or UX Maturity as it’s sometimes known, is a framework to assess a company’s investment and commitment to user-centered design. It’s a cheat sheet. A way for companies to quickly take stock of their UX strengths and weaknesses. Jakob Nielsen developed one of the earliest UX Maturity models in 2006, which included eight ‘stages’ of maturity, but there have been subtle improvements over the years, and these days most maturity models feature about five or six stages.
Still, they share a similar structure.
Stage 1: Absent. The lowest stage of maturity. User Design is ignored completely within the company.
Stage 2: Limited. UX work occurs, but it’s rare and not particularly valued.
Stage 3: Emergent. The UX work is promising, but lacks consistency.
Stage 4: Structured. The company has a structured UX program, but with varying degrees of success.
Stage 5: Integrated. UX work is structured, effective, and intrinsic to the organisation.
Stage 6: User-driven. UX dedication permeates all levels of the company, revealing deep insights.
The more your company invests in UX, the more structured that investment becomes, and the more integrated the findings are across the entire business, the more ‘mature’ it’s said to be.
Improving your UX maturity can be done in several ways. You can look at culture (how UX is valued within the company, and the number of dedicated UX professionals), strategy (how UX resources are prioritised, and to what end), process (how UX research is systematically used within the company) and outcomes (intentionally outlining the ROI and KPI of various UX initiatives).
The important thing to remember is that these factors don’t exist in isolation. There’s no point hiring more and more UX designers, for example, if leadership isn’t on-board, if designers’ findings aren’t worked into the wider business, if there are no structures in place, or specific user-based targets established. UX maturity is almost a mindset, and it works most effectively when everyone believes in the power of good design.
“Not all companies understand the value of UX, especially companies at the lower levels of holistic integration and design maturity,” notes DesignerUp. “But for a company to see these benefits, adoption is crucial. This includes involvement from the key stakeholders and how design is considered at every stage of product decision making.”
The benefits of UX Design
Because UX design touches all aspects of a digital product, its potential is almost limitless. Good UX can improve product quality, obviously – it literally makes digital products more useable – but it can also benefit customer satisfaction, long-term engagement, employee productivity, brand loyalty, time to market, conversion metrics, brand equity and (of course) revenue.
“User experience is important because it tries to fulfil the user’s needs,” says UX expert Prayag Gangadharan. “It aims to provide positive experiences that keep a user loyal to the product and brand. A meaningful user experience allows you to define customer journeys on your product that are most conducive to business success.”
Frank Chimero famously put it another way: “People ignore design that ignores people.”
The findings back this up. When InVision surveyed over 2000 global companies, they found a huge discrepancy between the performance of ‘mature’ UX brands and those with minimal UX investment. Companies at ‘Level 5’ (the most mature stage in InVision’s five-level model) recorded four times the revenue, five times the cost savings, six times faster time-to-market, and 26 times better valuation. Those are crushing statistics.
The evidence is pretty clear. UX maturity equals profit. It equals relevance. And best of all, it equals happy users."