Let’s talk about AI
AI is a hot topic, with discussions and articles about it appearing everywhere. To help shed light on AI, we have posed a few questions, ranging from the basics to more complex issues. So, let’s dive right in.
01. How does it work? Why is everyone talking about it?
In basic terms, AI refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. Overall, AI systems are trained using large amounts of data, and then use that knowledge to make decisions or predictions about new data.
The ones you are probably most familiar with are the diffusion image models DALL-E (developed by OpenAI), Midjourney and Stable Diffusion, and the language model ChatGPT (also from OpenAI). They’ve gained popularity thanks to their user-friendly interfaces and the inspiring results they produce. The premise is very simple: you write a prompt and, seconds later, you get a visual or text result, depending on the type of AI generator you are using.
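For the curious, that prompt-in, result-out loop is just a web request under the hood. The sketch below is a minimal, illustrative Python example that calls OpenAI’s chat completions endpoint; the model name and the prompt are placeholder assumptions, and it presumes you have an API key stored in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch: send a prompt, get text back.
# Model name and prompt are illustrative assumptions, not recommendations.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumes your key is set in the shell


def generate_copy(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o-mini",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(generate_copy("Write a 20-word tagline for a handcrafted coffee brand."))
```

Image generators work the same way in spirit: a prompt goes in, and the service returns an image instead of text.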
02. How will AI affect the design world?
AI is changing the way designers work and is opening up new possibilities for automated, personalised and data-driven design. AI can automate some design tasks, but creativity and critical thinking are skills that cannot be replaced. Designers will still be responsible for creating unique and innovative concepts. Great design will always need great ideas behind it.
03. What will be the role of the designer in a world with AI?
The role of the designer will change to some extent as some tasks and processes are automated, but AI won’t completely replace designers. Something we need to understand is that AI can answer questions or solve problems, but only humans can ask questions. AI will be another tool in our belt, just as Figma or Photoshop are now. The role of the designer will probably evolve towards that of an art director, with conceptual and strategic thinking becoming the most important skills.
04. Can we already use it in our work?
Right now we can use it in different ways. For instance, it can be great for building inspiring mood boards. It is also great for image-making, but not so good at working with typography. ChatGPT, for example, is a pretty good tool for generating copy, but still limited in some ways. One limitation of ChatGPT is that it often returns incorrect or difficult-to-verify information; Perplexity.ai, on the other hand, answers questions in natural language and links to the sources of its assertions.
The results can be close to what you had in mind or very far from it. Many people give up after not getting what they were expecting. The truth is, AI is a tool that takes practice: you have to spend time refining your prompts to achieve something genuinely good, and at the same time all that trial and error helps train the AI so it can improve.
05. What are the shortcomings?
The results are already becoming a bit repetitive, and even untrained eyes will soon learn to recognise the machine’s hand. For example, if you look carefully it is easy to spot ‘the Midjourney look’. These tools are only as good as the data sets they are fed and trained on. AI is still limited by current data sets, which are based on the past and present, whereas human imagination has no limit when it comes to creating the unexpected.
06. What about copyright?
Right now OpenAI, Midjourney and Stability AI give their users full copyright of the generated images, but that can be problematic when the data they are trained on belongs to someone else. This is a grey area, as it is difficult to copyright a visual style.
It will be interesting to see what stance stock imagery companies like Shutterstock, Adobe or Getty Images take on this matter. Getty Images is already suing Stability AI, claiming its AI model illegally scraped Getty’s content in violation of copyright law. The truth is that once the legal hurdles are overcome, these services will need to ally with AI companies; otherwise their current business models won’t survive.
07. What are the most common failures, and what can be improved?
There are many failures: biases, visual stereotypes, inaccurate image cataloguing, overly sexualised female avatars, exaggerated racial phenotypes, and difficulty in reading mixed-race characteristics. AI is a reflection of human behaviour, so to improve AI we need to address our own biases and prejudices. There is still a lot of clean-up work to be done to remove these problems.
AI can also lead us into a copy/paste mindset and a disregard for the references used to create its output. Today, AI needs more transparency instead of being a black box of algorithms.
08. What could be the consequences for brands?
We think that an AI-generated world will bring a necessity for the human touch. Brands’ need for uniqueness and distinctiveness will possibly bring a return to DIY, offline techniques and more handcrafted identities.
This technology is evolving as we speak, on a hyper-accelerated timeline measured not in years but in days. It will soon be on a completely new level, but as with any new technology, it is important to educate people so we all use it in a positive way.
Words: Javier Lopez, Associate Partner
Illustrations: @lenka.re