Integrating A.I. into the design process

It’s not the rendering that takes the time to make Pixar movies good.

Marvel makes 20 movies a year with more rendering and money, but they are all shit.

It’s the story that takes the time to make them good.

1 Like

You’re right, of course. Nobody needs 200 Marvel movies; 20 is enough. But the productivity growth from AI will mean thousands of layoffs. Supply vs. demand.
Since we make real things for real people, AI won’t have the same effect on us; it’s just that the form-development and appearance-definition phases of product launches will shrink to 10% of today’s effort.
Some product categories will have relatively fewer designers than others.

1 Like

Really, is it possible that we could actually be in the movies?

I’m actually surprised that this thread isn’t more active.

I came across this guy on LinkedIn and it really opened my eyes, as he’s an IDer using the tools in a professional way and exploring their application in real design processes, not just making random pictures of fantasy crap.

https://www.linkedin.com/in/hrodriguez1/

He also offers a lot of courses for specific apps.

I’ve been dipping my toes in the water.

Thus far, I’ve found some use for Newarc.ai for quick renders and color exploration. It’s not really accurate, it’s hard to get something new or technical, and you could spend more time making tweaks than you would just rendering in Photoshop to get what you want, but it’s helpful as a guide.

Krea is pretty interesting, and Hector uses it a lot in his videos. I’ve only played with it and haven’t found a use for it yet.

Vizcom looks powerful, but I haven’t dived in.

I’m using Midjourney a lot lately to generate images for mood boards, people for creative-direction visuals, secondary presentation images, and mockups instead of stock photos. It is pretty amazing.

I’m definitely not fully integrated and need more time to see where I can best make use of the tools. Like anything, it is all about using the right tool for the job.

2 Likes

That Midjourney mood board workflow seems pretty useful and a place where hallucinations can be of service.

At the moment I’d say I’m using AI as design-adjacent. Instead of trying to find stuff on Google Images, I make it. It’s also good for some abstract kind of inspo stuff.

Just a few examples I made but didn’t use.



You didn’t “make” anything. You did a Google image search and hit “randomize results”. Congrats on abdicating the least important part of the job: coming up with an idea.

1 Like

Actually, no. It’s made with a Midjourney prompt. It didn’t exist until I generated it.

Oh, this is fun.

Nice prices. Although I prefer self-exploration with my stupéfiants (intoxicants), guided journeys also have their place. The novelty inebriation wears off with time.

I am not feeling the utility of the images you posted, Richard. They feel like “stuff”, personal perhaps, but they do, to me, look the way Ryan describes: slightly discomposed images from search.

Stuff: Freshly gen’d, unique pixels, yes, but lacking in depth. While they are realizations of the prompts, they are not iconic or indexical signs. Therefore, upon the instant recognition of AI output, any effort to decipher, or even to read, is suspended.

1 Like

Yup. Stuff. Out of context, everything is stuff.

As mentioned, these weren’t designs. Nor are any of them from the same project/exploration/phase. Sorry if they don’t make sense. I was only trying to show the diversity of easily created stuff.

What???

There’s a generative-AI documentary about Brian Eno, from the guy who did the Rams and Helvetica films, that will never play the same way twice.

So interior designers, graphic designers, and illustrators have been earning at least some of their income with these tools for a couple of years. Movie and game designers are being laid off by the thousands because of the acceleration in process the tech affords. And it’s only the first couple of years of its infancy (DALL-E launched January 2021). Kudos to the artspeak critique for pointing out its aesthetic limits, but Feng Zhu’s list of what AI cannot do is more helpful.

When Meta announced 3D models exported from text prompts, I’m betting everyone at Autodesk and SolidWorks shit bricks, because to publish the research, Meta must have its IP buttoned up. Imagine taking a fraction of the effort and exporting editable files that can be handed off to engineering! That will be their holy grail. Smaller companies (Rhino and Blender?) will probably not be able to afford the license fees and will disappear.

Most impacted at first, however, will be products that are largely form variations on a common theme, or the kind of product cycles that get “refreshes” in between major model upgrades; they won’t need designers. The design director will accomplish those changes without support staff.

AI will be as big a sea change as CAD itself was, only it won’t take as long. It’s already begun; the list of what AI can’t do has gotten shorter.

Stuff:

Explaining the use of the word in my earlier post, in reference to these images:



Freshly gen’d, unique pixels, yes, but lacking in depth.

Richard notes that the images are generated specifically in response to his prompt into the Midjourney system. I acknowledge that the images are freshly “gen’d”, i.e. generated, but that does not negate Ryan’s critique that they are in essence “randomized Google image search” results. They are the product of scraped images and statistical, algorithmic reconstruction in a gridded 2D pixel format, rendered out as a JPEG.

While they are realizations of the prompts, ...

It is hard to use words like creation for machine-generated output. Realizations are the concrete output of the text prompts into Midjourney.
“Creations” or “compositions” can be used to describe the output of an artist or photographer. The prompt equivalents, or subjects, for these images might be referred to as “inspiration” or “reference material.” The resulting images reflect the individual style, technique, and interpretation of the artist or photographer, distinguishing them from AI-generated visuals, which are randomized interpretations of textual prompts. Hence the word employed in this sentence fragment: realizations.

... they are not iconic or indexical signs

Iconic representations are images that resemble or imitate the subject they represent. The relationship between the image and its referent (the thing it represents) is based on likeness or similarity.

Indexical Signs: In semiotic terms, photographs are indexical signs, meaning they have a direct causal relationship with the subject they depict because they are created through the capturing of light from the actual scene.

I make the distinction that the machine output of the three example images is neither iconic nor indexical; it is a construct, or realization. In 2024, this distinction is readily apparent, although I realize that suspicion about image origins can still lead to false positives for AI-generated content.

Therefore, upon the instant recognition of AI output, any effort to decipher, or even to read, is suspended.

These three images read as AI-generated. I argue that, given the overwhelming flood of images we encounter daily, we must selectively allocate our cognitive resources for interpreting them. The process of scraping, digesting, and algorithmically excreting images forces us to decide whether to invest time in searching for meaningful content. Personally, I choose to bypass these types of images without further consideration.

The output is, as labelled, extant material stirred up in a blender at random. Again, you’re not actually doing anything or making anything. It’s an image search, and only the most credulous AI slop promoters think otherwise.

Not a random blender, a custom smoothie machine. No one claims to be growing the fruit for the smoothies or coming up with the recipes; we are typing in the flavors and textures we want, and an attractive blended drink comes out, different every time.

AI image generation is far from random. It’s a highly controlled, pattern-based process driven by extensive training data and deep-learning algorithms. While there might be some variability in the outputs, this is not randomness but rather a reflection of the model’s ability to generate diverse yet pattern-consistent images.

In essence, AI-generated images are among the least random artifacts produced by technology, embodying a high degree of predictability and control.

Amusement pour les yeux (Amusement for the eyes)
To extend the food metaphor, mood-board AI images are being used as an “amusement pour les yeux,” serving as a visual prelude to a design project, similar to how an amuse-bouche teases the palate before a meal. The AI-generated images are curated by the chef’s (designer’s) selection, providing a bite-sized burst of inspiration to set the mood and ignite creativity for the project ahead.

Understanding AI Image Generation
  1. Training Data and Patterns:
  • AI models, especially those used for image generation like GANs (Generative Adversarial Networks) or diffusion models, are trained on vast datasets. These datasets contain numerous images with associated features, and the AI learns patterns, styles, and structures from them.
  • The generation process is deeply rooted in these learned patterns. Every pixel and detail in the output is influenced by the data the model has seen and the rules it has learned.
  2. Controlled Inputs:
  • When generating an image, the input parameters (prompts, seed values, etc.) play a crucial role in determining the output. These inputs guide the AI to produce a specific type of image, making the process highly controlled and largely predictable.
  • Even though there might be some variability, this is not randomness but rather controlled variation within learned constraints.
  3. Algorithmic Precision:
  • The algorithms underpinning AI image generation are mathematical and deterministic. They follow precise steps to transform input data into output images. Given the same inputs (prompt, seed, and sampler settings), the output is reproduced.
  • This precision stands in stark contrast to true randomness, where outcomes are unpredictable and do not follow a specific pattern.
  4. Repeatability and Consistency:
  • AI-generated images can be replicated if the same initial conditions are applied (see the sketch after this list). This repeatability is another indicator of the non-random nature of AI outputs. True randomness would not allow for such consistent replication.
  • In practice, designers and engineers often rely on this repeatability to refine and perfect AI-generated designs.
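
For anyone who wants to see points 3 and 4 concretely, here is a minimal sketch using the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint. The model ID, prompt, and seed are illustrative assumptions, not anyone’s actual workflow, and pixel-for-pixel replication assumes the same hardware and software stack:

    import numpy as np
    import torch
    from diffusers import StableDiffusionPipeline

    # Illustrative checkpoint; any Stable Diffusion model behaves the same way.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a fashion streetwear photo of a runner wearing an oversized blue jacket"

    # Same prompt + same seed -> the denoising steps are identical,
    # so the two outputs match. Nothing is left to chance once the seed is fixed.
    img_a = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
    img_b = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
    assert np.array_equal(np.array(img_a), np.array(img_b))  # replicable, not random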

Misconceptions about Randomness in AI

  1. Perceived Variability:
  • The variability in AI outputs can sometimes be mistaken for randomness. However, this variability is the result of the model diversifying within its learned patterns, not a lack of order or predictability (see the sketch after this list).
  • Think of it as a musician improvising within a given scale. The music may vary, but it’s still bound by the rules of the scale.
  2. Complexity and Understanding:
  • The complexity of AI models and their outputs might make them seem unpredictable to those not deeply familiar with the underlying technology. This perceived unpredictability can be misconstrued as randomness.
  • However, for those who understand the model’s workings, the outputs are a natural extension of the training data and input parameters.
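
And the flip side of the same hypothetical sketch: change only the seed and you get varied images that all still obey the prompt. Improvisation within the scale, not noise.

    # Continuing the setup above: vary only the seed.
    # Diverse outputs, all still constrained by the same prompt.
    for seed in (1, 2, 3):
        gen = torch.Generator("cuda").manual_seed(seed)
        pipe(prompt, generator=gen).images[0].save(f"variation_{seed}.png")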

Why don’t you give us an update on the Rabbit you promoted so breathlessly, then get back to us on AI.

I did; it was a joke, likely even a scam. I did my full mea culpa. Check the thread.

1 Like

I can’t even follow who is for/against/understands/doesn’t in this thread…

Anyhow.

@Ryan_C As mentioned, it’s not random. You just don’t know what the images are for, so it looks that way. Anything without context seems random.

Let me explain some context for one of the examples.

The first image, with the guy in the blue jacket, was a visual play on a traditional running jacket like this one from Running Room.


These jackets have been around forever and are typically seen on old ladies and guys jogging at the local shop.

There’s a huge trend now in running (and other sports) to take “dad style”, or what isn’t cool, and make it cool by way of exaggeration, fashion influences, etc., for millennials and Gen Z. The image I prompted in Midjourney was a fun exploration of what this could look like. The effect was supposed to poke fun at “making something uncool, cool”. The image, as mentioned, was not used for a project, but if it were the start of a real design exploration, it could be an easy way to communicate to a brand how you could play with a traditional style “but make it fashion”. Much easier than a sketch for telling the story.

The prompt was something like:

a fashion streetwear photo of a runner wearing an oversized blue jacket and shorts in a studio with solid blue background. The jacket comes down to his knees.

Here’s an additional quick exploration, an AI image-to-video test using Luma, to further tell the story. (In this version I swapped the guy’s head.)

I get that. (Slight defense: I was ten hours into a 14-hour plane ride.)

I had a visceral reaction to the Midjourney images. You explained the context and I reframed my expectations: they are “amuse-yeux,” and that requires much less analysis; they are atmospheric. I chilled out a bit.

A single-word, triple-punctuation challenge made me clarify what should have been a word riff on the origins of imagery.

I remember this discussion from 2010 about rights to images used on mood boards. Comical in retrospect, except to inspiration-purity extremists. :face_with_peeking_eye:

I am in a state of dynamic equilibrium in the AI discussion. I have always been a fan of sampling in music and art, and obviously in design. As for the discussion of what was sampled in whatever fractional percentage, I couldn’t care less.

I hope this AI is a phase and that live, human-generated content will return to favor. Probably a vain hope, except for high-end projects. But for now, this year, this is the zeitgeist. We as designers will find ways to use it.

At the same time, it’s hard to unsee zippers that magically fix gaps, awkwardly paced gaits, and a white man emerging from the shadows as a black man. :smile: Surrealism can be an inspiration.

All good.

I firmly believe that AI is a tool like any other.

My initial reaction to the use of AI in design was also to call BS on it. Most of what I saw was not design, just shitty pictures of impossible things that people were calling “design”.

I think the key is understanding the capabilities and limits and using it appropriately. For my examples, the AI images are like a sketch. The details aren’t worked out, there are things that don’t make sense, but it communicates an intent or direction.

In my limited experience using AI tools, I’ve found it’s very easy to make a picture, but it’s hard to make the picture you want.

That being said, fidelity and control are two different things and sometimes accuracy doesn’t matter.

Another example of a use case I just employed:

CMF direction.

It’s pretty efficient to generate specific color/material images to illustrate potential color and material makeups using AI tools (Midjourney), versus traditionally going through Google Image Search or the 30,000+ photos of shoes on my hard drive to find the ones that match the creative direction I’m looking for.

Colorways are usually shown using flat line art, which can be hard to read (colors look different on different materials, and fluorescent colors are hard to show), so using Newarc.ai with a line drawing can help show a potential general direction. The materials aren’t accurate and the design gets messed up, but it’s good enough, if you squint, to get a general idea.
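
For instance, a hypothetical Midjourney prompt in the same spirit as the jacket example above (purely illustrative, not from a real project) might read:

a studio product photo of a running shoe in sage green ripstop with a fluorescent coral outsole and cream suede overlays, solid light gray background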

In the end, though, the AI isn’t designing. I am. I’m just using the AI to help communicate my decisions.

I can’t see the use of AI going away.

R


1 Like