I signed up for Dall-E and gave it a spin this afternoon; I’d definitely recommend trying it out!
Here’s the prompt and output:
a purple foam yeezy slider in the style of ettore sottsass
prompt
syd mead style rendering of a horse-drawn cart with cyber poodles pulling it through the early morning mist of a Scottish moor with a glass citadel in the background
output
I took one of the versions and filled it out to the sides with new frames:
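If anyone wants to try that sideways-extension (“outpainting”) trick without Dall-E credits, the same move can be scripted with the open-source diffusers library. A rough sketch; the model checkpoint, padding size, and file names here are my own assumptions, not how Dall-E does it internally:

```python
# Rough sketch of "fill out to the sides" (outpainting) with open tooling.
# Model name, pad size, and file names are illustrative assumptions.
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
)

src = Image.open("dalle_output.png").convert("RGB")  # hypothetical input file
pad = 256  # pixels added per side; keep final dimensions multiples of 8

# Paste the original onto a wider canvas, leaving blank margins to fill in.
canvas = Image.new("RGB", (src.width + 2 * pad, src.height), "white")
canvas.paste(src, (pad, 0))

# Mask convention: white = repaint this area, black = keep original pixels.
mask = Image.new("L", canvas.size, 255)
mask.paste(Image.new("L", src.size, 0), (pad, 0))

result = pipe(
    prompt=("syd mead style rendering of a horse-drawn cart with cyber "
            "poodles pulling it through the early morning mist of a "
            "Scottish moor"),
    image=canvas,
    mask_image=mask,
    height=canvas.height,
    width=canvas.width,
).images[0]
result.save("outpainted.png")
```

The mask is what tells the model to invent only the new margins while leaving the original frame intact.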
For some apples-to-apples comparison, I tried Midjourney and Stable Diffusion with the same prompt, “a purple foam yeezy slider in the style of ettore sottsass”. I’d say Midjourney was better at further iteration and refinement, but Dall-E produced the best first pass.
Midjourney:
Stable Diffusion:
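For anyone who wants to rerun the Stable Diffusion leg of this comparison locally, it’s fully scriptable. A minimal sketch with the diffusers library; the checkpoint, step count, and seed are just illustrative defaults, not a claim about how the hosted services are configured:

```python
# Minimal sketch: running the comparison prompt through Stable Diffusion
# locally with Hugging Face's diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "a purple foam yeezy slider in the style of ettore sottsass"

# A fixed seed makes runs repeatable, which helps for apples-to-apples tests.
generator = torch.Generator().manual_seed(42)
image = pipe(prompt, num_inference_steps=30, generator=generator).images[0]
image.save("slider_sd.png")
```

Fixing the seed is the useful trick here: it lets you change one variable at a time (prompt wording, step count) and compare outputs fairly.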
I think that is a good use case @designbreathing. It is interesting because in his keywords he is inputting specific artists… so for the algorithm to work it is sampling those artists’ work… is that an IP issue for someone to take up? Granted, humans do a similar thing. You might input “Daniel Simon” and “Scott Robertson” into an AI engine, or spend hours looking at their work and emulating it… you could argue that is the same thing. But in the end the human still had to develop the skill, versus the AI literally sourcing the images from the original artists.
Either way, pretty amazing what these engines can do. Max Yoshimoto, former design lead at Lunar, Motorola, and Google Pixel wrote this recently on the topic:
I see it mainly as a styling ideation tool at the moment, and industrial design is so much more than that. For every nice sketch we see on a project there might be hours and hours of defining problems, positioning strategy, negotiations with product managers, working through details with manufacturing, etc etc…
related:
Raymundo’s video is great, I’m looking forward to trying some of it out.
Interesting article. Got to wonder how design education evolves from here?
I just read a graphics blog where the writer preferred the AI to the human illustrator’s results, which leads me to… a while ago there was a prediction that eventually designers would face mass layoffs, and Directors would become curators of digital ideas.
Tomorrow evening (Thursday, 11/10 @ 7pm EST) the three Ohio IDSA chapters are putting on an event discussing ID and emerging technology, which will include both designing for AI and with AI, like the examples in this thread. We’ll be discussing both the workflows and the implications of the technology, with plenty of time for discussion with the audience.
Join us if you can: IDSA Blurred Lines 3.0 | Industrial Design & Emerging Technology Tickets, Thu, Nov 10, 2022 at 7:00 PM | Eventbrite
@JCS yes, sorry it took a bit to reply. You can find it here: IDSA Blurred Lines 3.0 | Industrial Design & Emerging Technology - YouTube
One artist’s thoughts:
I came across this article by Jacob Morgan on the use of AI design in his workflow.
https://jacobmorgandesign.squarespace.com/artificially-inspired
Jacob uses Midjourney to create concepts based on the themes he wants to explore and then takes what he likes from prompt groups to create his own unique concepts. In his own words:
“By using AI to supplement our sources of inspiration as designers it’s easier to increase the breadth and novelty of our concepts while decreasing the bias of anchoring onto existing concepts.
In concept generation, instead of spending hours designing a helmet that is stylistically similar to a Ferrari, why would we not spend ten minutes generating sixteen Ferrari-inspired concepts in Midjourney to riff on?”
It was really great to see a case study of AI complementing ID skills rather than an article about AI replacing ID or, at the other end of the spectrum, writing off AI as useless.
If you’ve started already, how are you using AI in your workflow?
Interesting article, but I would make the counterpoint that this isn’t really a use of AI design at all; the subject is just another example of using AI to make pretty pictures. Not to mention the “design” process really just looks like tracing an AI image. Not a lot of real sketch development here.
How is the helmet “design” solving any problems or following any specific direction? It’s just a helmet shape with random holes in it. Is it any better in airflow than another helmet? Does it further a brand design language? Is it designed for a specific consumer? A manufacturing process? A performance benefit? A price point?
The use of AI for “inspiration” I suppose could have some value, but it seems to me it’s just a circular feedback loop. The AI combines existing images with your prompts, so can you really find something to be inspired by if you are asking it for what you want to see, and it’s made of stuff that already exists?
Ultimately I’d say the result of this article is indicative of the process. I’m not an expert in helmets, but the final design doesn’t look particularly novel or well resolved; it’s nothing that anyone sketching a helmet wouldn’t come up with on their own, looking at inspiration or not.
The weird ones at the end, though, if anything seem a bit more interesting. Maybe the design choices were weak?
Fair enough rebuttal @rkuchinsky
You’re right that there isn’t a design objective or specific direction and the result is another helmet.
This case study and the output is pretty shallow, I assume the designer did this to learn and play with the AI and not as a commercial project…
I’ve never used the AI software, so I don’t know if it can be responsive to cues like “make a helmet with more ventilation.” I doubt it could contribute anything of value on the more complex questions like manufacturability, price point, etc.
But maybe you could use the AI outputs in a matrix against the design objectives to identify which features from the AI concepts are working. Then take those elements to create new concepts.
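To make that concrete, the matrix could be as simple as a weighted scorecard. A toy sketch, where the concepts, criteria, and weights are all made-up placeholders:

```python
# Toy sketch of a concept-screening matrix: each AI output is scored by the
# designer against weighted design objectives, then ranked. All concepts,
# criteria, and weights here are hypothetical placeholders.
objectives = {"ventilation": 0.4, "manufacturability": 0.3, "brand_fit": 0.3}

# 1-5 scores assigned by the designer while reviewing each AI concept.
concepts = {
    "midjourney_v1": {"ventilation": 4, "manufacturability": 2, "brand_fit": 3},
    "midjourney_v2": {"ventilation": 3, "manufacturability": 4, "brand_fit": 4},
    "dalle_v1":      {"ventilation": 5, "manufacturability": 3, "brand_fit": 2},
}

def weighted_score(scores):
    return sum(objectives[k] * v for k, v in scores.items())

# Rank concepts so the strongest features can be carried into new sketches.
for name, scores in sorted(concepts.items(), key=lambda c: -weighted_score(c[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```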
I’m just trying to get my head around this new tech and see if there is a way I can use it in my workflow.
Right now I see it as another contributor to early brainstorming, mood boards, etc., not something able to output a resolved design.
Full disclosure: my CEO, who has a marketing background, sent me this article and is excited about using AI in a future project, mostly to advertise the use of “cutting edge AI design”.
I plan to get ahead of him and establish the design direction for the project before I get the email from him with a bunch of AI screenshots saying “add this to the product line”.
My guess would be that by the time a project gets in front of the marketing press or consumers, the term “AI design” will be toxic. The “strike while it’s hot” time is now. Context from this week.
Behind the scenes, theoretically, a machine learning model could be trained to factor in protective requirements, ventilation targets, and material and molding envelopes. The result would be more targeted as an idea generator. The issue, however, is that this is more about the curation of concepts than the steering of a designer’s intent, which may be more satisfying to non-experts in any given sector.
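A toy illustration of that “generate, then screen” idea, in case it helps. Everything below (the fields, the thresholds, the candidates themselves) is a hypothetical placeholder, not a real trained model:

```python
# Toy illustration of "generate, then screen": hypothetical candidates
# checked against hard engineering constraints as pass/fail filters.
from dataclasses import dataclass

@dataclass
class HelmetCandidate:
    vent_area_cm2: float       # total ventilation opening area
    shell_thickness_mm: float  # impact-protection floor
    draft_angle_deg: float     # needed for injection-mold release

def meets_requirements(c: HelmetCandidate) -> bool:
    # All thresholds are illustrative, not real certification numbers.
    return (
        c.vent_area_cm2 >= 25.0
        and c.shell_thickness_mm >= 2.0
        and c.draft_angle_deg >= 1.0
    )

candidates = [
    HelmetCandidate(30.0, 2.4, 1.5),
    HelmetCandidate(18.0, 2.8, 2.0),  # fails the ventilation target
    HelmetCandidate(28.0, 1.6, 1.2),  # fails the shell-thickness floor
]
viable = [c for c in candidates if meets_requirements(c)]
print(f"{len(viable)} of {len(candidates)} candidates pass the screen")
```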
This is all reminiscent of a discussion of generative design on these boards back in 2010. Rereading that “lively” thread, “Algorithmic Design Generation” and “AI Design Image Generation” could be interchanged.
This post by @seurban was a good summation of factors to consider from such tools.
Yeah, I think this is it: the ML baked into the tools rather than just the final output. Imagine CAD with something like real predictive geometry, or rendering-program materials with variable text sliders rather than bump maps.