A.I. Design, introducing "DALL-E"

This direction of research is going to change a lot of things.

This behemoth 12-billion-parameter neural network takes a text caption (e.g. “an armchair in the shape of an avocado”) and generates images to match it.

One response might be to restrict access to high-quality surface models in the source datasets and eke out another year for some design professions. Then again, LIDAR/laser scans will probably circumvent the need for original source models anyway.

That’s amazing. Of course, designers are more than just random image generators, but this sure does put out some good concepts. It seems like it could be a useful tool, but it is worth contemplating what might be automated in the future and where we will fit when that happens.

If I were an illustrator making clip art, though, I’d be shaking in my boots right now. Who needs one when a client can just type “happy puppy peeking out of a red gift box with a heart in the background” and get 25 options?
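For anyone curious what that “type a caption, get options” loop looks like in code, here is a minimal sketch assuming the OpenAI Python SDK and its hosted images endpoint; the model name, image size, and per-request cap of 10 are assumptions that vary by provider and will change over time.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "happy puppy peeking out of a red gift box with a heart in the background"

# Hosted endpoints cap how many images one request can return, so getting
# 25 options means looping; the cap of 10 and the model name are assumptions.
urls = []
while len(urls) < 25:
    batch = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=min(10, 25 - len(urls)),
        size="512x512",
    )
    urls.extend(item.url for item in batch.data)

for i, url in enumerate(urls, 1):
    print(f"option {i}: {url}")
```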

I am not advocating for this, just trying to analyze it. It marks a shift in my previous thinking about the role of machines in design.

One could say that designers are focused concept generators and initial curators of concepts in an industrialization chain. Our inputs are a library of existing forms, process mechanics, usage information, and personal preference.

We designers have been able to occupy one niche by taking the sentences in a brief, researching, and generating concepts. In the context of machine learning and available databases, 2D sketch concepts are low-hanging fruit. I expect someone to scrape “industrial design sketches” and make a convincing generator within a year or two; it may already be possible with the right engine and query. These tools will only get more proficient. They are not tens of thousands of “whatever” blended together into ID goo; they are targeted based on the input data.

“AI” generating 3D models is also not hard to imagine. We all know the rote nature of CAD work: back in 1990 I wrote a parametric CAD script that automatically drafted all of the construction component drawings for snowboard production from a text-file input, and even then I imagined the input could be reduced to sentences about the target user. Over the years, superiority in 3D was earned by mastering clumsy tools. Now it is no longer about better tools; the machine-learning process skips that step and will be able to “read” the final products.
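To make that text-file-in, dimensions-out idea concrete, here is a toy sketch in the same spirit (in Python rather than the CAD scripting of the day, and not the original script): a plain-text spec is parsed and a few construction-component dimensions are derived from it. All field names and ratios are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BoardSpec:
    length_mm: float
    waist_mm: float
    sidecut_radius_mm: float

def load_spec(path: str) -> BoardSpec:
    """Read a plain-text spec such as 'length_mm=1560' into a BoardSpec."""
    values = {}
    with open(path) as f:
        for line in f:
            key, _, raw = line.partition("=")
            if raw.strip():
                values[key.strip()] = float(raw)
    return BoardSpec(**values)

def component_dimensions(spec: BoardSpec) -> dict:
    """Derive construction-component dimensions; the ratios here are made up."""
    effective_edge = spec.length_mm * 0.78
    nose_width = spec.waist_mm + spec.length_mm * 0.04
    core_blank = (round(spec.length_mm + 40, 1), round(nose_width + 20, 1))
    return {
        "effective_edge_mm": round(effective_edge, 1),
        "nose_width_mm": round(nose_width, 1),
        "core_blank_mm": core_blank,
    }

if __name__ == "__main__":
    spec = load_spec("board_spec.txt")
    for name, value in component_dimensions(spec).items():
        print(name, value)
```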

An example: 3D-scan every Nike shoe ever made. If it is a Nike internal project, they can use the actual lasts as a build reference; a competitor (Amazon?) might need another step, an internal shape scan. Include timestamps so each shoe can be referenced against the design trends preceding and succeeding it, along with color and shape. Look at market and opinion-leader uptake. Crunch the data on a GPU cluster that costs half the yearly design budget. Machine-generate sketches, full renders, and 3D models for print. The designs will be curated by humans, and thankfully, machines will never generate something like the “YZY D Rose”.
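Here is a hedged sketch of just the data-curation step of that pipeline, since the generative model itself is the speculative part: each scan record carries a timestamp and metadata so a generator could be conditioned on era and on the designs that came before and after it. The record fields, names, and stubbed generator are placeholders, not anyone’s actual pipeline.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ShoeScan:
    model_name: str
    release: date
    colorway: str
    mesh_path: str  # path to the 3D surface scan or internal shape scan

def conditioning_label(scan: ShoeScan, catalog: list[ShoeScan]) -> dict:
    """Pair a scan with the designs preceding and succeeding it for trend context."""
    by_date = sorted(catalog, key=lambda s: s.release)
    earlier = [s.model_name for s in by_date if s.release < scan.release]
    later = [s.model_name for s in by_date if s.release > scan.release]
    return {
        "model": scan.model_name,
        "era": scan.release.year,
        "colorway": scan.colorway,
        "preceded_by": earlier[-3:],
        "followed_by": later[:3],
    }

def generate_candidates(label: dict, n: int = 5) -> list[str]:
    """Stand-in for the expensive part: sketches, renders, printable models."""
    return [f"{label['model']}-concept-{i}" for i in range(n)]

if __name__ == "__main__":
    catalog = [
        ShoeScan("runner-a", date(1995, 3, 1), "white/navy", "scans/runner_a.ply"),
        ShoeScan("runner-b", date(2001, 9, 1), "volt", "scans/runner_b.ply"),
        ShoeScan("runner-c", date(2014, 6, 1), "triple black", "scans/runner_c.ply"),
    ]
    label = conditioning_label(catalog[1], catalog)
    print(label)
    print(generate_candidates(label))
```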

^Repeat for any industry.

It is on the horizon.

They opened it up - you can go give it a spin!

I work at a university in the UK and there’s some really interesting research into using AI-generated images within the creative process. It’s an exciting time (and a little bit scary too).

Baby steps, but AI babies grow fast. ChatGPT writing .STP files is on the horizon.
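Part of why that is plausible: STEP (Part 21) is just structured plain text, which is the natural medium for a language model. Here is a minimal sketch of writing one out; the header values are placeholders, and a real part file needs hundreds of linked geometry entities, not a single point.

```python
# STEP Part 21 skeleton with a single point; real parts need far more entities.
minimal_step = """ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('illustrative skeleton only'),'2;1');
FILE_NAME('example.stp','2024-01-01T00:00:00',(''),(''),'','','');
FILE_SCHEMA(('AUTOMOTIVE_DESIGN'));
ENDSEC;
DATA;
#1=CARTESIAN_POINT('origin',(0.,0.,0.));
ENDSEC;
END-ISO-10303-21;
"""

with open("example.stp", "w") as f:
    f.write(minimal_step)
print("wrote example.stp")
```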

A human will always be able to put “that special something” into a design, I think.

A human touch.

These AI generators will probably work wonders for certain manufacturing companies in certain parts of the world, though.

The type of mid-range company that can manufacture well but is less competent in designing original products.