First product designed by Artificial Intelligence?

Would we consider this stacking tray design (for enhanced usability by robots) to be the first product designed by AI?

Is it a designed product if it only uses deductive reasoning in its design?

Is it not a product in the sense we work in because it is not designed for human use? Even if the ultimate benefit is for humans, through robots becoming more capable and efficient?

When the solution is “Get the lawyers involved,” as the article implies, you know it won’t turn out well.

While the AI came up with the fractal design in the article, it did not determine the application of the fractal. A human did, and they should be named inventor. Same goes for the “flickering” pattern in the article. The AI did the pattern, not the application.

For the sake of argument, let’s say at some point in the near future, the AI does determine application. How exactly is that different from a monkey doing the same thing? The theory is, if you get enough monkeys typing, one is bound to produce War and Peace. All AI is, is a lot of monkeys. Either name the person who got all the monkeys in a room as the inventor, or, if the monkey-master is truly a benevolent god, make it public domain.

Otherwise it is a silly academic thought experiment. Mental masturbation for lawyers. Ick.

When the robots figure out how to mount their own legal defense, then we can name them as inventors. Someone had to write the code to get the AI to “design” the tray… which, as far as I can tell, is a vector of a snowflake?

Unless you know more about the project from another source, I don’t know where you are getting this from. The way the article is written implies the AI devised the solution (the trays) for a purpose (stacking) which is better than current options for the end users (robots), which ultimately benefits humans. If that isn’t design, then what is?


Some primates already devise tools, and hence arguably ‘design’ - what they do is intentional, based on knowledge and experience, purpose-driven, and makes a task easier/more efficient/more enjoyable. Design is an intentional act; any notion of this being entirely random and unintentional is implied only by you here. The article implies the opposite - that it is done ‘consciously’, you could say.


This does not seem to be about randomly smashing information together and seeing what happens, as you seem to imply. Coding the ability to synthesize knowledge into solutions is not in itself directing the ways solutions are devised. The implication is that the AI devised the outcome based on the parameters of the users, the method of use, the purpose, and the efficiency compared to existing solutions. This sounds like design through and through.

In fact the very article goes against this claim you are making: “Unlike some machine-learning systems, Dabus has not been trained to solve particular problems. Instead, it seeks to devise and develop new ideas - ‘what is traditionally considered the mental part of the inventive act’, according to creator Stephen Thaler.”

Do you not think that AI will be able to design in the future, or are you only disagreeing with this instance at this point in time?

There are questions here regarding who the designer is in a mass-customisation scenario where the end user defines the parameters they prefer. This is still debated now, even after years of having parametric models available. Under your definition here, the one who wrote the Grasshopper definition (visual coding) is the designer - yet they are not the one who defines the parameters of performance or aesthetics, only (generally) the function and purpose.

I already wrote that AI could design in the future, and no evidence has been shown in the article that it has been done at this time. I could use the fractal patterns generated for products other than packaging and a flickering light - which should I choose? The AI didn’t. And why in hell would you fall for the line “better than current options for the end users”? Do tell, how exactly was “better” measured? The purpose of packaging is to protect, contain, distribute and market. Stacking is only a small portion of its objective. Seems the designer of the AI hasn’t a clue how to develop a new product.

So again, why would AI give two shits whether it is named as inventor? I mean, other than my original point of mental masturbation for lawyers. Gonna give that monkey IP for War and Peace?

Reminds me of that South Park episode where they make fun of how Family Guy is written by a bunch of manatees randomly selecting balls with words on them…

As someone who works in computational design and occasionally uses machine learning techniques, it’s super interesting! However, that article and the private research group (http://www.imagination-engines.com/) are weird…

Not in my books. For one, I can’t find much information on what they actually did. Did the AI scour Amazon, figure out from reviews that people had issues with trays, independently decide to pursue tray designs, and somehow independently come up with what we know as fractals? My gut feeling is the AI was pretrained to score stackability, and it was able to play around with a few parameters on a pre-programmed fractal generator and optimized from there…
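Purely to illustrate that guess (every name below is invented; none of this is documented about DABUS or Imagination Engines), the kind of loop I’m imagining is just parameter search over a canned generator with a scoring model on top:

```python
import random

# Invented names throughout -- this is only a guess at the workflow,
# not anything documented about DABUS or Imagination Engines.

def generate_fractal_tray(depth, branch_angle, scale):
    """Stand-in for a pre-programmed fractal generator."""
    return {"depth": depth, "branch_angle": branch_angle, "scale": scale}

def stackability_score(tray):
    """Stand-in for a pretrained stackability model; a toy heuristic here."""
    return -abs(tray["branch_angle"] - 60) - abs(tray["depth"] - 4)

# Random search over the generator's few tunable parameters.
best, best_score = None, float("-inf")
for _ in range(1000):
    tray = generate_fractal_tray(
        depth=random.randint(2, 6),
        branch_angle=random.uniform(30, 90),
        scale=random.uniform(0.5, 1.0),
    )
    score = stackability_score(tray)
    if score > best_score:
        best, best_score = tray, score

print("best parameters:", best, "score:", best_score)
```

If that is roughly what happened, the “invention” is the humans’ choice of generator and scoring function, not the search loop.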

Among the first physical objects cited as being designed by AI are the extremely high-performance “evolved antennas” that were designed by NASA and some affiliated academics, with work beginning in the early 90s.

They were designed through genetic algorithms. While that’s on the dumb side of artificial intelligence, it does still fit most definitions. I think a key part of the outcome is that it wasn’t simply a predictable optimization. From what I understand, the engineers were surprised by some of the solutions and were able to get significant improvements over human-designed antennas.
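For anyone unfamiliar with the technique, here’s a toy sketch of a genetic algorithm in that spirit. Heavily hedged: NASA scored candidate antennas with an electromagnetic simulator; the fitness function below is a made-up placeholder, and the genome encoding is invented for illustration.

```python
import random

GENES, POP, GENERATIONS = 8, 50, 100  # bend angles per antenna, population size

def fitness(genome):
    # Placeholder objective, not real antenna physics (NASA used an EM simulator).
    return -sum((g - 45) ** 2 for g in genome)

def mutate(genome, rate=0.2):
    # Randomly nudge some genes.
    return [g + random.gauss(0, 10) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parents.
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

# Start from random candidate "antennas" and evolve.
pop = [[random.uniform(0, 180) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 4]  # keep the fittest quarter
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

print(max(pop, key=fitness))
```

The surprising-solutions aspect falls out naturally: nothing in the loop constrains the result to look like anything a human would draw.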

Great point to bring up - most of these discussions on the state of AI in relation to design come down to a lot of semantics. I’m personally much more interested in the use and the outcome than trying to figure out which box AI fits in, especially since it’s such a moving target.

But to answer your question, I think our bias towards admiring human accomplishment is causing us to move the goal posts. Would anyone argue that say, Charles Goodyear didn’t invent vulcanized rubber because he did it by accident?

Even then, I think a lot of AI strategies, especially within the realm of machine learning, would fit the deductive-reasoning bill more than the “give enough monkeys enough time/happy accident” bill. Since they lack the context we may have, they need to churn through tons of data in order to gain an understanding, but once that’s figured out they can predict a solution to any query within that realm of knowledge.


As for the current state of AI with respect to design, I see it as two buckets: one of big-picture problems and one of minutiae.

In the big-picture realm, I see problems approaching the business and marketing side of design work. For example, WeWork has been talking about how they are using machine learning to predict the number and size of meeting rooms to put in a new coworking space based on usage data from existing coworking spaces. Data science has certainly been a part of many corporations to help answer business-type questions in the last decade, but I could imagine the type of questions being looked at becoming more design-related. Though to quote Daniel Davis, one of the researchers at WeWork: “Why are you looking at AI tools if your company isn’t even using basic statistics?” It’s a great reminder of the hype surrounding AI/machine learning and that there’s a lot to be learned by just objectively looking at business data and questioning the status quo.

There has been a decent amount of work where AI has been trained to solve specific 3D tasks. Techniques around topology optimization come to mind. I’ve personally used it as part of a group where we trained an ML algorithm to determine optimal bracing angles for beams, dependent on the length of and angle between the beams. We then used it to generate hundreds of braces for a space frame structure in seconds. Autodesk’s optimization of the office layout of their Toronto office is also an interesting use case, where their program is finding a balance between many more goals than we could parse as humans.
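The pattern behind that bracing work is just supervised regression followed by fast batch prediction. A rough sketch, with the caveat that the data and model below are stand-ins and not our actual project code:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the training set; in the real project the labels
# came from upfront optimization runs, not a closed-form formula.
rng = np.random.default_rng(0)
lengths = rng.uniform(1.0, 10.0, 500)   # beam lengths (m)
angles = rng.uniform(20.0, 160.0, 500)  # angle between beams (deg)
optimal_brace = 0.5 * angles + 2.0 * np.log(lengths)  # placeholder labels

# Train a regressor: (length, angle) -> optimal bracing angle.
X = np.column_stack([lengths, angles])
model = RandomForestRegressor(n_estimators=100).fit(X, optimal_brace)

# Generating hundreds of brace angles for a space frame then takes seconds.
frame_members = rng.uniform([1.0, 20.0], [10.0, 160.0], size=(300, 2))
brace_angles = model.predict(frame_members)
print(brace_angles[:5])
```

The expensive part (the optimization runs that produce the labels) happens once; after that, prediction is effectively free per member.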

To the best of my knowledge, very few people are looking at neural networks being applied to the complete design process in 3D space as freely as we see it done with images, with recognition, style transfer, and creative work coming from adversarial networks. Part of it is that 3D data is much less available than image data, and the additional dimension adds a significant amount of complexity. While we don’t know what technical advances lie ahead of us, teaching aesthetics, manufacturability, usability of a product in 3D space, UX, etc. in a generalized sense doesn’t seem possible in the near to medium term.

Another interesting design avenue is how the integration of AI will change the products we design. The Nest thermostat comes to mind: the UX is so much nicer than a traditional thermostat’s because the thermostat has just a bit more understanding. I’m curious to see what else could benefit.


Back to the article where I was saying the research group seems a bit weird. From the link in the article, you’ll get to http://artificialinventor.com/?page_id=19, where you’ll learn that their intention for patents belonging to AI really is that the patent would go to the owner of the AI. Which, again, would need to be clarified in the case of a company selling pretrained AIs to other companies that would operate them.

From that site, you’ll get to http://www.imagination-engines.com/, which seems to be the core of the research group. Note that I can’t find a single worked-through use case for their technology; to be fair, it seems like they may be operating in the aero/military space. It seems like it’s a small team, possibly a single person, that has been around AI since its first wave in the 70s. The most noteworthy patent I can find is “Device for the Autonomous Generation of Useful Information”. It’s a super long patent filing, and I haven’t been through the whole thing and don’t know the extent of the actual claim at the end, but it seems to share some things in common with current generative adversarial networks - the type of neural net used in much of the style transfer, deepfakes, image enhancement, sketch-to-image work, etc.

I’m a bit suspicious that their play may somehow involve them getting profits from any patent filed by a group using an AI? I might have my tinfoil hat on.

Also found this gem of a video by the owner of this company:

Great contribution Louis - that antenna, in my book, is also one of the earliest, if not the earliest, truly AI-designed products. NASA is still one of the forerunners in the area of AI-driven optimization. AI often finds solutions that humans would hardly ever have come up with, yet are superior in Pareto analysis.

Also, the literature has for decades described how design processes could be automated.

“The question is not if your job can be taken over by robots, but when it will.” from Kevin Kelly’s The Inevitable.

Questions for discussion:

Is that antenna a consumer product?
Is the design in question industrial design, or are we talking design more in the way engineers speak of design?

I think a more interesting application would be to take a well designed product and use AI to optimize the environmental impact by simulating different materials and mechanical details.

The antenna is not “designed” by any accepted definition of the term design. Because the tray has a user which engages with it, it is more designed than the antenna. The antenna is Engineered (the application of science).

Neri Oxman’s definition of the four domains of knowledge is a very good one that differentiates how the antenna is not designed, taken from https://jods.mitpress.mit.edu/pub/ageofentanglement:

“The role of Science is to explain and predict the world around us; it ‘converts’ information into knowledge. The role of Engineering is to apply scientific knowledge to the development of solutions for empirical problems; it ‘converts’ knowledge into utility. The role of Design is to produce embodiments of solutions that maximize function and augment human experience; it ‘converts’ utility into behavior. The role of Art is to question human behavior and create awareness of the world around us; it ‘converts’ behavior into new perceptions of information”

Under any accepted definition of design (e.g. Bruce Archer or John Heskett), the antenna is not designed.

Also yes, it is imperative to differentiate between AI and man-made algorithms which are used to optimise from given variables. If the idea for a new tray really was the idea of the AI (no idea if it was; the article is either badly written or deceptive if not), then it would be different than if it was told to optimise a tray. People here are claiming it is the latter, but nowhere is it said that this is the case; the opposite is stated.

Kind of funny to reflect back on this conversation now!

I thought this was an interesting take on the whole thing:

It reminds me of a chance encounter I had years back with Chuck Pelly. I was a young designer attending an IDSA conference, and they had a BMW concept car (the X Coupe, as I remember it) in the award winner gallery. This older designer walks up to me and asks me what I think, and I said it seemed like the proportions were off somehow, like things were slightly the wrong scale. That is when he introduced himself as Chuck Pelly, head of BMW Designworks… he could see that the look on my face said I had put my foot in my mouth (once again), but he saved me and said he was glad he waited to tell me who he was so he could get my real opinion.

Chuck went on to tell me that they had made the X Coupe entirely digitally, no clay model, and rushed through the process… so things were a bit off. More seasoned designers could account for it by knowing through experience where a door handle should sit, for example, but younger designers who didn’t have the experience were off making these things that just didn’t quite work in the real world… They were trying out this accelerated process, and the decision was made to go back to clay models, albeit a revised process where they milled the clay from a digital file and then perfected it, so they could make sure everything was right.

It makes me think will a similar thing happen here with AI? For designers who have been through the process many times it could be great to look at a lot of iterations and make quick decisions. For those still learning or early in their careers it might be more of a challenge?

Just positing an idea. My mind is still very much open and not made up, though I haven’t felt much interest in trying it, to be honest. Just haven’t felt the need.

It’s going to get to the point where you show the AI, say, 3 successful products, and it takes the benefits of all 3 and spits out a new product sufficiently different-looking that you can legally sell it as something new.

maybe… but also take a step back and remember when AI started driving cars and how well that is going…

Might this just be the hype du jour? Crowd-sourcing, to Crowd-funding, to Self Driving, to Crypto, to NFTs, to AI art… I’m sure something else will take over the news in a few months as this falls into the trough of disillusionment… I think it will eventually become a productive tool, but I’m not sure if that will happen this year…

A lot of people in the 3D DCC/CGI field are worried about losing their jobs to AI, especially people who do 3D modeling for a living.

Take your typical RPG/medieval fantasy game for example.

You find 10 images or drawings of knights in armour online.

Then you say “Give me the helmet from image 1, the armoured breastplate from image 2, the shield from image 5 and the sword from image 8. Change everything to the art style of image 4 and image 10, combined 50/50, with 25% random variation. Make the sword 30% longer and change the eagle on the shield to a yellow lion head seen from the left side.”

You get 80% of your 3D knight in armour model within minutes.

Then a single 3D modeler spends 3 hours applying polish and minor changes manually.

That 3D knight model would normally have taken a 3D modeler at least 1 week to finish properly.

Now you don’t need to employ 50+ 3D modelers for your RPG game.

You just employ 10 specialized in different types of objects.

Nobody knows how close this is, but a lot of people doing 3D DCC are worried about it.

They are also worried that their artistic input will be reduced to cleaning up after an AI that does most of the work.

This is where you lost me.

The input of 2D data to get a composite image is one thing, but the algorithm to derive 3D data from that 2D image is not mature yet. If you have sources that indicate otherwise, please do share.

Some of the latest research shared online indicates that tools like Grasshopper and ChatGPT are only able to manipulate crude number sets for now. Nothing indicates that ChatGPT is able to generate code and turn it into a 3D model with tools like Grasshopper/Rhino yet.

https://media.licdn.com/dms/image/C5622AQF4ZPq0Hr6LgQ/feedshare-shrink_800/0/1673718194643?e=1677110400&v=beta&t=sYtVKzD2NzOVH7G3T5jqWI1PEuNJZhevANoj0lEhUGA

A couple of lawsuits hitting the news: