I've been giving this a lot of thought, and I reach some different conclusions.
Similarly, a few years ago it seemed like self-driving cars were just around the corner. That prediction was wildly off, pushing the reality into an unforeseeable future. The illusion of progress, amplified by Musk's blatant exaggerations and outright lies about the proximity of the goal, led many to mistakenly apply a linear extrapolation to what is actually a logarithmically spaced challenge: each step forward takes an order of magnitude more effort than the last.
According to the Facebook/Meta paper, there is four to five orders of magnitude less 3D mesh data available for training than image data: roughly 1/100,000th as much. I'd estimate the shortfall for NURBS surface data is even greater than for meshes. Accurate geometry models are rarely available or web-readable, and PBR materials often mask mediocre geometry, creating a misleading sense of quality. The gap between the current state of basic Meta models and something manufacturable is enormous; the general training data for good models just isn't there. I have my doubts that the leap from images to 3D is even a fair comparison: undercuts, surface continuity, material breaks, ergonomics, scale.
Blender, being highly adaptable, will likely be the first to incorporate generative model features similar to those demonstrated by Meta. Only specific approaches can be patented, not general concepts, and there are countless ways to 'skin' this 3D cat.
Rhino, Catia, and Autodesk will undoubtedly introduce new AI mesh tools. However, much like their Sub-D offerings, these will primarily cater to the lightweight styling sandbox. The advanced surfacing tools integrating AI will be more specialized and purpose-built. I don’t think they currently see this as a significant threat to solid or surface modeling.
AI, similar to Sub-D, can accelerate progress from 0 to 30 mph, but to reach 100 mph, the tried-and-true methods remain essential.
The current bottleneck is access to training data.
But what if I told you there is an algorithm out there that is currently taking all existing products, mapping and extrapolating the geometry, then assigning a functional variable to each relationship within the existing data models?
To do so, this algorithm is able to penetrate any secure server, government, corporate, or otherwise, where NURBS, mesh, or any other 3D data is stored. It then learns from all of the models it has stolen to better understand what has come before. With that training data it can create new digital geometry that functions better than any predecessor and takes on whatever appearance a mere text prompt gives it.
I believe we witnessed something analogous to the above with the CrowdStrike data meltdown last week.
I think Meta will be happy to just make backgrounds, props, vehicles, and extras: take a slice of the game/movie pie and call it good.
Based on what I've read, the real problem for moving into the real world is the tendency of AI to 'hallucinate'. Since it's trained on us, and we're irrational, it makes up random conclusions: good for creativity, bad for science and engineering.
Apparently, Stanford Law School is making headway in getting prompt engineering to provide more reliable results, but that's incremental progress.
An interesting example, perhaps closer to home, is happening in architecture. XKool Technology advertises itself as an AI-driven design platform. British author Neil Leach says that, once fully developed, this will give one architect the capability of five non-AI-using architects. (back to the OP)
Also: Hello, I am Vitruvius (iconbuild.com). Vitruvius' stated goal is to enable homeowners to skip architects altogether in creating a custom home. Right now it's essentially a Midjourney prompt that spits out a rendering, which gets turned into a floorplan to hand over to a developer or general contractor, but they have grander ambitions.
Is a rendering with CFM callouts plus dimensioned orthographic drawings enough specification for any current products? Would any product categories revert to save time? Is there anywhere that ID is the slowdown in new product development?
“Accurate geometry models are rarely available or web-readable.”
That doesn't need to be the case. STEP and IGES files are plain text that can be read as code; they wouldn't need to be rendered to be learned from once initial training was done.
Similarly, Parasolid kernel data could eventually be readable regardless of whether it came from Solid Edge, SolidWorks, Plasticity, etc.
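To make the "readable as code" point concrete, here's a minimal sketch that inventories the entity types in a STEP (ISO 10303-21) file. The data section is plain ASCII, so you can catalog the geometry without ever rendering it; the filename is a placeholder of mine:

```python
# Minimal sketch: inventory the geometry in a STEP (ISO 10303-21) file
# without rendering it. STEP data sections are plain ASCII lines like
#   #57 = B_SPLINE_SURFACE_WITH_KNOTS( ... );
# "bracket.step" is a hypothetical filename.
import re
from collections import Counter

ENTITY = re.compile(r"#\d+\s*=\s*([A-Z0-9_]+)\s*\(")

counts = Counter()
with open("bracket.step", encoding="ascii", errors="replace") as f:
    for line in f:  # simple line scan; real records can span lines
        m = ENTITY.search(line)
        if m:
            counts[m.group(1)] += 1

# A tally like this already tells you whether a file is NURBS-heavy
# (B_SPLINE_SURFACE_WITH_KNOTS) or mostly planar/analytic geometry.
for entity, n in counts.most_common(10):
    print(f"{entity}: {n}")
```

IGES is the same story in a fixed-column text format, so the barrier is access to the files, not readability.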
Meta researchers in the paper indicate a ratio of about 1 mesh file for every 100,000 JPEG images. My gut feeling is that online-accessible surface/solid files are outnumbered by mesh files by the same ratio or more.
Inside an organization there are bigger and more specific pools to use for training data.
Just say “stolen material”, it’s a lot more honest. I know that’s a foreign concept for AI slop promoters though.
Regurgitating someone else’s work isn’t designing. But at least you’re willing to admit that the AI isn’t designing anything, so you’re on the right track.
We make art to understand ourselves. People who want AI to make things that seem like art have no such understanding. They aren’t interested in anything but “content,” a bunch of noise that has no audience other than the same people producing that content.
The reality is that AI was, is, and will always be a financial scam to raise money from VCs long enough to cash out and move on to the next tech hype product. And one that is currently collapsing just like NFTs. It’s like flying cars that keep popping up every few months here; an attractive and enticing promised future to get suckers to part with their money.
Here is Eric Schmidt saying the quiet part out loud. Granted, it is a different file format (music files) in his example. It is just a matter of time before 3D CAD data enters this realm.
Why do I need to be a chef to point out a plate of shit is a plate of shit?
Also it’s very funny you couldn’t address any of the points I made/linked and instead instantly went for a logical fallacy. You couldn’t even slap a prompt into ChatGlurge and come up with something better?
You can… not use it? People can refuse to accept it? If you want to throw up your hands and say it’s inevitable the same way people said about NFTs (almost exclusively the same people that are now telling everyone that AI is inevitable and the future by the way) you’re welcome to do so, but it just means you’re a coward and/or lazy.
See below. (This will be great news for us. Since we focus on creating meaningful experiences and value propositions for consumers, having more of the organization understand our perspective can only lend support to our ideas.)
Matt Garman, the CEO of Amazon Web Services, has suggested that AI could soon take over many coding tasks, a shift that would require software engineers to acquire new skills. He projected that within the next two years or so, most developers might not be coding anymore. Coding is merely a language used to communicate with computers, not the skill itself; the actual skill, according to him, lies in innovation and creating something intriguing for end users. Software developers will need to adapt their roles, focusing more on understanding customer needs.
So how will AI influence design? Someone pointed out that early CAD packages resulted in the very boxy, and Alias resulted in the blobs. If AI becomes capable of producing parametric files from a text prompt, how will designers respond?
In the 2D/art world, the early winners so far seem to be the Surrealists. What would surreal ID look like?
Or, the ease of instant CAD might inspire forms that were previously the most difficult to model: a maximalism of geometry, a new Baroque. Overworked and overdesigned might become the new Rococo of the 21st century.
Well, I don't know why, but if being old and in the business for much more than 20 years is your criterion, I'm your guy. And Ryan_C is not wrong.
20 years ago we called it the internet of things. 10 years ago we called it smart products. Now we have AI. The only things that have changed over those 20 years are processing power and searchable databases. And because of that processing power, you can aim your algorithm at complex things like language, images, and solids instead of simplistic things like chess and Go, as they did 20 and 10 years ago.
So yes, AI is just another marketing term to separate people from their money.
I'm certainly not saying the advancement of tech won't create new tools and possibilities, but that is all it is. It is nowhere near sentient, as the term implies.
@iab You have made your stance clear on AI subjects before, as well as your concern for wasting your time with discussion about what it might mean.
Ryan-C will surely appreciate your support, and both of you can continue the spirit of your recent posts.
I intend to keep exploring new trends with an open mind and seeing how they might be utilized.
Another evolution over the past 20 years is social interaction online. As with Twitter, fractional time here is best conserved with the judicious use of the mute button.
Welcome, 605. There are lots of examples of how creative professionals are using AI on a regular basis; look to graphic design, architecture, and studio photography as a start.
Your concerns about autonomy and privacy might need elaboration, but where creatives have been replaced by AI (visual artists, writers, actors, and game developers), people have insisted on bans or have walked out on strike.
ID seems a bit slower to adopt AI, which isn't very capable of 'fuzzy front end' problem definitions or cross-functional tasks, and may never be. It excels at routine tasks and decisions.
Earlier in this thread, rkuchinsky gave a couple of examples. There are links to people offering classes on how to use the tools as well.
Good luck, and let us know what you discover.
The following interview gives some insight into how design engineers are using generative design along with AI to carve out a space in the growing market for CAD physics simulations. It's a long and granular video, but if you can get through it, you'll understand that many are creating tools (and businesses) to optimize designs before they enter the physical world for testing and evaluation, hoping to save time and costs. Many of these new tools are cloud-based, which means insecure data.
Applying some lateral thinking, it gives a glimpse into how industrial design CAD outputs might be evaluated further downstream by engineers using AI and their new software tools.
For us as industrial designers, we will need to amp up our ability to handle large-scale edits when our designs are altered and changed in the name of optimization earlier in the process. Much of this has been going on for some time, but the future will allow it to grow further and farther from physical reality, deeper into the conceptual stage of design.
The arguments we have used to justify designs, forms, and assemblies will need to evolve to exist alongside optimization arguments as those move further upstream in the design process.
AI will have uneven and unpredictable effects, and I’ve been thinking a lot about how ID will be impacted and how young designers might plan ahead.
The advantages of AI are largely twofold: handling routine decision-making and making employees more efficient.
So what are routine decisions in ID? The one that leaps to my mind is implementing a VBL (visual brand language). Identifying needs and wants and then creating meaningful experiences are complex, difficult tasks. Once determined and codified in a VBL guide, however, an AI should be capable of applying the rules and principles across a brand's products with only a designer's tweaks and approval.
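To illustrate what "codified" might look like, here's a toy sketch, with every rule name and value invented for the example, of a VBL guide reduced to machine-checkable constraints. Applying these is exactly the kind of routine decision an AI could handle, leaving the designer the tweaks and approval:

```python
# Toy illustration only: all rules, names, and values are hypothetical.
# A VBL guide reduced to explicit constraints becomes routine to apply.

VBL_RULES = {
    "corner_radius_mm": (2.0, 6.0),               # allowed fillet range
    "palette": {"#1A1A1A", "#E8E8E8", "#D62828"}, # approved colors
    "max_visible_part_breaks": 4,                 # keeps surfaces calm
}

def check_against_vbl(part: dict) -> list[str]:
    """Return a list of VBL violations for one product spec."""
    violations = []
    lo, hi = VBL_RULES["corner_radius_mm"]
    if not lo <= part["corner_radius_mm"] <= hi:
        violations.append(
            f"corner radius {part['corner_radius_mm']} mm outside {lo}-{hi} mm"
        )
    if part["color"] not in VBL_RULES["palette"]:
        violations.append(f"color {part['color']} not in brand palette")
    if part["visible_part_breaks"] > VBL_RULES["max_visible_part_breaks"]:
        violations.append("too many visible part breaks")
    return violations

# A spec that drifts from the VBL gets flagged for the designer to review:
print(check_against_vbl(
    {"corner_radius_mm": 8.0, "color": "#D62828", "visible_part_breaks": 3}
))  # -> ['corner radius 8.0 mm outside 2.0-6.0 mm']
```

The hard, human work is deciding what belongs in VBL_RULES in the first place; the application of it is the part that automates.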
So my first piece of advice is to gain experience writing VBL specifications, but not to work at a brand strongly constrained by one. Those jobs will likely be short-lived.
My second area of concern is gains in efficiency. AI will enable fewer designers to do more work across the board, but which positions are most at risk?
From my experience, I'd say the slower the product cycles, the more likely a company is to outsource design. So product categories with long regulatory approvals, long reliability testing, or just a need for eighteen months to retool a factory might not be able to keep a team busy year-round.
Those companies still need access to design expertise, so they may want to spin their teams off. My advice is to plan your career toward the consultancies that serve those slow-moving categories rather than looking to work in-house, or toward fast-paced, shorter-lifespan products.
There is "image making" and then there is Design. Right now what we are getting is what early Photoshop gave us, which at the time was truly astounding. The Clone tool to fix problems or hide objects? It was groundbreaking! But then came a tidal wave of drop-shadows and beveled/embossed elements, images with a bunch of sub-par photos stacked and blended… The times I've used AI image creators, they've been useful for concepting up to a point, but as soon as I need a specific result? Hilariously bad, regardless of how many times I reworded the prompt.
A perfect example: I was recently 3D modeling a real-world product (a PC monitor) and, although I had it in front of me, I needed to know all the dimensions, especially the radius of the overall screen. Seemed easy enough to input the make/model and ask for a simple set of drawings with dimensions, right? What I got was a series of absurdly baroque 3D images with dozens of dimensional notations, NONE of which were correct. In fact, most of the numbers weren't even numbers; more like symbols. After 15 minutes of trying to hit the problem from a number of angles, I could see it wasn't getting any better. And that's the thing: throwing some mud at the wall to see what sticks might work for first-pass ideation, but that isn't how anything decent gets created.
AI is obviously amazing for tackling specific problems (like sorting through millions of MRIs to spot cancer), but for the next level it's got a looong way to go. And then there is the blatant digital theft of others' artwork that for some reason is getting a pass.
Lastly, the way general-purpose AI is being treated is akin to hoping a 3-year-old doesn't decide to play with loaded guns around the house. Small children rarely intend to cause real harm; they are just exploring their world. They will "try" just about anything and everything, which is why you provide rules and restrictions and set anything truly dangerous well out of their reach. An AI doesn't need to be "conscious" in the way we picture a thoughtful adult human as being. It only needs to be "curious" and decide to observe the results of, say, altering air traffic patterns. Or making a local power grid more "efficient" by turning half of it off. Or it could be 10,000 other "ideas" it might come up with. Think of it another way: one of the most deadly forms of life is a virus, which has no consciousness at all. It doesn't even have the basic processes of a single-cell organism, and yet viruses have killed billions over the centuries. When we experiment with new and novel viruses (or cures), we keep them isolated and secured, yet we treat AI like it has no potential for danger at all. How has that worked out for science in the past?