Steps Towards 3D Design AI: A Helmet Example (HST)

In July 2024, as revenue from empowered amateurs disappoints investors and the hype subsides, much of the current discussion around AI will quickly become outdated. AI’s role as a tool, however, will persist. Today’s “outside in” brute-force methods will give way to an “inside out” approach: smart models will interact directly with surfaces and skeletons rather than pixels and meshes. This shift cannot rely on scraping existing data; it will require industry-specific development and domain expertise, such as FBX animation or NURBS creation methods.

To begin, there’s a need for a designer-accessible parametric model wrapped in a UI. Here’s a practical example of such an approach applied to surface creation for helmets:

(24" touchscreen interface on the top third of the screen, CAD window on the bottom two-thirds, 10× speed)

This project, HST, focuses on helmet shapes built from single-span surfaces. These surfaces are challenging to parameterize and typically require manual control-point manipulation to achieve the desired form. Single-span surfaces excel aesthetically and serve as foundational elements for functional and stylistic components throughout the design process.
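
For readers unfamiliar with the term: a single-span surface has no interior knots, so every control point influences the entire patch (it behaves like a Bézier patch). A small pure-Python sketch of evaluating such a patch, with a hypothetical 4×4 control grid, shows why pushing control points is the only direct handle on the form:

```python
from math import comb

def bernstein(n: int, i: int, t: float) -> float:
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def eval_single_span(ctrl: list[list[tuple[float, float, float]]],
                     u: float, v: float) -> tuple[float, float, float]:
    """Evaluate a single-span (Bezier) patch at (u, v) in [0,1]^2.

    With no interior knots, every control point influences the whole
    surface -- which is why single-span surfaces read so cleanly, and
    why shaping them means moving control points directly.
    """
    nu, nv = len(ctrl) - 1, len(ctrl[0]) - 1
    x = y = z = 0.0
    for i in range(nu + 1):
        for j in range(nv + 1):
            w = bernstein(nu, i, u) * bernstein(nv, j, v)
            px, py, pz = ctrl[i][j]
            x += w * px; y += w * py; z += w * pz
    return (x, y, z)

# Hypothetical 4x4 control grid roughly cupping a helmet crown.
grid = [[(i * 30.0, j * 30.0, 100.0 - ((i - 1.5)**2 + (j - 1.5)**2) * 10.0)
         for j in range(4)] for i in range(4)]
print(eval_single_span(grid, 0.5, 0.5))  # point at the centre of the patch
```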

Parametric models enable extensive and complex manipulations from a small set of strategic inputs, and those inputs can be trained into a purpose-built AI model, such as a custom ChatGPT model. One hypothetical shape such inputs could take is sketched below.
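
As an illustration only (the parameter names below are hypothetical, not HST’s actual definition), a handful of driving parameters might look like this:

```python
from dataclasses import dataclass, asdict

@dataclass
class HelmetParams:
    """Hypothetical driving parameters for a helmet surface model.

    A handful of high-level numbers like these can drive hundreds of
    control points inside the parametric definition.
    """
    crown_height: float = 110.0   # mm, apex above the reference plane
    length: float = 260.0         # mm, front-to-back extent
    width: float = 210.0          # mm, side-to-side extent
    brow_fullness: float = 0.5    # 0..1, volume above the brow line
    rear_taper: float = 0.3      # 0..1, how sharply the back pulls in

params = HelmetParams(crown_height=115.0, brow_fullness=0.65)
print(asdict(params))  # the dict an AI model would be asked to emit
```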

Training involves designers describing their intent while the system records the corresponding changes within the 3D model, specifically the adjustments to the driving parameters.
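
A minimal sketch of how such intent/parameter pairs could be logged for later training, assuming a simple JSONL file and the hypothetical parameter names above:

```python
import json, time

def log_training_pair(intent: str, before: dict, after: dict,
                      path: str = "hst_training.jsonl") -> None:
    """Append one (designer intent, parameter change) pair to a JSONL log.

    Each record links natural language to the concrete parameter
    adjustments it produced -- the raw material for training.
    """
    record = {
        "t": time.time(),
        "intent": intent,                  # what the designer said
        "before": before,                  # parameter state prior
        "after": after,                    # parameter state afterwards
        "delta": {k: after[k] - before[k]  # numeric change per parameter
                  for k in after
                  if k in before and isinstance(after[k], (int, float))},
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_training_pair("make the brow a little fuller",
                  {"brow_fullness": 0.50}, {"brow_fullness": 0.65})
```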

Usage entails requesting modifications through verbal or image input; the AI API returns adjusted parameter settings, and the 3D model rebuilds instantly.
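
A sketch of that request loop, assuming the official `openai` Python package (v1+); the model name, system prompt, and parameter schema are placeholders rather than HST’s actual configuration:

```python
import json
from openai import OpenAI  # assumes the official openai package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def request_adjustment(instruction: str, current: dict) -> dict:
    """Ask the model to return an adjusted parameter set as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "You adjust helmet design parameters. "
                        "Reply with a JSON object of parameter values only."},
            {"role": "user",
             "content": f"Current parameters: {json.dumps(current)}\n"
                        f"Request: {instruction}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

new_params = request_adjustment("lower the crown by about 5 mm",
                                {"crown_height": 115.0, "width": 210.0})
# new_params would then be pushed into the parametric definition
# to trigger an instantaneous rebuild.
```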

This approach marks a step towards leveraging AI for streamlined and intuitive design processes, tailored to specific industrial needs and creative workflows.

HST implements a forward-thinking approach to integrating parametric modeling with AI capabilities, specifically tailored for the demands of helmet design. By focusing on single-span surfaces and leveraging smart models, we aim to enhance both aesthetic quality and functional efficiency in the design process.

Thoughts and feedback on this approach? How do you see AI evolving in the realm of CAD and design? Are there specific challenges or features you believe should be prioritized?

To be clear, the creators of that video do not mention AI.
There is scope for making parametric models easier to build and control (generally, a carefully built model is easier to control and modify).
What would speed the process is for the computer to take supervised guesses at which control points I want to select and then control. Manual selection of control points takes a lot of mouse work! This is a UI problem that doesn’t necessarily require ‘AI’ to solve.
It would involve ‘generation’ though. “Computer, here is my surface controlled by these splines, show me a dozen ways that control points on these splines can be related to each other”. If by chance one of these methods is what I had in mind, it will save a lot of mouse pushing.
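
One rough sketch of that generation step, with the relation patterns invented purely for illustration: each candidate groups control points that would move together, saving the manual selection work.

```python
def candidate_relations(n_points: int, max_candidates: int = 12) -> list[dict]:
    """Enumerate hypothetical ways to link control points on a spline.

    Each candidate maps a point index to a shared group id; points in
    the same group move as one handle.
    """
    candidates = []
    # Mirror pairs about the middle of the row.
    candidates.append({i: min(i, n_points - 1 - i) for i in range(n_points)})
    # Everything locked to one handle.
    candidates.append({i: 0 for i in range(n_points)})
    # Consecutive pairs, triples, ... move as rigid blocks.
    for size in range(2, n_points):
        candidates.append({i: i // size for i in range(n_points)})
    return candidates[:max_candidates]

for k, rel in enumerate(candidate_relations(8)):
    print(f"option {k}: {rel}")
```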

I don’t see any AI here. This is just a GUI for a Grasshopper definition, a very complex and impressive one at that.

This is step one toward AI incorporation. I am the creator of the video and the UI-wrapped Grasshopper3D script.

The AI section is not yet documented in the video.

To train the custom AI, my approach is that there needs to be a pathway from the designer’s intent to the factors that drive the geometry.

The first step is a GUI that can directly drive the geometry. This is shown in the video.

Thanks :blush:.

The next step is specifying the rules and guides for the AI engine: spending a lot of time inputting various model states and narrating the changes, to link the language with the modified geometric states, factors, and images. Training, in other words. First with my own experience, and then hopefully with other designers’ to round it out.

The goal is to be able to enter changes verbally and have the LLM’s language processing control the variables. True, at this point the GUI already makes changes so easy that voice control is extra, but I expect some surprises.

The next update will be more clearly linked with the LLM engine.

Thanks for the feedback.

A fascinating question: what’s the next gen of AI-CAD interaction going to look like? Thanks for taking the first step.

When discussing with a lead ME why the company I worked for outsourced all design engineering to India, he said something like: real engineers do math for a living, not CAD. Within a few years, none of the engineers would even try to open an assembly.
This is one possible future for ID.

What you describe as a combined verbal-and-GUI approach makes sense to me, because as I foresee this enhanced workflow, CAD software will occupy a much smaller proportion of the design process, and deep knowledge of the tools will fade away.
One question is what vocabulary will be needed to drive the manipulation of the parametric model. Will it learn us (the ideal), so we can use plain English plus the language of form (how we traditionally critique form), or will it remain the jargon of a CAD package?
This leads to the second question. Since we will no longer build parametric models from scratch, how can the GUI become user-friendly enough not to require memorizing hundreds of commands and thousands of modifiers?
I like what you’re showing in the demo, with each geometric feature’s modifiers presented and altered individually and dynamically. That portion of the GUI would change per feature; for example, a simple extrude would expose only one dimension and a draft angle to alter, as in the sketch below.
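
A sketch of what that per-feature GUI logic could look like; the feature names and modifier ranges here are invented for illustration, not taken from HST:

```python
# Hypothetical per-feature modifier schemas: the GUI would read one of
# these and build only the sliders that feature actually needs.
FEATURE_MODIFIERS = {
    "extrude": [
        {"name": "distance",    "unit": "mm",  "min": 0.0,   "max": 500.0},
        {"name": "draft_angle", "unit": "deg", "min": -15.0, "max": 15.0},
    ],
    "fillet": [
        {"name": "radius", "unit": "mm", "min": 0.1, "max": 50.0},
    ],
    "single_span_patch": [
        {"name": "crown_height",  "unit": "mm", "min": 80.0, "max": 140.0},
        {"name": "brow_fullness", "unit": "",   "min": 0.0,  "max": 1.0},
    ],
}

def build_panel(feature: str) -> list[str]:
    """Return slider labels for the selected feature only."""
    return [f"{m['name']} ({m['unit']})" if m["unit"] else m["name"]
            for m in FEATURE_MODIFIERS[feature]]

print(build_panel("extrude"))  # ['distance (mm)', 'draft_angle (deg)']
```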

“One question is what vocabulary will be needed to drive the manipulation of the parametric model.”
A tailor making suits already has a different vocabulary from that of an automotive body stylist designing cars. Their respective CAD software would (1) interpret the designer’s commands, (2) simulate the processes required to manufacture the item, and (3) simulate the item as it would be in use.

Sold on the “inside-out” approach! Scraping data feels limiting. Training AI on designer intent within the design software itself is a smarter way to go. Would love to see a demo of the HST project.
