Steps Towards 3D Design AI: an Example with Helmets (HST)

In July 2024, as revenue from empowered amateurs disappoints investors and the hype subsides, much of the current discussion around AI will quickly become outdated. AI's role as a tool, however, will persist. The current "outside-in" brute-force methods will give way to an "inside-out" approach: smart models interacting directly with surfaces and skeletons rather than pixels and meshes. This shift cannot rely on scraping existing data; it will require industry-specific development and domain expertise, such as FBX animation or NURBS creation methods.

To begin, there’s a need for a designer-accessible parametric model wrapped in a UI. Here’s a practical example of such an approach applied to surface creation for helmets:

(24" touchscreen interface on the top third of the screen, CAD window on the bottom two-thirds, 10× speed)

The underlying concept of this project, HST, focuses on helmet shapes using single-span surfaces. These surfaces are challenging to parameterize and typically require manual control point manipulation to achieve desired forms. Single-span surfaces excel in aesthetics and serve as foundational elements for functional and stylistic components throughout the design process.
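To make the "single-span" point concrete: a single-span NURBS surface has no interior knots, so every control point influences the entire surface, which is why small manual edits ripple across the whole form. The sketch below evaluates a single-span (Bézier) patch from its control net by de Casteljau's algorithm; the 4×4 net is illustrative, not HST's actual geometry.

```python
# Sketch: evaluating a single-span (Bezier) patch from its control net.
# With no interior knots, every control point affects the whole surface.

def de_casteljau(points, t):
    """Evaluate a Bezier curve at t by repeated linear interpolation."""
    pts = [list(p) for p in points]
    while len(pts) > 1:
        pts = [
            [(1 - t) * a + t * b for a, b in zip(p0, p1)]
            for p0, p1 in zip(pts, pts[1:])
        ]
    return pts[0]

def eval_patch(net, u, v):
    """Evaluate a single-span surface: curves in v, then one curve in u."""
    iso_points = [de_casteljau(row, v) for row in net]
    return de_casteljau(iso_points, u)

# A flat 4x4 control net at z = 0; every surface point also lies at z = 0.
net = [[[i, j, 0.0] for j in range(4)] for i in range(4)]
centre = eval_patch(net, 0.5, 0.5)
```

Parameterizing a form like this means deciding which coordinated movements of that net are meaningful, which is exactly the hard part the manual control-point pushing normally covers.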

Parametric models enable extensive, complex manipulations from a handful of strategic inputs. Those inputs can then be trained into a specific AI model, such as a custom ChatGPT model.
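As a sketch of what those "strategic inputs" could look like: a small set of named driving parameters with valid ranges, clamped so no request can push the model into an invalid state. The parameter names and ranges below are hypothetical, not HST's.

```python
# Sketch: driving parameters a parametric helmet model might expose.
# Names and ranges are illustrative assumptions, not HST's actual inputs.

from dataclasses import dataclass

@dataclass
class Param:
    name: str
    value: float
    lo: float
    hi: float

    def set(self, v: float) -> None:
        # Clamp so no request can push the model out of its valid range.
        self.value = min(max(v, self.lo), self.hi)

helmet_params = {
    p.name: p
    for p in [
        Param("crown_height", 120.0, 100.0, 160.0),
        Param("brow_curvature", 0.4, 0.0, 1.0),
        Param("rear_taper", 0.6, 0.2, 0.9),
    ]
}

helmet_params["crown_height"].set(500.0)  # out of range, clamped to 160.0
```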

Training involves designers describing their intent while the system records the corresponding changes in the 3D model, specifically adjustments to the driving parameters.

Usage entails requesting modifications through verbal or image input, with the AI API delivering adjusted parametric settings and an instantaneous rebuild of the 3D model.
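The usage loop above could be sketched like this, with the actual API call stubbed out: the model is asked to reply with a JSON object of parameter settings only, which is validated against known ranges before being applied and triggering a rebuild. The ranges dict and function names are my illustrative assumptions, not HST's interface.

```python
# Sketch of the usage loop, with the LLM API call stubbed out.
# RANGES and apply_llm_reply are illustrative names, not HST's API.

import json

RANGES = {"crown_height": (100.0, 160.0), "brow_curvature": (0.0, 1.0)}

def apply_llm_reply(reply_text, params):
    """Parse a JSON reply; fold valid, clamped settings into params."""
    requested = json.loads(reply_text)
    updated = dict(params)
    for name, value in requested.items():
        if name not in RANGES:
            continue  # ignore parameters the model invented
        lo, hi = RANGES[name]
        updated[name] = min(max(float(value), lo), hi)
    return updated

# Pretend the API answered a request like "make the crown a bit taller":
reply = '{"crown_height": 140, "not_a_param": 9}'
params = apply_llm_reply(reply, {"crown_height": 120.0, "brow_curvature": 0.4})
# params now drives an instantaneous rebuild of the parametric model.
```

Validating before applying matters: an LLM will occasionally return parameter names or values that don't exist, and the rebuild step should never see those.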

This approach marks a step towards leveraging AI for streamlined and intuitive design processes, tailored to specific industrial needs and creative workflows.

HST implements a forward-thinking approach to integrating parametric modeling with AI capabilities, specifically tailored for the demands of helmet design. By focusing on single-span surfaces and leveraging smart models, we aim to enhance both aesthetic quality and functional efficiency in the design process.

Thoughts and feedback on this approach? How do you see AI evolving in the realm of CAD and design? Are there specific challenges or features you believe should be prioritized?


To be clear, the creators of that video do not mention AI.
There is scope for parametric models being easier to build and control (generally, a carefully built model is easier to control and modify).
What would speed the process is for the computer to take supervised guesses at which control points I want to select and then control. Manual selection of control points takes a lot of mouse work! This is a UI problem that doesn’t necessarily require ‘AI’ to solve.
It would involve ‘generation’ though. “Computer, here is my surface controlled by these splines, show me a dozen ways that control points on these splines can be related to each other”. If by chance one of these methods is what I had in mind, it will save a lot of mouse pushing.
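That "show me a dozen ways these control points can be related" request could be as simple as enumerating candidate linkage rules over the selected points and letting the designer pick the variant they had in mind. The three rules below (scale about centroid, linear falloff, mirror-pair averaging) are illustrative examples, not features of any existing tool.

```python
# Sketch: generate candidate ways control points on one spline can be
# related, so the user picks a variant instead of dragging points.
# The three relation rules are illustrative assumptions.

def scale_about_centroid(pts, factor):
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor) for x, y in pts]

def linear_falloff(pts, dy):
    """Move points vertically, with full effect at the last point."""
    n = len(pts) - 1
    return [(x, y + dy * i / n) for i, (x, y) in enumerate(pts)]

def mirror_pairs(pts):
    """Tie each point to its mirror partner by averaging y values."""
    n = len(pts)
    return [(x, (y + pts[n - 1 - i][1]) / 2) for i, (x, y) in enumerate(pts)]

def candidates(pts):
    return {
        "scale_110": scale_about_centroid(pts, 1.1),
        "falloff_up_5": linear_falloff(pts, 5.0),
        "mirrored": mirror_pairs(pts),
    }

variants = candidates([(0.0, 0.0), (1.0, 2.0), (2.0, 1.0), (3.0, 0.0)])
```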

I don’t see any AI here. This is just a GUI for a Grasshopper definition, a very complex and impressive one at that.


This is step one toward AI incorporation. I am the creator of the video and the UI-wrapped Grasshopper3D script.

The AI section is not yet documented in the video.

To train the custom AI, my approach is that there needs to be a pathway from the designer’s intent to factors in the geometry.

The first step is a GUI that can directly drive the geometry. This is shown in the video.

Thanks :blush:.

The next step is specifying the rules and guides for the AI engine: spending a lot of time inputting various model states and narrating the changes, to link the language with the modified geometric states, factors, and images. Training, first with my own experience, and then hopefully with other designers to round it out.
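One way that capture step might look: each time the designer narrates a change, log the narration alongside the parameter state before and after, as one line of a JSONL corpus a fine-tuning job can consume later. The record shape and parameter names are my assumptions, not HST's format.

```python
# Sketch: capture one training example per narrated change.
# Record shape and parameter names are illustrative assumptions.

import json

def make_record(narration, before, after):
    """Pair the designer's language with the geometric change it caused."""
    delta = {k: after[k] - before[k] for k in after if after[k] != before.get(k)}
    return {"narration": narration, "before": before, "after": after, "delta": delta}

record = make_record(
    "lower the rear and soften the brow line",
    {"rear_taper": 0.6, "brow_curvature": 0.4},
    {"rear_taper": 0.45, "brow_curvature": 0.3},
)

# One JSONL line, appended to the training corpus.
line = json.dumps(record)
```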

The goal is to be able to enter changes verbally and have the LLM’s language processing control the variables. True, at this point the GUI already makes changes so easy that driving them by voice is a bonus, but I expect some surprises.

The next update will be more clearly linked with the LLM engine.

Thanks for the feedback.