In recent years there’s been a lot of discussion, and a number of projects, exploring how open-source approaches can be applied to the design of products. Where this becomes problematic is that the source code of a software project inherently contains the systems of logic and interdependencies that underpin it, whereas the data we tend to release as product designers is generally .stl and vector files. While these allow an object to be freely reproduced with readily available production tools, these file types don’t really facilitate meaningful adaptations and augmentations.
I think there’s quite a bit of potential in applying algorithmic design tools such as Grasshopper to design in ways that produce source files that remain adaptable. I’d love to hear the opinions of some other designers on this topic.
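To make the contrast concrete, here is a minimal sketch of the idea in Python: the design is kept as a handful of editable variables plus the logic that derives geometry from them, and the frozen output (here, an OpenSCAD snippet rather than an .stl) is only generated at the end. The `Enclosure` class and all dimensions below are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Enclosure:
    """A hypothetical open-top box kept as parameters, not as a mesh."""
    inner_width: float = 60.0   # mm
    inner_depth: float = 40.0   # mm
    inner_height: float = 25.0  # mm
    wall: float = 2.0           # mm, wall thickness

    # Derived dimensions: change `wall` and every outer size follows.
    @property
    def outer_width(self) -> float:
        return self.inner_width + 2 * self.wall

    @property
    def outer_depth(self) -> float:
        return self.inner_depth + 2 * self.wall

    @property
    def outer_height(self) -> float:
        return self.inner_height + self.wall  # open top: no top wall

    def to_openscad(self) -> str:
        """Emit editable OpenSCAD source a recipient can re-mesh."""
        return (
            "difference() {\n"
            f"  cube([{self.outer_width}, {self.outer_depth}, {self.outer_height}]);\n"
            f"  translate([{self.wall}, {self.wall}, {self.wall}])\n"
            f"    cube([{self.inner_width}, {self.inner_depth}, {self.inner_height + 1}]);\n"
            "}\n"
        )

box = Enclosure()
thick = Enclosure(wall=3.0)  # one edit propagates to all derived geometry
print(box.outer_width, thick.outer_width)
```

The point is that a recipient of this file can change one variable (say, wall thickness to suit a different printer or material) and every dependent dimension updates, which a baked .stl cannot offer.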
I think it depends on what you are referring to as “meaningful adaptations and augmentations.” I’ll speak to the 3D model workflow and not vector data models.
A common product design workflow I have seen over and over is taking the initial CAD model build file and passing it off to someone who can prep the data model for physical output. Depending on the complexity of the design (i.e., number of surfaces) and the CAD skill of the designer who generated the initial 3D data model, this can be very time-intensive.
Since product designers are responsible for the physical world, data models must be brought into the physical world in order to be properly evaluated for changes. This is where the product design process differs sharply from the simpler software development process. Source data will always become physical, and in many cases that physical prototype is then manipulated, modified and changed in the physical world first, before the designer goes back to the data model to make changes. An important distinction between software and product design development is that the data model for software is the final product, whereas the data model for product design is merely a guide or map to reproduce the final product in the physical world.
Programs like Grasshopper attempt to empower the designer to significantly change the data model by offering algorithmic options. This is all well and good, but at the end of the day, the product designer is responsible for the physical-world appearance, impact and interaction of the design. These cannot be properly evaluated for changes without bringing the design into the physical world through a mockup, rapid prototype or 3D-printed model. Grasshopper merely allows for visual change for change’s sake on the screen, without the ability to properly evaluate how the change affects the design’s presence in the physical world. Source data remains in the brain and memory of the designer throughout the product design process.
Vector art and images are typically trying to bring rational geometric reality to a sketch or drawing concept. Perhaps this area of concept development is better suited for algorithmic 2D visual expressions of data rather than the 3D and physical world.
I agree that in designing physical products, it’s necessary to evaluate an item as it exists in the physical world, not just on a screen. A lot of necessary information can be gleaned simply through the opportunity to touch and interact with a model, and rapid prototyping makes iteration between virtual and physical representations of an item a lot easier. Even the best simulation engines provide little information about what a product feels like.
However, I’m also interested in ways to facilitate modifications being made not only by other designers, but by end users. Additive manufacturing gets us closer to the equivalent of a “beta” release, but how do we begin to gather and document problems that arise in real-world use? What frameworks could be used to facilitate a “debugging” process for physical goods?
I agree that data models for product design are merely representations, not the final product, and there’s really no way of getting around this. However, I also think that if the right variables are introduced to facilitate context-specific change, it’s possible to make variations that impact not only how a product looks but how it functions.
Main thought: seems like a lot of buzzwords in that post.
Is the average end user interested in modifying their product? Likely just as interested as they are in fan fiction. A niche few will be super into modifying and writing stories (and writing is way easier for an average person than modifying a 3D file). Everyone else will be busy socializing, dating, getting married, then raising kids.
Mainstream productization of this type of thing will be Nike ID… I picked a color from a predetermined palette and so I feel like I modified/customized/designed… but mainly it is gamification of picking from very specific predetermined outcomes.
Not cynical, just truthful based on observation of humans. For example, I have Microsoft Word, yet I’ve written no books. I have GarageBand, yet I’ve recorded no albums. Tools don’t equal ability or even interest. For those with both ability and interest, the increased tools are wonderful… most of those people are likely designers or engineers of some kind.
Agree. Sounds more like a thesis research paper than somebody trying to get some practical answers.
So open-ended that it could go in a thousand different directions. Didn’t MakerBot just close down a few stores recently? Why would you buy your 3D printer at Home Depot? I blame topics like this that misinform or get people hyped up about something to create some buzz. You wouldn’t believe how many times I’ve had to hear unrealistic ideas about 3D printing just because people saw online that China is 3D printing houses.
I guess the closest thing would be graphic design templates for websites/flyers/letterheads. You can customize them, but in the end they end up being bland templates. If you want something truly unique and good/new/fresh, then you hire an actual designer. You can’t compare digital algorithms to the millions of physical products out there.
Give us a good real-life example of what you would like to achieve or would like to see happen; if not, it will just be a bunch of theoretical opinions. For years the aircraft industry and its partners (3D modeling and rendering software companies) have been trying to create an automated program where the client (airlines) can simply click, drag and drop monuments, sidewalls, bins, seats, carpet, leather and fabric color, etc. in order to make their custom airplane interior selection. It has never worked because airlines always want the option that is not shown…so we end up having to visualize each on a case-by-case basis.