How AI is Transforming the Industrial Design Workflow

How AI can be integrated into industrial design to enhance creativity and accelerate visualization.

By Sunny Owen · October 1, 2025 · 10 min read


As AI continues to transform industries, we were curious to see how it could fit into the industrial design process. It was fascinating to see how other professionals and students were using AI in their process and sharing their work on social media. This inspired us to explore AI tools ourselves, to see how they might enhance our own design process.

In this post, we share our experimentation with tools like Vizcom, MidJourney, and ChatGPT, aiming to understand the possibilities, challenges, and practical applications of integrating AI into our workflow through three key goals.

Primary goals of this AI tool exploration

  • Expanding creativity through AI tools: Our curiosity led us to explore whether AI could become a meaningful part of our design toolkit. We experimented with how it might support idea generation and iterative development, aiming to see how it could complement traditional design methods. Our goal was to discover how these tools could broaden the range of possibilities and enhance our overall creativity.
  • Rapid visualization: One of the most immediate benefits we experienced was speed. Through experimentation, we discovered that AI can generate short animations or visualizations in as little as 30 seconds, a task that would normally take hours. This opened our eyes to its potential as a time-saving tool and motivated us to dive deeper into what other efficiencies AI could bring to the design process.
  • Enhancing collaboration: We found AI could play a role here as well, acting almost like a co-creator. By quickly producing variations or visual aids, it gave our team new ways to exchange ideas and challenge assumptions.

The tools that we used 

As part of our initial research, we experimented with several AI tools to see how they could support our creative process:

  • Vizcom: Translating ideas into 3D visuals, refining forms, and prepping mockups for animation in Midjourney.
  • Midjourney: Generating rapid visual renderings and animation sequences from prompts.
  • ChatGPT: Refining and optimizing prompts.
  • Photoshop AI: Enhancing visuals and polishing concepts (yes, we ended up bringing our trusty friend Photoshop along. We’ll dive into more details on that at the end!)

The experiment 

Our learning process was fairly organic, involving a lot of back-and-forth as we explored each AI tool and discovered its strengths and limitations. To share our experience, we’ve broken the experiment into three sections, highlighting our initial intent, the results, and the insights that led to the next steps.

1. Turning rough ideas into visuals with Vizcom

  • The initial intent: The initial plan was to upload a rough volume and silhouette of the 3D CAD model, then refine it into a detailed robot vacuum that clearly resembles a functional device. 
  • Prompt: Design a robot vacuum inspired by colorful Scandinavian style. Refine the overall form to feel modern, minimal, and playful while exploring modifications to the shape for a fresh aesthetic.
  • The result: The results were somewhat mixed. In some renders, brushes appeared on the side of the vacuum dock, but the initial intent was to add more details, like brushes on the robot vacuum itself. The prompt wasn’t detailed enough for Vizcom to fully capture this (We will discuss this further in #2). However, after trying a few variations, the tool was able to generate the overall details, parting lines, and material finishes that gave the model the appearance of a robot vacuum.
  • Insight and the next step: Vizcom proved powerful in transforming a blocky 3D CAD model into polished, robot-vacuum-like renders. Through our experiments, we found it to be an excellent tool for enhancing visual effects and converting rough sketches or CAD models into high-resolution, photorealistic outputs. That said, making changes to proportions, product volumes, and overall forms was more challenging. To extend the exploration, we brought the refined renders from Vizcom into Midjourney, where we continued to build on these outputs and explore new design directions.

2. Refining form and proportion in Midjourney, then bringing the results back to Vizcom

We also experimented with using Midjourney to explore broader variations in proportion and form, then brought those directions back into Vizcom for further refinement. In the following section, we discuss how we used multiple AI tools in an iterative workflow.

Vizcom > Midjourney

  • The intent: Use Midjourney to explore broader shapes and proportions based on the uploaded image.
  • Prompt (Midjourney): Industrial design concept of a next-gen vacuum robot, exploring new form factors and surface finishes, sleek modern aesthetic, high-quality product rendering, soft studio lighting. (Parameters used: --chaos 55 --ar 16:9 --v 7 --stylize 500)
  • The result: MidJourney pushed the original concept into new proportions and forms, building on the foundation established in Vizcom. At times, however, it took the exploration too far by misplacing key components or even generating unintended duplicates of the robot vacuum. These unexpected results highlighted the need for further refinement to clean up the inconsistencies.
  • Insight and the next step: We started with MidJourney’s ‘weirdness’ setting at 0, as higher values often produced unusable results. To generate variations that differed meaningfully from the starting point, we had to accept some AI-driven ‘weirdness,’ which then required post-processing and cleanup. While this didn’t make the outputs unusable, they did need additional editing to meet our needs. For the next step, we brought the results back into Vizcom to refine details and clean up the renders.

Midjourney > Vizcom

  • The intent: Remove unnecessary elements in MidJourney and polish the renderings for clarity and focus.
  • Prompt (in Vizcom’s Modify tool): Close-up of a sleek robot vacuum in the corner of a bright, modern living room, sunlit hardwood floors, natural light streaming through large windows, minimal and airy interior, realistic detail, slightly zoomed-in perspective.
  • The Result: We achieved the desired composition within the setting and were able to further explore color, material, and finish (CMF) variations.
  • The insight: We removed unnecessary elements using the eraser tool in Vizcom and filled gaps with color, then ran a render to smooth out imperfections and produce a more polished result. To place the robot vacuum in the desired studio or room setting, it was easier to start with a white background and let Vizcom generate the environment. 

For achieving the ideal room composition, the key was using a highly specific prompt: clearly describing the lighting, perspective, materials, and overall mood helped guide Vizcom to create a setting that matched our vision. This approach allowed us to experiment with different layouts and visual contexts while maintaining control over the final aesthetic.

The challenges & solutions

One of the biggest challenges in using AI tools was developing a concept that doesn’t yet exist or breaking from the norm, since these tools are trained on existing data, patterns, and visual references. While AI excels at generating variations of familiar ideas, it struggles to create something truly novel or unconventional.

Our question was: what if we tried to create a concept that doesn’t exist yet, and how could we guide the AI to achieve what we needed?

We explored creating a robot vacuum concept that hadn’t been realized before, focusing on how to work with AI tools to bring it to life. A key takeaway from this process was the importance of understanding how the AI interprets prompts. We used ChatGPT to craft and optimize prompts that proved helpful in achieving the desired results. 

Here are three key tips for controlling AI to achieve the results you want:

  • Use Vizcom breakdown to structure descriptions: When we generate an image in Vizcom, we examine its description to see how the system interprets and labels the visual. This helps us understand Vizcom’s terminology and visual language, allowing us to refine and control the elements we want in future outputs.
  • Ask ChatGPT to translate into prompt language: We took the description from Vizcom and asked ChatGPT to convert it into clear, concise prompt language. This helped us create prompts that were more precise and effective for generating the desired outputs.
  • Iterate interactively: Describe what didn’t work and let ChatGPT refine the prompt. This back-and-forth process between Vizcom and ChatGPT helped us shape the right prompt language. We’re still learning, but it has already brought us closer to the outcomes we want.

Final Outcome and Key Takeaways

Iterating Aesthetics for the Robot Vacuum

By using AI tools, we were able to generate concepts that explored different variations of our aesthetic goals. Our aim was to examine multiple design directions for the robot vacuum while keeping the overall proportions consistent. These explorations demonstrate how AI can expand our design possibilities, even as we continue learning how to effectively integrate these tools into the industrial design process. While these concepts are not intended as final designs, certain elements can be adopted and inform the broader design process.

From Independent Thinking to Collective Creativity

Vizcom’s two distinct platforms proved to be a powerful collaboration tool. Workspaces allow team members to work independently, while Workbench provides a more collaborative canvas for joint creation. Leveraging both, teams can work efficiently and cohesively, contributing seamlessly to a single sketchbook. In addition, the ability to track the history of each team member’s creative process was powerful in revealing hidden ideas that can inspire the rest of the team. Honestly, we’re just beginning to learn how this AI tool could shape the future of collaboration.

From 2D to Animated Visualization


AI tools offer a powerful way to quickly demonstrate concepts through animation. It’s impressive how fast these tools can generate motion, compared to traditional methods that often take days to render. 

In Midjourney, you can generate animations quickly, even by just adding a frame without a prompt, which makes it easy to get ideas flowing and see motion right away. However, it struggled with consistency and lacked tools to refine results, often requiring multiple reruns or a detour through other tools, such as ChatGPT, for prompt refinement.

Vizcom, on the other hand, delivered more polished results, with realistic details like fabric creases and natural lighting changes, though our testing was limited. Overall, MidJourney is better for speed and variety, while Vizcom shines when it comes to producing high-quality, environment-aware product animations.

Redefining Possibilities in Design with AI

After spending a few days exploring AI and its capabilities, we’re convinced these tools offer enormous potential to enhance creativity, speed up visualization, and transform the way we explore and communicate design ideas.

We’re excited about how AI tools may evolve to generate outcomes that more closely reflect the original design intent, while also enabling humans to develop more effective ways of communicating with AI. The evolving dialogue between human creativity and AI capabilities has the potential to redefine the design process, enabling designers to iterate faster and explore new levels of creativity and innovation.