
Why we are not building a generic chatbot


We’re building more than just “ChatGPT inside CAD.” By turning CAD into AAD (AI-Aided Design), we aim to create agents that feel like they live INSIDE CAD workflows, not tools that sit next to them.

Today’s AI x CAD products tend to fall into four forms:

Canvas + Chat Bubble

👍 Pros:

  • Intuitive, visual, and precise targeting of geometry for prompts.

  • Feels immediate and lightweight for simple tasks.

🤔 Cons:

  • Outputs are often non-editable or lossy, requiring “prompt luck” to get the right result.

  • Hard to scale to multi-step or parametric workflows.

Node-Based Scripts

👍 Pros:

  • Fully editable and transparent logic.

  • Supports complex, parametric, and multi-step workflows with precision.

🤔 Cons:

  • Primarily adopted by advanced or technical users.

  • Difficult to maintain across teams; scripts break easily when context or project structure changes.

Parallel Chat Window

👍 Pros:

  • Familiar “ChatGPT-like” interaction model.

  • Easy onboarding for non-technical users.

🤔 Cons:

  • Too general and detached from real CAD workflows.

  • CAD’s unique constraints limit consistency and reliability, leading to unpredictable outputs.

Entirely New CAD Software

👍 Pros:

  • Maximum freedom to design AI-native features.

  • Can rethink workflows without legacy constraints.

🤔 Cons:

  • Extremely high switching cost and adoption risk for enterprises and industry professionals.

  • Incompatible with existing tooling, standards, and interoperability requirements.

What’s missing is a tool that lives within users’ workflows, letting them accomplish more with less. Here’s a sneak peek at what we’ve learned and what we’re building towards:

Human Communication as the Blueprint for AI Collaboration

Watching users interact with different CAD MCPs -- and even with our own early MVP, a typical “CAD co-pilot” -- we realized that the bar for truly “vibe CADing” is still extremely high. The challenge isn’t that users are unwilling to write detailed prompts; it’s that CAD is fundamentally spatial. When a designer wants to tell AI to “put this over there,” it can take an unreasonable amount of effort to specify what “this” is and where “there” should be.

At first, we assumed this was simply a communication skill people would develop over time. We even launched a series called Reer Academy, sharing best practices on prompting for precise 3D manipulation. But as we experimented more with real users, it became clear that a standalone AI assistant -- no matter how powerful -- just isn’t the most intuitive way to enter an established professional workflow. It often feels omnipotent yet vague, ambitious yet over-promising.

So we shifted focus. We studied how humans communicate with each other when collaborating on CAD tasks, and began building experiences that mirror those patterns. Instead of a universal chatbot, we are building towards a system that appears as an ambient, purpose-built helper woven directly into each workflow -- feeling natural and contextual, and often offering help before users even realize they need it.

From Muscle Memory to Machine Intelligence

Since our goal is to automate the repetitive, often tedious grunt work in professional CAD, we’ve centered our research on understanding how experts actually work -- their muscle memory, favorite shortcuts, navigation habits, and the micro-actions they repeat dozens of times a day. While many AI tools start from the macro workflow level, we’ve found that the micro-scale behaviors -- the small, incremental steps people hardly notice -- reveal surprisingly consistent patterns when observed across many workflows. And once you can connect those patterns across tasks, tools, and contexts, you start to see exactly where repetition and cognitive load accumulate.

Those moments are not only pain points -- they’re clear signals for where AI can step in naturally, predict what the user is trying to do, and remove friction without disrupting flow.
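To make that concrete, here is a minimal sketch of the kind of pattern-spotting we mean: counting short runs of micro-actions across session logs and surfacing the ones that recur. Everything here -- the action names, the log format, the three-action window -- is illustrative, not Reer’s actual pipeline:

```python
from collections import Counter
from itertools import islice

# Hypothetical session logs: each list is the ordered micro-actions one
# designer performed. The action names are made up for illustration.
sessions = [
    ["select_column", "copy", "paste", "move", "snap_to_grid"],
    ["select_wall", "offset", "select_column", "copy", "paste", "move"],
    ["select_column", "copy", "paste", "move", "rotate"],
]

def ngrams(seq, n):
    """Yield every run of n consecutive actions in a session."""
    return zip(*(islice(seq, i, None) for i in range(n)))

# Count every 3-action run across all sessions. Runs that recur across
# sessions mark where repetition (and automation potential) accumulates.
counts = Counter(run for s in sessions for run in ngrams(s, 3))
for run, freq in counts.most_common():
    if freq > 1:
        print(f"{freq}x  " + " -> ".join(run))
# 3x  select_column -> copy -> paste
# 3x  copy -> paste -> move
```

Real workflows would need richer event schemas and fuzzier matching, but the principle is the same: repetition only becomes visible once you look at the micro scale.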

This led us to a core philosophy for Reer’s UX/AX design: build AI agents that feel alive inside CAD -- aware of context, sensitive to intent, and quietly supportive -- yet never intrusive or overbearing. All in service of our mission to transform CAD into AAD.

Creating Incremental “Quality of Life” Magic

From our user testing, the strongest “whoa” moments don’t actually come from big, zero-to-one generative feats, like when the AI agent builds a full floor slab from a prompt. Instead, they happen in the smaller, smarter moments: when a user changes a floor height and Reer automatically extends dozens of columns to reach the new ceiling. These micro-interactions feel intelligent, anticipatory, and aligned with how people actually work.
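As a toy illustration of why that moment feels like magic, here is what the bookkeeping might look like if column heights are derived from levels rather than stored per column. This Level/Column model is hypothetical, not Reer’s internals:

```python
from dataclasses import dataclass

@dataclass
class Level:
    name: str
    elevation: float  # meters above ground

@dataclass
class Column:
    base: Level
    top: Level

    @property
    def height(self) -> float:
        # Height is derived from the levels, never stored on the column.
        return self.top.elevation - self.base.elevation

ground = Level("L1", 0.0)
ceiling = Level("L2", 3.0)
columns = [Column(ground, ceiling) for _ in range(24)]
print(columns[0].height)  # 3.0

# The "quality of life" moment: one edit to the level, and every column
# that references it reaches the new ceiling with no per-column work.
ceiling.elevation = 3.6
print(columns[0].height)  # 3.6
```

The agent’s job is to recognize when a user’s edit implies that kind of dependency, even in files where it was never modeled explicitly.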

As we collect more of these reactions, we’ve realized something fundamental: expectation management is everything. Two users can have the AI complete the exact same task, yet feel vastly different levels of satisfaction depending on whether the AI’s behavior aligns with their mental model.

 

Many of our business users initially worry about the learning curve and that AI tools lack the domain depth required for professional-grade workflows. But when we ask them to compare onboarding a new team member with onboarding Reer, the hesitation dissolves. They immediately understand the parallel and become far more open to adopting the tool. Just as with a standout teammate, the moment a person impresses us is when they actively think one step ahead, understand the essence of the problem, and act proactively instead of passively waiting to be instructed. And this is what a generic chatbot cannot achieve.

Industry Signals from Adobe

At Adobe MAX 2025, the world’s largest creative conference, Adobe’s Firefly launch reinforced a model that resonates deeply with our approach.

 

Adobe Firefly is a collection of generative AI models embedded directly across Adobe’s creative tools. Instead of releasing a standalone “AI-only” product, Adobe treats Firefly as an engine that powers specific, context-defined features native to each application. Crucially, these AI features are triggered through the same gestures creative professionals already use. For example, “Generative Fill” in Photoshop doesn’t start from a chat box; it starts from a selection, one of the most fundamental actions in Photoshop. Users trigger it through familiar behavior without being forced to use it. The AI prompt field simply lives in an existing sidebar, visually indistinguishable from any other tool. AI becomes an unobtrusive addition to the workflow.

 

Adobe’s approach of building a centralized agentic engine is strong validation of our vision to integrate multiple CAD platforms into a unified Reer IDE (Intelligent Design Environment). But CAD is fundamentally different. While Adobe users operate within a single ecosystem, CAD workflows are deeply fragmented -- spanning different tools, file types, and software families. In our user survey, 86% of professionals report juggling multiple CAD applications on a regular basis.

This fragmentation is exactly why our approach matters. We are building toward a seamless agentic experience that feels native inside each CAD file while remaining connected through a shared, centralized IDE. Localized when needed, centralized when it counts.

 

Reer Is Not a Chatbot, but an Input Engine

Reer isn’t built to chat -- it’s built to understand and act. While others translate words into words, Reer translates intention into CAD actions, reading the story designers tell through clicks, selections, and the subtle choreography of modeling.

CAD has always been a spatial language -- quiet, precise, deeply contextual. So we built an agent that listens not just to text, but to everything: actions, habits, micro-movements, and the invisible rhythm of creation.

Because every gesture is intention.
Every intention is a signal.
And every signal deserves action, not conversation.

This is the future of AAD.

And we’re just getting started...
