You upload a clean SketchUp viewport of an empty living room. You get back a render with a sofa you didn't model, a rug that wasn't there, and a window that's moved 60cm to the left. This is the most common complaint architects have about AI rendering — and it has a specific technical cause that most tools do nothing to address.
Understanding why hallucination happens is the first step to stopping it.
Why does this happen?
Diffusion models — the AI technology behind most rendering tools — learn by training on millions of photographs. They learn the statistical patterns of what real spaces look like: living rooms usually have sofas, kitchens usually have appliances, windows usually appear at a certain height and width relative to the room.
When you give one of these models a SketchUp viewport of an empty room, the model is looking at something that doesn't match its training data expectations. An empty room with concrete floors and no furniture is statistically unusual. The model's learned response is to make it look more like the living rooms in its training data — which means adding furniture, adjusting proportions, and filling in the gaps.
This isn't the model malfunctioning. It's the model doing exactly what it was trained to do. The problem is that it was trained on general photography, not architectural design documentation.
The ControlNet conditioning problem
The technical mechanism behind hallucination is conditioning strength. Most AI rendering tools use a neural network extension called ControlNet to anchor the output to your 3D model. ControlNet takes a structural representation of your viewport — edge lines, depth maps, surface normals — and uses it to guide the generation.
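To make the mechanism concrete, here is a minimal sketch of the kind of preprocessing involved, using OpenCV to extract a Canny edge map from a viewport export. The file name and threshold values are illustrative assumptions, not taken from any particular tool:

```python
import cv2
import numpy as np
from PIL import Image

# Load a viewport export (file name is hypothetical).
viewport = np.array(Image.open("viewport.png").convert("RGB"))

# Extract edge lines; this becomes the structural "skeleton"
# that ControlNet uses to anchor the diffusion process.
edges = cv2.Canny(viewport, 100, 200)

# ControlNet expects a 3-channel image, so duplicate the edge map.
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
```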
The strength of this conditioning is the key variable. High conditioning weight: the output stays close to your input geometry. The model renders what you drew. Low conditioning weight: the model has more creative freedom. The output looks more photogenic, but drifts further from your design.
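In Hugging Face's open-source diffusers library, which many of these tools build on, this variable is exposed directly as a parameter called controlnet_conditioning_scale. A minimal sketch, assuming the edge map from the previous snippet and publicly available example weights (no claim about what any commercial tool ships):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Public example weights for an edge-conditioned ControlNet.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

render = pipe(
    prompt="interior, concrete floors, white walls, no furniture",
    image=control_image,  # the edge map from the previous sketch
    # The key variable: ~1.0 keeps geometry anchored to the input;
    # lower values trade fidelity for photogenic drift.
    controlnet_conditioning_scale=1.0,
).images[0]
```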
General-purpose AI tools default to lower conditioning strength because it produces results that look better to a general audience. Architecture-specific tools default to higher conditioning because clients need to see the actual design.
The practical consequence: a lower-conditioned render will almost always look more impressive in a demo. But it will also add furniture you didn't design and move walls you did.
Five signs your tool is hallucinating
1. Furniture appeared that you didn't model. The clearest sign. If there's a chair in the render that wasn't in your model, the conditioning strength is too low or the tool doesn't expose control over it.
2. Wall positions shifted. A room that was 4.5m wide looks 3.8m wide in the render. This is the model adjusting proportions to match what "looks right" based on training data.
3. Windows changed proportion or position. Windows are particularly prone to this — the model has strong learned priors about what window-to-wall ratios look like, and it will adjust yours to match.
4. Ceiling height looks wrong. A 3m ceiling renders as 2.6m. Same mechanism — the model is calibrating to its training data expectations.
5. Materials changed colour or texture significantly. You specified pale grey concrete and got warm beige plaster. The model is applying what it associates with the scene type, not what you specified.
How to reduce hallucination in your current tool
If your tool exposes conditioning strength as a slider or setting, increase it. More conditioning = more fidelity to your input. The render may look slightly less "perfect" but it will look more like what you designed.
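If you are driving the pipeline directly rather than through a tool's UI, a quick way to see the trade-off for yourself is to sweep the conditioning scale with a fixed seed. A sketch building on the pipeline above; the scale values are illustrative:

```python
# Compare geometry fidelity across conditioning strengths.
# Fixing the seed isolates the effect of the scale itself.
for scale in [0.5, 0.8, 1.0, 1.2]:
    result = pipe(
        prompt="interior, concrete floors, white walls, no furniture",
        image=control_image,
        controlnet_conditioning_scale=scale,
        generator=torch.Generator("cuda").manual_seed(42),
    ).images[0]
    result.save(f"render_scale_{scale}.png")
```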
Remove vague style prompts that invite creative interpretation. "Beautiful, luxurious interior" is an invitation for the model to apply its own idea of what beautiful and luxurious means — which may not match your design. Be specific: "interior, concrete floors, oak cabinetry, white walls, no furniture."
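In raw pipeline terms, the same advice amounts to pairing a specific prompt with a negative prompt that names what you don't want. A sketch with illustrative wording, again using the pipeline above:

```python
render = pipe(
    # Name the materials and state explicitly; avoid aesthetic adjectives.
    prompt="interior, concrete floors, oak cabinetry, white walls, no furniture",
    # Negative prompts steer the model away from common hallucinations.
    negative_prompt="sofa, rug, plants, clutter, extra furniture, warped walls",
    image=control_image,
    controlnet_conditioning_scale=1.0,
).images[0]
```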
Use clean solid-colour materials in your model. Heavily textured or strongly coloured materials in your viewport give the model more to work with, but also more latitude to reinterpret them. Clean flat colours are easier for the conditioning to hold.
If you're rendering an empty room, consider adding basic placeholder geometry where furniture should go — simple boxes indicating a sofa volume, a dining table footprint. This signals to the model that you know those areas are empty by design, not by omission.
Why architecture-specific tools handle this differently
The difference isn't just conditioning strength — it's the system prompt. Every AI rendering tool is running a prompt under the hood that instructs the model how to interpret your input. General-purpose tools use prompts optimised for visual quality. Architecture-specific tools use prompts that explicitly instruct the model not to add or remove elements.
A prompt that includes something equivalent to "preserve all geometry exactly as shown, do not add furniture, do not modify wall positions, do not alter proportions" produces fundamentally different outputs from a prompt that says "photorealistic interior render."
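No vendor publishes its exact system prompt, but a hedged sketch of what such a template might look like follows. This is entirely hypothetical, for illustration only, and not Maquete's actual prompt:

```python
# Hypothetical architecture-oriented prompt template, for illustration only.
GEOMETRY_PRESERVING_PREFIX = (
    "photorealistic architectural render, "
    "preserve all geometry exactly as shown, "
    "do not add or remove furniture, "
    "do not modify wall positions, window proportions, or ceiling height"
)

def build_prompt(user_materials: str) -> str:
    """Combine the fixed geometry constraints with the user's material spec."""
    return f"{GEOMETRY_PRESERVING_PREFIX}, {user_materials}"

prompt = build_prompt("concrete floors, oak cabinetry, white walls")
```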
This is why architecture-specific tooling matters. The model is the same underlying technology. The prompt engineering is what makes it architectural rather than generative.
What this means for client deliverables
A hallucinated render that goes to a client is worse than no render at all.
If a client sees a render with a sofa they assume you selected and a rug they assume is in the specification, you've created expectations that don't exist in the design. When the actual design gets presented — no sofa, different proportions — the client compares it to the render and finds it lacking. You've set yourself up for a difficult conversation.
Geometry fidelity isn't a technical nicety. It's a professional requirement for anything that goes to a client, enters a competition, or gets used in a planning submission. The render should show what you designed. Anything else is misleading.
Frequently asked questions
Why do AI renders add furniture I didn't model? AI rendering tools use diffusion models trained on general photography. When your model lacks furniture, the AI fills it in to match its training data expectations. Tools with high ControlNet conditioning strength and architecture-specific prompts prevent this by anchoring the output to your exact geometry.
How do I stop AI from changing my room layout? Increase the conditioning strength in your tool if it's exposed as a setting. Use specific, restrictive prompts rather than vague aesthetic ones. Use a tool designed for architectural fidelity rather than a general-purpose image generator. Architecture-specific tools like Maquete use tuned prompts that explicitly prevent geometry modification.
Does Maquete add hallucinated furniture? No — Maquete's prompts are specifically engineered to preserve your model geometry. We don't add furniture you didn't model, and we don't alter wall positions, window proportions, or room dimensions. If your model shows an empty room, the render shows an empty room.
Which AI rendering tools preserve geometry? Tools built specifically for architectural use — Maquete, and to varying degrees tools like Veras at higher conditioning settings — are designed for geometry preservation. General-purpose diffusion tools (Midjourney, standard Stable Diffusion) have no architectural conditioning and hallucinate freely. The key question to ask any tool: what is your ControlNet conditioning strength, and what does your system prompt say about geometry?