AI render tools have been around long enough now that most architects have tried one. The typical experience goes like this: upload a SketchUp screenshot, hit generate, get something that looks almost right but not quite. The shadows are slightly off. The materials drift between angles. It looks like a render, not a photograph. You show it to the client anyway because you're out of time.
This isn't an AI problem. It's a specification problem.
Photographers don't walk into a space and start shooting. They make a series of deliberate decisions before they touch the camera — time of day, light direction, window treatment, whether the artificial lights are on or off. Each of those decisions has a precise technical consequence. Get them right and the image is convincing. Skip them and you're hoping the camera figures it out.
AI render models work the same way. The difference between a render that looks like a photograph and one that looks like a render is almost always whether someone thought carefully about what they were asking for before they asked.
Here are the decisions that matter.
Time of day is a physical specification, not a mood setting
Choosing "daytime" tells the model almost nothing. Daytime at 8am and daytime at 2pm are completely different light scenarios — different sun angle, different shadow length, different colour temperature. A model that doesn't know which one you want will invent something. Invented light looks invented.
The conditions that work best for residential interiors are specific. Sunrise for east-facing spaces with warm material palettes. Soft mid-morning for bathrooms and kitchens where you want clean, neutral light. Blue hour for spaces where the artificial lighting is part of the design story — the exterior going dark while interior sources glow at 2700K creates an atmosphere that clients respond to immediately.
Pick a time. Commit to it. Vague instructions produce vague results.
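If it helps to see what committing looks like, here is a minimal sketch of holding that decision as data before writing the prompt. The helper, its field names, and its wording are mine, not any render tool's API.

```python
# Hypothetical helper: refuse vague time-of-day values and force a specific condition.
VAGUE_TIMES = {"daytime", "day", "morning", "afternoon"}

def time_of_day_line(time_of_day: str, colour_temp_k: int | None = None) -> str:
    """Build one explicit lighting sentence for a render prompt.

    Rejects vague values, because "daytime" tells the model almost nothing.
    """
    if time_of_day.lower() in VAGUE_TIMES:
        raise ValueError(f"'{time_of_day}' is a mood, not a specification; pick a time")
    if colour_temp_k is None:
        return f"Lighting: {time_of_day}, natural light only."
    return f"Lighting: {time_of_day}, interior sources on at {colour_temp_k}K."

print(time_of_day_line("blue hour", colour_temp_k=2700))
# Lighting: blue hour, interior sources on at 2700K.
```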
Window treatment is the most important decision most people skip
A room with bare windows and a room with a translucent curtain are photographically different spaces even if the geometry is identical.
The translucent curtain turns the entire window into a soft area light. No hard shadow edges. Neutral whites. The exterior disappears behind a luminous plane. This is the condition behind most high-end residential photography — that calm, airy quality that makes a space look expensive.
Roller blinds directionalise the light without sharpening it. You get a gradient from window to opposite wall, with the exterior slightly overexposed. It gives the space depth that the curtain condition doesn't quite achieve.
Direct sun through horizontal slats is the dramatic option — linear shadow patterns, high contrast, warm afternoon colour temperature. When it's right it's great. When the shadows are pushed too far it looks like a CGI demonstration of what blinds do rather than a photograph of a real space.
The window treatment isn't a detail. It's the primary decision that determines shadow quality across the entire frame, and most people either skip it or leave it on a default.
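A brief that forces the choice is one way to stop skipping it. The sketch below encodes the three conditions above as the only accepted options; the structure and wording are illustrative, not a feature of any particular tool.

```python
# Hypothetical mapping from window treatment to its photographic consequence,
# paraphrasing the three conditions described above.
WINDOW_TREATMENTS = {
    "translucent curtain": (
        "window acts as a soft area light, no hard shadow edges, neutral whites, "
        "exterior dissolves into a luminous plane"
    ),
    "roller blind": (
        "directional but soft light, gradient from window to opposite wall, "
        "exterior slightly overexposed"
    ),
    "horizontal slats, direct sun": (
        "linear shadow pattern, high contrast, warm afternoon colour temperature"
    ),
}

def window_line(treatment: str) -> str:
    """Force an explicit choice; there is deliberately no default."""
    if treatment not in WINDOW_TREATMENTS:
        raise ValueError(f"choose one of {sorted(WINDOW_TREATMENTS)}")
    return f"Windows: {treatment} ({WINDOW_TREATMENTS[treatment]})."

print(window_line("roller blind"))
```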
Your modelling software is leaving artefacts in your renders
SketchUp prints show flat-shaded geometry with visible edges. The darker shading near a corner or crease is viewport shadow — not a different material. A render model that doesn't know you're working in SketchUp might interpret that shadow as a tonal variation and carry it through into the output. The result is a wall with three slightly different tones for no physical reason.
Revit and ArchiCAD prints include hatching that represents material types in BIM documentation. If the model doesn't know what software you're using, it might interpret that hatching as a surface texture.
This is a category of error that's invisible until the client asks why the wall looks patchy. Telling the model what software your print came from is not a formality.
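One low-effort way to make it routine is a standing caveat per package, attached to every brief. A sketch, with wording you would tune to whatever your own tool responds to:

```python
# Hypothetical caveats per modelling package, based on the artefacts described above.
SOFTWARE_CAVEATS = {
    "sketchup": (
        "Source is a SketchUp viewport: flat shading and visible edges are "
        "viewport artefacts, not material or tonal changes."
    ),
    "revit": (
        "Source is a Revit print: hatching denotes material type in documentation "
        "and must not be read as surface texture."
    ),
    "archicad": (
        "Source is an ArchiCAD print: hatching denotes material type in documentation "
        "and must not be read as surface texture."
    ),
}

def software_line(package: str) -> str:
    """Look up the standing caveat for the package the print came from."""
    return SOFTWARE_CAVEATS[package.lower()]

print(software_line("SketchUp"))
```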
"White walls" is not a material specification
This is the one that produces the most inconsistency across angles of the same project.
AI models are trained on photographs of real spaces. "White walls" in that training data covers everything from polished plaster to rough render to painted drywall to fabric wallcovering. Without further instruction, the model picks something from that entire distribution. On one angle it picks smooth plaster. On another it picks something with a slight texture. The two renders look like different projects.
The fix is specificity. "Smooth white plaster, matte finish, subtle organic imperfection, no texture pattern" is a specification. The model has a much narrower range of options and the outputs become consistent across angles.
The same applies to every material that matters. Marble: which marble? Polished Calacatta with fine grey veining or high-contrast Nero Marquina? Timber: raw oak or dark walnut, oiled or lacquered? The more precisely you describe it, the more predictable the output.
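Writing the schedule down once per project is what keeps the angles consistent. A minimal sketch, with surface names and specs of my own choosing:

```python
# Hypothetical per-surface material schedule. The point is that every angle of the
# project reuses exactly the same strings, so the model's range of options stays narrow.
MATERIALS = {
    "walls": "smooth white plaster, matte finish, subtle organic imperfection, no texture pattern",
    "floor": "raw oak boards, oiled, light grain, no strong colour variation",
    "island": "polished Calacatta marble, fine grey veining, low contrast",
}

def material_lines(materials: dict[str, str]) -> str:
    """One line per surface, identical across every render of the project."""
    return "\n".join(f"{surface.capitalize()}: {spec}." for surface, spec in materials.items())

print(material_lines(MATERIALS))
```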
The architects who are getting consistently good results from AI render tools are not using better tools. They're specifying better. They're approaching the brief the way a photographer approaches a shoot — every variable considered, every decision deliberate.
The renders that look like photographs started with a photographer's thinking. The ones that look like renders didn't.
Maquete was built to make these decisions explicit rather than bury them. Time of day, window treatment, artificial lighting, modelling software, material specifications — surfaced as decisions rather than hidden behind a style selector. Because the knowledge of how a space should look already exists. It lives in the architect's head. The tool should be extracting it, not requiring you to guess at the right words.