It's 9pm. Client meeting at 10am. The model's done but you've got nothing to show except a SketchUp screenshot with purple shadows and that default sky gradient that screams "I haven't started on visuals yet."
You've been here before. We all have.
The old options were all bad. Fire up V-Ray and spend four hours on materials and lighting setup. Export to Lumion and realise you forgot to install that grass asset pack. Open Midjourney and describe your building in words, then spend an hour trying to get it to stop inventing extra floors.
Here's what I actually do now. Five minutes, start to finish. No rendering engine. No prompt engineering. No praying to the GPU gods.
Step 0: Get your SketchUp screenshot right
This takes 30 seconds and it's the single biggest factor in your final render quality. Everything downstream depends on this frame.
Pick your camera angle in SketchUp the way you'd position a camera on a real shoot. Eye-level for interiors. Slightly elevated for exteriors. The composition you choose here is the composition you get back — the AI doesn't reframe your shot, it renders exactly what you show it.
Turn off SketchUp's shadows and fog. Hide the axes. You want clean geometry with as little visual noise as possible. The AI reads every pixel. If there's a stray guideline running through your living room, it'll try to make sense of it.
Export as PNG. Don't overthink the resolution — 1080p or higher is fine. Save it somewhere you can find it in ten seconds.
Step 1: Upload and let the AI read your design
Drop your screenshot into Maquete's guided workflow. Pick your source app (SketchUp, Revit, Rhino — whatever you modelled in). Give the project a name if you want to find it later.
Here's where things get interesting. The AI analyses your image and comes back with a full technical reading of the scene. It identifies every element it can see — the ceiling treatment, the wall finishes, the flooring, the furniture, the window frames, the light fixtures. It maps out the spatial layout, the camera position, the composition.
You never see this technical reading. It happens behind the scenes. What you see is the next step.
Step 2: Confirm your materials
The AI takes its best guess at every material in the scene, then asks you to confirm or correct. It knows that thing on the ceiling is a slatted element — but is it natural timber or painted MDF? Matte or satin finish? Does it have visible grain?
Each element shows up as a card. The AI's guess is pre-filled. You can:
- Confirm it by selecting from the dropdown options (material type, finish, reflectivity)
- Upload a reference photo of the actual material — a product sample, a swatch, a photo from the supplier's website
- Skip it and let the AI decide based on what it sees in the image
- Add notes — "the floor is actually Corten steel" or "these cushions are dark green velvet, not grey"
For a typical interior with 8-10 elements, this takes about 90 seconds if you're being thorough. Faster if you skip the elements you don't care about.
The key insight: you don't need to get everything perfect. The materials that matter are the big surfaces — floor, walls, ceiling. Get those right and the render sells itself. Nobody's zooming in to check whether your door handle is brushed nickel or satin chrome.
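If it helps to think about what you're actually filling in, each card boils down to a small record: the AI's guess plus your confirmation, reference, notes, or a skip. Here's a sketch in Python — every name here is illustrative, not Maquete's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MaterialCard:
    """One detected element awaiting confirmation (hypothetical structure)."""
    element: str                           # e.g. "ceiling slats"
    ai_guess: str                          # pre-filled best guess, e.g. "natural timber"
    finish: Optional[str] = None           # "matte" / "satin" / None to let the AI decide
    reference_photo: Optional[str] = None  # path to a swatch or supplier photo
    notes: str = ""                        # free-text corrections
    skipped: bool = False                  # True = accept whatever the AI sees

# The advice above in code form: confirm the big surfaces, skip the small stuff.
cards = [
    MaterialCard("floor", "oak plank", finish="matte"),
    MaterialCard("walls", "painted plaster", notes="warm white, not grey"),
    MaterialCard("door handle", "brushed nickel", skipped=True),
]
confirmed = [c for c in cards if not c.skipped]
```

The point of the sketch: two confirmed surfaces and one skip is a perfectly good session.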
Step 3: Set your lighting
This is where you stop being an architect and start being a photographer.
The AI shows you every light fixture it found in the scene. You toggle each one on or off. For the ones that are on, you set colour temperature (2700K warm, 3000K neutral, 4000K cool) and intensity (soft, medium, bright).
Then you set the natural light. This is just a time-of-day slider. Morning light from the east with long shadows. Midday overhead with minimal drama. Late afternoon golden hour streaming through the west-facing windows. Overcast diffuse light that wraps everything evenly.
You also control the mood: soft or defined shadows, neutral or slightly dramatic contrast, neutral or enveloping atmosphere.
My default for client presentations: 10am natural light, soft shadows, neutral contrast. It's clean, it's honest, it doesn't oversell the space. Save the golden hour drama for the Instagram post.
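Written out as a settings object, my presentation default looks something like this — a sketch only, with hypothetical field names that mirror the controls described above rather than the tool's real internals:

```python
# Hypothetical lighting configuration mirroring the controls described above.
presentation_default = {
    "fixtures": {
        "pendant_lights": {"on": False},
        "recessed_spots": {"on": True, "kelvin": 2700, "intensity": "soft"},
    },
    "natural_light": {"time_of_day": "10:00"},  # morning light, clean and honest
    "mood": {
        "shadows": "soft",        # soft vs defined
        "contrast": "neutral",    # neutral vs slightly dramatic
        "atmosphere": "neutral",  # neutral vs enveloping
    },
}

def lit_fixtures(config):
    """Return the names of fixtures that are toggled on."""
    return [name for name, f in config["fixtures"].items() if f["on"]]
```

Swap `"10:00"` for a late-afternoon time and `"contrast"` to `"slightly dramatic"` and you've got the Instagram version of the same scene.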
Step 4: Describe what's outside the windows
If the AI detected windows or openings, it asks what's on the other side. Three options:
- Upload a photo of the actual site context — the garden, the street, the neighbouring building
- Describe it in text — "tropical Brazilian vegetation, banana trees and monstera"
- Skip it — the AI will fill in something contextually appropriate
This step gets overlooked but it matters more than you'd think. A beautiful interior render with a blank white void outside the window looks fake. Even a simple "suburban garden with mature trees" gives the scene depth and realism.
Step 5: Add a quality reference (optional)
You can upload a photo that represents the photographic quality you're after. Not the geometry, not the furniture, not the materials — just the quality of light, the depth of field, the editorial feel.
I keep a folder of three or four architectural photography references from magazines. One bright and airy, one moody and dramatic, one warm residential, one cool commercial. Drop in whichever matches the vibe.
This step is optional. Skip it and the AI defaults to clean editorial architectural photography. Which is fine for most presentations.
Step 6: Hit generate
Pick your output size (2K is fine for presentations, 4K if you're printing) and aspect ratio (3:2 for most interiors, 16:9 for wide shots). Hit the button.
The tool compiles everything you've specified — materials, lighting, context, quality reference — into a detailed render prompt. You never see this prompt. It's assembled from your decisions, not your words.
Then it generates. Thirty to sixty seconds for 2K. A bit longer for 4K.
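Conceptually, the compile step is just folding every decision you made into one specification. A toy sketch of the idea — the real prompt assembly is internal to the tool, and these function and field names are mine, not Maquete's:

```python
def compile_render_spec(materials, lighting, window_context,
                        quality_ref=None, size="2K", aspect="3:2"):
    """Fold the wizard's answers into one render specification (illustrative only)."""
    parts = [
        f"output: {size} at {aspect}",
        "materials: " + "; ".join(f"{k} = {v}" for k, v in materials.items()),
        f"lighting: {lighting}",
        f"through the windows: {window_context}",
    ]
    if quality_ref:
        parts.append(f"photographic quality: match {quality_ref}")
    return "\n".join(parts)

spec = compile_render_spec(
    materials={"floor": "matte oak", "walls": "warm white plaster"},
    lighting="10am natural, soft shadows, neutral contrast",
    window_context="suburban garden with mature trees",
)
```

Notice that nothing in the spec came from free-text prompting — every line is a decision you made by clicking, which is the whole point.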
The regen trick
Here's the part that makes this a workflow instead of a gamble.
Your first render might be 90% there. The geometry is correct, the materials are right, but maybe the light direction feels slightly off. Or you want to see it with the pendant lights on instead of off.
You get four free regenerations per session. Each one uses the same compiled configuration — same materials, same context, same quality — but produces a new variation. It's like asking a photographer to take another shot from the same position.
After four regens, each additional one costs one credit. But four is usually enough to get something you're happy with.
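The credit maths is simple enough to write down. A one-liner, using the numbers from this article (check current pricing before relying on them):

```python
def regen_credit_cost(regens_used: int) -> int:
    """Credits consumed by regenerations in one session.

    Per the article: the first four regenerations are free,
    and each one after that costs one credit.
    """
    free_regens = 4
    return max(0, regens_used - free_regens)
```

So six regens in one session would cost two credits — but in practice you rarely get that far.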
What actually matters (and what doesn't)
After doing this a few hundred times, here's what I've learned moves the needle:
Matters a lot:
- Camera angle in SketchUp. If the composition is bad, the render is bad. No amount of AI fixes a boring frame.
- Getting the three big surfaces right: floor, walls, ceiling. These are 80% of what the eye sees.
- Time of day. It sets the entire mood. A 4pm render and a 10am render of the same space feel like different buildings.
Matters somewhat:
- Light fixture on/off states. Adds warmth and realism but won't make or break the image.
- Context outside windows. Important for realism but a reasonable AI guess is usually fine.
- Quality reference photo. Helpful for nailing a specific editorial style. Not essential.
Doesn't matter much:
- Individual furniture material details. Skip these unless the sofa fabric is a key design decision.
- Exact colour temperature of each fixture. 2700K vs 3000K is invisible at presentation scale.
- Output resolution for screen presentations. 2K is more than enough. Don't waste credits on 4K for a Zoom call.
When this workflow isn't enough
I'm not going to pretend this replaces everything.
If you need animated walkthroughs, you need Enscape or Twinmotion. If you need construction-phase documentation renders with exact material specifications called out, you need a proper rendering pipeline. If you need 50 angles of the same project for a marketing brochure, the per-render cost starts to matter.
This workflow is for the 80% case. The client meeting tomorrow. The planning submission that needs one good perspective. The Instagram post that needs to not look like a SketchUp screenshot. The competition entry where you need three views and you've got an evening.
For that 80%, five minutes is all it takes.
The actual point
The reason I built this tool — and the reason it works the way it does — is that architects already know what their building looks like. You've spent weeks or months designing it. You don't need to describe it to a chatbot in words. You need the AI to look at what you've already made and turn it into a photograph.
Every decision in this workflow maps to a decision a photographer would make on a real shoot. What time of day. Which lights are on. What's visible through the windows. That's it. You're directing a photo shoot, not writing a prompt.
The render should show the client their actual building. Not a reinterpretation. Not an AI's creative vision. Their building, their materials, their spatial decisions — just lit properly and photographed well.
That's a five-minute job. And it's all most projects need.