Rodin Gen-2 Edit — Talk to Your 3D Models and They Actually Listen

Hyper3D just turned 3D AI from a one-shot generation trick into a real creative tool. Rodin Gen-2 Edit lets you talk to your 3D models — literally — and reshape them with natural language. This is the missing piece the entire 3D AI pipeline has been waiting for.

Rodin Gen-2 Edit by Hyper3D - AI-powered 3D model generation and editing platform
Rodin Gen-2 Edit brings AI-driven 3D generation to production quality. Source: Hyper3D

The Story: From Generation to Conversation

Every 3D AI tool until now has followed the same pattern: you feed in a prompt or image, wait, and get a model out. If the result is 90% right but the left arm is wrong, you start over. Rodin Gen-2 Edit breaks this cycle entirely.

Built by Hyper3D (formerly Deemos), Rodin Gen-2 Edit is the world’s first true 3D GenAI editing platform. The core innovation: you can upload any 3D model — your own assets, models from other tools, whatever — and modify them using natural language or voice commands. Select an area by dragging a box, describe what you want changed, and the AI handles the rest. Need to turn a beach buggy into an armed motorcycle? Just say so.

Example 3D model generated with Rodin Gen-2 showing high-quality mesh and textures
Rodin Gen-2 generates production-quality 3D assets from images or text prompts. Source: Oreate AI

Under the Hood: 10 Billion Parameters and BANG

Rodin Gen-2 runs on a 10-billion parameter architecture called BANG (Bilateral Algorithmic Neural Generation). The BANG paper was recognized among the Top 10 Technical Papers at SIGGRAPH 2025, and for good reason — it solves one of the biggest headaches in AI 3D: the fact that most models output a single fused mesh blob that no artist can actually work with.

BANG uses recursive part-based generation, intelligently dividing objects into coherent components. A generated character comes out with separate head, torso, arms, legs — like a real artist would build it. This means clean topology, proper UV layouts, and meshes that are actually riggable. The 4X improvement in geometric mesh quality over Gen-1 isn’t just a marketing number — the quad-based topology genuinely holds up for animation and game pipelines.

Rodin Gen-2 mesh topology comparison showing clean quad-based wireframes at multiple density levels
Rodin Gen-2 generates clean, quad-based mesh topology at multiple resolution levels — ready for production. Source: Scenario

The Killer Features

Partial Redo. Generated a sword but the hilt looks off? Select just the hilt, describe what you want, and regenerate only that part. The rest stays untouched. This alone puts Rodin in a different league from every competitor.

Smart Low-poly + Normal Baking. Upload any model, convert it to artist-style low-poly wireframes, then bake all the high-poly detail into normal maps. One community member (@WJ_T_BOY) demonstrated a workflow where you can take any external model through this pipeline and get game-ready assets in minutes.

T/A-Pose Control. Toggle this on and generated characters automatically come out in standard T-Pose or A-Pose — a godsend for anyone who’s ever tried to rig an AI-generated character that came out in some random yoga position.

Voice-Driven Editing. Hyper3D bills this as the first AI product where you can edit 3D models with your voice. It’s not just a gimmick — for rapid iteration during concepting, being able to say “make the helmet bigger, add spikes on the shoulders” while viewing the model is genuinely faster than typing.

Rodin Gen-2 Edit editing workflow showing model transformation capabilities
The Rodin editing workflow: upload, select, describe, transform. Source: Oreate AI

Why You Should Care

Until now, AI 3D generation has been a parlor trick with a critical flaw: zero iteration control. You generate, you hope, you start over. That’s not a creative tool — that’s a slot machine. Rodin Gen-2 Edit is the first platform that treats 3D AI like an actual production workflow: generate, review, refine, ship.

The implications are massive for game development (rapid asset prototyping), VFX (previs and stand-in modeling), e-commerce (product visualization from a single photo), and even 3D printing. And with the API available on Replicate, fal.ai, and WaveSpeed, this isn’t locked behind a single platform — you can integrate it into your own pipeline.

Rodin Gen-2 available as API on Replicate for pipeline integration
Rodin Gen-2 is accessible via API on platforms like Replicate, fal.ai, and WaveSpeed. Source: Replicate
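To give a feel for what pipeline integration looks like, here is a minimal sketch using the Replicate Python client. The model slug (`hyper3d/rodin`) and the input parameter names (`input_image`, `prompt`, `quality`) are assumptions for illustration, not confirmed by Hyper3D's docs — check the actual model page on Replicate, fal.ai, or WaveSpeed before wiring this into a pipeline.

```python
# Hedged sketch: generating a 3D asset from an image via a Replicate-hosted
# Rodin model. Model slug and parameter names below are ASSUMPTIONS.

def build_rodin_input(image_url, prompt=None, quality="high"):
    """Assemble the input payload for a (hypothetical) Rodin run.

    Only includes a text prompt when one is given, mirroring the
    image-or-text generation modes described above.
    """
    payload = {"input_image": image_url, "quality": quality}
    if prompt:
        payload["prompt"] = prompt
    return payload


def generate_model(image_url, prompt=None):
    """Run the (assumed) hyper3d/rodin model on Replicate.

    Requires REPLICATE_API_TOKEN in the environment and
    `pip install replicate`.
    """
    import replicate  # imported lazily so the payload helper stays dependency-free

    return replicate.run("hyper3d/rodin", input=build_rodin_input(image_url, prompt))


if __name__ == "__main__":
    # Dry run: just show the payload that would be sent.
    print(build_rodin_input("https://example.com/buggy.png", "armed motorcycle"))
```

The same payload-building pattern transfers to the fal.ai and WaveSpeed endpoints; only the client call changes.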

IK3D Lab Take

This is the inflection point we’ve been waiting for. Generation was step one — editing is where 3D AI becomes a real tool instead of a toy. The BANG architecture’s part-based approach is technically brilliant, and the fact that you can bring your OWN models into the editing pipeline (not just Rodin-generated ones) is what makes this genuinely production-viable. The catch? Complex multi-part assemblies still need cleanup, and you need clean input images to get clean output. But as a first draft machine that you can actually iterate on? Nothing else comes close right now.

