Do you want to convert a 2D image into a 3D model auto-magically? On 5 March 2024, Stability AI and Tripo AI released TripoSR: Fast 3D Object Generation from Single Images that does exactly that!

I know nothing about 3D modelling, but now I don’t need to if all I want is a very basic 3D asset. Talk about AI replacing jobs! (I won’t...)

Installation

As always, I am using ComfyUI Portable on Windows, and I will need to add the ComfyUI-Flowty-TripoSR custom nodes.

cd ComfyUI\custom_nodes
git clone https://github.com/flowtyone/ComfyUI-Flowty-TripoSR.git
cd ..\..
python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-Flowty-TripoSR\requirements.txt

I did run the pip command in the last line, but it went and downgraded some of my libraries. In hindsight, I should perhaps have just run pip install trimesh==4.0.5 and resolved any other missing dependencies manually.

Also, at this point you may wish to install ComfyUI-3D-Pack to save the mesh instead of just previewing it as in the workflow below.

Finally, download the TripoSR model.ckpt and save it to ComfyUI/models/checkpoints, renaming it to something more sensible in the process - warning: this is not a .safetensors file.

Workflow

There are only two nodes required to convert a 2D image into a 3D object, the third being just a previewer:

  • first, TripoSRModelLoader loads the model,
  • second, TripoSRSampler takes an image and a mask (which masks out the background) and generates the 3D mesh,
  • and, as mentioned, TripoSRViewer simply previews the 3D object.

Here, however, I am combining the TripoSR workflow with my previous workflow, Generate transparent images with Layer Diffusion, because this way the mask is generated automatically.
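Conceptually, that mask is just the image's alpha channel thresholded into foreground and background. A minimal stdlib-only sketch of the idea (the function name and threshold are my own illustration, not part of the custom nodes):

```python
def alpha_to_mask(rgba_pixels, threshold=128):
    """Turn a flat list of (R, G, B, A) tuples into a binary mask.

    Pixels whose alpha exceeds the threshold count as foreground (1);
    transparent background pixels become 0. A mask like this is what
    TripoSRSampler uses to ignore the background of the input image.
    """
    return [1 if a > threshold else 0 for (_, _, _, a) in rgba_pixels]

# A 2x2 transparent image with two opaque foreground pixels:
pixels = [
    (255, 0, 0, 255),  # opaque red -> foreground
    (0, 0, 0, 0),      # fully transparent -> background
    (0, 0, 0, 10),     # nearly transparent -> background
    (0, 255, 0, 200),  # mostly opaque -> foreground
]
print(alpha_to_mask(pixels))  # [1, 0, 0, 1]
```

With Layer Diffusion producing the transparency, no manual masking step is needed at all.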

TripoSR + Layer Diffusion + SDXL workflow

Basically, this gives me text-to-image-with-mask-to-3D-object using SDXL, Layer Diffusion and TripoSR.

  • The 3D object is represented as a triangle mesh - hence the need for the trimesh library.
  • The lowest geometry_resolution is 128, but the resulting mesh is too... noisy - 256 is the default, and increasing it further is possible at the expense of memory.
  • The output mesh is saved as a (Wavefront) meshsave_*.obj file in the ComfyUI\output folder - this is hard coded.
  • The mesh can be loaded, visualized and converted using Online 3D Viewer - either on-line or self-hosted.
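Since the Wavefront .obj output is plain text, you can also inspect it without any 3D tooling. A minimal stdlib-only sketch (the sample content below is a stand-in single triangle, not actual TripoSR output):

```python
def count_obj_elements(obj_text):
    """Count vertices ('v' lines) and faces ('f' lines) in Wavefront OBJ text."""
    vertices = faces = 0
    for line in obj_text.splitlines():
        # The first whitespace-separated token identifies the element type.
        token = line.split(maxsplit=1)[0] if line.strip() else ""
        if token == "v":
            vertices += 1
        elif token == "f":
            faces += 1
    return vertices, faces

# A single triangle: three vertices, one face.
sample = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
print(count_obj_elements(sample))  # (3, 1)
```

If you installed trimesh as above, loading the saved file with trimesh.load and exporting it to another format is also an option, alongside the Online 3D Viewer route.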