Back when I got my first resin 3d printer, I spent a lot of time trying to figure out what sorts of objects and models were worth printing. Full structures? Took too long, and I only needed one of each. Windows? Hard to print objects with open spaces and unsupported spans. Clutter? Good. Common freight cars? Good. My big lesson was that I wanted to 3d print objects that I needed a lot of, that were mostly a single blobby object, and had enough curvy detail that I couldn’t make them by hand.
One really good find was the idea of human figures. Model railroads do look better with some humans around to help encourage a sense of scale and make our imaginary world seem lived-in and occupied. Although many of the places on a model railroad might be remote and lonely, there’s also a bunch of places where the railroad intersects with lots of people.
Back in 2014, I wrote about some early experiments with figures. I’d just found that another maker, The Great Fredini, had taken a 3d scanning rig to maker faires on the east coast and scanned various faire goers. He’d uploaded the STL files to the Thingiverse, a site for sharing 3d models. These were wonderful models - they were detailed enough for HO figures, I could print lots of them, and the figures looked more American than our favorite fallback of Faller’s 1970s European figures.
The Great Fredini’s models did have one problem: they were all scans of modern, 2010s-era people going to an exposition in hot east coast weather. Folks tended to be dressed informally and for hot weather. T-shirts and shorts were common. One of my favorite figures was a big barrel of a guy wearing a football jersey and giving a thumbs up. It’s a great figure for a contemporary layout, but it’s not quite Fred Astaire or The Grapes of Wrath.
I was also very specific about the kinds of figures I wanted. For the Vasona Branch’s 1930’s setting, most of the people I’d see would be the workers going into and out of the canneries and packing houses. Both kinds of businesses hired hundreds of workers for the summer rush, and the workers would have been dressed in very 1930’s garb. Old photos and histories talk about lines of women walking into the cannery in dresses and white aprons, and coming out with aprons stained from fruit and tomatoes. Men unloading fruit and working in the warehouses would be dressed for hot and dirty weather. I also needed a bunch of variations of the figures - young and old, tall and short, portly and thin. None of the Great Fredini’s figures were appropriate for capturing these crowds.
I tried using a figure modeling program to create my own models, but wasn’t good at setting human poses nor could I figure out how to get 1930’s clothing on the models.
Finally Getting Unstuck
I almost got unstuck last year; at the NMRA Pacific Coast Region’s annual convention in San Luis Obispo, Michael Eldridge gave a talk on using some new 3d modeling software for making human figures. This program, Daz3D, was intended for graphic artists creating realistic human images for advertisements and other purposes. It could also produce 3d models, allowed posing, and had a marketplace for buying clothing designed by the company and others. This seemed like a decent solution. In practice, though, the learning curve was steep: I had to learn how to use the software, how to pose a figure, how to get period-appropriate clothing models, and how to turn the result into a solid, “watertight” 3d model. That turned out to be beyond my ability for the time I was willing to devote to the project. Posing didn’t come naturally to me, and turning the figures into a printable model required too much knowledge of the cryptic Blender 3d modeling tool. One particular problem was that clothing was treated as an infinitely thin separate object, so combining the figure and clothing into a single solid shape required significant 3d modeling knowledge I didn’t have. I put the project aside again.
Luckily, at this year’s PCR convention, Michael gave a new clinic using new AI tools to make 3d figures. Michael’s new approach got rid of most of the learning curve. Rather than using CAD-like figure software to create a human 3d model, Michael showed a much simpler approach: using some of the generative AI tools (ChatGPT, Gemini, Claude, etc.) to generate pictures of a correctly dressed and posed figure, then using a commercial website called tripo3d.ai to create a 3d model from those pictures. This approach didn’t require any 3d modeling skill at all, just one or more images of the figure to be created; the software inferred anything else it needed (like what a human back looks like), and it would generate a decent model for 3d printing. Yay!
Making a Figure
The first step is to generate a 2d image of the figure. This can be a reference sheet image (an artist’s drawing of the front, side, top, and bottom of the figure, as might be used to get approval from a client for an image or character), or simply a front and side view. These can be generated by free AI tools such as Google’s Gemini from just an English description of what you want.
For example, you can go to gemini.google.com and type:
Can you generate a reference sheet image for a 3d figure of a middle-aged 1930’s woman in a knee length dress and apron with a nurse’s cap walking with a small step and without a base under the figure
I got the following, though the form of the image varies wildly on every run.
Generative AI tools are weird - they can generate different results every time, so the figure I generate today from a prompt will certainly not match what I get tomorrow. I added the clause about “no base under the figure” because in a previous run, the figures were often placed on a circular base. My tests yesterday also made reference images with a slightly different format, and always listed a “drawn by” credit with a name that I assume was completely made up. Weird. You can put in a different prompt and get another image, demand changes, etc. Note that the captions and labels won’t matter - the 3d figure generating software seems to ignore them.
Michael had suggested the reference image, but I started asking just for a front and side view - that seemed to be sufficient for the later steps, and was much easier to check over.
For example, type the following into gemini.google.com:
Can you make a front and side image for a 3d model of a 1930’s workman with a thin build and dressed for warmer California weather, and have the figure walking briskly
I got this from that prompt and description.
Great! So now, as long as I can type an English description, I can make a figure!
Next step is the 3d model. Michael Eldridge and others suggested tripo3d.ai, a paid service with a $20/month plan that allows you to create models for roughly 60 figures a month without paying more. (Signing up for a plan is a challenge - make sure you’re getting the monthly plan. Supposedly, you can use a free plan if you choose to render the 3d figure using their older AI system.) I went to the 3d workspace page ( https://studio.tripo3d.ai/workspace/generate ), and dragged the front/side view into the “upload” box on the left. I then pressed the “Generate multi views” button just below the image - this automatically splits the front and side view into separate reference images for the model making. Finally, I pressed the “generate model” button at the bottom left to start making the 3d model. It takes about 3 minutes. When the model is generated, I could go to the “list of assets” to see it and select it for exporting. (Careful - the models don’t show up in Safari. I had to use the Chrome browser on a Mac to see the model on the tripo3d.ai site.)
Now that I’d downloaded a 3d model as an STL file, I treated it like any other 3d model - I brought it into my printer’s slicer program, added a support structure to hold the object while printing and to brace arms and other overhanging parts of the model, then set up a print job with that figure. The hardest part was making a support structure that supported the right parts of the figure but was easy to cut off. After a quick 3d print, I had my figure. The software complained about bugs in the 3d model, but I see those even with models I create in SketchUp - the slicing software was able to fix the flaws itself just fine.
I ended up making ten figures from seven models, mirroring the walking figures for variation. I also made multiple prints of each figure, adjusting the scale of each to vary heights. After a bit of printing, I had around 100 figures done, printed in ten minutes on the newer printer.
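The scale arithmetic behind those height variations is simple. Here’s a quick sketch for HO scale (1:87); the 175 cm starting height and the percentage tweaks are my own example numbers, not anything from the workflow above:

```python
# Sketch: print heights for HO (1:87) figures, with small scale tweaks
# to vary heights within a crowd. Example numbers only.
HO_SCALE = 87  # HO is 1:87 (strictly 1:87.1; rounded here)

def scaled_height_mm(real_height_cm: float, scale: int = HO_SCALE) -> float:
    """Convert a real-world height in cm to a scale height in mm."""
    return real_height_cm * 10 / scale  # cm -> mm, then divide by scale

# A 175 cm (about 5'9") person at HO scale:
print(round(scaled_height_mm(175), 1))  # about 20.1 mm

# Print the same model at 95%, 100%, and 105% to get short,
# average, and tall versions of one figure:
for pct in (95, 100, 105):
    print(pct, "% ->", round(scaled_height_mm(175) * pct / 100, 1), "mm")
```

A few percent either way is enough to make a crowd of duplicates read as different people, without any figure looking obviously out of scale.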
I’m using this crowd of figures for the workers walking into or out of the canneries and packing houses - I needed a mob of figures, and I needed figures that looked like cannery workers and looked 1930s-like. (See if you can guess the prompt for each.) This workflow gave me the figures I needed, and got the job done in essentially one long day.
Results
I’m very, very happy with this process. With a day or so of work, I had a half dozen figure models, and had 3d-printed the first round of figures. There’s still a fair amount of hand work required to make these figures: setting up the printer, removing and cleaning the printed figures, removing support structure and cleaning up the figure, curing the figure in UV light, then priming the figures. Painting the figures is probably the most time-consuming part of the whole process; I’ve been gluing a dozen or twenty figures to a strip of styrene and going down the line assembly-line fashion. All that’s worth it to me to be able to have some figures that match my era and setting.
The neat thing about the figures and the 3d software is how easy it was compared to handcrafting models for freight cars. Most human figures made with the AI tools take very little thought about how to make something printable; the figures are all sort of blobby, continuous single models that print efficiently. There aren’t a lot of overhangs or separate pieces that force us to think about which direction to print the figures, or whether the model can be printed successfully. The software also doesn’t need perfect information; it already knows what a human arm or back looks like, so it can infer something reasonable that isn’t well-specified in the input image. The most challenging part, honestly, was just figuring out support for things like arms and hands so they’d print decently, and then dealing with printing and cleaning up the figures.
Compared to the commercial figures, the models generated by AI are interesting. They’ve got a surprising amount of detail, such as exaggerated shoes or rolled-up sleeves, that both looks good on the model and 3d prints well. The resulting parts remind me more of 1970’s O-scale white metal figures cast in rubber molds than the Faller injection-molded figures.
As I was writing this, I looked back at my past adventures with figures in 2014, and was reminded how much the 3d printing world has changed. Back then, my Form One printer could print nine figures at a time in about 90 minutes - the slicing software had trouble processing more models at once, and the Form One’s slow laser movements with galvanometer mirrors made printing slow. With the Anycubic Photon printer and a modern computer, I was able to print 80 figures in 10 minutes because of the faster UV screen that can expose an entire layer of resin at one time. That’s also with a printer that’s one-tenth the cost of the original Form One. It’s really gotten to the point where any modeler can mass-produce 3d parts.
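To put that change in concrete terms, a few lines of arithmetic with the numbers above:

```python
# Throughput comparison using the figures quoted in the text:
# Form One: 9 figures in 90 minutes; Anycubic Photon: 80 figures in 10 minutes.
form_one_rate = 9 / 90   # figures per minute
photon_rate = 80 / 10    # figures per minute

print(form_one_rate * 60)           # figures per hour on the Form One
print(photon_rate * 60)             # figures per hour on the Photon
print(photon_rate / form_one_rate)  # overall speedup
```

That works out to roughly 6 figures an hour then versus nearly 500 an hour now - an eighty-fold speedup, at a tenth the hardware cost.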