Making Time Lapse Video of 3D Prints

3D printing is an ideal subject for time lapse videos. The common approach is to set up your camera to grab a frame at regular intervals, like once every 30 seconds. This is easy to do and works well, but for a more refined result keep reading for some tips and tricks.

Better 3D Printing Time Lapse Videos

When you grab frames at regular intervals, something is probably going to be jumping around from frame to frame. Which part appears to move in your time lapse video depends on your setup.

  • If your camera is not attached to your printer and the print bed moves, then both your printed item and the extruder are going to jump around. This is not what you want.
  • If the print bed is stationary, or your camera is attached to a moving print bed, then the item you’re printing will stay in one place but the print head will appear to move around. This isn’t terrible, but it can be distracting, and the shadows can change from frame to frame, which messes up your lighting.
  • If your camera is attached to the extruder, then your object, the print bed, and the environment will look like they all move around.

Here’s a video example that shows what we’re trying to achieve:

Pros:

  • Nothing is moving; your focus stays on the object being printed.
  • The object grows at a consistent rate because there’s exactly one frame per layer.
  • The object is not obscured or shadowed by the extruder. This allows for good, consistent lighting throughout the video.
  • You don’t need to attach the camera to your printer, even if you have a moving print bed.

Cons:

  • There is not a consistent time ratio between the video and real life because the interval between frames varies from layer to layer.
  • You don’t get to see the operation of the printer at all. If that’s what you need, the traditional technique is a better choice.
  • The total print time is increased by four to five seconds per layer.
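To put that last point in perspective, here’s some quick arithmetic; the layer count, per-layer overhead, and frame rate below are made-up numbers:

```python
# Hypothetical print: 200 layers, about 5 seconds of snapshot
# overhead per layer, one frame per layer, played back at 30 fps.
layers = 200
overhead_per_layer_s = 5
fps = 30

extra_print_time_min = layers * overhead_per_layer_s / 60.0  # ~16.7 extra minutes
video_length_s = layers / float(fps)                         # ~6.7 seconds of video
```

So for a medium-sized print, you trade roughly a quarter hour of extra print time for a few seconds of smooth, stable footage.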

Setting Up One Layer Per Frame Capture

Making this work takes advantage of Slic3r’s ability to call post-processing scripts and the @execute pseudo-gcode command that Repetier Host supports. In my environment, I’m running both of these on a Linux Mint host that drives my 3D printer. Finally, I’m using a D-Link IP camera that makes it easy to grab a snapshot with a simple URL.

Capturing Snapshots

The first step is to make a small script that will grab a snapshot from your camera and save it to disk. Here’s what I did in Python:

#!/usr/bin/env python

import datetime
import subprocess

# Use the current date and time as the filename so the frames
# sort into chronological order.
f_name = datetime.datetime.now().strftime("%Y%m%d%H%M%S.jpg")

# Ask the camera for a single JPEG snapshot and save it to disk.
subprocess.call("/usr/bin/curl -o '%s' http://192.168.5.29/image/jpeg.cgi" % f_name, shell=True)

If you prefer, this can be done just as easily with a shell script:

#!/bin/sh

# Grab one JPEG snapshot, named with the current date and time.
/usr/bin/curl -o `date +%Y%m%d%H%M%S`.jpg http://192.168.5.29/image/jpeg.cgi

The command above works with most D-Link network cameras, but if you have something different, you’ll have to find out how to grab a snapshot from a script. For example, if you’re running Linux and want to use a webcam, the streamer utility may be the way to go. Once you’ve got an executable script that will grab a snapshot and save it to disk with the date and time as the filename, you’re ready for the next step.
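One reason the timestamp naming matters: zero-padded YYYYMMDDHHMMSS names sort chronologically as plain strings, so whatever tool assembles the frames later can simply take them in alphabetical order. A small sketch (the dates are arbitrary examples):

```python
import datetime

def frame_name(when):
    """Build the timestamp filename for one captured frame."""
    return when.strftime("%Y%m%d%H%M%S") + ".jpg"

# Even if the list is built out of order, a plain sort puts the
# frames back into chronological order.
frames = [
    frame_name(datetime.datetime(2015, 3, 7, 14, 30, 12)),
    frame_name(datetime.datetime(2015, 3, 7, 9, 5, 3)),
]
assert sorted(frames) == ["20150307090503.jpg", "20150307143012.jpg"]
```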

Modifying Your G-Code

To get a repeatable, clean photo for every layer, we make the following things happen every time the Z position is increased:

  1. Move the printer to a specific location that puts the object in the same place every time, well positioned for lighting and a good photograph.
  2. Run our external script to capture a snapshot from the camera.
  3. Allow the normal printing to resume.

Repetier Host can execute an external script at any point while sending g-code to your printer, with some caveats. To do this, you put a line in your g-code that looks like this:

@execute <script full pathname> <optional parameters>

A few caveats will make it work the way you expect. First, at least in my environment, Repetier Host runs the external script asynchronously and keeps sending g-code to the printer immediately; it does not wait for the script to complete. That means that step #3 above, resuming normal printing, could start before we actually grab our frame. So we’ll need to insert some delays to make sure the printer is where it should be when the external script runs.

The other wrinkle is caused by the buffer in the printer’s g-code processor. Repetier Host will send g-code lines into the buffer and doesn’t really know when they actually get executed. Because of this buffer, when Repetier Host reaches the @execute command and runs the external script, some g-code commands before that point are still queued and haven’t been done yet. In particular, step #1 to put the printer in a known position won’t be complete, and probably not even started, when Repetier Host gets to step #2. To solve this, we’ll add another delay and some extra commands that essentially flush the buffer right before the @execute command.

Here’s a typical sequence at the end of one layer and the start of the next. The added comments don’t appear in the g-code and were added here for clarity:

G1 X66.533 Y100.927 E29.70731 ; last piece of filament deposited for this layer
G1 E28.70731 F1800.00000 ; retract the filament a tiny bit
G92 E0 ; reset the filament position counter — this doesn’t move the filament
G1 Z0.550 F7800.000 ; raise the Z position to the next layer
G1 X74.636 Y67.136 F7800.000 ; move without extruding to the starting point for the layer
G1 E1.00000 F1800.00000 ; reverse the filament retraction so it’s ready to extrude
G1 X74.636 Y102.136 E1.25459 F1800.000 ; extrude the first piece of filament for the next layer

And here’s what we want the g-code to look like after our modifications to grab a snapshot for each layer:

G1 X66.533 Y100.927 E29.70731
G1 E28.70731 F1800.00000
G92 E0
G1 Z0.550 F7800.000 ; we put our extra bits after the Z position change command
G1 X0 Y0 F7800 ; move the printer to our ideal location for a photo
G4 P2000 ; pause for 2,000ms (2 seconds)
G4 P1 ; pause for 1ms — these are here to flush the g-code buffer
G4 P1
G4 P1
G4 P1
G4 P1
G4 P1
G4 P1
G4 P1
G4 P1
@execute /home/chris/getframe.py ; take a snapshot
G4 P1000 ; pause for 1 second to let the script execute
G1 X74.636 Y67.136 F7800.000
G1 E1.00000 F1800.00000
G1 X74.636 Y102.136 E1.25459 F1800.000

Here’s the code that can be run as a post-processor script that will modify the g-code the way we want. Slic3r can call this code automatically when saving g-code files, but you could also run it manually on existing g-code files. The script takes a single command-line option, which is the name of the g-code file to modify in place.

#!/usr/bin/env python

import re
import sys

# Ignore all Z position changes until we see a G90 command. The
# Z movements before that are for things like the homing sequence.
#
# Take a final snapshot of the finished object when we see an M107
# command, since there isn't any more Z movement after the final
# layer is printed.
#
# The delay (G4) values and the number of buffer-flushing
# commands (G4 P1) were found experimentally and will likely
# need to be tweaked for other environments.

RE_Z_MOVE = re.compile(r"G1.*Z")

f_name = sys.argv[1]
with open(f_name) as f:
    content = f.readlines()

output = []
printing = False
for one_line in content:
    output.append(one_line)
    if one_line.find("G90") != -1:
        printing = True
    if printing and (RE_Z_MOVE.match(one_line) or one_line.find("M107") != -1):
        output.append("G1 X0 Y0 F7800\n")
        output.append("G4 P2000\n")
        output.append("G4 P1\n")
        output.append("G4 P1\n")
        output.append("G4 P1\n")
        output.append("G4 P1\n")
        output.append("G4 P1\n")
        output.append("G4 P1\n")
        output.append("G4 P1\n")
        output.append("G4 P1\n")
        output.append("G4 P1\n")
        output.append("@execute /home/chris/getframe.py\n")
        output.append("G4 P1000\n")

with open(f_name, "w") as f:
    for one_line in output:
        f.write(one_line)

Make sure the full pathname of the frame-grabbing script in the @execute line matches your setup.
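To have Slic3r run the post-processor automatically, put the script’s full path into the post-processing scripts setting; in my version of Slic3r it lives under Print Settings → Output options, though the exact location may vary between releases, and the path below is just a placeholder:

```
Print Settings -> Output options -> Post-processing scripts:
    /home/chris/add_snapshots.py
```

The script also needs to be executable (chmod +x) for Slic3r to be able to run it.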

Lighting Is Key

Take a look at this video, which was the first successful test of the code above. Compare it to the Yoda time lapse and you’ll see that the lighting is weaker, which makes the video grainier. This first attempt also captured too much of the room beyond the printer, including background movement that jumps around during the first part of the sequence.

To get really good videos, you need to control the lighting. This isn’t just about shining a light on the object; you also need to make sure that light levels beyond your subject don’t throw off the white balance or the focus. For 3D printing videos, you really don’t want extra stuff in the background or anywhere else in the frame.

The simple solution I chose was to stand up a couple of barriers made of foam board (you may only need the smaller size, or you might find it useful to get a bigger pack for a better price break). This constrains the camera’s view, provides a clean, uniform background against which to photograph the print, and helps with the lighting by reflecting more of it toward the subject.

To connect the foam board panels, I designed and printed a couple of clips. Since I had the automatic time lapse code working, I turned the clip print into an animated gif just because I could.
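As an aside, turning the captured frames into a video is straightforward because the filenames sort chronologically. This ffmpeg invocation is one way to do it, not part of the original workflow; the frame rate and codec are just reasonable defaults:

```shell
# Stitch every timestamped .jpg in the current directory into an MP4.
# The glob expands in sorted (chronological) order.
ffmpeg -framerate 30 -pattern_type glob -i '*.jpg' \
       -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```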


2 comments

  1. Hi,

    Very nice tutorial.
    I did the same thing using an I/O pin on the printer. The pin switches a MOSFET, which is connected to the MIC input of an Android phone’s audio jack through a 220 ohm resistor.
    This lets the printer press the phone’s Vol+ button, so with the camera app open, Vol+ takes a picture… Et voila

    JP

    1. Very cool! Your hardware-based approach is ultimately more flexible since you can trigger pretty much any kind of camera.
