Long-exposure photography compared to image-stacking video frames (ImageMagick/FFmpeg)



Pictured above: comparisons of images made from a segment on "Good Mythical Morning" involving "light painting". Top left: a 30-second exposure from a still camera in the studio. Below it: an image made with ImageMagick's '-evaluate-sequence' operation applied to every frame from the same 30 seconds of video, using the 'max' setting, which keeps the maximum value of each pixel across the stack. Top right: a single frame from the video. Below it: 100 frames stacked with FFmpeg using a chain of sequential 'tblend' filters.

# ImageMagick - use with extracted frames, or an FFmpeg image pipe (memory capped here at 4GB)
convert -limit memory 4GB frames/*.png -evaluate-sequence max merged-frames.png
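The pixel math behind the 'max' setting is simple enough to sketch. Below is a minimal illustration in Python with NumPy; `max_stack` is a hypothetical helper name, and loading the actual PNG frames into arrays is left out, but the reduction itself should match what ImageMagick computes in spirit:

```python
import numpy as np

def max_stack(frames):
    """Pixel-wise maximum across a stack of same-sized frames,
    analogous to ImageMagick's '-evaluate-sequence max'."""
    # Reduces a list of H x W (or H x W x 3) arrays to one array,
    # keeping the brightest value ever seen at each pixel position.
    return np.maximum.reduce(frames)
```

Bright, moving light sources leave a trail this way, because any pixel they ever lit stays lit in the merged result.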

# FFmpeg - chain of 'tblend' filters; N frames need N-1 of them
# (N.B. inefficient - there are better ways to do this)
ffmpeg -i video.mp4 -vf tblend=all_mode=lighten,tblend=all_mode=lighten,...
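Since the filtergraph is just the same filter repeated, it can be generated rather than typed out. A small sketch (the helper name `tblend_chain` is mine): each 'tblend=all_mode=lighten' pass takes the per-pixel maximum of consecutive frame pairs, so N-1 chained passes collapse N frames into a single all-frames maximum.

```python
def tblend_chain(n_frames, mode="lighten"):
    """Build the repetitive -vf argument for ffmpeg: a comma-separated
    chain of n_frames - 1 pairwise tblend filters."""
    return ",".join([f"tblend=all_mode={mode}"] * (n_frames - 1))
```

For the 100-frame example above, `tblend_chain(100)` produces a 99-filter chain to pass to `-vf`.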
As a comparison, here is an image made from the same frames, but using ImageMagick's 'mean' setting, which averages pixel values instead of taking the maximum.
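The difference between the two settings matters most for moving lights. A toy example of the per-pixel numbers (values chosen for illustration): a pixel lit in only 1 of 100 frames survives a max-stack at full brightness, but a mean-stack dims it almost to black.

```python
import numpy as np

# One pixel's value over 100 frames: bright in a single frame,
# dark in the other 99 - e.g. a light source sweeping past.
stack = np.zeros(100)
stack[42] = 255.0

streak_max = stack.max()    # 255.0 - the streak is preserved
streak_mean = stack.mean()  # 2.55  - the streak nearly vanishes
```

This is why 'max' resembles a long exposure of light painting, while 'mean' resembles a long exposure of an averagely-lit scene.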



A video demo of the FFmpeg version:


Source video: https://www.youtube.com/watch?v=1tdKZYT4YLY&t=2m4s