According to the ffmpy documentation, the most relevant option appears to be the "using pipe protocol" example.
Instead of reading the images with PIL, we can read the PNG images as binary data into a BytesIO object (a file-like object holding all the images in memory):
# List of input image files (assume all images are in the same resolution, and the same "pixel format").
images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']
# Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
images_in_memory = io.BytesIO()
for png_file_name in images:
    with open(png_file_name, 'rb') as f:
        images_in_memory.write(f.read())
Run ffmpy.FFmpeg using the pipe protocol, passing images_in_memory.getbuffer() as the input_data argument to ff.run:
ff = ffmpy.FFmpeg(
    inputs={'pipe:0': '-y -f image2pipe -r 1'},
    outputs={'output.gif': None},
    executable='\\ffmpeg\\bin\\ffmpeg.exe')
# Write the entire buffer of encoded PNG images to the "pipe".
ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)
The above solution seems a bit awkward, but it's the best solution I could find using ffmpy.
There are other FFmpeg-to-Python bindings, like ffmpeg-python, that support writing the images one by one in a loop.
Using ffmpy, we have to read all the images into memory in advance.
The above solution keeps the PNG images in their encoded (binary) form.
Instead of decoding the images with PIL (for example), FFmpeg is going to decode the PNG images.
Letting FFmpeg decode the images is more efficient, and saves memory.
The limitation is that all the images must have the same resolution.
The images also must have the same "pixel format" (all RGB or all RGBA but not a mix).
In case the images have different resolutions or pixel formats, we have to decode the images (and maybe resize them) using Python, and write the frames as "raw video".
For testing, we can create PNG images using the FFmpeg CLI:
ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"
Complete code sample:
import ffmpy
import io
import subprocess
# Building sample images using FFmpeg CLI for testing: ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"
# List of input image files (assume all images are in the same resolution, and the same "pixel format").
images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']
# Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
images_in_memory = io.BytesIO()
for png_file_name in images:
    with open(png_file_name, 'rb') as f:
        images_in_memory.write(f.read())
# Use pipe protocol: https://ffmpy.readthedocs.io/en/latest/examples.html#using-pipe-protocol
ff = ffmpy.FFmpeg(
    inputs={'pipe:0': '-y -f image2pipe -r 1'},
    outputs={'output.gif': None},
    executable='\\ffmpeg\\bin\\ffmpeg.exe')  # Note: ffmpeg.exe is in the C:\ffmpeg\bin folder
ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)
Sample output output.gif:
Update:
The same solution using images from Pillow:
The above solution also works if we save the images from Pillow to the BytesIO in PNG format.
Example:
import ffmpy
import io
import subprocess
from PIL import Image as Img
# Building sample images using FFmpeg CLI for testing: ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"
# List of input image files (assume all images are in the same resolution, and the same "pixel format").
images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']
# Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
images_in_memory = io.BytesIO()
for png_file_name in images:
    img = Img.open(png_file_name)
    # Modify the images using PIL...
    img.save(images_in_memory, format="png")
# Use pipe protocol: https://ffmpy.readthedocs.io/en/latest/examples.html#using-pipe-protocol
ff = ffmpy.FFmpeg(
    inputs={'pipe:0': '-y -f image2pipe -r 1'},
    outputs={'output.gif': None},
    executable='\\ffmpeg\\bin\\ffmpeg.exe')
ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)
Encoding the images to PNG in memory is not the most efficient option in terms of execution time, but it saves memory space.