I’m trying to extract the frames of a video as individual images, but it’s really slow with every format except JPEG. The obvious issue with JPEG is the data loss from the compression; I want the images to be lossless. Extracting them as JPEGs manages about 50–70 fps, but as PNGs it’s only 4 fps, and it seems to keep getting slower: after 1 minute of the 11-minute video it’s down to 3.5 fps.

I suspect it’s because I’m doing this on an external 5 TB hard drive, connected over USB 3.0, and the write speed can’t keep up. So my idea was to use a different image format. I tried lossless JPEG XL and lossless WebP, but both of them are even slower, only managing to extract at about 0.5 fps or so. I have no idea why that’s so slow; the files are a lot smaller than PNG, so it can’t be the write speed.

I would appreciate it if anyone could help me with this.

14 points

Honestly I don’t know, but it seems to me like extracting every single frame of a video as a lossless PNG is only really necessary if you’re trying to archive something or do frame-by-frame restoration. Either way, it’s hopefully not something you’re doing every day, so why not just let it run overnight and move on?

Otherwise, ask yourself whether you can settle for extracting just a single clip/section, or what’s actually wrong with lossy JPEG at a low -qscale:v (high quality): start around 5 and work down until you visually can’t see any difference.
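As a rough sketch of that suggestion (video.mp4 and the extract/ output pattern are placeholders for your own paths):

```shell
# Lossy but high-quality JPEG extraction; for ffmpeg's mjpeg encoder,
# -qscale:v ranges from 2 (best) to 31 (worst), so start around 5
# and lower the number until you can't spot artifacts anymore.
ffmpeg -i video.mp4 -qscale:v 5 extract/%06d.jpg
```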

3 points

I’m doing this to upscale and interpolate the video, and I want the best quality possible, since the source uses H.264 and I’m exporting to AV1. I was using JPEG with -qscale:v 0 at 100% quality, but you could still see compression artifacts, which is why I want to use a lossless format now. The upscaling and interpolation also take quite a lot of time, so I’m trying to minimize how long each step takes, since I’ll be doing this with multiple videos and will probably reuse these scripts a few more times in the future.

4 points

Have you verified that they’re actually new JPEG artifacts and not just the H.264 artifacts?

2 points

Yes, I compared it to the same frame exported as a PNG.

2 points

Have you considered using av1an? It supports VapourSynth, which has a large number of upscaling and frame-interpolation tools, AI-based or not. If your upscaler supports VapourSynth, it could be a much better option.

2 points

I use upscayl-ncnn (basically just the CLI version of Upscayl) and it doesn’t support VapourSynth. I’ve heard of it before but I don’t really know what it is or how to use it.

6 points

It probably becomes CPU-limited with those other compression algorithms.

You could use something like atop to find the bottleneck.
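One way to run that check, sketched here assuming atop and the sysstat tools (iostat, pidstat) are installed:

```shell
# Interactive overview; press 'd' inside atop to sort processes by disk I/O
atop 1

# Non-interactive alternatives from the sysstat package:
iostat -xm 1             # %util near 100% on the external drive suggests write-bound
pidstat -u 1 -C ffmpeg   # a single core pegged at 100% suggests encode/CPU-bound
```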

3 points

Yeah, that’s probably the case for those. I looked at CPU usage when using WebP and one CPU core was always at 100%. Even though it apparently can’t use multiple cores, that still seems really slow, no? Or is that normal?

Also, my CPU is a Ryzen 5 3600, just to give an idea of what performance would be expected.

2 points

My first thought was similar: there might be some hardware acceleration happening for the JPEGs that isn’t happening for the other formats, resulting in a CPU bottleneck. A modern hard drive over USB 3.0 should be capable of hundreds of megabits to several gigabits per second, so it seems unlikely that’s your bottleneck (though feel free to share stats and correct that assumption; if your PNGs are in the 40-megabyte range, writing 3.5 of them per second would be pretty taxing).

If you’re only seeing one CPU core at 100%, perhaps you could split the video into clips and process multiple clips in parallel?

1 point

Coming back to this: what you said at the end was really interesting. I could manually split up the file and run the frame-extraction script on each piece at the same time, but do you know if it’s possible to automate this? Or, even better, run each instance of ffmpeg on the same video file and just extract every nth frame, like I said in my earlier reply?

1 point

At this point I’m fairly sure that the drive speed actually is the bottleneck, though I’m not sure why it’s so slow. Splitting it is an interesting idea; maybe it’s also possible to tell ffmpeg to only extract every 6th frame and start at a different offset for each of the 6 cores.
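That every-nth-frame idea can be sketched with ffmpeg’s select filter (the paths and offset scheme here are hypothetical; note that each process still decodes the entire video, so this only helps when per-frame encoding, not decoding, is the bottleneck):

```shell
# Launch 6 ffmpeg processes; process i keeps frames i, i+6, i+12, ...
# -vsync 0 passes selected frames through without duplicating them.
for i in 0 1 2 3 4 5; do
  ffmpeg -i video.mp4 \
    -vf "select='not(mod(n-${i},6))'" -vsync 0 \
    "extract/part${i}_%06d.png" &
done
wait   # wait for all 6 background jobs to finish
```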

5 points

A) Export using a lower effort; with libjxl, effort 2 or so will be fine.

B) Export to a faster image format like QOI, TIFF, or PPM/PNM.

PNG, JXL, and WebP all have fairly high encode times by default in ffmpeg; lower the effort or use a faster format.

If you think it really could be a write-speed limitation, encode to a ramdisk first and then transfer, if you have the spare RAM. But using a different, faster format will probably help, as PNG is still very slow to encode. (Writing to /tmp is fine for this.)
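A minimal sketch of the ramdisk route (assuming /tmp is tmpfs, the frames fit in RAM, and /mnt/external stands in for the slow drive):

```shell
# Encode to RAM first, then move everything to the external drive
# in one large sequential transfer instead of many small writes.
mkdir -p /tmp/frames
ffmpeg -i video.mp4 /tmp/frames/%06d.png
rsync -a --remove-source-files /tmp/frames/ /mnt/external/frames/
```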

2 points

A) I actually didn’t know about this before. Do you know which ffmpeg option sets the effort?

B) I tried those, but it’s the same issue as with PNG: the hard drive’s write speed is too slow (or it’s the USB 3.0 connection, but the result is the same).

Edit: Just found out how to set the effort. Setting it to 1 is quite a bit faster, but still slow at only 3.8 fps.

2 points

What are your system specs? At a low effort you should be getting a lot more FPS. What CLI command are you using? Either way, it would probably be best to export to /tmp, given enough RAM, and go from there.

EDIT: for context, when encoding with libjxl I would use -distance 0 -effort 2 for lossless output.

2 points

I have a Ryzen 5 3600. My command was ffmpeg -i video.mp4 -threads 12 -distance 0 -effort 1 extract/%06d.jxl.

2 points

PNG is a rather slow format based on the DEFLATE compression from zip/gzip. You could extract to BMP or some other uncompressed format. First, to ensure it’s lossless, make sure the format supports the video’s pix_fmt without needing conversion.
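That pix_fmt check can be done with ffprobe, which ships with ffmpeg (video.mp4 is a placeholder for the input file):

```shell
# Print only the pixel format of the first video stream, e.g. "yuv420p"
ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt -of csv=p=0 video.mp4
```

Worth noting: BMP only stores RGB-style formats, so a typical yuv420p source would still be converted on the way out, which is exactly why the check matters.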

2 points

Using BMP has the same bottleneck as PNG: the write speed of the hard drive.

5 points

Well, you’ve found your problem then. You’ll need a decent-quality SSD to speed it up. Avoid the cheap QLC SSDs; they’re slower than mechanical hard drives once the SLC cache fills up.

0 points

I don’t really want to buy another SSD just for this. I already have two SSDs in my PC; I just don’t have enough storage left. All the frames together are going to be like 300 GB.

1 point

Going from YUV->RGB won’t incur any meaningful loss. Going from RGB->YUV, on the other hand, can, but it’s rare for that to actually happen as long as you aren’t messing up your bit depth too much.

-1 points

I’ll bet that with MPEG to JPEG it doesn’t have to re-encode the image, which it does have to do with the other formats.

4 points

H.264 (the compression algorithm the video uses) and JPEG are entirely different formats, so it does have to re-encode.

0 points

Actually, they both use the Discrete Cosine Transform!

PNGs use DEFLATE, a generic compression standard that exhaustively searches for smaller ways to compact the data.

I would recommend comparing the quality of images in the different formats against each other to see if there is noticeable lossiness.

If the PNGs are indeed better, try setting the initial compression of the PNGs to “zero” and coming back later to “crush” them smaller.
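That two-step approach might look like this (extract/ is a placeholder; optipng is just one of several PNG recompressors, with oxipng and pngcrush as alternatives):

```shell
# Step 1: write PNGs with zlib compression effectively off (fast, large files)
ffmpeg -i video.mp4 -compression_level 0 extract/%06d.png

# Step 2: later, losslessly recompress the PNGs at your leisure
optipng -o2 extract/*.png
```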

1 point

Even if they use the same technique, they’re entirely different algorithms, and H.264 also takes information from multiple frames, which is why the video is 1.7 GB but a folder with every frame saved as a PNG is over 300 GB.

The formats with the best compression, where it might be fine, are JPEG XL and WebP, as far as I know. They’re even slower though, because they’re so CPU-intensive and only use one thread.

Setting the PNG compression to 0 doesn’t help, because the bottleneck for PNG is the hard drive’s write speed. I already tried that.


Linux

!linux@lemmy.ml
