I tried to download some videos from Reddit using yt-dlp and it didn’t work; I think maybe Reddit limited access.
I made a script for grabbing Reddit videos that’s been working pretty well for me. It needs Babashka (https://babashka.org/) to run:
#!/usr/bin/env bb
;; Grab a Reddit video post: fetch the post's JSON, find the DASH
;; streams, download the best video and audio, and mux them with ffmpeg.
(require '[clojure.java.shell :refer [sh]]
         '[clojure.string :as string]
         '[cheshire.core :as cheshire]
         '[org.httpkit.client :as http]
         '[clojure.walk :as walk])

(defn http-get [url]
  (-> @(http/get url {})
      :body))

;; Walk the decoded JSON looking for any string containing "DASH" and
;; strip the quality suffix to get the base URL of the streams.
(defn find-base-url [data]
  (let [results (atom [])]
    (walk/postwalk
     (fn [node]
       (when (and (string? node) (.contains node "DASH"))
         (swap! results conj node))
       node)
     data)
    (some-> @results first (string/replace #"DASH_[0-9]+\.mp4" ""))))

;; Pick the stream whose name carries the highest quality number;
;; audio? selects the audio streams instead of the video ones.
(defn find-best-quality [names audio?]
  (->> ((if audio? filter remove) #(.contains (.toLowerCase %) "audio") names)
       (sort-by
        (fn [n]
          (-> n
              (string/replace #"\.mp4" "")
              (string/replace #"[a-zA-Z_]" "")
              (Integer/parseInt))))
       (last)))

;; Fetch the DASH manifest referenced by the post and return the URLs
;; of the best video part and the best audio part.
(defn find-parts [base-url data]
  (let [url (atom nil)
        _ (walk/prewalk
           (fn [node]
             (when (and (map? node)
                        (contains? node :dash_url))
               (reset! url (:dash_url node)))
             node)
           data)
        xml (http-get @url)
        parts (->> (re-seq #"<BaseURL>(.*?)</BaseURL>" xml) (map second))
        best-video (find-best-quality parts false)
        best-audio (find-best-quality parts true)]
    [(str base-url best-video) (str base-url best-audio)]))

(defn filename [url]
  (let [idx (inc (.lastIndexOf url "/"))]
    (subs url idx)))

;; Timestamped output name so repeated runs don't clobber each other.
(defn tsname []
  (str "video-" (System/currentTimeMillis) ".mp4"))

;; Appending ".json" to a Reddit post URL returns the post as JSON.
(let [data (-> (first *command-line-args*) (str ".json") http-get (cheshire/decode true))
      base-url (find-base-url data)
      [video-url audio-url] (find-parts base-url data)
      video-file (filename video-url)
      audio-file (filename audio-url)]
  (sh "wget" video-url)
  (sh "wget" audio-url)
  ;; Loop the audio input if it's shorter, cut at the shortest stream
  ;; (the video), taking video from input 0 and audio from input 1.
  (sh "ffmpeg" "-i" video-file "-stream_loop" "-1" "-i" audio-file "-shortest"
      "-map" "0:v:0" "-map" "1:a:0" "-y" (tsname))
  (sh "rm" audio-file video-file))
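To use it, save the script somewhere (the filename reddit-video.bb below is just an example), make it executable, and pass it the post URL (placeholder shown):
chmod +x reddit-video.bb
./reddit-video.bb https://www.reddit.com/r/SOMESUB/comments/POSTID/TITLE
It needs wget and ffmpeg on the PATH and writes the result to a timestamped video-<milliseconds>.mp4 in the current directory.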
https://github.com/yt-dlp/yt-dlp/blob/master/supportedsites.md
Reddit is listed among the supported sites. I just tested it with a random video post on Reddit, and it downloaded the file perfectly fine (it played in a local player). My theory: either it was user error and you gave a link that is not a video post (I’m not sure whether posts that merely link to a video work; I think the post itself must be a video post), or you tested it while Reddit was blocking yt-dlp. In that case the yt-dlp team needs to ship an update first, then it works again. YouTube does the same.
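For reference, the basic invocation is just the post URL, nothing else (placeholder URL below):
yt-dlp https://www.reddit.com/r/SOMESUB/comments/POSTID/TITLE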
You might also look at gallery-dl.
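It works much the same way, if I recall: point it at the post URL and it picks the right extractor (placeholder URL again):
gallery-dl https://www.reddit.com/r/SOMESUB/comments/POSTID/TITLE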
One of my favourite applications. I stopped paying for Spotify and just use this to get music these days. Everything gets uploaded to YouTube anyway.
Does it automatically grab things like metadata (author, cover art, etc.) for you? And if it requires a flag, do you know which one?
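If it’s the flags you’re after, I believe yt-dlp wants them spelled out. Something like this should extract the audio and embed the tags plus cover art (double-check against yt-dlp --help; the URL is a placeholder):
yt-dlp -x --audio-format mp3 --embed-metadata --embed-thumbnail <video-url>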
Unless the artist only posts on YouTube, try soulseek. Most files have metadata already included, and if they don’t, you can just download from another user.
May I suggest SpotDL specifically for Spotify: https://github.com/spotDL/spotify-downloader
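If memory serves, usage is minimal with the v4 syntax (the URL is a placeholder):
spotdl download <spotify-track-or-playlist-url>
It looks up each track’s Spotify metadata, fetches matching audio from YouTube Music, and tags the files.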
Does it still work? I’ve been getting a 500 error non-stop for a couple of months while trying to use it.
Downloading music from YouTube will get you MP3s, but they will have gone through the YT compression algorithms.
Use Deemix instead. Downloads MP3s straight from the Deezer servers with all metadata and album art.
Lucida.to would also be a pretty good choice; you can choose to download from Deezer, Qobuz, Tidal, Spotify, or Amazon Music.
If you like this article, please consider following the site on Mastodon/Fedi, email, or RSS. It helps me get information like this out to a wider audience :)
Maybe a little bit of a shameless plug from me, but I want to point to my Bash script for Linux that makes daily yt-dlp life easier: https://github.com/thingsiplay/yt-dlp-lemon. Running yt-dlp-lemon -h shows only a few options, and yt-dlp-lemon -H shows everything the script supports.
It’s the main way I watch YouTube now. After Piped and NewPipe stopped working for me across all devices, I only use two methods of watching YouTube: opening videos in mpv (which is configured to use yt-dlp in the backend to make things faster), or downloading them with yt-dlp. So it’s key to my continuing to watch YouTube at all. Recently I’ve started getting ads even in mobile Vivaldi, so no more YT on my phone.
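For what it’s worth, pointing mpv at yt-dlp is a single line in mpv.conf on my setup (I believe recent mpv builds already prefer yt-dlp when it’s installed):
script-opts=ytdl_hook-ytdl_path=yt-dlp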
So my new workflow is to use Piped to find a video, then copy the end of the link and type “yt-dlp <C-S-v>” in a terminal, wait for the video(s) to download, and open in mpv.
OR
In some cases, use Qutebrowser, with a custom keybind to open a video in mpv.
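The keybind is nothing fancy; in qutebrowser’s config.py it looks something like this (the ,m key is just an example):
config.bind(',m', 'spawn mpv {url}')
{url} expands to the current page’s URL, so one keypress hands the video straight to mpv.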
Maybe just pay for YouTube Premium at that point? It pays the video creators, and you don’t have to have a janky playback setup.
Why would anyone want to support one of the most evil companies in the world along the way, though?
If you don’t like Google keeping a cut, then sign up for all the Patreons for everyone you watch.
No matter how janky this setup is, the official YouTube app is jankier.
It pays the video creators
Then why are almost all of them on Patreon asking for donations?
Because it’s an additional source of revenue, and they can provide rewards outside of YouTube.
So my new workflow is to use Piped to find a video, then copy the end of the link and type “yt-dlp <C-S-v>” in a terminal, wait for the video(s) to download, and open in mpv.
Why not just pass the YouTube link to mpv so you don’t have to wait for the video to download?
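i.e. something like mpv https://www.youtube.com/watch?v=VIDEOID (placeholder ID); mpv hands the URL off to yt-dlp internally and streams it.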
I still have to wait a long time for the video to load in the mpv cache, and sometimes I want a bunch of videos to watch later (or to watch multiple times if they’re educational). In which case I either open a bunch of videos in their own mpv windows so they all load while I’m watching the first one, or I download them while I’m doing something else.
But loading a bunch of mpv windows is heavier than a bunch of terminals running yt-dlp (and I could also just switch to using tmux… which I probably should get around to at some point).
I still have to wait a long time for the video to load in the mpv cache
In my experience the video loads in a few seconds compared to the minutes it’d take for it to download, but I get your second point.