compat_urllib_request.Request
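The commits below replace direct compat_urllib_request.Request construction with a sanitized_Request helper. A minimal sketch of what such a wrapper presumably looks like (the real helper lives in youtube_dl.utils; the sanitize_url body shown here is an assumption):

    from youtube_dl.compat import compat_urllib_request

    def sanitize_url(url):
        # e.g. turn a protocol-relative '//host/path' URL into 'http://host/path'
        return 'http:%s' % url if url.startswith('//') else url

    def sanitized_Request(url, *args, **kwargs):
        # sanitize the URL first, then build the request exactly as before
        return compat_urllib_request.Request(sanitize_url(url), *args, **kwargs)
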
[downloader/dash] Use sanitized_Request
[downloader/http] Use sanitized_Request
[atresplayer] Use sanitized_Request
[bambuser] Use sanitized_Request
[bliptv] Use sanitized_Request
[brightcove] Use sanitized_Request
[cbs] Use sanitized_Request
[ceskatelevize] Use sanitized_Request
[collegerama] Use sanitized_Request
[extractor/common] Use sanitized_Request
[crunchyroll] Use sanitized_Request
[dailymotion] Use sanitized_Request
[dcn] Use sanitized_Request
[dramafever] Use sanitized_Request
[dumpert] Use sanitized_Request
[eitb] Use sanitized_Request
[escapist] Use sanitized_Request
[everyonesmixtape] Use sanitized_Request
[extremetube] Use sanitized_Request
[facebook] Use sanitized_Request
[fc2] Use sanitized_Request
[flickr] Use sanitized_Request
[4tube] Use sanitized_Request
[gdcvault] Use sanitized_Request
[extractor/generic] Use sanitized_Request
[hearthisat] Use sanitized_Request
[hotnewhiphop] Use sanitized_Request
[hypem] Use sanitized_Request
[iprima] Use sanitized_Request
[ivi] Use sanitized_Request
[keezmovies] Use sanitized_Request
[letv] Use sanitized_Request
[lynda] Use sanitized_Request
[metacafe] Use sanitized_Request
[minhateca] Use sanitized_Request
[miomio] Use sanitized_Request
[moevideo] Use sanitized_Request
[mofosex] Use sanitized_Request
[moniker] Use sanitized_Request
[mooshare] Use sanitized_Request
[movieclips] Use sanitized_Request
[mtv] Use sanitized_Request
[myvideo] Use sanitized_Request
[neteasemusic] Use sanitized_Request
[nfb] Use sanitized_Request
[niconico] Use sanitized_Request
[noco] Use sanitized_Request
[nosvideo] Use sanitized_Request
[novamov] Use sanitized_Request
[nowness] Use sanitized_Request
[nuvid] Use sanitized_Request
[played] Use sanitized_Request
[pluralsight] Use sanitized_Request
[pornhub] Use sanitized_Request
[pornotube] Use sanitized_Request
[primesharetv] Use sanitized_Request
[promptfile] Use sanitized_Request
[qqmusic] Use sanitized_Request
[rtve] Use sanitized_Request
[safari] Use sanitized_Request
[sandia] Use sanitized_Request
[shared] Use sanitized_Request
[sharesix] Use sanitized_Request
[sina] Use sanitized_Request
[smotri] Use sanitized_Request
[sohu] Use sanitized_Request
[spankwire] Use sanitized_Request
[sportdeutschland] Use sanitized_Request
[streamcloud] Use sanitized_Request
[streamcz] Use sanitized_Request
[tapely] Use sanitized_Request
[tube8] Use sanitized_Request
[tubitv] Use sanitized_Request
[twitch] Use sanitized_Request
[twitter] Use sanitized_Request
[udemy] Use sanitized_Request
[vbox7] Use sanitized_Request
[veoh] Use sanitized_Request
[vessel] Use sanitized_Request
[vevo] Use sanitized_Request
[viddler] Use sanitized_Request
[videomega] Use sanitized_Request
[viewster] Use sanitized_Request
[viki] Use sanitized_Request
[vk] Use sanitized_Request
[vodlocker] Use sanitized_Request
[voicerepublic] Use sanitized_Request
[wistia] Use sanitized_Request
[xfileshare] Use sanitized_Request
[xtube] Use sanitized_Request
[xvideos] Use sanitized_Request
[yandexmusic] Use sanitized_Request
[youku] Use sanitized_Request
[youporn] Use sanitized_Request
[youtube] Use sanitized_Request
[patreon] Use sanitized_Request
[extractor/common] Remove unused import
[nfb] PEP 8
When using 'bestvideo/best,bestaudio', 'bestvideo/best' must be set as the current_selector (instead of appending it to the selectors); otherwise, when the ',' is reached, 'None' would be appended to the selectors.
'bestvideo+bestaudio/best' was incorrectly interpreted as 'bestvideo+(bestaudio/best)', so it would fail if 'bestaudio' didn't exist instead of falling back to 'best'.
The spec string is processed using 'tokenize.tokenize' to split it into words and operators; the filters are still processed using regular expressions.
This should make it easier to allow grouping operators with parens.
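A minimal illustration of the tokenize-based splitting described above (Python 3's tokenize shown; not the actual youtube-dl code). Filters such as '[height<=?480]' would still be matched with regular expressions:

    import io
    import tokenize

    def split_format_spec(spec):
        # split a spec such as 'bestvideo+bestaudio/best' into words and operators
        readline = io.BytesIO(spec.encode('utf-8')).readline
        tokens = []
        for tok in tokenize.tokenize(readline):
            if tok.type in (tokenize.ENCODING, tokenize.NEWLINE, tokenize.NL, tokenize.ENDMARKER):
                continue
            tokens.append((tokenize.tok_name[tok.type], tok.string))
        return tokens

    print(split_format_spec('bestvideo+bestaudio/best'))
    # [('NAME', 'bestvideo'), ('OP', '+'), ('NAME', 'bestaudio'), ('OP', '/'), ('NAME', 'best')]
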
For some extractors for which it is hard to work out a good _VALID_URL we use very vague and unrestrictive ones,
e.g. just allowing anything after the hostname and capturing part of the URL as the id.
If one of these extractors happens to contain a video embed from a different hoster or platform,
and this scenario was not handled in the extractor itself, we are unable to download this embed
until the extractor is fixed to support embeds of this kind.
Forcing the downloader to use the generic extractor can be a neat temporary solution for this problem.
Example: FiveTV extractor with a Tvigle embed - http://www.5-tv.ru/rabota/broadcasts/48/
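For reference, a hedged usage sketch via the Python API (assuming the option is exposed to YoutubeDL under the name 'force_generic_extractor', matching the --force-generic-extractor command-line flag):

    import youtube_dl

    ydl_opts = {'force_generic_extractor': True}  # bypass the dedicated FiveTV extractor
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['http://www.5-tv.ru/rabota/broadcasts/48/'])
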
Otherwise it's impossible to download only non-DASH formats; for example, `best[height=?480]/best` would download a DASH video if it's the only one with height=480, instead of falling back to the second format specifier.
For audio-only URLs (soundcloud, bandcamp, ...), the best audio will be downloaded as before.
It doesn't work well with 'bestvideo' and 'bestaudio' because they are usually before the max quality.
Format filters should be used instead; they are more flexible and don't require the requested quality to exist for each video.
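For example, a format filter with a fallback (the trailing '?' lets formats whose height is unknown pass the filter), shown here through the Python API and equivalent to -f 'best[height<=?480]/best' on the command line; the URL is just the youtube-dl test video:

    import youtube_dl

    ydl_opts = {
        # best format no taller than 480p; fall back to plain 'best' if none matches
        'format': 'best[height<=?480]/best',
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
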
Without the '--keep-video' option the two files would be downloaded again, and even with the option, ffmpeg would be run again, which for some videos can take a long time.
We use a temporary file with ffmpeg so that the final file only exists if the conversion succeeds.
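A rough sketch of that temporary-file pattern (the helper mirrors youtube-dl's utils.prepend_extension; the other names are assumptions):

    import os
    import subprocess

    def prepend_extension(filename, ext):
        # 'video.mp4' -> 'video.temp.mp4', keeping the real extension for ffmpeg
        name, real_ext = os.path.splitext(filename)
        return '{0}.{1}{2}'.format(name, ext, real_ext)

    def run_ffmpeg(input_path, output_path, extra_args):
        temp_path = prepend_extension(output_path, 'temp')
        # check_call raises if ffmpeg fails, so the rename below is skipped
        subprocess.check_call(['ffmpeg', '-y', '-i', input_path] + extra_args + [temp_path])
        os.rename(temp_path, output_path)  # the final file only appears on success
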
Since keep_video started as None, we always set it to keep_video_wish (unless keep_video_wish was None), so in the end keep_video == keep_video_wish. This should have been changed in f3ff1a3696, but I didn't notice it.
We need to keep the original subtitles information, so that the '--load-info' option can be used to list or select the subtitles again.
We'll also be able to have a separate field for storing the automatic captions info.
For each language the extractor builds a list with the available formats, sorted (like for video formats); YoutubeDL then selects one of them using the '--sub-format' option, which now allows specifying format preferences (for example 'ass/srt/best').
For each format the 'url' field can be set, so that we only download the contents if needed, or, if the contents need to be processed (like in crunchyroll), the 'data' field can be used.
The reasons for this change are:
* We weren't checking that the format given with '--sub-format' was available; checking it in each extractor would be repetitive.
* It allows us to easily support giving a format preference.
* The subtitles were automatically downloaded in the extractor, but I think that if you use for example the '--dump-json' option you want to finish as fast as possible.
Currently only the ted extractor has been updated, but the old system still works.
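A hypothetical info_dict fragment illustrating the layout described above (the 'url'/'data' fields come from the description; the 'ext' key and all concrete values are made up):

    subtitles = {
        'en': [
            {'ext': 'srt', 'url': 'https://example.com/subs/en.srt'},
            {'ext': 'ass', 'data': '[Script Info]\n; already-processed contents\n'},
        ],
        'es': [
            {'ext': 'vtt', 'url': 'https://example.com/subs/es.vtt'},
        ],
    }
    # With '--sub-format ass/srt/best', YoutubeDL would pick, per language, the
    # first preference that is available in such a list.
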
Useful for external tools using the json output.
The '_calc_headers' and '_calc_cookies' methods have been copied from downloader/external, which now just uses "info_dict['http_headers']".
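A minimal sketch (assumed, not necessarily the actual code) of what the header calculation boils down to: overlay the entry-specific headers on the global defaults so external tools reading the JSON output can reproduce the request:

    def calc_headers(std_headers, info_dict):
        headers = dict(std_headers)                           # global defaults
        headers.update(info_dict.get('http_headers') or {})   # per-entry headers
        return headers
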
If one of the postprocessors said the file should be kept, the responses from the
following postprocessors weren't taken into account. This was wrong when the
'keep_video' option was False: if the first postprocessor modifies the original file
and then we extract its audio, we don't want to keep the original video file.
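A rough sketch of the corrected rule as described above (names and return convention are assumptions, not the actual implementation): every postprocessor's answer counts, and with keep_video=False a later "the original may be deleted" answer is not overridden by an earlier "keep it".

    def should_keep_original(pp_answers, keep_video_option):
        # pp_answers: True = keep, False = may delete, None = no opinion
        if keep_video_option:
            return True                       # the user explicitly asked to keep it
        opinions = [a for a in pp_answers if a is not None]
        if not opinions:
            return True                       # no postprocessor touched the file
        return all(opinions)                  # keep only if nobody allowed deletion
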