Compare commits

...

642 Commits

Author SHA1 Message Date
Henrik Heimbuerger
48f8eca81a
Merge d4664a5346 into c5098961b0 2024-11-23 11:14:32 +00:00
dirkf
d4664a5346
Remove (last?) set literal 2024-11-23 11:14:30 +00:00
dirkf
92d881c33f
Linty 2024-11-23 11:03:37 +00:00
dirkf
bd4729a866
[utils] Add json_stringify()
* somewhat like JSON.stringify()
* replaces json.dumps(..., separators=(',',':')).encode('utf-8')
* more kwarg options available
2024-11-23 11:00:00 +00:00
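
A minimal sketch of the helper described above, assuming compact separators and UTF-8 output as the defaults (the in-tree function's extra kwarg options may differ):

    import json

    def json_stringify(obj, **kwargs):
        # compact separators match JavaScript's JSON.stringify() output
        kwargs.setdefault('separators', (',', ':'))
        return json.dumps(obj, **kwargs).encode('utf-8')

    json_stringify({'a': 1, 'b': [2, 3]})  # b'{"a":1,"b":[2,3]}'
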
dirkf
79abdae734
Add Art19IE to extractors.py
And clean up sorting
2024-11-23 10:47:21 +00:00
dirkf
88619125c8
Create art19.py 2024-11-23 10:39:54 +00:00
dirkf
3565d21951
Merge branch 'master' into add-nebula-support 2024-11-23 10:34:26 +00:00
dirkf
ddbadd037f
Update PR with back-port from its development in yt-dlp 2024-11-23 10:31:42 +00:00
dirkf
c5098961b0 [Youtube] Rework n function extraction pattern
Now also succeeds with player b12cc44b
2024-08-06 20:59:09 +01:00
dirkf
dbc08fba83 [jsinterp] Improve slice implementation for player b12cc44b
Partly taken from yt-dlp/yt-dlp#10664, thx seproDev
Fixes #32896
2024-08-06 20:51:38 +01:00
Aiur Adept
71223bff39
[Youtube] Fix nsig extraction for player 20dfca59 (#32891)
* dirkf's patch for nsig extraction
* add generic search per yt-dlp/yt-dlp/pull/10611 - thx bashonly

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-08-01 19:18:34 +01:00
dirkf
e1b3fa242c [Youtube] Find n function name in player 3400486c
Fixes #32877
2024-07-25 00:16:00 +01:00
dirkf
451046d62a [Youtube] Make n-sig throttling diagnostic up-to-date 2024-07-24 14:33:34 +01:00
dirkf
16f5bbc464 [YouTube] Fix nsig processing for player b22ef6e7
* improve extraction of function name (like yt-dlp/yt-dlp#10390)
* always use JSInterp to extract function code (yt-dlp/yt-dlp#10396, thx seproDev, pukkandan)
2024-07-11 00:50:46 +01:00
dirkf
d35ce6ce95 [jsinterp] Support functionality for player b22ef6e7
* support `prototype` for call() and apply() (yt-dlp/yt-dlp#10392, thx Grub4k)
* map JS `Array` to `list`
2024-07-11 00:50:46 +01:00
dirkf
76ac69917e [jsinterp] Further improve expression parsing (fix fd8242e)
Passes tests from yt-dlp
2024-07-11 00:50:46 +01:00
dirkf
756f6b45c7 [jsinterp] Re-align JSInterp and tests (esp.) with yt-dlp
Thx: various yt-dlp authors
2024-07-11 00:50:46 +01:00
bashonly
43a74c5fa5 [core] Address gaps in allowed extensions
Adds some extensions missing in 4652109643
(from yt-dlp/yt-dlp#10362)

Authored by: bashonly
Co-authored by: dirkf
2024-07-11 00:50:46 +01:00
dirkf
a452f9437c [core] Fix PR #32830 for fixed extensionless output template 2024-07-07 22:33:32 +01:00
unkernet
36801c62df
[YandexMusic] Save track version in the title field
PR #32837
* Add track version to track title
2024-07-07 20:18:33 +01:00
Sergey Musatov
f4b47754d9
[YandexMusic] Download music in High Quality (320 Kbit/s)
PR #31159
2024-07-06 11:04:36 +01:00
dirkf
37cea84f77 [core,utils] Support unpublicised --no-check-extensions 2024-07-02 15:38:50 +01:00
dirkf
4652109643 [core,utils] Implement unsafe file extension mitigation
* from https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-79w7-vh3h-8g4, thx grub4k
2024-07-02 15:38:50 +01:00
dirkf
3c466186a8 [utils] Back-port Namespace and MEDIA_EXTENSIONS from yt-dlp
Thx pukkandan
* Namespace: https://github.com/yt-dlp/yt-dlp/commit/591bb9d355
* MEDIA_EXTENSIONS: https://github.com/yt-dlp/yt-dlp/commit/8dc5930511
2024-07-02 15:38:50 +01:00
dirkf
4d05f84325 [PalcoMP3] Conform to new linter rule
* no space after @ in decorator
2024-06-20 20:03:49 +01:00
dirkf
e0094e63c3 [jsinterp] Various tweaks
* treat Infinity like NaN
* cache operator list
2024-06-20 20:03:49 +01:00
dirkf
fd8242e3ef [jsinterp] Fix and improve expression parsing
* improve BODMAS (fixes https://github.com/ytdl-org/youtube-dl/issues/32815)
* support more weird expressions with multiple unary ops
2024-06-20 20:03:49 +01:00
dirkf
ad01fa6cca [jsinterp] Add Debugger from yt-dlp
* https://github.com/yt-dlp/yt-dlp/commit/8f53dc4
* thx pukkandan
2024-06-20 20:03:49 +01:00
dirkf
2eac0fa379 [utils] Save orig_msg in ExtractorError 2024-06-20 20:03:49 +01:00
Paper
0153b387e5
[VidLii] Add 720p support (#30924)
* [VidLii] Add HD support (yt-dlp backport-ish)

* Also fix a bug with the view count

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-06-11 13:21:39 +01:00
dirkf
a48fe7491d [ORF] Skip tests with limited availability 2024-06-11 12:52:13 +01:00
dirkf
e20ca543f0 [ORF] Re-factor and update ORFFM4StoryIE
* fix getting media via DASH instead of inaccessible mp4
* also get in-page YT media
2024-06-11 12:52:13 +01:00
dirkf
e39466051f [ORF] Support sound.orf.at, updating ORFRadioIE
* maintain support for xx.orf.at/player/... URLs
* add `ORFRadioCollectionIE` to support playlists in ORF Sound
* back-port and re-work `ORFPodcastIE` from https://github.com/yt-dlp/yt-dlp/pull/8486, thx Esokrates
2024-06-11 12:52:13 +01:00
dirkf
d95c0d203f [ORF] Support on.orf.at, replacing ORFTVthekIE
* add `ORFONIE`, back-porting yt-dlp PR https://github.com/yt-dlp/yt-dlp/pull/9113 and friends: thx HobbyistDev, TuxCoder, seproDev
* re-factor to support livestreams via new `ORFONliveIE`
2024-06-11 12:52:13 +01:00
dirkf
3bde6a5752 [test] Improve download test
* skip reason can't be unicode in Py2
* remove duplicate assert...Equal functions
2024-06-11 12:52:13 +01:00
dirkf
50f6c5668a [core] Re-factor with _fill_common_fields() as used in yt-dlp 2024-06-11 12:52:13 +01:00
dirkf
b4ff08bd2d [core] Safer handling of nested playlist data 2024-06-11 12:52:13 +01:00
kmnx
88bd8b9f87
[mixcloud] updated mixcloud API server address (#32557)
* updated mixcloud API server address
* fix tests
* etc

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-06-11 12:38:24 +01:00
dirkf
21924742f7 [InfoExtractor] Misc yt-dlp back-ports, etc
* add _yes_playlist() method
* avoid crash using _NETRC_MACHINE
* use _search_json() in _search_nextjs_data()
* _search_nextjs_data() default is JSON, not text
* test for above
2024-05-30 15:46:36 +01:00
dirkf
768ccccd9b [compat] Avoid type comparison in compat_ord
NB This isn't actually a compat fn; it should be utils.int_from_int_or_char
2024-05-30 15:46:36 +01:00
dirkf
eee9a247eb [utils] Split out traversal.py dummy and traversal tests 2024-05-30 15:46:36 +01:00
dirkf
34484e49f5 [compat] Improve compat_etree_iterfind for Py2.6
Adapted from https://raw.githubusercontent.com/python/cpython/2.7/Lib/xml/etree/ElementPath.py
2024-05-30 15:46:36 +01:00
dirkf
06da64ee51 [utils] Update traverse_obj() from yt-dlp
* remove `is_user_input` option per https://github.com/yt-dlp/yt-dlp/pull/8673
* support traversal of compat_xml_etree_ElementTree_Element per https://github.com/yt-dlp/yt-dlp/pull/8911
* allow un/branching using all and any per https://github.com/yt-dlp/yt-dlp/pull/9571
* support traversal of compat_cookies.Morsel and multiple types in `set()` keys per https://github.com/yt-dlp/yt-dlp/pull/9577
thx Grub4k for these
* also, move traversal tests to a separate class
* allow for unordered dicts in tests for Py<3.7
2024-05-30 15:46:36 +01:00
dirkf
a08f2b7e45
[workflows/ci.yml] Temporary workaround for Python 3.5 _pip_ failures
https://github.com/actions/setup-python/issues/866
2024-05-15 16:57:59 +01:00
dirkf
668332b973 [YouPorn] Add playlist extractors
* YouPornCategoryIE
* YouPornChannelIE
* YouPornCollectionIE
* YouPornStarIE
* YouPornTagIE
* YouPornVideosIE
2024-04-22 01:34:26 +01:00
dirkf
0b2ce3685e [YouPorn] Improve extraction
* detect unwatchable videos
* improve duration extraction
* fix count extraction and support large values
* detect and remove SEO spam boilerplate description
2024-04-22 01:34:26 +01:00
dirkf
c2766cb80e [test/test_download] Support 'playlist_maxcount:count' expected value
* parallel to `playlist_mincount`
* specify both for a range of playlist lengths
* if max < min the test will always fail!
2024-04-22 01:34:26 +01:00
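
How the two bounds combine, as a sketch (the helper name check_playlist_count is illustrative, not the harness's own):

    def check_playlist_count(entries, expected):
        mincount = expected.get('playlist_mincount')
        maxcount = expected.get('playlist_maxcount')
        if mincount is not None:
            assert len(entries) >= mincount
        if maxcount is not None:
            assert len(entries) <= maxcount
        # specifying both bounds a range; if maxcount < mincount, no count passes
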
dirkf
eb38665438 [YouPorn] Incorporate yt-dlp PR 8827
* from https://github.com/yt-dlp/yt-dlp/pull/8827
* extract from webpage instead of broken API URL
* thx The-MAGI
2024-04-22 01:34:26 +01:00
dirkf
e0727e4ab6 [postprocessor/ffmpeg] Fix finding ffprobe (bug in 21792b8)
Fixes 21792b88b7 (commitcomment-140705274), thx: vonProteus
2024-04-07 15:33:30 +01:00
Ori Avtalion
4ea59c6107
[utils] Fix crash in _report_ignoring_subs from c58b655 (#32762)
Align `utils.bug_reports_message()` with yt-dlp https://github.com/yt-dlp/yt-dlp/commit/5873d4ccdd, thanks fstirlitz

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-04-05 15:25:29 +01:00
dirkf
21792b88b7 [external/FFmpeg] Fix and improve --ffmpeg-location handling
* pass YoutubeDL (FileDownloader) to FFmpegPostProcessor constructor
* consolidate path search in FFmpegPostProcessor
* make availability of FFmpegFD depend on existence of FFmpegPostProcessor
* detect ffmpeg executable on instantiation of FFmpegFD
* resolves #32735
2024-03-27 13:11:17 +00:00
dirkf
d8f134a664 [downloader/external] Fix "Resource Warning" in downloader test
* add compat_subprocess_Popen context manager
* apply context manager in FFmpegFD._call_downloader()
2024-03-27 13:11:17 +00:00
dirkf
31a15a7c8d [compat] Simplify/fix compat_html_parser_HTMLParseError 2024-03-27 13:11:17 +00:00
dirkf
19dc10b986 [utils] Apply compat_contextlib_suppress 2024-03-27 13:11:17 +00:00
dirkf
182f63e82a [compat] Add compat_contextlib_suppress
with compat_contextlib_suppress(*Exceptions):
    # code that fails silently for any of Exceptions
2024-03-27 13:11:17 +00:00
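
A self-contained sketch of such a shim, mirroring contextlib.suppress (the in-tree version may differ in detail):

    class compat_contextlib_suppress(object):
        def __init__(self, *exceptions):
            self._exceptions = exceptions

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            # returning True suppresses the exception, iff it matches
            return exc_type is not None and issubclass(exc_type, self._exceptions)

    with compat_contextlib_suppress(ValueError):
        int('not a number')  # fails silently
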
gy-chen
71211e7db7
[Youtube] Fix unwanted private method __ie_msg in f8b0135850
Fixes `AttributeError: no attribute '_YoutubeIE__ie_msg'` if unable to decode n-parameter
2024-03-23 15:30:13 +00:00
Zizheng Guo
a96a45b2cd
[Vimeo] Improve config extraction (#32742)
* update for more robust json parsing
2024-03-12 11:44:13 +00:00
hatsomatt
820fae3b3a [Videa] Fix extraction
* update API URL
* from https://github.com/yt-dlp/yt-dlp/pull/8003
* thanks to the authors!

Closes yt-dlp/7427
Authored by: hatsomatt, aky-01
2024-03-08 13:14:52 +00:00
dirkf
aef24d97e9 [Videa] Align with yt-dlp 2024-03-08 13:14:52 +00:00
dirkf
f7b30e3f73 [XFileShare] Update extractor for 2024
* simplify aa_decode()
* review and update supported sites and tests
* in above, include FileMoon.sx, and remove separate module
* incorporate changes from yt-dlp
* allow for decoding multiple scripts (eg, FileMoon)
* use new JWPlayer extraction
2024-03-08 13:03:42 +00:00
dirkf
f66372403f [InfoExtractor] Rework and improve JWPlayer extraction
* use traverse_obj() and _search_json()
* support playlist `.load({**video1},{**video2}, ...)`
* support transform_source=... for _extract_jwplayer_data()
2024-03-08 13:03:42 +00:00
dirkf
7216fa2ac4 [InfoExtractor] Add _search_json()
* uses the error diagnostic to truncate the JSON string
* may be confused by non-C-Pythons
2024-03-08 13:03:42 +00:00
dirkf
acc383b9e3 [utils] Let int_or_none() accept a base, like int() 2024-03-08 13:03:42 +00:00
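
Roughly, as a sketch (the in-tree int_or_none() also has scale/default/get_attr parameters):

    def int_or_none(v, base=None, default=None):
        # like int(), but returns default instead of raising
        try:
            return int(v) if base is None else int(v, base)
        except (TypeError, ValueError):
            return default

    int_or_none('ff', base=16)   # 255
    int_or_none('not a number')  # None
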
Hubert Hirtz
f0812d7848
[utils] Handle user:pass in URLs (#28801)
* Handle user:pass in URLs

Fixes "nonnumeric port" errors when youtube-dl is given URLs with
usernames and passwords such as:

    http://username:password@example.com/myvideo.mp4

Refs:
- https://en.wikipedia.org/wiki/Basic_access_authentication
- https://tools.ietf.org/html/rfc1738#section-3.1
- https://docs.python.org/3.8/library/urllib.parse.html#urllib.parse.urlsplit

Fixes #18276 (point 4)
Fixes #20258
Fixes #26211 (see comment)

* Align code with yt-dlp

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-03-04 01:27:55 +00:00
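
Illustration of the parsing that makes this work (standard-library behaviour):

    try:
        from urllib.parse import urlsplit  # Python 3
    except ImportError:
        from urlparse import urlsplit  # Python 2

    parts = urlsplit('http://username:password@example.com/myvideo.mp4')
    parts.username, parts.password, parts.hostname
    # ('username', 'password', 'example.com'); naively splitting the netloc
    # on ':' instead misreads 'password@example.com' as a nonnumeric port
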
Aaron Tan
40bd5c1815
[caffeine.tv] Add new extractor (#32514)
* Add CaffeineTVIE info extractor to support site caffeine.tv

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-02-22 12:54:07 +00:00
dirkf
70f230f9cf
[GBNews] Add new extractor for GB News TV channel (#29432)
* Add extractor for GB News TV channel

* Support more GBNews URL formats
Allow alphanumeric and _ in place of `shows`, which redirect to site's preferred URL

* Update for 2024
2024-02-22 12:44:00 +00:00
dirkf
48ddab1f3a
[downloader/external] Fix WgetFD proxy (rev 2)
From PR (defunct source), closes #29343.
Matches https://github.com/yt-dlp/yt-dlp/pull/3152
Thx former user kikuyan.
2024-02-21 16:29:08 +00:00
dirkf
7687389f08 [Vbox7] Improve extraction, adding features from yt-dlp PR #9100
* changes from https://github.com/yt-dlp/yt-dlp/pull/9100 (thx
seproDev):
  - attempt HLS extraction
  - re-enable XFF
  - test `view_count`, `duration` extraction
* improve commenting, error checks
2024-02-19 00:53:22 +00:00
dirkf
4416f82c80 [Vbox7IE] Sanitise ld+json containing unexpected characters
* based on PR #29680
* added hack to force invoking `transform_source`
* fixes #26218
2024-02-02 12:36:05 +00:00
dirkf
bdda6b81df [Vbox7IE] Improve extraction
* DASH extraction no longer fails with new range support
* but always find combined formats if available
* suppress ineffective XFF geo-bypass (causes time-outs)
* adapted from https://github.com/ytdl-org/youtube-dl/pull/29680
* thx former GH user kikuyan
2024-02-02 12:36:05 +00:00
dirkf
1fd8f802b8 [InfoExtractor] Correctly resolve BaseURL in DASH manifest
Specs:
* ISO/IEC 23009-1:2012 section 5.6
* RFC 3986 section 5.
2024-02-02 12:36:05 +00:00
dirkf
4eaeb9b2c6 [InfoExtractor] Support byte range for DASH
* adapted from https://github.com/ytdl-org/youtube-dl/pull/30279
* thx former GH user kikuyan
2024-02-02 12:36:05 +00:00
dirkf
bec9180e89 [downloader/dash] Support range in fragment (format f'{start}-{end}')
* adapted from https://github.com/ytdl-org/youtube-dl/pull/30279
* thx former GH user kikuyan
2024-02-02 12:36:05 +00:00
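
A sketch of consuming such a fragment range (parse_fragment_range is a hypothetical helper name):

    def parse_fragment_range(rng):
        # '{start}-{end}' -> HTTP Range header value
        start, _, end = rng.partition('-')
        return 'bytes=%s-%s' % (start, end)

    parse_fragment_range('0-9999')  # 'bytes=0-9999'
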
dirkf
c58b655a9e [InfoExtractor] Support DASH subtitle extraction (yt-dlp back-port) 2024-02-02 12:36:05 +00:00
dirkf
dc512e3a8a [YouTube] Fix like_count extraction using likeButtonViewModel
* also fix various tests
* TODO: check against yt-dlp tests
2024-01-22 11:10:34 +00:00
dirkf
f8b0135850 [YouTube] Rework n-sig processing, realigning with yt-dlp
* apply n-sig before chunked fragments, fixes #32692
2024-01-22 11:10:34 +00:00
dirkf
640d39f03a [InfoExtractor] Support some warning and ._downloader shortcut methods from yt-dlp 2024-01-22 11:10:34 +00:00
dirkf
6651871416 [compat] Rework compat for method parameter of compat_urllib_request.Request constructor
* fixes #32573
* does not break `utils.HEADRequest` (eg)
2024-01-22 11:10:34 +00:00
mk-pmb
be008e657d [core] Fix format string injection for metadata JSON filename message. 2023-12-06 02:45:41 +00:00
Robotix
b1bbc1e502
[Epidemic Sound] Add new extractor (#32628)
* Add simple extractor
* Support separate tracks
* Use index as id instead of slug

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-12-06 01:17:57 +00:00
dirkf
55a442adae
[Imgur] Overhaul extractor module (#32612)
Revise extractors for new API and page formats
2023-12-05 20:02:30 +00:00
mimvahedi
c62936a5f2
[telewebion] Fix extraction (#32634)
* [telewebion] fix extraction

Resolves https://github.com/ytdl-org/youtube-dl/issues/5135#issuecomment-932952119

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-12-02 15:25:09 +00:00
dirkf
427472351c [utils] Make restricted filenames ignore characters in Unicode categories Mark, Other
Resolves #32629
2023-11-29 22:08:01 +00:00
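
Illustration of the category test involved, as a sketch (not the exact utils code, which does much more):

    import unicodedata

    def keep_char(ch):
        # drop characters in Unicode categories Mark (M*) and Other (C*)
        return unicodedata.category(ch)[0] not in 'MC'

    ''.join(ch for ch in u'a\u0301b\u200bc' if keep_char(ch))  # 'abc'
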
dirkf
c6538ed323 [workflows/ci.yml] Use setup-python for the now-released Python 3.12 2023-11-29 22:08:01 +00:00
dirkf
8d227cb97b [workflows/ci.yml] Actually use default values for push and pull_request 2023-11-29 22:08:01 +00:00
dirkf
4e115e18cb [workflows/ci.yml] Run apt-get update before installing 2023-11-29 22:08:01 +00:00
ReenigneArcher
b7fca0fab3 [Youtube] Update consent cookie handling to match site
Apologies for force push!
[skip ci]
2023-11-29 21:43:02 +00:00
dirkf
00ef748cc0 [downloader] Fix baa6c5e: show ETA of http download as ETA instead of total d/l time 2023-09-24 22:07:47 +01:00
dirkf
66ab0814c4 [utils] Revert bbd3e7e, updating docstring, test instead 2023-09-03 23:15:19 +01:00
dirkf
bbd3e7e999 [utils] Properly handle list values in update_url()
An actual list value in a query update could have been treated
as a list of values because of the key:list parse_qs format.
2023-09-03 01:18:22 +01:00
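
The parse_qs shape that caused the ambiguity (standard-library behaviour):

    try:
        from urllib.parse import parse_qs  # Python 3
    except ImportError:
        from urlparse import parse_qs  # Python 2

    parse_qs('a=1&a=2&b=3')  # {'a': ['1', '2'], 'b': ['3']}
    # every value is a list, so an actual list supplied as an update is
    # indistinguishable from multiple values for the same key
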
dirkf
21caaf2380 [test] Remove redundancy from lambda expected value regex 2023-09-03 01:13:40 +01:00
dirkf
31f50c8194 [S4C] Add thumbnail extraction, extract series as playlist
Based on https://github.com/yt-dlp/yt-dlp/pull/7776: thx ifan-t, bashonly
2023-08-31 23:16:50 +01:00
dirkf
7d58f0769a
[ci.yml] Improve conditions for nosetest installations 2023-08-31 17:16:47 +01:00
dirkf
86e3cf5e58 [S4C] Add extractor for Sianel Pedwar Cymru
* from https://github.com/yt-dlp/yt-dlp/pull/7730, thx ifan-t, bashonly
2023-08-04 22:54:12 +01:00
dirkf
2efc8de4d2 [utils] Advertise optional supported Content-Encodings 2023-08-01 01:05:09 +01:00
dirkf
e4178b5af3 [utils] Add and use filter_dict() from yt-dlp 2023-08-01 01:05:09 +01:00
dirkf
2d2a4bc832 [utils] Revise isinstance() tests (especially for str/unicode/bytes) to complete Linter fix 2023-08-01 01:05:09 +01:00
dirkf
7d965e6b65 [utils] Avoid comparing type(var), etc, to pass new Linter rules 2023-08-01 01:05:09 +01:00
dirkf
abef53466d [utils] Rework URL path munging for ., .. components
* move processing to YoutubeDLHandler
* also process `Location` header for redirect
* use tests from https://github.com/yt-dlp/yt-dlp/pull/7662
2023-07-29 14:27:26 +01:00
dirkf
e7926ae9f4 [utils] Rework decoding of Content-Encodings
* support nested encodings
* support optional `br` encoding, if brotli package is installed
* support optional 'compress' encoding, if ncompress package is installed
* response `Content-Encoding` has only unprocessed encodings, or removed
* response `Content-Length` is decoded length (usable for filesize metadata)
* use zlib for both deflate and gzip decompression
* some elements taken from yt-dlp: thx especially coletdjnz
2023-07-29 14:27:26 +01:00
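
A sketch of nested-encoding decoding with zlib handling both deflate and gzip (optional codecs as described above; error handling and the raw-deflate corner case omitted):

    import zlib

    def decode_content(data, content_encoding):
        # apply encodings right-to-left, e.g. 'deflate, gzip'
        for enc in reversed([e.strip() for e in content_encoding.split(',')]):
            if enc in ('gzip', 'deflate'):
                # wbits=47 (MAX_WBITS|32) auto-detects zlib and gzip wrappers
                data = zlib.decompress(data, zlib.MAX_WBITS | 32)
            elif enc == 'br':
                import brotli  # optional brotli package, per the commit above
                data = brotli.decompress(data)
        return data
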
dirkf
87e578c9b8 [workflows/ci.yml] Update to setup-java@v3
* avoid Node 12 deprecation
2023-07-29 14:27:26 +01:00
dirkf
0861812d72 [build] Fix typo in devscripts/fish-completion.py (fix 2285605) 2023-07-25 15:11:15 +01:00
dirkf
b870181229 [build] Extend use of devscripts/utils 2023-07-25 13:19:43 +01:00
dirkf
a25e9f3c84 [compat] Use compat_open() 2023-07-25 13:19:43 +01:00
dirkf
aac33155e4 [build] Add and use devscripts/utils 2023-07-25 13:19:43 +01:00
dirkf
2b7dd3b2a2 [utils] Fix update_Request() with empty data (not None) 2023-07-25 13:19:43 +01:00
dirkf
44faa71b19 [test/test_execution.py] Use compat_subprocess_get_DEVNULL() 2023-07-25 13:19:43 +01:00
dirkf
7bce2ad441 [build] Fix various Jython CI and test issues 2023-07-25 13:19:43 +01:00
dirkf
ca71e56c48 [workflows/ci.yml] Build 3.12 with pyenv 2023-07-25 13:19:43 +01:00
dirkf
2a4e9faa77 [doc] Update developer guidance
* mention pynose
* mention traverse_obj and add/revise examples

[skip ci]
2023-07-25 13:19:43 +01:00
dirkf
74eef6bb5e [workflows/ci.yml] Extend Python versions
* add 3.10 - 3.12
* use https://pypi.org/project/pynose/ for Py >= 3.9
* test Windows with 3.4
* set defaults (main, both) except push: (all, core)
2023-07-25 13:19:43 +01:00
dirkf
1fa8b86f0b [utils] Remove stray undocumented Host header in redirect (fix 46fde7c) 2023-07-20 05:29:59 +01:00
dirkf
b2ba24bb02 [InfoExtractor] Add _match_valid_url() class method and refactor
* API compatible with yt-dlp
* also support Sequence of patterns in _VALID_URL
* one place to compile _VALID_URL
* TODO: remove existing extractor shims
2023-07-19 22:14:50 +01:00
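
A sketch of the class method under the description above (str-or-sequence _VALID_URL, compiled once and cached per class):

    import re

    class InfoExtractor(object):
        _VALID_URL = None

        @classmethod
        def _match_valid_url(cls, url):
            patterns = cls._VALID_URL
            if isinstance(patterns, str):
                patterns = (patterns,)
            # compile once; check cls.__dict__ so each subclass gets its own cache
            if '_VALID_URL_RE' not in cls.__dict__:
                cls._VALID_URL_RE = tuple(re.compile(p) for p in patterns)
            for pat in cls._VALID_URL_RE:
                m = pat.match(url)
                if m:
                    return m
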
dirkf
a190b55964 [utils] Fix broken Py 3.11+ compat in traverse_obj()
* inspect.getargspec is missing despite doc claiming backward compat
* replace with emulation of `Signature.bind()`
2023-07-19 22:14:50 +01:00
dirkf
b2741f2654 [InfoExtractor] Add search methods for Next/Nuxt.js from yt-dlp
* add _search_nextjs_data(), from https://github.com/yt-dlp/yt-dlp/pull/1386
  thanks selfisekai
* add _search_nuxt_data(), from https://github.com/yt-dlp/yt-dlp/pull/1921,
  thanks Lesmiscore, pukkandan
* add tests for the above
* also fix HTML5 type recognition and tests, from
  222a230871,
  thanks Lesmiscore
* update extractors in PR using above, fix tests.
2023-07-19 22:14:50 +01:00
dirkf
8465222041 [Clipchamp] Add new extractor back-ported from yt-dlp 2023-07-19 22:14:50 +01:00
dirkf
4339910df3 [DLF] Add site extractors back-ported from yt-dlp
* from https://github.com/yt-dlp/yt-dlp/pull/6697, thanks nick-cd
2023-07-19 22:14:50 +01:00
dirkf
eaaf4c6736 [Whyp] Add extractor back-ported from yt-dlp
* from https://github.com/yt-dlp/yt-dlp/pull/6803, thanks CoryTibbettsDev
2023-07-19 22:14:50 +01:00
dirkf
4566e6e53e [GlobalPlayer] Add site extractors back-ported from yt-dlp
* from https://github.com/yt-dlp/yt-dlp/pull/6903, thanks garret1317
2023-07-19 22:14:50 +01:00
dirkf
1e8ccdd2eb [InfoExtractor] Support groups in _search_regex(), etc 2023-07-19 22:14:50 +01:00
dirkf
cb9366eda5 [utils] Minor updates (merge_dicts, T)
A couple of mods to ease yt-dlp back-ports:
* add kwargs to merge_dicts:
  `unblank=True` (disallow empty string), `rev=False` (reverse the merge list)
* add `T(x)` shortcut for `{x}`, unsupported in Py2.6
2023-07-19 22:14:50 +01:00
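
Sketches of the two helpers under the semantics described (first non-blank value wins; rev reverses the merge list):

    def T(x):
        # shortcut for the set literal {x}, unsupported in Py2.6
        return set((x,))

    def merge_dicts(*dicts, **kwargs):
        unblank = kwargs.get('unblank', True)  # disallow empty-string values
        rev = kwargs.get('rev', False)         # reverse the merge list
        merged = {}
        for d in reversed(dicts) if rev else dicts:
            for k, v in d.items():
                if v is None or (unblank and v == ''):
                    continue
                merged.setdefault(k, v)
        return merged

    merge_dicts({'a': ''}, {'a': 'x'})  # {'a': 'x'}
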
dirkf
d9d07a9581 [utils] Improve js_to_json, align with yt-dlp
* support variable substitution, from https://github.com/yt-dlp/yt-dlp/pull/#521 etc,
  thanks ChillingPepper, Grub4k, pukkandan
* improve escape handling, from https://github.com/yt-dlp/yt-dlp/pull/#521
  thanks Grub4k
* support template strings from https://github.com/yt-dlp/yt-dlp/pull/6623
  thanks Grub4k
* add limited `!` evaluation (eg, !!0 -> false, see tests)
2023-07-19 22:14:50 +01:00
dirkf
825a40744b [utils] Align traverse_obj() with yt-dlp
Thanks Grub4k for these:
* traverse `Iterable`s, from https://github.com/yt-dlp/yt-dlp/pull/6902, etc
* traverse `set` key for transformations/filters, `re.Match` group names, from
  776995bc10, etc
* traverse `re.Match`es, from https://github.com/yt-dlp/yt-dlp/pull/5174
* always return list when branching, from https://github.com/yt-dlp/yt-dlp/pull/5170
2023-07-19 22:14:50 +01:00
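
Illustrative calls matching those features (assuming the back-ported youtube_dl.utils.traverse_obj; Ellipsis stands in for yt-dlp's `...` for Py2 compatibility):

    import re
    from youtube_dl.utils import traverse_obj

    data = {'items': [{'id': 'a'}, {'id': 'b'}]}
    traverse_obj(data, ('items', Ellipsis, 'id'))  # branching returns a list: ['a', 'b']

    m = re.match(r'(?P<name>\w+)=(?P<val>\w+)', 'k=v')
    traverse_obj(m, 'name')  # re.Match traversal by group name: 'k'
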
dirkf
47214e46d8 [compat] Fix old Pythons broken loading of valueless cookie attributes
Cookie string parsing in Py 2.6.9, probably earlier, requires `=`.
Also 3.2, though the CPython code appears to be OK: 3.1 was also wrong.
2023-07-18 10:50:46 +01:00
dirkf
1d8d5a93f7 [test] Fixes for old Pythons 2023-07-18 10:50:46 +01:00
dirkf
1634b1d61e [doc] Warn against setting cookies with --add-header 2023-07-18 10:50:46 +01:00
bashonly
21438a4194 [downloader/external] Fix cookie support 2023-07-18 10:50:46 +01:00
Simon Sawicki
8334ec961b [core] Process header cookies on loading 2023-07-18 10:50:46 +01:00
bashonly
3801d36416 [utils] YoutubeDLCookieJar: Add get_cookie_header and get_cookies_for_url methods 2023-07-18 10:50:46 +01:00
dirkf
b383be9887 [core] Remove Cookie header on redirect to prevent leaks
Adapted from yt-dlp/yt-dlp-ghsa-v8mc-9377-rwjj/pull/1/commits/101caac
Thx coletdjnz
2023-07-18 10:50:46 +01:00
dirkf
46fde7caee [core] Update redirect handling from yt-dlp
* Thx coletdjnz: https://github.com/yt-dlp/yt-dlp/pull/7094
* add test that redirected `POST` loses its `Content-Type`
2023-07-18 10:50:46 +01:00
dirkf
648dc5304c [compat] Add Request and HTTPClient compat for redirect
* support `method` parameter of `Request.__init__` (Py 2 and old Py 3)
* support `getcode` method of compat_http_client.HTTPResponse (Py 2)
2023-07-18 10:50:46 +01:00
dirkf
1720c04dc5 [test] Make skipped tests in test_execution work with Py 2.6 2023-07-18 10:50:46 +01:00
dirkf
d5ef405c5d [core] Align error reporting methods with yt-dlp 2023-07-18 10:50:46 +01:00
dirkf
f47fdb9564 [utils] Add {expected_type} and Iterable support to traverse_obj() 2023-07-18 10:50:46 +01:00
dirkf
b6dff4073d [core] Revert version display from b8a86dc 2023-07-18 10:50:46 +01:00
dirkf
f24bc9272e [Misc] Fixes for 2.6 compatibility 2023-07-05 22:58:54 +01:00
dirkf
b08a580906 [workflows/ci.yml] Fix test support for Py 2.6 2023-07-05 22:58:09 +01:00
dirkf
2500300c2a [workflows/ci.yml] Restore test support for Py 3.2 2023-07-05 22:51:15 +01:00
dirkf
58fc5bde47 [workflows/ci.yml] Restore test support for Py 3.3, 3.4, and add 2.6 2023-06-23 00:15:06 +01:00
dirkf
fa7f0effbe [YouTube] Avoid crash in author extraction 2023-06-22 23:14:21 +01:00
dirkf
ebdc82c586 [workflows/ci.yml] Replace actions/setup-python for legacy Pythons
Thanks MatteoH2O1999: https://github.com/MatteoH2O1999/setup-python
2023-06-22 23:12:22 +01:00
pukkandan
9112e668a5 [YouTube] Improve nsig function name extraction
Fixes player b7910ca8, using `,` vs `;`
See https://github.com/ytdl-org/youtube-dl/issues/32292#issuecomment-1602231170

Co-authored-by: dirkf
2023-06-22 16:46:53 +01:00
dirkf
07af47960f [YouTube] Improve fix for ae8ba2c
Thx: https://github.com/yt-dlp/yt-dlp/commit/01aba25
2023-06-18 00:52:18 +01:00
dirkf
ae8ba2c319 [YouTube] Fix `KeyError QV` when signature extraction fails
* temporarily force missing global definition into sig JS
* improve test: thanks https://github.com/yt-dlp/yt-dlp/issues/7327#issuecomment-1595274615
* resolves #32314
2023-06-17 15:55:19 +01:00
dirkf
d6433cbb2c [jsinterp] Don't find unrelated objects 2023-06-17 15:46:12 +01:00
dirkf
ff75c300f5 [jsinterp] Fix test for failed match in extract_object() 2023-06-17 15:34:11 +01:00
dirkf
a2534f7b88 [jsinterp] Fix div bug breaking player 8c7583ff
Thx bashonly: https://github.com/ytdl-org/youtube-dl/issues/32292#issuecomment-1585639223
Fixes #32292
2023-06-11 17:23:00 +01:00
dirkf
b8a86dcf1a [core] Revise 1f7c6f8 to help downstream merger (possibly) 2023-05-26 20:25:25 +01:00
dirkf
2389c7cbd3 [compat] Fix casefold import __all__ syntax in a19855f 2023-05-23 17:11:22 +01:00
dirkf
ee731f3d00 [ITV] Fix UA capitalisation in 384f632 2023-05-23 16:50:25 +01:00
dirkf
1f7c6f8b2b [core] Further improve platform debug log
* see d1c6c5c
2023-05-23 16:50:25 +01:00
dirkf
d89c2137ba [jsinterp] Small updates for a85a875
* update signature tests
* clarify NaN handling
2023-05-23 16:50:25 +01:00
dirkf
d1c6c5c4d6 [core] Improve platform debug log, based on yt-dlp 2023-05-11 21:17:31 +01:00
dirkf
6ed3433828 [jsinterp] Add short-cut evaluation for common expression
* special handling for (d%e.length+e.length)%e.length speeds up ~6%
2023-05-11 21:02:01 +01:00
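
The expression in question, rendered in Python terms (why it can be short-circuited):

    def mod_index(d, e):
        # JS: (d % e.length + e.length) % e.length -- maps any integer d,
        # including negatives, onto a valid index of e
        return (d % len(e) + len(e)) % len(e)
    # in Python a plain d % len(e) already yields the same result,
    # so the interpreter can special-case the whole pattern
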
dirkf
a85a875fef [jsinterp] Handle NaN in bitwise operators
* also add _NaN
* also pull function naming from yt-dlp
2023-05-11 20:59:30 +01:00
dirkf
11cc3f3ad0 [utils] Fix compiled_regex_type in 249f2b6 2023-05-11 20:53:07 +01:00
dirkf
64d6dd64c8 [YouTube] Support Releases tab 2023-04-23 22:58:35 +01:00
dirkf
211cbfd5d4 [jsinterp] Minimally handle arithmetic operator precedence
Resolves #32066
2023-04-21 14:04:30 +01:00
dirkf
26035bde46 [DashSegmentsFD] Correctly detect errors when fragment_retries == 0
* use the success flag instead of the retry count
* establish the fragment_url outside the retry loop
* only report skipping a fragment once.
* resolves #32033
2023-04-13 00:23:17 +01:00
dirkf
2da3fa04a6 [YouTube] Simplify signature patterns 2023-04-12 23:53:14 +01:00
Gabriel Nagy
735e87adfc
[core] Sanitize info dict before dumping JSON (fixes fe7e130) (#32032)
* follow up to fe7e130 which didn't fix everything.

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-04-12 23:40:38 +01:00
dirkf
fe7e13066c [core] Add and use sanitize_info() method from yt-dlp 2023-04-10 17:12:31 +01:00
dirkf
213d1d91bf [core] No longer importing copy 2023-04-06 19:49:46 +01:00
dirkf
f8253a5289 [core] Avoid deepcopy of ctx dict (fix f35b757) (Pt 2) 2023-04-06 19:42:36 +01:00
dirkf
d6ae3b77cd [core] Avoid deepcopy of ctx dict (fix f35b757)
* may now contain `LazyList`s
* resolves #31999
2023-04-06 14:56:55 +01:00
dirkf
9f4d83ff42 [options] Add --mtime option, unsets default --no-mtime
* resolves #1709 (!)
2023-04-05 19:05:16 +01:00
dirkf
25124bd640 [devscripts] Improve hack to convert command-line options to API options
* define equality for DateRange
* don't show default DateRange
2023-04-05 19:05:16 +01:00
dirkf
78da22489b [compat] Add and use compat_open() like Py3 open()
* resolves FIXME: ytdl-org/youtube-dl/commit/dfe5fa4
2023-04-05 18:57:37 +01:00
dirkf
557dbac173 [FragmentFD] Fix iteration with infinite limit
* fixes ytdl-org/youtube-dl/baa6c5e
* resolves #31885
2023-04-05 18:55:41 +01:00
dirkf
cdf40b6aa6 [test] Update tests for Ubuntu 20.04
* 18.04 test runner was withdrawn
* for now, disable Py 3.3/3.4 tests
2023-04-05 18:54:30 +01:00
pukkandan
3f6d2bd76f [extractor/youtube] Bypass throttling for -f17
and related cleanup

Thanks @AudricV for the finding

Ref: yt-dlp/yt-dlp/commit/c9abebb
2023-03-19 02:29:00 +00:00
pukkandan
88f28f620b [extractor/youtube] Construct fragment list lazily
Ref: yt-dlp/yt-dlp/commit/e389d17
See: yt-dlp/yt-dlp#6517
2023-03-19 02:29:00 +00:00
dirkf
f35b757c82 [utils] Ensure allow_types for variadic() is a tuple 2023-03-19 02:29:00 +00:00
dirkf
45495228b7 [downloader/http] Only check for resumability when actually resuming 2023-03-19 02:15:41 +00:00
dirkf
6fece0a96b [AENetworksBaseIE] Report missing show data instead of crash 2023-03-14 16:23:20 +00:00
dirkf
70ff013910 [devscripts] Add a hack to convert command-line options to API options 2023-03-14 16:23:20 +00:00
dirkf
e8de54bce5 [core] Handle /../ sequences in HTTP URLs
* use Python's RFC implementation for embedded sequences
* hack: strip unbalanced leading `../` from path, like eg Firefox

See https://github.com/yt-dlp/yt-dlp/issues/3355
2023-03-14 16:23:20 +00:00
dirkf
baa6c5e95c [FragmentFD] Respect --no-continue
* discard partial fragment on `--no-continue`
* continue with correct progress display otherwise

Resolves #21467
2023-03-14 16:23:20 +00:00
dirkf
5c985d4f81 [downloader] Let _ffmpeg_ handle DASH segments
Fixes https://github.com/ytdl-org/youtube-dl/issues/31792 after 3da1783.
2023-03-14 16:23:20 +00:00
dirkf
8c86fd33dc
[doc] Improve "guidance" on bug reporting 2023-03-09 16:40:30 +00:00
Sophira
27d41d7365
[doc] Recommend "Get cookies.txt LOCALLY" extension in README.md (#31763)
* remove link to suspect "Get cookies.txt" extension, dropped from Chrome store
* link to new Manifest V3-compatible open-source "Get cookies.txt LOCALLY" extension.

Fixes #31465.
2023-03-07 15:49:31 +00:00
dirkf
0402710227 [jsinterp] Fix regexp parsing and .replace[All] method
* For performance, make regexp object instantiation lazy
* Other small performance improvements
2023-03-07 01:24:21 +00:00
pukkandan
3e92c60fcd [jsinterp] Handle Date at epoch 0
See yt-dlp/yt_dlp#6400
2023-03-03 15:02:15 +00:00
pukkandan
3da17834a4 [Youtube] Construct dash formats with range query
See yt-dlp/yt_dlp#6369
2023-03-03 15:02:15 +00:00
dirkf
f7ce98a21e [YouTube] Support @owner format in uploader_id etc
* implement https://github.com/ytdl-org/youtube-dl/issues/31530#issuecomment-1435734719
* update affected tests
* misc clean-ups
2023-02-24 12:22:16 +00:00
dirkf
e67e52a8f8 [test] Support test-case with volatile ID (eg live show)
Signalled by regexp ID value, eg: `'id': r're:[\da-zA-Z_-]{8,}'`
2023-02-24 12:22:16 +00:00
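
Sketch of how such an expectation can be checked (expect_value is an illustrative name, not the actual harness code):

    import re

    def expect_value(got, expected):
        # 're:'-prefixed expected values are treated as regexps
        if isinstance(expected, str) and expected.startswith('re:'):
            assert re.match(expected[len('re:'):], got), (
                '%r does not match %r' % (got, expected))
        else:
            assert got == expected

    expect_value('dQw4w9WgXcQ', r're:[\da-zA-Z_-]{8,}')
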
pukkandan
1d3751c3fe Escape URLs in sanitized_Request, not sanitize_url
d2558234cf5dd12d6896eed5427b7dcdb3ab7b5a added escaping of URLs while sanitizing. However, sanitize_url may not always receive an actual URL. Eg: when using youtube-dl "search query" --default-search ytsearch, the search query gets escaped to search%20query before being prefixed with ytsearch:, which is not the intended behaviour. So the escaping is moved to sanitized_Request instead. 2023-02-20 20:27:25 +00:00
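
Why early escaping broke --default-search (illustration with the standard quoting function):

    try:
        from urllib.parse import quote  # Python 3
    except ImportError:
        from urllib import quote  # Python 2

    quote('search query')  # 'search%20query'
    # escaping before the 'ytsearch:' prefix is applied corrupts the search
    # term, so the escaping now happens in sanitized_Request instead
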
df
6067451e43 [Vimeo] Fix e19ec52 for tween-age Pythons
* a check in older Pythons (2.7 and earlier, and the 3.3/3.4 series) caused "sre_constants.error: nothing to repeat"
* satisfy the check by avoiding nested qualifiers that can match empty string

Resolves #31597
2023-02-20 01:41:46 +00:00
dirkf
57802e632f [jsinterp] Fix dict comprehension for Py2.6
Resolves #31600
2023-02-19 13:48:58 +00:00
dirkf
2dd6c6edd8
[YouTube] Avoid crash if uploader_id extraction fails
See #31530.
2023-02-17 11:16:54 +00:00
dirkf
dd9aa74bee [test] Avoid name TestIE which causes a pytest warning
See: 060ac76257
2023-02-14 16:36:40 +00:00
dirkf
42b098dd79 [InfoExtractor] Handle unquoted values in OpenGraph searches 2023-02-14 02:53:16 +00:00
fonkap
6f8c2635a5 [StreamsbIE] Add extractor for streamsb.com (viewsb.com) (#31517)
* Add extractor for streamsb.com (viewsb.com)

* make data url using app.js version

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-13 03:54:51 +00:00
fonkap
de48105dd8 [KommunetvIE] Add extractor for kommunetv.no (#31516)
* Add extractor for kommunetv.no
* Using utils.update_url instead of regex

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-13 03:54:51 +00:00
fonkap
822f19f05d [FileMoonIE] Add extractor for filemoon.sx (#31515)
---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-13 03:54:51 +00:00
teddy171
33db85c571 [feat]: Add support for external downloader aria2p (#31500)
* feat: add class Aria2pFD

* feat: create call_downloader function

* feat: a colorful download interface to aria2pFD

* feat: change value name

* Apply suggestions from code review

Co-authored-by: dirkf <fieldhouse@gmx.net>

* Typo in suggestion

* fix: remove unused value

* fix: apply `not` to the return value (0 is normal); add total_seconds to download.eta (timedelta object); add waiting status when hooking progress

* fix: remove unused method ..utils.format_bytes

* fix: be up to flake8

* fix: be up to flake8

* Apply suggestions from code review

* [feat] test external downloader aria2p

* [feat] test external downloader aria2p

* [fix] test_external_downloader.py

* Apply suggestions from code review

Co-authored-by: dirkf <fieldhouse@gmx.net>

* Apply suggestions from code review

Co-authored-by: dirkf <fieldhouse@gmx.net>

* Update test/test_external_downloader.py

Co-authored-by: dirkf <fieldhouse@gmx.net>

* Update test/test_external_downloader.py

Co-authored-by: dirkf <fieldhouse@gmx.net>

* Update youtube_dl/downloader/external.py

Co-authored-by: dirkf <fieldhouse@gmx.net>

* refactoring code and fix bugs

* Apply suggestions from code review

* Rename test_external_downloader.py to test_downloader_external.py

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-13 03:54:51 +00:00
Valentin Metz
f33923cba7 [rbgtum] Add new extractor (#31305)
* [rbgtum] Add new extractor

* Small update, force CI

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-13 03:54:51 +00:00
dirkf
e8198c517b [YouTube] Fix tests 2023-02-13 03:54:51 +00:00
dirkf
bafb6dec72 [YouTube] Refresh compat/utils usage
* import parse_qs()
* import parse_qs in lazy_extractors (clears old TODO)
* clean up old compiled lazy_extractors for Py2
* use update_url()
2023-02-13 03:54:51 +00:00
dirkf
4e04f10499 [compat] Update test_compat
[skip ci]
2023-02-13 03:54:51 +00:00
dirkf
90c9f789d9 [utils] Add parse_qs, update_url
[skip ci]
2023-02-13 03:54:51 +00:00
dirkf
249f2b6316 [compat] Systematise compat_ naming
[skip ci]
2023-02-13 03:54:51 +00:00
dirkf
d6b14ba316 [test] Fix TestAgeRestriction
* age restriction may cause DownloadError
* update obsolete test URLs
[skip ci]
2023-02-13 03:54:51 +00:00
dirkf
30e986b834 [YouTube] Add signatureTimestamp for age-gate bypass 2023-02-13 03:54:51 +00:00
dirkf
58988c1421 [YouTube] Bypass age-gating for certain restricted videos
* Use TVHTML5_SIMPLY_EMBEDDED_PLAYER client

* Also add and fix tests

* Introduce and use new utility function `update_url()`
2023-02-13 03:54:51 +00:00
dirkf
e19ec52322 [Vimeo] Support /user{video_id}/{slug} URL format 2023-02-12 22:16:00 +00:00
dirkf
f2f90887ca [Vimeo] Fix Unable to extract info section redux
* as reported in yt-dlp/yt-dlp#6149
* also allow newline in target JSON object
2023-02-12 22:16:00 +00:00
dirkf
cd987e6fca [jsinterp] Nits 2023-02-12 22:16:00 +00:00
dirkf
d947ffe8e3 [IGN] Overhaul extractor to avoid URL redirection loop
Consequently/also:
* centralise video data extraction
* detect 404 and 503 expected errors
* handle the test video in IGNVideo
* handle two additional page formats for the tests in IGNArticle
2023-02-12 22:16:00 +00:00
dirkf
384f632e8a
[ITV] Overhaul ITV extractor (#30266)
* support ITVX URLs (thanks Vangelis66)
* support legacy ITV Hub URLs
* include extraction fix 4c57dd2 from sleaux-meaux 3 May 2021
* include extraction fix 6fbcc16, fix by staubichsauger & pukkandan
* work-around duration parsing pending fix to utils.parse_duration
* apply default vanilla UA for pages and media to avoid site blocking
* also detect and report `Episode not found` instead of generic 404
* rework ITVBTCCIE with geo-block detection, best effort geo-restriction handling, news article support
* fix tests
2023-02-03 21:10:07 +00:00
dirkf
9d17948b5a
[myvideoge] Add new extractor (#31360)
NB: download tests are blocked on CI servers

Co-authored-by: Alfonso Solbes <fonk666@gmail.com>
2023-02-02 23:25:44 +00:00
afterdelight
f316f5d4e3
[xhamster] add support for new domain xhvid.com (#31370) 2023-02-02 23:20:14 +00:00
dirkf
bc6f94e459
[FIFA] Back-port extractor from yt-dlp (#31385) 2023-02-02 23:19:03 +00:00
Epsilonator
be3392a0d4
[Blerp] Add new extractor (#31398)
Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-02 17:33:09 +00:00
zhangeric-15
6d829d8119
[YouTube] Fix not finding videos listed under a channel's "shorts" subpage. (#31409)
Resolves #31336

Co-authored-by: Jouni Järvinen <rautamiekka@users.noreply.github.com>
Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-02 17:26:31 +00:00
Ruowang Sun
98b0cf1cd0
[Callin] Add new extractor (#31414)
Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-02 17:21:05 +00:00
Leon Etienne
e9611a2a36
[pr0gramm] implement InfoExtractor, Resolves #31433 (#31434)
* [pr0gramm] implement infoextractor

* [pr0gramm] remove misplaced comment, uncapture regex-group

* [pr0gramm]: specify utf-8 coding

* [pr0gramm]: add trailing comma to lists for maintainability

* [pr0gramm]: ie only sets upload_date attribute

* [pr0gramm]: add video_id to title

* [pr0gramm]: more forgiving _valid_url regex

* [pr0gramm]: add uploader to title, if set

* Discriminate URL pattern

---------

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-02 17:13:39 +00:00
JChris246
807e593a32
[cammodels] fix and improve extractor (#31453)
Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-02 17:12:36 +00:00
Rodrigo Dias
297fbff23b
[doc] Fixed typo appearing to promise an example (#31489)
Resolves #31425 

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-02 17:10:09 +00:00
Brian Marks
37cbdfa0e7
[americastestkitchen] Add support for downloading entire series (#31493)
Also
* support new sites and URL patterns
* back-port from yt-dlp

Co-authored-by: dirkf <fieldhouse@gmx.net>
2023-02-02 16:58:21 +00:00
dirkf
295736c9cb [jsinterp] Improve parsing
* support subset `... else if ...`
* support `while`
* add `RegExp` class
* generalise `new` support
* limited more debug strings
* matching test changes
2023-02-02 16:31:49 +00:00
pukkandan
14ef89a8da Support if statements
Fix for yt-dlp/yt_dlp#6131
Closes #31509
2023-02-02 13:12:46 +00:00
dirkf
195f22f679
[generic] Improve KVS (etc) extraction 2022-11-13 15:09:29 +00:00
dirkf
fc2beab0e7
[generic] Improve KVS (etc) extraction
* detect kt_player('kt_player', 'https://.../kt_player.swf?v=5...
* detect age limit if 18 USC 2257 is mentioned
* test with shooshtime.com

Partially resolves #31332.
2022-11-13 14:59:30 +00:00
FraFraFra-LongD
1a4fbe8462
Added ThisVid.com support (#29187)
* add ThisVidIE, ThisVidMemberIE, ThisVidPlaylistIE
* redirect embed to main page for more metadata
* use KVS extraction newly added to GenericIE and remove duplicate tests
* also add MrDeepFake etc compat to GenericIE
(closes #22390)

Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-11-13 13:22:04 +00:00
dirkf
c2f9be3e63
[generic] Add KVS player extraction 2022-11-12 11:55:05 +00:00
dirkf
604762a9f8
[common:jwplayer] Improve jwplayer extraction and parsing (#31000)
* don't crash parser if jwplayer_data is invalid (empty, or no formats)
* use `label` in `sources[n]` as `format_id`
* relax `jwplayer().setup(...)` RE (also rework PR #27274 enhancement)
* detect more manifest formats in _parse_jwplayer_formats() (from PR #29596)
* improve metadata extraction (from PR #25433)
* remember URLs in a set
* use parse_resolution() in format
* extract filesize in format (from yt-dlp)

Co-authored-by: kikuyan <kikuyan@users.noreply.github.com>
Co-authored-by: martin54 <martin54@users.noreply.github.com>
2022-11-11 00:49:13 +00:00
Moises Lima
47e70fff8b
[PeekVids, PlayVids] Add new extractor (#29765)
* Merge back-port from yt-dlp
* Merge features from PR #29798
* Improve metadata extraction

Co-authored-by: dirkf <fieldhouse@gmx.net>
Co-authored by: AXDOOMER
2022-11-09 20:26:30 +00:00
dirkf
de39d1281c
[extractor/ceskatelevize] Back-port extractor from yt-dlp, etc (#30713)
* back-port extractor, removing CeskaTelevizePoradyIE
* follow redirect URL
* support liveBroadcast and videobonusDetail in __NEXT__ data
* return single video for singleton playlist
* fix/add tests
2022-11-04 10:13:07 +00:00
Andrei Lebedev
27ed77aabb
[utils] Backport traverse_obj (etc) from yt-dlp (#31156)
* Backport traverse_obj and closely related function from yt-dlp (code by pukkandan)
* Backport LazyList, variadic(), try_call (code by pukkandan)
* Recast using yt-dlp's newer traverse_obj() implementation and tests (code by grub4k)
* Add tests for Unicode case folding support matching Py3.5+ (requires f102e3d)
* Improve/add tests for variadic, try_call, join_nonempty

Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-11-03 10:09:37 +00:00
dirkf
c4b19a8816
[compat] Work around in case folding for narrow Python build
Resolves #31324.
2022-11-02 11:56:26 +00:00
dirkf
087ddc2371
[compat] Add test for compat_casefold() 2022-11-01 22:47:02 +00:00
dirkf
65ccb0dd4e
[compat] Add test for compat_casefold() 2022-11-01 21:33:39 +00:00
dirkf
a874871801
[compat] Reformat casefold.py for easier updating 2022-11-01 19:25:59 +00:00
dirkf
b7c25959f0
[compat] Unify unicode/str compat and move up 2022-11-01 12:40:23 +00:00
dirkf
f102e3dc4e
[compat] Add compat_casefold and compat_re_Match, for traverse_obj() port 2022-10-31 21:27:14 +00:00
dirkf
a19855f0f5
[compat] Add Python 2 Unicode casefold using a trivial wrapper around icu/CaseFolding.txt 2022-10-31 21:18:36 +00:00
Xie Yanbo
ce5d36486e
[netease] Support urls shared from mobile app (#31304)
Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-10-30 11:48:44 +00:00
Xie Yanbo
d25cf62086
[netease] Improve error handling (#31303)
* add warnings for users outside of China
* skip empty song urls

Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-10-30 11:46:46 +00:00
dirkf
502cefa41f
[Vimeo] Update variable name in hydration JSON pattern
Fixes #31311
2022-10-27 14:33:00 +00:00
dirkf
0faa45d6c0
[BongaCams] Support new .net domain
Resolves #31262.
2022-10-20 11:06:44 +00:00
ache
447edc48e6
Fix ADN extractor (#31275)
* Rename Anime Digital Network to Animation Digital Network, animationdigitalnetwork.fr
* Update the test to an available video
* Update the decoding key of subtitles
* Keep the support of old URLs
* Add a test to match the old URL
* Reduce redundancy of the URL name
* Fix md5 ^^"
* Fix undefined _BASE
* Process HTTP error text (eg geo-block) correctly and uniformly in Py3, Py2
* Skip test for CI since geo-blocked

Signed-off-by: ache <ache@ache.one>
Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-10-18 16:06:27 +01:00
dirkf
ee8560d01e
[ManyVids] Support new single-page app structure 2022-10-13 02:42:49 +00:00
dirkf
7135277fec
[ManyVids] Support new single-page app structure
See https://github.com/yt-dlp/yt-dlp/issues/5210#issuecomment-1276919962.
2022-10-13 01:59:01 +00:00
dirkf
7bbd5b13d4
[Motherless] Pull from yt-dlp, etc
* use username field
* loosen regexes
* warn on page count 0 in group
* avoid reloading group page 1
Closes #29626
2022-10-12 01:09:55 +01:00
Xie Yanbo
c91cbf6072
[netease] Get netease music download url through player api (#31235)
* remove unplayable song from test
* compatible with python 2
* using standard User_Agent, fix imports
* use hash instead of long description
* fix lint
* fix hash
2022-10-11 13:55:09 +01:00
dirkf
11b284c81f
[Common:JWPlayer] Fix x1000 scaling error
See https://github.com/yt-dlp/yt-dlp/issues/5106#issuecomment-1264625161
2022-10-11 12:36:44 +00:00
dirkf
c94a459a24
[utils] Sanitize look-alike Unicode glyphs in non-ID filename fields when --restrict-filenames
Implements https://github.com/ytdl-org/youtube-dl/issues/31216#issuecomment-1236102822, which has a test.
2022-10-11 12:18:12 +00:00
dirkf
6e2626f092
[JSInterp] Improve separation logic
Based on 0468a3b325
2022-10-11 05:58:10 +01:00
dirkf
c282e5f8d7 [ZDF] Overhaul ZDF extractors
* pull some yt-dlp changes into ZDFBaseIE._extract_format()
* add test cases from yt-dlp to ZDFIE
* fix crash in ZDFIE._extract_mobile() when object had no `formitaeten`
* improve title extraction in ZDFChannelIE (remove trailing station ident)
* avoid extracting non-video playlist items (fixes #31149)
2022-10-11 00:05:17 +01:00
dirkf
2ced5a7912 [test] Implement string "lambda x: condition(x)" as an expected value
Semantics equivalent to `assert condition(got)`
2022-10-11 00:05:17 +01:00
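
Sketch of the stated semantics, treating the string as trusted test data (check_expected is an illustrative name):

    def check_expected(got, expected):
        # "lambda x: condition(x)" strings are evaluated and applied
        if isinstance(expected, str) and expected.startswith('lambda '):
            assert eval(expected)(got)  # test-only; expected values are trusted
        else:
            assert got == expected

    check_expected(1024, 'lambda x: x > 1000')
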
Xiyue
82e4eca711
[motherless] Fixed the broken uploader_id in the extractor (#31243)
* Fixed the broken uploader_id in the extractor.
* Make uploader_id RE looser
* Fix uploader_id in test Motherless_3
* Fix group pagination
* # coding: utf-8

Co-authored-by: Andy Xuming <xuminic@gmail.com>
Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-10-10 23:52:48 +01:00
dirkf
1b1442887e
[manyvids] Improve extraction (#31172)
* extract all formats from page
* extract description, uploader, views, likes
* downrate previews
* fix tests
* use txt_or_none()
2022-10-10 19:26:32 +01:00
dirkf
22127b271c
[NRK] Remove explicit Accept-Encoding header that invites Brotli
Fixes #31285
2022-10-10 17:41:40 +00:00
coletdjnz
d35557a75d [Telegraaf] Use mobile GraphQL API endpoint
Workaround for Cloudflare 403
Fixes https://github.com/yt-dlp/yt-dlp/issues/5000
Authored by: coletdjnz
2022-10-04 11:43:08 +01:00
dirkf
9493ffdb8b [test] Use windows-2019 for tests
(At least for now) resolves #31249
2022-10-04 11:31:29 +01:00
pukkandan
7009bb9f31 [jsinterp] Workaround operator associativity issue
* temporary fix for player 5a3b6271 [1]

1. https://github.com/yt-dlp/yt-dlp/issues/4635#issuecomment-1235384480
2022-09-03 00:53:56 +01:00
dirkf
218c423bc0 [cache] Add cache validation by program version, based on yt-dlp 2022-09-01 13:28:30 +01:00
dirkf
55c823634d [jsinterp] Handle new YT players 113ca41c, c57c113c
* add NaN
* allow any white-space character for `after_op`
* align with yt-dlp f26af78a8ac11d9d617ed31ea5282cfaa5bcbcfa (charcodeAt and bitwise overflow)
* allow escaping in regex, fixing player c57c113c
2022-09-01 10:57:12 +01:00
dirkf
4050e10a4c [options] Document that postprocessing is not forced by --postprocessor-args
Resolves #30307
2022-08-29 13:02:17 +01:00
dirkf
ed5c44e7b7 [compat] Replace deficient ChainMap class in Py3.3 and earlier
* fix version check
2022-08-26 12:22:01 +01:00
dirkf
0f6422590e [compat] Replace deficient ChainMap class in Py3.3 and earlier 2022-08-26 10:24:42 +01:00
dirkf
4c6fba3765 [jsinterp] Improve try/catch/finally support 2022-08-26 08:51:17 +01:00
dirkf
d619dd712f [jsinterp] Fix bug in operator precedence
* from 164b03c486
* added tests
2022-08-25 12:16:10 +01:00
dirkf
573b13410e [YouTube] Improve error check for n-sig processing 2022-08-25 12:14:59 +01:00
dirkf
66e58dccc2 [core] Avoid processing empty format list after removing bad formats
* also ensure compat encoding of error strings
2022-08-21 00:45:06 +01:00
dirkf
556862bc91 [utils] Ensure RFC3986 encoding result is unicode 2022-08-21 00:45:06 +01:00
gudata
a8d5316aaf
[infoq] Avoid crash if the page has no mp3Form
* proposed fix for issue #31131, aligns with yt-dlp

Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-08-19 21:00:21 +01:00
dirkf
fd3f3bebd0 [uktvplay] Support domain without .uktv 2022-08-19 19:11:08 +01:00
dirkf
46b8ae2f52 [jsinterp] Clean up and pull yt-dlp style
* add compat_re_Pattern
* improve compat_collections_chain_map
* use class JS_Undefined
* remove unused code
2022-08-19 15:34:33 +01:00
dirkf
538ec65ba7
[jsinterp] Handle regexp literals and throw/catch execution (#31182)
* based on f6ca640b12, thanks pukkandan
* adds parse support for regexp flags
2022-08-19 11:45:04 +01:00
dirkf
b0a60ce203
[jsinterp] Improve JS language support (#31175)
* operator ??
* operator ?.
* operator **
* accurate operator functions
* `undefined` handling
* object literals {a: 1, "b": expr}
* more tests for weird JS comparisons: see https://github.com/ytdl-org/youtube-dl/issues/31173#issuecomment-1217854397.
2022-08-17 14:22:02 +01:00
dirkf
e52e8b8111 [postprocessor] Don't replace existing value with null metadata parsed from title 2022-08-15 16:45:04 +01:00
dirkf
d231b56717
[jsinterp] Overhaul JSInterp to handle new YT players 4c3f79c5, 324f67b9 (#31170)
* back-port from yt-dlp 8f53dc44a0cc1c2d98c35740b9293462c080f5d0, thanks pukkandan
* also support void, improve <</>> precedence, improve expressions in comma-list
* add more tests
2022-08-14 18:45:45 +01:00
dirkf
e6a836d54c [core] Make --max-downloads ... stop immediately on reaching the limit
Based on and closes #26638.
2022-08-10 15:37:59 +01:00
dirkf
deee741fb1
[test, etc] Improve download test logs; also clean up some new flake8 issues (#31153)
* [test] Identify testcase errors better
* [test] Identify download errors better
* [extractor/minds] Linter
* [extractor/aes] Linter
2022-08-09 21:05:00 +01:00
Wes
adb5294177
[aenetworks] Update _THEPLATFORM_KEY and _THEPLATFORM_SECRET (#29749)
Fixes ytdl-org/youtube-dl#29300
2022-07-30 02:10:00 +01:00
Kyraminol Endyeran
5f5c127ece
[VVVVID] Support video/dash types (#31060)
Resolves #31030.
2022-07-12 00:35:40 +01:00
dirkf
090acd58c1
[options] Improve be35e53 (--match-/reject-title parameter value)
Resolves #31064.
2022-07-03 20:05:21 +01:00
dirkf
a03b9775d5 [Mediaset] Support player version number in URL pattern
Ref: https://github.com/yt-dlp/yt-dlp/issues/4141
2022-06-26 14:24:06 +01:00
dirkf
8a158a936c [NHK] Use new API URL 2022-06-15 18:28:19 +01:00
dirkf
11665dd236 [test] Fix linter for 3aa94d7945 2022-06-15 18:28:19 +01:00
dirkf
cc179df346 [XHamster] Support xhday.com alias, extract uploader_id
* support xhday.com alias for xhamster.com (resolves #31023)
  Authored by: dirkf
* extract `uploader_id`:
  from 908b56eaf7
  (PR https://github.com/yt-dlp/yt-dlp/pull/844)
  Authored by: octotherp
2022-06-12 14:10:38 +01:00
pukkandan
0700fde640 [utils, etc] Kill child processes when yt-dl is killed
* derived from PR #26592, closes #26592

Authored by: Unrud
2022-06-10 19:57:46 +01:00
dirkf
811c480f7b [YouTube] Support JSON3 subtitle format
* subtitle tests updated to match
2022-06-09 15:25:23 +01:00
dirkf
3aa94d7945 [test] Fix workable subtitle tests (except YT) and mark others as skip, broken
* broken tests need to be fixed when fixing the respective IE
2022-06-08 23:11:33 +01:00
dirkf
ef044be34b [test] Skip not _WORKING IE in subtitle tests; use unittest.skipTest throughout 2022-06-08 15:52:21 +01:00
dirkf
530f4582d0 [HRFernsehen] Back-port new extractor from yt-dlp
Closes #26445, where this was originally proposed.
2022-06-06 19:29:48 +01:00
pukkandan
1baa0f5f66 [utils] Escape URL while sanitizing
Closes #31008, #yt-dlp/263

While this fixes the issue in question, it does not try to address the root-cause of the problem
Refer: 915f911e365736227e134ad654601443dbfd7ccb, f5fa042c82300218a2d07b95dd6b9c0756745db3
2022-06-06 16:03:04 +01:00
LewdyCoder
9aa8e5340f
[Readme] Clarified extractor naming (#29799)
* Exported usable extractors must be named `xxxxIE`

Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-05-30 02:50:50 +01:00
dirkf
04fd3289d3 [YouPorn] Improve upload_date extraction
See https://github.com/yt-dlp/yt-dlp/issues/2701#issuecomment-1034341883
2022-05-28 13:54:32 +01:00
dirkf
52c3751df7 [utils] Enable ALPN in HTTPS to satisfy broken servers
See https://github.com/yt-dlp/yt-dlp/issues/3878
2022-05-28 13:52:51 +01:00
dirkf
187a48aee2 [YouTube] Handle player c5a4daa1 with indirect n-function definition
* resolves #30976
2022-05-24 15:43:56 +01:00
Jacob Chapman
be35e5343a Update options.py 2022-05-20 05:25:54 +01:00
dirkf
c3deca86ae
[wat.tv] Add version pver to metadata API call
Resolves #30959.
2022-05-19 17:41:48 +00:00
dirkf
c7965b9fc2
[NHK] Support alphabetic characters in 7-char NhkVod IDs (#29682) 2022-05-09 18:54:41 +01:00
dirkf
e988fa4523 [doc] Clarify test naming 2022-04-29 16:56:00 +01:00
dirkf
e27d8d819f
[streamcz] Remove empty '{}'.format() for Py2.6
Use `'-'.join()` here, or `{0}`, ..., in general.
2022-04-29 13:36:02 +01:00
Árni Dagur
ebc627847c
[KTH] Add new extractor for KTH play (#30885)
* Implement extractor for KTH play
* Make KTH Play url regex more relaxed
2022-04-28 10:18:10 +01:00
dirkf
a0068bd6be [Youtube] Fix "n" descrambling for player fae06c11
Resolves #30856.
2022-04-15 16:07:09 +01:00
dirkf
b764dbe773
Disable blank issues 2022-04-10 05:49:09 +01:00
nixxo
871645a4a4 [RAI] Fix extraction of http formats
From https://github.com/yt-dlp/yt-dlp/pull/3272
Closes https://github.com/yt-dlp/yt-dlp/issues/3270
Authored by: nixxo
2022-04-05 15:21:59 +01:00
nixxo
1f50a07771 [RAI] Extend formats with direct http mp4 link (PR #27990)
* initial support for creating direct mp4 link
* improved regexes and info extraction
* added "connection: close" to request headers
* updated to https://github.com/yt-dlp/yt-dlp/pull/208
2022-04-05 15:21:59 +01:00
nixxo
9e5ca66f16 [RAI] Added checks for DRM protected content (PR #27657)
reviewed by pukkandan (https://github.com/yt-dlp/yt-dlp/pull/150)
2022-04-05 15:21:59 +01:00
lihan7
17d295a1ec [extractor/bilibili] Fix 403 error when downloading "/audio/auxxxxx" paths 2022-04-01 00:46:34 +01:00
dirkf
49c5293014 Ignore --external-downloader-args if --external-downloader was rejected
... and generate warning
2022-03-25 14:47:26 +00:00
df
6508688e88 Make default upload_/release_date a compat_str
Ensures download tests pass in Python 2 as well as 3; also
add YoutubeDL tests for timestamp -> upload_date etc.
2022-02-26 10:29:42 +00:00
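
The conversion being tested, sketched (compat_str is youtube-dl's unicode/str alias):

    import datetime

    def timestamp_to_upload_date(ts):
        # YYYYMMDD string; kept as text (compat_str) so Py2 and Py3 agree
        return datetime.datetime.utcfromtimestamp(ts).strftime('%Y%m%d')

    timestamp_to_upload_date(1645833600)  # '20220226'
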
dirkf
4194d253c0 Avoid skipping ID when unlisted_hash is numeric
Pattern needed a non-greedy match; also replaced a redundant test with one for this (issue 29690)
2022-02-26 10:29:42 +00:00
dirkf
f8e543c906 [Alsace20TV] Add new extractors Alsace20TVIE, Alsace20TVEmbedIE 2022-02-24 18:43:47 +00:00
dirkf
c4d1738316 [CPAC] Add extractor for Canadian Parliament
CPACIE: single episode
CPACPlaylistIE: playlists and searches
2022-02-24 18:27:57 +00:00
dirkf
1f13ccfd7f
Fixed groups() call on potentially empty regex search object (#30676)
* Fixed groups() call on potentially empty regex search object.
- https://github.com/ytdl-org/youtube-dl/issues/30521

* minimising lines changed

Co-authored-by: yayorbitgum <50963144+yayorbitgum@users.noreply.github.com>
2022-02-24 18:26:58 +00:00
marieell
923292ba64 [aliexpress] Fix test case 2022-02-24 13:44:52 +00:00
Lesmiscore (Naoya Ozaki)
782bfd26db
[bigo] add support for bigo.tv (#30635)
* [bigo] add support for bigo.tv

* [bigo] prepend "Bigo says"

* title fallback

* add error for invalid json data
2022-02-24 13:34:32 +00:00
Vladimir Stavrinov
3472227074
[rutv] fix vbr for empty string value (#30623)
* [rutv] use str_to_int() (thx dirkf)
2022-02-14 17:54:31 +00:00
Petr Vaněk
bf23bc0489 add missing __future__ import unicode_literals 2022-02-14 07:07:05 +00:00
Petr Vaněk
85bf26c1d0 resolve problem with unpacking operator for <py3.5 2022-02-14 07:07:05 +00:00
Petr Vaněk
d8adca1b66 [streamcz] test fixes and one additional test 2022-02-14 07:07:05 +00:00
Petr Vaněk
d02064218b do not use f-strings 2022-02-14 07:07:05 +00:00
Petr Vaněk
b1297308fb avoid traverse_obj function 2022-02-14 07:07:05 +00:00
Petr Vaněk
8088ce036a revert: use _match_valid_url function 2022-02-14 07:07:05 +00:00
Petr Vaněk
29f7bfc4d7 [streamcz] cherry-pick from yt-dlp
Cherry-picked-from: 7d449fff5346 ("[streamcz] Fix extractor (#1616)")
2022-02-14 07:07:05 +00:00
dirkf
74f8cc48af [extractor/videa] Back-port from yt-dlp PRs 463+1028
Authored by: nyuszika7h
2022-02-11 12:43:26 +00:00
kikuyan
8ff961d10f [extractor/videa] fix extraction in Py2
Fixes #30416
2022-02-11 12:43:26 +00:00
dirkf
266b6ef185 [BBC] Also allow PID with leading 'l' (live?) 2022-02-09 21:21:59 +00:00
dirkf
825d3426c5
[Nuvid] Use site JSON for video details (#29332)
Back-port yt-dlp PR 1022 onto PR #17890 and update

Video details aren't in the original HTML now but populated by async JS

Co-authored by: u-spec-png
Co-authored by: vidaritos
2022-02-09 02:40:34 +00:00
dirkf
47b0c8697a [ARD] Back-port subtitle extraction from yt-dlp PR 2409
Authored by: fstirlitz
Fixes #30543
Closes #17766 (thanks ngdio)
2022-02-07 13:47:38 +00:00
Seonghyeon Cho
734dfbb4e3 Remove redundant assigning format_id 2022-02-05 03:04:35 +00:00
df
ddc080a562 Add ArteTVCategoryIE to support category playlists 2022-02-05 03:02:56 +00:00
Abdullah Ibn Fulan
16a3fe2ba6 Updated Album URL regex
Mistakenly forgot to edit a line in last commit.

Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-02-05 02:53:23 +00:00
Abdullah Ibn Fulan
c820a284a2 [extractor/audiomack] Updated URL regex, corrected invalid testcases, fixed bug
Co-authored-by: dirkf <fieldhouse@gmx.net>
2022-02-05 02:53:23 +00:00
dirkf
58babe9af7 Support __INITIAL_DATA__ with stringified JSON
Add test and fix test for bbcthreeConfig
2022-02-05 02:51:46 +00:00
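
A sketch of the double-encoded case this adds support for: some pages embed window.__INITIAL_DATA__ as a JSON string rather than a bare object, so the payload may need decoding twice (the sample value is illustrative).

    import json

    raw = '"{\\"title\\": \\"Example\\"}"'  # a JSON object serialized inside a JSON string
    data = json.loads(raw)
    if isinstance(data, str):               # stringified: decode once more
        data = json.loads(data)
    print(data['title'])                    # Example
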
df
6d4932f023 Try for timestamp, description from window.__INITIAL_DATA__ pages 2022-02-05 02:51:46 +00:00
dirkf
92d73ef393 [niconico] Implement heartbeat for download 2022-02-05 02:47:21 +00:00
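
Conceptually, the heartbeat re-posts the session data on a timer so the server keeps the streaming session open while the download runs; a minimal sketch under assumed names (post_func, the 40-second interval and the stop-event shape are illustrative, not the extractor's exact code):

    import threading

    def start_heartbeat(post_func, url, data, interval=40):
        """Call post_func(url, data) every `interval` seconds until stopped."""
        stopped = threading.Event()

        def beat():
            while not stopped.wait(interval):
                post_func(url, data)  # keep the session alive; an error ends the loop

        t = threading.Thread(target=beat)
        t.daemon = True  # don't block interpreter exit
        t.start()
        return stopped   # call stopped.set() when the download completes
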
dirkf
91278f4b6b [niconico] Back-port extractor from yt-dlp
Add Nico search extractors, fix extraction
2022-02-05 02:47:21 +00:00
dirkf
73e1ab6125 [test:download] Only extract enough videos for playlist_mincount 2022-02-05 02:47:21 +00:00
dirkf
584715a803 [applepodcasts] Extract default thumbnail image 2022-02-05 02:32:45 +00:00
dirkf
e00b0eab1e [applepodcasts] Improve format extraction
Set acodec and vcodec, etc, to avoid breaking, eg, bestaudio
2022-02-05 02:32:45 +00:00
dirkf
005339d637 [applepodcasts] Support new AMP-ish page structure 2022-02-05 02:32:45 +00:00
Chris Rose
23ad6402a6 xvideos: Fix for #30271 2022-02-05 02:24:51 +00:00
dirkf
9642344965 Fix tests for working IEs; disable obsolete WDRMobile 2022-02-05 02:22:45 +00:00
dirkf
568c7005d5 Fix WDRMaus; extend URL matching for other Maus pages; improve ID extraction 2022-02-05 02:22:45 +00:00
dirkf
5cb4833f40 Update URPlayIE extractor for Next.js page format, with subtitles 2022-02-05 02:16:53 +00:00
dirkf
5197336de6 Support more deeply nested ptmd_path with test, update tests 2022-02-05 02:14:35 +00:00
dirkf
01824d275b Additional tweaks: allow any .ndr.de, simplify quote match 2022-02-05 02:12:44 +00:00
dirkf
39a98b09a2 Fix NDR, NJoy tests 2022-02-05 02:12:44 +00:00
dirkf
f0a05a55c2 NJoy: improve extraction of NDR id, description, etc with current page formats 2022-02-05 02:12:44 +00:00
dirkf
4186e81777 NDR: improve extraction of NDR id, description, etc with current page formats 2022-02-05 02:12:44 +00:00
dirkf
b494824286 Support Tele5 pages with Discovery Networks format instead of JWPlatform 2022-02-05 02:08:11 +00:00
dirkf
8248133e5e Back-port yt-dlp Viki extractor
From https://github.com/yt-dlp/yt-dlp/pull/2540
2022-02-04 15:49:12 +00:00
dirkf
27dbf6f0ab Return the item itself if playlist has one entry
Removes playlist spam from log
2022-02-04 14:28:50 +00:00
dirkf
61d791726f Find TV2DK Kaltura ID in Nuxt.js page format 2022-02-04 14:28:50 +00:00
pukkandan
0c0876f790 [youtube:search] Add tests 2022-02-04 11:09:18 +00:00
dirkf
7a497f1405 Rework 2c2c2bd with an actual Mix page and realistic playlist size
From 2c2c2bd348 (commitcomment-65953545)
2022-02-04 04:09:23 +00:00
dirkf
5add3f4373 Merge branch 'pukkandan-yt-searchurl' into yt-dl-master
Closes #27749
2022-02-04 03:50:32 +00:00
pukkandan
78ce962f4f [youtube] Support channel search
Code from cd684175ad
2022-02-03 01:02:58 +00:00
dirkf
41f0043983 Avoid crashing if n-sig decode fails 2022-02-02 14:25:03 +00:00
dirkf
34c06b16f5 Support Youtube Shorts URL format 2022-02-01 14:40:20 +00:00
dirkf
1e677567cd
[YouTube] Fix n-sig for player e06dea74 (#30582)
From yt-dl commit 48416bc
2022-02-01 14:39:03 +00:00
df
af9e72507e Implement n-param descrambling using JSInterp
Fixes #29326, closes #29790, closes #30004, closes #30024, closes #30052,
closes #30088, closes #30097, closes #30102, closes #30109, closes #30119,
closes #30125, closes #30128, closes #30162, closes #30173, closes #30186,
closes #30192, closes #30221, closes #30239, closes #30539, closes #30552.
2022-01-31 00:19:58 +00:00
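
The mechanism in miniature: extract the n-transform function from the player JS and evaluate it with youtube-dl's own JSInterpreter; the toy scrambler below stands in for YouTube's real, obfuscated function.

    from youtube_dl.jsinterp import JSInterpreter

    player_js = 'function descramble(n){var a=n.split("");a.reverse();return a.join("")}'
    descramble = JSInterpreter(player_js).extract_function('descramble')
    print(descramble(['abc123']))  # '321cba' -- the transformed n value
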
dirkf
6ca7b77696 Refactor JSInterpreter._separate
yt-dlp/yt-dlp/@06dfe0a, improve _MATCHING_PARENS
2022-01-30 00:05:54 +00:00
dirkf
9d142109f4 Back-port test_youtube_signature.py from yt-dlp and fix JSInterp accordingly 2022-01-30 00:05:54 +00:00
dirkf
1ca673bd98 Fix splice to handle float
Needed for new youtube js player f1ca6900
Add 57dbe8077f (diff-729b57caa8d006426f6a8960c061f519a8b6658682284015e069745af52ffb07)
2022-01-30 00:05:54 +00:00
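
The coercion issue, modelled outside the interpreter (a simplification that ignores negative indices and insertion): JS Array.prototype.splice truncates a fractional start index, while Python list slicing needs ints.

    def js_splice(arr, start, delete_count):
        start, delete_count = int(start), int(delete_count)  # JS truncates 2.0 -> 2
        removed = arr[start:start + delete_count]
        arr[start:start + delete_count] = []
        return removed

    a = [0, 1, 2, 3, 4]
    print(js_splice(a, 2.0, 2.0))  # [2, 3]
    print(a)                       # [0, 1, 4]
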
df
e1eae16b56 Handle default in switch better
Add a1fc7ca074
Thanks coletdjnz
2022-01-30 00:05:54 +00:00
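
The semantics being exercised, shown through the interpreter itself (a toy function in the style of the project's own switch tests): a matched case falls through until a break, and default runs when no case matches.

    from youtube_dl.jsinterp import JSInterpreter

    jsi = JSInterpreter('''
    function f(x){ switch(x){
      case 1: x += 1;
      case 2: x += 2; break;
      default: x = 0;
    } return x }
    ''')
    print(jsi.call_function('f', 1))  # 4 -- case 1 falls through into case 2
    print(jsi.call_function('f', 9))  # 0 -- no case matched, default ran
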
df
96f87aaa3b Back-port JS interpreter upgrade from yt-dlp PR #1437 2022-01-30 00:05:54 +00:00
df
5f5de51a49 Add compat_map/filter and use the former 2022-01-30 00:05:36 +00:00
df
39ca35e765 Fix test_youtube_flat_playlist_extraction 2022-01-29 20:00:21 +00:00
df
d76d59d99d Remove obsolete non-working test_youtube_toptracks 2022-01-29 20:00:21 +00:00
df
2c2c2bd348 Fix test_youtube_mix 2022-01-29 20:00:21 +00:00
df
46e0a729b2 Remove obsolete test_youtube_course 2022-01-29 20:00:21 +00:00
df
57044eaceb Fix test_youtube_playlist_noplaylist 2022-01-29 20:00:21 +00:00
pukkandan
a3373da70c
Merge branch 'UP/youtube-dl' into dl/YoutubeSearchURLIE 2022-01-30 01:07:28 +05:30
pukkandan
2c4cb134a9
Fix max_results 2022-01-30 00:54:22 +05:30
pukkandan
bfe72723d8
Use itertools.islice 2022-01-30 00:49:55 +05:30
pukkandan
ed99d68bdd
Add back YoutubeSearchURLIE 2022-01-30 00:41:47 +05:30
Sergey M․
5014bd67c2
release 2021.12.17 2021-12-17 01:49:07 +07:00
Sergey M․
e418823350
[ChangeLog] Actualize
[ci skip]
2021-12-17 01:43:16 +07:00
lanegramling
b5242da7d2
[youtube] Update signature function patterns (closes #30363) (#30366) 2021-12-17 01:42:17 +07:00
bopol
a803582717
[peertube] only call description endpoint if necessary (#29383) 2021-07-01 06:53:22 +00:00
Remita Amine
7fb9564420 [periscope] pass referer to HLS requests(closes #29419) 2021-06-28 20:08:39 +01:00
Aleri Kaisattera
379f52a495
[liveleak] Remove extractor (closes #17625, closes #24222) (#29331) 2021-06-21 04:23:50 +07:00
Sergey M․
cb668eb973
[pornhub] Add support for pornhubthbh7ap3u.onion 2021-06-21 04:08:15 +07:00
Sergey M․
751c9ae39a
[pornhub] Detect geo restriction 2021-06-21 03:33:43 +07:00
Sergey M․
da32828208
[pornhub] Dismiss tbr extracted from download URLs (closes #28927)
No longer reliable
2021-06-21 03:22:37 +07:00
Sergey M․
2ccee8db74
[curiositystream:collection] Extend _VALID_URL (closes #26326, closes #29117) 2021-06-21 01:54:52 +07:00
Sergey M․
47f2f2fbe9
[youtube] Make get_video_info processing more robust (closes #29333) 2021-06-21 01:35:21 +07:00
Sergey M․
03ab02730f
[youtube] Workaround for get_video_info request (refs #29333)
See https://github.com/ytdl-org/youtube-dl/issues/29333#issuecomment-864049544
2021-06-21 01:34:27 +07:00
Tianyi Shi
4c77a2e538
[bilibili] Strip uploader name (#29202) 2021-06-21 01:03:21 +07:00
bopol
4131703001
[youtube] Update invidious instance list (#29281) 2021-06-21 00:42:09 +07:00
Logan B
cc21aebe90
[umg:de] Update GraphQL API URL (#29304)
Previous one no longer resolves

Co-authored-by: Sergey M. <dstftw@gmail.com>
2021-06-21 00:41:14 +07:00
Sergey M․
57b9a4b4c6
[nrk] Switch psapi URL to https (closes #29344)
Catalog calls no longer work via http
2021-06-21 00:36:28 +07:00
kikuyan
3a7ef27cf3
[postprocessor/ffmpeg] Show ffmpeg output on error (refs #22680) (#29336) 2021-06-20 23:58:19 +07:00
kikuyan
a7f61feab2
[egghead] Add support for app.egghead.io (closes #28404) (#29303)
Co-authored-by: Sergey M. <dstftw@gmail.com>
2021-06-17 10:34:33 +07:00
kikuyan
8fe5d54eb7
[appleconnect] Fix extraction (#29208) 2021-06-17 04:12:13 +07:00
kikuyan
d156bc8d59
[orf:tvthek] Add support for MPD formats (closes #28672) (#29236) 2021-06-17 04:02:06 +07:00
Sergey M
c2350cac24
[README.md] Update MSVC 2010 redist URL (closes #29222) 2021-06-06 05:32:27 +07:00
Sergey M․
b224cf39d5
release 2021.06.06 2021-06-06 01:38:22 +07:00
Sergey M․
5f85eb820c
[ChangeLog] Actualize
[ci skip]
2021-06-06 01:32:15 +07:00
Sergey M․
bb7ac1ed66
[facebook] Improve login required detection 2021-06-06 01:16:43 +07:00
Sergey M․
fdf91c52a8
[youporn] Fix formats and view count extraction (closes #29216) 2021-06-06 00:11:09 +07:00
Sergey M․
943070af4a
[orf:tvthek] Fix thumbnails extraction (closes #29217) 2021-06-05 23:42:25 +07:00
Remita Amine
82f3993ba3 [formula1] fix extraction(closes #29206) 2021-06-04 17:51:44 +01:00
Sergey M․
d495292852
[ard] Relax _VALID_URL and fix video ids (closes #22724, closes #29091) 2021-05-30 06:14:59 +07:00
Sergey M․
2ee6c7f110
[ustream] Detect https embeds (closes #29133) 2021-05-30 03:43:59 +07:00
Sergey M․
6511b8e8d7
[ted] Prefer own formats over external sources (closes #29142) 2021-05-30 03:05:22 +07:00
Sergey M․
f3cd1d9cec
[twitch:clips] Improve extraction (closes #29149) 2021-05-30 01:49:51 +07:00
phlip
e13a01061d
[twitch:clips] Add access token query to download URLs (closes #29136) 2021-05-30 01:47:33 +07:00
Sergey M․
24297a42ef
[youtube] Fix get_video_info request (closes #29086, closes #29165) 2021-05-30 00:36:26 +07:00
Remita Amine
1980ff4550 [vimeo] fix vimeo pro embed extraction(closes #29126) 2021-05-26 11:04:39 +01:00
Remita Amine
dfbbe2902f [redbulltv] fix embed data extraction(closes #28770) 2021-05-17 12:56:49 +01:00
Remita Amine
e1a9d0ef78 [shahid] relax _VALID_URL(closes #28772, closes #28930) 2021-05-17 12:37:39 +01:00
Sergey M․
f47627a1c9
release 2021.05.16 2021-05-16 22:55:05 +07:00
Sergey M․
efeb9e0fbf
[ChangeLog] Actualize
[ci skip]
2021-05-16 22:40:39 +07:00
Sergey M․
e90a890f01
[playstuff] Add extractor (closes #28901, closes #28931) 2021-05-16 22:31:37 +07:00
Sergey M․
199c645bee
[eroprofile] Skip test 2021-05-16 22:01:51 +07:00
Sergey M․
503a3744ad
[eroprofile] Fix extraction (closes #23200, closes #23626, closes #29008) 2021-05-16 21:57:21 +07:00
kr4ssi
ef03721f47
[vivo] Add support for vivo.st (#29009)
Co-authored-by: Sergey M. <dstftw@gmail.com>
2021-05-16 21:46:32 +07:00
Sergey M․
1e8aaa1d15
[generic] Add support for og:audio (closes #28311, closes #29015) 2021-05-16 21:42:38 +07:00
Sergey M․
6423d7054e
[options] Fix thumbnail option group name (closes #29042) 2021-05-16 21:34:10 +07:00
Sergey M․
eb5080286a
[phoenix] Fix extraction (closes #29057) 2021-05-16 21:21:14 +07:00
Sergey M․
286e01ce30
[generic] Add support for sibnet embeds 2021-05-16 20:50:32 +07:00
Sergey M․
8536dcafd8
[vk] Add support for sibnet embeds (closes #9500) 2021-05-16 20:48:24 +07:00
Sergey M․
552b139911
[generic] Add Referer header for direct videojs download URLs (closes #2879, closes #20217, closes #29053) 2021-05-16 20:29:35 +07:00
Lukas Anzinger
2202cef0e4
[orf:radio] Switch download URLs to HTTPS (closes #29012) (#29046) 2021-05-16 19:54:15 +07:00
Sergey M․
a726009987
[blinkx] Remove extractor (closes #28941)
No longer exists.
2021-05-05 04:12:35 +07:00
catboy
03afef7538
[medaltv] Relax _VALID_URL (#28884)
Co-authored-by: Sergey M. <dstftw@gmail.com>
2021-05-05 03:44:07 +07:00
Jacob Chapman
b797c1cc75
[YoutubeDL] Improve extract_info doc (#28946)
Co-authored-by: Sergey M. <dstftw@gmail.com>
2021-05-05 03:31:24 +07:00
Sergey M․
04be55307a
[funimation] Add support for optional lang code in URLs (closes #28950) 2021-05-05 02:54:12 +07:00
Sergey M․
504e4d804d
[gdcvault] Add support for HTML5 videos 2021-05-05 02:44:29 +07:00
Sergey M․
1786cd3fe4
[dispeak] DRY and update tests (closes #28970) 2021-05-05 02:30:42 +07:00
Ben Rog-Wilhelm
b8645c1f58
[dispeak] Improve FLV extraction (closes #13513) 2021-05-05 02:24:55 +07:00
Ben Rog-Wilhelm
fe05191b8c
[kaltura] Improve iframe extraction (#28969)
Co-authored-by: Sergey M. <dstftw@gmail.com>
2021-05-05 02:14:35 +07:00
Sergey M․
0204838163
[kaltura] Make embed code alternatives actually work 2021-05-05 02:01:22 +07:00
Sergey M․
a0df8a0617
[cda] Improve extraction (closes #28709, closes #28937) 2021-05-01 22:53:30 +07:00
Sergey M․
d1b9a5e2ef
[twitter] Improve formats extraction from vmap URL (closes #28909) 2021-05-01 19:00:39 +07:00
Sergey M․
ff04d43c46
[xtube] Fix formats extraction (closes #28870) 2021-05-01 18:33:05 +07:00
Sergey M․
d2f72c40db
[svtplay] Improve extraction (closes #28507, closes #28876) 2021-05-01 18:09:32 +07:00
Sergey M․
e33dfb445c
[tv2dk] Fix extraction (closes #28888) 2021-05-01 17:53:27 +07:00
Sergey M․
94520568b3
[workflows/ci.yml] Update link to jython-installer 2021-04-26 02:16:47 +07:00
Sergey M․
273964d190
release 2021.04.26 2021-04-26 01:33:30 +07:00
Sergey M․
346dd3b5e8
[ChangeLog] Actualize
[ci skip]
2021-04-26 01:29:50 +07:00
schnusch
f5c2c06231
[xfileshare] Add support for wolfstream.tv (#28858) 2021-04-26 00:32:47 +07:00
Sergey M․
57eaaff5cf
[francetvinfo] Improve video id extraction (closes #28792) 2021-04-25 22:52:28 +07:00
Sergey M․
999329cf6b
[workflows/ci.yml] Fix install nose for Jython 2021-04-24 23:52:16 +07:00
catboy
c6ab792990
[medaltv] Fix extraction (#28807)
numeric clip ids are no longer used by medal, and integer user ids are now sent as strings.
2021-04-24 19:10:35 +07:00
The Hatsune Daishi
0db79d8181
[tver] Redirect all downloads to Brightcove (#28849) 2021-04-24 18:58:03 +07:00
Sergey M․
7e8b3f9439
[youtube] Remove unused code 2021-04-21 05:37:51 +07:00
Sergey M․
ac19c3ac80
[go] Improve video id extraction (closes #25207, closes #25216, closes #26058) 2021-04-21 05:35:39 +07:00
Sergey M․
c4a451bcdd
[test_execution] Add test for lazy extractors (refs #28780) 2021-04-21 04:47:29 +07:00
Sergey M․
5ad69d3d0e
[test_youtube_misc] Move YoutubeIE.extract_id test into separate module 2021-04-21 04:45:13 +07:00
Sergey M․
32290307a4
[youtube] Fix lazy extractors (closes #28780) 2021-04-21 03:56:04 +07:00
Sergey M․
dab83a2597
[bbc] Extract full description from __INITIAL_DATA__ (refs #28774) 2021-04-21 03:00:56 +07:00
dirkf
41920fc80e
[bbc] Extract description and timestamp from __INITIAL_DATA__ (#28774) 2021-04-21 02:51:55 +07:00
Sergey M․
9f6c03a006
[cbsnews] Fix extraction for python <3.6 (closes #23359) 2021-04-17 05:05:31 +07:00
Sergey M․
596b26606c
release 2021.04.17 2021-04-17 03:50:09 +07:00
Sergey M․
f20b505b46
[ChangeLog] Actualize
[ci skip]
2021-04-17 03:47:00 +07:00
Sergey M․
cfee2dfe83
[utils] PEP 8 2021-04-17 03:32:04 +07:00
Sergey M․
30a3a4c70f
[lbry] Add support for HLS videos (closes #27877, closes #28768) 2021-04-17 03:23:47 +07:00
Sergey M․
a00a7e0cad
[utils] Add support for experimental HTTP response status code 308 Permanent Redirect (refs #27877, refs #28768) 2021-04-17 03:22:13 +07:00
Sergey M․
54558e0baa
[youtube] Improve stretch extraction and fix stretched ratio calculation (closes #28769) 2021-04-17 02:27:54 +07:00
Sergey M․
7c52395479
[youtube:tab] Improve grid extraction (closes #28725) 2021-04-17 01:13:10 +07:00
zraktvor
ea87ed8394
[youtube:tab] Detect series playlist on playlists page (closes #28723) 2021-04-17 01:13:10 +07:00
Cássio Ávila
d01e261a15
[youtube] Add more invidious instances (#28706) 2021-04-17 00:31:34 +07:00
quyleanh
79e4ccfc4b
[pluralsight] Extend anti-throttling timeout (#28712) 2021-04-17 00:30:10 +07:00
Sergey M․
06159135ef
[youtube] Improve URL to extractor routing (closes #27572, closes #28335, closes #28742) 2021-04-17 00:07:32 +07:00
Aaron Lipinski
4fb25ff5a3 [maoritv] Add new extractor(closes #24552) 2021-04-09 09:02:37 +01:00
Sergey M․
1b0a13f33c
[youtube:tab] Pass innertube context and x-goog-visitor-id header along with continuation requests (closes #28702) 2021-04-09 02:10:34 +07:00
Remita Amine
27e5a4464d [mtv] Fix Viacom A/B Testing Video Player extraction(closes #28703) 2021-04-08 18:54:44 +01:00
Sergey M․
545d6cb9d0
[pornhub] Extract DASH and HLS formats from get_media end point (closes #28698) 2021-04-08 15:32:59 +07:00
Remita Amine
006eea564d [cbssports] fix extraction(closes #28682) 2021-04-07 14:01:48 +01:00
Remita Amine
281b8e3443 [jamendo] fix track extraction(closes #28686) 2021-04-07 10:41:06 +01:00
Remita Amine
c0c5134c57 [curiositystream] fix format extraction(closes #26845, closes #28668) 2021-04-07 09:27:05 +01:00
Sergey M․
72a2c0a9ed
release 2021.04.07 2021-04-07 03:42:24 +07:00
Sergey M․
445db582a2
[ChangeLog] Actualize
[ci skip]
2021-04-07 03:39:02 +07:00
Sergey M․
6b116f0c03
[youtube] Fix videos with restricted location (closes #28685) 2021-04-07 03:34:43 +07:00
Sergey M․
70d0d4f9be
[compat] Use more conventional name for compat SimpleCookie 2021-04-06 14:22:28 +07:00
Sergey M․
6b315d96bc
[compat] flake8 2021-04-06 14:15:13 +07:00
guredora
25b1287323 [line] add support live.line.me (closes #17205)(closes #28658) 2021-04-05 10:11:01 +01:00
Remita Amine
760c911299 [compat] add compat_SimpleCookie to __all__ array 2021-04-05 07:16:50 +01:00
Remita Amine
162bf9e10a [compat] add compat_SimpleCookie 2021-04-04 19:49:24 +01:00
Remita Amine
6beb1ac65b [extractor/common] keep support for non standard JSON-LD VideoObject author values 2021-04-04 19:16:17 +01:00
Remita Amine
3ae9c0f410 [vimeo] improve extraction(closes #28591) 2021-04-04 16:28:26 +01:00
Remita Amine
e165f5641f [extractor/common] fix JSON-LD VideoObject author extraction 2021-04-04 16:28:26 +01:00
RomanEmelyanov
aee6feb02a
[youku] Update ccode(closes #17852, closes #28447, closes #28460) (#28648) 2021-04-04 08:14:37 +00:00
Remita Amine
654b4f4ff2 [youtube] prioritize information from YoutubeIE for playlist entries(closes #28619, closes #28636) 2021-04-03 08:23:35 +01:00
Remita Amine
1df2596f81 [extractor/common] fix _get_cookies method for python 2(#20673, #23256, #20326, closes #28640) 2021-04-03 07:54:16 +01:00
Remita Amine
04d4a3b136 [screencastomatic] fix extraction(closes #11976, closes #24489) 2021-04-01 19:05:00 +01:00
Allan Daemon
392c467f95 [palcomp3] Add new extractor(closes #13120) 2021-04-01 17:10:38 +01:00
Vid
c5aa8f36bf [arnes] Add new extractor(closes #28483) 2021-04-01 13:59:12 +01:00
Remita Amine
3748863070 [youtube:tab] Add support for hashtag videos extraction(closes #28308) 2021-04-01 11:52:23 +01:00
Sergey M․
ca304beb15
release 2021.04.01 2021-04-01 04:47:11 +07:00
Sergey M․
e789bb1aa4
[ChangeLog] Actualize
[ci skip]
2021-04-01 04:43:54 +07:00
Sergey M․
14f29f087e
[youtube] Setup CONSENT cookie when needed (closes #28604) 2021-04-01 04:05:10 +07:00
Remita Amine
b97fb2edac [vimeo] fix password protected review extraction(closes #27591) 2021-03-31 20:07:13 +01:00
Remita Amine
28bab774a0 [youtube] improve age-restricted video extraction(#28578) 2021-03-30 21:45:08 +01:00
Sergey M․
8f493de9fb
release 2021.03.31 2021-03-31 02:59:07 +07:00
Sergey M․
207bc35d34
[ChangeLog] Actualize
[ci skip]
2021-03-31 02:58:01 +07:00
Remita Amine
955894e72f [vlive] fix inkey request(closes #28589) 2021-03-30 10:01:06 +01:00
Sergey M․
287e50b56b
[francetvinfo] Improve video id extraction (closes #28584) 2021-03-30 03:37:43 +07:00
Chris Hranj
da762c4e32
[instagram] Improve title extraction and extract duration (#28469)
Co-authored-by: Sergey M. <dstftw@gmail.com>
2021-03-30 02:05:19 +07:00
Remita Amine
87a8bde777 [sbs] add support for ondemand watch URLs(closes #28566) 2021-03-28 08:46:33 +01:00
Remita Amine
49fc0a567f [youtube] fix video's channel extraction(closes #28562) 2021-03-27 19:11:41 +01:00
Remita Amine
cc777dcaa0 [picarto] fix live stream extraction(closes #28532) 2021-03-27 17:37:45 +01:00
Remita Amine
c785911870 [vimeo] fix unlisted video extraction(closes #28414) 2021-03-25 17:06:57 +01:00
Remita Amine
605e7b5e47 [youtube:tab] fix playlist/community continuation items extraction(closes #28266) 2021-03-25 12:53:18 +01:00
Remita Amine
8562218350 [ard] improve clip id extraction(#22724)(closes #28528) 2021-03-24 19:29:25 +01:00
Sergey M․
76da1c954a
release 2021.03.25 2021-03-25 00:04:10 +07:00
Sergey M․
c2fbfb49da
[ChangeLog] Actualize
[ci skip]
2021-03-25 00:03:00 +07:00
Roman Sebastian Karwacik
d1069d33b4 [zoom] Add new extractor(closes #16597, closes #27002, closes #28531) 2021-03-24 17:26:44 +01:00
The Hatsune Daishi
eafcadea26
[extractor] escape forgotten dot for hostnames in regular expression (#28530) 2021-03-24 14:33:19 +00:00
Remita Amine
a40002444e [bbc] fix BBC IPlayer Episodes/Group extraction(closes #28360) 2021-03-24 15:10:19 +01:00
Sergey M․
5208ae92fc
[youtube] Fix default value for youtube_include_dash_manifest (closes #28523) 2021-03-24 02:57:35 +07:00
Remita Amine
8117d613ac [zingmp3] fix extraction(closes #11589, closes #16409, closes #16968, closes #27205) 2021-03-22 15:58:56 +01:00
Martin Ström
00b4d72d1e
[vgtv] Add support for new tv.aftonbladet.se URL schema (#28514)
Co-authored-by: Sergey M <dstftw@gmail.com>
2021-03-22 20:56:58 +07:00
Remita Amine
21ccd0d7f4 [tiktok] detect private videos(closes #28453) 2021-03-21 09:10:38 +01:00
Sergey M․
7e79ba7dd6
[vimeo:album] Fix extraction for albums with number of videos multiple to page size (closes #28486) 2021-03-20 05:47:26 +07:00
Remita Amine
fa6bf0a711 [vvvvid] fix kenc format extraction(closes #28473) 2021-03-19 12:37:22 +01:00
Remita Amine
f912d6c8cf [mlb] fix video extraction(#21241) 2021-03-15 21:46:39 +01:00
Sergey M․
357bfe251d
[svtplay] Improve extraction (closes #28448) 2021-03-15 20:42:20 +07:00
Remita Amine
3be098010f [applepodcasts] fix extraction(closes #28445) 2021-03-14 20:08:46 +01:00
Remita Amine
9955bb4a27 [rtve] improve extraction
- extract all formats
- fix RTVE Infantil extraction(closes #24851)
- extract is_live and series
2021-03-14 15:05:25 +01:00
Sergey M․
ebfd66c4b1
release 2021.03.14 2021-03-14 09:38:16 +07:00
Sergey M․
b509d24b2f
[ChangeLog] Actualize
[ci skip]
2021-03-14 09:36:11 +07:00
Sergey M․
1860d0f41c
[southpark] Fix extraction and add support for southparkstudios.com (closes #26763, closes #28413) 2021-03-14 09:26:54 +07:00
Remita Amine
60845121ca [sportdeutschland] fix extraction(closes #21856)(closes #28425) 2021-03-13 15:19:24 +01:00
Remita Amine
1182f9567b [pinterest] reduce the number of HLS format requests 2021-03-12 18:11:11 +01:00
Remita Amine
ef414343e5 [peertube] improve thumbnail extraction(closes #28419) 2021-03-12 10:48:58 +01:00
Remita Amine
43d986acd8 [tver] improve title extraction(closes #28418) 2021-03-12 10:14:28 +01:00
Remita Amine
9c644a6419 [fujitv] fix HLS formats extension(closes #28416) 2021-03-12 09:51:01 +01:00
Remita Amine
fc2c6d5323 [shahid] fix format extraction(closes #28383) 2021-03-10 13:16:21 +01:00
Remita Amine
64ed3af328 [lbry] add support for channel filters(closes #28385) 2021-03-10 11:45:30 +01:00
Sergey M․
bae7dbf78b
[bandcamp] Extract release_timestamp 2021-03-10 03:41:21 +07:00
Sergey M․
15c24b0346
[lbry] Extract release_timestamp (closes #28386) 2021-03-10 03:40:56 +07:00
Sergey M․
477bff6906
Introduce release_timestamp meta field (refs #28386) 2021-03-10 03:36:31 +07:00
Sergey M․
1a1ccd9a6e
[pornhub] Detect flagged videos 2021-03-10 02:56:01 +07:00
Sergey M․
7dc513487f
[pornhub] Extract formats from get_media end point (#28395) 2021-03-10 02:54:10 +07:00
Remita Amine
c6a14755bb [bilibili] fix video info extraction(closes #28341) 2021-03-08 16:53:50 +01:00
Remita Amine
7f064d50db [cbs] add support for Paramount+ (closes #28342) 2021-03-07 08:32:37 +01:00
Remita Amine
b8b622fbeb [trovo] Add Origin header to VOD formats(closes #28346) 2021-03-04 17:57:16 +01:00
Remita Amine
ec64ec9651 [voxmedia] fix volume embed extraction(closes #28338) 2021-03-04 12:42:31 +01:00
Sergey M․
f68692b004
release 2021.03.03 2021-03-03 11:47:34 +07:00
Sergey M․
8c9766f4bf
[ChangeLog] Actualize
[ci skip]
2021-03-03 11:44:49 +07:00
Sergey M․
061c030133
[youtube:tab] Switch continuation to browse API (closes #28289, closes #28327)
Until further investigation.
2021-03-03 11:42:59 +07:00
Remita Amine
8f56907afa [9c9media] fix extraction for videos with multiple ContentPackages(closes #28309) 2021-03-02 12:04:31 +01:00
Remita Amine
e1adb3ed4f [bbc] correct caught exception type 2021-03-02 11:21:49 +01:00
dirkf
e465b25c1f [bbc] add support for BBC Reel videos(closes #21870, closes #23660, closes #28268) 2021-03-02 10:49:20 +01:00
Sergey M․
7c06216abf
release 2021.03.02 2021-03-02 06:19:42 +07:00
Sergey M․
0002888627
[ChangeLog] Actualize
[ci skip]
2021-03-02 06:16:41 +07:00
Sergey M․
3fb14cd214
[zdf] Rework extractors (closes #11606, closes #13473, closes #17354, closes #21185, closes #26711, closes #27068, closes #27930, closes #28198, closes #28199, closes #28274)
* Generalize unique video ids for zdf based extractors
* Improve extraction
* Fix 3sat and phoenix
2021-03-02 06:07:30 +07:00
Remita Amine
bee6182680 [stretchinternet] Fix extraction(closes #28297) 2021-03-01 14:00:03 +01:00
Remita Amine
38fe5e239a [urplay] fix episode data extraction(closes #28292) 2021-02-28 12:31:18 +01:00
Remita Amine
678d46f6bb [bandaichannel] Add new extractor(closes #21404) 2021-02-28 10:42:41 +01:00
Alexander Seiler
3c58f9e0b9 [srgssr] improve extraction
- extract subtitle
- fix extraction for new videos
- update srf download domains

closes #14717
closes #14725
closes #27231
closes #28238
2021-02-25 15:50:49 +01:00
Remita Amine
ef28e33249 [vvvvid] reduce season request payload size 2021-02-24 22:29:35 +01:00
nixxo
9662e4964b
[vvvvid] extract series sublists playlist_title (#27601) (#27618) 2021-02-24 22:17:29 +01:00
Remita Amine
44603290e5 [dplay] Extract Ad-Free uplynk URLs(#28160) 2021-02-24 18:34:28 +01:00
Remita Amine
1631fca1ee [wat] detect DRM protected videos(closes #27958) 2021-02-23 13:50:18 +01:00
Remita Amine
295860ff00 [tf1] improve extraction(closes #27980)(closes #28040) 2021-02-23 12:41:32 +01:00
Sergey M․
8cb4b71909
[tmz] Fix and improve extraction (closes #24603, closes #24687, closes #28211) 2021-02-23 18:37:06 +07:00
Remita Amine
d81421af4b [gedidigital] improve asset id matching 2021-02-22 23:02:15 +01:00
nixxo
7422a2194f [gedidigital] Add new extractor(closes #7347)(closes #26946) 2021-02-22 20:42:14 +01:00
Remita Amine
2090dbdc8c [youtube] fix get_video_info request 2021-02-21 23:09:09 +01:00
Sergey M․
0a04e03a02
release 2021.02.22 2021-02-22 02:42:16 +07:00
Sergey M․
44b2d5f5fc
[ChangeLog] Actualize
[ci skip]
2021-02-22 02:40:00 +07:00
Sergey M․
aa9118a373
[apa] Improve extraction (closes #27750) 2021-02-22 02:29:50 +07:00
Adrian Heine
36abc16c3c
[apa] Fix extraction 2021-02-22 02:28:28 +07:00
Sergey M․
919d764600
[youporn] Skip test 2021-02-21 23:21:38 +07:00
piplongrun
696183e133
[youporn] Extract duration (#28019)
Co-authored-by: Sergey M <dstftw@gmail.com>
2021-02-21 23:19:37 +07:00
SirCipherz
f90d825a6b
[peertube] Add support for canard.tube (#28190) 2021-02-21 23:05:33 +07:00
Remita Amine
3037ab00c7 [youtube] fixup m4a_dash formats(closes #28165) 2021-02-21 10:31:27 +01:00
Isaac-the-Man
21e872b19a [samplefocus] Add new extractor(closes #27763) 2021-02-20 10:55:19 +01:00
Remita Amine
cf2dbec630 [vimeo] add support for unlisted video source format extraction 2021-02-19 21:13:56 +01:00
Remita Amine
b92bb0e02a [viki] improve extraction(closes #26522)(closes #28203)
- extract uploader_url and episode_number
- report login required error
- extract 480p formats
- fix API v4 calls
2021-02-19 16:00:22 +01:00
Remita Amine
40edffae3d [ninegag] unescape title(#28201) 2021-02-19 11:55:40 +01:00
Sergey M․
9fc5eafb8e
[youtube] Improve _VALID_URL (refs #28193) 2021-02-18 04:59:56 +07:00
bopol
08c2fbb844
[youtube] Add support for redirect.invidious.io (#28193)
Co-authored-by: Sergey M <dstftw@gmail.com>
2021-02-18 04:29:32 +07:00
Remita Amine
3997efb65e [dplay] add support for de.hgtv.com (closes #28182) 2021-02-17 19:50:04 +01:00
Remita Amine
a7356dffe9 [dplay] Add support for discoveryplus.com (closes #24698) 2021-02-17 18:33:33 +01:00
dmsummers
e20ec43094 [simplecast] Add new extractor(closes #24107) 2021-02-17 14:53:23 +01:00
PrinceOfPuppers
70baa7bfae
[test_youtube_lists] Actualize youtube flat playlist test (closes #28045) 2021-02-17 04:58:54 +07:00
PrinceOfPuppers
8980f53b42
[youtube] Fix uploader extraction in flat playlist mode (#28045) 2021-02-17 04:21:33 +07:00
Sergey M․
a363fb5d28
[yandexmusic:playlist] Request missing tracks in chunks (closes #27355, closes #28184) 2021-02-17 04:03:54 +07:00
Max
646052e416
[postprocessor/embedthumbnail] Recognize atomicparsley binary in lowercase (#28112) 2021-02-17 03:22:51 +07:00
Stephen Stair
844e4cbc54 [storyfire] Add new extractor(closes #25628)(closes #26349) 2021-02-16 21:14:43 +01:00
Remita Amine
56c63c8c02 [zhihu] Add new extractor(closes #28177) 2021-02-16 10:08:43 +01:00
Sergey M․
07eb8f1916
[youtube] Fix controversial videos when authenticated with cookies (closes #28174) 2021-02-16 05:57:53 +07:00
Remita Amine
4b5410c5c8 [ccma] fix timestamp parsing in python 2 2021-02-15 13:06:54 +01:00
Remita Amine
be2e9b76ee [videopress] add support for video.wordpress.com 2021-02-14 22:10:06 +01:00
Remita Amine
d8085580f6 [kakao] improve info extraction and detect geo restriction(closes #26577) 2021-02-14 19:48:26 +01:00
Remita Amine
6d32c6c6d3 [xboxclips] fix extraction(closes #27151) 2021-02-14 16:22:45 +01:00
Sergey M․
f94d764993
[ard] Improve formats extraction (closes #28155) 2021-02-14 05:03:15 +07:00
Kevin Velghe
f28f1b4d6e
[canvas] Add new extractor for Dagelijkse Kost (#28119) 2021-02-11 08:04:16 +00:00
Sergey M․
360d5f0daa
release 2021.02.10 2021-02-10 22:34:47 +07:00
Sergey M․
cd493c5adc
[ChangeLog] Actualize
[ci skip]
2021-02-10 22:32:25 +07:00
Sergey M․
a4c7ed6b1e
[youtube:tab] Improve grid continuation extraction (closes #28130) 2021-02-10 22:28:58 +07:00
Remita Amine
7f8b8bc418 [ign] fix extraction(closes #24771) 2021-02-08 15:58:20 +01:00
Sergey M․
311ebdd9a5
[xhamster] Extract formats from xplayer settings and extract filesizes (closes #28114) 2021-02-08 15:47:12 +07:00
Remita Amine
99c68db0a8 [youtube] add support for phone/tablet JS player(closes #26424) 2021-02-08 09:20:28 +01:00
Sergey M․
5fc53690cb
[archiveorg] Fix and improve extraction (closes #21330, closes #23586, closes #25277, closes #26780, closes #27109, closes #27236, closes #28063) 2021-02-07 20:34:41 +07:00
Sergey M․
7a9161578e
[cda] Detect geo restricted videos (refs #28106) 2021-02-07 19:18:40 +07:00
Adrian Heine né Lang
2405854705
[urplay] Fix extraction (closes #28073) (#28074) 2021-02-07 02:46:05 +07:00
Sergey M․
0cf09c2b41
[youtube] Fix release date extraction (closes #28094) 2021-02-07 02:17:03 +07:00
Sergey M․
0156ce95c5
[youtube] Extract abr and vbr (closes #28100) 2021-02-07 02:03:47 +07:00
Remita Amine
1641b13232 [youtube] skip OTF formats(#28070) 2021-02-04 13:05:35 +01:00
Sergey M․
a4bdc3112b
release 2021.02.04.1 2021-02-04 13:11:33 +07:00
Sergey M․
c7d407bca2
[ChangeLog] Actualize
[ci skip]
2021-02-04 13:09:28 +07:00
Sergey M․
7215691ab7
[youtube] Prefer DASH formats (closes #28070) 2021-02-04 13:07:43 +07:00
Adrian Heine né Lang
fc88e8f0e3
[azmedien] Fix extraction (#28064) 2021-02-03 23:57:56 +00:00
Sergey M․
cfefb7d854
release 2021.02.04 2021-02-04 04:49:25 +07:00
Sergey M․
3c07d007ca
[ChangeLog] Actualize
[ci skip]
2021-02-04 04:47:30 +07:00
Sergey M․
89c5a7d5aa
[pornhub] Implement lazy playlist extraction 2021-02-04 04:42:14 +07:00
Sergey M․
2adc0c51cd
[pornhub] Add placeholder netrc machine 2021-02-04 04:20:09 +07:00
Sergey M․
1f0910bc27
[svtplay] Fix video id extraction (closes #28058) 2021-02-04 04:17:45 +07:00
Sergey M․
e22ff4e356
[pornhub] Add support for authentication (closes #18797, closes #21416, closes #24294) 2021-02-04 04:09:11 +07:00
Sergey M․
83031d749b
[pornhub:user] Add support for URLs unavailable via /videos page and improve paging (closes #27853) 2021-02-04 00:25:53 +07:00
Remita Amine
1b731ebcaa [bravotv] add support for oxygen.com(closes #13357)(closes #22500) 2021-02-03 18:13:17 +01:00
Remita Amine
ab25f3f431 [youtube] pass embed URL to get_video_info request 2021-02-03 17:15:31 +01:00
Guillem Vela
07f7aad81c [ccma] improve metadata extraction(closes #27994)
- extract age_limit, alt_title, categories, series and episode_number
- fix timestamp and multiple subtitles extraction
2021-02-03 09:19:54 +01:00
Remita Amine
1e2575df87 Credit @adrianheine for #27732 2021-02-03 00:21:46 +01:00
Remita Amine
b111a64135 [egghead] fix typo 2021-02-02 19:05:37 +01:00
Viren Rajput
0e3a968479 [egghead] update API domain(closes #28038) 2021-02-02 19:00:36 +01:00
Remita Amine
c11f7cf9bd [vidzi] remove extractor(closes #12629) 2021-02-01 22:35:28 +01:00
Remita Amine
8fa7cc387d [vidio] improve metadata extraction 2021-02-01 21:35:18 +01:00
Remita Amine
65eee5a745 [youtube] improve subtitle extraction 2021-02-01 18:12:35 +01:00
Remita Amine
efef4ddf51 [youtube] fix chapter extraction fallback 2021-02-01 16:49:52 +01:00
Remita Amine
159a3d48df [youtube] keep _formats array for format sorting tests 2021-02-01 16:36:19 +01:00
Remita Amine
b46483a6ec [youtube/test_youtube_signature] fix test 2021-02-01 16:35:07 +01:00
Remita Amine
9c724601ba [youtube] remove description chapters tests
video descriptions no longer contain the yt.www.watch.player.seekTo function
2021-02-01 16:11:07 +01:00
Remita Amine
67299f23d8 [youtube] Rewrite Extractor
- improve format sorting
- remove unused code(swf parsing, ...)
- fix series metadata extraction
- fix trailer video extraction
- improve error reporting
- extract video location
2021-02-01 14:53:01 +01:00
Henrik Heimbuerger
a0f69f9526 [nebula] Fix stale session issues
When Nebula isn't accessed for a while, the Zype access token stored on
the Nebula backend expires. It is then no longer returned by the user
endpoint.
The Nebula frontend has the same issue and keeps polling for the Zype
token in this case.
This isn't implemented in this extractor yet, but at least a specific
error message now prints some helpful advice.
2021-01-17 22:25:51 +01:00
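
A hedged sketch of the friendlier failure described above (the zype_auth_info/access_token field names are assumptions drawn from this description, not verified against the Nebula API):

    from youtube_dl.utils import ExtractorError

    def zype_access_token(user_object):
        token = (user_object.get('zype_auth_info') or {}).get('access_token')
        if not token:  # stale session: the backend stopped returning the token
            raise ExtractorError(
                'No Zype access token in user endpoint response; the stored session '
                'is probably stale -- log in to Nebula again to refresh it',
                expected=True)
        return token
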
Henrik Heimbuerger
9fdfd6d3ba [nebula] Prevent cookies from breaking Nebula auth
When the 'sessionid' cookie is submitted to the `/auth/login/` endpoint,
the response is always a 403. This typically happens when youtube_dl is
run with both `--netrc` and `--cookies` as your default configuration.
In that situation, the first authentication succeeds and stores the
`sessionid` cookie in the cookie jar. During subsequent authentication
attempts, the cookie is sent alongside and causes the authentication to
fail.

This is very unexpected and we therefore specifically handle this case.
2021-01-17 15:52:02 +01:00
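
A sketch of that specific handling (the cookie domain is an assumption for illustration): drop any stored 'sessionid' before hitting /auth/login/ so a stale session cannot poison the request.

    try:
        from http.cookiejar import CookieJar  # Python 3 (shows the expected jar type)
    except ImportError:
        from cookielib import CookieJar  # Python 2

    def drop_stale_sessionid(cookiejar, domain='.watchnebula.com'):
        """Remove a stored 'sessionid' cookie so /auth/login/ is not sent a
        stale session, which the backend rejects with HTTP 403."""
        try:
            cookiejar.clear(domain, '/', 'sessionid')
        except KeyError:
            pass  # no such cookie stored; nothing to drop
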
Henrik Heimbuerger
59c0e6e3d8 [nebula] Log attempted authentication method 2021-01-17 15:52:02 +01:00
Henrik Heimbuerger
8b4c9da62a [nebula] Clean up credentials-based authentication 2021-01-17 15:52:02 +01:00
Henrik Heimbuerger
2562c9ec74 [nebula] Implement PoC of netrc authentication 2021-01-17 15:52:02 +01:00
Henrik Heimbuerger
f8eb89748b [nebula] Update test video checksums 2021-01-17 15:52:02 +01:00
Henrik Heimbuerger
30362440dc [nebula] Improve performance by avoiding redirect 2021-01-17 15:52:02 +01:00
Henrik Heimbuerger
1317a43a6a [nebula] Implement Zype API key retrieval from JS chunk 2021-01-17 15:52:02 +01:00
Henrik Heimbuerger
18582060c2 [nebula] Rewrite extractor to new frontend (refs #21258) 2021-01-17 15:52:02 +01:00
Henrik Heimbuerger
af3434b839 [nebula] Relax meta data lookups 2021-01-17 15:52:01 +01:00
Henrik Heimbuerger
61cead3235 [nebula] Add better channel title extraction (refs #21258) 2021-01-17 15:52:01 +01:00
Henrik Heimbuerger
469cae38cd [nebula] Add additional test cases and improve cookie envvar handling 2021-01-17 15:52:01 +01:00
Henrik Heimbuerger
f6ac8cd495 [nebula] Add basic support for Nebula (refs #21258) 2021-01-17 15:52:01 +01:00
257 changed files with 26202 additions and 8261 deletions

@@ -18,7 +18,7 @@ title: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.01.24.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
 - Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
 - Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -26,7 +26,7 @@ Carefully read and work through this check list in order to prevent the most com
 -->
 - [ ] I'm reporting a broken site support
-- [ ] I've verified that I'm running youtube-dl version **2021.01.24.1**
+- [ ] I've verified that I'm running youtube-dl version **2021.12.17**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
 - [ ] I've searched the bugtracker for similar issues including closed ones
@@ -41,7 +41,7 @@ Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2021.01.24.1
+[debug] youtube-dl version 2021.12.17
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}

@@ -19,7 +19,7 @@ labels: 'site-support-request'
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.01.24.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
 - Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
 - Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -27,7 +27,7 @@ Carefully read and work through this check list in order to prevent the most com
 -->
 - [ ] I'm reporting a new site support request
-- [ ] I've verified that I'm running youtube-dl version **2021.01.24.1**
+- [ ] I've verified that I'm running youtube-dl version **2021.12.17**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that none of provided URLs violate any copyrights
 - [ ] I've searched the bugtracker for similar site support requests including closed ones

@@ -18,13 +18,13 @@ title: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.01.24.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Search the bugtracker for similar site feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
 - Finally, put x into all relevant boxes (like this [x])
 -->
 - [ ] I'm reporting a site feature request
-- [ ] I've verified that I'm running youtube-dl version **2021.01.24.1**
+- [ ] I've verified that I'm running youtube-dl version **2021.12.17**
 - [ ] I've searched the bugtracker for similar site feature requests including closed ones

@@ -18,7 +18,7 @@ title: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.01.24.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
 - Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
 - Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -27,7 +27,7 @@ Carefully read and work through this check list in order to prevent the most com
 -->
 - [ ] I'm reporting a broken site support issue
-- [ ] I've verified that I'm running youtube-dl version **2021.01.24.1**
+- [ ] I've verified that I'm running youtube-dl version **2021.12.17**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
 - [ ] I've searched the bugtracker for similar bug reports including closed ones
@@ -43,7 +43,7 @@ Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2021.01.24.1
+[debug] youtube-dl version 2021.12.17
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}

@@ -19,13 +19,13 @@ labels: 'request'
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.01.24.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
 - Finally, put x into all relevant boxes (like this [x])
 -->
 - [ ] I'm reporting a feature request
-- [ ] I've verified that I'm running youtube-dl version **2021.01.24.1**
+- [ ] I've verified that I'm running youtube-dl version **2021.12.17**
 - [ ] I've searched the bugtracker for similar feature requests including closed ones

.github/ISSUE_TEMPLATE/config.yml (new file)

@@ -0,0 +1 @@
+blank_issues_enabled: false
@ -1,74 +1,479 @@
name: CI name: CI
on: [push, pull_request]
env:
all-cpython-versions: 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, 3.11, 3.12
main-cpython-versions: 2.7, 3.2, 3.5, 3.9, 3.11
pypy-versions: pypy-2.7, pypy-3.6, pypy-3.7
cpython-versions: main
test-set: core
# Python beta version to be built using pyenv before setup-python support
# Must also be included in all-cpython-versions
next: 3.13
on:
push:
# push inputs aren't known to GitHub
inputs:
cpython-versions:
type: string
default: all
test-set:
type: string
default: core
pull_request:
# pull_request inputs aren't known to GitHub
inputs:
cpython-versions:
type: string
default: main
test-set:
type: string
default: both
workflow_dispatch:
inputs:
cpython-versions:
type: choice
description: CPython versions (main = 2.7, 3.2, 3.5, 3.9, 3.11)
options:
- all
- main
required: true
default: main
test-set:
type: choice
description: core, download
options:
- both
- core
- download
required: true
default: both
permissions:
contents: read
jobs: jobs:
select:
name: Select tests from inputs
runs-on: ubuntu-latest
outputs:
cpython-versions: ${{ steps.run.outputs.cpython-versions }}
test-set: ${{ steps.run.outputs.test-set }}
own-pip-versions: ${{ steps.run.outputs.own-pip-versions }}
steps:
# push and pull_request inputs aren't known to GitHub (pt3)
- name: Set push defaults
if: ${{ github.event_name == 'push' }}
env:
cpython-versions: all
test-set: core
run: |
echo "cpython-versions=${{env.cpython-versions}}" >> "$GITHUB_ENV"
echo "test_set=${{env.test_set}}" >> "$GITHUB_ENV"
- name: Get pull_request inputs
if: ${{ github.event_name == 'pull_request' }}
env:
cpython-versions: main
test-set: both
run: |
echo "cpython-versions=${{env.cpython-versions}}" >> "$GITHUB_ENV"
echo "test_set=${{env.test_set}}" >> "$GITHUB_ENV"
- name: Make version array
id: run
run: |
# Make a JSON Array from comma/space-separated string (no extra escaping)
json_list() { \
ret=""; IFS="${IFS},"; set -- $*; \
for a in "$@"; do \
ret=$(printf '%s"%s"' "${ret}${ret:+, }" "$a"); \
done; \
printf '[%s]' "$ret"; }
tests="${{ inputs.test-set || env.test-set }}"
[ $tests = both ] && tests="core download"
printf 'test-set=%s\n' "$(json_list $tests)" >> "$GITHUB_OUTPUT"
versions="${{ inputs.cpython-versions || env.cpython-versions }}"
if [ "$versions" = all ]; then \
versions="${{ env.all-cpython-versions }}"; else \
versions="${{ env.main-cpython-versions }}"; \
fi
printf 'cpython-versions=%s\n' \
"$(json_list ${versions}${versions:+, }${{ env.pypy-versions }})" >> "$GITHUB_OUTPUT"
# versions with a special get-pip.py in a per-version subdirectory
printf 'own-pip-versions=%s\n' \
"$(json_list 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6)" >> "$GITHUB_OUTPUT"
tests: tests:
name: Tests name: Run tests
needs: select
permissions:
contents: read
packages: write
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
env:
PIP: python -m pip
PIP_DISABLE_PIP_VERSION_CHECK: true
PIP_NO_PYTHON_VERSION_WARNING: true
strategy: strategy:
fail-fast: true fail-fast: true
matrix: matrix:
os: [ubuntu-18.04] os: [ubuntu-20.04]
# TODO: python 2.6 python-version: ${{ fromJSON(needs.select.outputs.cpython-versions) }}
python-version: [2.7, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, pypy-2.7, pypy-3.6, pypy-3.7]
python-impl: [cpython] python-impl: [cpython]
ytdl-test-set: [core, download] ytdl-test-set: ${{ fromJSON(needs.select.outputs.test-set) }}
run-tests-ext: [sh] run-tests-ext: [sh]
include: include:
# python 3.2 is only available on windows via setup-python - os: windows-2019
- os: windows-latest python-version: 3.4
python-version: 3.2
python-impl: cpython python-impl: cpython
ytdl-test-set: core ytdl-test-set: ${{ contains(needs.select.outputs.test-set, 'core') && 'core' || 'nocore' }}
run-tests-ext: bat run-tests-ext: bat
- os: windows-latest - os: windows-2019
python-version: 3.2 python-version: 3.4
python-impl: cpython python-impl: cpython
ytdl-test-set: download ytdl-test-set: ${{ contains(needs.select.outputs.test-set, 'download') && 'download' || 'nodownload' }}
run-tests-ext: bat run-tests-ext: bat
# jython # jython
- os: ubuntu-18.04 - os: ubuntu-20.04
python-version: 2.7
python-impl: jython python-impl: jython
ytdl-test-set: core ytdl-test-set: ${{ contains(needs.select.outputs.test-set, 'core') && 'core' || 'nocore' }}
run-tests-ext: sh run-tests-ext: sh
- os: ubuntu-18.04 - os: ubuntu-20.04
python-version: 2.7
python-impl: jython python-impl: jython
ytdl-test-set: download ytdl-test-set: ${{ contains(needs.select.outputs.test-set, 'download') && 'download' || 'nodownload' }}
run-tests-ext: sh run-tests-ext: sh
steps: steps:
- uses: actions/checkout@v2 - name: Prepare Linux
- name: Set up Python ${{ matrix.python-version }} if: ${{ startswith(matrix.os, 'ubuntu') }}
uses: actions/setup-python@v2 shell: bash
if: ${{ matrix.python-impl == 'cpython' }} run: |
# apt in runner, if needed, may not be up-to-date
sudo apt-get update
- name: Checkout
uses: actions/checkout@v3
#-------- Python 3 -----
- name: Set up supported Python ${{ matrix.python-version }}
id: setup-python
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version != '2.6' && matrix.python-version != '2.7' && matrix.python-version != env.next }}
# wrap broken actions/setup-python@v4
# NB may run apt-get install in Linux
uses: ytdl-org/setup-python@v1
env:
# Temporary workaround for Python 3.5 failures - May 2024
PIP_TRUSTED_HOST: "pypi.python.org pypi.org files.pythonhosted.org"
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
cache-build: true
allow-build: info
- name: Locate supported Python ${{ matrix.python-version }}
if: ${{ env.pythonLocation }}
shell: bash
run: |
echo "PYTHONHOME=${pythonLocation}" >> "$GITHUB_ENV"
export expected="${{ steps.setup-python.outputs.python-path }}"
dirname() { printf '%s\n' \
'import os, sys' \
'print(os.path.dirname(sys.argv[1]))' \
| ${expected} - "$1"; }
expd="$(dirname "$expected")"
export python="$(command -v python)"
[ "$expd" = "$(dirname "$python")" ] || echo "PATH=$expd:${PATH}" >> "$GITHUB_ENV"
[ -x "$python" ] || printf '%s\n' \
'import os' \
'exp = os.environ["expected"]' \
'python = os.environ["python"]' \
'exps = os.path.split(exp)' \
'if python and (os.path.dirname(python) == exp[0]):' \
' exit(0)' \
'exps[1] = "python" + os.path.splitext(exps[1])[1]' \
'python = os.path.join(*exps)' \
'try:' \
' os.symlink(exp, python)' \
'except AttributeError:' \
' os.rename(exp, python)' \
| ${expected} -
printf '%s\n' \
'import sys' \
'print(sys.path)' \
| ${expected} -
#-------- Python next (was 3.12) -
- name: Set up CPython 3.next environment
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version == env.next }}
shell: bash
run: |
PYENV_ROOT=$HOME/.local/share/pyenv
echo "PYENV_ROOT=${PYENV_ROOT}" >> "$GITHUB_ENV"
- name: Cache Python 3.next
id: cachenext
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version == env.next }}
uses: actions/cache@v3
with:
key: python-${{ env.next }}
path: |
${{ env.PYENV_ROOT }}
- name: Build and set up Python 3.next
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version == env.next && ! steps.cachenext.outputs.cache-hit }}
# dl and build locally
shell: bash
run: |
# Install build environment
sudo apt-get install -y build-essential llvm libssl-dev tk-dev \
libncursesw5-dev libreadline-dev libsqlite3-dev \
libffi-dev xz-utils zlib1g-dev libbz2-dev liblzma-dev
# Download PyEnv from its GitHub repository.
export PYENV_ROOT=${{ env.PYENV_ROOT }}
export PATH=$PYENV_ROOT/bin:$PATH
git clone "https://github.com/pyenv/pyenv.git" "$PYENV_ROOT"
pyenv install ${{ env.next }}
- name: Locate Python 3.next
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version == env.next }}
shell: bash
run: |
PYTHONHOME="$(echo "${{ env.PYENV_ROOT }}/versions/${{ env.next }}."*)"
test -n "$PYTHONHOME"
echo "PYTHONHOME=$PYTHONHOME" >> "$GITHUB_ENV"
echo "PATH=${PYTHONHOME}/bin:$PATH" >> "$GITHUB_ENV"
#-------- Python 2.7 --
- name: Set up Python 2.7
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version == '2.7' }}
# install 2.7
shell: bash
run: |
sudo apt-get install -y python2 python-is-python2
echo "PYTHONHOME=/usr" >> "$GITHUB_ENV"
#-------- Python 2.6 --
- name: Set up Python 2.6 environment
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version == '2.6' }}
shell: bash
run: |
openssl_name=openssl-1.0.2u
echo "openssl_name=${openssl_name}" >> "$GITHUB_ENV"
openssl_dir=$HOME/.local/opt/$openssl_name
echo "openssl_dir=${openssl_dir}" >> "$GITHUB_ENV"
PYENV_ROOT=$HOME/.local/share/pyenv
echo "PYENV_ROOT=${PYENV_ROOT}" >> "$GITHUB_ENV"
sudo apt-get install -y openssl ca-certificates
- name: Cache Python 2.6
id: cache26
if: ${{ matrix.python-version == '2.6' }}
uses: actions/cache@v3
with:
key: python-2.6.9
path: |
${{ env.openssl_dir }}
${{ env.PYENV_ROOT }}
- name: Build and set up Python 2.6
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version == '2.6' && ! steps.cache26.outputs.cache-hit }}
# dl and build locally
shell: bash
run: |
# Install build environment
sudo apt-get install -y build-essential llvm libssl-dev tk-dev \
libncursesw5-dev libreadline-dev libsqlite3-dev \
libffi-dev xz-utils zlib1g-dev libbz2-dev liblzma-dev
# Download and install OpenSSL 1.0.2, back in time
openssl_name=${{ env.openssl_name }}
openssl_targz=${openssl_name}.tar.gz
openssl_dir=${{ env.openssl_dir }}
openssl_inc=$openssl_dir/include
openssl_lib=$openssl_dir/lib
openssl_ssl=$openssl_dir/ssl
curl -L "https://www.openssl.org/source/$openssl_targz" -o $openssl_targz
tar -xf $openssl_targz
( cd $openssl_name; \
./config --prefix=$openssl_dir --openssldir=${openssl_dir}/ssl \
--libdir=lib -Wl,-rpath=${openssl_dir}/lib shared zlib-dynamic && \
make && \
make install )
rm -rf $openssl_name
rmdir $openssl_ssl/certs && ln -s /etc/ssl/certs $openssl_ssl/certs
# Download PyEnv from its GitHub repository.
export PYENV_ROOT=${{ env.PYENV_ROOT }}
export PATH=$PYENV_ROOT/bin:$PATH
git clone "https://github.com/pyenv/pyenv.git" "$PYENV_ROOT"
# Prevent pyenv build trying (and failing) to update pip
export GET_PIP=get-pip-2.6.py
echo 'import sys; sys.exit(0)' > ${GET_PIP}
GET_PIP=$(realpath $GET_PIP)
# Build and install Python
export CFLAGS="-I$openssl_inc"
export LDFLAGS="-L$openssl_lib"
export LD_LIBRARY_PATH="$openssl_lib"
pyenv install 2.6.9
- name: Locate Python 2.6
if: ${{ matrix.python-impl == 'cpython' && matrix.python-version == '2.6' }}
shell: bash
run: |
PYTHONHOME="${{ env.PYENV_ROOT }}/versions/2.6.9"
echo "PYTHONHOME=$PYTHONHOME" >> "$GITHUB_ENV"
echo "PATH=${PYTHONHOME}/bin:$PATH" >> "$GITHUB_ENV"
echo "LD_LIBRARY_PATH=${{ env.openssl_dir }}/lib${LD_LIBRARY_PATH:+:}${LD_LIBRARY_PATH}" >> "$GITHUB_ENV"
#-------- Jython ------
- name: Set up Java 8
if: ${{ matrix.python-impl == 'jython' }}
uses: actions/setup-java@v3
with:
java-version: 8
distribution: 'zulu'
- name: Setup Jython environment
if: ${{ matrix.python-impl == 'jython' }}
shell: bash
run: |
echo "JYTHON_ROOT=${HOME}/jython" >> "$GITHUB_ENV"
echo "PIP=pip" >> "$GITHUB_ENV"
- name: Cache Jython
id: cachejy
if: ${{ matrix.python-impl == 'jython' && matrix.python-version == '2.7' }}
uses: actions/cache@v3
with:
# 2.7.3 now available, may solve SNI issue
key: jython-2.7.1
path: |
${{ env.JYTHON_ROOT }}
- name: Install Jython
if: ${{ matrix.python-impl == 'jython' && matrix.python-version == '2.7' && ! steps.cachejy.outputs.cache-hit }}
shell: bash
run: |
JYTHON_ROOT="${{ env.JYTHON_ROOT }}"
curl -L "https://repo1.maven.org/maven2/org/python/jython-installer/2.7.1/jython-installer-2.7.1.jar" -o jython-installer.jar
java -jar jython-installer.jar -s -d "${JYTHON_ROOT}"
echo "${JYTHON_ROOT}/bin" >> "$GITHUB_PATH"
- name: Set up cached Jython
if: ${{ steps.cachejy.outputs.cache-hit }}
shell: bash
run: |
JYTHON_ROOT="${{ env.JYTHON_ROOT }}"
echo "${JYTHON_ROOT}/bin" >> $GITHUB_PATH
- name: Install supporting Python 2.7 if possible
if: ${{ steps.cachejy.outputs.cache-hit }}
shell: bash
run: |
sudo apt-get install -y python2.7 || true
#-------- pip ---------
- name: Set up supported Python ${{ matrix.python-version }} pip
if: ${{ (matrix.python-version != '3.2' && steps.setup-python.outputs.python-path) || matrix.python-version == '2.7' }}
# This step may run in either Linux or Windows
shell: bash
run: |
echo "$PATH"
echo "$PYTHONHOME"
# curl is available on both Windows and Linux, -L follows redirects, -O gets name
python -m ensurepip || python -m pip --version || { \
get_pip="${{ contains(needs.select.outputs.own-pip-versions, matrix.python-version) && format('{0}/', matrix.python-version) || '' }}"; \
curl -L -O "https://bootstrap.pypa.io/pip/${get_pip}get-pip.py"; \
python get-pip.py; }
- name: Set up Python 2.6 pip
if: ${{ matrix.python-version == '2.6' }}
shell: bash
run: |
python -m pip --version || { \
curl -L -O "https://bootstrap.pypa.io/pip/2.6/get-pip.py"; \
curl -L -O "https://files.pythonhosted.org/packages/ac/95/a05b56bb975efa78d3557efa36acaf9cf5d2fd0ee0062060493687432e03/pip-9.0.3-py2.py3-none-any.whl"; \
python get-pip.py --no-setuptools --no-wheel pip-9.0.3-py2.py3-none-any.whl; }
# work-around to invoke pip module on 2.6: https://bugs.python.org/issue2751
echo "PIP=python -m pip.__main__" >> "$GITHUB_ENV"
- name: Set up other Python ${{ matrix.python-version }} pip
if: ${{ matrix.python-version == '3.2' && steps.setup-python.outputs.python-path }}
shell: bash
run: |
python -m pip --version || { \
curl -L -O "https://bootstrap.pypa.io/pip/3.2/get-pip.py"; \
curl -L -O "https://files.pythonhosted.org/packages/b2/d0/cd115fe345dd6f07ec1c780020a7dfe74966fceeb171e0f20d1d4905b0b7/pip-7.1.2-py2.py3-none-any.whl"; \
python get-pip.py --no-setuptools --no-wheel pip-7.1.2-py2.py3-none-any.whl; }
#-------- unittest ----
- name: Upgrade Unittest for Python 2.6
if: ${{ matrix.python-version == '2.6' }}
shell: bash
run: |
# Work around deprecation of support for non-SNI clients at PyPI CDN (see https://status.python.org/incidents/hzmjhqsdjqgb)
$PIP -qq show unittest2 || { \
for u in "65/26/32b8464df2a97e6dd1b656ed26b2c194606c16fe163c695a992b36c11cdf/six-1.13.0-py2.py3-none-any.whl" \
"f2/94/3af39d34be01a24a6e65433d19e107099374224905f1e0cc6bbe1fd22a2f/argparse-1.4.0-py2.py3-none-any.whl" \
"c7/a3/c5da2a44c85bfbb6eebcfc1dde24933f8704441b98fdde6528f4831757a6/linecache2-1.0.0-py2.py3-none-any.whl" \
"17/0a/6ac05a3723017a967193456a2efa0aa9ac4b51456891af1e2353bb9de21e/traceback2-1.4.0-py2.py3-none-any.whl" \
"72/20/7f0f433060a962200b7272b8c12ba90ef5b903e218174301d0abfd523813/unittest2-1.1.0-py2.py3-none-any.whl"; do \
curl -L -O "https://files.pythonhosted.org/packages/${u}"; \
$PIP install ${u##*/}; \
done; }
# make tests use unittest2
for test in ./test/test_*.py ./test/helper.py; do
sed -r -i -e '/^import unittest$/s/test/test2 as unittest/' "$test"
done
#-------- nose --------
- name: Install nose for Python ${{ matrix.python-version }}
if: ${{ (matrix.python-version != '3.2' && steps.setup-python.outputs.python-path) || (matrix.python-impl == 'cpython' && (matrix.python-version == '2.7' || matrix.python-version == env.next)) }}
shell: bash
run: |
echo "$PATH"
echo "$PYTHONHOME"
# Use PyNose for recent Pythons instead of Nose
py3ver="${{ matrix.python-version }}"
py3ver=${py3ver#3.}
[ "$py3ver" != "${{ matrix.python-version }}" ] && py3ver=${py3ver%.*} || py3ver=0
[ "$py3ver" -ge 9 ] && nose=pynose || nose=nose
$PIP -qq show $nose || $PIP install $nose
- name: Install nose for other Python 2
if: ${{ matrix.python-impl == 'jython' || (matrix.python-impl == 'cpython' && matrix.python-version == '2.6') }}
shell: bash
run: |
# Work around deprecation of support for non-SNI clients at PyPI CDN (see https://status.python.org/incidents/hzmjhqsdjqgb)
$PIP -qq show nose || { \
curl -L -O "https://files.pythonhosted.org/packages/99/4f/13fb671119e65c4dce97c60e67d3fd9e6f7f809f2b307e2611f4701205cb/nose-1.3.7-py2-none-any.whl"; \
$PIP install nose-1.3.7-py2-none-any.whl; }
- name: Install nose for other Python 3
if: ${{ matrix.python-version == '3.2' && steps.setup-python.outputs.python-path }}
shell: bash
run: |
$PIP -qq show nose || { \
curl -L -O "https://files.pythonhosted.org/packages/15/d8/dd071918c040f50fa1cf80da16423af51ff8ce4a0f2399b7bf8de45ac3d9/nose-1.3.7-py3-none-any.whl"; \
$PIP install nose-1.3.7-py3-none-any.whl; }
- name: Set up nosetest test
if: ${{ contains(needs.select.outputs.test-set, matrix.ytdl-test-set ) }}
shell: bash
run: |
# set PYTHON_VER
PYTHON_VER=${{ matrix.python-version }}
[ "${PYTHON_VER#*-}" != "$PYTHON_VER" ] || PYTHON_VER="${{ matrix.python-impl }}-${PYTHON_VER}"
echo "PYTHON_VER=$PYTHON_VER" >> "$GITHUB_ENV"
echo "PYTHON_IMPL=${{ matrix.python-impl }}" >> "$GITHUB_ENV"
# define a test to validate the Python version used by nosetests
printf '%s\n' \
'from __future__ import unicode_literals' \
'import sys, os, platform' \
'try:' \
' import unittest2 as unittest' \
'except ImportError:' \
' import unittest' \
'class TestPython(unittest.TestCase):' \
' def setUp(self):' \
' self.ver = os.environ["PYTHON_VER"].split("-")' \
' def test_python_ver(self):' \
' self.assertEqual(["%d" % v for v in sys.version_info[:2]], self.ver[-1].split(".")[:2])' \
' self.assertTrue(sys.version.startswith(self.ver[-1]))' \
' self.assertIn(self.ver[0], ",".join((sys.version, platform.python_implementation())).lower())' \
' def test_python_impl(self):' \
' self.assertIn(platform.python_implementation().lower(), (os.environ["PYTHON_IMPL"], self.ver[0]))' \
> test/test_python.py
#-------- TESTS -------
- name: Run tests
if: ${{ contains(needs.select.outputs.test-set, matrix.ytdl-test-set ) }}
continue-on-error: ${{ matrix.ytdl-test-set == 'download' || matrix.python-impl == 'jython' }}
env:
YTDL_TEST_SET: ${{ matrix.ytdl-test-set }}
run: |
./devscripts/run_tests.${{ matrix.run-tests-ext }}
flake8:
name: Linter
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.9
- name: Install flake8
run: pip install flake8
- name: Run flake8
run: flake8 .

View File

@ -246,3 +246,5 @@ Enes Solak
Nathan Rossi
Thomas van der Berg
Luca Cherubin
Adrian Heine
Henrik Heimbuerger

View File

@ -150,7 +150,7 @@ After you have ensured this site is distributing its content legally, you can fo
# TODO more properties (see youtube_dl/extractor/common.py)
}
```
5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/extractors.py). This makes the extractor available for use, as long as the class ends with `IE` (see the sketch after this list).
6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with `only_matching` key in test's dict are not counted.
7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303). Add tests and code for as many as you want.
8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
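A minimal sketch of the import from step 5, assuming the extractor class from the template is `YourExtractorIE` in `youtube_dl/extractor/yourextractor.py` (the placeholder names used elsewhere in this guide):
```python
# in youtube_dl/extractor/extractors.py, kept in rough alphabetical order
from .yourextractor import YourExtractorIE
```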

339
ChangeLog
View File

@ -1,3 +1,342 @@
version 2021.12.17
Core
* [postprocessor/ffmpeg] Show ffmpeg output on error (#22680, #29336)
Extractors
* [youtube] Update signature function patterns (#30363, #30366)
* [peertube] Only call description endpoint if necessary (#29383)
* [periscope] Pass referer to HLS requests (#29419)
- [liveleak] Remove extractor (#17625, #24222, #29331)
+ [pornhub] Add support for pornhubthbh7ap3u.onion
* [pornhub] Detect geo restriction
* [pornhub] Dismiss tbr extracted from download URLs (#28927)
* [curiositystream:collection] Extend _VALID_URL (#26326, #29117)
* [youtube] Make get_video_info processing more robust (#29333)
* [youtube] Workaround for get_video_info request (#29333)
* [bilibili] Strip uploader name (#29202)
* [youtube] Update invidious instance list (#29281)
* [umg:de] Update GraphQL API URL (#29304)
* [nrk] Switch psapi URL to https (#29344)
+ [egghead] Add support for app.egghead.io (#28404, #29303)
* [appleconnect] Fix extraction (#29208)
+ [orf:tvthek] Add support for MPD formats (#28672, #29236)
version 2021.06.06
Extractors
* [facebook] Improve login required detection
* [youporn] Fix formats and view count extraction (#29216)
* [orf:tvthek] Fix thumbnails extraction (#29217)
* [formula1] Fix extraction (#29206)
* [ard] Relax URL regular expression and fix video ids (#22724, #29091)
+ [ustream] Detect https embeds (#29133)
* [ted] Prefer own formats over external sources (#29142)
* [twitch:clips] Improve extraction (#29149)
+ [twitch:clips] Add access token query to download URLs (#29136)
* [youtube] Fix get_video_info request (#29086, #29165)
* [vimeo] Fix vimeo pro embed extraction (#29126)
* [redbulltv] Fix embed data extraction (#28770)
* [shahid] Relax URL regular expression (#28772, #28930)
version 2021.05.16
Core
* [options] Fix thumbnail option group name (#29042)
* [YoutubeDL] Improve extract_info doc (#28946)
Extractors
+ [playstuff] Add support for play.stuff.co.nz (#28901, #28931)
* [eroprofile] Fix extraction (#23200, #23626, #29008)
+ [vivo] Add support for vivo.st (#29009)
+ [generic] Add support for og:audio (#28311, #29015)
* [phoenix] Fix extraction (#29057)
+ [generic] Add support for sibnet embeds
+ [vk] Add support for sibnet embeds (#9500)
+ [generic] Add Referer header for direct videojs download URLs (#2879,
#20217, #29053)
* [orf:radio] Switch download URLs to HTTPS (#29012, #29046)
- [blinkx] Remove extractor (#28941)
* [medaltv] Relax URL regular expression (#28884)
+ [funimation] Add support for optional lang code in URLs (#28950)
+ [gdcvault] Add support for HTML5 videos
* [dispeak] Improve FLV extraction (#13513, #28970)
* [kaltura] Improve iframe extraction (#28969)
* [kaltura] Make embed code alternatives actually work
* [cda] Improve extraction (#28709, #28937)
* [twitter] Improve formats extraction from vmap URL (#28909)
* [xtube] Fix formats extraction (#28870)
* [svtplay] Improve extraction (#28507, #28876)
* [tv2dk] Fix extraction (#28888)
version 2021.04.26
Extractors
+ [xfileshare] Add support for wolfstream.tv (#28858)
* [francetvinfo] Improve video id extraction (#28792)
* [medaltv] Fix extraction (#28807)
* [tver] Redirect all downloads to Brightcove (#28849)
* [go] Improve video id extraction (#25207, #25216, #26058)
* [youtube] Fix lazy extractors (#28780)
+ [bbc] Extract description and timestamp from __INITIAL_DATA__ (#28774)
* [cbsnews] Fix extraction for python <3.6 (#23359)
version 2021.04.17
Core
+ [utils] Add support for experimental HTTP response status code
308 Permanent Redirect (#27877, #28768)
Extractors
+ [lbry] Add support for HLS videos (#27877, #28768)
* [youtube] Fix stretched ratio calculation
* [youtube] Improve stretch extraction (#28769)
* [youtube:tab] Improve grid extraction (#28725)
+ [youtube:tab] Detect series playlist on playlists page (#28723)
+ [youtube] Add more invidious instances (#28706)
* [pluralsight] Extend anti-throttling timeout (#28712)
* [youtube] Improve URL to extractor routing (#27572, #28335, #28742)
+ [maoritv] Add support for maoritelevision.com (#24552)
+ [youtube:tab] Pass innertube context and x-goog-visitor-id header along with
continuation requests (#28702)
* [mtv] Fix Viacom A/B Testing Video Player extraction (#28703)
+ [pornhub] Extract DASH and HLS formats from get_media end point (#28698)
* [cbssports] Fix extraction (#28682)
* [jamendo] Fix track extraction (#28686)
* [curiositystream] Fix format extraction (#26845, #28668)
version 2021.04.07
Core
* [extractor/common] Use compat_cookies_SimpleCookie for _get_cookies
+ [compat] Introduce compat_cookies_SimpleCookie
* [extractor/common] Improve JSON-LD author extraction
* [extractor/common] Fix _get_cookies on python 2 (#20673, #23256, #20326,
#28640)
Extractors
* [youtube] Fix extraction of videos with restricted location (#28685)
+ [line] Add support for live.line.me (#17205, #28658)
* [vimeo] Improve extraction (#28591)
* [youku] Update ccode (#17852, #28447, #28460, #28648)
* [youtube] Prefer direct entry metadata over entry metadata from playlist
(#28619, #28636)
* [screencastomatic] Fix extraction (#11976, #24489)
+ [palcomp3] Add support for palcomp3.com (#13120)
+ [arnes] Add support for video.arnes.si (#28483)
+ [youtube:tab] Add support for hashtags (#28308)
version 2021.04.01
Extractors
* [youtube] Setup CONSENT cookie when needed (#28604)
* [vimeo] Fix password protected review extraction (#27591)
* [youtube] Improve age-restricted video extraction (#28578)
version 2021.03.31
Extractors
* [vlive] Fix inkey request (#28589)
* [francetvinfo] Improve video id extraction (#28584)
+ [instagram] Extract duration (#28469)
* [instagram] Improve title extraction (#28469)
+ [sbs] Add support for ondemand watch URLs (#28566)
* [youtube] Fix video's channel extraction (#28562)
* [picarto] Fix live stream extraction (#28532)
* [vimeo] Fix unlisted video extraction (#28414)
* [youtube:tab] Fix playlist/community continuation items extraction (#28266)
* [ard] Improve clip id extraction (#22724, #28528)
version 2021.03.25
Extractors
+ [zoom] Add support for zoom.us (#16597, #27002, #28531)
* [bbc] Fix BBC IPlayer Episodes/Group extraction (#28360)
* [youtube] Fix default value for youtube_include_dash_manifest (#28523)
* [zingmp3] Fix extraction (#11589, #16409, #16968, #27205)
+ [vgtv] Add support for new tv.aftonbladet.se URL schema (#28514)
+ [tiktok] Detect private videos (#28453)
* [vimeo:album] Fix extraction for albums whose number of videos is a
multiple of the page size (#28486)
* [vvvvid] Fix kenc format extraction (#28473)
* [mlb] Fix video extraction (#21241)
* [svtplay] Improve extraction (#28448)
* [applepodcasts] Fix extraction (#28445)
* [rtve] Improve extraction
+ Extract all formats
* Fix RTVE Infantil extraction (#24851)
+ Extract is_live and series
version 2021.03.14
Core
+ Introduce release_timestamp meta field (#28386)
Extractors
+ [southpark] Add support for southparkstudios.com (#28413)
* [southpark] Fix extraction (#26763, #28413)
* [sportdeutschland] Fix extraction (#21856, #28425)
* [pinterest] Reduce the number of HLS format requests
* [peertube] Improve thumbnail extraction (#28419)
* [tver] Improve title extraction (#28418)
* [fujitv] Fix HLS formats extension (#28416)
* [shahid] Fix format extraction (#28383)
+ [lbry] Add support for channel filters (#28385)
+ [bandcamp] Extract release timestamp
+ [lbry] Extract release timestamp (#28386)
* [pornhub] Detect flagged videos
+ [pornhub] Extract formats from get_media end point (#28395)
* [bilibili] Fix video info extraction (#28341)
+ [cbs] Add support for Paramount+ (#28342)
+ [trovo] Add Origin header to VOD formats (#28346)
* [voxmedia] Fix volume embed extraction (#28338)
version 2021.03.03
Extractors
* [youtube:tab] Switch continuation to browse API (#28289, #28327)
* [9c9media] Fix extraction for videos with multiple ContentPackages (#28309)
+ [bbc] Add support for BBC Reel videos (#21870, #23660, #28268)
version 2021.03.02
Extractors
* [zdf] Rework extractors (#11606, #13473, #17354, #21185, #26711, #27068,
#27930, #28198, #28199, #28274)
* Generalize cross-extractor video ids for zdf based extractors
* Improve extraction
* Fix 3sat and phoenix
* [stretchinternet] Fix extraction (#28297)
* [urplay] Fix episode data extraction (#28292)
+ [bandaichannel] Add support for b-ch.com (#21404)
* [srgssr] Improve extraction (#14717, #14725, #27231, #28238)
+ Extract subtitle
* Fix extraction for new videos
* Update srf download domains
* [vvvvid] Reduce season request payload size
+ [vvvvid] Extract series sublists playlist title (#27601, #27618)
+ [dplay] Extract Ad-Free uplynk URLs (#28160)
+ [wat] Detect DRM protected videos (#27958)
* [tf1] Improve extraction (#27980, #28040)
* [tmz] Fix and improve extraction (#24603, #24687, #28211)
+ [gedidigital] Add support for Gedi group sites (#7347, #26946)
* [youtube] Fix get_video_info request
version 2021.02.22
Core
+ [postprocessor/embedthumbnail] Recognize atomicparsley binary in lowercase
(#28112)
Extractors
* [apa] Fix and improve extraction (#27750)
+ [youporn] Extract duration (#28019)
+ [peertube] Add support for canard.tube (#28190)
* [youtube] Fixup m4a_dash formats (#28165)
+ [samplefocus] Add support for samplefocus.com (#27763)
+ [vimeo] Add support for unlisted video source format extraction
* [viki] Improve extraction (#26522, #28203)
* Extract uploader URL and episode number
* Report login required error
+ Extract 480p formats
* Fix API v4 calls
* [ninegag] Unescape title (#28201)
* [youtube] Improve URL regular expression (#28193)
+ [youtube] Add support for redirect.invidious.io (#28193)
+ [dplay] Add support for de.hgtv.com (#28182)
+ [dplay] Add support for discoveryplus.com (#24698)
+ [simplecast] Add support for simplecast.com (#24107)
* [youtube] Fix uploader extraction in flat playlist mode (#28045)
* [yandexmusic:playlist] Request missing tracks in chunks (#27355, #28184)
+ [storyfire] Add support for storyfire.com (#25628, #26349)
+ [zhihu] Add support for zhihu.com (#28177)
* [youtube] Fix controversial videos when authenticated with cookies (#28174)
* [ccma] Fix timestamp parsing in python 2
+ [videopress] Add support for video.wordpress.com
* [kakao] Improve info extraction and detect geo restriction (#26577)
* [xboxclips] Fix extraction (#27151)
* [ard] Improve formats extraction (#28155)
+ [canvas] Add support for dagelijksekost.een.be (#28119)
version 2021.02.10
Extractors
* [youtube:tab] Improve grid continuation extraction (#28130)
* [ign] Fix extraction (#24771)
+ [xhamster] Extract format filesize
+ [xhamster] Extract formats from xplayer settings (#28114)
+ [youtube] Add support for phone/tablet JS player (#26424)
* [archiveorg] Fix and improve extraction (#21330, #23586, #25277, #26780,
#27109, #27236, #28063)
+ [cda] Detect geo restricted videos (#28106)
* [urplay] Fix extraction (#28073, #28074)
* [youtube] Fix release date extraction (#28094)
+ [youtube] Extract abr and vbr (#28100)
* [youtube] Skip OTF formats (#28070)
version 2021.02.04.1
Extractors
* [youtube] Prefer DASH formats (#28070)
* [azmedien] Fix extraction (#28064)
version 2021.02.04
Extractors
* [pornhub] Implement lazy playlist extraction
* [svtplay] Fix video id extraction (#28058)
+ [pornhub] Add support for authentication (#18797, #21416, #24294)
* [pornhub:user] Improve paging
+ [pornhub:user] Add support for URLs unavailable via /videos page (#27853)
+ [bravotv] Add support for oxygen.com (#13357, #22500)
+ [youtube] Pass embed URL to get_video_info request
* [ccma] Improve metadata extraction (#27994)
+ Extract age limit, alt title, categories, series and episode number
* Fix timestamp and multiple subtitles extraction
* [egghead] Update API domain (#28038)
- [vidzi] Remove extractor (#12629)
* [vidio] Improve metadata extraction
* [youtube] Improve subtitles extraction
* [youtube] Fix chapter extraction fallback
* [youtube] Rewrite extractor
* Improve format sorting
* Remove unused code
* Fix series metadata extraction
* Fix trailer video extraction
* Improve error reporting
+ Extract video location
+ [vvvvid] Add support for youtube embeds (#27825)
* [googledrive] Report download page errors (#28005)
* [vlive] Fix error message decoding for python 2 (#28004)
* [youtube] Improve DASH formats file size extraction
* [cda] Improve birth validation detection (#14022, #27929)
+ [awaan] Extract uploader id (#27963)
+ [medialaan] Add support for DPG Media MyChannels based websites (#14871, #15597,
#16106, #16489)
* [abcnews] Fix extraction (#12394, #27920)
* [AMP] Fix upload date and timestamp extraction (#27970)
* [tv4] Relax URL regular expression (#27964)
+ [tv2] Add support for mtvuutiset.fi (#27744)
* [adn] Improve login warning reporting
* [zype] Fix uplynk id extraction (#27956)
+ [adn] Add support for authentication (#17091, #27841, #27937)
version 2021.01.24.1
Core

151
README.md
View File

@ -33,7 +33,7 @@ Windows users can [download an .exe file](https://yt-dl.org/latest/youtube-dl.ex
You can also use pip:
sudo -H pip install --upgrade youtube-dl
This command will update youtube-dl if you have already installed it. See the [pypi page](https://pypi.python.org/pypi/youtube_dl) for more information.
macOS users can install youtube-dl with [Homebrew](https://brew.sh/):
@ -287,7 +287,7 @@ Alternatively, refer to the [developer instructions](#developer-instructions) fo
--no-cache-dir Disable filesystem caching
--rm-cache-dir Delete all filesystem cache files
## Thumbnail Options:
--write-thumbnail Write thumbnail image to disk
--write-all-thumbnails Write all thumbnail image formats to disk
@ -563,7 +563,7 @@ The basic usage is not to set any template arguments when downloading a single f
- `is_live` (boolean): Whether this video is a live stream or a fixed-length video
- `start_time` (numeric): Time in seconds where the reproduction should start, as specified in the URL
- `end_time` (numeric): Time in seconds where the reproduction should end, as specified in the URL
- `format` (string): A human-readable description of the format
- `format_id` (string): Format code specified by `--format`
- `format_note` (string): Additional info about the format
- `width` (numeric): Width of the video
@ -632,7 +632,7 @@ To use percent literals in an output template use `%%`. To output to stdout use
The current default template is `%(title)s-%(id)s.%(ext)s`.
In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded filename to a Windows system or the filename through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title.
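A minimal sketch of the effect, using `sanitize_filename()` from `youtube_dl/utils.py`, the helper behind `--restrict-filenames` (the exact output depends on the input):
```python
from youtube_dl.utils import sanitize_filename

# restricted=True mirrors --restrict-filenames: non-ASCII characters,
# spaces and '&' are replaced or dropped
print(sanitize_filename('中 spaces & ampersand.mp4', restricted=True))
```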
#### Output template and Windows batch files
@ -675,7 +675,7 @@ The general syntax for format selection is `--format FORMAT` or shorter `-f FORM
**tl;dr:** [navigate me to examples](#format-selection-examples).
The simplest case is requesting a specific format, for example with `-f 22` you can download the format with format code equal to 22. You can get the list of available format codes for particular video using `--list-formats` or `-F`. Note that these format codes are extractor specific.
You can also use a file extension (currently `3gp`, `aac`, `flv`, `m4a`, `mp3`, `mp4`, `ogg`, `wav`, `webm` are supported) to download the best quality format of a particular file extension served as a single file, e.g. `-f webm` will download the best quality format with the `webm` extension served as a single file.
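The same selection is available from the embedding API (see the EMBEDDING YOUTUBE-DL section below); a minimal sketch, reusing the example URL from this README:
```python
import youtube_dl

# equivalent to: youtube-dl -f 22 'https://www.youtube.com/watch?v=BaW_jenozKc'
ydl_opts = {'format': '22'}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```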
@ -760,7 +760,7 @@ Videos can be filtered by their upload date using the options `--date`, `--dateb
- Absolute dates: Dates in the format `YYYYMMDD`.
- Relative dates: Dates in the format `(now|today)[+-][0-9](day|week|month|year)(s)?`
Examples:
```bash
@ -893,7 +893,7 @@ Since June 2012 ([#342](https://github.com/ytdl-org/youtube-dl/issues/342)) yout
### The exe throws an error due to missing `MSVCR100.dll`
To run the exe you need to first install the [Microsoft Visual C++ 2010 Service Pack 1 Redistributable Package (x86)](https://download.microsoft.com/download/1/6/5/165255E7-1014-4D0A-B094-B6A430A6BFFC/vcredist_x86.exe).
### On Windows, how should I set up ffmpeg and youtube-dl? Where should I put the exe files?
@ -918,7 +918,7 @@ Either prepend `https://www.youtube.com/watch?v=` or separate the ID from the op
Use the `--cookies` option, for example `--cookies /path/to/cookies/file.txt`.
In order to extract cookies from browser use any conforming browser extension for exporting cookies. For example, [Get cookies.txt LOCALLY](https://chrome.google.com/webstore/detail/get-cookiestxt-locally/cclelndahbckbenkjhflpdbgdldlbecc) (for Chrome) or [cookies.txt](https://addons.mozilla.org/en-US/firefox/addon/cookies-txt/) (for Firefox).
Note that the cookies file must be in Mozilla/Netscape format and the first line of the cookies file must be either `# HTTP Cookie File` or `# Netscape HTTP Cookie File`. Make sure you have correct [newline format](https://en.wikipedia.org/wiki/Newline) in the cookies file and convert newlines if necessary to correspond with your OS, namely `CRLF` (`\r\n`) for Windows and `LF` (`\n`) for Unix and Unix-like systems (Linux, macOS, etc.). `HTTP Error 400: Bad Request` when using `--cookies` is a good sign of invalid newline format.
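If you need to convert the newlines, a minimal Python sketch (the file name is a placeholder):
```python
# rewrite a cookies file with Unix (LF) newlines
with open('cookies.txt', 'rb') as f:
    data = f.read().replace(b'\r\n', b'\n')
with open('cookies.txt', 'wb') as f:
    f.write(data)
```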
@ -1000,6 +1000,8 @@ To run the test, simply invoke your favorite test runner, or execute a test file
python test/test_download.py
nosetests
For Python versions 3.6 and later, you can use [pynose](https://pypi.org/project/pynose/) to implement `nosetests`. The original [nose](https://pypi.org/project/nose/) has not been upgraded for 3.10 and later.
See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.
If you want to create a build of youtube-dl yourself, you'll need
@ -1069,9 +1071,11 @@ After you have ensured this site is distributing its content legally, you can fo
}
```
5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test (actually, test case) then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note:
* the test names use the extractor class name **without the trailing `IE`**
* tests with `only_matching` key in test's dict are not counted.
8. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303). Add tests and code for as many as you want.
9. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
$ flake8 youtube_dl/extractor/yourextractor.py
@ -1089,7 +1093,7 @@ In any case, thank you very much for your contributions!
## youtube-dl coding conventions
This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.
Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes thus keeping old youtube-dl versions working. Even though this breakage issue is easily fixed by emitting a new version of youtube-dl with a fix incorporated, all the previous versions become broken in all repositories and distros' packages that may not be so prompt in fetching the update from us. Needless to say, some non rolling release distros may never receive an update at all.
@ -1112,7 +1116,7 @@ Say you have some source dictionary `meta` that you've fetched as JSON with HTTP
```python
meta = self._download_json(url, video_id)
```
Assume at this point `meta`'s layout is:
```python
@ -1156,7 +1160,7 @@ description = self._search_regex(
```
On failure this code will silently continue the extraction with `description` set to `None`. That is useful for metafields that may or may not be present.
### Provide fallbacks
When extracting metadata try to do so from multiple sources. For example if `title` is present in several places, try extracting from at least some of them. This makes it more future-proof in case some of the sources become unavailable.
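A minimal sketch, assuming a `meta` dict parsed from an API response and the downloaded `webpage`, as in the surrounding examples:
```python
# prefer the API title, fall back to the og:title property, then to <title>
title = (
    meta.get('title')
    or self._og_search_title(webpage, default=None)
    or self._html_search_regex(r'<title>([^<]+)</title>', webpage, 'title'))
```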
@ -1204,7 +1208,7 @@ r'(id|ID)=(?P<id>\d+)'
#### Make regular expressions relaxed and flexible
When using regular expressions try to write them fuzzy, relaxed and flexible, skipping insignificant parts that are more likely to change, allowing both single and double quotes for quoted values and so on.
##### Example
Say you need to extract `title` from the following HTML code:
@ -1228,7 +1232,7 @@ title = self._search_regex(
webpage, 'title', group='title')
```
Note how you tolerate potential changes in the `style` attribute's value or switch from using double quotes to single for `class` attribute:
The code definitely should not look like:
@ -1329,27 +1333,114 @@ Wrap all extracted numeric data into safe functions from [`youtube_dl/utils.py`]
Use `url_or_none` for safe URL processing.
Use `traverse_obj` for safe metadata extraction from parsed JSON.
Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta fields extraction, `parse_resolution`, `parse_duration` for `duration` extraction, `parse_age_limit` for `age_limit` extraction.
Explore [`youtube_dl/utils.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py) for more useful convenience functions.
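A few of these in action (a sketch; return values are shown as comments):
```python
from youtube_dl.utils import (
    int_or_none,
    parse_duration,
    unified_strdate,
    url_or_none,
)

url_or_none('https://cdn.example.com/video.mp4')  # the URL, unchanged
url_or_none('not a url')                          # None
int_or_none('1234')                               # 1234
parse_duration('1:23')                            # 83 seconds
unified_strdate('Dec 14, 2012')                   # '20121214'
```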
#### More examples
##### Safely extract optional description from parsed JSON
When processing complex JSON, as often returned by site API requests or stashed in web pages for "hydration", you can use the `traverse_obj()` utility function to handle multiple fallback values and to ensure the expected type of metadata items. The function's docstring defines how the function works: also review usage in the codebase for more examples.
In this example, a text `description`, or `None`, is pulled from the `.result.video[0].summary` member of the parsed JSON `response`, if available.
```python
description = traverse_obj(response, ('result', 'video', 0, 'summary', T(compat_str)))
```
`T(...)` is a shorthand for a one-element set literal: where support for Python 2.6, which lacks set literal syntax, is not required, `T(type_or_transformation)` could be written as the set literal `{type_or_transformation}`.
Some extractors use the older and less capable `try_get()` function in the same way.
```python
description = try_get(response, lambda x: x['result']['video'][0]['summary'], compat_str)
```
##### Safely extract more optional metadata
In this example, various optional metadata values are extracted from the `.result.video[0]` member of the parsed JSON `response`, which is expected to be a JS object, parsed into a `dict`, with no crash if that isn't so, or if any of the target values are missing or invalid.
```python
video = traverse_obj(response, ('result', 'video', 0, T(dict))) or {}
# formerly:
# video = try_get(response, lambda x: x['result']['video'][0], dict) or {}
description = video.get('summary')
duration = float_or_none(video.get('durationMs'), scale=1000)
view_count = int_or_none(video.get('views'))
```
#### Safely extract nested lists
Suppose you've extracted JSON like this into a Python data structure named `media_json` using, say, the `_download_json()` or `_parse_json()` methods of `InfoExtractor`:
```json
{
"title": "Example video",
"comment": "try extracting this",
"media": [{
"type": "bad",
"size": 320,
"url": "https://some.cdn.site/bad.mp4"
}, {
"type": "streaming",
"url": "https://some.cdn.site/hls.m3u8"
}, {
"type": "super",
"size": 1280,
"url": "https://some.cdn.site/good.webm"
}],
"moreStuff": "more values",
...
}
```
Then extractor code like this can collect the various fields of the JSON:
```python
...
from ..utils import (
determine_ext,
int_or_none,
T,
traverse_obj,
txt_or_none,
url_or_none,
)
...
...
info_dict = {}
# extract title and description if valid and not empty
info_dict.update(traverse_obj(media_json, {
'title': ('title', T(txt_or_none)),
'description': ('comment', T(txt_or_none)),
}))
# extract any recognisable media formats
fmts = []
# traverse into "media" list, extract `dict`s with desired keys
for fmt in traverse_obj(media_json, ('media', Ellipsis, {
'format_id': ('type', T(txt_or_none)),
'url': ('url', T(url_or_none)),
'width': ('size', T(int_or_none)), })):
# bad `fmt` values were `None` and removed
if 'url' not in fmt:
continue
fmt_url = fmt['url'] # known to be valid URL
ext = determine_ext(fmt_url)
if ext == 'm3u8':
fmts.extend(self._extract_m3u8_formats(fmt_url, video_id, 'mp4', fatal=False))
else:
fmt['ext'] = ext
fmts.append(fmt)
# sort, raise if no formats
self._sort_formats(fmts)
info_dict['formats'] = fmts
...
```
With this approach the extractor raises a clear exception, rather than crashing unpredictably, if the JSON structure changes so that no formats are found.
# EMBEDDING YOUTUBE-DL
youtube-dl makes the best effort to be a good command-line program, and thus should be callable from any programming language. If you encounter any problems parsing its output, feel free to [create a report](https://github.com/ytdl-org/youtube-dl/issues/new).
@ -1406,7 +1497,11 @@ with youtube_dl.YoutubeDL(ydl_opts) as ydl:
# BUGS
Bugs and suggestions should be reported in the issue tracker: <https://github.com/ytdl-org/youtube-dl/issues> (<https://yt-dl.org/bug> is an alias for this). Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in the IRC channel [#youtube-dl](irc://chat.freenode.net/#youtube-dl) on freenode ([webchat](https://webchat.freenode.net/?randomnick=1&channels=youtube-dl)).
## Opening a bug report or suggestion
Be sure to follow instructions provided **below** and **in the issue tracker**. Complete the appropriate issue template fully. Consider whether your problem is covered by an existing issue: if so, follow the discussion there. Avoid commenting on existing duplicate issues as such comments do not add to the discussion of the issue and are liable to be treated as spam.
**Please include the full output of youtube-dl when run with `-v`**, i.e. **add** `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
```
@ -1426,17 +1521,17 @@ $ youtube-dl -v <your command line>
The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
Finally please review your issue to avoid various common mistakes (you can and should use this as a checklist) listed below.
### Is the description of the issue itself sufficient?
We often get issue reports that are hard to understand. To avoid subsequent clarifications, and to assist participants who are not native English speakers, please elaborate on what feature you are requesting, or what bug you want to be fixed.
Make sure that it's obvious
- What the problem is
- How it could be fixed
- How your proposed solution would look
If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.
@ -1446,14 +1541,14 @@ If your server has multiple IPs or you suspect censorship, adding `--call-home`
**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.
### Is the issue already documented?
Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/ytdl-org/youtube-dl/search?type=Issues) of this repository. Initially, at least, use the search term `-label:duplicate` to focus on active issues. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
### Are you using the latest version?
Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive are already fixed, but people are using outdated versions. This goes for feature requests as well.
### Why are existing options not enough?
Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/ytdl-org/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.

1
devscripts/__init__.py Normal file
View File

@ -0,0 +1 @@
# Empty file needed to make devscripts.utils properly importable from outside

View File

@ -5,8 +5,12 @@ import os
from os.path import dirname as dirn
import sys
sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))
import youtube_dl
from youtube_dl.compat import compat_open as open
from utils import read_file
BASH_COMPLETION_FILE = "youtube-dl.bash-completion"
BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"
@ -18,9 +22,8 @@ def build_completion(opt_parser):
for option in group.option_list:
# for every long flag
opts_flag.append(option.get_opt_string())
template = read_file(BASH_COMPLETION_TEMPLATE)
with open(BASH_COMPLETION_FILE, "w", encoding='utf-8') as f:
# just using the special char
filled_template = template.replace("{{flags}}", " ".join(opts_flag))
f.write(filled_template)

83
devscripts/cli_to_api.py Executable file
View File

@@ -0,0 +1,83 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals

"""
This script displays the API parameters corresponding to a yt-dl command line

Example:
$ ./cli_to_api.py -f best
{u'format': 'best'}
$
"""

# Allow direct execution
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import youtube_dl
from types import MethodType


def cli_to_api(*opts):
    YDL = youtube_dl.YoutubeDL

    # to extract the parsed options, break out of YoutubeDL instantiation

    # return options via this Exception
    class ParseYTDLResult(Exception):
        def __init__(self, result):
            super(ParseYTDLResult, self).__init__('result')
            self.opts = result

    # replacement constructor that raises ParseYTDLResult
    def ytdl_init(ydl, ydl_opts):
        super(YDL, ydl).__init__(ydl_opts)
        raise ParseYTDLResult(ydl_opts)

    # patch in the constructor
    YDL.__init__ = MethodType(ytdl_init, YDL)

    # core parser
    def parsed_options(argv):
        try:
            youtube_dl._real_main(list(argv))
        except ParseYTDLResult as result:
            return result.opts

    # from https://github.com/yt-dlp/yt-dlp/issues/5859#issuecomment-1363938900
    default = parsed_options([])

    def neq_opt(a, b):
        if a == b:
            return False
        if a is None and repr(type(b)).endswith(".utils.DateRange'>"):
            return '0001-01-01 - 9999-12-31' != '{0}'.format(b)
        return a != b

    diff = dict((k, v) for k, v in parsed_options(opts).items() if neq_opt(default[k], v))
    if 'postprocessors' in diff:
        diff['postprocessors'] = [pp for pp in diff['postprocessors'] if pp not in default['postprocessors']]
    return diff


def main():
    from pprint import PrettyPrinter

    pprint = PrettyPrinter()
    super_format = pprint.format

    def format(object, context, maxlevels, level):
        if repr(type(object)).endswith(".utils.DateRange'>"):
            return '{0}: {1}>'.format(repr(object)[:-2], object), True, False
        return super_format(object, context, maxlevels, level)

    pprint.format = format

    pprint.pprint(cli_to_api(*sys.argv[1:]))


if __name__ == '__main__':
    main()
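A short usage sketch for the script above, assuming it is invoked from the repository root so that both youtube_dl and devscripts are importable; the flags are ordinary youtube-dl options, and the first printed dict follows from the docstring example.

# Hypothetical programmatic use of cli_to_api() defined above.
import sys
sys.path.insert(0, '.')  # assume the repository root is the working directory

from devscripts.cli_to_api import cli_to_api

print(cli_to_api('-f', 'best'))     # -> {'format': 'best'}
print(cli_to_api('--no-playlist'))  # assumed to map to {'noplaylist': True}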

View File

@@ -1,7 +1,6 @@
 #!/usr/bin/env python
 from __future__ import unicode_literals

-import io
 import json
 import mimetypes
 import netrc
@@ -10,7 +9,9 @@ import os
 import re
 import sys

-sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+dirn = os.path.dirname
+
+sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))

 from youtube_dl.compat import (
     compat_basestring,
@@ -22,6 +23,7 @@ from youtube_dl.utils import (
     make_HTTPS_handler,
     sanitized_Request,
 )
+from utils import read_file


 class GitHubReleaser(object):
@@ -89,8 +91,7 @@ def main():
     changelog_file, version, build_path = args

-    with io.open(changelog_file, encoding='utf-8') as inf:
-        changelog = inf.read()
+    changelog = read_file(changelog_file)

     mobj = re.search(r'(?s)version %s\n{2}(.+?)\n{3}' % version, changelog)
     body = mobj.group(1) if mobj else ''

View File

@@ -6,10 +6,13 @@ import os
 from os.path import dirname as dirn
 import sys

-sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
+sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))

 import youtube_dl
 from youtube_dl.utils import shell_quote

+from utils import read_file, write_file
+
 FISH_COMPLETION_FILE = 'youtube-dl.fish'
 FISH_COMPLETION_TEMPLATE = 'devscripts/fish-completion.in'

@@ -38,11 +41,9 @@ def build_completion(opt_parser):
                 complete_cmd.extend(EXTRA_ARGS.get(long_option, []))
             commands.append(shell_quote(complete_cmd))

-    with open(FISH_COMPLETION_TEMPLATE) as f:
-        template = f.read()
+    template = read_file(FISH_COMPLETION_TEMPLATE)
     filled_template = template.replace('{{commands}}', '\n'.join(commands))
-    with open(FISH_COMPLETION_FILE, 'w') as f:
-        f.write(filled_template)
+    write_file(FISH_COMPLETION_FILE, filled_template)


 parser = youtube_dl.parseOpts()[0]

View File

@@ -6,16 +6,21 @@ import sys
 import hashlib
 import os.path

+dirn = os.path.dirname
+
+sys.path.insert(0, dirn(dirn(dirn(os.path.abspath(__file__)))))
+
+from devscripts.utils import read_file, write_file
+from youtube_dl.compat import compat_open as open

 if len(sys.argv) <= 1:
     print('Specify the version number as parameter')
     sys.exit()
 version = sys.argv[1]

-with open('update/LATEST_VERSION', 'w') as f:
-    f.write(version)
+write_file('update/LATEST_VERSION', version)

-versions_info = json.load(open('update/versions.json'))
+versions_info = json.loads(read_file('update/versions.json'))
 if 'signature' in versions_info:
     del versions_info['signature']

@@ -39,5 +44,5 @@ for key, filename in filenames.items():
 versions_info['versions'][version] = new_version
 versions_info['latest'] = version

-with open('update/versions.json', 'w') as jsonf:
+with open('update/versions.json', 'w', encoding='utf-8') as jsonf:
     json.dump(versions_info, jsonf, indent=4, sort_keys=True)

View File

@@ -2,14 +2,21 @@
 from __future__ import unicode_literals

 import json
+import os.path
+import sys

-versions_info = json.load(open('update/versions.json'))
+dirn = os.path.dirname
+
+sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))
+
+from utils import read_file, write_file
+
+versions_info = json.loads(read_file('update/versions.json'))
 version = versions_info['latest']
 version_dict = versions_info['versions'][version]

 # Read template page
-with open('download.html.in', 'r', encoding='utf-8') as tmplf:
-    template = tmplf.read()
+template = read_file('download.html.in')

 template = template.replace('@PROGRAM_VERSION@', version)
 template = template.replace('@PROGRAM_URL@', version_dict['bin'][0])
@@ -18,5 +25,5 @@ template = template.replace('@EXE_URL@', version_dict['exe'][0])
 template = template.replace('@EXE_SHA256SUM@', version_dict['exe'][1])
 template = template.replace('@TAR_URL@', version_dict['tar'][0])
 template = template.replace('@TAR_SHA256SUM@', version_dict['tar'][1])

-with open('download.html', 'w', encoding='utf-8') as dlf:
-    dlf.write(template)
+write_file('download.html', template)

View File

@@ -5,17 +5,22 @@ from __future__ import with_statement, unicode_literals

 import datetime
 import glob
-import io  # For Python 2 compatibility
 import os
 import re
+import sys

-year = str(datetime.datetime.now().year)
+dirn = os.path.dirname
+
+sys.path.insert(0, dirn(dirn(dirn(os.path.abspath(__file__)))))
+
+from devscripts.utils import read_file, write_file
+from youtube_dl import compat_str
+
+year = compat_str(datetime.datetime.now().year)
 for fn in glob.glob('*.html*'):
-    with io.open(fn, encoding='utf-8') as f:
-        content = f.read()
+    content = read_file(fn)
     newc = re.sub(r'(?P<copyright>Copyright © 2011-)(?P<year>[0-9]{4})', 'Copyright © 2011-' + year, content)
     if content != newc:
         tmpFn = fn + '.part'
-        with io.open(tmpFn, 'wt', encoding='utf-8') as outf:
-            outf.write(newc)
+        write_file(tmpFn, newc)
         os.rename(tmpFn, fn)

View File

@@ -2,10 +2,16 @@
 from __future__ import unicode_literals

 import datetime
-import io
 import json
+import os.path
 import textwrap
+import sys

+dirn = os.path.dirname
+
+sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))
+
+from utils import write_file

 atom_template = textwrap.dedent("""\
     <?xml version="1.0" encoding="utf-8"?>
@@ -72,5 +78,4 @@ for v in versions:
 entries_str = textwrap.indent(''.join(entries), '\t')
 atom_template = atom_template.replace('@ENTRIES@', entries_str)

-with io.open('update/releases.atom', 'w', encoding='utf-8') as atom_file:
-    atom_file.write(atom_template)
+write_file('update/releases.atom', atom_template)

View File

@@ -5,15 +5,17 @@ import sys
 import os
 import textwrap

+dirn = os.path.dirname
+
 # We must be able to import youtube_dl
-sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
+sys.path.insert(0, dirn(dirn(dirn(os.path.abspath(__file__)))))

 import youtube_dl
+from devscripts.utils import read_file, write_file


 def main():
-    with open('supportedsites.html.in', 'r', encoding='utf-8') as tmplf:
-        template = tmplf.read()
+    template = read_file('supportedsites.html.in')

     ie_htmls = []
     for ie in youtube_dl.list_extractors(age_limit=None):
@@ -29,8 +31,7 @@ def main():
     template = template.replace('@SITES@', textwrap.indent('\n'.join(ie_htmls), '\t'))

-    with open('supportedsites.html', 'w', encoding='utf-8') as sitesf:
-        sitesf.write(template)
+    write_file('supportedsites.html', template)

 if __name__ == '__main__':

View File

@@ -1,10 +1,11 @@
 #!/usr/bin/env python
 from __future__ import unicode_literals

-import io
 import optparse
 import re

+from utils import read_file, write_file
+

 def main():
     parser = optparse.OptionParser(usage='%prog INFILE OUTFILE')
@@ -14,8 +15,7 @@ def main():
     infile, outfile = args

-    with io.open(infile, encoding='utf-8') as inf:
-        readme = inf.read()
+    readme = read_file(infile)

     bug_text = re.search(
         r'(?s)#\s*BUGS\s*[^\n]*\s*(.*?)#\s*COPYRIGHT', readme).group(1)
@@ -25,8 +25,7 @@ def main():
     out = bug_text + dev_text

-    with io.open(outfile, 'w', encoding='utf-8') as outf:
-        outf.write(out)
+    write_file(outfile, out)

 if __name__ == '__main__':

View File

@@ -1,8 +1,11 @@
 #!/usr/bin/env python
 from __future__ import unicode_literals

-import io
 import optparse
+import os.path
+import sys

+from utils import read_file, read_version, write_file


 def main():
@@ -13,17 +16,11 @@ def main():
     infile, outfile = args

-    with io.open(infile, encoding='utf-8') as inf:
-        issue_template_tmpl = inf.read()
-
-    # Get the version from youtube_dl/version.py without importing the package
-    exec(compile(open('youtube_dl/version.py').read(),
-                 'youtube_dl/version.py', 'exec'))
-
-    out = issue_template_tmpl % {'version': locals()['__version__']}
-
-    with io.open(outfile, 'w', encoding='utf-8') as outf:
-        outf.write(out)
+    issue_template_tmpl = read_file(infile)
+
+    out = issue_template_tmpl % {'version': read_version()}
+
+    write_file(outfile, out)

 if __name__ == '__main__':
     main()

View File

@@ -1,28 +1,49 @@
 from __future__ import unicode_literals, print_function

 from inspect import getsource
-import io
 import os
 from os.path import dirname as dirn
+import re
 import sys

 print('WARNING: Lazy loading extractors is an experimental feature that may not always work', file=sys.stderr)

-sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
+sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))

 lazy_extractors_filename = sys.argv[1]
 if os.path.exists(lazy_extractors_filename):
     os.remove(lazy_extractors_filename)
+# Py2: may be confused by leftover lazy_extractors.pyc
+if sys.version_info[0] < 3:
+    for c in ('c', 'o'):
+        try:
+            os.remove(lazy_extractors_filename + c)
+        except OSError:
+            pass
+
+from devscripts.utils import read_file, write_file
+from youtube_dl.compat import compat_register_utf8
+
+compat_register_utf8()

 from youtube_dl.extractor import _ALL_CLASSES
 from youtube_dl.extractor.common import InfoExtractor, SearchInfoExtractor

-with open('devscripts/lazy_load_template.py', 'rt') as f:
-    module_template = f.read()
+module_template = read_file('devscripts/lazy_load_template.py')
+
+
+def get_source(m):
+    return re.sub(r'(?m)^\s*#.*\n', '', getsource(m))
+

 module_contents = [
-    module_template + '\n' + getsource(InfoExtractor.suitable) + '\n',
-    'class LazyLoadSearchExtractor(LazyLoadExtractor):\n    pass\n']
+    module_template,
+    get_source(InfoExtractor.suitable),
+    get_source(InfoExtractor._match_valid_url) + '\n',
+    'class LazyLoadSearchExtractor(LazyLoadExtractor):\n    pass\n',
+    # needed for suitable() methods of Youtube extractor (see #28780)
+    'from youtube_dl.utils import parse_qs, variadic\n',
+]

 ie_template = '''
 class {name}({bases}):
@@ -54,7 +75,7 @@ def build_lazy_ie(ie, name):
         valid_url=valid_url,
         module=ie.__module__)
     if ie.suitable.__func__ is not InfoExtractor.suitable.__func__:
-        s += '\n' + getsource(ie.suitable)
+        s += '\n' + get_source(ie.suitable)
     if hasattr(ie, '_make_valid_url'):
         # search extractors
         s += make_valid_template.format(valid_url=ie._make_valid_url())
@@ -94,7 +115,17 @@ for ie in ordered_cls:
 module_contents.append(
     '_ALL_CLASSES = [{0}]'.format(', '.join(names)))

-module_src = '\n'.join(module_contents) + '\n'
+module_src = '\n'.join(module_contents)

-with io.open(lazy_extractors_filename, 'wt', encoding='utf-8') as f:
-    f.write(module_src)
+write_file(lazy_extractors_filename, module_src + '\n')
+
+# work around JVM byte code module limit in Jython
+if sys.platform.startswith('java') and sys.version_info[:2] == (2, 7):
+    import subprocess
+    from youtube_dl.compat import compat_subprocess_get_DEVNULL
+    # if Python 2.7 is available, use it to compile the module for Jython
+    try:
+        subprocess.check_call(['python2.7', '-m', 'py_compile', lazy_extractors_filename], stdout=compat_subprocess_get_DEVNULL())
+    except Exception:
+        pass

View File

@@ -1,8 +1,14 @@
 #!/usr/bin/env python
 from __future__ import unicode_literals

-import io
-import sys
+import os.path
 import re
+import sys

+dirn = os.path.dirname
+
+sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))
+
+from utils import read_file
+from youtube_dl.compat import compat_open as open

 README_FILE = 'README.md'
 helptext = sys.stdin.read()
@@ -10,8 +16,7 @@ helptext = sys.stdin.read()
 if isinstance(helptext, bytes):
     helptext = helptext.decode('utf-8')

-with io.open(README_FILE, encoding='utf-8') as f:
-    oldreadme = f.read()
+oldreadme = read_file(README_FILE)

 header = oldreadme[:oldreadme.index('# OPTIONS')]
 footer = oldreadme[oldreadme.index('# CONFIGURATION'):]
@@ -20,7 +25,7 @@ options = helptext[helptext.index('  General Options:') + 19:]
 options = re.sub(r'(?m)^  (\w.+)$', r'## \1', options)
 options = '# OPTIONS\n' + options + '\n'

-with io.open(README_FILE, 'w', encoding='utf-8') as f:
+with open(README_FILE, 'w', encoding='utf-8') as f:
     f.write(header)
     f.write(options)
     f.write(footer)

View File

@@ -1,17 +1,19 @@
 #!/usr/bin/env python
 from __future__ import unicode_literals

-import io
 import optparse
-import os
+import os.path
 import sys

 # Import youtube_dl
-ROOT_DIR = os.path.join(os.path.dirname(__file__), '..')
-sys.path.insert(0, ROOT_DIR)
+dirn = os.path.dirname
+
+sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))
 import youtube_dl

+from utils import write_file
+

 def main():
     parser = optparse.OptionParser(usage='%prog OUTFILE.md')
@@ -38,8 +40,7 @@ def main():
         ' - ' + md + '\n'
         for md in gen_ies_md(ies))

-    with io.open(outfile, 'w', encoding='utf-8') as outf:
-        outf.write(out)
+    write_file(outfile, out)

 if __name__ == '__main__':

View File

@@ -1,13 +1,13 @@
 from __future__ import unicode_literals

-import io
 import optparse
 import os.path
 import re

+from utils import read_file, write_file

 ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
 README_FILE = os.path.join(ROOT_DIR, 'README.md')

 PREFIX = r'''%YOUTUBE-DL(1)

 # NAME
@@ -29,8 +29,7 @@ def main():
     outfile, = args

-    with io.open(README_FILE, encoding='utf-8') as f:
-        readme = f.read()
+    readme = read_file(README_FILE)

     readme = re.sub(r'(?s)^.*?(?=# DESCRIPTION)', '', readme)
     readme = re.sub(r'\s+youtube-dl \[OPTIONS\] URL \[URL\.\.\.\]', '', readme)
@@ -38,8 +37,7 @@ def main():
     readme = filter_options(readme)

-    with io.open(outfile, 'w', encoding='utf-8') as outf:
-        outf.write(readme)
+    write_file(outfile, readme)


 def filter_options(readme):

devscripts/utils.py Normal file
View File

@@ -0,0 +1,62 @@
# coding: utf-8
from __future__ import unicode_literals

import argparse
import functools
import os.path
import subprocess
import sys

dirn = os.path.dirname

sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))

from youtube_dl.compat import (
    compat_kwargs,
    compat_open as open,
)


def read_file(fname):
    with open(fname, encoding='utf-8') as f:
        return f.read()


def write_file(fname, content, mode='w'):
    with open(fname, mode, encoding='utf-8') as f:
        return f.write(content)


def read_version(fname='youtube_dl/version.py'):
    """Get the version without importing the package"""
    exec(compile(read_file(fname), fname, 'exec'))
    return locals()['__version__']


def get_filename_args(has_infile=False, default_outfile=None):
    parser = argparse.ArgumentParser()
    if has_infile:
        parser.add_argument('infile', help='Input file')
    kwargs = {'nargs': '?', 'default': default_outfile} if default_outfile else {}
    kwargs['help'] = 'Output file'
    parser.add_argument('outfile', **compat_kwargs(kwargs))

    opts = parser.parse_args()
    if has_infile:
        return opts.infile, opts.outfile
    return opts.outfile


def compose_functions(*functions):
    return lambda x: functools.reduce(lambda y, f: f(y), functions, x)


def run_process(*args, **kwargs):
    kwargs.setdefault('text', True)
    kwargs.setdefault('check', True)
    kwargs.setdefault('capture_output', True)
    if kwargs['text']:
        kwargs.setdefault('encoding', 'utf-8')
        kwargs.setdefault('errors', 'replace')
    kwargs = compat_kwargs(kwargs)
    return subprocess.run(args, **kwargs)
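A brief sketch of the two less obvious helpers above (illustrative values only; assumes the repository root is on sys.path so devscripts is importable):

# Hypothetical usage of compose_functions() and run_process().
import sys

from devscripts.utils import compose_functions, run_process

# compose_functions chains left to right: the result computes g(f(x)).
normalize = compose_functions(str.strip, str.lower)
assert normalize('  MiXeD  ') == 'mixed'

# run_process wraps subprocess.run with text mode, check=True and captured
# output by default, so stdout comes back as a decoded str and a non-zero
# exit status raises CalledProcessError.
result = run_process(sys.executable, '--version')
print(result.stdout or result.stderr)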

View File

@@ -7,6 +7,8 @@ import sys
 sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))

 import youtube_dl
+
+from utils import read_file, write_file

 ZSH_COMPLETION_FILE = "youtube-dl.zsh"
 ZSH_COMPLETION_TEMPLATE = "devscripts/zsh-completion.in"

@@ -34,15 +36,13 @@ def build_completion(opt_parser):
     flags = [opt.get_opt_string() for opt in opts]

-    with open(ZSH_COMPLETION_TEMPLATE) as f:
-        template = f.read()
+    template = read_file(ZSH_COMPLETION_TEMPLATE)

     template = template.replace("{{fileopts}}", "|".join(fileopts))
     template = template.replace("{{diropts}}", "|".join(diropts))
     template = template.replace("{{flags}}", " ".join(flags))

-    with open(ZSH_COMPLETION_FILE, "w") as f:
-        f.write(template)
+    write_file(ZSH_COMPLETION_FILE, template)


 parser = youtube_dl.parseOpts()[0]

View File

@@ -1,9 +1,9 @@
 # Supported sites
  - **1tv**: Первый канал
- - **1up.com**
  - **20min**
  - **220.ro**
  - **23video**
+ - **247sports**
  - **24video**
  - **3qsdn**: 3Q SDN
  - **3sat**
@@ -83,6 +83,7 @@
  - **awaan:video**
  - **AZMedien**: AZ Medien videos
  - **BaiduVideo**: 百度视频
+ - **bandaichannel**
  - **Bandcamp**
  - **Bandcamp:album**
  - **Bandcamp:weekly**
@@ -90,7 +91,8 @@
  - **bbc**: BBC
  - **bbc.co.uk**: BBC iPlayer
  - **bbc.co.uk:article**: BBC articles
- - **bbc.co.uk:iplayer:playlist**
+ - **bbc.co.uk:iplayer:episodes**
+ - **bbc.co.uk:iplayer:group**
  - **bbc.co.uk:playlist**
  - **BBVTV**
  - **Beatport**
@@ -117,7 +119,6 @@
  - **BitChuteChannel**
  - **BleacherReport**
  - **BleacherReportCMS**
- - **blinkx**
  - **Bloomberg**
  - **BokeCC**
  - **BongaCams**
@@ -159,7 +160,8 @@
  - **cbsnews**: CBS News
  - **cbsnews:embed**
  - **cbsnews:livevideo**: CBS News Live Videos
- - **CBSSports**
+ - **cbssports**
+ - **cbssports:embed**
  - **CCMA**
  - **CCTV**: 央视网
  - **CDA**
@@ -213,6 +215,7 @@
  - **curiositystream**
  - **curiositystream:collection**
  - **CWTV**
+ - **DagelijkseKost**: dagelijksekost.een.be
  - **DailyMail**
  - **dailymotion**
  - **dailymotion:playlist**
@@ -234,6 +237,7 @@
  - **DiscoveryGo**
  - **DiscoveryGoPlaylist**
  - **DiscoveryNetworksDe**
+ - **DiscoveryPlus**
  - **DiscoveryVR**
  - **Disney**
  - **dlive:stream**
@@ -329,6 +333,7 @@
  - **Gaskrank**
  - **Gazeta**
  - **GDCVault**
+ - **GediDigital**
  - **generic**: Generic downloader that works on some sites
  - **Gfycat**
  - **GiantBomb**
@@ -354,6 +359,7 @@
  - **HentaiStigma**
  - **hetklokhuis**
  - **hgtv.com:show**
+ - **HGTVDe**
  - **HiDive**
  - **HistoricFilms**
  - **history:player**
@@ -376,6 +382,8 @@
  - **HungamaSong**
  - **Hypem**
  - **ign.com**
+ - **IGNArticle**
+ - **IGNVideo**
  - **IHeartRadio**
  - **iheartradio:podcast**
  - **imdb**: Internet Movie Database trailers
@@ -456,14 +464,14 @@
  - **limelight**
  - **limelight:channel**
  - **limelight:channel_list**
+ - **LineLive**
+ - **LineLiveChannel**
  - **LineTV**
  - **linkedin:learning**
  - **linkedin:learning:course**
  - **LinuxAcademy**
  - **LiTV**
  - **LiveJournal**
- - **LiveLeak**
- - **LiveLeakEmbed**
  - **livestream**
  - **livestream:original**
  - **LnkGo**
@@ -481,6 +489,7 @@
  - **mangomolo:live**
  - **mangomolo:video**
  - **ManyVids**
+ - **MaoriTV**
  - **Markiza**
  - **MarkizaPage**
  - **massengeschmack.tv**
@@ -516,6 +525,7 @@
  - **mixcloud:playlist**
  - **mixcloud:user**
  - **MLB**
+ - **MLBVideo**
  - **Mnet**
  - **MNetTV**
  - **MoeVideo**: LetitBit video services: moevideo.net, playreplay.net and videochart.net
@@ -537,6 +547,7 @@
  - **mtv:video**
  - **mtvjapan**
  - **mtvservices:embedded**
+ - **MTVUutisetArticle**
  - **MuenchenTV**: münchen.tv
  - **mva**: Microsoft Virtual Academy videos
  - **mva:course**: Microsoft Virtual Academy courses
@@ -670,12 +681,14 @@
  - **OutsideTV**
  - **PacktPub**
  - **PacktPubCourse**
+ - **PalcoMP3:artist**
+ - **PalcoMP3:song**
+ - **PalcoMP3:video**
  - **pandora.tv**: 판도라TV
  - **ParamountNetwork**
  - **parliamentlive.tv**: UK parliament videos
  - **Patreon**
  - **pbs**: Public Broadcasting Service (PBS) and member stations: PBS: Public Broadcasting Service, APT - Alabama Public Television (WBIQ), GPB/Georgia Public Broadcasting (WGTV), Mississippi Public Broadcasting (WMPN), Nashville Public Television (WNPT), WFSU-TV (WFSU), WSRE (WSRE), WTCI (WTCI), WPBA/Channel 30 (WPBA), Alaska Public Media (KAKM), Arizona PBS (KAET), KNME-TV/Channel 5 (KNME), Vegas PBS (KLVX), AETN/ARKANSAS ETV NETWORK (KETS), KET (WKLE), WKNO/Channel 10 (WKNO), LPB/LOUISIANA PUBLIC BROADCASTING (WLPB), OETA (KETA), Ozarks Public Television (KOZK), WSIU Public Broadcasting (WSIU), KEET TV (KEET), KIXE/Channel 9 (KIXE), KPBS San Diego (KPBS), KQED (KQED), KVIE Public Television (KVIE), PBS SoCal/KOCE (KOCE), ValleyPBS (KVPT), CONNECTICUT PUBLIC TELEVISION (WEDH), KNPB Channel 5 (KNPB), SOPTV (KSYS), Rocky Mountain PBS (KRMA), KENW-TV3 (KENW), KUED Channel 7 (KUED), Wyoming PBS (KCWC), Colorado Public Television / KBDI 12 (KBDI), KBYU-TV (KBYU), Thirteen/WNET New York (WNET), WGBH/Channel 2 (WGBH), WGBY (WGBY), NJTV Public Media NJ (WNJT), WLIW21 (WLIW), mpt/Maryland Public Television (WMPB), WETA Television and Radio (WETA), WHYY (WHYY), PBS 39 (WLVT), WVPT - Your Source for PBS and More! (WVPT), Howard University Television (WHUT), WEDU PBS (WEDU), WGCU Public Media (WGCU), WPBT2 (WPBT), WUCF TV (WUCF), WUFT/Channel 5 (WUFT), WXEL/Channel 42 (WXEL), WLRN/Channel 17 (WLRN), WUSF Public Broadcasting (WUSF), ETV (WRLK), UNC-TV (WUNC), PBS Hawaii - Oceanic Cable Channel 10 (KHET), Idaho Public Television (KAID), KSPS (KSPS), OPB (KOPB), KWSU/Channel 10 & KTNW/Channel 31 (KWSU), WILL-TV (WILL), Network Knowledge - WSEC/Springfield (WSEC), WTTW11 (WTTW), Iowa Public Television/IPTV (KDIN), Nine Network (KETC), PBS39 Fort Wayne (WFWA), WFYI Indianapolis (WFYI), Milwaukee Public Television (WMVS), WNIN (WNIN), WNIT Public Television (WNIT), WPT (WPNE), WVUT/Channel 22 (WVUT), WEIU/Channel 51 (WEIU), WQPT-TV (WQPT), WYCC PBS Chicago (WYCC), WIPB-TV (WIPB), WTIU (WTIU), CET (WCET), ThinkTVNetwork (WPTD), WBGU-TV (WBGU), WGVU TV (WGVU), NET1 (KUON), Pioneer Public Television (KWCM), SDPB Television (KUSD), TPT (KTCA), KSMQ (KSMQ), KPTS/Channel 8 (KPTS), KTWU/Channel 11 (KTWU), East Tennessee PBS (WSJK), WCTE-TV (WCTE), WLJT, Channel 11 (WLJT), WOSU TV (WOSU), WOUB/WOUC (WOUB), WVPB (WVPB), WKYU-PBS (WKYU), KERA 13 (KERA), MPBN (WCBB), Mountain Lake PBS (WCFE), NHPTV (WENH), Vermont PBS (WETK), witf (WITF), WQED Multimedia (WQED), WMHT Educational Telecommunications (WMHT), Q-TV (WDCQ), WTVS Detroit Public TV (WTVS), CMU Public Television (WCMU), WKAR-TV (WKAR), WNMU-TV Public TV 13 (WNMU), WDSE - WRPT (WDSE), WGTE TV (WGTE), Lakeland Public Television (KAWE), KMOS-TV - Channels 6.1, 6.2 and 6.3 (KMOS), MontanaPBS (KUSM), KRWG/Channel 22 (KRWG), KACV (KACV), KCOS/Channel 13 (KCOS), WCNY/Channel 24 (WCNY), WNED (WNED), WPBS (WPBS), WSKG Public TV (WSKG), WXXI (WXXI), WPSU (WPSU), WVIA Public Media Studios (WVIA), WTVI (WTVI), Western Reserve PBS (WNEO), WVIZ/PBS ideastream (WVIZ), KCTS 9 (KCTS), Basin PBS (KPBT), KUHT / Channel 8 (KUHT), KLRN (KLRN), KLRU (KLRU), WTJX Channel 12 (WTJX), WCVE PBS (WCVE), KBTC Public Television (KBTC)
- - **pcmag**
  - **PearVideo**
  - **PeerTube**
  - **People**
@@ -697,6 +710,7 @@
  - **play.fm**
  - **player.sky.it**
  - **PlayPlusTV**
+ - **PlayStuff**
  - **PlaysTV**
  - **Playtvak**: Playtvak.cz, iDNES.cz and Lidovky.cz
  - **Playvid**
@@ -802,6 +816,7 @@
  - **safari:course**: safaribooksonline.com online courses
  - **SAKTV**
  - **SaltTV**
+ - **SampleFocus**
  - **Sapo**: SAPO Vídeos
  - **savefrom.net**
  - **SBS**: sbs.com.au
@@ -824,6 +839,9 @@
  - **ShahidShow**
  - **Shared**: shared.sx
  - **ShowRoomLive**
+ - **simplecast**
+ - **simplecast:episode**
+ - **simplecast:podcast**
  - **Sina**
  - **sky.it**
  - **sky:news**
@@ -876,6 +894,9 @@
  - **Steam**
  - **Stitcher**
  - **StitcherShow**
+ - **StoryFire**
+ - **StoryFireSeries**
+ - **StoryFireUser**
  - **Streamable**
  - **streamcloud.eu**
  - **StreamCZ**
@@ -1044,6 +1065,7 @@
  - **Vidbit**
  - **Viddler**
  - **Videa**
+ - **video.arnes.si**: Arnes Video
  - **video.google:search**: Google Video search
  - **video.sky.it**
  - **video.sky.it:live**
@@ -1058,7 +1080,6 @@
  - **vidme**
  - **vidme:user**
  - **vidme:user:likes**
- - **Vidzi**
  - **vier**: vier.be and vijf.be
  - **vier:videos**
  - **viewlift**
@@ -1103,6 +1124,7 @@
  - **vrv**
  - **vrv:series**
  - **VShare**
+ - **VTM**
  - **VTXTV**
  - **vube**: Vube.com
  - **VuClip**
@@ -1138,7 +1160,7 @@
  - **WWE**
  - **XBef**
  - **XboxClips**
- - **XFileShare**: XFileShare based sites: Aparat, ClipWatching, GoUnlimited, GoVid, HolaVid, Streamty, TheVideoBee, Uqload, VidBom, vidlo, VidLocker, VidShare, VUp, XVideoSharing
+ - **XFileShare**: XFileShare based sites: Aparat, ClipWatching, GoUnlimited, GoVid, HolaVid, Streamty, TheVideoBee, Uqload, VidBom, vidlo, VidLocker, VidShare, VUp, WolfStream, XVideoSharing
  - **XHamster**
  - **XHamsterEmbed**
  - **XHamsterUser**
@@ -1197,5 +1219,8 @@
  - **ZattooLive**
  - **ZDF**
  - **ZDFChannel**
+ - **Zhihu**
  - **zingmp3**: mp3.zing.vn
+ - **zingmp3:album**
+ - **zoom**
  - **Zype**

View File

@@ -1,22 +1,24 @@
 from __future__ import unicode_literals

 import errno
-import io
 import hashlib
 import json
 import os.path
 import re
-import types
 import ssl
 import sys
+import types
+import unittest

 import youtube_dl.extractor
 from youtube_dl import YoutubeDL
 from youtube_dl.compat import (
+    compat_open as open,
     compat_os_name,
     compat_str,
 )
 from youtube_dl.utils import (
+    IDENTITY,
     preferredencoding,
     write_string,
 )
@@ -27,10 +29,10 @@ def get_params(override=None):
                                    "parameters.json")
     LOCAL_PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                          "local_parameters.json")
-    with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
+    with open(PARAMETERS_FILE, encoding='utf-8') as pf:
        parameters = json.load(pf)
     if os.path.exists(LOCAL_PARAMETERS_FILE):
-        with io.open(LOCAL_PARAMETERS_FILE, encoding='utf-8') as pf:
+        with open(LOCAL_PARAMETERS_FILE, encoding='utf-8') as pf:
             parameters.update(json.load(pf))
     if override:
         parameters.update(override)
@@ -72,7 +74,8 @@ class FakeYDL(YoutubeDL):
     def to_screen(self, s, skip_eol=None):
         print(s)

-    def trouble(self, s, tb=None):
+    def trouble(self, *args, **kwargs):
+        s = args[0] if len(args) > 0 else kwargs.get('message', 'Missing message')
         raise Exception(s)

     def download(self, x):
@@ -89,6 +92,17 @@ class FakeYDL(YoutubeDL):
         self.report_warning = types.MethodType(report_warning, self)


+class FakeLogger(object):
+    def debug(self, msg):
+        pass
+
+    def warning(self, msg):
+        pass
+
+    def error(self, msg):
+        pass
+
+
 def gettestcases(include_onlymatching=False):
     for ie in youtube_dl.extractor.gen_extractors():
         for tc in ie.get_testcases(include_onlymatching):
@@ -128,6 +142,12 @@ def expect_value(self, got, expected, field):
         self.assertTrue(
             contains_str in got,
             'field %s (value: %r) should contain %r' % (field, got, contains_str))
+    elif isinstance(expected, compat_str) and re.match(r'lambda \w+:', expected):
+        fn = eval(expected)
+        suite = expected.split(':', 1)[1].strip()
+        self.assertTrue(
+            fn(got),
+            'Expected field %s to meet condition %s, but value %r failed' % (field, suite, got))
     elif isinstance(expected, type):
         self.assertTrue(
             isinstance(got, expected),
@@ -137,7 +157,7 @@ def expect_value(self, got, expected, field):
     elif isinstance(expected, list) and isinstance(got, list):
         self.assertEqual(
             len(expected), len(got),
-            'Expect a list of length %d, but got a list of length %d for field %s' % (
+            'Expected a list of length %d, but got a list of length %d for field %s' % (
                 len(expected), len(got), field))
         for index, (item_got, item_expected) in enumerate(zip(got, expected)):
             type_got = type(item_got)
@@ -161,18 +181,18 @@ def expect_value(self, got, expected, field):
             op, _, expected_num = expected.partition(':')
             expected_num = int(expected_num)
             if op == 'mincount':
-                assert_func = assertGreaterEqual
+                assert_func = self.assertGreaterEqual
                 msg_tmpl = 'Expected %d items in field %s, but only got %d'
             elif op == 'maxcount':
-                assert_func = assertLessEqual
+                assert_func = self.assertLessEqual
                 msg_tmpl = 'Expected maximum %d items in field %s, but got %d'
             elif op == 'count':
-                assert_func = assertEqual
+                assert_func = self.assertEqual
                 msg_tmpl = 'Expected exactly %d items in field %s, but got %d'
             else:
                 assert False
             assert_func(
-                self, len(got), expected_num,
+                len(got), expected_num,
                 msg_tmpl % (expected_num, field, len(got)))
             return
         self.assertEqual(
@@ -242,27 +262,6 @@ def assertRegexpMatches(self, text, regexp, msg=None):
         self.assertTrue(m, msg)


-def assertGreaterEqual(self, got, expected, msg=None):
-    if not (got >= expected):
-        if msg is None:
-            msg = '%r not greater than or equal to %r' % (got, expected)
-        self.assertTrue(got >= expected, msg)
-
-
-def assertLessEqual(self, got, expected, msg=None):
-    if not (got <= expected):
-        if msg is None:
-            msg = '%r not less than or equal to %r' % (got, expected)
-        self.assertTrue(got <= expected, msg)
-
-
-def assertEqual(self, got, expected, msg=None):
-    if not (got == expected):
-        if msg is None:
-            msg = '%r not equal to %r' % (got, expected)
-        self.assertTrue(got == expected, msg)
-
-
 def expect_warnings(ydl, warnings_re):
     real_warning = ydl.report_warning
@@ -280,3 +279,7 @@ def http_server_port(httpd):
     else:
         sock = httpd.socket
     return sock.getsockname()[1]
+
+
+def expectedFailureIf(cond):
+    return unittest.expectedFailure if cond else IDENTITY
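Illustrative only (the field names and condition are invented): the new string-lambda branch in expect_value() lets a test assert a predicate instead of an exact value, and expectedFailureIf() toggles unittest.expectedFailure on a condition.

# Hypothetical test sketch against the updated helpers.
import sys
import unittest

from test.helper import expect_value, expectedFailureIf


class ExampleTest(unittest.TestCase):
    def test_predicate(self):
        # 'lambda x: ...' strings are eval'd and applied to the actual value.
        expect_value(self, 12345, 'lambda x: x > 1000', 'view_count')

    @expectedFailureIf(sys.version_info[0] == 2)  # arbitrary example condition
    def test_counts(self):
        # 'mincount:N' now dispatches to self.assertGreaterEqual.
        expect_value(self, ['a', 'b'], 'mincount:2', 'tags')


if __name__ == '__main__':
    unittest.main()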

View File

@@ -18,7 +18,6 @@
     "noprogress": false,
     "outtmpl": "%(id)s.%(ext)s",
     "password": null,
-    "playlistend": -1,
     "playliststart": 1,
     "prefer_free_formats": false,
     "quiet": false,

View File

@@ -3,19 +3,37 @@
 from __future__ import unicode_literals

 # Allow direct execution
-import io
 import os
 import sys
 import unittest

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

-from test.helper import FakeYDL, expect_dict, expect_value, http_server_port
-from youtube_dl.compat import compat_etree_fromstring, compat_http_server
-from youtube_dl.extractor.common import InfoExtractor
-from youtube_dl.extractor import YoutubeIE, get_info_extractor
-from youtube_dl.utils import encode_data_uri, strip_jsonp, ExtractorError, RegexNotFoundError
 import threading

+from test.helper import (
+    expect_dict,
+    expect_value,
+    FakeYDL,
+    http_server_port,
+)
+from youtube_dl.compat import (
+    compat_etree_fromstring,
+    compat_http_server,
+    compat_open as open,
+)
+from youtube_dl.extractor.common import InfoExtractor
+from youtube_dl.extractor import (
+    get_info_extractor,
+    YoutubeIE,
+)
+from youtube_dl.utils import (
+    encode_data_uri,
+    ExtractorError,
+    RegexNotFoundError,
+    strip_jsonp,
+)

 TEAPOT_RESPONSE_STATUS = 418
 TEAPOT_RESPONSE_BODY = "<h1>418 I'm a teapot</h1>"
@@ -35,13 +53,13 @@ class InfoExtractorTestRequestHandler(compat_http_server.BaseHTTPRequestHandler)
             assert False


-class TestIE(InfoExtractor):
+class DummyIE(InfoExtractor):
     pass


 class TestInfoExtractor(unittest.TestCase):
     def setUp(self):
-        self.ie = TestIE(FakeYDL())
+        self.ie = DummyIE(FakeYDL())

     def test_ie_key(self):
         self.assertEqual(get_info_extractor(YoutubeIE.ie_key()), YoutubeIE)
@@ -62,6 +80,7 @@ class TestInfoExtractor(unittest.TestCase):
             <meta name="og:test1" content='foo > < bar'/>
             <meta name="og:test2" content="foo >//< bar"/>
             <meta property=og-test3 content='Ill-formatted opengraph'/>
+            <meta property=og:test4 content=unquoted-value/>
             '''
         self.assertEqual(ie._og_search_title(html), 'Foo')
         self.assertEqual(ie._og_search_description(html), 'Some video\'s description ')
@@ -74,6 +93,7 @@ class TestInfoExtractor(unittest.TestCase):
         self.assertEqual(ie._og_search_property(('test0', 'test1'), html), 'foo > < bar')
         self.assertRaises(RegexNotFoundError, ie._og_search_property, 'test0', html, None, fatal=True)
         self.assertRaises(RegexNotFoundError, ie._og_search_property, ('test0', 'test00'), html, None, fatal=True)
+        self.assertEqual(ie._og_search_property('test4', html), 'unquoted-value')

     def test_html_search_meta(self):
         ie = self.ie
@@ -98,6 +118,74 @@ class TestInfoExtractor(unittest.TestCase):
         self.assertRaises(RegexNotFoundError, ie._html_search_meta, 'z', html, None, fatal=True)
         self.assertRaises(RegexNotFoundError, ie._html_search_meta, ('z', 'x'), html, None, fatal=True)
def test_search_nextjs_data(self):
html = '''
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content=
"text/html; charset=utf-8">
<meta name="viewport" content="width=device-width">
<title>Test _search_nextjs_data()</title>
</head>
<body>
<div id="__next">
<div style="background-color:#17171E" class="FU" dir="ltr">
<div class="sc-93de261d-0 dyzzYE">
<div>
<header class="HD"></header>
<main class="MN">
<div style="height:0" class="HT0">
<div style="width:NaN%" data-testid=
"stream-container" class="WDN"></div>
</div>
</main>
</div>
<footer class="sc-6e5faf91-0 dEGaHS"></footer>
</div>
</div>
</div>
<script id="__NEXT_DATA__" type="application/json">
{"props":{"pageProps":{"video":{"id":"testid"}}}}
</script>
</body>
</html>
'''
search = self.ie._search_nextjs_data(html, 'testID')
self.assertEqual(search['props']['pageProps']['video']['id'], 'testid')
search = self.ie._search_nextjs_data(
'no next.js data here, move along', 'testID', default={'status': 0})
self.assertEqual(search['status'], 0)
def test_search_nuxt_data(self):
html = '''
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content=
"text/html; charset=utf-8">
<title>Nuxt.js Test Page</title>
<meta name="viewport" content=
"width=device-width, initial-scale=1">
<meta data-hid="robots" name="robots" content="all">
</head>
<body class="BD">
<div id="__layout">
<h1 class="H1">Example heading</h1>
<div class="IN">
<p>Decoy text</p>
</div>
</div>
<script>
window.__NUXT__=(function(a,b,c,d,e,f,g,h){return {decoy:" default",data:[{track:{id:f,title:g}}]}}(null,null,"c",null,null,"testid","Nuxt.js title",null));
</script>
<script src="/_nuxt/a12345b.js" defer="defer"></script>
</body>
</html>
'''
search = self.ie._search_nuxt_data(html, 'testID')
self.assertEqual(search['track']['id'], 'testid')
     def test_search_json_ld_realworld(self):
         # https://github.com/ytdl-org/youtube-dl/issues/23306
         expect_dict(
@@ -346,6 +434,24 @@ class TestInfoExtractor(unittest.TestCase):
             }],
         })
# from https://0000.studio/
# with type attribute but without extension in URL
expect_dict(
self,
self.ie._parse_html5_media_entries(
'https://0000.studio',
r'''
<video src="https://d1ggyt9m8pwf3g.cloudfront.net/protected/ap-northeast-1:1864af40-28d5-492b-b739-b32314b1a527/archive/clip/838db6a7-8973-4cd6-840d-8517e4093c92"
controls="controls" type="video/mp4" preload="metadata" autoplay="autoplay" playsinline class="object-contain">
</video>
''', None)[0],
{
'formats': [{
'url': 'https://d1ggyt9m8pwf3g.cloudfront.net/protected/ap-northeast-1:1864af40-28d5-492b-b739-b32314b1a527/archive/clip/838db6a7-8973-4cd6-840d-8517e4093c92',
'ext': 'mp4',
}],
})
     def test_extract_jwplayer_data_realworld(self):
         # from http://www.suffolk.edu/sjc/
         expect_dict(
@@ -799,8 +905,8 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
         ]
         for m3u8_file, m3u8_url, expected_formats in _TEST_CASES:
-            with io.open('./test/testdata/m3u8/%s.m3u8' % m3u8_file,
-                         mode='r', encoding='utf-8') as f:
+            with open('./test/testdata/m3u8/%s.m3u8' % m3u8_file,
+                      mode='r', encoding='utf-8') as f:
                 formats = self.ie._parse_m3u8_formats(
                     f.read(), m3u8_url, ext='mp4')
                 self.ie._sort_formats(formats)
@@ -890,7 +996,8 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
                 'tbr': 5997.485,
                 'width': 1920,
                 'height': 1080,
-            }]
+            }],
+            {},
         ), (
             # https://github.com/ytdl-org/youtube-dl/pull/14844
             'urls_only',
@@ -973,7 +1080,8 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
                 'tbr': 4400,
                 'width': 1920,
                 'height': 1080,
-            }]
+            }],
+            {},
         ), (
             # https://github.com/ytdl-org/youtube-dl/issues/20346
             # Media considered unfragmented even though it contains
@@ -1019,18 +1127,185 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
                 'width': 360,
                 'height': 360,
                 'fps': 30,
-            }]
+            }],
+            {},
), (
# https://github.com/ytdl-org/youtube-dl/issues/30235
# Bento4 generated test mpd
# mp4dash --mpd-name=manifest.mpd --no-split --use-segment-list mediafiles
'url_and_range',
'http://unknown/manifest.mpd', # mpd_url
'http://unknown/', # mpd_base_url
[{
'manifest_url': 'http://unknown/manifest.mpd',
'fragment_base_url': 'http://unknown/',
'ext': 'm4a',
'format_id': 'audio-und-mp4a.40.2',
'format_note': 'DASH audio',
'container': 'm4a_dash',
'protocol': 'http_dash_segments',
'acodec': 'mp4a.40.2',
'vcodec': 'none',
'tbr': 98.808,
}, {
'manifest_url': 'http://unknown/manifest.mpd',
'fragment_base_url': 'http://unknown/',
'ext': 'mp4',
'format_id': 'video-avc1',
'format_note': 'DASH video',
'container': 'mp4_dash',
'protocol': 'http_dash_segments',
'acodec': 'none',
'vcodec': 'avc1.4D401E',
'tbr': 699.597,
'width': 768,
'height': 432
}],
{},
), (
# https://github.com/ytdl-org/youtube-dl/issues/27575
# GPAC generated test mpd
# MP4Box -dash 10000 -single-file -out manifest.mpd mediafiles
'range_only',
'http://unknown/manifest.mpd', # mpd_url
'http://unknown/', # mpd_base_url
[{
'manifest_url': 'http://unknown/manifest.mpd',
'fragment_base_url': 'http://unknown/audio_dashinit.mp4',
'ext': 'm4a',
'format_id': '2',
'format_note': 'DASH audio',
'container': 'm4a_dash',
'protocol': 'http_dash_segments',
'acodec': 'mp4a.40.2',
'vcodec': 'none',
'tbr': 98.096,
}, {
'manifest_url': 'http://unknown/manifest.mpd',
'fragment_base_url': 'http://unknown/video_dashinit.mp4',
'ext': 'mp4',
'format_id': '1',
'format_note': 'DASH video',
'container': 'mp4_dash',
'protocol': 'http_dash_segments',
'acodec': 'none',
'vcodec': 'avc1.4D401E',
'tbr': 526.987,
'width': 768,
'height': 432
}],
{},
), (
'subtitles',
'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/',
[{
'format_id': 'audio=128001',
'manifest_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'ext': 'm4a',
'tbr': 128.001,
'asr': 48000,
'format_note': 'DASH audio',
'container': 'm4a_dash',
'vcodec': 'none',
'acodec': 'mp4a.40.2',
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments',
}, {
'format_id': 'video=100000',
'manifest_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'ext': 'mp4',
'width': 336,
'height': 144,
'tbr': 100,
'format_note': 'DASH video',
'container': 'mp4_dash',
'vcodec': 'avc1.4D401F',
'acodec': 'none',
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments',
}, {
'format_id': 'video=326000',
'manifest_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'ext': 'mp4',
'width': 562,
'height': 240,
'tbr': 326,
'format_note': 'DASH video',
'container': 'mp4_dash',
'vcodec': 'avc1.4D401F',
'acodec': 'none',
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments',
}, {
'format_id': 'video=698000',
'manifest_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'ext': 'mp4',
'width': 844,
'height': 360,
'tbr': 698,
'format_note': 'DASH video',
'container': 'mp4_dash',
'vcodec': 'avc1.4D401F',
'acodec': 'none',
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments',
}, {
'format_id': 'video=1493000',
'manifest_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'ext': 'mp4',
'width': 1126,
'height': 480,
'tbr': 1493,
'format_note': 'DASH video',
'container': 'mp4_dash',
'vcodec': 'avc1.4D401F',
'acodec': 'none',
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments',
}, {
'format_id': 'video=4482000',
'manifest_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'ext': 'mp4',
'width': 1688,
'height': 720,
'tbr': 4482,
'format_note': 'DASH video',
'container': 'mp4_dash',
'vcodec': 'avc1.4D401F',
'acodec': 'none',
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments',
}],
{
'en': [
{
'ext': 'mp4',
'manifest_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments',
}
]
},
)
]
for mpd_file, mpd_url, mpd_base_url, expected_formats in _TEST_CASES:
for mpd_file, mpd_url, mpd_base_url, expected_formats, expected_subtitles in _TEST_CASES:
with io.open('./test/testdata/mpd/%s.mpd' % mpd_file,
with open('./test/testdata/mpd/%s.mpd' % mpd_file,
mode='r', encoding='utf-8') as f:
formats = self.ie._parse_mpd_formats(
formats, subtitles = self.ie._parse_mpd_formats_and_subtitles(
compat_etree_fromstring(f.read().encode('utf-8')),
mpd_base_url=mpd_base_url, mpd_url=mpd_url)
self.ie._sort_formats(formats)
expect_value(self, formats, expected_formats, None)
expect_value(self, subtitles, expected_subtitles, None)
def test_parse_f4m_formats(self):
_TEST_CASES = [
@@ -1051,8 +1326,8 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
]
for f4m_file, f4m_url, expected_formats in _TEST_CASES:
with io.open('./test/testdata/f4m/%s.f4m' % f4m_file,
with open('./test/testdata/f4m/%s.f4m' % f4m_file,
mode='r', encoding='utf-8') as f:
formats = self.ie._parse_f4m_formats(
compat_etree_fromstring(f.read().encode('utf-8')),
f4m_url, None)
@@ -1099,8 +1374,8 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
]
for xspf_file, xspf_url, expected_entries in _TEST_CASES:
with io.open('./test/testdata/xspf/%s.xspf' % xspf_file,
with open('./test/testdata/xspf/%s.xspf' % xspf_file,
mode='r', encoding='utf-8') as f:
entries = self.ie._parse_xspf(
compat_etree_fromstring(f.read().encode('utf-8')),
xspf_file, xspf_url=xspf_url, xspf_base_url=xspf_url)

View File

@@ -10,14 +10,31 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import copy
import json
from test.helper import FakeYDL, assertRegexpMatches
from test.helper import (
FakeYDL,
assertRegexpMatches,
try_rm,
)
from youtube_dl import YoutubeDL
from youtube_dl.compat import compat_str, compat_urllib_error
from youtube_dl.compat import (
compat_http_cookiejar_Cookie,
compat_http_cookies_SimpleCookie,
compat_kwargs,
compat_open as open,
compat_str,
compat_urllib_error,
)
from youtube_dl.extractor import YoutubeIE
from youtube_dl.extractor.common import InfoExtractor
from youtube_dl.postprocessor.common import PostProcessor
from youtube_dl.utils import ExtractorError, match_filter_func
from youtube_dl.utils import (
ExtractorError,
match_filter_func,
traverse_obj,
)
TEST_URL = 'http://localhost/sample.mp4'
@@ -29,11 +46,14 @@ class YDL(FakeYDL):
self.msgs = []
def process_info(self, info_dict):
self.downloaded_info_dicts.append(info_dict)
self.downloaded_info_dicts.append(info_dict.copy())
def to_screen(self, msg):
self.msgs.append(msg)
def dl(self, *args, **kwargs):
assert False, 'Downloader must not be invoked for test_YoutubeDL'
def _make_result(formats, **kwargs):
res = {
@@ -42,8 +62,9 @@ def _make_result(formats, **kwargs):
'title': 'testttitle',
'extractor': 'testex',
'extractor_key': 'TestEx',
'webpage_url': 'http://example.com/watch?v=shenanigans',
}
res.update(**kwargs)
res.update(**compat_kwargs(kwargs))
return res
@@ -681,12 +702,12 @@ class TestYoutubeDL(unittest.TestCase):
class SimplePP(PostProcessor):
def run(self, info):
with open(audiofile, 'wt') as f:
with open(audiofile, 'w') as f:
f.write('EXAMPLE')
return [info['filepath']], info
def run_pp(params, PP):
with open(filename, 'wt') as f:
with open(filename, 'w') as f:
f.write('EXAMPLE')
ydl = YoutubeDL(params)
ydl.add_post_processor(PP())
@@ -705,7 +726,7 @@ class TestYoutubeDL(unittest.TestCase):
class ModifierPP(PostProcessor):
def run(self, info):
with open(info['filepath'], 'wt') as f:
with open(info['filepath'], 'w') as f:
f.write('MODIFIED')
return [], info
@@ -930,17 +951,11 @@ class TestYoutubeDL(unittest.TestCase):
# Test case for https://github.com/ytdl-org/youtube-dl/issues/27064
def test_ignoreerrors_for_playlist_with_url_transparent_iterable_entries(self):
class _YDL(YDL):
ydl = YDL({
def __init__(self, *args, **kwargs):
super(_YDL, self).__init__(*args, **kwargs)
def trouble(self, s, tb=None):
pass
ydl = _YDL({
'format': 'extra',
'ignoreerrors': True,
})
ydl.trouble = lambda *_, **__: None
class VideoIE(InfoExtractor):
_VALID_URL = r'video:(?P<id>\d+)'
@@ -997,6 +1012,180 @@ class TestYoutubeDL(unittest.TestCase):
self.assertEqual(downloaded['extractor'], 'Video')
self.assertEqual(downloaded['extractor_key'], 'Video')
def test_default_times(self):
"""Test addition of missing upload/release/_date from /release_/timestamp"""
info = {
'id': '1234',
'url': TEST_URL,
'title': 'Title',
'ext': 'mp4',
'timestamp': 1631352900,
'release_timestamp': 1632995931,
}
params = {'simulate': True, }
ydl = FakeYDL(params)
out_info = ydl.process_ie_result(info)
self.assertTrue(isinstance(out_info['upload_date'], compat_str))
self.assertEqual(out_info['upload_date'], '20210911')
self.assertTrue(isinstance(out_info['release_date'], compat_str))
self.assertEqual(out_info['release_date'], '20210930')
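The dates asserted above follow directly from the timestamps; as a minimal sketch (not part of the diff), formatting each POSIX timestamp as a UTC %Y%m%d string gives the expected values:

import datetime

# UTC calendar dates of the two timestamps used in the test above
for ts in (1631352900, 1632995931):
    print(datetime.datetime.utcfromtimestamp(ts).strftime('%Y%m%d'))
# prints 20210911, then 20210930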
class TestYoutubeDLCookies(unittest.TestCase):
@staticmethod
def encode_cookie(cookie):
if not isinstance(cookie, dict):
cookie = vars(cookie)
for name, value in cookie.items():
yield name, compat_str(value)
@classmethod
def comparable_cookies(cls, cookies):
# Work around cookiejar cookies not being unicode strings
return sorted(map(tuple, map(sorted, map(cls.encode_cookie, cookies))))
def assertSameCookies(self, c1, c2, msg=None):
return self.assertEqual(
*map(self.comparable_cookies, (c1, c2)),
msg=msg)
def assertSameCookieStrings(self, c1, c2, msg=None):
return self.assertSameCookies(
*map(lambda c: compat_http_cookies_SimpleCookie(c).values(), (c1, c2)),
msg=msg)
def test_header_cookies(self):
ydl = FakeYDL()
ydl.report_warning = lambda *_, **__: None
def cookie(name, value, version=None, domain='', path='', secure=False, expires=None):
return compat_http_cookiejar_Cookie(
version or 0, name, value, None, False,
domain, bool(domain), bool(domain), path, bool(path),
secure, expires, False, None, None, rest={})
test_url, test_domain = (t % ('yt.dl',) for t in ('https://%s/test', '.%s'))
def test(encoded_cookies, cookies, headers=False, round_trip=None, error_re=None):
def _test():
ydl.cookiejar.clear()
ydl._load_cookies(encoded_cookies, autoscope=headers)
if headers:
ydl._apply_header_cookies(test_url)
data = {'url': test_url}
ydl._calc_headers(data)
self.assertSameCookies(
cookies, ydl.cookiejar,
'Extracted cookiejar.Cookie is not the same')
if not headers:
self.assertSameCookieStrings(
data.get('cookies'), round_trip or encoded_cookies,
msg='Cookie is not the same as round trip')
ydl.__dict__['_YoutubeDL__header_cookies'] = []
try:
_test()
except AssertionError:
raise
except Exception as e:
if not error_re:
raise
assertRegexpMatches(self, e.args[0], error_re.join(('.*',) * 2))
test('test=value; Domain=' + test_domain, [cookie('test', 'value', domain=test_domain)])
test('test=value', [cookie('test', 'value')], error_re='Unscoped cookies are not allowed')
test('cookie1=value1; Domain={0}; Path=/test; cookie2=value2; Domain={0}; Path=/'.format(test_domain), [
cookie('cookie1', 'value1', domain=test_domain, path='/test'),
cookie('cookie2', 'value2', domain=test_domain, path='/')])
cookie_kw = compat_kwargs(
{'domain': test_domain, 'path': '/test', 'secure': True, 'expires': '9999999999', })
test('test=value; Domain={domain}; Path={path}; Secure; Expires={expires}'.format(**cookie_kw), [
cookie('test', 'value', **cookie_kw)])
test('test="value; "; path=/test; domain=' + test_domain, [
cookie('test', 'value; ', domain=test_domain, path='/test')],
round_trip='test="value\\073 "; Domain={0}; Path=/test'.format(test_domain))
test('name=; Domain=' + test_domain, [cookie('name', '', domain=test_domain)],
round_trip='name=""; Domain=' + test_domain)
test('test=value', [cookie('test', 'value', domain=test_domain)], headers=True)
test('cookie1=value; Domain={0}; cookie2=value'.format(test_domain), [],
headers=True, error_re='Invalid syntax')
ydl.report_warning = ydl.report_error
test('test=value', [], headers=True, error_re='Passing cookies as a header is a potential security risk')
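For reference, a small stdlib sketch (not from the PR) of how an attribute-scoped cookie string parses, which is the round-trip form that assertSameCookieStrings compares; the domain here mirrors the test's made-up 'yt.dl':

from youtube_dl.compat import compat_http_cookies_SimpleCookie

c = compat_http_cookies_SimpleCookie()
c.load('test=value; Domain=.yt.dl; Path=/test')
morsel = c['test']
# the value lives on .value, the scoping attributes on the morsel itself
print(morsel.value, morsel['domain'], morsel['path'])  # value .yt.dl /test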
def test_infojson_cookies(self):
TEST_FILE = 'test_infojson_cookies.info.json'
TEST_URL = 'https://example.com/example.mp4'
COOKIES = 'a=b; Domain=.example.com; c=d; Domain=.example.com'
COOKIE_HEADER = {'Cookie': 'a=b; c=d'}
ydl = FakeYDL()
ydl.process_info = lambda x: ydl._write_info_json('test', x, TEST_FILE)
def make_info(info_header_cookies=False, fmts_header_cookies=False, cookies_field=False):
fmt = {'url': TEST_URL}
if fmts_header_cookies:
fmt['http_headers'] = COOKIE_HEADER
if cookies_field:
fmt['cookies'] = COOKIES
return _make_result([fmt], http_headers=COOKIE_HEADER if info_header_cookies else None)
def test(initial_info, note):
def failure_msg(why):
return ' when '.join((why, note))
result = {}
result['processed'] = ydl.process_ie_result(initial_info)
self.assertTrue(ydl.cookiejar.get_cookies_for_url(TEST_URL),
msg=failure_msg('No cookies set in cookiejar after initial process'))
ydl.cookiejar.clear()
with open(TEST_FILE) as infojson:
result['loaded'] = ydl.sanitize_info(json.load(infojson), True)
result['final'] = ydl.process_ie_result(result['loaded'].copy(), download=False)
self.assertTrue(ydl.cookiejar.get_cookies_for_url(TEST_URL),
msg=failure_msg('No cookies set in cookiejar after final process'))
ydl.cookiejar.clear()
for key in ('processed', 'loaded', 'final'):
info = result[key]
self.assertIsNone(
traverse_obj(info, ((None, ('formats', 0)), 'http_headers', 'Cookie'), casesense=False, get_all=False),
msg=failure_msg('Cookie header not removed in {0} result'.format(key)))
self.assertSameCookieStrings(
traverse_obj(info, ((None, ('formats', 0)), 'cookies'), get_all=False), COOKIES,
msg=failure_msg('No cookies field found in {0} result'.format(key)))
test({'url': TEST_URL, 'http_headers': COOKIE_HEADER, 'id': '1', 'title': 'x'}, 'no formats field')
test(make_info(info_header_cookies=True), 'info_dict header cookies')
test(make_info(fmts_header_cookies=True), 'format header cookies')
test(make_info(info_header_cookies=True, fmts_header_cookies=True), 'info_dict and format header cookies')
test(make_info(info_header_cookies=True, fmts_header_cookies=True, cookies_field=True), 'all cookies fields')
test(make_info(cookies_field=True), 'cookies format field')
test({'url': TEST_URL, 'cookies': COOKIES, 'id': '1', 'title': 'x'}, 'info_dict cookies field only')
try_rm(TEST_FILE)
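A brief sketch (not in the diff) of the traverse_obj paths used above: the branch (None, ('formats', 0)) looks in the info_dict itself and in its first format, and get_all=False returns the first hit rather than a list; the sample dict below is hypothetical:

from youtube_dl.utils import traverse_obj

info = {'cookies': 'a=b; Domain=.example.com', 'formats': [{'cookies': 'c=d'}]}
print(traverse_obj(info, ((None, ('formats', 0)), 'cookies'), get_all=False))
# -> 'a=b; Domain=.example.com' (the info_dict-level field is found first)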
def test_add_headers_cookie(self):
def check_for_cookie_header(result):
return traverse_obj(result, ((None, ('formats', 0)), 'http_headers', 'Cookie'), casesense=False, get_all=False)
ydl = FakeYDL({'http_headers': {'Cookie': 'a=b'}})
ydl._apply_header_cookies(_make_result([])['webpage_url']) # Scope to input webpage URL: .example.com
fmt = {'url': 'https://example.com/video.mp4'}
result = ydl.process_ie_result(_make_result([fmt]), download=False)
self.assertIsNone(check_for_cookie_header(result), msg='http_headers cookies in result info_dict')
self.assertEqual(result.get('cookies'), 'a=b; Domain=.example.com', msg='No cookies were set in cookies field')
self.assertIn('a=b', ydl.cookiejar.get_cookie_header(fmt['url']), msg='No cookies were set in cookiejar')
fmt = {'url': 'https://wrong.com/video.mp4'}
result = ydl.process_ie_result(_make_result([fmt]), download=False)
self.assertIsNone(check_for_cookie_header(result), msg='http_headers cookies for wrong domain')
self.assertFalse(result.get('cookies'), msg='Cookies set in cookies field for wrong domain')
self.assertFalse(ydl.cookiejar.get_cookie_header(fmt['url']), msg='Cookies set in cookiejar for wrong domain')
if __name__ == '__main__':
unittest.main()

View File

@@ -46,6 +46,20 @@ class TestYoutubeDLCookieJar(unittest.TestCase):
# will be ignored
self.assertFalse(cookiejar._cookies)
def test_get_cookie_header(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
cookiejar.load(ignore_discard=True, ignore_expires=True)
header = cookiejar.get_cookie_header('https://www.foobar.foobar')
self.assertIn('HTTPONLY_COOKIE', header)
def test_get_cookies_for_url(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt')
cookiejar.load(ignore_discard=True, ignore_expires=True)
cookies = cookiejar.get_cookies_for_url('https://www.foobar.foobar/')
self.assertEqual(len(cookies), 2)
cookies = cookiejar.get_cookies_for_url('https://foobar.foobar/')
self.assertFalse(cookies)
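The fixture files loaded above are Netscape cookies.txt exports. As a sketch (the record below is hypothetical, mirroring the test's cookie names), an HttpOnly cookie is stored as a tab-separated line carrying the #HttpOnly_ prefix that YoutubeDLCookieJar strips on load:

#HttpOnly_www.foobar.foobar	FALSE	/	TRUE	0	HTTPONLY_COOKIE	test

(fields: domain, include-subdomains flag, path, secure flag, expiry, name, value)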
if __name__ == '__main__':
unittest.main()

View File

@@ -8,7 +8,7 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from youtube_dl.aes import aes_decrypt, aes_encrypt, aes_cbc_decrypt, aes_cbc_encrypt, aes_decrypt_text
from youtube_dl.aes import aes_decrypt, aes_encrypt, aes_cbc_decrypt, aes_cbc_encrypt, aes_decrypt_text, aes_ecb_encrypt
from youtube_dl.utils import bytes_to_intlist, intlist_to_bytes
import base64
@@ -58,6 +58,13 @@ class TestAES(unittest.TestCase):
decrypted = (aes_decrypt_text(encrypted, password, 32))
self.assertEqual(decrypted, self.secret_msg)
def test_ecb_encrypt(self):
data = bytes_to_intlist(self.secret_msg)
encrypted = intlist_to_bytes(aes_ecb_encrypt(data, self.key))
self.assertEqual(
encrypted,
b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
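The exact-byte assertion works because ECB mode has no IV: each 16-byte block is encrypted independently with the key, so the same input always yields the same output. A minimal sketch (not part of the diff; the key and message here are made up):

from youtube_dl.aes import aes_ecb_encrypt
from youtube_dl.utils import bytes_to_intlist, intlist_to_bytes

key = bytes_to_intlist(b'Y' * 16)             # hypothetical 128-bit key
data = bytes_to_intlist(b'Secret message!!')  # one 16-byte block
once = intlist_to_bytes(aes_ecb_encrypt(data, key))
again = intlist_to_bytes(aes_ecb_encrypt(data, key))
assert once == again  # deterministic: no IV or chaining in ECB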
if __name__ == '__main__':
unittest.main()

View File

@@ -11,6 +11,7 @@ from test.helper import try_rm
from youtube_dl import YoutubeDL
from youtube_dl.utils import DownloadError
def _download_restricted(url, filename, age): def _download_restricted(url, filename, age):
@@ -26,7 +27,10 @@ def _download_restricted(url, filename, age):
ydl.add_default_info_extractors()
json_filename = os.path.splitext(filename)[0] + '.info.json'
try_rm(json_filename)
ydl.download([url])
try:
ydl.download([url])
except DownloadError:
try_rm(json_filename)
res = os.path.exists(json_filename)
try_rm(json_filename)
return res
@@ -38,12 +42,12 @@ class TestAgeRestriction(unittest.TestCase):
self.assertFalse(_download_restricted(url, filename, age))
def test_youtube(self):
self._assert_restricted('07FYdnEawAQ', '07FYdnEawAQ.mp4', 10)
self._assert_restricted('HtVdAasjOgU', 'HtVdAasjOgU.mp4', 10)
def test_youporn(self):
self._assert_restricted(
'http://www.youporn.com/watch/505835/sex-ed-is-it-safe-to-masturbate-daily/',
'505835.mp4', 2, old_age=25)
'https://www.youporn.com/watch/16715086/sex-ed-in-detention-18-asmr/',
'16715086.mp4', 2, old_age=25)
if __name__ == '__main__': if __name__ == '__main__':

View File

@@ -66,18 +66,9 @@ class TestAllURLsMatching(unittest.TestCase):
self.assertMatch('https://www.youtube.com/feed/watch_later', ['youtube:tab'])
self.assertMatch('https://www.youtube.com/feed/subscriptions', ['youtube:tab'])
# def test_youtube_search_matching(self):
# self.assertMatch('http://www.youtube.com/results?search_query=making+mustard', ['youtube:search_url'])
# self.assertMatch('https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', ['youtube:search_url'])
def test_youtube_search_matching(self):
self.assertMatch('http://www.youtube.com/results?search_query=making+mustard', ['youtube:search_url'])
self.assertMatch('https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', ['youtube:search_url'])
def test_youtube_extract(self):
assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)
assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc')
assertExtractId('https://www.youtube.com/watch_popup?v=BaW_jenozKc', 'BaW_jenozKc')
assertExtractId('http://www.youtube.com/watch?v=BaW_jenozKcsharePLED17F32AD9753930', 'BaW_jenozKc')
assertExtractId('BaW_jenozKc', 'BaW_jenozKc')
def test_facebook_matching(self):
self.assertTrue(FacebookIE.suitable('https://www.facebook.com/Shiniknoh#!/photo.php?v=10153317450565268'))

View File

@@ -3,17 +3,18 @@
from __future__ import unicode_literals
import shutil
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import shutil
from test.helper import FakeYDL
from youtube_dl.cache import Cache
from youtube_dl.utils import version_tuple
from youtube_dl.version import __version__
def _is_empty(d): def _is_empty(d):
@@ -54,6 +55,17 @@ class TestCache(unittest.TestCase):
self.assertFalse(os.path.exists(self.test_dir))
self.assertEqual(c.load('test_cache', 'k.'), None)
def test_cache_validation(self):
ydl = FakeYDL({
'cachedir': self.test_dir,
})
c = Cache(ydl)
obj = {'x': 1, 'y': ['ä', '\\a', True]}
c.store('test_cache', 'k.', obj)
self.assertEqual(c.load('test_cache', 'k.', min_ver='1970.01.01'), obj)
new_version = '.'.join(('%d' % ((v + 1) if i == 0 else v, )) for i, v in enumerate(version_tuple(__version__)))
self.assertIs(c.load('test_cache', 'k.', min_ver=new_version), None)
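A short sketch (not from the PR) of the version comparison this relies on: version_tuple turns a version string into a tuple of integers, so bumping the first component produces a min_ver newer than __version__ and load() returns None:

from youtube_dl.utils import version_tuple

print(version_tuple('2021.12.17'))                                # (2021, 12, 17)
print(version_tuple('2021.12.17') < version_tuple('2022.12.17'))  # True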
if __name__ == '__main__':
unittest.main()

View File

@@ -11,6 +11,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from youtube_dl.compat import (
compat_casefold,
compat_getenv,
compat_setenv,
compat_etree_Element,
@@ -22,6 +23,7 @@ from youtube_dl.compat import (
compat_urllib_parse_unquote,
compat_urllib_parse_unquote_plus,
compat_urllib_parse_urlencode,
compat_urllib_request,
)
@@ -47,10 +49,11 @@ class TestCompat(unittest.TestCase):
def test_all_present(self):
import youtube_dl.compat
all_names = youtube_dl.compat.__all__
present_names = set(filter(
lambda c: '_' in c and not c.startswith('_'),
dir(youtube_dl.compat))) - set(['unicode_literals'])
all_names = sorted(
youtube_dl.compat.__all__ + youtube_dl.compat.legacy)
present_names = set(map(compat_str, filter(
lambda c: '_' in c and not c.startswith('_'),
dir(youtube_dl.compat)))) - set(['unicode_literals'])
self.assertEqual(all_names, sorted(present_names))
def test_compat_urllib_parse_unquote(self):
@@ -118,9 +121,34 @@ class TestCompat(unittest.TestCase):
<smil xmlns="http://www.w3.org/2001/SMIL20/Language"></smil>'''
compat_etree_fromstring(xml)
def test_struct_unpack(self):
def test_compat_struct_unpack(self):
self.assertEqual(compat_struct_unpack('!B', b'\x00'), (0,))
def test_compat_casefold(self):
if hasattr(compat_str, 'casefold'):
# don't bother to test str.casefold() (again)
return
# thanks https://bugs.python.org/file24232/casefolding.patch
self.assertEqual(compat_casefold('hello'), 'hello')
self.assertEqual(compat_casefold('hELlo'), 'hello')
self.assertEqual(compat_casefold('ß'), 'ss')
self.assertEqual(compat_casefold('ﬁ'), 'fi')
self.assertEqual(compat_casefold('\u03a3'), '\u03c3')
self.assertEqual(compat_casefold('A\u0345\u03a3'), 'a\u03b9\u03c3')
def test_compat_urllib_request_Request(self):
self.assertEqual(
compat_urllib_request.Request('http://127.0.0.1', method='PUT').get_method(),
'PUT')
class PUTrequest(compat_urllib_request.Request):
def get_method(self):
return 'PUT'
self.assertEqual(
PUTrequest('http://127.0.0.1').get_method(),
'PUT')
if __name__ == '__main__':
unittest.main()

View File

@@ -9,7 +9,6 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import (
assertGreaterEqual,
expect_warnings,
get_params,
gettestcases,
@@ -20,26 +19,35 @@ from test.helper import (
import hashlib
import io
import json
import socket
import youtube_dl.YoutubeDL
from youtube_dl.compat import (
compat_http_client,
compat_urllib_error,
compat_HTTPError,
compat_open as open,
compat_urllib_error,
)
from youtube_dl.utils import (
DownloadError,
ExtractorError,
error_to_compat_str,
format_bytes,
IDENTITY,
preferredencoding,
UnavailableVideoError,
)
from youtube_dl.extractor import get_info_extractor
RETRIES = 3
# Some unittest APIs require actual str
if not isinstance('TEST', str):
_encode_str = lambda s: s.encode(preferredencoding())
else:
_encode_str = IDENTITY
class YoutubeDL(youtube_dl.YoutubeDL):
def __init__(self, *args, **kwargs):
@@ -100,27 +108,31 @@ def generator(test_case, tname):
def print_skipping(reason):
print('Skipping %s: %s' % (test_case['name'], reason))
self.skipTest(_encode_str(reason))
if not ie.working():
print_skipping('IE marked as not _WORKING')
return
for tc in test_cases:
info_dict = tc.get('info_dict', {})
if not (info_dict.get('id') and info_dict.get('ext')):
raise Exception('Test definition incorrect. The output file cannot be known. Are both \'id\' and \'ext\' keys present?')
raise Exception('Test definition (%s) requires both \'id\' and \'ext\' keys present to define the output file' % (tname, ))
if 'skip' in test_case:
print_skipping(test_case['skip'])
return
for other_ie in other_ies:
if not other_ie.working():
print_skipping('test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
return
params = get_params(test_case.get('params', {}))
params['outtmpl'] = tname + '_' + params['outtmpl']
if is_playlist and 'playlist' not in test_case:
params.setdefault('extract_flat', 'in_playlist')
params.setdefault('playlistend',
test_case['playlist_maxcount'] + 1
if test_case.get('playlist_maxcount')
else test_case.get('playlist_mincount'))
params.setdefault('skip_download', True)
ydl = YoutubeDL(params, auto_init=False)
@@ -146,6 +158,7 @@ def generator(test_case, tname):
try_rm(tc_filename)
try_rm(tc_filename + '.part')
try_rm(os.path.splitext(tc_filename)[0] + '.info.json')
try_rm_tcs_files() try_rm_tcs_files()
try:
try_num = 1
@@ -160,7 +173,9 @@ def generator(test_case, tname):
except (DownloadError, ExtractorError) as err:
# Check if the exception is not a network related one
if not err.exc_info[0] in (compat_urllib_error.URLError, socket.timeout, UnavailableVideoError, compat_http_client.BadStatusLine) or (err.exc_info[0] == compat_HTTPError and err.exc_info[1].code == 503):
raise
msg = getattr(err, 'msg', error_to_compat_str(err))
err.msg = '%s (%s)' % (msg, tname, )
raise err
if try_num == RETRIES:
report_warning('%s failed due to network errors, skipping...' % tname)
@@ -178,13 +193,19 @@ def generator(test_case, tname):
expect_info_dict(self, res_dict, test_case.get('info_dict', {}))
if 'playlist_mincount' in test_case:
assertGreaterEqual(
self,
self.assertGreaterEqual(
len(res_dict['entries']),
test_case['playlist_mincount'],
'Expected at least %d in playlist %s, but got only %d' % (
test_case['playlist_mincount'], test_case['url'],
len(res_dict['entries'])))
if 'playlist_maxcount' in test_case:
self.assertLessEqual(
len(res_dict['entries']),
test_case['playlist_maxcount'],
'Expected at most %d in playlist %s, but got %d' % (
test_case['playlist_maxcount'], test_case['url'],
len(res_dict['entries'])))
if 'playlist_count' in test_case:
self.assertEqual(
len(res_dict['entries']),
@@ -209,7 +230,15 @@ def generator(test_case, tname):
# First, check test cases' data against extracted data alone
expect_info_dict(self, tc_res_dict, tc.get('info_dict', {}))
# Now, check downloaded file consistency
# support test-case with volatile ID, signalled by regexp value
if tc.get('info_dict', {}).get('id', '').startswith('re:'):
test_id = tc['info_dict']['id']
tc['info_dict']['id'] = tc_res_dict['id']
else:
test_id = None
tc_filename = get_tc_filename(tc)
if test_id:
tc['info_dict']['id'] = test_id
if not test_case.get('params', {}).get('skip_download', False):
self.assertTrue(os.path.exists(tc_filename), msg='Missing file ' + tc_filename)
self.assertTrue(tc_filename in finished_hook_called)
@@ -218,8 +247,8 @@ def generator(test_case, tname):
if params.get('test'):
expected_minsize = max(expected_minsize, 10000)
got_fsize = os.path.getsize(tc_filename)
assertGreaterEqual(
self, got_fsize, expected_minsize,
self.assertGreaterEqual(
got_fsize, expected_minsize,
'Expected %s to be at least %s, but it\'s only %s ' %
(tc_filename, format_bytes(expected_minsize),
format_bytes(got_fsize)))
@@ -232,7 +261,7 @@ def generator(test_case, tname):
self.assertTrue(
os.path.exists(info_json_fn),
'Missing info file %s' % info_json_fn)
with io.open(info_json_fn, encoding='utf-8') as infof:
with open(info_json_fn, encoding='utf-8') as infof:
info_dict = json.load(infof)
expect_info_dict(self, info_dict, tc.get('info_dict', {}))
finally:

View File

@@ -0,0 +1,272 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals
# Allow direct execution
import os
import re
import sys
import subprocess
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import (
FakeLogger,
FakeYDL,
http_server_port,
try_rm,
)
from youtube_dl import YoutubeDL
from youtube_dl.compat import (
compat_contextlib_suppress,
compat_http_cookiejar_Cookie,
compat_http_server,
compat_kwargs,
)
from youtube_dl.utils import (
encodeFilename,
join_nonempty,
)
from youtube_dl.downloader.external import (
Aria2cFD,
Aria2pFD,
AxelFD,
CurlFD,
FFmpegFD,
HttpieFD,
WgetFD,
)
from youtube_dl.postprocessor import (
FFmpegPostProcessor,
)
import threading
TEST_SIZE = 10 * 1024
TEST_COOKIE = {
'version': 0,
'name': 'test',
'value': 'ytdlp',
'port': None,
'port_specified': False,
'domain': '.example.com',
'domain_specified': True,
'domain_initial_dot': False,
'path': '/',
'path_specified': True,
'secure': False,
'expires': None,
'discard': False,
'comment': None,
'comment_url': None,
'rest': {},
}
TEST_COOKIE_VALUE = join_nonempty('name', 'value', delim='=', from_dict=TEST_COOKIE)
TEST_INFO = {'url': 'http://www.example.com/'}
def cookiejar_Cookie(**cookie_args):
return compat_http_cookiejar_Cookie(**compat_kwargs(cookie_args))
def ifExternalFDAvailable(externalFD):
return unittest.skipUnless(externalFD.available(),
externalFD.get_basename() + ' not found')
class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
def log_message(self, format, *args):
pass
def send_content_range(self, total=None):
range_header = self.headers.get('Range')
start = end = None
if range_header:
mobj = re.match(r'bytes=(\d+)-(\d+)', range_header)
if mobj:
start, end = (int(mobj.group(i)) for i in (1, 2))
valid_range = start is not None and end is not None
if valid_range:
content_range = 'bytes %d-%d' % (start, end)
if total:
content_range += '/%d' % total
self.send_header('Content-Range', content_range)
return (end - start + 1) if valid_range else total
def serve(self, range=True, content_length=True):
self.send_response(200)
self.send_header('Content-Type', 'video/mp4')
size = TEST_SIZE
if range:
size = self.send_content_range(TEST_SIZE)
if content_length:
self.send_header('Content-Length', size)
self.end_headers()
self.wfile.write(b'#' * size)
def do_GET(self):
if self.path == '/regular':
self.serve()
elif self.path == '/no-content-length':
self.serve(content_length=False)
elif self.path == '/no-range':
self.serve(range=False)
elif self.path == '/no-range-no-content-length':
self.serve(range=False, content_length=False)
else:
assert False, 'unrecognised server path'
@ifExternalFDAvailable(Aria2pFD)
class TestAria2pFD(unittest.TestCase):
def setUp(self):
self.httpd = compat_http_server.HTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
self.port = http_server_port(self.httpd)
self.server_thread = threading.Thread(target=self.httpd.serve_forever)
self.server_thread.daemon = True
self.server_thread.start()
def download(self, params, ep):
with subprocess.Popen(
['aria2c', '--enable-rpc'],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL
) as process:
if not process.poll():
filename = 'testfile.mp4'
params['logger'] = FakeLogger()
params['outtmpl'] = filename
ydl = YoutubeDL(params)
try_rm(encodeFilename(filename))
self.assertEqual(ydl.download(['http://127.0.0.1:%d/%s' % (self.port, ep)]), 0)
self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE)
try_rm(encodeFilename(filename))
process.kill()
def download_all(self, params):
for ep in ('regular', 'no-content-length', 'no-range', 'no-range-no-content-length'):
self.download(params, ep)
def test_regular(self):
self.download_all({'external_downloader': 'aria2p'})
def test_chunked(self):
self.download_all({
'external_downloader': 'aria2p',
'http_chunk_size': 1000,
})
@ifExternalFDAvailable(HttpieFD)
class TestHttpieFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = HttpieFD(ydl, {})
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['http', '--download', '--output', 'test', 'http://www.example.com/'])
# Test cookie header is added
ydl.cookiejar.set_cookie(cookiejar_Cookie(**TEST_COOKIE))
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['http', '--download', '--output', 'test',
'http://www.example.com/', 'Cookie:' + TEST_COOKIE_VALUE])
@ifExternalFDAvailable(AxelFD)
class TestAxelFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = AxelFD(ydl, {})
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['axel', '-o', 'test', '--', 'http://www.example.com/'])
# Test cookie header is added
ydl.cookiejar.set_cookie(cookiejar_Cookie(**TEST_COOKIE))
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['axel', '-o', 'test', '-H', 'Cookie: ' + TEST_COOKIE_VALUE,
'--max-redirect=0', '--', 'http://www.example.com/'])
@ifExternalFDAvailable(WgetFD)
class TestWgetFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = WgetFD(ydl, {})
self.assertNotIn('--load-cookies', downloader._make_cmd('test', TEST_INFO))
# Test cookiejar tempfile arg is added
ydl.cookiejar.set_cookie(cookiejar_Cookie(**TEST_COOKIE))
self.assertIn('--load-cookies', downloader._make_cmd('test', TEST_INFO))
@ifExternalFDAvailable(CurlFD)
class TestCurlFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = CurlFD(ydl, {})
self.assertNotIn('--cookie', downloader._make_cmd('test', TEST_INFO))
# Test cookie header is added
ydl.cookiejar.set_cookie(cookiejar_Cookie(**TEST_COOKIE))
self.assertIn('--cookie', downloader._make_cmd('test', TEST_INFO))
self.assertIn(TEST_COOKIE_VALUE, downloader._make_cmd('test', TEST_INFO))
@ifExternalFDAvailable(Aria2cFD)
class TestAria2cFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = Aria2cFD(ydl, {})
downloader._make_cmd('test', TEST_INFO)
self.assertFalse(hasattr(downloader, '_cookies_tempfile'))
# Test cookiejar tempfile arg is added
ydl.cookiejar.set_cookie(cookiejar_Cookie(**TEST_COOKIE))
cmd = downloader._make_cmd('test', TEST_INFO)
self.assertIn('--load-cookies=%s' % downloader._cookies_tempfile, cmd)
# Handle delegated availability
def ifFFmpegFDAvailable(externalFD):
# raise SkipTest, or set False!
avail = ifExternalFDAvailable(externalFD) and False
with compat_contextlib_suppress(Exception):
avail = FFmpegPostProcessor(downloader=None).available
return unittest.skipUnless(
avail, externalFD.get_basename() + ' not found')
@ifFFmpegFDAvailable(FFmpegFD)
class TestFFmpegFD(unittest.TestCase):
_args = []
def _test_cmd(self, args):
self._args = args
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = FFmpegFD(ydl, {})
downloader._debug_cmd = self._test_cmd
info_dict = TEST_INFO.copy()
info_dict['ext'] = 'mp4'
downloader._call_downloader('test', info_dict)
self.assertEqual(self._args, [
'ffmpeg', '-y', '-i', 'http://www.example.com/',
'-c', 'copy', '-f', 'mp4', 'file:test'])
# Test cookies arg is added
ydl.cookiejar.set_cookie(cookiejar_Cookie(**TEST_COOKIE))
downloader._call_downloader('test', info_dict)
self.assertEqual(self._args, [
'ffmpeg', '-y', '-cookies', TEST_COOKIE_VALUE + '; path=/; domain=.example.com;\r\n',
'-i', 'http://www.example.com/', '-c', 'copy', '-f', 'mp4', 'file:test'])
if __name__ == '__main__':
unittest.main()

View File

@@ -9,7 +9,11 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import http_server_port, try_rm
from test.helper import (
FakeLogger,
http_server_port,
try_rm,
)
from youtube_dl import YoutubeDL
from youtube_dl.compat import compat_http_server
from youtube_dl.downloader.http import HttpFD
@@ -66,17 +70,6 @@ class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
assert False
class FakeLogger(object):
def debug(self, msg):
pass
def warning(self, msg):
pass
def error(self, msg):
pass
class TestHttpFD(unittest.TestCase):
def setUp(self):
self.httpd = compat_http_server.HTTPServer(
@@ -95,7 +88,7 @@ class TestHttpFD(unittest.TestCase):
self.assertTrue(downloader.real_download(filename, {
'url': 'http://127.0.0.1:%d/%s' % (self.port, ep),
}))
self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE)
self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE, ep)
try_rm(encodeFilename(filename))
def download_all(self, params):

View File

@@ -8,37 +8,55 @@ import unittest
import sys
import os
import subprocess
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from youtube_dl.utils import encodeArgument
rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, rootDir)
try:
_DEV_NULL = subprocess.DEVNULL
except AttributeError:
_DEV_NULL = open(os.devnull, 'wb')
from youtube_dl.compat import compat_register_utf8, compat_subprocess_get_DEVNULL
from youtube_dl.utils import encodeArgument
compat_register_utf8()
_DEV_NULL = compat_subprocess_get_DEVNULL()
class TestExecution(unittest.TestCase):
def setUp(self):
self.module = 'youtube_dl'
if sys.version_info < (2, 7):
self.module += '.__main__'
def test_import(self):
subprocess.check_call([sys.executable, '-c', 'import youtube_dl'], cwd=rootDir)
def test_module_exec(self):
if sys.version_info >= (2, 7): # Python 2.6 doesn't support package execution
subprocess.check_call([sys.executable, '-m', 'youtube_dl', '--version'], cwd=rootDir, stdout=_DEV_NULL)
subprocess.check_call([sys.executable, '-m', self.module, '--version'], cwd=rootDir, stdout=_DEV_NULL)
def test_main_exec(self):
subprocess.check_call([sys.executable, 'youtube_dl/__main__.py', '--version'], cwd=rootDir, stdout=_DEV_NULL)
subprocess.check_call([sys.executable, os.path.normpath('youtube_dl/__main__.py'), '--version'], cwd=rootDir, stdout=_DEV_NULL)
def test_cmdline_umlauts(self):
os.environ['PYTHONIOENCODING'] = 'utf-8'
p = subprocess.Popen(
[sys.executable, 'youtube_dl/__main__.py', encodeArgument('ä'), '--version'],
[sys.executable, '-m', self.module, encodeArgument('ä'), '--version'],
cwd=rootDir, stdout=_DEV_NULL, stderr=subprocess.PIPE)
_, stderr = p.communicate()
self.assertFalse(stderr)
def test_lazy_extractors(self):
lazy_extractors = os.path.normpath('youtube_dl/extractor/lazy_extractors.py')
try:
subprocess.check_call([sys.executable, os.path.normpath('devscripts/make_lazy_extractors.py'), lazy_extractors], cwd=rootDir, stdout=_DEV_NULL)
subprocess.check_call([sys.executable, os.path.normpath('test/test_all_urls.py')], cwd=rootDir, stdout=_DEV_NULL)
finally:
for x in ('', 'c') if sys.version_info[0] < 3 else ('',):
try:
os.remove(lazy_extractors + x)
except OSError:
pass
if __name__ == '__main__':
unittest.main()

View File

@@ -8,30 +8,163 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import http_server_port
from youtube_dl import YoutubeDL
from youtube_dl.compat import compat_http_server, compat_urllib_request
import contextlib
import gzip
import io
import ssl
import tempfile
import threading
import zlib
# avoid deprecated alias assertRaisesRegexp
if hasattr(unittest.TestCase, 'assertRaisesRegex'):
unittest.TestCase.assertRaisesRegexp = unittest.TestCase.assertRaisesRegex
try:
import brotli
except ImportError:
brotli = None
try:
from urllib.request import pathname2url
except ImportError:
from urllib import pathname2url
from youtube_dl.compat import (
compat_http_cookiejar_Cookie,
compat_http_server,
compat_str as str,
compat_urllib_error,
compat_urllib_HTTPError,
compat_urllib_parse,
compat_urllib_request,
)
from youtube_dl.utils import (
sanitized_Request,
update_Request,
urlencode_postdata,
)
from test.helper import (
expectedFailureIf,
FakeYDL,
FakeLogger,
http_server_port,
)
from youtube_dl import YoutubeDL
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
protocol_version = 'HTTP/1.1'
# work-around old/new -style class inheritance
def super(self, meth_name, *args, **kwargs):
from types import MethodType
try:
super()
fn = lambda s, m, *a, **k: getattr(super(), m)(*a, **k)
except TypeError:
fn = lambda s, m, *a, **k: getattr(compat_http_server.BaseHTTPRequestHandler, m)(s, *a, **k)
self.super = MethodType(fn, self)
return self.super(meth_name, *args, **kwargs)
def log_message(self, format, *args):
pass
def _headers(self):
payload = str(self.headers).encode('utf-8')
self.send_response(200)
self.send_header('Content-Type', 'application/json')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
def _redirect(self):
self.send_response(int(self.path[len('/redirect_'):]))
self.send_header('Location', '/method')
self.send_header('Content-Length', '0')
self.end_headers()
def _method(self, method, payload=None):
self.send_response(200)
self.send_header('Content-Length', str(len(payload or '')))
self.send_header('Method', method)
self.end_headers()
if payload:
self.wfile.write(payload)
def _status(self, status):
payload = '<html>{0} NOT FOUND</html>'.format(status).encode('utf-8')
self.send_response(int(status))
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
def _read_data(self):
if 'Content-Length' in self.headers:
return self.rfile.read(int(self.headers['Content-Length']))
def _test_url(self, path, host='127.0.0.1', scheme='http', port=None):
return '{0}://{1}:{2}/{3}'.format(
scheme, host,
port if port is not None
else http_server_port(self.server), path)
def do_POST(self):
data = self._read_data()
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('POST', data)
elif self.path.startswith('/headers'):
self._headers()
else:
self._status(404)
def do_HEAD(self):
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('HEAD')
else:
self._status(404)
def do_PUT(self):
data = self._read_data()
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('PUT', data)
else:
self._status(404)
def do_GET(self):
def respond(payload=b'<html><video src="/vid.mp4" /></html>',
payload_type='text/html; charset=utf-8',
payload_encoding=None,
resp_code=200):
self.send_response(resp_code)
self.send_header('Content-Type', payload_type)
if payload_encoding:
self.send_header('Content-Encoding', payload_encoding)
self.send_header('Content-Length', str(len(payload))) # required for persistent connections
self.end_headers()
self.wfile.write(payload)
def gzip_compress(p):
buf = io.BytesIO()
with contextlib.closing(gzip.GzipFile(fileobj=buf, mode='wb')) as f:
f.write(p)
return buf.getvalue()
if self.path == '/video.html':
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.end_headers()
self.wfile.write(b'<html><video src="/vid.mp4" /></html>')
respond()
elif self.path == '/vid.mp4':
self.send_response(200)
self.send_header('Content-Type', 'video/mp4')
self.end_headers()
self.wfile.write(b'\x00\x00\x00\x00\x20\x66\x74[video]')
respond(b'\x00\x00\x00\x00\x20\x66\x74[video]', 'video/mp4')
elif self.path == '/302':
if sys.version_info[0] == 3:
# XXX: Python 3 http server does not allow non-ASCII header values
@@ -39,71 +172,336 @@ class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
self.end_headers()
return
new_url = 'http://127.0.0.1:%d/中文.html' % http_server_port(self.server)
new_url = self._test_url('中文.html')
self.send_response(302)
self.send_header(b'Location', new_url.encode('utf-8'))
self.end_headers()
elif self.path == '/%E4%B8%AD%E6%96%87.html':
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.end_headers()
self.wfile.write(b'<html><video src="/vid.mp4" /></html>')
respond()
elif self.path == '/%c7%9f':
respond()
elif self.path == '/redirect_dotsegments':
self.send_response(301)
# redirect to /headers but with dot segments before
self.send_header('Location', '/a/b/./../../headers')
self.send_header('Content-Length', '0')
self.end_headers()
elif self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('GET')
elif self.path.startswith('/headers'):
self._headers()
elif self.path.startswith('/308-to-headers'):
self.send_response(308)
self.send_header('Location', '/headers')
self.send_header('Content-Length', '0')
self.end_headers()
elif self.path == '/trailing_garbage':
payload = b'<html><video src="/vid.mp4" /></html>'
compressed = gzip_compress(payload) + b'trailing garbage'
respond(compressed, payload_encoding='gzip')
elif self.path == '/302-non-ascii-redirect':
new_url = self._test_url('中文.html')
# actually respond with permanent redirect
self.send_response(301)
self.send_header('Location', new_url)
self.send_header('Content-Length', '0')
self.end_headers()
elif self.path == '/content-encoding':
encodings = self.headers.get('ytdl-encoding', '')
payload = b'<html><video src="/vid.mp4" /></html>'
for encoding in filter(None, (e.strip() for e in encodings.split(','))):
if encoding == 'br' and brotli:
payload = brotli.compress(payload)
elif encoding == 'gzip':
payload = gzip_compress(payload)
elif encoding == 'deflate':
payload = zlib.compress(payload)
elif encoding == 'unsupported':
payload = b'raw'
break
else:
self._status(415)
return
respond(payload, payload_encoding=encodings)
else:
assert False
self._status(404)
def send_header(self, keyword, value):
"""
Forcibly allow HTTP server to send non percent-encoded non-ASCII characters in headers.
This is against what is defined in RFC 3986: but we need to test that we support this
since some sites incorrectly do this.
"""
if keyword.lower() == 'connection':
return self.super('send_header', keyword, value)
class FakeLogger(object):
def debug(self, msg):
pass
def warning(self, msg):
pass
def error(self, msg):
pass
if not hasattr(self, '_headers_buffer'):
self._headers_buffer = []
self._headers_buffer.append('{0}: {1}\r\n'.format(keyword, value).encode('utf-8'))
def end_headers(self):
if hasattr(self, '_headers_buffer'):
self.wfile.write(b''.join(self._headers_buffer))
self._headers_buffer = []
self.super('end_headers')
class TestHTTP(unittest.TestCase):
# when does it make sense to check the SSL certificate?
_check_cert = (
sys.version_info >= (3, 2)
or (sys.version_info[0] == 2 and sys.version_info[1:] >= (7, 19)))
def setUp(self):
self.httpd = compat_http_server.HTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
self.port = http_server_port(self.httpd)
self.server_thread = threading.Thread(target=self.httpd.serve_forever)
self.server_thread.daemon = True
self.server_thread.start()
# HTTP server
self.http_httpd = compat_http_server.HTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
self.http_port = http_server_port(self.http_httpd)
self.http_server_thread = threading.Thread(target=self.http_httpd.serve_forever)
self.http_server_thread.daemon = True
self.http_server_thread.start()
try:
from http.server import ThreadingHTTPServer
except ImportError:
try:
from socketserver import ThreadingMixIn
except ImportError:
from SocketServer import ThreadingMixIn
class ThreadingHTTPServer(ThreadingMixIn, compat_http_server.HTTPServer):
pass
# HTTPS server
certfn = os.path.join(TEST_DIR, 'testcert.pem')
self.https_httpd = ThreadingHTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
try:
sslctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
sslctx.verify_mode = ssl.CERT_NONE
sslctx.check_hostname = False
sslctx.load_cert_chain(certfn, None)
self.https_httpd.socket = sslctx.wrap_socket(
self.https_httpd.socket, server_side=True)
except AttributeError:
self.https_httpd.socket = ssl.wrap_socket(
self.https_httpd.socket, certfile=certfn, server_side=True)
self.https_port = http_server_port(self.https_httpd)
self.https_server_thread = threading.Thread(target=self.https_httpd.serve_forever)
self.https_server_thread.daemon = True
self.https_server_thread.start()
def tearDown(self):
def closer(svr):
def _closer():
svr.shutdown()
svr.server_close()
return _closer
shutdown_thread = threading.Thread(target=closer(self.http_httpd))
shutdown_thread.start()
self.http_server_thread.join(2.0)
shutdown_thread = threading.Thread(target=closer(self.https_httpd))
shutdown_thread.start()
self.https_server_thread.join(2.0)
def _test_url(self, path, host='127.0.0.1', scheme='http', port=None):
return '{0}://{1}:{2}/{3}'.format(
scheme, host,
port if port is not None
else self.https_port if scheme == 'https'
else self.http_port, path)
@unittest.skipUnless(_check_cert, 'No support for certificate check in SSL')
def test_nocheckcertificate(self):
with FakeYDL({'logger': FakeLogger()}) as ydl:
with self.assertRaises(compat_urllib_error.URLError):
ydl.urlopen(sanitized_Request(self._test_url('headers', scheme='https')))
with FakeYDL({'logger': FakeLogger(), 'nocheckcertificate': True}) as ydl:
r = ydl.urlopen(sanitized_Request(self._test_url('headers', scheme='https')))
self.assertEqual(r.getcode(), 200)
r.close()
def test_percent_encode(self):
with FakeYDL() as ydl:
# Unicode characters should be encoded with uppercase percent-encoding
res = ydl.urlopen(sanitized_Request(self._test_url('中文.html')))
self.assertEqual(res.getcode(), 200)
res.close()
# don't normalize existing percent encodings
res = ydl.urlopen(sanitized_Request(self._test_url('%c7%9f')))
self.assertEqual(res.getcode(), 200)
res.close()
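As a stdlib sketch (not part of the diff) of the behaviour tested above: quoting the non-ASCII path produces uppercase percent-escapes, while an already-encoded path such as /%c7%9f must be passed through untouched rather than re-normalized:

from urllib.parse import quote  # Python 3 stdlib

print(quote('中文.html'))  # -> '%E4%B8%AD%E6%96%87.html' (uppercase hex digits)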
def test_unicode_path_redirection(self):
# XXX: Python 3 http server does not allow non-ASCII header values
if sys.version_info[0] == 3:
return
ydl = YoutubeDL({'logger': FakeLogger()})
r = ydl.extract_info('http://127.0.0.1:%d/302' % self.port)
self.assertEqual(r['entries'][0]['url'], 'http://127.0.0.1:%d/vid.mp4' % self.port)
with FakeYDL() as ydl:
r = ydl.urlopen(sanitized_Request(self._test_url('302-non-ascii-redirect')))
self.assertEqual(r.url, self._test_url('%E4%B8%AD%E6%96%87.html'))
r.close()
def test_redirect(self):
with FakeYDL() as ydl:
def do_req(redirect_status, method, check_no_content=False):
data = b'testdata' if method in ('POST', 'PUT') else None
res = ydl.urlopen(sanitized_Request(
self._test_url('redirect_{0}'.format(redirect_status)),
method=method, data=data))
if check_no_content:
self.assertNotIn('Content-Type', res.headers)
return res.read().decode('utf-8'), res.headers.get('method', '')
# A 303 must either use GET or HEAD for subsequent request
self.assertEqual(do_req(303, 'POST'), ('', 'GET'))
self.assertEqual(do_req(303, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(303, 'PUT'), ('', 'GET'))
class TestHTTPS(unittest.TestCase):
def setUp(self):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
self.httpd = compat_http_server.HTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
self.httpd.socket = ssl.wrap_socket(
self.httpd.socket, certfile=certfn, server_side=True)
self.port = http_server_port(self.httpd)
self.server_thread = threading.Thread(target=self.httpd.serve_forever)
self.server_thread.daemon = True
self.server_thread.start()
def test_nocheckcertificate(self):
if sys.version_info >= (2, 7, 9): # No certificate checking anyways
ydl = YoutubeDL({'logger': FakeLogger()})
self.assertRaises(
Exception,
ydl.extract_info, 'https://127.0.0.1:%d/video.html' % self.port)
ydl = YoutubeDL({'logger': FakeLogger(), 'nocheckcertificate': True})
r = ydl.extract_info('https://127.0.0.1:%d/video.html' % self.port)
self.assertEqual(r['entries'][0]['url'], 'https://127.0.0.1:%d/vid.mp4' % self.port)
# 301 and 302 turn POST only into a GET, with no Content-Type
self.assertEqual(do_req(301, 'POST', True), ('', 'GET'))
self.assertEqual(do_req(301, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(302, 'POST', True), ('', 'GET'))
self.assertEqual(do_req(302, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(301, 'PUT'), ('testdata', 'PUT'))
self.assertEqual(do_req(302, 'PUT'), ('testdata', 'PUT'))
# 307 and 308 should not change method
for m in ('POST', 'PUT'):
self.assertEqual(do_req(307, m), ('testdata', m))
self.assertEqual(do_req(308, m), ('testdata', m))
self.assertEqual(do_req(307, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(308, 'HEAD'), ('', 'HEAD'))
# These should not redirect and instead raise an HTTPError
for code in (300, 304, 305, 306):
with self.assertRaises(compat_urllib_HTTPError):
do_req(code, 'GET')
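# For reference, a paraphrase of the method-rewriting rules asserted above
# (RFC 9110 sect. 15.4); `redirect_method` is a hypothetical helper for
# illustration only, not part of this test suite:
def redirect_method(status, method):
    if status == 303 and method != 'HEAD':
        return 'GET'  # 303 See Other: everything except HEAD becomes GET
    if status in (301, 302) and method == 'POST':
        return 'GET'  # historic POST -> GET rewrite, kept for compatibility
    return method  # 307/308 (and other methods) are preserved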
# Jython 2.7.1 times out for some reason
@expectedFailureIf(sys.platform.startswith('java') and sys.version_info < (2, 7, 2))
def test_content_type(self):
# https://github.com/yt-dlp/yt-dlp/commit/379a4f161d4ad3e40932dcf5aca6e6fb9715ab28
with FakeYDL({'nocheckcertificate': True}) as ydl:
# method should be auto-detected as POST
r = sanitized_Request(self._test_url('headers', scheme='https'), data=urlencode_postdata({'test': 'test'}))
headers = ydl.urlopen(r).read().decode('utf-8')
self.assertIn('Content-Type: application/x-www-form-urlencoded', headers)
# test http
r = sanitized_Request(self._test_url('headers'), data=urlencode_postdata({'test': 'test'}))
headers = ydl.urlopen(r).read().decode('utf-8')
self.assertIn('Content-Type: application/x-www-form-urlencoded', headers)
def test_update_req(self):
req = sanitized_Request('http://example.com')
assert req.data is None
assert req.get_method() == 'GET'
assert not req.has_header('Content-Type')
# Test that zero-byte payloads will be sent
req = update_Request(req, data=b'')
assert req.data == b''
assert req.get_method() == 'POST'
# yt-dl expects data to be encoded and Content-Type to be added by sender
# assert req.get_header('Content-Type') == 'application/x-www-form-urlencoded'
def test_cookiejar(self):
with FakeYDL() as ydl:
ydl.cookiejar.set_cookie(compat_http_cookiejar_Cookie(
0, 'test', 'ytdl', None, False, '127.0.0.1', True,
False, '/headers', True, False, None, False, None, None, {}))
data = ydl.urlopen(sanitized_Request(
self._test_url('headers'))).read().decode('utf-8')
self.assertIn('Cookie: test=ytdl', data)
def test_passed_cookie_header(self):
# We should accept a Cookie header being passed as in normal headers and handle it appropriately.
with FakeYDL() as ydl:
# Specified Cookie header should be used
res = ydl.urlopen(sanitized_Request(
self._test_url('headers'), headers={'Cookie': 'test=test'})).read().decode('utf-8')
self.assertIn('Cookie: test=test', res)
# Specified Cookie header should be removed on any redirect
res = ydl.urlopen(sanitized_Request(
self._test_url('308-to-headers'), headers={'Cookie': 'test=test'})).read().decode('utf-8')
self.assertNotIn('Cookie: test=test', res)
# Specified Cookie header should override global cookiejar for that request
ydl.cookiejar.set_cookie(compat_http_cookiejar_Cookie(
0, 'test', 'ytdlp', None, False, '127.0.0.1', True,
False, '/headers', True, False, None, False, None, None, {}))
data = ydl.urlopen(sanitized_Request(
self._test_url('headers'), headers={'Cookie': 'test=test'})).read().decode('utf-8')
self.assertNotIn('Cookie: test=ytdlp', data)
self.assertIn('Cookie: test=test', data)
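# In other words: an explicit Cookie header wins over the cookiejar for that
# one request, and is dropped across redirects so it cannot leak to another host.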
def test_no_compression_compat_header(self):
with FakeYDL() as ydl:
data = ydl.urlopen(
sanitized_Request(
self._test_url('headers'),
headers={'Youtubedl-no-compression': True})).read()
self.assertIn(b'Accept-Encoding: identity', data)
self.assertNotIn(b'youtubedl-no-compression', data.lower())
def test_gzip_trailing_garbage(self):
# https://github.com/ytdl-org/youtube-dl/commit/aa3e950764337ef9800c936f4de89b31c00dfcf5
# https://github.com/ytdl-org/youtube-dl/commit/6f2ec15cee79d35dba065677cad9da7491ec6e6f
with FakeYDL() as ydl:
data = ydl.urlopen(sanitized_Request(self._test_url('trailing_garbage'))).read().decode('utf-8')
self.assertEqual(data, '<html><video src="/vid.mp4" /></html>')
def __test_compression(self, encoding):
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
self._test_url('content-encoding'),
headers={'ytdl-encoding': encoding}))
# decoded encodings are removed: only check for valid decompressed data
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
@unittest.skipUnless(brotli, 'brotli support is not installed')
def test_brotli(self):
self.__test_compression('br')
def test_deflate(self):
self.__test_compression('deflate')
def test_gzip(self):
self.__test_compression('gzip')
def test_multiple_encodings(self):
# https://www.rfc-editor.org/rfc/rfc9110.html#section-8.4
for pair in ('gzip,deflate', 'deflate, gzip', 'gzip, gzip', 'deflate, deflate'):
self.__test_compression(pair)
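# Per RFC 9110 sect. 8.4, content codings are listed in the order they were
# applied, so a receiver decodes them in reverse. A minimal sketch of that
# rule (assuming zlib-wrapped deflate; `decode_chain` is a hypothetical name):
def decode_chain(body, content_encoding):
    import gzip, zlib
    # walk 'gzip, deflate'-style chains from last-applied coding to first
    for coding in reversed([c.strip() for c in content_encoding.split(',')]):
        if coding == 'gzip':
            body = gzip.decompress(body)
        elif coding == 'deflate':
            body = zlib.decompress(body)
    return body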
def test_unsupported_encoding(self):
# it should return the raw content
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
self._test_url('content-encoding'),
headers={'ytdl-encoding': 'unsupported'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'unsupported')
self.assertEqual(res.read(), b'raw')
def test_remove_dot_segments(self):
with FakeYDL() as ydl:
res = ydl.urlopen(sanitized_Request(self._test_url('a/b/./../../headers')))
self.assertEqual(compat_urllib_parse.urlparse(res.geturl()).path, '/headers')
res = ydl.urlopen(sanitized_Request(self._test_url('redirect_dotsegments')))
self.assertEqual(compat_urllib_parse.urlparse(res.geturl()).path, '/headers')
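# Dot-segment removal follows RFC 3986 sect. 5.2.4; for absolute paths,
# posixpath.normpath is a rough stand-in for the algorithm under test:
#   posixpath.normpath('/a/b/./../../headers')  ->  '/headers'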
def _build_proxy_handler(name):

@@ -117,7 +515,7 @@ def _build_proxy_handler(name):
self.send_response(200)
self.send_header('Content-Type', 'text/plain; charset=utf-8')
self.end_headers()
self.wfile.write('{0}: {1}'.format(self.proxy_name, self.path).encode('utf-8'))
return HTTPTestRequestHandler
@@ -137,10 +535,30 @@ class TestProxy(unittest.TestCase):
self.geo_proxy_thread.daemon = True
self.geo_proxy_thread.start()

def tearDown(self):

def closer(svr):
def _closer():
svr.shutdown()
svr.server_close()
return _closer

shutdown_thread = threading.Thread(target=closer(self.proxy))
shutdown_thread.start()
self.proxy_thread.join(2.0)

shutdown_thread = threading.Thread(target=closer(self.geo_proxy))
shutdown_thread.start()
self.geo_proxy_thread.join(2.0)

def _test_proxy(self, host='127.0.0.1', port=None):
return '{0}:{1}'.format(
host, port if port is not None else self.port)

def test_proxy(self):
geo_proxy = self._test_proxy(port=self.geo_port)
ydl = YoutubeDL({
'proxy': self._test_proxy(),
'geo_verification_proxy': geo_proxy,
})
url = 'http://foo.com/bar'

@@ -154,7 +572,7 @@ class TestProxy(unittest.TestCase):
def test_proxy_with_idn(self):
ydl = YoutubeDL({
'proxy': self._test_proxy(),
})
url = 'http://中文.tw/'
response = ydl.urlopen(url).read().decode('utf-8')

@@ -162,5 +580,25 @@ class TestProxy(unittest.TestCase):
self.assertEqual(response, 'normal: http://xn--fiq228c.tw/')
class TestFileURL(unittest.TestCase):
# See https://github.com/ytdl-org/youtube-dl/issues/8227
def test_file_urls(self):
tf = tempfile.NamedTemporaryFile(delete=False)
tf.write(b'foobar')
tf.close()
url = compat_urllib_parse.urljoin('file://', pathname2url(tf.name))
with FakeYDL() as ydl:
self.assertRaisesRegexp(
compat_urllib_error.URLError, 'file:// scheme is explicitly disabled in youtube-dl for security reasons', ydl.urlopen, url)
# not yet implemented
"""
with FakeYDL({'enable_file_urls': True}) as ydl:
res = ydl.urlopen(url)
self.assertEqual(res.read(), b'foobar')
res.close()
"""
os.unlink(tf.name)
if __name__ == '__main__':
unittest.main()


@@ -8,109 +8,450 @@ import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import math
import re

from youtube_dl.compat import compat_str
from youtube_dl.jsinterp import JS_Undefined, JSInterpreter

NaN = object()


class TestJSInterpreter(unittest.TestCase):
def _test(self, jsi_or_code, expected, func='f', args=()):
if isinstance(jsi_or_code, compat_str):
jsi_or_code = JSInterpreter(jsi_or_code)
got = jsi_or_code.call_function(func, *args)
if expected is NaN:
self.assertTrue(math.isnan(got), '{0} is not NaN'.format(got))
else:
self.assertEqual(got, expected)
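# Note: `NaN` above is a local sentinel object, not float('nan'); since
# NaN != NaN in both JS and Python, _test() checks NaN results with
# math.isnan() instead of assertEqual().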
def test_basic(self):
jsi = JSInterpreter('function f(){;}')
self.assertEqual(repr(jsi.extract_function('f')), 'F<f>')
self._test(jsi, None)

self._test('function f(){return 42;}', 42)
self._test('function f(){42}', None)
self._test('var f = function(){return 42;}', 42)

def test_add(self):
self._test('function f(){return 42 + 7;}', 49)
self._test('function f(){return 42 + undefined;}', NaN)
self._test('function f(){return 42 + null;}', 42)
def test_sub(self):
self._test('function f(){return 42 - 7;}', 35)
self._test('function f(){return 42 - undefined;}', NaN)
self._test('function f(){return 42 - null;}', 42)
def test_mul(self):
self._test('function f(){return 42 * 7;}', 294)
self._test('function f(){return 42 * undefined;}', NaN)
self._test('function f(){return 42 * null;}', 0)
def test_div(self):
jsi = JSInterpreter('function f(a, b){return a / b;}')
self._test(jsi, NaN, args=(0, 0))
self._test(jsi, NaN, args=(JS_Undefined, 1))
self._test(jsi, float('inf'), args=(2, 0))
self._test(jsi, 0, args=(0, 3))
def test_mod(self):
self._test('function f(){return 42 % 7;}', 0)
self._test('function f(){return 42 % 0;}', NaN)
self._test('function f(){return 42 % undefined;}', NaN)
def test_exp(self):
self._test('function f(){return 42 ** 2;}', 1764)
self._test('function f(){return 42 ** undefined;}', NaN)
self._test('function f(){return 42 ** null;}', 1)
self._test('function f(){return undefined ** 42;}', NaN)
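# JS arithmetic coercion, as pinned above: undefined becomes NaN while null
# becomes 0, so `42 * null` is 0 and `42 * undefined` is NaN; `42 ** null`
# is 42 ** 0 == 1.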
def test_calc(self):
self._test('function f(a){return 2*a+1;}', 7, args=[3])

def test_empty_return(self):
self._test('function f(){return; y()}', None)

def test_morespace(self):
self._test('function f (a) { return 2 * a + 1 ; }', 7, args=[3])
self._test('function f () { x = 2 ; return x; }', 2)

def test_strange_chars(self):
self._test('function $_xY1 ($_axY1) { var $_axY2 = $_axY1 + 1; return $_axY2; }',
21, args=[20], func='$_xY1')

def test_operators(self):
self._test('function f(){return 1 << 5;}', 32)
self._test('function f(){return 2 ** 5}', 32)
self._test('function f(){return 19 & 21;}', 17)
self._test('function f(){return 11 >> 2;}', 2)
self._test('function f(){return []? 2+3: 4;}', 5)
self._test('function f(){return 1 == 2}', False)
self._test('function f(){return 0 && 1 || 2;}', 2)
self._test('function f(){return 0 ?? 42;}', 0)
self._test('function f(){return "life, the universe and everything" < 42;}', False)
# https://github.com/ytdl-org/youtube-dl/issues/32815
self._test('function f(){return 0 - 7 * - 6;}', 42)

def test_array_access(self):
self._test('function f(){var x = [1,2,3]; x[0] = 4; x[0] = 5; x[2.0] = 7; return x;}', [5, 2, 7])

def test_parens(self):
self._test('function f(){return (1) + (2) * ((( (( (((((3)))))) )) ));}', 7)
self._test('function f(){return (1 + 2) * 3;}', 9)

def test_quotes(self):
self._test(r'function f(){return "a\"\\("}', r'a"\(')

def test_assignments(self):
self._test('function f(){var x = 20; x = 30 + 1; return x;}', 31)
self._test('function f(){var x = 20; x += 30 + 1; return x;}', 51)
self._test('function f(){var x = 20; x -= 30 + 1; return x;}', -11)
@unittest.skip('Not yet fully implemented')
def test_comments(self):
self._test('''
function f() {
var x = /* 1 + */ 2;
var y = /* 30
* 40 */ 50;
return x + y;
}
''', 52)

self._test('''
function f() {
var x = "/*";
var y = 1 /* comment */ + 2;
return y;
}
''', 3)

def test_precedence(self):
self._test('''
function f() {
var a = [10, 20, 30, 40, 50];
var b = 6;
a[0]=a[b%a.length];
return a;
}
''', [20, 20, 30, 40, 50])
def test_builtins(self):
self._test('function f() { return NaN }', NaN)
def test_Date(self):
self._test('function f() { return new Date("Wednesday 31 December 1969 18:01:26 MDT") - 0; }', 86000)
jsi = JSInterpreter('function f(dt) { return new Date(dt) - 0; }')
# date format m/d/y
self._test(jsi, 86000, args=['12/31/1969 18:01:26 MDT'])
# epoch 0
self._test(jsi, 0, args=['1 January 1970 00:00:00 UTC'])
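# Sanity check on the expected value: 1969-12-31 18:01:26 MDT (UTC-6) is
# 1970-01-01 00:01:26 UTC, i.e. 86 s after the epoch, and `new Date(...) - 0`
# coerces the Date to milliseconds: 86000.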
def test_call(self):
jsi = JSInterpreter('''
function x() { return 2; }
function y(a) { return x() + (a?a:0); }
function z() { return y(3); }
''')
self._test(jsi, 5, func='z')
self._test(jsi, 2, func='y')
def test_if(self):
self._test('''
function f() {
let a = 9;
if (0==0) {a++}
return a
}
''', 10)
self._test('''
function f() {
if (0==0) {return 10}
}
''', 10)
self._test('''
function f() {
if (0!=0) {return 1}
else {return 10}
}
''', 10)
def test_elseif(self):
self._test('''
function f() {
if (0!=0) {return 1}
else if (1==0) {return 2}
else {return 10}
}
''', 10)
def test_for_loop(self):
self._test('function f() { a=0; for (i=0; i-10; i++) {a++} return a }', 10)
def test_while_loop(self):
self._test('function f() { a=0; while (a<10) {a++} return a }', 10)
def test_switch(self):
jsi = JSInterpreter('''
function f(x) { switch(x){
case 1:x+=1;
case 2:x+=2;
case 3:x+=3;break;
case 4:x+=4;
default:x=0;
} return x }
''')
self._test(jsi, 7, args=[1])
self._test(jsi, 6, args=[3])
self._test(jsi, 0, args=[5])
def test_switch_default(self):
jsi = JSInterpreter('''
function f(x) { switch(x){
case 2: x+=2;
default: x-=1;
case 5:
case 6: x+=6;
case 0: break;
case 1: x+=1;
} return x }
''')
self._test(jsi, 2, args=[1])
self._test(jsi, 11, args=[5])
self._test(jsi, 14, args=[9])
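# The expected values follow from C-style fall-through: f(5) enters at
# `case 5`, falls into `case 6` (x += 6 -> 11) and stops at `case 0: break`;
# f(9) matches no case, so it enters at `default` (x -= 1 -> 8) and likewise
# falls through x += 6 -> 14.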
def test_try(self):
self._test('function f() { try{return 10} catch(e){return 5} }', 10)
def test_catch(self):
self._test('function f() { try{throw 10} catch(e){return 5} }', 5)
def test_finally(self):
self._test('function f() { try{throw 10} finally {return 42} }', 42)
self._test('function f() { try{throw 10} catch(e){return 5} finally {return 42} }', 42)
def test_nested_try(self):
self._test('''
function f() {try {
try{throw 10} finally {throw 42}
} catch(e){return 5} }
''', 5)
def test_for_loop_continue(self):
self._test('function f() { a=0; for (i=0; i-10; i++) { continue; a++ } return a }', 0)
def test_for_loop_break(self):
self._test('function f() { a=0; for (i=0; i-10; i++) { break; a++ } return a }', 0)
def test_for_loop_try(self):
self._test('''
function f() {
for (i=0; i-10; i++) { try { if (i == 5) throw i} catch {return 10} finally {break} };
return 42 }
''', 42)
def test_literal_list(self):
self._test('function f() { return [1, 2, "asdf", [5, 6, 7]][3] }', [5, 6, 7])
def test_comma(self):
self._test('function f() { a=5; a -= 1, a+=3; return a }', 7)
self._test('function f() { a=5; return (a -= 1, a+=3, a); }', 7)
self._test('function f() { return (l=[0,1,2,3], function(a, b){return a+b})((l[1], l[2]), l[3]) }', 5)
def test_void(self):
self._test('function f() { return void 42; }', None)
def test_return_function(self):
jsi = JSInterpreter('''
function x() { return [1, function(){return 1}][1] }
''')
self.assertEqual(jsi.call_function('x')([]), 1)
def test_null(self):
self._test('function f() { return null; }', None)
self._test('function f() { return [null > 0, null < 0, null == 0, null === 0]; }',
[False, False, False, False])
self._test('function f() { return [null >= 0, null <= 0]; }', [True, True])
def test_undefined(self):
self._test('function f() { return undefined === undefined; }', True)
self._test('function f() { return undefined; }', JS_Undefined)
self._test('function f() {return undefined ?? 42; }', 42)
self._test('function f() { let v; return v; }', JS_Undefined)
self._test('function f() { let v; return v**0; }', 1)
self._test('function f() { let v; return [v>42, v<=42, v&&42, 42&&v]; }',
[False, False, JS_Undefined, JS_Undefined])
self._test('''
function f() { return [
undefined === undefined,
undefined == undefined,
undefined == null
]; }
''', [True] * 3)
self._test('''
function f() { return [
undefined < undefined,
undefined > undefined,
undefined === 0,
undefined == 0,
undefined < 0,
undefined > 0,
undefined >= 0,
undefined <= 0,
undefined > null,
undefined < null,
undefined === null
]; }
''', [False] * 11)
jsi = JSInterpreter('''
function x() { let v; return [42+v, v+42, v**42, 42**v, 0**v]; }
''')
for y in jsi.call_function('x'):
self.assertTrue(math.isnan(y))
def test_object(self):
self._test('function f() { return {}; }', {})
self._test('function f() { let a = {m1: 42, m2: 0 }; return [a["m1"], a.m2]; }', [42, 0])
self._test('function f() { let a; return a?.qq; }', JS_Undefined)
self._test('function f() { let a = {m1: 42, m2: 0 }; return a?.qq; }', JS_Undefined)
def test_regex(self):
self._test('function f() { let a=/,,[/,913,/](,)}/; }', None)
jsi = JSInterpreter('''
function x() { let a=/,,[/,913,/](,)}/; "".replace(a, ""); return a; }
''')
attrs = set(('findall', 'finditer', 'match', 'scanner', 'search',
'split', 'sub', 'subn'))
if sys.version_info >= (2, 7):
# documented for 2.6 but may not be found
attrs.update(('flags', 'groupindex', 'groups', 'pattern'))
self.assertSetEqual(set(dir(jsi.call_function('x'))) & attrs, attrs)
jsi = JSInterpreter('''
function x() { let a=/,,[/,913,/](,)}/i; return a; }
''')
self.assertEqual(jsi.call_function('x').flags & ~re.U, re.I)
jsi = JSInterpreter(r'function f() { let a=/,][}",],()}(\[)/; return a; }')
self.assertEqual(jsi.call_function('f').pattern, r',][}",],()}(\[)')
jsi = JSInterpreter(r'function f() { let a=[/[)\\]/]; return a[0]; }')
self.assertEqual(jsi.call_function('f').pattern, r'[)\\]')
def test_replace(self):
self._test('function f() { let a="data-name".replace("data-", ""); return a }',
'name')
self._test('function f() { let a="data-name".replace(new RegExp("^.+-"), ""); return a; }',
'name')
self._test('function f() { let a="data-name".replace(/^.+-/, ""); return a; }',
'name')
self._test('function f() { let a="data-name".replace(/a/g, "o"); return a; }',
'doto-nome')
self._test('function f() { let a="data-name".replaceAll("a", "o"); return a; }',
'doto-nome')
def test_char_code_at(self):
jsi = JSInterpreter('function f(i){return "test".charCodeAt(i)}')
self._test(jsi, 116, args=[0])
self._test(jsi, 101, args=[1])
self._test(jsi, 115, args=[2])
self._test(jsi, 116, args=[3])
self._test(jsi, None, args=[4])
self._test(jsi, 116, args=['not_a_number'])
def test_bitwise_operators_overflow(self):
self._test('function f(){return -524999584 << 5}', 379882496)
self._test('function f(){return 1236566549 << 5}', 915423904)
def test_bitwise_operators_typecast(self):
# madness
self._test('function f(){return null << 5}', 0)
self._test('function f(){return undefined >> 5}', 0)
self._test('function f(){return 42 << NaN}', 42)
self._test('function f(){return 42 << Infinity}', 42)
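# Per the ToInt32/ToUint32 coercion rules, null, undefined, NaN and Infinity
# all coerce to 0 in bitwise contexts, and shift counts are taken modulo 32,
# so `42 << NaN` behaves like `42 << 0`.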
def test_negative(self):
self._test('function f(){return 2 * -2.0 ;}', -4)
self._test('function f(){return 2 - - -2 ;}', 0)
self._test('function f(){return 2 - - - -2 ;}', 4)
self._test('function f(){return 2 - + + - -2;}', 0)
self._test('function f(){return 2 + - + - -2;}', 0)
def test_32066(self):
self._test(
"function f(){return Math.pow(3, 5) + new Date('1970-01-01T08:01:42.000+08:00') / 1000 * -239 - -24205;}",
70)
@unittest.skip('Not yet working')
def test_packed(self):
self._test(
'''function f(p,a,c,k,e,d){while(c--)if(k[c])p=p.replace(new RegExp('\\b'+c.toString(a)+'\\b','g'),k[c]);return p}''',
'''h 7=g("1j");7.7h({7g:[{33:"w://7f-7e-7d-7c.v.7b/7a/79/78/77/76.74?t=73&s=2s&e=72&f=2t&71=70.0.0.1&6z=6y&6x=6w"}],6v:"w://32.v.u/6u.31",16:"r%",15:"r%",6t:"6s",6r:"",6q:"l",6p:"l",6o:"6n",6m:\'6l\',6k:"6j",9:[{33:"/2u?b=6i&n=50&6h=w://32.v.u/6g.31",6f:"6e"}],1y:{6d:1,6c:\'#6b\',6a:\'#69\',68:"67",66:30,65:r,},"64":{63:"%62 2m%m%61%5z%5y%5x.u%5w%5v%5u.2y%22 2k%m%1o%22 5t%m%1o%22 5s%m%1o%22 2j%m%5r%22 16%m%5q%22 15%m%5p%22 5o%2z%5n%5m%2z",5l:"w://v.u/d/1k/5k.2y",5j:[]},\'5i\':{"5h":"5g"},5f:"5e",5d:"w://v.u",5c:{},5b:l,1x:[0.25,0.50,0.75,1,1.25,1.5,2]});h 1m,1n,5a;h 59=0,58=0;h 7=g("1j");h 2x=0,57=0,56=0;$.55({54:{\'53-52\':\'2i-51\'}});7.j(\'4z\',6(x){c(5>0&&x.1l>=5&&1n!=1){1n=1;$(\'q.4y\').4x(\'4w\')}});7.j(\'13\',6(x){2x=x.1l});7.j(\'2g\',6(x){2w(x)});7.j(\'4v\',6(){$(\'q.2v\').4u()});6 2w(x){$(\'q.2v\').4t();c(1m)19;1m=1;17=0;c(4s.4r===l){17=1}$.4q(\'/2u?b=4p&2l=1k&4o=2t-4n-4m-2s-4l&4k=&4j=&4i=&17=\'+17,6(2r){$(\'#4h\').4g(2r)});$(\'.3-8-4f-4e:4d("4c")\').2h(6(e){2q();g().4b(0);g().4a(l)});6 2q(){h $14=$("<q />").2p({1l:"49",16:"r%",15:"r%",48:0,2n:0,2o:47,46:"45(10%, 10%, 10%, 0.4)","44-43":"42"});$("<41 />").2p({16:"60%",15:"60%",2o:40,"3z-2n":"3y"}).3x({\'2m\':\'/?b=3w&2l=1k\',\'2k\':\'0\',\'2j\':\'2i\'}).2f($14);$14.2h(6(){$(3v).3u();g().2g()});$14.2f($(\'#1j\'))}g().13(0);}6 3t(){h 9=7.1b(2e);2d.2c(9);c(9.n>1){1r(i=0;i<9.n;i++){c(9[i].1a==2e){2d.2c(\'!!=\'+i);7.1p(i)}}}}7.j(\'3s\',6(){g().1h("/2a/3r.29","3q 10 28",6(){g().13(g().27()+10)},"2b");$("q[26=2b]").23().21(\'.3-20-1z\');g().1h("/2a/3p.29","3o 10 28",6(){h 12=g().27()-10;c(12<0)12=0;g().13(12)},"24");$("q[26=24]").23().21(\'.3-20-1z\');});6 1i(){}7.j(\'3n\',6(){1i()});7.j(\'3m\',6(){1i()});7.j("k",6(y){h 9=7.1b();c(9.n<2)19;$(\'.3-8-3l-3k\').3j(6(){$(\'#3-8-a-k\').1e(\'3-8-a-z\');$(\'.3-a-k\').p(\'o-1f\',\'11\')});7.1h("/3i/3h.3g","3f 3e",6(){$(\'.3-1w\').3d(\'3-8-1v\');$(\'.3-8-1y, .3-8-1x\').p(\'o-1g\',\'11\');c($(\'.3-1w\').3c(\'3-8-1v\')){$(\'.3-a-k\').p(\'o-1g\',\'l\');$(\'.3-a-k\').p(\'o-1f\',\'l\');$(\'.3-8-a\').1e(\'3-8-a-z\');$(\'.3-8-a:1u\').3b(\'3-8-a-z\')}3a{$(\'.3-a-k\').p(\'o-1g\',\'11\');$(\'.3-a-k\').p(\'o-1f\',\'11\');$(\'.3-8-a:1u\').1e(\'3-8-a-z\')}},"39");7.j("38",6(y){1d.37(\'1c\',y.9[y.36].1a)});c(1d.1t(\'1c\')){35("1s(1d.1t(\'1c\'));",34)}});h 18;6 1s(1q){h 
9=7.1b();c(9.n>1){1r(i=0;i<9.n;i++){c(9[i].1a==1q){c(i==18){19}18=i;7.1p(i)}}}}',36,270,'|||jw|||function|player|settings|tracks|submenu||if||||jwplayer|var||on|audioTracks|true|3D|length|aria|attr|div|100|||sx|filemoon|https||event|active||false|tt|seek|dd|height|width|adb|current_audio|return|name|getAudioTracks|default_audio|localStorage|removeClass|expanded|checked|addButton|callMeMaybe|vplayer|0fxcyc2ajhp1|position|vvplay|vvad|220|setCurrentAudioTrack|audio_name|for|audio_set|getItem|last|open|controls|playbackRates|captions|rewind|icon|insertAfter||detach|ff00||button|getPosition|sec|png|player8|ff11|log|console|track_name|appendTo|play|click|no|scrolling|frameborder|file_code|src|top|zIndex|css|showCCform|data|1662367683|383371|dl|video_ad|doPlay|prevt|mp4|3E||jpg|thumbs|file|300|setTimeout|currentTrack|setItem|audioTrackChanged|dualSound|else|addClass|hasClass|toggleClass|Track|Audio|svg|dualy|images|mousedown|buttons|topbar|playAttemptFailed|beforePlay|Rewind|fr|Forward|ff|ready|set_audio_track|remove|this|upload_srt|prop|50px|margin|1000001|iframe|center|align|text|rgba|background|1000000|left|absolute|pause|setCurrentCaptions|Upload|contains|item|content|html|fviews|referer|prem|embed|3e57249ef633e0d03bf76ceb8d8a4b65|216|83|hash|view|get|TokenZir|window|hide|show|complete|slow|fadeIn|video_ad_fadein|time||cache|Cache|Content|headers|ajaxSetup|v2done|tott|vastdone2|vastdone1|vvbefore|playbackRateControls|cast|aboutlink|FileMoon|abouttext|UHD|1870|qualityLabels|sites|GNOME_POWER|link|2Fiframe|3C|allowfullscreen|22360|22640|22no|marginheight|marginwidth|2FGNOME_POWER|2F0fxcyc2ajhp1|2Fe|2Ffilemoon|2F|3A||22https|3Ciframe|code|sharing|fontOpacity|backgroundOpacity|Tahoma|fontFamily|303030|backgroundColor|FFFFFF|color|userFontScale|thumbnails|kind|0fxcyc2ajhp10000|url|get_slides|start|startparam|none|preload|html5|primary|hlshtml|androidhls|duration|uniform|stretching|0fxcyc2ajhp1_xt|image|2048|sp|6871|asn|127|srv|43200|_g3XlBcu2lmD9oDexD2NLWSmah2Nu3XcDrl93m9PwXY|m3u8||master|0fxcyc2ajhp1_x|00076|01|hls2|to|s01|delivery|storage|moon|sources|setup'''.split('|'))
def test_join(self):
test_input = list('test')
tests = [
'function f(a, b){return a.join(b)}',
'function f(a, b){return Array.prototype.join.call(a, b)}',
'function f(a, b){return Array.prototype.join.apply(a, [b])}',
]
for test in tests:
jsi = JSInterpreter(test)
self._test(jsi, 'test', args=[test_input, ''])
self._test(jsi, 't-e-s-t', args=[test_input, '-'])
self._test(jsi, '', args=[[], '-'])
def test_split(self):
test_result = list('test')
tests = [
'function f(a, b){return a.split(b)}',
'function f(a, b){return String.prototype.split.call(a, b)}',
'function f(a, b){return String.prototype.split.apply(a, [b])}',
]
for test in tests:
jsi = JSInterpreter(test)
self._test(jsi, test_result, args=['test', ''])
self._test(jsi, test_result, args=['t-e-s-t', '-'])
self._test(jsi, [''], args=['', '-'])
self._test(jsi, [], args=['', ''])
def test_slice(self):
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice()}', [0, 1, 2, 3, 4, 5, 6, 7, 8])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(0)}', [0, 1, 2, 3, 4, 5, 6, 7, 8])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(5)}', [5, 6, 7, 8])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(99)}', [])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(-2)}', [7, 8])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(-99)}', [0, 1, 2, 3, 4, 5, 6, 7, 8])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(0, 0)}', [])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(1, 0)}', [])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(0, 1)}', [0])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(3, 6)}', [3, 4, 5])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(1, -1)}', [1, 2, 3, 4, 5, 6, 7])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(-1, 1)}', [])
self._test('function f(){return [0, 1, 2, 3, 4, 5, 6, 7, 8].slice(-3, -1)}', [6, 7])
self._test('function f(){return "012345678".slice()}', '012345678')
self._test('function f(){return "012345678".slice(0)}', '012345678')
self._test('function f(){return "012345678".slice(5)}', '5678')
self._test('function f(){return "012345678".slice(99)}', '')
self._test('function f(){return "012345678".slice(-2)}', '78')
self._test('function f(){return "012345678".slice(-99)}', '012345678')
self._test('function f(){return "012345678".slice(0, 0)}', '')
self._test('function f(){return "012345678".slice(1, 0)}', '')
self._test('function f(){return "012345678".slice(0, 1)}', '0')
self._test('function f(){return "012345678".slice(3, 6)}', '345')
self._test('function f(){return "012345678".slice(1, -1)}', '1234567')
self._test('function f(){return "012345678".slice(-1, 1)}', '')
self._test('function f(){return "012345678".slice(-3, -1)}', '67')
if __name__ == '__main__':


@@ -38,6 +38,9 @@ class BaseTestSubtitles(unittest.TestCase):
self.DL = FakeYDL()
self.ie = self.IE()
self.DL.add_info_extractor(self.ie)
if not self.IE.working():
print('Skipping: %s marked as not _WORKING' % self.IE.ie_key())
self.skipTest('IE marked as not _WORKING')

def getInfoDict(self):
info_dict = self.DL.extract_info(self.url, download=False)

@@ -56,6 +59,21 @@ class BaseTestSubtitles(unittest.TestCase):

class TestYoutubeSubtitles(BaseTestSubtitles):
# Available subtitles for QRS8MkLhQmM:
# Language formats
# ru       vtt, ttml, srv3, srv2, srv1, json3
# fr       vtt, ttml, srv3, srv2, srv1, json3
# en       vtt, ttml, srv3, srv2, srv1, json3
# nl       vtt, ttml, srv3, srv2, srv1, json3
# de       vtt, ttml, srv3, srv2, srv1, json3
# ko       vtt, ttml, srv3, srv2, srv1, json3
# it       vtt, ttml, srv3, srv2, srv1, json3
# zh-Hant  vtt, ttml, srv3, srv2, srv1, json3
# hi       vtt, ttml, srv3, srv2, srv1, json3
# pt-BR    vtt, ttml, srv3, srv2, srv1, json3
# es-MX    vtt, ttml, srv3, srv2, srv1, json3
# ja       vtt, ttml, srv3, srv2, srv1, json3
# pl       vtt, ttml, srv3, srv2, srv1, json3
url = 'QRS8MkLhQmM'
IE = YoutubeIE

@@ -64,41 +82,60 @@ class TestYoutubeSubtitles(BaseTestSubtitles):
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(len(subtitles.keys()), 13)
self.assertEqual(md5(subtitles['en']), 'ae1bd34126571a77aabd4d276b28044d')
self.assertEqual(md5(subtitles['it']), '0e0b667ba68411d88fd1c5f4f4eab2f9')
for lang in ['fr', 'de']:
self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)

def _test_subtitles_format(self, fmt, md5_hash, lang='en'):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = fmt
subtitles = self.getSubtitles()
self.assertEqual(md5(subtitles[lang]), md5_hash)

def test_youtube_subtitles_ttml_format(self):
self._test_subtitles_format('ttml', 'c97ddf1217390906fa9fbd34901f3da2')

def test_youtube_subtitles_vtt_format(self):
self._test_subtitles_format('vtt', 'ae1bd34126571a77aabd4d276b28044d')

def test_youtube_subtitles_json3_format(self):
self._test_subtitles_format('json3', '688dd1ce0981683867e7fe6fde2a224b')

def _test_automatic_captions(self, url, lang):
self.url = url
self.DL.params['writeautomaticsub'] = True
self.DL.params['subtitleslangs'] = [lang]
subtitles = self.getSubtitles()
self.assertTrue(subtitles[lang] is not None)

def test_youtube_automatic_captions(self):
# Available automatic captions for 8YoUxe5ncPo:
# Language formats (all in vtt, ttml, srv3, srv2, srv1, json3)
# gu, zh-Hans, zh-Hant, gd, ga, gl, lb, la, lo, tt, tr,
# lv, lt, tk, th, tg, te, fil, haw, yi, ceb, yo, de, da,
# el, eo, en, eu, et, es, ru, rw, ro, bn, be, bg, uk, jv,
# bs, ja, or, xh, co, ca, cy, cs, ps, pt, pa, vi, pl, hy,
# hr, ht, hu, hmn, hi, ha, mg, uz, ml, mn, mi, mk, ur,
# mt, ms, mr, ug, ta, my, af, sw, is, am,
# *it*, iw, sv, ar,
# su, zu, az, id, ig, nl, no, ne, ny, fr, ku, fy, fa, fi,
# ka, kk, sr, sq, ko, kn, km, st, sk, si, so, sn, sm, sl,
# ky, sd
# ...
self._test_automatic_captions('8YoUxe5ncPo', 'it')

@unittest.skip('ASR subs all in all supported langs now')
def test_youtube_translated_subtitles(self):
# This video has a subtitles track, which can be translated (#4555)
self._test_automatic_captions('Ky9eprVWzlI', 'it')

def test_youtube_nosubtitles(self):
self.DL.expect_warning('video doesn\'t have subtitles')
# Available automatic captions for 8YoUxe5ncPo:
# ...
# 8YoUxe5ncPo has no subtitles
self.url = '8YoUxe5ncPo'
self.DL.params['writesubtitles'] = True
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()

@@ -128,6 +165,7 @@ class TestDailymotionSubtitles(BaseTestSubtitles):
self.assertFalse(subtitles)

@unittest.skip('IE broken')
class TestTedSubtitles(BaseTestSubtitles):
url = 'http://www.ted.com/talks/dan_dennett_on_our_consciousness.html'
IE = TEDIE

@@ -152,18 +190,19 @@ class TestVimeoSubtitles(BaseTestSubtitles):
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), set(['de', 'en', 'es', 'fr']))
self.assertEqual(md5(subtitles['en']), '386cbc9320b94e25cb364b97935e5dd1')
self.assertEqual(md5(subtitles['fr']), 'c9b69eef35bc6641c0d4da8a04f9dfac')

def test_nosubtitles(self):
self.DL.expect_warning('video doesn\'t have subtitles')
self.url = 'http://vimeo.com/68093876'
self.DL.params['writesubtitles'] = True
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertFalse(subtitles)

@unittest.skip('IE broken')
class TestWallaSubtitles(BaseTestSubtitles):
url = 'http://vod.walla.co.il/movie/2705958/the-yes-men'
IE = WallaIE

@@ -185,6 +224,7 @@ class TestWallaSubtitles(BaseTestSubtitles):
self.assertFalse(subtitles)

@unittest.skip('IE broken')
class TestCeskaTelevizeSubtitles(BaseTestSubtitles):
url = 'http://www.ceskatelevize.cz/ivysilani/10600540290-u6-uzasny-svet-techniky'
IE = CeskaTelevizeIE

@@ -206,6 +246,7 @@ class TestCeskaTelevizeSubtitles(BaseTestSubtitles):
self.assertFalse(subtitles)

@unittest.skip('IE broken')
class TestLyndaSubtitles(BaseTestSubtitles):
url = 'http://www.lynda.com/Bootstrap-tutorials/Using-exercise-files/110885/114408-4.html'
IE = LyndaIE

@@ -218,6 +259,7 @@ class TestLyndaSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '09bbe67222259bed60deaa26997d73a7')

@unittest.skip('IE broken')
class TestNPOSubtitles(BaseTestSubtitles):
url = 'http://www.npo.nl/nos-journaal/28-08-2014/POW_00722860'
IE = NPOIE

@@ -230,6 +272,7 @@ class TestNPOSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['nl']), 'fc6435027572b63fb4ab143abd5ad3f4')

@unittest.skip('IE broken')
class TestMTVSubtitles(BaseTestSubtitles):
url = 'http://www.cc.com/video-clips/p63lk0/adam-devine-s-house-party-chasing-white-swans'
IE = ComedyCentralIE

@@ -252,9 +295,10 @@ class TestNRKSubtitles(BaseTestSubtitles):
def test_allsubtitles(self):
self.DL.params['writesubtitles'] = True
self.DL.params['allsubtitles'] = True
self.DL.params['format'] = 'best/bestvideo'
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), set(['nb-ttv']))
self.assertEqual(md5(subtitles['nb-ttv']), '67e06ff02d0deaf975e68f6cb8f6a149')

class TestRaiPlaySubtitles(BaseTestSubtitles):

@@ -277,6 +321,7 @@ class TestRaiPlaySubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['it']), '4b3264186fbb103508abe5311cfcb9cd')

@unittest.skip('IE broken - DRM only')
class TestVikiSubtitles(BaseTestSubtitles):
url = 'http://www.viki.com/videos/1060846v-punch-episode-18'
IE = VikiIE

@@ -303,6 +348,7 @@ class TestThePlatformSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '97e7670cbae3c4d26ae8bcc7fdd78d4b')

@unittest.skip('IE broken')
class TestThePlatformFeedSubtitles(BaseTestSubtitles):
url = 'http://feed.theplatform.com/f/7wvmTC/msnbc_video-p-test?form=json&pretty=true&range=-40&byGuid=n_hardball_5biden_140207'
IE = ThePlatformFeedIE

@@ -338,7 +384,7 @@ class TestDemocracynowSubtitles(BaseTestSubtitles):
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), set(['en']))
self.assertEqual(md5(subtitles['en']), 'a3cc4c0b5eadd74d9974f1c1f5101045')

def test_subtitles_in_page(self):
self.url = 'http://www.democracynow.org/2015/7/3/this_flag_comes_down_today_bree'

@@ -346,7 +392,7 @@ class TestDemocracynowSubtitles(BaseTestSubtitles):
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), set(['en']))
self.assertEqual(md5(subtitles['en']), 'a3cc4c0b5eadd74d9974f1c1f5101045')

if __name__ == '__main__':


@@ -5,16 +5,18 @@ from __future__ import unicode_literals
import os
import sys
import unittest

dirn = os.path.dirname

sys.path.insert(0, dirn(dirn(os.path.abspath(__file__))))

import errno
import json
import re
import subprocess

from youtube_dl.swfinterp import SWFInterpreter
from youtube_dl.compat import compat_open as open


TEST_DIR = os.path.join(

@@ -43,7 +45,7 @@ def _make_testfunc(testfile):
'-static-link-runtime-shared-libraries', as_file])
except OSError as ose:
if ose.errno == errno.ENOENT:
self.skipTest('mxmlc not found!')
return
raise

@@ -51,7 +53,7 @@ def _make_testfunc(testfile):
swf_content = swf_f.read()
swfi = SWFInterpreter(swf_content)

with open(as_file, 'r', encoding='utf-8') as as_f:
as_content = as_f.read()

def _find_spec(key):

test/test_traversal.py Normal file

@@ -0,0 +1,509 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import re
from youtube_dl.traversal import (
dict_get,
get_first,
T,
traverse_obj,
)
from youtube_dl.compat import (
compat_etree_fromstring,
compat_http_cookies,
compat_str,
)
from youtube_dl.utils import (
int_or_none,
str_or_none,
)
_TEST_DATA = {
100: 100,
1.2: 1.2,
'str': 'str',
'None': None,
'...': Ellipsis,
'urls': [
{'index': 0, 'url': 'https://www.example.com/0'},
{'index': 1, 'url': 'https://www.example.com/1'},
],
'data': (
{'index': 2},
{'index': 3},
),
'dict': {},
}
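# A quick taste of the API under test, using _TEST_DATA above:
#   traverse_obj(_TEST_DATA, ('urls', 0, 'url'))        -> 'https://www.example.com/0'
#   traverse_obj(_TEST_DATA, ('urls', Ellipsis, 'url')) -> both example URLs
#   traverse_obj(_TEST_DATA, 'fail', default=42)        -> 42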
if sys.version_info < (3, 0):
class _TestCase(unittest.TestCase):
def assertCountEqual(self, *args, **kwargs):
return self.assertItemsEqual(*args, **kwargs)
else:
_TestCase = unittest.TestCase
class TestTraversal(_TestCase):
def assertMaybeCountEqual(self, *args, **kwargs):
if sys.version_info < (3, 7):
# random dict order
return self.assertCountEqual(*args, **kwargs)
else:
return self.assertEqual(*args, **kwargs)
def test_traverse_obj(self):
# instant compat
str = compat_str
# define a pukka Iterable
def iter_range(stop):
for from_ in range(stop):
yield from_
# Test base functionality
self.assertEqual(traverse_obj(_TEST_DATA, ('str',)), 'str',
msg='allow tuple path')
self.assertEqual(traverse_obj(_TEST_DATA, ['str']), 'str',
msg='allow list path')
self.assertEqual(traverse_obj(_TEST_DATA, (value for value in ("str",))), 'str',
msg='allow iterable path')
self.assertEqual(traverse_obj(_TEST_DATA, 'str'), 'str',
msg='single items should be treated as a path')
self.assertEqual(traverse_obj(_TEST_DATA, None), _TEST_DATA)
self.assertEqual(traverse_obj(_TEST_DATA, 100), 100)
self.assertEqual(traverse_obj(_TEST_DATA, 1.2), 1.2)
# Test Ellipsis behavior
self.assertCountEqual(traverse_obj(_TEST_DATA, Ellipsis),
(item for item in _TEST_DATA.values() if item not in (None, {})),
msg='`...` should give all non-discarded values')
self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', 0, Ellipsis)), _TEST_DATA['urls'][0].values(),
msg='`...` selection for dicts should select all values')
self.assertEqual(traverse_obj(_TEST_DATA, (Ellipsis, Ellipsis, 'url')),
['https://www.example.com/0', 'https://www.example.com/1'],
msg='nested `...` queries should work')
self.assertCountEqual(traverse_obj(_TEST_DATA, (Ellipsis, Ellipsis, 'index')), iter_range(4),
msg='`...` query result should be flattened')
self.assertEqual(traverse_obj(iter(range(4)), Ellipsis), list(range(4)),
msg='`...` should accept iterables')
# Test function as key
self.assertEqual(traverse_obj(_TEST_DATA, lambda x, y: x == 'urls' and isinstance(y, list)),
[_TEST_DATA['urls']],
msg='function as query key should perform a filter based on (key, value)')
self.assertCountEqual(traverse_obj(_TEST_DATA, lambda _, x: isinstance(x[0], str)), set(('str',)),
msg='exceptions in the query function should be caught')
self.assertEqual(traverse_obj(iter(range(4)), lambda _, x: x % 2 == 0), [0, 2],
msg='function key should accept iterables')
if __debug__:
with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
traverse_obj(_TEST_DATA, lambda a: Ellipsis)
with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
traverse_obj(_TEST_DATA, lambda a, b, c: Ellipsis)
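# Below, T(f) wraps f in a one-element set; traverse_obj treats a set in the
# path as "apply this transformation / filter by this type" rather than as a
# key lookup.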
# Test set as key (transformation/type, like `expected_type`)
self.assertEqual(traverse_obj(_TEST_DATA, (Ellipsis, T(str.upper), )), ['STR'],
msg='Function in set should be a transformation')
self.assertEqual(traverse_obj(_TEST_DATA, ('fail', T(lambda _: 'const'))), 'const',
msg='Function in set should always be called')
self.assertEqual(traverse_obj(_TEST_DATA, (Ellipsis, T(str))), ['str'],
msg='Type in set should be a type filter')
self.assertMaybeCountEqual(traverse_obj(_TEST_DATA, (Ellipsis, T(str, int))), [100, 'str'],
msg='Multiple types in set should be a type filter')
self.assertEqual(traverse_obj(_TEST_DATA, T(dict)), _TEST_DATA,
msg='A single set should be wrapped into a path')
self.assertEqual(traverse_obj(_TEST_DATA, (Ellipsis, T(str.upper))), ['STR'],
msg='Transformation function should not raise')
self.assertMaybeCountEqual(traverse_obj(_TEST_DATA, (Ellipsis, T(str_or_none))),
[item for item in map(str_or_none, _TEST_DATA.values()) if item is not None],
msg='Function in set should be a transformation')
if __debug__:
with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
traverse_obj(_TEST_DATA, set())
with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
traverse_obj(_TEST_DATA, set((str.upper, str)))
# Test `slice` as a key
_SLICE_DATA = [0, 1, 2, 3, 4]
self.assertEqual(traverse_obj(_TEST_DATA, ('dict', slice(1))), None,
msg='slice on a dictionary should not throw')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1)), _SLICE_DATA[:1],
msg='slice key should apply slice to sequence')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1, 2)), _SLICE_DATA[1:2],
msg='slice key should apply slice to sequence')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1, 4, 2)), _SLICE_DATA[1:4:2],
msg='slice key should apply slice to sequence')
# Test alternative paths
self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'str'), 'str',
msg='multiple `paths` should be treated as alternative paths')
self.assertEqual(traverse_obj(_TEST_DATA, 'str', 100), 'str',
msg='alternatives should exit early')
self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'fail'), None,
msg='alternatives should return `default` if exhausted')
self.assertEqual(traverse_obj(_TEST_DATA, (Ellipsis, 'fail'), 100), 100,
msg='alternatives should track their own branching return')
self.assertEqual(traverse_obj(_TEST_DATA, ('dict', Ellipsis), ('data', Ellipsis)), list(_TEST_DATA['data']),
msg='alternatives on empty objects should search further')
# Test branch and path nesting
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', (3, 0), 'url')), ['https://www.example.com/0'],
msg='tuple as key should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', [3, 0], 'url')), ['https://www.example.com/0'],
msg='list as key should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', ((1, 'fail'), (0, 'url')))), ['https://www.example.com/0'],
msg='double nesting in path should be treated as paths')
self.assertEqual(traverse_obj(['0', [1, 2]], [(0, 1), 0]), [1],
msg='do not fail early on branching')
self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', ((1, ('fail', 'url')), (0, 'url')))),
['https://www.example.com/0', 'https://www.example.com/1'],
msg='triple nesting in path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', ('fail', (Ellipsis, 'url')))),
['https://www.example.com/0', 'https://www.example.com/1'],
msg='ellipsis as branch path start gets flattened')
# Test dictionary as key
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}), {0: 100, 1: 1.2},
msg='dict key should result in a dict with the same keys')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', 0, 'url')}),
{0: 'https://www.example.com/0'},
msg='dict key should allow paths')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', (3, 0), 'url')}),
{0: ['https://www.example.com/0']},
msg='tuple in dict path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', ((1, 'fail'), (0, 'url')))}),
{0: ['https://www.example.com/0']},
msg='double nesting in dict path should be treated as paths')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', ((1, ('fail', 'url')), (0, 'url')))}),
{0: ['https://www.example.com/1', 'https://www.example.com/0']},
msg='triple nesting in dict path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}), {},
msg='remove `None` values when top level dict key fails')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}, default=Ellipsis), {0: Ellipsis},
msg='use `default` if key fails and `default`')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}), {},
msg='remove empty values when dict key')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}, default=Ellipsis), {0: Ellipsis},
msg='use `default` when dict key and a default')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}), {},
msg='remove empty values when nested dict key fails')
self.assertEqual(traverse_obj(None, {0: 'fail'}), {},
msg='default to dict if pruned')
self.assertEqual(traverse_obj(None, {0: 'fail'}, default=Ellipsis), {0: Ellipsis},
msg='default to dict if pruned and default is given')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}, default=Ellipsis), {0: {0: Ellipsis}},
msg='use nested `default` when nested dict key fails and `default`')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('dict', Ellipsis)}), {},
msg='remove key if branch in dict key not successful')
# Testing default parameter behavior
_DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail'), None,
msg='default value should be `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail', 'fail', default=Ellipsis), Ellipsis,
msg='chained fails should result in default')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'None', 'int'), 0,
msg='should not short circuit on `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail', default=1), 1,
msg='invalid dict key should result in `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'None', default=1), 1,
msg='`None` is a deliberate sentinel and should become `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', 10)), None,
msg='`IndexError` should result in `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (Ellipsis, 'fail'), default=1), 1,
msg='if branched but not successful return `default` if defined, not `[]`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (Ellipsis, 'fail'), default=None), None,
msg='if branched but not successful return `default` even if `default` is `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (Ellipsis, 'fail')), [],
msg='if branched but not successful return `[]`, not `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', Ellipsis)), [],
msg='if branched but object is empty return `[]`, not `default`')
self.assertEqual(traverse_obj(None, Ellipsis), [],
msg='if branched but object is `None` return `[]`, not `default`')
self.assertEqual(traverse_obj({0: None}, (0, Ellipsis)), [],
msg='if branched but state is `None` return `[]`, not `default`')
branching_paths = [
('fail', Ellipsis),
(Ellipsis, 'fail'),
100 * ('fail',) + (Ellipsis,),
(Ellipsis,) + 100 * ('fail',),
]
for branching_path in branching_paths:
self.assertEqual(traverse_obj({}, branching_path), [],
msg='if branched but state is `None`, return `[]` (not `default`)')
self.assertEqual(traverse_obj({}, 'fail', branching_path), [],
msg='if branching in last alternative and previous did not match, return `[]` (not `default`)')
self.assertEqual(traverse_obj({0: 'x'}, 0, branching_path), 'x',
msg='if branching in last alternative and previous did match, return single value')
self.assertEqual(traverse_obj({0: 'x'}, branching_path, 0), 'x',
msg='if branching in first alternative and non-branching path does match, return single value')
self.assertEqual(traverse_obj({}, branching_path, 'fail'), None,
msg='if branching in first alternative and non-branching path does not match, return `default`')
# Testing expected_type behavior
_EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str),
'str', msg='accept matching `expected_type` type')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int),
None, msg='reject non-matching `expected_type` type')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)),
'0', msg='transform type using type function')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=lambda _: 1 / 0),
None, msg='wrap expected_type function in try_call')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, Ellipsis, expected_type=str),
['str'], msg='eliminate items that expected_type fails on')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}, expected_type=int),
{0: 100}, msg='type as expected_type should filter dict values')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2, 2: 'None'}, expected_type=str_or_none),
{0: '100', 1: '1.2'}, msg='function as expected_type should transform dict values')
self.assertEqual(traverse_obj(_TEST_DATA, ({0: 1.2}, 0, set((int_or_none,))), expected_type=int),
1, msg='expected_type should not filter non-final dict values')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 100, 1: 'str'}}, expected_type=int),
{0: {0: 100}}, msg='expected_type should transform deep dict values')
self.assertEqual(traverse_obj(_TEST_DATA, [({0: '...'}, {0: '...'})], expected_type=type(Ellipsis)),
[{0: Ellipsis}, {0: Ellipsis}], msg='expected_type should transform branched dict values')
self.assertEqual(traverse_obj({1: {3: 4}}, [(1, 2), 3], expected_type=int),
[4], msg='expected_type regression for type matching in tuple branching')
self.assertEqual(traverse_obj(_TEST_DATA, ['data', Ellipsis], expected_type=int),
[], msg='expected_type regression for type matching in dict result')
# Test get_all behavior
_GET_ALL_DATA = {'key': [0, 1, 2]}
self.assertEqual(traverse_obj(_GET_ALL_DATA, ('key', Ellipsis), get_all=False), 0,
msg='if not `get_all`, return only first matching value')
self.assertEqual(traverse_obj(_GET_ALL_DATA, Ellipsis, get_all=False), [0, 1, 2],
msg='do not overflatten if not `get_all`')
# Test casesense behavior
_CASESENSE_DATA = {
'KeY': 'value0',
0: {
'KeY': 'value1',
0: {'KeY': 'value2'},
},
# FULLWIDTH LATIN CAPITAL LETTER K
'\uff2bey': 'value3',
}
self.assertEqual(traverse_obj(_CASESENSE_DATA, 'key'), None,
msg='dict keys should be case sensitive unless `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, 'keY',
casesense=False), 'value0',
msg='allow non matching key case if not `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, '\uff4bey', # FULLWIDTH LATIN SMALL LETTER K
casesense=False), 'value3',
msg='allow non matching Unicode key case if not `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, (0, ('keY',)),
casesense=False), ['value1'],
msg='allow non matching key case in branch if not `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, (0, ((0, 'keY'),)),
casesense=False), ['value2'],
msg='allow non matching key case in branch path if not `casesense`')
# Test traverse_string behavior
_TRAVERSE_STRING_DATA = {'str': 'str', 1.2: 1.2}
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0)), None,
msg='do not traverse into string if not `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0),
_traverse_string=True), 's',
msg='traverse into string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, (1.2, 1),
_traverse_string=True), '.',
msg='traverse into converted data if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', Ellipsis),
_traverse_string=True), 'str',
msg='`...` should result in string (same value) if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', slice(0, None, 2)),
_traverse_string=True), 'sr',
msg='`slice` should result in string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda i, v: i or v == 's'),
_traverse_string=True), 'str',
msg='function should result in string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', (0, 2)),
_traverse_string=True), ['s', 'r'],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, Ellipsis), _traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, lambda x, y: True), _traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, slice(1)), _traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
# Test re.Match as input obj
mobj = re.match(r'^0(12)(?P<group>3)(4)?$', '0123')
self.assertEqual(traverse_obj(mobj, Ellipsis), [x for x in mobj.groups() if x is not None],
msg='`...` on a `re.Match` should give its `groups()`')
self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 2)), ['0123', '3'],
msg='function on a `re.Match` should give groupno, value starting at 0')
self.assertEqual(traverse_obj(mobj, 'group'), '3',
msg='str key on a `re.Match` should give group with that name')
self.assertEqual(traverse_obj(mobj, 2), '3',
msg='int key on a `re.Match` should give group with that index')
self.assertEqual(traverse_obj(mobj, 'gRoUp', casesense=False), '3',
msg='str key on a `re.Match` should respect casesense')
self.assertEqual(traverse_obj(mobj, 'fail'), None,
msg='failing str key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, 'gRoUpS', casesense=False), None,
msg='failing str key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, 8), None,
msg='failing int key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 'group')), ['0123', '3'],
msg='function on a `re.Match` should give group name as well')
# Test xml.etree.ElementTree.Element as input obj
etree = compat_etree_fromstring('''<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>''')
self.assertEqual(traverse_obj(etree, ''), etree,
msg='empty str key should return the element itself')
self.assertEqual(traverse_obj(etree, 'country'), list(etree),
msg='str key should return all children with that tag name')
self.assertEqual(traverse_obj(etree, Ellipsis), list(etree),
msg='`...` as key should return all children')
self.assertEqual(traverse_obj(etree, lambda _, x: x[0].text == '4'), [etree[1]],
msg='function as key should get element as value')
self.assertEqual(traverse_obj(etree, lambda i, _: i == 1), [etree[1]],
msg='function as key should get index as key')
self.assertEqual(traverse_obj(etree, 0), etree[0],
msg='int key should return the nth child')
self.assertEqual(traverse_obj(etree, './/neighbor/@name'),
['Austria', 'Switzerland', 'Malaysia', 'Costa Rica', 'Colombia'],
msg='`@<attribute>` at end of path should give that attribute')
self.assertEqual(traverse_obj(etree, '//neighbor/@fail'), [None, None, None, None, None],
msg='`@<nonexistent>` at end of path should give `None`')
self.assertEqual(traverse_obj(etree, ('//neighbor/@', 2)), {'name': 'Malaysia', 'direction': 'N'},
msg='`@` should give the full attribute dict')
self.assertEqual(traverse_obj(etree, '//year/text()'), ['2008', '2011', '2011'],
msg='`text()` at end of path should give the inner text')
self.assertEqual(traverse_obj(etree, '//*[@direction]/@direction'), ['E', 'W', 'N', 'W', 'E'],
msg='full python xpath features should be supported')
self.assertEqual(traverse_obj(etree, (0, '@name')), 'Liechtenstein',
msg='special transformations should act on current element')
self.assertEqual(traverse_obj(etree, ('country', 0, Ellipsis, 'text()', T(int_or_none))), [1, 2008, 141100],
msg='special transformations should act on current element')
def test_traversal_unbranching(self):
self.assertEqual(traverse_obj(_TEST_DATA, [(100, 1.2), all]), [100, 1.2],
msg='`all` should give all results as list')
self.assertEqual(traverse_obj(_TEST_DATA, [(100, 1.2), any]), 100,
msg='`any` should give the first result')
self.assertEqual(traverse_obj(_TEST_DATA, [100, all]), [100],
msg='`all` should give list if non branching')
self.assertEqual(traverse_obj(_TEST_DATA, [100, any]), 100,
msg='`any` should give single item if non branching')
self.assertEqual(traverse_obj(_TEST_DATA, [('dict', 'None', 100), all]), [100],
msg='`all` should filter `None` and empty dict')
self.assertEqual(traverse_obj(_TEST_DATA, [('dict', 'None', 100), any]), 100,
msg='`any` should filter `None` and empty dict')
self.assertEqual(traverse_obj(_TEST_DATA, [{
'all': [('dict', 'None', 100, 1.2), all],
'any': [('dict', 'None', 100, 1.2), any],
}]), {'all': [100, 1.2], 'any': 100},
msg='`all`/`any` should apply to each dict path separately')
self.assertEqual(traverse_obj(_TEST_DATA, [{
'all': [('dict', 'None', 100, 1.2), all],
'any': [('dict', 'None', 100, 1.2), any],
}], get_all=False), {'all': [100, 1.2], 'any': 100},
msg='`all`/`any` should apply to dict regardless of `get_all`')
self.assertIs(traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), all, T(float)]), None,
msg='`all` should reset branching status')
self.assertIs(traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), any, T(float)]), None,
msg='`any` should reset branching status')
self.assertEqual(traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), all, Ellipsis, T(float)]), [1.2],
msg='`all` should allow further branching')
self.assertEqual(traverse_obj(_TEST_DATA, [('dict', 'None', 'urls', 'data'), any, Ellipsis, 'index']), [0, 1],
msg='`any` should allow further branching')
def test_traversal_morsel(self):
values = {
'expires': 'a',
'path': 'b',
'comment': 'c',
'domain': 'd',
'max-age': 'e',
'secure': 'f',
'httponly': 'g',
'version': 'h',
'samesite': 'i',
}
# SameSite added in Py3.8, breaks .update for 3.5-3.7
if sys.version_info < (3, 8):
del values['samesite']
morsel = compat_http_cookies.Morsel()
morsel.set(str('item_key'), 'item_value', 'coded_value')
morsel.update(values)
values['key'] = str('item_key')
values['value'] = 'item_value'
values = dict((str(k), v) for k, v in values.items())
# make test pass even without ordered dict
value_set = set(values.values())
for key, value in values.items():
self.assertEqual(traverse_obj(morsel, key), value,
msg='Morsel should provide access to all values')
self.assertEqual(set(traverse_obj(morsel, Ellipsis)), value_set,
msg='`...` should yield all values')
self.assertEqual(set(traverse_obj(morsel, lambda k, v: True)), value_set,
msg='function key should yield all values')
self.assertIs(traverse_obj(morsel, [(None,), any]), morsel,
msg='Morsel should not be implicitly changed to dict on usage')
def test_get_first(self):
self.assertEqual(get_first([{'a': None}, {'a': 'spam'}], 'a'), 'spam')
def test_dict_get(self):
FALSE_VALUES = {
'none': None,
'false': False,
'zero': 0,
'empty_string': '',
'empty_list': [],
}
d = FALSE_VALUES.copy()
d['a'] = 42
self.assertEqual(dict_get(d, 'a'), 42)
self.assertEqual(dict_get(d, 'b'), None)
self.assertEqual(dict_get(d, 'b', 42), 42)
self.assertEqual(dict_get(d, ('a', )), 42)
self.assertEqual(dict_get(d, ('b', 'a', )), 42)
self.assertEqual(dict_get(d, ('b', 'c', 'a', 'd', )), 42)
self.assertEqual(dict_get(d, ('b', 'c', )), None)
self.assertEqual(dict_get(d, ('b', 'c', ), 42), 42)
for key, false_value in FALSE_VALUES.items():
self.assertEqual(dict_get(d, ('b', 'c', key, )), None)
self.assertEqual(dict_get(d, ('b', 'c', key, ), skip_false_values=False), false_value)
if __name__ == '__main__':
unittest.main()

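Aside (not part of the diff): the traversal tests above pin down `traverse_obj`'s behaviour — branching with `Ellipsis`, alternative paths tried in order, `T(...)` transforms, and `default`/`expected_type` handling. A minimal sketch of that API, assuming `traverse_obj`, `T` and `int_or_none` are exposed by `youtube_dl.utils` as these tests use them:

from youtube_dl.utils import T, int_or_none, traverse_obj

data = {'formats': [{'height': '1080'}, {'height': '720'}]}
# `Ellipsis` branches over the list; T(int_or_none) transforms each hit
print(traverse_obj(data, ('formats', Ellipsis, 'height', T(int_or_none))))  # [1080, 720]
# alternative paths: the first one that matches wins
print(traverse_obj(data, ('missing', 0), ('formats', 0, 'height')))  # '1080'
# a failed non-branching path falls back to `default`
print(traverse_obj(data, ('formats', 5), default='n/a'))  # 'n/a'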
test/test_unicode_literals.py

@@ -2,19 +2,21 @@ from __future__ import unicode_literals
 # Allow direct execution
 import os
+import re
 import sys
 import unittest
-sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
-import io
-import re
-rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+dirn = os.path.dirname
+rootDir = dirn(dirn(os.path.abspath(__file__)))
+sys.path.insert(0, rootDir)
 IGNORED_FILES = [
     'setup.py',  # http://bugs.python.org/issue13943
     'conf.py',
     'buildserver.py',
+    'get-pip.py',
 ]
 IGNORED_DIRS = [
@@ -23,6 +25,7 @@ IGNORED_DIRS = [
 ]
 from test.helper import assertRegexpMatches
+from youtube_dl.compat import compat_open as open
 class TestUnicodeLiterals(unittest.TestCase):
@@ -40,7 +43,7 @@ class TestUnicodeLiterals(unittest.TestCase):
                 continue
             fn = os.path.join(dirpath, basename)
-            with io.open(fn, encoding='utf-8') as inf:
+            with open(fn, encoding='utf-8') as inf:
                 code = inf.read()
             if "'" not in code and '"' not in code:

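For context: `compat_open` is youtube-dl's Python 2/3 shim for the builtin `open` (as I read it, `io.open`-based on Python 2, the builtin on Python 3), so call sites can pass `encoding=` uniformly. The pattern the diff adopts, sketched:

from youtube_dl.compat import compat_open as open

# same call shape on both Python 2 and 3: text mode with explicit encoding
with open('test/test_utils.py', encoding='utf-8') as inf:
    code = inf.read()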
test/test_utils.py

@@ -12,13 +12,16 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 # Various small unit tests
 import io
+import itertools
 import json
+import types
 import xml.etree.ElementTree
 from youtube_dl.utils import (
+    _UnsafeExtensionError,
     age_restricted,
     args_to_str,
-    encode_base_n,
+    base_url,
     caesar,
     clean_html,
     clean_podcast_url,
@@ -26,11 +29,12 @@ from youtube_dl.utils import (
     DateRange,
     detect_exe_version,
     determine_ext,
-    dict_get,
+    encode_base_n,
     encode_compat_str,
     encodeFilename,
     escape_rfc3986,
     escape_url,
+    expand_path,
     extract_attributes,
     ExtractorError,
     find_xpath_attr,
@@ -44,8 +48,11 @@ from youtube_dl.utils import (
     int_or_none,
     intlist_to_bytes,
     is_html,
+    join_nonempty,
     js_to_json,
+    LazyList,
     limit_length,
+    lowercase_escape,
     merge_dicts,
     mimetype2ext,
     month_by_name,
@@ -54,24 +61,26 @@ from youtube_dl.utils import (
     OnDemandPagedList,
     orderedSet,
     parse_age_limit,
+    parse_bitrate,
     parse_duration,
     parse_filesize,
-    parse_codecs,
     parse_count,
     parse_iso8601,
     parse_resolution,
-    parse_bitrate,
+    parse_qs,
     pkcs1pad,
-    read_batch_urls,
-    sanitize_filename,
-    sanitize_path,
-    sanitize_url,
-    expand_path,
     prepend_extension,
-    replace_extension,
+    read_batch_urls,
     remove_start,
     remove_end,
     remove_quotes,
+    replace_extension,
     rot47,
+    sanitize_filename,
+    sanitize_path,
+    sanitize_url,
+    sanitized_Request,
     shell_quote,
     smuggle_url,
     str_to_int,
@@ -79,19 +88,19 @@ from youtube_dl.utils import (
     strip_or_none,
     subtitles_filename,
     timeconvert,
+    try_call,
     unescapeHTML,
     unified_strdate,
     unified_timestamp,
     unsmuggle_url,
     uppercase_escape,
-    lowercase_escape,
     url_basename,
     url_or_none,
-    base_url,
     urljoin,
     urlencode_postdata,
     urshift,
     update_url_query,
+    variadic,
     version_tuple,
     xpath_with_ns,
     xpath_element,
@@ -104,7 +113,7 @@ from youtube_dl.utils import (
     cli_option,
     cli_valueless_option,
     cli_bool_option,
-    parse_codecs,
+    YoutubeDLHandler,
 )
 from youtube_dl.compat import (
     compat_chr,
@@ -112,12 +121,13 @@ from youtube_dl.compat import (
     compat_getenv,
     compat_os_name,
     compat_setenv,
+    compat_str,
     compat_urlparse,
-    compat_parse_qs,
 )
 class TestUtil(unittest.TestCase):
     def test_timeconvert(self):
         self.assertTrue(timeconvert('') is None)
         self.assertTrue(timeconvert('bougrg') is None)
@@ -236,6 +246,19 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(sanitize_url('httpss://foo.bar'), 'https://foo.bar')
         self.assertEqual(sanitize_url('rmtps://foo.bar'), 'rtmps://foo.bar')
         self.assertEqual(sanitize_url('https://foo.bar'), 'https://foo.bar')
+        self.assertEqual(sanitize_url('foo bar'), 'foo bar')
+    def test_sanitized_Request(self):
+        self.assertFalse(sanitized_Request('http://foo.bar').has_header('Authorization'))
+        self.assertFalse(sanitized_Request('http://:foo.bar').has_header('Authorization'))
+        self.assertEqual(sanitized_Request('http://@foo.bar').get_header('Authorization'),
+                         'Basic Og==')
+        self.assertEqual(sanitized_Request('http://:pass@foo.bar').get_header('Authorization'),
+                         'Basic OnBhc3M=')
+        self.assertEqual(sanitized_Request('http://user:@foo.bar').get_header('Authorization'),
+                         'Basic dXNlcjo=')
+        self.assertEqual(sanitized_Request('http://user:pass@foo.bar').get_header('Authorization'),
+                         'Basic dXNlcjpwYXNz')
     def test_expand_path(self):
         def env(var):
@@ -249,6 +272,27 @@ class TestUtil(unittest.TestCase):
             expand_path('~/%s' % env('YOUTUBE_DL_EXPATH_PATH')),
             '%s/expanded' % compat_getenv('HOME'))
+    _uncommon_extensions = [
+        ('exe', 'abc.exe.ext'),
+        ('de', 'abc.de.ext'),
+        ('../.mp4', None),
+        ('..\\.mp4', None),
+    ]
+    def assertUnsafeExtension(self, ext=None):
+        assert_raises = self.assertRaises(_UnsafeExtensionError)
+        assert_raises.ext = ext
+        orig_exit = assert_raises.__exit__
+        def my_exit(self_, exc_type, exc_val, exc_tb):
+            did_raise = orig_exit(exc_type, exc_val, exc_tb)
+            if did_raise and assert_raises.ext is not None:
+                self.assertEqual(assert_raises.ext, assert_raises.exception.extension, 'Unsafe extension not as expected')
+            return did_raise
+        assert_raises.__exit__ = types.MethodType(my_exit, assert_raises)
+        return assert_raises
     def test_prepend_extension(self):
         self.assertEqual(prepend_extension('abc.ext', 'temp'), 'abc.temp.ext')
         self.assertEqual(prepend_extension('abc.ext', 'temp', 'ext'), 'abc.temp.ext')
@@ -257,6 +301,19 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(prepend_extension('.abc', 'temp'), '.abc.temp')
         self.assertEqual(prepend_extension('.abc.ext', 'temp'), '.abc.temp.ext')
+        # Test uncommon extensions
+        self.assertEqual(prepend_extension('abc.ext', 'bin'), 'abc.bin.ext')
+        for ext, result in self._uncommon_extensions:
+            with self.assertUnsafeExtension(ext):
+                prepend_extension('abc', ext)
+            if result:
+                self.assertEqual(prepend_extension('abc.ext', ext, 'ext'), result)
+            else:
+                with self.assertUnsafeExtension(ext):
+                    prepend_extension('abc.ext', ext, 'ext')
+            with self.assertUnsafeExtension(ext):
+                prepend_extension('abc.unexpected_ext', ext, 'ext')
     def test_replace_extension(self):
         self.assertEqual(replace_extension('abc.ext', 'temp'), 'abc.temp')
         self.assertEqual(replace_extension('abc.ext', 'temp', 'ext'), 'abc.temp')
@@ -265,6 +322,16 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(replace_extension('.abc', 'temp'), '.abc.temp')
         self.assertEqual(replace_extension('.abc.ext', 'temp'), '.abc.temp')
+        # Test uncommon extensions
+        self.assertEqual(replace_extension('abc.ext', 'bin'), 'abc.unknown_video')
+        for ext, _ in self._uncommon_extensions:
+            with self.assertUnsafeExtension(ext):
+                replace_extension('abc', ext)
+            with self.assertUnsafeExtension(ext):
+                replace_extension('abc.ext', ext, 'ext')
+            with self.assertUnsafeExtension(ext):
+                replace_extension('abc.unexpected_ext', ext, 'ext')
     def test_subtitles_filename(self):
         self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt'), 'abc.en.vtt')
         self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt', 'ext'), 'abc.en.vtt')
@@ -370,6 +437,9 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(unified_timestamp('Sep 11, 2013 | 5:49 AM'), 1378878540)
         self.assertEqual(unified_timestamp('December 15, 2017 at 7:49 am'), 1513324140)
         self.assertEqual(unified_timestamp('2018-03-14T08:32:43.1493874+00:00'), 1521016363)
+        self.assertEqual(unified_timestamp('December 31 1969 20:00:01 EDT'), 1)
+        self.assertEqual(unified_timestamp('Wednesday 31 December 1969 18:01:26 MDT'), 86)
+        self.assertEqual(unified_timestamp('12/31/1969 20:01:18 EDT', False), 78)
     def test_determine_ext(self):
         self.assertEqual(determine_ext('http://example.com/foo/bar.mp4/?download'), 'mp4')
@@ -491,11 +561,14 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(float_or_none(set()), None)
     def test_int_or_none(self):
+        self.assertEqual(int_or_none(42), 42)
         self.assertEqual(int_or_none('42'), 42)
         self.assertEqual(int_or_none(''), None)
         self.assertEqual(int_or_none(None), None)
         self.assertEqual(int_or_none([]), None)
         self.assertEqual(int_or_none(set()), None)
+        self.assertEqual(int_or_none('42', base=8), 34)
+        self.assertRaises(TypeError, int_or_none, 42, base=8)
     def test_str_to_int(self):
         self.assertEqual(str_to_int('123,456'), 123456)
@@ -662,38 +735,36 @@ class TestUtil(unittest.TestCase):
         self.assertTrue(isinstance(data, bytes))
     def test_update_url_query(self):
-        def query_dict(url):
-            return compat_parse_qs(compat_urlparse.urlparse(url).query)
-        self.assertEqual(query_dict(update_url_query(
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path', {'quality': ['HD'], 'format': ['mp4']})),
-            query_dict('http://example.com/path?quality=HD&format=mp4'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path?quality=HD&format=mp4'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path', {'system': ['LINUX', 'WINDOWS']})),
-            query_dict('http://example.com/path?system=LINUX&system=WINDOWS'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path?system=LINUX&system=WINDOWS'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path', {'fields': 'id,formats,subtitles'})),
-            query_dict('http://example.com/path?fields=id,formats,subtitles'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path?fields=id,formats,subtitles'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path', {'fields': ('id,formats,subtitles', 'thumbnails')})),
-            query_dict('http://example.com/path?fields=id,formats,subtitles&fields=thumbnails'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path?fields=id,formats,subtitles&fields=thumbnails'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path?manifest=f4m', {'manifest': []})),
-            query_dict('http://example.com/path'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path?system=LINUX&system=WINDOWS', {'system': 'LINUX'})),
-            query_dict('http://example.com/path?system=LINUX'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path?system=LINUX'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path', {'fields': b'id,formats,subtitles'})),
-            query_dict('http://example.com/path?fields=id,formats,subtitles'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path?fields=id,formats,subtitles'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path', {'width': 1080, 'height': 720})),
-            query_dict('http://example.com/path?width=1080&height=720'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path?width=1080&height=720'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path', {'bitrate': 5020.43})),
-            query_dict('http://example.com/path?bitrate=5020.43'))
-        self.assertEqual(query_dict(update_url_query(
+            parse_qs('http://example.com/path?bitrate=5020.43'))
+        self.assertEqual(parse_qs(update_url_query(
             'http://example.com/path', {'test': '第二行тест'})),
-            query_dict('http://example.com/path?test=%E7%AC%AC%E4%BA%8C%E8%A1%8C%D1%82%D0%B5%D1%81%D1%82'))
+            parse_qs('http://example.com/path?test=%E7%AC%AC%E4%BA%8C%E8%A1%8C%D1%82%D0%B5%D1%81%D1%82'))
     def test_multipart_encode(self):
         self.assertEqual(
@@ -705,28 +776,6 @@ class TestUtil(unittest.TestCase):
         self.assertRaises(
             ValueError, multipart_encode, {b'field': b'value'}, boundary='value')
-    def test_dict_get(self):
-        FALSE_VALUES = {
-            'none': None,
-            'false': False,
-            'zero': 0,
-            'empty_string': '',
-            'empty_list': [],
-        }
-        d = FALSE_VALUES.copy()
-        d['a'] = 42
-        self.assertEqual(dict_get(d, 'a'), 42)
-        self.assertEqual(dict_get(d, 'b'), None)
-        self.assertEqual(dict_get(d, 'b', 42), 42)
-        self.assertEqual(dict_get(d, ('a', )), 42)
-        self.assertEqual(dict_get(d, ('b', 'a', )), 42)
-        self.assertEqual(dict_get(d, ('b', 'c', 'a', 'd', )), 42)
-        self.assertEqual(dict_get(d, ('b', 'c', )), None)
-        self.assertEqual(dict_get(d, ('b', 'c', ), 42), 42)
-        for key, false_value in FALSE_VALUES.items():
-            self.assertEqual(dict_get(d, ('b', 'c', key, )), None)
-            self.assertEqual(dict_get(d, ('b', 'c', key, ), skip_false_values=False), false_value)
     def test_merge_dicts(self):
         self.assertEqual(merge_dicts({'a': 1}, {'b': 2}), {'a': 1, 'b': 2})
         self.assertEqual(merge_dicts({'a': 1}, {'a': 2}), {'a': 1})
@@ -885,6 +934,111 @@ class TestUtil(unittest.TestCase):
         )
         self.assertEqual(escape_url('http://vimeo.com/56015672#at=0'), 'http://vimeo.com/56015672#at=0')
+    def test_remove_dot_segments(self):
+        def remove_dot_segments(p):
+            q = '' if p.startswith('/') else '/'
+            p = 'http://example.com' + q + p
+            p = compat_urlparse.urlsplit(YoutubeDLHandler._fix_path(p)).path
+            return p[1:] if q else p
+        self.assertEqual(remove_dot_segments('/a/b/c/./../../g'), '/a/g')
+        self.assertEqual(remove_dot_segments('mid/content=5/../6'), 'mid/6')
+        self.assertEqual(remove_dot_segments('/ad/../cd'), '/cd')
+        self.assertEqual(remove_dot_segments('/ad/../cd/'), '/cd/')
+        self.assertEqual(remove_dot_segments('/..'), '/')
+        self.assertEqual(remove_dot_segments('/./'), '/')
+        self.assertEqual(remove_dot_segments('/./a'), '/a')
+        self.assertEqual(remove_dot_segments('/abc/./.././d/././e/.././f/./../../ghi'), '/ghi')
+        self.assertEqual(remove_dot_segments('/'), '/')
+        self.assertEqual(remove_dot_segments('/t'), '/t')
+        self.assertEqual(remove_dot_segments('t'), 't')
+        self.assertEqual(remove_dot_segments(''), '')
+        self.assertEqual(remove_dot_segments('/../a/b/c'), '/a/b/c')
+        self.assertEqual(remove_dot_segments('../a'), 'a')
+        self.assertEqual(remove_dot_segments('./a'), 'a')
+        self.assertEqual(remove_dot_segments('.'), '')
+        self.assertEqual(remove_dot_segments('////'), '////')
+    def test_js_to_json_vars_strings(self):
+        self.assertDictEqual(
+            json.loads(js_to_json(
+                '''{
+                    'null': a,
+                    'nullStr': b,
+                    'true': c,
+                    'trueStr': d,
+                    'false': e,
+                    'falseStr': f,
+                    'unresolvedVar': g,
+                }''',
+                {
+                    'a': 'null',
+                    'b': '"null"',
+                    'c': 'true',
+                    'd': '"true"',
+                    'e': 'false',
+                    'f': '"false"',
+                    'g': 'var',
+                }
+            )),
+            {
+                'null': None,
+                'nullStr': 'null',
+                'true': True,
+                'trueStr': 'true',
+                'false': False,
+                'falseStr': 'false',
+                'unresolvedVar': 'var'
+            }
+        )
+        self.assertDictEqual(
+            json.loads(js_to_json(
+                '''{
+                    'int': a,
+                    'intStr': b,
+                    'float': c,
+                    'floatStr': d,
+                }''',
+                {
+                    'a': '123',
+                    'b': '"123"',
+                    'c': '1.23',
+                    'd': '"1.23"',
+                }
+            )),
+            {
+                'int': 123,
+                'intStr': '123',
+                'float': 1.23,
+                'floatStr': '1.23',
+            }
+        )
+        self.assertDictEqual(
+            json.loads(js_to_json(
+                '''{
+                    'object': a,
+                    'objectStr': b,
+                    'array': c,
+                    'arrayStr': d,
+                }''',
+                {
+                    'a': '{}',
+                    'b': '"{}"',
+                    'c': '[]',
+                    'd': '"[]"',
+                }
+            )),
+            {
+                'object': {},
+                'objectStr': '{}',
+                'array': [],
+                'arrayStr': '[]',
+            }
+        )
     def test_js_to_json_realworld(self):
         inp = '''{
             'clip':{'provider':'pseudo'}
@@ -955,10 +1109,10 @@ class TestUtil(unittest.TestCase):
             !42: 42
         }''')
         self.assertEqual(json.loads(on), {
-            'a': 0,
-            'b': 1,
-            'c': 0,
-            'd': 42.42,
+            'a': True,
+            'b': False,
+            'c': False,
+            'd': True,
             'e': [],
             'f': "abc",
             'g': "",
@@ -1028,10 +1182,26 @@ class TestUtil(unittest.TestCase):
         on = js_to_json('{ "040": "040" }')
         self.assertEqual(json.loads(on), {'040': '040'})
+        on = js_to_json('[1,//{},\n2]')
+        self.assertEqual(json.loads(on), [1, 2])
+        on = js_to_json(r'"\^\$\#"')
+        self.assertEqual(json.loads(on), R'^$#', msg='Unnecessary escapes should be stripped')
+        on = js_to_json('\'"\\""\'')
+        self.assertEqual(json.loads(on), '"""', msg='Unnecessary quote escape should be escaped')
     def test_js_to_json_malformed(self):
         self.assertEqual(js_to_json('42a1'), '42"a1"')
         self.assertEqual(js_to_json('42a-1'), '42"a"-1')
+    def test_js_to_json_template_literal(self):
+        self.assertEqual(js_to_json('`Hello ${name}`', {'name': '"world"'}), '"Hello world"')
+        self.assertEqual(js_to_json('`${name}${name}`', {'name': '"X"'}), '"XX"')
+        self.assertEqual(js_to_json('`${name}${name}`', {'name': '5'}), '"55"')
+        self.assertEqual(js_to_json('`${name}"${name}"`', {'name': '5'}), '"5\\"5\\""')
+        self.assertEqual(js_to_json('`${name}`', {}), '"name"')
     def test_extract_attributes(self):
         self.assertEqual(extract_attributes('<e x="y">'), {'x': 'y'})
         self.assertEqual(extract_attributes("<e x='y'>"), {'x': 'y'})
@@ -1475,6 +1645,84 @@ Line 1
         self.assertEqual(clean_podcast_url('https://www.podtrac.com/pts/redirect.mp3/chtbl.com/track/5899E/traffic.megaphone.fm/HSW7835899191.mp3'), 'https://traffic.megaphone.fm/HSW7835899191.mp3')
         self.assertEqual(clean_podcast_url('https://play.podtrac.com/npr-344098539/edge1.pod.npr.org/anon.npr-podcasts/podcast/npr/waitwait/2020/10/20201003_waitwait_wwdtmpodcast201003-015621a5-f035-4eca-a9a1-7c118d90bc3c.mp3'), 'https://edge1.pod.npr.org/anon.npr-podcasts/podcast/npr/waitwait/2020/10/20201003_waitwait_wwdtmpodcast201003-015621a5-f035-4eca-a9a1-7c118d90bc3c.mp3')
+    def test_LazyList(self):
+        it = list(range(10))
+        self.assertEqual(list(LazyList(it)), it)
+        self.assertEqual(LazyList(it).exhaust(), it)
+        self.assertEqual(LazyList(it)[5], it[5])
+        self.assertEqual(LazyList(it)[5:], it[5:])
+        self.assertEqual(LazyList(it)[:5], it[:5])
+        self.assertEqual(LazyList(it)[::2], it[::2])
+        self.assertEqual(LazyList(it)[1::2], it[1::2])
+        self.assertEqual(LazyList(it)[5::-1], it[5::-1])
+        self.assertEqual(LazyList(it)[6:2:-2], it[6:2:-2])
+        self.assertEqual(LazyList(it)[::-1], it[::-1])
+        self.assertTrue(LazyList(it))
+        self.assertFalse(LazyList(range(0)))
+        self.assertEqual(len(LazyList(it)), len(it))
+        self.assertEqual(repr(LazyList(it)), repr(it))
+        self.assertEqual(compat_str(LazyList(it)), compat_str(it))
+        self.assertEqual(list(LazyList(it, reverse=True)), it[::-1])
+        self.assertEqual(list(reversed(LazyList(it))[::-1]), it)
+        self.assertEqual(list(reversed(LazyList(it))[1:3:7]), it[::-1][1:3:7])
+    def test_LazyList_laziness(self):
+        def test(ll, idx, val, cache):
+            self.assertEqual(ll[idx], val)
+            self.assertEqual(ll._cache, list(cache))
+        ll = LazyList(range(10))
+        test(ll, 0, 0, range(1))
+        test(ll, 5, 5, range(6))
+        test(ll, -3, 7, range(10))
+        ll = LazyList(range(10), reverse=True)
+        test(ll, -1, 0, range(1))
+        test(ll, 3, 6, range(10))
+        ll = LazyList(itertools.count())
+        test(ll, 10, 10, range(11))
+        ll = reversed(ll)
+        test(ll, -15, 14, range(15))
+    def test_try_call(self):
+        def total(*x, **kwargs):
+            return sum(x) + sum(kwargs.values())
+        self.assertEqual(try_call(None), None,
+                         msg='not a fn should give None')
+        self.assertEqual(try_call(lambda: 1), 1,
+                         msg='int fn with no expected_type should give int')
+        self.assertEqual(try_call(lambda: 1, expected_type=int), 1,
+                         msg='int fn with expected_type int should give int')
+        self.assertEqual(try_call(lambda: 1, expected_type=dict), None,
+                         msg='int fn with wrong expected_type should give None')
+        self.assertEqual(try_call(total, args=(0, 1, 0, ), expected_type=int), 1,
+                         msg='fn should accept arglist')
+        self.assertEqual(try_call(total, kwargs={'a': 0, 'b': 1, 'c': 0}, expected_type=int), 1,
+                         msg='fn should accept kwargs')
+        self.assertEqual(try_call(lambda: 1, expected_type=dict), None,
+                         msg='int fn with no expected_type should give None')
+        self.assertEqual(try_call(lambda x: {}, total, args=(42, ), expected_type=int), 42,
+                         msg='expect first int result with expected_type int')
+    def test_variadic(self):
+        self.assertEqual(variadic(None), (None, ))
+        self.assertEqual(variadic('spam'), ('spam', ))
+        self.assertEqual(variadic('spam', allowed_types=dict), 'spam')
+        self.assertEqual(variadic('spam', allowed_types=[dict]), 'spam')
+    def test_join_nonempty(self):
+        self.assertEqual(join_nonempty('a', 'b'), 'a-b')
+        self.assertEqual(join_nonempty(
+            'a', 'b', 'c', 'd',
+            from_dict={'a': 'c', 'c': [], 'b': 'd', 'd': None}), 'c-d')
 if __name__ == '__main__':
     unittest.main()

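Aside (not part of the diff): the new unsafe-extension tests above pin down the mitigation's observable behaviour. A sketch assuming, as the helper asserts, that `_UnsafeExtensionError` carries the offending extension in its `extension` attribute:

from youtube_dl.utils import (
    _UnsafeExtensionError,
    prepend_extension,
    replace_extension,
)

print(prepend_extension('abc.ext', 'bin'))  # 'abc.bin.ext' (recognised extension)
print(replace_extension('abc.ext', 'bin'))  # 'abc.unknown_video' (sanitized fallback, per the test)
try:
    prepend_extension('abc', 'exe')  # unexpected executable extension
except _UnsafeExtensionError as e:
    print(e.extension)  # 'exe'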
test/test_annotations.py

@@ -11,12 +11,11 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 from test.helper import get_params, try_rm
-import io
 import xml.etree.ElementTree
 import youtube_dl.YoutubeDL
 import youtube_dl.extractor
+from youtube_dl.compat import compat_open as open
 class YoutubeDL(youtube_dl.YoutubeDL):
@@ -51,7 +50,7 @@ class TestAnnotations(unittest.TestCase):
         ydl.download([TEST_ID])
         self.assertTrue(os.path.exists(ANNOTATIONS_FILE))
         annoxml = None
-        with io.open(ANNOTATIONS_FILE, 'r', encoding='utf-8') as annof:
+        with open(ANNOTATIONS_FILE, 'r', encoding='utf-8') as annof:
             annoxml = xml.etree.ElementTree.parse(annof)
         self.assertTrue(annoxml is not None, 'Failed to parse annotations XML')
         root = annoxml.getroot()

test/test_youtube_chapters.py (deleted)

@@ -1,275 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import expect_value
from youtube_dl.extractor import YoutubeIE
class TestYoutubeChapters(unittest.TestCase):
_TEST_CASES = [
(
# https://www.youtube.com/watch?v=A22oy8dFjqc
# pattern: 00:00 - <title>
'''This is the absolute ULTIMATE experience of Queen's set at LIVE AID, this is the best video mixed to the absolutely superior stereo radio broadcast. This vastly superior audio mix takes a huge dump on all of the official mixes. Best viewed in 1080p. ENJOY! ***MAKE SURE TO READ THE DESCRIPTION***<br /><a href="#" onclick="yt.www.watch.player.seekTo(00*60+36);return false;">00:36</a> - Bohemian Rhapsody<br /><a href="#" onclick="yt.www.watch.player.seekTo(02*60+42);return false;">02:42</a> - Radio Ga Ga<br /><a href="#" onclick="yt.www.watch.player.seekTo(06*60+53);return false;">06:53</a> - Ay Oh!<br /><a href="#" onclick="yt.www.watch.player.seekTo(07*60+34);return false;">07:34</a> - Hammer To Fall<br /><a href="#" onclick="yt.www.watch.player.seekTo(12*60+08);return false;">12:08</a> - Crazy Little Thing Called Love<br /><a href="#" onclick="yt.www.watch.player.seekTo(16*60+03);return false;">16:03</a> - We Will Rock You<br /><a href="#" onclick="yt.www.watch.player.seekTo(17*60+18);return false;">17:18</a> - We Are The Champions<br /><a href="#" onclick="yt.www.watch.player.seekTo(21*60+12);return false;">21:12</a> - Is This The World We Created...?<br /><br />Short song analysis:<br /><br />- "Bohemian Rhapsody": Although it's a short medley version, it's one of the best performances of the ballad section, with Freddie nailing the Bb4s with the correct studio phrasing (for the first time ever!).<br /><br />- "Radio Ga Ga": Although it's missing one chorus, this is one of - if not the best - the best versions ever, Freddie nails all the Bb4s and sounds very clean! Spike Edney's Roland Jupiter 8 also really shines through on this mix, compared to the DVD releases!<br /><br />- "Audience Improv": A great improv, Freddie sounds strong and confident. You gotta love when he sustains that A4 for 4 seconds!<br /><br />- "Hammer To Fall": Despite missing a verse and a chorus, it's a strong version (possibly the best ever). Freddie sings the song amazingly, and even ad-libs a C#5 and a C5! Also notice how heavy Brian's guitar sounds compared to the thin DVD mixes - it roars!<br /><br />- "Crazy Little Thing Called Love": A great version, the crowd loves the song, the jam is great as well! Only downside to this is the slight feedback issues.<br /><br />- "We Will Rock You": Although cut down to the 1st verse and chorus, Freddie sounds strong. He nails the A4, and the solo from Dr. May is brilliant!<br /><br />- "We Are the Champions": Perhaps the high-light of the performance - Freddie is very daring on this version, he sustains the pre-chorus Bb4s, nails the 1st C5, belts great A4s, but most importantly: He nails the chorus Bb4s, in all 3 choruses! This is the only time he has ever done so! It has to be said though, the last one sounds a bit rough, but that's a side effect of belting high notes for the past 18 minutes, with nodules AND laryngitis!<br /><br />- "Is This The World We Created... ?": Freddie and Brian perform a beautiful version of this, and it is one of the best versions ever. It's both sad and hilarious that a couple of BBC engineers are talking over the song, one of them being completely oblivious of the fact that he is interrupting the performance, on live television... 
Which was being televised to almost 2 billion homes.<br /><br /><br />All rights go to their respective owners!<br />-----Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for fair use for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use''',
1477,
[{
'start_time': 36,
'end_time': 162,
'title': 'Bohemian Rhapsody',
}, {
'start_time': 162,
'end_time': 413,
'title': 'Radio Ga Ga',
}, {
'start_time': 413,
'end_time': 454,
'title': 'Ay Oh!',
}, {
'start_time': 454,
'end_time': 728,
'title': 'Hammer To Fall',
}, {
'start_time': 728,
'end_time': 963,
'title': 'Crazy Little Thing Called Love',
}, {
'start_time': 963,
'end_time': 1038,
'title': 'We Will Rock You',
}, {
'start_time': 1038,
'end_time': 1272,
'title': 'We Are The Champions',
}, {
'start_time': 1272,
'end_time': 1477,
'title': 'Is This The World We Created...?',
}]
),
(
# https://www.youtube.com/watch?v=ekYlRhALiRQ
# pattern: <num>. <title> 0:00
'1. Those Beaten Paths of Confusion <a href="#" onclick="yt.www.watch.player.seekTo(0*60+00);return false;">0:00</a><br />2. Beyond the Shadows of Emptiness & Nothingness <a href="#" onclick="yt.www.watch.player.seekTo(11*60+47);return false;">11:47</a><br />3. Poison Yourself...With Thought <a href="#" onclick="yt.www.watch.player.seekTo(26*60+30);return false;">26:30</a><br />4. The Agents of Transformation <a href="#" onclick="yt.www.watch.player.seekTo(35*60+57);return false;">35:57</a><br />5. Drowning in the Pain of Consciousness <a href="#" onclick="yt.www.watch.player.seekTo(44*60+32);return false;">44:32</a><br />6. Deny the Disease of Life <a href="#" onclick="yt.www.watch.player.seekTo(53*60+07);return false;">53:07</a><br /><br />More info/Buy: http://crepusculonegro.storenvy.com/products/257645-cn-03-arizmenda-within-the-vacuum-of-infinity<br /><br />No copyright is intended. The rights to this video are assumed by the owner and its affiliates.',
4009,
[{
'start_time': 0,
'end_time': 707,
'title': '1. Those Beaten Paths of Confusion',
}, {
'start_time': 707,
'end_time': 1590,
'title': '2. Beyond the Shadows of Emptiness & Nothingness',
}, {
'start_time': 1590,
'end_time': 2157,
'title': '3. Poison Yourself...With Thought',
}, {
'start_time': 2157,
'end_time': 2672,
'title': '4. The Agents of Transformation',
}, {
'start_time': 2672,
'end_time': 3187,
'title': '5. Drowning in the Pain of Consciousness',
}, {
'start_time': 3187,
'end_time': 4009,
'title': '6. Deny the Disease of Life',
}]
),
(
# https://www.youtube.com/watch?v=WjL4pSzog9w
# pattern: 00:00 <title>
'<a href="https://arizmenda.bandcamp.com/merch/despairs-depths-descended-cd" class="yt-uix-servicelink " data-target-new-window="True" data-servicelink="CDAQ6TgYACITCNf1raqT2dMCFdRjGAod_o0CBSj4HQ" data-url="https://arizmenda.bandcamp.com/merch/despairs-depths-descended-cd" rel="nofollow noopener" target="_blank">https://arizmenda.bandcamp.com/merch/...</a><br /><br /><a href="#" onclick="yt.www.watch.player.seekTo(00*60+00);return false;">00:00</a> Christening Unborn Deformities <br /><a href="#" onclick="yt.www.watch.player.seekTo(07*60+08);return false;">07:08</a> Taste of Purity<br /><a href="#" onclick="yt.www.watch.player.seekTo(16*60+16);return false;">16:16</a> Sculpting Sins of a Universal Tongue<br /><a href="#" onclick="yt.www.watch.player.seekTo(24*60+45);return false;">24:45</a> Birth<br /><a href="#" onclick="yt.www.watch.player.seekTo(31*60+24);return false;">31:24</a> Neves<br /><a href="#" onclick="yt.www.watch.player.seekTo(37*60+55);return false;">37:55</a> Libations in Limbo',
2705,
[{
'start_time': 0,
'end_time': 428,
'title': 'Christening Unborn Deformities',
}, {
'start_time': 428,
'end_time': 976,
'title': 'Taste of Purity',
}, {
'start_time': 976,
'end_time': 1485,
'title': 'Sculpting Sins of a Universal Tongue',
}, {
'start_time': 1485,
'end_time': 1884,
'title': 'Birth',
}, {
'start_time': 1884,
'end_time': 2275,
'title': 'Neves',
}, {
'start_time': 2275,
'end_time': 2705,
'title': 'Libations in Limbo',
}]
),
(
# https://www.youtube.com/watch?v=o3r1sn-t3is
# pattern: <title> 00:00 <note>
'Download this show in MP3: <a href="http://sh.st/njZKK" class="yt-uix-servicelink " data-url="http://sh.st/njZKK" data-target-new-window="True" data-servicelink="CDAQ6TgYACITCK3j8_6o2dMCFVDCGAoduVAKKij4HQ" rel="nofollow noopener" target="_blank">http://sh.st/njZKK</a><br /><br />Setlist:<br />I-E-A-I-A-I-O <a href="#" onclick="yt.www.watch.player.seekTo(00*60+45);return false;">00:45</a><br />Suite-Pee <a href="#" onclick="yt.www.watch.player.seekTo(4*60+26);return false;">4:26</a> (Incomplete)<br />Attack <a href="#" onclick="yt.www.watch.player.seekTo(5*60+31);return false;">5:31</a> (First live performance since 2011)<br />Prison Song <a href="#" onclick="yt.www.watch.player.seekTo(8*60+42);return false;">8:42</a><br />Know <a href="#" onclick="yt.www.watch.player.seekTo(12*60+32);return false;">12:32</a> (First live performance since 2011)<br />Aerials <a href="#" onclick="yt.www.watch.player.seekTo(15*60+32);return false;">15:32</a><br />Soldier Side - Intro <a href="#" onclick="yt.www.watch.player.seekTo(19*60+13);return false;">19:13</a><br />B.Y.O.B. <a href="#" onclick="yt.www.watch.player.seekTo(20*60+09);return false;">20:09</a><br />Soil <a href="#" onclick="yt.www.watch.player.seekTo(24*60+32);return false;">24:32</a><br />Darts <a href="#" onclick="yt.www.watch.player.seekTo(27*60+48);return false;">27:48</a><br />Radio/Video <a href="#" onclick="yt.www.watch.player.seekTo(30*60+38);return false;">30:38</a><br />Hypnotize <a href="#" onclick="yt.www.watch.player.seekTo(35*60+05);return false;">35:05</a><br />Temper <a href="#" onclick="yt.www.watch.player.seekTo(38*60+08);return false;">38:08</a> (First live performance since 1999)<br />CUBErt <a href="#" onclick="yt.www.watch.player.seekTo(41*60+00);return false;">41:00</a><br />Needles <a href="#" onclick="yt.www.watch.player.seekTo(42*60+57);return false;">42:57</a><br />Deer Dance <a href="#" onclick="yt.www.watch.player.seekTo(46*60+27);return false;">46:27</a><br />Bounce <a href="#" onclick="yt.www.watch.player.seekTo(49*60+38);return false;">49:38</a><br />Suggestions <a href="#" onclick="yt.www.watch.player.seekTo(51*60+25);return false;">51:25</a><br />Psycho <a href="#" onclick="yt.www.watch.player.seekTo(53*60+52);return false;">53:52</a><br />Chop Suey! <a href="#" onclick="yt.www.watch.player.seekTo(58*60+13);return false;">58:13</a><br />Lonely Day <a href="#" onclick="yt.www.watch.player.seekTo(1*3600+01*60+15);return false;">1:01:15</a><br />Question! <a href="#" onclick="yt.www.watch.player.seekTo(1*3600+04*60+14);return false;">1:04:14</a><br />Lost in Hollywood <a href="#" onclick="yt.www.watch.player.seekTo(1*3600+08*60+10);return false;">1:08:10</a><br />Vicinity of Obscenity <a href="#" onclick="yt.www.watch.player.seekTo(1*3600+13*60+40);return false;">1:13:40</a>(First live performance since 2012)<br />Forest <a href="#" onclick="yt.www.watch.player.seekTo(1*3600+16*60+17);return false;">1:16:17</a><br />Cigaro <a href="#" onclick="yt.www.watch.player.seekTo(1*3600+20*60+02);return false;">1:20:02</a><br />Toxicity <a href="#" onclick="yt.www.watch.player.seekTo(1*3600+23*60+57);return false;">1:23:57</a>(with Chino Moreno)<br />Sugar <a href="#" onclick="yt.www.watch.player.seekTo(1*3600+27*60+53);return false;">1:27:53</a>',
5640,
[{
'start_time': 45,
'end_time': 266,
'title': 'I-E-A-I-A-I-O',
}, {
'start_time': 266,
'end_time': 331,
'title': 'Suite-Pee (Incomplete)',
}, {
'start_time': 331,
'end_time': 522,
'title': 'Attack (First live performance since 2011)',
}, {
'start_time': 522,
'end_time': 752,
'title': 'Prison Song',
}, {
'start_time': 752,
'end_time': 932,
'title': 'Know (First live performance since 2011)',
}, {
'start_time': 932,
'end_time': 1153,
'title': 'Aerials',
}, {
'start_time': 1153,
'end_time': 1209,
'title': 'Soldier Side - Intro',
}, {
'start_time': 1209,
'end_time': 1472,
'title': 'B.Y.O.B.',
}, {
'start_time': 1472,
'end_time': 1668,
'title': 'Soil',
}, {
'start_time': 1668,
'end_time': 1838,
'title': 'Darts',
}, {
'start_time': 1838,
'end_time': 2105,
'title': 'Radio/Video',
}, {
'start_time': 2105,
'end_time': 2288,
'title': 'Hypnotize',
}, {
'start_time': 2288,
'end_time': 2460,
'title': 'Temper (First live performance since 1999)',
}, {
'start_time': 2460,
'end_time': 2577,
'title': 'CUBErt',
}, {
'start_time': 2577,
'end_time': 2787,
'title': 'Needles',
}, {
'start_time': 2787,
'end_time': 2978,
'title': 'Deer Dance',
}, {
'start_time': 2978,
'end_time': 3085,
'title': 'Bounce',
}, {
'start_time': 3085,
'end_time': 3232,
'title': 'Suggestions',
}, {
'start_time': 3232,
'end_time': 3493,
'title': 'Psycho',
}, {
'start_time': 3493,
'end_time': 3675,
'title': 'Chop Suey!',
}, {
'start_time': 3675,
'end_time': 3854,
'title': 'Lonely Day',
}, {
'start_time': 3854,
'end_time': 4090,
'title': 'Question!',
}, {
'start_time': 4090,
'end_time': 4420,
'title': 'Lost in Hollywood',
}, {
'start_time': 4420,
'end_time': 4577,
'title': 'Vicinity of Obscenity (First live performance since 2012)',
}, {
'start_time': 4577,
'end_time': 4802,
'title': 'Forest',
}, {
'start_time': 4802,
'end_time': 5037,
'title': 'Cigaro',
}, {
'start_time': 5037,
'end_time': 5273,
'title': 'Toxicity (with Chino Moreno)',
}, {
'start_time': 5273,
'end_time': 5640,
'title': 'Sugar',
}]
),
(
# https://www.youtube.com/watch?v=PkYLQbsqCE8
# pattern: <num> - <title> [<latinized title>] 0:00:00
'''Затемно (Zatemno) is an Obscure Black Metal Band from Russia.<br /><br />"Во прах (Vo prakh)'' Into The Ashes", Debut mini-album released may 6, 2016, by Death Knell Productions<br />Released on 6 panel digipak CD, limited to 100 copies only<br />And digital format on Bandcamp<br /><br />Tracklist<br /><br />1 - Во прах [Vo prakh] <a href="#" onclick="yt.www.watch.player.seekTo(0*3600+00*60+00);return false;">0:00:00</a><br />2 - Искупление [Iskupleniye] <a href="#" onclick="yt.www.watch.player.seekTo(0*3600+08*60+10);return false;">0:08:10</a><br />3 - Из серпов луны...[Iz serpov luny] <a href="#" onclick="yt.www.watch.player.seekTo(0*3600+14*60+30);return false;">0:14:30</a><br /><br />Links:<br /><a href="https://deathknellprod.bandcamp.com/album/--2" class="yt-uix-servicelink " data-target-new-window="True" data-url="https://deathknellprod.bandcamp.com/album/--2" data-servicelink="CC8Q6TgYACITCNP234Kr2dMCFcNxGAodQqsIwSj4HQ" target="_blank" rel="nofollow noopener">https://deathknellprod.bandcamp.com/a...</a><br /><a href="https://www.facebook.com/DeathKnellProd/" class="yt-uix-servicelink " data-target-new-window="True" data-url="https://www.facebook.com/DeathKnellProd/" data-servicelink="CC8Q6TgYACITCNP234Kr2dMCFcNxGAodQqsIwSj4HQ" target="_blank" rel="nofollow noopener">https://www.facebook.com/DeathKnellProd/</a><br /><br /><br />I don't have any right about this artifact, my only intention is to spread the music of the band, all rights are reserved to the Затемно (Zatemno) and his producers, Death Knell Productions.<br /><br />------------------------------------------------------------------<br /><br />Subscribe for more videos like this.<br />My link: <a href="https://web.facebook.com/AttackOfTheDragons" class="yt-uix-servicelink " data-target-new-window="True" data-url="https://web.facebook.com/AttackOfTheDragons" data-servicelink="CC8Q6TgYACITCNP234Kr2dMCFcNxGAodQqsIwSj4HQ" target="_blank" rel="nofollow noopener">https://web.facebook.com/AttackOfTheD...</a>''',
1138,
[{
'start_time': 0,
'end_time': 490,
'title': '1 - Во прах [Vo prakh]',
}, {
'start_time': 490,
'end_time': 870,
'title': '2 - Искупление [Iskupleniye]',
}, {
'start_time': 870,
'end_time': 1138,
'title': '3 - Из серпов луны...[Iz serpov luny]',
}]
),
(
# https://www.youtube.com/watch?v=xZW70zEasOk
# time point more than duration
'''● LCS Spring finals: Saturday and Sunday from <a href="#" onclick="yt.www.watch.player.seekTo(13*60+30);return false;">13:30</a> outside the venue! <br />● PAX East: Fri, Sat & Sun - more info in tomorrows video on the main channel!''',
283,
[]
),
]
def test_youtube_chapters(self):
for description, duration, expected_chapters in self._TEST_CASES:
ie = YoutubeIE()
expect_value(
self, ie._extract_chapters_from_description(description, duration),
expected_chapters, None)
if __name__ == '__main__':
unittest.main()

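The deleted test above exercised `YoutubeIE._extract_chapters_from_description` against several description layouts (`00:00 - <title>`, `<num>. <title> 0:00`, `00:00 <title>`, `<title> 00:00 <note>`, and time points past the duration). For illustration only, a rough, hypothetical sketch of just the first layout — not the extractor's actual implementation:

import re

# Hypothetical helper for the '00:36 - Bohemian Rhapsody' layout only
CHAPTER_RE = re.compile(r'(?P<ts>\d{1,2}(?::\d{2})+)\s*-\s*(?P<title>[^<\n]+)')

def timestamp_to_seconds(ts):
    seconds = 0
    for part in ts.split(':'):
        seconds = seconds * 60 + int(part)
    return seconds

desc = '00:36 - Bohemian Rhapsody<br />02:42 - Radio Ga Ga'
print([(timestamp_to_seconds(m.group('ts')), m.group('title').strip())
       for m in CHAPTER_RE.finditer(desc)])
# [(36, 'Bohemian Rhapsody'), (162, 'Radio Ga Ga')]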
test/test_youtube_lists.py

@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+# -*- coding: utf-8 -*-
 from __future__ import unicode_literals
 # Allow direct execution
@@ -9,10 +10,10 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 from test.helper import FakeYDL
 from youtube_dl.extractor import (
-    YoutubePlaylistIE,
     YoutubeIE,
+    YoutubePlaylistIE,
+    YoutubeTabIE,
 )
@@ -24,47 +25,40 @@ class TestYoutubeLists(unittest.TestCase):
     def test_youtube_playlist_noplaylist(self):
         dl = FakeYDL()
         dl.params['noplaylist'] = True
+        dl.params['format'] = 'best'
         ie = YoutubePlaylistIE(dl)
         result = ie.extract('https://www.youtube.com/watch?v=FXxLjLQi3Fg&list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re')
         self.assertEqual(result['_type'], 'url')
+        result = dl.extract_info(result['url'], download=False, ie_key=result.get('ie_key'), process=False)
         self.assertEqual(YoutubeIE().extract_id(result['url']), 'FXxLjLQi3Fg')
-    def test_youtube_course(self):
-        dl = FakeYDL()
-        ie = YoutubePlaylistIE(dl)
-        # TODO find a > 100 (paginating?) videos course
-        result = ie.extract('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
-        entries = list(result['entries'])
-        self.assertEqual(YoutubeIE().extract_id(entries[0]['url']), 'j9WZyLZCBzs')
-        self.assertEqual(len(entries), 25)
-        self.assertEqual(YoutubeIE().extract_id(entries[-1]['url']), 'rYefUsYuEp0')
     def test_youtube_mix(self):
         dl = FakeYDL()
-        ie = YoutubePlaylistIE(dl)
-        result = ie.extract('https://www.youtube.com/watch?v=W01L70IGBgE&index=2&list=RDOQpdSVF_k_w')
-        entries = result['entries']
-        self.assertTrue(len(entries) >= 50)
+        dl.params['format'] = 'best'
+        ie = YoutubeTabIE(dl)
+        result = dl.extract_info('https://www.youtube.com/watch?v=tyITL_exICo&list=RDCLAK5uy_kLWIr9gv1XLlPbaDS965-Db4TrBoUTxQ8',
+                                 download=False, ie_key=ie.ie_key(), process=True)
+        entries = (result or {}).get('entries', [{'id': 'not_found', }])
+        self.assertTrue(len(entries) >= 25)
         original_video = entries[0]
-        self.assertEqual(original_video['id'], 'OQpdSVF_k_w')
-    def test_youtube_toptracks(self):
-        print('Skipping: The playlist page gives error 500')
-        return
-        dl = FakeYDL()
-        ie = YoutubePlaylistIE(dl)
-        result = ie.extract('https://www.youtube.com/playlist?list=MCUS')
-        entries = result['entries']
-        self.assertEqual(len(entries), 100)
-    def test_youtube_flat_playlist_titles(self):
+        self.assertEqual(original_video['id'], 'tyITL_exICo')
+    def test_youtube_flat_playlist_extraction(self):
         dl = FakeYDL()
         dl.params['extract_flat'] = True
-        ie = YoutubePlaylistIE(dl)
-        result = ie.extract('https://www.youtube.com/playlist?list=PL-KKIb8rvtMSrAO9YFbeM6UQrAqoFTUWv')
+        ie = YoutubeTabIE(dl)
+        result = ie.extract('https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc')
         self.assertIsPlaylist(result)
-        for entry in result['entries']:
-            self.assertTrue(entry.get('title'))
+        entries = list(result['entries'])
+        self.assertTrue(len(entries) == 1)
+        video = entries[0]
+        self.assertEqual(video['_type'], 'url')
+        self.assertEqual(video['ie_key'], 'Youtube')
+        self.assertEqual(video['id'], 'BaW_jenozKc')
+        self.assertEqual(video['url'], 'BaW_jenozKc')
+        self.assertEqual(video['title'], 'youtube-dl test video "\'/\\ä↭𝕐')
+        self.assertEqual(video['duration'], 10)
+        self.assertEqual(video['uploader'], 'Philipp Hagemeister')
 if __name__ == '__main__':

test/test_youtube_misc.py (new file)

@@ -0,0 +1,26 @@
#!/usr/bin/env python
from __future__ import unicode_literals

# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from youtube_dl.extractor import YoutubeIE


class TestYoutubeMisc(unittest.TestCase):
    def test_youtube_extract(self):
        assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)
        assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
        assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
        assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc')
        assertExtractId('https://www.youtube.com/watch_popup?v=BaW_jenozKc', 'BaW_jenozKc')
        assertExtractId('http://www.youtube.com/watch?v=BaW_jenozKcsharePLED17F32AD9753930', 'BaW_jenozKc')
        assertExtractId('BaW_jenozKc', 'BaW_jenozKc')


if __name__ == '__main__':
    unittest.main()

test/test_youtube_signature.py

@ -8,76 +8,192 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

-import io
import re
import string

+from youtube_dl.compat import (
+    compat_open as open,
+    compat_str,
+    compat_urlretrieve,
+)
from test.helper import FakeYDL
from youtube_dl.extractor import YoutubeIE
-from youtube_dl.compat import compat_str, compat_urlretrieve
+from youtube_dl.jsinterp import JSInterpreter
-_TESTS = [
+_SIG_TESTS = [
    (
        'https://s.ytimg.com/yts/jsbin/html5player-vflHOr_nV.js',
-        'js',
        86,
        '>=<;:/.-[+*)(\'&%$#"!ZYX0VUTSRQPONMLKJIHGFEDCBA\\yxwvutsrqponmlkjihgfedcba987654321',
    ),
    (
        'https://s.ytimg.com/yts/jsbin/html5player-vfldJ8xgI.js',
-        'js',
        85,
        '3456789a0cdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRS[UVWXYZ!"#$%&\'()*+,-./:;<=>?@',
    ),
    (
        'https://s.ytimg.com/yts/jsbin/html5player-vfle-mVwz.js',
-        'js',
        90,
        ']\\[@?>=<;:/.-,+*)(\'&%$#"hZYXWVUTSRQPONMLKJIHGFEDCBAzyxwvutsrqponmlkjiagfedcb39876',
    ),
    (
        'https://s.ytimg.com/yts/jsbin/html5player-en_US-vfl0Cbn9e.js',
-        'js',
        84,
        'O1I3456789abcde0ghijklmnopqrstuvwxyzABCDEFGHfJKLMN2PQRSTUVW@YZ!"#$%&\'()*+,-./:;<=',
    ),
    (
        'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflXGBaUN.js',
-        'js',
        '2ACFC7A61CA478CD21425E5A57EBD73DDC78E22A.2094302436B2D377D14A3BBA23022D023B8BC25AA',
        'A52CB8B320D22032ABB3A41D773D2B6342034902.A22E87CDD37DBE75A5E52412DC874AC16A7CFCA2',
    ),
    (
        'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflBb0OQx.js',
-        'js',
        84,
        '123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQ0STUVWXYZ!"#$%&\'()*+,@./:;<=>'
    ),
    (
        'https://s.ytimg.com/yts/jsbin/html5player-en_US-vfl9FYC6l.js',
-        'js',
        83,
        '123456789abcdefghijklmnopqr0tuvwxyzABCDETGHIJKLMNOPQRS>UVWXYZ!"#$%&\'()*+,-./:;<=F'
    ),
    (
        'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflCGk6yw/html5player.js',
-        'js',
        '4646B5181C6C3020DF1D9C7FCFEA.AD80ABF70C39BD369CCCAE780AFBB98FA6B6CB42766249D9488C288',
        '82C8849D94266724DC6B6AF89BBFA087EACCD963.B93C07FBA084ACAEFCF7C9D1FD0203C6C1815B6B'
    ),
    (
        'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflKjOTVq/html5player.js',
-        'js',
        '312AA52209E3623129A412D56A40F11CB0AF14AE.3EE09501CB14E3BCDC3B2AE808BF3F1D14E7FBF12',
        '112AA5220913623229A412D56A40F11CB0AF14AE.3EE0950FCB14EEBCDC3B2AE808BF331D14E7FBF3',
    )
]
_NSIG_TESTS = [
(
'https://www.youtube.com/s/player/7862ca1f/player_ias.vflset/en_US/base.js',
'X_LCxVDjAavgE5t', 'yxJ1dM6iz5ogUg',
),
(
'https://www.youtube.com/s/player/9216d1f7/player_ias.vflset/en_US/base.js',
'SLp9F5bwjAdhE9F-', 'gWnb9IK2DJ8Q1w',
),
(
'https://www.youtube.com/s/player/f8cb7a3b/player_ias.vflset/en_US/base.js',
'oBo2h5euWy6osrUt', 'ivXHpm7qJjJN',
),
(
'https://www.youtube.com/s/player/2dfe380c/player_ias.vflset/en_US/base.js',
'oBo2h5euWy6osrUt', '3DIBbn3qdQ',
),
(
'https://www.youtube.com/s/player/f1ca6900/player_ias.vflset/en_US/base.js',
'cu3wyu6LQn2hse', 'jvxetvmlI9AN9Q',
),
(
'https://www.youtube.com/s/player/8040e515/player_ias.vflset/en_US/base.js',
'wvOFaY-yjgDuIEg5', 'HkfBFDHmgw4rsw',
),
(
'https://www.youtube.com/s/player/e06dea74/player_ias.vflset/en_US/base.js',
'AiuodmaDDYw8d3y4bf', 'ankd8eza2T6Qmw',
),
(
'https://www.youtube.com/s/player/5dd88d1d/player-plasma-ias-phone-en_US.vflset/base.js',
'kSxKFLeqzv_ZyHSAt', 'n8gS8oRlHOxPFA',
),
(
'https://www.youtube.com/s/player/324f67b9/player_ias.vflset/en_US/base.js',
'xdftNy7dh9QGnhW', '22qLGxrmX8F1rA',
),
(
'https://www.youtube.com/s/player/4c3f79c5/player_ias.vflset/en_US/base.js',
'TDCstCG66tEAO5pR9o', 'dbxNtZ14c-yWyw',
),
(
'https://www.youtube.com/s/player/c81bbb4a/player_ias.vflset/en_US/base.js',
'gre3EcLurNY2vqp94', 'Z9DfGxWP115WTg',
),
(
'https://www.youtube.com/s/player/1f7d5369/player_ias.vflset/en_US/base.js',
'batNX7sYqIJdkJ', 'IhOkL_zxbkOZBw',
),
(
'https://www.youtube.com/s/player/009f1d77/player_ias.vflset/en_US/base.js',
'5dwFHw8aFWQUQtffRq', 'audescmLUzI3jw',
),
(
'https://www.youtube.com/s/player/dc0c6770/player_ias.vflset/en_US/base.js',
'5EHDMgYLV6HPGk_Mu-kk', 'n9lUJLHbxUI0GQ',
),
(
'https://www.youtube.com/s/player/c2199353/player_ias.vflset/en_US/base.js',
'5EHDMgYLV6HPGk_Mu-kk', 'AD5rgS85EkrE7',
),
(
'https://www.youtube.com/s/player/113ca41c/player_ias.vflset/en_US/base.js',
'cgYl-tlYkhjT7A', 'hI7BBr2zUgcmMg',
),
(
'https://www.youtube.com/s/player/c57c113c/player_ias.vflset/en_US/base.js',
'-Txvy6bT5R6LqgnQNx', 'dcklJCnRUHbgSg',
),
(
'https://www.youtube.com/s/player/5a3b6271/player_ias.vflset/en_US/base.js',
'B2j7f_UPT4rfje85Lu_e', 'm5DmNymaGQ5RdQ',
),
(
'https://www.youtube.com/s/player/dac945fd/player_ias.vflset/en_US/base.js',
'o8BkRxXhuYsBCWi6RplPdP', '3Lx32v_hmzTm6A',
),
(
'https://www.youtube.com/s/player/6f20102c/player_ias.vflset/en_US/base.js',
'lE8DhoDmKqnmJJ', 'pJTTX6XyJP2BYw',
),
(
'https://www.youtube.com/s/player/cfa9e7cb/player_ias.vflset/en_US/base.js',
'qO0NiMtYQ7TeJnfFG2', 'k9cuJDHNS5O7kQ',
),
(
'https://www.youtube.com/s/player/b7910ca8/player_ias.vflset/en_US/base.js',
'_hXMCwMt9qE310D', 'LoZMgkkofRMCZQ',
),
(
'https://www.youtube.com/s/player/590f65a6/player_ias.vflset/en_US/base.js',
'1tm7-g_A9zsI8_Lay_', 'xI4Vem4Put_rOg',
),
(
'https://www.youtube.com/s/player/b22ef6e7/player_ias.vflset/en_US/base.js',
'b6HcntHGkvBLk_FRf', 'kNPW6A7FyP2l8A',
),
(
'https://www.youtube.com/s/player/3400486c/player_ias.vflset/en_US/base.js',
'lL46g3XifCKUZn1Xfw', 'z767lhet6V2Skl',
),
(
'https://www.youtube.com/s/player/5604538d/player_ias.vflset/en_US/base.js',
'7X-he4jjvMx7BCX', 'sViSydX8IHtdWA',
),
(
'https://www.youtube.com/s/player/20dfca59/player_ias.vflset/en_US/base.js',
'-fLCxedkAk4LUTK2', 'O8kfRq1y1eyHGw',
),
(
'https://www.youtube.com/s/player/b12cc44b/player_ias.vflset/en_US/base.js',
'keLa5R2U00sR9SQK', 'N1OGyujjEwMnLw',
),
]
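Each `_NSIG_TESTS` tuple is (player URL, sample n parameter, expected transformed value). As a hedged sketch (mirroring the `n_sig()` helper defined further down, not an additional test), one entry can be verified by hand like this:

    from youtube_dl.compat import compat_urlretrieve
    from youtube_dl.extractor import YoutubeIE
    from youtube_dl.jsinterp import JSInterpreter
    from test.helper import FakeYDL

    url = 'https://www.youtube.com/s/player/b12cc44b/player_ias.vflset/en_US/base.js'
    compat_urlretrieve(url, 'player-nsig-b12cc44b.js')  # fetch the player JS once
    with open('player-nsig-b12cc44b.js') as f:
        jscode = f.read()
    # locate the n-transform function, then run it in the JS interpreter
    funcname = YoutubeIE(FakeYDL())._extract_n_function_name(jscode)
    assert JSInterpreter(jscode).call_function(funcname, 'keLa5R2U00sR9SQK') == 'N1OGyujjEwMnLw'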
class TestPlayerInfo(unittest.TestCase):
    def test_youtube_extract_player_info(self):
        PLAYER_URLS = (
+            ('https://www.youtube.com/s/player/4c3f79c5/player_ias.vflset/en_US/base.js', '4c3f79c5'),
            ('https://www.youtube.com/s/player/64dddad9/player_ias.vflset/en_US/base.js', '64dddad9'),
+            ('https://www.youtube.com/s/player/64dddad9/player_ias.vflset/fr_FR/base.js', '64dddad9'),
+            ('https://www.youtube.com/s/player/64dddad9/player-plasma-ias-phone-en_US.vflset/base.js', '64dddad9'),
+            ('https://www.youtube.com/s/player/64dddad9/player-plasma-ias-phone-de_DE.vflset/base.js', '64dddad9'),
+            ('https://www.youtube.com/s/player/64dddad9/player-plasma-ias-tablet-en_US.vflset/base.js', '64dddad9'),
            # obsolete
            ('https://www.youtube.com/yts/jsbin/player_ias-vfle4-e03/en_US/base.js', 'vfle4-e03'),
            ('https://www.youtube.com/yts/jsbin/player_ias-vfl49f_g4/en_US/base.js', 'vfl49f_g4'),
@ -86,59 +202,70 @@ class TestPlayerInfo(unittest.TestCase):
            ('https://www.youtube.com/yts/jsbin/player-en_US-vflaxXRn1/base.js', 'vflaxXRn1'),
            ('https://s.ytimg.com/yts/jsbin/html5player-en_US-vflXGBaUN.js', 'vflXGBaUN'),
            ('https://s.ytimg.com/yts/jsbin/html5player-en_US-vflKjOTVq/html5player.js', 'vflKjOTVq'),
-            ('http://s.ytimg.com/yt/swfbin/watch_as3-vflrEm9Nq.swf', 'vflrEm9Nq'),
-            ('https://s.ytimg.com/yts/swfbin/player-vflenCdZL/watch_as3.swf', 'vflenCdZL'),
        )
        for player_url, expected_player_id in PLAYER_URLS:
-            expected_player_type = player_url.split('.')[-1]
-            player_type, player_id = YoutubeIE._extract_player_info(player_url)
-            self.assertEqual(player_type, expected_player_type)
+            player_id = YoutubeIE._extract_player_info(player_url)
            self.assertEqual(player_id, expected_player_id)
class TestSignature(unittest.TestCase):
    def setUp(self):
        TEST_DIR = os.path.dirname(os.path.abspath(__file__))
-        self.TESTDATA_DIR = os.path.join(TEST_DIR, 'testdata')
+        self.TESTDATA_DIR = os.path.join(TEST_DIR, 'testdata/sigs')
        if not os.path.exists(self.TESTDATA_DIR):
            os.mkdir(self.TESTDATA_DIR)

+    def tearDown(self):
+        try:
+            for f in os.listdir(self.TESTDATA_DIR):
+                os.remove(f)
+        except OSError:
+            pass


-def make_tfunc(url, stype, sig_input, expected_sig):
-    m = re.match(r'.*-([a-zA-Z0-9_-]+)(?:/watch_as3|/html5player)?\.[a-z]+$', url)
-    assert m, '%r should follow URL format' % url
-    test_id = m.group(1)
-
-    def test_func(self):
-        basename = 'player-%s.%s' % (test_id, stype)
-        fn = os.path.join(self.TESTDATA_DIR, basename)
-
-        if not os.path.exists(fn):
-            compat_urlretrieve(url, fn)
-
-        ydl = FakeYDL()
-        ie = YoutubeIE(ydl)
-        if stype == 'js':
-            with io.open(fn, encoding='utf-8') as testf:
-                jscode = testf.read()
-            func = ie._parse_sig_js(jscode)
-        else:
-            assert stype == 'swf'
-            with open(fn, 'rb') as testf:
-                swfcode = testf.read()
-            func = ie._parse_sig_swf(swfcode)
-        src_sig = (
-            compat_str(string.printable[:sig_input])
-            if isinstance(sig_input, int) else sig_input)
-        got_sig = func(src_sig)
-        self.assertEqual(got_sig, expected_sig)
-
-    test_func.__name__ = str('test_signature_' + stype + '_' + test_id)
-    setattr(TestSignature, test_func.__name__, test_func)
-
-for test_spec in _TESTS:
-    make_tfunc(*test_spec)
+def t_factory(name, sig_func, url_pattern):
+    def make_tfunc(url, sig_input, expected_sig):
+        m = url_pattern.match(url)
+        assert m, '%r should follow URL format' % url
+        test_id = m.group('id')
+
+        def test_func(self):
+            basename = 'player-{0}-{1}.js'.format(name, test_id)
+            fn = os.path.join(self.TESTDATA_DIR, basename)
+
+            if not os.path.exists(fn):
+                compat_urlretrieve(url, fn)
+            with open(fn, encoding='utf-8') as testf:
+                jscode = testf.read()
+            self.assertEqual(sig_func(jscode, sig_input), expected_sig)
+
+        test_func.__name__ = str('test_{0}_js_{1}'.format(name, test_id))
+        setattr(TestSignature, test_func.__name__, test_func)
+    return make_tfunc
+
+
+def signature(jscode, sig_input):
+    func = YoutubeIE(FakeYDL())._parse_sig_js(jscode)
+    src_sig = (
+        compat_str(string.printable[:sig_input])
+        if isinstance(sig_input, int) else sig_input)
+    return func(src_sig)
+
+
+def n_sig(jscode, sig_input):
+    funcname = YoutubeIE(FakeYDL())._extract_n_function_name(jscode)
+    return JSInterpreter(jscode).call_function(funcname, sig_input)
+
+
+make_sig_test = t_factory(
+    'signature', signature, re.compile(r'.*-(?P<id>[a-zA-Z0-9_-]+)(?:/watch_as3|/html5player)?\.[a-z]+$'))
+for test_spec in _SIG_TESTS:
+    make_sig_test(*test_spec)
+
+make_nsig_test = t_factory(
+    'nsig', n_sig, re.compile(r'.+/player/(?P<id>[a-zA-Z0-9_-]+)/.+.js$'))
+for test_spec in _NSIG_TESTS:
+    make_nsig_test(*test_spec)


if __name__ == '__main__':

test/testdata/mpd/range_only.mpd (new file)

@ -0,0 +1,35 @@
<?xml version="1.0"?>
<!-- MPD file Generated with GPAC version 1.0.1-revrelease at 2021-11-27T20:53:11.690Z -->
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" minBufferTime="PT1.500S" type="static" mediaPresentationDuration="PT0H0M30.196S" maxSegmentDuration="PT0H0M10.027S" profiles="urn:mpeg:dash:profile:full:2011">
<ProgramInformation moreInformationURL="http://gpac.io">
<Title>manifest.mpd generated by GPAC</Title>
</ProgramInformation>
<Period duration="PT0H0M30.196S">
<AdaptationSet segmentAlignment="true" maxWidth="768" maxHeight="432" maxFrameRate="30000/1001" par="16:9" lang="und" startWithSAP="1">
<Representation id="1" mimeType="video/mp4" codecs="avc1.4D401E" width="768" height="432" frameRate="30000/1001" sar="1:1" bandwidth="526987">
<BaseURL>video_dashinit.mp4</BaseURL>
<SegmentList timescale="90000" duration="900000">
<Initialization range="0-881"/>
<SegmentURL mediaRange="882-876094" indexRange="882-925"/>
<SegmentURL mediaRange="876095-1466732" indexRange="876095-876138"/>
<SegmentURL mediaRange="1466733-1953615" indexRange="1466733-1466776"/>
<SegmentURL mediaRange="1953616-1994211" indexRange="1953616-1953659"/>
</SegmentList>
</Representation>
</AdaptationSet>
<AdaptationSet segmentAlignment="true" lang="und" startWithSAP="1">
<Representation id="2" mimeType="audio/mp4" codecs="mp4a.40.2" audioSamplingRate="48000" bandwidth="98096">
<AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
<BaseURL>audio_dashinit.mp4</BaseURL>
<SegmentList timescale="48000" duration="480000">
<Initialization range="0-752"/>
<SegmentURL mediaRange="753-124129" indexRange="753-796"/>
<SegmentURL mediaRange="124130-250544" indexRange="124130-124173"/>
<SegmentURL mediaRange="250545-374929" indexRange="250545-250588"/>
</SegmentList>
</Representation>
</AdaptationSet>
</Period>
</MPD>
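A hedged sketch (not part of the diff) of what this fixture exercises: every <SegmentURL> should become one byte-range request into the single BaseURL file. The key names below are illustrative only; the extractor's actual fragment dicts may use different keys.

    # Expected video fragments derived from range_only.mpd (illustrative):
    video_fragments = [
        {'url': 'video_dashinit.mp4', 'range': '882-876094'},
        {'url': 'video_dashinit.mp4', 'range': '876095-1466732'},
        {'url': 'video_dashinit.mp4', 'range': '1466733-1953615'},
        {'url': 'video_dashinit.mp4', 'range': '1953616-1994211'},
    ]
    # The <Initialization range="0-881"/> bytes are fetched first, so the
    # whole stream can be rebuilt from one file via HTTP Range requests.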

test/testdata/mpd/subtitles.mpd (new file)

@ -0,0 +1,351 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Created with Unified Streaming Platform (version=1.10.18-20255) -->
<MPD
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:mpeg:dash:schema:mpd:2011"
xsi:schemaLocation="urn:mpeg:dash:schema:mpd:2011 http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-DASH_schema_files/DASH-MPD.xsd"
type="static"
mediaPresentationDuration="PT14M48S"
maxSegmentDuration="PT1M"
minBufferTime="PT10S"
profiles="urn:mpeg:dash:profile:isoff-live:2011">
<Period
id="1"
duration="PT14M48S">
<BaseURL>dash/</BaseURL>
<AdaptationSet
id="1"
group="1"
contentType="audio"
segmentAlignment="true"
audioSamplingRate="48000"
mimeType="audio/mp4"
codecs="mp4a.40.2"
startWithSAP="1">
<AudioChannelConfiguration
schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011"
value="2" />
<Role schemeIdUri="urn:mpeg:dash:role:2011" value="main" />
<SegmentTemplate
timescale="48000"
initialization="3144-kZT4LWMQw6Rh7Kpd-$RepresentationID$.dash"
media="3144-kZT4LWMQw6Rh7Kpd-$RepresentationID$-$Time$.dash">
<SegmentTimeline>
<S t="0" d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="96256" r="2" />
<S d="95232" />
<S d="3584" />
</SegmentTimeline>
</SegmentTemplate>
<Representation
id="audio=128001"
bandwidth="128001">
</Representation>
</AdaptationSet>
<AdaptationSet
id="2"
group="3"
contentType="text"
lang="en"
mimeType="application/mp4"
codecs="stpp"
startWithSAP="1">
<Role schemeIdUri="urn:mpeg:dash:role:2011" value="subtitle" />
<SegmentTemplate
timescale="1000"
initialization="3144-kZT4LWMQw6Rh7Kpd-$RepresentationID$.dash"
media="3144-kZT4LWMQw6Rh7Kpd-$RepresentationID$-$Time$.dash">
<SegmentTimeline>
<S t="0" d="60000" r="9" />
<S d="24000" />
</SegmentTimeline>
</SegmentTemplate>
<Representation
id="textstream_eng=1000"
bandwidth="1000">
</Representation>
</AdaptationSet>
<AdaptationSet
id="3"
group="2"
contentType="video"
par="960:409"
minBandwidth="100000"
maxBandwidth="4482000"
maxWidth="1689"
maxHeight="720"
segmentAlignment="true"
mimeType="video/mp4"
codecs="avc1.4D401F"
startWithSAP="1">
<Role schemeIdUri="urn:mpeg:dash:role:2011" value="main" />
<SegmentTemplate
timescale="12288"
initialization="3144-kZT4LWMQw6Rh7Kpd-$RepresentationID$.dash"
media="3144-kZT4LWMQw6Rh7Kpd-$RepresentationID$-$Time$.dash">
<SegmentTimeline>
<S t="0" d="24576" r="443" />
</SegmentTimeline>
</SegmentTemplate>
<Representation
id="video=100000"
bandwidth="100000"
width="336"
height="144"
sar="2880:2863"
scanType="progressive">
</Representation>
<Representation
id="video=326000"
bandwidth="326000"
width="562"
height="240"
sar="115200:114929"
scanType="progressive">
</Representation>
<Representation
id="video=698000"
bandwidth="698000"
width="844"
height="360"
sar="86400:86299"
scanType="progressive">
</Representation>
<Representation
id="video=1493000"
bandwidth="1493000"
width="1126"
height="480"
sar="230400:230267"
scanType="progressive">
</Representation>
<Representation
id="video=4482000"
bandwidth="4482000"
width="1688"
height="720"
sar="86400:86299"
scanType="progressive">
</Representation>
</AdaptationSet>
</Period>
</MPD>

test/testdata/mpd/url_and_range.mpd (new file)

@ -0,0 +1,32 @@
<?xml version="1.0" ?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" minBufferTime="PT10.01S" mediaPresentationDuration="PT30.097S" type="static">
<!-- Created with Bento4 mp4-dash.py, VERSION=2.0.0-639 -->
<Period>
<!-- Video -->
<AdaptationSet mimeType="video/mp4" segmentAlignment="true" startWithSAP="1" maxWidth="768" maxHeight="432">
<Representation id="video-avc1" codecs="avc1.4D401E" width="768" height="432" scanType="progressive" frameRate="30000/1001" bandwidth="699597">
<SegmentList timescale="1000" duration="10010">
<Initialization sourceURL="video-frag.mp4" range="36-746"/>
<SegmentURL media="video-frag.mp4" mediaRange="747-876117"/>
<SegmentURL media="video-frag.mp4" mediaRange="876118-1466913"/>
<SegmentURL media="video-frag.mp4" mediaRange="1466914-1953954"/>
<SegmentURL media="video-frag.mp4" mediaRange="1953955-1994652"/>
</SegmentList>
</Representation>
</AdaptationSet>
<!-- Audio -->
<AdaptationSet mimeType="audio/mp4" startWithSAP="1" segmentAlignment="true">
<Representation id="audio-und-mp4a.40.2" codecs="mp4a.40.2" bandwidth="98808" audioSamplingRate="48000">
<AudioChannelConfiguration schemeIdUri="urn:mpeg:mpegB:cicp:ChannelConfiguration" value="2"/>
<SegmentList timescale="1000" duration="10010">
<Initialization sourceURL="audio-frag.mp4" range="32-623"/>
<SegmentURL media="audio-frag.mp4" mediaRange="624-124199"/>
<SegmentURL media="audio-frag.mp4" mediaRange="124200-250303"/>
<SegmentURL media="audio-frag.mp4" mediaRange="250304-374365"/>
<SegmentURL media="audio-frag.mp4" mediaRange="374366-374836"/>
</SegmentList>
</Representation>
</AdaptationSet>
</Period>
</MPD>

youtube_dl/YoutubeDL.py

@ -4,11 +4,10 @@
from __future__ import absolute_import, unicode_literals

import collections
-import contextlib
import copy
import datetime
import errno
-import fileinput
+import functools
import io
import itertools
import json
@ -26,25 +25,39 @@ import tokenize
import traceback
import random
try:
    from ssl import OPENSSL_VERSION
except ImportError:
    # Must be Python 2.6, should be built against 1.0.2
    OPENSSL_VERSION = 'OpenSSL 1.0.2(?)'
from string import ascii_letters

from .compat import (
    compat_basestring,
-    compat_cookiejar,
+    compat_collections_chain_map as ChainMap,
+    compat_filter as filter,
    compat_get_terminal_size,
    compat_http_client,
+    compat_http_cookiejar_Cookie,
+    compat_http_cookies_SimpleCookie,
+    compat_integer_types,
    compat_kwargs,
+    compat_map as map,
    compat_numeric_types,
+    compat_open as open,
    compat_os_name,
    compat_str,
    compat_tokenize_tokenize,
    compat_urllib_error,
+    compat_urllib_parse,
    compat_urllib_request,
    compat_urllib_request_DataHandler,
)
from .utils import (
+    _UnsafeExtensionError,
    age_restricted,
    args_to_str,
+    bug_reports_message,
    ContentTooShortError,
    date_from_str,
    DateRange,
@ -62,7 +75,9 @@ from .utils import (
    GeoRestrictedError,
    int_or_none,
    ISO3166Utils,
+    join_nonempty,
    locked_file,
+    LazyList,
    make_HTTPS_handler,
    MaxDownloadsReached,
    orderedSet,
@ -73,6 +88,7 @@ from .utils import (
    PostProcessingError,
    preferredencoding,
    prepend_extension,
+    process_communicate_or_kill,
    register_socks_protocols,
    render_table,
    replace_extension,
@ -84,6 +100,7 @@ from .utils import (
    std_headers,
    str_or_none,
    subtitles_filename,
+    traverse_obj,
    UnavailableVideoError,
    url_basename,
    version_tuple,
@ -93,6 +110,7 @@ from .utils import (
    YoutubeDLCookieProcessor,
    YoutubeDLHandler,
    YoutubeDLRedirectHandler,
+    ytdl_is_updateable,
)
from .cache import Cache
from .extractor import get_info_extractor, gen_extractor_classes, _LAZY_LOADER
@ -113,6 +131,20 @@ if compat_os_name == 'nt':
    import ctypes
def _catch_unsafe_file_extension(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except _UnsafeExtensionError as error:
            self.report_error(
                '{0} found; to avoid damaging your system, this value is disallowed.'
                ' If you believe this is an error{1}'.format(
                    error_to_compat_str(error), bug_reports_message(',')))

    return wrapper
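A minimal, self-contained sketch of the decorator pattern used above (stand-in names, not the library's API): the wrapped call runs normally, and only the specific "unsafe extension" error is converted into a reported message instead of propagating.

    import functools

    class UnsafeExtensionError(Exception):
        """Stand-in for utils._UnsafeExtensionError, for illustration only."""

    def catch_unsafe_file_extension(report_error):
        # Same shape as _catch_unsafe_file_extension above: run the wrapped
        # call and convert the specific error into a reported message.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    return func(*args, **kwargs)
                except UnsafeExtensionError as error:
                    report_error('%s found; this value is disallowed.' % error)
            return wrapper
        return decorator

    @catch_unsafe_file_extension(print)
    def process(name):
        if name.endswith('.bat'):
            raise UnsafeExtensionError(name)

    process('payload.bat')  # reports instead of raising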
class YoutubeDL(object):
    """YoutubeDL class.
@ -362,6 +394,9 @@ class YoutubeDL(object):
        self.params.update(params)
        self.cache = Cache(self)
+        self._header_cookies = []
+        self._load_cookies_from_headers(self.params.get('http_headers'))

        def check_deprecated(param, option, suggestion):
            if self.params.get(param) is not None:
                self.report_warning(
@ -568,7 +603,7 @@ class YoutubeDL(object):
        if self.params.get('cookiefile') is not None:
            self.cookiejar.save(ignore_discard=True, ignore_expires=True)
-    def trouble(self, message=None, tb=None):
+    def trouble(self, *args, **kwargs):
        """Determine action to take when a download problem appears.

        Depending on if the downloader has been configured to ignore
@ -577,6 +612,11 @@ class YoutubeDL(object):
        tb, if given, is additional traceback information.
        """
+        # message=None, tb=None, is_error=True
+        message = args[0] if len(args) > 0 else kwargs.get('message', None)
+        tb = args[1] if len(args) > 1 else kwargs.get('tb', None)
+        is_error = args[2] if len(args) > 2 else kwargs.get('is_error', True)
        if message is not None:
            self.to_stderr(message)
        if self.params.get('verbose'):
@ -589,7 +629,10 @@ class YoutubeDL(object):
            else:
                tb_data = traceback.format_list(traceback.extract_stack())
                tb = ''.join(tb_data)
-            self.to_stderr(tb)
+            if tb:
+                self.to_stderr(tb)
+        if not is_error:
+            return
        if not self.params.get('ignoreerrors', False):
            if sys.exc_info()[0] and hasattr(sys.exc_info()[1], 'exc_info') and sys.exc_info()[1].exc_info[0]:
                exc_info = sys.exc_info()[1].exc_info
@ -598,11 +641,18 @@ class YoutubeDL(object):
                raise DownloadError(message, exc_info)
            self._download_retcode = 1
-    def report_warning(self, message):
+    def report_warning(self, message, only_once=False, _cache={}):
        '''
        Print the message to stderr, it will be prefixed with 'WARNING:'
        If stderr is a tty file the 'WARNING:' will be colored
        '''
+        if only_once:
+            m_hash = hash((self, message))
+            m_cnt = _cache.setdefault(m_hash, 0)
+            _cache[m_hash] = m_cnt + 1
+            if m_cnt > 0:
+                return
        if self.params.get('logger') is not None:
            self.params['logger'].warning(message)
        else:
@ -615,7 +665,7 @@ class YoutubeDL(object):
            warning_message = '%s %s' % (_msg_header, message)
            self.to_stderr(warning_message)
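A hedged sketch of the new only_once behaviour: the mutable default argument _cache persists across calls, so a given (instance, message) pair is printed at most once. Here ydl stands for any configured YoutubeDL instance:

    ydl.report_warning('cookies will be scoped to downloaded urls', only_once=True)  # printed
    ydl.report_warning('cookies will be scoped to downloaded urls', only_once=True)  # suppressed via _cache
    ydl.report_warning('a different warning', only_once=True)                        # printed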
-    def report_error(self, message, tb=None):
+    def report_error(self, message, *args, **kwargs):
        '''
        Do the same as trouble, but prefixes the message with 'ERROR:', colored
        in red if stderr is a tty file.
@ -624,8 +674,18 @@ class YoutubeDL(object):
            _msg_header = '\033[0;31mERROR:\033[0m'
        else:
            _msg_header = 'ERROR:'
-        error_message = '%s %s' % (_msg_header, message)
-        self.trouble(error_message, tb)
+        kwargs['message'] = '%s %s' % (_msg_header, message)
+        self.trouble(*args, **kwargs)
    def report_unscoped_cookies(self, *args, **kwargs):
        # message=None, tb=False, is_error=False
        if len(args) <= 2:
            kwargs.setdefault('is_error', False)
            if len(args) <= 0:
                kwargs.setdefault(
                    'message',
                    'Unscoped cookies are not allowed: please specify some sort of scoping')
        self.report_error(*args, **kwargs)
    def report_file_already_downloaded(self, file_name):
        """Report file has already been fully downloaded."""
@ -720,7 +780,7 @@ class YoutubeDL(object):
            filename = encodeFilename(filename, True).decode(preferredencoding())
            return sanitize_path(filename)
        except ValueError as err:
-            self.report_error('Error in output template: ' + str(err) + ' (encoding: ' + repr(preferredencoding()) + ')')
+            self.report_error('Error in output template: ' + error_to_compat_str(err) + ' (encoding: ' + repr(preferredencoding()) + ')')
            return None
    def _match_entry(self, info_dict, incomplete):
@ -773,11 +833,20 @@ class YoutubeDL(object):
    def extract_info(self, url, download=True, ie_key=None, extra_info={},
                     process=True, force_generic_extractor=False):
-        '''
-        Returns a list with a dictionary for each video we find.
-        If 'download', also downloads the videos.
-        extra_info is a dict containing the extra values to add to each result
-        '''
+        """
+        Return a list with a dictionary for each video extracted.
+
+        Arguments:
+        url -- URL to extract
+
+        Keyword arguments:
+        download -- whether to download videos during extraction
+        ie_key -- extractor key hint
+        extra_info -- dictionary containing the extra values to add to each result
+        process -- whether to resolve all unresolved references (URLs, playlist items),
+                   must be True for download to work.
+        force_generic_extractor -- force using the generic extractor
+        """
        if not ie_key and force_generic_extractor:
            ie_key = 'Generic'
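A hedged usage sketch of the keyword arguments documented above (ydl is a hypothetical configured instance): with process=False the raw extractor result, possibly an unresolved 'url' result, is returned without downloading anything — this is exactly how the reworked playlist tests earlier in this diff use it.

    result = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc',
                              download=False, process=False)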
@ -812,7 +881,7 @@ class YoutubeDL(object):
                    msg += '\nYou might want to use a VPN or a proxy server (with --proxy) to workaround.'
                    self.report_error(msg)
            except ExtractorError as e:  # An error we somewhat expected
-                self.report_error(compat_str(e), e.format_traceback())
+                self.report_error(compat_str(e), tb=e.format_traceback())
            except MaxDownloadsReached:
                raise
            except Exception as e:
@ -822,8 +891,83 @@ class YoutubeDL(object):
                    raise
        return wrapper
    def _remove_cookie_header(self, http_headers):
        """Filters out `Cookie` header from an `http_headers` dict
        The `Cookie` header is removed to prevent leaks as a result of unscoped cookies.
        See: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-v8mc-9377-rwjj

        @param http_headers     An `http_headers` dict from which any `Cookie` header
                                should be removed, or None
        """
        return dict(filter(lambda pair: pair[0].lower() != 'cookie', (http_headers or {}).items()))

    def _load_cookies(self, data, **kwargs):
        """Loads cookies from a `Cookie` header

        This tries to work around the security vulnerability of passing cookies to every domain.

        @param data         The Cookie header as a string to load the cookies from
        @param autoscope    If `False`, scope cookies using Set-Cookie syntax and error for cookie without domains
                            If `True`, save cookies for later to be stored in the jar with a limited scope
                            If a URL, save cookies in the jar with the domain of the URL
        """
        # autoscope=True (kw-only)
        autoscope = kwargs.get('autoscope', True)

        for cookie in compat_http_cookies_SimpleCookie(data).values() if data else []:
            if autoscope and any(cookie.values()):
                raise ValueError('Invalid syntax in Cookie Header')

            domain = cookie.get('domain') or ''
            expiry = cookie.get('expires')
            if expiry == '':  # 0 is valid so we check for `''` explicitly
                expiry = None
            prepared_cookie = compat_http_cookiejar_Cookie(
                cookie.get('version') or 0, cookie.key, cookie.value, None, False,
                domain, True, True, cookie.get('path') or '', bool(cookie.get('path')),
                bool(cookie.get('secure')), expiry, False, None, None, {})

            if domain:
                self.cookiejar.set_cookie(prepared_cookie)
            elif autoscope is True:
                self.report_warning(
                    'Passing cookies as a header is a potential security risk; '
                    'they will be scoped to the domain of the downloaded urls. '
                    'Please consider loading cookies from a file or browser instead.',
                    only_once=True)
                self._header_cookies.append(prepared_cookie)
            elif autoscope:
                self.report_warning(
                    'The extractor result contains an unscoped cookie as an HTTP header. '
                    'If you are specifying an input URL, ' + bug_reports_message(),
                    only_once=True)
                self._apply_header_cookies(autoscope, [prepared_cookie])
            else:
                self.report_unscoped_cookies()

    def _load_cookies_from_headers(self, headers):
        self._load_cookies(traverse_obj(headers, 'cookie', casesense=False))

    def _apply_header_cookies(self, url, cookies=None):
        """This method applies stray header cookies to the provided url

        This loads header cookies and scopes them to the domain provided in `url`.
        While this is not ideal, it helps reduce the risk of them being sent to
        an unintended destination.
        """
        parsed = compat_urllib_parse.urlparse(url)
        if not parsed.hostname:
            return

        for cookie in map(copy.copy, cookies or self._header_cookies):
            cookie.domain = '.' + parsed.hostname
            self.cookiejar.set_cookie(cookie)
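A hedged end-to-end sketch of the autoscoping flow above, using an illustrative domain:

    from youtube_dl import YoutubeDL

    ydl = YoutubeDL({'http_headers': {'Cookie': 'sid=abc123'}})
    # __init__ called _load_cookies_from_headers(): the unscoped 'sid' cookie
    # is parked in ydl._header_cookies (with a one-time warning), not the jar.
    ydl._apply_header_cookies('https://example.com/watch?v=xyz')
    # The jar now holds sid=abc123 scoped to the domain '.example.com' only,
    # so it cannot leak to unrelated hosts contacted during extraction.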
    @__handle_extraction_exceptions
    def __extract_info(self, url, ie, download, extra_info, process):
+        # Compat with passing cookies in http headers
+        self._apply_header_cookies(url)
+
        ie_result = ie.extract(url)
        if ie_result is None:  # Finished already (backwards compatibility; listformats and friends should be moved here)
            return
@ -849,7 +993,7 @@ class YoutubeDL(object):
    def process_ie_result(self, ie_result, download=True, extra_info={}):
        """
        Take the result of the ie (may be modified) and resolve all unresolved
        references (URLs, playlist items).

        It will also download the videos if 'download'.
        elif result_type in ('playlist', 'multi_video'):
            # Protect from infinite recursion due to recursively nested playlists
            # (see https://github.com/ytdl-org/youtube-dl/issues/27833)
-            webpage_url = ie_result['webpage_url']
-            if webpage_url in self._playlist_urls:
+            webpage_url = ie_result.get('webpage_url')  # not all pl/mv have this
+            if webpage_url and webpage_url in self._playlist_urls:
                self.to_screen(
                    '[download] Skipping already downloaded playlist: %s'
                    % ie_result.get('title') or ie_result.get('id'))
@ -920,6 +1064,10 @@ class YoutubeDL(object):
            self._playlist_level += 1
            self._playlist_urls.add(webpage_url)
+            new_result = dict((k, v) for k, v in extra_info.items() if k not in ie_result)
+            if new_result:
+                new_result.update(ie_result)
+                ie_result = new_result
            try:
                return self.__process_playlist(ie_result, download)
            finally:
@ -1376,17 +1524,16 @@ class YoutubeDL(object):
                    'abr': formats_info[1].get('abr'),
                    'ext': output_ext,
                }

-            video_selector, audio_selector = map(_build_selector_function, selector.selector)
-
            def selector_function(ctx):
-                for pair in itertools.product(
-                        video_selector(copy.deepcopy(ctx)), audio_selector(copy.deepcopy(ctx))):
+                selector_fn = lambda x: _build_selector_function(x)(ctx)
+                for pair in itertools.product(*map(selector_fn, selector.selector)):
                    yield _merge(pair)

        filters = [self._build_format_filter(f) for f in selector.filters]

        def final_selector(ctx):
-            ctx_copy = copy.deepcopy(ctx)
+            ctx_copy = dict(ctx)
            for _filter in filters:
                ctx_copy['formats'] = list(filter(_filter, ctx_copy['formats']))
            return selector_function(ctx_copy)
@ -1421,29 +1568,73 @@ class YoutubeDL(object):
        parsed_selector = _parse_format_selection(iter(TokenIterator(tokens)))
        return _build_selector_function(parsed_selector)
-    def _calc_headers(self, info_dict):
-        res = std_headers.copy()
-
-        add_headers = info_dict.get('http_headers')
-        if add_headers:
-            res.update(add_headers)
-
-        cookies = self._calc_cookies(info_dict)
-        if cookies:
-            res['Cookie'] = cookies
+    def _calc_headers(self, info_dict, load_cookies=False):
+        if load_cookies:  # For --load-info-json
+            # load cookies from http_headers in legacy info.json
+            self._load_cookies(traverse_obj(info_dict, ('http_headers', 'Cookie'), casesense=False),
+                               autoscope=info_dict['url'])
+            # load scoped cookies from info.json
+            self._load_cookies(info_dict.get('cookies'), autoscope=False)
+
+        cookies = self.cookiejar.get_cookies_for_url(info_dict['url'])
+        if cookies:
+            # Make a string like name1=val1; attr1=a_val1; ...name2=val2; ...
+            # By convention a cookie name can't be a well-known attribute name
+            # so this syntax is unambiguous and can be parsed by (eg) SimpleCookie
+            encoder = compat_http_cookies_SimpleCookie()
+            values = []
+            attributes = (('Domain', '='), ('Path', '='), ('Secure',), ('Expires', '='), ('Version', '='))
+            attributes = tuple([x[0].lower()] + list(x) for x in attributes)
+            for cookie in cookies:
+                _, value = encoder.value_encode(cookie.value)
+                # Py 2 '' --> '', Py 3 '' --> '""'
+                if value == '':
+                    value = '""'
+                values.append('='.join((cookie.name, value)))
+                for attr in attributes:
+                    value = getattr(cookie, attr[0], None)
+                    if value:
+                        values.append('%s%s' % (''.join(attr[1:]), value if len(attr) == 3 else ''))
+            info_dict['cookies'] = '; '.join(values)
+
+        res = std_headers.copy()
+        res.update(info_dict.get('http_headers') or {})
+        res = self._remove_cookie_header(res)

        if 'X-Forwarded-For' not in res:
            x_forwarded_for_ip = info_dict.get('__x_forwarded_for_ip')
            if x_forwarded_for_ip:
                res['X-Forwarded-For'] = x_forwarded_for_ip

-        return res
+        return res or None

    def _calc_cookies(self, info_dict):
        pr = sanitized_Request(info_dict['url'])
        self.cookiejar.add_cookie_header(pr)
        return pr.get_header('Cookie')
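A hedged illustration of the cookie string format built above: name=value pairs followed by their attributes, joined with '; '. Because attribute names like Domain and Path are reserved, SimpleCookie can split the string back apart (the values here are illustrative):

    from youtube_dl.compat import compat_http_cookies_SimpleCookie

    cookie_str = 'sid=abc123; Domain=.example.com; Path=/; Secure'
    parsed = compat_http_cookies_SimpleCookie(cookie_str)
    assert parsed['sid'].value == 'abc123'
    assert parsed['sid']['domain'] == '.example.com'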
    def _fill_common_fields(self, info_dict, final=True):

        for ts_key, date_key in (
                ('timestamp', 'upload_date'),
                ('release_timestamp', 'release_date'),
        ):
            if info_dict.get(date_key) is None and info_dict.get(ts_key) is not None:
                # Working around out-of-range timestamp values (e.g. negative ones on Windows,
                # see http://bugs.python.org/issue1646728)
                try:
                    upload_date = datetime.datetime.utcfromtimestamp(info_dict[ts_key])
                    info_dict[date_key] = compat_str(upload_date.strftime('%Y%m%d'))
                except (ValueError, OverflowError, OSError):
                    pass

        # Auto generate title fields corresponding to the *_number fields when missing
        # in order to always have clean titles. This is very common for TV series.
        if final:
            for field in ('chapter', 'season', 'episode'):
                if info_dict.get('%s_number' % field) is not None and not info_dict.get(field):
                    info_dict[field] = '%s %d' % (field.capitalize(), info_dict['%s_number' % field])
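A hedged example of _fill_common_fields() on a minimal info dict (ydl is a hypothetical instance):

    info = {'timestamp': 1577836800, 'episode_number': 3}
    # after ydl._fill_common_fields(info):
    #   info['upload_date'] == '20200101'  (UTC date derived from the timestamp)
    #   info['episode'] == 'Episode 3'     (auto-generated from episode_number)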
    def process_video_result(self, info_dict, download=True):
        assert info_dict.get('_type', 'video') == 'video'
@ -1511,20 +1702,7 @@ class YoutubeDL(object):
        if 'display_id' not in info_dict and 'id' in info_dict:
            info_dict['display_id'] = info_dict['id']

-        if info_dict.get('upload_date') is None and info_dict.get('timestamp') is not None:
-            # Working around out-of-range timestamp values (e.g. negative ones on Windows,
-            # see http://bugs.python.org/issue1646728)
-            try:
-                upload_date = datetime.datetime.utcfromtimestamp(info_dict['timestamp'])
-                info_dict['upload_date'] = upload_date.strftime('%Y%m%d')
-            except (ValueError, OverflowError, OSError):
-                pass
-
-        # Auto generate title fields corresponding to the *_number fields when missing
-        # in order to always have clean titles. This is very common for TV series.
-        for field in ('chapter', 'season', 'episode'):
-            if info_dict.get('%s_number' % field) is not None and not info_dict.get(field):
-                info_dict[field] = '%s %d' % (field.capitalize(), info_dict['%s_number' % field])
+        self._fill_common_fields(info_dict)
        for cc_kind in ('subtitles', 'automatic_captions'):
            cc = info_dict.get(cc_kind)
@ -1556,9 +1734,6 @@ class YoutubeDL(object):
        else:
            formats = info_dict['formats']

-        if not formats:
-            raise ExtractorError('No video formats found!')
-
        def is_wellformed(f):
            url = f.get('url')
            if not url:
@ -1571,7 +1746,10 @@ class YoutubeDL(object):
            return True

        # Filter out malformed formats for better extraction robustness
-        formats = list(filter(is_wellformed, formats))
+        formats = list(filter(is_wellformed, formats or []))
+
+        if not formats:
+            raise ExtractorError('No video formats found!')
        formats_dict = {}
@ -1612,10 +1790,13 @@ class YoutubeDL(object):
            format['protocol'] = determine_protocol(format)
            # Add HTTP headers, so that external programs can use them from the
            # json output
-            full_format_info = info_dict.copy()
-            full_format_info.update(format)
-            format['http_headers'] = self._calc_headers(full_format_info)
+            format['http_headers'] = self._calc_headers(ChainMap(format, info_dict), load_cookies=True)
+
+        # Safeguard against old/insecure infojson when using --load-info-json
+        info_dict['http_headers'] = self._remove_cookie_header(
+            info_dict.get('http_headers') or {}) or None
+
+        # Remove private housekeeping stuff (copied to http_headers in _calc_headers())
        if '__x_forwarded_for_ip' in info_dict:
            del info_dict['__x_forwarded_for_ip']
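A hedged sketch of why ChainMap replaces the old copy-and-update: format-level http_headers are consulted first, falling back to the video-level ones, without copying the whole info dict for every format:

    from youtube_dl.compat import compat_collections_chain_map as ChainMap

    merged = ChainMap({'format_id': '22'}, {'url': 'https://example.com/v.mp4'})
    assert merged['format_id'] == '22'                    # first map wins
    assert merged['url'] == 'https://example.com/v.mp4'   # fallback lookup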
@ -1758,17 +1939,17 @@ class YoutubeDL(object):
                self.to_stdout(formatSeconds(info_dict['duration']))
            print_mandatory('format')
        if self.params.get('forcejson', False):
-            self.to_stdout(json.dumps(info_dict))
+            self.to_stdout(json.dumps(self.sanitize_info(info_dict)))
+    @_catch_unsafe_file_extension
    def process_info(self, info_dict):
        """Process a single resolved IE result."""

        assert info_dict.get('_type', 'video') == 'video'

-        max_downloads = self.params.get('max_downloads')
-        if max_downloads is not None:
-            if self._num_downloads >= int(max_downloads):
-                raise MaxDownloadsReached()
+        max_downloads = int_or_none(self.params.get('max_downloads')) or float('inf')
+        if self._num_downloads >= max_downloads:
+            raise MaxDownloadsReached()

        # TODO: backward compatibility, to be removed
        info_dict['fulltitle'] = info_dict['title']
@ -1819,7 +2000,7 @@ class YoutubeDL(object):
        else:
            try:
                self.to_screen('[info] Writing video description to: ' + descfn)
-                with io.open(encodeFilename(descfn), 'w', encoding='utf-8') as descfile:
+                with open(encodeFilename(descfn), 'w', encoding='utf-8') as descfile:
                    descfile.write(info_dict['description'])
            except (OSError, IOError):
                self.report_error('Cannot write description file ' + descfn)
@ -1834,7 +2015,7 @@ class YoutubeDL(object):
        else:
            try:
                self.to_screen('[info] Writing video annotations to: ' + annofn)
-                with io.open(encodeFilename(annofn), 'w', encoding='utf-8') as annofile:
+                with open(encodeFilename(annofn), 'w', encoding='utf-8') as annofile:
                    annofile.write(info_dict['annotations'])
            except (KeyError, TypeError):
                self.report_warning('There are no annotations to write.')
@ -1861,7 +2042,7 @@ class YoutubeDL(object):
                    try:
                        # Use newline='' to prevent conversion of newline characters
                        # See https://github.com/ytdl-org/youtube-dl/issues/10268
-                        with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8', newline='') as subfile:
+                        with open(encodeFilename(sub_filename), 'w', encoding='utf-8', newline='') as subfile:
                            subfile.write(sub_info['data'])
                    except (OSError, IOError):
                        self.report_error('Cannot write subtitles file ' + sub_filename)
@ -1870,36 +2051,41 @@ class YoutubeDL(object):
                    try:
                        sub_data = ie._request_webpage(
                            sub_info['url'], info_dict['id'], note=False).read()
-                        with io.open(encodeFilename(sub_filename), 'wb') as subfile:
+                        with open(encodeFilename(sub_filename), 'wb') as subfile:
                            subfile.write(sub_data)
                    except (ExtractorError, IOError, OSError, ValueError) as err:
                        self.report_warning('Unable to download subtitle for "%s": %s' %
                                            (sub_lang, error_to_compat_str(err)))
                        continue
-        if self.params.get('writeinfojson', False):
-            infofn = replace_extension(filename, 'info.json', info_dict.get('ext'))
-            if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(infofn)):
-                self.to_screen('[info] Video description metadata is already present')
-            else:
-                self.to_screen('[info] Writing video description metadata as JSON to: ' + infofn)
-                try:
-                    write_json_file(self.filter_requested_info(info_dict), infofn)
-                except (OSError, IOError):
-                    self.report_error('Cannot write metadata to JSON file ' + infofn)
-                    return
+        self._write_info_json(
+            'video description', info_dict,
+            replace_extension(filename, 'info.json', info_dict.get('ext')))

        self._write_thumbnails(info_dict, filename)
        if not self.params.get('skip_download', False):
            try:
+                def checked_get_suitable_downloader(info_dict, params):
+                    ed_args = params.get('external_downloader_args')
+                    dler = get_suitable_downloader(info_dict, params)
+                    if ed_args and not params.get('external_downloader_args'):
+                        # external_downloader_args was cleared because external_downloader was rejected
+                        self.report_warning('Requested external downloader cannot be used: '
+                                            'ignoring --external-downloader-args.')
+                    return dler

                def dl(name, info):
-                    fd = get_suitable_downloader(info, self.params)(self, self.params)
+                    fd = checked_get_suitable_downloader(info, self.params)(self, self.params)
                    for ph in self._progress_hooks:
                        fd.add_progress_hook(ph)
                    if self.params.get('verbose'):
                        self.to_screen('[debug] Invoking downloader on %r' % info.get('url'))
-                    return fd.download(name, info)
+
+                    new_info = dict((k, v) for k, v in info.items() if not k.startswith('__p'))
+                    new_info['http_headers'] = self._calc_headers(new_info)
+                    return fd.download(name, new_info)
                if info_dict.get('requested_formats') is not None:
                    downloaded = []
@ -1928,18 +2114,26 @@ class YoutubeDL(object):
                        # TODO: Check acodec/vcodec
                        return False

-                    filename_real_ext = os.path.splitext(filename)[1][1:]
-                    filename_wo_ext = (
-                        os.path.splitext(filename)[0]
-                        if filename_real_ext == info_dict['ext']
-                        else filename)
+                    exts = [info_dict['ext']]
                    requested_formats = info_dict['requested_formats']
                    if self.params.get('merge_output_format') is None and not compatible_formats(requested_formats):
                        info_dict['ext'] = 'mkv'
                        self.report_warning(
                            'Requested formats are incompatible for merge and will be merged into mkv.')
+                    exts.append(info_dict['ext'])

                    # Ensure filename always has a correct extension for successful merge
-                    filename = '%s.%s' % (filename_wo_ext, info_dict['ext'])
+                    def correct_ext(filename, ext=exts[1]):
+                        if filename == '-':
+                            return filename
+                        f_name, f_real_ext = os.path.splitext(filename)
+                        f_real_ext = f_real_ext[1:]
+                        filename_wo_ext = f_name if f_real_ext in exts else filename
+                        if ext is None:
+                            ext = f_real_ext or None
+                        return join_nonempty(filename_wo_ext, ext, delim='.')
+
+                    filename = correct_ext(filename)
                    if os.path.exists(encodeFilename(filename)):
                        self.to_screen(
                            '[download] %s has already been downloaded and '
@ -1949,8 +2143,9 @@ class YoutubeDL(object):
                            new_info = dict(info_dict)
                            new_info.update(f)
                            fname = prepend_extension(
-                                self.prepare_filename(new_info),
-                                'f%s' % f['format_id'], new_info['ext'])
+                                correct_ext(
+                                    self.prepare_filename(new_info), new_info['ext']),
+                                'f%s' % (f['format_id'],), new_info['ext'])
                            if not ensure_dir_exists(fname):
                                return
                            downloaded.append(fname)
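A hedged, standalone sketch of the correct_ext() helper introduced above (join_nonempty is inlined here so the snippet runs on its own):

    import os

    exts = ['mp4', 'mkv']  # original ext, then merge ext

    def correct_ext(filename, ext=exts[1]):
        # standalone copy of the helper above, for illustration
        if filename == '-':
            return filename
        f_name, f_real_ext = os.path.splitext(filename)
        f_real_ext = f_real_ext[1:]
        filename_wo_ext = f_name if f_real_ext in exts else filename
        if ext is None:
            ext = f_real_ext or None
        return '.'.join(x for x in (filename_wo_ext, ext) if x)

    assert correct_ext('video.mp4') == 'video.mkv'        # known ext replaced
    assert correct_ext('video.webm') == 'video.webm.mkv'  # unknown ext kept
    assert correct_ext('-') == '-'                        # stdout passes through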
@ -2036,9 +2231,12 @@ class YoutubeDL(object):
            try:
                self.post_process(filename, info_dict)
            except (PostProcessingError) as err:
-                self.report_error('postprocessing: %s' % str(err))
+                self.report_error('postprocessing: %s' % error_to_compat_str(err))
                return
        self.record_download_archive(info_dict)
+        # avoid possible nugatory search for further items (PR #26638)
+        if self._num_downloads >= max_downloads:
+            raise MaxDownloadsReached()
    def download(self, url_list):
        """Download a given list of URLs."""
@ -2061,16 +2259,13 @@ class YoutubeDL(object):
                raise
            else:
                if self.params.get('dump_single_json', False):
-                    self.to_stdout(json.dumps(res))
+                    self.to_stdout(json.dumps(self.sanitize_info(res)))

        return self._download_retcode

    def download_with_info_file(self, info_filename):
-        with contextlib.closing(fileinput.FileInput(
-                [info_filename], mode='r',
-                openhook=fileinput.hook_encoded('utf-8'))) as f:
-            # FileInput doesn't have a read method, we can't call json.load
-            info = self.filter_requested_info(json.loads('\n'.join(f)))
+        with open(info_filename, encoding='utf-8') as f:
+            info = self.filter_requested_info(json.load(f))
        try:
            self.process_ie_result(info, download=True)
        except DownloadError:
@@ -2083,10 +2278,36 @@ class YoutubeDL(object):
    return self._download_retcode

@staticmethod
def sanitize_info(info_dict, remove_private_keys=False):
    ''' Sanitize the infodict for converting to json '''
    if info_dict is None:
        return info_dict

    if remove_private_keys:
        reject = lambda k, v: (v is None
                               or k.startswith('__')
                               or k in ('requested_formats',
                                        'requested_subtitles'))
    else:
        reject = lambda k, v: False

    def filter_fn(obj):
        if isinstance(obj, dict):
            return dict((k, filter_fn(v)) for k, v in obj.items() if not reject(k, v))
        elif isinstance(obj, (list, tuple, set, LazyList)):
            return list(map(filter_fn, obj))
        elif obj is None or any(isinstance(obj, c)
                                for c in (compat_integer_types,
                                          (compat_str, float, bool))):
            return obj
        else:
            return repr(obj)

    return filter_fn(info_dict)

@classmethod
def filter_requested_info(cls, info_dict):
    return cls.sanitize_info(info_dict, True)
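A rough sketch of what the sanitizer does to a typical info dict (keys and values invented for illustration, not part of the diff):

    info = {'id': 'abc123', 'title': 'Example', '__private': 'x',
            'requested_formats': [{'url': 'https://example.com/f'}],
            'some_object': object()}
    clean = YoutubeDL.sanitize_info(info, remove_private_keys=True)
    # dunder and private keys are dropped; the bare object survives as its repr()
    assert set(clean) == {'id', 'title', 'some_object'}
    assert isinstance(clean['some_object'], str)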
def post_process(self, filename, ie_info):
    """Run all the postprocessors on the given file."""
@@ -2293,18 +2514,21 @@ class YoutubeDL(object):
    self.get_encoding()))
write_string(encoding_str, encoding=None)

writeln_debug = lambda *s: self._write_string('[debug] %s\n' % (''.join(s), ))
writeln_debug('youtube-dl version ', __version__)
if _LAZY_LOADER:
    writeln_debug('Lazy loading extractors enabled')
if ytdl_is_updateable():
    writeln_debug('Single file build')
try:
    sp = subprocess.Popen(
        ['git', 'rev-parse', '--short', 'HEAD'],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        cwd=os.path.dirname(os.path.abspath(__file__)))
    out, err = process_communicate_or_kill(sp)
    out = out.decode().strip()
    if re.match('[0-9a-f]+', out):
        writeln_debug('Git HEAD: ', out)
except Exception:
    try:
        sys.exc_clear()
@@ -2317,9 +2541,22 @@ class YoutubeDL(object):
        return impl_name + ' version %d.%d.%d' % sys.pypy_version_info[:3]
    return impl_name

def libc_ver():
    try:
        return platform.libc_ver()
    except OSError:  # We may not have access to the executable
        return []

libc = join_nonempty(*libc_ver(), delim=' ')
writeln_debug('Python %s (%s %s %s) - %s - %s%s' % (
    platform.python_version(),
    python_implementation(),
    platform.machine(),
    platform.architecture()[0],
    platform_name(),
    OPENSSL_VERSION,
    (' - %s' % (libc, )) if libc else ''
))
exe_versions = FFmpegPostProcessor.get_versions(self)
exe_versions['rtmpdump'] = rtmpdump_version()
@@ -2331,17 +2568,17 @@ class YoutubeDL(object):
)
if not exe_str:
    exe_str = 'none'
writeln_debug('exe versions: %s' % (exe_str, ))

proxy_map = {}
for handler in self._opener.handlers:
    if hasattr(handler, 'proxies'):
        proxy_map.update(handler.proxies)
writeln_debug('Proxy map: ', compat_str(proxy_map))

if self.params.get('call_home', False):
    ipaddr = self.urlopen('https://yt-dl.org/ip').read().decode('utf-8')
    writeln_debug('Public IP address: %s' % (ipaddr, ))
    latest_version = self.urlopen(
        'https://yt-dl.org/latest/version').read().decode('utf-8')
    if version_tuple(latest_version) > version_tuple(__version__):
@@ -2358,7 +2595,7 @@ class YoutubeDL(object):
opts_proxy = self.params.get('proxy')

if opts_cookiefile is None:
    self.cookiejar = YoutubeDLCookieJar()
else:
    opts_cookiefile = expand_path(opts_cookiefile)
    self.cookiejar = YoutubeDLCookieJar(opts_cookiefile)
@@ -2419,6 +2656,28 @@ class YoutubeDL(object):
    encoding = preferredencoding()
    return encoding
def _write_info_json(self, label, info_dict, infofn, overwrite=None):
    if not self.params.get('writeinfojson', False):
        return False

    def msg(fmt, lbl):
        return fmt % (lbl + ' metadata',)

    if overwrite is None:
        overwrite = not self.params.get('nooverwrites', False)
    if not overwrite and os.path.exists(encodeFilename(infofn)):
        self.to_screen(msg('[info] %s is already present', label.title()))
        return 'exists'
    else:
        self.to_screen(msg('[info] Writing %s as JSON to: ', label) + infofn)
        try:
            write_json_file(self.filter_requested_info(info_dict), infofn)
            return True
        except (OSError, IOError):
            self.report_error(msg('Cannot write %s to JSON file ', label) + infofn)
            return
def _write_thumbnails(self, info_dict, filename):
    if self.params.get('writethumbnail', False):
        thumbnails = info_dict.get('thumbnails')

youtube_dl/__init__.py

@@ -5,7 +5,6 @@ from __future__ import unicode_literals
__license__ = 'Public Domain'

import io
import os
import random
@@ -17,10 +16,12 @@ from .options import (
)
from .compat import (
    compat_getpass,
    compat_register_utf8,
    compat_shlex_split,
    workaround_optparse_bug9161,
)
from .utils import (
    _UnsafeExtensionError,
    DateRange,
    decodeOption,
    DEFAULT_OUTTMPL,
@@ -46,10 +47,8 @@ from .YoutubeDL import YoutubeDL


def _real_main(argv=None):
    # Compatibility fix for Windows
    compat_register_utf8()

    workaround_optparse_bug9161()
@@ -175,6 +174,9 @@ def _real_main(argv=None):
    if opts.ap_mso and opts.ap_mso not in MSO_INFO:
        parser.error('Unsupported TV Provider, use --ap-list-mso to get a list of supported TV Providers')

    if opts.no_check_extensions:
        _UnsafeExtensionError.lenient = True

    def parse_retries(retries):
        if retries in ('inf', 'infinite'):
            parsed_retries = float('inf')

youtube_dl/aes.py

@@ -8,6 +8,18 @@ from .utils import bytes_to_intlist, intlist_to_bytes

BLOCK_SIZE_BYTES = 16
def pkcs7_padding(data):
    """
    PKCS#7 padding

    @param {int[]} data        cleartext
    @returns {int[]}           padded data
    """
    remaining_length = BLOCK_SIZE_BYTES - len(data) % BLOCK_SIZE_BYTES
    return data + [remaining_length] * remaining_length
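Worked through quickly (using the int-list convention of this module): a 13-byte input gains three bytes of value 3, and an already-full 16-byte block gains a whole extra block of 16s, as PKCS#7 requires:

    assert pkcs7_padding([0] * 13) == [0] * 13 + [3, 3, 3]
    assert pkcs7_padding([0] * 16) == [0] * 16 + [16] * 16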
def aes_ctr_decrypt(data, key, counter):
    """
    Decrypt with aes in counter mode
@@ -76,8 +88,7 @@ def aes_cbc_encrypt(data, key, iv):
    previous_cipher_block = iv
    for i in range(block_count):
        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
        block = pkcs7_padding(block)

        mixed_block = xor(block, previous_cipher_block)
        encrypted_block = aes_encrypt(mixed_block, expanded_key)
@@ -88,6 +99,28 @@ def aes_cbc_encrypt(data, key, iv):
    return encrypted_data
def aes_ecb_encrypt(data, key):
    """
    Encrypt with aes in ECB mode. Using PKCS#7 padding

    @param {int[]} data        cleartext
    @param {int[]} key         16/24/32-Byte cipher key
    @returns {int[]}           encrypted data
    """
    expanded_key = key_expansion(key)
    block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))

    encrypted_data = []
    for i in range(block_count):
        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
        block = pkcs7_padding(block)
        encrypted_block = aes_encrypt(block, expanded_key)
        encrypted_data += encrypted_block
    return encrypted_data
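A minimal usage sketch (key and plaintext invented); the module works on lists of ints, so bytes are converted at the boundary with the helpers imported at the top of the file:

    from youtube_dl.utils import bytes_to_intlist, intlist_to_bytes

    key = bytes_to_intlist(b'0123456789abcdef')    # 16 bytes -> AES-128
    data = bytes_to_intlist(b'attack at dawn')     # 14 bytes -> one padded block
    ct = intlist_to_bytes(aes_ecb_encrypt(data, key))
    assert len(ct) == 16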
def key_expansion(data):
    """
    Generate key schedule
@@ -303,7 +336,7 @@ def xor(data1, data2):

def rijndael_mul(a, b):
    if (a == 0 or b == 0):
        return 0
    return RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[a] + RIJNDAEL_LOG_TABLE[b]) % 0xFF]

youtube_dl/cache.py

@@ -1,21 +1,32 @@
from __future__ import unicode_literals

import errno
import json
import os
import re
import shutil
import traceback

from .compat import (
    compat_getenv,
    compat_open as open,
)
from .utils import (
    error_to_compat_str,
    expand_path,
    is_outdated_version,
    try_get,
    write_json_file,
)
from .version import __version__


class Cache(object):

    _YTDL_DIR = 'youtube-dl'
    _VERSION_KEY = _YTDL_DIR + '_version'
    _DEFAULT_VERSION = '2021.12.17'

    def __init__(self, ydl):
        self._ydl = ydl
@@ -23,7 +34,7 @@ class Cache(object):
    res = self._ydl.params.get('cachedir')
    if res is None:
        cache_root = compat_getenv('XDG_CACHE_HOME', '~/.cache')
        res = os.path.join(cache_root, self._YTDL_DIR)
    return expand_path(res)

def _get_cache_fn(self, section, key, dtype):
@@ -50,13 +61,22 @@ class Cache(object):
        except OSError as ose:
            if ose.errno != errno.EEXIST:
                raise
        write_json_file({self._VERSION_KEY: __version__, 'data': data}, fn)
    except Exception:
        tb = traceback.format_exc()
        self._ydl.report_warning(
            'Writing cache to %r failed: %s' % (fn, tb))

def _validate(self, data, min_ver):
    version = try_get(data, lambda x: x[self._VERSION_KEY])
    if not version:  # Backward compatibility
        data, version = {'data': data}, self._DEFAULT_VERSION
    if not is_outdated_version(version, min_ver or '0', assume_new=False):
        return data['data']
    self._ydl.to_screen(
        'Discarding old cache from version {version} (needs {min_ver})'.format(**locals()))

def load(self, section, key, dtype='json', default=None, min_ver=None):
    assert dtype in ('json',)

    if not self.enabled:
@@ -65,13 +85,13 @@ class Cache(object):
    cache_fn = self._get_cache_fn(section, key, dtype)
    try:
        try:
            with open(cache_fn, 'r', encoding='utf-8') as cachef:
                return self._validate(json.load(cachef), min_ver)
        except ValueError:
            try:
                file_size = os.path.getsize(cache_fn)
            except (OSError, IOError) as oe:
                file_size = error_to_compat_str(oe)
            self._ydl.report_warning(
                'Cache retrieval from %s failed (%s)' % (cache_fn, file_size))
    except IOError:
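The resulting on-disk layout and version gate, roughly (section, key and version invented for illustration): store() wraps the payload as {'youtube-dl_version': <writer version>, 'data': <payload>} and load() hands back only the 'data' part, discarding entries older than min_ver; a bare unversioned entry is treated as written by _DEFAULT_VERSION:

    ydl.cache.store('youtube-sigfuncs', 'js_abc', {'n': '...'})
    sig = ydl.cache.load('youtube-sigfuncs', 'js_abc', min_ver='2021.12.17')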

youtube_dl/casefold.py  (new file, 1667 lines; diff suppressed because it is too large)

youtube_dl/compat.py

@@ -1,10 +1,12 @@
# coding: utf-8
from __future__ import unicode_literals
from __future__ import division

import base64
import binascii
import collections
import ctypes
import datetime
import email
import getpass
import io
@@ -19,14 +21,64 @@ import socket
import struct
import subprocess
import sys
import types
import xml.etree.ElementTree
# naming convention
# 'compat_' + Python3_name.replace('.', '_')
# other aliases exist for convenience and/or legacy

# deal with critical unicode/str things first
try:
    # Python 2
    compat_str, compat_basestring, compat_chr = (
        unicode, basestring, unichr
    )
except NameError:
    compat_str, compat_basestring, compat_chr = (
        str, (str, bytes), chr
    )

# casefold
try:
    compat_str.casefold
    compat_casefold = lambda s: s.casefold()
except AttributeError:
    from .casefold import casefold as compat_casefold

try:
    import collections.abc as compat_collections_abc
except ImportError:
    import collections as compat_collections_abc
try:
    import urllib.request as compat_urllib_request
except ImportError:  # Python 2
    import urllib2 as compat_urllib_request

# Also fix up lack of method arg in old Pythons
try:
    type(compat_urllib_request.Request('http://127.0.0.1', method='GET'))
except TypeError:
    def _add_init_method_arg(cls):
        init = cls.__init__

        def wrapped_init(self, *args, **kwargs):
            method = kwargs.pop('method', 'GET')
            init(self, *args, **kwargs)
            if any(callable(x.__dict__.get('get_method')) for x in (self.__class__, self) if x != cls):
                # allow instance or its subclass to override get_method()
                return
            if self.has_data() and method == 'GET':
                method = 'POST'
            self.get_method = types.MethodType(lambda _: method, self)

        cls.__init__ = wrapped_init

    _add_init_method_arg(compat_urllib_request.Request)

    del _add_init_method_arg
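With that shim in place, Python 2 call sites can pass method= exactly as on Python 3 (URL invented):

    req = compat_urllib_request.Request('http://example.com/api', data=b'{}', method='PUT')
    assert req.get_method() == 'PUT'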
try:
    import urllib.error as compat_urllib_error
except ImportError:  # Python 2
@@ -36,26 +88,32 @@ try:
    import urllib.parse as compat_urllib_parse
except ImportError:  # Python 2
    import urllib as compat_urllib_parse
    import urlparse as _urlparse
    for a in dir(_urlparse):
        if not hasattr(compat_urllib_parse, a):
            setattr(compat_urllib_parse, a, getattr(_urlparse, a))
    del _urlparse

# unfavoured aliases
compat_urlparse = compat_urllib_parse
compat_urllib_parse_urlparse = compat_urllib_parse.urlparse

try:
    import urllib.response as compat_urllib_response
except ImportError:  # Python 2
    import urllib as compat_urllib_response

try:
    compat_urllib_response.addinfourl.status
except AttributeError:
    # .getcode() is deprecated in Py 3.
    compat_urllib_response.addinfourl.status = property(lambda self: self.getcode())
try:
    import http.cookiejar as compat_cookiejar
except ImportError:  # Python 2
    import cookielib as compat_cookiejar
compat_http_cookiejar = compat_cookiejar

if sys.version_info[0] == 2:
    class compat_cookiejar_Cookie(compat_cookiejar.Cookie):
@@ -67,11 +125,35 @@ if sys.version_info[0] == 2:
            compat_cookiejar.Cookie.__init__(self, version, name, value, *args, **kwargs)
else:
    compat_cookiejar_Cookie = compat_cookiejar.Cookie
compat_http_cookiejar_Cookie = compat_cookiejar_Cookie

try:
    import http.cookies as compat_cookies
except ImportError:  # Python 2
    import Cookie as compat_cookies
compat_http_cookies = compat_cookies

if sys.version_info[0] == 2 or sys.version_info < (3, 3):
    class compat_cookies_SimpleCookie(compat_cookies.SimpleCookie):
        def load(self, rawdata):
            must_have_value = 0
            if not isinstance(rawdata, dict):
                if sys.version_info[:2] != (2, 7) or sys.platform.startswith('java'):
                    # attribute must have value for parsing
                    rawdata, must_have_value = re.subn(
                        r'(?i)(;\s*)(secure|httponly)(\s*(?:;|$))', r'\1\2=\2\3', rawdata)
                if sys.version_info[0] == 2:
                    if isinstance(rawdata, compat_str):
                        rawdata = str(rawdata)
            super(compat_cookies_SimpleCookie, self).load(rawdata)
            if must_have_value > 0:
                for morsel in self.values():
                    for attr in ('secure', 'httponly'):
                        if morsel.get(attr):
                            morsel[attr] = True
else:
    compat_cookies_SimpleCookie = compat_cookies.SimpleCookie
compat_http_cookies_SimpleCookie = compat_cookies_SimpleCookie
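Net effect of the patch (cookie string invented): valueless Secure/HttpOnly attributes now parse on old Pythons and read back as true flags, as on modern ones:

    c = compat_http_cookies_SimpleCookie()
    c.load('id=a3fWa; Path=/; Secure; HttpOnly')
    assert c['id']['secure'] and c['id']['httponly']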
try:
    import html.entities as compat_html_entities
@@ -2320,39 +2402,45 @@ try:
try:
    import http.client as compat_http_client
except ImportError:  # Python 2
    import httplib as compat_http_client

try:
    compat_http_client.HTTPResponse.getcode
except AttributeError:
    # Py < 3.1
    compat_http_client.HTTPResponse.getcode = lambda self: self.status
try:
    from urllib.error import HTTPError as compat_HTTPError
except ImportError:  # Python 2
    from urllib2 import HTTPError as compat_HTTPError
compat_urllib_HTTPError = compat_HTTPError

try:
    from urllib.request import urlretrieve as compat_urlretrieve
except ImportError:  # Python 2
    from urllib import urlretrieve as compat_urlretrieve
compat_urllib_request_urlretrieve = compat_urlretrieve
try:
    from html.parser import HTMLParser as compat_HTMLParser
except ImportError:  # Python 2
    from HTMLParser import HTMLParser as compat_HTMLParser

try:  # Python 2
    from HTMLParser import HTMLParseError as compat_HTMLParseError
except ImportError:  # Python <3.4
    try:
        from html.parser import HTMLParseError as compat_HTMLParseError
    except ImportError:  # Python >3.4
        # HTMLParseError was deprecated in Python 3.3 and removed in
        # Python 3.5. Introducing dummy exception for Python >3.5 for compatible
        # and uniform cross-version exception handling
        class compat_HTMLParseError(Exception):
            pass
compat_html_parser_HTMLParser = compat_HTMLParser
compat_html_parser_HTMLParseError = compat_HTMLParseError
try:
    _DEVNULL = subprocess.DEVNULL
    compat_subprocess_get_DEVNULL = lambda: _DEVNULL
except AttributeError:
    compat_subprocess_get_DEVNULL = lambda: open(os.path.devnull, 'w')
try:
@@ -2360,15 +2448,12 @@ try:
except ImportError:
    import BaseHTTPServer as compat_http_server
try:
    from urllib.parse import unquote_to_bytes as compat_urllib_parse_unquote_to_bytes
    from urllib.parse import unquote as compat_urllib_parse_unquote
    from urllib.parse import unquote_plus as compat_urllib_parse_unquote_plus
    from urllib.parse import urlencode as compat_urllib_parse_urlencode
    from urllib.parse import parse_qs as compat_parse_qs
except ImportError:  # Python 2
    _asciire = (compat_urllib_parse._asciire if hasattr(compat_urllib_parse, '_asciire')
                else re.compile(r'([\x00-\x7f]+)'))
@@ -2435,9 +2520,6 @@ except ImportError:  # Python 2
        string = string.replace('+', ' ')
        return compat_urllib_parse_unquote(string, encoding, errors)

    # Python 2 will choke in urlencode on mixture of byte and unicode strings.
    # Possible solutions are to either port it from python 3 with all
    # the friends or manually ensure input query contains only byte strings.
@@ -2459,7 +2541,62 @@ except ImportError:  # Python 2
        def encode_list(l):
            return [encode_elem(e) for e in l]

        return compat_urllib_parse._urlencode(encode_elem(query), doseq=doseq)
    # HACK: The following is the correct parse_qs implementation from cpython 3's stdlib.
    # Python 2's version is apparently totally broken
    def _parse_qsl(qs, keep_blank_values=False, strict_parsing=False,
                   encoding='utf-8', errors='replace'):
        qs, _coerce_result = qs, compat_str
        pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
        r = []
        for name_value in pairs:
            if not name_value and not strict_parsing:
                continue
            nv = name_value.split('=', 1)
            if len(nv) != 2:
                if strict_parsing:
                    raise ValueError('bad query field: %r' % (name_value,))
                # Handle case of a control-name with no equal sign
                if keep_blank_values:
                    nv.append('')
                else:
                    continue
            if len(nv[1]) or keep_blank_values:
                name = nv[0].replace('+', ' ')
                name = compat_urllib_parse_unquote(
                    name, encoding=encoding, errors=errors)
                name = _coerce_result(name)
                value = nv[1].replace('+', ' ')
                value = compat_urllib_parse_unquote(
                    value, encoding=encoding, errors=errors)
                value = _coerce_result(value)
                r.append((name, value))
        return r

    def compat_parse_qs(qs, keep_blank_values=False, strict_parsing=False,
                        encoding='utf-8', errors='replace'):
        parsed_result = {}
        pairs = _parse_qsl(qs, keep_blank_values, strict_parsing,
                           encoding=encoding, errors=errors)
        for name, value in pairs:
            if name in parsed_result:
                parsed_result[name].append(value)
            else:
                parsed_result[name] = [value]
        return parsed_result

    setattr(compat_urllib_parse, '_urlencode',
            getattr(compat_urllib_parse, 'urlencode'))
    for name, fix in (
            ('unquote_to_bytes', compat_urllib_parse_unquote_to_bytes),
            ('parse_unquote', compat_urllib_parse_unquote),
            ('unquote_plus', compat_urllib_parse_unquote_plus),
            ('urlencode', compat_urllib_parse_urlencode),
            ('parse_qs', compat_parse_qs)):
        setattr(compat_urllib_parse, name, fix)

compat_urllib_parse_parse_qs = compat_parse_qs
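A behavioural check of the backport (query string invented): repeated keys accumulate and percent-escapes decode, matching the Python 3 stdlib semantics this copies:

    assert compat_parse_qs('a=1&a=2&b=%20x') == {'a': ['1', '2'], 'b': [' x']}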
try:
    from urllib.request import DataHandler as compat_urllib_request_DataHandler
@@ -2495,21 +2632,11 @@ except ImportError:  # Python < 3.4
        return compat_urllib_response.addinfourl(io.BytesIO(data), headers, url)
try:
    from xml.etree.ElementTree import ParseError as compat_xml_parse_error
except ImportError:  # Python 2.6
    from xml.parsers.expat import ExpatError as compat_xml_parse_error
compat_xml_etree_ElementTree_ParseError = compat_xml_parse_error
etree = xml.etree.ElementTree etree = xml.etree.ElementTree
@@ -2523,10 +2650,11 @@ try:
    # xml.etree.ElementTree.Element is a method in Python <=2.6 and
    # the following will crash with:
    # TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
    isinstance(None, etree.Element)
    from xml.etree.ElementTree import Element as compat_etree_Element
except TypeError:  # Python <=2.6
    from xml.etree.ElementTree import _ElementInterface as compat_etree_Element
compat_xml_etree_ElementTree_Element = compat_etree_Element
if sys.version_info[0] >= 3:
    def compat_etree_fromstring(text):
@@ -2582,6 +2710,7 @@ else:
            if k == uri or v == prefix:
                del etree._namespace_map[k]
        etree._namespace_map[uri] = prefix
compat_xml_etree_register_namespace = compat_etree_register_namespace
if sys.version_info < (2, 7):
    # Here comes the crazy part: In 2.6, if the xpath is a unicode,
@@ -2590,55 +2719,222 @@ if sys.version_info < (2, 7):
        if isinstance(xpath, compat_str):
            xpath = xpath.encode('ascii')
        return xpath
# further code below based on CPython 2.7 source
import functools
_xpath_tokenizer_re = re.compile(r'''(?x)
( # (1)
'[^']*'|"[^"]*"| # quoted strings, or
::|//?|\.\.|\(\)|[/.*:[\]()@=] # navigation specials
)| # or (2)
((?:\{[^}]+\})?[^/[\]()@=\s]+)| # token: optional {ns}, no specials
\s+ # or white space
''')
def _xpath_tokenizer(pattern, namespaces=None):
for token in _xpath_tokenizer_re.findall(pattern):
tag = token[1]
if tag and tag[0] != "{" and ":" in tag:
try:
if not namespaces:
raise KeyError
prefix, uri = tag.split(":", 1)
yield token[0], "{%s}%s" % (namespaces[prefix], uri)
except KeyError:
raise SyntaxError("prefix %r not found in prefix map" % prefix)
else:
yield token
def _get_parent_map(context):
parent_map = context.parent_map
if parent_map is None:
context.parent_map = parent_map = {}
for p in context.root.getiterator():
for e in p:
parent_map[e] = p
return parent_map
def _select(context, result, filter_fn=lambda *_: True):
for elem in result:
for e in elem:
if filter_fn(e, elem):
yield e
def _prepare_child(next_, token):
tag = token[1]
return functools.partial(_select, filter_fn=lambda e, _: e.tag == tag)
def _prepare_star(next_, token):
return _select
def _prepare_self(next_, token):
return lambda _, result: (e for e in result)
def _prepare_descendant(next_, token):
token = next(next_)
if token[0] == "*":
tag = "*"
elif not token[0]:
tag = token[1]
else:
raise SyntaxError("invalid descendant")
def select(context, result):
for elem in result:
for e in elem.getiterator(tag):
if e is not elem:
yield e
return select
def _prepare_parent(next_, token):
def select(context, result):
# FIXME: raise error if .. is applied at toplevel?
parent_map = _get_parent_map(context)
result_map = {}
for elem in result:
if elem in parent_map:
parent = parent_map[elem]
if parent not in result_map:
result_map[parent] = None
yield parent
return select
def _prepare_predicate(next_, token):
signature = []
predicate = []
for token in next_:
if token[0] == "]":
break
if token[0] and token[0][:1] in "'\"":
token = "'", token[0][1:-1]
signature.append(token[0] or "-")
predicate.append(token[1])
def select(context, result, filter_fn=lambda _: True):
for elem in result:
if filter_fn(elem):
yield elem
signature = "".join(signature)
# use signature to determine predicate type
if signature == "@-":
# [@attribute] predicate
key = predicate[1]
return functools.partial(
select, filter_fn=lambda el: el.get(key) is not None)
if signature == "@-='":
# [@attribute='value']
key = predicate[1]
value = predicate[-1]
return functools.partial(
select, filter_fn=lambda el: el.get(key) == value)
if signature == "-" and not re.match(r"\d+$", predicate[0]):
# [tag]
tag = predicate[0]
return functools.partial(
select, filter_fn=lambda el: el.find(tag) is not None)
if signature == "-='" and not re.match(r"\d+$", predicate[0]):
# [tag='value']
tag = predicate[0]
value = predicate[-1]
def itertext(el):
for e in el.getiterator():
e = e.text
if e:
yield e
def select(context, result):
for elem in result:
for e in elem.findall(tag):
if "".join(itertext(e)) == value:
yield elem
break
return select
if signature == "-" or signature == "-()" or signature == "-()-":
# [index] or [last()] or [last()-index]
if signature == "-":
index = int(predicate[0]) - 1
else:
if predicate[0] != "last":
raise SyntaxError("unsupported function")
if signature == "-()-":
try:
index = int(predicate[2]) - 1
except ValueError:
raise SyntaxError("unsupported expression")
else:
index = -1
def select(context, result):
parent_map = _get_parent_map(context)
for elem in result:
try:
parent = parent_map[elem]
# FIXME: what if the selector is "*" ?
elems = list(parent.findall(elem.tag))
if elems[index] is elem:
yield elem
except (IndexError, KeyError):
pass
return select
raise SyntaxError("invalid predicate")
ops = {
"": _prepare_child,
"*": _prepare_star,
".": _prepare_self,
"..": _prepare_parent,
"//": _prepare_descendant,
"[": _prepare_predicate,
}
_cache = {}
class _SelectorContext:
parent_map = None
def __init__(self, root):
self.root = root
##
# Generate all matching objects.
def compat_etree_iterfind(elem, path, namespaces=None):
# compile selector pattern
if path[-1:] == "/":
path = path + "*" # implicit all (FIXME: keep this?)
try:
selector = _cache[path]
except KeyError:
if len(_cache) > 100:
_cache.clear()
if path[:1] == "/":
raise SyntaxError("cannot use absolute path on element")
tokens = _xpath_tokenizer(path, namespaces)
selector = []
for token in tokens:
if token[0] == "/":
continue
try:
selector.append(ops[token[0]](tokens, token))
except StopIteration:
raise SyntaxError("invalid path")
_cache[path] = selector
# execute selector pattern
result = [elem]
context = _SelectorContext(elem)
for select in selector:
result = select(context, result)
return result
# end of code based on CPython 2.7 source
else:
    compat_xpath = lambda xpath: xpath
    compat_etree_iterfind = lambda element, match: element.iterfind(match)
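Either branch gives the same call shape; a small sketch (XML invented):

    doc = compat_etree_fromstring(b'<root><a><b>1</b></a><a><b>2</b></a></root>')
    assert [e.text for e in compat_etree_iterfind(doc, './/b')] == ['1', '2']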
compat_os_name = os._name if os.name == 'java' else os.name
@@ -2674,7 +2970,7 @@ except (AssertionError, UnicodeEncodeError):

def compat_ord(c):
    if isinstance(c, int):
        return c
    else:
        return ord(c)
@@ -2764,6 +3060,8 @@ else:
else:
    compat_expanduser = os.path.expanduser
compat_os_path_expanduser = compat_expanduser
if compat_os_name == 'nt' and sys.version_info < (3, 8):
    # os.path.realpath on Windows does not follow symbolic links
@@ -2775,6 +3073,8 @@ if compat_os_name == 'nt' and sys.version_info < (3, 8):
else:
    compat_realpath = os.path.realpath
compat_os_path_realpath = compat_realpath
if sys.version_info < (3, 0):
    def compat_print(s):
@@ -2795,11 +3095,15 @@ if sys.version_info < (3, 0) and sys.platform == 'win32':
else:
    compat_getpass = getpass.getpass
compat_getpass_getpass = compat_getpass

try:
    compat_input = raw_input
except NameError:  # Python 3
    compat_input = input
# Python < 2.6.5 require kwargs to be bytes
try:
    def _testfunc(x):
@@ -2850,6 +3154,51 @@ else:
    compat_socket_create_connection = socket.create_connection
try:
    from contextlib import suppress as compat_contextlib_suppress
except ImportError:
    class compat_contextlib_suppress(object):
        _exceptions = None

        def __init__(self, *exceptions):
            super(compat_contextlib_suppress, self).__init__()
            # TODO: [Base]ExceptionGroup (3.12+)
            self._exceptions = exceptions

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            return exc_type is not None and issubclass(exc_type, self._exceptions or tuple())
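Usage mirrors the stdlib original (file name invented):

    with compat_contextlib_suppress(OSError):
        os.remove('maybe-missing.tmp')  # silently skipped if the file is absent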
# subprocess.Popen context manager
# avoids leaking handles if .communicate() is not called
try:
    _Popen = subprocess.Popen
    # check for required context manager attributes
    _Popen.__enter__ and _Popen.__exit__
    compat_subprocess_Popen = _Popen
except AttributeError:
    # not a context manager - make one
    from contextlib import contextmanager

    @contextmanager
    def compat_subprocess_Popen(*args, **kwargs):
        popen = None
        try:
            popen = _Popen(*args, **kwargs)
            yield popen
        finally:
            if popen:
                for f in (popen.stdin, popen.stdout, popen.stderr):
                    if f:
                        # repeated .close() is OK, but just in case
                        with compat_contextlib_suppress(EnvironmentError):
                            f.close()
                popen.wait()
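So call sites can always use the with-form, old Pythons included (command invented):

    with compat_subprocess_Popen(['echo', 'hi'], stdout=subprocess.PIPE) as proc:
        out, _ = proc.communicate()
    # on exit, pipes are closed and the process reaped in either branch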
# Fix https://github.com/ytdl-org/youtube-dl/issues/4223
# See http://bugs.python.org/issue9161 for what is broken
def workaround_optparse_bug9161():
@@ -2877,6 +3226,7 @@ else:
    _terminal_size = collections.namedtuple('terminal_size', ['columns', 'lines'])

    def compat_get_terminal_size(fallback=(80, 24)):
        from .utils import process_communicate_or_kill
        columns = compat_getenv('COLUMNS')
        if columns:
            columns = int(columns)
@@ -2893,7 +3243,7 @@ else:
        sp = subprocess.Popen(
            ['stty', 'size'],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = process_communicate_or_kill(sp)
        _lines, _columns = map(int, out.split())
    except Exception:
        _columns, _lines = _terminal_size(*fallback)
@@ -2904,15 +3254,16 @@ else:
        lines = _lines
    return _terminal_size(columns, lines)


try:
    itertools.count(start=0, step=1)
    compat_itertools_count = itertools.count
except TypeError:  # Python 2.6
    def compat_itertools_count(start=0, step=1):
        while True:
            yield start
            start += step
if sys.version_info >= (3, 0):
    from tokenize import tokenize as compat_tokenize_tokenize
@@ -2953,6 +3304,24 @@ else:
    compat_Struct = struct.Struct
# compat_map/filter() returning an iterator, supposedly the
# same versioning as for zip below
try:
    from future_builtins import map as compat_map
except ImportError:
    try:
        from itertools import imap as compat_map
    except ImportError:
        compat_map = map

try:
    from future_builtins import filter as compat_filter
except ImportError:
    try:
        from itertools import ifilter as compat_filter
    except ImportError:
        compat_filter = filter

try:
    from future_builtins import zip as compat_zip
except ImportError:  # not 2.6+ or is 3.x
@@ -2962,6 +3331,82 @@ except ImportError:  # not 2.6+ or is 3.x
        compat_zip = zip
# method renamed between Py2/3
try:
    from itertools import zip_longest as compat_itertools_zip_longest
except ImportError:
    from itertools import izip_longest as compat_itertools_zip_longest

# new class in collections
try:
    from collections import ChainMap as compat_collections_chain_map
    # Py3.3's ChainMap is deficient
    if sys.version_info < (3, 4):
        raise ImportError
except ImportError:
    # Py <= 3.3
    class compat_collections_chain_map(compat_collections_abc.MutableMapping):

        maps = [{}]

        def __init__(self, *maps):
            self.maps = list(maps) or [{}]

        def __getitem__(self, k):
            for m in self.maps:
                if k in m:
                    return m[k]
            raise KeyError(k)

        def __setitem__(self, k, v):
            self.maps[0].__setitem__(k, v)
            return

        def __contains__(self, k):
            return any((k in m) for m in self.maps)

        def __delitem(self, k):
            if k in self.maps[0]:
                del self.maps[0][k]
                return
            raise KeyError(k)

        def __delitem__(self, k):
            self.__delitem(k)

        def __iter__(self):
            return itertools.chain(*reversed(self.maps))

        def __len__(self):
            return len(iter(self))

        # to match Py3, don't del directly
        def pop(self, k, *args):
            if self.__contains__(k):
                off = self.__getitem__(k)
                self.__delitem(k)
                return off
            elif len(args) > 0:
                return args[0]
            raise KeyError(k)

        def new_child(self, m=None, **kwargs):
            m = m or {}
            m.update(kwargs)
            return compat_collections_chain_map(m, *self.maps)

        @property
        def parents(self):
            return compat_collections_chain_map(*(self.maps[1:]))
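Behaviour sketch (maps invented): lookups search left to right while writes land in the first map only, as with collections.ChainMap; this is the shape used e.g. for nested variable scopes in the JS interpreter:

    local, outer = {'x': 1}, {'x': 0, 'y': 2}
    scope = compat_collections_chain_map(local, outer)
    assert scope['x'] == 1 and scope['y'] == 2
    scope['y'] = 5                       # lands in local; outer is untouched
    assert local == {'x': 1, 'y': 5} and outer['y'] == 2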
# Pythons disagree on the type of a pattern (RegexObject, _sre.SRE_Pattern, Pattern, ...?)
compat_re_Pattern = type(re.compile(''))
# and on the type of a match
compat_re_Match = type(re.match('a', 'a'))
if sys.version_info < (3, 3):
    def compat_b64decode(s, *args, **kwargs):
        if isinstance(s, compat_str):
@@ -2970,6 +3415,8 @@ if sys.version_info < (3, 3):
else:
    compat_b64decode = base64.b64decode
compat_base64_b64decode = compat_b64decode
if platform.python_implementation() == 'PyPy' and sys.pypy_version_info < (5, 4, 0):
    # PyPy2 prior to version 5.4.0 expects byte strings as Windows function
@@ -2989,25 +3436,97 @@ else:
        return ctypes.WINFUNCTYPE(*args, **kwargs)
if sys.version_info < (3, 0):
    # open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True) not: opener=None
    def compat_open(file_, *args, **kwargs):
        if len(args) > 6 or 'opener' in kwargs:
            raise ValueError('open: unsupported argument "opener"')
        return io.open(file_, *args, **kwargs)
else:
    compat_open = open
# compat_register_utf8
def compat_register_utf8():
    if sys.platform == 'win32':
        # https://github.com/ytdl-org/youtube-dl/issues/820
        from codecs import register, lookup
        register(
            lambda name: lookup('utf-8') if name == 'cp65001' else None)


# compat_datetime_timedelta_total_seconds
try:
    compat_datetime_timedelta_total_seconds = datetime.timedelta.total_seconds
except AttributeError:
    # Py 2.6
    def compat_datetime_timedelta_total_seconds(td):
        return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6


# optional decompression packages
# PyPi brotli package implements 'br' Content-Encoding
try:
    import brotli as compat_brotli
except ImportError:
    compat_brotli = None
# PyPi ncompress package implements 'compress' Content-Encoding
try:
    import ncompress as compat_ncompress
except ImportError:
    compat_ncompress = None
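The Py2.6 fallback reproduces the stdlib total_seconds() formula, and with the division future-import added at the top of this module it yields a float everywhere (values invented):

    td = datetime.timedelta(minutes=1, microseconds=500000)
    assert compat_datetime_timedelta_total_seconds(td) == 60.5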
legacy = [
    'compat_HTMLParseError',
    'compat_HTMLParser',
    'compat_HTTPError',
    'compat_b64decode',
    'compat_cookiejar',
    'compat_cookiejar_Cookie',
    'compat_cookies',
    'compat_cookies_SimpleCookie',
    'compat_etree_Element',
    'compat_etree_register_namespace',
    'compat_expanduser',
    'compat_getpass',
    'compat_parse_qs',
    'compat_realpath',
    'compat_urllib_parse_parse_qs',
    'compat_urllib_parse_unquote',
    'compat_urllib_parse_unquote_plus',
    'compat_urllib_parse_unquote_to_bytes',
    'compat_urllib_parse_urlencode',
    'compat_urllib_parse_urlparse',
    'compat_urlparse',
    'compat_urlretrieve',
    'compat_xml_parse_error',
]
__all__ = [
    'compat_html_parser_HTMLParseError',
    'compat_html_parser_HTMLParser',
    'compat_Struct',
    'compat_base64_b64decode',
    'compat_basestring',
    'compat_brotli',
    'compat_casefold',
    'compat_chr',
    'compat_collections_abc',
    'compat_collections_chain_map',
    'compat_datetime_timedelta_total_seconds',
    'compat_http_cookiejar',
    'compat_http_cookiejar_Cookie',
    'compat_http_cookies',
    'compat_http_cookies_SimpleCookie',
    'compat_contextlib_suppress',
    'compat_ctypes_WINFUNCTYPE',
    'compat_etree_fromstring',
    'compat_etree_iterfind',
    'compat_filter',
    'compat_get_terminal_size',
    'compat_getenv',
    'compat_getpass_getpass',
    'compat_html_entities',
    'compat_html_entities_html5',
    'compat_http_client',
@@ -3015,13 +3534,20 @@ __all__ = [
    'compat_input',
    'compat_integer_types',
    'compat_itertools_count',
    'compat_itertools_zip_longest',
    'compat_kwargs',
    'compat_map',
    'compat_ncompress',
    'compat_numeric_types',
    'compat_open',
    'compat_ord',
    'compat_os_name',
    'compat_os_path_expanduser',
    'compat_os_path_realpath',
    'compat_print',
    'compat_re_Match',
    'compat_re_Pattern',
    'compat_register_utf8',
    'compat_setenv',
    'compat_shlex_quote',
    'compat_shlex_split',
@@ -3030,20 +3556,18 @@ __all__ = [
    'compat_struct_pack',
    'compat_struct_unpack',
    'compat_subprocess_get_DEVNULL',
    'compat_subprocess_Popen',
    'compat_tokenize_tokenize',
    'compat_urllib_error',
    'compat_urllib_parse',
    'compat_urllib_request',
    'compat_urllib_request_DataHandler',
    'compat_urllib_response',
    'compat_urllib_request_urlretrieve',
    'compat_urllib_HTTPError',
    'compat_xml_etree_ElementTree_Element',
    'compat_xml_etree_ElementTree_ParseError',
    'compat_xml_etree_register_namespace',
    'compat_xpath',
    'compat_zip',
    'workaround_optparse_bug9161',
]

youtube_dl/downloader/__init__.py

@@ -1,22 +1,31 @@
from __future__ import unicode_literals

from ..utils import (
    determine_protocol,
)


def get_suitable_downloader(info_dict, params={}):
    info_dict['protocol'] = determine_protocol(info_dict)
    info_copy = info_dict.copy()
    return _get_suitable_downloader(info_copy, params)


# Some of these require get_suitable_downloader
from .common import FileDownloader
from .dash import DashSegmentsFD
from .f4m import F4mFD
from .hls import HlsFD
from .http import HttpFD
from .rtmp import RtmpFD
from .rtsp import RtspFD
from .ism import IsmFD
from .niconico import NiconicoDmcFD
from .external import (
    get_external_downloader,
    FFmpegFD,
)

PROTOCOL_MAP = {
    'rtmp': RtmpFD,
    'm3u8_native': HlsFD,
@@ -26,13 +35,12 @@ PROTOCOL_MAP = {
    'f4m': F4mFD,
    'http_dash_segments': DashSegmentsFD,
    'ism': IsmFD,
    'niconico_dmc': NiconicoDmcFD,
}


def _get_suitable_downloader(info_dict, params={}):
    """Get the downloader class that can handle the info dict."""

    # if (info_dict.get('start_time') or info_dict.get('end_time')) and not info_dict.get('requested_formats') and FFmpegFD.can_download(info_dict):
    #     return FFmpegFD
@@ -42,7 +50,11 @@ def _get_suitable_downloader(info_dict, params={}):
        ed = get_external_downloader(external_downloader)
        if ed.can_download(info_dict):
            return ed
        # Avoid using unwanted args since external_downloader was rejected
        if params.get('external_downloader_args'):
            params['external_downloader_args'] = None

    protocol = info_dict['protocol']
    if protocol.startswith('m3u8') and info_dict.get('is_live'):
        return FFmpegFD

youtube_dl/downloader/common.py

@@ -88,17 +88,21 @@ class FileDownloader(object):
        return '---.-%'
    return '%6s' % ('%3.1f%%' % percent)

@classmethod
def calc_eta(cls, start_or_rate, now_or_remaining, *args):
    # accepts both the original calc_eta(start, now, total, current)
    # and the 2-argument form calc_eta(rate, remaining)
    if len(args) < 2:
        rate, remaining = (start_or_rate, now_or_remaining)
        if None in (rate, remaining):
            return None
        return int(float(remaining) / rate)
    start, now = (start_or_rate, now_or_remaining)
    total, current = args[:2]
    if total is None:
        return None
    if now is None:
        now = time.time()
    rate = cls.calc_speed(start, now, current)
    return rate and int((float(total) - float(current)) / rate)

@staticmethod
def format_eta(eta):
@@ -123,6 +127,12 @@ class FileDownloader(object):
@staticmethod
def format_retries(retries):
    return 'inf' if retries == float('inf') else '%.0f' % retries

@staticmethod
def filesize_or_none(unencoded_filename):
    fn = encodeFilename(unencoded_filename)
    if os.path.isfile(fn):
        return os.path.getsize(fn)
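A quick check of the two calling conventions (numbers invented):

    assert FileDownloader.calc_eta(1024, 10240) == 10             # rate, remaining
    assert FileDownloader.calc_eta(0, 5, 3 * 2**20, 2**20) == 10  # start, now, total, current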
@staticmethod
def best_block_size(elapsed_time, bytes):
    new_min = max(bytes / 2.0, 1.0)
@@ -329,6 +339,10 @@ class FileDownloader(object):
def download(self, filename, info_dict):
    """Download to a filename using the info from info_dict
    Return True on success and False otherwise

    This method filters the `Cookie` header from the info_dict to prevent leaks.
    Downloaders have their own way of handling cookies.
    See: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-v8mc-9377-rwjj
    """

    nooverwrites_and_exists = (

youtube_dl/downloader/dash.py

@@ -1,5 +1,7 @@
from __future__ import unicode_literals

import itertools

from .fragment import FragmentFD
from ..compat import compat_urllib_error
from ..utils import (
@@ -30,26 +32,28 @@ class DashSegmentsFD(FragmentFD):
fragment_retries = self.params.get('fragment_retries', 0)
skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)

for frag_index, fragment in enumerate(fragments, 1):
    if frag_index <= ctx['fragment_index']:
        continue
    success = False
    # In DASH, the first segment contains necessary headers to
    # generate a valid MP4 file, so always abort for the first segment
    fatal = frag_index == 1 or not skip_unavailable_fragments
    fragment_url = fragment.get('url')
    if not fragment_url:
        assert fragment_base_url
        fragment_url = urljoin(fragment_base_url, fragment['path'])
    headers = info_dict.get('http_headers')
    fragment_range = fragment.get('range')
    if fragment_range:
        headers = headers.copy() if headers else {}
        headers['Range'] = 'bytes=%s' % (fragment_range,)
    for count in itertools.count():
        try:
            success, frag_content = self._download_fragment(ctx, fragment_url, info_dict, headers)
            if not success:
                return False
            self._append_fragment(ctx, frag_content)
            break
        except compat_urllib_error.HTTPError as err:
            # YouTube may often return 404 HTTP error for a fragment causing the
            # whole download to fail. However if the same fragment is immediately
@@ -57,22 +61,21 @@ class DashSegmentsFD(FragmentFD):
            # is usually enough) thus allowing to download the whole file successfully.
            # To be future-proof we will retry all fragments that fail with any
            # HTTP error.
            if count < fragment_retries:
                self.report_retry_fragment(err, frag_index, count + 1, fragment_retries)
                continue
        except DownloadError:
            # Don't retry fragment if error occurred during HTTP downloading
            # itself since it has its own retry settings
            if fatal:
                raise
        break

    if not success:
        if not fatal:
            self.report_skip_fragment(frag_index)
            continue
        self.report_error('giving up after %s fragment retries' % count)
        return False

self._finish_frag_download(ctx)

youtube_dl/downloader/external.py

@@ -1,17 +1,24 @@
from __future__ import unicode_literals

import os
import re
import subprocess
import sys
import tempfile
import time

from .common import FileDownloader
from ..compat import (
    compat_setenv,
    compat_str,
    compat_subprocess_Popen,
)
try:
    from ..postprocessor.ffmpeg import FFmpegPostProcessor, EXT_TO_OUT_FORMATS
except ImportError:
    FFmpegPostProcessor = None
from ..utils import (
    cli_option,
    cli_valueless_option,
@@ -22,6 +29,9 @@ from ..utils import (
    handle_youtubedl_headers,
    check_executable,
    is_outdated_version,
    process_communicate_or_kill,
    T,
    traverse_obj,
)
@@ -29,6 +39,7 @@ class ExternalFD(FileDownloader):
def real_download(self, filename, info_dict):
    self.report_destination(filename)
    tmpfilename = self.temp_name(filename)
    self._cookies_tempfile = None

    try:
        started = time.time()
@ -41,6 +52,13 @@ class ExternalFD(FileDownloader):
# should take place # should take place
retval = 0 retval = 0
self.to_screen('[%s] Interrupted by user' % self.get_basename()) self.to_screen('[%s] Interrupted by user' % self.get_basename())
finally:
if self._cookies_tempfile and os.path.isfile(self._cookies_tempfile):
try:
os.remove(self._cookies_tempfile)
except OSError:
self.report_warning(
'Unable to delete temporary cookies file "{0}"'.format(self._cookies_tempfile))
if retval == 0: if retval == 0:
status = { status = {
@ -96,6 +114,16 @@ class ExternalFD(FileDownloader):
def _configuration_args(self, default=[]): def _configuration_args(self, default=[]):
return cli_configuration_args(self.params, 'external_downloader_args', default) return cli_configuration_args(self.params, 'external_downloader_args', default)
def _write_cookies(self):
if not self.ydl.cookiejar.filename:
tmp_cookies = tempfile.NamedTemporaryFile(suffix='.cookies', delete=False)
tmp_cookies.close()
self._cookies_tempfile = tmp_cookies.name
self.to_screen('[download] Writing temporary cookies file to "{0}"'.format(self._cookies_tempfile))
# real_download resets _cookies_tempfile; if it's None, save() will write to cookiejar.filename
self.ydl.cookiejar.save(self._cookies_tempfile, ignore_discard=True, ignore_expires=True)
return self.ydl.cookiejar.filename or self._cookies_tempfile
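`_write_cookies()` exists because external downloaders can only read cookies from a file: if the jar is file-backed its own filename is reused, otherwise the jar is saved to a `NamedTemporaryFile` that `real_download`'s `finally` block later deletes. A rough standalone equivalent using the stdlib cookie jar (a sketch under its own assumptions, not the youtube-dl implementation):

```python
import tempfile
from http.cookiejar import MozillaCookieJar

def write_cookies(cookiejar):
    # file-backed jar: external tools can read it directly
    if cookiejar.filename:
        return cookiejar.filename
    # in-memory jar: persist to a temp file the caller must os.remove() later
    tmp = tempfile.NamedTemporaryFile(suffix='.cookies', delete=False)
    tmp.close()
    cookiejar.save(tmp.name, ignore_discard=True, ignore_expires=True)
    return tmp.name

jar = MozillaCookieJar()
print(write_cookies(jar))  # e.g. /tmp/tmpXXXXXXXX.cookies
```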
     def _call_downloader(self, tmpfilename, info_dict):
         """ Either overwrite this or implement _make_cmd """
         cmd = [encodeArgument(a) for a in self._make_cmd(tmpfilename, info_dict)]
@@ -104,18 +132,26 @@ class ExternalFD(FileDownloader):
         p = subprocess.Popen(
             cmd, stderr=subprocess.PIPE)
-        _, stderr = p.communicate()
+        _, stderr = process_communicate_or_kill(p)
         if p.returncode != 0:
             self.to_stderr(stderr.decode('utf-8', 'replace'))
         return p.returncode

+    @staticmethod
+    def _header_items(info_dict):
+        return traverse_obj(
+            info_dict, ('http_headers', T(dict.items), Ellipsis))
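`_header_items()` wraps header iteration in `traverse_obj` with the `T(dict.items)` transform so a missing or `None` `http_headers` mapping yields nothing instead of raising. In plain Python the effect is roughly:

```python
def header_items(info_dict):
    # tolerate absent/None 'http_headers' and return (key, value) pairs
    return list((info_dict.get('http_headers') or {}).items())

print(header_items({'http_headers': {'User-Agent': 'youtube-dl'}}))
print(header_items({}))  # -> []
```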
 class CurlFD(ExternalFD):
     AVAILABLE_OPT = '-V'

     def _make_cmd(self, tmpfilename, info_dict):
-        cmd = [self.exe, '--location', '-o', tmpfilename]
-        for key, val in info_dict['http_headers'].items():
+        cmd = [self.exe, '--location', '-o', tmpfilename, '--compressed']
+        cookie_header = self.ydl.cookiejar.get_cookie_header(info_dict['url'])
+        if cookie_header:
+            cmd += ['--cookie', cookie_header]
+        for key, val in self._header_items(info_dict):
             cmd += ['--header', '%s: %s' % (key, val)]

         cmd += self._bool_option('--continue-at', 'continuedl', '-', '0')
         cmd += self._valueless_option('--silent', 'noprogress')
@@ -141,7 +177,7 @@ class CurlFD(ExternalFD):
         # curl writes the progress to stderr so don't capture it.
         p = subprocess.Popen(cmd)
-        p.communicate()
+        process_communicate_or_kill(p)
         return p.returncode
@@ -150,8 +186,11 @@ class AxelFD(ExternalFD):
     def _make_cmd(self, tmpfilename, info_dict):
         cmd = [self.exe, '-o', tmpfilename]
-        for key, val in info_dict['http_headers'].items():
+        for key, val in self._header_items(info_dict):
             cmd += ['-H', '%s: %s' % (key, val)]
+        cookie_header = self.ydl.cookiejar.get_cookie_header(info_dict['url'])
+        if cookie_header:
+            cmd += ['-H', 'Cookie: {0}'.format(cookie_header), '--max-redirect=0']
         cmd += self._configuration_args()
         cmd += ['--', info_dict['url']]
         return cmd
@@ -161,8 +200,10 @@ class WgetFD(ExternalFD):
     AVAILABLE_OPT = '--version'

     def _make_cmd(self, tmpfilename, info_dict):
-        cmd = [self.exe, '-O', tmpfilename, '-nv', '--no-cookies']
-        for key, val in info_dict['http_headers'].items():
+        cmd = [self.exe, '-O', tmpfilename, '-nv', '--compression=auto']
+        if self.ydl.cookiejar.get_cookie_header(info_dict['url']):
+            cmd += ['--load-cookies', self._write_cookies()]
+        for key, val in self._header_items(info_dict):
             cmd += ['--header', '%s: %s' % (key, val)]
         cmd += self._option('--limit-rate', 'ratelimit')
         retry = self._option('--tries', 'retries')
@@ -171,7 +212,10 @@ class WgetFD(ExternalFD):
             retry[1] = '0'
         cmd += retry
         cmd += self._option('--bind-address', 'source_address')
-        cmd += self._option('--proxy', 'proxy')
+        proxy = self.params.get('proxy')
+        if proxy:
+            for var in ('http_proxy', 'https_proxy'):
+                cmd += ['--execute', '%s=%s' % (var, proxy)]
         cmd += self._valueless_option('--no-check-certificate', 'nocheckcertificate')
         cmd += self._configuration_args()
         cmd += ['--', info_dict['url']]
@@ -181,24 +225,121 @@ class WgetFD(ExternalFD):
 class Aria2cFD(ExternalFD):
     AVAILABLE_OPT = '-v'

+    @staticmethod
+    def _aria2c_filename(fn):
+        return fn if os.path.isabs(fn) else os.path.join('.', fn)
+
     def _make_cmd(self, tmpfilename, info_dict):
-        cmd = [self.exe, '-c']
-        cmd += self._configuration_args([
-            '--min-split-size', '1M', '--max-connection-per-server', '4'])
-        dn = os.path.dirname(tmpfilename)
-        if dn:
-            cmd += ['--dir', dn]
-        cmd += ['--out', os.path.basename(tmpfilename)]
-        for key, val in info_dict['http_headers'].items():
+        cmd = [self.exe, '-c',
+               '--console-log-level=warn', '--summary-interval=0', '--download-result=hide',
+               '--http-accept-gzip=true', '--file-allocation=none', '-x16', '-j16', '-s16']
+        if 'fragments' in info_dict:
+            cmd += ['--allow-overwrite=true', '--allow-piece-length-change=true']
+        else:
+            cmd += ['--min-split-size', '1M']
+
+        if self.ydl.cookiejar.get_cookie_header(info_dict['url']):
+            cmd += ['--load-cookies={0}'.format(self._write_cookies())]
+        for key, val in self._header_items(info_dict):
             cmd += ['--header', '%s: %s' % (key, val)]
+        cmd += self._configuration_args(['--max-connection-per-server', '4'])
+        cmd += ['--out', os.path.basename(tmpfilename)]
+        cmd += self._option('--max-overall-download-limit', 'ratelimit')
         cmd += self._option('--interface', 'source_address')
         cmd += self._option('--all-proxy', 'proxy')
         cmd += self._bool_option('--check-certificate', 'nocheckcertificate', 'false', 'true', '=')
         cmd += self._bool_option('--remote-time', 'updatetime', 'true', 'false', '=')
-        cmd += ['--', info_dict['url']]
+        cmd += self._bool_option('--show-console-readout', 'noprogress', 'false', 'true', '=')
+        cmd += self._configuration_args()
+
+        # aria2c strips out spaces from the beginning/end of filenames and paths.
+        # We work around this issue by adding a "./" to the beginning of the
+        # filename and relative path, and adding a "/" at the end of the path.
+        # See: https://github.com/yt-dlp/yt-dlp/issues/276
+        # https://github.com/ytdl-org/youtube-dl/issues/20312
+        # https://github.com/aria2/aria2/issues/1373
+        dn = os.path.dirname(tmpfilename)
+        if dn:
+            cmd += ['--dir', self._aria2c_filename(dn) + os.path.sep]
+        if 'fragments' not in info_dict:
+            cmd += ['--out', self._aria2c_filename(os.path.basename(tmpfilename))]
+        cmd += ['--auto-file-renaming=false']
+
+        if 'fragments' in info_dict:
+            cmd += ['--file-allocation=none', '--uri-selector=inorder']
+            url_list_file = '%s.frag.urls' % (tmpfilename, )
+            url_list = []
+            for frag_index, fragment in enumerate(info_dict['fragments']):
+                fragment_filename = '%s-Frag%d' % (os.path.basename(tmpfilename), frag_index)
+                url_list.append('%s\n\tout=%s' % (fragment['url'], self._aria2c_filename(fragment_filename)))
+            stream, _ = self.sanitize_open(url_list_file, 'wb')
+            stream.write('\n'.join(url_list).encode())
+            stream.close()
+            cmd += ['-i', self._aria2c_filename(url_list_file)]
+        else:
+            cmd += ['--', info_dict['url']]
         return cmd
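For fragmented downloads the command no longer lists URLs on the command line; it writes an aria2c input file and passes it with `-i`. Each entry is a URL followed by a tab-indented `out=` option naming that fragment's output file. A sketch of the file that gets generated (the URLs are hypothetical):

```python
fragments = [
    {'url': 'https://example.com/seg0.m4s'},
    {'url': 'https://example.com/seg1.m4s'},
]
tmpfilename = 'video.mp4.part'
url_list = []
for frag_index, fragment in enumerate(fragments):
    # the tab-indented out= line binds the fragment filename to the URL above it
    url_list.append('%s\n\tout=./%s-Frag%d' % (fragment['url'], tmpfilename, frag_index))
print('\n'.join(url_list))
# https://example.com/seg0.m4s
#	out=./video.mp4.part-Frag0
# https://example.com/seg1.m4s
#	out=./video.mp4.part-Frag1
```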
+class Aria2pFD(ExternalFD):
+    ''' Aria2pFD class
+    This class support to use aria2p as downloader.
+    (Aria2p, a command-line tool and Python library to interact with an aria2c daemon process
+    through JSON-RPC.)
+    It can help you to get download progress more easily.
+    To use aria2p as downloader, you need to install aria2c and aria2p, aria2p can download with pip.
+    Then run aria2c in the background and enable with the --enable-rpc option.
+    '''
+    try:
+        import aria2p
+        __avail = True
+    except ImportError:
+        __avail = False
+
+    @classmethod
+    def available(cls):
+        return cls.__avail
+
+    def _call_downloader(self, tmpfilename, info_dict):
+        aria2 = self.aria2p.API(
+            self.aria2p.Client(
+                host='http://localhost',
+                port=6800,
+                secret=''
+            )
+        )
+        options = {
+            'min-split-size': '1M',
+            'max-connection-per-server': 4,
+            'auto-file-renaming': 'false',
+        }
+        options['dir'] = os.path.dirname(tmpfilename) or os.path.abspath('.')
+        options['out'] = os.path.basename(tmpfilename)
+        if self.ydl.cookiejar.get_cookie_header(info_dict['url']):
+            options['load-cookies'] = self._write_cookies()
+        options['header'] = []
+        for key, val in self._header_items(info_dict):
+            options['header'].append('{0}: {1}'.format(key, val))
+        download = aria2.add_uris([info_dict['url']], options)
+        status = {
+            'status': 'downloading',
+            'tmpfilename': tmpfilename,
+        }
+        started = time.time()
+        while download.status in ['active', 'waiting']:
+            download = aria2.get_download(download.gid)
+            status.update({
+                'downloaded_bytes': download.completed_length,
+                'total_bytes': download.total_length,
+                'elapsed': time.time() - started,
+                'eta': download.eta.total_seconds(),
+                'speed': download.download_speed,
+            })
+            self._hook_progress(status)
+            time.sleep(.5)
+        return download.status != 'complete'
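As the docstring says, `Aria2pFD` only drives an aria2c daemon that is already running with RPC enabled (e.g. `aria2c --enable-rpc` on the default port 6800). A minimal sketch of the same add-then-poll idiom, restricted to the aria2p calls used above (the URL is hypothetical):

```python
import time
import aria2p  # pip install aria2p

api = aria2p.API(aria2p.Client(host='http://localhost', port=6800, secret=''))
download = api.add_uris(['https://example.com/file.bin'], {'out': 'file.bin'})
while download.status in ('active', 'waiting'):
    download = api.get_download(download.gid)  # refresh state over JSON-RPC
    print('%s/%s bytes' % (download.completed_length, download.total_length))
    time.sleep(0.5)
print('final status:', download.status)  # 'complete' on success
```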
 class HttpieFD(ExternalFD):
     @classmethod
     def available(cls):
@@ -206,25 +347,34 @@ class HttpieFD(ExternalFD):
     def _make_cmd(self, tmpfilename, info_dict):
         cmd = ['http', '--download', '--output', tmpfilename, info_dict['url']]
-        for key, val in info_dict['http_headers'].items():
+        for key, val in self._header_items(info_dict):
             cmd += ['%s:%s' % (key, val)]
+
+        # httpie 3.1.0+ removes the Cookie header on redirect, so this should be safe for now. [1]
+        # If we ever need cookie handling for redirects, we can export the cookiejar into a session. [2]
+        # 1: https://github.com/httpie/httpie/security/advisories/GHSA-9w4w-cpc8-h2fq
+        # 2: https://httpie.io/docs/cli/sessions
+        cookie_header = self.ydl.cookiejar.get_cookie_header(info_dict['url'])
+        if cookie_header:
+            cmd += ['Cookie:%s' % cookie_header]
         return cmd
 class FFmpegFD(ExternalFD):
     @classmethod
     def supports(cls, info_dict):
-        return info_dict['protocol'] in ('http', 'https', 'ftp', 'ftps', 'm3u8', 'rtsp', 'rtmp', 'mms')
+        return info_dict['protocol'] in ('http', 'https', 'ftp', 'ftps', 'm3u8', 'rtsp', 'rtmp', 'mms', 'http_dash_segments')

     @classmethod
     def available(cls):
-        return FFmpegPostProcessor().available
+        # actual availability can only be confirmed for an instance
+        return bool(FFmpegPostProcessor)

     def _call_downloader(self, tmpfilename, info_dict):
-        url = info_dict['url']
-        ffpp = FFmpegPostProcessor(downloader=self)
+        # `downloader` means the parent `YoutubeDL`
+        ffpp = FFmpegPostProcessor(downloader=self.ydl)
         if not ffpp.available:
-            self.report_error('m3u8 download detected but ffmpeg or avconv could not be found. Please install one.')
+            self.report_error('ffmpeg required for download but no ffmpeg (nor avconv) executable could be found. Please install one.')
             return False
         ffpp.check_version()
@@ -253,7 +403,15 @@ class FFmpegFD(ExternalFD):
         # if end_time:
         #     args += ['-t', compat_str(end_time - start_time)]

-        if info_dict['http_headers'] and re.match(r'^https?://', url):
+        url = info_dict['url']
+        cookies = self.ydl.cookiejar.get_cookies_for_url(url)
+        if cookies:
+            args.extend(['-cookies', ''.join(
+                '{0}={1}; path={2}; domain={3};\r\n'.format(
+                    cookie.name, cookie.value, cookie.path, cookie.domain)
+                for cookie in cookies)])
+        if info_dict.get('http_headers') and re.match(r'^https?://', url):
             # Trailing \r\n after each HTTP header is important to prevent warning from ffmpeg/avconv:
             # [http @ 00000000003d2fa0] No trailing CRLF found in HTTP header.
             headers = handle_youtubedl_headers(info_dict['http_headers'])
@@ -333,18 +491,25 @@ class FFmpegFD(ExternalFD):
         self._debug_cmd(args)

-        proc = subprocess.Popen(args, stdin=subprocess.PIPE, env=env)
-        try:
-            retval = proc.wait()
-        except KeyboardInterrupt:
-            # subprocces.run would send the SIGKILL signal to ffmpeg and the
-            # mp4 file couldn't be played, but if we ask ffmpeg to quit it
-            # produces a file that is playable (this is mostly useful for live
-            # streams). Note that Windows is not affected and produces playable
-            # files (see https://github.com/ytdl-org/youtube-dl/issues/8300).
-            if sys.platform != 'win32':
-                proc.communicate(b'q')
-            raise
+        # From [1], a PIPE opened in Popen() should be closed, unless
+        # .communicate() is called. Avoid leaking any PIPEs by using Popen
+        # as a context manager (newer Python 3.x and compat)
+        # Fixes "Resource Warning" in test/test_downloader_external.py
+        # [1] https://devpress.csdn.net/python/62fde12d7e66823466192e48.html
+        with compat_subprocess_Popen(args, stdin=subprocess.PIPE, env=env) as proc:
+            try:
+                retval = proc.wait()
+            except BaseException as e:
+                # subprocess.run would send the SIGKILL signal to ffmpeg and the
+                # mp4 file couldn't be played, but if we ask ffmpeg to quit it
+                # produces a file that is playable (this is mostly useful for live
+                # streams). Note that Windows is not affected and produces playable
+                # files (see https://github.com/ytdl-org/youtube-dl/issues/8300).
+                if isinstance(e, KeyboardInterrupt) and sys.platform != 'win32':
+                    process_communicate_or_kill(proc, b'q')
+                else:
+                    proc.kill()
+                raise
         return retval
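Two separate behaviours are combined here: `Popen` as a context manager guarantees the stdin pipe gets closed, and sending `q` to ffmpeg's stdin asks it to finish writing a playable file instead of being killed mid-stream. A simplified sketch of the idiom with plain `subprocess` (the ffmpeg arguments are placeholders):

```python
import subprocess
import sys

args = ['ffmpeg', '-i', 'https://example.com/live.m3u8', 'out.mp4']  # placeholder
with subprocess.Popen(args, stdin=subprocess.PIPE) as proc:
    try:
        retval = proc.wait()
    except KeyboardInterrupt:
        if sys.platform != 'win32':
            proc.communicate(b'q')  # graceful quit -> playable output
        else:
            proc.kill()
        raise
```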

View File

@@ -71,7 +71,7 @@ class FragmentFD(FileDownloader):
     @staticmethod
     def __do_ytdl_file(ctx):
-        return not ctx['live'] and not ctx['tmpfilename'] == '-'
+        return ctx['live'] is not True and ctx['tmpfilename'] != '-'

     def _read_ytdl_file(self, ctx):
         assert 'ytdl_corrupt' not in ctx
@@ -101,6 +101,13 @@ class FragmentFD(FileDownloader):
             'url': frag_url,
             'http_headers': headers or info_dict.get('http_headers'),
         }
+        frag_resume_len = 0
+        if ctx['dl'].params.get('continuedl', True):
+            frag_resume_len = self.filesize_or_none(
+                self.temp_name(fragment_filename))
+        fragment_info_dict['frag_resume_len'] = frag_resume_len
+        ctx['frag_resume_len'] = frag_resume_len or 0
+
         success = ctx['dl'].download(fragment_filename, fragment_info_dict)
         if not success:
             return False, None
@@ -124,9 +131,7 @@ class FragmentFD(FileDownloader):
         del ctx['fragment_filename_sanitized']

     def _prepare_frag_download(self, ctx):
-        if 'live' not in ctx:
-            ctx['live'] = False
-        if not ctx['live']:
+        if not ctx.setdefault('live', False):
             total_frags_str = '%d' % ctx['total_frags']
             ad_frags = ctx.get('ad_frags', 0)
             if ad_frags:
@@ -136,10 +141,11 @@ class FragmentFD(FileDownloader):
         self.to_screen(
             '[%s] Total fragments: %s' % (self.FD_NAME, total_frags_str))
         self.report_destination(ctx['filename'])
+        continuedl = self.params.get('continuedl', True)
         dl = HttpQuietDownloader(
             self.ydl,
             {
-                'continuedl': True,
+                'continuedl': continuedl,
                 'quiet': True,
                 'noprogress': True,
                 'ratelimit': self.params.get('ratelimit'),
@@ -150,12 +156,11 @@ class FragmentFD(FileDownloader):
         )
         tmpfilename = self.temp_name(ctx['filename'])
         open_mode = 'wb'
-        resume_len = 0

         # Establish possible resume length
-        if os.path.isfile(encodeFilename(tmpfilename)):
+        resume_len = self.filesize_or_none(tmpfilename) or 0
+        if resume_len > 0:
             open_mode = 'ab'
-            resume_len = os.path.getsize(encodeFilename(tmpfilename))

         # Should be initialized before ytdl file check
         ctx.update({
@@ -164,7 +169,8 @@ class FragmentFD(FileDownloader):
         })

         if self.__do_ytdl_file(ctx):
-            if os.path.isfile(encodeFilename(self.ytdl_filename(ctx['filename']))):
+            ytdl_file_exists = os.path.isfile(encodeFilename(self.ytdl_filename(ctx['filename'])))
+            if continuedl and ytdl_file_exists:
                 self._read_ytdl_file(ctx)
                 is_corrupt = ctx.get('ytdl_corrupt') is True
                 is_inconsistent = ctx['fragment_index'] > 0 and resume_len == 0
@@ -178,7 +184,12 @@ class FragmentFD(FileDownloader):
                     if 'ytdl_corrupt' in ctx:
                         del ctx['ytdl_corrupt']
                     self._write_ytdl_file(ctx)
             else:
+                if not continuedl:
+                    if ytdl_file_exists:
+                        self._read_ytdl_file(ctx)
+                    ctx['fragment_index'] = resume_len = 0
                 self._write_ytdl_file(ctx)
                 assert ctx['fragment_index'] == 0
@@ -209,6 +220,7 @@ class FragmentFD(FileDownloader):
         start = time.time()
         ctx.update({
             'started': start,
+            'fragment_started': start,
             # Amount of fragment's bytes downloaded by the time of the previous
             # frag progress hook invocation
             'prev_frag_downloaded_bytes': 0,
@@ -218,6 +230,9 @@ class FragmentFD(FileDownloader):
             if s['status'] not in ('downloading', 'finished'):
                 return

+            if not total_frags and ctx.get('fragment_count'):
+                state['fragment_count'] = ctx['fragment_count']
+
             time_now = time.time()
             state['elapsed'] = time_now - start
             frag_total_bytes = s.get('total_bytes') or 0
@@ -232,16 +247,17 @@ class FragmentFD(FileDownloader):
                 ctx['fragment_index'] = state['fragment_index']
                 state['downloaded_bytes'] += frag_total_bytes - ctx['prev_frag_downloaded_bytes']
                 ctx['complete_frags_downloaded_bytes'] = state['downloaded_bytes']
+                ctx['speed'] = state['speed'] = self.calc_speed(
+                    ctx['fragment_started'], time_now, frag_total_bytes)
+                ctx['fragment_started'] = time.time()
                 ctx['prev_frag_downloaded_bytes'] = 0
             else:
                 frag_downloaded_bytes = s['downloaded_bytes']
                 state['downloaded_bytes'] += frag_downloaded_bytes - ctx['prev_frag_downloaded_bytes']
+                ctx['speed'] = state['speed'] = self.calc_speed(
+                    ctx['fragment_started'], time_now, frag_downloaded_bytes - ctx['frag_resume_len'])
                 if not ctx['live']:
-                    state['eta'] = self.calc_eta(
-                        start, time_now, estimated_size - resume_len,
-                        state['downloaded_bytes'] - resume_len)
-                state['speed'] = s.get('speed') or ctx.get('speed')
-                ctx['speed'] = state['speed']
+                    state['eta'] = self.calc_eta(state['speed'], estimated_size - state['downloaded_bytes'])
                 ctx['prev_frag_downloaded_bytes'] = frag_downloaded_bytes

             self._hook_progress(state)
@@ -268,7 +284,7 @@ class FragmentFD(FileDownloader):
                 os.utime(ctx['filename'], (time.time(), filetime))
             except Exception:
                 pass
-        downloaded_bytes = os.path.getsize(encodeFilename(ctx['filename']))
+        downloaded_bytes = self.filesize_or_none(ctx['filename']) or 0

         self._hook_progress({
             'downloaded_bytes': downloaded_bytes,
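The progress hook now computes speed per fragment: it measures bytes since `fragment_started` and, when resuming, subtracts `frag_resume_len` so pre-existing bytes on disk do not inflate the rate. A toy version of the calculation:

```python
import time

def calc_speed(start, now, bytes_done):
    elapsed = now - start
    return bytes_done / elapsed if elapsed > 0 else None

fragment_started = time.time() - 2.0  # this fragment began 2 s ago
frag_resume_len = 1024                # bytes already on disk before resuming
frag_downloaded_bytes = 513024
speed = calc_speed(fragment_started, time.time(),
                   frag_downloaded_bytes - frag_resume_len)
print('%.0f B/s' % speed)  # ~256000 B/s
```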

View File

@@ -58,9 +58,9 @@ class HttpFD(FileDownloader):
         if self.params.get('continuedl', True):
             # Establish possible resume length
-            if os.path.isfile(encodeFilename(ctx.tmpfilename)):
-                ctx.resume_len = os.path.getsize(
-                    encodeFilename(ctx.tmpfilename))
+            ctx.resume_len = info_dict.get('frag_resume_len')
+            if ctx.resume_len is None:
+                ctx.resume_len = self.filesize_or_none(ctx.tmpfilename) or 0

         ctx.is_resume = ctx.resume_len > 0
@@ -115,9 +115,9 @@ class HttpFD(FileDownloader):
                     raise RetryDownload(err)
                 raise err
             # When trying to resume, Content-Range HTTP header of response has to be checked
-            # to match the value of requested Range HTTP header. This is due to a webservers
+            # to match the value of requested Range HTTP header. This is due to webservers
             # that don't support resuming and serve a whole file with no Content-Range
-            # set in response despite of requested Range (see
+            # set in response despite requested Range (see
             # https://github.com/ytdl-org/youtube-dl/issues/6057#issuecomment-126129799)
             if has_range:
                 content_range = ctx.data.headers.get('Content-Range')
@@ -141,7 +141,8 @@ class HttpFD(FileDownloader):
                 # Content-Range is either not present or invalid. Assuming remote webserver is
                 # trying to send the whole file, resume is not possible, so wiping the local file
                 # and performing entire redownload
-                self.report_unable_to_resume()
+                if range_start > 0:
+                    self.report_unable_to_resume()
                 ctx.resume_len = 0
                 ctx.open_mode = 'wb'
                 ctx.data_len = int_or_none(ctx.data.info().get('Content-length', None))
@@ -293,10 +294,7 @@ class HttpFD(FileDownloader):
             # Progress message
             speed = self.calc_speed(start, now, byte_counter - ctx.resume_len)
-            if ctx.data_len is None:
-                eta = None
-            else:
-                eta = self.calc_eta(start, time.time(), ctx.data_len - ctx.resume_len, byte_counter - ctx.resume_len)
+            eta = self.calc_eta(speed, ctx.data_len and (ctx.data_len - byte_counter))

             self._hook_progress({
                 'status': 'downloading',
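`calc_eta()` is reduced to two arguments, speed and remaining bytes, and the `ctx.data_len and (...)` idiom forwards `None` when the total size is unknown so no ETA is produced. Approximately:

```python
def calc_eta(speed, remaining_bytes):
    # None in either argument means the ETA cannot be estimated
    if not speed or remaining_bytes is None:
        return None
    return int(remaining_bytes / speed)

byte_counter = 4096
data_len = None                      # Content-Length missing
print(calc_eta(1000.0, data_len and (data_len - byte_counter)))  # -> None
data_len = 10000000
print(calc_eta(1000.0, data_len and (data_len - byte_counter)))  # -> 9995
```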

View File

@@ -0,0 +1,66 @@
# coding: utf-8
from __future__ import unicode_literals
try:
import threading
except ImportError:
threading = None
from .common import FileDownloader
from ..downloader import get_suitable_downloader
from ..extractor.niconico import NiconicoIE
from ..utils import sanitized_Request
class NiconicoDmcFD(FileDownloader):
""" Downloading niconico douga from DMC with heartbeat """
FD_NAME = 'niconico_dmc'
def real_download(self, filename, info_dict):
self.to_screen('[%s] Downloading from DMC' % self.FD_NAME)
ie = NiconicoIE(self.ydl)
info_dict, heartbeat_info_dict = ie._get_heartbeat_info(info_dict)
fd = get_suitable_downloader(info_dict, params=self.params)(self.ydl, self.params)
for ph in self._progress_hooks:
fd.add_progress_hook(ph)
if not threading:
self.to_screen('[%s] Threading for Heartbeat not available' % self.FD_NAME)
return fd.real_download(filename, info_dict)
success = download_complete = False
timer = [None]
heartbeat_lock = threading.Lock()
heartbeat_url = heartbeat_info_dict['url']
heartbeat_data = heartbeat_info_dict['data'].encode()
heartbeat_interval = heartbeat_info_dict.get('interval', 30)
request = sanitized_Request(heartbeat_url, heartbeat_data)
def heartbeat():
try:
self.ydl.urlopen(request).read()
except Exception:
self.to_screen('[%s] Heartbeat failed' % self.FD_NAME)
with heartbeat_lock:
if not download_complete:
timer[0] = threading.Timer(heartbeat_interval, heartbeat)
timer[0].start()
heartbeat_info_dict['ping']()
self.to_screen('[%s] Heartbeat with %d second interval ...' % (self.FD_NAME, heartbeat_interval))
try:
heartbeat()
if type(fd).__name__ == 'HlsFD':
info_dict.update(ie._extract_m3u8_formats(info_dict['url'], info_dict['id'])[0])
success = fd.real_download(filename, info_dict)
finally:
if heartbeat_lock:
with heartbeat_lock:
timer[0].cancel()
download_complete = True
return success
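The heartbeat keeps itself alive by re-arming a `threading.Timer` from inside its own callback until the download flips a completion flag under the lock. The skeleton of that pattern, outside any youtube-dl context:

```python
import threading
import time

interval = 1.0
download_complete = False
heartbeat_lock = threading.Lock()
timer = [None]

def heartbeat():
    print('ping')  # stands in for the DMC keep-alive request
    with heartbeat_lock:
        if not download_complete:
            timer[0] = threading.Timer(interval, heartbeat)
            timer[0].start()

heartbeat()
time.sleep(3.5)  # the actual download would run here
with heartbeat_lock:
    download_complete = True
    timer[0].cancel()  # stop the chain
```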

View File

@@ -89,11 +89,13 @@ class RtmpFD(FileDownloader):
                         self.to_screen('')
                     cursor_in_new_line = True
                     self.to_screen('[rtmpdump] ' + line)
-        finally:
+            if not cursor_in_new_line:
+                self.to_screen('')
+            return proc.wait()
+        except BaseException:  # Including KeyboardInterrupt
+            proc.kill()
             proc.wait()
-        if not cursor_in_new_line:
-            self.to_screen('')
-        return proc.returncode
+            raise

         url = info_dict['url']
         player_url = info_dict.get('player_url')

View File

@@ -31,30 +31,34 @@ from ..utils import (
 class ADNIE(InfoExtractor):
-    IE_DESC = 'Anime Digital Network'
-    _VALID_URL = r'https?://(?:www\.)?animedigitalnetwork\.fr/video/[^/]+/(?P<id>\d+)'
-    _TEST = {
-        'url': 'http://animedigitalnetwork.fr/video/blue-exorcist-kyoto-saga/7778-episode-1-debut-des-hostilites',
-        'md5': '0319c99885ff5547565cacb4f3f9348d',
+    IE_DESC = 'Animation Digital Network'
+    _VALID_URL = r'https?://(?:www\.)?(?:animation|anime)digitalnetwork\.fr/video/[^/]+/(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'https://animationdigitalnetwork.fr/video/fruits-basket/9841-episode-1-a-ce-soir',
+        'md5': '1c9ef066ceb302c86f80c2b371615261',
         'info_dict': {
-            'id': '7778',
+            'id': '9841',
             'ext': 'mp4',
-            'title': 'Blue Exorcist - Kyôto Saga - Episode 1',
-            'description': 'md5:2f7b5aa76edbc1a7a92cedcda8a528d5',
-            'series': 'Blue Exorcist - Kyôto Saga',
-            'duration': 1467,
-            'release_date': '20170106',
+            'title': 'Fruits Basket - Episode 1',
+            'description': 'md5:14be2f72c3c96809b0ca424b0097d336',
+            'series': 'Fruits Basket',
+            'duration': 1437,
+            'release_date': '20190405',
             'comment_count': int,
             'average_rating': float,
-            'season_number': 2,
-            'episode': 'Début des hostilités',
+            'season_number': 1,
+            'episode': 'À ce soir !',
             'episode_number': 1,
-        }
-    }
+        },
+        'skip': 'Only available in region (FR, ...)',
+    }, {
+        'url': 'http://animedigitalnetwork.fr/video/blue-exorcist-kyoto-saga/7778-episode-1-debut-des-hostilites',
+        'only_matching': True,
+    }]

-    _NETRC_MACHINE = 'animedigitalnetwork'
-    _BASE_URL = 'http://animedigitalnetwork.fr'
-    _API_BASE_URL = 'https://gw.api.animedigitalnetwork.fr/'
+    _NETRC_MACHINE = 'animationdigitalnetwork'
+    _BASE = 'animationdigitalnetwork.fr'
+    _API_BASE_URL = 'https://gw.api.' + _BASE + '/'
     _PLAYER_BASE_URL = _API_BASE_URL + 'player/'
     _HEADERS = {}
     _LOGIN_ERR_MESSAGE = 'Unable to log in'
@@ -82,14 +86,14 @@ class ADNIE(InfoExtractor):
         if subtitle_location:
             enc_subtitles = self._download_webpage(
                 subtitle_location, video_id, 'Downloading subtitles data',
-                fatal=False, headers={'Origin': 'https://animedigitalnetwork.fr'})
+                fatal=False, headers={'Origin': 'https://' + self._BASE})
         if not enc_subtitles:
             return None

-        # http://animedigitalnetwork.fr/components/com_vodvideo/videojs/adn-vjs.min.js
+        # http://animationdigitalnetwork.fr/components/com_vodvideo/videojs/adn-vjs.min.js
         dec_subtitles = intlist_to_bytes(aes_cbc_decrypt(
             bytes_to_intlist(compat_b64decode(enc_subtitles[24:])),
-            bytes_to_intlist(binascii.unhexlify(self._K + 'ab9f52f5baae7c72')),
+            bytes_to_intlist(binascii.unhexlify(self._K + '7fac1178830cfe0c')),
             bytes_to_intlist(compat_b64decode(enc_subtitles[:24]))
         ))
         subtitles_json = self._parse_json(
@@ -138,9 +142,9 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
         if not username:
             return
         try:
+            url = self._API_BASE_URL + 'authentication/login'
             access_token = (self._download_json(
-                self._API_BASE_URL + 'authentication/login', None,
-                'Logging in', self._LOGIN_ERR_MESSAGE, fatal=False,
+                url, None, 'Logging in', self._LOGIN_ERR_MESSAGE, fatal=False,
                 data=urlencode_postdata({
                     'password': password,
                     'rememberMe': False,
@@ -153,7 +157,8 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
             message = None
             if isinstance(e.cause, compat_HTTPError) and e.cause.code == 401:
                 resp = self._parse_json(
-                    e.cause.read().decode(), None, fatal=False) or {}
+                    self._webpage_read_content(e.cause, url, username),
+                    username, fatal=False) or {}
                 message = resp.get('message') or resp.get('code')
             self.report_warning(message or self._LOGIN_ERR_MESSAGE)
@@ -211,7 +216,9 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
                 # This usually goes away with a different random pkcs1pad, so retry
                 continue

-            error = self._parse_json(e.cause.read(), video_id)
+            error = self._parse_json(
+                self._webpage_read_content(e.cause, links_url, video_id),
+                video_id, fatal=False) or {}
             message = error.get('message')
             if e.cause.code == 403 and error.get('code') == 'player-bad-geolocation-country':
                 self.raise_geo_restricted(msg=message)
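The subtitle blob layout is unchanged, only the static half of the AES key moves: the first 24 base64 characters decode to the 16-byte CBC IV, the rest is the ciphertext, and the 128-bit key is `self._K` (obtained elsewhere in the extractor) plus the hard-coded hex suffix. Structure only, without the AES step (the payload below is fabricated):

```python
import base64
import binascii

iv_b64 = base64.b64encode(b'0123456789abcdef').decode()    # 16 bytes -> 24 b64 chars
data_b64 = base64.b64encode(b'fake-ciphertext-blocks!!').decode()
enc_subtitles = iv_b64 + data_b64                          # packed as served

iv = base64.b64decode(enc_subtitles[:24])                  # AES-CBC IV
ciphertext = base64.b64decode(enc_subtitles[24:])          # encrypted subtitles JSON
key_suffix = binascii.unhexlify('7fac1178830cfe0c')        # static half of the key
print(len(iv), len(ciphertext), len(key_suffix))           # 16 24 8
```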

View File

@@ -8,6 +8,8 @@ from ..utils import (
     ExtractorError,
     GeoRestrictedError,
     int_or_none,
+    remove_start,
+    traverse_obj,
     update_url_query,
     urlencode_postdata,
 )
@@ -20,8 +22,8 @@ class AENetworksBaseIE(ThePlatformIE):
                         (?:history(?:vault)?|aetv|mylifetime|lifetimemovieclub)\.com|
                         fyi\.tv
                     )/'''
-    _THEPLATFORM_KEY = 'crazyjava'
-    _THEPLATFORM_SECRET = 's3cr3t'
+    _THEPLATFORM_KEY = '43jXaGRQud'
+    _THEPLATFORM_SECRET = 'S10BPXHMlb'
     _DOMAIN_MAP = {
         'history.com': ('HISTORY', 'history'),
         'aetv.com': ('AETV', 'aetv'),
@@ -33,14 +35,17 @@ class AENetworksBaseIE(ThePlatformIE):
     }

     def _extract_aen_smil(self, smil_url, video_id, auth=None):
-        query = {'mbr': 'true'}
+        query = {
+            'mbr': 'true',
+            'formats': 'M3U+none,MPEG-DASH+none,MPEG4,MP3',
+        }
         if auth:
             query['auth'] = auth
         TP_SMIL_QUERY = [{
             'assetTypes': 'high_video_ak',
-            'switch': 'hls_high_ak'
+            'switch': 'hls_high_ak',
         }, {
-            'assetTypes': 'high_video_s3'
+            'assetTypes': 'high_video_s3',
         }, {
             'assetTypes': 'high_video_s3',
             'switch': 'hls_high_fastly',
@@ -75,7 +80,14 @@ class AENetworksBaseIE(ThePlatformIE):
         requestor_id, brand = self._DOMAIN_MAP[domain]
         result = self._download_json(
             'https://feeds.video.aetnd.com/api/v2/%s/videos' % brand,
-            filter_value, query={'filter[%s]' % filter_key: filter_value})['results'][0]
+            filter_value, query={'filter[%s]' % filter_key: filter_value})
+        result = traverse_obj(
+            result, ('results',
+                     lambda k, v: k == 0 and v[filter_key] == filter_value),
+            get_all=False)
+        if not result:
+            raise ExtractorError('Show not found in A&E feed (too new?)', expected=True,
+                                 video_id=remove_start(filter_value, '/'))
         title = result['title']
         video_id = result['id']
         media_url = result['publicUrl']
@@ -126,7 +138,7 @@ class AENetworksIE(AENetworksBaseIE):
             'skip_download': True,
         },
         'add_ie': ['ThePlatform'],
-        'skip': 'This video is only available for users of participating TV providers.',
+        'skip': 'Geo-restricted - This content is not available in your location.'
     }, {
         'url': 'http://www.aetv.com/shows/duck-dynasty/season-9/episode-1',
         'info_dict': {
@@ -143,6 +155,7 @@ class AENetworksIE(AENetworksBaseIE):
             'skip_download': True,
         },
         'add_ie': ['ThePlatform'],
+        'skip': 'This video is only available for users of participating TV providers.',
     }, {
         'url': 'http://www.fyi.tv/shows/tiny-house-nation/season-1/episode-8',
         'only_matching': True
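The `traverse_obj` call replaces a bare `['results'][0]` with a guarded lookup: the first feed entry is used only if it actually matches the requested filter, otherwise the show is reported as missing. Without the helper, the logic is approximately:

```python
def first_matching_result(payload, filter_key, filter_value):
    results = payload.get('results') or []
    first = results[0] if results else None
    # accept the first entry only when it matches what was asked for
    if isinstance(first, dict) and first.get(filter_key) == filter_value:
        return first
    return None

feed = {'results': [{'slug': 'duck-dynasty', 'id': 1}]}
print(first_matching_result(feed, 'slug', 'duck-dynasty'))  # the entry
print(first_matching_result({'results': []}, 'slug', 'x'))  # None -> error path
```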

View File

@@ -18,7 +18,7 @@ class AliExpressLiveIE(InfoExtractor):
             'id': '2800002704436634',
             'ext': 'mp4',
             'title': 'CASIMA7.22',
-            'thumbnail': r're:http://.*\.jpg',
+            'thumbnail': r're:https?://.*\.jpg',
             'uploader': 'CASIMA Official Store',
             'timestamp': 1500717600,
             'upload_date': '20170722',

View File

@@ -0,0 +1,89 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
clean_html,
dict_get,
get_element_by_class,
int_or_none,
unified_strdate,
url_or_none,
)
class Alsace20TVIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?alsace20\.tv/(?:[\w-]+/)+[\w-]+-(?P<id>[\w]+)'
_TESTS = [{
'url': 'https://www.alsace20.tv/VOD/Actu/JT/Votre-JT-jeudi-3-fevrier-lyNHCXpYJh.html',
# 'md5': 'd91851bf9af73c0ad9b2cdf76c127fbb',
'info_dict': {
'id': 'lyNHCXpYJh',
'ext': 'mp4',
'description': 'md5:fc0bc4a0692d3d2dba4524053de4c7b7',
'title': 'Votre JT du jeudi 3 février',
'upload_date': '20220203',
'thumbnail': r're:https?://.+\.jpg',
'duration': 1073,
'view_count': int,
},
'params': {
'format': 'bestvideo',
},
}]
def _extract_video(self, video_id, url=None):
info = self._download_json(
'https://www.alsace20.tv/visionneuse/visio_v9_js.php?key=%s&habillage=0&mode=html' % (video_id, ),
video_id) or {}
title = info['titre']
formats = []
for res, fmt_url in (info.get('files') or {}).items():
formats.extend(
self._extract_smil_formats(fmt_url, video_id, fatal=False)
if '/smil:_' in fmt_url
else self._extract_mpd_formats(fmt_url, video_id, mpd_id=res, fatal=False))
self._sort_formats(formats)
webpage = (url and self._download_webpage(url, video_id, fatal=False)) or ''
thumbnail = url_or_none(dict_get(info, ('image', 'preview', )) or self._og_search_thumbnail(webpage))
upload_date = self._search_regex(r'/(\d{6})_', thumbnail, 'upload_date', default=None)
upload_date = unified_strdate('20%s-%s-%s' % (upload_date[:2], upload_date[2:4], upload_date[4:])) if upload_date else None
return {
'id': video_id,
'title': title,
'formats': formats,
'description': clean_html(get_element_by_class('wysiwyg', webpage)),
'upload_date': upload_date,
'thumbnail': thumbnail,
'duration': int_or_none(self._og_search_property('video:duration', webpage) if webpage else None),
'view_count': int_or_none(info.get('nb_vues')),
}
def _real_extract(self, url):
video_id = self._match_id(url)
return self._extract_video(video_id, url)
class Alsace20TVEmbedIE(Alsace20TVIE):
_VALID_URL = r'https?://(?:www\.)?alsace20\.tv/emb/(?P<id>[\w]+)'
_TESTS = [{
'url': 'https://www.alsace20.tv/emb/lyNHCXpYJh',
# 'md5': 'd91851bf9af73c0ad9b2cdf76c127fbb',
'info_dict': {
'id': 'lyNHCXpYJh',
'ext': 'mp4',
'title': 'Votre JT du jeudi 3 février',
'upload_date': '20220203',
'thumbnail': r're:https?://.+\.jpg',
'view_count': int,
},
'params': {
'format': 'bestvideo',
},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
return self._extract_video(video_id)
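The upload date is recovered from six digits that the site embeds in thumbnail filenames, read as YYMMDD and rebuilt into a full date for `unified_strdate`. For example (the thumbnail URL shape is assumed):

```python
import re

thumbnail = 'https://www.alsace20.tv/images/220203_jt.jpg'  # hypothetical
m = re.search(r'/(\d{6})_', thumbnail)
if m:
    d = m.group(1)  # '220203' -> YY MM DD
    print('20%s-%s-%s' % (d[:2], d[2:4], d[4:]))  # 2022-02-03 -> '20220203'
```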

View File

@@ -15,7 +15,7 @@ from ..utils import (
 class AmericasTestKitchenIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?(?:americastestkitchen|cooks(?:country|illustrated))\.com/(?P<resource_type>episode|videos)/(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?(?:americastestkitchen|cooks(?:country|illustrated))\.com/(?:cooks(?:country|illustrated)/)?(?P<resource_type>episode|videos)/(?P<id>\d+)'
     _TESTS = [{
         'url': 'https://www.americastestkitchen.com/episode/582-weeknight-japanese-suppers',
         'md5': 'b861c3e365ac38ad319cfd509c30577f',
@@ -23,15 +23,20 @@ class AmericasTestKitchenIE(InfoExtractor):
             'id': '5b400b9ee338f922cb06450c',
             'title': 'Japanese Suppers',
             'ext': 'mp4',
+            'display_id': 'weeknight-japanese-suppers',
             'description': 'md5:64e606bfee910627efc4b5f050de92b3',
-            'thumbnail': r're:^https?://',
-            'timestamp': 1523318400,
-            'upload_date': '20180410',
-            'release_date': '20180410',
+            'timestamp': 1523304000,
+            'upload_date': '20180409',
+            'release_date': '20180409',
             'series': "America's Test Kitchen",
+            'season': 'Season 18',
             'season_number': 18,
             'episode': 'Japanese Suppers',
             'episode_number': 15,
+            'duration': 1376,
+            'thumbnail': r're:^https?://',
+            'average_rating': 0,
+            'view_count': int,
         },
         'params': {
             'skip_download': True,
@@ -44,15 +49,20 @@ class AmericasTestKitchenIE(InfoExtractor):
             'id': '5fbe8c61bda2010001c6763b',
             'title': 'Simple Chicken Dinner',
             'ext': 'mp4',
+            'display_id': 'atktv_2103_simple-chicken-dinner_full-episode_web-mp4',
             'description': 'md5:eb68737cc2fd4c26ca7db30139d109e7',
-            'thumbnail': r're:^https?://',
-            'timestamp': 1610755200,
-            'upload_date': '20210116',
-            'release_date': '20210116',
+            'timestamp': 1610737200,
+            'upload_date': '20210115',
+            'release_date': '20210115',
             'series': "America's Test Kitchen",
+            'season': 'Season 21',
             'season_number': 21,
             'episode': 'Simple Chicken Dinner',
             'episode_number': 3,
+            'duration': 1397,
+            'thumbnail': r're:^https?://',
+            'view_count': int,
+            'average_rating': 0,
         },
         'params': {
             'skip_download': True,
@@ -60,6 +70,12 @@ class AmericasTestKitchenIE(InfoExtractor):
     }, {
         'url': 'https://www.americastestkitchen.com/videos/3420-pan-seared-salmon',
         'only_matching': True,
+    }, {
+        'url': 'https://www.americastestkitchen.com/cookscountry/episode/564-when-only-chocolate-will-do',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.americastestkitchen.com/cooksillustrated/videos/4478-beef-wellington',
+        'only_matching': True,
     }, {
         'url': 'https://www.cookscountry.com/episode/564-when-only-chocolate-will-do',
         'only_matching': True,
@@ -94,7 +110,7 @@ class AmericasTestKitchenIE(InfoExtractor):
 class AmericasTestKitchenSeasonIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?(?P<show>americastestkitchen|cookscountry)\.com/episodes/browse/season_(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?(?P<show>americastestkitchen|(?P<cooks>cooks(?:country|illustrated)))\.com(?:(?:/(?P<show2>cooks(?:country|illustrated)))?(?:/?$|(?<!ated)(?<!ated\.com)/episodes/browse/season_(?P<season>\d+)))'
     _TESTS = [{
         # ATK Season
         'url': 'https://www.americastestkitchen.com/episodes/browse/season_1',
@@ -105,48 +121,93 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
         'playlist_count': 13,
     }, {
         # Cooks Country Season
-        'url': 'https://www.cookscountry.com/episodes/browse/season_12',
+        'url': 'https://www.americastestkitchen.com/cookscountry/episodes/browse/season_12',
         'info_dict': {
             'id': 'season_12',
             'title': 'Season 12',
         },
         'playlist_count': 13,
+    }, {
+        # America's Test Kitchen Series
+        'url': 'https://www.americastestkitchen.com/',
+        'info_dict': {
+            'id': 'americastestkitchen',
+            'title': 'America\'s Test Kitchen',
+        },
+        'playlist_count': 558,
+    }, {
+        # Cooks Country Series
+        'url': 'https://www.americastestkitchen.com/cookscountry',
+        'info_dict': {
+            'id': 'cookscountry',
+            'title': 'Cook\'s Country',
+        },
+        'playlist_count': 199,
+    }, {
+        'url': 'https://www.americastestkitchen.com/cookscountry/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.cookscountry.com/episodes/browse/season_12',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.cookscountry.com',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.americastestkitchen.com/cooksillustrated/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.cooksillustrated.com',
+        'only_matching': True,
     }]
     def _real_extract(self, url):
-        show_name, season_number = re.match(self._VALID_URL, url).groups()
-        season_number = int(season_number)
+        match = re.match(self._VALID_URL, url).groupdict()
+        show = match.get('show2')
+        show_path = ('/' + show) if show else ''
+        show = show or match['show']
+        season_number = int_or_none(match.get('season'))

-        slug = 'atk' if show_name == 'americastestkitchen' else 'cco'
+        slug, title = {
+            'americastestkitchen': ('atk', 'America\'s Test Kitchen'),
+            'cookscountry': ('cco', 'Cook\'s Country'),
+            'cooksillustrated': ('cio', 'Cook\'s Illustrated'),
+        }[show]

-        season = 'Season %d' % season_number
+        facet_filters = [
+            'search_document_klass:episode',
+            'search_show_slug:' + slug,
+        ]
+
+        if season_number:
+            playlist_id = 'season_%d' % season_number
+            playlist_title = 'Season %d' % season_number
+            facet_filters.append('search_season_list:' + playlist_title)
+        else:
+            playlist_id = show
+            playlist_title = title

         season_search = self._download_json(
             'https://y1fnzxui30-dsn.algolia.net/1/indexes/everest_search_%s_season_desc_production' % slug,
-            season, headers={
-                'Origin': 'https://www.%s.com' % show_name,
+            playlist_id, headers={
+                'Origin': 'https://www.americastestkitchen.com',
                 'X-Algolia-API-Key': '8d504d0099ed27c1b73708d22871d805',
                 'X-Algolia-Application-Id': 'Y1FNZXUI30',
             }, query={
-                'facetFilters': json.dumps([
-                    'search_season_list:' + season,
-                    'search_document_klass:episode',
-                    'search_show_slug:' + slug,
-                ]),
-                'attributesToRetrieve': 'description,search_%s_episode_number,search_document_date,search_url,title' % slug,
+                'facetFilters': json.dumps(facet_filters),
+                'attributesToRetrieve': 'description,search_%s_episode_number,search_document_date,search_url,title,search_atk_episode_season' % slug,
                 'attributesToHighlight': '',
                 'hitsPerPage': 1000,
             })

         def entries():
             for episode in (season_search.get('hits') or []):
-                search_url = episode.get('search_url')
+                search_url = episode.get('search_url')  # always formatted like '/episode/123-title-of-episode'
                 if not search_url:
                     continue
                 yield {
                     '_type': 'url',
-                    'url': 'https://www.%s.com%s' % (show_name, search_url),
-                    'id': try_get(episode, lambda e: e['objectID'].split('_')[-1]),
+                    'url': 'https://www.americastestkitchen.com%s%s' % (show_path, search_url),
+                    'id': try_get(episode, lambda e: e['objectID'].rsplit('_', 1)[-1]),
                     'title': episode.get('title'),
                     'description': episode.get('description'),
                     'timestamp': unified_timestamp(episode.get('search_document_date')),
@@ -156,4 +217,4 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
                 }

         return self.playlist_result(
-            entries(), 'season_%d' % season_number, season)
+            entries(), playlist_id, playlist_title)
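Season and whole-series pages now differ only in the facet filters sent to the Algolia index: a season page adds a `search_season_list` facet on top of the document-class and show-slug filters. A compact illustration of the query payload:

```python
import json

def build_facets(slug, season_number=None):
    facet_filters = [
        'search_document_klass:episode',
        'search_show_slug:' + slug,
    ]
    if season_number:
        facet_filters.append('search_season_list:Season %d' % season_number)
    return {'facetFilters': json.dumps(facet_filters), 'hitsPerPage': 1000}

print(build_facets('cco', 12))  # one season of Cook's Country
print(build_facets('atk'))      # every America's Test Kitchen episode
```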

View File

@@ -6,25 +6,21 @@ import re
 from .common import InfoExtractor
 from ..utils import (
     determine_ext,
-    js_to_json,
+    int_or_none,
     url_or_none,
 )


 class APAIE(InfoExtractor):
-    _VALID_URL = r'https?://[^/]+\.apa\.at/embed/(?P<id>[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})'
+    _VALID_URL = r'(?P<base_url>https?://[^/]+\.apa\.at)/embed/(?P<id>[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})'
     _TESTS = [{
         'url': 'http://uvp.apa.at/embed/293f6d17-692a-44e3-9fd5-7b178f3a1029',
         'md5': '2b12292faeb0a7d930c778c7a5b4759b',
         'info_dict': {
-            'id': 'jjv85FdZ',
+            'id': '293f6d17-692a-44e3-9fd5-7b178f3a1029',
             'ext': 'mp4',
-            'title': '"Blau ist mysteriös": Die Blue Man Group im Interview',
-            'description': 'md5:d41d8cd98f00b204e9800998ecf8427e',
+            'title': '293f6d17-692a-44e3-9fd5-7b178f3a1029',
             'thumbnail': r're:^https?://.*\.jpg$',
-            'duration': 254,
-            'timestamp': 1519211149,
-            'upload_date': '20180221',
         },
     }, {
         'url': 'https://uvp-apapublisher.sf.apa.at/embed/2f94e9e6-d945-4db2-9548-f9a41ebf7b78',
@@ -46,9 +42,11 @@ class APAIE(InfoExtractor):
             webpage)]

     def _real_extract(self, url):
-        video_id = self._match_id(url)
+        mobj = re.match(self._VALID_URL, url)
+        video_id, base_url = mobj.group('id', 'base_url')

-        webpage = self._download_webpage(url, video_id)
+        webpage = self._download_webpage(
+            '%s/player/%s' % (base_url, video_id), video_id)

         jwplatform_id = self._search_regex(
             r'media[iI]d\s*:\s*["\'](?P<id>[a-zA-Z0-9]{8})', webpage,
@@ -59,16 +57,18 @@ class APAIE(InfoExtractor):
                 'jwplatform:' + jwplatform_id, ie='JWPlatform',
                 video_id=video_id)

-        sources = self._parse_json(
-            self._search_regex(
-                r'sources\s*=\s*(\[.+?\])\s*;', webpage, 'sources'),
-            video_id, transform_source=js_to_json)
+        def extract(field, name=None):
+            return self._search_regex(
+                r'\b%s["\']\s*:\s*(["\'])(?P<value>(?:(?!\1).)+)\1' % field,
+                webpage, name or field, default=None, group='value')
+
+        title = extract('title') or video_id
+        description = extract('description')
+        thumbnail = extract('poster', 'thumbnail')

         formats = []
-        for source in sources:
-            if not isinstance(source, dict):
-                continue
-            source_url = url_or_none(source.get('file'))
+        for format_id in ('hls', 'progressive'):
+            source_url = url_or_none(extract(format_id))
             if not source_url:
                 continue
             ext = determine_ext(source_url)
@@ -77,18 +77,19 @@ class APAIE(InfoExtractor):
                     source_url, video_id, 'mp4', entry_protocol='m3u8_native',
                     m3u8_id='hls', fatal=False))
             else:
+                height = int_or_none(self._search_regex(
+                    r'(\d+)\.mp4', source_url, 'height', default=None))
                 formats.append({
                     'url': source_url,
+                    'format_id': format_id,
+                    'height': height,
                 })
         self._sort_formats(formats)

-        thumbnail = self._search_regex(
-            r'image\s*:\s*(["\'])(?P<url>(?:(?!\1).)+)\1', webpage,
-            'thumbnail', fatal=False, group='url')

         return {
             'id': video_id,
-            'title': video_id,
+            'title': title,
+            'description': description,
             'thumbnail': thumbnail,
             'formats': formats,
         }
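Instead of parsing a `sources` JS array, the extractor now greps individual player-config fields with a single quoted-value regex, so it keeps working even when the config is not valid JSON. How the pattern behaves on a config fragment (the snippet is invented):

```python
import re

webpage = '"title": "Some clip", "poster": "https://example.com/p.jpg", "hls": "https://example.com/x.m3u8"'

def extract(field):
    m = re.search(
        r'\b%s["\']\s*:\s*(["\'])(?P<value>(?:(?!\1).)+)\1' % field, webpage)
    return m.group('value') if m else None

print(extract('title'))  # Some clip
print(extract('hls'))    # https://example.com/x.m3u8
```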

View File

@@ -9,10 +9,10 @@ from ..utils import (
 class AppleConnectIE(InfoExtractor):
-    _VALID_URL = r'https?://itunes\.apple\.com/\w{0,2}/?post/idsa\.(?P<id>[\w-]+)'
-    _TEST = {
+    _VALID_URL = r'https?://itunes\.apple\.com/\w{0,2}/?post/(?:id)?sa\.(?P<id>[\w-]+)'
+    _TESTS = [{
         'url': 'https://itunes.apple.com/us/post/idsa.4ab17a39-2720-11e5-96c5-a5b38f6c42d3',
-        'md5': 'e7c38568a01ea45402570e6029206723',
+        'md5': 'c1d41f72c8bcaf222e089434619316e4',
         'info_dict': {
             'id': '4ab17a39-2720-11e5-96c5-a5b38f6c42d3',
             'ext': 'm4v',
@@ -22,7 +22,10 @@ class AppleConnectIE(InfoExtractor):
             'upload_date': '20150710',
             'timestamp': 1436545535,
         },
-    }
+    }, {
+        'url': 'https://itunes.apple.com/us/post/sa.0fe0229f-2457-11e5-9f40-1bb645f2d5d9',
+        'only_matching': True,
+    }]

     def _real_extract(self, url):
         video_id = self._match_id(url)
@@ -36,7 +39,7 @@ class AppleConnectIE(InfoExtractor):
         video_data = self._parse_json(video_json, video_id)
         timestamp = str_to_int(self._html_search_regex(r'data-timestamp="(\d+)"', webpage, 'timestamp'))
-        like_count = str_to_int(self._html_search_regex(r'(\d+) Loves', webpage, 'like count'))
+        like_count = str_to_int(self._html_search_regex(r'(\d+) Loves', webpage, 'like count', default=None))

         return {
             'id': video_id,

View File

@@ -3,8 +3,11 @@ from __future__ import unicode_literals
 from .common import InfoExtractor
 from ..utils import (
+    clean_html,
     clean_podcast_url,
+    get_element_by_class,
     int_or_none,
+    parse_codecs,
     parse_iso8601,
     try_get,
 )
@@ -14,16 +17,17 @@ class ApplePodcastsIE(InfoExtractor):
     _VALID_URL = r'https?://podcasts\.apple\.com/(?:[^/]+/)?podcast(?:/[^/]+){1,2}.*?\bi=(?P<id>\d+)'
     _TESTS = [{
         'url': 'https://podcasts.apple.com/us/podcast/207-whitney-webb-returns/id1135137367?i=1000482637777',
-        'md5': 'df02e6acb11c10e844946a39e7222b08',
+        'md5': '41dc31cd650143e530d9423b6b5a344f',
         'info_dict': {
             'id': '1000482637777',
             'ext': 'mp3',
             'title': '207 - Whitney Webb Returns',
-            'description': 'md5:13a73bade02d2e43737751e3987e1399',
+            'description': 'md5:75ef4316031df7b41ced4e7b987f79c6',
             'upload_date': '20200705',
-            'timestamp': 1593921600,
-            'duration': 6425,
+            'timestamp': 1593932400,
+            'duration': 6454,
             'series': 'The Tim Dillon Show',
+            'thumbnail': 're:.+[.](png|jpe?g|webp)',
         }
     }, {
         'url': 'https://podcasts.apple.com/podcast/207-whitney-webb-returns/id1135137367?i=1000482637777',
@@ -39,18 +43,40 @@ class ApplePodcastsIE(InfoExtractor):
     def _real_extract(self, url):
         episode_id = self._match_id(url)
         webpage = self._download_webpage(url, episode_id)
-        ember_data = self._parse_json(self._search_regex(
-            r'id="shoebox-ember-data-store"[^>]*>\s*({.+?})\s*<',
-            webpage, 'ember data'), episode_id)
-        episode = ember_data['data']['attributes']
+        episode_data = {}
+        ember_data = {}
+        # new page type 2021-11
+        amp_data = self._parse_json(self._search_regex(
+            r'(?s)id="shoebox-media-api-cache-amp-podcasts"[^>]*>\s*({.+?})\s*<',
+            webpage, 'AMP data', default='{}'), episode_id, fatal=False) or {}
+        amp_data = try_get(amp_data,
+                           lambda a: self._parse_json(
+                               next(a[x] for x in iter(a) if episode_id in x),
+                               episode_id),
+                           dict) or {}
+        amp_data = amp_data.get('d') or []
+        episode_data = try_get(
+            amp_data,
+            lambda a: next(x for x in a
+                           if x['type'] == 'podcast-episodes' and x['id'] == episode_id),
+            dict)
+        if not episode_data:
+            # try pre 2021-11 page type: TODO: consider deleting if no longer used
+            ember_data = self._parse_json(self._search_regex(
+                r'(?s)id="shoebox-ember-data-store"[^>]*>\s*({.+?})\s*<',
+                webpage, 'ember data'), episode_id) or {}
+            ember_data = ember_data.get(episode_id) or ember_data
+            episode_data = try_get(ember_data, lambda x: x['data'], dict)
+        episode = episode_data['attributes']
         description = episode.get('description') or {}

         series = None
-        for inc in (ember_data.get('included') or []):
+        for inc in (amp_data or ember_data.get('included') or []):
             if inc.get('type') == 'media/podcast':
                 series = try_get(inc, lambda x: x['attributes']['name'])
+        series = series or clean_html(get_element_by_class('podcast-header__identity', webpage))

-        return {
+        info = [{
             'id': episode_id,
             'title': episode['name'],
             'url': clean_podcast_url(episode['assetUrl']),
@@ -58,4 +84,10 @@ class ApplePodcastsIE(InfoExtractor):
             'timestamp': parse_iso8601(episode.get('releaseDateTime')),
             'duration': int_or_none(episode.get('durationInMilliseconds'), 1000),
             'series': series,
-        }
+            'thumbnail': self._og_search_thumbnail(webpage),
+        }]
+        self._sort_formats(info)
+        info = info[0]
+        codecs = parse_codecs(info.get('ext', 'mp3'))
+        info.update(codecs)
+        return info

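The new extraction path above walks Apple's `shoebox-media-api-cache-amp-podcasts` JSON rather than the retired ember store. A minimal, runnable sketch of that lookup — the payload below is invented for illustration; only the key naming, the `d` list, and the `type`/`id` matching follow the extractor:

import json

episode_id = '1000482637777'
# Hypothetical, trimmed-down shoebox cache: opaque keys (containing the
# episode id) map to JSON-encoded strings.
shoebox = {
    'podcasts/1135137367/episodes/1000482637777': json.dumps({
        'd': [
            {'type': 'media/podcast', 'id': '1135137367',
             'attributes': {'name': 'The Tim Dillon Show'}},
            {'type': 'podcast-episodes', 'id': '1000482637777',
             'attributes': {'name': '207 - Whitney Webb Returns'}},
        ],
    }),
}

# Pick the cache entry whose key mentions the episode id, decode it, then
# find the 'podcast-episodes' record with a matching id - the same two-step
# lookup the extractor performs with try_get()/next().
entry = json.loads(next(v for k, v in shoebox.items() if episode_id in k))
episode = next(x for x in entry['d']
               if x['type'] == 'podcast-episodes' and x['id'] == episode_id)
print(episode['attributes']['name'])  # 207 - Whitney Webb Returns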
View File

@@ -2,15 +2,17 @@ from __future__ import unicode_literals
 from .common import InfoExtractor
 from ..utils import (
-    unified_strdate,
     clean_html,
+    extract_attributes,
+    unified_strdate,
+    unified_timestamp,
 )


 class ArchiveOrgIE(InfoExtractor):
     IE_NAME = 'archive.org'
     IE_DESC = 'archive.org videos'
-    _VALID_URL = r'https?://(?:www\.)?archive\.org/(?:details|embed)/(?P<id>[^/?#]+)(?:[?].*)?$'
+    _VALID_URL = r'https?://(?:www\.)?archive\.org/(?:details|embed)/(?P<id>[^/?#&]+)'
     _TESTS = [{
         'url': 'http://archive.org/details/XD300-23_68HighlightsAResearchCntAugHumanIntellect',
         'md5': '8af1d4cf447933ed3c7f4871162602db',
@@ -19,8 +21,11 @@ class ArchiveOrgIE(InfoExtractor):
             'ext': 'ogg',
             'title': '1968 Demo - FJCC Conference Presentation Reel #1',
             'description': 'md5:da45c349df039f1cc8075268eb1b5c25',
-            'upload_date': '19681210',
-            'uploader': 'SRI International'
+            'creator': 'SRI International',
+            'release_date': '19681210',
+            'uploader': 'SRI International',
+            'timestamp': 1268695290,
+            'upload_date': '20100315',
         }
     }, {
         'url': 'https://archive.org/details/Cops1922',
@@ -29,22 +34,43 @@ class ArchiveOrgIE(InfoExtractor):
             'id': 'Cops1922',
             'ext': 'mp4',
             'title': 'Buster Keaton\'s "Cops" (1922)',
-            'description': 'md5:89e7c77bf5d965dd5c0372cfb49470f6',
+            'description': 'md5:43a603fd6c5b4b90d12a96b921212b9c',
+            'timestamp': 1387699629,
+            'upload_date': '20131222',
         }
     }, {
         'url': 'http://archive.org/embed/XD300-23_68HighlightsAResearchCntAugHumanIntellect',
         'only_matching': True,
+    }, {
+        'url': 'https://archive.org/details/MSNBCW_20131125_040000_To_Catch_a_Predator/',
+        'only_matching': True,
     }]

     def _real_extract(self, url):
         video_id = self._match_id(url)
         webpage = self._download_webpage(
             'http://archive.org/embed/' + video_id, video_id)
-        jwplayer_playlist = self._parse_json(self._search_regex(
-            r"(?s)Play\('[^']+'\s*,\s*(\[.+\])\s*,\s*{.*?}\)",
-            webpage, 'jwplayer playlist'), video_id)
-        info = self._parse_jwplayer_data(
-            {'playlist': jwplayer_playlist}, video_id, base_url=url)
+
+        playlist = None
+        play8 = self._search_regex(
+            r'(<[^>]+\bclass=["\']js-play8-playlist[^>]+>)', webpage,
+            'playlist', default=None)
+        if play8:
+            attrs = extract_attributes(play8)
+            playlist = attrs.get('value')
+        if not playlist:
+            # Old jwplayer fallback
+            playlist = self._search_regex(
+                r"(?s)Play\('[^']+'\s*,\s*(\[.+\])\s*,\s*{.*?}\)",
+                webpage, 'jwplayer playlist', default='[]')
+        jwplayer_playlist = self._parse_json(playlist, video_id, fatal=False)
+        if jwplayer_playlist:
+            info = self._parse_jwplayer_data(
+                {'playlist': jwplayer_playlist}, video_id, base_url=url)
+        else:
+            # HTML5 media fallback
+            info = self._parse_html5_media_entries(url, webpage, video_id)[0]
+            info['id'] = video_id

         def get_optional(metadata, field):
             return metadata.get(field, [None])[0]
@@ -58,8 +84,12 @@ class ArchiveOrgIE(InfoExtractor):
             'description': clean_html(get_optional(metadata, 'description')),
         })
         if info.get('_type') != 'playlist':
+            creator = get_optional(metadata, 'creator')
             info.update({
-                'uploader': get_optional(metadata, 'creator'),
-                'upload_date': unified_strdate(get_optional(metadata, 'date')),
+                'creator': creator,
+                'release_date': unified_strdate(get_optional(metadata, 'date')),
+                'uploader': get_optional(metadata, 'publisher') or creator,
+                'timestamp': unified_timestamp(get_optional(metadata, 'publicdate')),
+                'language': get_optional(metadata, 'language'),
             })
         return info

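For reference, the new play8 path above reads the playlist JSON out of an HTML attribute before falling back to the old jwplayer regex. A rough, self-contained sketch of that idea — the markup is invented, and the naive value='...' pull stands in for the more robust extract_attributes():

import json
import re

webpage = ('<input class="js-play8-playlist" type="hidden" '
           'value=\'[{"title": "Cops (1922)", "sources": []}]\'/>')

# Grab the js-play8-playlist tag, then pull its value attribute.
tag = re.search(r'<[^>]+\bclass=["\']js-play8-playlist[^>]+>', webpage).group(0)
value = re.search(r"value='([^']*)'", tag).group(1)
playlist = json.loads(value)
print(playlist[0]['title'])  # Cops (1922)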
View File

@@ -249,14 +249,14 @@ class ARDMediathekIE(ARDMediathekBaseIE):

 class ARDIE(InfoExtractor):
-    _VALID_URL = r'(?P<mainurl>https?://(?:www\.)?daserste\.de/[^?#]+/videos(?:extern)?/(?P<display_id>[^/?#]+)-(?:video-?)?(?P<id>[0-9]+))\.html'
+    _VALID_URL = r'(?P<mainurl>https?://(?:www\.)?daserste\.de/(?:[^/?#&]+/)+(?P<id>[^/?#&]+))\.html'
     _TESTS = [{
         # available till 7.01.2022
         'url': 'https://www.daserste.de/information/talk/maischberger/videos/maischberger-die-woche-video100.html',
         'md5': '867d8aa39eeaf6d76407c5ad1bb0d4c1',
         'info_dict': {
-            'display_id': 'maischberger-die-woche',
-            'id': '100',
+            'id': 'maischberger-die-woche-video100',
+            'display_id': 'maischberger-die-woche-video100',
             'ext': 'mp4',
             'duration': 3687.0,
             'title': 'maischberger. die woche vom 7. Januar 2021',
@@ -264,16 +264,25 @@ class ARDIE(InfoExtractor):
             'thumbnail': r're:^https?://.*\.jpg$',
         },
     }, {
-        'url': 'https://www.daserste.de/information/reportage-dokumentation/erlebnis-erde/videosextern/woelfe-und-herdenschutzhunde-ungleiche-brueder-102.html',
+        'url': 'https://www.daserste.de/information/politik-weltgeschehen/morgenmagazin/videosextern/dominik-kahun-aus-der-nhl-direkt-zur-weltmeisterschaft-100.html',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.daserste.de/information/nachrichten-wetter/tagesthemen/videosextern/tagesthemen-17736.html',
         'only_matching': True,
     }, {
         'url': 'http://www.daserste.de/information/reportage-dokumentation/dokus/videos/die-story-im-ersten-mission-unter-falscher-flagge-100.html',
         'only_matching': True,
+    }, {
+        'url': 'https://www.daserste.de/unterhaltung/serie/in-aller-freundschaft-die-jungen-aerzte/Drehpause-100.html',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.daserste.de/unterhaltung/film/filmmittwoch-im-ersten/videos/making-ofwendezeit-video-100.html',
+        'only_matching': True,
     }]

     def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)
-        display_id = mobj.group('display_id')
+        display_id = mobj.group('id')

         player_url = mobj.group('mainurl') + '~playerXml.xml'
         doc = self._download_xml(player_url, display_id)
@@ -284,26 +293,63 @@ class ARDIE(InfoExtractor):
         formats = []
         for a in video_node.findall('.//asset'):
+            file_name = xpath_text(a, './fileName', default=None)
+            if not file_name:
+                continue
+            format_type = a.attrib.get('type')
+            format_url = url_or_none(file_name)
+            if format_url:
+                ext = determine_ext(file_name)
+                if ext == 'm3u8':
+                    formats.extend(self._extract_m3u8_formats(
+                        format_url, display_id, 'mp4', entry_protocol='m3u8_native',
+                        m3u8_id=format_type or 'hls', fatal=False))
+                    continue
+                elif ext == 'f4m':
+                    formats.extend(self._extract_f4m_formats(
+                        update_url_query(format_url, {'hdcore': '3.7.0'}),
+                        display_id, f4m_id=format_type or 'hds', fatal=False))
+                    continue
             f = {
-                'format_id': a.attrib['type'],
-                'width': int_or_none(a.find('./frameWidth').text),
-                'height': int_or_none(a.find('./frameHeight').text),
-                'vbr': int_or_none(a.find('./bitrateVideo').text),
-                'abr': int_or_none(a.find('./bitrateAudio').text),
-                'vcodec': a.find('./codecVideo').text,
-                'tbr': int_or_none(a.find('./totalBitrate').text),
+                'format_id': format_type,
+                'width': int_or_none(xpath_text(a, './frameWidth')),
+                'height': int_or_none(xpath_text(a, './frameHeight')),
+                'vbr': int_or_none(xpath_text(a, './bitrateVideo')),
+                'abr': int_or_none(xpath_text(a, './bitrateAudio')),
+                'vcodec': xpath_text(a, './codecVideo'),
+                'tbr': int_or_none(xpath_text(a, './totalBitrate')),
             }
-            if a.find('./serverPrefix').text:
-                f['url'] = a.find('./serverPrefix').text
-                f['playpath'] = a.find('./fileName').text
+            server_prefix = xpath_text(a, './serverPrefix', default=None)
+            if server_prefix:
+                f.update({
+                    'url': server_prefix,
+                    'playpath': file_name,
+                })
             else:
-                f['url'] = a.find('./fileName').text
+                if not format_url:
+                    continue
+                f['url'] = format_url
             formats.append(f)
         self._sort_formats(formats)

+        _SUB_FORMATS = (
+            ('./dataTimedText', 'ttml'),
+            ('./dataTimedTextNoOffset', 'ttml'),
+            ('./dataTimedTextVtt', 'vtt'),
+        )
+        subtitles = {}
+        for subsel, subext in _SUB_FORMATS:
+            for node in video_node.findall(subsel):
+                subtitles.setdefault('de', []).append({
+                    'url': node.attrib['url'],
+                    'ext': subext,
+                })
+
         return {
-            'id': mobj.group('id'),
+            'id': xpath_text(video_node, './videoId', default=display_id),
             'formats': formats,
+            'subtitles': subtitles,
             'display_id': display_id,
             'title': video_node.find('./title').text,
             'duration': parse_duration(video_node.find('./duration').text),
@@ -313,7 +359,7 @@ class ARDIE(InfoExtractor):

 class ARDBetaMediathekIE(ARDMediathekBaseIE):
-    _VALID_URL = r'https://(?:(?:beta|www)\.)?ardmediathek\.de/(?P<client>[^/]+)/(?:player|live|video)/(?P<display_id>(?:[^/]+/)*)(?P<video_id>[a-zA-Z0-9]+)'
+    _VALID_URL = r'https://(?:(?:beta|www)\.)?ardmediathek\.de/(?:[^/]+/)?(?:player|live|video)/(?:[^/]+/)*(?P<id>Y3JpZDovL[a-zA-Z0-9]+)'
     _TESTS = [{
         'url': 'https://www.ardmediathek.de/mdr/video/die-robuste-roswita/Y3JpZDovL21kci5kZS9iZWl0cmFnL2Ntcy84MWMxN2MzZC0wMjkxLTRmMzUtODk4ZS0wYzhlOWQxODE2NGI/',
         'md5': 'a1dc75a39c61601b980648f7c9f9f71d',
@@ -343,22 +389,22 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
     }, {
         'url': 'https://www.ardmediathek.de/swr/live/Y3JpZDovL3N3ci5kZS8xMzQ4MTA0Mg',
         'only_matching': True,
+    }, {
+        'url': 'https://www.ardmediathek.de/video/coronavirus-update-ndr-info/astrazeneca-kurz-lockdown-und-pims-syndrom-81/ndr/Y3JpZDovL25kci5kZS84NzE0M2FjNi0wMWEwLTQ5ODEtOTE5NS1mOGZhNzdhOTFmOTI/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.ardmediathek.de/ard/player/Y3JpZDovL3dkci5kZS9CZWl0cmFnLWQ2NDJjYWEzLTMwZWYtNGI4NS1iMTI2LTU1N2UxYTcxOGIzOQ/tatort-duo-koeln-leipzig-ihr-kinderlein-kommet',
+        'only_matching': True,
     }]

     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('video_id')
-        display_id = mobj.group('display_id')
-        if display_id:
-            display_id = display_id.rstrip('/')
-        if not display_id:
-            display_id = video_id
+        video_id = self._match_id(url)

         player_page = self._download_json(
             'https://api.ardmediathek.de/public-gateway',
-            display_id, data=json.dumps({
+            video_id, data=json.dumps({
                 'query': '''{
-  playerPage(client:"%s", clipId: "%s") {
+  playerPage(client: "ard", clipId: "%s") {
     blockedByFsk
     broadcastedOn
     maturityContentRating
@@ -388,7 +434,7 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
       }
     }
   }
-}''' % (mobj.group('client'), video_id),
+}''' % video_id,
             }).encode(), headers={
                 'Content-Type': 'application/json'
             })['data']['playerPage']
@@ -413,7 +459,6 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
             r'\(FSK\s*(\d+)\)\s*$', description, 'age limit', default=None))
         info.update({
             'age_limit': age_limit,
-            'display_id': display_id,
             'title': title,
             'description': description,
             'timestamp': unified_timestamp(player_page.get('broadcastedOn')),

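The subtitle collection added above iterates a fixed list of (XPath, extension) pairs over the player XML. A standalone sketch with a made-up XML fragment — element names follow the extractor, the URLs are placeholders:

import xml.etree.ElementTree as ET

video_node = ET.fromstring('''
<video>
  <videoId>maischberger-die-woche-video100</videoId>
  <dataTimedText url="https://example.invalid/subs.ttml"/>
  <dataTimedTextVtt url="https://example.invalid/subs.vtt"/>
</video>
''')

_SUB_FORMATS = (
    ('./dataTimedText', 'ttml'),
    ('./dataTimedTextNoOffset', 'ttml'),
    ('./dataTimedTextVtt', 'vtt'),
)
subtitles = {}
for subsel, subext in _SUB_FORMATS:
    for node in video_node.findall(subsel):
        subtitles.setdefault('de', []).append({
            'url': node.attrib['url'],
            'ext': subext,
        })
print(subtitles)  # two German subtitle entries, one ttml and one vtt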
View File

@@ -0,0 +1,101 @@
# coding: utf-8
from __future__ import unicode_literals

from .common import InfoExtractor
from ..compat import (
    compat_parse_qs,
    compat_urllib_parse_urlparse,
)
from ..utils import (
    float_or_none,
    int_or_none,
    parse_iso8601,
    remove_start,
)


class ArnesIE(InfoExtractor):
    IE_NAME = 'video.arnes.si'
    IE_DESC = 'Arnes Video'
    _VALID_URL = r'https?://video\.arnes\.si/(?:[a-z]{2}/)?(?:watch|embed|api/(?:asset|public/video))/(?P<id>[0-9a-zA-Z]{12})'
    _TESTS = [{
        'url': 'https://video.arnes.si/watch/a1qrWTOQfVoU?t=10',
        'md5': '4d0f4d0a03571b33e1efac25fd4a065d',
        'info_dict': {
            'id': 'a1qrWTOQfVoU',
            'ext': 'mp4',
            'title': 'Linearna neodvisnost, definicija',
            'description': 'Linearna neodvisnost, definicija',
            'license': 'PRIVATE',
            'creator': 'Polona Oblak',
            'timestamp': 1585063725,
            'upload_date': '20200324',
            'channel': 'Polona Oblak',
            'channel_id': 'q6pc04hw24cj',
            'channel_url': 'https://video.arnes.si/?channel=q6pc04hw24cj',
            'duration': 596.75,
            'view_count': int,
            'tags': ['linearna_algebra'],
            'start_time': 10,
        }
    }, {
        'url': 'https://video.arnes.si/api/asset/s1YjnV7hadlC/play.mp4',
        'only_matching': True,
    }, {
        'url': 'https://video.arnes.si/embed/s1YjnV7hadlC',
        'only_matching': True,
    }, {
        'url': 'https://video.arnes.si/en/watch/s1YjnV7hadlC',
        'only_matching': True,
    }, {
        'url': 'https://video.arnes.si/embed/s1YjnV7hadlC?t=123&hideRelated=1',
        'only_matching': True,
    }, {
        'url': 'https://video.arnes.si/api/public/video/s1YjnV7hadlC',
        'only_matching': True,
    }]
    _BASE_URL = 'https://video.arnes.si'

    def _real_extract(self, url):
        video_id = self._match_id(url)

        video = self._download_json(
            self._BASE_URL + '/api/public/video/' + video_id, video_id)['data']
        title = video['title']

        formats = []
        for media in (video.get('media') or []):
            media_url = media.get('url')
            if not media_url:
                continue
            formats.append({
                'url': self._BASE_URL + media_url,
                'format_id': remove_start(media.get('format'), 'FORMAT_'),
                'format_note': media.get('formatTranslation'),
                'width': int_or_none(media.get('width')),
                'height': int_or_none(media.get('height')),
            })
        self._sort_formats(formats)

        channel = video.get('channel') or {}
        channel_id = channel.get('url')
        thumbnail = video.get('thumbnailUrl')

        return {
            'id': video_id,
            'title': title,
            'formats': formats,
            'thumbnail': self._BASE_URL + thumbnail,
            'description': video.get('description'),
            'license': video.get('license'),
            'creator': video.get('author'),
            'timestamp': parse_iso8601(video.get('creationTime')),
            'channel': channel.get('name'),
            'channel_id': channel_id,
            'channel_url': self._BASE_URL + '/?channel=' + channel_id if channel_id else None,
            'duration': float_or_none(video.get('duration'), 1000),
            'view_count': int_or_none(video.get('views')),
            'tags': video.get('hashtags'),
            'start_time': int_or_none(compat_parse_qs(
                compat_urllib_parse_urlparse(url).query).get('t', [None])[0]),
        }

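The start_time field above comes from the t query parameter. The same parse, using the plain stdlib names behind the compat_ aliases:

try:
    from urllib.parse import parse_qs, urlparse  # Python 3
except ImportError:
    from urlparse import parse_qs, urlparse  # Python 2

def start_time_from_url(url):
    # Equivalent to the compat_parse_qs(compat_urllib_parse_urlparse(url).query)
    # combination used by the extractor.
    t = parse_qs(urlparse(url).query).get('t', [None])[0]
    return int(t) if t is not None else None

print(start_time_from_url('https://video.arnes.si/watch/a1qrWTOQfVoU?t=10'))  # 10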
View File

@@ -0,0 +1,315 @@
# coding: utf-8
from __future__ import unicode_literals

import re

from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    float_or_none,
    int_or_none,
    merge_dicts,
    parse_iso8601,
    str_or_none,
    T,
    traverse_obj,
    url_or_none,
)


class Art19IE(InfoExtractor):
    _UUID_REGEX = r'[\da-f]{8}-?[\da-f]{4}-?[\da-f]{4}-?[\da-f]{4}-?[\da-f]{12}'
    _VALID_URL = (
        r'https?://(?:www\.)?art19\.com/shows/[^/#?]+/episodes/(?P<id>{0})'.format(_UUID_REGEX),
        r'https?://rss\.art19\.com/episodes/(?P<id>{0})\.mp3'.format(_UUID_REGEX),
    )
    _EMBED_REGEX = (r'<iframe\b[^>]+\bsrc\s*=\s*[\'"](?P<url>{0})'.format(_VALID_URL[0]),)

    _TESTS = [{
        'url': 'https://rss.art19.com/episodes/5ba1413c-48b8-472b-9cc3-cfd952340bdb.mp3',
        'info_dict': {
            'id': '5ba1413c-48b8-472b-9cc3-cfd952340bdb',
            'ext': 'mp3',
            'title': 'Why Did DeSantis Drop Out?',
            'series': 'The Daily Briefing',
            'release_timestamp': 1705941275,
            'description': 'md5:da38961da4a3f7e419471365e3c6b49f',
            'episode': 'Episode 582',
            'thumbnail': r're:^https?://content\.production\.cdn\.art19\.com.*\.jpeg$',
            'series_id': 'ed52a0ab-08b1-4def-8afc-549e4d93296d',
            'upload_date': '20240122',
            'timestamp': 1705940815,
            'episode_number': 582,
            # 'modified_date': '20240122',
            'episode_id': '5ba1413c-48b8-472b-9cc3-cfd952340bdb',
            'modified_timestamp': int,
            'release_date': '20240122',
            'duration': 527.4,
        },
    }, {
        'url': 'https://art19.com/shows/scamfluencers/episodes/8319b776-4153-4d22-8630-631f204a03dd',
        'info_dict': {
            'id': '8319b776-4153-4d22-8630-631f204a03dd',
            'ext': 'mp3',
            'title': 'Martha Stewart: The Homemaker Hustler Part 2',
            # 'modified_date': '20240116',
            'upload_date': '20240105',
            'modified_timestamp': int,
            'episode_id': '8319b776-4153-4d22-8630-631f204a03dd',
            'series_id': 'd3c9b8ca-26b3-42f4-9bd8-21d1a9031e75',
            'thumbnail': r're:^https?://content\.production\.cdn\.art19\.com.*\.jpeg$',
            'description': r're:(?s)In the summer of 2003, Martha Stewart is indicted .{695}#do-not-sell-my-info\.$',
            'release_timestamp': 1705305660,
            'release_date': '20240115',
            'timestamp': 1704481536,
            'episode_number': 88,
            'series': 'Scamfluencers',
            'duration': 2588.37501,
            'episode': 'Episode 88',
        },
    }]
    _WEBPAGE_TESTS = [{
        'url': 'https://www.nu.nl/formule-1/6291456/verstappen-wordt-een-synoniem-voor-formule-1.html',
        'info_dict': {
            'id': '7d42626a-7301-47db-bb8a-3b6f054d77d7',
            'ext': 'mp3',
            'title': "'Verstappen wordt een synoniem voor Formule 1'",
            'season': 'Seizoen 6',
            'description': 'md5:39a7159a31c4cda312b2e893bdd5c071',
            'episode_id': '7d42626a-7301-47db-bb8a-3b6f054d77d7',
            'duration': 3061.82111,
            'series_id': '93f4e113-2a60-4609-a564-755058fa40d8',
            'release_date': '20231126',
            'modified_timestamp': 1701156004,
            'thumbnail': r're:^https?://content\.production\.cdn\.art19\.com.*\.jpeg$',
            'season_number': 6,
            'episode_number': 52,
            # 'modified_date': '20231128',
            'upload_date': '20231126',
            'timestamp': 1701025981,
            'season_id': '36097c1e-7455-490d-a2fe-e2f10b4d5f26',
            'series': 'De Boordradio',
            'release_timestamp': 1701026308,
            'episode': 'Episode 52',
        },
    }, {
        'url': 'https://www.wishtv.com/podcast-episode/larry-bucshon-announces-retirement-from-congress/',
        'info_dict': {
            'id': '8da368bd-08d1-46d0-afaa-c134a4af7dc0',
            'ext': 'mp3',
            'title': 'Larry Bucshon announces retirement from congress',
            'upload_date': '20240115',
            'episode_number': 148,
            'episode': 'Episode 148',
            'thumbnail': r're:^https?://content\.production\.cdn\.art19\.com.*\.jpeg$',
            'release_date': '20240115',
            'timestamp': 1705328205,
            'release_timestamp': 1705329275,
            'series': 'All INdiana Politics',
            # 'modified_date': '20240117',
            'modified_timestamp': 1705458901,
            'series_id': 'c4af6c27-b10f-4ff2-9f84-0f407df86ff1',
            'episode_id': '8da368bd-08d1-46d0-afaa-c134a4af7dc0',
            'description': 'md5:53b5239e4d14973a87125c217c255b2a',
            'duration': 1256.18848,
        },
    }]

    @classmethod
    def _extract_embed_urls(cls, url, webpage):
        for from_ in super(Art19IE, cls)._extract_embed_urls(url, webpage):
            yield from_
        for episode_id in re.findall(
                r'<div\b[^>]+\bclass\s*=\s*[\'"][^\'"]*art19-web-player[^\'"]*[\'"][^>]+\bdata-episode-id=[\'"]({0})[\'"]'.format(cls._UUID_REGEX), webpage):
            yield 'https://rss.art19.com/episodes/{0}.mp3'.format(episode_id)

    def _real_extract(self, url):
        episode_id = self._match_id(url)

        player_metadata = self._download_json(
            'https://art19.com/episodes/{0}'.format(episode_id), episode_id,
            note='Downloading player metadata', fatal=False,
            headers={'Accept': 'application/vnd.art19.v0+json'})
        rss_metadata = self._download_json(
            'https://rss.art19.com/episodes/{0}.json'.format(episode_id), episode_id,
            fatal=False, note='Downloading RSS metadata')

        formats = [{
            'format_id': 'direct',
            'url': 'https://rss.art19.com/episodes/{0}.mp3'.format(episode_id),
            'vcodec': 'none',
            'acodec': 'mp3',
        }]
        for fmt_id, fmt_data in traverse_obj(rss_metadata, (
                'content', 'media', T(dict.items),
                lambda _, k_v: k_v[0] != 'waveform_bin' and k_v[1].get('url'))):
            fmt_url = url_or_none(fmt_data['url'])
            if not fmt_url:
                continue
            formats.append({
                'format_id': fmt_id,
                'url': fmt_url,
                'vcodec': 'none',
                'acodec': fmt_id,
                'quality': -2 if fmt_id == 'ogg' else -1,
            })
        self._sort_formats(formats)

        return merge_dicts({
            'id': episode_id,
            'formats': formats,
        }, traverse_obj(player_metadata, ('episode', {
            'title': ('title', T(str_or_none)),
            'description': ('description_plain', T(str_or_none)),
            'episode_id': ('id', T(str_or_none)),
            'episode_number': ('episode_number', T(int_or_none)),
            'season_id': ('season_id', T(str_or_none)),
            'series_id': ('series_id', T(str_or_none)),
            'timestamp': ('created_at', T(parse_iso8601)),
            'release_timestamp': ('released_at', T(parse_iso8601)),
            'modified_timestamp': ('updated_at', T(parse_iso8601)),
        })), traverse_obj(rss_metadata, ('content', {
            'title': ('episode_title', T(str_or_none)),
            'description': ('episode_description_plain', T(str_or_none)),
            'episode_id': ('episode_id', T(str_or_none)),
            'episode_number': ('episode_number', T(int_or_none)),
            'season': ('season_title', T(str_or_none)),
            'season_id': ('season_id', T(str_or_none)),
            'season_number': ('season_number', T(int_or_none)),
            'series': ('series_title', T(str_or_none)),
            'series_id': ('series_id', T(str_or_none)),
            'thumbnail': ('cover_image', T(url_or_none)),
            'duration': ('duration', T(float_or_none)),
        })), rev=True)


class Art19ShowIE(InfoExtractor):
    IE_DESC = 'Art19 series'
    _VALID_URL_BASE = r'https?://(?:www\.)?art19\.com/shows/(?P<id>[\w-]+)(?:/embed)?/?'
    _VALID_URL = (
        r'{0}(?:$|[#?])'.format(_VALID_URL_BASE),
        r'https?://rss\.art19\.com/(?P<id>[\w-]+)/?(?:$|[#?])',
    )
    _EMBED_REGEX = (r'<iframe[^>]+\bsrc=[\'"](?P<url>{0}[^\'"])'.format(_VALID_URL_BASE),)

    _TESTS = [{
        'url': 'https://www.art19.com/shows/5898c087-a14f-48dc-b6fc-a2280a1ff6e0/',
        'info_dict': {
            '_type': 'playlist',
            'id': '5898c087-a14f-48dc-b6fc-a2280a1ff6e0',
            'display_id': 'echt-gebeurd',
            'title': 'Echt Gebeurd',
            'description': r're:(?us)Bij\sEcht Gebeurd\svertellen mensen .{1166} Eline Veldhuisen\.$',
            'timestamp': 1492642167,
            # 'upload_date': '20170419',
            'modified_timestamp': int,
            # 'modified_date': str,
            'tags': 'count:7',
        },
        'playlist_mincount': 425,
    }, {
        'url': 'https://rss.art19.com/scamfluencers',
        'info_dict': {
            '_type': 'playlist',
            'id': 'd3c9b8ca-26b3-42f4-9bd8-21d1a9031e75',
            'display_id': 'scamfluencers',
            'title': 'Scamfluencers',
            'description': r're:(?s)You never really know someone\b.{1078} wondery\.com/links/scamfluencers/ now\.$',
            'timestamp': 1647368573,
            # 'upload_date': '20220315',
            'modified_timestamp': int,
            # 'modified_date': str,
            'tags': [],
        },
        'playlist_mincount': 90,
    }, {
        'url': 'https://art19.com/shows/enthuellt/embed',
        'info_dict': {
            '_type': 'playlist',
            'id': 'e2cacf57-bb8a-4263-aa81-719bcdd4f80c',
            'display_id': 'enthuellt',
            'title': 'Enthüllt',
            'description': 'md5:17752246643414a2fd51744fc9a1c08e',
            'timestamp': 1601645860,
            # 'upload_date': '20201002',
            'modified_timestamp': int,
            # 'modified_date': str,
            'tags': 'count:10',
        },
        'playlist_mincount': 10,
        'skip': 'Content not found',
    }]
    _WEBPAGE_TESTS = [{
        'url': 'https://deconstructingyourself.com/deconstructing-yourself-podcast',
        'info_dict': {
            '_type': 'playlist',
            'id': 'cfbb9b01-c295-4adb-8726-adde7c03cf21',
            'display_id': 'deconstructing-yourself',
            'title': 'Deconstructing Yourself',
            'description': 'md5:dab5082b28b248a35476abf64768854d',
            'timestamp': 1570581181,
            # 'upload_date': '20191009',
            'modified_timestamp': int,
            # 'modified_date': str,
            'tags': 'count:5',
        },
        'playlist_mincount': 80,
    }, {
        'url': 'https://chicagoreader.com/columns-opinion/podcasts/ben-joravsky-show-podcast-episodes/',
        'info_dict': {
            '_type': 'playlist',
            'id': '9dfa2c37-ab87-4c13-8388-4897914313ec',
            'display_id': 'the-ben-joravsky-show',
            'title': 'The Ben Joravsky Show',
            'description': 'md5:c0f3ec0ee0dbea764390e521adc8780a',
            'timestamp': 1550875095,
            # 'upload_date': '20190222',
            'modified_timestamp': int,
            # 'modified_date': str,
            'tags': ['Chicago Politics', 'chicago', 'Ben Joravsky'],
        },
        'playlist_mincount': 1900,
    }]

    @classmethod
    def _extract_embed_urls(cls, url, webpage):
        for from_ in super(Art19ShowIE, cls)._extract_embed_urls(url, webpage):
            yield from_
        for series_id in re.findall(
                r'<div[^>]+\bclass=[\'"][^\'"]*art19-web-player[^\'"]*[\'"][^>]+\bdata-series-id=[\'"]([\w-]+)[\'"]', webpage):
            yield 'https://art19.com/shows/{0}'.format(series_id)

    def _real_extract(self, url):
        series_id = self._match_id(url)

        for expected in ((403, 404), None):
            series_metadata, urlh = self._download_json_handle(
                'https://art19.com/series/{0}'.format(series_id), series_id, note='Downloading series metadata',
                headers={'Accept': 'application/vnd.art19.v0+json'},
                expected_status=(403, 404))
            if urlh.getcode() == 403:
                # raise the actual problem with the page
                urlh = self._request_webpage(url, series_id, expected_status=404)
                if urlh.getcode() == 404:
                    raise ExtractorError(
                        'content not found, possibly expired',
                        video_id=series_id, expected=True)
            if urlh.getcode() not in (expected or []):
                # apparently OK
                break

        return merge_dicts(
            self.playlist_result((
                self.url_result('https://rss.art19.com/episodes/{0}.mp3'.format(episode_id), Art19IE)
                for episode_id in traverse_obj(series_metadata, ('series', 'episode_ids', Ellipsis, T(str_or_none))))),
            traverse_obj(series_metadata, ('series', {
                'id': ('id', T(str_or_none)),
                'display_id': ('slug', T(str_or_none)),
                'title': ('title', T(str_or_none)),
                'description': ('description_plain', T(str_or_none)),
                'timestamp': ('created_at', T(parse_iso8601)),
                'modified_timestamp': ('updated_at', T(parse_iso8601)),
            })),
            traverse_obj(series_metadata, {
                'tags': ('tags', Ellipsis, 'name', T(str_or_none)),
            }, {'tags': T(lambda _: [])}))

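_extract_embed_urls() above turns every Art19 web-player <div> on a host page into a canonical RSS URL. A runnable sketch of that scan — the host-page snippet is invented; the regex and URL template are the extractor's:

import re

UUID_RE = r'[\da-f]{8}-?[\da-f]{4}-?[\da-f]{4}-?[\da-f]{4}-?[\da-f]{12}'

webpage = ('<div class="art19-web-player awp-medium-skin" '
           'data-episode-id="5ba1413c-48b8-472b-9cc3-cfd952340bdb"></div>')

for episode_id in re.findall(
        r'<div\b[^>]+\bclass\s*=\s*[\'"][^\'"]*art19-web-player[^\'"]*[\'"]'
        r'[^>]+\bdata-episode-id=[\'"]({0})[\'"]'.format(UUID_RE), webpage):
    print('https://rss.art19.com/episodes/{0}.mp3'.format(episode_id))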
View File

@@ -12,6 +12,7 @@ from ..utils import (
     ExtractorError,
     int_or_none,
     qualities,
+    strip_or_none,
     try_get,
     unified_strdate,
     url_or_none,
@@ -252,3 +253,49 @@ class ArteTVPlaylistIE(ArteTVBaseIE):
         title = collection.get('title')
         description = collection.get('shortDescription') or collection.get('teaserText')
         return self.playlist_result(entries, playlist_id, title, description)
+
+
+class ArteTVCategoryIE(ArteTVBaseIE):
+    _VALID_URL = r'https?://(?:www\.)?arte\.tv/(?P<lang>%s)/videos/(?P<id>[\w-]+(?:/[\w-]+)*)/?\s*$' % ArteTVBaseIE._ARTE_LANGUAGES
+    _TESTS = [{
+        'url': 'https://www.arte.tv/en/videos/politics-and-society/',
+        'info_dict': {
+            'id': 'politics-and-society',
+            'title': 'Politics and society',
+            'description': 'Investigative documentary series, geopolitical analysis, and international commentary',
+        },
+        'playlist_mincount': 13,
+    },
+    ]
+
+    @classmethod
+    def suitable(cls, url):
+        return (
+            not any(ie.suitable(url) for ie in (ArteTVIE, ArteTVPlaylistIE, ))
+            and super(ArteTVCategoryIE, cls).suitable(url))
+
+    def _real_extract(self, url):
+        lang, playlist_id = re.match(self._VALID_URL, url).groups()
+        webpage = self._download_webpage(url, playlist_id)
+
+        items = []
+        for video in re.finditer(
+                r'<a\b[^>]*?href\s*=\s*(?P<q>"|\'|\b)(?P<url>https?://www\.arte\.tv/%s/videos/[\w/-]+)(?P=q)' % lang,
+                webpage):
+            video = video.group('url')
+            if video == url:
+                continue
+            if any(ie.suitable(video) for ie in (ArteTVIE, ArteTVPlaylistIE, )):
+                items.append(video)
+
+        if items:
+            title = (self._og_search_title(webpage, default=None)
+                     or self._html_search_regex(r'<title\b[^>]*>([^<]+)</title>', default=None))
+            title = strip_or_none(title.rsplit('|', 1)[0]) or self._generic_title(url)
+
+            result = self.playlist_from_matches(items, playlist_id=playlist_id, playlist_title=title)
+            if result:
+                description = self._og_search_description(webpage, default=None)
+                if description:
+                    result['description'] = description
+                return result

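The category extractor above is essentially a filtered link scan. A simplified, self-contained version of that scan — the sample markup is invented, and the real code additionally drops the page's own URL and defers to ArteTVIE/ArteTVPlaylistIE via suitable():

import re

lang = 'en'
webpage = '''
<a href="https://www.arte.tv/en/videos/103520-000-A/some-documentary/">Doc</a>
<a href='https://www.arte.tv/en/videos/RC-014095/some-collection/'>Collection</a>
'''

items = []
for m in re.finditer(
        r'<a\b[^>]*?href\s*=\s*(?P<q>"|\')(?P<url>https?://www\.arte\.tv/%s/videos/[\w/-]+)(?P=q)' % lang,
        webpage):
    items.append(m.group('url'))
print(items)  # both candidate video/collection URLs, regardless of quote style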
View File

@@ -14,7 +14,7 @@ from ..utils import (

 class AudiomackIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/song/(?P<id>[\w/-]+)'
+    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/(?:song/|(?=.+/song/))(?P<id>[\w/-]+)'
     IE_NAME = 'audiomack'
     _TESTS = [
         # hosted on audiomack
@@ -29,25 +29,27 @@ class AudiomackIE(InfoExtractor):
             }
         },
         # audiomack wrapper around soundcloud song
+        # Needs new test URL.
         {
             'add_ie': ['Soundcloud'],
             'url': 'http://www.audiomack.com/song/hip-hop-daily/black-mamba-freestyle',
-            'info_dict': {
-                'id': '258901379',
-                'ext': 'mp3',
-                'description': 'mamba day freestyle for the legend Kobe Bryant ',
-                'title': 'Black Mamba Freestyle [Prod. By Danny Wolf]',
-                'uploader': 'ILOVEMAKONNEN',
-                'upload_date': '20160414',
-            }
+            'only_matching': True,
+            # 'info_dict': {
+            #     'id': '258901379',
+            #     'ext': 'mp3',
+            #     'description': 'mamba day freestyle for the legend Kobe Bryant ',
+            #     'title': 'Black Mamba Freestyle [Prod. By Danny Wolf]',
+            #     'uploader': 'ILOVEMAKONNEN',
+            #     'upload_date': '20160414',
+            # }
         },
     ]

     def _real_extract(self, url):
-        # URLs end with [uploader name]/[uploader title]
+        # URLs end with [uploader name]/song/[uploader title]
         # this title is whatever the user types in, and is rarely
         # the proper song title. Real metadata is in the api response
-        album_url_tag = self._match_id(url)
+        album_url_tag = self._match_id(url).replace('/song/', '/')

         # Request the extended version of the api for extra fields like artist and title
         api_response = self._download_json(
@@ -73,13 +75,13 @@ class AudiomackAlbumIE(InfoExtractor):

 class AudiomackAlbumIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/album/(?P<id>[\w/-]+)'
+    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/(?:album/|(?=.+/album/))(?P<id>[\w/-]+)'
     IE_NAME = 'audiomack:album'
     _TESTS = [
         # Standard album playlist
         {
             'url': 'http://www.audiomack.com/album/flytunezcom/tha-tour-part-2-mixtape',
-            'playlist_count': 15,
+            'playlist_count': 11,
             'info_dict':
             {
                 'id': '812251',
@@ -95,24 +97,24 @@ class AudiomackAlbumIE(InfoExtractor):
             },
             'playlist': [{
                 'info_dict': {
-                    'title': 'PPP (Pistol P Project) - 9. Heaven or Hell (CHIMACA) ft Zuse (prod by DJ FU)',
-                    'id': '837577',
+                    'title': 'PPP (Pistol P Project) - 10. 4 Minutes Of Hell Part 4 (prod by DY OF 808 MAFIA)',
+                    'id': '837580',
                     'ext': 'mp3',
                     'uploader': 'Lil Herb a.k.a. G Herbo',
                 }
             }],
             'params': {
-                'playliststart': 9,
-                'playlistend': 9,
+                'playliststart': 2,
+                'playlistend': 2,
             }
         }
     ]

     def _real_extract(self, url):
-        # URLs end with [uploader name]/[uploader title]
+        # URLs end with [uploader name]/album/[uploader title]
         # this title is whatever the user types in, and is rarely
         # the proper song title. Real metadata is in the api response
-        album_url_tag = self._match_id(url)
+        album_url_tag = self._match_id(url).replace('/album/', '/')
         result = {'_type': 'playlist', 'entries': []}
         # There is no one endpoint for album metadata - instead it is included/repeated in each song's metadata
         # Therefore we don't know how many songs the album has and must infi-loop until failure
@@ -134,7 +136,7 @@ class AudiomackAlbumIE(InfoExtractor):
         # Pull out the album metadata and add to result (if it exists)
         for resultkey, apikey in [('id', 'album_id'), ('title', 'album_title')]:
             if apikey in api_response and resultkey not in result:
-                result[resultkey] = api_response[apikey]
+                result[resultkey] = compat_str(api_response[apikey])
         song_id = url_basename(api_response['url']).rpartition('.')[0]
         result['entries'].append({
             'id': compat_str(api_response.get('id', song_id)),

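Both Audiomack classes now normalise the matched path before hitting the API: the new-style /song/ (or /album/) segment is stripped so old and new URLs yield the same uploader/title tag. In short:

def album_url_tag(path, kind):
    # kind is 'song' or 'album'; both URL styles collapse to uploader/title.
    return path.replace('/%s/' % kind, '/')

print(album_url_tag('hip-hop-daily/song/black-mamba-freestyle', 'song'))
# hip-hop-daily/black-mamba-freestyle
print(album_url_tag('flytunezcom/album/tha-tour-part-2-mixtape', 'album'))
# flytunezcom/tha-tour-part-2-mixtape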
View File

@@ -47,7 +47,7 @@ class AZMedienIE(InfoExtractor):
         'url': 'https://www.telebaern.tv/telebaern-news/montag-1-oktober-2018-ganze-sendung-133531189#video=0_7xjo9lf1',
         'only_matching': True
     }]
-    _API_TEMPL = 'https://www.%s/api/pub/gql/%s/NewsArticleTeaser/cb9f2f81ed22e9b47f4ca64ea3cc5a5d13e88d1d'
+    _API_TEMPL = 'https://www.%s/api/pub/gql/%s/NewsArticleTeaser/a4016f65fe62b81dc6664dd9f4910e4ab40383be'
     _PARTNER_ID = '1719221'

     def _real_extract(self, url):

View File

@@ -0,0 +1,37 @@
# coding: utf-8
from __future__ import unicode_literals

from .brightcove import BrightcoveNewIE
from ..utils import extract_attributes


class BandaiChannelIE(BrightcoveNewIE):
    IE_NAME = 'bandaichannel'
    _VALID_URL = r'https?://(?:www\.)?b-ch\.com/titles/(?P<id>\d+/\d+)'
    _TESTS = [{
        'url': 'https://www.b-ch.com/titles/514/001',
        'md5': 'a0f2d787baa5729bed71108257f613a4',
        'info_dict': {
            'id': '6128044564001',
            'ext': 'mp4',
            'title': 'メタルファイターMIKU 第1話',
            'timestamp': 1580354056,
            'uploader_id': '5797077852001',
            'upload_date': '20200130',
            'duration': 1387.733,
        },
        'params': {
            'format': 'bestvideo',
            'skip_download': True,
        },
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        attrs = extract_attributes(self._search_regex(
            r'(<video-js[^>]+\bid="bcplayer"[^>]*>)', webpage, 'player'))
        bc = self._download_json(
            'https://pbifcd.b-ch.com/v1/playbackinfo/ST/70/' + attrs['data-info'],
            video_id, headers={'X-API-KEY': attrs['data-auth'].strip()})['bc']
        return self._parse_brightcove_metadata(bc, bc['id'])

View File

@@ -49,6 +49,7 @@ class BandcampIE(InfoExtractor):
         'uploader': 'Ben Prunty',
         'timestamp': 1396508491,
         'upload_date': '20140403',
+        'release_timestamp': 1396483200,
         'release_date': '20140403',
         'duration': 260.877,
         'track': 'Lanius (Battle)',
@@ -69,6 +70,7 @@ class BandcampIE(InfoExtractor):
         'uploader': 'Mastodon',
         'timestamp': 1322005399,
         'upload_date': '20111122',
+        'release_timestamp': 1076112000,
         'release_date': '20040207',
         'duration': 120.79,
         'track': 'Hail to Fire',
@@ -197,7 +199,7 @@ class BandcampIE(InfoExtractor):
             'thumbnail': thumbnail,
             'uploader': artist,
             'timestamp': timestamp,
-            'release_date': unified_strdate(tralbum.get('album_release_date')),
+            'release_timestamp': unified_timestamp(tralbum.get('album_release_date')),
             'duration': duration,
             'track': track,
             'track_number': track_number,

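The switch from release_date to release_timestamp above preserves the time of day carried by album_release_date. A stdlib stand-in for unified_timestamp() on Bandcamp's RFC 2822-style dates (the real helper handles many more formats):

import calendar
from email.utils import parsedate

def release_timestamp(date_str):
    # 'album_release_date' looks like '07 Feb 2004 00:00:00 GMT'.
    parsed = parsedate(date_str)
    return calendar.timegm(parsed) if parsed else None

print(release_timestamp('07 Feb 2004 00:00:00 GMT'))  # 1076112000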
View File

@@ -1,37 +1,46 @@
 # coding: utf-8
 from __future__ import unicode_literals

+import functools
 import itertools
+import json
 import re

 from .common import InfoExtractor
+from ..compat import (
+    compat_etree_Element,
+    compat_HTTPError,
+    compat_parse_qs,
+    compat_str,
+    compat_urllib_error,
+    compat_urllib_parse_urlparse,
+    compat_urlparse,
+)
 from ..utils import (
+    ExtractorError,
+    OnDemandPagedList,
     clean_html,
     dict_get,
-    ExtractorError,
     float_or_none,
     get_element_by_class,
     int_or_none,
     js_to_json,
     parse_duration,
     parse_iso8601,
+    strip_or_none,
     try_get,
     unescapeHTML,
+    unified_timestamp,
     url_or_none,
     urlencode_postdata,
     urljoin,
 )
-from ..compat import (
-    compat_etree_Element,
-    compat_HTTPError,
-    compat_urlparse,
-)


 class BBCCoUkIE(InfoExtractor):
     IE_NAME = 'bbc.co.uk'
     IE_DESC = 'BBC iPlayer'
-    _ID_REGEX = r'(?:[pbm][\da-z]{7}|w[\da-z]{7,14})'
+    _ID_REGEX = r'(?:[pbml][\da-z]{7}|w[\da-z]{7,14})'
     _VALID_URL = r'''(?x)
                     https?://
                         (?:www\.)?bbc\.co\.uk/
@@ -387,9 +396,17 @@ class BBCCoUkIE(InfoExtractor):
                 formats.extend(self._extract_mpd_formats(
                     href, programme_id, mpd_id=format_id, fatal=False))
             elif transfer_format == 'hls':
-                formats.extend(self._extract_m3u8_formats(
-                    href, programme_id, ext='mp4', entry_protocol='m3u8_native',
-                    m3u8_id=format_id, fatal=False))
+                # TODO: let expected_status be passed into _extract_xxx_formats() instead
+                try:
+                    fmts = self._extract_m3u8_formats(
+                        href, programme_id, ext='mp4', entry_protocol='m3u8_native',
+                        m3u8_id=format_id, fatal=False)
+                except ExtractorError as e:
+                    if not (isinstance(e.exc_info[1], compat_urllib_error.HTTPError)
+                            and e.exc_info[1].code in (403, 404)):
+                        raise
+                    fmts = []
+                formats.extend(fmts)
             elif transfer_format == 'hds':
                 formats.extend(self._extract_f4m_formats(
                     href, programme_id, f4m_id=format_id, fatal=False))
@@ -756,23 +773,44 @@ class BBCIE(BBCCoUkIE):
         'only_matching': True,
     }, {
         # custom redirection to www.bbc.com
+        # also, video with window.__INITIAL_DATA__
         'url': 'http://www.bbc.co.uk/news/science-environment-33661876',
-        'only_matching': True,
+        'info_dict': {
+            'id': 'p02xzws1',
+            'ext': 'mp4',
+            'title': "Pluto may have 'nitrogen glaciers'",
+            'description': 'md5:6a95b593f528d7a5f2605221bc56912f',
+            'thumbnail': r're:https?://.+/.+\.jpg',
+            'timestamp': 1437785037,
+            'upload_date': '20150725',
+        },
+    }, {
+        # video with window.__INITIAL_DATA__ and value as JSON string
+        'url': 'https://www.bbc.com/news/av/world-europe-59468682',
+        'info_dict': {
+            'id': 'p0b71qth',
+            'ext': 'mp4',
+            'title': 'Why France is making this woman a national hero',
+            'description': 'md5:7affdfab80e9c3a1f976230a1ff4d5e4',
+            'thumbnail': r're:https?://.+/.+\.jpg',
+            'timestamp': 1638230731,
+            'upload_date': '20211130',
+        },
     }, {
         # single video article embedded with data-media-vpid
         'url': 'http://www.bbc.co.uk/sport/rowing/35908187',
         'only_matching': True,
     }, {
         # bbcthreeConfig
         'url': 'https://www.bbc.co.uk/bbcthree/clip/73d0bbd0-abc3-4cea-b3c0-cdae21905eb1',
         'info_dict': {
             'id': 'p06556y7',
             'ext': 'mp4',
-            'title': 'Transfers: Cristiano Ronaldo to Man Utd, Arsenal to spend?',
-            'description': 'md5:4b7dfd063d5a789a1512e99662be3ddd',
+            'title': 'Things Not To Say to people that live on council estates',
+            'description': "From being labelled a 'chav', to the presumption that they're 'scroungers', people who live on council estates encounter all kinds of prejudices and false assumptions about themselves, their families, and their lifestyles. Here, eight people discuss the common statements, misconceptions, and clichés that they're tired of hearing.",
+            'duration': 360,
+            'thumbnail': r're:https?://.+/.+\.jpg',
         },
+        'params': {
+            'skip_download': True,
+        }
     }, {
         # window.__PRELOADED_STATE__
         'url': 'https://www.bbc.co.uk/radio/play/b0b9z4yl',
@@ -793,11 +831,25 @@ class BBCIE(BBCCoUkIE):
             'description': 'Learn English words and phrases from this story',
         },
         'add_ie': [BBCCoUkIE.ie_key()],
+    }, {
+        # BBC Reel
+        'url': 'https://www.bbc.com/reel/video/p07c6sb6/how-positive-thinking-is-harming-your-happiness',
+        'info_dict': {
+            'id': 'p07c6sb9',
+            'ext': 'mp4',
+            'title': 'How positive thinking is harming your happiness',
+            'alt_title': 'The downsides of positive thinking',
+            'description': 'md5:fad74b31da60d83b8265954ee42d85b4',
+            'duration': 235,
+            'thumbnail': r're:https?://.+/p07c9dsr.jpg',
+            'upload_date': '20190604',
+            'categories': ['Psychology'],
+        },
     }]

     @classmethod
     def suitable(cls, url):
-        EXCLUDE_IE = (BBCCoUkIE, BBCCoUkArticleIE, BBCCoUkIPlayerPlaylistIE, BBCCoUkPlaylistIE)
+        EXCLUDE_IE = (BBCCoUkIE, BBCCoUkArticleIE, BBCCoUkIPlayerEpisodesIE, BBCCoUkIPlayerGroupIE, BBCCoUkPlaylistIE)
         return (False if any(ie.suitable(url) for ie in EXCLUDE_IE)
                 else super(BBCIE, cls).suitable(url))
@@ -929,7 +981,7 @@ class BBCIE(BBCCoUkIE):
                 else:
                     entry['title'] = info['title']
                     entry['formats'].extend(info['formats'])
-            except Exception as e:
+            except ExtractorError as e:
                 # Some playlist URL may fail with 500, at the same time
                 # the other one may work fine (e.g.
                 # http://www.bbc.com/turkce/haberler/2015/06/150615_telabyad_kentin_cogu)
@@ -980,6 +1032,37 @@ class BBCIE(BBCCoUkIE):
                 'subtitles': subtitles,
             }

+        # bbc reel (e.g. https://www.bbc.com/reel/video/p07c6sb6/how-positive-thinking-is-harming-your-happiness)
+        initial_data = self._parse_json(self._html_search_regex(
+            r'<script[^>]+id=(["\'])initial-data\1[^>]+data-json=(["\'])(?P<json>(?:(?!\2).)+)',
+            webpage, 'initial data', default='{}', group='json'), playlist_id, fatal=False)
+        if initial_data:
+            init_data = try_get(
+                initial_data, lambda x: x['initData']['items'][0], dict) or {}
+            smp_data = init_data.get('smpData') or {}
+            clip_data = try_get(smp_data, lambda x: x['items'][0], dict) or {}
+            version_id = clip_data.get('versionID')
+            if version_id:
+                title = smp_data['title']
+                formats, subtitles = self._download_media_selector(version_id)
+                self._sort_formats(formats)
+                image_url = smp_data.get('holdingImageURL')
+                display_date = init_data.get('displayDate')
+                topic_title = init_data.get('topicTitle')
+                return {
+                    'id': version_id,
+                    'title': title,
+                    'formats': formats,
+                    'alt_title': init_data.get('shortTitle'),
+                    'thumbnail': image_url.replace('$recipe', 'raw') if image_url else None,
+                    'description': smp_data.get('summary') or init_data.get('shortSummary'),
+                    'upload_date': display_date.replace('-', '') if display_date else None,
+                    'subtitles': subtitles,
+                    'duration': int_or_none(clip_data.get('duration')),
+                    'categories': [topic_title] if topic_title else None,
+                }

         # Morph based embed (e.g. http://www.bbc.co.uk/sport/live/olympics/36895975)
         # There are several setPayload calls may be present but the video
         # seems to be always related to the first one
@@ -1041,7 +1124,7 @@ class BBCIE(BBCCoUkIE):
             thumbnail = None
             image_url = current_programme.get('image_url')
             if image_url:
-                thumbnail = image_url.replace('{recipe}', '1920x1920')
+                thumbnail = image_url.replace('{recipe}', 'raw')
             return {
                 'id': programme_id,
                 'title': title,
@@ -1100,9 +1183,16 @@ class BBCIE(BBCCoUkIE):
             return self.playlist_result(
                 entries, playlist_id, playlist_title, playlist_description)

-        initial_data = self._parse_json(self._search_regex(
-            r'window\.__INITIAL_DATA__\s*=\s*({.+?});', webpage,
-            'preload state', default='{}'), playlist_id, fatal=False)
+        initial_data = self._search_regex(
+            r'window\.__INITIAL_DATA__\s*=\s*("{.+?}")\s*;', webpage,
+            'quoted preload state', default=None)
+        if initial_data is None:
+            initial_data = self._search_regex(
+                r'window\.__INITIAL_DATA__\s*=\s*({.+?})\s*;', webpage,
+                'preload state', default='{}')
+        else:
+            initial_data = self._parse_json(initial_data or '"{}"', playlist_id, fatal=False)
+        initial_data = self._parse_json(initial_data, playlist_id, fatal=False)
         if initial_data:
             def parse_media(media):
                 if not media:
@@ -1114,19 +1204,39 @@ class BBCIE(BBCCoUkIE):
                         continue
                     formats, subtitles = self._download_media_selector(item_id)
                     self._sort_formats(formats)
+                    item_desc = None
+                    blocks = try_get(media, lambda x: x['summary']['blocks'], list)
+                    if blocks:
+                        summary = []
+                        for block in blocks:
+                            text = try_get(block, lambda x: x['model']['text'], compat_str)
+                            if text:
+                                summary.append(text)
+                        if summary:
+                            item_desc = '\n\n'.join(summary)
+                    item_time = None
+                    for meta in try_get(media, lambda x: x['metadata']['items'], list) or []:
+                        if try_get(meta, lambda x: x['label']) == 'Published':
+                            item_time = unified_timestamp(meta.get('timestamp'))
+                            break
                     entries.append({
                         'id': item_id,
                         'title': item_title,
                         'thumbnail': item.get('holdingImageUrl'),
                         'formats': formats,
                         'subtitles': subtitles,
+                        'timestamp': item_time,
+                        'description': strip_or_none(item_desc),
                     })
             for resp in (initial_data.get('data') or {}).values():
                 name = resp.get('name')
                 if name == 'media-experience':
                     parse_media(try_get(resp, lambda x: x['data']['initialItem']['mediaItem'], dict))
                 elif name == 'article':
-                    for block in (try_get(resp, lambda x: x['data']['blocks'], list) or []):
+                    for block in (try_get(resp,
+                                          (lambda x: x['data']['blocks'],
+                                           lambda x: x['data']['content']['model']['blocks'],),
+                                          list) or []):
                         if block.get('type') != 'media':
                             continue
                         parse_media(block.get('model'))
@@ -1293,21 +1403,149 @@ class BBCCoUkPlaylistBaseIE(InfoExtractor):
             playlist_id, title, description)


-class BBCCoUkIPlayerPlaylistIE(BBCCoUkPlaylistBaseIE):
-    IE_NAME = 'bbc.co.uk:iplayer:playlist'
-    _VALID_URL = r'https?://(?:www\.)?bbc\.co\.uk/iplayer/(?:episodes|group)/(?P<id>%s)' % BBCCoUkIE._ID_REGEX
-    _URL_TEMPLATE = 'http://www.bbc.co.uk/iplayer/episode/%s'
-    _VIDEO_ID_TEMPLATE = r'data-ip-id=["\'](%s)'
+class BBCCoUkIPlayerPlaylistBaseIE(InfoExtractor):
+    _VALID_URL_TMPL = r'https?://(?:www\.)?bbc\.co\.uk/iplayer/%%s/(?P<id>%s)' % BBCCoUkIE._ID_REGEX
+
+    @staticmethod
+    def _get_default(episode, key, default_key='default'):
+        return try_get(episode, lambda x: x[key][default_key])
+
+    def _get_description(self, data):
+        synopsis = data.get(self._DESCRIPTION_KEY) or {}
+        return dict_get(synopsis, ('large', 'medium', 'small'))
+
+    def _fetch_page(self, programme_id, per_page, series_id, page):
+        elements = self._get_elements(self._call_api(
+            programme_id, per_page, page + 1, series_id))
+        for element in elements:
+            episode = self._get_episode(element)
+            episode_id = episode.get('id')
+            if not episode_id:
+                continue
+            thumbnail = None
+            image = self._get_episode_image(episode)
+            if image:
+                thumbnail = image.replace('{recipe}', 'raw')
+            category = self._get_default(episode, 'labels', 'category')
+            yield {
+                '_type': 'url',
+                'id': episode_id,
+                'title': self._get_episode_field(episode, 'subtitle'),
+                'url': 'https://www.bbc.co.uk/iplayer/episode/' + episode_id,
+                'thumbnail': thumbnail,
+                'description': self._get_description(episode),
+                'categories': [category] if category else None,
+                'series': self._get_episode_field(episode, 'title'),
+                'ie_key': BBCCoUkIE.ie_key(),
+            }
+
+    def _real_extract(self, url):
+        pid = self._match_id(url)
+        qs = compat_parse_qs(compat_urllib_parse_urlparse(url).query)
+        series_id = qs.get('seriesId', [None])[0]
+        page = qs.get('page', [None])[0]
+        per_page = 36 if page else self._PAGE_SIZE
+        fetch_page = functools.partial(self._fetch_page, pid, per_page, series_id)
+        entries = fetch_page(int(page) - 1) if page else OnDemandPagedList(fetch_page, self._PAGE_SIZE)
+        playlist_data = self._get_playlist_data(self._call_api(pid, 1))
+        return self.playlist_result(
+            entries, pid, self._get_playlist_title(playlist_data),
+            self._get_description(playlist_data))
+
+
+class BBCCoUkIPlayerEpisodesIE(BBCCoUkIPlayerPlaylistBaseIE):
+    IE_NAME = 'bbc.co.uk:iplayer:episodes'
+    _VALID_URL = BBCCoUkIPlayerPlaylistBaseIE._VALID_URL_TMPL % 'episodes'
     _TESTS = [{
         'url': 'http://www.bbc.co.uk/iplayer/episodes/b05rcz9v',
         'info_dict': {
             'id': 'b05rcz9v',
             'title': 'The Disappearance',
-            'description': 'French thriller serial about a missing teenager.',
+            'description': 'md5:58eb101aee3116bad4da05f91179c0cb',
         },
-        'playlist_mincount': 6,
+        'playlist_mincount': 8,
+        'skip': 'This programme is not currently available on BBC iPlayer',
     }, {
+        # all seasons
+        'url': 'https://www.bbc.co.uk/iplayer/episodes/b094m5t9/doctor-foster',
+        'info_dict': {
+            'id': 'b094m5t9',
+            'title': 'Doctor Foster',
+            'description': 'md5:5aa9195fad900e8e14b52acd765a9fd6',
+        },
+        'playlist_mincount': 10,
+    }, {
+        # explicit season
+        'url': 'https://www.bbc.co.uk/iplayer/episodes/b094m5t9/doctor-foster?seriesId=b094m6nv',
+        'info_dict': {
+            'id': 'b094m5t9',
+            'title': 'Doctor Foster',
+            'description': 'md5:5aa9195fad900e8e14b52acd765a9fd6',
+        },
+        'playlist_mincount': 5,
+    }, {
+        # all pages
+        'url': 'https://www.bbc.co.uk/iplayer/episodes/m0004c4v/beechgrove',
+        'info_dict': {
+            'id': 'm0004c4v',
+            'title': 'Beechgrove',
+            'description': 'Gardening show that celebrates Scottish horticulture and growing conditions.',
+        },
+        'playlist_mincount': 37,
+    }, {
+        # explicit page
+        'url': 'https://www.bbc.co.uk/iplayer/episodes/m0004c4v/beechgrove?page=2',
+        'info_dict': {
+            'id': 'm0004c4v',
+            'title': 'Beechgrove',
+            'description': 'Gardening show that celebrates Scottish horticulture and growing conditions.',
+        },
+        'playlist_mincount': 1,
+    }]
+    _PAGE_SIZE = 100
+    _DESCRIPTION_KEY = 'synopsis'
+
+    def _get_episode_image(self, episode):
+        return self._get_default(episode, 'image')
+
+    def _get_episode_field(self, episode, field):
+        return self._get_default(episode, field)
+
+    @staticmethod
+    def _get_elements(data):
+        return data['entities']['results']
+
+    @staticmethod
+    def _get_episode(element):
+        return element.get('episode') or {}
+
+    def _call_api(self, pid, per_page, page=1, series_id=None):
+        variables = {
+            'id': pid,
+            'page': page,
+            'perPage': per_page,
+        }
+        if series_id:
+            variables['sliceId'] = series_id
+        return self._download_json(
+            'https://graph.ibl.api.bbc.co.uk/', pid, headers={
+                'Content-Type': 'application/json'
+            }, data=json.dumps({
+                'id': '5692d93d5aac8d796a0305e895e61551',
+                'variables': variables,
+            }).encode('utf-8'))['data']['programme']

+    @staticmethod
+    def _get_playlist_data(data):
+        return data
+
+    def _get_playlist_title(self, data):
+        return self._get_default(data, 'title')
+
+
+class BBCCoUkIPlayerGroupIE(BBCCoUkIPlayerPlaylistBaseIE):
+    IE_NAME = 'bbc.co.uk:iplayer:group'
+    _VALID_URL = BBCCoUkIPlayerPlaylistBaseIE._VALID_URL_TMPL % 'group'
+    _TESTS = [{
         # Available for over a year unlike 30 days for most other programmes
         'url': 'http://www.bbc.co.uk/iplayer/group/p02tcc32',
         'info_dict': {
@@ -1316,14 +1554,56 @@ class BBCCoUkIPlayerPlaylistIE(BBCCoUkPlaylistBaseIE):
             'description': 'md5:683e901041b2fe9ba596f2ab04c4dbe7',
         },
         'playlist_mincount': 10,
+    }, {
+        # all pages
+        'url': 'https://www.bbc.co.uk/iplayer/group/p081d7j7',
+        'info_dict': {
+            'id': 'p081d7j7',
+            'title': 'Music in Scotland',
+            'description': 'Perfomances in Scotland and programmes featuring Scottish acts.',
+        },
+        'playlist_mincount': 47,
+    }, {
+        # explicit page
+        'url': 'https://www.bbc.co.uk/iplayer/group/p081d7j7?page=2',
+        'info_dict': {
+            'id': 'p081d7j7',
+            'title': 'Music in Scotland',
+            'description': 'Perfomances in Scotland and programmes featuring Scottish acts.',
+        },
+        'playlist_mincount': 11,
     }]
+    _PAGE_SIZE = 200
+    _DESCRIPTION_KEY = 'synopses'

-    def _extract_title_and_description(self, webpage):
-        title = self._search_regex(r'<h1>([^<]+)</h1>', webpage, 'title', fatal=False)
-        description = self._search_regex(
-            r'<p[^>]+class=(["\'])subtitle\1[^>]*>(?P<value>[^<]+)</p>',
-            webpage, 'description', fatal=False, group='value')
-        return title, description
+    def _get_episode_image(self, episode):
+        return self._get_default(episode, 'images', 'standard')
+
+    def _get_episode_field(self, episode, field):
+        return episode.get(field)
+
+    @staticmethod
+    def _get_elements(data):
+        return data['elements']
+
+    @staticmethod
+    def _get_episode(element):
+        return element
+
+    def _call_api(self, pid, per_page, page=1, series_id=None):
+        return self._download_json(
+            'http://ibl.api.bbc.co.uk/ibl/v1/groups/%s/episodes' % pid,
+            pid, query={
+                'page': page,
+                'per_page': per_page,
+            })['group_episodes']
+
+    @staticmethod
+    def _get_playlist_data(data):
+        return data['group']
+
+    def _get_playlist_title(self, data):
+        return data.get('title')


 class BBCCoUkPlaylistIE(BBCCoUkPlaylistBaseIE):

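The iPlayer playlist classes above page through the API lazily: _fetch_page() yields one page of episode entries, and OnDemandPagedList (from youtube_dl.utils) only invokes it as the consumer advances. A minimal generator illustrating the same fetch-until-short-page idea, with a fake fetcher standing in for the API call:

def iter_pages(fetch_page, page_size):
    # Request page 0, 1, 2, ... until a short (or empty) page signals the end.
    page = 0
    while True:
        entries = list(fetch_page(page))
        for entry in entries:
            yield entry
        if len(entries) < page_size:
            break
        page += 1

def fake_fetch_page(page):
    data = {0: ['ep1', 'ep2'], 1: ['ep3']}
    return data.get(page, [])

print(list(iter_pages(fake_fetch_page, page_size=2)))  # ['ep1', 'ep2', 'ep3']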
View File

@@ -0,0 +1,59 @@
# coding: utf-8
from __future__ import unicode_literals

from .common import InfoExtractor
from ..utils import ExtractorError, urlencode_postdata


class BigoIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?bigo\.tv/(?:[a-z]{2,}/)?(?P<id>[^/]+)'

    _TESTS = [{
        'url': 'https://www.bigo.tv/ja/221338632',
        'info_dict': {
            'id': '6576287577575737440',
            'title': '土よ〜💁‍♂️ 休憩室/REST room',
            'thumbnail': r're:https?://.+',
            'uploader': '✨Shin💫',
            'uploader_id': '221338632',
            'is_live': True,
        },
        'skip': 'livestream',
    }, {
        'url': 'https://www.bigo.tv/th/Tarlerm1304',
        'only_matching': True,
    }, {
        'url': 'https://bigo.tv/115976881',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        user_id = self._match_id(url)

        info_raw = self._download_json(
            'https://bigo.tv/studio/getInternalStudioInfo',
            user_id, data=urlencode_postdata({'siteId': user_id}))

        if not isinstance(info_raw, dict):
            raise ExtractorError('Received invalid JSON data')
        if info_raw.get('code'):
            raise ExtractorError(
                'Bigo says: %s (code %s)' % (info_raw.get('msg'), info_raw.get('code')), expected=True)
        info = info_raw.get('data') or {}

        if not info.get('alive'):
            raise ExtractorError('This user is offline.', expected=True)

        return {
            'id': info.get('roomId') or user_id,
            'title': info.get('roomTopic') or info.get('nick_name') or user_id,
            'formats': [{
                'url': info.get('hls_src'),
                'ext': 'mp4',
                'protocol': 'm3u8',
            }],
            'thumbnail': info.get('snapshot'),
            'uploader': info.get('nick_name'),
            'uploader_id': user_id,
            'is_live': True,
        }

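The Bigo extractor above boils down to one form-encoded POST plus response validation. A bare-bones equivalent using only the stdlib — same endpoint and field as the extractor, but note the 'code'/'msg'/'data' shape is what the extractor observes, not a documented API:

import json
try:
    from urllib.parse import urlencode  # Python 3
    from urllib.request import Request, urlopen
except ImportError:  # Python 2
    from urllib import urlencode
    from urllib2 import Request, urlopen

def get_studio_info(user_id):
    req = Request('https://bigo.tv/studio/getInternalStudioInfo',
                  data=urlencode({'siteId': user_id}).encode('utf-8'))
    info_raw = json.loads(urlopen(req).read().decode('utf-8'))
    if info_raw.get('code'):
        raise ValueError('Bigo says: %s (code %s)'
                         % (info_raw.get('msg'), info_raw.get('code')))
    return info_raw.get('data') or {}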
View File

@ -156,6 +156,7 @@ class BiliBiliIE(InfoExtractor):
cid = js['result']['cid'] cid = js['result']['cid']
headers = { headers = {
'Accept': 'application/json',
'Referer': url 'Referer': url
} }
headers.update(self.geo_verification_headers()) headers.update(self.geo_verification_headers())
@@ -232,7 +233,7 @@ class BiliBiliIE(InfoExtractor):
webpage)
if uploader_mobj:
info.update({
-'uploader': uploader_mobj.group('name'),
+'uploader': uploader_mobj.group('name').strip(),
'uploader_id': uploader_mobj.group('id'),
})
if not info.get('uploader'):
@@ -368,6 +369,11 @@ class BilibiliAudioIE(BilibiliAudioBaseIE):
'filesize': int_or_none(play_data.get('size')),
}]
for a_format in formats:
a_format.setdefault('http_headers', {}).update({
'Referer': url,
})
song = self._call_api('song/info', au_id)
title = song['title']
statistic = song.get('statistic') or {}

youtube_dl/extractor/blerp.py

@@ -0,0 +1,173 @@
# coding: utf-8
from __future__ import unicode_literals
import json
from ..utils import (
strip_or_none,
traverse_obj,
)
from .common import InfoExtractor
class BlerpIE(InfoExtractor):
IE_NAME = 'blerp'
_VALID_URL = r'https?://(?:www\.)?blerp\.com/soundbites/(?P<id>[0-9a-zA-Z]+)'
_TESTS = [{
'url': 'https://blerp.com/soundbites/6320fe8745636cb4dd677a5a',
'info_dict': {
'id': '6320fe8745636cb4dd677a5a',
'title': 'Samsung Galaxy S8 Over the Horizon Ringtone 2016',
'uploader': 'luminousaj',
'uploader_id': '5fb81e51aa66ae000c395478',
'ext': 'mp3',
'tags': ['samsung', 'galaxy', 's8', 'over the horizon', '2016', 'ringtone'],
}
}, {
'url': 'https://blerp.com/soundbites/5bc94ef4796001000498429f',
'info_dict': {
'id': '5bc94ef4796001000498429f',
'title': 'Yee',
'uploader': '179617322678353920',
'uploader_id': '5ba99cf71386730004552c42',
'ext': 'mp3',
'tags': ['YEE', 'YEET', 'wo ha haah catchy tune yee', 'yee']
}
}]
_GRAPHQL_OPERATIONNAME = "webBitePageGetBite"
_GRAPHQL_QUERY = (
'''query webBitePageGetBite($_id: MongoID!) {
web {
biteById(_id: $_id) {
...bitePageFrag
__typename
}
__typename
}
}
fragment bitePageFrag on Bite {
_id
title
userKeywords
keywords
color
visibility
isPremium
owned
price
extraReview
isAudioExists
image {
filename
original {
url
__typename
}
__typename
}
userReactions {
_id
reactions
createdAt
__typename
}
topReactions
totalSaveCount
saved
blerpLibraryType
license
licenseMetaData
playCount
totalShareCount
totalFavoriteCount
totalAddedToBoardCount
userCategory
userAudioQuality
audioCreationState
transcription
userTranscription
description
createdAt
updatedAt
author
listingType
ownerObject {
_id
username
profileImage {
filename
original {
url
__typename
}
__typename
}
__typename
}
transcription
favorited
visibility
isCurated
sourceUrl
audienceRating
strictAudienceRating
ownerId
reportObject {
reportedContentStatus
__typename
}
giphy {
mp4
gif
__typename
}
audio {
filename
original {
url
__typename
}
mp3 {
url
__typename
}
__typename
}
__typename
}
''')
def _real_extract(self, url):
audio_id = self._match_id(url)
data = {
'operationName': self._GRAPHQL_OPERATIONNAME,
'query': self._GRAPHQL_QUERY,
'variables': {
'_id': audio_id
}
}
headers = {
'Content-Type': 'application/json'
}
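# POST the GraphQL document as a JSON body; the interesting payload comes back under data.web.biteById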
json_result = self._download_json('https://api.blerp.com/graphql',
audio_id, data=json.dumps(data).encode('utf-8'), headers=headers)
bite_json = json_result['data']['web']['biteById']
info_dict = {
'id': bite_json['_id'],
'url': bite_json['audio']['mp3']['url'],
'title': bite_json['title'],
'uploader': traverse_obj(bite_json, ('ownerObject', 'username'), expected_type=strip_or_none),
'uploader_id': traverse_obj(bite_json, ('ownerObject', '_id'), expected_type=strip_or_none),
'ext': 'mp3',
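# tidy the keyword list: strip whitespace, drop empties, fall back to None when nothing is left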
'tags': list(filter(None, map(strip_or_none, (traverse_obj(bite_json, 'userKeywords', expected_type=list) or []))) or None)
}
return info_dict
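The GraphQL call above can be reproduced with the standard library alone; a minimal sketch, with the query trimmed to a single field (endpoint, operation name and the sample ID are as in the extractor and its first test case):

import json
import urllib.request

payload = {
    'operationName': 'webBitePageGetBite',
    'query': 'query webBitePageGetBite($_id: MongoID!) '
             '{ web { biteById(_id: $_id) { title } } }',
    'variables': {'_id': '6320fe8745636cb4dd677a5a'},
}
req = urllib.request.Request(
    'https://api.blerp.com/graphql',
    data=json.dumps(payload).encode('utf-8'),
    headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(req) as f:
    print(json.load(f)['data']['web']['biteById']['title'])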

youtube_dl/extractor/blinkx.py

@@ -1,86 +0,0 @@
from __future__ import unicode_literals
import json
from .common import InfoExtractor
from ..utils import (
remove_start,
int_or_none,
)
class BlinkxIE(InfoExtractor):
_VALID_URL = r'(?:https?://(?:www\.)blinkx\.com/#?ce/|blinkx:)(?P<id>[^?]+)'
IE_NAME = 'blinkx'
_TEST = {
'url': 'http://www.blinkx.com/ce/Da0Gw3xc5ucpNduzLuDDlv4WC9PuI4fDi1-t6Y3LyfdY2SZS5Urbvn-UPJvrvbo8LTKTc67Wu2rPKSQDJyZeeORCR8bYkhs8lI7eqddznH2ofh5WEEdjYXnoRtj7ByQwt7atMErmXIeYKPsSDuMAAqJDlQZ-3Ff4HJVeH_s3Gh8oQ',
'md5': '337cf7a344663ec79bf93a526a2e06c7',
'info_dict': {
'id': 'Da0Gw3xc',
'ext': 'mp4',
'title': 'No Daily Show for John Oliver; HBO Show Renewed - IGN News',
'uploader': 'IGN News',
'upload_date': '20150217',
'timestamp': 1424215740,
'description': 'HBO has renewed Last Week Tonight With John Oliver for two more seasons.',
'duration': 47.743333,
},
}
def _real_extract(self, url):
video_id = self._match_id(url)
display_id = video_id[:8]
api_url = ('https://apib4.blinkx.com/api.php?action=play_video&'
+ 'video=%s' % video_id)
data_json = self._download_webpage(api_url, display_id)
data = json.loads(data_json)['api']['results'][0]
duration = None
thumbnails = []
formats = []
for m in data['media']:
if m['type'] == 'jpg':
thumbnails.append({
'url': m['link'],
'width': int(m['w']),
'height': int(m['h']),
})
elif m['type'] == 'original':
duration = float(m['d'])
elif m['type'] == 'youtube':
yt_id = m['link']
self.to_screen('Youtube video detected: %s' % yt_id)
return self.url_result(yt_id, 'Youtube', video_id=yt_id)
elif m['type'] in ('flv', 'mp4'):
vcodec = remove_start(m['vcodec'], 'ff')
acodec = remove_start(m['acodec'], 'ff')
vbr = int_or_none(m.get('vbr') or m.get('vbitrate'), 1000)
abr = int_or_none(m.get('abr') or m.get('abitrate'), 1000)
tbr = vbr + abr if vbr and abr else None
format_id = '%s-%sk-%s' % (vcodec, tbr, m['w'])
formats.append({
'format_id': format_id,
'url': m['link'],
'vcodec': vcodec,
'acodec': acodec,
'abr': abr,
'vbr': vbr,
'tbr': tbr,
'width': int_or_none(m.get('w')),
'height': int_or_none(m.get('h')),
})
self._sort_formats(formats)
return {
'id': display_id,
'fullid': video_id,
'title': data['title'],
'formats': formats,
'uploader': data['channel_name'],
'timestamp': data['pubdate_epoch'],
'description': data.get('description'),
'thumbnails': thumbnails,
'duration': duration,
}

youtube_dl/extractor/bongacams.py

@@ -1,3 +1,4 @@
# coding: utf-8
from __future__ import unicode_literals
import re
@@ -12,13 +13,28 @@ from ..utils import (
class BongaCamsIE(InfoExtractor):
-_VALID_URL = r'https?://(?P<host>(?:[^/]+\.)?bongacams\d*\.com)/(?P<id>[^/?&#]+)'
+_VALID_URL = r'https?://(?P<host>(?:[^/]+\.)?bongacams\d*\.(?:com|net))/(?P<id>[^/?&#]+)'
_TESTS = [{
'url': 'https://de.bongacams.com/azumi-8',
'only_matching': True,
}, {
'url': 'https://cn.bongacams.com/azumi-8',
'only_matching': True,
}, {
'url': 'https://de.bongacams.net/claireashton',
'info_dict': {
'id': 'claireashton',
'ext': 'mp4',
'title': r're:ClaireAshton \d{4}-\d{2}-\d{2} \d{2}:\d{2}',
'age_limit': 18,
'uploader_id': 'ClaireAshton',
'uploader': 'ClaireAshton',
'like_count': int,
'is_live': True,
},
'params': {
'skip_download': True,
},
}]
def _real_extract(self, url):

youtube_dl/extractor/bravotv.py

@@ -12,7 +12,7 @@ from ..utils import (
class BravoTVIE(AdobePassIE):
-_VALID_URL = r'https?://(?:www\.)?bravotv\.com/(?:[^/]+/)+(?P<id>[^/?#]+)'
+_VALID_URL = r'https?://(?:www\.)?(?P<req_id>bravotv|oxygen)\.com/(?:[^/]+/)+(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.bravotv.com/top-chef/season-16/episode-15/videos/the-top-chef-season-16-winner-is',
'md5': 'e34684cfea2a96cd2ee1ef3a60909de9',
@@ -28,10 +28,13 @@ class BravoTVIE(AdobePassIE):
}, {
'url': 'http://www.bravotv.com/below-deck/season-3/ep-14-reunion-part-1',
'only_matching': True,
}, {
'url': 'https://www.oxygen.com/in-ice-cold-blood/season-2/episode-16/videos/handling-the-horwitz-house-after-the-murder-season-2',
'only_matching': True,
}]
def _real_extract(self, url):
-display_id = self._match_id(url)
+site, display_id = re.match(self._VALID_URL, url).groups()
webpage = self._download_webpage(url, display_id)
settings = self._parse_json(self._search_regex(
r'<script[^>]+data-drupal-selector="drupal-settings-json"[^>]*>({.+?})</script>', webpage, 'drupal settings'),
@@ -53,11 +56,14 @@ class BravoTVIE(AdobePassIE):
tp_path = release_pid = tve['release_pid']
if tve.get('entitlement') == 'auth':
adobe_pass = settings.get('tve_adobe_auth', {})
if site == 'bravotv':
site = 'bravo'
resource = self._get_mvpd_resource(
-adobe_pass.get('adobePassResourceId', 'bravo'),
+adobe_pass.get('adobePassResourceId') or site,
tve['title'], release_pid, tve.get('rating'))
query['auth'] = self._extract_mvpd_auth(
-url, release_pid, adobe_pass.get('adobePassRequestorId', 'bravo'), resource)
+url, release_pid,
+adobe_pass.get('adobePassRequestorId') or site, resource)
else:
shared_playlist = settings['ls_playlist']
account_pid = shared_playlist['account_pid']

youtube_dl/extractor/caffeine.py

@@ -0,0 +1,79 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
determine_ext,
int_or_none,
merge_dicts,
parse_iso8601,
T,
traverse_obj,
txt_or_none,
urljoin,
)
class CaffeineTVIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?caffeine\.tv/[^/]+/video/(?P<id>[0-9a-f-]+)'
_TESTS = [{
'url': 'https://www.caffeine.tv/TsuSurf/video/cffc0a00-e73f-11ec-8080-80017d29f26e',
'info_dict': {
'id': 'cffc0a00-e73f-11ec-8080-80017d29f26e',
'ext': 'mp4',
'title': 'GOOOOD MORNINNNNN #highlights',
'timestamp': 1654702180,
'upload_date': '20220608',
'uploader': 'TsuSurf',
'duration': 3145,
'age_limit': 17,
},
'params': {
'format': 'bestvideo',
},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
json_data = self._download_json(
'https://api.caffeine.tv/social/public/activity/' + video_id,
video_id)
broadcast_info = traverse_obj(json_data, ('broadcast_info', T(dict))) or {}
title = broadcast_info['broadcast_title']
video_url = broadcast_info['video_url']
ext = determine_ext(video_url)
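# an m3u8 URL is expanded into per-variant HLS formats; anything else is used as a single direct URL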
if ext == 'm3u8':
formats = self._extract_m3u8_formats(
video_url, video_id, 'mp4', entry_protocol='m3u8',
fatal=False)
else:
formats = [{'url': video_url}]
self._sort_formats(formats)
return merge_dicts({
'id': video_id,
'title': title,
'formats': formats,
}, traverse_obj(json_data, {
'uploader': ((None, 'user'), 'username'),
}, get_all=False), traverse_obj(json_data, {
'like_count': ('like_count', T(int_or_none)),
'view_count': ('view_count', T(int_or_none)),
'comment_count': ('comment_count', T(int_or_none)),
'tags': ('tags', Ellipsis, T(txt_or_none)),
'is_live': 'is_live',
'uploader': ('user', 'name'),
}), traverse_obj(broadcast_info, {
'duration': ('content_duration', T(int_or_none)),
'timestamp': ('broadcast_start_time', T(parse_iso8601)),
'thumbnail': ('preview_image_path', T(lambda u: urljoin(url, u))),
'age_limit': ('content_rating', T(lambda r: r and {
# assume Apple Store ratings [1]
# 1. https://en.wikipedia.org/wiki/Mobile_software_content_rating_system
'FOUR_PLUS': 0,
'NINE_PLUS': 9,
'TWELVE_PLUS': 12,
'SEVENTEEN_PLUS': 17,
}.get(r, 17))),
}))
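Note the ordering of the dicts passed to merge_dicts above: in youtube-dl's implementation the first non-empty value for a key wins, so the top-level 'username' path takes precedence over the ('user', 'name') fallback. A minimal sketch of that behaviour, with made-up values:

from youtube_dl.utils import merge_dicts

# earlier dicts take precedence; later ones only fill gaps
print(merge_dicts(
    {'id': 'demo', 'uploader': None},
    {'uploader': 'TsuSurf'},
    {'uploader': 'Tsu Surf', 'duration': 3145}))
# {'id': 'demo', 'uploader': 'TsuSurf', 'duration': 3145}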

youtube_dl/extractor/callin.py

@@ -0,0 +1,74 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
ExtractorError,
traverse_obj,
try_get,
)
class CallinIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?callin\.com/episode/(?:[^/#?-]+-)*(?P<id>[^/#?-]+)'
_TESTS = [{
'url': 'https://www.callin.com/episode/fcc-commissioner-brendan-carr-on-elons-PrumRdSQJW',
'md5': '14ede27ee2c957b7e4db93140fc0745c',
'info_dict': {
'id': 'PrumRdSQJW',
'ext': 'mp4',
'title': 'FCC Commissioner Brendan Carr on Elons Starlink',
'description': 'Or, why the government doesnt like SpaceX',
'channel': 'The Pull Request',
'channel_url': 'https://callin.com/show/the-pull-request-ucnDJmEKAa',
}
}, {
'url': 'https://www.callin.com/episode/episode-81-elites-melt-down-over-student-debt-lzxMidUnjA',
'md5': '16f704ddbf82a27e3930533b12062f07',
'info_dict': {
'id': 'lzxMidUnjA',
'ext': 'mp4',
'title': 'Episode 81- Elites MELT DOWN over Student Debt Victory? Rumble in NYC?',
'description': 'Lets talk todays episode about the primary election shake up in NYC and the elites melting down over student debt cancelation.',
'channel': 'The DEBRIEF With Briahna Joy Gray',
'channel_url': 'https://callin.com/show/the-debrief-with-briahna-joy-gray-siiFDzGegm',
}
}]
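# grab the JSON blob that Next.js pages embed in their __NEXT_DATA__ script tag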
def _search_nextjs_data(self, webpage, video_id, transform_source=None, fatal=True, **kw):
return self._parse_json(
self._search_regex(
r'(?s)<script[^>]+id=[\'"]__NEXT_DATA__[\'"][^>]*>([^<]+)</script>',
webpage, 'next.js data', fatal=fatal, **kw),
video_id, transform_source=transform_source, fatal=fatal)
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
next_data = self._search_nextjs_data(webpage, video_id)
episode = traverse_obj(next_data, ('props', 'pageProps', 'episode'), expected_type=dict)
if not episode:
raise ExtractorError('Failed to find episode data')
title = episode.get('title') or self._og_search_title(webpage)
description = episode.get('description') or self._og_search_description(webpage)
formats = []
formats.extend(self._extract_m3u8_formats(
episode.get('m3u8'), video_id, 'mp4',
entry_protocol='m3u8_native', fatal=False))
self._sort_formats(formats)
channel = try_get(episode, lambda x: x['show']['title'], compat_str)
channel_url = try_get(episode, lambda x: x['show']['linkObj']['resourceUrl'], compat_str)
return {
'id': video_id,
'title': title,
'description': description,
'formats': formats,
'channel': channel,
'channel_url': channel_url,
}

youtube_dl/extractor/cammodels.py

@@ -3,7 +3,6 @@ from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
-ExtractorError,
int_or_none,
url_or_none,
)
@@ -20,32 +19,11 @@ class CamModelsIE(InfoExtractor):
def _real_extract(self, url):
user_id = self._match_id(url)
-webpage = self._download_webpage(
-url, user_id, headers=self.geo_verification_headers())
-manifest_root = self._html_search_regex(
-r'manifestUrlRoot=([^&\']+)', webpage, 'manifest', default=None)
-if not manifest_root:
-ERRORS = (
-("I'm offline, but let's stay connected", 'This user is currently offline'),
-('in a private show', 'This user is in a private show'),
-('is currently performing LIVE', 'This model is currently performing live'),
-)
-for pattern, message in ERRORS:
-if pattern in webpage:
-error = message
-expected = True
-break
-else:
-error = 'Unable to find manifest URL root'
-expected = False
-raise ExtractorError(error, expected=expected)
manifest = self._download_json(
-'%s%s.json' % (manifest_root, user_id), user_id)
+'https://manifest-server.naiadsystems.com/live/s:%s.json' % user_id, user_id)
formats = []
thumbnails = []
for format_id, format_dict in manifest['formats'].items():
if not isinstance(format_dict, dict):
continue
@@ -85,6 +63,13 @@ class CamModelsIE(InfoExtractor):
'preference': -1,
})
else:
if format_id == 'jpeg':
thumbnails.append({
'url': f['url'],
'width': f['width'],
'height': f['height'],
'format_id': f['format_id'],
})
continue
formats.append(f)
self._sort_formats(formats)
@@ -92,6 +77,7 @@ class CamModelsIE(InfoExtractor):
return {
'id': user_id,
'title': self._live_title(user_id),
'thumbnails': thumbnails,
'is_live': True,
'formats': formats,
'age_limit': 18

Some files were not shown because too many files have changed in this diff.