Mirror of https://github.com/ytdl-org/youtube-dl (synced 2024-12-23 04:30:10 +09:00)

Commit 31595de72b: Merge remote-tracking branch 'upstream/master'

.github/ISSUE_TEMPLATE/1_broken_site.md (vendored, 6 changes)

@@ -18,7 +18,7 @@ title: ''
 
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.26. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
 - Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
 - Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -26,7 +26,7 @@ Carefully read and work through this check list in order to prevent the most com
 -->
 
 - [ ] I'm reporting a broken site support
-- [ ] I've verified that I'm running youtube-dl version **2021.04.07**
+- [ ] I've verified that I'm running youtube-dl version **2021.04.26**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
 - [ ] I've searched the bugtracker for similar issues including closed ones
@@ -41,7 +41,7 @@ Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2021.04.07
+[debug] youtube-dl version 2021.04.26
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}

.github/ISSUE_TEMPLATE/2_site_support_request.md (vendored)

@@ -19,7 +19,7 @@ labels: 'site-support-request'
 
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.26. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
 - Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
 - Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -27,7 +27,7 @@ Carefully read and work through this check list in order to prevent the most com
 -->
 
 - [ ] I'm reporting a new site support request
- - [ ] I've verified that I'm running youtube-dl version **2021.04.07**
+ - [ ] I've verified that I'm running youtube-dl version **2021.04.26**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that none of provided URLs violate any copyrights
 - [ ] I've searched the bugtracker for similar site support requests including closed ones

.github/ISSUE_TEMPLATE/3_site_feature_request.md (vendored)

@@ -18,13 +18,13 @@ title: ''
 
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.26. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Search the bugtracker for similar site feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
 - Finally, put x into all relevant boxes (like this [x])
 -->
 
 - [ ] I'm reporting a site feature request
-- [ ] I've verified that I'm running youtube-dl version **2021.04.07**
+- [ ] I've verified that I'm running youtube-dl version **2021.04.26**
 - [ ] I've searched the bugtracker for similar site feature requests including closed ones
 

.github/ISSUE_TEMPLATE/4_bug_report.md (vendored, 6 changes)

@@ -18,7 +18,7 @@ title: ''
 
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.26. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
 - Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
 - Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
@@ -27,7 +27,7 @@ Carefully read and work through this check list in order to prevent the most com
 -->
 
 - [ ] I'm reporting a broken site support issue
-- [ ] I've verified that I'm running youtube-dl version **2021.04.07**
+- [ ] I've verified that I'm running youtube-dl version **2021.04.26**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
 - [ ] I've searched the bugtracker for similar bug reports including closed ones
@@ -43,7 +43,7 @@ Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dl version 2021.04.07
+[debug] youtube-dl version 2021.04.26
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}

.github/ISSUE_TEMPLATE/5_feature_request.md (vendored, 4 changes)

@@ -19,13 +19,13 @@ labels: 'request'
 
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
-- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.07. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.04.26. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
 - Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
 - Finally, put x into all relevant boxes (like this [x])
 -->
 
 - [ ] I'm reporting a feature request
-- [ ] I've verified that I'm running youtube-dl version **2021.04.07**
+- [ ] I've verified that I'm running youtube-dl version **2021.04.26**
 - [ ] I've searched the bugtracker for similar feature requests including closed ones
 

.github/workflows/ci.yml (vendored, 9 changes)

@@ -49,11 +49,18 @@ jobs:
     - name: Install Jython
       if: ${{ matrix.python-impl == 'jython' }}
       run: |
-        wget http://search.maven.org/remotecontent?filepath=org/python/jython-installer/2.7.1/jython-installer-2.7.1.jar -O jython-installer.jar
+        wget https://repo1.maven.org/maven2/org/python/jython-installer/2.7.1/jython-installer-2.7.1.jar -O jython-installer.jar
         java -jar jython-installer.jar -s -d "$HOME/jython"
         echo "$HOME/jython/bin" >> $GITHUB_PATH
     - name: Install nose
+      if: ${{ matrix.python-impl != 'jython' }}
       run: pip install nose
+    - name: Install nose (Jython)
+      if: ${{ matrix.python-impl == 'jython' }}
+      # Working around deprecation of support for non-SNI clients at PyPI CDN (see https://status.python.org/incidents/hzmjhqsdjqgb)
+      run: |
+        wget https://files.pythonhosted.org/packages/99/4f/13fb671119e65c4dce97c60e67d3fd9e6f7f809f2b307e2611f4701205cb/nose-1.3.7-py2-none-any.whl
+        pip install nose-1.3.7-py2-none-any.whl
     - name: Run tests
       continue-on-error: ${{ matrix.ytdl-test-set == 'download' || matrix.python-impl == 'jython' }}
       env:

ChangeLog (38 changes)

@@ -1,3 +1,41 @@
+version 2021.04.26
+
+Extractors
++ [xfileshare] Add support for wolfstream.tv (#28858)
+* [francetvinfo] Improve video id extraction (#28792)
+* [medaltv] Fix extraction (#28807)
+* [tver] Redirect all downloads to Brightcove (#28849)
+* [go] Improve video id extraction (#25207, #25216, #26058)
+* [youtube] Fix lazy extractors (#28780)
++ [bbc] Extract description and timestamp from __INITIAL_DATA__ (#28774)
+* [cbsnews] Fix extraction for python <3.6 (#23359)
+
+
+version 2021.04.17
+
+Core
++ [utils] Add support for experimental HTTP response status code
+  308 Permanent Redirect (#27877, #28768)
+
+Extractors
++ [lbry] Add support for HLS videos (#27877, #28768)
+* [youtube] Fix stretched ratio calculation
+* [youtube] Improve stretch extraction (#28769)
+* [youtube:tab] Improve grid extraction (#28725)
++ [youtube:tab] Detect series playlist on playlists page (#28723)
++ [youtube] Add more invidious instances (#28706)
+* [pluralsight] Extend anti-throttling timeout (#28712)
+* [youtube] Improve URL to extractor routing (#27572, #28335, #28742)
++ [maoritv] Add support for maoritelevision.com (#24552)
++ [youtube:tab] Pass innertube context and x-goog-visitor-id header along with
+  continuation requests (#28702)
+* [mtv] Fix Viacom A/B Testing Video Player extraction (#28703)
++ [pornhub] Extract DASH and HLS formats from get_media end point (#28698)
+* [cbssports] Fix extraction (#28682)
+* [jamendo] Fix track extraction (#28686)
+* [curiositystream] Fix format extraction (#26845, #28668)
+
+
 version 2021.04.07
 
 Core
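The 2021.04.17 entry above mentions experimental support for the HTTP 308 Permanent Redirect status code. The actual youtube_dl.utils change is not part of this diff; purely as an illustration of the idea, a Python 3 urllib opener can be taught to follow 308 the same way it follows 307, preserving the request method and body:

    import urllib.request

    class Handle308(urllib.request.HTTPRedirectHandler):
        # Illustration only, not the youtube-dl implementation: route 308
        # through the stock 307 handling so method and body are preserved.
        def http_error_308(self, req, fp, code, msg, headers):
            return self.http_error_307(req, fp, 307, msg, headers)

    opener = urllib.request.build_opener(Handle308())
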
docs/supportedsites.md

@@ -3,6 +3,7 @@
 - **20min**
 - **220.ro**
 - **23video**
+- **247sports**
 - **24video**
 - **3qsdn**: 3Q SDN
 - **3sat**
@@ -160,7 +161,8 @@
 - **cbsnews**: CBS News
 - **cbsnews:embed**
 - **cbsnews:livevideo**: CBS News Live Videos
-- **CBSSports**
+- **cbssports**
+- **cbssports:embed**
 - **CCMA**
 - **CCTV**: 央视网
 - **CDA**
@@ -490,6 +492,7 @@
 - **mangomolo:live**
 - **mangomolo:video**
 - **ManyVids**
+- **MaoriTV**
 - **Markiza**
 - **MarkizaPage**
 - **massengeschmack.tv**
@@ -1159,7 +1162,7 @@
 - **WWE**
 - **XBef**
 - **XboxClips**
-- **XFileShare**: XFileShare based sites: Aparat, ClipWatching, GoUnlimited, GoVid, HolaVid, Streamty, TheVideoBee, Uqload, VidBom, vidlo, VidLocker, VidShare, VUp, XVideoSharing
+- **XFileShare**: XFileShare based sites: Aparat, ClipWatching, GoUnlimited, GoVid, HolaVid, Streamty, TheVideoBee, Uqload, VidBom, vidlo, VidLocker, VidShare, VUp, WolfStream, XVideoSharing
 - **XHamster**
 - **XHamsterEmbed**
 - **XHamsterUser**

test/test_all_urls.py

@@ -70,15 +70,6 @@ class TestAllURLsMatching(unittest.TestCase):
         # self.assertMatch('http://www.youtube.com/results?search_query=making+mustard', ['youtube:search_url'])
         # self.assertMatch('https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', ['youtube:search_url'])
 
-    def test_youtube_extract(self):
-        assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)
-        assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
-        assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
-        assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc')
-        assertExtractId('https://www.youtube.com/watch_popup?v=BaW_jenozKc', 'BaW_jenozKc')
-        assertExtractId('http://www.youtube.com/watch?v=BaW_jenozKcsharePLED17F32AD9753930', 'BaW_jenozKc')
-        assertExtractId('BaW_jenozKc', 'BaW_jenozKc')
-
     def test_facebook_matching(self):
         self.assertTrue(FacebookIE.suitable('https://www.facebook.com/Shiniknoh#!/photo.php?v=10153317450565268'))
         self.assertTrue(FacebookIE.suitable('https://www.facebook.com/cindyweather?fref=ts#!/photo.php?v=10152183998945793'))

test/test_execution.py

@@ -39,6 +39,16 @@ class TestExecution(unittest.TestCase):
         _, stderr = p.communicate()
         self.assertFalse(stderr)
 
+    def test_lazy_extractors(self):
+        try:
+            subprocess.check_call([sys.executable, 'devscripts/make_lazy_extractors.py', 'youtube_dl/extractor/lazy_extractors.py'], cwd=rootDir, stdout=_DEV_NULL)
+            subprocess.check_call([sys.executable, 'test/test_all_urls.py'], cwd=rootDir, stdout=_DEV_NULL)
+        finally:
+            try:
+                os.remove('youtube_dl/extractor/lazy_extractors.py')
+            except (IOError, OSError):
+                pass
+
 
 if __name__ == '__main__':
     unittest.main()

test/test_youtube_misc.py (new file, 26 lines)

@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+from __future__ import unicode_literals
+
+# Allow direct execution
+import os
+import sys
+import unittest
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+
+from youtube_dl.extractor import YoutubeIE
+
+
+class TestYoutubeMisc(unittest.TestCase):
+    def test_youtube_extract(self):
+        assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)
+        assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
+        assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
+        assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc')
+        assertExtractId('https://www.youtube.com/watch_popup?v=BaW_jenozKc', 'BaW_jenozKc')
+        assertExtractId('http://www.youtube.com/watch?v=BaW_jenozKcsharePLED17F32AD9753930', 'BaW_jenozKc')
+        assertExtractId('BaW_jenozKc', 'BaW_jenozKc')
+
+
+if __name__ == '__main__':
+    unittest.main()

youtube_dl/YoutubeDL.py

@@ -773,11 +773,20 @@ class YoutubeDL(object):
 
     def extract_info(self, url, download=True, ie_key=None, extra_info={},
                      process=True, force_generic_extractor=False):
-        '''
-        Returns a list with a dictionary for each video we find.
-        If 'download', also downloads the videos.
-        extra_info is a dict containing the extra values to add to each result
-        '''
+        """
+        Return a list with a dictionary for each video extracted.
+
+        Arguments:
+        url -- URL to extract
+
+        Keyword arguments:
+        download -- whether to download videos during extraction
+        ie_key -- extractor key hint
+        extra_info -- dictionary containing the extra values to add to each result
+        process -- whether to resolve all unresolved references (URLs, playlist items),
+                   must be True for download to work.
+        force_generic_extractor -- force using the generic extractor
+        """
 
         if not ie_key and force_generic_extractor:
             ie_key = 'Generic'

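For readers unfamiliar with the method being documented in the hunk above, a minimal call looks like this; the options dict and URL are illustrative and not part of the diff:

    from youtube_dl import YoutubeDL

    # Minimal illustrative use of extract_info() as documented above.
    with YoutubeDL({'quiet': True, 'skip_download': True}) as ydl:
        info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc', download=False)
        print(info.get('id'), info.get('title'))
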
youtube_dl/extractor/bbc.py

@@ -11,6 +11,7 @@ from ..compat import (
     compat_etree_Element,
     compat_HTTPError,
     compat_parse_qs,
+    compat_str,
     compat_urllib_parse_urlparse,
     compat_urlparse,
 )
@@ -25,8 +26,10 @@ from ..utils import (
     js_to_json,
     parse_duration,
     parse_iso8601,
+    strip_or_none,
     try_get,
     unescapeHTML,
+    unified_timestamp,
     url_or_none,
     urlencode_postdata,
     urljoin,
@@ -761,8 +764,17 @@ class BBCIE(BBCCoUkIE):
         'only_matching': True,
     }, {
         # custom redirection to www.bbc.com
+        # also, video with window.__INITIAL_DATA__
         'url': 'http://www.bbc.co.uk/news/science-environment-33661876',
-        'only_matching': True,
+        'info_dict': {
+            'id': 'p02xzws1',
+            'ext': 'mp4',
+            'title': "Pluto may have 'nitrogen glaciers'",
+            'description': 'md5:6a95b593f528d7a5f2605221bc56912f',
+            'thumbnail': r're:https?://.+/.+\.jpg',
+            'timestamp': 1437785037,
+            'upload_date': '20150725',
+        },
     }, {
         # single video article embedded with data-media-vpid
         'url': 'http://www.bbc.co.uk/sport/rowing/35908187',
@@ -1164,12 +1176,29 @@ class BBCIE(BBCCoUkIE):
                 continue
             formats, subtitles = self._download_media_selector(item_id)
             self._sort_formats(formats)
+            item_desc = None
+            blocks = try_get(media, lambda x: x['summary']['blocks'], list)
+            if blocks:
+                summary = []
+                for block in blocks:
+                    text = try_get(block, lambda x: x['model']['text'], compat_str)
+                    if text:
+                        summary.append(text)
+                if summary:
+                    item_desc = '\n\n'.join(summary)
+            item_time = None
+            for meta in try_get(media, lambda x: x['metadata']['items'], list) or []:
+                if try_get(meta, lambda x: x['label']) == 'Published':
+                    item_time = unified_timestamp(meta.get('timestamp'))
+                    break
             entries.append({
                 'id': item_id,
                 'title': item_title,
                 'thumbnail': item.get('holdingImageUrl'),
                 'formats': formats,
                 'subtitles': subtitles,
+                'timestamp': item_time,
+                'description': strip_or_none(item_desc),
             })
         for resp in (initial_data.get('data') or {}).values():
             name = resp.get('name')

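The new description/timestamp code leans on youtube-dl's try_get() helper, which swallows missing-key errors and returns the value only if it has the expected type. A quick illustration with made-up sample data:

    from youtube_dl.utils import try_get

    media = {'summary': {'blocks': [{'model': {'text': 'First paragraph'}}]}}

    blocks = try_get(media, lambda x: x['summary']['blocks'], list)  # -> the list of blocks
    items = try_get(media, lambda x: x['metadata']['items'], list)   # -> None, no KeyError raised
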
youtube_dl/extractor/blinkx.py (deleted, 86 lines)

@@ -1,86 +0,0 @@
-from __future__ import unicode_literals
-
-import json
-
-from .common import InfoExtractor
-from ..utils import (
-    remove_start,
-    int_or_none,
-)
-
-
-class BlinkxIE(InfoExtractor):
-    _VALID_URL = r'(?:https?://(?:www\.)blinkx\.com/#?ce/|blinkx:)(?P<id>[^?]+)'
-    IE_NAME = 'blinkx'
-
-    _TEST = {
-        'url': 'http://www.blinkx.com/ce/Da0Gw3xc5ucpNduzLuDDlv4WC9PuI4fDi1-t6Y3LyfdY2SZS5Urbvn-UPJvrvbo8LTKTc67Wu2rPKSQDJyZeeORCR8bYkhs8lI7eqddznH2ofh5WEEdjYXnoRtj7ByQwt7atMErmXIeYKPsSDuMAAqJDlQZ-3Ff4HJVeH_s3Gh8oQ',
-        'md5': '337cf7a344663ec79bf93a526a2e06c7',
-        'info_dict': {
-            'id': 'Da0Gw3xc',
-            'ext': 'mp4',
-            'title': 'No Daily Show for John Oliver; HBO Show Renewed - IGN News',
-            'uploader': 'IGN News',
-            'upload_date': '20150217',
-            'timestamp': 1424215740,
-            'description': 'HBO has renewed Last Week Tonight With John Oliver for two more seasons.',
-            'duration': 47.743333,
-        },
-    }
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-        display_id = video_id[:8]
-
-        api_url = ('https://apib4.blinkx.com/api.php?action=play_video&'
-                   + 'video=%s' % video_id)
-        data_json = self._download_webpage(api_url, display_id)
-        data = json.loads(data_json)['api']['results'][0]
-        duration = None
-        thumbnails = []
-        formats = []
-        for m in data['media']:
-            if m['type'] == 'jpg':
-                thumbnails.append({
-                    'url': m['link'],
-                    'width': int(m['w']),
-                    'height': int(m['h']),
-                })
-            elif m['type'] == 'original':
-                duration = float(m['d'])
-            elif m['type'] == 'youtube':
-                yt_id = m['link']
-                self.to_screen('Youtube video detected: %s' % yt_id)
-                return self.url_result(yt_id, 'Youtube', video_id=yt_id)
-            elif m['type'] in ('flv', 'mp4'):
-                vcodec = remove_start(m['vcodec'], 'ff')
-                acodec = remove_start(m['acodec'], 'ff')
-                vbr = int_or_none(m.get('vbr') or m.get('vbitrate'), 1000)
-                abr = int_or_none(m.get('abr') or m.get('abitrate'), 1000)
-                tbr = vbr + abr if vbr and abr else None
-                format_id = '%s-%sk-%s' % (vcodec, tbr, m['w'])
-                formats.append({
-                    'format_id': format_id,
-                    'url': m['link'],
-                    'vcodec': vcodec,
-                    'acodec': acodec,
-                    'abr': abr,
-                    'vbr': vbr,
-                    'tbr': tbr,
-                    'width': int_or_none(m.get('w')),
-                    'height': int_or_none(m.get('h')),
-                })
-
-        self._sort_formats(formats)
-
-        return {
-            'id': display_id,
-            'fullid': video_id,
-            'title': data['title'],
-            'formats': formats,
-            'uploader': data['channel_name'],
-            'timestamp': data['pubdate_epoch'],
-            'description': data.get('description'),
-            'thumbnails': thumbnails,
-            'duration': duration,
-        }

youtube_dl/extractor/cbsnews.py

@@ -26,7 +26,7 @@ class CBSNewsEmbedIE(CBSIE):
     def _real_extract(self, url):
         item = self._parse_json(zlib.decompress(compat_b64decode(
             compat_urllib_parse_unquote(self._match_id(url))),
-            -zlib.MAX_WBITS), None)['video']['items'][0]
+            -zlib.MAX_WBITS).decode('utf-8'), None)['video']['items'][0]
         return self._extract_video_info(item['mpxRefId'], 'cbsnews')

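The one-token change above exists because json.loads() only accepts bytes starting with Python 3.6; on 3.5 and earlier the bytes returned by zlib.decompress() raise TypeError. A minimal illustration with a made-up payload standing in for the decompressed data:

    import json

    raw = b'{"video": {"items": [{"mpxRefId": "abc123"}]}}'

    # json.loads(raw) works on Python >= 3.6 but fails on 3.5 and older;
    # decoding to str first behaves the same everywhere.
    item = json.loads(raw.decode('utf-8'))['video']['items'][0]
    print(item['mpxRefId'])
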
youtube_dl/extractor/cda.py

@@ -133,6 +133,8 @@ class CDAIE(InfoExtractor):
             'age_limit': 18 if need_confirm_age else 0,
         }
 
+        info = self._search_json_ld(webpage, video_id, default={})
+
         # Source: https://www.cda.pl/js/player.js?t=1606154898
         def decrypt_file(a):
             for p in ('_XDDD', '_CDA', '_ADC', '_CXD', '_QWE', '_Q5', '_IKSDE'):
@@ -197,7 +199,7 @@ class CDAIE(InfoExtractor):
                 handler = self._download_webpage
 
             webpage = handler(
-                self._BASE_URL + href, video_id,
+                urljoin(self._BASE_URL, href), video_id,
                 'Downloading %s version information' % resolution, fatal=False)
             if not webpage:
                 # Manually report warning because empty page is returned when
@@ -209,6 +211,4 @@ class CDAIE(InfoExtractor):
 
         self._sort_formats(formats)
 
-        info = self._search_json_ld(webpage, video_id, default={})
-
         return merge_dicts(info_dict, info)

youtube_dl/extractor/dispeak.py

@@ -32,6 +32,18 @@ class DigitallySpeakingIE(InfoExtractor):
         # From http://www.gdcvault.com/play/1013700/Advanced-Material
         'url': 'http://sevt.dispeak.com/ubm/gdc/eur10/xml/11256_1282118587281VNIT.xml',
         'only_matching': True,
+    }, {
+        # From https://gdcvault.com/play/1016624, empty speakerVideo
+        'url': 'https://sevt.dispeak.com/ubm/gdc/online12/xml/201210-822101_1349794556671DDDD.xml',
+        'info_dict': {
+            'id': '201210-822101_1349794556671DDDD',
+            'ext': 'flv',
+            'title': 'Pre-launch - Preparing to Take the Plunge',
+        },
+    }, {
+        # From http://www.gdcvault.com/play/1014846/Conference-Keynote-Shigeru, empty slideVideo
+        'url': 'http://events.digitallyspeaking.com/gdc/project25/xml/p25-miyamoto1999_1282467389849HSVB.xml',
+        'only_matching': True,
     }]
 
     def _parse_mp4(self, metadata):
@@ -84,25 +96,19 @@ class DigitallySpeakingIE(InfoExtractor):
                 'vcodec': 'none',
                 'format_id': audio.get('code'),
             })
-        slide_video_path = xpath_text(metadata, './slideVideo', fatal=True)
-        formats.append({
-            'url': 'rtmp://%s/ondemand?ovpfv=1.1' % akamai_url,
-            'play_path': remove_end(slide_video_path, '.flv'),
-            'ext': 'flv',
-            'format_note': 'slide deck video',
-            'quality': -2,
-            'preference': -2,
-            'format_id': 'slides',
-        })
-        speaker_video_path = xpath_text(metadata, './speakerVideo', fatal=True)
-        formats.append({
-            'url': 'rtmp://%s/ondemand?ovpfv=1.1' % akamai_url,
-            'play_path': remove_end(speaker_video_path, '.flv'),
-            'ext': 'flv',
-            'format_note': 'speaker video',
-            'quality': -1,
-            'preference': -1,
-            'format_id': 'speaker',
-        })
+        for video_key, format_id, preference in (
+                ('slide', 'slides', -2), ('speaker', 'speaker', -1)):
+            video_path = xpath_text(metadata, './%sVideo' % video_key)
+            if not video_path:
+                continue
+            formats.append({
+                'url': 'rtmp://%s/ondemand?ovpfv=1.1' % akamai_url,
+                'play_path': remove_end(video_path, '.flv'),
+                'ext': 'flv',
+                'format_note': '%s video' % video_key,
+                'quality': preference,
+                'preference': preference,
+                'format_id': format_id,
+            })
         return formats

youtube_dl/extractor/extractors.py

@@ -132,7 +132,6 @@ from .bleacherreport import (
     BleacherReportIE,
     BleacherReportCMSIE,
 )
-from .blinkx import BlinkxIE
 from .bloomberg import BloombergIE
 from .bokecc import BokeCCIE
 from .bongacams import BongaCamsIE

youtube_dl/extractor/francetvinfo.py

@@ -383,6 +383,10 @@ class FranceTVInfoIE(FranceTVBaseInfoExtractor):
     }, {
         'url': 'http://france3-regions.francetvinfo.fr/limousin/emissions/jt-1213-limousin',
         'only_matching': True,
+    }, {
+        # "<figure id=" pattern (#28792)
+        'url': 'https://www.francetvinfo.fr/culture/patrimoine/incendie-de-notre-dame-de-paris/notre-dame-de-paris-de-l-incendie-de-la-cathedrale-a-sa-reconstruction_4372291.html',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):
@@ -400,7 +404,7 @@ class FranceTVInfoIE(FranceTVBaseInfoExtractor):
             (r'player\.load[^;]+src:\s*["\']([^"\']+)',
              r'id-video=([^@]+@[^"]+)',
              r'<a[^>]+href="(?:https?:)?//videos\.francetv\.fr/video/([^@]+@[^"]+)"',
-             r'data-id=["\']([\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})'),
+             r'(?:data-id|<figure[^<]+\bid)=["\']([\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})'),
             webpage, 'video id')
 
         return self._make_url_result(video_id)

youtube_dl/extractor/funimation.py

@@ -16,7 +16,7 @@ from ..utils import (
 
 
 class FunimationIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?funimation(?:\.com|now\.uk)/shows/[^/]+/(?P<id>[^/?#&]+)'
+    _VALID_URL = r'https?://(?:www\.)?funimation(?:\.com|now\.uk)/(?:[^/]+/)?shows/[^/]+/(?P<id>[^/?#&]+)'
 
     _NETRC_MACHINE = 'funimation'
     _TOKEN = None
@@ -51,6 +51,10 @@ class FunimationIE(InfoExtractor):
     }, {
         'url': 'https://www.funimationnow.uk/shows/puzzle-dragons-x/drop-impact/simulcast/',
         'only_matching': True,
+    }, {
+        # with lang code
+        'url': 'https://www.funimation.com/en/shows/hacksign/role-play/',
+        'only_matching': True,
     }]
 
     def _login(self):

youtube_dl/extractor/gdcvault.py

@@ -6,6 +6,7 @@ from .common import InfoExtractor
 from .kaltura import KalturaIE
 from ..utils import (
     HEADRequest,
+    remove_start,
     sanitized_Request,
     smuggle_url,
     urlencode_postdata,
@@ -102,6 +103,26 @@ class GDCVaultIE(InfoExtractor):
             'format': 'mp4-408',
         },
     },
+    {
+        # Kaltura embed, whitespace between quote and embedded URL in iframe's src
+        'url': 'https://www.gdcvault.com/play/1025699',
+        'info_dict': {
+            'id': '0_zagynv0a',
+            'ext': 'mp4',
+            'title': 'Tech Toolbox',
+            'upload_date': '20190408',
+            'uploader_id': 'joe@blazestreaming.com',
+            'timestamp': 1554764629,
+        },
+        'params': {
+            'skip_download': True,
+        },
+    },
+    {
+        # HTML5 video
+        'url': 'http://www.gdcvault.com/play/1014846/Conference-Keynote-Shigeru',
+        'only_matching': True,
+    },
     ]
 
     def _login(self, webpage_url, display_id):
@@ -175,7 +196,18 @@ class GDCVaultIE(InfoExtractor):
 
         xml_name = self._html_search_regex(
             r'<iframe src=".*?\?xml(?:=|URL=xml/)(.+?\.xml).*?".*?</iframe>',
-            start_page, 'xml filename')
+            start_page, 'xml filename', default=None)
+        if not xml_name:
+            info = self._parse_html5_media_entries(url, start_page, video_id)[0]
+            info.update({
+                'title': remove_start(self._search_regex(
+                    r'>Session Name:\s*<.*?>\s*<td>(.+?)</td>', start_page,
+                    'title', default=None) or self._og_search_title(
+                    start_page, default=None), 'GDC Vault - '),
+                'id': video_id,
+                'display_id': display_id,
+            })
+            return info
         embed_url = '%s/xml/%s' % (xml_root, xml_name)
         ie_key = 'DigitallySpeaking'

youtube_dl/extractor/go.py

@@ -4,10 +4,12 @@ from __future__ import unicode_literals
 import re
 
 from .adobepass import AdobePassIE
+from ..compat import compat_str
 from ..utils import (
     int_or_none,
     determine_ext,
     parse_age_limit,
+    try_get,
     urlencode_postdata,
     ExtractorError,
 )
@@ -116,6 +118,18 @@ class GoIE(AdobePassIE):
             # m3u8 download
             'skip_download': True,
         },
+    }, {
+        'url': 'https://abc.com/shows/modern-family/episode-guide/season-01/101-pilot',
+        'info_dict': {
+            'id': 'VDKA22600213',
+            'ext': 'mp4',
+            'title': 'Pilot',
+            'description': 'md5:74306df917cfc199d76d061d66bebdb4',
+        },
+        'params': {
+            # m3u8 download
+            'skip_download': True,
+        },
     }, {
         'url': 'http://abc.go.com/shows/the-catch/episode-guide/season-01/10-the-wedding',
         'only_matching': True,
@@ -149,11 +163,27 @@ class GoIE(AdobePassIE):
         brand = site_info.get('brand')
         if not video_id or not site_info:
             webpage = self._download_webpage(url, display_id or video_id)
+            data = self._parse_json(
+                self._search_regex(
+                    r'["\']__abc_com__["\']\s*\]\s*=\s*({.+?})\s*;', webpage,
+                    'data', default='{}'),
+                display_id or video_id, fatal=False)
+            # https://abc.com/shows/modern-family/episode-guide/season-01/101-pilot
+            layout = try_get(data, lambda x: x['page']['content']['video']['layout'], dict)
+            video_id = None
+            if layout:
+                video_id = try_get(
+                    layout,
+                    (lambda x: x['videoid'], lambda x: x['video']['id']),
+                    compat_str)
+            if not video_id:
                 video_id = self._search_regex(
                     (
                         # There may be inner quotes, e.g. data-video-id="'VDKA3609139'"
                         # from http://freeform.go.com/shows/shadowhunters/episodes/season-2/1-this-guilty-blood
                         r'data-video-id=["\']*(VDKA\w+)',
+                        # page.analytics.videoIdCode
+                        r'\bvideoIdCode["\']\s*:\s*["\']((?:vdka|VDKA)\w+)',
                         # https://abc.com/shows/the-rookie/episode-guide/season-02/03-the-bet
                         r'\b(?:video)?id["\']\s*:\s*["\'](VDKA\w+)'
                     ), webpage, 'video id', default=video_id)

youtube_dl/extractor/kaltura.py

@@ -120,7 +120,7 @@ class KalturaIE(InfoExtractor):
     def _extract_urls(webpage):
         # Embed codes: https://knowledge.kaltura.com/embedding-kaltura-media-players-your-site
         finditer = (
-            re.finditer(
+            list(re.finditer(
                 r"""(?xs)
                     kWidget\.(?:thumb)?[Ee]mbed\(
                     \{.*?
@@ -128,8 +128,8 @@ class KalturaIE(InfoExtractor):
                     (?P<q2>['"])_?(?P<partner_id>(?:(?!(?P=q2)).)+)(?P=q2),.*?
                     (?P<q3>['"])entry_?[Ii]d(?P=q3)\s*:\s*
                     (?P<q4>['"])(?P<id>(?:(?!(?P=q4)).)+)(?P=q4)(?:,|\s*\})
-                """, webpage)
-            or re.finditer(
+                """, webpage))
+            or list(re.finditer(
                 r'''(?xs)
                     (?P<q1>["'])
                     (?:https?:)?//cdnapi(?:sec)?\.kaltura\.com(?::\d+)?/(?:(?!(?P=q1)).)*\b(?:p|partner_id)/(?P<partner_id>\d+)(?:(?!(?P=q1)).)*
@@ -142,16 +142,16 @@ class KalturaIE(InfoExtractor):
                     \[\s*(?P<q2_1>["'])entry_?[Ii]d(?P=q2_1)\s*\]\s*=\s*
                     )
                     (?P<q3>["'])(?P<id>(?:(?!(?P=q3)).)+)(?P=q3)
-                ''', webpage)
-            or re.finditer(
+                ''', webpage))
+            or list(re.finditer(
                 r'''(?xs)
-                    <(?:iframe[^>]+src|meta[^>]+\bcontent)=(?P<q1>["'])
+                    <(?:iframe[^>]+src|meta[^>]+\bcontent)=(?P<q1>["'])\s*
                     (?:https?:)?//(?:(?:www|cdnapi(?:sec)?)\.)?kaltura\.com/(?:(?!(?P=q1)).)*\b(?:p|partner_id)/(?P<partner_id>\d+)
                     (?:(?!(?P=q1)).)*
                     [?&;]entry_id=(?P<id>(?:(?!(?P=q1))[^&])+)
                     (?:(?!(?P=q1)).)*
                     (?P=q1)
-                ''', webpage)
+                ''', webpage))
         )
         urls = []
         for mobj in finditer:

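The list(...) wrappers matter because re.finditer() returns an iterator, and an iterator is truthy even when it yields nothing, so the `or` chain above could never fall through to the later patterns. A quick illustration:

    import re

    empty = re.finditer(r'\d+', 'no digits here')
    print(bool(empty))        # True: an iterator with no matches is still truthy
    print(bool(list(empty)))  # False: an empty list lets `a or b` try the next pattern
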
youtube_dl/extractor/lbry.py

@@ -120,6 +120,26 @@ class LBRYIE(LBRYBaseIE):
         'channel_url': 'https://lbry.tv/@LBRYFoundation:0ed629d2b9c601300cacf7eabe9da0be79010212',
         'vcodec': 'none',
         }
+    }, {
+        # HLS
+        'url': 'https://odysee.com/@gardeningincanada:b/plants-i-will-never-grow-again.-the:e',
+        'md5': 'fc82f45ea54915b1495dd7cb5cc1289f',
+        'info_dict': {
+            'id': 'e51671357333fe22ae88aad320bde2f6f96b1410',
+            'ext': 'mp4',
+            'title': 'PLANTS I WILL NEVER GROW AGAIN. THE BLACK LIST PLANTS FOR A CANADIAN GARDEN | Gardening in Canada 🍁',
+            'description': 'md5:9c539c6a03fb843956de61a4d5288d5e',
+            'timestamp': 1618254123,
+            'upload_date': '20210412',
+            'release_timestamp': 1618254002,
+            'release_date': '20210412',
+            'tags': list,
+            'duration': 554,
+            'channel': 'Gardening In Canada',
+            'channel_id': 'b8be0e93b423dad221abe29545fbe8ec36e806bc',
+            'channel_url': 'https://odysee.com/@gardeningincanada:b8be0e93b423dad221abe29545fbe8ec36e806bc',
+            'formats': 'mincount:3',
+        }
     }, {
         'url': 'https://odysee.com/@BrodieRobertson:5/apple-is-tracking-everything-you-do-on:e',
         'only_matching': True,
@@ -163,10 +183,18 @@ class LBRYIE(LBRYBaseIE):
         streaming_url = self._call_api_proxy(
             'get', claim_id, {'uri': uri}, 'streaming url')['streaming_url']
         info = self._parse_stream(result, url)
+        urlh = self._request_webpage(
+            streaming_url, display_id, note='Downloading streaming redirect url info')
+        if determine_ext(urlh.geturl()) == 'm3u8':
+            info['formats'] = self._extract_m3u8_formats(
+                urlh.geturl(), display_id, 'mp4', entry_protocol='m3u8_native',
+                m3u8_id='hls')
+            self._sort_formats(info['formats'])
+        else:
+            info['url'] = streaming_url
         info.update({
             'id': claim_id,
             'title': title,
-            'url': streaming_url,
         })
         return info

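The HLS branch above keys off determine_ext() applied to the URL the streaming endpoint redirects to. Roughly, with made-up URLs:

    from youtube_dl.utils import determine_ext

    determine_ext('https://player.example.com/api/v4/streams/abc/master.m3u8')  # -> 'm3u8': treat as HLS manifest
    determine_ext('https://player.example.com/api/v4/streams/abc')              # -> 'unknown_video' (default): direct file
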
youtube_dl/extractor/medaltv.py

@@ -15,33 +15,39 @@ from ..utils import (
 
 
 class MedalTVIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?medal\.tv/clips/(?P<id>[0-9]+)'
+    _VALID_URL = r'https?://(?:www\.)?medal\.tv/clips/(?P<id>[^/?#&]+)'
     _TESTS = [{
-        'url': 'https://medal.tv/clips/34934644/3Is9zyGMoBMr',
+        'url': 'https://medal.tv/clips/2mA60jWAGQCBH',
         'md5': '7b07b064331b1cf9e8e5c52a06ae68fa',
         'info_dict': {
-            'id': '34934644',
+            'id': '2mA60jWAGQCBH',
             'ext': 'mp4',
             'title': 'Quad Cold',
             'description': 'Medal,https://medal.tv/desktop/',
             'uploader': 'MowgliSB',
             'timestamp': 1603165266,
             'upload_date': '20201020',
-            'uploader_id': 10619174,
+            'uploader_id': '10619174',
         }
     }, {
-        'url': 'https://medal.tv/clips/36787208',
+        'url': 'https://medal.tv/clips/2um24TWdty0NA',
         'md5': 'b6dc76b78195fff0b4f8bf4a33ec2148',
         'info_dict': {
-            'id': '36787208',
+            'id': '2um24TWdty0NA',
             'ext': 'mp4',
             'title': 'u tk me i tk u bigger',
             'description': 'Medal,https://medal.tv/desktop/',
             'uploader': 'Mimicc',
             'timestamp': 1605580939,
             'upload_date': '20201117',
-            'uploader_id': 5156321,
+            'uploader_id': '5156321',
         }
+    }, {
+        'url': 'https://medal.tv/clips/37rMeFpryCC-9',
+        'only_matching': True,
+    }, {
+        'url': 'https://medal.tv/clips/2WRj40tpY_EU9',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):

youtube_dl/extractor/pluralsight.py

@@ -393,7 +393,7 @@ query viewClip {
             # To somewhat reduce the probability of these consequences
             # we will sleep random amount of time before each call to ViewClip.
             self._sleep(
-                random.randint(2, 5), display_id,
+                random.randint(5, 10), display_id,
                 '%(video_id)s: Waiting for %(timeout)s seconds to avoid throttling')
 
             if not viewclip:

youtube_dl/extractor/svt.py

@@ -146,7 +146,7 @@ class SVTPlayIE(SVTPlayBaseIE):
                         )
                         (?P<svt_id>[^/?#&]+)|
                         https?://(?:www\.)?(?:svtplay|oppetarkiv)\.se/(?:video|klipp|kanaler)/(?P<id>[^/?#&]+)
-                        (?:.*?modalId=(?P<modal_id>[\da-zA-Z-]+))?
+                        (?:.*?(?:modalId|id)=(?P<modal_id>[\da-zA-Z-]+))?
                     )
                     '''
     _TESTS = [{
@@ -177,6 +177,9 @@ class SVTPlayIE(SVTPlayBaseIE):
     }, {
         'url': 'https://www.svtplay.se/video/30479064/husdrommar/husdrommar-sasong-8-designdrommar-i-stenungsund?modalId=8zVbDPA',
         'only_matching': True,
+    }, {
+        'url': 'https://www.svtplay.se/video/30684086/rapport/rapport-24-apr-18-00-7?id=e72gVpa',
+        'only_matching': True,
     }, {
         # geo restricted to Sweden
         'url': 'http://www.oppetarkiv.se/video/5219710/trollflojten',
@@ -259,7 +262,7 @@ class SVTPlayIE(SVTPlayBaseIE):
         if not svt_id:
             svt_id = self._search_regex(
                 (r'<video[^>]+data-video-id=["\']([\da-zA-Z-]+)',
-                 r'<[^>]+\bdata-rt=["\']top-area-play-button["\'][^>]+\bhref=["\'][^"\']*video/%s/[^"\']*\bmodalId=([\da-zA-Z-]+)' % re.escape(video_id),
+                 r'<[^>]+\bdata-rt=["\']top-area-play-button["\'][^>]+\bhref=["\'][^"\']*video/%s/[^"\']*\b(?:modalId|id)=([\da-zA-Z-]+)' % re.escape(video_id),
                  r'["\']videoSvtId["\']\s*:\s*["\']([\da-zA-Z-]+)',
                  r'["\']videoSvtId\\?["\']\s*:\s*\\?["\']([\da-zA-Z-]+)',
                  r'"content"\s*:\s*{.*?"id"\s*:\s*"([\da-zA-Z-]+)"',

youtube_dl/extractor/tv2dk.py

@@ -74,6 +74,12 @@ class TV2DKIE(InfoExtractor):
         webpage = self._download_webpage(url, video_id)
 
         entries = []
+
+        def add_entry(partner_id, kaltura_id):
+            entries.append(self.url_result(
+                'kaltura:%s:%s' % (partner_id, kaltura_id), 'Kaltura',
+                video_id=kaltura_id))
+
         for video_el in re.findall(r'(?s)<[^>]+\bdata-entryid\s*=[^>]*>', webpage):
             video = extract_attributes(video_el)
             kaltura_id = video.get('data-entryid')
@@ -82,9 +88,14 @@ class TV2DKIE(InfoExtractor):
             partner_id = video.get('data-partnerid')
             if not partner_id:
                 continue
-            entries.append(self.url_result(
-                'kaltura:%s:%s' % (partner_id, kaltura_id), 'Kaltura',
-                video_id=kaltura_id))
+            add_entry(partner_id, kaltura_id)
+        if not entries:
+            kaltura_id = self._search_regex(
+                r'entry_id\s*:\s*["\']([0-9a-z_]+)', webpage, 'kaltura id')
+            partner_id = self._search_regex(
+                (r'\\u002Fp\\u002F(\d+)\\u002F', r'/p/(\d+)/'), webpage,
+                'partner id')
+            add_entry(partner_id, kaltura_id)
         return self.playlist_result(entries)

youtube_dl/extractor/tver.py

@@ -9,7 +9,6 @@ from ..utils import (
     int_or_none,
     remove_start,
     smuggle_url,
-    strip_or_none,
     try_get,
 )
 
@@ -45,32 +44,18 @@ class TVerIE(InfoExtractor):
             query={'token': self._TOKEN})['main']
         p_id = main['publisher_id']
         service = remove_start(main['service'], 'ts_')
-        info = {
-            '_type': 'url_transparent',
-            'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),
-            'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),
-        }
-
-        if service == 'cx':
-            title = main['title']
-            subtitle = strip_or_none(main.get('subtitle'))
-            if subtitle:
-                title += ' - ' + subtitle
-            info.update({
-                'title': title,
-                'url': 'https://i.fod.fujitv.co.jp/plus7/web/%s/%s.html' % (p_id[:4], p_id),
-                'ie_key': 'FujiTVFODPlus7',
-            })
-        else:
+
         r_id = main['reference_id']
         if service not in ('tx', 'russia2018', 'sebare2018live', 'gorin'):
             r_id = 'ref:' + r_id
         bc_url = smuggle_url(
             self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id),
             {'geo_countries': ['JP']})
-        info.update({
+
+        return {
+            '_type': 'url_transparent',
+            'description': try_get(main, lambda x: x['note'][0]['text'], compat_str),
+            'episode_number': int_or_none(try_get(main, lambda x: x['ext']['episode_number'])),
             'url': bc_url,
             'ie_key': 'BrightcoveNew',
-        })
-
-        return info
+        }

@@ -19,6 +19,7 @@ from ..utils import (
     strip_or_none,
     unified_timestamp,
     update_url_query,
+    url_or_none,
     xpath_text,
 )
 
@@ -52,6 +53,9 @@ class TwitterBaseIE(InfoExtractor):
         return [f]
 
     def _extract_formats_from_vmap_url(self, vmap_url, video_id):
+        vmap_url = url_or_none(vmap_url)
+        if not vmap_url:
+            return []
         vmap_data = self._download_xml(vmap_url, video_id)
         formats = []
         urls = []
@@ -58,6 +58,7 @@ class XFileShareIE(InfoExtractor):
         (r'vidlocker\.xyz', 'VidLocker'),
         (r'vidshare\.tv', 'VidShare'),
         (r'vup\.to', 'VUp'),
+        (r'wolfstream\.tv', 'WolfStream'),
         (r'xvideosharing\.com', 'XVideoSharing'),
     )
 
@@ -82,6 +83,9 @@ class XFileShareIE(InfoExtractor):
     }, {
         'url': 'https://aparat.cam/n4d6dh0wvlpr',
         'only_matching': True,
+    }, {
+        'url': 'https://wolfstream.tv/nthme29v9u2x',
+        'only_matching': True,
     }]
 
     @staticmethod
@@ -11,6 +11,7 @@ from ..utils import (
     parse_duration,
     sanitized_Request,
     str_to_int,
+    url_or_none,
 )
 
 
@@ -87,10 +88,10 @@ class XTubeIE(InfoExtractor):
             'Cookie': 'age_verified=1; cookiesAccepted=1',
         })
 
-        title, thumbnail, duration = [None] * 3
+        title, thumbnail, duration, sources, media_definition = [None] * 5
 
         config = self._parse_json(self._search_regex(
-            r'playerConf\s*=\s*({.+?})\s*,\s*(?:\n|loaderConf)', webpage, 'config',
+            r'playerConf\s*=\s*({.+?})\s*,\s*(?:\n|loaderConf|playerWrapper)', webpage, 'config',
             default='{}'), video_id, transform_source=js_to_json, fatal=False)
         if config:
             config = config.get('mainRoll')
@@ -99,20 +100,52 @@ class XTubeIE(InfoExtractor):
                 thumbnail = config.get('poster')
                 duration = int_or_none(config.get('duration'))
                 sources = config.get('sources') or config.get('format')
+                media_definition = config.get('mediaDefinition')
 
-        if not isinstance(sources, dict):
+        if not isinstance(sources, dict) and not media_definition:
             sources = self._parse_json(self._search_regex(
                 r'(["\'])?sources\1?\s*:\s*(?P<sources>{.+?}),',
                 webpage, 'sources', group='sources'), video_id,
                 transform_source=js_to_json)
 
         formats = []
+        format_urls = set()
+
+        if isinstance(sources, dict):
             for format_id, format_url in sources.items():
+                format_url = url_or_none(format_url)
+                if not format_url:
+                    continue
+                if format_url in format_urls:
+                    continue
+                format_urls.add(format_url)
                 formats.append({
                     'url': format_url,
                     'format_id': format_id,
                     'height': int_or_none(format_id),
                 })
+
+        if isinstance(media_definition, list):
+            for media in media_definition:
+                video_url = url_or_none(media.get('videoUrl'))
+                if not video_url:
+                    continue
+                if video_url in format_urls:
+                    continue
+                format_urls.add(video_url)
+                format_id = media.get('format')
+                if format_id == 'hls':
+                    formats.extend(self._extract_m3u8_formats(
+                        video_url, video_id, 'mp4', entry_protocol='m3u8_native',
+                        m3u8_id='hls', fatal=False))
+                elif format_id == 'mp4':
+                    height = int_or_none(media.get('quality'))
+                    formats.append({
+                        'url': video_url,
+                        'format_id': '%s-%d' % (format_id, height) if height else format_id,
+                        'height': height,
+                    })
 
         self._remove_duplicate_formats(formats)
         self._sort_formats(formats)
 
@@ -46,6 +46,10 @@ from ..utils import (
 )
 
 
+def parse_qs(url):
+    return compat_urlparse.parse_qs(compat_urlparse.urlparse(url).query)
+
+
 class YoutubeBaseInfoExtractor(InfoExtractor):
     """Provide base functions for Youtube extractors"""
     _LOGIN_URL = 'https://accounts.google.com/ServiceLogin'
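As a rough illustration of what the new module-level helper returns — a mapping from each query parameter to a list of its values, which is why the extractors read results as `qs.get('v', [None])[0]` — here is a minimal standalone sketch using the Python 3 standard library in place of youtube-dl's `compat_urlparse` shim (the example URL is illustrative):

```python
# Minimal sketch of the parse_qs helper added above, stdlib only.
from urllib.parse import parse_qs as _parse_qs, urlparse


def parse_qs(url):
    # Same shape as the helper in the diff: dict mapping each query
    # parameter name to a list of its values.
    return _parse_qs(urlparse(url).query)


qs = parse_qs('https://www.youtube.com/watch?v=BaW_jenozKc&list=PLabc123')
print(qs.get('v', [None])[0])     # 'BaW_jenozKc'
print(qs.get('list', [None])[0])  # 'PLabc123'
```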
@@ -61,11 +65,6 @@ class YoutubeBaseInfoExtractor(InfoExtractor):
 
     _PLAYLIST_ID_RE = r'(?:(?:PL|LL|EC|UU|FL|RD|UL|TL|PU|OLAK5uy_)[0-9A-Za-z-_]{10,}|RDMM)'
 
-    def _ids_to_results(self, ids):
-        return [
-            self.url_result(vid_id, 'Youtube', video_id=vid_id)
-            for vid_id in ids]
-
     def _login(self):
         """
         Attempt to log in to YouTube.
@@ -355,21 +354,28 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
         r'(?:www\.)?invidious\.mastodon\.host',
         r'(?:www\.)?invidious\.zapashcanon\.fr',
         r'(?:www\.)?invidious\.kavin\.rocks',
+        r'(?:www\.)?invidious\.tinfoil-hat\.net',
+        r'(?:www\.)?invidious\.himiko\.cloud',
+        r'(?:www\.)?invidious\.reallyancient\.tech',
         r'(?:www\.)?invidious\.tube',
         r'(?:www\.)?invidiou\.site',
         r'(?:www\.)?invidious\.site',
         r'(?:www\.)?invidious\.xyz',
         r'(?:www\.)?invidious\.nixnet\.xyz',
+        r'(?:www\.)?invidious\.048596\.xyz',
         r'(?:www\.)?invidious\.drycat\.fr',
+        r'(?:www\.)?inv\.skyn3t\.in',
         r'(?:www\.)?tube\.poal\.co',
         r'(?:www\.)?tube\.connect\.cafe',
         r'(?:www\.)?vid\.wxzm\.sx',
         r'(?:www\.)?vid\.mint\.lgbt',
+        r'(?:www\.)?vid\.puffyan\.us',
         r'(?:www\.)?yewtu\.be',
         r'(?:www\.)?yt\.elukerio\.org',
         r'(?:www\.)?yt\.lelux\.fi',
         r'(?:www\.)?invidious\.ggc-project\.de',
         r'(?:www\.)?yt\.maisputain\.ovh',
+        r'(?:www\.)?ytprivate\.com',
         r'(?:www\.)?invidious\.13ad\.de',
         r'(?:www\.)?invidious\.toot\.koeln',
         r'(?:www\.)?invidious\.fdn\.fr',
@@ -414,15 +420,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                          )
                      )?                                               # all until now is optional -> you can pass the naked ID
                      (?P<id>[0-9A-Za-z_-]{11})                        # here is it! the YouTube video ID
-                     (?!.*?\blist=
-                        (?:
-                            %(playlist_id)s|                          # combined list/video URLs are handled by the playlist IE
-                            WL                                        # WL are handled by the watch later IE
-                        )
-                     )
                      (?(1).+)?                                        # if we found the ID, everything can follow
                      $""" % {
-        'playlist_id': YoutubeBaseInfoExtractor._PLAYLIST_ID_RE,
         'invidious': '|'.join(_INVIDIOUS_SITES),
     }
     _PLAYER_INFO_RE = (
@@ -808,6 +807,11 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
             },
             'skip': 'This video does not exist.',
         },
+        {
+            # Video with incomplete 'yt:stretch=16:'
+            'url': 'https://www.youtube.com/watch?v=FRhJzUSJbGI',
+            'only_matching': True,
+        },
         {
             # Video licensed under Creative Commons
             'url': 'https://www.youtube.com/watch?v=M4gD1WSo5mA',
@@ -1208,6 +1212,16 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
         '397': {'acodec': 'none', 'vcodec': 'av01.0.05M.08'},
     }
 
+    @classmethod
+    def suitable(cls, url):
+        # Hack for lazy extractors until more generic solution is implemented
+        # (see #28780)
+        from .youtube import parse_qs
+        qs = parse_qs(url)
+        if qs.get('list', [None])[0]:
+            return False
+        return super(YoutubeIE, cls).suitable(url)
+
     def __init__(self, *args, **kwargs):
         super(YoutubeIE, self).__init__(*args, **kwargs)
         self._code_cache = {}
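To make the intent of the new `suitable()` override concrete: a watch URL that carries a `list=` query parameter is now declined by `YoutubeIE`, which lets the tab/playlist extractor claim it instead of the VALID_URL lookahead that was removed above. A rough sketch of the resulting dispatch, assuming youtube-dl is importable (URLs are illustrative; the second one mirrors the test added further down):

```python
# Sketch of the dispatch effect of the suitable() override above.
from youtube_dl.extractor.youtube import YoutubeIE, YoutubeTabIE

watch_url = 'https://www.youtube.com/watch?v=BaW_jenozKc'
list_url = 'https://www.youtube.com/watch?list=PLW4dVinRY435CBE_JD3t-0SRXKfnZHS1P&v=M9cJMXmQ_ZU'

assert YoutubeIE.suitable(watch_url)      # plain video URL: handled by YoutubeIE
assert not YoutubeIE.suitable(list_url)   # has list=: declined here...
assert YoutubeTabIE.suitable(list_url)    # ...and picked up by YoutubeTabIE
```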
@@ -1706,13 +1720,16 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                     for m in re.finditer(self._meta_regex('og:video:tag'), webpage)]
         for keyword in keywords:
             if keyword.startswith('yt:stretch='):
-                w, h = keyword.split('=')[1].split(':')
-                w, h = int(w), int(h)
+                mobj = re.search(r'(\d+)\s*:\s*(\d+)', keyword)
+                if mobj:
+                    # NB: float is intentional for forcing float division
+                    w, h = (float(v) for v in mobj.groups())
                     if w > 0 and h > 0:
                         ratio = w / h
                         for f in formats:
                             if f.get('vcodec') != 'none':
                                 f['stretched_ratio'] = ratio
+                        break
 
         thumbnails = []
         for container in (video_details, microformat):
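The only_matching test added earlier for a video with an incomplete `yt:stretch=16:` keyword is exactly what this change guards against: the old split-based parsing raises on a missing height, while the regex simply finds no match and the stretched ratio is skipped. A small standalone sketch of the difference (the keyword value is taken from the test comment, the rest is illustrative):

```python
import re

keyword = 'yt:stretch=16:'  # incomplete value seen in the wild (missing height)

# Old approach: blows up, because the height component is an empty string.
try:
    w, h = keyword.split('=')[1].split(':')
    w, h = int(w), int(h)
except ValueError as exc:
    print('old parsing failed:', exc)  # invalid literal for int() with base 10: ''

# New approach: the regex requires digits on both sides, so it just skips it.
mobj = re.search(r'(\d+)\s*:\s*(\d+)', keyword)
print('new parsing matched:', bool(mobj))  # False -> no stretched_ratio is set
```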
@@ -2008,6 +2025,15 @@ class YoutubeTabIE(YoutubeBaseInfoExtractor):
             'title': 'Игорь Клейнер - Playlists',
             'description': 'md5:be97ee0f14ee314f1f002cf187166ee2',
         },
+    }, {
+        # playlists, series
+        'url': 'https://www.youtube.com/c/3blue1brown/playlists?view=50&sort=dd&shelf_id=3',
+        'playlist_mincount': 5,
+        'info_dict': {
+            'id': 'UCYO_jab_esuFRV4b17AJtAw',
+            'title': '3Blue1Brown - Playlists',
+            'description': 'md5:e1384e8a133307dd10edee76e875d62f',
+        },
     }, {
         # playlists, singlepage
         'url': 'https://www.youtube.com/user/ThirstForScience/playlists',
@@ -2275,6 +2301,9 @@ class YoutubeTabIE(YoutubeBaseInfoExtractor):
             'title': '#cctv9',
         },
         'playlist_mincount': 350,
+    }, {
+        'url': 'https://www.youtube.com/watch?list=PLW4dVinRY435CBE_JD3t-0SRXKfnZHS1P&feature=youtu.be&v=M9cJMXmQ_ZU',
+        'only_matching': True,
     }]
 
     @classmethod
@@ -2297,9 +2326,12 @@ class YoutubeTabIE(YoutubeBaseInfoExtractor):
 
     @staticmethod
     def _extract_grid_item_renderer(item):
-        for item_kind in ('Playlist', 'Video', 'Channel'):
-            renderer = item.get('grid%sRenderer' % item_kind)
-            if renderer:
+        assert isinstance(item, dict)
+        for key, renderer in item.items():
+            if not key.startswith('grid') or not key.endswith('Renderer'):
+                continue
+            if not isinstance(renderer, dict):
+                continue
             return renderer
 
     def _grid_entries(self, grid_renderer):
@@ -2310,7 +2342,8 @@ class YoutubeTabIE(YoutubeBaseInfoExtractor):
             if not isinstance(renderer, dict):
                 continue
             title = try_get(
-                renderer, lambda x: x['title']['runs'][0]['text'], compat_str)
+                renderer, (lambda x: x['title']['runs'][0]['text'],
+                           lambda x: x['title']['simpleText']), compat_str)
             # playlist
             playlist_id = renderer.get('playlistId')
             if playlist_id:
@@ -2318,10 +2351,12 @@ class YoutubeTabIE(YoutubeBaseInfoExtractor):
                     'https://www.youtube.com/playlist?list=%s' % playlist_id,
                     ie=YoutubeTabIE.ie_key(), video_id=playlist_id,
                     video_title=title)
+                continue
             # video
             video_id = renderer.get('videoId')
             if video_id:
                 yield self._extract_video(renderer)
+                continue
             # channel
             channel_id = renderer.get('channelId')
             if channel_id:
@@ -2330,6 +2365,17 @@ class YoutubeTabIE(YoutubeBaseInfoExtractor):
                 yield self.url_result(
                     'https://www.youtube.com/channel/%s' % channel_id,
                     ie=YoutubeTabIE.ie_key(), video_title=title)
+                continue
+            # generic endpoint URL support
+            ep_url = urljoin('https://www.youtube.com/', try_get(
+                renderer, lambda x: x['navigationEndpoint']['commandMetadata']['webCommandMetadata']['url'],
+                compat_str))
+            if ep_url:
+                for ie in (YoutubeTabIE, YoutubePlaylistIE, YoutubeIE):
+                    if ie.suitable(ep_url):
+                        yield self.url_result(
+                            ep_url, ie=ie.ie_key(), video_id=ie._match_id(ep_url), video_title=title)
+                        break
 
     def _shelf_entries_from_content(self, shelf_renderer):
         content = shelf_renderer.get('content')
@@ -2764,7 +2810,7 @@ class YoutubeTabIE(YoutubeBaseInfoExtractor):
             url = compat_urlparse.urlunparse(
                 compat_urlparse.urlparse(url)._replace(netloc='www.youtube.com'))
         # Handle both video/playlist URLs
-        qs = compat_urlparse.parse_qs(compat_urlparse.urlparse(url).query)
+        qs = parse_qs(url)
         video_id = qs.get('v', [None])[0]
         playlist_id = qs.get('list', [None])[0]
         if video_id and playlist_id:
@@ -2860,12 +2906,19 @@ class YoutubePlaylistIE(InfoExtractor):
 
     @classmethod
     def suitable(cls, url):
-        return False if YoutubeTabIE.suitable(url) else super(
-            YoutubePlaylistIE, cls).suitable(url)
+        if YoutubeTabIE.suitable(url):
+            return False
+        # Hack for lazy extractors until more generic solution is implemented
+        # (see #28780)
+        from .youtube import parse_qs
+        qs = parse_qs(url)
+        if qs.get('v', [None])[0]:
+            return False
+        return super(YoutubePlaylistIE, cls).suitable(url)
 
     def _real_extract(self, url):
         playlist_id = self._match_id(url)
-        qs = compat_urlparse.parse_qs(compat_urlparse.urlparse(url).query)
+        qs = parse_qs(url)
         if not qs:
             qs = {'list': playlist_id}
         return self.url_result(
@@ -39,6 +39,7 @@ import zlib
 from .compat import (
     compat_HTMLParseError,
     compat_HTMLParser,
+    compat_HTTPError,
     compat_basestring,
     compat_chr,
     compat_cookiejar,
@@ -2879,12 +2880,60 @@ class YoutubeDLCookieProcessor(compat_urllib_request.HTTPCookieProcessor):
 
 
 class YoutubeDLRedirectHandler(compat_urllib_request.HTTPRedirectHandler):
-    if sys.version_info[0] < 3:
+    """YoutubeDL redirect handler
+
+    The code is based on HTTPRedirectHandler implementation from CPython [1].
+
+    This redirect handler solves two issues:
+     - ensures redirect URL is always unicode under python 2
+     - introduces support for experimental HTTP response status code
+       308 Permanent Redirect [2] used by some sites [3]
+
+    1. https://github.com/python/cpython/blob/master/Lib/urllib/request.py
+    2. https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/308
+    3. https://github.com/ytdl-org/youtube-dl/issues/28768
+    """
+
+    http_error_301 = http_error_303 = http_error_307 = http_error_308 = compat_urllib_request.HTTPRedirectHandler.http_error_302
+
     def redirect_request(self, req, fp, code, msg, headers, newurl):
+        """Return a Request or None in response to a redirect.
+
+        This is called by the http_error_30x methods when a
+        redirection response is received. If a redirection should
+        take place, return a new Request to allow http_error_30x to
+        perform the redirect. Otherwise, raise HTTPError if no-one
+        else should try to handle this url. Return None if you can't
+        but another Handler might.
+        """
+        m = req.get_method()
+        if (not (code in (301, 302, 303, 307, 308) and m in ("GET", "HEAD")
+                 or code in (301, 302, 303) and m == "POST")):
+            raise compat_HTTPError(req.full_url, code, msg, headers, fp)
+        # Strictly (according to RFC 2616), 301 or 302 in response to
+        # a POST MUST NOT cause a redirection without confirmation
+        # from the user (of urllib.request, in this case). In practice,
+        # essentially all clients do redirect in this case, so we do
+        # the same.
+
         # On python 2 urlh.geturl() may sometimes return redirect URL
         # as byte string instead of unicode. This workaround allows
         # to force it always return unicode.
-        return compat_urllib_request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, compat_str(newurl))
+        if sys.version_info[0] < 3:
+            newurl = compat_str(newurl)
+
+        # Be conciliant with URIs containing a space. This is mainly
+        # redundant with the more complete encoding done in http_error_302(),
+        # but it is kept for compatibility with other callers.
+        newurl = newurl.replace(' ', '%20')
+
+        CONTENT_HEADERS = ("content-length", "content-type")
+        # NB: don't use dict comprehension for python 2.6 compatibility
+        newheaders = dict((k, v) for k, v in req.headers.items()
+                          if k.lower() not in CONTENT_HEADERS)
+        return compat_urllib_request.Request(
+            newurl, headers=newheaders, origin_req_host=req.origin_req_host,
+            unverifiable=True)
 
 
 def extract_timezone(date_str):
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2021.04.07'
+__version__ = '2021.04.26'