Changelog for python311-Scrapy-2.7.1-40.21.noarch.rpm:
* Mon Nov 07 2022 Yogalakshmi Arunachalam
- Update to v2.7.1
  * Relaxed the restriction introduced in 2.6.2 so that the
    Proxy-Authorization header can again be set explicitly in certain
    cases, restoring compatibility with scrapy-zyte-smartproxy 2.1.0
    and older.
  * Bug fixes; full changelog:
    https://docs.scrapy.org/en/latest/news.html#scrapy-2-7-1-2022-11-02

* Thu Oct 27 2022 Yogalakshmi Arunachalam
- Update to v2.7.0
  Highlights:
  * Added Python 3.11 support, dropped Python 3.6 support
  * Improved support for asynchronous callbacks
  * Asyncio support is enabled by default on new projects
  * Output names of item fields can now be arbitrary strings
  * Centralized request fingerprinting configuration is now possible
  Modified requirements:
  * Python 3.7 or greater is now required; support for Python 3.6 has
    been dropped. Support for the upcoming Python 3.11 has been added.
    The minimum required version of some dependencies has changed as
    well:
    - lxml: 3.5.0 → 4.3.0
    - Pillow (images pipeline): 4.0.0 → 7.1.0
    - zope.interface: 5.0.0 → 5.1.0
    (issues 5512, 5514, 5524, 5563, 5664, 5670, 5678)
  Deprecations:
  * ImagesPipeline.thumb_path must now accept an item parameter
    (issues 5504, 5508)
  * The scrapy.downloadermiddlewares.decompression module is now
    deprecated (issues 5546, 5547)
  Complete changelog:
  https://github.com/scrapy/scrapy/blob/2.7/docs/news.rst

* Fri Sep 09 2022 Yogalakshmi Arunachalam
- Update to v2.6.2
  Security bug fix:
  * When HttpProxyMiddleware processes a request with proxy metadata
    that includes proxy credentials, it sets the Proxy-Authorization
    header, but only if that header is not already set.
  * There are third-party proxy-rotation downloader middlewares that
    set different proxy metadata every time they process a request.
  * Because of request retries and redirects, the same request can be
    processed by downloader middlewares more than once, including both
    HttpProxyMiddleware and any third-party proxy-rotation downloader
    middleware.
  * Such third-party middlewares could change the proxy metadata of a
    request to a new value but fail to remove the Proxy-Authorization
    header derived from the previous value, causing the credentials of
    one proxy to be sent to a different proxy.
  * To prevent the unintended leaking of proxy credentials,
    HttpProxyMiddleware now behaves as follows when processing a
    request (see the sketch after this entry):
    + If the request defines proxy metadata that includes credentials,
      the Proxy-Authorization header is always updated to feature
      those credentials.
    + If the request defines proxy metadata without credentials, the
      Proxy-Authorization header is removed unless it was originally
      defined for the same proxy URL.
    + To remove proxy credentials while keeping the same proxy URL,
      remove the Proxy-Authorization header.
    + If the request has no proxy metadata, or that metadata is a
      falsy value (e.g. None), the Proxy-Authorization header is
      removed.
    + It is no longer possible to set a proxy URL through the proxy
      metadata but set the credentials through the Proxy-Authorization
      header. Set proxy credentials through the proxy metadata
      instead.
  Also fixes the following regressions introduced in 2.6.0:
  + CrawlerProcess again supports crawling multiple spiders
    (issue 5435, issue 5436)
  + Installing a Twisted reactor before Scrapy does (e.g. by importing
    twisted.internet.reactor somewhere at the module level) no longer
    prevents Scrapy from starting, as long as a different reactor is
    not specified in TWISTED_REACTOR (issue 5525, issue 5528)
  + Fixed an exception that was being logged after the spider finished
    under certain conditions (issue 5437, issue 5440)
  + The --output/-o command-line parameter again supports a value
    starting with a hyphen (issue 5444, issue 5445)
  + The scrapy parse -h command no longer throws an error
    (issue 5481, issue 5482)
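  A minimal sketch of the pattern recommended by the 2.6.2 entry
  above: put proxy credentials in the proxy metadata itself. The
  spider name, target URL, proxy host, and credentials below are
  placeholders.

    import scrapy


    class ProxyCredentialsSpider(scrapy.Spider):
        # Placeholder spider name and URLs, for illustration only.
        name = "proxy_credentials_example"

        def start_requests(self):
            # Embed the credentials in the proxy URL set through the
            # proxy metadata; HttpProxyMiddleware derives the
            # Proxy-Authorization header from this value, so the
            # header is never set by hand.
            yield scrapy.Request(
                "https://example.com",
                meta={"proxy": "https://user:password@proxy.example.com:8031"},
            )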
* Fri Mar 04 2022 Ben Greiner
- Update runtime requirements and test deselections

* Wed Mar 02 2022 Matej Cepl
- Update to v2.6.1
  * Security fixes for cookie handling (CVE-2022-0577 aka bsc#1196638,
    GHSA-mfjm-vh54-3f96)
  * Python 3.10 support
  * asyncio support is no longer considered experimental and works
    out of the box on Windows regardless of your Python version
  * Feed exports now support pathlib.Path output paths and per-feed
    item filtering and post-processing
- Remove unnecessary patches:
  * remove-h2-version-restriction.patch
  * add-peak-method-to-queues.patch

* Sun Jan 16 2022 Ben Greiner
- Skip a failing test in python310: exception format not recognized

* Thu Oct 07 2021 Ben Greiner
- Update to 2.5.1, security bug fix (boo#1191446, CVE-2021-41125)
  * If you use HttpAuthMiddleware (i.e. the http_user and http_pass
    spider attributes) for HTTP authentication, any request exposes
    your credentials to the request target.
  * To prevent unintended exposure of authentication credentials to
    unintended domains, you must now also set a new spider attribute,
    http_auth_domain, and point it to the specific domain to which the
    authentication credentials must be sent (see the sketch after this
    entry).
  * If the http_auth_domain spider attribute is not set, the domain of
    the first request will be considered the HTTP authentication
    target, and authentication credentials will only be sent in
    requests targeting that domain.
  * If you need to send the same HTTP authentication credentials to
    multiple domains, you can use w3lib.http.basic_auth_header instead
    to set the value of the Authorization header of your requests.
  * If you really want your spider to send the same HTTP
    authentication credentials to any domain, set the http_auth_domain
    spider attribute to None.
  * Finally, if you are a user of scrapy-splash, know that this
    version of Scrapy breaks compatibility with scrapy-splash 0.7.2
    and earlier. You will need to upgrade scrapy-splash to a greater
    version for it to continue to work.
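  A minimal sketch of the 2.5.1 mitigation described above; the spider
  name, credentials, domains, and URLs are placeholders.

    import scrapy
    from w3lib.http import basic_auth_header


    class AuthScopedSpider(scrapy.Spider):
        # Placeholder name, credentials, and domain, for illustration
        # only.
        name = "auth_scoped_example"
        http_user = "user"
        http_pass = "secret"
        # Scope the credentials to one domain; if unset, 2.5.1 falls
        # back to the domain of the first request.
        http_auth_domain = "intranet.example.com"
        start_urls = ["https://intranet.example.com/"]

        def parse(self, response):
            # To send the same credentials to a second domain, set the
            # Authorization header per request via w3lib instead:
            yield scrapy.Request(
                "https://other.example.com/",
                headers={"Authorization": basic_auth_header("user", "secret")},
            )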
* Wed Sep 01 2021 Fusion Future
- Remove h2 < 4.0 dependency version restriction. (boo#1190035)
  * remove-h2-version-restriction.patch
- Add peak method to queues to fix build with queuelib 1.6.2.
  * add-peak-method-to-queues.patch
- Drop support for Python 3.6 as python-uvloop does not support it.
- Require testfixtures >= 6.0.0 (tests need LogCapture.check_present).
  (https://github.com/Simplistix/testfixtures/commit/2953bb4caadc1a462e5332ffb01591ba1fc3284f)

* Wed Apr 28 2021 Ben Greiner
- Update to 2.5.0:
  * Official Python 3.9 support
  * Experimental HTTP/2 support
  * New get_retry_request() function to retry requests from spider
    callbacks (see the sketch after this entry)
  * New headers_received signal that allows stopping downloads early
  * New Response.protocol attribute
- Release 2.4.1:
  * Fixed feed exports overwrite support
  * Fixed the asyncio event loop handling, which could make code hang
  * Fixed the IPv6-capable DNS resolver CachingHostnameResolver for
    download handlers that call reactor.resolve
  * Fixed the output of the genspider command showing placeholders
    instead of the import path of the generated spider module
    (issue 4874)
- Release 2.4.0:
  * Python 3.5 support has been dropped.
  * The file_path method of media pipelines can now access the source
    item. This allows you to set a download file path based on item
    data.
  * The new item_export_kwargs key of the FEEDS setting allows
    defining keyword parameters to pass to item exporter classes.
  * You can now choose whether feed exports overwrite or append to the
    output file. For example, when using the crawl or runspider
    commands, you can use the -O option instead of -o to overwrite the
    output file.
  * Zstd-compressed responses are now supported if zstandard is
    installed.
  * In settings, where the import path of a class is required, it is
    now possible to pass a class object instead.
- Release 2.3.0:
  * Feed exports now support Google Cloud Storage as a storage backend
  * The new FEED_EXPORT_BATCH_ITEM_COUNT setting allows delivering
    output items in batches of up to the specified number of items.
    It also serves as a workaround for delayed file delivery, which
    causes Scrapy to only start item delivery after the crawl has
    finished when using certain storage backends (S3, FTP, and now
    GCS).
  * The base implementation of item loaders has been moved into a
    separate library, itemloaders, allowing usage from outside Scrapy
    and a separate release schedule
- Release 2.2.1:
  * The startproject command no longer makes unintended changes to the
    permissions of files in the destination folder, such as removing
    execution permissions.
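  The 2.5.0 notes above mention get_retry_request(); a minimal sketch
  of calling it from a spider callback, with a placeholder spider
  name, URL, and retry reason.

    import scrapy
    from scrapy.downloadermiddlewares.retry import get_retry_request


    class RetryFromCallbackSpider(scrapy.Spider):
        # Placeholder spider name and URL, for illustration only.
        name = "retry_from_callback_example"
        start_urls = ["https://example.com"]

        def parse(self, response):
            if not response.text:
                # Build a retry copy of the request; returns None once
                # the maximum number of retries has been exhausted.
                return get_retry_request(
                    response.request, spider=self, reason="empty response"
                )
            # ... continue with normal parsing here ...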
* Fri Jul 03 2020 Jacob W
- Update to 2.2.0:
  * Python 3.5.2+ is required now
  * dataclass objects and attrs objects are now valid item types
  * New TextResponse.json method
  * New bytes_received signal that allows canceling response download
  * CookiesMiddleware fixes
- Update to 2.1.0:
  * New FEEDS setting to export to multiple feeds
  * New Response.ip_address attribute
- Remove zope-exception-test_crawler.patch
- Add new required dependency python-itemadapter
- Omit test that fails in OBS due to https / tls issues

* Tue May 19 2020 Petr Gajdos
- %python3_only -> %python_alternative

* Thu Apr 02 2020 Steve Kowalik
- Update to 2.0.1:
  * Python 2 support has been removed
  * Partial coroutine syntax support and experimental asyncio support
  * New Response.follow_all method (see the sketch below)
  * FTP support for media pipelines
  * New Response.certificate attribute
  * IPv6 support through DNS_RESOLVER
  * Response.follow_all now supports an empty URL iterable as input
  * Removed top-level reactor imports to prevent errors about the
    wrong Twisted reactor being installed when setting a different
    Twisted reactor using TWISTED_REACTOR
- Add zope-exception-test_crawler.patch, rewriting one testcase to
  pass with our version of Zope.
- Update BuildRequires based on test requirements.

* Thu Jan 16 2020 Marketa Calabkova
- Update to 1.8.0
  * Dropped Python 3.4 support and updated minimum requirements;
    made Python 3.8 support official
  * Lots of new fixes and features
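  The 2.0.1 entry above mentions Response.follow_all; a minimal sketch
  of following a set of links with it, with a placeholder spider name,
  URL, and CSS selector.

    import scrapy


    class FollowAllSpider(scrapy.Spider):
        # Placeholder spider name, URL, and selector, for illustration
        # only.
        name = "follow_all_example"
        start_urls = ["https://example.com"]

        def parse(self, response):
            # follow_all builds one request per matched link; an empty
            # selection (or empty URL iterable) simply yields nothing.
            yield from response.follow_all(css="a.next", callback=self.parse)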