Changelog for python311-torch-converters-2.3.1-1.1.noarch.rpm:

* Thu Jul 11 2024 Christian Goll - update to 2.3.1 with the following summarized highlights:
* from 2.0.x:
- torch.compile is the main API for PyTorch 2.0; it wraps your model and returns a compiled model. It is a fully additive (and optional) feature, hence 2.0 is 100% backward compatible by definition.
- Accelerated Transformers introduce high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SDPA). The API is integrated with torch.compile(), and model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator (see the torch.compile sketch after this entry).
* from 2.1.x:
- automatic dynamic shape support in torch.compile, torch.distributed.checkpoint for saving/loading distributed training jobs on multiple ranks in parallel, and torch.compile support for the NumPy API (see the dynamic-shapes sketch after this entry).
- In addition, this release offers numerous performance improvements (e.g. CPU inductor improvements, AVX512 support, scaled-dot-product-attention support) as well as a prototype release of torch.export, a sound full-graph capture mechanism, and torch.export-based quantization.
* from 2.2.x:
- 2x performance improvements to scaled_dot_product_attention via FlashAttention-v2 integration (see the SDPA backend sketch after this entry), as well as AOTInductor, a new ahead-of-time compilation and deployment tool built for non-Python server-side deployments.
* from 2.3.x:
- support for user-defined Triton kernels in torch.compile, allowing users to migrate their own Triton kernels from eager mode without experiencing performance regressions or graph breaks (see the Triton sketch after this entry). In addition, Tensor Parallelism improves the experience of training large language models using native PyTorch functions and has been validated on training runs for 100B-parameter models.
- added separate openmpi4 build
- added separate Vulkan build, although this functionality isn't exposed to the Python ABI
- for the OBS build, all vendored sources follow the pattern NAME-7digitcommit.tar.gz rather than NAME-COMMIT.tar.gz
- added the following patches:
* skip-third-party-check.patch
* fix-setup.patch
- removed patches:
* pytorch-rm-some-gitmodules.patch
* fix-call-of-onnxInitGraph.patch
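A rough sketch of the 2.0 highlights above: wrapping a toy model with torch.compile and calling the scaled_dot_product_attention() operator directly. The model and tensor shapes are arbitrary placeholders, not anything shipped by this package.

    import torch
    import torch.nn.functional as F

    # any nn.Module works; torch.compile is purely additive and optional
    model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
    compiled_model = torch.compile(model)
    out = compiled_model(torch.randn(8, 64))

    # calling the SDPA operator directly (shapes: batch, heads, seq, head_dim)
    q = torch.randn(2, 4, 16, 32)
    k = torch.randn(2, 4, 16, 32)
    v = torch.randn(2, 4, 16, 32)
    attn = F.scaled_dot_product_attention(q, k, v)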
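For the 2.1 items, a minimal dynamic-shapes sketch plus torch.compile tracing NumPy code; the function names here are made up for illustration.

    import numpy as np
    import torch

    # dynamic=True asks the compiler not to specialize on exact input sizes
    @torch.compile(dynamic=True)
    def rms(x):
        return torch.sqrt((x * x).mean())

    rms(torch.randn(128))
    rms(torch.randn(512))  # reuses the compiled graph despite the new length

    # torch.compile can also trace NumPy programs into the same backend
    @torch.compile
    def np_sumsq(x):
        return np.sum(x * x, axis=0)

    np_sumsq(np.arange(12.0).reshape(3, 4))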
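For the 2.2 SDPA speedup, the backend can also be pinned explicitly; an SDPA backend sketch using the torch.nn.attention.sdpa_kernel context manager (available as of 2.3), assuming a CUDA device and half precision, which FlashAttention requires.

    import torch
    import torch.nn.functional as F
    from torch.nn.attention import SDPBackend, sdpa_kernel

    # FlashAttention only runs on CUDA with fp16/bf16 inputs
    q, k, v = (torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
               for _ in range(3))

    # outside the context manager, PyTorch picks a backend automatically
    with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
        out = F.scaled_dot_product_attention(q, k, v)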
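And for the 2.3 feature, a Triton sketch of a user-defined kernel launched inside a compiled function; it assumes the triton package and a CUDA GPU are available, and the kernel itself is a toy example.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
        offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n
        x = tl.load(x_ptr + offs, mask=mask)
        y = tl.load(y_ptr + offs, mask=mask)
        tl.store(out_ptr + offs, x + y, mask=mask)

    # torch.compile traces through the kernel launch without a graph break
    @torch.compile
    def add(x, y):
        out = torch.empty_like(x)
        n = out.numel()
        add_kernel[(triton.cdiv(n, 1024),)](x, y, out, n, BLOCK=1024)
        return out

    add(torch.ones(4096, device="cuda"), torch.ones(4096, device="cuda"))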
* Thu Jul 22 2021 Guillaume GARDET - Fix build on x86_64 by using GCC10 instead of GCC11, see: https://github.com/google/XNNPACK/issues/1550
* Thu Jul 22 2021 Guillaume GARDET - Update to 1.9.0
- Release notes: https://github.com/pytorch/pytorch/releases/tag/v1.9.0
- Drop upstreamed patch:
* fix-mov-operand-for-gcc.patch
- Drop unneeded patches:
* removed-peachpy-depedency.patch
- Refresh patches:
* skip-third-party-check.patch
* fix-call-of-onnxInitGraph.patch
- Add new patch:
* pytorch-rm-some-gitmodules.patch
* Thu Jul 22 2021 Guillaume GARDET - Add _service file to ease future update of deps
* Thu Jul 22 2021 Guillaume GARDET - Update sleef to fix build on aarch64
* Fri Apr 23 2021 Matej Cepl - Don't build python36-* package (missing pandas)
* Thu Jan 21 2021 Benjamin Greiner - Fix python-rpm-macros usage
* Wed Oct 07 2020 Guillaume GARDET - Use GCC9 to build on aarch64 Tumbleweed to work around an SVE problem with GCC10 and sleef, see: https://github.com/pytorch/pytorch/issues/45971
* Thu Aug 20 2020 Martin Liška - Use memoryperjob constraint instead of %limit_build macro.
* Tue Jun 23 2020 Christian Goll - updated to new stable release 1.5.1, which has the following changes: This release includes several major new API additions and improvements. These include new APIs for autograd allowing for easy computation of Hessians and Jacobians (see the sketch after this entry), a significant update to the C++ frontend, 'channels last' memory format for more performant computer vision models, a stable release of the distributed RPC framework used for model-parallel training, and a new API that allows for the creation of Custom C++ Classes, inspired by PyBind. Additionally, torch_xla 1.5 is now available and tested with the PyTorch 1.5 release, providing a mature Cloud TPU experience.
* see release.html for detailed information
- added patches:
* fix-call-of-onnxInitGraph.patch for API mismatch in onnx
* fix-mov-operand-for-gcc.patch for aarch64 operands
- removed sources:
* cpuinfo-89fe1695edf9ee14c22f815f24bac45577a4f135.tar.gz
* gloo-7c541247a6fa49e5938e304ab93b6da661823d0f.tar.gz
* onnx-fea8568cac61a482ed208748fdc0e1a8e47f62f5.tar.gz
* psimd-90a938f30ba414ada2f4b00674ee9631d7d85e19.tar.gz
* pthreadpool-13da0b4c21d17f94150713366420baaf1b5a46f4.tar.gz
- added sources:
* cpuinfo-0e6bde92b343c5fbcfe34ecd41abf9515d54b4a7.tar.gz
* gloo-113bde13035594cafdca247be953610b53026553.tar.gz
* onnx-9fdae4c68960a2d44cd1cc871c74a6a9d469fa1f.tar.gz
* psimd-10b4ffc6ea9e2e11668f86969586f88bc82aaefa.tar.gz
* pthreadpool-d465747660ecf9ebbaddf8c3db37e4a13d0c9103.tar.gz
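A small sketch of the 1.5 autograd additions mentioned above (torch.autograd.functional) plus the 'channels last' memory format; the function f and all shapes are arbitrary examples.

    import torch
    from torch.autograd.functional import jacobian, hessian

    def f(x):
        return (x ** 3).sum()

    x = torch.randn(4)
    j = jacobian(f, x)  # elementwise 3 * x**2
    h = hessian(f, x)   # diagonal matrix holding 6 * x

    # 'channels last' memory format: same shape, channel-minor stride order
    img = torch.randn(1, 3, 224, 224).to(memory_format=torch.channels_last)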
 