Changelog for openvino-sample-2024.1.0-2024.1.0-lp154.125.6.noarch.rpm :

* Tue Jan 30 2024 Alessandro de Oliveira Faria - update to 2023.3.0. Summary of major features and improvements:
* More Generative AI coverage and framework integrations to minimize code changes. + Introducing the OpenVINO Gen AI repository on GitHub, which demonstrates native C and C++ pipeline samples for Large Language Models (LLMs). String tensors are now supported as inputs, and tokenizers are supported natively, to reduce overhead and ease production. + New and noteworthy models validated: Mistral, Zephyr, Qwen, ChatGLM3, and Baichuan. + New Jupyter Notebooks for Latent Consistency Models (LCM) and Distil-Whisper. The LLM Chatbot notebook has been updated to include LangChain, Neural Chat, TinyLlama, ChatGLM3, Qwen, Notus, and Youri models. + torch.compile is now fully integrated with OpenVINO and includes a hardware 'options' parameter, allowing seamless inference hardware selection by leveraging the plugin architecture in OpenVINO.
* Broader Large Language Model (LLM) support and more model compression techniques. + As part of the Neural Network Compression Framework (NNCF), the INT4 weight compression model format is now fully supported on Intel® Xeon® CPUs in addition to Intel® Core™ and iGPU, adding more performance, lower memory usage, and accuracy opportunities when using LLMs. + Improved performance of transformer-based LLMs on CPU and GPU using a stateful model technique to increase memory efficiency, where internal states are shared among multiple iterations of inference. + Easier optimization and conversion of Hugging Face models – compress LLM models to INT8 and INT4 with the Hugging Face Optimum command line interface and export models to the OpenVINO format. Note that this is part of Optimum-Intel, which needs to be installed separately. + Tokenizer and TorchVision transform support is now available in the OpenVINO runtime (via a new API), requiring less preprocessing code and enhancing performance by automatically handling this model setup. More details on tokenizer support are in the Ecosystem section.
* More portability and performance to run AI at the edge, in the cloud, or locally. + Full support for 5th Gen Intel® Xeon® Scalable processors (codename Emerald Rapids). + Further optimized performance on Intel® Core™ Ultra (codename Meteor Lake) CPUs with the latency hint, by leveraging both P-cores and E-cores. + Improved performance on ARM platforms using the throughput hint, which increases efficiency in the utilization of CPU cores and memory bandwidth. + Preview JavaScript API to enable Node.js development; the JavaScript binding is accessible via source code. See details below. + Improved model serving of LLMs through OpenVINO Model Server. This not only enables LLM serving over KServe v2 gRPC and REST APIs for more flexibility but also improves throughput by running processing such as tokenization on the server side. More details in the Ecosystem section.
Support Change and Deprecation Notices:
* The OpenVINO™ Development Tools package (pip install openvino-dev) is deprecated and will be removed from installation options and distribution channels beginning with the 2025.0 release. For more details, refer to the OpenVINO Legacy Features and Components page. + Ubuntu 18.04 support is discontinued in the 2023.3 LTS release. The recommended version of Ubuntu is 22.04. + Starting with 2023.3, OpenVINO no longer supports Python 3.7, as the Python community has discontinued its support. Update to a newer version (currently 3.8-3.11) to avoid interruptions. + The ONNX Frontend legacy API (known as ONNX_IMPORTER_API) will no longer be available in the 2024.0 release. The 'PerformanceMode.UNDEFINED' property of the OpenVINO Python API will be discontinued in the 2024.0 release. + Tools: - Deployment Manager is deprecated and will be supported for two years according to the LTS policy. Visit the selector tool to see package distribution options or the deployment guide documentation. - Accuracy Checker is deprecated and will be discontinued with 2024.0. - Post-Training Optimization Tool (POT) has been deprecated, and 2023.3 LTS is the last release that supports the tool. Developers are encouraged to use the Neural Network Compression Framework (NNCF) for this feature. - Model Optimizer is deprecated and will be fully supported until the 2025.0 release. We encourage developers to perform model conversion through OpenVINO Model Converter (API call: OVC). Follow the model conversion transition guide for more details. - Support for a git patch for NNCF integration with huggingface/transformers is deprecated. The recommended approach is to use huggingface/optimum-intel for applying NNCF optimization on top of models from Hugging Face. - Support for Apache MXNet, Caffe, and Kaldi model formats is deprecated and will be discontinued with the 2024.0 release. + Runtime: - Intel® Gaussian & Neural Accelerator (Intel® GNA) will be deprecated in a future release. We encourage developers to use the Neural Processing Unit (NPU) for low-powered systems like Intel® Core™ Ultra or 14th generation and beyond. - OpenVINO C++/C/Python 1.0 APIs are deprecated and will be discontinued in the 2024.0 release. Please use API 2.0 in your applications going forward to avoid disruption. - The OpenVINO property Affinity API will be deprecated in 2024.0 and discontinued in 2025.0. It will be replaced with CPU binding configurations (ov::hint::enable_cpu_pinning). You can find the OpenVINO™ toolkit 2023.3 release here:
* Download archives with OpenVINO™
* Install it via Conda: conda install -c conda-forge openvino=2023.3.0
* OpenVINO™ for Python: pip install openvino==2023.3.0
* Fri Jan 26 2024 Alessandro de Oliveira Faria - update to 2023.2.0 Summary of major features and improvements:
* More Generative AI coverage and framework integrations to minimize code changes. + Expanded model support for direct PyTorch model conversion – automatically convert additional models directly from PyTorch or execute via torch.compile with OpenVINO as the backend. + New and noteworthy models supported – we have enabled models used for chatbots, instruction following, code generation, and many more, including prominent models like LLaVA, chatGLM, Bark (text to audio), and LCM (Latent Consistency Models, an optimized version of Stable Diffusion). + Easier optimization and conversion of Hugging Face models – compress LLM models to Int8 with the Hugging Face Optimum command line interface and export models to the OpenVINO IR format. + OpenVINO is now available on Conan – a package manager which enables more seamless package management for large-scale projects for C and C++ developers.
* Broader Large Language Model (LLM) support and more model compression techniques. + Accelerated inference for LLMs on Intel® Core™ CPUs and iGPUs with the use of Int8 model weight compression. + Expanded model support for dynamic shapes for improved performance on GPU. + Preview support for the Int4 model format is now included. Int4-optimized model weights are now available to try on Intel® Core™ CPUs and iGPUs, to accelerate models like Llama 2 and chatGLM2. + The following Int4 model compression formats are supported for inference in runtime: - Generative Pre-training Transformer Quantization (GPTQ); GPTQ-compressed models are available through the Hugging Face repositories. - Native Int4 compression through the Neural Network Compression Framework (NNCF).
* More portability and performance to run AI at the edge, in the cloud, or locally. + In 2023.1 we announced full support for the ARM architecture; now we have improved performance by enabling the FP16 model format for LLMs and integrating additional acceleration libraries to reduce latency.
Support Change and Deprecation Notices:
* The OpenVINO™ Development Tools package (pip install openvino-dev) is deprecated and will be removed from installation options and distribution channels with 2025.0. To learn more, refer to the OpenVINO Legacy Features and Components page. To ensure optimal performance, install the OpenVINO package (pip install openvino), which includes essential components such as OpenVINO Runtime, OpenVINO Converter, and the Benchmark Tool. + Tools: - Deployment Manager is deprecated and will be removed in the 2024.0 release. - Accuracy Checker is deprecated and will be discontinued with 2024.0. - Post-Training Optimization Tool (POT) is deprecated and will be discontinued with 2024.0. - Model Optimizer is deprecated and will be fully supported up until the 2025.0 release. Model conversion to the OpenVINO IR format should be performed through OpenVINO Model Converter, which is part of the PyPI package. Follow the Model Optimizer to OpenVINO Model Converter transition guide for a smoother transition. Known limitations are TensorFlow models with TF1 control flow and object detection models. These limitations relate to gaps in TensorFlow direct conversion capabilities, which will be addressed in upcoming releases. - PyTorch 1.13 support is deprecated in the Neural Network Compression Framework (NNCF).
* Runtime: + Intel® Gaussian & Neural Accelerator (Intel® GNA) will be deprecated in a future release. We encourage developers to use the Neural Processing Unit (NPU) for low-powered systems like Intel® Core™ Ultra or 14th generation and beyond. + OpenVINO C++/C/Python 1.0 APIs will be discontinued with 2024.0. + PyTorch 1.13 support is deprecated in the Neural Network Compression Framework (NNCF). You can find the OpenVINO™ toolkit 2023.2 release here:
* Download archives with OpenVINO™
* Install it via Conda: conda install -c conda-forge openvino=2023.2.0
* OpenVINO™ for Python: pip install openvino==2023.2.0 Acknowledgements: Thanks for contributions from the OpenVINO developer community: @siddhant-0707, @NsdHSO, @mahimairaja, @SANTHOSH-MAMIDISETTI, @rsato10, @PRATHAM-SPS. Release documentation is available here: https://docs.openvino.ai/2023.2 Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-2.html
 