Name       : python312-tokenizers
Version    : 0.20.0
Release    : 1.1
Vendor     : obs://build_opensuse_org/science
Date       : 2024-08-20 09:27:42
Group      : Unspecified
Source RPM : python-tokenizers-0.20.0-1.1.src.rpm
Size       : 6.70 MB
Packager   : (none)
Summary    : Provides an implementation of today's most used tokenizers
Description :
Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.
* Train new vocabularies and tokenize, using today's most used tokenizers.
* Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
* Easy to use, but also extremely versatile.
* Designed for research and production.
* Normalization comes with alignment tracking. It's always possible to get the part of the original sentence that corresponds to a given token.
* Does all the pre-processing: truncate, pad, and add the special tokens your model needs.
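As a minimal illustrative sketch (not part of the package metadata), the following assumes the upstream tokenizers Python API; the corpus file name "corpus.txt" and the special-token names are placeholders:

    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.pre_tokenizers import Whitespace
    from tokenizers.trainers import BpeTrainer

    # Build a BPE tokenizer and train a new vocabulary from a plain-text corpus.
    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]"])
    tokenizer.train(files=["corpus.txt"], trainer=trainer)  # hypothetical corpus path

    # Pre-processing mentioned in the description: padding and truncation.
    tokenizer.enable_padding(pad_token="[PAD]", pad_id=tokenizer.token_to_id("[PAD]"))
    tokenizer.enable_truncation(max_length=128)

    # Encoding keeps alignment offsets back to the original sentence.
    encoding = tokenizer.encode("Tokenizers are fast and versatile.")
    print(encoding.tokens)
    print(encoding.offsets)  # (start, end) character spans in the original text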
RPM found in directory: /packages/linux-pbone/ftp5.gwdg.de/pub/opensuse/repositories/science:/machinelearning/openSUSE_Tumbleweed/i586