r-tokenizers 0.2.1 Fast, consistent tokenization of natural language text
This package converts natural language text into tokens. It includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, tweets, Penn Treebank tokens, and regular-expression matches, along with functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents of equal word count. The tokenizers share a consistent interface, and the package is built on the stringi and Rcpp packages for fast yet correct tokenization in UTF-8 encoding.
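A minimal sketch of that shared interface, assuming the package is installed from CRAN (the sample text is invented for illustration):

```r
library(tokenizers)

text <- "Fast, consistent tokenization. It handles UTF-8 text correctly."

## Every tokenizer takes a character vector and returns a list with
## one element per input document.
tokenize_words(text)              # word tokens
tokenize_ngrams(text, n = 2)      # shingled bigrams
tokenize_sentences(text)          # sentence tokens
count_words(text)                 # word count per document
chunk_text(text, chunk_size = 5)  # split into chunks of equal word count
```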
- Website: https://lincolnmullen.com/software/tokenizers/
- License: Expat
- Package source: cran.scm
- Patches: None
- Builds: x86_64-linux, i686-linux, armhf-linux, aarch64-linux, i586-gnu