A theoretical model for n-gram distribution in big data corpora

Bibliographic Details
Main Author: Silva, Joaquim F. (author)
Other Authors: Gonçalves, Carlos Jorge de Sousa (author), Cunha, José C. (author)
Format: conferenceObject
Language: English
Published: 2017
Online Access: http://hdl.handle.net/10400.21/6829
Country: Portugal
OAI: oai:repositorio.ipl.pt:10400.21/6829
Description
Summary: There is a wide diversity of applications relying on the identification of sequences of n consecutive words (n-grams) occurring in corpora. Many studies follow an empirical approach to determine the statistical distribution of n-grams, but they are usually constrained by corpus size, which for practical reasons falls far short of Big Data scale. However, Big Data sizes expose behaviours that remain hidden at smaller scales and that affect applications such as the extraction of relevant information from Web-scale sources. In this paper we propose a theoretical approach for estimating the number of distinct n-grams in a corpus. It is based on the Zipf-Mandelbrot law and the Poisson distribution, and it allows an efficient estimation of the number of distinct n-grams, from 1-grams through 6-grams, for any corpus size. The proposed model was validated for English and French corpora. We illustrate a practical application of this approach to the extraction of relevant expressions from natural language corpora, and predict its asymptotic behaviour for increasingly large corpus sizes.
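
The record does not include the paper's equations, so the following is only a minimal sketch of the kind of estimate the summary describes, assuming a Zipf-Mandelbrot rank-probability law p_r = C / (r + b)^a together with a Poisson occurrence model: an n-gram of rank r occurs Poisson(N * p_r) times in a corpus of N tokens, so it is observed at least once with probability 1 - exp(-N * p_r), giving an expected count of distinct n-grams D(N) = sum over r of (1 - exp(-N * p_r)). The vocabulary size and the parameters a and b below are illustrative placeholders, not values fitted in the paper.

```python
import math

def zipf_mandelbrot_probs(vocab_size, a=1.0, b=2.7):
    """Zipf-Mandelbrot probabilities p_r = C / (r + b)^a for ranks 1..vocab_size.
    The exponent a and shift b are illustrative values, not the paper's fits."""
    weights = [(r + b) ** (-a) for r in range(1, vocab_size + 1)]
    total = sum(weights)  # normalizing constant 1/C
    return [w / total for w in weights]

def expected_distinct(corpus_size, probs):
    """Expected number of distinct n-grams in a corpus of `corpus_size` tokens.
    Under the Poisson model, the rank-r n-gram occurs Poisson(corpus_size * p_r)
    times, so it appears at least once with probability 1 - exp(-corpus_size * p_r)."""
    return sum(1.0 - math.exp(-corpus_size * p) for p in probs)

if __name__ == "__main__":
    # Hypothetical 1-gram vocabulary of one million ranked word types.
    probs = zipf_mandelbrot_probs(vocab_size=1_000_000)
    for n_tokens in (10**5, 10**6, 10**7, 10**8):
        d = expected_distinct(n_tokens, probs)
        print(f"{n_tokens:>12,} tokens -> ~{d:,.0f} distinct 1-grams")
```

Because D(N) depends on N only through the closed-form Poisson terms, the estimate is cheap for any corpus size, which matches the summary's claim of efficient estimation at Big Data scales; the same construction applies to 2-grams through 6-grams given a rank distribution for each n.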