Bilingual example segmentation based on markers hypothesis

Bibliographic details
Main author: Simões, Alberto (author)
Other authors: Almeida, J. J. (author)
Format: conferencePaper
Language: eng
Published: 2009
Subjects:
Full text: http://hdl.handle.net/1822/16472
Country: Portugal
OAI: oai:repositorium.sdum.uminho.pt:1822/16472
Description
Abstract: The Marker Hypothesis was first proposed by Thomas Green in 1979. It is a psycholinguistic hypothesis stating that every language has a set of words that mark the boundaries of phrases in a sentence. While it remains an unproven hypothesis, tests have shown that its results are comparable to those of basic shallow parsers, with higher efficiency. The chunking algorithm based on the Marker Hypothesis is simple, fast and almost language independent. It depends only on a list of closed-class words, which is already available for most languages. This makes it suitable for bilingual chunking, since no separate shallow parser is required for each language. This paper discusses the use of the Marker Hypothesis combined with Probabilistic Translation Dictionaries to extract example-based machine translation resources from parallel corpora.
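
To make the chunking idea concrete, here is a minimal sketch of marker-based segmentation in Python. It is not the authors' implementation: the marker word list and the rule that each chunk must contain at least one non-marker word are illustrative assumptions.

# Illustrative sketch of Marker-Hypothesis chunking. The marker set below is
# a tiny, hypothetical sample of English closed-class words, not taken from
# the paper.
MARKERS = {
    "the", "a", "an",                               # determiners
    "in", "on", "at", "of", "to", "with", "for",    # prepositions
    "and", "or", "but",                             # conjunctions
    "he", "she", "it", "they",                      # pronouns
}

def marker_chunk(tokens, markers=MARKERS):
    """Start a new chunk at each marker word, but only once the current
    chunk already holds at least one non-marker word (a common variant;
    the paper's exact rule may differ)."""
    chunks, current, has_content = [], [], False
    for token in tokens:
        if token.lower() in markers and has_content:
            chunks.append(current)
            current, has_content = [token], False
        else:
            current.append(token)
            has_content = has_content or token.lower() not in markers
    if current:
        chunks.append(current)
    return chunks

if __name__ == "__main__":
    sentence = "the cat sat on the mat with a friend".split()
    print(marker_chunk(sentence))
    # [['the', 'cat', 'sat'], ['on', 'the', 'mat'], ['with', 'a', 'friend']]

The bilingual step the abstract refers to would then pair such chunks across the two sides of a parallel corpus, using the probabilistic translation dictionaries to score candidate correspondences.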