Bilingual example segmentation based on markers hypothesis
| Field | Value |
|---|---|
| Main Author | |
| Other Authors | |
| Format | conferencePaper |
| Language | eng |
| Published | 2009 |
| Subjects | |
| Online Access | http://hdl.handle.net/1822/16472 |
| Country | Portugal |
| OAI | oai:repositorium.sdum.uminho.pt:1822/16472 |
| Summary | The Marker Hypothesis was first proposed by Thomas Green in 1979. It is a psycholinguistic hypothesis stating that every language has a set of words that mark the boundaries of phrases in a sentence. While it remains an unproven hypothesis, tests have shown that its results are comparable to those of basic shallow parsers, with higher efficiency. The chunking algorithm based on the Marker Hypothesis is simple, fast, and almost language-independent: it depends only on a list of closed-class words, which is already available for most languages. This makes it suitable for bilingual chunking, since no separate shallow parser is needed for each language. This paper discusses the use of the Marker Hypothesis combined with Probabilistic Translation Dictionaries to extract example-based machine translation resources from parallel corpora. |
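The summary describes chunking driven purely by a closed-class word list: a new chunk opens whenever a marker word is seen. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm; the marker set is a small hypothetical English sample.

```python
# Sketch of Marker Hypothesis chunking: a chunk begins at each marker
# (closed-class) word. The MARKERS set is a hypothetical English sample,
# not the list used in the paper.

MARKERS = {
    "the", "a", "an",               # determiners
    "in", "on", "at", "of", "to",   # prepositions
    "and", "or", "but",             # conjunctions
    "he", "she", "it", "they",      # pronouns
}

def marker_chunk(sentence: str) -> list[list[str]]:
    """Split a whitespace-tokenized sentence into chunks opened by markers."""
    chunks: list[list[str]] = []
    for token in sentence.lower().split():
        # A marker word opens a new chunk; any other word extends the
        # current chunk (the first word always opens one).
        if token in MARKERS or not chunks:
            chunks.append([token])
        else:
            chunks[-1].append(token)
    return chunks

print(marker_chunk("The dog sat on the mat and barked"))
# → [['the', 'dog', 'sat'], ['on'], ['the', 'mat'], ['and', 'barked']]
```

Because the only language-specific resource is the marker list, the same routine can chunk both sides of a parallel corpus, which is what makes the approach attractive for bilingual example extraction.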