Hadoop MapReduce tolerante a faltas bizantinas (Byzantine fault-tolerant Hadoop MapReduce)
Main Author: | |
---|---|
Format: | masterThesis |
Language: | por |
Published: | 2012 |
Subjects: | |
Online Access: | http://hdl.handle.net/10451/13903 |
Country: | Portugal |
OAI: | oai:repositorio.ul.pt:10451/13903 |
Summary: | MapReduce is often used to run critical jobs such as scientific data analysis. However, evidence in the literature shows that arbitrary faults do occur and can corrupt the results of MapReduce jobs. MapReduce runtimes such as Hadoop tolerate crash faults, but not arbitrary or Byzantine faults. This work presents a MapReduce algorithm and prototype that tolerate these faults. An experimental evaluation shows that a job executed with the implemented algorithm uses twice the resources of the original Hadoop, rather than the 3 or 4 times more that a direct application of common Byzantine fault-tolerance paradigms would require. This cost is believed to be acceptable for critical applications that need that level of fault tolerance. |
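The summary's cost claim rests on a simple trade-off: in the common case each task runs on only two replicas whose outputs are compared, and extra replicas are launched only when those two outputs disagree, instead of the 3 or 4 replicas used by classical Byzantine fault-tolerance schemes. The following is a minimal, self-contained sketch of that general idea, not code from the thesis prototype; the `executeTask` helper and the `MAX_REPLICAS` budget are hypothetical placeholders introduced for illustration.

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.HexFormat;

/**
 * Sketch of two-replica task execution with digest voting, assuming
 * a hypothetical executeTask() that runs one map or reduce task and
 * returns its raw output. Requires Java 17+ for HexFormat.
 */
public class ReplicatedTaskSketch {

    // Upper bound on replicas, used only when the first two outputs disagree.
    static final int MAX_REPLICAS = 4;

    // Hypothetical stand-in for running one replica of a task.
    static byte[] executeTask(int taskId, int replica) {
        return ("output-of-task-" + taskId).getBytes();
    }

    // Digest of a task output, used so only small hashes need to be compared.
    static String digestOf(byte[] output) throws Exception {
        return HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-256").digest(output));
    }

    // Returns a digest that two replicas agree on, or null if no agreement
    // is reached within the replica budget.
    static String runWithVoting(int taskId) throws Exception {
        String first = digestOf(executeTask(taskId, 0));
        String second = digestOf(executeTask(taskId, 1));
        if (first.equals(second)) {
            return first; // common case: roughly 2x the cost of plain Hadoop
        }
        // Rare case: disagreement, launch extra replicas until two digests match.
        String[] seen = {first, second};
        for (int r = 2; r < MAX_REPLICAS; r++) {
            String d = digestOf(executeTask(taskId, r));
            for (String s : seen) {
                if (s.equals(d)) {
                    return d;
                }
            }
            seen = Arrays.copyOf(seen, seen.length + 1);
            seen[seen.length - 1] = d;
        }
        return null; // no two replicas agreed within the budget
    }

    public static void main(String[] args) throws Exception {
        System.out.println("accepted digest: " + runWithVoting(42));
    }
}
```

In this sketch the expensive work (task execution) happens twice in the fault-free case, which is what keeps the overhead near 2x rather than the 3x to 4x of schemes that always run every replica.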