Hadoop MapReduce is a framework for processing vast amounts of data on a cluster of machines in a reliable and fault-tolerant manner. For better management of this framework, estimating the runtime of a job is an important but challenging problem. In this paper, after precisely analyzing the anatomy of job processing in Hadoop MapReduce, we model each stage with the essential and efficient features that have the highest impact on runtime; we then propose two new methods to estimate runtime, covering both the case where the history of previous runs of a job is available and the case where a job runs for the first time. The results show less than 12% error in runtime estimation for a first run and less than 8.5% error when a profile or history of the job exists.