For humans to interact naturally with machines, a machine must understand the mood of the speaker. Until now, machines have largely been trained on neutral speech or utterances, yet a person's mood affects their performance. Deciphering human mood is challenging for machines, as humans can produce fourteen distinct sounds per second. To understand human behaviour, a machine should model the acoustic capabilities of the human ear; Mel Frequency Cepstral Coefficients (MFCC) and Linear Prediction Coefficients (LPC) approximate the human auditory system. The proposed model, Emotion Recognition from Indian Languages (ERIL), recognizes emotions such as fear, anger, surprise, sadness, happiness, and neutral. ERIL first pre-processes the voice signal, extracts selected MFCC, LPC, pitch, and voice-quality features, and then classifies the speech using CatBoost. ERIL is a multilingual emotion classifier and is independent of any particular language; we evaluated it on Hindi, Gujarati, Marathi, Punjabi, Bangla, Tamil, Oriya, and Telugu, using a speech dataset of various emotions that we recorded in these languages. ERIL is compared against other benchmark classifiers.
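Two of the feature families named above, LPC and pitch, can be sketched in a few lines of NumPy. The sketch below is illustrative only and is not ERIL's actual feature extractor: it estimates LPC coefficients with the standard Levinson-Durbin recursion over the signal's autocorrelation, and estimates pitch (F0) from the dominant autocorrelation lag. The function names, the order, and the 50-400 Hz pitch search range are assumptions for the example, not values from the paper.

```python
import numpy as np

def lpc(signal, order):
    """Estimate Linear Prediction Coefficients via the Levinson-Durbin
    recursion on the signal's autocorrelation (illustrative sketch)."""
    n = len(signal)
    # Autocorrelation at lags 0 .. order
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for the current prediction order
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / err
        new_a = a.copy()
        new_a[1:i] += k * a[1:i][::-1]
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k
    return a[1:]  # prediction-error filter coefficients a_1 .. a_order

def pitch_autocorr(signal, sr, f_min=50, f_max=400):
    """Crude pitch (F0) estimate: the autocorrelation peak lag within
    a plausible speech range [f_min, f_max] Hz (assumed range)."""
    lo, hi = int(sr / f_max), int(sr / f_min)
    r = [np.dot(signal[:-k], signal[k:]) for k in range(lo, hi + 1)]
    best_lag = lo + int(np.argmax(r))
    return sr / best_lag
```

As a sanity check, fitting an order-2 LPC to a synthetic AR(2) signal recovers the (negated) autoregression coefficients, and `pitch_autocorr` on a pure 200 Hz sine at a 16 kHz sampling rate returns 200 Hz.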