A large amount of music data is now available on the Internet thanks to the advent of the World Wide Web. Beyond helping users find the music they are looking for, it has become necessary to build recommendation services. The majority of existing music recommendation systems rely on collaborative or content-based engines. However, a user's music selection depends not only on prior preferences or musical content but also on the user's mood. This paper presents a mood-based music recommendation framework (MoodSIC) that automatically infers a user's mood and suggests a list of songs matching that mood. MoodSIC first detects the listener's mood using parameters such as skin temperature, facial texture, voice input, and facial expression, and then recommends songs accordingly. The MoodSIC system is delivered as a web application that uses MongoDB as a back-end for storing the songs. The proposed recommendation system provides a user-friendly interface to detect the user's mood via webcam, generate a recommended playlist, and automatically play music from the generated playlist.
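As a rough illustration of the pipeline described above, the sketch below captures a single webcam frame, classifies the user's mood, and queries a MongoDB collection for songs tagged with that mood. The `detect_mood` helper, the `moodsic`/`songs` database and collection names, and the `mood` field are hypothetical placeholders, since the abstract does not specify the classifier or the song schema.

```python
# Minimal sketch of the MoodSIC flow: webcam frame -> mood label -> song query.
# Requires opencv-python and pymongo; names marked below are assumptions.
import cv2
from pymongo import MongoClient


def detect_mood(frame) -> str:
    """Hypothetical stand-in for the mood classifier.

    The actual system fuses several signals (facial expression, skin
    temperature, voice input); here a fixed label is returned so the
    pipeline is runnable end to end.
    """
    # e.g. run a facial-expression model on `frame` and map its output
    # to a label such as "happy", "sad", or "calm"
    return "happy"


def recommend_songs(limit: int = 10) -> list:
    # Grab one frame from the default webcam.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the webcam")

    mood = detect_mood(frame)

    # Query the songs collection for tracks tagged with this mood.
    # Database/collection names and the `mood` field are assumed.
    client = MongoClient("mongodb://localhost:27017")
    songs = client["moodsic"]["songs"].find({"mood": mood}).limit(limit)
    return list(songs)


if __name__ == "__main__":
    for song in recommend_songs():
        print(song.get("title"), "-", song.get("artist"))
```

In the described web application, the returned list would populate the recommended playlist, from which the first track is played automatically.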