This paper proposes an automated system for grading handwritten subjective answers, leveraging advanced computer vision, natural language processing, and large language model techniques. To address the time-consuming nature of manual grading, the system presents a promising approach that employs CRAFT for text detection, TrOCR for handwritten text recognition, and a fine-tuned language model for answer evaluation. Experimental results demonstrate the system's accuracy in transcribing handwritten text and its consistency with human raters when grading answers. The proposed methodology offers a scalable and efficient solution to the traditionally labor-intensive task of grading handwritten responses, with the potential to transform educational assessment practices. The system's performance, its limitations, and directions for future research to improve efficiency are also discussed.