Automated Essay Scoring (AES) systems play a crucial role in modern education, providing scalable and consistent assessments of student writing. While transformer-based language models such as BERT and DeBERTa have demonstrated state-of-the-art performance on many Natural Language Processing (NLP) tasks, their high computational demands limit their applicability in resource-constrained educational environments. This paper explores the potential of simpler machine learning models, namely Support Vector Machines (SVM), LightGBM (LGB), and Artificial Neural Networks (ANN), to achieve comparable AES performance through effective feature engineering. Using a dataset of over 17,000 essays written by students from diverse backgrounds, we show that, with thoughtful feature selection and transformation, these models achieve Quadratic Weighted Kappa (QWK) scores competitive with those of more complex transformer models: on test data, the SVM and ANN models reach a QWK of 0.79, and LGB reaches 0.80. These findings highlight the viability of less computationally intensive models for AES, making such systems accessible to a broader range of educational institutions, and offer practical insights into developing efficient, scalable AES solutions that maintain high performance while reducing the computational burden.
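For readers unfamiliar with the evaluation metric, the sketch below shows one common way to compute QWK; it is a minimal illustration assuming integer essay scores and the scikit-learn library, with the example scores being purely hypothetical rather than drawn from our dataset.

```python
# Minimal sketch: computing Quadratic Weighted Kappa (QWK), the agreement
# metric reported in this paper, via scikit-learn's cohen_kappa_score.
from sklearn.metrics import cohen_kappa_score

# Hypothetical gold-standard and predicted essay scores (illustrative only).
y_true = [3, 4, 2, 5, 3, 4, 1, 6]
y_pred = [3, 4, 3, 5, 2, 4, 1, 5]

# Cohen's kappa with quadratic weights penalizes large disagreements
# more heavily than near-misses, which suits ordinal essay scores.
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK: {qwk:.2f}")
```

A QWK of 1.0 indicates perfect agreement with human raters and 0.0 indicates chance-level agreement, so the 0.79 to 0.80 scores reported here represent strong agreement.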