Code-mixing (CM), the blending of two or more languages within a discourse, is common in multilingual societies. The lack of code-mixed training data is a primary obstacle to developing end-to-end AI models for deployment in various natural language processing (NLP) applications. Manually creating or crowdsourcing labelled code-mixed data for the task at hand is one option; however, it requires significant human labour and is often impractical due to the language-specific variation of code-mixed text. To bypass the issue of data scarcity, we offer an efficient method for automatically generating code-mixed Malayalam-English (Manglish) text from monolingual English data collected from the Amazon customer review corpus, without any parallel data. English-to-Manglish translation is a relatively under-studied research problem. The generated synthetic code-mixed Manglish data can be used to train AI models for various NLP tasks in a code-mixed context. The code-mixed text generator in this work is based on linguistic and task-independent characteristics derived from a transformer-based language model, LaBSE. The generated corpus is then evaluated using a variety of objective metrics. We intend to extend this work by incorporating further linguistic theories, improved named entity recognition (NER) and parsing methods, and a variety of NLP applications, with the expectation that this will enable further study of code-mixing in different language pairs.