This paper presents a study of three different text generation model approaches, testing how the fidelity of generated results is affected by minor changes to data preprocessing combined with increases in model size. Chicken recipes collected online were used to train the text generation models under evaluation. The general theme that emerged from this experiment is that a model's output quality is dictated not only by how large the model is, but also by how well the data was processed for it.