Nowadays, text summarization has become important as the amount of text data available online grows at an exponential rate. Most text classification systems require processing a huge amount of data, and producing exact and meaningful summaries of long texts is a time-consuming endeavour. Generating abstractive summaries that retain the key information of the data, and using them to train machine learning models, makes those models space- and time-efficient. Abstractive text summarization has successfully moved from linear models to nonlinear neural network models using sparse models [1]. This success stems from the application of deep learning models to natural language processing tasks, where such models can capture the interrelating patterns in data without hand-crafted features. The Text-to-Text Transfer Transformer (T5) approach was used to investigate the text summarization problem, and the results showed that the transfer-learning-based model performed significantly better on abstractive text summarization than a sequence-to-sequence recurrent model.