Classification of disasters is crucial for effective disaster management and response. This paper proposes a methodology that combines computer vision techniques and federated learning to improve disaster classification accuracy while reducing data transfer and the latency it incurs. The methodology employs computer vision algorithms to analyse visual data captured from a variety of sources, seeking to accurately classify disasters such as wildfires, floods, earthquakes, and cyclones by extracting pertinent features and patterns from these images. Federated learning is used to address data privacy and transfer latency: it enables models to be trained on decentralised data sources without centralising the data. Each participating device or data source trains a local model on its own data, and only model updates are shared and aggregated to form a global model. Extensive experiments on videos of actual disasters are conducted to evaluate the proposed methodology, with the evaluation focusing on precision and effectiveness. This strategy is expected to yield improved disaster classification models suitable for deployment in disaster management systems.
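The aggregation step described above, in which clients share only model updates that are combined into a global model, can be sketched as follows. This is a minimal, hedged illustration of federated-averaging-style aggregation using NumPy, not the paper's actual training pipeline; the function names (`local_update`, `federated_average`) and the toy data are illustrative assumptions.

```python
import numpy as np

np.random.seed(0)

def local_update(global_weights, local_data, lr=0.1):
    """Simulate one round of local training: each client nudges the
    global weights toward the mean of its own (private) data.
    Only the resulting weights leave the device, not the data."""
    gradient = global_weights - local_data.mean(axis=0)
    return global_weights - lr * gradient

def federated_average(client_weights, client_sizes):
    """Aggregate client updates into a global model, weighting each
    client's contribution by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients, each holding private data with a different offset.
clients = [np.random.randn(20, 4) + c for c in range(3)]
global_w = np.zeros(4)

for _ in range(50):  # communication rounds
    updates = [local_update(global_w, data) for data in clients]
    global_w = federated_average(updates, [len(d) for d in clients])

# The global model converges toward the overall data mean without
# any client's raw data ever being centralised.
print(global_w.round(2))
```

In a real disaster-classification setting the weights would be those of an image classifier rather than a toy mean estimator, but the communication pattern is the same: train locally, transmit updates, aggregate centrally.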