Hearing-impaired people use Sign Language to communicate with each other as well as with other communities, but they are often unable to communicate with hearing people: most people without a hearing disability do not understand Sign Language and therefore cannot understand hearing-impaired people. A system that recognizes Sign Language and converts it to text is therefore needed. In this research, a model is optimized for recognizing Amharic Sign Language as Amharic characters. A convolutional neural network is trained on a dataset gathered from a teacher of Amharic Sign Language. The main steps in developing the recognition models are frame extraction from Amharic Sign Language video, labeling and annotation, XML creation, TFRecord generation, and model training. After training of the neural network is completed, the model is saved and used to recognize Sign Language from a live video system or from individual video frames. The accuracy of the model is computed as the sum of the confidences of the correctly recognized alphabet signs divided by the number of alphabet signs presented for evaluation, for both Faster R-CNN and SSD. The mean average accuracy of Faster R-CNN and the Single-Shot Detector is found to be 98.25% and 96%, respectively. The model is trained and evaluated on the characters of the Amharic language. The research will continue to include the remaining words and sentences used in Amharic Sign Language, toward a full-fledged Sign Language recognition model and a complete system.
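The accuracy metric described above can be sketched as follows. This is a minimal illustration under assumed data, not the authors' actual evaluation code: each detection carries the model's confidence, and only correctly recognized alphabet signs contribute to the sum, which is then divided by the total number of signs evaluated.

```python
def mean_accuracy(detections, ground_truth):
    """Sum of confidences of correctly recognized signs, divided by the
    number of signs presented for evaluation.

    detections: list of (predicted_label, confidence) pairs
    ground_truth: list of true labels, aligned with detections
    """
    total_confidence = 0.0
    for (pred, conf), true_label in zip(detections, ground_truth):
        if pred == true_label:  # only correct recognitions contribute
            total_confidence += conf
    return total_confidence / len(ground_truth)

# Hypothetical example: 3 of 4 Amharic alphabet signs recognized correctly
detections = [("ha", 0.99), ("le", 0.97), ("me", 0.40), ("se", 0.98)]
truth = ["ha", "le", "re", "se"]
print(mean_accuracy(detections, truth))  # (0.99 + 0.97 + 0.98) / 4 = 0.735
```

Averaging this quantity over the evaluation set yields the mean average accuracy reported for the Faster R-CNN and SSD models.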