5-Year Impact Factor: 1.53
Authors: Saasupalli Sukanya, Bollaram Charan, K Maheshwar Reddy, Mr. G Sathish
Abstract:
Early methods of describing images relied on human input: images were manually tagged with metadata, descriptions were written by hand, and text-based image search engines depended on user-supplied keywords. Basic image classification techniques provided only limited labels, and as computer vision evolved, traditional feature-extraction methods such as SIFT and HOG enabled object recognition but could not generate comprehensive descriptions. Manual caption generation is time-consuming, inconsistent, and does not scale to large datasets, and traditional systems lack the ability to produce meaningful descriptions automatically. The objective of this project is to develop a system that automatically generates descriptive captions for images by leveraging deep learning models and natural language processing techniques, enhancing image understanding and accessibility.
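The encoder-decoder approach the abstract alludes to can be sketched in miniature: an image is reduced to a feature vector (in practice by a CNN), that vector initializes a decoder state, and the decoder greedily emits one word at a time until an end token. The toy vocabulary, weight shapes, and update rule below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Toy vocabulary and randomly initialized weights (illustrative only).
VOCAB = ["<start>", "a", "dog", "on", "grass", "<end>"]
rng = np.random.default_rng(0)

FEAT_DIM, HID = 8, 8
W_img = rng.normal(size=(FEAT_DIM, HID))    # projects image features into the hidden state
W_emb = rng.normal(size=(len(VOCAB), HID))  # token embeddings
W_out = rng.normal(size=(HID, len(VOCAB)))  # hidden state -> vocabulary logits

def caption(features, max_len=10):
    """Greedy caption decoding from a (stand-in) image feature vector."""
    h = np.tanh(features @ W_img)           # initialize decoder state from "CNN" features
    tok = VOCAB.index("<start>")
    words = []
    for _ in range(max_len):
        h = np.tanh(h + W_emb[tok])         # fold the previous token into the state (RNN-like)
        tok = int(np.argmax(h @ W_out))     # greedily pick the most likely next word
        if VOCAB[tok] == "<end>":
            break
        words.append(VOCAB[tok])
    return words

print(caption(rng.normal(size=FEAT_DIM)))
```

With trained weights and a real CNN encoder, the same greedy loop produces fluent captions; training those weights on paired image-caption data is exactly what the deep learning pipeline described above provides.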