Image-classification-Convolutional-Neural-Network

Project Title: American Sign Language (ASL) Image Classification

Deep learning: Image Classification using a Convolutional Neural Network

Overview

American Sign Language (ASL) serves as a primary means of communication for many deaf individuals. This project develops a convolutional neural network (CNN) to classify images of ASL letters. The goal is a model that can recognize individual letters, laying the groundwork for a sign language translation system.

Project Structure

1. Introduction to ASL

2. Data Loading and Preprocessing

3. Visualizing the Training Data

4. Dataset Examination

5. One-Hot Encoding

6. Model Definition

7. Model Compilation

8. Model Training

9. Model Testing

10. Visualizing Misclassifications
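Steps 6–7 above (model definition and compilation) can be sketched as follows. This is a minimal illustration, not the notebook's actual architecture; the input size (28×28 grayscale) and the number of classes (26) are assumptions and should be adjusted to match the real dataset:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed dimensions -- adjust to match the actual ASL dataset.
IMG_SHAPE = (28, 28, 1)   # 28x28 grayscale images (assumption)
NUM_CLASSES = 26          # one class per ASL letter (assumption)

def build_model():
    """A minimal CNN sketch: two conv/pool blocks followed by a dense head."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=IMG_SHAPE),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    # Categorical cross-entropy pairs with the one-hot-encoded labels (step 5).
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
```

The softmax output layer produces one probability per letter class, which is why the labels must be one-hot encoded before training.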

How to Use

  1. Ensure you have the necessary dependencies installed (e.g., TensorFlow, NumPy, Matplotlib).
  2. Execute the provided code cells in a Jupyter notebook or an equivalent environment.
  3. Follow the step-by-step instructions for data loading, preprocessing, model training, and evaluation.
  4. Examine the model’s performance on the test set and visualize misclassifications.
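Step 4 above boils down to comparing predicted class indices against the true ones. A minimal NumPy sketch, using small made-up arrays in place of the real `model.predict(x_test)` output and one-hot-encoded test labels:

```python
import numpy as np

# Stand-in arrays -- in the notebook these would come from
# model.predict(x_test) and the one-hot-encoded y_test.
pred_probs = np.array([[0.10, 0.80, 0.10],
                       [0.70, 0.20, 0.10],
                       [0.20, 0.30, 0.50],
                       [0.90, 0.05, 0.05]])
y_true_onehot = np.array([[0, 1, 0],
                          [1, 0, 0],
                          [0, 1, 0],
                          [1, 0, 0]])

pred_labels = np.argmax(pred_probs, axis=1)    # predicted class per image
true_labels = np.argmax(y_true_onehot, axis=1)  # decode one-hot labels

accuracy = np.mean(pred_labels == true_labels)
misclassified = np.flatnonzero(pred_labels != true_labels)

print(f"accuracy: {accuracy:.2f}")                # 3 of 4 correct -> 0.75
print(f"misclassified indices: {misclassified}")  # [2]
```

The `misclassified` index array is exactly what the visualization step needs: it selects which test images to plot alongside their predicted and true letters.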

Conclusion

The CNN model for ASL image classification achieves 93% accuracy on the test set. Further enhancements, such as Dropout layers, Batch Normalization, or training for more epochs, could be explored to improve the model's accuracy and robustness.
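One way the Dropout and Batch Normalization refinements mentioned above could look in Keras is sketched below. The layer sizes, dropout rate, and input/output dimensions are illustrative assumptions, not tuned values from the project:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumed number of ASL letter classes

# A CNN with BatchNormalization after each convolution and Dropout
# before the dense head, both intended to reduce overfitting.
regularized = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),  # randomly zeroes 50% of activations during training
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
regularized.compile(optimizer="adam",
                    loss="categorical_crossentropy",
                    metrics=["accuracy"])
```

Dropout is only active during training, so it adds no cost at inference time; Batch Normalization additionally tends to allow higher learning rates and faster convergence over more epochs.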