Management of Stray Dog Vaccination Through Individual Identification of Stray Dogs Using Machine Learning

Summary

A newspaper article about the large number of human deaths caused by dog-mediated rabies made me wonder why the existing vaccination programs were not successful in addressing this issue. The statistics on rabies deaths are alarming: 59,000 deaths every year worldwide, 36% of them in India, 99% of cases caused by dog bites, and children as the usual victims. Inspired by the World Health Organization’s goal of zero human dog-mediated rabies deaths by 2030, I wanted to join hands in the fight against rabies.

 

I found that the process of vaccination of stray dogs followed by the civic authorities can be made more efficient if we can maintain a locality-wise record of the number of dogs and their vaccination status. To do this, we need to develop a reliable method to uniquely identify individual dogs in a locality. This project proposes using machine learning to identify individual dogs in the locality.

 

I used pre-trained networks to detect the presence of dogs in images extracted from video recordings of stray dogs in a nearby locality. I then used the face recognition algorithm LBPH (Local Binary Patterns Histograms) to create a unique “signature” histogram for each individual dog. I trained a Support Vector Machine model to classify the images of the different stray dogs, and obtained 92% accuracy in individually identifying the dogs in the experimental data set.

 

In the future, I plan to track the population as it ages by continuously updating the dataset with new images.

Question / Proposal

How can we make the anti-rabies vaccination process more efficient? How can we enable civic authorities to track the population and vaccination status of stray dogs in a locality? Can we uniquely identify individual dogs in a locality from just their images? These are the questions I was trying to answer, and I wanted to develop an automated dog-identification system. By installing video cameras on streets and using machine learning and image processing techniques, we can detect the presence of dogs in an image and then individually identify the stray dogs in a locality. This would help the authorities maintain an up-to-date and complete record of the population and vaccination status of stray dogs, allowing them to assess the progress of existing vaccination programs and take corrective measures to improve their efficiency.

 

After reading about different face recognition algorithms, I wanted to test whether one such algorithm (Local Binary Patterns Histograms) would allow us to extract useful information from the images of dogs that we could then use as a “signature”, or descriptor, to uniquely identify individual stray dogs in my dataset. I tested this by training a Support Vector Machine to classify these different “signatures”, hoping for a high classification accuracy that would validate my hypothesis.

Given that the algorithm I chose is robust against illumination variations, and that individual identification is a multi-class classification problem, significantly harder than a binary classification problem, I expected to obtain an accuracy of around 80-90%.

Research

Background Research on Machine Learning Algorithms to Perform Individual Identification

For the first step, detecting a stray dog in an image and localizing it within the image, I found two algorithms I could use: object detection and instance segmentation. While object detection can only create a rectangular bounding box around the image of a dog, instance segmentation is more powerful, and can create a mask of arbitrary shape isolating the pixels that contain the dog. Either method can be used to isolate the relevant pixels from which the “signature”, or descriptor, is extracted in the next step. Training such algorithms requires massive amounts of data, sometimes several thousand labelled images, and substantial computing power, so I used pre-trained networks: YOLO (You Only Look Once) for object detection and Mask R-CNN (Region-based Convolutional Neural Network) for instance segmentation.

 

For the next step, the individual identification of stray dogs in a locality, I studied different face recognition algorithms, such as Fisherfaces, Eigenfaces and LBPH (Local Binary Patterns Histograms). LBPH was selected because it is more robust against illumination variations and requires far fewer images per individual than the other two algorithms. For the same reason, convolutional neural networks (CNNs) could not be used for individual identification: the average number of images per dog in my dataset is 40, while training a CNN typically requires thousands of images.
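As an illustration of the descriptor, here is a minimal sketch of the basic LBP operator (8 neighbours, radius 1) and the grid-of-histograms feature vector. The grid size and helper names are illustrative choices, not the project's actual parameters:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour, radius-1 LBP: each interior pixel becomes an
    8-bit code built by thresholding its neighbours against the centre."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, grid=(4, 4)):
    """Split the LBP code image into grid cells and concatenate the
    per-cell normalised 256-bin histograms into one 'signature' vector."""
    codes = lbp_image(gray)
    feats = []
    for row in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist = np.bincount(cell.ravel(), minlength=256).astype(np.float64)
            feats.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(feats)  # length = grid[0] * grid[1] * 256
```

OpenCV's face module provides an optimized LBPH implementation; this sketch only shows the underlying idea of turning local texture patterns into a histogram-based signature.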

 

Background Research on the Vaccination and Sterilization Programs in India

Although street dog anti-rabies vaccination and sterilisation programs such as the Animal Birth Control program were initiated by the Government of India, they have failed to prevent dog-mediated rabies due to poor implementation and inadequate funding and infrastructure. Often, no baseline study of the number of stray dogs and the number of attacks is done before undertaking the project. In my city, Bengaluru, as many as 344 to 557 stray dogs are reported on official records to have been vaccinated and sterilised every day. However, this number is contradicted by the rampant stray dog problem in the city. This strongly suggests that a method to track the stray dog population and the vaccination status of individual stray dogs would allow the civic authorities to closely monitor the progress of the program, an important step towards making it more efficient.

Method / Testing and Redesign

  1. Data Collection
    Video recordings of 12 street dogs were captured. Images were extracted from the videos and manually labelled. The average number of images per dog is 40.
  2. Data Division
    The data was split into training (70%), cross validation (20%) and testing (10%) datasets. The testing dataset was included to ensure that the trained model does not overfit the training and cross validation datasets and is able to generalize well to unseen images.

  3. Platforms Used
    OpenCV-Python was used to pre-process the images, and Python was used for all programming tasks.

  4. Testing Algorithms
    Algorithms that differed in the feature extraction method and the learning model used were compared based on their testing accuracy. A description of the four algorithms that were compared is as follows:

    1. Linear Support Vector Machine (SVM) model to classify LBPH (Local Binary Patterns Histograms) feature vectors obtained from the grayscale images cropped using the bounding boxes obtained in the object detection step.

    2. SVM model with a Gaussian Radial Basis Function (RBF) kernel to classify LBPH feature vectors obtained from the cropped grayscale images.

    3. SVM model with an RBF kernel to classify LBPH feature vectors obtained from the cropped RGB images.

    4. SVM model with an RBF kernel to classify LBPH feature vectors obtained from the RGB images after instance segmentation.

      Fig : A summary of the Compared Algorithms
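The 70/20/10 division described in steps 1 and 2 can be sketched by applying scikit-learn's `train_test_split` twice. The function name, the random seed, and the stratified option are illustrative assumptions, not the project's actual code:

```python
from sklearn.model_selection import train_test_split

def split_70_20_10(X, y, seed=0):
    """Split feature vectors X and dog labels y into 70% training,
    20% cross validation and 10% testing subsets."""
    # First carve off 30% for cross validation + testing, keeping the
    # class proportions the same in both halves (stratified split).
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.30, random_state=seed, stratify=y)
    # Then split the held-out 30% in a 2:1 ratio, giving 20% cross
    # validation and 10% testing overall.
    X_cv, X_test, y_cv, y_test = train_test_split(
        X_rest, y_rest, test_size=1/3, random_state=seed, stratify=y_rest)
    return (X_train, y_train), (X_cv, y_cv), (X_test, y_test)
```

Stratifying keeps roughly the same number of images per dog in every subset, which matters when each class has only about 40 images.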

  • Object Detection
    A pre-trained object detection network (YOLO) was used to create bounding boxes enclosing the dogs and to crop the images, reducing the number of irrelevant pixels used in the feature extraction step.

  • Instance Segmentation
    To further reduce the number of irrelevant pixels used to extract the descriptors/features, a pre-trained instance segmentation network (Mask R-CNN) was used to obtain a mask around the images of the dogs. The hypothesis was that performing instance segmentation instead of object detection would give more accurate results.

    Fig: Process of Object Detection/ Instance Segmentation

  • Local Binary Patterns Histograms(LBPH) Feature Extraction
    LBPH feature vectors were extracted from the images. Only pixels within the bounding box (in the case of object detection) or within the mask (in the case of instance segmentation) were used in this step. Initially the images were converted to grayscale, but it was found that retaining the RGB format increased classification accuracy.

    Fig: The Feature Vectors Extracted from Images of two Different Dogs - The similarity between the histograms extracted from images of each dog and the difference across the two dogs strengthens our hypothesis that the LBPH can be used to obtain a unique “signature” or descriptor of each individual dog.

  • Support Vector Machine Training and Tuning
    An SVM was trained on the LBP histograms of the training set to perform the multi-class classification task of individually identifying the stray dogs. Linear and RBF kernels were compared. The hyperparameters C and gamma of the SVM were tuned by performing a grid search on the cross validation data; the hyperparameter combination that gave the highest accuracy on the cross validation dataset was selected.

  • Testing the Model
    Finally, the model was tested using the testing dataset.

Fig : Flowchart Depicting the Methodology
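The training-and-tuning step can be sketched as a plain grid search over C and gamma, scored on the held-out cross validation set. The grid values shown are illustrative, as the project's actual grid is not recorded here:

```python
from sklearn.svm import SVC

def tune_rbf_svm(X_train, y_train, X_cv, y_cv,
                 Cs=(0.1, 1, 10, 100), gammas=(1e-3, 1e-2, 1e-1, 1)):
    """Train an RBF-kernel SVM for every (C, gamma) pair on the training
    set, score each on the cross validation set, and keep the best."""
    best = None
    for C in Cs:
        for gamma in gammas:
            clf = SVC(kernel="rbf", C=C, gamma=gamma)
            clf.fit(X_train, y_train)
            acc = clf.score(X_cv, y_cv)
            if best is None or acc > best[0]:
                best = (acc, C, gamma, clf)
    cv_acc, C, gamma, model = best
    return model, {"C": C, "gamma": gamma, "cv_accuracy": cv_acc}
```

Because the cross validation set is fixed (rather than k-fold), each candidate model is trained only once, which keeps the search cheap on a small dataset.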

 

Results

A summary of the description and testing accuracy of the 4 compared algorithms:

Algorithm No. | Grayscale/RGB Images | Object Detection/Instance Segmentation | SVM Kernel | Testing Accuracy
1 | Grayscale | Object Detection | Linear | 23%
2 | Grayscale | Object Detection | Gaussian Radial Basis Function | 68%
3 | RGB | Object Detection | Gaussian Radial Basis Function | 92%
4 | RGB | Instance Segmentation | Gaussian Radial Basis Function | 87%

A summary of the training, cross validation and testing accuracies obtained from each of the 4 compared algorithms:

Algorithm No. | Training Accuracy | Cross Validation Accuracy | Testing Accuracy
1 | 24% | - | 23%
2 | 98% | 77% | 68%
3 | 100% | 92% | 92%
4 | 99% | 85% | 87%

 

A detailed discussion of the results obtained for each algorithm follows:

1. Algorithm 1

    Results

  • Training accuracy - 24%
  • Testing accuracy - 23%

    Conclusion

  • The low training accuracy suggests that the feature vectors obtained from the feature extraction step are not linearly separable.
  • A possible method to enhance the algorithm performance could be to use a non-linear kernel support vector machine, such as the Gaussian Radial Basis Function SVM.

2. Algorithm 2

    Results

  • Training accuracy - 98%
  • Cross validation accuracy - 77%
  • Testing accuracy - 68%

    Conclusion

  • Testing accuracy increased significantly.
  • The large gap between the high training accuracy and the low testing accuracy indicates that the model is overfitting.
  • A possible method to enhance the algorithm performance could be to retain the RGB image format and continue to use a non-linear kernel support vector machine, such as the Gaussian Radial Basis Function SVM.

3. Algorithm 3

    Results

  • Training accuracy - 100%
  • Cross validation accuracy - 92%
  • Testing accuracy - 92%

    Conclusion

  • Extracting features from the RGB images instead of grayscale images increased the testing accuracy significantly and solved the overfitting problem of Algorithm 2. Algorithm 2 may have overfitted because the histograms obtained from grayscale images were not well separable, so the model was fitting noise in the data. The additional colour information in the RGB images could have made the data more separable, making the model less likely to overfit.
  • A possible method to enhance the algorithm performance could be to use instance segmentation to further reduce the number of irrelevant pixels used in the step of feature extraction. A pre-trained algorithm could be used to create a mask around the images of the dogs. Only the pixels within the mask can be used for the feature extraction step.

4. Algorithm 4
    Results

  • Training accuracy - 99%
  • Cross validation accuracy - 85%
  • Testing accuracy - 87%

    Conclusion

  • The accuracy of Algorithm 4 is lower than that of Algorithm 3. A possible explanation is that the instance segmentation was imperfect, leading to loss of information.

Hence we recommend Algorithm 3 as our proposed solution.

 

 

Conclusion

The results obtained demonstrate that the proposed algorithm of object detection/instance segmentation followed by SVM (Support Vector Machine) multiclass classification of LBPH features can achieve satisfactory accuracy (92%) in the individual identification of stray dogs in a locality. This also validates our hypothesis that LBPH (Local Binary Patterns Histograms) can be used to extract unique "signatures" of individual stray dogs.

One of the algorithms performs with high accuracy in the multiclass classification problem of individually identifying street dogs. Two techniques that gave a substantial increase in classification accuracy were:

(1) Using color information (instead of only using grayscale images) from the RGB images to extract LBP histograms.

(2) Using a non-linear kernel (Gaussian Radial Basis Function) for the Support Vector Machine.
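Technique (1) amounts to computing one histogram per colour channel and concatenating them, rather than a single histogram of the grayscale image. A minimal sketch, with a plain intensity histogram standing in for the full LBPH extraction:

```python
import numpy as np

def channel_histograms(image, bins=256):
    """Concatenate one normalised histogram per colour channel of an
    (H, W, 3) image, tripling the feature length relative to a single
    grayscale histogram."""
    feats = []
    for c in range(image.shape[2]):
        hist, _ = np.histogram(image[:, :, c], bins=bins, range=(0, 256))
        hist = hist.astype(np.float64)
        feats.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(feats)
```

With LBPH, the same idea applies per grid cell and per channel, so the colour variant carries three times as much information for the SVM to separate.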

The results obtained are reliable because the data was divided into training, cross validation and testing datasets to prevent overfitting. This ensured that the selected model generalizes well even to unseen images of the same stray dogs. Also, since the experiment used a larger number of classes (12) than is typically found in a single locality (the number of dogs in a locality, for example a street, is generally at most around 10), the results show that locality-wise individual identification of stray dogs can be reliably carried out.

This project will help in maintaining a complete and up-to-date record of the stray dog population and their vaccination status in each locality, making the vaccination process more efficient. Possible future improvements include continuously updating the dataset with new images while deleting old ones and retraining the algorithm, so that it can correctly identify the dogs even as they age. The project could also be extended to detect and identify animals of endangered species, so that their populations can be tracked.

About me

I am Aparna Ajit Gupte, a grade 12 student of National Public School - Indiranagar, Bengaluru, Karnataka, India. I am passionate about computer science, especially machine learning and artificial intelligence. Dinner table discussions with my father, a computer vision specialist, on the latest advances in the field of machine learning and computer vision sparked off my interest, and compelled me to delve deeper into the subject and the rich mathematics behind it. I decided to take the online course, Machine Learning, taught by Professor Andrew Ng on Coursera. The fact that probability and statistics could allow us to mimic the way the human brain learns fascinates me, and I hope to contribute to research in this area in the future. In college, I hope to pursue computer science and mathematics, and continue working on research projects in the field of machine learning. I strongly believe in the power of machine learning and artificial intelligence to transform the way we live in unimaginable ways. Winning the Google Science Fair would mean so much to me - it would open endless opportunities for future work and research. Being able to present my work to some of the leaders in fields across STEM would be an enriching experience, one that I would learn so much from. Further, it would fuel my passion to create technology to make the world a better place.

Health & Safety

Most of the work was done at home with guidance from my father Ajit Deepak Gupte. For taking videos of street dogs, I took care not to go very close to the dogs to avoid dog bites. 

Contact details of my mentor:

Name : Ajit Deepak Gupte

Phone Number : 9845109961

Bibliography, references, and acknowledgements

Acknowledgements

I would like to thank my guide and father, Ajit Deepak Gupte, and my mother, Deepa Nair, for suggesting possible changes to the algorithm that contributed to gains in classification accuracy.

References

• MobileNetSSD
https://github.com/chuanqi305/MobileNet-SSD

• Mask R-CNN
https://github.com/matterport/Mask_RCNN

• OpenCV Documentation
https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html#local-binary-patterns-histograms

• Scikit-learn Documentation
https://scikit-learn.org/stable/modules/svm.html

• Medium
https://medium.com/comet-app/review-of-deep-learning-algorithms-for-object-detection-c1f3d437b852

• Towards Data Science
https://towardsdatascience.com/yolo-you-only-look-once-real-time-object-detection-explained-492dc9230006

• Pyimagesearch
https://www.pyimagesearch.com/2018/11/12/yolo-object-detection-with-opencv/
https://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/
https://www.pyimagesearch.com/2015/12/07/local-binary-patterns-with-python-opencv/