LeafAI: A convolutional neural network based approach to plant disease diagnosis


     In the 21st century, nearly 75% of crop output comes from just 12 plant species; the homogeneity of the agricultural system, combined with climate change, has amplified the impact of plant disease. The key to reducing plant disease damage is early detection. Current diagnostic methods rely on visual inspection by plant pathologists and/or expensive, time-consuming lab-based methods like ELISA and PCR. The goals of the project were to (1) develop a machine learning algorithm that identifies plant disease with 80%+ accuracy and (2) apply the algorithm in a mobile app and server application so that smartphone images can be used for disease detection.

     Convolutional Neural Networks (CNNs) have developed into a powerful tool in machine learning. In phase 2, the optimal CNN configuration was determined by varying the network architecture (AlexNet or custom), dataset type (RGB, grayscale, or segmented), and training method (transfer learning or from scratch) across 9 CNNs. The resulting application (PlantNet), trained using transfer learning on 50,000+ RGB images, classifies plant disease with an overall accuracy of 95.40%. In phase 3, the PlantNet algorithm was integrated with a mobile application and server system through Python Flask. There are more than 3.4 billion smartphone subscriptions worldwide; whether integrated into smartphones or into drones for continuous monitoring, PlantNet can provide universal access to vital diagnostics. The goal of LeafAI is to bring our world closer to food, and consequently economic, security by enhancing current plant disease detection methods.

Question / Proposal

     Nearly 75% of our total crop output comes from just 12 plant species. The homogeneity of the current agricultural system, combined with the effects of climate change, has led to a growing threat: plant disease. The key to reducing the damage of plant disease lies in early and accurate detection. Current diagnostic tools such as ELISA and PCR are time-consuming and expensive, and visual inspection by plant pathologists is often a scarce resource in developing countries. We need an inexpensive, widespread, and fast plant disease identification method.

Prior machine learning approaches to plant disease detection relied on manual feature extraction, were trained to identify fewer than 10 classes, and failed to consider the effects of angle, lighting, and scale on the practicality of plant disease classification.

Primary Question: Can a machine learning algorithm identify biotic plant diseases from smartphone pictures with 80% overall accuracy in under 2 minutes?

Hypothesis: Since Convolutional Neural Networks (CNNs) have become powerful tools in image classification (even exceeding human accuracy in general object detection on ImageNet in 2015), we can train a CNN to identify 38 classes of plant disease from images of different angles, resolutions, and scales with an overall accuracy of 80%+ in under 2 minutes. Additionally, it is predicted that by 2020 there will be 5 billion smartphone subscriptions, making this a widespread tool for urban gardeners and subsistence farmers alike.

To effectively develop the classifier and application, the project was divided into 3 phases.


Hundreds of millions of people in the world suffer from malnutrition. In addition, the food supply is concentrated around 14 staple crops. Plant diseases account for approximately a 10-30% loss in plant productivity, which translates to millions to billions of dollars of economic loss.

Plant Pathology

Plant disease is defined as “a suboptimal growth on a plant due to the presence of a continuous irritant,” whether abiotic or biotic. Biotic plant diseases are caused by viruses, bacteria, and fungi and can manifest on any part of the plant: leaf, stem, shoot, or fruit. Typically, plant disease is diagnosed through visual inspection of signs (the visible pathogen) and symptoms (alterations to the plant). Symptoms may include abnormal tissue coloration, tissue death, defoliation, and wilting. Proper diagnosis is essential for disease management. Due to dataset availability, this project explored biotic plant disease diagnosis using symptoms appearing on plant leaves.

Machine Learning

The essence of machine learning is finding the function f that relates the input x to the output y by designing a hypothesis function h(x). The goal is to find model parameters that minimize the loss function using gradient descent and backpropagation. The process is divided into training and testing. In the training portion, the network starts with random weights/biases and, based on the training dataset, iteratively adjusts them through gradient descent. In the testing portion, the network’s performance is evaluated on an unseen dataset using statistics such as the mean F1 score, precision, and recall.
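The training loop described above can be sketched with a toy one-feature linear model; the data and learning rate here are hypothetical stand-ins, not the project's actual network:

```python
import numpy as np

# Toy dataset, roughly y = 2x (hypothetical numbers, for illustration only)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 2.0, 3.9, 6.1, 8.0])

w, b = 0.0, 0.0      # start from arbitrary parameters (the "random weights")
lr = 0.01            # learning rate

for _ in range(2000):
    pred = w * x + b                  # hypothesis h(x)
    error = pred - y
    grad_w = 2 * np.mean(error * x)   # gradient of the mean squared error loss
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w                  # gradient descent update
    b -= lr * grad_b

print(round(w, 1))  # close to 2.0, the slope hidden in the data
```

A CNN follows the same loop, only with millions of parameters and a gradient computed layer by layer via backpropagation.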

Deep learning requires large amounts of data (hundreds of thousands of examples) and high computing power (GPUs). The primary differences between regular neural networks and deep learning algorithms are that the latter have more than 2-3 hidden layers and do not require manual feature extraction.

Convolutional Neural Networks

In convolutional neural networks (CNNs), the input is a matrix of pixel brightness values [w, h, c] and the output is a vector of class probabilities. The hidden layers repeatedly multiply activations by learned weights (convolutions and matrix multiplications) and apply nonlinear activation functions to produce the class probabilities.
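A minimal forward pass can be sketched in numpy; the tiny image, single filter, and 3 output classes here are all hypothetical, chosen only to show the shapes involved:

```python
import numpy as np

def softmax(z):
    # Turn raw scores into a probability vector that sums to 1
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical 4x4 grayscale image of pixel brightness values
img = np.array([[0.1, 0.2, 0.1, 0.0],
                [0.2, 0.9, 0.8, 0.1],
                [0.1, 0.8, 0.9, 0.2],
                [0.0, 0.1, 0.2, 0.1]])

# One 2x2 convolutional filter (learned weights); real CNNs learn many
kernel = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])

# "Valid" convolution: slide the filter over the image
feat = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        feat[i, j] = np.sum(img[i:i+2, j:j+2] * kernel)

flat = np.maximum(feat, 0).flatten()   # ReLU activation, then flatten

# Fully connected layer mapping features to 3 hypothetical classes
W = np.random.default_rng(0).normal(size=(3, flat.size))
probs = softmax(W @ flat)              # vector of class probabilities
```

Stacking many such convolution and fully connected layers, with learned rather than random weights, is what lets a deep CNN classify 38 disease classes.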

Existing Solutions

Digital tools such as the CABI Crop Compendium, the Purdue Plant Doctor app, and the Leaf Doctor app (University of Hawaii) are neither individualized nor scalable, and they rely on manual analysis. In Raza et al. (2015), a support vector machine was built to identify tomato powdery mildew using hyperspectral images; this method relies on a lengthy manual feature extraction process through PCA, which deep learning frameworks avoid. Prior literature does not describe any attempt to build a practical application around such a plant disease diagnosis algorithm. Studies have not considered the effects of image angle, lighting, and scale on the practicality of image classification, and networks were typically not trained to identify a large number of classes (e.g., 38).

Method / Testing and Redesign

Data Acquisition

The PlantVillage dataset, a collection of 54,306 images of diseased and healthy plant leaves assembled through the Pennsylvania State University extension program, was used to train and test the network. Three versions of the dataset were generated to determine whether the network relied mainly on color-based, texture-based, or extraneous factors during classification. Additionally, the images were randomly sorted into training and testing sets in a ratio of 7:3.
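The 7:3 split can be sketched as follows; the file names and labels below are hypothetical placeholders for the real dataset:

```python
import random

# Hypothetical stand-ins for the 54,306 labeled leaf images (38 classes)
samples = [(f"img_{i:05d}.jpg", i % 38) for i in range(54306)]

random.seed(42)              # fixed seed for a reproducible shuffle
random.shuffle(samples)      # random assignment, as in the project

split = int(0.7 * len(samples))          # 7:3 train/test ratio
train, test = samples[:split], samples[split:]

print(len(train), len(test))  # 38014 16292
```

Shuffling before splitting matters: without it, whole classes could end up only in the test set.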

Phase 2

The convolutional neural network architecture (AlexNet or the custom MyNet), method of training (from scratch or transfer learning), and dataset type (RGB, grayscale, or segmented) all influence the performance of the deep learning algorithm. The goal of phase 2 was to determine the optimal CNN configuration. Since 9 different CNN configurations were created, training hyperparameters were held constant, and the overall accuracy (the average of training and testing accuracies) and mean F1 scores were compared. See detailed procedures in the presentation.

Afterwards, 207 images of diseased and healthy plants (5-7 images per class) were downloaded using Bing's automatic download feature. This tested how the classifiers would perform, without additional training, on a noisy, unfamiliar dataset representative of the real world.

Phase 3

The goal of phase 3 was to apply the deep learning algorithm in a mobile/web application with a server system. The first step was to create an executable function; however, I was unable to do so because I lacked access to a MATLAB compiler. After further research, I realized I could approach the problem in 2 ways: (A) running the program on a desktop computer or (B) using an engine API to call MATLAB from another platform. I decided to use Python Flask. Here's a schematic of the app and server.
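A minimal sketch of the server side of such a pipeline is shown below; `classify_leaf` is a hypothetical placeholder for the call into the trained model (the real project reaches MATLAB through its engine API rather than returning a fixed label):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify_leaf(image_bytes):
    # Hypothetical placeholder: the real implementation would hand the
    # image to the trained PlantNet model and return its predicted class.
    return "Tomato___healthy"

@app.route("/diagnose", methods=["POST"])
def diagnose():
    image = request.files["leaf"].read()   # photo uploaded by the app
    return jsonify({"diagnosis": classify_leaf(image)})
```

The mobile app POSTs the photo to `/diagnose` and displays the returned label; the server can be started locally with `flask run`.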


Phase 2 Results

Training Results

The training plot of PlantNet (AlexNet trained using transfer learning on the RGB dataset) is displayed below. During training, the network learns the model parameters. Accuracy rises sharply within the first few iterations thanks to transfer learning. By the end of training, the network had an accuracy of 98.5%. However, training took several days on a CPU; it could be accelerated with a GPU (e.g., an NVIDIA GTX card) and parallel processing.


The convolutional neural network configuration of AlexNet trained through transfer learning on RGB images produced the highest overall accuracy of 98.1% on the PlantVillage dataset, surpassing the initial hypothesis. Additionally, AlexNet consistently outperformed the MyNet architecture. The network performed best on the RGB dataset, indicating that it relies on color features such as abnormal discoloration for classification.

On the Bing dataset, the most accurate classifier, PlantNet, achieved a testing accuracy of 24.10%. This low accuracy can be attributed to extraneous factors such as background clutter, lack of previous exposure, and overfitting. The Bing dataset contained images of leaves against natural green backgrounds, unlike the data PlantNet was trained on, indicating that PlantNet is prone to overfitting. Overfitting occurs when the model fits the training data too closely and cannot generalize. It can be reduced through cross-validation (k-folds), training with more data, removing extraneous features, stopping training early, and adding a regularization term. In the future, I plan to use image segmentation, the process of removing certain features of the image or isolating certain portions of it (e.g., via k-means clustering), to reduce noise.

Phase 3

Here is a demo of the application.



     According to my experiments, the PlantNet model achieved the highest overall accuracy of 95.4%, surpassing the initial design criteria. Even though training took several days, classification of an individual image takes under 1 second. Without any manual feature extraction, the model correctly classifies 38 classes spanning 14 crops and 26 diseases. Generally, training with RGB images, a pre-trained network with many hidden layers, and transfer learning produced the highest accuracies.

     However, there is still potential to improve the overall accuracy of PlantNet and reduce overfitting by training it with a more diverse set of images, utilizing data augmenters, and performing image pre-processing. PlantNet has several possible applications: an iOS app coupled with a server, drone-based tracking, or even integration with a lab-on-chip (ELISA) method for complete diagnosis. Overall, I learned a lot from the experiment, and I plan to continue it further to make a difference in our world.

Next Steps

In the future, I would like to improve the functionality of the iOS/Android app by integrating an alerting system based on crowdsourced geospatial data for farmers and a directory of local plant pathology and aid organization resources. Alongside this, I would like to reduce the amount of overfitting (high variance):

  • K folds cross validation to optimize the hyperparameters
  • Data augmentation (reflections, angles), including through GANs, to expand the training set
  • Ending training early to prevent the memorization of noise
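The k-folds idea above can be sketched as follows; the 20-sample toy dataset and the placeholder scoring are hypothetical, since a real run would train PlantNet on each training split and measure its accuracy on the held-out fold:

```python
import random

# Toy stand-in for the labeled image dataset
random.seed(0)
data = list(range(20))
random.shuffle(data)

k = 5
fold_size = len(data) // k
folds = [data[i * fold_size:(i + 1) * fold_size] for i in range(k)]

accuracies = []
for i in range(k):
    held_out = folds[i]                                    # validation fold
    train_set = [x for j in range(k) if j != i for x in folds[j]]
    # Placeholder score: a real run would train on train_set and
    # evaluate on held_out, then average the k accuracies.
    accuracies.append(len(train_set) / len(data))

mean_accuracy = sum(accuracies) / k
print(f"mean accuracy across {k} folds: {mean_accuracy:.2f}")
```

Because every sample serves in a validation fold exactly once, the averaged score is a less noisy estimate than a single train/test split, which makes it a sounder basis for picking hyperparameters.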

     To increase the practicality of the solution, I will explore using multiple neural networks divided by crop type, image pre-processing algorithms like image segmentation or a user-based framing approach, and adding a confidence threshold before returning a disease label.
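The thresholding idea can be sketched as below; the labels, probabilities, and 0.6 cutoff are hypothetical, chosen only to illustrate the policy of refusing low-confidence predictions:

```python
import numpy as np

# Hypothetical class labels (a slice of the 38 PlantVillage classes)
LABELS = ["Apple___scab", "Apple___healthy", "Tomato___early_blight"]

def diagnose_with_threshold(probs, threshold=0.6):
    """Return a label only when the top class probability clears the bar;
    otherwise ask the user for a better photo (hypothetical policy)."""
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "uncertain: please retake the photo"
    return LABELS[best]

confident = diagnose_with_threshold(np.array([0.91, 0.06, 0.03]))
unsure = diagnose_with_threshold(np.array([0.40, 0.35, 0.25]))
print(confident)  # Apple___scab
print(unsure)     # uncertain: please retake the photo
```

Refusing to answer on low-confidence inputs trades coverage for reliability, which matters when a wrong diagnosis could prompt a farmer to spray the wrong treatment.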

      Although identification of visual symptoms is key to managing plant disease, finding evidence of disease before any visual symptoms appear would be an agricultural breakthrough. Volatile organic compounds (VOCs) are released by plants during inoculation, and thermal imaging has been researched for tomato powdery mildew detection. To create a deep learning algorithm for early detection, I would collect thermal imaging and volatile organic compound data.

    Overall, the PlantNet deep learning algorithm has great potential for application in mobile apps and drones for urban gardeners and subsistence farmers alike. Even though PlantNet does not solve the problem of plant disease detection entirely, it brings us one step closer to food, and consequently economic, security for all by providing a basis for a widespread smartphone-based plant disease diagnostic method.

About me

Hello, my name is Maanasa Mendu. I am 16 years old and am from Mason, OH. I love doing research (evidently), reading science fiction, participating in my school's Science Olympiad team, and taking long walks!

Honestly, I am a "novice" coder; I started very late compared to most people. Drawn in by the possibilities of machine learning, I started out by watching Coursera videos, practicing on Codecademy, and reading intro-to-Python books. I had no idea that I could actually create a program, let alone a neural net. I guess all it takes for a journey of 1000 miles is a single step! I love the sheer joy that overcomes you when a program or prototype finally works.

I am inspired by countless scientists, including Marie Curie, Andrei Sakharov, and Rachel Carson. Marie Curie was not only a revolutionary in her field but also overcame discrimination and continues to inspire female scientists around the world. Rachel Carson was a vocal scientist who went against society's beliefs in her fight against DDT and transformed the world.

In the future, I want to pursue an M.D./Ph.D. or become an environmental engineer. Regardless, I plan on staying in STEM, conducting research, and hopefully making a difference in people's lives.

Winning anything at the Google Science Fair would be amazing. Apart from the monetary prize, the recognition could serve as a bridge between my project and the real world.

Health & Safety

This project was done independently at home with remote (email) assistance from experts. Since the project dealt only with image pre-processing and machine learning, no specific safety procedures were required.

Bibliography, references, and acknowledgements

Thanks to…

  • My parents for always supporting me and encouraging me to pursue STEM!
  • Dr. Lewandowski (OSU Plant Pathology department) for answering questions and referring me to the Bugwood database.
  • Mrs. Young for overseeing the science fair program and reviewing my research plan.


1. Altieri, Miguel & Koohafkan, Parviz. (2008). Enduring Farms: Climate Change, Smallholders and Traditional Farming Communities.

2. Strange, Richard N., and Peter R. Scott. “Plant Disease: A Threat to Global Food Security.” Annual Review of Phytopathology, vol. 43, no. 1, 2005, pp. 83–116, doi:10.1146/annurev.phyto.43.113004.133839.

3. Krizhevsky, Alex, et al. “ImageNet Classification with Deep Convolutional Neural Networks.” Communications of the ACM, vol. 60, no. 6, 2017, pp. 84–90, doi:10.1145/3065386.

4. “Turfgrass Pathology Program.” Introduction to Plant Disease Fact Sheet Series | Turfgrass Pathology Program, turfdisease.osu.edu/news/introduction-plant-disease-fact-sheet-series

5. “CS231n Convolutional Neural Networks for Visual Recognition.” CS231n Convolutional Neural Networks for Visual Recognition, cs231n.github.io/.

6. Hughes, David. “An Open Access Repository of Images on Plant Health to Enable the Development of Mobile Disease Diagnostics.” Arxiv: Computers and Society, arxiv.org/abs/1511.08060.

7. Singh, V., & Misra, A. (2017). Detection of plant leaf diseases using image segmentation and soft computing techniques. Information Processing in Agriculture, 4(1), 41-49. doi:10.1016/j.inpa.2016.10.005

8. Geitgey, Adam. “Machine Learning Is Fun! Part 3: Deep Learning and Convolutional Neural Networks.” Medium, Medium, 13 June 2016, medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721.

9. “Machine Learning.” Coursera, www.coursera.org/learn/machine-learning.

10. Raza, S., Prince, G., Clarkson, J. P., & Rajpoot, N. M. (2015). Automatic Detection of Diseased Tomato Plants Using Thermal and Stereo Visible Light Images. PLoS One, 10(4). doi:10.1371/journal.pone.0123262

11. “How Artificial Intelligence and Machine Learning Can Help Farmers Diagnose Crop Diseases?” AI.Business, ai.business/2017/08/30/how-artificial-intelligence-and-machine-learning-can-help-farmers-diagnose-crop-diseases/.

12. Plant Image Analysis | Datasets, www.plant-image-analysis.org/dataset.

13. Mwebaze, Ernest, and Godliver Owomugisha. 2016. “Machine Learning for Plant Disease Incidence and Severity Measurements from Leaf Images.” 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA). doi:10.1109/icmla.2016.0034.

14. Lenné, J. M. (n.d.). Food security and agrobiodiversity management. Agrobiodiversity management for food security: a critical review, 12-25. doi:10.1079/9781845937614.0012

15. Yeh, Y. F., Chung, W., Liao, J., Chung, C., Kuo, Y., & Lin, T. (2013). A Comparison of Machine Learning Methods on Hyperspectral Plant Disease Assessments. IFAC Proceedings Volumes, 46(4), 361-365. doi:10.3182/20130327-3-jp-3017.00081

16. Yang, Xin & Guo, Tingwei. (2017). Machine learning in plant disease research. European Journal of BioMedical Research. 3. 6. 10.18088/ejbmr.3.1.2017.pp6-9.

17. Alexnet. (n.d.). Retrieved February 18, 2018, from https://www.mathworks.com/help/nnet/examples/transfer-learning-using-alexnet.html