Plant Disease Detection System using CNN

1 Answer

Best answer

PROJECT: PLANT DISEASE DETECTION SYSTEM

Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world. This project detects the type of disease affecting a plant from images of its leaves. The model has been trained on 70,295 images of diseased and healthy plant leaves. Using a convolutional neural network (CNN) built with TensorFlow/Keras, the model predicts whether a plant is healthy or diseased and, if diseased, which disease it is infected with.

The project is built in Jupyter Notebook, an open-source web application for creating and sharing documents that contain live code, equations, visualisations and narrative text.

# importing libraries
import tensorflow as tf
import matplotlib.pyplot as plt
import os
%matplotlib inline
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Input, Flatten, Dropout, Conv2D
from tensorflow.keras.layers import BatchNormalization, Activation, MaxPooling2D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.utils import plot_model
from IPython.display import SVG, Image
 

NOTE

This block of code imports all the libraries required by the model, such as TensorFlow, Keras and matplotlib. TensorFlow 2.2 is used here.
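If you want to confirm which TensorFlow build is installed before running the rest of the notebook, a quick optional check is:

import tensorflow as tf

# print the installed TensorFlow version; this project was built against 2.2
print(tf.__version__)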

# opening the dataset folder and counting the images in each class
train_dir = "C://Users//pc//Downloads//NewPlantDiseaseDataset(Augmented)//NewPlantDiseaseDataset(Augmented)//train//"
for expression in os.listdir(train_dir):
    print(str(len(os.listdir(train_dir + expression))) + " " + expression + " images")

 


OUTPUT

NOTE

This block of code opens the dataset folder and prints the number of images in each class. Two datasets are used: a train set and a validation (test) set, each with 38 classes. The train_dir string is the path to the dataset folder on the local machine. The dataset used is the New Plant Diseases Dataset (Augmented), which is available on Kaggle.
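As a quick sanity check that the dataset was extracted correctly, the class folders can also be counted and totalled; this small sketch reuses the train_dir path defined in the cell above:

# count the class folders and the total number of training images
classes = os.listdir(train_dir)
total_images = sum(len(os.listdir(train_dir + c)) for c in classes)
print("Number of classes:", len(classes))      # expected: 38
print("Total training images:", total_images)  # expected: 70295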

# defining train and validation data generators to train the model
img_size = 48
batch_size = 64

datagen_train = ImageDataGenerator(horizontal_flip=True)
train_generator = datagen_train.flow_from_directory(
    "C://Users//pc//Downloads//NewPlantDiseaseDataset(Augmented)//NewPlantDiseaseDataset(Augmented)//train//",
    target_size=(img_size, img_size), batch_size=batch_size,
    class_mode='categorical', shuffle=True)

datagen_validation = ImageDataGenerator(horizontal_flip=True)
validation_generator = datagen_validation.flow_from_directory(
    "C://Users//pc//Downloads//NewPlantDiseaseDataset(Augmented)//NewPlantDiseaseDataset(Augmented)//valid//",
    target_size=(img_size, img_size), batch_size=batch_size,
    class_mode='categorical', shuffle=True)

OUTPUT

NOTE

This part of the code feeds the train and validation data into the model. The train dataset has 70,295 images belonging to 38 classes and the validation dataset has 17,572 images belonging to 38 classes.
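To verify that the generators produce batches of the expected shape, one batch can be pulled and inspected; a minimal sketch using the generators defined above:

# pull one batch from the training generator and inspect it
images, labels = next(train_generator)
print(images.shape)                        # (64, 48, 48, 3): 64 RGB images of 48x48 pixels
print(labels.shape)                        # (64, 38): one one-hot label vector per image
print(len(train_generator.class_indices))  # 38 classes in total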

#initialising CNN
model=Sequential()
#conv-1
model.add(Conv2D(64,(3,3), padding='same', input_shape= (48,48,3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
#conv-2
model.add(Conv2D(128,(5,5), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
#conv-3
model.add(Conv2D(512,(3,3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
#conv-4
model.add(Conv2D(512,(3,3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))

model.add(Dense(38, activation='softmax'))
opt=Adam(lr=0.0005)

NOTE

  1. The Sequential model API is a way of creating deep learning models in which a Sequential object is created and layers are added to it one after another.
  2. Stacking several such layers in a CNN generally improves feature extraction and therefore the accuracy of the model.
  3. The convolutional layers use ReLU activation and the final Dense layer uses a softmax activation function. The input_shape of (48, 48, 3) represents 48x48 pixel images with three RGB channels.
  4. Each convolutional layer is followed by a MaxPooling layer.
  5. The images pass through several convolution-pooling blocks so the network can extract progressively higher-level features, as sketched below.
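Because the four convolutional blocks all follow the same Conv -> BatchNorm -> ReLU -> MaxPool -> Dropout pattern, the same network can also be written with a small helper function. The following is an equivalent sketch of the architecture above (not a different model) that makes the shrinking feature-map sizes explicit:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, Activation,
                                     MaxPooling2D, Dropout, Flatten, Dense)

def conv_block(model, filters, kernel_size, **kwargs):
    """Add one Conv -> BatchNorm -> ReLU -> MaxPool -> Dropout block."""
    model.add(Conv2D(filters, kernel_size, padding='same', **kwargs))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))  # halves the spatial size
    model.add(Dropout(0.25))

model = Sequential()
conv_block(model, 64, (3, 3), input_shape=(48, 48, 3))  # 48x48 -> 24x24
conv_block(model, 128, (5, 5))                          # 24x24 -> 12x12
conv_block(model, 512, (3, 3))                          # 12x12 -> 6x6
conv_block(model, 512, (3, 3))                          # 6x6  -> 3x3

model.add(Flatten())             # 3x3x512 -> 4608 features
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(38, activation='softmax'))  # one probability per class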
#compiling the model
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

NOTE

  • An optimizer adjusts the model's weights during training; a suitable optimizer and learning rate improve both the quality and the speed of training. Here the Adam optimizer with a learning rate of 0.0005 is used, as shown in the sketch below.
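For reference, the same compile step can be written with the newer learning_rate argument name (in recent TensorFlow releases lr is only kept as a deprecated alias); a minimal sketch:

from tensorflow.keras.optimizers import Adam

# 'lr' is a deprecated alias for 'learning_rate' in newer TensorFlow releases
opt = Adam(learning_rate=0.0005)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',  # matches class_mode='categorical' (one-hot labels)
              metrics=['accuracy'])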
epochs = 15
steps_per_epoch = train_generator.n // train_generator.batch_size
validation_steps = validation_generator.n // validation_generator.batch_size

# save the best weights and reduce the learning rate when validation loss plateaus
checkpoint = ModelCheckpoint("model_weights.h5", monitor="val_accuracy",
                             save_weights_only=True, mode='max', verbose=1)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2,
                              min_lr=0.00001, mode='auto')

history = model.fit(
    x=train_generator, steps_per_epoch=steps_per_epoch, epochs=epochs,
    validation_data=validation_generator, validation_steps=validation_steps,
    callbacks=[checkpoint, reduce_lr])

#saving the model
model.save('my_disease.h5')

OUTPUT

NOTE

  1. In this part of the code the model is trained; the accuracy and loss recorded for each epoch can later be read from the returned history object, as sketched below.
  2. epochs = 15 means the model passes over the entire training set 15 times.
  3. By saving the model there is no need to train it again and again; it can simply be loaded for prediction.
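The accuracy and loss recorded during training are stored in the history object returned by model.fit, so they can be plotted with matplotlib; a minimal sketch, assuming the training cell above has been run:

# plot training/validation accuracy and loss per epoch
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 4))

plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.legend()

plt.show()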
from tensorflow.keras.models import load_model
classifier = load_model('my_disease.h5')

NOTE

This loads the saved model so that it can be used for prediction.
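If you also want an overall accuracy figure for the loaded model, it can be evaluated on the validation data; a small sketch, assuming the validation_generator defined earlier is still in memory:

# evaluate the loaded model on the validation data
loss, accuracy = classifier.evaluate(
    validation_generator,
    steps=validation_generator.n // validation_generator.batch_size)
print("Validation accuracy: {:.2%}".format(accuracy))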

import numpy as np
from tensorflow.keras.preprocessing import image
import matplotlib.pyplot as plt

# load and display the test image
path = "C://Users//pc//Downloads//plant_test//PotatoEarlyBlight1.jpg"
test_image = image.load_img(path)
plt.imshow(test_image)

# preprocess the image to match the model's input shape
test_img = image.load_img(path, target_size=(48, 48))
test_img = image.img_to_array(test_img)
test_img = np.expand_dims(test_img, axis=0)

# predict and map the predicted index back to its class name
result = classifier.predict(test_img)
a = result.argmax()
name = list(train_generator.class_indices.keys())
p = name[a]
p

OUTPUT

NOTE

  1. In this code we take a test image and display it with the help of matplotlib.
  2. The selected image is preprocessed: resized to 48x48, converted to an array and expanded to a batch of one.
  3. train_generator.class_indices maps each of the 38 class names to an index from 0 to 37.
  4. After processing, the model outputs the name of the disease shown in the image, which here is potato early blight; the prediction confidence can also be inspected, as sketched below.
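The result array returned by classifier.predict holds one softmax probability per class, so the confidence of the prediction can be inspected as well; a small sketch building on the variables from the cell above:

# inspect the softmax probabilities behind the prediction
probabilities = result[0]                  # shape (38,): one probability per class
top_index = int(np.argmax(probabilities))
class_names = list(train_generator.class_indices.keys())
print("Predicted class:", class_names[top_index])
print("Confidence: {:.2%}".format(probabilities[top_index]))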
Comment: Hey, thanks for sharing the knowledge. I am quite interested in knowing the accuracy of this model; could you please share how to find it? Thank you in advance.
