Identification of diabetic retinopathy using deep learning algorithm and blood vessel extraction
Short Communication - African Journal of Diabetes medicine (2021)
*Corresponding Author:
Published: 30-Sep-2021, DOI: 10.54931/2053-4787.29-S1-2
Retinal blood vessel and retinal vessel tree segmentation are significant components of disease identification systems. Diabetic retinopathy is detected by identifying haemorrhages in the blood vessels. Robust vessel segmentation aids the image segmentation process and improves the accuracy of the system. This paper uses the Edge Enhancement and Edge Detection method for blood vessel extraction. It handles drusen, exudates, vessel contrast variations and artifacts. After the blood vessels are extracted, the dataset is fed into a CNN called EyeNet to identify DR-infected images. It is observed that EyeNet achieves a sensitivity of about 90.02%, a specificity of about 98.77% and an accuracy of about 96.08%.
Keywords: Retina; Deep learning; CNN; DR
Diabetic retinopathy is a complication of diabetes that affects the eye. An automated system was developed for reliable detection of the disease using fundus images and segmentation. The location of anomalies in the fovea is identified, which is helpful for diagnosis. The detection of retinal components was carried out as part of the overall system development, and the results have been published.1
Removing the usual retinal components, namely the blood vessels, fovea and optic disc, allows lesions to be identified. Different techniques have been described for blood vessel extraction, namely Edge Enhancement and Edge Detection, Modified Matched Filtering, the Continuation Algorithm and Image Line Cross-Section. Diabetic retinopathy is a serious eye disorder that can lead to blindness in people of working age. This study introduces a technique for segmenting retinal vessels that is applicable to retinal image analysis.2,3
A multilayer neural network that takes the three primary colour components of the image, namely red, green and blue, as inputs is used to identify and segment retinal blood vessels. The backpropagation algorithm is used, which provides a reliable method for adjusting the weights in a feed-forward network. Deep convolutional neural networks have recently demonstrated superior image classification performance compared with methods based on hand-crafted feature extraction.4
Thus, the authors investigated a convolutional neural network for the automatic classification of diabetic retinopathy from colour fundus pictures and found that it achieved an accuracy of 94.5% on their dataset, outperforming traditional approaches.
The authors proposed morphological processing, thresholding, edge detection and adaptive histogram equalization to segment and extract blood vessels from retinal images. They created a convolutional neural network architecture for accurately classifying the severity of DR from the fundus picture for automatic diagnosis. The blood vessels and the affected areas are visually similar to some extent, so a computer application may fail to identify the correct symptom. This can also reduce the accuracy achieved when a deep learning algorithm is trained on fundus images to identify certain diseases. The objective of this paper is to feed blood-vessel-extracted images to a convolutional neural network that identifies, based on haemorrhages, whether a given image is affected by diabetic retinopathy.5
The blood vessel extraction stage uses the Edge Enhancement and Edge Detection (EEED) technique to extract the blood vessels from the fundus image, as illustrated in Figure 1.
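The text does not spell out the individual EEED steps, so the following is only a minimal illustrative sketch of an edge-enhancement-plus-edge-detection style pipeline using OpenCV; the choice of the green channel, the CLAHE parameters and the Canny thresholds are assumptions for illustration, not the authors' exact method.

```python
# Illustrative sketch of an edge-enhancement + edge-detection style vessel
# extraction (NOT the exact EEED procedure, which the text does not detail).
import cv2
import numpy as np

def extract_vessels(fundus_path: str) -> np.ndarray:
    """Return a binary vessel map from a colour fundus image."""
    bgr = cv2.imread(fundus_path)
    green = bgr[:, :, 1]  # vessels usually show best contrast in the green channel (assumption)

    # Edge enhancement: contrast-limited adaptive histogram equalisation
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    # Suppress background noise before edge detection
    blurred = cv2.GaussianBlur(enhanced, (5, 5), 0)

    # Edge detection: Canny thresholds chosen arbitrarily for illustration
    edges = cv2.Canny(blurred, 30, 70)

    # Light morphological closing to join broken vessel edges
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    vessels = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    return vessels
```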
A convolutional neural network consists of convolution layers, sub-sampling layers, non-linear layers and a fully connected classification network. Features are extracted from each pixel of the image by the network and used to build the feature maps corresponding to the applied input image. A fully connected layer is used at the output stage to make the decision.
Figure 2 represents the EyeNet model. It consists of a zero padding layer, convolutional layers, activation functions with batch normalization, dropout layers, max pooling layers, a global average pooling layer, a fully connected layer and an output layer. To remove boundary issues, rows and columns of zeros are added to the input image before the convolution operation. The convolution operation is performed using a number of kernels with different step sizes to maximize accuracy. The rectified linear unit (ReLU) is used as the non-linear activation function, setting negative elements to zero. Batch normalization normalizes the activations to reduce sensitivity to variations in the input data. The dropout layer is responsible for preventing overfitting and also helps improve generalization. The max pooling layer downsamples the feature maps using a given filter size. The global average pooling layer calculates the average of each feature map. In the fully connected layer, each neuron is connected to every neuron in the next layer. The output layer provides the probability of occurrence of each class for the given input image.
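As a rough sketch of a network assembled from the layer types listed above, the following Keras snippet stacks them in the stated order; the input size, filter counts, kernel sizes and dropout rate are assumptions, since the exact EyeNet hyper-parameters are not reported in the text.

```python
# Rough sketch of an EyeNet-like stack; hyper-parameters are assumed, not taken from the paper.
from tensorflow.keras import layers, models

def build_eyenet_like(input_shape=(224, 224, 3), num_classes=2):
    model = models.Sequential([
        layers.ZeroPadding2D(padding=1, input_shape=input_shape),  # avoid boundary effects
        layers.Conv2D(32, kernel_size=3, strides=1),               # convolution with several kernels
        layers.Activation("relu"),                                  # negative responses set to zero
        layers.BatchNormalization(),                                # reduce sensitivity to input variation
        layers.Dropout(0.25),                                       # mitigate overfitting
        layers.MaxPooling2D(pool_size=2),                           # downsample feature maps

        layers.Conv2D(64, kernel_size=3, strides=1),
        layers.Activation("relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.25),
        layers.MaxPooling2D(pool_size=2),

        layers.GlobalAveragePooling2D(),                            # average of each feature map
        layers.Dense(64, activation="relu"),                        # fully connected layer
        layers.Dense(num_classes, activation="softmax"),            # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A model built this way could then be trained on the vessel-extracted images with `model.fit(...)` and used to predict whether a given fundus image is DR-affected.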
Simulation results
We calculated the following parameters from the results:
• Specificity
• Sensitivity
• Accuracy
Sensitivity
• It is also called the true positive rate, recall, or probability of detection.
• The percentage of abnormal cases that are correctly identified as having the condition.
Sensitivity = Tp/(Tp + Fn) (1)
Specificity
• It is also called the true negative rate.
• The actual negatives that are accurately identified as such.
• The percentage of normal cases that are correctly identified as not having the condition.
Specificity = Tn/(Tn + Fp) (2)
Accuracy
The accuracy is defined as the percentage of correctly classified instances.
Accuracy = (Tp + Tn)/(Tp + Tn + Fp + Fn) (3)
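As a quick worked example of equations (1)-(3), the sketch below computes the three measures directly from true/false positive and negative counts; the counts in the usage comment are made up purely for illustration.

```python
# Worked example of equations (1)-(3); the example counts are illustrative only.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)                     # Eq. (1): true positive rate / recall

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)                     # Eq. (2): true negative rate

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    return (tp + tn) / (tp + tn + fp + fn)    # Eq. (3): fraction correctly classified

# Example with made-up counts: tp=90, fn=10, tn=95, fp=5
# sensitivity(90, 10) -> 0.90, specificity(95, 5) -> 0.95, accuracy(90, 95, 5, 10) -> 0.925
```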
Figures 3-15 show the extracted blood vessel images.
Our EyeNet algorithm was able to produce a sensitivity of about 90.02%, a specificity of about 98.77% and an accuracy of about 96.08%.
Our findings show that an automatic diagnosis of DR can be made by segmenting blood vessels and classifying the images with a CNN. The advantage of a trained CNN is that it can deliver a diagnosis and report faster than an expert can. Classifying normal and severe images is straightforward, but classifying DR as mild or moderate at an early stage is difficult. The significance of this research is that it identifies the disease at an earlier point, saving time and money. The proposed CNN has proven effective in recognizing DR early enough to avoid vision loss, with 96.08% accuracy, a sensitivity of 90.02% and a specificity of 98.77%.
None
None