1.
Sci Rep ; 11(1): 15756, 2021 Aug 03.
Article in English | MEDLINE | ID: mdl-34344983

ABSTRACT

Crop variety identification is an essential step in seed detection, phenotype collection and scientific breeding. This paper takes peanut as an example to explore a new method for crop variety identification. Peanut is a crucial oil and cash crop; because yield and quality differ among peanut varieties, it is necessary to identify and classify them. Traditional image-processing methods for peanut variety identification require extracting many hand-crafted features and suffer from strong subjectivity and insufficient generalization ability. Based on deep learning, this paper improves the deep convolutional neural network VGG16 and applies the improved VGG16 to the identification and classification of 12 peanut varieties. First, peanut pod images of the 12 varieties acquired with a scanner were preprocessed with gray-scale conversion, binarization, and ROI extraction, forming a peanut pod data set of 3365 images. A series of improvements were then made to VGG16: the F6 and F7 fully connected layers were removed; a Conv6 block and a global average pooling layer were added; the three convolutional layers of conv5 were replaced by a depth concatenation; and Batch Normalization (BN) layers were added to the model. In addition, the improved VGG16 was fine-tuned by adjusting the position of the BN layers and the number of filters in Conv6. Finally, the training and test results of the improved VGG16 were compared with those of the classic models AlexNet, VGG16, GoogLeNet, ResNet18, ResNet50, SqueezeNet, DenseNet201 and MobileNetV2 to verify its superiority. The average accuracy of the improved VGG16 on the peanut pod test set was 96.7%, which was 8.9% higher than the original VGG16 and 1.6-12.3% higher than the other classical models. Supplementary experiments were carried out to demonstrate the robustness and generality of the improved VGG16: applied with the same method to the identification and classification of seven corn grain varieties, it achieved an average accuracy of 90.1%. The experimental results show that the improved VGG16 proposed in this paper can identify and classify peanut pods of different varieties, proving the feasibility of convolutional neural networks for variety identification and classification. The model proposed in this work also has positive significance for exploring the identification and classification of other crop varieties.
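
For orientation, the following is a minimal PyTorch sketch of the kind of VGG16 modification the abstract describes (fully connected layers removed, a Conv6 block and global average pooling added, BN layers inserted). The filter count for Conv6, the omission of the conv5 depth-concatenation change, and all other layer details are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a modified VGG16 head, assuming details not given
# in the abstract (e.g. 512 filters in Conv6, no conv5 depth concatenation).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class ModifiedVGG16(nn.Module):
    def __init__(self, num_classes: int = 12, conv6_filters: int = 512):
        super().__init__()
        # Keep the convolutional backbone of VGG16; drop its classifier,
        # i.e. the F6/F7 fully connected layers mentioned in the abstract.
        self.backbone = vgg16(weights=None).features
        # Added Conv6 block with Batch Normalization.
        self.conv6 = nn.Sequential(
            nn.Conv2d(512, conv6_filters, kernel_size=3, padding=1),
            nn.BatchNorm2d(conv6_filters),
            nn.ReLU(inplace=True),
        )
        # Global average pooling replaces the fully connected layers.
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(conv6_filters, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.backbone(x)
        x = self.conv6(x)
        x = self.gap(x).flatten(1)
        return self.fc(x)

model = ModifiedVGG16(num_classes=12)
logits = model(torch.randn(1, 3, 224, 224))  # one preprocessed pod image
```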


Subject(s)
Algorithms , Arachis/chemistry , Arachis/classification , Deep Learning , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Phenotype
2.
Food Chem ; 360: 129968, 2021 Oct 30.
Article in English | MEDLINE | ID: mdl-34082378

ABSTRACT

Aflatoxin commonly occurs in moldy foods and is classified as a Group 1 carcinogen by the World Health Organization. In this paper, we used a one-dimensional convolutional neural network (1D-CNN) to classify whether a pixel contains aflatoxin. First, we found that the best combination of 1D-CNN parameters was epoch = 30, learning rate = 0.00005 and ReLU as the activation function; the highest test accuracy reached 96.35% for peanut, 92.11% for maize and 94.64% for the mixed data set. We then compared the 1D-CNN with feature selection and with methods from other papers; the results show that the neural network improves detection performance considerably over feature selection. Finally, we visualized the classification results of the differently trained 1D-CNN networks. This research provides the core algorithm for an intelligent sorter with aflatoxin detection capability, which is of positive significance for grain processing and for pre-export detoxification by foreign trade enterprises.
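
As a rough illustration of the approach, here is a small PyTorch sketch of a 1D-CNN that classifies a single pixel spectrum as clean or contaminated. The layer widths, kernel sizes and the 600-band input length are assumptions (the band count is borrowed from the third abstract below); only the learning rate and epoch count come from this abstract.

```python
# Illustrative per-pixel 1D-CNN; architecture details are assumed, not the
# authors' network.
import torch
import torch.nn as nn

class Pixel1DCNN(nn.Module):
    def __init__(self, n_bands: int = 600):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (n_bands // 4), 2),  # 'clean' vs. 'contaminated'
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_bands) pixel spectra
        return self.net(x)

model = Pixel1DCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # lr from the abstract
loss_fn = nn.CrossEntropyLoss()
# Training would loop over 30 epochs, as reported in the abstract.
```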


Subject(s)
Aflatoxins/analysis , Hyperspectral Imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Food Analysis , Food Contamination/analysis , Humans
3.
Spectrochim Acta A Mol Biomol Spectrosc ; 234: 118269, 2020 Jun 15.
Article in English | MEDLINE | ID: mdl-32217452

ABSTRACT

Aflatoxin is highly toxic and is easily found in maize; even a small amount can induce liver cancer. In this paper, we used pixel-level hyperspectral data to build an aflatoxin classification model; each pixel has 600 hyperspectral bands and is labeled 'clean' or 'contaminated'. We used three methods to obtain feature bands: the first selects four hyperspectral bands reported in other articles (390 nm, 440 nm, 540 nm and 710 nm); the second uses PCA feature extraction and keeps the first 5 principal components to reduce the hyperspectral volume; the third uses the Fscnca, Fscmrmr, ReliefF and Fisher algorithms to select the top 10 feature bands. After feature band selection or extraction, the feature bands were fed into Random Forest (RF) and K-nearest neighbor (KNN) classifiers to decide whether a pixel is polluted by aflatoxin. Among the feature selection methods, ReliefF achieved the highest accuracy, reaching 99.38% with the RF classifier and 98.77% with the KNN classifier. PCA feature extraction with the RF classifier also reached a high accuracy of 93.83%, and the full 600 bands without feature extraction reached an accuracy of 100%. Feature bands selected from other papers reached an accuracy of 89.51%. The results show that feature extraction performs well on its own data set, and, if computing time is not taken into account, the full band set can be used to classify aflatoxin because of its high accuracy.
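
The pipeline described above (rank or project the 600 bands, then classify each pixel with RF or KNN) can be sketched with scikit-learn as follows. The data arrays are placeholders, and mutual information is used as a stand-in band-ranking score because ReliefF, Fscnca, Fscmrmr and the Fisher score are not part of scikit-learn; this is an assumption-laden sketch, not the paper's code.

```python
# Hedged sketch of the band-selection / pixel-classification pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(1000, 600)      # placeholder pixel spectra (600 bands)
y = np.random.randint(0, 2, 1000)  # placeholder clean/contaminated labels

# Option 1: rank bands and keep the top 10 (mutual information stands in for
# the ReliefF / Fscnca / Fscmrmr / Fisher rankings used in the paper).
scores = mutual_info_classif(X, y)
X_sel = X[:, np.argsort(scores)[-10:]]

# Option 2: PCA feature extraction, keeping the first 5 principal components.
X_pca = PCA(n_components=5).fit_transform(X)

# Classify the selected bands with RF and KNN, as in the paper.
Xtr, Xte, ytr, yte = train_test_split(X_sel, y, test_size=0.3, random_state=0)
for clf in (RandomForestClassifier(random_state=0), KNeighborsClassifier()):
    print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))
```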


Subject(s)
Aflatoxins/analysis , Algorithms , Hyperspectral Imaging , Zea mays/chemistry , Neural Networks, Computer , Principal Component Analysis