Results 1 - 20 of 271
1.
Heliyon ; 10(16): e36390, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39262960

ABSTRACT

Biometric systems have gained attention as a more secure alternative to traditional authentication methods. However, these systems are not without technical limitations. This paper presents a hybrid approach that combines edge detection and segmentation techniques to enhance the security of cloud systems. The proposed method uses iris recognition as a biometric paradigm, taking advantage of the unique patterns of the iris. We performed feature extraction and classification using Hamming distance (HD) and convolutional neural networks (CNN). We validated the experimental findings using various datasets, including MMU, IITD, and CASIA Iris Interval V4. We compared the proposed method's results to previous research, demonstrating recognition rates of 99.50% on MMU using CNN, 97.18% on IITD using CNN, and 95.07% on CASIA using HD. These results indicate that the proposed method outperforms classifiers used in previous research, showcasing its effectiveness in improving cloud security services.
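
As a concrete illustration of the HD matching step mentioned above, here is a minimal sketch of the fractional Hamming distance between two binary iris codes; the optional occlusion masks and flat code layout are illustrative assumptions, not details from the paper.

```python
def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes,
    ignoring bits flagged invalid (e.g. eyelid occlusion) in either
    optional mask. 0.0 means identical codes over the valid bits."""
    if mask_a is None:
        mask_a = [1] * len(code_a)
    if mask_b is None:
        mask_b = [1] * len(code_b)
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    disagreements = sum(code_a[i] != code_b[i] for i in valid)
    return disagreements / len(valid)
```

A query code is typically accepted when this distance falls below a decision threshold.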

2.
Front Ophthalmol (Lausanne) ; 4: 1396511, 2024.
Article in English | MEDLINE | ID: mdl-39290775

ABSTRACT

Purpose: To describe the construction and diagnostic accuracy of a modularized, virtual reality (VR)-based pupillometer for detecting relative afferent pupillary defect (RAPD) in unilateral optic neuropathies, vis-à-vis clinical grading by experienced neuro-ophthalmologists. Methods: Protocols for the swinging flashlight test and pupillary light response analysis used in a previous stand-alone pupillometer were integrated into the hardware of a Pico Neo 2 Eye® VR headset with a built-in eye tracker. Each eye of 77 cases (mean ± 1 SD age: 39.1 ± 14.9 years) and 77 age-similar controls was stimulated independently three times for 1 s at 125 lux light intensity, followed by 3 s of darkness. RAPD was quantified as the ratio of the direct reflex of the stronger eye to that of the weaker eye. Device performance was evaluated using standard ROC analysis. Results: The median (25th-75th quartiles) pupil constriction of the affected eye of cases was 38% (17-23%) smaller than that of the fellow eye (p < 0.001), compared with an interocular difference of ±6% (3-15%) in controls. The sensitivity of RAPD detection was 78.5% for the entire dataset, improving to 85.1% when physiological asymmetries in bilateral pupillary miosis were accounted for. Specificity and the area under the ROC curve remained between 81% and 96.3% across all analyses. Conclusions: RAPD may be successfully quantified in unilateral neuro-ophthalmic pathology using a VR-technology-based modularized pupillometer. Such an objective estimation of RAPD provides immunity against biases and variability in clinical grading, enhancing its value for clinical decision making.
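
The RAPD quantification described above (ratio of the stronger eye's direct reflex to the weaker eye's) reduces to a simple ratio once per-eye constriction has been measured; a minimal sketch, with fractional constriction values as an assumed input format:

```python
def rapd_score(constriction_right, constriction_left):
    """RAPD quantified as the ratio of the direct pupillary reflex
    (fractional constriction) of the stronger eye to that of the
    weaker eye; 1.0 indicates perfectly symmetric responses."""
    stronger = max(constriction_right, constriction_left)
    weaker = min(constriction_right, constriction_left)
    return stronger / weaker
```

Scores increasingly above 1.0 indicate a larger interocular asymmetry.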

3.
Sci Rep ; 14(1): 20791, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39251697

ABSTRACT

Insole blanking production technology plays a vital role in contemporary machining and manufacturing industries. Existing insole blanking production models have limitations: most robots are required to accurately position the workpiece at a predetermined location, and special auxiliary equipment is usually required to ensure precise robot positioning. In this paper, we present an adaptive blanking robotic system for different lighting environments, consisting of an industrial robot arm, an RGB-D camera configuration, and a customized insole blanking table and mold. We introduce an innovative edge detection framework that utilizes color features and morphological parameters optimized through particle swarm optimization (PSO) to adaptively recognize insole edge contours. A path planning framework based on FSPS-BIT* is also introduced, which integrates the BIT* algorithm with the FSPS algorithm for efficient path planning of the robotic arm.
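
The PSO step mentioned above can be illustrated with the canonical velocity/position update; the actual fitness function (edge-contour quality from color and morphology features) is the paper's own and is not reproduced here, and the coefficient values below are generic defaults.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One canonical PSO update for 1-D particles.

    positions/velocities/pbest are per-particle lists; gbest is the
    best position found by the whole swarm so far. Each velocity blends
    inertia, attraction to the particle's own best, and attraction to
    the swarm best, with random weights r1, r2."""
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel
```

In practice the step is iterated, re-evaluating fitness and updating `pbest`/`gbest` each round.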

4.
BMC Med Imaging ; 24(1): 231, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223468

ABSTRACT

Recent improvements in artificial intelligence and computer vision make it possible to automatically detect abnormalities in medical images. Skin lesions are one broad class of these. Some types of lesions cause skin cancer, which itself has several types; melanoma is one of the deadliest, and its early diagnosis is of utmost importance. Treatment is greatly aided by the quick and precise diagnosis that artificial intelligence enables. The identification and delineation of boundaries inside skin lesions have shown promise using basic image processing approaches for edge detection, and further enhancements to edge detection are possible. In this paper, the use of fractional differentiation for improved edge detection is explored in the context of skin lesion detection. A framework based on fractional differential filters for edge detection in skin lesion images is proposed that can improve the automatic detection rate of malignant melanoma. The derived images are used to enhance the input images, which then undergo a classification process based on deep learning. The well-studied HAM10000 dataset is used in the experiments. The system achieves 81.04% accuracy with the EfficientNet model using the proposed fractional-derivative-based enhancements, whereas accuracy is around 77.94% when using the original images. In almost all experiments, the enhanced images improved accuracy. The results show that the proposed method improves recognition performance.
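
The fractional-differentiation idea can be illustrated with Grünwald-Letnikov coefficients, a standard discretization of fractional derivatives; this is a generic 1-D sketch under that assumption, not the paper's specific filter design.

```python
def gl_coefficients(v, n):
    """First n Grünwald-Letnikov coefficients (-1)^k * C(v, k) for a
    fractional order 0 < v < 1, via the standard recurrence
    w_k = w_{k-1} * (k - 1 - v) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - v) / k)
    return w

def fractional_diff_1d(signal, v, n=3):
    """Order-v fractional difference along a 1-D signal. Sharp
    transitions (edges) yield large responses, like a gradient but
    with heavier weighting of fine texture."""
    w = gl_coefficients(v, n)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, wk in enumerate(w):
            if i - k >= 0:
                acc += wk * signal[i - k]
        out.append(acc)
    return out
```

In 2-D, the same coefficients are typically arranged into directional masks and convolved with the image.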


Assuntos
Melanoma , Neoplasias Cutâneas , Melanoma/diagnóstico por imagem , Humanos , Neoplasias Cutâneas/diagnóstico por imagem , Neoplasias Cutâneas/patologia , Aumento da Imagem/métodos , Interpretação de Imagem Assistida por Computador/métodos , Aprendizado Profundo , Algoritmos
5.
Sci Rep ; 14(1): 22628, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39349710

ABSTRACT

The stability of arc bubbles is a crucial indicator of the underwater wet welding process. However, limited research exists on detecting arc bubble edges in such environments, and traditional algorithms often produce blurry and discontinuous results. To address these challenges, we propose a novel arc bubble edge detection method based on deep transfer learning for processing underwater wet welding images. The proposed method integrates two training stages: pre-training and fine-tuning. In the pre-training stage, a large source-domain dataset is used to train VGG16 as a feature extractor. In the fine-tuning stage, we introduce the Attention-Scale-Semantics (ASS) model, which consists of a Convolutional Block Attention Module (CBAM), a Scale Fusion Module (SCM), and a Semantic Fusion Module (SEM). The ASS model is further trained on a small target-domain dataset specific to underwater wet welding to fine-tune the model parameters. The CBAM can adaptively weight the feature maps, focusing on the more crucial features to better capture edge information. The SCM training method maximizes feature utilization and simplifies training by combining multi-scale features. Additionally, the skip structure of the SEM effectively mitigates semantic loss in the high-level network, enhancing the accuracy of edge detection. On the BSDS500 dataset and a self-constructed underwater wet welding dataset, the ASS model was evaluated against conventional edge detection models (Richer Convolutional Features (RCF), Fully Convolutional Network (FCN), and UNet) as well as the state-of-the-art models LDC and TEED. In terms of Mean Absolute Error (MAE), accuracy, and other evaluation metrics, the ASS model consistently outperforms these models, demonstrating edge detection that is both effective and stable for arc bubbles in underwater wet welding images.
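
One of the evaluation metrics cited above, mean absolute error between a predicted edge map and its ground truth, can be sketched as follows (flattened equal-length sequences are an assumed input format):

```python
def mean_absolute_error(predicted, target):
    """Per-pixel mean absolute error between a predicted edge map and
    its ground truth, both flattened to equal-length sequences of
    pixel values in [0, 1]."""
    return sum(abs(p - t) for p, t in zip(predicted, target)) / len(predicted)
```

Lower values indicate edge maps closer to the annotation.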

6.
Bioengineering (Basel) ; 11(8)2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39199759

ABSTRACT

To support analysis of the functional status of forearm blood vessels, this paper fully considers the orientation of the vascular skeleton and the geometric characteristics of blood vessels and proposes a blood vessel width calculation algorithm based on radius estimation of the tangent circle (RETC) in forearm near-infrared images. First, the initial infrared image obtained by the infrared camera is preprocessed by image cropping, contrast stretching, denoising, enhancement, and initial segmentation. Second, the Zhang-Suen refinement algorithm is used to extract the vascular skeleton. Third, the Canny edge detection method is used to detect vascular edges. Finally, the RETC algorithm is developed to calculate the vessel width. This paper evaluates the accuracy of the proposed RETC algorithm, and experimental results show that the mean absolute error between the vessel width obtained by our algorithm and the reference vessel width is as low as 0.36, with a variance of only 0.10, a significant reduction compared with traditional measurement methods.
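
A simplified reading of the tangent-circle idea: the local width at a skeleton (centreline) point is twice the radius of the circle centred there and tangent to the nearest detected edge pixel. The paper's exact RETC fitting, which also uses skeleton orientation, may differ from this sketch.

```python
def vessel_width(skeleton_point, edge_points):
    """Estimate local vessel width at a centreline point as twice the
    distance to the nearest Canny edge pixel, i.e. the diameter of the
    inscribed circle tangent to the vessel wall."""
    sx, sy = skeleton_point
    radius = min(((ex - sx) ** 2 + (ey - sy) ** 2) ** 0.5
                 for ex, ey in edge_points)
    return 2.0 * radius
```

Averaging this along the skeleton gives a width profile for the vessel segment.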

7.
J Imaging ; 10(8)2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39194975

ABSTRACT

In the field of 2-D image processing and computer vision, accurately detecting and segmenting objects that overlap or are obscured remains a challenge. This difficulty is exacerbated in the analysis of shoeprints used in forensic investigations because they are embedded in noisy environments such as the ground and can be indistinct. Traditional convolutional neural networks (CNNs), despite their success in various image analysis tasks, struggle to accurately delineate overlapping objects due to the complexity of segmenting intertwined textures and boundaries against a noisy background. This study introduces the YOLO (You Only Look Once) model enhanced by edge detection and image segmentation techniques to improve the detection of overlapping shoeprints. By focusing on the critical boundary information between shoeprint textures and the ground, our method demonstrates improvements in sensitivity and precision, achieving confidence levels above 85% for minimally overlapped images and maintaining above 70% for extensively overlapped instances. Heatmaps of convolution layers were generated to show how the network converges toward successful detection using these enhancements. This research may provide a methodology for addressing the broader challenge of detecting multiple overlapping objects against noisy backgrounds.
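
Detection quality for overlapping objects is commonly scored with intersection-over-union (IoU) of bounding boxes, the standard overlap measure behind detector evaluation; this generic sketch is not tied to the paper's specific YOLO configuration.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Returns 0.0 for disjoint boxes, 1.0 for
    identical ones."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.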

8.
Sensors (Basel) ; 24(16)2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39205042

ABSTRACT

Solar panels may suffer from faults, which can yield high temperatures and significantly degrade power generation. To detect faults in solar panels across large photovoltaic plants, drones with infrared cameras have been deployed. Drones may capture a huge number of infrared images, which is not realistic to analyze manually. To solve this problem, we develop a Deep Edge-Based Fault Detection (DEBFD) method, which applies convolutional neural networks (CNNs) for edge detection and object detection on the captured infrared images. In particular, a machine learning-based contour filter is designed to eliminate incorrect background contours, after which faults of solar panels are detected. Based on these fault detection results, solar panels can be classified into two classes: normal and faulty. We collected 2060 images in multiple scenes and achieved a high macro F1 score. Our method achieved a frame rate of 28 fps over infrared images of solar panels on an NVIDIA GeForce RTX 2080 Ti GPU.
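
The macro F1 score cited above is the unweighted mean of per-class F1 scores; a minimal sketch from per-class (TP, FP, FN) counts, with the count-tuple input format as an assumption:

```python
def macro_f1(per_class_counts):
    """Macro F1: the unweighted mean of per-class F1 scores, computed
    from (true-positive, false-positive, false-negative) counts, so
    each class contributes equally regardless of its size."""
    f1s = []
    for tp, fp, fn in per_class_counts:
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s)
```

For the two-class normal/faulty setting this averages exactly two per-class F1 values.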

9.
Microvasc Res ; 156: 104732, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39147360

ABSTRACT

Fluorescence intravital microscopy captures large data sets of dynamic multicellular interactions within organs such as the lungs, liver, and brain of living subjects. In medical imaging, edge detection is used to accurately identify and delineate important structures and boundaries inside images. To improve edge sharpness, edge detection frequently requires the inclusion of low-level features. Herein, a machine learning approach is needed to automate edge detection of multicellular aggregates of distinctly labeled blood cells within the microcirculation. In this work, the Structured Adaptive Boosting Trees algorithm (AdaBoost.S) is proposed as a contribution toward overcoming some of the edge detection challenges of medical images. The algorithm design is based on the observation that edges over an image mask often exhibit special structures and are interdependent. Such structures can be predicted using features extracted from a bigger image patch that covers the image edge mask. The proposed AdaBoost.S is applied to detect multicellular aggregates within blood vessels from fluorescence lung intravital images of mice exposed to e-cigarette vapor. The predictive capabilities of this approach for detecting platelet-neutrophil aggregates within the lung blood vessels are evaluated against three conventional machine learning algorithms: Random Forest, XGBoost, and Decision Tree. AdaBoost.S exhibits a mean recall, F-score, and precision of 0.81, 0.79, and 0.78, respectively. Compared to all three existing algorithms, AdaBoost.S has statistically better performance for recall and F-score. Although AdaBoost.S does not outperform Random Forest in precision, it remains superior to the XGBoost and Decision Tree algorithms. The proposed AdaBoost.S is widely applicable to other fluorescence intravital microscopy applications, including cancer, infection, and cardiovascular disease.
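
The boosting loop underlying AdaBoost.S can be sketched with classic AdaBoost on 1-D threshold stumps; the structured-edge extensions (patch features, interdependent edge labels) are the paper's own and are not reproduced here.

```python
import math

def train_adaboost(X, y, n_rounds=3):
    """Minimal AdaBoost with threshold stumps on a 1-D feature.

    X is a list of numbers, y a list of -1/+1 labels. Each round picks
    the stump with least weighted error, then reweights samples so the
    next round focuses on the ones it got wrong."""
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(n_rounds):
        best = None
        for thr in sorted(set(X)):
            for sign in (1, -1):
                pred = [sign if x >= thr else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, thr, sign, pred)
        err, thr, sign, pred = best
        err = max(err, 1e-10)  # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, thr, sign))
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, pred, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict_adaboost(model, x):
    """Sign of the alpha-weighted vote over the learned stumps."""
    score = sum(a * (s if x >= t else -s) for a, t, s in model)
    return 1 if score >= 0 else -1
```

Real structured-edge boosting replaces the scalar feature with patch descriptors and the stump with a tree over them.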


Subjects
Algorithms; Blood Platelets; Intravital Microscopy; Lung; Machine Learning; Microscopy, Fluorescence; Neutrophils; Animals; Lung/blood supply; Lung/diagnostic imaging; Blood Platelets/metabolism; Image Interpretation, Computer-Assisted; Cell Aggregation; Mice; Reproducibility of Results; Predictive Value of Tests; Mice, Inbred C57BL
10.
Pol J Radiol ; 89: e368-e377, 2024.
Article in English | MEDLINE | ID: mdl-39139256

ABSTRACT

Purpose: To detect foot ulcers in diabetic patients by analysing thermal images of the foot using a deep learning model, and to estimate the effectiveness of the proposed model by comparing it with existing studies. Material and methods: Open-source thermal images were used for the study. The dataset consists of two types of images of the feet of diabetic patients: normal and abnormal foot images. The dataset contains 1055 images in total; 543 are normal foot images, and the rest are images of abnormal feet. The dataset was converted into a new, pre-processed dataset by applying Canny edge detection and watershed segmentation. This pre-processed dataset was then balanced and enlarged using data augmentation, after which a deep learning model was applied for the diagnosis of foot ulcers. After applying Canny edge detection and segmentation, the pre-processed dataset can enhance the model's performance and reduce computational cost. Results: Our proposed models, ResNet50 and EfficientNetB0, were tested on both the original dataset and the pre-processed dataset obtained after edge detection and segmentation. The results were highly promising: ResNet50 achieved 89% and 89.1% accuracy on the two datasets, respectively, and EfficientNetB0 surpassed this with 96.1% and 99.4% accuracy, respectively. Conclusions: Our study offers a practical solution for foot ulcer detection, particularly where expert analysis is not readily available. The efficacy of our models was tested using real images, and they outperformed other available models, demonstrating their potential for real-world application.
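
The first stage of the Canny pipeline applied above is a gradient-magnitude map; here is a minimal Sobel-based sketch of that stage (the full Canny detector adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, which are omitted).

```python
def sobel_gradient_magnitude(img):
    """Gradient-magnitude map of a 2-D image (list of equal-length
    rows) using 3x3 Sobel kernels; border pixels are left at 0."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(kx[j][i] * img[y - 1 + i][x - 1 + j]  # transposed kernel
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

In practice an optimized library implementation would be used on full thermal images; this loop form just makes the operation explicit.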

11.
Front Physiol ; 15: 1416912, 2024.
Article in English | MEDLINE | ID: mdl-39175612

ABSTRACT

Introduction: The cardiothoracic ratio (CTR) based on postero-anterior chest X-ray (P-A CXR) images is one of the most commonly used cardiac measurements and an indicator for the initial evaluation of cardiac diseases. However, the heart is not readily observable on P-A CXR images compared with the lung fields, so radiologists often manually determine the CTR's right and left heart border points from the lung fields adjacent to the heart. Manual CTR measurement based on P-A CXR images requires experienced radiologists and is time-consuming and laborious. Methods: This article therefore proposes a novel, fully automatic CTR calculation method based on lung fields abstracted from P-A CXR images using convolutional neural networks (CNNs), overcoming the limitations of heart segmentation and avoiding its errors. First, lung field mask images are abstracted from the P-A CXR images using pre-trained CNNs. Second, a novel localization method for the heart's right and left border points is proposed based on the two-dimensional projection morphology of the lung field mask images, using graphics techniques. Results: The mean distance errors along the x-axis of the CTR's four key points in the test sets T1 (21 × 512 × 512 static P-A CXR images) and T2 (13 × 512 × 512 dynamic P-A CXR images), based on various pre-trained CNNs, are 4.1161 and 3.2116 pixels, respectively. In addition, the mean CTR errors on T1 and T2 based on the four proposed models are 0.0208 and 0.0180, respectively. Discussion: Our proposed model achieves performance equivalent to the previous CardioNet model for CTR calculation, avoids heart segmentation entirely, and takes less time. It is therefore practical and feasible and may become an effective tool for the initial evaluation of cardiac diseases.

12.
Sensors (Basel) ; 24(13)2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39001170

ABSTRACT

This paper presents a novel segmentation algorithm specially developed for 3D point clouds with high variability and noise, particularly suitable for heritage building 3D data. The method belongs to the family of segmentation procedures based on edge detection. In addition, it uses a graph-based topological structure generated from the supervoxelization of the 3D point clouds, which is used to close the edge points and to define the different segments. The algorithm provides a valuable tool for generating results that can be used in subsequent classification tasks and broader computer applications dealing with 3D point clouds. One characteristic of this segmentation method is that it is unsupervised, which makes it particularly advantageous for heritage applications where labelled data is scarce. It is also easily adaptable to different edge point detection and supervoxelization algorithms. Finally, the results show that the 3D data can be segmented into different architectural elements, which is important for further classification or recognition. Extensive testing on real data from historic buildings demonstrated the effectiveness of the method, with superior performance compared to three other segmentation methods, both globally and in the segmentation of planar and curved zones of historic buildings.

13.
Heliyon ; 10(10): e31430, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38826709

ABSTRACT

This research introduces a new approach to elevate the precision of image edge detection through an algorithm rooted in the coefficients derived from the subclass SCt,ρ (CSKP model). Our method employs convolution operations on input image pixels, utilizing the CSKP mask window in eight distinct directions, fostering a comprehensive, multi-directional analysis of edge features. To gauge the efficacy of the algorithm, image quality is assessed through perceptually significant metrics, including contrast, correlation, energy, homogeneity, and entropy. The study aims to contribute a valuable tool for diverse applications such as computer vision and medical imaging by presenting a robust and innovative solution for enhancing image edge detection. The results demonstrate notable improvements, affirming the potential of the proposed algorithm to advance the state of the art in image processing.
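
Eight-directional 3x3 masks of the kind described above are commonly generated by rotating a base mask in 45° steps, i.e. cycling its outer ring of coefficients; the CSKP coefficients themselves are the paper's own and are not reproduced, so the mechanism below is a generic sketch.

```python
# Outer ring of a 3x3 mask, in clockwise order; the centre stays fixed.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def rotate45(mask):
    """Rotate a 3x3 mask by 45 degrees by shifting its outer ring one
    step clockwise; applying this seven times to a base mask yields
    all eight directional masks."""
    out = [row[:] for row in mask]
    vals = [mask[r][c] for r, c in RING]
    vals = vals[-1:] + vals[:-1]
    for (r, c), v in zip(RING, vals):
        out[r][c] = v
    return out
```

Edge response at a pixel is then typically taken as the maximum over the eight directional convolutions.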

14.
J Appl Physiol (1985) ; 137(2): 300-311, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38695355

ABSTRACT

Flow-mediated dilation (FMD) is a common measure of endothelial function and an indicator of vascular health. Automated software methods exist to improve the speed and accuracy of FMD analysis. Compared with commercial software, open-source software offers similar capabilities at a much lower cost while allowing increased customization specific to users' needs. We introduced modifications to an existing open-source software, FloWave.us, to better meet FMD analysis needs. The purpose of this study was to compare the repeatability and reliability of the modified FloWave.us software to the original software and to manual measurements. To assess these outcomes, duplex ultrasound imaging data from the popliteal artery in older adults were analyzed. The average percent FMD for the modified software was 6.98 ± 3.68% and 7.27 ± 3.81% for observers 1 and 2, respectively, compared with 9.17 ± 4.91% and 10.70 ± 4.47% with manual measurements and 5.07 ± 31.79% with the original software for observer 1. The modified software and manual methods demonstrated higher intraobserver intraclass correlation coefficients (ICCs) for repeated measures of baseline diameter, peak diameter, and percent FMD compared with the original software. For percent FMD, the interobserver ICC was 0.593 for manual measurements and 0.723 for the modified software. With the modified method, an average of 97.7 ± 2.4% of FMD video frames were read, compared with only 17.9 ± 15.0% of frames read with the original method when analyzed by the same observer. Overall, this work further establishes open-source software as a robust and viable tool for FMD analysis and demonstrates improved reliability compared with the original software. NEW & NOTEWORTHY: This study improves edge detection capabilities and implements noise reduction strategies to optimize an existing open-source software's suitability for flow-mediated dilation (FMD) analysis. The modified software improves the precision and reliability of FMD analysis compared with the original software algorithm. We demonstrate that this modified open-source software is a robust tool for FMD analysis.
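
The percent FMD values reported above follow the standard definition: the percent increase from baseline to peak arterial diameter. A minimal sketch:

```python
def percent_fmd(baseline_diameter, peak_diameter):
    """Percent flow-mediated dilation: the percent increase from
    baseline to peak arterial diameter, in the same length units
    (e.g. mm from edge-detected vessel walls)."""
    return (peak_diameter - baseline_diameter) / baseline_diameter * 100.0
```

The diameters themselves come from the software's edge detection of the vessel walls across video frames.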


Subjects
Software; Vasodilation; Humans; Male; Aged; Reproducibility of Results; Female; Vasodilation/physiology; Popliteal Artery/physiology; Popliteal Artery/diagnostic imaging; Middle Aged; Regional Blood Flow/physiology; Endothelium, Vascular/physiology; Endothelium, Vascular/diagnostic imaging; Image Processing, Computer-Assisted/methods; Blood Flow Velocity/physiology
15.
J Xray Sci Technol ; 32(4): 1011-1039, 2024.
Article in English | MEDLINE | ID: mdl-38759091

ABSTRACT

Retinal disorders pose a serious threat to world healthcare because they frequently result in visual loss or impairment. For retinal disorders to be diagnosed precisely, treated individually, and detected early, deep learning is a necessary subset of artificial intelligence. This paper provides a complete approach to improving the accuracy and reliability of retinal disease identification using retinal optical coherence tomography (OCT) images. The hybrid GIGT model, which combines Generative Adversarial Networks (GANs), Inception, and game theory, is a novel method for diagnosing retinal diseases from OCT images. This technique, implemented in Python, includes image preprocessing, feature extraction, GAN-based classification, and a game-theoretic examination. Resizing, grayscale conversion, noise reduction using Gaussian filters, contrast enhancement using Contrast Limited Adaptive Histogram Equalization (CLAHE), and edge recognition via the Canny technique are all part of the image preparation step. These procedures prepare the OCT images for efficient analysis. The Inception model is used for feature extraction, enabling discriminative characteristics to be extracted from the processed images. GANs are used for classification, improving accuracy and resilience by adding a strategic and dynamic aspect to the diagnostic process. Additionally, a game-theoretic analysis is utilized to evaluate the security and dependability of the model in the face of hostile attacks. Strategic analysis and deep learning work together to provide a potent diagnostic tool. The proposed model's remarkable 98.2% accuracy rate shows this method's potential to improve the detection of retinal diseases, improve patient outcomes, and address the worldwide issue of visual impairment.


Subjects
Game Theory; Neural Networks, Computer; Retinal Diseases; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Retinal Diseases/diagnostic imaging; Retina/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted/methods; Reproducibility of Results; Algorithms; Image Interpretation, Computer-Assisted/methods
16.
Micromachines (Basel) ; 15(5)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38793179

ABSTRACT

With the rapid development of emerging intelligent, flexible, transparent, and wearable electronic devices, such as quantum-dot-based micro light-emitting diodes (micro-LEDs), thin-film transistors (TFTs), and flexible sensors, numerous pixel-level printing technologies have emerged. Among them, inkjet printing has proven to be a useful and effective tool for consistently printing micron-level ink droplets, for instance smaller than 50 µm, onto wearable electronic devices. However, quickly and accurately determining print quality, which is significant for device performance, is challenging due to the large quantity and micron size of the ink droplets. Therefore, leveraging existing image processing algorithms, we have developed an effective method and software for quickly detecting the morphology of ink droplets printed by inkjet printing. The method is based on edge detection technology. We believe this method can meet the increasing demand for quick evaluation of print quality in inkjet printing.
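
Once droplet outlines are extracted by edge detection, a simple morphology check such a print-quality tool might apply is circularity; this is an illustrative measure, not necessarily the one the software described above uses.

```python
import math

def circularity(area, perimeter):
    """Circularity 4*pi*A / P**2 of a droplet outline: exactly 1.0 for
    a perfect circle, lower for elongated or irregular droplets
    (e.g. with satellite spray). Area and perimeter come from the
    detected edge contour, in consistent units."""
    return 4 * math.pi * area / perimeter ** 2
```

Thresholding this value per droplet gives a quick pass/fail morphology screen across thousands of printed pixels.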

17.
Network ; : 1-31, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38708841

ABSTRACT

In contemporary times, content-based image retrieval (CBIR) techniques have gained widespread acceptance as a means for end-users to discern and extract specific image content from vast repositories. However, a substantial majority of CBIR studies continue to rely on linear methodologies such as gradient-based and derivative-based edge detection techniques. This research explores the integration of bioinspired Spiking Neural Network (SNN) based edge detection within CBIR. We introduce an innovative, computationally efficient SNN-based approach designed explicitly for CBIR applications, outperforming existing SNN models by reducing computational overhead by 2.5 times. The proposed SNN-based edge detection approach is incorporated into three distinct CBIR techniques, each employing a conventional edge detection methodology: Sobel, Canny, and image derivatives. Rigorous experimentation and evaluations are carried out on the Corel-10k and crop weed datasets, widely recognized and frequently adopted benchmarks in image analysis. Importantly, our findings underscore the enhanced performance of CBIR methodologies integrating the proposed SNN-based edge detection approach, with an average increase in mean precision values exceeding 3%. This study demonstrates the utility of the proposed methodology in optimizing feature extraction, establishing its pivotal role in advancing edge-centric CBIR approaches.
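
Mean precision figures like those cited above are averages of per-query precision; a minimal sketch of the per-query term, with the exact cutoff and relevance-labelling conventions left as assumptions:

```python
def precision_at_k(retrieved_ids, relevant_ids, k):
    """Precision@k for a single CBIR query: the fraction of the top-k
    retrieved images that belong to the query's relevant set.
    Averaging this over all queries gives mean precision."""
    return sum(1 for r in retrieved_ids[:k] if r in relevant_ids) / k
```

A 3% gain in mean precision means roughly 3 more relevant images per 100 returned, averaged over queries.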

18.
Plant Methods ; 20(1): 73, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773503

ABSTRACT

BACKGROUND: X-ray computed tomography (CT) is a powerful tool for measuring plant root growth in soil. However, a rapid scan with larger pots, which is required for throughput-prioritized crop breeding, results in high noise levels, low resolution, and blurred root segments in the CT volumes. Moreover, while plant root segmentation is essential for root quantification, detailed conditional studies on segmenting noisy root segments are scarce. The present study aimed to investigate the effects of scanning time and deep learning-based restoration of image quality on semantic segmentation of blurry rice (Oryza sativa) root segments in CT volumes. RESULTS: VoxResNet, a convolutional neural network-based voxel-wise residual network, was used as the segmentation model. The training efficiency of the model was compared using CT volumes obtained at scan times of 33, 66, 150, 300, and 600 s. The learning efficiencies of the samples were similar, except at scan times of 33 and 66 s. In addition, the noise levels of the predicted volumes differed among scanning conditions, indicating that the noise level at a scan time ≥ 150 s does not affect model training efficiency. Conventional filtering methods, such as median filtering and edge detection, increased the training efficiency by approximately 10% under all conditions. However, the training efficiency of the 33 and 66 s-scanned samples remained relatively low. We concluded that the scan time must be at least 150 s so as not to affect segmentation. Finally, we constructed a semantic segmentation model for 150 s-scanned CT volumes, for which the Dice loss reached 0.093. This model could not predict lateral roots, which were not included in the training data; this limitation will be addressed by preparing appropriate training data. CONCLUSIONS: A semantic segmentation model can be constructed even from rapidly scanned CT volumes with high noise levels. Given that scan times ≥ 150 s did not affect the segmentation results, this technique holds promise for rapid and low-dose scanning. This study also offers insights into images other than CT volumes with high noise levels that are challenging to annotate.
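
The Dice loss cited above is 1 minus the Dice coefficient, so a loss of 0.093 corresponds to a Dice of about 0.907; a minimal sketch of the coefficient over voxel sets:

```python
def dice_coefficient(pred_voxels, true_voxels):
    """Dice coefficient between predicted and ground-truth voxel sets:
    twice the overlap divided by the total size of both sets, so 1.0
    is a perfect segmentation and 0.0 is no overlap."""
    pred, true = set(pred_voxels), set(true_voxels)
    if not pred and not true:
        return 1.0
    return 2 * len(pred & true) / (len(pred) + len(true))
```

In training, the same quantity is usually computed in soft (probabilistic) form over voxel grids rather than sets.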

19.
Heliyon ; 10(9): e30486, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38742071

ABSTRACT

A novel automated medication verification system (AMVS) aims to address the limitations of manual medication verification among healthcare professionals with high workloads, thereby reducing medication errors in hospitals. The manual medication verification process is time-consuming and prone to errors, especially in healthcare settings with high workloads. The proposed strategy is to streamline and automate this process, enhancing efficiency and reducing medication errors. The system employs deep learning models to swiftly and accurately classify multiple medications within a single image without requiring manual labeling during model construction. It comprises edge detection and classification stages to verify medication types. Unlike previous studies conducted in open spaces, our study takes place in a closed space to minimize the impact of optical changes on image capture. During the experimental process, the system identifies each drug within the image individually using edge detection and utilizes a classification model to determine each drug type. Our research has successfully developed a fully automated drug recognition system, achieving an accuracy of over 95% in identifying drug types and conducting segmentation analyses. Specifically, the system demonstrates an accuracy of approximately 96% for drug sets containing fewer than ten types and 93% for those with ten types. The system builds an image classification model quickly and holds promising potential for assisting nursing staff during medication verification, reducing the likelihood of medication errors and alleviating the burden on nursing staff.

20.
Sci Rep ; 14(1): 8231, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589419

ABSTRACT

Terrestrial planets and their moons have impact craters, which contribute significantly to the complex geomorphology of planetary bodies in our Solar System. Traditional crater identification methods struggle with accuracy because of the diverse forms, locations, and sizes of the craters. Our main aim is to locate lunar craters using images from the Terrain Mapping Camera-2 (TMC-2) onboard the Chandrayaan-2 satellite. This study presents a crater-detection U-Net model, a convolutional neural network architecture frequently used in image segmentation tasks. Crater detection was accomplished with the proposed model in two steps: first, the model was trained using a ResNet18 backbone with ImageNet-based U-Net weights; second, TMC-2 images from Chandrayaan-2 were used to detect craters with the trained model. The proposed model comprises a neural network, a feature extractor, and an optimization technique for lunar crater detection. The model achieves 80.95% accuracy using unannotated data, while precision and recall are much better with annotated data, reaching an accuracy of 86.91% in object detection on TMC-2 ortho images. 2000 images were considered for the present work, as manual annotation is time-consuming; including more images could further enhance the model's performance.
