Results 1 - 4 of 4
1.
Diagnostics (Basel); 13(6), 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36980463

ABSTRACT

Reliable automated diagnostic methods are needed to improve the accuracy of tumor identification, and researchers have developed a variety of segmentation algorithms to categorize brain tumors precisely; segmentation of brain images is widely regarded as one of the most challenging tasks in medical image processing. This article proposes a novel automated detection and classification method consisting of several phases: pre-processing of MRI images, image segmentation, feature extraction, and classification. During pre-processing, an adaptive filter removes background noise from the MRI scan. Features are extracted with the local-binary grey level co-occurrence matrix (LBGLCM), and images are segmented with enhanced fuzzy c-means clustering (EFCMC). The extracted features are then passed to a deep learning model, a convolutional recurrent neural network (CRNN), which classifies MRI images into two groups: glioma and normal. The method was evaluated on MRI scans from the REMBRANDT dataset, comprising 2480 training and 620 test images, and the results show that it outperforms earlier approaches. The proposed CRNN strategy was compared against BP, U-Net, and ResNet, three of the most widely used classification approaches. For brain tumor classification, the proposed system achieved 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity.
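A minimal sketch of LBGLCM-style texture feature extraction in the spirit of the pipeline above: a local binary pattern map is computed first, and grey-level co-occurrence statistics are then taken over it. The quantization level, the LBP parameters, and the helper name lbglcm_features are illustrative assumptions (using scikit-image), not the authors' exact implementation.

```python
# Sketch of LBGLCM-style texture features: LBP map, then GLCM statistics on it.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def lbglcm_features(image, P=8, R=1, levels=16):
    """Texture feature vector for a grayscale MRI slice (8-bit values)."""
    # Local binary pattern encodes the texture around each pixel.
    lbp = local_binary_pattern(image, P=P, R=R, method="uniform")
    # Quantize the LBP map so the co-occurrence matrix stays small.
    lbp_q = np.floor(levels * (lbp - lbp.min()) / (np.ptp(lbp) + 1e-8)).astype(np.uint8)
    lbp_q = np.clip(lbp_q, 0, levels - 1)
    # Co-occurrence matrix at distance 1 over four orientations.
    glcm = graycomatrix(lbp_q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Example with a random 8-bit slice standing in for a pre-processed MRI image.
features = lbglcm_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
print(features.shape)  # (16,): 4 properties x 4 angles
```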

2.
Sensors (Basel); 23(3), 2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36772207

ABSTRACT

Rapid improvements in ultrasound imaging technology have made it much more useful for screening and diagnosing breast problems. Local speckle noise in ultrasound breast images degrades image quality and can hinder observation and diagnosis, so removing this localized noise is crucial. In this article, a hybrid deep learning technique is used to remove local speckle noise from breast ultrasound images. The contrast of the ultrasound breast images is first improved with logarithmic and exponential transforms, and guided filter algorithms are then used to enhance the details of the glandular regions. To complete the pre-processing and improve image clarity, spatial high-pass filtering is applied to suppress extreme sharpening. Finally, edge-sensitive terms are added to the Logical-Pool Recurrent Neural Network (LPRNN) so that local speckle noise can be removed without sacrificing image edges. The mean square error and false recognition rate both fell below 1.1% by the hundredth training iteration, showing that the LPRNN had been properly trained. After speckle-noise removal, the ultrasound images had signal-to-noise ratios (SNRs) greater than 65 dB, peak SNRs greater than 70 dB, and edge preservation index values above the experimental threshold of 0.48, with short processing times. The method removes local speckle noise quickly, preserves edge information, and brings image features into sharp focus.
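A minimal sketch of the logarithmic/exponential contrast transforms and the spatial high-pass step described above, assuming an 8-bit grayscale ultrasound frame; the guided-filter and LPRNN denoising stages are omitted, and the parameter values and function name are illustrative assumptions rather than the authors' settings.

```python
# Sketch of the contrast and high-pass pre-processing; guided filtering and
# the LPRNN denoiser are not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_ultrasound(img, gamma=0.5, sigma=3.0, alpha=0.6):
    """Enhance an 8-bit grayscale ultrasound image before denoising."""
    x = img.astype(np.float32) / 255.0
    # Logarithmic transform lifts dark glandular detail.
    log_img = np.log1p(x) / np.log(2.0)
    # Exponential (gamma-style) transform compresses over-bright regions.
    exp_img = log_img ** gamma
    # Spatial high-pass: subtract a Gaussian-blurred copy, then blend it back
    # with a small weight so edges are accented without extreme sharpening.
    highpass = exp_img - gaussian_filter(exp_img, sigma=sigma)
    enhanced = np.clip(exp_img + alpha * highpass, 0.0, 1.0)
    return (enhanced * 255).astype(np.uint8)

# Example with random pixels standing in for a breast ultrasound frame.
frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(preprocess_ultrasound(frame).dtype, preprocess_ultrasound(frame).shape)
```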


Subject(s)
Deep Learning , Humans , Female , Ultrasonography, Mammary , Ultrasonography/methods , Algorithms , Neural Networks, Computer , Signal-To-Noise Ratio
3.
Comput Intell Neurosci; 2022: 5267498, 2022.
Article in English | MEDLINE | ID: mdl-36017452

ABSTRACT

One of the most challenging tasks for clinicians is detecting symptoms of cardiovascular disease as early as possible; many individuals worldwide die from cardiovascular disease each year. Because heart disease is a major concern, it must be dealt with in a timely manner, yet diagnosis is complicated by the many health variables involved, such as high blood pressure, elevated cholesterol, and an irregular pulse rate. Artificial intelligence can therefore help identify and treat disease early. This paper proposes an ensemble-based approach that uses machine learning (ML) and deep learning (DL) models to predict a person's likelihood of developing cardiovascular disease. Six classification algorithms are employed, the models are trained on a publicly available dataset of cardiovascular disease cases, and a random forest (RF) is used to extract the most important cardiovascular disease features. The experimental results show that the ML ensemble model achieves the best disease prediction accuracy, 88.70%.
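The abstract does not name the six classifiers or the dataset, so the following is only a minimal sketch of a soft-voting ensemble with random-forest feature ranking in scikit-learn; the synthetic data, the three representative classifiers, and the number of retained features are assumptions standing in for the paper's setup.

```python
# Sketch of an ML voting ensemble with random-forest feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data standing in for the public cardiovascular dataset.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Rank features with a random forest and keep the most important ones.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
top = np.argsort(rf.feature_importances_)[::-1][:8]

# Combine several classifiers by averaging their predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=7)),
    ],
    voting="soft",
)
ensemble.fit(X_train[:, top], y_train)
print("accuracy:", accuracy_score(y_test, ensemble.predict(X_test[:, top])))
```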


Subject(s)
Cardiovascular Diseases , Heart Diseases , Algorithms , Artificial Intelligence , Cardiovascular Diseases/diagnosis , Humans , Machine Learning
4.
Sensors (Basel); 23(1), 2022 Dec 27.
Article in English | MEDLINE | ID: mdl-36616873

ABSTRACT

Modern technologies such as the Internet of Things (IoT) and physical navigation systems play an important role in finding a specific location in an unfamiliar environment. Recent technological developments allow users to run these systems on mobile devices, which has increased both the acceptance of navigation systems and the number of people who use them. A system used to find a specific location within a building is known as an indoor navigation system. In this study, we present a novel approach to adaptable multistory navigation systems that can be deployed in different environments, such as libraries, grocery stores, shopping malls, and official buildings, using facial and speech recognition together with voice broadcasting. A library building was chosen for the experiment, with the goal of helping registered users find a specific book on different floors. In the proposed system, robots placed on each floor of the building communicate with each other and with the person who needs navigational help. The system uses an Android platform consisting of two separate applications: one for administration, which adds or removes settings and data and thereby builds the environment map, and a second deployed on the robots that interact with users. The developed system was tested using two methods, system evaluation and user evaluation. System evaluation is based on the users' voice- and face-recognition results, with model performance measured by the accuracy obtained while testing various values of the neural network parameters; the proposed system achieved accuracies of 97.92% and 97.88% on the two tasks. User evaluation was carried out with the developed Android applications in multi-story libraries, gathering responses from users who interacted with the applications for navigation tasks such as finding a specific book. Almost all users found it useful to have robots on each floor giving specific directions, with automatic recognition and recall of what a person is searching for. The evaluation results show that the proposed system can be implemented in different environments, demonstrating its effectiveness.
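A minimal, hypothetical sketch of how a floor robot might fuse face- and speech-recognition results before responding to a navigation query; the class names, thresholds, and confidence scale below are assumptions for illustration and are not taken from the paper.

```python
# Sketch of decision fusion between face and voice recognition for a floor robot.
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    user_id: str
    confidence: float  # 0.0 - 1.0 score from the underlying recognizer

def identify_user(face: RecognitionResult, voice: RecognitionResult,
                  face_threshold: float = 0.8, voice_threshold: float = 0.75):
    """Accept the user only when both modalities agree with enough confidence."""
    if (face.user_id == voice.user_id
            and face.confidence >= face_threshold
            and voice.confidence >= voice_threshold):
        return face.user_id
    return None  # otherwise ask the user to repeat or re-register

# Example: a registered user asking for a book on another floor.
user = identify_user(RecognitionResult("alice", 0.93), RecognitionResult("alice", 0.81))
print("greet user and broadcast directions" if user else "retry recognition")
```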


Subject(s)
Facial Recognition , Internet of Things , Voice , Humans , Speech