Results 1 - 15 of 15
1.
Sci Rep ; 14(1): 9614, 2024 04 26.
Article in English | MEDLINE | ID: mdl-38671304

ABSTRACT

Abnormal heart conduction, known as arrhythmia, can contribute to cardiac diseases that carry the risk of fatal consequences. Healthcare professionals typically use electrocardiogram (ECG) signals and certain preliminary tests to identify abnormal patterns in a patient's cardiac activity. To assess the overall cardiac health condition, cardiac specialists monitor these activities separately, a procedure that can be arduous and time-intensive and may affect the patient's well-being. This study introduces and automates a novel solution for predicting cardiac health conditions, specifically identifying cardiac morbidity and arrhythmia in patients, using invasive and non-invasive measurements. Experimental analyses in medical studies involve extremely sensitive data, and any partial or biased diagnosis in this field is unacceptable. Therefore, this research introduces a new concept for determining the uncertainty level of machine learning algorithms using information entropy. Information entropy can be considered a distinctive performance evaluator of a machine learning algorithm, one that has not previously been adopted in studies within the realm of bio-computational research. The experiments were conducted on arrhythmia and heart disease datasets collected from the Massachusetts Institute of Technology-Beth Israel Hospital arrhythmia database (DB-1) and the Cleveland Heart Disease dataset (DB-2), respectively. Our framework consists of four significant steps: 1) data acquisition, 2) feature preprocessing, 3) implementation of learning algorithms, and 4) information entropy. The results demonstrate the average accuracy achieved by the classification algorithms: Neural Network (NN) 99.74%, K-Nearest Neighbor (KNN) 98.98%, Support Vector Machine (SVM) 99.37%, Random Forest (RF) 99.76%, and Naïve Bayes (NB) 98.66%. We believe that this study paves the way for further research, offering a framework for identifying cardiac health conditions through machine learning techniques.
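As a rough illustration of the information-entropy idea described in this abstract (not the authors' exact formulation), Shannon entropy can be computed over a classifier's predicted class probabilities; lower average entropy indicates a more certain model. The dataset and the Random Forest below are stand-ins, since the MIT-BIH and Cleveland data are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in tabular dataset, not the MIT-BIH/Cleveland data
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)  # per-sample class probabilities

# Shannon entropy H = -sum(p * log2 p) per prediction; 0 bits means a fully confident prediction
eps = 1e-12
entropy = -np.sum(proba * np.log2(proba + eps), axis=1)

print(f"accuracy: {clf.score(X_test, y_test):.4f}")
print(f"mean predictive entropy (bits): {entropy.mean():.4f}")
```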


Subjects
Cardiac Arrhythmias, Electrocardiography, Machine Learning, Humans, Electrocardiography/methods, Cardiac Arrhythmias/diagnosis, Algorithms, Physiological Monitoring/methods, Heart Diseases/diagnosis
2.
Heliyon ; 10(3): e25369, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38352790

ABSTRACT

In recent years, scientific data on cancer has expanded, providing potential for a better understanding of malignancies and improved tailored care. Advances in Artificial Intelligence (AI) processing power and algorithmic development position Machine Learning (ML) and Deep Learning (DL) as crucial players in predicting Leukemia, a blood cancer, using integrated multi-omics technology. However, realizing these goals demands novel approaches to harness this data deluge. This study introduces a novel Leukemia diagnosis approach that analyzes multi-omics data using ML and DL algorithms. ML techniques, including Random Forest (RF), Naive Bayes (NB), Decision Tree (DT), Logistic Regression (LR), and Gradient Boosting (GB), are compared with DL methods such as Recurrent Neural Networks (RNN) and Feedforward Neural Networks (FNN). GB achieved 97% accuracy among the ML models, while the RNN outperformed it with 98% accuracy among the DL models. This approach filters unclassified data effectively, demonstrating the significance of DL for leukemia prediction. The validation was based on 17 different features, such as patient age, sex, mutation type, treatment methods, chromosomes, and others. The study compares ML and DL techniques and selects the technique that gives the best results, emphasizing the implications of high-throughput technology in healthcare and offering improved patient care.
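A minimal sketch of the kind of model comparison described above, using scikit-learn classifiers and 5-fold cross-validation on synthetic data with 17 features; the actual multi-omics features and the RNN/FNN deep models are not reproduced, and every name below is illustrative.

```python
from sklearn.datasets import make_classification  # synthetic stand-in for the multi-omics feature table
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# 17 features, mirroring the number of attributes mentioned in the abstract
X, y = make_classification(n_samples=1000, n_features=17, random_state=42)

models = {
    "RF": RandomForestClassifier(random_state=42),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=42),
    "LR": LogisticRegression(max_iter=1000),
    "GB": GradientBoostingClassifier(random_state=42),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```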

3.
PLoS One ; 19(1): e0292100, 2024.
Article in English | MEDLINE | ID: mdl-38236900

ABSTRACT

Diabetes prediction is an ongoing research topic in which medical specialists attempt to forecast the condition with greater precision. Diabetes typically remains latent, and if patients are then diagnosed with another illness, such as damage to the kidney vessels, problems with the retina of the eye, or a heart problem, it can cause metabolic problems and various complications in the body. Various ensemble learning procedures, including voting, boosting, and bagging, have been applied in this study. The Synthetic Minority Oversampling Technique (SMOTE), along with the k-fold cross-validation approach, was used to achieve class balance and to validate the findings. The Pima Indian Diabetes (PID) dataset, obtained from the UCI Machine Learning (UCI ML) repository, was selected for this study. A feature engineering technique was used to calculate the influence of lifestyle factors. A two-phase classification model was developed to predict insulin resistance using the Sequential Minimal Optimisation (SMO) and SMOTE approaches together: the SMOTE technique is used to preprocess data in the model's first phase, while the SMO classifier is used in the second phase. Bagged decision trees outperformed all other classification techniques in terms of misclassification error rate, accuracy, specificity, precision, recall, F1 measure, and ROC curve. The model built with the combined SMOTE and SMO strategy achieved 99.07% accuracy with 0.1 ms of runtime. The aim of the suggested system is to enhance the classifier's performance in spotting illness early.
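A minimal sketch of the two-phase SMOTE-then-SMO pipeline described above, assuming a local copy of the Pima Indians Diabetes data in `pima_indians_diabetes.csv` with an `Outcome` column (both names are hypothetical); scikit-learn's SVC is trained with an SMO-style solver, and the imbalanced-learn pipeline applies SMOTE only to the training folds of each cross-validation split.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE       # requires the imbalanced-learn package
from imblearn.pipeline import Pipeline          # pipeline that applies SMOTE only during fitting
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC                     # scikit-learn's SVC uses an SMO-style solver

# Hypothetical local copy of the Pima Indians Diabetes dataset (8 features plus an "Outcome" label)
df = pd.read_csv("pima_indians_diabetes.csv")
X, y = df.drop(columns="Outcome"), df["Outcome"]

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=42)),  # phase 1: balance the minority class
    ("smo", SVC(kernel="rbf")),         # phase 2: SMO-based SVM classifier
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.4f}")
```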


Subjects
Algorithms, Type 2 Diabetes Mellitus, Humans, Type 2 Diabetes Mellitus/diagnosis, Machine Learning, ROC Curve, Forecasting
4.
Sci Rep ; 14(1): 1345, 2024 01 16.
Article in English | MEDLINE | ID: mdl-38228639

ABSTRACT

A brain tumor is an uncontrolled, abnormal expansion of brain cells, making it one of the deadliest diseases of the nervous system. Segmenting brain tumors for earlier diagnosis is a difficult task in the field of medical image analysis. Previously, brain tumor segmentation was done manually by radiologists, which requires a lot of time and effort and is also prone to human error. It has been shown that deep learning models can outperform human experts in diagnosing brain tumors in MRI images. These algorithms employ a huge number of MRI scans to learn the complex patterns of brain tumors and segment them automatically and accurately. Here, an encoder-decoder architecture based on a deep convolutional neural network is proposed for semantic segmentation of brain tumors in MRI images. The proposed method focuses on image downsampling in the encoder part. For this, a LinkNet-34 semantic segmentation model with an EfficientNetB7 encoder is proposed. The performance of the LinkNet-34 model is compared with three other models, namely FPN, U-Net, and PSPNet. Further, the performance of the EfficientNetB7 encoder used in the LinkNet-34 model is compared with three other encoders, namely ResNet34, MobileNet_V2, and ResNet50. The proposed model is then optimized using three different optimizers: RMSProp, Adamax, and Adam. The LinkNet-34 model with the EfficientNetB7 encoder and the Adamax optimizer performed best, with a Jaccard index of 0.89 and a Dice coefficient of 0.915.
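A minimal sketch of the LinkNet-plus-EfficientNetB7 configuration named above, using the third-party segmentation_models_pytorch package; the loss function, input size, and single training step are assumptions rather than the authors' setup.

```python
import torch
import segmentation_models_pytorch as smp  # third-party package providing LinkNet/FPN/U-Net/PSPNet

# LinkNet decoder with an EfficientNet-B7 encoder, as named in the abstract; binary tumor mask output
model = smp.Linknet(
    encoder_name="efficientnet-b7",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
)

loss_fn = smp.losses.DiceLoss(mode="binary")                 # Dice loss is a common choice for masks
optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)  # Adamax, the best-reported optimizer

x = torch.randn(2, 3, 256, 256)                   # dummy batch standing in for MRI slices
y = (torch.rand(2, 1, 256, 256) > 0.5).float()    # dummy ground-truth masks
pred = model(x)
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
print(pred.shape, float(loss))
```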


Subjects
Brain Neoplasms, Semantics, Humans, Brain Neoplasms/diagnostic imaging, Algorithms, Intelligence, Computer Neural Networks, Computer-Assisted Image Processing
5.
Sensors (Basel) ; 23(19)2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37836846

ABSTRACT

Due to the modern power system's rapid development, more scattered smart grid components are securely linked into the power system, encircling a wide electrical power network with an underpinning communication system. By enabling a wide range of applications, such as distributed energy management, system state forecasting, and cyberattack security, these components generate vast amounts of data that automate and improve the efficiency of the smart grid. Because traditional computing technologies cannot handle the massive amount of data that smart grid systems generate, AI-based alternatives have received a lot of interest. In this study, Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) models are developed to address this issue by incorporating the attributes of the adaptively evolving energy system, enhancing the modeling of the dynamic properties of the contemporary Smart Grid (SG) that are affected by the Revised Encoding Scheme (RES) or system reconfiguration, in order to distinguish legitimate changes from real-time threats. More specifically, we provide a federated learning strategy, supported by edge clouds, for consumers to share power data with the Power Grid (PG) that protects consumer privacy and is communication-efficient. Two optimization problems are then designed for Energy Data Owners (EDOs) and energy service operators, together with a local information assessment method in Federated Learning (FL), taking non-independent and identically distributed (non-IID) effects into consideration. The test results revealed that the LSTM had a longer training duration, four hidden layers, and higher training loss than the other models. The provided method works well in several situations to identify false data injection attacks (FDIA). Extensive simulations show that the suggested approach can successfully induce EDOs to employ high-quality local models, increase the payoff of the energy service provider (ESP), and decrease task latencies. According to the verification results, every attack sample could be effectively recognized using the existing detection methods and the proposed LSTM-RNN-based structure for the smart grid.
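A minimal sketch of a stacked LSTM classifier for flagging suspicious measurement windows, in the spirit of the detection model described above; the window length, feature count, layer sizes, and synthetic data are all assumptions, and the federated-learning machinery is not shown.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Synthetic stand-in: windows of 60 time steps x 10 grid measurements, labelled attack (1) or normal (0)
X = np.random.rand(512, 60, 10).astype("float32")
y = np.random.randint(0, 2, size=(512,))

model = models.Sequential([
    layers.Input(shape=(60, 10)),
    layers.LSTM(64, return_sequences=True),  # stacked LSTM layers capture temporal dependencies
    layers.LSTM(32),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability that the window contains an injected attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```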

6.
Life (Basel) ; 13(10)2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37895472

ABSTRACT

Bone marrow (BM) is an essential part of the hematopoietic system, which generates all of the body's blood cells and maintains the body's overall health and immune system. The classification of bone marrow cells is pivotal in both clinical and research settings because many hematological diseases, such as leukemia, myelodysplastic syndromes, and anemias, are diagnosed based on specific abnormalities in the number, type, or morphology of bone marrow cells. There is therefore a need for a robust deep-learning algorithm that can classify bone marrow cells and keep a close check on them. This study proposes a framework for categorizing bone marrow cells into seven classes. In the proposed framework, five transfer learning models (DenseNet121, EfficientNetB5, ResNet50, Xception, and MobileNetV2) are applied to the bone marrow dataset to classify the cells into seven classes. The best-performing DenseNet121 model was fine-tuned by adding one batch-normalization layer, one dropout layer, and two dense layers. The fine-tuned DenseNet121 model was optimized using several optimizers, such as AdaGrad, AdaDelta, Adamax, RMSprop, and SGD, along with different batch sizes of 16, 32, 64, and 128. The fine-tuned DenseNet121 model was also integrated with an attention mechanism to improve its performance by allowing the model to focus on the most relevant features or regions of the image, which can be particularly beneficial in medical imaging, where certain regions may carry critical diagnostic information. The proposed fine-tuned and attention-integrated DenseNet121 achieved the highest accuracy, with a training success rate of 99.97% and a testing success rate of 97.01%. Key hyperparameters, such as batch size, number of epochs, and the choice of optimizer, were all considered when optimizing these pre-trained models to select the best one. This study will help medical research classify BM cells effectively and support the diagnosis of diseases like leukemia.
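A minimal sketch of the fine-tuned DenseNet121 head described above (one batch-normalization layer, one dropout layer, and two dense layers) compiled with the Adamax optimizer; the dropout rate, dense width, and input size are assumptions, and the attention mechanism is not shown.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import DenseNet121

NUM_CLASSES = 7  # seven bone marrow cell classes, per the abstract

base = DenseNet121(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze ImageNet weights for the initial fine-tuning stage

# Head mirroring the abstract: one batch-normalization layer, one dropout layer, two dense layers
model = models.Sequential([
    base,
    layers.BatchNormalization(),
    layers.Dropout(0.3),                       # dropout rate is an assumption, not stated in the abstract
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=optimizers.Adamax(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```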

7.
Life (Basel) ; 13(10)2023 Oct 21.
Article in English | MEDLINE | ID: mdl-37895474

ABSTRACT

Breast cancer (BC) is the most common cancer among women, making it essential to have an accurate and dependable system for diagnosing benign or malignant tumors. It is essential to detect this cancer early in order to inform subsequent treatment. Currently, fine needle aspiration (FNA) cytology and machine learning (ML) models can be used to detect and diagnose this cancer more accurately. Consequently, an effective and dependable approach needs to be developed to enhance the clinical capacity to diagnose this illness. This study aims to detect and divide BC into two categories using the Wisconsin Diagnostic Breast Cancer (WDBC) benchmark feature set and to select the fewest features that attain the highest accuracy. To this end, this study explores automated BC prediction using multi-model features and ensemble machine learning (EML) techniques. We propose an advanced ensemble technique that incorporates voting, bagging, stacking, and boosting as combination strategies for the classifiers in the proposed EML methods to distinguish benign breast tumors from malignant cancers. In the feature extraction process, we suggest a recursive feature elimination technique to find the features of the WDBC that are most pertinent to BC detection and classification. Furthermore, we conducted cross-validation experiments, and the comparative results demonstrated that our method can effectively enhance classification performance and attain the highest value in six evaluation metrics: precision, sensitivity, area under the curve (AUC), specificity, accuracy, and F1-score. Overall, the stacking model achieved the best average accuracy, at 99.89%, and its sensitivity, specificity, F1-score, precision, and AUC/ROC were 1.00, 0.999, 1.00, 1.00, and 1.00, respectively. The findings of this study can be used to establish a reliable clinical detection system, enabling experts to make more precise and effective decisions in the future. Additionally, the proposed technology might be used to detect a variety of cancers.
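A minimal sketch of recursive feature elimination followed by a stacking ensemble on the WDBC feature set (which ships with scikit-learn); the base estimators, the number of retained features, and the meta-learner are assumptions rather than the exact configuration evaluated in the study.

```python
from sklearn.datasets import load_breast_cancer          # the WDBC feature set bundled with scikit-learn
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Recursive feature elimination keeps the most informative features (target count is an assumption)
selector = RFE(estimator=LogisticRegression(max_iter=5000), n_features_to_select=10)

# A small stacking ensemble; the paper also explores voting, bagging, and boosting combinations
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=5000),
)

pipe = make_pipeline(StandardScaler(), selector, stack)
scores = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
print(f"mean 10-fold accuracy: {scores.mean():.4f}")
```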

8.
PLoS One ; 18(9): e0286874, 2023.
Article in English | MEDLINE | ID: mdl-37747876

ABSTRACT

This study proposes a novel hybrid computational approach that integrates the artificial dragonfly algorithm (ADA) with the Hopfield neural network (HNN) to achieve an optimal representation of the Exact Boolean kSatisfiability (EBkSAT) logical rule. The primary objective is to investigate the effectiveness and robustness of the ADA algorithm in expediting the training phase of the HNN to attain an optimized EBkSAT logic representation. To assess the performance of the proposed hybrid computational model, a specific Exact Boolean kSatisfiability problem is constructed, and simulated data sets are generated. The evaluation metrics employed include the global minimum ratio (GmR), root mean square error (RMSE), mean absolute percentage error (MAPE), and network computational time (CT) for EBkSAT representation. Comparative analyses are conducted between the results obtained from the proposed model and existing models in the literature. The findings demonstrate that the proposed hybrid model, ADA-HNN-EBkSAT, surpasses existing models in terms of accuracy and computational time. This suggests that the ADA algorithm exhibits effective compatibility with the HNN for achieving an optimal representation of the EBkSAT logical rule. These outcomes carry significant implications for addressing intricate optimization problems across diverse domains, including computer science, engineering, and business.
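The ADA and HNN components are beyond a short sketch, but the evaluation metrics reported above can be illustrated on hypothetical simulation outputs; the array values and the exact metric definitions (in particular the GmR tolerance and the MAPE denominator) are assumptions, not the paper's formulas.

```python
import numpy as np

# Hypothetical simulation outputs: final network energies across runs and the known global-minimum energy
final_energies = np.array([-12.0, -12.0, -11.5, -12.0, -10.8])
global_min_energy = -12.0
tolerance = 1e-6

# Global minimum ratio (GmR): fraction of runs that reached the global minimum energy
gmr = np.mean(np.abs(final_energies - global_min_energy) < tolerance)

# RMSE and MAPE of the retrieved energies relative to the global minimum
rmse = np.sqrt(np.mean((final_energies - global_min_energy) ** 2))
mape = np.mean(np.abs((final_energies - global_min_energy) / global_min_energy)) * 100

print(f"GmR={gmr:.2f}  RMSE={rmse:.3f}  MAPE={mape:.2f}%")
```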


Subjects
Algorithms, Computer Neural Networks, Benchmarking, Commerce, Engineering
9.
Diagnostics (Basel) ; 13(14)2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37510142

ABSTRACT

The segmentation of gastrointestinal (GI) organs is crucial in radiation therapy for treating GI cancer. It allows a targeted radiation therapy plan to be developed while minimizing radiation exposure to healthy tissue, improving treatment success and decreasing side effects. Medical diagnostics in GI tract organ segmentation is essential for accurate disease detection, precise differential diagnosis, optimal treatment planning, and efficient disease monitoring. This research presents a hybrid encoder-decoder model for segmenting healthy organs in the GI tract in biomedical images of cancer patients, which may help radiation oncologists treat cancer more quickly. Here, EfficientNet B0 is used as a bottom-up encoder architecture for downsampling, capturing contextual information by extracting meaningful and discriminative features from input images. The performance of the EfficientNet B0 encoder is compared with that of three encoders: ResNet 50, MobileNet V2, and Timm Gernet. The Feature Pyramid Network (FPN) is a top-down decoder architecture used for upsampling to recover spatial information; its performance is compared with that of three decoders: PAN, Linknet, and MAnet. This paper therefore proposes a segmentation model that combines the FPN decoder with EfficientNet B0 as the encoder. Furthermore, the proposed hybrid model is analyzed using the Adam, Adadelta, SGD, and RMSprop optimizers. Four performance criteria are used to assess the models: the Jaccard and Dice coefficients, model loss, and processing time. The proposed model achieves Dice coefficient and Jaccard index values of 0.8975 and 0.8832, respectively. The proposed method can assist radiation oncologists in precisely targeting areas hosting cancer cells in the gastrointestinal tract, allowing for more efficient and timely cancer treatment.
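A minimal sketch of the FPN-with-EfficientNet-B0 hybrid named above, built with the third-party segmentation_models_pytorch package, together with a direct Dice/Jaccard computation on binarized masks; the three output classes, input size, and threshold are assumptions.

```python
import torch
import segmentation_models_pytorch as smp

# FPN decoder with an EfficientNet-B0 encoder, matching the hybrid model described above;
# three output channels are assumed for the GI organs (e.g., large bowel, small bowel, stomach)
model = smp.FPN(encoder_name="efficientnet-b0", encoder_weights="imagenet",
                in_channels=3, classes=3)

x = torch.randn(2, 3, 256, 256)                          # dummy batch of scan slices
target = torch.randint(0, 2, (2, 3, 256, 256)).float()   # dummy multi-channel ground-truth masks
pred = (torch.sigmoid(model(x)) > 0.5).float()

# Dice coefficient and Jaccard index computed directly from the binarized masks
inter = (pred * target).sum()
dice = 2 * inter / (pred.sum() + target.sum() + 1e-7)
jaccard = inter / (pred.sum() + target.sum() - inter + 1e-7)
print(f"Dice={dice:.3f}  Jaccard={jaccard:.3f}")
```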

10.
PeerJ Comput Sci ; 9: e1294, 2023.
Article in English | MEDLINE | ID: mdl-37346705

ABSTRACT

Higher educational institutes generate massive amounts of student data. This data needs to be explored in depth to better understand various facets of student learning behavior. The educational data mining approach provides the means to extract useful and non-trivial knowledge from large collections of student data. Using the educational data mining method of classification, this research analyzes data from 291 university students in an attempt to predict student performance at the end of a 4-year degree program. A student segmentation framework is also proposed to identify students at various levels of academic performance. Coupled with the prediction model, the proposed segmentation framework provides a useful mechanism for devising pedagogical policies that increase the quality of education by mitigating academic failure and encouraging higher performance. The experimental results indicate the effectiveness of the proposed framework and the feasibility of classifying students into multiple performance levels using a small subset of the courses taught in the first two years of the 4-year degree program.
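A minimal sketch of the classification-plus-segmentation idea described above on fabricated records: early-course grades predict a performance band obtained by quantile binning; the course names, band labels, and classifier are all illustrative, not the study's features or cut points.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Fabricated records: grade points in six early courses for 291 students (mirroring the cohort size)
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.uniform(0, 4, size=(291, 6)),
                  columns=[f"course_{i}" for i in range(1, 7)])
final_gpa = df.mean(axis=1) + rng.normal(0, 0.3, 291)  # noisy proxy for end-of-degree performance

# Segment students into performance levels by quantile binning (labels are illustrative)
df["level"] = pd.qcut(final_gpa, q=3, labels=["low", "medium", "high"])

X, y = df.drop(columns="level"), df["level"]
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.3f}")
```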

11.
Healthcare (Basel) ; 11(11)2023 May 26.
Article in English | MEDLINE | ID: mdl-37297701

ABSTRACT

Pneumonia has been directly responsible for a huge number of deaths all across the globe. Pneumonia shares visual features with other respiratory diseases, such as tuberculosis, which can make it difficult to distinguish between them. Moreover, there is significant variability in the way chest X-ray images are acquired and processed, which can impact the quality and consistency of the images. This can make it challenging to develop robust algorithms that accurately identify pneumonia in all types of images. Hence, there is a need to develop robust, data-driven algorithms that are trained on large, high-quality datasets and validated using a range of imaging techniques and expert radiological analysis. In this research, a deep-learning-based model is demonstrated for differentiating between normal and severe cases of pneumonia. The complete proposed system comprises eight pre-trained models, namely ResNet50, ResNet152V2, DenseNet121, DenseNet201, Xception, VGG16, EfficientNet, and MobileNet. These eight pre-trained models were evaluated on two chest X-ray datasets containing 5856 and 112,120 images, respectively. The best accuracy was obtained with the MobileNet model, with values of 94.23% and 93.75% on the two datasets. Key hyperparameters, including batch size, number of epochs, and the choice of optimizer, were all considered during the comparative interpretation of these models to determine the most appropriate one.
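A minimal sketch of MobileNet transfer learning for the normal-versus-pneumonia task described above; the classification head, image size, and the `chest_xray/train` directory layout (one subfolder per class) are assumptions rather than the authors' exact pipeline.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

# ImageNet-pretrained MobileNet backbone with a small binary classification head (head is an assumption)
base = MobileNet(include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1], as MobileNet expects
    base,
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),       # normal vs. pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory of chest X-rays arranged as chest_xray/train/NORMAL and chest_xray/train/PNEUMONIA
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)
```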

12.
Diagnostics (Basel) ; 13(12)2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37371016

ABSTRACT

Acute Lymphocytic Leukemia is a type of cancer that occurs when abnormal white blood cells that do not function properly are produced in the bone marrow, crowding out healthy cells and weakening the body's immunity and thus its ability to resist infections. It spreads quickly in children's bodies, and if not treated promptly it may lead to death. Manual detection of this disease is a tedious and slow task; machine learning and deep learning techniques are faster and more accurate. In this paper, a deep feature selection-based approach, ResRandSVM, is proposed for the detection of Acute Lymphocytic Leukemia in blood smear images. The proposed approach uses seven deep-learning models for deep feature extraction from blood smear images: ResNet152, VGG16, DenseNet121, MobileNetV2, InceptionV3, EfficientNetB0, and ResNet50. Three feature selection methods are then used to select valuable and important features: analysis of variance (ANOVA), principal component analysis (PCA), and Random Forest. The selected feature map is fed to four different classifiers, AdaBoost, Support Vector Machine, Artificial Neural Network, and Naïve Bayes, to classify the images into leukemia and normal images. The best-performing combination uses ResNet50 as the feature extractor, Random Forest for feature selection, and a Support Vector Machine as the classifier, achieving an accuracy of 0.900, precision of 0.902, recall of 0.957, and F1-score of 0.929.
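A minimal sketch of the ResNet50 feature extraction, Random Forest feature selection, and SVM classification combination reported as best above; the images and labels are fabricated placeholders, and the selection threshold and SVM kernel are assumptions.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Deep feature extractor: global-average-pooled ResNet50 activations (2048 features per image)
extractor = ResNet50(include_top=False, weights="imagenet",
                     input_shape=(224, 224, 3), pooling="avg")

# Fabricated stand-in blood smear images and labels (0 = normal, 1 = leukemia)
images = np.random.rand(64, 224, 224, 3).astype("float32")
labels = np.random.randint(0, 2, size=64)

features = extractor.predict(preprocess_input(images * 255.0), verbose=0)

# Random-forest importances select the most useful deep features, then an SVM classifies them
clf = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="rbf"),
)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```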

13.
Diagnostics (Basel) ; 13(9)2023 May 08.
Article in English | MEDLINE | ID: mdl-37175042

ABSTRACT

The segmentation of lungs from medical images is a critical step in the diagnosis and treatment of lung diseases. Deep learning techniques have shown great promise in automating this task, eliminating the need for manual annotation by radiologists. In this research, a convolutional neural network architecture is proposed for lung segmentation using chest X-ray images. In the proposed model, a concatenate block is embedded to learn a series of filters or features used to extract meaningful information from the image. Moreover, a transpose layer is employed in the concatenate block to improve the spatial resolution of the feature maps generated by the preceding convolutional layer. The proposed model is trained using k-fold validation, a powerful and flexible tool for evaluating the performance of deep learning models. The model is evaluated on five different subsets of the data, taking k as 5, to obtain an optimized model and more accurate results. The performance of the proposed model is analyzed for different hyperparameters, with a batch size of 32, the Adam optimizer, and 40 epochs. The dataset used for the segmentation task is taken from the Kaggle repository. The performance parameters accuracy, IoU, and dice coefficient are calculated, and the values obtained are 0.97, 0.93, and 0.96, respectively.
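A rough sketch of a small encoder-decoder with the kind of concatenate block described above, where a transpose convolution upsamples the feature maps before concatenation with an encoder skip connection; the filter counts, depth, and input size are assumptions rather than the proposed architecture.

```python
from tensorflow.keras import layers, models

def concatenate_block(x, skip, filters):
    """Upsample with a transpose convolution, merge with an encoder skip feature map, then convolve."""
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

inputs = layers.Input(shape=(256, 256, 1))  # single-channel chest X-ray
c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
p1 = layers.MaxPooling2D()(c1)
c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
p2 = layers.MaxPooling2D()(c2)

b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)  # bottleneck

u1 = concatenate_block(b, c2, 64)
u2 = concatenate_block(u1, c1, 32)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(u2)  # binary lung mask

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```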

14.
Sensors (Basel) ; 22(3)2022 Jan 26.
Article in English | MEDLINE | ID: mdl-35161698

ABSTRACT

The coronavirus pandemic, also known as the COVID-19 pandemic, is ongoing. The virus was first identified in December 2019 in Wuhan, China, and later spread to 192 countries. As of this writing, 251,266,207 people have been affected, and 5,070,244 deaths have been reported. Due to the growing number of COVID-19 patients, the demand for COVID wards is increasing. Telemedicine applications are growing rapidly because they offer convenient treatment options, and the healthcare sector is quickly adopting them for the treatment of COVID-19 patients. Most telemedicine applications are developed for heterogeneous environments, and because of this diversity, data transmission between similar and dissimilar telemedicine applications is a difficult task. In this paper, we propose a Tele-COVID system architecture design, along with its security aspects, to provide treatment for COVID-19 patients at a distance. The secure Tele-COVID system architecture is designed to resolve the problems of data interchange between two different telemedicine applications, interoperability, and vendor lock-in. Tele-COVID is a web-based and Android telemedicine application that provides suitable treatment to COVID-19 patients. With Tele-COVID, patients can be treated at a distance without visiting hospitals, and necessary services can also be provided in case of emergency. The application was tested on COVID-19 patients in a county hospital, and initial results are reported.


Subjects
COVID-19, Telemedicine, Hospitals, Humans, Pandemics, SARS-CoV-2
15.
J Pharm Pract ; 24(2): 216-22, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21712217

ABSTRACT

OBJECTIVE: Protease inhibitors (PIs) exhibit considerable interpatient pharmacokinetic variability in plasma trough concentrations. Therapeutic drug monitoring (TDM) is occasionally used to guide chronic dosing to achieve target trough concentrations, but its clinical success assumes minimal intrasubject variability. Therefore, our primary objective was to evaluate intrapatient variability in atazanavir (ATV) plasma trough concentrations in HIV-1-infected patients. DESIGN/METHODS: In a single-site, prospective, cohort study, patients on atazanavir with or without ritonavir (ATV/r or ATV) were enrolled for 2 clinic visits. Adherence and time since last dose (TSLD) were verified at each visit. ATV was assayed with high-performance liquid chromatography. Intra- and interpatient variation was evaluated using the median intraindividual percentage coefficient of variation (ICV). RESULTS: The mean 24-hour ATV trough concentrations for the first and second visit were 598 (CV 84%) and 525 ng/mL (CV 66%), respectively, for the ATV/r group (n = 10) (P = .511), and 300 (CV 81%) and 434 ng/mL (CV 106%), respectively, for the ATV group (n = 4) (P = .369). Median ICV was 43.1% for all patients (range: 0.6%-107.6%), 38.1% (0.6%-107.6%) for the ATV/r group, and 33.1% (2.3%-87.6%) for the ATV group. CONCLUSIONS: Potential intrapatient variability in ATV troughs suggests that repeated measurements may be required to ensure that target values are maintained.
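As an illustration of the variability metric used above, the intraindividual percentage coefficient of variation (%CV) can be computed per patient from the two visit troughs and summarized by the median; the concentration values below are fabricated examples, not study data.

```python
import numpy as np

# Fabricated 24-hour ATV trough concentrations (ng/mL) at visit 1 and visit 2 for four patients
troughs = np.array([
    [598.0, 525.0],
    [300.0, 434.0],
    [710.0, 655.0],
    [210.0, 480.0],
])

# Intraindividual %CV per patient: SD of that patient's two troughs divided by their mean, times 100
cv_per_patient = troughs.std(axis=1, ddof=1) / troughs.mean(axis=1) * 100
icv_median = np.median(cv_per_patient)

print("per-patient %CV:", np.round(cv_per_patient, 1))
print(f"median intraindividual CV (ICV): {icv_median:.1f}%")
```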


Subjects
Anti-HIV Agents/administration & dosage, HIV Infections/drug therapy, Oligopeptides/administration & dosage, Pyridines/administration & dosage, Ritonavir/administration & dosage, Atazanavir Sulfate, High-Pressure Liquid Chromatography, Cohort Studies, Drug Monitoring/methods, Combination Drug Therapy, HIV Infections/blood, HIV-1/drug effects, Humans, Middle Aged, Physiological Monitoring, Prospective Studies, Statistics as Topic, Treatment Outcome