Results 1 - 19 of 19
1.
Front Plant Sci ; 15: 1402835, 2024.
Article in English | MEDLINE | ID: mdl-38988642

ABSTRACT

The agricultural sector is pivotal to food security and economic stability worldwide. Corn holds particular significance in the global food industry, especially in developing countries where agriculture is a cornerstone of the economy. However, corn crops are vulnerable to various diseases that can significantly reduce yields. Early detection and precise classification of these diseases are crucial to prevent damage and ensure high crop productivity. This study leverages the VGG16 deep learning (DL) model to classify corn leaves into four categories: healthy, blight, gray spot, and common rust. Despite the efficacy of DL models, they often face challenges related to the explainability of their decision-making processes. To address this, Layer-wise Relevance Propagation (LRP) is employed to enhance the model's transparency by generating intuitive and human-readable heat maps of input images. The proposed VGG16 model, augmented with LRP, outperformed previous state-of-the-art models in classifying corn leaf diseases. Simulation results demonstrated that the model not only achieved high accuracy but also provided interpretable results, highlighting critical regions in the images used for classification. By generating human-readable explanations, this approach ensures greater transparency and reliability in model performance, aiding farmers in improving their crop yields.
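The heat-map idea can be illustrated with the LRP epsilon rule applied to a single dense layer; the weights, activations, and relevance values below are hypothetical toys, not the paper's VGG16 pipeline:

```python
# LRP epsilon rule for one dense layer: relevance R_j at the output is
# redistributed to the inputs in proportion to each input's contribution
# x_i * w_ij to the pre-activation z_j. All values are illustrative.

def lrp_epsilon(x, w, relevance_out, eps=1e-9):
    n_in, n_out = len(w), len(w[0])
    z = [sum(x[i] * w[i][j] for i in range(n_in)) for j in range(n_out)]
    r_in = [0.0] * n_in
    for i in range(n_in):
        for j in range(n_out):
            denom = z[j] + (eps if z[j] >= 0 else -eps)  # epsilon stabilizer
            r_in[i] += x[i] * w[i][j] / denom * relevance_out[j]
    return r_in

x = [1.0, 2.0, 0.5]                         # toy "pixel" activations
w = [[0.3, -0.1], [0.2, 0.4], [-0.5, 0.6]]  # 3 inputs x 2 outputs
r_out = [1.0, 0.0]                          # relevance placed on the predicted class
r_in = lrp_epsilon(x, w, r_out)
print(sum(r_in))  # ~1.0: relevance is (approximately) conserved
```

Propagating such a rule backwards layer by layer through a CNN yields a per-pixel relevance map, which is what the heat maps visualize.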

3.
Sci Rep ; 14(1): 6173, 2024 03 14.
Article in English | MEDLINE | ID: mdl-38486010

ABSTRACT

A kidney stone is a solid formation that can lead to kidney failure, severe pain, and reduced quality of life from urinary system blockages. While medical experts can interpret kidney-ureter-bladder (KUB) X-ray images, some images pose challenges for human detection and require significant analysis time. Consequently, developing a detection system becomes crucial for accurately classifying KUB X-ray images. This article applies a transfer learning (TL) model with a pre-trained VGG16, empowered with explainable artificial intelligence (XAI), to establish a system that takes KUB X-ray images and accurately categorizes them as kidney stone or normal cases. The findings demonstrate that the model achieves a testing accuracy of 97.41% in identifying kidney stones or normal KUB X-rays on the dataset used. The VGG16 model delivers highly accurate predictions but lacks fairness and explainability in its decision-making process. To address this concern, the study incorporates Layer-Wise Relevance Propagation (LRP), an XAI technique that increases the model's fairness and transparency and facilitates human comprehension of its predictions. Consequently, XAI can play an important role in assisting doctors with the accurate identification of kidney stones, thereby facilitating the execution of effective treatment strategies.


Subjects
Artificial Intelligence; Kidney Calculi; Humans; X-Rays; Quality of Life; Kidney Calculi/diagnostic imaging; Fluoroscopy
4.
J Healthc Eng ; 2023: 1406545, 2023.
Article in English | MEDLINE | ID: mdl-37284488

ABSTRACT

Lymphoma and leukemia are fatal blood cancers that affect all age groups, male and female, and contribute to a high death rate. Both are associated with the damage and abnormal proliferation of immature lymphocytes, monocytes, neutrophils, and eosinophils. Early prediction and treatment of blood cancer is therefore a major survival issue for the health sector. Currently, blood cancer is commonly analyzed manually from microscopic images of white blood cells, a slow process that contributes to diagnostic delays and deaths; manual analysis of eosinophils, lymphocytes, monocytes, and neutrophils is difficult and time-consuming. Previous studies have applied numerous deep learning and machine learning techniques to predict blood cancer, but these studies still have limitations. This article therefore proposes a deep learning model, empowered with transfer learning and image processing techniques, to improve prediction results. The proposed model incorporates different levels of prediction, analysis, and learning, varies learning criteria such as learning rate and epochs, and uses several transfer learning architectures with different parameters, together with cloud techniques, to choose the best model for predicting the white blood cells involved in cancer. After extensive experiments with AlexNet, MobileNet, and ResNet, both with and without image processing and under numerous learning criteria, stochastic gradient descent with momentum combined with AlexNet and image processing performed best, with a prediction accuracy of 97.3% and a misclassification rate of 2.7%. The proposed model gives good results and can be applied to smart diagnosis of blood cancer using eosinophils, lymphocytes, monocytes, and neutrophils.
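The best-performing optimizer above, stochastic gradient descent with momentum, can be sketched in a few lines; the quadratic toy loss, learning rate, and momentum value are illustrative, not the study's training configuration:

```python
# SGD with momentum: a velocity term accumulates past gradients, which
# damps oscillation and speeds convergence compared with plain SGD.

def sgdm(grad, w0, lr=0.1, momentum=0.9, steps=300):
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # velocity accumulates past gradients
        w = w + v                        # parameter update uses the velocity
    return w

# toy loss f(w) = (w - 3)^2 with gradient 2(w - 3); minimum at w = 3
w_star = sgdm(lambda w: 2 * (w - 3), w0=0.0)
print(w_star)  # converges close to 3
```

In the study this update rule is applied to the AlexNet weights rather than a single scalar, but the mechanics are the same.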


Subjects
Hematologic Neoplasms; Leukemia; Neoplasms; Humans; Male; Female; Leukocytes; Machine Learning; Neoplasms/diagnosis; Leukemia/diagnosis; Image Processing, Computer-Assisted/methods
5.
Diagnostics (Basel) ; 13(2)2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36673080

ABSTRACT

COVID-19 is a rapidly spreading pandemic, and early detection is important to halt the spread of infection. The outbreak has severely affected people around the world, with rising death rates driven by transmission through physical interaction. It is therefore very important to control the spread of the virus and detect symptoms at an early stage so that proper preventive measures can be taken in good time. In response to COVID-19, automated approaches based on deep learning, machine learning, image processing, and medical imaging such as chest radiography (CXR) and computed tomography (CT) have been developed. Currently, the coronavirus is identified via an RT-PCR test; alternative solutions are required because of the lengthy turnaround time and the large number of false-negative results. To help prevent the spread of the virus, we propose a Vehicle-based COVID-19 Detection System that assesses the related symptoms of a person inside a vehicle, applying a deep extreme learning machine. The proposed system uses headache, flu, fever, cough, chest pain, shortness of breath, tiredness, nasal congestion, diarrhea, breathing difficulty, and pneumonia as input parameters to indicate the presence of COVID-19. Deploying the approach in vehicles would make it easier for governments to perform timely COVID-19 testing in cities. Because symptoms in humans are inherently ambiguous, we use fuzzy modeling for simulation. The suggested COVID-19 detection model achieved an accuracy of more than 90%.
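The fuzzy handling of ambiguous symptoms can be sketched generically. The membership shapes, the two rules, and the defuzzification weights below are illustrative assumptions, not the authors' actual rule base:

```python
# Mamdani-style fuzzy scoring over two hypothetical symptom inputs
# (fever and cough severity, each rated on a 0..1 scale).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def covid_risk(fever, cough):
    high_f = tri(fever, 0.4, 1.0, 1.6)  # degree of 'high fever'
    high_c = tri(cough, 0.4, 1.0, 1.6)  # degree of 'severe cough'
    # Rule 1: IF fever high AND cough high THEN risk high (AND = min)
    r_high = min(high_f, high_c)
    # Rule 2: IF fever low OR cough low THEN risk low (OR = max)
    r_low = max(1 - high_f, 1 - high_c)
    # Defuzzify as a weighted average of rule outputs (high=1.0, low=0.1)
    return (r_high * 1.0 + r_low * 0.1) / (r_high + r_low)

print(covid_risk(0.9, 0.8) > covid_risk(0.2, 0.1))  # True
```

The paper's system uses eleven symptoms rather than two, but each follows the same fuzzification, rule evaluation, and defuzzification path.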

6.
Comput Intell Neurosci ; 2022: 5054641, 2022.
Article in English | MEDLINE | ID: mdl-36268157

ABSTRACT

With the emergence of the Internet of Things (IoT), the investigation of different diseases in healthcare has improved, and cloud computing has helped to centralize data and make patient records accessible throughout the world. In this context, the electrocardiogram (ECG) is used to diagnose heart diseases and abnormalities. Machine learning techniques have been used previously, but they are feature-based and not as accurate as transfer learning. This work proposes the development and validation of an embedded device for detecting ECG arrhythmia using a transfer learning (DVEEA-TL) model. The model combines hardware, software, and two datasets that are augmented and fused, and achieves higher accuracy than previous work and research. In the proposed model, a new dataset is created by combining a Kaggle dataset with real-time healthy and unhealthy recordings, and the AlexNet transfer learning approach is then applied to obtain more accurate readings of the ECG signals. In this research, the DVEEA-TL model diagnoses heart abnormality with accuracies of 99.9% and 99.8% in the training and validation stages, respectively, making it the best and most reliable approach compared with previous research in this field.


Subjects
Arrhythmias, Cardiac; Electrocardiography; Humans; Electrocardiography/methods; Arrhythmias, Cardiac/diagnosis; Cloud Computing; Machine Learning; Software
7.
Comput Biol Med ; 150: 106019, 2022 11.
Article in English | MEDLINE | ID: mdl-36162198

ABSTRACT

In recent years, the global Internet of Medical Things (IoMT) industry has evolved at a tremendous speed. Security and privacy are key concerns in the IoMT, owing to the huge scale and deployment of IoMT networks. Machine learning (ML) and blockchain (BC) technologies have significantly enhanced the capabilities and facilities of healthcare 5.0, spawning a new area known as "Smart Healthcare." By identifying concerns early, a smart healthcare system can help avoid long-term damage, enhancing patients' quality of life while reducing their stress and healthcare costs. The IoMT enables a range of functionalities in the field of information technology, one of which is smart and interactive health care. However, combining medical data into a single storage location to train a powerful machine learning model raises serious concerns about privacy, ownership, and regulatory compliance. Federated learning (FL) overcomes these difficulties by using a central aggregation server to disseminate a global learning model while each local participant keeps control of patient information, assuring data confidentiality and security. This article conducts a comprehensive analysis of the findings on blockchain technology entangled with federated learning in healthcare 5.0. The purpose of this study is to construct a secure health monitoring system in healthcare 5.0 that uses blockchain technology and an Intrusion Detection System (IDS) to detect any malicious activity in a healthcare network, enabling physicians to monitor patients through medical sensors and take necessary measures periodically by predicting diseases. The proposed system demonstrates that the approach is optimized effectively for healthcare monitoring: the healthcare 5.0 system entangled with the FL approach achieves 93.22% accuracy for disease prediction, and the proposed RTS-DELM-based secure healthcare 5.0 system achieves 96.18% accuracy for intrusion detection.


Subjects
Blockchain; Humans; Quality of Life; Technology; Health Facilities; Delivery of Health Care
8.
Sensors (Basel) ; 22(18)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36146347

ABSTRACT

Attention is a complex cognitive process with innate resource management and information selection capabilities, maintaining a certain level of functional awareness in socio-cognitive service agents. Human-machine society depends on creating believable behaviors, which include processing sensory information based on contextual adaptation and focusing on specific aspects. Cognitive processes based on selective attention help an agent utilize its computational resources efficiently by scheduling its intellectual tasks, including decision-making, goal planning, action selection, and execution of actions. This study reports ongoing work on developing a cognitive architectural framework, the Nature-inspired Humanoid Cognitive Computing Platform for Self-aware and Conscious Agents (NiHA). NiHA comprises cognitive theories, frameworks, and applications within machine consciousness (MC) and artificial general intelligence (AGI). The paper focuses on top-down and bottom-up attention mechanisms for service agents as a step towards machine consciousness, and evaluates the behavioral impact of psychophysical states on attention. The proposed agent attains almost 90% accuracy in attention generation. Because context-based operation is important in social interaction, the agent's attention was also evaluated with the effect of psychophysical states on parallel selective attention, attaining 89% accuracy. Adding emotions to the attention process produced more context-based responses.


Subjects
Artificial Intelligence; Psychophysiology; Cognition/physiology; Humans; Perception
9.
Comput Intell Neurosci ; 2022: 6852845, 2022.
Article in English | MEDLINE | ID: mdl-35958748

ABSTRACT

According to a World Health Organization (WHO) report, heart disease is spreading rapidly throughout the world, and the situation is becoming alarming in people aged 40 or above (Xu, 2020). Different methods and procedures are adopted to detect and diagnose heart abnormalities, and data scientists are working on methods that deliver the required accuracy (Strodthoff et al., 2021). The electrocardiogram (ECG) records the heart's condition as a waveform. Feature-based machine learning techniques have long played a vital role in the medical sciences, with data centralized in cloud computing and accessible throughout the world. Deep learning and transfer learning widen this vision, introducing transfer learning methods that improve accuracy and time management for ECG detection compared with earlier machine learning methods; transfer learning has thus made research in this area more appropriate and innovative. This work proposes a comparison and accuracy analysis of different transfer learning methods using ECG classification for detecting ECG arrhythmia (CAA-TL). The CAA-TL model performs multiclass classification of an ECG dataset taken from Kaggle; some healthy and unhealthy recordings were collected in real time, augmented, and fused with the Kaggle dataset, i.e., the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) dataset. CAA-TL evaluates the accuracy of heart problem detection using ResNet50, AlexNet, and SqueezeNet. All three deep learning methods showed remarkable accuracy, improving on previous research, although multiclass classification with a massive ECG dataset proved time-consuming. Comparing the deep learning approaches with respect to their layers broadens the research and gives more clarity and accuracy. The implementation of the proposed method showed accuracies of 98.8%, 90.08%, and 91% for AlexNet, SqueezeNet, and ResNet50, respectively.


Subjects
Deep Learning; Arrhythmias, Cardiac/diagnosis; Cloud Computing; Electrocardiography/methods; Humans; Machine Learning
10.
Sensors (Basel) ; 22(16)2022 Aug 19.
Article in English | MEDLINE | ID: mdl-36016001

ABSTRACT

Hundreds of image encryption schemes have been proposed, as the literature indicates. The majority of these schemes use pixels as the building blocks for confusion and diffusion operations. Pixel-level operations are time-consuming and thus not suitable for many critical applications (e.g., telesurgery), while security remains of the utmost importance when designing such schemes. This study provides a scheme based on block-level scrambling, with increased speed. Three streams of chaotic data are obtained through the intertwining logistic map (ILM). For a given image, the algorithm creates blocks of eight pixels; two blocks, randomly selected from the long array of blocks, are swapped an arbitrary number of times, with two streams of random numbers facilitating this process. The scrambled image is then XORed with a key image generated from the third stream of random numbers to obtain the final cipher image. Plaintext sensitivity is incorporated through SHA-256 hash codes of the given image. The suggested cipher is subjected to a comprehensive set of security tests, such as key space, histogram, correlation coefficient, information entropy, differential attack, peak signal-to-noise ratio (PSNR), noise and data loss attacks, time complexity, and encryption throughput. In particular, the computational time of 0.1842 s and the throughput of 3.3488 Mbps outperform many published works, which bears immense promise for real-world application.
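The described pipeline (chaotic streams, block swaps, then an XOR with a key image) can be sketched end-to-end. As simplifying assumptions, a plain logistic map stands in for the intertwining logistic map, the image is a flat list of bytes, and the SHA-256 plaintext-sensitivity step is omitted:

```python
# Block-level scrambling cipher sketch: one chaotic stream drives block-pair
# swaps (scrambling), another serves as the key image for the XOR (diffusion).

def chaos_stream(x0, n, r=3.99):
    """Logistic-map orbit mapped to integers (stand-in for the ILM)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(int(x * 1e6))
    return xs

def encrypt(pixels, x0=0.42, swaps=64, block=8):
    nb = len(pixels) // block
    img = list(pixels)
    s = chaos_stream(x0, 2 * swaps + len(pixels))
    for k in range(swaps):                    # block-level scrambling
        i, j = s[2 * k] % nb, s[2 * k + 1] % nb
        a, b = i * block, j * block
        img[a:a + block], img[b:b + block] = img[b:b + block], img[a:a + block]
    key = s[2 * swaps:]                       # third stream: key image
    return [p ^ (key[t] % 256) for t, p in enumerate(img)]

def decrypt(cipher, x0=0.42, swaps=64, block=8):
    nb = len(cipher) // block
    s = chaos_stream(x0, 2 * swaps + len(cipher))
    key = s[2 * swaps:]
    img = [c ^ (key[t] % 256) for t, c in enumerate(cipher)]
    for k in reversed(range(swaps)):          # undo swaps in reverse order
        i, j = s[2 * k] % nb, s[2 * k + 1] % nb
        a, b = i * block, j * block
        img[a:a + block], img[b:b + block] = img[b:b + block], img[a:a + block]
    return img

plain = list(range(256)) * 4                  # 1024 toy "pixels"
cipher = encrypt(plain)
print(decrypt(cipher) == plain)  # True: decryption inverts the cipher
```

The speed advantage claimed in the abstract comes from swapping eight pixels per operation instead of permuting pixels one at a time.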

11.
Sensors (Basel) ; 22(12)2022 Jun 15.
Article in English | MEDLINE | ID: mdl-35746303

ABSTRACT

Security and privacy in the Internet of Things (IoT) pose significant challenges, primarily because of the vast scale and deployment of IoT networks. Blockchain-based solutions support decentralized protection and privacy. In this study, a private blockchain-based smart home network architecture for intrusion detection, empowered with a Fused Real-Time Sequential Deep Extreme Learning Machine (RTS-DELM) system model, is proposed. This study investigates the methodology of RTS-DELM implemented in blockchain-based smart homes to detect any malicious activity. Data fusion and a decision-level fusion technique are also implemented to achieve enhanced accuracy. The study examines the numerous key components and features of the smart home network framework in detail. The fused RTS-DELM technique achieves a very significant level of stability with a low error rate for any intrusion activity in smart home networks. The simulation findings indicate that the suggested technique successfully optimizes smart home networks for monitoring and detecting harmful or intrusive activities.


Subjects
Blockchain; Internet of Things; Computer Security; Machine Learning
12.
Sensors (Basel) ; 22(9)2022 May 04.
Article in English | MEDLINE | ID: mdl-35591194

ABSTRACT

Precipitation in any form, such as rain, snow, and hail, can affect day-to-day outdoor activities. Rainfall prediction is one of the challenging tasks in the weather forecasting process, and accurate prediction is now more difficult than before due to extreme climate variations. Machine learning techniques can predict rainfall by extracting hidden patterns from historical weather data, but selecting an appropriate classification technique for prediction is a difficult job. This research proposes a novel real-time rainfall prediction system for smart cities using a machine learning fusion technique. The proposed framework uses four widely used supervised machine learning techniques, i.e., decision tree, Naïve Bayes, K-nearest neighbors, and support vector machines. For effective prediction of rainfall, fuzzy logic is incorporated into the framework to integrate the predictive accuracies of the individual techniques, a step also known as fusion. For prediction, 12 years of historical weather data (2005 to 2017) for the city of Lahore are considered. Pre-processing tasks such as cleaning and normalization were performed on the dataset before classification. The results reflect that the proposed machine learning fusion-based framework outperforms the other models.
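The fusion step can be illustrated with a simple accuracy-weighted vote; this is a stand-in for the paper's fuzzy-logic fusion, and the model accuracies and votes below are hypothetical:

```python
# Accuracy-weighted vote over four base classifiers (e.g., decision tree,
# naive Bayes, KNN, SVM): each model's vote counts in proportion to its
# validation accuracy, so stronger models dominate disagreements.

def fused_predict(predictions, accuracies):
    score = sum(acc * (1 if p == "rain" else -1)
                for p, acc in zip(predictions, accuracies))
    return "rain" if score > 0 else "no rain"

accs  = [0.81, 0.78, 0.84, 0.88]            # hypothetical validation accuracies
preds = ["rain", "no rain", "rain", "rain"]  # one vote per base classifier
print(fused_predict(preds, accs))  # 'rain': the weighted majority wins
```

A fuzzy fusion layer generalizes this by mapping each accuracy to a membership degree before combining, but the weighting intuition is the same.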


Subjects
Fuzzy Logic; Machine Learning; Bayes Theorem; Cities; Support Vector Machine
13.
Comput Intell Neurosci ; 2022: 3606068, 2022.
Article in English | MEDLINE | ID: mdl-35126487

ABSTRACT

Smart applications and intelligent systems are being developed that are self-reliant, adaptive, and knowledge-based in nature. Emergency and disaster management, aerospace, healthcare, IoT, and mobile applications are among the domains revolutionizing the world of computing. Applications with a large number of growing devices have made the current centralized cloud design impractical. Despite the use of 5G technology, delay-sensitive applications and the cloud cannot work in parallel once parameters such as latency, bandwidth, and response time exceed their threshold values. Middleware proves to be a better solution for coping with these issues while satisfying demanding task-offloading requirements. This article recommends fog computing as that middleware: because it provides services at the edge of the network, delay-sensitive applications can be served effectively. On the other hand, fog nodes contain a limited set of resources and may not be able to process all tasks, especially for computation-intensive applications. Additionally, fog is not a replacement for the cloud but a supplement to it; both act as counterparts and offer their services according to task needs, with fog computing in relatively closer proximity to the devices than the cloud. The problem arises when deciding what to offload (data, computation, or application), where to offload (fog or cloud), and how much to offload. Fog-cloud collaboration is stochastic in terms of task-related attributes such as task size, duration, arrival rate, and required resources, so dynamic task offloading becomes crucial for utilizing fog and cloud resources to improve QoS. Since forming such a task-offloading policy is complex, this article addresses the problem and proposes an intelligent task-offloading model. Simulation results demonstrate the validity of the proposed logistic regression model, which acquires 86% accuracy compared with other algorithms, and support confidence in the predictive task-offloading policy by ensuring process consistency and reliability.
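A logistic-regression offloading policy of the kind evaluated above can be sketched on synthetic data; the two features (task size and fog-node load), the labeling rule, and the training settings are all assumptions for illustration:

```python
# Logistic regression trained by gradient descent to decide whether a task
# should be offloaded to the cloud (1) or kept at the fog node (0).
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    w = [0.0, 0.0, 0.0]                  # bias, task size, fog load
    for _ in range(epochs):
        for (size, load), y in data:
            p = sigmoid(w[0] + w[1] * size + w[2] * load)
            err = p - y                  # gradient of the log loss
            w[0] -= lr * err
            w[1] -= lr * err * size
            w[2] -= lr * err * load
    return w

# synthetic policy: offload when normalized size + load is high
random.seed(0)
data = [((s, l), 1 if s + l > 1.0 else 0)
        for s, l in ((random.random(), random.random()) for _ in range(200))]
w = train(data)
offload = lambda s, l: sigmoid(w[0] + w[1] * s + w[2] * l) > 0.5
print(offload(0.9, 0.9), offload(0.1, 0.1))
```

A real policy would also condition on arrival rate, task duration, and required resources, matching the stochastic attributes listed in the abstract.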


Subjects
Algorithms; Cloud Computing; Computer Simulation; Logistic Models; Reproducibility of Results
14.
Comput Intell Neurosci ; 2021: 2487759, 2021.
Article in English | MEDLINE | ID: mdl-34868288

ABSTRACT

The Internet of Medical Things (IoMT) enables digital devices to gather, infer, and broadcast health data via the cloud platform. The phenomenal growth of the IoMT is fueled by many factors, including the widespread and growing availability of wearables and the ever-decreasing cost of sensor-based technology. As the global population of elderly people grows and overall life expectancy rises, the cost of related healthcare will increase, demanding affordable healthcare services, solutions, and developments. Entangled with machine learning (ML) algorithms, the IoMT may bring a revolution in the medical sciences in terms of the quality of healthcare for elderly people. The effectiveness of the proposed smart healthcare (SHC) model for monitoring elderly people was observed by performing tests on IoMT datasets. For evaluation, the precision, recall, F-score, accuracy, and ROC values were computed. The authors also compare the results of the SHC model with conventional popular ML techniques, e.g., support vector machine (SVM), K-nearest neighbor (KNN), and decision tree (DT), to analyze its effectiveness.
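The reported metrics follow directly from confusion-matrix counts; the counts below are hypothetical, not the paper's results:

```python
# Precision, recall, F-score, and accuracy from true/false positives and
# negatives, as used to compare the SHC model with SVM, KNN, and DT.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)             # of predicted positives, how many real
    recall = tp / (tp + fn)                # of real positives, how many found
    f_score = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_score, accuracy

p, r, f, a = metrics(tp=90, fp=10, fn=5, tn=95)
print(round(p, 3), round(r, 3), round(f, 3), round(a, 3))  # 0.9 0.947 0.923 0.925
```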


Subjects
Algorithms; Machine Learning; Aged; Cluster Analysis; Delivery of Health Care; Humans; Support Vector Machine
15.
Comput Intell Neurosci ; 2021: 6262194, 2021.
Article in English | MEDLINE | ID: mdl-34630550

ABSTRACT

Road surface defects are crucial problems for safe and smooth traffic flow. Due to climate changes, low-quality construction material, large traffic flows, and heavy vehicles, road surface anomalies are increasing rapidly. Detecting and repairing these defects is necessary to protect drivers, passengers, and vehicles from mechanical faults. In this modern era, autonomous vehicles, which control themselves with the help of in-vehicle sensors without human commands, are an active research area, especially after the emergence of deep neural network (DNN) techniques. A combination of sensors and DNN techniques can give unmanned vehicles the perception of their surroundings needed to detect tracks and obstacles for smooth traveling, based on the deployment of artificial intelligence in vehicles. One of the biggest challenges for autonomous vehicles is avoiding critical road defects that may lead to dangerous situations. To address accident issues and share emergency information, the Intelligent Transportation System (ITS) introduced the concept of the vehicular ad hoc network (VANET) for achieving security and safety in traffic flow. A novel mechanism is proposed for the automatic detection of road anomalies by autonomous vehicles, providing road information to upcoming vehicles based on edge AI and VANET. Road images captured via camera and deployment of a trained road anomaly detection model in a vehicle could help reduce the accident rate and the risk of hazards on poor road conditions. The Residual Convolutional Neural Network (ResNet-18) and Visual Geometry Group (VGG-11) techniques are applied for the automatic detection and classification of roads with anomalies such as potholes, bumps, and cracks, as well as plain roads without anomalies, using a dataset compiled from different online sources. The results show that the applied models performed better than other techniques used for road anomaly identification.


Subjects
Accidents, Traffic; Deep Learning; Accidents, Traffic/prevention & control; Artificial Intelligence; Humans; Neural Networks, Computer; Transportation
16.
Comput Intell Neurosci ; 2021: 4243700, 2021.
Article in English | MEDLINE | ID: mdl-34567101

ABSTRACT

Precisely predicting human diseases remains an uphill battle for better and timely treatment. Diabetes is a multidisciplinary, life-threatening disease found all over the world. It attacks different vital parts of the human body, causing complications such as neuropathy, retinopathy, and nephropathy, and ultimately affecting the heart. A smart healthcare recommendation system can predict diabetic disease accurately and make recommendations using optimal machine learning models with a data fusion technique on healthcare datasets. Various machine learning models and methods have been proposed in the recent past to predict diabetes, but these systems cannot properly handle the massive multifeature datasets on the disease. Here, a smart healthcare recommendation system for diabetes is proposed based on deep machine learning and data fusion. Using data fusion, the irrelevant computational burden on the system can be eliminated, increasing the proposed system's performance in predicting and making recommendations for this life-threatening disease. Finally, an ensemble machine learning model is trained for diabetes prediction. The intelligent recommendation system is evaluated on a well-known diabetes dataset, and its performance is compared with the most recent developments from the literature. The proposed system achieved 99.6% accuracy, higher than existing deep machine learning methods, making it better suited for multidisciplinary diabetes prediction and recommendation. This improved diagnostic performance advocates for its employment in automated diagnostic and recommendation systems for diabetic patients.


Subjects
Diabetes Mellitus; Delivery of Health Care; Diabetes Mellitus/diagnosis; Diabetes Mellitus/therapy; Humans; Machine Learning
17.
J Healthc Eng ; 2020: 8017496, 2020.
Article in English | MEDLINE | ID: mdl-32509260

ABSTRACT

Developing countries are still striving for the betterment of their health sectors. Breast cancer is the disease most commonly found among women, and past research has shown that when the cancer is detected at a very early stage, the chances of overcoming the disease are higher than when it is treated or detected at a later stage. This article proposes a cloud-based intelligent BCP-T1F-SVM system with two variations, BCP-T1F and BCP-SVM, employing two main soft computing algorithms. The proposed expert system defines the stage and the type of cancer a person is suffering from and elaborates on how far the disease has progressed. BCP-T1F is employed in the diagnosis of breast cancer at an initial stage and deals with the different stages of the disease, while BCP-SVM gives the higher precision of the two breast cancer detection models. The evaluation done in this research revealed that BCP-SVM is better than BCP-T1F: BCP-T1F achieves 96.56% accuracy, whereas BCP-SVM achieves 97.06%. Expert opinions were provided by the medical staff of Sheikh Zayed Hospital Lahore, Pakistan, and Cavan General Hospital, Lisdaran, Cavan, Ireland.
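The SVM side of the comparison can be sketched as a linear SVM trained by sub-gradient descent on the hinge loss; the one-dimensional toy feature, labels, and hyperparameters are illustrative assumptions, not the BCP-SVM model itself:

```python
# Linear SVM via hinge-loss sub-gradient descent (Pegasos-style updates).
# Labels are -1 (benign) and +1 (malignant) over a single normalized feature.

def train_svm(data, lr=0.01, lam=0.01, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:                 # y in {-1, +1}
            if y * (w * x + b) < 1:       # point violates the margin
                w += lr * (y * x - lam * w)
                b += lr * y
            else:                         # only the regularizer acts
                w -= lr * lam * w
    return w, b

# toy data: feature below 0.5 -> benign (-1), above 0.5 -> malignant (+1)
data = [(0.1, -1), (0.2, -1), (0.3, -1), (0.7, 1), (0.8, 1), (0.9, 1)]
w, b = train_svm(data)
predict = lambda x: 1 if w * x + b > 0 else -1
print(predict(0.15), predict(0.85))
```

The margin maximization implicit in the hinge loss is what typically gives SVM-based detectors like BCP-SVM their precision edge over fuzzy classifiers on well-separated features.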


Subjects
Breast Neoplasms/diagnosis; Breast/diagnostic imaging; Cloud Computing; Diagnosis, Computer-Assisted; Cloud Computing/statistics & numerical data; Diagnosis, Computer-Assisted/statistics & numerical data; Early Detection of Cancer; Expert Systems; Female; Humans; Support Vector Machine
18.
J Healthc Eng ; 2019: 6361318, 2019.
Article in English | MEDLINE | ID: mdl-30867895

ABSTRACT

In this research, a new multilayered Mamdani fuzzy inference system (Ml-MFIS) is proposed to diagnose hepatitis B. The proposed automated diagnosis of hepatitis B using a multilayer Mamdani fuzzy inference system (ADHB-ML-MFIS) expert system can classify the different stages of hepatitis B, such as no hepatitis, acute HBV, or chronic HBV. The expert system has two input variables at layer I and seven input variables at layer II. At layer I, the input variables ALT and AST detect whether the condition of the liver is normal or shows hepatitis, infection, or other problems. The input variables at layer II are HBsAg, anti-HBsAg, anti-HBcAg, anti-HBcAg-IgM, HBeAg, anti-HBeAg, and HBV-DNA, which determine the hepatitis condition, such as no hepatitis, acute hepatitis, or chronic hepatitis, as well as findings that arise due to vaccination or a previous hepatitis infection. This paper presents an accurate analysis of the results of the proposed ADHB-ML-MFIS expert system in modeling the complex hepatitis B processes, with medical expert opinion collected from the Pathology Department of Shalamar Hospital, Lahore, Pakistan. The overall accuracy of the proposed ADHB-ML-MFIS expert system is 92.2%.
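The layered gating described above can be sketched generically: layer I screens the liver enzymes, and only abnormal cases reach the layer II serology rules. All thresholds, membership shapes, and rules here are illustrative assumptions, not the clinically validated ADHB-ML-MFIS rule base:

```python
# Two-layer fuzzy inference sketch: layer I fuzzifies ALT/AST (U/L), and
# layer II combines serology degrees (each already fuzzified to 0..1).

def tri(x, a, b, c):
    """Triangular fuzzy membership peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def layer1_liver_abnormal(alt, ast):
    """Degree to which ALT/AST suggest an abnormal liver (toy ranges)."""
    return max(tri(alt, 40, 120, 400), tri(ast, 40, 120, 400))

def layer2_stage(hbsag, anti_hbc_igm, hbv_dna):
    """Pick the strongest-firing rule among toy stage rules."""
    acute = min(hbsag, anti_hbc_igm)                 # HBsAg AND IgM anti-HBc
    chronic = min(hbsag, 1 - anti_hbc_igm, hbv_dna)  # HBsAg, no IgM, DNA high
    best = max(("no hepatitis", 1 - hbsag), ("acute HBV", acute),
               ("chronic HBV", chronic), key=lambda kv: kv[1])
    return best[0]

def diagnose(alt, ast, hbsag, anti_hbc_igm, hbv_dna):
    if layer1_liver_abnormal(alt, ast) < 0.2:  # layer I gate
        return "normal liver"
    return layer2_stage(hbsag, anti_hbc_igm, hbv_dna)

print(diagnose(30, 28, 0.1, 0.0, 0.0))    # normal liver
print(diagnose(150, 140, 0.9, 0.8, 0.2))  # acute HBV
print(diagnose(150, 140, 0.9, 0.1, 0.9))  # chronic HBV
```

The layering mirrors clinical practice: cheap enzyme tests screen first, and the seven serology inputs refine the stage only when layer I fires.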


Subjects
Diagnosis, Computer-Assisted/methods; Hepatitis B/diagnosis; Alanine Transaminase/blood; Aspartate Aminotransferases/blood; Computer Simulation; Diagnosis, Computer-Assisted/statistics & numerical data; Expert Systems; Fuzzy Logic; Hepatitis B/blood; Hepatitis B/virology; Hepatitis B Antibodies/blood; Hepatitis B Antigens/blood; Humans; Pakistan
19.
Comput Intell Neurosci ; 2018: 6759526, 2018.
Article in English | MEDLINE | ID: mdl-30627144

ABSTRACT

Multiple-input and multiple-output (MIMO) technology is one of the latest technologies for enhancing both channel capacity and the service quality of communication systems. Using MIMO technology at the physical layer, the estimation of the data and the channel is performed based on the principle of maximum likelihood. For this purpose, the continuous and discrete fuzzy logic-empowered opposite learning-based mutant particle swarm optimization (FL-OLMPSO) algorithm is applied over a Rayleigh fading channel in three levels. The data and channel populations are prepared during the first level of the algorithm, and the channel parameters are estimated in the second level using continuous FL-OLMPSO. After determining the channel parameters, the transmitted symbols are evaluated in the third level using the channel parameters together with discrete FL-OLMPSO. To enhance the convergence rate of the FL-OLMPSO algorithm, the velocity factor is updated using fuzzy logic. In this article, two variants of FL-OLMPSO are proposed: FL-total OLMPSO (FL-TOLMPSO) and FL-partial OLMPSO (FL-POLMPSO). Simulation results show that the proposed techniques achieve desirable MMCE, MMSE, and BER results compared with conventional opposite learning mutant PSO (TOLMPSO and POLMPSO) techniques.
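The optimizer underlying FL-OLMPSO is particle swarm optimization; a minimal PSO minimizing a toy sphere function is sketched below, without the fuzzy velocity adaptation or opposite-learning mutation (the swarm size, coefficients, and objective are illustrative):

```python
# Canonical PSO: each particle is pulled toward its own best position
# (cognitive term) and the swarm's best position (social term).
import random

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    random.seed(1)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(sphere(best) < 1e-2)  # converges near the origin
```

In FL-OLMPSO the inertia weight `w` is adapted by fuzzy rules and the fitness function is the likelihood of the channel or symbol hypothesis rather than a sphere.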


Subjects
Algorithms; Artificial Intelligence; Fuzzy Logic; Learning/physiology; Computer Simulation; Probability