Results 1 - 8 of 8
1.
Comput Intell Neurosci ; 2022: 1332664, 2022.
Article in English | MEDLINE | ID: mdl-35800708

ABSTRACT

Bipolar disorder (BD) is marked by mood swings that alternate between mania and depression. As one of the most common mental conditions, the stages of BD are often misdiagnosed as major depressive disorder (MDD), resulting in ineffective treatment and a poor prognosis. Distinguishing MDD from BD at an earlier phase of the disease may therefore enable more efficient and targeted treatment. In this research, an improved ant colony optimization (IACO) technique, biologically inspired by the foraging behaviour of ant colonies, was used to reduce the number of features by deleting unrelated or redundant feature data. To distinguish MDD from BD individuals, the selected features were fed into a support vector machine (SVM), a supervised learning technique for classification, regression, function estimation, and modeling. The performance of the IACO method was compared with that of standard ACO, particle swarm optimization (PSO), and genetic algorithm (GA) techniques in terms of classification efficiency and the number of selected features. Validation was performed using a nested cross-validation (CV) approach to produce nearly unbiased estimates of the classification error.
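As a rough illustration of the idea, the pheromone-guided subset search can be sketched in a few lines. The `toy_fitness` function, deposit rule, and all parameters below are simplified placeholders standing in for a cross-validated SVM score, not the IACO variant evaluated in the paper:

```python
import random

def aco_feature_selection(n_features, fitness, n_ants=10, n_iters=20,
                          evaporation=0.1, seed=0):
    """Toy pheromone-guided feature-subset search.

    Each ant includes feature i with a probability tied to its pheromone
    level; subsets that score well deposit extra pheromone, and all
    pheromone evaporates each iteration.
    """
    rng = random.Random(seed)
    pheromone = [1.0] * n_features
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            total = sum(pheromone)
            # Inclusion probability proportional to the feature's pheromone.
            subset = [i for i in range(n_features)
                      if rng.random() < 0.5 * n_features * pheromone[i] / total]
            if not subset:
                continue
            score = fitness(subset)
            if score > best_score:
                best_subset, best_score = subset, score
            for i in subset:                       # deposit pheromone
                pheromone[i] += max(score, 0.0)
        pheromone = [(1 - evaporation) * p for p in pheromone]  # evaporate
    return best_subset, best_score

# Placeholder fitness: reward subsets containing features 0 and 2, with a
# small penalty on subset size to favour compact feature sets.
def toy_fitness(subset):
    return sum(1.0 for i in subset if i in (0, 2)) - 0.1 * len(subset)

best, score = aco_feature_selection(6, toy_fitness)
```

In a real pipeline the fitness function would be the nested-CV accuracy of an SVM trained on the candidate subset.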


Subject(s)
Bipolar Disorder , Depressive Disorder, Major , Algorithms , Bipolar Disorder/diagnosis , Depressive Disorder, Major/diagnosis , Humans , Support Vector Machine
2.
Comput Electr Eng ; 101: 108055, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35505976

ABSTRACT

As people all over the world are vulnerable to the COVID-19 virus, automatic detection of the virus is an important concern. This paper aims to detect and classify the coronavirus using machine learning. A computer-aided diagnosis (CAD) system is proposed to detect and classify COVID-19 in CT lung screenings, using clinical specimens obtained from infected patients together with machine learning techniques such as decision trees, support vector machines, K-means clustering, and radial basis function networks. While some specialists believe that the RT-PCR test is the best option for diagnosing COVID-19 patients, others believe that CT scans of the lungs can be more accurate in diagnosing the infection, as well as less expensive than the PCR test. The clinical specimens include serum specimens, respiratory secretions, and whole blood specimens; overall, 15 factors are measured from these specimens as the result of previous clinical examinations. The proposed CAD system consists of four phases, starting with the collection of CT lung screenings, followed by a pre-processing stage to enhance the appearance of ground-glass opacity (GGO) nodules, which originally look hazy with faint contrast. A modified K-means algorithm is then used to detect and segment these regions. Finally, the infected areas obtained in the detection phase, scaled to 50×50, together with segmented solid false positives that resemble GGOs, are used as inputs and targets for the machine learning classifiers; here a support vector machine (SVM) and a radial basis function (RBF) network are utilized. Moreover, a GUI application is developed that helps doctors obtain exact results by entering the 15 input factors obtained from the clinical specimens.
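The intensity-based segmentation step can be illustrated with plain 1-D k-means on pixel values. This toy sketch uses hypothetical pixel data and the unmodified algorithm; it only shows how bright GGO-like pixels separate from the dark background:

```python
import random

def kmeans_1d(values, k=2, n_iters=30, seed=0):
    """Plain 1-D k-means on pixel intensities (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(n_iters):
        # Assign each pixel to its nearest center, then recompute centers.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

# Hypothetical intensities: dark background near 0.1, hazy GGO-like pixels near 0.6.
pixels = [0.08, 0.12, 0.10, 0.09, 0.58, 0.62, 0.60, 0.64, 0.11, 0.59]
centers, labels = kmeans_1d(pixels, k=2)
```

On a real CT slice the same assignment would run over all pixels, and the bright cluster would then be filtered for false positives before classification.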

3.
Comput Intell Neurosci ; 2022: 7710005, 2022.
Article in English | MEDLINE | ID: mdl-35371228

ABSTRACT

In this modern era nearly everything is computerized, and everyone has smart gadgets to communicate with others around the globe without range limitations. Although many communication pathways exist, such as smartphone calls and messaging applications, e-mail remains the main professional communication pathway, allowing business people as well as commercial and noncommercial organizations to communicate and to share important official documents and reports globally. This global pathway also attracts attackers and intruders, who generate false messages with attractive content and send them as e-mails to users worldwide. Such unnecessary or threatening mails, which usually contain advertisements or promotions for a concern or institution, are considered spam (or junk) mails. In general, e-mail is the usual way of delivering messages for business and official needs, but in some cases voice instructions or messages must be transferred to the destination via the same e-mail pathway. These voice-oriented e-mails are called voice mails. A voice mail delivers spoken instructions or information so that the receiver can perform particular tasks or receive important messages, and a voice-mail-enabled system lets users communicate with one another through speech input, delivering voice information from sender to recipient.
Such mails are usually composed on personal computers or laptops and exchanged via the general e-mail pathway, or through separate paid and nonpaid mail gateways that handle certain mail transactions. Text-based e-mail spam has been addressed in much past research, but for voice-based e-mail there are few options for managing such security concerns. In this paper, a hybrid data processing mechanism that handles both text-enabled and voice-enabled e-mails is proposed, called Genetic Decision Tree Processing with Natural Language Processing (GDTPNLP). The proposed approach identifies spam in both textual and speech-enabled e-mails and provides a higher spam detection rate in terms of text extraction speed, performance, cost efficiency, and accuracy. These results are explained in detail, with graphical output views, in the Results and Discussion.
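A minimal sketch of the text branch of such a pipeline, assuming a hypothetical keyword list and a single decision-stump rule rather than the full GDTPNLP approach:

```python
# Hypothetical spam keyword list; GDTPNLP would learn its rules instead.
SPAM_WORDS = {"free", "winner", "prize", "offer", "click", "urgent"}

def tokenize(text):
    """Minimal NLP step: lowercase and strip basic punctuation."""
    return [w.strip(".,!?") for w in text.lower().split()]

def spam_score(text):
    """Fraction of tokens that are known spam keywords."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in SPAM_WORDS) / len(tokens)

def is_spam(text, threshold=0.2):
    """Decision-stump rule: flag mail whose spam-word density is high.
    For voice mail, the same rule would apply after speech-to-text."""
    return spam_score(text) > threshold
```

A genetic algorithm could then evolve the keyword set and threshold against labeled mail, which is roughly the role the genetic component plays in a pipeline like the one described.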


Subject(s)
Electronic Mail , Speech , Communication , Data Collection , Decision Trees , Humans
4.
J Healthc Eng ; 2022: 5337733, 2022.
Article in English | MEDLINE | ID: mdl-35340260

ABSTRACT

Fog computing is a new paradigm that has been growing in computing systems. In the healthcare industry, Internet of Things (IoT) driven fog computing is being developed to speed up services for the general public and save lives. This platform can reduce the latency of transmitting and communicating signals to faraway servers, allowing medical services to be delivered more quickly in both spatial and temporal dimensions. Latency reduction is one of the necessary qualities of computing systems that support healthcare operations. Fog computing can provide lower latency than cloud computing because it uses only low-end computers, mobile phones, and personal devices. In this paper, a new fog-computing-based framework for healthcare monitoring and real-time notification is proposed. The proposed system monitors the patient's body temperature, heart rate, and blood pressure using sensors embedded in a wearable device and, with machine learning algorithms, notifies doctors or caregivers in real time if any reading deviates from its normal threshold range. Notifications can also be set for patients, alerting them to the periodic medications or diet they must maintain. The cloud layer stores the big data in the cloud for future reference by hospitals and researchers.
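The threshold-based notification logic can be sketched as follows; the normal ranges and field names are illustrative assumptions, not values from the paper:

```python
# Hypothetical normal ranges; a real deployment would use clinically
# validated, per-patient thresholds.
NORMAL_RANGES = {
    "temperature_c": (36.1, 37.5),
    "heart_rate_bpm": (60, 100),
    "systolic_mmHg": (90, 120),
}

def check_vitals(reading):
    """Return alert messages for any vital outside its normal range;
    a fog node would forward these to doctors or caregivers in real time."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

# One wearable reading with an elevated temperature.
reading = {"temperature_c": 38.4, "heart_rate_bpm": 72, "systolic_mmHg": 118}
alerts = check_vitals(reading)
```

Running this check on the fog node rather than in the cloud is what saves the round-trip latency the abstract emphasizes.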


Subject(s)
Internet of Things , Cloud Computing , Computers , Delivery of Health Care , Humans , Monitoring, Physiologic
5.
J Healthc Eng ; 2022: 7969220, 2022.
Article in English | MEDLINE | ID: mdl-35281545

ABSTRACT

Medical costs are one of the most common recurring expenses in a person's life. Different research studies have related BMI, ageing, smoking, and other factors to greater personal medical care costs. Estimates of obesity-related health care expenditures are needed to help create cost-effective obesity prevention strategies, and obesity prevention at a young age is a top concern in global health, clinical practice, and public health. To address these limitations, genetic variants are employed as instrumental variables in this research. Using statistics from large public datasets, the impact of body mass index (BMI) on overall healthcare expenses is predicted. A multiview learning architecture can leverage BMI information in records, including diagnostic texts, diagnostic IDs, and patient traits. Because various words, diagnoses, and previous health care events have varying significance for expense calculation, a hierarchical perception structure is proposed to select significant words, health checks, and diagnoses, yielding informative data representations during training. In this system model, linear regression analysis, a naive Bayes classifier, and random forest algorithms were compared using a business analytics method that applies statistical and machine-learning approaches. According to the results of our forecasting method, linear regression achieves the highest accuracy, 97.89 percent, in forecasting overall healthcare costs. In terms of financial statistics, our methodology provides a predictive method.
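The linear-regression component can be illustrated with closed-form ordinary least squares; the (BMI, cost) pairs below are invented purely for illustration, not data from the study:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Invented (BMI, annual cost) pairs, for illustration only.
bmi = [20, 24, 28, 32, 36]
cost = [1200, 1500, 2100, 2900, 3800]

a, b = fit_line(bmi, cost)
predicted = a + b * 30  # estimated annual cost at BMI 30
```

The paper's model would regress cost on many features at once (words, diagnosis IDs, traits), but the single-feature case shows the fitting step in miniature.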


Subject(s)
Health Care Costs , Machine Learning , Bayes Theorem , Hospitalization , Humans , Obesity
6.
Comput Intell Neurosci ; 2022: 1549842, 2022.
Article in English | MEDLINE | ID: mdl-35075356

ABSTRACT

Olive trees have had significant economic and cultural value since the pre-Roman era. In 2019, the Al-Jouf region, in the north of the Kingdom of Saudi Arabia, gained global attention by entering the Guinness World Records for the largest number of olive trees in the world. Detecting and counting olive trees in a satellite image is a significant and difficult computer vision problem. Because olive farms are spread over large areas, manually counting the trees is impractical. Moreover, accurate automatic detection and counting of olive trees in satellite images face many challenges, such as scale variations, weather changes, perspective distortions, and orientation changes. Another problem is the lack of a standard olive tree database for deep learning applications. To address these problems, we first build a large-scale olive dataset dedicated to deep learning research and applications. The dataset consists of 230 RGB images collected over the territory of Al-Jouf, KSA. We then propose an efficient deep learning model (SwinTUnet) for detecting and counting olive trees from satellite imagery. The proposed SwinTUnet is a Unet-like network consisting of an encoder, a decoder, and skip connections, with the Swin Transformer block as its fundamental unit for learning local and global semantic information. Experimental results on the proposed dataset show that SwinTUnet outperforms related studies in overall detection, with a 0.94% estimation error.
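One plausible way to compute a counting "estimation error" is the mean relative error over test images; the paper may define its metric differently, and the per-image counts below are invented:

```python
def estimation_error(predicted_counts, true_counts):
    """Mean relative counting error over a set of images, as a percentage."""
    errors = [abs(p - t) / t
              for p, t in zip(predicted_counts, true_counts)]
    return 100.0 * sum(errors) / len(errors)

# Invented per-image tree counts, for illustration only.
true_counts = [120, 80, 200, 150]
predicted_counts = [119, 81, 198, 151]
err = estimation_error(predicted_counts, true_counts)
```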


Subject(s)
Deep Learning , Olea , Databases, Factual , Image Processing, Computer-Assisted , Satellite Imagery
7.
Mater Today Proc ; 2021 Jan 23.
Article in English | MEDLINE | ID: mdl-33520671

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a viral infection that emerged in Wuhan, a city in the Chinese province of Hubei. The virus soon spread to Europe, Africa, and America, becoming a global pandemic. Due to the lack of information about the behaviour of the virus, several prediction models are in use around the world for decision making and taking precautionary actions. Therefore, this paper proposes a new model, named MSIR, based on the SIR model. The model is used to predict the spread of the disease in three cities in the Kingdom of Saudi Arabia: Riyadh, Hufof, and Jeddah. The estimation of disease propagation with and without containment measures is also carried out. We believe the results could be used to enhance the predictability of pandemic outbreaks in other cities and to build a long-term artificial intelligence prediction model.
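The baseline SIR dynamics that an MSIR-style model builds on can be simulated with a simple forward-Euler loop. The parameters below are illustrative, not fitted to data from any city:

```python
def simulate_sir(s0, i0, r0, beta, gamma, days):
    """Forward-Euler simulation of the classic SIR model with daily steps."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    history = [(s, i, r)]
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

# Illustrative parameters (R0 = beta/gamma = 3), not estimates for Saudi cities.
hist = simulate_sir(s0=999_000, i0=1_000, r0=0, beta=0.30, gamma=0.10, days=120)
peak_infected = max(i for _, i, _ in hist)
```

A containment measure can be modeled by lowering `beta` from some day onward, which is how the "with and without containment" comparison would be run on this baseline.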

8.
Comput Intell Neurosci ; 2021: 7677568, 2021.
Article in English | MEDLINE | ID: mdl-35003247

ABSTRACT

Cardiac arrhythmia is an illness in which the heartbeat is erratic, either too slow or too rapid. It results from faulty electrical impulses that coordinate the heartbeats, and certain serious arrhythmia disorders can cause sudden cardiac death. The primary goal of electrocardiogram (ECG) analysis is therefore to reliably detect life-threatening arrhythmias so that suitable therapy can be provided and lives saved. ECG signals are waveforms that represent the electrical activity of the human heart (the P, QRS, and T waves). The duration, shape, and distances between the various peaks of each waveform are used to identify heart problems. Autoregressive (AR) analysis of the signals is then used to obtain a specific set of signal features: the parameters of the AR signal model. Groups of extracted AR features for three different ECG types are cleanly separated in the training dataset, enabling accurate classification and heart problem diagnosis for each ECG signal within it. A new technique based on two-event-related moving averages (TERMAs) and the fractional Fourier transform (FFT) is suggested to better evaluate ECG signals. This study could help researchers examine the current state-of-the-art approaches employed in the detection of arrhythmia conditions. A characteristic of our suggested machine learning approach is cross-database training and testing with improved features.
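The AR feature-extraction step can be sketched with a Yule-Walker estimate of AR(2) coefficients; the synthetic signal below is a stand-in for a real ECG record, not data from the study:

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation at the given lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n)) / c0

def ar2_yule_walker(x):
    """Estimate AR(2) coefficients from the Yule-Walker equations
        r1 = a1 + a2*r1,    r2 = a1*r1 + a2
    solved in closed form for (a1, a2)."""
    r1, r2 = autocorr(x, 1), autocorr(x, 2)
    a1 = r1 * (1 - r2) / (1 - r1 ** 2)
    a2 = (r2 - r1 ** 2) / (1 - r1 ** 2)
    return a1, a2

# Synthetic stand-in for an ECG segment: an AR(2) process
# x[t] = 1.5*x[t-1] - 0.7*x[t-2] + noise.
rng = random.Random(1)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(1.5 * x[-1] - 0.7 * x[-2] + rng.gauss(0, 1))

a1, a2 = ar2_yule_walker(x)  # estimates should land near (1.5, -0.7)
```

In the classification setting, the fitted `(a1, a2, ...)` vector for each ECG segment becomes the feature vector fed to the classifier.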


Subject(s)
Electrocardiography , Signal Processing, Computer-Assisted , Algorithms , Arrhythmias, Cardiac/diagnosis , Heart Rate , Humans