Results 1 - 13 of 13
1.
Front Comput Neurosci ; 17: 1204445, 2023.
Article in English | MEDLINE | ID: mdl-37711504

ABSTRACT

Point clouds have evolved into one of the most important data formats for 3D representation. They are becoming more popular as a result of increasingly affordable acquisition equipment and growing usage in a variety of fields. Volumetric grid-based approaches are among the most successful models for processing point clouds because they fully preserve data granularity while also exploiting point dependency. However, using lower-order local estimation functions, such as piece-wise constant functions, to approximate 3D objects requires a high-resolution grid to capture detailed features, which demands vast computational resources. This study proposes an improved fused feature network, as well as a comprehensive framework for solving shape classification and segmentation tasks, using a two-branch technique and feature learning. We begin by designing a feature encoding network with two distinct building blocks that use layer skips within each block, with batch normalization (BN) and rectified linear units (ReLU) in between. The purpose of the layer skips is to reduce the number of layers to propagate across, which speeds up learning and lessens the effect of vanishing gradients. Furthermore, we develop a robust grid feature extraction module consisting of multiple convolution blocks followed by max-pooling to build a hierarchical representation and extract features from an input grid. We overcome grid-size constraints by sampling a constant number of points in each grid cell using a simple K-nearest-neighbor (KNN) search, which aids in learning higher-order approximation functions. The proposed method outperforms or is comparable to state-of-the-art approaches in point cloud segmentation and classification tasks. In addition, an ablation study is presented to show the effectiveness of the proposed method.
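
As a rough illustration of the grid-sampling idea described in this abstract, the sketch below draws a fixed number of points for each occupied voxel with a brute-force K-nearest-neighbour search around the cell centre; the grid size, point counts, and function names are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

def sample_grid_knn(points, grid_size=8, k=32):
    """For each occupied cell of a regular grid, gather the k points
    nearest to the cell centre (brute-force KNN), so every cell
    contributes a fixed-size point set regardless of its density."""
    # Normalise the cloud into the unit cube.
    mins, maxs = points.min(0), points.max(0)
    norm = (points - mins) / (maxs - mins + 1e-9)

    cell_ids = np.minimum((norm * grid_size).astype(int), grid_size - 1)
    samples = {}
    for cell in {tuple(c) for c in cell_ids}:
        centre = (np.array(cell) + 0.5) / grid_size
        dists = np.linalg.norm(norm - centre, axis=1)
        idx = np.argsort(dists)[:k]          # k nearest points to the cell centre
        samples[cell] = points[idx]
    return samples

# Example: 2,048 random points, an 8x8x8 grid, 32 samples per occupied cell.
cloud = np.random.rand(2048, 3)
grids = sample_grid_knn(cloud)
print(len(grids), "occupied cells, each with", next(iter(grids.values())).shape, "points")
```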

2.
Sensors (Basel) ; 23(17)2023 Aug 25.
Article in English | MEDLINE | ID: mdl-37687870

ABSTRACT

Answering a query through a peer-to-peer database is one of the greatest challenges in such systems, because obtaining a comprehensive response is costly and time-consuming. Consequently, these systems were primarily designed to handle approximation queries. The primary objective of our research was to develop an intelligent system capable of responding to approximate set-value inquiries, and this paper explores the use of particle optimization to enhance the system's intelligence. In contrast to previous studies, our proposed method avoids the use of sampling: even with the best sampling methods there remains a possibility of error, making it difficult to guarantee accuracy, and many factors influence the accuracy of sampling procedures, yet a certain degree of accuracy is crucial when handling approximate queries. Our research evaluated several methods, including flood algorithms, parallel diffusion algorithms, and ISM algorithms. The results indicate that the proposed method improves on these baselines in terms of the number of queries issued, the number of peers examined, and execution time, which is significantly faster than the flood approach; for query transmission, it also exhibits superior cost-effectiveness.
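
The abstract does not give the optimiser's details; assuming a particle-swarm-style method is intended, a generic loop looks like the sketch below, with a toy cost function standing in for the real query-routing objective.

```python
import numpy as np

def pso(cost, dim=5, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Generic particle swarm optimisation loop (not the authors' exact
    query-answering formulation): each particle tracks its best position,
    and the swarm shares a global best."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Hypothetical stand-in cost: pretend each dimension is a tunable routing weight.
best, best_cost = pso(lambda p: float(np.sum(p ** 2)))
print(best_cost)
```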

3.
Brain Sci ; 13(4)2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37190520

ABSTRACT

Recognition of lying is a more complex cognitive process than truth-telling because of the presence of involuntary cognitive cues that are useful for lie recognition. Researchers have proposed different approaches in the literature to solve the problem of lie recognition from handcrafted and/or automatically learned lie features during court trials and police interrogations. Unfortunately, due to the cognitive complexity and the scarcity of involuntary cues related to lying, the performance of these approaches suffers and their generalization ability is limited. To improve performance, this study proposes state transition patterns based on hand, body motion, and eye blinking features from real-life court trial videos. Each video frame is represented according to a threshold computed among neighboring pixels to extract spatio-temporal state transition patterns (STSTP) of the hand and face poses as involuntary cues, using fully connected convolutional neural network layers optimized with ResNet-152 weights. In addition, the study computes an eye aspect ratio model to obtain eye blinking features. These features are fused into a single multi-modal STSTP feature model, built using the enhanced calculated weights of a bidirectional long short-term memory network. The proposed approach was evaluated by comparing its performance with current state-of-the-art methods, and it was found to improve the performance of detecting lies.
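
For the eye blinking feature, a common formulation of the eye aspect ratio from six eye landmarks is sketched below; the landmark values and threshold are illustrative, and the paper's exact model may differ.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six 2-D eye landmarks ordered p1..p6:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    The value drops sharply while the eye is closed, so thresholding it
    over consecutive frames yields a simple blink detector."""
    p1, p2, p3, p4, p5, p6 = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmarks for an open eye; a closed eye gives a much smaller value.
open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
print(round(eye_aspect_ratio(open_eye), 3))
```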

4.
J Ambient Intell Humaniz Comput ; 14(7): 9677-9750, 2023.
Article in English | MEDLINE | ID: mdl-35821879

ABSTRACT

The success of deep learning over traditional machine learning techniques in handling artificial intelligence tasks such as image processing, computer vision, object detection, speech recognition and medical imaging has made deep learning the buzzword that dominates artificial intelligence applications. Over the last decade, applications of deep learning to physiological signals such as the electrocardiogram (ECG) have attracted a substantial amount of research. However, previous surveys have not provided a systematic, comprehensive review of the applications of deep learning to ECG, including ECG-based biometric systems, organized by application domain. To address this gap, we conducted a systematic literature review on the applications of deep learning to ECG, including ECG-based biometric systems. The study systematically analyzed 150 primary studies with evidence of the application of deep learning to ECG and shows that deep learning has been applied to ECG across a range of domains. We present a new taxonomy of these application domains, along with discussions of ECG-based biometric systems and a meta-data analysis of the studies by domain, area, task, deep learning model, dataset source and preprocessing method. Challenges and potential research opportunities are highlighted to enable novel research. We believe that this study will be useful to both new and expert researchers who are seeking to add to the existing body of knowledge on ECG signal processing with deep learning algorithms. Supplementary information: The online version contains supplementary material available at 10.1007/s12652-022-03868-z.

5.
Sci Rep ; 12(1): 6166, 2022 04 13.
Article in English | MEDLINE | ID: mdl-35418566

ABSTRACT

Deep learning (DL) models are becoming pervasive and applicable to computer vision, image processing, and synthesis problems. The performance of these models is often improved through architectural configuration, tweaks, the use of enormous training data, and skillful selection of hyperparameters. The application of deep learning models to medical image processing has yielded impressive performance, correctly detecting abnormalities in medical digital images and making them capable of surpassing human physicians. However, advancing research in this domain largely relies on the availability of training datasets. These datasets are sometimes not publicly accessible, insufficient for training, and may also be characterized by class imbalance among samples. As a result, inadequate training samples and difficulty in accessing new datasets limit performance and research into new domains. Hence, generative adversarial networks (GANs) have been proposed to bridge this gap by synthesizing data similar to real sample images. However, we observed that benchmark datasets with regions of interest (ROIs) for characterizing abnormalities in breast cancer using digital mammography do not contain sufficient data with a fair distribution of all cases of abnormalities. For instance, architectural distortion and breast asymmetry are sparsely represented across most publicly available mammography datasets. This paper proposes a GAN model, named ROImammoGAN, which synthesizes ROI-based digital mammograms. Our approach involves the design of a GAN model consisting of both a generator and a discriminator to learn a hierarchy of representations for abnormalities in digital mammograms. Attention is given to architectural distortion, asymmetry, mass, and microcalcification abnormalities so that training distinctively learns the features of each abnormality and generates sufficient images for each category. The proposed GAN model was applied to the MIAS dataset, and the performance evaluation yielded competitive accuracy for the synthesized samples. The quality of the generated images was also evaluated using PSNR, SSIM, FSIM, BRISQUE, PQUE, NIQUE, FID, and geometry scores, and the results showed that ROImammoGAN performed competitively with state-of-the-art GANs. The outcome of this study is a model for augmenting CNN models with ROI-centric image samples for the characterization of abnormalities in breast images.
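
A minimal DCGAN-style generator/discriminator pair for 64x64 grey-scale ROI patches is sketched below to illustrate the general recipe of a generator and a discriminator trained adversarially; the layer sizes, batch, and optimiser settings are illustrative assumptions, not the ROImammoGAN architecture.

```python
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),  # 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 16x16
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(True),            # 32x32
    nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                                     # 64x64 patch
)

discriminator = nn.Sequential(
    nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2, True),      # 32x32
    nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),     # 16x16
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),     # 8x8
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),    # 4x4
    nn.Conv2d(128, 1, 4, 1, 0), nn.Flatten(), nn.Sigmoid(),  # real/fake score
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

# One illustrative training step; random tensors stand in for real ROI patches.
real = torch.rand(8, 1, 64, 64) * 2 - 1
noise = torch.randn(8, latent_dim, 1, 1)
fake = generator(noise)

d_loss = bce(discriminator(real), torch.ones(8, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(discriminator(fake), torch.ones(8, 1))   # generator tries to fool D
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```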


Subject(s)
Breast Neoplasms , Neural Networks, Computer , Benchmarking , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted/methods , Mammography
6.
PLoS One ; 16(12): e0259786, 2021.
Article in English | MEDLINE | ID: mdl-34855771

ABSTRACT

Team formation (TF) in social networks exploits graphs (i.e., vertices = experts and edges = skills) to represent possible collaborations between experts. These networks allow cost-effective research teams to be built irrespective of the geolocation of the experts and the size of the dataset. Previously, large datasets were not closely inspected for large-scale distributions and relationships among researchers, so existing algorithms failed to scale well on the data. Therefore, this paper presents a novel TF algorithm for expert team formation, called SSR-TF, based on two metrics, communication cost and graph reduction, that can serve as a basis for future TF approaches. In SSR-TF, the communication cost captures the possibility of collaboration between researchers, while graph reduction scales the large data down to only the relevant skills and experts, resulting in real-time extraction of experts for collaboration. This approach is tested on five organic and benchmark datasets, i.e., UMP, DBLP, ACM, IMDB, and Bibsonomy. The SSR-TF algorithm is able to build cost-effective teams with the most appropriate experts, resulting in more communicative teams with high expertise levels.
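
A minimal sketch of a pairwise shortest-path communication cost between candidate team members follows, assuming hop distance is an acceptable proxy for ease of collaboration; the graph, names, and function are illustrative and not the SSR-TF implementation.

```python
import networkx as nx

def communication_cost(graph, team):
    """Sum of pairwise shortest-path distances between team members;
    smaller values suggest the experts can collaborate more easily."""
    members = list(team)
    cost = 0
    for i, u in enumerate(members):
        for v in members[i + 1:]:
            cost += nx.shortest_path_length(graph, u, v)
    return cost

# Hypothetical expert network: nodes are experts, edges mean past collaboration.
g = nx.Graph([("ada", "bob"), ("bob", "eve"), ("eve", "kim"), ("ada", "kim"), ("kim", "lee")])
print(communication_cost(g, {"ada", "bob", "kim"}))   # compact team, lower cost
print(communication_cost(g, {"ada", "eve", "lee"}))   # more scattered team
```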


Subject(s)
Algorithms , Cooperative Behavior , Social Networking , Computer Graphics , Computer Heuristics , Databases, Factual , Humans , Motion Pictures
7.
Biomed Res Int ; 2021: 5546790, 2021.
Article in English | MEDLINE | ID: mdl-34518801

ABSTRACT

The spread of COVID-19 worldwide continues despite multidimensional efforts to curtail its spread and provide treatment, and efforts to contain the pandemic have triggered partial or full lockdowns across the globe. This paper presents a novel framework that intelligently combines machine learning models and Internet of Things (IoT) technology specifically to combat COVID-19 in smart cities. The purpose of the study is to promote the interoperability of machine learning algorithms with IoT technology by interacting with a population and its environment to curtail the COVID-19 pandemic. Furthermore, the study investigates and discusses solution frameworks that can generate, capture, store, and analyze data using machine learning algorithms to detect, prevent, and trace the spread of COVID-19 and provide a better understanding of the disease in smart cities. Similarly, the study outlines case studies on the application of machine learning to help fight COVID-19 in hospitals worldwide. The framework proposed in the study is a comprehensive presentation of the major components needed to integrate the machine learning approach with other AI-based solutions. Finally, the machine learning framework presented in this study has the potential to help national healthcare systems curtail the COVID-19 pandemic in smart cities. In addition, the proposed framework is intended to generate research interest that would yield outcomes capable of being integrated to form an improved framework.


Subject(s)
COVID-19/epidemiology , Communicable Disease Control/methods , Machine Learning , Algorithms , Artificial Intelligence , COVID-19/prevention & control , COVID-19/transmission , Cities/epidemiology , Contact Tracing/methods , Delivery of Health Care , Humans , Internet of Things , Pandemics , SARS-CoV-2/pathogenicity
8.
IEEE Access ; 9: 77905-77919, 2021.
Article in English | MEDLINE | ID: mdl-36789158

ABSTRACT

The novel coronavirus disease, COVID-19, is a pandemic that has weighed heavily on the socio-economic affairs of the world. Research into relevant vaccines has progressed with the development of the Pfizer-BioNTech, AstraZeneca, Moderna, Sputnik V, Janssen, Sinopharm, Valneva, Novavax and Sanofi Pasteur vaccines. There is, however, a need for a computational intelligence approach to facilitate quick detection of the disease. Different computational intelligence methods, comprising natural language processing, knowledge engineering, and deep learning, have been proposed in the literature to tackle the spread of coronavirus disease, and deep learning models in particular have demonstrated impressive performance compared with other methods. This paper aims to advance the application of deep learning and image pre-processing techniques to characterise and detect novel coronavirus infection. The study proposes a framework named CovFrameNet, which consists of a pipelined image pre-processing method and a deep learning model for feature extraction, classification, and performance measurement. The novelty of this study lies in the design of a CNN architecture that incorporates an enhanced image pre-processing mechanism. The National Institutes of Health (NIH) Chest X-Ray dataset and the COVID-19 Radiography database were used to evaluate and validate the effectiveness of the proposed deep learning model. Results revealed that the proposed model achieved an accuracy of 0.1, recall/precision of 0.85, F-measure of 0.9, and specificity of 1.0. Thus, the study's outcome showed that a CNN-based method with image pre-processing capability could be adopted for the pre-screening of suspected COVID-19 cases and the confirmation of RT-PCR-detected cases of COVID-19.
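
As a generic illustration of pipelined chest X-ray pre-processing before a CNN, the sketch below applies contrast-limited adaptive histogram equalisation, resizing, and intensity scaling; it is a plausible stand-in under those assumptions, not the exact CovFrameNet pipeline, and the file name is hypothetical.

```python
import cv2
import numpy as np

def preprocess_xray(path, size=224):
    """Generic chest X-ray pre-processing: grey-scale load, CLAHE contrast
    enhancement, resize to a common CNN input size, and scaling to [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                       # boost local contrast
    img = cv2.resize(img, (size, size))          # uniform spatial size
    return img.astype(np.float32) / 255.0        # scale intensities to [0, 1]

# Usage (hypothetical file name):
# x = preprocess_xray("patient_0001.png")
# x = x[np.newaxis, ..., np.newaxis]   # add batch and channel axes for a CNN
```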

9.
PeerJ Comput Sci ; 6: e313, 2020.
Article in English | MEDLINE | ID: mdl-33816964

ABSTRACT

BACKGROUND AND OBJECTIVE: The COVID-19 pandemic has caused severe mortality across the globe, with the USA as the current epicenter even though the initial outbreak was in Wuhan, China. Many studies have successfully applied machine learning to fight the COVID-19 pandemic from different perspectives. To the best of the authors' knowledge, no comprehensive survey with bibliometric analysis has yet been conducted on the adoption of machine learning to fight COVID-19. Therefore, the main goal of this study is to bridge this gap with an in-depth survey and bibliometric analysis of machine learning-based technologies for fighting the COVID-19 pandemic, including an extensive systematic literature review. METHODS: We applied a literature survey methodology to retrieve data from academic databases and subsequently employed a bibliometric technique to analyze the accessed records. A concise summary, sources of COVID-19 datasets, a taxonomy, and a synthesis and analysis are also presented. It was found that the convolutional neural network (CNN) is mainly utilized in developing COVID-19 diagnosis and prognosis tools, mostly from chest X-ray and chest CT scan images. We also performed a bibliometric analysis of machine learning-based COVID-19-related publications in the Scopus and Web of Science citation indexes, and we propose new perspectives for solving the identified challenges as directions for future research. We believe the survey with bibliometric analysis can help researchers easily detect areas that require further development and identify potential collaborators. RESULTS: The findings reveal that machine learning-based COVID-19 diagnosis tools received the most considerable attention from researchers. Specifically, the analyses show that energy and resources are directed more towards automated COVID-19 diagnosis tools, while COVID-19 drug and vaccine development remains grossly underexploited. The machine learning algorithm predominantly utilized in developing diagnostic tools is the CNN, applied mainly to X-ray and CT scan images. CONCLUSIONS: The challenges hindering practical application of machine learning-based technologies to fight COVID-19, and new perspectives for solving the identified problems, are presented in this article. Furthermore, we believe that the presented survey with bibliometric analysis can make it easier for researchers to identify areas that need further development and possibly identify potential collaborators at the author, country and institutional levels, with the overall aim of furthering research on the application of machine learning to disease control.

10.
Heliyon ; 5(6): e01802, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31211254

ABSTRACT

The upsurge in the volume of unwanted emails, called spam, has created an intense need for the development of more dependable and robust anti-spam filters. Recent machine learning methods have been used successfully to detect and filter spam emails. We present a systematic review of some of the popular machine learning-based email spam filtering approaches. Our review covers the important concepts, attempts, efficiency, and research trends in spam filtering. The preliminary discussion in the study background examines how machine learning techniques are applied to the email spam filtering processes of leading internet service providers (ISPs) such as Gmail, Yahoo and Outlook. We then discuss the general email spam filtering process and the various efforts by different researchers to combat spam through machine learning techniques. Our review compares the strengths and drawbacks of existing machine learning approaches and identifies the open research problems in spam filtering. We recommend deep learning and deep adversarial learning as future techniques that can effectively handle the menace of spam emails.
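
A toy example of a machine learning-based spam classifier, using a bag-of-words representation and a naive Bayes model, is shown below; the messages and labels are invented, and this is only one of the many approaches the review covers.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical labelled corpus; a real filter would use thousands of messages.
emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting agenda for monday", "please review the attached report",
]
labels = [1, 1, 0, 0]   # 1 = spam, 0 = ham

# Bag-of-words features feeding a multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free offer win now", "see the report before the meeting"]))
```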

11.
Springerplus ; 5(1): 868, 2016.
Article in English | MEDLINE | ID: mdl-27386317

ABSTRACT

Prior research has shown that the peak signal-to-noise ratio (PSNR) is the most frequently used watermarked image quality metric for determining the strength and weakness of watermarking algorithms. Conversely, normalised cross-correlation (NCC) is the most common metric used after attacks are applied to a watermarked image to verify the strength of the algorithm. Many researchers have used these approaches to evaluate their algorithms. These strategies have been used for a long time without agreed threshold values, however, which unfortunately limits the value of PSNR and NCC in reflecting the strength and weakness of watermarking algorithms. This paper addresses this issue by determining threshold values for these two parameters that reflect the degree of strength or weakness of a watermarking algorithm. We used our novel watermarking technique to embed four watermarks in the intermediate significant bits (ISB) of six image files, one by one, by replacing image pixels with new pixels while keeping the new pixels very close to the original ones. This approach gains improved robustness, as reflected in the PSNR and NCC values gathered. A neural network model was then built that takes the image quality metric (PSNR and NCC) values obtained from the ISB watermarking of the six grey-scale images as the desired output and is trained on each watermarked image's PSNR and NCC. The neural network predicts a watermarked image's PSNR together with its NCC after attacks when a portion of the output of the same or different types of image quality metrics (PSNR and NCC) is available. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate.
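
The two metrics themselves are standard; the sketch below computes PSNR and one common (non-zero-mean) formulation of NCC with NumPy on toy data, which is illustrative only and independent of the paper's embedding scheme.

```python
import numpy as np

def psnr(original, watermarked, max_val=255.0):
    """Peak signal-to-noise ratio in dB between the original and the
    watermarked image; higher means the watermark is less visible."""
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def ncc(original, extracted):
    """Normalised cross-correlation between the embedded and the extracted
    watermark; values near 1 indicate the watermark survived an attack."""
    a = original.astype(float).ravel()
    b = extracted.astype(float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy demonstration with a random 8-bit image and a slightly perturbed copy.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 2, img.shape), 0, 255)
print(round(psnr(img, noisy), 2), round(ncc(img, noisy), 4))
```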

12.
PLoS One ; 11(1): e0144371, 2016.
Article in English | MEDLINE | ID: mdl-26790131

ABSTRACT

The efficiency of a metaheuristic algorithm for global optimization depends on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup: the algorithm mimics the seal pup's movement and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup's strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Because seals are sensitive to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled as a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled as a Levy walk. The switch between these two states is triggered by the random noise emitted by predators, and the algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed on fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS converges to the global optimum more efficiently than the Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search, and shows an improvement in the balance between exploration (extensive search) and exploitation (intensive search) of the search space. RSS can thus efficiently mimic seal pup behavior to find the best lair and provides a new algorithm for global optimization problems.
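
A highly simplified sketch of the normal/urgent switching idea is given below: small Brownian steps exploit the neighbourhood of the current lair, while a heavy-tailed jump explores when simulated predator noise exceeds a threshold. The step distributions, threshold, and objective are illustrative assumptions, not the full RSS algorithm from the paper.

```python
import numpy as np

def rss_walk(cost, dim=2, iters=500, noise_threshold=0.8, seed=0):
    """Toy normal/urgent random-walk search: exploit with Brownian steps,
    explore with heavy-tailed (Levy-like) jumps when 'noise' is high."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, dim)
    best, best_cost = pos.copy(), cost(pos)
    for _ in range(iters):
        if rng.random() > noise_threshold:
            # Urgent state: long jump drawn from a heavy-tailed distribution.
            step = rng.standard_cauchy(dim)
        else:
            # Normal state: short Brownian step around the current lair.
            step = rng.normal(0, 0.1, dim)
        cand = pos + step
        if cost(cand) < cost(pos):
            pos = cand                       # move only to better lairs
        if cost(pos) < best_cost:
            best, best_cost = pos.copy(), cost(pos)
    return best, best_cost

# Toy objective: sphere function with the optimum at the origin.
print(rss_walk(lambda x: float(np.sum(x ** 2))))
```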


Subject(s)
Behavior, Animal/physiology , Seals, Earless/physiology , Algorithms , Animals , Computer Simulation , Models, Theoretical
13.
PLoS One ; 10(8): e0136140, 2015.
Article in English | MEDLINE | ID: mdl-26305483

ABSTRACT

BACKGROUND: Global warming is attracting attention from policy makers due to its impacts, such as floods, extreme weather, an increase in temperature of 0.7°C, heat waves and storms. These disasters result in loss of human life and billions of dollars in property damage. Global warming is believed to be caused by emissions of greenhouse gases from human activities, including emissions of carbon dioxide (CO2) from petroleum consumption. The limitations of previous methods of predicting CO2 emissions and the lack of work on predicting the CO2 emissions of the Organization of the Petroleum Exporting Countries (OPEC) from petroleum consumption motivated this research. METHODS/FINDINGS: The OPEC CO2 emissions data were collected from the Energy Information Administration. An artificial neural network (ANN) was chosen for this study because of its adaptability and performance. To improve the effectiveness of the ANN, the cuckoo search algorithm was hybridised with accelerated particle swarm optimisation to train the ANN and build a model for predicting OPEC CO2 emissions. The proposed model predicts OPEC CO2 emissions 3, 6, 9, 12 and 16 years ahead with improved accuracy and speed over state-of-the-art methods. CONCLUSION: An accurate prediction of OPEC CO2 emissions can serve as a reference point for reorganising economic development in OPEC member countries with a view to reducing CO2 emissions to Kyoto benchmarks, hence reducing global warming. The policy implications are discussed in the paper.
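
A much-reduced sketch of the general idea of training a small ANN's weights with a nature-inspired search (heavy-tailed flights plus abandonment of the worst candidate solutions) follows; the series, architecture, and update rules are illustrative stand-ins, not the authors' cuckoo search/accelerated PSO hybrid or the OPEC data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical emissions-like series: predict the next value from the previous three.
series = np.cumsum(rng.normal(1.0, 0.3, 60))
X = np.array([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]

H = 5                                   # hidden units
n_w = 3 * H + H + H + 1                 # flat weight-vector length of a 3-H-1 ANN

def ann(weights, x):
    """Tiny feed-forward net whose weights come packed in one flat vector,
    so a metaheuristic can treat training as a continuous search problem."""
    w1 = weights[:3 * H].reshape(3, H)
    b1 = weights[3 * H:3 * H + H]
    w2 = weights[3 * H + H:3 * H + 2 * H]
    b2 = weights[-1]
    return np.tanh(x @ w1 + b1) @ w2 + b2

def mse(weights):
    return float(np.mean((ann(weights, X) - y) ** 2))

# Reduced cuckoo-search-style loop: heavy-tailed flights around the best nest
# plus random replacement of the worst nests.
nests = rng.normal(0, 1, (15, n_w))
for _ in range(300):
    costs = np.array([mse(n) for n in nests])
    best = nests[costs.argmin()]
    steps = rng.standard_cauchy((15, n_w)) * 0.05        # heavy-tailed flights
    candidates = nests + steps * (nests - best)
    cand_costs = np.array([mse(c) for c in candidates])
    improved = cand_costs < costs
    nests[improved] = candidates[improved]
    worst = np.argsort([mse(n) for n in nests])[-3:]     # abandon the worst nests
    nests[worst] = rng.normal(0, 1, (3, n_w))

best = min(nests, key=mse)
print("train MSE:", round(mse(best), 4), "| next-step forecast:", round(float(ann(best, series[-3:])), 2))
```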


Subject(s)
Algorithms , Carbon Dioxide/analysis , Global Warming , Neural Networks, Computer , Petroleum , Humans