Results 1 - 20 of 289
1.
Environ Monit Assess ; 196(8): 724, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38990407

ABSTRACT

Analysis of changes in groundwater used as a drinking and irrigation water source is critically important for monitoring aquifers, planning water resources, energy production, combating climate change, and agricultural production. Therefore, it is necessary to model groundwater level (GWL) fluctuations to monitor and predict groundwater storage. Artificial intelligence-based models have become prevalent in water resource management due to their proven success in hydrological studies. This study proposed a hybrid model that combines the artificial neural network (ANN) and the artificial bee colony optimization (ABC) algorithm, along with the ensemble empirical mode decomposition (EEMD) and the local mean decomposition (LMD) techniques, to model groundwater levels in Erzurum province, Türkiye. GWL estimation results were evaluated with mean square error (MSE), coefficient of determination (R2), and residual sum of squares (RSS), and visually with violin, scatter, and time series plots. The results indicated that the EEMD-ABC-ANN hybrid model was superior to the other models in estimating GWL, with R2 values ranging from 0.91 to 0.99 and MSE values ranging from 0.004 to 0.07. They also revealed that promising GWL predictions can be made from previous GWL data.
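A minimal sketch of the decomposition-then-forecast idea, assuming the PyEMD package (pip name EMD-signal) and scikit-learn; the synthetic series, lag length, and plain gradient-trained MLP (standing in for the paper's ABC-optimized ANN) are illustrative assumptions:

```python
# EEMD decomposition of a groundwater-level series, then an MLP forecaster
# trained on lagged IMF values. Gradient training replaces ABC optimization.
import numpy as np
from PyEMD import EEMD
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
gwl = np.sin(np.linspace(0, 20, 600)) + 0.1 * rng.standard_normal(600)  # synthetic GWL

imfs = EEMD(trials=50).eemd(gwl)          # (n_imfs, n_samples) intrinsic mode functions
lag = 6                                   # hypothetical number of lagged inputs
X = np.stack([np.concatenate([imf[i:i + lag] for imf in imfs])
              for i in range(len(gwl) - lag)])
y = gwl[lag:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R2:", model.score(X[split:], y[split:]))
```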


Subject(s)
Environmental Monitoring , Groundwater , Neural Networks, Computer , Groundwater/chemistry , Bees , Animals , Environmental Monitoring/methods , Algorithms
2.
Sci Total Environ ; 946: 174413, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38960180

ABSTRACT

Understanding the origins of sediment within stream networks is critical to developing effective strategies to mitigate sediment delivery and soil erosion in larger drainage basins. Sediment fingerprinting is a widely accepted approach to identifying sediment sources; however, it typically relies on labor-intensive and costly chemical analyses. Recent studies have recognized diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) as a non-destructive, cost-effective, and efficient alternative for estimating sediment contributions from multiple sources. This study aimed to assess (i) the effects of different particle size fractions on DRIFTS and conservatism tests, (ii) the effects of spectral pre-processing on discriminating sub-catchment spatial sediment sources, (iii) the efficiency of partial least squares regression (PLSR) and support vector machine regression (SVMR) chemometric models across different spectral resolutions and particle size fractions, and (iv) the quantification of sub-catchment spatial sediment source contributions using chemometric models across different particle size fractions. DRIFTS analysis was performed on three particle size fractions (<38 µm, 38-63 µm, and 63-125 µm) using 54 sediment samples from three different sub-catchments and 26 target sediment samples from the Andajrood catchment in Iran. Results showed significant effects of particle size fractions on DRIFTS for both sub-catchment sediment sources and target sediment samples. Conservatism tests indicated that DRIFTS behaves conservatively for the majority of target sediment samples. Spectral pre-processing techniques, including SNV + SGD1 and SGD1, effectively discriminated sources across all particle size fractions and spectral resolutions. However, the optimal combination of pre-processing, spectral resolution, and regression model varied between sub-fractions. Validated model estimates revealed that sub-catchment 1 consistently contributed the most sediment across all particle size fractions, followed by sub-catchments 3 and 2. These results highlight the effectiveness of DRIFTS as a rapid, cost-effective, and precise method for discriminating and apportioning sediment sources within spatial sub-catchments.
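A minimal sketch of the two chemometric regressors compared in the study, using scikit-learn; the random spectra and target values are hypothetical stand-ins for DRIFTS measurements and source-contribution labels:

```python
# PLSR and SVM regression mapping spectra to a contribution target,
# compared by 5-fold cross-validated R2.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_spectra = rng.random((54, 400))          # 54 source samples, 400 spectral points
y_contrib = rng.random(54)                 # stand-in contribution target

for name, model in [("PLSR", PLSRegression(n_components=10)),
                    ("SVMR", SVR(kernel="rbf", C=10.0))]:
    r2 = cross_val_score(model, X_spectra, y_contrib, cv=5, scoring="r2")
    print(name, "mean CV R2:", r2.mean())
```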

3.
Cogn Neurodyn ; 18(3): 961-972, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826654

ABSTRACT

Electroencephalography (EEG) evaluation is an important step in the clinical diagnosis of brain death during the standard clinical procedure. The acquisition of brain-death EEG signals is always carried out in the Intensive Care Unit (ICU), where electromagnetic environmental noise and prescribed sedatives may erroneously suggest cerebral electrical activity, thus affecting the presentation of the EEG signals. In order to accurately and efficiently assist physicians in making correct judgments, this paper presents a band-pass filter and threshold rejection-based EEG signal pre-processing method and an EEG-based coma/brain-death classification system built on a One-Dimensional Convolutional Neural Network (1D-CNN) model to classify informative brain activity features from real-world recorded clinical EEG data. The experimental results show that our method performs well in classifying coma patients and brain-death patients, with a classification accuracy of 99.71%, an F1-score of 99.71%, and a recall of 99.51%, indicating that the proposed model handles the coma/brain-death EEG signal classification task well. This paper provides a straightforward and effective method for pre-processing and classifying EEG signals from coma/brain-death patients, and demonstrates the validity and reliability of the method. Considering the specificity of the condition and the complexity of the EEG acquisition environment, it presents an effective method for pre-processing real-time EEG signals in clinical diagnoses and aiding physicians in their diagnosis, with significant implications for the choice of signal pre-processing methods in the construction of practical brain-death identification systems.
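A minimal sketch of the named pre-processing steps, assuming SciPy; the sampling rate, band edges, epoch length, and rejection threshold are illustrative assumptions:

```python
# Butterworth band-pass filtering followed by amplitude-threshold epoch rejection.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                        # assumed sampling rate (Hz)
b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)

def preprocess(eeg, epoch_s=2.0, uv_thresh=100.0):
    """Band-pass filter a 1-D EEG trace and drop epochs exceeding the threshold."""
    filtered = filtfilt(b, a, eeg)
    n = int(epoch_s * fs)
    epochs = filtered[: len(filtered) // n * n].reshape(-1, n)
    keep = np.abs(epochs).max(axis=1) < uv_thresh  # threshold rejection
    return epochs[keep]

eeg = 20 * np.random.default_rng(2).standard_normal(int(60 * fs))  # 1 min synthetic EEG
print(preprocess(eeg).shape)   # (n_clean_epochs, samples_per_epoch)
```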

4.
Brain Inform ; 11(1): 17, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38837089

ABSTRACT

Neuromarketing is an emerging research field that aims to understand consumers' decision-making processes when choosing which product to buy. This information is highly sought after by businesses looking to improve their marketing strategies by understanding what leaves a positive or negative impression on consumers. It has the potential to revolutionize the marketing industry by enabling companies to offer engaging experiences, create more effective advertisements, avoid misguided marketing strategies, and ultimately save millions of dollars for businesses. Therefore, good documentation is necessary to capture the current research situation in this vital sector. In this article, we present a systematic review of EEG-based neuromarketing. We aim to shed light on the research trends, technical scopes, and potential opportunities in this field. We reviewed recent publications from established databases and divided the popular research topics in neuromarketing into five clusters to present the current research trend in this field. We also discuss the brain regions that are activated when making purchase decisions and their relevance to neuromarketing applications. The article provides appropriate illustrations of marketing stimuli that can elicit authentic impressions from consumers' minds, the techniques used to process and analyze recorded brain data, and the current strategies employed to interpret the data. Finally, we offer recommendations to upcoming researchers to help them investigate the possibilities in this area more efficiently in the future.

5.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732843

ABSTRACT

As the number of electronic gadgets in our daily lives increases and most of them require some kind of human interaction, innovative, convenient input methods are in demand. State-of-the-art (SotA) ultrasound-based hand gesture recognition (HGR) systems have limitations in terms of robustness and accuracy. This research presents a novel machine learning (ML)-based end-to-end solution for hand gesture recognition with low-cost micro-electromechanical systems (MEMS) ultrasonic transducers. In contrast to prior methods, our ML model processes the raw echo samples directly instead of using pre-processed data. Consequently, the processing flow presented in this work leaves it to the ML model to extract the important information from the echo data. The success of this approach is demonstrated as follows. Four MEMS ultrasonic transducers are placed in three different geometrical arrangements. For each arrangement, different types of ML models are optimized and benchmarked on datasets acquired with the presented custom hardware (HW): convolutional neural networks (CNNs), gated recurrent units (GRUs), long short-term memory (LSTM), vision transformer (ViT), and cross-attention multi-scale vision transformer (CrossViT). The last three models reached more than 88% accuracy. The most important innovation described in this research paper is the demonstration that little pre-processing is necessary to obtain high accuracy in ultrasonic HGR for several arrangements of cost-effective and low-power MEMS ultrasonic transducer arrays. Even the computationally intensive Fourier transform can be omitted. The presented approach is further compared to HGR systems using other sensor types such as vision, WiFi, radar, and state-of-the-art ultrasound-based HGR systems. Direct processing of the sensor signals by a compact model makes ultrasonic hand gesture recognition a true low-cost and power-efficient input method.
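A minimal sketch of the core idea of classifying raw echo samples without Fourier pre-processing, here as a compact Keras 1-D CNN; the array geometry, frame length, and class count are illustrative assumptions, not the paper's configuration:

```python
# A compact 1-D CNN that consumes raw ultrasonic echo frames directly.
import numpy as np
import tensorflow as tf

n_transducers, n_samples, n_classes = 4, 512, 6     # assumed geometry
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_samples, n_transducers)),
    tf.keras.layers.Conv1D(16, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(100, n_samples, n_transducers).astype("float32")  # raw echoes
y = np.random.randint(0, n_classes, 100)
model.fit(X, y, epochs=2, verbose=0)
```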


Subject(s)
Gestures , Hand , Machine Learning , Neural Networks, Computer , Humans , Hand/physiology , Pattern Recognition, Automated/methods , Ultrasonography/methods , Ultrasonography/instrumentation , Ultrasonics/instrumentation , Algorithms
6.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732936

ABSTRACT

Lung diseases are the third-leading cause of mortality in the world. Due to compromised lung function, respiratory difficulties, and physiological complications, lung disease brought on by toxic substances, pollution, infections, or smoking results in millions of deaths every year. Chest X-ray images pose a challenge for classification due to their visual similarity, leading to confusion among radiologists. To mitigate those issues, we created an automated system with a large data hub that contains 17 datasets of chest X-ray images, for a total of 71,096 images, and we aim to classify ten different disease classes. Because it combines various resources, our large data hub contains noise and annotations, class imbalances, data redundancy, etc. We applied several image pre-processing techniques to eliminate noise and artifacts from the images, such as resizing, de-annotation, CLAHE, and filtering; an elastic deformation augmentation technique then generates a balanced dataset. We then developed DeepChestGNN, a novel medical image classification model utilizing a deep convolutional neural network (DCNN) to extract 100 significant deep features indicative of various lung diseases. This model, incorporating Batch Normalization, MaxPooling, and Dropout layers, achieved a remarkable 99.74% accuracy in extensive trials. By combining graph neural networks (GNNs) with feedforward layers, the architecture is very flexible when it comes to working with graph data for accurate lung disease classification. This study highlights the significant impact of combining advanced research with clinical application potential in diagnosing lung diseases, providing an optimal framework for precise and efficient disease identification and classification.
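A minimal sketch of the named noise-removal steps (resizing, filtering, CLAHE), assuming OpenCV; the target size and kernel settings are illustrative assumptions:

```python
# Resize, lightly denoise, and apply contrast-limited adaptive histogram
# equalization (CLAHE) to a grayscale chest X-ray.
import cv2
import numpy as np

def preprocess_cxr(img_gray: np.ndarray) -> np.ndarray:
    img = cv2.resize(img_gray, (224, 224))
    img = cv2.medianBlur(img, 3)                       # light denoising
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)                            # contrast-limited equalization

demo = (np.random.rand(1024, 1024) * 255).astype(np.uint8)  # stand-in X-ray
print(preprocess_cxr(demo).shape)
```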


Subject(s)
Lung Diseases , Neural Networks, Computer , Humans , Lung Diseases/diagnostic imaging , Lung Diseases/diagnosis , Image Processing, Computer-Assisted/methods , Deep Learning , Algorithms , Lung/diagnostic imaging , Lung/pathology
7.
Brain Commun ; 6(3): fcae165, 2024.
Article in English | MEDLINE | ID: mdl-38799618

ABSTRACT

Studies of intracranial EEG networks have been used to reveal seizure generators in patients with drug-resistant epilepsy. Intracranial EEG is implanted to capture the epileptic network, the collection of brain tissue that forms a substrate for seizures to start and spread. Interictal intracranial EEG measures brain activity at baseline, and networks computed during this state can reveal aberrant brain tissue without requiring seizure recordings. Intracranial EEG network analyses require choosing a reference and applying statistical measures of functional connectivity. Approaches to these technical choices vary widely across studies, and the impact of these technical choices on downstream analyses is poorly understood. Our objective was to examine the effects of different re-referencing and connectivity approaches on connectivity results and on the ability to lateralize the seizure onset zone in patients with drug-resistant epilepsy. We applied 48 pre-processing pipelines to a cohort of 125 patients with drug-resistant epilepsy recorded with interictal intracranial EEG across two epilepsy centres to generate intracranial EEG functional connectivity networks. Twenty-four functional connectivity measures across time and frequency domains were applied in combination with common average re-referencing or bipolar re-referencing. We applied an unsupervised clustering algorithm to identify groups of pre-processing pipelines. We subjected each pre-processing approach to three quality tests: (i) the introduction of spurious correlations; (ii) robustness to incomplete spatial sampling; and (iii) the ability to lateralize the clinician-defined seizure onset zone. Three groups of similar pre-processing pipelines emerged: common average re-referencing pipelines, bipolar re-referencing pipelines and relative entropy-based connectivity pipelines. Relative entropy and common average re-referencing networks were more robust to incomplete electrode sampling than bipolar re-referencing and other connectivity methods (Friedman test, Dunn-Sidák test P < 0.0001). Bipolar re-referencing reduced spurious correlations at non-adjacent channels better than common average re-referencing (Δ mean from machine ref = -0.36 versus -0.22) and worse in adjacent channels (Δ mean from machine ref = -0.14 versus -0.40). Relative entropy-based network measures lateralized the seizure onset hemisphere better than other measures in patients with temporal lobe epilepsy (Benjamini-Hochberg-corrected P < 0.05, Cohen's d: 0.60-0.76). Finally, we present an interface where users can rapidly evaluate intracranial EEG pre-processing choices to select the optimal pre-processing methods tailored to specific research questions. The choice of pre-processing methods affects downstream network analyses. Choosing a single method among highly correlated approaches can reduce redundancy in processing. Relative entropy outperforms other connectivity methods in multiple quality tests. We present a method and interface for researchers to optimize their pre-processing methods for deriving intracranial EEG brain networks.
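A minimal sketch of the two re-referencing schemes compared, applied to a hypothetical (channels x samples) iEEG array, with zero-lag Pearson correlation as one simple stand-in for the 24 connectivity measures:

```python
# Common average vs. adjacent-contact bipolar re-referencing, followed by a
# simple functional-connectivity estimate on each.
import numpy as np

ieeg = np.random.default_rng(3).standard_normal((16, 1000))  # 16 contacts

car = ieeg - ieeg.mean(axis=0, keepdims=True)   # common average re-reference
bipolar = ieeg[1:] - ieeg[:-1]                  # adjacent-contact bipolar pairs

fc_car = np.corrcoef(car)                       # zero-lag Pearson connectivity
fc_bip = np.corrcoef(bipolar)
print(fc_car.shape, fc_bip.shape)
```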

8.
Magn Reson Imaging ; 111: 186-195, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38744351

ABSTRACT

PURPOSE: To determine the significance of complex-valued inputs and complex-valued convolutions compared to real-valued inputs and real-valued convolutions in convolutional neural networks (CNNs) for frequency and phase correction (FPC) of GABA-edited magnetic resonance spectroscopy (MRS) data. METHODS: An ablation study using simulated data was performed to determine the most effective input (real or complex) and convolution type (real or complex) to predict frequency and phase shifts in GABA-edited MEGA-PRESS data using CNNs. The best CNN model was subsequently compared using both simulated and in vivo data to two recently proposed deep learning (DL) methods for FPC of GABA-edited MRS. All methods were trained using the same experimental setup and evaluated using the signal-to-noise ratio (SNR) and linewidth of the GABA peak, choline artifact, and by visually assessing the reconstructed final difference spectrum. Statistical significance was assessed using the Wilcoxon signed rank test. RESULTS: The ablation study showed that using complex values for the input represented by real and imaginary channels in our model input tensor, with complex convolutions was most effective for FPC. Overall, in the comparative study using simulated data, our CC-CNN model (that received complex-valued inputs with complex convolutions) outperformed the other models as evaluated by the mean absolute error. CONCLUSION: Our results indicate that the optimal CNN configuration for GABA-edited MRS FPC uses a complex-valued input and complex convolutions. Overall, this model outperformed existing DL models.
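A minimal sketch of a complex-valued convolution expressed through two real PyTorch convolutions via (a+ib)(c+id) = (ac-bd) + i(ad+bc); channel sizes and kernel width are illustrative, and the paper's full CC-CNN is not reproduced:

```python
# Complex convolution built from two real Conv1d modules holding the real and
# imaginary parts of the kernel.
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, k)   # real part of the kernel
        self.conv_i = nn.Conv1d(in_ch, out_ch, k)   # imaginary part of the kernel

    def forward(self, x_r, x_i):
        y_r = self.conv_r(x_r) - self.conv_i(x_i)
        y_i = self.conv_r(x_i) + self.conv_i(x_r)
        return y_r, y_i

fid_r = torch.randn(2, 1, 2048)                     # batch of complex FIDs
y_r, y_i = ComplexConv1d(1, 8, 9)(fid_r, torch.randn_like(fid_r))
print(y_r.shape, y_i.shape)
```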


Subject(s)
Magnetic Resonance Spectroscopy , Neural Networks, Computer , Signal-To-Noise Ratio , gamma-Aminobutyric Acid , gamma-Aminobutyric Acid/metabolism , gamma-Aminobutyric Acid/analysis , Magnetic Resonance Spectroscopy/methods , Humans , Brain/diagnostic imaging , Brain/metabolism , Deep Learning , Algorithms , Artifacts , Choline/metabolism , Computer Simulation
9.
J Environ Manage ; 360: 121097, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733844

ABSTRACT

With high-frequency data of nitrate (NO3-N) concentrations in waters becoming increasingly important for understanding watershed system behavior and ecosystem management, the accurate and economic acquisition of high-frequency NO3-N concentration data has become a key point. This study attempted to use coupled deep learning neural networks and routinely monitored data to predict hourly NO3-N concentrations in a river. The hourly NO3-N concentration at the outlet of the Oyster River watershed in New Hampshire, USA, was predicted through neural networks with a hybrid model architecture coupling Convolutional Neural Networks and the Long Short-Term Memory model (CNN-LSTM). The routinely monitored data (river depth, water temperature, air temperature, precipitation, specific conductivity, pH, and dissolved oxygen concentrations) used for model training were collected from a nested high-frequency monitoring network, while the high-frequency NO3-N concentration data obtained at the outlet were not included as inputs. The whole dataset was separated into training, validation, and testing sets at a ratio of 5:3:2. The hybrid CNN-LSTM model with different input lengths (1 d, 3 d, 7 d, 15 d, 30 d) displayed performance comparable to, and even better than, other studies at lower frequencies, with mean Nash-Sutcliffe Efficiency values of 0.60-0.83. Models with shorter input lengths demonstrated both higher modeling accuracy and stability. The water level, water temperature, and pH values at the monitoring sites were the main controlling factors for forecasting performance. This study provides new insight into using deep learning networks with a coupled architecture and routinely monitored data for high-frequency riverine NO3-N concentration forecasting, along with suggestions on variable and input-length selection during pre-processing of the input data.
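A minimal sketch of a CNN-LSTM forecaster of this kind in Keras; the window length, layer sizes, and synthetic data are illustrative assumptions, with the seven input variables matching the abstract:

```python
# Conv1D layers extract local patterns from a window of routine variables, an
# LSTM models their temporal evolution, and a dense head predicts hourly NO3-N.
import numpy as np
import tensorflow as tf

window, n_vars = 72, 7            # e.g. 3 days of hourly routine observations
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_vars)),
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),     # hourly NO3-N concentration
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(500, window, n_vars).astype("float32")
y = np.random.rand(500, 1).astype("float32")
model.fit(X, y, validation_split=0.375, epochs=2, verbose=0)  # ~5:3 train/val
```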


Subject(s)
Deep Learning , Neural Networks, Computer , Nitrates , Rivers , Nitrates/analysis , Rivers/chemistry , Environmental Monitoring/methods , Water Pollutants, Chemical/analysis , New Hampshire
10.
Anal Sci ; 40(7): 1261-1268, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38573454

ABSTRACT

In this study, in order to share a near-infrared analysis model of holocellulose among three spectral instruments of the same type, 84 pulp samples and their holocellulose contents were taken as the research objects. The effects of 10 pre-processing methods, such as 1st derivative (D1st), 2nd derivative (D2nd), multiplicative scatter correction (MSC), standard normal variate transformation (SNV), autoscaling, normalization, mean centering, and their pairwise combinations, on the transfer performance of the stable wavelengths selected by screening wavelengths with consistent and stable signals (SWCSS) were discussed. The results showed that the model established on the wavelengths selected by the SWCSS algorithm after autoscaling pre-processing had the best analysis effect on the two target samples. The root mean square error of prediction (RMSEP) decreased from 2.4769 and 2.3119 before model transfer to 1.2563 and 1.2384, respectively. Compared with the full-spectrum model, the AIC value decreased from 3209.83 to 942.82. Therefore, the autoscaling pre-processing method combined with the SWCSS algorithm can significantly improve the accuracy and efficiency of model transfer and support the application of the SWCSS algorithm in the rapid determination of pulp properties by near-infrared spectroscopy (NIRS).
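A minimal sketch of two of the compared pre-processing methods, SNV (applied per spectrum) and autoscaling (applied per wavelength), on a hypothetical NIR matrix:

```python
# SNV standardizes each row (spectrum); autoscaling standardizes each column
# (wavelength) across samples.
import numpy as np

def snv(X):
    """Standard normal variate: centre and scale each spectrum (row)."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def autoscale(X):
    """Autoscaling: centre and scale each wavelength (column)."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.random.default_rng(4).random((84, 700))   # 84 pulp samples, 700 channels
print(autoscale(X).std(axis=0)[:3])              # columns now have unit variance
```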

11.
Heliyon ; 10(6): e27752, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38560675

ABSTRACT

This study worked with Chunghwa Telecom to collect data from 17 rooftop solar photovoltaic plants installed on top of office buildings, warehouses, and computer rooms in northern, central and southern Taiwan from January 2021 to June 2023. A data pre-processing method combining linear regression and K Nearest Neighbor (k-NN) was proposed to estimate missing values for weather and power generation data. Outliers were processed using historical data and parameters highly correlated with power generation volumes were used to train an artificial intelligence (AI) model. To verify the reliability of this data pre-processing method, this study developed multilayer perceptron (MLP) and long short-term memory (LSTM) models to make short-term and medium-term power generation forecasts for the 17 solar photovoltaic plants. Study results showed that the proposed data pre-processing method reduced normalized root mean square error (nRMSE) for short- and medium-term forecasts in the MLP model by 17.47% and 11.06%, respectively, and also reduced the nRMSE for short- and medium-term forecasts in the LSTM model by 20.20% and 8.03%, respectively.
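A minimal sketch of the described gap-filling idea, combining k-NN imputation of weather variables with a linear regression from weather to generation, using scikit-learn; the synthetic columns and k value are illustrative assumptions:

```python
# Fill missing weather values with k-NN imputation, then fill missing power
# values with a linear fit from the imputed weather features.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
weather = rng.random((1000, 3))                 # stand-in irradiance, temp, humidity
power = weather @ np.array([3.0, 0.5, -0.2]) + 0.1 * rng.standard_normal(1000)
weather[rng.random(weather.shape) < 0.05] = np.nan   # 5% missing weather entries
power[rng.random(1000) < 0.05] = np.nan              # 5% missing generation

weather_f = KNNImputer(n_neighbors=5).fit_transform(weather)
mask = np.isnan(power)
reg = LinearRegression().fit(weather_f[~mask], power[~mask])
power[mask] = reg.predict(weather_f[mask])           # regression-based fill
print("remaining NaNs:", np.isnan(power).sum())
```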

12.
MethodsX ; 12: 102668, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38617898

ABSTRACT

This study introduces "Specialis Revelio," a sophisticated text pre-processing module aimed at enhancing the detection of disguised toxic content in online communications. Through a blend of conventional and novel pre-processing methods, this module significantly improves the accuracy of existing toxic text detection tools, addressing the challenge of content that is deliberately altered to evade standard detection methods.
• Integration with Existing Systems: "Specialis Revelio" is designed to augment popular toxic text classifiers, enhancing their ability to detect and filter toxic content more effectively.
• Innovative Pre-processing Methods: The module combines traditional pre-processing steps like lowercasing and stemming with advanced strategies, including the handling of adversarial examples and typo correction, to reveal concealed toxicity.
• Validation through Comparative Study: Its effectiveness was validated via a comparative analysis against widely used APIs, demonstrating a marked improvement in the detection of various toxic text indicators.
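A minimal sketch of combining conventional steps (lowercasing, stemming) with a de-obfuscation pass of the kind described, assuming NLTK; the character-substitution map is an illustrative assumption, not the module's actual rule set:

```python
# Lowercase, reverse simple character substitutions used to disguise toxicity,
# then stem each token.
from nltk.stem import PorterStemmer

SUBS = str.maketrans({"@": "a", "0": "o", "1": "i", "$": "s", "3": "e"})
stemmer = PorterStemmer()

def normalize(text: str) -> str:
    text = text.lower().translate(SUBS)          # undo common leet substitutions
    return " ".join(stemmer.stem(tok) for tok in text.split())

print(normalize("You are a $tup1d l0ser"))       # -> "you are a stupid loser"
```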

13.
Sci Rep ; 14(1): 9152, 2024 Apr 21.
Article in English | MEDLINE | ID: mdl-38644408

ABSTRACT

Air pollution stands as a significant modern-day challenge impacting life quality, the environment, and the economy. It comprises various pollutants like gases, particulate matter, biological molecules, and more, stemming from sources such as vehicle emissions, industrial operations, agriculture, and natural events. Nitrogen dioxide (NO2), among these harmful gases, is notably prevalent in densely populated urban regions. Given its adverse effects on health and the environment, accurate monitoring of NO2 levels becomes imperative for devising effective risk mitigation strategies. However, the precise measurement of NO2 poses challenges as it traditionally relies on costly and bulky equipment. This has prompted the development of more affordable alternatives, although their reliability is often questionable. The aim of this article is to introduce a groundbreaking method for precisely calibrating cost-effective NO2 sensors. This technique involves statistical preprocessing of low-cost sensor readings, aligning their distribution with reference data. Central to this calibration is an artificial neural network (ANN) surrogate designed to predict sensor correction coefficients. It utilizes environmental variables (temperature, humidity, atmospheric pressure), cross-references auxiliary NO2 sensors, and incorporates short time series of previous readings from the primary sensor. These methods are complemented by global data scaling. Demonstrated using a custom-designed cost-effective monitoring platform and high-precision public reference station data collected over 5 months, every component of our calibration framework proves crucial, contributing to its exceptional accuracy (with a correlation coefficient near 0.95 concerning the reference data and an RMSE below 2.4 µg/m3). This level of performance positions the calibrated sensor as a viable, cost-effective alternative to traditional monitoring approaches.
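A minimal sketch of the two calibration stages described, distribution alignment via quantile mapping followed by an ANN correction, using NumPy and scikit-learn; the synthetic readings and covariates are stand-ins for the platform's richer feature set:

```python
# Stage 1 maps the low-cost sensor's distribution onto the reference; stage 2
# refines the estimate with an MLP using environmental covariates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
ref = rng.gamma(4.0, 5.0, 2000)                       # reference NO2 (µg/m3)
raw = 0.6 * ref + 5 + 2 * rng.standard_normal(2000)   # biased low-cost reading
temp, hum = rng.random(2000), rng.random(2000)        # environmental covariates

q = np.interp(raw, np.sort(raw), np.sort(ref))        # quantile mapping

X = np.column_stack([q, temp, hum])
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:1500], ref[:1500])
pred = mlp.predict(X[1500:])
print("RMSE:", float(np.sqrt(np.mean((pred - ref[1500:]) ** 2))))
```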

14.
PeerJ Comput Sci ; 10: e1953, 2024.
Article in English | MEDLINE | ID: mdl-38660169

ABSTRACT

Melanoma is the most aggressive and prevalent form of skin cancer globally, with a higher incidence in men and individuals with fair skin. Early detection of melanoma is essential for successful treatment and prevention of metastasis. In this context, deep learning methods, distinguished by their ability to perform automated and detailed analysis and to extract melanoma-specific features, have emerged. These approaches excel at large-scale analysis, optimizing time, and providing accurate diagnoses, contributing to timely treatments compared to conventional diagnostic methods. The present study offers a methodology to assess the effectiveness of an AlexNet-based convolutional neural network (CNN) in identifying early-stage melanomas. The model is trained on a balanced dataset of 10,605 dermoscopic images and on modified datasets where hair, a potential obstructive factor, was detected and removed, allowing for an assessment of how hair removal affects the model's overall performance. To perform hair removal, we propose a morphological algorithm combined with different filtering techniques for comparison: Fourier, Wavelet, average blur, and low-pass filters. The model is evaluated through 10-fold cross-validation and the metrics of accuracy, recall, precision, and the F1 score. The results demonstrate that the proposed model performs best on the dataset where we implemented both the Wavelet filter and the hair removal algorithm, with an accuracy of 91.30%, a recall of 87%, a precision of 95.19%, and an F1 score of 90.91%.
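A minimal sketch of a morphological hair-removal step of the kind proposed, assuming OpenCV: a black-hat transform highlights thin dark strands, which are then inpainted; kernel size and threshold are illustrative assumptions:

```python
# Black-hat morphology finds dark hair strands; the resulting mask drives
# inpainting to fill those pixels.
import cv2
import numpy as np

def remove_hair(bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # thin dark structures
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(bgr, mask, 3, cv2.INPAINT_TELEA)            # fill hair pixels

demo = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)        # stand-in dermoscopy
print(remove_hair(demo).shape)
```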

15.
Appl Spectrosc ; 78(6): 567-578, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38465603

ABSTRACT

Given the growing urgency of plastic management and regulation in the world, recent studies have investigated the problem of plastic material identification for correct classification and disposal. Recent works have shown the potential of machine learning techniques for successful microplastics classification using Raman signals. Classification techniques from the machine learning area allow the identification of the type of microplastic from optical signals based on Raman spectroscopy. In this paper, we investigate the impact of high-frequency noise on the performance of related classification tasks. It is well known that classification based on Raman is highly dependent on peak visibility, but it is also known that signal smoothing is a common step in the pre-processing of the measured signals. This raises a potential trade-off between high-frequency noise and peak preservation that depends on user-defined parameters. The results obtained in this work suggest that a linear discriminant analysis model cannot generalize properly in the presence of noisy signals, whereas an error-correcting output codes model is better suited to account for inherent noise. Moreover, principal component analysis (PCA) can become a must-do step for robust classification models, given its simplicity and natural smoothing capabilities. The fundamental aspects of this work are our study of high-frequency noise, the possible trade-off between pre-processing the high-frequency noise and peak visibility, and the use of PCA as a noise reduction technique in addition to its dimensionality-reduction functionality.
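A minimal sketch of PCA used as a smoothing step for Raman spectra, assuming scikit-learn: spectra are projected onto the leading components and reconstructed, discarding high-frequency variance; the component count is an illustrative assumption:

```python
# Low-rank PCA reconstruction as denoising: keep the leading components and
# reconstruct, discarding the variance that mostly carries noise.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
clean = np.exp(-0.5 * ((np.arange(500) - 250) / 8.0) ** 2)   # one synthetic peak
spectra = clean + 0.05 * rng.standard_normal((100, 500))     # noisy measurements

pca = PCA(n_components=5).fit(spectra)
denoised = pca.inverse_transform(pca.transform(spectra))     # low-rank reconstruction
print("noise std before/after:",
      float((spectra - clean).std()), float((denoised - clean).std()))
```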

16.
Int J Food Microbiol ; 416: 110665, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38457887

ABSTRACT

Romaine lettuce in the U.S. is primarily grown in California or Arizona and either processed near the growing regions (source processing) or transported long distances for processing in facilities serving distant markets (forward processing). Recurring outbreaks of Escherichia coli O157:H7 implicating romaine lettuce in recent years, which sometimes exhibited patterns of case clustering in the Northeast and Midwest, have raised industry concerns over the potential impact of forward processing on romaine lettuce food safety and quality. In this study, freshly harvested romaine lettuce from a commercial field destined for both forward and source processing channels was tracked from farm to processing facility in two separate trials. Whole-head romaine lettuce and packaged fresh-cut products were collected from both forward and source facilities for microbiological and product quality analyses. High-throughput amplicon sequencing targeting the 16S rRNA gene was performed to describe shifts in the lettuce microbiota. Total aerobic bacteria and coliform counts on whole-head lettuce and on fresh-cut lettuce at different storage times were significantly (p < 0.05) higher for those from the forward processing facility than for those from the source processing facility. Microbiota on whole-head and fresh-cut lettuce shifted differently after the lettuce was subjected to source or forward processing, and after product storage. Consistent with the length of pre-processing delays between harvest and processing, the lettuce quality scores of source-processed romaine lettuce, especially at late stages of 2-week storage, were significantly higher than those of the forward-processed product (p < 0.05).


Subject(s)
Escherichia coli O157 , Microbiota , Food Microbiology , Lactuca , Escherichia coli O157/genetics , Food Safety , Colony Count, Microbial , Food Handling , Food Contamination/analysis
17.
Med Biol Eng Comput ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536580

ABSTRACT

This study investigated the impact of ComBat harmonization on the reproducibility of radiomic features extracted from magnetic resonance images (MRI) acquired on different scanners, using various data acquisition parameters and multiple image pre-processing techniques on a dedicated MRI phantom. Four scanners were used to acquire MRI of a nonanatomic phantom as part of the TCIA RIDER database. In fast spin-echo inversion recovery (IR) sequences, several inversion times were employed: 50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, and 3000 ms. In addition, a 3D fast spoiled gradient recalled echo (FSPGR) sequence was used to investigate several flip angles (FA): 2, 5, 10, 15, 20, 25, and 30 degrees. Nineteen phantom compartments were manually segmented. Different approaches were used to pre-process each image: bin discretization, Wavelet filter, Laplacian of Gaussian, logarithm, square, square root, and gradient. Overall, 92 first-, second-, and higher-order statistical radiomic features were extracted. ComBat harmonization was then applied to the extracted radiomic features. Finally, the Intraclass Correlation Coefficient (ICC) and Kruskal-Wallis (KW) tests were implemented to assess the robustness of the radiomic features. The number of non-significant features in the KW test ranged between 0-5 and 29-74 for various scanners, 31-91 and 37-92 for the three repeated tests, 0-33 to 34-90 for FAs, and 3-68 to 65-89 for IRs before and after ComBat harmonization, with different image pre-processing techniques, respectively. The number of features with ICC over 90% ranged between 0-8 and 6-60 for various scanners, 11-75 and 17-80 for the three repeated tests, 3-83 to 9-84 for FAs, and 3-49 to 3-63 for IRs before and after ComBat harmonization, with different image pre-processing techniques, respectively. The use of various scanners, IRs, and FAs has a great impact on radiomic features, although the majority of scanner-robust features are also robust to IR and FA. Among the parameters affecting MR images, repeated tests on a single scanner have a negligible impact on radiomic features. Different scanners and acquisition parameters combined with various image pre-processing techniques might affect radiomic features to a large extent. ComBat harmonization might significantly improve the reproducibility of MRI radiomic features.
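A minimal sketch of the location-scale idea behind ComBat on a (samples x features) radiomic matrix; full ComBat adds empirical Bayes shrinkage of the batch parameters, which is omitted here, so a dedicated implementation should be preferred in practice:

```python
# Align each batch's per-feature mean and variance to the pooled estimates
# (a simplified, non-empirical-Bayes stand-in for ComBat).
import numpy as np

def align_batches(features: np.ndarray, batch: np.ndarray) -> np.ndarray:
    """features: (samples x features); batch: integer scanner label per sample."""
    out = features.astype(float).copy()
    grand_mu, grand_sd = features.mean(axis=0), features.std(axis=0)
    for b in np.unique(batch):
        rows = batch == b
        mu, sd = features[rows].mean(axis=0), features[rows].std(axis=0)
        out[rows] = (features[rows] - mu) / sd * grand_sd + grand_mu
    return out

rng = np.random.default_rng(8)
feats = np.vstack([rng.normal(0, 1, (19, 92)), rng.normal(2, 3, (19, 92))])
harmonized = align_batches(feats, np.repeat([0, 1], 19))   # 19 compartments x 2 scanners
print(harmonized[:19].mean().round(2), harmonized[19:].mean().round(2))
```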

18.
BMC Bioinformatics ; 25(1): 80, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378440

ABSTRACT

BACKGROUND: With the increase in the dimensionality of flow cytometry data over the past years, there is a growing need to replace or complement traditional manual analysis (i.e. iterative 2D gating) with automated data analysis pipelines. A crucial part of these pipelines consists of pre-processing and applying quality control filtering to the raw data, in order to use high-quality events in the downstream analyses. This part can in turn be split into a number of elementary steps: signal compensation or unmixing, scale transformation, removal of debris, doublets and dead cells, batch effect correction, etc. However, assembling and assessing the pre-processing part can be challenging for a number of reasons. First, each of the involved elementary steps can be implemented using various methods and R packages. Second, the order of the steps can have an impact on the downstream analysis results. Finally, each method typically comes with its specific, non-standardized diagnostics and visualizations, making objective comparison difficult for the end user. RESULTS: Here, we present CytoPipeline and CytoPipelineGUI, two R packages to build, compare and assess pre-processing pipelines for flow cytometry data. To exemplify these new tools, we present the steps involved in designing a pre-processing pipeline on a real-life dataset and demonstrate different visual assessment use cases. We also set up a benchmark comparing two pre-processing pipelines differing in their quality control methods, and show how the packages' visualization utilities can provide crucial user insight into the obtained benchmark metrics. CONCLUSION: CytoPipeline and CytoPipelineGUI are two Bioconductor R packages that help build, visualize and assess pre-processing pipelines for flow cytometry data. They increase productivity during pipeline development and testing, and complement benchmarking tools by providing intuitive user insight into benchmarking results.


Subject(s)
Data Analysis , Software , Flow Cytometry/methods
19.
Hum Brain Mapp ; 45(3): e26632, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38379519

ABSTRACT

Since the introduction of the BrainAGE method, novel machine learning methods for brain age prediction have continued to emerge. The idea of estimating the chronological age from magnetic resonance images proved to be an interesting field of research due to the relative simplicity of its interpretation and its potential use as a biomarker of brain health. We revised our previous BrainAGE approach, originally utilising relevance vector regression (RVR), and substituted it with Gaussian process regression (GPR), which enables more stable processing of larger datasets, such as the UK Biobank (UKB). In addition, we extended the global BrainAGE approach to regional BrainAGE, providing spatially specific scores for five brain lobes per hemisphere. We tested the performance of the new algorithms under several different conditions and investigated their validity on the ADNI and schizophrenia samples, as well as on a synthetic dataset of neocortical thinning. The results show an improved performance of the reframed global model on the UKB sample with a mean absolute error (MAE) of less than 2 years and a significant difference in BrainAGE between healthy participants and patients with Alzheimer's disease and schizophrenia. Moreover, the workings of the algorithm show meaningful effects for a simulated neocortical atrophy dataset. The regional BrainAGE model performed well on two clinical samples, showing disease-specific patterns for different levels of impairment. The results demonstrate that the new improved algorithms provide reliable and valid brain age estimations.
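A minimal sketch of brain-age estimation with Gaussian process regression, the model family the revised approach adopts, using scikit-learn; the features, kernel choice, and synthetic ages are illustrative assumptions:

```python
# GPR maps anatomical features to chronological age; the BrainAGE score is the
# gap between predicted and true age.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
X = rng.random((300, 50))                           # stand-in anatomical features
age = 40 + 30 * X[:, 0] + rng.standard_normal(300)  # synthetic chronological age

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:200], age[:200])
brain_age = gpr.predict(X[200:])
brainage_gap = brain_age - age[200:]                # BrainAGE: predicted - true
print("MAE:", float(np.abs(brainage_gap).mean()))
```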


Subject(s)
Alzheimer Disease , Schizophrenia , Humans , Workflow , Brain/diagnostic imaging , Brain/pathology , Schizophrenia/diagnostic imaging , Schizophrenia/pathology , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Machine Learning , Magnetic Resonance Imaging/methods
20.
Electromagn Biol Med ; 43(1-2): 31-45, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38369844

ABSTRACT

This paper proposes a novel approach, BTC-SAGAN-CHA-MRI, for the classification of brain tumors using a Self-Attention based Generative Adversarial Network (SAGAN) optimized with a Color Harmony Algorithm. Brain cancer, with its high fatality rate worldwide, especially in the case of brain tumors, necessitates more accurate and efficient classification methods. While existing deep learning approaches for brain tumor classification have been suggested, they often lack precision and require substantial computational time. The proposed method begins by gathering input brain MR images from the BRATS dataset, followed by a pre-processing step using a Mean Curvature Flow-based approach to eliminate noise. The pre-processed images then undergo the Improved Non-Subsampled Shearlet Transform (INSST) for extracting radiomic features. These features are fed into the SAGAN, which is optimized with the Color Harmony Algorithm to categorize the brain images into different tumor types, including gliomas, meningiomas, and pituitary tumors. This innovative approach shows promise in enhancing the precision and efficiency of brain tumor classification, holding potential for improved diagnostic outcomes in the field of medical imaging. The accuracy achieved for brain tumor identification by the proposed method is 99.29%. The proposed BTC-SAGAN-CHA-MRI technique achieves 18.29%, 14.09%, and 7.34% higher accuracy and 67.92%, 54.04%, and 59.08% less computation time when compared to existing models, namely brain tumor diagnosis utilizing a deep learning convolutional neural network with a transfer learning approach (BTC-KNN-SVM-MRI); M3BTCNet, multi-model brain tumor categorization with metaheuristic deep neural network feature optimization (BTC-CNN-DEMFOA-MRI); and an efficient method based on a hierarchical deep learning neural network classifier for brain tumor categorization (BTC-Hie DNN-MRI), respectively.


Subject(s)
Algorithms , Brain Neoplasms , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Brain Neoplasms/pathology , Humans , Image Processing, Computer-Assisted/methods , Color , Neural Networks, Computer , Deep Learning