ABSTRACT
Currently, the number of vehicles in circulation continues to increase steadily, leading to a parallel increase in vehicular accidents. Among the many causes of these accidents, human factors such as driver drowsiness play a fundamental role. In this context, one solution to address the challenge of drowsiness detection is to anticipate drowsiness by alerting drivers in a timely and effective manner. Thus, this paper presents a Convolutional Neural Network (CNN)-based approach for drowsiness detection that analyzes the eye region and uses the Mouth Aspect Ratio (MAR) for yawning detection. As part of this approach, endpoint delineation is optimized for extraction of the region of interest (ROI) around the eyes. An NVIDIA Jetson Nano-based device and a near-infrared (NIR) camera are used for real-time applications. A Driver Drowsiness Artificial Intelligence (DD-AI) architecture is proposed for the eye state detection procedure. In a performance analysis, the results of the proposed approach were compared with architectures based on InceptionV3, VGG16, and ResNet50V2. The Night-Time Yawning-Microsleep-Eyeblink-Driver Distraction (NITYMED) dataset was used for training, validation, and testing of the architectures. The proposed DD-AI network achieved an accuracy of 99.88% on the NITYMED test data, proving superior to the other networks. In the hardware implementation, tests conducted in a real environment yielded an average accuracy of 96.55% at 14 fps for the DD-AI network, confirming its superior performance.
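The abstract does not specify how the Mouth Aspect Ratio is computed; a common convention, shown below as a minimal Python sketch, derives it from eight inner-lip landmarks and compares it against a yawn threshold. The landmark ordering (points 60-67 of the widely used 68-point scheme) and the threshold value are assumptions for illustration, not the authors' values.

    import numpy as np

    def mouth_aspect_ratio(mouth: np.ndarray) -> float:
        """MAR from 8 inner-lip landmarks: mean vertical opening / mouth width."""
        a = np.linalg.norm(mouth[1] - mouth[7])   # vertical distances
        b = np.linalg.norm(mouth[2] - mouth[6])
        c = np.linalg.norm(mouth[3] - mouth[5])
        d = np.linalg.norm(mouth[0] - mouth[4])   # corner-to-corner width
        return (a + b + c) / (3.0 * d)

    YAWN_THRESHOLD = 0.6                          # illustrative; tuned on data in practice
    mouth_pts = np.random.rand(8, 2)              # stand-in for detected landmarks
    is_yawning = mouth_aspect_ratio(mouth_pts) > YAWN_THRESHOLD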
Subjects
Automobile Driving; Neural Networks, Computer; Humans; Mouth/physiology; Eye; Sleep Stages/physiology; Sleepiness; Artificial Intelligence; Accidents, Traffic
ABSTRACT
Recent research has demonstrated the effectiveness of convolutional neural networks (CNN) in assessing the health status of bee colonies by classifying acoustic patterns. However, developing a monitoring system using CNNs rather than conventional machine learning models can result in higher computation costs, greater energy demand, and longer inference times. This study examines the potential of CNN architectures for developing a monitoring system based on constrained hardware. The experimentation involved testing ten CNN architectures from the PyTorch and Torchvision libraries on single-board computers (SBCs): an Nvidia Jetson Nano (NJN), a Raspberry Pi 5 (RPi5), and an Orange Pi 5 (OPi5). The CNN architectures were trained using four datasets containing spectrograms of acoustic samples of different durations (30, 10, 5, or 1 s) to analyze their impact on performance. The hyperparameter search was conducted using the Optuna framework, and the CNN models were validated using k-fold cross-validation. The inference time and power consumption were measured to compare the performance of the CNN models and the SBCs. The aim is to provide a basis for developing a monitoring system for precision apiculture applications based on constrained devices and CNNs.
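As an illustration of how an Optuna hyperparameter search can wrap one of the Torchvision CNNs on spectrogram batches, here is a minimal sketch; the architecture (mobilenet_v3_small), the search ranges, and the random tensors standing in for the bee-sound spectrogram DataLoader are assumptions, not the study's actual setup.

    import optuna
    import torch
    import torchvision

    def objective(trial: optuna.Trial) -> float:
        # search space is illustrative, not the paper's
        lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
        batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])

        model = torchvision.models.mobilenet_v3_small(num_classes=2)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = torch.nn.CrossEntropyLoss()

        x = torch.randn(batch_size, 3, 224, 224)   # stand-in spectrogram batch
        y = torch.randint(0, 2, (batch_size,))     # stand-in colony-state labels
        for _ in range(3):                         # a few toy optimization steps
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
        # value Optuna maximizes (validation accuracy in a real run)
        return (model(x).argmax(1) == y).float().mean().item()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=20)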
Subjects
Acoustics; Neural Networks, Computer; Animals; Bees/physiology; Machine Learning; Algorithms
ABSTRACT
Breast cancer remains a major health concern worldwide, requiring the advancement of early detection methods to improve prognosis and treatment outcomes. In this sense, mammography is regarded as the gold standard in breast cancer screening and early detection. However, in a scenario where extensive analysis is required, radiologists reviewing large sets of mammograms may produce false-negative or false-positive diagnoses. Therefore, artificial intelligence has emerged in recent years as a method for improving the timeliness of breast cancer diagnosis. Nonetheless, preprocessing stages are required to prepare the mammography dataset so that learning models can correctly identify breast anomalies. In this paper, we introduce a novel method employing convolutional neural networks (CNNs) to segment the pectoral muscle in 1288 mediolateral oblique (MLO) mammograms, addressing class imbalance and overfitting between classes through dataset augmentation based on translation, rotation, and scale transformations. The effectiveness of the model was assessed through a confusion matrix and performance metrics, highlighting an average Dice coefficient of 0.98 and a Jaccard index of 0.96. The outcomes demonstrate the model's capability to accurately identify three classes: pectoral muscle, breast, and background. This study emphasizes the importance of tackling class imbalance and augmenting data when training models for reliable early breast cancer detection.
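For reference, the Dice coefficient and Jaccard index reported above can be computed per class from integer label masks as in the following sketch; the class coding (0 = background, 1 = breast, 2 = pectoral muscle) and the random masks are assumptions for illustration only.

    import numpy as np

    def dice_and_jaccard(pred: np.ndarray, target: np.ndarray, n_classes: int = 3):
        """Per-class Dice coefficient and Jaccard index for integer label masks."""
        scores = {}
        for c in range(n_classes):
            p, t = (pred == c), (target == c)
            inter = np.logical_and(p, t).sum()
            dice = 2.0 * inter / (p.sum() + t.sum() + 1e-8)
            jaccard = inter / (np.logical_or(p, t).sum() + 1e-8)
            scores[c] = (dice, jaccard)
        return scores

    pred = np.random.randint(0, 3, (256, 256))     # stand-in predicted mask
    target = np.random.randint(0, 3, (256, 256))   # stand-in ground-truth mask
    print(dice_and_jaccard(pred, target))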
ABSTRACT
The Zika disease caused by the Zika virus was declared a Public Health Emergency by the World Health Organization (WHO), with microcephaly as its most critical consequence. Aiming to reduce the spread of the virus, biopharmaceutical organizations invest in vaccine research and production based on multiple platforms. A growing vaccine production approach is based on virus-like particles (VLPs), owing to the absence of genetic material in their composition and their hypoallergenic and non-mutagenic character. For such bioprocesses, real-time monitoring is essential and can be achieved with process analysis techniques such as near-infrared (NIR) spectroscopy, combined with chemometric methods such as Partial Least Squares (PLS) and Artificial Neural Networks (ANN) for the prediction of biochemical variables. This work proposes an at-line monitoring model for biochemical variables in Zika VLP upstream production using NIR spectroscopy, comparing sampling conditions (with or without cells), analytical blanks (air, ultrapure water), and spectra pre-processing approaches. Seven experiments in a benchtop bioreactor using the recombinant baculovirus/Sf9 insect cell platform in serum-free medium were performed to obtain biochemical and spectral data for chemometric modeling (PLS and ANN), with a random data split of 80% calibration and 20% validation for cross-validation of the PLS models, and 70% training, 15% testing, and 15% validation for the ANN models. The best models generated in the present work presented an average absolute error of 1.59 × 10⁵ cells/mL for viable cell density, 2.37% for cell viability, 0.25 g/L for glucose, 0.007 g/L for lactate, 0.138 g/L for glutamine, 0.18 g/L for glutamate, 0.003 g/L for ammonium, and 0.014 g/L for potassium.
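A minimal sketch of the PLS part of such a workflow (spectra regressed against one biochemical variable with an 80/20 calibration/validation split) is given below; the synthetic spectra, the variable range, and the number of latent variables are illustrative stand-ins, not the experimental data.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    X = np.random.rand(70, 1500)              # stand-in NIR spectra (70 samples)
    y = np.random.uniform(0.5, 10.0, 70)      # stand-in variable (e.g., glucose, g/L)

    X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

    pls = PLSRegression(n_components=10)      # number of latent variables is illustrative
    pls.fit(X_cal, y_cal)
    print("MAE:", mean_absolute_error(y_val, pls.predict(X_val).ravel()))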
ABSTRACT
The present work focused on inline Raman spectroscopy monitoring of SARS-CoV-2 VLP production using two culture media, by fitting chemometric models for biochemical parameters (viable cell density, cell viability, glucose, lactate, glutamine, glutamate, ammonium, and viral titer). For that purpose, a linear approach, partial least squares (PLS), and a nonlinear approach, artificial neural networks (ANN), were used as correlation techniques to build the models for each variable. The ANN approach resulted in better fits for most parameters, except for viable cell density and glucose, for which PLS presented more suitable models; both were statistically similar for ammonium. The mean absolute errors of the best models, within the quantified value ranges for viable cell density (375,000-1,287,500 cells/mL), cell viability (29.76-100.00%), glucose (8.700-10.500 g/L), lactate (0.019-0.400 g/L), glutamine (0.925-1.520 g/L), glutamate (0.552-1.610 g/L), viral titer (no virus quantified-7.505 log10 PFU/mL), and ammonium (0.0074-0.0478 g/L) were, respectively, 41,533 ± 45,273 cells/mL (PLS), 1.63 ± 1.54% (ANN), 0.058 ± 0.065 g/L (PLS), 0.007 ± 0.007 g/L (ANN), 0.007 ± 0.006 g/L (ANN), 0.006 ± 0.006 g/L (ANN), 0.211 ± 0.221 log10 PFU/mL (ANN), and 0.0026 ± 0.0026 g/L (PLS) or 0.0027 ± 0.0034 g/L (ANN). The correlation accuracy, errors, and best models obtained are in line with previous studies, both online and offline, using the same insect cell/baculovirus expression system or different cell hosts. Moreover, biochemical tracking throughout the bioreactor runs using the models showed suitable profiles, even with two different culture media.
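The ANN side of such chemometric modeling can be sketched with a small feed-forward regressor on standardized spectra, as below; the spectra, the monitored variable, the layer sizes, and the train/test split are illustrative assumptions rather than the configuration used in the study.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    X = np.random.rand(80, 1200)               # stand-in Raman spectra
    y = np.random.uniform(0.02, 0.4, 80)       # stand-in variable (e.g., lactate, g/L)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    ann = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    )
    ann.fit(X_tr, y_tr)
    print("MAE:", mean_absolute_error(y_te, ann.predict(X_te)))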
ABSTRACT
There are two widely used methods to measure the cardiac cycle and obtain heart rate measurements: the electrocardiogram (ECG) and the photoplethysmogram (PPG). The sensors used in these methods have gained great popularity in wearable devices, which have extended cardiac monitoring beyond the hospital environment. However, continuous monitoring of ECG signals via mobile devices is challenging, as it requires users to keep their fingers pressed on the device during data collection, making it unfeasible in the long term. The PPG does not have this limitation; however, the medical knowledge needed to diagnose anomalies from this signal is more limited, since the ECG is studied and used in the literature as the gold standard. To minimize this problem, this work proposes a method, PPG2ECG, that uses the correlation between the PPG and ECG signal domains to infer the waveform of the ECG signal from the PPG signal. PPG2ECG maps between domains by applying a set of convolution filters, learning to transform a PPG input signal into an ECG output signal using a U-Net Inception neural network architecture. We assessed the proposed method using two evaluation strategies, based on personalized and generalized models, and achieved mean error values of 0.015 and 0.026, respectively. Our method overcomes the limitations of previous approaches by providing an accurate and feasible way to continuously monitor ECG signals through PPG signals. The short distances between the inferred ECG and the original ECG demonstrate the feasibility and potential of our method to assist in the early identification of heart diseases.
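The exact U-Net Inception layout is not given in the abstract; the reduced sketch below only illustrates the general idea of a 1D encoder-decoder trained to map a PPG window to an ECG window of the same length. Window length, channel counts, and the L1 training loss are assumptions.

    import torch
    import torch.nn as nn

    class PPG2ECGNet(nn.Module):
        """Minimal 1D encoder-decoder: PPG window in, ECG window out."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv1d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
                nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose1d(32, 16, 8, stride=2, padding=3), nn.ReLU(),
                nn.ConvTranspose1d(16, 1, 8, stride=2, padding=3),
            )

        def forward(self, ppg: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(ppg))

    model = PPG2ECGNet()
    ppg = torch.randn(4, 1, 512)                   # batch of 512-sample PPG windows
    ecg_hat = model(ppg)                           # predicted ECG, same length
    loss = nn.functional.l1_loss(ecg_hat, torch.randn_like(ecg_hat))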
Subjects
Electrocardiography; Heart Rate; Neural Networks, Computer; Photoplethysmography; Signal Processing, Computer-Assisted; Humans; Electrocardiography/methods; Photoplethysmography/methods; Heart Rate/physiology; Algorithms; Wearable Electronic Devices
ABSTRACT
The use of artificial intelligence (AI) algorithms has gained importance for dental applications in recent years. Analyzing AI information from different sensor data, such as images or panoramic radiographs (panoramic X-rays), can help improve medical decisions and achieve early diagnosis of different dental pathologies. In particular, deep learning (DL) techniques based on convolutional neural networks (CNNs) have obtained promising results in image-based dental applications, where approaches based on classification, detection, and segmentation are being studied with growing interest. However, several challenges remain, such as data quality and quantity, variability among categories, and the analysis of the possible bias and variance associated with each dataset distribution. This study compares the performance of three deep learning object detection models (Faster R-CNN, YOLO V2, and SSD), using different ResNet architectures (ResNet-18, ResNet-50, and ResNet-101) as feature extractors, for detecting and classifying third molar angles in panoramic X-rays according to Winter's classification criterion. Each object detection architecture was trained, calibrated, validated, and tested with the three feature extraction CNNs, which were the networks that best fit our dataset distribution. Winter's criterion characterizes the third molar's position relative to the second molar's longitudinal axis, and the detected categories are distoangular, vertical, mesioangular, and horizontal. For training, we used a total of 644 panoramic X-rays. On the testing dataset, performance reached up to 99% mean average accuracy, with YOLO V2 proving the most effective at solving the third molar angle detection problem. These results demonstrate that the use of CNNs for object detection in panoramic radiographs represents a promising solution in dental applications.
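A fine-tuning sketch for one detector/backbone combination is shown below using Torchvision's Faster R-CNN with a ResNet-50 FPN backbone; the four angle classes plus background, the dummy radiograph tensor, and the single training step are illustrative, and the paper's ResNet-18/101 variants would require building the backbone separately.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    num_classes = 5   # distoangular, vertical, mesioangular, horizontal + background

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # one illustrative training step on a dummy panoramic-radiograph tensor
    images = [torch.rand(3, 600, 1200)]
    targets = [{"boxes": torch.tensor([[100.0, 150.0, 300.0, 400.0]]),
                "labels": torch.tensor([2])}]
    model.train()
    losses = model(images, targets)       # dict of detection losses
    sum(losses.values()).backward()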
Subjects
Deep Learning; Molar, Third; Neural Networks, Computer; Radiography, Panoramic; Radiography, Panoramic/methods; Humans; Molar, Third/diagnostic imaging; Algorithms; Artificial Intelligence; Image Processing, Computer-Assisted/methods
ABSTRACT
Traditionally, the performance of sodium-ion batteries has been predicted based on a single characteristic of the electrodes and its relationship to specific capacity increase. However, recent studies have shown that this hypothesis is incorrect, because their performance depends on multiple physical and chemical variables. For this reason, the present communication presents machine learning as an innovative strategy to predict the performance of functionalized hard carbon anodes prepared from grapefruit peels. In this sense, a three-layer feed-forward Artificial Neural Network (ANN) was designed. The inputs used to feed the ANN were the physicochemical characteristics of the materials, consisting of mercury intrusion porosimetry data (SHg and average pore size), elemental analysis (C, H, N, S), the ID/IG ratio obtained from Raman studies, and X-ray photoemission spectroscopy data of the C1s, N1s, and O1s regions. In addition, two more inputs were added: the cycle number and the applied C-rate. The ANN architecture consisted of a first hidden layer with a sigmoid transfer function and a second hidden layer with a log-sigmoid transfer function; a sigmoid transfer function was used in the output layer. Each layer had 10 neurons. The training algorithm used was Bayesian regularization. The results show that the proposed ANN correctly predicts (R² > 0.99) the performance of all materials. The proposed strategy provides critical insights into the variables that must be controlled during material synthesis to optimize the process and accelerate progress in developing tailored materials.
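The described network can be sketched as follows in PyTorch; note that Bayesian regularization (as in MATLAB's trainbr) has no direct PyTorch equivalent, so plain L2 weight decay is used here only as a rough stand-in, and the input dimension and synthetic data are illustrative.

    import torch
    import torch.nn as nn

    n_inputs = 12   # descriptors + cycle number + C-rate; exact count is illustrative
    model = nn.Sequential(
        nn.Linear(n_inputs, 10), nn.Sigmoid(),      # first hidden layer, sigmoid
        nn.Linear(10, 10), nn.LogSigmoid(),         # second hidden layer, log-sigmoid
        nn.Linear(10, 1), nn.Sigmoid(),             # output layer, sigmoid
    )

    # weight decay as a rough stand-in for Bayesian regularization
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    criterion = nn.MSELoss()
    X = torch.rand(200, n_inputs)                   # stand-in descriptor matrix
    y = torch.rand(200, 1)                          # stand-in normalized capacity
    for _ in range(100):
        optimizer.zero_grad()
        criterion(model(X), y).backward()
        optimizer.step()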
ABSTRACT
Implementing diabetes surveillance systems is paramount to mitigate the risk of incurring substantial medical expenses. Currently, blood glucose is measured by minimally invasive methods, which involve extracting a small blood sample and transmitting it to a blood glucose meter; this method is uncomfortable for the individuals undergoing it. The present study introduces an Explainable Artificial Intelligence (XAI) system, which aims to create an intelligible machine capable of explaining expected outcomes and decision models. To this end, we analyze abnormal glucose levels using a Bi-directional Long Short-Term Memory (Bi-LSTM) network and a Convolutional Neural Network (CNN). The glucose levels are acquired through glucose oxidase (GOD) strips placed on the human body. The signal data are then converted into spectrogram images, which are classified as low, average, or abnormal glucose levels. The labeled spectrogram images are then used to train the individualized monitoring model. The proposed XAI model for tracking real-time glucose levels uses an XAI-driven architecture in its feature processing. The model's effectiveness is evaluated by analyzing its performance and several evaluation metrics derived from the confusion matrix. The results of the study demonstrate that the proposed model effectively identifies individuals with elevated glucose levels.
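A sketch of the signal-to-spectrogram step followed by a small bidirectional LSTM classifier is given below; the sampling rate, window parameters, network sizes, and three-class labeling are assumptions for illustration, and the CNN branch and XAI explanation layer described above are not reproduced.

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.signal import spectrogram

    fs = 100                                     # stand-in sampling rate (Hz)
    signal = np.random.randn(30 * fs)            # stand-in glucose-sensor trace
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=128, noverlap=64)
    log_spec = np.log1p(Sxx)                     # (freq_bins, time_frames)

    class BiLSTMClassifier(nn.Module):
        """Bi-LSTM over spectrogram frames -> low / average / abnormal glucose."""
        def __init__(self, n_freq_bins: int, n_classes: int = 3):
            super().__init__()
            self.lstm = nn.LSTM(n_freq_bins, 64, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * 64, n_classes)

        def forward(self, x):                    # x: (batch, time, freq)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])         # classify from the last frame

    x = torch.tensor(log_spec.T, dtype=torch.float32).unsqueeze(0)
    logits = BiLSTMClassifier(n_freq_bins=log_spec.shape[0])(x)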
ABSTRACT
Objective: Convolutional neural networks (CNNs) have achieved state-of-the-art results in various medical image segmentation tasks. However, CNNs often assume that the source and target datasets follow the same probability distribution; when this assumption is not satisfied, their performance degrades significantly. This poses a limitation in medical image analysis, where including information from different imaging modalities can bring large clinical benefits. In this work, we present an unsupervised Structure Aware Cross-modality Domain Adaptation (StAC-DA) framework for medical image segmentation. Methods: StAC-DA implements image- and feature-level adaptation in a sequential two-step approach. The first step performs image-level alignment, where images from the source domain are translated to the target domain in pixel space by a CycleGAN-based model. The latter model includes a structure-aware network that preserves the shape of the anatomical structure during translation. The second step consists of feature-level alignment. A U-Net network with deep supervision is trained with the transformed source domain images and the target domain images in an adversarial manner to produce probable segmentations for the target domain. Results: The framework is evaluated on bidirectional cardiac substructure segmentation. StAC-DA outperforms leading unsupervised domain adaptation approaches, ranking first in the segmentation of the ascending aorta when adapting from Magnetic Resonance Imaging (MRI) to Computed Tomography (CT) and from CT to MRI. Conclusions: The presented framework overcomes the limitations posed by differing distributions in training and testing datasets. Moreover, the experimental results highlight its potential to improve the accuracy of medical image segmentation across diverse imaging modalities.
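The image-level alignment step rests on CycleGAN-style adversarial and cycle-consistency losses; the sketch below shows only those two terms with trivial stand-in networks. The structure-aware network, the deep-supervision U-Net, and the loss weighting actually used in StAC-DA are not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # stand-in translators and discriminator (real models would be full CNNs)
    G_src2tgt = nn.Conv2d(1, 1, 3, padding=1)   # e.g., MRI -> CT translator
    G_tgt2src = nn.Conv2d(1, 1, 3, padding=1)   # e.g., CT -> MRI translator
    D_tgt = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1))

    mri = torch.rand(2, 1, 128, 128)            # source-domain batch
    fake_ct = G_src2tgt(mri)                    # image-level alignment output
    rec_mri = G_tgt2src(fake_ct)                # cycle back to the source domain

    d_out = D_tgt(fake_ct)
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    cycle_loss = F.l1_loss(rec_mri, mri)        # cycle-consistency term
    gen_loss = adv_loss + 10.0 * cycle_loss     # weighting factor is illustrative
    gen_loss.backward()                         # generator update step would follow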
ABSTRACT
PURPOSE: Investigations into the correction of presbyopia have considered lens design, clinical implications and the development of objective metrics such as the visual Strehl ratio. This study investigated the Jacobi-Fourier phase mask as an ophthalmic element for the correction of presbyopia. The goal was to develop a contact or intraocular lens whose performance was largely insensitive to changes in pupil diameter. METHODS: Numerical simulations based on Fourier optics were performed to evaluate three different Jacobi-Fourier polynomials, with the aim of providing a 1 dioptre (D) range of clear vision. Performance was evaluated for three pupil sizes (6, 4 and 2 mm), while polychromatic images were simulated using three different wavelengths (656.3, 587.6 and 486.1 nm). The neural transfer function was included in the simulation. To validate the method and results, we used the visual Strehl combined objective metric (VSCombined) currently used in visual optics. This metric gives more weight to the phase transfer function and is more suitable for non-symmetrical phase functions. RESULTS: Numerical validation showed the suitability of the Jacobi-Fourier phase masks for extending the range of clear vision of presbyopic eyes, providing a visual acuity of at least 0.10 logMAR (6/7.5 Snellen) at all distances between 1 and 6 m. The results show that the 1 D range of clear vision was not affected by changes in pupil size, that increasing the radial order of the Jacobi-Fourier phase mask increased retinal image contrast while reducing image artefacts, and that the wavelength dependence of the retinal images was reduced. These results are supported by simulated images and the objective criterion VSCombined. CONCLUSIONS: The use of Jacobi-Fourier phase masks as ophthalmic elements for presbyopic correction shows promising results, with a good range of clear vision and reduced dependence on pupil size and chromatic aberration.
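The Fourier-optics simulation pipeline (pupil function with a phase mask, propagated to a point-spread function) can be sketched as below; the radial phase profile used here is only a placeholder, not an actual Jacobi-Fourier polynomial, and the grid size and defocus value are illustrative.

    import numpy as np

    n = 512
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    r = np.sqrt(X**2 + Y**2)

    aperture = (r <= 1.0).astype(float)          # circular pupil
    defocus = 2.0 * np.pi * 0.5 * r**2           # 0.5 waves of defocus (illustrative)
    phase_mask = np.cos(3.0 * np.pi * r**2)      # placeholder, not a Jacobi-Fourier profile

    pupil = aperture * np.exp(1j * (defocus + phase_mask))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    psf /= psf.sum()                                  # normalized point-spread function
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))   # modulation transfer function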
ABSTRACT
This paper proposes a machine learning model for the prediction of road traffic noise in the city of Bogotá, Colombia. The input variables of the model were vehicle capacity, speed, type of flow, and number of lanes. The input data were obtained through measurement campaigns in which audio and video recordings were made. The audio recordings, made with a measuring microphone calibrated at a height of 4 meters, made it possible to calculate the noise levels through software processing. In turn, by processing the video data, the vehicle capacity and speed were obtained; this was carried out by means of a classifier trained with images of vehicles taken in the field and from free databases. To determine the machine learning algorithm to be used, five models were compared, each configured with hyperparameters obtained through grid search. The results showed that the Multilayer Perceptron (MLP) regressor had the best fit, with an MAE of 0.86 dBA on the test data. Finally, the proposed MLP regressor was compared with some classical statistical models used for road traffic noise prediction. The main conclusion is that the MLP regressor obtained the best error and fit indicators with respect to traditional statistical models.
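A minimal sketch of an MLP regressor tuned by grid search on such traffic features is shown below; the synthetic data, the feature ordering, and the parameter grid are assumptions and do not reproduce the paper's configuration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import mean_absolute_error

    # stand-in features: [vehicle capacity, mean speed, flow type, number of lanes]
    X = np.random.rand(500, 4)
    y = 60 + 15 * np.random.rand(500)            # synthetic noise level in dBA
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    pipe = make_pipeline(StandardScaler(), MLPRegressor(max_iter=3000, random_state=0))
    grid = GridSearchCV(
        pipe,
        {"mlpregressor__hidden_layer_sizes": [(32,), (64, 32)],
         "mlpregressor__alpha": [1e-4, 1e-3]},   # illustrative grid, not the paper's
        scoring="neg_mean_absolute_error", cv=5,
    )
    grid.fit(X_tr, y_tr)
    print("test MAE (dBA):", mean_absolute_error(y_te, grid.predict(X_te)))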
ABSTRACT
We present a novel neural network-based method for analyzing intra-voxel structures, addressing critical challenges in diffusion-weighted MRI analysis for brain connectivity and development studies. The network architecture, called the Local Neighborhood Neural Network, is designed to use the spatial correlations of neighboring voxels for enhanced inference while reducing parameter overhead. Our model exploits these relationships to improve the analysis of complex structures and noisy data environments. We adopt a self-supervised approach to address the lack of ground truth data, generating voxel-neighborhood signals to build the training set. This eliminates the need for manual annotations and facilitates training under realistic conditions. Comparative analyses show that our method outperforms the constrained spherical deconvolution (CSD) method in quantitative and qualitative validations. Using phantom images that mimic in vivo data, our approach improves angular error, volume fraction estimation accuracy, and success rate. Furthermore, a qualitative comparison on real data shows better spatial consistency of the proposed method in real brain images. This approach demonstrates enhanced intra-voxel structure analysis capabilities and holds promise for broader application in various imaging scenarios.
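One way to assemble the voxel-neighborhood training samples implied above is sketched here; the neighborhood size (3×3×3), the synthetic DWI volume, and the brain mask are illustrative assumptions, and the network and self-supervised loss are not reproduced.

    import numpy as np

    def neighborhood_samples(dwi: np.ndarray, mask: np.ndarray, size: int = 3):
        """Extract size^3 voxel neighborhoods of diffusion signals:
        each sample has shape (size, size, size, n_gradients)."""
        half = size // 2
        samples = []
        xs, ys, zs = np.where(mask)
        for x, y, z in zip(xs, ys, zs):
            if (half <= x < dwi.shape[0] - half and
                    half <= y < dwi.shape[1] - half and
                    half <= z < dwi.shape[2] - half):
                samples.append(dwi[x - half:x + half + 1,
                                   y - half:y + half + 1,
                                   z - half:z + half + 1, :])
        return np.stack(samples)

    dwi = np.random.rand(16, 16, 16, 64)           # stand-in DWI volume, 64 gradients
    mask = np.zeros(dwi.shape[:3], dtype=bool)
    mask[4:12, 4:12, 4:12] = True                  # stand-in brain mask
    print(neighborhood_samples(dwi, mask).shape)   # (n_voxels, 3, 3, 3, 64)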
ABSTRACT
Precise measurement of fiber diameter in animal and synthetic textiles is crucial for quality assessment and pricing; however, traditional methods often struggle with accuracy, particularly when fibers are densely packed or overlapping. Current computer vision techniques, while useful, have limitations in addressing these challenges. This paper introduces a novel deep-learning-based method to automatically generate distance maps of fiber micrographs, enabling more accurate fiber segmentation and diameter calculation. Our approach utilizes a modified U-Net architecture, trained on both real and simulated micrographs, to regress distance maps. This allows for the effective separation of individual fibers, even in complex scenarios. The model achieves a mean absolute error (MAE) of 0.1094 and a mean square error (MSE) of 0.0711, demonstrating its effectiveness in accurately measuring fiber diameters. This research highlights the potential of deep learning to revolutionize fiber analysis in the textile industry, offering a more precise and automated solution for quality control and pricing.
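The distance-map regression target described above can be generated from a binary fiber annotation as in this sketch; the synthetic mask and the normalization are illustrative assumptions, and the modified U-Net itself is not reproduced.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def distance_map_target(fiber_mask: np.ndarray) -> np.ndarray:
        """Binary fiber mask -> distance-map target: each fiber pixel gets its
        Euclidean distance to the nearest background pixel, so touching fibers
        stay separable at their shared boundary."""
        dist = distance_transform_edt(fiber_mask)
        return dist / (dist.max() + 1e-8)          # normalize to [0, 1]

    mask = np.zeros((128, 128), dtype=np.uint8)    # stand-in micrograph annotation
    mask[40:60, :] = 1                             # one horizontal "fiber"
    mask[62:80, :] = 1                             # a second, nearly touching fiber
    target = distance_map_target(mask)             # regression target for the U-Net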
ABSTRACT
Emotion recognition through speech is a technique employed in various scenarios of Human-Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, most notably regarding the quantity and diversity of data when deep learning techniques are used. The lack of a standard in feature selection leads to continuous development and experimentation, and choosing and designing the appropriate network architecture constitutes another challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach, developing preprocessing and feature selection stages, and constructing a dataset called EmoDSc by combining several available databases. The synergy between spectral features and spectrogram images is investigated. Independently, the weighted accuracy obtained using only spectral features was 89%, while using only spectrogram images it reached 90%. These results, although surpassing previous research, highlight the strengths and limitations of each representation when operating in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified dataset EmoDSc, demonstrates a remarkable accuracy of 96%.
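A reduced sketch of the proposed fusion idea (a 1D CNN over spectral feature sequences, a 2D CNN over spectrogram images, and an MLP over the concatenated embeddings) is given below; all layer sizes, the feature dimensions, and the number of emotion classes are illustrative assumptions, not the EmoDSc configuration.

    import torch
    import torch.nn as nn

    class FusionEmotionNet(nn.Module):
        """Late fusion of a 1D CNN branch and a 2D CNN branch, classified by an MLP."""
        def __init__(self, n_classes: int = 7):
            super().__init__()
            self.cnn1d = nn.Sequential(
                nn.Conv1d(40, 64, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
            self.cnn2d = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            self.mlp = nn.Sequential(
                nn.Linear(64 + 16, 64), nn.ReLU(), nn.Linear(64, n_classes))

        def forward(self, spectral, spectrogram):
            a = self.cnn1d(spectral).flatten(1)        # (batch, 64)
            b = self.cnn2d(spectrogram).flatten(1)     # (batch, 16)
            return self.mlp(torch.cat([a, b], dim=1))

    spectral = torch.randn(8, 40, 200)         # e.g., 40 spectral coefficients x 200 frames
    spectrogram = torch.randn(8, 1, 128, 128)  # spectrogram image
    logits = FusionEmotionNet()(spectral, spectrogram)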
Subjects
Deep Learning; Emotions; Neural Networks, Computer; Humans; Emotions/physiology; Speech/physiology; Databases, Factual; Algorithms; Pattern Recognition, Automated/methods
ABSTRACT
This article implements a hybrid Machine Learning (ML) model to classify stoppage events in copper-crushing equipment, more specifically a conveyor belt. The model combines Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) with Principal Component Analysis (PCA) to identify the type of stoppage event when it occurs in an industrial sector that is significant for the Chilean economy. This research addresses the critical need to optimise maintenance management in the mining industry, highlighting the technological relevance of and motivation for using advanced ML techniques. The study focusses on combining and implementing three ML models trained with historical data composed of information from various sensors, real and virtual, as well as from maintenance reports describing operational conditions and equipment failure characteristics. The main objective of this study is to improve efficiency in identifying the nature of a stoppage, serving as a basis for the subsequent development of a reliable failure prediction system. The results indicate that this approach significantly increases information reliability, addressing the persistent challenges in data management within the maintenance area. With a classification accuracy of 96.2% and a recall of 96.3%, the model validates and automates the classification of stoppage events, significantly reducing dependency on interdepartmental interactions. This advancement eliminates the need for reliance on external databases, which have previously been prone to errors, missing critical data, or outdated information. By implementing this methodology, a robust and reliable foundation is established for developing a failure prediction model, fostering both efficiency and reliability in the maintenance process. The application of ML in this context produces demonstrably positive outcomes in the classification of stoppage events, underscoring its significant impact on industry operations.
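One possible realization of the ANN/SVM/PCA combination is a soft-voting ensemble of two PCA-reduced classifiers, sketched below on synthetic data; the feature count, number of stoppage classes, PCA dimensionality, and the voting scheme are assumptions, since the abstract does not detail how the three models are combined.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, recall_score

    # stand-in sensor/report features and stoppage-event labels (4 event types)
    X = np.random.rand(600, 25)
    y = np.random.randint(0, 4, 600)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    svm = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(probability=True))
    ann = make_pipeline(StandardScaler(), PCA(n_components=10),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))
    hybrid = VotingClassifier([("svm", svm), ("ann", ann)], voting="soft")
    hybrid.fit(X_tr, y_tr)
    pred = hybrid.predict(X_te)
    print(accuracy_score(y_te, pred), recall_score(y_te, pred, average="macro"))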
ABSTRACT
The massive arrival of pelagic Sargassum on the coasts of several countries of the Atlantic Ocean began in 2011 and, to date, continues to generate social and environmental challenges for the region. Knowing the distribution and quantity of Sargassum in the ocean, on coasts, and on beaches is therefore necessary to understand the phenomenon and develop protocols for its management, use, and final disposal. In this context, the present study proposes a methodology to calculate the area, in square meters, that Sargassum occupies on beaches, based on the semantic segmentation of aerial images using the pix2pix architecture. For training and testing the algorithm, a unique dataset was built from scratch, consisting of 15,268 aerial images segmented into three classes. The images correspond to beaches in the cities of Mahahual and Puerto Morelos, located in Quintana Roo, Mexico. To analyze the results, the fβ-score metric was used. The results for the Sargassum class indicate a balance between false positives and false negatives, with a slight bias towards false negatives, which means that the algorithm tends to underestimate the Sargassum pixels in the images. To estimate the confidence intervals within which the algorithm performs best, the f0.5-score results were resampled by bootstrapping, considering all classes and considering only the Sargassum class. From this, we found that the algorithm performs better when segmenting Sargassum on sand. Finally, maps showing the Sargassum coverage area along the beach were produced to complement these results and provide insight into the study area.
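The fβ-score evaluation with bootstrap confidence intervals can be sketched as follows for the binary Sargassum-vs-rest case; the synthetic pixel labels, agreement rate, and number of bootstrap resamples are illustrative assumptions.

    import numpy as np
    from sklearn.metrics import fbeta_score

    # stand-in per-pixel labels: Sargassum (1) vs. everything else (0)
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 10000)
    y_pred = np.where(rng.random(10000) < 0.9, y_true, 1 - y_true)  # ~90% agreement

    point_estimate = fbeta_score(y_true, y_pred, beta=0.5)

    # bootstrap confidence interval for the f0.5-score
    scores = []
    for _ in range(1000):
        idx = rng.integers(0, len(y_true), len(y_true))   # resample with replacement
        scores.append(fbeta_score(y_true[idx], y_pred[idx], beta=0.5))
    low, high = np.percentile(scores, [2.5, 97.5])
    print(f"f0.5 = {point_estimate:.3f}  (95% CI: {low:.3f}-{high:.3f})")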
Subjects
Deep Learning; Sargassum; Mexico; Algorithms; Environmental Monitoring/methods; Atlantic Ocean; Humans; Satellite Imagery; Conservation of Natural Resources/methods; Beaches
ABSTRACT
The insecticide pyriproxyfen (PPF), commonly used in drinking water, has already been described as a potential neurotoxic agent in non-target organisms, particularly during embryonic development. Consequently, exposure to PPF can lead to congenital anomalies in the central nervous system. Therefore, understanding the impact of this insecticide on developing neural cells is a relevant concern that requires attention. Thus, this study aimed to investigate the effects of PPF on the proliferation, differentiation, migration, and death of neural cells by comparing embryos that develop exencephaly with normal embryos after exposure to this insecticide. Chicken embryos, used as a study model, were exposed to concentrations of 0.01 and 10 mg/L PPF on embryonic day E1 and analyzed on embryonic day E10. Exposed embryos received 50 µL of PPF diluted in vehicle solution, and control embryos received exclusively 50 µL of vehicle solution. After exposure, embryos were categorized into control embryos, embryos with exencephaly exposed to PPF, and embryos without exencephaly exposed to PPF. The results showed that although the impact differed between the forebrain and midbrain, both brain vesicles were affected by PPF exposure, and this was observed in embryos with and without exencephaly. The most evident changes observed in embryos with exencephaly were DNA damage accompanied by alterations in cell proliferation, increased apoptosis, and reduced neural differentiation and migration. Embryos without exencephaly showed DNA damage and reduced cell proliferation and migration. These cellular events directly interfered with the density and thickness of the neural cell layers. Together, these results suggest that PPF exposure causes cellular damage during neurogenesis, regardless of whether or not the embryos display normal external morphology. This nuanced understanding provides important insights into the neurotoxicity of PPF and its potential effects on events inherent to neurogenesis.
ABSTRACT
The electrical activity of the brain, characterized by its frequency components, reflects a complex interplay between periodic (oscillatory) and aperiodic components. These components are associated with various neurophysiological processes, such as the excitation-inhibition balance (aperiodic activity) or interregional communication (oscillatory activity). However, we do not fully understand whether these components are truly independent or whether different neuromodulators affect them in different ways. The dopaminergic system plays a critical role in cognition and motivation and is a potential modulator of these power spectrum components. To improve our understanding of these questions, we investigated the differential effects of this system on these components using electrocorticogram recordings in cats, which show clear oscillations and aperiodic 1/f activity. Specifically, we focused on the effects of haloperidol (a D2 receptor antagonist) on oscillatory and aperiodic dynamics during wakefulness and sleep. By parameterizing the power spectrum into these two components, our findings reveal a robust modulation of oscillatory activity by the D2 receptor across the brain. Surprisingly, aperiodic activity was not significantly affected and exhibited inconsistent changes across the brain. This suggests a nuanced interplay between neuromodulation and the distinct components of brain oscillations, providing insights into the selective regulation of oscillatory dynamics in awake states.
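Dedicated spectral-parameterization tools are normally used for this decomposition; as a minimal illustration of the idea only, the sketch below fits the aperiodic component as a straight line in log-log space and treats the residual peaks as the oscillatory component. The synthetic signal, sampling rate, and fitting band are assumptions, not the study's recordings or method.

    import numpy as np
    from scipy.signal import welch

    # stand-in ECoG trace: 1/f-like noise plus a 10 Hz oscillation
    fs = 512
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    signal = np.cumsum(rng.standard_normal(t.size)) * 0.05 + np.sin(2 * np.pi * 10 * t)

    freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)
    band = (freqs >= 1) & (freqs <= 45)

    # aperiodic component: linear fit in log-log space (slope ~ 1/f exponent);
    # residual power above the fit captures the oscillatory (periodic) component
    slope, intercept = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    aperiodic = 10 ** (intercept + slope * np.log10(freqs[band]))
    oscillatory_residual = np.log10(psd[band]) - np.log10(aperiodic)
    print("estimated aperiodic exponent:", -slope)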
Subjects
Brain; Haloperidol; Sleep; Wakefulness; Wakefulness/drug effects; Wakefulness/physiology; Animals; Haloperidol/pharmacology; Sleep/drug effects; Sleep/physiology; Cats; Brain/drug effects; Brain/physiology; Male; Brain Waves/drug effects; Brain Waves/physiology; Electrocorticography/drug effects; Dopamine Antagonists/pharmacology
ABSTRACT
Multiple sclerosis is a chronic inflammatory disease of the central nervous system characterized by autoimmune destruction of the myelin sheath, leading to irreversible and progressive functional deficits in patients. Pre-clinical studies involving the use of neural stem cells (NSCs) have already demonstrated their potential in neuronal regeneration and remyelination. However, the exclusive application of cell therapy has not proved sufficient to achieve satisfactory therapeutic levels. Recognizing these limitations, there is a need to combine cell therapy with other adjuvant protocols. In this context, extracellular vesicles (EVs) can contribute to intercellular communication, stimulating the production of proteins and lipids associated with remyelination and providing trophic support to axons. This study aimed to evaluate the therapeutic efficacy of the combination of NSCs and EVs derived from oligodendrocyte precursor cells (OPCs) in an animal model of multiple sclerosis. OPCs were differentiated from NSCs and had their identity confirmed by gene expression analysis and immunocytochemistry. Exosomes were isolated by differential ultracentrifugation and characterized by Western blotting, transmission electron microscopy, and nanoparticle tracking analysis. C57BL/6 mice induced with experimental autoimmune encephalomyelitis (EAE) were grouped into control, NSC-treated, OPC-derived EV-treated, and combination-treated groups. The treatments were evaluated clinically using scores and body weight, microscopically using immunohistochemistry, and immunologically by flow cytometry. The animals showed significant clinical improvement and weight gain with the treatments. However, only the treatments involving EVs led to immune modulation, shifting the profile from Th1 to Th2 lymphocytes. Analysis fifteen days after treatment revealed a reduction in reactive microgliosis and astrogliosis in the groups treated with EVs; however, there was no reduction in demyelination. These results indicate the potential therapeutic use of OPC-derived EVs to attenuate inflammation and promote recovery in EAE, especially when combined with cell therapy.