Results 1 - 12 of 12
1.
Microsc Res Tech; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38864463

ABSTRACT

The impact of Artificial Intelligence (AI) is rapidly expanding, revolutionizing both science and society. It is applied to practically all areas of life, science, and technology, including materials science, which continuously requires novel tools for effective materials characterization. One of the widely used techniques is scanning probe microscopy (SPM). SPM has fundamentally changed materials engineering, biology, and chemistry by providing tools for atomic-precision surface mapping. Despite its many advantages, it also has some drawbacks, such as long scanning times or the possibility of damaging soft-surface materials. In this paper, we focus on the potential for supporting SPM-based measurements, with an emphasis on the application of AI-based algorithms, especially Machine Learning (ML) algorithms, as well as quantum computing (QC). It has been found that AI can be helpful in automating experimental processes in routine operations, algorithmically searching for optimal sample regions, and elucidating structure-property relationships. Thus, it contributes to increasing the efficiency and accuracy of scanning probe nanoscopy. Moreover, the combination of AI-based algorithms and QC may have enormous potential to enhance the practical application of SPM. The limitations of the AI-QC-based approach are also discussed. Finally, we outline a research path for improving AI-QC-powered SPM. RESEARCH HIGHLIGHTS: Artificial intelligence and quantum computing as support for scanning probe microscopy. The analysis indicates a research gap in the field of scanning probe microscopy. The research aims to shed light on AI-QC-powered scanning probe microscopy.
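
As a concrete illustration of the "algorithmically searching for optimal sample regions" idea mentioned above, the sketch below ranks tiles of a coarse pre-scan by surface roughness so that slow high-resolution scanning can be focused on the most informative areas. This is a minimal, assumption-laden example written for this summary, not code from the reviewed paper; the tile size, the roughness criterion, and the synthetic height map are placeholders.

```python
# Illustrative sketch (not code from the reviewed paper): rank tiles of a
# coarse SPM pre-scan by RMS roughness so that the slow high-resolution scan
# is spent only on the most "interesting" regions. Tile size, the roughness
# criterion, and the synthetic height map are assumed placeholders.
import numpy as np

def rank_regions(coarse_scan: np.ndarray, window: int = 16, top_k: int = 3):
    """Split a height map into window x window tiles and return the top_k
    tile corners ranked by local RMS roughness (descending)."""
    h, w = coarse_scan.shape
    scores = []
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            tile = coarse_scan[i:i + window, j:j + window]
            scores.append(((i, j), float(np.std(tile))))
    scores.sort(key=lambda item: item[1], reverse=True)
    return scores[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_scan = rng.normal(size=(128, 128))   # stand-in for a coarse pre-scan
    fake_scan[32:48, 64:80] *= 5.0            # a patch with larger height variations
    for (i, j), score in rank_regions(fake_scan):
        print(f"tile at ({i}, {j}): RMS roughness = {score:.2f}")
```

In an AI-assisted workflow, a trained model could take the place of this simple variance heuristic.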

2.
Chemosphere; 360: 142347, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759802

ABSTRACT

Textile and cosmetic industries generate large amounts of dye effluents requiring treatment before discharge. This wastewater contains high levels of reactive dyes, poorly biodegradable or non-biodegradable materials, and chemical residues. Technically, dye wastewater is characterised by high chemical and biological oxygen demand. Biological, physical, and pressure-driven membrane processes have been extensively used in textile wastewater treatment plants. However, these technologies suffer from process complexity and are often costly, while the more cost-effective biochemical and physical treatment processes do not achieve sufficient efficiency. Membrane distillation (MD) has emerged as a promising technology that addresses the challenges faced by pressure-driven membrane processes. To ensure high cost-effectiveness, MD can be driven by solar energy or low-grade waste heat. Herein, MD purification of dye wastewater is discussed comprehensively yet concisely, covering research advances in MD processes for the removal of dyes from industrial effluents. Challenges faced by this process, with a specific focus on fouling, are also reviewed. The current literature has mainly tested MD setups at the laboratory scale, suggesting a clear need for further optimization of membrane and module designs in the near future, especially for textile wastewater treatment. Customized high-porosity hydrophobic membranes with appropriate thickness and module configuration are needed to reduce concentration and temperature polarization (CP and TP). Energy loss should also be minimized while increasing dye rejection and permeate flux. Although laboratory experiments remain pivotal in optimizing the MD process for treating dye wastewater, their time-intensive nature poses a challenge. Given the multitude of parameters involved in MD process optimization, artificial intelligence (AI) methodologies present a promising avenue for assistance. AI-driven algorithms thus have the potential to enhance overall process efficiency, cut down on time, fine-tune parameters, and drive cost reductions. However, achieving an optimal balance between efficiency enhancements and financial outlays is a complex process. Finally, this paper suggests a research direction for the development of effective synthetic and natural dye removal from industrially discharged wastewater.
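
For readers unfamiliar with the two figures of merit mentioned above, the short sketch below computes permeate flux and dye rejection from generic, hypothetical measurements; the numbers and function names are illustrative only and are not taken from the review.

```python
# Minimal worked example with hypothetical numbers (not data from the review):
# the two figures of merit mentioned above, permeate flux J = m / (A * t) and
# dye rejection R = 1 - Cp / Cf.
def permeate_flux(mass_kg: float, area_m2: float, time_h: float) -> float:
    """Permeate flux in kg m^-2 h^-1."""
    return mass_kg / (area_m2 * time_h)

def dye_rejection(c_feed_mg_l: float, c_permeate_mg_l: float) -> float:
    """Dye rejection as a fraction between 0 and 1."""
    return 1.0 - c_permeate_mg_l / c_feed_mg_l

if __name__ == "__main__":
    j = permeate_flux(mass_kg=0.45, area_m2=0.01, time_h=1.0)  # hypothetical run
    r = dye_rejection(c_feed_mg_l=50.0, c_permeate_mg_l=0.4)
    print(f"J = {j:.1f} kg m^-2 h^-1, dye rejection = {100 * r:.1f} %")
```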


Subjects
Coloring Agents; Distillation; Membranes, Artificial; Textile Industry; Waste Disposal, Fluid; Wastewater; Water Pollutants, Chemical; Wastewater/chemistry; Distillation/methods; Coloring Agents/chemistry; Coloring Agents/isolation & purification; Waste Disposal, Fluid/methods; Water Pollutants, Chemical/chemistry; Water Pollutants, Chemical/analysis; Water Purification/methods; Industrial Waste
3.
Cardiol J; 31(2): 321-341, 2024.
Article in English | MEDLINE | ID: mdl-38247435

ABSTRACT

This paper aims to thoroughly discuss the impact of artificial intelligence (AI) on clinical practice in interventional cardiology (IC), with special recognition of its most recent advancements. Recent years have been exceptionally abundant in advancements in computational tools, including the development of AI. The application of AI is currently in its early stages; nevertheless, new technologies have proven to be a promising concept, particularly in IC, where they show great impact on patient safety, risk stratification, and outcomes throughout the therapeutic process. The primary goal is to achieve the integration of multiple cardiac imaging modalities, establish online decision support systems and platforms based on augmented and/or virtual reality, and finally to create automated medical systems providing electronic health data on patients. In a simplified way, two main areas of AI utilization in IC may be distinguished, namely virtual and physical. Numerous studies have provided data on AI utilization for the automated interpretation and analysis of various cardiac modalities, including electrocardiography, echocardiography, angiography, cardiac magnetic resonance imaging, and computed tomography, as well as data collected during robotic-assisted percutaneous coronary intervention procedures.


Subjects
Artificial Intelligence; Cardiology; Humans; Cardiology/trends; Cardiovascular Diseases/therapy; Cardiovascular Diseases/diagnosis; Cardiovascular Diseases/diagnostic imaging; Percutaneous Coronary Intervention/methods
4.
Environ Sci Pollut Res Int; 30(22): 62689-62703, 2023 May.
Article in English | MEDLINE | ID: mdl-36944836

ABSTRACT

In this paper, green nanocomposites based on biomass and superparamagnetic nanoparticles were synthesized and used as adsorbents to remove methylene blue (MB) from water with magnetic separation. The adsorbents were synthesized through the wet co-precipitation technique, in which iron-oxide nanoparticles coated cores based on coffee, cellulose, and red volcanic algae waste. The procedure resulted in materials that could be easily separated from aqueous solutions with magnets. The morphology and chemical composition of the nanocomposites were characterized by SEM, FT-IR, and XPS methods. The adsorption studies of MB removal with UV-vis spectrometry showed that the adsorption performance of the prepared materials strongly depended on their morphology and the type of the organic adsorbent. Adsorption was most effective at neutral pH, with only a slight effect of ionic strength. MB removal follows pseudo-second-order kinetics for all adsorbents. The maximal adsorption capacities for coffee@Fe3O4-2, cellulose@Fe3O4-1, and algae@Fe3O4-1 are 38.23 mg g⁻¹, 41.61 mg g⁻¹, and 48.41 mg g⁻¹, respectively. The mechanism of MB adsorption follows the Langmuir model for coffee@Fe3O4 and cellulose@Fe3O4, while for algae@Fe3O4 the process fits the Redlich-Peterson model. The removal efficiency analysis based on UV-vis absorption spectra revealed that the adsorption effectiveness of the nanocomposites increased as follows: coffee@Fe3O4-2 > cellulose@Fe3O4-1 > algae@Fe3O4-1, demonstrating an MB removal efficiency of up to 90%.
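
As a worked illustration of the isotherm analysis mentioned above, the sketch below fits the Langmuir model to synthetic equilibrium data; the data points, initial guesses, and fitted values are assumptions for demonstration and are not the study's measurements.

```python
# Hedged sketch with synthetic data (not the study's measurements): fitting the
# Langmuir isotherm q_e = q_max * K_L * C_e / (1 + K_L * C_e), the model the
# abstract reports for the coffee@Fe3O4 and cellulose@Fe3O4 adsorbents.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Equilibrium uptake q_e (mg/g) as a function of C_e (mg/L)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Synthetic equilibrium data, loosely consistent with q_max around 40 mg/g.
c_e = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
q_e = langmuir(c_e, 40.0, 0.15) + np.random.default_rng(1).normal(0.0, 0.5, c_e.size)

(q_max_fit, k_l_fit), _ = curve_fit(langmuir, c_e, q_e, p0=(30.0, 0.1))
print(f"fitted q_max ~ {q_max_fit:.1f} mg/g, K_L ~ {k_l_fit:.3f} L/mg")
```

Pseudo-second-order kinetics could be fitted the same way by swapping in the standard integrated form q(t) = q_e² k₂ t / (1 + q_e k₂ t).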


Subjects
Magnetite Nanoparticles; Rhodophyta; Water Pollutants, Chemical; Methylene Blue/chemistry; Coffee; Biomass; Cellulose; Spectroscopy, Fourier Transform Infrared; Adsorption; Water Pollutants, Chemical/chemistry; Kinetics
5.
Molecules; 28(6), 2023 Mar 11.
Article in English | MEDLINE | ID: mdl-36985530

ABSTRACT

The rapidly growing production and use of lithium-ion batteries (LIBs) dramatically increases the amount of harmful waste. Consequently, LIB waste management processes that take into account reliability, efficiency, and sustainability criteria have become a hot issue in the context of environmental protection as well as the scarcity of metal resources. In this paper, we propose for the first time a functional material, a magnetorheological fluid (MRF), prepared from LIB-based liquid waste containing heavy metal ions. First, the spent battery waste powder was treated by acid leaching, and the post-treatment acid-leaching solution (ALS) contained heavy metal ions, including cobalt. Then, the ALS was used during wet co-precipitation to obtain cobalt-doped superparamagnetic iron oxide nanoparticles (SPIONs), and as a result the harmful liquid waste was purified of cobalt. The obtained nanoparticles were characterized by SEM, TEM, XPS, and magnetometry. Subsequently, the Co-doped superparamagnetic nanoparticles, with an average diameter of 15 nm and a saturation magnetization of about 91 emu g⁻¹, were used to prepare an MRF whose viscosity increases by about 300% in the presence of a 100 mT magnetic field. We propose a facile and cost-effective way to utilize harmful ALS waste in the preparation of superparamagnetic particles for use in a magnetorheological fluid. This work describes for the first time a second life for battery waste in an MRF and a facile way to remove harmful ingredients from the solutions obtained after the acid leaching of LIBs, as an effective end-of-life option for hydrometallurgical waste utilization.

6.
Head Face Med; 18(1): 12, 2022 Apr 05.
Article in English | MEDLINE | ID: mdl-35382839

ABSTRACT

BACKGROUND: Augmented Reality (AR) blends digital information with the real world. Thanks to cameras, sensors, and displays, it can supplement the physical world with holographic images. Nowadays, the applications of AR range from navigated surgery to vehicle navigation. DEVELOPMENT: The purpose of this feasibility study was to develop an AR holographic system implementing Vertucci's classification of dental root morphology to facilitate the study of tooth anatomy. It was tailored to run on AR HoloLens 2 (Microsoft) glasses. The 3D tooth models were created in Autodesk Maya and exported to Unity software. The holograms of dental roots can be projected in the natural setting of the dental office. The application allowed 3D objects to be displayed in such a way that they could be rotated, zoomed in/out, and penetrated. The advantage of the proposed approach was that students could learn the 3D internal anatomy of the teeth without environmental visual restrictions. CONCLUSIONS: It is feasible to visualize internal dental root anatomy with an AR holographic system. AR holograms seem to be an attractive adjunct for learning root anatomy.


Subjects
Augmented Reality; Holography; Tooth; Feasibility Studies; Holography/methods; Humans; Imaging, Three-Dimensional; Technology
7.
Entropy (Basel); 23(1), 2021 Jan 10.
Article in English | MEDLINE | ID: mdl-33435243

ABSTRACT

In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied the relations between the Information Transmission Rate (ITR) of spikes and their correlations and frequencies. Now, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a measure of spike-train fluctuations, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between the ITR and signal fluctuations strongly depends on the parameter s, which is a sum of transition probabilities from the no-spike state to the spike state. The estimate of the Information Transmission Rate was found by expressions depending on the values of the signal fluctuations and the parameter s. It turned out that for smaller s < 1, the quotient ITR/σ has a maximum and can tend to zero depending on the transition probabilities, while for 1
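
A minimal numerical sketch of the quantities discussed above, under my own reading of the setup rather than the author's code: for a two-state Markov spike source, the ITR is taken as the entropy rate in bits per symbol, σ as the standard deviation of the binary spike variable, and s as the sum of the transition probabilities used below.

```python
# Sketch under stated assumptions (my own reading of the setup, not the
# author's code): for a two-state Markov spike source with transition
# probabilities p01 (no-spike -> spike) and p10 (spike -> no-spike), take the
# ITR as the entropy rate in bits per symbol and sigma as the standard
# deviation of the binary spike variable; s = p01 + p10.
import numpy as np

def binary_entropy(x: float) -> float:
    if x in (0.0, 1.0):
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def markov_itr_sigma(p01: float, p10: float):
    pi1 = p01 / (p01 + p10)             # stationary probability of a spike
    itr = (1.0 - pi1) * binary_entropy(p01) + pi1 * binary_entropy(p10)
    sigma = np.sqrt(pi1 * (1.0 - pi1))  # fluctuation around the mean spike rate
    return itr, sigma

for p01, p10 in [(0.1, 0.3), (0.4, 0.4), (0.7, 0.9)]:
    itr, sigma = markov_itr_sigma(p01, p10)
    print(f"s = {p01 + p10:.1f}: ITR = {itr:.3f} bit/symbol, "
          f"sigma = {sigma:.3f}, ITR/sigma = {itr / sigma:.3f}")
```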

8.
Cardiol J; 28(1): 23-33, 2021.
Article in English | MEDLINE | ID: mdl-32789838

ABSTRACT

BACKGROUND: The General Data Protection Regulation (GDPR) provides rules according to which data should be managed and processed in a secure way appropriate to patient requirements and security. Currently, everyone in Europe is covered by the GDPR. Thus, medical practice also requires access to patient data in a safe and secure way. METHODS: Holographic technology allows users to see everything visible on a computer screen in a new and less restricted way, i.e., without the limitations of traditional computers and screens. RESULTS: In this study, a three-dimensional holographic doctors' assistant is designed and implemented in a way that meets the GDPR requirements. The HoloView application, which is tailored to run on Microsoft HoloLens, is proposed to allow display of and access to the personal data and so-called sensitive information of individual patients without the risk that it will be presented to unauthorized persons. CONCLUSIONS: To enhance the user experience and remain consistent with the GDPR, a holographic desk is proposed that displays patient data and sensitive information only in front of the doctor's eyes using mixed reality glasses. Last but not least, it reduces the infection risk for staff during the COVID-19 pandemic, allowing medical care to be carried out by as few doctors as possible.


Subjects
COVID-19/epidemiology; Computer Security/statistics & numerical data; Delivery of Health Care/methods; Pandemics; SARS-CoV-2; Virtual Reality; Europe/epidemiology; Humans
9.
Comput Methods Programs Biomed; 182: 105052, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31476448

ABSTRACT

BACKGROUND AND OBJECTIVE: People suffer from sleep disorders caused by work-related stress, an irregular lifestyle, or mental health problems. Therefore, the development of effective tools to diagnose sleep disorders is important. Recently, Information Theory has been exploited to analyze biomedical signals. We propose an efficient classification method for sleep anomalies that applies entropy-estimating algorithms to encoded ECG signals from patients suffering from Sleep-Related Breathing Disorders (SRBD). METHODS: First, the ECGs were discretized using an encoding method that captures the variability of the biosignals, taking into account the oscillations of the ECG measurements around the signal averages. Next, the Lempel-Ziv complexity algorithm (LZ), which measures the rate at which patterns are generated, was applied to estimate the entropy of the encoded signals. Then, the optimal encoding parameters, which allow normal versus abnormal events during sleep to be distinguished with high sensitivity and specificity, were determined numerically. Simultaneously, the subjects' states were identified using the acoustic signal of breathing recorded during sleep over the same period. RESULTS: Random sequences show a normalized LZ close to 1, while for more regular sequences it is closer to 0. Our calculations show that SRBD patients have a normalized LZ of around 0.32 (on average), while the control group has a complexity of around 0.85. The results obtained on a public database are similar, i.e., LZ of around 0.48 for SRBDs and 0.7 for the control group. These results show that the signals within the control group are more random, whereas for the SRBD group the ECGs are more deterministic. This finding remained valid both for signals acquired during the whole duration of the experiment and when shorter time intervals were considered. The proposed classifier provided sleep disorder diagnostics with a sensitivity of 93.75% and a specificity of 73.00%. To validate our method, we also considered different variants of the training and testing sets. In all cases, the optimal encoding parameter, sensitivity, and specificity values were similar to the results above. CONCLUSIONS: Our pilot study suggests that an LZ-based algorithm could be used as a clinical tool to classify sleep disorders, since the LZ complexities for SRBD-positive versus healthy individuals show a significant difference. Moreover, the normalized LZ complexity changes are related to the snoring level. This study also indicates that the LZ technique is able to detect sleep abnormalities at an early stage of the disorder.
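
The sketch below shows one common way to compute a normalized Lempel-Ziv (LZ76) complexity for a binary-encoded signal, in the spirit of the method described above. The threshold encoding (1 if a sample exceeds the signal mean, 0 otherwise) and the synthetic test signals are simplifying assumptions, not the paper's encoding scheme or ECG data.

```python
# Hedged sketch: a normalized Lempel-Ziv (LZ76) complexity estimate for a
# binary-encoded signal. The threshold encoding (1 if the sample is above the
# signal mean, else 0) is a simplified stand-in for the paper's encoding
# scheme, and the synthetic signals below are not real ECG recordings.
import numpy as np

def lz76_phrases(seq: str) -> int:
    """Count phrases in the LZ76 exhaustive parsing of a symbol string."""
    n, i, phrases = len(seq), 0, 0
    while i < n:
        j = i + 1
        # extend the current phrase while it already occurs earlier on
        while j <= n and seq[i:j] in seq[:j - 1]:
            j += 1
        phrases += 1
        i = j
    return phrases

def normalized_lz(binary_seq: str) -> float:
    """Normalize by n / log2(n), the asymptotic maximum for binary sequences."""
    n = len(binary_seq)
    return lz76_phrases(binary_seq) * np.log2(n) / n

def encode(signal: np.ndarray) -> str:
    """Binary encoding: 1 where the signal exceeds its mean, 0 otherwise."""
    m = signal.mean()
    return "".join("1" if x > m else "0" for x in signal)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    irregular = rng.normal(size=4000)                    # noise-like signal
    regular = np.sin(np.linspace(0, 40 * np.pi, 4000))   # periodic signal
    print("irregular:", round(normalized_lz(encode(irregular)), 2))
    print("regular:  ", round(normalized_lz(encode(regular)), 2))
```

Consistent with the behaviour quoted above, the noise-like signal yields a normalized complexity near 1 and the strongly periodic one a value near 0.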


Subjects
Electrocardiography/methods; Sleep Wake Disorders/physiopathology; Algorithms; Humans; Signal Processing, Computer-Assisted; Sleep Wake Disorders/classification
10.
Biol Cybern; 113(4): 453-464, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31243531

ABSTRACT

To understand how anatomy and physiology allow an organism to perform its function, it is important to know how information that is transmitted by spikes in the brain is received and encoded. A natural question is whether the spike rate alone encodes the information about a stimulus (rate code), or whether additional information is contained in the temporal pattern of the spikes (temporal code). Here we address this question using data from the cat Lateral Geniculate Nucleus (LGN), which is the visual portion of the thalamus, through which visual information from the retina is communicated to the visual cortex. We analyzed the responses of LGN neurons to spatially homogeneous spots of various sizes with temporally random luminance modulation. We compared the Firing Rate with the Shannon Information Transmission Rate, which quantifies the information contained in the temporal relationships between spikes. We found that the behavior of these two rates can differ quantitatively. This suggests that the energy used for spiking does not translate directly into the information to be transmitted. We also compared Firing Rates with Information Rates for X-ON and X-OFF cells. We found that for X-ON cells the Firing Rate and Information Rate often behave in completely different ways, while for X-OFF cells these rates are much more highly correlated. Our results suggest that X-ON cells employ a more efficient "temporal code", while X-OFF cells use a straightforward "rate code", which is more reliable and is correlated with energy consumption.
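
To make the rate-versus-information contrast above concrete, the sketch below estimates a firing rate and a plug-in word-entropy rate from two synthetic binned spike trains that share the same mean rate but differ in temporal structure. The bin size, word length, synthetic trains, and the absence of a noise-entropy correction are all simplifying assumptions; this is not the analysis pipeline used in the paper.

```python
# Illustrative sketch, not the authors' analysis pipeline: estimate a firing
# rate and a plug-in word-entropy rate (bits per bin) from binned spike trains,
# to show that two trains with the same mean rate can carry different amounts
# of temporal structure.
from collections import Counter
import numpy as np

def entropy_rate(bits: np.ndarray, word_len: int = 8) -> float:
    """Plug-in entropy of overlapping words divided by word length (bits/bin)."""
    words = ["".join(map(str, bits[i:i + word_len]))
             for i in range(len(bits) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / word_len)

rng = np.random.default_rng(2)
n, rate = 20000, 0.2                              # 20% of bins hold a spike
irregular = (rng.random(n) < rate).astype(int)    # no temporal structure
regular = np.zeros(n, dtype=int)                  # same mean rate, bursty pattern
regular[::10] = 1
regular[1::10] = 1
for name, train in [("irregular", irregular), ("regular/bursty", regular)]:
    print(f"{name}: firing rate = {train.mean():.2f} spikes/bin, "
          f"entropy rate = {entropy_rate(train):.2f} bits/bin")
```

With these synthetic trains, the irregular one yields a clearly higher entropy rate despite the identical firing rate.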


Subjects
Action Potentials/physiology; Geniculate Bodies/cytology; Geniculate Bodies/physiology; Mental Processes/physiology; Neurons/physiology; Animals; Cats; Photic Stimulation/methods; Visual Cortex/cytology; Visual Cortex/physiology; Visual Pathways/cytology; Visual Pathways/physiology
11.
Int J Neural Syst; 29(8): 1950003, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30841769

ABSTRACT

The nature of neural codes is central to neuroscience. Do neurons encode information through relatively slow changes in the firing rates of individual spikes (rate code) or by the precise timing of every spike (temporal code)? Here we compare the loss of information due to correlations for these two possible neural codes. The essence of Shannon's definition of information is to combine information with uncertainty: the higher the uncertainty of a given event, the more information is conveyed by that event. Correlations can reduce uncertainty or the amount of information, but by how much? In this paper we address this question by a direct comparison of the information per symbol conveyed by the words coming from a binary Markov source (temporal code) with the information per symbol coming from the corresponding Bernoulli source (uncorrelated, rate code). In a previous paper we found that a crucial role in the relation between information transmission rates (ITRs) and firing rates is played by a parameter s, which is the sum of transition probabilities from the no-spike state to the spike state and vice versa. We found that in this case too a crucial role is played by the same parameter s. We calculated the maximal and minimal bounds of the quotient of ITRs for these sources. Next, making use of the entropy grouping axiom, we determined the loss of information in a Markov source compared with the information in the corresponding Bernoulli source for a given word length. Our results show that in the case of correlated signals the loss of information is relatively small, and thus temporal codes, which are more energetically efficient, can replace rate codes effectively. These results were confirmed by experiments.
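
A compact numerical companion to the comparison described above, using my own formulation and assumed transition probabilities rather than the authors' code: the entropy per symbol of a binary Markov source (correlated, temporal code) versus the corresponding Bernoulli source with the same stationary firing probability (uncorrelated, rate code).

```python
# Compact numerical companion (my own formulation, with assumed transition
# probabilities, not the authors' code): entropy per symbol of a binary Markov
# source with transition probabilities p01, p10 versus the corresponding
# Bernoulli source that spikes with the same stationary probability.
import numpy as np

def h2(x: float) -> float:
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def markov_vs_bernoulli(p01: float, p10: float):
    pi1 = p01 / (p01 + p10)                          # stationary spike probability
    h_markov = (1 - pi1) * h2(p01) + pi1 * h2(p10)   # entropy rate, bits/symbol
    h_bernoulli = h2(pi1)                            # uncorrelated source
    return h_markov, h_bernoulli

for p01, p10 in [(0.1, 0.4), (0.3, 0.3), (0.2, 0.9)]:
    hm, hb = markov_vs_bernoulli(p01, p10)
    print(f"s = {p01 + p10:.1f}: Markov = {hm:.3f}, Bernoulli = {hb:.3f}, "
          f"loss = {hb - hm:.3f} bit/symbol")
```

The difference between the two values is the information loss attributable to correlations, which stays relatively small for these parameter choices, in line with the conclusion above.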


Subjects
Action Potentials; Information Theory; Markov Chains; Models, Neurological; Uncertainty
12.
BMC Neurosci; 16: 32, 2015 May 19.
Article in English | MEDLINE | ID: mdl-25986973

ABSTRACT

BACKGROUND: Explaining how brain processing is so fast remains an open problem (van Hemmen JL, Sejnowski T., 2004). Thus, the analysis of neural transmission processes (Shannon CE, Weaver W., 1963) basically focuses on searching for effective encoding and decoding schemes. According to Shannon's fundamental theorem, mutual information plays a crucial role in characterizing the efficiency of communication channels. It is well known that this efficiency is determined by the channel capacity, which is the maximal mutual information between input and output signals. On the other hand, intuitively speaking, when input and output signals are more correlated, the transmission should be more efficient. A natural question arises about the relation between mutual information and correlation. We analyze the relation between these quantities using the binary representation of signals, which is the most common approach taken in studying the neuronal processes of the brain. RESULTS: We present binary communication channels for which mutual information and correlation coefficients behave differently, both quantitatively and qualitatively. Despite this difference in behavior, we show that the non-correlation of binary signals implies their independence, in contrast to the case for general types of signals. CONCLUSIONS: Our research shows that mutual information cannot be replaced by sheer correlations. Our results indicate that neuronal encoding has a more complicated nature, which cannot be captured by straightforward correlations between input and output signals, since mutual information takes into account the structure and patterns of the signals.
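
The toy computation below illustrates the point numerically with 2x2 joint distributions of my own choosing (they are not the channels analyzed in the paper): mutual information and the Pearson correlation coefficient are computed for three binary channels, including an independent one for which both quantities vanish.

```python
# Toy numerical companion (the joint distributions are my own examples, not the
# channels analyzed in the paper): mutual information and Pearson correlation
# for 2x2 joint distributions p[x, y] of binary input X and output Y.
import numpy as np

def mutual_information(p: np.ndarray) -> float:
    """I(X;Y) in bits for a 2x2 joint probability matrix p[x, y]."""
    px, py = p.sum(axis=1), p.sum(axis=0)
    return float(sum(p[x, y] * np.log2(p[x, y] / (px[x] * py[y]))
                     for x in range(2) for y in range(2) if p[x, y] > 0))

def correlation(p: np.ndarray) -> float:
    """Pearson correlation coefficient of the binary variables X and Y."""
    px1, py1 = p.sum(axis=1)[1], p.sum(axis=0)[1]     # P(X=1), P(Y=1)
    cov = p[1, 1] - px1 * py1
    return float(cov / np.sqrt(px1 * (1 - px1) * py1 * (1 - py1)))

channels = {
    "noisy copy":  np.array([[0.45, 0.05], [0.05, 0.45]]),
    "sparse":      np.array([[0.89, 0.01], [0.01, 0.09]]),
    "independent": np.outer([0.6, 0.4], [0.3, 0.7]),
}
for name, p in channels.items():
    print(f"{name:12s} I = {mutual_information(p):.3f} bit, "
          f"rho = {correlation(p):.3f}")
```

In this toy setting the sparser channel has the larger correlation coefficient but the smaller mutual information, while the independent channel has both equal to zero, matching the observation that for binary signals non-correlation implies independence.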


Subjects
Communication; Information Theory; Models, Neurological; Action Potentials; Algorithms; Brain/physiology; Neurons/physiology