Results 1 - 20 of 316,867
1.
Rev Esp Patol; 57(2): 77-83, Apr-Jun 2024. tab, ilus
Article in Spanish | IBECS | ID: ibc-232410

ABSTRACT



Introduction: In a pathological anatomy service, the workload in medical time is analyzed based on the complexity of the samples received and its distribution among pathologists is assessed, presenting a new computer algorithm that favors an equitable distribution. Methods: Following the second edition of the Spanish guidelines for the estimation of workload in cytopathology and histopathology (medical time) according to the Spanish Pathology Society-International Academy of Pathology (SEAP-IAP) catalog of samples and procedures, we determined the workload units (UCL) per pathologist and the overall UCL of the service, the average workload of the service (MU factor), the time dedicated by each pathologist to healthcare activity and the optimal number of pathologists according to the workload of the service. Results: We determined 12 197 total annual UCL for the chief pathologist, as well as 14 702 and 13 842 UCL for associate pathologists, with an overall of 40 742 UCL for the whole service. The calculated MU factor is 4.97. The chief pathologist devoted 72.25% of his working day to healthcare activity while associate pathologists dedicated 87.09% and 82.01% of their working hours. The optimal number of pathologists for the service is found to be 3.55. Conclusions: The results demonstrate medical work overload and a non-equitable distribution of UCLs among pathologists. We propose a computer algorithm capable of distributing the workload in an equitable manner. It would be associated with the laboratory information system and take into account the type of specimen, its complexity and the dedication of each pathologist to healthcare activity.(AU)
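The abstract does not specify the proposed distribution algorithm; as a purely illustrative sketch (pathologist names, dedication fractions and per-case UCL weights are invented, not taken from the study), a greedy balancer could assign each incoming case to the pathologist with the lowest accumulated UCL normalised by their share of time devoted to diagnostic work:

```python
# Illustrative greedy balancing of workload units (UCL) across pathologists.
# Names, dedication fractions and case weights are invented for the sketch.

def assign_cases(cases, dedication):
    """cases: list of (case_id, ucl_weight) pairs; dedication: fraction of
    working time each pathologist devotes to diagnostic work."""
    load = {name: 0.0 for name in dedication}
    assignment = {}
    for case_id, ucl in sorted(cases, key=lambda c: -c[1]):   # heaviest first
        # pick the pathologist with the lowest time-normalised accumulated UCL
        name = min(load, key=lambda n: load[n] / dedication[n])
        load[name] += ucl
        assignment[case_id] = name
    return assignment, load

cases = list(enumerate([8, 5, 5, 3, 2, 2, 1, 1]))   # (case_id, UCL weight)
assignment, load = assign_cases(cases, {"chief": 0.72, "adj1": 0.87, "adj2": 0.82})
```

A production version attached to the laboratory information system would derive each case's UCL weight from the SEAP-IAP catalogue of samples and procedures rather than from fixed numbers.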


Subjects
Humans, Male, Female, Pathology, Workload, Pathologists, Hospital Pathology Service, Algorithms
2.
PLoS One; 19(5): e0302793, 2024.
Article in English | MEDLINE | ID: mdl-38739601

ABSTRACT

BACKGROUND: In cardiology, cardiac output (CO) is an important parameter for assessing cardiac function. While invasive thermodilution procedures are the gold standard for CO assessment, transthoracic Doppler echocardiography (TTE) has become the established method for routine CO assessment in daily clinical practice. However, a demand persists for non-invasive approaches, including oscillometric pulse wave analysis (PWA), to enhance the accuracy of CO estimation, reduce complications associated with invasive procedures, and facilitate its application in non-intensive care settings. Here, we aimed to compare TTE and the oscillometric PWA algorithm Antares for a non-invasive estimation of CO. METHODS: Non-invasive CO data obtained by two-dimensional TTE were compared with those from an oscillometric blood pressure device (custo med GmbH, Ottobrunn, Germany) using the integrated algorithm Antares (Redwave Medical GmbH, Jena, Germany). In total, 59 patients undergoing elective cardiac catheterization for clinical reasons (71±10 years old, 76% males) were included. Agreement between both CO measures was assessed by Bland-Altman analysis, Student's t-test, and Pearson correlations. RESULTS: The mean difference in CO was 0.04 ± 1.03 l/min (95% confidence interval for the mean difference: -0.23 to 0.30 l/min) for the overall group, with lower and upper limits of agreement at -1.98 and 2.05 l/min, respectively. There was no statistically significant difference in means between both CO measures (P = 0.785). Statistically significant correlations between TTE and Antares CO were observed in the entire cohort (r = 0.705, P<0.001) as well as in female (r = 0.802, P<0.001) and male patients (r = 0.669, P<0.001). CONCLUSIONS: The oscillometric PWA algorithm Antares and established TTE for a non-invasive estimation of CO are highly correlated in male and female patients, with no statistically significant difference between both approaches. Future validation studies of the Antares CO are necessary before a clinical application can be considered.
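The Bland-Altman analysis reported above can be reproduced on any paired measurements; the following sketch (with invented CO pairs, not the study's data) computes the bias and the 95% limits of agreement as mean ± 1.96 × SD of the paired differences:

```python
# Bland-Altman agreement statistics for paired measurements.
# The CO pairs below are invented for illustration, not the study's data.
import statistics

def bland_altman(a, b):
    """Return (bias, lower, upper): mean difference and 95% limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)              # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

tte     = [4.1, 5.0, 3.8, 6.2, 4.7]           # cardiac output, l/min
antares = [4.3, 4.8, 4.0, 6.0, 4.9]
bias, lower, upper = bland_altman(tte, antares)
```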


Subjects
Algorithms, Cardiac Output, Doppler Echocardiography, Pulse Wave Analysis, Humans, Male, Female, Cardiac Output/physiology, Aged, Pulse Wave Analysis/methods, Doppler Echocardiography/methods, Middle Aged, Aged 80 and over, Oscillometry/methods
3.
PLoS One; 19(5): e0303366, 2024.
Article in English | MEDLINE | ID: mdl-38739676

ABSTRACT

This study presents a novel approach to modeling the velocity-time curve in 100m sprinting by integrating machine learning algorithms. It critically addresses the limitations of traditional speed models, which often require extensive and intricate data collection, by proposing a more accessible and accurate method using fewer variables. The research utilized data from various international track events from 1987 to 2019. Two machine learning models, Random Forest (RF) and Neural Network (NN), were employed to predict the velocity-time curve, focusing on the acceleration phase of the sprint. The models were evaluated against the traditional exponential speed model using Mean Squared Error (MSE), with the NN model demonstrating superior performance. Additionally, the study explored the correlation between maximum velocity, the time of maximum velocity occurrence, the duration of the maximum speed phase, and the overall 100m sprint time. The findings indicate a strong negative correlation between maximum velocity and final time, offering new insights into the dynamics of sprinting performance. This research contributes significantly to the field of sports science, particularly in optimizing training and performance analysis in sprinting.


Subjects
Athletic Performance, Machine Learning, Running, Humans, Running/physiology, Athletic Performance/physiology, Neural Networks (Computer), Algorithms, Acceleration
4.
Sci Rep; 14(1): 11233, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755269

ABSTRACT

Automated disease diagnosis and prediction, powered by AI, play a crucial role in enabling medical professionals to deliver effective care to patients. While such predictive tools have been extensively explored in resource-rich languages like English, this manuscript focuses on predicting disease categories automatically from symptoms documented in the Afaan Oromo language, employing various classification algorithms. This study encompasses machine learning techniques such as support vector machines, random forests, logistic regression, and Naïve Bayes, as well as deep learning approaches including LSTM, GRU, and Bi-LSTM. Due to the unavailability of a standard corpus, we prepared three data sets with different numbers of patient symptoms arranged into 10 categories. Two feature representations, TF-IDF and word embeddings, were employed. The performance of the proposed methodology was evaluated using accuracy, recall, precision, and F1 score. The experimental results show that, among the machine learning models, the SVM model using TF-IDF achieved the highest accuracy and F1 score of 94.7%, while among the deep learning models the LSTM model using word2vec embeddings achieved an accuracy of 95.7% and an F1 score of 96.0%. Several hyper-parameter tuning settings were used to obtain the best performance from each model. This study shows that the LSTM model proves to be the best of all the models over the entire dataset.
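As an illustration of the TF-IDF representation used for the classical models, a minimal sketch (toy English tokens and a smoothed-IDF variant; the study's corpus is in Afaan Oromo) is:

```python
# Minimal TF-IDF feature extraction (smoothed IDF: ln(N/df) + 1).
# Tokens are toy English words; the study's corpus is in Afaan Oromo.
import math
from collections import Counter

def tfidf(docs):
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))   # document frequency
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        vectors.append({t: (c / len(doc)) * (math.log(n / df[t]) + 1.0)
                        for t, c in counts.items()})
    return vectors

docs = [["fever", "cough"], ["fever", "rash"], ["cough", "cough", "headache"]]
vecs = tfidf(docs)
```

Terms that occur in fewer documents receive larger IDF weights, which is why rare symptom terms dominate the resulting feature vectors.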


Subjects
Deep Learning, Humans, Ethiopia, Support Vector Machine, Language, Algorithms, Machine Learning, Bayes Theorem, Artificial Intelligence
5.
PLoS One; 19(5): e0302741, 2024.
Article in English | MEDLINE | ID: mdl-38758774

ABSTRACT

In the context of integrating the sports and medicine domains, effective data clustering algorithms are urgently needed for elderly health supervision. This paper introduces a novel higher-order hybrid clustering algorithm that combines density values and the particle swarm optimization (PSO) algorithm. Initially, the traditional PSO algorithm is enhanced by integrating the Global Evolution Dynamic Model (GEDM) into the Estimation of Distribution Algorithm (EDA), constructing a weighted covariance matrix-based GEDM. This adapted PSO algorithm dynamically selects between the Global Evolution Dynamic Model and the standard PSO algorithm to update population information, significantly enhancing convergence speed while mitigating the risk of entrapment in local optima. Subsequently, the higher-order hybrid clustering algorithm is formulated based on the density value and the refined PSO algorithm. The PSO clustering algorithm is adopted in the initial clustering phase, culminating in class clusters after a finite number of iterations. These clusters then undergo the application of the density peak search algorithm to identify candidate centroids. The final centroids are determined through a fusion of the initial class clusters and the identified candidate centroids. Results showcase remarkable improvements: achieving 99.13%, 82.22%, and 99.22% for F-measure, recall, and precision on dataset S1, and 75.22%, 64.0%, and 64.4% on dataset CMC. Notably, the proposed algorithm yields a 75.22%, 64.4%, and 64.6% rate on dataset S, significantly surpassing the comparative schemes' performance. Moreover, employing the text vector representation of the LDA topic vector model underscores the efficacy of the higher-order hybrid clustering algorithm in efficiently clustering text information. This innovative approach facilitates swift and accurate clustering of elderly health data from the perspective of sports and medicine integration. It enables the identification of patterns and regularities within the data, facilitating the formulation of personalized health management strategies and addressing latent health concerns among the elderly population.


Subjects
Algorithms, Humans, Cluster Analysis, Aged, Health Information Management/methods, Sports Medicine/methods, Sports
6.
PLoS One; 19(5): e0303076, 2024.
Article in English | MEDLINE | ID: mdl-38758825

ABSTRACT

STUDY OBJECTIVE: This study aimed to prospectively validate the performance of an artificially augmented home sleep apnea testing device (WVU-device) and its patented technology. METHODOLOGY: The WVU-device, utilizing patent-pending (US 20210001122A) technology and an algorithm derived from cardio-pulmonary physiological parameters, comorbidities, and anthropological information, was prospectively compared with a commercially available and Center for Medicare and Medicaid Services (CMS) approved home sleep apnea testing (HSAT) device. The WVU-device and the HSAT device were applied on separate hands of the patient during a single night study. The oxygen desaturation index (ODI) obtained from the WVU-device was compared to the respiratory event index (REI) derived from the HSAT device. RESULTS: A total of 78 consecutive patients were included in the prospective study. Of the 78 patients, 38 (48%) were women and 9 (12%) had a Fitzpatrick score of 3 or higher. The ODI obtained from the WVU-device correlated well with the REI from the HSAT device, and no significant bias was observed in the Bland-Altman curve. The accuracy for ODI ≥5 and REI ≥5 was 87%, for ODI ≥15 and REI ≥15 it was 89%, and for ODI ≥30 and REI ≥30 it was 95%. The sensitivity and specificity for these ODI/REI cut-offs were 0.92 and 0.78, 0.91 and 0.86, and 0.94 and 0.95, respectively. CONCLUSION: The WVU-device demonstrated good accuracy in predicting REI when compared to an approved HSAT device, even in patients with darker skin tones.
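The oxygen desaturation index can be sketched as desaturation events per hour of recording; the event definition below (a drop of ≥3 points from a running baseline, then recovery) and the sample series are simplifying assumptions for illustration, not the WVU-device's patented algorithm:

```python
# Simplified oxygen desaturation index: desaturation events per hour.
# The >=3-point drop, the recovery rule and the sample series are
# illustrative assumptions, not the device's patented algorithm.

def odi(spo2, samples_per_hour, drop=3):
    events, in_event, baseline = 0, False, spo2[0]
    for value in spo2[1:]:
        if not in_event and baseline - value >= drop:
            events += 1                       # desaturation event begins
            in_event = True
        elif in_event and value >= baseline - 1:
            in_event = False                  # recovered close to baseline
        elif not in_event:
            baseline = value                  # track the pre-event baseline
    return events / (len(spo2) / samples_per_hour)

spo2 = [96, 96, 92, 91, 95, 96, 93, 96, 96, 96, 96, 96]   # toy SpO2 readings
odi_value = odi(spo2, samples_per_hour=12)
```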


Subjects
Artificial Intelligence, Sleep Apnea Syndromes, Humans, Female, Male, Middle Aged, Prospective Studies, Sleep Apnea Syndromes/diagnosis, Sleep Apnea Syndromes/physiopathology, Aged, Polysomnography/instrumentation, Polysomnography/methods, Algorithms, Adult
7.
PLoS One; 19(5): e0300961, 2024.
Article in English | MEDLINE | ID: mdl-38758938

ABSTRACT

The stable and site-specific operation of transmission lines is a crucial safeguard for grid functionality. This study introduces a comprehensive optimization design method for transmission line crossing frame structures based on the Biogeography-Based Optimization (BBO) algorithm, which integrates size, shape, and topology optimization. By utilizing the BBO algorithm to optimize the truss structure's design variables, the method ensures the structure's economic and practical viability while enhancing its performance. The optimization process is validated through finite element analysis, confirming the optimized structure's compliance with strength, stiffness, and stability requirements. The results demonstrate that the integrated design of size, shape, and topology optimization, as opposed to individual optimizations of size or shape and topology, yields the lightest structure mass and a maximum stress of 151.4 MPa under construction conditions. These findings also satisfy the criteria for strength, stiffness, and stability, verifying the method's feasibility, effectiveness, and practicality. This approach surpasses traditional optimization methods, offering a more effective solution for complex structural optimization challenges, thereby enhancing the sustainable utilization of structures.


Subjects
Algorithms, Finite Element Analysis
8.
PLoS One; 19(5): e0298572, 2024.
Article in English | MEDLINE | ID: mdl-38758947

ABSTRACT

To address the increased load on the distribution network and the low satisfaction of vehicle owners caused by the uncoordinated charging of electric vehicles, an optimal scheduling model for electric vehicles that considers the comprehensive satisfaction of vehicle owners is proposed. In this model, the dynamic electricity price and the charging and discharging states of electric vehicles are taken as decision variables, while the income of electric vehicle charging stations, the comprehensive satisfaction of vehicle owners with respect to economic benefits, and the load fluctuation caused by electric vehicles are taken as optimization objectives. An improved NSGA-III algorithm (DJM-NSGA-III), based on a dynamic opposition-based learning strategy, the Jaya algorithm, and the Manhattan distance, is used to overcome the low initial population quality, the tendency to become trapped in local optima, and the neglect of potentially optimal solutions that arise when the standard NSGA-III algorithm is applied to this multi-objective, high-dimensional scheduling model. The experimental results show that the proposed method can improve owner satisfaction while increasing the income of the charging station, effectively alleviating the conflict of interest between the two and maintaining the safe and stable operation of the distribution network.


Subjects
Algorithms, Electricity, Automobiles, Humans, Theoretical Models
9.
Biomed Phys Eng Express; 10(4), 2024 May 22.
Article in English | MEDLINE | ID: mdl-38744255

ABSTRACT

Purpose: To develop a method to extract statistical low-contrast detectability (LCD) and contrast-detail (C-D) curves from clinical patient images. Method: We used the region of air surrounding the patient as an alternative to a homogeneous region within the patient. A simple graphical user interface (GUI) was created to set the initial configuration for the region of interest (ROI), ROI size, and minimum detectable contrast (MDC). The process was started by segmenting the air surrounding the patient with a threshold between -980 HU (Hounsfield units) and -1024 HU to get an air mask. The mask was trimmed using the patient center coordinates to avoid distortion from the patient table. It was used to automatically place square ROIs of a predetermined size. The mean pixel value in HU within each ROI was calculated, and the standard deviation (SD) of all the means was obtained. The MDC for a particular target size was generated by multiplying the SD by 3.29. A C-D curve was obtained by iterating this process for the other ROI sizes. This method was applied to the homogeneous area in the uniformity module of an ACR CT phantom, to find the correlation between the parameters inside and outside the phantom, and to 30 thoracic, 26 abdominal, and 23 head patient images. Results: The phantom images showed a significant linear correlation between the LCDs obtained from outside and inside the phantom, with R² values of 0.67 and 0.99 for variations in tube current and tube voltage, respectively. This indicates that the air region outside the phantom can act as a surrogate for the homogeneous region inside the phantom to obtain the LCD and C-D curves. Conclusion: The C-D curves obtained from outside the ACR CT phantom show a strong linear correlation with those from inside the phantom. The proposed method can also be used to extract the LCD from patient images by using the region of air outside as a surrogate for a region inside the patient.
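The statistical LCD described here reduces to a simple computation: the minimum detectable contrast for one ROI size is 3.29 times the SD of the per-ROI mean values (roughly 95% confidence for both false-positive and false-negative errors). A sketch with invented HU values:

```python
# Statistical low-contrast detectability: MDC = 3.29 x SD of per-ROI means.
# The HU values below are invented; real ones come from ROIs in the air region.
import statistics

def minimum_detectable_contrast(roi_means):
    """Contrast (HU) detectable at ~95% confidence for one ROI size."""
    return 3.29 * statistics.stdev(roi_means)

roi_means = [-1000.2, -999.1, -1001.3, -998.7, -1000.9, -999.8]
mdc = minimum_detectable_contrast(roi_means)
```

Iterating this over several ROI sizes yields the C-D curve: one MDC value per target size.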


Subjects
Algorithms, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Imaging Phantoms, Computer-Assisted Image Processing/methods, User-Computer Interface, Computer-Assisted Radiographic Image Interpretation/methods
10.
J Chromatogr A; 1726: 464941, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38749274

ABSTRACT

Method development in comprehensive two-dimensional liquid chromatography (LC×LC) is a challenging process. The interdependencies between the two dimensions and the possibility of incorporating complex gradient profiles, such as multi-segmented gradients or shifting gradients, make trial-and-error method development time-consuming and highly dependent on user experience. Retention modeling and Bayesian optimization (BO) have been proposed as solutions to mitigate these issues. However, both approaches have their strengths and weaknesses. On the one hand, retention modeling, which approximates true retention behavior, depends on effective peak tracking and accurate retention time and width predictions, which are increasingly challenging for complex samples and advanced gradient assemblies. On the other hand, Bayesian optimization may require many experiments when dealing with many adjustable parameters, as in LC×LC. Therefore, in this work, we investigate the use of multi-task Bayesian optimization (MTBO), a method that can combine information from both retention modeling and experimental measurements. The algorithm was first tested and compared with BO using a synthetic retention modeling test case, where it was shown that MTBO finds better optima with fewer method-development iterations than conventional BO. Next, the algorithm was tested on the optimization of a method for a pesticide sample and we found that the algorithm was able to improve upon the initial scanning experiments. Multi-task Bayesian optimization is a promising technique in situations where modeling retention is challenging, and the high number of adjustable parameters and/or limited optimization budget makes traditional Bayesian optimization impractical.


Subjects
Algorithms, Bayes Theorem, Liquid Chromatography/methods, Pesticides/isolation & purification, Pesticides/analysis
11.
BMC Bioinformatics; 25(1): 174, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698340

ABSTRACT

BACKGROUND: In the last two decades, the use of high-throughput sequencing technologies has accelerated the pace of discovery of proteins. However, due to the time and resource limitations of rigorous experimental functional characterization, the functions of a vast majority of them remain unknown. As a result, computational methods offering accurate, fast and large-scale assignment of functions to new and previously unannotated proteins are sought after. Leveraging the underlying associations between the multiplicity of features that describe proteins could reveal functional insights into the diverse roles of proteins and improve performance on the automatic function prediction task. RESULTS: We present GO-LTR, a multi-view multi-label prediction model that relies on a high-order tensor approximation of model weights combined with non-linear activation functions. The model is capable of learning high-order relationships between multiple input views representing the proteins and predicting high-dimensional multi-label output consisting of protein functional categories. We demonstrate the competitiveness of our method on various performance measures. Experiments show that GO-LTR learns polynomial combinations between different protein features, resulting in improved performance. Additional investigations establish GO-LTR's practical potential in assigning functions to proteins under diverse challenging scenarios: very low sequence similarity to previously observed sequences, and rarely observed and highly specific terms in the gene ontology. IMPLEMENTATION: The code and data used for training GO-LTR are available at https://github.com/aalto-ics-kepaco/GO-LTR-prediction .


Subjects
Computational Biology, Proteins, Proteins/chemistry, Proteins/metabolism, Computational Biology/methods, Protein Databases, Algorithms
12.
BMC Health Serv Res; 24(1): 569, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698386

ABSTRACT

BACKGROUND: The national breast screening programme in the United Kingdom is under pressure due to workforce shortages and having been paused during the COVID-19 pandemic. Artificial intelligence has the potential to transform how healthcare is delivered by improving care processes and patient outcomes. Research on the clinical and organisational benefits of artificial intelligence is still at an early stage, and numerous concerns have been raised around its implications, including patient safety, acceptance, and accountability for decisions. Reforming the breast screening programme to include artificial intelligence is a complex endeavour because numerous stakeholders influence it. Therefore, a stakeholder analysis was conducted to identify relevant stakeholders, explore their views on the proposed reform (i.e., integrating artificial intelligence algorithms into the Scottish National Breast Screening Service for breast cancer detection) and develop strategies for managing 'important' stakeholders. METHODS: A qualitative study (i.e., focus groups and interviews, March-November 2021) was conducted using the stakeholder analysis guide provided by the World Health Organisation and involving three Scottish health boards: NHS Greater Glasgow & Clyde, NHS Grampian and NHS Lothian. The objectives included: (A) Identify possible stakeholders (B) Explore stakeholders' perspectives and describe their characteristics (C) Prioritise stakeholders in terms of importance and (D) Develop strategies to manage 'important' stakeholders. Seven stakeholder characteristics were assessed: their knowledge of the targeted reform, position, interest, alliances, resources, power and leadership. RESULTS: Thirty-two participants took part from 14 (out of 17 identified) sub-groups of stakeholders. While they were generally supportive of using artificial intelligence in breast screening programmes, some concerns were raised. Stakeholder knowledge, influence and interests in the reform varied. Key advantages mentioned include service efficiency, quicker results and reduced work pressure. Disadvantages included overdiagnosis or misdiagnosis of cancer, inequalities in detection and the self-learning capacity of the algorithms. Five strategies (with considerations suggested by stakeholders) were developed to maintain and improve the support of 'important' stakeholders. CONCLUSIONS: Health services worldwide face similar challenges of workforce issues to provide patient care. The findings of this study will help others to learn from Scottish experiences and provide guidance to conduct similar studies targeting healthcare reform. STUDY REGISTRATION: researchregistry6579, date of registration: 16/02/2021.


Subjects
Algorithms, Artificial Intelligence, Breast Neoplasms, COVID-19, Qualitative Research, Stakeholder Participation, Humans, Breast Neoplasms/diagnosis, Female, COVID-19/diagnosis, COVID-19/epidemiology, Early Detection of Cancer/methods, United Kingdom, SARS-CoV-2, Scotland, Focus Groups
13.
Orphanet J Rare Dis; 19(1): 183, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698482

ABSTRACT

BACKGROUND: With over 7000 Mendelian disorders, identifying children with a specific rare genetic disorder diagnosis through structured electronic medical record data is challenging given incompleteness of records, inaccurate medical diagnosis coding, as well as heterogeneity in clinical symptoms and procedures for specific disorders. We sought to develop a digital phenotyping algorithm (PheIndex) using electronic medical records to identify children aged 0-3 diagnosed with genetic disorders or who present with illness with an increased risk for genetic disorders. RESULTS: Through expert opinion, we established 13 criteria for the algorithm and derived a score and a classification. The performance of each criterion and the classification were validated by chart review. PheIndex identified 1,088 children out of 93,154 live births who may be at an increased risk for genetic disorders. Chart review demonstrated that the algorithm achieved 90% sensitivity, 97% specificity, and 94% accuracy. CONCLUSIONS: The PheIndex algorithm can help identify when a rare genetic disorder may be present, alerting providers to consider ordering a diagnostic genetic test and/or referring a patient to a medical geneticist.
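The chart-review validation figures correspond to standard confusion-matrix metrics, sketched below with invented counts (the study reports only the resulting percentages):

```python
# Standard chart-review validation metrics from confusion-matrix counts.
# The counts are invented; the study reports only the resulting percentages.

def validation_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = validation_metrics(tp=90, fp=3, tn=97, fn=10)
```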


Subjects
Algorithms, Rare Diseases, Humans, Rare Diseases/genetics, Rare Diseases/diagnosis, Infant, Newborn, Preschool Child, Female, Male, Electronic Health Records, Inborn Genetic Diseases/diagnosis, Inborn Genetic Diseases/genetics, Phenotype
14.
Stud Health Technol Inform; 314: 151-152, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38785022

ABSTRACT

This study proposes an innovative application of the Goertzel Algorithm (GA) for the processing of vocal signals in dysphonia evaluation. Compared to the Fast Fourier Transform (FFT) representing the gold standard analysis technique in this context, GA demonstrates higher efficiency in terms of processing time and memory usage, also showing an improved discrimination between healthy and pathological conditions. This suggests that GA-based approaches could enhance the reliability and efficiency of vocal signal analysis, thus supporting physicians in dysphonia research and clinical monitoring.
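The Goertzel algorithm evaluates a single DFT bin with a two-term recurrence, which is why it can be cheaper than a full FFT when only a few frequencies matter. A standard single-bin sketch (the sampling rate, tone and length are illustrative choices, not the study's settings):

```python
# Single-bin spectral power via the Goertzel recurrence.
# Sampling rate, tone frequency and signal length are illustrative choices.
import math

def goertzel_power(samples, sample_rate, freq):
    """Squared magnitude of the DFT bin nearest to `freq`."""
    n = len(samples)
    k = round(n * freq / sample_rate)             # nearest bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

rate = 8000
tone = [math.sin(2.0 * math.pi * 440.0 * t / rate) for t in range(400)]
```

Each bin needs only two state variables and one multiply-add per sample, which is the memory and processing-time advantage over the FFT noted in the abstract.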


Subjects
Algorithms, Dysphonia, Humans, Dysphonia/diagnosis, Computer-Assisted Signal Processing, Sound Spectrography/methods, Reproducibility of Results, Fourier Analysis, Female, Male
15.
Luminescence; 39(5): e4766, 2024 May.
Article in English | MEDLINE | ID: mdl-38785095

ABSTRACT

In this work, two validated approaches were used for estimating hydroxyzine HCl for the first time using resonance Rayleigh scattering (RRS) and spectrofluorimetric techniques. The suggested approaches rely on forming an association complex between hydroxyzine HCl and 2,4,5,7-tetraiodofluorescein (erythrosin B) reagent in acidic media. The quenching of the fluorescence intensity of 2,4,5,7-tetraiodofluorescein by hydroxyzine at 551.5 nm (excitation = 527.5 nm) was used for determining the studied drug by the spectrofluorimetric technique. The RRS approach is based on amplification of the RRS spectrum at 348 nm upon the interaction of hydroxyzine HCl with 2,4,5,7-tetraiodofluorescein. The spectrofluorimetric and RRS methodologies produced linear results within ranges of 0.15-1.5 µg ml⁻¹ and 0.1-1.2 µg ml⁻¹, respectively. LOD values for these methods were determined to be 0.047 µg ml⁻¹ and 0.033 µg ml⁻¹, respectively. The content of hydroxyzine HCl in its pharmaceutical tablet was estimated using the developed procedures with acceptable recoveries. Additionally, the application of four greenness and whiteness algorithms shows that the proposed methods are superior to the previously reported method in terms of sustainability, economics, analytical performance, and practicality.
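The reported LOD values are consistent with the usual ICH-style estimate LOD = 3.3·σ/S, where S is the calibration slope and σ the residual standard deviation of the calibration line; a sketch with an invented calibration (not the paper's measurements):

```python
# Limit of detection from a linear calibration line: LOD = 3.3 * s_res / slope.
# Concentrations (ug/ml) and signals are an invented calibration, not the
# paper's measurements.
import statistics

def calibration_lod(conc, signal):
    n = len(conc)
    mx, my = statistics.mean(conc), statistics.mean(signal)
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / sxx
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(conc, signal)]
    s_res = (sum(r * r for r in residuals) / (n - 2)) ** 0.5   # SD about the line
    return 3.3 * s_res / slope

conc = [0.15, 0.45, 0.75, 1.05, 1.5]
signal = [30.2, 89.8, 150.5, 209.9, 299.6]
lod = calibration_lod(conc, signal)
```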


Subjects
Algorithms, Hydroxyzine, Fluorescence Spectrometry, Hydroxyzine/analysis, Hydroxyzine/chemistry, Histamine Antagonists/analysis, Histamine Antagonists/chemistry, Radiation Scattering, Erythrosine/chemistry, Erythrosine/analysis
16.
Methods Mol Biol; 2726: 45-83, 2024.
Article in English | MEDLINE | ID: mdl-38780727

ABSTRACT

Several different ways to predict RNA secondary structures have been suggested in the literature. Statistical methods, such as those that utilize stochastic context-free grammars (SCFGs), or approaches based on machine learning aim to predict the best representative structure for the underlying ensemble of possible conformations. Their parameters have therefore been trained on larger subsets of well-curated, known secondary structures. Physics-based methods, on the other hand, usually refrain from using optimized parameters. They model secondary structures from loops as individual building blocks which have been assigned a physical property instead: the free energy of the respective loop. Such free energies are either derived from experiments or from mathematical modeling. This rigorous use of physical properties then allows for the application of statistical mechanics to describe the entire state space of RNA secondary structures in terms of equilibrium probabilities. On that basis, and by using efficient algorithms, many more descriptors of the conformational state space of RNA molecules can be derived to investigate and explain the many functions of RNA molecules. Moreover, compared to other methods, physics-based models allow for a much easier extension with other properties that can be measured experimentally. For instance, small molecules or proteins can bind to an RNA and their binding affinity can be assessed experimentally. Under certain conditions, existing RNA secondary structure prediction tools can be used to model this RNA-ligand binding and to eventually shed light on its impact on structure formation and function.
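The statistical-mechanics view described above assigns each secondary structure an equilibrium probability through the Boltzmann factor of its free energy; a minimal sketch over an invented three-structure ensemble:

```python
# Equilibrium (Boltzmann) probabilities of RNA secondary structures from
# their free energies (kcal/mol). The three energies are invented.
import math

R = 0.0019872   # gas constant, kcal/(mol*K)
T = 310.15      # 37 degrees Celsius in kelvin

def boltzmann_probabilities(energies):
    weights = [math.exp(-e / (R * T)) for e in energies]
    z = sum(weights)                  # partition function over the ensemble
    return [w / z for w in weights]

probs = boltzmann_probabilities([-5.0, -4.0, -1.0])
```

In practice the partition function is computed over the full exponentially large state space by dynamic programming rather than by enumeration, but the probabilities follow the same formula.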


Subjects
Nucleic Acid Conformation, RNA, Thermodynamics, RNA/chemistry, Algorithms, Computational Biology/methods, Machine Learning, Molecular Models
17.
Methods Mol Biol ; 2726: 125-141, 2024.
Article in English | MEDLINE | ID: mdl-38780730

ABSTRACT

Analysis of the folding space of RNA generally suffers from its exponential size. With classified dynamic programming (DP) algorithms, it is possible to alleviate this burden and to analyse the folding space of RNA in great depth. Key to classified DP is that the search space is partitioned into classes based on an on-the-fly computed feature. A class-wise evaluation is then used to compute class-wide properties, such as the lowest free energy structure of each class, or aggregate properties, such as the class's probability. In this chapter we describe the well-known shape and hishape abstractions of RNA structures, their power to help better understand RNA function, and related methods based on these abstractions.
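As a concrete illustration of such an on-the-fly feature, the sketch below computes a level-5-style abstract shape from a dot-bracket string by ignoring unpaired bases and collapsing helix chains. It mimics the idea behind the shape abstraction (as implemented in RNAshapes), not the tool's actual code:

```python
def shape_level5(db: str) -> str:
    """Collapse a dot-bracket structure to a level-5-style abstract shape.

    Unpaired bases are ignored and maximal chains of nested pairs (helices,
    possibly interrupted by bulges or interior loops) collapse to a single
    bracket pair, so only the nesting and adjacency pattern of helices
    survives, e.g. "((..((...))..))" -> "[]".
    """
    # Build the nesting tree of base pairs, ignoring unpaired positions.
    root: list = []
    stack = [root]
    for c in db:
        if c == "(":
            node: list = []
            stack[-1].append(node)
            stack.append(node)
        elif c == ")":
            stack.pop()
    if len(stack) != 1:
        raise ValueError("unbalanced dot-bracket string")

    def collapse(node: list) -> list:
        # A pair with exactly one nested pair continues the same helix
        # chain: skip down until the chain branches or ends.
        while len(node) == 1:
            node = node[0]
        return [collapse(child) for child in node]

    def render(children: list) -> str:
        return "".join("[" + render(child) + "]" for child in children)

    return render([collapse(child) for child in root])
```

Grouping all structures that map to the same shape string is exactly the kind of class partition that classified DP evaluates class by class.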


Subjects
Algorithms, Computational Biology, Nucleic Acid Conformation, RNA Folding, RNA, RNA/chemistry, RNA/genetics, Computational Biology/methods, Software, Thermodynamics
18.
Methods Mol Biol ; 2726: 143-168, 2024.
Article in English | MEDLINE | ID: mdl-38780731

ABSTRACT

The 3D structures of many ribonucleic acid (RNA) loops are characterized by highly organized networks of non-canonical interactions. Multiple computational methods have been developed to annotate structures with those interactions or to automatically identify recurrent interaction networks. By contrast, the reverse problem, which aims to retrieve the geometry of a loop from its sequence or ensemble of interactions, remains much less explored. In this chapter, we will describe how to retrieve and build families of conserved structural motifs using their underlying networks of non-canonical interactions. Then, we will show how to assign sequence alignments to those families and use the software BayesPairing to build statistical models of structural motifs with their associated sequence alignments. Finally, we will apply BayesPairing to identify regions in new sequences where those loop geometries can occur.
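A minimal way to see how interaction networks define motif families: represent each loop occurrence as a set of labeled edges (Leontis-Westhof-style labels such as "cWW" or "tHS"), renumber the positions motif-locally, and group occurrences with identical networks. The motif names, positions, and labels in the usage example are made up for illustration; BayesPairing builds full statistical sequence models on top of such families:

```python
from collections import defaultdict

def signature(edges):
    """Canonical, position-independent signature of an interaction network.

    Each edge is (i, j, label); positions are replaced by their rank within
    the motif so occurrences at different places in a structure compare equal.
    """
    positions = sorted({p for i, j, _ in edges for p in (i, j)})
    rank = {p: r for r, p in enumerate(positions)}
    return frozenset((rank[min(i, j)], rank[max(i, j)], label)
                     for i, j, label in edges)

def group_recurrent(occurrences):
    """Group (name, edges) motif occurrences sharing the same network."""
    families = defaultdict(list)
    for name, edges in occurrences:
        families[signature(edges)].append(name)
    return families

# Hypothetical occurrences: two loops with the same network, one different.
occ = [
    ("pdb1_loopA", [(12, 25, "tHS"), (13, 24, "tSH"), (14, 23, "cWW")]),
    ("pdb2_loopC", [(101, 140, "tHS"), (102, 139, "tSH"), (103, 138, "cWW")]),
    ("pdb3_loopB", [(7, 19, "cWW"), (8, 18, "cWW")]),
]
families = group_recurrent(occ)
```

Here `families` has two entries: the first two occurrences collapse into one family because their renumbered networks are identical.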


Subjects
Base Pairing, Computational Biology, RNA, Software, Computational Biology/methods, RNA/chemistry, RNA/genetics, Nucleic Acid Conformation, Sequence Alignment/methods, Algorithms, Nucleotide Motifs, Bayes Theorem, Molecular Models
19.
Methods Mol Biol ; 2726: 235-254, 2024.
Article in English | MEDLINE | ID: mdl-38780734

ABSTRACT

Generating accurate alignments of non-coding RNA sequences is indispensable in the quest for understanding RNA function. Nevertheless, aligning RNAs remains a challenging computational task. In the twilight zone of RNA sequences with low sequence similarity, sequence homologies and compatible, favorable (a priori unknown) structures can be inferred only in dependence on each other. Thus, simultaneous alignment and folding (SA&F) remains the gold standard of comparative RNA analysis, even though this method is computationally highly demanding. This text introduces the recent release 2.0 of the software package LocARNA, focusing on its practical application. The package enables versatile, fast, and accurate analysis of multiple RNAs. For this purpose, it implements SA&F algorithms in a specific, lightweight flavor that makes them routinely applicable at large scale. Its high performance is achieved by combining ensemble-based sparsification of the structure space with banding strategies. Probabilistic banding strongly improves the performance of LocARNA 2.0 even over previous releases, while simplifying its effective use. Enabling flexible application to various use cases, LocARNA provides tools to globally and locally compare, cluster, and multiply align RNAs based on optimization and probabilistic variants of SA&F, which can optionally integrate prior knowledge, expressible as anchor and structure constraints.
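Banding is easiest to see in plain sequence alignment. The sketch below restricts classic Needleman-Wunsch scoring to a diagonal band; this illustrates only the banding idea, not LocARNA's SA&F algorithm, which additionally folds both RNAs while aligning them:

```python
NEG_INF = float("-inf")

def banded_align_score(a: str, b: str, band: int,
                       match: int = 1, mismatch: int = -1, gap: int = -2) -> int:
    """Global alignment score, computed only for cells with |i - j| <= band.

    Restricting the DP matrix to a diagonal band cuts the quadratic cell
    count down to O(n * band); cells outside the band stay at -infinity.
    """
    n, m = len(a), len(b)
    if abs(n - m) > band:
        raise ValueError("band too narrow for these sequence lengths")
    D = [[NEG_INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0
    for j in range(1, min(m, band) + 1):   # leading gaps in a, inside band
        D[0][j] = j * gap
    for i in range(1, n + 1):
        if i <= band:                       # leading gaps in b, inside band
            D[i][0] = i * gap
        for j in range(max(1, i - band), min(m, i + band) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + s,   # (mis)match
                          D[i - 1][j] + gap,     # gap in b
                          D[i][j - 1] + gap)     # gap in a
    return D[n][m]
```

With a band wide enough to contain the optimal path, the banded score equals the unbanded one; LocARNA's probabilistic banding chooses such regions from ensemble probabilities instead of a fixed width.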


Subjects
Algorithms, Computational Biology, RNA Folding, RNA, Software, RNA/genetics, RNA/chemistry, Computational Biology/methods, Nucleic Acid Conformation, Sequence Alignment/methods, RNA Sequence Analysis/methods
20.
Methods Mol Biol ; 2726: 15-43, 2024.
Article in English | MEDLINE | ID: mdl-38780726

ABSTRACT

The nearest-neighbor (NN) model is a general tool for the evaluation of oligonucleotide thermodynamic stability. It is primarily used for the prediction of melting temperatures but has also found use in RNA secondary structure prediction and theoretical models of hybridization kinetics. One of the key problems is to obtain the NN parameters, and VarGibbs was designed to fit those parameters directly to measured melting temperatures. Here we describe the basic workflow from RNA melting temperatures to NN parameters with the use of VarGibbs. We start with a brief review of the basic concepts of RNA hybridization and of the NN model and then show how to prepare the data files, run the parameter optimization, and interpret the results.
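The two-state relation that links NN parameters to duplex melting temperatures is Tm = ΔH°/(ΔS° + R ln(CT/4)) for non-self-complementary duplexes. The sketch below evaluates it over a hypothetical dinucleotide parameter table; the dH/dS values are placeholders with plausible signs and magnitudes, not a fitted set such as VarGibbs would produce:

```python
import math

R = 1.987  # gas constant, cal/(mol*K)

# Placeholder dinucleotide step parameters: step -> (dH [cal/mol],
# dS [cal/(mol*K)]). Illustrative values only, not fitted parameters.
NN_PARAMS = {
    "AA": (-6800, -19.0), "AU": (-5700, -15.5), "UA": (-7200, -20.3),
    "CG": (-10600, -27.2), "GC": (-9800, -24.4), "GG": (-8000, -19.4),
}

def melting_temperature(seq: str, ct: float) -> float:
    """Two-state Tm (Kelvin) of a non-self-complementary duplex.

    Sums dH and dS over all dinucleotide steps of one strand, then applies
    Tm = dH / (dS + R * ln(ct / 4)) at total strand concentration ct (M).
    """
    dh = ds = 0.0
    for i in range(len(seq) - 1):
        step_dh, step_ds = NN_PARAMS[seq[i:i + 2]]
        dh += step_dh
        ds += step_ds
    return dh / (ds + R * math.log(ct / 4.0))
```

Because both dH and dS are negative, raising the strand concentration raises Tm, which the concentration term in the denominator captures; parameter fitting inverts this relation, searching for the dH/dS table that best reproduces a set of measured Tm values.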


Subjects
Nucleic Acid Conformation, Nucleic Acid Denaturation, Thermodynamics, Transition Temperature, RNA/chemistry, RNA/genetics, Software, Algorithms, Nucleic Acid Hybridization/methods