1.
Cureus ; 16(4): e59334, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38817524

ABSTRACT

Introduction Despite the constant development of medicine and increasing access to medical specialists, in the first quarter of the 21st century odontogenic abscesses remain one of the leading causes of emergency hospitalization in maxillofacial surgery clinics. Because of the serious and potentially lethal complications that this type of suppurative infection can cause if not treated promptly, knowledge of its origin needs constant updating, which is precisely what this original article addresses. Materials and methods We report a retrospective study conducted over a five-year period (2018-2023), during which 705 patients aged 18 years and older with a confirmed diagnosis of odontogenic soft tissue abscess of the head and neck underwent emergency surgery. Results The average age of the patients studied was 41 years, the oldest being an 82-year-old woman. Males made up the larger share of the study population (54.18%). Young patients (18-44 years) were the most affected, with a total of 364 patients (213 males and 151 females), while elderly patients (75 years of age and older) accounted for the smallest share, with a total of 15 patients (seven males and eight females). The first molars of both jaws (teeth 16, 26, 36 and 46) were the most frequent cause of suppurative bacterial infection among our patients: 208 of 705 (29.5%). Central incisors (teeth 11, 21, 31 and 41) were the least frequent direct cause of odontogenic infection, accounting for only 17 of 705 cases (2.41%). Discussion The most plausible explanation for the decrease in the number of patients with odontogenic abscesses with increasing age is tooth loss in older individuals. Our study confirmed that the first mandibular molars are the teeth most commonly responsible for the formation of purulent exudate in the adjacent mandibular soft tissues.
However, in contrast to the commonly cited view that, in the maxilla, canines are the most frequent etiologic factor for odontogenic abscesses, we found that here too the first molars (teeth 16 and 26) outnumber the other teeth of the maxillary dentition, with canines outnumbering only the incisors. The teeth of the lower jaw caused more than twice as many exudative infections as those of the upper jaw, a ratio of 2.54:1. Conclusions Knowledge of odontogenic abscesses (their demographic distribution, frequency and etiology, their diagnosis and treatment) is the basis for prediction and treatment of these diseases, which mainly affect young people. Treatment is both surgical, to evacuate the suppurative focus, and antibacterial.

2.
Cureus ; 16(3): e56836, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38654803

ABSTRACT

Schwannomas are not uncommon in the maxillofacial region; however, those with intraoral localization, particularly of the hard palate, are among the least frequently described. In the current case report, we present a 17-year-old girl with a histologically verified schwannoma of the right hard palate, originating from the right greater palatine nerve. In her case, despite lysis of the palatine bone from tumor compression, the lesion was asymptomatic, causing only mild local discomfort. It was removed surgically under general anesthesia, and the resulting defect of the palatal mucosa was closed by plastic reconstruction with a posteriorly based lingual mucosal flap. The recovery period was uneventful.

3.
J Med Econ ; 24(1): 373-385, 2021.
Article in English | MEDLINE | ID: mdl-33588669

ABSTRACT

Multimorbidity is a defining challenge for health systems and requires coordination of care delivery and care management. Care management is a clinical service designed to remotely engage patients between visits and after discharge in order to support self-management of chronic and emergent conditions, encourage increased use of scheduled care, and address the use of unscheduled care. Care management can be provided using digital technology: digital care management. A robust methodology to assess digital care management, or any traditional or digital primary care intervention aimed at longitudinal management of multimorbidity, does not exist outside of randomized controlled trials (RCTs). RCTs are not always generalizable and are also not feasible for most healthcare organizations. We describe here a novel and pragmatic methodology for the evaluation of digital care management that is generalizable to any longitudinal intervention for multimorbidity, irrespective of its mode of delivery. This methodology implements propensity matching with bootstrapping to address some of the major challenges in evaluation, including identification of robust outcome measures, selection of an appropriate control population, small sample sizes with class imbalances, and the limitations of RCTs. We apply this methodology to the evaluation of digital care management at a U.S. payor and demonstrate a 9% reduction in ER utilization, a 17% reduction in inpatient admissions, and a 29% increase in the utilization of preventive medicine services. From these utilization outcomes, we derive an estimated cost saving of $641 per member per month at 3 months, specific to a single payor's payment structure over the study period. We compare these results to those derived from existing observational approaches, 1:1 and 1:n propensity matching, and discuss the circumstances in which our methodology has advantages over existing techniques.
Whilst our methodology focuses on cost and utilization and is applied in the U.S. context, it is applicable to other outcomes, such as Patient Reported Outcome Measures (PROMs) or clinical biometrics, and can be used in other health system contexts where the challenge of multimorbidity is prevalent.
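The propensity-matching-with-bootstrapping design described above can be sketched in a few functions. This is a minimal illustration, not the authors' implementation: the toy propensity model, its coefficients, the field names, and the choice of ER visits as the outcome are all assumptions for the sake of a runnable example.

```python
import math
import random
import statistics

def propensity(age, comorbidities):
    # Toy logistic propensity score standing in for a model fitted on
    # baseline covariates (coefficients are hypothetical).
    z = -3.0 + 0.03 * age + 0.5 * comorbidities
    return 1.0 / (1.0 + math.exp(-z))

def match_controls(treated, pool):
    # Greedy 1:1 nearest-neighbour matching on the propensity score "ps".
    matched, available = [], list(pool)
    for t in treated:
        best = min(available, key=lambda c: abs(c["ps"] - t["ps"]))
        available.remove(best)
        matched.append(best)
    return matched

def bootstrap_effect(treated, controls, n_boot=500, seed=0):
    # Resample matched pairs with replacement to get a distribution of the
    # mean difference in ER visits (treated minus matched control).
    rng = random.Random(seed)
    pairs = list(zip(treated, controls))
    diffs = [
        statistics.mean(t["er_visits"] - c["er_visits"]
                        for t, c in rng.choices(pairs, k=len(pairs)))
        for _ in range(n_boot)
    ]
    return statistics.mean(diffs), statistics.stdev(diffs)
```

A negative mean difference whose bootstrap interval excludes zero would correspond to the kind of utilization reduction reported above.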


Subject(s)
Multimorbidity , Self-Management , Hospitalization , Humans , Patient Reported Outcome Measures , Primary Health Care
4.
Popul Health Manag ; 23(4): 319-325, 2020 08.
Article in English | MEDLINE | ID: mdl-31765282

ABSTRACT

Digital care management programs can reduce health care costs and improve quality of care. However, it is unclear how to target patients who are most likely to benefit from these programs ex ante, a shortcoming of current "risk score"-based approaches across many interventions. This study explores a framework to define impactability by using machine learning (ML) models to identify those patients most likely to benefit from a digital health intervention for care management. Anonymized insurance claims data were used from a commercially insured population across several US states and combined with inferred sociodemographic data. The approach involves creating two models and comparing their methodologies and performance. The authors first train a cost prediction model to calculate the differences in predicted (without intervention) versus actual (with onboarding onto the digital health platform) health care expenditures for patients (N = 5600). Patients are then classified as impactable if the difference between predicted and actual costs meets a predetermined threshold. Several random forest and logistic regression models were trained to categorize new patients as impactable versus not impactable, with hyperparameters tuned by grid search to deliver optimal performance, reaching an overall sensitivity of 0.77 and specificity of 0.65 among all models. This approach shows that impactability for a digital health intervention can be successfully defined using ML methods, thus enabling efficient allocation of resources. This framework is generalizable to analyzing impactability of any intervention and can contribute to realizing closed-loop feedback systems for continuous improvement in health care.
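The labelling and evaluation logic described above reduces to two small functions. The threshold value and function names below are hypothetical; the classifiers themselves (random forests, logistic regressions) are omitted, since any model producing impactable/not-impactable predictions can be scored this way.

```python
def label_impactable(predicted_cost, actual_cost, threshold=1000.0):
    # A patient is labelled impactable when the modelled savings
    # (predicted cost without the intervention minus actual cost with
    # it) meet a predetermined threshold. Threshold here is arbitrary.
    return (predicted_cost - actual_cost) >= threshold

def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = recall on impactable patients;
    # specificity = recall on non-impactable patients.
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```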


Subject(s)
Digital Technology/methods , Machine Learning , Models, Statistical , Telemedicine/methods , Adult , Costs and Cost Analysis , Female , Humans , Male , Middle Aged
5.
Opt Express ; 27(21): 30380-30395, 2019 Oct 14.
Article in English | MEDLINE | ID: mdl-31684286

ABSTRACT

This paper describes a novel Boundary Source Method (BSM) applied to the vector calculation of electromagnetic fields from a surface defined by the interface between homogeneous, isotropic media. In this way, the reflected and transmitted fields are represented as an expansion of the electric fields generated by a basis of orthogonal electric and magnetic dipole sources that are tangential to, and evenly distributed over, the surface of interest. The dipole moments required to generate these fields are then calculated according to the extinction theorem of Ewald and Oseen, applied at control points situated on either side of the boundary. It is shown that the sources are essentially vector-equivalent Huygens' wavelets applied at discrete points on the boundary, and special attention is given to their placement and the corresponding placement of control points according to the Nyquist sampling criterion. The central result of this paper is that the extinction theorem should be applied at control points situated at a distance d = 3s (where s is the separation of the sources), and consequently we refer to the method as 3sBSM. The method is applied to reflection at a plane dielectric surface and a dielectric sphere, and good agreement is demonstrated in comparison with the Fresnel equations and the Mie series expansion, respectively (even at resonance). We conclude that 3sBSM provides an accurate solution to electromagnetic scattering from a bandlimited surface and efficiently avoids the singular surface integrals and special basis functions proposed by others.
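The spacing rule at the heart of 3sBSM can be written down directly. The function name and the two-samples-per-wavelength default are ours; the latter is just the Nyquist minimum for a bandlimited surface, as discussed above.

```python
def bsm_layout(wavelength, samples_per_wavelength=2.0):
    # Nyquist criterion: at least two dipole sources per wavelength
    # along the boundary; control points then sit at d = 3s (3sBSM).
    s = wavelength / samples_per_wavelength  # source separation
    d = 3.0 * s                              # control-point offset
    return s, d
```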

6.
Appl Opt ; 55(13): 3555-65, 2016 May 01.
Article in English | MEDLINE | ID: mdl-27140371

ABSTRACT

In a recent publication [3rd International Conference on Surface Metrology, Annecy, France, 2012, p. 1] it was shown that surface roughness measurements made using a focus variation microscope (FVM) are influenced by surface tilt. The effect appears to be most significant when the surface has microscale roughness (Ra≈50 nm) that is sufficient to provide a diffusely scattered signal comparable in magnitude to the specular component. This paper explores, from first principles, image formation using the focus variation method. With the assumption of incoherent scattering, it is shown that the process is linear and the 3D point spread characteristics and transfer characteristics of the instrument are well defined. It is argued that for the case of microscale roughness and through-the-objective illumination, the assumption of incoherence cannot be justified and more rigorous analysis is required. Using a foil model of surface scattering, the images recorded by an FVM have been calculated. It is shown that for the case of through-the-objective illumination at small tilt angles, the signal quality is degraded in a systematic manner. This is attributed to the mixing of specular and diffusely reflected components and leads to an asymmetry in the k-space representation of the output signals. It is shown that by using extra-aperture illumination or tilt angles greater than the acceptance angle of the aperture (such that the specular component is lost), the incoherent assumption can be justified once again. The work highlights the importance of using ring-light illumination and/or polarizing optics, which are often available as options on commercial instruments, as a means to mitigate or prevent these effects.

7.
J Biomech ; 47(3): 625-30, 2014 Feb 07.
Article in English | MEDLINE | ID: mdl-24373509

ABSTRACT

This note reports observations of the change in stiffness of human mesenchymal stem cells (hMSCs) with the progress of cell death, as measured by AFM. Viable hMSCs, cells with impaired membranes, and dead cells were labelled with Annexin V and propidium iodide after 24 h of cold storage; AFM measurements followed, from which the Young's modulus of the cells was derived. Viable hMSCs have a Young's modulus (E) in the range of 0.81-1.13 kPa, with consistent measurements observed across different measurement locations. E of cells with a partially impaired membrane was 0.69 ± 0.17 kPa or in the range of 2.04-4.74 kPa, depending on the measurement location. With the loss of membrane integrity, although there was no variation in measured E between different locations, a mixed picture of cell stiffness was observed: cells with E as low as 0.09 ± 0.03 kPa, in a mid-range of 4.62 ± 0.67 kPa, and as high as 48.98 ± 19.80 kPa. As cell death progressed, the highest stiffness was noticed in cells showing a more granular appearance, and the lowest in cells with a vacuolated appearance. Findings from this study indicate that cell stiffness is significantly altered with the progress of cell death.


Subject(s)
Cell Death/physiology , Cryopreservation , Elastic Modulus/physiology , Mesenchymal Stem Cells/physiology , Microscopy, Atomic Force/methods , Models, Biological , Cell Membrane/physiology , Cell Survival/physiology , Cells, Cultured , Humans , Mesenchymal Stem Cells/cytology
8.
J Nurs Adm ; 43(5): 258-65, 2013 May.
Article in English | MEDLINE | ID: mdl-23615367

ABSTRACT

This study uses the qualitatively developed Adams Influence Model (AIM) and concepts from the psychometrically validated Revised Professional Practice Environment scale to guide the development of the Leadership Influence over Professional Practice Environments Scale. Nurse executives and others can use this scale individually or in conjunction with instruments targeting staff or patient perceptions of their influence, as part of health services research, leadership development, and professional practice environment enhancement strategies.


Subject(s)
Leadership , Nurse Administrators/psychology , Adult , Aged , Female , Humans , Male , Middle Aged , Models, Nursing , Models, Psychological , Nursing Administration Research , Organizational Culture , Psychometrics , Workplace/organization & administration , Workplace/psychology
9.
Clin Endocrinol (Oxf) ; 77(1): 56-61, 2012 Jul.
Article in English | MEDLINE | ID: mdl-21913955

ABSTRACT

Although vitamin D deficiency has been associated with increased insulin resistance, a causal link has not been established. Interpreting the relationship has been confounded by a close correlation between vitamin D deficiency and obesity. The current clinical approach of assessing endogenous 25-hydroxyvitamin D (25(OH)D) concentrations in patients with chronic kidney disease (CKD), and independently administering activated vitamin D (AD), allows a unique opportunity to clarify cause and effect in the relationship of vitamin D, obesity and insulin resistance. METHODS: We assessed how 25(OH)D and body mass index (BMI) related to fasting insulin concentrations in 120 nondiabetic patients with CKD. In addition, we described how treatment with AD modified these relationships. RESULTS: In the full cohort, fasting insulin concentrations varied inversely with both 25(OH)D (r = -0.22, P = 0.02) and BMI (r = -0.36, P < 0.0001). The administration of AD altered these relationships. In individuals treated with AD, there was no association between 25(OH)D and fasting insulin, and the mean fasting insulin concentrations were significantly lower than in those not receiving AD (40.5 ± 22.0 vs 54.1 ± 30.9 pM, P = 0.01). In a multivariate analysis, both AD treatment and BMI were independent predictors of fasting insulin. Furthermore, obese patients treated with AD had insulin concentrations similar to nonobese patients (46.1 ± 24.9 vs 40.2 ± 21.5 pM), whereas untreated obese patients had markedly higher fasting insulin concentrations (74.4 ± 33.4 pM, P = 0.003). CONCLUSION: 25(OH)D deficiency is associated with insulin resistance in CKD. Replacement with pharmacologic doses of AD is associated with lower fasting insulin concentrations, especially in obese patients.


Subject(s)
Insulin Resistance , Renal Insufficiency, Chronic/blood , Vitamin D/analogs & derivatives , Adult , Aged , Body Composition/physiology , Body Mass Index , Cohort Studies , Fasting/blood , Fasting/metabolism , Female , Humans , Insulin/blood , Insulin Resistance/physiology , Male , Middle Aged , Obesity/blood , Obesity/complications , Obesity/epidemiology , Obesity/metabolism , Renal Insufficiency, Chronic/complications , Renal Insufficiency, Chronic/epidemiology , Renal Insufficiency, Chronic/metabolism , Vitamin D/blood , Vitamin D/physiology , Vitamin D Deficiency/blood , Vitamin D Deficiency/complications , Vitamin D Deficiency/epidemiology , Vitamin D Deficiency/metabolism
10.
Anal Chem ; 83(23): 8900-5, 2011 Dec 01.
Article in English | MEDLINE | ID: mdl-22029261

ABSTRACT

Interactions between biomolecules are an important feature of biological systems, and understanding these interactions is a key goal in biochemical studies. Using conventional techniques, such as surface plasmon resonance and isothermal titration calorimetry, the determination of binding constants requires a significant amount of time and resources to produce and purify sufficient quantities of biomolecules in order to measure the affinity of biological interactions. Using DNA hybridization, we have demonstrated a new technique based on the use of nanotethers and time-resolved Förster resonance energy transfer (FRET) that significantly reduces the amount of material required to carry out quantitative binding assays. Test biomolecules were colocalized and attached to a surface using DNA tethers constructed from overlapping oligonucleotides. The length of the tethers defines the effective concentration of the tethered biomolecule. Effective end concentrations ranging from 56 nM to 3.8 µM were demonstrated. The use of variable length tethers may have wider applications in the quantitative measurement of affinity binding parameters.
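The way a tether length sets an effective concentration can be estimated with the usual confined-sphere approximation: one binding partner is free to explore a sphere whose radius equals the tether length, and the effective concentration is one molecule per that volume. This is a generic first-order model, not necessarily the calibration used in the paper.

```python
import math

AVOGADRO = 6.02214076e23  # mol^-1

def effective_concentration_molar(tether_length_nm):
    # One tethered molecule confined to a sphere of radius equal to the
    # tether length; returns the effective concentration in mol/L.
    radius_m = tether_length_nm * 1e-9
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    volume_litres = volume_m3 * 1e3
    return 1.0 / (AVOGADRO * volume_litres)
```

Under this approximation, a tether of roughly 47 nm gives an effective concentration of about 3.8 µM, the upper end of the range reported above; longer tethers give proportionally lower concentrations.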


Subject(s)
Fluorescence Resonance Energy Transfer , Nanostructures/chemistry , DNA, Single-Stranded/chemistry , Fluorescent Dyes/chemistry , Monte Carlo Method , Nucleic Acid Hybridization , Oligonucleotides/chemistry
11.
IEEE Trans Neural Netw ; 21(2): 262-74, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20040415

ABSTRACT

This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
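A one-parameter caricature of the recursive second-order update conveys the idea: curvature information is accumulated recursively and the Newton step is damped, as in Levenberg-Marquardt. The forgetting factor and damping value below are illustrative; the paper works with full covariance matrices, RNN Jacobians, and Bayesian hyperparameter adaptation, all omitted here.

```python
def recursive_lm_step(w, h, x, y, lam=1e-2, forget=0.99):
    # One recursive Levenberg-Marquardt step for the scalar model y ≈ w·x:
    # accumulate forgetting-weighted curvature h, then take a damped
    # Newton step against the gradient of the squared error.
    e = w * x - y           # prediction error
    grad = x * e            # gradient of 0.5·e² with respect to w
    h = forget * h + x * x  # recursive curvature (scalar "Hessian")
    w = w - grad / (h + lam)
    return w, h
```

Iterating over a stream of (x, y) pairs drives w toward the generating slope while the accumulated curvature shrinks the step size.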


Subject(s)
Bayes Theorem , Neural Networks, Computer , Algorithms , Computer Simulation , Electricity , Lasers , Nonlinear Dynamics , Probability , Time Factors
12.
Neural Netw ; 21(1): 36-47, 2008 Jan.
Article in English | MEDLINE | ID: mdl-17983727

ABSTRACT

This paper presents a sequential Bayesian approach to kernel modelling of data that contain unusual observations and outliers. The noise is heavy-tailed, described as a one-dimensional mixture distribution of Gaussians. The development uses a factorised variational approximation to the posterior of all unknowns, which helps to perform tractable Bayesian inference at two levels: (1) sequential estimation of the weights distribution (including its mean vector and covariance matrix); and (2) recursive updating of the noise distribution and batch evaluation of the weights prior distribution. These steps are repeated, and the free parameters of the non-Gaussian error distribution are adapted at the end of each cycle. The reported results show that this is a robust approach that can outperform standard methods in regression and time-series forecasting.
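The role of the Gaussian-mixture noise model can be illustrated with the posterior "responsibility" that a residual came from the broad outlier component; residuals assigned to that component are effectively downweighted when the weights are re-estimated. The two-component form, mixture weight, and variances below are arbitrary placeholders, not the paper's fitted values.

```python
import math

def gaussian(x, mu, var):
    # Univariate normal density.
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def outlier_responsibility(residual, var_inlier=1.0, var_outlier=100.0, pi_out=0.05):
    # Posterior probability that a residual was generated by the broad
    # outlier component of a two-Gaussian noise mixture.
    p_in = (1.0 - pi_out) * gaussian(residual, 0.0, var_inlier)
    p_out = pi_out * gaussian(residual, 0.0, var_outlier)
    return p_out / (p_in + p_out)
```

Small residuals get responsibilities near zero (treated as ordinary noise), while a residual many inlier standard deviations out is attributed almost entirely to the outlier component.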


Subject(s)
Bayes Theorem , Neural Networks, Computer , Noise , Humans , Normal Distribution , Regression Analysis , Time Factors
13.
Neural Netw ; 16(10): 1527-40, 2003 Dec.
Article in English | MEDLINE | ID: mdl-14622880

ABSTRACT

This paper presents a constructive approach to neural network modeling of polynomial harmonic functions. It is an approach to growing higher-order networks, like those built by the multilayer GMDH algorithm, using activation polynomials. Two contributions for enhancing neural network learning are offered: (1) extending the expressive power of the network representation with another compositional scheme for combining polynomial terms and harmonics obtained analytically from the data; (2) improving the higher-order network performance with a backpropagation algorithm for further gradient-descent learning of the weights, initialized by least-squares fitting during the growing phase. Empirical results show that the polynomial harmonic version phGMDH outperforms the previous GMDH, a Neurofuzzy GMDH, and traditional MLP neural networks on time-series modeling tasks. Applying backpropagation training afterwards helps to achieve superior polynomial network performance.
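The growing step of a GMDH-style network can be sketched as fitting a small activation polynomial to each pair of inputs and keeping the best-scoring units for the next layer. The reduced polynomial below (constant, linear, and cross terms only, no harmonics) and the training-error selection rule are simplifications of the multilayer algorithm, chosen to keep the sketch short.

```python
import itertools

def fit_poly_unit(x1, x2, y):
    # Least-squares fit of y ≈ a + b·x1 + c·x2 + d·x1·x2 via the normal
    # equations, solved by Gaussian elimination with partial pivoting.
    rows = [[1.0, u, v, u * v] for u, v in zip(x1, x2)]
    n = 4
    a = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(a[r][c] * coef[c] for c in range(r + 1, n))) / a[r][r]
    return coef

def best_unit(features, y):
    # Try every pair of input features; keep the pair whose fitted unit
    # has the lowest squared error (a stand-in for validation scoring).
    def sse(i, j):
        a0, b1, c2, d3 = fit_poly_unit(features[i], features[j], y)
        return sum((a0 + b1 * u + c2 * v + d3 * u * v - t) ** 2
                   for u, v, t in zip(features[i], features[j], y))
    return min(itertools.combinations(range(len(features)), 2),
               key=lambda p: sse(*p))
```

In the full algorithm the outputs of the selected units become inputs to the next layer, and growth stops when the validation score no longer improves.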


Subject(s)
Computer Simulation , Database Management Systems , Learning , Neural Networks, Computer , Algorithms , Artificial Intelligence , Generalization, Psychological , Humans , Motivation , Signal Processing, Computer-Assisted , Time Factors
14.
Int J Neural Syst ; 12(5): 399-410, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12424810

ABSTRACT

This paper presents a genetic programming system that evolves polynomial harmonic networks: multilayer feed-forward neural networks with polynomial activation functions. The novel hybrids allow harmonics with non-multiple frequencies to enter the activation polynomials as inputs. These harmonics with non-multiple, irregular frequencies are derived analytically using the discrete Fourier transform. The polynomial harmonic networks have a tree-structured topology, which makes them especially suitable for evolutionary structural search. Empirical results show that this hybrid genetic programming system outperforms an evolutionary system manipulating polynomials, traditional Koza-style genetic programming, and the harmonic GMDH network algorithm on time-series processing.


Subject(s)
Fourier Analysis , Artificial Intelligence , Computer Simulation , Genetics , Models, Statistical , Neural Networks, Computer