Results 1 - 20 of 47
1.
Biochem Biophys Res Commun ; 726: 150281, 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-38909532

ABSTRACT

Cell-fusion-mediated generation of multinucleated syncytia is a critical feature of viral infection and of development. The efficiency of syncytia formation is usually reported as a fusion index for a given condition, quantified as the number of nuclei within syncytia normalized to the total number of nuclei (both within syncytia and in unfused cells) per field of view. However, heterogeneity in syncytium size poses a challenge for quantifying cell-fusion multinucleation across diverse conditions. Using an in-vitro model of virus-cell fusion mediated by SARS-CoV-2 spike-protein variants and placental trophoblast syncytialization as a cell-cell fusion model, we demonstrate a simple, unbiased, and detailed measure of virus-cell and cell-cell multinucleation based on the empirical cumulative distribution function (CDF) and fusion number events (FNE), providing comprehensive metrics for interpreting syncytia.
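
As a minimal illustration of the ECDF-style readout described above (a hedged sketch: the counts, the ecdf helper, and the fusion-index formula are illustrative assumptions, and the paper's exact FNE definition is not reproduced here):

```python
import numpy as np

def ecdf(nuclei_per_cell):
    """Empirical CDF of nuclei counts per cell or syncytium."""
    x = np.sort(np.asarray(nuclei_per_cell))
    y = np.arange(1, x.size + 1) / x.size
    return x, y

# Hypothetical counts for one field of view: 1 = unfused cell,
# >1 = nuclei within one syncytium.
counts = np.array([1, 1, 1, 2, 2, 3, 5, 8, 12])
x, y = ecdf(counts)

# Conventional fusion index for comparison: nuclei inside syncytia
# divided by all nuclei in the field.
fusion_index = counts[counts > 1].sum() / counts.sum()
print(fusion_index)
```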


Subject(s)
Cell Fusion , Giant Cells , SARS-CoV-2 , Trophoblasts , Humans , Giant Cells/virology , Giant Cells/cytology , SARS-CoV-2/physiology , Trophoblasts/virology , Trophoblasts/cytology , Spike Glycoprotein, Coronavirus/metabolism , Female , COVID-19/virology , Pregnancy , Virus Internalization , Placenta/virology , Placenta/cytology
2.
J Imaging ; 10(3)2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38535132

ABSTRACT

Image decolorization is an image pre-processing step widely used in image analysis, computer vision, and printing applications. The most commonly used methods give each color channel (e.g., the R component in RGB format, or the Y component of an image in CIE-XYZ format) a constant weight without considering image content. This approach is simple and fast, but it can cause significant information loss when images contain many isoluminant colors. In this paper, we propose a new method which is not only efficient but also preserves more image contrast and detail than the traditional methods. It uses the cumulative distribution function (CDF) of each color channel to compute a weight for each pixel in each channel. These weights are then used to combine the three color channels (red, green, and blue) into the final grayscale value. The algorithm works directly in RGB color space without any color conversion. To evaluate the proposed algorithm objectively, two new metrics are also developed. Experimental results show that the proposed algorithm runs as efficiently as the traditional methods and obtains the best overall performance across four different metrics.
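
A hedged sketch of the channel-CDF weighting idea (the exact per-pixel weighting in the paper may differ; the per-pixel normalization step here is an assumption):

```python
import numpy as np

def cdf_decolorize(img):
    """Decolorize an 8-bit RGB image by weighting each pixel's R, G, B
    values with that channel's empirical CDF evaluated at the pixel,
    then normalizing the three weights to sum to 1 per pixel."""
    img = img.astype(np.float64)
    weights = np.empty_like(img)
    for c in range(3):
        channel = img[..., c].astype(np.uint8)
        hist = np.bincount(channel.ravel(), minlength=256)
        cdf = np.cumsum(hist) / channel.size       # empirical CDF per channel
        weights[..., c] = cdf[channel]             # content-adaptive weight
    weights /= weights.sum(axis=-1, keepdims=True) + 1e-12
    return (weights * img).sum(axis=-1)            # grayscale result
```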

3.
Heliyon ; 10(4): e26482, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38434092

ABSTRACT

We show that conventional income inequality indexes assess income inequality incorrectly because of three problems. The unequally distributed (UD) income-based approach solves these problems, decomposes income inequality into two kinds of departure from equality, and provides two indexes. A comprehensive assessment of income inequality requires integrating the two kinds of departure. This paper proposes the relative UD (RUD) income-based approach, which combines the cumulative distribution function and the quantile function of RUD income and produces a new index integrating both kinds of departure. We investigate the properties of the new index and demonstrate its applicability using example income distributions.

4.
Sensors (Basel) ; 24(5)2024 Feb 25.
Article in English | MEDLINE | ID: mdl-38475026

ABSTRACT

The Advanced Meteorological Imager (AMI) onboard GEO-KOMPSAT-2A (GK-2A) enables the retrieval of dust aerosol optical depth (DAOD) from a geostationary satellite using infrared (IR) channels. IR observations allow retrieval of the DAOD and the dust layer altitude around the clock (24 h) over bright surfaces, particularly deserts. In this study, dust events in northeast Asia from 2020 to 2021 were investigated using five GK-2A thermal IR bands (8.7, 10.5, 11.4, 12.3, and 13.3 µm). For dust clouds, the brightness temperature difference (BTD) between 10.5 and 12.3 µm was consistently negative, while the BTD between 8.7 and 10.5 µm varied with dust intensity. This study exploited these optical properties to develop a physical approach based on DAOD lookup tables (LUTs) for the IR channels. To this end, thermal radiative transfer was simulated using a forward model, with dust aerosols characterized by the BTD of the 10.5 and 12.3 µm channels, an intrinsic signature of dust. The DAOD and dust properties were obtained from the 10.5 µm brightness temperature (BT) and the 10.5/12.3 µm BTD. Additionally, the cumulative distribution function (CDF) was employed to strengthen the temporal continuity of the 24-h DAOD: the CDF was applied by calculating conversion coefficients for IR DAOD error correction, with the daytime visible aerosol optical depth taken as the true value. The results show that the DAOD product can be applied during both daytime and nighttime to continuously monitor the flow of yellow dust over northeast Asia from the GK-2A satellite. In particular, the validation results for IR DAOD were similar to those of the active satellite product (CALIPSO/CALIOP), which exhibited a similar tendency to the IR DAOD at night.
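
A toy sketch of the BTD signatures described above (the pixel values are hypothetical; the actual retrieval goes through forward-model lookup tables rather than a simple sign test):

```python
# Brightness temperatures (K) for three GK-2A AMI bands at one pixel
# (hypothetical values for illustration).
bt_08_7, bt_10_5, bt_12_3 = 285.0, 288.0, 290.5

btd_split_window = bt_10_5 - bt_12_3     # consistently negative over dust
btd_dust_intensity = bt_08_7 - bt_10_5   # varies with dust intensity

is_dust_candidate = btd_split_window < 0.0
print(is_dust_candidate, btd_dust_intensity)
```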

5.
Food Res Int ; 178: 113933, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38309904

ABSTRACT

Efficient food safety risk assessment significantly affects food safety supervision. However, food detection data of different types and batches show different feature distributions, resulting in unstable detection results for most risk assessment models, a lack of interpretability in risk classification, and insufficient risk traceability. This study explores an efficient food safety risk assessment model that combines robustness, interpretability, and traceability: the Explainable unsupervised risk Warning Framework based on the Empirical cumulative Distribution function (EWFED). First, the underlying distribution of the detection data is estimated non-parametrically by calculating each testing indicator's empirical cumulative distribution. Next, the tail probabilities of each testing indicator are estimated from these distributions and summed to obtain the sample risk value. Finally, the "3σ rule" is used to achieve explainable risk classification of qualified samples, and the causes of unqualified samples are traced via the risk score of each testing indicator. Experiments with the EWFED model on two types of dairy-product detection data from real application scenarios verified its effectiveness, achieving interpretable risk classification and risk tracing of unqualified samples. This study thus provides a more robust and systematic food safety risk assessment method to promote precise management and control of food safety risks.
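
A hedged sketch of the EWFED scoring pipeline (the transform from tail probability to risk score is an assumption, and the paper's exact scoring may differ):

```python
import numpy as np

def ewfed_risk(detections):
    """detections: (n_samples, n_indicators) matrix of test results,
    where larger values are assumed riskier. Returns per-sample risk
    values, per-indicator scores for risk tracing, and a 3-sigma
    classification."""
    n = detections.shape[0]
    # Empirical CDF value of each observation within its own indicator.
    ranks = detections.argsort(axis=0).argsort(axis=0)
    tail = 1.0 - (ranks + 1) / n              # upper-tail probability
    scores = -np.log(tail + 1.0 / n)          # assumed score transform
    risk = scores.sum(axis=1)                 # per-sample risk value
    mu, sigma = risk.mean(), risk.std()
    label = np.where(risk > mu + 3 * sigma, "high-risk", "normal")
    return risk, scores, label

rng = np.random.default_rng(0)
risk, scores, label = ewfed_risk(rng.lognormal(size=(500, 6)))
print(label.tolist().count("high-risk"))
```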


Subject(s)
Food Safety , Food , Food Safety/methods , Risk Factors , Risk Assessment
6.
J Biopharm Stat ; : 1-13, 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37982583

ABSTRACT

OBJECTIVES: The FDA recommends the use of anchor-based methods and empirical cumulative distribution function (eCDF) curves to establish a meaningful within-patient change (MWPC) for a clinical outcome assessment (COA). In practice, the estimates obtained from model-based methods and from eCDF curves may not closely align, even though an anchor is used with both. To help interpret their results, we investigated and compared these approaches. METHODS: Both repeated measures models (RMMs) and eCDF approaches were used to estimate an MWPC for a target COA. We used both a real-life dataset (ClinicalTrials.gov: NCT02697773) and simulated datasets that included 688 patients with up to six visits per patient, a target COA (range 0 to 10), and an anchor measure of patient global assessment of osteoarthritis from 1 (very good) to 5 (very poor). Ninety-five percent confidence intervals for the MWPC were calculated by the bootstrap method. RESULTS: The distribution of the COA score changes affected the degree of concordance between the RMM and eCDF estimates. COA score changes from simulated normally distributed data led to greater concordance between the two approaches than COA score changes from the actual clinical data. The confidence intervals of the eCDF-based MWPC estimates were much wider than those of the RMM-based estimates, and the eCDF point estimates varied noticeably across visits. CONCLUSIONS: Our comparison highlighted the advantages of model-based methods over eCDF approaches: the former integrate more information across a diverse range of COA and anchor scores and provide more precise estimates of the MWPC.
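
For orientation, a minimal sketch of the eCDF side of such an analysis (the data are simulated; the actual scores from NCT02697773 are not used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated COA change scores for patients whose anchor improved by
# exactly one category (the usual anchor group for an MWPC).
change = rng.normal(-1.2, 1.5, size=200)

# eCDF-based MWPC estimate: the median change in the anchor group.
mwpc = np.median(change)

# 95% bootstrap confidence interval, as in the abstract.
boot = np.array([np.median(rng.choice(change, change.size))
                 for _ in range(2000)])
print(mwpc, np.percentile(boot, [2.5, 97.5]))
```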

7.
Front Bioeng Biotechnol ; 11: 1228922, 2023.
Article in English | MEDLINE | ID: mdl-37860626

ABSTRACT

The purpose of this study was to develop injury risk functions (IRFs) for the anterior and posterior cruciate ligaments (ACL and PCL) and the medial and lateral collateral ligaments (MCL and LCL) of the knee joint. The IRFs were based on post-mortem human subjects (PMHSs). Available specimen-specific failure strains were supplemented with statistically generated failure strains (virtual values) to compensate for detailed experimental data not provided in the literature. The virtual values were derived from the means and standard deviations reported in the experimental studies. All virtual and specimen-specific values were then categorized into static-rate and dynamic-rate groups and tested for the best-fitting theoretical distribution to derive a ligament-specific IRF. A total of 10 IRFs were derived (three for the ACL, two for the PCL, two for the MCL, and three for the LCL). The ACL, MCL, and LCL received IRFs for both dynamic and static tensile rates, while a sufficient dataset was available only for dynamic rates for the PCL. The log-logistic and Weibull distributions fit the empirical datasets best for all ligaments (p-values > 0.9, RMSE 2.3%-4.7%). These IRFs are, to the best of the authors' knowledge, the first attempt to generate injury prediction tools based on PMHS data for the four knee ligaments. The study summarizes the relevant literature on PMHS tensile tests of the knee ligaments and uses the available empirical data to create the IRFs. Future improvements require upcoming experiments to provide comparable testing and strain measurements, a clear definition of failure, and transparent reporting of each specimen-specific result.
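
A hedged sketch of deriving one such IRF (the failure strains are hypothetical; the paper also fits log-logistic candidates and handles virtual values in ways not shown here):

```python
import numpy as np
from scipy import stats

# Hypothetical pooled failure strains (%) for one ligament/rate group,
# mixing specimen-specific and statistically generated "virtual" values.
failure_strain = np.array([14.2, 17.8, 19.5, 21.0, 22.4, 24.9, 27.3, 30.1])

# Fit a two-parameter Weibull (location fixed at zero); its CDF is the
# injury risk function IRF(eps) = P(failure strain <= eps).
shape, loc, scale = stats.weibull_min.fit(failure_strain, floc=0)
irf = lambda eps: stats.weibull_min.cdf(eps, shape, loc=loc, scale=scale)
print(irf(20.0))   # estimated injury risk at 20% strain
```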

8.
Heliyon ; 9(9): e19451, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37681146

ABSTRACT

For Orthogonal Frequency Division Multiplexing (OFDM) systems, the most significant problem is the high peak-to-average power ratio (PAPR). The partial transmission sequence (PTS) technique is one of the most effective approaches for reducing PAPR, but searching the phase rotation factor combinations is computationally demanding. This work proposes a PTS scheme based on the Chaotic Biogeography-Based Optimization (CBBO) algorithm to effectively address the high PAPR of Generalized Frequency Division Multiplexing (GFDM) waveforms; the Hermitian symmetry property is utilised to obtain a real-valued time-domain signal. Phase rotation factor combinations are selected effectively and near-optimally through this innovative combination of optimization techniques, which offers fast convergence and low complexity compared with conventional optimization techniques. Experimental results demonstrate that the proposed CBBO-PTS technique significantly outperforms traditional PTS methods, such as conventional GFDM and OFDM-PTS, in minimizing PAPR.
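
For context, a brute-force PTS baseline (a sketch for plain OFDM with QPSK; the paper replaces this exhaustive phase search with CBBO and applies it to GFDM with Hermitian symmetry, which is not shown here):

```python
import numpy as np
from itertools import product

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_min_papr(X, n_blocks=4, phases=(1, -1, 1j, -1j)):
    """Split the frequency-domain symbol into contiguous subblocks,
    rotate each subblock by a candidate phase factor, and keep the
    combination with the lowest PAPR."""
    N = X.size
    parts = []
    for v in range(n_blocks):
        Xv = np.zeros(N, dtype=complex)
        Xv[v * N // n_blocks:(v + 1) * N // n_blocks] = \
            X[v * N // n_blocks:(v + 1) * N // n_blocks]
        parts.append(np.fft.ifft(Xv))
    return min((sum(b * p for b, p in zip(combo, parts))
                for combo in product(phases, repeat=n_blocks)),
               key=papr_db)

X = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], size=64)  # QPSK symbol
print(papr_db(np.fft.ifft(X)), papr_db(pts_min_papr(X)))
```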

9.
Sensors (Basel) ; 23(2)2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36679746

ABSTRACT

Orthogonal frequency division multiplexing (OFDM) offers high spectrum efficiency and excellent resistance to multipath interference, and it is currently the most popular and mature technology in wireless communication. However, OFDM is a multi-carrier system and inevitably suffers from a high peak-to-average power ratio (PAPR); a signal with too high a PAPR is prone to distortion when passing through an amplifier due to nonlinearity. To address the troubles caused by high PAPR, we propose an improved tone reservation (I-TR) algorithm, which performs a modest amount of pre-calculation to estimate the rough proportion of peak reduction tones (PRTs), determines an appropriate output power allocation threshold, and then uses a few iterations to converge to a near-optimal PAPR. Our proposed scheme significantly outperforms previous works, such as selective mapping (SLM), partial transmission sequence (PTS), TR, and tone injection (TI), in both PAPR performance and computational complexity. The simulation results show that our proposed scheme reduces the PAPR by about 6.44 dB compared with the original OFDM technique at a complementary cumulative distribution function (CCDF) of 10^-3, and that the complexity of I-TR is reduced by approximately 96% compared to TR. Moreover, in terms of bit error rate (BER), our proposed method consistently outperforms the original OFDM without any sacrifice.
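
The CCDF figure of merit quoted above can be reproduced in a few lines (a sketch for plain QPSK/OFDM; the I-TR algorithm itself is not reimplemented here):

```python
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

# PAPR of many random OFDM symbols with N = 256 subcarriers.
paprs = np.array([papr_db(np.fft.ifft(rng.choice(qpsk, 256)))
                  for _ in range(10000)])

# CCDF: probability that the PAPR exceeds a threshold gamma (dB).
gamma = np.linspace(4, 12, 81)
ccdf = (paprs[None, :] > gamma[:, None]).mean(axis=1)
# The abstract's comparison point is the gamma at which CCDF = 1e-3.
```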


Subject(s)
Communication , Signal Processing, Computer-Assisted , Computer Simulation , Algorithms , Amplifiers, Electronic
10.
Int J Biostat ; 19(1): 1-19, 2023 05 01.
Article in English | MEDLINE | ID: mdl-35749155

ABSTRACT

It has been reported that about half of biological discoveries are irreproducible. These irreproducible discoveries have been partially attributed to poor statistical power, which is mainly owed to small sample sizes. However, in molecular biology and medicine, owing to limited biological resources and budgets, most molecular biology experiments are conducted with small samples. The two-sample t-test controls bias by using degrees of freedom, but this also implies that the t-test has low power in small samples. A discovery found with low statistical power is likely to have poor reproducibility, so raising statistical power is not a feasible way to enhance reproducibility in small-sample experiments. An alternative is to reduce the type I error rate. To this end, a so-called tα-test was developed. Both theoretical analysis and simulation demonstrate that the tα-test greatly outperforms the t-test; however, the tα-test reduces to the t-test when sample sizes exceed 15. Large-scale simulation studies and real experimental data show that the tα-test significantly reduced the type I error rate compared with the t-test and the Wilcoxon test in small-sample experiments, while having almost the same empirical power as the t-test. The null p-value density distribution explains why the tα-test has a much lower type I error rate than the t-test. One real experimental dataset provides a typical example in which the tα-test outperforms the t-test, and a microarray dataset showed that the tα-test had the best performance among five statistical methods. In addition, the density and cumulative distribution functions of the tα-statistic were derived mathematically, and the theoretical and observed distributions match well.
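
The low-power premise is easy to verify by simulation (a sketch; the tα-test itself is the paper's contribution and is not reimplemented here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, trials, effect = 5, 20000, 1.0   # tiny groups, a one-SD true effect

# Empirical power of the two-sample t-test at alpha = 0.05.
hits = sum(stats.ttest_ind(rng.normal(size=n),
                           rng.normal(effect, 1.0, size=n)).pvalue < 0.05
           for _ in range(trials))
print(hits / trials)   # roughly 0.3 here, far below the usual 0.8 target
```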


Subject(s)
Models, Statistical , Reproducibility of Results , Computer Simulation , Likelihood Functions , Sample Size
11.
Methods Mol Biol ; 2593: 171-195, 2023.
Article in English | MEDLINE | ID: mdl-36513931

ABSTRACT

Lysosomes are highly dynamic degradation/recycling organelles that harbor sophisticated molecular sensors and signal transduction machinery, through which they control cell adaptation to environmental cues and nutrients. The movements of these signaling hubs comprise persistent, directional runs (active, ATP-dependent transport along microtubule tracks) interspersed with short, passive movements and pauses imposed by cytoplasmic constraints. The trajectories of individual lysosomes are usually obtained by time-lapse imaging of the acidic organelles labeled with LysoTracker dyes or fluorescently tagged lysosomal-associated membrane proteins LAMP1 and LAMP2. Subsequent particle tracking generates large data sets comprising thousands of lysosome trajectories and hundreds of thousands of data points. Analyzing such data sets requires unbiased, automated methods that can handle large data sets while capturing the temporal heterogeneity of lysosome trajectory data. This chapter describes an integrated and largely automated workflow from live-cell imaging to lysosome trajectories to computing the parameters of lysosome dynamics. We describe open-source code implementing the continuous wavelet transform (CWT) to distinguish trajectory segments corresponding to active transport (i.e., "runs" and "flights") from passive lysosome movements. Complementary cumulative distribution functions (CDFs) of the runs/flights are generated, and Akaike weight comparisons with several competing models (lognormal, power law, truncated power law, stretched exponential, exponential) are performed automatically. Such high-throughput analyses yield useful aggregate/ensemble metrics of lysosome active transport.
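
A hedged sketch of the Akaike-weight comparison step (with a reduced candidate set standing in for the five distributions named above; the fitting and weighting conventions here are assumptions):

```python
import numpy as np
from scipy import stats

def akaike_weights(runs, candidates=(stats.lognorm, stats.expon, stats.pareto)):
    """Fit each candidate distribution to run/flight lengths by maximum
    likelihood and convert the resulting AICs into Akaike weights."""
    aics = []
    for dist in candidates:
        params = dist.fit(runs)
        loglik = dist.logpdf(runs, *params).sum()
        aics.append(2 * len(params) - 2 * loglik)
    delta = np.array(aics) - min(aics)
    w = np.exp(-delta / 2)
    return w / w.sum()

runs = stats.lognorm.rvs(0.8, size=300, random_state=3)  # synthetic run lengths
print(akaike_weights(runs))   # weight should concentrate on lognorm
```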


Subject(s)
Lysosomes , Wavelet Analysis , Lysosomes/metabolism , Lysosomal Membrane Proteins/metabolism , Biological Transport, Active , Software
12.
IBRO Neurosci Rep ; 13: 356-363, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36281438

ABSTRACT

Merkel cells (MCs) and the associated primary sensory afferents of the whisker follicle-sinus complex accurately code whisker self-movement, angle, and whisk phase during whisking. However, little is known about the roles they play in cortical encoding of whisker movement. To this end, the spiking activity of primary somatosensory barrel cortex (wS1) neurons was measured in response to varying whisker deflection amplitude and velocity in transgenic mice with previously established reduced mechanoelectrical coupling at MC-associated afferents. Under reduced MC activity, wS1 neurons exhibited increased sensitivity to whisker deflection. This appeared to arise from a lack of variation in response magnitude as whisker deflection amplitude and velocity were varied, an effect further indicated by weaker variation in the temporal profile of the evoked spiking activity when either deflection amplitude or velocity was varied. Nevertheless, under reduced MC activity, wS1 neurons retained the ability to differentiate stimulus features based on the timing of their first post-stimulus spike. Collectively, the results of this study suggest that MCs contribute to cortical encoding of both whisker deflection amplitude and velocity, predominantly by tuning wS1 response magnitude and by patterning the evoked spiking activity, rather than by tuning wS1 response latency.

13.
Entropy (Basel) ; 24(7)2022 Jul 02.
Article in English | MEDLINE | ID: mdl-35885147

ABSTRACT

Based on the canonical correlation analysis, we derive series representations of the probability density function (PDF) and the cumulative distribution function (CDF) of the information density of arbitrary Gaussian random vectors as well as a general formula to calculate the central moments. Using the general results, we give closed-form expressions of the PDF and CDF and explicit formulas of the central moments for important special cases. Furthermore, we derive recurrence formulas and tight approximations of the general series representations, which allow efficient numerical calculations with an arbitrarily high accuracy as demonstrated with an implementation in Python publicly available on GitLab. Finally, we discuss the (in)validity of Gaussian approximations of the information density.
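
For the simplest special case, the information density and its empirical CDF can be sampled directly (a sketch for a standardized bivariate Gaussian; the paper's series representations cover arbitrary Gaussian random vectors):

```python
import numpy as np

rho, n = 0.7, 100_000
rng = np.random.default_rng(3)

# Standard bivariate Gaussian with correlation rho.
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Closed-form information density i(x;y) = log[p(x,y) / (p(x) p(y))].
i_dens = (-0.5 * np.log(1 - rho**2)
          + rho * (2 * x * y - rho * x**2 - rho * y**2) / (2 * (1 - rho**2)))

print(i_dens.mean())   # approximates I(X;Y) = -0.5 * ln(1 - rho^2)

# Empirical CDF of the information density.
xs, cdf = np.sort(i_dens), np.arange(1, n + 1) / n
```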

14.
Sensors (Basel) ; 22(13)2022 Jun 24.
Article in English | MEDLINE | ID: mdl-35808264

ABSTRACT

Air pollution has become a serious problem in all megacities. It is necessary to monitor the state of the atmosphere continuously, but pollution data from fixed stations are not sufficient for an accurate assessment of the aerosol pollution level of the air. Mobile measuring devices can significantly increase the spatiotemporal resolution of the received data. Unfortunately, the quality of readings from mobile, low-cost sensors is significantly inferior to that of stationary sensors. This makes it necessary to evaluate the various characteristics of monitoring systems as functions of the properties of the mobile sensors used. This paper presents an approach in which the time to pollution detection is treated as a random variable. To the best of our knowledge, we are the first to derive the cumulative distribution function of the pollution detection time as a function of the features of the monitoring system. The obtained distribution function makes it possible to optimize some characteristics of air pollution detection systems in a smart city.


Subject(s)
Air Pollutants , Air Pollution , Aerosols , Air Pollutants/analysis , Air Pollution/analysis , Cities , Environmental Monitoring , Particulate Matter/analysis
15.
Appl Radiat Isot ; 186: 110295, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35609403

ABSTRACT

Detailed geometric information about a high-purity Ge (HPGe) detector is very important for Monte Carlo simulation of the detector. Commonly, users have no geometric information about the detector, and the information given by the manufacturer is not completely valid for simulation. In this study, an equivalent detector geometry, whose parameters can be used for Monte Carlo simulation, is optimised for a large-volume HPGe detector using a genetic algorithm. For this purpose, a mixed-point gamma calibration standard, emitting 12 useful gamma-ray energies within 59.5-1836.1 keV, is placed at 74 different locations around the detector. Starting from an initial population of randomly generated detector geometries, the genetic algorithm produces a high-quality solution. The fitness of each geometry is obtained by comparing full-energy-peak efficiencies computed by Monte Carlo simulation with experimental values for each energy and position. Using the optimised equivalent geometry parameters in the Monte Carlo simulation, efficiencies are obtained with relative errors of less than 5% for high energies and less than 7% for lower energies, except 59.5 keV. In addition, it is demonstrated that crystal dimensions smaller than the real dimensions must be used in Monte Carlo simulations of large-volume HPGe detectors to reproduce the experimentally measured efficiencies.

16.
J Food Sci ; 87(5): 2096-2111, 2022 May.
Article in English | MEDLINE | ID: mdl-35355270

ABSTRACT

The Weibull cumulative distribution function and its survival function were reparameterized to obtain parameters that are meaningful in the food and biological sciences, such as the lag phase (λ), the maximum rate (µ_max), and the maximum increase/decrease of the function (A). The application of the Lambert W function was crucial for achieving an explicit mathematical solution. Since the reparameterized model is applicable only when the shape parameter (α) is greater than one, the Weibull model was modified by introducing a new parameter (µ_β) that represents the model rate at time β (the scale parameter). All models were applied to literature data on food technology and microbiology topics: microbial growth, thermal microbial inactivation, thermal degradation kinetics, and particle size distributions. The Weibull model and the reparameterized versions showed identical fitting performance in terms of coefficient of determination, residual mean standard error, residual values, and estimated parameter values. Some faults in the datasets used in this study underline how critical a good experimental plan is when data modeling is approached. The parameter µ_β proved to be an interesting new rate parameter that is not correlated with the scale parameter (|r̄| = 0.64 ± 0.37) and is highly correlated with the shape parameter (|r̄| = 0.90 ± 0.11). The Weibull probability density function was also reparameterized using both the standard and the new parameters and applied to experimental data, yielding useful information from the distribution curve, such as the mode (µ_max) and a measure of curve skewness (λ).
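
For reference, the standard Weibull forms that the reparameterization starts from, plus a rate-at-β reading of µ_β (the derivative below is our reading of "the model rate at time β"; the paper's explicit λ and µ_max expressions via the Lambert W function are not reproduced):

```latex
\begin{align}
  F(t) &= A\left[1 - e^{-(t/\beta)^{\alpha}}\right]
        && \text{(cumulative form)} \\
  S(t) &= A\, e^{-(t/\beta)^{\alpha}}
        && \text{(survival form)} \\
  \mu_{\beta} &= \left.\frac{dF}{dt}\right|_{t=\beta}
             = \frac{A\,\alpha}{e\,\beta}
        && \text{(rate at the scale time } t=\beta\text{)}
\end{align}
```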


Subject(s)
Food Microbiology , Food Technology , Kinetics , Microbial Viability , Models, Biological
17.
Sensors (Basel) ; 21(21)2021 Nov 07.
Article in English | MEDLINE | ID: mdl-34770702

ABSTRACT

With the growing demand for structural health monitoring systems, data imaging is an ideal method for performing regular routine maintenance inspections. Image analysis can provide invaluable information about the health condition of existing infrastructure by recording and analyzing exterior damage. It is therefore desirable to have an automated approach that reports defects in images reliably and robustly. This paper presents a multivariate analysis approach for images, specifically for assessing substantial damage such as cracks. The image analysis provides graph representations related to the image, such as the histogram. Image-processing techniques such as grayscale conversion are also implemented to enhance the object information present in the image. In addition, this study uses image segmentation to transform the image into a form that is easier to analyze, and a neural network as a classifier. Each concrete structure image is first preprocessed to highlight the crack. A neural network is then used to compute and categorize the visual characteristics of each region, achieving a classification accuracy of 98%. Experimental results show that thermal image extraction yields better histogram and cumulative distribution function features. The system can promote the development of various thermal-image applications, such as nonphysical visual recognition and fault detection analysis.


Subject(s)
Image Processing, Computer-Assisted , Thermography , Multivariate Analysis , Neural Networks, Computer
18.
Comput Struct Biotechnol J ; 19: 4603-4618, 2021.
Article in English | MEDLINE | ID: mdl-34471502

ABSTRACT

BACKGROUND: Gliomas are one of the most common types of primary tumor in the central nervous system. Previous studies have found that macrophages actively participate in tumor growth. METHODS: Weighted gene co-expression network analysis was used to identify meaningful macrophage-related genes for clustering. Pamr, SVM, and neural-network classifiers were applied to validate the clustering results. Somatic mutation and methylation data were used to define the features of the identified clusters. Differentially expressed genes (DEGs) between the stratified groups, obtained after elastic regression and principal component analyses, were used to construct the MScore. The expression of macrophage-specific genes was evaluated in the tumor microenvironment by single-cell sequencing analysis. A total of 2365 samples from 15 glioma datasets and 5842 pan-cancer samples were used for external validation of the MScore. RESULTS: Macrophages were negatively associated with the survival of glioma patients. Twenty-six macrophage-specific DEGs obtained by elastic regression and PCA were highly expressed in macrophages at the single-cell level. The prognostic value of the MScore in glioma was supported by the active proinflammatory and metabolic profile of the infiltrating microenvironment and the response to immunotherapies of samples with this signature. The MScore stratified patient survival probabilities in 15 external glioma datasets and in pan-cancer datasets, with higher scores predicting worse survival. Sequencing data and immunohistochemistry from the Xiangya glioma cohort confirmed the prognostic value of the MScore, and a prognostic model based on the MScore demonstrated high accuracy. CONCLUSION: Our findings strongly support a modulatory role of macrophages, especially M2 macrophages, in glioma progression and warrant further experimental studies.

19.
Ecol Evol ; 11(12): 7461-7473, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34188827

ABSTRACT

Most existing functional diversity indices focus on a single facet of functional diversity. Although these indices are useful for quantifying specific aspects of functional diversity, they often present some conceptual or practical limitations in estimating functional diversity. Here, we present a new functional extension and evenness (FEE) index that encompasses two important aspects of functional diversity. This new index is based on the straightforward notion that a community has high diversity when its species are distant from each other in trait space. The index quantifies functional diversity by evaluating the overall extension of species traits and the interspecific differences of a species assemblage in trait space. The concept of minimum spanning tree (MST) of points was adopted to obtain the essential distribution properties for a species assembly in trait space. We combined the total length of MST branches (extension) and the variation of branch lengths (evenness) into a raw FEE0 metric and then translated FEE0 to a species richness-independent FEE index using a null model approach. We assessed the properties of FEE and used multiple approaches to evaluate its performance. The results show that the FEE index performs well in quantifying functional diversity and presents the following desired properties: (a) It allows a fair comparison of functional diversity across different species richness levels; (b) it preserves the essence of single-facet indices while overcoming some of their limitations; (c) it standardizes comparisons among communities by taking into consideration the trait space of the shared species pool; and (d) it has the potential to distinguish among different community assembly processes. With these attributes, we suggest that the FEE index is a promising metric to inform biodiversity conservation policy and management, especially in applications at large spatial and/or temporal scales.
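
A hedged sketch of the MST computation at the core of FEE (the exact extension/evenness combination and the richness-correcting null model follow the paper and are only approximated here):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def fee_raw(traits):
    """Raw FEE0 sketch: total MST branch length over the species in
    trait space (extension), discounted by branch-length variation
    (evenness)."""
    d = squareform(pdist(traits))                 # pairwise trait distances
    branches = minimum_spanning_tree(d).data      # the S-1 MST branch lengths
    extension = branches.sum()
    evenness = 1.0 - branches.std() / branches.mean()  # assumed evenness term
    return extension * evenness

traits = np.random.default_rng(4).normal(size=(12, 3))  # 12 species, 3 traits
print(fee_raw(traits))
```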

20.
J Chromatogr A ; 1640: 461931, 2021 Mar 15.
Article in English | MEDLINE | ID: mdl-33581675

ABSTRACT

The average minimum resolution required for separating adjacent single-component peaks (SCPs) in one-dimensional chromatograms is an important metric in statistical overlap theory (SOT). However, its value changes with chromatographic conditions in non-intuitive ways when SOT is used to predict the average number of peaks (maxima). A more stable and more easily understood value of resolution is obtained by making a different prediction. A general equation is derived for the sum of all separated and superposed widths of SCPs in a chromatogram. The equation is a function of the saturation α, a metric of chromatographic crowdedness, and is expressed in dimensionless form by dividing by the duration of the chromatogram. This dimensionless function, f(α), is also the cumulative distribution function of the probability of separating adjacent SCPs. Simulations based on the clustering of line segments representing SCPs verify expressions for f(α) calculated from five functions for the distribution of intervals between adjacent SCPs. Synthetic chromatograms are computed with different saturations, interval distributions, and SCP amplitude distributions. The chromatograms are analyzed by calculating the sum of the widths of peaks at different relative responses, dividing the sum by the duration of the chromatogram, and graphing the reduced sum against relative response. For small values of relative response, the reduced sum approaches the fraction of the baseline occupied by chromatographic peaks. This fraction can be identified with f(α) if the saturation α is defined with the average minimum resolution equal to 1.5. The identification is general, independent of the saturation, the interval distribution, and the amplitude distribution. This constant value of resolution corresponds to baseline resolution, which simplifies the interpretation of SOT.
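
The occupied-baseline fraction at the heart of this argument is easy to simulate (a sketch with uniformly placed SCPs of equal width; the paper's interval distributions and amplitude effects are not modeled):

```python
import numpy as np

def occupied_fraction(n_scp=200, duration=1000.0, width=1.0, seed=5):
    """Fraction of the baseline covered by the union of SCP widths;
    the saturation is alpha = n_scp * width / duration."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.uniform(0.0, duration, n_scp))
    lo, hi = centers - width / 2, centers + width / 2
    covered, cur_lo, cur_hi = 0.0, lo[0], hi[0]
    for l, h in zip(lo[1:], hi[1:]):
        if l <= cur_hi:                      # overlapping SCPs merge
            cur_hi = max(cur_hi, h)
        else:
            covered += cur_hi - cur_lo
            cur_lo, cur_hi = l, h
    covered += cur_hi - cur_lo
    return covered / duration

alpha = 200 * 1.0 / 1000.0
print(alpha, occupied_fraction())  # near 1 - exp(-alpha) for random placement
```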


Subject(s)
Chromatography/methods , Statistics as Topic , Computer Simulation , Probability