1.
Acta Ophthalmol ; 100(8): e1553-e1560, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35415874

ABSTRACT

PURPOSE: To develop automated image-recognition software for the objective quantification of the choroidal vascularity index (CVI) and choroidal thickness (CT) at different choroidal locations on images obtained from enhanced depth imaging optical coherence tomography (EDI-OCT), to validate its reliability, and to investigate the difference and correlation between manual and software measurements. METHODS: A total of 390 EDI-OCT scans, captured from 130 eligible emmetropic or myopic subjects, were categorized into four grades according to how readily the choroidal-scleral interface (CSI) could be identified, and were further assessed for CT and CVI at five locations (subfoveal, nasal, temporal, superior and inferior) by the newly developed Choroidal Vascularity Index Software (CVIS) and by three ophthalmologists. Choroidal parameters acquired from CVIS were evaluated for reliability and for correlation with ocular factors, in comparison with manual measurements. The distribution of differences and the correlation coefficients between CVIS and manual measurements were also analysed. RESULTS: CVIS demonstrated excellent intra-session reliability for CT (ICC: 0.992) and CVI (ICC: 0.978) measurements, compared with the relatively lower intra- and inter-observer reliability of manual measurements. CVIS and manual assessments correlated most strongly at the nasal choroid (CT: r = 0.829, p < 0.001; CVI: r = 0.665, p < 0.001). Choroidal parameters identified with CVIS showed stronger correlations with axial length than manual measurements did. CONCLUSION: The automated software, CVIS, exhibited excellent reliability compared with manual measurements, which are subject to image quality and clinical experience. With its validated clinical relevance, CVIS holds promise as a flexible and robust tool for future vitreoretinal and chorioretinal studies.
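The core quantity CVIS automates reduces to a ratio of binarized pixel areas: CVI = luminal (vessel) area / total choroidal area. Below is a minimal Python sketch of that computation, assuming a pre-segmented grayscale choroid region and a Niblack local threshold (a common choice in the CVI literature; the abstract does not state which binarization CVIS itself uses):

```python
import numpy as np
from skimage.filters import threshold_niblack

def choroidal_vascularity_index(choroid_roi: np.ndarray) -> float:
    """Compute CVI for a grayscale choroid region of interest.

    CVI = luminal area / total choroidal area, where luminal
    (vessel) pixels are those darker than a local threshold.
    """
    thresh = threshold_niblack(choroid_roi, window_size=51, k=0.2)
    luminal = choroid_roi < thresh          # dark pixels = vessel lumen
    return luminal.sum() / choroid_roi.size

# Usage on a synthetic ROI (stand-in for a segmented EDI-OCT scan):
roi = np.random.rand(200, 400)
print(f"CVI = {choroidal_vascularity_index(roi):.3f}")
```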


Subject(s)
Choroid , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Reproducibility of Results , Software , Sclera
2.
J Am Soc Mass Spectrom ; 31(11): 2296-2304, 2020 Nov 04.
Article in English | MEDLINE | ID: mdl-33104352

ABSTRACT

A novel approach to phenotype prediction is developed for data-independent acquisition (DIA) mass spectrometric (MS) data, without the need for peptide precursor identification by existing DIA software tools. The first step converts the DIA-MS data file into a new file format called DIA tensor (DIAT), which allows convenient visualization of all the ions from peptide precursors and fragments. DIAT files can be fed directly into a deep neural network to predict phenotypes, in the same way image classifiers distinguish, for example, cats, dogs or microscopic images. As a proof of principle, we applied this approach to 102 hepatocellular carcinoma samples and achieved an accuracy of 96.8% in distinguishing malignant from benign samples. We further applied a refined model to classify thyroid nodules: deep learning based on 492 training samples achieved an accuracy of 91.7% in an independent cohort of 216 test samples. This approach surpassed a deep-learning model based on peptide and protein matrices generated by OpenSWATH. In summary, we present a new strategy for DIA data analysis based on a novel data format called DIAT, which enables facile two-dimensional visualization of DIA proteomics data. DIAT files can be used directly for deep learning on biological and clinical phenotype classification. Future research will interpret the deep-learning models that emerge from DIAT analysis.
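To make the pipeline concrete, here is a minimal PyTorch sketch of the final step: feeding DIAT-style two-dimensional intensity maps into a small convolutional classifier. The tensor shapes, the network, and the name diat_image are illustrative assumptions; the abstract does not specify the actual architecture:

```python
import torch
import torch.nn as nn

# Hypothetical: a DIAT file rendered as a 2-D intensity map
# (m/z x retention time); random tensors stand in for real data.
diat_image = torch.rand(8, 1, 256, 256)   # batch of 8 single-channel maps
labels = torch.randint(0, 2, (8,))        # 0 = benign, 1 = malignant

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 64 * 64, 2),           # two phenotype classes
)

# One illustrative training step.
loss = nn.CrossEntropyLoss()(model(diat_image), labels)
loss.backward()
print(f"loss = {loss.item():.4f}")
```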


Subject(s)
Mass Spectrometry/methods , Proteome/analysis , Proteomics/methods , Carcinoma, Hepatocellular/chemistry , Carcinoma, Hepatocellular/diagnosis , Deep Learning , Humans , Liver Neoplasms/chemistry , Liver Neoplasms/diagnosis , Peptides/analysis , Software , Thyroid Gland/chemistry
3.
J Proteome Res ; 19(7): 2732-2741, 2020 07 02.
Article in English | MEDLINE | ID: mdl-32053377

ABSTRACT

We report and evaluate a microflow, single-shot, short-gradient SWATH MS method intended to accelerate the discovery and verification of protein biomarkers in preclassified clinical specimens. The method uses a 15 min gradient microflow-LC peptide separation, an optimized SWATH MS window configuration, and OpenSWATH software for data analysis. We applied the method to a cohort of 204 FFPE tissue samples from 58 prostate cancer patients and 10 benign prostatic hyperplasia patients. Altogether we identified 27,975 proteotypic peptides and 4037 SwissProt proteins from these 204 samples. Compared with a reference SWATH method using a 2 h gradient, 3800 proteins were quantified by the two methods on two different instruments with relatively high consistency (r = 0.77). The accelerated method consumed only 17% of the instrument time while quantifying 80% of the proteins measured by the 2 h gradient SWATH. Although the missing-value rate increased by 20%, batch effects were reduced by 21%. Seventy-five deregulated proteins measured by the accelerated method were selected for further validation. A shortlist of 134 selected peptide precursors from the 75 proteins was analyzed using MRM-HR, and the results exhibited high quantitative consistency with the 15 min SWATH method (r = 0.89) on the same sample set. We further verified the ability of these 75 proteins to separate benign and malignant tissues (AUC = 0.99) in an independent prostate cancer cohort (n = 154). Altogether, the results show that the 15 min gradient microflow SWATH accelerated large-scale data acquisition sixfold, reduced batch effects by 21%, introduced 20% more missing values, and exhibited comparable ability to separate disease groups.
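The consistency figures above (r = 0.77, r = 0.89) are Pearson correlations computed over proteins quantified by both methods. A minimal NumPy sketch of such a comparison, on synthetic stand-in intensities (all values hypothetical, not the paper's data):

```python
import numpy as np

def pearson_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson r over proteins quantified by both methods
    (NaN marks a missing value in either run)."""
    ok = ~np.isnan(a) & ~np.isnan(b)
    return np.corrcoef(a[ok], b[ok])[0, 1]

# Hypothetical log-intensities for the same proteins from the 2 h
# reference and the 15 min run, with extra missing values in the latter.
rng = np.random.default_rng(0)
ref = rng.normal(20, 2, 4000)              # 2 h reference run
fast = ref + rng.normal(0, 1.2, 4000)      # 15 min run, noisier
fast[rng.random(4000) < 0.2] = np.nan      # ~20% more missing values

print(f"r = {pearson_overlap(fast, ref):.2f}")
print(f"missing rate = {np.isnan(fast).mean():.0%}")
```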


Subject(s)
Proteomics , Software , Biomarkers , Humans , Male , Peptides
4.
Phys Chem Chem Phys ; 20(25): 17245-17252, 2018 Jun 27.
Article in English | MEDLINE | ID: mdl-29901060

ABSTRACT

We report a strategy to enhance the room-temperature low-field magnetoresistance (LFMR) of Fe3O4 nanoparticle (NP) assemblies by controlled Zn substitution. The Zn-substituted 7 nm ZnxFe3-xO4 (x = 0 to 0.4) NPs are prepared by thermal decomposition of metal acetylacetonates (M(acac)n, M = Fe2+, Fe3+, and Zn2+). The substitution increases the NP magnetic susceptibility (χ) and makes the magnetic moment more responsive to low magnetic fields. As a result, the Zn0.3Fe2.7O4 NP assembly, with NPs separated by tridecanoate, exhibits a large magnetoresistance (MR) ratio of -14.8% at 300 K under a 4.5 kOe magnetic field. This demonstrated control of NP substitution to enhance the low-field magnetoresistance of NP assemblies provides an attractive new strategy for fabricating Fe3O4-based magnetic NP assemblies with the transport properties desired for sensitive spintronic applications.
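For reference, the MR ratio quoted above is conventionally defined relative to the zero-field resistance; the abstract does not spell out its convention, so the following is our assumption of the standard one:

```latex
\[
\mathrm{MR}(H) \;=\; \frac{R(H) - R(0)}{R(0)} \times 100\%,
\qquad
\mathrm{MR}(4.5\ \mathrm{kOe}) \approx -14.8\% \ \text{at } 300\ \mathrm{K}.
\]
```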

5.
Sci Adv ; 4(1): eaao3318, 2018 01.
Article in English | MEDLINE | ID: mdl-29344574

ABSTRACT

A magnetoresistance (MR) effect induced by the Rashba spin-orbit interaction was predicted, but not previously observed, in bilayers consisting of a normal metal and a ferromagnetic insulator. We present an experimental observation of this new type of spin-orbit MR (SOMR) in a Cu[Pt]/Y3Fe5O12 (YIG) bilayer structure, where the Cu/YIG interface is decorated with nanosize Pt islands. This new MR is evidently not caused by the bulk spin-orbit interaction, because the spin-orbit interaction in Cu is negligible and the Pt islands are discontinuous. The SOMR disappears when the Pt islands are absent or located away from the Cu/YIG interface; we can therefore unambiguously ascribe it to the Rashba spin-orbit interaction at the interface, enhanced by the Pt decoration. Numerical Boltzmann simulations are consistent with the experimental SOMR results in both the magnetic-field angular dependence and the Cu-thickness dependence. Our finding demonstrates spin manipulation by interface engineering.

6.
PLoS One ; 12(11): e0188428, 2017.
Article in English | MEDLINE | ID: mdl-29161317

ABSTRACT

As energy consumption surges unsustainably, it is important to understand the impact of existing architecture designs from an energy-efficiency perspective; this is especially valuable for High Performance Computing (HPC) and datacenter environments hosting tens of thousands of servers. One obstacle to comprehensive energy-efficiency evaluation has been the lack of an adequate power-measurement approach: most energy studies rely on either external power meters or power models, and both methods have intrinsic drawbacks in practical adoption and measurement accuracy. Fortunately, the advent of the Intel Running Average Power Limit (RAPL) interfaces has raised power-measurement capability to a new level, with higher accuracy and finer time resolution. We therefore argue that now is the right time for an in-depth evaluation of existing architecture designs to understand their impact on system energy efficiency. In this paper, we leverage representative benchmark suites, including serial and parallel workloads from diverse domains, to evaluate architecture features such as Non-Uniform Memory Access (NUMA), Simultaneous Multithreading (SMT) and Turbo Boost. Energy is tracked at the subcomponent level, covering Central Processing Unit (CPU) cores, uncore components and Dynamic Random-Access Memory (DRAM), by exploiting the power-measurement capability exposed by RAPL. The experiments reveal non-intuitive results: 1) the mismatch between local compute and remote memory nodes caused by the NUMA effect not only generates a dramatic power and energy surge but also degrades energy efficiency significantly; 2) for multithreaded applications such as the Princeton Application Repository for Shared-Memory Computers (PARSEC), most workloads gain a notable increase in energy efficiency from SMT, with more than a 40% decline in average power consumption; 3) Turbo Boost effectively accelerates workload execution and can thereby save energy, but it may not be applicable on systems with a tight power budget.
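As an illustration of the measurement interface, here is a minimal Python sketch that derives average package power from the RAPL energy counter via the Linux powercap sysfs files. The intel-rapl:0 path and a root-readable energy_uj file are assumptions about the host configuration; the paper's own tooling is not described in the abstract:

```python
import time
from pathlib import Path

# Package-level energy counter exposed by the powercap/RAPL sysfs
# interface (Linux with the intel_rapl driver; package domain 0).
RAPL = Path("/sys/class/powercap/intel-rapl:0")

def energy_uj() -> int:
    # Cumulative energy in microjoules; usually requires root to read.
    return int((RAPL / "energy_uj").read_text())

max_uj = int((RAPL / "max_energy_range_uj").read_text())

e0, t0 = energy_uj(), time.time()
time.sleep(1.0)                       # measurement window
e1, t1 = energy_uj(), time.time()

delta = (e1 - e0) % max_uj            # handle counter wraparound
print(f"average package power: {delta / 1e6 / (t1 - t0):.2f} W")
```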


Subject(s)
Computing Methodologies , Efficiency , Electric Power Supplies , Algorithms , Architecture , Physical Phenomena
7.
PLoS One ; 12(4): e0175861, 2017.
Article in English | MEDLINE | ID: mdl-28448575

ABSTRACT

Workload consolidation is a common way to increase resource utilization in clusters and data centers while still trying to ensure workload performance. To get the maximum benefit from consolidation, the task scheduler has to understand the runtime characteristics of each program and co-schedule programs with fewer resource conflicts onto the same server. We propose a set of metrics to comprehensively characterize the runtime behaviour of programs. The set consists of two types of metrics: resource usage and resource sensitivity, where resource sensitivity refers to the performance degradation caused by insufficient resources. The resource usage of a program is easy to obtain with common performance-analysis tools, but the resource sensitivity cannot be measured directly. The simplest and most intuitive way to obtain it is to run the program in an environment with controllable resources and record the performance achieved under every possible resource condition; however, such a process is extremely time-consuming when multiple resources are involved and each resource is controlled at fine granularity. To obtain the resource sensitivity of a program quickly, we propose a method that accelerates the profiling process using two strategies. First, taking advantage of the resource-usage information, we set the program's maximum resource usage as the upper bound of the controlled resource; this narrows the range of resource levels to explore and significantly reduces the number of experiments. Second, a prediction model based on interpolation reduces the profiling time even further, because the resource sensitivity under most resource conditions is obtained by interpolation instead of real program execution. We implemented both strategies and applied them to profiling program runtime characteristics. Our experimental results show that the proposed two-level acceleration not only shortens profiling but also preserves the accuracy of the resource sensitivity: with the fast profiling method, the average absolute error of the resource sensitivity stays within 0.05.
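A minimal NumPy sketch of the second strategy: measure slowdown at a few coarse resource levels, then interpolate the sensitivity curve instead of executing the program at every level. The chosen resource (memory bandwidth) and all numbers are hypothetical:

```python
import numpy as np

# Hypothetical: measured slowdown at a few coarse memory-bandwidth
# levels (fraction of full bandwidth -> normalized execution time).
measured_levels = np.array([0.25, 0.50, 0.75, 1.00])
measured_slowdown = np.array([1.80, 1.30, 1.08, 1.00])

# Predict sensitivity at fine-grained levels by linear interpolation
# rather than by re-running the program at each level.
fine_levels = np.linspace(0.25, 1.00, 16)
predicted = np.interp(fine_levels, measured_levels, measured_slowdown)

for lvl, s in zip(fine_levels, predicted):
    print(f"bandwidth {lvl:4.0%}: predicted slowdown {s:.2f}x")
```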


Subject(s)
Computers , Software/standards , Electronic Data Processing , Time Factors
8.
Nanoscale ; 8(24): 12128-33, 2016 Jun 16.
Article in English | MEDLINE | ID: mdl-27271347

ABSTRACT

We report a facile approach to stabilizing Fe3O4 nanoparticles (NPs) with tetrathiafulvalene carboxylate (TTF-COO(-)) and to controlling electron transport, with an enhanced magnetoresistance (MR) effect, in TTF-COO-Fe3O4 NP assemblies. The TTF-COO coating is advantageous over conventional organic coatings, making it possible to develop stable Fe3O4 NP arrays for sensitive spintronics applications.

9.
ACS Nano ; 9(12): 12205-13, 2015 Dec 22.
Article in English | MEDLINE | ID: mdl-26563827

ABSTRACT

We report a strategy to coat Fe3O4 nanoparticles (NPs) with tetrathiafulvalene-fused carboxylic ligands (TTF-COO-) and to control electron conduction and magnetoresistance (MR) within the NP assemblies. The TTF-COO-Fe3O4 NPs were prepared by replacing oleylamine (OA) on OA-coated 5.7 nm Fe3O4 NPs. In the TTF-COO-Fe3O4 NPs, the ligand binding density was controlled by the ligand size, and spin polarization on the Fe3O4 NPs was greatly improved. As a result, the interparticle spacing within the TTF-COO-Fe3O4 NP assemblies is readily controlled by the geometric length of the TTF-based ligand: the shorter the distance and the better the conjugation between the TTF HOMO and LUMO, the higher the conductivity and MR of the assembly. The TTF coating further stabilized the Fe3O4 NPs against deep oxidation and allowed I2 doping to increase electron conduction, making it possible to measure the MR of the NP assembly at low temperature (<100 K). The TTF-COO coating provides a viable route to stable magnetic Fe3O4 NP assemblies with controlled electron transport and MR for spintronics applications.

10.
Adv Exp Med Biol ; 680: 497-511, 2010.
Article in English | MEDLINE | ID: mdl-20865535

ABSTRACT

Addressing the problem of virtual screening is a long-term goal in the drug discovery field; if properly solved, it can significantly shorten the R&D cycle of new drugs. The scoring function that evaluates the fitness of a docking result is one of the major challenges in virtual screening. In general, scoring in docking requires a large number of floating-point calculations, which usually take several weeks or even months to finish. Such a time-consuming procedure is unacceptable, especially when a highly lethal and infectious virus such as SARS or H1N1 emerges and forces the scoring task to be completed in limited time. This paper presents how to leverage the computational power of the GPU to accelerate the Amber scoring (J. Comput. Chem. 25: 1157-1174, 2004) in DOCK6 (http://dock.compbio.ucsf.edu/DOCK_6/) using the NVIDIA CUDA (Compute Unified Device Architecture) platform (NVIDIA Corporation Technical Staff, Compute Unified Device Architecture - Programming Guide, NVIDIA Corporation, 2008). We also discuss several factors that greatly influence performance after porting the Amber scoring to the GPU, including thread management, data transfer, and divergence hiding. Our experiments show that GPU-accelerated Amber scoring achieves a 6.5x speedup over the original version running on an AMD dual-core CPU for the same problem size. This acceleration makes Amber scoring more competitive and efficient for large-scale virtual screening problems.
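To make the computational pattern concrete, below is a toy NumPy sketch (not DOCK6 or CUDA code, and not the actual Amber force field) of a Coulomb-plus-Lennard-Jones pairwise score of the general kind such scoring functions evaluate. Every ligand-receptor atom pair is independent, which is exactly why this sum maps well onto GPU threads:

```python
import numpy as np

def pairwise_score(lig_xyz, rec_xyz, lig_q, rec_q, eps=0.2, sigma=3.5):
    """Toy Coulomb + Lennard-Jones interaction score (arbitrary units).

    Each ligand-receptor atom pair contributes independently, so the
    sum is embarrassingly parallel across GPU threads.
    """
    diff = lig_xyz[:, None, :] - rec_xyz[None, :, :]   # (L, R, 3)
    r = np.linalg.norm(diff, axis=-1)                  # pair distances
    coulomb = np.outer(lig_q, rec_q) / r
    sr6 = (sigma / r) ** 6
    lj = 4.0 * eps * (sr6**2 - sr6)
    return float((coulomb + lj).sum())

# Hypothetical coordinates and partial charges for one docking pose.
rng = np.random.default_rng(1)
score = pairwise_score(rng.normal(size=(50, 3)) * 5,
                       rng.normal(size=(2000, 3)) * 20,
                       rng.uniform(-0.5, 0.5, 50),
                       rng.uniform(-0.5, 0.5, 2000))
print(f"score = {score:.1f}")
```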


Subject(s)
Drug Discovery/statistics & numerical data , Drug Evaluation, Preclinical/statistics & numerical data , User-Computer Interface , Algorithms , Computational Biology , Computer Simulation , Humans , In Vitro Techniques , Ligands , Molecular Dynamics Simulation/statistics & numerical data , Software , Software Design