Results 1 - 6 of 6
1.
Bioeng Transl Med ; 8(2): e10399, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36925705

ABSTRACT

Tumor spread is responsible for most cancer-related deaths. Increasing the accuracy of cancer prognosis is therefore critical to reducing the high mortality rates in cancer patients. Here, we report that the electrostatic potential difference (EPD) between a tumor and its paratumor tissue is a prognostic marker for tumor spread. This finding is drawn from patient-specific EPD values and clinical observation. Electrostatic potential values were measured on tissue cryosections from 51 patients using Kelvin probe force microscopy (KPFM). Approximately 44% (15/34) of patients with V(tumor - paratumor) > 0 exhibited tumor spread, whereas only ~18% (2/11) of patients with V(tumor - paratumor) < 0 did. Next, using immunofluorescence imaging, we found increased enrichment of cancer stem cells (CSCs) in paratumors with lower electrostatic potentials, suggesting that tumor spread is driven by the galvanotaxis of CSCs toward lower potential. The findings were finally validated in breast and lung spheroid models composed of differentiated cancer cells and CSCs at a 1:1 ratio and embedded in Matrigel doped with negatively, neutrally, and positively charged polymers; CSCs preferentially spread out of the spheroids toward sites of lower electrostatic potential. This work may inspire the development of diagnostic and prognostic strategies targeting tissue EPDs and CSCs for tumor therapy.
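The reported proportions follow from simple arithmetic on the stated counts; a minimal sketch reproducing them (the counts 15/34 and 2/11 are taken directly from the abstract, the function name is illustrative):

```python
def spread_rate(spread_cases: int, total: int) -> float:
    """Fraction of patients in a group who exhibited tumor spread."""
    return spread_cases / total

# Patients with V(tumor - paratumor) > 0: 15 of 34 showed spread
positive_epd = spread_rate(15, 34)
# Patients with V(tumor - paratumor) < 0: 2 of 11 showed spread
negative_epd = spread_rate(2, 11)

print(f"{positive_epd:.0%} vs {negative_epd:.0%}")  # -> 44% vs 18%
```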

2.
Am J Ophthalmol ; 228: 35-46, 2021 08.
Article in English | MEDLINE | ID: mdl-33852909

ABSTRACT

PURPOSE: This study aims to improve the apparent motility of ocular prosthetic devices. Prevailing ocular prostheses are acrylic shells with a static eye image rendered on the convex surface. The limited range of ocular prosthetic movement and the lack of natural saccadic movements commonly cause an appearance of eye misalignment that may be disfiguring. Digital screens and computational systems may overcome current limitations in prosthetic eye motility and help prosthesis wearers feel less self-conscious about their appearance. METHODS: We applied convolutional neural networks (CNNs) to track pupil location under various conditions. These algorithms were coupled to a microscreen digital prosthetic eye (DPE) prototype to assess the system's ability to capture full ocular ductions and saccadic movements in a miniaturized, portable, and wearable system. RESULTS: The CNNs captured pupil location with high accuracy. Pupil location data were transmitted to a miniature-screen ocular prosthetic prototype that displayed a dynamic contralateral eye image. The transmission achieved a full range of ocular ductions with grossly undetectable latency. Lack of iris and sclera color and detail, as well as constraints in luminosity, dimensionality, and image stability, limited the realism of the eye's appearance. Yet the digitally rendered eye moved with the same amplitude and velocity as the native, tracked eye. CONCLUSIONS: Real-time image processing using CNNs coupled to microcameras and a miniscreen DPE may improve the amplitude and velocity of apparent prosthetic eye movement. These developments, along with improved ocular image fidelity, may yield a next-generation eye prosthesis. NOTE: Publication of this article is sponsored by the American Ophthalmological Society.
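The abstract notes image-stability constraints in the tracking-to-display pipeline; a common remedy in such systems is temporal smoothing of the tracked pupil coordinates before they drive the display. A minimal sketch, assuming an exponential moving average (the function name and smoothing factor are illustrative, not from the article):

```python
def smooth_track(points, alpha=0.5):
    """Exponential moving average over (x, y) pupil coordinates.

    alpha near 1.0 follows the raw track closely (low lag, more jitter);
    alpha near 0.0 damps jitter at the cost of added latency.
    """
    smoothed = []
    sx = sy = None
    for x, y in points:
        if sx is None:  # seed the filter with the first sample
            sx, sy = float(x), float(y)
        else:
            sx = alpha * x + (1 - alpha) * sx
            sy = alpha * y + (1 - alpha) * sy
        smoothed.append((sx, sy))
    return smoothed

# A jittery horizontal track is damped toward its mean position.
track = [(100, 120), (104, 120), (96, 120), (102, 120)]
print(smooth_track(track, alpha=0.5))
```

The trade-off between lag and jitter is exactly the latency-versus-stability tension the RESULTS section describes.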


Subject(s)
Consensus , Eye Movements/physiology , Eye, Artificial , Image Processing, Computer-Assisted/methods , Iris/physiopathology , Ophthalmology , Societies, Medical , Algorithms , Eye Diseases/physiopathology , Eye Diseases/surgery , Humans , Pupil/physiology , United States , Vision, Ocular/physiology
3.
IEEE Trans Pattern Anal Mach Intell ; 43(12): 4291-4305, 2021 Dec.
Article in English | MEDLINE | ID: mdl-32750771

ABSTRACT

The ability of camera arrays to efficiently capture a higher space-bandwidth product than single cameras has led to various multiscale and hybrid systems. These systems play vital roles in computational photography, including light-field imaging, 360° VR cameras, and gigapixel videography. One of the critical tasks in multiscale hybrid imaging is matching and fusing cross-resolution images from different cameras under perspective parallax. In this paper, we investigate the reference-based super-resolution (RefSR) problem associated with dual-camera or multi-camera systems. RefSR consists of super-resolving a low-resolution (LR) image given an external high-resolution (HR) reference image, where the two suffer both a significant resolution gap (8×) and large parallax (~10% pixel displacement). We present CrossNet++, an end-to-end network containing novel two-stage cross-scale warping modules, an image encoder, and a fusion decoder. Stage I learns to narrow the parallax, strongly guided by landmarks and intensity-distribution consensus. Stage II then performs finer-grained alignment and aggregation in the feature domain to synthesize the final super-resolved image. To further address the large parallax, new hybrid loss functions comprising a warping loss, a landmark loss, and a super-resolution loss are proposed to regularize training and enable better convergence. CrossNet++ significantly outperforms the state of the art on light-field datasets as well as real dual-camera data. We further demonstrate the generality of our framework by transferring it to video super-resolution and video denoising.
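The hybrid loss is described as combining warping, landmark, and super-resolution terms; the abstract does not give the combination weights, so the sketch below uses a generic weighted sum with placeholder weights (all names and values are assumptions for illustration):

```python
def hybrid_loss(warp_loss, landmark_loss, sr_loss,
                w_warp=1.0, w_landmark=1.0, w_sr=1.0):
    """Weighted sum of the three loss terms used to regularize training.

    In the full system each term would be computed from network outputs
    (warped features, predicted landmarks, the super-resolved image);
    here they are passed in as precomputed scalars.
    """
    return w_warp * warp_loss + w_landmark * landmark_loss + w_sr * sr_loss

# With unit weights the total is simply the sum of the terms.
print(hybrid_loss(0.25, 0.25, 0.5))  # -> 1.0
```

Scheduling the weights (e.g., emphasizing the warping loss early so alignment converges before the super-resolution term dominates) is a typical design choice for such multi-term objectives.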

4.
Patterns (N Y) ; 1(9): 100173, 2020 Dec 11.
Article in English | MEDLINE | ID: mdl-33330851

ABSTRACT

[This corrects the article DOI: 10.1016/j.patter.2020.100092.].

5.
Patterns (N Y) ; 1(6): 100092, 2020 Sep 11.
Article in English | MEDLINE | ID: mdl-32838344

ABSTRACT

The emergence of the novel coronavirus disease 2019 (COVID-19) is placing an increasing burden on healthcare systems. Although the majority of infected patients experience non-severe symptoms and can be managed at home, some individuals develop severe symptoms and require hospital admission. It is therefore critical to efficiently assess the severity of COVID-19 and identify hospitalization priority with precision. To this end, a four-variable assessment model, comprising lymphocyte count, lactate dehydrogenase, C-reactive protein, and neutrophil count, is established and validated using the XGBoost algorithm. The model is effective in identifying severe COVID-19 cases on admission, with a sensitivity of 84.6% and a specificity of 84.6%, and achieves 100% accuracy in predicting disease progression toward rapid deterioration. This suggests that a computation-derived formula of clinical measures is practically applicable for healthcare administrators to distribute hospitalization resources to those most in need during epidemics and pandemics.
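The reported sensitivity and specificity follow from the standard confusion-matrix definitions; a minimal sketch of how such metrics are computed for a binary severity classifier (the toy labels below are illustrative, not the study's cohort):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) for binary labels, where 1 = severe case, 0 = non-severe."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 3 of 4 severe cases caught, 4 of 5 non-severe cleared.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # -> 0.75 0.8
```

In triage settings sensitivity is usually weighted more heavily than specificity, since a missed severe case (false negative) is costlier than an unnecessary admission.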

6.
IEEE Trans Vis Comput Graph ; 26(5): 2012-2022, 2020 05.
Article in English | MEDLINE | ID: mdl-32070983

ABSTRACT

Semantic understanding of 3D environments is critical both for unmanned systems and for human-involved virtual/augmented reality (VR/AR) immersive experiences. Spatially sparse convolution, which exploits the intrinsic sparsity of 3D point-cloud data, makes high-resolution 3D convolutional neural networks tractable, with state-of-the-art results on 3D semantic segmentation problems. However, the exhaustive computation limits the practical use of semantic 3D perception for VR/AR applications on portable devices. In this paper, we identify that the efficiency bottleneck lies in the unorganized memory access of the sparse convolution steps: points are stored independently based on a predefined dictionary, which is inefficient given the limited memory bandwidth of parallel computing devices (GPUs). With the insight that points lie on continuous 2D surfaces in 3D space, a chunk-based sparse convolution scheme is proposed to reuse the neighboring points within each spatially organized chunk. An efficient multi-layer adaptive fusion module is further proposed that employs the spatial-consistency cue of 3D data to further reduce the computational burden. Quantitative experiments on public datasets demonstrate that our approach runs 11× faster than previous approaches with competitive accuracy. By implementing both semantic and geometric 3D reconstruction simultaneously on a portable tablet device, we demonstrate a foundation platform for immersive AR applications.
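The core idea — grouping sparse 3D points into spatially organized chunks so that neighbors are fetched together rather than via scattered dictionary lookups — can be sketched with a plain dictionary keyed by chunk coordinates. This is an assumption-laden illustration (chunk size and function names are made up; the paper's GPU implementation is considerably more involved):

```python
from collections import defaultdict

def chunk_points(points, chunk_size=4):
    """Group sparse voxel coordinates by the chunk that contains them.

    Points in the same chunk can then be processed together, turning
    scattered per-point memory accesses into contiguous, chunk-local
    ones -- the access pattern the abstract identifies as the bottleneck.
    """
    chunks = defaultdict(list)
    for x, y, z in points:
        key = (x // chunk_size, y // chunk_size, z // chunk_size)
        chunks[key].append((x, y, z))
    return dict(chunks)

# Two points share a chunk; the third lands in a neighboring chunk.
pts = [(0, 1, 2), (3, 3, 3), (5, 1, 2)]
chunks = chunk_points(pts, chunk_size=4)
print(sorted(chunks))  # -> [(0, 0, 0), (1, 0, 0)]
```

Because surface points cluster spatially, most chunks hold several points, which is what makes the per-chunk reuse pay off.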


Subject(s)
Augmented Reality , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Semantics , Computer Graphics , Humans , Virtual Reality