Results 1 - 5 of 5
1.
Adv Funct Mater; 34(13), 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38706986

ABSTRACT

Collagen fibers in the 3D tumor microenvironment (TME) exhibit complex alignment landscapes that are critical in directing cell migration through a process called contact guidance. Previous in vitro work on this phenomenon has focused on quantifying cell responses in uniformly aligned environments. However, the TME also features short-range gradients in fiber alignment that result from cell-induced traction forces. Although the influence of graded biophysical taxis cues is well established, cell responses to physiological alignment gradients remain largely unexplored. In this work, fiber alignment gradients in biopsy samples are characterized and recreated using a new microfluidic biofabrication technique to achieve tunable sub-millimeter- to millimeter-scale gradients. This study represents the first successful engineering of continuous alignment gradients in soft, natural biomaterials. Migration experiments on graded alignment show that HUVECs exhibit increased directionality, persistence, and speed compared with uniformly aligned and unaligned fiber architectures. Similarly, patterned MDA-MB-231 aggregates exhibit biased migration toward increasing fiber alignment, suggesting a role for alignment gradients as a taxis cue. This user-friendly approach, requiring no specialized equipment, is anticipated to offer new insights into the biophysical cues that cells interpret as they traverse the extracellular matrix, with broad applicability in healthy and diseased tissue environments.
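The migration readouts reported above (directionality, persistence, and speed) are standard track-based quantities. Below is a minimal Python sketch of one common way to compute them from tracked cell coordinates; the track layout, sampling interval, and the mean-cosine definition of persistence are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical track-based migration metrics; not code from the paper.
import numpy as np

def migration_metrics(track: np.ndarray, dt: float):
    """track: (T, 2) array of x, y positions sampled every dt minutes."""
    steps = np.diff(track, axis=0)                    # per-frame displacement vectors
    step_lengths = np.linalg.norm(steps, axis=1)
    path_length = step_lengths.sum()                  # total distance traveled
    net_displacement = np.linalg.norm(track[-1] - track[0])
    speed = path_length / (dt * len(steps))           # mean speed over the track
    directionality = net_displacement / path_length   # 1.0 = perfectly straight path
    # One common persistence measure: mean cosine of the turning angle
    # between successive steps (an assumed definition, not the paper's).
    unit = steps / (step_lengths[:, None] + 1e-12)
    persistence = float(np.mean(np.sum(unit[:-1] * unit[1:], axis=1)))
    return speed, directionality, persistence

# Example: a straight 11-point track moving 1 unit per 5-minute frame
track = np.column_stack([np.arange(11.0), np.zeros(11)])
print(migration_metrics(track, dt=5.0))  # -> (0.2, 1.0, ~1.0)
```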

2.
J Med Imaging (Bellingham); 10(4): 045002, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37649957

ABSTRACT

Purpose: Medical technology for minimally invasive surgery has undergone a paradigm shift with the introduction of robot-assisted surgery. However, tracking the position of surgical tools in a surgical scene is very difficult, so accurate detection and identification of the tools is crucial. This task can be aided by deep learning-based semantic segmentation of surgical video frames. Furthermore, because these surgical instruments have limited working and viewing areas, there is a higher chance of complications from tissue injuries (e.g., tissue scars and tears). Approach: With the aid of digital inpainting algorithms, we present an application that uses image segmentation to remove surgical instruments from laparoscopic/endoscopic video. We employ a modified U-Net architecture (U-NetPlus) to segment the surgical instruments. It consists of a pre-trained VGG11 or VGG16 encoder and a redesigned decoder in which the transposed convolution operation is replaced by an up-sampling operation based on nearest-neighbor interpolation. Because these interpolation weights do not need to be learned, this substitution eliminates the artifacts generated by the transposed convolution. In addition, we use a very fast and adaptable data augmentation technique to further enhance performance. The tool removal algorithms then fill in (i.e., inpaint) the instrument segmentation mask using the previously acquired tool segmentation masks and either previous instrument-containing frames or instrument-free reference frames. Results: We show the effectiveness of the proposed surgical tool segmentation/removal algorithms on robotic instrument datasets from the MICCAI 2015 and 2017 EndoVis Challenges. On the MICCAI 2017 challenge dataset, our U-NetPlus architecture achieves a 90.20% DICE for binary segmentation, a 76.26% DICE for instrument part segmentation, and a 46.07% DICE for instrument type (i.e., all instruments) segmentation, outperforming earlier techniques tested on these data. In addition, we demonstrate successful execution of the tool removal algorithm on videos generated artificially by embedding moving surgical tools into surgical tool-free videos. Conclusions: Our application successfully segments and removes the surgical tool to reveal the background tissue otherwise hidden by the tool, producing results that are visually similar to the actual data.
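To make the decoder change concrete, here is a minimal PyTorch sketch contrasting a transposed-convolution block with a nearest-neighbor up-sampling block of the kind the abstract describes. The channel sizes and the 3x3 convolution after up-sampling are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch of the decoder modification; not the authors' code.
import torch
import torch.nn as nn

class TransposedDecoderBlock(nn.Module):
    """Baseline U-Net-style block: learned up-sampling via transposed conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)

    def forward(self, x):
        return self.up(x)  # learned weights; can introduce up-sampling artifacts

class NearestNeighborDecoderBlock(nn.Module):
    """U-NetPlus-style block: fixed nearest-neighbor interpolation (no learned
    up-sampling weights) followed by a regular convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(self.up(x))

x = torch.randn(1, 128, 32, 32)
print(NearestNeighborDecoderBlock(128, 64)(x).shape)  # torch.Size([1, 64, 64, 64])
```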

3.
bioRxiv; 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37502844

ABSTRACT

In the tumor microenvironment (TME), collagen fibers facilitate tumor cell migration through the extracellular matrix. Previous studies have focused on the responses of cells on uniformly aligned or randomly aligned collagen fibers. However, the in vivo environment also features spatial gradients in alignment, which arise from the local reorganization of the matrix architecture due to cell-induced traction forces. Although there has been extensive research on how cells respond to graded biophysical cues, such as stiffness, porosity, and ligand density, cellular responses to physiological fiber alignment gradients have been largely unexplored, due in part to a lack of robust experimental techniques for creating controlled alignment gradients in natural materials. In this study, we image tumor biopsy samples and characterize the alignment gradients present in the TME. To replicate physiological gradients, we introduce a first-of-its-kind biofabrication technique that uses a microfluidic channel with constricting and expanding geometry to engineer 3D collagen hydrogels with tunable fiber alignment gradients spanning sub-millimeter to millimeter length scales. Our modular approach allows easy access to the microengineered gradient gels, and we demonstrate that HUVECs migrate in response to the fiber architecture. We provide preliminary evidence that MDA-MB-231 cell aggregates, patterned onto a specific location on the alignment gradient, exhibit preferential migration toward increasing alignment, suggesting that alignment gradients could serve as an additional taxis cue in the ECM. Importantly, our study represents the first successful engineering of continuous gradients of fiber alignment in soft, natural materials. We anticipate that our user-friendly platform, which requires no specialized equipment, will offer new experimental capabilities to study the impact of fiber-based contact guidance on directed cell migration.
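Fiber alignment in images such as these biopsy samples is often quantified with a structure-tensor coherency map; the sketch below shows that standard approach in Python. It is a stand-in illustration, not the analysis pipeline used in the study, and the smoothing scale sigma is an arbitrary choice.

```python
# One common fiber-alignment measure (structure-tensor coherency);
# assumed stand-in, not the study's actual pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def alignment_coherency(image: np.ndarray, sigma: float = 4.0) -> np.ndarray:
    """Coherency near 1 = strongly aligned fibers; near 0 = isotropic."""
    img = image.astype(float)
    gx = sobel(img, axis=1)                    # intensity gradients
    gy = sobel(img, axis=0)
    # locally averaged structure-tensor components
    Jxx = gaussian_filter(gx * gx, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    # coherency = (lambda_max - lambda_min) / (lambda_max + lambda_min)
    # for the 2x2 tensor [[Jxx, Jxy], [Jxy, Jyy]]
    spread = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    return spread / (Jxx + Jyy + 1e-12)

# Sampling the coherency map along the channel axis would reveal the
# spatial alignment gradient described in the abstract.
```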

4.
Article in English | MEDLINE | ID: mdl-37124050

ABSTRACT

Ultrasound (US) elastography is a technique that enables non-invasive quantification of material properties, such as stiffness, from ultrasound images of deforming tissue. The displacement field is measured from the US images using image matching algorithms, and a parameter, often the elastic modulus, is then inferred from it to identify potential tissue pathologies, such as cancerous tissues. Several traditional inverse problem approaches, loosely grouped as either direct or iterative, have been explored to estimate the elastic modulus. However, the iterative techniques are typically slow and computationally intensive, while the direct techniques, although more computationally efficient, are very sensitive to measurement noise and require the full displacement field data (i.e., both vector components). In this work, we propose a deep learning approach to solve the inverse problem and recover the spatial distribution of the elastic modulus from a single component of the US-measured displacement field. A U-Net based neural network is trained using only simulated data, obtained via a forward finite element (FE) model with known variations in the modulus field, and is then used to predict the modulus distribution (i.e., solve the inverse problem) from the simulated forward data; this avoids reliance on large measurement data sets that may be challenging to acquire. We quantitatively evaluated our trained model on a simulated test dataset and observed a 0.0018 mean squared error (MSE) and a 1.14% mean absolute percent error (MAPE) between the reconstructed and ground-truth elastic modulus. We also qualitatively compared the output of our U-Net model to experimentally measured displacement data acquired using a US elastography tissue-mimicking calibration phantom.
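The two error metrics quoted above are straightforward to reproduce. A short sketch follows, with purely hypothetical placeholder modulus maps standing in for the reconstructed and ground-truth fields:

```python
# MSE and MAPE between predicted and ground-truth modulus maps;
# the arrays here are placeholders, not data from the paper.
import numpy as np

def mse(pred: np.ndarray, truth: np.ndarray) -> float:
    return float(np.mean((pred - truth) ** 2))

def mape(pred: np.ndarray, truth: np.ndarray) -> float:
    return float(np.mean(np.abs((pred - truth) / truth)) * 100.0)

pred = np.full((64, 64), 1.02)   # hypothetical reconstructed modulus map
truth = np.ones((64, 64))        # hypothetical ground truth (normalized)
print(f"MSE = {mse(pred, truth):.4f}, MAPE = {mape(pred, truth):.2f}%")
```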

5.
Article in English | MEDLINE | ID: mdl-34079156

ABSTRACT

Surgical tool segmentation is becoming imperative for providing detailed information during intra-operative procedures. The narrow working space and limited visual field of view of these tools can impair surgeons' dexterity, which increases the risk of complications resulting from tissue injuries (e.g., tissue scars and tears). This paper demonstrates a novel application that segments and removes surgical instruments from laparoscopic/endoscopic video using digital inpainting algorithms. To segment the surgical instruments, we use a modified U-Net architecture (U-NetPlus) composed of a pre-trained VGG11 or VGG16 encoder and a redesigned decoder. The decoder is modified by replacing the transposed convolution operation with an up-sampling operation based on nearest-neighbor (NN) interpolation. This modification removes the artifacts generated by the transposed convolution and, furthermore, the new interpolation weights require no learning for the upsampling operation. The tool removal algorithms use the tool segmentation mask and either instrument-free reference frames or previous instrument-containing frames to fill in (i.e., inpaint) the instrument segmentation mask with the background tissue underneath. We demonstrate the performance of the proposed surgical tool segmentation/removal algorithms on a robotic instrument dataset from the MICCAI 2015 EndoVis Challenge. We also show successful performance of the tool removal algorithm on synthetic videos obtained by embedding a moving surgical tool into surgical tool-free videos. Our application successfully segments and removes the surgical tool to unveil the background tissue view otherwise obstructed by the tool, producing results visually comparable to the ground truth.
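The abstracts do not name the specific inpainting algorithm used for the removal step, so the sketch below uses OpenCV's Telea inpainting as a stand-in to illustrate the idea: pixels flagged by the tool segmentation mask are filled from the surrounding tissue. The file names and the mask dilation are hypothetical.

```python
# Mask-based tool removal via inpainting; OpenCV Telea is an assumed
# stand-in, not necessarily the algorithm used in the paper.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                            # instrument-containing frame
mask = cv2.imread("tool_mask.png", cv2.IMREAD_GRAYSCALE)   # tool segmentation mask
mask = (mask > 0).astype(np.uint8) * 255
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))         # cover tool edges too
tool_free = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA) # fill from surrounding tissue
cv2.imwrite("frame_tool_removed.png", tool_free)
```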
