Results 1 - 20 of 23
1.
IEEE Open J Eng Med Biol ; 5: 281-295, 2024.
Article in English | MEDLINE | ID: mdl-38766538

ABSTRACT

Goal: FetSAM is a deep learning model aimed at improving fetal head ultrasound segmentation and thereby elevating prenatal diagnostic precision. Methods: Utilizing a comprehensive dataset, the largest to date for fetal head metrics, FetSAM incorporates prompt-based learning. It distinguishes itself with a dual loss mechanism, combining Weighted DiceLoss and Weighted Lovasz Loss, optimized through AdamW and supported by class weight adjustments for better segmentation balance. Performance benchmarks against prominent models such as U-Net, DeepLabV3, and Segformer highlight its efficacy. Results: FetSAM delivers strong segmentation accuracy, demonstrated by a DSC of 0.90117, HD of 1.86484, and ASD of 0.46645. Conclusion: FetSAM sets a new benchmark in AI-enhanced prenatal ultrasound analysis, providing a robust, precise tool for clinical applications, supported by its large dataset and segmentation capabilities.
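The dual-loss idea, a weighted Dice term combined with a Lovász-style term, can be sketched in plain Python over per-class pixel sets. This is an illustrative stand-in, not the FetSAM implementation: the true Lovász loss is a piecewise-linear extension of the Jaccard loss, so the discrete 1 - IoU is used here in its place, and the `alpha` mixing weight is an assumption.

```python
def weighted_dice_loss(pred, target, class_weights, eps=1e-6):
    """Weighted Dice loss over per-class pixel sets (pred/target: class -> set of pixels)."""
    total_w = sum(class_weights.values())
    loss = 0.0
    for c, w in class_weights.items():
        p, t = pred.get(c, set()), target.get(c, set())
        dice = (2.0 * len(p & t) + eps) / (len(p) + len(t) + eps)
        loss += w * (1.0 - dice)
    return loss / total_w

def weighted_jaccard_loss(pred, target, class_weights, eps=1e-6):
    """Discrete weighted Jaccard loss; a simplified stand-in for the Lovasz extension."""
    total_w = sum(class_weights.values())
    loss = 0.0
    for c, w in class_weights.items():
        p, t = pred.get(c, set()), target.get(c, set())
        iou = (len(p & t) + eps) / (len(p | t) + eps)
        loss += w * (1.0 - iou)
    return loss / total_w

def dual_loss(pred, target, class_weights, alpha=0.5):
    """Convex combination of the two terms, mirroring the dual-loss idea (alpha is assumed)."""
    return (alpha * weighted_dice_loss(pred, target, class_weights)
            + (1 - alpha) * weighted_jaccard_loss(pred, target, class_weights))
```

A perfect prediction drives both terms toward zero, while a completely missed class contributes its full class weight.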

2.
Front Big Data ; 6: 1301812, 2023.
Article in English | MEDLINE | ID: mdl-38074486

ABSTRACT

The concept of the "metaverse" has garnered significant attention recently, positioned as the "next frontier" of the internet. This emerging digital realm carries substantial economic and financial implications for both IT and non-IT industries. However, the integration and evolution of these virtual universes bring forth a multitude of intricate issues and quandaries that demand resolution. Within this research endeavor, our objective was to delve into and appraise the array of challenges, privacy concerns, and security issues that have come to light during the development of metaverse virtual environments in the wake of the COVID-19 pandemic. Through a meticulous review and analysis of literature spanning from January 2020 to December 2022, we have meticulously identified and scrutinized 29 distinct challenges, along with 12 policy, privacy, and security matters intertwined with the metaverse. Among the challenges we unearthed, the foremost were concerns pertaining to the costs associated with hardware and software, implementation complexities, digital disparities, and the ethical and moral quandaries surrounding socio-control, collectively cited by 43%, 40%, and 33% of the surveyed articles, respectively. Turning our focus to policy, privacy, and security issues, the top three concerns that emerged from our investigation encompassed the formulation of metaverse rules and principles, the encroachment of privacy threats within the metaverse, and the looming challenges concerning data management, all mentioned in 43%, 40%, and 33% of the examined literature. In summation, the development of virtual environments within the metaverse is a multifaceted and dynamically evolving domain, offering both opportunities and hurdles for researchers and practitioners alike. 
It is our aspiration that the insights, challenges, and recommendations articulated in this report will catalyze extensive dialogues among industry stakeholders, governmental bodies, and other interested parties concerning the metaverse's destiny and the world they aim to construct or bequeath to future generations.

3.
J Pathol Inform ; 14: 100335, 2023.
Article in English | MEDLINE | ID: mdl-37928897

ABSTRACT

Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating storing, viewing, processing, and sharing digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology applications, such as automated image analysis, to extract diagnostic information from WSI for improving pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features have diverse applications in several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
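As a toy illustration of the "engineered" (non-AI) feature extraction the survey contrasts with deep learning, the sketch below computes a few classic first-order statistics (mean, variance, histogram entropy) from a grayscale patch. The feature set is hypothetical and far simpler than what real WSI pipelines use:

```python
import math

def patch_features(patch):
    """Simple engineered features from a grayscale patch (list of rows, values 0-255)."""
    pixels = [v for row in patch for v in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((v - mean) ** 2 for v in pixels) / n
    hist = [0] * 256                      # intensity histogram
    for v in pixels:
        hist[v] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist if c)
    return {"mean": mean, "variance": var, "entropy": entropy}
```

Such hand-crafted descriptors would then feed a downstream classifier, in contrast to learned deep features.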

4.
Data Brief ; 51: 109708, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38020431

ABSTRACT

This dataset features a collection of 3832 high-resolution ultrasound images, each with dimensions of 959×661 pixels, focused on fetal heads. The images highlight specific anatomical regions: the brain, cavum septum pellucidum (CSP), and lateral ventricles (LV). The dataset was assembled under the Creative Commons Attribution 4.0 International license, using previously anonymized and de-identified images to maintain ethical standards. Each image is complemented by a CSV file detailing pixel size in millimeters (mm). For enhanced compatibility and usability, the dataset is available in 11 universally accepted formats, including Cityscapes, YOLO, CVAT, Datumaro, COCO, TFRecord, PASCAL, LabelMe, Segmentation mask, OpenImage, and ICDAR. This broad range of formats ensures adaptability for various computer vision tasks, such as classification, segmentation, and object detection. It is also compatible with multiple medical imaging software and deep learning frameworks. The reliability of the annotations is verified through a two-step validation process involving a Senior Attending Physician and a Radiologic Technologist. The Intraclass Correlation Coefficients (ICC) and Jaccard similarity indices (JS) are utilized to quantify inter-rater agreement. The dataset exhibits high annotation reliability, with ICC values averaging 0.859 and 0.889, and JS values of 0.855 and 0.857 in two iterative rounds of annotation. This dataset is designed to be an invaluable resource for ongoing and future research projects in medical imaging and computer vision. It is particularly suited for applications in prenatal diagnostics, clinical diagnosis, and computer-assisted interventions. Its detailed annotations, broad compatibility, and ethical compliance make it a highly reusable and adaptable tool for the development of algorithms aimed at improving maternal and fetal health.
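The Jaccard similarity index used for inter-rater agreement can be computed directly from two annotators' pixel sets; the sketch below is a minimal illustration, not the validation code used for the dataset:

```python
def jaccard_similarity(mask_a, mask_b):
    """Jaccard similarity index between two annotators' pixel sets."""
    if not mask_a and not mask_b:
        return 1.0                     # two empty annotations agree trivially
    return len(mask_a & mask_b) / len(mask_a | mask_b)

def mean_pairwise_js(annotations):
    """Mean Jaccard over all annotator pairs for one image."""
    pairs = [(a, b) for i, a in enumerate(annotations) for b in annotations[i + 1:]]
    return sum(jaccard_similarity(a, b) for a, b in pairs) / len(pairs)
```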

5.
IEEE Trans Vis Comput Graph ; 29(11): 4708-4718, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782610

ABSTRACT

We present a new data-driven approach for extracting geometric and structural information from a single spherical panorama of an interior scene, and for using this information to render the scene from novel points of view, enhancing 3D immersion in VR applications. The approach copes with the inherent ambiguities of single-image geometry estimation and novel view synthesis by focusing on the very common case of Atlanta-world interiors, bounded by horizontal floors and ceilings and vertical walls. Based on this prior, we introduce a novel end-to-end deep learning approach to jointly estimate the depth and the underlying room structure of the scene. The prior guides the design of the network and of novel domain-specific loss functions, shifting the major computational load on a training phase that exploits available large-scale synthetic panoramic imagery. An extremely lightweight network uses geometric and structural information to infer novel panoramic views from translated positions at interactive rates, from which perspective views matching head rotations are produced and upsampled to the display size. As a result, our method automatically produces new poses around the original camera at interactive rates, within a working area suitable for producing depth cues for VR applications, especially when using head-mounted displays connected to graphics servers. The extracted floor plan and 3D wall structure can also be used to support room exploration. The experimental results demonstrate that our method provides low-latency performance and improves over current state-of-the-art solutions in prediction accuracy on available commonly used indoor panoramic benchmarks.
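Under the Atlanta-world prior, depth for pixels that see the floor or ceiling follows from simple trigonometry once the camera and ceiling heights are known. The sketch below (hypothetical parameter names, walls ignored) illustrates that geometric constraint for an equirectangular panorama:

```python
import math

def floor_ceiling_depth(v, H, cam_height, ceil_height):
    """Depth along the viewing ray for pixel row v (0 = top) of an
    equirectangular panorama of height H, assuming a horizontal floor
    and ceiling (Atlanta-world prior) and ignoring walls."""
    elev = math.pi * (0.5 - (v + 0.5) / H)  # elevation angle of the ray
    if elev < 0:    # below the horizon: the ray hits the floor
        return cam_height / math.sin(-elev)
    if elev > 0:    # above the horizon: the ray hits the ceiling
        return (ceil_height - cam_height) / math.sin(elev)
    return float("inf")  # exactly at the horizon
```

A learned network still has to decide, per pixel, whether floor, ceiling, or wall is visible; this constraint is what the paper's domain-specific losses can exploit.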

6.
Stud Health Technol Inform ; 309: 189-193, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37869840

ABSTRACT

This umbrella review aims to provide a comprehensive overview of the use of telehealth services for women after the COVID-19 pandemic. The review synthesizes findings from 21 reviews, covering diverse topics such as cancer care, pregnancy and postpartum care, general health, and specific populations. While some areas have shown promising results, others require further research to better understand the potential of digital health interventions. The review identifies gaps in knowledge and highlights the need for more rigorous and comprehensive research to address the limitations and gaps identified in the current evidence base. This includes prioritizing the use of standardized guidelines, quality assessment tools, and meta-analyses, as well as exploring the comparative effectiveness of different digital health interventions, the experiences of specific populations, and the cost-effectiveness of these technologies. By addressing these gaps, this umbrella review can inform future research and policy decisions, ultimately improving women's health outcomes in the post-pandemic era.


Subject(s)
Pandemics , Telemedicine , Pregnancy , Humans , Female , Women's Health , Telemedicine/methods , Cost-Effectiveness Analysis
7.
BMJ Health Care Inform ; 30(1)2023 Aug.
Article in English | MEDLINE | ID: mdl-37541739

ABSTRACT

BACKGROUND: COVID-19, caused by the SARS-CoV-2 virus, spread worldwide, leading to a pandemic. Many governmental and non-governmental organisations and research institutes are contributing to the fight against COVID-19 to control the pandemic. MOTIVATION: Numerous telehealth applications have been proposed and adopted during the pandemic to combat the spread of the disease. To this end, powerful tools such as artificial intelligence (AI)/robotic technologies, tracking, monitoring, consultation apps and other telehealth interventions have been extensively used. However, several issues and challenges currently face this technology. OBJECTIVE: The purpose of this scoping review is to analyse the primary goal of these techniques; document their contribution to tackling COVID-19; and identify and categorise their main challenges and future directions in fighting against COVID-19 or future pandemic outbreaks. METHODS: Four digital libraries (ACM, IEEE, Scopus and Google Scholar) were searched to identify relevant sources. The Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) was used as a guideline to develop a comprehensive scoping review. General telehealth features were extracted from the studies reviewed and analysed in the context of the intervention type, technology used, contributions, challenges, issues and limitations. RESULTS: A collection of 27 studies was analysed.
The reported telehealth interventions were classified into two main categories, AI-based and non-AI-based interventions; their main contributions to tackling COVID-19 lie in disease detection and diagnosis, pathogenesis and virology, vaccine and drug development, transmission and epidemic prediction, online patient consultation, tracing, and observation. Twenty-eight telehealth intervention challenges/issues were reported and categorised into technical (14), non-technical (10), and privacy and policy issues (4). The most critical technical challenges are network issues, system reliability issues, and performance, accuracy and compatibility issues. The most critical non-technical issues are the skills required, hardware/software cost, the inability to entirely replace physical treatment, and people's uncertainty about using the technology. Stringent laws/regulations and ethical issues are some of the policy and privacy issues affecting the development of the telehealth interventions reported in the literature. CONCLUSION: This study provides medical and scientific scholars with a comprehensive overview of telehealth technologies' current and future applications in the fight against COVID-19, to motivate researchers to continue to maximise the benefits of these techniques in the fight against pandemics. Lastly, we recommend that the identified challenges, privacy and security issues, and solutions be considered when designing and developing future telehealth applications.


Subject(s)
COVID-19 , Telemedicine , Humans , Artificial Intelligence , COVID-19/epidemiology , Pandemics/prevention & control , Privacy , Reproducibility of Results , SARS-CoV-2 , Telemedicine/methods
8.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36434788

ABSTRACT

Ultraliser is a neuroscience-specific software framework capable of creating accurate and biologically realistic 3D models of complex neuroscientific structures at intracellular (e.g. mitochondria and endoplasmic reticula), cellular (e.g. neurons and glia) and even multicellular scales of resolution (e.g. cerebral vasculature and minicolumns). Resulting models are exported as triangulated surface meshes and annotated volumes for multiple applications in in silico neuroscience, allowing scalable supercomputer simulations that can unravel intricate cellular structure-function relationships. Ultraliser implements a high-performance and unconditionally robust voxelization engine adapted to create optimized watertight surface meshes and annotated voxel grids from arbitrary non-watertight triangular soups, digitized morphological skeletons or binary volumetric masks. The framework represents a major leap forward in simulation-based neuroscience, making it possible to employ high-resolution 3D structural models for quantification of surface areas and volumes, which are of the utmost importance for cellular and system simulations. The power of Ultraliser is demonstrated with several use cases in which hundreds of models are created for potential application in diverse types of simulations. Ultraliser is publicly released under the GNU GPL3 license on GitHub (BlueBrain/Ultraliser). SIGNIFICANCE: There is clear evidence of the impact of a cell's shape on its signaling mechanisms. Structural models can therefore offer insight into function; the more realistic the structure, the deeper the insight into the function. Creating realistic structural models from existing ones is challenging, particularly when they are needed for detailed subcellular simulations. We present Ultraliser, a neuroscience-dedicated framework capable of building these structural models with realistic and detailed cellular geometries that can be used for simulations.
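A voxelization engine of the kind described can be illustrated, in heavily simplified form, by conservatively marking every voxel whose index range overlaps a triangle's bounding box. This is only a sketch of the idea: Ultraliser's actual engine performs exact overlap tests and produces watertight, optimized meshes.

```python
def voxelize_conservative(triangles, grid_size, bounds_min, bounds_max):
    """Mark every voxel whose index range overlaps a triangle's axis-aligned
    bounding box. Conservative: it may mark extra voxels, never misses one."""
    span = [bounds_max[i] - bounds_min[i] for i in range(3)]
    occupied = set()
    for tri in triangles:  # tri: three (x, y, z) vertices
        lo = [min(v[i] for v in tri) for i in range(3)]
        hi = [max(v[i] for v in tri) for i in range(3)]
        ilo = [max(0, int((lo[i] - bounds_min[i]) / span[i] * grid_size[i]))
               for i in range(3)]
        ihi = [min(grid_size[i] - 1, int((hi[i] - bounds_min[i]) / span[i] * grid_size[i]))
               for i in range(3)]
        for x in range(ilo[0], ihi[0] + 1):
            for y in range(ilo[1], ihi[1] + 1):
                for z in range(ilo[2], ihi[2] + 1):
                    occupied.add((x, y, z))
    return occupied
```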


Subject(s)
Neurons , Software , Computer Simulation
9.
IEEE Trans Vis Comput Graph ; 28(11): 3629-3639, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36067097

ABSTRACT

Nowadays 360° cameras, capable of capturing full environments in a single shot, are increasingly used in a variety of Extended Reality (XR) applications that require specific Diminished Reality (DR) techniques to conceal selected classes of objects. In this work, we present a new data-driven approach that, from an input 360° image of a furnished indoor space, automatically returns, with very low latency, an omnidirectional photorealistic view and architecturally plausible depth of the same scene emptied of all clutter. Contrary to recent data-driven inpainting methods that remove single user-defined objects based on their semantics, our approach is holistically applied to the entire scene, and is capable of separating the clutter from the architectural structure in a single step. By exploiting peculiar geometric features of the indoor environment, we shift the major computational load to the training phase, leaving an extremely lightweight network at prediction time. Our end-to-end approach starts by calculating an attention mask of the clutter in the image based on the geometric difference between the full and empty scene. This mask is then propagated through gated convolutions that drive the generation of the output image and its depth. Returning the depth of the resulting structure allows us to exploit, during supervised training, geometric losses of different orders, including robust pixel-wise geometric losses and high-order 3D constraints typical of indoor structures. The experimental results demonstrate that our method provides interactive performance and outperforms current state-of-the-art solutions in prediction accuracy on available commonly used indoor panoramic benchmarks. In addition, our method presents consistent quality results even for scenes captured in the wild and for data for which there is no ground truth to support supervised training.
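The attention-mask idea, clutter identified from the geometric difference between the full and empty scene, can be sketched as a per-pixel depth comparison. The threshold value is an assumption, and the paper's mask is computed within a learned end-to-end pipeline rather than by simple thresholding:

```python
def clutter_mask(depth_full, depth_empty, threshold=0.05):
    """1 where the furnished-scene depth differs from the empty-room depth
    by more than `threshold` (i.e., a clutter pixel), 0 elsewhere."""
    return [[1 if abs(f - e) > threshold else 0
             for f, e in zip(row_f, row_e)]
            for row_f, row_e in zip(depth_full, depth_empty)]
```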

10.
Diagnostics (Basel) ; 12(9)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36140628

ABSTRACT

Ultrasound is one of the most commonly used imaging methodologies in obstetrics to monitor the growth of a fetus during the gestation period. Specifically, ultrasound images are routinely utilized to gather fetal information, including body measurements, anatomy structure, fetal movements, and pregnancy complications. Recent developments in artificial intelligence and computer vision provide new methods for the automated analysis of medical images in many domains, including ultrasound images. We present a full end-to-end framework for segmenting, measuring, and estimating fetal gestational age and weight based on two-dimensional ultrasound images of the fetal head. Our segmentation framework is based on the following components: (i) eight segmentation architectures (UNet, UNet Plus, Attention UNet, UNet 3+, TransUNet, FPN, LinkNet, and Deeplabv3) were fine-tuned using the lightweight network EfficientNetB0, and (ii) a weighted voting method for building an optimized ensemble transfer learning model (ETLM). On top of that, ETLM was used to segment the fetal head and to perform analytic and accurate measurements of circumference and seven other values of the fetal head, which we incorporated into a multiple regression model for predicting the week of gestational age and the estimated fetal weight (EFW). We finally validated the regression model by comparing our results with expert physician and longitudinal references. We evaluated the performance of our framework on the public domain dataset HC18: we obtained 98.53% mean intersection over union (mIoU) as the segmentation accuracy, surpassing state-of-the-art methods; as measurement accuracy, we obtained a 1.87 mm mean absolute difference (MAD). Finally, we obtained a 0.03% mean square error (MSE) in predicting the week of gestational age and 0.05% MSE in predicting EFW.
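The weighted-voting step of the ensemble can be sketched as a per-pixel vote over binary masks, with each model's vote scaled by its weight; the weights and threshold here are illustrative, not those learned for the ETLM:

```python
def weighted_vote(masks, weights, threshold=0.5):
    """Combine binary segmentation masks by weighted per-pixel voting.
    masks: list of HxW 0/1 grids; weights: one weight per model."""
    total = sum(weights)
    h, w = len(masks[0]), len(masks[0][0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            score = sum(wt * m[i][j] for m, wt in zip(masks, weights))
            out[i][j] = 1 if score / total >= threshold else 0
    return out
```

A pixel is kept only when the weight-normalized vote crosses the threshold, so better-performing models (larger weights) dominate disagreements.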

11.
iScience ; 25(8): 104713, 2022 Aug 19.
Article in English | MEDLINE | ID: mdl-35856024

ABSTRACT

Several reviews have been conducted regarding artificial intelligence (AI) techniques to improve pregnancy outcomes, but none focus on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Of 1269 studies, 107 were included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetal head (43), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications.

12.
Stud Health Technol Inform ; 295: 574-577, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35773939

ABSTRACT

Ultrasound is the most widely used imaging modality in obstetrics to monitor the growth of a fetus during the gestation period. In particular, the obstetrician uses fetal head images to monitor the growth state and identify essential features such as gestational age (GA), estimated fetal weight (EFW), and brain anatomical structures. However, this work requires an expert obstetrician, and it is time-consuming and costly. Therefore, we propose an automatic framework adopting a hybrid approach that combines three components: i) automatic segmentation to segment the region of interest (ROI) in the fetal head, ii) measurement extraction to measure the segmented ROI, and iii) anomaly and feature detection to predict fetal GA, EFW, and abnormality status.
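The measurement-extraction step typically derives head circumference from an ellipse fitted to the segmented head contour. A minimal sketch using Ramanujan's perimeter approximation (the specific formula is an assumption for illustration, not necessarily the one used in this work):

```python
import math

def head_circumference(a_mm, b_mm):
    """Approximate ellipse circumference (Ramanujan's first formula)
    from the fitted semi-axes a_mm and b_mm, in millimeters."""
    h = ((a_mm - b_mm) / (a_mm + b_mm)) ** 2
    return math.pi * (a_mm + b_mm) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
```

For a circle (a = b) the formula reduces exactly to 2*pi*r, and for moderate eccentricities its error is far below ultrasound measurement noise.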


Subject(s)
Obstetrics , Ultrasonography, Prenatal , Female , Fetus/diagnostic imaging , Gestational Age , Humans , Pregnancy , Ultrasonography , Ultrasonography, Prenatal/methods
13.
IEEE Trans Vis Comput Graph ; 28(1): 573-582, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34587033

ABSTRACT

Achieving high rendering quality in the visualization of large particle data, for example from large-scale molecular dynamics simulations, requires a significant amount of sub-pixel super-sampling, due to very high numbers of particles per pixel. Although it is impossible to super-sample all particles of large-scale data at interactive rates, efficient occlusion culling can decouple the overall data size from a high effective sampling rate of visible particles. However, while the latter is essential for domain scientists to be able to see important data features, performing occlusion culling by sampling or sorting the data is usually slow or error-prone due to visibility estimates of insufficient quality. We present a novel probabilistic culling architecture for super-sampled high-quality rendering of large particle data. Occlusion is dynamically determined at the sub-pixel level, without explicit visibility sorting or data simplification. We introduce confidence maps to probabilistically estimate confidence in the visibility data gathered so far. This enables progressive, confidence-based culling, helping to avoid wrong visibility decisions. In this way, we determine particle visibility with high accuracy, although only a small part of the data set is sampled. This enables extensive super-sampling of (partially) visible particles for high rendering quality, at a fraction of the cost of sampling all particles. For real-time performance with millions of particles, we exploit novel features of recent GPU architectures to group particles into two hierarchy levels, combining fine-grained culling with high frame rates.
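The confidence-based culling decision can be caricatured as: never cull until enough visibility samples have been gathered, then cull only under near-certain occlusion. The thresholds below are invented for illustration and far simpler than the paper's probabilistic confidence maps:

```python
def should_cull(occluded_hits, samples, min_samples=16, occlusion_threshold=0.95):
    """Cull a particle behind this pixel footprint only when enough visibility
    samples exist (confidence) and they indicate near-full occlusion."""
    if samples < min_samples:
        return False  # low confidence: keep sampling to avoid wrong culls
    return occluded_hits / samples >= occlusion_threshold
```

This captures the progressive aspect: early frames refuse to cull, and culling decisions firm up only as the visibility estimate becomes trustworthy.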

14.
Front Surg ; 8: 657901, 2021.
Article in English | MEDLINE | ID: mdl-33859995

ABSTRACT

Background: While performing surgeries in the OR, surgeons and assistants often need to access several information regarding surgical planning and/or procedures related to the surgery itself, or the accessory equipment to perform certain operations. The accessibility of this information often relies on the physical presence of technical and medical specialists in the OR, which is increasingly difficult due to the number of limitations imposed by the COVID emergency to avoid overcrowded environments or external personnel. Here, we analyze several scenarios where we equipped OR personnel with augmented reality (AR) glasses, allowing a remote specialist to guide OR operations through voice and ad-hoc visuals, superimposed to the field of view of the operator wearing them. Methods: This study is a preliminary case series of prospective collected data about the use of AR-assistance in spine surgery from January to July 2020. The technology has been used on a cohort of 12 patients affected by degenerative lumbar spine disease with lumbar sciatica co-morbidities. Surgeons and OR specialists were equipped with AR devices, customized with P2P videoconference commercial apps, or customized holographic apps. The devices were tested during surgeries for lumbar arthrodesis in a multicenter experience involving author's Institutions. Findings: A total number of 12 lumbar arthrodesis have been performed while using the described AR technology, with application spanning from telementoring (3), teaching (2), surgical planning superimposition and interaction with the hologram using a custom application for Microsoft hololens (1). Surgeons wearing the AR goggles reported a positive feedback as for the ergonomy, wearability and comfort during the procedure; being able to visualize a 3D reconstruction during surgery was perceived as a straightforward benefit, allowing to speed-up procedures, thus limiting post-operational complications. 
The possibility of remotely interacting with a specialist on the glasses was a potent added value during COVID emergency, due to limited access of non-resident personnel in the OR. Interpretation: By allowing surgeons to overlay digital medical content on actual surroundings, augmented reality surgery can be exploited easily in multiple scenarios by adapting commercially available or custom-made apps to several use cases. The possibility to observe directly the operatory theater through the eyes of the surgeon might be a game-changer, giving the chance to unexperienced surgeons to be virtually at the site of the operation, or allowing a remote experienced operator to guide wisely the unexperienced surgeon during a procedure.

15.
IEEE Trans Vis Comput Graph ; 27(2): 645-655, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33055035

ABSTRACT

In this paper, we present a novel data structure, called the Mixture Graph. This data structure allows us to compress, render, and query segmentation histograms. Such histograms arise when building a mipmap of a volume containing segmentation IDs. Each voxel in the histogram mipmap contains a convex combination (mixture) of segmentation IDs. Each mixture represents the distribution of IDs in the respective voxel's children. Our method factorizes these mixtures into a series of linear interpolations between exactly two segmentation IDs. The result is represented as a directed acyclic graph (DAG) whose nodes are topologically ordered. Pruning replicate nodes in the tree followed by compression allows us to store the resulting data structure efficiently. During rendering, transfer functions are propagated from sources (leaves) through the DAG to allow for efficient, pre-filtered rendering at interactive frame rates. Assembly of histogram contributions across the footprint of a given volume allows us to efficiently query partial histograms, achieving up to 178× speed-up over naive parallelized range queries. Additionally, we apply the Mixture Graph to compute correctly pre-filtered volume lighting and to interactively explore segments based on shape, geometry, and orientation using multi-dimensional transfer functions.
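The factorization of a mixture into a chain of two-ID linear interpolations, and the propagation of a transfer function through that chain, can be sketched as follows (ordering and weights are illustrative; the actual Mixture Graph also deduplicates nodes into a DAG):

```python
def factorize(mixture):
    """Factor a convex mixture of segment IDs (id -> weight, weights sum to 1)
    into nested two-ID lerps: (id, w, rest) means w*id + (1 - w)*rest."""
    items = sorted(mixture.items())
    label, w = items[0]
    if len(items) == 1:
        return (label, 1.0, None)
    rest = {l: v / (1.0 - w) for l, v in items[1:]}  # renormalize the remainder
    return (label, w, factorize(rest))

def evaluate(node, tf):
    """Propagate a transfer function (id -> scalar color) through the lerp chain."""
    label, w, rest = node
    if rest is None:
        return tf[label]
    return w * tf[label] + (1.0 - w) * evaluate(rest, tf)
```

Evaluating the chain reproduces exactly the weighted sum over the original mixture, which is what makes pre-filtered rendering through the graph correct.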

16.
J Vis Exp ; (151)2019 09 28.
Article in English | MEDLINE | ID: mdl-31609327

ABSTRACT

Serial sectioning and subsequent high-resolution imaging of biological tissue using electron microscopy (EM) allow for the segmentation and reconstruction of high-resolution imaged stacks to reveal ultrastructural patterns that could not be resolved using 2D images. Indeed, the latter might lead to a misinterpretation of morphologies, as in the case of mitochondria; the use of 3D models is, therefore, increasingly common and applied to the formulation of morphology-based functional hypotheses. To date, the use of 3D models generated from light or electron microscopy image stacks makes qualitative visual assessments, as well as quantification, more convenient to perform directly in 3D. As these models are often extremely complex, setting up a virtual reality environment is also important, to overcome occlusion and to take full advantage of the 3D structure. Here, a step-by-step guide from image segmentation to reconstruction and analysis is described in detail.


Subject(s)
Imaging, Three-Dimensional/methods , Microscopy, Electron/methods , Neuroglia/cytology , Neurons/cytology , Virtual Reality
17.
Prog Neurobiol ; 183: 101696, 2019 12.
Article in English | MEDLINE | ID: mdl-31550514

ABSTRACT

With the rapid evolution in the automation of serial electron microscopy in life sciences, the acquisition of terabyte-sized datasets is becoming increasingly common. High resolution serial block-face imaging (SBEM) of biological tissues offers the opportunity to segment and reconstruct nanoscale structures to reveal spatial features previously inaccessible with simple, single section, two-dimensional images. In particular, we focussed here on glial cells, whose reconstructions in the literature are still limited compared to neurons. We imaged a 750,000 cubic micron volume of the somatosensory cortex from a juvenile P14 rat, with 20 nm accuracy. We recognized a total of 186 cells using their nuclei, and classified them as neuronal or glial based on features of the soma and the processes. We reconstructed for the first time 4 almost complete astrocytes and neurons, 4 complete microglia and 4 complete pericytes, including their intracellular mitochondria, 186 nuclei and 213 myelinated axons. We then performed quantitative analysis on the three-dimensional models. From the data we generated, we observed that neurons have larger nuclei, which correlated with their lower density, and that astrocytes and pericytes have a higher surface to volume ratio, compared to other cell types. All reconstructed morphologies represent an important resource for computational neuroscientists, as morphological quantitative information can be inferred to tune simulations that take into account the spatial compartmentalization of the different cell types.
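The surface-to-volume comparison reported above rests on standard mesh quantities: summed triangle areas for surface, and the divergence theorem for enclosed volume. A minimal sketch for a closed, consistently oriented triangle mesh (not the analysis pipeline used in the study):

```python
import math

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def surface_area_and_volume(triangles):
    """Surface area (sum of triangle areas) and enclosed volume
    (divergence theorem) of a closed, consistently oriented mesh."""
    area = 0.0
    vol6 = 0.0  # six times the signed volume
    for v0, v1, v2 in triangles:
        n = _cross(_sub(v1, v0), _sub(v2, v0))
        area += 0.5 * math.sqrt(_dot(n, n))
        vol6 += _dot(v0, _cross(v1, v2))
    return area, abs(vol6) / 6.0
```

Dividing the two gives the surface-to-volume ratio used to compare cell types.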


Subject(s)
Astrocytes/ultrastructure , Brain/cytology , Brain/diagnostic imaging , Imaging, Three-Dimensional , Microglia/ultrastructure , Microscopy, Electron, Scanning , Neurons/ultrastructure , Pericytes/ultrastructure , Animals , Microscopy, Electron , Rats , Somatosensory Cortex/cytology , Somatosensory Cortex/diagnostic imaging
18.
Front Neurosci ; 12: 664, 2018.
Article in English | MEDLINE | ID: mdl-30319342

ABSTRACT

One will not understand the brain without an integrated exploration of structure and function, these attributes being two sides of the same coin: together they form the currency of biological computation. Accordingly, biologically realistic models require the re-creation of the architecture of the cellular components in which biochemical reactions are contained. We describe here a process of reconstructing a functional oligocellular assembly that is responsible for energy supply management in the brain and creating a computational model of the associated biochemical and biophysical processes. The reactions that underwrite thought are both constrained by and take advantage of brain morphologies pertaining to neurons, astrocytes and the blood vessels that deliver oxygen, glucose and other nutrients. Each component of this neuro-glio-vasculature ensemble (NGV) carries out delegated tasks, as the dynamics of this system provide for each cell-type its own energy requirements while including mechanisms that allow cooperative energy transfers. Our process for recreating the ultrastructure of cellular components and modeling the reactions that describe energy flow uses an amalgam of state-of-the-art techniques, including digital reconstructions of electron micrographs, advanced data analysis tools, computational simulations and in silico visualization software. While we demonstrate this process with the NGV, it is equally well adapted to any cellular system for integrating multimodal cellular data in a coherent framework.

19.
Article in English | MEDLINE | ID: mdl-30136947

ABSTRACT

With the rapid increase in raw volume data sizes, such as terabyte-sized microscopy volumes, the corresponding segmentation label volumes have become extremely large as well. We focus on integer label data, whose efficient representation in memory, as well as fast random data access, pose an even greater challenge than the raw image data. Often, it is crucial to be able to rapidly identify which segments are located where, whether for empty space skipping for fast rendering, or for spatial proximity queries. We refer to this process as culling. In order to enable efficient culling of millions of labeled segments, we present a novel hybrid approach that combines deterministic and probabilistic representations of label data in a data-adaptive hierarchical data structure that we call the label list tree. In each node, we adaptively encode label data using either a probabilistic constant-time access representation for fast conservative culling, or a deterministic logarithmic-time access representation for exact queries. We choose the best data structures for representing the labels of each spatial region while building the label list tree. At run time, we further employ a novel query-adaptive culling strategy. While filtering a query down the tree, we prune it successively, and in each node adaptively select the representation that is best suited for evaluating the pruned query, depending on its size. We show an analysis of the efficiency of our approach with several large data sets from connectomics, including a brain scan with more than 13 million labeled segments, and compare our method to conventional culling approaches. Our approach achieves significant reductions in storage size as well as faster query times.
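The two node representations described in this abstract can be sketched in a few lines: a Bloom-filter-style set for constant-time conservative culling, and a sorted list for logarithmic-time exact queries. This is an illustrative sketch only (class names and parameters are assumptions, not the paper's label list tree implementation):

```python
import bisect
import hashlib

class ProbabilisticLabelSet:
    """Bloom-filter-style label set: constant-time, conservative membership.
    May report false positives, never false negatives -- a 'maybe' merely
    keeps a region from being culled, so correctness is preserved."""
    def __init__(self, labels, bits=1024, num_hashes=3):
        self.bits, self.num_hashes = bits, num_hashes
        self.field = 0
        for label in labels:
            for pos in self._positions(label):
                self.field |= 1 << pos
    def _positions(self, label):
        for seed in range(self.num_hashes):
            digest = hashlib.blake2b(f"{seed}:{label}".encode(),
                                     digest_size=8).digest()
            yield int.from_bytes(digest, "big") % self.bits
    def may_contain(self, label):
        return all(self.field >> pos & 1 for pos in self._positions(label))

class DeterministicLabelSet:
    """Sorted label list: logarithmic-time, exact membership queries."""
    def __init__(self, labels):
        self.labels = sorted(set(labels))
    def contains(self, label):
        i = bisect.bisect_left(self.labels, label)
        return i < len(self.labels) and self.labels[i] == label

region_labels = [3, 17, 42, 1001]
fast_node = ProbabilisticLabelSet(region_labels)   # conservative culling
exact_node = DeterministicLabelSet(region_labels)  # exact queries
```

In a tree of such nodes, the probabilistic form answers "could this segment be here?" cheaply while filtering a query downward, and the deterministic form resolves the pruned query exactly at the chosen nodes.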

20.
IEEE Trans Vis Comput Graph ; 24(1): 974-983, 2018 01.
Article in English | MEDLINE | ID: mdl-28866532

ABSTRACT

Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it infeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
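The core leaping idea, stripped of the GPU rasterization that actually builds the per-pixel segment lists in SparseLeap, reduces to marching only inside precomputed non-empty intervals. A toy pure-Python sketch (names and the 1D setup are illustrative assumptions, not the paper's implementation):

```python
def march_with_segment_list(sample, segments, step):
    """Sample a ray only inside its precomputed non-empty (t0, t1) segments,
    leaping over the empty gaps with no hierarchy traversal during marching."""
    samples = []
    for t0, t1 in segments:        # segment list is sorted front to back
        t = t0
        while t < t1:
            samples.append(sample(t))
            t += step
    return samples

# Toy ray through [0, 10): occupancy only in [2.0, 3.0) and [7.0, 8.0).
segments = [(2.0, 3.0), (7.0, 8.0)]
taken = march_with_segment_list(lambda t: t, segments, step=0.25)
# 8 samples instead of the 40 a uniform march over [0, 10) would take.
```

In the actual method the segment list per pixel comes from rasterizing fine-grained, view-independent bounding boxes, so the ray-casting loop above never consults an octree or other hierarchy.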
