Results 1 - 8 of 8
1.
ArXiv ; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38351940

ABSTRACT

Together with the molecular knowledge of genes and proteins, biological images promise to significantly enhance the scientific understanding of complex cellular systems and to advance predictive and personalized therapeutic products for human health. For this potential to be realized, quality-assured image data must be shared among labs at a global scale to be compared, pooled, and reanalyzed, thus unleashing untold potential beyond the original purpose for which the data was generated. There are two broad sets of requirements to enable image data sharing in the life sciences. One set of requirements is articulated in the companion White Paper entitled "Enabling Global Image Data Sharing in the Life Sciences," which is published in parallel and addresses the need to build the cyberinfrastructure for sharing the digital array data (arXiv:2401.13023 [q-bio.OT], https://doi.org/10.48550/arXiv.2401.13023). In this White Paper, we detail a broad set of requirements, which involves collecting, managing, presenting, and propagating contextual information essential to assess the quality, understand the content, interpret the scientific implications, and reuse image data in the context of the experimental details. We start by providing an overview of the main lessons learned to date through international community activities, which have recently made considerable progress toward generating community standard practices for imaging Quality Control (QC) and metadata. We then provide a clear set of recommendations for amplifying this work. The driving goal is to address remaining challenges, and democratize access to common practices and tools for a spectrum of biomedical researchers, regardless of their expertise, access to resources, and geographical location.

3.
Neuroinformatics ; 20(1): 25-36, 2022 01.
Article in English | MEDLINE | ID: mdl-33506383

ABSTRACT

There is great need for coordination around standards and best practices in neuroscience to support efforts to make neuroscience a data-centric discipline. Major brain initiatives launched around the world are poised to generate huge stores of neuroscience data. At the same time, neuroscience, like many domains in biomedicine, is confronting the issues of transparency, rigor, and reproducibility. Widely used, validated standards and best practices are key to addressing the challenges in both big and small data science, as they are essential for integrating diverse data and for developing a robust, effective, and sustainable infrastructure to support open and reproducible neuroscience. However, developing community standards and gaining their adoption is difficult. The current landscape is characterized both by a lack of robust, validated standards and a plethora of overlapping, underdeveloped, untested and underutilized standards and best practices. The International Neuroinformatics Coordinating Facility (INCF), an independent organization dedicated to promoting data sharing through the coordination of infrastructure and standards, has recently implemented a formal procedure for evaluating and endorsing community standards and best practices in support of the FAIR principles. By formally serving as a standards organization dedicated to open and FAIR neuroscience, INCF helps evaluate, promulgate, and coordinate standards and best practices across neuroscience. Here, we provide an overview of the process and discuss how neuroscience can benefit from having a dedicated standards body.


Subject(s)
Neurosciences , Reproducibility of Results
6.
Neuroinformatics ; 18(1): 109-129, 2020 01.
Article in English | MEDLINE | ID: mdl-31236848

ABSTRACT

Mastering the "arcana of neuroimaging analysis", the obscure knowledge required to apply an appropriate combination of software tools and parameters to analyse a given neuroimaging dataset, is a time-consuming process. It is therefore not typically feasible to invest the additional effort required to generalise workflow implementations to accommodate the various acquisition parameters, data storage conventions and computing environments in use at different research sites, limiting the reusability of published workflows. We present a novel software framework, Abstraction of Repository-Centric ANAlysis (Arcana), which enables the development of complex, "end-to-end" workflows that are adaptable to new analyses and portable to a wide range of computing infrastructures. Analysis templates for specific image types (e.g. MRI contrast) are implemented as Python classes, which define a range of potential derivatives and analysis methods. Arcana retrieves data from imaging repositories, which can be BIDS datasets, XNAT instances or plain directories, and stores selected derivatives and associated provenance back into a repository for reuse by subsequent analyses. Workflows are constructed using Nipype and can be executed on local workstations or in high-performance computing environments. Generic analysis methods can be consolidated within common base classes to facilitate code reuse and collaborative development, and specialised for study-specific requirements via class inheritance. Arcana provides a framework in which to develop unified neuroimaging workflows that can be reused across a wide range of research studies and sites.


Subject(s)
Brain/diagnostic imaging , Information Storage and Retrieval/statistics & numerical data , Magnetic Resonance Imaging/statistics & numerical data , Neuroimaging/statistics & numerical data , Data Analysis , Humans , Information Storage and Retrieval/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Software , Workflow
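The class-inheritance pattern the Arcana abstract describes can be illustrated with a short sketch. This is a hypothetical illustration of the design, not the actual Arcana API: the class names (`MriAnalysis`, `DiffusionAnalysis`), method names, and the string-returning "derivatives" are all invented stand-ins for real workflow construction.

```python
# Hypothetical sketch of class-based analysis templates with inheritance,
# as described in the Arcana abstract. NOT the actual Arcana API: all
# names here are illustrative placeholders.

class MriAnalysis:
    """Generic base class: declares derivatives shared by MRI analyses."""

    def __init__(self, repository_path):
        # Stand-in for a repository reference (BIDS dir, XNAT URL, ...).
        self.repository_path = repository_path

    def brain_mask(self):
        """A generic derivative any MRI contrast could provide."""
        return f"mask derived from {self.repository_path}"


class DiffusionAnalysis(MriAnalysis):
    """Study-specific subclass: inherits generic methods, adds its own."""

    def fa_map(self):
        # A specialised derivative built on the inherited brain_mask().
        return f"FA map using {self.brain_mask()}"


if __name__ == "__main__":
    analysis = DiffusionAnalysis("/data/study01")
    print(analysis.fa_map())
```

The point of the design is that generic logic lives once in the base class, while each study overrides or extends only what differs, which is what allows the same template to be reused across sites.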
7.
Front Neuroinform ; 8: 30, 2014.
Article in English | MEDLINE | ID: mdl-24734019

ABSTRACT

The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

8.
PLoS One ; 5(4): e10049, 2010 Apr 06.
Article in English | MEDLINE | ID: mdl-20386612

ABSTRACT

BACKGROUND: The crystallographic determination of protein structures can be computationally demanding, and difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. To address this problem, we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. METHODOLOGY/PRINCIPAL FINDINGS: MrGrid is a portable web-based application written in Java/JSP and Ruby, taking advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speedup factor of 5.69. CONCLUSIONS: MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success.


Subject(s)
Computational Biology/methods , Software , Structural Homology, Protein , Crystallography, X-Ray , Internet , Programming Languages , Proteins/chemistry
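The "embarrassingly parallel" property that MrGrid exploits — each combination of search model and space group is an independent trial — can be sketched generically. This is not MrGrid itself (which is Java/JSP/Ruby over Apple Xgrid); it is a minimal Python stand-in using process workers, with a dummy scoring function in place of a real MR calculation.

```python
# Illustrative sketch of an embarrassingly parallel MR-style parameter
# sweep: independent trials fanned out to worker processes, results
# ranked afterwards. The scoring function is a dummy placeholder; the
# real MrGrid distributes actual MR runs over an Xgrid of machines.

from multiprocessing import Pool


def mr_trial(params):
    """Stand-in for one MR run: score a (search_model, space_group) pair."""
    model, space_group = params
    score = len(model) * space_group  # dummy score, not a real MR metric
    return (model, space_group, score)


def parameter_sweep(models, space_groups, workers=4):
    # Every (model, space_group) combination is independent of the rest,
    # so the trials can be distributed across workers with no coordination.
    trials = [(m, g) for m in models for g in space_groups]
    with Pool(workers) as pool:
        results = pool.map(mr_trial, trials)
    # Rank candidate solutions, best (highest dummy score) first.
    return sorted(results, key=lambda r: r[2], reverse=True)


if __name__ == "__main__":
    ranked = parameter_sweep(["modelA", "modelBB"], [1, 2, 3], workers=2)
    print(ranked[0])
```

Because no trial depends on another, the ideal speedup scales with the number of nodes; the 5.69x reported on 13 machines reflects the usual scheduling and I/O overheads of a real grid.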