Results 1 - 3 of 3
1.
J Biomed Inform ; 78: 102-122, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29223464

ABSTRACT

Managers in complex organisations often have to decide whether new software developments are worth undertaking. Such decisions are hard to make, especially at an enterprise level. Both costs and risks are regularly underestimated, despite the existence of a plethora of software and systems engineering methodologies aimed at predicting and controlling them. Our objective is to help managers and stakeholders of large, complex organisations (like the National Health Service in the UK) make better-informed decisions on the costs and risks of planned new software systems that will reuse or extend their existing information infrastructure. We analysed case studies describing new software developments undertaken by providers of health care services in the UK, looking for common points of risk and high cost. The results highlighted the movement of data within and between organisations as a key factor. Data movement can be hindered by numerous technical barriers, but also by challenges arising from social aspects of the organisation. These latter aspects are often harder to predict, and are ignored by many of the more common software engineering methodologies. In this paper, we propose data journey modelling, a new method that aims to predict places of high cost and risk when existing data must move to a new development. The method is lightweight and combines technical and social aspects, yet relies only on information that is likely to be already known to key stakeholders, or that will be cheap to acquire. To assess the effectiveness of our method, we conducted a retrospective evaluation in an NHS Foundation Trust hospital. Using the method, we were able to predict most of the points of high cost/risk that the hospital staff had identified, along with several other possible directions that the staff had not identified for themselves but agreed could be promising.


Subject(s)
Computer Communication Networks , Decision Making, Organizational , Medical Informatics Applications , Models, Theoretical , Software , Hospitals , Humans , Retrospective Studies , State Medicine/organization & administration
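
The abstract gives no concrete notation for data journey modelling, but its core idea, a model of an organisation's data containers and of the journeys data must make between them, each weighted by technical and social friction, can be sketched as a small structure. The class names, the 1-5 cost scales and the hotspot threshold below are illustrative assumptions, not the authors' published method.

```python
# Hypothetical sketch of a data journey model: nodes are data containers,
# edges are journeys a new development would force data to take. The cost
# fields and the scoring rule are assumptions made for illustration, not
# the method published in the paper.
from dataclasses import dataclass, field

@dataclass
class Journey:
    source: str          # data container the data leaves
    target: str          # data container the data enters
    technical_cost: int  # e.g. format conversion, missing interfaces (1-5)
    social_cost: int     # e.g. cross-team ownership, approval hurdles (1-5)

@dataclass
class DataJourneyModel:
    journeys: list[Journey] = field(default_factory=list)

    def add(self, source, target, technical_cost, social_cost):
        self.journeys.append(Journey(source, target, technical_cost, social_cost))

    def hotspots(self, threshold=6):
        """Return journeys whose combined cost suggests high risk."""
        return [j for j in self.journeys
                if j.technical_cost + j.social_cost >= threshold]

# Usage: model part of a hospital's infrastructure and flag risky journeys.
model = DataJourneyModel()
model.add("pathology LIMS", "new reporting system", technical_cost=4, social_cost=3)
model.add("patient admin system", "new reporting system", technical_cost=2, social_cost=1)
for j in model.hotspots():
    print(f"High-risk journey: {j.source} -> {j.target}")
```

Scoring technical and social cost separately mirrors the abstract's point that social barriers are distinct from, and often harder to predict than, technical ones.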
2.
Nucleic Acids Res ; 36(Web Server issue): W485-90, 2008 Jul 01.
Article in English | MEDLINE | ID: mdl-18440977

ABSTRACT

Despite the growing volumes of proteomic data, integration of the underlying results remains problematic owing to differences in formats, data captured, protein accessions and services available from the individual repositories. To address this, we present the ISPIDER Central Proteomic Database search (http://www.ispider.manchester.ac.uk/cgi-bin/ProteomicSearch.pl), an integration service offering novel search capabilities over leading, mature proteomic repositories including the PRoteomics IDEntifications database (PRIDE), PepSeeker, PeptideAtlas and the Global Proteome Machine. It enables users to search for proteins and peptides that have been characterised in mass spectrometry-based proteomics experiments from different groups, stored in different databases, and to view the collated results with specialist viewers/clients. To overcome the limitations imposed by the great variability in protein accessions used by individual laboratories, the European Bioinformatics Institute's Protein Identifier Cross-Reference (PICR) service is used to resolve accessions from different sequence repositories. Custom-built clients allow users to view peptide/protein identifications in different contexts from multiple experiments and repositories, as well as to integrate with the Dasty2 client, which supports any annotations available from Distributed Annotation System servers. Further information on the protein hits may also be added via external web services able to take a protein as input. This web server offers the first truly integrated access to proteomics repositories and provides a unique service to biologists interested in mass spectrometry-based proteomics.


Subject(s)
Databases, Protein , Proteomics , Software , Computer Graphics , Internet , Mass Spectrometry , Systems Integration
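
The abstract names only the service URL, not its CGI parameters, so the sketch below shows the general pattern of querying such an HTTP search endpoint. The "query" parameter name and the example accession are assumptions, not the service's documented interface.

```python
# Hypothetical sketch of querying an HTTP search service like the one
# described above. The parameter name is a placeholder: the abstract gives
# only the URL, not the CGI interface, so treat this as the general pattern
# rather than the service's documented API.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "http://www.ispider.manchester.ac.uk/cgi-bin/ProteomicSearch.pl"

def search_protein(accession: str) -> str:
    """Send a GET request for a protein accession and return the raw response."""
    params = urlencode({"query": accession})  # parameter name is an assumption
    with urlopen(f"{BASE_URL}?{params}", timeout=30) as response:
        return response.read().decode("utf-8", errors="replace")

# Usage (the accession is an example UniProt identifier):
# print(search_protein("P02768")[:500])
```

A real client would first resolve accessions through the PICR service, as the abstract describes; that step is omitted here because its interface is likewise not given in the abstract.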
3.
Brief Bioinform ; 9(2): 174-88, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18281347

ABSTRACT

Proteomics, the study of the protein complement of a biological system, is generating increasing quantities of data from rapidly developing technologies employed in a variety of different experimental workflows. Experimental processes, e.g. for comparative 2D gel studies or LC-MS/MS analyses of complex protein mixtures, involve a number of steps: from experimental design, through wet and dry lab operations, to publication of data in repositories, and finally to data annotation and maintenance. The presence of inaccuracies throughout the processing pipeline, however, results in data that can be untrustworthy, offsetting the benefits of high-throughput technology. While researchers and practitioners are generally aware of some of the information quality issues associated with public proteomics data, there are few accepted criteria and guidelines for dealing with them. In this article, we highlight factors that affect the quality of experimental data and review current approaches to information quality management in proteomics. Data quality issues are considered throughout the lifecycle of a proteomics experiment, from experiment design and technique selection, through data analysis, to archiving and sharing.


Subject(s)
Information Storage and Retrieval , Proteomics , Quality Control , Database Management Systems , Electrophoresis, Gel, Two-Dimensional , Information Storage and Retrieval/methods , Information Storage and Retrieval/standards , Mass Spectrometry , Proteins/analysis , Proteomics/instrumentation , Proteomics/methods , Proteomics/standards , Software
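
The review surveys quality criteria rather than prescribing code, but the kind of automated completeness check it motivates can be sketched as follows. The required fields and the example records are invented for illustration; real acceptance criteria would come from the community guidelines the review discusses, such as repository submission requirements.

```python
# Illustrative completeness check for proteomics result records. The
# required-field list and the records are invented examples, not criteria
# taken from the review.
REQUIRED_FIELDS = {"protein_accession", "peptide_sequence", "instrument", "search_engine"}

def quality_report(records):
    """Yield (index, missing_fields) for records failing the completeness check."""
    for i, record in enumerate(records):
        present = {k for k, v in record.items() if v}  # fields with non-empty values
        missing = REQUIRED_FIELDS - present
        if missing:
            yield i, sorted(missing)

# Usage: the second record has empty values and is flagged.
records = [
    {"protein_accession": "P02768", "peptide_sequence": "DAHKSEVAHR",
     "instrument": "LTQ-Orbitrap", "search_engine": "Mascot"},
    {"protein_accession": "Q9Y6K9", "peptide_sequence": "",
     "instrument": "LTQ-Orbitrap", "search_engine": ""},
]
for index, missing in quality_report(records):
    print(f"Record {index} is missing: {', '.join(missing)}")
```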