Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-33279813

ABSTRACT

Shotgun proteomics is a high-throughput technology developed to investigate the maximum number of proteins in cells in a given experiment. However, protein discovery and data generation vary in depth and coverage depending on the technical strategy selected. In this study, three different combinations of sample preparation and peptide- or protein-level fractionation were applied to identify and quantify proteins from log-phase yeast lysate: sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), filter-aided sample preparation coupled with gas-phase fractionation (FASP-GPF), and FASP coupled with high-pH reversed-phase fractionation (FASP-HpH). Fractions were initially analyzed and compared using nanoflow liquid chromatography-tandem mass spectrometry (nanoLC-MS/MS) with data-dependent acquisition on a linear ion trap instrument. The number of fractions and analytical replicates was adjusted so that each experiment used a similar amount of mass spectrometric instrument time. A second set of experiments compared FASP-GPF, SDS-PAGE and FASP-HpH on a Q Exactive Orbitrap mass spectrometer. Compared with the linear ion trap results, the Q Exactive Orbitrap enabled a substantial increase in protein identifications and an even greater increase in peptide identifications, showing that the main advantage of the higher-resolution instrument is increased proteome coverage. Totals of 1035, 1357 and 2134 proteins were identified by FASP-GPF, SDS-PAGE and FASP-HpH, respectively. Combining results across the Orbitrap experiments gave 2269 proteins in total, 94% of which were identified using the FASP-HpH method. FASP-HpH is therefore the optimal choice among these approaches for this type of sample.
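
A small sketch of how the combined-coverage figures quoted in the abstract can be derived from per-method protein identification lists. The sets below are toy placeholders, not the study's actual identifications; in practice each set would hold the protein accessions reported by one method.

```python
# Toy identification lists; real sets would contain 1035, 1357 and 2134 accessions.
fasp_gpf = {"P1", "P2", "P3"}
sds_page = {"P2", "P3", "P4"}
fasp_hph = {"P1", "P2", "P3", "P4", "P5"}

# Union across methods gives the combined protein count (2269 in the study),
# and the share of that union covered by FASP-HpH alone (94% in the study).
combined = fasp_gpf | sds_page | fasp_hph
coverage_hph = len(fasp_hph & combined) / len(combined)
print(len(combined), f"{coverage_hph:.0%}")
```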


Subjects
Chromatography, Reverse-Phase/methods, Electrophoresis, Polyacrylamide Gel/methods, Proteomics/methods, Saccharomyces cerevisiae Proteins/analysis, Hydrogen-Ion Concentration, Peptides/analysis, Peptides/chemistry, Proteome/analysis, Proteome/chemistry, Saccharomyces cerevisiae Proteins/chemistry, Tandem Mass Spectrometry/methods
3.
Proteomes; 8(3), 2020 Aug 21.
Article in English | MEDLINE | ID: mdl-32825686

ABSTRACT

PeptideWitch is a Python-based web module that introduces several key graphical and technical improvements to the Scrappy software platform, which is designed for label-free quantitative shotgun proteomics analysis using normalised spectral abundance factors. The program takes as input low-stringency protein identification lists output by peptide-to-spectrum matching search engines for 'control' and 'treated' samples. Through a combination of spectral count summation and inner joins, PeptideWitch processes the low-stringency data and outputs high-stringency data suitable for downstream quantitation. Data quality metrics are generated, and a series of statistical analyses and graphical representations are presented, aimed at defining and presenting the differences between the two sample proteomes.
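
A minimal sketch (not PeptideWitch itself) of the two core operations the abstract describes: summing spectral counts per protein, inner-joining the 'control' and 'treated' lists so only proteins seen in both are carried forward, and computing normalised spectral abundance factors (NSAF). The column names ("protein", "length", "spectral_counts") are assumptions for illustration.

```python
import pandas as pd

def nsaf(df, spc="spectral_counts", length="length"):
    """NSAF for each protein: (SpC / L) divided by the sum of (SpC / L) over all proteins."""
    saf = df[spc] / df[length]
    return saf / saf.sum()

def join_high_stringency(control, treated):
    """Sum spectral counts per protein in each sample, then inner-join so that only
    proteins identified in both samples survive for downstream quantitation."""
    def summed(df):
        out = df.groupby(["protein", "length"], as_index=False)["spectral_counts"].sum()
        out["nsaf"] = nsaf(out)
        return out
    return summed(control).merge(summed(treated), on=["protein", "length"],
                                 how="inner", suffixes=("_control", "_treated"))

# Toy identification lists: only P1 survives the inner join.
control = pd.DataFrame({"protein": ["P1", "P1", "P2"], "length": [350, 350, 120],
                        "spectral_counts": [4, 6, 2]})
treated = pd.DataFrame({"protein": ["P1", "P3"], "length": [350, 500],
                        "spectral_counts": [12, 7]})
print(join_high_stringency(control, treated))
```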

4.
J Am Soc Mass Spectrom; 31(7): 1337-1343, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32324388

ABSTRACT

We randomly selected 100 journal articles published in five proteomics journals in 2019 and manually examined each against a set of 13 criteria concerning the statistical analyses used, all based on items mentioned in the journals' instructions to authors. These included questions such as whether a pilot study was conducted and whether a false discovery rate calculation was employed at either the quantitation or identification stage. The data were then transformed into binary inputs, analyzed via machine learning algorithms, and classified accordingly, with the aim of determining whether clusters of data existed for specific journals or whether certain statistical measures correlated with each other. We applied a variety of classification methods, including principal component analysis decomposition, agglomerative clustering, and multinomial and Bernoulli naïve Bayes classification, and found that none of these could readily determine journal identity from the extracted statistical features. Logistic regression was useful in identifying strong correlations between statistical features, such as false discovery rate criteria and multiple testing correction methods, but was similarly ineffective at linking statistical features to specific journals. This meta-analysis highlights that a very wide variety of approaches is being used in the statistical analysis of proteomics data, many of which do not conform to published journal guidelines, and that, contrary to implicit assumptions in the field, there are no clear correlations between statistical methods and specific journals.
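
A minimal sketch of the kind of analysis described: 13 binary statistical criteria per article used as features for classifiers that try to recover the journal label, and logistic regression used to probe correlations among the criteria themselves. The feature matrix, labels, and column roles below are synthetic placeholders, not the authors' data.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 13))    # 100 articles x 13 binary criteria (synthetic)
journals = rng.integers(0, 5, size=100)   # 5 journal labels (random, so no real signal)

# Can journal identity be predicted from the statistical criteria?
nb_score = cross_val_score(BernoulliNB(), X, journals, cv=5).mean()

# Do criteria correlate with one another? E.g. predict criterion 0 (say, "FDR reported")
# from the remaining criteria with logistic regression.
lr_score = cross_val_score(LogisticRegression(max_iter=1000), X[:, 1:], X[:, 0], cv=5).mean()

print(f"journal classification accuracy: {nb_score:.2f}")
print(f"criterion-from-criteria accuracy: {lr_score:.2f}")
```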


Subjects
Proteomics, Statistics as Topic, Biomedical Research, Humans
5.
Proteomics; 18(23): e1800222, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30352137

ABSTRACT

Western blotting as an orthogonal validation tool for quantitative proteomics data has rapidly become a de facto requirement for publication. In this viewpoint article, the pros and cons of western blotting as a validation approach are discussed, using examples from our own published work, and how best to apply it to improve the quality of published data is outlined. Further, suggestions and guidelines are provided for some other experimental approaches that can be used to validate quantitative proteomics data in addition to, or in place of, western blotting.


Subjects
Proteomics/methods, Blotting, Western, Data Accuracy
6.
Proteomics; 16(18): 2448-53, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27461997

ABSTRACT

Multiple testing corrections are a useful tool for restricting the false discovery rate (FDR), but can be blunt in the context of low power, as we demonstrate with a series of simple simulations. Unfortunately, low power is common in proteomics experiments, driven by field-specific issues such as small effects due to ratio compression and few replicates due to high reagent costs, limited instrument time, and other constraints; in such situations, most multiple testing correction methods, if used with conventional thresholds, will fail to detect any true positives even when many exist. In this low-power, medium-scale situation, other approaches such as effect size considerations or peptide-level calculations may be more effective, even if they do not offer the same theoretical guarantee of a low FDR. Thus, we aim to highlight in this article that proteomics presents some specific challenges to standard multiple testing correction methods, which should be employed as a useful tool but not regarded as a required rubber stamp.
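
A simple simulation in the spirit of the abstract (not the authors' code): small effect sizes and few replicates give low statistical power, so Benjamini-Hochberg FDR control at a conventional threshold recovers few of the truly changing proteins. All parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_proteins, n_true, n_reps = 2000, 200, 3   # 10% of proteins truly change; triplicates
effect = 0.5                                # small (ratio-compressed) effect, in SD units

control = rng.normal(0.0, 1.0, size=(n_proteins, n_reps))
treated = rng.normal(0.0, 1.0, size=(n_proteins, n_reps))
treated[:n_true] += effect                  # the first n_true proteins truly change

# Per-protein two-sample t-tests, then Benjamini-Hochberg correction at alpha = 0.05.
p = stats.ttest_ind(treated, control, axis=1).pvalue
rejected, _, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")

print(f"True positives detected after BH correction: {rejected[:n_true].sum()} / {n_true}")
print(f"Detected without any correction (p < 0.05):  {(p[:n_true] < 0.05).sum()} / {n_true}")
```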


Subjects
Computational Biology/methods, Proteomics/methods, Algorithms, Proteins/analysis, Tandem Mass Spectrometry