1.
Innov Clin Neurosci ; 17(7-9): 30-40, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-33520402

ABSTRACT

Objective: The goal of the Depression Inventory Development (DID) project is to develop a comprehensive and psychometrically sound rating scale for major depressive disorder (MDD) that reflects current diagnostic criteria and conceptualizations of depression. We report here the evaluation of the current DID item bank using Classical Test Theory (CTT), Item Response Theory (IRT), and Rasch Measurement Theory (RMT). Methods: The present study was part of a larger multisite, open-label study conducted by the Canadian Biomarker Integration Network in Depression (ClinicalTrials.gov: NCT01655706). Trained raters administered the 32 DID items at each of two visits (MDD: baseline, n=211 and Week 8, n=177; healthy participants: baseline, n=112 and Week 8, n=104). The DID's "grid" structure operationalizes the intensity and frequency of each item, with clear symptom definitions and a structured interview guide; the current iteration assesses symptoms related to anhedonia, cognition, fatigue, general malaise, motivation, anxiety, negative thinking, pain, and appetite. Participants were also administered the Montgomery-Åsberg Depression Rating Scale (MADRS) and the Quick Inventory of Depressive Symptomatology-Self-Report (QIDS-SR), which allowed DID items to be evaluated against existing "benchmark" items. CTT was used to assess data quality/reliability (i.e., missing data, skewness, scoring frequency, internal consistency), IRT to assess individual item performance by modelling an item's ability to discriminate levels of depressive severity (as assessed by the MADRS), and RMT to assess how the items perform together as a scale to capture a range of depressive severity (item targeting). Together, these analyses provided empirical evidence on which to base decisions about which DID items to remove, modify, or advance. Results: Of the 32 DID items evaluated, eight were identified by CTT as problematic, displaying low variability in the range of responses, floor effects, and/or skewness; four were identified by IRT as showing poor discriminative properties that would limit their clinical utility. Five additional items were deemed redundant. The remaining 15 DID items all fit the Rasch model, with person and item difficulty estimates indicating satisfactory item targeting, although precision was lower in participants with mild levels of depression. These 15 DID items also showed good internal consistency (alpha=0.95, with inter-item correlations ranging from r=0.49 to r=0.84), and all were sensitive to change following antidepressant treatment (baseline vs. Week 8). RMT revealed problematic item targeting for the MADRS and QIDS-SR, including an absence of MADRS items targeting participants with mild/moderate depression and an absence of QIDS-SR items targeting participants with mild or severe depression. Conclusion: The present study applied CTT, IRT, and RMT to assess the measurement properties of the DID items and identify those that should be advanced, modified, or removed. Of the 32 items evaluated, 15 showed good measurement properties. These items (along with previously evaluated items) will provide the basis for validation of a penultimate DID scale assessing anhedonia, cognitive slowing, concentration, executive function, recent memory, drive, emotional fatigue, guilt, self-esteem, hopelessness, tension, rumination, irritability, reduced appetite, insomnia, sadness, worry, suicidality, and depressed mood.
The strategies adopted by the DID process provide a framework for rating scale development and validation.
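
As an illustration of the CTT screening step described above, the sketch below computes Cronbach's alpha and flags items with strong skew or weak corrected item-total correlations. It is a minimal example on simulated 0-4 ratings; the function names, cutoffs, and data are illustrative assumptions, not part of the DID protocol.

```python
# Minimal CTT-style item screening sketch (assumed names and cutoffs).
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: 2D array of shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def screen_items(items, skew_cutoff=2.0, r_cutoff=0.30):
    """Flag items with strong skew or weak corrected item-total correlation."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    report = {}
    for j in range(items.shape[1]):
        rest = total - items[:, j]                    # total with this item removed
        r_it = np.corrcoef(items[:, j], rest)[0, 1]   # corrected item-total r
        skew = stats.skew(items[:, j])
        report[j] = {"item_total_r": round(float(r_it), 2),
                     "skew": round(float(skew), 2),
                     "flagged": bool(abs(skew) > skew_cutoff or r_it < r_cutoff)}
    return report

# Toy data: 200 simulated participants rating 10 items on a 0-4 scale
rng = np.random.default_rng(0)
data = rng.integers(0, 5, size=(200, 10))
print(round(cronbach_alpha(data), 2))
print(screen_items(data)[0])
```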

2.
J Affect Disord ; 256: 143-147, 2019 09 01.
Article in English | MEDLINE | ID: mdl-31176186

ABSTRACT

The International Society for CNS Clinical Trials and Methodology convened an expert Working Group that assembled consistency/inconsistency flags for the Montgomery-Åsberg Depression Rating Scale (MADRS). Twenty-two flags were identified. Seven are considered strong flags, indicating that a thorough review of the rating is warranted. The flags were applied to assessments derived from the NEWMEDS data repository. Almost 65% of ratings had at least one inconsistency flag raised, and 22% had two or more. Application of the flags to clinical ratings may improve the reliability of ratings and the validity of trials.
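
The abstract does not list the individual flags, but the general idea of rule-based consistency checks can be sketched as follows. The two rules and item names below are hypothetical examples for illustration only, not the Working Group's actual flag definitions.

```python
# Hypothetical rule-based consistency flags applied to MADRS item scores (0-6).
from typing import Dict, List

def raise_flags(items: Dict[str, int]) -> List[str]:
    """items: MADRS item name -> score. Returns names of raised flags."""
    flags = []
    # Illustrative rule: large gap between reported and apparent sadness
    if abs(items["reported_sadness"] - items["apparent_sadness"]) >= 3:
        flags.append("reported_vs_apparent_sadness_gap")
    # Illustrative rule: severe suicidal thoughts with near-zero pessimism
    if items["suicidal_thoughts"] >= 4 and items["pessimistic_thoughts"] <= 1:
        flags.append("suicidality_without_pessimism")
    return flags

rating = {"apparent_sadness": 5, "reported_sadness": 1,
          "pessimistic_thoughts": 0, "suicidal_thoughts": 4}
print(raise_flags(rating))  # both hypothetical flags fire for this rating
```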


Subjects
Depression/diagnosis, Psychiatric Status Rating Scales/standards, Adult, Female, Humans, Male, Middle Aged, Psychometrics, Reproducibility of Results
3.
Ther Innov Regul Sci ; 53(2): 176-182, 2019 03.
Article in English | MEDLINE | ID: mdl-29758992

ABSTRACT

Monitoring the quality of clinical trial efficacy outcome data has received increased attention in the past decade, with regulatory guidance encouraging it to be conducted proactively and remotely. However, the methods used to develop and implement risk-based data monitoring (RBDM) programs vary, and there is a dearth of published material to guide these processes in the context of central nervous system (CNS) trials. We reviewed regulatory guidance published within the past 6 years, generic white papers, and studies applying RBDM to data from CNS clinical trials. Methodologic considerations and system requirements necessary to establish an effective, real-time risk-based monitoring platform in CNS trials are presented. Key RBDM terms are defined in the context of CNS trial data, such as "critical data," "risk indicators," "noninformative data," and "mitigation of risk." In addition, potential benefits of, and challenges associated with, the implementation of data quality monitoring are highlighted. Applying these methodological and system-requirement considerations to real-time monitoring of clinical ratings in CNS trials has the potential to minimize risk and enhance the quality of clinical trial data.
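
As a rough illustration of the kind of risk indicator such a platform might compute centrally, the sketch below flags sites whose ratings show unusually low variability, one common marker of potentially noninformative data. The site data, function name, and z-score cutoff are assumptions for illustration, not recommendations from the guidance reviewed.

```python
# Hypothetical central risk indicator: sites with unusually low rating variability.
import numpy as np

def low_variability_sites(site_scores: dict, z_cutoff: float = -1.0):
    """site_scores: site id -> list of total scores collected at that site."""
    sds = {site: np.std(scores, ddof=1) for site, scores in site_scores.items()}
    vals = np.array(list(sds.values()))
    mean_sd, sd_of_sds = vals.mean(), vals.std(ddof=1)
    return [site for site, sd in sds.items()
            if (sd - mean_sd) / sd_of_sds < z_cutoff]

sites = {"site_A": [22, 23, 22, 23, 22],   # suspiciously uniform scores
         "site_B": [14, 30, 9, 25, 18],
         "site_C": [20, 12, 27, 16, 31]}
print(low_variability_sites(sites))  # flags site_A in this toy example
```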


Subjects
Central Nervous System Agents/therapeutic use, Clinical Trials as Topic/standards, Humans, Quality Control, Risk
4.
Innov Clin Neurosci ; 13(9-10): 20-31, 2016.
Article in English | MEDLINE | ID: mdl-27974997

ABSTRACT

The Depression Inventory Development project is an initiative of the International Society for CNS Drug Development whose goal is to develop a comprehensive and psychometrically sound measurement tool to be utilized as a primary endpoint in clinical trials for major depressive disorder. Using an iterative process between field testing and psychometric analysis, and drawing upon the expertise of international researchers in depression, the Depression Inventory Development team has established an empirically driven and collaborative protocol for the creation of items to assess symptoms in major depressive disorder. Depression-relevant symptom clusters were identified based on expert clinical and patient input. In addition, as an aid for symptom identification and item construction, the psychometric properties of existing clinical scales (assessing depression and related indications) were evaluated using blinded datasets from pharmaceutical antidepressant drug trials. A series of field tests in patients with major depressive disorder provided the team with data to inform the iterative process of scale development. We report here an overview of the Depression Inventory Development initiative, including results of the third iteration of items assessing symptoms related to anhedonia, cognition, fatigue, general malaise, motivation, anxiety, negative thinking, pain, and appetite. The strategies adopted in the Depression Inventory Development program, as an empirically driven and collaborative process for scale development, have provided the foundation to develop and validate measurement tools in other therapeutic areas as well.

5.
Innov Clin Neurosci ; 13(1-2): 27-33, 2016.
Article in English | MEDLINE | ID: mdl-27413584

ABSTRACT

This paper summarizes the results of the CNS Summit Data Quality Monitoring Workgroup analysis of current data quality monitoring techniques used in central nervous system (CNS) clinical trials. Based on audience polls conducted at the CNS Summit 2014, the panel determined that current techniques used to monitor data and quality in clinical trials are broad and uncontrolled and lack independent verification. The majority of those polled endorsed the value of monitoring data. Case examples of current data quality methodology are presented and discussed. Perspectives of pharmaceutical companies and trial sites regarding data quality monitoring are presented. Potential future developments in CNS data quality monitoring are described. Increased utilization of biomarkers as objective outcomes and for patient selection is considered to be the most impactful development in data quality monitoring over the next 10 years. Additional future outcome measures and patient selection approaches are discussed.

7.
J Clin Psychopharmacol ; 30(2): 193-7, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20520295

ABSTRACT

The use of centralized raters who are remotely linked to sites and interview patients via videoconferencing or teleconferencing has been suggested as a way to improve interrater reliability and interview quality. This study compared the effect of site-based and centralized ratings on patient selection and placebo response in subjects with major depressive disorder. Subjects in a 2-center, placebo- and active comparator-controlled depression trial were interviewed twice at each of 3 time points (baseline, 1-week postbaseline, and end point): once by the site rater and once remotely via videoconference by a centralized rater. Raters were blind to each other's scores. A site-based score of greater than 17 on the 17-item Hamilton Depression Rating Scale (HDRS-17) was required for study entry. When examining all subjects entering the study, site-based raters' HDRS-17 scores were significantly higher than centralized raters' scores at baseline and postbaseline but not at end point. At baseline, 35% of subjects given an HDRS-17 total score of greater than 17 by a site rater were given a total score of lower than 17 by a centralized rater and would have been ineligible to enter the study had the centralized rater's score been used to determine study entry. The mean placebo change for site raters (7.52) was significantly greater than the mean placebo change for centralized raters (3.18, P < 0.001). Twenty-eight percent of subjects were placebo responders (>50% reduction in HDRS) based on site ratings versus 14% based on central ratings (P < 0.001). When examining data only from those subjects whom site and centralized raters agreed were eligible for the study, there was no significant difference in HDRS-17 scores. These findings suggest that the use of centralized raters could significantly change the study sample in a major depressive disorder trial and lead to significantly less change in mood ratings among those randomized to placebo.
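
The responder definition used above (>50% reduction in HDRS-17 from baseline) is simple to express directly; the sketch below shows the calculation on made-up scores, not the study data.

```python
# Worked sketch of the responder-rate calculation (toy values only).
def response_rate(baseline, endpoint):
    """Proportion of subjects with >50% reduction from baseline."""
    responders = sum(1 for b, e in zip(baseline, endpoint) if (b - e) / b > 0.5)
    return responders / len(baseline)

site_baseline,    site_endpoint    = [20, 22, 19, 24], [8, 12, 6, 11]
central_baseline, central_endpoint = [16, 18, 15, 21], [12, 14, 13, 15]
print(response_rate(site_baseline, site_endpoint))        # 0.75 with these toy values
print(response_rate(central_baseline, central_endpoint))  # 0.0 with these toy values
```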


Subjects
Major Depressive Disorder/diagnosis, Major Depressive Disorder/drug therapy, Patient Selection, Psychiatric Status Rating Scales/standards, Remote Consultation/standards, Cross-Sectional Studies, Major Depressive Disorder/psychology, Female, Humans, Male, Observer Variation, Placebo Effect, Sertraline/therapeutic use, Single-Blind Method, Treatment Outcome
8.
Int Clin Psychopharmacol ; 23(3): 120-9, 2008 May.
Article in English | MEDLINE | ID: mdl-18408526

ABSTRACT

This report describes the GRID-Hamilton Depression Rating Scale (GRID-HAMD), an improved version of the Hamilton Depression Rating Scale that was developed through a broad-based international consensus process. The GRID-HAMD separates the frequency of a symptom from its intensity for most items, refines several problematic anchors, and integrates both a structured interview guide and consensus-derived conventions for all items. Usability was established in a small, three-site convenience sample of 29 outpatients, with most evaluators finding the scale easy to use. Test-retest (4-week) and interrater reliability were established in 34 adult outpatients with major depressive disorder as part of an ongoing clinical trial. In a separate study, interrater reliability was found to be superior to that of the Guy version of the HAMD and as good as that of the Structured Interview Guide for the Hamilton Depression Rating Scale (SIGH-D), across 30 interview pairs. Finally, using the SIGH-D as the criterion standard, the GRID-HAMD demonstrated high concurrent validity. Overall, these data suggest that the GRID-HAMD improves on both the original Guy version and the SIGH-D, incorporating innovative features while preserving high reliability and validity.


Subjects
Major Depressive Disorder/diagnosis, Psychological Interview/standards, Psychiatric Status Rating Scales/standards, Surveys and Questionnaires/standards, Adult, Consensus Development Conferences as Topic, Major Depressive Disorder/psychology, Major Depressive Disorder/therapy, Humans, International Cooperation, Observer Variation, Pilot Projects, Predictive Value of Tests, Psychometrics, Reproducibility of Results, Treatment Outcome, United States
9.
Depress Anxiety ; 25(9): 774-86, 2008.
Article in English | MEDLINE | ID: mdl-17935212

ABSTRACT

Efforts to improve the Hamilton Rating Scale for Depression (HRSD) have included shortening the scale by selecting the best-performing items, lengthening the scale by assessing additional symptoms, modifying the format and scoring of existing items, and developing structured interview guides for administration. We defined item performance exclusively in terms of the ability of items to discriminate differences among levels of depressive severity, a criterion that has not been used to guide any revisions of the HRSD conducted to date. Two techniques derived from item response theory were used to improve the ability of the HRSD to discriminate among individuals with different degrees of depressive severity. Item response curves were used to quantify the ability of items to discriminate among individual differences in depressive severity, on the basis of which the most discriminating items were selected. Maximum likelihood estimates were used to compute an optimal depressive severity score that used all items but weighted highly discriminating items more heavily than items that did not discriminate well. The utility of each method was evaluated by comparing a subset of optimally discriminating items, and maximum likelihood estimates of depressive severity, to the Maier-Philipp subscale of the HRSD in terms of how well the scales discriminate treatment effects. Effect sizes for overall change in depression severity, as well as effect sizes differentiating response to treatment versus placebo, were evaluated in a sample of 491 patients receiving fluoxetine and 494 patients receiving placebo. The analyses identified a new subset of items (IRT-6), selected on the basis of their ability to discriminate among differences in depressive severity, that accounted for more variance in full-scale HRSD scores and was better at detecting change in illness severity than the Maier-Philipp subscale of the HRSD. The IRT-6 subscale was as good as the Maier-Philipp subscale in differentiating treatment from placebo response. No evidence supporting the benefits of using maximum likelihood estimates to develop optimally performing subscales was found. Implications of the results are discussed in terms of strategies for optimizing the assessment of change in overall depression severity as well as differentiating treatment response.
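
For readers unfamiliar with item response curves, the following sketch shows the 2PL-style logic behind item discrimination: the probability of endorsing an item rises with latent severity, and the discrimination parameter controls how sharply. The parameters are illustrative assumptions, not estimates from the HRSD analyses reported here.

```python
# 2PL item characteristic curve sketch with illustrative parameters.
import numpy as np

def item_response_prob(theta, a, b):
    """P(endorse item | latent severity theta), discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                                   # latent severity grid
high_discrimination = item_response_prob(theta, a=2.0, b=0.0)   # steep curve
low_discrimination  = item_response_prob(theta, a=0.4, b=0.0)   # flat curve
print(np.round(high_discrimination, 2))
print(np.round(low_discrimination, 2))
# A flat curve barely separates mild from severe cases, which is why
# low-discrimination items are candidates for removal.
```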


Subjects
Antidepressive Agents/therapeutic use, Depression/diagnosis, Depression/drug therapy, Fluoxetine/therapeutic use, Adult, Depression/psychology, Female, Humans, Male, Reproducibility of Results, Severity of Illness Index, Surveys and Questionnaires
10.
Psychiatry Res ; 158(1): 99-103, 2008 Feb 28.
Article in English | MEDLINE | ID: mdl-17961715

ABSTRACT

Poor inter-rater reliability (IRR) is an important methodological factor that may contribute to failed trials. The sheer number of raters at diverse sites in multicenter trials presents a formidable calibration challenge. Videoconferencing allows the IRR of raters at diverse sites to be evaluated by enabling raters at different sites to each independently interview a common patient. This is a more rigorous test of IRR than passive rating of videotapes. To evaluate the potential impact of videoconferencing on IRR, we compared IRR obtained via videoconference with IRR obtained using face-to-face interviews. Four raters at three different locations were paired using all pair-wise combinations of raters. Using videoconferencing, each paired rater independently conducted an interview with the same patient, who was at a third, central location. Raters were blind to each other's scores. The ICC from this cohort (n=22) was not significantly different from the ICC obtained by a cohort using two face-to-face interviews (n=21) (0.90 vs. 0.93, respectively), nor from a cohort using one face-to-face interview and one remote interview (n=21) (0.88). The mean Hamilton Depression Rating Scale (HAMD) scores obtained were not significantly different. There appears to be no loss of signal using remote methods of calibration compared with traditional face-to-face methods.
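
As a reference for the reliability statistic used above, the sketch below computes a two-way, absolute-agreement ICC(2,1) for a subjects-by-raters score matrix. The paired scores are invented for illustration, and the abstract does not specify which ICC variant was used, so treat the formula choice as an assumption.

```python
# ICC(2,1) (two-way random effects, absolute agreement, single rater) sketch.
import numpy as np

def icc_2_1(scores):
    """scores: array of shape (n_subjects, n_raters)."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between-subject SS
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between-rater SS
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols # residual SS
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

pairs = np.array([[24, 23], [18, 20], [30, 28], [12, 13], [21, 22]])  # toy HAMD pairs
print(round(icc_2_1(pairs), 2))  # ~0.97 for these toy scores
```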


Subjects
Major Depressive Disorder/diagnosis, Major Depressive Disorder/psychology, Surveys and Questionnaires, Videoconferencing/statistics & numerical data, Major Depressive Disorder/epidemiology, Humans, Observer Variation
12.
Int Clin Psychopharmacol ; 22(4): 187-91, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17519640

ABSTRACT

Clinical trials are becoming increasingly international in scope. Global studies pose unique challenges in training and calibrating raters owing to language and cultural differences. Recent findings that poorly conducted interviews reduce study power make attention to raters' clinical skills critical. In this study, 109 raters from 14 countries went through a two-step certification process on the Hamilton Depression and Anxiety Rating Scales: (i) an online didactic tutorial on scoring conventions, and (ii) applied clinical training, consisting of small language-specific groups in which raters took turns interviewing patients while observed by an expert trainer, and observation and evaluation of individual interviews. Translators were used when native-language trainers were unavailable. Those who were unable to attend the startup meeting received the training individually via telephone. Results showed a significant improvement in raters' knowledge of scoring conventions, with the mean number of correct answers on the 20-item test improving from 14.59 to 17.83, P<0.0001. In addition, raters' clinical skills improved significantly, with the mean score on the Rater Applied Performance Scale improving from the first to the second testing from 10.25 to 11.31, P=0.003. These results support the efficacy of this applied training model in improving raters' applied clinical skills in multinational trials.


Subjects
Certification, Clinical Trials as Topic/standards, Multicenter Studies as Topic/standards, Research Personnel/education, Research Personnel/standards, Antidepressive Agents/therapeutic use, Clinical Competence/standards, Depressive Disorder/drug therapy, Humans, International Cooperation, Language, Observer Variation, Psychiatric Status Rating Scales, Teaching/methods, Telecommunications
13.
Schizophr Res ; 92(1-3): 63-7, 2007 May.
Article in English | MEDLINE | ID: mdl-17336501

ABSTRACT

Problems associated with clinician-administered rating scales have led to new approaches to improve rater training. These include interactive, online didactic tutorials and live, remote evaluation of raters' clinical skills through videoconferencing. The purpose of this study was to evaluate this approach for training novice raters in administering the Positive and Negative Syndrome Scale (PANSS). Twelve trainees with no prior PANSS experience completed didactic training via CD-ROM and two remote training sessions in which they interviewed a standardized patient-actor while being remotely observed in real time and given feedback. Results showed a significant improvement in trainees' conceptual knowledge and an improvement in trainees' clinical skills. The use of these technologies allows training to be delivered more effectively to diverse sites in multi-center trials and allows evaluation of raters' applied clinical skills, an area that has previously been overlooked.


Subjects
Health Personnel/education, Internet/statistics & numerical data, Schizophrenia/diagnosis, Schizophrenia/epidemiology, Surveys and Questionnaires, Teaching/methods, Videoconferencing, Adult, Female, Humans, Male, Observer Variation, Patient Satisfaction, Pilot Projects
14.
J Clin Psychopharmacol ; 26(1): 71-4, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16415710

ABSTRACT

OBJECTIVE: The quality of clinical interviews conducted in industry-sponsored clinical drug trials is an important but frequently overlooked variable that may influence the outcome of a study. We evaluated the quality of Hamilton Rating Scale for Depression (HAM-D) clinical interviews performed at baseline in 2 similar multicenter, randomized, placebo-controlled depression trials sponsored by 2 pharmaceutical companies. METHODS: A total of 104 audiotaped HAM-D clinical interviews were evaluated by a blinded expert reviewer for interview quality using the Rater Applied Performance Scale (RAPS). The RAPS assesses adherence to a structured interview guide, clarification of and follow-up to patient responses, neutrality, rapport, and adequacy of information obtained. RESULTS: HAM-D interviews were brief and cursory and the quality of interviews was below what would be expected in a clinical drug trial. Thirty-nine percent of the interviews were conducted in 10 minutes or less, and most interviews were rated fair or unsatisfactory on most RAPS dimensions. CONCLUSIONS: Results from our small sample illustrate that the clinical interview skills of raters who administered the HAM-D were below what many would consider acceptable. Evaluation and training of clinical interview skills should be considered as part of a rater training program.


Subjects
Interviews as Topic, Psychiatric Status Rating Scales, Research Personnel/education, Antidepressive Agents/therapeutic use, Depression/drug therapy, Drug Industry, Guideline Adherence, Humans, Interviews as Topic/methods, Practice Guidelines as Topic, Professional Competence, Randomized Controlled Trials as Topic, Time Factors
15.
J Psychiatr Res ; 40(3): 192-9, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16197959

ABSTRACT

OBJECTIVE: The evaluation and training of raters who conduct efficacy evaluations in clinical trials is an important methodological variable that is often overlooked. Few rater training programs focus on teaching and assessing applied clinical skills, and even fewer have been empirically examined for efficacy. The goal of this study was to develop a comprehensive, standardized, interactive rater training program using new technologies and to compare the relative effectiveness of this approach to "traditional" rater training in a multi-center clinical trial. METHOD: Twelve sites from a 22-site multi-center study were randomly selected to participate (6=traditional, 6=enriched). Traditional training consisted of an overview of scoring conventions, watching and scoring videotapes with discussion, and observation of interviews in small groups with feedback. Enriched training consisted of an interactive web tutorial and live, remote observation of trainees conducting interviews with real or standardized patients via video- or teleconference. Outcome measures included a didactic exam on conceptual knowledge and blinded ratings of trainees' audiotaped interviews. RESULTS: A significant difference was found between enriched and traditional training in pre-to-post training improvement in didactic knowledge, t(27)=4.2, p<0.0001. Enriched trainees' clinical skills also improved significantly more than traditional trainees', t(56)=2.1, p=0.035. All trainees found the applied training helpful and wanted similar web tutorials for other scales. CONCLUSIONS: Results support the efficacy of enriched rater training in improving both conceptual knowledge and applied skills. Remote technologies enhance training efforts and make training accessible and cost-effective. Future rater training efforts should be subject to empirical evaluation and include training on applied skills.


Subjects
Depression/epidemiology, Education/standards, Internet/instrumentation, Teaching/methods, Technology, Clinical Competence, Demography, Female, Humans, Psychological Interview, Male, Middle Aged, Observer Variation
17.
J Clin Psychopharmacol ; 25(5): 407-12, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16160614

ABSTRACT

Recent evidence demonstrates that the quality of raters' applied clinical skills is directly related to study outcome. As such, training and evaluating raters' clinical skill in administering symptom-rating scales is essential before they are certified to rate patients in clinical trials. This study examined a novel approach to rater training and certification that focused on both conceptual knowledge and applied skills. Forty-six raters (MDs = 14; PhDs = 7; MA = 5; BA/LPN/RN = 20) in a large multicenter depression study went through a 2-step Hamilton Rating Scale for Depression (HAMD) certification process: didactic training, administered online via an interactive Web tutorial, and live, applied training, in which raters interviewed depressed patients while being remotely observed via 3-way teleconference. Raters' applied skills were evaluated using the Rater Applied Performance Scale (RAPS), designed specifically to evaluate critical rater behaviors associated with good clinical interviews. Raters received feedback immediately following the interviews; those receiving a failing score were given 2 more opportunities to pass. Each subsequent session was accompanied by feedback and was conducted by a different trainer, who was blind to the results of the previous session as well as to the session number, to avoid bias. Raters who failed on the third attempt were excluded from rating patients in the trial. All training and testing occurred prior to the startup meeting. Results showed a significant pre-to-post improvement in raters' knowledge of scoring conventions following Web training, P < 0.001. On the applied component, raters' RAPS scores improved significantly from their first to their second session following feedback (from 9.05 to 11.58, P < 0.001) and from their second to their third session (from 9.00 to 11.00, P = 0.033). Three raters failed all 3 attempts and were excluded from the study. Results support the efficacy of the approach in improving both conceptual knowledge and applied interviewing skill.


Subjects
Clinical Trials as Topic/standards, Multicenter Studies as Topic/standards, Research Personnel/education, Research Personnel/standards, Adult, Antidepressive Agents/therapeutic use, Certification, Clinical Competence/standards, Depressive Disorder/drug therapy, Female, Humans, Internet, Male, Psychiatric Status Rating Scales
19.
J Psychiatr Res ; 38(3): 275-84, 2004.
Article in English | MEDLINE | ID: mdl-15003433

ABSTRACT

Although the Hamilton Depression Rating Scale (HAMD) remains the most widely used outcome measure in clinical trials of Major Depressive Disorder, the psychometric properties of the individual HAMD items have not been extensively studied. In the present paper, data from four separate clinical trials conducted independently by two pharmaceutical companies were analyzed to determine the relationship between scores on the individual HAMD items and overall depressive severity in an outpatient population. Option characteristic curves (the probability of scoring a particular option in relation to overall HAMD scores) were generated in order to illustrate the relationship between scoring patterns for each item and the range of total HAMD scores. Results showed that Items 1 (Depressed Mood) and 7 (Work and Activities), and to a lesser degree, Items 2 (Guilt), 10 (Anxiety/Psychic), 11 (Anxiety/Somatic), and 13 (Somatic/General) demonstrated a good relationship between item responses and overall depressive severity. However, other items (e.g. Insight, Hypochondriasis) appeared to be more problematic with regard to their ability to discriminate over the full range of depression severity. The present results illustrate that co-operative data sharing between pharmaceutical companies can be a useful tool for improving clinical methods.
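
An empirical option characteristic curve of the kind described above can be computed by binning total scores and estimating, within each bin, the probability of each response option for an item. The sketch below uses simulated data; the function name, bin edges, and scores are illustrative assumptions, not the trial datasets analyzed in the paper.

```python
# Empirical option characteristic curve sketch on simulated HAMD-like data.
import numpy as np

def option_characteristic_curve(item_scores, total_scores, option, bins):
    """P(item == option) within each bin of the total score."""
    item_scores, total_scores = np.asarray(item_scores), np.asarray(total_scores)
    probs = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (total_scores >= lo) & (total_scores < hi)
        probs.append(np.nan if in_bin.sum() == 0
                     else float((item_scores[in_bin] == option).mean()))
    return probs

rng = np.random.default_rng(1)
totals = rng.integers(8, 35, size=300)                               # simulated totals
item = np.clip((totals - 8) // 7 + rng.integers(-1, 2, 300), 0, 4)   # item tracks severity
print(option_characteristic_curve(item, totals, option=2, bins=[8, 15, 22, 29, 36]))
```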


Subjects
Depressive Disorder/drug therapy, Depressive Disorder/psychology, Drug Industry, Psychiatric Status Rating Scales/standards, Surveys and Questionnaires, Clinical Trials as Topic, Depressive Disorder/classification, Endpoint Determination, Humans, Psychiatric Status Rating Scales/statistics & numerical data, Psychometrics, Reproducibility of Results, Sensitivity and Specificity, Severity of Illness Index