Results 1 - 8 of 8
1.
Anal Chem; 96(3): 966-979, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38191128

ABSTRACT

The analytical procedure life cycle (APLC) provides a holistic framework to ensure analytical procedure fitness for purpose. USP's general chapter <1220> considers the validation activities that take place across the entire analytical procedure lifecycle and provides a three-stage framework for its implementation. Performing ongoing analytical procedure performance verification (OPPV, stage 3) ensures that the procedure remains in a state of control across its lifecycle of use post validation (qualification) and involves an ongoing program to collect and analyze data that relate to the performance of the procedure. Knowledge generated during stage 1 (procedure design) and stage 2 (procedure performance qualification) is used as the basis for the design of the routine monitoring plan that supports performance verification (stage 3). The extent of routine monitoring required should be defined based on a risk assessment, considering the complexity of the procedure, its intended purpose, and knowledge about process/procedure variability. The analytical target profile (ATP) can be used to provide, or guide the establishment of, the acceptance criteria used to verify procedure performance during routine use (e.g., through a system/sample suitability test (SST) or verification criteria applicable to procedure changes or transfers). An ATP, however, is not strictly required to perform OPPV, and a procedure performance monitoring program can be implemented even if the full APLC framework has not been applied. In these situations, verification criteria can be derived from existing validation or system suitability criteria. Elements of the life cycle approach can also be applied retrospectively if deemed useful.

2.
J Pharm Biomed Anal; 181: 113051, 2020 Mar 20.
Article in English | MEDLINE | ID: mdl-31962246

ABSTRACT

It is the objective of a systematic and holistic Quality-by-Design approach to demonstrate and ensure that an analytical procedure is fit for its intended purpose over its entire lifecycle. Such a lifecycle approach, as proposed for a new USP General Information Chapter, includes the three stages Procedure Design and Development, Procedure Performance Qualification, and Continued Procedure Performance Verification, in alignment with manufacturing process validation. A decisive component of this approach is the Analytical Target Profile, which defines the performance requirements for the measurement of a Quality Attribute as the target for selection, development, and optimization of the respective analytical procedures. Although the greatest benefit is gained by a comprehensive Quality-by-Design approach that establishes the Analytical Target Profile at the very beginning of a drug development project, it may also be established retrospectively for analytical procedures long in routine use, in order to facilitate future lifecycle activities such as continual improvement, transfers, monitoring, and periodic performance evaluations. In contrast to the first two stages of the analytical lifecycle, which usually involve a limited amount of data, the Continued Procedure Performance Verification stage offers the possibility to collect, analyze, and evaluate data relating to analytical procedure performance on a much more reliable data basis. This monitoring program should be aligned as far as possible with other quality systems already in place and may include performance indicators such as conformity (i.e., out-of-specification test results with an analytical root cause), validity (i.e., failure to meet method acceptance criteria, e.g., system suitability tests), and (numerical) analytical performance parameters (e.g., ranges for replicate determinations, control sample results, etc.).
In addition to the monitoring of analytical control parameters by means of control charts, average (pooled) performance parameters can be calculated. Over time, a large amount of data can be included, and the reliability of these estimates thus increases substantially. Such reliable estimates of the true performance parameters, e.g., repeatability or intermediate precision, are essential to identify systematic effects (also called special-cause variation) with good confidence. The intent of the analytical procedure performance evaluation is to identify substandard performance, identify root causes through investigations, and determine when additional activities are required for improvement. Examples are provided for the monitoring and evaluation of performance parameters for the compendial drug substance furosemide and for biopharmaceutical applications.
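The pooling of series-level performance parameters described in this abstract can be sketched in a few lines. The following is an illustrative sketch only, not the authors' implementation, and the repeatability SDs and degrees of freedom are hypothetical values. Individual estimates are combined into a pooled SD, weighting each variance by its degrees of freedom:

```python
import math

def pooled_sd(sds, dfs):
    """Pool standard deviations from independent series, weighting each
    variance by its degrees of freedom, to obtain a more reliable overall
    estimate of e.g. repeatability."""
    if len(sds) != len(dfs) or not sds:
        raise ValueError("need matching, non-empty lists of SDs and dfs")
    pooled_var = sum(df * sd ** 2 for sd, df in zip(sds, dfs)) / sum(dfs)
    return math.sqrt(pooled_var)

# Hypothetical repeatability SDs (% label claim) from five routine
# series, each with three replicates (df = 2 per series):
sds = [0.45, 0.60, 0.38, 0.52, 0.41]
dfs = [2, 2, 2, 2, 2]
print(round(pooled_sd(sds, dfs), 3))  # pooled repeatability, ≈ 0.479
```

As more series accumulate, the pooled degrees of freedom grow and the estimate stabilizes, which is what makes trending individual results against it meaningful.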


Subject(s)
Drug Compounding/standards , Pharmaceutical Research/organization & administration , Product Surveillance, Postmarketing/methods , Quality Control , Research Design , Pharmaceutical Research/methods , Reproducibility of Results
3.
J Pharm Biomed Anal; 162: 149-157, 2019 Jan 05.
Article in English | MEDLINE | ID: mdl-30240988

ABSTRACT

In pharmaceutical analysis, the precision of the reportable value, i.e., the result that is compared to the specification limit(s), is relevant for the suitability of the analytical procedure. Using the variance contributions determined in precision studies addressing the levels injection/system precision, repeatability, and intermediate precision, the number of the corresponding replications for analysis/injection, sample preparation, and series/runs can be varied to improve the precision of the mean (reportable) value (Ermer, Agut, J. Chromatogr. A, 1353 (2014) 71-77). However, this calculation provides information only on the gain in precision of the calculated reportable value itself. These so-called point estimates have an associated uncertainty, which can be quantified using statistical confidence intervals. Commonly used statistical equations only allow confidence intervals to be calculated for the intermediate precision of the reportable value, which requires that the routine replication strategy be defined before starting the precision study. In this paper, statistical models are presented that allow the replication strategy to be optimized efficiently with respect to the confidence interval of the precision, based on the Satterthwaite approximation, a posteriori, i.e., using the results from the precision study without prior knowledge, as for the point estimate. It is further proposed to simplify the model by including only significant variance contributions larger than 20% of the total variation. The advantage of minimizing the level of nesting in this way is that the upper precision bound tightens as the level of nesting decreases. This is important, as 90% upper confidence bounds are often up to 2 or 3 times the point estimate, even for a larger number of four runs in the precision study.
Four models each have been developed for a 2-fold balanced nested design representing a complete intermediate precision study, and for a 1-fold balanced nested design using injection/system precision from an independent source. An Excel spreadsheet that performs all the calculations in this paper, as well as the appropriate model selection, is available from the authors. Due to the usually rather low number of series/runs in precision studies, the uncertainty of the reportable value precision is often dominated by the factor runs. For a statistical evaluation of the precision of the reportable value (in the case of three precision levels), the authors recommend a minimum of six runs, two preparations per run, and two injections/analyses per preparation, in order to provide sufficient precision of the variance estimates. However, a risk-based approach is recommended for the decision to apply a statistical evaluation of the precision of the reportable value. In cases of low patient risk, such as an assay of a well-characterized drug substance with tightly controlled manufacturing and analytical variability dominating the specification range, a point estimate will usually be adequate to demonstrate the suitability of the analytical procedure.
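The calculation underlying this approach can be sketched as follows: variance components from a precision study are combined into the variance of the reportable value according to the replication strategy, and a one-sided upper confidence bound is obtained via the Satterthwaite effective degrees of freedom. This is an illustrative sketch only, not the authors' spreadsheet; the variance components, degrees of freedom, and replication numbers are hypothetical, and the chi-square quantile uses the Wilson-Hilferty approximation to stay within the standard library:

```python
import math
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation of the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def satterthwaite_df(variances, dfs, coeffs):
    """Effective degrees of freedom for a linear combination of
    independent variance estimates (Satterthwaite approximation)."""
    num = sum(c * v for c, v in zip(coeffs, variances)) ** 2
    den = sum((c * v) ** 2 / d for c, v, d in zip(coeffs, variances, dfs))
    return num / den

def upper_bound_sd(variances, dfs, coeffs, level=0.90):
    """One-sided upper confidence bound for the SD of the combination."""
    s2 = sum(c * v for c, v in zip(coeffs, variances))
    df_eff = satterthwaite_df(variances, dfs, coeffs)
    return math.sqrt(df_eff * s2 / chi2_quantile(1 - level, df_eff))

# Hypothetical precision study: run, preparation, and injection variance
# components (in %²) with their dfs; the reportable value is the mean of
# k=2 runs, m=2 preparations per run, n=2 injections per preparation.
var = [0.30, 0.10, 0.02]            # s²_run, s²_prep, s²_inj
df = [3, 4, 8]
k, m, n = 2, 2, 2
coef = [1 / k, 1 / (k * m), 1 / (k * m * n)]

point = math.sqrt(sum(c * v for c, v in zip(coef, var)))
print(round(point, 3), round(upper_bound_sd(var, df, coef), 3))
# point ≈ 0.421 %, 90% upper bound ≈ 0.807 % — nearly twice the point
# estimate, illustrating the abstract's observation.
```

Dropping a variance contribution below the 20% threshold simply removes its term from `var`/`df`/`coef`, which reduces the level of nesting and tightens the bound.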


Subject(s)
Data Accuracy , Data Interpretation, Statistical , Models, Statistical , Research Design/statistics & numerical data , Technology, Pharmaceutical/statistics & numerical data , Reproducibility of Results , Technology, Pharmaceutical/methods , Uncertainty
4.
J Pharm Biomed Anal; 160: 73-79, 2018 Oct 25.
Article in English | MEDLINE | ID: mdl-30071392

ABSTRACT

Approaches are presented to establish precision (or target measurement uncertainty) requirements for drug substance and drug product assays. They are based on the simple and well-known concept of the normal distribution of probability around true content values, represented either by manufacturing range limits or by the manufacturing target (usually 100% label claim). A maximum acceptable precision is derived that allows a defined probability of analytical results within the established acceptance limits of the specification and thus an objective and rational establishment of precision acceptance criteria. By this approach, α (type I) errors are controlled, i.e., the maximum probability of failure for intrinsically acceptable results is limited. The combination of this normal distribution probability approach with guard bands allows β (type II) errors to be controlled, i.e., the acceptance of intrinsically non-conforming results is limited. Here, no assumptions concerning the manufacturing range are needed; therefore, this approach can also be applied to the quantitation of impurities. The guard band approach allows the highest level of control but in turn places high demands on the precision. Therefore, it should be restricted to drug product assays or impurity determinations with larger risks, i.e., justified by a corresponding clinical relevance, such as narrow therapeutic ranges or substantial toxicity.
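The normal-distribution probability approach can be illustrated with a short calculation. The functions and example figures below are a hedged sketch of the concept, not the paper's exact procedure: the maximum acceptable SD follows from the distance between the target and the specification limits, and guard bands tighten the acceptance limits to control acceptance of non-conforming results:

```python
from statistics import NormalDist

def max_acceptable_sd(target, lower, upper, alpha=0.05):
    """Maximum analytical SD so that a true-on-target result falls
    outside the specification limits with probability <= alpha
    (two-sided normal-distribution probability approach)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half_width = min(target - lower, upper - target)
    return half_width / z

def guard_banded_limits(lower, upper, sd, beta=0.05):
    """Tighten the acceptance limits by a guard band of z_(1-beta)*sd,
    so that intrinsically non-conforming results are accepted with
    probability <= beta (controls the type II error)."""
    z = NormalDist().inv_cdf(1 - beta)
    return lower + z * sd, upper - z * sd

# Hypothetical drug product assay, specification 95.0-105.0 % label claim:
sd_max = max_acceptable_sd(100.0, 95.0, 105.0)      # ≈ 2.55 % LC
gl, gu = guard_banded_limits(95.0, 105.0, sd=1.0)   # ≈ (96.64, 103.36)
print(round(sd_max, 2), round(gl, 2), round(gu, 2))
```

Note how the guard band narrows the usable acceptance range, which is why this stricter variant is reserved for higher-risk applications.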


Subject(s)
Drug Contamination/prevention & control , Models, Statistical , Probability , Quality Control , Technology, Pharmaceutical/standards , Reproducibility of Results , Software , Statistical Distributions , Technology, Pharmaceutical/methods
5.
J Pharm Biomed Anal; 51(3): 557-64, 2010 Feb 05.
Article in English | MEDLINE | ID: mdl-19801176

ABSTRACT

Analytical instrument qualification (AIQ) is a prerequisite for any analytical method validation and thus must be considered a vital basis of analytical data integrity and quality in pharmaceutical analysis. There is a well-established system of qualification phases: Design Qualification, Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ). As HPLC systems are "off-the-shelf" equipment, Design Qualification may be disregarded here. IQ establishes that the instrument is received as designed and that it is properly installed. OQ is carried out modularly with the intention of ensuring that the specific modules of the system, and the whole system, are operating according to the defined specifications. PQ, as the last step of the initial qualification, is supposed to ensure continued satisfactory performance of an instrument under actual running conditions over the anticipated working range during daily use. However, PQ is not a one-time exercise, but is currently repeated regularly, independently of routine use of the analytical system, using standard reference test conditions. This approach, which is time-consuming and expensive, only provides a snapshot of system performance. As HPLC procedures generally require a system suitability test (SST) before and/or after testing, it might be far more reasonable and robust to use these SST data for a continuous PQ. The work presented here demonstrates that, under certain circumstances, satisfactory instrument performance assessment can be derived from system suitability tests and performance data from daily use as well. A generally accepted qualification list, consisting of only twelve critical parameters, was compiled in a first step. Some parameters, such as injector or thermostatting accuracy, were considered redundant, while others were successfully incorporated in the proposed holistic approach.
System suitability test data as well as OQ/PQ data were provided from different sources and evaluated. The promising results confirmed our concept of ongoing/continuous PQ as a major improvement in AIQ. This approach will not only help to reduce time and effort in the daily laboratory routine without losing data quality, but will also avoid the critical re-evaluation of numerous analytical tests should a routine PQ fail.
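A minimal sketch of how routine SST results could feed a continuous PQ: historical values define Shewhart-type control limits, and each new SST result is checked against them. The parameter choice and the numbers below are hypothetical illustrations, not the qualification list from the study:

```python
from statistics import mean, stdev

def control_limits(history, k=3.0):
    """Shewhart-type individual control limits (mean ± k·SD) from
    historical SST results, e.g. plate count or tailing factor."""
    m, s = mean(history), stdev(history)
    return m - k * s, m + k * s

def in_control(history, new_value, k=3.0):
    """Check a new SST result against the historical control limits."""
    lo, hi = control_limits(history, k)
    return lo <= new_value <= hi

# Hypothetical plate counts from routine system suitability tests:
history = [8200, 8350, 8100, 8290, 8180, 8240, 8310, 8150]
print(in_control(history, 8270))  # True: typical value
print(in_control(history, 6500))  # False: drop flags degraded performance
```

An out-of-control signal would then trigger the targeted investigation that a periodic stand-alone PQ could only provide as a delayed snapshot.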


Subject(s)
Chemistry, Pharmaceutical/economics , Chemistry, Pharmaceutical/standards , Evaluation Studies as Topic , Chromatography, High Pressure Liquid/economics , Chromatography, High Pressure Liquid/standards , Pilot Projects
6.
J Pharm Biomed Anal; 38(4): 653-63, 2005 Jul 15.
Article in English | MEDLINE | ID: mdl-15967293

ABSTRACT

A multi-company investigation is presented to obtain and compare precision results for LC assay procedures. Forty-four drug substances and drug products of various types, subjected to 156 stability studies with 2915 assay values in total, were included. This provides an excellent source of real long-term precision estimates, as the same analytical procedure was applied throughout each stability study, extending from 12 to 60 months. Intermediate precision was calculated either using the residual standard deviation of the regression line or applying an analysis of variance, depending on whether or not there was significant degradation of the analyte. The results strikingly show the wide ranges over which the individually calculated parameters scatter. Distribution ranges and averages for repeatability, intermediate precision, and the ratio between the two precision levels are mainly dependent on the type of drug product. Repeatabilities were found up to 0.8% for solutions, 1.6% for drug substances, 1.9% for tablets, 2.3% for creams, and 3.4% for a bath. For intermediate precision, which includes additional variability factors due to the reference standard, operator, equipment, reagents, etc., a similar dependency was obtained with a slightly changed order: up to 1.1% for drug substances, 2.2% for solutions, 2.3% for tablets, 3.1% for creams, and 3.2% for a bath. The ratio between the precision levels is up to 2.5 and similar for all investigated drug product types, apart from solutions with up to 5.3. These differences between types of drug product may be explained by the influence of the sample and/or the sample preparation: the more complex, the higher the variability contribution. For the investigated examples, the impact of the analyte and of the concentration (dosage) seems to be of less importance. Therefore, a classification of drug product types for orientation on acceptable precision (ranges) for LC assays seems to be possible.
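For procedures without significant degradation, the ANOVA route from assay results grouped by series to repeatability and intermediate precision can be sketched as follows. This is a simplified one-way, balanced-design illustration with hypothetical data, not the study's actual calculation:

```python
import math
from statistics import mean

def precision_from_series(series):
    """One-way ANOVA variance components from assay results grouped by
    series/run: returns (repeatability SD, intermediate precision SD).
    Assumes a balanced design (equal replicates per run)."""
    k = len(series)
    n = len(series[0])
    grand = mean(x for run in series for x in run)
    ms_within = sum((x - mean(run)) ** 2
                    for run in series for x in run) / (k * (n - 1))
    ms_between = n * sum((mean(run) - grand) ** 2 for run in series) / (k - 1)
    var_rep = ms_within
    # Truncate negative between-run estimates at zero:
    var_between = max((ms_between - ms_within) / n, 0.0)
    return math.sqrt(var_rep), math.sqrt(var_rep + var_between)

# Hypothetical tablet assay (% label claim), three runs of three replicates:
runs = [[99.2, 100.1, 99.6], [100.8, 101.2, 100.5], [99.9, 100.4, 100.0]]
s_r, s_ip = precision_from_series(runs)
print(round(s_r, 3), round(s_ip, 3))  # repeatability ≈ 0.364, IP ≈ 0.674
```

The ratio s_ip/s_r from such data is directly comparable to the precision-level ratios reported in the study.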


Subject(s)
Chromatography, High Pressure Liquid/standards , Drug Stability , Algorithms , Germany , Linear Models , Ointments , Pharmaceutical Preparations/standards , Quality Control , Reproducibility of Results , Spectrophotometry, Ultraviolet , Tablets
7.
J Pharm Biomed Anal; 37(5): 859-70, 2005 Apr 29.
Article in English | MEDLINE | ID: mdl-15862659

ABSTRACT

Validation of analytical procedures is a vital aspect not just for regulatory purposes, but also for their efficient and reliable long-term application. In order to address the performance of the analytical procedure adequately, the analyst is responsible for identifying the relevant parameters, designing the experimental validation studies accordingly, and defining appropriate acceptance criteria. Establishing an acceptable analytical variability for the given application is of central importance, as many other acceptance criteria can be derived from such a precision estimate. Acceptable precision ranges for types of control tests and/or analytes can be obtained from validation, but also from related activities such as transfers and control charts, or extracted from routine applications such as batch release or stability studies (data mining). Apart from compiling a database for general benchmarking, during such an information-building process the reliability of the analytical variability of the specific procedure increases progressively. This is important, as a reliable target variability facilitates the detection and investigation of atypical or out-of-specification behaviour of analytical data in routine application, thus improving data quality and reliability. According to the life-cycle concept of validation, measures should be taken to maintain and control the validated status of analytical procedures during long-term routine application, such as monitoring relevant performance parameters (system suitability tests), control charts, etc. If the analytical system is demonstrated to be stable, i.e., under statistical control, a major variability contribution in LC, originating from the standard preparation and analysis, can be reduced. A concept of quantification by pre-determined calibration parameters, instead of the classical approach of simultaneous calibration, is described.
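The contrast between classical simultaneous calibration and quantification by pre-determined calibration parameters can be sketched as below. This is a schematic illustration with hypothetical numbers; the function names and the single-point response-factor model are assumptions for illustration, not the paper's exact implementation:

```python
def content_simultaneous(area_sample, area_std, conc_std, nominal):
    """Classical simultaneous calibration: the response factor is taken
    from a reference standard analysed within the same run, so each run
    carries the standard-preparation variability."""
    rf = area_std / conc_std                 # area per unit concentration
    return 100.0 * (area_sample / rf) / nominal

def content_predetermined(area_sample, rf_fixed, nominal):
    """Quantification against a pre-determined response factor,
    established beforehand on a system shown to be under statistical
    control; the run-to-run standard preparation is no longer needed."""
    return 100.0 * (area_sample / rf_fixed) / nominal

# Hypothetical LC assay: sample peak area 1520, standard peak area 1500
# at 0.50 mg/mL, nominal sample concentration 0.50 mg/mL.
print(round(content_simultaneous(1520.0, 1500.0, 0.50, 0.50), 2))  # ≈ 101.33 % LC
print(round(content_predetermined(1520.0, 3000.0, 0.50), 2))       # ≈ 101.33 % LC
```

Both routes give the same result when the system is stable; the benefit of the pre-determined factor is the removal of the standard-preparation variance contribution from every routine run, conditional on ongoing statistical control of the system.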


Subject(s)
Chemistry, Pharmaceutical/methods , Chemistry, Pharmaceutical/standards , Quality Control , Reproducibility of Results