
Regulatory challenges and bioassay discussions at WRIB

Posted July 6, 2016

This past April, I had the opportunity to attend the 10th Annual Workshop on Recent Issues in Bioanalysis (WRIB) along with 900 other professional leaders in the bioanalysis field, gathered from all over the world and representing pharmaceutical and biopharmaceutical companies, biotechnology companies, contract research organizations and regulatory agencies.

The conference was designed to facilitate sharing, review, discussion and harmonization of approaches to address the most current issues of interest in both small and large molecule bioanalysis using LCMS, hybrid LBA/LCMS and LBA.

It took an in-depth approach to the discussion of biomarkers, immunogenicity and emerging technologies. WRIB aimed to provide the bioanalytical community with key information and practical tips in an effort to advance scientific excellence, improve quality and deliver better regulatory compliance related to bioanalysis of large and small molecules.

Below I have highlighted some of the bioanalysis issues addressed during the panel discussions so that you can stay informed about this important area of drug development.

Clarification of guidance related to bioanalytical method validation procedures

By Mark Bustard, Ph.D., Health Canada

On March 10, 2016, Health Canada issued a draft guidance document clarifying the bioanalytical method validation procedures issued on October 8, 2015.

For example, during the matrix-based stability experiment, some laboratories stressed only one bulk stability sample tube. This single tube was then pipetted into six aliquots for analysis. Sound scientific practice indicates that this modified approach amounts to reporting an experiment on a single stability tube.

Health Canada considers this a misrepresentation of the data and an unacceptable practice in submissions for matrix-based stability experiments. The clarification supports the historically required six sample tubes stressed under the stated conditions. It has also been proposed that a minimum of three replicate QCs per concentration be prepared and stored under the appropriate conditions, then processed and analyzed. This approach has been supported by regulatory agencies such as the FDA, EMA and ANVISA.

Extract stability and processed batch acceptance: Case study

By Sam Haidar, Ph.D., FDA, Director, Generic Drug BE Evaluation

The objectives are to determine the stability of the analyte(s) and internal standard post extraction and to assess re-injection reproducibility. Extract stability data should be established during the validation phase, and the run should consist of:

  • A complete set of freshly prepared calibrators (standard curve: eight concentrations);
  • Quality control samples (four concentrations, in duplicate); and
  • Subject samples; each instrumental run contained three batches of processed samples.

Procedure: On day one, a complete set of calibrators and QCs was injected and the run passed as per the acceptance criteria; the stability challenge samples were then stored at room temperature for > 24 hours and subsequently analyzed.

Analysis: Concentrations of the challenge samples were determined using the response ratios of the day-one calibrators. The calculated concentrations were compared to nominal values to determine stability. Once the instrumental run passed the required criteria, each batch was evaluated independently.
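
To make the day-one back-calculation concrete, below is a minimal Python sketch of this kind of extract stability evaluation. The calibrator data, the unweighted linear fit and the ±15% stability criterion are illustrative assumptions, not details taken from the case study.

```python
# Illustrative sketch: back-calculating extract stability challenge samples
# against the day-one calibration curve. Data, the unweighted linear fit and
# the +/-15% criterion are assumptions for illustration only.
import numpy as np

# Day-one calibrators: nominal concentration (ng/mL) and analyte/IS response ratio
cal_conc  = np.array([1, 2, 5, 10, 25, 50, 100, 200], dtype=float)
cal_ratio = np.array([0.011, 0.021, 0.052, 0.101, 0.255, 0.498, 1.010, 1.990])

# Simple linear regression of response ratio versus concentration
slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def back_calculate(ratio):
    """Convert a response ratio to a concentration using the day-one curve."""
    return (ratio - intercept) / slope

# Challenge samples stored > 24 hours at room temperature: (nominal, measured ratio)
challenge = {"QC low (3 ng/mL)": (3.0, 0.029), "QC high (150 ng/mL)": (150.0, 1.470)}

for name, (nominal, ratio) in challenge.items():
    found = back_calculate(ratio)
    pct_of_nominal = 100.0 * found / nominal
    stable = 85.0 <= pct_of_nominal <= 115.0  # assumed +/-15% stability criterion
    print(f"{name}: found {found:.1f} ng/mL, {pct_of_nominal:.1f}% of nominal, stable={stable}")
```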

Inappropriate use of LC-MS integration parameters and influence on data reliability

In order to reconcile the scientific need to adjust integration parameters with regulatory concerns, fix parameters in method validation, and determine through discussion which changes are permissible, the following recommendations were made:

  • Integration of all chromatograms in a batch with the same automated parameters (strongly recommended)
  • All actions taken by the analyst should be captured by the relevant audit trails
  • Parameters that are fundamental to data collection should be fixed in validation

Adjustments should only be made due to:

  • Changes in instrument response (noise, sensitivity)
  • Differences between instruments of the same type
  • Minor changes in chromatography
  • Nature of samples (e.g., disease state)

The parameters that could be adjusted include noise and area thresholds, bunching factor and retention window. Adjusting integration parameters can be essential for correct integration and quantification.

There is useful regulatory guidance that specifically addresses re-integration, such as:

  1. FDA - “A SOP or guideline for sample data reintegration should be established a priori. This SOP or guideline should define the criteria for reintegration and how the reintegration is to be performed. The rationale for the reintegration should be clarified, described and documented. Audit trails should be maintained. Original and reintegrated data should be reported.”
  2. EMA - “Chromatogram integration and re-integration should be described in a SOP. Any deviation from the SOP should be discussed in an analytical report. Chromatogram integration parameters, and in case of re-integration, initial and final integration data should be documented at the laboratory and should be available upon request”.
  3. Japan, MHLW - “Procedures for chromatogram integration and re-integration should be predefined in the protocol or SOP. In case chromatogram re-integration is performed, the reason for re-integration should be recorded and the chromatograms obtained both before and after the re-integration should be kept for further reference”.

Internal standard (IS) variability

Should we agree on a maximum acceptable difference (and at what percentage) in individual variations to trigger repeat analysis, or leave the decision to each laboratory, permitting flexibility to adjust to various situations? Consensus has not been reached on distinguishing individual variations from systematic differences in response, which may require investigation rather than blind application of a strict rule.

In studies pivotal to market authorization, criteria in SOPs should define acceptance limits of IS responses and procedures to follow in cases of IS response failure. Variations in IS response are expected to some extent during the analysis of bioanalytical samples. Although regulators do not recommend a particular acceptance range, many laboratories have SOPs with acceptance limits for internal standard response variation within a run, such as a 50–150% range of the mean response. Another approach is to base acceptance criteria on the variability in IS response observed during method validation.
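
As a rough illustration of such an SOP check, the Python sketch below flags samples whose IS response falls outside 50–150% of the within-run mean; the sample names, responses and limits are hypothetical.

```python
# Minimal sketch of a within-run internal standard (IS) response check using an
# assumed 50-150% of-run-mean acceptance window. Sample names and responses are
# hypothetical.
import statistics

is_responses = {
    "Cal 1": 98000, "Cal 2": 102000, "QC low": 95000,
    "Subject 001": 101000, "Subject 002": 43000,  # suspiciously low IS response
}

run_mean = statistics.mean(is_responses.values())
low, high = 0.5 * run_mean, 1.5 * run_mean

for sample, response in is_responses.items():
    if not (low <= response <= high):
        print(f"{sample}: IS response {response} outside {low:.0f}-{high:.0f}; "
              "investigate per SOP before any repeat analysis")
```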

Recommendations:

  • Variations in IS response should be mitigated by the use of a stable isotope-labelled internal standard. SOPs (methods) with acceptance limits for IS response variation should be established to identify technical problems during sample processing
  • Trends and systematic differences should be investigated to identify their root causes and to determine their effects on the accuracy of the drug/analyte concentration results in matrix
  • Investigations should be science-driven with clear rationale and documentation

Criteria to select weighting factors for linear and quadratic calibration curves to address regulatory agency concerns

It was agreed that selection of the regression model should proceed step by step, beginning with a simple model, such as linear, and moving to more complex models, such as quadratic or weighted models. An informal poll of WRIB attendees indicated that the most-used weighting factors were 1/x and 1/x². The use of weighted regression should be supported by a predefined statistical approach and documented as part of company SOPs (methods).
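
For illustration only, the Python sketch below fits a linear calibration with 1/x and 1/x² weighting on hypothetical data and compares the back-calculated accuracy of the calibrators; the data and the comparison step are assumptions, not a prescribed procedure.

```python
# Illustrative comparison of 1/x and 1/x^2 weighted linear calibration fits.
# Calibrator data are hypothetical; a real SOP would predefine the statistical
# criteria for choosing the regression model and weighting factor.
import numpy as np

conc     = np.array([1, 2, 5, 10, 25, 50, 100, 200], dtype=float)
response = np.array([0.012, 0.020, 0.053, 0.099, 0.260, 0.490, 1.020, 1.980])

def weighted_linear_fit(x, y, weights):
    """Weighted least-squares fit of y = a*x + b (weights apply to squared residuals)."""
    W = np.diag(weights)
    X = np.column_stack([x, np.ones_like(x)])
    a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return a, b

for label, w in [("1/x", 1.0 / conc), ("1/x^2", 1.0 / conc**2)]:
    a, b = weighted_linear_fit(conc, response, w)
    back_calc = (response - b) / a           # back-calculated calibrator concentrations
    accuracy = 100.0 * back_calc / conc      # percent of nominal for each calibrator
    worst = accuracy[np.abs(accuracy - 100.0).argmax()]
    print(f"weighting {label}: slope={a:.5f}, intercept={b:.5f}, worst accuracy={worst:.1f}%")
```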

Slope variation in LCMS calibration curves: Is this an indication of potential method issues?

In regulated bioanalysis using LCMS, the calibration curve slope can be a quality indicator for assay performance. It was agreed that a consistent calibration curve slope across multiple analytical runs indicates the assay is rugged and reliable. Variations in calibration curve slopes can arise from complex causes such as matrix effects, detector saturation, nonspecific adsorption or differential recoveries. Slopes of batches analyzed on the same instrument should be consistent. Unexplained, extreme variations should be evaluated, even in batches that meet the acceptance criteria, to determine whether the data remain accurate and precise.
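
One simple way to trend slope consistency is sketched below in Python; the slope values and the flagging threshold are hypothetical and would need to be justified in an SOP.

```python
# Hypothetical sketch of trending calibration-curve slopes across analytical runs
# on the same instrument, flagging unexplained, extreme deviations for investigation.
import statistics

run_slopes = {"Run 01": 0.0102, "Run 02": 0.0099, "Run 03": 0.0101, "Run 04": 0.0068}

mean_slope = statistics.mean(run_slopes.values())
cv_percent = 100.0 * statistics.stdev(run_slopes.values()) / mean_slope
print(f"Slope CV across runs: {cv_percent:.1f}%")

for run, slope in run_slopes.items():
    deviation = 100.0 * (slope - mean_slope) / mean_slope
    if abs(deviation) > 15.0:  # assumed trending threshold, not a regulatory limit
        print(f"{run}: slope deviates {deviation:+.1f}% from the mean; investigate even if the batch passed")
```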

Bioanalytical challenges in demonstrating biosimilarity

By Kara Scheibner, Ph.D., FDA, Generic Drug BE Evaluation

To achieve regulatory approval of biosimilar drugs, biosimilarity must be demonstrated between the physicochemical properties of biosimilar and originator batches. There are specific challenges in developing and validating the bioanalytical assays used to support pre-clinical and clinical comparability studies, namely the PK, anti-drug antibody (ADA) and neutralizing ADA assays.

The consensus within the industry for developing PK assays for biosimilar programs is that one assay should be applied to quantify both biosimilar and originator analytes in biological matrices. Establishing assay acceptance criteria to demonstrate ‘equivalency’ is not trivial. The potential quantification bias between biosimilar and originator compounds must be minimized.
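
As a hypothetical illustration of checking that bias, one might spike both molecules at the same nominal level in matrix and compare mean recoveries with the single assay; the values and the idea of an equivalence margin below are invented for illustration.

```python
# Hypothetical bias check for a one-assay biosimilar PK approach: spike the
# originator and the biosimilar at the same nominal concentration and compare
# mean recoveries. Values and the notion of an SOP-defined margin are illustrative.
import statistics

nominal = 100.0  # ng/mL spike level
originator_found = [98.5, 101.2, 99.8, 102.0, 97.9]
biosimilar_found = [95.1, 96.8, 94.9, 97.5, 96.2]

bias = 100.0 * (statistics.mean(biosimilar_found) - statistics.mean(originator_found)) / nominal
print(f"Mean biosimilar-versus-originator quantification bias: {bias:+.1f}% of nominal")
# Whether this bias is acceptable would be judged against a predefined,
# SOP-documented acceptance margin.
```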

In contrast with the industry consensus for biosimilar PK assays, opinions on biosimilar immunogenicity assays are divided into ‘one-assay’ (where one set of labelled drug reagent is used to detect both biosimilar and originator ADAs) versus ‘two-assays’ (where each set of labelled drug reagent is used to detect its respective ADA). Since immunogenicity assays are not quantitative assays, attempting to demonstrate ‘equivalency’ using qualitative assays presents additional challenges.

The presentation highlighted scientific findings from recent FDA inspections of studies in originator and biosimilar submissions, with two technical issues: a) inconsistencies in the characterization of anti-drug antibody samples and b) the use of a LIMS-based system in immunogenicity assays.
