Hope Now for ALS – 21st Century Trial Design: Original Proposal to the FDA

ALS is a progressive neurological disorder that invariably results in deterioration and death.  There is no cure, and the only approved treatment extends life by an average of 2-3 months.  Eighty percent of ALS patients die within five years of diagnosis and 90% within ten.  ALS is also an extremely heterogeneous disease: patients progress at vastly different rates, and often in non-linear fashion.  More than 20 different genetic mutations are associated with it, but together they explain only about 25% of cases.  These facts make it very hard for a drug to pass a series of clinical trials and win approval under a traditional randomized, double-blind, placebo-controlled protocol.  Differences in underlying progression rates between randomly assigned placebo and treatment groups reduce the signal-to-noise ratio of observed changes in the primary endpoint over the trial period, producing significant statistical noise and limiting the ability to reach the standard confidence threshold of p<.05, that is, a probability of less than 1 in 20 that chance alone, rather than actual drug efficacy, produced the observed result.
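To make the signal-to-noise point concrete, the following minimal simulation (our own sketch, using illustrative numbers rather than real trial data) shows how the same true treatment effect becomes far harder to detect at p<.05 when patients' underlying progression rates vary widely:

```python
# Illustrative sketch: heterogeneity in progression rates destroys power.
# All numbers (cohort size, effect size, variability) are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_PER_ARM = 30          # patients per arm (assumed)
TRUE_EFFECT = 0.5       # treatment slows decline by 0.5 %FVC/month (assumed)

def trial_power(slope_sd, n_trials=2000):
    """Fraction of simulated trials reaching p < .05 with a two-sample t-test."""
    hits = 0
    for _ in range(n_trials):
        # Monthly FVC decline per patient: mean 1.0 %/month, with
        # between-patient heterogeneity given by slope_sd.
        placebo = rng.normal(1.0, slope_sd, N_PER_ARM)
        treated = rng.normal(1.0 - TRUE_EFFECT, slope_sd, N_PER_ARM)
        if stats.ttest_ind(treated, placebo).pvalue < 0.05:
            hits += 1
    return hits / n_trials

print("power, homogeneous cohort (sd=0.3):", trial_power(0.3))
print("power, heterogeneous cohort (sd=1.5):", trial_power(1.5))
```

With the same drug and the same trial size, the homogeneous cohort detects the effect nearly every time, while the heterogeneous cohort usually fails to reach p<.05.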

Once a standard for efficacy has been preselected (e.g., 10% less decline in FVC in the treated versus the placebo group) and trial size has been determined, application of the p<.05 standard makes analyzing trial results a virtually mechanical exercise, removing all judgment and discretion from the regulator's consideration.  While this, along with the standard's longstanding use to date, gives the regulator great comfort against being second-guessed for approving a drug that should not have been approved, it does little for a patient community seeking treatment for a disease that is always fatal and has no effective treatment.  The reason for this conflict is quite apparent: under a single standard applied in all situations, one group's desire for life is diametrically opposed to the other's desire to avoid criticism, and unfortunately the latter controls all of the decision points.  We accept that defining the primary desired endpoint will determine trial size, i.e., a more dramatic predicted change in an endpoint requires fewer trial participants to test efficacy (a relationship sketched in the example below).  However, we take the position that also letting a historical p-value standard remain a tyrannical rule of the game virtually eliminates the regulator's ability to employ judgment in approving a drug or treatment, and fails to reflect society's evaluation of situational risk, both in how a trial is conducted and in how its results are evaluated.
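A back-of-the-envelope sketch of the effect-size point, using the standard two-sample normal approximation; the significance level, power, and effect sizes below are illustrative assumptions, not proposed trial parameters:

```python
# Required patients per arm falls rapidly as the expected effect grows,
# holding alpha = .05 and 80% power fixed. Sigma and the effect sizes
# are illustrative assumptions.
from scipy.stats import norm

ALPHA, POWER, SIGMA = 0.05, 0.80, 1.0   # SIGMA = SD of the endpoint (assumed)
z = norm.ppf(1 - ALPHA / 2) + norm.ppf(POWER)

for delta in (0.25, 0.5, 1.0):          # expected treated-vs-placebo difference, in SD units
    n_per_arm = 2 * (z / delta) ** 2 * SIGMA ** 2
    print(f"effect {delta} SD -> ~{n_per_arm:.0f} patients per arm")
```

Doubling the expected effect cuts the required enrollment roughly fourfold, which is why endpoint selection and trial size cannot be separated.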

Historically, the p<.05 standard came from social and behavioral science research.  There is nothing magical about .05; it is a historical convention more than anything else.  Whatever p-value is chosen to judge the results of a clinical trial should reflect the risk tolerance or aversion of society at large with respect to that effort.  (See Ronald Thisted, PhD, "What is a P-Value?", 2/14/2010, University of Chicago Department of Statistics and Health Studies.)  For a drug with serious side effects being used to treat a disease that is neither life-threatening nor life-altering, a very low p-value standard makes sense.  On the other hand, we feel that for a drug whose safety has already been demonstrated in Phase 1 and/or Phase 2 trials, and which is being used to treat an always-fatal disease with no currently available treatment alternative, the standard should either be a less stringent p-value or allow discretion and judgment to take other experimental observations and statistical analyses into account in determining trial outcome (the sketch below illustrates the trade-off).  We do not seek the elimination of statistical analysis, but we sincerely recommend that a one-size-fits-all p-value standard not be the only basis of evaluation, nor be employed in a vacuum, and that the judgment employed also reflect the severity of the disease and the level of unmet need for treatment.
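The following minimal sketch, using the same two-sample normal approximation and illustrative numbers as above (effect of 0.5 SD, 80% power), shows what relaxing the significance threshold buys in trial size; the cost, of course, is a higher false-positive rate, which is precisely the risk trade-off we argue society should set per disease:

```python
# Relaxing alpha shrinks the trial needed to detect the same effect.
# Effect size and power are illustrative assumptions.
from scipy.stats import norm

POWER, DELTA = 0.80, 0.5
for alpha in (0.05, 0.10, 0.15):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(POWER)
    n_per_arm = 2 * (z / DELTA) ** 2
    print(f"alpha {alpha:.2f} -> ~{n_per_arm:.0f} patients per arm")
```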

Congress has very clearly taken the position that judgment and discretion are a critical part of the drug approval process and that the FDA has broad discretion in defining efficacy, a point the FDA has agreed with in our prior discussions.  We believe ALS presents a very clear case for employing that judgment and discretion.  That said, we do not take the position that all aspects of current trial protocol should be abandoned.

To that end, we strongly believe that advances in statistical analysis, alongside advances in the underlying science, are a necessary part of getting an effective treatment into the clinic.  Advanced data processing, statistical modeling and machine learning techniques have been developed and significantly enhanced over the last decade.  In combination with large ALS patient data sets, such as PRO-ACT or ALS-TDI's Precision Medicine Program database now under development, these techniques can be applied to help accelerate the development of treatments and cures for ALS.  Recently created patient predictive models for ALS can adapt to the unique characteristics of individual patients to accurately predict each patient's future disease state through measures such as ALSFRS-R score, FVC and survival likelihood; a sketch of this kind of model follows.
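As a minimal sketch of the kind of model described above, the following example trains a regressor on hypothetical baseline features to predict each patient's ALSFRS-R slope. The feature set, the synthetic data, and the model choice (a random forest) are illustrative assumptions; they are not the specific models or databases referenced in this proposal.

```python
# Sketch: predict per-patient ALSFRS-R decline rate from baseline features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical baseline features: age, FVC % predicted, ALSFRS-R score,
# months since symptom onset.
X = np.column_stack([
    rng.normal(58, 10, n),
    rng.normal(80, 15, n),
    rng.normal(38, 5, n),
    rng.normal(18, 8, n),
])
# Synthetic "true" slope (ALSFRS-R points lost per month), loosely tied
# to the features plus noise, standing in for real registry outcomes.
y = -(0.6 + 0.01 * X[:, 0] - 0.005 * X[:, 1] + rng.normal(0, 0.3, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 2))
```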

The use of predictive algorithms for ALS progression offers multiple opportunities to reduce statistical noise in ALS drug trials.

The first use for such an algorithm would be in selecting trial participants who are predicted to progress at a similar pace, so that the placebo and treatment groups are more homogeneous than typical exclusion/inclusion criteria and randomization would produce.  This better ensures that noise created by mixing fast and slow progressors does not skew the treated and placebo groups in different directions, which is especially important if improvement in disease progression for the treated group is the primary endpoint of the trial.  It also makes the drug/treatment effect more apparent, particularly in small clinical trials.  A sketch of this enrichment step follows.
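A sketch of the enrichment step, with a hypothetical screened population and an assumed eligibility band (the middle 50% of predicted progression rates); in practice the predicted slopes would come from a trained model like the one above:

```python
# Sketch: enroll a homogeneous cohort by predicted progression, then randomize.
import numpy as np

rng = np.random.default_rng(1)
# Predicted %FVC decline per month for 300 hypothetical screened patients.
pred_slopes = rng.normal(1.0, 0.8, 300)

lo, hi = np.percentile(pred_slopes, [25, 75])          # assumed eligibility band
eligible = np.flatnonzero((pred_slopes >= lo) & (pred_slopes <= hi))

rng.shuffle(eligible)
half = len(eligible) // 2
treated, placebo = eligible[:half], eligible[half:]
print(f"enrolled {len(eligible)} of {len(pred_slopes)} screened "
      f"({len(treated)} treated, {len(placebo)} placebo); "
      f"cohort slope SD {pred_slopes[eligible].std():.2f} "
      f"vs {pred_slopes.std():.2f} overall")
```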

The second use for the algorithm is to create a virtual third arm for assessing trial results.  If at the start of the trial the algorithm produces a predicted outcome for each patient, that prediction can be compared to the actual outcome, in addition to the usual comparison of the placebo group against the treatment group.  Comparing predicted to actual outcomes within the treated group eliminates the heterogeneous nature of the disease as a source of statistical confusion, because the virtual placebo arm is made up of exactly the same patients as the treated group.  In other words, the predicted rate of progression that the treatment group would have shown without treatment provides a direct, same-patient control, allowing smaller, faster, smarter studies to produce a result that can support a regulatory decision.  A sketch of this paired comparison follows.
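A sketch of the paired comparison, with simulated predictions and an assumed treatment benefit standing in for real trial data; because each treated patient is compared against his or her own predicted course, between-patient heterogeneity cancels out of the test:

```python
# Sketch: virtual third arm via observed-vs-predicted paired comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_treated = 30
predicted = rng.normal(1.0, 0.8, n_treated)   # model's untreated-decline forecast
# Assumed drug effect: decline slowed by 0.2 %FVC/month, plus measurement noise.
observed = predicted - 0.2 + rng.normal(0, 0.15, n_treated)

# Paired test: the same patients serve as their own control, so the wide
# spread in `predicted` does not inflate the test's variance.
result = stats.ttest_rel(observed, predicted)
print(f"mean slowing: {np.mean(predicted - observed):.2f} %FVC/month, "
      f"p = {result.pvalue:.2g}")
```

Compare this with the first simulation above: an unpaired test on 30 patients per arm with this much heterogeneity would usually miss the effect, while the paired design detects it easily.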

Finally, conducting a trial this way allows for even further learning from the results.  One can compare the placebo group's actual results to the virtual arm's predictions for those same patients.  If the placebo group has been populated using the progression algorithm to maximize homogeneity, this comparison gives a clear picture of the algorithm's viability for creating a virtual control arm, and a positive result strengthens the evidence from the second proposed use of the algorithm; a sketch of this calibration check follows this paragraph.  In addition, one can then give the placebo group the drug, add their results to the treated group, and compare against both the original placebo arm and the virtual arm, powering up the trial statistically at much lower cost and in less time.  This design also answers a recruiting objection from patients who object to an algorithm determining whether they are placed in the treated or placebo group, since everyone is ultimately treated unless safety becomes a concern or the drug is conclusively shown to have no positive effect.
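A sketch of the calibration check, again with simulated data that assume a well-calibrated model; in an actual trial, close agreement between the placebo arm's observed declines and the model's predictions is what would justify leaning on the virtual control arm:

```python
# Sketch: validate the algorithm by comparing placebo-arm actuals to
# the model's predictions for those same patients.
import numpy as np

rng = np.random.default_rng(3)
n_placebo = 30
predicted = rng.normal(1.0, 0.8, n_placebo)            # forecasts for placebo patients
observed = predicted + rng.normal(0, 0.15, n_placebo)  # no drug effect in this arm

bias = np.mean(observed - predicted)                   # systematic over/under-prediction
mae = np.mean(np.abs(observed - predicted))            # typical per-patient error
print(f"bias: {bias:+.2f} %FVC/month, mean absolute error: {mae:.2f}")
```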

In addition to the above, we propose that a confirmatory trial of this type (captured in the sketch following this list) would:

(1) include 20-100 patients, with 50% in the treated arm;

(2) collect clinical data (e.g., FVC and ALSFRS-R) on each patient going back 1-3 months;

(3) use change in FVC as the primary endpoint, and change in ALSFRS-R as a secondary endpoint to help judge any ambiguous results rather than merely as a disqualifying factor for re-judging efficacy demonstrated by FVC;

(4) use candidate biomarkers as an exploratory secondary endpoint for the same purpose; and

(5) include a three-month post-treatment observation point in assessing first-phase results, followed by a crossover in which the placebo group receives the drug and is combined with the treatment group, with the predictive algorithm used to reconstitute a virtual placebo arm.
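Purely as an organizational aid, and not part of the proposal itself, design points (1) through (5) can be captured in a single machine-readable record; all field names below are illustrative assumptions:

```python
# Sketch: the proposed confirmatory trial design as one protocol record.
from dataclasses import dataclass

@dataclass
class ConfirmatoryTrialDesign:
    n_patients_min: int = 20                             # (1)
    n_patients_max: int = 100                            # (1)
    treated_fraction: float = 0.50                       # (1) 50% in treated arm
    retrospective_data_months: int = 3                   # (2) FVC/ALSFRS-R history, 1-3 months
    primary_endpoint: str = "change in FVC"              # (3)
    secondary_endpoint: str = "change in ALSFRS-R"       # (3)
    exploratory_endpoints: tuple = ("candidate biomarkers",)  # (4)
    post_treatment_observation_months: int = 3           # (5)
    crossover_placebo_to_treatment: bool = True          # (5)

print(ConfirmatoryTrialDesign())
```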

In summary, we strongly believe that trial design in the case of ALS needs to reflect two propositions:

(1) recognition of advances in biostatistics allowing the use of predictive progression algorithms in ALS trials; combined with

(2) appropriate use of discretion in selecting and interpreting p-value significance.

Doing so will allow quicker, less costly confirmatory studies in ALS, hopefully leading to drug/treatment approval, whether on a normal or accelerated basis, and ultimately bring well-proven treatments to the clinic and the patient community in a faster, more efficient way, resolving the quagmire created by rigid adherence to traditional, time-honored drug trial protocols.

In making these suggestions we understand completely that complications may be added to the trial process, but we firmly believe that generating more and better data, and better employing clinician and scientific judgment in analyzing results, will produce a far better outcome than what has occurred to date.  We look forward to a productive dialogue and a statement of the FDA's position on each of these issues.
