11th Oct 2019 - SH Lang

New flyer for QED Biomedical Ltd


Flyer - QED Biomedical.pdf
30th Aug 2019 - SH Lang

QED Biomedical blog posted by Drug Target Review

What are the pros and cons of using organoids?

Dr Shona Lang investigates the advantages and disadvantages of using organoids within R&D, highlighting the most important questions to ask before using these models.

Organoids are three-dimensional (3D) cell cultures which mimic tissue architecture. Their growing importance in various fields of research saw them named ‘Method of the Year 2017’1 and their use in drug testing has driven a large increase in the market for 3D cell cultures.2 Importantly, these models have revealed the ability of cells to differentiate and self-organise. Researchers have found organoids relatively easy to grow; they can produce stunning images and can be observed using molecular, cellular and imaging techniques.

However, like any other model derived from patient tissues, they must be clearly validated. So, if you are thinking of investing in an organoid model or you are trying to select the best model from several possibilities, what questions should be asked? There are clear advantages in using organoids, but often the negatives can be overlooked or are simply unpublished.

Not an average culture

Organoids are an upgrade from the traditional primary cultures grown as a monolayer in a tissue culture flask. The difference for organoids is that cells are grown within a basement membrane gel and develop into 3D shapes, primarily hollow or budding spheres. The cells seeded into this type of culture system can be selected (eg, stem cells) or unselected to reflect the cellular heterogeneity of the patient.

Although organoids are now widely used, they are also under-investigated and often poorly validated. It is not known what effect the basement membrane gel has on cellular behaviour – does it support the natural differentiation of the cell or does it reprogramme growth in an undetermined way? This must be kept in mind if you assume that your model recreates the tissue of interest.

Organoid morphology

Organoids offer superior morphology if you are studying a glandular tissue, but they are not appropriate for studying stratified tissues, such as skin. Originally organoid cultures were grown to investigate normal cellular differentiation in the prostate3 and breast.4 Their use as a tumour model is debatable as the influence of the basement membrane gel is not currently understood. One of the classical definitions of tumour growth is anchorage independence, yet organoids are anchorage dependent due to their adherence to the basement membrane proteins in the gels. In addition, tumour cells known to grow as solid masses in vivo can grow as hollow spheres5 in the organoid system, so they do not always recreate patient tissue architecture. Therefore, it is important to carefully consider the morphology of the disease you are studying.

Purity and removing contaminants

Cultures derived from tissues will contain a mix of cell types, for example, epithelial or stromal. These can be contaminated by neighbouring tissues depending on the sampling technique, eg, prostate tissue sampled by transurethral resection could contain urethral tissue. Tumour tissues can also be contaminated by normal cells. A study of in vitro and in vivo models for precision medicine indicated that between 25 and 95 percent of ‘tumour’ organoids (across a range of tissues) were normal cells.6 When using organoids, it is imperative to confirm the absence of all potential contaminants.


Heterogeneity of organoids

Although heterogeneity is frequently reported as a positive aspect of spheroid cultures, it must also be recognised as a negative. If you are testing a drug and want to know its effect on a heterogeneous culture of cells, then this is a positive characteristic. However, when investigating a drug’s mechanism of action and which cells are targeted, using a heterogeneous mix of cells will not yield answers. Evidence suggests that you cannot rely on the in situ genotype to reflect the in vitro genotype.7 Since cells are usually seeded at high density into an organoid culture there is also a risk that a single organoid may not be clonal (derived from a single cell). If you are using a heterogeneous mix of cells it is critical to check whether the ratio and type of cells (tumour, normal, different tumour clones) reflects the patient or whether it is better to use multiple samples or organoid cultures per patient.
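
The chance that any given organoid is clonal can be estimated from the seeding density. As a minimal sketch (assuming, purely for illustration, that the number of founder cells per organoid follows a Poisson distribution):

```python
import math

def prob_clonal(mean_founders: float) -> float:
    """Probability that an occupied organoid arose from a single founder
    cell, assuming founder counts are Poisson distributed:
    P(clonal | occupied) = P(k = 1) / P(k >= 1)."""
    lam = mean_founders
    return (lam * math.exp(-lam)) / (1.0 - math.exp(-lam))

# Hypothetical seeding densities (mean founder cells per organoid):
for lam in (0.1, 0.5, 1.0, 2.0):
    print(f"mean founders {lam:.1f}: P(clonal) = {prob_clonal(lam):.2f}")
```

Under this assumption, the probability of clonality falls from about 95 percent at sparse seeding to about 31 percent at a mean of two founders per organoid, which is exactly the risk flagged above for high-density seeding.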

Reproducibility and clonal drift

Heterogeneous cultures contain many different clones, which means that as the culture grows certain clones will dominate while others will die out. Therefore, the heterogeneity will differ from one passage to the next and experimental reproducibility will suffer.8 Whilst a mutation may be detectable in a culture of organoids, this does not mean that 100 percent of the cells within or between organoids are tumourigenic, and this has consequences for drug testing. If you identify an organoid with a mutation of interest, you need to confirm what proportion of cells carry that mutation. If only 10 percent of the cells contain the mutation, the response to the drug will be dominated by the wild-type cells. This lack of validation can lead to false negatives and poor reproducibility. Accordingly, it is important to understand the genotype and phenotype of your culture (including the proportion of mutant cells) at every passage to ensure your model is fit for application in precision medicine.
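
To put rough numbers on this, here is a minimal sketch (the kill rates are hypothetical, chosen only to show how wild-type cells mask the mutant signal in a bulk readout):

```python
def bulk_response(frac_mutant: float, mutant_kill: float, wildtype_kill: float) -> float:
    """Overall fraction of cells killed in a mixed culture, assuming the
    bulk readout is a weighted average of the two subpopulations."""
    return frac_mutant * mutant_kill + (1 - frac_mutant) * wildtype_kill

# Hypothetical drug: kills 80% of mutant cells but only 5% of wild-type cells.
for frac in (1.0, 0.5, 0.1):
    print(f"{frac:.0%} mutant cells -> {bulk_response(frac, 0.80, 0.05):.1%} of culture killed")
```

With only 10 percent mutant cells the bulk kill is about 12 percent, so a genuinely active drug looks like a non-responder: the false negative described above.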

Evidence-based validation

Currently there is a growth in omics research – the study of large datasets derived from genomic and proteomic research – which has enabled an understanding of the complete molecular profile of the patient and helped establish personalised medicine. Omics research has also led to the establishment of numerous databases to store information, which can be used to source and categorise research material, eg, biobanks and cancer model directories. However, the validity of the data in these databases is often unclear and there are currently calls to improve research standards9 and rigour10 in this area. It is important to remember that the identification of a gene mutation within a model does not confirm that the mutation is present in all cells or that it will persist over time in culture.

Currently the validation of pre-clinical models is based on the evidence reported in scientific publications or the opinion of experts; there are biases in both.11 As the Open Science movement has made clear, scientific publications suffer from publication bias.11 Expert opinion is open to bias, conflicts of interest and spin.12 Evidence-based decision making has been used for several years in medicine to avoid partiality.13 This process relies on developing a research question and protocol to find and evaluate all available evidence. It is advantageous because it not only evaluates evidence in terms of quality but also highlights the evidence gaps. Advances in the application of evidence-based decision making for biomedical science allow investors and researchers to find the best validated models and ensure they recapitulate the disease of interest.14 Decision-making tools allow researchers to identify which validation criteria have been met and which have not (tissue of origin proven, cell lineage proven, contaminants absent, concordance of histology and genotype, concordance of response to drug treatment), as sketched below. This ensures that both useful evidence and evidence gaps can be identified.15 The process aids the selection of the best models and can highlight the need for further validation experiments.
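
As an illustration only (the criteria are taken from the list above; the judgements are hypothetical placeholders), such a decision-making tool might represent each model as a checklist that distinguishes failed criteria from unreported ones:

```python
from enum import Enum

class Evidence(Enum):
    MET = "criterion met"
    NOT_MET = "criterion not met"
    NOT_REPORTED = "evidence gap"

# Hypothetical judgements for a single model under review.
model_checklist = {
    "tissue of origin proven": Evidence.MET,
    "cell lineage proven": Evidence.MET,
    "contaminants absent": Evidence.NOT_REPORTED,
    "histology concordant with patient": Evidence.NOT_MET,
    "genotype concordant with patient": Evidence.MET,
    "drug response concordant with patient": Evidence.NOT_REPORTED,
}

gaps = [c for c, e in model_checklist.items() if e is Evidence.NOT_REPORTED]
print("Evidence gaps needing further validation experiments:", gaps)
```

The key design point is the three-valued outcome: an unreported criterion is an evidence gap, not a failure, and it signals where further validation experiments are needed.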

In conclusion, evidence-based decision-making tools improve translation into the clinic by reminding investors of the important questions to ask. It is vital to always be wary of models with minimal validation.

https://www.drugtargetreview.com/article/48244/what-are-the-pros-and-cons-of-using-organoids/


24th July 2019 - posted by Dr Anne Collins

Hierarchical cancer stem cell model challenged in glioblastoma

The idea that cancer stem cells (CSCs) exist as a defined cellular entity, driving tumour growth and treatment resistance, is challenged by a team at the Luxembourg Institute of Health working in glioblastoma, a form of brain cancer.

The researchers found that cellular heterogeneity, which is a hallmark of glioblastoma and makes it particularly difficult to treat, was due to the inherent plasticity of tumour cells rather than to multipotent CSCs. They describe their findings in the journal Nature Communications and caution against therapies targeting CSC cell surface markers because all cancer cells have the capacity to reconstitute tumour heterogeneity.

These findings bring into question the validity of numerous studies describing brain CSCs and the rationale of some biotech companies pursuing a strategy of targeting CSCs using immunotherapy. The peer review system is meant to examine papers for potential shortcomings, yet poor study design is often left unchallenged. For example, numerous studies reporting the existence of CSCs failed to test the potency of marker-negative cells or applied different growth conditions to marker-positive and marker-negative subpopulations, thereby introducing bias.

There are a number of initiatives to tackle the reproducibility crisis in biomedicine, such as replicating key landmark studies before embarking on clinical trials. This is both costly and time-consuming. Another approach is to use systematic review and evidence-based decision making to verify evidence before embarking on an investment or new direction. QED Biomedical specialises in systematic reviews and tailored evaluations in biomedicine, analysing data for validity as well as reproducibility. We can help you make an informed decision.

6th May 2019 - posted by Dr Anne Collins

Patient derived xenograft models: an emerging platform for translational cancer research?

One of the largest studies to date on new drug approvals reported that around 3% of cancer drugs make it to the market. The probability of success improves significantly, to just over 10%, if biomarkers are used to select patients, but this is still a dismal return for what is a considerable investment by pharma and biotech companies. This points to a failure of existing cancer models to reliably predict anticancer activity in the clinic.

Patient-derived xenografts, otherwise known as PDX models, have been cited extensively in the scientific literature as better predictors of response compared with the more traditional cancer cell line models because they retain the cellular heterogeneity, architecture and molecular characteristics of the original cancer. With the demand for more personalised medicine, the market for PDX models has grown significantly, with pharmaceutical and biotech companies the biggest users. So how reliable are PDX models? Unfortunately, much of what is claimed does not stand up to scrutiny.

PDX models are based on the direct implantation of fresh cancer specimens from individual patients into immunodeficient mice. Once established in individual mice, each tumour is subsequently expanded into further mice by a process known as serial transplantation. The challenge is that each copy (or xenograft) should reflect the original human cancer and should not deviate with continued serial transplantation. To assess how well PDX models mimic the disease in man we carried out a systematic review, recently published in PeerJ. We limited the review to the four most common cancers (breast, prostate, colon and lung) and used a checklist of strict criteria to determine model validity. We found that around half of all breast and prostate studies that derived xenografts did not mirror the donor cancer as claimed by the authors. Lung and colon fared somewhat better, with 28% of lung and 16% of colon studies judged high risk. Examples of the discordance reported in the review included a lack of concordant gene mutations, discordant clustering in gene expression studies and failure to express tissue-specific markers in line with the donor cancer.

Overall, we categorised most studies as unclear because one or more validation conditions were not reported, or researchers failed to provide data for a proportion of their models. The most common omissions were failure to demonstrate the tissue of origin, failure to test the response to standard-of-care agents and failure to exclude the development of lymphoma (a common phenomenon in the PDX field).

Whether or not PDX models are more clinically predictive is yet to be determined, at least for the four most common cancers. Our ability to judge which discoveries/innovations will be beneficial is crucial. Until we adopt a more formal, unbiased assessment of biomedical research findings, with strict guidelines to ensure transparency, we are unlikely to improve the low rate of clinical trial success.

4th May 2019 - Dr Shona Lang

‘The night is dark and full of terrors’:

or the despair of reading a scientific publication

The famous quote from Game of Thrones perfectly describes my emotions when reading a modern scientific paper. This is particularly the case if I try to dig deep and evaluate whether the evidence is valid or reproducible.

I recently read a paper (published in Science) describing the creation of patient derived cellular models from cancer tissue. My aim was to assess how well the authors:

a) recreated gastrointestinal cancer tissues (cancer markers, tissue markers, morphology and treatment response)

b) proved their claims were reproducible

I consider myself a seasoned veteran of reading and writing scientific reports, however I found myself struggling to read the paper and forcing myself to identify all the relevant information. I complained loudly to colleagues about the difficulty of my task and the lengthy time it was taking.

The paper, selected at random, was around seven pages long and had 43 authors from ten research centres. The main text contained four figures, but each contained up to seven sub-figures. There was no methods section; the trend in high impact journals is to relegate this to the supplementary materials. The supplementary materials were a whopping 47 pages, including 13 additional figures (each with multiple sub-figures) and eight large Excel data sheets. There were multiple complex methodologies, including advanced tissue culture, histology, DNA sequencing and mutational analysis. You can see the scale of the task. Maybe you can understand why I made the comparison with the futility of warring against the army of the undead.

It took me a determined day to read through all the data and find all the evidence relevant to my evaluation (or confirm the lack of it).

After completing my analysis (posted in previous blogs and tweets), I found that:

a) The authors reported selectively. They didn’t report all the evidence for all their models, and it wasn’t even clear whether they had chosen to report on particular models only. This leaves you with the impression that the authors only report the results they want you to see.

b) The models were not well validated (morphology was not always consistent with the patient, there was limited evidence for colon or tumour markers, and mutations were not always identical to those of the patient).

c) Response to treatment was very opaque. The patient response was mentioned only in the text and it was not always clear what that response was. The model’s response to treatment was presented as a relative effect, so the absolute results could not be judged. In addition, there were no statistical analyses to prove a treatment effect, nor that it was concordant with the patient. The reader had to trust the stated text and could not confirm it from clearly reported quantitative results.

d) Sadly (but not unusually) the data were frequently based on three replicates; this is not robust, since the likelihood of the result repeating is very small with such a sample size. Ideally the authors should have calculated a sample size based on the effect size seen in the patients.

Overall, I was left with the impression of selective reporting, opacity of the absolute data and too much relative or categorical data. I don’t think a statistician would have approved this data for publication, assuming they could penetrate the paper’s complexity.

Basic standards of statistical reporting and the ease of reading appear to have been replaced by increasing data volume and the inclusion of many complex methodologies; this cannot be good for the scientific community. We need to remind ourselves of the reason that scientists publish: to pass on knowledge. Scientific knowledge must be reported in a manner that allows it to be understood, repeated and clearly evaluated by other scientists.

Other researchers have also suggested that scientific papers are becoming too dense, too complex and too hard to read. I felt that the current research would be more useful to the community if the evidence had been reported in multiple, shorter articles which each focus on one question and report the results clearly using absolute values. It is the lack of clarity in the evidence that is most disturbing; anyone who may want to invest in the models should be able to evaluate reproducibility and effect size. Investors should not assume the authors’ conclusions are correct without this evidence. QED Biomedical uses in-house tools to evaluate the validity of new preclinical models and can assess the likelihood that treatment responses are reproducible.

27th Apr 2019 - Dr Shona Lang

Poor reporting of statistical analysis in preclinical organoid models for drug testing

Our recent evaluation of a scientific publication, comparing the response of preclinical models to anticancer treatment with the response in patients, showed a shocking lack of statistical reporting and rigour.

Models were tested in triplicate for a response to cetuximab. No sample size calculation was performed. It is highly unlikely that three replicates were sufficient to detect any significant or reproducible effects of cetuximab on organoid growth. Statistical guidance has suggested that a sample size of at least 10 replicates should be applied to preclinical models. The lack of sample size calculations is one of the greatest causes of false findings, yet it remains one of the most overlooked reporting criteria in biomedical science.
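
As a rough illustration of why triplicates fall short, here is a minimal power calculation in Python using statsmodels (the standardised effect size is a hypothetical value, not one taken from the publication):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 1.0  # hypothetical Cohen's d for a drug response

# Power achieved with only three replicates per arm (two-sided t-test):
power_n3 = analysis.solve_power(effect_size=effect_size, nobs1=3, alpha=0.05)
print(f"power with n=3 per arm: {power_n3:.2f}")

# Replicates per arm needed to reach the conventional 80% power:
n_needed = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
print(f"replicates per arm for 80% power: {n_needed:.1f}")
```

Even for this large assumed effect, three replicates per arm give well under half the conventional 80 percent power, consistent with the guidance of at least 10 replicates cited above.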

To compound the issue reported above, the authors reported treatment effect as a relative value, which means the absolute values for the response to the drug treatment or to controls could not be evaluated. In addition, the authors provided no statistical analysis to prove that the drug had an effect (in comparison to the control) nor did they provide statistical evidence that the organoids replicated the patients' responses. The reader was expected to take it on good faith from narrative descriptions.

Why should you care? Organoids have been proclaimed as the ‘method of the year’ and are part of a growing industry for preclinical drug testing. There are supporting industries in bioinformatics helping researchers find the best models for their needs. We must ensure that, no matter how complex the chosen model, the basic tests of reproducibility are reported alongside the discoveries and the limitations are acknowledged. This is what will ensure that biomedical discoveries translate efficiently to the clinic. Precision medicine needs to be precise.

QED Biomedical is helping to tackle the reproducibility crisis. We can help you design your experiments better or we can help you identify the strengths and weaknesses in existing research.


6th Mar 2019 - Dr Shona Lang

QED Biomedical blog posted by CATAPULT Medicines Discovery

New methods to improve the translation of biomedical science towards clinical trial success

Moving forward - implementing systematic review and identifying reliable evidence

An article from our series: Views from the Discovery Nation

It is now over ten years since John Ioannidis published his essay ‘Why most published research findings are false’. This paper presented statistical arguments to remind the research community that a finding is less likely to be true if: 1) the study size is small, 2) the effect size is small, 3) a protocol is not pre-determined and not adhered to, or 4) it is derived from a single research team. Methods to overcome these issues included establishing the correct study size, reproducing the results by multiple teams and improving the reporting standards of scientific journals.
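
The essay’s central quantity is the post-study probability that a claimed finding is true (its positive predictive value, PPV). A minimal sketch of Ioannidis’s formula, with an illustrative pre-study odds value:

```python
def ppv(pre_study_odds: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Probability that a statistically significant finding is true,
    after Ioannidis (2005): PPV = (1 - beta) * R / (R - beta * R + alpha),
    where R is the pre-study odds and beta = 1 - power."""
    beta = 1.0 - power
    r = pre_study_odds
    return (1.0 - beta) * r / (r - beta * r + alpha)

# Illustrative pre-study odds R = 0.1 (1 true relationship per 10 tested).
# Small, underpowered studies sharply reduce the chance a finding is true:
for power in (0.8, 0.2):
    print(f"power {power:.1f}: PPV = {ppv(0.1, power=power):.2f}")
```

This is the arithmetic behind points 1 and 2: small studies mean low power, and low power means a significant result is much less likely to be true.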

So, have any of these suggestions been taken forward? There has been an increase in the promotion of open science, a movement which supports increased rigour, accountability and reproducibility of research. In addition, some scientific journals now impose stricter reporting standards. Whether these recommendations are adhered to remains to be established. Evidence from a systematic review we conducted on the asymmetric inheritance of cellular organelles highlights the problems in basic science reporting and study design.

Of 31 studies (published between 1991 and 2015), not one performed calculations to determine sample size, 16% did not report technical repeats and 81% did not report intra-assay repeats. We need to educate our future scientists on study design and impose stricter publication criteria.

Impartial and robust evaluations of biomedical science are required to determine which new biomedical discoveries will be clinically predictive. We should be concerned by the lack of reproducibility in biomedical science because it is a major contributing factor in the low rate of clinical trial success (only 9.6% of phase 1 studies reach approval stage). Our ability to judge which discoveries will have real-life effects is crucial.

What can be learned from clinical research?

A lot can be learnt from clinical research. The publication of clinical research must follow strict reporting guidelines to ensure transparency, and decision making is based on unbiased evidence gathered by systematic review. Systematic reviews provide a methodology to identify all research evidence relating to a given research question.

Importantly, systematic reviews are unbiased (not opinion-based) and are carried out according to strict guidelines to ensure the clarity of the results and their reproducibility.

They form the basis of all decision making in clinical research. Importantly, the evidence gathered by a review is judged to ascertain its quality, which allows the review to present results graded according to the ‘best evidence’. The quality of clinical evidence is judged primarily according to the study design (randomisation, blinding of participants and research personnel, etc). There are many risk of bias tools available, each appropriate to a different study design. The GRADE approach incorporates the findings of a systematic review alongside study size, imprecision, consistency of evidence and indirectness, providing a clear summary of the strength of evidence to guide recommendations for decision making.

It makes sense for decision making in biomedical research to follow a similarly open and unbiased approach. Astoundingly, the choice of which biomedical discovery is suitable for further investment is usually an opinion-based decision. Big steps have been made to introduce systematic review to preclinical research: there are reporting guidelines for animal studies and risk of bias tools, and the quality of these studies is again judged on study design.

At a basic bioscientific level one must argue that judging the quality of evidence should focus on more fundamental aspects of the research: 1) how valid is the chosen model (how well does it recapitulate the human disease of interest)? 2) how valid is the chosen marker (how well does it identify the target)? 3) how reliable is the result? Pioneering work by Collins and Lang has introduced tools to perform such judgements. These tools aim to directly address the issues raised by John Ioannidis and to highlight the strengths and gaps in a given research base.

Original blog can be found here https://md.catapult.org.uk/new-methods-to-improve-the-translation-of-biomedical-science-towards-clinical-trial-success/


1st Feb 2019 - SH Lang

Is your tumour model what you think it is?

Advancing a candidate drug from preclinical testing into clinical trials assumes that the cancer models used in the laboratory have reproduced the clinical disease accurately. Our recent review shows that this assumption does not hold. We looked at how well the authors of scientific publications validated laboratory models grown from patient tumours. More than half of the models selected for potential inclusion in our review had no validation methods at all; the authors assumed the model reflected the patient. This is a dangerous assumption: tissue samples can contain contaminating cells that will outgrow the cells of interest. These can be neighbouring tissues or cells with other disease states and, in the case of tumour tissues, contamination can come from normal cells.

Of the studies that performed validation, more than half did not confirm: 1) the tissue of origin, 2) the absence of contaminating cells, or 3) that the model looked like the tumour growing in the patient (histology). Just over half of all studies (57%) confirmed that the models were derived from tumour cells (and not normal cells). It is crucial that we clearly understand what the laboratory model is and do not make assumptions. At QED Biomedical Ltd we aim to present transparent reports of how well human models are validated. This allows improved judgements of whether a model is appropriate for drug testing and likely to be predictive of the clinical patient. Make your decision on evidence, not assumptions or opinions.

Collins AT, Lang SH. 2018. A systematic review of the validity of patient derived xenograft (PDX) models: the implications for translational research and personalised medicine. PeerJ 6:e5981 https://doi.org/10.7717/peerj.5981