A brief journey to India, and into models of carcinogenesis

In early 2016, I was asked if I wished to speak at the discussion meeting “Conflict and Competition in Cellular Populations” in Bangalore, India, organized by Dr Sandeep Krishna and Dr Sunil Laxman (NCBS). The title sounded so intriguing that I accepted without even checking the actual topic of the meeting. Then an adventure began, one that has now concluded (has it?) in 2021 with a small paper entitled “Cooperation of partially transformed clones: an invisible force behind the early stages of carcinogenesis”, published in the Royal Society journal Open Science (10.1098/rsos.201532). Let me tell you the story of this journey that, perhaps, might inspire you to adventure outside of your field.

For brevity, I’ll skip the details about the actual trip. It was of course exciting to experience a culture I am often exposed to but have never lived in. The food, the people, the contrasts of India, a small glimpse into a complex galaxy of humanity. My short trip to India started with a sleep-deprived me trying to explain to the border police that the conference Conflict and Competition in Cellular Populations, nicknamed CCCP, whose poster was written in pseudo-Cyrillic, was not a political conference (I would have needed a different visa in that case!) and concluded back in Cambridge a week later with a slightly embellished bedtime story for my 3-year-old daughter about the animals I saw in the park that hosts NCBS, a story I still tell her now and then.

But of course, here I focus on the science. The conference hosted a good number of great speakers (referring to the others) on the topic of ecology (er, yes, the title made sense). It suddenly dawned on me that I was ‘a bit’ off-topic. Yet, talk after talk, I loved learning a bit about ecology, including its mathematical foundations. I really enjoyed the meeting, so much so that I could not stop thinking about its relevance to my work, which back then was focused on non-genetic heterogeneity in cell decisions, carcinogenesis and the DNA damage response.

The study of cancer as an ecological problem is not new, of course. Something very specific started to bug me, though, something I could not find literature about. We know that different clones of cancer cells cooperate and compete in tumours, but what happens during the very early steps of carcinogenesis? I was queuing to board the airplane when I managed to download the paper “Evolution of cooperation among tumor cells”, published ten years earlier by Axelrod and colleagues in PNAS. It was a nice in-flight read, but the flight from Bangalore to London is long and I started to obsess about a very simple mathematical fact.

For a moment, let’s imagine you dream of establishing a business but you need £1M to start it. However, you are a bit of an odd person and decide to do it only if you win the lottery, whose jackpot is £500k. You clearly make strange decisions, but I am not here to judge… the oddest thing, however, is that you bet on winning the lottery not just once but twice. Then you have an idea. You agree with your village of similarly odd-minded people that if anyone wins the lottery, you will pool the money together to invest in this start-up. This is still an unlikely strategy, and certainly one with a tiny probability of succeeding, but it is definitely more likely to work out than waiting to win the jackpot twice on your own.

Back to carcinogenesis. Every day, each cell has a certain probability of mutating because of exposure to radiation, chemicals, or simply the chance of errors in biochemical machineries. Mutation after mutation in the right genes, a cell might grow into cancer. A very unlikely series of events that, however, with trillions of cells in our bodies, is likely to happen over one’s lifespan. We know that certain mutations occur in cells that eventually lead to cancer. We know that one cell wins the macabre lottery of disease multiple times before leading to cancer. We also know, then, that many other cells will acquire mutations within an otherwise healthy tissue.
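To put rough numbers on this intuition, here is a toy back-of-the-envelope calculation in Python; the probabilities and cell numbers are purely illustrative and are not taken from the paper.

```python
# Toy illustration with hypothetical numbers (not the paper's model).
p = 1e-6          # assumed probability that a given cell acquires one specific mutation
n_cells = 1e7     # assumed number of cells in the tissue

# One cell 'wins the lottery twice': both mutations arise in the same cell.
p_both_in_one_cell = 1 - (1 - p**2) ** n_cells

# The hits are split: at least one cell carries mutation A and, approximately
# (ignoring overlaps), at least one other cell carries mutation B.
p_any_single_hit = 1 - (1 - p) ** n_cells
p_cooperating_pair = p_any_single_hit ** 2

print(f"both hits in one cell:       {p_both_in_one_cell:.2e}")    # ~1e-5
print(f"hits split across two cells: {p_cooperating_pair:.2e}")    # ~1
```

If partially transformed clones can cooperate through cell-to-cell communication, the second, far more probable route becomes relevant; this is the simple mathematical fact I kept turning over on that flight.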

We usually consider that all these other mutant cells will either accrue neutral mutations (i.e., mutations that will not change the fitness of the cell, nor confer a cancer phenotype), or deleterious mutations that will be purged by tumour-suppressive mechanisms. However, cells within a tissue communicate, and mutations also occur in genes responsible for cell-to-cell communication. In my recent work, I propose a ‘toy model’ with which I explore the possibility that the gene- and cell-centric view of the mutational process should be reconsidered in the context of an overall tissue, where cell-to-cell communication might reshape the early steps of carcinogenesis. I am not the first to do so, but I try to emphasize with simple modelling how the mutational process should be seen in the context of a collective of cells rather than in a gene- or cell-centric fashion.

What did I learn beyond what I have written in the paper (that is, beyond the science itself)?

First, I had real fun, something that over time does not happen with every paper, even the more important ones in which we invest major resources. I even had fun during the revision process. As many of us experience, I often get half of the referees very supportive of my work and half rather dismissive. But the very supportive ones have often been extraordinarily kind and helpful, describing the manuscript as ‘a refreshing read different from what I usually read in this field‘ (an earlier submission to a different journal) or stating that ‘the models presented here make the point in a clear and dramatic manner‘. The last referee of the final, now published, submission was particularly helpful. Not only did they critically review the manuscript, but they also invested time to describe a discrete-time Markov chain model that I could integrate into the manuscript. This suggestion permitted me to learn a bit of maths I had not practised before, and to improve the work… this is what refereeing should be.

Second, alongside the enthusiasm of adventuring into a field rather different from my already eclectic research interests, I also felt the pain of being an outsider; a pain I often feel, but one made sharper by the fact that I was a single author. This was really a ‘pet project’. I became convinced to shape my notes into a manuscript only after I attended a seminar by Prof. Allan Balmain in 2018 related to the Nat Cell Biol article “Multicolour lineage tracing reveals clonal dynamics of squamous carcinoma evolution from initiation to metastasis“. It was a great talk and somehow relevant to the notes I had been writing since my trip to India. I decided to try to publish my ideas after reading the commentary by Prof. Kornelia Polyak and Prof. Michalina Janiszewska where they state: “One possible explanation is that there is a cooperative interaction between the streak and bulk tumour cell populations; an intriguing hypothesis that warrants further investigation but was not tested by Reeves et al.5. The streak pattern observed by Reeves et al. is reminiscent of the streaks generated by non-mutualistic budding yeast analysed by Muller et al.13.” In the end, I am not sure the work I put into this manuscript was worth the pain.

So, do I advise others to adventure so wildly into other territories? As I have written before, it is rarely rewarding career-wise and never easy. But, once in a while, let’s just follow the passion and enthusiasm for something new, with no regrets. Any adventure comes with some pain, but the fun of exploring, in the end, makes the experience worth living overall.

I hope this small new paper can provoke some thoughts, or inspire some young scientist to adventure… perhaps not too far and not alone, as exploring comes with its perils.

Is the average between a cat and a dog a real animal?

[Image: a dog. Credit: Pixabay License; free for commercial use, no attribution required.]

Is it a cat? Is it a dog? Is the average between a cat and a dog a real thing, perhaps a caog or a doat?

Not all science should be based on single-cell detection, and there are plenty of cases where single-cell measurements are superfluous. However, too often we fail to appreciate the huge mistakes we can make in biology when we forget the assumptions we make when using population measurements.

But which assumptions do we really make?

Often implicitly, when doing population measurements (e.g., Western blots, sequencing, proteomics, etc.) we assume that the populations of cells we measure are homogeneous and synchronous. Or, at least, we assume that these differences are unimportant and can be averaged out. In the best cases, we try to enforce a degree of synchronicity and homogeneity experimentally. In reality, one of the most important assumptions we implicitly make is that the system we analyse is ergodic. In physics and statistics, an ergodic system is a system that, given a sufficiently long time, explores all its possible states. It is also a system where – if sufficiently sampled – all its states are explored and, consequently, averages over time on a single cell and averages over a population at a given time are the same. However, there are limits to this assumption in biology. The obvious example is the cell cycle. There is significant literature about ergodicity and the cell cycle [e.g., 1, 2, 3] and how this principle can be exploited, but…
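Before getting to that ‘but’, here is a minimal toy sketch of what the ergodic assumption buys us (my own illustration, not taken from the cited papers): for a marker that rises through a 24-hour cell cycle, the time average measured on one cell over many cycles matches the snapshot average over an asynchronous population.

```python
import numpy as np

rng = np.random.default_rng(0)

CYCLE_H = 24.0

def marker(phase_h):
    """Toy marker level rising linearly from 0 to 1 through the cell cycle."""
    return phase_h / CYCLE_H

# Time average: follow ONE cell for 1000 cycles, sampling every 0.1 h.
t = np.arange(0.0, 1000 * CYCLE_H, 0.1)
time_average = marker(t % CYCLE_H).mean()

# Ensemble average: a snapshot of many asynchronous cells at random cycle
# positions (uniform phases, i.e. ignoring the bias toward young cells that
# an exponentially growing culture would show).
phases = rng.uniform(0.0, CYCLE_H, size=100_000)
ensemble_average = marker(phases).mean()

print(time_average, ensemble_average)   # both ~0.5: time and population averages agree
```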

The lottery for cell division makes you grow faster.

There is a particular phenomenon that we encountered while working on this project [4] that fascinated me for its simplicity and its consequences. How can cells increase their fitness (i.e., their growth rate)? One obvious answer is by dividing faster. Another, at first glance less obvious, answer is by exhibiting a heterogeneous cell cycle length. Let’s consider a population of cells that divide every 24 hours. Over one week, these cells will reach 128 times the original population size. Now, let’s consider cells that divide on average every 24 hours but exhibit random variation in cell cycle length, normally distributed with a standard deviation of 4 hours. Cells with a 20-hour or a 28-hour cell cycle are equally probable. However, in one week, cells with a 28-hour cell cycle will grow 64-fold and cells with a 20-hour cell cycle will grow about 340-fold. On average, these cells will grow ~200-fold, which is much faster than cells dividing precisely every 24 hours (128-fold). This is true for any pair drawn at equal distance from the two sides of the average; because these pairs are equiprobable and exponential growth is a convex function of the division rate, cells dividing at a given average cell cycle length grow faster the more heterogeneous they are. Let’s remember that this can occur not just in the presence of genetic differences, but even just for stochastic variations, where the progeny of one cell will not keep the same cell cycle length but will keep changing randomly according to an underlying distribution. This is a phenomenon that has been observed experimentally, for instance, in yeast [5] with single-cell measurements, but it occurs in any cellular system, as described in [1] and in our own work [4]. Population measurements might conceal these very important phenotypic or mechanistic differences.
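The numbers above can be checked with a few lines of Python; this is only a sketch of the pairwise argument, not a full lineage simulation.

```python
HOURS_PER_WEEK = 7 * 24

def fold_growth(cycle_length_h, t=HOURS_PER_WEEK):
    """Fold-expansion over time t of a clone dividing every `cycle_length_h` hours."""
    return 2.0 ** (t / cycle_length_h)

# Deterministic 24 h cycle: 2^7 = 128-fold in one week.
print(fold_growth(24.0))                                # 128.0

# The equiprobable 28 h / 20 h pair used in the text.
print(fold_growth(28.0), fold_growth(20.0))             # ~64 and ~338
print(0.5 * (fold_growth(28.0) + fold_growth(20.0)))    # ~201, already > 128

# Every equiprobable pair (24-d, 24+d) averages to more than 128-fold, and the
# excess grows with the spread d, because exponential growth is a convex
# function of the division rate (Jensen's inequality).
for d in (2.0, 4.0, 6.0, 8.0):
    print(d, round(0.5 * (fold_growth(24.0 - d) + fold_growth(24.0 + d))))
```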

A mixture of two normal distributions is not another normal distribution.

The beauty of the normal distribution is that it is such a ‘well behaved’ distribution and, at the same time, it represents many physical and biological phenomena. If a population we are characterizing is a mixture of two normal distributions, its mean is the weighted average of the two means. If the two components have the same mean, the variance of the mixture is the weighted average of the two variances. These basic and useful mathematical relationships can also be rather misleading. In fact, while these statements are mathematically correct, two populations of cells that ‘behave rather differently’, for instance in response to a drug, cannot simply be averaged. For instance, one cell population might be killed at a given concentration of a drug. Another population might be resistant. By detecting 50% cell death, we could assume – incorrectly – that by dosing at higher concentrations we could kill more cells.

The plot shown below illustrates this basic principle. The blue and red distributions, averaged together, exhibit the same mean and variance as the yellow distribution, but they represent very different systems. If the blue distribution represents the sizes of cats and the red distribution the sizes of dogs, the yellow distribution does not represent the size distribution of any real animal. In other words, the average phenotype is not a real phenotype and, in the best-case scenario, when there is a dominant population, it represents the most frequent phenotype (the mode). In all other cases, where the homogeneity of the phenotype is not checked, the average phenotype might simply be wrong.

[Figure: the blue, red, yellow and purple Gaussian distributions discussed in the text]

This is a very simple illustration of a problem we frequently encounter in biology: trusting our population measurements (averages and standard deviations over experimental repeats) without being sure of the distributions underlying our measurements. In the figure above, the purple distribution has the correct average of the blue and red distributions, but its spread reflects only the statistical error of the assay and is unrelated to the scatter of the biological phenomenon we are measuring. Sometimes we cannot do anything to address this problem experimentally because of the limitations of technologies, but it is very important – at least – to be aware of these issues.

Just for the most curious, I should clarify that for two Gaussian distributions with relative weights A and B, we can define a mixing parameter p = A/(A+B). The mean of the mixed population is simply μP = p·μA + (1−p)·μB, i.e., for p = 0.5 it is the average of the two means. The apparent variance is σP² = p·σA² + (1−p)·σB² + p(1−p)·(μA−μB)², i.e., σP² is the weighted average of the two variances plus the squared separation of the two means weighted by the product p(1−p) of the mixing fractions.
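For those who like to check these relationships numerically, here is a small sketch with made-up example values for the two populations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical populations (say, 'cats' and 'dogs'), mixed with weight p.
mu_a, sigma_a = 4.0, 0.5     # illustrative values only
mu_b, sigma_b = 20.0, 3.0
p = 0.5

# Analytical mixture moments.
mu_mix = p * mu_a + (1 - p) * mu_b
var_mix = p * sigma_a**2 + (1 - p) * sigma_b**2 + p * (1 - p) * (mu_a - mu_b)**2

# Sampling check: draw from each component and pool the samples.
n = 1_000_000
samples = np.concatenate([
    rng.normal(mu_a, sigma_a, int(p * n)),
    rng.normal(mu_b, sigma_b, n - int(p * n)),
])
print(mu_mix, samples.mean())    # the analytical and empirical means agree
print(var_mix, samples.var())    # the analytical and empirical variances agree
# ...yet the pooled histogram is bimodal, not Gaussian: no individual 'animal'
# has the mixture's mean size.
```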

Collective behaviour of cells is not an average behaviour, quite the contrary.

When discussing these issues, I am often confronted with the statement that, ultimately, we do not care about the behaviour of individual cells but about the collective behaviour of groups of cells. There are two important implications to discuss. First of all, when arguing for the importance of single-cell measurements, we are not arguing for studying individual cells in isolation. Quite the contrary, we should measure individual cells in model systems as close as possible to the physiological state. However, many assays are incompatible with the study of cell behaviour within humans and we resort to a number of model systems: individual cells separated from each other, 2D and 3D cultures, ex vivo and in vivo assays. The two arguments (single-cell measurements versus measurements in more physiological models of tissues or organisms) are not the same.

Second, collective behaviours are not ‘average behaviours’. There are great examples in the literature, but I would suggest simply visiting the websites of two laboratories that I personally admire and that illustrate this point nicely and visually: John Albeck’s laboratory at UC Davis and Kazuhiro Aoki’s laboratory at NIBB. Collective behaviours emerge from the interaction of cells in space and time, as illustrated by waves of signalling or metabolic activities caused by cell-to-cell communication in response to stimuli. The complex behaviours that interacting cells exhibit, even just in 2D cultures, can be understood when single cells and their biochemistry are visualized individually. Once again, phenotypes or their mechanisms might be concealed or misinterpreted by population or snapshot measurements.

This is, of course, not always the case. However, my advice is to at least keep in mind the assumptions we make when we perform an ensemble or a snapshot measurement and, whenever possible, to check that they are valid.

Snap opinion on deep-learning for super-resolution and denoising

I am personally conflicted on this topic. I have recently started to work on machine learning, and on deep learning specifically. Therefore, I am keen to explore the usefulness of these technologies, and I hope they will remove bottlenecks from our assays.

My knowledge of CNNs is rather limited, even more so for super-resolution and denoising applications. My first opinion was not very positive. After all, if you do not trust a fellow scientist guessing objects from noisy or undersampled data, why should you trust a piece of software? That appeared to be the response of many colleagues as well.

After the machine learning session at FoM, I partially changed opinion, and I am posting this brief – very naïve – opinion after a thread of messages by colleagues I read on Twitter. Conceptually, I had always thought of machine learning as ‘guessing’ the image, but I suddenly realised that CNNs are perhaps learning a prior, or a set of possible priors.

I have mentioned in a previous post the work by Toraldo di Francia on resolving power and information, often cited by Alberto Diaspro in his talks. Di Francia, in his paper, states: “The degrees of freedom of an image formed by any real instrument are only a finite number, while those of the object are an infinite number. Several different objects may correspond to the same image. It is shown that in the case of coherent illumination a large class of objects corresponding to a given image can be found very easily. Two-point resolution is impossible unless the observer has a priori an infinite amount of information about the object.”

Are CNNs for image restoration and denoising learning the prior? If so, concerns about possible artefacts might not be put aside, but I could at least handle them a bit better conceptually. The problem would then shift to understanding which priors a network is learning and how robust these sets are to typical variations of biological samples.

Great talks today at FoM. Eventually, we will need tools to assess the likelihood that an image represents the ground truth, and some simple visual representation that explains what a CNN is doing to a specific restored image, so as to ensure good practice. Nothing too different from other techniques, but I feel it is better to deal with these issues earlier rather than later in order to build confidence in the community.

Related twitter thread: https://twitter.com/RetoPaul/status/1118435878270132225?s=19

Volume rendering: is this localization-based super-resolution?

Project outcome published in Biophysical Journal in 2010.

  • Esposito A*, Choimet JB, Skepper JN, Mauritz JMA, Lew VL, Kaminski CF, Tiffert T, “Quantitative imaging of human red blood cells infected with Plasmodium falciparum“, Biophys. J., 99(3):953-960

Most papers have an untold backstory that we cannot reveal in them, so as to focus on the main message and the most relevant discoveries. This one has a little backstory I wish to share. Volumetric imaging of red blood cells is not the most difficult thing I have ever done. However, accurate morphological and volumetric imaging of red blood cells infected by Plasmodium falciparum, the causative pathogen of malaria, caused me a few headaches. Let’s forget the time spent waiting for the cultures to grow at the right speed to deliver bugs at the right stage of development, undecided whether to sleep before or after the experiment, and always getting the decision wrong. Let’s not speak, for now, about the optimization of the sample preparation that, by trial and error, led to other interesting observations. Here, I focus on the very simple concept of accurate volume rendering.

In one way or another, volume rendering and estimation require some sort of thresholding on the data so as to discriminate the object from the background. As imaging conditions change even slightly from experiment to experiment, setting this threshold might confound the final outcomes. When you also deal with a sample that undergoes major morphological transitions, a simple problem soon becomes one I spent a lot of time finding a solution for. As it happens, one perhaps does not find the best, the most elegant, or even the simplest solution, but the solution one can find with one’s own skills and tools. Mine was a brute-force approach to isosurface volume rendering, iteratively deformed by locally refitting a random sample of vertices in order to respect a specific model set for the transition from object to background. This method permitted us to preserve high-resolution morphological descriptions, with high accuracy and reproducibility for volume rendering.

This work was carried out while many of my colleagues were focusing on super-resolution, i.e., maximizing the spatial resolution in optical microscopy. It was then simple to notice that fitting a surface onto volumetric data delivers volume estimates at a precision higher than what the optical resolution of a microscope should permit. Indeed, whenever you have a model for an object – in my case the boundary of a red blood cell, in single-molecule super-resolution methods the point spread function of an emitter – it is possible to fit this model with a precision that is not (fully) constrained by diffraction but, in the right conditions, only by the signal-to-noise ratio, the analytical tools and the adequacy of the model for the object.
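To illustrate the principle with a minimal sketch (not the algorithm used in the paper): fitting a blurred-edge model to a noisy one-dimensional intensity profile recovers the position of a boundary with a precision of a fraction of a pixel, limited by the signal-to-noise ratio and the adequacy of the model rather than by the blur alone.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, edge_pos, blur_sigma, background, amplitude):
    """Step edge convolved with a Gaussian blur of width `blur_sigma`."""
    return background + 0.5 * amplitude * (1.0 + erf((x - edge_pos) / (np.sqrt(2.0) * blur_sigma)))

rng = np.random.default_rng(42)
x = np.arange(0.0, 40.0)                     # pixel coordinates along a surface normal
true_edge = 19.3                             # boundary lies between two pixels
expected = edge_model(x, true_edge, blur_sigma=2.0, background=10.0, amplitude=100.0)
noisy = rng.poisson(expected).astype(float)  # photon (shot) noise

popt, pcov = curve_fit(edge_model, x, noisy, p0=[20.0, 2.0, 5.0, 80.0])
print(f"estimated edge position: {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f} px")
# The uncertainty is a small fraction of a pixel, even though the blur spans
# several pixels: the model (the prior) does much of the work.
```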

In this Biophysical Journal paper, we focused on the biological application and, together with other published work, on the modelling of the homeostasis of infected red blood cells. Also to avoid criticism from referees, probably legitimate, I decided not to mention the concept of super-resolution. As my research focus is on biochemical resolution and its use to understand cellular decisions in cancer, I will not pursue this work any further, but I thought I would write down this little story.

While writing this brief story, I recalled my friend Alberto Diaspro often citing Toraldo di Francia on resolving power and information. I believe that my work was far from being a breakthrough from an optical standpoint, but I wished to use it as a reminder of a fundamental issue that, often in biomedical applications, gets forgotten. The resolution at which we can observe a phenomenon, irrespective of the tools used, depends both on the qualities of the instrument and on the quality of the prior information we can utilize to interpret the data. Once technology permitted the imaging of single emitters in fluorescence microscopy, the prior of point-like sources could be used to analyse images so as to reveal the full information content of an image carried by the detected photons.

In an experiment, information content is the most precious thing. Irrespective of the methodologies used, our protocols are designed to maximize signal-to-noise ratios and, thus, maximize information content, precision and resolution. However, as trivial as these statements are, in the biomedical sciences we often do not follow through with the process of maximizing information content. Significant information can be provided by our a priori constraints and models. Moreover, a thorough understanding of the information theory related to a specific assay can provide levels of precision and resolution beyond what we assume, at first, to be possible. However, priors and information theory are far too often neglected. This happens out of necessity, as most people do not have the training and understanding of both the biological and the physical processes, and even those who might have to invest their limited resources carefully. I hope that in the future there will be more collaborative work between life scientists, physicists and mathematicians, aimed at better understanding how to extract maximum information from experiments in the biomedical areas.

So… was our volumetric imaging super-resolution? I am not sure I care to really answer, but I wished to provoke some thoughts and make you think a little about the relevance of information theory in biomedical research.

Photon partitioning theorem and biochemical resolving power

Project outcome published in PLoS ONE in 2013.

  • Esposito A*, Popleteeva M, Venkitaraman AR, “Maximizing the biochemical resolving power in fluorescence microscopy”, PLOS ONE, 8(10):e77392

After my 2007 theoretical work on photon economy and acquisition throughput, I occasionally worked on a more general framework attempting to falsify my hypothesis that multi-channel or multi-parametric imaging techniques can deliver better results than simpler techniques.

My proposal to develop instrumentation to achieve spectrally and polarization-resolved lifetime imaging (later defined as HDIM) was met with scepticism by many. The recurrent question was: if you struggle to do a double-exponential fit with the small photon budget available in biological applications, how could you possibly dilute these photons over several channels and analyse them with more complex algorithms?

Here, there are a few fundamental misunderstandings. First, the analysis should not be carried out on each “detection channel” independently; rather, the entire dataset should be used so as to exploit all the information at once. Second, the use of dispersive optics rather than filters permits the acquisition of a higher number of useful photons. Third, limitations in current technologies (e.g., speed or photon-collection efficiency) should not be an obstacle to the development of these techniques, because these are not conceptual flaws but simply technological obstacles that can be removed.

Although I have a lot of (unpublished) work describing the performance of multi-channel systems, I achieved a breakthrough only when I understood I had to focus my efforts on describing the general properties of the Fisher information content in fluorescence detection rather than the Fisher information in a specific experiment. Fisher information is the information content that an experiment provides about an unknown we wish to estimate. Its inverse is the smallest variance ever attainable within an experiment, or what is called the Cramér-Rao limit. In other words, by maximizing Fisher information, we maximize the precision of our experiments.

Photon-partitioning theorem

The second breakthrough was the understanding that the best description of precision in biophysical imaging techniques was possible only by defining the concept of biochemical resolving power, a generalization of the resolving power of a spectrograph to any measured photophysical parameter, and then by applying it to biochemistry. The biochemical resolving power is proportional to the square root of the photon efficiency of a microscopy technique and of the number of detected photons. Maximization of Fisher information leads to the maximization of photon efficiency and, therefore, to net improvements in biochemical resolving power. This definition complements the definition of spatial resolution in microscopy and allows us to define when two objects are spatially and/or biochemically distinct. It is worth mentioning that this is equivalent to stating that two objects are spatially and photophysically distinct, but we use the photophysics of fluorophores to do biochemistry, hence my nomenclature. I see possible implications for other techniques, including super-resolution, and perhaps this will be the subject of future work.

The third breakthrough was the use of numerical computation of the Fisher information rather than analytical solutions, which are not always available. This approach is very common in engineering but not in our field. With it, we can now optimize the properties of any detection scheme in order to attain the highest performance.
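To give a flavour of what such a numerical computation looks like, here is a toy sketch (my own example, not the code accompanying the paper): the Fisher information about a fluorescence lifetime estimated from photon counts collected in discrete time bins with Poisson statistics, the resulting Cramér-Rao bound, and the corresponding photon-economy figure (often quoted as an F-value, with F = 1 the ideal limit).

```python
import numpy as np

def bin_means(tau, edges, n_photons):
    """Expected counts per time bin for a mono-exponential decay with lifetime tau."""
    cdf = 1.0 - np.exp(-edges / tau)
    return n_photons * np.diff(cdf) / cdf[-1]   # normalized to the detection window

def fisher_information(tau, edges, n_photons, dtau=1e-4):
    # For independent Poisson counts with means mu_i(tau):
    #   I(tau) = sum_i (d mu_i / d tau)^2 / mu_i
    mu = bin_means(tau, edges, n_photons)
    dmu = (bin_means(tau + dtau, edges, n_photons) - mu) / dtau   # numerical derivative
    return np.sum(dmu**2 / mu)

tau = 2.5                                   # ns, assumed lifetime
n_photons = 1000                            # assumed photon budget
edges = np.linspace(0.0, 12.5, 17)          # 16 time bins over a 12.5 ns window

crlb = 1.0 / fisher_information(tau, edges, n_photons)   # lowest attainable variance
f_value = np.sqrt(n_photons * crlb) / tau                # photon economy (F = 1 is ideal)
print(f"precision limit on tau: {np.sqrt(crlb):.3f} ns, F-value: {f_value:.2f}")
```

Changing `edges`, i.e. the detection scheme, and re-evaluating the Fisher information is, in essence, what a numerical optimization of a detection scheme amounts to.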

This is very specialist work and I assume not many people will be interested in it, although the implications of this piece of theory for everyone’s experiments are significant. I believe this is my most elegant theoretical work, but I guess that is a matter of opinion. During the refereeing process, the paper had to be expanded well beyond what I wished to publish, and it now includes examples, software, etc. I think the theoretical introduction and the mathematical demonstrations are the best part, and the description of the numerical optimization of Fisher information the most useful.

NOTE: there are two typographical errors in the published manuscript within the definitions of photon economy and separability. These are described in a comment on PLOS ONE.