It is yellow, the two proteins must interact!

In fluorescence microscopy, colocalization is the spatial correlation between two different fluorescent labels. Often, we tag two proteins in a cell with distinct fluorescent labels and we look at whether and where the two stainings localize together. When there is a “significant overlap” between the two signals we say that the two molecules “colocalize”, and we might use this observation as possible evidence for a “functional association”. We might argue that measuring colocalization is one of the simplest quantitative measurements we can do in microscopy. Yet, many horror stories surround colocalization measurements. This post is not a review of how to do colocalization, but a brief, casual discussion of a few common controversies that is – as I often do – aimed at junior scientists.

[Figure: coloc]
This is a slide I often use in presentations to introduce FRET, but it is also useful for understanding colocalization. You can see the average size of a globular protein fused to a fluorescent protein, compared to the typical resolution of diffraction-limited and super-resolving fluorescence microscopy. Even when the signals from two molecules fall within the same pixel, the two molecules can be really far apart from each other. However, the spatial correlation of distinct labels can inform us about possible functional associations.

***

“I am imaging GFP, but the image is blue, can you help me?”. Well, this is not a question related to colocalization, but it illustrates a fundamental issue. In truth, cell biology is such an inherently multidisciplinary science that, in most cases, a researcher might require the use of tens of different techniques on a weekly basis. It is thus not surprising that many researchers (I dare say most) will be experts in some of the techniques they use, but not in all of them. Microscopy is particularly tricky. To be a true expert, you need to handle a wealth of physical, engineering and mathematical knowledge alongside experimental techniques that might span chemistry, cell culture and genetic engineering. However, the wonderful commercial systems we have available permit us to get a pretty picture of a cell with just the click of a button. Here is the tricky bit: you want to study a cell, you get a picture of a cell. One is led to confuse the quantity one intends to measure with the information one is actually gathering and with its representation. This is true for any analytical technique but, as ‘seeing is believing’, imaging might misrepresent scientific truth in very convincing ways. Hence, although I have no doubt that upon reflection the non-expert user would have understood why the picture on the screen was ‘blue’, the initial temptation was to believe the picture.

Question what you set out to measure, what the assay you have set up is actually measuring and what the representation is showing. Trivial? Not really. It is an exercise we explicitly do in my lab when we have difficulties interpreting data.

***

“It is yellow, they colocalize, right?”. Weeeeeeeeellll… maybe, maybe not. Most of you will be familiar with this case. Often researchers acquire two images of the same sample, the pictures of two fluorescent labels; one is then represented in green and the other in red. In an overlay of the red and green channels, pixels that are bright in both colours will appear yellow. I would not say that this approach is inherently flawed, but we can certainly state that it is misused most of the time and, therefore, I try to discourage its use. One issue is that colour-blindness, not as rare as people think, renders this representation impractical for many colleagues (hence my colour highlights!), but even people with perfect vision will perceive colours at lower contrast than grey-scale representations, and green more than red. Eventually, to ‘see yellow’ it is almost unavoidable to boost the brightness of the two underlying colours to make the colocalization signal visible. This can be done either during the acquisition of the image, often saturating the signal (bad: saturated pixels carry very little and often misleading information), or during post-processing (not necessarily bad, if declared and properly done). Either way, by the point you are doing this, your goal of being quantitative has probably been missed. The truth is that a lot of biological work is non-quantitative, but faux-quantitative representations or statistics are demanded by the broader community even when unnecessary. Let’s consider one example with one of the stains being tubulin and the other a protein of interest (PoI). Let’s assume the PoI localizes at nicely distinguishable microtubules in a few independent experiments. Once the specificity of the stain is confirmed, the PoI can be considered localized at the microtubules (within the limitations of the assay performed) without the need for statistics or overlays. Unfortunately, it is not very rare to see papers, even after peer review, showing a diffuse staining for at least one of the PoIs, perhaps a more localized stain for the second, and a ‘yellow’ signal emerging from an overlay that is considered colocalization instead of what it is: just noise. Another common issue is localization in vesicles. Again, any cytoplasmic PoI would appear to colocalize with most organelles and structures within the cytoplasm with diffraction-limited techniques. Sometimes a punctate staining might partially overlap with known, properly marked vesicles, let’s say lysosomes, but not with all of them. Then the issue is to prove that, at least, the overlap is not random and, therefore, statistics in the form of correlation coefficients are necessary.
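For readers who want to go beyond the overlay, here is a minimal sketch (my own illustration, not part of the original post) of two common correlation-based statistics, assuming two registered, background-subtracted single-channel images stored as NumPy arrays and, for Manders’ coefficients, user-chosen thresholds:

```python
import numpy as np

def pearson_colocalization(ch1, ch2, mask=None):
    """Pearson correlation between two registered channels (1 = perfect correlation, 0 = none)."""
    if mask is None:
        mask = np.ones(ch1.shape, dtype=bool)
    a = ch1[mask].astype(float) - ch1[mask].mean()
    b = ch2[mask].astype(float) - ch2[mask].mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def manders_coefficients(ch1, ch2, thr1, thr2):
    """Manders' M1/M2: fraction of each channel's intensity found where the other channel is above threshold."""
    ch1 = ch1.astype(float)
    ch2 = ch2.astype(float)
    m1 = ch1[ch2 > thr2].sum() / ch1.sum()
    m2 = ch2[ch1 > thr1].sum() / ch2.sum()
    return m1, m2
```

A Costes-style randomization control (recomputing the coefficient after shuffling blocks of one channel) helps to check that the measured correlation is indeed non-trivial rather than a by-product of diffuse signal.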

***

“The two proteins do not colocalise, two molecules cannot occupy the same volume”. Really!? Well, from a quantum mechanics standpoint… no, do not worry, I am not going there. I have received that criticism during peer review in the past and, until recently, I thought it was a one-off case. However, I have recently realised that I was not the only person to receive that statement. I am really uncertain why a colleague would feel the need to make such an obvious remark, except for that condescending one-third of the community. I should clarify that, to my knowledge, no one implies physical impossibilities with the term colocalization. That statement is perfectly fine in a casual discussion or to teach beginners the basics. Some of us also enjoy discussing definitions, philosophical aspects of science and controversial (real or perceived) aspects of techniques, but better at a conference or in front of a beer than during peer review. The issue here is that, while it is reasonable to criticise certain sloppy and not too uncommon colocalization studies, colocalization in general can be informative when properly done.

***

“So, is measuring colocalization useful?” Homework: replace ‘colocalization’ with your preferred technique. Done? Now try to make the same positive effort for colocalization. Every technique is useful when used properly.

You might have noticed I marked some words in my introduction: colocalize, significant overlap and functional association. It is important that we understand what we mean by those words. Colocalization means co-occurrence at the same structure, a non-trivial correlation between the localization of two molecules of interest, within the limits defined by the resolution of the instrumentation. “Significant overlap” should really be replaced by “non-trivial correlation”. Non-trivial, because diffuse staining, unspecific staining or saturated images can very easily result in a meaningless colocalization of the signals but not of the molecules of interest. Correlation, because the concept of overlap might be inappropriate in certain assays, for instance in some studies based on super-resolution microscopy. Even after we have done everything properly, we still cannot say that if protein A and protein B colocalize they interact (see slide). However, we can use colocalization to disprove the direct interaction of two proteins (if they are not in the same place, they do not interact) and we can use high-quality colocalization data to suggest a possible functional association, one that might not be a direct interaction and that should then be proven with additional functional assays.

Then, my friends, do make good use of colocalization as one of the many tools in your laboratory toolbox, but beware: precisely because it is so simple to acquire two colourful, pretty pictures, there are many common errors that people make when acquiring, analysing and interpreting colocalization data.

 

P.S.: if I cited your question or statement, please do not take it personally. As I have written, not everyone can be an expert in everything and the discussion between experts and non-experts is very useful, hence these real-life, anonymised examples.

Is the average between a cat and a dog a real animal?

[Image: dog-2632088_1280.jpg]
Image credit: Pixabay License. Free for commercial use. No attribution required.

Is it a cat? Is it a dog? Is the average between a cat and a dog a real thing, perhaps a caog or a doat?

Not all science should be based on single-cell detection, and there are plenty of cases where single-cell measurements are superfluous. However, too often we fail to appreciate the huge mistakes we can make in biology when we forget the assumptions we make when using population measurements.

But which assumptions do we really make?

Often implicitly, when doing population measurements (e.g., Western blots, sequencing, proteomics, etc.) we assume that the populations of cells we measure are homogeneous and synchronous, or at least that these differences are unimportant and can be averaged out. In the best cases, we try to enforce a degree of synchronicity and homogeneity experimentally. In reality, one of the most important assumptions we implicitly make is that the system we analyse is ergodic. In physics and statistics, an ergodic system is a system that, given a sufficiently long time, explores all its possible states. It is also a system where, if sufficiently sampled, all its states are explored and, consequently, averages over time for a single cell and averages over a population at a given time are the same. However, there are limits to this assumption in biology. The obvious example is the cell cycle. There is a significant literature about ergodicity and the cell cycle [e.g., 1, 2, 3] and how this principle can be exploited, but…
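To make the ergodicity assumption concrete, here is a toy simulation of mine (not taken from the cited literature): cells switch stochastically between an ‘off’ and an ‘on’ state, and for such a well-behaved system the time average of a single cell converges to the ensemble average of the population at a single time point. When the system is not ergodic (the cell-cycle examples above), these two averages diverge.

```python
import numpy as np

rng = np.random.default_rng(0)
p_on, p_off = 0.02, 0.05      # hypothetical per-step switching probabilities
n_cells, n_steps = 1000, 20000

state = rng.random(n_cells) < p_on / (p_on + p_off)   # start the ensemble near steady state
trace_cell0 = np.empty(n_steps, dtype=bool)           # follow one single cell over time

for t in range(n_steps):
    switch_on = (~state) & (rng.random(n_cells) < p_on)
    switch_off = state & (rng.random(n_cells) < p_off)
    state = (state | switch_on) & ~switch_off
    trace_cell0[t] = state[0]

print("time average, one cell:    ", trace_cell0.mean())
print("ensemble average, one time:", state.mean())
print("steady-state expectation:  ", p_on / (p_on + p_off))
```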

The lottery for cell division makes you grow faster.

There is a particular phenomenon that we encountered while working on this project [4] that fascinated me for its simplicity and consequences. How can cells increase their fitness (i.e., their growth rate)? One obvious answer is by dividing faster. Another, at first glance less obvious, answer is by exhibiting a heterogeneous cell cycle length. Let’s consider a population of cells that divide every 24 hours. Over one week, these cells will reach 128 times the original population size. Now, let’s consider cells that divide on average every 24 hours but exhibit random variation in cell cycle length, with a standard deviation of 4 hours and a normal distribution. Cells with a 20-hour or a 28-hour cell cycle are equally probable. However, in one week, cells with a 28-hour cell cycle will grow 64-fold and cells with a 20-hour cell cycle will grow about 340-fold. On average, these cells will grow ~200-fold, much faster than cells dividing precisely every 24 hours (128-fold). This is true for any pair drawn at equal distances from the two sides of the average; such pairs are equiprobable, thus cells dividing with a given average cell cycle length grow faster with increasing heterogeneity. Let’s remember that this can occur not just in the presence of genetic differences, but even for purely stochastic variations, where the progeny of one cell does not keep the same cell cycle length but keeps changing randomly according to an underlying distribution. This phenomenon has been observed experimentally, for instance in yeast [5] with single-cell measurements, but it occurs in any cellular system, as described in [1] and in our own work [4]. Population measurements might conceal these very important phenotypic or mechanistic differences.
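The arithmetic is easy to check; here is a minimal sketch of mine, using the one-week window and the cycle lengths from the example above:

```python
hours_per_week = 7 * 24

def fold_growth(cycle_hours):
    """Fold expansion over one week assuming exponential growth with a fixed cycle length."""
    return 2.0 ** (hours_per_week / cycle_hours)

print(round(fold_growth(24)))                             # 128
print(round(fold_growth(28)), round(fold_growth(20)))     # 64 and ~338
print(round(0.5 * (fold_growth(20) + fold_growth(28))))   # ~201, faster than 128
```

Because fold growth is a convex function of the cycle length, every equiprobable pair averaged this way beats the homogeneous population (Jensen's inequality), which is exactly the point made above.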

The mixture of two normal distributions is not another normal distribution.

The beauty of the normal distribution is that it is such a ‘well behaved’ distribution and, at the same time, it represents many physical and biological phenomena. If the population we are characterizing is a mixture of two normal distributions, its average is the (weighted) average of the two means. If the two components have the same mean, the variance of the mixture is the (weighted) average of the two variances. These basic and useful mathematical relationships can also be rather misleading. In fact, while these statements are mathematically correct, two populations of cells that ‘behave rather differently’, for instance in response to a drug, cannot be meaningfully averaged. For instance, one cell population might be killed by a given concentration of a drug. Another population might be resistant. By detecting 50% cell death, we could assume, incorrectly, that dosing at higher concentrations would kill more cells.

The plot shown below illustrates this basic principle. The blue and red distributions, averaged together, exhibit the same average and variance as the yellow distribution, but they represent very different systems. If the blue distribution represents the sizes of cats and the red distribution the sizes of dogs, the yellow distribution does not represent the size distribution of any real animal. In other words, the average phenotype is not a real phenotype and, in the best-case scenario, when there is a dominant population, it represents the most frequent (modal) phenotype. In all other cases, where the homogeneity of the phenotype is not checked, the average phenotype might simply be wrong.

[Figure: gaussians]

This is a very simple illustration of a problem we frequently encounter in biology: trusting our population measurements (averages and standard deviations over experimental repeats) without being sure of the distributions underlying our measurements. In the figure above, the purple distribution has the correct average of the blue and red distributions, but its spread represents the statistical error of the assay and is unrelated to the scatter of the biological phenomenon we are measuring. Sometimes we cannot do anything to address this problem experimentally because of the limitations of the technologies, but it is very important, at least, to be aware of these issues.

Just for the most curious, I should clarify that for two Gaussian distributions with relative weights A and B, we can define a mixing parameter p = A/(A+B). The average of the mixed population will simply be μP = p*μA + (1-p)*μB, i.e., for p = 0.5 it is the average of the two means. The apparent variance is σP^2 = p*σA^2 + (1-p)*σB^2 + p(1-p)*(μA-μB)^2, i.e., σP^2 is the weighted average of the two variances plus the squared separation of the two means weighted by the product of the mixing parameters of the two populations.
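A quick numerical check of these mixture formulas (a sketch of mine, with made-up ‘cat’ and ‘dog’ sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
p, mu_a, sd_a, mu_b, sd_b = 0.5, 10.0, 1.0, 20.0, 1.5   # hypothetical sizes of cats and dogs

n = 1_000_000
labels = rng.random(n) < p
mixture = np.where(labels, rng.normal(mu_a, sd_a, n), rng.normal(mu_b, sd_b, n))

mu_mix = p * mu_a + (1 - p) * mu_b
var_mix = p * sd_a**2 + (1 - p) * sd_b**2 + p * (1 - p) * (mu_a - mu_b)**2

print(mixture.mean(), mu_mix)   # both ~15.0, a size no 'real animal' in the mixture has
print(mixture.var(), var_mix)   # both ~26.6, much wider than either population alone
```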

Collective behaviour of cells is not an average behaviour, quite the contrary.

When discussing these issues, I am often confronted with the statement that we do not ultimately care about the behaviour of individual cells but about the collective behaviour of groups of cells. There are two important implications to discuss. First of all, when arguing for the importance of single-cell measurements, we do not argue for studying individual cells in isolation. Quite the contrary: we should measure individual cells in model systems as close as possible to the physiological state. However, many assays are incompatible with the study of cell behaviour within humans, and we resort to a number of model systems: individual cells separated from each other, 2D and 3D cultures, ex vivo and in vivo assays. The two arguments (single-cell measurements versus measurements in more physiological models of tissues or organisms) are not the same.

Second, collective behaviours are not ‘average behaviours’. There are great examples in the literature, but I would suggest simply visiting the websites of two laboratories that I personally admire and that nicely and visually illustrate this point: John Albeck’s laboratory at UC Davis and Kazuhiro Aoki’s laboratory at NIBB. Collective behaviours emerge from the interaction of cells in space and time, as illustrated by waves of signalling or metabolic activity caused by cell-to-cell communication in response to stimuli. The complex behaviours that interacting cells exhibit, even just in 2D cultures, can be understood when single cells and their biochemistry are visualized individually. Once again, phenotypes or their mechanisms might be concealed or misinterpreted by population or snapshot measurements.

This is, of course, not always the case. However, my advice is to keep in mind, at least, the assumptions we make when we perform an ensemble or a snapshot measurement and, whenever possible, to check that they are valid.

A ‘hyper-dimensional radio’ to listen to the biochemical communications of the cell

Industry, academia and healthcare often rely on fluorescence microscopy to see the fine architecture of materials, including biological ones. Fluorescence microscopy is particularly suited to biomedical studies because it can be gentle with biological materials, permitting investigators to study biology in a non-destructive manner. Chemistry and genetic engineering then provide useful strategies to make samples fluorescent so that they report on the mechanisms we need to study, aiming to understand how biological systems work in normal conditions, during disease or during therapy.

Thanks to two decades of fast-paced innovation in fluorescence microscopy, we can now see the smallest features of a biological sample, approaching molecular resolution. However, the capability of fluorescence microscopy to observe small changes in the chemical or physical properties of biological samples is not as well optimised as its capability to peek into small structures. In our recent paper entitled “Enhancing biochemical resolution by hyper-dimensional imaging microscopy”, now available in the Biophysical Journal, we demonstrate how to recover information that permits us to make better measurements.

We can think of a fluorescence microscope as a radio broadcaster that transmits useful information through different radio channels. When we listen to one individual radio channel, we lose the information transmitted over the other frequencies. If we attempt to listen to several broadcasts at the same time, the scrambled voices will limit our understanding of the several messages that were originally broadcast. Similarly, the lasers we use to make samples shine, and the fluorescence emitted by the samples, transmit information spread over the different properties of light, for example its colour, the time when light is emitted (the fluorescence lifetime) and the plane in which it vibrates (polarisation).

In our recent work, we describe theoretically and experimentally how all this information can be measured separately but simultaneously, enhancing our capability to observe biological processes. By breaking conceptual barriers and showcasing possible technological implementations with hyper-dimensional imaging microscopy, we aim to catalyse advances in several applications, spanning material sciences, industrial applications, basic and applied biomedical research, and improved sensing capabilities for medical diagnostics.

The New & Notable commentary by Prof. Suhling in the Biophysical Journal

Our open-access work in the Biophysical Journal

At FLIM impact (Episode I)

What has been the impact of fluorescence lifetime imaging microscopy on science and on the biomedical community in particular? Is FLIM a niche technique, one of those techniques that always promise but never deliver?

The top 10 most cited papers

Excluding reviews, the list of the top 10 most cited papers, albeit representing a very narrow window on the impact that FLIM has had on the broader community, is rather instructive. Do consider that we are missing all those papers where FLIM was used but not cited in the title or abstract. Most of the top 10 consists of applications to cell biochemistry, demonstrating the potential and the impact that fluorescence lifetime imaging has. FLIM has helped to understand how signalling works in living cells and animals, helped to identify drugs and helped to study disease. Some of the top-cited papers are more technical, such as Digman’s paper on the phasor transform or Becker’s paper on TCSPC, widely cited because of their influence on contemporary FLIM techniques from the perspective of data analysis and technology. Other papers date back to the initial years of FLIM, with applications to biochemistry. Overall, from this list we understand (if more evidence was needed) that FLIM has had a deep impact on the understanding of cell biochemistry, even though, historically, FLIM has been limited to the specialist laboratory.

I would also like to highlight another small observation, perhaps interesting only to specialists and not visible from other bibliometric analyses. Tom Jovin and a group of scientists trained by him (e.g., Dorus Gadella and Philippe Bastiaens) left a significant footprint in the field, directly driving biomedically relevant applications while pushing, at the same time, technological and methodological developments. Many others are linked to this ‘school’ directly or indirectly: scientists who use or develop a microscope to do biochemistry.

  1. Mapping temperature within cells using fluorescent polymers as temperature sensors, by Okabe and colleagues (2012) from Uchiyama’s laboratory, published in Nature Communications. (442)
  2. Phasor analysis by Michelle Digman and colleagues, from the laboratory of Enrico Gratton (2008) published by Biophysical Journal. The phasor-based analysis, in different flavours, has become quite popular nowadays. (406)
  3. An in vivo FLIM-based analysis of calcium dynamics in astrocytes by Kuchibhotla and colleagues from Bacskai’s laboratory (2009) published in Science. (353)
  4. The study of Calmodulin-dependent kinase II activity in dendritic spines by Lee and colleagues from Yasuda’s laboratory (2009) published in Nature. (351)
  5. One of the first FLIM papers by Lakowicz, published in 1992 in PNAS, where they applied the methodology, yet to be fully established, to the study of free and bound NADH. (339)
  6. One of the first biochemical applications of FLIM, where Gadella and Jovin applied the new tools to the study of EGFR oligomerization (1995), published in the Journal of Cell Biology. (323)
  7. A 2004 paper, where Becker and colleagues present the TCSPC instrumentation that would become a commercial success, published in Microscopy Research and Technique. (321)
  8. The application of FLIM and molecular rotors to study the viscosity of the cellular environment, by Marina Kuimova and colleagues from the laboratory of Klaus Suhling, published in JACS in 2008. (319)
  9. The development of a drug interfering with the interaction between KRAS and PDEdelta, published by Zimmermann and colleagues with the laboratory of Philippe Bastiaens in Nature in 2013. (291)
  10. The interaction between PKC and integrin shown by Ng and colleagues from Parker’s laboratory in 1999 in the EMBO Journal. (277)

Methodology

Tool: Web of Science

Search term: “FLIM” and “fluorescence lifetime imaging microscopy”

Filter: Article

Note: FLiM is also the name of a component of the flagellar motor and it shows up in the searches. I could not eliminate this ‘false positive’, but my assumption is that it does not change the discussion.

Citations (in parentheses) as of April 2019.

Any bibliometric analysis is very limited in scope, certainly one based on such a narrow search. This is just a blog post, a single observation meant to trigger a discussion among those curious about the topic.

 

The backstage story of a paper. Highs, lows, lessons to learn

For a few months now, the manuscript entitled “Multiplexed biochemical imaging reveals caspase activation patterns underlying single cell fate“, authored by Maximilian W Fries, Kalina T Haas, Suzan Ber, John Saganty, Emma K Richardson, Ashok R Venkitaraman and Alessandro Esposito, has been available as a pre-print on the bioRxiv repository. It has started its journey through the peer-review process, but here I wish to explain to students and young scientists what happened behind the scenes as, I believe, it can be instructive.

The inception of the idea | I am unsure whether it will be evident from the manuscript, but this is the culmination of a huge effort that started more than a decade ago. I was about to leave the Cell Biophysics Group led by Prof. Fred Wouters after completing my PhD, on a train from Goettingen to Bonn where my partner used to work, thinking: “What should I do next? … something that, while capitalizing on my training, can make my work distinct from my mentors’ and others’? Where can I have the highest impact?” That moment stuck in my memory.

I believe I read Santos et al. (2007) “Growth factor-induced MAPK network topology shapes Erk response determining PC-12 cell fate” in that period, a paper that influenced me significantly. It made me think of cells as computational machines, interpreting various inputs from the extra- and intra-cellular environment to trigger appropriate outputs, cell states or transitions between cell states, i.e., cellular (fate) decisions. Everyone working with microscopy knows that cells treated equally often behave differently and, therefore, I started to formulate ideas around the following questions: “How does a network of biochemical reactions encode cellular decisions? Why do genetically identical cells take different decisions when faced with a similar stimulus?” Basic principles, the science I love the most, but questions worth answering also to obtain mechanistic insights, and questions quite relevant to disease.

As a matter of fact, it is of fundamental importance to understand how cells trigger pathological states and whether differences in biochemical networks can be used as diagnostic markers for patient stratification or targeted for therapy, concepts that I started to work on only later. Certainly, I thought back then, with my unique blend of physics, engineering, mathematics, molecular and cell biology I could do, in this area, what others might not be able to. Therefore, since 2007, my aim has been to image not just a biochemical reaction, but biochemical networks within intact living cells, while they undertake decisions.

Finding the resources, the initial success | Perhaps other students start less naïvely than I did, but I soon discovered that having a good idea (let’s suppose it is a good idea) and having the right skills is only a tiny part of the job. First, aiming to coordinate my work with that of my partner (now wife), I accepted a job offer at the University of Cambridge to work with Prof. Clemens Kaminski and Dr. Virgilio Lew on an exciting but quite unrelated project. While working on the homeostasis of P. falciparum-infected red blood cells, I set up collaborations and wrote an EPSRC fellowship, which was funded. Therefore, in 2009, two years after the first idea, I got the funding to work on biochemical multiplexing. With this fellowship, I was able to refine my expertise in biochemical multiplexing, permitting me to build advanced technologies for fluorescence sensing such as confocal spectro-polarimetry and fast SPAD-based spectral FLIM. This EPSRC fellowship, together with my expertise and vision, and the benefit of having already established my name in the community thanks to the work I had done with, and the support of, Prof. Fred Wouters and Prof. Hans Gerritsen, was an excellent platform that permitted me to make the next jump and accept a senior position at the MRC Cancer Unit.

Finding the resources, the struggle | Rather than focusing just on technology, I then broadened my research into a programme that would require theoretical developments, the engineering of new pairs of fluorescent proteins to achieve multiplexing, coding and, of course, biological applications. I recognize that expanding my research before securing the appropriate resources was a significant mistake, or at least a huge risk. Working within Prof. Ashok Venkitaraman’s group, I started to write ambitious EU grants. Some of them would receive excellent feedback (14 out of 15 points, ranked first or second among the proposals that were not funded…) but fall short of being funded. Hans once told me that “at this level of competition and quality, often it is just noise that decides the final outcome“. Probably true, and even funny if you know we worked together on photon statistics (‘noise’). But great feedback does not replace funds, and thus I wrote an ERC grant.

I did not get ERC funding but, once again, the ERC is very competitive and I was not sufficiently experienced, thus no drama. However, I started to notice one big issue. Physicists would judge my physics as not great physics; biologists would judge my biology as not great biology. Some colleagues would find my objectives impossible to reach. This is what I later discovered to be the challenge of doing multi-disciplinary research (well, technically it is called trans-disciplinary research, but that is a topic for another post). When your proposal is judged both trivial and impossible, you might have an issue that is not necessarily related only to your science. One referee commented that “A number of groups have being trying to improve the technologies for many years and although some of them have an enormous experience they are not anywhere close to where he intends to be in five years“. Around the same time, a renowned scientist commented on the description of my work “It is impossible”, but then added, in a wonderfully supportive and very appreciated manner, “but if there is someone that could do it, it is Alessandro”. Well, if funding proposals could be judged with the human touch that people have when speaking in person, knowing and respecting each other’s work… I’ll cut an even longer story short, but with significantly fewer resources than I was asking for and struggling to increase my funding, with the financial backing of Prof. Ashok Venkitaraman, we did everything we wanted to do in… five years!

The great technical success (NyxBits and NyxSense) | I wish I could tell you a story of great success in a broader sense, but that has still to be written… if it ever will be. I did waste a significant amount of time looking for resources in what I found to be an amazingly inefficient system. However, from the end of my EPSRC fellowship to this year (~6 years), we have done a huge amount of work to realize what was thought not to be possible:

  • Molecular Biology. I wished to develop two platforms, one based on spectrally multiplexed time-resolved anisotropy (open for collaborations here!) and one based on spectral FLIM, to manage the cross-talk between multiple FRET pairs and make biochemical multiplexing possible. With the limited resources I had, and initial help from Bryn Hardwick, Meredith Roberts-Thomson and David Perera in Ashok’s lab, we kick-started the project. The sheer volume of work started to overwhelm me. Occupied with grant writing, training in a new field, engineering, software development and mathematics, I could not push this forward as fast as I wished. Great help then arrived from Max Fries, who spent six months with me as a master’s student. Once he left, I was short of resources again and, with the FRET pairs misbehaving and exhibiting aggregation or spurious signals, we abandoned one of the two sensing platforms. Emma Richardson then joined me as a Research Assistant dedicated to cloning and testing FRET pairs, and then Max came back to work with me for another four years as a PhD student. Committed and skilled, he tested tens and tens of FRET pairs. The work was a huge task, yet it amounts to just a couple of paragraphs in the manuscript. We even have better pairs than the ones we used in this work, all described in the Supporting Information. Indeed, under the pressure of publishing in high-impact journals, I decided (probably another mistake of mine) to progress to applications, settling for what we recently baptized as NyxBits: mTagBFP, sREACh, mAmetrine, msCP576, mKeima and tdNirFP, so as to focus on biological applications. NyxBits and NyxSense? Well, I have explained the choice of names elsewhere.
  • Mathematics and software. There is something I could not really write so explicitly in the manuscript, and it is appreciated only by experts in the field. There is also something I find impossible to communicate to review panels. As a testimony to this, I report here a comment that was once relayed to me, something like: “Why do we need to offer him a career? Once he has built the instruments, we really need one person just clicking a button, no?” (I am sure I remember it as much worse than it was. Maybe.) The integration of technologies is so new and challenging that we had to formulate new theoretical frameworks and write all new software, including how to acquire data, data formats and analysis. Also, some aspects of our work are difficult to appreciate. Let me tell you about another small event that pushed me in a particular direction. I really enjoy the conference Focus on Microscopy, even when criticized. Presenting new ideas, a colleague – respectfully – questioned the possibility for multiplexed imaging to measure several FRET pairs at the same time. This stimulated me to resume studying the Fisher information content of biochemical imaging. What is the biochemical resolution in microscopy? Can we enhance it? After years of thinking about this topic, in 2013 I cracked the problem and published the mathematics in PLOS ONE, where I formulated what I defined as ‘the photon-partitioning theorem’. Then, with the increasing financial backing of my Director, Kalina Haas joined my growing team. Kalina implemented unmixing algorithms and complex data analysis pipelines. Max and Kalina then became my dream team, progressing the project to the shape you can read today.
  • Technology. I mentioned earlier technology platforms that were designed for biochemical multiplexing. In my recent (and first) release of manuscripts on bioRxiv, we also published a full implementation of Hyper-Dimensional Imaging Microscopy (HDIM), with which we backed the photon-partitioning theorem with experimental evidence. We have done much more in that direction but, when we started biological applications, we realized the need for faster FLIM systems. Unable to wait for commercial solutions or to gain the benefits of other prototypes we had developed, I decided to build my own fast multiplexed electronics. This development was fostered by the negative criticism of a referee. During a PNAS submission of our spectral FLIM system, a referee mentioned that we could do the same utilizing hybrid PMTs. I disagreed, as achieving 64-channel spectral FLIM with the capability to run at hundreds of millions of photon counts per second is an altogether different application; however, there is merit in most referees’ criticisms, even the most negative ones. Only then did I realize that PMTs are now very fast and that the bottleneck was just the electronics. Therefore, I got in touch with Surface Concept, who supported me wonderfully and sold me one of their multi-hit TDC platforms. After several months of software development, we were then capable of running FLIM measurements with the quality of TCSPC and the speed of FD-FLIM. As usual, I presented this work at FoM, where it was very well received by colleagues and companies, but we did not publish the imaging platform as we were fully committed to pursuing biological applications.
  • The biology. The bottleneck of our experiments was, and still is, data analysis and, with tens of experiments and thousands of biochemical traces to be painfully and manually curated, we moved ahead very slowly, but working hard. Mostly Max, Kalina and myself suffered years of hard work, the occasional worry when something stopped working, and the excitement of seeing, for the first time, things that others could not see. In this manuscript, we reveal the extent of non-genetic heterogeneity that biochemical networks can exhibit and that eventually results in different cellular decisions. Here, we focused on multiplexing simple biosensors for caspases, as we aimed to de-risk a very ambitious project. We also decided to work with HeLa cells, again for the same reason. Despite the simplicity of the model system under study, we realized how complex and heterogeneous the response of biochemical pathways is, with cross-talk between enzymes, signalling pathways and cellular metabolism. All of this is, for me, fascinating, and it shows that whenever we do ensemble measurements, we really see only the behaviour of the average cell. It is then important to understand that the ‘average cell’, most of the time, does not really exist. If we are lucky, the bulk of the population responds with one phenotype and the measured ‘average cell’ will indeed represent the ‘most frequent cell’. However, in other instances, when there are significant populations behaving in distinct ways, we would not just miss important information: the model inferred from the ‘average cell’ would simply be the wrong model of a non-existing cell. This is why it would be important to know, for any assay, whether the sample behaves synchronously with a stimulus and homogeneously. In this sense, single-cell biochemistry can bring not just an additional layer of information, but also tell us whether the observations we obtain on a given model system with ensemble measurements are reliable.

Enduring the struggle | I hope you do not mind that I spoke so positively about my own work. If you know me, you also know I am not so self-centred. However, I wished to let younger scientists know what might lie between a ‘good idea’ and its realization, passing through frequent failures and some successes. Probably one of the most precious qualities of a scientist is resilience. We need thick skin to confront the constant failures that lead us to discoveries, the constant struggles in getting resources and, eventually, in publishing good work in a highly competitive environment. Turning a negative event into something positive is part of this process. Understanding why an experiment did not work enables us to troubleshoot; understanding why an experiment falsified our hypothesis enables us to build new and better models; understanding why funding was not awarded or a manuscript was not published shows us how we can improve our scientific proposals and reporting. Of course, this is easier said than done.

The work we presented on bioRxiv is not the end of the story. The work, wonderfully received at conferences, is still not peer-reviewed. Will colleagues appreciate and understand the vision of our work, its possible impact and the sheer amount of work we had to do? Were we able to communicate it properly? And even if we were, we still have a long way in front of us. My dream is to establish a single-cell systems biology of cell fate. A huge amount of work, from maths to biology, from biotechnology to physics, is still needed to be able to understand why cells do what they do, how physiological states are maintained and how pathological states emerge.

[Open hardware] A safe laser by-pass

Well, I remember when I started in this business: a beam stop was made with a recycled block of lead and reflections were stopped with cardboard boxes 😉 Brown boxes; black cardboard catches fire, of course (tell that to my undergrad self). Not any longer, of course!

About ten years ago, I started the procurement and development of my first two-photon microscope. For the first time, I was directly responsible for laser safety and I had to take decisions about how to build a system that was safe for a user facility in a biomedical research institute. As I was coupling commercially sourced systems (Leica SP5, Chameleon Vision 2 and Pulse Select) and I was not planning much customization of the excitation path of this instrument (I heavily develop assays and detection), I opted to fully enclose the laser in lens tubes. The resulting system is safe, stable, and no more difficult to align than other enclosures.

I think that enclosures around the complete table might make sense in many instances, particularly when compartmentalized into sub-sections, but this is the system that worked best for me at the time. One solution I wish to share is a bypass for the pulse picker we used to develop spectrally resolved FLIM utilizing smart SPAD arrays (detectors that integrate photon-counting electronics).

[Image: 20181112_184730] As I start planning the replacement of this system, I wished to share this design, in case some of you might find it useful. In the image on the left, you can see the Ti:Sapphire at the top, the pulse picker on the right and the first enclosure by Leica used to steer the beam to their in-coupling optics (bottom right).

In the middle, the laser bypass we utilize to direct the laser through the pulse-picker or around it.

In the image below, you see a close-up photo of the bypass. The black box with the rectangular aluminum cover is the Leica spectral flattener used to reduce the power of the Chameleon Vision at its peak wavelength. One of the few customizations I needed here was simply a hole in a Thorlabs SM2 lens tube to accommodate this filter. This is screwed into a C4W-CC cube that can host a movable turning mirror with high reproducibility. The alignment of the microscope without the pulse picker is done with the pair of mirrors provided by Leica. The alignment of the pulse picker is done with the kinematic mirrors visible on the left (M1 and M2). I placed a light block behind them, just in case one should come loose, and to block the small amount of light transmitted through them. A kinematic cube is used to host an ultrafast beam sampler by Newport that directs a small fraction of the light to the Thorlabs PIN diode I use to feed the electronics of the pulse picker. In front of the PIN diode I have an xy-translating cage element. An empty four-way cube is used to allow the laser beam to pass from top to bottom (bypassed) or from left to right (pulse picker coupled). The aluminum block tagged as L1 is just a cover for the C4W-CC when empty.

[Image: 20181112_184735]

At the output of the pulse picker, you see the mirror image of this bypass (on the right) and the two steering mirrors by Leica (the cylindrical towers). On the far right of the picture are the in-coupling optics by Leica, preceded by two diagnostic ports.

[Image: 20181112_184720]

Below, you can see a close-up picture of this part of the coupling. Because of the layout, I needed to add one extra mirror (top left) and, aiming to isolate users (located at the top of the image) from accidental damage to the in-coupling optics, I added a light barrier.

Both diagnostic ports are based on a 4-way kinematic cube from Thorlabs hosting Newport beam samplers. The first port is used to sample the pulses after the pulse picker and to feed our FLIM electronics. The second has two purposes. The first is the coarse alignment of the system: I have two irises in the system that are aligned when the laser is (roughly) aligned to the Leica in-coupling optics.

I usually remove a cover at the exit of this diagnostic port and use a fluorescent card to verify alignment, but in this picture you see the fibre that couples to a spectrograph we occasionally use to diagnose faults of the laser.

 

[Image: 20181112_184714]

The alignment is simpler than it seems. We start with a microscope that is fully aligned without the pulse picker, as per normal operations. Then, when we need the pulse picker, we insert the two turning mirrors (L1 and R1). We do this with the laser off and with the pulse-picker crystal retracted (coarse alignment) or protected by an alignment card (fine alignment). M1 and M2 are then used to align the beam with the crystal. Then we align the PIN diode and proceed with the fine alignment of the pulse-picker cavity. Once this is done, we align the cavity with the microscope utilizing M4 and M5. For coarse alignment, the signals from the two diagnostic ports are very useful until some signal is picked up on the microscope monitor, after which the final fine-tuning of all the optics can proceed.

Be aware: alignment of Class 4 lasers can be dangerous. Therefore, do your own risk assessments and think carefully about the logistics of your system. Now that I am starting to consider the redevelopment of the system, I thought to share these notes with you, hoping that they could be of some use.

Sharing is caring: an open access FLIM trial

Are you interested in cell biochemistry, but in single living cells, organoids or tissues? Is there a Western blot or IP you wish you could do on a living sample? Or do you wish to see where in a cell a protein-protein interaction occurs?

Well, if you are interested in quantifying a ligand concentration, a post-translational modification, a protein-protein interaction, chromatin states or the oligomerization of proteins, you might be interested in FLIM or FRET, but you might not be in your comfort zone setting up or executing such assays.

The specialist expertise and instrumentation required to perform fluorescence lifetime imaging microscopy (FLIM) is often a barrier to the adoption of quantitative biochemical imaging techniques. The same can be true, although to a lesser extent, for intensity-based measurements of FRET.

Well, we have the expertise and we have the instrumentation. Not only that: today, instrumentation and data analysis are becoming simpler and simpler. During 2019, we are going to trial a system by which we can support you in setting up and testing FLIM/FRET experiments. We have limited resources and, therefore, we will open only a few experimental sessions to start with, but there will be no strings attached. No fees, no authorship to include in that paper you really care about.

Although we still have to set up the “Sharing is caring” trial, feel free to let us know about your interest. Initially, projects will be selected at our discretion, with priority given (but not confined) to cancer-related work and work with the potential to impact public health in the short or long term.

NyxBits and NyxSense? What?!

NyxSense&NyxBits paper here.

[Image: Roman statuette of Nyx or Selene, 1st century BC] I am not fond of new acronyms or ‘cool’ names, but then… guilty! You got me, I am contributing to the proliferation of four-letter acronyms and fancy names like everyone else! Lately, I have introduced a new one, HDIM, as in Hyper-Dimensional Imaging Microscopy. But that is another story, and in a Supporting Note of that pre-print we explain our choice.

Earlier, we created the pHlameleons with my friend and group leader back then, Fred Wouters. Well, first it was the Cameleon, the famous calcium reporter by the great Miyawaki and Tsien, brilliantly named Cameleon because it is a protein that ‘changes colour’ upon binding calcium (Ca). Then it was the Clomeleon by Kuner and Augustine, as it senses chloride ions (Cl) rather than calcium. With all due respect to the authors, I must admit I did not love that name at first. Indeed, as we were deriving a family of pH sensors from yet another creation of Miyawaki (the CY11.5), we started to joke that we should call this family of sensors the pHlameleons. Month after month, a joke ended up in the title of a paper and was adopted as the name of these pH-sensitive proteins. So, let’s not take ourselves too seriously too often. Sometimes we pick names for a bit of branding, other times to make our assays less heavy with too many technical terms, and other times let’s just have fun with words (Clomeleon is now, for me, a great name, but I routinely joke about the pHlameleons!).

Now that you know the funny little story about the pHlameleons, it is the turn of NyxSense and NyxBits. NyxSense is software dedicated to the multiplexing of FRET sensors. NyxBits are the components used to create a multiplexing platform: a number of fluorescent proteins with distinct Stokes shifts that can report, through their fluorescence lifetime, biochemical reactions probed via FRET with the use of dark/darker acceptor chromoproteins. A huge effort for us that took several years to bear fruit. Why Nyx?

During the revision of the drafts, colleagues found the manuscript a bit too technical and difficult to read. Thus I went back to pen and paper, Google and Wikipedia, to find a name that could help us refer to this sensing platform with a single word rather than a sentence. Greek mythology always provides great inspiration and, eventually, I discovered Nyx, the primordial goddess of the night (Nox in Roman mythology). With Erebus (the personification of darkness), Nyx gives birth to Aether (the personification of the upper air and brightness), Moros (deadly fate), the Moirai (destiny) and Thanatos (death). I felt that this short name, Nyx, is intimately connected with our work for three reasons.

First, Nyx seems to link darkness and light, the day and the night, a nice analogy with our bright donor fluorophores and dark acceptors. Second, Nyx is related to death and fate. We created NyxBits and NyxSense to study cell fate, and our first application is the cell death response to an anti-cancer drug. Third, Nyx is a goddess and, as I am really committed to gender equality at work (not just when picking names for fluorophores), it felt a little bit in tune with what I do to honour a female deity.

But do not take these reflections too seriously (I do not); after all, I just needed a simple name for a very complex sensing platform. As there is no way for me to explain the reasoning behind the names in the manuscripts, I thought to share with you, light-heartedly, why we picked NyxSense and NyxBits.

 Now starting project Atlas… we’ll speak about this another time! 🙂

Volume rendering: is this localization-based super-resolution?

Project outcome published in Biophysical Journal in 2010.

  • Esposito A*, Choimet JB, Skepper JN, Mauritz JMA, Lew VL, Kaminski CF, Tiffert T, “Quantitative imaging of human red blood cells infected with Plasmodium falciparum“, Biophys. J., 99(3):953-960

Most papers have an untold backstory that we cannot reveal in the paper itself, so as to focus on the main message and the most relevant discoveries. This one has a little backstory I wish to share. Volumetric imaging of red blood cells is not the most difficult thing I have ever done. However, accurate morphological and volumetric imaging of red blood cells infected by Plasmodium falciparum, the causative pathogen of malaria, caused me a few headaches. Let’s forget the time spent waiting for the cultures to grow at the right speed to deliver bugs at the right stage of development, undecided whether to sleep before or after the experiment, and always getting the decision wrong. Let’s also not speak about the optimization of the sample preparation that, by trial and error, led to other interesting observations. Here, we focus on the very simple concept of accurate volume rendering.

In one way or another, volume rendering and estimation require some sort of thresholding of the data so as to discriminate the object from the background. As imaging conditions change, even slightly, from experiment to experiment, setting this threshold might confound the final outcomes. When you also deal with a sample that undergoes major morphological transitions, a simple problem soon becomes one for which I spent a lot of time identifying a solution. As it happens, one perhaps does not find the best, the most elegant or even the simplest solution, but the solution that one can find with one’s own skills and tools. Mine was a brute-force solution: an isosurface volume rendering, iteratively deformed by locally refitting a random sample of vertices in order to respect a specific model set for the transition from object to background. This method permitted us to preserve high-resolution morphological descriptions, with high accuracy and reproducibility for volume rendering.
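To see why a naive threshold is fragile, here is a toy sketch (my own illustration, not the algorithm used in the paper): a blurred spherical object on a voxel grid, whose voxel-counting volume estimate swings considerably with the chosen threshold.

```python
import numpy as np

# Synthetic 'cell': a sphere of radius 20 voxels with a smooth (blurred) edge.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
r = np.sqrt(x**2 + y**2 + z**2)
radius, edge_width = 20.0, 3.0                              # hypothetical size and blur
image = 1.0 / (1.0 + np.exp((r - radius) / edge_width))     # object-to-background transition

true_volume = 4.0 / 3.0 * np.pi * radius**3
for threshold in (0.3, 0.5, 0.7):
    voxel_count = np.count_nonzero(image > threshold)       # naive volume by voxel counting
    print(f"threshold {threshold:.1f}: {voxel_count / true_volume:.2f} x true volume")
```

Fitting a surface, or any explicit model of the object-to-background transition, to the data makes the estimate far less dependent on an arbitrary threshold, which is the spirit of the approach described above.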

This work was carried out while many of my colleagues were focusing on super-resolution, i.e., maximizing the spatial resolution in optical microscopy. It was then simple to notice that fitting a surface onto volumetric data delivers volume estimates with a precision higher than what the optical resolution of a microscope should permit. Indeed, whenever you have a model for an object (in my case the boundary of a red blood cell, in single-molecule super-resolution methods the point spread function of an emitter), it is possible to fit this model with a precision that is not (fully) constrained by diffraction but, in the right conditions, only by the signal-to-noise ratio, the analytical tools and the adequacy of the model for the object.

In this Biophysical Journal paper, we focused on the biological application and, together with other published work, on the modelling of the homeostasis of infected red blood cells. Also to avoid criticism from referees, probably legitimate criticism, I decided not to mention the concept of super-resolution. As my research focus is on biochemical resolution and its utilization to understand cellular decisions in cancer, I will not pursue this work any further, but I thought I would write down this little story.

While writing this brief story, I recalled my friend Alberto Diaspro often citing Toraldo di Francia on resolving power and information. I believe that my work was far from being a breakthrough from an optical standpoint, but I wished to use it as a reminder of a fundamental issue that, often in biomedical applications, gets forgotten. The resolution at which we can observe a phenomenon, irrespective of the tools used, depends both on the qualities of the instrument and on the quality of the prior information we can utilize to interpret the data. Once technology permitted the imaging of single emitters in fluorescence microscopy, the prior of point-like sources could be used to analyse images so as to reveal the full information content of an image that is carried by photons.

In an experiment, information content is the most precious thing. Irrespective of the methodologies used, our protocols are designed to maximize signal-to-noise ratios and, thus, to maximize information content, precision and resolution. However, as trivial as these statements are, in the biomedical sciences we often do not follow through with the process of maximizing information content. Significant information can be provided by our a priori constraints and models. Moreover, a thorough understanding of the information theory related to a specific assay can provide levels of precision and resolution beyond what we assume, at first, to be possible. However, priors and information theory are far too often neglected. This happens out of necessity, as most people do not have training in and understanding of both biological and physical processes, and even those who might have to invest their limited resources carefully. I wish that in the future there will be more collaborative work between the life sciences, physics and mathematics, aimed at better understanding how to extract the maximum information from experiments in the biomedical areas.

So… was our volumetric imaging super-resolution? I am not sure I care to really answer, but I wished to provoke some thoughts and make you think a little bit about the relevance of information theory in biomedical research.

Photon partitioning theorem and biochemical resolving power

Project outcome published in PLoS ONE in 2013.

  • Esposito A*, Popleteeva M, Venkitaraman AR, “Maximizing the biochemical resolving power in fluorescence microscopy”, PLOS ONE, 8(10):e77392

After my 2007 theoretical work on photon-economy and acquisition throughput, I occasionally worked on a more general framework attempting to falsify my hypothesis that multi-channel or multi-parametric imaging techniques can deliver better results than other simpler techniques.

My proposal to develop instrumentation to achieve spectrally and polarization resolved lifetime imaging (later defined as HDIM) was met with scepticism by many. The recurrent question was: if you struggle to do a double exponential fit with the small photon budget we have available in biological applications, how could you possibly dilute these photons over several channels and analyse them with more complex algorithms?

Here there are a few fundamental misunderstandings. First, the analysis should not be carried out on each “detection channel” independently; rather, the entire dataset should be used to exploit all the information at once. Second, the use of dispersive optics rather than filters permits the acquisition of a higher number of useful photons. Third, limitations in current technologies (e.g., speed or photon-collection efficiency) should not be an obstacle to the development of these techniques, because these are not conceptual flaws but simply technological obstacles that can be removed.

Although I have a lot of (unpublished) work describing the performance of multi-channel systems, I achieved a breakthrough only when I understood that I had to focus my efforts on describing the general properties of the Fisher information content in fluorescence detection, rather than the Fisher information in a specific experiment. Fisher information is the information content that an experiment provides about an unknown we wish to estimate. Its inverse is the smallest variance ever attainable within an experiment, what is called the Rao-Cramér limit. In other words, by maximizing Fisher information, we maximize the precision of our experiments.
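For reference, these are the standard textbook forms (my summary, not the paper’s exact notation) for an estimator of a lifetime τ from data X with likelihood L, together with a commonly used figure of merit for photon economy in FLIM:

```latex
% Fisher information and the Rao-Cramer bound
I(\tau) = \mathbb{E}\!\left[\left(\frac{\partial \ln L(X;\tau)}{\partial \tau}\right)^{\!2}\right],
\qquad
\mathrm{Var}(\hat{\tau}) \;\ge\; \frac{1}{I(\tau)}

% Photon economy (F-value): excess noise relative to an ideal, shot-noise-limited
% measurement of N detected photons; F = 1 is the ideal case.
F = \sqrt{N}\,\frac{\sigma_{\tau}}{\tau}
```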

Photon-partitioning theorem

The second breakthrough was the understanding that the best description of precision in biophysical imaging techniques was possible only by defining the concept of biochemical resolving power, a generalization of the resolving power of a spectrograph to any measured photophysical parameter and then to its application to biochemistry. The biochemical resolving power is proportional to the square root of the photon efficiency of a microscopy technique and of the number of detected photons. Maximization of Fisher information leads to the maximization of photon efficiency and, therefore, to net improvements in biochemical resolving power. This definition complements the definition of spatial resolution in microscopy and allows us to define when two objects are spatially and/or biochemically distinct. It is worth mentioning that this is equivalent to stating that two objects are spatially and photophysically distinct, but we use the photophysics of fluorophores to do biochemistry, hence my nomenclature. I see possible implications for other techniques, including super-resolution, and perhaps this will be the subject of future work.

The third breakthrough was the use of numerical computation of the Fisher information rather than analytical solutions of equations, which are not always available. This approach is very common in engineering but not in our field. Thanks to it, we can now optimize the properties of any detection scheme in order to attain the highest performance.
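As a flavour of what ‘numerical computation of Fisher information’ means in practice, here is a toy sketch of mine (not the software published with the paper), assuming a single-exponential decay detected into time gates with multinomial statistics; it evaluates the per-photon Fisher information numerically and converts it into the photon economy F defined above:

```python
import numpy as np

def gate_probabilities(tau, edges):
    """Probability of a detected photon falling in each time gate, for a single-exponential
    decay truncated to the measurement window (multinomial detection model)."""
    cdf = 1.0 - np.exp(-edges / tau)
    return np.diff(cdf) / cdf[-1]

def fisher_information(tau, edges, dtau=1e-6):
    """Per-photon Fisher information about tau, computed numerically: I = sum (dp/dtau)^2 / p."""
    p = gate_probabilities(tau, edges)
    dp = (gate_probabilities(tau + dtau, edges) - gate_probabilities(tau - dtau, edges)) / (2 * dtau)
    return np.sum(dp**2 / p)

tau, window = 2.5, 12.5            # ns; hypothetical lifetime and measurement window
for n_gates in (2, 4, 8, 64):
    edges = np.linspace(0.0, window, n_gates + 1)
    info = fisher_information(tau, edges)
    f_value = 1.0 / (tau * np.sqrt(info))   # photon economy: F = sqrt(N) * sigma_tau / tau
    print(f"{n_gates:3d} gates: F ~ {f_value:.2f}")
```

With these assumptions, F improves (decreases towards 1) as the photons are partitioned over more gates, which is the kind of trade-off that a numerical optimization of Fisher information can explore for arbitrary detection schemes.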

This is very specialist work and I assume there will not be many people interested in it, although the implications of this piece of theory for everyone’s experiments are significant. I believe this is my most elegant theoretical work, but I guess that is a matter of opinion. The paper itself had to be expanded well beyond what I wished to publish during the refereeing process and it now includes examples, software, etc. I think the theoretical introduction and the mathematical demonstrations are the best part, and the description of the numerical optimization of Fisher information the most useful.

NOTE: there are two typographical errors in the published manuscript within the definitions of photon economy and separability. These are described in a comment on PLOS ONE.