All photos are 3D pictures; it was fun to try out the technology.
Regrettably, the MRC defunded the MRC Cancer Unit and the School of Clinical Medicine could no longer support our Department. In the current academic job market, I am facing some uncertainty about where I will relocate. This is thus one of my last opportunities to show where the discoveries we have made over the last few years happened (hopefully, in 2022 you will see much more of them in print).
Biolabs and 3D printing. Most people know me for my work in microscopy. However, more than half of my group is dedicated to cancer cell biology. This is the 3D picture of one of my wet lab bays. Opposite Suzan’s and Anna’s workplaces, we have our rapid prototyping workshop. Some of you might notice an old MakerBot Replicator 2, my first introduction to 3D printing. But the most notable printer is the Form3B by Formlabs in our custom (orange) enclosure, with which we print both biocompatible scaffolds and chambers for our light-sheet microscope. Then the Ultimaker, a workhorse for everyday needs. The last entry is a CellInk BioX for 3D bioprinting, which we are still integrating into our 3D culture workflow.
Laser lab. Ready for more tech? This is the room that bridges past and future developments. This laser lab was the first lab I could call ‘my lab’, where I started to work on my own in 2010 as an EPSRC LSI fellow. All the new tech I have produced over the last 12 years came from this room, at least initially. One side of the room serves as our electronics and optical workshop, and then we have two optical tables dedicated to prototyping. Certainly a crowded room. The development of our SIM/FLIM system (ATLAS.ONE) was delayed by the pandemic, and we have now disassembled it in preparation for the relocation. Before ATLAS.ONE, this room hosted the various iterations of confocal spectropolarimetry I developed over the years. The centrepiece of this room is now an open-top light-sheet microscope (ATLAS.TWO – CRUK funded). This is just the first preview of a system that will hopefully be the protagonist of many future papers.
Optogenetics lab. After a major infrastructural refurbishment, I was able to get a second laboratory, which I dedicated to optogenetics. Here, we prep samples in a (blue-light or red-light) darkroom. We also have small incubators to keep cells in light-controlled areas. Up on the shelves is the skeleton of our first OptoFarm, a system to culture cells under tightly controlled light (biochemical) protocols. It is now discontinued, replaced by a much simpler and more flexible system that we’ll publish soon. And yes, of course, you can also see our workhorse for single-cell fluorescence dynamics, integrated with multiple cameras, photoactivation capabilities, multiple light sources, and microfluidics. This commercial system was bought and then modified with an MRF grant and, despite being very temperamental, it has given us a lot of good data!
Biophotonics lab. I hear you asking… what about FLIM? Of course, we are almost there. Here, we dive into one of the rooms of the imaging facility, where I customized a multi-photon/confocal microscope with time-resolved technologies. In this front view, you see the Leica SP5 and the Chameleon Vision 2 (to the right), two instruments that gave me a lot of satisfaction. The blue boxes are a custom-built FLIM system (ELIS) that I built when FLIM was still relatively slow. But commercial systems are now also super-fast, and I have packed ELIS away for good. Waiting to be unpacked, once I have new laboratories, is a latest-generation PicoQuant rapidFLIM system.
Let’s go around the table, because this room is full of tech. In this room, I have hosted several generations of HDIM systems, some published, others not. The black box on the table is a streamlined and efficient version of HDIM, a time-resolved spectropolarimeter. Coupled with the multi-photon microscope, we get high efficiency in detecting fluorescence with 16 concurrent spectral channels, 2 polarizations and 64 time-bins. Under the table is the amazing SPC152, the heart of the system, by Becker&Hickl.
Yes, there are a lot of boxes around, because we need to pack up! The back of the table hosts a pulse-picker we used with SPAD arrays, beam conditioning optics and HDIM Gen 4 (I think!) 🙂 I stopped its development because of COVID but, sooner or later, I will resume it. Hopefully, it will eventually be fully automated, including its alignment, and will integrate fast FLIM electronics.
This first virtual tour ends here. The next tour – with a bit of luck – will be from my new labs.
In early 2016, I was asked if I wished to speak at the discussion meeting “Conflict and Competition in Cellular Populations” in Bangalore, India, organized by Dr Sandeep Krishna and Dr Sunil Laxman (NCBS). The title sounded so intriguing that I accepted without even checking the actual topic of the meeting. Then an adventure began, one that has now concluded (did it?) in 2021 with a small paper entitled “Cooperation of partially transformed clones: an invisible force behind the early stages of carcinogenesis”, published in the Royal Society journal Open Science (10.1098/rsos.201532). Let me tell you the story of this journey that, perhaps, might inspire you to venture outside your field.
For brevity, I’ll skip the details of the actual trip. It was of course exciting to experience a culture I am often exposed to but had never lived: the food, the people, the contrasts of India, a small glimpse into a complex galaxy of humanity. My short trip to India started with a sleep-deprived me trying to explain to the border police that the conference Conflict and Competition in Cellular Populations, nicknamed CCCP, whose poster was written in pseudo-Cyrillic, was not a political conference (I would have needed a different visa in that case!). It concluded back in Cambridge a week later with a slightly embellished bedtime story for my 3-year-old daughter about the animals I saw in the park that hosts NCBS, a story that I still tell her now and then.
But of course, here I focus on the science. The conference hosted a good number of great speakers (the others, I mean) on the topic of ecology (er, yes, the title made sense). Suddenly it dawned on me that I was ‘a bit’ off-topic. However, talk after talk, I loved learning a bit about ecology, including its mathematical foundations. I really enjoyed the meeting, so much so that I could not stop thinking about its relevance for my work, which back then was focused on non-genetic heterogeneity in cell decisions, carcinogenesis and the DNA damage response.
The study of cancer as an ecological problem is not new, of course. Something very specific started to bug me, though, something I could not find literature about. We know that different clones of cancer cells cooperate and compete in tumours, but what happens during the very early steps of carcinogenesis? I was queuing to board the airplane when I managed to download the paper “Evolution of cooperation among tumor cells”, published ten years earlier by Axelrod and colleagues in PNAS. It was a nice in-flight read, but the flight from Bangalore to London is long and I started to obsess about a very simple mathematical fact.
For a moment, let’s imagine you dream of establishing a business, but you need £1M to start it. However, you are a bit of an odd person and decide to do it only if you win the lottery, whose jackpot is £500k. You clearly make strange decisions, but I am not here to judge… the oddest thing is, however, that you bet on winning the lottery not just once but twice. Then you have an idea. You agree with your village of similarly odd-minded people that if anyone wins the lottery, you will pool the money together to invest in this start-up. This is still an unlikely strategy, and certainly one that has a tiny probability of succeeding, but it is definitely more likely to work out than waiting to win the jackpot twice alone.
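If you like, the village strategy can be checked with a toy Monte Carlo. The win probability, village size and number of draws below are made-up numbers, chosen only to make the point visible:

```python
import random

random.seed(42)

P_WIN = 1e-3        # chance one player wins the jackpot in a given draw (toy number)
VILLAGERS = 100     # size of the pooling village (toy number)
DRAWS = 10_000      # number of simulated lottery draws

# Strategy 1: a single odd player needs to win TWICE to reach £1M.
solo_success = sum(
    1 for _ in range(DRAWS)
    if random.random() < P_WIN and random.random() < P_WIN
)

# Strategy 2: the village pools winnings; two wins by *anyone* suffice.
pool_success = sum(
    1 for _ in range(DRAWS)
    if sum(random.random() < P_WIN for _ in range(VILLAGERS)) >= 2
)

print(f"solo double-win frequency:   {solo_success / DRAWS:.6f}")
print(f"pooled double-win frequency: {pool_success / DRAWS:.6f}")
```

The pooled strategy still fails most of the time, but succeeds orders of magnitude more often than waiting for a double jackpot alone.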
Back to carcinogenesis. Every day, each cell has a certain probability of mutating because of exposure to radiation, chemicals or simply the chance of errors in its biochemical machinery. Mutation after mutation in the right genes, a cell might grow into cancer. A very unlikely series of events that, however, with trillions of cells in our bodies, is likely to happen over one’s lifespan. We know that certain mutations occur in cells that eventually lead to cancer. We know that one cell wins the macabre lottery of disease multiple times before leading to cancer. We also know, then, that many cells will acquire mutations within an otherwise healthy tissue.
We usually consider that all these other mutant cells will either accrue neutral mutations (i.e., mutations that will not change the fitness of the cell, nor confer a cancer phenotype), or deleterious mutations that will be purged by tumour-suppressive mechanisms. However, cells within a tissue communicate, and mutations also occur in genes responsible for cell-to-cell communication. In my recent work, I propose a ‘toy model’ with which I explore the possibility that the gene- and cell-centric view of the mutational process should be reconsidered in the context of the overall tissue, where cell-to-cell communication might reshape the early steps of carcinogenesis. I am not the first to do so, but I try to emphasize with simple modelling how the mutational process should be seen in the context of a collective of cells rather than in a gene- or cell-centric fashion.
What did I learn beyond what I have written in the paper (i.e., beyond the science itself)?
First, I had real fun, something that over time does not happen with every paper, even the more important ones in which we invest major resources. I even had fun during the revision process. As many of us experience, I often get half of the referees very supportive of my work and half rather dismissive. But the supportive ones have often been extraordinarily kind and helpful, describing the manuscript as anything from ‘a refreshing read different from what I usually read in this field‘ (earlier submission to a different journal) to ‘the models presented here make the point in a clear and dramatic manner‘. The last referee of the last, now published, submission was particularly helpful. Not only did they critically review the manuscript, but they also invested time in describing a discrete-time Markov chain model that I could integrate into the manuscript. This suggestion permitted me to learn a bit of maths I had not practised before, and to improve the work… this is what refereeing should be.
Second, alongside the enthusiasm of venturing into a field rather different from my already eclectic research interests, I also felt the pain of being an outsider; a pain I feel often, but one that was made sharper by the fact that I was a single author. This was really a ‘pet project’. I became convinced to shape my notes into a manuscript only after I attended a seminar by Prof. Allan Balmain in 2018 related to the Nat Cell Biol article “Multicolour lineage tracing reveals clonal dynamics of squamous carcinoma evolution from initiation to metastasis“. It was a great talk and somehow relevant to the notes I had been writing since my trip to India. I decided to try to publish my ideas after reading the commentary by Prof. Kornelia Polyak and Prof. Michalina Janiszewska, where they state: “One possible explanation is that there is a cooperative interaction between the streak and bulk tumour cell populations; an intriguing hypothesis that warrants further investigation but was not tested by Reeves et al.5. The streak pattern observed by Reeves et al. is reminiscent of the streaks generated by non-mutualistic budding yeast analysed by Muller et al.13.” In the end, I am not sure the work I put into this manuscript was worth the pain.
Then, do I advise others to venture so wildly into other territories? As I have written before, it is rarely rewarding career-wise and never easy. But, once in a while, let’s just follow the passion and enthusiasm for something new, with no regrets. Any adventure comes with some pain, but the fun of exploring, eventually, makes the experience worth living overall.
I hope that this small new paper can provoke some thoughts, or inspire some young scientist to venture out… perhaps not too much and not alone, as exploring comes with its perils.
Why should you know FRET? Well, FRET is used when you do real-time qPCR, or you might be using it in assays like HTRF, or to detect biochemical reactions in single living cells. You might measure protein-protein interactions, or probe cell signalling, cell metabolism or nanometre-scale conformational changes. Or what about dimerization, protein-nucleic acid interactions, checking splicing variants by FISH, or detecting fast conformational changes in structural studies? This is why some of us are very fond of FRET, and many others are using it without being fully aware of it. The usefulness of FRET arises from its capability to translate molecular properties occurring at nanometre and nanosecond scales into optical signals that can be easily detected with a microscope or a spectrofluorimeter.
What is FRET? When a fluorescent molecule is in close proximity to another that is, in principle, capable of absorbing the light emitted by the first, FRET might occur. However, FRET is not the emission and re-absorption of light, but the non-radiative transfer of energy. This is important because the molecule that donates energy and the one that accepts it become coupled, and will inform us about the distance between the two molecules only if they are within a few nanometres of each other, with sub-nanometre precision. Most of us do not use this capability directly, but rather to engineer probes that can sense specific biochemical reactions. Ok, now you are ready. What does FRET stand for? RET is Resonance Energy Transfer, and these three simple words say what I have just described. For the “F”… you would think it is simple, but the community is a bit split on the meaning of that “F”. There are two camps. One says “F” is for Förster, from Theodor Förster, who developed the theoretical background to describe the phenomenon. Others say that “F” is for “Fluorescence”, as the phenomenon is detected by means of fluorescence emission. Those who prefer Förster-type energy transfer mean to distinguish it from other possible mechanisms but, most importantly, to avoid misinterpretation of the acronym. Indeed, it is not fluorescence that is transferred from donor to acceptor, and the acceptor does not need to be fluorescent. Those who use Fluorescence RET often say that Förster did not discover FRET (correct, he provided a mathematical description of a known phenomenon). Does it matter? Not really, but at least now we know what FRET means. Ah, I almost forgot… FRET for me is Förster Resonance Energy Transfer… I heard you asking.
Next. How do we measure FRET? There are many ways to measure the occurrence of FRET, but today I will focus only on ratiometric FRET and Fluorescence Lifetime Imaging Microscopy (FLIM). I am going to use a very useful analogy, that of buckets filled with water (Fig. 1). The tap is your light source, which is filling a donor bucket with water (energy). The bucket has one hole, from which water is dripping into a plate (a detector). That stream of water, highlighted in green in Figs. 1-2, is the fluorescence signal that we measure, emitted by the donor. FRET is another hole punched into the donor-bucket. Water will flow into an acceptor-bucket, from where it will drip (red flow) into a second plate (detector). The ratio of the water we collect in the blue and yellow plates will tell us the fraction of water that passed through the FRET “hole”. In a real FRET experiment, this fraction, called the ‘FRET efficiency’, depends on the sixth power of the distance between the buckets, er… fluorophores.
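For the quantitatively minded, the bucket fraction maps onto the standard relation between FRET efficiency and donor-acceptor distance, E = 1/(1 + (r/R0)⁶), where R0 is the Förster radius (the distance at which E = 0.5). A minimal sketch; the 5 nm Förster radius is an illustrative value, typical for common fluorophore pairs but not taken from any specific pair:

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """FRET efficiency as a function of donor-acceptor distance r_nm,
    given the Foerster radius r0_nm (distance at which E = 0.5)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# The sixth power makes E a very steep 'ruler' around r = R0:
for r in (2.5, 5.0, 7.5, 10.0):
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
```

Note how the efficiency plunges from near 1 to near 0 over just a few nanometres, which is precisely what makes FRET a 'spectroscopic ruler'.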
Unfortunately, the excitation and emission spectra of typical fluorophores are broad, and spill-over of fluorescent signals (or water!) is usually unavoidable (Fig. 2). The buckets are large compared to their distance (the excitation spectra overlap) and part of the water we wish to pour into the donor bucket will fill the acceptor bucket. This is called ‘direct excitation’ of the acceptor. The water we now collect in the yellow plate flows from one hole in the acceptor-bucket, but it originates from two different flows: direct excitation (black flow) and FRET (red flow). The latter, FRET-sensitized emission, is the signal that matters. At the same time, water flowing from the donor bucket spills over into the yellow plate (the emission spectra overlap), adding a third (green) unwanted flow into the yellow plate.
So, how do we correct these cross-talks? The good news is that sometimes you do not need to. If all you need is a semiquantitative measurement – the detection of changes – then measuring the relative quantity of water that fell into the yellow plate compared with the blue plate will suffice. This, however, requires ensuring that the stoichiometry of donor-acceptor fluorophores does not change, as is the case, for instance, with typical FRET-based probes for kinase activity.
In other cases, you will need to correct for these cross-talks, and techniques like ‘precision FRET’ and ‘three-cube FRET’ come to the rescue (see reference section).
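As a rough sketch of the idea behind these corrections (not the full precision-FRET algebra): with donor-only and acceptor-only control samples, one can estimate the two spill-over fractions and subtract them from the FRET-channel image. The coefficient values and names below are purely illustrative:

```python
# Bleed-through coefficients, measured on control samples (illustrative values):
#   d: fraction of the donor signal leaking into the FRET channel
#      (estimated from a donor-only sample)
#   a: fraction of the acceptor signal excited directly by the
#      donor-excitation light (estimated from an acceptor-only sample)
d = 0.30
a = 0.15

def sensitized_emission(i_dd, i_da, i_aa):
    """Corrected FRET (sensitized emission) signal from three images:
    i_dd: donor excitation / donor emission
    i_da: donor excitation / acceptor emission (the 'FRET channel')
    i_aa: acceptor excitation / acceptor emission
    """
    return i_da - d * i_dd - a * i_aa

# Example pixel values (arbitrary units): only 130 of the 550 counts in
# the FRET channel survive the correction as genuine sensitized emission.
print(sensitized_emission(i_dd=1000.0, i_da=550.0, i_aa=800.0))
```

The cited three-cube and precision-FRET papers add careful calibration and error analysis on top of this basic subtraction.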
Another technique that can be used to measure FRET is Fluorescence Lifetime Imaging Microscopy, or FLIM. FLIM does not require measuring the flow of water from the acceptor. Instead, FLIM requires turning the tap on and off, and measuring the time the donor bucket takes to empty. When a second hole (FRET) is punched into the donor-bucket, the bucket will empty faster. We do not directly measure any signal from the acceptor and, therefore, we avoid the need to correct for spill-overs.
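In numbers, the 'bucket emptying faster' translates into the standard lifetime-based estimate of FRET efficiency, E = 1 − τDA/τD, where τD is the donor lifetime alone and τDA the donor lifetime in the presence of the acceptor. A minimal sketch with illustrative lifetime values:

```python
def fret_efficiency_from_lifetimes(tau_da_ns, tau_d_ns):
    """FRET efficiency from the donor lifetime measured with (tau_da_ns)
    and without (tau_d_ns) the acceptor: the extra 'hole' in the bucket
    makes the donor empty faster, shortening its lifetime."""
    return 1.0 - tau_da_ns / tau_d_ns

# e.g., an unquenched donor lifetime of 2.5 ns dropping to 1.5 ns
print(fret_efficiency_from_lifetimes(1.5, 2.5))  # 0.4 -> 40% efficiency
```

Note that no acceptor signal enters the calculation at all, which is exactly why FLIM sidesteps the spill-over corrections above.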
This brings me back to the time I was a PhD student. A very smart master’s student entered my office and popped the question: “how can FLIM detect the presence of FRET if the only photons we measure are those that do not undergo energy transfer?”. Back then, I was taken aback by the question and could not immediately respond in a satisfactory way. The bucket analogy should do the trick.
To conclude, this was just a brief overview of FRET and how we can measure it. There are plenty of great reviews out there to improve your understanding of FRET, but I hope that the analogy with buckets might provide a simple model for the non-specialist, albeit one that is physically inaccurate for other aspects of FRET. Below, you can find a few references. Let me also refer to my new study published in Biomedical Optics Express, entitled “How many photons are needed for FRET imaging?”. It is a theoretical study, but even the non-specialist might find some sections interesting and, yes, there are plenty more bucket figures there!
SOME USEFUL REFERENCES
J. R. Lakowicz, Principles of Fluorescence Spectroscopy (Kluwer Academic/Plenum Publishers, New York, 1999).
T. Förster, “Zwischenmolekulare Energiewanderung und Fluoreszenz,” Annalen der Physik 437, 55-75 (1948).
L. Stryer and R. P. Haugland, “Energy Transfer – A Spectroscopic Ruler,” Proceedings of the National Academy of Sciences of the United States of America 58, 719-726 (1967).
G. Bunt and F. S. Wouters, “Visualization of molecular activities inside living cells with fluorescent labels,” International Review of Cytology 237, 205-277 (2004).
E. A. Jares-Erijman and T. M. Jovin, “FRET imaging,” Nat. Biotechnol. 21, 1387-1395 (2003).
J. Zhang and M. D. Allen, “FRET-based biosensors for protein kinases: illuminating the kinome,” Mol Biosyst 3, 759-765 (2007).
M. Y. Berezin and S. Achilefu, “Fluorescence lifetime measurements and biological imaging,” Chem Rev 110, 2641-2684 (2010).
A. D. Elder, A. Domin, G. S. Kaminski Schierle, C. Lindon, J. Pines, A. Esposito, and C. F. Kaminski, “A quantitative protocol for dynamic measurements of protein interactions by Förster resonance energy transfer-sensitized fluorescence emission,” Journal of the Royal Society Interface (2008).
A. Hoppe, K. Christensen, and J. A. Swanson, “Fluorescence resonance energy transfer-based stoichiometry in living cells,” Biophys J 83, 3652-3664 (2002).
M. Elangovan, H. Wallrabe, Y. Chen, R. N. Day, M. Barroso, and A. Periasamy, “Characterization of one- and two-photon excitation fluorescence resonance energy transfer microscopy,” Methods 29 (2003).
G. W. Gordon, G. Berry, X. H. Liang, B. Levine, and B. Herman, “Quantitative fluorescence resonance energy transfer measurements using fluorescence microscopy,” Biophysical Journal 74, 2702-2713 (1998).
C. Berney and G. Danuser, “FRET or no FRET: A quantitative comparison,” Biophysical Journal 84, 3992-4010 (2003).
J. Wlodarczyk, A. Woehler, F. Kobe, E. Ponimaskin, A. Zeug, and E. Neher, “Analysis of FRET signals in the presence of free donors and acceptors,” Biophysical Journal 94, 986-1000 (2008).
A. Zeug, A. Woehler, E. Neher, and E. G. Ponimaskin, “Quantitative intensity-based FRET approaches–a comparative snapshot,” Biophys J 103, 1821-1827 (2012).
H. C. Gerritsen, A. V. Agronskaia, A. N. Bader, and A. Esposito, “Time Domain FLIM: theory, Instrumentation and data analysis,” in FRET & FLIM Imaging Techniques, T. W. Gadella, ed. (Elsevier, Amsterdam, The Netherlands, 2009).
R. A. Neher and E. Neher, “Applying spectral fingerprinting to the analysis of FRET images,” Microscopy Research and Technique 64, 185-195 (2004).
H. Wallrabe, Y. Chen, A. Periasamy, and M. Barroso, “Issues in confocal microscopy for quantitative FRET analysis,” Microscopy Research and Technique 69, 196-206 (2006).
S. Ganesan, S. M. Ameer-Beg, T. Ng, B. Vojnovic, and F. S. Wouters, “A YFP-based Resonance Energy Accepting Chromoprotein (REACh) for efficient FRET with GFP,” Proceedings of the National Academy of Sciences of the United States of America 103, 4089-4094 (2006).
J. Klarenbeek, J. Goedhart, A. van Batenburg, D. Groenewald, and K. Jalink, “Fourth-generation epac-based FRET sensors for cAMP feature exceptional brightness, photostability and dynamic range: characterization of dedicated sensors for FLIM, for ratiometry and with high affinity,” PLoS ONE 10, e0122513 (2015).
K. J. Martin, E. J. McGhee, J. P. Schwarz, M. Drysdale, S. M. Brachmann, V. Stucke, O. J. Sansom, and K. I. Anderson, “Accepting from the best donor; analysis of long-lifetime donor fluorescent protein pairings to optimise dynamic FLIM-based FRET experiments,” PLoS ONE 13, e0183585 (2018).
M. W. Fries, K. T. Haas, S. Ber, J. Saganty, E. K. Richardson, A. R. Venkitaraman, and A. Esposito, “Multiplexed biochemical imaging reveals caspase activation patterns underlying single cell fate,” bioRxiv, 427237 (2018).
In fluorescence microscopy, colocalization is the spatial correlation between two different fluorescent labels. Often, we tag two proteins in a cell with distinct fluorescent labels and look at whether and where the staining localizes. When there is a “significant overlap” between the two signals, we say that the two molecules “colocalize”, and we might use this observation as possible evidence for a “functional association”. We might argue that measuring colocalization is one of the simplest quantitations we can do in microscopy. Yet, many horror stories surround colocalization measurements. This post is not a review of how to do colocalization, but a brief, casual discussion of a few common controversies, aimed – as often in my posts – at junior scientists.
“I am imaging GFP, but the image is blue, can you help me?”. Well, this is not a question related to colocalization, but it illustrates a fundamental issue. In truth, cell biology is such an inherently multidisciplinary science that – in most cases – a researcher might require the use of tens of different techniques on a weekly basis. It is thus not surprising that many researchers (I dare say most) will be experts in some of the techniques they use, but not all. Microscopy is particularly tricky. To be a true expert, you need to handle a wealth of physical, engineering and mathematical knowledge, alongside experimental techniques that might span chemistry, cell culture and genetic engineering. However, the wonderful commercial systems we have available permit us to get a pretty picture of a cell with just the click of a button. Here is the tricky bit: you want to study a cell, you get a picture of a cell. One is led to confuse the quantity one intends to measure with the information one is actually gathering, and with its representation. This is true for any analytical technique but, as ‘seeing is believing’, imaging might misrepresent scientific truth in very convincing ways. Hence, while I have no doubt that, upon reflection, the non-expert user would have understood why the picture on the screen was ‘blue’, the initial temptation was to believe the picture.
Question what you set out to measure, what the assay you have set up is actually measuring, and what the representation is showing. Trivial? Not really. It is an exercise we explicitly do in my lab when we have difficulty interpreting data.
“It is yellow, they colocalize, right?”. Weeeeeeeeellll… maybe, maybe not. Most of you will be familiar with this case. Often researchers acquire two images of the same sample, the pictures of two fluorescent labels; one is then represented in green and the other in red. With an overlay of the red and green channels, pixels that are bright in both colours will appear yellow. I would not say that this approach is inherently flawed, but we can certainly state that it is misused most of the time and, therefore, I try to discourage its use. One issue is that colour-blindness, not as rare as people think, renders this representation impractical for many colleagues (hence my colour highlights!), but even people with perfect vision will see colours with lower contrast than grey-scale representations, and green more than red. In the end, to ‘see yellow’ it is almost unavoidable to boost the brightness of the two underlying colours to make the colocalization signal visible. This can be done either during the acquisition of the image, often saturating the signal (bad: saturated pixels carry very little and often misleading information), or during post-processing (not necessarily bad, if declared and properly done). Either way, by the time you are doing this, your goal of being quantitative has probably been missed. The truth is that a lot of biological work is non-quantitative, but faux-quantitative representations or statistics are demanded by the broader community even when unnecessary. Let’s consider an example with one of the stains being tubulin and the other a protein of interest (PoI). Let’s assume the PoI localizes at nicely distinguishable microtubules in a few independent experiments. Once the specificity of the stain is confirmed, the PoI can be considered localized at the microtubules (within the limitations of the assay performed), without the need for statistics or overlays.
Unfortunately, it is not very rare to see papers, even after peer review, showing diffuse staining of at least one of the PoIs, and perhaps a more localized stain of the second PoI, where a ‘yellow’ signal emerging from an overlay is considered colocalization instead of what it is: just noise. Another common issue is localization in vesicles. Again, with diffraction-limited techniques, any cytoplasmic PoI would appear to colocalize with most organelles and structures within the cytoplasm. Sometimes punctate staining might partially overlap with properly marked vesicles, let’s say lysosomes, but not all of them. Then the issue is to prove that, at least, the overlap is not random and, therefore, statistics in the form of correlation coefficients are necessary.
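As a toy illustration of such a correlation coefficient, here is Pearson's r computed on two flattened 'channels'. The pixel values are made up, and a real analysis should also deal with background, thresholding and complementary measures such as Manders' coefficients or randomization tests:

```python
from math import sqrt

def pearson(ch1, ch2):
    """Pearson correlation coefficient between two equal-length lists of
    pixel intensities (two channels of the same image, flattened)."""
    n = len(ch1)
    m1, m2 = sum(ch1) / n, sum(ch2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(ch1, ch2))
    var1 = sum((a - m1) ** 2 for a in ch1)
    var2 = sum((b - m2) ** 2 for b in ch2)
    return cov / sqrt(var1 * var2)

# Two toy 'images': the second channel tracks the first (plus an offset),
# so the correlation is high; a diffuse, unrelated stain would score ~0.
green = [10, 200, 15, 180, 12, 220, 8, 190]
red   = [55, 240, 60, 230, 52, 255, 50, 235]
print(f"Pearson r = {pearson(green, red):.2f}")
```

Values near 1 suggest a non-trivial correlation; values near 0 (or negative) argue against colocalization, within the limits of the assay.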
“The two proteins do not colocalise; two molecules cannot occupy the same volume.” Really!? Well, from a quantum mechanics standpoint…. no, do not worry, I am not going there. I received that criticism during peer review in the past, and until recently I thought it was a one-off case. However, I have since realised that I was not the only person to read that statement. I am really uncertain why a colleague would feel the need to make such an obvious remark, except for that condescending one-third of the community. I should clarify that, to my knowledge, no one implies physical impossibilities with the term colocalization. That statement is perfectly ok in a casual discussion, or to make a point when teaching beginners the basics. Some of us also enjoy discussing definitions, philosophical aspects of science, and controversial (real or perceived) aspects of techniques, but better at a conference or in front of a beer, rather than during peer review. The issue here is that while it is reasonable to criticise certain sloppy, and not too uncommon, colocalization studies, in general colocalization can be informative when properly done.
“So, is measuring colocalization useful?” Homework. Replace ‘colocalization’ with your preferred technique. Done? Now try to make the same positive effort for colocalization. Every technique is useful when used properly.
You might have noticed I marked some words in my introduction: colocalize, significant overlap and functional association. It is important that we understand what we mean by those words. Colocalization means co-occurrence at the same structure – a non-trivial correlation between the localization of two molecules of interest, within the limits defined by the resolution of the instrumentation. “Significant overlap” should really be replaced by “non-trivial correlation”. Non-trivial, as diffuse staining, non-specific staining and saturated images can very easily result in a meaningless colocalization of the signals, but not of the molecules of interest. Correlation, as the concept of overlap might be improper in certain assays, for instance in some studies based on super-resolution microscopy. Even after we have done everything properly, we still cannot say that if protein A and protein B colocalize they interact (see slide). However, we can use colocalization to disprove the direct interaction of two proteins (if they are not in the same place, they do not interact), and we can use high-quality colocalization data to suggest a possible functional association, which might not be a direct interaction and should then be proven with additional functional assays.
Then, my friends, do make good use of colocalization as one of the many tools in your laboratory toolbox, but beware: just because it is simple to acquire two colourful pretty pictures, there are many common errors that people make when acquiring, analysing and interpreting colocalization data.
P.S.: if I cited your question or statement, please do not take it personally. As I have written, not everyone can be an expert in everything, and the discussion between experts and non-experts is very useful – hence the real-life, anonymous examples.
Is it a cat? Is it a dog? Is the average between a cat and a dog a real thing, perhaps a caog or a doat?
Not all science should be based on single-cell detection, and there are plenty of cases where single-cell measurements are superfluous. However, too often we fail to appreciate the huge mistakes we can make in biology when we forget the assumptions we make when using population measurements.
But which assumptions do we really make?
Often implicitly, when doing population measurements (e.g., Western blots, sequencing, proteomics, etc.), we assume that the populations of cells we measure are homogeneous and synchronous. Or, at least, we assume that these differences are unimportant and can be averaged out. In the best cases, we try to enforce a degree of synchronicity and homogeneity experimentally. In reality, one of the most important assumptions we implicitly make is that the system we analyse is ergodic. In physics and statistics, an ergodic system is a system that, given a sufficiently long time, explores all its possible states. It is also a system where – if sufficiently sampled – all its states are explored and, consequently, averages over time on a single cell and averages over a population at a given time are the same. However, there are limits to this assumption in biology. The obvious example is the cell cycle. There is significant literature about ergodicity and the cell cycle [e.g., 1, 2, 3] and how this principle can be exploited, but…
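As a toy illustration of what ergodicity means in practice (the two-state 'cell' and its switching probabilities are made up, not a model of any specific biology): for an ergodic system, following one cell for a long time and taking a snapshot of many cells give the same average.

```python
import random

random.seed(1)

# A toy 'cell' flipping between state 0 and state 1 with fixed
# switching probabilities (hypothetical numbers).
P_01, P_10 = 0.2, 0.1   # P(0->1), P(1->0); steady state: 2/3 of time in state 1

def step(state):
    if state == 0:
        return 1 if random.random() < P_01 else 0
    return 0 if random.random() < P_10 else 1

# Time average: follow ONE cell for a long time.
state, time_in_1, STEPS = 0, 0, 200_000
for _ in range(STEPS):
    state = step(state)
    time_in_1 += state
time_avg = time_in_1 / STEPS

# Ensemble average: many cells, one late snapshot.
CELLS, BURN_IN, ensemble = 5_000, 200, 0
for _ in range(CELLS):
    s = random.choice((0, 1))
    for _ in range(BURN_IN):
        s = step(s)
    ensemble += s
ens_avg = ensemble / CELLS

print(f"time average:     {time_avg:.3f}")
print(f"ensemble average: {ens_avg:.3f}")  # both ~2/3 for this ergodic system
```

When a system is *not* ergodic – or not sampled long or widely enough – these two averages diverge, and that is exactly where population measurements start to mislead.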
The lottery for cell division makes you grow faster.
There is a particular phenomenon we encountered while working on this project that fascinated me for its simplicity and consequences. How can cells increase their fitness (i.e., their growth rate)? One obvious answer is by dividing faster. Another, at first glance less obvious, answer is by exhibiting a heterogeneous cell cycle length. Let's consider a population of cells that divides every 24 hours. Over one week, these cells will reach 128 times the original population size. Now, let's consider cells that divide on average every 24 hours but exhibit random variation in cell cycle length, with a standard deviation of 4 hours and a normal distribution. Cells with a 20-hour or a 28-hour cell cycle are equally probable. However, in one week, cells with a 28-hour cell cycle will grow 64-fold and cells with a 20-hour cell cycle will grow about 340-fold. On average, these cells will grow ~200-fold, much faster than cells dividing precisely every 24 hours (128-fold). This is true for any pair drawn at equal distance from the two sides of the average; these pairs are equiprobable, thus cells dividing at a given average cell cycle length grow faster with increasing heterogeneity. Let's remember that this can occur not just in the presence of genetic differences, but even just through stochastic variations, where the progeny of one cell will not keep the same cell cycle length but will keep changing randomly according to an underlying distribution. This phenomenon has been observed experimentally with single-cell measurements, for instance in yeast, and occurs in any cellular system, as described in the literature and in our own work. Population measurements might conceal these very important phenotypic or mechanistic differences.
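The arithmetic in this paragraph is easy to check with a few lines of Python (a quick sketch of mine; the truncation of the Monte Carlo sample at 8 hours is an arbitrary safeguard against unphysically short cycles, not part of the argument):

```python
import random

random.seed(0)
WEEK = 168  # hours in one week

def fold(tau):
    # Fold-increase of a population over one week when dividing every tau hours.
    return 2 ** (WEEK / tau)

print(fold(24.0))                            # 128.0 (2**7): precise 24 h cycle
print(fold(28.0), round(fold(20.0)))         # 64.0 and ~338: the equiprobable pair
print(round((fold(20.0) + fold(28.0)) / 2))  # ~201: the pair's average beats 128

# Monte Carlo: cycle lengths drawn from N(24 h, 4 h), truncated at 8 h.
mean_fold = sum(fold(max(random.gauss(24, 4), 8))
                for _ in range(100_000)) / 100_000
print(mean_fold > fold(24.0))                # True: heterogeneity boosts mean growth
```

The Monte Carlo line generalizes the two-cell example: because 2**(168/tau) is convex in tau, any symmetric spread of cycle lengths around 24 hours increases the expected fold-growth (Jensen's inequality).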
The mixture of two normal distributions is not another normal distribution.
The beauty of the normal distribution is that it is such a ‘well behaved’ distribution and, at the same time, it represents many physical and biological phenomena. If a population we are characterizing is a mixture of two normal distributions, its mean is the weighted average of the two means. If the two distributions have the same mean, the variance of the mixture is the weighted average of the two variances. These basic and useful mathematical relationships can also be rather misleading. In fact, while these statements are mathematically correct, two populations of cells that ‘behave rather differently’, for instance in response to a drug, cannot be meaningfully averaged. For instance, one cell population might be killed at a given concentration of a drug while another might be resistant. Upon detecting 50% cell death, we could assume – incorrectly – that dosing at higher concentrations would kill more cells.
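The drug example can be made concrete with a toy dose-response model (a hypothetical sketch of mine: the Hill-type curve and the EC50 values are illustrative assumptions, not data):

```python
def kill_fraction(dose, ec50):
    # Hypothetical Hill-type dose-response for a single sub-population.
    return dose / (dose + ec50)

def observed_death(dose, p_sensitive=0.5, ec50_sensitive=1.0, ec50_resistant=1e6):
    # Bulk (population-averaged) readout for a sensitive/resistant mixture:
    # what an ensemble assay would report for this dose.
    return (p_sensitive * kill_fraction(dose, ec50_sensitive)
            + (1 - p_sensitive) * kill_fraction(dose, ec50_resistant))

for dose in (1, 10, 100, 1000):
    print(dose, round(observed_death(dose), 3))
# Cell death plateaus near 50%: no dose escalation kills the resistant half,
# yet the bulk readout alone cannot tell a plateau from a shallow dose-response.
```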
The plot shown below illustrates this basic principle. The blue and red distributions, averaged together, exhibit the same variance and mean as the yellow distribution, but they represent very different systems. If the blue distribution represents the sizes of cats and the red distribution the sizes of dogs, the yellow distribution does not represent the size distribution of any real animal. In other words, the average phenotype is not a real phenotype and, in the best-case scenario, when there is a dominant population, it represents the most frequent phenotype (the mode). In all other cases, where the homogeneity of the phenotype is not checked, the average phenotype might be simply wrong.
This is a very simple illustration of a problem we frequently encounter in biology: trusting our population measurements (averages and standard deviations over experimental repeats) without knowing the distributions underlying them. In the figure above, the purple distribution has the correct average of the blue and red distributions, but its spread is the statistical error of the assay, unrelated to the scatter of the biological phenomenon we are measuring. Sometimes we cannot do anything to address this problem experimentally because of technological limitations, but it is very important – at least – to be aware of these issues.
Just for the most curious, I should clarify that for two Gaussian distributions with relative weights A and B, we can define a mixing parameter p = A/(A+B). The mean of the mixed population will simply be μP = p·μA + (1−p)·μB, i.e. for p = 0.5 it is the average of the two means. The apparent variance is σP² = p·σA² + (1−p)·σB² + p(1−p)·(μA−μB)², i.e. σP² is the weighted average of the variances plus the squared separation of the two means, weighted by the product p(1−p) of the mixing parameters.
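These closed-form moments are easy to verify against a sampled mixture (a minimal sketch; the 'cat' and 'dog' parameters are made-up numbers for illustration):

```python
import random
import statistics

random.seed(42)

# Two normal populations (say, cat and dog sizes) mixed with weight p.
p, muA, sdA, muB, sdB = 0.5, 30.0, 4.0, 60.0, 5.0

# Closed-form moments of the mixture, as in the formulas above.
mu_mix = p * muA + (1 - p) * muB
var_mix = p * sdA**2 + (1 - p) * sdB**2 + p * (1 - p) * (muA - muB) ** 2

# Empirical check: draw from population A with probability p, else from B.
draws = [random.gauss(muA, sdA) if random.random() < p else random.gauss(muB, sdB)
         for _ in range(200_000)]

print(mu_mix, var_mix)                                    # 45.0 245.5
print(round(statistics.fmean(draws), 1),
      round(statistics.pvariance(draws), 1))              # close to 45.0 245.5

# Hardly any individual sits near the mixture mean: the 'average animal'
# is a rare specimen, not a representative one.
near_mean = sum(abs(x - mu_mix) < 4 for x in draws) / len(draws)
print(near_mean)  # a small fraction of the population
```

Note that the empirical mean and variance match the formulas even though the mixture itself is bimodal: matching moments is exactly what makes the misleading yellow distribution of the figure possible.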
Collective behaviour of cells is not an average behaviour, quite the contrary.
When discussing these issues, I am often confronted with the statement that we ultimately care not about the behaviour of individual cells but about the collective behaviour of groups of cells. There are two important implications to discuss. First of all, when arguing for the importance of single-cell measurements, we do not argue for studying individual cells in isolation. Quite the contrary: we should measure individual cells in model systems as close as possible to the physiological state. However, many assays are incompatible with the study of cell behaviour within humans, and we resort to a number of model systems: individual cells separated from each other, 2D and 3D cultures, ex vivo and in vivo assays. The two arguments (single-cell measurements versus measurements in more physiological models of tissues or organisms) are not the same.
Second, collective behaviours are not ‘average behaviours’. There are great examples in the literature, but I would suggest simply visiting the websites of two laboratories that I personally admire and that nicely and visually illustrate this point: John Albeck’s laboratory at UC Davis and Kazuhiro Aoki’s laboratory at NIBB. Collective behaviours emerge from the interaction of cells in space and time, as illustrated by waves of signalling or metabolic activities caused by cell-to-cell communication in response to stimuli. The complex behaviours that interacting cells exhibit, even just in 2D cultures, can be understood when single cells and their biochemistry are visualized individually. Once again, phenotypes or their mechanisms might be concealed or misinterpreted by population or snapshot measurements.
This is, of course, not always the case. However, my advice is to keep in mind the assumptions we make when we perform an ensemble or a snapshot measurement and, whenever possible, to check that they are valid.
Industry, academia and healthcare often rely on fluorescence microscopy to see the fine architecture of materials, including biological ones. Fluorescence microscopy is particularly suited to biomedical studies because it can be gentle with biological materials, permitting investigators to study biology in a non-destructive manner. Chemistry and genetic engineering then provide useful strategies to make samples fluorescent so as to report on the mechanisms we need to study, aiming to understand how biological systems work in normal conditions, during disease or under therapy.
Thanks to two decades of fast-paced innovation in fluorescence microscopy, we can now see the smallest features of a biological sample, approaching molecular resolution. However, the capability of fluorescence microscopy to observe small changes in the chemical or physical properties of biological samples is not as well optimised as its capability to peek into small structures. In our recent paper entitled “Enhancing biochemical resolution by hyper-dimensional imaging microscopy” – now available in the Biophysical Journal – we demonstrate how to recover information that permits us to make better measurements.
We can think of a fluorescence microscope as a radio broadcaster that transmits useful information through different radio channels. When we listen to one individual channel, we lose the information transmitted over the other frequencies. If we attempt to listen to several broadcasts at the same time, the scrambled voices will limit our understanding of the several messages that were originally broadcast. Similarly, the lasers we use to make samples shine, and the fluorescence emitted by the samples, transmit information spread over the different properties of light: its colour, the time when light is emitted (the fluorescence lifetime) and the plane in which it vibrates (polarisation).
In our recent work, we describe theoretically and experimentally how all this information can be measured separately but simultaneously, enhancing our capability to observe biological processes. By breaking conceptual barriers and showcasing possible technological implementations of hyper-dimensional imaging microscopy, we aim to catalyse advances in several applications, spanning material sciences, industrial applications, basic and applied biomedical research, and improved sensing capabilities for medical diagnostics.
What has been the impact of fluorescence lifetime imaging microscopy on science, and on the biomedical community in particular? Is FLIM a niche technique, one of those techniques that always promise but never deliver?
The top 10 most cited papers
Excluding reviews, the list of the top 10 most cited papers, albeit representing a very narrow window on the impact FLIM has had on the broader community, is rather instructive. Bear in mind that we are missing all those papers where FLIM was used but not cited in the title or abstract. Most of the top 10 consists of applications to cell biochemistry, demonstrating the potential and impact of fluorescence lifetime. FLIM helped us to understand how signalling works in living cells and animals, helped to identify drugs and to study disease. Some of the top-cited papers are more technical, such as Digman’s paper on the phasor transform or Becker’s paper on TCSPC, widely cited because of their influence on contemporary FLIM techniques from the perspective of data analysis and technology. Other papers date back to the initial years of FLIM, with applications to biochemistry. Overall, from this list we understand (if more evidence was needed) that FLIM has had a deep impact on the understanding of cell biochemistry, albeit, historically, FLIM has been limited to the specialist laboratory.
I would like to highlight another small observation, perhaps interesting only for the specialists and not visible from other bibliometric analyses. Tom Jovin and a group of scientists trained by him (e.g., Dorus Gadella and Philippe Bastiaens) left a significant footprint in the field, directly driving biomedically relevant applications while pushing, at the same time, technological and methodological developments. Many others are linked to this ‘school’ directly or indirectly: scientists who use and develop microscopes to do biochemistry.
Mapping temperature within cells, by Okabe and colleagues (2012) from Uchiyama’s laboratory, published in Nature Communications, where FLIM was used with fluorescent polymers as intracellular temperature sensors. (442)
Phasor analysis by Michelle Digman and colleagues, from the laboratory of Enrico Gratton (2008), published in the Biophysical Journal. Phasor-based analysis, in its different flavours, has become quite popular nowadays. (406)
An in vivo FLIM-based analysis of calcium dynamics in astrocytes by Kuchibhotla and colleagues from Bacskai’s laboratory (2009) published in Science. (353)
The study of Calmodulin-dependent kinase II activity in dendritic spines by Lee and colleagues from Yasuda’s laboratory (2009) published in Nature. (351)
One of the first FLIM papers by Lakowicz, published in 1992 in PNAS, where they applied the methodology, yet to be fully established, to the study of free and bound NADH. (339)
One of the first biochemical applications of FLIM, where Gadella and Jovin applied the new tools to the study of EGFR oligomerization (1995), published in the Journal of Cell Biology. (323)
A 2004 paper where Becker and colleagues present the TCSPC instrumentation that would become a commercial success, published in Microscopy Research and Technique. (321)
The application of FLIM and molecular rotors to study the viscosity of the cellular environment by Marina Kuimova and colleagues, from the laboratory of Klaus Suhling, published in JACS in 2008. (319)
The development of a drug interfering with the interaction between KRAS and PDEdelta, published by Zimmermann and colleagues with the laboratory of Philippe Bastiaens in Nature in 2013. (291)
The interaction between PKC and integrin, shown by Ng and colleagues from Parker’s laboratory in 1999 in the EMBO Journal. (277)
Tool: Web of Science
Search term: “FLIM” and “fluorescence lifetime imaging microscopy”
Note: FliM is a component of the flagellar motor and it shows up in the searches. I could not eliminate this ‘false positive’, but I assume it does not change the following discussion.
Citations (in parentheses) as of April 2019.
Any bibliometric analysis is very limited in scope, and certainly this very narrow search is. This is just a blog post: one observation meant to trigger a discussion for those curious about the topic.
For a few months now, the manuscript entitled “Multiplexed biochemical imaging reveals caspase activation patterns underlying single cell fate“, authored by Maximilian W Fries, Kalina T Haas, Suzan Ber, John Saganty, Emma K Richardson, Ashok R Venkitaraman and Alessandro Esposito, has been available as a pre-print at the bioRxiv repository. It has started its journey through the peer-review process, but here I wish to explain to students and young scientists what happened behind the scenes as, I believe, it can be instructive.
The inception of the idea | I am unsure if it will be evident from the manuscript, but this is the culmination of a huge effort that started more than a decade ago. Having just completed my PhD, I was about to leave the Cell Biophysics Group led by Prof. Fred Wouters, and on a train from Goettingen to Bonn, where my partner used to work, I was thinking: “What should I do next? … Something that, while capitalizing on my training, can make my work distinct from my mentors’ and others’? Where can I have the highest impact?” That moment stuck in my memory.
I believe I read Santos et al. (2007), “Growth factor-induced MAPK network topology shapes Erk response determining PC-12 cell fate”, in that period, a paper that influenced me significantly. It made me think of cells as computational machines, interpreting various inputs from the extra- and intra-cellular environment to trigger appropriate outputs, cell states or transitions between cell states, i.e. cellular (fate) decisions. Everyone working with microscopy knows that cells treated equally often behave differently and, therefore, I started to formulate ideas around the following questions: “How does a network of biochemical reactions encode cellular decisions? Why do genetically identical cells take different decisions when faced with a similar stimulus?” Basic principles, the science I love the most, but questions worth answering also to obtain mechanistic insights, and questions quite relevant to disease.
As a matter of fact, it is of fundamental importance to understand how cells trigger pathological states, or whether differences in biochemical networks can be used as diagnostic markers for patient stratification or targeted for therapy: concepts that I started to work on only later. Certainly, I thought back then, with my unique blend of physics, engineering, mathematics, molecular and cell biology, I could do in this area what others might not be able to. Therefore, since 2007, my aim has been to image not just a biochemical reaction, but biochemical networks within intact living cells, while they undertake decisions.
Finding the resources, the initial success | Perhaps other students start less naïvely than I did, but I soon discovered that having a good idea (let’s suppose it is a good idea) and having the right skills is only a tiny part of the job. First, aiming to coordinate my work with that of my partner (now wife), I accepted a job offer at the University of Cambridge to work with Prof. Clemens Kaminski and Dr. Virgilio Lew on an exciting but quite unrelated project. While working on the homeostasis of P. falciparum-infected red blood cells, I set up collaborations and wrote an EPSRC fellowship, which was funded. Therefore, in 2009, two years after my first idea, I got the funding to work on biochemical multiplexing. With this fellowship I was able to refine my expertise in biochemical multiplexing, permitting me to build advanced technologies for fluorescence sensing such as confocal spectro-polarimetry and fast SPAD-based spectral FLIM. This EPSRC fellowship, together with my expertise and vision, and the benefit of having already established my name in the community thanks to the work I had done with, and the support of, Prof. Fred Wouters and Prof. Hans Gerritsen, was an excellent platform that permitted me to make the next jump and accept a senior position at the MRC Cancer Unit.
Finding the resources, the struggle | Rather than focusing just on technology, I then broadened my research into a programme that would require theoretical developments, engineering of new pairs of fluorescent proteins to achieve multiplexing, coding and, of course, biological applications. I recognize that expanding my research before securing the appropriate resources was a significant mistake, or at least a huge risk. Working within Prof. Ashok Venkitaraman’s group, I started to write ambitious EU grants. Some of them would receive excellent feedback (14 out of 15 points, ranked first or second among those not funded…) but fell short of being funded. Hans once told me that “at this level of competition and quality, often it is just noise that decides the final outcome“. Probably true, even funny if you knew we worked together on photon statistics (‘noise’). But great feedback does not replace funds, and thus I wrote an ERC grant.
I did not get ERC funding but, once again, ERC is very competitive and I was not sufficiently experienced, thus no drama. However, I started to notice one big issue: physicists would judge my physics not great physics, biologists would judge my biology not great biology. Some colleagues would find my objectives impossible to reach. This is what I have since discovered to be the challenge of doing multi-disciplinary research (well, technically it is called trans-disciplinary research, but that is the topic for another post). When your proposal is judged both trivial and impossible, you might have an issue that is not necessarily related only to your science. One referee commented that “A number of groups have been trying to improve the technologies for many years and although some of them have an enormous experience they are not anywhere close to where he intends to be in five years“. Around the same time, a renowned scientist commented on the description of my work: “It is impossible”, but then added, in a wonderfully supportive and very appreciated manner, “but if there is someone that could do it, it is Alessandro”. Well, if funding proposals could be judged with the human touch that people have when speaking in person, knowing and respecting each other’s work… I’ll cut an even longer story short: with significantly fewer resources than I was asking for, and struggling to increase my funding, but with the financial backing of Prof. Ashok Venkitaraman, we did everything we wanted to do in… five years!
The great technical success (NyxBits and NyxSense) | I wish I could tell you a story of great success in a broader sense, but that has still to be written… if it ever will be. I did waste a significant amount of time looking for resources in what I found to be an amazingly inefficient system. However, between the end of my EPSRC fellowship and this year (~6 years), we have done a huge amount of work to realize what was thought not to be possible:
Molecular Biology. I wished to develop two platforms, one based on spectrally multiplexed time-resolved anisotropy (open for collaborations here!) and one based on spectral FLIM, to manage the cross-talk between multiple FRET pairs and make biochemical multiplexing possible. With the limited resources I had, and initial help from Bryn Hardwick, Meredith Roberts-Thomson and David Perera in Ashok’s lab, we kick-started the project. The mountain of work started to overwhelm me. Occupied with grant writing, training in a new field, engineering, software development and mathematics, I could not push this forward as fast as I wished. Great help then arrived from Max Fries, who spent 6 months with me as a master’s student. Once he left, I was short of resources again and, with the FRET pairs misbehaving and exhibiting aggregation or spurious signals, we abandoned one of the two sensing platforms. Emma Richardson then joined me as a Research Assistant dedicated to cloning and testing FRET pairs, and then Max came back to work with me for another four years as a PhD student. Committed and skilled, he tested tens and tens of FRET pairs. The work was a huge task, yet it amounts to just a couple of paragraphs in the manuscript. We even have better pairs than those we used in this work, all described in the Supporting Information. Indeed, under the pressure to publish in high-impact journals, I decided (probably another mistake of mine) to progress to applications, settling for what we recently baptized as NyxBits: mTagBFP, sREACh, mAmetrine, msCP576, mKeima and tdNirFP, so as to focus on biological applications. NyxBits and NyxSense? Well, I have explained the choice of names elsewhere.
Mathematics and software. There is something I could not really write so explicitly in the manuscript, something appreciated only by experts in the field and, I find, impossible to communicate to review panels. As testimony to this, I report here a comment once relayed to me, something like: “Why do we need to offer him a career? Once he has built the instruments, we really need one person just clicking a button, no?” (I am sure I remember it as much worse than it was. Maybe.) The integration of technologies is so new and challenging that we had to formulate new theoretical frameworks and write all new software, including how to acquire data, data formats and analysis. Also, some aspects of our work are difficult to appreciate. Let me tell you about another small event that pushed me in a particular direction. I really enjoy the conference Focus on Microscopy, even when criticized. As I presented new ideas, a colleague – respectfully – questioned the possibility for multiplexed imaging to measure several FRET pairs at the same time. This stimulated me to resume studying the Fisher information content of biochemical imaging. What is the biochemical resolution in microscopy? Can we enhance it? After years of thinking about this topic, in 2013 I cracked the problem and published the mathematics in PLOS ONE, where I formulated what I call ‘the photon-partitioning theorem’. Then, with the increasing financial backing of my Director, Kalina Haas joined my growing team. Kalina implemented unmixing algorithms and complex data analysis pipelines. Max and Kalina then became my dream team, progressing the project to the shape you can read today.
Technology. I mentioned earlier some technology platforms that were designed for biochemical multiplexing. In my recent first release of manuscripts on bioRxiv, we also published a full implementation of Hyper-Dimensional Imaging Microscopy (HDIM), with which we backed the photon-partitioning theorem with experimental evidence. We have done much more in that direction but, when we started biological applications, we realized the need for faster FLIM systems. Unable to wait for commercial solutions or to reap the benefits of other prototypes we had developed, I decided to build my own fast multiplexed electronics. This development was fostered by the negative criticism of a referee. During a PNAS submission of our spectral FLIM system, a referee mentioned we could do the same utilizing hybrid PMTs. I disagreed, as achieving 64-channel spectral FLIM with the capability to run at hundreds of millions of photon counts per second is an altogether different application; however, there is merit in most referees’ criticisms, even the most negative ones. Only then did I realize that PMTs are now very fast and that the bottleneck was just the electronics. Therefore, I got in touch with Surface Concept, who supported me wonderfully and sold me one of their multi-hit TDC platforms. After several months of software development, we were capable of running FLIM measurements with the quality of TCSPC and the speed of FD-FLIM. As usual, I presented this work at FoM, where it was very well received by colleagues and companies, but we did not publish the imaging platform as we were fully committed to pursuing biological applications.
The biology. The bottleneck of our experiments was, and still is, data analysis and, with tens of experiments and thousands of biochemical traces to be painfully and manually curated, we moved ahead very slowly, but working hard. Max, Kalina and myself, mostly, endured years of hard work, the occasional worry when something stopped working, and the excitement of seeing, for the first time, things that others could not see. In this manuscript, we reveal the extent of non-genetic heterogeneity that biochemical networks can exhibit and that eventually results in different cellular decisions. Here, we focused on multiplexing simple biosensors for caspases, as we aimed to de-risk a very ambitious project. We also decided to work with HeLa cells, again for the same reason. Despite the simplicity of the model system under study, we realized how complex and heterogeneous the response of biochemical pathways is, with cross-talk between enzymes, signalling pathways and cellular metabolism. All of this is, for me, fascinating, and it shows that whenever we do ensemble measurements, we really see only the behaviour of the average cell. It is then important to understand that the ‘average cell’, most of the time, does not really exist. If we are lucky, the bulk of the population responds with one phenotype and the measured ‘average cell’ will indeed represent the ‘most frequent cell’. However, in other instances, when significant sub-populations behave in distinct ways, we would not just miss important information: the model inferred from the ‘average cell’ would simply be the wrong model of a non-existent cell. This is why it is important to know, for any assay, whether the sample behaves synchronously and homogeneously in response to a stimulus. In this sense, single-cell biochemistry brings not just an additional layer of information, but tells us whether the observations we obtain on a given model system with ensemble measurements can be trusted.
Enduring the struggle | I hope you did not mind that I spoke so positively about my own work. If you know me, you also know I am not so self-centred. However, I wished to let younger scientists know what there might be between a ‘good idea’ and its realization, passing through frequent failures and some successes. Probably one of the most precious qualities of a scientist is resilience. We need thick skin to confront the constant failures that lead us to discoveries, the constant struggle to obtain resources and, eventually, to publish good work in a highly competitive environment. Turning a negative event into something positive is part of this process: understanding why one experiment did not work enables us to troubleshoot; understanding why an experiment falsified our hypothesis enables us to build new and better models; understanding why funding was not awarded or a manuscript was not published shows us how we can improve our scientific proposals and reporting. Of course, this is easier said than done.
The work we presented on bioRxiv is not the end of the story. The work, wonderfully received at conferences, is still not peer-reviewed. Will colleagues appreciate and understand the vision of our work, its possible impact and the mountain of work we had to do? Were we able to communicate it properly? And even if we were, we still have a long way in front of us. My dream is to establish a single-cell systems biology of cell fate. A huge amount of work, from maths to biology, from biotechnology to physics, is still needed to understand why cells do what they do, how physiological states are maintained and how pathological states emerge.
Well, I remember when I started in this business: a beam stop was made from a recycled block of lead and reflections were stopped with cardboard boxes 😉 Brown boxes – black cardboard catches fire, of course (tell that to my undergrad self). Not any longer, of course!
About ten years ago, I started the procurement and development of my first two-photon microscope. For the first time, I was directly responsible for laser safety and I had to make decisions about how to build a system that was safe for a user facility in a biomedical research institute. As I was coupling commercially sourced systems (Leica SP5, Chameleon Vision 2 and Pulse Select) and was not planning much customization of the excitation path of this instrument (I heavily develop assays and detection), I opted to fully enclose the laser in lens tubes. The resulting system is safe, stable, and no more difficult to align than other enclosures.
I think that enclosures around the complete table might make sense in many instances, particularly when compartmentalized into sub-sections, but this is the system that worked best for me at the time. One solution I wish to share is a bypass for the pulse picker we had used to develop spectrally resolved FLIM utilizing smart SPAD arrays (detectors that integrate photon-counting electronics).
As I start planning the replacement of this system, I wished to share this design in case some of you might find it useful. In the image on the left, you can see the Ti:Sapphire at the top, the pulse picker on the right and the first enclosure by Leica used to steer the beam to their in-coupling optics (bottom right).
In the middle is the laser bypass we utilize to direct the laser through the pulse picker or around it.
In the image below, you see a close-up photo of the bypass. The black box with the rectangular aluminium cover is the Leica spectral flattener, used to reduce the power of the Chameleon Vision at its peak wavelength. One of the few customizations I needed here was simply a hole in a Thorlabs SM2 lens tube to accommodate this filter. This is screwed into a C4W-CC cube that can host a movable turning mirror with high reproducibility. The alignment of the microscope without the pulse picker is done with the pair of mirrors provided by Leica. The alignment of the pulse picker is done with the kinematic mirrors visible on the left (M1 and M2). I placed a light block behind them, both in case one should come loose and to block the small amount of light transmitted through them. A kinematic cube is used to host an ultrafast beam sampler by Newport, directing a small fraction of the light to the Thorlabs PIN diode I use to feed the electronics of the pulse picker. In front of the PIN diode I have an xy-translating cage element. An empty four-way cube allows the laser beam to pass from top to bottom (pulse picker bypassed) or from left to right (pulse picker coupled). The aluminium block tagged as L1 is just a cover for the C4W-CC when empty.
At the output of the pulse picker, you see the mirror image of this bypass (on the right) and the two steering mirrors by Leica (the cylindrical towers). On the far right of the picture is the in-coupling optics by Leica, preceded by two diagnostic ports.
Below, you can see a close-up picture of this part of the coupling. Because of the layout, I needed to add one extra mirror (top left), and to isolate users (who stand at the top of the image) from the in-coupling optics and prevent accidental damage, I added a light barrier.
Both diagnostic ports are based on a four-way kinematic cube from Thorlabs hosting Newport beam samplers. The first port is used to sample the pulses after the pulse picker and to feed our FLIM electronics. The second serves two purposes. The first is coarse alignment of the system: I have two irises in the system that are aligned when the laser is (roughly) aligned to the Leica in-coupling optics.
I usually remove a cover at the exit of this diagnostic port and use a fluorescent card to verify alignment, but in this picture you see the fiber coupling to a spectrograph we occasionally use to diagnose laser faults.
The alignment is simpler than it seems. First, we start with a microscope that is fully aligned without the pulse picker, as per normal operations. Then, when we need the pulse picker, we insert the two turning mirrors (L1 and R1). We do this with the laser off and with the pulse-picker crystal retracted (coarse alignment) or protected by an alignment card (fine alignment). M1 and M2 are then used to align the beam with the crystal. Then we align the PIN diode and proceed with the fine alignment of the pulse-picker cavity. Once this is done, we align the cavity with the microscope using M4 and M5. For coarse alignment, the signals from the two diagnostic ports are very useful until some signal is picked up on the microscope monitor, after which the final fine-tuning of all the optics can proceed.
Be aware: alignment of Class 4 lasers can be dangerous. Do your own risk assessments and think carefully about the logistics of your system. Now that I am starting to consider the redevelopment of the system, I thought I would share these notes with you, hoping they could be of some use.
Are you interested in cell biochemistry, but in single living cells, organoids or tissues? Is there a Western blot or IP you wished you could do on a living sample? Or did you wish to see where in a cell a protein-protein interaction occurs?
Well, if you are interested in quantifying a ligand concentration, a post-translational modification, a protein-protein interaction, chromatin states or the oligomerization of proteins, you might be interested in FLIM or FRET, but you might not be in your comfort zone setting up or executing such assays.
The specialist expertise and instrumentation required to perform fluorescence lifetime imaging microscopy (FLIM) are often a barrier to the adoption of quantitative biochemical imaging techniques. The same can be true, albeit to a lesser extent, for intensity-based measurements of FRET.
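For readers new to these assays, the underlying quantity is simple: FRET efficiency can be estimated from the donor lifetime measured with and without the acceptor, a textbook relation sketched below with purely illustrative lifetime values.

```python
# FRET efficiency from donor fluorescence lifetimes (textbook relation):
#   E = 1 - tau_DA / tau_D
# where tau_D is the donor lifetime alone and tau_DA is the donor lifetime
# in the presence of the acceptor. The values below are illustrative only.

def fret_efficiency(tau_d_ns: float, tau_da_ns: float) -> float:
    """Return FRET efficiency from donor-only and donor-acceptor lifetimes."""
    return 1.0 - tau_da_ns / tau_d_ns

tau_d = 2.5    # ns, hypothetical donor-only lifetime (e.g. a GFP-like donor)
tau_da = 1.8   # ns, hypothetical donor lifetime with the acceptor present

print(f"E = {fret_efficiency(tau_d, tau_da):.2f}")
```

Because the lifetime ratio is independent of fluorophore concentration and excitation intensity, this is what makes FLIM-based FRET readouts so robust compared with raw intensity measurements.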
Well, we have the expertise and we have the instrumentation. Not only that, but today instrumentation and data analysis are becoming simpler and simpler. During 2019, we are going to trial a system by which we can support you in setting up and testing FLIM/FRET experiments. We have limited resources and will therefore open only a few experimental sessions to start with, but there will be no strings attached: no fees, no authorship to include in that paper you really care about.
Although we still have to set up the “Sharing is caring” trial, feel free to let us know about your interest. Initially, projects will be selected at our discretion, with priority given (but not confined) to cancer-related work and work with the potential to impact public health in the short or long term.