NyxBits and NyxSense? What?!

NyxSense & NyxBits paper here.

I am not fond of new acronyms or ‘cool’ names, but then… guilty! You got me, I am contributing to the proliferation of four-letter acronyms and fancy names like everyone else! Lately, I have introduced a new one, HDIM, for Hyper-Dimensional Imaging Microscopy. But that is another story, and in a Supporting Note of that pre-print we explain our choice.

Earlier, we created the pHlameleons with my friend and group leader at the time, Fred Wouters. Well, first there was the Cameleon, the famous calcium reporter by the great Miyawaki and Tsien, a brilliant name for a protein that ‘changes colour’, chameleon-like, upon binding calcium (Ca). Then came the Clomeleon by Kuner and Augustine, which senses chloride ions (Cl) rather than calcium. With all due respect for the authors, I must admit I did not love that name at first. Indeed, as we were deriving a family of pH sensors from yet another creation of Miyawaki (the CY11.5), we started to joke that we should call this family of sensors the pHlameleons. Month after month, the joke ended up in the title of a paper and was adopted as the name of these pH-sensitive proteins. So, let’s not take ourselves too seriously too often. Sometimes we pick names for a bit of branding, other times to make our assays less weighed down by technical terms, and other times, let’s just have fun with words (Clomeleon now sounds like a great name to me, but I routinely joke about the pHlameleons!).

Now that you know the funny little story about the pHlameleons, it is the turn of NyxSense and NyxBits. NyxSense is a software package dedicated to the multiplexing of FRET sensors. NyxBits are the building blocks of that multiplexing platform: a set of fluorescent proteins with distinct Stokes shifts that report, through their fluorescence lifetime, biochemical reactions probed via FRET to dark (or darker) acceptor chromoproteins. A huge effort for us that took several years to bear fruit. Why Nyx?

During the revision of the drafts, colleagues found the manuscript a bit too technical and difficult to read. Thus I went back to pen and paper, Google and Wikipedia, to find a name that could help us refer to this sensing platform with a single word rather than a sentence. Greek mythology always provides great inspiration and, eventually, I discovered Nyx, the primordial goddess of the night (Nox in Roman mythology). With Erebus (the personification of darkness), Nyx gives birth to Aether (the personification of the upper air and brightness), Moros (deadly fate), the Moirai (destiny) and Thanatos (death). I felt that this short name, Nyx, is intimately connected with our work for three reasons.

First, Nyx links darkness and light, day and night, a nice analogy with our bright donor fluorophores and dark acceptors. Second, Nyx is related to death and fate. We created the NyxBits and NyxSense to study cell fate, and our first application is the cell death response to an anti-cancer drug. Third, Nyx is a goddess and, as I am really committed to gender equality at work (not just when picking names for fluorophores), honouring a female deity felt a little bit in tune with what I do.

But do not take these reflections too seriously (I do not); after all, I just needed a simple name for a very complex sensing platform. As there is no room in the manuscripts to explain the reasoning behind the names, I thought I would share with you, light-heartedly, why we picked NyxSense and NyxBits.

Now we are starting project Atlas… we’ll speak about this another time! 🙂

Volume rendering: is this localization-based super-resolution?

Project outcome published in Biophysical Journal in 2010.

  • Esposito A*, Choimet JB, Skepper JN, Mauritz JMA, Lew VL, Kaminski CF, Tiffert T, “Quantitative imaging of human red blood cells infected with Plasmodium falciparum”, Biophys. J., 99(3):953-960

Most papers have an untold backstory that we cannot reveal in them, so as to keep the focus on the main message and the most relevant discoveries. This one has a little backstory I wish to share. Volumetric imaging of red blood cells is not the most difficult thing I have ever done. However, accurate morphological and volumetric imaging of red blood cells infected by Plasmodium falciparum, the causative pathogen of malaria, caused me a few headaches. Let’s forget the time spent waiting for cultures to grow at the right speed to deliver bugs at the right stage of development, undecided whether to sleep before or after the experiment, and always getting the decision wrong. Let’s not speak, for now, about the optimization of the sample preparation which, by trial and error, led to other interesting observations. Here, we focus on the very simple concept of accurate volume rendering.

In one way or another, volume rendering and estimation require some sort of thresholding of the data so as to discriminate the object from the background. As imaging conditions change, even slightly, from experiment to experiment, setting this threshold might confound the final outcomes. When you also deal with a sample that undergoes major morphological transitions, a simple problem soon becomes one I spent a lot of time finding a solution for. As it happens, one does not necessarily find the best, the most elegant, or even the simplest solution, but the solution one can find with one's own skills and tools. Mine was a brute-force approach: an isosurface rendering, iteratively deformed by locally refitting a random sample of vertices so as to respect a specific model for the object-to-background transition. This method permitted us to preserve high-resolution morphological descriptions while achieving high accuracy and reproducibility in volume estimation.
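For the curious, here is a minimal sketch of the general idea (in Python; this is not the original code, and all names and parameter values are purely illustrative): sample the intensity profile along the normal through each selected vertex, fit a sigmoidal object-to-background model, and move the vertex to the fitted edge position.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import curve_fit

def edge_model(s, s0, width, inside, outside):
    # Sigmoidal model for the object-to-background intensity transition.
    return outside + (inside - outside) / (1.0 + np.exp((s - s0) / width))

def refine_vertex(volume, vertex, normal, half_len=5.0, n_samples=21):
    # Intensity profile sampled along the outward normal through the vertex.
    s = np.linspace(-half_len, half_len, n_samples)
    pts = vertex[None, :] + s[:, None] * normal[None, :]
    profile = map_coordinates(volume, pts.T, order=1, mode='nearest')
    p0 = (0.0, 1.0, profile[:3].mean(), profile[-3:].mean())
    try:
        popt, _ = curve_fit(edge_model, s, profile, p0=p0, maxfev=2000)
        return vertex + np.clip(popt[0], -half_len, half_len) * normal
    except RuntimeError:
        return vertex  # keep the old position if the local fit fails

def refine_surface(volume, vertices, normals, n_iter=5, frac=0.2, seed=None):
    # Iteratively deform the isosurface by refitting a random sample of vertices.
    rng = np.random.default_rng(seed)
    vertices = vertices.astype(float).copy()
    for _ in range(n_iter):
        for i in rng.choice(len(vertices), int(frac * len(vertices)), replace=False):
            vertices[i] = refine_vertex(volume, vertices[i], normals[i])
    return vertices
```

An initial surface and its normals could come, for instance, from skimage.measure.marching_cubes applied to a roughly thresholded volume; the local refitting then makes the final volume estimate far less sensitive to the initial threshold.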

This work was carried out while many of my colleagues were focusing on super-resolution, i.e. maximizing the spatial resolution in optical microscopy. It was then simple to notice that fitting a surface onto volumetric data delivers volume estimates with a precision higher than the optical resolution of a microscope should permit. Indeed, whenever you have a model for an object (in my case the boundary of a red blood cell; in single-molecule super-resolution methods, the point spread function of an emitter), it is possible to fit this model with a precision that is not (fully) constrained by diffraction but, under the right conditions, only by the signal-to-noise ratio, the analytical tools and the adequacy of the model for the object.

In this Biophysical Journal paper we focused on the biological application and, together with other published work, on the modelling of the homeostasis of infected red blood cells. Also to avoid criticism from referees, probably legitimate, I decided not to mention the concept of super-resolution. As my research focus is on biochemical resolution and its use to understand cellular decisions in cancer, I will not pursue this work any further, but I thought I would write down this little story.

While writing this brief story, I recalled my friend Alberto Diaspro often citing Toraldo di Francia on resolving power and information. I believe that my work was far from being a breakthrough from an optical standpoint, but I wished to use it as a reminder of a fundamental issue that, often in biomedical applications, gets forgotten. The resolution at which we can observe a phenomenon, irrespective of the tools used, depends both on the quality of the instrument and on the quality of the prior information we can utilize to interpret the data. Once technology permitted imaging single emitters in fluorescence microscopy, the prior of point-like sources could be used to analyse images so as to reveal the full information content carried by the detected photons.

In an experiment, information content is the most precious thing. Irrespective of the methodologies used, our protocols are designed to maximize signal-to-noise ratios and thus maximize information content, precision and resolution. However, as trivial as these statements are, in the biomedical sciences we often do not follow through on the process of maximizing information content. Significant information can be provided by our a priori constraints and models. Moreover, a thorough understanding of the information theory related to a specific assay can provide levels of precision and resolution beyond what we assume, at first, to be possible. However, priors and information theory are far too often neglected. This happens out of necessity, as most people do not have training in and understanding of both the biological and the physical processes, and even those who might have to invest their limited resources carefully. I hope that in the future there will be more collaborative work between life scientists, physicists and mathematicians, aimed at better understanding how to extract the maximum information from experiments in the biomedical sciences.

So… was our volumetric imaging super-resolution? I am not sure I care to answer, but I wished to provoke some thought and make you reflect a little on the relevance of information theory in biomedical research.

Photon partitioning theorem and biochemical resolving power

Project outcome published in PLoS ONE in 2013.

  • Esposito A*, Popleteeva M, Venkitaraman AR, “Maximizing the biochemical resolving power in fluorescence microscopy”, PLOS ONE, 8(10):e77392

After my 2007 theoretical work on photon economy and acquisition throughput, I occasionally worked on a more general framework, attempting to falsify my hypothesis that multi-channel or multi-parametric imaging techniques can deliver better results than simpler techniques.

My proposal to develop instrumentation to achieve spectrally and polarization resolved lifetime imaging (later defined as HDIM) was met with scepticism by many. The recurrent question was: if you struggle to do a double exponential fit with the small photon budget we have available in biological applications, how could you possibly dilute these photons over several channels and analyse them with more complex algorithms?

Here, there are a few fundamental misunderstandings. First, the analysis should not be carried out on each “detection channel” independently; the entire dataset should be used so as to exploit all the information at once. Second, the use of dispersive optics rather than filters permits the acquisition of a higher number of useful photons. Third, limitations of current technologies (e.g., speed or photon-collection efficiency) should not be an obstacle to the development of these techniques, because they are not conceptual flaws, but simply technological obstacles that can be removed.

Although I have a lot of (unpublished) work describing the performance of multi-channel systems, I achieved a breakthrough only when I understood that I had to focus my efforts on describing the general properties of the Fisher information content of fluorescence detection rather than the Fisher information of a specific experiment. Fisher information is the information content that an experiment provides about an unknown we wish to estimate. Its inverse is the smallest variance attainable in an experiment, what is called the Cramér-Rao limit. In other words, by maximizing Fisher information, we maximize the precision of our experiments.
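For those who like to see it written down, this is the textbook single-parameter Cramér-Rao bound for an unbiased estimator (a standard result, stated here only as a reminder):

\[
\mathrm{Var}(\hat{\theta}) \;\geq\; \frac{1}{I(\theta)},
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta}\,\log L(X;\theta)\right)^{2}\right]
\]

where \(L(X;\theta)\) is the likelihood of the detected photons given the parameter \(\theta\) (for instance, a fluorescence lifetime).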

Photon-partitioning theorem

The second breakthrough was the understanding that the best description of precision in biophysical imaging techniques is possible only by defining the concept of biochemical resolving power, a generalization of the resolving power of a spectrograph to any measured photophysical parameter and, from there, to its application to biochemistry. The biochemical resolving power is proportional to the square root of the photon-efficiency of a microscopy technique and the number of detected photons. Maximization of Fisher information leads to the maximization of photon-efficiency and, therefore, to net improvements in biochemical resolving power. This definition complements the definition of spatial resolution in microscopy and allows us to define when two objects are spatially and/or biochemically distinct. It is worth mentioning that this is equivalent to stating that two objects are spatially and photophysically distinct, but we use the photophysics of fluorophores to do biochemistry, hence my nomenclature. I see possible implications for other techniques, including super-resolution, and perhaps this will be the subject of future work.
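In symbols, and with the caveat that the precise definitions are in the paper, one way to write the proportionality stated above is

\[
R_{\mathrm{biochem}} \;\propto\; \sqrt{E \cdot N}
\]

where \(E\) stands for the photon-efficiency of the technique and \(N\) for the number of detected photons; these symbols are just shorthand for this post, not the paper's notation.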

The third breakthrough was the use of numerical computation of Fisher information rather than analytical solutions, which are not always available. This approach is very common in engineering but not in our field. With it, we can now optimize the properties of any detection scheme in order to attain the highest performance.
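To make the idea concrete, here is a small, self-contained sketch (in Python, with a made-up two-gate lifetime example; neither the code nor the detection scheme is from the paper) of how Fisher information can be computed numerically for a detector with independent Poisson channels:

```python
import numpy as np

def fisher_information(expected_counts, theta, eps=1e-6):
    # Numerical Fisher information matrix for independent Poisson channels:
    # I_jk = sum_i (dlam_i/dtheta_j)(dlam_i/dtheta_k) / lam_i
    theta = np.asarray(theta, dtype=float)
    lam = expected_counts(theta)
    grads = np.empty((theta.size, lam.size))
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = eps
        grads[j] = (expected_counts(theta + d) - expected_counts(theta - d)) / (2 * eps)
    return grads @ np.diag(1.0 / lam) @ grads.T

# Toy detection scheme: two time gates splitting N photons from a
# mono-exponential decay with unknown lifetime tau (illustrative only).
def gated_counts(theta, n_photons=1000.0, gate=2.0):
    tau = theta[0]
    p_early = 1.0 - np.exp(-gate / tau)
    return n_photons * np.array([p_early, 1.0 - p_early])

info = fisher_information(gated_counts, np.array([2.5]))
crlb = np.linalg.inv(info)  # Cramer-Rao lower bound on var(tau)
print("smallest attainable std of tau:", np.sqrt(crlb[0, 0]))
```

Once the expected counts of a candidate detection scheme are written as a function of its design parameters (gate positions, spectral channels, and so on), the same calculation can be wrapped in a numerical optimizer to search for the scheme with the highest photon-efficiency.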

This is very specialist work and I assume not many people will be interested in it, although the implications of this piece of theory for everyone’s experiments are significant. I believe this is my most elegant theoretical work, but I guess that is a matter of opinion. During the refereeing process the paper had to be expanded well beyond what I wished to publish, and it now includes examples, software, etc. I think the theoretical introduction and the mathematical demonstrations are the best part, and the description of the numerical optimization of Fisher information the most useful.

NOTE: there are two typographical errors in the published manuscript, in the definitions of photon economy and separability. These are described in a comment on PLOS ONE.