As an immigrant in the UK, it took me some time to understand the deeper meaning of Remembrance Day. Remembrance Day is lived by different British people in different ways, and to truly embrace it, one has to stare at a red poppy and feel what it means to them. You need an intimate meaning for the red poppy to relate to Remembrance Day; if you have one, wearing the poppy becomes not only a charitable gesture but a deeply meaningful act. As an immigrant from a country where the red poppy is not a tradition, it is therefore only after a decade in the UK that I can finally embrace this day wholeheartedly.
Remembrance Day is approaching, and I hope most people will reflect on what this day actually represents. It is the day when Commonwealth nations remember the soldiers who fell during the First World War and, by extension, the day many pay tribute to all those who died in wars. In these weeks, many people will use war rhetoric and revive patriotic emotions. Many will proudly wear red poppies, to support veterans, as a statement of national pride, to remember the fallen, or because of social pressure. Like every year, the news will invite comments; there will be vast support, but also critical opinions, and rebuttals of those critiques. Eventually, Remembrance Day ends up being all those things. This year, however, I will have my first true Remembrance Day, when I will not care about what this day means for others but only about what it means for me. There are two reasons. One is that, after many years, I relate to British traditions as my own. The second is that my daughter, a British citizen who self-defines as English, is in year 1 at school, and I need to talk with her about the red poppy.
However, the colour of this flower and the origin of this symbol – the devastated fields where soldiers died during the First World War and were then covered by red poppies – are so evocative that many people cannot help associating the scarlet of the poppy with the blood of the soldiers who died in the war.
So, what is the red poppy for me? It is the blood of the soldiers shed during wars, but it is also the blood of the civilians crushed between opposing fronts. The bloody tears of those who survived, the broken families, the broken hearts, the children, the mothers and fathers, the elderly who died in battle or were visited by death at home. To me, the red poppy and Remembrance Day are reminders that we should always do everything possible to avoid conflict and war.
As war rhetoric comes back into fashion even in democratic countries, as authoritarian movements gain public consensus, and as too many people take pride in dividing nations rather than uniting them, we should not shy away from the deeper meaning of this day.
To me, the red poppy is the blood that should never be spilt again but that will, and does.
And therefore, I will embrace this Remembrance Day as my own. With gratitude for the brave soldiers who defended our freedoms, but with shame, because we have asked them – and will ask them again – to kill and to die, instead of just standing vigilant, watching with pride our democracies working peacefully together.
In fluorescence microscopy, colocalization is the spatial correlation between two different fluorescent labels. Often, we tag two proteins in a cell with distinct fluorescent labels and check whether and where the staining localizes. When there is a “significant overlap” between the two signals, we say that the two molecules “colocalize”, and we might use this observation as possible evidence for a “functional association”. We might argue that measuring colocalization is one of the simplest quantifications we can do in microscopy. Yet, many horror stories surround colocalization measurements. This post is not a review of how to do colocalization but a brief, casual discussion of a few common controversies, aimed – as is often the case for my posts – at junior scientists.
“I am imaging GFP, but the image is blue, can you help me?”. Well, this is not a question about colocalization, but it illustrates a fundamental issue. In truth, cell biology is such an inherently multidisciplinary science that – in most cases – a researcher might need tens of different techniques on a weekly basis. It is thus not surprising that many researchers (I dare say most) will be experts in some of the techniques they use but not all. Microscopy is particularly tricky. To be a true expert, you need to handle a wealth of physical, engineering and mathematical knowledge alongside experimental techniques that might span chemistry, cell culture and genetic engineering. However, the wonderful commercial systems we have available let us get a pretty picture of a cell with just the click of a button. Here is the tricky bit: you want to study a cell, and you get a picture of a cell. One is led to confuse the quantity one intends to measure with the information one is actually gathering, and with its representation. This is true for any analytical technique, but as ‘seeing is believing’, imaging can misrepresent scientific truth in very convincing ways. Hence, while I have no doubt that, upon reflection, the non-expert user would have understood why the picture on the screen was ‘blue’, the initial temptation was to believe the picture.
Question what you set out to measure, what the assay you have set up is actually measuring, and what the representation is showing. Trivial? Not really. It is an exercise we explicitly do in my lab when we have difficulty interpreting data.
“It is yellow, they colocalize, right?”. Weeeeeeeeellll… maybe, maybe not. Most of you will be familiar with this case. Often researchers acquire two images of the same sample, the pictures of two fluorescent labels; one is then represented in green and the other in red. In an overlay of the red and green channels, pixels that are bright in both colours appear yellow. I would not say that this approach is inherently flawed, but we can certainly state that it is misused most of the time and, therefore, I try to discourage its use. One issue is that colour-blindness, not as rare as people think, renders this representation impractical for many colleagues (hence my colour highlights!), but even people with perfect vision see colours with lower contrast than grey-scale representations, and green more so than red. Eventually, to ‘see yellow’, it is almost unavoidable to boost the brightness of the two underlying colours to make the colocalization signal visible. This can be done either during the acquisition of the image, often saturating the signal (bad: saturated pixels carry very little and often misleading information), or during post-processing (not necessarily bad, if declared and properly done). Either way, by the time you are doing this, your goal of being quantitative has probably been missed. The truth is that a lot of biological work is non-quantitative, but faux-quantitative representations or statistics are demanded by the broader community even when unnecessary. Let’s consider an example with one of the stains being tubulin and the other a protein of interest (PoI). Let’s assume the PoI localizes at nicely distinguishable microtubules in a few independent experiments. Once the specificity of the stain is confirmed, the PoI can be considered localized at the microtubules (within the limitations of the assay performed) without the need for statistics or overlays.
Unfortunately, it is not rare to see papers, even after peer review, showing a diffuse staining for at least one of the PoIs, perhaps a more localized stain for the second, and a ‘yellow’ signal emerging from an overlay that is considered colocalization instead of what it is: just noise. Another common issue is localization in vesicles. Again, any cytoplasmic PoI will appear to colocalize with most organelles and structures within the cytoplasm when imaged with diffraction-limited techniques. Sometimes punctate stainings might partially overlap with known, properly marked vesicles, let’s say lysosomes, but not all of them. Then the issue is to prove that, at least, the overlap is not random and, therefore, statistics in the form of correlation coefficients are necessary.
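To give a rough idea of the kind of correlation statistic involved, here is a minimal sketch in plain Python with made-up toy ‘stains’ (real analyses would use dedicated tools, background correction and randomization controls for significance):

```python
import math
import random

def pearson_colocalization(channel_a, channel_b):
    """Pearson correlation coefficient (PCC) between two equal-length
    lists of pixel intensities (e.g. the flattened images of two channels).
    A PCC close to 1 suggests correlated signals; close to 0, no correlation."""
    n = len(channel_a)
    assert n == len(channel_b) and n > 1
    mean_a = sum(channel_a) / n
    mean_b = sum(channel_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(channel_a, channel_b))
    var_a = sum((a - mean_a) ** 2 for a in channel_a)
    var_b = sum((b - mean_b) ** 2 for b in channel_b)
    return cov / math.sqrt(var_a * var_b)

# Toy 'images': a punctate stain, a partner that co-occurs with it,
# and a diffuse stain that overlaps everything by chance.
random.seed(1)
puncta = [random.choice([0.0, 1.0]) for _ in range(10_000)]
partner = [p + random.gauss(0.0, 0.2) for p in puncta]
diffuse = [random.gauss(0.5, 0.2) for _ in range(10_000)]

pcc_high = pearson_colocalization(puncta, partner)   # close to 1
pcc_low = pearson_colocalization(puncta, diffuse)    # close to 0
print(pcc_high, pcc_low)
```

Note that a diffuse stain gives a near-zero correlation with the punctate one even though the two signals overlap everywhere: overlap alone is not colocalization.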
“The two proteins do not colocalise, two molecules cannot occupy the same volume.” Really!? Well, from a quantum mechanics standpoint… no, do not worry, I am not going there. I have received that criticism during peer review in the past and, until recently, I thought it was a one-off. However, I have recently realised that I was not the only person reading that statement. I am really uncertain why a colleague would feel the need to make such an obvious remark, except for that condescending one-third of the community. I should clarify that, to my knowledge, no one implies physical impossibilities with the term colocalization. That statement is perfectly fine in a casual discussion or to teach beginners the basics. Some of us might also enjoy discussing definitions, philosophical aspects of science, or controversial (real or perceived) aspects of techniques, but better at a conference or in front of a beer, rather than during peer review. The issue here is that, while it is reasonable to criticise certain sloppy and not too uncommon colocalization studies, in general colocalization can be informative when properly done.
“So, is measuring colocalization useful?” Homework. Replace ‘colocalization’ with your preferred technique. Done? Now try to make the same positive effort for colocalization. Every technique is useful when used properly.
You might have noticed I marked some words in my introduction: colocalize, significant overlap and functional association. It is important that we understand what we mean by those words. Colocalization means co-occurrence at the same structure, a non-trivial correlation between the localization of two molecules of interest, within the limits defined by the resolution of the instrumentation. The “significant overlap” should really be replaced by “non-trivial correlation”. Non-trivial, as diffuse stainings, unspecific stainings and saturated images can very easily result in a meaningless colocalization of the signals but not of the molecules of interest. Correlation, as the concept of overlap might be improper in certain assays, for instance in some studies based on super-resolution microscopy. Even after we have done everything properly, we still cannot say that if protein A and protein B colocalize they interact (see slide). However, we can use colocalization to disprove the direct interaction of two proteins (if they are not in the same place, they do not interact), and we can use high-quality colocalization data to suggest a possible functional association that might not be a direct interaction and that should then be proven with additional functional assays.
Then, my friends, do make good use of colocalization as one of the many tools in your laboratory toolbox, but beware: just because it is simple to acquire two colourful pretty pictures, there are many common errors that people make when acquiring, analysing and interpreting colocalization data.
P.S.: if I cited your question or statement, please do not take it personally. As I have written, not everyone can be an expert in everything, and the discussion between experts and non-experts is very useful, hence my use of real-life, anonymized examples.
Well, we are not rocket scientists, but we could not miss the opportunity to speak about the space race at the Science Day of our local primary school, so close to the 50th anniversary of the Moon landing. The inspiration came from the book “Space Race” by Deborah Cadbury. After reading it, a summary of the space race became one of the bedtime stories we tell our daughter. When the time came to pick a story for the Science Day, after discussing work-related topics ranging from DNA extraction to optics, we opted for the space race and the Moon landing. We are no experts in outreach, but after a few years of volunteering, we can tell you that a well-done job is a hard job and a rewarding one. Also, as with any other communication-based activity, the three main tricks to achieve impact are: i) tell a compelling story, ii) think about your audience, and iii) be prepared.
The space race and the Moon landing still make a very inspirational story. It is a story of exploration, science and technology; it is a race but also a monumental feat of teamwork. It has its roots in the cold war and the manufacturing of weapons of mass destruction… a story that ended with a blast-off to the Moon, inspiring generations instead.
The first step in the organization, for us, was to see what basic experiments people do in classrooms around the world. We clocked several hours over a few weeks trying to understand what is possible and what might excite pupils. Google and YouTube were the most obvious starting points. This activity was fun (well, particularly if you are a bit geeky!) but also stressful when we noticed we were not converging on a particular set of experiments to demonstrate. Everything changed when we decided which story we would tell, as we were then able to rethink all the material we had explored from a different perspective.
The second step was gathering materials and more information. We studied facts about the Moon, rockets and the space race. Most of it was general information that could be useful to answer questions; some of it ended up in an introduction supported by a few slides. At the same time, we went shopping, both targeting specific items and browsing toy shops randomly, trying to identify anything that could be useful. We kept brainstorming about a possible storyline and experiments to demonstrate, finally converging on a plan.
The third step was preparing for the day. We prepared a few slides and selected a few fun facts to share. While strictly speaking unnecessary, in private we discussed the sensitive topics: the drive of science and technology during the cold war to prepare weapons of mass destruction, and how this turned into a different type of race to reach the Moon, with elements of competition and teamwork. While, of course, we did not discuss these topics in the classroom, we were able to emphasize concepts that are important to us: the use of science and technology for good purposes (exploration and discovery) rather than bad ones (war), and racing as a fun activity, while highlighting how teamwork is essential to reach very high goals.
Before the day came, we just needed to be sure that the day at school was organized properly, and we were lucky that Emily Boyce from the Babraham Institute had organized an excellent schedule for the entire day, handled logistics and liaised with teachers, so we could spend all our time on the activities themselves. Finally, risk assessments. Yes, they are boring, and sometimes they seem superfluous, but if done properly they help you think about what could go wrong and prevent accidents. As they are a legal requirement anyway, make the best use of them to help you plan the event logistics.
On the day
We had prepared a few slides with full-screen images from the Apollo missions (a fired-up Saturn V, the Moon lander, Armstrong’s footprint, a map of the solar system), and we had a passionate and engaging chat with the students (see ‘Let’s talk about the Moon’ section). While the students were engaged, one of us set up all the contraptions needed for the later part of the session.
Next, we wanted to introduce the concept of propulsion and Newton’s third law of motion. We started with this toy we found in a store:
We simply showed how air pushed into the ‘rocket’ can lift it up, in small jumps, catching the rocket with our hands. With the reception class, we let some children play with it, while with year 3 we made some jokes (e.g., ‘do you see a big man or woman pushing a large pedal under the rocket?’ while pointing to the image of a fired-up Saturn V ready for lift-off) and asked them to explain to us what was happening.
Next, we explained that this is not how rockets work and released rocket balloons in the room, which we had inflated and clipped before entering. When thrown (not just released from rest), these balloons are propelled around the room.
We engaged the students by asking what they thought was happening and clarified that air escaping from the balloon pushes the balloon forward. The uncoordinated movement of the rocket balloons let us introduce the next contraption. We had placed a mock-up Moon in the corner of the classroom. Because of the limited time available, we had prepared it at home from recycled materials stuffed into a plastic bag, forced into a spherical shape with sticky tape and then covered with aluminium foil. We left the knotted handles of the bag out of the aluminium foil to anchor two fishing lines. The fishing lines were several metres long, enough to cover the length of a classroom. There are plenty of instructions on the internet on how to build a rocket balloon guided by a string. I would recommend a more visible line than the one I found in the local shops, but here are the materials we used.
We inflated the balloon with an air pump, taped the straw to the top of the balloon with two long pieces of sticky tape, and drew a fun face on the balloon with a permanent marker. We then took one of the prepared fishing lines and demonstrated how the rocket balloon could reach the Moon, asking the children to do a countdown, after which we released the clip. This was just an introduction to the main activity of the session, where we split the class into groups and handed out materials so they could prepare and decorate their own balloons. As we had pre-made two fishing lines, we let the groups race to the Moon in pairs.
We had planned to stop here if we ran out of time, but we had also prepared a different ending. Our sessions were 45 minutes long, and we discovered there was enough time for it. We pointed out that no one inflates real rockets, and we introduced the concept of rocket fuel.
Before the beginning of the session, we had poured two shots of malt vinegar into a tall glass. When the time came, we uncovered the glass and chatted about liquid and solid fuels, introducing the concept of the chemical reactions used to propel a rocket. We then added a teaspoon of bicarbonate of soda to show the formation of large amounts of froth. During testing at home with the materials we could find in the local shops, we had accidentally realized that malt vinegar generates a lot of froth and that we could use this as a trick, comparing the froth to the vapours and flames coming out of a rocket engine.
Finally, we showed how this could be used to propel a rocket by inflating a balloon. We tested a few materials and opted for a small plastic bottle with white vinegar. Keep in mind that we used what we could find at the local shop, and other combinations could work better. We added four shots of vinegar to the empty juice bottles. The labels were removed, and we wrote the contents on the bottles with a marker. We also kept the bottle under control at all times, but its familiar shape attracted attention from younger children, and we would probably use a different bottle, or cover it with paper, if we were to do this again, just to avoid a child grabbing it and trying to drink from it during the confusion of some of the activities.
To make things simple on the day, we prepared balloons filled with two teaspoons of bicarbonate of soda, gently clipped, with excess powder blown away from the opening of the balloon.
At the right moment, we removed the clip and attached the balloon to the neck of the bottle, paying attention not to let any powder drop into the bottle. We then raised the balloon, permitting the powder to mix with the vinegar, while holding the neck of the bottle firmly to prevent the balloon from shooting across the class and spraying vinegar. We kept the vinegar a bit warmer than room temperature by pouring some hot water into a cup and keeping the bottle of vinegar in it. This was done in a staff room for safety. Lukewarm vinegar reacts faster with bicarbonate of soda, resulting in very fast inflation of the balloon.
This is how we prepared our Science Day activities. Each of the experiments is rather common, and we were inspired by a lot of the materials we read and watched. However, it is important to test every single experiment at home and to identify the most appropriate materials and doses in order to ensure the timely and safe execution of each of them. Together, we probably invested about 50 hours of work in this activity, in addition to the day spent at the school, using evenings and spare time to plan the activities.
1) Who can tell what the Moon is? It is a space rock we call a satellite that turns around (orbits) the Earth. It was formed about 4.5 billion years ago when a large space object hit the Earth, and the debris from this crash formed the Moon. The Moon completes its turn around the Earth in 27.3 days.
2) What colour is the Moon and what is it made of? It is made mostly of dust and rocks; there is no atmosphere, no water and no life. Just mountains and large craters. The Moon itself does not produce any light; we see it shining because it reflects light from the Sun.
3) We see only one side of the Moon (also called the near side), why is that? While orbiting the Earth, the Moon also rotates around its axis, and this rotation takes the same amount of time as one orbit around the Earth. That is why we can see only one side of the Moon (about 60% of its surface).
4) What is the temperature on the Moon? Hot or cold? Well, both, actually. During the day, when the Sun hits the surface of the Moon, temperatures can reach 127°C. You could fry an egg without a stove. During the night, the temperature can drop to a freezing -173°C.
5) Did you know that you weigh six times less on the Moon? That is because gravity (the force that pulls us down to the ground) on the Moon is weaker than gravity on Earth. You can jump really high on the Moon. In fact, astronauts wear their heavy boots to help keep them on the ground.
6) How far is the Moon from us? It is really, really far, about 384,000 km. If you were to drive this distance by car, it would take you about 150 days. However, thanks to the rockets built by very talented teams of engineers and scientists, we can reach the Moon in just 3 days, and we have exciting opportunities to explore space!
We humans have always been curious about the world around us, and the Moon has always been one of our biggest curiosities. Using his telescope, Galileo documented many observations of the Moon in the 1600s. We have come a long way since then, and thanks to the space rockets we have built, we can explore places “Where No Man Has Gone Before”.
Rockets were initially developed for war, unfortunately. Luckily, later on, we realised we could use and develop rockets for much better goals – to explore deep space. The space race began. The first country to send an astronaut into space was the Soviet Union, with Yuri Gagarin and his Vostok 1 capsule. The Soviet Union also sent the first satellite into orbit (Sputnik) and the first rockets to the Moon, with the spacecraft Luna 1 passing very near the Moon and Luna 2 crash-landing on our satellite in 1959. Then, in 1969, an incredible adventure led by the USA brought the first people to the Moon. Engineers and scientists in the USA built a massive rocket, the Saturn V, which carried three brave astronauts to the Moon on the Apollo 11 mission. The crew was led by Neil Armstrong, who made the first step on the Moon. As Neil Armstrong said, this was “one small step for [a] man, one giant leap for mankind!”
Armstrong’s footstep will be a long-lasting one as well. It will last in our culture as the most exciting moment of a long adventure. It will last a long time on the Moon, too, where there is no wind to wipe it away.
Is it a cat? Is it a dog? Is the average between a cat and a dog a real thing, perhaps a caog or a doat?
Not all science needs to be based on single-cell detection, and there are plenty of cases where single-cell measurements are superfluous. However, too often we fail to appreciate the huge mistakes we can make in biology when we forget the assumptions we make when using population measurements.
But which assumptions do we really make?
Often implicitly, when doing population measurements (e.g., Western blots, sequencing, proteomics, etc.) we assume that the populations of cells we measure are homogeneous and synchronous. Or, at least, we assume that any differences are unimportant and can be averaged out. In the best cases, we try to enforce a degree of synchronicity and homogeneity experimentally. In reality, one of the most important assumptions we implicitly make is that the system we analyse is ergodic. In physics and statistics, an ergodic system is a system that, given a sufficiently long time, explores all its possible states and, consequently, averages over time on a single cell and averages over a population at a given time are the same. However, there are limits to this assumption in biology. The obvious example is the cell cycle. There is a significant literature about ergodicity and the cell cycle [e.g., 1, 2, 3] and how this principle can be exploited, but…
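To make the ergodic assumption concrete, here is a toy simulation (plain Python, with purely illustrative names and numbers): a ‘cell’ hops between a low and a high expression state, and, because this little system is ergodic, following one cell for a long time gives the same answer as taking a snapshot of a large population.

```python
import random

# Toy ergodic system: a 'cell' switches between a low (0) and a high (1)
# expression state with a small probability at every time step.
P_SWITCH = 0.1

def step(state):
    return 1 - state if random.random() < P_SWITCH else state

random.seed(42)

# Time average: follow ONE cell for many time steps.
state, total, n_steps = 0, 0, 200_000
for _ in range(n_steps):
    state = step(state)
    total += state
time_average = total / n_steps

# Ensemble average: a snapshot of many cells after a burn-in period.
n_cells, burn_in = 5_000, 100
snapshot = []
for _ in range(n_cells):
    s = random.choice([0, 1])
    for _ in range(burn_in):
        s = step(s)
    snapshot.append(s)
ensemble_average = sum(snapshot) / n_cells

print(time_average, ensemble_average)  # both near 0.5: the two averages agree
```

When the system is not ergodic – for instance, when cells carry slowly changing or heritable states, as with the cell cycle below – the two averages need not agree, and the population snapshot can mislead.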
The lottery for cell division makes you grow faster.
There is a particular phenomenon that we encountered while working on this project that fascinated me for its simplicity and consequences. How can cells increase their fitness (i.e., their growth rate)? One obvious answer is by dividing faster. Another, at first glance less obvious, answer is by exhibiting a heterogeneous cell cycle length. Let’s consider a population of cells that divides every 24 hours. Over one week, these cells will reach 128 times the original population size. Now, let’s consider cells that divide on average every 24 hours but exhibit random variation in cell cycle length, with a standard deviation of 4 hours and a normal distribution. Cells with a 20-hour or a 28-hour cell cycle are equally probable. However, over one week, cells with a 28-hour cell cycle will grow 64-fold, while cells with a 20-hour cell cycle will grow about 340-fold. On average, these cells will grow ~200-fold, much faster than cells dividing precisely every 24 hours (128-fold). This is true for any pair drawn at equal distances from the two sides of the average; since these pairs are equiprobable, cells dividing at a given average cell cycle length grow faster with increasing heterogeneity. Let’s remember that this can occur not just in the presence of genetic differences, but also for purely stochastic variations, where the progeny of one cell will not keep the same cell cycle length but will keep changing randomly according to an underlying distribution. This phenomenon has been observed experimentally, for instance in yeast with single-cell measurements, but it occurs in any cellular system, as described in the literature and in our own work. Population measurements might conceal these very important phenotypic or mechanistic differences.
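The back-of-the-envelope numbers above can be checked in a few lines of code (plain Python; a simplified picture in which each lineage keeps its drawn cycle length for the whole week, rather than a full branching simulation with per-division resampling, which shows the same qualitative effect):

```python
import random

HOURS_PER_WEEK = 7 * 24  # 168 h

def fold_growth(cycle_hours, total_hours=HOURS_PER_WEEK):
    """Fold expansion over `total_hours` of a lineage dividing every `cycle_hours`."""
    return 2 ** (total_hours / cycle_hours)

# Homogeneous population: every cell divides every 24 h.
print(round(fold_growth(24)))  # 128

# One equiprobable slow/fast pair, symmetric around 24 h.
slow, fast = fold_growth(28), fold_growth(20)
print(round(slow), round(fast), round((slow + fast) / 2))  # 64 338 201

# Many lineages with normally distributed cycle lengths (mean 24 h, sd 4 h).
random.seed(0)
samples = [fold_growth(max(random.gauss(24, 4), 8.0)) for _ in range(100_000)]
mean_fold = sum(samples) / len(samples)
print(round(mean_fold))  # well above 128
```

The effect is a consequence of convexity (Jensen’s inequality): fold growth is a convex function of cycle length, so any symmetric spread around the mean increases the average growth.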
The mixture of two normal distributions is not another normal distribution.
The beauty of the normal distribution is that it is such a ‘well-behaved’ distribution and, at the same time, it represents many physical and biological phenomena. If a population we are characterizing is a mixture of two normal distributions, its average is the weighted average of the two means. If the two components have the same mean, the variance of the mixture is the weighted average of the two variances. These basic and useful mathematical relationships can also be rather misleading. In fact, while these statements are mathematically correct, two populations of cells that ‘behave rather differently’, for instance in response to a drug, cannot be meaningfully averaged. For instance, one cell population might be killed by a given concentration of a drug while another population is resistant. By detecting 50% cell death, we could assume – incorrectly – that dosing at higher concentrations would kill more cells.
The plot shown below illustrates this basic principle. The blue and red distributions, averaged together, exhibit the same mean and variance as the yellow distribution, but they represent very different systems. If the blue distribution represents the sizes of cats and the red distribution the sizes of dogs, the yellow distribution does not represent the size distribution of any real animal. In other words, the average phenotype is not a real phenotype; in the best-case scenario, when there is a dominant population, it represents the most frequent phenotype (the mode). In all other cases, where the homogeneity of the phenotype is not checked, the average phenotype might simply be wrong.
This is a very simple illustration of a problem we frequently encounter in biology: trusting our population measurements (averages and standard deviations over experimental repeats) without being sure of the distributions underlying our measurements. In the figure above, the purple distribution has the correct average of the blue and red distributions, but its width is the statistical error of the assay and is unrelated to the scatter of the biological phenomenon we are measuring. Sometimes we cannot do anything to address this problem experimentally because of the limitations of the available technologies, but it is very important – at least – to be aware of these issues.
Just for the most curious, I should clarify that for two Gaussian distributions with relative weights A and B, we can define a mixing parameter p=A/(A+B). The mean of the mixed population is simply μP = p*μA + (1-p)*μB, i.e., for p=0.5, the average of the two means. The apparent variance is σP² = p*σA² + (1-p)*σB² + p(1-p)*(μA-μB)², i.e., σP² is the weighted average of the variances plus the squared separation of the two means weighted by the product p(1-p) of the mixing parameters.
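These formulas are easy to verify numerically; here is a quick sketch (plain Python, with toy numbers for the ‘cats’ and ‘dogs’ example):

```python
import random

def mixture_stats(p, mu_a, sd_a, mu_b, sd_b):
    """Analytic mean and variance of a two-component Gaussian mixture
    with mixing parameter p for component A."""
    mu = p * mu_a + (1 - p) * mu_b
    var = (p * sd_a ** 2 + (1 - p) * sd_b ** 2
           + p * (1 - p) * (mu_a - mu_b) ** 2)
    return mu, var

# 'Cats' ~ N(0, 1) and 'dogs' ~ N(4, 1), mixed 50/50.
mu, var = mixture_stats(0.5, 0.0, 1.0, 4.0, 1.0)
print(mu, var)  # 2.0 5.0 -- far wider than either component (variance 1)

# Monte Carlo check: sample the mixture and compare.
random.seed(7)
xs = [random.gauss(0, 1) if random.random() < 0.5 else random.gauss(4, 1)
      for _ in range(200_000)]
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)
print(round(m, 2), round(v, 2))  # close to 2.0 and 5.0
```

Note how the apparent variance (5) is dominated by the separation term p(1-p)(μA-μB)² = 4, not by the spread of either real population.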
Collective behaviour of cells is not an average behaviour, quite the contrary.
When discussing these issues, I am often confronted with the statement that, eventually, we do not care about the behaviour of individual cells but about the collective behaviour of groups of cells. There are two important implications to discuss. First of all, when arguing for the importance of single-cell measurements, we do not argue for the importance of studying individual cells in isolation. Quite the contrary: we should measure individual cells in model systems as close as possible to the physiological state. However, many assays are incompatible with the study of cell behaviour within humans, and we resort to a number of model systems: individual cells separated from each other, 2D and 3D cultures, ex vivo and in vivo assays. The two arguments (single-cell measurements versus measurements in more physiological models of tissues or organisms) are not the same.
Second, collective behaviours are not ‘average behaviours’. There are great examples in the literature, but I would advise simply visiting the websites of two laboratories that I personally admire and that nicely and visually illustrate this point: John Albeck’s laboratory at UC Davis and Kazuhiro Aoki’s laboratory at NIBB. Collective behaviours emerge from the interaction of cells in space and time, as illustrated by waves of signalling or metabolic activity caused by cell-to-cell communication in response to stimuli. The complex behaviours that interacting cells exhibit, even just in 2D cultures, can be understood when single cells and their biochemistry are visualized individually. Once again, phenotypes or their mechanisms might be concealed or misinterpreted by population or snapshot measurements.
This is, of course, not always the case. However, my advice is at least to keep in mind the assumptions we make when we perform an ensemble or a snapshot measurement and, whenever possible, to check that they are valid.
There are grants, there are great words written, there seems to be strong support, but how does working between disciplines really work? Let me tell you at least how it has worked for me. This is a long read, but if you do not wish to get to the bottom of it, my advice (sadly) is the advice I once received and did not follow (with no regrets): consolidate your career in one discipline/department/subject (silo?), then you will be free to roam between disciplines at a later stage.
A very early choice to work across disciplines* | As a young boy, alongside sports, I picked up electronics and computers as hobbies, leading me to select scientific studies at high school. I then developed a keen interest in physics and biology. When the time came to decide which courses to follow at University, I wanted to combine these interests, applying Physics to understand Life. However, I was undecided whether to pursue this growing passion through studies in medicine, engineering, physics or biology. In a very uncharacteristic move for me, as a shy youngster from a family of non-academics and from a town without a university, I found myself sneaking into the Department of Physics at the not-too-far University of Genoa, asking to speak with a scientist to get advice. I still remember that a Dr Rossi at the CNR in Genoa explained to me how I could approach my interest following different paths. While I never met Dr Rossi again and I do not recall the details of my visit, on that day, after speaking with him, I decided to study Physics and to become a researcher in biophysics.
Here I got into the first silo | Genoa was an excellent place to study biophysics, as it was one of the towns where biophysics started in Italy and it had a mature and vibrant biophysical community. However, I got an early warning about what it meant to work across disciplines. Having opted for Physics, I first had to become a proper physicist, well-grounded in mathematics and theoretical physics. As I generally did well at high school without too much studying, investing most of my spare time in tinkering with computers and electronics and doing athletics, University was a shock. With no tutoring and no advice (today things have changed), the first two years at University were brutal for me: I was unable to cope with the workload, seeing around me not only friends who were doing well but many who were dropping out (I believe we had a 50% drop-out rate). Until one day, sitting on the floor of the Physics library… studying maths from a book grabbed from the shelves… breathing the pages of old books… I finally got it. I found my way to study maths, my way to study 24/7. After that mountain was climbed, I picked the few – very formative – courses related to biophysics I could and I finally completed my studies. Although University could have been simpler for me with the tutoring and help that are available nowadays, I am grateful that I was forced to build a very strong theoretical background – no compromise allowed – and I am happy with that first choice of doing Physics at Genoa. However, the first warning was there, unnoticed at the time. To study Life with the tools of Physics, I had studied quantum mechanics, advanced mathematics and particle physics, but I had not taken a single course in biology or biochemistry. This, despite the fact that what you would nowadays call my master thesis was a year-long experimental work in neurosciences.
The fact that I was doing biophysics in a very interdisciplinary environment partially concealed the fact that science (still) works in silos.
Training at the interface | My choice of a PhD was a bit more random. At the time, I knew I wanted to work with proteins (very vaguely) and I had strong training in fluorescence microscopy. While the search for a laboratory in which to do a PhD should be done differently, once again without guidance except for Altavista and Lycos (read ‘Google’ back then), I identified a first batch of laboratories working with proteins and optics. As my first, unplanned search landed me immediate job offers, I was attracted by a very charismatic scientist, Prof. Fred Wouters at the European Neuroscience Institute in Goettingen. My duty was to develop biochemical imaging tools (FRET/FLIM) to study protein-protein interactions relevant to neurodegeneration. At the same time, I enrolled at the University of Utrecht, under the supervision of Prof. Hans Gerritsen, with whom I later obtained my PhD in Physics. Thanks to my struggles at Genoa, I was able to fly: build microscopes, write theory, and apply imaging tools to solve biological problems. I completed a successful and productive PhD, by the end of which I was able to do tissue culture and molecular biology as well. Finishing up, on a long train journey to visit my partner, who was working in Bonn (also a scientist), I asked myself what I wanted to do, and the answer, since then unchanged, became clear: study how cells process information to take decisions by advancing microscopy tools dedicated to the study of biochemical pathways. In that moment I committed to working at the interface and to doing both physics and biology.
Swapping disciplines and subjects, the untold dangers | The move to my first real post-doctoral experience was once again insufficiently planned career-wise. At the time, I started to be introduced at talks or in conversations as “one of the top experts in FRET” or “one of the few scientists who can handle biology and physics equally well”. Young experts working across disciplines, particularly with a background in physics and – I suppose today – in mathematics and computing, have no problem finding a job at post-doctoral level. I sent two applications, got two job offers, and opted for the one in Cambridge, as my wife wished to apply to a lab there. The science (although not my focus, which was still neuroscience) and the environment were very interesting. My work was an attempt to falsify a homeostatic model of red blood cells infected by P. falciparum (the pathogen causing malaria). Once again I was working between disciplines, affiliated with the Dept. of Chemical Engineering and Biotechnology, supervised by Prof. Clemens Kaminski, and with the Dept. of Physiology, Development and Neuroscience, supervised by Dr. Virgilio Lew. Once again, grateful for the training received in Genoa, I flew, and I had a very successful and productive post-doctoral experience with my colleagues. However, I started to notice a few more issues.
First, despite the interest and the success, the move to malaria research was not strategic for my final goal and had potentially weakened my profile in the neurosciences. Second, the more senior you become, the more politics counts in seizing a position, and without the shelter of a chosen silo (either physics or biology), one might be a bit more at risk. I addressed the former issue by securing an EPSRC Life Science Interface fellowship, which I wrote to develop biophotonics tools to investigate the physiological role and interactions of some proteins involved in neurodegeneration.
An unexpected and exciting switch to cancer research | A few months into the fellowship, I was offered the chance to move my fellowship to the MRC Cancer Unit (back then known as the MRC Cancer Cell Unit), where I became, to all effects, a staff scientist. The request was clear: refocus my work on cancer research. EPSRC agreed, and I welcomed the request, as this was strategic to achieving exactly what I had planned a few years before, i.e. to study cell decisions by advancing biochemical imaging technologies. My third change of disease model, this time cancer or, more specifically, early oncogenesis, was both very good and bad for me. Very good, scientifically, as it permitted me to align my scientific ambitions perfectly with a disease model where they made perfect sense (cell decisions in cancer are very important and relevant to study). Bad, career-wise, as I once again changed subject, thereby further weakening my profile. However, the offer seemed good also in terms of career progression, and therefore I accepted. For the third time in a row, my fellowship was successful and productive, achieving my set goals, which were, however, more related to advancing technologies while I was being retrained in cancer biology.
The paradox of the praise of interdisciplinary research and the silo-like organization of academia | Science works in silos; it still does. These silos communicate, exchange expertise, and do contribute to beautiful cross-disciplinary work, but they are still silos, particularly career-wise. This more or less strongly compartmentalized operation is reflected in the difficulties of reviewing grants, papers and the career progression of interdisciplinary work, or of people at the interface, as discussed in the many articles published on this topic. For now, let me just report a couple of specific events that describe one aspect of the problem.
One day I was at a funding workshop during which several colleagues delivered talks about interdisciplinary science. One stated that there are excellent people who can do both biology and physics, referring to them as ‘hybrids’. He expressed his support for these hybrids and stated that, as they are rare, we have to fund collaboration between departments. After this comment – delivered by a scientist I have a lot of respect for – I was simply feeling great. Then other speakers clarified that they do not believe in individuals working across disciplines, expressing instead the need simply to collaborate across departments. This – of course – was quite a shock for me. So accustomed was I to reading and hearing praise for interdisciplinary work, and to striving at the interface despite the occasional hiccup and emerging ‘career frictions’, that the pieces of the puzzle came together only after that event.
The large majority of Universities, as far as I can tell, are still organized in mono-disciplinary Departments. Even when individual Departments or Institutes are very interdisciplinary, with biologists, clinicians, chemists, physicists, engineers, computer scientists and mathematicians working shoulder-to-shoulder, you should ask how much disciplinary diversity exists amongst the principal investigators, particularly the tenured academics. If the spread of disciplines suddenly shrinks to a few closely related backgrounds, you have a clearer picture of how interdisciplinary work is rewarded.
This is summarized by a comment I once heard at a conference. After a number of talks about magnetic resonance imaging at the university hospital, and praise for the mathematicians (PhDs and post-docs) who had contributed so much to the progress, one person from the audience popped the magic question: “what career prospects do you offer to these young mathematicians without whom this progress could not have been achieved?”. The response was delivered bluntly, honestly and respectfully: “None. We do not have possibilities for career advancement for mathematicians, but most of our PhDs and post-docs do well in industry after working with us”.
I am absolutely sure there are plenty of exceptions to what I am describing. However, I do not think I would be too wrong to warn you, perhaps a younger me, of the risks of leaving the shelter provided by a well-established silo, at least from a career perspective. A silo where career structures might be clearer and career progression, while still very difficult, might be more ‘natural’.
Am I in the wrong silo? | The last chapter of my story (for the time being) is still writing itself. More importantly, for those two of you young readers landing on this page, I should clarify that it is a story where many plots get entangled. I wished to answer questions such as “how was your experience working at the interface of the life and physical sciences?” or “how was swapping between different disciplines for you?”. However, the longer you stay in academia, the more other issues arise, such as reaching job security, finding a good balance between family and work, maintaining/finding/expanding resources (people, funds, space, instrumentation,…), supervising/mentoring people, finding a balance between research and other academic duties, etcetera. These and other important aspects of our work are common to any scientist, irrespective of how many disciplines or subjects they touch. However, working at the interface between disciplines adds – in my opinion – a little bit of friction to most of these processes.
I am doing biomedical research in a cancer research institute, I love it and I enjoy working with my colleagues. However, I am a biophysicist with a strong track record in biophotonics and not much of a track record in cancer biology. After the successful completion of my EPSRC fellowship, I was expecting to get a tenure-track position with dedicated resources. However, the new (however obvious it might appear writing it down now) condition I had to confront was to have a track record in cancer research, possibly with high-impact-factor journals. Retreat to the ‘shelter’ of Physics departments, or compete on this new ground of biomedical research at the game (that I do not even like nor endorse***) of impact factors? While the choice should be obvious, I personally focused only on my scientific ambitions, trying to establish what I like to call a “single-cell systems biology of cellular decisions”, and I opted, somewhat reluctantly, to play the game. I am sure that others would have handled the situation better. Personally, I enslaved all my physics/engineering/mathematics to the solution of biological questions and stopped publishing specialist work. At the same time, caving in to peer pressure, I focused on preparing manuscripts that, potentially, might be published in high-impact-factor journals, entering a very long cycle of ‘stashing’ data, seeking to have the most solid work and the most interesting narrative (I shiver spelling it out, and I have corrected this by using pre-print servers and resuming publishing specialist work).
Not only is the work I excel at invisible to most biomedical colleagues, who erroneously tag it as ‘just technology or methodology’; I mistakenly reinforced this trend by starting to bury a large part of my work in the supporting information of would-be high-impact-factor journal papers. Somehow, the need to fit into my environment, and the expectations of peers in cancer biology, referees and panels, made me behave as if I should be ashamed of the work I am actually best known for. The issue is not my institution, and certainly not my very supportive colleagues. Perhaps I am simply in the wrong silo in an academic environment that works as communicating silos. By now, however, I would be in the wrong silo in most academic silos, and I shall continue attempting to prove there is a reason to have some ‘hybrids’ working at the interface between disciplines.
A war of attrition | I shall conclude with a comment on something I believe is important for anyone who is ‘different’ in an academic environment, something I will expand upon in the future in a different context. In any very competitive environment, and Academia as I know it is highly competitive, the best might emerge. However, people of the same quality will experience different frictions. For example, even in the absence of outright discrimination, gender, ethnicity, nationality, religion, physical ability or even regional accent might each result in additional friction while climbing the career ladder, depending on the environment**. Working at the interface between disciplines, or swapping disciplines, will help to make you unique but, at the same time, it might add significant friction to your walk through Academia. While I have no regrets and I love – as a physicist – working on cancer biology in a biomedical research campus, I wished to warn those scientists willing to do the same of the possible hidden risks. Of course, this is just my story, but there is plenty of research out there showing how difficult it is to work across disciplines, for both individuals and teams.
You will love breaking free from the cages of disciplines. You will feel strained by the absence of a safe shelter.
So, perhaps, the solution is the one I was advised a decade ago, the advice I neglected as I assumed it was given out of self-interest. You might want to first establish yourself within a single discipline, be it physics or biology, for example. Once you have a well-established career, you will be able to use resources across disciplines.
That was not for me, I am a ‘hybrid’ after all.
* Be aware that in this blog post I use various terms to refer to working across disciplines (inter, cross, multi, etcetera). I do this in a very colloquial way.
** I do not intend to compare the issue of interdisciplinary research to the struggle of asserting civil rights! My point here is simply that, in the absence of outright discrimination (in those environments where this might be applicable), unconscious bias might remain, thus adding some friction to people’s careers. Bias against multi-disciplinary research is well characterized in the literature and, I argue, is yet another friction that adds to the normal challenges of academic progression.
*** I should clarify that I do not have anything against high-impact-factor journals. They are a business and they do it well. Moreover, they often provide great editorial input and production assistance. However, I am critical of the use of such journals in Academia, which, in practice and in many cases, slows down the discovery process.
Although I am no expert in livestock production and food chains, I do recall debates on making abattoirs more humane by ensuring that animals are not aware of their fate. In other words, the poor bovine should not see fellow animals being slaughtered, fearing for their lives in a long, slowly progressing queue towards death. Fair enough.
While travelling to London for a networking event, I was messaging over Slack with a friend, a former PhD student, casually chatting about a number of things. To his question ‘how are things going?’, I instinctively responded along the lines of ‘well, although growing tired of the slaughterhouse that is academia…’. Although I love Academia, I have also been openly critical of it over the years. However, I had never before described certain processes of Academia as a ‘slaughterhouse’. At the same time, the definition fits so well.
When I was a PhD student, I thought that for students like me, doing anything other than staying in Academia was a failure. Bovine-me was roaming the green fields of Germany and The Netherlands, happily fattening (quite literally in my case). A constant flow of fellow students would join us in ever greener pastures, cohort after cohort, many others leaving for higher hills never to come back, and a few – barely visible at a distance – growing older in one of the most remote fields. Fields where – bovine-friends say – the grass is the sweetest.
When the time came and the gates opened, we rushed to the next wonderful field. Those who did not rush were simply pushed by the flow of the pack. Despite the dynamic crowdedness of the lower fields, bovine-us kept decreasing in number, happily walking towards the gates opening onto those greener and sweeter pastures we had always fantasized about.
Most of us, fat and strong, perfect bovines, queued for the next gate, happily walking to an even better field, one-by-one, blissfully unaware, a pop, the last memory. Others are still grazing.
Academia depends on the constant flow of students through its classes, and many universities, no doubt those I know, do a wonderful job in training them. So many committed people are dedicated to passing and expanding knowledge down the generations. Academic research depends on a rather large cohort of PhD students and post-doctoral scientists working hard, often paid modestly considering their years of unpaid (or worse, debt-causing) training, lured into the next job by promises of stability but kept in an unforgiving state of precariat. Short-term contract after short-term contract, in a job where long-term vision should be key, we are subject to a constant process of review that in the best case is rigorous and tough, but that can often also be quite random and biased.
This process is largely physiological, as the academic system is very competitive. Many colleagues also express no concerns about it, on the basis that selection has to occur one way or another. However, the impact on the mental health of academic workers is now evident, and not only on students. I believe that a more efficient and fair system would be one that promotes leaving academia early as an active choice, where different career options are promoted, where there is clarity about the likelihood of promotion, and where there is no choice to be made between having a family and having a job.
Just to clarify the last point: once, at a leadership course, a colleague asked, “My husband has a tenured position at Cambridge University and we have a child. I am offered a tenured position elsewhere and I see no opportunity at Cambridge; what should I do?” The reply was: “I guess you have to make a choice between family and career”. As horrible an answer as it was, I should also clarify, for those who are not aware of it, that for those who work in Academia this is not really a choice: if you do not progress on the ladder of academic positions, it is likely that one day the gate of Slaughterhouse.ac will open for you – pop.
I hope one day, should I survive or thrive in the system, I will have the tools to influence it and change it for the better. For now, I can just write about it, hoping that younger scientists will make more informed decisions than me and most of my colleagues. ■
Industry, academia and healthcare often rely on fluorescence microscopy to see the fine architecture of materials, including biological ones. Fluorescence microscopy is particularly suited to biomedical studies because it can be gentle with biological materials, permitting investigators to study biology in a non-destructive manner. Chemistry and genetic engineering then provide useful strategies to make samples fluorescent so as to report on the mechanisms we need to study, aiming to understand how biological systems work in normal conditions, during disease or under therapy.
Thanks to two decades of fast-paced innovation in fluorescence microscopy, we can now see the smallest features of a biological sample, approaching molecular resolution. However, the capability of fluorescence microscopy to observe small changes in the chemical or physical properties of biological samples is not as well optimised as its capability to peek into small structures. In our recent paper entitled “Enhancing biochemical resolution by hyper-dimensional imaging microscopy” – now available at the Biophysical Journal – we demonstrate how to recover information that permits us to make better measurements.
We can think of a fluorescence microscope as a radio broadcaster that transmits useful information through different radio channels. When we listen to one individual radio channel, we lose the information transmitted over the other frequencies. If we attempt to listen to several broadcasts at the same time, the scrambled voices will limit our understanding of the several messages that were originally broadcast. Similarly, the lasers we use to make samples shine, and the fluorescence emitted by samples, transmit information spread over the different properties of light: for example, in its colour, in the time when light is emitted (the fluorescence lifetime) and in the plane in which it is vibrating (polarisation).
In our recent work, we describe theoretically and experimentally how all this information can be measured separately but simultaneously, enhancing our capability to observe biological processes. By breaking conceptual barriers and showcasing possible technological implementations of hyper-dimensional imaging microscopy, we aim to catalyse advances in several applications, spanning material sciences, industrial applications, basic and applied biomedical research, and improved sensing capabilities for medical diagnostics.
What has been the impact of fluorescence lifetime imaging microscopy to science and to the biomedical community in particular? Is FLIM a niche technique, one of those techniques that always promise but never deliver?
The top 10 most cited papers
Excluding reviews, the list of the top 10 most cited papers, albeit representing a very narrow window on the impact that FLIM has had on the broader community, is rather instructive. Do consider that we are missing all those papers where FLIM was used but not cited in the title or abstract. Most of the top 10 consists of applications to cell biochemistry, demonstrating the potential and the impact that fluorescence lifetime has. FLIM helped us to understand how signalling works in living cells and animals, helped to identify drugs and to study disease. Some of the top cited papers are more technical, such as Digman’s paper on the phasor transform or Becker’s paper on TCSPC, widely cited because of their influence on contemporary FLIM techniques from the perspective of data analysis and technology. Other papers date back to the initial years of FLIM, with applications to biochemistry. Overall, from this list we understand (if more evidence were needed) that FLIM has had a deep impact on the understanding of cell biochemistry albeit, historically, FLIM has been limited to the specialist laboratory.
I would also like to highlight another small observation, perhaps interesting only for specialists, and not visible from other bibliometric analyses. Tom Jovin and a group of scientists trained by him (e.g., Dorus Gadella and Philippe Bastiaens) left a significant footprint in the field, directly driving biomedically relevant applications while pushing, at the same time, technological and methodological developments. Many others are linked to this ‘school’ directly or indirectly: scientists who use and develop microscopes to do biochemistry.
Mapping temperature within cells by Okabe and colleagues (2012), from Uchiyama’s laboratory, published in Nature Communications, where FLIM was used with fluorescent polymers as intracellular temperature sensors. (442)
Phasor analysis by Michelle Digman and colleagues, from the laboratory of Enrico Gratton (2008), published in the Biophysical Journal. The phasor-based analysis, in its different flavours, has become quite popular nowadays. (406)
An in vivo FLIM-based analysis of calcium dynamics in astrocytes by Kuchibhotla and colleagues from Bacskai’s laboratory (2009) published in Science. (353)
The study of Calmodulin-dependent kinase II activity in dendritic spines by Lee and colleagues from Yasuda’s laboratory (2009) published in Nature. (351)
One of the first FLIM papers by Lakowicz, published in 1992 in PNAS, where they applied the methodology, yet to be fully established, to the study of free and bound NADH. (339)
One of the first biochemical applications of FLIM, where Gadella and Jovin applied the new tools to the study of EGFR oligomerization (1995), published in the Journal of Cell Biology. (323)
A 2004 paper where Becker and colleagues present the TCSPC instrumentation that would become a commercial success, published in Microscopy Research and Technique. (321)
The application of FLIM and molecular rotors to study the viscosity of the cellular environment by Marina Kuimova and colleagues, from the laboratory of Klaus Suhling, published in JACS in 2008. (319)
The development of a drug interfering with the interaction between KRAS and PDEdelta, published by Zimmermann and colleagues with the laboratory of Philippe Bastiaens, in Nature in 2013. (291)
The interaction between PKC and integrin shown by Ng and colleagues from Parker’s laboratory, published in 1999 in the EMBO Journal. (277)
Tool: Web of Science
Search term: “FLIM” and “fluorescence lifetime imaging microscopy”
Note: FliM is a component of the flagellar motor and it shows up in the searches. I could not eliminate this ‘false positive’, but my assumption is that it does not change the following discussion.
Citations (in parenthesis) as in April 2019.
Any bibliometric analysis is very limited in scope, certainly this very narrow search. This is just a blog post, one observation made simply to trigger a discussion for those curious about the topic.
I am personally conflicted on this topic. I have recently started to work on machine learning, and deep learning specifically. Therefore, I am keen to explore the usefulness of these technologies, and I hope they will remove bottlenecks from our assays.
My knowledge of CNNs is rather limited, even more so for super-resolution (SR) and denoising applications. My first opinion was not very positive. After all, if you do not trust a fellow scientist guessing objects from noisy or undersampled data, why should you trust a piece of software? That appeared to be the response of many colleagues as well.
After the machine learning session at FoM, I partially changed my opinion, and I am posting this brief – very naïve – opinion after a thread of messages I read on Twitter by colleagues. Conceptually, I had always thought of machine learning as ‘guessing’ the image, but suddenly I realised that CNNs are perhaps learning a prior, or a set of possible priors.
I have mentioned in a previous post about the work by Toraldo di Francia on resolving power and information, often cited by Alberto Diaspro in talks. Di Francia, in his paper, states “The degrees of freedom of an image formed by any real instrument are only a finite number, while those of the object are an infinite number. Several different objects may correspond to the same image. It is shown that in the case of coherent illumination a large class of objects corresponding to a given image can be found very easily. Two-point resolution is impossible unless the observer has a priori an infinite amount of information about the object.”
Are CNNs for image restoration and denoising learning the prior? If so, concerns about possible artefacts might not be put aside, but at least I could handle them a bit better conceptually. The problem would then shift to understanding which priors a network is learning and how robust these priors are to typical variations in biological samples.
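To make the ‘restoration needs a prior’ point concrete, here is a toy example of my own (not from any of the papers discussed): classical MAP restoration with an explicit, hand-written smoothness prior. A CNN denoiser arguably replaces this explicit prior with one learned from training data, which is precisely why asking *which* prior it learned matters.

```python
import numpy as np

# Toy MAP denoising of a 1D signal with an explicit Gaussian smoothness prior.
# All parameters here are illustrative.
rng = np.random.default_rng(1)

true_signal = np.sin(np.linspace(0, 2 * np.pi, 100))   # ground truth
noisy = true_signal + rng.normal(0, 0.3, 100)          # observed data

def map_denoise(y, lam):
    """MAP estimate assuming Gaussian noise and a smoothness prior:
    minimize ||x - y||^2 + lam * ||D x||^2, with D the first-difference
    operator. Closed form: x = (I + lam * D^T D)^{-1} y."""
    n = len(y)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]           # finite differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

restored = map_denoise(noisy, lam=10.0)

# The prior trades data fidelity for smoothness: the restored signal sits
# closer to the ground truth than the raw observation does.
print(np.mean((noisy - true_signal) ** 2))      # noisy MSE
print(np.mean((restored - true_signal) ** 2))   # restored MSE (smaller)
```

The restoration succeeds here only because the smoothness assumption happens to match the true signal; feed this denoiser a signal full of sharp edges and the same prior creates artefacts. That, in miniature, is the question I would like answered for learned priors.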
Great talks today at FoM. Eventually, we will need tools to assess the likelihood that an image represents the ground truth, and some simple visual representation that explains what a CNN is doing to a specific restored image, to ensure good practice. Nothing too different from other techniques, but I feel it is better to deal with these issues earlier rather than later in order to build confidence in the community.
Most of the time, I write this blog for those two youngsters who might learn something by accidentally landing here. I wished to share with you a few things that might happen when you age, at least academically speaking. This is what happens when you attend FoM for a few years…
I met my PhD supervisor, Hans Gerritsen, a scientist and a man I greatly respect. Lost in memory lane, I (re)told the story of when – while working in Goettingen with Fred Wouters – I wished to update Hans with a report on my latest theoretical developments.
“this destination already contains a folder named ‘saturation FRET’ “
Well, I discovered I had already written several tens of pages of maths for Hans, and had then completely forgotten about it!!!
What is the point? Well, I do have a horrible memory! I always had.
When you get older, you have many stories to tell, and I like to tell stories while having a laugh. When I meet people I got to know in the past who have longer experience than me, I like to ask for historical accounts of the early times of, for instance, FLIM developments. Not long ago, I had a wonderfully entertaining and instructive conversation with Peter So and Ammasi Periasamy, walking in the historical streets of Venice after lecturing at an international school of microscopy organized by Alberto Diaspro. Lots of fun, for me at least, speaking about the various characters of the field, anecdotes, and reconstructing the ‘genealogy’ of the various innovators (how they are scientifically related to each other). And I could not resist asking the question: “who did the first FLIM image?” I suggested Wang et al. 1989, but I am uncertain, as I was 14 back then 🙂
Did I tell you I have a bad memory? Well, I did not miss the opportunity to ask Hans again, when I met him at FoM: “who did the first FLIM image?” Possibly, Chris Morgan.
While ‘googling’ Chris Morgan, I found my own paper on Lifetime Moments Analysis (LiMA), as I had cited his work. Well, DID I TELL YOU I HAVE A BAD MEMORY?
I also ‘discovered’ that I had written a brief paragraph as a historical introduction to FLIM, and I had a flashback of me asking Hans the same question in 2005, which probably places Bugiel, Konig and Wabnitz as the winners, in 1989. But let me know if you really know who published the first FLIM paper 🙂
Here we are at a very new thing I have just noticed about what happens when you age academically. Presumably, the first FLIM paper was published in 1989, with work on FLIM proliferating during the first half of the 90s. My first paper on FLIM is the LiMA work of 2005, published in Biophysical Journal, ‘only’ 16 years after the first FLIM paper. Yesterday, my latest contribution to the field was accepted by Biophysical Journal, 14 years later. Although I do not work full-time on it, I have contributed to the development of FLIM, in one way or another, for almost half of the time that FLIM has existed. This gives me a rather strange feeling.
It is very instructive, in my opinion, for students – and even for slightly older ‘students’ like me – to pause for a moment and look at the past of their discipline or the technologies they use. Compared to Physics, for instance, cell biology, biophysics and cancer biology are all rather recent disciplines. Fluorescent proteins have been on the map since the sixties, for instance, but usable only since the mid-90s – not so long ago. Imagine what we might be able to do in another 20 years.
FoM is an occasion to meet many people, to peek into the future through the talks of fellow scientists and discussions, and to look back at past memories. Yesterday I barely walked three rows of posters in 2 hours, as I kept getting engaged in interesting conversations every few metres, with people I had just met or people I have known, in one way or another, for many years. When I called my wife, Suzan, back in the evening, she reminded me of my first FoM in Australia, when I called her saying I was feeling a bit lonely and awkward as I did not know anyone.
Conferences like FoM are community, history, a boiling pot of ideas. I have always been a bit shy, and my suggestion to younger scientists is to make an effort not to be. Engage with others. Working in academia can be rather frustrating at times, and feeling part of a community can really help you in the future.
Fourteen years from FLIM paper to FLIM paper in Biophysical Journal. When not affected by an attack of impostor syndrome, I look back and feel good seeing what I have done so far. However, there is yet another thing I have discovered about ageing academically. The legacy a person builds is not papers. In part, it is the reverberation of your work in that of others, irrespective of explicit citations. In part, it is the comments of colleagues who tell you, even just privately, when they were inspired by something you said, presented or published. But, growing a tiny bit older every year with FoM, it is also the younger generations coming to speak to you.
And I would like to thank you all, because while impact factors, panels and research outcomes are the fog in which one might lose oneself a bit too often, you are the light on the horizon signalling that we are, after all, walking in the right direction.