After Covid

Many years ago, in the early days of 2022, the dark specter of Covid finally lifted from American society. True, the disease was still killing Americans at a prodigious rate, equivalent to about 1 million a year. And most of those deaths were preventable, as other countries had demonstrated. What changed is that we decided not to worry about it anymore. Across the country and across the political spectrum, people agreed it was time to move on, to treat Covid like a seasonal flu, and to rescind any of the tedious mandates that had been imposed, like vaccinations or face coverings.

But the real transformation of American life that we all celebrate today happened in the immediate aftermath. Once we realized that a million deaths a year from a single cause of mortality was acceptable, we reconsidered all the irrational fears and regulations that had accumulated over the years. For example, Americans used to be asked for identification before boarding an airplane. Worse, they were forced to remove shoes, expose their bodies to X-rays, and let guards grope their genitals. All this was justified by the threat of terrorism. Now the most spectacular act of airplane terrorism known (at that time) killed about 3000 people in one day and then nothing happened for the next 20 years. Covid kills that number every single day! Being a rational people, we recognized the absurdity right away and rescinded all restrictions on air travel. 

Next to fall were traffic regulations: At the time traffic killed about 41000 Americans a year, so it posed less than one twentieth of the risk from Covid. And what were Americans suffering just to keep the number that absurdly low? There were speed limits on the roads, restraints on the passengers called seat belts, and expensive safety designs built into cars. Worst of all: Americans were under a forced mandate to abstain from drinking while driving. All this is hard to understand: the average American was so much more likely to die choking on a respirator than in a drunk-driving accident. Needless to say, we don’t live in fear anymore.

This was also the year when the utopian idea of gun control was finally abandoned. The number of gun-related deaths was already ridiculously low, only 45000 a year, barely a blip on the Covid scale. Then the drug laws were rationalized. Only 69000 Americans died from opioid abuse that year, and some of those deaths could be ascribed to the Covid epidemic anyway. In any case, with our newfound acceptance of risk, no-one could justify regulating these drugs in any way. Other regulations soon followed. For example, we used to have laws about how factories could pollute the air and water, again justified by some risk to life and health, which we now understand to be laughably low.

In retrospect it is hard for us to understand how our society had accumulated all these petty laws that restricted our freedom while protecting us from supposed risks. Covid finally unmasked these regulations as pure theater. No-one will contest that we are more free today than in the dark pre-Covid era. True, there are considerably fewer Americans alive than there were at the time, but some economists regard that as an added benefit. And, unlike back then, there aren’t any grandparents around to remind us of the old days.

Figure design for colorblind readers is outdated

tl;dr: Many scientific journals take a heavy-handed approach to color choice in illustrations, with the noble goal of aiding colorblind readers. I argue that this policy is outdated, and in fact hurts the community it is intended to serve.

For example, the journal eLife (full disclosure: I contribute free services to this journal) instructs its authors: “When preparing figures, we recommend that authors follow the principles of Colour Universal Design (Masataka Okabe and Kei Ito, J*FLY), whereby colour schemes are chosen to ensure maximum accessibility for all types of colour vision.”
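For context, the Okabe-Ito palette behind that guidance consists of eight fixed colors. Here is a minimal sketch of how one might adopt it in matplotlib; the hex values are the ones commonly quoted for the palette, and registering them as the default color cycle is just one possible way to use them:

```python
import matplotlib.pyplot as plt
from cycler import cycler

# Okabe-Ito "Colour Universal Design" palette (commonly quoted hex values)
okabe_ito = {
    "black":          "#000000",
    "orange":         "#E69F00",
    "sky blue":       "#56B4E9",
    "bluish green":   "#009E73",
    "yellow":         "#F0E442",
    "blue":           "#0072B2",
    "vermillion":     "#D55E00",
    "reddish purple": "#CC79A7",
}

# Make it the default color cycle for subsequent figures
plt.rcParams["axes.prop_cycle"] = cycler(color=list(okabe_ito.values()))
```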

This policy has been superseded by the fact that everyone reads eLife on a computer display of some kind. And by now all computers offer a customized color transform for color-blind people. See these links for Windows, MacOS, and iOS. These color filters replace whatever colors are on the display with new colors that are best discriminated by someone with a color vision deficiency. Most importantly the filter can be optimized for the user’s specific form of color blindness. This service has been wildly popular with colorblind readers.

Under these conditions, following the eLife policy is in fact detrimental to colorblind readers. Recall that there is no single color palette that works best for all forms of color blindness. If the author adapts the palette to a particular 2-dimensional color space, say protanopia, that will be suboptimal for other readers. Here is an example using the document that the eLife policy cites for guidance (Okabe and Ito).

“Original” shows a classic red-green fluorescence micrograph. Below that is the color substitution recommended by Okabe & Ito: turn red into magenta. To the right are 3 images produced by the Mac OS filters for different forms of color-blindness (I photographed my display with my phone – crazy, I know). The version that Mac OS produces for protanopia is very close to what Okabe and Ito recommend. But note that the other two versions, for deuteranopia and tritanopia, are quite different.
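As a minimal sketch of that substitution, assuming the micrograph stores the two stains in the red and green planes of an RGB array (an assumption about the file layout, not a standard), copying the red channel into the blue plane is enough to render the red structures as magenta:

```python
import numpy as np

def red_green_to_magenta_green(img: np.ndarray) -> np.ndarray:
    """Turn a red/green fluorescence image into magenta/green.

    img: H x W x 3 array with the 'red' stain in channel 0 and the
    'green' stain in channel 1 (assumed layout).
    """
    out = img.copy()
    out[..., 2] = img[..., 0]   # copy red into blue; red + blue displays as magenta
    return out
```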

So following the recommended policy will favor protanopes but hinder deuteranopes. What is more, adapting the color palette to one of these abnormal color spaces will make it more difficult for the operating system to optimize the display for another space. 

In conclusion, the best policy for authors is to do what comes naturally: choose colors that use the widest color gamut possible. Then let the user’s display device take over and implement that specific user’s preferences. By analogy, we don’t ask authors to write in a 36-point font because some readers have poor vision. We know that the reader can turn up the magnification as suits her preference. The same is now true in color space.

Magnetofiction – A Reader’s Guide

tl;dr:

This sub-genre of science fiction has made its way into mainstream science journals. I argue that the science is sketchy and the fiction is disappointing. Submit your own Biophyctional Journal abstracts in the comments!

Introduction

Science fiction rests on a compact between author and reader. The reader grants the author license to make some outrageous assumptions about the state of science and technology. In return the author spins an exciting yarn that may also hold some lessons about human nature. Recently a sub-genre of science fiction has made its entry into mainstream scientific journals. Again the authors ask readers to imagine that nature works very differently from what we know now, by factors ranging from a million up to 10 trillion. Then they speculate what might happen under those circumstances. Continue reading “Magnetofiction – A Reader’s Guide”

Irresponsible musings on Lamarckian evolution

Preamble: In May 2018, HHMI Janelia hosted a wonderful conference on the evolution of neural circuits, organized by Albert Cardona, Melina Hale, and Gáspár Jékely. This is a transcript of a short talk I gave there. Caution: utter speculation!

Good evening. My name is Markus Meister. Many of you probably wonder what I am doing up here, and so do I. This started innocently enough when I wanted to attend this conference to learn about a different field. But the organizers asked for a poster abstract and I sheepishly complied. Then Albert Cardona unilaterally converted this poster to a talk, so now you know whom to blame. Let me also give two warnings. First, what I’ll present is pure speculation: there are no new results, only old results strung together into questions. Fortunately it is timed to transition into the Après-Ski part of the conference, so please consider this as part of the evening entertainment. Second, I want to deliver a trigger warning. For those of you not at an American university: we professors are encouraged to warn our sensitive students when a lecture threatens to bring up a subject they might find offensive. The present talk will include notions reminiscent of Jean-Baptiste Lamarck, in particular his idea that acquired characters are inheritable.

[Embedded media: Neuro-Evo 180506.002]

Continue reading “Irresponsible musings on Lamarckian evolution”

Open review: WN Grimes et al., Rod signaling in primate retina – range, routing and kinetics

This preprint presents new insights into visual processing in the retina, specifically how signals from rod photoreceptors are handled. Our visual system must operate over a huge range of light intensities, about 9 log units in the course of a day. To meet this challenge the retina uses two kinds of photoreceptors: in the dimmest conditions only the sensitive rods are active, in the brightest conditions only the cones. In between, the retina gradually switches from one input neuron to the other. However, even before the cones take over, the rod pathway undergoes substantial changes with increasing light level: the gain decreases and the speed of processing increases. This article challenges the prevailing notion of how those changes are accomplished.

Continue reading “Open review: WN Grimes et al., Rod signaling in primate retina – range, routing and kinetics”

Death of the sampling theorem?

tl;dr:

A team from Columbia University led by Ken Shepard and Rafa Yuste claims to beat the 100-year-old sampling theorem [1,2]. Apparently anti-aliasing filters are superfluous now because one can reconstruct the aliased noise after sampling. Sounds crazy? Yes it is. I offer $1000 to the first person who proves otherwise. To collect your cool cash be sure to read to the end.

“Filter before sampling!”

This mantra has been drilled into generations of engineering students. “Sampling” here refers to the conversion of a continuous function of time into a series of discrete samples, a process that happens wherever a computer digitizes a measurement from the world. “Filter” means the removal of high-frequency components from the signal, specifically everything above half the sampling rate, because those components would otherwise masquerade as lower frequencies after sampling. That filtering process, because it happens in the analog world, requires real analog hardware: so-called “anti-aliasing” circuits made of resistors, capacitors, and amplifiers. That hardware can be a problem, for example when there isn’t enough space for it on the electronic chips in question. This is the constraint considered by Shepard’s team, in the context of a device for recording signals from nerve cells [2].
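To see what is at stake, here is a toy simulation of my own (not the recording setup of [2]): broadband noise riding on a 50 Hz signal is digitized at 1 kHz, once after a low-pass filter that stands in for the analog anti-aliasing circuit, and once without it. In the unfiltered case the noise from far above the Nyquist frequency folds into the signal band, where no amount of digital processing can distinguish it from noise that was there all along:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

fs_cont = 100_000                    # the "continuous" world, simulated at 100 kHz
fs_adc = 1_000                       # sampling rate of the digitizer
t = np.arange(0, 1, 1 / fs_cont)

x = np.sin(2 * np.pi * 50 * t) + rng.standard_normal(t.size)   # signal + broadband noise
decim = fs_cont // fs_adc

# Case 1: low-pass filter below the Nyquist frequency of the ADC, then sample
sos = signal.butter(8, 0.45 * fs_adc, btype="low", fs=fs_cont, output="sos")
y_filtered = signal.sosfiltfilt(sos, x)[::decim]

# Case 2: sample without filtering; out-of-band noise has already aliased into 0-500 Hz
y_aliased = x[::decim]

def noise_power(y, fs, band=(100, 400)):
    """Average power spectral density in a band that contains no signal."""
    f, pxx = signal.welch(y, fs=fs, nperseg=256)
    sel = (f > band[0]) & (f < band[1])
    return pxx[sel].mean()

print("filtered before sampling: ", noise_power(y_filtered, fs_adc))
print("sampled without filtering:", noise_power(y_aliased, fs_adc))   # ~100x larger
```

With the filter in place, the in-band noise floor stays at the level set by the original broadband noise; without it, everything between 500 Hz and 50 kHz folds down into the 0–500 Hz band and raises that floor by roughly two orders of magnitude.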

Now these authors declare that they have invented an “acquisition paradigm that negates the requirement for per-channel anti-aliasing filters, thereby overcoming scaling limitations faced by existing systems” [2]. Effectively they can replace the anti-aliasing hardware with software that operates on the digital side after the sampling step. “Another advantage of this data acquisition approach is that the signal processing steps (channel separation and aliased noise removal) are all implemented in the digital domain” [2].

This would be a momentous development. Not only does it overturn almost a century of conventional wisdom. It also makes obsolete a mountain of electronic hardware. Anti-alias filters are ubiquitous in electronics. Your cell phone contains multiple digital radios and an audio digitizer, which between them may account for half a dozen anti-alias circuits. If given a chance today to replace all those electronic components with a few lines of code, manufacturers would jump on it. So this is potentially a billion dollar idea.

Unfortunately it is also a big mistake. I will show that these papers do nothing to rattle the sampling theorem. They do not undo aliasing via post-hoc processing. They do not obviate analog filters prior to digitizing. And they do not even come close to the state of the art for extracting neural signals from noise. Continue reading “Death of the sampling theorem?”

The fable of the missing log units

Some time ago I published a critique of attempts at “magnetogenetics” – specifically the approach based on coupling a ferritin complex to an ion channel. See my article along with the original authors’ reply here:

Meister, M. (2016). Physical limits to magnetogenetics. eLife 5, e17210.

I argued that the effects of magnetic fields on ferritin are much too weak to account for the reported observations of neural modulation. The discrepancy is 5 to 10 orders of magnitude. Several people have asked what that means. How much of a problem does this really represent? To illustrate that, here is a short fable…

Earlier this year, a team of engineers announced a discovery that could go a long way to solving the world’s energy problems. Their article, published in Nature Automotive, reports the invention of an electric car that can run for an entire year on a single AA battery. “It took a lot of persistence on the part of my students”, says the senior author. “We literally tried 21 different brands of AA battery before we found one that worked” [1].

Now a paper in eCars casts doubt on the discovery. The author performed some calculations on the amount of work needed to push a car around for a year and the amount of electrical energy stored in a battery. He says there is a discrepancy of 7 orders of magnitude, and that makes the claims very improbable: “If the car really drove around for a year it is unlikely to have anything to do with the AA battery.” The author also faults the reviewers of the original article for not recognizing how improbable the claims are, and thus failing to raise the bar for the empirical evidence accordingly. He concedes it is possible that the claimed discovery opens a window on entirely new physics, but says: “Both batteries and cars have been studied for a long time, and we have a very successful model of how they work”.

The Nature Automotive authors reply that they never proposed a mechanism for the remarkable result. They stand by their data and state that empirical observation must take priority over any theory. Because they are not experts in physics, they should not be expected to explain how the data came about.

The critic points out that Nature Automotive and similar journals have had a rather poor track record: About half of the studies published there cannot be replicated for one reason or another. Not long ago, the journal reported the invention of a car that actually produced fuel while driving, such that the gas tank needed to be emptied at regular intervals [2]. A magician dispatched by the journal subsequently debunked that report, and explained it as a mixture of wishful thinking and self-deception [3]. Nature Automotive and other journals like it profess to be concerned about the profusion of false claims, and want to improve their ability to spot those before publication. The eCars critic suggests that one ought to start with the manuscripts whose claims fly in the face of everything we know about how things work. No word yet from the editor or the referees of the original article.
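For readers who want to check the fable’s arithmetic, here is a back-of-the-envelope version. All figures are my own rough assumptions (a typical alkaline AA cell, a typical electric car’s consumption, a year of ordinary driving), not numbers from the fictional papers, and they land in the neighborhood of the quoted discrepancy:

```python
import math

# Rough assumptions, not data from any (real or fictional) paper
battery_energy_J = 3 * 3600            # an alkaline AA cell stores roughly 3 Wh
km_per_year = 15_000                   # ordinary annual mileage
car_energy_per_km_J = 0.2 * 3.6e6      # an electric car uses roughly 0.2 kWh per km

car_energy_J = km_per_year * car_energy_per_km_J
ratio = car_energy_J / battery_energy_J

print(f"car needs ~{car_energy_J:.1e} J per year, battery holds ~{battery_energy_J:.1e} J")
print(f"shortfall: about 10^{math.log10(ratio):.0f}")   # roughly 6-7 orders of magnitude
```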

[1] Vogt, N. (2016). Neuroscience: Manipulating neurons with magnetogenetics. Nature Methods 13, 394.

[2] Davenas, E., Beauvais, F., Amara, J., Oberbaum, M., Robinzon, B., Miadonnai, A., Tedeschi, A., Pomeranz, B., Fortner, P., Belon, P., et al. (1988). Human basophil degranulation triggered by very dilute antiserum against IgE. Nature 333, 816.

[3] Maddox, J., Randi, J., and Stewart, W.W. (1988). “High-dilution” experiments a delusion. Nature 334, 287–291.

Control theory meets connectomes?

My colleagues and I have been working through this intriguing paper [1] from a few weeks ago:

Yan, G., Vértes, P.E., Towlson, E.K., Chew, Y.L., Walker, D.S., Schafer, W.R., and Barabási, A.-L. (2017). Network control principles predict neuron function in the Caenorhabditis elegans connectome. Nature advance online publication.

This seems like a very important contribution. It promises detailed insights about the function of a neural circuit based on its connectome alone, without knowing any of the synaptic strengths. The predictions extend to the role that individual neurons play for the circuit’s operation. Seeing how a great deal of effort is now going into acquiring connectomes [2] – mostly lacking annotations of synaptic strengths – this approach could be very powerful.

The starting point is Barabási’s “structural controllability theory” [3], which makes statements about the control of linear networks. Roughly speaking, a network is controllable if its output nodes can be driven into any desired state by manipulating the input nodes. Obviously controllability depends on the entire set of connections from inputs to outputs. Structural controllability theory derives some conclusions from knowing only which connections have non-zero weight. This seems like a match made in heaven for the structural connectomes of neural circuits derived from electron microscopic reconstructions. In these data sets one can tell which neurons are connected, but not the strength of those connections, or even whether they are excitatory or inhibitory. Unfortunately the match is looking more like a forced marriage… Continue reading “Control theory meets connectomes?”
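To make the idea concrete, here is a toy sketch of my own (not the method of Yan et al.): take the sparsity pattern of a small directed circuit, assign random weights to the existing connections, and test the classical Kalman rank condition for linear dynamics dx/dt = Ax + Bu. If a generic weight assignment yields full rank, the wiring pattern is structurally controllable:

```python
import numpy as np

rng = np.random.default_rng(1)

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, A^2 B, ...]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

def structurally_controllable(A_mask, B_mask, trials=20):
    """Probabilistic check: structural controllability holds for almost every
    weight assignment on a given sparsity pattern, so any random draw that
    reaches full rank certifies it."""
    n = A_mask.shape[0]
    return any(
        kalman_rank(A_mask * rng.standard_normal(A_mask.shape),
                    B_mask * rng.standard_normal(B_mask.shape)) == n
        for _ in range(trials)
    )

# Toy circuit: a 3-neuron chain 1 -> 2 -> 3, driven externally at neuron 1
A_mask = np.array([[0, 0, 0],
                   [1, 0, 0],
                   [0, 1, 0]], dtype=float)
B_mask = np.array([[1.0], [0.0], [0.0]])

print(structurally_controllable(A_mask, B_mask))   # True: the chain is controllable
```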