Comment on: “Models of the Mind” by Grace Lindsay

I much enjoyed “Models of the Mind” by Grace Lindsay, an account of the uses of theory and mathematics in brain science. The book addresses the general reader: equations are mostly relegated to an appendix, though there are also copious citations of the technical literature. Obviously, conveying mathematics without equations is a challenge, and Lindsay handles it very well, finding effective metaphors wherever needed. All of this reflects her deep mastery of the material.

Beyond the target audience of curious laypeople, I also recommend this book to professors and students of mathematical neuroscience. For one, its ten core chapters (2 through 11) would make a perfectly reasonable topic list for a 10-week course. While the lectures and problem sets focus on technical content and practice, Lindsay’s accompanying chapter can supply a historical account of how the ideas evolved into our present understanding. When I try to do this in class, I always feel pressed for time, and the results aren’t satisfying. This book would make a great companion to a technical lecture series.

Lindsay’s last chapter is dedicated to “grand unifying theories” of the brain. She covers three attempts at such a theory, and politely but unmistakably dismisses them. One is “demonstrably wrong” and the other two are not even wrong. To find out which theories these are, you’ll have to read the book; calling them out by name here would already give them too much credit. Lindsay concludes that brains are so “dense and messy” that they don’t lend themselves to physics-style theories that boil everything down to simple principles. On the other hand, her opening chapter makes the case that mathematics is the only language that can ultimately deal with the complexity of the brain. There’s a tension here regarding the future of mathematics in neuroscience.

This contrast between the rigor of mathematics and the squishiness of brains contributes to what I see as a continuing reluctance of many brain scientists to engage with quantitative methods. Here I’m reminded of one of the few question marks I jotted in the margins of the book. Lindsay writes that early in his career “[David] Hubel was actually quite interested in mathematics and physics”. If so, he certainly changed his mind later on. I have heard Hubel say that he “never had to use an equation after high school”. And he liked to ridicule quantitative measurement in neuroscience as “measuring the thickness of blades of grass”. Torsten Wiesel was similarly dismissive of mathematical approaches. I recall an editorial board meeting of the journal Network, which the chief editor, Joe Atick, had organized in a conference room at the Rockefeller University. Midway through the meeting, Torsten Wiesel, who was president of Rockefeller at the time, popped in, said hello, asked what the meeting was about, delivered a monologue on how computational neuroscience would never make a useful contribution, and left abruptly. Whatever one thinks of these opinions, Hubel and Wiesel do have a Nobel Prize, as do many other experimenters; theoretical neuroscientists don’t.

One can argue that Hubel and Wiesel really did not need mathematics to report the remarkable phenomena they discovered. But connecting those phenomena to their causes and consequences does require math. The little napkin sketch of simple-cell receptive fields built from LGN neurons is cute but not convincing; it needs to be translated into a model before one can test the idea. Similarly, one can make a hand-waving argument that line-detector neurons are useful for downstream visual processing, but understanding the reason is an entirely different matter. Unfortunately, our discipline today still values isolated qualitative reports of phenomena. Most of the celebrated articles in glossy journals remain singular contributions: no one builds on them, and hardly anyone tries to replicate them (and the rare attempts often don’t go well). Meanwhile the accompanying editorials celebrate the “tour-de-force” achievement that delivers a “quantum leap in our understanding”.¹

We should resign ourselves to the recognition that no single research paper will pull the veil of ignorance from our eyes and reveal the workings of the brain. The only hope lies in integrating results from many researchers with diverse, complementary approaches. And one can piece these results together only if they are reported with some quantitative precision² and fit into an overall mathematical model. It is to enable this basic building of a scientific edifice that neuroscientists need to learn and use mathematics, not only for the pursuit of a grand unified theory.

Footnotes:

  1. The next time you write this phrase, remember that a “quantum leap” is the smallest possible increase.
  2. The next time you write a review article, please include some numbers about the magnitude of reported effects. Thank you!

After Covid

Many years ago, in the early days of 2022, the dark specter of Covid finally lifted from American society. True, the disease was still killing Americans at a prodigious rate, equivalent to about 1 million a year. And most of those deaths were preventable, as other countries had demonstrated by preventing them. What changed is that we decided not to worry about it anymore. Across the country and across the political spectrum, people agreed it was time to move on, to treat Covid like a seasonal flu, and to rescind the tedious mandates that had been imposed, like vaccinations or face coverings.

But the real transformation of American life that we all celebrate today happened in the immediate aftermath. Once we realized that a million deaths a year from a single cause of mortality was acceptable, we reconsidered all the irrational fears and regulations that had accumulated over the years. For example, Americans used to be asked for identification before boarding an airplane. Worse, they were forced to remove their shoes, expose their bodies to X-rays, and let guards grope their genitals. All this was justified by the threat of terrorism. Yet the most spectacular act of airplane terrorism known (at that time) killed about 3,000 people in one day, and then nothing happened for the next 20 years. Covid was killing that number every single day! Being a rational people, we recognized the absurdity right away and rescinded all restrictions on air travel.

Next to fall were the traffic regulations: at the time, traffic killed about 41,000 Americans a year, so it posed less than one twentieth of the risk from Covid. And what were Americans suffering just to maintain such an absurdly low number? There were speed limits on the roads, restraints on the passengers called seat belts, and expensive safety designs built into cars. Worst of all: Americans were under a forced mandate to abstain from drinking while driving. All this is hard to understand today: the average American was so much more likely to die choking on a respirator than in a drunk-driving accident. Needless to say, we don’t live in fear anymore.

This was also the year when the utopian idea of gun control was finally abandoned. The number of gun-related deaths was already ridiculously low, only 45,000 a year, barely a blip on the Covid scale. Then the drug laws were rationalized. Only 69,000 Americans died from opioid abuse that year, and some of that could be ascribed to the Covid epidemic anyway. In any case, with our newfound acceptance of risk, no one could justify regulating these drugs in any way. Other regulations soon followed. For example, we used to have laws about how much factories could pollute the air and water, again justified by some risk to life and health, which we now understand to be laughably low.

In retrospect it is hard for us to understand how our society had accumulated all these petty laws that restricted our freedom while protecting us from supposed risks. Covid finally unmasked these regulations as pure theater. No one will contest that we are more free today than in the dark pre-Covid era. True, there are considerably fewer Americans alive now than there were then, but some economists regard that as an added benefit. And, conveniently, there aren’t many grandparents left to remind us of the old days.

Figure design for colorblind readers is outdated

tl;dr: Many scientific journals take a heavy-handed approach to color choices in illustrations, with the noble goal of aiding colorblind readers. I argue that this policy is outdated and in fact hurts the community it is intended to serve.

For example, the journal eLife (full disclosure: I contribute free services to this journal) instructs its authors: “When preparing figures, we recommend that authors follow the principles of Colour Universal Design (Masataka Okabe and Kei Ito, J*FLY), whereby colour schemes are chosen to ensure maximum accessibility for all types of colour vision.”

This policy has been superseded by the fact that everyone reads eLife on a computer display of some kind. And by now all major operating systems offer a customized color transform for colorblind users; see the documentation for Windows, MacOS, and iOS. These color filters replace whatever colors are on the display with new colors that are best discriminated by someone with a color vision deficiency. Most importantly, the filter can be optimized for the user’s specific form of color blindness. This service has been wildly popular with colorblind readers.

Under these conditions, following the eLife policy is in fact detrimental to colorblind readers. Recall that there is no single color palette that works best for all forms of color blindness. If the author adapts the palette to a particular 2-dimensional color space, say protanopia, that will be suboptimal for other readers. Here is an example using the document that the eLife policy cites for guidance (Okabe and Ito).

“Original” shows a classic red-green fluorescence micrograph. Below that is the color substitution recommended by Okabe & Ito: turn red into magenta. To the right are three images produced by the MacOS filters for different forms of color blindness (I photographed my display with my phone – crazy, I know). The version that MacOS produces for protanopia is very close to what Okabe and Ito recommend. But note that the other two versions, for deuteranopia and tritanopia, are quite different.
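As an aside, the substitution itself is simple enough to apply in a couple of lines of code. Here is a minimal sketch (my own illustration, not part of the eLife guidelines), assuming a two-channel fluorescence image stored as an RGB array with red in channel 0 and green in channel 1; the array contents below are synthetic stand-ins:

```python
import numpy as np

# Synthetic stand-in for a red-green fluorescence micrograph,
# stored as an RGB array with values in [0, 1].
rng = np.random.default_rng(0)
red_stain = rng.random((256, 256))
green_stain = rng.random((256, 256))
img = np.stack([red_stain, green_stain, np.zeros_like(red_stain)], axis=-1)

# Okabe & Ito's substitution: display the red channel as magenta
# (red + blue), which remains distinguishable from green for readers
# with red-green color vision deficiencies.
img_magenta_green = img.copy()
img_magenta_green[..., 2] = img_magenta_green[..., 0]  # copy red into blue
```

The point of the argument is not that this substitution is hard to apply, but that hard-coding it into the published figure locks in a choice that the reader’s operating system could make better on a per-user basis.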

So following the recommended policy will favor protanopes but hinder deuteranopes. What is more, adapting the color palette to one of these abnormal color spaces will make it more difficult for the operating system to optimize the display for another space. 

In conclusion, the best policy for authors is to do what comes naturally: choose colors that use the widest color gamut possible. Then let the user’s display device take over and implement that specific user’s preferences. By analogy, we don’t ask authors to write in a 36-point font because some readers have poor vision; we know that the reader can turn up the magnification to suit her preference. The same is now true in color space.

Magnetofiction – A Reader’s Guide

tl;dr:

This sub-genre of science fiction has made its way into mainstream science journals. I argue that the science is sketchy and the fiction is disappointing. Submit your own Biophyctional Journal abstracts in the comments!

Introduction

Science fiction rests on a compact between author and reader. The reader grants the author license to make some outrageous assumptions about the state of science and technology. In return, the author spins an exciting yarn that may also hold some lessons about human nature. Recently a sub-genre of science fiction has made its entry into mainstream scientific journals. Again the authors ask readers to imagine that nature works very differently from what we know now, by factors ranging from a million up to 10 trillion. Then they speculate about what might happen under those circumstances. Continue reading “Magnetofiction – A Reader’s Guide”

Irresponsible musings on Lamarckian evolution

Preamble: In May 2018, HHMI Janelia hosted a wonderful conference on the evolution of neural circuits, organized by Albert Cardona, Melina Hale, and Gáspár Jékely. This is a transcript of a short talk I gave there. Caution: utter speculation!

Good evening. My name is Markus Meister. Many of you probably wonder what I am doing up here, and so do I. This started innocently enough when I wanted to attend this conference to learn about a different field. But the organizers asked for a poster abstract, and I sheepishly complied. Then Albert Cardona unilaterally converted this poster to a talk, so now you know whom to blame. Let me also give two warnings. First, what I’ll present is pure speculation: there are no new results, only old results strung together into questions. Fortunately it is timed to transition into the Après-Ski part of the conference, so please consider this part of the evening entertainment. Second, I want to deliver a trigger warning. For those of you not at an American university: we professors are encouraged to warn our sensitive students when a lecture threatens to bring up a subject they might find offensive. The present talk will include notions reminiscent of Jean-Baptiste Lamarck, in particular his idea that acquired characters are inheritable.

[Slide: Neuro-Evo 180506.002]

Continue reading “Irresponsible musings on Lamarckian evolution”

Open review: WN Grimes et al., Rod signaling in primate retina – range, routing and kinetics

This preprint presents new insights into visual processing in the retina, specifically how signals from rod photoreceptors are handled. Our visual system must operate over a huge range of light intensities, about 9 log units over the course of a day. To meet this challenge, the retina uses two kinds of photoreceptors: in the dimmest conditions only the sensitive rods are active, and in the brightest conditions only the cones. In between, the retina gradually switches from one input neuron to the other. However, even before the cones take over, the rod pathway undergoes substantial changes with increasing light level: the gain decreases and the speed of processing increases. This article challenges the prevailing notion of how those changes are accomplished.

Continue reading “Open review: WN Grimes et al., Rod signaling in primate retina – range, routing and kinetics”

Death of the sampling theorem?

tl;dr:

A team from Columbia University led by Ken Shepard and Rafa Yuste claims to beat the 100-year-old sampling theorem [1,2]. Apparently anti-aliasing filters are superfluous now, because one can reconstruct the aliased noise after sampling. Sounds crazy? Yes it is. I offer $1000 to the first person who proves otherwise. To collect your cool cash, be sure to read to the end.

“Filter before sampling!”

This mantra has been drilled into generations of engineering students. “Sampling” here refers to the conversion of a continuous function of time into a series of discrete samples, a process that happens wherever a computer digitizes a measurement from the world. “Filter” means the removal of high-frequency components from the signal. That filtering process, because it happens in the analog world, requires real analog hardware: so-called “anti-aliasing” circuits made of resistors, capacitors, and amplifiers. That can be tedious, for example because there isn’t enough space on the electronic chips in question. This is the constraint considered by Shepard’s team, in the context of a device for recording signals from nerve cells [2].
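To see why the order of operations matters, here is a minimal numerical sketch (mine, not from the papers under discussion). Once a frequency component above the Nyquist limit has been sampled, its samples are identical to those of a lower-frequency alias, so no amount of digital post-processing can tell the two apart:

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz (Nyquist frequency: 500 Hz)
t = np.arange(1000) / fs             # one second of sample times

high = np.cos(2 * np.pi * 900 * t)   # 900 Hz component, above Nyquist
alias = np.cos(2 * np.pi * 100 * t)  # 100 Hz component, its alias

# The two sampled sequences are numerically identical:
# cos(2*pi*900*n/fs) = cos(2*pi*n - 2*pi*100*n/fs) = cos(2*pi*100*n/fs)
print(np.allclose(high, alias))      # True
```

Whether the 900 Hz energy is signal or wideband noise, after sampling it is indistinguishable from energy at 100 Hz; the conventional remedy is to remove it with an analog filter before the digitizer ever sees it.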

Now these authors declare that they have invented an “acquisition paradigm that negates the requirement for per-channel anti-aliasing filters, thereby overcoming scaling limitations faced by existing systems” [2]. Effectively, they claim to replace the anti-aliasing hardware with software that operates on the digital side, after the sampling step: “Another advantage of this data acquisition approach is that the signal processing steps (channel separation and aliased noise removal) are all implemented in the digital domain” [2].

This would be a momentous development. Not only would it overturn almost a century of conventional wisdom; it would also make obsolete a mountain of electronic hardware. Anti-aliasing filters are ubiquitous in electronics. Your cell phone contains multiple digital radios and an audio digitizer, which between them may account for half a dozen anti-aliasing circuits. If given a chance today to replace all those electronic components with a few lines of code, manufacturers would jump on it. So this is potentially a billion-dollar idea.

Unfortunately it is also a big mistake. I will show that these papers do nothing to rattle the sampling theorem. They do not undo aliasing via post-hoc processing. They do not obviate analog filters prior to digitizing. And they do not even come close to the state of the art for extracting neural signals from noise. Continue reading “Death of the sampling theorem?”

The fable of the missing log units

Some time ago I published a critique of attempts at “magnetogenetics” – specifically the approach based on coupling a ferritin complex to an ion channel. See my article along with the original authors’ reply here:

Meister, M. (2016). Physical limits to magnetogenetics. eLife 5, e17210.

I argued that the effects of magnetic fields on ferritin are much too weak to account for the reported observations of neural modulation. The discrepancy is 5 to 10 orders of magnitude. Several people have asked what that means. How much of a problem does this really represent? To illustrate that, here is a short fable…

Earlier this year, a team of engineers announced a discovery that could go a long way to solving the world’s energy problems. Their article, published in Nature Automotive, reports the invention of an electric car that can run for an entire year on a single AA battery. “It took a lot of persistence on the part of my students”, says the senior author. “We literally tried 21 different brands of AA battery before we found one that worked” [1].

Now a paper in eCars casts doubt on the discovery. The author performed some calculations on the amount of work needed to push a car around for a year and the amount of electrical energy stored in a battery. He says there is a discrepancy of 7 orders of magnitude, and that makes the claims very improbable: “If the car really drove around for a year it is unlikely to have anything to do with the AA battery.” The author also faults the reviewers of the original article for not recognizing how improbable the claims are, and thus failing to raise the bar for the empirical evidence accordingly. He concedes it is possible that the claimed discovery opens a window on entirely new physics, but says: “Both batteries and cars have been studied for a long time, and we have a very successful model of how they work”.
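For readers who want to check the spirit of the critic’s arithmetic, here is a rough back-of-envelope version; the specific numbers (an alkaline AA cell, a typical American year of driving) are my own illustrative assumptions:

```python
import math

# Energy stored in one alkaline AA cell: ~1.5 V at ~2.5 Ah
aa_energy = 1.5 * 2.5 * 3600               # joules, ~1.4e4 J

# Energy a typical car consumes in a year of driving:
# ~500 gallons of gasoline at ~1.3e8 J per gallon
car_energy = 500 * 1.3e8                   # joules, ~6.5e10 J

print(math.log10(car_energy / aa_energy))  # ~6.7, i.e. roughly 7 orders of magnitude
```

Shaving a factor of two off one number or another does not change the conclusion; that is precisely what a discrepancy of many orders of magnitude means.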

The Nature Automotive authors reply that they never proposed a mechanism for the remarkable result. They stand by their data and state that empirical observation must take priority over any theory. Because they are not experts in physics, they should not be expected to explain how the data came about.

The critic points out that Nature Automotive and similar journals have had a rather poor track record: About half of the studies published there cannot be replicated for one reason or another. Not long ago, the journal reported the invention of a car that actually produced fuel while driving, such that the gas tank needed to be emptied at regular intervals [2]. A magician dispatched by the journal subsequently debunked that report, and explained it as a mixture of wishful thinking and self-deception [3]. Nature Automotive and other journals like it profess to be concerned about the profusion of false claims, and want to improve their ability to spot those before publication. The eCars critic suggests that one ought to start with the manuscripts whose claims fly in the face of everything we know about how things work. No word yet from the editor or the referees of the original article.

[1] Vogt, N. (2016). Neuroscience: Manipulating neurons with magnetogenetics. Nature Methods 13, 394.

[2] Davenas, E., Beauvais, F., Amara, J., Oberbaum, M., Robinzon, B., Miadonnai, A., Tedeschi, A., Pomeranz, B., Fortner, P., Belon, P., et al. (1988). Human basophil degranulation triggered by very dilute antiserum against IgE. Nature 333, 816.

[3] Maddox, J., Randi, J., and Stewart, W.W. (1988). “High-dilution” experiments a delusion. Nature 334, 287–291.

Control theory meets connectomes?

My colleagues and I have been working through this intriguing paper [1] from a few weeks ago:

Yan, G., Vértes, P.E., Towlson, E.K., Chew, Y.L., Walker, D.S., Schafer, W.R., and Barabási, A.-L. (2017). Network control principles predict neuron function in the Caenorhabditis elegans connectome. Nature advance online publication.

This seems like a very important contribution. It promises detailed insights about the function of a neural circuit based on its connectome alone, without knowing any of the synaptic strengths. The predictions extend to the role that individual neurons play in the circuit’s operation. Given that a great deal of effort is now going into acquiring connectomes [2], mostly lacking annotations of synaptic strengths, this approach could be very powerful.

The starting point is Barabási’s “structural controllability theory” [3], which makes statements about the control of linear networks. Roughly speaking, a network is controllable if its output nodes can be driven into any desired state by manipulating the input nodes. Obviously, controllability depends on the entire set of connections from inputs to outputs. Structural controllability theory derives some conclusions from knowing only which connections have non-zero weight. This seems like a match made in heaven for the structural connectomes of neural circuits derived from electron-microscopic reconstructions: in these data sets one can tell which neurons are connected, but not the strength of those connections, or even whether they are excitatory or inhibitory. Unfortunately the match is looking more like a forced marriage… Continue reading “Control theory meets connectomes?”
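For a concrete picture of the machinery involved, here is a toy sketch (my own illustration, not the paper’s analysis). For a linear system dx/dt = Ax + Bu, the classical Kalman test says the system is controllable when the matrix [B, AB, A^2 B, …, A^(n-1) B] has full rank n. Structural controllability asks whether that condition holds for almost every choice of weights on a fixed wiring pattern, which is why only the pattern of non-zero entries matters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Wiring pattern of a toy 4-neuron chain, 1 -> 2 -> 3 -> 4.
# A_pattern[i, j] = 1 means neuron j synapses onto neuron i;
# only this binary pattern would be known from a connectome.
A_pattern = np.array([[0, 0, 0, 0],
                      [1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 1, 0]])
B_pattern = np.array([[1], [0], [0], [0]])   # external input drives neuron 1 only

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, A^2 B, ...]."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks))

# Generic (structural) controllability: put random weights on the existing
# connections many times and check the rank condition each time.
n = A_pattern.shape[0]
ranks = [
    kalman_rank(A_pattern * rng.normal(size=A_pattern.shape),
                B_pattern * rng.normal(size=B_pattern.shape))
    for _ in range(100)
]
print(min(ranks), "of", n)   # 4 of 4: full rank for essentially every weight draw
```

Structural controllability theory reaches the same verdict combinatorially, from the wiring pattern alone, without sampling any weights.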