Magnetofiction – A Reader’s Guide


This sub-genre of science fiction has made its way into mainstream science journals. I argue that the science is sketchy and the fiction is disappointing. Submit your own Biophyctional Journal abstracts in the comments!


Science fiction rests on a compact between author and reader. The reader grants the author license to make some outrageous assumptions about the state of science and technology. In return the author spins an exciting yarn that may also hold some lessons about human nature. Recently a sub-genre of science fiction has made its entry into mainstream scientific journals. Again the authors ask readers to imagine that nature works very differently from what we know now, by factors of a million or up to 10 trillion. Then they speculate what might happen under those circumstances. Continue reading “Magnetofiction – A Reader’s Guide”

Irresponsible musings on Lamarckian evolution

Preamble: In May 2018, HHMI Janelia hosted a wonderful conference on the evolution of neural circuits, organized by Albert Cardona, Melina Hale, and Gáspár Jékely. This is a transcript of a short talk I gave there. Caution: utter speculation!

Good evening. My name is Markus Meister. Many of you probably wonder what I am doing up here, and so do I. This started innocently enough when I wanted to attend this conference to learn about a different field. But the organizers asked for a poster abstract and I sheepishly complied. Then Albert Cardona unilaterally converted this poster to a talk, so now you know whom to blame. Let me also give two warnings. First, what I’ll present is pure speculation: there are no new results, only old results strung together into questions. Fortunately it is timed to transition into the Après-Ski part of the conference, so please consider this part of the evening entertainment. Second, I want to deliver a trigger warning. For those of you not at an American university, we professors are encouraged to warn our sensitive students when a lecture threatens to bring up a subject they might find offensive. The present talk will include notions reminiscent of Jean-Baptiste Lamarck, in particular his idea that acquired characters are heritable.

Neuro-Evo 180506.002

I want to consider how neural circuits evolved to what we find today. In any given brain one finds some circuits that are essentially specified by the genome. A good example of that is the retina: pretty much the same circuit in every member of the species, with rather little influence from visual experience. It’s a rather complex and precise network among 80 or so cell types that ultimately produces about 30 different types of retinal ganglion cell with different visual functions. Those functions are exquisitely well adapted to the visual environment. For example the well-known center-surround organization of receptive fields matches beautifully with the statistical structure of natural scenes to produce a kind of image compression that functions a bit like JPEG. Delving into more detail, we find a type of retinal ganglion cell designed to pick out a moving object against a moving background, a function that is obviously useful for both predator and prey animals. Each of these circuits relies on precise connectivity among a dozen or so different cell types. And that connectivity is ultimately determined by the developmental program stored in the genome.
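The compression analogy can be made concrete with a toy computation. A center-surround receptive field acts like a difference-of-Gaussians filter: it cancels the smooth, highly correlated parts of a scene and responds mainly at edges, which is the decorrelation step underlying compression schemes like JPEG. A minimal one-dimensional sketch (the filter widths and the toy scene are illustrative, not fitted to real ganglion cells):

```python
import numpy as np

def gaussian(n, sigma):
    """Normalized 1D Gaussian kernel of length n."""
    x = np.arange(n) - n // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

# Center-surround receptive field: narrow excitatory center
# minus broad inhibitory surround; the kernel sums to zero.
dog = gaussian(31, 2.0) - gaussian(31, 6.0)

# A toy scene: a smooth ramp (highly correlated) with one sharp edge.
signal = np.linspace(0.0, 1.0, 200)
signal[100:] += 1.0

response = np.convolve(signal, dog, mode="same")

# The zero-sum filter cancels the smooth ramp exactly, but responds
# strongly at the edge -- most output samples are near zero, which is
# the redundancy reduction that makes compression possible.
print(abs(response[50]), np.abs(response[90:110]).max())
```

With these parameters the response on the smooth ramp is zero to machine precision, while the edge produces a sizable biphasic bump, so nearly all the "bits" of the output are spent on the informative part of the scene.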

Neuro-Evo 180506.003

The common explanation for how this adapted circuit came about is through the cumulative effects of mutation at the gene level and natural selection at the level of behavior. And I’m sure that is a large part of the story. But we have to consider that the brain also has powerful mechanisms for adapting in real time. In the visual system, if we just take a couple steps forward from the retina, we find circuits in the mammalian cortex that are much more dependent on environmental exposure, and can adapt plastically to many different environments. For example you can raise a mouse with cylindrical goggles so it experiences almost exclusively patterns with one orientation. After just a few weeks of this the cortex has reorganized to preferentially encode that orientation. The detailed synaptic circuits that accomplish this are acquired through experience, shaped by various cellular mechanisms of learning. In the course of a lifetime an organism acquires a great deal of experience from the environment that gets stored in synaptic connections within these plastic circuits. But the common notion of brain evolution says that all this experience is lost to the next generation.

Neuro-Evo 180506.004

So I would like to revisit this question now: Is it possible that some of the acquired knowledge about the environment is in fact passed on genetically to the offspring? And more specifically, is it possible that the innate neural circuits we find today are really the accumulated result of many generations of acquired circuitry that eventually got frozen in the genome? This of course relates to Lamarck’s doctrine that organisms can pass acquired traits on to their offspring, though it is worth remembering that Darwin also speculated about the same possibility. But why revisit this question now? Do we know of any example where an organism learns from the environment and stores that knowledge in the genome for the benefit of future generations?

Neuro-Evo 180506.005

Of course, you will say: the ancient bacterial immune system. By now even theoretical physicists have been caught up in the hoopla around CRISPR. But beyond the technological promise of gene editing and the associated patent fights, I find the biological function of this system more fascinating. The molecular machinery in the bacterium identifies foreign DNA, like from a virus infection, chops it into small pieces, and inserts those pieces at a special location in the bacterial genome. Meanwhile other proteins read the sequences stored in that memory and look for matching items floating around in the cell to destroy them. The descendants of this cell inherit the stored memories and are therefore better prepared for threats in the environment. Of course this clever mechanism itself had to evolve somehow, but once in place it opens the door to a Lamarckian mechanism of inheritance.

Given that some early forms of life have developed a way to record experience in the genome, it seems plausible that Nature has continued to use this powerful principle. Now there are two aspects of this bacterial system that make it particularly easy to transmit acquired experience genetically. First, the information gathered about the environment is already in the form of genetic sequences, such as a viral genome. Second, the cell that is sampling the environment is the same cell that transmits the genome to the descendants. In multi-cellular animals, of course, the germ line is separated from the rest of the body, and the environmental experience is not directly available as a DNA sequence. So in thinking about how this might work in animals we need to try to solve both problems: turn synaptic connectivity into sequence information, and then transmit that from the brain to the gonads.

Neuro-Evo 180506.006

So let’s think about a concrete example with minimal complexity, what is commonly called a “toy model”. We suppose that lifetime experience has set up new synaptic circuits in the brain. This leads to new synapses between two cell types, call them A and B. Each of these types is defined by expression of a specific transcription factor, TF-A and TF-B. We would like to communicate this event to the germline in such a way that in the next generation these two neurons are more likely to form a synapse all by themselves, without the need for learning and plasticity. Of course we don’t require that this happens deterministically after just one generation. Even if the probability of forming such a connection is enhanced just by a tiny amount, that biases the genetic program for this neural circuit a little bit, and that bias can accumulate over generations.

So how do we imagine that information about this event in the brain gets to the germ cells? Whatever carries that signal has to be an information-rich message, because the event is rather specific. Assuming the brain has about 1000 cell types, and we want to encode synapses between any two of them, that requires about a million different symbols. So the signal has to be something with a million possible values. Obviously we’re not talking about some hormone. Instead the likely carrier would be a nucleotide sequence, like RNA. Short RNAs would be sufficient, since 10 bases can already encode a million signals.
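The counting argument is simple enough to check. With 1000 cell types (the assumption above), ordered pairs give a million possible "synapse between A and B" messages, and an RNA of length n over the four-letter alphabet carries 4^n distinct messages:

```python
n_cell_types = 1000
n_messages_needed = n_cell_types ** 2      # one symbol per ordered pair (A, B)

def rna_capacity(n_bases):
    """Number of distinct sequences of length n_bases over {A, C, G, U}."""
    return 4 ** n_bases

# Ten bases already suffice: 4**10 = 1,048,576 > 1,000,000.
print(n_messages_needed, rna_capacity(10))
```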

Now in recent years there has been a flurry of discoveries about RNA mechanisms, and I want to point to three of them that may be relevant for the scheme we’re considering here. Starting with synapse formation: We learned just a few months ago that at a synaptic junction cells can pass RNA to each other inside a virus-like capsid, and that alters gene expression in the partner neuron. One might imagine that this could be used to strengthen a synaptic connection. For example if cell A expresses a homotypic cell adhesion molecule CAM-A, the mRNA for that could transfer to cell B, leading to expression of CAM-A in that neuron as well and consequent cell adhesion. More importantly, cell B now has a complement of RNAs that it did not have before this event, for example the mRNAs for transcription factor TF-B and for the adhesion molecule CAM-A. That can lead to the production of new species of RNA that did not exist before.

Neuro-Evo 180506.007

Second, how can that information flow to the germ line? One candidate mechanism is extracellular vesicles, sometimes called exosomes (although that term is ambiguous). These are small vesicles extruded from the plasma membrane of many cells. They can carry protein, RNA, and DNA, providing a way for cells to exchange all these information-rich signals. They are transported through the blood and, remarkably enough, they can cross the blood–brain barrier. Separate evidence says they are taken up in the testes. So in principle a route exists for RNA molecules to convey information about such peripheral events to the germline.

Finally, how could that information be used to alter the genome? Again here we are learning about new mechanisms that are potentially relevant. In particular there are a number of reports of DNA editing guided by sequences in non-coding RNA molecules. Of course the man-made systems based on CRISPR-Cas9 present just such a mechanism, but there seem to be related endogenous processes as well.

How would one test whether something like this is afoot? There have been a good number of studies of transgenerational learning, starting perhaps with the infamous experiments on cannibalistic planarians of the 1960s. I don’t think that’s a promising route, because any bias for directed evolution from such a cascade would likely be too weak to be observed in one or a few generations. Instead one might look for some essential mechanistic ingredient, like the postulated transport of polynucleotide signals from brain to germ cells. Is it possible, for example, to chemically label all the RNA synthesized in the brain and look for species that make their way to the testes?

As I said at the outset, these ruminations are purely speculative. But it seems to me there is still room for discovery in the ample dark regions of the genome, and some mechanism of this kind would perhaps fit there. If so it could fundamentally change our view of how the complex circuits of the brain came about. In any case, I hope that some of the napkins at the bar get used to sketch some candidate mechanisms or even to prove why it cannot possibly work.

Literature cited on the slides:

Ashley J, Cordy B, Lucia D, Fradkin LG, Budnik V, Thomson T. Retrovirus-like Gag Protein Arc1 Binds RNA and Traffics across Synaptic Boutons. Cell. 2018;172: 262–274.

Briggman KL, Helmstaedter M, Denk W. Wiring specificity in the direction-selectivity circuit of the retina. Nature. 2011;471: 183–188.

Kreile AK, Bonhoeffer T, Hübener M. Altered visual experience induces instructive changes of orientation preference in mouse visual cortex. J Neurosci. 2011;31: 13911–20.

Pastuzyn ED, Day CE, Kearns RB, Kyrke-Smith M, Taibi AV, McCormick J, et al. The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer. Cell. 2018;172: 275–288.

Pitkow X, Meister M. Neural computation in sensory systems. In: Gazzaniga MS, Mangun GR, editors. The Cognitive Neurosciences. 5th ed. Cambridge, MA: MIT Press; 2014. pp. 305–318.

Sanes JR, Masland RH. The types of retinal ganglion cells: current status and implications for neuronal classification. Annu Rev Neurosci. 2015;38: 221–246.

Zhang Y, Kim IJ, Sanes JR, Meister M. The most numerous ganglion cell type of the mouse retina is a selective feature detector. Proc Natl Acad Sci U S A. 2012;109: E2391–8.

Open review: WN Grimes et al., Rod signaling in primate retina – range, routing and kinetics

This preprint presents new insights on visual processing in the retina, specifically how signals from rod photoreceptors are handled. Our visual system must operate over a huge range of light intensities, about 9 log units in the course of a day. To meet this challenge the retina uses two kinds of photoreceptors: in the dimmest conditions only the sensitive rods are active; in the brightest conditions, only the cones. In between, the retina gradually switches from one input neuron to the other. However, even before the cones take over, the rod pathway undergoes substantial changes with increasing light level: the gain decreases and the speed of processing increases. This article challenges the prevailing notion of how those changes are accomplished.

When one inspects the neural circuits of the retina, the rods seem like an evolutionary afterthought added to a backbone circuit that was devoted to cones. The rod has no direct communication line to the output neurons, the retinal ganglion cells. Instead, its signals get spliced into the circuitry at various locations. The authors distinguish three routes: (1) an interneuron pathway through specialized rod bipolar and amacrine cells, which allows for high spatial convergence and high gain; (2) direct transmission of rod signals to cones via electrical junctions; and (3) transmission from rods to Off bipolar cells that predominantly handle cone signals. Each of these routes imposes a different gain and kinetics on the rod signal because different synapses are engaged.

With this background it has been proposed that the changes in human perception of rod stimuli that occur with increasing light level reflect changes in the neural routing of signals through the different synaptic pathways. In the mouse retina there is reasonable evidence for this, but the idea has received less scrutiny in primate retina, which ultimately matters for human vision. Here the authors come to the surprising conclusion that the primate retina uses pathway 1 for rod signals almost exclusively. Furthermore they claim that the changes in kinetics of rod signal processing observed in human psychophysics are largely explained by changes occurring within the rod photoreceptor cell itself. Parallel experiments in mouse retina using the same strategy gave very different results: a large contribution from pathways 2 and 3 over the functional rod range.

The significance of this study lies, first, in revisiting a central problem of human vision, namely light adaptation; second, in illuminating the relation between structure and function of neural circuits in a case where both are exquisitely accessible by experiment; and third, in contributing to the question of whether rodents offer the best model for the human nervous system.

The study looks very well designed and the experiments are executed with impressive skill. The main method was to record neural signals from diverse neurons in the retina that play different roles in the three rod pathways, and therefore sample signals from the three pathways in different proportions. For example the primate H1 horizontal cell soma should receive rod signals only through pathway 2, whereas the AII amacrine cell lies on both pathways 1 and 2. Various pharmacological agonists and antagonists were also used to block one pathway or another. The cross-species comparison with the mouse provides a positive control for the effects that seemed to be missing in the primate retina.

Regarding the conclusions drawn from the experiments, a few questions remain:

  1. Psychophysics of parallel rod pathways. The authors suggest that the human perceptual effects can be explained by a single pathway for rod signals whose gain and kinetics vary as a function of light level. However, some psychophysical experiments show that multiple pathways for rod signals are in operation simultaneously, see for example the phenomenon of “rod self-cancellation” (Stockman and Sharpe, 2006). Can that be reconciled with the single-pathway hypothesis?
  2. Health of rod-cone junctions. In terms of neural mechanisms, the authors suggest that in primate retina the rod-cone junctions are too weak to initiate a substantial pathway 2. Is it possible that these junctions are altered by the procedure of isolating the retina in vitro? The junctions seem to be sensitive to modulation, for example by circadian time, through the actions of dopamine (Ribelayga et al., 2008). That could be perturbed by isolating the retina, which would directly affect the main conclusion. How could one check whether these junctions are in the same state as in the intact human eye?
  3. Strength of the third pathway. The single-pathway conclusion seems less compelling for the Off retinal ganglion cells. Suppressing the On bipolars (which blocks pathway 1) eliminates 80% of the rod signals at a light level of 20 R*/s, but only 20% at 200 R*/s (line 288). In between there is a large range, and it all falls within the 300 R*/s range that the authors consider for rod signaling. Thus it seems that there could be a substantial contribution from pathways 2 or 3 over a substantial part of the rod signaling range. This should be elaborated further.
  4. A fourth pathway. Another potential pathway for rod signals was not considered here: rods excite H1 horizontal cells which inhibit cone terminals. There is reasonable evidence for this route in the mouse retina, where it seems to provide the entire receptive field surround of certain ganglion cells (Joesch and Meister, 2016; Szikra et al., 2014; Trümpler et al., 2008). This may affect the interpretation of some of the mouse experiments here. In the primate retina the H1 cell seems to be much less sensitive to direct rod input (Verweij et al., 1999), as stated in the manuscript (line 181).

Some other comments and suggestions:

  1. The circuit diagrams in Figs 1, 3A, 6D are confusing to anyone who doesn’t already know these circuits by heart. This could be much improved. (1) Show all the circuit components (cells and synapses) in the same diagram, so one can compare the different pathways. As it stands the same circuit element gets relabeled from On bipolar to Off bipolar in different subpanels and then new synapses appear (Fig 3B-C). A fix to this problem can be found in Fig 1 of (Rivlin-Etzion et al., 2018). (2) To make any sense of these circuits it is essential to distinguish sign-preserving and sign-inverting synapses, for example the primary pathway to Off bipolars involves two sign inversions. That should be indicated by different symbols in the circuit diagram. See for example Fig 1 of (Soucy et al., 1998).
  2. Lines 208ff: The comparison between the mouse and macaque is a key result, but a bit difficult to appreciate because it extends over multiple figures. For example Figs 6F and H don’t seem all that different: there’s a regime where only one pathway is active, and another where the other dominates, and then something in between. This transition happens at lower light levels for the mouse than the monkey. The significance becomes apparent only when one compares to the graphs of rod saturation (Fig 2B and Fig 4A), which happens at higher light levels in the mouse (half-saturation at 120 R*/s) than the monkey (45 R*/s). So the mouse has multiple pathways active within the regime where the rods are still functioning just fine, but not so for the monkey. Maybe one could combine these two graphs, e.g. by plotting the relative contribution of different pathways on the y-axis vs fraction of rod saturation on the x-axis. One curve for each species should make the difference clear.
  3. Line 555, “Rods were selectively stimulated using the 405 nm LED”: This isn’t quite right. That wavelength stimulates the rod pigment and the L and M cone pigments equally well (relative to their peak absorption wavelength), because it falls into the beta band of all those absorption spectra (Schnapf et al., 1988). The 620 nm light on the other hand is selective for cones. Of course the ganglion cells are much more sensitive to rods because of the larger number and higher gain of rods (under conditions of the present experiments). So one might say that “the 405 nm light stimulates RGCs selectively through rods rather than cones”.
  4. Fig 3 and its supplements: What is the meaning of “flash contrast” measured in %? The background is measured in units of R*/rod/s but flash strength in units of R*/rod. How does one get a dimensionless ratio?
  5. Fig 3 supplement 2: The panel B legend in the figure is confusing: it suggests the recordings are from a rod and a cone.
  6. Fig 4: Panels A and B should use the same symbol color for the same species, for easier comparison.
  7. Typos: line 142 “dominant”, line 155 “complementary”.
  8. Wording: line 314 “speed up” or “accelerate” instead of “speed”?



Joesch, M., and Meister, M. (2016). A neuronal circuit for colour vision based on rod-cone opponency. Nature 532, 236–239.

Ribelayga, C., Cao, Y., and Mangel, S.C. (2008). The circadian clock in the retina controls rod-cone coupling. Neuron 59, 790–801.

Rivlin-Etzion, M., Grimes, W.N., and Rieke, F. (2018). Flexible Neural Hardware Supports Dynamic Computations in Retina. Trends Neurosci. 41, 224–237.

Schnapf, J.L., Kraft, T.W., Nunn, B.J., and Baylor, D.A. (1988). Spectral sensitivity of primate photoreceptors. Vis Neurosci 1, 255–261.

Soucy, E., Wang, Y., Nirenberg, S., Nathans, J., and Meister, M. (1998). A novel signaling pathway from rod photoreceptors to ganglion cells in mammalian retina. Neuron 21, 481–493.

Stockman, A., and Sharpe, L.T. (2006). Into the twilight zone: the complexities of mesopic vision and luminous efficiency. Ophthalmic Physiol Opt 26, 225–239.

Szikra, T., Trenholm, S., Drinnenberg, A., Juttner, J., Raics, Z., Farrow, K., Biel, M., Awatramani, G., Clark, D.A., Sahel, J.A., et al. (2014). Rods in daylight act as relay cells for cone-driven horizontal cell-mediated surround inhibition. Nat Neurosci 17, 1728–1735.

Trümpler, J., Dedek, K., Schubert, T., de Sevilla Müller, L.P., Seeliger, M., Humphries, P., Biel, M., and Weiler, R. (2008). Rod and cone contributions to horizontal cell light responses in the mouse retina. J Neurosci 28, 6818–6825.

Verweij, J., Dacey, D.M., Peterson, B.B., and Buck, S.L. (1999). Sensitivity and dynamics of rod signals in H1 horizontal cells of the macaque monkey retina. Vis. Res. 39, 3662–3672.

Death of the sampling theorem?


A team from Columbia University led by Ken Shepard and Rafa Yuste claims to beat the 100-year-old sampling theorem [1,2]. Apparently anti-aliasing filters are superfluous now because one can reconstruct the aliased noise after sampling. Sounds crazy? Yes it is. I offer $1000 to the first person who proves otherwise. To collect your cool cash be sure to read to the end.

“Filter before sampling!”

This mantra has been drilled into generations of engineering students. “Sampling” here refers to the conversion of a continuous function of time into a series of discrete samples, a process that happens wherever a computer digitizes a measurement from the world. “Filter” means the removal of high-frequency components from the signal. That filtering process, because it happens in the analog world, requires real analog hardware: so-called “anti-aliasing” circuits made of resistors, capacitors, and amplifiers. That can be tedious, for example because there isn’t enough space on the electronic chips in question. This is the constraint considered by Shepard’s team, in the context of a device for recording signals from nerve cells [2].
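The point of the mantra is easy to demonstrate numerically: once a signal above the Nyquist frequency has been sampled, its samples are identical to those of a lower-frequency alias, so no amount of digital post-processing can tell the two apart. A minimal sketch (the frequencies are chosen for convenience):

```python
import math

fs = 10.0          # sampling rate (Hz); the Nyquist frequency is 5 Hz
f_high = 7.0       # above Nyquist: folds down to |7 - 10| = 3 Hz

# Sample a 7 Hz sine and its alias at (7 - 10) = -3 Hz.
high  = [math.sin(2 * math.pi * f_high * k / fs) for k in range(40)]
alias = [math.sin(2 * math.pi * (f_high - fs) * k / fs) for k in range(40)]

# The two sample sequences agree to machine precision: after sampling,
# the 7 Hz component is indistinguishable from the 3 Hz alias, which is
# why the filtering must happen in analog hardware, before the sampler.
print(max(abs(a - b) for a, b in zip(high, alias)))
```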

Now these authors declare that they have invented an “acquisition paradigm that negates the requirement for per-channel anti-aliasing filters, thereby overcoming scaling limitations faced by existing systems” [2]. Effectively they can replace the anti-aliasing hardware with software that operates on the digital side after the sampling step. “Another advantage of this data acquisition approach is that the signal processing steps (channel separation and aliased noise removal) are all implemented in the digital domain” [2].

This would be a momentous development. Not only does it overturn almost a century of conventional wisdom. It also makes obsolete a mountain of electronic hardware. Anti-alias filters are ubiquitous in electronics. Your cell phone contains multiple digital radios and an audio digitizer, which between them may account for half a dozen anti-alias circuits. If given a chance today to replace all those electronic components with a few lines of code, manufacturers would jump on it. So this is potentially a billion dollar idea.

Unfortunately it is also a big mistake. I will show that these papers do nothing to rattle the sampling theorem. They do not undo aliasing via post-hoc processing. They do not obviate analog filters prior to digitizing. And they do not even come close to the state of the art for extracting neural signals from noise. Continue reading “Death of the sampling theorem?”

The fable of the missing log units

Some time ago I published a critique of attempts at “magnetogenetics” – specifically the approach based on coupling a ferritin complex to an ion channel. See my article along with the original authors’ reply here:

Meister, M. (2016). Physical limits to magnetogenetics. eLife 5, e17210.

I argued that the effects of magnetic fields on ferritin are much too weak to account for the reported observations of neural modulation. The discrepancy is 5 to 10 orders of magnitude. Several people have asked what that means. How much of a problem does this really represent? To illustrate that, here is a short fable…

Earlier this year, a team of engineers announced a discovery that could go a long way to solving the world’s energy problems. Their article, published in Nature Automotive, reports the invention of an electric car that can run for an entire year on a single AA battery. “It took a lot of persistence on the part of my students”, says the senior author. “We literally tried 21 different brands of AA battery before we found one that worked” [1].

Now a paper in eCars casts doubt on the discovery. The author performed some calculations on the amount of work needed to push a car around for a year and the amount of electrical energy stored in a battery. He says there is a discrepancy of 7 orders of magnitude, and that makes the claims very improbable: “If the car really drove around for a year it is unlikely to have anything to do with the AA battery.” The author also faults the reviewers of the original article for not recognizing how improbable the claims are, and thus failing to raise the bar for the empirical evidence accordingly. He concedes it is possible that the claimed discovery opens a window on entirely new physics, but says: “Both batteries and cars have been studied for a long time, and we have a very successful model of how they work”.
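Readers who want to redo the critic's arithmetic can use round numbers (all figures here are generic assumptions, not taken from the fictional paper). With an efficient electric car the gap comes out around six orders of magnitude; a thirstier car or a weaker battery pushes it toward the seven the critic cites:

```python
import math

# Energy stored in one AA cell: roughly 2.5 Ah at 1.5 V.
battery_joules = 2.5 * 1.5 * 3600        # ~1.4e4 J

# Energy to drive for a year: ~15,000 km at ~0.2 kWh/km (efficient EV).
car_joules = 15_000 * 0.2 * 3.6e6        # ~1.1e10 J

gap = math.log10(car_joules / battery_joules)
print(f"discrepancy: about {gap:.1f} orders of magnitude")
```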

The Nature Automotive authors reply that they never proposed a mechanism for the remarkable result. They stand by their data and state that empirical observation must take priority over any theory. Because they are not experts in physics, they should not be expected to explain how the data came about.

The critic points out that Nature Automotive and similar journals have had a rather poor track record: About half of the studies published there cannot be replicated for one reason or another. Not long ago, the journal reported the invention of a car that actually produced fuel while driving, such that the gas tank needed to be emptied at regular intervals [2]. A magician dispatched by the journal subsequently debunked that report, and explained it as a mixture of wishful thinking and self-deception [3]. Nature Automotive and other journals like it profess to be concerned about the profusion of false claims, and want to improve their ability to spot those before publication. The eCars critic suggests that one ought to start with the manuscripts whose claims fly in the face of everything we know about how things work. No word yet from the editor or the referees of the original article.

[1] Vogt, N. (2016). Neuroscience: Manipulating neurons with magnetogenetics. Nature Methods 13, 394.

[2] Davenas, E., Beauvais, F., Amara, J., Oberbaum, M., Robinzon, B., Miadonnai, A., Tedeschi, A., Pomeranz, B., Fortner, P., Belon, P., et al. (1988). Human basophil degranulation triggered by very dilute antiserum against IgE. Nature 333, 816.

[3] Maddox, J., Randi, J., and Stewart, W.W. (1988). “High-dilution” experiments a delusion. Nature 334, 287–291.

Control theory meets connectomes?

My colleagues and I have been working through this intriguing paper [1] from a few weeks ago:

Yan, G., Vértes, P.E., Towlson, E.K., Chew, Y.L., Walker, D.S., Schafer, W.R., and Barabási, A.-L. (2017). Network control principles predict neuron function in the Caenorhabditis elegans connectome. Nature advance online publication.

This seems like a very important contribution. It promises detailed insights about the function of a neural circuit based on its connectome alone, without knowing any of the synaptic strengths. The predictions extend to the role that individual neurons play for the circuit’s operation. Seeing how a great deal of effort is now going into acquiring connectomes [2] – mostly lacking annotations of synaptic strengths – this approach could be very powerful.

The starting point is Barabási’s “structural controllability theory” [3], which makes statements about the control of linear networks. Roughly speaking, a network is controllable if its output nodes can be driven into any desired state by manipulating the input nodes. Obviously controllability depends on the entire set of connections from inputs to outputs. Structural controllability theory derives some conclusions from knowing only which connections have non-zero weight. This seems like a match made in heaven for the structural connectomes of neural circuits derived from electron microscopic reconstructions. In these data sets one can tell which neurons are connected, but not the strength of those connections, or even whether they are excitatory or inhibitory. Unfortunately the match is looking more like a forced marriage… Continue reading “Control theory meets connectomes?”
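For readers unfamiliar with the linear-systems background: controllability of a linear network dx/dt = Ax + Bu is decided by the Kalman rank condition, and "structural" controllability asks whether that condition can hold for some generic choice of nonzero weights on a given wiring diagram. A toy sketch (the three-neuron chain and its weights are hypothetical, not from the paper):

```python
import numpy as np

def controllable(A, B):
    """Kalman rank test: the pair (A, B) is controllable iff
    [B, AB, A^2 B, ..., A^(n-1) B] has full rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# A three-neuron chain driven at its head: input -> x0 -> x1 -> x2.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
B = np.array([[1.], [0.], [0.]])
print(controllable(A, B))         # True: the whole chain can be steered

# Structural controllability asks whether SOME choice of nonzero weights
# passes the rank test. Random weights on the same wiring give the
# generic answer, since failures occupy a measure-zero set of weights.
A_random = A * np.random.uniform(0.5, 2.0, A.shape)
print(controllable(A_random, B))  # True again, for almost any weights
```

The connectome supplies only the zero/nonzero pattern of A, which is exactly the input to the structural version of this test; whether the generic answer matters for a real circuit, with its particular excitatory and inhibitory weights, is the question at issue.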