
Molecular bumper cars (RNA polymerase-ribosomal interactions): their (unexpected) functional effects and how to control them

Cells are extremely complex.1 Much of their “core” complexity appears to have been present in their last (universal) common ancestor, known as LUCA. We find it in the “conserved” molecular mechanisms and machines active in modern cells. LUCA and its offspring are membrane-bounded, non-equilibrium systems that import free energy and export entropy to maintain and repair themselves, to grow, behave, and reproduce (and all the other things living things do). One problem with LUCA, however, is that the steps that preceded it are impossible to know with certainty. Notwithstanding claims of breakthroughs (e.g. “‘Monumental’ experiment suggests how life on Earth may have started”), it is likely that we will never know the actual steps involved; after all, the origin of life occurred billions of years ago and under rather different conditions than exist today.

Living systems “work” based on inherited, pre-existing molecular machines and mechanisms (1). The actions of these machines are fueled through coupling to thermodynamically favorable reactions taking place under non-equilibrium conditions, i.e. the living state. Looking at the details of these interactions reveals interesting and unexpected behaviors. Unfortunately, the “simple” physical-chemical underpinnings of these processes, key to understanding them, are not always presented to students effectively (2). At the same time, the complexity of cellular systems means that, in practice, the link between “simple” molecular mechanisms and the behavior of a biological system can be obscure (see 3). That said, key insights emerge when molecular mechanisms are examined closely, as illustrated by Wee et al. (2023) (4).

Emerging from LUCA, biological populations have diverged into distinct “prokaryotic” lineages: the bacteria and archaea.2 Both are defined by a protein-lipid boundary layer, the plasma membrane. Within this membrane is a single internal compartment, the cytoplasm. Cells store information in two forms: first, in the ongoing, LUCA-derived living system itself, and second, in the sequence of their double-stranded DNA molecules. These two types of information are interdependent: the information in DNA makes sense only within a cell, and ongoing cellular processes depend upon and utilize the information in DNA. In bacteria and archaea, these are circular double-stranded DNA molecules. Here we restrict our discussion to the common unicellular bacterium Escherichia coli (E. coli), one of the workhorse systems that led to an understanding of core molecular mechanisms.

E. coli has a single circular genomic DNA molecule of ~5 million base pairs; it contains about 5000 distinct genes that encode polypeptides and functional “non-coding” RNA molecules (if you are interested in numbers, check out bionumbers). An E. coli cell is rod-shaped and ~1 micrometer (10⁻⁶ meters) long. Its genomic DNA molecule is ~1000 times longer than the cell that contains it, and a rapidly dividing cell can contain multiple copies of the genome. Genes typically contain two distinct functional regions. Regulatory regions interact with various proteins that determine whether a gene is “expressed” or not. Coding regions specify what is expressed. The first step of expression is the synthesis of an RNA molecule; such a molecule can encode a polypeptide or act as a functional non-coding RNA.3 Non-coding RNAs can have structural, catalytic, or regulatory functions.

The first step in gene expression in all cell types is the binding of proteins to a gene’s regulatory sequences. Typically, a complex of proteins leads to the binding and activation of a DNA-dependent RNA polymerase. The RNA polymerase complex unwinds a specific region of the DNA and uses the complementary nature of nucleotide base pairing (A binding to T in DNA and U in RNA, and C binding to G in both) to synthesize an RNA molecule based on the DNA sequence. Synthesis of RNAs that encode polypeptides, known as messenger RNAs (mRNAs), starts at the 5′ end of the molecule and moves toward the 3′ end.

In prokaryotic cells, both DNA and mRNA synthesis reactions occur in the cytoplasm. A ribosome, a molecular machine composed of multiple proteins and RNAs, can engage the 5′ end region of an mRNA as soon as it appears – before the synthesis of the mRNA is complete. The cytoplasm of a cell contains lots of ribosomes; in E. coli there are ~70,000 ribosomes per cell (more or less). This leads to some interesting and functionally significant interactions. One thing to consider, not always stressed, is that these synthetic processes are not error-proof. DNA replication (DNA-directed DNA synthesis), transcription (DNA-directed RNA synthesis), and polypeptide synthesis (RNA-directed polypeptide synthesis) all have an error rate, typically 1 error per ~10⁶ addition events for DNA replication and transcription. Errors can lead to mutations in the DNA, RNAs that encode abnormal proteins, or abnormal and potentially toxic polypeptides.

To deal with these physical realities, these synthetic processes employ various “error correction” strategies. In the case of DNA and RNA synthesis, the polymerases involved have what are known as “proof-reading” activities. If an incorrect nucleotide is inserted into a growing DNA or RNA chain, it can be recognized; the polymerase can then “reverse” (move backward along the DNA), remove the mistakenly inserted nucleotide, and then move forward again, adding the correct nucleotide. Key here is that the polymerase is moving back and forth along the DNA strand. The result of proof-reading is to reduce the error rate of DNA-dependent DNA and RNA synthesis substantially, down to 10⁻⁸ to 10⁻¹⁰ per base pair in the case of DNA synthesis.
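
To get a feel for what these rates mean, here is a back-of-the-envelope sketch (the transcript length and rates are my own illustrative assumptions, not values from the post): with independent errors at each addition step, the fraction of error-free products of a given length follows directly from the per-nucleotide error rate.

```python
# A rough sketch (assumed transcript length and rates) of how per-nucleotide
# error rates translate into the fraction of error-free products.
def fraction_error_free(per_nt_error_rate: float, length_nt: int) -> float:
    """Probability that a chain of length_nt contains no misincorporations,
    assuming errors at each addition step are independent."""
    return (1.0 - per_nt_error_rate) ** length_nt

length = 1_000  # assumed length of a typical transcript, in nucleotides
for label, rate in [("without proofreading (~1e-6 per nt)", 1e-6),
                    ("with proofreading    (~1e-8 per nt)", 1e-8)]:
    print(f"{label}: {fraction_error_free(rate, length):.6f} error-free")
```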

In the case of the RNA polymerase, the newly synthesized RNA can fold back on itself, forming what is known as a “hairpin”. This hairpin “can stabilize an elemental pause (in RNA synthesis) [via] an allosteric interaction with the β-flap tip helix of RNAP”. What Wee et al (4) report is that as the mRNA-associated ribosome moves along the RNA, it unfolds the hairpin and “bumps” into the polymerase, inhibiting this “pause”; this increases the rate of mRNA synthesis but also inhibits the polymerase’s error-correction function. The resulting mRNA population has more frequent base changes, errors that can influence the polypeptides synthesized. While cells of all types have various “chaperone” systems that can deal with misfolded proteins that arise in response to various stresses or errors, these can be overwhelmed. The resulting misfolded (damaged) proteins can lead to cellular defects and long-term effects on viability (discussed in 5).

About 1.5 billion years later (give or take), a new type of cell appeared, the result (apparently) of a symbiotic interaction between an archaeal-like “host” and an O₂-utilizing bacterium. This symbiotic organism, the progenitor of the eukaryotes, differed from either type of prokaryote in that it sequestered its genome, now composed of linear DNA molecules, within a double-membrane-bounded “nuclear” compartment. In this hybrid cell type, DNA and RNA synthesis was confined to the nucleus, while ribosomes and polypeptide synthesis were confined to the cytoplasm. Eukaryotic cells are typically much larger than prokaryotic cells, reproduce more slowly, and are more complex in terms of the number of genes and the amount of genomic DNA they contain. It is tempting to speculate that while rapidly dividing, relatively simple prokaryotic cells may be able to tolerate more mistakes in the synthesis of their polypeptides, larger, more complex eukaryotic cells would be more vulnerable. A plausible result would be a selection pressure to separate RNA synthesis from polypeptide synthesis.

literature cited

  1. Alberts, B. (1998). The cell as a collection of protein machines: preparing the next generation of molecular biologists. Cell, 92, 291-294.
  2. de Lorenzo, V. (2024). The principle of uncertainty in biology: Will machine learning/artificial intelligence lead to the end of mechanistic studies? PLoS Biology, 22, e3002495.
  3. Klymkowsky, M.W., 2021. Making mechanistic sense: are we teaching students what they need to know? Developmental Biology, 476, pp.308-313.
  4. Wee et al., 2023. A trailing ribosome speeds up RNA polymerase at the expense of transcript fidelity via force and allostery. Cell, 186, pp.1244-1262.
  5. Klymkowsky, M.W. (2019). Filaments and phenotypes: cellular roles and orphan effects associated with mutations in cytoplasmic intermediate filament proteins. F1000Research, 8.

Footnotes

  1. if you want to brush up on your molecular biology, check out chapter 7 of biofundamentals  ↩︎
  2. Image from Govindjee – doi:10.3389/fpls.2011.00028, CC BY 3.0.  Given the diversity of biological systems, these are general descriptions – often there are exceptions, but recognizing them all makes generating a coherent narrative difficult (and beyond me). Mea culpa.  ↩︎
  3. bioliteracy link: When is a gene product a protein and when is it a polypeptide? ↩︎

Determinism versus free will, a false dichotomy 

Figure: Brownian motion (from Wikipedia).

You might be constrained by and contingent on past events, but you are not determined! (That said, you are not exactly free either.)

AI Generated Summary: Arguments for and against determinism and free will in relation to biological systems often overlook the fact that neither is entirely consistent with our understanding of how these systems function. The presence of stochastic, or seemingly random, events is widespread in biological systems and can have significant functional effects. These stochastic processes lead to a range of unpredictable but often useful behaviors. When combined with self-consciousness, such as in humans, these behaviors are not entirely determined but are still influenced by the molecular and cellular nature of living systems. They may feel like free actions, but they are constrained by the inherent biological processes.

Recently two new books have appeared arguing for (1) and against (2) determinism in the context of biological systems. There have also been many posts on the subject (here is the latest one, by John Horgan). These works join an almost constant stream of largely unfounded, bordering on anti-scientific, speculation, including suggestions that consciousness has non-biological roots and exists outside of animals. Speaking as a simple molecular/cellular biologist with a more than passing interest in how to teach scientific thinking effectively, it seems necessary to anchor any meaningful discussion of determinism vs free will in clearly defined terms. Just to start, what does it mean to talk about a system as “determined” if we cannot accurately predict its behavior? This brings us to a discussion of what are known as stochastic processes.

The term random is often used to describe noise, the unpredictable variation in measurements or in the behavior of a system. Common usage implies that such noise is without a discernible cause. But the underlying assumption of the sciences, I have been led to believe, is that the Universe is governed exclusively by natural processes; magical or supernatural processes are not necessary and are excluded from scientific explanations. The implication of this naturalistic assumption is that all events have a cause, although the cause(s) may be theoretically or practically unknowable. On the theoretical side, there are the implications of Heisenberg’s uncertainty principle, which limits our ability to measure all aspects of a system. On the practical side, measuring the position and kinetic energy of each molecule (and of the parts of larger molecules) in a biological system would likely kill the cell. The apparent conclusion is that the measurement accuracy needed to consider a system, particularly a biological system, as “determined” is impossible to achieve. In a strict sense, determinism is an illusion.

The question that remains is how to conceptualize the “random” and noisy aspects of systems. I would argue that the observable reality of stochasticity, particularly in biological systems at all levels of organization, from single cells to nervous systems, largely resolves the scientific paradox of randomness. Simply put, stochastic processes display a strange and counter-intuitive behavior: they are unpredictable at the level of individual events, but the behaviors of populations become increasingly predictable as population size increases. Perhaps the most widely known examples of stochastic processes are radioactive decay and Brownian motion. Given a large enough population of atoms, it is possible to accurately predict the time it takes for half of the atoms to decay. But knowing the half-life of an isotope does not enable us to predict when any particular atom will decay. In Schrödinger’s famous scenario, a living cat is placed in an opaque box containing a radioactive atom; when the atom decays, it activates a process that leads to the death of the cat. At any particular time after the box is closed, it is impossible to predict with certainty whether the cat is alive or dead because radioactive decay is a stochastic process; only by opening the box can we know for sure the state of the cat. We can, if we know the half-life of the isotope, estimate the probability that the cat is alive, but rest assured, as a biologist who has a cat, at no time is the cat both alive and dead.
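
A minimal simulation (with an assumed half-life; the numbers are mine, purely for illustration) makes the individual-versus-population contrast concrete: each atom’s decay time is unpredictable, yet the population’s half-life is reproducible.

```python
# Sketch of stochastic decay: individual events unpredictable, population predictable.
import math
import random

random.seed(0)
half_life = 10.0                      # assumed half-life, in arbitrary time units
decay_rate = math.log(2) / half_life  # per-atom decay rate

def decay_time() -> float:
    """Sample the (unpredictable) decay time of a single atom."""
    return random.expovariate(decay_rate)

print("five individual atoms decay at t =",
      [round(decay_time(), 1) for _ in range(5)])   # scattered, unpredictable

population = sorted(decay_time() for _ in range(100_000))
print("time for half the population to decay ≈",
      round(population[len(population) // 2], 2))   # reproducibly ≈ 10
```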

Something similar is going on with Brownian motion, the jiggling of pollen grains in water first described by Robert Brown in 1827. Einstein reasoned that “if tiny but visible particles were suspended in a liquid, the invisible atoms in the liquid would bombard the suspended particles and cause them to jiggle”. His conclusion was that Brownian motion provided evidence for the atomic and molecular nature of matter. Collisions with neighboring molecules provide the energy that drives diffusion; they drive the movement of molecules so that regulatory interactions can occur, and they provide the energy needed to disrupt such molecular interactions. The stronger the binding interaction between atoms or molecules, the longer, on average, they will remain associated with one another. We can measure interaction affinities based on the half-life of interactions in a large enough population, but, as with radioactive decay, exactly when any particular complex will dissociate cannot be predicted.

Molecular processes clearly “obey” rules. Energy is moved around through collisions, but we cannot predict when any particular event will occur. Gene expression is controlled by the assembly and disassembly of multicomponent complexes, so we cannot know for sure how long a particular gene will remain active or repressed. Such unpredictable assembly/disassembly events lead to what is known as transcriptional bursting: bursts of messenger RNA synthesis from a gene followed by periods of “silence” (3). A similar behavior is associated with the synthesis of polypeptides (4). Both processes can influence cellular and organismic behaviors. Many aspects of biological systems, including embryonic development, immune system regulation, and the organization and activity of neurons and supporting cells involved in behavioral responses to external and internal signals (5), display such noisy behaviors.
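
A toy “telegraph” model conveys the idea (the switching and synthesis rates below are invented for illustration, not taken from refs. 3–4): a gene flips stochastically between ON and OFF states, and mRNAs are made only while it is ON, producing bursts separated by silent intervals.

```python
# Toy two-state (telegraph) model of transcriptional bursting.
import random

random.seed(1)
k_on, k_off, k_make = 0.05, 0.2, 2.0   # assumed rates per minute: OFF->ON, ON->OFF, mRNA synthesis
dt, t_end = 0.1, 200.0                 # time step and total simulated time (minutes)

state, t, mrna_times = "OFF", 0.0, []
while t < t_end:
    if state == "OFF" and random.random() < k_on * dt:
        state = "ON"
    elif state == "ON":
        if random.random() < k_make * dt:
            mrna_times.append(round(t, 1))   # an mRNA is made during an ON period
        if random.random() < k_off * dt:
            state = "OFF"
    t += dt

print(f"{len(mrna_times)} mRNAs made; note how their times cluster into bursts:")
print(mrna_times[:20], "...")
```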

Why are biological systems so influenced by stochastic processes? The reason is simple – they are composed of small, sometimes very small, numbers of specific molecules. The obvious and universal extreme is that a cell typically contains one to two copies of each gene. Remember, a single change in a single gene can produce a lethal effect on the cell or organism that carries it. Whether a gene is “expressed” or not can alter, sometimes dramatically, cellular and system behaviors. The number of regulatory, structural, and catalytic molecules (typically proteins) present in a cell is often small, leading to a situation in which the averaging effects of large numbers do not apply. Consider a “simple” yeast cell. Using a range of techniques, Ho et al (6) estimated that such cells contain about 42 million protein molecules. A yeast cell has around 5300 genes that encode protein components, with an average of about 8400 copies of each protein. In the case of proteins present at low levels, the effects of noise can be functionally significant. While human cells are larger and contain more genes (~25,000), each gene remains at one to two copies per cell. In particular, the number of gene regulatory proteins tends to be on the low side. If you are curious, the B10NUMB3R5 site hosted by Harvard University provides empirically derived estimates of the average number of various molecules in various organisms and cell types.
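
A minimal sketch of the small-numbers point (the copy numbers are assumptions for illustration, not the Ho et al. measurements): for Poisson-like fluctuations the relative noise scales as 1/√N, so a protein present at ~10 copies per cell fluctuates far more, proportionally, than one present at ~10,000 copies.

```python
# Relative noise (std/mean) versus mean copy number, for Poisson-like variation.
import numpy as np

rng = np.random.default_rng(0)
for mean_copies in (10, 100, 10_000):                    # e.g. a scarce regulator vs an abundant enzyme
    counts = rng.poisson(lam=mean_copies, size=10_000)   # copies in 10,000 simulated cells
    cv = counts.std() / counts.mean()                    # coefficient of variation
    print(f"~{mean_copies:>6} copies/cell -> relative noise ≈ {cv:.3f} "
          f"(1/sqrt(N) = {1 / np.sqrt(mean_copies):.3f})")
```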

The result is that noisy behaviors in living systems are ubiquitous and their effects unavoidable. Uncontrolled, they could lead to the death of the cell and organism. Given that each biological system appears to have an uninterrupted, billion-year-long history going back to the “last universal common ancestor”, it is clear that highly effective feedback systems monitor and adjust the living state, enabling it to respond to molecular- and cellular-level noise as well as to various internal and external inputs. This “internal model” of the living state is continuously updated to (mostly) constrain stochastic effects (7).

Organisms exploit stochastic noise in various ways. It can be used to produce multiple, unpredictable behaviors from a single genome, and it is one reason that identical twins are not perfectly identical (8). Unicellular organisms take advantage of stochastic processes to probe (ask questions of) their environment, and to respond to opportunities and challenges. In a population of bacteria it is common to find that certain cells withdraw from active cell division, a stochastic decision that renders them resistant to antibiotics that kill rapidly dividing cells. These “persisters” are no different genetically from their antibiotic-sensitive relatives (9). Their presence enables the population to anticipate and survive environmental challenges. Another unicellular, stochastically regulated system is the bacterium E. coli‘s lac operon, a classic system that appears to have traumatized many a biology student. It enables the cell to ask “is there lactose in my environment?” How? A repressor molecule, LacI, is present in about 10 copies per cell. When more easily digestible sugars are absent, the cell enters a stress state. In this state, when the LacI protein is knocked off the gene’s regulatory region there is a burst of gene expression. If lactose is present, the proteins encoded by the operon are synthesized and enable lactose to enter the cell and be broken down. One of the breakdown products inhibits the repressor protein, so that the operon remains active. No lactose present? The repressor rebinds and the gene goes silent (10). Such noisy regulatory processes enable cells to periodically check their environment so that genes stay on only when they are useful.

As noted by Honegger & de Bivort (11) (see also post on noise), decades of inbreeding with rodents in shared environments eliminated only 20–30% of the observed variance in a number of phenotypes. Such unpredictability can be beneficial. If an organism always “jumps” in the same direction on the approach of a predator, it won’t take long before predators anticipate its behavior. Recent molecular techniques, particularly the ability to analyze the expression of genes at the single cell level, have revealed the noisiness of gene expression within cells of the same “type”. Surprisingly, for about 10% of human genes, only the maternal or the paternal version of the gene is expressed in a particular cell, leading to regions of the body with effectively different genomes. This process of “monoallelic expression” is distinct from the dosage compensation associated with the random “inactivation” of one or the other X-chromosome in females. Monoallelic expression has been linked to “adaptive signaling processes, and genes linked to age-related diseases such as neurodegeneration and cancer” (12). The end result of noisy gene expression, mutation, and various “downstream” effects is that we are all mosaics, composed of clones of cells that behave differently due to noisy molecular differences.

Consider your brain. On top of the recently described identification of over 3000 neural cell types in the human brain (13), there is noisy as well as experience-dependent variation in gene expression, in neuronal morphology and connectedness, and in the rates and patterns of neuronal firing due to differences in synaptic structure, position, strength, and other factors. Together these can be expected to influence how you (your brain) perceive and process the external world, your own internal state, and the effects associated with the interaction between these two “models”. Of course the current state of your brain has been influenced by, constrained by, and contingent upon past inputs and experiences, and the noisy events associated with its development. At the cellular level, the sum of these molecular and cellular interactions can be considered the consciousness of the cell, but this is a consciousness not necessarily aware of itself. In my admittedly naive view, as neural systems (brains) grow in complexity, they become aware of their own workings. As Godfrey-Smith (14) puts it, “brain processes are not causes of thoughts and experiences; they are thoughts and experiences”. Thoughts become inputs into the brain’s model of itself.

What seems plausible is that as a nervous system increases in complexity, processing increasing amounts of information, including information arising from its own internal workings, it may come to produce a meta-model that, for reasons “known” only to itself, needs to make sense of those experiences, feelings, and thoughts. In contrast to the simpler questions asked by bacteria, such as “is there an antibiotic or lactose in my world?”, more complex (neural) systems may ask “who is to blame for the pain and suffering in the world?” I absent-mindedly respond with a smile to a person at a coffeehouse, and then my model reconsiders (updates) itself depending, in part, upon their response, previous commitments or chores, and whether other thoughts distract or attract “me”. Out of this ferment of updating models emerge self-conscious biological activities – I turn to chat or bury my head back in my book. How I (my model) respond is a complex function of how my model works and how it interprets what is going on, a process influenced by noise, genetics, and past experiences; my history of rewards, traumas, and various emotional and “meaningful” events.

Am I (my model) free to behave independently of these effects? No! But am I (my model) determined by them? Again, no! The effects of biological noise in its various forms, together with past and present events, will be reinforced or suppressed by my internal network and my history of familial, personal, and social experiences. I feel “free” in that there are available choices, because I am both these models and the process of testing and updating them. Tentative models of what is going on (thinking fast) are then updated based on new information or self-reflection (thinking slower). I attempt to discern what is “real” and what seems like an appropriate response. When the system (me) is working non-pathologically, it avoids counter-productive, self-destructive ideations and actions; it can produce sublime metaphysical abstractions and self-sacrificing (altruistic) behaviors. Mostly it acts to maintain itself and adapt, often resorting to and relying upon the stories it tells itself. I am neither determined nor free, just an organism coping, or attempting to cope, with the noisy nature of existence, its own internal systems, and an excessively complex neural network.

Added notes: Today (5 Dec. 23) I was surprised to discover this article (Might There Be No Quantum Gravity After All?) with the following quote: “not all theories need be reversible, they can also be stochastic. In a stochastic theory, the initial state of a physical system evolves according to an equation, but one can only know probabilistically which states might occur in the future—there is no unique state that one can predict.” Makes you think! I also realized that I should have cited Zechner et al (added to ref. 11), and now I have to read “Free will without consciousness?” by Mudrik et al., 2022. Trends in Cog. Sciences 26: 555-566.

Literature cited

  1. Sapolsky, R.M. 2023. Determined: A Science of Life Without Free Will. Penguin LLC US
  2. Mitchell, K.J. 2023. Free Agents: How Evolution Gave Us Free Will. Princeton. 
  3. Fukaya, T. (2023). Enhancer dynamics: Unraveling the mechanism of transcriptional bursting. Science Advances, 9(31), eadj3366.
  4. Livingston, N. M., Kwon, J., Valera, O., Saba, J.A., Sinha, N.K., Reddy, P., Nelson, B. Wolfe, C., Ha, T.,Green, R., Liu, J., & Bin Wu (2023). Bursting translation on single mRNAs in live cells. Molecular Cell
  5. Harrison, L. M., David, O., & Friston, K. J. (2005). Stochastic models of neuronal dynamics. Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1457), 1075-1091. 
  6. Ho, B., Baryshnikova, A., & Brown, G. W. (2018). Unification of protein abundance datasets yields a quantitative Saccharomyces cerevisiae proteome. Cell systems, 6, 192-205. 
  7. McNamee & Wolpert (2019). Internal models in biological control. Annual review of control, robotics, and autonomous systems, 2, 339-364.
  8. Czyz, W., Morahan, J. M., Ebers, G. C., & Ramagopalan, S. V. (2012). Genetic, environmental and stochastic factors in monozygotic twin discordance with a focus on epigenetic differences. BMC medicine, 10, 1-12.
  9. Manuse, S., Shan, Y., Canas-Duarte, S.J., Bakshi, S., Sun, W.S., Mori, H., Paulsson, J. and Lewis, K., 2021. Bacterial persisters are a stochastically formed subpopulation of low-energy cells. PLoS biology, 19, p.e3001194.
  10. Vilar, J. M., Guet, C. C. and Leibler, S. (2003). Modeling network dynamics: the lac operon, a case study. J Cell Biol 161, 471-476.
  11. Honegger & de Bivort. 2017. Stochasticity, individuality and behavior; & Zechner, C., Nerli, E., & Norden, C. 2020. Stochasticity and determinism in cell fate decisions. Development, 147, dev181495.
  12. Cepelewicz 2022. Nature Versus Nurture? Add ‘Noise’ to the Debate
  13. Johansen, N., Somasundaram, S.,Travaglini, K.J., Yanny, A.M., Shumyatcher, M., Casper, T., Cobbs, C., Dee, N., Ellenbogen, R., Ferreira, M., Goldy, J., Guzman, J., Gwinn, R., Hirschstein, D., Jorstad, N.L.,Keene, C.D., Ko, A., Levi, B.P.,  Ojemann, J.G., Nadiy, T.P., Shapovalova, N., Silbergeld, D., Sulc, J., Torkelson, A., Tung, H., Smith, K.,Lein, E.S., Bakken, T.E., Hodge, R.D., & Miller, J.A (2023). Interindividual variation in human cortical cell type abundance and expression. Science, 382, eadf2359.
  14. Godfrey-Smith, P. (2020). Metazoa: Animal life and the birth of the mind. Farrar, Straus and Giroux.

Making sense of noise: introducing students to stochastic processes in order to better understand biological behaviors (and even free will).

Biological systems are characterized by the ubiquitous roles of weak (that is, non-covalent) molecular interactions, by small, often very small, numbers of specific molecules per cell, and by Brownian motion. These combine to produce stochastic behaviors at all levels, from the molecular and cellular to the behavioral. That said, students are rarely introduced to the ubiquitous role of stochastic processes in biological systems and how they produce unpredictable behaviors. Here I present the case that they need to be, and provide some suggestions as to how it might be approached.

Background: Three recent events combined to spur this reflection on stochasticity in biological systems, how it is taught, and why it matters. The first was an article describing an approach to introducing students to homeostatic processes in the context of the bacterial lac operon (Booth et al., 2022), an adaptive gene regulatory system controlled in part by stochastic events. The second was a set of in-class student responses to the question: why do interacting molecules “come back apart” (dissociate)? Finally, there is the increasing attention paid to what are presented as deterministic genetic factors, as illustrated by a talk by Kathryn Harden, author of “The Genetic Lottery: Why DNA matters for social equality” (Harden, 2021). Previous work has suggested that students, and perhaps some instructors, find the ubiquity, functional roles, and implications of stochastic, that is inherently unpredictable, processes difficult to recognize and apply. Given their practical and philosophical implications, it seems essential to introduce students to stochasticity early in their educational journey.

added 7 March 2023; Should have cited:  You & Leu (2020).

What is stochasticity and why is it important for understanding biological systems? Stochasticity results when intrinsically unpredictable events, e.g. molecular collisions, impact the behavior of a system. There are a number of drivers of stochastic behaviors. Perhaps the most obvious, and certainly the most ubiquitous in biological systems, is thermal motion. The many molecules within a solution (or a cell) are moving; they have kinetic energy, the energy of motion, which depends on their mass and velocity. The exact momentum of each molecule cannot, however, be accurately and completely characterized without perturbing the system (echoes of Heisenberg). Given the impossibility of completely characterizing the system, we are left uncertain as to the state of the system’s components – who is bound to whom – going forward.

Through collisions, energy is exchanged between molecules, and a number of chemical processes are driven by the energy delivered through such collisions. Think about a typical chemical reaction. In the course of the reaction, atoms are rearranged – bonds are broken (a process that requires energy) and bonds are formed (a process that releases energy). Many (most) of the chemical reactions that occur in biological systems require catalysts to bring their required activation energies into the range available within the cell. [1]

What makes the impact of thermal motion even more critical for biological systems is that many (most) regulatory interactions and macromolecular complexes, the molecular machines discussed by Alberts (1998) are based on relatively weak, non-covalent surface-surface interactions between or within molecules. Such interactions are central to most regulatory processes, from the activation of signaling pathways to the control of gene expression. The specificity and stability of these non-covalent interactions, which include those involved in determining the three-dimensional structure of macromolecules, are directly impacted by thermal motion, and so by temperature – one reason controlling body temperature is important.  

So why are these interactions stochastic and why does it matter? A signature property of a stochastic process is that while it may be predictable when large numbers of atoms, molecules, or interactions are involved, the behaviors of individual atoms, molecules, and interactions are not. A classic example, arising from factors intrinsic to the atom, is the decay of radioactive isotopes. While the half-life of a large enough population of a radioactive isotope is well defined, when any particular atom will decay is, in current theory, unknowable – a concept difficult for students (see Hull and Hopf, 2020). This is the reason we cannot accurately predict whether Schrödinger’s cat is alive or dead. The same behavior applies to the binding of a regulatory protein to a specific site on a DNA molecule and its subsequent dissociation: predictable in large populations, not predictable for individual molecules.

The situation is exacerbated by the fact that biological systems are composed of cells, and cells are typically small, and so contain relatively few molecules of each type (Milo and Phillips, 2015). There are typically one or two copies of each gene in a cell, and these may differ from one another (when heterozygous). The expression of any one gene depends upon the binding of specific proteins, transcription factors, that act to activate or repress gene expression. In contrast to a number of other cellular proteins, “as a rule of thumb, the concentrations of such transcription factors are in the nM range, corresponding to only 1-1000 copies per cell in bacteria or 10³-10⁶ in mammalian cells” (Milo and Phillips, 2015). Moreover, while DNA binding proteins bind to specific DNA sequences with high affinity, they also bind to DNA “non-specifically”, in a largely sequence-independent manner, with low affinity. Given that there are many more non-specific (non-functional) binding sites in the DNA than functional ones, the effective concentration of a particular transcription factor can be significantly lower than its total cellular concentration would suggest. For example, in the case of the lac repressor of the bacterium Escherichia coli (discussed further below), there are estimated to be ~10 molecules of the tetrameric lac repressor per cell, but “non-specific affinity to the DNA causes >90% of LacI copies to be bound to the DNA at locations that are not the cognate promoter site” (Milo and Phillips, 2015); at most only a few molecules are free in the cytoplasm and available to bind to specific regulatory sites. Such low affinity binding to DNA allows proteins to undergo one-dimensional diffusion, a process that can greatly speed up the time it takes for a DNA binding protein to “find” high affinity binding sites (Stanford et al., 2000; von Hippel and Berg, 1989).

Most transcription factors bind in a functionally significant manner to hundreds to thousands of gene regulatory sites per cell, often with distinct binding affinities. The effective binding affinity can also be influenced by positive and negative interactions with other transcription and accessory factors, chromatin structure, and DNA modifications. Functional complexes can take time to assemble, and once assembled can initiate multiple rounds of polymerase binding and activation, leading to a stochastic phenomenon known as transcriptional bursting. An analogous process occurs with RNA-dependent polypeptide synthesis (translation). The result, particularly for genes expressed at lower levels, is that stochastic (unpredictable) bursts of transcription/translation can lead to functionally significant changes in protein levels (Raj et al., 2010; Raj and van Oudenaarden, 2008).
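
The nM-to-copies conversion quoted above is easy to check with a back-of-the-envelope calculation (the cell volumes below are rough assumptions): at a few nM, an E. coli-sized cell holds only a handful of molecules of a given factor.

```python
# Copies per cell = concentration (mol/L) × Avogadro's number × cell volume (L).
AVOGADRO = 6.022e23  # molecules per mole

def copies_per_cell(conc_nM: float, volume_liters: float) -> float:
    """Expected number of molecules at a given concentration and cell volume."""
    return conc_nM * 1e-9 * AVOGADRO * volume_liters

e_coli_volume = 1e-15     # ~1 femtoliter (assumed)
mammalian_volume = 2e-12  # ~2 picoliters (assumed)

for conc in (1, 10, 100):  # nM
    print(f"{conc:>3} nM -> ~{copies_per_cell(conc, e_coli_volume):.0f} copies in an E. coli-sized cell, "
          f"~{copies_per_cell(conc, mammalian_volume):.0f} in a mammalian-sized cell")
```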

Figure adapted from Elowitz et al 2002

There are many examples of stochastic behaviors in biological systems. As originally noted by Novick and Weiner (1957) in their studies of the lac operon, gene expression can occur in an all-or-none manner. This effect was revealed in a particularly compelling way by Elowitz et al (2002), who used lac operon promoter elements to drive expression of transgenes encoding cyan and yellow fluorescent proteins (on a single plasmid) in E. coli. The observed behaviors were dramatic: genetically identical cells were found to express, stochastically, one, the other, both, or neither of the transgenes. The stochastic expression of genes and its downstream effects appear to be the source of much of the variance found among organisms with the same genotype in the same environmental conditions (Honegger and de Bivort, 2018).
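
A toy simulation (with an invented per-reporter ON probability, not Elowitz et al.’s data) shows how two independently and stochastically expressed reporters partition genetically identical cells into the four observed classes.

```python
# Independent stochastic expression of two reporters -> four classes of cells.
import random
from collections import Counter

random.seed(1)
p_on = 0.4  # assumed chance that a given reporter is "on" in a given cell

def cell_state() -> str:
    cyan, yellow = random.random() < p_on, random.random() < p_on
    return {(True, True): "both", (True, False): "cyan only",
            (False, True): "yellow only", (False, False): "neither"}[(cyan, yellow)]

counts = Counter(cell_state() for _ in range(10_000))
for state, n in counts.most_common():
    print(f"{state:>11}: {n / 100:.1f}% of cells")
```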

Beyond gene expression, the unpredictable effects of stochastic processes can be seen at all levels of biological organization: from the biased random walks that underlie various forms of chemotaxis (e.g. Spudich and Koshland, 1976) and the search behaviors of C. elegans (Roberts et al., 2016) and other animals (Smouse et al., 2010), to the noisiness in the opening of individual neuronal voltage-gated ion channels (Braun, 2021; Neher and Sakmann, 1976), various processes within the immune system (Hodgkin et al., 2014), and variations in the behavior of individual organisms (e.g. the leafhopper example cited by Honegger and de Bivort, 2018). Stochastic events are involved in a range of “social” processes in bacteria (Bassler and Losick, 2006). Their impact serves as a form of “bet-hedging” in populations, generating phenotypic variation in a homogeneous environment (see Symmons and Raj, 2016). Stochastic events can regulate the efficiency of replication-associated, error-prone mutation repair (Uphoff et al., 2016), leading to increased variation in a population, particularly in response to environmental stresses. Stochastic “choices” made by cells can be seen as questions asked of the environment; the system’s response provides information that informs subsequent regulatory decisions (see Lyon, 2015) and shapes the selective pressures on individuals in a population (Jablonka and Lamb, 2005). Together, stochastic processes introduce a non-deterministic (i.e. unpredictable) element into higher order behaviors (Murakami et al., 2017; Roberts et al., 2016).
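
To illustrate the first of these examples, here is a minimal biased-random-walk sketch (the step rules are my own simplification; real bacterial chemotaxis biases run lengths between tumbles): individual trajectories remain unpredictable, yet the population reliably drifts up the attractant gradient.

```python
# 1-D biased random walk: random step directions, slightly longer steps up-gradient.
import random

random.seed(2)

def walk(steps: int = 500, bias: float = 0.2) -> float:
    """Final position of one walker; the attractant source lies toward +x."""
    x = 0.0
    for _ in range(steps):
        direction = random.choice((-1, 1))
        x += direction * (1.0 + (bias if direction > 0 else 0.0))
    return x

finals = [walk() for _ in range(1000)]
print("three individual walkers end at:", [round(walk(), 1) for _ in range(3)])  # scattered
print("population mean final position:", round(sum(finals) / len(finals), 1))    # drifts toward +x
```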

Controlling stochasticity: While stochasticity can be useful, it also needs to be controlled. Not surprisingly, then, there are a number of strategies for “noise-suppression”, ranging from altering regulatory factor concentrations and forming covalent disulfide bonds between or within polypeptides, to regulating the activity of the repair systems associated with DNA replication, polypeptide folding, and protein assembly via molecular chaperones and targeted degradation. For example, the identification of “cellular competition” effects has revealed that “eccentric cells” (sometimes, and perhaps unfortunately, referred to as “losers”) can be induced to undergo apoptosis (die) or migration in response to their “normal” neighbors (Akieda et al., 2019; Di Gregorio et al., 2016; Ellis et al., 2019; Hashimoto and Sasaki, 2020; Lima et al., 2021).

Student understanding of stochastic processes: There is ample evidence that students (and perhaps some instructors as well) are confused by or uncertain about the role of thermal motion, that is the transfer of kinetic energy via collisions, and the resulting stochastic behaviors in biological systems. As an example, Champagne-Queloz et al (2016; 2017) found that few students, even after instruction in molecular biology courses, recognize that collisions with other molecules are responsible for the disassembly of molecular complexes. In fact, many adopt a more “deterministic” model for molecular disassembly after instruction (see panel A of the figure). In earlier studies, we found evidence for a similar confusion among instructors (panel B of the figure) (Klymkowsky et al., 2010).

Introducing stochasticity to students: Given that understanding stochastic (random) processes can be difficult for many (e.g. Garvin-Doxas and Klymkowsky, 2008; Taleb, 2005), the question facing course designers and instructors is when and how best to help students develop an appreciation for the ubiquity, specific roles, and implications of stochasticity-dependent processes at all levels in biological systems. I would suggest that introducing students to the dynamics of the non-covalent molecular interactions prevalent in biological systems through the lens of stochastic collisions (i.e. kinetic theory), rather than through a ∆G-based approach, may be useful. We can use the probability of garnering the energy needed to disrupt an interaction to present concepts of binding specificity (selectivity) and stability. Developing an understanding of the formation and disassembly of molecular interactions builds on the same logic that Albert Einstein and Ludwig Boltzmann used to demonstrate the existence of atoms and molecules and the reversibility of molecular reactions (Bernstein, 2006). Moreover, as noted by Samoilov et al (2006), “stochastic mechanisms open novel classes of regulatory, signaling, and organizational choices that can serve as efficient and effective biological solutions to problems that are more complex, less robust, or otherwise suboptimal to deal with in the context of purely deterministic systems.”
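
A small numerical sketch of that kinetic framing (the barrier heights and attempt frequency below are illustrative assumptions, not measured values): the chance that a collision delivers enough energy to disrupt an interaction falls off roughly as exp(−E/kT), so a modest increase in interaction strength produces a large increase in the average bound lifetime.

```python
# Average bound lifetime ~ 1 / (attempt rate × Boltzmann factor for disruption).
import math

kT = 2.5            # thermal energy at ~300 K, in kJ/mol
attempt_rate = 1e9  # assumed collision/attempt frequency, per second

for barrier in (10, 20, 30, 40):                 # assumed energy needed to disrupt binding (kJ/mol)
    p_disrupt = math.exp(-barrier / kT)          # fraction of attempts with enough energy
    mean_lifetime = 1.0 / (attempt_rate * p_disrupt)
    print(f"barrier {barrier:>2} kJ/mol -> mean bound lifetime ≈ {mean_lifetime:.1e} s")
```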

The selectivity (specificity) and stability of molecular interactions can be understood from an energetic perspective – comparing the enthalpic and entropic differences between bound and unbound states. What is often missing from such discussions, aside from their inherent complexity, particularly in terms of calculating changes in entropy and saying exactly what is meant by energy (Cooper and Klymkowsky, 2013), is that many students enter biology classes without a robust understanding of enthalpy, entropy, or free energy (Carson and Watson, 2002). Presenting students with a molecular-collision, kinetic-theory-based mechanism for the dissociation of molecular interactions may help them better understand (and apply) both the dynamics and the specificity of molecular interactions. We can gauge the strength of an interaction (the sum of the forces stabilizing it) by the amount of energy (derived from collisions with other molecules) needed to disrupt it. The implication of student responses to relevant Biology Concepts Instrument (BCI) questions and beSocratic activities (data not shown), as well as of a number of studies in chemistry, is that few students consider the kinetic/vibrational energy delivered through collisions with other molecules (a function of temperature) as key to explaining why interactions break (see Carson and Watson, 2002 and references therein). Although this paper is 20 years old, there is little or no evidence that the situation has improved. Moreover, there is evidence that the conventional focus on mathematics-centered free energy calculations, in the absence of conceptual understanding, may serve as an unnecessary barrier to the inclusion of more socioeconomically diverse and under-served populations of students (Ralph et al., 2022; Stowe and Cooper, 2019).

The lac operon as a context for introducing stochasticity: Studies of the E. coli lac operon hold an iconic place in the history of molecular biology and are often found in introductory courses, although typically presented in a deterministic context. The mutational analysis of the lac operon helped define key elements involved in gene regulation (Jacob and Monod, 1961; Monod et al., 1963). Booth et al (2022) used the lac operon as the context for their “modeling and simulation lesson”, Advanced Concepts in Regulation of the Lac Operon. Given its inherently stochastic regulation (Choi et al., 2008; Elowitz et al., 2002; Novick and Weiner, 1957; Vilar et al., 2003), the lac operon is a good place to start introducing students to stochastic processes. In this light, it is worth noting that Booth et al describe the behavior of the lac operon as “leaky”, which would seem to imply a low but continuous level of expression, much as a leaky faucet continues to drip. As this is a peer-reviewed lesson, it seems likely that it reflects widely held misunderstandings of how stochastic processes are introduced to, and understood by, students and instructors.

E. coli cells respond to the presence of lactose in growth media in a biphasic manner, termed diauxie, due to “the inhibitory action of certain sugars, such as glucose, on adaptive enzymes (meaning an enzyme that appears only in the presence of its substrate)” (Blaiseau and Holmes, 2021). When these (preferred) sugars are depleted from the media, growth slows. If lactose is present, however, growth will resume following a delay associated with the expression of the proteins encoded by the operon that enable the cell to import and metabolize lactose. Although the term homeostatic is used repeatedly by Booth et al, the lac operon is part of an adaptive, rather than a homeostatic, system. In the absence of glucose, cyclic AMP (cAMP) levels in the cell rise. cAMP binds to and activates the catabolite activator protein (CAP), encoded by the crp gene. Activation of CAP leads to altered expression of a number of target genes, whose products are involved in adaptation to the stress associated with the absence of common and preferred metabolites. cAMP-activated CAP acts as both a transcriptional repressor and activator, “and has been shown to regulate hundreds of genes in the E. coli genome, earning it the status of ‘global’ or ‘master’ regulator” (Frendorf et al., 2019). It is involved in adaptation to environmental factors, rather than in maintaining the cell in a particular state (homeostasis).

The lac operon is a classic polycistronic bacterial gene, encoding three distinct polypeptides: lacZ (β-galactosidase), lacY (β-galactoside permease), and lacA (galactoside acetyltransferase). When glucose or other preferred energy sources are present, expression of the lac operon is blocked by the inactivity of CAP. The CAP protein is a homodimer, and its binding to DNA is regulated by the binding of the allosteric effector cAMP. cAMP is generated from ATP by the enzyme adenylate cyclase, encoded by the cya gene. In the absence of glucose, the enzyme encoded by the crr gene is phosphorylated and acts to activate adenylate cyclase (Krin et al., 2002). As cAMP levels increase, cAMP binds to the CAP protein, leading to a dramatic change in its structure, such that the protein’s DNA binding domain becomes available to interact with promoter sequences (figure from Sharma et al., 2009).

Binding of activated (cAMP-bound) CAP is not, by itself, sufficient to activate expression of the lac operon because of the presence of the constitutively expressed lac repressor protein, encoded by the lacI gene. The active repressor is a tetramer, present at very low levels (~10 molecules) per cell. The lac operon contains three repressor (“operator”) binding sites; the tetrameric repressor can bind two operator sites simultaneously (figure from Palanthandalam-Madapusi and Goyal, 2011). In the absence of lactose, but in the presence of cAMP-activated CAP, the operon is expressed in discrete “bursts” (Novick and Weiner, 1957; Vilar et al., 2003). Choi et al (2008) found that these bursts come in two types, short and long, with the size of the burst referring to the number of mRNA molecules synthesized (figure adapted from Choi et al). The difference between burst sizes arises from the length of time that the operon’s repressor binding sites are unoccupied by repressor. As noted above, the tetravalent repressor protein can bind to two operator sites at the same time. When the repressor is released from one site, polymerase binding and initiation produce a small number of mRNA molecules; persistent binding to the second site means that the repressor concentration remains locally high, favoring rapid rebinding to the operator and the cessation of transcription (RNA synthesis). When the repressor releases from both operator sites, a rarer event, it is free to diffuse away and interact (non-specifically, i.e. with low affinity) with other DNA sites in the cell, leaving the lac operator sites unoccupied for a longer period of time. The number of such non-specific binding sites greatly exceeds the number (three) of specific binding sites in the operon. The result is the synthesis of a larger “burst” (number) of mRNA molecules. The average length of time that the operator sites remain unoccupied is a function of the small number of repressor molecules present and the repressor’s low but measurable non-sequence-specific binding to DNA.
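
A toy version of this two-burst-class picture (the release probabilities, unoccupied times, and transcription rate below are invented for illustration, not Choi et al.’s measurements) shows how frequent partial release produces many small bursts, while rare full release produces occasional large ones.

```python
# Burst size depends on how long the operator stays unoccupied before repressor rebinds.
import random

random.seed(3)
transcription_rate = 0.5  # assumed mRNAs initiated per second while unrepressed

def burst_size() -> int:
    if random.random() < 0.95:                    # common: partial release, rapid rebinding
        unoccupied = random.expovariate(1 / 5)    # ~5 s unoccupied on average
    else:                                         # rare: full release, repressor diffuses away
        unoccupied = random.expovariate(1 / 120)  # ~2 min unoccupied on average
    return max(1, round(unoccupied * transcription_rate))

bursts = [burst_size() for _ in range(2000)]
small = sum(1 for b in bursts if b <= 5)
print(f"small bursts (<=5 mRNAs): {small}; large bursts: {len(bursts) - small}")
print("example burst sizes:", bursts[:15])
```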

The expression of the lac operon leads to the appearance of β-galactosidase and β-galactoside permease. An integral membrane protein, β-galactoside permease enables extracellular lactose to enter the cell, while cytoplasmic β-galactosidase catalyzes its breakdown and the generation of allolactose, which binds to the lac repressor protein, inhibiting its binding to operator sites and so relieving repression of transcription. In the absence of lactose, there are few if any of the proteins (β-galactosidase and β-galactoside permease) needed to activate the expression of the lac operon, so the obvious question is: how, when lactose does appear in the extracellular media, does the lac operon turn on? Booth et al and the Wikipedia entry on the lac operon (accessed 29 June 2022) describe the turn-on of the lac operon as “leaky” (see above). The molecular modeling studies of Vilar et al and Choi et al (which, together with Novick and Weiner, are not cited by Booth et al) indicate that the system displays distinct threshold and maintenance concentrations of lactose needed for stable lac gene expression. The term “threshold” does not occur in the Booth et al article. More importantly, when cultures are examined at the single cell level, what is observed is not a uniform increase in lac expression in all cells, as might be expected in the context of leaky expression, but more sporadic (noisy) behaviors. When cultured in lactose concentrations above the operon’s activation threshold, increasing numbers of cells turn “full on” in terms of lac operon expression over time. This illustrates the distinctly different implications of a leaky versus a stochastic process for gene expression. While a leak is a macroscopic metaphor that produces a continuous, dependable, regular flow (drips), the occurrence of “bursts” of gene expression implies a stochastic (unpredictable) process (figure from Vilar et al).
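
The contrast is easy to see in a cartoon comparison (the switching probability and “leak” rate are invented numbers): a leaky model predicts every cell creeping up uniformly, while a stochastic, all-or-none model predicts a growing fraction of fully induced cells among otherwise silent ones.

```python
# Leaky (uniform, graded) versus stochastic (all-or-none) induction of a population.
import random

random.seed(4)
n_cells = 1000
switch_prob_per_hour = 0.15  # assumed chance an uninduced cell switches fully ON each hour
leak_rate = 5                # assumed expression units gained per hour per cell in a "leaky" model

switch_time = [None] * n_cells
for t in range(11):          # simulate 10 hours of growth above the induction threshold
    for i in range(n_cells):
        if switch_time[i] is None and random.random() < switch_prob_per_hour:
            switch_time[i] = t

for t in range(0, 11, 2):
    frac_on = sum(1 for s in switch_time if s is not None and s <= t) / n_cells
    print(f"t = {t:>2} h | leaky model: every cell at {leak_rate * t:>2} units"
          f" | stochastic model: {frac_on:.0%} of cells fully ON, the rest OFF")
```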

As the ubiquity and functionally significant roles of stochastic processes in biological systems become increasingly apparent, e.g. in the prediction of phenotypes from genotypes (Karavani et al., 2019; Mostafavi et al., 2020), helping students appreciate and understand the unpredictable, that is stochastic, aspects of biological systems becomes increasingly important. As an example, revealed dramatically through the application of single cell RNA sequencing studies, variation in gene expression between cells of the same “type” impacts organismic development and a range of behaviors. In diploid eukaryotic cells it is now apparent that in many cells, and for many genes, only one of the two alleles present is expressed; such “monoallelic” expression can impact a range of processes (Gendrel et al., 2014). Given that stochastic processes are often not well conveyed through conventional chemistry courses (Williams et al., 2015), or effectively integrated into and built upon in molecular (and other) biology curricula, presenting them explicitly in introductory biology courses seems necessary and appropriate.

It may also help make sense of discussions of whether humans (and other organisms) have “free will”. Clearly the situation is complex. From a scientific perspective we are analyzing systems without recourse to non-natural processes. At the same time, “Humans typically experience freely selecting between alternative courses of action” (Maoz et al., 2019a; see also Maoz et al., 2019b). It seems possible that recognizing the intrinsically unpredictable nature of many biological processes (including those of the central nervous system) may lead us to conclude that whether or not free will exists is in fact a non-scientific, unanswerable (and perhaps largely meaningless) question.

footnotes

[1]  For this discussion I will ignore entropy, a factor that figures in whether a particular reaction is favorable or unfavorable, that is, whether, and the extent to which, it occurs.

Acknowledgements: Thanks to Melanie Cooper and Nick Galati for taking a look and Chhavinder Singh for getting it started. Updated 6 January 2023.

literature cited:

Akieda, Y., Ogamino, S., Furuie, H., Ishitani, S., Akiyoshi, R., Nogami, J., Masuda, T., Shimizu, N., Ohkawa, Y. and Ishitani, T. (2019). Cell competition corrects noisy Wnt morphogen gradients to achieve robust patterning in the zebrafish embryo. Nature communications 10, 1-17.

Alberts, B. (1998). The cell as a collection of protein machines: preparing the next generation of molecular biologists. Cell 92, 291-294.

Bassler, B. L. and Losick, R. (2006). Bacterially speaking. Cell 125, 237-246.

Bernstein, J. (2006). Einstein and the existence of atoms. American journal of physics 74, 863-872.

Blaiseau, P. L. and Holmes, A. M. (2021). Diauxic inhibition: Jacques Monod’s Ignored Work. Journal of the History of Biology 54, 175-196.

Booth, C. S., Crowther, A., Helikar, R., Luong, T., Howell, M. E., Couch, B. A., Roston, R. L., van Dijk, K. and Helikar, T. (2022). Teaching Advanced Concepts in Regulation of the Lac Operon With Modeling and Simulation. CourseSource.

Braun, H. A. (2021). Stochasticity Versus Determinacy in Neurobiology: From Ion Channels to the Question of the “Free Will”. Frontiers in Systems Neuroscience 15, 39.

Carson, E. M. and Watson, J. R. (2002). Undergraduate students’ understandings of entropy and Gibbs free energy. University Chemistry Education 6, 4-12.

Champagne-Queloz, A. (2016). Biological thinking: insights into the misconceptions in biology maintained by Gymnasium students and undergraduates. In Institute of Molecular Systems Biology. Zurich, Switzerland: ETH Zürich.

Champagne-Queloz, A., Klymkowsky, M. W., Stern, E., Hafen, E. and Köhler, K. (2017). Diagnostic of students’ misconceptions using the Biological Concepts Instrument (BCI): A method for conducting an educational needs assessment. PloS one 12, e0176906.

Choi, P. J., Cai, L., Frieda, K. and Xie, X. S. (2008). A stochastic single-molecule event triggers phenotype switching of a bacterial cell. Science 322, 442-446.

Coop, G. and Przeworski, M. (2022). Lottery, luck, or legacy. A review of “The Genetic Lottery: Why DNA matters for social equality”. Evolution 76, 846-853.

Cooper, M. M. and Klymkowsky, M. W. (2013). The trouble with chemical energy: why understanding bond energies requires an interdisciplinary systems approach. CBE Life Sci Educ 12, 306-312.

Di Gregorio, A., Bowling, S. and Rodriguez, T. A. (2016). Cell competition and its role in the regulation of cell fitness from development to cancer. Developmental cell 38, 621-634.

Ellis, S. J., Gomez, N. C., Levorse, J., Mertz, A. F., Ge, Y. and Fuchs, E. (2019). Distinct modes of cell competition shape mammalian tissue morphogenesis. Nature 569, 497.

Elowitz, M. B., Levine, A. J., Siggia, E. D. and Swain, P. S. (2002). Stochastic gene expression in a single cell. Science 297, 1183-1186.

Feldman, M. W. and Riskin, J. (2022). Why Biology is not Destiny. In New York Review of Books. NY.

Frendorf, P. O., Lauritsen, I., Sekowska, A., Danchin, A. and Nørholm, M. H. (2019). Mutations in the global transcription factor CRP/CAP: insights from experimental evolution and deep sequencing. Computational and structural biotechnology journal 17, 730-736.

Garvin-Doxas, K. and Klymkowsky, M. W. (2008). Understanding Randomness and its impact on Student Learning: Lessons from the Biology Concept Inventory (BCI). Life Science Education 7, 227-233.

Gendrel, A.-V., Attia, M., Chen, C.-J., Diabangouaya, P., Servant, N., Barillot, E. and Heard, E. (2014). Developmental dynamics and disease potential of random monoallelic gene expression. Developmental cell 28, 366-380.

Harden, K. P. (2021). The genetic lottery: why DNA matters for social equality: Princeton University Press.

Hashimoto, M. and Sasaki, H. (2020). Cell competition controls differentiation in mouse embryos and stem cells. Current Opinion in Cell Biology 67, 1-8.

Hodgkin, P. D., Dowling, M. R. and Duffy, K. R. (2014). Why the immune system takes its chances with randomness. Nature Reviews Immunology 14, 711-711.

Honegger, K. and de Bivort, B. (2018). Stochasticity, individuality and behavior. Current Biology 28, R8-R12.

Hull, M. M. and Hopf, M. (2020). Student understanding of emergent aspects of radioactivity. International Journal of Physics & Chemistry Education 12, 19-33.

Jablonka, E. and Lamb, M. J. (2005). Evolution in four dimensions: genetic, epigenetic, behavioral, and symbolic variation in the history of life. Cambridge: MIT press.

Jacob, F. and Monod, J. (1961). Genetic regulatory mechanisms in the synthesis of proteins. Journal of molecular biology 3, 318-356.

Karavani, E., Zuk, O., Zeevi, D., Barzilai, N., Stefanis, N. C., Hatzimanolis, A., Smyrnis, N., Avramopoulos, D., Kruglyak, L. and Atzmon, G. (2019). Screening human embryos for polygenic traits has limited utility. Cell 179, 1424-1435. e1428.

Klymkowsky, M. W., Kohler, K. and Cooper, M. M. (2016). Diagnostic assessments of student thinking about stochastic processes. In bioRxiv: http://biorxiv.org/content/early/2016/05/20/053991.

Klymkowsky, M. W., Underwood, S. M. and Garvin-Doxas, K. (2010). Biological Concepts Instrument (BCI): A diagnostic tool for revealing student thinking. In arXiv: Cornell University Library.

Krin, E., Sismeiro, O., Danchin, A. and Bertin, P. N. (2002). The regulation of Enzyme IIAGlc expression controls adenylate cyclase activity in Escherichia coli. Microbiology 148, 1553-1559.

Lima, A., Lubatti, G., Burgstaller, J., Hu, D., Green, A., Di Gregorio, A., Zawadzki, T., Pernaute, B., Mahammadov, E. and Montero, S. P. (2021). Cell competition acts as a purifying selection to eliminate cells with mitochondrial defects during early mouse development. bioRxiv, 2020.01.15.900613.

Lyon, P. (2015). The cognitive cell: bacterial behavior reconsidered. Frontiers in microbiology 6, 264.

Maoz, U., Sita, K. R., Van Boxtel, J. J. and Mudrik, L. (2019a). Does it matter whether you or your brain did it? An empirical investigation of the influence of the double subject fallacy on moral responsibility judgments. Frontiers in Psychology 10, 950.

Maoz, U., Yaffe, G., Koch, C. and Mudrik, L. (2019b). Neural precursors of decisions that matter—an ERP study of deliberate and arbitrary choice. Elife 8, e39787.

Milo, R. and Phillips, R. (2015). Cell biology by the numbers: Garland Science.

Monod, J., Changeux, J.-P. and Jacob, F. (1963). Allosteric proteins and cellular control systems. Journal of molecular biology 6, 306-329.

Mostafavi, H., Harpak, A., Agarwal, I., Conley, D., Pritchard, J. K. and Przeworski, M. (2020). Variable prediction accuracy of polygenic scores within an ancestry group. Elife 9, e48376.

Murakami, M., Shteingart, H., Loewenstein, Y. and Mainen, Z. F. (2017). Distinct sources of deterministic and stochastic components of action timing decisions in rodent frontal cortex. Neuron 94, 908-919. e907.

Neher, E. and Sakmann, B. (1976). Single-channel currents recorded from membrane of denervated frog muscle fibres. Nature 260, 799-802.

Novick, A. and Weiner, M. (1957). Enzyme induction as an all-or-none phenomenon. Proceedings of the National Academy of Sciences 43, 553-566.

Palanthandalam-Madapusi, H. J. and Goyal, S. (2011). Robust estimation of nonlinear constitutive law from static equilibrium data for modeling the mechanics of DNA. Automatica 47, 1175-1182.

Raj, A., Rifkin, S. A., Andersen, E. and van Oudenaarden, A. (2010). Variability in gene expression underlies incomplete penetrance. Nature 463, 913-918.

Raj, A. and van Oudenaarden, A. (2008). Nature, nurture, or chance: stochastic gene expression and its consequences. Cell 135, 216-226.

Ralph, V., Scharlott, L. J., Schafer, A., Deshaye, M. Y., Becker, N. M. and Stowe, R. L. (2022). Advancing Equity in STEM: The Impact Assessment Design Has on Who Succeeds in Undergraduate Introductory Chemistry. JACS Au.

Roberts, W. M., Augustine, S. B., Lawton, K. J., Lindsay, T. H., Thiele, T. R., Izquierdo, E. J., Faumont, S., Lindsay, R. A., Britton, M. C. and Pokala, N. (2016). A stochastic neuronal model predicts random search behaviors at multiple spatial scales in C. elegans. Elife 5, e12572.

Samoilov, M. S., Price, G. and Arkin, A. P. (2006). From fluctuations to phenotypes: the physiology of noise. Science’s STKE 2006, re17-re17.

Sharma, H., Yu, S., Kong, J., Wang, J. and Steitz, T. A. (2009). Structure of apo-CAP reveals that large conformational changes are necessary for DNA binding. Proceedings of the National Academy of Sciences 106, 16604-16609.

Smouse, P. E., Focardi, S., Moorcroft, P. R., Kie, J. G., Forester, J. D. and Morales, J. M. (2010). Stochastic modelling of animal movement. Philosophical Transactions of the Royal Society B: Biological Sciences 365, 2201-2211.

Spudich, J. L. and Koshland, D. E., Jr. (1976). Non-genetic individuality: chance in the single cell. Nature 262, 467-471.

Stanford, N. P., Szczelkun, M. D., Marko, J. F. and Halford, S. E. (2000). One-and three-dimensional pathways for proteins to reach specific DNA sites. The EMBO Journal 19, 6546-6557.

Stowe, R. L. and Cooper, M. M. (2019). Assessment in Chemistry Education. Israel Journal of Chemistry.

Symmons, O. and Raj, A. (2016). What’s Luck Got to Do with It: Single Cells, Multiple Fates, and Biological Nondeterminism. Molecular cell 62, 788-802.

Taleb, N. N. (2005). Fooled by Randomness: The hidden role of chance in life and in the markets. (2nd edn). New York: Random House.

Uphoff, S., Lord, N. D., Okumus, B., Potvin-Trottier, L., Sherratt, D. J. and Paulsson, J. (2016). Stochastic activation of a DNA damage response causes cell-to-cell mutation rate variation. Science 351, 1094-1097.

You, S.-T. and Leu, J.-Y. (2020). Making sense of noise. Evolutionary Biology—A Transdisciplinary Approach, 379-391.

Vilar, J. M., Guet, C. C. and Leibler, S. (2003). Modeling network dynamics: the lac operon, a case study. J Cell Biol 161, 471-476.

von Hippel, P. H. and Berg, O. G. (1989). Facilitated target location in biological systems. Journal of Biological Chemistry 264, 675-678.

Williams, L. C., Underwood, S. M., Klymkowsky, M. W. and Cooper, M. M. (2015). Are Noncovalent Interactions an Achilles Heel in Chemistry Education? A Comparison of Instructional Approaches. Journal of Chemical Education 92, 1979–1987.

 

Featured

Sounds like science, but it ain’t …

We are increasingly assailed with science-related “news” – stories that too often involve hype and attempts to garner attention (and no, half-baked ideas are not theories; they are often non-scientific speculation or unconstrained fantasies).

The other day, as is my addiction, I turned to the “Real Clear Science” website to look for novel science-based stories (distractions from the more horrifying news of the day). I discovered two links that seduced me into clicking: “Atheism is not as rare or as rational as you think” by Will Gervais and Peter Sjöstedt-H’s “Consciousness and higher spatial dimensions”. A few days later I encountered “Consciousness Is the Collapse of the Wave Function” by Stuart Hameroff. On reading them (more below), I faced the realization that science itself, and its distorted popularization by both institutional PR departments and increasingly by scientists and science writers, may be partially responsible for the absurdification of public discourse on scientific topics [1]. In part the problem arises from the assumption that science is capable of “explaining” much more than is actually the case. This insight is not new. Timothy Caulfield’s essay Pseudoscience and COVID-19 — we’ve had enough already focuses on the fact that various, presumably objective, data-based medical institutions have encouraged the public’s thirst for easy cures for serious, and often incurable, diseases. As an example, “If a respected institution, such as the Cleveland Clinic in Ohio, offers reiki — a science-free practice that involves using your hands, without even touching the patient, to balance the “vital life force energy that flows through all living things” — is it any surprise that some people will think that the technique could boost their immune systems and make them less susceptible to the virus?” That public figures and trusted institutions provide platforms for such silliness [see Did Columbia University cut ties with Dr. Oz?] means that there is little to distinguish data-based treatments from faith- and magical-thinking-based placebos. The ideal of disinterested science, while tempered by common human frailties, is further eroded by the lure of profit and/or the hope of enhanced public / professional status and notoriety. As noted by Pennock, “Science never guarantees absolute truth, but it aims to seek better ways to assess empirical claims and to attain higher degrees of certainty and trust in scientific conclusions”. Most importantly, “Science is a set of rules that keep the scientists from lying to each other.” [2]

It should surprise no one that the failure to explicitly recognize the limits, and evolving nature, of scientific knowledge opens the door to self-interested hucksterism at both individual and institutional levels. Just consider the number of complementary/alternative non-scientific “medical” programs run by prestigious institutions. The proliferation of pundits, speaking outside of their areas of established expertise, and often beyond what is scientifically knowable (e.g. historical events such as the origin of life or the challenges of living in the multiverse, which are, by their very nature, unobservable), speaks to the increasingly unconstrained growth of pathological, bogus, and corrupted science which, while certainly not new [3], has been facilitated by the proliferation of public, no-barrier, no-critical-feedback platforms [1,4].  Ignoring the real limits of scientific knowledge, and rejecting or ignoring the expertise of established authorities, amounts to a rejection of the ideals that have led to science that “works”.

Of course, we cannot blame the distortion of science for every wacky idea; crazy, conspiratorial and magical thinking may well be linked to the cognitive “features” (or are they bugs?) of the human brain. Norman Cohn describes the depressing, and repeated, pattern behind the construction of dehumanizing libels used to justify murderous behaviors towards certain groups [5].  Recent studies indicate that brains, whether complex or simple neural networks, appear to construct emergent models of the world, models they use to coordinate internal perceptions with external realities [6].  My own (out of my area of expertise) guess is that the complexity of the human brain is associated with, and leads to, the emergence of internal “working models” that attempt to make sense of what is happening to us, in part to answer questions such as why the good die young and the wicked go unpunished. It seems likely that our social nature (and our increasing social isolation) influences these models, models that are “checked” or “validated” against our experiences.

It was in this context that Gervais’s essay on atheism caught my attention. He approaches two questions: “how Homo sapiens — and Homo sapiens alone — came to be a religious species” and “how disbelief in gods can exist within an otherwise religious species?”  But is Homo sapiens really a religious species, and what exactly is a religion? Is it a tool that binds social groups of organisms together, a way of coping with, and giving meaning to, the (apparent) capriciousness of existence and experience, both, or something else again?  And how are we to know what is going on inside other brains, including the brains of chimps, whales, or cephalopods? In this light I was struck by an essay by Sofia Deleniv, “The ‘me’ illusion: How your brain conjures up your sense of self”, that considers the number of species that appear to be able to recognize themselves in a mirror. It turns out that this list is not nearly as short as was previously thought, and it seems likely that self-consciousness, the ability to recognize yourself as you, may be a feature of many such systems.  Do other organisms possess emergent “belief systems” that help process incoming and internal signals, including their own neural noise? When the author says, “We then subtly gauge participants’ intuitions” by using “a clever experiment to see how people mentally represent atheists”, one is left to wonder whether there are direct and objective measures of “intuitions” or “mental representations”.  Then the shocker: after publishing a paper claiming that “Analytic Thinking Promotes Religious Disbelief”, the authors state that “the experiments in our initial Science paper were fatally flawed, the results no more than false positives.”  One is left to wonder whether the questions asked made sense in the first place. While the work initially seemed scientific (after all, it was accepted and published in a premier scientific journal), was it ever really science?

Both “Consciousness and Higher Spatial Dimensions” and “Consciousness Is the Collapse of the Wave Function” sound very scientific. Some physicists (the most sciencey of scientists, right?) have been speculating, via “string theory” and “multiverses”, a series of unverified (and likely unverifiable) speculations, that the universe we inhabit has many, many more than the three spatial dimensions we experience.  But how consciousness, an emergent property of biological (cellular) networks, is related to speculative physics is not clear, no matter what Nobel laureates in physics may say.  Should we, the people, take these remarks seriously?  After all, these are the same folks who question the reality of time (for no good reason, as far as I can tell, as I watch my new grandchild and myself grow older rather than younger).

Part of the issue involves what has been called “the hard problem of consciousness”, but as far as I can tell, consciousness is not a hard problem but a process that emerges from systems of neural cells interacting with one another and their environment in complex ways, not unlike the underlying processes of embryonic development, in which a new macroscopic organism composed of thousands to billions of cells emerges from a single cell.  And if the brain and body are generating signals (thoughts), then it makes sense that these in turn feed back into the system; as consciousness becomes increasingly complex, these thoughts need to be “understood” by the system that produced them.  The system may be forced to make sense of itself (perhaps that is how religions and other explanatory beliefs come into being), settling the brain so that it can cope with the material world, whether that brain belongs to a nematode worm, an internet pundit, a QAnon wack-o, a religious fanatic, or a simple citizen trying to make sense of things.

Thanks to Melanie Cooper for editorial advice and Steve Pollock for checking my understanding of physics; all remaining errors are mine alone!

  1. Scheufele, D. A. and Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences 116, 7662-7669
  2. Kenneth S. Norris, cited in False Prophet by Alexander Kohn (and cited by John Grant in Corrupted Science).
  3.  See Langmuir, I. (1953, recovered and published in 1989). “Pathological science.” Research-Technology Management 32: 11-17; “Corrupted Science: Fraud, Ideology, and Politics in Science” and “Bogus Science: or, Some people really believe these things” by John Grant (2007 and 2009).
  4.  And while I personally think Sabine Hossenfelder makes great explanatory videos, even she is occasionally tempted to go beyond the scientifically demonstrable: e.g. You don’t have free will, but don’t worry and An update on the status of superdeterminism with some personal notes  
  5.  See Norman Cohn’s (1975) “Europe’s Inner Demons”.
  6. Kaplan, H. S. and Zimmer, M. (2020). Brain-wide representations of ongoing behavior: a universal principle? Current opinion in neurobiology 64, 60-69.

Featured

Anti-Scientific & anti-vax propaganda (1926 and today)

“Montaigne concludes, like Socrates, that ignorance aware of itself is the only true knowledge” – from “Forbidden Knowledge” by Roger Shattuck

A useful review of the history of the anti-vaccination movement: Poland & Jacobson 2011. The Age-Old Struggle against the Antivaccinationists NEJM

Science educators and those who aim to explain the implications of scientific or clinical observations to the public have their work cut out for them. In large part, this is because helping others, including the diverse population of health care providers and their clients, depends upon more than just critical thinking skills. Equally important is what might be termed “disciplinary literacy,” the ability to evaluate whether the methods applied are adequate and appropriate and so whether a particular observation is relevant to or able to resolve a specific question. To illustrate this point, I consider an essay from 1926 by Peter Frandsen and a 2021 paper by Ou et al. (2021) on the mechanism of hydroxychloroquine inhibition of SARS-CoV-2 replication in tissue culture cells.                

In his essay, written well before the proliferation of unfettered web-based social pontification and ideologically-motivated distortions, Frandsen notes that “pseudo and unscientific cults are springing up and finding it easy to get a hold on the popular mind,” and “are making some headway in establishing themselves on an equally recognized basis with scientific medicine,” in part due to their ability to lobby politicians to exclude them from any semblance of “truth in advertising.”  Of particular resonance were the efforts in Minnesota, California, and Montana to oppose mandatory vaccination for smallpox. Given these successful anti-vax efforts, Frandsen asks, “is it any wonder that smallpox is one thousand times more prevalent in Montana than in Massachusetts in proportion to population?”  One cannot help but analogize to today’s COVID-19 statistics on the dramatically higher rate of hospitalization among the unvaccinated (e.g. Scobie et al., 2021). The comparison is all the more impactful (and disheartening) given the severity of smallpox as a disease, its elimination in 1977, together with the near elimination of other dangerous viral human diseases (poliomyelitis and measles), primarily via vaccination efforts (Hopkins, 2013), and the discouraging number of high-profile celebrities (various forms of influencers, in modern parlance), some of whom I previously considered admirable figures, who actively promulgate positions that directly contradict objective and reproducible observation and embrace scientifically untenable beliefs (the vaccine-autism link serves as a prime example).

While much is made of the idea that education-based improvements in critical thinking ability can render its practitioners less susceptible to unwarranted conspiracy theories and beliefs (Lantian et al., 2021), the situation becomes more complex when we consider how it is that presumably highly educated practitioners, e.g. medical doctors, can become conspiracists (ignoring for the moment the more banal, and likely universal, reasons associated with greed and the need to draw attention to themselves).  As noted, many is the conspiracist who considers themselves to be a “critical freethinker” (see Lantian et al). The fact that they fail to recognize the flaws in their own thinking leads us to ask, what are they missing?            

A point rarely considered is what we might term “disciplinary literacy.” That is, do the members of an audience have the background information necessary to question foundational presumptions associated with an observation? Here I draw on personal experience. I have (an increasingly historical) interest in the interactions between intermediate filaments and viral infection (Doedens et al., 1994; Murti et al., 1988). In 2020, I found myself involved quite superficially with studies by colleagues here at the University of Colorado Boulder; they reproduced the ability of hydroxychloroquine to inhibit coronavirus replication in cultured cells.  Nevertheless, and in the face of various distortions, it quickly became apparent that hydroxychloroquine was ineffective for treating SARS-CoV-2 infection in humans. So, what disciplinary facts did one need to understand this apparent contradiction (which appears to have fueled unreasonable advocacy of hydroxychloroquine treatment for COVID)? The paper by Ou et al. (2021) provides a plausible mechanistic explanation. The process of in vitro infection of various cells appears to involve endocytosis followed by proteolytic events leading to the subsequent movement of viral nucleic acid into the cytoplasm, a prerequisite for viral replication. Hydroxychloroquine treatment acts by blocking the acidification of the endosome, which inhibits the proteolytic cleavage of the viral spike protein and the subsequent cytoplasmic transport of the virus’s nucleic acid genome (see figure 1, Ou et al. 2021).  In contrast, in vivo infection involves a surface protease, rather than endocytosis, and is therefore independent of endosomal acidification.  Without a (disciplinary) understanding of the various mechanisms involved in viral entry, and their relevance in various experimental contexts, it remains a mystery why hydroxychloroquine treatment blocks viral replication in one system (in vitro cultured cells) and not another (in vivo).

 In the context of science education and how it can be made more effective, it appears that helping students understand underlying cellular processes, experimental details, and their often substantial impact on observed outcomes is central. This is in contrast to the common focus (in many courses) on the memorization of largely irrelevant details. Understanding how one can be led astray by the differences between experimental systems (and inadequate sample sizes) is essential. One cannot help but think of how mouse studies on diseases such as sepsis (Kolata, 2013) and Alzheimer’s (Reardon, 2018) have been haunted by the assumption that systems that differ in physiologically significant details are good models for human disease and the development of effective treatments. Helping students understand how we come to evaluate observations and the molecular and physiological mechanisms involved should be the primary focus of a modern education in the biological sciences, since it helps build up the disciplinary literacy needed to distinguish reasoned argument from anti-scientific propaganda. 

Acknowledgement: Thanks to Qing Yang for bringing the Ou et al paper to my attention.  

Literature cited:
Shattuck, R. (1996). Forbidden knowledge: from Prometheus to pornography. New York: St. Martin’s Press.

Doedens, J., Maynell, L. A., Klymkowsky, M. W. and Kirkegaard, K. (1994). Secretory pathway function, but not cytoskeletal integrity, is required in poliovirus infection. Arch Virol. suppl. 9, 159-172.

Hopkins, D. R. (2013). Disease eradication. New England Journal of Medicine 368, 54-63.

Kolata, G. (2013). Mice fall short as test subjects for some of humans’ deadly ills. New York Times 11, 467-477.

Lantian, A., Bagneux, V., Delouvée, S. and Gauvrit, N. (2021). Maybe a free thinker but not a critical one: High conspiracy belief is associated with low critical thinking ability. Applied Cognitive Psychology 35, 674-684.

Murti, K. G., Goorha, R. and Klymkowsky, M. W. (1988). A functional role for intermediate filaments in the formation of frog virus 3 assembly sites. Virology 162, 264-269.
 
Ou, T., Mou, H., Zhang, L., Ojha, A., Choe, H. and Farzan, M. (2021). Hydroxychloroquine-mediated inhibition of SARS-CoV-2 entry is attenuated by TMPRSS2. PLoS pathogens 17, e1009212.

Reardon, S. (2018). Frustrated Alzheimer’s researchers seek better lab mice. Nature 563, 611-613.

Scobie, H. M., Johnson, A. G., Suthar, A. B., Severson, R., Alden, N. B., Balter, S., Bertolino, D., Blythe, D., Brady, S. and Cadwell, B. (2021). Monitoring incidence of covid-19 cases, hospitalizations, and deaths, by vaccination status—13 US jurisdictions, April 4–July 17, 2021. Morbidity and Mortality Weekly Report 70, 1284.
Featured

Higher Education Malpractice: curving grades

If there is one thing that university faculty and administrators could do today to demonstrate their commitment to inclusion, not to mention teaching and learning over sorting and status, it would be to ban curve-based, norm-referenced grading. Many obstacles exist to the effective inclusion and success of students from underrepresented (and underserved) groups in science and related programs.  Students and faculty often, and often correctly, perceive large introductory classes as “weed out” courses preferentially impacting underrepresented students. In the life sciences, many of these courses are “out-of-major” requirements, in which students find themselves taught with relatively little regard to the course’s relevance to bio-medical careers and interests. Often such out-of-major requirements spring not from a thoughtful decision by faculty as to their necessity, but because they are prerequisites for post-graduation admission to medical or graduate school. “In-major” instructors may not even explicitly incorporate or depend upon the materials taught in these out-of-major courses – rare is the undergraduate molecular biology degree program that actually calls on students to use calculus or a working knowledge of physics, despite the fact that such skills may be relevant in certain biological contexts – see Magnetofiction – A Reader’s Guide.  At the same time, those teaching “out of major” courses may overlook the fact that many (and sometimes most) of their students are non-chemistry, non-physics, and/or non-math majors.  The result is that those teaching such classes fail to offer a doorway into the subject matter to any but those already comfortable with it. But reconsidering the design and relevance of these courses is no simple matter.  Banning grading on a curve, on the other hand, can be implemented overnight (and by fiat if necessary).

So why ban grading on a curve?  First and foremost, it would put faculty and institutions on record as valuing student learning outcomes (perhaps the best measure of effective teaching) over the sorting of students into easy-to-judge groups.  Second, there simply is no pedagogical justification for curved grading, with the possible exception of providing a kludgy fix to correct for poorly designed examinations and courses. There are more than enough opportunities to sort students based on their motivation, talent, ambition, and “grit,” and through the opportunities they seek out and successfully embrace (e.g., volunteerism, internships, and independent study projects).

The negative impact of curving can be seen in a recent paper by Harris et al. (Reducing achievement gaps in undergraduate general chemistry …), who report a significant difference in overall student inclusion and subsequent success based on a small grade difference between a C, which allows a student to proceed with their studies (generally as successfully as those with higher grades), and a C-minus, which requires them to retake the course before proceeding (often driving them out of the major).  Because Harris et al. analyzed curved courses, a subset of students cannot escape these effects.  And poor grades disproportionately impact underrepresented and underserved groups – they say explicitly “you do not belong” rather than “how can I help you learn”.

Often naysayers disparage efforts to improve course design as “dumbing down” the course, rather than improving it.  In many ways this is a situation analogous to blaming patients for getting sick or not responding to treatment, rather than conducting an objective analysis of the efficacy of the treatment.  If medical practitioners had maintained this attitude, we would still be bleeding patients and accepting that more than a third are fated to die, rather than seeking effective treatments tailored to patients’ actual diseases – the basis of evidence-based medicine.  We would have failed to develop antibiotics and vaccines – indeed, we would never have sought them out. Curving grades implies that course design and delivery are already optimal, and the fate of students is predetermined because only a percentage can possibly learn the material.  It is, in an important sense, complacent quackery.

Banning grading on a curve, and labelling it for what it is – educational malpractice – would also change the dynamics of the classroom and might even foster an appreciation that a good teacher is one with the highest percentage of successful students, e.g. those who are retained in a degree program and graduate in a timely manner (hopefully within four years). Of course, such an alternative evaluation of teaching would reflect a department’s commitment to construct and deliver the most engaging, relevant, and effective educational program. Institutional resources might even be used to help departments generate more objective, instructor-independent evaluations of learning outcomes, in part to replace the current practice of student-based opinion surveys, which are often little more than measures of popularity.  We might even see a revolution in which departments compete with one another to maximize student inclusion, retention, and outcomes (perhaps even to the extent of applying pressure on the design and delivery of “out of major” required courses offered by other departments).  

“All a pipe dream” you might say, but the available data demonstrates that resources spent on rethinking course design, including engagement and relevance, can have significant effects on grades, retention, time to degree, and graduation rates.  At the risk of being labeled as self-promoting, I offer the following to illustrate the possibilities: working with Melanie Cooper at Michigan State University, we have built such courses in general and organic chemistry and documented their impact, see Evaluating the extent of a large-scale transformation in gateway science courses.

Perhaps we should be encouraging students to seek out legal representation to hold institutions (and instructors) accountable for detrimental practices, such as grading on a curve.  There might even come a time when professors and departments would find it prudent to purchase malpractice insurance if they insist on retaining and charging students for ineffective educational strategies (1).

Acknowledgements: Thanks to daughter Rebecca who provided edits and legal references and Melanie Cooper who inspired the idea. Educate! image from the Dorian De Long Arts & Music Scholarship site.

(1) One cannot help but wonder if such conduct could ever rise to the level of fraud. See, e.g., Bristol Bay Productions, LLC vs. Lampack, 312 P.3d 1155, 1160 (Colo. 2013) (“We have typically stated that a plaintiff seeking to prevail on a fraud claim must establish five elements: (1) that the defendant made a false representation of a material fact; (2) that the one making the representation knew it was false; (3) that the person to whom the representation was made was ignorant of the falsity; (4) that the representation was made with the intention that it be acted upon; and (5) that the reliance resulted in damage to the plaintiff.”).

Featured

Thinking about biological thinking: Steady state, half-life & response dynamics

Insights into student thinking & course design, part of the biofundamentals project. 

Something that often eludes both instructors and instructional researchers is a clear appreciation of what it is that students do and do not know, what ideas they can and cannot call upon to solve problems and generate clear, coherent, and plausible explanations. What information, thought to have been presented effectively through past instruction, appears to be unavailable to students? As an example, few instructors would believe that students completing college level chemistry could possibly be confused about the differences between covalent and non-covalent molecular interactions, yet there is good evidence that they are (Williams et al., 2015). Unless these ideas, together with their conceptual bases and practical applications, are explicitly called out in the design and implementation of instructional materials, they often fail to become a working (relevant) part of the students’ conceptual tool-kit.

To identify ideas involved in understanding biological systems, we are using an upper division undergraduate course in developmental biology (blog link) to provide context; this is a final “capstone” junior/senior level course that comes after students have completed multiple required courses in chemistry and biology.  Embryonic development integrates a range of molecular-level processes, including the control of gene expression and of cellular morphology and dynamics, through intrinsic and extrinsic signaling systems.

A key aspect of the course’s design is the use of formative assessment activities delivered through the beSocratic system. These activities generally include parts in which students are asked to draw a graph or diagram. Students are required to complete tasks before the start of each class meeting; their responses are used to inform in-class discussions, a situation akin to reviewing game film and coaching in sports. Analysis of student drawings and comments, carried out in collaboration with Melanie Cooper and her group at Michigan State University, can reveal unexpected aspects of students’ thinking (e.g. Williams et al., 2015). What emerges from this Socratic give and take is an improved appreciation of the qualities of the tasks that engage students (as well as those that do not), and insights into how students analyze specific tasks, what sets of ideas they see as necessary and which necessary ideas they ignore when generating explanatory and predictive models. Most importantly, they can reveal flaws in how necessary ideas are developed. While at an admittedly early stage in the project, here I sketch out some preliminary findings: the first of these deal with steady state concentration and response dynamics.

The ideas of steady state concentration and pathway dynamics were identified by Loertscher et al (2014) as two of five “threshold concepts” in biochemistry, and presumably molecular biology as well. Given the non-equilibrium nature of biological systems, we consider the concentration of a particular molecule in a cell in dynamic terms, a function of its rate of synthesis (or importation from the environment) together with its rate of breakdown.  On top of this dynamic, the activity of existing molecules can be regulated through various post-translational mechanisms.  All of the populations of molecules within a cell or organism have a characteristic steady state concentration, with the exception of genomic DNA, which, while synthesized and repaired, is not degraded in living organisms.

In biological systems, molecules are often characterized by their “half-life”, but this can be confusing, since it is quite different from the way the term is used in physics, where students are likely to first be introduced to it.[1]  Echoes from physics can imply that a molecule’s half-life is an intrinsic feature of the molecule, rather than of the system in which the molecule finds itself.  The equivalent of half-life would be doubling time, but these terms make sense only under specific conditions.  In a system in which synthesis has stopped (synthesis rate = 0), the half-life is the time it takes for the number of molecules in the system to decrease by 50%, while in the absence of degradation (degradation rate = 0), the doubling time is the time it takes to double the number of molecules in the system.  Both degradation and synthesis rates are regulatable and can vary, often dramatically, in response to various stimuli.
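To make these relations concrete, here is a minimal numerical sketch (not part of the course materials), assuming the simplest possible case: a constant synthesis rate and first-order degradation, with invented parameter values. It illustrates that the steady state level is set by the ratio of the two rates, while the half-life depends only on the degradation rate constant.

```python
# Minimal sketch (illustrative, invented parameter values).
# dC/dt = k_syn - k_deg * C  ->  steady state C_ss = k_syn / k_deg
# If synthesis stops (k_syn = 0), C falls with half-life t_half = ln(2) / k_deg.
import math

k_syn = 10.0   # molecules made per minute (hypothetical)
k_deg = 0.10   # first-order degradation rate constant, per minute (hypothetical)

C_ss = k_syn / k_deg          # steady state level
t_half = math.log(2) / k_deg  # half-life once synthesis is shut off

def simulate(C0, k_syn, k_deg, minutes, dt=0.01):
    """Step the system forward (simple Euler integration) and return the final level."""
    C = C0
    for _ in range(int(minutes / dt)):
        C += (k_syn - k_deg * C) * dt
    return C

print(f"steady state = {C_ss:.0f} molecules; half-life = {t_half:.1f} min")
# Doubling synthesis doubles the steady state but leaves the half-life
# (set only by k_deg) unchanged:
print(f"steady state with doubled synthesis = {simulate(C_ss, 2 * k_syn, k_deg, 200):.0f} molecules")
# Shutting synthesis off: after one half-life, about half the molecules remain.
print(f"level one half-life after synthesis stops = {simulate(C_ss, 0.0, k_deg, t_half):.0f} molecules")
```

Changing the synthesis rate shifts the steady state without touching the half-life, while changing the degradation rate alters both, which is one reason equating a high steady state concentration with a long half-life is a mistake.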

In the case of RNA and polypeptide levels, the synthesis rate is determined by many distinct processes, including effective transcription factor concentrations, the signals that activate transcription factors, the rates of binding of transcription factors to transcription factor binding sites (which can involve both DNA sequences and other proteins), the relevant binding affinities, and the rates associated with the recruitment and activation of DNA-dependent RNA polymerase. Once activated, the rate of gene-specific RNA synthesis will be influenced by the rate of RNA polymerization (nucleotide bases added per second) and the length of the RNA molecules synthesized.  In eukaryotes, the newly formed RNA will generally need to have introns removed through interactions with splicing machinery, as well as other post-transcriptional reactions, after which the processed RNA will be transported from the nucleus to the cytoplasm through the nuclear pore complex. In the cytoplasm there are rates associated with the productive interaction of RNAs with the translational machinery (ribosomes and associated factors), and the rate at which polypeptide synthesis occurs (amino acids added per second), together with the length of the polypeptide synthesized (given that things are complicated enough, I will ignore processes such as those associated with the targeting of membrane proteins and codon usage, although these will be included in a new chapter in biofundamentals reasonably soon, I hope). On the degradative side, there are rates associated with interactions with nucleases (that break down RNAs) and proteinases (that break down polypeptides).  These processes require energy; they are generally driven by reactions coupled to the hydrolysis of adenosine triphosphate (ATP).
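A back-of-the-envelope calculation can help students see how these rates and lengths combine. The sketch below uses invented gene and protein lengths and rough, textbook-scale elongation rates (which vary with organism and conditions), so the numbers are purely illustrative.

```python
# Rough timing sketch: how machine speed and template length set the minimum
# time to make one RNA and one polypeptide. All values are assumptions.
transcript_length_nt = 2400      # hypothetical transcribed region (nucleotides)
polypeptide_length_aa = 500      # hypothetical polypeptide (amino acids)

rna_pol_rate_nt_per_s = 40       # assumed RNA polymerase elongation rate
ribosome_rate_aa_per_s = 5       # assumed (eukaryotic) ribosome elongation rate

transcription_time_s = transcript_length_nt / rna_pol_rate_nt_per_s
translation_time_s = polypeptide_length_aa / ribosome_rate_aa_per_s

print(f"time to transcribe one RNA:        {transcription_time_s:.0f} s")
print(f"time to translate one polypeptide: {translation_time_s:.0f} s")
# These are elongation times only; initiation, splicing, nuclear export, and
# degradation add further (regulatable) delays, as described above.
```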

That these processes matter is illustrated nicely in work from Harima and colleagues (2014). The system, involved in the segmentation of the anterior region of the presomitic mesoderm, responds to signaling by activating the Hes7 gene, while the Hes7 gene product acts to inhibit Hes7 gene expression. The result is an oscillatory response that is “tuned” by the length of the transcribed region (RNA length). This can be demonstrated experimentally by generating mice in which two of the gene’s three introns (Hes7-3) or all three introns (intron-less) are removed. Removing introns changes the oscillatory behavior of the system, altering the dynamics of both Hes7 mRNA and Hes7 protein levels (Harima et al., 2013).
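The logic can be captured in a toy model of delayed negative feedback; this is not the published Hes7 model, and every parameter value below is invented, but it shows how a protein that represses its own synthesis after a delay (standing in for the time needed for transcription, splicing, export, and translation) can oscillate, and how shortening that delay, as intron removal would, changes the period.

```python
# Toy sketch of delayed negative feedback (not the published Hes7 model;
# every parameter value here is invented for illustration). Protein made
# "tau" minutes ago represses current synthesis; the delay stands in for
# transcription, splicing, export, and translation time.
DT = 0.1  # minutes per simulation step

def simulate(tau_min, total_min=600.0):
    alpha = 100.0   # maximal synthesis rate (arbitrary units per minute)
    K = 50.0        # repression threshold
    n = 4           # steepness (cooperativity) of repression
    k_deg = 0.3     # first-order degradation rate (per minute)
    delay_steps = int(tau_min / DT)
    history = [0.0] * (delay_steps + 1)   # past protein levels, oldest first
    trace = []
    p = 0.0
    for _ in range(int(total_min / DT)):
        p_delayed = history.pop(0)        # protein level ~tau minutes ago
        synthesis = alpha / (1.0 + (p_delayed / K) ** n)
        p += (synthesis - k_deg * p) * DT
        history.append(p)
        trace.append(p)
    return trace

for tau in (30.0, 15.0):                  # shorter delay mimics intron removal
    trace = simulate(tau)
    late = trace[len(trace) // 2:]        # ignore the initial transient
    mid = (max(late) + min(late)) / 2
    ups = [i for i in range(1, len(late)) if late[i - 1] < mid <= late[i]]
    period = (ups[-1] - ups[0]) / (len(ups) - 1) * DT
    print(f"delay {tau:4.0f} min: protein oscillates with period ~{period:.0f} min")
```

With these made-up numbers, the shorter delay gives a markedly shorter oscillation period, which is the qualitative point of the Hes7 intron-removal experiments.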

In the context of developmental biology, we use beSocratic activities to ask students to consider a molecule’s steady state concentration as a function of its synthesis and degradation rates, and to predict how the system would change when one or the other is altered. These ideas were presented in the context of observations by Schwanhäusser et al (2011) that large discrepancies between steady state RNA and polypeptide concentrations are common and that there is an absence of correlation between RNA and polypeptide half-lives (we also use these activities to introduce the general idea of correlation). In their responses, it was common to see students linking high steady state concentrations exclusively to long half-lives. Asked to consider the implications in terms of system responsiveness (in the specific context of a positively-acting transcription factor and target gene expression), students often presumed that a longer half-life would lead to a higher steady state concentration, which in turn would lead to increased target gene expression, primarily because collisions between the transcription factor and its DNA-binding sites would increase. This is an example of a p-prim (Hammer, 1996) – the heuristic that “more is more”, a presumption that is applicable to many systems.

In biological systems, however, this is generally not the case – responses “saturate”; that is, increasing transcription factor concentration (or activity) above a certain level generally does not lead to a proportionate, or any, increase in target gene expression. We would not call this a misconception, because the underlying idea is useful in many situations; it simply fails in biological systems, where responses are inherently limited. The ubiquity and underlying mechanisms of response saturation need to be presented explicitly, and their impact on various processes reinforced repeatedly, preferably by having students use them to solve problems or construct plausible explanations. A related phenomenon that students seemed not to recognize involves the non-linearity of the initial response to a stimulus, in this case the concentration of transcription factor below which target gene expression is not observed (or may occur, but only transiently or within a few cells in the population, so as to be undetectable by the techniques used).

So what ideas do students need to call upon when they consider steady state concentration, how it changes, and the impact of such changes on system behavior?  It seems we need to go beyond synthesis and degradation rates and include the molecular processes associated with setting the system’s response onset and saturation concentrations.  First we need to help students appreciate why such behaviors (onset and saturation) occur – why doesn’t target gene expression begin as soon as a transcription factor appears in a cell?  Why does gene expression level off when transcription factor concentrations rise above a certain level?  The same questions apply to the types of threshold behaviors often associated with signaling systems.  For example, in quorum sensing among unicellular organisms, the response of cells to the signal occurs over a limited concentration range, from off to full on.  A related issue is associated with morphogen gradients (concentration gradients over space rather than time), in which there are multiple distinct types of “threshold” responses. One approach might be to develop a model in which we set the onset concentration close to the saturation concentration. The difficulty (or rather instructional challenge) here is that these are often complex processes involving cooperative as well as feedback interactions.
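One conventional way to give students a handle on both behaviors is a Hill-type response curve. The sketch below uses invented parameter values and is not drawn from the course materials; it shows how increasing the cooperativity term pushes the onset concentration toward the saturation concentration, producing the switch-like, off-to-full-on responses seen in quorum sensing and morphogen interpretation.

```python
# Hill-function sketch (illustrative parameters): a response that is negligible
# below an onset concentration and saturates above it. Larger n brings onset
# and saturation closer together, producing a switch-like response.
def response(tf, K=100.0, n=1):
    """Fractional target gene expression at transcription factor level tf."""
    return tf ** n / (K ** n + tf ** n)

levels = [10, 50, 100, 200, 400, 800]
for n in (1, 4, 8):
    outputs = ", ".join(f"{response(tf, n=n):.2f}" for tf in levels)
    print(f"n = {n}: expression at TF = {levels} -> {outputs}")
# With n = 1, doubling TF from 400 to 800 still increases expression noticeably
# ('more is more'); with n = 8, expression is already near 0 at TF = 50 and
# near 1 at TF = 200, so the response turns on over a narrow concentration
# range and further increases change essentially nothing.
```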

Our initial approach to steady state and thresholds has been to build activities based on the analysis of a regulatory network presented by Saka and Smith (2007), an analysis based on studies of early embryonic development in the frog Xenopus laevis. We chose the system because of its simplicity, involving only four components (although there are many other proteins associated with the actual system).  Saka and Smith modeled the regulatory network controlling the expression of the transcription factor proteins Goosecoid (Gsc) and Brachyury (Xbra) in response to the secreted signaling protein activin, a member of the TGFβ superfamily of secreted signaling proteins (see Li and Elowitz, 2019).  The network involves the positive action of Xbra on the gene encoding the transcription factor protein Xom.  The system’s behavior depends on the values of various parameters, parameters that include the response to the activator (activin), the rates of synthesis and the half-lives of Gsc, Xbra, and Xom, and the degrees of regulatory cooperativity and responsiveness.

Depending upon these parameters, the system can produce a range of complex responses.  In different regimes, increasing concentrations of activin can lead, initially, to increasing but mutually exclusive expression of either Xbra or Gsc, as well as to sharp transitions in which expression flips from one to the other as activin concentration increases, after which the response saturates. There are also conditions at very low activin concentrations in which both Xbra and Gsc are expressed at low levels, a situation that students are asked to explain.
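For those who want something runnable to explore, here is a generic cross-repression sketch. It is emphatically not the published Saka and Smith model, and every parameter value is invented, but it reproduces the qualitative behavior described above: mutually exclusive expression, and a sharp flip in which gene wins as the input rises.

```python
# Generic cross-repression sketch (NOT the Saka & Smith equations; all
# parameter values are invented for illustration). Two genes, standing in
# for Xbra and Gsc, are both activated by an input M (standing in for
# activin) but repress one another. The Gsc-like gene needs more input to
# turn on but is the stronger repressor once expressed.
def steady_state(M, t_end=60.0, dt=0.05):
    xbra, gsc = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        act_x = M / (0.3 + M)                  # Xbra-like activation: low threshold
        act_g = M ** 4 / (2.0 ** 4 + M ** 4)   # Gsc-like activation: high threshold
        d_xbra = act_x / (1.0 + (gsc / 0.1) ** 4) - xbra   # Gsc represses strongly
        d_gsc = act_g / (1.0 + (xbra / 0.6) ** 4) - gsc    # Xbra represses weakly
        xbra += d_xbra * dt
        gsc += d_gsc * dt
    return xbra, gsc

for M in (0.05, 0.3, 1.0, 3.0, 10.0):   # activin-like input, arbitrary units
    xbra, gsc = steady_state(M)
    print(f"M = {M:>5}: Xbra-like = {xbra:.2f}, Gsc-like = {gsc:.2f}")
```

The asymmetry doing the work here is that the Gsc-like gene requires more input to turn on but represses its partner more strongly once it is expressed; students can vary these assumptions to see which features of the response each one controls.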

Lessons learned: Based on their responses, captured through beSocratic and revealed during in-class discussions, it appears that there is a need to be more explicit (early in the course, and perhaps the curriculum as well) when considering the mechanisms associated with response onset and saturation, in the context of how changes in the concentrations of regulatory factors (through changes in synthesis, turn-over, and activity) impact system responses. This may require a more quantitative approach to molecular dynamics and system behaviors. Here we may run into a problem: the often phobic responses of biology majors (and many faculty) to mathematical analyses.  Even the simplest of models, such as that of Saka and Smith, require a consideration of factors generally unfamiliar to students, concepts and skills that may well not be emphasized or mastered in prerequisite courses. The trick is to define realistic, attainable, and non-trivial goals – we are certainly not going to succeed in getting late-stage molecular biology students with rudimentary math skills to solve systems of differential equations in a developmental biology course.  But perhaps we can build up the instincts needed to appreciate the molecular processes at work in systems whose behavior evolves over time in response to various external signals (which is, of course, pretty much every biological system).

Footnotes

[1] A similar situation exists in the context of the term “spontaneous” in chemistry and biology.  In chemistry spontaneous means thermodynamically favorable, while in standard usage (and generally in biology) spontaneous implies that a reaction is proceeding at a measurable, functionally significant rate.  Yet another insight that emerged through discussions with Melanie Cooper. 

Mike Klymkowsky

Literature cited

Hammer, D. (1996). Misconceptions or p-prims. How might alternative perspectives of cognitive structure influence instructional perceptions and intentions. Journal of the Learning Sciences 5, 97-127.

Harima, Y., Imayoshi, I., Shimojo, H., Kobayashi, T. and Kageyama, R. (2014). The roles and mechanism of ultradian oscillatory expression of the mouse Hes genes. In Seminars in cell & developmental biology, pp. 85-90: Elsevier.

Harima, Y., Takashima, Y., Ueda, Y., Ohtsuka, T. and Kageyama, R. (2013). Accelerating the tempo of the segmentation clock by reducing the number of introns in the Hes7 gene. Cell Reports 3, 1-7.

Li, P. and Elowitz, M. B. (2019). Communication codes in developmental signaling pathways. Development 146, dev170977.

Loertscher, J., Green, D., Lewis, J. E., Lin, S. and Minderhout, V. (2014). Identification of threshold concepts for biochemistry. CBE—Life Sciences Education 13, 516-528.

Saka, Y. and Smith, J. C. (2007). A mechanism for the sharp transition of morphogen gradient interpretation in Xenopus. BMC Dev Biol 7, 47.

Schwanhäusser, B., Busse, D., Li, N., Dittmar, G., Schuchhardt, J., Wolf, J., Chen, W. and Selbach, M. (2011). Global quantification of mammalian gene expression control. Nature 473, 337.

Williams, L. C., Underwood, S. M., Klymkowsky, M. W. and Cooper, M. M. (2015). Are Noncovalent Interactions an Achilles Heel in Chemistry Education? A Comparison of Instructional Approaches. Journal of Chemical Education 92, 1979–1987.

Featured

Going virtual without a net

Is the coronavirus-based transition from face to face to on-line instruction yet another step to down-grading instructional quality?

It is certainly a strange time in the world of higher education. In response to the current coronavirus pandemic, many institutions have quickly, sometimes within hours and primarily by fiat, transitioned from face to face to distance (web-based) instruction. After a little confusion, it appears that laboratory courses are included as well, which certainly makes sense. While virtual laboratories can be built (see our own virtual laboratories in biology), they typically fail to capture the social setting of a real laboratory.  More to the point, I know of no published studies that have measured the efficacy of such on-line experiences in terms of the ideas and skills students master.

Many instructors (including this one) are being called upon to carry out a radical transformation of instructional practice “on the fly.” Advice is being offered from all sides, from university administrators and technical advisors (see as an example Making Online Teaching a Success).  It is worth noting that much (all?) of this advice falls into the category of “personal empiricism”, suggestions based on various experiences but unsupported by objective measures of educational outcomes – outcomes that include the extent of student engagement as well as clear descriptions of i) what students are expected to have mastered, ii) what they are expected to be able to do with their knowledge, and iii) what they can actually do. Again, to my knowledge there have been few if any careful comparative studies on learning outcomes achieved via face to face versus virtual teaching experiences. Part of the issue is that many studies on teaching strategies (including recent work on what has been termed “active learning” approaches) have failed to clearly define what exactly is to be learned, a necessary first step in evaluating their efficacy.  Are we talking memorization and recognition, or the ability to identify and apply core and discipline-specific ideas appropriately in novel and complex situations?

At the same time, instructors have not had practical training in using available tools (Zoom, in my case) and have had little in the way of effective support. Even more importantly, there are few published and verified studies to inform what works best in terms of student engagement and learning outcomes. Even if there were clear “rules of thumb” in place to guide the instructor or course designer, there has not been the time or the resources needed to implement these changes. The situation is not surprising given that the quality of university level educational programs rarely attracts critical analysis, or the necessary encouragement, support, and recognition needed to make it a departmental priority (see Making education matter in higher education).  It seems to me that the current situation is not unlike attempting to perform a complicated surgery after being told to watch a 3 minute youtube video. Unsurprisingly, patient (student learning) outcomes may not be pretty.

Much of what is missing from on-line instructional scenarios is the human connection, the ability of an instructor to pay attention to how students respond to the ideas presented. Typically this involves reading the facial expressions and body language of students, and asking challenging (Socratic) questions – questions that address how the information presented can be used to generate plausible explanations or to predict the behavior of a system. These are interactions that are difficult, if not impossible, to capture in an on-line setting.

While there is much to be said for active engagement/active learning strategies (see Hake 1998, Freeman et al 2014 and Theobald et al 2020), one can easily argue that all effective learning scenarios involve an instructor who is aware of and responsive to students’ pre-existing knowledge. It is also important that the instructor has the willingness (and freedom) to entertain their questions, confusions, and requests for clarification (saying it a different way), and to recognize when it may be necessary to revisit important, foundational ideas and skills – a situation that can necessitate discarding planned materials and “coaching up” students on core concepts and their application. The ability of the instructor to customize instruction “on the fly” is one of the justifications for hiring disciplinary experts in instructional positions: they (presumably) understand the conceptual foundations of the materials they are called upon to present. In its best (Socratic) form, the dialog between student and instructor drives students (and instructors) to develop a more sophisticated and metacognitive understanding of the web of ideas involved in most scientific explanations.

In the absence of an explicit appreciation of the importance of the human interactions between instructor and student, interactions already strained in the context of large enrollment courses, we are likely to find an increase in the forces driving instruction to become more and more about rote knowledge, rather than the higher order skills associated with the ability to juggle ideas, identifying those needed and those irrelevant to a specific situation.  While I have been trying to be less cynical (not a particularly easy task in the modern world), I suspect that the flurry of advice on how to carry out distance learning is more about avoiding the need to refund student fees than about improving students’ educational outcomes (see Colleges Sent Students Home. Now Will They Refund Tuition?)

A short post-script (17 April 2020): Over the last few weeks I have put together the tools to make the on-line MCDB 4650 Developmental Biology course somewhat smoother for me (and hopefully the students). I use Keynote (rather than Powerpoint) for slides; when the iPad is connected wirelessly to the projector, this enables me to wander around the classroom. The iOS version of Keynote enables me, and students, to draw on slides. Now that I am tethered, I rely more on pre-class beSocratic activities and the Mirroring360 application to connect my iPad to my laptop for Zoom sessions. I am back to being more interactive with the materials presented. I am also starting to pick students at random to answer questions and provide explanations (since they are quiet otherwise) – hopefully that works. My set up includes a good microphone, laptop, iPad, and the newly arrived volume on Active Learning.

Featured

Conceptual simplicity and mechanistic complexity: the implications of un-intelligent design

Using “Thinking about the Conceptual Foundations of the Biological Sciences” as a jumping off point. “Engineering biology for real?” by Derek Lowe (2018) is also relevant

Biological systems can be seen as conceptually simple, but mechanistically complex, with hidden features that make “fixing” them difficult.  

Biological systems are evolving, bounded, non-equilibrium reaction systems. Based on their molecular details, it appears that all known organisms, both extinct and extant, are derived from a single last universal common ancestor, known as LUCA.  LUCA lived ~4,000,000,000 years ago (give or take).  While the steps leading to LUCA are hidden, and its precursors are essentially unknowable (much like the universe before the big bang), we can come to some general and unambiguous conclusions about LUCA itself [see Catchpole & Forterre, 2019].  First, LUCA was cellular and complex, probably more complex than some modern organisms, certainly more complex than the simplest obligate intracellular parasite [Martinez-Cano et al., 2014].  Second, LUCA was a cell with a semi-permeable lipid bilayer membrane. Its boundary layer is semi-permeable because such a system needs to import energy and matter and export waste in order to keep from reaching equilibrium, since equilibrium = death with no possibility of resurrection. Finally, LUCA could produce offspring, through some version of a cell division process. The amazing conclusion is that every cell in your body (and every cell in every organism on the planet) has an uninterrupted connection to LUCA.

So what are the non-equilibrium reactions within LUCA and other organisms doing? Building up (synthesizing) and degrading various molecules, including proteins, nucleic acids, lipids, carbohydrates and such – the components needed to maintain the membrane barrier while importing materials so that the cell can adapt, move, grow and divide. This non-equilibrium reaction network has been passed from parent to offspring cells, going back to LUCA. A new cell does not “start up” these reactions; they run continuously throughout the processes of growth and cell division. While fragile, these reaction systems have been running uninterruptedly for billions of years.

There is a second system, more or less fully formed, present in and inherited from LUCA: the DNA-based genetic information storage and retrieval system. The cell’s DNA (its genotype) encodes the “operating system” of the cell. The genotype interacts with and shapes the cell’s reaction systems to produce phenotypes, what the organism looks like and how it behaves, that is, how it reacts to and interacts with the rest of the world.  Because DNA is thermodynamically unstable, the information it contains, encoded in the sequences of nucleotides within it and read out by the reaction systems, can be altered – it can change (mutate) in response to environmental chemicals, radiation, and other processes, such as errors that occur when DNA is replicated. Once mutated, the change is stable; it becomes part of the genotype.

The mutability of DNA could be seen as a design flaw; you would not want the information in a computer file to be randomly altered over time or when copied. In living systems, however, the mutability of DNA is a feature: together with their effects on a cell’s reproductive success, mutations lead to evolutionary change.  Over time, these processes convert the noise of mutation into evolutionary adaptations and the diversification of life.

Organisms rarely exist in isolation. Our conceptual picture of LUCA is not complete until we include social interactions (background: aggregative and clonal metazoans). Cells (organisms) interact with one another in complex ways, whether as individuals within a microbial community, as cells within a multicellular organism, or in the context of predator-prey, host-pathogen and symbiotic interactions. These social processes drive a range of biological behaviors, including what, at the individual cell level, can be seen as cooperative and self-sacrificing. The result is the production of even more complex biological structures, from microbial biofilms to pangolins and human beings, and complex societies. The breakdown of such interactions, whether in response to pathogens, environmental insult, mutations, politicians’ narcissistic behaviors, or the madness of crowds, underlies a wide range of aberrant and pathogenic outcomes – after all, cancer is based on the anti-social behavior of tumor cells.

The devil is in the details – from the conceptual to the practical: What a biologist or bioengineer rapidly discovers when called upon to fix the effects of a mutation, defeat a pathogen, or repair a damaged organ is that biological systems are mechanistically more complex than originally thought, and are by no means intelligently designed. There are a number of sources for this biological complexity. First, and most obviously, modern cells (as well as LUCA) are not intelligently designed systems – they are the product of evolutionary processes, through which noise is captured in useful forms. These systems emerge, rather than being imposed from outside (as is the case with humanly designed objects). Second, within the cell there is a high concentration of molecules that interact with one another, often in unexpected ways.  As examples of molecular interactions that my lab has worked on: the protein β-catenin, originally identified as playing a role in cell adhesion and cytoskeletal organization, has a second role as a regulator of gene expression (link); the protein Chibby, a component of the basal body of cilia (a propeller-like molecular machine involved in moving fluids), has a second role as an inhibitor of β-catenin’s gene regulatory activity (link); while centrin-2, another basal body component, plays a role in the regulation of DNA repair and gene expression (link).  These are interactions that have emerged during the process of evolution – they work, so they are retained.

More evidence of the complexity of biological systems comes from studies that examined the molecular targets of specific anti-cancer drugs (see Lowe 2019. Your Cancer Targets May Not Be Real).  The authors of these studies used the CRISPR-Cas9 system to knock out the gene encoding a drug’s purported target; they found that the drug continued to function (see Lin et al., 2019).  At the same time, a related study raises a note of caution.  Smits et al (2019) examined the effects of what were expected to be CRISPR-Cas9-induced “loss of function” mutations. They found expression of the (mutated) targeted gene, either through the use of alternative promoters (RNA synthesis start sites) or alternative translation start sites. The result was mutant polypeptides that retained some degree of wild-type activity.  Finally, a phenomenon with some resemblance to these effects is found with mutations that induce what is known as nonsense-mediated decay.  A protection against the synthesis of aberrant (toxic) mutant polypeptides, one effect of nonsense-mediated decay is the degradation of the mutant RNA.  As described by Wilkinson (2019. Genetic paradox explained by nonsense), the resulting RNA fragments can be transported back into the nucleus, where they interact with proteins involved in the regulation of gene expression, leading to the expression of genes related to the originally mutated gene. The expression of these related genes can modify the phenotype of the original mutation.

Biological systems are further complicated by the fact that the folding of polypeptides and the assembly of proteins (background: polypeptides and proteins) is mediated by a network of chaperone proteins that act to facilitate correct, and suppress incorrect, folding, interactions, and assembly of proteins. This chaperone network helps explain the ability of cells to tolerate a range of genetic variation; it renders cells more adaptive and “non-fragile”. Some chaperones are constitutively expressed and inherited when cells divide; the synthesis of others is induced in response to environmental stresses, such as increased temperatures (heat shock). The result is that, in some cases, the phenotypic effects of a mutation in a target protein may not be primarily due to the absence of the mutated protein, but rather to secondary effects, effects that can be significantly ameliorated by the expression of molecular chaperones (discussed in Klymkowsky. 2019 Filaments and phenotypes).

The expression of chaperones, along with other genetic factors, complicates our understanding of what a particular gene product does, or how variations (polymorphisms) in a gene can influence human health.  This is one reason why genetic background effects are important when drawing conclusions about the health (or phenotypic) effects of inheriting a particular allele (Schrodi et al., 2014. Genetic-based prediction of disease traits: prediction is very difficult, especially about the future).

As one more, but certainly not the last, complexity, there is the phenomenon by which “normal” cells interact with cells that are discordant with respect to some behavior (Di Gregorio et al 2016).1  These cells, termed “fit and unfit” or “winners and losers” (clearly socially inappropriate and unfortunate terms), interact in unexpected ways. The eccentricity of these cells can be due to various stochastic processes, including monoallelic expression (Chess, 2016), that lead to clones that behave differently (background: Biology education in the light of single cell/molecule studies).  Akieda et al (2019) describe the presence of cells that respond inappropriately to a morphogen gradient during embryonic development. These eccentric cells, “out of step” with their neighbors, are induced to die. Experimentally blocking their execution leads to defects in subsequent development.  Similar competitive effects are described by Ellis et al (2019. Distinct modes of cell competition shape mammalian tissue morphogenesis). That said, not all eccentric behaviors lead to cell death.  In some cases the effect is more like ostracism: cells responding inappropriately migrate to a more hospitable region (Xiong et al., 2013).

All of which is to emphasize that, while conceptually simple, biological systems, and their responses to mutations and other pathogenic insults, are remarkably complex and unpredictable – a byproduct of the unintelligent evolutionary processes that produced them.

  1. Adapted from an F1000 review recommendation.
Featured

Avoiding unrecognized racist implications arising from teaching genetics

Update: relevant article in the New York Times, December 2019

It is common to think of teaching as socially and politically beneficial, or at least benign, but Donovan et al. (2019. “Toward a more humane genetics education.” Science Education 103: 529-560) (1) raise the interesting possibility, supported by various forms of analysis and a thorough review of the literature, that conventional approaches to teaching genetics can exacerbate students’ racialist ideas. A focus on genetic diseases associated with particular population groups, say for example Tay-Sachs disease within Eastern European Jewish populations or sickle cell anemia within African populations, can result in more racialist and racist perspectives among students.

What is meant by racialist? Basically, it is an essentialist perspective: a person is seen as an exemplar of the essence of a group, and all members of a particular group “carry” that essence, an essence that defines them as different and distinct from members of other groups. Such an essence may reflect a culture or, in our more genetical age, a genome, that is, the versions of the genes a person possesses. In a sense, their essence is treated as more real than their individuality, an idea that contradicts the core reality of biological systems, as outlined in works by Mayr (2,3) – a mistake he termed typological thinking.

Donovan et al. go on to present evidence that exposing students to lessons that stress the genomic similarities between humans can help. That “any two humans share 99.9% of their DNA, which means that 0.1% of human DNA varies between individuals. Studies find that, on average, 4.3% of genetic variability in humans (4.3% of the 0.1% of the variable portion of human DNA) occurs between the continental populations commonly associated with US census racial groups (i.e., Africa, Asia, Pacific Islands, and The Americas, Europe). In contrast, 95.7% of human genetic variation (95.7% of the 0.1% of variable portion of human DNA) occurs between individuals within those same groups” (italics added). And that “there is more variability in skull shape, facial structure, and blood types within racially defined populations … than there is between them.” Lessons that emphasized the genomic similarities between people and the variation within groups appeared effective in reducing racialist ideation – they can help dispel racist beliefs while presenting the most scientifically accurate information available.
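
To make the arithmetic in that passage concrete, here is a minimal sketch (in Python) that converts the quoted percentages into rough base-pair counts. The ~3 billion base-pair genome size is a round-number assumption used only for illustration; the percentages are the ones quoted above from Donovan et al.

```python
# A minimal sketch of the "small fraction of a small fraction" arithmetic.
# genome_size_bp is an assumed round number (~3 billion bp), used only for scale.

genome_size_bp = 3_000_000_000      # assumed human genome length (illustrative)
variable_fraction = 0.001           # 0.1% of the genome varies between individuals
between_group_share = 0.043         # 4.3% of that variation lies between continental groups
within_group_share = 0.957          # 95.7% lies between individuals within groups

variable_bp = genome_size_bp * variable_fraction
print(f"Variable positions: ~{variable_bp:,.0f} bp")
print(f"Between-group variation: ~{variable_bp * between_group_share:,.0f} bp "
      f"({variable_fraction * between_group_share:.4%} of the genome)")
print(f"Within-group variation: ~{variable_bp * within_group_share:,.0f} bp "
      f"({variable_fraction * within_group_share:.4%} of the genome)")
```

Run as written, this puts the between-group differences at on the order of a hundred thousand positions out of roughly three billion, one way of making tangible how small a fraction of a small fraction is involved.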

This is of particular importance given the dangers of genetic essentialism, that is, the idea that we are our genomes and that our genomes determine who (and what) we are, a pernicious ideology that even the co-discoverer of DNA’s structure, James Watson, has fallen prey to. One troubling aspect of such conclusions is illustrated in the critique of a recent genomic analysis of educational attainment and cognitive performance by John Warner (4).

An interesting aspect of this work is that it raises the question of where, within a curriculum, genetics should go. What are the most important aspects of the complex molecular-level interaction networks connecting genotype with phenotype that need to be included in order to flesh out the overly simplified Mendelian view (pure dominant and recessive alleles, monogenic traits, and unlinked genes) often presented? This is a point of particular relevance given the growing complexity of what genes are and how they act (5,6). Perhaps the serious consideration of genetic systems would be better left for later in a curriculum. At the very least, this work points out the molecular and genomic contexts that should be included so as to minimize inadvertent support for racialist predilections and predispositions.

modified from F1000 post

References

  1. Donovan, B. M., R. Semmens, P. Keck, E. Brimhall, K. Busch, M. Weindling, A. Duncan, M. Stuhlsatz, Z. B. Bracey and M. Bloom (2019). “Toward a more humane genetics education: Learning about the social and quantitative complexities of human genetic variation research could reduce racial bias in adolescent and adult populations.” Science Education 103(3): 529-560.
  2. Mayr (1985) The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Belknap Press of Harvard University Press ISBN: 9780674364462
  3. Mayr (1994) Typological versus population thinking. In: Conceptual issues in evolutionary biology. MIT Press, Bradford Books, 157-160. Sober E (ed)
  4. Why we shouldn’t embrace the genetics of education. Warner J. Inside Higher Ed blog, July 26, 2018. Available online (accessed Aug 22, 2019)
  5. Genes – way weirder than you thought. Bioliteracy blog, Jul 09 2018
  6. The evolving definition of the term “gene”. Portin & Wilkins. 2017 Genetics. 205:1353-1364
Featured

Latest & past (PLoS Sci-Ed)

Most recent post:  Avoiding unrecognized racist implications arising from teaching genetics

Please note: given the move from PLoS, some of the links in the posts may be broken; minor editing is in process.  All posts are by Mike Klymkowsky unless otherwise noted.

Featured

Humanized mice & porcinized people

[image: mouse and pig]

Updates: 12 January 2022

7 December 2020: US FDA declares genetically modified pork ‘safe to eat’

A practical benefit, from a scientific and medical perspective, of the evolutionary unity of life (link) is the set of molecular and cellular similarities between different types of organisms. Even though humans and bacteria diverged more than 2 billion years ago (give or take), the molecular-level conservation of key systems makes it possible for human insulin to be synthesized in and secreted by bacteria, and for pig-derived heart valves to be used to replace defective human heart valves (see link). Similarly, while mice, pigs, and people are clearly different from one another in important ways, they have, essentially, all of the same body parts. Such underlying similarities raise interesting experimental and therapeutic possibilities.

A (now) classic way to study the phenotypic effects of human-specific versions of genes is to introduce these changes into a model organism, such as a mouse (for a review of human brain-specific genes – see link).  An example of such a study involves the gene that encodes the protein foxp2, a protein involved in the regulation of gene expression (a transcription factor). The human foxp2 protein differs from the foxp2 protein of other primates at two positions; these two amino acid changes alter the activity of the human protein, that is, the ensemble of genes that it regulates. That foxp2 has an important role in humans was revealed through studies of individuals in a family that displayed a severe language disorder linked to a mutation that disrupts the function of the foxp2 protein. Individuals carrying this mutant foxp2 allele display speech apraxia, a “severe impairment in the selection and sequencing of fine oral and facial movements, the ability to break up words into their constituent phonemes, and the production and comprehension of word inflections and syntax” (cited in Bae et al, 2015).  Male mice that carry this foxp2 mutation display changes in the “song” that they sing to female mice (1), while mice carrying a humanized form of foxp2 display changes in “dopamine levels, dendrite morphology, gene expression and synaptic plasticity” in a subset of CNS neurons (2).  While there are many differences between mice and humans, such studies suggest that changes in foxp2 played a role in human evolution, and in human speech in particular.

Another way to study the role of human genes using the mouse as a model system is to generate what are known as chimeras, named after the creature in Greek mythology composed of parts of multiple organisms.  A couple of years ago, Goldman and colleagues (3) reported that human glial progenitor cells could, when introduced into immune-compromised mice (to circumvent tissue rejection), displace the mouse’s own glia, replacing them with human glial cells. Glial cells are the major non-neuronal component of the central nervous system. Once thought of as passive “support” cells, it is now clear that the two major types of glia, known as astrocytes and oligodendrocytes, play a number of important roles in neural functioning [back track post].  In their early studies, they found that the neurological defects associated with the shaker mutation, a mutation that disrupts the normal behavior of oligodendrocytes, could be rescued by the implantation of normal human glial progenitor cells (hGPCs) (4).  Such studies confirmed what was already known, that the shaker mutation disrupts the normal function of myelin, the insulating structure around axons that dramatically speeds the rate at which neuronal signals (action potentials) move down the axon and activate the links between neurons (synapses). In the central nervous system, myelin is produced by oligodendrocytes as they ensheath neuronal axons.  Human oligodendrocytes derived from hGPCs displaced the mouse’s mutation-carrying oligodendrocytes and rescued the shaker mouse’s mutation-associated neurological defect.

Subsequently, Goldman and associates used a variant of this approach to introduce hGPCs (derived from human embryonic stem cells) carrying either a normal or a mutant version of the Huntingtin protein, a protein associated with the severe neural disease Huntington’s chorea (OMIM: 143100) (5).  Their studies strongly support a model that locates the defects associated with human Huntington’s disease to defects in glia.  This same research group has generated hGPCs from patient-derived, induced pluripotent stem cells (patient-derived hiPSCs). In this case, the patients had been diagnosed with childhood-onset schizophrenia (SCZ) [link] (6).  Skin biopsies were taken from both normal children and children diagnosed with SCZ; fibroblasts were isolated and reprogrammed to form human iPSCs. These iPSCs were treated so that they formed hGPCs, which were then injected into mice to generate chimeric (human glial/mouse neuronal) animals. The authors reported systematic differences in the effects of control and SCZ-derived hGPCs; “SCZ glial mice showed reduced prepulse inhibition and abnormal behavior, including excessive anxiety, antisocial traits, and disturbed sleep”, a result that suggests that defects in glial behavior underlie some aspects of the human SCZ phenotype.

The use of human glia chimeric mice provides a powerful research tool for examining the molecular and cellular bases of a subset of human neurological disorders.  Does it raise the question of making mice more human?  Not for me, but perhaps I do not appreciate the more subtle philosophical and ethical issues involved. The mice are still clearly mice: most of their nervous systems are composed of mouse cells, and the overall morphology, size, composition, and organization of their central nervous systems are mouse-derived and mouse-like. The situation becomes rather more complex, and potentially more therapeutically useful, when one talks about generating different types of chimeric animals or using newly developed genetic engineering tools (the CRISPR-Cas9 system found in prokaryotes) that greatly simplify and improve the specificity of the targeted manipulation of specific genes (link).  In these studies the animal of choice is not the mouse but the pig – which, because of its larger size, produces organs for transplantation that are similar in size to the organs of people (see link).  While similar in size, there are two issues that complicate pig to human organ transplantation: first, rejection of the foreign tissue by the human immune system, and second, the possibility that transplantation of porcine organs will lead to infection of the human recipient with porcine retroviruses.

The issue of rejection (pig into human), always a serious problem, is further exacerbated by the presence in pigs of a gene encoding the enzyme α-1,3 galactosyl transferase (GGTA1). GGTA1 catalyzes the addition of the gal-epitope to a number of cell surface proteins. The gal-epitope is “expressed on the tissues of all mammals except humans and subhuman primates, which have antibodies against the epitope” (7). The result is that pig organs provoke an extremely strong immune (rejection) response in humans.  The obvious technical fix to this (and related problems) is to remove the gal-epitope from pig cells by deleting the gene encoding GGTA1 (see 8). It is worth noting that “organs from genetically engineered animals have enjoyed markedly improved survivals in non-human primates” (see Sachs & Gall, 2009).

The second obstacle to pig → human transplantation is the presence of retroviruses within the pig genome.  All vertebrate genomes, including those of humans, contain many inserted retroviruses; roughly half of the human genome is derived from mobile genetic elements, including endogenous retroviruses (an example of unintelligent design if ever there was one). Most of these endogenous retroviruses are “under control” and are normally benign (see 9). The concern, however, is that the retroviruses present in pig cells could be activated when introduced into humans. To remove (or minimize) this possibility, Niu et al set out to use the CRISPR-Cas9 system to delete these porcine endogenous retroviral sequences (PERVs) from the pig genome; they appear to have succeeded, generating a number of genetically modified pigs without PERVs (see 10).  The hope is that organs generated from PERV-free pigs in which antigen-generating genes, such as the one encoding α-1,3 galactosyl transferase, have also been removed or inactivated, together with more sophisticated inhibitors of tissue rejection, will lead to an essentially unlimited supply of pig organs that can be used for heart and other organ transplants (see 11), alleviating delays in transplantation, avoiding deaths among sick people, and undercutting the often brutal and criminal harvesting of organs carried out in some countries.

The final strategy being explored is to use genetically modified hosts and patient-derived iPSCs to generate fully patient-compatible human organs. To date, pilot studies have been carried out, apparently successfully, using rat embryos with mouse stem cells (see 12 and 13), with much more preliminary studies using pig embryos and human iPSCs (see 14).  The approach involves what are known as chimeric embryos.  In this case, host animals are genetically modified so that they cannot generate the organ of choice. Typically this is done by mutating a key gene that encodes a transcription factor directly involved in the formation of the organ; embryos missing a pancreas, kidneys, heart, or eyes can be generated.  In an embryo that cannot make one of these organs, which can be a lethal defect, the introduction of stem cells from an animal that can form the organ can lead to the formation of an organ composed primarily of cells derived from the transplanted (human) cells.

At this point the strategy appears to work reasonably well for mouse-rat chimeras, which are much more closely related, evolutionarily, than are humans and pigs. Early studies on pig-human chimeras appear to be dramatically less efficient. Jun Wu has been reported as saying of human-pig chimeras that “we estimate [each had] about one in 100,000 human cells” (see 15), with the rest being pig cells.  The bottom line appears to be that there are many technical hurdles to overcome before this method of developing patient-compatible human organs becomes feasible.  Closer to reality are PERV-free, gal-antigen-free, pig-derived, human-compatible organs. The reception of such life-saving organs by the general public, not to mention by religious and philosophical groups that reject the consumption of animals in general, or of pigs in particular, remains to be seen.

figures reinserted & minor edits 23 October 2020 – new link 17 December 2020.
references cited

  1. A Foxp2 Mutation Implicated in Human Speech Deficits Alters Sequencing of Ultrasonic Vocalizations in Adult Male Mice.
  2. A Humanized Version of Foxp2 Affects Cortico-Basal Ganglia Circuits in Mice
  3. Modeling cognition and disease using human glial chimeric mice.
  4. Human iPSC-derived oligodendrocyte progenitor cells can myelinate and rescue a mouse model of congenital hypomyelination.
  5. Human glia can both induce and rescue aspects of disease phenotype in Huntington disease
  6. Human iPSC Glial Mouse Chimeras Reveal Glial Contributions to Schizophrenia.
  7.  The potential advantages of transplanting organs from pig to man: A transplant Surgeon’s view
  8. Sachs and Gall. 2009. Genetic manipulation in pigs; and Fisher et al., 2016. Efficient production of multi-modified pigs for xenotransplantation by ‘combineering’, gene stacking and gene editing
  9. Hurst & Magiorkinis. 2017. Epigenetic Control of Human Endogenous Retrovirus Expression: Focus on Regulation of Long-Terminal Repeats (LTRs)
  10. Niu et al., 2017. Inactivation of porcine endogenous retrovirus in pigs using CRISPR-Cas9
  11. Zhang  2017. Genetically Engineering Pigs to Grow Organs for People
  12. Kobayashi et al., 2010. Generation of rat pancreas in mouse by interspecific blastocyst injection of pluripotent stem cells.
  13. Kobayashi et al., 2015. Targeted organ generation using Mixl1-inducible mouse pluripotent stem cells in blastocyst complementation.
  14. Wu et al., 2017. Interspecies Chimerism with Mammalian Pluripotent Stem Cells
  15. Human-Pig Hybrid Created in the Lab—Here Are the Facts

Misinformation in and about science.

originally published as https://facultyopinions.com/article/739916951 – July 2021

There have been many calls for improved “scientific literacy”. Scientific literacy has been defined in a number of, often ambiguous, ways (see National Academies of Sciences, Engineering, and Medicine, 2016 {1}). According to Krajcik & Sutherland (2010) {2}, it is “the understanding of science content and scientific practices and the ability to use that knowledge”, which implies “the ability to critique the quality of evidence or validity of conclusions about science in various media, including newspapers, magazines, television, and the Internet”. But what types of critiques are we talking about, and how often are this ability to critique, and the scientific knowledge it rests on, explicitly emphasized in the courses non-science (or science) students take? As an example, highlighted by Sabine Hossenfelder (2020) {3}: are students introduced to the higher-order reasoning about, and understanding of, the scientific enterprise needed to dismiss a belief in a flat (or a ~6000 year old) Earth?

While the sources of scientific illiteracy are often ascribed to social media, religious beliefs, or economically or politically motivated distortions, West and Bergstrom point out how scientists and the scientific establishment (public relations departments and the occasional science writer) also play a role. They identify problems arising from the fact that the scientific enterprise (and the people who work within it) operates within “an attention economy” and must “compete for eyeballs just as journalists do.” The authors review the factors that contribute to misinformation within the scientific literature and its media ramifications, including the contribution of “predatory publishers”, and call for “better ways of detecting untrustworthy publishers.” At the same time, there are ingrained features of the scientific enterprise that serve to distort the relevance of published studies; these include not explicitly identifying the organism in which studies were carried out, and so obscuring the possibility that they might not be relevant to humans (see Kolata, 2013 {4}). There are also systemic biases within the research community. Consider the observation, characterized by Pandey et al. (2014) {5}, that studies of genes expressed in the nervous system are skewed: the “top 5% of genes absorb 70% of the relevant literature” while “approximately 20% of genes have essentially no neuroscience literature”. What appears to be the “major distinguishing characteristic between these sets of genes is date of discovery, early discovery being associated with greater research momentum—a genomic bandwagon effect”, a version of the “Matthew effect” described by Merton (1968) {6}. In the context of the scientific community, various forms of visibility (including pedigree and publicity) are in play in funding decisions and career advancement. Not pointed out explicitly by West and Bergstrom is the impact of disciplinary experts who pontificate outside of their areas of expertise and speculate beyond what can be observed or rejected experimentally; speculations on the existence of non-observable multiverses, the ubiquity of consciousness (Tononi & Koch, 2015 {7}), and the rejection of experimental tests as a necessary criterion for scientific speculation (see Loeb, 2018 {8}) spring to mind.
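
The “bandwagon” dynamic that Pandey et al. and Merton describe can be illustrated with a toy “rich-get-richer” (preferential attachment) simulation. The sketch below is only illustrative; the gene and paper counts are arbitrary choices, not values taken from either study.

```python
# A toy preferential-attachment simulation of a "genomic bandwagon":
# each new paper picks a gene with probability proportional to that gene's
# existing literature, so early leads compound over time.
import random

random.seed(0)
n_genes = 1_000
n_papers = 50_000
papers_per_gene = [1] * n_genes          # every gene starts with one "paper"

for _ in range(n_papers):
    gene = random.choices(range(n_genes), weights=papers_per_gene, k=1)[0]
    papers_per_gene[gene] += 1

papers_per_gene.sort(reverse=True)
top_share = sum(papers_per_gene[: n_genes // 20]) / sum(papers_per_gene)
print(f"Top 5% of genes account for {top_share:.0%} of the simulated literature")
```

In this toy version the top 5% of genes typically end up with several times their “fair” 5% share of the simulated literature; the empirical skew Pandey et al. report (70%) is more extreme, but the compounding logic, in which early discovery begets further attention, is the same.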

Many educational institutions demand that non-science students take introductory courses in one or more sciences in the name of cultivating “scientific literacy”. This is a policy that seems to me to be tragically misguided, and perhaps based more on institutional economics than on student learning outcomes. Instead, a course on “how science works and how it can be distorted” would be more likely to move students closer to the ability to “critique the quality of evidence or validity of conclusions about science”. Such a course could well be based on an extended consideration of the West and Bergstrom article, together with their recently published trade book “Calling bullshit: the art of skepticism in a data-driven world” (Bergstrom and West, 2021 {9}), which outlines many of the ways that information can be distorted. Courses that take this approach to developing a skeptical (and realistic) understanding of how the sciences work are mentioned, although the measures of learning outcomes used to assess their efficacy are not described.

literature cited

  1. Science literacy: concepts, contexts, and consequences. Committee on Science Literacy and Public Perception of Science, Board on Science Education, Division of Behavioral and Social Sciences and Education, National Academies of Sciences, Engineering, and Medicine. 2016 Oct 14. PMID: 27854404
  2. Supporting students in developing literacy in science. Krajcik JS, Sutherland LM. Science. 2010 Apr 23; 328(5977): 456-459. PMID: 20413490
  3. Flat Earth “Science”: Wrong, but not Stupid. Hossenfelder S. BackRe(Action) blog, 2020, Aug 22 (accessed Jul 29, 2021)
  4. Mice fall short as test subjects for humans’ deadly ills. Kolata G. New York Times, 2013, Feb 11 (accessed Jul 29, 2021)
  5. Functionally enigmatic genes: a case study of the brain ignorome. Pandey AK, Lu L, Wang X, Homayouni R, Williams RW. PLoS ONE. 2014; 9(2): e88889. PMID: 24523945
  6. The Matthew Effect in Science: The reward and communication systems of science are considered. Merton RK. Science. 1968 Jan 5; 159: 56-63. PMID: 17737466
  7. Consciousness: here, there and everywhere? Tononi G, Koch C. Philos Trans R Soc Lond B Biol Sci. 2015 May 19; 370(1668). PMID: 25823865
  8. Theoretical Physics Is Pointless without Experimental Tests. Loeb A. Scientific American blog, 2018, Aug 10 [ Blog piece] (accessed Jul 29, 2021)
  9. Calling bullshit: the art of skepticism in a data-driven world. Bergstrom CT, West JD. Random House Trade Paperbacks, 2021. ISBN: 978-0141987057