High Level Diplomacy That We'll Never See

Kate over at Outside the Beltway has a unique analysis for solving Canada’s problems with handguns. She notes the futility of Prime Minister Martin asking Secretary Rice to have the Americans do something about gun smuggling. She also notes that the Prime Minister’s office is preparing a series of gun-control initiatives, but that a meeting with the Jamaican Prime Minister about restricting the supply of trigger pullers is unlikely to be unveiled:

No less than 47 of the Jamaican-linked gang were arrested and more than 1,325 charges were laid.

The gang was the longtime rival of the Crips, another organized-crime street gang with Jamaican background.

So, what happens the day after the police and local politicians congratulate themselves for a successful investigation and massive raids?

Why, on Friday — the very next day — there were five shootings, with three of them fatal.

[ . . . ]

So, what do we do in such a discouraging situation?

Well, you just can’t keep sitting back and coming up with excuses. Such as Ontario Liberal Premier Dalton McGuinty’s chant about “American guns on Canadian streets,” with the full backing of NDP socialist David Miller — known to many as Mayor Useless. And there’s Toronto rookie police Chief Bill Blair claiming that half the weapons used here by criminals are smuggled from the U.S.

However, on Thursday, U.S. Ambassador David Wilkins knocked those claims by noting that most of the guns coming from the U.S. are actually bought there by Canadians and smuggled back here. In other words, where are Canadian customs and other border-watching authorities?

The truth is that Canada continues to suffer from a longstanding policy of federal Liberal governments going back to 1965. That’s when the Pearson-Trudeau government loosened the Immigration Act to make it much easier for previously unqualified foreigners to enter the country and stay on as new citizens. The criminal elements came right along with them. And the Liberals got most of their votes.

The Liberals also weakened the criminal justice system. They got rid of capital punishment, provided early parole, built prisons that are more like country homes and introduced a Young Offenders Act that made youth crime a sick joke.

Former Toronto police chief Julian Fantino pushed for a mandatory 10-year sentence for anyone using a gun to commit a crime. But he ended up being pushed out by Mayor Miller, who prefers the soft, social-worker approach in handling criminals.

Free Will by Remote Control

Over on the Mises Economic Blog, Lucretius, a neurobiologist, has written a post rebutting the contention put forward by Joshua Greene and Jonathan Cohen in their paper For the law, neuroscience changes nothing and everything that:

New neuroscience will change the law, not by undermining its current assumptions, but by transforming people’s moral intuitions about free will and responsibility. This change in moral outlook will result not from the discovery of crucial new facts or clever new arguments, but from a new appreciation of old arguments, bolstered by vivid new illustrations provided by cognitive neuroscience. We foresee, and recommend, a shift away from punishment aimed at retribution in favour of a more progressive, consequentialist approach to the criminal law.

I’d encourage you to read the philosophical musings being entertained but also keep in mind the news breaking in Japan today in which Nippon Telegraph & Telephone Corp. researchers demonstrated a rudimentary ability to control volunteer subjects via remote control:

A special headset was placed on my cranium by my hosts during a recent demonstration at an NTT research center. It sent a very low voltage electric current from the back of my ears through my head — either from left to right or right to left, depending on which way the joystick on a remote-control was moved.

I found the experience unnerving and exhausting: I sought to step straight ahead but kept careening from side to side. Those alternating currents literally threw me off.

The technology is called galvanic vestibular stimulation — essentially, electricity messes with the delicate nerves inside the ear that help maintain balance.

I felt a mysterious, irresistible urge to start walking to the right whenever the researcher turned the switch to the right. I was convinced — mistakenly — that this was the only way to maintain my balance.

The phenomenon is painless but dramatic. Your feet start to move before you know it. I could even remote-control myself by taking the switch into my own hands.

There’s no proven-beyond-a-doubt explanation yet as to why people start veering when electricity hits their ear. But NTT researchers say they were able to make a person walk along a route in the shape of a giant pretzel using this technique.

It’s a mesmerizing sensation similar to being drunk or melting into sleep under the influence of anesthesia. But it’s more definitive, as though an invisible hand were reaching inside your brain.

[ . . . ]

If you’re determined to fight the suggestive orders from the electric currents by clinging to a fence or just lying on your back, you simply won’t move.

But from my experience, if the currents persist, you’d probably be persuaded to follow their orders. And I didn’t like that sensation. At all.

The article goes on to speculate about the commercialization of this technology, for both civilian and military markets. What struck me about this report was how the remote-control device was stimulating the senses and causing the brain to induce action: instead of free will being the agent of initiation, it was an external electrical current acting on the nerves and initiating the cascade of processes that followed.


Before the decimal points….

From The Genetic Basis of Evolutionary Change:

For many years population genetics was an immensely rich and powerful theory with virtually no suitable facts on which to operate. It was like a complex and exquisite machine, designed to process a raw material that no one had succeeded in mining. Occasionally some unusually clever or lucky prospector would come upon a natural outcrop of high-grade ore, and part of the machinery would be started up to prove to its backers that it really would work. But for the most part the machine was left to the engineers, forever tinkering, forever making improvements, in anticipation of the day when it would be called upon to carry out full production….

Some have said that in the first 5 years of the allozyme assay era more data on the genetic structure of populations was accumulated than in the previous 100 years of pre-molecular experiments and observations. Anyway, evolutionary genetics is still a work in progress, and I don’t have much time to comment right now, but I thought I’d point you to this paper in PLoS Genetics, The Evolutionary Value of Recombination Is Constrained by Genome Modularity. Since biology is the science of exception-riddled generalizations, take the paper with a grain of salt: what is applicable on the microorganismic level might not be applicable on the multicellular scale (e.g., whether selfing and asexuality are long-term fitness optimizing). Additionally, I suggest you check out the most recent issue of the Journal of Evolutionary Biology; it is all about adaptive dynamics.

Related: Through the Rugged Roads of Gene Land.


Armand on Human Diversity

Armand Leroi has a nice fluffy piece, On Human Diversity, in The Scientist. Leroi is one to keep an eye on because he is in John Brockman’s stable, and his creatures tend to become public intellectuals rather quickly. One thing though, why is a developmental biologist pushing this? Neil Risch should be stepping up! But I have a sneaking suspicion that the devo will always win over evo in a war of words since the devo people can trot out 10,000 definitions to describe the progression of a zebrafish eye while the evo people can only jot down some befuddling equations. Hat tip to Jay.

Update: Go read this post from Future Pundit, Genetic Analysis Shows Signs Of Selective Pressure In Human Evolution. By the way, doesn’t the researcher look like a baby? Of course, Brad DeLong looks like a baby too.

Update II: Also, check out Derb’s piece in NR, The Specter of Difference.

Evolution for the humanist

Over the past week I’ve been sampling chapters out of Mark Ridley’s Oxford Readers anthology, Evolution. I can’t recommend this book enough! It runs the gamut from historically oriented essays dating from the late 19th century all the way to cutting-edge papers from the past 10 years. Ridley manages to balance accessibility to the general audience with rigor and relevance that would appeal to specialists. In my opinion, general-interest science books geared toward the lay audience are often too skewed toward biographical minutiae as opposed to the ideas which are the ends of science. Many popularizations of evolution don’t have any basic math which succinctly generalizes and summarizes the verbal concepts being exposited.1 By basic math I mean some simple algebra; evolutionary biology isn’t particle physics, and you don’t need to get into diffusion equations to model how alleles spread through a population at the most elementary and approximate level. In Evolution there is enough math to whet the appetite of those who wish to seek out more technical treatments of the topics surveyed; instead of redigesting the science prior to presenting it for your consumption, Ridley samples a small portion intact. I’m definitely going to check out the other Oxford Readers books in the hopes that the quality wasn’t just due to Ridley (I’ll start with the Classical Philosophy anthology, since I know a bit about the topic and so can judge whether there is a quality drop-off).

1 – Of course, general audience books on “evolution” almost always skew toward macroevolution because it is a topic with more charisma than microevolutionary dynamics, which, being derived from population genetics, demand some math for genuine internalization of the concepts. But even in macroevolution mathematical models are now cropping up; see Evolutionary Dynamics.

Speaking of (autistic) brains…

Dr. Manuel Casanova has done some interesting research on neuronal minicolumns and autism. From the summary of Abnormalities of Brain Circuitry (Minicolumns) in Autism:

[The] neocortex is formed early on during gestation by the supernumerary aggregation of modules. The smallest module capable of processing information is called a minicolumn. These modules or minicolumns are composed of both cells (neurons) and their projections which together form standardized circuits. Recent studies suggest that minicolumns may be abnormal in autism. More specifically, the brains of autistic patients have minicolumns that are smaller and more numerous than normal. Furthermore, the cells (neurons) within each minicolumn are reduced in size.

Since the metabolic efficiency of neuronal connectivity is a function of cell size, the presence of smaller neurons in the brains of autistic patients has a dramatic effect on the way that different parts of the brain interact with each other. Functions that require longer projections (e.g., language) may be impaired while shorter ones (e.g., mathematical manipulations) may be preserved or reinforced.


In autism, smaller minicolumns in brains that are, on average, larger than normal suggest an overall increase in their numbers….

What is the meaning of smaller minicolumns? First, this question has been approached from the standpoint of computer modeling by a group in Switzerland (Dr. Gustafson’s). Results suggest that smaller minicolumns tweak information processing in favor of the signal. By comparison, other conditions characterized by larger minicolumns (e.g., dyslexia) tweak information processing in favor of noise. This means that autistic individuals usually do well in processing stimuli that require discrimination while dyslexics are better at generalizing the salience of a particular stimulus.

Second, minicolumns are compartmentalized. Information is transmitted through the core of the minicolumn and is prevented from suffusing into neighboring units by surrounding inhibitory fibers. The inhibitory fibers act in analogous fashion to a shower curtain. When working properly and fully draping the bathtub the shower curtain prevents water from spilling to the floor. In autism minicolumnar size reduction involves primarily the peripheral compartment that provides the inhibitory surround.

This means that stimuli are no longer contained within specific minicolumns. Stimuli overflow to adjacent minicolumns thus providing an amplifier effect. This may explain the hypersensitivity of some autistic patients as well as their seizures….

Minicolumnar size is not the only abnormality observed in the neocortex of autistic patients. It appears that cells (neurons) within individual minicolumns are also reduced in size. This has important consequences in terms of connectivity. Long connections require the metabolic sustenance of large cell bodies. A neuron in the brain that connects all the way to the lower spinal cord requires a fairly large cell body. By way of contrast, a neuron whose projection remains within the cortex, contacting a closely adjacent cell, can manage its metabolic demands with a small cell body.

The small cell bodies in the brains of autistic patients favor information processing through short intra regional pathways, e.g., mathematical calculations, visual processing. Similarly, cognitive functions that require long inter regional connections would prove metabolically inefficient, e.g., language, face recognition, joint attention.

More on autistic brains: The essential difference and Male brain ~ more sons vs. female brain ~ more daughters?

More on minicolumns: The minicolumn hypothesis in neuroscience

Human brain development (evolutionary view)

I often get asked questions regarding the size and development of the human brain in an evolutionary context, and I have a hard time remembering the material on allometry and whatnot that I’ve read here and there.1 But The Journal of Human Evolution has a nice paper up, Human encephalization and developmental timing, which I’ve put up as a PDF in the GNXP forum files as “encephalization.”

1 – My primary source is The Symbolic Species, though I’ve read other stuff in the literature….


Extremism in defense of precision is no vice

Over a week ago I alluded to the mid-20th century debate between the Classical and Balance Schools of evolutionary genetics. I used this example specifically because I suspect many readers have an interest in evolutionary genetics and so would find it extremely illustrative of my general point. But in hindsight it was perhaps a somewhat obscure reference, so I will first clear up any remaining obscurity, and then indicate more explicitly why I focused on this “controversy” in relation to my belief in science in general.

If the Classical vs. Balance School debate is old hat for you, skip ahead. If not, here is the sketch:

  • The Classical School envisages that in a given population the vast majority of loci are fixed. That is, greater than 99% of alleles at a locus in a population are identical.1 Polymorphism, where more than 1% of alleles are non-modal, is an epiphenomenon, observed because transitions between two alternative alleles, where one is being driven to fixation by directional selection, take a number of generations (dependent on the strength of selection). In this school of thought the vast majority of mutations are purified via negative selection, functionally constraining loci toward a monomorphic condition, while a few mutations are selectively advantageous and steadily driven to fixation. One could imagine that over evolutionary time a locus would generally be monomorphic, but periodically, when a new advantageous mutation arose, there would be a transitory circumstance of polymorphism as the locus shifted from fixation on the “ancestral” allele toward fixation on the “derived” (mutant) allele. In the Classical School roughly 1% of loci might be polymorphic, which in reality would simply be a “snapshot” taken during one generation (so in the future a different 1% would be polymorphic as different mutations would be in the process of being driven to fixation).2
  • Roughly speaking, the Balance School assumed that the rate of polymorphism in the population would be higher than in the Classical School; for example, 10% of loci might be polymorphic. The central reason offered was hybrid vigor, or overdominance/heterozygote advantage. In this situation polymorphism would obviously have to be maintained in a population if heterozygotes were more fit than homozygotes, since the fixation of any allele would expunge heterozygosity from a population. If homozygous individuals were of equal fitness and heterozygotes were more fit than homozygotes, then the frequencies of the two alternative alleles, p and q, would each have to be 50% to maximize the number of heterozygous individuals (p² + 2pq + q² in Hardy-Weinberg Equilibrium). There are other ways that diversity could be maintained, for example frequency-dependent selection, where fitness is inversely proportional to frequency. Additionally, population substructure, migration and variation in fitness contingent upon local ecological conditions could all combine to maintain polymorphism.
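To make the two schools’ dynamics concrete, here is a minimal sketch of my own (the textbook one-locus viability-selection recurrences; the selection coefficients and generation counts are arbitrary illustrative choices, not figures from the historical debate):

```python
# Standard one-locus viability-selection recurrences; parameter values
# below are arbitrary illustrations.

def next_p_directional(p, s, h=0.5):
    """One generation with genotype fitnesses AA: 1+s, Aa: 1+hs, aa: 1
    (Classical School: a favored allele sweeps, polymorphism is transient)."""
    q = 1.0 - p
    w_bar = p * p * (1 + s) + 2 * p * q * (1 + h * s) + q * q
    return (p * p * (1 + s) + p * q * (1 + h * s)) / w_bar

def next_p_overdominant(p, s1, s2):
    """One generation with fitnesses AA: 1-s1, Aa: 1, aa: 1-s2
    (Balance School: heterozygote advantage keeps both alleles around)."""
    q = 1.0 - p
    w_bar = p * p * (1 - s1) + 2 * p * q + q * q * (1 - s2)
    return (p * p * (1 - s1) + p * q) / w_bar

p = 0.01
for _ in range(2000):
    p = next_p_directional(p, s=0.01)
print(p)  # close to 1: the new mutant is on its way to fixation

p = 0.01
for _ in range(2000):
    p = next_p_overdominant(p, s1=0.1, s2=0.1)
print(p)  # close to the balanced equilibrium p* = s2/(s1+s2) = 0.5
```

Run the directional case at intermediate generations and it passes through every frequency between 0 and 1: the “polymorphism” the Classical School expected is just this transit caught mid-flight.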

The short of it is that both schools were very wrong. In 1966 a famous paper by Lewontin and Hubby3 showed via molecular methods that levels of allozyme polymorphism were far higher than predicted by either school of evolutionary genetics. Enter the Neutral School.

  • The Neutral Theory basically contends that the rate of molecular evolution is dependent upon the rate of mutation.4 Roughly speaking, it agrees with the Classical and Balance Schools that many mutations are purified from the genome (the proportion depends on the amount of “junk” in an organism’s DNA; nonsynonymous mutations would obviously be purified), but it suggests that the vast majority of substitutions at a locus of one allele for another are driven by the neutral random walk of genetic drift rather than by positive directional selection or balancing selection. As in the Classical School, polymorphism assayed via molecular markers is an epiphenomenon: in reality a given polymorphic locus is transitioning between allele A and allele B, and you simply happen to be sampling between these two alternative fixed states (obviously random genetic drift would not result in fixation at the same pace as directional selection, so a far higher proportion of loci would be in the “transient” polymorphic state). This sort of random walk process is ubiquitous throughout the genome, and many molecular geneticists would contend that to a first approximation Neutral Theory holds as the best predictive model for the dynamics of evolution on the genomic scale. Neutral Theory has become the ideal null hypothesis against which selectionist models are tested.
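The neutralist bookkeeping can be checked with a toy simulation (my own sketch; the population size and trial count are arbitrary assumptions): a new neutral mutant fixes with probability 1/(2N), so with 2Nμ new mutants arising each generation the substitution rate works out to 2Nμ × 1/(2N) = μ, independent of population size.

```python
import random

# Toy Wright-Fisher drift: a single new neutral mutant either fixes or is
# lost under binomial resampling of 2N gene copies each generation.
# Theory predicts it fixes with probability 1/(2N).

def fixes(two_n, rng):
    """Follow one neutral allele (starting as 1 copy among 2N gene copies)
    until fixation (True) or loss (False)."""
    copies = 1
    while 0 < copies < two_n:
        p = copies / two_n
        copies = sum(1 for _ in range(two_n) if rng.random() < p)
    return copies == two_n

rng = random.Random(1)
two_n = 20                  # a tiny population, so fixations are frequent
trials = 20_000
rate = sum(fixes(two_n, rng) for _ in range(trials)) / trials
print(rate)  # close to 1/(2N) = 0.05
```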

Frankly, both the history and the science of this “debate” fascinate me, but I will exercise self-restraint and spare you further details. While Kimura was the doyen of Neutral Theory, one could consider Sewall Wright the central figure in the Balance School and R.A. Fisher the primary thinker who anchored the Classical School. Wright and Fisher were brilliant men; they seeded many of the theories from which the Modern Synthesis of Neo-Darwinism took sustenance. Theodosius Dobzhansky and Ernst Mayr relied upon Sewall Wright’s mathematical models to frame their experimental and observational data in generating their evolutionary genetic world views (and these two were in many ways gatekeepers for the American public to the halls of evolutionary biology, just as Julian Huxley was in Britain). Fisher’s ideas served as the foundation for the Oxford tradition of ecology and genetics, furthered by E.B. Ford in ecological genetics and W.D. Hamilton in social evolution and genetics, and popularized in a broad sense by Richard Dawkins.5 Nevertheless, both traditions were thoroughly blindsided by Lewontin and Hubby’s initial results from the allozyme assays.

In a sense, though, I think their mistakes were different in character. The Neutral Theory in many ways undercuts the theoretical basis for the contention that heterozygote advantage of some sort is the primary means for the maintenance of polymorphic diversity, an idea that is central to the basic thrust of the Balance School. If you assume the Balance School’s thesis as to the fitness advantage of heterozygosity and then plug in the number of segregating polymorphic loci that have been empirically verified (or even assume a conservative number extrapolated from the data), then you quickly get some ludicrous fitness differentials (dozens of orders of magnitude) between super-heterozygous individuals and the population mean (that is, individuals heterozygous and homozygous across many loci).6 Heterozygote advantage is an omnipresent idea in evolutionary biology, but many scholars have a hard time finding many indisputable empirical cases (sickle-cell anemia and MHC are the primary ones).
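A back-of-envelope version of that load argument (the locus counts and selection coefficient below are illustrative assumptions, not figures from the literature): at a balanced locus with homozygote fitness 1−s and heterozygote fitness 1, half of all individuals are homozygous at the p = q = 0.5 Hardy-Weinberg equilibrium, so mean fitness per locus is 0.5·1 + 0.5·(1−s) = 1 − s/2; multiply across independent loci and the differential explodes.

```python
# Segregational-load sketch: mean population fitness relative to a
# hypothetical individual heterozygous at every balanced locus.
# n_loci and s are illustrative choices.

def mean_vs_superhet(n_loci, s):
    """(1 - s/2)^n_loci: mean fitness relative to the all-heterozygote."""
    return (1 - s / 2) ** n_loci

for n in (100, 1000, 5000):
    print(n, mean_vs_superhet(n, s=0.02))  # collapses by orders of magnitude
```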

In contrast the misconception of the Classical School was founded on simple ignorance as to the true empirical realities of genomic architecture in the pre-DNA age. Remember, these ideas and distinctions were hashed out between Fisher, Wright and their respective acolytes in the 1930s, well before Watson and Crick had presented their theory regarding the structure and function of DNA. Richard Dawkins has offered in his many popularizations that the Classical School addresses a different subject than the concerns of Neutral Theory. The focus of selectionist
theorists in the tradition of Fisher has always been on the genetic architecture of loci which are functionally relevant: in other words, genes which influence phenotype and fitness. The fact that most of the genome of most organisms seems non-functional in the Central Dogma sense is irrelevant to selectionists; molecular evolutionary dynamics are simply outside the purview of scholars in the Classical School.7 Both Fisher and Hamilton witnessed the rise of molecular biology (especially Hamilton), but did not believe that it was particularly relevant to their concerns (see R.A. Fisher: Life of a Scientist and Narrow Roads of Gene Land: Volume I). Their basic model did not entail moving much beyond the fusion of Mendelian genetics and biometrical quantitative genetics which Fisher triggered with his 1918 paper, The Correlation between Relatives on the Supposition of Mendelian Inheritance.8 As detailed in The Darwin Wars, selectionists tend to use words like “gene” in a very specific and precise fashion, and much of the discourse between those who come out of the Oxford School and others influenced by Neutral Theory is a debate around semantics and ownership of a particular word (a word that has magical properties when it comes to public recognition). Dawkins and the selectionists tend to acknowledge this implicitly, though to my eye Dawkins has an annoying tendency to pretend as if the selectionists somehow won the debate, when it was more like a disagreement arising from the fact that the two camps spoke in sharply different dialects. The public simply tends to get confused and can’t be expected to follow along very easily (there are some, like Stephen Jay Gould, who did plainly disagree with Dawkins, but unlike real neutralists Gould was a verbalist and not a modeller, so confusion and miscommunication were the order of the day, not an unfortunate byproduct of the process).

The overall point is that evolutionary biology is still around, and evolutionary genetics is a vibrant field, even though the giants of the discipline were ignorant as to the details of how the genome of most complex organisms actually works. This ignorance led to the Classical School’s surprise at the modal non-selectionist implication of any given segment of an organism’s genome. The fact that synonymous point mutations result in the same amino acid was not something that R.A. Fisher or Sewall Wright would have been able to anticipate, since the basic atom of genetics, DNA, wasn’t part of their knowledge base. Even conceding neutrality, Tomoko Ohta’s Nearly Neutral Theory shows how deleterious alleles might often be fixed in populations, in contravention of both the Classical and Balance Schools’ (and, of course, classical Neutral Theory’s) faith in the power of purifying selection. Understanding the genome at its most basic base-pair level opens up a whole new field of implications and theoretical possibilities that were closed off to the traditional pre-DNA population geneticists.
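The nearly neutral point can be made quantitative with Kimura’s standard diffusion formula for the fixation probability of a new mutant; the population sizes and selection coefficient below are my own illustrative choices:

```python
import math

# Kimura's diffusion approximation for the fixation probability of a new
# mutant at initial frequency 1/(2N) with selection coefficient s.

def fixation_prob(N, s):
    """u = (1 - e^(-2s)) / (1 - e^(-4Ns)); reduces to 1/(2N) as s -> 0."""
    if s == 0:
        return 1 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

# A mildly deleterious mutation (s = -0.0001) fixes almost as readily as a
# neutral one when N is small, but is purged efficiently when N is large:
for N in (100, 1_000, 100_000):
    print(N, fixation_prob(N, -1e-4) / (1 / (2 * N)))
```

The printed ratio is the fixation probability relative to the neutral expectation: near 1 when 4N|s| is small (selection is effectively invisible to drift), and vanishingly small when 4N|s| is large.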

This survey shows that science does not always work via the selection of competing hypotheses; rather, sometimes the data compel the emergence of new theoretical models and render the old paradigms obsolete on a fundamental level. Molecular evolution is more fundamental than the phenotypic evolution that selectionists focus on, because evolution of the functional genome is a subset of evolution of the genome as a whole. Nevertheless, the pushing aside of older models as fundamental units of understanding does not mean that they are no longer useful. Both Sewall Wright and R.A. Fisher spent the 1920s outside of academia, focused on applied quantitative genetics. Agricultural genetics is surely benefiting from modern genomics, but that does not mean that older methods of breeding, premised on a quantitative genetics which goes little beyond the biometrical thesis of the early 20th century, are unnecessary or unprofitable. A better example is Newtonian Mechanics: as a theory of gravitation General Relativity has superseded it, but in the vast majority of situations Newtonian Mechanics is an exemplary approximation to reality. It is a banal observation that natural science is the progress of sequentially more precise, accurate and conceptually deeper theories about the world around us. R.A. Fisher’s idea of additive genetic variance as the parameter that undergirds evolution might be past its best days on a fundamental level, but the model still has great utility and explanatory power, and it is an accurate fit for an important subset of evolutionary phenomena. Similarly, Sewall Wright’s ideas regarding the ubiquity of balancing selection via overdominance might be empirically falsified, but his other ideas, coupled with this thesis as regards a rugged and irregular adaptive landscape, are only now being pushed beyond verbal metaphors via more advanced analytic and, importantly, computational techniques.
There may come a day when Neutral Theory and its molecular evolutionary spawn may seem less fundamental, and we might be able to establish more order and precision in our understanding of the variables that drive the rate of mutation.

But a major problem always crops up in the transition between the scientific discourse and the popular discourse. There are multiple interactions at play here: first, between scientists; second, between scientists and the public; and third, between individuals in the general public. Consider the debate around “gradualism.” Stephen Jay Gould and Richard Dawkins spent much of the 1980s duking it out over this particular topic. Its importance as an organizing principle in terms of paradigm affiliation can be seen in PZ Myers’ comment about evo-devo guru Sean Carroll’s book Endless Forms Most Beautiful, of which he says, “It also takes a very conservative view of evolutionary theory.” What exactly could Myers be talking about? If you read Carroll’s book I think it is clear that he doesn’t want to be interpreted as offering an opening to macromutationist thinking, the type that crops up in the works of the late Stephen Jay Gould, a personal hero to PZ Myers. Though I lean in Carroll’s direction as to the merits of the case, even I thought he was being a bit monomaniacal on this issue; but the key is that Carroll was addressing a lay readership, and he had clearly had bad experiences with his research being distorted when transmuted for public consumption. Words have different meanings in different contexts. What exactly does “gradual” mean? In The Blind Watchmaker Richard Dawkins dismisses punctuated equilibria as simply a subset of the standard model proposed by evolutionary traditionalists. I think in many ways Dawkins is correct, but a deeper problem is the use of terms like “gradual” across the chasm of opinion. What exactly is non-gradual on evolutionary timescales? To be precise, if you took a quantitative character (e.g., height) and plotted it against number of generations since time t, what distinguishes gradual change from non-gradual? Is there a particular first derivative that needs to be detected at some point in the function to cross the threshold of gradualism? Has anyone agreed on the exact measure? Taken outside of a scientific context, one can imagine how ludicrous and incomprehensible this sort of discussion can get. Since one can’t expect the public to know calculus, I think the easiest thing would be to publish gradualist vs. non-gradualist books with a large number of figures which plot trait vs. generations. I think scientifically the debate comes close to being worthless, but certainly controversy sells a lot of books, and scientists are humans with egos and bills to pay.
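To see how arbitrary an operational definition of “gradual” would be, here is a toy sketch of my own (the trait series and both thresholds are invented): the very same data reads as punctuated or as gradual depending entirely on the rate cutoff you pick.

```python
# First-difference test for "non-gradual" change in a trait-vs-generation
# series; the series and cutoffs are invented illustrations.

def rate_per_generation(trait):
    """Change in the trait each generation (discrete first derivative)."""
    return [b - a for a, b in zip(trait, trait[1:])]

def non_gradual_spans(trait, threshold):
    """Generations whose per-generation change exceeds the chosen cutoff."""
    return [g for g, d in enumerate(rate_per_generation(trait), start=1)
            if abs(d) > threshold]

# Slow drift for 50 generations, then a rapid shift, then stasis:
trait = [10 + 0.01 * g for g in range(50)] + \
        [10.5 + 0.5 * g for g in range(1, 5)] + [12.5] * 50

print(non_gradual_spans(trait, threshold=0.1))  # flags the rapid shift
print(non_gradual_spans(trait, threshold=0.6))  # same data, flags nothing
```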

Sometimes the confusion isn’t even a structural bias in the way verbal transmission occurs; rather, it is a conscious attempt at obfuscation. For example, David Berlinski recently attempted to imply that Motoo Kimura’s Neutral Theory casts doubt on the power of selection in driving evolutionary change. Berlinski claims to have worked in molecular biology laboratories when he was at Princeton (Ph.D., mathematics), so I can’t believe he thinks that Kimura’s ideas about neutral molecular evolution deny macroevolution. Creationists (or Design advocates, whatever they are called now) attempt to make a distinction between microevolution and macroevolution whenever it suits their case (i.e., “that’s not evolution, that’s microevolution! I don’t deny microevolution.”), but in this case they gloss over distinctions within the genome which would make neutrality more intelligible in the context of these two processes, because doing so serves their case of “debunking” Darwin. Similarly, though a whole cottage industry has arisen that destroys the “Nature vs. Nurture” dichotomy, those who “lean toward Nurture” (a common assertion across the political spectrum) never fail to construct the straw man in order to tear it down. In part it is politics, as it makes the rhetoric more devastating, but I suspect in part it is a property of how the human mind works, as it needs to think in terms of types instead of expectations, variances and deviations.

This finally gets me back to the ideas that I expressed in The True Believer revisited…. and I am a believer. Science is a special enterprise in terms of its character and the yield of its ideas in material terms. But it is still a human enterprise, and conventional cognitive, social and cultural biases are still at play. If you compare scientific culture, unified by journals, conferences and common rites of passage, to the culture of a particular religion, you see similarities. Unfortunately, specific religions are transient, always shifting in character; so if you take the analogy far enough, scientific culture itself is going to be subject to historical forces contingent upon the human condition. Intersect this with the empirical reality that genuine scientific culture, unlike religious culture in general, is not a human universal, and is an exceedingly rare event in the history of the human race, and you have grounds to worry…if you value science. Though few people (i.e., less than 10% of the population) value science for the sake of science, most people appreciate its importance in scaffolding the consumer society with gadgets, goodies and tools which allow us to sate our acquisitive passions and buffer us from the vicissitudes of nature. Stipulating the utility of science, I was expressing a world view in which one should acknowledge that scientists are humans, and so find ways to augment scientific culture’s functional robusticity in the midst of non-scientific culture.

Which takes me to religious analogies and organizations. I am generally skeptical of functionalist explanations in anthropology, which assume that groups have higher order properties of their own which allow them to survive no matter the details of human traits. Nevertheless, the Roman Catholic Church and world Jewry have survived as distinctive entities for the past 2,000 years.9 My thought was simply, first, “What lessons can we learn?” By “we” I mean those of us who rank knowledge acquisition and preservation higher in our priorities than is modal for humans. What emotional and psychological tendencies can we mimic to make our human natures a boon for our culture rather than an obstacle to its full realization? How can we maintain a sense of élan? Of self-worth? How do we encourage “conversions,” and discourage defection? My questions are all predicated on the thesis that science is a cultural enterprise. It is an enterprise with a particularly fruitful system, of which scientists themselves are aware. Recently I read a paper on gene duplication and its evolutionary relevance which explicitly used Popperian terminology in laying out the framework for falsifying its hypothesis. But clearly there was science before Popper! Ironically, my impression is that in the philosophy of science strict Popperism is a minority position, with the ascendancy of thinkers like Imre Lakatos, Paul Feyerabend and Thomas Kuhn. I suspect that the typical working scientist will have heard only of Kuhn among the three. I have pointed to the research of Daniel Kahneman and Amos Tversky because they found that scientists are guilty of the same logical and statistical fallacies as everyone else (and of course, it shocks you that medical doctors can’t grasp the implications of Bayesian probabilities, no?).
Scientists are almost always smart in terms of the g factor, but often that just means they do stupid things really fast and express stupid opinions in a less transparently facile fashion (this generalization applies to the high-g in general). What saves scientists is that sophistry is not the summum bonum of their enterprise, and the objects of their study are amenable to spare and prickly techniques which generate a signal far stronger than the normal noise of human bias and confusion. The objects of scientific study, the natural world and its manifest phenomena, are the saving graces, the guardian angels, of the humans who espouse science as their vocation and passion.
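The medical-doctors aside above refers to the classic base-rate problem from the Kahneman and Tversky literature. A minimal sketch of the arithmetic, with hypothetical numbers (the prevalence, sensitivity and false-positive rate below are illustrative, not from any particular study):

```python
# Base-rate neglect: compute P(disease | positive test) via Bayes' theorem.
# All numbers are hypothetical, chosen only to illustrate the effect.
prevalence = 0.01        # P(disease)
sensitivity = 0.90       # P(positive | disease)
false_positive = 0.09    # P(positive | no disease)

# Total probability of a positive test, then invert with Bayes' rule.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease:.3f}")  # ~0.09, not ~0.9
```

Most respondents, doctors included, answer something near the sensitivity (90%) rather than the correct posterior (under 10%), because the low prior gets ignored.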

These objects and phenomena which are the ultimate ends of scientific exploration are not a fixed constellation, and our understanding of them at the deepest level is almost never intuitive. We are like catfish at the pond’s bottom, gleaning the light on occasion when the murk clears and we are not busy rooting away in the mud, searching for tasty rotting things. This is one crucial way in which the ends of science differ radically from, to continue the analogy, those of the Catholic Church or the Jewish people. The latter cultures exalt a transcendent mystery which triggers and coopts cognitive modules embedded in the unconscious fast & hard regions of our brain. I have repeated multiple times Scott Atran’s report in his book In Gods We Trust that to a great extent religious beliefs are immune to conventional falsification, whether via logical argumentation or via data which contradicts the validity of central axioms. From a cognitive perspective I believe Intelligent Design always operates from the high ground, flanked by deep dark forests that resist the assaults of axe and fire. Only through persistent verbal suasion at its mildest, and elitist brow-beating at its extreme, can scientists win the argument against religious truth claims (and most often, to “win” means that religion simply changes its tune and claims victory, and everyone politely ignores this). Philosopher Daniel Dennett has been saying that Darwinism is the “universal acid” that will eat away at all ideas, including religious ones. For a variety of reasons I disagree with Dennett, but a primary one is that we already have a better candidate for a “universal acid,” and that is the capitalist system. Unlike science, capitalism engages our first order wants and appetites; it sates deep seated cognitive biases, and easily generates new ones. Science might be the hot girl who is going to make you wait 10 years for action, but capitalism is the decent looking chick who is open to being done front door and back after 10 minutes of chit-chat. How can you compete with that? Some of us are romantics and prefer a courtship with a beautiful girl who produces civilization to the short term “rational” play of slam-bam-thank-you-ma’am with a mid-range hottie, but this is not the modal response. Let me grant that my previous sentences were drenched in norms. So be it, I have higher order values, sue me. To paraphrase Hume, reason ought only to be the slave of the passions.

We are all sinners before an angry God when it comes to choosing between knowledge and epicurean desserts. And yet religion has, with mixed success, attempted to restrain our appetites in the service of a “higher good” for thousands of years. Why reinvent the wheel? Now, it is true that the ends of science are peculiar, but in some ways the analogy can be mapped with surprising fidelity. Consider the adherence to patently “false” beliefs that I described from religions.10 I plainly state that the vast majority of scientific hypotheses, and likely a good majority of the scientific theories currently regarded as plausible, are inaccurate or imprecise in their modeling of the world around us. But what exactly do I mean? Do I have access to God’s Book, where all the equations that model the world around us reside? I certainly don’t; I assume that operationally the world has order and pattern, and that induction justifies induction. I assume that the progressive refinement of theories, and the elevation of orders of abstraction, will continue, yield up fruitful models of the universe around us, and likely also result in applications which can serve our more conventional passions. I plainly state that Fisher and Wright were wrong. I plainly state that Lord Kelvin was a genius who was wrong. I plainly state that Isaac Newton was mental, and the scientist of his millennium. And yet I still believe, because what alternative is there in a demon-haunted world? The Church may be a whore, but what other hope for salvation is there in this world? Let us reform within the Church, I say, and not shatter its fundamental unity and utility. Science is a mishmash of contradictory, false, imprecise and incomplete theories, but the process continues; we aim to overcome our sin and attain a state of grace through the Church (read: culture). Let me state that I am the Hugh Hewitt of science on a fundamental level: it’s gotten us pretty far, trust it.
While Hewitt marshals trust in the service of a political party, I wish to marshal trust and any other cognitive bias and social system at our disposal in the service of a process and culture which serves as a necessary precondition for the modern human lifestyle.

A few days ago Michael noted that I am “in so many ways a hard-headed skeptic.” I think the above should make one cautious about that sort of appellation directed at me. In fact, to descend into the semantic mud, I would contend that this weblog is a record of my opposition to unadulterated skepticism, the Post Modernist heresy, the Pyrrhonian Skepticism of our time. There are certainly questions about which I am rather skeptical. The term “skeptic” is often applied to unbelievers in the God-hypothesis because we are dissenters from the human consensus on a question that the majority regards as ontologically significant. But in many ways I am an anti-skeptic: I am deeply sympathetic to positivist projects in many realms. My interest in history is only minimally humanistic in its motivations; rather, I wish to understand how things were, and how they might be. I believe that at some point in the future, when our AI gods descend from heaven, social science might actually justify the term science in substance as well as style. A rejection of skepticism is, ironically enough, not even contradictory with a position of atheism. George H. Smith, in Atheism: The Case Against God, makes a persistent argument that rationality, not skepticism, must be God’s judge and jury, since skepticism opens the door for theism and transcendent mystery. Writing in the 1970s, Smith had not seen the wave of Post Modernist skepticism sweeping through the intellectual commanding heights outside of the sciences, but he was oddly prescient on the terminus of the skeptic’s path, for some Christians now rejoice in the post-rationalist world. And yet too much rationality can be a bad thing; Smith is a case in point, as he was once an Objectivist, a movement whose faith in rationality became a faith in a rational faith and a cult of personality (“check your premises, except the Objectivist ones”).
Lord Kelvin’s case against an ancient age for the earth was eminently rational according to the physics of his day; the dissents of geologists and biologists meant nothing to a priest of the Queen of Sciences. One can find innumerable instances of such rational hubris in science. And dare I say it, too much empiricism can also be a bad thing! In The Geography of Thought Richard Nisbett makes the case that excessive adherence to common sense and pragmatic empiricism ultimately hampered Chinese exploration of the absurdities of the scientific world. Does “common sense” suggest to us that the earth is a sphere and that it revolves around the sun? The reality is that even scientists can balk at the absurdities of science; Einstein’s rejection of Quantum Mechanics is a prominent case.

So shall we leave it to the savants who can balance the scales of skepticism, rationalism and empiricism? No! No man knows enough to be able to comprehend the universe. Empirically the feelings of transcendence and mystery that confront us when we gaze upon the blue-black night are justified. The system of science, the culture of science, is the only method we have to truly extract reproducible and verifiable signal out of that cosmic noise.

And I suppose that’s it. I’ll leave with an old Unitarian saying, “the question is the answer,” and it is there I stand.

Addendum: I want to make a few things explicit, as I feel that I wasn’t
as clear as I should have been in the original text. In regards to the Balance School, I contend that the explanation for genetic variation implied by the model put forward by these theorists is incorrect; that is, the vast majority of variation within the genome is not due to balancing selection (overdominance, frequency dependent selection, multiniche polymorphism, oscillating environmental selection pressures, etc.). But that does not mean I reject that balancing selection is at work within populations, or that it is unimportant. I have read recent literature suggesting that lack of heterozygosity (i.e., inbreeding) can predict local population extinctions for many organisms. The level of polymorphism found at the MHC loci is a very important fact which I don’t discount; it is likely maintained by some sort of balancing selection, whether straightforward heterozygote advantage or long term frequency dependent selection (as W.D. Hamilton seemed to be proposing). What I am saying is, first, that the genetic variation within a population is generally modeled best by a neutral framework; a balancing selection angle is a further refinement, but it is not the primary causative factor behind the variation. Second, as I note above, simple mathematical extrapolation of even trivial levels of fitness advantage to heterozygotes across a large number of loci implies unrealistic fitness differentials; ergo, heterozygosity is probably a boon in only a very limited number of cases (the MHC, for example).
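The segregational load argument in that last sentence is easy to make concrete. Under symmetric overdominance (heterozygote fitness 1, each homozygote 1 − s) the equilibrium allele frequency is 0.5 and mean fitness per locus is 1 − s/2; multiplied across many loci, even a trivial s opens an absurd gap between mean population fitness and the ideal all-heterozygote genotype. A sketch with illustrative numbers (the s value and locus counts are my assumptions, not from Lewontin’s text):

```python
# Segregational load under ubiquitous symmetric overdominance.
# Per locus at equilibrium (p = 0.5): genotype frequencies 1/4, 1/2, 1/4
# with fitnesses 1-s, 1, 1-s, so mean fitness = 1 - s/2.
# Across L loci with multiplicative fitness: (1 - s/2)**L.
s = 0.01  # a "trivial" per-locus heterozygote advantage (illustrative)

for n_loci in (100, 1000, 5000):
    mean_fitness = (1 - s / 2) ** n_loci  # relative to the all-heterozygote ideal
    print(f"{n_loci:>5} overdominant loci: mean fitness = {mean_fitness:.2e}")
```

With a thousand such loci the average individual would have a fitness less than one percent of the perfect heterozygote’s, which is biologically unrealistic; hence overdominance can only be maintaining variation at a limited number of loci.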

A second point I want to flesh out is my exposition of the Neutral Theory’s prediction that most substitutions at a locus will be due to random genetic drift. I said substitutions very specifically because the neutralists do not hold that mutations are in general neutral; rather, they tend to agree with the Classical School that purifying selection purges the genome. What they hold is that, of those mutations which do get fixed, the vast majority are neutral. In contrast, the Classical School tended to assume that mutations which get fixed would be subject to positive selection. The neutralists do not deny that this occurs; they simply contend that positive selection is responsible for only a minority of fixation events. Since the time to fixation of new mutants under random genetic drift is usually far longer than when fixation is driven by selection, Neutral Theory naturally predicts far greater genomic variation, as the transitions between monomorphic states at a locus will last far longer. Finally, many organisms carry a great deal of “junk DNA,” introns, pseudogenes and the like; in these regions even most mutations might be neutral in regard to fitness, since noncoding sequences have no clear functional implication. Anyway, I think that’s about it….
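The transit-time claim can be checked with a toy Wright–Fisher simulation (all parameters below are assumed for illustration). A new neutral mutant should fix with probability ≈ 1/(2N) and, conditional on fixing, take roughly 4N generations to do so, while a positively selected mutant that escapes early loss sweeps through much faster:

```python
import numpy as np

# Toy Wright-Fisher model: one locus, 2N gene copies, binomial resampling
# each generation. Parameters are illustrative, not from the text.
rng = np.random.default_rng(1)
TWO_N = 200  # 2N gene copies (diploid N = 100)

def fixation_stats(s, reps):
    """Return (fraction of replicates fixing, mean generations to fixation)."""
    times = []
    for _ in range(reps):
        count, gen = 1, 0                     # a single new mutant copy
        while 0 < count < TWO_N:
            p = count / TWO_N
            p = p * (1 + s) / (1 + p * s)     # selection shift (no-op when s = 0)
            count = rng.binomial(TWO_N, p)    # genetic drift
            gen += 1
        if count == TWO_N:
            times.append(gen)
    return len(times) / reps, sum(times) / len(times)

p_fix, t_neutral = fixation_stats(s=0.0, reps=20_000)
_, t_selected = fixation_stats(s=0.1, reps=4_000)

print(f"neutral fixation prob ~{p_fix:.4f}  (theory 1/(2N) = {1 / TWO_N})")
print(f"neutral fixation time ~{t_neutral:.0f} gen  (theory ~4N = {2 * TWO_N})")
print(f"selected (s = 0.1) fixation time ~{t_selected:.0f} gen")
```

The neutral fixations take on the order of 4N generations, several times longer than the selected sweeps, which is exactly why a mostly-neutral fixation process leaves so much transient polymorphism segregating at any given moment.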

1 – I follow the stricter convention for the frequency of a fixed locus in part to illustrate more starkly the contrast between the two schools. Many would say that a 95% frequency of one allele is sufficient for a locus to be declared monomorphic.

2 – Again, I give the 1%-of-loci-polymorphic quantity to illustrate the difference between the two schools; this controversy played itself out in the pre-DNA age, so it was more theoretical than empirical.

3 – Am I the only one to wonder what happened to JL Hubby? I see no publications after 1975.

4 – Large populations have many more background mutations, but the chance that any one of them is fixed via random genetic drift is rather low. In small populations the number of new mutations is very low, but the chance of fixation via random genetic drift is very high. The probability of fixation of a new neutral mutant is 1/(2N), where N is the population size. If the mutation rate is μ, the number of new mutations per generation in a population is 2Nμ. Since (1/(2N)) × 2Nμ = μ, the rate at which new mutations are fixed is simply μ.
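The cancellation in this footnote, the rate of neutral substitution equaling μ regardless of population size, can be verified with exact arithmetic (the μ and N values below are assumed for illustration):

```python
from fractions import Fraction

# Rate of neutral substitution = (new mutants per generation) x P(fixation)
#                              = (2N * mu) * (1 / (2N)) = mu, for any N.
mu = Fraction(1, 100_000)  # per-copy, per-generation mutation rate (assumed)

for N in (100, 10_000, 1_000_000):
    new_mutants_per_gen = 2 * N * mu
    p_fix = Fraction(1, 2 * N)
    assert new_mutants_per_gen * p_fix == mu  # N cancels exactly

print("substitution rate equals mu, independent of N")
```

This independence from N is what makes the neutral substitution rate behave like a molecular clock.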

5 – J.M. Smith was also influenced by Fisher through his mentor J.B.S. Haldane, who shared many of Fisher’s theoretical biases and defended their tradition in population genetics against Ernst Mayr’s derision toward “Bean Bag Genetics.”

6 – Please see Richard Lewontin’s The Genetic Basis of Evolutionary Change for the mathematical exposition of why ubiquitous genomic overdominance seems to lead to absurd implications for mean population fitness relative to the most perfect of heterozygotes.

7 – In the interests of brevity I will dodge the details of whether non-coding sequences of various kinds are as neutral as they are assumed to be.

8 – With the advances in molecular marker technology I think that Fisher’s and the younger Hamilton’s conceptions do not hold today; even though molecular biology is not the usual end for many evolutionary biologists who dwell in the realm of quantitative and theoretical genetics, it is a crucial part of their exploratory toolkit. Even if you are breeding mice and doing pedigree analysis, you’ll probably employ a fair number of molecular methods.

9 – I do not think that Judaism before the time of Jesus really resembles “Judaism” as we understand it, that is, the rabbinical tradition which crystallized in the borderlands of Rome and Persia and was normative Judaism between 500 and 1800.

10 – If you want to know, the experiment was simple: Christian believers were surveyed as to their axioms. They were then given some forged documents from the “Dead Sea Scrolls,” whose veracity the researchers vouched for. The documents contained evidence that the core truth claims of these Christians were highly unlikely, and almost certainly distortions of the “truth.” Afterward the respondents were asked if they believed in the veracity and accuracy of the documents, and many responded yes. But these same individuals insisted that their axioms still held, and averred that their faith was now stronger. The key point is that the contradictions were naked before them, yet they refused to acknowledge them. The implication is that religious propositions are cognitively insulated from standard means of disconfirmation. One could posit that the results were in part due to an inability to reason logically because of low intelligence, but if that is modal in the population, same difference.