The Benefits of a Diet Free from Red Meat

My wife and I had very different dietary habits growing up. I was always very picky, subsisting on bread, cereals, dairy, junk food, and limited quantities of eggs and produce, while she ate a classically American high-meat diet. Since our marriage, we have both modified our diets considerably, particularly in response to health problems in our children and her own diagnosis of coeliac disease. Our diet is now rich in organic vegetables, rice, eggs, and milk, with very little junk food or soda, but we still differ in the foods we eat. She follows a strict gluten-free regimen while continuing to eat moderate quantities of meat, whereas I follow a roughly flexitarian habit of using meat very sparingly to liven up complex dishes.

The differences in our eating habits have stimulated investigation into the relative health benefits of different diets and foods for many years now. We’d seen enough research to know that the traditional Western diet, saturated with meats, is probably shortening the average lifespan. Yet we wondered whether some meats, particularly fish or organic red meat, were still good for us. Specifically, we wanted to know whether research finds meat unhealthy because some meats are unhealthy, or because all meat is.

It is widely reported throughout the scientific literature that “vegetarianism reduces mortality” (see e.g. Singh, Sabaté, & Fraser, 2003). Yet the exact pathways for this reduction are not fully understood. Reviewing research from the early 2000s, Fraser (2009) notes that “vegetarian” is a heterogeneous category; meat consumption may not be the key variable predicting long life. This problem is brought into sharp focus by findings that health-conscious nonvegetarians can attain outcomes as favorable as vegetarians (Key et al., 2009). But what exactly these health-conscious behaviors are remains unclear, leading Fraser to conclude that “much remains to be understood.”

There are likely to be many differences between vegetarians and controls, not only in terms of diet and exercise, but also place of residence, socioeconomic status, and psychology—we know for instance that, relative to controls, vegetarians are higher in psychometric Openness (Forestell, Spaeth, & Kane, 2012), and also more intelligent, even as children (Gale et al., 2007). Thus, any effects attributed to vegetarianism could in fact arise due to vegetarianism’s association with a host of other variables.

Likewise, the results for meat consumption show heterogeneity. For example, in a review of 45 studies on colorectal cancer risk, Norat & Riboli (2001) do find that “meat consumption is associated with a modest increase in colorectal cancer risk,” yet note that the association was only consistently found for red meat and processed meat, not total meat consumed. Might the evident health risks of meat consumption come from the type of meat, rather than meat itself?

Considering this alternative pathway by which meat consumption could decrease lifespan, we find multiple studies supporting the hypothesis that eating processed and red meat, rather than fish or poultry, is the cause of high mortality among omnivores. As early as 25 years ago, it was found that “The ratio of the intake of red meat to the intake of chicken and fish was particularly strongly associated with an increased incidence of colon cancer,” with less than a 0.05% probability of an effect this large occurring merely by chance (Willett et al., 1990). More recently, a prospective study of over 100,000 individuals initially free of cardiovascular disease and cancer as of 1986 recorded 23,926 deaths over the following 22 years. The study authors report that:

[T]he pooled hazard ratio of total mortality for a 1-serving-per-day increase was 1.13 for unprocessed red meat and 1.20 for processed red meat… We estimated that substitutions of 1 serving per day of other foods (including fish, poultry, nuts, legumes, low-fat dairy, and whole grains) for 1 serving per day of red meat were associated with a 7% to 19% lower mortality risk. We also estimated that 9.3% of deaths in men and 7.6% in women in these cohorts could be prevented at the end of follow-up if all the individuals consumed fewer than 0.5 servings per day [around 42 grams per day] of red meat. (Pan et al., 2012)

The interpretation here is straightforward: one reason for longer lifespans in vegetarians is probably their avoidance of red and processed meat. That they also avoid poultry or fish may offer no additional health benefits. Indeed, it could even incur costs to health, if avoiding meat means their diet is low in nutrients specific to those sources, such as vitamin B6 or omega-3 fatty acids.

However, another likely pathway for meat consumption to affect longevity is via pesticide intake. Toxins from feed build up in animals before slaughter, making all meat, not only processed or red meat, potentially unhealthy if it is non-organic. Unfortunately, evidence here is scant and equivocal: a recent study of over 600,000 British women found “little or no decrease in the incidence of cancer associated with consumption of organic food, except possibly for non-Hodgkin lymphoma” (Bradbury et al., 2014). Despite the large sample size, this null result cannot be regarded as definitive: organic food has not been widely available for more than a decade, and the long-term effects of consuming conventional rather than pesticide-free food may yet turn out to be significant. As noted in a recent study,

“there is only a limited number of human studies available having investigated the effects of consumption of organic food on health, disease risks and health promoting compounds, and the development of reliable biomarkers to be used in such studies are still in its infancy” (Johansson et al., 2014).

There is at least indirect evidence for the health benefits of organic meat, since human breast milk has been found to contain more desirable levels of conjugated linoleic acid isomers and trans-vaccenic acid when mothers consumed organic dairy and meat (Rist et al., 2007). Considering the available evidence, however, any effect from eating organic vs. conventional meat must be minor; otherwise the British study cited above should have returned a positive result, even on this short timescale.

A final path by which vegetarianism could confer higher longevity is through the nebulous effects of health-consciousness. The idea here is that individuals who care enough about health and nutrition to choose and maintain an unconventional diet may also have subtle differences in their lifestyle which influence their mortality. They may avoid using street drugs, take care to stay away from secondhand smoke, moderate their doses of over-the-counter medication, turn down the thermostat as a means of weight control, and so on. None of these would be likely to have a major impact on longevity by itself, which is why they are seldom considered in experimental design. Yet in aggregate, their impact could be significant.

To determine whether health-consciousness plays an important role in the long term health of vegetarians, we can compare the outcomes of vegetarians with those on other diets that do include meat. The two best-researched diet alternatives are the Mediterranean diet, rich in seafood, olive oil, nuts, and grains; and the low-carbohydrate diet (including the Atkins and Paleo diets), which encourages meats and other high-protein foods while minimizing grains and bread. These two diets both represent excellent comparisons to the vegetarian diet, as they both contain meat, but in different quantities and kinds.

The evidence for the Mediterranean diet is overwhelmingly positive. In a Greek population where adherence to the Mediterranean diet was measured on a 10-point scale, every two-point increment was associated with a hazard ratio for death of 0.75 (Trichopoulou et al., 2003); a follow-up considering elderly populations in Greece, Spain, Denmark and Australia returned similar findings (Trichopoulou, 2004), as did a further study carried out in Sweden (Tognon et al., 2011). Among overweight individuals, prolonged exposure to the Mediterranean diet, with or without concurrent calorie restriction, rendered “a better cardiovascular risk profile, reduced oxidative stress, and improved insulin sensitivity” (Esposito et al., 2010). Finally, among Sicilian centenarians, nutritional assessment revealed their unequivocal adherence to the traditional Mediterranean diet (Vasto et al., 2012).

In sharp contrast, the evidence for low-carbohydrate diets is generally negative. A longitudinal study of over 120,000 individuals found higher mortality from an animal-based low-carbohydrate diet (with a hazard ratio of 1.23), but lower mortality from a vegetable-based version (hazard ratio: 0.80) (Fung et al., 2010). A similar study in Sweden found that both decreasing carbohydrate intake by one decile and increasing protein intake by one decile were associated with a 6% increase in total mortality (Lagiou et al., 2007); in a Greek study, the picture was much the same, with higher intake of carbohydrates associated with a significant reduction in total mortality, and higher intake of protein with a nonsignificant increase (Trichopoulou et al., 2007). So, simply following a diet regimen doesn’t seem to be enough to confer the benefits of a health-conscious lifestyle; the nature of the regimen itself is key.

This is not to dismiss low-carbohydrate diets out of hand. There is good reason to believe that diets replacing carbohydrates with protein are excellent at promoting weight loss, and if the goal is rapid weight loss rather than healthy living, such a diet could be useful. But health and weight loss are not synonymous. In terms of longevity, both the vegetarian and traditional Mediterranean diets are well recommended over high-protein diets, and this suggests that it is indeed red and processed meat, rather than meat itself, which is most wisely avoided. Eating a diet free of red meat is easily accomplished by ridding one’s diet of meat altogether and following a typical vegetarian diet. However, it is not at all clear that this is the best way to eat: given that fish are an excellent source of omega-3 fatty acids, a pesco-vegetarian or Mediterranean diet might be healthier still.

Conclusion

I began this post to find out about the health consequences of meat consumption. The answer appears to be that red meats, and particularly processed meats, ought to be avoided. However, fish and shellfish (as central components of the classic Mediterranean diet) are likely to be healthy, and other white meats like chicken are probably acceptable if eaten in moderation.

Surprisingly, low-carb, high-protein diets like the Atkins or Paleo diet result in earlier mortality—this despite the fact that practitioners of such diets would be expected at first glance to live healthier lifestyles than ordinary Westerners. And while other factors such as exercise, stress management, and responsible drinking habits may also be important, diets rich in the vegetables, fruits, and grains found in either the Mediterranean diet or a typically health-conscious vegetarian diet are probably ideal for a long and healthy life.

References

Bradbury, K. E., Balkwill, A., Spencer, E. A., Roddam, A. W., Reeves, G. K., Green, J., … & Shaw, K. (2014). Organic food consumption and the incidence of cancer in a large prospective study of women in the United Kingdom. British journal of cancer, 110(9), 2321-2326.

Esposito, K., Di Palo, C., Maiorino, M. I., Petrizzo, M., Bellastella, G., Siniscalchi, I., & Giugliano, D. (2010). Long-term effect of mediterranean-style diet and calorie restriction on biomarkers of longevity and oxidative stress in overweight men. Cardiology research and practice, 2011.

Forestell, C. A., Spaeth, A. M., & Kane, S. A. (2012). To eat or not to eat red meat. A closer look at the relationship between restrained eating and vegetarianism in college females. Appetite, 58(1), 319-325.

Fraser, G. E. (2009). Vegetarian diets: what do we know of their effects on common chronic diseases? The American journal of clinical nutrition, 89(5), 1607S-1612S.

Fung, T. T., van Dam, R. M., Hankinson, S. E., Stampfer, M., Willett, W. C., & Hu, F. B. (2010). Low-carbohydrate diets and all-cause and cause-specific mortality: two cohort studies. Annals of internal medicine, 153(5), 289-298.

Gale, C. R., Deary, I. J., Schoon, I., & Batty, G. D. (2007). IQ in childhood and vegetarianism in adulthood: 1970 British cohort study. BMJ, 334(7587), 245.

Johansson, E., Hussain, A., Kuktaite, R., Andersson, S. C., & Olsson, M. E. (2014). Contribution of organically grown crops to human health. International journal of environmental research and public health, 11(4), 3870-3893.

Key, T. J., Appleby, P. N., Spencer, E. A., Travis, R. C., Roddam, A. W., & Allen, N. E. (2009). Mortality in British vegetarians: results from the European Prospective Investigation into Cancer and Nutrition (EPIC-Oxford). The American journal of clinical nutrition, 89(5), 1613S-1619S.

Lagiou, P., Sandin, S., Weiderpass, E., Lagiou, A., Mucci, L., Trichopoulos, D., & Adami, H. O. (2007). Low carbohydrate–high protein diet and mortality in a cohort of Swedish women. Journal of internal medicine, 261(4), 366-374.

Norat, T., & Riboli, E. (2001). Meat consumption and colorectal cancer: a review of epidemiologic evidence. Nutrition reviews, 59(2), 37-47.

Pan, A., Sun, Q., Bernstein, A. M., Schulze, M. B., Manson, J. E., Stampfer, M. J., … & Hu, F. B. (2012). Red meat consumption and mortality: results from 2 prospective cohort studies. Archives of internal medicine, 172(7), 555-563.

Rist, L., Mueller, A., Barthel, C., Snijders, B., Jansen, M., Simoes-Wüst, A. P., … & Thijs, C. (2007). Influence of organic diet on the amount of conjugated linoleic acids in breast milk of lactating women in the Netherlands. British Journal of Nutrition, 97(04), 735-743.

Singh, P. N., Sabaté, J., & Fraser, G. E. (2003). Does low meat consumption increase life expectancy in humans?. The American journal of clinical nutrition, 78(3), 526S-532S.

Tognon, G., Rothenberg, E., Eiben, G., Sundh, V., Winkvist, A., & Lissner, L. (2011). Does the Mediterranean diet predict longevity in the elderly? A Swedish perspective. Age, 33(3), 439-450.

Trichopoulou, A. (2004). Traditional Mediterranean diet and longevity in the elderly: a review. Public health nutrition, 7(07), 943-947.

Trichopoulou, A., Costacou, T., Bamia, C., & Trichopoulos, D. (2003). Adherence to a Mediterranean diet and survival in a Greek population. New England Journal of Medicine, 348(26), 2599-2608.

Trichopoulou, A., Psaltopoulou, T., Orfanos, P., Hsieh, C. C., & Trichopoulos, D. (2007). Low-carbohydrate–high-protein diet and long-term survival in a general population cohort. European Journal of Clinical Nutrition, 61(5), 575-581.

Vasto, S., Scapagnini, G., Rizzo, C., Monastero, R., Marchese, A., & Caruso, C. (2012). Mediterranean diet and longevity in Sicily: survey in a Sicani Mountains population. Rejuvenation research, 15(2), 184-188.

Willett, W. C., Stampfer, M. J., Colditz, G. A., Rosner, B. A., & Speizer, F. E. (1990). Relation of meat, fat, and fiber intake to the risk of colon cancer in a prospective study among women. New England Journal of Medicine, 323(24), 1664-1672.

The Quarterstaff – King of Weapons?

You may have seen people making this claim before. If you haven’t, it may seem ridiculous to consider it now; but Zach Wylde, a writer from the eighteenth century, tells us that:

I shall proceed to Quarter-Staff, the common Length is seven Foot… It is a true British Weapon, of great Antiquity, much Practised and Admired in former Days; to give it its due Praise, it is a most Noble Weapon, and very useful in several Respects, it is in the Nature of a double Weapon, by reason when you Exercise it, you make use of both Hands: I wonder that it is not more in Vogue in this Nation, considering its Excellency, for a Man that rightly understands it, may bid defiance, and laugh at any other Weapon… (Wylde, 1711)

People even present the quarterstaff as George Silver’s favorite melee weapon, quoting from his famous treatise, Paradoxes of Defence, as though it were his “ideal weapon”:

The Short Staffe is most commonly the best weapon of all other, although other weapons may be more offensive, and especially against many weapons together, by reason of his nimbleness and swift motions, and is not much inferior to the Forest Bille, although the Forest Bille be more offensive, the Short Staffe will prove the better weapon. (Silver, 1599)

Personally, I like George Silver. He was around a bit earlier than Wylde, and he recorded insights about weapons which are beyond our reach in these more civilized days, like:

I have known a gentleman hurt in rapier fight, in nine or ten places through the body, arms, and legs, and yet has continued in his fight, & afterward has slain the other, and come home and has been cured of all his wounds without maim, & is yet living. (Silver, Paradoxes of Defence)

We know nowadays that people can survive some frightening wounds. But this may be largely because of modern medicine, which has only existed for an eyeblink in comparison to the length of time humans have been fighting one another. So accounts like Silver’s are very interesting; so, too, are the comparisons he goes on to draw between thrusts and cuts (or “blows”):

A full blow upon the neck, shoulder, arm, or leg, endangers life, cuts off the veins, muscles, and sinews, perishes the bones: these wounds made by the blow, in respect of perfect healing, are the loss of limbs, or maims incurable forever.

…And for plainer deciding this controversy between the blow and the thrust, consider this short note. The blow comes many ways, the thrust does not so. The blow comes a nearer way than the thrust most commonly, and is therefore sooner done. The blow requires the strength of a man to be warded, but the thrust may be put by by the force of a child. A blow upon the hand, arm, or leg is maim incurable, but a thrust in the hand, arm, or leg is to be recovered. The blow has many parts to wound, and in every of them commands the life, but the thrust has but a few, as the body or face, and not in every part of them either. (Silver, Paradoxes of Defence)

These arguments seem to me very forceful, and they convince me that George Silver knew what he was talking about. But as to the quarterstaff being the king of weapons, there is virtually no way this can be true.

If that were so, then no one would ever have bothered creating a short spear. The spear is much more difficult to construct, requiring at the very least some means of sharpening a rock or lump of metal and affixing it to a pole with twine or resin, and more likely advanced metallurgical techniques. A short spear thus demands all the materials, care, and expertise of a quarterstaff, plus a great deal more. Given the much greater expense of constructing a short spear over a quarterstaff, the fact that the spear was so widely used makes Silver’s claim seem impossible.

Granted, I am not a medieval duelist. The best I can offer here are philosophical arguments, and these are always much more difficult to evaluate than arguments in the sciences or mathematics. Questions can be resolved in science through experimentation, and in math, through rigorous proofs.

But some arguments and comparisons in philosophy can be extremely rigorous. For example, an argument which demonstrates that a given notion leads to a contradiction proves conclusively that its negation must be true. Likewise, as I’ve pointed out informally before, if we can show that two sets differ only in the possible inclusion of one element, then the sets may be compared according to that element alone.

It may be difficult to compare weapons like swords and quarterstaves, because they differ in numerous respects. But if the quarterstaff is the king of weapons, then it would need to be, at bare minimum, superior to the short polearms, such as the spear. The short spear differs from the short staff only by the addition of its head, which would significantly increase its cost over the cost of the staff. To imagine that a sharp metal spearhead would not improve the overall utility of a wooden melee weapon—and particularly, to imagine that people would regularly pay the cost for such a modification without any tangible gain—is an exercise in fantasy.

Fortunately, we don’t need to worry that widely quoted historical martial artists like George Silver actually believed the quarterstaff was the king of weapons. In fact, what Silver wrote was not that the quarterstaff was the most effective melee weapon, but rather that:

if the Bilman is not very skillful (all vantages and disadvantages of both sides being considered,) the short Staff will prove the better weapon. (Silver, 1599)

And in the hands of equally skilled warriors, this won’t be true, because:

The Welch hook or forest bill, has advantage against all manner of weapons whatsoever.  (Silver, 1599)

So… what’s the weapon Silver calls a forest bill? Well, we can’t say precisely how it looked. But according to everything we know about bills, it was apparently a short staff, with a piece of metal added to the end.

Conclusion

Zach Wylde really did seem to think a piece of wood was the king of weapons. But George Silver didn’t. And while we don’t really know what the most effective overall melee weapon may be, please trust me on this: it isn’t the nail file, it isn’t the sheep’s bladder, and it isn’t the quarterstaff.

References

Silver, George (1599). Paradoxes of Defence. London.

Wylde, Zach (1711). English Master of Defence, or, The Gentleman’s Al-a-mode Accomplish. John White.

I Told You There was More Crazy to Physics

So last week I pointed out that the crazy didn’t just skip physics, but you didn’t let me finish. See, physicists will also tell you things like:

“Friction arises because of intermolecular bonds”

During my graduate work, I taught out of a textbook claiming that the reason we experience friction is that, when we bring two objects into close contact, bonds spontaneously form between them, just like chemical bonds between atoms, causing them to stick together.

If you think this works, try selecting an object with a nice big coefficient of static friction—your shoe should suffice. Take the sole of your shoe, and press it into the ceiling.

Did it stick?

Be patient with this one; you may have to stand there a bit for the physics to kick in.

“Electrons spin”

In a sense, this isn’t as bad as the other examples of nuttiness in physics that I’ve come across. Physicists generally admit, when you remind them about it, that electron spin is a fundamental property of the electron, one associated with a magnetic moment just as a current loop is, but that it doesn’t originate in any clear sense from any rotation of the electron. It can’t. Given the electron’s charge and tiny size, in order for it to spin rapidly enough to generate its magnetic field, its surface would have to break the light barrier. (Hint for fans of televised science fiction: things don’t do that.) You can read all about this problem of spinning electrons with a quick Google search.
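The arithmetic behind that claim is easy to check. Here is a back-of-the-envelope sketch, using a deliberately crude classical model of my own (a point charge circling at the classical electron radius, with the loop’s magnetic moment set to one Bohr magneton), to see how fast the charge would need to move:

```python
# Crude classical model: a charge e moving in a circular loop of radius
# r_e (the classical electron radius). A current loop has magnetic moment
# mu = I * A = (e*v / (2*pi*r)) * (pi*r**2) = e*v*r / 2.
# Setting mu equal to one Bohr magneton and solving for v:
e    = 1.602e-19   # elementary charge, C
mu_B = 9.274e-24   # Bohr magneton, J/T
r_e  = 2.818e-15   # classical electron radius, m
c    = 2.998e8     # speed of light, m/s

v = 2 * mu_B / (e * r_e)  # required equatorial speed, m/s
print(f"required speed: {v:.2e} m/s, about {v / c:.0f} times the speed of light")
```

On these admittedly naive assumptions, the charge would need to move at over a hundred times the speed of light, which is the standard reason spin cannot be literal rotation.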

So while we call it “spin,” that doesn’t mean the electron is actually spinning around, like a top or a gyroscope. Unfortunately, this doesn’t stop Dr. Albert Fert, a Nobel laureate in physics, from explaining electron spin in exactly that way.

In his defense, he isn’t speaking in his native language. But after explaining electron spin in terms of rotational motion, he stresses that the “only difference that has to be understood” between your idea of rotation and electron spin is that spin is quantized. I don’t mean to imply that Dr. Fert doesn’t understand physics, but I really wish he wouldn’t speak about spin as though he didn’t.

“There is no temperature below 0 Kelvin”

Oh, but there is. Yes, you probably thought, naïvely, that temperature was simply a measure of the average kinetic energy of a collection of particles, as given by

$E_{avg} = \frac{3}{2} k_B T$.

But no, that’s just a consequence of thermodynamics when it’s applied to classical systems. Temperature is defined in terms of the way entropy changes with energy, via

$1/T = dS/dE$

Ordinarily this poses no problems, and absolute temperature never falls below 0. But in a quantum mechanical setting, we define the entropy, S, to be zero when only one energy level is occupied. Usually this happens at the lowest total energy, with more and more levels being filled (and entropy monotonically increasing) as we add energy. But in a system which has a highest energy level, adding energy eventually pushes particles up against that ceiling, and the entropy begins to fall as energy increases. The derivative dS/dE then becomes negative at high energies, which by the definition above yields a negative temperature, with negative values near zero being the hottest of all.

(I say, “obviously” here with some facetiousness, given that anyone without at least a master’s in engineering was probably already lost in the discussion on spin.)

Everything up to this point is all very logical. It’s also quite mad. Because we could define temperature to be whatever we like; we even have a quantity, thermodynamic beta,

$\beta = 1/k_B T$

which aligns much more directly with $dS/dE$ and which could be easily used instead of temperature for such calculations. If physicists just defined “temperature” in terms of kinetic energy rather than in terms of an energy derivative, this problem of negative temperature wouldn’t exist. Instead, we have a definition for temperature which yields values just above 0 for the coldest possible system, values just below zero for the hottest possible system, and can otherwise travel into either positive or negative infinity. It’s all explained beautifully right here.
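The sign flip is easy to see in a toy model. Here’s a minimal sketch of my own (not from any textbook) of N two-level systems, with entropy counted microcanonically and 1/T taken from the finite-difference slope of S against E, in units where k_B and the level spacing are both 1:

```python
import math

# Toy model: N two-level systems with n of them excited. The microcanonical
# entropy is S = ln C(N, n), computed stably via log-gamma.
def entropy(N, n):
    return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)

# 1/T = dS/dE; since E = n in these units, approximate the derivative
# with a central finite difference in n.
def inv_temperature(N, n):
    return (entropy(N, n + 1) - entropy(N, n - 1)) / 2

N = 1000
for n in (100, 400, 900):
    beta = inv_temperature(N, n)
    sign = "positive" if beta > 0 else "negative"
    print(f"n = {n:4d}: 1/T = {beta:+.3f} ({sign} temperature)")
```

Below half filling, the slope dS/dE is positive (ordinary positive temperature); past half filling, it turns negative, exactly as described above.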

Conclusion

Physicists are crazy.

The Crazy Didn’t Just Skip Physics

Physics is a highly regarded field—arguably the single most highly regarded field in all of science. It isn’t by accident that physicists have the highest GRE scores of just about anybody in college, or that there is no Nobel Prize in sociology. What about the prizes in peace and literature? Those prizes may seem to belie the conclusion that the hard sciences are more highly regarded than other fields, but we here on the Internet have some revealing things to say about the peace and literature prizes:

Seriously, the Lit prize is more known for the people that don’t get the prize, while the Peace prize has been a joke for decades now. You could start a terrorist organization and then do nothing for decades and get the prize, or inadvertently cause genocide and get the prize, or get the prize for things people expect you to do, you could spend half a life-time undermining attempts at combating a horrible disease and receive the prize, make up a story and get the prize, etc.

This isn’t an isolated sentiment; some of you may remember the interview where Richard Dawkins said that he didn’t think Nobel peace prizes count, either. Of course, one might mention that there’s also no Nobel Prize in Richard Dawkins’s own field of expertise.* But the point I wanted to make isn’t specifically about the Nobel Prize. Rather, I wanted to take a moment to point out that, whatever its professional or scientific standing, physics also has its share of cranks and craziness. Here are just a few of the things I’ve heard physicists say in classrooms and universities across the country:

“Combustion converts matter into energy.”

Yes, you read correctly. You may not have realized it before but, apparently, simple combustion converts matter into energy. A teacher demonstrated this to me once by burning a thin strip of tissue that left no visible smoke or ash and saying “Look! All the matter has been converted into energy!”

When questioned as to whether this was really the case, the rejoinder was “Well, why not?” Maybe because if the matter in a tenth of a gram of tissue paper were actually converted entirely into energy via $E = mc^2$, that would release about nine trillion joules, roughly the yield of two thousand tons of exploding TNT. Lucky for all of us, that’s not how fire works; fire is a chemical phenomenon that releases energy by rearranging chemical bonds, leaving the atoms themselves intact. Fire and nuclear fusion are two different things.
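Those numbers are easy to verify for yourself:

```python
# If 0.1 g of tissue paper were converted entirely to energy via E = m*c**2:
m = 1e-4            # mass, kg (0.1 g)
c = 2.998e8         # speed of light, m/s
E = m * c**2        # energy released, joules

TNT_TON = 4.184e9   # conventional energy content of one ton of TNT, joules
print(f"E = {E:.2e} J, about {E / TNT_TON:.0f} tons of TNT")
```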

To be fair, the teacher who told me this was not in a university. To hear the really good stuff, you need to find yourself some PhD physicists. Fortunately for me, one day I encountered one such physicist who merrily explained that:

“There is no centripetal force.”

Now, people often tell you that there is no centrifugal force. That’s the force you seem to feel when you stand on a moving merry-go-round. If you do this, you’ll see the world spinning around an apparently stationary merry-go-round, and you’ll feel a mysterious pull from that spinning world, trying to yank you away from the center.

You can feel the same thing on a bike as you round a curve. You have to lean into the curve to stay on; a strange force will seem to be pulling you out of it, almost as though the ground were tilting away from you.

So is the centrifugal force not a real force? Not really, but it depends somewhat on your perspective. If you treat a rotating reference frame as every bit as good as an inertial frame, then there’s not just a centrifugal force that mysteriously yanks objects away from the center of rotation; there are also Coriolis and Euler forces to account for.

A visual demonstration of centrifugal and Coriolis forces. Life is pretty complicated for people who spend their lives spinning around in circles.

The thing is that observing these forces requires you to select a specific spinning reference frame. Someone else spinning at a different angular speed or around a different axis won’t get the same values for the forces that you do. People in different inertial reference frames (in simple terms, frames of reference where you aren’t spinning or accelerating) are pretty smug most of the time, because they all agree about everything, at least until they start having to worry about objects moving near the speed of light. These relativistic effects undercut some of the beauty of the inertial reference frame, but we still use it by default for most calculations, meaning that extra complications like the centrifugal force can be regarded as fictitious.

So far, so good.

But what about centripetal force? That’s $F = m v^2 / r$, the force that actually makes things turn: the friction on your tires as you round that curve, or the tension in a chain that keeps a ball spinning over your head. Without a force able to pull moving objects centripetally, there would be no rotation.
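For concreteness, here’s the formula at work (the car and the curve are made-up numbers, purely for illustration):

```python
# Centripetal force: F = m * v**2 / r
def centripetal_force(m, v, r):
    """Inward force (N) needed to keep mass m (kg) moving at speed v (m/s)
    around a circle of radius r (m)."""
    return m * v**2 / r

# A 1500 kg car rounding a 50 m curve at 20 m/s (72 km/h) needs
# 12,000 N of inward friction from its tires to stay on the road.
print(centripetal_force(1500, 20, 50))
```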

So… when a physicist told me one day about how there was no centripetal force, it was pretty funny. He went on and on about it. What was absolutely hilarious was that, when I later went up to another physicist and said, “Hey, did you hear what this other guy just said?”, he replied, “Well, if you think about it, there really is no centripetal force, because…”

I took my chin in my hand, contemplated Occam’s Razor and Woodstock, and after some careful nodding concluded that these two guys must have been drinking the same exciting beverages earlier that day. Because anyone who genuinely thinks there is no such thing as the centripetal force doesn’t understand physics… or how it is that the Earth doesn’t just fly off into space rather than orbiting the sun.

But on that subject, physicists will also tell you brilliant things like:

“Planets orbit around the sun because gravity bends space.”

No. If the Earth orbited the sun because the sun curved straight lines in space into circles, then we could stop the Earth and float there, without ever falling into the sun—or we could drop an apple and never hit Sir Isaac Newton on the head. What’s curved isn’t space, but spacetime.

Edward Current offers a wonderful description of this, which you can watch below if you have a beautiful mind.

This is probably enough crazy for one week.

(But yes, I promise: there is more crazy in physics.)

_______________________________

* That is, being a professional jerk. Yes, many make the mistake of assuming him to be a biologist. Generally, these are only people unfamiliar with his work, or those particularly rabid sorts of atheists who would be pleased to learn that every religious person on the planet spontaneously died of amoebic dysentery.

(My apologies to anybody who has died of amoebic dysentery, to whom this joke may not seem funny.)

Science vs Sensitivity

Hello again! Some of you may have noticed a long break in my posts; there was a reason for this. Over the course of my life, I’ve become increasingly aware of the sensitivity of the typical person. The problem that I’ve been coming up against is that just about anything I thought about that might be interesting enough to blog about was, one way or another, controversial and potentially hurtful or infuriating to someone I knew.

I know that we in the West like to think of ourselves as ruggedly individualist, priding ourselves on truth rather than social harmony. Cultures of the far East, most notably China, place great value on the concept of saving face, but not here in the West.

Except, of course, when it comes to children.

Or the environment.

Or people who are, well, not as intellectually quick as population norms would suggest that ordinary people usually are:

Although I know that by now I should never be surprised by what Americans are doing lately, I’m floored that the word “retarded”—a word introduced to describe, in a sensitive, roundabout, and value-neutral way, those people who were previously regarded as, you know, “really stupid”—is itself no longer politically correct enough for modern America. Nope! Yesterday’s political correctness is now hate-speech:

The R-word is HATE SPEECH
“I don’t think you understand how much you hurt others when you hate.  And maybe you don’t realize that you hate.  But that’s what it is; your pre-emptive dismissal of them [people with intellectual disabilities], your dehumanization of them, your mockery of them, it’s nothing but another form of hate.  It’s more hateful than racism, more hateful than sexism, more hateful than anything.” – Soeren Palumbo, student, advocate, brother to a sister with an intellectual disability.

When I first saw this, I had no idea how exactly we were supposed to speak about people who aren’t very smart. It seemed that maybe we were supposed to refer to them as “the intellectually disabled,” but I wasn’t sure that piling continually more syllables onto these people was going to fix the problem—until I realized that speaking about reality at all is the problem.

Because reality doesn’t care about how we feel. And that’s a very bad thing. The evolution of the human species, anthropogenic climate change, the way success can be earned through hard work and determination, all of this stuff is offensive to someone or another—to anyone who prefers to believe things based on something other than empiricism and rationality, really. So in polite company, we learn to cultivate sensitivity, and just not talk about such things.

I know, it seemed for a while we were doing fairly well at building an open and tolerant civilization, where Voltaire was always there to give his life to ensure my right to say whatever I wanted. But this is the 21st century, and free speech isn’t so popular anymore. We’ve changed into a people that likes to pretend to be tolerant and opposed to hatred whenever we’re feeling most venomous. Not everyone today cares about political correctness, of course. But research finds that those who do suffer from stress, social conflicts, a diminished sense of humor, and the loss of friends (Strauts & Blanton, 2015). I suppose the simple fact that they’re always arguing with people is itself another uncomfortable scientific finding that we’re not supposed to talk about. But seriously, if tolerance, openness, and freedom from hatred were what political correctness were really about, then wouldn’t the SJW PC police get along with people? Instead of being, you know, what they’re really like?

I have no particular desire to offend anybody. If I could have my wish, I would wander the world like a child in a candy shop, discussing my enthusiasm for the cosmos and that which it contains without bothering a living soul. I’m not concerned with learning that I was wrong about something, or finding out something that will make me feel smaller or more insignificant than I already am. Unfortunately, many people are concerned about that kind of thing, and Aristotle’s historic formulation of the law of the excluded middle makes it impossible for everyone to be right all the time—accepting any proposition means denying its contradictory:

[T]here cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate (Aristotle, Book IV, CH 7, p. 531).

In other words, one cannot research any topic and present a position in line with the scientific literature without that information contradicting people who believe differently (usually for reasons unrelated to rationality or empiricism). But, as others have noted, this problem of sensitivity has created a chilling effect on our society:

It undermines a fundamental democratic right to free expression—a right that should extend to everyone, regardless of how contentious, bigoted or prejudiced these views might be—in order to advance a perception of decency and social harmony. But is it really harmony we win when we back down from satirising religious radicals? Or when we attack people for wishing “Merry Christmas” instead of “Happy Holidays”? This seems more like a way to cultivate social anxieties; a fear of forgetting the proper code words. Flemming Rose, editor of the Danish paper that published the controversial Mohammed cartoons in 2005, calls this approach a “tyranny of silence.”

This tyranny may have begun in the public sphere, but it’s moved into the scientific arena as well. In an argument that would have been axiomatic as little as fifty years ago, Dr. Bruce Charlton (2009) points out that “Truthfulness in science should be an iron law, not a vague aspiration:”

Although some scientists are selfishly dishonest simply in order to promote their own careers, for most people quasi-altruistic arguments for lying (dishonesty in a good cause of helping others, or to be an agreeable colleague) are likely to be a more powerful inducement to routine untruthfulness than is the gaining of personal advantage. For example, scientists are pressured to be less-than-wholly-truthful for the benefit of their colleagues or institutions, or for official/political reasons.

Science is not diplomacy, or advertising, or politics. The ultimate goal of science is truth, not harmonious relationships or sensitivity to social conventions. Although the pursuit of science may need to give way to ethical considerations, sparing the feelings of sensitive people cannot be such a consideration if it means refusing to carry on research or to speak clearly about scientific findings.

Part of the value of science is its ability to correct us, to humble us, to disprove the things we think. The truth changes us and transforms us; only when we abandon our ego and submit to the truth can we begin to connect to the yawning cosmic depths that lie waiting beyond us, around us, and inside of us all.

This doesn’t mean that we need to be blunt or cruel. We can still try to be gentle within the confines of honest communication, and this is what I’ll aspire to do on this weblog. I can’t guarantee that everyone will like everything that I have to say, but in writing these posts, I do want to at least try to present the information in a patient way. This is something that full-time scientists, too, may do well to remember—it does no good to discover life on Alpha Centauri if no one wants to let you tell them about it. The more palatable the truth can be, the more people will have a chance to learn it.

Of course, not everyone is interested in that. That’s fine with me. If you would rather throw reality under a bus than take the risk of not being perpetually soothed and coddled, then you may as well depart in ignorance. Let me be the first to wish you a safe and speedy trip!

References

Charlton, B. G. (2009). Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration. Medical hypotheses, 73(5), 633-635.

Strauts, E., & Blanton, H. (2015). That’s not funny: Instrument validation of the concern for political correctness scale. Personality and Individual Differences, 80, 32-40.

What You Don’t Know About Alcohol

By now, you’ve probably heard that there are “real benefits to the Paleo diet.” I know I shouldn’t be too scornful; there probably are. But the science behind it is so misinformed that it’s hard for me to take the diet seriously. It’s almost as though its proponents believe in evolution just enough to think that it happened a long time ago, in a galaxy far, far away. Because the last time I checked—and maybe I shouldn’t speak for everyone here—my ancestors have been eating grains for literally ages: the stone age, the bronze age, and the iron age.

Anyone who understands anything about evolution knows that if you change a creature’s mode of existence, you will create selection pressure that forces the species to evolve in response to that change. I’d have thought that the way humans across the globe consume maize, rice, wheat, and oats in such high quantities would be an obvious clue that we’re pretty good at digesting things that other primates aren’t used to eating, but I can be naive that way.

All this does raise an interesting question, however: how fast can evolution operate? Some people do show gluten sensitivity, after all, so it’s clear that not all of us have the ability to tolerate all cereals. The story is similar for alcohol, the anthropological handmaiden of grain; most of us tolerate it pretty well, but you’ve probably known a few people who can’t hold their liquor, or, worse, can’t stop drinking. But for all that, we drink the stuff like crazy, and it looks like we have been for quite some time.

For instance, you may have heard that Medieval populations commonly consumed diluted beers and ales instead of pure water, due to the widespread understanding that groundwater was insalubrious. Given the state of the water supply outside of the wealthy, first world nations, medievals were probably right about their water. But if watered wine (in the Mediterranean) and beer (farther north) were consumed so ubiquitously, then weren’t they all suffering from perpetually high fetal alcohol syndrome rates?

Stop and think about this and you’ll probably see that fetal alcohol syndrome had to have been a problem, particularly with 15th-century experts like Michele Savonarola, in his De regimine pregnantium, warning expectant mothers to “Beware of using cold water, it is not good for the fetus and it causes the generation of girls, especially here in our region, so keep drinking wine.”

Except that, maybe, it wasn’t a problem at all:

Light-to-moderate maternal alcohol consumption during pregnancy does not adversely affect fetal growth characteristics. (Bakker et al., 2010)

This comes from a large-scale study, using a sample of 7333 pregnant women. And the findings were quite clear:

In total, 37% of all mothers continued alcohol consumption during pregnancy, of whom the majority used less than three drinks per week. We observed no differences in growth rates of fetal head circumference, abdominal circumference or femur length between mothers with and without continued alcohol consumption during pregnancy. Compared with mothers without alcohol consumption, mothers with continued alcohol consumption during pregnancy had an increased fetal weight gain [difference 0.61 g (95% confidence interval: 0.18, 1.04) per week]. Cross-sectional analyses in mid- and late pregnancy showed no consistent associations between the number of alcoholic consumptions and fetal growth characteristics. All analyses were adjusted for potential confounders. (Bakker et al., 2010)

So not only did mothers who drank moderately not produce babies with any sign of Fetal Alcohol Syndrome, but their unborn babies gained weight at a higher rate than would be expected otherwise. The study authors sensibly concluded that further studies were needed to assess “function in postnatal life.” But as it happens, we have the results from such a study, this time carried out on over eleven thousand pregnant mothers taking part in the UK Millennium Cohort Study (N=11,513). Their findings are even more striking:

Boys and girls born to light drinkers were less likely to have high total difficulties and hyperactivity scores compared with those born to mothers in the not-in-pregnancy group. These differences were attenuated on adjustment for confounding and mediating factors. Boys and girls born to light drinkers had higher mean cognitive test scores compared with those born to mothers in the not-in-pregnancy group: for boys, naming vocabulary (58 vs 55), picture similarities (56 vs 55) and pattern construction, for girls naming vocabulary (58 vs 56) and pattern construction (53 vs 52).
Conclusions: At age 5 years cohort members born to mothers who drank up to 1–2 drinks per week or per occasion during pregnancy were not at increased risk of clinically relevant behavioural difficulties or cognitive deficits compared with children of mothers in the not-in-pregnancy group. (Kelly et al., 2010; emphasis added.)

So what are we to make of all this? Simply that evolution doesn’t proceed at geologic rates—humans have been consuming alcohol since the discovery of agriculture, thousands of years ago. And while not everybody was drinking all the time, those societies where people were exposed to alcohol throughout their evolutionary history must have had some selective pressure favoring tolerance to it.

To see what things may have been like early on, we can look at groups without a long history of agriculture, like the Native Americans, who show high rates of alcoholism (Ehlers et al., 1998). We even have a pretty good start on understanding the genetic etiology for alcohol dependence, with both the ALDH2*2 and ADH3*1 polymorphisms protecting against alcoholism, but neither being common in individuals of Native American ancestry (Wall, Carr, & Ehlers, 2003). Presumably everybody was like this 10,000 years ago; but once humans began cultivating crops and alcohol entered the scene, selective pressure acted on each generation to enhance tolerance, not only in adulthood, but during gestation.

But if all this is true, then why does everybody keep telling pregnant women to avoid alcohol so completely? The answer is probably that it’s safer for doctors to send a clear, simple message to society than to give an accurate picture of the more complex reality. Study after study tells us that children are indeed harmed by high maternal consumption of alcohol, and the dangers of fetal alcohol syndrome are real enough that no one is terribly keen to distinguish between safe, light drinking and heavy or binge drinking.

For example, a recent study was titled “Prenatal Alcohol Exposure is Associated with Conduct Disorder in Adolescence,” and reported that “Prenatal alcohol exposure is significantly associated with an increased rate of conduct disorder in the adolescents,” but with the caveat that “This effect was detected above an average exposure of one or more drinks per day in the first trimester.” (Larkby et al., 2011). An even better example comes from a meta-analysis on maternal drinking and later childhood outcomes, where the study authors reported that:

We observed a significant, albeit small, positive association between mild-to-moderate prenatal alcohol exposure and child cognition (Cohen’s d 0.04; 95% CI, 0.00, 0.08) (Flak et al., 2014)

…and then changed their results by excluding studies post hoc:

but the association was not significant after post hoc exclusion of 1 large study that assessed mild consumption nor was it significant when including only studies that assessed moderate alcohol consumption. None of the other completed meta-analyses resulted in statistically significant associations between mild, moderate, or binge prenatal alcohol exposure and child neuropsychological outcomes. (Flak et al., 2014)

…so that they could return conclusions more consistent with the standard line:

Our findings support previous findings suggesting the detrimental effects of prenatal binge drinking on child cognition. Prenatal alcohol exposure at levels less than daily drinking might be detrimentally associated with child behavior. The results of this review highlight the importance of abstaining from binge drinking during pregnancy and provide evidence that there is no known safe amount of alcohol to consume while pregnant. (Flak et al., 2014, emphasis added)

What the results of this review actually highlight is that there is not only a safe level, but maybe even an optimal level of alcohol consumption during pregnancy which is above zero. Unfortunately, you have to have enough scientific literacy to find and read the actual research to know anything about it. For everybody else, epidurals, Ritalin, and the Paleo Diet are better than nothing, and—to look on the bright side of things—can’t be too much worse than the state of medicine in the Middle Ages.

References

Ehlers, C. L., Garcia-Andrade, C., Wall, T. L., Sobel, D. F., & Phillips, E. (1998). Determinants of P3 amplitude and response to alcohol in Native American Mission Indians. Neuropsychopharmacology, 18(4), 282-292.

Flak, A. L., Su, S., Bertrand, J., Denny, C. H., Kesmodel, U. S., & Cogswell, M. E. (2014). The association of mild, moderate, and binge prenatal alcohol exposure and child neuropsychological outcomes: a meta‐analysis. Alcoholism: Clinical and Experimental Research, 38(1), 214-226.

Larkby, C. A., Goldschmidt, L., Hanusa, B. H., & Day, N. L. (2011). Prenatal alcohol exposure is associated with conduct disorder in adolescence: Findings from a birth cohort. Journal of the American Academy of Child & Adolescent Psychiatry, 50(3), 262-271.

Kelly, Y. J., Sacker, A., Gray, R., Kelly, J., Wolke, D., Head, J., & Quigley, M. A. (2010). Light drinking during pregnancy: still no increased risk for socioemotional difficulties or cognitive deficits at 5 years of age? Journal of Epidemiology and Community Health, jech-2009.

Bakker, R., Pluimgraaff, L. E., Steegers, E. A., Raat, H., Tiemeier, H., Hofman, A., & Jaddoe, V. W. (2010). Associations of light and moderate maternal alcohol consumption with fetal growth characteristics in different periods of pregnancy: the Generation R Study. International Journal of Epidemiology, 39(3), 777-789.

Wall, T. L., Carr, L. G., & Ehlers, C. L. (2003). Protective association of genetic variation in alcohol dehydrogenase with alcohol dependence in Native American Mission Indians. American Journal of Psychiatry, 160(1), 41-46.

A Better IQ Test

Intelligence is probably more important to physicists than to most people. Anyone who’s ever crammed for an exam in E&M or Quantum Mechanics knows that subtle effects in your own mental state can make a huge difference in understanding; being hungry, tired, or simply worn out from calculation can bring the curtains down on a problem you’ve been looking at for hours and understood perfectly just a few minutes ago.

But physicists are also keen empiricists; when Lord Kelvin said that everything that exists, exists in some quantity and can therefore be measured, we thought he meant that everything that exists really ought to be measured, the sooner the better. Meter sticks, balance scales, and stopwatches take care of the simple stuff, and a good voltmeter can handle circuits pretty well. Other phenomena can be more ticklish, but if we want to count muons or measure the magnetic field in a vacuum, well, we’ll build a machine to do it.

So the current state of affairs regarding mental test scores is dissatisfying. I know that many of us treat the scores we get on exams as serious readings, numbers that tell us something clear and fundamental about our understanding of the subject matter—but they can’t be. Exams are meant to test knowledge, yes, but they never have units of facts or neural connections; it’s always just points (and good luck translating “points” into m/kg/s).

Unfortunately for students of the hard sciences, classroom tests are psychometric batteries, falling wholly outside the pristine field of tensors and wave functions to rest squarely within the messy discipline known as psychology.

Psychometricians have made pretty good headway into mental testing with a hundred years of investigation into the intelligence quotient, or IQ. But what started out as a promising measure of “mental age vs. chronological age” (hence the “quotient” part of IQ; once upon a time, you divided MA by CA and multiplied by 100 to get your result) eventually devolved into scores relative to population norms—norms that are constantly shifting and changing as people get better at taking the tests.
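The old ratio arithmetic is simple enough to sketch in a couple of lines (the function name and example ages here are mine, purely for illustration):

```python
def ratio_iq(mental_age, chronological_age):
    """Classic ratio IQ: mental age divided by chronological age, times 100."""
    return 100.0 * mental_age / chronological_age

# A 10-year-old testing at the level of a typical 12-year-old:
print(ratio_iq(12, 10))  # 120.0
```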

And they have definitely been getting better; test scores have risen by the equivalent of over 20 points since IQ tests first came out. (You may have heard of it; it’s called the Flynn Effect.) Sadly, this improvement in test scores doesn’t mean people are getting smarter; checking the pattern of gains on different IQ subtests reveals that this rise shows no relationship to the underlying general factor (g) of mental ability (te Nijenhuis & van der Flier, 2013). Alongside this, we should be aware that those biological explanations that have been given for rising IQ—explanations like nutrition or heterosis which would indicate genuine increases to the underlying intelligence of test takers—didn’t pan out well (see e.g. Flynn, 2009 and Woodley, 2011).

Of course, in spite of these disconfirmations, I do still suspect that nutrition played some role in rising test scores over the generations. Modern society really is much better at providing nutrition to children, and we wouldn’t have seen such massive gains in human height otherwise. And rather than pointing to overall nutrition, we can also look at specific nutrients like iodine; evidently the iodization of salt by itself could explain a large proportion of the rise in IQ scores, since “for the one quarter of the population most deficient in iodine this intervention raised IQ by approximately one standard deviation” (Feyrer, Politi, & Weil, 2013).
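For scale, here’s a back-of-the-envelope pass at what that implies population-wide (the 15-point standard deviation is the usual convention for IQ scales; the arithmetic is mine, not from the paper):

```python
sd_iq = 15.0     # one standard deviation on a typical IQ scale (assumed)
affected = 0.25  # the quarter of the population most deficient in iodine

# If that quarter gained a full standard deviation, the average
# population-wide gain from iodization alone would be:
avg_gain = affected * sd_iq
print(avg_gain)  # 3.75 IQ points
```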

Yet even if factors like nutrition are influencing test scores, it still appears that simple test-taking strategies account for most of the population-wide IQ gains. For instance, higher rates of guessing (or more specifically, a pattern of giving quick, shallow answers), by itself accounts for half of the gains seen on Arithmetical Reasoning and Vocabulary subtests (Must & Must, 2013). It’s also been noted that the act of measurement alters future readings—IQ test scores remeasured up to 7 years after an earlier measurement showed significant increases due to familiarity with the test (Salthouse, Schroeder, and Ferrer, 2004). And most of all, the prevalence of mental test questions throughout education and even the media has saturated society to such an extent that simply being alive in a modern environment habituates us to the items on IQ tests. In other words, the secular gain in IQ scores over the past 100 years “is directly analogous to IQ gains via retesting” across the entire population (Armstrong & Woodley, 2014).

So this inability of current IQ tests to measure intelligence consistently is a big problem. I’d have thought psychologists would be scurrying around trying to solve it, but they seem more interested in documenting how their measuring stick is constantly compressing rather than constructing a measuring stick that stays the same length over time. Looking at the topic for a while, I can at least be sympathetic—coming up with a good, invariant measure of intelligence doesn’t seem easy at all. But a little test called reverse digit span is probably a good start:

Our findings reveal that seven of the subtests within the WISC-R and WISC-III manifested the Flynn effect, which is marked by an insignificant change in scores when tested and retested on the same norm, but a significant decrease in scores when tested on an old norm and retested on a new norm. These subtests were Similarities, Vocabulary, Comprehension, Picture Completion, Block Design, Arithmetic, and Object Assembly. Information and Digit Span were not affected by changing test norms in our sample. (Kanaya & Ceci, 2011)

Digit span is a simple, tidy test; you read a string of digits, reverse it in your mind, and write down (or recite) the reversed string. Of course, any test with only one kind of question is vulnerable to subject idiosyncrasies. No one test is a pure measure of intelligence, and there must be a few people out there who are absolute wizards at reversing digits while still being dumb as a post.

So to round out the mix, we have a few more tests based in elementary cognitive tasks (ECTs). One of my favorite ECTs is choice reaction time, where subjects need to hit one of two buttons in response to one of two lights coming on. These kinds of tests are attractive not only because they lack cultural content, but because these elementary cognitive tasks also show no intergenerational improvement (Nettelbeck & Wilson, 2004; Silverman, 2010).

So there you go: Set people in front of computer terminals and give them each two tests. For test 1, see how many digits a subject can remember from a display, reverse, and key back into the computer, and then for test 2, check how fast he or she can press the f or j key in response to one of two stimuli appearing on screen at a standard size, color, and brightness. Scores for each test will have a clear lower bound, starting from 0 and increasing with increasing processing power. Even the units come clean: for test 1 the units are bits of information; for test 2 the units are inverse seconds. The results can still be transformed into z-scores relative to human norms, but they can also be used raw; this would allow us to directly compare the intelligence of humans with other primates. Incidentally, there have already been some studies in this vein: nonhuman primates perform about as well as 5-year-olds on other tests designed to be used cross-species (Vlamings, Hare, & Call, 2010), and after staying up all night writing this, I perform about as well as a potted plant.
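Here’s a sketch of how raw scores on those two tests could be put into the units above. The log2(10) bits-per-digit figure assumes all ten digits are equally likely, and the function names are my own invention, not from any published battery:

```python
import math

def digit_span_bits(max_reversed_span):
    """Test 1 in bits: each digit of the longest correctly reversed
    string carries log2(10) ~ 3.32 bits, assuming digits 0-9 are
    presented with equal probability."""
    return max_reversed_span * math.log2(10)

def choice_rt_score(reaction_times_s):
    """Test 2 in inverse seconds: reciprocal of the median
    two-choice reaction time, so faster responses score higher."""
    times = sorted(reaction_times_s)
    n = len(times)
    if n % 2:
        median = times[n // 2]
    else:
        median = (times[n // 2 - 1] + times[n // 2]) / 2
    return 1.0 / median

print(digit_span_bits(7))                   # ~23.3 bits
print(choice_rt_score([0.42, 0.38, 0.45]))  # ~2.4 per second
```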

References

Armstrong, E. L., & Woodley, M. A. (2014). The rule-dependence model explains the commonalities between the Flynn effect and IQ gains via retesting. Learning and Individual Differences, 29, 41-49.

Feyrer, J., Politi, D., & Weil, D. N. (2013). The Cognitive Effects of Micronutrient Deficiency: Evidence from Salt Iodization in the United States (No. w19233). National Bureau of Economic Research.

Flynn, J. R. (2009). Requiem for nutrition as the cause of IQ gains: Raven’s gains in Britain 1938–2008. Economics & Human Biology, 7(1), 18-27.

Kanaya, T., & Ceci, S. J. (2011). The Flynn effect in the WISC subtests among school children tested for special education services. Journal of Psychoeducational Assessment, 29(2), 125-136.

Must, O., & Must, A. (2013). Changes in test-taking patterns over time. Intelligence, 41(6), 780-790.

te Nijenhuis, J., & van der Flier, H. (2013). Is the Flynn effect on g?: A meta-analysis. Intelligence, 41(6), 802-807.

Salthouse, T. A., Schroeder, D. H., & Ferrer, E. (2004). Estimating retest effects in longitudinal assessments of cognitive functioning in adults between 18 and 60 years of age. Developmental psychology, 40(5), 813.

Vlamings, P. H., Hare, B., & Call, J. (2010). Reaching around barriers: the performance of the great apes and 3–5-year-old children. Animal cognition, 13(2), 273-285.

Woodley, M. A. (2011). Heterosis doesn’t cause the Flynn effect: A critical examination of Mingroni (2007). Psychological Review, 118(4), 689-693.

Sorry, Forget the Shrink Ray

I don’t talk about physics much. Partly that’s because it gets complicated fast, but also partly it’s because I already have my degree there, and it isn’t new for me. Still, every once in a while, someone brings up some gadgety, science-fictiony idea and wants to know if it will work.

Topic for today: The Shrink Ray.

So the problem with a lot of these ideas is that they violate basic laws of physics, like conservation of energy or momentum. Looking at the math, it turns out that a shrink ray actually can obey conservation of momentum without too much trouble. Take an object of initial mass and velocity $m_0$ and $v_0$ to be shrunk to final mass $m_1$ at velocity $v_1$. Then conservation of momentum simply requires objects to speed up as they shrink:

$v_1 = \frac{m_0 v_0}{ m_1}.$

Unfortunately, conservation of energy requires additionally that

$v_1 = v_0 \sqrt{\frac{m_0}{m_1}}$

and, frustratingly, this system of two equations has no solution with $m_1 \neq m_0$ unless $v_0 = 0$.
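The clash is easy to see numerically; pick any shrink at all for a moving object and the two conservation laws demand different final speeds (the 2 kg mass and 10 m/s speed are arbitrary test values):

```python
m0, v0 = 2.0, 10.0  # initial mass (kg) and speed (m/s); arbitrary
m1 = 1.0            # shrunken mass; any value below m0 shows the problem

v1_momentum = m0 * v0 / m1         # from m1 * v1 = m0 * v0
v1_energy = v0 * (m0 / m1) ** 0.5  # from (1/2)*m1*v1**2 = (1/2)*m0*v0**2

print(v1_momentum)  # 20.0 m/s
print(v1_energy)    # ~14.1 m/s -- the two agree only when m1 == m0 or v0 == 0
```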

But wait! This was only a Newtonian argument. If we shrink a moving object, we know it would have to speed up to obey conservation of momentum. This might mean that it needs to gain energy, yes. But what’s the problem with that, if the rest mass is loaded with energy $E_0$ for it to use in going faster?

Well, then we have the initial kinetic energy plus its rest energy becoming final kinetic energy:

$K_1 = K_0 + E_0$

Or in other terms:

$\frac{m_1 v_1^2}{2} = \frac{m_0 v_0^2}{2} + (m_0 - m_1) c^2$

We know what $v_1$ must be to conserve momentum, via $v_1 = \frac{m_0 v_0}{m_1}$; so we can then say

$\frac{m_0^2 v_0^2}{2 m_1} = \frac{m_0 v_0^2}{2} + (m_0 - m_1) c^2$

which factors neatly into

$(m_0 - m_1)\left(\frac{m_0 v_0^2}{2 m_1} - c^2\right) = 0$

So either $m_1 = m_0$ (no shrinking at all), or $m_1 = \frac{m_0 v_0^2}{2 c^2}$. That second root looks promising until you feed it back into the momentum equation: it gives $v_1 = \frac{2 c^2}{v_0}$, which is faster than light for any starting speed below $c$. In other words, no shrinking is allowed.
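The same dead end shows up numerically: fix $v_1$ by momentum conservation and ask how badly total energy, rest plus Newtonian kinetic, fails to balance for a candidate final mass (the 2 kg mass and 10 m/s speed are arbitrary test values):

```python
c = 3.0e8            # speed of light, m/s
m0, v0 = 2.0, 10.0   # initial mass (kg) and speed (m/s); arbitrary

def energy_mismatch(m1):
    """Leftover energy (J) once v1 is fixed by momentum conservation:
    (K0 + m0*c^2) - (K1 + m1*c^2), with Newtonian kinetic energies."""
    v1 = m0 * v0 / m1
    return (0.5 * m0 * v0**2 + m0 * c**2) - (0.5 * m1 * v1**2 + m1 * c**2)

print(energy_mismatch(m0))   # 0.0 -- balances only with no mass change
print(energy_mismatch(1.0))  # ~9e16 J left over, which has to go somewhere
# (The one other formal root, m1 = m0*v0**2 / (2*c**2), forces
# v1 = 2*c**2/v0, far above the speed of light, so it's ruled out.)
```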

Of course, a person might avoid this by arguing that the energy is to be released into the remaining mass not as velocity, but as heat. And as it turns out, this is a really great idea; the physics would work out just fine, and all you need to do is switch the labels out and market your shrink ray as a death ray.

But wait, if the only thing stopping this from being a usable shrink ray is all that extra energy cooking the target to a crisp, then what if we were to somehow suck the energy away while the target was shrinking? Since the energy content at the shrinking end is much higher than at the end doing the shrinking, the 2nd Law of Thermodynamics would be satisfied by transferring at least some of that energy from the shrinking object to the raygun itself. Then all we need is a way to store it, right?

Well, I hope you have a battery capable of holding about a million terajoules, because that’s roughly what you’d get out of shrinking an adult human to something like 90% of his or her current size.*
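To check the order of magnitude (assuming a 70 kg adult and mass scaling with the cube of linear size; both assumptions are mine):

```python
c = 3.0e8  # speed of light, m/s
m0 = 70.0  # adult human mass, kg (assumed)

m1 = m0 * 0.9**3      # 90% linear size; mass scales with volume
E = (m0 - m1) * c**2  # rest energy that has to go somewhere, in joules

print(E / 1e12)  # ~1.7e6, i.e. on the order of a million terajoules
```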

So seriously, forget the shrink ray. It just isn’t happening. What you really want is a super powerful battery. That’s something with genuine applications, and while you may have trouble with the chemistry or engineering, at least the physics won’t stop you from doing it!

* For comparison, recall that a bolt of lightning is only 1.21 Gigawatts, although you may have to multiply by time somewhere to compare the two values.

People are optimistic about science. It’s hard not to be, considering the new and amazing technologies science brings us every day. People writing science fiction casually assume that the rate of scientific progress is exponential, or at least linear; at worst, dystopian futures usually result from the misuse of science by an unenlightened humanity (like Walter M. Miller’s Canticle for Leibowitz, or Sherri S. Tepper’s Gate to Woman’s Country, each of which takes place in a future world devastated by nuclear warfare). But rarely considered is the idea that science itself may simply slow down and stop. It’s happened before; it could happen again.

I’ve talked in the past about science having already picked the low hanging fruit in many areas, or about the many studies showing a steady decline in our genotypic intelligence of around 1 IQ point per generation due to selection pressures encouraging smarter individuals to forgo reproduction in favor of education (see for instance Reeve, Lyerly, & Peach, 2013). But there’s also the question of simple scientific interest and motivation. Will it always be around?

Axiomatically, science depends upon there being people around who aren’t just clever, but curious; the sorts of people who like to sniff out mysteries and solve puzzles with a certain methodical flair. The days when people could simply stumble on scientific discoveries like Galileo with his telescope are well past; only sustained, persistent curiosity will do the job.

People think about this kind of interest as being an inherent or inalienable aspect of the noble human spirit, but I’m going to go out on a limb here and speculate that, as a reader of this blog, you just may have noticed that a lot of the other kids you went to high school with wouldn’t exactly find this stuff interesting. What happens to scientific inquiry if reproduction is left to the jocks, journalists, social climbers, and stoners?

Personality, that is, individual behavioral tendencies that are relatively stable across situations and time, has been associated with number of offspring in many animals, including humans, suggesting that some personality traits may be under natural selection… Using a large representative sample of contemporary Americans from the Health and Retirement Study (n = 10,688; mean age 67.7 years), we studied whether personality traits of the Five Factor Model were similarly associated with number of children and grandchildren, or whether antagonistic effects of personality on offspring number and quality lead to specific personality traits differently maximizing short and long-term fitness measures. Higher extraversion, lower conscientiousness, and lower openness to experience were similarly associated with both higher number of children and grandchildren in both sexes. In addition, higher agreeableness was associated with higher number of grand-offspring only. Our results did not indicate any quality–quantity trade-offs in the associations between personality and reproductive success. These findings represent the first robust evidence for any species that personality may affect reproductive success over several generations. (Berg et al., 2014)

The five factor model is a more primitive model of personality than the six-factor HEXACO, but Extraversion, Conscientiousness, and Openness are all essentially the same across the two models: three traits related to social engagement and energy levels, to organization and dutifulness, and to curiosity and intellectual engagement, respectively. These are personality traits with substantial heritabilities, passed down in large part from our parents. The first of these traits, Extraversion, doesn’t have much impact on scientific interest, but the latter two definitely do; Conscientiousness has well-established links with achievement (for example, consider Richardson & Abraham, 2009), while Openness has a specific relationship to achievement in science (see e.g. Kaufman, 2013). And in the above study, the fertility differentials were strongest for Openness:

Compared to people with low extraversion (−1 standard deviation, SD, below the mean), people with high extraversion (+1 SD above the mean) had 0.15 (5.6%) more children; compared to people with low conscientiousness, people with high conscientiousness had 0.11 (3.9%) fewer children; and compared to people with low openness to experience, people with high openness to experience had 0.24 (8.2%) fewer children. (Berg et al., 2014)

These results come from a very large sample (n > 10,000), and they’re corroborated by other research elsewhere in the world. For instance, a Norwegian study also found inverse relationships between number of offspring and both Conscientiousness and Openness (Skirbekk and Blekesaune, 2014). A study on earlier cohorts born between 1920 and 1960 documented the rising inverse relationship between number of children and Conscientiousness and Openness (Jokela, 2012). In short, we’re looking at a future without Velma Dinkley.
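To get a feel for what differentials like these mean over generations, here's a toy breeder's-equation projection for Openness using the Berg et al. numbers. The heritability figure is an assumption of mine, not something from the paper, so treat this as a rough sketch rather than a forecast.

```python
# Toy breeder's-equation sketch of the selection pressure on Openness implied
# by the Berg et al. differentials. The heritability value is my assumption,
# not from the paper; this is an illustration, not a forecast.
mean_children = 0.24 / 0.082   # ~2.93: implied mean, since 0.24 children = 8.2%
slope_per_sd = -0.24 / 2       # children per SD of Openness (+/-1 SD gap of 0.24)
# Selection differential in SD units: the trait's covariance with relative fitness
S = slope_per_sd / mean_children   # about -0.041 SD per generation
h2 = 0.4                           # ASSUMED heritability of Openness
R = h2 * S                         # expected genetic response per generation
print(round(R, 3))                 # roughly -0.016 SD of Openness per generation
```

Small per generation, but selection pressures like this compound; the point of the sketch is the sign, not the third decimal.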

So when I hear optimistic technophiles telling me about colonies on Mars or a Moore’s Law that never ends, I wonder, who is going to breathe life into those dreams? Their great grandchildren? Because somehow I doubt all those kids getting drunk at the beach and talking about which pop singer they want to see in the next summer blockbuster are going to make it happen.

References

Berg, V., Lummaa, V., Lahdenperä, M., Rotkirch, A., & Jokela, M. (2014). Personality and long-term reproductive success measured by the number of grandchildren. Evolution and Human Behavior, 35(6), 533-539.

Jokela, M. (2012). Birth-cohort effects in the association between personality and fertility. Psychological Science, 23(8), 835-841.

Kaufman, S. B. (2013). Opening up Openness to Experience: A Four‐Factor Model and Relations to Creative Achievement in the Arts and Sciences. The Journal of Creative Behavior, 47(4), 233-255.

Reeve, C. L., Lyerly, J. E., & Peach, H. (2013). Adolescent intelligence and socio-economic wealth independently predict adult marital and reproductive behavior. Intelligence, 41(5), 358-365.

Richardson, M., & Abraham, C. (2009). Conscientiousness and achievement motivation predict performance. European Journal of Personality, 23(7), 589-605.

Skirbekk, V., & Blekesaune, M. (2014). Personality traits increasingly important for male fertility: Evidence from Norway. European Journal of Personality, 28(6), 521-529.

Do Women like Pretty Boys or Manly Men?

Growing up, my impression was that girls weren’t as interested in square jaws and bulging biceps as they were supposed to be. Stereotypically speaking, power lifters with lantern jaws and huge shoulders are seen as sexual kings, but the women I knew were never interested in Arnold Schwarzenegger, even young Arnold Schwarzenegger. There were always differences of opinion, of course; some women loved rugged men. But overall my impression was that women liked well-formed, androgynous features on a man.

The manly ideal even makes its way into academia, with research articles titled things like “Sex drive is positively associated with women’s preferences for sexual dimorphism in men’s and women’s faces,” but which eventually state in the body of the text that

One-sample t-tests comparing preferences for sexual dimorphism with what would be expected by chance alone showed that women generally preferred femininity in both women’s (t(130) = 18.03, 2-tailed p < 0.001; M = 4.67, SE = 0.06) and men’s faces (t(130) = 4.12, 2-tailed p < 0.001; M = 3.21, SE = 0.07)…

Our ANCOVA revealed a significant main effect of sex of face (F(1,128) = 9.66, p = 0.002), whereby women preferred greater sexual dimorphism in female faces than male faces.
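For anyone curious what "compared with chance alone" means mechanically, it's just a one-sample t-test of the mean rating against a fixed reference point. Here's a minimal pure-Python version; the ratings and the midpoint-of-4 reference are invented for illustration, not data from the study.

```python
import math
from statistics import mean, stdev

def one_sample_t(xs, mu0):
    """t statistic and degrees of freedom for H0: population mean == mu0."""
    n = len(xs)
    se = stdev(xs) / math.sqrt(n)   # standard error of the sample mean
    return (mean(xs) - mu0) / se, n - 1

# Hypothetical femininity-preference ratings on a 1-7 scale (not real data);
# treating the scale midpoint of 4 as "chance" is also an assumption.
ratings = [4.2, 4.8, 5.0, 3.9, 4.6, 4.4, 5.1, 4.7]
t, df = one_sample_t(ratings, 4.0)
print(round(t, 2), df)   # t well above zero: mean preference sits above chance
```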

Similar findings were reported in a recent article on facial masculinity (Lyons et al., 2015), noting that “women rated the high psychopathy and narcissistic faces the most masculine… We also found that women showed a low preference for the high morphs in both long and short term relationships.” The authors noted that, in other studies, “Women perceive masculine faces as unfriendly, dominant, hostile, and manipulative,” and showed that women overall tend to have a low preference for such men.

So rather than asking whether women like masculinity or femininity in men’s faces, maybe the real question is, “Does anybody like facial masculinity?” And the answer is no, they really don’t. At least in the samples I’ve seen, which focus on the preferences of Western heterosexuals, Manface means Ugly Face. Kazakhstani homosexuals may well disagree, but as an American heterosexual I know I don’t like women with a very masculine appearance, and I’m married to a woman who isn’t keen on masculine males, either.

Of course, women definitely like men to be much taller than they are (Stulp et al., 2013), and of higher status and earning potential (Perilloux, Fleischman, & Buss, 2011). But overall they don’t like them to be very masculine in their faces. Some research suggests that this is because facial masculinity is a sign of greater aggressiveness or untrustworthiness (Carré & McCormick, 2008), but as long as we’re speculating, why not notice that, if women are so ruthlessly judged by their appearance, whereas men are not, a woman would be foolish to marry and have children with a lantern-jawed, browridged Neanderthal only to have him pass these traits on to their daughters?

No, no; far better to marry a bright-eyed, triangular-jawed elf in order to produce beautiful daughters who can fend for themselves when all of the family resources go into their brothers’ flash cars and medical school tuitions (or whatever historical analogue to flash cars and medical school tuitions you can think of). Although barbarous circumstances may select for physical prowess in males, for the past several centuries, the planet has been more or less civilized. Say what you will about the impoverished European Dark Ages or the iniquities of Chinese Imperialism, being able to read (especially in Latin) was highly valued across Christendom, and common people struggled to pass the Emperor’s imperial examinations across China. Middle Eastern merchants, Indian Brahmins, Egyptian scribes, Japanese samurai—virtually everywhere across the Old World, scholarly erudition was a way for men to win wealth, power, and wives.

Yes, pretty boys with diplomas may be more prone to broken noses in schoolyard scuffles. But civilized men resolve their differences with rapiers or pistols anyway, and if you’re going off to war, who doesn’t wear a nice shiny helmet? If the pressure on men to be able to take a solid jab to the jaw is relaxed, there is no need for men to look like men to get women. Eventually women who preferred heavily masculinized male faces would lose out reproductively as their homely, male-faced daughters married down and left them few grandchildren. Pretty soon, watch and see: the entire population likes smooth skin, big eyes, and pixie noses.

Skeptics may well point out that things may not have worked this way in areas of lower historical development in Africa or the Americas, or on the fringes of civilization in Wales or Siberia. But this isn’t really a problem for the line of reasoning I’m presenting here; all I need to explain is why people in the samples studied like facial femininity. Although clearly there were places for geeks and pretty boys to thrive as priests in the Mayan or Egyptian civilizations, maybe if we looked among these people today we actually would find them indifferent to facial femininity in men. It’s an interesting question.

Psychologists: get on it. The Internet needs to know.

References

Carré, J. M., & McCormick, C. M. (2008). In your face: facial metrics predict aggressive behaviour in the laboratory and in varsity and professional hockey players. Proceedings of the Royal Society B: Biological Sciences, 275(1651), 2651-2656.

Lyons, M. T., Marcinkowska, U. M., Helle, S., & McGrath, L. (2015). Mirror, mirror, on the wall, who is the most masculine of them all? The Dark Triad, masculinity, and women’s mate choice. Personality and Individual Differences, 74, 153-158.

Perilloux, C., Fleischman, D. S., & Buss, D. M. (2011). Meet the parents: Parent-offspring convergence and divergence in mate preferences. Personality and Individual Differences, 50(2), 253-258.

Stulp, G., Buunk, A. P., Kurzban, R., & Verhulst, S. (2013). The height of choosiness: mutual mate choice for stature results in suboptimal pair formation for both sexes. Animal Behaviour, 86(1), 37-46.