Friday, December 11, 2020

Epistemology of the Models—Climate, Economic, and Epidemiological

How do we know the future? Since we are not omniscient, we can’t know it with certainty. The National Oceanic and Atmospheric Administration (NOAA) SciJinks site tells us that five-day weather forecasts are about 90 percent accurate, seven-day forecasts drop to about 80 percent, and ten-day forecasts to roughly 50 percent.

We do nevertheless predict the future all the time. We predict what our spouses, children, and dogs will do tomorrow and the next day based on what we know about them, that is, what we have learned cumulatively about them over the years. Businesses make predictions of sales based on what they know about the current market, their customers and prospects. And the National Aeronautics and Space Administration (NASA) makes predictions about human survival in outer space based on the current state of science, test runs with monkeys, and a lot of trial and error.

This, in essence, is forecasting: extrapolation from past knowledge into the near future. The emphasis is on “near” because, as with weather forecasts, the further from the present we get, the less accurate the predictions.
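
To make the idea concrete, here is a minimal sketch, with invented numbers, of what “extrapolation from past knowledge” amounts to in its simplest quantitative form: a straight-line trend fitted to a handful of past temperature readings and projected forward. The readings, and the assumption that the trend simply continues, are illustrative only.

    # A toy forecast by extrapolation: fit a straight line to past observations
    # and project it forward. All numbers are invented for illustration.
    past_days = [0, 1, 2, 3, 4]
    past_temps = [10.0, 11.1, 11.9, 13.2, 13.8]  # hypothetical readings

    # Ordinary least-squares slope and intercept, computed by hand.
    n = len(past_days)
    mean_x = sum(past_days) / n
    mean_y = sum(past_temps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(past_days, past_temps))
             / sum((x - mean_x) ** 2 for x in past_days))
    intercept = mean_y - slope * mean_x

    for day in (5, 7, 10):
        print(f"day {day}: projected {intercept + slope * day:.1f} degrees")

The projection is equally confident at day five, day seven, and day ten; reality is not, which is why the ten-day number is so much less reliable than the five-day one.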

Dressing forecasts up in mathematics or computer algorithms does not make them more accurate if their starting knowledge is dubious. It can and does fool many people into thinking that profound science is being performed!*

Meteorologist Anthony Watts identifies the problem with the so-called scientific or quantitative climate forecasts: “The flaws in existing climate models are equivalent to saying that every grain of sand on the beach is exactly the same size, shape, and composition, or that snowflakes aren’t unique, but all are exactly the same.”

In other words, aside from the horrendous politicization of climate science, the modelers ignore, or do not even know about, the significance of Aristotle’s law of identity and its relation to causality. Aristotle’s concept holds that the actions or behavior of an entity are determined by the entity’s nature, that is, its identity.

If grains of sand differ from one another, and if individual snowflakes are not all quantitatively the same, those differences matter when developing mathematical formulas to describe reality. If the differences are not taken into account, predictions of how sand and snowflakes will behave in the future will fail.

The assumptions of modelers do not accurately capture the reality of the entities they are studying.

Ayn Rand points out that today’s abandonment of any shred of Aristotelian epistemology has led in such sciences as psychology and economics to “the resurgence of a primitive mysticism.” Psychology, for example, attempts “to study human behavior without reference to the fact that man is conscious” and political economy, or economics, attempts “to study and to devise social systems without reference to man” (emphasis in original).

The nature of man, that is, the identity of human beings, is that humans possess a consciousness with the capacity to reason. The key word is “capacity,” meaning humans possess free will and must choose to exercise that capacity. Thus, our choices can and do thwart all the “elegant” equations devised by psychologists and economists. Indeed, free will, and the failure to acknowledge it, helps explain why so many studies in the so-called social sciences fail to replicate.

In economics, the “model” of society described by the doctrine of pure and perfect competition relies on self-evidently false assumptions: product homogeneity, no barriers to entry, “perfect” information, infinite numbers of buyers and sellers, and a stilted, deterministic concept of economic rationality. These assumptions are so arbitrary and removed from reality that they would be laughable had the model not been the basis of our antitrust laws for over one hundred years.

The use of simultaneous equations to predict economic equilibrium, or the use of any other equations in the human sciences, is obfuscation.
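
To show concretely the kind of calculation being criticized, here is a minimal sketch of an equilibrium computed from simultaneous equations: a toy linear demand curve and supply curve solved for the price at which they intersect. The curves and their parameters are invented for illustration, not drawn from any actual economic model.

    # Toy "equilibrium" from two simultaneous linear equations:
    # demand Qd = a - b*P, supply Qs = c + d*P, solved for Qd == Qs.
    def equilibrium(a, b, c, d):
        """Return (price, quantity) where a - b*p equals c + d*p."""
        price = (a - c) / (b + d)
        quantity = a - b * price
        return price, quantity

    # Hypothetical parameters: demand Qd = 100 - 2P, supply Qs = 10 + 4P.
    p_star, q_star = equilibrium(a=100, b=2, c=10, d=4)
    print(f"equilibrium price {p_star:.2f}, quantity {q_star:.2f}")  # 15.00, 70.00

The arithmetic is exact to the penny; the objection above is that the exactness is spurious, because the parameters pretend to summarize the choices of countless free individuals.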

Novelist Sarah Hoyt makes a similar point about the epidemiological models that attempt to predict how many deaths will result from a particular virus, such as COVID-19. Hoyt borrows a common physicists’ joke to parody the modelers: they have, in effect, she says, assumed “a spherical cow of uniform density in a frictionless vacuum.” The missing variable in the models, she continues, is culture. Because people constitute culture, their differences will be reflected in their reactions to a virus.

To elaborate, the modelers’ assumption of a particular “R naught” (R0 or r-sub-zero) at, say, 3.2 means each infected person will infect an average of 3.2 people.** Hoyt’s point is that people have choices as to how to behave, which will affect that number. Throw in other variables that are just as oh-so-(but-not-really) precise, including the nearly entirely ignored variable of prior immunity, and major embarrassing—or it should be embarrassing—error results.
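
As a rough illustration of why that one assumed number matters so much, here is a minimal back-of-the-envelope sketch, not any actual epidemiological model, of how an R0 of 3.2 compounds across generations of infection, and how choices about behavior and the ignored variable of prior immunity pull the effective number down. The percentages are invented for illustration.

    # Toy compounding of an assumed R0 across "generations" of infection.
    def cases_in_generation(r0, generation, contact_reduction=0.0, prior_immunity=0.0):
        """New cases in the nth generation from one seed case.

        contact_reduction: fraction by which people choose to cut their contacts.
        prior_immunity:    fraction of contacts already immune.
        Both are illustrative knobs, not measured quantities.
        """
        r_effective = r0 * (1 - contact_reduction) * (1 - prior_immunity)
        return r_effective ** generation

    print(round(cases_in_generation(3.2, 10)))                         # ~112,590
    print(round(cases_in_generation(3.2, 10, contact_reduction=0.3)))  # ~3,180
    print(round(cases_in_generation(3.2, 10, contact_reduction=0.3,
                                    prior_immunity=0.2)))              # ~341

Small changes in what people actually do swing the tenth-generation count by more than two orders of magnitude, which is Hoyt’s point in miniature.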

A side note on medical studies, especially the alleged gold standard of double-blind controlled experiments: human beings are the sample, and they are all different—in height, weight, and metabolism, not to mention their choices. That can make a difference in how much “gold” comes out of these studies. How sophisticated are the studies used by NASA? Sometimes scientists on the ground ask the astronauts, “Which worked better, X or Y?” The astronaut replies, “X worked.” NASA then says, “We go with X.” That’s trial and error, the same technique medical doctors use when they prescribe off-label drugs!

The bottom line of my critiques of these models is that they all ignore causality.

So why is weather forecasting so much more accurate—even at 50 percent over ten days—than forecasting changes in climate? “Modern weather forecasting,” says one writer, “is based on the fact that gases of the atmosphere follow a number of physical principles.” Universal principles of physics, in other words, have identified the nature of the relevant gases and their respective actions, and have consequently allowed the development of equations that can accurately predict changes in the weather.
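
One example of such a principle, offered here purely as an illustration and not as anything the writer quoted above cites, is the ideal gas law, which ties air pressure to density and temperature through a fixed constant.

    # One physical principle atmospheric gases obey: the ideal gas law,
    # p = rho * R * T for dry air. The sample values are approximate.
    R_DRY_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

    def air_pressure(density_kg_m3, temperature_k):
        """Pressure in pascals from density and temperature."""
        return density_kg_m3 * R_DRY_AIR * temperature_k

    # Roughly sea-level conditions: 1.225 kg/m^3 and 288.15 K (15 C).
    print(round(air_pressure(1.225, 288.15)))  # about 101,325 Pa, one standard atmosphere

Because relationships like this hold everywhere and always, a weather model that starts from accurately measured initial conditions can carry them forward with real, if short-lived, predictive power.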

Predictions of what goes on in the climate, especially over decades and centuries, are not close to such accuracy. Indeed, former NASA scientist Roy Spencer says that clouds, not carbon dioxide, the variable alarmists preach as the fundamental cause of climate change, need to be researched extensively before any conclusions are drawn.


* Quantitative and computer-based models are to be distinguished from scale models that test how something might actually work at full size. A dramatic example was the radio-controlled Boeing 747 with a space shuttle attached, built at 1/40 scale by NASA engineer John Kiker in the 1970s to test the feasibility of putting a real space shuttle on top of a real Boeing 747!

** When I first read about the r-naught, I immediately thought of word-of-mouth communication in marketing. How many people do we tell when we like or dislike a product? Studies vary, but the consensus says we tell more people when we dislike the product. The moral of the comparison to r-naught? People are involved in both numbers, and their choices affect the accuracy of each. Modelers, or, more generally, predictors, must take that variable, and the identities of the people involved, into account.