Thursday, March 22, 2012

The Rational Optimist and The Grand Complication

Finding a quotable quote is a challenge because all 359 pages are exciting and pithy.  This is an antidote to the ever-popular doomsaying.  Pessimism has been an easy sell for hundreds of years.  Predictions of the end of the world transcend religion; they took on mathematical precision during the very Industrial Revolution that disproved them.

The Rational Optimist: How Prosperity Evolves
by Matt Ridley (Harper, 2010).

“If this goes on…” by 2030, China will need more paper than Earth produces… we will run out of petroleum (of course)… we will be crowded, starved, polluted, ignorant; and the few survivors will be poorer than dirt to the end of their days.  True enough, says Ridley.  But the big “if” never obtains because the world is constantly changing, improving, getting better. “If this goes on…” fails because “this” never “goes on” but in fact is altered by something unexpected.  Yes, there are dark ages, plagues, famines, and wars, but generally, since the invention of trade about 18,000 years ago, our lives have gotten exponentially better.  Taking a word from Austrian economics, Ridley calls this “the great catallaxy.” 

The book opens with a photograph of a stone hand ax and a computer mouse.  Both fit the human hand.  The stone tool was made by one person for their own use.  Thousands of people made the mouse, and no one of them knew how.  From the petroleum for the plastic to the software driver, each person did one thing; and it comes to you in exchange for the one thing you know how to do.  The maker of the hand ax enjoyed nothing they did not get for themselves.  (Among Homo erectus and the Neanderthals, it seems that both males and females hunted by the same methods.)  The hunter-gatherer was limited to their own production – and so could not consume very much.  We enjoy unlimited access to the productive work of others.  Each of us has, in effect, hundreds of servants, and would be the envy of any warrior, peasant, chief, or king for our cheap, easy, and sanitary lives.

Each chapter begins with a graphic showing the exponential improvement in life span, health, prosperity, and invention.  Another one shows the hyperbolic fall in homicides and yet another shows the dramatic decline in US deaths by water-borne diseases.  Ridley examines barter and trade (“the manufacture of trust”), the agricultural revolution, urbanization, and the invention of invention.  Each turn of the page overturns a common assumption.  Just for instance, shopping for locally produced food more often results in less efficient use of petroleum; and, of course, it penalizes farmers in poor countries. 

Ridley supports his claims with citations at the back, each keyed to the page on which the assertion is made.  That said, it is important to keep your calculator handy.  I found out about this book from a review (here) on the Objectivist website, Rebirth of Reason.  There, one of the frequent contributors cited this from the book:

Time you would have to spend working, in order to earn an hour's worth of reading light (and the ancillary benefits that reading brings to mankind)
--1750 BC (sesame oil lamp): over 50 hours
--1800 (tallow candle): over 6 hours
--1880 (kerosene lamp): 15 minutes
--1950 (incandescent light bulb): 8 seconds
--Today (compact fluorescent bulb): less than 0.5 seconds


I replied there:
Nonetheless, it is important to see through the razzle-dazzle.  Ridley presents the same information twice in two different formats: first, how much light an hour of labor would buy; then, how long you would have to work to earn the equivalent of an 18-watt compact fluorescent reading light for one hour.
In the first case, he offers, for 1750 BCE, 24 lumen-hours.  A lumen is about 1/12 of a candle (.079).  So, one hour of labor then would buy you about 2 hours of reading light ... but not the 18 watts of a modern fluorescent lamp.  (A lumen is about 1/700 of an effective watt of spherical light -- .00147 -- so 18 watts is like 12,000 lumens.)  I agree that we get more light - incredibly more - but the presentation of numbers does little to illuminate the subject.  A modern lamp gives the same light as 1,000 candles, but you would never put 1,000 candles in one place just for reading.
More stunning to me is the growth in fractional machines: fractional-horsepower motors and fractional-watt lamps.  My internet modem is showing six lights, each less than a candle (maybe a lumen; maybe less), and all of them generating but the merest fraction of the heat.
Again, I liked the book, but I am reminded of Richard Feynman's injunction that we not fool ourselves with what we want to believe.
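For the curious, those conversions are easy to check.  Here is a minimal sketch in Python; the photometric constants (about 12.57 lumens per candela, 683 lumens per watt for an ideal source) are standard values, and the 24 lumen-hours is Ridley's figure:

# Back-of-the-envelope check of the reading-light arithmetic above.
LUMENS_PER_CANDELA = 12.57   # one candela radiates over 4*pi steradians
LUMENS_PER_WATT = 683        # peak luminous efficacy (ideal source, 555 nm)

lumen_hours_1750bce = 24     # Ridley: an hour of labor bought 24 lumen-hours
candle_hours = lumen_hours_1750bce / LUMENS_PER_CANDELA
print(f"1750 BCE: about {candle_hours:.1f} hours of single-candle light")
# -> about 1.9 hours: the "about 2 hours" in the reply above

lumens_18w = 18 * LUMENS_PER_WATT            # an ideal 18-watt source
print(f"18 W is roughly {lumens_18w:,.0f} lumens, "
      f"or {lumens_18w / LUMENS_PER_CANDELA:,.0f} candles")
# -> roughly 12,294 lumens: on the order of 1,000 candles, as noted above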
That said, if you want to sprinkle your social chat with positive endorsements of modern times, get this book and memorize a few facts.  Life is great now.  Comparisons to the Ice Age are easy.  Ridley compares our world of 2010 to the world we left behind in 1955.  Think about your cellphone, GPS, the amount of music on a CD or your choice of DVDs.  I add that even organic food – for all the intellectual errors in that movement – is easier to get at any grocery store today as major retailers cater to our tastes.  Imagine going to McDonald’s of 1955 and asking for a salad.  If you are not an optimist now, this book could change your worldview.

The Grand Complication: a novel by Allen Kurzweil (Theia, 2001) is a delightful story that delivers an overstuffed display case of trivia centered on libraries.  Each of the Acknowledgments at the back begins with a Dewey Decimal call number (“case mark”).  I made four pages of notes for myself.  And I found some problems.  The story is easy enough: boy meets girl; boy loses girl; boy finds girl. 

Our librarian hero and his artist wife are torn apart by his obsession with work for a private collector who entices him to search for a grand piece of mechanical jewelry, "The Marie Antoinette," made by Abraham-Louis Breguet (his catalog number 160) and stolen in 1983 from a Jerusalem museum.  (Wikipedia here.)  It was recovered after this book was published. (Breguet website here.)
The title is a good pun: like the timepiece, the antagonist himself is a complication.  Eventually, our hero triumphs, of course, with the help of his friends and his wife.

Going back over my notes, I find that some of the vocabulary seems local to the author’s own library.  Not all of the special terms were corroborated by these glossaries:

http://guides.library.fullerton.edu/slis/libraryterms.htm
http://www.abc-clio.com/ODLIS/odlis_c.aspx
(Both of those found via http://vault.lib.byu.edu/term/english.php)
http://lib.colostate.edu/lingo/
http://bindery.berkeley.edu/libraries/glossary
http://www.alibris.com/glossary/
However, I was happy to learn both the proper name for pages left uncut at the edges ("unopened gathering") and the proper way to cut them apart.  Deckle, finding aid, class marks, phase box, … KWIC (key word in context), and a slew of Dewey Decimal numbers (Rock Music 781.66) to add to those I already knew.

The attention to detail made the errors I caught stand out.  On page 23 is a "blond woman."  On page 147 is an allusion to a mathematician who won a Nobel Prize.  It is not impossible for a mathematician to garner the award for work in medicine or world peace, but it is famously known that there is no Nobel Prize in mathematics.  On page 338, to activate old automata, the hero uses old coins, including a Liberty dime, for which there are several possibilities, from the Draped Bust and Seated designs through the so-called "Mercury" or "Winged Head Liberty."  There follows a simple typographical error, no capital B in "Buffalo nickel" (properly called the Indian Head or Buffalo 5-cent nickel).  But such oversights made this book even more fun to read.
An interesting glitch from my local library is their catalog citation:
The grand complication
    Kurzweil, Allen.
Publisher::Hyperion,
Pub date::c2001.
Pages::359 p. ;
In the book itself, our hero says that he has written a book about this adventure and had it carefully typeset to a perfect 360 pages for the 360 degrees in a circle.  Indeed, the book runs 360 - not 359 - pages.

Saturday, March 17, 2012

South by Southwest 2012

Working as a security guard at 6th and Congress, I found that SXSW came to me.

Dee Hemingway in Starlet by Sean Baker

Violin Monster played in the afternoons

A bit less crowded during the day ...

We all waited for the buses at night
Balafonist Abousylla's guitar accompanist is off camera to the left

Other "Monsters" came out at night.
I got a free t-shirt from http://www.tunein.com/ ("listen forever").  I did not get any of the very many apps being given away or sold, such as Stage Page.  The cute couple from City Maps were outshone by their own glowing sign.
Protest marches (and celebration marches) drew police escorts. 
The first days were rainy.  The last days were cloudy.
I started the week without coffee, as my usual stop - The Hideout - was by SXSW badge only through Wednesday, so I went to Frank, where they do not merely brew coffee, they transubstantiate it.
Royal Blue Grocery was sold out of gum, a minor inconvenience in exchange for the additional tourism that came to town; I met people from Philadelphia and Seattle.


Thursday, March 15, 2012

Harriman's Logical Leap Almost Makes It

The root of the problem with his presentation is that the audience is not defined.  If he were writing only for others in his peer group – he holds master’s degrees in physics and philosophy – then much of Harriman’s narrative could have been deleted.  That the book is mass-marketed indicates a wider audience for whom more or better explanation is needed. Rather than trying to replace the accepted meaning of the inductive method, Harriman should simply call his the objective (or Objectivist) method. 

The Logical Leap: Induction in Physics by David Harriman, with an introduction by Leonard Peikoff. New American Library, July 2010. Paperback, 279 pages + vi, illustrations. $16.00.
Despite some flaws in the presentation, David Harriman's proposal for a new scientific methodology is interesting, valuable, and important.  Harriman's thesis is that induction is actually the integration of a new experience with the totality of all previous experience for the purpose of creating a new generalization.  One example is enough for a generalization, if it is validly composed.  According to Harriman, to be valid, an induction must be derived from a first-level generalization.  To demonstrate the truth of his claim, Harriman provides examples from the works of Galileo, Newton, and Dalton, among others.
[edited and shortened February 8, 2016]

In philosophy the “problem of induction” is defined by the question “How much evidence is enough?”  David Harriman’s answer is provocative on several grounds.  One fact is enough to validate a theory, if that fact is properly integrated with everything else known to be true.  That much alone would be challenging.  The back cover of this book credits Ayn Rand’s theory of epistemology as the starting point for Harriman’s work.  That flag is necessary for those who do not know Leonard Peikoff as Ayn Rand’s appointed “intellectual heir.”  Peikoff wrote the introduction, and, it is revealed, tutored Harriman in the use of induction in physics.  But Peikoff is a philosopher (doctorate from NYU) and so Harriman attempts the technical proof. 


David Harriman is not the only working physicist to blunder about orbital mechanics.  It is an easy error to say that the path of a projectile is a parabola (p. 50).  Later, discussing Newton, he does note that the path of an object in orbit under an inverse-square law of central force motion can be any conic section (though he leaves out the line).  However, in this part he is explicit about the parabolic path of a projectile.  Thirty years ago, I caught Scientific American in this same error; and for them, I photocopied a page from The Wonders of Physics by Irving Adler (Golden Books, 1966).


The Wonders of Physics:
an Introduction to the Physical World
by Irving Adler 
(Illus. by Cornelius De Witt); New York,
Golden Press [1966].
We take the parabola as an approximation for projectile motion by assuming that the Earth is flat. 

This is helpful to students for whom the mathematics of this curve is easier than that of an ellipse.  The ellipse is the most common orbital path in our immediate experience.  Harriman does not make this distinction.
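The distinction is easy to show numerically.  Below is a toy integration (mine, not the book's): the same launch is flown once under uniform "flat Earth" gravity, which gives the parabola, and once under an inverse-square central force, which gives an arc of an ellipse.  For slow, short shots the two agree to within meters, which is why the approximation serves; at higher speeds they diverge, because the "flat Earth" falls away beneath the projectile.

import math

R = 6.371e6            # Earth radius, m
g0 = 9.81              # surface gravity, m/s^2
mu = g0 * R ** 2       # gravitational parameter GM, from g0 = GM/R^2

def downrange_km(v, inverse_square, dt=0.05):
    """Launch at 45 degrees with speed v; return ground distance in km."""
    vx = vy = v / math.sqrt(2)
    x, y = 0.0, R                                     # from Earth's center
    while True:
        r = math.hypot(x, y)
        if inverse_square:
            ax, ay = -mu * x / r**3, -mu * y / r**3   # central force
        else:
            ax, ay = 0.0, -g0                         # uniform gravity
        vx += ax * dt; vy += ay * dt                  # semi-implicit Euler
        x += vx * dt;  y += vy * dt
        if (math.hypot(x, y) if inverse_square else y) < R:   # landed
            break
    # ground distance: arc length on the round Earth, plain x on the flat one
    return (R * math.atan2(x, y) if inverse_square else x) / 1000.0

for v in (100, 1000, 3000):                           # launch speeds, m/s
    print(f"{v:>4} m/s: parabola {downrange_km(v, False):8.1f} km,"
          f" ellipse {downrange_km(v, True):8.1f} km")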

If projectile motion can be explained to a child, then it should be stated correctly in a technical treatise on the epistemology of science.

This oversight is especially significant as the author claims to be explaining how the scientific revolution of the Renaissance replaced earlier mysticisms.  Galileo knew that the Earth is round; that fact was known to Aristotle.  The diameter of the Earth was measured by Eratosthenes. 

Galileo failed to make the logical leap that Newton finally did when he demonstrated via his calculus why an inverse-square force results in orbits that are conic sections.  In fact, in his introductions to editions of the Principia, Newton credits the ancients (“Chaldeans”) who “long ago believed that the planets revolve in nearly concentric orbits, around the sun and that comets do so in extremely eccentric orbits…” (Cohen/Whitman translation, 1999).

Newton's works are prominent in this book, and rightfully so.  Newton was arguably the greatest scientist of all time.  However, Newton maintained that light consists of corpuscles, even though his own experiments with optics argued against his theory of light.  Newton maintained faith in a hypothesis that he could not prove.  Harriman glides past this problem (pp. 50-67).  Later, Harriman derides the "wavicle" of modern physics.  He also denigrates René Descartes.  As an Objectivist, Harriman is opposed to Cartesian rationalism.  However, Descartes is credited with proving that light refracts according to the ratio of the sines of the angles of incidence and refraction.  In American schools, we call this "Snell's Law," but Willebrord Snellius did not publish it.  So, Descartes is credited with the independent discovery that sin(I)/sin(R) = k.  Pierre de Fermat also proved this mathematically (rationalistically) from the principle of least time.
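A minimal sketch of that constant ratio (my numbers; k is about 1.33 for light passing from air into water):

import math

def refraction_angle(incidence_deg, k=1.33):
    """Snell's law, sin(i)/sin(r) = k; return the refracted angle in degrees."""
    sin_r = math.sin(math.radians(incidence_deg)) / k
    return math.degrees(math.asin(sin_r))

for i_deg in (10, 30, 60):
    r_deg = refraction_angle(i_deg)
    ratio = math.sin(math.radians(i_deg)) / math.sin(math.radians(r_deg))
    print(f"incidence {i_deg:>2} deg -> refraction {r_deg:5.1f} deg (ratio {ratio:.2f})")
# The ratio holds at 1.33 for every angle -- the constant k of the text.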

Just as we speak too easily of parabolic motion, so, too, do we accept “white light.”  No such thing exists.  All electromagnetic phenomena exist in discrete wavelengths and white is not one of them.  It is true that if we project a mix of colors (red, blue, green; magenta, cyan, yellow) on a white screen, the screen remains white.  Projecting only a beam of red light on a white screen, the illuminated area appears red.  The perception of “white” is a consequence of perceiving several colors at once.  Harriman uses vernacular English to praise Newton for discovering that white light is composed of colors.

Attempting to explain the development of the atomic theory, Harriman offers an erroneous simile comparing a hydraulic pump to a lever (pp. 123-124).  Explaining the theory of the fluid barometer, he writes: "It is similar to the action of a lever; the weight of the air will raise the same weight of water (per unit surface area).  Here the weight of the entire atmosphere above a particular surface must be equal to the weight of thirty-four feet of water over the surface."  It is true that all simple machines -- wedge, lever, wheel and axle, and screw -- allow us to trade force, distance, time, speed, or work.  Considering conservation of energy, the liquid barometer could be likened to any of them, but it would be stretching the analogy.  A lever works by trading force and distance: the 50-lb. child at the end of a teeter-totter lifts the 150-lb. man sitting on the other side but closer to the center.  The hydraulic lift is not a lever any more than a screw is a pulley.
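The teeter-totter reduces to a one-line torque balance.  A sketch with the weights from the text (the six-foot arm is my illustrative number):

# A lever trades force for distance: weight x arm is equal on both sides.
child_weight = 50.0     # lb
man_weight = 150.0      # lb
child_arm = 6.0         # ft from the pivot (illustrative)

man_arm = child_weight * child_arm / man_weight   # torque balance
print(f"The 150-lb man balances at {man_arm:.1f} ft from the center")  # 2.0 ft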

Harriman states in words what would be easier given as symbols.  Numbers are written out.  This reflects the lack of a defined audience.  Harriman explains some things but glosses over others; and it is hard to know when he is being technical or vernacular. 

Consider the allusions to elastic and inelastic collisions.  "… [Newton] deliberately varied the mass of the bobs and thereby proved that his law applied to both elastic and inelastic collisions." (p. 127)  Referring to the standard college textbook by Sears and Zemansky (now Young and Freedman, Sears and Zemansky's University Physics): in a perfectly inelastic collision, the two bodies stick together, their kinetic energies before and after are not conserved, and the difference lost is converted to heat.  I believe that here Harriman is using the word "elastic" in its vernacular sense: balls of yarn or wood were deformed more or less by the impacts, with negligible consequences for the experiment.  However, discussing the kinetic theory of gases, Harriman uses elastic and inelastic in their proper technical senses (p. 166).
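The technical sense fits in a few lines: the bodies stick together, momentum is conserved, and the missing kinetic energy becomes heat.  A minimal sketch with made-up masses and speeds:

def perfectly_inelastic(m1, v1, m2, v2):
    """Two bodies collide and stick; return final velocity and energy lost."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)        # momentum is conserved
    ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    ke_after = 0.5 * (m1 + m2) * v_final**2
    return v_final, ke_before - ke_after             # the difference is heat

v, lost = perfectly_inelastic(m1=2.0, v1=3.0, m2=1.0, v2=0.0)   # kg, m/s
print(f"final velocity {v:.2f} m/s; kinetic energy lost {lost:.2f} J")
# -> 2.00 m/s and 3.00 J: momentum survives the collision; kinetic energy does not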

Kepler suggested that perhaps the sun attracts the planets with some kind of magnetism.  Newton ruled out magnetism in Corollary 5 to Proposition VI, Theorem VI in the Principia.  However, magnetism had to be considered.  Newton's measurements suggested that the power of magnetic attraction diminishes at a proportion between an inverse-square and an inverse-cube.  Today, we know that the field of a magnetic dipole diminishes as the inverse-cube, but that the force of attraction toward either pole follows the inverse-square rule.  Thus, gravity, static charge, and magnetism all were contenders to explain the motions of the moon and the falling apple.  As Harriman notes:  "Different causes can lead to qualitatively similar effects (e.g., a magnet with an electric charge on its surface will attract both straw and iron filings, but for different reasons)" (p. 137).  But Harriman is in error when he continues: "However, when Newton proves that the moon and the apple fall with rates that were precisely in accordance with a force that varies as the inverse square of the distance from Earth's center – then there can be no doubt that the same cause is at work" (p. 137).  Strictly on the basis of the inverse-square attraction, both magnetism and electric charge could have been the cause.
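The two falloff laws are easy to compare numerically; a sketch normalized to unit strength at unit distance:

# Inverse-square vs. inverse-cube falloff, normalized to 1 at r = 1.
# A dipole's field fades as 1/r^3; a point source (or either single pole)
# attracts as 1/r^2 -- Newton's rough data fell between the two laws.
for r in (1, 2, 4, 8):
    print(f"r = {r}:  1/r^2 = {1 / r**2:8.5f}   1/r^3 = {1 / r**3:8.5f}")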

Harriman says that Newton experimented with magnets floated on wood in a tub of water.  According to Harriman, that the magnets were mutually attracted without causing a net motion of the tub proved that the attractions were directed equal and opposite to each other (pp. 127-128).  That experiment proves nothing of the sort.  Placing the magnets in a tub of water and measuring their motions, one might discover several facts, for instance, that some materials magnetize more strongly than others or (counterfactually) that different objects are attracted with unequal accelerations.  But there is no way that they could move the tub, even if they banged into the sides.  It is a standard problem in freshman physics to determine whether a person standing on a (frictionless) rail car could move it by firing a bullet at an opposite wall. 
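The rail-car problem turns on conservation of momentum: internal forces cannot shift the system's center of mass.  A minimal sketch with made-up numbers:

# A shooter on a frictionless rail car fires a bullet at the far wall.
m_bullet, m_rest = 0.01, 500.0    # kg; m_rest is the car plus the shooter
v_bullet = 400.0                  # m/s, relative to the ground

v_car = -m_bullet * v_bullet / m_rest      # recoil while the bullet flies
print(f"car recoils at {v_car:.4f} m/s while the bullet is in flight")

total_p = m_bullet * v_bullet + m_rest * v_car
print(f"total momentum stays {total_p:.12f} kg*m/s")   # zero throughout
# When the bullet lodges in the far wall, everything stops again.  The car
# has shifted slightly, but the center of mass has not moved at all --
# which is why the floating magnets could never propel the tub.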

Harriman goes on to say
“Since Earth attracts all materials on its surface, it was reasonable to suppose (and it would later be proven) that every part of Earth attracts all other parts.  So consider the mutual attraction, say, of Asia and South America.  If these two forces were not equal and opposite, there would be a net force on Earth as a whole – and hence Earth would cause itself to accelerate.  This self-acceleration would continue indefinitely and lead to disturbances in Earth’s orbit” (p. 128).
Again, Asia might be more strongly attracted to South America than that continent is to Asia.  All actions would take place on the "tub" of the Earth, within the same inertial frame of reference.

Denigrating ancient and medieval astronomy, Harriman claims that the relative sizes of the orbits of the planets could not be computed (pp. 86-88).  This was not true; and Harriman must know that, because he says that Ptolemy estimated the distance to the stars (p. 88).  Moreover, if it were true that the geocentric model prevented such calculations, then the ancient astronomers must have used some other model, because the relative sizes of the orbits were computed.  The ancients did not believe that all of the celestial lights were spread on a single sphere.  They knew that the moon is much closer than Saturn.  On the other hand (more reasonably), the geometry and observations of the time did, indeed, allow them to make those calculations, even assuming the geocentric model.  In fact, because of the religious viewpoint, the very scale of the measurable universe and the comparatively small size of the (spherical, not flat) Earth were substantiating evidence of the relative unimportance of Earthly affairs.  (See Astronomies and Cultures in Early Medieval Europe by Stephen McCluskey, Cambridge, 1998.)

Measurement was always important to the medieval astronomers, who welcomed the new astrolabe imported from the Muslims.  Thus, it is no surprise that measurement of the Earth's diameter and the distance to the moon were important to Sir Isaac Newton.  Harriman says that Newton accepted the numerical approximation of 60 Earth radii as the distance to the Moon (pp. 136-137).  In fact, Newton was not comfortable with these approximations, but he had to settle for them.  He was being stonewalled by John Flamsteed, the Astronomer Royal, who also was working out the celestial mechanics of the Earth-Moon system and did not want to share his data.  These facts about Newton are in the standard modern biographies by Richard Westfall (Never at Rest), Michael White (Isaac Newton: The Last Sorcerer), and David Berlinski (Newton's Gift: How Sir Isaac Newton Unlocked the System of the World).

Harriman has his own new theory of science, dismissing the accepted scientific method. 
“Today, it is almost universally held that the process of theory creation is nonobjective.  According to the most common view, which is institutionalized in the so-called “hypothetico-deductive method,” it is only the testing of theories (i.e., comparing predictions to observations) that gives science any claim to objectivity.  Unfortunately, say the advocates of this method, such testing cannot result in proof – and it cannot result even in disproof, since any theory can be saved from an inconvenient observation merely by adding more arbitrary hypotheses.  So the hypothetico-deductive method leads invariably to skepticism” (pp. 145-146). 
 Thus, to Harriman, Newton’s experiments did not validate Descartes’ (more correct) theory of light.

Harriman would do well to heed his own words.  “Introspection is clearly an indispensable source of data, since philosophy studies consciousness and an individual has direct access only to his own” (p. 233).  We have the introspective reports of Richard Feynman, Kary Mullis, Albert Einstein, Francis Crick and James D. Watson, and many others, all of whom report something different in their heads than what Harriman claims must be true of all humans, based, we can assume, on his own introspection. 

Richard Feynman's The Character of Physical Law delivers an outstanding explanation of why the hypothetico-deductive method works.  Norman W. Edmund, the founder of the Edmund Scientific Corporation, created a superb website at www.scientificmethod.com/.  He teaches a 14-step process which touches on philosophy.  Any public school science teacher knows the many posters and other aids that present a 5- or 9-step method.  Regardless of the specifics, Harriman mischaracterizes the scientific method when he claims that it indulges in rationalist fantasies (p. 142).  It is true that you can "make up" any airy explanations you want, but the only ones that count are the ones that can be tested.  Harriman ignores that.

Arguing in the grand style of Ayn Rand, he broadly accuses an unnamed collective of committing evils, and then draws his own conclusions about what they really believe: "Today, it is almost universally held that the process of theory creation is nonobjective" (p. 142).  He does later resort to the Randian device of naming evil professors, such as Paul Feyerabend, but nowhere does Harriman provide any support for his claim that what he opposes is "almost universally held."

For Ayn Rand, a person’s fundamental existential choice – to be or not to be – is to think or not to think.  Choosing to think is the essence of being human.  That raises the challenge, “Are you not thinking when you choose not to think?”  Rand’s answer came via psychologist Nathaniel Branden (at first an Objectivist himself, then developing his own Biocentric theories).  Psychological suppression is an avoidance mechanism to prevent unpleasant thoughts.  The thought process is abandoned before the thought can be fully formed.  This can begin as denial, justification, or rationalization, but typically is an emotional precognitive response to a potentially painful identification. 

Similarly, Harriman's Objectivist theory of induction apparently rests on the very hypothetico-deductive method that he denies: in order to make a logical leap, do you not first carry out a series of experiments, any one of which could falsify the previous work, until a better theory explains them all?  Harriman praises both Galileo and Newton for their careful and repetitive work.  Then, he denies the repetitive aspect of induction, claiming that these scientists "leapt" to valid conclusions.  Harriman needs a meta-explanation.

Is it inherent in human nature to think by induction?  Is this why we have superstitions as well as science: because we leap to general conclusions based on single instances?  If so, what is the nature of this abstracting?  Where in the brain does it occur?  What chemicals cause it?  Can you go through life never doing it?  Or must you always do it?

Moreover, the hypothetico-deductive method is how we validate and verify the works of others.  Explanatory theories are easy to devise.  To be scientific, an explanation must be tested.  The claim, no matter how compelling it seems, must be tried against new data, not against the original set.  And, best of all, a valid theory leads to new predictions not in the original data.

Harriman requires that to be valid an induction must be integrated with all previously known truths.  If that alone were enough, then any theory might be falsified by the discovery of a new phenomenon.  That brings us back to the very problem Harriman claims to solve.  He wants to avoid the debilitating skepticism that hobbles philosophers of science.  We can never be sure of anything (they say) because something new might come along.  Thus, (it is claimed) science leads not to truth but to ignorance.  

Harriman's beast is personalized by Paul Feyerabend.  While recently completing a bachelor of science degree in criminology, I was assigned to read similar "post-modernist" claims that there is no such thing as science, but only a "scientistic discourse" that excludes women and minorities, and that criminology is only ideology in service to oppression.  Fortunately, our courts do not work on that theory any more than researchers in physics adhere to the "fashionable nonsense" of post-modernism.

While these shortcomings are bothersome, they are not fatal.  Harriman’s thesis deserves more than mere consideration.  Properly taught, it would be a revolution in science.  

Rather than trying to replace the accepted meaning of the inductive method, Harriman should simply call his the objective (or Objectivist) method. 

Objectivism (with or without the capital-O) is rational-empiricism and both sides of that equation are required.  Ayn Rand taught that existence exists, that reality is real, that A is A, entities have identities: to be is to be something.  Therefore, contradictions do not exist.  

Truth is rational and empirical, logical and evidentiary, analytic and synthetic, theoretical and experimental, ideal and practical, deductive and inductive, and even imaginary and experiential. Harriman’s book rests on those truths.  In that, its value cannot be overestimated.

ALSO ON NECESSARY FACTS
Is Physics a Science?
The Problem of Induction: Karl Popper and His Enemies
The Sokal Affair
The Structure of Scientific Revolutions

Monday, March 12, 2012

Is Philosophy a Science?

Sociology, philosophy, music performance - almost any pursuit can be a science, if the scientific method is applied.  The scientific method is the process of rational-empiricism.  You need both sides of that: the analytic and synthetic, theory and practice, logic and experiment.  If you begin with a falsifiable question, explore it, explain it, and test your explanation, then the activity is scientific.
(This is based on my reply to “What is Philosophy? a Status Seeking Answer" by Fabio Rojas on the OrgTheory blog here.) 
I find a dose of humility in Tom Lehrer's singing about "Sociology" (... they can snow all of their clients / by calling it - heh - 'science' / when in fact it's only sociology ...), but in truth any pursuit can be a science without mimicking physics.  And of course, we all know that physics is taken as the standard.  The problem with that is that human beings are not billiard balls.  History and philosophy and all the rest are, indeed, sciences, but you can have two or more very different answers to the same question and have both be valid.  "Do the American people support President Obama's policy in Afghanistan?" is a complicated question, and perhaps a set of questions, but with statistical - rather than absolute - answers.
You can have a useful tool that you do not understand - the steam engine for its first 150 years; maybe even computer chips today - but it is not science until you can explain it.
With history, in particular, we cannot experiment on the past, but any explanatory theory can be validated and falsified by testing it against different experiences, e.g., the Greeks and the Mayans; or different records from the same time and place.  
On whether philosophy is a science, Prof. Rojas suggested: "...  but falsifiability through logic is qualitatively different than falsifiability through experiment or observation." 
My reply is that we understand the unreality of formal logic - "Socrates is an elephant. All elephants are blue. Therefore, Socrates is blue." - but that is trivial and demeaning when contrasted with, say, Andrew Wiles's proof of Fermat's Last Theorem, for which all the experimental evidence never was and never would be sufficient.
Also, seemingly ethereal mathematics has proved worldly: irrational, negative, and imaginary numbers are easy examples of mathematical "fantasies" that became practical tools of business and technology. Indeed, the very concept of "number" was an idea with no "reality" as long as our languages differentiated a brace of pheasants from a pair of shoes. In Japanese, we still count sticks (pencils) differently than leaves (pages).  The problem with linguistic analysis from Wittgenstein on is that such discussions seldom get past English and rarely exceed the bounds of Indo-European. So, when philosophers attempt to argue “meaning” or “the meaning of meaning” they often stray from what is truly empirical and therefore cannot come to a falsifiable claim.  That is not science.  But the errors and omissions of some do not invalidate the entire field.

Prof. Rojas also wrote:  "In the end, though, I approve of McGinn's status seeking exercise. Systematic investigation of logical arguments is different than art history or music performance."
Again, the proof of the pudding is in the tasting.  I could claim that photography informed painting in the 19th century; and I could argue the contrary as well; but eventually, I must show some examples, i.e., provide some evidence.  You don't need to play the piano to figure out how to write a piece that no one could perform.  However, you might enjoy reading it; and perhaps it could be programmed via computer and synthesizer.  Again, the performance validates or falsifies the claim.  As performance plus theory, music is a science.

While philosophers worried about how we know the sun will rise tomorrow just because it always has, they never worried about "the problem of deduction."  From valid axioms you can "prove" things you never find in experience -- again, the square root of minus 1, negative numbers, etc.  In fact, the development of double-entry bookkeeping and alternating-current electricity provided the empirical evidence for those rationalist conjectures.  But the point is that academic philosophers were more troubled by everyday experience than they were by higher mathematics.  The basic problem with academic philosophy - as with academic sociology or academic physics - is that it avoids testing.
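The alternating-current example is concrete: the "imaginary" unit earns its keep in circuit analysis, where impedance is a complex number.  A minimal sketch using Python's built-in complex type (the component values are arbitrary):

import cmath, math

R = 100.0                    # resistance, ohms
L = 0.5                      # inductance, henries
f = 60.0                     # line frequency, Hz
omega = 2 * math.pi * f

Z = complex(R, omega * L)    # impedance Z = R + j*omega*L: sqrt(-1) at work
I = 120.0 / Z                # Ohm's law with a 120 V RMS supply

print(f"|Z| = {abs(Z):.1f} ohms, |I| = {abs(I):.3f} A, "
      f"current lags the voltage by {math.degrees(cmath.phase(Z)):.1f} degrees")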
Objectivist philosopher David Harriman offered a solution to "the problem of induction" in his book, The Logical Leap: Induction in Physics.  The presentation left some aspects unanswered.  Also, it might have been more productive for Harriman to call his program "objectivism" and leave the word "induction" where it resides, rather than attempting to drag it into his narrative space.  His work remains significant and should be considered by anyone who questions whether philosophy can be a science.

Reality is real.  If something is logically true - not just a blue Socratic elephant - or perceptually evident - not just a trompe l'oeil - then the logical analytic rational side must support the empirical synthetic experiential side, or the “structure” just will not “stand” and the claim must be regarded as unproved. When philosophy is carried out according to the scientific method, then it is a science.

Saturday, March 10, 2012

Gresham's Conjecture

We learn it as "Gresham's Law," the claim that "bad money drives good money from the market."  But the general rule has many exceptions.

Free market economists quickly amend Gresham's Assertion to insist that both moneys must be legally equivalent.  If a gold dollar and a silver dollar both circulate, and if their relative value changes, then the under-valued one will be hoarded.  If you can get $1.10 in silver for a gold dollar, you will save the gold coin and pass off the silver dollar, which is overvalued: worth only 90.9 cents in gold, it is good for 100 cents of a dollar.  Therefore, people will hoard the coin with the greater intrinsic value.  This has some truth; history provides examples.
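The arithmetic of that thumbnail, as a sketch:

# Two legal-tender dollars whose metal values have drifted apart.
gold_dollar_in_silver = 1.10     # the market: $1.10 in silver per gold dollar

silver_dollar_in_gold = 100 / gold_dollar_in_silver    # cents of gold
print(f"a silver dollar contains {silver_dollar_in_gold:.1f} cents of gold")
# -> 90.9 cents.  The silver dollar is overvalued as money (it spends for
# 100 cents), so it circulates; the undervalued gold dollar is hoarded.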


Even that thumbnail explanation may be too broad; and unwarranted extensions and expansions are issued by shallow thinkers such as the "anarcho-capitalist" Murray N. Rothbard (1926-1995) of the Austrian school.  In his book, A History of Money and Banking in the United States: The Colonial Era to World War II (page 126), Rothbard claims that the nickel-copper small cent was hoarded (true) and exported (not true).  Rothbard also asserts: "The penny shortage was finally alleviated when a debased and lighter-weight penny was issued in the spring of 1864, consisting of bronze instead of nickel and copper."  This is utter nonsense.

There was no incentive to export a coin in the uncommon nickel alloy of 88% copper and 12% nickel.  Nickel was chosen largely through the influence of Joseph Wharton, who owned a mine.  The Mint found the nickel alloy too hard: dies wore out.  The Mint turned to the more familiar and softer "French bronze," 95% copper with a 5% tin-zinc mix.  The lighter coins did not drive the older issues from the market.  The success of the Northern armies in the War Between the States brought confidence to the markets, though perhaps any peace would have, regardless of who won.  In point of fact - facts often being absent from the works of Rothbard - when the smaller cents (called "nicks" or "nickels") were first issued in 1857, people lined up at the Philadelphia Mint to turn in their heavier (and therefore more intrinsically valuable) Large Cents (1793-1857).  The Mint was exchanging old cents for new, one for one, but boys who had been early in the lines sold their Small Cents for premiums.  They were curios.  Eventually, they fell to parity ... and Large Cents (now scarcer) were pursued even more passionately by numismatists.


Proof  Three Cent Silver  1858
Heritage Auctions Sept. 2010 Long Beach Signature Sale Lot 5029
Proof Three Cent Nickel 1870 
Heritage Auctions Sept. 2010 Long Beach Signature Sale Lot 5132
The history of United States federal coinage provides other counter-examples to Gresham's Conjecture.  The silver half dime circulated alongside the nickel 5-cent coin. The 3-cent silver circulated alongside the 3-cent nickel. While gold and silver did fluctuate in value, causing problems for the Mint, which was a huge consumer and reseller of both, mostly, US silver coins and US gold coins went into separate channels.
Proof Seated Half Dime 1870 
Heritage Auctions 2010 April-May Milwaukee Lot 2515 
Proof Shield Nickel  1870
Heritage Auctions 2010 January Orlando, Lot 3679 
From 1878 to 1904, the US Mint struck over 24 million ounces of silver dollars per year, far in excess of anyone's demand, to meet the political agenda of Western mining interests.  The Comstock Lode and other strikes flooded the markets with cheap silver, and its price fell relative to gold.  Nonetheless, silver dollars sat in bags; and even today fully one-third are in uncirculated condition.  According to Gresham's Suggestion, silver dollars should have driven gold dollars from the market.  They did not.  It seems that gold dollars were not in demand at all. (See http://www.coinbooks.org/esylum_v18n30a13.html)
The tendency just described is, however, limited by the fact that coins of different metals are unlikely to be equally useful in different transactions. In particular, gold coins will generally be of larger denominations and as such cannot supply the need for smaller change (cf. Sargent and Velde 2002). Consequently, even though gold may be legally overvalued relative to silver, and silver may cease to be voluntarily rendered to the mint, silver coins are unlikely to disappear from circulation altogether.  "Gresham's Law" by George Selgin at Economic History here.
Gresham's Rule does have some validity.  In the Middle Ages, when coins hundreds of years old still circulated, old, worn coins were spent while new, heavy coins were held.  As Europe experienced putative "silver famines," the purity of coinages fell.  As silver became relatively more valuable, it took less of it to buy the same goods and services; had coins not fallen in purity, you would have needed tweezers to hold a penny's worth of silver.  Debasement was a convenience.  But it still meant that if two coins were both "pennies" and one had more silver than the other, the common choice was to spend the lighter coin.  Even so, history provides many examples of heavy coins, such as the stable and reliable English sterling penny, being the engines of commerce.

ALSO ON NECESSARY FACTS
Numismatics Informs Economics
Numismatics: the Standard of Proof in Economics
Supplies and Demands
Murray Rothbard: Fraud or Faker

Sunday, March 4, 2012

Science Fairs and Science Frauds

Last month, I volunteered to judge the exhibits in "Behavioral and Social Sciences" at the Austin Energy Regional Science Festival.  After the judging, I spent an hour walking the hall, looking at other exhibits and talking with the entrants.  Overall, I was impressed.  Woody Allen quipped that 80% of success is showing up, and every exhibit was worthy of some positive acknowledgement.  Largely, these were winners at school science fairs; and competition gets tougher the further up the pyramid you go.  Of course, the bell curve applies: most exhibits were honest C+ efforts, midrange examples of high school science.  What else could they be?  Some were outstanding; a few were tagged for us by the sponsor panel as being ineligible for award.

Ineligibility is defined as failure to make the minimum benchmarks.  The rules and guidelines are provided on the local website – and they are the universal Intel International Science Fair rules and guidelines.  The materials for the judges are available to the exhibitors as well.  Among the many materials simple searching will uncover is a blog from Scientific American on what not to say at a science fair.  (“How to Answer the Five Most Common Questions from a Science Fair Judge,” by Dr. Maille Lyons, here.)

There was one exception.  And it bears on the "mass mediated hyper-reality of crime," in which television shows such as the CSI franchise inform us of what we tell them we believe.  "Criminal Eyewitness Identification" was a reasonable effort for a high school science fair project.  That the exhibitor left the required log book and final paper (with reprint file) in the car after the first TV interview should have disqualified the entry for an award, but did not.  There was no "green warning tag" from the oversight committee.  The exhibitor was personable, even charming, conversant, and not at all nerdy, and garnered another television interview literally on the heels of the judges.  We judges noted the lack of data and methodology.  The poster display identified two different hypotheses being tested at the same time.  We even heard one of the Five Wrong Answers: "My cousin did it last year."  The exhibitor seemed to believe that any criminal conviction is a correct conviction, and that convictions fail because witnesses cannot agree on identification of the perpetrator.  Like most of the others in the hall, this was a nice effort by a young high school student; and it could open the door to refinement, improvement, and well-earned recognition.  The judges could not give a place award; but the television cameras – shepherded in by the sponsor committee – were all the recognition required.  This was mass media, not science.

We know that science collides with mass media in the courtroom.  It is an old story.  Tainting Evidence: Inside the Scandals at the FBI Crime Lab by John F. Kelly and Phillip K. Wearne (Free Press, 1999; 2002) is at once a shocking exposé and a tiresome rant.  Any large, old organization will have bad experiences.  The FBI is not alone in wanting to be perceived as the paragon of best practices.  We all think well of ourselves.  Moreover, the authors rely heavily on public documents, which are themselves evidence of internal controls and corrections.  The authors claim that too little is done too late, and that the real champion of justice is Frederick Whitehurst, whose insistence led to retaliations by a bureaucracy that could not admit its errors.  And the evidence is damning.  In case after case, the FBI crime lab worked backwards, starting with the prosecution's claims and finding evidence to support them.  Compounding the falsehoods, FBI agents – the lab was run mostly by field agents, not professional scientists – committed perjury when testifying under oath.

Tainting Evidence examines the Oklahoma City bombing, the O. J. Simpson case, the first World Trade Center bombing, the Unabomber, and Ruby Ridge, among other cases.  Of special interest to me was the case of former Green Beret doctor Jeffrey Robert MacDonald, whose guilt was proclaimed by Robert Bidinotto, an investigative writer, in his book Criminal Justice? The Legal System versus Individual Responsibility.  I know Bob Bidinotto from Objectivist blogs, where we disagreed on basic issues of criminology.  He believes that MacDonald talked himself into murdering his wife and daughters and only denied it to escape responsibility for his actions.  The evidence – or lack of it – suggests otherwise; an innocent man has spent decades in prison while the perpetrators are among us.

That said, Kelly and Wearne also imply that the people we think are guilty – Timothy McVeigh, Theodore Kaczynski, O. J. Simpson, Nidal Ayyad, and others – may only have been railroaded or left holding the bag.  That is a bit of a stretch.  But it does not excuse the documented errors, oversights, omissions, fabrications, and denials.  If the FBI lab failed in these high-visibility cases, where all available resources were marshaled, what then of the day-to-day work?  When the laboratory begins with the conclusions needed by the prosecution, science has been excluded from the process.
Online Universities is a resource for students interested in going to college via the Internet.  ("OnlineUniversities.com's goal is to assist students in finding the best online university that fits your needs and demands as a student.")  Their blog post for February 27, 2012, is "The 10 Greatest Cases of Fraud in University Research" (link here).
A hundred years ago, criminologists sought to use biology to identify criminals, not in the pursuit of evidence, but in the prevention of crime by culling the population of "genetic defectives."  The eugenics movement attracted Theodore Roosevelt, Margaret Sanger, Oliver Wendell Holmes, and millions of others, many of them highly placed public officials.  Based on the work of Cesare Lombroso (among others), the theory was that physical measurements of body parts could reveal ratios and proportions indicative of criminality.  Today, we seek it in genes, but it remains pseudo-science, what Richard Feynman called "Cargo Cult Science," i.e., the outward forms without the inner substance.

ALSO ON NECESSARY FACTS
Science Fair Science Fraud (2013)
Teaching Ethics to Student Engineers

Four Books about Bad Science
Misconduct in Scientific Research
Fantastic Voyages: Teaching Science with Science Fiction
Monsters from the Id (Science as Mankind's Last Hope)

Saturday, March 3, 2012

Another Cheer for American Education

A hundred years ago, an eighth-grade education was sufficient.  In 1910 only 5% of Americans entered high school.  Then, the “high school movement” began.  The fact that now 30% of Americans have earned post-secondary degrees (associate's and above) promises a renaissance in the coming generation.

America invested millions of dollars (equivalent to billions today) in the construction of high schools, supported by taxes.  In addition to "operating levies," those property taxes stood behind the bonds that were sold to investors.  America took on a tremendous public debt.  Despite the Depression of the '30s and the war of the '40s, by the 1950s a high school education was a ticket to success in the job market.  More importantly, the general education level of the nation was raised.  Did it make a difference?  Daily newspapers still ran (and still run) horoscopes.  But they also ran crossword puzzles on the same page.

Now, our high school students lag behind those of 20 other industrialized (informatized) nations.  But my test is this: What can you count on one hand?  Nokia did not come from the high math scores of Finland's gymnasium students.  What other companies are there in Finland?  Where is the economic growth in the Czech Republic?  They are doing well, certainly, but invention, innovation, and enterprise are different from test scores.  Risks entail losses.  The Japanese are risk-averse.  How many Nobel Prize winners are in Japan?  Then we have to consider the people who never finished college, such as Bill Gates and Steve Jobs.  Of course, if Edison had had Tesla's (incomplete) university education, he would have had the mathematics to get past direct current.  There are no easy answers, but overall, nothing bad can come from mass education.  A hundred years ago, we massively educated ourselves to the high school level.  It worked out well enough.
John D. Rockefeller's family lived in Strongsville, and he rented a room in a boarding house in Cleveland in order to attend Central High School, 1853-1855.  He then enrolled in a ten-week course at Folsom's Commercial College to learn bookkeeping.  At sixteen, he went to work for 50 cents a day.  He was 30 when he organized Standard Oil of Ohio.
Master of Arts, Social Science, April 23, 2010
American high school students often need remedial mathematics classes in college -- but they take them.  That is the point.  In France, class content at university is less important than the "narrative" your education carries in the corporate and government bureaucracies.  In Japan, getting into college is the important thing, because the university you attend pre-determines the corporation you will work for.  They have a lot of C+ college students in Japan.  So do we.  The difference is that our society encourages individual choice.  Few others do.  So, their best and brightest come here for their university educations.


The subject of useless college majors was extensively explored on OrgTheory, a blog by professors of sociology.

  “I like ‘useless college majors,’ but debt undermines the humanities and other fields (like sociology!). People will rightfully resent education and the labor market. That’s what I’m worried about. When we make education into a high priced job placement test, it undermines the liberal arts. We need to stop that from happening further.” – Fabio Rojas on “Police Beat Unarmed Poet” here.  

 As long as dance education is inexpensive, that’s not such a big deal. If people want to pursue the dream, that’s great. But huge college debt makes that choice hard to sustain.” – Fabio Rojas on “Dance Majors” here.

 “Colleges are filled with people who are there because they think it will lead to jobs. So, then, why are job-hungry students flooding non-vocational areas? The explanation is fairly simple. ‘Good’ jobs require college degrees as a test of ability and emotional maturity (being able to sit and do work), even if the job itself requires no college-level skills.” – Fabio Rojas on Useless College Majors here.

Another possibility is that the Department of Education statistics show humanities BAs have been a fairly stable percentage of degrees since 1971 (decline to mid-80s, more or less a slow rise since then) while there’s been a huge rise in business (peak in the late 80s, but still much higher than 1971) and “other” (which appears to include mostly professional/vocational degrees, and is at its highest point). Also, pre-law and pre-ed people often major in the humanities. It’s almost like those subjects have some use for them. – Andrew in reply
Per Andrew, above, the big jump since 1971 for BA degrees (other than business) has been in what NCES treats as a residual category:
"Includes Agrciculture and natural resources:
  • Architecture and related services;
  • Communication, journalism, and related programs;
  • Communications technologies; Family and consumer sciences/human sciences;
  • Health professions and related clinical sciences;
  • Legal professions and studies;
  • Library science;
  • Military technologies;
  • Parks, recreation, leisure, and fitness studies;
  • Precision production;
  • Public administration and social services;
  • Security and protective services;
  • Transportation and materials moving; and
  • Not classified by field of study.  
Without doing a lot of digging, the biggies appear to be Health and Communications.  The latter is a major that seems murky to me.  I believe most institutions treat it as part of liberal arts; it would be interesting to know something about career outcomes.  In any event, both of these are overwhelmingly female-dominated, matching the change in undergraduate demographics over recent decades.  – Eweininger in reply


“I am not an enemy of humanities but I believe that higher education must be carefully managed. It’s not a problem if *some* people major in fields with poor job prospects. After all, novelists, artists, and other creative types improve the world in important ways. We should have institutions that support the arts. But it’s a big deal when *lots* of people major in areas with poor job prospects. These people take out massive loans for skills they will never use.” – Fabio Rojas on Useless College Majors here.

I think we overemphasize the importance of the kind of bachelor’s degree you get. Yes, type of undergrad degree helps you acquire job skills but isn’t a bachelor’s increasingly being used as a stepping stone for a graduate degree? I’m amazed by how many of the MBA students we teach have their undergrad degrees in “soft” majors, and yet they’re still able to get into a top business school. I think the mistake that Fabio is making in interpreting these numbers is assuming that these are terminal degrees and/or that future employers even care that much about what undergrad degree you have. – Brayden King in reply
ALSO ON NECESSARY FACTS
AMERICAN EDUCATION: AT LEAST TWO CHEERS
EDUCATING THE GIFTED AND TALENTED IN CLEVELAND, OHIO
WHERE ALL THE CHILDREN ARE ABOVE AVERAGE