Wednesday, April 29, 2015

Is the book trade too left wing for its own good?

I like to drop into the excellent Riverside Bookshop on Tooley Street, London SE1 (late of Hays Galleria). It has a good selection and dedicated staff (even if they blot their copybook slightly by not stocking my book God's Philosophers). Further afield, Daunts have a branch in Cheapside which is also worth a browse and Waterstones have some outlets in the City as well. But, given their location, I do wonder about the political emphasis of these establishments.

You might think that in the heart of London's financial district, the bookshops would reflect their potential clientele. After all, the big accounting firms have offices housing over 5,000 diligent bean counters (including me) within a couple of hundred yards of the Riverside Bookshop. Accountants are not usually at the vanguard of the Occupy protests and we rarely plot to overthrow global capitalism. In fact, many of us are secretly quite fond of market economics, recognising that it is responsible for the unprecedented reduction in global poverty over the last couple of decades.
But despite all the well-heeled capitalists just around the corner, City of London bookshops seem determined to promote left wing books rather than conservative ones. On the non-fiction stand at the Riverside Bookshop we find Owen Jones on The Establishment. At least Jones can write; but he is sharing space with the latest tedious polemic from Polly Toynbee, something on the contradictions of capitalism by Marxist David Harvey, and Naomi Klein's climate change screed. The only conservative book on offer is Andrew Roberts's quixotic paean to Napoleon.

Then I had to ask myself, what are the current bestselling conservative books? Looking through the Amazon top 100 non-fiction, Boris Johnson is the only right winger in view. The publishing trade itself appears to have a marked left wing bias. Students are big buyers of books, especially non-fiction, and they tend to be left wing. That alone might account for the bias of the book trade as a whole. But that doesn't make it a good idea. Currently, the UK printed book market is worth just £1.4bn a year and is shrinking. By comparison, the video games market is a growing £2.2bn while telecoms is a massive £40bn and cars a whopping £60bn. Economically, books are not very significant. So you would think that the trade would be trying to reach as many potential readers as possible.

This means that local bookshops in areas that are likely to be conservative, like the City of London or a leafy shire town, need to work a bit harder with their buying decisions. They have to search out the books that their clientele are likely to enjoy. And having done that, they need to promote them, since readers won't necessarily be aware of them from the national scene. That does not mean having a separate section for these sorts of books, keeping them quarantined from everything else. Treat them as what they are - part of mainstream thought rather than left wing rabble-rousing.

As a help to bookshops who'd like to sell more non-fiction books to their centre-right customers, here are five excellent titles that they should stock and, just as importantly, promote front of house with those little handwritten signs about how good they are.

1: Matt Ridley The Rational Optimist - why it is a thoroughly good thing that the facts of life are conservative

2: Daron Acemoglu and James Robinson Why Nations Fail - the importance of the rule of law and free markets in making countries richer

3: Jonathan Haidt The Righteous Mind - the psychology of narrow-minded lefties and broad-minded conservatives (written by a liberal)

4: Daniel Hannan How We Invented Freedom - the story of how the Anglosphere countries became the most liberal and prosperous on Earth

5: Steven Pinker The Blank Slate - human beings are not products of their upbringing and the environment (also written by a liberal)

Discuss this post at the Quodlibeta Forum

Wednesday, April 22, 2015

The earliest reference to a telescope: England 1551?

“And hereof came it that Fryer Bakon was accompted so greate a negromancier, whiche never used that arte (by any coniecture that I can fynde) but was in geometrie and other mathematicall sciences so expert, that he coulde dooe by theim suche thynges as were wonderfall in the syght of most people.

“Great talke there is of a glasse that he made in Oxforde, in which men myght see thynges that were doon in other places, and that was iudged to be done by power of euyll [evil] spirites. But I knowe the reason of it bee good and naturall, and to be wrought by geometrie (sythe [since] perspective is a parte of it) and to stande as well with reason as to see your face in common glasse.”

The quotation above comes from the preface to a textbook on geometry called The Pathway to Knowledge, published in London in 1551. It was written by a doctor and mathematician called Robert Record (1512–58). His arithmetic textbook, The Ground of Artes, first published in 1543, was popular with students by virtue of being in English. It went through over 40 editions, right through to the end of the seventeenth century. Record is probably best known for his invention of the equals sign. However, despite his relative success as an author, he died in a debtors’ prison.

Of course, the quotation is most interesting because it describes a device that sounds much like a telescope, sixty years or so before the telescope was supposed to have been invented. “Fryer Bakon” is Roger Bacon OFM, the Franciscan scholar of the thirteenth century, famous for his Opus Majus and Opus Minus, who lectured at the Universities of Oxford and Paris. He may have been (but probably wasn’t) imprisoned for a time for his adherence to the ultra-ascetic wing of the Franciscans. His reputation for necromancy was a not uncommon trope in sixteenth-century England, where he was a famous historical figure. However, the accusation of black magic is almost always found in the context of a denial that he was, in fact, a magician.

Record’s preface is a good example of this kind of defence of Roger Bacon.  The specific charge is that Bacon had a device that allowed him to see what was going on in other places.  Record says the device was not magical but used Bacon’s knowledge of perspectiva, what we would call geometrical optics.  Bacon was indeed familiar with this subject and wrote a treatise on it.  In this treatise, he mentions magnifying glasses and, as it happens, spectacles were invented in Italy shortly thereafter.
To be clear, there is no evidence that Bacon had any device to see different places, magical or otherwise.  What interests me is what Record thought Bacon had invented.  The device mentioned in the quotation does not sound like a magnifying glass or spectacles.  In any case, if that was what Record had in mind, he would just have said so (probably calling a magnifying glass a “perspective glass”, which confusingly was also an early term for the telescope). 

I think there are three possible interpretations of Record’s words:

  • Record has no idea what he thinks Bacon’s device was.  He just wants to reassure his readers that it would have been built on mathematical and not necromantic principles.  This is possible.  Record is making a point that geometry is jolly useful.  But the passage reads as if he knew what the device was supposed to be and how it worked.

  • Record thinks the device was a periscope. These had been invented a hundred years before Record wrote by Johannes Gutenberg, who lost money on the venture. Gutenberg later had more success as a pioneer printer. The trouble is, periscopes don’t really show you what is going on in another place. But they do allow you to see around corners, so this interpretation is a possibility.

  • Record has in mind a device like a telescope that really does let the user see things where he isn’t.  Hans Lippershey famously patented the first telescope in 1608, but several others claimed to have invented it.  This certainly best fits the context of the passage but would require that Record knew what a telescope was before it was supposedly invented.

So could Record really have known about a telescope as early as 1551? Astronomer Colin Ronan has claimed that it was invented by a Kentish mathematician and astronomer called Leonard Digges (d. 1559). The claim is actually made by Digges's son Thomas (d. 1595) on his father’s behalf. Digges Jr produced an edition of his father’s book on practical geometry called Pantometria, which was published in 1571. In the introduction, he notes:

“… my father by his continual painful practices, assisted with demonstrations Mathematical, was able, and sundry times hath by proportional Glasses duly situate in convenient angles, not only discovered things far off, read letters, numbered pieces of money with the very coin and superscription thereof, cast by some of his friends of purpose upon downs in open fields, but also seven miles off declared what hath been done at that instant in private places.”

This does sound a lot like the device mentioned more briefly by Record in the preface to The Pathway to Knowledge. It would be great to be able to link Record and Digges directly. Unfortunately, Record was based in Cambridge in the 1540s, Digges in Kent. But Record was reasonably well-known after 1543 thanks to his arithmetic book The Ground of Artes. Among the small community of English mathematicians, Digges and Record, both avid Protestants, could have met.


Overall, I think there is a good chance that Record is referring to a telescopic device in 1551 and, if so, it is most likely to be the same one that Digges had invented. At least, Record seems aware that a telescope exists even if he has not seen one. If this is the case, it is evidence that Digges really did create a telescope before 1551 and makes Record’s preface the earliest reference to one.

Discuss this post at the Quodlibeta Forum

Thursday, January 29, 2015

Islam and science have problems with their relationship

In August 2013, Richard Dawkins elicited one of his periodic bouts of controversy by tweeting that Trinity College, Cambridge had produced many more Nobel Laureates than the entire Muslim world.  While no one could deny that his tweet was objectively correct, any serious point he might have been making was drowned out by the condemnation of his quasi-racist language.  This is a shame because, unfortunately, science in much of the Muslim world really is in crisis.  

Nidhal Guessoum (who is associated with this website), a professor of Physics and Astronomy at the American University of Sharjah in the United Arab Emirates, likes to ask his students and colleagues about their scientific beliefs. Despite living just down the road from the multinational entrepôt of Dubai, he has found that only about ten per cent of Muslims at the university accept that human beings evolved from other animals. We should bear in mind that Guessoum is asking only undergraduates and members of the university faculty. The population at large is likely to be even more dismissive of Darwin. In comparison, Gallup polls of Americans have consistently found that half of those surveyed accept that humans are descended from apes. We tend to think of the United States as a haven for creationists, but it has nothing on the UAE.

If the problem of science among Muslims were confined to rejecting Darwin, we would at least be confronting an opponent familiar from debates with Christian creationists. But, as Professor Guessoum explains in Islam’s Quantum Question, science in the Islamic world has further problems. He’s written the book to help explain Muslim attitudes towards science and discuss ways that the situation can be improved. Unfortunately, besides creationism, there are several other serious threats to a harmonious relationship between Islam and modern science.
The first threat is the claim that science is an imperialist cultural artefact with no objective claim to truth. This leaves people in Islamic countries free to reject science as a colonial imposition. The situation is made worse by left-wing professors in the West. Muslim intellectuals in the United States and United Kingdom, who have drunk deep of the beguiling draught of postmodernism, have attempted to build an Islamic science better suited to their co-religionists. Thinkers like Ziauddin Sardar and Seyyed Hossein Nasr are not household names, but their rejection of western science as incompatible with Islam has become conventional wisdom in many Muslim countries. Of course, their attempts to create an alternative natural philosophy have been an abject failure and they have been reduced to bickering among themselves.

In Islam’s Quantum Question, Guessoum is always impeccably polite, but it is clear he despairs of views like these. His ridicule of Sardar, Nasr and their fellow travellers is all the more devastating for being so gently expressed. Guessoum knows perfectly well that science is universal. Its truths are the same everywhere. The idea of a specifically Islamic science makes as little sense as a Christian or atheist one. This is why Abdus Salam, who won the Nobel Prize for Physics in 1979 for his work on the weak nuclear force, is one of Guessoum’s heroes. Salam saw himself as an ambassador for science to the developing world. It goes without saying that he found no conflict between his own Muslim devotion and his epochal work in nuclear physics.
A second threat is the low status of science in Muslim-majority countries.  For example, at the beginning of his book, Guessoum mentions two competitions for the pupils at his son’s school in the UAE.  One was for a science project that he was asked to judge.  It is fair to say he was less than impressed by the quality of many of the entries, but attendance was so sparse that there were few people to notice.  Three days later, the school held its annual Koranic memorisation competition.  Hundreds of parents, the local media and various guests of honour crammed into the school hall to witness prizes totalling $20,000 being handed out to the pupils.  With educational priorities like these, it is hardly surprising that the Muslim world has to import scientists and engineers from the West or send its own sons and daughters to be trained there.
The third threat is even more insidious. The school of I’jaz teaches that the findings of modern science have been miraculously present in the Koran all along. The verse “the Originator of the heavens and the earth! When he decrees a thing, he says only: ‘Be!’ And it is.” (Q 2:117) is taken as a reference to the Big Bang. But proponents of I’jaz make even more surprising claims. They claim to derive the speed of light (approximately 300,000,000 metres per second) from the verse “He directeth the ordinance from the heaven unto the earth; then it ascendeth unto Him in a Day, whereof the measure is a thousand years of that ye reckon.” (Q 32:5). Guessoum devotes one of the appendices of his book to refuting this “calculation” of the speed of light. But there are many other examples that are taken extremely seriously, and he is clearly angered that I’jaz is so influential. Unfortunately, despite having much in common with the Bible Code craze of a few years back, I’jaz is fast becoming mainstream in Muslim countries. Guessoum found that 80% of the Muslim faculty and students at his university in the UAE believed that the Koran contains explicit statements now known to be scientific facts.
Guessoum himself suffers from none of these misapprehensions. He is a believing Muslim but his scientific views are similar to those of most of his Western colleagues: he wholeheartedly accepts Darwin’s theory (rejecting intelligent design) and sees science as universal rather than local. He rejects I’jaz but does see the merit, like many Christians, of the fine-tuning argument and theistic evolution.

Guessoum’s experience shows that reconciling Islam and science is a problem that has already been solved. Islam’s awesome thirteen hundred years of scholarship has already furnished the answers in this debate. All that is needed is to retool the arguments developed centuries ago to make them fit for the modern era. The original debate between Islamic and foreign sciences took place in the ninth to twelfth centuries when ancient Greek natural philosophy and mathematics were first translated into Arabic. Admittedly, back then, in a high-scoring game, the mystics eventually prevailed with a late winner from Al-Ghazali (d. 1111). Warning Muslims against the work of Euclid and Ptolemy, Al-Ghazali said they are “the preliminary to the sciences of the ancients, which contain wrong and harmful creeds.” He was probably talking specifically about astrology but, as his influence has waxed, the sciences in the Islamic World have waned.

That is not to say that the traditional picture of Al-Ghazali snuffing out Golden Age science is accurate. The astronomical work of Nasir al-Din al-Tusi (d. 1274) and Ibn al-Shatir (d. 1375) alone refutes that theory. The mathematical models of both these scholars were used, unacknowledged, by Nicolaus Copernicus (d. 1543) in his Revolutions of the Heavenly Spheres. Nidhal Guessoum’s own favourite Islamic thinker is Ibn Rushd, known as Averroes in the West. He took on the challenge of Al-Ghazali and has been a pariah among conservative Muslims ever since. Only in the West is Averroes hailed as one of the most important thinkers in history.

So obviously, science in the Islamic world did not break the mould in the way that it did in the West.  But, as George Saliba notes in his Islamic Science and the Making of the European Renaissance, the question of why modern science didn’t arise in the Muslim world is the wrong one to ask.  It didn’t arise in all sorts of advanced civilisations including China or India; ancient Greece and Rome; or Sassanid Persia and its great antagonist Byzantium.  Instead, we should be wondering why a recognisably modern science had arisen in the West by the end of the nineteenth century.  That this didn’t happen elsewhere isn’t because of the deficiencies of other societies.  It’s just that there was a unique conjunction of historical contingencies in one place and time.  Exactly what those contingencies were remains a matter of much debate.

What, then, is the solution to Islam’s quantum question?

There are some clues in Guessoum’s book. One element is the need to ensure that any discussion of science is grounded in the Koran. The esteem in which this book is held among Muslims is well known. Since it is full of injunctions to observe and understand nature, there is strong support for science to be found within its pages. It also supports a philosophy of the unity and predictability of nature which accords well with the axioms of modern science.

Obviously, Koranic literalism can be unhelpful.  Luckily, there is a history of interpretation that allows the Koran to be read in a figurative rather than literal way where necessary.  A passage that looks like a straightforward statement of fact is likely to also have a range of metaphorical and religious interpretations.  Guessoum warns of some pitfalls in this approach.  For instance, the Arabic word commonly translated as science today, ‘ilm, has the wider meaning of “knowledge” in classical Arabic.  Nonetheless, the essential lesson is that revering the Koran as the word of God does not also mean having to treat it as a scientific textbook.

To a great extent, the relationship between science and Christianity is of academic interest only.  Readers of this blog might find the subject fascinating but it only rarely impinges on public life.  When it does, the issue in question is almost always creationism which most scholars in the field regard as one of the subject’s least interesting manifestations.  The situation among Muslims is different.  For them, the question of how to reconcile science to Islam is of epochal importance.  The best-case scenario could well see them in a better place than the West – a science that recognises its ethical boundaries and rejects the naïve utilitarianism of so many western scientists.  But for the present, the story is much less encouraging.  Nidhal Guessoum is in no doubt that the relationship between science and Islam is highly problematic and that this is holding back the development of Muslim societies.  Sadly, there is little that western Christians can do about this.  

Given the importance of its subject-matter, it is unfortunate that Islam’s Quantum Question is such a poorly organised and written book. Even the title is a misnomer – Guessoum tells us early on he’s got hardly anything to say about quantum mechanics. The book was originally in French and, as far as I can tell, Professor Guessoum translated it into English himself. The result is difficult to read and even harder to follow. For most readers, the amount of new material is more than can be easily swallowed. Muslim thinkers come thick and fast, sometimes referred to by their surnames and sometimes by their given names. Keeping track of who is who and what they all think becomes a serious challenge. It’s not even clear whom the book is for. There is lots of material which looks like it is aimed at an audience of western non-Muslims. But Guessoum also spends a great deal of space elucidating the basic philosophy of science and presenting evidence that evolution is true.

Deep within his book there is an essential text fighting to get out.  There is no doubting the significance or the urgency of the issues it raises.  Thus, despite its faults as a piece of writing, it is something that everyone interested in the interface between science and religion should read.

This article is a much expanded version of a review originally published in Science and Christian Belief 26(2) 2014.

Discuss this post at the Quodlibeta Forum

Friday, January 09, 2015

We know less about the ancient world than we think we do

On 15 June 763BC, a near total eclipse of the sun was visible over a swathe of the Near East.  As luck would have it, the event was noted in the official list of Assyrian high officials.  This record provides the earliest absolute and uncontroversial date in ancient history.  Using lists of kings and the chronicles of events, historians have counted the years back from this date to construct the chronology of ancient history.  
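The counting-back that historians do from this anchor is simple arithmetic. Here is a minimal sketch in Python; the rulers and reign lengths below are invented purely for illustration, and only the 763 BC anchor comes from the record:

```python
# The Bur-Sagale eclipse of 763 BC pins one year of the Assyrian records
# to an absolute date. Given reign lengths from king lists, every other
# year follows by addition (BC dates grow larger as we go back in time).
# The rulers and reign lengths here are invented for illustration.
ANCHOR_BC = 763

reigns_before_anchor = [  # (ruler, reign length in years), latest first
    ("Ruler C", 18),
    ("Ruler B", 31),
    ("Ruler A", 25),
]

accessions = []
elapsed = 0
for ruler, length in reigns_before_anchor:
    elapsed += length
    accessions.append((ruler, ANCHOR_BC + elapsed))

for ruler, year in accessions:
    print(f"{ruler} came to the throne c. {year} BC")
```

The fragility of the method is also visible here: one mistaken or missing reign length shifts every earlier date by the same amount.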
Radiocarbon analysis (which measures the decay of carbon 14, an unstable isotope) and the predictable styles of pottery found in digs both provide corroborating evidence. Dating the layers of archaeological remains from the artefacts found within them is called stratigraphy and can yield quite precise results. The vast number of potsherds that have been unearthed allows archaeologists to use statistical methods to screen out random noise and anomalous samples that have found their way into the wrong strata. Of course, pottery and radiocarbon methods need to be calibrated to produce absolute dates. This has been done using samples of wood whose age can be determined by matching patterns of tree rings, a technique called dendrochronology. We can count back sequences of tree rings from the present day, all the way to 2000BC. By carbon dating the oldest samples of wood, we can tie the tree ring record to the results from carbon 14 decay.
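The arithmetic behind radiocarbon dating is standard first-order decay. A small illustrative sketch, using the commonly quoted 5,730-year half-life of carbon 14, and leaving out the tree-ring calibration step described above:

```python
import math

HALF_LIFE_YEARS = 5730  # commonly quoted half-life of carbon-14
DECAY_CONST = math.log(2) / HALF_LIFE_YEARS

def radiocarbon_age(fraction_remaining):
    """Years elapsed for a sample retaining this fraction of its C-14.

    This is the raw ("uncalibrated") age; real dates are then calibrated
    against the tree-ring record, as the text describes.
    """
    return -math.log(fraction_remaining) / DECAY_CONST

print(round(radiocarbon_age(0.5)))   # one half-life
print(round(radiocarbon_age(0.25)))  # two half-lives
```

A sample retaining half its carbon 14 is one half-life old; a quarter, two half-lives, and so on, which is why the method fades out for very old samples where almost nothing is left to measure.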
By 1990, all these clues had yielded a multi-dimensional jigsaw which fitted together to almost everyone’s satisfaction. There were a few heretics like Peter James, who suggested in his book Centuries of Darkness that the conventional chronology included two hundred additional years around 1000BC. Thus remains that were conventionally dated to 1050BC would actually date from 850BC. Although James’s book is an excellent read, it fails to convince.
Nonetheless, it has now turned out that the conventional chronology was not as secure as everybody else thought.  While James was convinced ancient history was two centuries too long, new evidence has begun to pile up in the opposite direction: it now looks like the conventional chronology is up to 150 years too short.  To put it another way, a cataclysm that everyone thought occurred in 1500BC actually happened before 1620BC.  The event in question was the massive eruption of the island of Thera in the Aegean Sea.  
Conventional chronology dated the end of the Minoan age in Crete to 1450BC. Archaeologists assumed that the Thera eruption (on the modern island of Santorini) and its resulting tsunami had destroyed the Minoan fleet, leaving them vulnerable to raiders from the mainland. Certainly, the havoc wrought by the volcano can clearly be seen across the Eastern Mediterranean. When Thera exploded, it blasted 60 cubic kilometres of rock into the atmosphere which settled over Asia Minor. The resulting layer of ash and pumice is used to date the sites where it is observed. And the eruption had other effects. Sulphur dioxide released by the volcano spread across the northern hemisphere and fell to earth as acid rain, or more significantly as acid snow. At the poles, not all of that snow has yet melted and, since the 1990s, it has provided a new strand of evidence for dating the eruption.
Ice cores, drilled from the icecap of central Greenland, record the depth of each annual snowfall. The ice holds within it information on the constitution of the atmosphere going back tens of thousands of years. Like tree rings, each layer can be counted so as to give an absolute rather than relative date. Big volcanic eruptions show up as spikes in the sulphur content of the annual fall of snow: Krakatau in 1883; Tambora in 1815; Vesuvius in AD79. Despite the presence of literate civilisations in Egypt, the Levant and Babylon, no written record of the Thera eruption exists, but the ice cores should overcome that deficiency and provide an absolute date for the cataclysm.
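The way a volcanic year stands out against the background can be sketched with synthetic numbers (everything below is invented for illustration; real ice-core analysis is far more involved):

```python
# Synthetic annual sulphate series (arbitrary units); negative years = BC.
# A big eruption shows up as a single year far above the background level.
series = {year: 20.0 for year in range(-1630, -1610)}
series[-1627] = 95.0  # an invented "eruption" year

# Flag any year well above the series average as a volcanic spike.
mean = sum(series.values()) / len(series)
spikes = sorted(year for year, value in series.items() if value > 2 * mean)
print(spikes)
```

The same counting logic gives the spike an absolute date, because, like tree rings, each annual layer can be tallied back from the present.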
Actually, the fact that the Thera event went unrecorded is less surprising than it seems. Mankind has been remarkably unobservant of enormous volcanic eruptions. An event in 1257AD, less than 800 years ago, is indelibly imprinted into both the Greenland and Antarctic ice cores. It was greater in size even than Tambora and thus the largest eruption in the last ten thousand years. But remarkably, no one knows where it happened. Only in 2012 did Mt Rinjani in Indonesia emerge as a likely candidate. Another big eruption, as recent as 1809, remains unidentified.
By 2000, the Greenland ice cores had revealed that Thera could not have happened when everyone thought it had.  The most likely anomaly in the ice dated from 1640BC, but this turned out to be from a volcano in Alaska.  At the same time, carbon dating an olive tree buried in the Aegean eruption yielded a date of around 1620BC.  Sulphur traces in the ice have been found that correspond to this date, although they are not as strong as might be expected.  Now, the dendrochronologists have piled in.  The Thera eruption would have caused unusually cold weather which stunted plant growth across the globe.  Evidence from bristlecone pines in the western United States, oak trees in Ireland and Swedish pines all point to a cold snap in 1627BC.  This is consistent with what we’d expect from a big volcano blowing its top in the Mediterranean.  Evidence from the Antarctic ice cores should be in shortly, but for a northern hemisphere volcano, this is unlikely to be conclusive.
The lack of a definitive date for the Thera disaster is frustrating, but we can now be reasonably sure it occurred 120 years earlier than thought. The implications of this for ancient history are immense. The chronology of the New Kingdom of Egypt was thought to be rock solid. The need to find room for a dozen more decades has so far been too disconcerting for Egyptologists to tackle. There is a good chance that the extra years belong in a period after the well-documented New Kingdom called the Third Intermediate Period.
For historians of Babylonia, the crisis has been less existential.  Absolute dates for the second half of the second millennium are based on ancient observations of the planet Venus.  We know from modern calculations that a particular configuration of Venus recorded during the eighth year of the reign of a certain King Ammisaduqa must have occurred in 1702BC, 1646BC, 1582BC or 1550BC.  Other events in Babylonian history, such as the reign of King Hammurabi (famous for his law code) and the sack of Babylon by the Hittites are arranged around whichever absolute date is most convenient.  That some of these possible Venusian dates differ by 120 years, about the same length of time that the Thera eruption has been moved back, is highly suggestive to say the least.
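The reason the Venus observations allow several candidate dates is that Venus's risings and settings repeat against the calendar roughly every eight years (five synodic periods of about 584 days come to almost exactly eight solar years), so the tablet fixes the year only up to jumps of that cycle. A quick check shows all four candidates are indeed spaced by whole multiples of eight years:

```python
from itertools import combinations

# The possible BC dates for Ammisaduqa year 8, from the text above.
candidates = [1702, 1646, 1582, 1550]

# Venus's visibility phenomena recur on a roughly 8-year cycle, so any
# two candidate dates should differ by a whole number of such cycles.
for a, b in combinations(candidates, 2):
    gap = a - b
    assert gap % 8 == 0
    print(f"{a} BC vs {b} BC: {gap} years = {gap // 8} eight-year cycles")
```

The 120-year gap mentioned in the text is one of these pairwise differences (1702 minus 1582), fifteen Venus cycles apart.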

So, where does all this leave biblical chronology?  That remains very unclear.  But the redating of Thera shows that we know a lot less about when things happened in the ancient world than we thought we did.
Discuss this post at the Quodlibeta Forum

Friday, January 02, 2015

The British Medical Association thinks it is bad for doctors to work weekends. Why do we still treat the medical profession as special?

In this age of public cynicism, few professions remain in high public esteem. No one ever liked journalists, politicians or estate agents. But in recent years, bankers and lawyers have become much less trusted: for good reason, you might say. The teaching unions continue with their long campaign to undermine the regard which much of the public still have for their members. But doctors have bucked this trend. Even recent scandals in the National Health Service (conveniently blamed on managers) haven’t really dented the way the public see physicians, or how physicians see themselves.

That isn’t too surprising. Doctors can do amazing things to heal us. They save lives habitually. In some circles, it is sacrilege even to criticise them. But, remarkably, doctors enjoyed a healthy professional reputation even back in the days when they couldn’t really help us at all. The miracle of modern medicine is much more recent than we realise. It was only from the mid-nineteenth century that doctors were more likely to cure than kill. Our expectation that we won’t die of infectious disease dates from after the Second World War.

It’s impossible to overstate just how useless pre-modern medicine was. If you fell ill, there was nothing, and I mean absolutely nothing, that a doctor could do to cure you. Granted, he had plenty of treatments and his learning was considerable. But bleeding, purgatives and the like would do you more harm than good. In essence, doctors were charging fat fees to hasten patients towards the grave.
Actually, I was slightly exaggerating when I said doctors could do nothing. There were some drugs available, like opium, to lessen pain. But you didn’t need a doctor to access these drugs and, although opium could reduce discomfort, you wouldn’t be cured. It was palliative only. Luckily for them, doctors did have another trick up their sleeves, although they did not know it. It’s called the placebo effect.

It’s well known that when you give a patient a sugar pill, something with no active ingredients, it can have marked beneficial effects. The mere fact that the patient thinks that they are being treated with an effective medicine makes them better able to heal themselves. And this effect is even more marked if the doctor himself thinks he is doing some good. That’s why new drugs are tested using the double-blind method. Patients are divided into two groups. One group is given the drug under test and the other is given a placebo. It’s called double-blind testing because the researchers giving the drug don’t know which is which any more than the subjects do. Only a second lot of researchers, who never actually come into contact with the patients, know who has received the real drug and who has received the fake.
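The allocation logic described above can be sketched in a few lines of Python. Everything here (patient names, pack codes, the function itself) is hypothetical, just to show how the two teams see different information:

```python
import random

def assign_double_blind(patient_ids, seed=42):
    """Randomly split patients between drug and placebo.

    Returns two views: the opaque pack codes the treating researchers
    see (which reveal nothing), and the key held by a second team who
    never come into contact with the patients.
    """
    rng = random.Random(seed)
    key = {}      # patient -> "drug"/"placebo", held by the unblinded team
    blinded = {}  # patient -> opaque code, all the treating team sees
    for pid in patient_ids:
        key[pid] = rng.choice(["drug", "placebo"])
        blinded[pid] = f"pack-{rng.randrange(10**6):06d}"
    return blinded, key

blinded, key = assign_double_blind(["p1", "p2", "p3", "p4"])
print(blinded)  # pack codes only: nothing here says who got the drug
```

Neither the patients nor the treating researchers can tell from the pack codes who is in which group, so expectations cannot leak into the results; only at the end is the key used to compare the two groups.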

So, a doctor in the eighteenth century, with his training and aura of competence, could help his patients cure themselves merely because all parties thought that he could. This might even offset the damage that the doctor was doing by administering dangerous drugs or ordering bleeding. Clearly, the doctors who could best help their patients were the ones who didn’t do anything besides having a reassuring bedside manner and giving out harmless placebos. That’s generally what village healers and cunning folk did. Their magical cures were less likely to hurt you than the treatments of professional doctors. Most effective of all was praying at a saint’s shrine. If you believed in it, prayer would do as much good as a visit to the doctor, and it was unlikely to do you any harm at all. Physicians made their living by cloaking themselves in learning, jargon and professional qualifications. But it was all an illusion. No matter how many long years they studied Galen and Avicenna, they couldn’t help their patients one jot.

Incidentally, that’s how homeopathy got going. It was founded by Samuel Hahnemann in 1796 while doctors were still more likely to be licensed killers than saviours. Now, I hope I won’t offend anyone when I say that homeopathic medicines do precisely nothing. They rely entirely on the power of suggestion – in other words the placebo effect. But when homeopathy was founded, doing nothing could be a huge improvement on conventional treatments. So, it appeared to work better. This meant that homeopaths gained a respected place in British medicine that they have never really relinquished. Homeopathy is still available on the National Health Service.

All this raises a slightly disconcerting question. If doctors could maintain a professional reputation back when they couldn’t help their patients, is some of the reverence in which we hold them today really just a function of good public relations? That’s not to say that today’s medical professionals don’t deserve a large measure of respect. But placing them on a pedestal doesn’t do us or them any good at all. So when the British Medical Association says that doctors are too important to work at weekends, we should treat the suggestion with the scorn it deserves.

Discuss this post at the Quodlibeta Forum