Evidence in Medicine. Iain K. Crombie
A more recent but widely (mis)used theory was that bed rest was beneficial for a variety of ailments. Its popularity has been traced to a series of lectures in the mid-nineteenth century by John Hilton, president of the Royal College of Surgeons [8, 9]. Initially recommended for recovery following orthopaedic procedures [10], it was soon used for conditions including myocardial infarction, pulmonary tuberculosis, rheumatic fever and psychiatric illnesses [9]. Bed rest was particularly popular in pregnancy, where it was recommended for complications such as threatened abortion, hypertension or preterm labour [11]. The theory was that if rest helped to mend broken bones, then it would also heal other organs [9]. The benefits of bed rest were thought to include reduced demands on the heart, conservation of metabolic resources for healing and avoidance of stress [12]. Its use began to be challenged in the mid-twentieth century, as evidence grew on the adverse effects of bed rest; it is now known to impair cardiovascular, haematological, musculoskeletal, immune and psychological functions [9, 12]. Bed rest is an example of a treatment based on beliefs about benefit that endured in the face of substantial evidence of harm [8, 11].
TESTING ON A SERIES OF PATIENTS
The transition from treatments based on theory to the use of evidence derived from empirical studies was a gradual process. A simple, and common, method was to give a treatment to a series of patients and then observe its impact on the disease. A good example is the use of the leaves of the willow tree for inflamed joints, a treatment dating back to the ancient Egyptians [13]. Clinical observation confirmed the benefits: application of a decoction of willow leaves to inflamed skin reduced the swelling. Extracts of willow leaves and bark were also used for fever and pain by the Greeks from the fifth century BCE [14]. An important step in the use of the willow was taken by the Reverend Edward Stone in 1763. He administered a solution of powdered willow bark to 50 patients with fever, judging the treatment a great success [14, 15]. The active ingredient of the willow, salicin, was isolated in the 1820s [13, 15]. This drug was tested by a Dundee physician, T.J. MacLagan, who administered it to a series of patients with acute rheumatism. Not only was the treatment successful, but it also demonstrated antipyretic, analgesic and anti-inflammatory effects [15]. Salicin was recognised to be an important drug, but its long-term use was limited because gastric irritation, nausea and vomiting were common side effects. The pharmaceutical arm of the Bayer company searched for a safer alternative, and successfully modified salicin to produce a new chemical with fewer side effects [13, 15]. That drug, aspirin, is now the most widely used medicine in the world [14].
Another example of evidence from a series of patients is the discovery of insulin for the treatment of diabetes. This was undoubtedly ‘one of the most dramatic events in the history of the treatment of disease’ [16]. Research in the late nineteenth century had shown that removal of an animal's pancreas ‘produced severe and fatal diabetes’ [17]. Over the following 30 years many researchers tried to isolate a pancreatic extract that could control blood sugar levels. They had little success, as the extracts had only a transitory effect on blood sugar and caused unacceptable side effects (vomiting, fever and convulsions) [18, 19]. In October 1920 Frederick Banting, a young Canadian doctor, was preparing a lecture on the pancreas [16]. The research he was reading led him to think that the active ingredient was being destroyed by the digestive enzymes in the pancreas, and that this could be prevented by ligating the pancreatic ducts. Banting began his experiments with extracts of the ligated pancreas in May 1921 [17]. By January 1922 a purified extract had been obtained. This proved successful in treating a 14-year-old boy, and in February a further six patients were treated with equally favourable results [16]. The discovery was announced in April to international acclaim, and in 1923 the Nobel Prize was awarded to Banting and one of his colleagues, Dr Macleod [16].
COMPARING GROUPS
Case series can provide support for a treatment if, as with insulin, the benefits are immediate and substantial. But observations on a set of patients are often not sufficient to identify whether a treatment is truly effective. Consider the management of gunshot wounds in the sixteenth century. At that time it was believed that the bullet introduced poison into the body, and that cauterising the wound with boiling oil mixed with treacle would detoxify it [20, 21]. The treatment was very unpleasant, but was thought to save lives. Force of circumstances led the French barber‐surgeon, Ambroise Paré, to use a different treatment. During the Italian war of 1536–1538, Paré ran out of oil and instead used a balm of egg yolk, rose oil and turpentine [20]. He observed that the outcomes differed substantially between the two groups: those treated with the hot oil were feverish and in ‘great pain and swelling about the edges of their wounds’, whereas those given the balm were resting comfortably [21]. Further trials of the balm convinced Paré that gunshot wounds were not poisoned and should not be cauterised [20].
The comparison of groups also helped promote a technique for the prevention of smallpox. In the 1700s smallpox was a leading cause of death, with many of those who survived suffering disfigurement and blindness [22]. The available preventive measure was to infect children with pus or scab material from smallpox victims, a process known as variolation. Despite reports that it was beneficial [23], there was widespread concern that variolation might carry a greater risk of dying than allowing people to contract the disease naturally. James Jurin evaluated this in the 1720s, by collecting data on death rates in three groups: those who were diagnosed with smallpox, those at risk of contracting smallpox and those who had been variolated [22, 23]. The results appeared convincing, with death rates of 16.5% (diagnosed cases), 8.3% (at risk) and 2.0% (variolated) [23]. Preventing smallpox by variolation was a much safer practice than letting nature take its course.
Death following childbirth was a serious concern in the seventeenth to nineteenth centuries, causing epidemics ‘of unimaginable proportions’ [24]. A major cause of this mortality, puerperal fever (fever following childbirth), was investigated by Ignaz Semmelweis, a Hungarian doctor. In 1844 he compared the death rates among patients in two wards of a hospital in Vienna. He found that the death rate in the ward staffed by doctors was much higher (16%) than in the one run by midwives (2%) [25]. This, and other observations, led Semmelweis to conclude that the illness was transmitted by doctors coming directly from a post-mortem to help deliver a baby. He initiated a preventive measure, compulsory hand washing in a chloride of lime solution, which reduced the mortality in the doctors’ ward to 3% [25]. His approach was not popular, because it implied that doctors transmitted disease, and Semmelweis's contract was not renewed. He was finally vindicated some 30 years later, when Pasteur identified the bacterium that causes puerperal fever, Streptococcus pyogenes [25].
These treatment evaluations used two different types of comparison: contemporary controls and historical controls. Contemporary controls are patients who were seen at the same time as those getting the new treatment, but who received conventional care. Historical controls are patients who were treated in the past, before the new treatment was introduced.