Sunday, 17 December 2006

Anti-viral meme

[image: the RSV virus]

Ok, so Log Base 2 has been in hibernation lately. So what has reactivated this blog? Anti-viral tissues, that's what! At the drugstore today, I saw a box of Kleenex Anti-Viral tissues. Yikes, I thought: Kleenex impregnated with Tamiflu! Well, to be honest I can't tell for sure ...

Here's how the manufacturers of Kleenex describe their product:
"Only KLEENEX® Anti-Viral* Tissue has a moisture-activated middle layer that is scientifically proven to kill cold and flu viruses.* When moisture from a runny nose, cough or sneeze comes in contact with KLEENEX® Anti-Viral* Tissue's special middle layer, cold and flu viruses are trapped and killed."
Their footnote clarifies:
"* In the tissue within 15 minutes. Virucidal against: Rhinoviruses Type 1A and 2 (rhinoviruses are the leading cause of the common cold); Influenza A and Influenza B (causes of the flu); Respiratory Syncytial Virus (RSV, the leading cause of lower respiratory infection in children)."
Incidentally, the image above depicts the RSV virus. (And double-incidentally, it bugs me that in that last sentence, the word "virus" is effectively repeated. Like when you say "PIN number". Or "double-incidentally".)

Anyway, I guess the chemical constituents of Kleenex Anti-Viral tissues are a trade secret, because the best description I can find on their website is that it's a "blue-dot layer with special anti-viral* formula". Of course! It's those ingenious blue dots!

I do wonder, however, if perhaps this will just breed Kleenex-Anti-Viral-resistant viruses. I've wondered the same about triclosan, the ubiquitous anti-bacterial agent: is it breeding resistant bacteria? But maybe you could say the same thing about good old soap? Hmmm ... I think we need a microbiologist to comment.

I leave you with this rather entertaining commercial for Kleenex Anti-Viral from YouTube.

Update 04Jan2007: I did a little investigation, and believe I've uncovered the secret of the blue dots! It's U.S. patent number 7115273, which is for:
"A soothing anti-viral lotion composition and a lotioned tissue product having a surface with the lotion composition applied thereto, and methods for making and using the same. The lotion composition includes an anti-viral organic acid and a topical delivery system. The topical delivery system includes one or more polyesters which allow incorporation of the organic acids into the lotion formulation, controls their delivery, and maintains them in the stratum corneum. The lotion composition may optionally contain a surfactant, an irritation inhibiting agent, and other additives."
(In case you're wondering, those additives may include "oils, waxes, fatty alcohols, humectants, and the like.") And how do you squeeze all that into a tissue?
"The anti-viral lotion composition may be applied to the tissue product by any of the methods known in the art such as gravure printing, flexographic printing, spraying, WEKO, slot die coating, or electrostatic spraying."
Translation: blue dots! The patent goes into lots more technical detail. A brief sampling:
"Preferred anti-viral organic acids comprise at least one member selected from the group consisting of carboxylic acids having the structure R-COOH, wherein R is a C₁-C₆ alkyl; C₁-C₆ alkyl carboxy; C₁-C₆ alkyl carboxyhydroxy; C₁-C₆ alkyl carboxy halo; C₁-C₆ alkylcarboxy dihydroxy; C₁-C₆ alkyl dicarboxyhydroxy; C₁-C₆ lower alkenyl; C₁-C₆ alkenyl carboxy; C₁-C₆ alkenyl phenyl; or substituted phenyl radical."
That tissue you're holding is a miracle of modern chemistry!

Saturday, 5 August 2006

A magnificent display

[photo: Beirut]

Greetings from St. John's, Newfoundland, where I've been on holiday for the last week. (In case you're wondering, the photo on the left is from Beirut, yesterday. Be patient: I'll draw the connection in a moment.) The other day, I visited the Newfoundland and Labrador Museum, now housed at The Rooms. One of the exhibits related to the battle of Beaumont-Hamel, which took place on July 1, 1916, the first day of the Battle of the Somme. In less than half an hour, 733 of the 801 men in the 1st Newfoundland Regiment were killed, wounded, or missing. Describing the event, the Divisional Commander wrote:
"It was a magnificent display of trained and disciplined valour, and its assault failed of success because dead men can advance no further."
Translation: eight hundred men charged straight into the line of machine-gun fire and were almost instantly cut down. The folly of war is so sad.

Fast-forward 90 years, and where are we? Israel is busy pounding the hell out of Lebanon, killing hundreds of civilians in the process. Another "magnificent" display of human folly. And here's how our Divisional Commander (Prime Minister Stephen Harper) sees things (as quoted on Deonandia):
"What we refuse to do is to be drawn into a moral equivalence between a pyromaniac and a fireman ..."
What a nuanced view! I guess the "trained and disciplined valour" of the Israeli firemen is to be contrasted with the murderous extremism of the Hezbollah pyromaniacs?

Thursday, 6 July 2006

The Crisis in Darfur

"About 400 armed people cordoned the village.... At least 82 people were killed during the first attack. Some were shot and others, such as children and the elderly, were burnt alive in their houses." — Darfur villager in a Chadian refugee camp
Amnesty International is appealing for support to protect human rights in Sudan. They have lots of information and ways you can help on their website.

Friday, 30 June 2006

My two cents

I'm a creature of habit. Each morning, upon arriving at the hospital where I work, I stop at the café. I always get the same size and type of coffee, and I know the price in advance. Surprisingly enough, today my coffee was 2 cents cheaper. Why? Well, the Conservative government has just lowered the goods and services tax (GST) by one percentage point, from 7% to 6%.

Of course the price difference on a cup of coffee is utterly trivial. (How long before we save everyone a lot of trouble and get rid of the penny altogether?) If I'd been buying a new car I might have saved a few hundred dollars.

The Conservatives believe that this will stimulate the economy, and that may well be true, although I imagine that might take a little while. But in any case, that's an economic prediction, not a certainty. Economies don't always behave the way economic models suggest they will. (I'm being charitable.) Ultimately, it remains to be seen what will actually happen. If the economy does improve, the Conservatives will no doubt attribute the change to the reduction in the GST, but to convincingly argue for a causal relationship isn't nearly so easy. Any number of other factors could be responsible for such a change.

What seems indisputable is that in the short term at least, the government will take in less tax revenue. I haven't seen any economic analyses about the longer term. Perhaps we're supposed to just take it on faith that lower taxes are a good thing. If government revenues are reduced, then you have to increase debt or reduce spending. For fans of smaller government, the choice seems clear.

When Conservatives look at government spending, they see some prime targets: health care, education, social programs, funding for the arts. Oddly enough they seem to forget one big area of government spending: the military, which seems to enjoy some special metaphysical status. In any case, wouldn't cutting military spending be ... unpatriotic?

Tomorrow is Canada Day, and I must admit that while I generally abhor nationalism, I do have a soft spot for Canada Day. I think this is a wonderful country, and we're so fortunate in so many ways. I also love the celebration of diversity that has become such a central part of our national holiday.

I'm sympathetic to fiscal conservatives who want to cut bureaucratic waste and mismanagement. The trouble is it's much easier to talk about doing that than to actually achieve real progress. It's also pretty clear to me that while everyone would like to eliminate inefficiency, Stephen Harper's Conservative government has a lot more than that on their agenda. The move towards privatizing health care in this country isn't about eliminating inefficiency (in fact I'd argue just the opposite). It's part of the broader plan to downsize social spending in general. I, for one, would rather pay a few more cents for my coffee if it helps protect Canada's healthcare system and social programs.

Sunday, 25 June 2006

That question sucks!

Andrew Gelman commented yesterday on a recent CBS News Poll which asked the following question:
"Should U.S. troops stay in Iraq as long as it takes to make sure Iraq has a stable democracy, even if it takes a long time, or leave as soon as possible, even if Iraq is not completely stable?"
Gelman points out that it's a "double-barrelled" question with
"... the assumption that U.S. troops will 'make sure Iraq has a stable democracy,' along with the question of how long the troops should stay".
He also notes that the New York Times piece on the poll included "a yucky graph (as Tufte would put it, 'chartjunk')", which I have shown here.

It really doesn't get much better than this: an exceedingly slanted question and an exceedingly silly graph! If I may, I'd like to name it the bow tie graph ... but is that taken? On this, I defer to Kaiser over at Junk Charts. Dressing up simple percentages with multicoloured variable-sized triangle regions is ingenious, but misguided. It's a shame that so much creative effort is misspent. There are many situations where new ideas for displaying data are needed, but instead we get a never-ending stream of bizarre ways to display percentages.

The question itself stands as striking evidence of media bias. Respondents should have been given a third option: "That question sucks!" (I'm reminded of the segment on This Hour Has 22 Minutes called "That Show Sucked!") As it is, it appears that about 7% of Republicans, 5% of Democrats, and 8% of Independents didn't answer the question. Some of those who didn't answer may have felt the question was too stupid to dignify with a response. But it's quite interesting to speculate on what the responses might have been like if the following (perhaps more dignified) choice had been included:
I can't answer this question because I believe it has built-in assumptions that I disagree with.
I often find that this is my reaction to opinion poll questions. To be fair, even with an honest effort to understand people's views (which seems utterly implausible in the case above), it's not easy to ask good questions, or to provide a good set of response choices. This seems like a strong argument in favour of qualitative research methods, which avoid imposing a predefined (and possibly ideologically loaded) structure on responses. I'm very much in the quantitative camp, but for complex matters like political opinions, I can see that there may be some value, particularly in the early stages of research, in taking a qualitative approach.

When it comes to exposing ideological bias, no one's better than Tom Tomorrow. Check out this archive of cartoons from his brilliant series, This Modern World.

Saturday, 17 June 2006

Another kind of nothing

Missing values are the bane of the applied statistician. They haunt our data sets, invisible specters lurking malevolently.

The missing value is like the evil twin of zero. The introduction of a symbol representing "nothing" was a revolutionary development in mathematics. The arithmetic properties of zero are straightforward and universally understood (except perhaps when it comes to division by zero, a rather upsetting idea). In comparison, the missing value has no membership in the club of numbers, and its properties are shrouded in mystery. The missing value was a pariah until statisticians reluctantly took it in—someone had to. And it's an ill-behaved tenant, popping in and out unexpectedly, sometimes masquerading as zero, sometimes carrying important messages—always a source of mischief.

... symbolizing nothing

A variety of symbols are used to represent missing values. The statistical software packages SAS and SPSS, for example, use a dot. The fact that it's almost invisible is oddly fitting. Other software uses NA or N/A—but does that mean "not available" or "not applicable"? These are, after all, two very different situations. The IEEE floating point standard includes NaN, meaning "not a number" (for example 0/0 is not a number). In spreadsheets, a missing value is simply an empty cell (but in Excel, at least, invalid calculations result in special types of missing values—for example 0/0 results in "#DIV/0!"). Dates and times can also be missing, as can character string variables.
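
Incidentally, R (whose missing-value rules I return to in the addendum below) keeps "missing" and "not a number" as related but distinct notions. A quick sketch:

  is.na(NA)    # TRUE
  is.nan(NA)   # FALSE: NA is missing, but it isn't NaN
  is.na(0/0)   # TRUE:  is.na() treats NaN as missing too
  is.nan(0/0)  # TRUE:  0/0 really is "not a number"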

Logical expressions, such as "X > 0" (which can either be TRUE or FALSE), are an interesting special case. If X is missing, then the value of the expression itself is missing. Suppose Y=5. If X is missing, what is the value of the expression "(X > 0) AND (Y > 0)"? Well, we can't say, because we need to know the values of both X and Y to determine the result. So the value of "(X > 0) AND (Y > 0)" is missing. How about "(X > 0) OR (Y > 0)"? In this case, the answer is TRUE. It is enough to know that Y is positive to answer the question, regardless of the value of X. (There's also a logical operation called exclusive-OR, denoted XOR, which means "one or the other is true, but not both". You'd need to know both values in that case.)
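
Here is how those rules play out in R (a quick sketch; as noted in the addendum below, SPSS follows the same logic):

  x <- NA   # X is missing
  y <- 5
  (x > 0) & (y > 0)   # NA: the answer depends on the unknown X
  (x > 0) | (y > 0)   # TRUE: knowing Y > 0 is enough
  xor(x > 0, y > 0)   # NA: exclusive-OR needs both values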

Even though the rules above seem straightforward, great care must still be taken in ensuring that calculations are handling missing values appropriately. That's because in reality there are any number of different kinds of missing values. Suppose, for example, that as part of a clinical study of neurotoxic effects of chemotherapy agents, IQ is measured. What does a missing value in the data set mean? Perhaps the patient wasn't available on the day the measurement took place. Or perhaps they died. Or perhaps their cognitive disability was so severe that the test couldn't be administered. In the last two cases, the missingness might well be related to the neurotoxic effect of interest. This is known as informative missingness. Statisticians also distinguish the case where values are "missing at random" versus "missing completely at random". The latter is the best we can hope for—but it's often wishful thinking.

Something for nothing

One potential solution to the problem of missing values is imputation, that is, filling in values ... somehow. One approach is mean imputation, in which the mean of the values that are not missing is substituted for any missing values. Seems reasonable, except that it effectively reduces variability, which can seriously distort inferences. A variety of other imputation methods have been proposed, the most sophisticated of which, multiple imputation, allows for valid variance estimates provided a number of assumptions hold. The bottom line is there are no easy solutions: you can't get something for nothing ... for nothing.
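
A tiny simulation in R (with made-up numbers) shows the variance shrinkage that mean imputation causes:

  set.seed(1)
  x <- rnorm(100, mean = 50, sd = 10)   # complete data
  x[sample(100, 30)] <- NA              # remove 30 values at random
  sd(x, na.rm = TRUE)                   # close to 10, as expected
  x_imputed <- ifelse(is.na(x), mean(x, na.rm = TRUE), x)
  sd(x_imputed)                         # noticeably less than 10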

Too much of nothing

The really unfortunate thing is that missing values are often the result of bad design. Perhaps the most common instance of this is surveys. Most surveys are too long! This creates at least three problems. The first is non-response (which is missing values writ large). While I might be willing to spend 30 seconds answering a questionnaire, I'd be much less interested in spending 10 minutes. The second problem is that even if I do answer the questionnaire, I may get tired and skip some questions (or perhaps only get part way through), or take less care in answering. The third problem is that long surveys also tend to be badly designed. There's a simple explanation for this: when there are a small number of questions, great care can be taken to get them right; typically when there are a large number of questions, less effort is put into designing each individual question. "Surveybloat" is ubiquitous and wasteful, often the product of design-by-committee. The desire to add "just one more question" is just too strong and the consequences are apparently too intangible (despite being utterly predictable). I would say that most surveys are at the very least twice as long as they ought to be.

In medical research, the randomized controlled trial is considered to be the "gold standard" of evidence. By randomly assigning an experimental intervention to some patients and a control (for example, standard care) to other patients, the effect of the experimental intervention can be reliably assessed. Because of the random allocation, the two groups of patients are unlikely to be very different beforehand. This is a tremendous advantage because it permits fair comparisons. But everything hinges on being able to assess the outcomes for all patients, and this is surprisingly difficult to do. Patients die or drop out of studies (due to side effects of the intervention?); forms are sometimes lost or not completed properly; it's not always possible to obtain follow-up measurements—the sources of missing values are almost endless. But each missing value weakens the study.
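
To see how informative missingness can distort even a randomized comparison, here is a toy simulation in R (the numbers are entirely made up: a true treatment effect of 2, with the worst-off fifth of patients lost to follow-up):

  set.seed(42)
  n <- 200
  treat   <- rep(0:1, each = n/2)             # two equal groups
  outcome <- rnorm(n, mean = 10 + 2 * treat)  # true effect: +2
  seen    <- outcome > quantile(outcome, 0.2) # worst 20% drop out
  mean(outcome[treat == 1]) - mean(outcome[treat == 0])  # near 2
  mean(outcome[treat == 1 & seen]) -
    mean(outcome[treat == 0 & seen])          # biased toward zero

Because the control group has more of the worst outcomes, it loses more patients, and the comparison among the patients who remain understates the true effect.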

If this is a problem with prospective studies, in which patients are followed forward in time and pre-planned measurements are conducted, imagine the difficulties with retrospective studies, for example reviews of medical charts. Missing values are sometimes so prevalent that data sets resemble Swiss cheese. In such cases, how confident can we really be in the study findings?

Learning nothing

Most children learn about zero even before they start school. But who learns about missing values? University-level statistics courses cover t-tests, chi-square tests, analysis of variance, linear regression ... (How much of any of this is retained by most students is another question.) It's only in advanced courses that any mention is made of missing values. So graduate students in statistics (and perhaps students in a few other disciplines) learn about missing values; but even then, it's usually from a rather theoretical perspective. In day-to-day data analysis, missing values are rife. I would hazard a guess that of all the p-values reported in scientific publications, at least half the time there were missing values in the corresponding data, and they were simply ignored. In scientific publications, missing values are routinely swept under the carpet.

Missing values are a bit of a dirty secret in science. Because they are rarely mentioned in science education, it's not surprising that they are often overlooked in practice. This is terribly damaging—regardless of whether it's due to ignorance, dishonesty, or wishful thinking.

Nihil obstat

In some cases, missing values may just be an irritation with little consequence other than a reduction in sample size. It would be lovely if that were always the case, but it simply isn't. We ignore missing values at our peril.

———

Addendum (22June2006):

In my post I discussed how logical expressions are affected by missing values. The difference between a value that is not available and one that is not applicable has an interesting effect here. Suppose that following an initial assessment of a patient, a clinic administers a single-sheet questionnaire each time the patient returns. One of the questions is:
Since your last visit, have you experienced such-and-such symptom?
It might be of interest to know what proportion of patients have ever answered yes. Suppose that patients returned to the clinic up to three times. A logical expression to represent whether each patient had ever experienced the symptom would be:
symptom = v1 OR v2 OR v3
where v1 is TRUE if the patient reported the symptom on the first return visit, and likewise for v2 and v3. Suppose that a particular patient returned to the clinic three times, and answered "no" the first two times, but the questionnaire sheet for that patient's third visit was misplaced. Then v1=FALSE, v2=FALSE, and v3 is missing (not available). Following the rules that I discussed earlier for logic with missing values (these are rules used in SPSS and R, and I suspect in most other statistical packages), the value of the logical expression would be missing, which makes sense: we unfortunately don't know if the patient ever reported experiencing the symptom.

Suppose that another patient only returned to the clinic twice, also answering "no" on both visits. Then again v1=FALSE, v2=FALSE, and v3 is missing (not applicable, since there was no third visit). Blindly applying the rules, the value of the logical expression would again be missing. But this time, it's incorrect: we know that this patient never reported experiencing the symptom.
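
In R (whose rules I've been using here), the two cases look like this:

  v1 <- FALSE; v2 <- FALSE; v3 <- NA
  v1 | v2 | v3                      # NA: right for the lost sheet (not available)
  any(c(v1, v2, v3), na.rm = TRUE)  # FALSE: right when there was no third visit,
                                    # since the inapplicable v3 should be dropped

The software can't tell the two kinds of missingness apart; the analyst has to know which rule applies to which patient.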

This is one of the justifications for my statement that "Even though the rules above seem straightforward, great care must still be taken in ensuring that calculations are handling missing values appropriately."

Thursday, 15 June 2006

Of buffoonery and bigotry

Tonight is the New York City premiere of a documentary film called American Zeitgeist. The film's subtitle is "Crisis and conscience in an age of terror", and it looks fascinating. Following the screening—in fact as I write this—a debate is taking place between Eric Margolis and Christopher Hitchens.

My friend Ray pointed this out, in passing, on his blog earlier this week, and I took the opportunity to comment on Christopher Hitchens. Here is an edited version of my comments:

————————

Part of me thinks that the best response to Christopher Hitchens is simply to ignore him. How anybody can consider him to be anything but a complete buffoon is beyond me.

I think it's interesting to compare Christopher Hitchens and Ann Coulter. On the face of it, they're very different. Coulter is indisputably a joke (albeit a very nasty one), with no pretense of seriousness or intellect. Hitchens, on the other hand, has the sheen of intellectual and moral respectability.

But Coulter and Hitchens are reading from the same hateful script. Coulter plays the comic while Hitchens plays the learned professor. The groundlings are tickled by Coulter's antics, while the folks in the balconies are enthralled by Hitchens' sage pronouncements. There's something for everyone!

That is, unless you'd like a little honesty or decency.

At first blush, ignoring their nonsense seems an attractive option. But media ownership being what it is, Coulter and Hitchens are guaranteed to get lots of exposure. And they're both dangerous.

Here's a small taste of the world according to Hitchens:
"if Muslims do not want their alleged prophet identified with barbaric acts or adolescent fantasies, they should say publicly that random murder for virgins is not in their religion."
This is the kind of inflammatory rhetoric Hitchens is famous for. The same Hitchens who has been unrelenting in his support for the invasion and occupation of Iraq. One of his most recent pronouncements is that Haditha is not like My Lai. How fortunate.

Hitchens has demonstrated repeatedly that he's morally bankrupt, not the least in terms of the way he treats people, and the way he cites evidence. Everyone makes mistakes from time to time, and in general people deserve the benefit of the doubt. But at a certain point you have to pull the plug. Hitchens reached that point ages ago. I'm reminded of a line from Shakespeare: "O wicked wit and gifts, that have the power So to seduce!"

For more insight into Hitchens, see this article or this one.

————————

Another commenter took objection to my characterization of Hitchens and to what she took to be a de facto attack on anyone who agrees with him. That wasn't my intent, but if you're interested, you be the judge (click on the comments link at the end of the blog post).

Monday, 29 May 2006

Another definition of science ...

"... a body of knowledge collected and nurtured by experts according to neutral, objective, and universal standards."
That's from an entertaining article with the staggeringly unoriginal title "The Management Myth" in the June issue of The Atlantic (which was kindly passed on to me by Brother Hrab).

Hmmm, doesn't strike me as a great definition. Who are these "experts"? And what are these "neutral, objective, and universal standards"? But most of all, I still think that science is fundamentally observational. To give some context, the author of the piece, Matthew Stewart, was discussing the historical development of "scientific management". (Full disclosure: I co-authored a paper in the journal Management Science a few years ago.) To my mind, unless there's an attempt to take careful observations, it's not science.

As an aside, I'd like to comment on an ambiguity in the word observational. Sometimes people distinguish experimental from "observational" methods. But of course observation is a component in experimentation; the real distinction is that in experimentation there is planned manipulation of conditions. Sometimes, to evade this distinction, people refer to "natural" experiments, namely observations with coincident variation in potential explanatory factors. But you can't get around the fact that these are not real experiments, and may well suffer from the usual shortcomings of non-experimental studies. And that would be my preference for terminology: experimental versus non-experimental studies. Sometimes people will insist that it's not science if it's not experimental, but this is going too far: it would rule out—among many other sciences—astrophysics and evolutionary biology. Of course there are many challenging issues in analysis of data from non-experimental studies. Consider, for example, the analysis of data from a case-control study. While this epidemiological design is indispensable for investigating rare outcomes and in cases where randomization is not ethical, the problem of confounding can bedevil analysis. Hey, science isn't always easy (certainly not as easy as the textbooks sometimes portray it).

Returning to Stewart's piece in The Atlantic, I think he's at his best skewering management fads and the associated vapid management-speak:
"On the whole ... management has been less than a boon for those who value free and meaningful speech. M.B.A.s have taken obfuscatory jargon—otherwise known as bullshit—to a level that would have made even the Scholastics blanch. As students of philosophy know, Descartes dismantled the edifice of medieval thought by writing clearly and showing that knowledge, by its nature, is intelligible, not obscure."
I end with a cartoon that, by its nature, is intelligible, not obscure:

Wednesday, 17 May 2006

The trouble with models

Models are central to science. A scientific model is a kind of representation of physical reality, an analogy if you like. For example, the image on the left is of the "planetary model" of a nitrogen atom.

The power of analogy is that it lets us think about one thing in terms of something else—often something simpler or more familiar. An example of this is metaphor: when we say, for example, that "life is a journey", we are using the relatively simple model of a journey (with a starting point, a period of travel, a destination, etc.) to understand the more nebulous concept of life. The importance of metaphor was first revealed to me by the marvelous book Metaphors We Live By, now available in a second edition. (About 15 years ago, I picked up a copy at a used bookstore more or less by chance. It had a tremendous impact on my thinking; it was some time later that I learned that it's actually rather a famous book.) The authors, Lakoff and Johnson, argue not only that metaphors are ubiquitous, but that they in fact structure the way we think. Using what strikes me as a more general version of the same argument, Douglas Hofstadter suggests (in a very entertaining essay) that analogy may be the "core of cognition".

Analogies are indeed wonderful. But it is well to remember that no analogy is perfect, and sometimes an analogy can actually be an obstacle to understanding. The notion that "life is a journey" can be helpful, but it may lead us to overlook aspects of life that don't resemble a journey. For example it may cause us to unduly focus on trying to "get somewhere" in life. An alternative to the title Metaphors We Live By might be Metaphors We're Trapped By. Notwithstanding the fact that a metaphor can be very informative, strictly speaking, it is always a lie. Or as statistician George E. P. Box put it:
"All models are wrong; some are useful."
The trouble is, we often confuse models with reality. (Incidentally, that applies to fashion models too!) What is true about a model may not be true about what it represents. Sound familiar? It's very much like the objection I quoted in my recent post about definitions: it may not be valid "to draw conclusions about what is true about the world based on what is true about a word". (And what is language but a type of model? I believe this links rather directly to the field of semiotics, but unfortunately I don't know much about that!)

More things in heaven and earth ... than are dreamt of in your philosophy

It seems there has been an increasing recognition that models lie at the heart of science (rather than "laws"). And models are necessarily imperfect. The planetary model of the atom was soon superseded by the Bohr model, which was followed in turn by even better models. At its best, science progresses by recognizing the shortcomings of models and substituting more appropriate ones. But the process by which this is achieved remains controversial. The interplay of deductive and inductive reasoning is part of it, but it may be far more complex than that, as Thomas Kuhn argued in The Structure of Scientific Revolutions.

Whether we like it or not, we're stuck with models, not only in science but in language and indeed in the core of thought itself. In an essay on "The Fall and Rise of Development Economics", Paul Krugman has a fascinating section on "Metaphors and Models" in which he writes:
"The problem is that there is no alternative to models. We all think in simplified models, all the time. The sophisticated thing to do is not to pretend to stop, but to be self-conscious -- to be aware that your models are maps rather than reality."

Tuesday, 16 May 2006

Pompous rhetoric

Canada's Conservative government wants to extend the stay of Canadian troops in Afghanistan another two years beyond the current February 2007 deadline. (See reports from The Globe & Mail and CBC.) After a 6-hour debate tomorrow, members of parliament will vote, and Canadians will have to live with the consequences. Pronounced Prime Minister Stephen Harper:
"What we are doing there is not just protecting our national interests, but providing international leadership and providing real advancement to the standard of living and human rights of the Afghan people."
Do these claims stand up? First, how is the Canadian military presence in Afghanistan protecting our national interests? No doubt there will be some grand words in the House of Commons tomorrow, but I'd like to hear a cogent argument, not just hot air. A national child-care plan would be in our national interest, but the Conservatives won't hear of that.

Next, does a foreign military adventure demonstrate international leadership? Perhaps we're supposed to believe this simply because the words "military" and "leadership" happen to be in the same sentence. Living up to our Kyoto commitments to reduce greenhouse-gas emissions would show real leadership, but instead the Conservative government wants to back out of that agreement.

Finally, is the Canadian military presence in Afghanistan advancing the standard of living and the human rights of the Afghan people? Perhaps, but what's the evidence? And could we achieve more by different means? I believe it would be much better for Canada to provide financial support and diplomatic interventions to nurture real democratic progress in Afghanistan.

I wonder if the Conservatives have anything more than pompous rhetoric to support this military adventure?

Sunday, 14 May 2006

A defining moment

Definitions are wonderful and terrible. Wonderful because carefully chosen definitions bring thought into focus. But terrible because conflicting definitions are the source of endless misunderstandings. Try to define something carefully and it won't be long before someone suggests that "it's just semantics!" Translation: you're wasting your time on words instead of what really matters.

While I agree that it's invalid "to draw conclusions about what is true about the world based on what is true about a word" (I'm quoting from the Wikipedia page on semantics), that hardly makes semantics irrelevant! Semantics is, after all, the study of meaning. I don't imagine anyone has ever objected that "it's just meaning!"

The trouble is, some words carry around a lot of baggage. As I've just indicated, an unfortunate example is semantics itself! Lately, I've been thinking a lot about evidence. Not easy to define, but everybody seems to think it's a good thing. Similar comments apply to science and research, both of which are favorites of advertisers—a sure sign that these words evoke powerful responses.

A couple of naive approaches to definitions deserve mention. One is simply to resort to a dictionary. This raises the (oddly recursive) question of whether dictionaries are or should be descriptive or normative. The fact that multiple definitions are often given for a word suggests a descriptive role. But I think that at the same time they try to be authoritative. The fact that different dictionaries sometimes offer strikingly different definitions suggests that this is an impossible task. Language is fluid in part because thought is fluid. Evolving concepts that are abstract and controversial don't submit to anyone's putative authority. So while dictionary definitions do provide grist for the mill, they're hardly definitive, and certainly not the last word beyond criticism.

In my most recent post on evidence, I cited the Oxford English Dictionary, focusing not only on the definitions offered, but also on the etymology of the word. Word origins can inform a definition, but it's naive to rely on them exclusively. Science, for example, derives from the Latin scientia, meaning knowledge. Yet science surely means something different from simply knowledge.

For example, I know a number of quotes from Shakespeare's plays, but that kind of knowledge isn't science. The Wikipedia entry on science notes that science is often used informally to mean "any systematic field of study or the knowledge gained from it". For the sake of clarity let's call "the knowledge gained from it" scientific knowledge. But when we say science, do we really mean "any systematic field of study"? I would prefer to use the word research for that purpose. For example, I don't consider theology to be a science, but have no problem with the term "theological research".

Science, as I understand it, is fundamentally observational. I've made the same point about evidence, and I think the two are clearly related (but more of that in another post). An interesting test case is "computer science", sometimes known as "computing science" since its focus is really on computing more so than computers ("Computer science is no more about computers than astronomy is about telescopes." - Edsger Dijkstra). But is it a science? Well, the scientific method can be applied in computing science, in the design and analysis of empirical studies, but most of computing science is about mathematics.

And mathematics, while strongly associated with science, isn't a science. I believe that science is a good thing, but not all good things are science. I also believe that science gets at truth (and that's a challenging notion), but it's not the only thing that gets at truth. That the Earth revolves around the Sun is a truth established scientifically. Pythagoras' theorem in Euclidean geometry is a truth established mathematically. Both are objective truths in that anyone can, in principle, verify them. I suspect that there are also subjective truths that may be established using entirely different methods. The relationships between these different truths and the methods used to establish them strike me as profoundly interesting.

I still haven't offered a definition of science, and in fact I suspect that ultimately what may be most important is the pursuit of a definition rather than a definition per se. I would provisionally define science as the practice of the scientific method—clearly a bit of a dodge! The Wikipedia entries on science, the scientific method, and the philosophy of science provide useful background, but no clear consensus. I find myself in agreement with much of what is written in those entries, but not all of it. (For example, Karl Popper's perspective on science is prominent, but I have my doubts.)

Science, technology, and evidence-based medicine

Notably, discussions of science often focus exclusively on natural science. But the essence of science is careful observation and reasoning from observation, which clearly can be applied to any observable phenomena. This obviously includes human behaviour, notwithstanding the formidable obstacles that arise. The scientific method can also be applied in evaluating technology. Evidence-based medicine is an example of this. While natural science is of central importance in guiding the development of healthcare technologies—be they drugs, devices, surgical techniques, what have you—the evaluation of their performance is not a matter of natural science. The application of good science may lead to a technology that turns out not to work, or to have unanticipated adverse effects that preclude its adoption. Conversely, useful technology need not result from the application of science. Many drugs, for example, have come to us by non-scientific routes. Ultimately, regardless of their origins, healthcare technologies need to be evaluated to determine how well they work, and what adverse effects they may have. Why they work the way they do is of secondary importance.

Tuesday, 9 May 2006

Cool optical illusions

I'm a big fan of optical illusions. And I just found a blog devoted to them, from which I copied the remarkable image shown here. I got there from Blogs of Note, which features daily links to "Interesting and noteworthy Blogger-powered blogs, compiled by the Blogger Team." Their archives go back more than 5 years—great for semi-random web surfing!

By the way, for more cool optical illusions, I'd recommend the book Incredible Visual Illusions by Al Seckel.

Monday, 8 May 2006

Disclaimer

The views and opinions I express on this blog are mine, and not those of my employers, clients, family, or friends, past or present. This blog is in no way affiliated with my employers, clients, family, or friends, past or present.

The information and content on this blog are provided "as is" with no warranty of any kind, either express or implied.

Links

I cannot guarantee that webpages I link to will work all of the time and I have no control over the content of linked pages. I am not responsible for the contents of any linked websites and do not necessarily endorse the views expressed on them. The fact that I link to a given website does not mean that I endorse the contents of that website.

Comments

I welcome comments, but reserve the right to delete comments as I see fit. Comments represent the views and opinions of those who post them, and I do not necessarily endorse these views and opinions.

Sunday, 7 May 2006

Miscellany

It's Sunday evening, and rather than coming up with something thoughtful myself, here are some miscellaneous items of interest. A cornucopia if you will, as illustrated at left.

First, something important—an urgent appeal from Amnesty International Canada:
On May 9th the U.N. General Assembly will elect the members of the new UN Human Rights Council.

Canada, which is standing for election, has a critical role to play in ensuring that the Council's mandate of protecting all human rights for everyone is not a hollow promise.

Send an email to Canada's Minister of Foreign Affairs, the Honorable Peter MacKay, and urge him to ensure Canada demonstrates clear leadership in the respect and fulfillment of human rights norms and standards.

Take action before May 9th!
If you're in a science-and-public-communication mood, this post is worth reading.

Next, something fun (via Antonella Pavese's interesting blog): a very entertaining manifesto on how to be creative by advertising executive and blogger Hugh MacLeod.

Incidentally, Antonella Pavese recommends this cleverly named, free, online task manager: remember the milk. Ok, I haven't actually tried it, but their symbol is just too cute.

Wednesday, 3 May 2006

A view of evidence

In the last month I've discussed various aspects of the evidence-based debate. I'd now like to present some of my ideas on the subject. As I've suggested previously, the definition of evidence seems to lie at the heart of the dispute. But before I attempt to define evidence, I think it's useful to consider the word itself.

Evidence is the noun form of the adjective evident, ultimately deriving from the Latin evidens from ex- (out, forth) + videre (to see). The Oxford English Dictionary gives the following definitions for evident: "1. a. Conspicuous b. Obvious to the sight. 2. Clear to the understanding or the judgement; obvious, plain. 3. Indubitable, certain, conclusive. —1653." And the primary definition of evidence is simply "The quality or condition of being evident."

So a very literal interpretation would take evidence to be what can be seen, or "shown forth". It seems natural to extend this beyond vision to sense perceptions in general. Further metaphorical extensions bring us to the notion of clarity, certainty, and conclusiveness. And already the problem becomes apparent: we have moved from sense perceptions to things that might be considered similarly certain and conclusive. Like what? Deductive arguments? Accepted theory? Long experience? Expert opinions? Religious precepts?

Let's consider each of these in turn, starting with deductive arguments. A valid deductive argument is water-tight—provided that its premises are true. But how do we establish them? Perhaps we can depend on accepted theory. But history is littered with theories that were once universally accepted, but are now discredited or superseded. Of course there's always long experience. Experience is the cumulative product of personal practice and observation. But it is notoriously subject to selection bias (and perhaps other biases too). And when it comes to rare events, no amount of experience is sufficient. Can expert opinion step into the breach? The opinion of an expert represents a synthesis of many different sources of information, usually carried out over many years. While there may be many good reasons to trust an expert (such as his or her qualifications, intelligence, experience, and good standing in the community), well-meaning experts have been spectacularly wrong any number of times. Ultimately trusting expert opinion is an act of faith. Which nicely brings us to religious precepts as a source of certainty. These are perhaps the original "self-evident" truths—that is, to the believer, but perhaps to nobody else.

It is, of course, true that sense perceptions can be misleading too. A classic example is the straight stick that appears bent when placed in a glass of water due to the refraction of light. Any number of other illusions and hallucinations make interpretation of sense perceptions a thorny philosophical problem—one of the fundamental challenges in epistemology. Without wishing to minimize these, I will set them aside, by simply asserting that we know the external world through our sense perceptions. They are the only raw materials at our disposal, and they are the closest we can get to certainty about the external world. Evidence is observational.

But that doesn't mean that observation is all there is to evidence. For instance, a scientific experiment involves action (manipulating or controlling conditions) as well as observation. But if there's no observation, there's no evidence. To consider a particular example, this means that opinion doesn't count as evidence, but observation of opinion does! That is:
  • Opinion about X isn't evidence about X.
But
  • Observation of opinion about X is evidence about opinion about X.
I certainly don't mean to imply that opinion is worthless—that would be a bit self-defeating, now wouldn't it?—but it just can't be counted as evidence. In any case, regardless of what I or anyone else has to say, opinion will always guide us.

Ultimately, I think that opinion is most convincing when it is backed up—whether by deductive argument, theory, experience, evidence, or some combination of these. And I hope that my opinions have some of these supports.

While I haven't yet given a definition of evidence, I have presented what I think is a crucial qualification: evidence is observational. But there's a lot more to it than that!

Saturday, 29 April 2006

Probably incorrect probability

A recent post on the blog Blackademic about the alleged rape of a black woman by white Duke University lacrosse players generated a flurry of comments. One of them anonymously argued:
"Unfortunately, statistically a black women is significantly more likely to make a false accusation of rape than to have been raped by a white man. According to the National Crime Victimization Survey ( http://www.ojp.usdoj.gov/bjs/pub/pdf/cvus/current/cv0342.pdf ), less than .0004% of black rape victims were raped by whites. (The NCVS reports the percentage as 0% because there were less than 10 reported cases. I assumed 9 cases, to come up with an actual percentage) Even with the most conservative figure of 2% of rape allegations being false, this means in the case of the Duke Rape Case, the victim is 5000 times more likely to have made a false accusation than to have actually been raped."
There were some perplexed responses to this dramatic claim:
  • "yeah, cuz stats and figures are ALWAYS correct--whatever. it depends on who did the survery and for whom."
  • "How did you come up with the 5000 times more likely figure? That makes no sense at all. Using the figures you cited, the victim regrdless of race is likely to be lying only 2% of the time."
  • "Even if this study were accurate, and even if it were ethical to invoke the laws of probability to determine whether someone is believable-- two outsized ifs-- one coin, landing heads up, doesn't determine the likelihood of the next coin landing heads up. Neither does one woman, 30 years ago, have any bearing on the likelihood that another woman is telling the truth."
  • "the statistics and logic are just that - excercises in probability that tell us nothing about the case in question, because they are not equal to evidence."
But I think these responses miss the point: as far as I can see, the claim is simply incorrect. I think my reasoning is correct, but if I've slipped up please leave a comment.

First, Anonymous claimed that "less than .0004% of black rape victims were raped by whites." I followed the link to the National Crime Victimization Survey to check on this. The total number of rapes or sexual assaults of blacks listed was 24,010 and based on "about 10 or fewer sample cases" the perceived offender was white 0.0% of the time. I'm not entirely sure what this means, but Anonymous reasoned that as many as 9 of the offenders might be white. Now 9 out of 24,010 is about 0.04%, not 0.0004%. Anonymous then introduces "the most conservative figure of 2% of rape allegations being false". Dividing 2% by the incorrect figure of 0.0004%, Anonymous claims that "the victim is 5000 times more likely to have made a false accusation than to have actually been raped". If we divide 2% by the correct figure of 0.04%, we get 50 not 5000!
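
The arithmetic is easy to check, in R for instance (with the percentages written as proportions):

  9 / 24010         # about 0.000375, i.e. roughly 0.04%, not 0.0004%
  0.02 / 0.0004     # 50:   2% divided by the correct 0.04%
  0.02 / 0.000004   # 5000: 2% divided by the erroneous 0.0004%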

But apart from this error, the interpretation of the ratio is wrong. For it to be right, we would have to know the probability that the woman was raped. But that's not what the 0.04% represents. Instead, it's an estimate of the probability that if a black person is raped, the offender is white.

So the whole thing is invalid. The problem isn't with probabilistic reasoning per se, it's with faulty probabilistic reasoning. And that's a shame when something so important is at stake.

Monday, 24 April 2006

The thrust and parry of the evidence-based-medicine debate

The debate around evidence-based medicine (EBM) makes for fascinating reading, not least because of the prevalence of hyperbole. In a 2004 paper (it's not open access, but here is the reference), Massimo Porta writes:
"Common sense should build upon a body of evidence and experience accrued over the centuries and shared by the medical community. That some members of the community have made it their task to define which parts of the collective experience constitute evidence and which have less title to reach above water has contributed to the current state of affairs. EBM acolytes now perceive practitioners as grubby underlings, hopeless at applying the latest (evidence-based) literature. Clinicians, resentfully, feel watched by nerds who spend their time sipping coffee while talking to computers instead of patients."
Ow! He continues:
"When it began, it all sounded rather sensible: treatments should be tested for efficacy and trials should be controlled, randomized, double-masked and sufficiently powered. Procedures that do not pass muster should not be recommended for use in clinical practice and self-respecting, commonsensical doctors should refrain from adopting them anyway. But then epidemiologists, statisticians and librarians saw power befalling them as they trotted unexplored avenues towards number crunching."
Given my recent post, I was rather amused by his references to power-hungry number crunchers! In part he was responding to a tongue-in-cheek article in the 2002 holiday issue of BMJ, which purports to reveal the 10 commandments of evidence based medicine:
  • Thou shalt treat all patients according to the EBM cookbook, without concern for local circumstances, patients' preferences, or clinical judgment
  • Thou shalt honour thy computerised evidence based decision support software, humbly entering the information that it requires and faithfully adhering to its commands
  • Thou shalt put heathen basic scientists to the rack until they repent and promise henceforth to randomise all mice, materials, and molecules in their experiments
  • Thou shalt neither publish nor read any case reports, and punish those who blaspheme by uttering personal experiences
  • Thou shalt banish the unbelievers who partake in qualitative research, and force them to live among basic scientists and other heathens
  • Thou shalt defrock any clinician found treating a patient without reference to all research published more than 45 minutes before a consultation
  • Thou shalt reward with a bounty any medical student who denounces specialists who use expressions such as "in my experience"
  • Thou shalt ensure that all patients are seen by research librarians, and that physicians are assigned to handsearching ancient medical journals
  • Thou shalt force to take mandatory retirement all clinical experts within a maximum of 10 days of their being declared experts
  • Thou shalt outlaw contraception to ensure that there are adequate numbers of patients to randomise.
The humour and inflated language aside, there are some big issues here. For example, is it appropriate to hold up the randomized controlled trial (RCT) as the "gold standard of evidence" and relegate basic science to an inferior position? The authors of a philosophical analysis of the evidence-based medicine debate argue that:
"Statistical information from an RCT is virtually uninterpretable and meaningless if stripped away from the backdrop of our basic understanding of physiology and biochemistry."
Compare this with what one of the originators of evidence-based medicine has to say:
"In many [cases], empirical solutions, tested by applied research methods, are "holding the fort" until basic understanding—of mechanisms and interventions—is forthcoming."
This is just a small sampling. The debate about evidence-based medicine and more generally the evidence-based movement is huge. And with good reason: there's an awful lot at stake.

Sunday, 16 April 2006

Pyramid power?

[photo: McMaster University main campus]

In my second-last post, I discussed the recent controversy over the term "evidence-based". It was popularized through evidence-based medicine, an enormously influential movement spearheaded in the early 1990s by epidemiologists at McMaster University (see accompanying picture of main campus). It certainly sounds reasonable to suggest that medicine (or healthcare more broadly, or education, or policy ...) should be evidence-based, but what does it mean? Here, repeating from my last post, is probably the best-known definition of evidence-based medicine:
"the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients."
Clearly the first step in making sense of this definition is to sort out what evidence is.

But is it really important to define evidence? Isn't it just semantics? Well, I think it does matter—for two reasons. First, there's been a widespread push for evidence-based practice and policy. Funding bodies and organizations are giving priority to "evidence-based" approaches and initiatives, and that can have a substantial impact on what research gets done and what practices and policies get implemented. Second, if evidence is not clearly defined, how can we define what "current best evidence" is?

The attempt to delineate different "levels of evidence" has been an ongoing preoccupation of evidence-based medicine. The notion is that some study designs provide more valid or reliable evidence than others, and that there is a "hierarchy of evidence", often depicted as a pyramid such as this one:
(source)
It's not hard to see why this engenders so much heated debate. For example in the figure above, "in vitro ('test tube') research" ranks below "ideas, editorials, opinions"! But this is only one of several such evidence hierarchies, which have notable differences.

For example, the figure below makes no mention at all of benchtop science, and puts "evidence guidelines" higher than randomized controlled trials (RCTs):
(source)
As with the previous pyramid, meta-analyses and systematic reviews appear, but here Cochrane systematic reviews are judged best of all ("the 'Gold Standard' for high-quality systematic reviews").

Here's one more pyramid, which doesn't include systematic reviews, but does include anecdotal evidence:
(source)
There are lots of other evidence hierarchies, for example the Oxford Centre for Evidence-based Medicine Levels of Evidence, which makes distinctions according to what the evidence is about (e.g. therapy versus diagnosis).

Distilling the different types of "evidence" from these hierarchies suggests that, according to the authors, evidence may include various types of: (1) empirical studies, (2) summaries of empirical studies, and (3) opinions (hmmm ...). But it's certainly clear that there isn't complete consensus on exactly what qualifies in each of these categories, nor on the rankings.

Perhaps all these pyramids haven't been built on the strongest foundations?

Saturday, 8 April 2006

A defense of blogs - part 2

In my first post of this three-part series, I considered the possible origins of negative attitudes about blogs.

In this post, I'm going to examine the significance of blogging as a form of communication. And what better starting point than Marshall McLuhan? I can't claim to understand much of what he wrote, but his epigrams are wonderfully insightful. Perhaps most famous is his assertion that "the medium is the message." And when it comes to blogs, what is the medium? Well, it's global personal publishing that's easy, interactive, and effectively free. McLuhan is suggesting that we should focus on the medium rather than the content per se. Critics of blogging miss this point, choosing instead to decry the quality of much of the content. Here is how journalist Ron Steinman saw things, writing in June 2004:
"Reputedly, there are more than a million blogs and still counting. It is scary. Truly, who has the time to read, digest, and make sense of all the words spewed forth? I do not. I do not want to try."
Methinks he doth protest too much. Is Steinman perhaps suppressing an obsessive-compulsive urge to clean the filthy stables of the blogosphere? Given that the number of blogs today is estimated to be upwards of 30 million, Hercules himself would be daunted.

Fortunately, nobody need take on such a task. The wonderful thing about blogs is that if you don't like them, you don't need to read them! Unlike spam, which is an irritation we could all do without, you can just ignore blogs if that's your preference. You can also ignore books, magazines, television, and movies if you like. Goodness knows, there's lots of trash there too! But most of us reckon that it's possible to separate some of the wheat from the chaff. I don't imagine anyone is entirely successful, but there's lots of good stuff out there, and some good strategies for finding it.

Arguably, the challenge is much greater when it comes to blogs. One solution is to stick to the "A-list" blogs. But I think that's a real mistake, because the message of the blog medium is this: for the first time in human history, an ordinary person can share his or her perspectives, as he or she sees fit, with the rest of the world. A fabulous flowering of creativity and self-expression is taking place; why miss out on it?

It might be argued that blogs are not unique in this respect. Newsgroups, electronic mailing lists, and internet forums have many similarities with blogs, and predate blogs by many years. However, a key distinguishing feature of blogs is their ownership. Fundamentally, newsgroups, mailing lists, and forums are communities, with all the associated strengths and weaknesses. The invitation is: "come and share as we discuss X". While a community can grow around a blog through the commenting feature, the blog belongs to the owner, not the community, and the central focus remains the owner's posts. The invitation is: "check out my posts, and leave comments if you like". In no way is this meant to denigrate the value of the comments. Indeed, I find the comments on my blog to be a wonderful source of insight and humour—and at least half the fun. Similarly, when I read other blogs I often check the comments. Among other things, they give a great sense of who's reading (although of course, there may be many readers who remain silent).

Ironically, despite predictions (by McLuhan among others?) that the written word was doomed by the dominance of electronic media like radio, television, and the internet, blogs are heralding a renaissance in writing. The linearity of the printed page was widely dismissed as old-fashioned and boring, allegedly incompatible with the infinitesimal attention span we've all developed. This was always a weak argument, premised on an oversimplified analysis of patterns of media consumption. What is true is that we read blogs differently from how we read a newspaper, or a magazine, or a book. This is partly due to the "post-centric" nature of blogs (an observation attributed to Meg Hourihan). It is also partly a function of hyperlinks. Incidentally, there has been extensive comment on the journalistic value (or lack thereof) of blogs. I don't intend to weigh in on this, except to point out that the use of links in blog posts allows for the attribution of sources and justification of claims—something the print media could sometimes benefit from. For more on the relationship between blogs and journalism, see this article by Steven Johnson.

The internet is widely seen as the realization of McLuhan's "global village". But unlike many villages of old, blogs are making this one profoundly democratic: now it is not only the chief and the high priest whose voices can be heard—we've always been forced to listen to them—instead we can tune in to whomever we like. A. J. Liebling pointed out that "freedom of the press is guaranteed only to those who own one"—well now everyone can own a press. A soapbox for all! (Without the noise pollution.)

I believe that, by opening up communication, blogs are helping to bring about a huge increase in intellectual efficiency for humanity. Ideas previously isolated by geography (even on a local scale) and stifled by dominant cultural and political assumptions can now flow freely. Earlier technologies have only hinted at this kind of exchange.

One frequently-heard criticism is that blogs are largely driven by vanity and ego. On the one hand, this is simply a tautology. A person's blog is, after all, a projection of themselves (their ego) onto the internet. On the other hand, this is a psychological claim: bloggers derive personal gratification from expressing themselves. But then this too has a tautological flavour, for why else would they do so? Presumably then, the claim is that there is too much ego and pride (a rather more neutral term than vanity) involved. In the case of a blog that is transparently self-glorifying, the claim is plausible. But regardless, there is always the choice to ignore any given blog, particularly if it offers nothing to the reader. On balance, blog narcissism would seem to be a harmless release. And thanks to professionally-designed blog templates, we don't have to deal with so many hideously ugly vanity pages (actually that one is a parody).

I leave you with some interesting links. As so often, the Wikipedia entry on blogs is excellent, with some interesting history and a list of 20 (!) different types of blogs. Seth Godin has a neat e-book about blogs. Finally, this one is more about newsgroups than blogs, but it's too much fun not to include.

Thursday, 6 April 2006

Evidence-based ambiguity


In the last 15 years, evidence-based medicine has taken the world by storm. According to a famous definition, evidence-based medicine is
"the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients."
In this spirit, the evidence supporting many time-honoured practices in medicine has been examined, and in a number of cases found wanting. (For an informative and entertaining look at this, see this slide presentation by former British Medical Journal editor Richard Smith.) The exalted status that expert opinion once enjoyed is waning. Today the cry is "Show me the evidence!" (cf. Jerry Maguire.)

If you'll pardon the pun, the success of evidence-based medicine has been infectious. The prefix "evidence-based" is popping up not just in connection with healthcare: today there is evidence-based education, evidence-based software engineering, evidence-based librarianship, and the list goes on. The "evidence-based movement" seems victorious.

But there are stirrings of discontent. From the beginning, evidence-based medicine has had its critics. (See this editorial for a balanced account of their objections.) A key issue relates to the ambiguity in the word "evidence". If it means empirical evidence, it would seem that clinical experience, pathophysiological theory, patient values, and expert opinion have no role to play. Alternatively, evidence can be defined broadly: "evidence is anything that establishes a fact or gives reason for believing something" (The Oxford American Dictionary, via this report). But this "colloquial" definition opens the doors so wide as to be useless here. For example, a religious argument might be compelling for the believer, but surely would not constitute "evidence" for the present purposes. For some other interesting perspectives on the definition of evidence in evidence-based medicine, see this essay by Amanda Fullan (an undergraduate student at the time).

In recent years the evidence-based movement has expanded to areas such as public health and policy. In a 2004 essay titled What is Evidence and What is the Problem?, the Acting Executive Director of the American Psychological Association writes
"These days, you can hear the terms “good science”, “evidence”, and “data” a lot in Washington. One of the catch phrases around policy-making circles is “evidence-based”, applied to a host of contents including education, policy, practice, medicine, even architecture. You would think that this would make us all quite happy – at least those who advocate that decisions about policy, social interventions, and future directions be based on data. But, ironically, the new emphasis on evidence-based this and that has been simultaneously welcomed and greeted with raised anxiety levels and red flags of concern."
And the ambiguity of the word "evidence" is even more problematic in this context:
"It is clear that discussions of definitions of evidence, distinctions among kinds of evidence (including scientific data, expert judgment, observation, and theory), and consensus on when to use what, will occupy us for some time."
The Canadian Health Services Research Foundation (CHSRF) has recently grappled with these issues, issuing a report and holding a workshop. One of the "key messages" from the workshop was that
"Although the literature shows that decision makers work with a colloquial understanding of evidence (often alongside a scientific understanding), some participants felt strongly that the information classified as colloquial evidence should not be called evidence. They acknowledged the importance of this information but suggested finding a substitute term, such as “colloquial knowledge” or “colloquial factors.”"
Finally, the CHSRF adopted the following (rather extended) definition:
"Evidence is information that comes closest to the facts of a matter. The form it takes depends on context. The findings of high-quality, methodologically appropriate research are the most accurate evidence. Because research is often incomplete and sometimes contradictory or unavailable, other kinds of information are necessary supplements to or stand-ins for research. The evidence base for a decision is the multiple forms of evidence combined to balance rigour with expedience—while privileging the former over the latter."
Hmmm ... not entirely convincing, but I see what they're getting at. But where did they get that stuff about coming "closest to the facts of a matter"? I'd say it's either begging the question or using a circular argument.

Epilogue: In its latest newsletter, the CHSRF announces that it has decided to abandon the term "evidence-based":
"Following feedback and discussions at the “Weighing Up the Evidence” workshop in September 2005, the mission of the foundation has been changed to better reflect the emerging realization that research is justifiably only one, albeit very important, input to decision-making."
The new term? Evidence-informed.

Wednesday, 29 March 2006

Canada should pull its troops out of Afghanistan



Today a Canadian soldier was killed in Afghanistan. That makes 11 since 2002.

For the record, I opposed Canada's involvement in the 2001 invasion of Afghanistan. I didn't question the brutality of the Taliban regime, nor that they provided a haven for al-Qaeda. But I wasn't convinced at the time that invading the country was the best way to improve matters. The events that ensued haven't altered my opinion. Did the invasion improve life for Afghans? Did it stabilize the region? Did it stop al-Qaeda?

What is clear is that a lot of innocent Afghans were maimed or killed, Osama bin Laden is still at large, and the Taliban remain a force to be reckoned with. Hundreds of "enemy combatants" were shipped off to Guantánamo Bay where, according to Amnesty International, many
"... remain held in a legal black hole ... many without access to any court, legal counsel or family visits. Denied their rights under international law and held in conditions which may amount to cruel, inhuman or degrading treatment, the detainees face severe psychological distress. There have been numerous suicide attempts."
A recent Globe and Mail story quotes NDP defence spokesman Bill Blaikie as saying "Canada's silence on Guantanamo is related to the fact that we are complicit in the whole process".

Many Canadians cherish our role as peacekeepers, but it's quite evident that "peacekeeping" doesn't really describe the Canadian military role in Afghanistan. What should our role be? I'd say that's a question for the Afghan people to answer. In the meantime, we can do more good by providing financial, technical, and moral support.

Cursed by its strategic location, Afghanistan has been repeatedly invaded over the years. Foreigners seem intent on butting in. Could it be that we're the problem?

Monday, 27 March 2006

A defense of blogs - part 1

Back in January, I commented on a newspaper article that took shots at both blogs ("high on opinion and low on fact") and readers of blogs ("getting only the 'daily me'"). In fact, negative attitudes about blogs are quite widespread. What's more, predictions are often made that the "fad" of blogging will soon pass.

I'm planning to look at this in three posts. In this one, I'll explore what might be behind these negative attitudes. My second post will defend the blog as a form of communication. And in my third post, I'll try my hand at predicting what the future holds for blogging.

Dissing blogs

The stereotypical blog is truly a thing to be scorned: an online diary packed with inane details of the blogger's life together with uninformed rants—an exhibitionistic ego trip. Admittedly there are plenty of blogs like that. But there are lots of trashy books and magazines too, and nobody feels the urge to dismiss all books and magazines.

Where does the bad rap come from? I don't think there's a single answer, but here are a few contenders. A straightforward explanation is the Google effect: when people search for information on the Internet, increasingly they are stumbling onto blogs, and often stupid ones at that. An irritating distraction like this is unlikely to leave someone with a good impression of blogs. You can understand this reaction, but it's hardly a sensible way to judge the worth of the whole blogosphere.

A relatively subtle explanation may relate to the difficulty of adequately describing to someone just what a blog is. If you say "It's kind of like an online diary" (a quick but clearly inadequate description), it may perpetuate the notion that blogging is something only an exhibitionist would do.

But I think an authoritarian impulse lurks behind some of the criticism of blogging. Blogs let anyone express their opinions, not just the chosen few. Bloggers don't have to spout opinions that'll please the boss, the advertisers, the market, or the government. Bloggers don't have to express the consensus opinion, or bow to the prevailing fashions. And that makes them a threat.

This is not the first occasion when a new form of communication has threatened the status quo. The introduction of the printing press ushered in the era of mass communication. The revolutionary consequences were soon felt, not least with the widespread printing of political pamphlets. No longer did the crown and the church have a monopoly on political expression.

Whenever democracy is ascendant, an authoritarian response is not far behind. This is as true today as it was hundreds of years ago. In China, old-fashioned methods are favoured: dissident voices are simply silenced. In the West, there is no need for such a blunt approach. Authoritarian impulses take a more subtle form. Blogs are denounced as frivolous displays of vanity, offering only drivel or perhaps third-rate analysis. Blogs are just a passing fashion, a bandwagon that will soon crest the hill. But perhaps the true offense is something else: the officially sanctioned organs of mass communication have been bypassed, and (gasp!) they might eventually be displaced altogether.

Noam Chomsky has discussed what he calls "the crisis of democracy" (after the title of a report by the Trilateral Commission), namely the perception by elites that there is too much democratization, and I wonder if we're not witnessing something similar. This kind of argument is often dismissed as conspiracy theory, but it's nothing of the sort. I'm not supposing there's a cabal secretly meeting to plan the downfall of blogs (though it would make a great movie). Thought control in our society doesn't require such exotic methods: the threat is far more effectively neutralized by ridicule and marginalization. A few popular mainstream blogs are given the official blessing, and the rest are written off as juvenile nonsense.

Respect for authority and the urge to conform is sufficiently ingrained in our society that a few respected "opinion leaders" can often set the tempo for the rest. Just as the top dogs in the fashion world dictate which colours we should wear this year, a relatively small number of cultural and political sources provide clear guidance on how we should see the world. The last thing they want is to see their influence diluted.

Saturday, 25 March 2006

I don't CRUNCH numbers!

As a statistician, I'm sometimes asked to "crunch the numbers". Now, I don't mean to sound sensitive, but I don't ... ahem ... "crunch" numbers. The onomatopoeic word CRUNCH suggests roughness, the application of brute force, perhaps in the form of raw computing power.

If you've seen the movie The Horse Whisperer, you'll remember how the character played by Robert Redford worked with horses. Instead of trying to "break" them, he tried to understand them and work with them. Maybe you can see where I'm going with this ...

A good data analysis requires care, patience, and understanding. It's a collaborative endeavour that should make use of subject-area knowledge wherever possible. Every number has a story to tell, and that story is not always immediately apparent. What did the researcher want to measure? How did they measure it? How did the measurement get turned into a number in a data set? And that's only the beginning, because a typical data set is the product of numerous different measurements, perhaps made on several occasions. Once the pedigree and provenance of each variable in a data set have been determined, the picture they form can be brought into focus, and the underlying patterns can be explored. Sensitivity is paramount: Are the modeling assumptions appropriate? Is something being overlooked? Would a different approach provide more relevant insights?
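
If a sketch in code helps, here is roughly what that gentler approach looks like in practice. This is a minimal illustration in Python, assuming a hypothetical file trial.csv with invented columns site, dose_mg, and response; nothing here comes from a real analysis, and the particular checks are illustrative rather than a recipe.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial.csv")  # hypothetical data set

# Step 1: provenance. What does each variable actually measure, and how
# did it become a number? (In practice this means reading the study
# protocol and talking to the researchers; here we can only inspect
# what the file gives us.)
print(df.dtypes)
print(df.describe(include="all"))

# Step 2: look before you model. Missingness and site-to-site oddities
# often tell more of the story than any fitted coefficient.
print(df.isna().sum())
print(df.groupby("site")["response"].agg(["count", "mean", "std"]))

# Step 3: check assumptions rather than assuming them. Residuals from a
# simple linear fit show whether linearity and constant variance are
# even plausible for these data.
fit = smf.ols("response ~ dose_mg + C(site)", data=df).fit()
print(fit.summary())
print("Residual skewness:", fit.resid.skew())

Notice that nothing gets crunched: each step is a question put to the data, and the answer shapes the next one.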

It seems that the term "crunching the numbers" is most commonly used to refer to what accountants do, and on this I can't comment. But for statistical data analysis, the metaphor is all wrong.

Mind matters: or mind-boggling mind blogging

I just got back from a trip to the San Francisco Bay area. To the left is a photo of a T-shirt I bought at The Tech Museum of Innovation in San Jose (which is well worth a visit, by the way). Various family members have opined that it's just too geeky to wear in public, but what do I care? It certainly encapsulates some of my recent thoughts about consciousness (together with humour, one of the mind's stranger characteristics).

I just finished reading Mind: A Brief Introduction by John Searle, which is a fairly accessible work on the philosophy of mind. It seems that some people believe that scientific study of the workings of the brain will eventually reveal all the secrets of the mind. Even if that were true, we're obviously a long way from a good understanding of the brain. On this point, here's a quotation my mother pointed me to:
If the human brain were so simple
That we could understand it,
We would be so simple
That we couldn't.

- Emerson M. Pugh (as quoted by George E. Pugh, Emerson's son, in G.E. Pugh, The Biological Origin of Human Values, 1977, p. 154)
Hmmm ... not sure I agree, but maybe there's something to it.

In any case, the brain and the mind are not the same thing. I think I'm probably quoting John Searle in saying that "The mind is what the brain does." Understanding how the brain works, through the methods of neurophysiology and cognitive science, can inform our understanding of the mind, but the scientific method only goes so far.

The mind is certainly not unique in this respect. For example, science may inform an understanding of music (through the physics of sound, our auditory system, and the brain's response to music), but there is more to music—and likewise other aesthetic experiences—than science. Similarly, science may inform an understanding of ethics (through evolutionary biology), but surely notions of right and wrong go beyond biology. My point here is that many of the things that concern us (and I haven't even touched language, literature, culture, or politics) are not entirely—or perhaps not even primarily—matters of science.

Incidentally, mathematics seems to me the clearest example of something to which the scientific method has no application. Mathematics proceeds not by empirical observation but by deductive reasoning. Of course, as a scientist, I place enormous value on the scientific method, but I don't think that reality is exclusively physical.
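
For instance, nobody verifies by experiment that the square of an odd number is odd: if n = 2k + 1 for some integer k, then n² = 4k² + 4k + 1 = 2(2k² + 2k) + 1, which is odd, and the matter is settled by reasoning alone. (A standard textbook example, offered purely as an illustration.)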

Ultimately, what puzzles me most is this: how is it that matter can develop the ability to contemplate itself? To me, this is a fundamental mystery.

Searle explores a number of other mind-boggling mind questions, concerning things like free will and the self. And here's one for the A.I. researchers: Could we create robots that would behave just like humans, but with no mental life at all? Or does consciousness kick in at some point? Is free will the key issue?

Finally, here's a link to a piece by Nicholas Humphrey, a theoretical psychologist who has worked on the evolution of human intelligence and consciousness, along with comments from a bunch of people including Daniel Dennett.

I'd be delighted to hear other people's thoughts on this subject ... or should I say, I wouldn't mind hearing what you have to say?

Thursday, 2 March 2006

Privacy policies


Everywhere you go these days—online and off—there are privacy policies. I presume most of these are boilerplate. It would be nice if there were some standards. Then instead of having to wade through a page of legalese, you might only have to read one or two lines: "We follow privacy standard XYZ except in the following respects ...".
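
To sketch the idea in code: the standard name and fields below are invented for illustration (though the W3C's P3P project is an existing attempt at machine-readable privacy policies). A site's policy could boil down to a shared baseline plus its declared exceptions.

# Toy sketch of the "standard XYZ plus exceptions" idea. The standard
# name and its fields are invented; no such standard actually exists.
PRIVACY_STANDARD_XYZ = {
    "share_with_marketers": False,
    "sell_personal_data": False,
    "retention_days": 30,
}

# A site declares only where it deviates from the standard.
site_exceptions = {"retention_days": 365}

def effective_policy(base, exceptions):
    """The policy a visitor actually needs to know: the baseline
    standard, overridden by the site's declared exceptions."""
    merged = dict(base)
    merged.update(exceptions)
    return merged

print(effective_policy(PRIVACY_STANDARD_XYZ, site_exceptions))
# {'share_with_marketers': False, 'sell_personal_data': False, 'retention_days': 365}

One line of exceptions instead of a page of legalese, and a browser could even check the rest automatically.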

In spite of their verbosity, privacy policies seem like a good thing. I want to know that my private information is kept as private as possible. I don't want my contact details shared with marketers, let alone more personal stuff. It's good to have the rules spelled out in black and white. But let's not kid ourselves: the important stuff is never written down (a piece of wisdom that was imparted to me several years ago, and that I keep returning to).

Now, in a recent post, I noted that even the NSA has a privacy policy (the word ironic seems pathetically inadequate). Which brings me to my point. Intelligence agencies, by their very nature, are the enemies of privacy. But not only do they covertly obtain private information, they also have a nasty habit of sharing it with their friends. Consider the infamous case of Maher Arar, a Canadian citizen who suffered an "extraordinary rendition" to Syria, where he was imprisoned and tortured. All kinds of information apparently changed hands between the intelligence agencies of Canada, the U.S., and Syria. Well, apparently this wasn't an isolated case: at least three other Canadians
"... were also all detained by the same branch of the Syrian military intelligence where they were interrogated and brutally tortured before eventually being released. None were ever charged with any crime. All of these men say their interrogations were based on information that they believe could only have originated with Canadian investigators."
The quote is from Amnesty International, which is hosting an open letter to the Prime Minister of Canada calling for
"... the government of Canada to launch a fair, independent, comprehensive and public review of the possibility of Canadian complicity in the detention, interrogation and torture of Muayyed Nureddin, Abdullah Almalki and Ahmed Abou El-Maati"
I encourage you to add your name to the petition.

Thursday, 16 February 2006

Talkin' trash


A couple of good articles from Alternet. First, it seems that for neo-conservatives, anyone who refers to the Bush administration's "lies" about weapons of mass destruction is a "Bush-hater". They're just talkin' trash. But in an entirely different context, that's a good thing to do!