Sunday, August 16, 2015

Experts and Authority



In an article last year (and soon-to-be book) Tom Nichols complained about the new relativism brought about by Wikipedia and Google and bemoaned the declining authority of the expert. I encountered his article today via Facebook; I'm not sure whether the source of this information had any impact on its veracity.

"Today," he writes, "any assertion of expertise produces an explosion of anger from certain quarters of the American public, who immediately complain that such claims are nothing more than fallacious 'appeals to authority,' sure signs of dreadful 'elitism,' and an obvious effort to use credentials to stifle the dialogue required by a 'real' democracy."

To be sure, the three things he cites here are all things to be avoided:
  • 'Appeal to Authority' is an actual fallacy; it occurs when an authority is cited in cases where (a) the authorities disagree among themselves, or (b) where the authority is speaking outside the area of his or her expertise.
  • Elitism is a structural defect in society, representing a state of affairs where those who are in power and authority manipulate the rules in order to maintain their (or their children's) position in society.
  • Stifling the dialogue, as we are seeing in Canadian society today, is a breakdown of communications that prevents society as a whole from learning about its mistakes, exposing sources of corruption, or uncovering injustice.
How do I know all this? Well, I too am an expert - or, more accurately, I am called an expert by people who are in a position to know, or to recognize, that I am an expert. And the relation between democracy, expertise and authority is, I would say, much less straightforward than described in Nichols's column.

Let's take democracy as an example. Here's what Nichols says about democracy:
But democracy, as I wrote in an essay about C.S. Lewis and the Snowden affair, denotes a system of government, not an actual state of equality. It means that we enjoy equal rights versus the government, and in relation to each other. Having equal rights does not mean having equal talents, equal abilities, or equal knowledge. 
It is true that being governed democratically does not entail that we attain an actual state of equality. But it does mean, as Nichols suggests, that we are all equal before the law. But what does that mean? A naive interpretation would suggest that the law treats each of us the same. But experts know (having read Rawls) that democracy means something like "justice as fairness". That is, even though we are not all equal, in a democracy the law should tend toward helping us all become equal.

It's a complex idea but simple enough in practice. It means that society should help improve the talents of the untalented, that it should seek to increase the skills of those with lesser abilities, and to expand the knowledge of those with less knowledge. Yes, there is the presumption that there should be equality before the law (this is the famous dictum of Solon) but what it means in practice is that people in positions of power and authority should not bend the law to their own advantage. There's nothing wrong with using the law to enhance the standing of the poor and disempowered. Or as Plutarch says:
Thinking it his duty to make still further provision for the weakness of the multitude, he (Solon) gave every citizen the privilege of entering suit in behalf of one who had suffered wrong. If a man was assaulted, and suffered violence or injury, it was the privilege of any one who had the ability and the inclination, to indict the wrong-doer and prosecute him. (Life of Solon, 18.5)
So when Nichols says this:
It assuredly does not mean that “everyone’s opinion about anything is as good as anyone else’s.” And yet, this is now enshrined as the credo of a fair number of people despite being obvious nonsense...
he is wrong. In a court of law everyone's opinions are as good as everyone else's. Indeed, we even make allowances in favour of those who are not in a position of authority or high social standing. The facts and justice stand independently of anyone's opinions. That is what Solon enshrined, and that is the rule that forms the basis of democracy. And it is the foundational principle of reason, science and enquiry to this day.

As have numerous pedants before him, Nichols appears to be far more concerned about the source of knowledge and information than about its veracity (that is, its truth or fair representation).  "I fear we are witnessing the 'death of expertise': a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers, knowers and wonderers," he writes. Google-fueled indeed; that's how I found the references to Rawls and Plutarch.

Yes, experts make mistakes, he concedes. "But an expert is far more likely to be right than you are. On a question of factual interpretation or evaluation, it shouldn’t engender insecurity or anxiety to think that an expert’s view is likely to be better-informed than yours," he writes. This is true, just as it is true that a rich person has more money than you. But we shouldn't so lightly accept the moral authority of either.

But let's acknowledge this, and agree that "experts have a pretty good batting average compared to laymen: doctors, whatever their errors, seem to do better with most illnesses than faith healers or your Aunt Ginny and her special chicken gut poultice." Quite so.

But notice that it is in a democratic society, where experts are far more likely to be challenged, that experts hold the most sway. Before the widespread rise of public information, people were much more likely to purchase snake oil. Old wives' tales, in an enlightened society, are exactly that: tales. It is when experts can be challenged by both the informed and the uninformed that the true value of expertise can take hold.

Nichols ought to pause for a moment to consider why this is the case.

He writes, "The death of expertise is a rejection not only of knowledge, but of the ways in which we gain knowledge and learn about things. Fundamentally, it’s a rejection of science and rationality, which are the foundations of Western civilization itself."

This to me represents a fundamental misunderstanding of how science and rationality work. The work of the expert - of any expert - must be subjected to the widest possible criticism. This applies equally to practitioners of snake oil and of anti-cancer vaccination. That is how we are able to distinguish between the reasonable and the irrational. And it does not matter whether the criticism comes from within the domain of enquiry or from an authoritative source. Because rationality isn't about getting things right, it's about getting the right things right.

Look at his examples: "'Western civilization': that paternalistic, racist, ethnocentric approach to knowledge that created the nuclear bomb, the Edsel, and New Coke, but which also keeps diabetics alive, lands mammoth airliners in the dark, and writes documents like the Charter of the United Nations." These make my case far better than they make his case.

First of all, those experts who also happen to be paternalistic, racist, and ethnocentric can be questioned on that basis. We know that facts do not exist in isolation but rather are a product of a perspective or a point of view (even an expert's, most especially if he or she is paternalistic, racist, or ethnocentric). We should question whether modern physics embodies a western perspective of time. We ought to ask whether a biological thesis is informed by racism. Shielding the authority from uninformed questioning is more likely to shield the authority from these questions, which come from outside the field.

The Edsel is a really good example of getting the right things wrong. It featured many advances in automotive technology, including engine warning lights, seat belts, and child-proof rear door locks. But it failed on non-technical features such as aesthetics and price. It's the sort of failure experts could have avoided with non-expert opinion  (and it's why companies use devices such as focus groups to ensure they don't make similar mistakes in the future).

Not trusting the expert is dangerous, writes Nichols. "We live today in an advanced post-industrial country that is now fighting a resurgence of whooping cough — a scourge nearly eliminated a century ago — merely because otherwise intelligent people have been second-guessing their doctors and refusing to vaccinate their kids."

Well yes. But the reason people like Jenny McCarthy jumped on the anti-vaxxer bandwagon was that an expert, British physician Andrew Wakefield, "published a paper in The Lancet that purported to identify a link between the measles, mumps and rubella (MMR) vaccine and the appearance of autism in children." And history is such that sometimes the lone expert is more trustworthy: witness the case of Frances Oldham Kelsey, who protected the United States from the horror of Thalidomide poisoning.

Had Jenny McCarthy been right, she would today be hailed as 'an expert'. But she was wrong; Wakefield's paper was revealed as a hoax, and there is no link between vaccines and autism. But in the same breath, we would be dismissing Kelsey as a crank had Thalidomide turned out to be safe. One of the features of expertise is that it is typically revealed only after the fact, when it is too late to be of any use. Before that time, we have to make use of argument, evidence and statistics, and not credentials.

Nichols displays an increasing impatience with the rigour of disproving the Kelseys of the world. "You will get snippy and sophistic demands to show ever increasing amounts of 'proof' or 'evidence' for your case," he writes, "even though the ordinary interlocutor in such debates isn’t really equipped to decide what constitutes 'evidence' or to know it when it’s presented."

But Nichols confuses making the case generically with making the case to the satisfaction of a particular individual. As an expert with a long history of engaging in arguments with people, I can say with assurance that it will not be possible to convince everybody of anything. People - experts and non-experts alike - have their own 'riverbed propositions' from which they will not budge no matter how reasonable the evidence. When an expert responds to an argument, it should be done publicly, as the intent is to demonstrate to the public as a whole, not to the individual in question. As a society, we decide.

And we need to be clear about the nature of the two assertions Nichols is making. First, he is arguing that there are things that count as evidence and things that do not. And second, he is arguing that non-experts are incapable of drawing this distinction. I think that on both counts he is wrong.

Sure, people may make unreasonable demands of science. For example, they may criticize evolution on the grounds that it is "only a theory", or demand "proof" of human-caused climate change. This does not mean such things would not count as evidence; it merely means they are unattainable (scientists would love to be able to refer to the law of evolution, or point to a proof of global warming, but it's just not forthcoming).

And things like Biblical references, the missing link, the little ice age, and other such apparent non-evidence are not mere non-evidence: they are anomalies that the theory must explain. The coincident increase in diagnoses of autism with vaccination is a real thing, and it needs to be shown that this is the result of better diagnosis of autism over time, not the needle. This is what Carnap calls the principle of total evidence, which recognizes that new evidence, even seemingly unrelated, might bear on whether something being considered is true or not.

And when Nichols argues that non-experts are incapable of making that distinction, he must explain the countervailing fact that society as a whole, which consists mostly of non-experts, has made that distinction, and that our medical system, our code of justice, and social infrastructure in general are built around the fact that vaccines are helpful and thalidomide is dangerous. And Nichols has to explain why societies where experts are more likely to be questioned and challenged are societies which are more likely to make this distinction.

And he can't just argue that these counterexamples are coming from a non-expert, that the evidence here cited does not count as the right evidence, and that this is only a blog. Such responses would quite rightly be dismissed as fallacies by both experts and non-experts alike (and it is indeed the prevalence of fallacies in non-expert responses that is one of the major ways non-experts can spot frauds in the ranks of experts).

And what of those people who are not even able to distinguish between a valid argument and a non sequitur? Nichols has no time for them and finds them exhausting. I find them exhausting too, but I make my case for all to see, and have taken the time and effort to make the basis for argumentation and reason available to all. Because I value the contributions of non-experts, and would like to help them formulate their objections in the most effective manner possible. This is called the principle of charity, and is a fundamental rule in logic and reasoning.

To get to the point of Nichols's argument, he wants a return of the gatekeepers:
the journals and op-ed pages that were once strictly edited have been drowned under the weight of self-publishable blogs. There was once a time when participation in public debate, even in the pages of the local newspaper, required submission of a letter or an article, and that submission had to be written intelligently, pass editorial review, and stand with the author’s name attached. Even then, it was a big deal to get a letter in a major newspaper. Now, anyone can bum rush the comments section of any major publication. Sometimes, that results in a free-for-all that spurs better thinking. Most of the time, however, it means that anyone can post anything they want, under any anonymous cover, and never have to defend their views or get called out for being wrong.
Yes. Anybody can publish whatever they want. This has resulted in a lot of clutter. But it has also resulted in Assange, Snowden and Manning, among many other notable examples.

To understand why this is important, it is necessary to understand the role of gatekeepers. And - frankly - it was never the role of the gatekeepers to keep out the cranks. A brief look at the letters sections of most any newspaper before the days of the internet is evidence of that. It was to protect the newspaper from retribution from the rich and powerful should seriously damaging evidence or allegations be published.

Let us be clear: the time before the internet was a time when the elite entrenched and protected each other. Even Nichols makes this point (though it is not clear he knows he is making it):
There was once a time when presidents would win elections and then scour universities and think-tanks for a brain trust; that’s how Henry Kissinger, Samuel Huntington, Zbigniew Brzezinski and others ended up in government service while moving between places like Harvard and Columbia.
Yes. There was pretty much a closed walkway between positions of power and authority and the elite institutions like Harvard and Columbia. And when Nichols writes that "I have a hard time, for example, imagining that I would be called to Washington today in the way I was back in 1990" he fails to understand that this is a good thing, and that calling people like George Siemens (an itinerant blogger born in Mexico, former restaurant owner, and self-made PhD in education) instead is far, far better.

And consider how Nichols regards those other gatekeepers, teachers and university professors:
One of the greatest teachers I ever had, James Schall, once wrote many years ago that 'students have obligations to teachers,' including 'trust, docility, effort, and thinking,' an assertion that would produce howls of outrage from the entitled generations roaming campuses today.
And despite the lupine ad hominem, students should indeed protest such instructions. Nobody has an obligation to produce trust and docility. Far better to be a nation of wolves than a nation of sheep! There are good reasons to eschew docility; women especially understand the need to challenge the orthodoxy of thinking emanating from the professorial pulpit. A requirement of trust and docility is in itself a betrayal of trust.

But what Nichols is really worried about is that the experts might become servants of the people. This shows up in a number of ways near the end of his article. Consider this language:
The idea of telling students that professors run the show and know better than they do strikes many students as something like uppity lip from the help.
 Or this:
many academic departments are boutiques, in which the professors are expected to be something like intellectual valets.  
But the argument is a bit more subtle than that. It's that we would be happy serving the people, but the people are not good enough to be masters.
When citizens forgo their basic obligation to learn enough to actually govern themselves, and instead remain stubbornly imprisoned by their fragile egos and caged by their own sense of entitlement, experts will end up running things by default.
And so, though this is a "terrible" thing, he will oh so reluctantly take his position as expert, and step into his natural place as the ruler of society. Or so he thinks.

But he's wrong, and worse, he is dishonestly wrong. He is disappointed he is no longer getting calls from the White House, he yearns for the days when professors demanded docility and respect, and he would much rather return to the days when the only public discourse was that vetted by the editorial guardians of society. He does not actually want the non-expert to contribute to the governance of society, for if he did, this column would be a call for education and empowerment, and not a declaration to the effect that the masses are revolting.

His kind is just the kind that brought us the excesses of the 20th century, and more recently the debacle in Iraq. He most properly should be ashamed of himself for setting himself above the likes of you and me. But as is so common among the realm of the self-declared expert, he feels no shame.


Thursday, July 30, 2015

What I've Learned From Philosophy

I posted an item in OLDaily today from Forbes touting the benefits of formerly 'useless' liberal arts degrees. In this item Slack CEO Stewart Butterfield is quoted:
“Studying philosophy taught me two things,” says Butterfield, sitting in his office in San Francisco’s South of Market district, a neighborhood almost entirely dedicated to the cult of coding. “I learned how to write really clearly. I learned how to follow an argument all the way down, which is invaluable in running meetings. And when I studied the history of science, I learned about the ways that everyone believes something is true–like the old notion of some kind of ether in the air propagating gravitational forces–until they realized that it wasn’t true.”
It's worth mentioning because the Department of Philosophy at the University of Calgary has a notice posted on the wall to the effect that a philosophy degree was no guarantee of a job and that graduates should not study expecting employment in the field. This wasn't parody or in any way humorous - it was an official memo from the chair and posted in all seriousness. I can still see it in my mind, not a big poster but an 8.5 x 11 memo with typed text.

I have often commented that my work in philosophy left me particularly well-suited to employment in the new economy. It's not merely that sorting out corporate information might be simple after spending years teasing out the nuances in Wittgenstein, as the article suggests, though it's partially that. It's about what it is to know and to learn at a deeper level, which can then be applied to new disciplines whatever they may be.

But what, precisely, did I learn from those years of study? That's a hard question to answer. But it's worth a bit of a sketch here.

Precision

Butterfield said he learned to write clearly. But what does that mean? Fun with Dick and Jane is written clearly but we want to express thoughts more complex than "see Spot run." Writing clearly means writing with precision, and precision is what philosophy teaches.

For example, it is commonly said that a sentence has a subject and a verb. This proves to be important in clear writing. In clear writing the subject of the sentence is unambiguous. The reader knows exactly what you are talking about. Through the rest of my days I have always been attentive to the identification of the subject. You would be surprised how many people are not.

There are specific ways of naming the subject. One way is to point, in words (that is, to name your subject ostensively). "This is a sentence. That was an argument worth hearing." Wittgenstein did that a lot. Another is to use a definite description. "The present King of France,"  for example, was the subject of much discussion between Russell and Strawson. Another way is to use names, which may in turn be subject to definitions, for example, "dogs", "millennials" or "Barack Obama".

How many ways are there to be imprecise about the subject? There is always our favourite case, the amphiboly:  "One morning I shot an elephant in my pajamas. How he got into my pajamas I'll never know." Another is to refer to something without a definite or indefinite article, for example, saying "Matter of importance is clarity," instead of "A matter of importance..." or "The matter of importance." Or there is the use of vague terms: "freedom is what defines our approach to software." And on and on the list goes.

Precision is what lies at the root of grammar. In my opinion, the rules of grammar (for the most part) exist in order to ensure precision. A lot of times it is the little things that cause confusion. A single comma can change the entire meaning of a sentence. As when Johnny said, "It's supper time. We're ready to eat, Uncle Charlie."

Structure

What you learn in philosophy is that sentences - and thoughts generally - are not unstructured streams of consciousness. This is especially clear in languages like French, where you have to plan your sentences ahead of time in order to ensure that the genders of your words are in accord. In all languages, structure indicates not only the subject and verb, as mentioned above, but also logical form, leading to such things as inference and explanation.

Why does this matter? Well, as I've written elsewhere, understanding this structure is key to writing useful and meaningful essays. It is also key to being able to analyze and understand what other people have written. When you read an editorial containing a whole list of sentences, how do you determine what opinion it is trying to express? It is the structure of the article that tells you this.

Structure is logic, and logic is structure. You can see this by looking at the different kinds of logic; they reveal to you the different kinds of structures you can employ in your reasoning:
  • propositional - connecting and relating the truth of basic sentences using 'and', 'or', 'if-then' and 'not'.
  • quantificational - specifying how many of something we're talking about, and inferring about properties of groups of things
  • causal - understanding the conditions under which one thing is said to cause another
  • modal - talking about whether things are 'necessary' or merely 'possible'
  • statistical - understanding probability, that is, how likely something is to happen, or to be true
  • deontic - thinking about the nature of obligation and permission
  • doxastic - the logic of beliefs
  • mathematical - axioms, calculus and set theory 
  • computational - Turing machines and computational processes
Not only did I learn that all these forms of logic exist (who knew?), I also actually learned them, which means I can make really complex inferences but, more importantly, know some pretty basic things. For example, if 'P' is necessarily true, is 'P' true? (Yes) Or for example, if 'P is always Q' is true, does it follow that 'P' is true, or that 'P' exists? (No)
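The simplest of these inferences - the propositional ones - can even be checked mechanically, by brute force over truth tables. Here is a minimal sketch in Python (the encoding of sentences as lambdas is my own, for illustration): an argument is valid when no assignment of truth values makes all the premises true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    # An argument is valid iff every assignment that makes all
    # premises true also makes the conclusion true.
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

# Modus ponens: from P and P -> Q, infer Q. (Valid.)
print(valid([lambda p, q: p, lambda p, q: (not p) or q],
            lambda p, q: q, 2))          # True

# Affirming the consequent: from Q and P -> Q, infer P. (Invalid.)
print(valid([lambda p, q: q, lambda p, q: (not p) or q],
            lambda p, q: p, 2))          # False
```

Brute force over truth tables only scales to a handful of variables, of course, but it is exactly the method taught in a first logic course.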

Syntax and Semantics

Syntax is the structure of something - its logic - while semantics refers to its meaning, truth or value. Syntax is the fact that ten dimes make up a dollar; semantics is the fact that it takes ten dollars to attend a movie.

The very fact that syntax and semantics are distinct is important in itself, for several reasons.

The first is that syntax is arbitrary. We can make up any sort of syntax we want. This is not so easy to see in everyday arithmetic and propositional calculus, where the rules are deeply entrenched. In modal logic, however, we have various 'systems' such as T, K, S4 and S5. Which one of these is 'true'? Well, they all are. Or none of them is. Or, it doesn't even make sense to ask the question. In mathematics, similarly, there are different axiom systems. Which is 'true': Peano arithmetic? Mill's axioms? Or does it even matter?

In fact, a syntax, in and of itself, can be whatever we want it to be. Usually we set out some basic requirements - the system should not allow contradictions, for example. But there's no requirement that we do this, and if we develop a system that does not have truth as its basis (language, say) then the principle of non-contradiction doesn't even make sense! Take a look at my categorical converter - do the lines have to be drawn that way? Well, no. Or imagine a logic that is falsity-preserving rather than truth-preserving: they look like mirror images, but in falsity-preserving logic, nothing follows from a contradiction, and everything follows from a tautology.
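The truth-preserving half of that mirror image is easy to verify mechanically: in classical logic an argument with contradictory premises is vacuously valid, while a tautology entails only other tautologies. A quick brute-force sketch in Python (the truth-table encoding is my own, for illustration):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    # Truth-preserving validity: no assignment makes all premises
    # true while the conclusion is false.
    return all(conclusion(*v)
               for v in product([True, False], repeat=n_vars)
               if all(p(*v) for p in premises))

# From the contradiction 'P and not-P', anything follows (vacuously):
contradiction = lambda p, q: p and not p
print(valid([contradiction], lambda p, q: q, 2))   # True

# But a tautology does not entail an arbitrary sentence:
tautology = lambda p, q: p or not p
print(valid([tautology], lambda p, q: q, 2))       # False
```

The falsity-preserving twin would simply swap the roles of True and False in the definition of validity.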

If pressed, we would say that we need to choose one system of logic over another because one of them works in the real world, but the other doesn't. But the relation between logic and the world is far from clear. We 'prove' a system of logic with a semantical argument, but the relation between a semantics and a logic is itself the subject of discussion; these different relations are called 'interpretations'.
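On the standard Kripke semantics, for instance, those modal 'systems' correspond to constraints on an accessibility relation between possible worlds. A minimal sketch in Python (the worlds, relation, and valuation here are all invented for illustration):

```python
# A tiny Kripke model: worlds, an accessibility relation, and a valuation
# saying which atomic propositions hold at which world.
worlds = {"w1", "w2"}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}}   # reflexive: every world sees itself
val = {"w1": {"P", "Q"}, "w2": {"P"}}

def box(prop, world):
    # 'Necessarily prop' at a world: prop holds at every accessible world.
    return all(prop in val[w] for w in access[world])

print(box("P", "w1"))   # True: P holds at w1 and at the accessible w2
print(box("Q", "w1"))   # False: Q fails at the accessible world w2

# Because the relation is reflexive, axiom T holds in this model:
# whenever box(p, w) is true, p is true at w itself.
```

Make the relation transitive as well and you validate S4; make it an equivalence relation and you get S5. Which constraint is 'true'? The syntax alone cannot say.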

What does it mean, for example, to say that "the probability of 'P' is n"? There are three major types of interpretations of this statement:
  • the logical interpretation, from Rudolf Carnap - for every possible state of affairs in which P could be true or false, in n of them, P is true.
  • the frequency interpretation, from Hans Reichenbach - in all cases in the past where P could be true or false, in n of them, P is true
  • the subjectivist interpretation, from Frank Ramsey - if you were to make a bet on the likelihood that P is true, you would require odds of n
So if we are to validate the laws of probability - Bayes' Theorem, for example - against an empirical model, which of these is the correct model to choose?
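Whichever interpretation we choose, the arithmetic of Bayes' Theorem itself is fixed. A minimal worked example, with entirely hypothetical numbers (a made-up diagnostic test for illustration):

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E).
# Hypothetical numbers: a test that is 90% sensitive, with a 5%
# false-positive rate, for a condition with a 1% base rate.
p_h = 0.01                        # prior P(H)
p_e_given_h = 0.90                # sensitivity
p_e_given_not_h = 0.05            # false-positive rate

# Total probability of a positive result:
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior probability of the condition given a positive result:
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))      # 0.154
```

The formula yields the same number whether we read the probabilities as logical measures, long-run frequencies, or betting odds - which is precisely why the question of which interpretation 'validates' the theorem is a philosophical one, not a computational one.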

For that matter, what makes a statement P 'true' at all? Alfred Tarski said "the sentence 'snow is white' is true if and only if snow is white." Well, that sounds good. But the sentence "brakeless trains are dangerous" can be true even if there are no brakeless trains. So it seems there are two basic principles of truth - a correspondence principle, which requires reference to a physical world of some sort, and a coherence theory, which requires consistency with a model.

This is how murky these questions can get when we're talking about something as basic as truth. In the 20th century, however, philosophers focused on other aspects of semantics, such as meaning and value. Here, the discussion became even more murky.

When someone comes to me and says that some thing or another is 'true', you can see I have a lot to think about regarding what this assertion could possibly mean. When somebody says to me that "We can all agree that such and such," I begin to distrust this person, first because the statement is probably false, and second because it's not at all clear to me that 'agreement' is even relevant to the sort of truth, value or meaning that we are discussing.

These are really important lessons, and they apply everywhere.

What are 'Things'?

Philosophy taught me that anything can be a 'thing' - it just depends on how you look at it. And that there are different types of things, and different types of types of things.

Our teachers in school spent a lot of time telling us about the basic types of things - animals, minerals and vegetables - and the different types of each thing that fall neatly into categories beneath them as kingdoms, phyla, genera and species.

In university I learned that the way we define a thing in this system is to identify the category a thing belongs in, and what distinguishes it from other members of that category. "A cat is a mammal that purrs." "A hammer is a tool used to drive nails." That sort of thing. "An x is such that all x are y and only x are z." Necessary and sufficient conditions. Essences.
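One way to see what 'necessary' and 'sufficient' amount to is as set inclusions. A toy sketch in Python (the sets are invented for illustration):

```python
# Model properties as sets of the things that have them.
mammals = {"cat", "dog", "whale"}
purrers = {"cat"}

# 'Purring is sufficient for being a mammal': every purrer is a mammal.
print(purrers <= mammals)    # True (in this toy world)

# 'Purring is necessary for being a mammal': every mammal purrs.
print(mammals <= purrers)    # False - the whale doesn't purr

# A classical definition picks out exactly one set by intersection:
cats = mammals & purrers     # 'a cat is a mammal that purrs'
print(cats)                  # {'cat'}
```

The definition works only so long as the conditions really are jointly necessary and sufficient - which is exactly what the next lesson calls into question.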

Then in philosophy I learned that all of this is arbitrary. The beautiful system was upended, most notably, by Wittgenstein. "What is a game?" he asked. Is there any statement that is true about all games? No. Is there any statement that is true about only games? No. The idea of a 'game' is that it is a bunch of things that are kind of the same - like family resemblances - so you can see that they are sort of alike, but there is nothing unique that defines them.

Language itself is like this. We don't have 'rules' properly so-called, we have "language games". What does a word mean? Well, it depends on how we use it. The meanings of words, the rules of language, the nature of what is true and what isn't - these all shift over time, like the bed of a river.

Even more importantly, what a thing is depends not on the thing itself, but on how it is observed. Because whether one thing 'resembles' something else really depends on your point of view. We can in one sense say that checkers resembles chess, while in another sense say that checkers resembles mathematics.

There are many ways to define things: we can point to them, we can say what they contain, we can say what properties they have, we can talk about what they do, what they were designed to do, what they actually do, what they might do, we can say what they're for, we can talk about where they're from or who (or what) created them, and on and on.

Viewed this way, anything can be a 'thing', and any group of things can be a thing. George Lakoff talks about the culture that divides the world into two types of things: one class consisting of  "women, fire and dangerous things," and another class consisting of  everything else.

So much of what we do today involves either working with certain types of things, or understanding that we are defining new types of things. What are 'students'? What is a 'learning object'? How do we define an 'ontology'? Philosophy taught me about the limitations of relational databases long before there were relational databases.

Theories and Models

Quine's Two Dogmas of Empiricism taught me (and everyone else) two important things:
  • There's no such thing as the analytic-synthetic distinction
  • Reductionism is false
Above I discussed the distinction between syntax and semantics. The collapse of the analytic-synthetic distinction means that no statement is either purely syntactical or purely semantical.

What does this mean? An analytic statement is supposed to be true simply by virtue of the meanings of its terms. We say "1 + 1 = 2" is true, not because of some fact about the world, but because of the meaning of the terms '1' and '2' and '+' and '='. But if we put it this way, no statement is purely analytic. "It is obvious that truth in general depends on both language and extra-linguistic fact."

This leads us to the second dogma: reductionism. This is the idea that all true statements can be reduced to 'observation language' or some other basis in pure facts (this could be any set of facts: facts about the world, facts about pure thought, facts about the Bible). But in fact, there is no set of 'observation statements'. Every 'fact' carries with it some element of the theory it is purporting to prove. For, without the theory, there is no way to say whether even a simple sentence like "the sky is blue" is true or false.

This taught me, critically, that what a person sees depends on what that person believes. It means we have to rethink how we approach research and discovery, but also that we have to rethink how we communicate with people, how we appeal to reason and evidence, and even how we regard the world and our place in it ourselves. And it's why education - and how we think of education - is so important.

For example, I say "to teach is to model and demonstrate". These are not idly chosen concepts. What we model impacts how learners see the world. Consider four world views (all of which correspond loosely with different generations in and around my lifetime):

  • We're at war. Our heroes are war heroes. When we work, we're at the front line. The challenges we face are battles. The determination of a Churchill or a Patton inspires us.
  • We are explorers. We use science and technology to discover new things. When we work, we are solving problems. The challenges we face are mysteries, the unknown. The courage of John Glenn and James T. Kirk inspires us.
  • We are players. Our heroes are athletes who bring out the best in themselves. We leave it on the playing field, but experience camaraderie outside the arena. The strength of Gordie Howe or Hank Aaron inspires us.
  • We are entrepreneurs. We take ideas and make change in the world, bending vast empires of money and people to our will. We are driven by results, and expect a return on our investment. Our heroes are people like Bill Gates and Steve Jobs.

And there are many more, in different generations of different societies around the world. Each of these does not represent just a different world view or a different paradigm. It represents a different way of life. Without philosophy, it's impossible even to understand that there are other ways of life, much less to understand what they could be like.

What are 'evidence' and 'proof' to people in each of these different worlds? I inhabit a workspace where the only measure of whether something has value is whether someone will pay for it - part of that entrepreneurial mindset. I don't agree with that mindset, but I'm also aware that my own mindset, the explorer mindset, isn't inherently superior.

People are always saying to me that "this counts as a theory, but that doesn't," or that "this counts as research, but that doesn't." I recognize such statements as arbitrary, representing a set of parameters that the speaker has employed to define what will count as 'normal' (or 'standard', or 'appropriate') in their lives and work. I know I probably won't change their minds on this, because no evidence exists that does not reinforce their world view. That's the nature of world views.

Thought is Associative

Not everybody who studies philosophy will learn this (see the preceding paragraph) but I did, and it was of fundamental importance to me.

There are different ways to make the same point. Other people, for example, will say that they learned that not everyone is rational, or that people don't make rational decisions. Others will say that people think in music and pictures and whatever. These are both true. But for me, it comes down to the idea that thought is associative.

But what does it mean? It's hard to explain in words, but by way of a metaphor, I would say that the principles of knowledge, memory and understanding are basically the same as the principles that apply when you throw a rock into a pond. There is the impact, there is the cascade as waves rush out from the rock, there is the pushback as waves bounce off each other and off the shore, and there is the settling as the pond returns to its level.

Now the human brain is much more complex than a pond, but in both cases, the impact of something new affects the entire system, even though the cause touches only one small part of it. The rock touches some water, which pushes against other water, which pushes against a shoreline, and so on. The water organizes itself through a whole series of molecule-to-molecule interactions. There's no head molecule. There is no 'purpose' or 'order' defining what the waves must be - if the stone had been bigger, the water colder, the shoreline shaped differently, it would have worked out in a completely different way.

We are on the verge of understanding how that process actually works in brains (we understand pretty well already how it works in ponds, to the point that we have an entire discipline built around fluid dynamics). What we don't have yet is a way of understanding the world consistent with this understanding of how thought works.

For example, I have said frequently, knowledge is recognition. Water doesn't really retain the impact of rocks, which is why ponds aren't intelligent. But other more complex and more stable entities will retain traces of the impact. One thing influences the next, and each thing preserves a trace of that influence, such that after a while characteristic patterns of input produce characteristic responses. This is recognition. And it is, to my mind, the basis for all human intelligence.
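By way of a toy illustration of recognition (my own sketch of a classic Hopfield-style network, with made-up numbers - a metaphor made runnable, not a model of the brain): a stored pattern leaves a trace in a set of connection weights, and a degraded input, through repeated local interactions, settles back into the pattern it resembles.

```python
import numpy as np

# A pattern to be 'experienced' by the network.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian trace: each pairwise interaction preserves a bit of influence.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)  # no unit influences itself

# Present a degraded version of the input (two units flipped).
state = pattern.copy()
state[0], state[3] = -state[0], -state[3]

# Settling: units repeatedly adjust to their neighbours' signals,
# like the pond returning to its level.
for _ in range(10):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True - the trace 'recognizes' the input
```

There is no 'head molecule' here either: no single unit stores the pattern, yet the network as a whole responds to it.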

This way of thinking is in an important sense post-semantic. I don't see one thing as a 'sign' for another. I don't see mental models as 'representations' of some external reality. I see knowledge, cognition and communications as complex interplays of signalling and interaction, each with no inherent meaning, but any of which may be subsequently recognized by one or another entity.

Remember how Marx said "everything is political"?  Well, I think that "everything is a language" (or, alternatively, there's nothing special about language over and above other forms of communication). So when I create a 'scientific theory', which is my job, I create something that consists of language, code, actions, photographs, and a host of other artifacts, all of which are reflections of my interactions with the world, not intended to 'represent' some deeper truth or underlying reality, but rather, intended to offer a set of phenomena that may be usefully employed by others (depending on what they recognize it as being useful for).

Born Free

In any number of recent movies - The Hunger Games, for example, or Divergent - the plot revolves around the idea that society is structured in such a way that we all have our assigned places where we work and live. Sometimes, as in Harry Potter, this is depicted as a good thing. But more often the established order is the subject of resistance.

The concept originates in Plato, who in the Republic argued that society should be run by philosophers, and that the position of each person would be determined by their inner nature. "One man will acquire a thing easily, another with difficulty; a little learning will lead the one to discover a great deal; whereas the other, after much study and application, no sooner learns than he forgets; or again, did you mean, that the one has a body which is a good servant to his mind, while the body of the other is a hindrance to him."

It is true that there are innate variations among humans. But the far greater differences between people are the result of their upbringing, culture and education.

In philosophy I encountered the idea that there is an inborn 'human nature' on a regular basis, from the above-mentioned assertions from Plato to Descartes's ideas about the stamp of God implanted in the human brain to Chomsky's postulation of an innate deep grammar. People argue that there are common things (love of justice, fear of death) that unite us all, and essential properties (mental capacity, physical strength, mathematical abilities) that divide us.

But none of this is true. What we have in common operates at a far lower level than people suppose. It operates at a genetic level, a cellular level, which defines only the most basic principles of human composition. Our heritage determines that we will have leg muscles, but not how strong those muscles will be. It determines that we have interlinked neural cells, but not how they will be wired together. It determines that we will have a voice, but not what we will say.

Time and time again I have encountered evidence of this. When we look at physical properties, for example, and even the oft-touted difference between men and women, we see how large a role nutrition plays (women are tall and strong in nations where they are well-fed and nourished - think about that). The physical differences between individual members of any race, class or gender you care to name are far greater than the differences between the races, classes or any other identifiable groups.

The same is true of mental properties. Time and time again, the most reliable predictor of educational outcome is socio-economic status. This is not because (as some suggest) the best and brightest become rich (surely we have countervailing evidence of that) but because of the advantages the well-off receive in early life: everything from a rich intellectual environment to proper nutrition and stimulation to social expectations supporting learning and achievement.

How much of philosophy is devoted to determining whether there are natural - or essential - properties of things, and most especially humans? Arguably, most of it. The argument that something 'must be X' on the basis that 'X has property or capacity Y' runs through the entire history of philosophy, from Thales to Aquinas to Kant to Fodor. And none of these speculations has ever stood the test of time. There are no innate properties of significance. We are born free.

Value

The word 'value' is a bit loaded, as in our entrepreneurial age it has become virtually synonymous with some means of quantification in terms of worth, utility or commodity. There are older senses in which the term 'value' meant something like those properties synonymous with virtue, but those senses of the word are almost inaccessible to us now; we would have to have been born in a different time and a different place to understand them.

I think philosophy has taught me to think of value a bit more deeply than that, and to at least be able to articulate alternatives that can count as 'value'. These alternatives form the basis of the various systems of morality and justice that have prevailed over the years.

I once wrote to the Globe and Mail in a no-doubt long-lost online forum that the underlying value that defines Canada is this: in diversity, harmony.

You need both parts. Harmony is the underlying value (the earth, as the Taoists might say), the receiver of all things, the pond after it has become stable, the mind after it has become calm, uncertainty and turmoil resolved. But rocks and sand crabs and fungus also exhibit large degrees of harmony; we want something more. This is provided by diversity, the possibilities of experience, the creation of the need to adapt, to understand, to grow and to learn.

But hey - it's just a value system. It's not like others haven't tried before me. And this knowledge keeps me humble.

One type of value system revolves around survival. It's an animal value system, an artifact of our lizard-brain, perhaps, brought through centuries of socialization to mean also the survival of the offspring, survival of the tribe, or survival of the species. We see it reflected today in such philosophies as social Darwinism, survivalism, and various types of rule-based tribalism.

Another type of value system revolves around ideals. We have the Platonic forms, the perfect Christ, Man and Superman - the idea is that the closer we can come to perfection, the greater the value we have realized.

Another type of value system is based on duty and obligation. Perhaps best represented by Kant, it is informed by the idea that each person is an "end in themselves", not a means to an end (today we would say "each person is inherently valuable"), and that we ought to act in a manner such that every person could also consistently act in the same manner. Your mother invokes Kant's categorical imperative when she says, "What if everyone else did that?"

Still another is based around the idea of happiness, and of freedom in a manner that enables a person to maximize their own happiness. People like Jeremy Bentham and John Stuart Mill are most closely associated with this philosophy, and Mill famously proposes that the goal of society ought to be to allow each person to pursue their own good in their own way. I have a lot of sympathy with that ideal.


Maybe they all amount to the same thing. There's no shortage of ecumenical authors who like to suggest that, at heart, we all have the same system of values. But if this were true then we would have no satisfactory explanation for a Jeffrey Dahmer or a Clifford Olson. So even while it feels to me that those perfect moments of harmony are a combination of happiness, obligation and ideals, I think that other people see these values very differently.

This is important to understand. People like to say things like "the truth lies somewhere in the middle" or "the good is what we can all agree on". But there really is no such thing (or if there is, we have utterly no means of finding it just yet). 

Justice

I was never really a fan of moral philosophy, because of the force of the observations just presented, and even less of political philosophy, which to my way of thinking was offered for the most part by the powerful to rationalize their exercise of power.

Of course, I have probably been jaded by the fact of being born and raised in an environment where the peak of political philosophy varied between people justifying why we should have enough military might to destroy the entire planet and people giving reasons why we should or should not use it. Political philosophy in my age was, and continues to be, about the deployment of political power.

Probably the predominant idea in political philosophy is some version of social contract theory. This is the idea (and we see it reflected in school charters and corporate vision statements) that we are united as a society under a set of principles that we have agreed to in order to live together, prosper together or learn together.

The motivation for such a social contract is generally that the alternative is unbearable. Without, for example, the benign power of an absolute sovereign, wrote Thomas Hobbes, our lives would be "solitary, poor, nasty, brutish and short" (of course, given some of the sovereigns he was defending, that might be preferable).

The idea that we have actually signed such a contract is, of course, absurd. So the nature and standards of conduct in the contract are often implied - Rousseau, for example, appeals to the state of nature in which the noble savage found himself, as compared to contemporary society - "man is born free, but everywhere he is in chains." John Locke, envisioning an endless commons, imagines that the rights of property are established when someone "affixes his labour" to that which may be found in nature (an argument that justified the conquest of North and South America). John Rawls imagines what we would negotiate with each other under a "veil of ignorance" in which no one knew whether they would be a pauper or a king; this would result in a system of "justice as fairness."

And of course there are communitarian theories of political philosophy based around the common ownership of "the means of production", which would ensure "from each according to his ability, to each according to his needs." Other communitarian theories of justice assert the collective rights of women, minorities, language groups, religions, and others.

Interestingly, I don't think that anyone who is actually in politics subscribes to any of these philosophies per se. Actual observation (if there is such a thing) suggests that most of our social and economic leaders are engaged in one or another version of Machiavellian political theory, loosely stated as "might is right".

For my own part, I don't know whether "man is born free," but I do observe that "everywhere he is in chains," and just as I feel the limitations of my own self-actualization I feel that the other people of the world who have even less advantage than I do must feel more or less the same thing, perhaps more deeply. I do not see us as merely "workers" or even as members of this or that community; from Kant I draw the idea that each person is equally important and equally special, and that our society and our individual lives are most enhanced by realizing that.

But I have no illusions, I don't believe in utopia, and I don't believe we can engineer (as so many political philosophies suggest) a better society, a better company or a better school. In the end, the political philosophy we employ - the nature of our culture, our social beliefs, our nation - is the result of a billion individual decisions made every day, and each of these decisions is based on the many factors I've outlined above.

Good government, in other words, depends as much on things like precision of language, structure of reasoning and appropriate semantics as on anything else, and even then, there's no guarantee that the government we get will be in any meaningful sense good - the best we can hope for, maybe, is government that is just, and to leave the rest to the people.

In Sum

In sum, philosophy has taught me the basics of what I need to conduct myself in virtually any enterprise or occupation (save perhaps things like Major League Baseball).

I've learned through philosophy that nobody is special, and everyone is special. That nothing is real, and everything is real. That there are infinite ways we can describe and divide up the entities in the world, and that in practice we fall into habits of seeing and reasoning about the world based on our experiences and the influence of those around us (and today, that influence includes language and media).

I think that the reason we are alive is because it's possible, and the reason we die is to continue to allow it to be possible, by allowing our form of existence to grow and develop and adapt and flourish.

I'm still trying to embrace diversity, and I'm still seeking harmony.

Saturday, June 27, 2015

The Exodus

We are now reading that the exodus of people from New Brunswick through the winter was the largest it has been since 1976. It's the 17th quarter in a row the province has lost people. And it shows no sign of slowing.

Wrong Way

I live in Moncton, supposedly the most prosperous city in the province. I live in a small house in what was once called the 'golden triangle', an older well-treed area just north of downtown with close proximity to the hospitals and the university. But I discovered this week that the value of my home has dropped $40K in the last few years. I can't say I'm surprised.

This part of the city has been hollowed out; it is in decline, just as Moncton is in decline, and the causes are symptomatic of the malaise that has struck the province, a disease born of ineffective and weak-willed civil and provincial politicians.

Just a couple doors down, Castle Manor, a heritage building if there ever was one, sits rotting and boarded up, home to nobody but the vagrants. The city allowed the land and the building to be divided into two parcels, with separate owners, with neither having value to anybody. There was a pre-emptive attempt to turn the lawn into a parking lot, an effort stopped not by the city but by local citizens once the chainsaws were taken to the trees.

Right next to it, the CBC building sits empty, four or five storeys of office building right next to the Dumont hospital which should be brimming with opportunity. But the CBC, downsized to almost non-existence, has moved to the shell of an old Zellers store next to the Atlantic Superstore down by where Halls Creek enters the river. So many stories there.

The health care system is broken in New Brunswick. In so many ways. It is centrally run, it is the subject of political influence (often by politicians seeking to shut it down and privatize health care), and it is backward and inefficient.

The system was again the centre of controversy this week. The Moncton Cancer Centre announced abruptly that it would shut down its genetic sequencing program. This happened after the provincial government reversed a decision to block the purchase of genetic sequencing equipment in Saint John.

"I cannot accept to be a party to raping the taxpayers of New Brunswick," by duplicating services, ACRI president and scientific director Dr. Rodney Ouellette announced during an interview with CBC News. He seems to be about the only person in this province who can't. 

Certainly the people who operate NB Power have no objection. One of their plants, which burns Irving oil (naturally), produces gypsum as a byproduct. This would normally be waste but they have agreed to sell the gypsum back to Irving. Anywhere else this would make money for NB Power, but in New Brunswick the contract is worded in such a way as to see NB Power pay the Irvings.

Meanwhile, next to the CBC Building and Castle Manor sits the Saint Patrick Centre. This building is still open and operated as a community fitness centre. It struggles on ancient equipment, chronic underfunding, and roof leaks that will eventually destroy the integrity of the facility.

This is what happened to Moncton High School, once the only high school near the centre of the city. The roof was left unrepaired for decades, the metal girders rusted out, and the old stone heritage building was deemed unfit. The new school has now been relocated ten kilometres to the north, at the edge of the city limits, in what literally was wilderness.

There's a lot of mystery surrounding the move, which was ordered by the provincial government and greatly benefited the owners of local subdivisions. Across the road from the new school a new subdivision has suddenly emerged, called 'Baron Heights', named after the putative owner of the property, Baron von Munchausen (I kid you not). In a city struggling to achieve more population density, this newest development boasts large "country lots".

The agreement with a company to 'develop' the old school fell through and now it sits, an abandoned hulk, about five blocks from where I live. The only saving grace is that the strip club, which for decades operated across the street from the high school, was torn down. But not until after all the students had left.

Moncton High School is also the place where, in a controversy straight out of the 1950s, a girl was punished for wearing a dress that revealed her shoulders.

A lot of Moncton has been torn down. The local mall, Highfield Square, was torn down to make way for the new events centre (should it ever be built). Somehow, the price to acquire it jumped from $6 million to $12 million at the last minute.

Moncton can't manage large building projects. There's plenty of evidence for this, including the $3.8 million overrun on the stadium, due to "poor management". The feasibility study for the events centre ran well over budget. We recently hosted Women's World Cup matches there. Apparently the city was caught by surprise by the demand for bus transportation to the 13,000-seat stadium. They also designed the stadium with exactly three entrances, none of which face the road.

Just down the road from the former Moncton High School the local museum was 'upgraded'. Andrea and I tried to oppose the project before it got started, because it wiped out the last of the green lawn in the area and presented a three-storey wall as a facade on Mountain Road. It seems to have been secretly approved long before the public was told. Not surprisingly, it also ran into overruns costing millions of dollars.

Like I said, we live just a few blocks from the downtown core. Recently, the bar patrons there have taken to street-fighting; the videos have been splashed all over the world. So where are the police? They are armed like paramilitaries, lined up en masse to face down a handful of people opposed to shale gas fracking on Indian land just up the road in Rexton.

Or they're being shot at by a local thug who received his political education from a survivalist outlet across the river in Riverview. It turns out that the police we have are outgunned by a lone individual purchasing his weapons over the counter. The RCMP actually faces labour code charges in relation to the shootings.

But back to resources, one of the problems with resources in the province is that we can't exploit them without losing money. Take the forests, which should be a prime revenue driver. It actually costs the government money to have a forestry industry in the province.

Finally, we have the media in this city, which is owned by the local oil and forestry company, and is therefore not a reliable source of news. What's more, two editors were fired for cavorting with provincial government officials on the taxpayer's dime at Larry's Gulch, a resort used by politicians, media and business to trade old-boy stories and make deals. Yes, Larry's Gulch. There really is such a thing. The newspaper's contribution to the economy was to fire all its photographers. There's some vision for you.

I could go on... and on, and on, and on. I haven't even mentioned offshore tax shelters, the impact of the federal government, the mysteries of fishing licenses, the problems with roads and bike lanes and the joke we call our 'active transportation' policy. The bus system designed by amateurs. The downtown that is now mostly parking lots. The mysterious payments made to attract concerts. The push to privatize our water supply. It never ends. I could spend a week tallying this up. I'm fed up.

The visible signs of decay in my own neighbourhood are not the result of external factors. They are not the global winds of change blowing hard across New Brunswick. They are self-made, self-inflicted, brought to us by government officials at the civic and provincial level who have been incompetent and sometimes unethical.

Too much money simply 'disappears'. Too often decisions are made to benefit shady associates and business connections. Too much of the government's money flows from the people and to the businesses and industries who should be financing it.

And the saddest thing is that there's nobody willing to stand up for the alternative. Even the NDP leader gets drawn into the trivia and minutiae fostered by the local paper instead of standing up to its owners and demanding accountability. Our provincial Liberal and Conservative premiers are active participants in this mess. Once they're done, they leave the province and are rewarded with high-paying consulting jobs, or some such thing.

My goodness, people. Have some damn courage. Stand up for something!

It costs a lot of money and takes a lot of courage to pick up stakes and leave. I know; I've done it before. In 1980 I left the economically moribund city of Ottawa, suffering a similar malaise, and headed for the green fields of Alberta. It cost me every cent I had and I arrived in the city without a job and without prospects. And yet I still thought it was a better deal than living in a city where only the rich got richer and where the poor were an underclass.

It's not just 3416 people leaving the province. It's 3416 people making the hardest decision they've ever made, 3416 people giving up on the prospect of having a good, safe and secure life here in New Brunswick, 3416 people voting in the only way they can against a system that has become inept, incompetent and corrupt.


Tuesday, June 02, 2015

Mother Canada and Mother Russia

The current government plans to deface some pristine Cape Breton wilderness with a 'Mother Canada' monument. Here's the proposal. Here's some coverage of opposition. A photo below:

 

What I find a bit puzzling is why Canada's conservative government - the same government that wants to erect a 'victims of communism' memorial in Ottawa - would want to emulate a series of Soviet-era monuments.

Here's the 1960s era 'Mother Armenia' statue in Yerevan:





Here's Mother Georgia, in Tblisi:



Mother Russia, in Volgograd (formerly Stalingrad):



Mother Motherland, in Kiev.

 

Hero City, Minsk:



Mother Latvia:



Freedom Monument (Mother Pest?) Budapest.



And this one I photographed in Riga, Latvia:



Don't get me wrong; I love every one of these statues. But they speak to a view of the world we more commonly associate with an all-embracing state. It seems an odd choice of design for the Harper Conservatives.

The one thing the 'Mother Canada' proposal does not have in common with the other statues: the other statues are designed to be seen. This statue is designed to be installed in one of the most remote wilderness regions of Canada. It's an odd choice.

Sunday, May 03, 2015

The Study, and Other Stuff

There are three separate threads in Siemens's response to my last post, all of which are fascinating:

  • The thread concerning whether or not the study he published was bad,
  • The thread examining the question of whether universities can be a valuable force for social equity, and
  • My own experiences of the university system.
Though the latter two threads are of endless interest, I'd really rather focus only on the first, for today.


Whether or not the study he published was bad

Siemens writes, "Stephen expands on his primary concerns which are about educational research in general." Let me be clear: I was making this statement about this study in particular. That's why I cited work from the study itself. Yes, I believe that educational work in general is pretty poor. But my focus was on this particular example.

I think he agrees with me, in part:

Educational research is often poorly done. Research in social systems is difficult to reduce to a set of variables and relationships between those variables. Where we have large amounts of data, learning analytics can provide insight, but often require greater contextual and qualitative data. ... The US Department of Education has a clear articulation of what they will count as evidence for grants. It’s a bit depressing, actually, a utopia for RCTs (Randomized Controlled Trials).

And he says:
Stephen then makes an important point and one that needs to be considered that the meta-studies that we used are “hopelessly biased in favour of the traditional model of education as practiced in the classrooms where the original studies took place.” This is a significant challenge. How do we prepare for digital universities when we are largely duplicating classrooms? Where is the actual innovation? (I’d argue much of it can be fore in things like cmoocs and other technologies that we address in chapter 5 of the report). Jon Dron largely agrees with Stephen and suggests that a core problem exists in the report in that it is a “view from the inside, not from above.”
So, from this, it appears that he agrees with my criticisms.

He nonetheless persists with his defense, focusing on the fifth paper in the study, first suggesting that I don't find a lot to disagree with in it, and second, suggesting it is a vehicle for a conversation between two versions of myself. He also finds fault with some other criticisms:
The names listed were advisors on the MOOC Research Initiative – i.e. they provided comments and feedback on the timelines and methods. They didn’t select the papers. The actual peer review process included a much broader list, some from within the academy and some from the outside. 

Who selected the review committee? Who are the people 'from the outside' that were on it? Here's the best we have on the review process itself. Here are the project reports. All of this was set in motion by the committee I named in my previous post. If there's another list of names of people who were responsible for the outcome, they should be named. Otherwise, the people named are the people responsible. You can't name a list of names and then say it wasn't them.

In his defense of the fifth paper (he seems not to defend the first four studies, the 'histories', at all) he also writes:
In my previous post, I stated that we didn’t add to citations. We analyzed those that were listed in the papers that others submitted to MRI. Our analysis indicated that popular media influenced the MOOC conversation and the citations used by those who submitted to the grant.
I recognize this. What I am saying is that it seems to me the 28 winners of a major education research grant competition would have demonstrated more depth of understanding than is apparent from the summary study that resulted. Maybe I should not have expected more from what was essentially an automated and quantitative analysis of the papers (because there are individually some bright spots). But when we look at the citations - which is essentially what we were provided - the results overall are not reassuring.

That's it for Siemens's defense of the study. The core of my criticism, which is addressed mostly at the first four chapters, is not addressed. Let me reiterate the points here:
  • They all have very small sample sizes, usually less than 50 people, with a maximum size less than 200 people
  • The people studied are exclusively university students enrolled in a traditional university course
  • The method being studied is almost exclusively the lecture method
  • The outcomes are assessed almost exclusively in the form of test results
  • Although many are 'controlled' studies, most are not actually controlled for "potential confounders"
  • All these criticisms apply if you think this is the appropriate sort of study to measure educational effectiveness, which I do not.
I would now like to add that my criticisms are reinforced by two additional authors.

Although Jon Dron says "as such reports go, I think it is a good one," he writes:

For the most part, this report is a review of the history and current state of online/distance/blended learning in formal education. This is in keeping with the title, but not with the ultimate thrust of at least a few of the findings. That does rather stifle the potential for really getting under the skin of the problem. It's a view from the inside, not from above. 

And additionally, George Veletsianos writes,

One of Downes' criticisms is the following: “the studies are conducted by people without a background in education.” This finding lends some support to his claim, though a lot of the research on MOOCs is from people affiliated with education, but to support that claim further one could examine the content of these papers and identify whether an educational theory is guiding their investigations.

I don't think it matters whether the investigation is informed by an educational theory - all I care about is that the studies contribute in a useful, relevant and credible way to the field.

Finally, Siemens says, "The appeal to evidence is to essentially state that opinions alone are not sufficient."

It can be allowed that Siemens's use of "we" in the Chronicle article "is about the academy’s embrace of MOOCs." But as I pointed out, there's no mistaking his suggestion that the people outside the academy, the Alt-ac people, do not rely on evidence. This is what he suggests when he says, "Another approach, and one that I see as complimentary and not competitive, is to emphasize research and evidence."

I have never suggested that opinion alone is sufficient, and never would. But he has to stop characterizing the alternatives as not being evidence-based. Because I believe the opposite: I believe that the controlled trials offered in the study misrepresent what little evidence they provide, and I believe that the alternative approaches offer substantially more evidence than is allowed.


Siemens says, "While Stephen says our evidence is poor, he doesn’t provide what he feels is better evidence." I did once author a Guide to the Logical Fallacies, where I discuss the statistical problems. I've also talked about the same issue of evidence as it related to public policy. I've talked about research methodologies a number of times. And just the other day, I linked to a study I felt did pass muster (and indeed, over the years, I've linked to lots of things that I felt met the appropriate standards of research and evidence). And the body of my work, grounded in practical application and observation, stands as an example of what I feel constitutes "better evidence."

The Other Stuff

It's late and I don't want to linger on the off-topic stuff. But I also want to address a few things.

It's true that I am not a fan of universities and do not feel they support our common objective of "an equitable society with opportunities for all individuals to make the lives that they want without institutions (and faculty in this case) blocking the realization of those dreams."

This does not mean that I want to see them eliminated. And (contrary to Sebastian Thrun) I expect their numbers will multiply exponentially in the future.

But they need to be reformed, and they need to be brought around to the idea that social and economic equity are important. Because as it stands, they are one of the largest bastions in society standing against that idea. Here are a few of the ways:

- universities foster the perpetuation of a social elite, especially through exclusive institutions (Harvard, Yale, etc), legacy admissions, and perpetuation of a private social society consisting pretty much only of the one-percent

- universities bleed those outside the upper classes by consistently responding to society's demand for access with higher and higher tuition fees

- universities have fostered the creation of a low-paid academic underclass in order to support the students that pay these higher fees, resist any suggestion that this underclass should be fairly compensated, and actively oppose unionization

- universities and professors continue to contribute to mechanisms which keep academic research behind expensive paywalls - indeed, they are so indifferent to these costs that they must be required by mandates and laws to open access to their research

- private universities operate tax-free, raise substantial endowment funds (sometimes in the billions), yet always plead poverty, and are typically the prime recipient of funding provided by governments and foundations attempting to support projects leading to the betterment of social and economic conditions

- they then waste that money, and a lot of other money, padding their own resumes and producing research such as the body of work I find myself criticizing today

Yes, perhaps universities could act as a force that promotes social and economic equity. They certainly have the talent and resources. But they don't, they don't want to, and they resist any attempt to make them do it.

It is true that I was badly treated by my PhD committee. But this is not a case of "today affirming that the Stephen in front of the phd committee made the right decision – that there are multiple paths to research, that institutions can be circumvented and that individuals, in a networked age, have control and autonomy." Why not? A couple of reasons:


On the idea that individuals, in a networked age, (should) have control and autonomy: I have always believed that. I believed that long before I ever stood before a PhD committee.

On the idea that "the Stephen that today has exceeded the impact of members on that committee through blogging, his newsletter, presentations, and software writing." This may or may not be true. But I have never believed that I have been more influential because I have worked outside of academia.

I have been influential despite being outside academia. I have been influential despite not having a professor's wages, the support of grad students, a year off every seven, tenure, funding from foundations, grants and agencies, book contracts, and the rest. No university in the world would ever hire me, because they consider me unqualified. I don't regard any of this really as an upside.

Because that's what academia does. It wields huge sums of money and the support to achieve certain social and economic outcomes. I just wish it was wielding this power for good, rather than indifference. But I don't think it ever will.


Saturday, May 02, 2015

Research and Evidence

I wrote the other day that the study released by George Siemens and others on the history and current state of distance, blended, and online learning was a bad study. I said, "the absence of a background in the field is glaring and obvious." In this I refer not only to specific arguments advanced in the study, which to me seem empty and obvious, but also the focus and methodology, which seem to me to be hopelessly naive.

Now let me be clear: I like George Siemens, I think he has done excellent work overall and will continue to be a vital and relevant contributor to the field. I think of him as a friend, he's one of the nicest people I know, and this is not intended to be an attack on his person, character or ideas. It is a criticism focused on a specific work, a specific study, which I believe well and truly deserves criticism.

And let me be clear that I totally respect this part of his response, where he says that "in my part of the world and where I am currently in my career/life, this is the most fruitful and potentially influential approach that I can adopt." His part of the world is the dual environments of Athabasca University and the University of Texas at Arlington, and he is attempting to put together major research efforts around MOOCs and learning analytics. He is a relatively recent PhD and now making a name for himself in the academic community.

Unfortunately, in the realm of education and education theory, that same academic community has some very misguided ideas of what constitutes evidence and research. It has in recent years been engaged in a sustained attack on the very idea of the MOOC and alternative forms of learning not dependent on the traditional model of the professor, the classroom, and the academic degree. It is resisting, for good reason, incursions from the commercial sector into its space, but as a consequence, clinging to antiquated models and approaches to research.

Perhaps as a result, part of what Siemens has had to do in order to adapt to that world has been to recant his previous work. The Chronicle of Higher Education, which for years has advanced the anti-technology and anti-change argument on behalf of the professoriate, published (almost gleefully, it seemed to me), this abjuration as part and parcel of its article constituting part of the marketing campaign for the new study.
When MOOCs emerged a few years ago, many in the academic world were sent into a frenzy. Pundits made sweeping statements about the courses, saying that they were the future of education or that colleges would become obsolete, said George Siemens, an author of the report who is also credited with helping to create what we now know as a MOOC.

“It’s almost like we went through this sort of shameful period where we forgot that we were researchers and we forgot that we were scientists and instead we were just making decisions and proclamations that weren’t at all scientific,” said Mr. Siemens, an academic-technology expert at the University of Texas at Arlington.

Hype and rhetoric, not research, were the driving forces behind MOOCs, he argued. When they came onto the scene, MOOCs were not analyzed in a scientific way, and if they had been, it would have been easy to see what might actually happen and to conclude that some of the early predictions were off-base, Mr. Siemens said.
This recantation saddens me for a variety of reasons. For one thing, we - Siemens and myself and others who were involved in the development of the MOOC - made no such statements. In the years between 2008, when the MOOC was created, and 2011, when the first MOOC emerged from a major U.S. university, the focus was on innovation and experimentation in a cautious though typically exuberant attitude.

Yes, we had long argued that colleges and education had to change. But none of us ever asserted that the MOOC would accomplish this in one fell swoop. Those responsible for such rash assertions were established professors with respected academic credentials who came out of the traditional system, set up some overnight companies, and rashly declared that they had reinvented education.

It's true, Siemens has moved over to that camp, now working with EdX rather than the connectivist model we started with. But the people at EdX are equally rash and foolish:
(Anant) Agarwal (who launched EdX) is not a man prone to understatement. This, he says, is the revolution. "It's going to reinvent education. It's going to transform universities. It's going to democratise education on a global scale. It's the biggest innovation to happen in education for 200 years." The last major one, he says, was "probably the invention of the pencil". In a decade, he's hoping to reach a billion students across the globe. "We've got 400,000 in four months with no marketing, so I don't think it's unrealistic."
Again, these rash and foolish statements are coming from a respected university professor, a scion of the academy, part of this system Siemens is now attempting to join. As he recants, it is almost as though he recants for them, and not for us. But the Chronicle (of course) makes no such distinction. Why would it?

But the saddest part is that we never forgot that we were scientists and researchers. As I have often said in talks and interviews, there were things before MOOCs, there will be things after MOOCs, and this is only one stage in a wider scientific enterprise. And there was research, a lot of it, careful research involving hundreds and occasionally thousands of people, which was for the most part ignored by the wider academic community, even though peer reviewed and published in academic journals. Here's a set of papers by my colleagues at NRC, Rita Kop, Helene Fournier, Hanan Sitlia, Guillaume Durand. An additionally impressive body of papers has been authored and formally published by people like Frances Bell, Sui Fai John Mak, Jenny Mackness, and Roy Williams. This is only a sampling of the rich body of research surrounding MOOCs, research conducted by careful and credible scientists.

I would be remiss in not citing my own contributions, a body of literature in which I carefully and painstakingly assembled the facts and evidence leading toward connectivist theory and open learning technology. The Chronicle has never allowed the facts to get in the way of its opinions, but I have generally expected much better of Siemens, who is (I'm sure) aware of the contributions and work of the many colleagues that have worked with us over the years.

Here's what Siemens says about these colleagues in his recent blog post on the debate:
One approach is to emphasize loosely coupled networks organized by ideals through social media. This is certainly a growing area of societal impact on a number of fronts including racism, sexism, and inequality in general. In education, alt-ac and bloggers occupy this space. Another approach, and one that I see as complimentary and not competitive, is to emphasize research and evidence. (My emphasis)

In the previous case he could have been talking about the promulgators of entities like Coursera, Udacity and EdX, and the irresponsible posturing they have engaged in over the years. But in this case he is talking very specifically about the network of researchers around the ideas of the early MOOCs, connectivism, and related topics.

And what is key here is that he does not believe our work was based in research and evidence. Rather, we are members of what he characterizes as the 'Alt-Ac' space - "For Bethany Nowviskie and Jason Rhody, 'alt-ac' was shorthand for 'alternative academic' careers." Or: "the term was, in Nowviskie’s words, 'a pointed push-back against the predominant phrase, "nonacademic careers." "Non-academic" was the label for anything off the straight and narrow path to tenure.'" (Inside Higher Ed). Here's Siemens again:

This community, certainly blogs and with folks like Bonnie Stewart, Jim Groom, D’Arcy Norman, Alan Levine, Stephen Downes, Kate Bowles, and many others, is the most vibrant knowledge space in educational technology. In many ways, it is five years ahead of mainstream edtech offerings. Before blogs were called web 2.0, there was Stephen, David Wiley, Brian Lamb, and Alan Levine. Before networks in education were cool enough to attract MacArthur Foundation, there were open online courses and people writing about connectivism and networked knowledge. Want to know what’s going to happen in edtech in the next five years? This is the space where you’ll find it, today.
He says nice things about us. But he does not believe we emphasize research and evidence.

With all due respect, that's a load of crap. We could not be "what’s going to happen in edtech in the next five years" unless we were focused on evidence and research. Indeed, the reason why we are the future, and not (say) the respected academic professors in tenure track jobs is that we, unlike them, respect research and evidence. And that takes me to the second part of my argument, the part that states, in a nutshell, that what was presented in this report does not constitute "research and evidence." It's a shell game, a con game.

Let me explain. The first four chapters of this study are instances of what is called a 'tertiary study' (this is repeated eight times in the body of the work). And just as "any tertiary study is limited by the quality of data reported in the secondary sources, this study is dependent on the methodological qualities of those secondary sources." (p. 41) So what are the 'secondary sources'? You can find them listed in the first four chapters, the putative 'histories' (for example, the list on pp. 25-31). These are selected by doing a literature search, then culling them to those that meet the study's standards. The secondary surveys round up what they call 'primary' research, which are direct reports from empirical studies.

Here's a secondary study that's pretty typical: 'How does tele-learning compare with other forms of education delivery? A systematic review of tele-learning educational outcomes for health professionals'. The use of the archaic term 'tele-learning' may appear jarring, but despite many of the studies being from the early 2000s, I selected this one as an example because it's relatively recent, from 2013. This study (and again, remember, it's typical, because the methodology in the tertiary study specifically focuses on these types of studies):
The review included both synchronous (content delivered simultaneously to face-to-face and tele-learning cohorts) and asynchronous delivery models (content delivered to the cohorts at different times). Studies utilising desktop computers and the internet were included where the technologies were used for televised conferencing, including synchronous and asynchronous streamed lectures. The review excluded facilitated e-learning and online education models such as the use of social networking, blogs, wikis and BlackboardTM learning management system software.

Of the 47 studies found using the search methods, 13 were found to be useful for the purposes of this paper. It is worth looking at the nature of this 'primary literature':


(Sorry about the small size - you can view the data in the original study, pp. 72-73)

Here's what should be noticed from these studies:
  • They all have very small sample sizes, usually less than 50 people, with a maximum size less than 200 people
  • The people studied are exclusively university students enrolled in a traditional university course
  • The method being studied is almost exclusively the lecture method
  • The outcomes are assessed almost exclusively in the form of test results
  • Although many are 'controlled' studies, most are not actually controlled for "potential confounders"
This is what is being counted as "evidence" for "tele-learning educational outcomes." No actual scientific study would accept such 'evidence' for any conclusion, however tentative. But this is typical and normal in the academic world Siemens is attempting to join, and this is, by his own words, what constitutes "research and evidence."

Why is this evidence bad? The sample sizes are too small for quantifiable results (and the studies are themselves inconsistent, so you can't simply sum the results). The sample is biased in favour of people who have already had success in traditional lecture-based courses, and consists of only that one teaching method. A very narrow definition of 'outcomes' is employed. And other unknown factors may have contaminated the results. And all these criticisms apply if you think this is the appropriate sort of study to measure educational effectiveness, which I do not.
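To put the sample-size complaint in concrete terms, here is a rough power calculation (a sketch using the standard normal approximation; the effect size of 0.3 is an assumption chosen for illustration, not a figure drawn from any of the studies in the report):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample test,
    using the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# Detecting a modest effect (d = 0.3) at the usual 5% significance level
# with 80% power requires roughly 175 participants per group - far more
# than the sub-50-person studies being aggregated in the secondary sources.
print(n_per_group(0.3))   # ~175 per group
print(n_per_group(0.5))   # even a 'medium' effect needs ~63 per group
```

On these assumptions, a 50-person study (25 per group) only has adequate power for very large effects, which is exactly why pooling many such underpowered studies does not rescue the conclusion.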

I said above it was a con game. It is. None of these studies is academically rigorous. They are conducted by individual professors running experiments on their own (or sometimes a colleague's) classes. The studies are conducted by people without a background in education, subject to no observational constraints, employing a theory of learning which has been outdated and obsolete for decades. These people have no business pretending that what they are doing is 'research'. They are playing at being researchers, because once you're in the system, you are rewarded for running these studies and publishing the results in journals specifically designed for this purpose.

What it reminds me of is the sub-prime mortgage crisis. What happened is that banks earned profits by advancing bad loans to people who could not afford to pay them. The value of these mortgages was sliced into what were called 'tranches' (which is French for 'slice', if you ever wondered) and sold as packages - so they went from primary sources to secondary sources. These then were formed into additional tranches and sold on the international market. From secondary to tertiary. By this time they were being offered by respectable financial institutions and the people buying them had no idea how poorly supported they were. (I'm not the first to make this comparison.)

Not surprisingly, the reports produce trivial and misleading results, science that is roughly equal in value to the studies that went into it. Let's again focus on the first chapter. Here are some of the observations and discussions:
it seems likely that asynchronous delivery is superior to traditional classroom delivery, which in turn is more effective than synchronous distance education delivery. (p. 38)

both synchronous and asynchronous distance education have the potential to be as effective as traditional classroom instruction (or better). However, this might not be the case in the actual practice of distance education (p. 39)

all three forms of interaction produced positive effect sizes on academic performance... To foster quality interactions between students, an analysis of the role of instructional design and instructional interventions planning is essential.

In order to provide sufficient academic support, understanding stakeholder needs is a main prerequisite alongside the understanding of student attrition (p.40)

I'm not saying these are wrong so much as I am saying they are trivial. The field as a whole (or, at least, as I understand it) has advanced far beyond talking in such unspecific generalities as 'asynchronous', 'interaction' and 'support'. Because the studies themselves are scientifically empty, no useful conclusions can be drawn from the metastudy, and the tertiary study produces vague statements that are worse than useless (worse, because they are actually pretending to be new and valuable, to be counted as "research and evidence" against the real research being performed outside academia).

Here is the 'model' of the field produced by the first paper:

It's actually more detailed than the models provided in the other papers. But it is structurally and methodologically useless, and hopelessly biased in favour of the traditional model of education as practiced in the classrooms where the original studies took place. At best it could be a checklist of things to think about if you're (say) using PowerPoint slides in your classroom. But in reality, we don't know what the arrows actually mean, the 'interaction' arrows are drawn from Moore (1989), and the specific bits (e.g. "use of LMS") say nothing about whether we should or whether we shouldn't.

The fifth chapter of the book is constructed differently from the first four, being a summary of the results submitted to the MOOC Research Initiative (MRI). Here's how it is introduced:
Massive Open Online Courses (MOOCs) have captured the interest and attention of academics and the public since fall of 2011 (Pappano, 2012). The narrative driving interest in MOOCs, and more broadly calls for change in higher education, is focused on the promise of large systemic change.

The unfortunate grammar obscures the meaning, but aside from the citation of that noted academic, Laura Pappano of the New York Times, the statements are generally false. Remember, academics were studying MOOCs prior to 2011. And the interest of academics (as opposed to hucksters and journalists) was not focused on 'the promise of large systemic change' nearly so much as it was to investigate the employment of connectivist theory in practice. But of course, this introduction is not talking about cMOOCs at all, but rather, the xMOOCs that were almost exclusively the focus of the study.

Indeed, it is difficult for me to reconcile the nature and intent of the MRI with what Siemens writes in his article:
What I’ve been grappling with lately is “how do we take back education from edtech vendors?”. The jubilant rhetoric and general nonsense causes me mild rashes. I recognize that higher education is moving from an integrated end-to-end system to more of an ecosystem with numerous providers and corporate partners. We have gotten to this state on auto-pilot, not intentional vision.

Let's look at the MOOC Research Initiative to examine this degree of separation:
MOOC Research Initiative (MRI) is funded by the Bill & Melinda Gates Foundation as part of a set of investments intended to explore the potential of MOOCs to extend access to postsecondary credentials through more personalized, more affordable pathways.
To support the MOOC Research Initiative Grants, the following Steering Committee has been established to provide guidance and direction:
Yvonne Belanger, Gates Foundation
Stacey Clawson, Gates Foundation
Marti Cleveland-Innes, Athabasca University
Jillianne Code, University of Victoria
Shane Dawson, University of South Australia
Keith Devlin, Stanford University
Tom (Chuong) Do, Coursera
Phil Hill, Co-founder of MindWires Consulting and co-publisher of e-Literate blog
Ellen Junn, San Jose State University
Zack Pardos, MIT
Barbara Means, SRI International
Steven Mintz, University of Texas
Rebecca Petersen, edX
Cathy Sandeen, American Council on Education
George Siemens, Athabasca University
With a couple of exceptions, these are exactly the people and the projects of the "edtech vendors" Siemens says he is trying to distance himself from. He has not done this; instead he has taken their money and put them on the committee selecting the papers that will be 'representative' of academic research taking place in MOOCs.

Why was this work necessary? We are told:
Much of the early research into MOOCs has been in the form of institutional reports by early MOOC projects, which offered many useful insights, but did not have the rigor — methodological and/or theoretical expected for peer-reviewed publication in online learning and education (Belanger & Thornton, 2013; McAuley, Stewart, Siemens, & Cormier, 2010).

We already know that this is false - and it is worth noting that this study criticizing the lack of academic rigour cites a paper titled 'Bioelectricity: A Quantitative Approach' (Belanger & Thornton, 2013) and an unpublished paper from 2010 titled 'The MOOC model for digital practice' (McAuley, Stewart, Siemens, & Cormier, 2010). A lot of this paper - and this book - is like that. Despite all its pretensions of academic rigour, it cites liberally and lavishly from non-academic sources in what appears mostly to be an effort to establish its own relevance and to disparage the work that came before.

I commented on this paper in my OLDaily post:

The most influential thinker in the field, according to one part of the study, is L. Pappano (see the chart, p. 181). Who is this, you ask? The author of the New York Times article in 2012, 'The Year of the MOOC'. Influential and important contributors like David Wiley, Rory McGreal, Jim Groom, Gilbert Paquette, Tony Bates (and many many more)? Almost nowhere to be found.

Here is the chart of citations collated from the papers selected by the committee for the MOOC Research Initiative (p. 181):


Here are the citation frequencies from the same papers (p. 180):


What is interesting to note in these citations is that the people who Siemens considers to be 'Alt-Ac' above - Mackness, Stewart, Williams, Cormier, Kop - all appear in this list. Some others - Garrison (I assume they mean Randy Garrison, not D.D.) and Terry Anderson, notably - are well known and respected writers in the field. The research we were told several times does not exist apparently does exist. The remainder come from the xMOOC community, for example, Pritchard from EdX, Chris Peich from Stanford, Daniel Seaton (EdX). Tranches.

But what I say about the rest of the history of academic literature in education remains true. The authors selected to be a part of the MOOC Research Initiative produced papers with only the slightest - if any - understanding of the history and context in which MOOCs developed. They do not have a background in learning technology and learning theory (except to observe that it's a good thing). The incidences of citations arise from repeated references to single papers (like this one) and not from a depth of literature in the field.

What were the conclusions of this fifth paper? In the end, nothing more substantial than those of the first four (quoted, pp. 188-189):
  • Research needs to create with theoretical underpinnings that will explain factors related to social aspects in MOOCs
  • Novel theoretical and practical frameworks of understanding and organizing social learning in MOOCs are necessary
  • The connection with learning theory has also been recognized as another important feature of the research proposals submitted to MRI
  • The new educational context of MOOCs triggered research for novel course and curriculum design principles
This is why I said in my assessment of the paper that "the major conclusion you'll find in these research studies is that (a) research is valuable, and (b) more research is needed." These are empty conclusions, suggesting that either the authors of the original papers, or the authors summarizing the papers, had almost nothing to say.

In summary, I stand by my conclusion that the book is a muddled mess. I'm disappointed that Siemens feels the need to defend it by dismissing the work that most of his colleagues have undertaken since 2008, and by advancing this nonsense as "research and evidence."