Wednesday 30 April 2014

Liking vs Sharing Bias

Sometimes I come across posts on Twitter or Facebook that say "Like/Fav for Option A; Share/RT for Option B"! In fact, give me two seconds, an example won't be hard to find...
This system is flawed!

Am I the only person who gets actively annoyed whenever I see that sort of post? It seems to ignore the fact that if you SHARE something, more people will SEE it - completely screwing up your counting system!!

Okay, so maybe people don't care because "it's all in fun". But it worries me that some people may honestly be blind to the mathematics involved.

NOT ROCKET SCIENCE


For instance, pretend I'm running a poll on my favourite serial character - Para (Fave) or ParaB (ReTweet).  I have 448 followers, and we'll pretend they know what the heck I'm talking about. Notice that as long as people prefer Para (and so merely Fave), the only way anyone else is going to see this message is if they happen to search for one of the words I used.

But perhaps Audrey McLaren (a follower) prefers ParaB, so she ReTweets. And Audrey has 1,456 followers. There is now a MUCH larger audience! That single retweet means the potential audience has more than tripled (1,456 new sets of eyes on top of my 448 - possibly fewer, given crossover friends). Simultaneously, since most humans tend to follow like-minded people, many of those new votes on her side would also be for ParaB, creating more ReTweets, which throw off the numbers even more. (Insert your favourite sports team there, if my analogy is breaking down.) But it's even worse than that.
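(A quick back-of-the-envelope on the reach, where c is the number of crossover friends Audrey and I share - the exact value of c is anyone's guess:)

```latex
\text{reach} = \underbrace{448}_{\text{mine}} + \underbrace{1456}_{\text{Audrey's}} - c
             = 1904 - c, \qquad 0 \le c \le 448
```

Even in the worst case, where every single one of my followers also follows Audrey, the audience has still more than tripled.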

In the unlikely event that I'm following someone, even though I disagree with most of their life/sports choices, the most I can do to disagree with their "ReTweet" is to "Fave", thereby cancelling it out. I cannot ReTweet it to my followers for them to "Fave" as well, because that would mean I'm voting for the other side! (I suppose I could do both, cancelling myself out, hoping a follower cancels out the ReTweet that brought it to me, but I doubt that this much thought is involved when you see these informal polls.)

In other words, once something starts getting ReTweeted, it gains perpetual motion, thanks to like-minded individuals... until that motion is negated by Favourites. Or more likely by Apathy. Which I think is really what you end up measuring.  To wit, I foresee two possible interpretations from every single one of these "polls":

1) The Favourites win. In other words, the post didn't get very far away from the source - or possibly it got so far away that the immense new population was able to trounce the people that brought said post to that wider audience. Conclusion: Everyone got riled up and shot the messenger(s).

2) The ReTweets win. In other words, the post got out there, and when most people saw it they went "meh", rather than voting against the people that brought said post to the wider audience. Conclusion: Everyone couldn't care less.

I suppose you could say you're measuring the popularity of the question itself, rather than the actual choices offered. (More faves = good question, more RTs = lame question.) Which might be clever marketing, but to me really serves no point as far as answering the question as posed.

OPEN QUESTION


So here's a thought. Is it actually possible to create an algorithm which adjusts to the constantly fluctuating population? Which can actually factor in percentages to know how many people honestly DID favour choice A or B over choice C? (Choice C being: Stop, Stop, Stop It, Stop.) I doubt it.
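For what it's worth, here's a toy simulation of the cascade - entirely my own back-of-the-envelope sketch, with made-up parameters (everyone in an audience sees the tweet, a fixed fraction bother to vote, each ReTweet re-broadcasts the poll to that voter's mostly like-minded followers, and Faves spread to no one). Even when the true preference at the source is a dead 50/50 split, the ReTweet count runs away the moment the sharing starts:

```python
def run_poll(first_audience=448, followers_each=1456, vote_rate=0.10,
             true_pref_b=0.5, like_minded=0.8, generations=2):
    """Expected Fave (A) vs ReTweet (B) counts for a 'Fave for A, RT for B' poll.

    Toy assumptions (all mine): everyone in an audience sees the tweet, a fixed
    fraction vote, each ReTweet re-broadcasts the poll to that voter's own
    followers, and those followers lean towards the retweeter's choice
    ('like_minded'). Faves never spread the poll anywhere.
    """
    faves = retweets = 0.0
    audience, pref_b = float(first_audience), true_pref_b
    for _ in range(generations + 1):
        voters = audience * vote_rate
        b_votes = voters * pref_b
        faves += voters - b_votes
        retweets += b_votes
        # every B voter re-broadcasts to their own, mostly like-minded, followers
        audience, pref_b = b_votes * followers_each, like_minded
    return round(faves), round(retweets)

print(run_poll(generations=0))  # (22, 22)  - the honest 50/50, nothing spreads
print(run_poll(generations=2))  # B buries A, despite identical true preference
```

Notice the sketch isn't correcting for anything - it just demonstrates that the raw counts measure reach (and apathy) far more than preference. To actually "fix" the numbers you'd need to know, for every vote, which retweet brought the poll to that voter and how big that branch's audience was - which is exactly the information you don't have.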

After all, non-response bias occurs when your survey results are skewed because the responses come mostly from people with very strong opinions (as opposed to a more typical person). At best, I think that's what's happening here. Meaning an algorithm which fixes the problem... would also fix a huge problem in regular data gathering.

But I might be way off here. Do you think I'm way off here? Comment on the post if you agree with me. Share it with others if you don't agree.

Saturday 26 April 2014

Nov PD4: Heads


November 2013 was a good month for Professional Development. I attended four sessions, each with a different audience. Finally, I'm getting around to blogging about them.

PART 1: SCHOOL PD
PART 2: SUBJECT PD
PART 3: REGION PD
PART 4: HEADS PD

Technically, this event occurred in early December, not November, but I'm including it, because it's my blog. Our math curriculum heads (including Robin McAteer @robintg and Anne Holness from my last post) got mathematics heads together from around the board, and invited them to bring another teacher along. I offered to go with JP Brichta (@JPBrichta) because, to be brutally honest, I felt like I needed a day away from the school. I'm glad I went.



BELIEVE IT


The first item on the day's agenda was "Facts & Beliefs". There were some 'Norms of Engagement', and the ones that resonated the most with me (in that I jotted them down) were: "Be open to learning"; "Seek truth in others' perspectives"; "Speak and listen respectfully". There were also 'Barriers to Learning', which included: "We focus on confirming our hypotheses, not challenging them" (confirmation bias, which I've blogged about over on MuseHack), "We consider ourselves to be exceptions" (yet aren't as forgiving towards others), and "vividness bias" - effectively, whatever information is most vivid or prominent is what sticks with us and sways us, whether or not it's representative.

That last was a new one on me. I wonder if "vividness bias" is part of the reason students do things like multiply exponents when they should be adding them. They recognize that exponents are tied to multiplication - it's the most prominent thing - and so... they mess it up. What if we started senior grades not with exponent laws, but with logarithms? It's new, so there's less of a frame of reference... and maybe from the log laws they could reverse engineer the exponent laws? I'm speculating here - I don't teach logarithms. I'm also off track.
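(While I'm at it, the laws in question - the common slip being to multiply the exponents where they should be added - alongside the log law that mirrors them:)

```latex
a^m \cdot a^n = a^{m+n} \quad (\text{not } a^{m \cdot n}),
\qquad \log_b(xy) = \log_b x + \log_b y
```

The log law really is just the exponent law read in the other direction, which is the reverse engineering I'm speculating about.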

For further reading, the above were concepts that came from the book "Intentional Interruption" by Steven Katz and Lisa Ann Dack. "We need to think and talk about learning as much as about teaching." (And somehow we need to find the time to do that...) From there, the session moved to talking about "Self Efficacy", which is a person's belief in his/her ability to succeed in a situation.

Quick background for those not teaching in Ontario: We have subject streaming. It's not the same streaming as 20 years ago (Basic/Advanced/Enriched), because Mike Harris implemented "destreaming" in 1999. The insanity that came from mashing everyone together brought us to course streaming (Applied/Academic... and later, Essentials, one below) which is where we are now. Note that you can, for instance, take Applied Math and Academic English.

A refrain that our curriculum advisors hear SO OFTEN is "half of these students in my Academic class should be in Applied", or possibly the reverse.


Replicators: The only sure way to pass on information.
Point 1: Why are students misplaced? My belief is that part of the problem is a lack of understanding. Parents believe that if a student starts in the "Applied" stream, they cannot go to university - which is not true. They'd need a minimum of 6 math courses, not 5, but there ARE pathways. It's also been pointed out that when more affluent parents force their children into the higher stream, it widens the wealth inequality gap. (See the keynote section from this OAME post.) It's not doing math any favours either, when students who hate the subject are forced to work with abstractions rather than more concrete examples.

Now - set aside any confirmation bias you have.

Point 2: ARE students misplaced? Gaps in understanding are correlated with math aptitude, but that's NOT causation. Just because a student has gaps, should they be denied the Academic course? The issue is REALLY one of processing time - to complete the Ontario curriculum in 5 courses instead of 6, the SPEED of the Academic stream is increased. There is less (perceived?) time for manipulatives or investigation, due to the higher level of abstraction. Yet if a student believes they are capable, they may rise to the occasion.

In fact, if we consider classroom environment as a factor (as in my last PD post), a student may work to the level of those around them. Be that up or down. The bottom line is Teacher Efficacy: We have to believe that we can teach them. Regardless of level.

I've also written down "show the Grade 7/8 expectations in Grade 9, writing related questions". I forget the context, but feel like this connected to one other issue, that being modified expectations. Students can pass Grade 8 with only Grade 6 knowledge of math, because no one fails before Grade 8... in these cases, I feel like it really is a disservice not to use the streams the way they were intended.


LANGUAGE ARTS


From the discussion on "beliefs" we moved to talking about ELLs (English Language Learners) - I believe this was after lunch. It was noted that those who start in the English system (when it is not their native language) can be weaker than those who transfer in later, because those who come in later have transferable skills from their first language. The language gap generally increases from Grade 4 onwards.


We looked at the following example: "The Ministry of Natural Resources tagged and equipped two moose with tracking collars. Two hours later, one moose's location is given by (6, 8). The other moose's position is given by (-3, 5). Assuming they were captured at (0,0), determine the difference between the distance they travelled."

Dan Meyer and other educators often talk of the importance of picking out the relevant information from a task... but including such extraneous information as "The Ministry" can cause problems for ELLs before they even start. There was talk of scaffolding for English versus scaffolding for Math. A "Step 1" problem might be JUST the visual, looking for the distances, not even the difference. From there, step it up. For instance, if the problem as posed were a "Step 5", a "Step 4" might use the same phrasing, but boldface key items.

(I also must point out that the problem itself is flawed. We don't know the distance the moose travelled - we only know their ending positions! Thus we can only calculate a final displacement. Even when we understand English, we can still make mathematical errors with the subtleties of language. Perhaps we even do so purposefully, so as not to overcomplicate a problem.)
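(For reference, the arithmetic the question is actually after - displacements from (0,0), as I just noted, rather than true distances travelled:)

```latex
d_1 = \sqrt{6^2 + 8^2} = \sqrt{100} = 10, \qquad
d_2 = \sqrt{(-3)^2 + 5^2} = \sqrt{34} \approx 5.83, \qquad
d_1 - d_2 \approx 4.17
```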

PD then shifted to looking at assessment plans and evidence records, as tools to support student learning. The former is meant to be a "living document", which can be adjusted/updated while going through the course... the idea is NOT to standardize something for all teachers teaching a course. (There's always this fine line between wanting more guidance... and not wanting to be told specifically how something MUST UNEQUIVOCALLY be done.)

Every table of people was given records for three different students, showing their Level results on tests, tasks, et cetera. We were then asked to come up with a number grade that would represent this, to go on a final report card. (For those who don't know, in Ontario it's generally accepted that Level 0 is below 50%, Level 1 is 50-59%, Level 2 is 60-69%, Level 3 is 70-79%, and Level 4 is 80% and above.)
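(For the curious, here's one naive way to collapse a record of Levels into a single percentage - emphatically not the board's method, nor what anyone at the table actually did, just a flat average of a representative mark for each band:)

```python
# Purely illustrative: assign each Level a representative mark within its
# Ontario percentage band, then take a flat average. Real teachers weight
# the most recent and most consistent evidence, not every result equally.
LEVEL_MARKS = {0: 40, 1: 55, 2: 65, 3: 75, 4: 90}

def naive_mark(levels):
    """Flat average of representative marks for a list of Level results."""
    return sum(LEVEL_MARKS[lvl] for lvl in levels) / len(levels)

# A student with mostly Level 3s, a couple of 4s, and one slip to Level 2:
print(round(naive_mark([3, 3, 4, 3, 2, 4, 3])))  # 78
```

The point of the exercise, of course, was precisely that no formula like this captures the professional judgment involved.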

At our table, most of us worked individually, then discussed our results; we all thought similarly, though not exactly the same. Groups eventually put results up on a "marks timeline" which had been placed on the wall, adding post-it notes, and then we discussed our results as compared with the marks the teachers themselves had given the three students.


Here, let me give you a mark...
There was some discussion after this about the whole problem of variability between teachers, and how daily context for who the student is can help in making final decisions. At one point, I remarked how I'd given a Grade 11 student a final mark of 84; she came back (after finding out in guidance) wondering if there was anything she could do to get an 86, to help put her on the honour roll.

I pretty much just gave her 86. She was a good student, and two percentage points wasn't something I was about to haggle over. Lest you get the wrong impression, I HAVE stood firmly on a 78, not rolling it to an 80 even after being phoned by a parent, for a different situation. Context is everything. Speaking of which, according to JP there was a bit of a hush in the room after I said that. I don't know that I was really aware. I'm like that.

The day concluded with a look at rubrics and how they articulate criteria; this involved a video as well as some discussion of how placemats and evidence records can help in their creation.


WRAPUP


The exercise of looking at the evidence records was a good one - we did it later with our department back at school. The other things I learned are, well, basically what you just read. I do want to mention something else though... the math coordinators at the board office do lots of good work. I feel it's important to remember that when "board mandates" - ones they have no control over - come down to us.

Also, I think in future if I'm going to blog about this stuff, it's either going to happen sooner after the event, or not at all. >.<