Wednesday, 28 October 2015

TANDQ 09: Price Points

In 2014-2015 I wrote an education column called "There Are No Dumb Questions" for the website "MuseHack". As that site has evolved, I have decided to republish those columns here (updating the index page as I go) every Wednesday. This ninth column originally appeared on Thursday, November 27, 2014.

Why don’t more prices end in round numbers?

In large part due to the “Left Digit Effect” - but as a bonus, I’ll also mention “Benford’s Law”, and the pricing of your own items. It seems a sensible topic to tackle as we head into the season of Holiday Shopping, right? Not to mention Black Friday/Cyber Monday for the Americans.

No one (that I’ve been able to discover) knows where the practice of ending a price in a “9” or “99” began. It’s been suggested that doing so would force the cashier to open the till to make change, so that the sale would become a matter of record. But perhaps business owners simply started noticing that pricing at xx.99 was good for sales. Because it is. This has been shown experimentally, going back at least to the 1960s, when a liquor store in Southern California found that pricing their wine UP to 99 cents (from 79 and 89 cents) increased the number of bottles sold. How could this be?

One issue is the Western manner of reading, which involves scanning from left to right. Upon seeing the price $39.99, the leftmost digit is seen first, and is thus given greater weight - even though by the time we get to the end, the price might as well be $40. (Particularly in countries which no longer mint pennies!) Consider: does it immediately register that a $5.99 item is actually double the cost of a $2.99 item? The term “Left Digit Effect” describes how consumers reading $5.xx will interpret it as “$5 and change”, even if the cents given mean the cost is “almost $6”. Which, granted, doesn’t quite explain why raising a price would result in better sales - but there’s an element of psychological pricing involved too. If you DO see the price as “almost $6”, you may get the false impression that the item is somehow on sale. Even if $5.99 is the regular price.

All that said, there’s one other mathematical aspect in play, involving percentages.

Thirty Percent Chance

It turns out that not all leading (“left”) digits are created equal. While truly random numbers (like the lottery) will be evenly spread out across all digits, and truly constrained numbers (like ones which actively eliminate digits) are not subject to the following effect, a set of random measurements (for instance, house addresses) will tend to start with a “1” more often than a “2”, a “2” more often than a “3”, and so on. In fact, the leftmost digit in most data sets turns out to be a “1” fully 30% of the time! That’s not even close to one ninth! The mathematics behind it is referred to as Benford’s Law, which gives the probability of a first digit “d” as log10(1 + 1/d). This law is even “scale invariant”, meaning it works regardless of whether you measure in metric, in imperial, in dollars or in euros. Why is this useful? Well, for one thing, when the expected first digit pattern is MISSING, we can identify voting anomalies, or catch those committing tax fraud. Yet how does this connect back to shopping?
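For the code-inclined, here's a quick sketch of that distribution, using the standard Benford formula log10(1 + 1/d):

```python
import math

def benford(d):
    """Benford's Law: probability that a value's first digit is d."""
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print(f"{d}: {benford(d):.1%}")
# A leading "1" comes up about 30.1% of the time,
# while a leading "9" trails at about 4.6%.
```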

At first, it seems like a complete contradiction - shouldn’t we see more “1”s, not “9”s? But remember, Benford’s law talks about the leftmost digit. The second digit does not follow the trend to the same extent, and by the time you reach the fifth digit, number choice is fairly uniform from 0-9 (all other things being equal). Why? Let’s consider the percentages. If an item is valued at $10, to move that first digit to $20, you need to double the price - a 100% increase. But for an item at $20, moving it to $30 merely requires a 50% increase - even though both cases involved an additional $10. And moving a $90 item to $100 is trivial - only a bit over a 10% increase in price. (At which point changing the $100 item to the next initial digit, $200, is again fully double.) Such is the nature of logarithms. So why not leave the price at $99? There's not much percentage to be gained by changing it.
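Those shrinking percentages are easy to verify for yourself:

```python
def pct_increase(old, new):
    """Percent increase needed to move a price from old to new."""
    return (new - old) / old * 100

print(pct_increase(10, 20))   # 100.0 - a full doubling
print(pct_increase(20, 30))   # 50.0 - same $10, half the jump
print(pct_increase(90, 100))  # ~11.1 - the $10 is now trivial
```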

Consider also discounts. If there is a 50% discount on an item (under $100), the sale price will start with a “1” so long as the regular price was anywhere between $20 and $40 - and with the leading digit now being a “1”, it might appear to be an especially good deal. Increase the discount to 60%, and an item ending up at $19.99 need only have started below $50 ($49.98, to be exact) - yet we may not stop to consider the actual drop in price. We may also perceive a $100 item marked down to $89.99 as being a much better deal than a $116 item priced down to $105.99, because of the change in place value - even though the price differences are the same.
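A quick check of that discount arithmetic:

```python
def discounted(price, pct_off):
    """Sale price after a percentage discount, rounded to the cent."""
    return round(price * (1 - pct_off / 100), 2)

print(discounted(36.00, 50))  # 18.0 - a price in the $20-$40 range now leads with "1"
print(discounted(49.98, 60))  # 19.99 - 60% off just under $50 lands at $19.99
```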

So, can we relate some of this to pricing your own items for sale? Well, while the “Left Digit Effect” might be in play, a study last year suggested that customers prefer to pay in round numbers. Because really, when was the last time you were at the gas pump, trying to hit a total that ended in .99? In fact, given a “pay-what-you-want” download plan for the video game “The World of Goo”, this study found that 57% of consumers chose to pay round dollar amounts. (I’ve also noticed Kickstarter pledges tend to go in round numbers - is that built into their system?) Some stores will even use round number prices to create the impression of added value or quality. But before we disregard the psychology entirely, there are applications outside of shopping. The link (below) about game prices uses the “Left Digit Effect” as a reason to award 3000 experience points - rather than only 2950 - when you’re coding up your game. After all, the percentage increase from 2950 to 3000 is below 2%.

Will you now pull out your calculator when doing your shopping? Probably not - so your best takeaway here is to avoid making spur-of-the-moment decisions, particularly when looking at what, on the surface, seems to be a “great deal”. Oh, and you should also double-check before following any other advice about “secret codes” used in pricing.

For further viewing:

1. Why Game Prices End in .99

2. Benford’s Law (with graphs)

3. GoodQuestion on WCCO News (3.8 min* video)
    * - see what I did there?

Got an idea or a question for a future TANDQ column? Let me know in the comments, or through email!

Wednesday, 21 October 2015

TANDQ 08: Average Expectations

In 2014-2015 I wrote an education column called "There Are No Dumb Questions" for the website "MuseHack". As that site has evolved, I have decided to republish those columns here (updating the index page as I go) every Wednesday. This eighth column originally appeared on Thursday, October 30, 2014.

Motivation through punishment, or reward?

Reward. The amount of oversimplification there is intense. Let’s go deeper.

To decide which of those is more effective, we must first distinguish between “feedback” and “reinforcement”. Feedback is when an output becomes part of the next input. That’s something to be avoided if you’re a sound engineer, but desirable if you’re involved in a creative endeavour. Reinforcement, on the other hand, is when a stimulus is used to increase the probability of a certain response. It’s not the same thing, though consistent feedback can act as reinforcement. Also important is the fact that negative reinforcement is NOT the same thing as punishment. Reinforcement has two flavours: positive, the addition of a “good” stimulus (eg. praise), and negative, the removal of a “bad” stimulus (eg. nagging). Neither of those involves discouraging a behaviour (eg. through humiliation) - that’s punishment, which would lower the probability of a certain response.

Of those three choices, positive reinforcement is generally regarded as the best. That said, this column will now be examining feedback.

Next, let’s distinguish between the “law of averages” and “regression to the mean”. The “law of averages” is a mental fabrication, a misinterpretation of the much more mathematical “law of large numbers” (or Bernoulli’s Law in statistics). The “law of averages” is the notion that, for example, if a coin flips 5 heads in a row, tails then becomes more likely. What IS true is that, over a large number of flips (a SERIOUSLY large number), the ratio of heads to tails will approach 50/50 - the early surplus of heads gets diluted, not cancelled out... and the chances for the next flip have not changed. Every flip is independent of the last. Even after 5 heads, heads is just as likely to occur as tails. (Unless you’ve got a two-headed coin.) This can be a difficult thing for us to wrap our heads around, particularly when we consider “regression to the mean” - which IS legitimate mathematics, and the topic (finally!) that will contrast punishment with reward.
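If you'd rather see the independence than take my word for it, here's a seeded toy simulation - checking only the flips that immediately follow a streak of five heads:

```python
import random

# Simulate a long run of fair coin flips (True = heads), then look at
# the flip that comes right after every streak of five heads.
random.seed(1)
flips = [random.random() < 0.5 for _ in range(200_000)]

follow_ups = []
for i in range(5, len(flips)):
    if all(flips[i - 5:i]):       # the previous five were all heads
        follow_ups.append(flips[i])

tails_rate = 1 - sum(follow_ups) / len(follow_ups)
print(f"P(tails after 5 heads) ~ {tails_rate:.3f}")  # hovers around 0.5
```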

That’s So Mean

Regression (or reversion) to the mean essentially says: The further a measurement is from “normal”, the higher the chances that subsequent measurements will be closer to “normal” (whatever “normal” happens to be for the data). To use another example, if you have a really good (or bad) day, it becomes increasingly likely that your next day will be average. Again, this is not saying that a bad event becomes more likely after several good ones - where “good” may be someone’s definition of “average” - what it’s saying is that an extreme event, when it occurs, is likely to be followed by a more average one. (We see this frequently in sports.) Granted, “regression to the mean” does not eliminate the possibility that your definition of average may change over time, for instance as skill level increases. (Consider my last column about the Dunning-Kruger Effect.) But it DOES say that, following an extreme event, we will regress back to “normal”… regardless of whether the feedback received for that event was in the form of a reward or a punishment.
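You can also watch the regression happen with a toy model - a fixed “skill” level plus random noise - where the day after an extreme day lands back near the average:

```python
import random

# Each "day" is a fixed skill level (70) plus random noise.
# Pick out the extreme days (90+), then average the day that follows each.
random.seed(7)
skill = 70.0
days = [skill + random.gauss(0, 10) for _ in range(100_000)]

extreme_next = [days[i + 1] for i in range(len(days) - 1)
                if days[i] > skill + 20]
avg_next = sum(extreme_next) / len(extreme_next)
print(f"Average day after a 90+ day: {avg_next:.1f}")  # back near 70, not 90
```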

The Veritasium channel explains the concept very well in this 7 minute video.

It’s this problem of "regression to the mean" that requires studies and experiments to have a control group, generally in the form of a placebo (a substance that is known to have no effect). After all, given people who are by definition outside the “norm” (otherwise why would they need treatment?), we must compare a set of them who receive care with those who may simply be regressing to the mean. If both the treated and untreated groups improve by about the same amount, the treatment is ineffectual. A study on osteoarthritis of the knee even showed that surgery could be a placebo - the patients improved regardless of whether a real procedure was done. But what does all of this mean in terms of feedback?

Consider this scenario: You do really poorly in an interview. There is little benefit to beating yourself up over it. That event was outside the norm - statistically speaking, odds are you WILL do better next time. Similarly, if you get a really high hit count on one blog post, the count is not likely to be repeated next week. That good post was outside the norm, and that level of performance is unlikely to be maintained (statistically speaking, all other things being equal). More to the point, while it is similarly futile to reward yourself for that one great event... doing so consistently can turn your internal feedback into a message of reinforcement. A message of positive reinforcement (with a reward), rather than negative reinforcement (no longer berating yourself) or punishment (refusing your needs until things are done right).

Hence my saying that reward beats punishment.

That said, the message of the reward is just as important as the reward itself! If you reward yourself for “being so smart”, you’re actually encouraging a fixed mindset. The implication is that your “normal” did not change, but somehow you “beat the odds”. (The same sort of problem will occur if you decide there is nothing to learn from that really poor interview.) On the other hand, if you reward yourself for “your hard work”, you’re encouraging a growth mindset. The implication is that your efforts are changing your “normal”, and if you keep this up, what once was an extreme event may become the new average. Which means that it's the message you give to yourself - and perhaps more importantly, to the others you speak with - that's important!

Of course, we may not get it right the first time. But it’s our average performance - not any single extreme - that, in the end, leads us towards our great expectations.

For further viewing:

1. Identifying Negative Reinforcement

2. Regression Toward the Mean

3. Coaching and Regression to the Mean (Video)

Got an idea or a question for a future TANDQ column? Let me know in the comments, or through email!

Wednesday, 14 October 2015

TANDQ 07: Effect of Learning

In 2014-2015 I wrote an education column called "There Are No Dumb Questions" for the website "MuseHack". As that site has evolved, I have decided to republish those columns here (updating the index page as I go) every Wednesday. This seventh column originally appeared on Tuesday, September 30, 2014.

Why is this activity harder than I anticipated?

People are not good at self-assessment - with a possible caveat that I will get to later. Consider that other words can also be substituted for “activity” above, such as “job”, “hobby”, “decision” or maybe even “relationship”. And experiencing some difficulties may actually be a good sign. It relates to the following quote: “The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.” (Bertrand Russell). In other words, it relates to the Dunning-Kruger Effect.

David Dunning and Justin Kruger published their result in a paper, “Unskilled and Unaware of It”, back in 1999. The rational wiki offers an explanation, ultimately simplifying it down to “people are too stupid to realize they’re stupid”. Let me put it another way: it has to do with your focus. When you don’t know much about a subject or activity, you tend to perceive it via the limited understanding you already have. Which (in most cases) will make things seem simple enough. Conversely, as you learn more about the subject, your focus will shift from what you know towards the things you do not yet know. As a result, the same activity appears more complicated. Let me give you an example.

My quote above, attributed to Bertrand Russell, may be incorrect. If you clicked on the “rational wiki” link, you might have noticed that their quote, while preserving the spirit, is actually quite different. Elsewhere on the internet, I have also seen the quote end similarly, but begin with: “The fundamental cause of the trouble is that in the modern world …” Alternatively, sometimes the quote includes the words “fools and fanatics”. The quote has also been attributed to Charles Bukowski, but no authentic source for that has been found. Of course, Bob Talbert (a columnist) will also turn up in searches, as he once quoted Russell. Now, when I started this column, I could never have (correctly) predicted the amount of time I’d have to spend researching that one single quote. Because I didn’t even know there was a controversy! (Ultimately, I gave up, and wrote this paragraph. Feel free to educate me as to the real quote in the comments below.)

That said, not knowing things - that’s not the problem. The problem comes in believing that we DO know things, when in actuality we do not. Or, to be generous, perhaps they are things we once knew, but no longer know under present circumstances. Either way, couple this effect with the fact that any research you do may involve Confirmation Bias (covered in my column here), and we can end up with enough rope to hang ourselves. For instance, these articles from earlier in 2014: "The less Americans know about Ukraine’s location, the more they want U.S. to intervene" and "The less Canadians know about Fair Elections Act, the more they support it". But wait - there’s more.

Reverse the Polarity

As with most things, there is a flip side. Once your focus has shifted to the things you do not know about the subject, you will tend to downgrade the knowledge you have already obtained. If you look at the graph in the original Dunning-Kruger paper, which plotted “Perceived Ability” along with “Actual Test Score”, the people in the top quartile (and only the top quartile) actually scored in a higher percentile (i.e. relative to everyone else) than they believed they would. Put more simply: once you’re in the thick of things, you might know more than you think you do.

Some connect this reversed relationship to the “impostor syndrome”, a phenomenon whereby you believe you are a fraud despite a series of accomplishments. I think the connection there is tenuous - an expert with an inaccurate perception does not necessarily think they aren’t any good at all. (The truth is probably closer to the false-consensus effect.) To provide another personal example, I wonder if maybe I’ve been telling you things you already know... that doesn’t mean I think this column is useless. To that end, let’s conclude by applying the “Dunning-Kruger” effect to the effect itself.

It’s not actually saying anything about intelligence, or stupidity. Very smart people may fall victim, if they are in a situation of which they have little knowledge, experience or skill. It’s also relative, in that if you take the top 5% of experts, and put them all in a room, a bunch of them will end up in the bottom quartile - despite the fact that (by definition) they know more than 95% of people in their field. There is also something called “regression to the mean”, the tendency for an extreme performance to be followed by one closer to the overall average. Feel free to check out “What the Dunning-Kruger effect is and isn’t” for more about this (it also has the graph I mentioned earlier).

And now for that caveat I mentioned at the beginning. Dunning was interviewed earlier this year, in an article entitled “Why 40% of us think we’re in the top 5%” (see link below). In it, he discusses a test of emotional intelligence, which showed that it was the top performers who showed the most interest in improving. That is, if a student did well with a puzzle, they would return to it, but if they did poorly, they would not - except in Japan. There the pattern was flipped. So... to what extent might our perceptions be a product of Western culture? I don’t claim to have an answer to that.

In the end, perhaps it's true what they say: “A little learning is a dangerous thing.” (Alexander Pope) Maybe it was him. Oh, not this again!

For further viewing:

1. Measles, the Media, and the Dunning-Kruger Effect

2. The Dunning-Kruger Effect and the Climate Debate

3. Why 40% of us think we’re in the top 5%

Got an idea or a question for a future TANDQ column? Let me know in the comments, or through email!

Sunday, 11 October 2015

Vertex Form and Sucking

To be clear, this post will not say the vertex form of the parabola sucks. Namely because it doesn’t (it’s the y=mx+b of the quadratics world). Instead, it will discuss how teaching an aspect of it sucks, why squares suck, and why sometimes I feel like I suck.

I was at a professional development session on a Thursday night (October 1st, put on by our local math organization) which was quite good. One of the main ideas was that algorithms don’t equal engagement or understanding. It’s better to make things more visual, and to leverage student ideas - to move from a task towards the abstract, rather than hoping an abstract procedure will allow time for problem solving later on.

And what did I do the very next day? I went into my class on Friday and showed them a couple of algorithms for turning the standard form of a parabola into vertex form. The disconnect could not be more obvious. Particularly when it kind of blew up in my face.


Let's first cast an eye towards reality, starting with “real world” explanations for why I went algorithmic right out of the gate. Because if you’re a teacher, I’m sure you’ve been there too, thinking “I really should do ‘x’, yet I’m not”. I’ll leave it to you to decide (in the comments?) if my “excuses” are valid.

Expanding is a thing...
One of the main reasons is that I wasn’t going to be there on Monday. (Due to extracurriculars on my part, there would be a substitute.) So while I could have done expanding first, and moved on to show how vertex form creates that literal square next... I wouldn’t be there for that day of going backwards. And it didn’t seem fair to leave that with a substitute, since it could have blown up in their face instead of mine.

That’s an excuse because maybe I could have found something else in the interim, and followed up on Tuesday.

The second reason is that, honestly, “completing the square” ONLY has a use in high school at THIS particular instant in time. (We don’t do conics in Ontario aside from polynomial parabolas.) A majority of these students aren’t going to see (or need to do) this sort of thing again, so why explore it? Plus there’s a work-around for it, namely that the axis of symmetry is -b/2a.

That’s an excuse because maybe there’s some reasoning coming out of this (particularly as related to algebra tiles below) that could be useful.
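For what it's worth, that -b/2a work-around gets you all the way to vertex form without ever completing a square - here's a small sketch:

```python
def vertex_form(a, b, c):
    """Convert y = ax^2 + bx + c to vertex form y = a(x - h)^2 + k,
    using the axis of symmetry h = -b/(2a) and k = f(h)."""
    h = -b / (2 * a)
    k = a * h**2 + b * h + c
    return h, k

h, k = vertex_form(1, 4, 1)   # x^2 + 4x + 1 = (x + 2)^2 - 3
print(h, k)
```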

Finally, there’s the obvious point that I didn’t really have time to adjust my whole timetable for the unit in a single evening. Add to that the fact that I am really, REALLY not a task guy, and, well, we got what we got. Which is an excuse because I think I had the pieces for something there, if only I’d put a bit more thought into it.

It’s a matter of convincing the students to make a square, rather than use all the pieces. Right? Well, here's what DID happen:


I may not be a fan of tasks, but I DO like concrete models. Hence I am a fan of algebra tiles. For the uninitiated, here’s a summary of how the various models of a quadratic work:
A tile is named by its area. Positive is red, negative blue.

So here’s where I try to redeem myself, in that I was not merely showing the algorithm, I was walking through it in the broader context. (And if you want to go 3D with this sort of idea, check out Al Overwijk.) Granted, the next logical step would be playing around BEFORE the algorithm, not DURING it.

Here’s the interesting piece about the situation which drove me to blog, which is something to bear in mind if you want to try such an activity yourself: The symmetry of squares is a pain in the ass.

The first example I used included x^2 + 2x, and to complete the square you need a single unit chip. (x^2 + 2x + 1) Minds were blown (possibly because I had an ‘a’ value one step before, derp). Then I offered up x^2 + 4x, and again we need to complete the square, and I circulated around a little.

The interesting bit is students did not make the square (x+2)(x+2). They made the square (1+x+1)(1+x+1), in other words, the tiles you see on the LOWER part of the image below, not the UPPER part. And this was two students, completely independently of each other (while most of the other students weren’t using the tiles).

Huh. Squares suck.

I completely get why they did that. And it didn’t detract from the fact that you need +4 and -4 for completeness. The question is, where does one go with that?

Should x^2 + 2x have been completed by putting HALF a tile on each side? Eventually, they will have to deal with fractional tiles... but it’s a little trickier to complete the square that way (four one-quarter pieces). Or is it?

On the flip side, it’s important to recognize that the upper model is also a square, because you’re extending in the two dimensions you have... you don’t need to extend FOUR directions, a square is not a four dimensional shape. Or is that important?

These are obviously questions best asked of the individual students, but this was 2 people out of a class of 25, and I still had to show partial factoring for the ones in the room who were curled up in their chairs, weeping quietly. (Hyperbole!) So I moved on, in part because I couldn’t see how extending this squares idea to 23 other people would be more than added frustration. If the others weren’t even making squares, how would different squares be helpful?

It did give me something to think about though. Maybe it's given you something to think about. If not, here's one last item to ponder:


In my mind, a lot of the individual exploration being used in classes these days works best with, well, individuals. Are groups really a way to share opinions? Or are they more a way to force deciding on a “best” method? Even with “open questions” (like ‘what do you notice?’) you ultimately have to narrow the focus of the class, and honestly? I hate that. The good of the many can outweigh the good of the few. Or the one.

Which educators can also do to themselves. Making me feel like my thinking sucks.

When vertex meets factored.
To wit, I have a song I parodied about vertex form. It’s one of my favourites, it’s call-and-response, I do it as a wrap-up... and I have at least one student who recorded it and showed it to his parents (yeah, um, okay then). And YET what seems to be a more and more common refrain at the Professional Development I go to? Educators who say: “Yeah, you can toss in a silly song, but that’s not learning, here’s a better way.”

Okay, yeah, but songs are my way, so... so I guess I'll be over here... apparently not helping, only entertaining...

Oh sure, I can talk to myself about “You’re a Good Teacher”, and don’t take things personally, and the speaker means silly raps not what I’m doing (right?)... but even though I KNOW songs aren’t valued by the mainstream educating community, does the mainstream REALLY have to keep poking at it to get a chuckle out of the majority of the crowd?

I guess what I’m saying is, one size doesn’t fit all, even in the adult world. Which I saw in a class setting while doing something as simple for me as completing the square.

WrapUp: Despite my hesitation about activities, if I WERE to create one around “making vertex form”, maybe I’d need to use values divisible by four to start. Then from there, see where it goes. Though maybe expanding first would drum that “four sides to square” thing out of students? (The x squared is always in the upper left corner for expanding, and they really, REALLY want to use those unit tiles in their tile charts, even though they’re unneeded for completing the square.)

I'm curious for more thoughts on any of this.

For further vertex reading: Variables May Vary
For further math viewing: My Math Webcomic
For more about my mental states: The Fringe of Depression

Wednesday, 7 October 2015

TANDQ 06: Around the World: England

In 2014-2015 I wrote an education column called "There Are No Dumb Questions" for the website "MuseHack". As that site has evolved, I have decided to republish those columns here (updating the index page as I go) every Wednesday. This sixth column originally appeared on Thursday, August 28, 2014.

What is the education system like in… England?

This marks the first of a semi-regular set of columns that will look at education systems in different parts of the world. My belief is that this is useful not merely for learning about other systems, but also as a reference for a writer whose fictional characters originate from another country. And while I’d like to say that this column coincides with the start of “back to school”, I know of some in the US who returned to the classroom almost a month ago. The school year really isn’t as universal as some might think.

Before we begin, a quick geography lesson. The United Kingdom (UK) is made up of four countries: England, Scotland, Wales and Northern Ireland. As such, when I talk about England, don’t confuse it with the rest of the UK (though Wales is similar). In particular, I was in Scotland earlier this month, where I learned that their focus is more on breadth than depth, and their post-secondary education is publicly funded (though you will still have to pay for it if you’re not a Scottish resident). With that in mind, let’s focus back on England.

Education is free (and compulsory) for children aged 5 through 17 (rising to 18 in 2015). Primary school lasts for 6 years (ages 5-11) and secondary school for 5 years (ages 11-16), with a possible extension of another two years (see below). School uniforms are typical, and decided on by individual schools. Full-time teachers work 195 days in a school year, teaching children for no more than 190 of them. The school year begins in early September and runs until late July, with breaks in between the six terms (in October, December, February, April and May).

Ability Grouping

The national Standard Assessment Tests (SATs) are given at the end of year 2 (age 7), year 6 (before secondary) and year 9 (this last set is no longer compulsory). These evaluations help to separate what are referred to as the “Key Stages” (KS) in education. With regard to classes, there is research to indicate that one in six UK children is “taught in ability streams by age 7”, and that those born in September are more likely to be in the top streams. The first SATs taken involve literacy and maths, and the SATs which end KS2 (in year 6) involve English, Maths, and often Science. Moving from there into KS3 (high school), students will get a different teacher for each subject, rather than one teacher for the day - much as in the North American system.

The optional SATs in year 9 occur in third year secondary (around age 14, before KS4). It is at this point that students choose their Options - which additional subjects, outside of the core three (English, Maths, Science), they will be looking at in more depth going forwards. After two years of that focus, at the end of KS4 (age 16), we reach the General Certificate of Secondary Education (GCSE) exams. (These replaced the O-levels, which existed back in the 1980s.) Writing the GCSE exams for the core subjects is required, while the other GCSE exams will typically be in the subject specializations chosen in year 9. It is possible to retake GCSEs, though it may cost, and school ratings are influenced by the student results. Notably, in light of the 2013 increase in compulsory education to age 17, there are renewed arguments being made against the GCSE.

Moving into KS5 (Ages 16-18), students can continue working in their school towards A-Levels, assuming the school has that capability, or study at a college of Further Education (FE). This is where the academic focus narrows further, such that in the second year of KS5, only three subjects are studied in depth. The A-levels (or GCE Advanced levels) will occur at age 18, and are mostly assessed through written examinations. It is A-levels that determine acceptance into a University. Post-secondary itself (ages 18-21) would involve looking at one subject, resulting in a final degree. It’s worth noting that independent private schools also exist in England, as do boarding schools. Some boarding schools are state sponsored in terms of the courses, but you still have to pay for the accommodation.

All that said, some political issues surrounding education may seem familiar to an American (or Canadian). The former Secretary of State for Education in England, Michael Gove (who held the position until last July), came under fire during his time in office for some of his reforms. In particular, he made revisions to the GCSEs, and his name has been back in the news now that the overall results are out. So, what do you think of the British Education System? Feel free to comment below!

With thanks to Nik Doran, for a conversation we had at “Twitter Math Camp”. Any errors here are my own; if you know of one, please advise, so that I can make a correction.

For further viewing:

1. Schools in Britain (video)

2. Project Britain’s Introduction to School Life

3. “Global Education” by Global Math Dept

Got an idea or a question for a future TANDQ column? Let me know in the comments, or through email!