Sunday 29 November 2015

PD: Math in Context

My other two blogs (math webcomic and time travel serial) are queued up early this week, so time for another “Good Professional Development but Had No Time To Blog” post. Today’s topic: COMA Social 2015, from October 1st, featuring Kyle Pearce (aka @MathletePearce). See his presentation, “Making Math Contextual, Visual, and Concrete!” at www.tapintoteenminds.com/coma and/or read below.

STUDENT SETS


While teachers may get excited about the correlation coefficient, students are often less enthusiastic. How can something like that be taught so that students won’t freak out or feel overwhelmed? Kyle says “I thought this is what good teaching looked like - very organized, structured notes”. Yet students were bored.

Maybe technology will change everything? SmartBoards? Well, students were behaviourally better (not talking while he wrote on the chalk board) but still not intellectually engaged. This is because it was more of a substitution, not an augmentation... the latter is needed for functional improvement.

Here’s how we learned math: 1. Take up HW; 2. Definitions, formulae, procedures (to set student up for success); 3. Examples; 4. Homework. Yet the examples were abstract, stripped of context - and when asked a week later, students often found the concept hadn’t stuck. This model creates two groups of students: “Good at Math” & “Not Good at Math”. Except “Good at Math” meant understanding terminology, and following procedures, aka “Good at Memorization”. Meanwhile, those “Not Good at Math” might not memorize, or might be capable of memorizing but were unwilling to play that game.

Even students “Good at Memorization” run into problems. There is a larger “Struggle with Unfamiliar Problems” group. (If a test question isn’t like the examples, it is seen as “unfair”.) When we take in new knowledge, we have to tie it to prior knowledge, but traditional methods (and textbooks in particular) will silo concepts into tiny blocks, removing the chance to see those connections.

A VISION


Showing the connections (how math concepts in one chapter exist elsewhere), and making them more contextual, visual, and concrete, leads to more confidence. Student success and understanding can follow from that. Avoid the natural immunity to change (yes the algorithm is beautiful, yet we need to know when it applies), and a desire to aim for entertainment (showing “pi” as “pie” or inventing a “rap”) - put the student engagement where we need it.

Kyle showed a 7 step process posted next to an “instant brewer” coffee machine. (“Actually I just press this button.”) Do we need an algorithm to use one of the easiest objects in the universe? Must we be told to plug it in? He notes that process wasn’t created for the USER of the machine, but for the OWNER of the machine. Now the user has no reason to ask the owner anything! But what if there’s an upgrade? Such algorithms don’t equal engagement, or understanding. 

While technology can functionally improve a classroom, it’s the task that’s going to redefine the class. (Kyle showed a few apps quickly; don’t find an app specific to fractions, otherwise we’ll be switching from app to app for problems. And “Evergreen” apps can be used but they’re not math specific.) The fear is that people will ignore the effective teaching aspect, which has to happen before the transformational technology.

Kyle posed a question about stacking paper up a wall - and showed this with a photo. (Clipart doesn’t do much for him.) Ideally get a number of questions and settle on one in particular. Here if we ask “How many packs of paper to the ceiling?” we can now get predictions. Use “High/Low” strategy (Dan Meyer 3 acts) - what number would be too low? Too high? With a Padlet or Google Doc, predictions can be put online - or jot on a whiteboard and take a picture. (Kyle got a class set of iPads funded.)

Don’t give the number yet. Request what other information is needed: How many papers in a stack? Is that an 8 foot or 10 foot ceiling? Figure out what is useful. Refine the predictions - and Kyle had us log into the playkh website with a PIN to play along like a game show.

Students can upload their own solutions. If Kyle wanted to look at exploring proportions (versus unit rates), he could decide which solution to show the class first. Also look for the best incorrect responses. Students can realize that not showing work leads to simple mistakes - and you don’t have to reveal whose solution it is. (You can also rotate on the fly!)

EXTENSIONS


Do teacher solutions look like student solutions? Sometimes it’s unnatural to do things the way we’re asking students to do it. The “game show” can then offer extension problems to check “what do students know, what do students not know” rather than asking. For instance, we may not have talked much about variables yet, but can mention it here to check prior knowledge.

Rather than looking up the solution, actually show it (Kyle enlisted the help of a custodian). The actual result doesn’t match the math - students don’t think about this. Weight is compressing the stacks? Pushed up the ceiling a bit? Tower is leaning? Floor is slanted? We have made this more visual. Maybe not concrete in terms of physically holding the paper, but might not be needed here. (Will we remember doing math or merely baking cookies?)

Another extension: What would stacks on a table look like? (Linear?) Can a student identify that there is a relationship here between two variables, not merely a division and done? Which variable impacts the other? Proportional reasoning made more explicitly linear (direct/partial variations). Kyle noted that this wouldn’t all be addressed in a single day.

We might want to bump into the algebra, rather than make it explicit. What goal are we aiming for when we include a table? (Solving an equation.. first differences.. the y-intercept!) What we have as given information is slope and a point. Pull y=mx+b from students rather than have them copy an algorithm (as in “a note”). If we strip away all this context and use simple numbers, the only way students can do it is by mimicking the teacher/process.

Move the “algorithm” to CONSOLIDATION after the activity. “What does the point (1,5) represent in the context of the stacking paper task?” (1 package gives us total height of 5 - including table) “What does slope/unit rate represent?” (height of a pack) “How tall would this table be with these numbers?” Scaffold toward tasks, moving towards the abstract, rather than the other way around. Giving procedures and then hoping for problem solving at the end won’t work - they’ll be scared by the end.
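The consolidation above can be sketched in a few lines of Python. The specific numbers here (a 4-unit pack height, hence a 1-unit table, working from the point (1,5)) are my own illustration, not from Kyle's task:

```python
# The stacking paper model: each package adds a fixed height (slope/unit
# rate, m), and the table underneath is the y-intercept (b) in y = mx + b.

def total_height(packages, pack_height, table_height):
    """Height of the stack measured from the floor: y = m*x + b."""
    return pack_height * packages + table_height

# If the point (1, 5) means "1 package reaches a total height of 5", and
# each pack is (hypothetically) 4 units tall, the table must be 1 unit.
pack_height = 4
table_height = 5 - pack_height * 1  # solve b from the point (1, 5)

print(total_height(1, pack_height, table_height))   # 5
print(total_height(10, pack_height, table_height))  # 41
```

The point of pulling the model from students first is that `pack_height` and `table_height` mean something physical before anyone writes "m" or "b".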

NEW SETS


Executing things this way can shrink the “Not Good at Math” group down, but MORE, it will shrink the “Struggle with Unfamiliar Problems” group down. Continue digging deeper, when you’re ready to tackle another concept. If there are two package stacks of different heights, can we figure out the table underneath? What is the learning goal for students to bump into here? (Solving given two points. Intuitively uses the slope formula, in context.)
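The two-stack extension can be sketched the same way; the measurements below (3 packages reaching 95 cm, 7 packages reaching 135 cm) are invented for illustration:

```python
def table_height_from_two_stacks(p1, h1, p2, h2):
    """Given two (packages, total height) measurements on the same table,
    recover the table height - the slope formula in context."""
    slope = (h2 - h1) / (p2 - p1)   # height each extra package adds
    return h1 - slope * p1          # back out the y-intercept (the table)

# Hypothetical: 3 packages reach 95 cm, 7 packages reach 135 cm.
# Each package adds (135-95)/(7-3) = 10 cm, so the table is 95 - 30 = 65 cm.
print(table_height_from_two_stacks(3, 95, 7, 135))  # 65.0
```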

Scaffold out to: What happened here? Does it work all the time? “Our students are capable of doing all these things we want them to - we need them to find creative ways of getting there.” Then simplify what they’re doing.

“We don’t do math because it’s harder, we do math because it’s easier.”

Thanks for reading!
-For last year’s COMA Social recap, see "Public Math Relations" with Marian Small.
-For how I set up my classroom lately, see "Grouping Tagline".

Wednesday 25 November 2015

TANDQ 13: Pass It On

In 2014-2015 I wrote an education column called "There Are No Dumb Questions" for the website "MuseHack". As that site has evolved, I have decided to republish those columns here (updating the index page as I go) every Wednesday. This thirteenth column originally appeared on Thursday, March 26, 2015.

Why must my password include a capital letter?


Because the needs of the many outweigh the needs of the few. This being the one year anniversary of my column, I’ve decided to take a look at some rather simple mathematics that is often taken for granted: that of passwords. Then again, as geeks, writers, et cetera, maybe you have a very good grasp of the subject, along with how long it takes your computer hacker character to crack a code. (Maybe you can even teach me a thing or two in the comments!) I’ll endeavour to be entertaining regardless.

Here’s the basics. There are 26 letters on an English keyboard (I don’t know enough to comment on, say, Japanese). Let’s say that your password has to be exactly 10 characters long (makes it easy!). With 26 choices for each entry, the result is 26^10 or 141,167,095,653,376 possible passwords. Now, what if we include capital letters as options? This doubles the total character set, so 52 choices for each entry, resulting in 52^10 or... well, over 144 quadrillion possible passwords. We’ve multiplied our previous answer by 2^10. But wait, what if we FORCED at least one capital letter instead (required, no option)? Well, this is going to reduce the total. It only makes sense. When you add a restriction to something, the total will decrease.
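These counts are easy to check directly; a quick sketch:

```python
# Counting 10-character passwords as described above.
lowercase_only = 26 ** 10   # all-lowercase passwords: 141,167,095,653,376
upper_or_lower = 52 ** 10   # capitals optional: over 144 quadrillion

# Requiring at least one capital removes exactly the all-lowercase set.
at_least_one_upper = upper_or_lower - lowercase_only

print(lowercase_only)                               # 141167095653376
print(upper_or_lower == lowercase_only * 2 ** 10)   # True: doubling each slot
print(at_least_one_upper < upper_or_lower)          # True: restrictions shrink the pool
```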

In this case, with one (or more) of the characters having ONLY the 26 uppercase options, we can effectively remove every password that is all lowercase - in other words, the 26^10 options we had to start. They’re no longer valid. Granted, when you remove 141 trillion from over 144 quadrillion, you still have 144 quadrillion… but the restriction DID make your password a bit easier to guess. What if your password can be any number of characters? That’s harder. What if it must be at least 8 characters? Somewhat easier again - don’t try guessing a shorter password. (What if it’s a maximum number of characters instead? Then it could be that you’re watching Sherlock, Season 2.) The natural question at this point is: Why force conditions that ultimately decrease total options? It’s a pretty good question.

Predictable Entropy


Before we get into that, a word about password entropy. (I am now contractually obligated to point out this XKCD comic. There’s an in-depth analysis of the mathematics behind it in my ‘further viewing’ links below.) The short version: Entropy is defined as the total number of possible resultant states. In terms of a string of characters, this gives: (total_characters)^(length), the way we had 26^10, above. Computers work in binary, so take log base 2, giving: (length)*log_2 (total_characters) as the binary size of the message, aka bits of entropy. You’ll notice that length is the big multiplier. Yes, log base 2 of 26 is less than log base 2 of 52, but adding two more (lowercase) characters is almost equivalent. (12*log_2(26) and 10*log_2(52) are both about 57.)
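The near-equivalence claimed at the end there can be verified straight from the formula; a small sketch:

```python
import math

def entropy_bits(length, alphabet_size):
    """Bits of entropy for a random string: length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

# Two extra lowercase characters are worth about as much as doubling
# the alphabet - both land near 57 bits.
print(round(entropy_bits(12, 26), 1))  # 56.4
print(round(entropy_bits(10, 52), 1))  # 57.0
```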

So, how many bits do we need for a good password? Well, this website link says 72 bits of entropy/security is strong for short term, but 80 is better for long term use (supported elsewhere, as it means 2^80 passwords would need to be tried). How do we get there? With about 94 characters on the keyboard, we’ll need 80 = (length)*log_2 (94), so a length of 13 characters. (PIN numbers, I’m looking at you.) Here’s the interesting thing. This entropy can be similarly achieved by selecting a sequence of random words, known to many as a “passphrase”. Instead of a keyboard, let’s assume a dictionary/vocabulary of 1,000 words. Solving 80 = length*log_2 (1000) means a length of about 8 words (repetition allowed). If this doesn’t seem to buy us much, try plugging in the ACTUAL size of your vocabulary to the equation - the number of necessary words will only decrease. (Unless you know less than 1,000 words.)
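Turning the formula around gives the minimum length for a target number of bits. A sketch (note that, rounding up strictly, a 1,000-word vocabulary needs 9 words to clear 80 bits - 8 words gives about 79.7):

```python
import math

def min_length_for_bits(target_bits, alphabet_size):
    """Smallest length with length * log2(alphabet_size) >= target_bits."""
    return math.ceil(target_bits / math.log2(alphabet_size))

print(min_length_for_bits(80, 94))    # 13 characters from a ~94-key keyboard
print(min_length_for_bits(80, 1000))  # 9 words from a 1,000-word vocabulary
```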

The caveat to using a “passphrase” is that it does need to be RANDOM. The second word shouldn’t in any way be determined by the first. Humans are not good with random - we will pick our birthdate, our mother’s maiden name, and something off The List of 2014’s Worst Passwords... all in lowercase. Unless, that is, we are forced away from that inclination using (surprise!) some sort of restriction. So even though there are a few of us who can follow the logic of “length over character use”, for the good of the many who would use their password length to expand on 123456, we must succumb to including at least one upper case character, et cetera, et cetera. It’s not all bad - throwing in a symbol does increase the complexity of a passphrase too.

Of course, all of this assumes your hacker is running some brute force algorithm, rather than being a bit more ambitious, and attempting to steal an entire password file off your network. There’s not much an individual user can do there (aside from constantly change their password, and I pretend that’s why my work account forces me to do this) but logically the system itself has security measures in place. For instance, cryptographic hash functions (a nice little application of high school mathematics). Good enough - until we hit something like 2014’s Heartbleed bug, also an XKCD comic. Or until the character in your novel decides to use telekinesis to figure out people’s passwords. But at that point, you might as well call in Sherlock to get his opinion.

For further viewing:

1. Strength/Entropy: Characters vs. Words

2. The math behind passwords

3. TeachNomination: Password Math (Video)

Got an idea or a question for a future TANDQ column? Let me know in the comments, or through email!

Wednesday 18 November 2015

TANDQ 12: Text Me Never

In 2014-2015 I wrote an education column called "There Are No Dumb Questions" for the website "MuseHack". As that site has evolved, I have decided to republish those columns here (updating the index page as I go) every Wednesday. This twelfth column originally appeared on Thursday, February 26, 2015.

When will paper textbooks go away?


Never. Yes, I say this despite the president of McGraw-Hill Higher Education stating “Textbooks are dead” last October (2014). In my defence, I can point to South Korea, which (back in 2011) declared it would go fully digital on texts by 2015 - only to back off, in part due to concerns over research about how screen time might affect brain development. And it HAS been shown (in an Israeli study) that those reading on a screen (versus from print) will perform worse in a scenario of timed comprehension - even though they thought they performed better. But wait. Notice I didn’t say paper textbooks would remain dominant. The textbook industry does need to adapt. Let’s have a look at that.

Since 1978, the price of college textbooks has risen more than 800 percent (and DO see that link for the comparison graph). In other words, a text that cost $25 over thirty years ago would now cost more than $225 (new). How can the industry get away with this? Partly because, owing to consolidation, 5 textbook companies now own more than 80 percent of the publishing market. So there isn’t a lot of competition. It also helps that this is a market where the consumers (the students) don’t get a say in the product they have to purchase. (Or do they? More on this later.) But here’s the thing: NO ONE has money for textbooks right now. Even in public education, school budgets are being slashed to lower your taxes, meaning older textbooks cannot be replaced (see also: street potholes). It’s probably even worse than you think - for instance, schools can supplement income with cafeteria sales, but now that all choices are (mandated to be) healthy, students are crossing the road to eat at McDonalds instead. It’s 2015, and I teach a Grade 12 course out of a textbook published in 2003 because THAT is REALITY.

So the first fix involves those unsustainable prices. The second item is more a need to adjust for the slow pace of the education industry. In a prior column, “Getting Graphic”, I noted that “huge technology upgrades are only possible every six or seven years, if that”. It’s largely due to money. But a slow pace isn’t necessarily a bad thing; these are your children we’re talking about. A new drug needs to undergo rigorous testing before being put on the market, otherwise someone gets sued. Yet (it seems to me) someone can come up with a new education idea, write a book about it, and try to implement it immediately. If it doesn’t work right away? Okay, sorry about your kid’s education, we’ll try someone else’s idea next year. Seriously? (Incidentally, that is not an attack on things like common core, which involved years of research.) So yes, education is perhaps a couple beats behind the mainstream - that’s not something to attack, merely something to remember. There is still a need for paper texts in education even after a majority of society has “gone digital”... which, granted, is coming up fast, if it’s not already here.

Future of Textbooks


So where are we headed? Let’s take a moment to look at where we’ve been. From a look in the book room at my school, a Grade 9 math textbook from 1986 had 11 chapters, and about 450 pages. The format was a page of explanation, a page of exercises, repeat. It contained an occasional black and white (or red-tinted) image. A Grade 9 math textbook from 1999 had 11 chapters, and about 660 pages. The format was 3-4 pages of explanation and examples, then 3-4 pages of exercises. I would say there is only a 25% chance that you would open the book and NOT immediately see a full colour graphic. Our Grade 9 math textbook from 2008 has 8 chapters, and about 620 pages. The format is 4-6 pages of explanation and examples, then 3-4 pages of exercises. There is a huge margin around the perimeter of each page to drop in pictures, or to highlight key terms (otherwise it’s left blank). What do we conclude? That the trend is towards increased examples and visuals. I do question how seeing a picture of someone skiing is more likely to prompt answering a question about “slope”, but one hopes there’s some science behind it.

Looking forwards, the nice thing about an online/digital version of such a text is that the graphics can be made dynamic. They can allow for self-exploration of concepts, rather than simply accepting them on faith (or believing in them because of the smiling photo in the margin). But here we run into a problem - any company can potentially put something like this together, given the right materials. How do you stand out in a crowd? Well, most of the industry seems to have decided that metrics are the way to go, and wow, does this feel like a bad decision! “We must time how long the student spent reading page 3! How often they attempted problem 1.6!” and so on. No. First, we really don’t. While a generalized study might be good (for instance, to see if screen reading really is inferior to print), such data is meaningless without an individual baseline, or any idea for how to apply it. And I don't see us there yet. Second, educators are swamped with extra work as it is; they don’t have time to pore over the metrics of 90 individual students. Finally, putting more effort here feels like it’s taking away from the more dynamic possibilities mentioned above, turning exploration into more of a “hide and reveal” exercise.

Recently, there’s one more issue at play. Post-secondary students are taking more of a stance with regard to the notion of “having” to buy a textbook. A US study conducted in Fall 2013 reported that 65% of university students decided against buying a textbook - even though 94% were somewhat or significantly concerned that this decision might affect their performance. The same study showed that the high cost of textbooks could even affect student course selection. (Aside: John Oliver has a piece, not about textbooks, but about student debt, which looks at for profit schools. See “no one has money”, above.) But there are alternatives to straight defiance, those being: buying used books, the use of an open textbook (one freely available online), piracy (it does exist) - and textbook rental. An opinion article in Forbes claims that low cost rentals are the real industry disruptor, even ahead of digital. There may be something to that.

Because here’s the last piece of the puzzle: Even now, not everyone can afford the technology to view an online textbook. They have to go with a (rented?) print version - or not at all. There’s also a case to be made for the visually impaired student, who uses a text in braille (incidentally 8 times the size of a regular math book). Or other students with exceptionalities, perhaps who experience trouble with focus when it comes to online reading. THIS is why I do not see the paper textbook vanishing. Ever. If it does, I foresee a backlash once more research has been completed into reading from electronic screens. Yet, even so, the textbook industry needs to adjust. And I fear it’s not adapting as well as others believe. But you don’t have to take my word for it.

For further viewing:

1. 2 Perspectives on the Future of College Textbooks

2. Forget the Future: Here’s the Textbook I Want Now

3. Google Interviews Students: The Future of College Textbooks (video)


Got an idea or a question for a future TANDQ column? Let me know in the comments, or through email!

Wednesday 11 November 2015

TANDQ 11: Rate This Post

In 2014-2015 I wrote an education column called "There Are No Dumb Questions" for the website "MuseHack". As that site has evolved, I have decided to republish those columns here (updating the index page as I go) every Wednesday. This eleventh column originally appeared on Thursday, January 29, 2015.

Are rating systems skewed?


If responses are voluntary, yes. If they’re not - the ratings are probably still skewed. Despite this fact, people will often check a product’s “rating” before making a purchase. Online reviewers (for movies, video games, etc) will also tend to use a “star” system variant in their regular column/show. Perhaps you’ve even been asked to code up a rating system for someone else to use? Regrettably, while there is more than one type of rating scale out there, the problem of skew - which lends itself to an overestimation of reality - is pervasive. Let’s explore that further.

The first problem is one of averaging. If every review is given equal weight, we can end up with a situation like in this xkcd comic, where the most important review is lost in the other noise. (“You had one job!” comes to mind - though of course that phrase itself isn’t accurate.) In the same vein, an item that has 3 positive reviews out of 4 would get the same mean rating as an item with 75 positive reviews out of 100. But while the percentage is the same, the second item is of lower risk to the consumer, because it’s had 96 more people try it out. There’s also the question of when these reviews were posted - are all the positive reviews recent, perhaps after an update? All of this is useful information, which becomes lost (or is perhaps presented but ignored) once a final average score is made available. That’s not to say that the problem has never been addressed - the reddit system, for instance, tackled the problem mathematically. Randall Munroe (of xkcd, see above) blogged about this back in 2009. But in general, the issue of weighted averaging is not something a typical consumer considers.
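The reddit fix mentioned above (the one Randall Munroe blogged about) is the lower bound of the Wilson score confidence interval, which ranks 75-of-100 above 3-of-4 despite the equal percentages; a sketch:

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval (z=1.96 ~ 95% confidence).
    This is the ranking reddit adopted for its 'best' comment sort."""
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# Same 75% positive rate, very different confidence:
print(round(wilson_lower_bound(3, 4), 3))     # 0.301
print(round(wilson_lower_bound(75, 100), 3))  # 0.657
```

The extra 96 reviews move the lower bound from "could plausibly be a 30% product" up to "almost certainly at least 66% positive", which is exactly the risk difference a plain average hides.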

Even after all of that, there is a second problem. Who is writing these reviews? Everyone who made the purchase, or who saw the movie? Of course not. Generally, a high emotional response (either good or bad) is needed to motivate us to provide the feedback. This means that a majority of responses will either be at the highest level (5), or the lowest (1). Does anyone reading this remember when YouTube had a five star rating system? It has since become “I like this” (thumbs up) or “I dislike this” (thumbs down), because five years ago, YouTube determined that on their old system, “when it comes to ratings it’s pretty much all or nothing”. Now, given these polar opposite opinions, one might expect a typical “five star” graph to form a “U” shape, with a roughly equal number of high and low rankings, tapering down to nothing in the middle. Interestingly, that’s not the graph we get.


J Walking


The graph from the YouTube blog link above is typical, known to some as the “J-shaped distribution” or “J-curve” (not to be confused with the one in economics). It’s so named because there are an overwhelming number of high “five star” reviews on the right, tapering back to almost nothing in the middle - with a small hook on the left, as the “one star” reviews slightly nudge the curve back up. Calculating the mean of a system like this, where both the mode and the median are at the maximum, will place the “average” somewhere in the 4’s. In fact, this column came about because of a tweet I saw questioning why an “average” review (3 out of 5) would be considered by some people to be “bad”. Setting aside the fact that some dislike being called “average”... if the J-curve predicts a mean higher than four, the three IS below that. Isn’t that “bad”?
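A toy J-curve makes the arithmetic concrete (the vote counts below are invented for illustration, not taken from YouTube's data):

```python
from statistics import mean, median, mode

# A J-shaped pile of 100 ratings: a small hook of 1-stars on the left,
# almost nothing in the middle, an overwhelming number of 5-stars.
votes = [1] * 10 + [2] * 4 + [3] * 6 + [4] * 20 + [5] * 60

print(mode(votes), median(votes))  # 5 5.0 - mode and median at the maximum
print(round(mean(votes), 2))       # 4.16 - the "average" lands in the 4's
```

So a "3" really does sit well below this system's average, even though it is the middle of the scale.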


The trouble with comparisons is how useless they are, until you acknowledge what it is you’re comparing yourself against. If you’re comparing a “3” against the rating scale, it’s average - even above average, if the scale is running 0-5, not 1-5! On the other hand, if you’re comparing a “3” against existing ratings for similar products, or on prior data for the same product, the “3” might seem less good... its origins may even be called into question. Which actually brings up a third problem, namely that a person may intend to rate something at a “3”... but upon logging in and seeing all the other people who have rated it higher, succumb to “peer pressure”! Giving it a “4” in the heat of the moment! And we haven’t even touched on the problem of illegitimate accounts, created solely to inflate (or lower) the average score of a product. (Of course, what you should probably be doing is comparing the “3” against other scores from that same reviewer. Ideally their scores on your own previous outputs.)

Now, is there a way we can fix this rating system problem? One solution might be to force every user/viewer to provide a review. If all the people with a “meh” opinion were forced to weigh in, it would fill in the middle of the J-curve. But should their input be given equal weight? After all, being forced to do something you don’t want to do is liable to either lower your satisfaction, or cause you to make stuff up. (Though implementation is not impossible - for instance, AnimeMusicVideos.org requires ratings for a certain number of downloaded videos in order to download more.) Another solution might be to adjust the scale, as YouTube did (or going the other way, IMDb uses 10 stars), but this merely tends to expand or compress the J-curve, rather than actually solve the underlying issue. In fact, I have not come across any foolproof rating system in my research - even the great critic Roger Ebert once said “I curse the Satanic force that dreamed up the four-star scale” in his post: You give out too many stars. (I recommend reading that, as it also points out the problem of having a middle position.)

Meaning it comes down to this: I don’t have a perfect solution. Much like Steven Savage and the issue of franchises from earlier this week, I’m just putting it out there. In particular, should we really trust the ratings we find online? Is achieving an unbiased rating system impossible (short of reading minds) - but the effect something we can ultimately compensate for, the more we understand it? Then again, despite this being the age of social media where everyone's weighing in, reviews might be better left in the hands of the professionals - those people who are paid to assign such ratings for a living. I dunno, do you have an opinion?

For further viewing:

1. A Statistical Analysis of 1.2 Million Amazon Reviews

2. The Problem With Online Ratings

3. Top Food Critics Tell All: The Star Rating System (3 min video)


Got an idea or a question for a future TANDQ column? Let me know in the comments, or through email!

Wednesday 4 November 2015

TANDQ 10: Around the World: France

In 2014-2015 I wrote an education column called "There Are No Dumb Questions" for the website "MuseHack". As that site has evolved, I have decided to republish those columns here (updating the index page as I go) every Wednesday. This tenth column originally appeared on Tuesday, December 30, 2014.

What is the education system like in... France?


This marks the second of a semi-regular set of columns looking at education systems in different parts of the world. The first looked at England, hence France seems a natural second step. My belief is that this is useful, not merely to learn, but because it can help a writer whose fictional characters originate from another country (or world?). Usual geographic caveats apply here, in that when I say “French” I’m discussing France and not, for instance, the province of Quebec in Canada. Which would be somewhat different.

In France, education is free (and compulsory) for children aged 5 through 16. This starts with an “école maternelle” (possibly as early as age 3). Primary school (école primaire) then lasts for 5 years (ages 6-10), middle school (collège, also known as secondary school) lasts for 4 years (ages 11-14), and high school (lycée) lasts for up to 3 years (from age 15 to past the compulsory age). Of note, the French grade numbering system goes backwards compared to North America - the first year of collège is the largest number, year 6 (6ème). It is followed by year 5. The first year of lycée is year 2, then year 1 (première), and then the final year: terminale. The individual years are also grouped into various cycles.

There are 158 days in a typical school year, separated into three reporting terms. Instruction occurs on Mon, Tues, Thurs, Fri, and another half day (traditionally Sat, but in most regions this is now on Wed) to make 26 hours of instruction in a week. The school year begins in early September and runs until early July, during which there are four breaks of two weeks each. These holidays begin in: October, December, February and April (where actual dates for the latter two vary based on region - Zones A, B & C). Unlike England, there are no school uniforms - the closest thing they had was already being phased out in 1968.

Testing, Testing


There is no formal testing done at the national level until the end of the 3ème, before lycée. It is at this point that a national exam allows one to obtain the “brevet des collèges” – though one can still attend a lycée without it, as long as their grades are high enough. This exam is one tool used to help determine a path (and lycée) for the last three years of schooling – regular or vocational. Notably, two foreign languages are already needed by this time (selected at the 6ème, and the 4ème).

For the first year of lycée (year 2), courses involve both core subjects and electives, leading to a choice in year 1 of “baccalauréat general” (for one of: Literature/Language, Science/Math, Social Science), or “baccalauréat technologique”. Exams are written at the end of the première, for not only French language and literature, but also for a “minor” area of study, chosen at the start of that year. Then, before graduating, there is another set of exams at terminale. These cover philosophy, and other subjects studied. The final score is a weighted average across all areas, meaning it is impossible to fail a single course - you either pass, achieving at least 10/20, or you must retake the whole year. If you are close (at least 8/20) you may be given the opportunity of an oral exam to make up the difference; an oral is also compulsory for the Literature/Language stream.

Beyond lycée, there is a public university system, but the top schools - “les grandes écoles”, which specialize in engineering, business, etc. - require entrance exams. Napoleon brought this system to Italy, which gives a sense of its history. The intention here is to put emphasis on one’s merit and ability, and not on one’s social or financial status. The exams are given in both written and oral form, where a certain mark must be obtained on the former in order to be considered for the latter. There is no mark threshold here for acceptance - there are limited spaces, and as such you are competing against everyone else who is taking the exam that year. Hence students will typically do an additional two to three years of study (in either a public or private institution) before writing these higher education exams, which can only be repeated once. Once a student is accepted into a post-secondary program, a Bachelor’s Degree takes three years (at either a University or a Grande Ecole). To become a teacher in France, a European candidate needs a three year diploma to be eligible to sit for a competitive examination. Once on the job, they are evaluated by national inspectors.

Outside of the public system, there are independent private schools, many of them Catholic; there are also five Catholic universities. Religious instruction can be included at these schools, though as long as they also follow the same (national) curriculum as state schools, teachers are still paid by the state. (Of note, they are not paid at the same rate, and their qualifying test, while written to the same standard, is different.) This means that private school fees can be quite low, and compared to the US, a greater percentage of French students attend private schools (though this number is less than 20%). With respect to current events, there aren’t any current reforms in the French system (as compared to the US or England). Time magazine did criticize them in 2010 (in particular for their philosophy requirement), and some feel change is needed to the “grande école” mindset. Do you have any thoughts about the Education System in France? Feel free to comment below!

With thanks to José Piquard, for fact checking. Any remaining errors are my own; please advise, so that I can correct them.

For further viewing:

1. A Typical Day of a French Student (video by students)

2. Education in France

3. France Guide: The French school system


Got an idea or a question for a future TANDQ column? Let me know in the comments, or through email!