Eugh, another goddamn overthought JK Rowling post…

It occurs to me that I spend a lot, and possibly an unreasonable amount, of time mocking Harry Potter across various platforms.

But, at the same time, I make comparatively little reference to JK Rowling’s public descent into single-issue TERFdom — including endorsing literal fascists and, since this was first drafted, a tiny bit of socially acceptable Holocaust denial.

Remember how she left her endorsement of Matt Walsh up, on the grounds that she was only praising his ‘What is a Woman?’ thing, but deleted every nice thing she’d said about Stephen King? No one thinks that’s… A Bit Weird?

There are a few reasons this enduring clusterfuck is lower on my priority list than “good god, wizard-boy book bad”.

First, I think everyone I know and respect is already aware of Rowling’s descent into madness. They do not need reminded of it, nor their consciousness raised to it. Those with much more skin in the game probably don’t want a constant reminder that a prominent public figure would cheer on their genocide, backed by a coordinated media campaign to manufacture consent for it.

I also don’t think there’s much chance of convincing extant Rowling fans that what she does is bad. It’s been a few years now, so the thousands of people clogging up King’s Cross on September 1st have already indicated that this is not an issue for them. They’ve either decided that this obsessive hatred of trans and queer people is not a deal breaker, or worse, it’s something they would actively endorse.

You also have to bear in mind the usual retort: “what has she said that’s transphobic?”

Note: I grabbed this screenshot after her long and weirdly specific denial that the Institut für Sexualwissenschaft was destroyed by the Nazis. It’s also from under an article that did not, at any point, mention trans anything.

It’s a rookie error to assume this is an honest question and a throw-down for you to provide evidence: it’s a statement that they don’t believe it’s a valid accusation. They will already be aware of what you’re going to say, and will have the rationale for why it “doesn’t count” pre-loaded and ready to go. Their model of reality includes Rowling’s opinions on trans people, and it precludes the idea that those are, in any sense, bad. There’s little arguing with this. You’re more likely to get traction with creationists and their highly movable definition of “transitional form”.

But another reason is that I don’t see her current obsession and main creative output as that surprising, nor is it remarkable.

Let’s compare with the usual author who crops up in these situations, the late Terry Pratchett, by way of a counterfactual thought experiment.

In 2017, long-time assistant and partner Rob Wilkins hired a steamroller to destroy Sir Terry’s hard drive, to permanently eradicate his unfinished manuscripts in accordance with his will. It’s hard to watch all hope of ever reading Raising Taxes get crushed, but it was The Right Thing To Do.

But, let’s think of an alternate reality for a moment. A proven revelation rocks the world: Pratchett had his hard drive destroyed to cover up the existence of countless essays on how Jews secretly rule the world, how LGBT+ people should be sent to concentration camps, and his opinions on the inferiority of lesser races. Perhaps even worse was on there.

In such an eventuality, I would — hopefully obviously — be beyond devastated. It would be a struggle, but the Discworld collection would be going on a bonfire for essential closure. But that hurt would be incomprehensible not just because someone I admire turned out to be a Total Wrongun, but because of the complete betrayal.

Pratchett’s novels are full of “militant decency”, respect for all life, and doing the right thing despite the difficulties. The text and subtext alike skewer homophobia, class oppression and racism (in deeper, more complex ways than “the green people hate the blue people…”) with unmatched fervour. There’s a reason his quotes appear so frequently in social justice circles.

The idea that the man himself thought otherwise…?

Such a thing would be so monumentally in opposition to every word he preached, in fiction and non-fiction alike. That is why it’s unthinkable, and — with as much philosophical certainty as I can muster — never going to happen.

By contrast, Harry Potter is flooded with so many red flags that the headlines should never have been “beloved children’s author turns out to hate trans people”, but rather “we’re shocked it took until 2019 for her to come out and make it obvious”.

This is a series with shocking levels of casual racism, a visceral hatred of fat people, apologia for slavery and, of course, arbitrary morals where goodness is determined by the colour of your scarf and not the content of your actions. And none of this develops or changes through the series. At the start, beloved old Hagrid magically disfigures a child for the crime of being mildly annoying (and fat) and faces no meaningful consequences, while the last thought to run through the hero’s mind before the close of the text is whether his magically indentured slave, from a lower race, should bring him a sandwich.

If anyone remembers the reported story of Rowling being bullied off a Harry Potter forum, were you surprised? Her books taught the fans how to do it: that it’s okay to attack and disfigure the people you don’t like, and even send them off to be assaulted by centaurs, provided you think you’re the main character. Or just laugh at them if they happen to display any sincerity towards disrupting the status quo. Her protagonists bully their way through life, and get away with it because they’re on the Right Team. And they’re not fat, of course; otherwise, how could you tell they were the Good Guys from the Right Team?

Rowling’s idea of an anti-authoritarian novel series has the main character become a cop, and the oppressive structures present at the start are still mostly maintained at the end — except, now, it’s under the new management of the Good Guys from the Right Team, so that’s fine and dandy.

All is right.

(at this point, just to throw it out there, Leigh Bardugo’s idea of an anti-authoritarian novel has six Chaos Bisexuals fuck up a bunch of rich people by any means necessary. Even King of Scars, which you’d expect to be full of Disney-like “Ah, but good kings are fine…” has our dashing lead take active steps to burn his own monarchy to the ground. 10 out of 10. No notes.)

Harry Potter, the character, would be the villain in a more competently written series. Sam Vimes would arrest him and figure out what to charge him with later, Kaz Brekker would rip his eye out and feel happy that there’s one less bastard in the world ruining the fun for everyone. Amos Burton would have a “quiet word” over a cup of coffee and within threateningly close proximity to an airlock.

Others have said this sort of thing before. They’ve said it in more words, with more citations going down to the sentence level. They’ve said it for years, too. All I’m adding is that when Rowling came out as Queen of The Dickheads, no one should have been even remotely surprised. It’s all there on the page, out in the open for the best part of three decades.

That this all went uncriticised for so long, and that it remains popular despite the critiques now getting more attention, is (to me) far more interesting than her taking the next, utterly unsurprising and (to her) completely logical step and jumping on the transphobic bandwagon. It’s just another unremarkable trope that absolutely fits the person behind this literary dreck.

Have I mentioned I really dislike Harry Potter as a series and have done for 20+ years? Just trying to make it clear…

Can Opening Narration Die Already…

Written with the caveat that I obviously know nothing about storytelling, and am a complete hack, and if I ever drunkenly forward you anything I’ve written, please for the love of god don’t read it.

I’ve just started watching Rebel Moon. And by “just started”, I mean I’m something like 90 seconds into it. It has reminded me that I absolutely hate opening narration in movies. Hate it. Because I think it’s a trope that has proven, time and time again, that movie producers think they’re super-smart galaxy-brained artistes, and that their audience is stupid.

Sure, people tend to have the metacognitive capacity of a yeast infection, but they’re more than capable of figuring out that the tall guy, all in black, wearing a mask, and accompanied by ominous music, who has just commanded a bunch of masked troopers to murder a bunch of scared, unmasked, guys on the smaller spaceship that they’ve mercilessly chased down, is going to be the villain. That requires zero dialogue or on-screen text to explain. It doesn’t need to be that overt, but most audiences would rather be confused (but intrigued) by something for a few minutes than bored for a few seconds while someone reads out the plot or it’s flashed up on screen for us — an awkward moment that is always either too short to take it in, or so long we have to go over it twice.

Opening narration, therefore, can be avoided with some easy — even trivial — filmmaking approaches.

Unless you’ve miraculously avoided it in the last several decades, you probably recognised the above reference as Star Wars — it’s almost a cliché how much it’s the go-to example for overtly establishing who is Good and who is Bad with little dialogue. This is a little ironic because, often, the cliché defence of opening narration is Star Wars itself. After all, Star Wars has its famous opening crawl: it flies by, in big yellow text, and explains the plot to you. You can’t miss it. The film I said showcases why you don’t need opening on-screen narration (that I hate so much) happens to be one of the most memorable and famous examples!

But, if you ever pay attention to the text of those crawls, they’re surprisingly uninformative. As prose, they’re often very, very hokey. Possibly even intentionally so. I would doubt that most people can remember much of the text from any of the movies — that might be something for hardcore fans of the “we need to add ‘braces’ to Wookieepedia because an extra was seen wearing them for five frames” variety, but definitely not normal people. What you do remember, though, is the massive “STAR WARS” logo fading away, the horns of John Williams’ score, the crawling text fading into the distance… this is an audio-visual experience, it is setting a tone, a scene. You could stick Lorem Ipsum up there and lose only a few percent of the experience. And, post-1977, this has become literally iconic, so there’s that.

If you don’t get my point, simply imagine this as white text on black. Fading in paragraph by paragraph. Without the music. Or go watch 1984’s Dune with its “oh, I forgot to tell you” line.

Because the point of any opening narration — your whole opening scene, in fact — is not to explain the plot, but to explain to the audience why they should care about the plot. It’s almost screenwriting (and general writing) 101 to say that, but surprisingly few films manage to pull it off. Honestly, the number of movies with 9-figure budgets that fail this simple requirement makes you think Hollywood must be covering up a lot of money laundering…

In Brandon Sanderson’s lectures on writing, he states quite clearly — and early on in his series — that you should open a story with a “promise”. He essentially means that you set the tone and expectations as quickly as possible, and you deliver on them. He was talking about novels, but it applies equally well to film, where you’re even more pushed for time but also have the audio-visual medium to lean on. Obviously, Star Wars does this very well: everything about that opening crawl promises you a bombastic adventure, deliberately modelled on old Flash Gordon serials, and it delivers. The text on its own doesn’t matter. No one cares. But the entire experience does make you sit up and take note about what is coming next.

Anyway, the second thing that sparked this off was that I also recently watched Damsel, the Millie Bobby Brown vehicle on Netflix and… it’s surprisingly good. Remembering that my taste in TV and movies is basically toilet water, it turns out to be trash fantasy that is (unexpectedly) competently executed. It opens with a black screen and the lead character saying something to the effect of:

There are many stories where a princess is rescued. This is not one of them.

Now, it doesn’t matter what the exact words are (go look up a script if you care), the point is this: it’s there to set the tone of the movie. Normally, with this sort of garbage, you’d expect it to go on at length while she explains who she is and the camera pans over some wordless activity and, yes, you’ll be fully caught up but… you’ll also be bored, have zero salience for any of it, and will probably just forget the details later anyway. A key thing in filmmaking is cluing your audience into what they need to stash in their long-term memory to pull out later (that’s what a set-up is) and, generally, voice-over narration at the start just sucks at that.

Whoo hoo! ✊ Trash ✊ trash ✊ trash!

But Damsel doesn’t do any of that. It doesn’t go on longer than the one line. It doesn’t have her say “My name is Elodie and I live in this castle and this is the name of the world I live in”. It restrains itself to stating the theme, and then stops because the rest doesn’t matter. It tells us that this is going to be some fantasy movie, and probably involving a princess, and likely to be a bit dark and moody in places (given that it’s spoken over a black screen) and the rest of the film will probably just involve Millie Bobby Brown kicking ass and getting hurt a lot. Sorry for the spoilers, that’s basically the plot and premise: she racks up injuries faster than Joey King in The Princess.

Anyway…

I got 90 seconds into Rebel Moon, got annoyed by the narration trying to explain the plot, or at least the background of the plot, without telling me why I should care about it… and it sparked a thought I wanted to write down here. Now, I’ve reached the part where I’d talk about how bad the Rebel Moon opening narration is, but… in all honesty I’ve actually forgotten what it was. I think the words “assassin’s blade” were used. I forget why. I was mostly focused on whether it was Anthony Hopkins or someone trying to be Anthony Hopkins; the actual content was quite unenlightening.

That said, if the reviews and the vibes I get online are correct, I feel that — in Brandon Sanderson’s terms — it has certainly promised something, and it will almost certainly live up to it.

I look forward to the rest of it effortlessly ducking below my already-low expectations. As God intended.

How Demanding ‘Refunds’ Highlights the Problems With University Tuition Fees

This is not an essay, this is a raw nerve…

I was recently directed to a couple of protests by students and graduates demanding ‘refunds’ for their degrees… this has been going on a while, and rears its head every couple of months, either around admissions time or graduation.

Now, while I’d love to boil this down to a 60 second InstaTok video, this has a lot to unpack. So, long blog post it is! Very long, in fact. There are a lot of ideas that need to converge together, and chatting to a camera for a minute in portrait mode just won’t cut it. Even on YouTube, this would require at least three costume changes.

It was the COVIDest of times…

First, let’s address the low-hanging fruit pretty brutally. The main thrust of these demands for refunds is about online learning that happened a few years ago. Gee, I wonder why that happened…

228,984 reasons and counting, in fact

Just to remind everyone: we were not doing all that for funsies.

I know people who died from COVID; people who lost family to it; people who were chronically disabled by it. Sorry you had some online lectures for a year. My friend’s 10- and 13-year-old kids would love to hear about it; it’ll distract them from how their mother’s lungs strangled her to death while her body graphically and horrifyingly inflamed and deformed over the course of a month.

Yes, it’s a low blow, but that elephant in the room needs addressed, and now that’s over, we can ignore it going forward. Mostly.

Quality doesn’t grow on fees…

Let’s start at the top of the grievances about money.

In the UK, higher education comes in at an absurd £9,250 per year. Loans to cover living expenses are available in a similar order of magnitude. The end result is that you can be lumbered with a debt on par with having a full mortgage by age 21.

This is, of course, Not Good.

I am emphatically not pro tuition fee. But let’s also be very clear about how this system works: in the vast majority of cases, students have not paid anything. £9k of public money moves to universities (which is still some £2,000 to £5,000 short of funding a STEM degree, incidentally) and the student then has to pay it back.

At least, they have to pay some of it back.

The debt isn’t collected at a rate familiar to graduates in the United States, where it literally can be like paying off a second mortgage. There’s an income threshold you must meet to be liable for repayment, and then you’re effectively taxed on income above that threshold. And you eventually stop paying it — either you clear it off, or it times out (after 30 years) and the debt is wiped. This system means that the government always makes a direct loss on fees overall. It is a graduate tax by another name.
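For anyone who wants the machinery spelled out, here’s a minimal sketch of that calculation, using illustrative Plan-2-style numbers. The threshold and rate below are assumptions (the real figures change with government whim), but the shape of the sum doesn’t:

```python
# A minimal sketch of the repayment calculation.
# The threshold and rate are illustrative Plan-2-style assumptions;
# the real figures change year to year. The 30-year write-off is separate:
# whatever remains unpaid at that point is simply wiped.
THRESHOLD = 27_295   # assumed annual income threshold
RATE = 0.09          # assumed: repay 9% of income above the threshold

def annual_repayment(income: float) -> float:
    """Repayment depends only on income, never on the size of the debt."""
    return max(0.0, income - THRESHOLD) * RATE

print(annual_repayment(24_000))   # 0.0    -> below the threshold, pay nothing
print(annual_repayment(31_200))   # 351.45 -> roughly £29 a month
```

Notice that the size of the debt appears nowhere in that calculation; it only affects how long the repayments go on for.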

To be clear, the deal underpinning this is steadily being made worse, but this is entirely an issue of government policy (this will become a theme below…) and has frighteningly little to do with universities.

Or is it?

Let’s back up a moment. That £9,250 figure is a cap. Universities are not allowed to charge more than that, but they are allowed to charge less. When this cap was raised (and also when the original ‘top-up’ fee with a £3,000 cap was introduced) the idea was that it would introduce competition. Universities would charge less to attract “budget-conscious” students. Only the “best” universities would charge the full amount. Whatever “best” means, but that’s another discussion about the damage league tables have caused.

Why don’t universities do this? Why are the majority of courses £9k, and why don’t we see these multiple fee tiers appear in reality?

To figure that one out, notice how the “debt” gets wiped, and that repayment acts as a tax. It doesn’t matter if you’ve taken on £7k per year of debt or £9k per year of debt (or £5k or £15k, for that matter): your repayments, and the marginal impact on your take-home salary month-on-month, are the same. The only change is the number of tax-free years you’ll have at the end as you rocket towards age 50. Or, more likely, you’ve simply changed the probability you’ll have any of those tax-free years at all.

So the benefit, the incentive, for a prospective student to pick a cheaper course is negligible. But, because the fee still represents the amount of public money transferred to the university, the downside is taking a course that has 25% or even 50% less funding. That’s a material loss to any student.

There is no incentive for students to take cheap courses; there is no incentive for universities to offer them. QED.
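If you want the QED in numbers, here’s a crude sketch comparing a £7k-a-year course with a £9k-a-year one over a three-year degree. The flat salary, threshold, rate and interest figures are all assumptions rather than official Student Loans Company numbers; the real system is messier, but the shape is the same:

```python
# Crude sketch: same flat salary, two illustrative total debts.
# Threshold, rate and interest are assumptions, not official figures.
THRESHOLD, RATE, INTEREST, WRITE_OFF = 27_295, 0.09, 0.05, 30

def simulate(debt: float, income: float) -> str:
    repayment = max(0.0, income - THRESHOLD) * RATE   # identical whatever the debt
    for year in range(1, WRITE_OFF + 1):
        debt = debt * (1 + INTEREST) - repayment      # interest accrues, then you repay
        if debt <= 0:
            return f"£{repayment:,.0f}/yr, cleared in year {year}"
    return f"£{repayment:,.0f}/yr, never cleared: wiped at year {WRITE_OFF}"

print("£7k/yr fees:", simulate(3 * 7_000, income=45_000))
print("£9k/yr fees:", simulate(3 * 9_000, income=45_000))
```

Same repayment, month in, month out, in both cases. The only difference is whether you ever see those mythical tax-free years before the 30-year wipe arrives.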

It’s also worth pointing out that this is exactly what people in the sector said would happen when the £9k cap was proposed a decade ago. To get alarmingly political, tuition fees are the perfect microcosm of how neoliberal-conservative politics repeatedly fail to understand economic incentives and money in general; the subject they claim to be experts and ‘grown-ups’ in.

Speaking of Conservatives, however, let’s just add that there are exceptions to the above. You can choose to pay up front to avoid the graduate tax. But if you’ve got around £10k going spare, and choose to spend it on tuition fees instead of, say, putting it in a high interest savings account for little Tarquin or, you know, just giving it to them each year, you’re an idiot. Paying up front is not a good investment at all. But I have been attacked on Twitter by a handful of people (that is, parents of students) who have done this. All Conservatives. All of them.

They’ll probably talk about it being a long-term saving but, again, you do not live in the Real World if you don’t understand the power of a £20-30,000 lump sum in cash, and instead opt to save £35 a month on a £2,600 monthly salary. They are not the fiscally responsible grown-ups they claim to be.

Anyway…

The psychological damage…

Again, to reiterate: with some outlying exceptions, the vast majority of students have not “paid” £9,250 per year.

BUT…

They are made to feel like they have.

The entire media ecosystem around UK universities inescapably focuses on this figure. Students feel that higher education is an investment, and one that must pay off. If I fuck up, even slightly, the words “I’m not paying £9,000 a year for this!” can be moments away. Even if you’ve actually paid diddly squat, you’re made to feel otherwise.

It’s a lot of pressure. For staff to live up to it (and to have it held to your throat every day…), and for students to live up to the investment.

To feel that you’re spending so much money, and must get as much from your degree as possible, get the ‘best’ university name on it, get the highest grade… all to get that job at the end so you can pay for it. When you’re told, constantly, that you’re lumbering yourself with the largest debt you’ve ever seen, a figure you likely have no salience for when you’re just 18, and then have to work to justify it… I cannot overstate the damage this has caused.

If you sat down to intentionally design a system that would demotivate and depress people, using all your knowledge of goal-directed behaviour and expectancy-value theory, you couldn’t come up with something this effective.

We’re seeing students less able to work, less resilient to setbacks, and all because they are under such relentless pressure to perform that they have to panic over every lost mark, and every little thing that makes university Not Worth It. And if it’s not worth it, why get up in the morning? Why open that textbook? Why even turn up to a lecture? It’s all a battle to min-max your time: get the highest marks, for the least effort, so that you can spend that time on other things (mostly working to afford rent, as we’ll see soon…).

Then let’s go back to the pressure on staff to make degrees “value for money”. We have to pack the curriculum with training, and job experience, and authenticity, and additional skills, skills, skills… Science curricula, already extremely information dense, need boosted further with employability… While nothing is cut to make room for it… All of which needs assessed and graded (or no one will do it) and takes up huge amounts of time.

No wonder there’s a “mental health crisis”.

If fees were lower, then that pressure would be off, but instead it builds and builds…

And the other loan…

Less talked about than fees is maintenance.

This is the money students need to, you know, live while studying.

In our subject, students are with us for the best part of 15-20 hours per week.

Which will immediately trigger a number of people to say that it’s low! And students should count themselves lucky to have such a short week! Well, sparky, that’s contact time. That’s the time they’re physically in the room or the lab with me. We expect an additional hour of self-study per contact hour. Time to revise, research, work on skills, write and edit, along with the pre-reading and pre-watching required to make the much sought-after in-person lecture effective. That brings us up to a full adult portion of 35-40 hours of work. Less at the start of the year, often far more as due dates loom. And that’s assuming the increasingly pressured timetable is efficiently spaced enough to allow for extra work between contact sessions; otherwise, much of the day is simply wasted.

This doesn’t leave much room for part-time work. And, in fact, part-time work doesn’t even cut it anymore. I know STEM students — fucking STEM students! — who need to treat their degree as secondary to their full-time job just to make rent. Others need to live at home with parents, and commute — wiping out as much as two more hours each day in a car or (if they’re lucky, because they can use the time to read) a bus, as well as severely limiting their options for where to study.

So, to get by, you need to take out the maintenance loan. Originally it was the only loan (that’s the one I’m still paying off, I think, as the SLC don’t have my current contact details but have no problem charging me), then a grant combined with loans for a little while, then all loan again. It’s all complex and all over the place. This, combined with fees, is what makes the total debt astronomical.

I also cannot emphasise enough that, of these two facets of student finance, the maintenance loan/grant is the bigger scam. While the fee at least goes to tangible things like the library, giving everyone a shiny copy of Office 365, journal access and, of course, paying me (and I like being able to afford food), the maintenance loan is largely a scam designed to offload as much public money as possible into the hands of private landlords.

I did this chart some time ago. It’s already adjusted for inflation.

The total average rent a student currently pays now exceeds the total maintenance loan I was eligible for in the mid/late 2000s. The increase in rent is between 7% and 10% each year depending on the location and the figures you pick, while typical national inflation averages 2-3% in normal circumstances.
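A quick bit of compound interest shows how brutally those two growth rates diverge. The 8% and 2.5% used here are illustrative mid-points of the ranges above, not the actual figures behind the chart:

```python
# Rough compounding comparison; the rates are illustrative assumptions.
years = 15
rent_growth, inflation = 0.08, 0.025

print(f"rent multiplier after {years} years:        {(1 + rent_growth) ** years:.2f}x")  # ~3.17x
print(f"maintenance multiplier after {years} years: {(1 + inflation) ** years:.2f}x")    # ~1.45x
```

Over roughly one academic generation, rent triples while anything pegged to inflation grows by less than half.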

When I hear students complain that maintenance hasn’t kept up with inflation, I have to point out that (medium-to-long term) this simply isn’t the case. It has exceeded it. What it hasn’t kept up with is rental costs. And it likely never could, because the instant that income gets a boost, landlords lick their lips and suck it up. Be assured, Tax Payers and fellow Old People: that pint you see a 19-year-old quaffing on a Thursday night is not the thing you’re paying for.

There are a lot of reasons for this explosion in living costs. The general perpetual housing crisis being a key suspect. However, there are also strong contributions from private developers, hell-bent on creating “luxury” student accommodation and charging a devastating premium for it. There’s also universities themselves, building increasingly shiny and expensive halls in an accommodation arms race to impress students (well, their parents, actually…) on open days.

This accommodation is still often complete trash, and not much better than the breeze-block hell I experienced a decade (or two…) ago. Student landlords are still the worst, and are absolutely raking it in even more.

To be extremely blunt, again, that students are angry about fees but rarely mention rent reflects very badly on us as educators. So much for us converting them to Marxist revolutionaries.

Selling the Experience…

That’s student finance, and you might have noticed that I haven’t addressed the grievances about ‘refunds’ much. That’s partially because I think the blunt and brutal foreword about COVID does a lot of heavy lifting there. But there is one word that keeps popping up when you hear people talk about why they were short changed:

“Experience”

This is possibly one of the more tragic shifts in recent years. We no longer provide higher education. We no longer provide degrees. We are purveyors of student experience.

You see it in the brochures, the prospectuses, the websites, the Instagram pages. All those photos of happy 20-somethings crowding around green fields and trees reading books in the glorious sunshine (which is funny because university terms happen over the colder months); the suspiciously clean nightclubs and luminous wrist bands; the enthusiastic smiling, glasses-wearing kid raising their hand in a lecture (never happens: the school system has made people too terrified of being wrong for that to ever happen). That sort of thing.

That’s the Experience. It’s the thing we’re selling. Or, at least, the thing we’re told to sell. Customer Satisfaction is the main KPI.

COVID shut that down. COVID shut a lot of things down. Notably it shut down the respiratory system of several million people, which is why I find this discussion offensive more than just irritating, but I digress. This discussion is only possible because the purpose of university has been shifted from education to “Student Experience”.

Now, to be clear, I’m not saying student experience isn’t important. Internally, we use it as a shorthand for mental wellbeing, achievement, accessibility, equity, feedback and assessment, all of which are important. And, of course, there are things like learning to live on your own, with others, and growing to become a functioning adult, with university acting as a zone of proximal development. This is important.

But packaging this up as an “experience” for us to sell has caused some damage. Do I believe we “dumb down” courses to bribe students into ticking higher scores on the National Student Survey? No, I think that’s the Office for Students making shit up. But it does sometimes feel like that episode of Community, where Dean Pelton is desperately trying to court that high-roller prospective student with endless gimmicks — and the college suffers because they’re turning it into something it’s not.

We’re selling The Student Experience of shiny computer rooms and photogenic accommodation. And we have to sell more of it to pay for it, of course. And if we don’t sell it, the customers will go elsewhere to the institutions that paid for even shinier computer rooms and even more photogenic accommodation.

All that competition means more and more funds get diverted to endless initiatives that aren’t academic. We have these lovely cafes and “study pods”, we have trendy lighting and reclaimed wood tables, and you can get your book out while someone brings you fries served in a galvanized steel bucket — meanwhile, the ceiling in my teaching lab literally collapsed this week, and two research labs don’t have access to running water. In demanding The Student Experience, students have undermined the actual purpose of higher education.

We’ve sold university as a commodity that grants you a degree, so when we shut down lectures (which are not that pedagogically great, anyway) there’s a riot.

That’s not to blame individuals, per se. This is a systemic problem that has been building for the last twenty or so years. From the media environment telling you what student life should be about (Fresh Meat being the more realistic depiction) to government policy driving competition.

Maybe I’m just biased, having worked in a place that did pretty well during the pandemic — having had access to a lot of online learning specialists who could make the best of it, resulting in us completely bucking the NSS trend in 2020. Maybe elsewhere was truly terrible. But I’m still convinced that the feelings of students who are demanding refunds are driven by these long-festering systemic issues, especially around the crushing demotivation of student finance, and the COVID shutdowns were nothing more than a catalyst to highlight it.

So, about those refunds…

So, you had some online lectures. You had to MS Teams your way through tutorials while you kept your screen blank and refused to say anything. You want a refund.

It feels like students think a refund would be £1,000 sent immediately to their bank account. No it wouldn’t. You’d knock a few quid off your overall debt, which at best means it gets cleared or wiped a little sooner than it otherwise would have been. Paying the graduate tax for 30 full years is the case in all possible realities. And your agreement to pay this number back is with the government and the Student Loans Company.

Let’s say it’s even possible to get a “refund”. It would basically involve the university transferring several million pounds to the government. That’s it. It would make little sense in macroeconomic terms. The government receiving money isn’t like it collecting gold coins that it can then spend like it’s King John in a Robin Hood movie. Taxation and government income is basically about removing money from the system so that any public spending doesn’t immediately trigger hyperinflation. So, as a student, you would not be getting that money. But universities would suddenly have very real pounds missing from their budgets, which is hugely damaging to their ability to deliver the precious student experience to the next cohort of students.

It’s very easy for me to sound callous here, and very snarky, but at the end of the day we’re talking about people with an immense amount of privilege whining about something that occurred because literally hundreds of thousands of people were dying. I find that very hard to move beyond.

And I find it hard to deal with because this ire is directed at universities and teaching staff when, as I’ve hopefully illustrated above, the root causes of student finance problems are systemic and political. This is the result of decades of neoliberal and conservative policy trying to undermine higher education in the UK.

But who is the villain…

If you don’t mind me sounding like a conspiracy theorist for a moment, I’d say that this is exactly what the government wants. We’re stuck with a system that is inherently neoliberal, late-capitalist, and is slowly devolving into proto-fascism. Or possibly no “proto” about it. Universities are the enemy. It’s in the best interest of the government to get students to hate universities, and resent their education, and specifically target academic and teaching staff.

If I’m feeling feisty, I’d say this is what the Office for Students was set up to do. Its priorities have repeatedly driven wedges between students and universities. While claiming to fight for students, it hasn’t really done much for them. Its policies and interventions have been based around non-issues like “freedom of speech” (of the “for me, not for thee” kind) and culture war nonsense. I can’t help but point out that feedback and assessment shows huge levels of dissatisfaction on the National Student Survey, but OfS haven’t set up a dedicated task force to address that — we’re far more likely to see them ask “Have you felt marginalised for saying trans people should be sent to death camps? Give us a call!” than anything else. And, already, we’re seeing independent reports remarking that OfS is simply acting as a political mouthpiece for the government. Rather than be horrified that students and universities are at war over fees and student experience, OfS is likely cackling to itself that all is going to plan and that this station will be fully operational by the time your rebel friends arrive…

Not good. Not good at all.

Ultimately, right-wing governments do not want educated people in their population. They do not want you to see education as a right, or something you can do because you enjoy it. They’ll dress it up, but ultimately it boils down to them viewing education as state-subsidised training that private companies don’t have to pay for. That’s why they’re happy for people to think universities owe them money, and not the government.

To try and bring it together: yes, I do think a lot of the calls for “refunds” are (mostly) the uncalled-for whining of middle-class privileged kids. But I do think they come from a genuine place of frustration with the costs, both real (that is: rent and living) and perceived (that is: how you’re made to feel about tuition fees). That is something I can’t (and won’t) dismiss. In fact, we need to draw more attention to it.

The fees regime is… Bad. Make no mistake. But it’s something we have no control over. Most academics want education publicly funded, and for its benefits to be publicly realised. This halfway house, where we attempt to have public funding, but then badge it as a highly demotivating private debt, is simply unsustainable.

And that is something worth being angry about. Furious about, even. It should radicalise you, and make you demand change at the highest level, accepting no half-baked compromises. And it’s worth that feeling far, far more than any acute problems incurred in 2020.

There’s a lot here, but it’s also a wide-reaching subject and I’ve missed lots out that could also be addressed. And because there’s so much of it, there’s no quick and easy solution.

Platform Matters – Why Twitter Should be Considered Untrustworthy by Default

I recently read a comment to the effect that “platforms aren’t untrustworthy, people are” and that you need to evaluate information sources independent of platform.

The context being a partial defence of getting news and information from Twitter, in spite of prominent examples of fake information spreading over that platform. This is from someone pretty respectable (originally, I had a screenshot, but it adds nothing but their name, which is irrelevant) and I’m going to disagree with them on this bit. Partially. Maybe wholly.

First off, at face value, that is true. A rando on Facebook and a rando on Twitter have the same credibility. This is independent of the platform. We should learn to evaluate sources and interrogate them thoroughly.

BUT: some platforms structurally invite misuse and, either through the ineptitude or indifference of their owners, are more able to propagate misinformation and disinformation. Twitter is absolutely one of these. In my view it always has been: it’s structured around short posts, atomised from any context, and these posts are trivially boosted, with their surrounding context hidden from the home timeline. Information travels fast when it’s bitesize by design, and showing something to your followers as part of Twitter’s public performance is a single button click.

We don’t have such concerns about WordPress blogs, for instance, even though WordPress is also a platform. Its barriers to sharing are slightly higher, and the barrier to generating content higher still. Misinformation may exist, but its propagation is slowed. The content is far more difficult to separate from its long-form context.

A blog is, I’d argue, the structural antithesis of Twitter.

And that’s just pre-Musk Twitter. In its current incarnation, it’s notable for putting up huge barriers to determining the legitimacy of information. Most notably, it has completely dismantled its verification programme, allowing anyone to effectively impersonate a person or organisation, up to and including that blue-tick verified shorthand.

These aspects aren’t wholly unique to Twitter, but I think it’s a place where you can easily find all of them embedded into the system and its culture.

But, I think, an underappreciated problem is that it’s simply impractical to assess the source of each individual nugget of information. It’s far more effective, and almost as reliable, to pre-filter information broadly by its source. Your hit rate on identifying disinformation is just as good if you simply dismiss anything from, say, the Daily Express and wait for BBC News to report on it instead. So I’d argue it’s safe to be far more skeptical of something someone just twet. Especially if it’s a screenshot, where the contextual clues of whether that verified sign is “real” or paid for are hidden away.

That’s before getting into the fact that we should never have got to the point where you get news from a social media post, and not from a verified platform or website, or a vetted, professional source. A reasonable heuristic for the latter is a source that faces meaningful consequences if it deliberately publishes misinformation. That’s still, mostly, the case for professional journalists writing articles, but not for Twitter accounts. Even Twitter accounts of known people. You can lie and mislead with no hit to the income you generate from having a following on Twitter.

And, of course, we could also talk, at equal length, about how Truth on social media is determined by follower count. Not only does that determine the reach of “facts”, but the account with more followers can easily control the narrative of dissent entirely by force of volume. You can be contradicted in the comments, but if you have the weight of 100k followers to back you up, you can ruin someone’s day by clapping back — whether you’re right or wrong. The “Quote Tweet” feature of Twitter being a key weapon in this, because that pushes your reply directly to your followers. I raise this mostly to underline how Twitter is structurally ripe for abuse and disinformation in a way that many other platforms are not. It’s a platform where you need to threat-model responses to your posts — even absolutely insane responses that have misunderstood you intentionally.

So, yes, platform absolutely matters and we should take that into account when evaluating information. And, going forward into this post-Musk world of X and its Xcretions, outright dismissing tweets (and screenshots of tweets) as evidence of literally anything is going to become the simplest act of due diligence we can do.

Still Treated as a Joke – Happy Birthday Rosalind Franklin 🎉

Q: “What did Crick and Watson discover?”

A: “Rosalind Franklin’s notes!”

Haha! Ha! HA!!

Ha…

I hate this joke. I absolutely despise it with a passion.

This has to be one of my most controversial and fart-in-a-spacesuit opinions: modern science communication, particularly with a feminist slant, treats Rosalind Franklin way worse than she was ever treated back when she was alive.

There are a lot of other adjacent problems I have, of course. Science communication about women seems to be stuck in a “there’s only two chicks in the entire galaxy” loop: specifically Franklin and Skłodowska-Curie. Though that’s gotten better in recent years. We are just about allowed to talk about women who are still alive, for instance.

To understand my problem with science communication on Franklin, we need to look at this:

That’s ‘Molecular Configuration in Sodium Thymonucleate’, Nature volume 171, pages 740–741 (1953). The authors being Rosalind E. Franklin and Raymond G. Gosling. And if the date and journal seem familiar, that’s because it was published at the same time that Crick and Watson published their own famous paper – the third in the series was the one by Wilkins. If you turn the page after finishing Crick and Watson, you find Franklin and Gosling.

It’s behind a paywall, because reasons, and you absolutely should not buy it. But it’s also difficult (not impossible) to track down full, decently scanned copies, especially since using Sci Hub to download a scientist’s papers started being treated as a worse criminal offence than murdering them.

Few people seem to understand that this paper even exists, never mind what its content is. I can probably count on one hand the times I’ve seen someone communicate the science behind the structure of DNA and mention it or explore it.

On one level, it’s easy to explain why this is. This paper is dull, boring, technical, and tedious to go through. The X-ray diffraction world has come on so far since that it’s almost quaint to read. It’s of interest to specialists, as most papers are, and there’s a very limited amount of interesting things you can extract to make good pop-science fodder.

If you want to unpack it, you need to know a few things first… and I think many of those things fly in the face of the usual Sunday School version of the DNA story.

  • You need to know these people were not “discovering” DNA — it had been known as a molecule for a very long time,
  • You need to know that there are multiple forms of DNA, based on its level of hydration — so they weren’t “discovering the structure” of DNA as much as refining what was known about the specific forms of it,
  • You also need to know that they weren’t even establishing that it was helical — again, something that had been known, or at least very strongly suspected, for a while (you can read that in Franklin and Gosling, above).
  • Hell, you might need to know that Rosalind Franklin isn’t even the first female crystallographer overlooked in the DNA story — that unfortunate honour probably belongs to Florence Bell, working in the Astbury lab!
  • You might also, wait for it, need to know that Franklin didn’t acquire the famous Photo 51 — that was Gosling, which raises awkward questions about supervisors stealing credit for their students’ work.

All of that was established over the 20-something years before Crick, Watson, Franklin, Wilkins and Gosling published their famous papers.

What Franklin and Gosling’s paper adds is the specific qualities of the helix in DNA: how quickly it turns, and the distances between atoms and groups. This is essential fine detail for building a structural model of it. We can hazard a guess at what it looks like if we suspect it’s helical, but unless those numbers match up with the molecular model proposed by Crick and Watson in the preceding paper, we have a problem. We’re after all the fine detail at this stage.

And Franklin and Gosling did this with some exceedingly tedious, complex and dull mathematics that matches X-ray diffraction patterns to molecular structure. And make absolutely no mistake: that’s hard work, it’s a big challenge. It requires skill, learning, practice, experience, and time plus dedication. This is highly specialist, complicated stuff.

And there’s the problem.

A lot of the popular perceptions of science are built on this idea of lone geniuses who simply see the Matrix and figure things out with instinct. If we’re lucky, we might see them working hard instead. But, on some level, it will always come back to individual personalities working alone and making grandiose discoveries in an instant. It’s far easier to get “Franklin discovered DNA single-handedly and two men stole all her work” into your head than to understand the decades of work, near-misses, and steady accumulation of evidence by hundreds of people that led to the double helix. It ended with the Nobel Prize going to Maurice Wilkins, Francis Crick, and the abusive step-dad of DNA, James Watson. They accidentally became the capstone on this massive scientific undertaking; it was never just them, and it never will be just one or two people working on problems of this size.

That’s what science is. It’s the endless, tedious accumulation of data to synthesise a conclusion that might be years in the making.

You might see the big speeches and announcements, and in the modern day the TikTok dances in the lab. But you don’t see the work. You don’t see the reading, the trial and error, the endless filling in of logs and lab books, ethical and COSHH applications (though that’s probably less applicable to the 1950s…) and more reading, and more questions, and more torn up notebooks of failure. That’s the part that isn’t talked about because it isn’t fun and it isn’t sexy. It’s tedious, and awful, and makes the job a job, not a “calling” or whatever people have described it as.

And if you dive into the work and contributions from Franklin, that’s what you find. Someone who turned up, did the work, put in the graft, and stuck her head down to prepare samples, acquire and process data, do the reading, do the supervision, and write it up at the end in excruciating technical detail.

In short, you find a competent, even masterful, scientist.

And she’s still reduced to the punchline of a joke.

A Call to Inaction: Universities Should not Join The Fediverse

As Twitter steadily turns further to shit, the hunt to replace everyone’s least-favourite verbal-abuse-based theme park ride continues. The popularity of the long-standing free/open-source alternative Mastodon, and the accompanying Fediverse, continues to chug along. I won’t bore the uninitiated with the jargon except for the one thing needed to understand this post: you can start your own server (paying for hosting) and then connect it with the rest of the system to talk to everyone on it. So, unlike a vBulletin or phpBB forum of old that you might stash on your own server to host a few dozen or a few hundred accounts, you can follow and be followed by anyone, and connect to various other networks, not all of which are Twitter-like.

This has led to numerous calls for universities to join it. I’ll use the one I’ve linked to as a jumping-off point, as it also makes the case for students to use it, and even have it given to them by default as their key account. The purported benefits are community, decentralisation, and conversation without putting your trust in a corporate entity like Twitter or Facebook.

But here is why this is a bad thing and we should not do it.

Bad for Academic Freedom

Many academic staff use, or have used, Twitter for a combination of personal and professional work. It’s difficult, if not impossible, to separate these two spheres. I’ve seen people make a herculean effort to separate the two by running multiple accounts. Inevitably, even that still blurs the line, as you will follow, and be followed by, the same single-account people from both your ‘personal’ and ‘professional’ personas anyway.

Despite this blur, we can get away with not representing our respective institutions because it’s Twitter – it’s not formally linked to our employers. Finding out who we work for is usually trivial but, still, it’s activity that’s nestled away over there, and badged as personal regardless. It’s ambiguous, but it works. Mostly.

All this changes the instant you end up with a “.ac.uk” or “.edu” at the end of your handle.

In that situation, there is no ambiguity. You are there representing your employer, and on their terms, using their money and resources to post and host content – not only your content, but the content of anyone you follow, since Mastodon servers cache posts from everyone you follow, and image-heavy users fill up that disk space with NSFW furry art faster than Netflix will cancel a fantasy series.

The joy of unambiguously representing your employer

Unambiguously representing your employer brings with it many limitations. Small talk about movies and music would be allowed. I’m sure. Right? Maybe. But what about the spicier subjects that Academic Twitter has made itself known for?

Can I criticise league tables just as we’re rising up through them on an edict from the Vice Chancellor?

Can I criticise the institution’s policies and their implementation?

Can I shitpost? And swear?

Can I post pro-Union content and endorse strike action?

If you have to earnestly whistleblow anything from corruption to bullying and sexual assault allegations, you do not want your employer having access to the “suspend account” button.

On separate servers not controlled by the institution, that’s at least a muddy grey area, one ruled mostly by precedent, unreliable common sense and assurances that we’re posting only in a personal capacity. In an area owned and operated, wholly and officially, by the institution, if you can’t say it on your website profile, best not to say it at all. Naturally, this would mean switching to personal accounts to keep things separate. But, again, that separation is difficult to maintain by all except the most stoically self-involved of academics. That also undermines the idea of having an account designed to interact with the whole of the fediverse.

The overlap of the personal and professional

Now, what about situations where that personal capacity overlaps with professional capacity?

My professional remit covers student mental health, and one key contributor to that is finance, driven by a need to pay extortionate rent. There is no boundary between this professional remit and my personal conclusion that student landlords are a scam designed to transfer billions in public funds (as maintenance loans) into private hands (as rent). It is a simple, smooth transition from observation to conclusion. This sort of thing cannot be said without bringing my employer, if not the entire sector, into some disrepute, as it’s a sector that has steadily done nothing to address this, and even actively contributed to it through an expensive, decade-long arms race to build the shiniest buildings. It is incredibly murky for me to make this observation even in a personal capacity linked to my job, never mind on a server owned and branded by the University. Yet I’d be professionally negligent not to raise it as a key factor in student wellbeing.

Sure, you can choose to only post papers, dryly network with others and, frankly, just be a boring old fuck.

But for the majority who actually use social media in a semi-professional capacity (itself a minority of all university workers), that is not the case. Remaining boring necessarily means lower engagement and fewer people being interested in what you have to say. No one, in practice, cares for people who have used their social media accounts for nothing except professional self-promotion. At the same time, those refusing to take any stand at all against injustices are simply branded complicit in them.

That overlap can only exist when we have the capacity to post individually. Otherwise, it’s just Worktribe with bells on.

Bad (or at least pointless) for Students

I’m now into my second decade teaching professionally in higher education. My current job title sounds very senior and fancy. I’d like to think I have sufficient experience to judge what would happen if we automatically gave every student a fediverse account. I’ll sum it up in one word: nothing.

Students will not use it. It’s yet another system, another location, another thing to check. So another thing to ignore.

There are now 15 competing standards

We have our Student Information System (SIS), the Virtual Learning Environment (VLE), SharePoint, Teams, PebblePad, the Canvas forums, email… and that’s just the official university systems that I can remember off the top of my head. There will be more. Parallel to that we have various backchannels: Instagram accounts, Whatsapp groups, student society groups on Facebook, institutional social media accounts… it’s a crowded field, competing for your attention. Each addition dilutes the one thing we need: authoritative, single points of truth, which are communicated easily.

[to anyone who has just exclaimed ‘but that’s what the fediverse can be!’ please do join me in reality]

It’s hard enough having to keep up with multiple Canvas sites running in tandem without also adding Mastodon tags or accounts to push information to students. The path of least resistance would be to simply ignore it, and never engage. It’ll come via email anyway. I’m not saying that’s the most likely course, I’m saying that will be the course. Setting up student communities artificially, with institutional authority over them, will always be met with a passive shrug.

You can always force engagement by staking marks and credits on it, of course, but…

Intolerance of the intolerant

But suppose I’m wrong.

I’m not, but let’s suppose.

Suppose there is engagement from students, whether naturally or forced because grades are decided by it. We then have to look at moderation, and that’s a bit more complex than “just moderate it”. This requires time, effort, resource. I’ve run a few MOOCs in recent years and, despite these running for only a few months, with only a few hundred users interacting on my course, it was an enormous drain on my brain space and capacity. Asking me to moderate my own and my students’ Fediverse presence is not a trivial ask, because it’s quite a serious duty. And it’s one I do not want. My job is slammed full as it is.

Let’s be clear: the Fediverse swings very left/liberal. Moderation of a server must clamp down on things such as racism and transphobia. Personally speaking: this is good. I genuinely like that aspect. If a server decides it wants to host racists and massive homophobes, the others very quickly isolate it and stop talking to it. But that works because people individually own their communities, are responsible for them, and have every right to control what they allow to be hosted on there. They’re paying the server costs, after all. You want to host bullshit, pay up and take the consequences.

Once you have a University signing up, that forces it to adopt a compatible stance, and by extension to control and moderate its students’ personal political stances. If I, in a personal capacity, click ‘block’ on a twerp who turns out to be a student within my university, faculty, school or even on my degree programme, well, that’s one thing. If I have to do it to delete and censor their posts on their official, institutionally-backed account (which provides them access to course materials and information, and gives them an official presence within the Fediverse), or else risk my entire institutional presence on that system, well, that’s something else entirely. You’re basically excluding students before you even start, removing them from an official platform purely for their political stance.

And that brings us back to the same point again: why would anyone want to use an account with such restrictions when they can get their own, and post without restriction?

To be clear: those are political stances that are wrong, and that I do believe are immoral. Many of them, if acted on, would go against our internal codes of conduct and policies (as they would for many employers). But those policies govern behaviour in the context of official activities, not online communities where personal and professional beliefs mix. I can and will protect, for instance, my trans students from abuse. I am institutionally supported in doing so. I can’t (and won’t) kick someone out of my lectures just because they joined the Harry Potter Society and started liking its author’s paranoid tweets. If I’m allowed to have opinions on the grounds of “personal capacity”, I’ll extend that courtesy to students. At least as far as Prevent lets me.

If we wanted the university to federate, someone would need to police those opinions much more closely. And despite what the newspapers think happens, none of us have the time for that.

What is it good for?

To be clear: I’m not saying the following are bad ideas:

  • Academic staff signing up via Mastodon,
  • Students signing up via Mastodon,
  • Universities and departments signing up via Mastodon,
  • Universities starting their own server so that they can have @department@institution.edu accounts.

That’s fine, but it’s also not even a mild departure from what has always been possible, no matter the platform.

But making our presence official? Compelling a sign-up by default (the shadow-profile thing Meta’s Threads has been getting shit for recently)? Even hinting at making it essential for grades and communication?

I cannot think of much that’s worse.

Some Actual Controversial Opinions in Atheism

Another from the drafts. Allegedly 2014-5-ish. The early 2010s were a transition period for me: it’s around the time I got very disillusioned with ‘movement’ atheism and skepticism. I stopped subscribing to blogs, vlogs, groups and mailing lists on the subject. So, technically, I don’t know if it’s still a Thing, exactly. Discussions with others suggest it has mostly fizzled out and/or eaten itself, with the Athei-Bros becoming part of the reactionary man-o-sphere, and the Rebecca Watson apologists becoming queer activists. I don’t know if that’s true, but I honestly don’t care. I think I still agree with most of this one 8 years later. I’m pretty sure I’m on the right side of history with it. Even the fucking Prevent Strategy has stopped assuming all dark-skinned Muslims are secret Jihadist bombers. Christ-on-a-bike, the 00s were a wild time…


I spotted this post from a popular Atheist group on Facebook a while back. I’ve screen-capped it below, but for the benefit of accessibility and context I’ll describe it a bit further.

[screenshot: “controversy”]

“What is your most controversial opinion?” they ask.

The top two answers are 1) that Islam is a big problem, even more so than Christianity, and 2) that Feminism is a cult movement more concerned with the sexism that exists in the western world than with that in the Middle East.

Controversial?

I dunno, really. I hate to play the Argumentum ad Dictionarium card, but if you gather that many up-votes, or attract as many “likes” on your comment as the top-level post, then your opinion isn’t really “controversial”. In fact, it’s positively mainstream, at least in that community. I don’t think those court controversy at all; instead — trigger warning: social justice enthusiast wording ahead — these opinions pander to the white male demographic that dominates the weird beast that is Internet Atheism.

To be cynical and somewhat crude for a moment, those two opinions translate to “It’s totally that darkie foreigner religion that’s the worst” and “Bah, these bitches, eh? What can we do with them?”

And it’s pretty fucking depressing that, far from being controversial, they appear so mainstream.

Here, I aim to present some actual controversial opinions I have that are pertinent to atheism. These are opinions that will almost certainly get me down-voted to oblivion should they ever be posted to a mainstream atheist forum, or possibly have me banned from meetings should I speak them out loud. I dare not speak them lest a thousand grown men come to beat me with copies of a Sam Harris book, and then lynch me with rope made of Richard Dawkins’ pubic hair… oh, sorry, should I have trigger-warning’d that I was going to be mean to atheists? Sorry about that, I’ll give you your safe-space back soon.

1) Islam is not an extra special outlying problem

Islam definitely has its problems in its written ideology. No argument from me there.

So does Christianity; quite a few problems, in fact, as evidenced by how you can mix up Bible and Qur’an quotes and have lots of fun when people can’t tell the difference. And the less said about what Scientology believes, the better.

Then again… so do Mayan and Aztec religions, which are especially nasty because they endorse human sacrifice, and that’s pretty scary in my oh-so-humble opinion. In fact, I’d like to say that, as ideologies, they’re some of the very worst.

“But wait!” you say, “No one follows those human-sacrifice religions anymore!”

Well, exactly, Skippy, that’s the point. If a religion could be apprehended in itself, and cause problems independently of the people following it, then saying things like “Muslims are fine, but Islam is bad” would make sense, and in more than just a trivial academic context. But by extension we’d also have to be scared of Aztec and Mayan religions coming to sacrifice us to their gods, because they’d be capable of causing harm independently of people’s existence. So instead, it makes much more sense to filter our problems with a text, or an ideology, through the lens of the people who write, interpret and act upon those texts and ideologies.

(Is that ‘structuralism’? I can never remember the terminology for this sort of thing… it’ll be an -ism of some kind. Oh, the humanities…)

“Ah, but terrorists…” you might add at this juncture. Well, quite. They certainly exist and (some) follow a religion – even if the word “terrorist” is one of those arbitrarily defined politicised things that people only use to strip (more) rights from one particular class of criminal. There might be some correlation in there, but it’d be like the “psychopath gene” all over again: many murderous psychopaths apparently share a similar genetic identifier, but so many people in the general population who aren’t murderous psychopaths have the same gene that it’s utterly pointless to worry about. “Islam” fails as an explanation except in a trivial academic sense that exists only in a world where people don’t.

As it stands, out of the billion-plus Muslims in the world, there are remarkably few terrorists. In fact, our little stereotype of them being violent, Middle East-dwelling sand-eaters is, put simply, false – the single largest Muslim population is in Indonesia, and far more Muslims live in South and South East Asia than in the Middle East.

And out of all terrorists, quite a few aren’t Muslim. Sure, Islam has those words that might cause people to become terrorists… but targeting Islam as a cause gives us hardly any explanatory power over who does and does not become a murderer.

Even then, killing by Muslims is pretty rare in the grand scheme of things. Personally, I’m more scared of a white, English driver getting drunk and killing me than I am of an Islamic terrorist killing me. That’s just a reflection of simple statistics about what is more likely. Do we then blame “Englishness” as inherently problematic? Do we say “Englishness is the real underlying cause, so we need to criticise Englishness… but it’s okay, the people are mostly fine, we just want to criticise Englishness, the abstract concept”? No, because that’s fucking nuts.

Islam doesn’t scare me. Some of the people following it might, but there’s thankfully very few of them. The religion, in itself, scares me as much as Aztecs and Mayans do. Should they re-emerge and become a statistically viable threat to me, I’ll adjust my views accordingly. Until then, it’s as useful as “they breathe oxygen” or “the problem with the world is the universal wavefunction and the boundary conditions of the universe”. It explains nothing because it tries to explain everything, which makes “Islamdidit” practically the atheist version of “Flooddidit”.

And there you go. That’s a fucking ‘controversial’ opinion. It will have atheists from Reddit to Wikipedia frothing that I could be so stupid and so blind. How dare I choose not to criticise a religion when us Atheists need to stick together?

Well, I don’t think that because…

2) Atheists are not the most oppressed minority

Even in the United States, where several states still keep laws on the books banning non-believers from public office, atheists are not actively oppressed.

For a start, those laws go unenforced – and are unenforceable, since religious tests for public office were ruled unconstitutional decades ago – and there are other legal protections besides.

Sure, people can be fired for it, and that’s bad… so long as it really is for that reason and not because they did the Atheist equivalent of telling all their co-workers they’ll burn in Hell and throwing Bibles everywhere. After all, I complain repeatedly whenever the Christian Legal Centre generates a manufactroversy by falsely claiming religious persecution, so I have to be consistent and absolutely not accept it when an atheist does the same.

But, really, let’s be absolutely honest here: religion causes far more harm to LGBT groups than it does to non-believers. Atheists don’t have a higher suicide rate, they can still marry without controversy or denying who they are and what they believe, and they tend to come from wealthy, affluent areas and get high-paying jobs. As a class, they’re pretty stable, extreme exceptions aside. But the damage done to LGBT people is reflected in laws across the world, including the supposedly civilised portion of it. You’re more likely to have fewer rights identifying as LGBT across the world than you are putting “none” for religion.

There are parts of the world where atheism is oppressed and apostasy is punishable by death. But how many western atheists genuinely give a crap or do anything to help them? “Nah. Fuck off. That sounds like effort.” Far easier just to pretend your own western-centric experience is the only one that matters. It’s more smugly endearing to think that you, you poor non-believing dear, get it worst out of everyone.

Religion’s treatment of women is also extremely pronounced, though mostly carried by social mores rather than religious edicts. A woman’s place is here, a woman’s place is there… no, she can’t do that, it’s a man’s job. And so on. You don’t get social pressure saying “if you’re an atheist you cannot do that job”. Hell, if some surveys are to be believed, atheism doesn’t even disqualify you from entering religious ministry!

What’s worse, of course, is that atheists take those social mores with them. They actually inherit many of the social problems that – so they claim – religion has generated. And then uncritically carry them forward. “Ha! Feminazis!” they’ll cry, “bothered about equal pay and depictions of women here when women are getting raped there!” – or, to translate that into English: “Hey, quit criticising my misogyny, criticise theirs instead!”

Again, this is likely to get me hounded out of the room for daring to criticise Glorious Atheism and how it will cure all social ills because Logic and Reason!

And speaking of Glorious Atheism…

3) Atheism is a fucking cult

“NO!” I hear you cry. “Atheism is NOT, NOT, NOT…” *bangs desk* “…a religion or a cult! Atheism just means not believing in g*d(s)!!”

Which is great, but “dictionary atheists” as I like to call them (“village atheists” has been used elsewhere) miss the point: such an idea doesn’t survive a head-on collision with the simple fact that people exist.

If atheism simply means “non-belief”, then why do atheist groups even exist? Why is there an atheist sub-reddit? Why is there an Atheism+ or a Brights movement? Why do books get written on the subject? Why are we even having this discussion?

Because people exist and movements and ideologies are way more than just their basic one-line definitions!

Atheism has a culture and a society that grows up around it. And, yes, while it’s hard to pin down to a single entity because those groups are diverse and hold different opinions, so can’t be lumped together (and if you agree with that but are happy to lump “Islam” and “Feminism” together as great monolithic cult-like entities, we need to talk at a more basic level), it’s impossible to deny that a society and a set of social expectations grow up around these groups.

As an Atheist, you’re expected to use “logic” and “reason” and be “rational”. They’re buzzwords. They’re verbal signals to identify each other. Be honest, when was the last time you saw an Atheist talk about “logic” and include something like “¬(¬A) ⇔ A”? Probably never. You’re more likely to see them name-drop the Dog Latin term for an (informal) logical fallacy and declare victory. Yet you’re still meant to be “logical” and “rational”, and use those words freely to describe yourself – religion, conversely, must be “illogical” and “irrational”, no matter the argument at hand. Never mind that something like the modal logic proof of God is logically valid (the issue is its applicability and scope), it has to be “illogical” because none of you fuckers know what “logic” means.
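(For anyone who does want the squiggle unpacked: “¬(¬A) ⇔ A” is just classical double negation, and checking it is a two-row truth table. A minimal sketch in LaTeX, purely as an illustrative aside of mine:)

  % Illustrative sketch only: the truth table behind ¬(¬A) ⇔ A.
  \documentclass{article}
  \begin{document}
  \begin{tabular}{c|c|c|c}
    $A$ & $\neg A$ & $\neg(\neg A)$ & $\neg(\neg A) \Leftrightarrow A$ \\
    \hline
    T & F & T & T \\
    F & T & F & T
  \end{tabular}
  % The final column is true on both rows, so the biconditional is a
  % tautology of classical propositional logic (that is all "logically valid" means).
  \end{document}

Two rows, done – which is rather the point: validity is cheap, and it’s the applicability that does the heavy lifting.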

As an Atheist, you’re expected to agree with other Atheists. Stick together. Don’t criticise Dawkins because he’s a hero! But do criticise Feminazis because they’re illogical! Do bring up injustice in the Middle East, but don’t-you-fucking-dare mention injustice closer to home – and then promptly do nothing about it.

Atheism, at least when you spell it out and mention it out loud, comes with these social expectations. It’s all part and parcel, meaning “atheism” absolutely cannot refer only to the one-line dictionary entry “does not believe in g*d(s)”. Even if you object to the word being used to describe that social structure, you can’t deny the social structure still exists and in fact causes problems.

So, with three actual controversial opinions out there, you may now post this to Reddit and commence your Groupthink, suckers.

Freeze Peach

Found this in the Drafts from ca.2015. It seems to still hold true almost a decade later, so I’ve tidied it up and hit ‘Publish’.

The alt text of a well-known xkcd free speech comic reads:

I can’t remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you’re saying that the most compelling thing you can say for your position is that it’s not literally illegal to express.

I’ve read some really crappy comments recently trying to “debunk” and criticise this but, beyond their own personal whining that they aren’t allowed to throw abuse at people, they never actually got around to doing so. [note from 2023-Me: 2015-Me did not leave any breadcrumbs about this]

In fact, they seemed to have largely missed the point.

The point is this: at no point ever should “freedom of expression” be the reason you want to be heard. It’s not a reason. It doesn’t matter if you’re discussing the moral implications or the legal ones, it’s not a reason to be heard. It’s a tautology: “I should be heard because I should be heard”. It argues for nothing, proves nothing, it is therefore not a reason.

Make no mistake, despite any of the most paranoid fantasies across the political sphere, we’re not suffering from a lack of free expression in the western world. We let all sorts of vile, disgusting, objectively harmful and damaging press be written and published and transmitted. Guilt-free, barrier-free, publish-and-be-damned, all is fair in love and free speech. A lack of free expression just isn’t a problem. We have plenty of freedom of speech to go around, and harping on about it literally proves nothing.

We probably lack responsibility and acknowledgement of the privilege of having a platform, but not the freedom.

Now imagine, just for a moment, that you are in a place that rigorously controls free expression. A place that legitimately and really clamps down on it. Examples exist out in the world: the aggressive authoritarianism of China, North Korea, or Florida, for instance. Even then, the reason you need to be heard isn’t “because free speech”. How does that even follow? “We need free speech because speech should be free!” is the same useless tautology whether you have it or not.

No, that’s not the reason. If you’re stuck in a place with legally limited and suppressed expression, the reason you need to speak out is usually the same reason your speech is suppressed – because it will hold the people in power to account.

If your message is “our leader tortures and mutilates people without trial in order to suppress political opposition”, the reason that needs to be heard isn’t “because I can” – the reason is because our leader tortures and mutilates people without trial in order to suppress political opposition. There is a reason for it to be heard. That speech has value. It needs to be said; not because it can but because it should. Freedom of expression is a means, not an end. And that’s because what you have to say can matter.

If your opinion is shit and valueless, I won’t give it undue respect or endorsement. I have no good reason to. I won’t pay to host it. I won’t waste my time listening to yet more of it.

And no, I won’t fight to the death to let you say it… what fucking idiot trots that old adage out, anyway? Who the hell wants to die just so someone can scream about how the Holocaust was fake on a park corner? I value life way too highly to end it over the sanctity of valueless opinions of dubious factual accuracy. If I need to trade my life, literally, for someone’s opinion, that opinion had better damn well be worth it. I’ll defend speech I find equivalent value in.

Perhaps, to pull an extreme example, Holocaust denial is an opinion that has some value – in which case, the person espousing it should be able to demonstrate that value to me. Is it true? Does the opinion benefit the world? Is my life improved upon hearing it? Are new truths brought to light by it? Please try to convince me of its value rather than complain that it’s merely your right to say it. I doubt you can, though: we live in a society that enshrines freedom of speech more than you’d like to admit, so I’m already familiar with such arguments, and they have been found wanting each and every time. I don’t need to pay travel expenses to hear them yet again.

If, literally, all you have to say in its defence is “but it’s freedom of expression”, then you’ve outright proven that you have no value to offer. And fuck it, life’s too short to waste worrying about things so worthless.

The (actually not-that-tricky) Issue of Consent and Your Children

Here’s professional failure and desperate rent-a-gob Laurence Fox, aghast that you might need consent to touch someone.

However, this blog post is not, in fact, about Laurence Fox, a man whose main reason for existing is to make Billie Piper’s marriage to Chris Evans (no, not that one) at 18 look like one of her better life decisions.

No, this post is about consent, which I think I’ve talked about before. Because people genuinely ask, and get confused about, whether they should get consent to touch their child.

Yes, you can bet your ass you should.

This usually gets mixed in with the idea that you should ask consent to change a nappy. That’s “diaper” for the Americans. Both words are… terrible.

Anyway, I think that gets brought up alongside consent because 1) a toddler can say “no”, and therefore roll around in their own poop all day, and 2) this principle usually includes literal babies. Both are easy clout-chasing objections that can be used to decry the whole issue of obtaining consent from children as “woke nonsense”. You can see it discussed in the responses to that tweet about Fox above.

I don’t want to call these “valid” objections, but I don’t want to entirely dismiss them either, as they have a use in illustrating some principles about consent and communication. So, if this whole idea of asking a month-old child whether you can dress them sends you into a frothing, incoherent rage about how it’s some sort of violation of sacred “common sense”, read on. It’s a little more complicated, but let’s start at the basics: yes, you should always ask consent of your children.

Why?

If it isn’t obvious, then I may struggle to convince you that it’s simply morally correct to treat your child as a human being and not your personal possession. Many people have pointed it out before, but it’s very difficult to move someone’s opinion when their base assumption is that other people don’t deserve basic respect. I don’t know the form of words required to talk someone down from assuming their child is their property to use, because I don’t think they would recognise their behaviour as that.

However, you can at least consider it from a practical perspective:

In the future, your child may be in a position where they could be touched inappropriately by an adult.

Do you want them to:

  1. Be in a position where they know that this is wrong, and stand up for themselves, or
  2. Be utterly subservient and unquestioning towards an adult, and go along with the harm because they’ve been taught this obedience.

This is particularly important because, despite what you might think from the Stranger Danger morality tales of yore, statistically the biggest threat to children comes from close family members. Getting them to stand up for themselves is, for all practical purposes, a defence mechanism they need to learn. Even to their closest relatives. Especially to their closest relatives, in fact. I’m pretty sure anyone who has spent more than 8 minutes in therapy in the last decade can agree with that one.

Still, it’s hard to explain this in a way that will get through to anyone who outright objects to the concept. Children are people, and need to be respected as such – and they do need to learn that they can be respected, and listened to.

But, I already know the objections. I have a Facebook account, I see content from “normal” people all the time. It goes something like this:

But my child has to do the thing! They need to get dressed, get changed. What if they refuse?!

And, you know what? You’re right. Sometimes they do need to, and they simply don’t want to. A child is, on occasion, not going to do something they absolutely need to do in order to function, survive, or be healthy. You are, at some point, going to have to wipe their arse when they very much do not want it.

This is easy to work around, because consent is not just asking “will you do this?” and getting a yes/no answer. There is a little more to it than that. Just a little, of course. First, you need to inform someone of what you will do to/with them – hence “informed consent” in medical practice and other areas. Someone cannot give consent if they are not suitably informed.

With children, this needs to be taken quite seriously. You explain what they need to do, the consequences of not doing it, and inform them thoroughly. This is preparation. Never surprise a toddler out of the blue. Explain everything with as much notice as you possibly can, in highly redundant detail. With children, it’s not necessarily a case of asking a closed yes/no question. It is about informing them of what the consequences of each choice will be. “You should do [X], because it’s important. If you choose not to, I will have to do it for you, this, that and the other will occur, so on and so forth…”.

It’s not a case of asking “will you get dressed?”, hearing “no”, then immediately grabbing them and forcing clothing onto them. It’s not about asking your nascently lingual toddler if they want their nappy changed, then letting them roll around in their own mess all because they said “no”. Think about the message that would send. It’s about explaining the consequences, holding boundaries where necessary (not arbitrary ones), and communicating the relevant, easily-understood details to them.

In fact, if you want to distil it to a soundbite frequently used in parenting circles: do not ask your child a yes/no question unless you are happy with both answers – and, importantly, prepared to respect the answer you get. So it is perfectly fine to tell your stroppy 18-month-old that you will be changing their nappy, because it’s essential. They’re very young children, with no control over their lives. They simply want to have some, and find it where they can get it: so you can give them other choices to feel in control when something simply has to happen to or with them.

This is why you need to read beyond a headline, or a dumb soundbite, or some “wine mom” influencer making InstaToks about how her children have ruined her life. You need to understand the process, and understand the founding principles of consent. The last thing you want to teach a child about consent is that their “no” will not be respected. If you ask and they say no, you should not do it. If you need to do it, and “no” is not an acceptable answer, then it is about informing them of what will happen, and offering them some other form of control over their life, even if only as a distraction.

This brings us back to babies. Do you ask them to consent to change a nappy?

Yes.

But why? They can’t speak or understand!

Ah, but here’s the thing… well, two things actually.

  1. They are learning to speak and communicate, and
  2. They do understand what is happening to them

So, while you might think it’s absurd to ask your weeks-old infant a question, what you’re actually teaching them is the act of conversation. A call; a response; a suitable action or reaction (the moves of a Wittgenstein language game, if you want to be high-brow about it). That practice will eventually morph into real conversation, and the child will know what to do, and so will you – it’s practice for the parent as much as the child. There’s no better time to practice speaking to a child than when they literally can’t understand and repeat the words. I mean, it’s one of the few times you can get away with profanity consequence-free.

Anyway, what is the alternative? To wait until a baby has learned to talk before actually talking to them? To wait until they can already understand questions and answers before ever asking them anything? I’ll leave it as an exercise for the reader to figure out what an actual affront to sacred common sense that is.

Cynic’s Guide to De-influencing

Okay, let’s try that de-influencing trend.

No, don’t buy that dress.

You only think it looks good because you’ve conditioned yourself to think that anything on a skinny lass with 1M followers, 2 hours of make-up and 17 filters looks good.

If you wear it, you’ll spend all night faffing with it and adjusting it, permanently worried that you’ll accidentally flash someone out of pure discomfort, even if no one else is physically in the room at the time.

You also do not go to enough fancy functions, nor are you having enough worthwhile and enjoyable sex, to justify owning it. Yes, I’m looking at you everyone who somehow knows how to pronounce “Mugler” without looking it up.

You’ll want to send it back, but never get around to it, and just feel guilty. And since so much returned clothing gets sent to landfill anyway – because it’s simply easier and cheaper than repackaging it – the effect will be much the same either way.

I know it’s only £8.99, but the reason it’s £8.99 is that it’s produced in such conditions that buying it means you’ll be about 0.45% responsible for the permanent mutilation of a Malaysian child. The waste effluent from the dyeing process won’t have killed many fish, but that’s because they were dead already from the last century of us doing this.

You’ll be, like, “what is this de-influencing thing?!”, look it up, find articles from galaxy brains saying it’s just the same as influencing. But that’s because there’s no ethical consumption under capitalism, only man-made horrors beyond your comprehension.

Just buy the thing anyway. Those children weren’t really using those fingers, were they.