5 Things They Don’t Tell You About Teaching in Higher Education…


Have you ever considered a career in teaching? Does it sound totally great but the concept of a PGCE and a month’s mandated nose-wiping in a Primary School turn you off? Would you rather teach people you can be cynical and sarcastic to? Then try HE!

(note that this is primarily cathartic cynicism, it’s still a good job, and where I’ve highlighted problems below I do have solutions – at least for the things in my control – but maybe for another time)

It’s long been thought that the difference between teaching at school and at university is that school pupils are forced to be there – as subjects become more optional, attitudes improve. I do think that’s true, and it’s part of what makes teaching in further and higher ed. much more attractive. When students want to learn because they intrinsically value it, they’re great to teach, and this is backed up by (decent) research in education and psychology.


1. The students don’t want to be there

Once, long ago, students came to university because they had a passion for the subject – although this tended to correlate strongly with being wealthy and white for various reasons, that’s beside the point for now. People would happily come to university for the privilege and sheer honour of sitting in a stuffy room and listening to an academic talk endlessly about their area of expertise (a minor exaggeration, I’m sure). After all, it didn’t matter if you didn’t understand it, you just went to the library to learn it again properly before hitting the subsidised alcohol.

But that’s very much changed now.

We can blame the new fees regime, sure, but there’s been a broader cultural shift in what university is actually for – or, at least, seen to be for. It’s now a continuation of school, it’s just “what you do”, and if you don’t go to university you’re seen as a failure. Whether this comes from employers demanding “any degree” for jobs that don’t warrant it, wider society now valuing education for its own sake, or even direct bullshit-expectations from parents, students have to go to university. Students now scramble onto difficult STEM courses because they’re offered through clearing, but do so with a lack of maths qualifications and an interest in the subject that comes exclusively from being told “don’t do Art History or English, it’s a waste of time”. The expectation is “university”, as opposed to “physics” or “biochemistry”.

The end result is that students don’t really want to be taught by you. They see university as a 3-4 year prison sentence they must serve before they can graduate and get a decent job with more money – a fallacy given that graduate wages are rapidly collapsing. Students increasingly see only the extrinsic value in the subject – the degree, the stepping stone to the next thing. You’re there to tell them how to pass the exam so that they can be graded and graduate with a 2:1 and put off deciding what they want to do with their lives for a bit longer. The effect this has on their motivation is just as bad as school pupils who are “forced” to stay in school way past the point where they care about it.

It’s not universal, but it applies to a large enough proportion of students to make the job much harder than it needs to be. In fact, the job is already hard enough given that…

2. There is no training (that’s of any use)

Surely, the person standing up to lecture you has been taught how to do it effectively, right? And when someone is organising a tutorial, they’ve been told how to structure the session, respond to queries, and their notes on the questions contain an extensive troubleshooter and FAQ?


You’re pretty much thrown into it with nothing if you decide to go on a teaching route. You’ll go into the lab for the first time to supervise 100 or so 18-21 year-olds and know nothing of the practicals. You’ll have a group of 6 in a tutorial and you won’t have had the chance to practice what you’re going to do with them. You’ll turn up to a lecture and this is the first time you’ll have given a presentation where the audience’s comprehension of what you’re about to say actually matters.

Now, this isn’t to say you’re ultimately terrible at it. Junior academics usually have to present to a lecture theatre (their research and proposals) before they’re employed. The ones that don’t get the job are the ones that fail to realise this isn’t a presentation, it’s an audition. As a result, anyone employed in that position can at least speak clearly and won’t fidget and mumble their way through a lecture series. But that’s it, that’s the main bit of quality control, and it only weeds out the physically incapable. Barring a yearly peer review (usually precipitated when the one person who cares decides to organise it), which focuses mainly on ticking a few boxes along the lines of “were you any good? Yeah, whatever”, there’s little to no culture of review or quality control in HE teaching. Responses from student feedback are, generally speaking, either useless (ranging from “they were fine” to needlessly personal insults) or unrepresentative (a response rate of 5% is good), so they can’t be used to improve teaching or direct your own development as a teacher.

Well, there is some training. But it doesn’t involve how to deliver or develop a curriculum, or make sure that your ideas are understood by people. Instead, you’ll get taught “learning styles” (largely debunked as hokum), or Kolb’s learning cycle (I’ve yet to find a use for it), and countless over-complicated words that really do nothing but state the obvious. You’ll hear about “travelling theory”, where you treat your “subject as a terrain to be explored with hills to be climbed for better viewpoints with the teacher as the travelling companion or expert guide”. This all sounds lovely and poetic and makes some abstract high-level sense, but doesn’t really help you teach someone how to normalise a wavefunction or integrate a rate equation. And the diagrams – be sure to always call them “models” – the bloody diagrams that mean nothing but will make your eyeballs bleed. Bloom’s taxonomy (or at least the cognitive domain of it) might be useful when you’re writing exam questions, but that’s it. Make sure you use “conceptions” and “discourse” a lot when it comes to writing your essay to prove you’ve learned this stuff.

The only useful thing I got out of a year’s worth of workshops and coursework was a half-hour session on vocal health – because talking your bollocks off to 200 people for 45 minutes is harder physical work than it seems. That was great; and something I appreciated more than most, thanks to being married to a pro vocalist who has schooled me in the theory of that for over a decade.

Anyway, why the “training” sucks segues nicely to the next bit. You’re not really being trained to teach, exactly…

3. If you want recognition, be prepared to do something useless

Teaching is a largely thankless task in higher education. This sounds a bit weird if you think of universities primarily as educational institutions, yet it makes perfect sense if you think of them as academic institutions designed to generate research. Teaching doesn’t generate headlines, it doesn’t bring in millions in grant money, and it will get you a new building only once in a blue moon, when the university finally listens to the 800th email saying “the teaching labs are about to fall down and kill people!” (because “they’re too small to fit the students you demand we should take” doesn’t get the job done).

This is slowly changing, though. We can blame the fee regime for this. Students now make up the majority of funding for universities, and with the Teaching Excellence Framework around the corner, the higher-ups are taking it seriously.


The training and recognition don’t reward good teaching, they reward talking about good teaching. Hopefully, I shouldn’t need to hammer home that these aren’t the same thing.

Consider what you need to do for an HEA fellowship, for example. You need to write essays and take part in continuing professional development (CPD), but few of those are ever based around your actual teaching (you have to write a case-study of your own teaching, but the actual aim is to analyse it using the bullshit you learned in your ‘training’ workshops). As a result, the people who get published in the educational literature, and so make a name for themselves as ‘good’ teachers, are the ones who write things like “Conceptions of Student Learning: A New Model Paradigm For Higher Education” and then proceed to yank four student types out of their arse and call them “Square Thinkers” and “Circle Thinkers” and “Triangle Thinkers” and “Squiggle Thinkers”, each described with Barnum statements and no real evidence, and then try to say something profound like “you should make your tutorial group of Squares, Circles, Triangles and Squiggles”. I’m not naming names, but this actually happened once.

So if you can guff around and talk crap about teaching and learning, and make it sound complicated and theoretical and academic, you could very easily find yourself en route to a very cushy academic job in an education department.

Alternatively, you can innovate. Innovation is something I won’t bash outright, but innovation for the sake of innovation is the enemy. Want a teaching award? Start a Twitter account! Send out homework assignments via Snapchat! Get into a packed lecture theatre and do explosions with your students – don’t bother telling them why they explode and how to stop it, that might be useful to them, and that’s boring. Experiment with keeping your office door open! Do EBL and PBL and use the word “constructivism” a lot! Add your students on Facebook! Tear up the rule book because you’re cool and wait for lavish praise to fall upon you!

If you’re a softly-spoken lecturer who stands at the front to just talk – calmly, rationally, and with a clear message – the students will go away knowing a lot about a subject. But that sort of crap doesn’t get you an award or promotion. (Before you think this just sounds like bitterness on my part, you need to know I’m not actually this kind of person.)

Anyway, you can avoid most ‘training’ sessions, except the most important one, which they probably won’t tell you about…

4. You need to learn mental health first-aid

So, cynicism aside for a moment, if you want to work with students, seriously, learn mental health first-aid. Believe me, there’s a lot that “common sense” won’t get you through here, so you need to know it and get taught by someone who knows what they’re doing. It’s difficult to deal with, but it’s something you will inevitably deal with and may even take up a measurable chunk of your time (which can’t be directly assigned to the Work Allocation Model, of course).

Why is this important and potentially time-consuming?

Look above at all the crap students have to deal with. Under pressure to perform from their parents, locked into a course they hate by the expense and the fears that they’ll never pay back these objectively ridiculous fees, surrounded by staff who would rather be writing their next Science paper than answer questions on thermodynamics, faced with lab work that’s almost designed to overload their working memory… and then panicked that they haven’t learned anything from the young, hip and trendy ones who are telling them to check their Twitter feed for tutorial announcements.

All that on top of being young, a bit dim, unsure… by the gods, the list goes on. It is a perfect recipe for a mental breakdown. And this is strikingly common, and not just restricted to the stereotype of the emo goth girl who broke up with her boyfriend. Anyone who comes into your office could break down in tears at a moment’s notice.

I really don’t talk about this often, so I’ll get it over with in a single quick-fire list: in a few short years I’ve had students on anti-depressants, undergoing CBT, having panic attacks in labs, admitting to being sexually assaulted, having been mugged, saying that their family has just imploded, discovering they’re dyslexic, passing out in an exam and waking up in hospital, passing out in a laboratory, passing out in my office…

This is serious fucking business. We’re not there to be therapists – we shouldn’t take on that role – but university counselling services are stretched thin, underfunded (by comparison to their need), and only really available as palliative care rather than preventative. As a result, we often have no choice. If you want to take a teaching-track route into HE, you’re likely to be in close contact with students far more often than your research-focused counterparts, you’re going to be seen as more approachable because of it, and you’re going to deal with this whether you like it or not.

Maybe you want to stay in research over teaching, because…

5. We don’t know if it’s going to become a dead-end or not

As recently as 5-6 years ago, a teaching track in a university was a dead end. Teaching staff were recruited as cheap and easy plugs to do the jobs that senior academics didn’t want to do. They don’t want to spend 6 hours on their feet in teaching labs. They don’t want to blow 4 hours a week on tutorials. They’ll put up with a lecture course if it’s the only one they have to teach that term and they don’t have to do anything but stand and talk. And so, teaching-focused staff were born – costing only as much as a postdoc to employ, capable of absorbing much of the abuse students generate, and having copious free time to load up with that “any other duties” bit of the job description.

But there was no promotion track. There was no way, as a teaching-focused academic, that you could write and bring in a 6-7 figure grant. There was no way, as someone who doesn’t run a research group, that you could really publish a high-impact paper. And so there was no way that a university or department could reward you for it.

This has, however, mildly improved. There are now promotion criteria, there are pathways to get to senior positions, and – even if it is as rare as astatine – you can get a tenured professorship purely on teaching. Some places are even slowly unpicking the distinction between teaching- and research-focused staff, allowing you to officially hold the title of “lecturer” – ironically, “lecturer” usually means you do less lecturing than the people without it. This is all fabulous, of course. Finally, universities are recognising that students bring in a load of cash, and so the staff who teach them stuff might be worth investing in.


There’s always a ‘but’.

The UK is slowly moving over to the United States’ model in, well, every area, really – and this includes HE. We’re going to privatise our healthcare, prisons and welfare, and we’re going to hike higher education fees to make them inaccessible to all but the most advantaged people. We also run the risk of paying staff less, exploiting the eagerness of younger researchers and teaching staff to take poorly-paid positions for a 1-2% shot at the big time. The US runs on a frankly appalling system of “adjunct” professors – usually newly-minted PhDs, typically paid per class they teach. The end result is that many of them teach classes at multiple institutions, often with long commutes between, and are paid only for the hours spent teaching. Once you factor in the travel time between jobs, the marking, grading, course development and other sundry overtime, the wages work out at just below minimum wage. Yet the system works because people feel they have no other choice – and they’d be right, that is their only choice.

Is the UK heading that way, too? Maybe, maybe not. I don’t know. On the one hand, we’ve seen staggering improvements in the respect you get for teaching in HE; on the other hand, we could revert to the US model at a moment’s notice if the suits in charge see that it’s cheaper to pay some young pup £3,000 to teach a class than it is to pay them £30,000 to be full-time and teach only 4-5.

I’ve seen an increase in teaching positions advertised as “term-time only”, which pro-rata down to quite a low salary for a year of work, meaning you’ll need a temp or part-time job to keep you busy over the long summer. But, more importantly, term-time-only contracts and per-class contracts rob universities of the chance to do any development work. Most teaching-lab experiments were cutting edge back in the 60s, some lecture courses haven’t been updated since the 90s, and the intro courses given to first years are still the same tired old things despite evidence that flipped delivery would improve them. No one can do that unless teaching-focused staff are given the time, respect, and clout to develop – and that means employing them full time, even over the Christmas, Easter, and summer breaks. If the worrying trend of employing them for their contact hours only continues, we’ll lose any chance of curriculum development or review by the people who actually care about effective teaching.

So there’s a lot of work being put in to make the position respectable. But it’s likely that the walking suits, earning 10x what I’ll ever be able to, won’t like that, and will reverse the entire thing into a ditch.

The 10 Dumbest Things I’ve Seen An Undergraduate Chemist Do

In no particular order, here are the 10 dumbest things I’ve seen an undergraduate chemist do in the last decade or so.

1. Derive a bond length longer than the Humber Bridge

This is a fairly common error that results from not keeping track of your orders of magnitude properly. If you’ve got a lot of 10⁻¹⁹ or 10⁸ type things flying around, it’s easy to get lost. But you should at least be able to sanity-check your answer and figure out that a chemical bond is about an ångström long, with comparatively little variation – it’s not going to be 8,000 metres.
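That sort of sanity check is also trivial to automate. A minimal sketch in Python – the function name and the plausible 0.5–3.5 Å window are my own choices for illustration, not any official rule:

```python
ANGSTROM = 1e-10  # metres

def plausible_bond_length(length_m):
    """Rough sanity check: chemical bonds are on the order of an angstrom.

    The 0.5-3.5 angstrom window is an assumption for illustration only.
    """
    return 0.5 <= length_m / ANGSTROM <= 3.5

print(plausible_bond_length(1.54e-10))  # a typical C-C bond: passes
print(plausible_bond_length(8000))      # longer than the Humber Bridge: fails
```

Dropping a check like this at the end of a calculation catches the stray factor of 10¹⁹ before it ends up in a lab report.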

There’s plenty of theoretical dumbness where this one came from, but they’re boring to non-specialists and would take a while to explain. The rest of this list is pretty much the horror-show that is the teaching laboratories.

2. Stab themselves with a pipette full of chloroform

It turns out that while we’re pretty careful about needle-stick injuries (one a week for postgraduates, none at all for undergraduates, remarkably), a simple Pasteur pipette can be equally capable of breaking the skin and injecting a toxic compound into you. This is probably at the more sensible end of the “how the shimmering fuck did you do that?!?” scale.

3. Throw acid in someone’s face

Let’s just summarise this one thusly:

Student A: “Watch out! This is acid!!”

**throws a beaker of clear liquid in their friend’s face**

Student A: “Haha! Lol. It was just water… Joke’s on you! Wait… wait… why are you screaming?”


Not pictured – the shouting… the glass breaking… the demonstrators head-desking after being asked “can’t you just tell us the answer?” for the fifteenth time that day

4. Syphon an ice bath… filled with unknown crap

What’s the fastest way of emptying excess water from an ice bath? Easy – simply stick in some rubber tubing, suck it up, and let the syphon action do the rest. Sounds great… unless you failed to spot someone spill some toxic crud in there earlier, and then sucked it into your f**king mouth. The end result of this one involved screaming across the lab to spit it out into the sink, right as the senior demonstrator turned up.

5. Spray hot oil into someone’s face

While we mostly work with metal heating blocks now (possibly for this very reason) it’s still common to use oil baths to warm things up. Even for a distillation. The trouble with a distillation is that you need to get everything else on the heated side of the condenser to get hot, and this takes a while. So you can speed it up by heating the still head yourself, and this often involves a heat gun (aka, a hair dryer).

This is fine, providing you don’t point it down into the oil bath, where a sudden blast of hot air hits the hot oil and sprays it everywhere.


They’re queuing up outside. The cleanliness has mere seconds to live…

6. Get capsaicin in their eye


The first rule of Extracting Capsaicin Club is: you do not touch your eyes while extracting capsaicin!

The second rule of Extracting Capsaicin Club is: YOU DO NOT TOUCH YOUR EYES WHILE EXTRACTING CAPSAICIN!

It’s a very effective method of getting closely acquainted with an emergency eye-wash station, though.

7. Pass out in the lab

Normally, this would be considered a sympathetic accident. Call a first-aider, and get them checked out.

However, when the lab starts at 10 am, and it’s because they skipped breakfast – possibly having slept in, despite not going out the night before (they said) – it’s their own damn fault and they’re a god-damned danger to others.


*Shoves fuming and smoking bottle of soon-to-explode stuff into the demonstrator’s face* – “What do I do with this?!?!” – “What’s in it?” – “It’s the top layer from Part B.”

8. Turn up to the lab drunk

One student turned up to the labs still visibly drunk from the previous evening – which explained the apparent lack of a hangover: it hadn’t kicked in yet. They then proceeded to wander around the lab doing Jack Sparrow impressions until their lab partner kicked them out and sent them home before the idiot broke something or tried to pick a fight with the senior demonstrator.

9. Fail to understand the purpose of a spreadsheet

You’ve got data. Lots of it, in fact. It all needs the same calculation performed on each point. So, obviously, you type it into Excel, write your formula in the next cell, and drag/copy down. Presto.

Of course, this wasn’t good enough for the geniuses who decided to write each one out on paper, type them into a calculator step by step, then manually type the answer into the spreadsheet. All 150 data points’ worth, over the course of about an hour. The reason, apparently, was that they didn’t trust the computer to get it right.
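For anyone allergic to Excel, the drag-and-fill idea is the same in any language: define the formula once and let the machine apply it to every point. A quick sketch in Python, where both the data and the `calibrate` formula are made up purely for illustration:

```python
# Made-up example data: 150 raw readings that all need the same calculation.
readings = [float(i) for i in range(150)]

def calibrate(x):
    # Stand-in for whatever per-point formula the experiment actually needs.
    return 2.5 * x + 1.0

# One formula, applied to every point - the drag/copy-down of spreadsheets.
results = [calibrate(x) for x in readings]
```

Typing each of those 150 results into a calculator by hand introduces 150 chances for a transcription error; the computer gets it right every time, which is rather the point of the spreadsheet.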

10. Spill ethylenediamine down their arm and not notice… for half an hour

Ethane-1,2-diamine, ethylenediamine, diaminoethane, en, whatever you like to call it – it’s a common ligand that appears everywhere in the theory component of an undergraduate chemistry course. Small wonder, then, that they forget it’s a strong base, a derivative of ammonia, and will rip your skin off in short order if you don’t do something about it. But at least now I know what a proper ammonia burn looks like. And smells like, to be overly honest.

You’re Not Here To Study Chemistry

I don’t gripe about work too often here… okay, maybe I do. Anyway, here’s one thought flowing through my head as I have ten minutes to kill between doing allegedly important things.


Sometimes, I want to scream to my students: “You’re not here to learn chemistry!”


If you want to learn chemistry, read a book. Read Wikipedia. Read ChemGuide. Read HyperPhysics. Any idiot can pick up the material and learn all about it. Science is, possibly more than any other discipline, a well-documented subject. Want to learn some science? It’s out there for you to take. Now, more than ever, with knowledge freely flowing through the internet, anyone can learn about chemistry.

You are mere clicks away from a myriad of experts who have it all written down for your personal consumption and pleasure.

If you’re giving up 3-4 years of your life to come and study, you need to do more than just learn the chemistry. Much, much more. And this is a lesson most of us fail to learn until it’s way too late.

You’re not here to learn chemistry…

You’re here to learn how to be a decent human being. If you leave this place thinking it’s okay to treat the rest of the world like pieces of shit, you’ve wasted your time. Graduate and become a Daily Mail reader, you’ve wasted your time. Graduate and think “well, I don’t mind gay people just so long as…”, you’ve wasted your time. Graduate and think “but women should never earn the same as men because…”, you’ve wasted your time. And you’ve wasted my time, too.

You’re here to become a rounded individual. If you do nothing but learn chemistry, and chemistry alone, and just what we put on the syllabus only, and take no time to engage with another subject, join a society, pick up an instrument, join a protest, write a novel, finger-paint the windows… I dunno, just anything else, then you’ve wasted your time. Take the opportunity to get out there and do more. Do different. Try things. Find out what you hate by doing them. If you don’t, it’s time wasted.

You’re here to become a scientist. If you just learn the facts, you’ve wasted your time. If you can’t think critically, you’ve wasted your time. You’re here to practice science, to do science, to experiment and figure out how to experiment. So if you just learn about it, you’ve wasted your time. You need to do it. Learn some philosophy of science. Learn hypothesis testing, and p-values, and Bayesian statistics, and distributions, and confidence intervals whether your module requires it or not. Learn how to write, to communicate. If you stay up all night fiddling over one lonely mark out of 100 on your lab report, you’ve wasted your time: get hammered in the pub and explain quantum mechanics to your friends instead.

You’re here to become a functioning adult. That means figuring out how to pay bills, cook food, live with others, be on time, and organise your day. Forget the alternative-living hippy-crap for now because you can’t accomplish that with dreams and wishes; if you want to change the world you first need to know how to survive in the crapshack that it is. You need to know when to sleep, when to wake up, when to plough ahead and work hard and when it’s best to give up and try another method another day. You have to tackle your anxieties, fight your depression, face your self-doubts and crippling insecurities, and learn to manage stress about deadlines. You’ve got 3-4 years of your life in the most supportive environment that is physically possible to create – and make no mistake, few other humans get that kind of opportunity. If you can’t do that here and now, when else are you going to pull this off? If you don’t take the opportunity to fight yourself head on, you’ve wasted your time.

You’re here to learn how to take over the world. In 3-4 years’ time you’ll graduate. You’ll be a post-graduate researcher, a teacher, or in industry, or anywhere else with a job, making a difference in the world. 5 years after that you’ll be managing and leading, making decisions. 10-15 years after that? Who knows. But without warning, and without your consent, and without any other time to prepare, you’ll be running this planet. Remember all those dicks out there running the show and making the world worse? You’re destined for their position – so if you don’t learn how to do that job less dickishly than they do, you’ve wasted your time. Whether you like it or not, all the adults, the ones that you think know what they’re doing, will die off. You are going to have to take their place. There’s not another batch of replacement adults and rulers out there to make decisions… there’s you. And you have to do a much, much better job than they have. And the bad news is that you have to do it all while being the most detested and maligned generation on record; the generation that came before thinks you’re all lazy, whiny, self-entitled, self-obsessed losers for wanting even a sliver of the advantages they got, and they want to punish you for it. They hate you with a passion that’s absolutely unrivalled across countless centuries of grown-ups muttering “Bah! Kids these days!” They want to strip you of your voting rights, lumber you with debt, deny you prospects and shit on your happiness – and you’ve got 3-4 years to learn how to tell them you’re not going to fucking take it any more. You’ve got 3-4 years to unlearn everything they taught you to make themselves feel better, and to learn that you have to take the keys to the planet from them before they can cause any more harm to it.

You’re not here to learn chemistry, you’re here to make the world a better place by learning that chemistry. So don’t waste your time.

The Environment: Social vs Science

A thing I’ve long suspected, but have really only figured out and cemented after having to write some lecture materials on it, is that green chemistry, climatology, sustainability and environmentalism aren’t technological or scientific issues – they’re absolutely social issues. I apologise if this seems utterly trivial to people and I’m a little late to the party – and I did say something similar regarding health issues a while back – but it really does seem like this is 100% social and 0% scientific.

On one level climate change denial is entirely social – it sure as hell isn’t based on the scientific evidence or a thorough understanding of climatology. Merely presenting evidence doesn’t change minds, so it cannot be a simple scientific issue. Science can figure it out, and science could save us from the ill effects, but it doesn’t convince and it doesn’t convey with relatable rhetoric. Instead of searching for the right evidence for people to believe it, we have to search for the right incentives for people to believe it – and those two things aren’t even in the same ballpark when it comes to looking for them. If the climate changes irrevocably, we could survive through technology, that’s certainly true, but… only the ones who can afford the technology will have it, and therefore only the ones who can afford to survive will thrive. That’s a social, not a scientific, issue, and no amount of technological advancement and research will help with that.

We charge 5p for a plastic carrier bag now, even though carrier bags aren’t the biggest use/waste of plastic and aren’t as big a deal as you might think… yet that isn’t really the point. No-one sensible thinks this minor little thing will change the world. If you charge for it, though, it makes people think “maybe I shouldn’t use this material as a disposable commodity… hmmm, perhaps I should re-use an old bag instead”. It makes people think “this thing has a value, I should use it responsibly… perhaps I could use other things responsibly”. Those are social incentives, independent of any technology – we could implement such a change, and have a real impact, without having to spend a single minute in a lab developing degradable co-polymers or decomposition photocatalysts. If a simple social incentive makes people think more about where it’s come from and where it’s going, and whether it can be reduced, re-used or recycled, then it will do more for the planet than any amount of technological development in biodegradable polymers will.

Decent incentives can make people think, because science can’t do that for them.

We can recycle cow dung into vanilla, recycle water between toilets and sinks, and breed insects for the same amount of protein at a fraction of the environmental cost of cattle – all of which could have staggering benefits for us and the planet. Yet people (well, North America and Europe, for the insect thing) may well go “squick” at all of it.

We expend vast amounts of energy to purify and sterilise drinking water and pump it into homes, then use about a quarter of it flushing shit into the sewers – and no one, here in the big, developed, supposedly-civilised first world seems to think that this is maybe, just maybe, a little bit weird. We can purify waste water to a high standard, but people won’t accept it as drinking water without an emotional buffer in the way.

I can sit through presentations from students returning from work experience in the chemical industry and note that 10% of their efforts are expended in getting a product that works and 90% of their efforts are expended in getting a product that looks and feels like it works. We are quite literally blowing our technological advancement on placating social norms and pandering to conventions. That is absolutely a social issue to be addressed. Can we educate society to accept cloudy washing-up liquid and less-viscous shampoo in exchange for diverting our scientific efforts elsewhere? Can we de-brainwash people about what things should look like providing they still work?

None of these are technological issues. Grey-water toilet systems exist. Half the planet already consumes insects. Flavourings from bio-mass and waste already exist. Bio-derived and biodegradable surfactants already exist. But accepting them as solutions or potential solutions isn’t exactly trivial. They’re new, they’re weird, and sometimes they can be a little yucky. So should we draw the line and say that it’s our responsibility to adapt to the better technology, rather than the technology’s responsibility to adapt to our artificial preferences? Or is that solution just too difficult?

Sure, we need the technology to develop better approaches, but without the incentive to use them that’s nothing but a pointless academic exercise.

Percentages (Procedures vs Understanding)

I can’t remember if I’ve ranted about this before somewhere, but here it is for posterity anyway.

Have you ever noticed how at school you’re taught this:

percentage = (number ÷ total) × 100

Basic percentages. Divide your numbers, multiply by 100, and you get your percentage. Easy, simple, procedural, and easily rattled off in an exam.

Except, no. No. Not at all. You don’t multiply by 100. That gets you the number, it doesn’t get you the percentage. “100*(14/20)” gets you “70”, not “70%”.

This is because, by its own definition, a percentage is a fraction in which one full unit is normalised to “100”. So 70%, as a decimal number, is 0.7.* As a fraction, 70% is 7/10, or 14/20, or 70/100. It is not equivalent to 70. The multiplication by 100 is implied when you want that bare number (the equivalent implicit multiplication by 1,000 gets you the rarely used “per-mille” unit, ‰), but at no point does multiplying by 100 actually get you the percentage – it’s effectively included in the definition already.

* It could be anything, of course. 70% of 2 is 1.4 – but you get that by multiplying 2 by 0.7, not by 1.4 or by 70. In any case, the whole thing itself, no matter what it is, is normalised to 100, and the equivalent decimal is normalised to 1 in the same way.

Have you ever looked at the “%” and “‰” symbols?

In fact, the word “per” usually translates to “divided by” (kilometres per hour means kilometres travelled divided by the time taken) and “cent” means 100. So “percent” means “divided by 100”. The symbol “%” is quite literally a unit, and the unit conveys meaning just as much as km/hr or m/s or J s or kg m⁻²s⁻². And sticking other numbers in there and having a “per hundred” or “per 250” or something like that isn’t unheard of, and pops up whenever it’s convenient to rescale your units to sensible numbers. That’s already what we do when we talk about “kilometres per hour”, because the SI unit is the metre, so kph is actually “1,000 metres per hour” – or “1,000 metres per 3,600 seconds”, since we may as well go all out on this.
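That “let the units guide you” trick falls out of a single line of arithmetic – a trivial sketch (the speed itself is an arbitrary example):

```python
# "kilometres per hour" literally means kilometres divided by hours:
# 1 km/h = 1,000 m per 3,600 s, so the conversion factor comes straight
# from the units themselves.
speed_kph = 90                        # an arbitrary example speed
speed_ms = speed_kph * 1000 / 3600    # km -> m on top, h -> s on the bottom
print(speed_ms)                       # 25.0
```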

What you have really written when you’ve formally put “x100” in your expression/equation is the following:


“(14/20) * 100 = 70%” implies that “7,000 = 70”. Which is absurd.

If you take out that “x100” you get 14/20 = 70/100 = 70%, which is arithmetically correct.
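A quick sketch in Python makes the point nicely, because Python’s own string formatting treats “%” exactly this way – the ×100 happens only when the number is displayed with its unit:

```python
fraction = 14 / 20            # 0.7 -- this already IS seventy percent
print(f"{fraction:.0%}")      # the "%" format code scales by 100 and
                              # appends the unit only at display time: 70%

# 70% of 2: multiply by the fraction, never by 70
print(fraction * 2)           # 1.4
```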

So far, so obnoxiously trivial.

But I think from a pedagogical point of view this might, actually, be quite important. Not just in a narrow, pedantic sense about a bit of numeracy, but in a wider sense about how we (by which I mean “schools”) teach things as procedures to be followed, rather than as concepts to be applied and understood. The “x100” bit is certainly implied, and it gets you the right number, but it’s not a formal part of getting you the percentage. Sticking it there as a formality strips the understanding out of percentages, and turns them into a set of steps to be triggered one after another, without stopping to think about what is actually happening.

The trouble with procedural steps is that they only get applied to one situation, and one situation only. Thinking about “%” as a unit that means “per 100” is, in fact, incredibly powerful, as looking at units and letting them guide you will let you blag your way through physics, mechanics, thermodynamics, kinetics and near-enough every time arithmetic rears its head in science. But no, every school kid out there is left thinking that when they want a “%”, they need to divide and multiply by 100. It’s nothing but a bit of trivia stuck into a drop-down menu, to be used in a few narrow situations.

Not to mention that procedural steps put together are notoriously difficult to recall. For anyone with even a mild gift for numbers the percent thing might look too simple, so to illustrate this let’s jump to an example from chemistry:

Behold the rotary evaporator, a common piece of laboratory equipment for evaporating solvents – whenever you see a generic scientist on the TV and they’re not using a Gilson pipette, they’ll probably be using a rotovap. The thing just screams “lab” at you.

The main aim is to use a water bath to heat a sample and evaporate solvent. It also operates under reduced pressure so that you don’t need as much heat to do it – the vacuum does the hard work for you. It’s a fairly simple piece of kit under all that mess, and only a handful of components are involved. Yet the first time an undergraduate chemist sees one they practically shit themselves.

So what’s the first thing a student will look for? Of course, the instructions – usually a point-by-point procedure on how to go about doing it.

And they read the procedure.

And they follow it.

And they do it.

And, hell, they successfully complete the task without getting parts of their body stuck in a lettuce and screaming “my god, the blood, it’s everywhere!”.

And then they promptly forget how to do it less than ten minutes later.

I’m not kidding, the recall on using these things is fucking appalling if all students are given is the step-by-step instructions.

It’s not because the equipment is particularly complicated. It’s just that when written out formally the procedure is about a dozen steps long, and it induces a sudden panic about doing things in the right order. “Do I do this before that? Do I turn this valve first or press this button first, and… oh gods, when do I turn this dial and when do I stop it… and…”

Well, pretty soon you’re dealing with a supposedly grown adult freaking the fuck out.

But it doesn’t have to be that way.

If you know why the bloody thing works in the first place, the procedure pretty much writes itself.

“Do I lower the flask into the water first, or turn the vacuum on first?” Well, if you know that the vacuum lowers the boiling point of the solvent, then you’ll know that heating it up first, and then turning the vacuum on risks flash-boiling the entire thing as you lower the boiling point to below the water bath’s temperature. If you turn the vacuum on first, then the pressure lowers, the solvent evaporates, adiabatic expansion cools it down, then you warm it up by lowering the flask into the water bath.

The same thing applies to gas lines, where instructions tend to be along the lines of “Open tap 1, now close tap 5, after that close tap 6 and open tap 2 slowly, break the seal on tap 4 and close tap 1 again…” Even I glaze over reading those things, and I know what the hell I’m doing with that kit! Yet if you ask “now, what do you need to expose to the vacuum pump right now?” and let them figure out which tap to open, they can usually do it. You might have to stop, flick them on the nose, and actually prompt the question, but it’s absolutely not beyond the capabilities of someone to figure it out on their own. The procedure writes itself.

It’s a bit more information to take in at first, and it might be quite a bit of effort to actually teach it compared to writing down the procedural list. But you can’t get the procedure wrong once you’ve learned the actual inner workings of the equipment: because the wrong procedure makes no sense at all.

And that can apply back to percentages, too. Someone just taught it might ask “I can’t remember, do I divide by 100, or multiply by 100 to get the percentage?” Don’t laugh there, anyone just taught to rote-memorise the procedure can seriously fall into that trap. But when you actually know what “%” means, that question literally answers itself.

Ultracold Cells on Titan – Yay or Nay?

Listen up pop-science fans, I might be just about to pop one of your bubbles (or maybe not). This one, in fact. The original paper can be found here – it’s open access, and therefore extra awesome. I thought I’d do this before the Discovery Institute get their grubby mitts on it.

The regurgitation of the press release begins as follows:

Ultracold-Resistant Chemical on Titan Could Allow It to Harbor Life

Astrobiologists and planetary scientists have a fairly good idea of which chemicals might indicate the presence of oxygen-breathing, water-based life—that is if it is like us. When it comes to worlds such as Saturn’s moon Titan, however, where temperatures are too cold for aqueous biochemistry, it’s much harder to know which chemicals could signal the existence of hydrocarbon-based life.

Oh, I love pop-science headlines. They always go at least ten steps ahead of the research they’re actually reporting. In their defence, Scientific American do a decent job and don’t oversell it once they hit the third or fourth paragraph, but I want to go a little deeper into the theory because I’m kind of a nerd. I’ll cover some of the core strengths and disadvantages of what they’re doing in this research.

In brief – What the f**k are they doing?

Life on Earth requires some sort of membrane to contain it. We call these cells. You may have heard of them. These are made – as high school biology graduates will know – of phospholipid bi-layers that create a fully encased supramolecular structure. These layers form because the phospholipid molecules have parts that are attracted to water, and parts that move away from water. Obviously, we pretty much live in water, so there aren’t many options for the parts of the molecule that dislike water – the long hydrocarbon chains – and so they form small globules called micelles, where the long chains face inward, protected from water by a shell made of the parts of the molecule that actually like to bind to water. In higher concentrations these start forming membranes, where there are two layers – the hydrophobic, water-hating, parts of the molecule all turned in and the hydrophilic, water-loving, parts turned out. Eventually, with the right concentration, they form bi-layered cells.

This is exactly how it happens.


Of course, this all means we need liquid water. The chemistry of these membranes and bi-layers doesn’t work particularly well in other solvents, and certainly not at low temperatures where water and lipid molecules freeze solid. So the question is this: can the same structures form in other solvents, using other molecules, at temperatures outside the “habitable zone” of the solar system? More specifically, can this be done under the conditions on Titan, where liquid methane acts as the moon’s “water”, and simpler organic molecules act as the phospholipids?

The answer seems to be: in principle, yes.

So it means life is possible?

Yes and no. The theory proposes a way to build the membranes and cells required to contain life – these keep the active metabolic chemicals in high concentration (the original paper mentions this as part of the introduction, it’s all part of the “RNA World” hypothesis for abiogenesis), allowing life to form and evolve. But this is far from the greatest barrier to self-organised and self-replicating life. Even if these hypothetical cells form, they would have to contain some high concentration chemistry – something that would have to be more complex and active than we currently have solid evidence for. The chemical “soup” trapped in there would also have to reach a complexity to start replicating with modification – where evolution can take over and make “life”, as we know it, Jim, almost inevitable. This is a much bigger “if” than the mere formation of membranes, and to be fair even that is still a big “if”.

Even a theoretical proposal would have to import essential chemical properties to a low temperature system with an alkane solvent. This is not impossible, but it is not staggeringly likely either.

“Computational” = “Proceed with caution”

It’s important to keep in mind that this “cells on Titan” research is theoretical – in fact, “hypothetical” might be a closer description, as it’s a big “if” rather than a solid, well-backed theory. This sort of caveat is often the first to go missing as papers get compressed into press releases, and press releases get compressed into pop-science articles, and articles get compressed into Facebook posts and tweets and meme images and Daily Mail comments. Be under no illusions: this work has been done entirely in a computer, and is just a proposition for now.

It gets lost in translation quite a bit.


I can’t and won’t trash work for being purely computational. I’ve done plenty of my own calculations that have interfaced between real-world chemical observations and their theoretical replication, and I’ve mentioned before the successful results of using a genetic algorithm to predict the existence of unusual chemical structures. However, the work I discussed there by Oganov et al. went a step beyond their computational hypothesis – they put their experimental clout where their mouth was and actually made the substances they predicted. Score one solid goal for science, even if it didn’t “completely overturn all of chemistry” as the press release claimed.

So far, with respect to cell membranes forming on Titan, there’s no empirical data forthcoming. Is this because someone has tried, failed, and neglected to publish? Is it because conclusively demonstrating that cells don’t form in liquid methane would mean proving a negative? The experiment might not be straightforward to do, and if the hypothesis is solid there may always be some set of conditions under which it could still happen. But I expect it will come eventually, particularly if they’ve piqued the interest of parties capable of doing the experimental work. This hypothesis will either sink or swim (in liquid methane, of course) on the basis of that.

Fuck the Disco ‘Tute getting hold of the story, it’s IFLS messing it up that you need to worry about.

Molecular Dynamics simulations

Computing the properties of molecules is difficult. Computing the properties accurately is even more difficult-er.

Think of it this way – every atom (if you want to treat every atom individually) has to be described by three coordinates of position. And then three coordinates of momentum to give it a direction. And three coordinates of force acting on it, computed from everything else, that will change its momentum and position. It’s clear that as your system grows, you’ll need more data just to describe it. But then there are the interactions that lead up to the force that will alter its position and momentum. Two points gives you one interaction – and this is the only case you can solve perfectly. Three points gives you three interactions (consider a triangle). Four points gives you six interactions and five points requires modelling ten interactions (draw these out if you don’t believe me), and it increases from there. Some theoretical models increase their computational costs even more rapidly than that.
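That pairwise blow-up is just n(n−1)/2, which you can sanity-check in a couple of lines:

```python
# Number of pairwise interactions between n atoms: every atom with every
# other atom, counted once -- n*(n-1)/2.
def pair_count(n):
    return n * (n - 1) // 2

for n in (2, 3, 4, 5, 100):
    print(n, pair_count(n))   # 2->1, 3->3, 4->6, 5->10, 100->4950
```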

If you want to describe a very large system, say, a protein, or a layered membrane formed from dozens or hundreds of molecules, you will have thousands upon thousands of interactions to take into account. It stands to reason, then, that the more interactions you have the less complicated your calculations for each one must be. Otherwise you’re talking “age of the universe” time scales for making your calculation. This is where molecular mechanics and molecular dynamics come into play – you take your molecules and you simplify down the possible interactions to the most basic level, then run the simulation that way using assumptions and less intensive calculations.

In general, this is alright. You can get the basics of what a large number of molecules will try to do just from running such simple calculations, and the OPLS model used in this work is accepted as good enough for the task at hand. So the method is what we’d call “robust” – that is, it’s one of those things where 60% of the time it works 100% of the time.

If you download a neat bit of freeware called Argus Lab (warning: it’s not under active development at the moment and tends to run into trouble on 64-bit machines) you can start playing with your own things in a matter of minutes and do things like show DNA bases binding to each other using molecular mechanics calculations. The exact values you get for the strength of that interaction are dubious-as-all-hell, but hey, from fundamentally simple equations you can predict that DNA works. That’s just cool, right?

Errr…. I’ll assume this point will skip you by, that’s fine.

But the simple methods are not perfect and foolproof. Often you need to fudge a few of the simulations with real-world data. These methods are known as “semi-empirical” (you can work out the etymology of that at home) and the garbage-in-garbage-out principle holds true for them. Sometimes, even if you do try to fudge it with decent empirical data, you still can’t get a good result. Even trying to work out the properties of water – something you’d think is the most well-studied molecule in existence – is insanely difficult and actually requires modelling a lot of water molecules, because the interactions are that dispersed. You can’t calculate something like the hydrogen-bond strength of H2O just by considering two H2O molecules interacting. So you need to validate the simple model to make sure you aren’t falling foul of this sort of physics trickery.

The main bit of data used to validate the OPLS model in this work is the binding energy between two of their target molecules. The authors compared the predicted energies from the model that made the self-organised layers to an energy taken from a more robust and reliable (a “higher level”) calculation. And this is the part where I need to say “proceed with caution” again, because the data they’re comparing against still isn’t empirical – it’s itself derived from a calculation.

Ab initio Calculations

If you scroll down the original open access paper you’ll find the frightening combination of numbers and letters “M062X/aug-cc-pVDZ”. To explain this as quickly as possible, everything before the “/” is the “functional” – this is the theory, as laid out by clever computational people and physicists with a lot of spare time on their hands, that you will use to spit out an energy from your calculation. Everything after is the “basis set”, which are the basic building blocks of the atoms (more specifically, the electrons) that you’ll use to help derive it. There are an astounding number of each, and they are all completely interchangeable (although some combinations are more sensible than others). And each combination will spit out different energies for even the same molecule.

Calculating the binding energy between two molecules is almost comically simple. You set up your molecule with the theory and basis set you want to use to model it, and the calculation spits out an energy value. You then set up two molecules next to each other, and the same calculation spits out another energy value. If the latter is lower than two lots of the former, the molecules prefer to sit next to each other by that amount of energy.
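The bookkeeping looks something like this – the energies below are made-up placeholders, not output from any real calculation:

```python
# The "comically simple" binding-energy recipe described above, with
# hypothetical placeholder energies (in hartree) standing in for the
# numbers a real ab initio code would spit out.
E_monomer = -154.321          # energy of one isolated molecule
E_dimer   = -308.650          # energy of the pair sat next to each other

E_bind = E_dimer - 2 * E_monomer
print(round(E_bind, 3))       # -0.008: negative, so the pair is bound
```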

There are a few caveats to this, such as basis-set superposition error (BSSE), which is basically the error associated with assuming the “comically simple” approach I just described, but you can correct for that easily enough. Since you didn’t ask: you do this by taking the molecules individually as described above, but giving each one access to the atomic orbitals, aka the basis functions, of the other molecule – without actually putting the other molecule or its electrons there. You then do some mathematical jiggery-pokery with the resulting combination of energies and you arrive at your correction. This is another thing you need to do, or your TAP-IPM will chase you around with a chair.

Now, the major trouble with ab initio (from base principles) calculations is that they need to be calibrated. You do this by picking a method that produces reliable results for the work at hand.

And that’s the trick, you have to find the right combination that works. If the theory and basis-set combination you choose replicates an energy that you’ve actually measured (a known quantity) within a few percent, it’s a good bet that it will successfully predict the energy of an unknown if you’re looking at a similar-enough system. A lot of simple organic reactions can be predicted well by the combination labelled “B3LYP/6-31G”, which is about as close as you can get to a “standard” or “default” combination. But B3LYP/6-31G fails miserably for a lot of transition metals and organometallic compounds, which is where you need to start getting creative. If the process you are studying is intra-molecular – i.e., bits are just rearranging, rather than falling off or coming on – then most combinations tend to be much of a muchness. But when you’re talking inter-molecular interactions, particularly the van der Waals or electrostatic interactions between molecules, the right combination is essential. Again, garbage-in-garbage-out.

But you must measure it against something known, otherwise you are shooting in the dark. I once read a paper that proposed a very interesting new twist to a particular catalytic mechanism, something that they claimed had a much lower – and therefore more plausible – energy profile. It looked great. But it turned out they hadn’t actually calibrated/validated it well. If you could even call what they did “validation”. Their supplementary information showed that they had just changed the electron core pseudopotential (an assumption that allows you to ignore all the core electrons around an atom and replace them with a single effective charge) a few times and concluded “well, we get similar enough answers each time so it must be right”. You can’t do this, or your TAP-IPM will chase you around with a chair. Again. Your chosen method has to be calibrated against empirical data, or the garbage-in-garbage-out principle applies. So when I replicated this catalytic system with a completely different level of theory and a different basis set (one that was calibrated against empirical energy values derived from some painstaking kinetic experiments), the claimed effect in this paper effectively disappeared.

And this is where I get a little dubious about micelles actually forming in liquid methane. The molecular dynamics, and particularly the more detailed conclusions of the paper, rely on an accurate binding energy between two molecules. Without this, you could get any old result. You could stick in a random number for the binding energy and see molecular self-assembly from the simplified molecular dynamics calculations that is wholly unrealistic. The energies associated with that self-assembly may well be off by a huge margin, and when you start plugging them into thermodynamic equations where they sit in an exponent, your errors become far more staggering. I am also dubious about taking a binding energy from just two molecules alone. If you’re talking about large structures such as micelles, I really would like to see some ab initio work done on larger clusters, including tetramers, to see how they start interacting at this higher and more precise level of theory, BSSE-corrected or not. As I touched upon above, in water you need to get to several layers of interacting water molecules to approach experimental accuracy. Is this level of detail needed in this case? It might hurt the hypothesis, but it can’t hurt its reliability.
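To see why those exponents matter: Boltzmann-type weightings go as exp(−ΔE/RT), so at Titan-ish surface temperatures (roughly 94 K) even a few kJ/mol of error in a binding energy gets amplified enormously. The energies below are hypothetical illustrations, not values from the paper:

```python
import math

# Boltzmann-type weighting: the relative population of a bound state goes
# as exp(-dE / RT), so errors in dE sit in an exponent and blow up.
R = 8.314e-3                           # gas constant in kJ/(mol*K)
T = 94                                 # K, roughly Titan's surface temperature

def weight(dE):
    return math.exp(-dE / (R * T))

# Two hypothetical binding energies only 4 kJ/mol apart:
print(weight(-14.0) / weight(-10.0))   # a factor of well over a hundred
```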

I also have to question the use of implicit solvation in their quantum mechanical model – that is, not making the calculation in the presence of actual solvent molecules (almost essential if you’re going to imply that the solvent drives this assembly!) but in just a polarisable continuum that, let’s be brutally honest about this method, only vaguely represents the idea that there’s a solvent if you squint a bit and squish it about. The binding energy that they calculate to configure and validate the model isn’t, of course, just molecules versus molecules separated by an infinite distance – it competes against the ability of the molecules to bind to the solvent explicitly. This isn’t always trivial. Because solvent molecules can make very specific interactions with solute molecules, solvent getting in there, breaking up the self-assembled layers and altering their stability needs to be accounted for much more explicitly than they have done to make the results more robust.

Is all that required for acrylamide and other similar molecules working in methane? Possibly, possibly not. Hopefully the authors have done their background reading to figure that out, and I’m willing to give them the benefit of the doubt given that they’re using fairly robust procedures and methods – although these methods are compared against reasonable standards (M062X) rather than a “gold standard” like CCSD(T). It’s reassuring that the OPLS model’s binding energies were within 4 kJ/mol of the ab initio results, suggesting the model has merit, but as I’ve pointed out above, theoretical self-consistency should take a back seat to consistency with experiment, because the former can be fudged so very easily.


Life or not, it’ll almost certainly have some interesting chemistry

Overall, I think this is a pretty cool and promising result. The work by Oganov on sodium chloride stoichiometry that I’ve discussed previously on this blog demonstrates the predictive power of computational chemistry, and this could well do the same. The authors here have demonstrated some excellent potential chemistry that could be going on in liquid methane oceans. However, save the champagne for now. Without comparison of their values to experimentally derived ones – and, ultimately, experimental verification that self-assembly of these molecules actually happens in liquid methane – there is no hard evidence, yet, that this theory is realistic. Hopefully those experiments are coming soon, so we can see if it holds up. Because if we can build these structures in the lab, and then figure out a reliable way to detect them in the wild on Titan, then whether or not they lower the barrier to life is almost beside the point – it will be some really interesting chemistry we’ve found.

Things You Should Probably Stop Saying – “BUT EVERYTHING IS A CHEMICAL!”

Anyone who hasn’t lived under a rock somewhere in the barren wastelands of arse-nowhere for the last half century will probably, at some point, have heard about dangerous “chemicals” in our food and water supply. It’s an amazingly common trope. It isn’t just limited to food-bloggers and woo-merchants, it’s practically embedded into our language. “These nasty chemicals are everywhere, and they’re not good for us. So eat organic, avoid chemicals, or you will die!” they may cry.

“Avoid chemicals!” comes one piece of advice. “That food’s bad for you because it has too many chemicals in it” says another.

This is, of course, utter nonsense. As evidenced by the average skeptic and pro-science response, which usually goes thusly:

Silly peon! Don’t you know everything is a chemical? Water is a chemical. Air is a chemical. You like drinking water and breathing, don’t you? But you’re an idiot! You don’t get it, you don’t understand what a chemical really is! What about (R)-3,4-dihydroxy-5-((S)-1,2-dihydroxyethyl)furan-2(5H)-one? That sounds scary doesn’t it! But that’s vitamin C! You’re such an idiot. This is nothing but chemophobia!

And fair point, to a certain degree. Chemophobia – nominally the “fear of chemicals” but is really just the “fear of chemicals whose names you can’t pronounce” – is a serious problem that interferes with scientific literacy and keeps a lot of really stupid people (*cough*Vani Hari*cough*) financially solvent with ActualMoney.

But… because there’s always a “but”… “everything is a chemical” doesn’t actually refute what our hypothetical woo-merchant is saying.

This post is about how “everything is a chemical” is a phrase and reasoning you need to stop using. It misunderstands what it’s supposed to refute, and it doesn’t help.

We know from observation that a hypothetical woo-merchant who avoids “chemicals” probably isn’t against breathing an admixture of O2, N2, Ar, CO2 and H2O in their gaseous states. Similarly, we can presume they’re not against drinking dihydrogen monoxide (or “oxidane”, to give it its IUPAC name), nor are they scared by the concept of vitamin C being generally a good thing. Their behaviour demonstrates this nicely.

Quite clearly, these things are not in the category they are talking about when they say “we should avoid this”. They’re drawing a ring around a group of substances and saying “avoid these”. What they call that ring and that set is irrelevant, because we can clearly tell what they mean from their usage of it. Their use of the word “chemical” might be ill-defined and slightly non-technical (see below), but simply re-defining what they mean by “chemical” on only our side of the conversation does nothing to refute their claim nor their fundamental errors.

They have a set of Things they call chemical. We have a set of Things we call chemical. They say one thing about their set, we say something different about our set. The only thing uniting those arguments is their common label, nothing more.

Refuting a claim by running your argument over a completely different set of Things isn’t a technicality, and it isn’t nitpicking nor pedantry. It’s just plain fallacy.

It would be as if Person A said “look at those 99 red balloons go by!” and Person B declared “FALSE! There are only 45 red balloons, the remaining 54 are pink!” and concluded that, therefore, Person A was lying completely and no balloons of any colour went by. Or, if on being told that your friend was in hospital following a car crash, you decided that they couldn’t possibly be in hospital because, technically, it was a hatchback and not a car. Such skeptical dismissals ignore the point of the argument, ignore the uses of the words that form the argument, and focus instead on a trivial mismatch of labelling, mistaking it for actual content.

Someone arguing to avoid chemicals very clearly uses “chemical” to refer to a sub-set of all substances. They know that, and – get this – we also know that. We must, because we’re so eager, apparently, to correct their usage with “but everything is a chemical”. That’s something we couldn’t do if we didn’t at least understand their meaning. So it’s almost as if the typical “everything is a chemical” response actively demonstrates that skeptics don’t really want to engage with the argument, but want to claim superiority in terminology.

It also reduces things to a sound-bite that convinces only the already-converted: “Hey, you told that Food Bitch that everything was chemical! High-five skepto-dudebro!! Haha LOL!”

From the other side, that doesn’t look like a convincing argument so much as people actively ignoring what you have to say, and that convinces no-one. If Person B wants to change Person A’s mind about 99 balloons going by, then their best starting point is to acknowledge the wider variety of hues within the “red” set as used by Person A.

In short: refute what the other person actually talks about, not what you want them to be talking about.

Still, I think there’s a more fundamental error going on. Something that misunderstands chemicals, chemistry, and chemists. And this is where I think I have a few qualifications to butt in and comment:

Calling everything “a chemical” isn’t even a technical definition as used by actual chemists.

This may seem odd, but actually think of the manipulations that chemists have to do on a day-to-day basis. If chemists accepted “everything is a chemical” in an absolute sense, we’d have no use for the term at all. The water running through a reflux condenser would be a “chemical”. The nitrogen running through the Schlenk lines would be a “chemical”. Our lunch would be a “chemical”. Hell, our bodies are a god-damned dangerous chemical refinery of unfathomable complexity that chemical the chemicals with the chemical chemicals.

If everything were a “chemical” to us, a simple instruction like “put all the chemicals back in the chemical cupboard” – an instruction barked at undergraduates with increasing profanity as time wears on – would be literally meaningless. The only way to satisfy such an instruction would be to cram the entire universe into a loosely-defined cupboard. And then put the cupboard itself inside it, too. As, of course, everything is a chemical – including the cupboard. If we had to run a risk assessment on the chemicals used in a prep, would we need to fill in the COSHH form for water, for oxygen, for the cellulose in the wooden desks or the plastics on the chairs? After all, those would all be chemicals, and they’d all be involved.


Look at the chemicals inside a chemical surrounded by chemicals in a bed of chemical… now, fetch me the chemical.

Technically, that’s correct – you know, the best kind of correct. If you want to define it like that, of course. But that definition of “everything is a chemical” is not useful to us.

Professional chemists use “chemical” to mean just a sub-set of all substances in the world. We use it to mean just the substances (usually solid, sometimes liquid) that we intentionally mix together for a reaction. Often, even solvents are excluded from the category “chemical” because they’re not usually part of that intentional reaction, but just a support medium. I’m speaking in terms of the common parlance, of course, as you’d use it in a daily conversation with another chemist. In a more formal setting we’d use something way, way more precise – like “reagent”, “solvent”, “catalyst” – or literally name the substance instead. “Everything is a substance” is more likely to resonate with a chemist than “everything is a chemical”.

Even if we held, on an abstract level, that everything is a “chemical”, we wouldn’t (and couldn’t) actually use the word that way. It’d be too broad to be useful.

So, in fact, the “technical” definition of a chemical is far closer to the woo definition than most pro-science skeptics think.

Instead say…

Well, I’d go for something like this. There isn’t a nice sound-bite, but sound-bites are for you and your revision purposes, not for anyone else.

Your definition of “chemical” is really arbitrary. You seem to put substances you don’t like into it, while ones you do like aren’t included… but you’re never really clear why. This is a problem because it ignores some really important concepts, such as the dose-response relationship. A sufficiently low dose of something that you might consider dangerous (like cyanide or benzene) won’t cause harm – yet a sufficiently high dose of something you might consider benign (like water) will definitely cause you a lot of harm.

It also doesn’t take into account the many safety studies on substances that quantify their relative harm with dose in mind. For instance, formaldehyde is certainly a dangerous substance in large quantities – but it’s such a common metabolite, and such a simple molecule, that you can find a higher concentration of it, quite naturally, in a single apple than in shampoos that have been forced off the shelves for containing it.

Perhaps if you were more specific about the precise substances you’re against and the dosage limits you find acceptable or unacceptable, and why, then your arguments might be better accepted by the scientific community. Because as it stands, your definition and usage of “chemical” is simply vague and ill-defined – so we can’t really understand what you don’t like. You only seem to be using it to import the connotations of the smoke-stacks and refineries of the petrochemical industry, which look bad, and use them to imply that otherwise-benign substances are far more dangerous than they really are.

Feel free to copy-paste that. Add the “you’re an idiot” parts back in as you see fit.
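If it helps to see the dose-response point laid out bluntly, here’s a minimal sketch in Python. The thresholds are deliberately made-up round numbers for illustration only – they’re not real toxicological data, and “scary” and “benign” are hypothetical placeholder substances, not specific claims about anything.

```python
# Illustrative sketch of the dose-response point: harm depends on dose
# relative to a substance-specific threshold, not on whether the name
# sounds "chemical-y". All numbers below are hypothetical.

def is_harmful(dose_mg_per_kg, threshold_mg_per_kg):
    """A dose is harmful only when it exceeds the substance's threshold."""
    return dose_mg_per_kg > threshold_mg_per_kg

# Hypothetical thresholds (mg per kg of body weight), for illustration only:
thresholds = {
    "scary-sounding substance": 1.0,      # low threshold: dangerous at small doses
    "benign-sounding substance": 10000.0, # high threshold: harmful only in huge doses
}

# A tiny dose of the "scary" one falls below its threshold...
print(is_harmful(0.001, thresholds["scary-sounding substance"]))
# ...while an enormous dose of the "benign" one sails past its threshold.
print(is_harmful(50000.0, thresholds["benign-sounding substance"]))
```

The point the toy model makes: whether something hurts you is a question about numbers, not about which side of an arbitrary “chemical” line the name falls on.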