A Graphical Explanation of Consent

Apparently, explaining basic issues like “consent” to the internet is like explaining “descent with modification” to Duane Gish. Except Gish is dead, so at least you’ll have his corpse’s undivided attention.

asking for it

So there we have it. Any questions? No? Good. It’s 20-fucking-15 already, why in the unholy fuck is this conversation still apparently necessary?

Digital Painting Advice

This will be slightly off-topic for this blog’s normal theme, not that it has one, so feel free to ignore. Here are some generic tips for digital painting that no-one asked for – as bullshit-free as I can get them.

1) Get your tutorials from CG Society, not DeviantART

You’d think that the idea of a tutorial would be to teach a technique to someone, and to raise their ability a certain degree. Oh, my sweet summer child, how wrong you are! The real aim of a tutorial – and I do mean this with the utmost offence to those people who do it – is for the artist in question to show off how good they are! At least, this appears to be the point if you flip through DA’s tutorial section. Because, of course, “then add the shading” and “now I scribble in the detail” followed by “I hope you found this useful” teaches nobody anything – and means I should have the right to stab you in the eyeball with a stylus. Check out CG Society instead; it’s literally aimed at professionals.

2) Learn colour relativity

Colour theory is one thing artists like to bang on about because it makes it look as if they’ve actually learned something complicated, and that it might just be science, or at least difficult. But it really is useful – no sarcasm, it really is. If you put down something that’s grey and it looks really dark, it won’t look dark once you’ve put in every other area that’s dark – in fact, it’ll look like some washed-out middling grey. The same goes for colours: things that look blue might actually be purple, or a bit red, or green, or really de-saturated (particularly true with eyes). There’s no real trick to solving this, just be aware of it and correct as you go.

See that dark-brown square in the middle of the top face and the light-yellow square in the middle of the front face? Look again, closely…

3) The paint-in-black-and-white-first trick is bollocks, sort-of…

There’s a technique in digital painting where you paint entirely in monochrome first, then pop in another layer, set the blend mode to “colour” and colourise it. The theory is that this way you can establish the value – the range of black to white – first and get it right. This is important because if the value is off, it just won’t look right. In my experience this sort of works, but it effectively doubles your workload and really screws with your choice of colour, keeping it a little flat, as you end up colourising one whole area with one tone. To compensate, you change the colour frequently to get a more varied hue… but then you end up effectively painting it all again anyway, so it’s arguably pointless. If that sounds like too much work, simply pick a program that lets you display the image in black-and-white monochrome, and use that to check the value.
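If your painting program can’t do that, almost anything scriptable can. A minimal sketch with Python and the Pillow library (the file name is hypothetical, obviously):

```python
from PIL import Image

# Collapse the painting to greyscale and eyeball whether the values
# (the black-to-white range) actually read the way you think they do.
Image.open("painting.png").convert("L").show()
```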

4) Photoshop is kinda overrated, too

There’s no need to hand your hard-earned cash to Adobe. Or even torrent the thing, to be honest. Unless you’re doing high-concept and slightly abstract art that requires textured brushes, there’s no real need for Photoshop. Simpler programs like MyPaint don’t have textured or shaped brushes, but they’re really not essential – and even if you do want them, Krita is free/open-source and has them. To be frank, once you play with something as stripped-back as MyPaint you might just find all the excess tools Photoshop and other image-manipulation programs come with are just distractions that get in the way. Stripping it back to the simple tools actually teaches you to paint.

Free, and with an installer under 10 MB – which gives you a lot of bang for your byte.

5) Flip the image

Seriously, mirror the image. There are two reasons for this. One, you can check that it looks okay. If you’re drawing a face, for example, it might look fine and then suddenly, in the mirror image, you scream “holy shit, what is that monstrosity?!” Flipping refreshes your perspective and lets you check symmetry, and the final product will be much better for it. Two, it helps you draw curves better. If you’re right-handed, say, you’ll draw a curve the way your wrist moves (the top-left arc of a circle) more easily than the opposite. Flipping the image lets you work on awkward curves more comfortably. This isn’t as lazy and absurd as it sounds – when drawing on paper, people do this instinctively.
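Again, any half-decent program has a flip/mirror command; failing that, it’s a one-liner in the same vein (hypothetical file name again):

```python
from PIL import Image, ImageOps

# Mirror the canvas horizontally; wonky features leap out immediately.
ImageOps.mirror(Image.open("painting.png")).show()
```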

I’m spotting a few things I didn’t catch first time on the flipped image…

6) You don’t really need expensive kit

I’ve basically used entry-level graphics tablets since I started digital painting. There’s really no need for expensive ones – ignore the people saying otherwise. This is especially true if you’re not getting paid to do this daily. You need half-decent pressure sensitivity and that’s it. You don’t need a large tablet area – your hand-eye coordination and muscle memory will adjust pretty quickly to any size – and you’ll also adjust to the touch and feel (whether the stylus is more scratchy or slippy). You certainly don’t need to donate your life savings to Wacom. These things are tools, and while a better tool might make you feel a little better about yourself (and I won’t deny that I actually like my Bamboo), it won’t actually contribute any significant improvement.

7) Don’t be afraid to trace

Seriously, you’re working digitally – you can add layers, so just put your source image in the background and scribble over it. Purists might say it’s cheating, but I say there’s no point wasting your time if it transpires that your actual aim is to replicate what you see.

8) Layers are not always your friend

Layers? Yes or no? To be frank, they’re a double-edged sword. On the one hand, they’re great for keeping elements separate. On the other, they can get confusing – if you’re not paying attention you’ll end up drawing into Layer A what you meant to put into Layer B and vice versa, and five minutes of that without noticing will leave you with a horrid mess that you then have to clean up. Generally I keep to 3–4 layers these days: a background, just to keep it isolated from the subject as edges get tricky sometimes; the subject, for the same reason; any effects on top; and a sketchy outline on top of all that. Hair I tend to do in multiple layers, but that’s for good reasons – see the CG Society tutorial on realistic hair for that.

9) Watch people

If I’m in a room with people and a strong light source, I have this weird tendency to watch how the light actually moves around the contours of their faces. How it dips in a bit around the chin and lips, or rides over the slight ridge of an eyebrow… I’m not sure if it actually helps, and it might just make me come across as creepy when I’m really just doing research, but it’s certainly worth being aware of exactly what goes on with shadows and highlights and how they contour around people.

Low-key lighting is good for picking out contours. Understanding that does require getting to grips with the concept of a three-light setup – do some 3D modelling or photography for that.

10) If you want realism, think shadows

The human mind and eye take into account dozens of separate cues to build up a picture of the world around us and infer its 3D structure. Without that ability, we’d be pretty much incapable of surviving in the wild. As a result, if you want to do anything resembling a “trompe-l’œil”, you don’t need pixel-perfect brush strokes or an atomic level of detail – you need to figure out how things cast shadows and interact with each other. Suppose you’re drawing someone in a baseball cap: it might look staggeringly brilliant, but until you pen in the shadow the brim casts across their face, there will be something wrong with it – and you might just not be able to figure out what unless you’re looking for it!

What we have here is a series of leather straps holding objects against a wall. Take a close look at where things are stuffed into the straps and how the shadow bows downward a bit where the strap is pulled from the wall. A less subtle version of this point can be found here.

11) Sample carefully

You’re working digitally and you have an image you want to use as a reference, so clearly the best way to get the right colour is to take your colour picker and… well, yes and no. Certainly, it will be the fastest and most efficient way to get the colour you want to use, but be aware of the following two technical limitations:

  • Compression – if your sample image is a JPEG, or any file with a “lossy” compression method, the colour you sample might not be the right one. Zoomed out, you might see the right colour, but zoom in to the point where you can see individual pixels and you might see a grey-ish one there, a blue one here, a green one there – and then when you zoom out it’s orange. This is perfectly normal for image compression (and partly colour relativity at work). There are a few ways around this. You could smudge/blend your source image to get rid of the compression artefacts and get the colour you want. You could sample more than one spot. Or you could use the colour picker to get an idea of what the colour is, i.e. select it, drag the picker over the image in question, and watch the coordinates on your colour wheel or colour triangle (or whichever tool you have available) jump about. Then you’ll see what shades and tones are in the image, and can select accordingly. (There’s a sketch of a patch-averaging version of this after the list.)
  • If you have a brush tool that has a non-100% or pressure-sensitive opacity, or a blend function built into it, then the colour you put down will depend on the sample you picked and the existing colour underneath it. If that underlying colour is black or white, then the colour you draw will look washed out (see below). As a result, you might want to pick the colour, then from your colour wheel/tool add a slight and tasteful boost to the saturation of the colour in question.
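Here’s that patch-averaging idea as a sketch – averaging a small neighbourhood around the target point instead of trusting one JPEG-mangled pixel (file name and coordinates are hypothetical):

```python
from PIL import Image
import numpy as np

def sample_patch(path, x, y, radius=3):
    """Average a (2*radius+1)-pixel square around (x, y) in RGB."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    # Mean over the patch smooths out compression noise in one go.
    return tuple(patch.reshape(-1, 3).mean(axis=0).round().astype(int))

print(sample_patch("reference.jpg", 120, 80))  # hypothetical file/coords
```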

Also keep in mind that colours are often the result of underlying texture – especially in skin and hair – so any samples you do take won’t look right until you put in the detail/texture. Sorry, there is no way around that, you’ll just have to put in the work!

What looks like one colour far away will look like something different close up – so you probably can’t get away with just sampling single pixels with a tool to get the tone you want. It may require actual work.

12) Don’t highlight with just white

If you’re using a digital brush with a slight blend or blur option built in (a mixer brush), or something with slightly lower opacity, the way it mixes with the underlying colour becomes quite important. Because the software has to interpolate between RGB or HSV values to mix your new “paint” with the “paint” below it, white has a strong tendency to just blur out any of the hue. This is especially true if you’re highlighting over a dark object – the dark colour will have low saturation, and the software doesn’t know you want it to highlight straight to, say, red. It’ll just treat it as you highlighting something very nearly black with something that is simply very, very light grey. So it’ll produce grey. Instead, when working up from a much darker colour, highlight sequentially through progressively lighter colours.
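You can watch this happen in a few lines of Python: linearly interpolating any RGB colour toward pure white drags its saturation to zero on the way, which is exactly the grey wash-out described above.

```python
import colorsys

def lerp(a, b, t):
    """Linear interpolation between two RGB colours (components in 0-1)."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

dark_red = (0.30, 0.02, 0.02)
white = (1.0, 1.0, 1.0)
for t in (0.0, 0.5, 0.9):
    h, s, v = colorsys.rgb_to_hsv(*lerp(dark_red, white, t))
    print(f"t={t}: saturation={s:.2f}, value={v:.2f}")
# Saturation collapses as the value climbs - stepping through
# progressively lighter reds instead keeps the hue alive.
```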

13) Just practice

Yeah, really, just do it more. Everyone has a set number of really crap drawings inside them, and the faster you get them out of the way the faster you can get to the good ones.

Ultracold Cells on Titan – Yay or Nay?

Listen up pop-science fans, I might be just about to pop one of your bubbles (or maybe not). This one, in fact. The original paper can be found here – it’s open access, and therefore extra awesome. I thought I’d do this before the Discovery Institute get their grubby mitts on it.

The regurgitation of the press release begins as follows:

Ultracold-Resistant Chemical on Titan Could Allow It to Harbor Life

Astrobiologists and planetary scientists have a fairly good idea of which chemicals might indicate the presence of oxygen-breathing, water-based life—that is if it is like us. When it comes to worlds such as Saturn’s moon Titan, however, where temperatures are too cold for aqueous biochemistry, it’s much harder to know which chemicals could signal the existence of hydrocarbon-based life.

Oh, I love pop-science headlines. They always go at least ten steps ahead of the research they’re actually reporting. In their defence, Scientific American do a decent job and don’t oversell it once they hit the third or fourth paragraph, but I want to go a little deeper into the theory because I’m kind of a nerd. I’ll cover some of the core strengths and weaknesses of what they’re doing in this research.

In brief – What the f**k are they doing?

Life on Earth requires some sort of membrane to contain it. We call these cells. You may have heard of them. These are made – as high school biology graduates will know – from phospholipid bi-layers that create a fully encased supramolecular structure. These layers form because phospholipid molecules have parts that are attracted to water and parts that shun it. Obviously, we pretty much live in water, so there aren’t many options for the water-hating parts of the molecule – the long hydrocarbon chains – and so they form small globules called micelles, where the long chains face inward, protected from the water by a shell made of the parts of the molecule that actually like to bind to it. In higher concentrations these start forming membranes with two layers – the hydrophobic, water-hating parts of the molecule all turned in and the hydrophilic, water-loving parts turned out. Eventually, at the right concentration, they form bi-layered cells.

This is exactly how it happens.

Of course, this all means we need liquid water. The chemistry of these membranes and bi-layers doesn’t work particularly well in other solvents, and certainly not at low temperatures where water and lipid molecules freeze solid. So the question is this: can the same thing form in other solvents, using other molecules, at temperatures outside the “habitable zone” of the solar system? More specifically, can it happen under the conditions on Titan, where liquid methane acts as the moon’s “water” and simpler organic molecules act as the phospholipids?

The response seems to be that, in principle, the answer is yes.

So it means life is possible?

Yes and no. The theory proposes a way to build the membranes and cells required to contain life – these keep the active metabolic chemicals in high concentration (the original paper mentions this in its introduction; it’s all part of the “RNA World” hypothesis for abiogenesis), allowing life to form and evolve. But this is far from the greatest barrier to self-organised and self-replicating life. Even if these hypothetical cells form, they would have to contain some high-concentration chemistry – something more complex and active than we currently have solid evidence for. The chemical “soup” trapped in there would also have to reach sufficient complexity to start replicating with modification – the point where evolution can take over and make “life”, as we know it, Jim, almost inevitable. This is a much bigger “if” than the mere formation of membranes, and to be fair even that is still a big “if”.

Even a theoretical proposal has to import the essential chemical properties of life into a low-temperature system with an alkane solvent. This is not impossible, but it is not staggeringly likely either.

“Computational” = “Proceed with caution”

It’s important to keep in mind this current “cells on Titan” research is theoretical – in fact, “hypothetical” might be a closer qualitative description, as it’s a big “if” rather than a solid, well-backed theory. This sort of caveat is often the first to go missing as papers get compressed into press releases, and press releases get compressed into pop-science articles, and articles get compressed to Facebook posts and tweets and meme images and Daily Mail comments. Be under no illusions: this work has been done entirely in a computer, and is just a proposition for now.

It gets lost in translation quite a bit.

I can’t and won’t trash work for being purely computational. I’ve done plenty of my own calculations that have interfaced between real-world chemical observations and their theoretical replication, and I’ve mentioned before the successful results of using a genetic algorithm to predict the existence of unusual chemical structures. However, the work I discussed there by Oganov et al. went a step beyond their computational hypothesis – they put their experimental clout where their mouth was and actually made the substances they predicted. Score one solid goal for science, even if it didn’t “completely overturn all of chemistry” as the press release claimed.

So far, with respect to cell membranes forming on Titan, there’s no empirical data forthcoming. Is this because someone has tried, failed, and neglected to publish? Is it because conclusively demonstrating that cells don’t form in liquid methane would mean proving a negative? The experiment might not be so straightforward to do – if the hypothesis is solid, there may always be some set of conditions that would make it happen. But I expect it will come eventually, particularly if they’ve piqued the interest of parties capable of doing the experimental work. This hypothesis will either sink or swim (in liquid methane, of course) on the basis of that.

Fuck the Disco ‘Tute getting hold of the story, it’s IFLS messing it up that you need to worry about.

Molecular Dynamics simulations

Computing the properties of molecules is difficult. Computing the properties accurately is even more difficult-er.

Think of it this way – every atom (if you want to treat every atom individually) has to be described by three coordinates of position. And then three coordinates of momentum to give it a direction. And three components of force acting on it, computed from everything else, that will change its momentum and position. It’s clear that as your system grows, you’ll need more data just to describe it. But then there are the interactions that produce that force. Two points give you one interaction – and this is the only case you can solve perfectly. Three points give you three interactions (consider a triangle). Four points give you six interactions, and five points require modelling ten interactions (draw these out if you don’t believe me) – in general, n points give you n(n−1)/2 pairwise interactions, and it climbs from there. Some theoretical models increase their computational costs even more rapidly than that.
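If you don’t fancy drawing those diagrams, a three-line sketch shows how fast the pair count blows up:

```python
def pair_count(n):
    # Each of n atoms interacts with the (n - 1) others; halve it so
    # each pair is only counted once: n(n - 1) / 2.
    return n * (n - 1) // 2

for n in (2, 3, 4, 5, 100, 10_000):
    print(n, "atoms ->", pair_count(n), "pairwise interactions")
# 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 100 -> 4950, 10000 -> 49995000
```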

If you want to describe a very large system, say, a protein, or a layered membrane formed from dozens or hundreds of molecules, you will have thousands upon thousands of interactions to take into account. It stands to reason, then, that the more interactions you have the less complicated your calculations for each one must be. Otherwise you’re talking “age of the universe” time scales for making your calculation. This is where molecular mechanics and molecular dynamics come into play – you take your molecules and you simplify down the possible interactions to the most basic level, then run the simulation that way using assumptions and less intensive calculations.
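For flavour, here’s what “simplify the interactions down to the most basic level” looks like as code. This is not OPLS – just the classic Lennard-Jones pair potential with made-up parameters – but it’s the same idea: one cheap term per pair of atoms, nothing more.

```python
import numpy as np

def lj_energy(positions, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy for a set of point particles.

    Toy parameters; a real force field (OPLS included) fits epsilon and
    sigma per atom type and adds bonded and electrostatic terms on top.
    """
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):  # every pair, counted once
            r = np.linalg.norm(positions[i] - positions[j])
            sr6 = (sigma / r) ** 6
            total += 4.0 * epsilon * (sr6 * sr6 - sr6)  # repulsion - attraction
    return total

# Four particles on a line, one unit apart
positions = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])
print(lj_energy(positions))
```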

In general, this is alright. You can get the basics of what a large number of molecules will try to do just from running such simple calculations, and the OPLS model used in this work is accepted as good enough for the task at hand. So the method is what we’d call “robust” – that is, it’s one of those things where 60% of the time it works 100% of the time.

If you download a neat bit of freeware called Argus Lab (warning: it’s not under active development at the moment and tends to run into trouble on 64-bit machines) you can start playing with your own things in a matter of minutes and do things like show DNA bases binding to each other using molecular mechanics calculations. The exact values you get for the strength of that interaction are dubious-as-all-hell, but hey, from fundamentally simple equations you can predict that DNA works. That’s just cool, right?

Errr…. I’ll assume this point will skip you by, that’s fine.

But the simple methods are not perfect and foolproof. Often you need to fudge a few of the simulations with real-world data. These methods are known as “semi-empirical” (you can work out the etymology of that at home) and the garbage-in-garbage-out principle holds true for them. Sometimes, even if you do try to fudge it with decent empirical data, you still can’t get a good result. Even trying to work out the properties of water – something you’d think is the most well-studied molecule in existence – is insanely difficult, and actually requires modelling a lot of water molecules because the interactions spread that far. You can’t calculate something like the hydrogen-bond strength of H2O just by considering two H2O molecules interacting. So you need to validate the simple model to make sure you aren’t falling foul of this sort of physics trickery.

The main bit of data used to validate the OPLS model in this work is the binding energy between two of their target molecules. The authors compared the energies predicted by the model that made the self-organised layers to an energy taken from a more robust and reliable (a “higher level”) calculation. And this is the part where I need to say “proceed with caution” again, because the data they’re comparing against still isn’t empirical – it too comes from a calculation.

Ab initio Calculations

If you scroll down the original open access paper you’ll find the frightening combination of numbers and letters “M062X/aug-cc-pVDZ”. To explain this as quickly as possible, everything before the “/” is the “functional” – this is the theory, as laid out by clever computational people and physicists with a lot of spare time on their hands, that you will use to spit out an energy from your calculation. Everything after is the “basis set”, which are the basic building blocks of the atoms (more specifically, the electrons) that you’ll use to help derive it. There are an astounding number of each, and they are all completely interchangeable (although some combinations are more sensible than others). And each combination will spit out different energies for even the same molecule.

Calculating the binding energy between two molecules is almost comically simple. You set up your molecule, and the theory and basis set you want to use to model it, and the calculation spits out an energy value. You then set up two molecules next to each other, and the same calculation spits out another energy value. If the latter is less than two lots of the former, the molecules prefer to sit next to each other by that amount of energy.
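In equation form, for two identical molecules: ΔE(bind) = E(dimer) − 2 × E(monomer). A negative ΔE(bind) means the pair is more stable together than apart, and its magnitude is the binding energy everything downstream hinges on.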

There are a few caveats to this, such as basis-set superposition error (BSSE), which is basically the error associated with assuming the “comically simple” approach I just described, but you can correct for that easily enough. Since you didn’t ask: you do this by taking the molecules individually as described above, but giving each access to the atomic orbitals – aka the basis functions – of the other molecule, without actually putting the molecule or its electrons there. You then do some mathematical jiggery-pokery with the resulting combination of energies and arrive at your correction. This is another thing you need to do, or your TAP-IPM will chase you around with a chair.
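For the curious, this whole dance – dimer energy, monomers in the dimer’s ghost basis, counterpoise correction – is automated in open-source packages. Here’s a minimal sketch using Psi4, with a water dimer standing in because I’m not about to type out the paper’s molecules (geometry borrowed from Psi4’s own examples; treat the details as illustrative, not as the authors’ actual setup):

```python
import psi4

# Water dimer as a stand-in system (geometry from Psi4's examples).
# The "--" separates the two fragments, which the counterpoise
# machinery needs to know about.
dimer = psi4.geometry("""
0 1
O  -1.551007  -0.114520   0.000000
H  -1.934259   0.762503   0.000000
H  -0.599677   0.040712   0.000000
--
0 1
O   1.350625   0.111469   0.000000
H   1.680398  -0.373741   0.758561
H   1.680398  -0.373741  -0.758561
units angstrom
""")

# bsse_type="cp" re-runs each monomer in the full dimer basis (ghost
# atoms included) and returns the counterpoise-corrected interaction
# energy in hartrees. Same functional/basis combination as the paper.
e_int = psi4.energy("m06-2x/aug-cc-pvdz", molecule=dimer, bsse_type="cp")
print(f"CP-corrected binding energy: {e_int * 2625.5:.1f} kJ/mol")
```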

Now, the major trouble with ab initio (from base principles) calculations is that they need to be calibrated. You do this by picking a method that produces reliable results for the work at hand.

And that’s the trick, you have to find the right combination that works. If the theory and basis-set combination you choose replicates an energy that you’ve actually measured (a known quantity) within a few percent, it’s a good bet that it will successfully predict the energy of an unknown if you’re looking at a similar-enough system. A lot of simple organic reactions can be predicted well by the combination labelled “B3LYP/6-31G”, which is about as close as you can get to a “standard” or “default” combination. But B3LYP/6-31G fails miserably for a lot of transition metals and organometallic compounds, which is where you need to start getting creative. If the process you are studying is intra-molecular – i.e., bits are just rearranging, rather than falling off or coming on – then most combinations tend to be much of a muchness. But when you’re talking inter-molecular interactions, particularly the van der Waals or electrostatic interactions between molecules, the right combination is essential. Again, garbage-in-garbage-out.

But you must measure it against something known, otherwise you are shooting in the dark. I once read a paper that proposed a very interesting new twist to a particular catalytic mechanism, something that they claimed had a much lower – and therefore more plausible – energy profile. It looked great. But it turned out they hadn’t actually calibrated/validated it well. If you could even call what they did “validation”. Their supplementary information showed that they had just changed the core-electron pseudopotential (an assumption that lets you ignore all the core electrons around an atom and replace them with a single charge) a few times and concluded “well, we get similar enough answers each time, so it must be right”. You can’t do this, or your TAP-IPM will chase you around with a chair. Again. Your chosen method has to be calibrated against empirical data, or the garbage-in-garbage-out principle applies. So when I replicated this catalytic system with a completely different level of theory and a different basis set (one that was calibrated against empirical energy values derived from some painstaking kinetic experiments), the claimed effect in this paper effectively disappeared.

And this is where I get a little dubious about micelles actually forming in liquid methane. The molecular dynamics, and particularly the more detailed conclusions of the paper, rely on an accurate binding energy between two molecules. Without this, you could get any old result. You could stick in a random number for the binding energy and see molecular self-assembly in the simplified molecular dynamics calculations that is wholly unrealistic. The energies associated with that self-assembly may well be off by a huge margin, and when you start plugging them into thermodynamic equations where these numbers sit in an exponent, your errors become far more staggering. I am also dubious about taking a binding energy from just two molecules alone. If you’re talking about large structures such as micelles, I really would like to see some ab initio work done on larger clusters, tetramers included, to see how they start interacting at this higher and more precise level of theory, BSSE-corrected or not. As I touched upon above, in water you need several layers of interacting water molecules to approach experimental accuracy. Is this level of detail needed in this case? It might hurt the hypothesis, but it can’t hurt its reliability.

I also have to question the use of implicit solvation in their quantum mechanical model – that is, making the calculation not in the presence of actual solvent molecules (almost essential if you’re going to imply that solvent drives this process!) but in just a polarisable continuum that, let’s be brutally honest about this method, only vaguely represents the idea that there’s a solvent if you squint a bit and squish it about. The binding energy they calculate to configure and validate the model isn’t, in reality, just molecule versus molecule separated by infinite distance – it competes with the molecules’ ability to bind explicitly to the solvent. This isn’t always trivial. Solvent molecules can make very specific interactions of their own, so solvent getting in there, breaking up the self-assembled layers and altering their stability, needs to be accounted for much more explicitly than they have done for the results to be robust.

Is all that required for acrylamide and other similar molecules working in methane? Possibly, possibly not. Hopefully the authors have done their background reading to figure that out, and I’m willing to give them the benefit of the doubt given that they’re using fairly robust procedures and methods – although these methods are compared against reasonable standards (M062X) rather than a “gold standard” like CCSD(T). It’s reassuring that the OPLS model’s binding energies were within 4 kJ/mol of the ab initio results, suggesting the model has merit, but as I’ve pointed out above, theoretical self-consistency should take a back seat to consistency with experiment, because the former can be fudged so very easily.

Conclusions

Life or not, it’ll almost certainly have some interesting chemistry

Overall, I think this is a pretty cool and promising result. The work by Oganov on sodium chloride stoichiometry that I’ve discussed previously on this blog demonstrates the predictive power of computational chemistry, and this could well do the same. The authors here have demonstrated some excellent potential chemistry that could be going on in liquid methane oceans. However, save the champagne for now. Without comparison of their values against experimentally derived ones, and ultimately experimental verification that self-assembly of these molecules actually happens in liquid methane, there is no hard evidence yet that this theory is realistic. Hopefully those experiments are coming soon, so we can see if this holds up. Because if we can build these structures in the lab, and then figure out a reliable way to detect them in the wild on Titan, then whether or not it lowers the barrier to life and applies to exobiology almost doesn’t matter – it will be some really interesting chemistry we’ve found.

How to stop sucking at non-belief (Part 2)

The Problem with “Religion”

There’s a big problem with “religion”. No, this isn’t going to be a tirade against how “it” supposedly brainwashes people, or how “it” starts wars, or how “it” is a massive affront to reason. No, this is about the actual word, the label itself, and how it’s used – especially amongst the anti-theist and anti-religionist crowd of atheists, because holy fuck those people can be stupid when they want to be.

The problem with “religion”? “Religion” doesn’t exist.

See, people treat “religion” like it’s a thing.

[Image: religion as a thing]

But it’s not a thing. You can’t find it anywhere. Sure, we might imagine something like a hypothetical “generalised” religion, much like the “generalised mollusc” anatomy, but that doesn’t mean such a thing exists in reality. We’d have a hard time finding this “religion” anywhere. No one follows “religion”. No one is part of “religion”. And if I type “religion” once more I’m going to have a bad time.

No, “religion” is not a thing. It’s more like a bucket.

[Image: religion as a bucket]

We put stuff into this bucket based on a few superficial similarities. Things like “believes in a creator deity”, or “provides a moral code”, or something more abstract. But those similarities are superficial and generic; they overlap and criss-cross and can be quite complicated. They’re not universal, they’re not essential, and there isn’t even a single common thread uniting everything in the bucket. Not all religions believe in an almighty God. Not all religions propose supernatural processes. Not all religions fleece followers of money, and not all religions profess a love for peace.

Often, the differences are far more striking than the similarities.

[Image: spot the difference]

When you step back and think about it, it does seem strange what does and doesn’t go into the bucket. Pick any attribute ascribed to “religion” and you’ll be able to find a good few exceptions: “religions” that don’t possess that attribute, or “not-religions” that do.

[Image: what goes in]

And this is sort of where the problem is. Because nothing truly unites everything in the bucket, it’s difficult to use in a general sense. It’s almost pointless to try.

Few people ever reach into the bucket to examine its contents; they’re stuck looking at the bucket and simply declaring universal truths about it as if it were a thing. By no means are these declarations universally negative in the way anti-religionists use them (“religion is against reason”, “religion is harmful”, “religion is child abuse”); many of the positive assertions do exactly the same thing (“religion is necessary”, “religion answers the big questions”, “religion should be respected”).

[Image: contents may differ]

The bucket is just that: a bucket. It does nothing but hold stuff.

Sometimes this is quite convenient. It would be a pain in the ass to refer to tall woody objects with leaves if it wasn’t for the concept of a “tree”. But this comes at the price of, on occasion, mistaking the bucket for a real thing and then making mass generalisations about what it holds. People assume animist religions are “bullshit” for the same reasons creationism is total and utter crock. They assume Hinduism is interchangeable with Islam – or that neither has the same kind of internal sub-divisions as Christianity does – completely blind to their own geographic biases. Is atheism a religion? Well, the answer to that is actually far more complicated than “is bald a hair colour?”

Getting rid of the buckets probably isn’t an option. The world is just too big and complicated to go without them. Even fuzzy buckets would just break people’s brains eventually. All the inclusions, exclusions, exceptions, partial truths and partial matches would be too much information for us to handle.

Instead, we simply need better, more useful, more appropriate buckets for the task.

[Image: good shit / bad shit]

It’s a much better approach to simply categorise things better. But it does require some effort, especially when language and society are already rigged for the inefficient and crap version, which splits the world into “religion” and “not-religion” and says one is good and the other is bad. You need to look into things and pick out what’s bad and what’s good. Then separate it out, and deal with things specifically. The phrase “all religion is bad” is absolutely meaningless; but if the average non-believer admitted that, and tried to say “behaviour that ostracises and demonises the out-group is harmful” instead, then they’d run the risk of turning a critical eye on their own behaviour. That’s not a comfortable thought, and it’s no wonder people avoid it.

This is why anger at “religion” is misplaced – and why thinking that anger directed at specific components found in the religious bucket is anger at “religion” is a foul misinterpretation. There is a “bad shit” bucket out there, and it’s something worth getting angry about – in fact, it’s a better question to ask why people don’t feel these things are worth getting angry about. At the same time, though, there’s a “good shit” bucket (or even a “meh” bucket), and lumping all of that in with “stuff worth getting angry about” is, at best, just wasted effort.

But always remember, the bucket itself can’t harm people; its contents do.