So, it turns out, crabs can probably feel pain. And not just nociception, which can be merely the mechanical reaction to a noxious stimulus – like when you pull your hand away from a stovetop before you are aware that the stove is hot – but the kind of subjective pain experience that (researchers argue) is required to learn to avoid painful stimuli. This may not surprise you at all – David Foster Wallace brilliantly covered the topic with lobsters a while ago – and/or you may not be moved to care, crab cakes being delicious and all. So how does crab pain affect the future?
I’ll start with a perhaps obvious point: when we imagine the future, it is probably more fun to focus on the products – Hoverboards! A cure for cancer! Everything-proof nanotech! – than on the process of production. For most of us, anyway. I’ll gladly concede that there are plenty of engineers and scientists who truly love soldering and toiling away in the lab. But Marty McFly shows up in 2015 and rides the hoverboard; he doesn’t attend R&D committee hoverboard design meetings or dress chimpanzees up in Evel Knievel suits and helmets to test hoverboard safety. The simple point is that realizing a better future means thinking about the fun outcomes but also about the (perhaps less fun) paths we take to get there. The more complex point is that we want and need better products, but we should also want and need better processes.
In the domain of biomedical ethics, there are some uncontroversial baseline concepts that guide the clinical research part of the process. One is a directive not directly stated in but implied by the Hippocratic Oath: “primum non nocere,” or “First, do no harm.” Physicians and clinical researchers “do harm” often – they give shots, perform surgeries, etc. – but the concept is, all things being equal, do not proceed unless the likely benefit outweighs the likely harm. This is called a “negative duty” – an imperative *not* to do something – but because physicians and clinical researchers have assumed the roles of healthcare providers, it can be argued that, again, all things being equal, they have a positive duty to help if possible. Failing to provide an available treatment becomes “doing harm.”
Another baseline principle is that physicians and researchers must keep these duties in mind as their knowledge changes over the course of the experiment. The risk involved in clinical trials is justified because we are trying to figure something out: does this new drug help more than existing treatments? We are willing to potentially harm people, by giving them a drug of questionable effectiveness and depriving them of those existing treatments, in pursuit of that new knowledge. But the instant we have compelling evidence during the trial that the drug is not helping (or is actively harming!), the justification for the experiment is gone. We have the knowledge we were seeking, so continuing to experiment is *just* doing harm. To avoid violating the “do no harm” duty, physicians and researchers must monitor both the subjects and the knowledge that the experiment has generated to that point.
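To make that monitoring duty concrete, here is a minimal sketch in Python of one way it gets formalized: a sequential stopping rule that pauses after each block of patients and asks whether the accumulated evidence that the new drug is worse has crossed a pre-set threshold. Every number below (the recovery rates, the block size, the 95% threshold) is invented for illustration; real trials pre-register their interim analyses and stopping boundaries with far more care.

```python
import random

random.seed(42)

# All parameters are invented for illustration, not taken from any real trial.
TRUE_RECOVERY_NEW = 0.30   # assumed recovery rate on the experimental drug
TRUE_RECOVERY_OLD = 0.55   # assumed recovery rate on the existing treatment
BLOCK_SIZE = 10            # patients enrolled per arm between interim looks
MAX_PER_ARM = 100          # enrollment cap per arm
HARM_THRESHOLD = 0.95      # stop if P(new drug is worse) exceeds this

def prob_new_worse(s_new, n_new, s_old, n_old, draws=20_000):
    """Monte Carlo estimate of P(new recovery rate < old) under flat Beta priors."""
    worse = 0
    for _ in range(draws):
        p_new = random.betavariate(1 + s_new, 1 + n_new - s_new)
        p_old = random.betavariate(1 + s_old, 1 + n_old - s_old)
        worse += p_new < p_old
    return worse / draws

s_new = n_new = s_old = n_old = 0
while n_new < MAX_PER_ARM:
    # Enroll one block per arm, then pause and look at the data so far.
    for _ in range(BLOCK_SIZE):
        n_new += 1
        s_new += random.random() < TRUE_RECOVERY_NEW  # True counts as 1
        n_old += 1
        s_old += random.random() < TRUE_RECOVERY_OLD
    p = prob_new_worse(s_new, n_new, s_old, n_old)
    print(f"after {n_new + n_old:3d} patients: P(new drug worse) = {p:.3f}")
    if p > HARM_THRESHOLD:
        print("Compelling evidence of harm; the justification for continuing is gone.")
        break
```

The shape of the rule is the point: the looking is obligatory, and the moment the threshold is crossed, “continuing the experiment” stops being research and starts being harm.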
Naturally, these principles are more straightforward in theory than in practice. Among other things, what exactly constitutes “compelling evidence” is not always clear. But there are cases where the application is fairly transparent.
Consider the crab. The researchers began the experiment unclear on whether crabs have subjective experiences of pain, but argued that if the crabs could learn to avoid a painful stimulus, this would be strong evidence that crabs do indeed experience pain. This sort of experiment does not benefit the crab, of course, nor is it intended to. The benefit here is knowledge about the world, and who knows what good that knowledge could do down the road. As is often the case, that benefit is assumed, without argument, to outweigh any harm to the crab.
Which is fine; I am not arguing in any way that we need to be radical, free-them-all abolitionists with regard to animal research. But apply those guiding principles of biomedical ethics. At some point during the experiment – and from the account given, it was fairly early on – there was compelling evidence that the crabs were learning, which within the researchers’ theory implies that the crabs were having subjective experiences of pain. At some point the activity changed from giving a painful stimulus while wondering whether it caused pain to giving a painful stimulus while knowing that it did. Since that knowledge was the justification for the experiment in the first place, it’s difficult for me to conceive of the continued application of that painful stimulus as anything other than a (however minor) paragon of cruelty: pain caused to no identifiable end.
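For a sense of how early “compelling evidence of learning” can arrive, here is a toy model of avoidance learning (my own illustration, in no way the researchers’ actual protocol): a simulated crab chooses between two shelters, one wired to deliver a shock, and keeps a simple running value estimate for each. The learning rate and trial count are invented.

```python
import random

random.seed(0)

# All parameters are invented for illustration.
ALPHA = 0.5            # learning rate
N_TRIALS = 10
values = [0.0, 0.0]    # the crab's running estimate of each shelter's "comfort"

for trial in range(1, N_TRIALS + 1):
    # Pick the shelter currently valued higher; break ties at random.
    if values[0] == values[1]:
        choice = random.choice([0, 1])
    else:
        choice = 0 if values[0] > values[1] else 1
    outcome = -1.0 if choice == 0 else 0.0   # shelter 0 is wired to shock
    values[choice] += ALPHA * (outcome - values[choice])
    print(f"trial {trial:2d}: shelter {choice}, values = "
          f"[{values[0]:+.2f}, {values[1]:+.2f}]")
```

Even this trivially simple learner settles on the safe shelter within a handful of trials, and once it has, every further shock tells us nothing new; that is exactly the moment the argument above is pointing at.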
Animal research is a notoriously thorny topic, and it’s likely that applying human biomedical ethics principles to crabs will strike some as pedantic and absurd. And I don’t mean to equate crab pain with human pain. Thomas Nagel has argued in a famous essay, “What Is It Like to Be a Bat?”, that truly grasping a different species’ subjective experience is nearly impossible, so I am not sure I really have grounds to equate them even if I wanted to. The point is, though, that “what is it like to be a crab” is a challenge to our imaginations, as is “what it is like” to be a bat, rat, mouse or chimpanzee. So when we imagine *our* future, it’s important to remember that the human enterprise carries a lot of other futures right along with it, and we should consider imagining the lot of them. And again, without setting off the nuclear explosion of an animal research debate, I think it’s fair to argue that we should engage the whole of our imaginations when we contemplate what tech, science and knowledge are worthwhile – and what processes are best, not just most efficient, to pursue them.
Photo courtesy of puuikibeach, used under Creative Commons license. Thanks puuikibeach!