The laws of physics are undemocratic

On May 21, 2014


Yesterday saw the start of the Circling the Square conference at the University of Nottingham. This is a rather unusual meeting which has the lofty aim of bringing together social scientists, those in the arts and humanities, policy ‘wonks’ (for want of a better term), science communicators, and natural scientists (including physicists, of course) to discuss the various interconnected aspects of research, politics, media, and impact.

As one of the conference organisers, I was delighted that the first day featured fascinating keynote lectures, lively discussion, and a rather heated exchange amongst panellists (more on this below). In the afternoon, two of the UK’s most successful science bloggers, David Colquhoun and physicsfocus’s own Athene Donald, gave their thoughts and opinions on the role of new and old media in science communication, debating and discussing the issues with the other panel members – Felicity Mellor and Jon Turney – and a number of contributors from the floor. Andrew Williams’ media keynote lecture preceded the “Researchers facing the media” panel session and was full of important and troublesome insights into just how science can be distorted (for good or bad) through the lens of the media.

But it was the first panel session of the conference, on the science-policy interface, that got me somewhat hot under the collar. (Well, OK, I was wearing a t-shirt, so perhaps this isn’t the best metaphor…) That’s because that particular panel provided a telling insight into the gulf that still exists between natural and social scientists when it comes to the interpretation and contextual underpinnings of scientific data. Until we find a way to reconcile the views spanning this gulf, we’re going to continue to exist in our silos, as two distinct cultures, arguably even more divided within the sciences than CP Snow could ever have envisaged for our separation from the arts and humanities.

The panel featured a ‘robust’ exchange of views – if you’ll excuse my borrowing of a hoary old euphemism – on the interpretation of scientific data and just how it is used to inform political debate and decisions. Chris Tyler, of the Parliamentary Office of Science and Technology, forcefully put forward his view that we can never consider scientific results in isolation from the political process. Sheila Jasanoff, Professor of Science and Technology Studies at Harvard, had earlier made very similar comments in the light of engaging presentations made by Daniele Fanelli and Beth Taylor on the interface between scientific research and policymaking. The overall tone of the debate is perhaps best summed up in this tweet from Roger Pielke (who is also speaking at the conference today in the “Challenging Established Science” panel):

[Embedded tweet from Roger Pielke]

Fanelli made an impassioned argument countering the idea that scientific evidence must always be considered in the context of its political framing. His comments certainly resonated with me, and I’d be rather surprised if what he said didn’t also strike a chord with the other physical/life scientists in the audience. We spend our lives aiming to do experiments in as disinterested a fashion as possible. It therefore rankles to be told that objective – and I use that word unashamedly – scientific evidence is nothing more than opinion.

For my colleagues in sociology and science and technology studies, I should stress that I am not for one second suggesting that scientists are immune to social biases. John Ziman, physicist-turned-sociologist, rightly disparaged the idea that scientists are always disinterested seekers of the truth, describing it as “the Legend”. Nor am I suggesting that data interpretation is not part and parcel of the scientific method (as Neuroskeptic argues convincingly).

The discussion yesterday, however, dangerously strayed very close at times to the ‘cultural relativism’ that was so successfully lampooned by Alan Sokal back in the nineties. Yes, scientific evidence must be considered as just one element – and, unfortunately, it’s often a very small element – of the political process. It would be naïve, at best, to argue otherwise. But the entire rationale for scientific research is underpinned by the understanding that we, as scientists, should always aim to put aside those socio-political and cultural biases. Otherwise, objective scientific evidence is reduced to pure opinion. Newton’s laws of motion, E = mc², the Schrödinger equation, the speed of light, and the first and second laws of thermodynamics are not culturally or politically determined. Those same laws are just as valid for a race of small blue furry creatures from Alpha Centauri as they are for us.

Or, as Sokal famously put it,

“…anyone who believes that the laws of physics are mere social conventions is invited to try transgressing those conventions from the windows of my apartment. (I live on the twenty-first floor.)”

Image: The Shard in London, currently the European Union’s tallest building and a prime location to test the idea that the laws of gravity are merely an opinion. Credit: Miroslav Petrasko, reproduced under a Creative Commons licence.

About Philip Moriarty

Philip Moriarty is a Professor of Physics at the University of Nottingham, where his research focuses on nanoscale science. He is a member of the Science Board of the Institute of Physics and coordinates the multi-partner ACRITAS European network. He has participated in a number of research council-funded public engagement projects, including Giants of the Infinitesimal, and was a member of the Programme Committee for the controversial “Circling the Square: Research, Politics, Media, and Impact” conference held in Nottingham in May 2014. He is also a regular contributor to the Sixty Symbols video series.

81 Responses to The laws of physics are undemocratic

  1. IMHO, even physicists must consider that the (current) ‘universal laws’ of physics are not immutable – and may not correctly apply to all conditions. One can certainly point to the high energy domain of general relativity superseding Newtonian dynamics, and even new evidence of an anomalous low energy domain – in the observations of low mass, widely separated stellar binary systems in our galactic neighborhood – see

    Not even consensus scientific opinion should dominate over properly interpreted evidence!

  2. Hi, James.

    But general relativity only “supersedes” Newtonian mechanics under certain limits. It wasn’t as if general relativity came along and swept aside Newton’s laws of motion — the relativistic equations reduce to the classical case in the appropriate limits. (The same holds true for quantum mechanics).
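    To make that “reduction in the appropriate limit” concrete, here’s the standard textbook illustration (nothing more than an undergraduate expansion): the relativistic energy of a particle of mass m moving at speed v is

    $$E \;=\; \gamma m c^{2} \;=\; \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}} \;\approx\; m c^{2} \;+\; \tfrac{1}{2} m v^{2} \;+\; \tfrac{3}{8}\,\frac{m v^{4}}{c^{2}} \;+\; \dots$$

    so for v ≪ c the familiar Newtonian kinetic energy, ½mv², drops straight out, with the relativistic corrections only entering at order v⁴/c².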

    >>”Not even consensus scientific opinion should dominate over properly interpreted evidence!”

    I agree, and this is entirely the point of the blog post.

    Best wishes,


No, the convenient utility of Newtonian dynamics was not abandoned, even when its application to the evaluation of diffuse, large-scale structures comprising vast self-interacting compound objects of mass produced anomalous results. Instead, compensatory mass – in the form of some as-yet-unidentified type of exotic particle referred to as dark matter – was inferred to fit gravitational evaluation results to those derived from observational interpretations.

Like the high-energy, proximal effects of relativistic gravitation, the diffuse, low-energy effects produced by massive objects may not comply with the fundamental inverse-square model of the law of universal gravitation. This may be most clearly evidenced by the reference in my preceding comment – especially the anomalous results of simple two-body evaluations of many low-mass, widely separated binary star systems in our own galactic neighborhood. Please see the above reference.

      While a vast majority of physicists in many disciplines have committed to the existence of enormous masses of universal dark matter, it is supported only by evidence derived from exceedingly complex inferences…

  3. Philip
I don’t think it is at all the same thing to say that scientific evidence must always be considered in the context of its political framing and that we can’t (and shouldn’t) do our experiments as disinterestedly as possible. The evidence should indeed be disinterested if at all possible but, even so, it will inevitably be framed by policy makers and those trying to convince them. Despite being a physicist, I’m with Sheila Jasanoff in the main on this. I think scientists too often don’t understand or accept the meaning of framing as social scientists mean it. My post on the interactions-with-the-media part of the day is here.

  4. Hi, Athene.

“The evidence should indeed be disinterested if at all possible but, even so, it will inevitably be framed by policy makers and those trying to convince them”

    I agree entirely (and say so in the post above, and previous posts) but the message I firmly got yesterday was that there is a body of thought which is based on the precept that it is impossible for scientific evidence to be attained objectively because it is always subject to the bias of the investigator. Roger Pielke’s presentation this morning only served to reinforce this when he argued that scientists too often behave as advocates rather than “honest brokers”.

    There are certainly worrisome examples of where bias has overridden the scientific method but this is not to say that it’s impossible to secure objective results. Those results will, of course, be distorted through the lens of the media and politics but that’s a very different problem to stating that the data themselves are always “suspect”. The discussion yesterday got very, very close to suggesting this.

    Daniele clearly had similar qualms to those I outline in the post above when he made the point about demolishing public trust in science if we say all evidence is “suspect” and subject to bias.

I think that vocabulary may well play an important role here. During my time in Westminster as part of the Royal Society MP-Scientist pairing scheme, I was a little perturbed to hear that the terms “evidence” and “opinion” (particularly in the context of Select Committee hearings) were used interchangeably.


  5. My understanding, and I may well be wrong, is that the “politicisation”, or perhaps more accurately bias or prejudice, comes more in the choice of research question and methodology used which then affects the data generated. I saw one policy adviser talk about receiving evidence from scientists and he said he always asked them “why did you choose to look at that problem using that experiment under those conditions and not an alternative?”, each decision being an opportunity for bias or prejudice (unconscious or otherwise) to seep in.

    Ian Boyd hinted at this when discussing the issue of pesticides and pollinators by complaining there was too little data and that which did exist was unusable as the conditions didn’t match those found in nature. A cynical view would be that the researchers who poisoned bees with too high a dose wanted to demonstrate a positive effect although a more realistic view might be that they were simply working within the limits of their experiments. Beth Taylor stated the importance of depth when providing evidence (although maybe Ian Boyd slightly contradicted this today?), but doesn’t this fall down when we don’t have sufficient time or resources? This is surely when things get muddy and the scientists/researchers (or their political bosses) then make decisions on what data to collect and to use.

I thought I followed Roger Pielke’s analogy of the science as honest broker/arbiter in policy discussions, which you picked up towards the end of the discussion this morning. Then I got a bit lost – I’d assumed that the scientific evidence would have its own “seat at the table” alongside other forms of evidence (social, economic, legal, regulatory etc) and the advocacy groups. But then he seemed to suggest that the science community would organise these forums and I wasn’t sure where this left the evidence, who brings it to the discussion and whether scientists merely facilitate the presentation of the evidence rather than give any interpretations. Perhaps I need to read his books…

    • Great to meet you yesterday, Alasdair. I very much enjoyed your contributions to the conference and have got a great deal out of your blog posts.

My key problem with the “choice of research question/methodology” biasing the evidence is that it raises the question of who makes the decision as to which research questions are valid, and which aren’t. And this brings us full circle (ahem) to the matter of impact and RCUK’s advice that perceived impact “should inform the design of your research”.

I’m not going to bang on about impact yet again here — see this lengthy diatribe: The spirit-crushing impact of impact.

      This is surely when things get muddy and the scientists/researchers (or their political bosses) then make decisions on what data to collect and to use.

The idea that the “political bosses” make the decisions at the “granularity” of an individual project has obvious problems in terms of the disinterestedness, impartiality, and independence of the research. It was for this reason that the (rather nebulously defined) Haldane principle was mentioned more than once during the conference. As I said at the end of Brian Collins’ informative and interesting talk, the idea that there’s still a Haldane principle in place (when all EPSRC grant applicants have to describe the national importance of their work) is not at all defensible. David Edgerton’s paper, “The Haldane Principle and other invented traditions in science policy”, is well worth reading.

We don’t tell undergraduate students that they should do an experiment with a particular outcome in mind. Disinterestedness is at the heart of the scientific method (or should be). As I’ve said elsewhere,

      “…and as regards teaching the [undergrad] students “why they do the experiment”, I have never told an undergraduate or graduate student here at Nottingham (or elsewhere) that the reason they do an experiment is to produce an application which will have direct socioeconomic impact or – importantly – even indirect socioeconomic impact in the far-flung future.

They do the experiment to interrogate Nature and to learn a little more about how the universe behaves. Free of near-market considerations. Free of commercial constraints. Free of political interference. Free of preconceptions.”

This is why the panel discussion to which I refer in the blog post unsettled me. I remain of the opinion that what was put forward was a ‘mild’ version of cultural/societal relativism, where all scientific evidence is assumed to be tainted by the bias of the investigator. If that were the case, how could we ever have established the laws of thermodynamics and classical mechanics, worked out the structure of the atom and bonding in solids, or determined the entire framework of quantum mechanics, relativity, etc.?

      (..and before a sociologist suggests that those are all “just” esoteric physical sciences concepts and that the same reasoning about bias can’t apply at the science-policy interface, I think that the laws of thermodynamics may be of some small relevance to societal issues/challenges such as, oh, let’s say… climate change.)

      Yes, those are, in some sense, all “just” models. But I’m writing this after coming back from the lab where I’ve watched an image of silicon atoms being collected by a scanning probe microscope. And I know that someone on the other side of the world, if they’ve calibrated their instrument correctly, will measure exactly the same value for the spacing of those atoms on the same type of silicon surface (within experimental uncertainty) as I do.

So, yes, there is always bias in the choice of the research question. But this doesn’t mean that all experimental data/conclusions are tainted by this bias (as was implied in the panel discussion). We can reach objective conclusions independent of our starting point. Indeed, it could be argued that a wide variation in the choice of research questions – even if this is driven by particular researcher bias – is an advantage, not an inherent failing, of the scientific method.

    • P.S. Drat, forgot to respond to the “honest broker” comment, Alasdair. Sorry.

      I’ll be honest, I just don’t ‘get’ this concept. And I’m not entirely certain that re-reading Pielke’s book is going to help.

As I mentioned yesterday, the number of scientists who get involved with politics is very small. So those who do have influence on policy can’t really be said to be representative. (Leaving aside the small issue of just what “being a scientist” means.)

      I’m not quite as cynical as Douglas Adams (see quote below) on the matter of suitability for politics/governance but let’s just say that the bias towards “advocacy” which Pielke describes may well arise from the ‘self-selecting’ dynamics of a very small subset of the scientific community who find that the only way to make a difference is to forego the disinterestedness that should be and generally is the cornerstone of the scientific method.

      We’re not the only ones to have some confusion about the ‘honest broker’ vs advocacy options. See the comments thread under this piece by Alice Bell:

“The major problem—one of the major problems, for there are several—one of the many major problems with governing people is that of whom you get to do it; or rather of who manages to get people to let them do it to them. To summarize: it is a well-known fact that those people who most want to rule people are, ipso facto, those least suited to do it. To summarize the summary: anyone who is capable of getting themselves made President should on no account be allowed to do the job.”

[From The Restaurant At The End Of The Universe, Douglas Adams]

  6. I’m with you every inch of the way, Philip. In my humble opinion that “telling insight into the gulf that still exists between natural and social scientists” ought to be telling you something: that some people who say they’re scientists, aren’t. And that it is but a short step from those “social biases” to outright censorship and propaganda. Fight it with your every breath.

  7. Hello Phil,

Thanks for putting these thoughts down; there have been lots of interesting blog posts spinning off from the conference.

I think there are a few things going on here regarding the extent to which scientific enquiry is ‘political’.

    1. There is politics involved in the way resources for research are directed. Therefore, the available body of scientific knowledge is inevitably shaped by political concerns. Pretty sure you are with me on that one.

2. More controversial, perhaps, are the honest broker/advocate arguments. Here, the argument is that there *must* be somewhere people can go to as a source for impartial science. In his talk, Roger claimed that the Royal Society (and others) have damaged their credibility as such a source through some of the more politicised campaigns they have run. (Of course, one may argue that if the RS don’t get political about climate change etc, that is a kind of political statement in itself).

    3. I actually think Fanelli posed quite a nice challenge for social studies of science in his exchange with Chris Tyler, which reminded me of your exchange with Mike Hulme last year. It may be that sociologists can analyse when things go wrong in science, as some of the Climategate emails showed. But are they right to say that these things happen, regrettably, because we are all human? Can we do better? If so, how? More transparency/better institutions/something else?

    4. Finally, I see your point re the scientific agreement on speed of light, gravity etc, but for the kinds of complex scientific areas that are usually involved in evidence-based policy, I think things may be a little different. So often the argument is not about what the facts are, but which facts are the most relevant. This is what Steve Rayner touched on this morning, I think. This is where a more political element comes in. *Your* original research will be rigorous and robust, as we would hope everything in the peer-reviewed literature would be(!) However, the way that research is used, interpreted (or ignored) is obviously political. Now it may be that you or I would get cross about that as (social) scientists, so we would want to play a stronger advocacy role. But we should be explicit about that role, not keep it under the radar, and ensure that when we go back to our research we are again sure to keep our own biases in check.

    Hope this makes sense, as I don’t think the gap in understanding is as big as it seemed during that session. Just could have done with a bit more time.

    • Hi, Warren.

      Thanks for another really helpful set of comments and your blog posts/tweets. [As you know, I don't "do" Twitter (because of my innate inability to communicate in bursts of fewer than 1000 words), but I certainly kept an eye on the #circlesq traffic. I'll try not to sound any more klaxons... ;-) ]

      1. There is politics involved in the way resources for research are directed. Therefore, the available body of scientific knowledge is inevitably shaped by political concerns. Pretty sure you are with me on that one.

      Yep, entirely with you on that one.

      2. More controversial, perhaps, are the honest broker/advocate arguments…

      I agree. See also my comment to Alasdair above — or maybe below? — re. the honest broker/advocacy divide.

Here, the argument is that there *must* be somewhere people can go to as a source for impartial science. In his talk, Roger claimed that the Royal Society (and others) have damaged their credibility as such a source through some of the more politicised campaigns they have run.

OK, this is what I don’t get about Pielke’s argument. Just what is meant by “impartial” science? Yes, there are biases and deficiencies and the scientific method isn’t perfect. But the idea that scientists must be impartial in the face of the weight of the scientific evidence just strikes me as perverse. It’s like calling expert testimony in a trial and then asking that expert to return an entirely impartial opinion, regardless of their interpretation of the evidence and data. Or, indeed, it’s like asking the jury to simply return a verdict of “We don’t know”. Every time.

      3. I actually think Fanelli posed quite a nice challenge for social studies of science in his exchange with Chris Tyler…

      Delighted to read this — that’s pretty much exactly what I was thinking as that exchange (and the surrounding discussions) unfolded. Thanks also for posting the link to the blog post/video. I’d not seen it before. I stand by everything I said in that question to Hulme. It is not the norm to “bury” data and to try to obfuscate around our working practices/methods.

      OK, that’s a healthy level of agreement! Now onto point #4.

      Finally, I see your point re the scientific agreement on speed of light, gravity etc, but for the kinds of complex scientific areas that are usually involved in evidence-based policy, I think things may be a little different. So often the argument is not about what the facts are, but which facts are the most relevant.

      OK, I sort of see your point but this is the most troublesome aspect of the entire debate (and it’s what prompted the blog post above in the first place). Let’s take one of those “complex” scientific areas: climate change. First, it’s entirely arguable as to whether this is indeed more scientifically complex than, for example, elucidating the fundamental underpinnings of quantum physics, ascertaining the origin/viability of dark matter/energy, or any one of a plethora of difficult questions in the physical/life sciences.

      By complex, I think you mean that the sociopolitical aspects complicate both the acquisition and the interpretation of the data? Would that be fair to say?

      But that’s, in essence, just a restatement of point #1 above. And we both agree on that.

      I’d also agree with this:

      But we should be explicit about that role, not keep it under the radar, and ensure that when we go back to our research we are again sure to keep our own biases in check.

But any good scientist will always aim to keep their biases in check. And even if they don’t (see my reply to Alasdair elsewhere in this thread), as long as those biases with regard to the choice of research question/methodology do not distort the data then the evidence is objective. And if a scientist has done her utmost to be objective then she can have confidence in the validity of her measurements. Interpretation of those measurements is another matter (Point #1 again).

      What frustrates me quite a bit is the idea that the bias of the investigator means that scientific evidence is always tainted. If this were the case then why bother doing the measurements in the first place?

      I managed to shoehorn one of my favourite Feynman quotes into the end of the conference yesterday. Here’s another:

“If you’re doing an experiment, you should report everything that you think might make it invalid – not only what you think is right about it… The first principle is that you must not fool yourself – and you are the easiest person to fool… After you’ve not fooled yourself, it’s easy not to fool [others].”

      This is part-and-parcel of the psyche and ethos of all committed scientists.


What’s this about the speed of light? It isn’t constant. Really. It’s a great example of how wrong people can be whilst being utterly absolutely convinced that they’re right. We all know about gravitational time dilation, and that clocks go slower when they’re lower. We can even detect this in a lab. See this interview with David Wineland of NIST: “if one clock in one lab is 30cm higher than the clock in the other lab, we can see the difference in the rates they run at”. The thing is this: he’s talking about optical clocks, and when a clock goes slower it isn’t because “time goes slower”. A clock does not literally measure the flow of time like some kind of cosmic gas meter. When a clock goes slower it’s because the regular cyclical motion inside that clock is going slower. The optical clock goes slower when it’s lower because electromagnetic phenomena go slower. Because light goes slower. And you know the same will be true of the trusty old parallel-mirror light clocks. Like this.
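        (For a rough sense of the scale involved – a standard back-of-envelope estimate, not a figure quoted in the Wineland interview itself – the fractional difference in rate between two clocks separated by a height Δh near the Earth’s surface is

        $$\frac{\Delta \nu}{\nu} \;\approx\; \frac{g\,\Delta h}{c^{2}} \;=\; \frac{(9.8\ \mathrm{m\,s^{-2}})(0.3\ \mathrm{m})}{(3.0\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}} \;\approx\; 3\times 10^{-17},$$

        i.e. a few parts in 10¹⁷, which is why it takes a state-of-the-art optical clock to resolve a 30 cm height difference at all.)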

    • “It may be that sociologists can analyse when things go wrong in science, as some of the Climategate emails showed. But are they right to say that these things happen, regrettably, because we are all human? Can we do better? If so, how? More transparency/better institutions/something else?”

      How about not being subject over the course of many years to political attack orchestrated by powerful economic interests? Given that context, is the framing that something went wrong in science supportable? What has gone wrong with social science such that this framing is considered credible in the field?

      It’s indelicate of me to ask these questions, I know.

  8. Hi Phil,

    Now my role as organiser has finished I think I’m allowed to ask some questions! As always, really loved your contributions yesterday but two questions spring to mind. So:

1) why is my inability to pick up and understand the latest physics paper fundamentally different from your inability to pick up and understand the latest STS paper?

2) If Alan Sokal’s paper successfully lampoons cultural theory to the extent that we should ignore it, does the existence of the paper with terrible Photoshopping you mentioned yesterday mean I can safely treat all nano as nonsense?


    • Hi, Greg.

      Thanks for the kind words and, once again, thank you for all of your hard work on making the conference the success that it was.

1) why is my inability to pick up and understand the latest physics paper fundamentally different from your inability to pick up and understand the latest STS paper?

      There have been some interesting exchanges about this over at the Making Science Public blog following a recent post by Warren (Pearce). As I say in a comment there, it’s not an issue of jargon. It’s willfully writing in an impenetrable style that’s the issue.

      To summarise what I say in that comment in response to Warren’s post, and hopefully to clarify the rather garbled message I put across yesterday in the panel discussion:

      1. Not all sociology papers are poorly written.

      2. Not all papers in physics are wonderful exemplars of clear and pithy writing.


      3. As Michael Billig points out in “Learn to write badly: How to succeed in the social sciences” , in many areas of sociology there is an expectation that papers should be written in a verbose, impenetrable style. I stress again that this is not a matter of jargon (as Brigitte also points out in the Making Science Public comments thread).

4. There is clear evidence for this impenetrability based on poor writing, rather than the over-use of jargon. I give you Exhibit #1 (from D.G. Leahy):

      Total presence breaks on the univocal predication of the exterior absolute the absolute existent (of that of which it is not possible to univocally predicate an outside, while the equivocal predication of the outside of the absolute exterior is possible of that of which the reality so predicated is not the reality, viz., of the dark/of the self, the identity of which is not outside the absolute identity of the outside, which is to say that the equivocal predication of identity is possible of the self-identity which is not identity, while identity is univocally predicated of the limit to the darkness, of the limit of the reality of the self). This is the real exteriority of the absolute outside: the reality of the absolutely unconditioned absolute outside univocally predicated of the dark: the light univocally predicated of the darkness: the shining of the light univocally predicated of the limit of the darkness: actuality univocally predicated of the other of self-identity: existence univocally predicated of the absolutely unconditioned other of the self. The precision of the shining of the light breaking the dark is the other-identity of the light. The precision of the absolutely minimum transcendence of the dark is the light itself/the absolutely unconditioned exteriority of existence for the first time/the absolutely facial identity of existence/the proportion of the new creation sans depth/the light itself ex nihilo: the dark itself univocally identified, i.e., not self-identity identity itself equivocally, not the dark itself equivocally, in “self-alienation,” not “self-identity, itself in self-alienation” “released” in and by “otherness,” and “actual other,” “itself,” not the abysmal inversion of the light, the reality of the darkness equivocally, absolute identity equivocally predicated of the self/selfhood equivocally predicated of the dark (the reality of this darkness the other-self-covering of identity which is the identification person-self).

      5. On the other hand, there are some very good examples of clearly written abstracts in sociology papers. Brigitte sent me the following. It’s the abstract of the first paper in the most recent issue of the American Journal of Sociology. Compare and contrast with the writing directly above.

      Ethnographic studies of the black middle class focus attention on the ways in which residential environments condition the experiences of different segments of the black class structure. This study places these arguments in a larger demographic context by providing a national analysis of neighborhood inequality and spatial inequality of different racial and ethnic groups in urban America. The findings show that there has been no change over time in the degree to which majority-black neighborhoods are surrounded by spatial disadvantage. Predominantly black neighborhoods, regardless of socioeconomic composition, continue to be spatially linked with areas of severe disadvantage. However, there has been substantial change in the degree to which middle- and upper-income African-American households have separated themselves from highly disadvantaged neighborhoods. These changes are driven primarily by the growing segment of middle- and upper-income African-Americans living in neighborhoods in which they are not the majority group, both in central cities and in suburbs.

      • Hi Phil,

        Thanks for the reply to point 1. I’d like to offer some things in retort, I’m not sure that they’re all mutually compatible, but I think they are worthy of consideration.

        The first thing to note in response in the accusation of particularly poor writing in the social sciences (i.e. the claim that this is a general trend rather than particular instances) is simply: I’m not sure I buy it. My background (up to MSc level) is in experimental psychology and neuroscience. I wanted to do some research on the sociology of neuroscience, but I came to a PhD in Science and Technology Studies without really knowing that the discipline existed! And, when I started, I was quite baffled. Some of that was due to particular claims – a stand out example being Charles Rosenberg’s statement that “in some senses a disease does not exist until we agree it does”; a sentence which is easy enough to read but that I found spectacularly difficult to grasp. But, for sure, some of it was about the language used and the structure of arguments. What I would say is that by the end of my PhD I was pretty happy reading texts I previously found impenetrable. A year in a Foucault reading group did a lot of good in this regard.

        Is this really so different to Physics? I’m not sure it is. I think (having not even taken a full GCSE in the discipline!) I would be absolutely perplexed by physics for a while but, after 3-4 years of trying, would have a reasonable handle on it. I suspect that exactly the same would be true of you at sociology. The interesting question is why one might expect to have instantaneous expertise in the social sciences when one would never dream of making that claim in the hard sciences…

A second point, and this is boringly pragmatic, is that I would wager a much higher proportion of citations within the social sciences come from translated texts, something potentially awkward for a few reasons. Firstly, works might be poorly translated – I’m certainly told that Julia Kristeva, Luce Irigaray, and Bruno Latour (all of whom are particular favourites of Sokal) are much easier to read in their native tongue. Secondly, words may not be easily translated and capturing the essence may be problematic. We get this even from the title of Foucault’s most famous work Discipline and Punish, which is actually entitled Surveiller et Punir; the essence of ‘surveiller’ is not captured easily in English. Such translation issues are problematic for those of us reading those works, but they also make things difficult for those of us trying to use those arguments in our own writing.

        So those two points are arguing against the idea of particularly poor writing in the social sciences. I’d next like to suggest that, on occasion, there are good reasons for texts that are tricky to read.

On YouTube there is a wonderful commencement speech from David Foster Wallace. He starts:

        “There are these two young fish swimming along when they happen to meet an older fish swimming the other way, who nods at them and says ‘morning boys, how’s the water?’ And the two young fish swim on for a bit and then eventually one of them looks over at the other and goes ‘what the hell is water?’”

Wallace, no stranger to writing difficult texts, goes on to argue that one of the primary purposes of a liberal arts education is to make students ‘aware of the water’ – the mundane stuff which we do not recognise but which structures and determines our world. In the majority of instances it might well be possible to draw our attention to the water in plain English – but what if the structure we want to expose is plain English, is Language, or the work that language is doing? When Alex Smith of Warwick came to speak at ISS (raising similar points to the ones you raise) he railed against the fact that many in the social sciences take nouns and turn them into verbs. But Judith Butler’s treatment of gender as a verb is deliberate and important; her text is tricky but it takes the form it does to ‘expose the water’ and show the structuring effect of language. Given the significance of the ‘linguistic turn’ in the social sciences I really don’t think we should underestimate just how often ‘poor’ writing is actually purposive. We might ultimately disagree with the claim that language is water, but I don’t think we can dismiss it as being demonstrably ridiculous. I’m also not sure if there is a comparison to this point in the natural sciences (although there certainly is in the arts)…

  9. Now for Point #2, Greg…

2) If Alan Sokal’s paper successfully lampoons cultural theory to the extent that we should ignore it, does the existence of the paper with terrible Photoshopping you mentioned yesterday mean I can safely treat all nano as nonsense?

    I think that there are some physicists and chemists (let alone social scientists) who would very happily agree with you that nano as a field is over-hyped and often not entirely rigorous!

That awful Photoshopped data – it’s more likely to be a Microsoft Paint job, as some have pointed out – is, like the Sokal case, an example of where peer review has failed. Badly.

    But the key difference with the Sokal case is that the (nano)scientific community jumped on that data like a shot, facilitated by blogging (and in a way that is impossible with traditional scientific publishing). They knew it was suspect.

It was Sokal himself who revealed that the paper was a hoax – the Social Text readership didn’t spot it. Moreover, when he did so, the editors didn’t accept that they’d dropped the ball. Indeed, they argued that Sokal had had “…a change of heart, or a folding of his intellectual resolve”.

    That’s the key difference.

    It is only fair to admit that physics is not immune to hoaxing:

Here, again, there was a failure of peer review. (The Sokal case is a much broader issue, for the reasons noted above, than “just” a failure of peer review.)

Traditional peer review is far from robust. I really hope that online (and suitably moderated) post-publication peer review will be de rigueur for the next generation of (social) scientists. This is why I spoke of PubPeer in such glowing terms yesterday. (And, like Athene Donald, I was also keen to inject a little more ‘positivity’ into proceedings.)

    • Now point 2!

So firstly, it is important to note that Sokal was NOT a failure of peer review: Social Text was not peer-reviewed at the time of Sokal’s paper. The second thing to note is that Sokal published his piece revealing the scam incredibly soon after publication, so we really don’t know how the community would have responded to it (or, if something like PubPeer had existed, what would have happened). Certainly, if we believe Steve Fuller (coming here to speak in a couple of weeks), who also had something published in that issue and has written extensively on the topic, the paper was already bound for obscurity, ignored in the issue’s introduction and taken “as a somewhat ingratiating but good faith effort by a natural scientist to bridge the Two Cultures”.

      So basically I just think Sokal is a massive red herring. Yes, a terrible piece got published. Yes, similar things have happened in basically every discipline. No, we don’t know if the piece would have ever gotten any sort of traction (although we might suspect not)…

  10. These are really interesting conversations and I am glad the conference has sparked such debates. Just a few observations: over at the Making Science Public blog Brigitte has posted a link to a paper by Jon Elster, a social scientist who accuses much of the social sciences of being obscure (if you want to watch Elster making his points in a lecture go to the video link I posted). Elster says we should not pick weak examples for criticism, but the best. I think this would be good advice to avoid attacking straw men.

    I also remember Elster commenting (in a different context) on some social science paper that it was ‘too unclear to be wrong’ which indicates that he shares the methodological position of falsificationism, and favours plain writing (although quoting passages in French ;-). His position makes sense only for people who believe in the unity of science, but not all do. Interestingly this belief divides scientists and social scientists; one side thinks the latter cannot be scientific and should not aim for it.

    Much of course depends on how we define scientific knowledge. Philip, you often use the term ‘the scientific method’, a concept which you seem to take for granted as universal. However, it has been critically examined by historians and sociologists of science. This body of work would be the starting point for me when discussing things like ‘the scientific method’. Would it be for you?

    • Hi, Reiner.

      Philip, you often use the term ‘the scientific method’, a concept which you seem to take for granted as universal. However, it has been critically examined by historians and sociologists of science. This body of work would be the starting point for me when discussing things like ‘the scientific method’. Would it be for you?

At the very core of the scientific method is the idea of disinterestedness. Indeed, Merton (to whom you refer here, and to whom I refer here, alongside Ziman) included disinterestedness as one of his CUDOS norms for science.

      Merton’s norms (and associated discussions) have therefore certainly informed my views on the scientific method.

      • Philip, it is interesting you mention Merton’s CUDOS. You say CUDOS is ‘at the core’ of the scientific method. What do you mean by that? I would characterise CUDOS as norms or ethos, not as scientific method. They may be related, but exactly how is not clear from your statement.

        BTW, how do scientists learn about these norms? I have met very few scientists (if any) who learned about Merton’s CUDOS in their curriculum. But every scientist gets instruction about the methods to use in their field. Hence the difference between method and ethos.

Merton’s CUDOS has largely been forgotten or superseded in academic discourse, as I have documented in a paper on ‘Climategate’ (the paper also has a section on Pielke’s ‘honest broker’).

        Don’t get me wrong, I am not arguing ‘against Merton’, quite the contrary. But there are reasons why he is treated as a dead dog. You need to address them when appealing to him.

        • Merton didn’t invent CUDOS and then scientists followed those principles!! Merton analysed the scientific method and “distilled” its key features down to CUDOS.

          It doesn’t matter whether Merton is a “dead dog” or not because, you’re right, only a small number of scientists are aware of Merton’s writings on the CUDOS norms. But even if Merton had never written those norms down, they would still exist within the scientific community.

          See the section on the Mertonian norms in this:


          • Merton didn’t invent CUDOS and then scientists followed those principles!! Merton analysed the scientific method and “distilled” its key features down to CUDOS.

Merton’s argument was both descriptive and normative. Taking the descriptive dimension, it would be a matter of empirical investigation to verify his claim. Maybe science has changed since he wrote his article (1942), maybe science never worked like this. There has been quite some attention in the social science literature to this problem. I would recommend this article by Ed Hackett, which argues that research scientists are following ambivalent norms.

“But even if Merton had never written those norms down, they would still exist within the scientific community.” How do you know without collecting evidence?

          • Reiner,

            Did you follow the link to the chapter I provided in my earlier comment?

            Note the high subscription rate to Mertonian norms described in that chapter.


Yes Philip, this was a very good comment you wrote. I read it at the time and told you how much I admire your courage. If Merton’s norms are alive and kicking, we should expect more researchers speaking out like you have done. I cannot see them.

            This is where the Hoffman et al study comes, in which you mention as evidence for the prevalence (or acceptance) of Merton’s norms. Unfortunately the study uses two methods which are least apt to yield valid results: focus groups and surveys. Asking scientists about the adherence to social norms is a sensitive issue (like: are you racist? are you misogynist?) and is subject to social conformity bias. This is why I think Hackett’s study is much better because it is based on in depth interviews, at various points in time. This allows him to probe much deeper. It is still patchy evidence, and does not provide systematic triangulation in all instances. But it tries to address the methodological problems much better.

          • Sorry I meant to say

            “This is where the Anderson et al. study comes in, which you mention…”

  11. Please excuse my barging in here, especially as I wasn’t present at the conference. But as someone who works in critical/political/social theory (I don’t self-identify as a social scientist, not because of any hostility to science but because I am not familiar with and do not use scientific methods) I often find outside attacks on the broader discipline I work in incredibly frustrating, for a number of reasons. Many of those have been covered by Greg, but I’ll chip in with:

    1. The method used to make these attacks is often extremely frustrating. Lifting a section of text, out of context, from a long work is completely unfair. Opaque terms may have been explicated earlier on. The text may have explicitly stated it assumes a degree of familiarity with the subject. I would not ridicule an out-of-context section of a text on antigravity because I didn’t understand it. Such criticisms also miss the fact that the text may be experimental/performative – pushing language to the limits to see what new forms can be found/explored. To compare the abstract from the American Journal of Sociology with that D.G. Leahy extract is utterly unfair: they’re doing completely different things for different audiences in different fields.

    2. Plenty of people in the social sciences/theory world also make these critiques. We are frustrated by bad translations, bad writing, jargon-for-the-sake-of-jargon. We debate it. We disagree on it (Billig, for example, who I think is totally and utterly wrong even though I am often frustrated by difficult writing). There is no homogenous field of ‘social sciences’.

    3. These are often racist and sexist claims (or, at best, claims with racist and sexist effects). Not intentionally so, maybe, but to dismiss postcolonial studies as ‘fashionable’, for example (as Denis Dutton does), is ridiculously belittling of a vast field that has done much – over a long period of time – to unpick racist assumptions in supposedly critical academic work. Academia remains dominated by white men and to lampoon those who challenge this is, quite simply, wrong.

    4. I don’t think people have done this here, but they’re so often based on errors. Lacan does not think his penis is the square root of minus one. I’ve never come across a writer who thinks that truth is absolutely relative. Nor anyone who thinks the term ‘postmodern’ has any value other than as a periodizing concept.

    I think the rigid disciplinarity of academia has a lot to answer for here. The work of people like Karen Barad (a queer theorist and theoretical physicist) shows that when disciplinary boundaries are collapsed productive, provocative work can follow.

  12. Lifting a section of text, out of context, from a long work is completely unfair. Opaque terms may have been explicated earlier on.

    Point taken. But note that I include two examples above — one an example of excruciatingly poor writing, the other a rather well-written abstract.

    No context is needed for that first piece. It is gibberish, pure and simple. Appalling, awful writing.

    The following is taken from that text and is a single sentence!

    Total presence breaks on the univocal predication of the exterior absolute the absolute existent (of that of which it is not possible to univocally predicate an outside, while the equivocal predication of the outside of the absolute exterior is possible of that of which the reality so predicated is not the reality, viz., of the dark/of the self, the identity of which is not outside the absolute identity of the outside, which is to say that the equivocal predication of identity is possible of the self-identity which is not identity, while identity is univocally predicated of the limit to the darkness, of the limit of the reality of the self).

    Are you saying that you think there’s a justification for this type of writing? Really?

You say that you disagree with everything Billig says. Does this include the following?

    Because I suggest that academics should write more simply, this does not mean that I am arguing that they should be addressing the general public. Most public intellectuals are bi-lingual: they use one language for addressing the public and another for addressing fellow specialists. I am suggesting that we address our fellow specialists more simply, whether or not we seek to address the general public.

    Note that Billig states that “academics” (not just sociologists) should write more simply. What’s wrong with this? Shouldn’t we strive for brevity and clarity in our writing?

    You say,

    I’ve never come across a writer who thinks that truth is absolutely relative

    You should take a look at the comments thread for this:

    Note the telling exchange re. the laws of physics for Martians…

  13. On the ‘laws of physics for Martians’ debate I’d tentatively follow Karen Barad (I can’t speak for others, though I don’t read anyone as arguing for absolute relativity: saying that truth is discursively produced is not to say that ‘anything goes’), though I’m new to her work and don’t want to do it too much violence here. She rejects relativism but argues that observation/measurement is always a two-way rather than one-way relation – a simple claim, perhaps, but one that she believes has some profound epistemological effects. In the social world, this certainly seems to make sense to me and she forcefully argues it applies in science too. This is a relationalism, not a relativism (nor is it overly concerned with the role of discourse in constructing ‘truth’). There’s a fairly lengthy but not-too-hard introduction to her position here:–new-materialism-interviews-cartographies?rgn=div2;view=fulltext

Maybe ‘utterly wrong’ is a little harsh for Billig. My language a bit too simple, perhaps!! There is always a time and a place for writing clearly: it’s what I try to do, because it’s what I feel most comfortable doing. But I don’t want to moralise about those who don’t write ‘clearly’ (which is always a relative judgement, of course: one person’s clear is not another’s), and perhaps that’s because of my own intellectual history. When I first came to read Deleuze and Guattari (as a lazy undergraduate who didn’t excel academically) I found it really hard work. But I found that the hard work paid off and that reading it enabled me to think in new ways, and to make connections I’d not made before. Part of that was undoubtedly the force of the language. They created their own conceptual reality rather than adjusting to mine (by carefully explaining terms, etc etc). I’m sure both of them could have pitched their work ‘lower’, but it wouldn’t have had the effect it had on me. So the ‘difficult’, performative element of their work worked powerfully for me. I had to consult secondary texts, of course, but I don’t think that’s necessarily a bad thing.

On that horrible sentence? I just don’t know enough to judge. Perhaps it’s parody. Perhaps Leahy is pushing language in the way Gertrude Stein did. Perhaps the point being made couldn’t possibly be made any other way. Perhaps he is pushing a mode of thought he is opposed to, to its logical limit, in order to show how nonsensical it is. Perhaps it is just terribly written. Perhaps it’s a combination of some of those things. The fact that the first chapter of the book is subtitled ‘Whether the Difference between Fact and Reflection or Appearance and Essence Can, by the Saying, be Overcome – The Question Answered in Light of the Being of the Thing Perceived, at Once the Formal Critique of Historical Materialism’ makes me think he is rather taking the piss, albeit in a way I really can’t be bothered with (I note also that he has a chapter subtitled ‘Beyond the Post-Modern Nothingness’, so those associating his writing with ‘postmodernism’ – not that anyone has here – perhaps need to reconsider).

    On the subject of long sentences, there’s a horribly impenetrable and wordy one in Sir Thomas More’s Utopia which, if I remember rightly, bangs on about how clear everything will be in Utopia. Most translators chop it up into shorter sentences, (presumably) believing its long-windedness to be a result of poor writing. But one recent translator believes it a deliberate strategy by More: the whole purpose of Utopia (the text, not the place), he believes, is to portray both the desirability AND impossibility of such a state of affairs and the wordy sentence promoting utopian clarity is a deliberate performance of this contradiction. To dismiss More as a ‘bad writer’ on the basis of this would, therefore, be totally unfair.

    This Judith Butler op-ed from ’99 still has a lot to say on the topic:

    • On the ‘laws of physics for Martians’ debate I’d tentatively follow Karen Barad

I wouldn’t. Really, I wouldn’t. I’m sorry to sound so scathing but that interview with Barad to which you linked is an example of the very worst type of quantum physics “woo”. (This is something that really gets on my nerves, for the reasons explained here.) For example, the following passage from that interview with Barad does a great disservice to both quantum physics and feminism:

      See, we assume that time is a given externality, just a parameter that marches forward, and that the past already happened and the present, that moment “now” just slipped away into the past, and that the future is yet to come. But if we examine this carefully, again using the insights from feminist theory, from post-structuralist theory, and things that Cultural Studies has been telling us, and so on, and bring them into the physics here, what we can see is that what is going on actually is the making of temporality.

      Utter nonsense. In what sense can we bring “insights from feminist theory, from post-structuralist theory, and things that Cultural Studies has been telling us” into developing a better understanding of time in quantum physics? The entire article reads like an extended entry in Private Eye‘s “Pseuds Corner”.

Given Point #3 in your previous comment, I should stress that my political/societal ‘bias’ is firmly left of centre. I really hope that we have not reached the point where criticising the fashionable postmodern nonsense of Karen Barad is seen as anti-feminist or misogynistic…

  14. I don’t accept the ‘fashionable postmodern nonsense’ claim. For a start, there is no remotely coherent, observable category ‘postmodernism’. There is a particular period in history (centred around ’89, but beginning before then and stretching beyond it) when certain intellectual, artistic, architectural trends etc. can be observed; and there is a group of writers who critically engage with them from a variety of angles. Most of the social theorists Sokal and Bricmont attack have been dismissive of the idea of postmodernism; some even use the term as S&B do (as something they are hostile to). They have wildly different aims and methodologies, and I really don’t think S&B understand them, nor make any effort to. At best, the whole affair strikes me as intellectual beef with a very marginal position unfairly extrapolated to a whole tradition (and I know Bricmont’s Belgian, but there’s a real anti-European bias at work here).

    I’m interested you bring up being left-of-centre here. Chomsky, of course, provided a blurb for Sokal and Bricmont and is famous on the left for having a very empirically focused method. His mode of political operation is that there is a truth, and if only everyone knew it the world would be a better place. Reason, rationality, knowledge, progress. There’s no place for theory (he claims that dialectics is beyond him) and he argues that if you can’t explain your ideas to a 10-year-old in 5 minutes they’re no good. Now Chomsky’s done some invaluable work in uncovering lies, abuses of power and so on. And to be fair, he’s also done some good work on how the concepts he espouses have been abused themselves. But it’s not as simple as he thinks. People aren’t surprised by his findings. They’re often politically demotivating rather than motivating. Why? Well, there’s a whole host of things going on. People lack a broader framework through which to synthesise these facts, cognitive dissonance means people don’t digest ‘facts’ in the manner Chomsky thinks they will etc etc. People often feel disempowered rather than empowered by knowing disturbing truths. Here’s where theory can be useful. Zizek’s reading of Lacan’s concept of the ‘subject-supposed-to-know’, for example (and I’m not a fan of Lacan – at all – and think that much of psychoanalysis *is* pseudoscience), can be extremely useful here; or an understanding of the Marxist theory of the commodity form. All the things Chomsky dismisses as ‘fashionable’ and ‘too difficult’.

    On gender/race. Well, I’m wary. Partly because the ridiculing of certain statements gets misapplied to feminism or postcolonial/subaltern studies as a whole (not here, I’ll add). Saying that a certain scientific truth is gendered doesn’t mean it’s bad, invalid, or relative, or only partially true. It most likely means that the reason this truth has been discovered is because it is a field of particular interest to men (as they are socially constructed). Funding is key here too. How would this be allocated differently if it wasn’t dominated by white men? If some funding wasn’t determined so much by the needs of capital, the state and the military? What new scientific discoveries might arise then? Science would still, of course, be socially determined – nothing is ahistorical or free from economic and power relations. This is what I encounter when I hear people saying that science is in part socially determined. By way of analogy I might say that maps are bound up with state and military control; and colonial power. That is true. But it doesn’t mean I don’t use one when I want to know where to go.

    On Barad. There’s clearly a difference between Daily Mail woo woo (even if that is based on academic work, I’d wager it’s a complete misrepresentation of said academic work) and peer-reviewed theoretical work. Barad’s point, as I understand it, is fairly uncontroversial. Our white, male culture conditions us to experience time in a particular way – as a linear succession of homogenous moments. If you cannot possibly conceive of time in any other way then it *may be* that you cannot process the results of certain experiments; or may take longer to do so. In other words, an epistemological shift on the social level may be required to process scientific data; or design experiments to make new discoveries etc etc (and this is true not just for time). Language is a problem too – and you touch on this in the youtube link (the ‘colour’ of quarks, for example): our social world conditions/limits/enables what we can comprehend, and might lead us down dead-ends. Perhaps feminism might have a linguistic strategy to help us out of such a problem; perhaps other cultures might…(I’m also wary of the danger of exoticising certain cultures as ‘inherently close to quantum reality’ or whatever)…

    I know that there’s a micro/macro fallacy in trying to apply the lessons of quantum physics to social science; and that behaviour on the sub-atomic level cannot simply be transferred to the atomic level (though it may open up space for interesting *speculative* theory-fictions). But I’d also suggest that scientists have been guilty here: Heisenberg’s Physics and Philosophy (if I remember rightly) does a lot of the things that ‘postmodernists’ are always accused of doing.


    • David,

      Sorry, I meant to pick up an important point re. use of terminology in your comment yesterday but it slipped by. Here it is…

      You use the term “peer-reviewed theoretical work” to describe Barad’s writings.

      In what sense are you using the term “theoretical”? In physics, and in science in general, we can put forward theories but the acid test of the validity of that theory is whether it (a) agrees with experiment/observation, and/or (b) has predictive power.

      How has Barad’s “theory” been tested against observation and experimental measurements?

      If we forego this testing phase then we can make up whatever “theories” we like. For example — and I’ve used this one before — let’s say that I believe that all human emotions are driven by the exchange of nanoscopic hoops of *undetectable* electromagnetic radiation (let’s call them nanobagels). Love happens when we have constructive interference of those hoops; hate is due to destructive interference.

      Definitively prove me wrong. (Remember, those hoops are undetectable…)


  15. If one wants to use quantum physics in a metaphorical way, I think it’s important to state that very explicitly and make clear what parts of quantum physics are mapped onto what parts of social science for which reason and to illuminate what aspects of social science. It’s no use just to wave the quantum label about. That does not lead to a deeper understanding of either quantum physics or social science issues. And again, the language issue is crucial. If we all want to understand EACH OTHER, we need to use a language that can be understood by a wide range of people not just by a small self-selected circle. We have to read what we write, read it again and again and imagine a non-initiated reader sitting there trying to understand what we say. I myself find this little video by John Searle quite interesting in this context, where he talks about Foucault and Bourdieu, two thinkers I respect and have heard lecture in Paris in what one might call their early and quite comprehensible periods:

  16. Wow. The disagreements between STS researchers, sociologists, and scientists at the “Circling the Square” conference, to which I refer in the blog post above, pale into insignificance against this!

    Where to start?

    First, I was using the term “postmodern” with tongue wedged so firmly in my cheek that I was causing myself quite a bit of discomfort. I am entirely with Dawkins on this: I have never met anyone who has used the term “postmodern” who has been able to give me a remotely coherent explanation of what it means. (I guess that, in this sense, we agree on this point).

    Barad’s point, as I understand it, is fairly uncontroversial. Our white, male culture conditions us to experience time in a particular way – as a linear succession of homogenous moments. If you cannot possibly conceive of time in any other way then it *may be* that you cannot process the results of certain experiments; or may take longer to do so

    Fairly uncontroversial. What?!

    As I said above, where do I start?

    So, what you (and Barad) are saying is that males and females should ideally perceive time differently? That time is relative to the degree of patriarchy of a society/culture? So a female will, say, perceive a car travelling at 60 km/hr as actually travelling at 59 km/hr, for example (or would it be 61 km/hr)? That there is some type of gender-specific theory of relativity?

    As I said in my previous comment, nonsense. Pure and utter nonsense.

    As I also said in my previous comment, not only are you abusing physics but you/Barad are actually doing feminism a grave disservice by making such absurd, nonsensical arguments.

    • Just for a point of clarity, I really don’t think that Karen Barad is using quantum physics metaphorically; her PhD, as I understand it, is in theoretical physics. I’m not in any position to say whether her research, or the physics she’s discussing, is any good, but I really don’t think she’s using it in a glib way. Personally, I found a Bergsonian way of thinking about time (I’m not sure if this is specifically what David is talking about) really interesting and useful – I don’t think it’s nonsense or has any bearing on a car’s mph though, and probably needs to be chatted about over a pint rather than a blog.

      W/R/T the Searle piece – I’ve never been sure about this story. I know others have disputed as to whether Foucault ever said that. And certainly his “terrorist obscurantist” quote about Derrida should be considered in the light of the fact that they weren’t best mates!

      And if we’re going to talk about fashionable nonsense, how about Richard Dawkins’ claim that children shouldn’t be read fairy tales!!

      • Greg,

        I really don’t think that Karen Barad is using quantum physics metaphorically


        Agreed. And this is the problem. She is stating that “white male culture” (to use David’s term from his comment above) actually influences the perception and measurement of time.

        As I’ve said in an e-mail exchange going on “behind the scenes”, physicists now routinely measure down to femtosecond time-scales. Do all of these measurements need to be corrected for the cultural/patriarchal/social bias of the experimenters? (Oh, and the instrumentation. And the manufacturers of that instrumentation…)

        I apologise for my rather robust tone on this topic. But as a colleague – a sociologist, no less – said in an e-mail exchange earlier this morning,

        Isn’t it totally human to get exasperated by patent nonsense?

        • Isn’t Barad talking about the way time is *perceived*? It’s not that lasers should be de-biased. It’s not necessary to question the accuracy of this phenomenal equipment; the point is merely to show that it is not the only way to think about time in the social world. That is what the interpretive social sciences are all about – recognising that particular worldviews (or ontologies, if you prefer) are appropriate in different settings. If the ways in which we experience and discuss time were reduced to those dictated by the laws of physics (not that this is very likely), then we would live in a poorer world.

          I’m not qualified to comment on the more ambitious argument about whether ‘white male culture’ has unduly influenced the progress of physics. However, I notice we don’t have a control group :-)

          • “Isn’t Barad talking about the way time is *perceived*?”

            You mean the way men tend to worry about being _on_ time, women less so?

            I’m just a white middle aged engineer who is surprised that anyone gets funding to write such balearics.

        • I feel like I’m arguing for something that I don’t really know very much about here, but I think there are two separate arguments. With regards to something like a femtosecond (which sounds cool!) I suspect you agree that no-one in the history of humankind had ever ‘experienced’ that unit of time until various scientific instruments/practices came along. So in that sense, femtoseconds are a lovely example of how the experience of time *is* dependent upon various social practices. Note: this is *not* – *not* – the same as saying femtoseconds are not real!!!

          More broadly, and far removed from the laboratory, experience of time is clearly dependent upon things like capitalism! I experience 8 hours as a perfectly reasonable amount of time to sit at my desk but it seems highly unlikely that many people experienced time this way prior to the industrial revolution. I’m not sure that this speaks very much to physics, but I think it’s both interesting and pretty uncontroversial…

  17. Oops – if I can quickly add an aside which I think makes a similar point. A speaker we had last week mentioned that all seminars in Oxford must start at 5 past the hour. The reason for this is that Christ Church (I think) refused to change their clock from ‘true noon’ (when the sun crosses the local meridian) to ‘standard noon’ (which is based in Greenwich, as we know). The difference is about five minutes (roughly four minutes per degree of longitude, and Oxford is a little over a degree west of Greenwich), hence the delay. The social achievement of a ‘universal noon’ (everywhere except Oxford!), whereby we all set our clocks to the same time, appears to have been ushered in by technologies such as the telegraph, which made synchronicity over long distances meaningful. ‘Noon’, as we experience it, is a societal phenomenon as well as a scientific one. Paul Edwards has written about this really nicely in relation to climate change. So, once again, these social changes seem to have changed how we experience time, do you not think? I really would be surprised if you found this kind of claim controversial…
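
    As a rough check on that figure (the longitude value used here is my own approximation, not something from the talk): the Earth turns through 360° in 24 hours, so each degree of longitude shifts local solar time by four minutes,

    $$\frac{24 \times 60\ \text{min}}{360^{\circ}} = 4\ \text{min per degree}, \qquad 1.25^{\circ}\,\text{W} \times 4\ \text{min per degree} \approx 5\ \text{min behind Greenwich},$$

    which is consistent with the five-past-the-hour convention.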

  18. As my comments about metaphor have shown, I too am talking about something that I am not really au fait with. However I have just looked at a book description of Barad’s work which says “Barad extends and partially revises Bohr’s philosophical views in light of current scholarship in physics, science studies, and the philosophy of science as well as feminist, poststructuralist, and other critical social theories. In the process, she significantly reworks understandings of space, time, matter, causality, agency, subjectivity, and objectivity.” Is this type of ‘extension’ really so different to metaphor, which maps knowledge of a source domain onto a target domain and normally, thereby, creates new knowledge and extends the meaning of the original word or concept used… The question is whether in this case we are creating new knowledge and new understanding or not… But I have to confess I am totally new to Barad, so am probably talking nonsense. I found this review of her work by Trevor Pinch, though, where he says “By drawing upon the results of physics not as a metaphorical enterprise but as having direct implications for science studies, Barad also courts a form of scientism. Using science – and a highly prestigious form of elite science at that – to bolster a view in science studies is a dangerous game.” So perhaps framing her work as metaphorical might have made it easier to swallow for the STS community, while not doing so makes it difficult to swallow for the science community! Hmmm, not really a win-win strategy then… DOI: 10.1177/0306312711400657

    • @Greg, @Warren, @Brigitte.

      Re. Karen Barad’s work.

      I strongly recommend that you read the article for which David Bell provided the link in one of his comments above.

      This is an instructive, if near-impenetrable at times, overview of Barad’s arguments and stance. I’d just like to pick out a few choice paragraphs/sections from that interview which may help to explain just why I described her work as fashionable nonsense in a comment above.

      First, it’s worth noting that in the opening paragraph of the interview Barad has this to say:

      I am not interested in critique. In my opinion, critique is over-rated, over-emphasized, and over-utilized, to the detriment of feminism.

      Critique and criticism are the bedrock of science. Claiming a lack of interest in critique is entirely unscientific. (And if Barad doesn’t mean critique in the usual sense, she should spell out just what it is she does mean. In clear, careful language).

      She gazes upon the images and asks him: “Can you tell me what is so beautiful about those images?” The physicist turns to her with this puzzled look on his face and says: “I am not really sure why you asked the question. It’s self-evident! Everywhere you look it is the same.” And of course feminists are not trained to look or take pleasure in everything being the same, but to think about differences


      This is a remarkably broad and sweeping generalisation — amongst many — about physics and physicists. Yes, physicists look for correlations, links, and patterns everywhere. Indeed, one could very easily make the argument that the job of any scientist is to look for patterns (in time, space, data, mathematical equations…).

      But we also think in great detail about differences.

      I’m a physicist interested in how things work at the nanoscale, and the key aspect I stress in the undergrad lectures (and popular science talks) I give is that at the nanoscale, size is everything. We are interested in how properties evolve/emerge from the small scale (e.g. molecules, nanoparticles) to the large (crystals). As Philip Anderson put it back in the 70s: More is different.

      So, Barad’s sweeping generalisation about physicists not being interested in differences doesn’t work. (And given that she is a physicist by training I somehow suspect that she realises this).

      diffraction patterns record the history of interaction, interference, reinforcement, difference. Diffraction is about heterogeneous history, not about originals.

      Here’s where I take particular issue. Diffraction — and the term “diffraction pattern” — has a very distinct meaning in the sciences. If Barad clearly included a disclaimer, here and elsewhere, along the lines of “Well, of course I am only speaking metaphorically here and not for one minute suggesting that aspects of quantum physics underpin critical analysis in the humanities (or vice versa)” then I wouldn’t have an axe to grind.

      But she doesn’t.

      Here’s what she does say:

      At least for me it is the incredible satisfaction of taking insights from feminist theory, on the one hand, and insights from physics, on the other, and reading them through one another in building agential realism. And from there going back and seeing if agential realism can solve certain kinds of fundamental problems in quantum physics. And the fact that it is robust enough to do that, and that feminist theory has important things to say to physics is amazing, absolutely amazing, and key to the point I want to make as well.

      Let me highlight the key phrase in that quote in bold:

      And from there going back and seeing if agential realism can solve certain kinds of fundamental problems in quantum physics. And the fact that it is robust enough to do that, and that feminist theory has important things to say to physics is amazing

      So, the key question then becomes what she means by “agential realism”. This is a deeply frustrating question to answer because she needlessly, and repeatedly, buries her ‘explanation’ of agential realism in painfully obscure and — to use a term which has come up a number of times previously at other blogs of late — faux-clever language.

      After trawling through a few of Barad’s papers and books to find an explanation that might begin to border on being decipherable, I decided to give up and go with the explanation given at Wikipedia (which seems to chime with what Barad says in this paper):

      According to Barad’s theory of agential realism, the world is made up of phenomena, which are “the ontological inseparability of intra-acting agencies”. Intra-action, a neologism introduced by Barad, signals an important challenge to individualist metaphysics. For Barad, things or objects do not precede their interaction, rather, ‘objects’ emerge through particular intra-actions. Thus, apparatuses, which produce phenomena are not assemblages of humans and nonhumans (as in actor-network theory), rather they are the condition of possibility of ‘humans’ and ‘non-humans’, not merely as ideational concepts, but in their materiality…

      OK, that’s more than enough.

      Note the phrase at the end there: not merely as ideational concepts …

      [Errmm. Let's first rewind and translate that particular piece of needlessly overwrought language...]

      not merely as ideas, but in their materiality (“Ideational concepts”? Sheeesh).

      This is not about perception (at least in the sense that Greg and Warren mean).

      This is not about metaphor.

      Barad is putting forward the same bad science nonsense as Robert Lanza. We did a Sixty Symbols video on Lanza’s ‘claims’ a while back called Quantum Physics Woo.

      There’s a rather perceptive comment under that video which states that much of the abuse of quantum physics by Lanza et al. arises from the lack of understanding of what is meant by an observer. In quantum physics a rock can be an observer — consciousness is not required.

  19. This discussion has perhaps wound down, and I apologize for entering it so late, but I did want to say as a kind of STS person that I thought the degree of disagreement as distinct from talking past one another has been overstressed. No one in STS needs to argue like Karen Barad to make the relevant points. The Sokal hoax had to do with a literary journal, not an STS journal. The details of quantum physics are not the issue here. If we accept a small degree of fallibilism about all science the results are the same. As Warren Pearce put it

    “1. There is politics involved in the way resources for research are directed. Therefore, the available body of scientific knowledge is inevitably shaped by political concerns. Pretty sure you are with me on that one.”

    That is enough to force the realization that the content of science at any given time, including the details of quantum physics, results from past decisions to fund projects or researchers. If we are a bit humble about the limitations of our knowledge, we are faced with the reality that what we know is sometimes pretty limited, and one of the known unknowns is how our past decisions to fund one thing rather than another influenced the content of what we know in science.

    Most of the time this does not matter– we can wait for 40 years for an anomalous result to be explained, as in the case of the Homestake mine experiment. Some of the time, and especially in the areas related to policy, it does matter, because limited scientific results, which are not substantial enough to determine the right answer to complex policy problems, are nevertheless being appealed to. This is where honest brokering and advocacy collide. The scientist is put in the difficult position of either insisting on what he or she takes to be the right policy– something that might be thought of as a moral responsibility– and admitting that the evidence is limited and that many aspects of the problem are not understood– which might also lead quite predictably to bad policy.

    In the old days scientists were not normally put in this position, with the exception of medicine (which had its own rules of ethics and own problems with advocacy– see the horror of lobotomy). Today they are, and, more important, they are in many fields being funded precisely because their research is supposed to provide a consensus that is a basis for action. In this novel context, where the state is not waiting forty years to work out the details, but waiting for the results of the research they have paid for, decisions about what to fund and how to fund it have a much more drastic effect on the content of science, and the “known unknown” of the effects of these decisions expands drastically.

    It is this new situation that brings forth such notions as Pielke’s “honest broker” model. The CUDOS principles are compromised by this new situation– the OS in the middle is for organized skepticism, which is exactly what the state doesn’t want to pay for (and that advocates criticize as endangering humanity). I think we would find that we have much more common ground– or at least common concerns– if we focused on these real issues of how scientists should respond to the new situation. I certainly don’t have the answers, but we need to ask the questions together.

    • Stephen,
      I think we probably all agree with what Warren said. Clearly we choose what to do, what to invest in, and what to focus on. Hence our level of understanding of a particular topic, at any time, does depend on the choices we’ve made. If that’s all that’s being suggested, then I think everyone agrees. However, there was some suggestion that in fact the societal influence is bigger than this. That even if you factor out the obvious societal influences, it still has an impact at a more fundamental level. Most physical scientists would argue that this is not the case; that our understanding is constrained by the available evidence. If the evidence allows for more than one interpretation then societal influences may well impact what interpretation is favoured. However, as more evidence is collected those interpretations that are no longer consistent with the evidence are rejected, irrespective of what societal influences there may be. Do we agree on this?

      You also say this,

      The scientist is put in the difficult position of either insisting on what he or she takes to be the right policy– something that might be thought of as a moral responsibility– and admitting that the evidence is limited and that many aspects of the problem are not understood– which might also lead quite predictably to bad policy.

      This is where maybe we part company. In what way does the scientific evidence tell us what the right policy is? In my view it doesn’t. It simply tells us something about the world/universe. If done properly, it includes uncertainties and caveats. Of course this can then get presented to policy makers who should make a decision based on this evidence, but I’ve yet to find many scientific papers (in the physical sciences at least) that say something like “Evidence suggests that we may be seeing the start of the collapse of the West Antarctic Ice Sheet, therefore we must install more wind turbines in the Highlands of Scotland”. Typically the evidence is always limited; that’s what uncertainty intervals are essentially for. As I see it, one problem is that people perceive uncertainty intervals as illustrating that we’re uncertain. They should probably be more correctly called confidence intervals, as they illustrate our level of confidence (I sketch a small numerical illustration of this at the end of this comment).

      I can’t speak for all physical scientists, but I find the “honest broker” rhetoric rather irritating. Not because it’s wrong, but because it’s often presented in a way that makes it appear that some are not behaving as they should. Of course, not everyone’s perfect, but the idea that there is a significant level of dishonesty amongst physical scientists seems remarkably disingenuous given that it’s often presented by people who work in exactly the same environment as those they appear to be implicitly criticising.
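
      To illustrate the point about confidence intervals (the sketch promised above), here is a minimal numerical example. It is Python, with made-up measurement values and a simple normal approximation; it is purely illustrative, not a description of how any particular analysis is actually done.

```python
import numpy as np

# Invented example data: eight repeated measurements of the same quantity.
measurements = np.array([9.8, 10.1, 9.9, 10.3, 10.0, 9.7, 10.2, 10.1])

mean = measurements.mean()
# Standard error of the mean: sample standard deviation / sqrt(N).
sem = measurements.std(ddof=1) / np.sqrt(len(measurements))

# Approximate 95% interval (normal approximation: +/- 1.96 standard errors).
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean:.2f}, approximate 95% confidence interval = ({low:.2f}, {high:.2f})")

# More measurements -> smaller standard error -> a narrower interval:
# the quoted range expresses our level of confidence in the estimate.
```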

      • Speaking for myself, I completely agree with your comments on how evidence works. I would add that collecting the relevant evidence is constrained by policy choices within science itself, and I am perhaps a little more concerned that these choices are motivated by non-scientific considerations. But that is a minor difference.

        Where you say we differ, however, I think is a matter of talking past one another. Take the case of James Hansen. Physical scientist. Advocate of all sorts of policy measures. In the US one can hardly avoid him– he is constantly on radio programs pushing his view. This blurs the line between science and advocacy because he does seem to think that the science demands particular policies. Even his friend Freeman Dyson criticized him for this– so this is a real issue.

        I agree that this is in some sense “not about the science” but about policy. But neither scientists nor policy makers seem able to operate within strict boundaries between the two. And oddly enough, this is an issue that looms large in the history of social science, which tried honest brokering (cf Lundberg, Can Science Save Us?). It failed– the audience opted for more ideologically loaded social science, and social scientists obliged. Rules of the game separating social science and advocacy never stabilized.

        Have scientists opted for more policy relevant research, where the policies are guided by political motivations? Sure they have: NSF announces an initiative with clear policy motivations and the proposals flood in. Not perhaps in certain areas of physics, but in a lot of physical science fields. To me, that represents a significant change in the way science operates. But also an inevitable one that we need to work hard to understand. Ziman was trying, Ravetz has tried– post–normal, post-academic, etc. are among the many labels for this new situation.

        • Thanks for the response. Maybe we are talking past each other, but I think I may now have a better understanding of where people are coming from (why’s it taken so long you might ask :-) ). I was talking about the scientific evidence itself, not about the behaviour of individual scientists. As far as I’m concerned, the scientific evidence is not influenced by whether or not James Hansen (or other scientists) chooses to advocate for a particular policy. It may well have an impact on public trust and how policy makers perceive the evidence, but it doesn’t influence the evidence itself.

          So, in a sense, maybe some of the confusion is that some have been talking about the scientific evidence specifically, and others have been talking about how much it is trusted, given the behaviour of some individuals. To me, these are two separate – but related – issues.

          On the other hand, if some think that it does actually influence the evidence, then I think the case has to be made. As I said before, if the evidence allows for more than one interpretation, then societal influences may well play a role. Therefore, if someone’s policy preference happens to be consistent with the evidence, they may well prefer that interpretation over another. Ultimately, however, the evidence constrains the interpretations and the uncertainties indicate our confidence in this evidence.

          Have scientists opted for more policy relevant research, where the policies are guided by political motivations? Sure they have: NSF announces an initiative with clear policy motivations and the proposals flood in. Not perhaps in certain areas of physics, but in a lot of physical science fields. To me, that represents a significant change in the way science operates.

          Sure, and I’m not sure that it has really changed. Science has almost always been driven by some kind of societal pressure. To me the issue is not whether or not society influences what we choose to study, but whether or not these pressures influence the actual evidence itself. I have no reason to think it does and it’s not clear to me that anyone has any evidence of it either (in a broad sense, rather than individual examples, at least).

          I’ll say something that may seem a little critical of what I’ve seen of STS research. Feel free to take it as an indication of my ignorance and feel free to enlighten me if so :-) The important issue – in my view at least – is not whether or not society influences what we study, but whether or not it influences the actual results. The former is fairly obviously true, the latter not so much. There is, however, a narrative that exists in some areas, that the evidence in certain fields is influenced by societal factors (you yourself mentioned James Hansen). I see some STS researchers buying into this narrative, yet I see no conclusive evidence. It’s clear that there are factors that might influence public trust and might influence policy making, but I’ve yet to see anything conclusive that really suggests that the fundamental evidence is influenced by politics/society. If such evidence exists, then fine. If not, then I find it disappointing that researchers are buying into a narrative for which they have very little evidence.

          • This is a little bugaboo of mine. STS people rarely if ever talk about societal influences on science. The whole language of social forces or even talking about “the social” is a part of past (very far past) social science that STS people never bought into in the first place. Most STS research is concerned with explaining how a problem is constructed and describing the way a result is arrived at. Often in the course of doing this they find that administrative structures, the framing done by policy-makers and bureaucrats, and the like play a role. Much of recent STS is concerned with regulatory science, which is framed by regulation in the first place. In the days of the science wars of 25 years ago, there was a lot of loose talk about construction, but when one got down to the details, the narratives were pretty close to what historians of science, philosophers of science, and scientists themselves said about what was going on.

            STS does problematize notions like evidence, and tends to avoid general claims about the relations of science and society. The emphasis is always on what it is that these terms actually mean in practice, meaning, very often, what the actual lab procedures and lab technology is. For them, the notion of evidence is relative to these practices. If you want to know what evidence is in astrophysics, you need to see what the scientists count as evidence and how they produce it. Larger claims about evidence as such they leave to the philosophers.

          • In response to Phil’s last paragraph (which has just popped into my email) – surely the go-to reference here is SJ Gould’s The Mismeasure of Man? I appreciate that his findings have been disputed since his death but I would suggest it’s a pretty obvious example of societal bias affecting results…

          • Stephen,
            Thanks, that clarifies things for me quite a lot. It doesn’t resolve my issue that maybe some STS researchers are not quite making this sufficiently clear, but at least I now have a better understanding of what is actually being said. I also think that this is consistent with the post that Philip wrote on Making Science Public and that generated a fair amount of discussion.

            I’ve read a lot of SJ Gould but quite a while ago. I don’t remember that specific example, but will try and look it up. I should add, though, that I don’t doubt that one can find individual examples where societal influences affect results. The bigger question is how long such results remain accepted (i.e., how soon does someone else discover that the results are flawed). My guess (and I’ll admit that this is speculative) is that you would be hard-pressed to find a result that is societally influenced and that remains accepted for many decades. I guess some may exist, but I can’t think of any off the top of my head (I should add that I’m referring to examples where we had the ability to get the right result but let our biases influence our work in such a way that we got the wrong result).

  20. Yes, I think a lot of the confusion (or talking past each other) was for a while caused by people talking about regulatory science ‘as if’ it was science. In my view the phrase is a bit of an oxymoron and should probably be replaced by a less confusing one. Or one should just talk about science that is used to inform regulation. Being involved in regulation naturally places particular constraints and pressures on science and scientists which have to be acknowledged and discussed. But that discussion should, in my view, not be seen as a discussion about science in general and the findings should not be generalised to science overall, be it natural or social.

    • Brigitte,
      Yes, I think I now realise (as Philip was really trying to point out in his MSP post) that people don’t quite mean the same thing when they use the term “science”. However, after thinking about this a little more last night, I realised that I don’t know what people mean by “regulatory science”. Surely it can’t mean that what is presented to policy makers is somehow different to the actual scientific evidence, or does it? It seems to be more some kind of perception of the science than the science itself. As you say, maybe we need a new term that somehow both distinguishes between what people like myself would regard as science and what others seem to regard as “regulatory science”, but that is also a better descriptor.

      • Let me try an example of regulatory science. A rare species has been identified, or classified for the first time as a species. To protect it, one needs to determine whether it is endangered. This is already a legal, not a scientific concept. If we were to ask “what is a minimum breeding population” we would be unable to say– the reproductive biology, ecology, and population biology are poorly understood, and difficult to understand. But we need a rule to provide legal certainty in a situation which may have severe consequences for ordinary people, if restrictions on land use, water, etc., are imposed. So the regulatory scientist applies the rule and collects data according to the rule. Or invents a predictive model based on data collected over time of population trends – a model with a lot of arbitrary assumptions. The data is collected “scientifically,” and is empirical and systematic enough. Someone trained in science may go out and count scats, for example. The rules (which are much more complex than I am making them out to be) treat this as evidence of the population. The rest of it is not “science,” and in many cases goes against what is actually known about the species. But it does involve facts and data collected in accordance with administratively defined rules and definitions, though there is often a lot of leeway allowed for the regulatory scientist in the adoption of predictive models.

        • Stephen,
          Thanks. So, if I’m getting this right, you’re essentially referring to some process in which scientific practices may be applied (in terms of data collection and analysis) but in which the rules may be set by some societally defined practice (i.e., what condition has to be met for a species to be defined as endangered).

          Where I’m coming from – as the name may imply – is from the physical sciences (or maybe physics more specifically) where, I think, this would rarely (if ever) apply. The rules are the fundamental laws of physics (energy conservation, momentum conservation, mass conservation,….). Of course, one can use physics to try and understand a system and how it may evolve, and the results of this could inform policy. However, it’s not really possible to provide some arbitrary sets of conditions and then tell a physicist to find something that satisfies those conditions. Such a system may simply not exist.

          • Exactly. The problem is that science in the public mind, and sometimes in legislation, is used to mean regulatory science, and in a lot of fields this kind of work is the major source of employment. So there is an incentive for scientists to run the two together and claim ownership of regulatory science. And there is a lot of regulated work, for example in drug testing and development, where the distinctions do become muddy.

  21. Hi Stephen
    I am still not totally sure what regulatory science is. There are things like environmental science for example which is the science of the environment etc. but regulatory science is not the science of regulation. That’s one confusion amongst many I am trying to disentangle in my mind. Now to your example. I am not totally sure I understand what you mean. If there is a view that some animal species is ‘likely to die out’ this is not a legal matter I think. It’s something that can be ‘determined’. This may be complicated. To determine this, the (what one may call) ‘ordinary scientist’ goes out and measures things. What does the regulatory scientist do? Do they look into the legal consequences of these matters? And what is the rule they follow? Who sets that rule? And what kind of data do they collect? Are these data about the possible consequences of whatever the ‘ordinary scientist’ finds on, say, land use etc.? But wouldn’t the ‘regulatory scientist’ also apply some form of scientific method to carry out that data collection and so act as ‘ordinary scientist’ while they do so? Do you argue that this data collection by the ‘regulatory scientist’ is, unlike the other ‘ordinary scientist’s’ work, contaminated by the political rule that frames the work they do?
    Ah, sorry, I am still confused!

    • .

      OK. Projecting whether a species, or a breeding population, will die out requires a lot of knowledge about life cycles and reproduction. Mostly this knowledge is not there for endangered species. But policies need to be implemented anyway. So there are various techniques of projection that make assumptions about the populations and come up with different results depending on the assumptions. These are typically contested by different sides.

      In Florida we have an issue between manatees and boaters. So there is a legal definitional problem: are they “endangered” under the law, or “threatened.” These are not really scientific concepts. They are terms of policy, but they have big legal implications, and policy implications. The terms relate to biology, but they are not biology– more akin to risk analysis. In this case, a few years ago, one of the regulators ran a projection saying that the manatee population was endangered, based on the models. The critics pointed out that populations were in fact increasing. In fact they continued to increase over the next decade, and may now have their status changed to threatened. So what was the initial projection? Science? The scientist could just say “I wasn’t ‘wrong,’ I was just applying the model in accordance with the law. Maybe it was a conservative choice of models, but that was a reasonable choice.” The reporting in the press on the initial projection was not, however, that a model was chosen which produced the projection, but that “science says.” “Science” would not have “said” this unless someone was given the legal task of making the projection based on assumptions which were necessarily partly guesswork. And that is the normal situation in regulatory science and in policy-related science generally. There is a need for an immediate answer.

      In drug testing, similarly, there is a need for a drug to be approved, so there are conventions and rules governing the research on effectiveness and side effects that is needed to approve it. These conventions serve policy purposes in the face of the twin risks of approving a bad drug or failing to approve a good drug. This rule-driven work is not “science” in the traditional curiosity-driven sense– we frequently can’t wait around to determine how the drug actually works, for example– but it is serious, expensive, systematic empirical research with a lot of scientific content. But it is done to meet regulatory standards.
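
      To make the point about assumption-driven projections concrete, here is a deliberately toy sketch. It is Python, and the starting population, threshold and growth rates are all invented for illustration; it bears no relation to the actual manatee models or to any real regulatory rule.

```python
# Toy geometric-growth projection. Every number here is made up for illustration.
def project(initial_population: float, annual_growth_rate: float, years: int) -> float:
    population = initial_population
    for _ in range(years):
        population *= 1.0 + annual_growth_rate
    return population

ENDANGERED_THRESHOLD = 2500  # hypothetical regulatory cut-off

# Pessimistic, neutral and optimistic assumptions about the annual growth rate.
for assumed_rate in (-0.02, 0.00, 0.02):
    projected = project(3000.0, assumed_rate, years=20)
    status = "endangered" if projected < ENDANGERED_THRESHOLD else "not endangered"
    print(f"assumed growth rate {assumed_rate:+.0%}: "
          f"projected population {projected:,.0f} -> {status}")
```

      Under one defensible assumed rate the projection says “endangered”; under another, equally defensible one, it does not – and choosing between them is exactly the kind of judgement call described above.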

      • Thanks Stephen, some similar points picked up on the science (or otherwise) of polar bear populations here

        Parallel to your point about whether ‘endangered’ is scientific is the use of ‘dangerous climate change’, which began as a political term but which people then attempted to define in the scientific literature (Is that research science, or regulatory science…?!)

        • I think it is an example of how the categories have become blurred in the era of post-normal science. If you ask whether this category would have been a normal outgrowth of curiosity-driven geophysical research, the answer would probably be “no.” But for better or worse, that is not the world we are living in. Even Mann treats this as a problem of risk analysis. And risk analysis works with data and models. It is in what I called the penumbra of science. I am reluctant to call this “science,” but the boundaries are no longer that well-defined.

          • I share your concerns. But I am afraid the battle is already lost. We are already seeing philosophers of science like Kitcher becoming propagandists for post normal science. And we are not seeing enough resistance within science itself. So for me the real need is to understand the new situation.

          • Stephen,

            But I am afraid the battle is already lost.

            I’m not sure I understand why. It seems we’re in a position where some don’t appreciate the difference between fundamental science (governed by the laws of nature if you like) and some other process that uses aspects of the scientific method but is governed by societal rules and procedures. Surely this is something worth addressing?

            And we are not seeing enough resistance within science itself.

            If I’m anything to go by, I think there are many scientists who are completely unaware that this is even an issue. They’re unaware that there are many who use the word “science” to refer to something that isn’t science as they would recognise it. There also seems to be a possible Catch-22. The more scientists who engage in this, the more they risk being accused themselves of practising this thing called “regulatory science”. In a sense, their attempt to remain objective and independent may make it difficult to object in any meaningful way.

            Maybe I can return to my confusion about the role of STS. I had assumed that they studied and tried to understand science and technology. Firstly, what I seem to see (and this may suffer from sample bias) are people who highlight these various issues (potentially, though, failing to differentiate between actual science and this other kind of science). I don’t see many trying to actually address this in any meaningful way, though. Secondly, I’m unconvinced of their independence. If a bunch of STS researchers spend their time highlighting some possible issue at the science-policy interface, can we be sure that this issue is independently present, or do the activities of some STS researchers act to make this issue more prevalent? For example, there are suggestions that the public loses trust in science because of those scientists who choose to advocate. However, is this intrinsically true, or could it partly be a consequence of a bunch of people who keep highlighting how scientists who advocate are damaging public trust?

          • I don’t think STS is making up this problem. It has become a focus because it is a real area of confusion and contention.

            When we get scientists at major universities, like Paul Ehrlich, making decades of outrageously false predictions about world hunger, resource depletion, and population growth, all in the name of science, and they get awards from their professional peers for their “science,” the confusion is not in the mind of the public.

            Of course this is not about fundamental laws. But very little science is. If there really was a simple distinction to defend here, none of this would be an issue. I share the nostalgia for old fashioned autonomous non-politicized science.

            There is a distinction between what people find in a lab or in observation and are personally convinced by– let’s call them the facts– and the larger claims, especially policy related claims, that get made on the basis of the facts. This is the distinction that Paul Ehrlich flubbed. But so did the people who gave him awards.

          • Stephen,
            I probably didn’t phrase my question quite as well as I should have. I wasn’t suggesting that it was being made up; I was asking about how much it is influenced by STS research. I certainly don’t dispute that there will be physical scientists who make claims that are not supported by the evidence and that some may well get rewarded for doing so. I should add, though, that I have no real knowledge of Paul Ehrlich, what he’s claimed (although I’m aware that some of his claims seem rather extreme) or what awards he’s been given.

            There were two aspects that I was considering. Does the existence of someone like Paul Ehrlich really influence public trust in science? Maybe, but once people start highlighting this it becomes difficult to know if any lack of trust is intrinsic or if it’s because these examples are being highlighted. The other point I was considering was how significant the existence of a Paul Ehrlich – for example – is to science overall. There are thousands of scientists, most of whom do not use their science to make extreme claims (or, at least, I would assert this to be the case). Therefore, what does the existence of someone like Paul Ehrlich really imply? I would argue that it implies that some individuals may be willing to make strong claims that may not be supported by the evidence, but – other than that – it doesn’t really imply anything particularly significant with respect to science overall. If I’m wrong about this, it would be interesting to know what evidence exists to support the idea that the actions of a few individuals have much broader implications.

          • Ehrlich is an extreme case, but no more extreme than Hansen. But I suspect that the real source of public confusion is the plethora of conflicting claims about nutrition and health, each of which is supported by a “study,” all claiming to be scientific.

            If one wanted to be tough-minded about this, as I would, it would be best to say that this stuff is not science– if you don’t understand the underlying mechanisms, you don’t really have a “scientific” conclusion. But I wouldn’t be invited to many parties if I said that.

          • Stephen,
            Yes, I may well agree. As I think may be obvious, I’m very much seeing this from the perspective of the physical sciences where, I would argue, there are fundamental laws that – if not satisfied – invalidate any scientific theory/idea. A lot of what others have been mentioning seem to be areas where such fundamental laws do not exist. So, it is possible that we’re in a position where some are claiming to be doing science but are not doing science as a physical scientist would recognise it (they might be using aspects of the scientific method but, as you mention, if they don’t/can’t understand the underlying mechanisms then it’s not really science). So, I agree with your last paragraph and, given that I don’t get invited to many parties, maybe I would actually say it :-)

            You mention Hansen. Hansen may in some sense be an illustration of one of my issues with this topic. He has strong views that he isn’t shy to share. He also has some very specific policy preferences, many of which I wouldn’t necessarily agree with. However, I’ve also read some of his papers and there is nothing to suggest that his scientific results are influenced by his policy preferences. So, I’ve yet to find evidence to show that we shouldn’t trust some scientific results simply because someone has expressed strong views about politics/society. That doesn’t mean that the science done by people who do so is necessarily free of flaws, but in the physical sciences I don’t need to consider someone’s public utterances in order to determine if their science has merit or not.

          • I agree that expert is a better term. I have actually been pushing this (in print) for social “science”— arguing that what exists there is not science but expertise, though there is beginning to be some relevant neuroscience.

            As to Hansen, I can tell you that he is on the airwaves constantly and never makes subtle distinctions between science and his ideas. And there are issues with the published work.

            This is what you come up with immediately by googling Hansen predictions.


            This was published in the JGR, which I know well from being friends with the editor of a series and looking at the editorial files about 35 years ago– at a (very) different time in science. They were very open– not surprising to me that it was published. They liked to stimulate debate. That was then, however.

            The more important issue to me is one I have tried to highlight. The content of science– what we know– is the result of past funding decisions. These are just judgment calls. But they are explicitly affected by (very broad) policy preferences which take the form of a need for information related to policy goals– the NSF has initiatives along policy lines, and funds accordingly. It is not explicitly “conform or don’t get funded,” but dissent, or work that doesn’t fit, is not encouraged and in any case in a high risk environment would just be stupid to propose. I’ve been on teams seeking funding for science related projects, and all our effort was devoted to figuring out how to please the possible reviewers and the panel. These may be subtle influences, but they can have a big effect, especially over time, as careers get wrapped up with these initiatives.

            The research shows quite clearly that drug company funding produces biases. Why wouldn’t this apply more generally? Not so much in the results themselves, but in the choices that lead to the results?

          • Stephen

            this raises an interesting point about the nature of the activity which is required for regulatory, or mandated “science”/science. In her book The Fifth Branch Jasanoff (1990: 229) comments on the work of scientific advisors:

            “experts themselves seem at times painfully aware that what they are doing is not ‘science’ in any ordinary sense, but a hybrid activity that combines elements of scientific evidence and reasoning with large doses of social and political judgment.”

            It is noteworthy that Jasanoff uses the word expert, not scientist.
            Is there a study which traces the rise of the notion of “expert” to the proliferation of advisory committees and regulatory/mandated science?

            BTW, the book by Salter et al. which I mentioned above (Mandated Science) clearly distinguishes between “proper science” and mandated science. The introduction states this clearly and I think the relevant pages are visible on the link.

            Another interesting question: how and when did the term “regulatory science” become more common, at the expense of “mandated science”? Was the STS community instrumental in this, did they favour it because of its ambivalence?

          • Stephen,
            This is in a sense getting interesting, while also very concerning. You link to WUWT to (I think) illustrate an issue with a paper James Hansen published in 1988. WUWT is the main reason I started engaging in this topic. Why? Because most of what is posted on that site is – scientifically at least – utter nonsense. I started reading it a year or so ago and was absolutely flabbergasted by what I saw there. Most of it is demonstrably incorrect, or some kind of cherry-pick that misses the broader picture completely. If you want to read about Hansen 1988 from those who actually understand this topic, you could try this. Of course, linking to blog posts itself is somewhat problematic since whether or not there are issues with Hansen 1988 is somewhat irrelevant. What’s relevant is what it prompted and what has happened since then.

            Here’s a serious question. Do STS researchers not realise how poor WUWT is as a science resource, or do they think that it is somehow part of the debate whether good or not? If there really are STS researchers who think that WUWT (arguably the most ill-informed scientific blog in history – actually that’s not quite fair, there are some that are worse) is a scientific resource, that is extremely concerning.

          • It just happened to come up as one of the first things on google– of many. But STS doesn’t depend on these things.

            They are, however, data points. They are there in the real world of discourse– along with people who want to abolish liberal democracy to save the planet, anti-vaccine activists, dubious drug researchers, and so forth. And STS does have to concern itself with all these things.

            My personal concern has always been derived from Polanyi– do the normal mechanisms of review, award giving, recognition, and so forth still work, or have they been undermined? I think Ziman had similar concerns. He realized that getting “reliable knowledge” depended on these mechanisms working correctly and together.

          • Stephen,

            And STS does have to concern itself with all these things.

            Yes, I can see that. However, if STS is unable to distinguish between scientific nonsense (as I would assert WUWT is) and what is scientifically credible, it’s hard to understand the value in what STS does. Of course, maybe you just stumbled across WUWT and this isn’t your particular area, but you’re not the first to refer to them. It’s certainly my view that if there are people who are studying science and technology and its role in politics/society, they should have some sense of what is credible and what is not.

            do the normal mechanisms of review, award-giving, recognition, and so forth still work, or have they been undermined?

            I agree that this is a genuine issue. I would argue, however, that these concerns exist across all areas and, consequently, within STS itself. So it would seem to me that if researchers/academics want to address this potential problem it should be addressed collectively. If there is some group who feel that they are able to see problems that others are unable to recognise, then they should probably spend some time looking in the mirror. That’s not to say that individuals within that area cannot contribute to understanding these issues, but there is a fine line between highlighting potential problems and doing so in a way that suggests that they only apply to others.

          • It is not my area – I just googled it. But there is an interesting problem here, shared to some extent with history of science. There is no alternative to relying on present scientific wisdom about what is credible. STS or history of science can’t be judges – though they can make some judgements about how past science turned out (but only on the basis of present science). This isn’t and can’t be their job. But present science is also not the last word. There is no last word – there is just more science, and often changed science, in the future. So both STS and history of science have to acknowledge that as well.

            As I see STS, it is an extension and professionalization of concerns and issues which originated in the scientific community itself. This was much clearer in earlier generations of STS, which included lots of scientists who had gotten interested in systematically thinking about science and its institutions.

            I think science does much better than social science or the humanities with things like peer review, the reward system, and so forth. STS is not a model – it has all the problems of a messy hybrid field with political pressures and dependence on grants.

            It is not about attacking science – far from it. If STS people didn’t revere science they wouldn’t study it.

            That said, it would be unwise not to be concerned about the ways in which the institutions of science have changed in recent years – this is what Reiner is getting at. We are actually trying to understand the same problem, but from two different vantage points. One is the institutional problem of governing science and paying for it in the face of political expectations for science to produce the stuff that the non-scientists doing the paying want; the other is the internal problem of separating scientific from policy considerations. These are related concerns. My point would be that they are not really separable: you get the science you pay for.

          • Stephen,
            Thanks. I agree with much of what you say in your last comment. It is a complex issue and there are some very important factors to consider and understand. I suspect that this discussion has probably run its course. It’s been interesting and I’ve learned a lot from it. So, thanks for explaining things from your perspective as clearly as you have.

  22. I think all comments on the issue of regulatory science agree that it is part of new developments in which this kind of knowledge (science?) for policy is becoming more and more prominent. Most concepts analysing these trends note a structural change in science. Some are not happy to apply the badge of distinction to this kind of knowledge (science). Physicists are not alone in this, as the terms Mode 2 knowledge production, post-normal science, or regulatory science hint at.

    There is a very useful treatment of these issues in the book Mandated Science which is available on Google books. The term mandated is probably clearer than regulatory.

    The STS community would not be happy using scare quotes for this kind of “science”, as doing so would presuppose a belief in the possibility of an essentialist notion of science (from which deviations could be marked). This belief in essentialism was thrown overboard in the past decades (I know, Philip, you find this wrong, but this is the reality).

    If it is true that science is undergoing a radical change, the conventions of doing science change with it. What is regarded as a standard of proof, or of quality, or of a proper test, is subject to change. One of the mainstays of risk analysis is that there is never proof, only different probabilities, or fundamental uncertainty.

  23. Hi Stephen
    Thanks for this clarification. Yes, doing science in a context where results are supposed to inform policy, and where policy puts pressure on producing results, is certainly messy. I totally agree. However, despite this, the science that’s done is still science, albeit perhaps in some instances hasty science, and science built on shaky, shifting and uncertain ground. This science then informs policy and regulation. And when results inform policy and policy makers then say ‘science says’, is that not just cowardice on the part of policy makers who don’t want to admit that their policy is based partly on science, partly on intuition, judgement and political haggling? My question still is: why make a distinction between ordinary science and regulatory science, and not just talk about science that informs policy/regulation? My impression is that what happens in ‘regulatory science’ in terms of science is not essentially different from what happens in ‘ordinary science’. It is different to ‘curiosity-driven science’ of course, but even there a lot of ‘ordinary science’ happens; it is just that in some lucky cases there is perhaps less pressure from regulators, industry, politicians and so on. We should not let ordinary science be sucked up by politics though. And the label ‘regulatory science’ opens the door to that process, I fear.

    • Brigitte,
      I agree with what you’re saying, and I find this both confusing and somewhat concerning. We all agree that social influences do determine the pathway to knowledge. However, most scientists would argue that ultimately our knowledge is constrained by the available evidence and that any social influences are constrained by this scientific evidence. So – in some simplistic sense – whether or not some area of science is policy relevant does not (and should not) influence the scientific evidence. Making a policy decision should simply require considering the available evidence and using it (along with many other things) to reach a decision.

      So, to me at least, this regulatory/mandated “science” (and, yes, I’m going to use quotes not because it’s scary, but because I’m unconvinced that this is science as a scientist would know it) either implies that some scientists are presenting evidence that has been directly influenced by some policy preference (and I would hope there is more evidence than simply hearsay if this is what is being suggested) or that policy makers are referring to something that isn’t really science as science so as to avoid having to actually justify the decisions that they’re making (as suggested by Brigitte).

      So, as I think I’ve already mentioned, I fail to see why we shouldn’t be finding ways to clarify this situation rather than simply suggesting (as Reiner seems to have done) that this is today’s reality and that we just have to accept it. I don’t see how any of us benefit from a situation where we seem unwilling to distinguish between something that is constrained by independent evidence (what I would call science) and something that is constrained by politics and social conventions (what I might call policy making).

      In fact (with apologies for bringing up Sokal again), the book review that Brigitte highlighted on MSP says:

      Sokal aligns himself politically with Chomsky and Latour and shares their concerns that extreme postmodernism will erode the Left and empower neocons to manipulate science policy.

      Ignoring the left/right aspects of this quote, I am left with the view that what Sokal is suggesting (that extreme postmodernism will allow for the manipulation of science policy) is a genuine concern.