Some six years ago, shortly after I had been appointed to its faculty, the philosophy department at the CUNY Graduate Center began revising its long-standing curriculum; part of its expressed motivation for doing so was to bring the curriculum into line with those of "leading" and "top-ranked" programs. As part of this process, it invited feedback from its faculty members. As a graduate of the Graduate Center's Ph.D. program myself, I thought I was well placed to offer some useful feedback on its curriculum, and so I wrote to the faculty mailing list, doing just that. Some of the issues raised in my email are, I think, still relevant to academic philosophy. Not everybody agreed with its contents; some of my cohort didn't. But in any case, perhaps this might provoke some discussion.
As you know, I was the gentleman who made that remark in a private Facebook thread with a close friend. If I recall correctly, people in that thread were asking whether certain kinds of thought experiments were typically referred to as “Gettier cases”. I said that they were, despite how inaccurate or uninformative it might be to do so, in part because of the alternative traditions you cite. I’m sorry you interpreted my remark as silencing my friends on Facebook. Personally, I believe that philosophers should abandon the notion of “Gettier cases” and that the practice of labeling thought experiments in this way should be discouraged. If you are interested, I have recently argued for this in two articles, here (http://philpapers.org/rec/BLOGCA) and here (http://philpapers.org/rec/TURKAL).
A few months ago, I noticed an interesting and telling interaction between a group of academic philosophers. A Facebook friend posted a little note about how one of her students had written to her about having encountered a so-called "Gettier case," i.e., the student had acquired a true belief for invalid reasons. In the email, the student described having been told the 'right time' by a broken clock. The brief discussion that broke out in response to my friend's note featured a comment from someone noting that the broken clock example is originally due to Bertrand Russell. A little later, a participant in the discussion offered the following comment:
Even though the clock case is due to Russell, it's worth noting that "Gettier" cases were present in Nyāya philosophy in India well before Russell, for instance in the work of Gaṅgeśa, circa 1325 CE. The example is of someone inferring that there is fire on a faraway mountain based on the presence of smoke (a standard case of inference in Indian philosophy), but the smoke is actually dust. As it turns out, though, there is a fire on the mountain. See the Tattva-cintā-maṇi or "Jewel of Reflection on the Truth of Epistemology." [links added]
We’ve all heard that regulations are bad because they interfere with businesses doing what they want (rules about dumping toxic chemicals get in the way of dumping toxic chemicals; laws against murder hamper the business model of assassins; and so on). New North Carolina Senator Thom Tillis made the media rounds this week for some odd remarks he made on the topic. When asked to name a regulation he thought was bad, he came up with… the rule that restaurant employees wash their hands after visiting the toilet. He then proposed that it would be better to have restaurants state whether or not employees have to wash their hands, and then “let the market” take care of it.
There are two obvious problems here, both of which have been pointed out a lot. One is that there’s a public health issue. The other is that he hasn’t actually reduced regulation: he’s just replaced a public health rule with a rule about signage. I actually think the second point is interesting, well beyond the “gotcha!” treatment it got, because it perfectly illustrates something about neoliberalism: it doesn’t count regulations that create markets as regulations (or, if you prefer, regulating to create markets is good, while other regulation is bad; this is the same mindset that concludes that the hyper-regulated Chicago futures markets are unregulated). The cleanliness of restaurant operations is not something consumers can know much about on their own, since they don’t do things like follow employees to the restroom. In this sense, restaurant sanitation is a credence good: you have to believe the restaurant, since you can’t inspect the product before you buy it. Since dirty food preparation can make people very sick, rational consumers should be willing to pay more for the knowledge that their food is safely prepared. But since they won’t be in any position to know about food safety, except (maybe) at places they’ve eaten before, we can expect market failure until some mechanism arrives to help consumers make their decisions.
I cringe, I wince, when I hear someone refer to me as a 'philosopher.' I never use that description for myself. Instead, I prefer locutions like, "I teach philosophy at the City University of New York", or "I am a professor of philosophy." This is especially the case if someone asks me, "Are you a philosopher?". In that case, my reply begins, "Well, I am a professor of philosophy...". Once, one of my undergraduate students asked me, "Professor, what made you become a philosopher?" And I replied, "Well, I don't know if I would go so far as to call myself a philosopher, though I did get a Ph.D in it, and...". You get the picture.
Yesterday, in my Twentieth Century Philosophy class, we worked our way through Bertrand Russell's essay on "Appearance and Reality" (excerpted, along with "The Value of Philosophy" and "Knowledge by Acquaintance and Knowledge by Description," from Russell's 'popular' work The Problems of Philosophy). I introduced the class to Russell's notion of physical objects being inferences from sense-data, and then went on to his discussions of idealism, materialism, and realism as metaphysical responses to the epistemological problems created by such an understanding of objects. This discussion led to the epistemological stances--rationalism and empiricism--that these metaphysical positions might generate. (There was also a digression into the distinction between necessary and contingent truths.)
At one point, shortly after I had made a statement to the effect that science could be seen as informed by materialist, realist, and empiricist conceptions of its metaphysical and epistemological presuppositions, I blurted out, "Really, scientists who think philosophy is useless and irrelevant to their work are stupid and ungrateful." This was an embarrassingly intemperate remark to have made in a classroom, and sure enough, it provoked some amused twittering from my students, waking up many who were only paying partial attention at that time to my ramblings.
Sometimes, when I talk to friends, I hear them say things that to my ears sound like diminishments of themselves: "I don't have the--intellectual or emotional or moral--quality X" or "I am not as good as Y when it comes to X." They sound resigned to this self-description, this self-understanding. I think I see things differently; I think I see ample evidence of the very quality they seem to find lacking in themselves. Sometimes, I act on this differing assessment of mine, and rush to inform them they are mistaken. They are my friends; their lowered opinion of themselves must hurt them, in their relationships with others, in their ability to do the best they can for themselves. I should 'help.' It seems like the right thing to do. (This goes the other way too; sometimes my friends offer me instant correctives to putatively disparaging remarks I make about myself.)
Foucault’s last lecture courses at the Collège de France – recently published as The Government of Self and Others [GS] and The Courage of Truth [CT] – are interesting for a number of reasons. One, of course, is that they offer one of the best glimpses we have of where his thought was going at the very end of his life; he died only months after delivering the last seminar in CT, and there is every reason to believe that he knew both that he was dying and why. There’s a lot to think about in them, at least some of which I hope to talk about here over a periodic series of posts. Here I want to say something introductory about the material, and look at Foucault’s critique of Derrida in it.
The lectures contain a sustained investigation of parrhesia, the ancient Greek ethical practice of truth-telling. “Speaking truth to power” is the closest modern phrase we have for such a practice, though you don’t have to get very far into the lectures to realize how richly nuanced the topic is, and how many different ways it manifests itself in (largely pre-Socratic) Greek thought and literature. The lectures also contain a number of references to contemporary events and people (from the beginning: GS starts with Kant before going back to the Greeks), and it’s hard to put CT down without a sense that, had there been another year of lectures, Foucault would have been more explicit in assessing the implications of the study of Greek parrhesia today.
In recent weeks, there has been much discussion of journal editorial practices at a number of philosophy blogs. Daily Nous ran an interesting post where different journal editors described (with varying degrees of detail) their editorial practices; many agree that the triple-anonymous system has a number of advantages and, when possible, should be adopted.* (And please, let us just stop calling it ‘triple-blind’ or ‘double-blind’, given that there is a perfectly suitable alternative!) Jonathan Ichikawa, however, pointed out (based on his experience with Phil Studies) that we must not take it for granted that a journal’s stated editorial policies are always de facto implemented. Jonathan (correctly, to my mind) defends the view that it is not desirable for a journal editor to act as a (let alone the sole) referee for a submission.
With this post, I want to bring up for discussion what I think is one of the main issues with the peer-review system (I’ve expressed other reservations before: here, here, and here), namely the extreme difficulty journal editors encounter in finding competent referees willing to take on new assignments. Until two years ago, my experience with the peer-review system was restricted to the role of author (and I, like everybody else, got very frustrated with the months and months it often took journals to handle my submissions) and the role of referee (and I, like so many others, got very frustrated with the constant outpouring of referee requests reaching my inbox). Two years ago I became one of the editors of the Review of Symbolic Logic, and thus acquired a third perspective, that of the journal editor. I can confirm that it is one of the most thankless jobs I’ve ever had.
Shawn Miller, a graduate student at the University of California, Davis, has been kind enough to create a philosophy of physics grad program wiki in the mold of philbio.net, which you can find at philphysics.net. He wanted me to be sure to note that the wiki uses, with permission, Christian Wüthrich's Philosophers of Physics list from his Taking up Spacetime website (https://takingupspacetime.wordpress.com/). And also that people should please feel free to contribute to the site, i.e., add people and programs that have been left off. Simple instructions on how to do this are listed on the front page.
UPDATE/REMINDER: This is a wiki! That means it is a community project, and everyone should feel free to be involved in keeping it accurate. Anyone can edit it. If you, or someone you know, is listed incorrectly (because they are retired, or are definitely moving to another school), you can fix this yourself!
I have always wanted to have a paper in Analysis or Thought. A really neat, short paper that is self-contained and makes a substantive philosophical point. Unfortunately, I tend to write articles of about 8000-9000 words, and first drafts are typically even longer. I've written some pre-read papers for conferences of 3000 words, but to get all the nuances in, they typically expand to 8000 words or more once they reach the article stage. What does it take to write brief philosophy papers? More generally, what does it take to write concisely?
Flash fiction is a style of fiction of extreme brevity: 500 words or fewer, typically 100-150 words. One very brief example, attributed (probably falsely) to Ernest Hemingway, goes as follows: "For sale: baby shoes, never worn." Or take flash poetry. The Dutch poet Vondel wrote the shortest poem in Dutch, and probably in any language, "U, nu" (literally, "you, now," or "now it's your turn"). It's also palindromic, and won him a poetry prize. How does one write flash fiction? David Gaffney advises flash fiction authors to cut down on character development and to jump right into the story. Place the denouement not at the end but in the middle; that way you have some space left to ponder with your readers the implications of what has happened, and you avoid having your story read like a joke, with a punchline at the end.
So how do you write really short philosophical pieces that are substantive pieces in their own right? Which brief philosophical self-contained pieces do you particularly admire?
My father, Kirkland R. Gable (born Ralph Schwitzgebel) died Sunday. Here are some things I want you to know about him.
Of teaching, he said that authentic education is less about textbooks, exams, and technical skills than about moving students "toward a bolder comprehension of what the world and themselves might become." He was a beloved psychology professor at California Lutheran University.
I have never known anyone, I think, who brought as much creative fun to teaching as he did. He gave out goofy prizes to students who scored well on his exams (e.g., a wind-up robot nun who breathed sparks of static electricity: "nunzilla"). Teaching about alcoholism, he would start by pouring himself a glass of wine (actually, water with food coloring), pouring more wine and acting drunker, arguing with himself, as the class proceeded. Teaching about child development, he would bring in my sister or me, and we would move our mouths like ventriloquist dummies as he stood behind us, talking about Piaget or parenting styles (and then he'd ask our opinion about parenting styles). Teaching about neuroanatomy, he brought in a brain jello mold, which he sliced up and passed around class for the students to eat ("yum! occipital cortex!"). Etc.
As a graduate student and then assistant professor at Harvard in the 1960s and 1970s, he shared the idealism of his mentors Timothy Leary and B.F. Skinner, who thought that through understanding the human mind we can transform and radically improve the human condition -- a vision he carried through his entire life.
I've been asked to write a review of Williamson's brand new book Tetralogue for the Times Higher Education. Here is what I've come up with so far. Comments are very welcome, as I still have some time before submitting the final version. (For more background on the book, here is a short video where Williamson explains the project.)
Disagreement in debates and discussions is an interesting phenomenon. On the one hand, having to justify your views and opinions vis-à-vis those who disagree with you is perhaps one of the best ways to induce a critical reevaluation of these views. On the other hand, it is far from clear that a clash of views will eventually lead to a consensus where the parties come to hold better views than the ones they held before. This is one of the promises of rational discourse, but one that is all too often not kept. What to do in situations of discursive deadlock?
Timothy Williamson’s Tetralogue is precisely an investigation into the merits and limits of rational debate. Four people holding very different views sit across from each other on a train and discuss a wide range of topics: the existence of witchcraft, the superiority and fallibilism of scientific reasoning, whether anyone can ever be sure of really knowing anything, what it means for a statement to be true, and many others. As one of the most influential philosophers currently active, Williamson is well placed to give the reader an overview of some of the main debates in recent philosophy as his characters argue their views.
No, this is not clickbait. Sometimes the headline tells the story. Gregory Holt, aka Abdul Maalik Muhammad (the Court uses the “also known as” language when referring to him, and I am not sure about the status of the two different names), an inmate in an Arkansas state prison, wanted to grow a beard in accordance with his religious beliefs. The Arkansas Department of Corrections has a strict no-beard policy, with a medical exception (allowing those with certain dermatological conditions to grow a beard of up to ¼ inch in length), on the grounds that inmates could smuggle contraband in their beards, and that prisoners must remain clean-shaven so that they can be easily identified. Sensing an uphill struggle, Muhammad proposed to grow a half-inch beard as a sort of “compromise,” as he put it. The Department would have none of it, and so Muhammad took the argument to court. Both the district court and the 8th Circuit thought that the required deference to the experts at the Dept. of Corrections outweighed any concerns they might have had.
In this post, I discuss in more detail the two main categories of genealogy that were mentioned in previous posts: vindicatory and subversive genealogies.
III. Applications of genealogy
In the spirit of the functionalist, goal-oriented approach adopted here, a pressing question now becomes: what’s the point of a genealogy? What kind of results do we obtain from performing a genealogical analysis of philosophical concepts? I’ve already mentioned vindication and subversion/debunking en passant, but now it is time to discuss applications of genealogy in a more systematic way.
III.1 Genealogy as vindicatory or as subversive
By now, it should be clear that genealogy is a rather plastic concept, one which can be (and has been) instantiated in a number of different ways. Craig offers a helpful description of a range of options:
[Genealogies] can be subversive, or vindicatory, of the doctrines or practices whose origins (factual, imaginary, and conjectural) they claim to describe. They may at the same time be explanatory, accounting for the existence of whatever it is that they vindicate or subvert. In theory, at least, they may be merely explanatory, evaluatively neutral (although as I shall shortly argue it is no accident that convincing examples are hard to find). They can remind us of the contingency of our institutions and standards, communicating a sense of how easily they might have been different, and of how different they might have been. Or they can have the opposite tendency, implying a kind of necessity: given a few basic facts about human nature and our conditions of life, this was the only way things could have turned out. (Craig 2007, 182)
Many philosophers refer to the game of cricket in their writings. Reading one of these references never fails to give me—a lifelong cricket fan—a little start of pleasure. Many years ago, as I began my graduate studies in philosophy in New York City, I stumbled upon J. L. Austin while reading up on speech acts for my philosophy of language class. I was delighted to note that Austin, in his discussion of performative utterances, provided the now-classic example of a cricket umpire saying "Out". I was a lonely graduate student then, and reading about cricket, even if only in the context of an academic discussion, was a small reprieve from that loneliness. I was happy to think that perhaps some of my fellow graduate students would want clarification about the example, which, of course, I would be only too happy to provide. (They didn't. They understood the example well enough from baseball: "steeerrrikkke! You're out!").
Ten days ago a new site was launched, “A User’s Guide to Philosophy Without Rankings.” The response to the site has been extremely rewarding. Not only have there been thousands of visitors, people are using the Guide as I had hoped: they are visiting sites that are mentioned in the Guide to learn more about graduate programs, as well as the PGR. A comment on Reddit’s philosophy page regarding the Guide sums up an important reason for the site:
“Thank you so much. I'm going to be applying next year and this is exactly what I'm looking for after I heard all of the controversy about the PGR.”
I want to thank colleagues who have begun to send in resources to post on the site. And I want to make a request: please send more! Like the new philosophy wikis, the Guide is in part an aggregator of information. The more information, the more helpful it can be. Please do weigh in. You can email me about the Guide at email@example.com or leave a comment on the site.
In this section, I pitch genealogy against its close cousin archeology in order to argue that genealogy really is what is needed for the general project of historically informed analyses of philosophical concepts that I am articulating. And naturally, this leads me to Foucault. As always, comments welcome! (This is the first time in something like 20 years that I've done anything remotely serious with Foucault's ideas: why did it take me so long? Lots of good stuff there.)
I hope to have argued more or less convincingly by now that, given the specific historicist conception of philosophical concepts I’ve just sketched, genealogy is a particularly suitable method for historically informed philosophical analysis. In the next section, a few specific examples will be provided. However, and as mentioned above, I take genealogy to be one among other such historical methods, so there are options. Why is genealogy a better option than the alternatives? In order to address this question, in this section I pitch genealogy against one of its main ‘competitors’ as a method for historical analysis: archeology. Naturally, this confrontation leads me directly to Foucault.
[UPDATE: It seems that my post is being interpreted by some as a criticism of the Charlie Hebdo collaborators. Nothing could be further from the truth; I align myself completely with their Enlightenment ideals -- so I'm intolerant too! -- and in fact deem humor to be a powerful tool to further these ideals. Moreover, perhaps it is worth stressing the obvious: their 'intolerance' does not in any way justify their barbaric execution. It is not *in any way* on a par with the intolerance of those who did not tolerate their humor and thus went on to kill them.]
I grew up in a thoroughly secular household (my father was a communist; I never had any kind of religious education). However, I did get a fair amount of exposure to religion through my grandmothers: my maternal grandmother was a practicing Protestant, and my paternal grandmother was a practicing Catholic. In my twenties, for a number of reasons, I became more and more drawn to Catholicism, or at least to a particular interpretation of Catholicism (with what can be described as a ‘buffet’ attitude: help yourself only to what seems appetizing to you). This led to me getting baptized, getting married in the Catholic church, and wearing a cross around my neck. (I have since distanced myself from Catholicism, in particular since I became a mother. It became clear to me that I could not give my daughters a Catholic ‘buffet’ upbringing, and that they would end up internalizing all the dogmas of this religion that I find deeply problematic.)
At the same time, upon moving to the Netherlands in the late 90s, I was confronted with the difficult relations between this country and its large population of immigrants, and their descendants, sharing a Muslim background, broadly speaking. At first, it all made no sense to me, coming from a country of immigrants (Brazil) where the very concept of being a ‘second-generation immigrant’ is quite strange. Then, many years ago (something like 13 years ago, I reckon), one day on the train I somehow started a conversation with a young man who appeared to be of Arab descent. I don’t quite recall how the conversation started, but one thing I remember very clearly: he said to me that it made him happy to see me wearing the Catholic cross around my neck. According to him, the problem with the Netherlands is the people who have no religion – not so much people who (like me at the time) had a religion different from his own.* This observation has stayed with me ever since.
Anyway, this long autobiographical prologue is meant to set the stage for some observations on the recent tragic events in Paris. As has been remarked by many commentators (see here, for example), the kind of humor practiced by the Charlie Hebdo cartoonists must be understood in the context of a long tradition of French political satire which is resolutely left-wing, secular, and atheist. Its origins go back at least to the 18th century; it was an integral part of the Enlightenment movement championed by people like Voltaire, who used humor to provoke social change. In particular in the context of totalitarian regimes, satire becomes an invaluable weapon.
Here's a short piece by the New Scientist on the status of Mochizuki's purported proof of the ABC conjecture. More than two years after the 500-page proof was made public, the mathematical community still hasn't been able to decide whether it's correct or not. (Recall my post on this from May 2013; little seems to have changed since then.)
Going back to my dialogical conception of mathematical proofs as involving a proponent who formulates the proof and opponents who must check it, this stalemate can be viewed from at least two perspectives: either Mochizuki is not trying hard enough as a proponent, or the mathematical community is not trying hard enough as opponent.
[Mochizuki] has also criticised the rest of the community for not studying his work in detail, and says most other mathematicians are "simply not qualified" to issue a definitive statement on the proof unless they start from the very basics of his theory.
Some mathematicians say Mochizuki must do more to explain his work, like simplifying his notes or lecturing abroad.
(Of course, it may well be that both are the case!) And so for now the proof remains in limbo, as the New Scientist piece aptly puts it. Mathematics, oh so human!
After viewing the rather disappointing Chopin: Desire for Love a couple of years ago, I was struck again by how difficult it seems to be to make movies about artists, writers, or perhaps creators of all kinds. My viewing also served to remind me that movies about philosophers' lives are exceedingly rare, and the few that have been made--or rather, that I am aware of--haven't exactly sent cinephiles or students of philosophy running to the nearest box office: e.g., Derek Jarman's Wittgenstein was a disappointment, and the less said about the atrocious and unwatchable When Nietzsche Wept, the better.
The great historian of logic and mathematics Ivor Grattan-Guinness passed away about a month ago, aged 73. I only heard it yesterday, when Stephen Read posted a link to the Guardian obituary on Facebook. From the obituary:
He rescued the moribund journal Annals of Science, founded the journal History and Philosophy of Logic, and was on the board of Historia Mathematica from its inception. A member of the council of the Society for Psychical Research, he wrote Psychical Research: A Guide to Its History (1982). In 1971 the British Society for the History of Mathematics was founded: Ivor served as its president (1986-88) and instituted a formal constitution.
Indeed, many of us owe him eternal gratitude for founding the journal History and Philosophy of Logic, which continues to be the main journal for studies combining historical and philosophical perspectives on logic. Ivor's work and scholarship spans over an impressive range of topics and areas, and is bound to continue to influence many generations of scholars to come. It is a great loss.
A couple of weeks ago, in a post on Theranos, which has been developing a new - and very fast and cheap - technique for blood testing, I mentioned the woes of 23andMe.com, a site that originally offered direct-to-consumer genetic testing before the FDA shut it down for making medical claims (the FDA letter is here). As 23andMe put it (in a pre-shutdown version of its website), customers might “gain insight into your traits, from baldness to muscle performance. Discover risk factors for 97 diseases. Know your predicted response to drugs, from blood thinners to coffee. And uncover your ancestral origin.” That sort of claim, and its dubious scientific basis, prompted the FDA’s shutdown order, which also expressed concern about false positives and the difficulty of understanding negative results in isolation. In this, the FDA was echoing the concerns of bioethicists, who have generally been alarmed by the spread of genetic testing outside of a clinical context. As a 2011 piece summarized the concerns, the lack of regulatory oversight of these practices, which trade upon the public’s fear of cancer and limited understanding of genetics, creates potential problems with inappropriate referrals, misinterpretation of results, excessive anxiety about positives, false reassurance about negatives, and even confusion between diagnostic genetic variants and surrogate genetic markers (which account for very little risk).
Daniel Zamora’s interview in Jacobin (following the publication of a book he edited), in which he claims that Foucault ended up de facto endorsing neoliberalism, has generated a lot of renewed discussion about Foucault’s late work. Over at An und für sich, Mark William Westmoreland has organized a series of posts responding to Zamora. I’m one of the contributors; the others are Verena Erlenbusch (Memphis), Thomas Nail (Denver), and Johanna Oksala (Helsinki). My contribution is cross-posted below, but you really should start with the interview and then read Erlenbusch’s post – she lays out the context of the controversy, and discusses the book (which came out fairly recently, and which hasn’t been translated yet) in considerable detail.
I’ll update with links to Nail’s and Oksala’s contributions when they’re up.
Those of you who can’t get yourselves to be offline even during the Xmas break have most likely been following the events involving Brian Leiter (BL), Jonathan Ichikawa (JI) and Carrie Jenkins (CJ), and the lawyers’ letters regarding the legal measures BL says he is prepared to take as a reaction to what he perceives as defamatory statements by (or related to) Ichikawa and Jenkins. As a matter of fact, a blog post of mine back in July seems to have played a small role in the unfolding of these unfortunate developments, and so I deem it appropriate to add a few observations of my own.
On multiple occasions (and in particular in comments at Facebook posts), Leiter claims to have been directed to Jenkins’ ‘Day One’ pledge by this blog post of mine of July 2nd 2014, in defense of Carolyn Dicey Jennings. It begins:
Most readers have probably been following the controversy involving Carolyn Dicey Jennings and Brian Leiter concerning the job placement data post where Carolyn Dicey Jennings compares her analysis of the data she has assembled with the PGR Rank. There have been a number of people reacting to what many perceived as Brian Leiter’s excessively personalized attack of Carolyn Dicey Jennings’s analysis, such as in Daily Nous, and this post by UBC’s Carrie Ichikawa Jenkins on guidelines for academic professional conduct (the latter is not an explicit defense of Carolyn Dicey Jennings, but the message is clear enough, I think). [emphasis added]
Leiter claims that this observation is what led him to Jenkins’ post in the first place, which he perceived as a direct attack on him. He also claims that this is what the passage above implies, and continues to repeat that the post “was intended by Jenkins as a criticism of me (as everyone at the time knew, and as one of her friends has now admitted), and thus explains my private response, which she chose to make public.” However, there is nothing explicitly indicating that the post was intended as a criticism of BL beyond the fact that he read it this way and keeps repeating it.
According to a broad class of materialist views, conscious experiences -- such as the experience of pain -- do not supervene on the local physical state of the being who is having those conscious experiences. Rather, they depend in part on the past evolutionary or learning history of the organism (Fred Dretske) or on what is "normal" for members of its group (David Lewis). These dependencies are not just causal but metaphysical: the very same (locally defined) brain state might be experienced as pain by one organism and as non-pain by another, in virtue of differences in the organisms' past history or group membership, even if the two organisms are molecule-for-molecule identical at the moment in question.
Donald Davidson's Swampman example is typically used to make this point vivid: You visit a swamp. Lightning strikes, killing you. Simultaneously, through an incredibly-low-odds freak of quantum chance, a being who is molecule-for-molecule identical to you emerges from the swamp. Does this randomly congealed Swampman, who lacks any learning history or evolutionary history, experience pain when it stubs its toe? Many people seem to have the hunch or intuition that, yes, it would; but any externalist who thinks that consciousness requires a history will have to say no. Dretske makes clear in his 1995 book that he is quite willing to accept this consequence: Swampman feels no pain.
As most readers probably know, the 2014 Philosophical Gourmet Report (PGR), a "Ranking of Graduate Programs in Philosophy in the English-Speaking World," was recently published; the rankings purport to be "primarily measures of faculty quality and reputation." Mitchell Aboulafia has done a series of postings analyzing the 2014 PGR. If Aboulafia's analyses are accurate, which they seem to me to be, they show why the rankings produced by the 2014 PGR ought not to be relied on.
Some might think that some of these problems are at least partially the result of the September Statement. However, the editors of the PGR made the decision to publish the report and seem to stand by it, so the reasons behind the problems (whatever they might be) seem beside the point.