I was asked to write a review of Terry Parsons' Articulating Medieval Logic for the Australasian Journal of Philosophy. This is what I've come up with so far. Comments welcome!
Scholars working on (Latin) medieval logic can be viewed as populating a spectrum. At one extremity are those who adopt a purely historical and textual approach to the material: they are the ones who produce the invaluable modern editions of important texts, without which the field would to a great extent simply not exist; they also typically seek to place the doctrines presented in the texts in a broader historical context. At the other extremity are those who study the medieval theories first and foremost from the point of view of modern philosophical and logical concerns; various techniques of formalization are then employed to ‘translate’ the medieval theories into something more intelligible to the modern non-historian philosopher. Between the two extremes one encounters a variety of positions. (Notice that one and the same scholar can at times wear the historian’s hat, and at other times the systematic philosopher’s hat.) For those adopting one of the many intermediary positions, life can be hard at times: when trying to combine the two paradigms, these scholars sometimes end up displeasing everyone (speaking from personal experience).
Terence Parsons’ Articulating Medieval Logic occupies one of these intermediate positions, but very close to the second extremity; indeed, it represents the daring attempt to combine the author’s expertise in natural language semantics, linguistics, and modern philosophy with his interest in medieval logical theories (which arose in particular from his decade-long collaboration with Calvin Normore, to whom the book is dedicated). For scholars of Latin medieval logic, the fact that such a distinguished expert in contemporary philosophy and linguistics became interested in these medieval theories only confirms what we’ve known all along: medieval logical theories have intrinsic systematic interest; they are not only curious museum pieces.
Although Parsons is not the first to employ modern logical techniques to analyze medieval theories, his approach is quite distinctive (one might even say idiosyncratic). It seems fair to say that nobody has ever before attempted to achieve what he wants to achieve with this book. A passage from the book’s Introduction is quite revealing with respect to its goals:
In a recent survey, I asked philosophers about their submissions to journals, to get a sense of what journals people submit to and also what factors might influence their decisions on where to submit papers. Specifically, I wanted to know how frequently people submit their work to the top 5 journals in philosophy, which are usually regarded (according to polls) as the best journals in the field: Philosophical Review, Journal of Philosophy, Mind, Noûs and Philosophy and Phenomenological Research. Increasingly, publications in these journals are regarded as a marker of excellence.
However, there are several hurdles to getting published in the top 5. The acceptance rates are forbidding: I don’t have exact numbers, but some journals in the top 20 have published acceptance rates as low as 5% (e.g., Australasian Journal of Philosophy, Canadian Journal of Philosophy). Presumably, the acceptance rates in the top 5 are lower still, making them more difficult to get into than Science or Nature. Also, review times at some of these journals tend to be longer than the standard 3 months. Those journals that are quicker close submissions for half the year, and unfortunately, they do so concurrently (otherwise, as a senior philosopher pointed out to me, they wouldn’t have the lower submission rates they are aiming for).
251 philosophers completed the survey. Below the fold is a summary of some results. I asked respondents to say how many papers they had submitted to top-5 journals, and to any refereed journal, over the past year (i.e., since September 2013).
Over the past three years I have collected and reported on placement data for positions in academic philosophy. (Interested readers can find past posts here at New APPS under the "placement data" category, two of which have been updated with the new data, several posts at ProPhilosophy, or the very first post on placement at the Philosophy Smoker.) This year, placement data will be gathered, organized, and reported on by the following committee of volunteers (listed in alphabetical order):
Over the next academic year, we aim to create a website, which will be parked at placementdata.com. This website will include a form for gathering data, a searchable database, and reports on placement data. Until that time, I am suspending updates to the Excel spreadsheet, which contains much of the data used in the past few years, plus the updates I have received over the past few months. (Many thanks to Justin Lillge for incorporating the bulk of these updates into the spreadsheet!) When the website is ready, departments will be able to update their placement data through an embeddable form. Stay tuned for these links in the coming months!
See here. The last MacArthur "genius" fellowship awarded to someone they classified as a philosopher was in 1993.
On the whole, scholars outside of philosophy tend, I think, not to see much value in what most professional philosophers do. The MacArthur drought is one reflection and measure of that.
Not that prizes matter. Sheesh. We're too busy thinking about important stuff like whether the external world exists (82% of target faculty agree that it does). The MacArthur folks probably think that climate change is a more important topic. But if the external world doesn't exist then the climate can't change, can it now? So there!
One may be forgiven for thinking, on reading Brian Leiter's diatribe against identity politics and the danger it poses for academic philosophy, that there is a swell in 'consumer demand' for expanding the philosophy curriculum in questionable directions and for the wrong reasons. To clarify: the 'consumer' in this case is graduate students dissatisfied with the lack of engagement on the part of Anglo-American philosophers with non-Western philosophical traditions. Or so the story goes.
So Leiter asks: "should we really add East Asian philosophers to the curriculum to satisfy the consumer demands of Asian students rather than because these philosophers are interesting and important in their own right?"
The reality is that there is no such large-scale demand, certainly not in top graduate programs, because one could not have gotten in by announcing one's intention of working in Indian or Chinese philosophy. That is not to deny that there has been talk of opening up the discipline to non-Western traditions and perspectives (more on this below). But even the advocates don't think it can be that easily accomplished (not, at least, through curricular reform alone). Rather, the sense is that some change in the demographics of philosophy departments would have to take place for cross-cultural philosophical reflection to become the norm. Alas, we are a long way from that.
I began my academic philosophy career as a 'logician.' I wrote a dissertation on belief revision, and was advised by a brilliant logician, Rohit Parikh, someone equally comfortable in the departments of computer science, philosophy and mathematics. Belief revision (or 'theory change' if you prefer) is a topic of interest to mathematicians, logicians, and computer scientists. Because of the last-named demographic, I was able to apply for, and be awarded, a post-doctoral fellowship with a logics for artificial intelligence group in Sydney, Australia. (In my two years on the philosophy job market, I sent out one hundred and fourteen applications, and scored precisely zero interviews. My visits to the APA General Meeting in 1999 and 2001 were among the most dispiriting experiences of my life; I have never attended that forum again. The scars run too deep.)
I would be very grateful if NewApps readers who are philosophers could fill out the following brief, anonymous survey on journal submissions. The aim is to get a picture of what kinds of journals you submit to, especially to the journals that are regarded as the top general philosophy journals. https://surveys.qualtrics.com/SE/?SID=SV_6lM1JE4Q88BruhD
Results will be posted when the data have been processed.
In December, I will be presenting at the Aesthetics in Mathematics conference in Norwich. The title of my talk is Beauty, explanation, and persuasion in mathematical proofs, and to be honest, at this point there is not much more to it than the title… However, the idea I will try to develop is that many, perhaps even most, of the features we associate with beauty in mathematical proofs can be subsumed under the ideal of explanatory persuasion, which I take to be the essence of mathematical proofs.
As some readers may recall, in my current research I adopt a dialogical perspective to raise a functionalist question: what is the point of mathematical proofs? Why do we bother formulating mathematical proofs at all? The general hypothesis is that most of the defining criteria for what counts as a mathematical proof – and in particular, a good mathematical proof – can be explained in terms of the (presumed) ultimate function of a mathematical proof, namely that of convincing an interlocutor that the conclusion of the proof is true (given the truth of the premises) by showing why that is the case. (See also this recent edited volume on argumentation in mathematics.) Thus, a proof seeks not only to force the interlocutor to grant the conclusion if she has granted the premises; it seeks also to reveal something about the mathematical concepts involved to the interlocutor so that she also apprehends what makes the conclusion true – its causes, as it were. On this conception of proof, beauty may well play an important role, but this role will be subsumed under the ideal of explanatory persuasion.
The characters in Nevil Shute's On The Beach know that, barring natural disasters and other unforeseen circumstances, they will die in a few months' time--in September 1963--of radiation sickness, brought on by the thirty-seven day thermonuclear war that has already wiped out life in the northern hemisphere. They know its painful and uncomfortable symptoms--diarrhea and vomiting--will resemble those of cholera; they have the option to commit suicide by using a pill--supplied by the government and made available at local chemists. All humans know they will die; these ones know when and how. (As John Osborne notes, "You've always known that you were going to die sometime. Well, now you know when.")
Perhaps unsurprisingly, last week, during a classroom discussion centered on Shute's novel, the following question slowly hove into view: Would you want to know the time and manner of your death? We live our lives with the knowledge of our certain death; would we want to further refine it in this fashion? Why or why not? (We could also introduce another twist by asking whether, if possessed of this knowledge with regards to someone else, we should tell them about it, without withholding any details. A variant of this situation occurs quite often, I think, in some medical contexts involving terminally ill patients and their doctors. Other twists include the knowledge of the details of, not our deaths, but those of loved ones.)
The answers to this cluster of questions are likely to be quite revealing. Knowledge of the time and manner of death may permit a settling of affairs, a more directed planning of one's activities, a more systematic prioritization of one's objectives; it may induce an urgency into our lives that some may find currently lacking. It may have a calming effect on some, but it may also induce paralyzing anxiety in others; the fear of the manner of death--perhaps gruesome dismemberment for some, or brutal murder for others--may have such an effect.
Why is the raising and answering of this question a philosophical exercise? Perhaps because these answers reveal valuations crucial to the chosen path of conduct in our lives--and what could be more fundamental a philosophical question than 'What is the good life?' Perhaps because in answering a question about whether some item of knowledge is desirable or not, we may possibly articulate limits on what should be known by us--a puzzle that, in the past, often confronted those who worked on thermonuclear weapons, or, these days, those who work on cloning technologies. Answering this question could be an introspective and retrospective exercise, forcing not just a look inwards at our beliefs and desires, but also a look backwards at the lives we have lived thus far, an act likely to be imbued with an ethical and moral assessment. Such an examination of our beliefs and our plans for our lives, and the manner in which we would choose to live them, seems a fairly fundamental philosophical activity, perhaps even of the kind that Socrates was always urging on us.
Although over half the world's population are theists (according to Pew survey results), God's existence isn't an obvious fact, not even to those who sincerely believe he exists. Or, as Keith DeRose recently put it, even if God exists, we don't know that he does. This presents a puzzle for theists: why doesn't God make his existence more unambiguously known? The problem of divine hiddenness has long been recognized by theists (for instance, Psalm 22), but only fairly recently has it become the focus of debate in philosophy of religion.
In several works, J.L. Schellenberg has argued that divine hiddenness constitutes evidence against God's existence. A simple version of this argument goes as follows (Schellenberg 1993, 83):
1. If there is a God, he is perfectly loving.
2. If a perfectly loving God exists, reasonable non-belief in the existence of God does not occur.
3. Reasonable non-belief in the existence of God does occur.
4. No perfectly loving God exists.
5. There is no God.
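To make the logical structure explicit, here is a minimal schematic rendering (my own reconstruction and notation, not Schellenberg's), writing G for 'God exists', L for 'a perfectly loving God exists', and R for 'reasonable non-belief occurs':

1. G → L
2. L → ¬R
3. R
4. ¬L (from 2 and 3, by modus tollens)
5. ¬G (from 1 and 4, by modus tollens)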
The controversial premises are 2 and 3. Authors like Swinburne and Murray have argued against premise 2: God may have reasons to make his existence less obviously true. Their arguments state that if we knew God existed, we wouldn't be able to make morally significant choices. This is an empirical claim. Obviously, it cannot be experimentally tested directly. However, research in the cognitive science of religion (CSR) on the relationship between belief in God and morality may indicate whether or not this is a plausible claim.
How we ought to understand the terms "civility" and "collegiality" and to what extent they can be enforced as professional norms are dominating discussions in academic journalism and the academic blogosphere right now. (So much so, in fact, that it's practically impossible for me to select among the literally hundreds of recent articles/posts and provide for you links to the most representative here.) Of course, the efficient cause of civility/collegiality debates' meteoric rise to prominence is the controversy surrounding Dr. Steven Salaita's firing (or de-hiring, depending on your read of the situation) by the University of Illinois only a month ago, but there are a host of longstanding, deeply contentious and previously seething-just-below-the-surface agendas that have been given just enough air now by the Salaita case to fan their smoldering duff into a blazing fire.
In the interest of full disclosure, I'll just note here at the start that I articulated my concerns about (and opposition to) policing norms of civility/collegiality or otherwise instituting "codes" to enforce such norms some months ago (March 2014) in a piece I co-authored with Edward Kazarian on this blog here (and reproduced on the NewAPPS site) entitled "Please do NOT revise your tone." My concern was then, as it remains still today, that instituting or policing norms of civility/collegiality is far more likely to protect objectionable behavior/speech by those who already possess the power to avoid sanction and, more importantly, is likely to further disempower those in vulnerable professional positions by effectively providing a back-door manner of sanctioning what may be their otherwise legitimately critical behaviors/speech. I'm particularly sympathetic to the recent piece "Civility is for Suckers" in Salon by David Palumbo-Liu (Stanford) who retraces the case-history of civility and free speech and concludes, rightly in my view, that "civility is in the eye of the powerful."
On Friday Sept. 5, Chancellor Dirks of UC Berkeley circulated an open statement to his campus community that sought to define the limits of appropriate debate at Berkeley. Issued as the campus approaches the 50th anniversary of the Free Speech Movement, Chancellor Dirks' statement, with its evocation of civility, echoes language recently used by the Chancellor of the University of Illinois, Urbana and the Board of Trustees of the University of Illinois (especially its Chair Christopher Kennedy) concerning the refused appointment of Steven Salaita. It also mirrors language in the effort by the University of Kansas Board of Regents to regulate social media speech and the Penn State administration's new statement on civility. Although each of these administrative statements has responded to specific local events, the repetitive invocation of "civil" and "civility" to set limits to acceptable speech bespeaks a broader and deeper challenge to intellectual freedom on college and university campuses.
The CUCFA Board has been gravely concerned about the rise of this discourse on civility in the past few months, but we never expected it to come from the Chancellor of UC Berkeley, the birthplace of the Free Speech Movement. To define “free speech and civility” as “two sides of the same coin,” and to distinguish between “free speech and political advocacy” as Chancellor Dirks does in his text, not only turns things upside down, but it does so in keeping with a relentless erosion of shared governance in the UC system, and the systemic downgrading of faculty’s rights and prerogatives. Chancellor Dirks errs when he conflates free speech and civility because, while civility and the exercise of free speech may coexist harmoniously, the right to free speech not only permits, but is designed to protect uncivil speech. Similarly, Chancellor Dirks is also wrong when he affirms that there exists a boundary between “free speech and political advocacy” because political advocacy is the apotheosis of free speech, and there is no “demagoguery” exception to the First Amendment.
I would be interested in hearing from other folks on their use of fiction in their class reading lists. Where and how did you do so? What was your experience like? Links to sample syllabi would be awesome.
Several months ago, I argued here that big data is going to make a big mess of privacy – primarily because of a distinction between “data,” understood as the effluvia of daily life, generated by such activities as moving around town or making phone calls, and “information,” which implies some sort of meaning. Privacy protects the disclosure of “information,” since this can be an intentional act; big data allows surveillance of areas traditionally considered private without any act of disclosure, since the analytic computers will take care of turning the data into information. My standard talking-point here is a recent study of Facebook likes which determined that all sorts of non-trivial correlations could be deduced from what people “like:”
I am working on a paper now (together with my student Leon Geerdink, for a volume on the history of early analytic philosophy being edited by Chris Pincock and Sandra Lapointe) where I elaborate on a hypothesis first presented in a blog post more than 3 years ago: that the history of analytic philosophy can to a large extent be understood as the often uneasy interplay between Russellianism and Mooreanism, in particular with respect to their respective stances on the role of common sense for philosophical inquiry. In the first part of the paper, we present an (admittedly superficial and selective) overview of some recent debates on the role of intuitions and common sense in philosophical methodology; in the second part we discuss Moore and Russell specifically; and in the third part we discuss what I take to be another prominent instantiation of the opposition between Russellianism and Mooreanism: the debate between Carnap and Strawson on the notion of explication.
I am posting here a draft of the first part, i.e. the overview of recent debates. I would be very interested to hear what readers think of it: is it at least roughly correct, even if certainly partial and incomplete? Are the categories I carved out to make sense of these debates helpful? Can they be improved? Feedback would be most welcome!
UPDATE: I forgot to mention that a paper that has been extremely useful for me to organize my thoughts on this topic is Michael Della Rocca's 'The taming of philosophy', which gets quite extensively discussed in other sections of my paper with Leon. It is an excellent paper. However, there is still a substantive disagreement between Della Rocca and us, namely that we think there is a lot more tension between Russell and Moore on the question of common sense's role for philosophy than Della Rocca recognizes (he describes both Moore and Russell as fans of common sense).
The first reading in my Philosophical Issues in Literature class this semester--which focuses on the post-apocalyptic novel--is Nevil Shute's On The Beach. I expected that classroom discussions would, more often than not, pick up on moral, ethical, and political issues; I was pleasantly surprised to find that the very first class meeting--on Monday--homed in on an epistemic issue, more specifically, one of normative epistemology: What should we believe? Are beliefs that comfort us--but that are otherwise without adequate evidentiary foundation--good ones? Can they ever be? Under what circumstances?
Dwight Towers, the American Navy submarine captain, is one of those unfortunates who have, thanks to nuclear war, lost their all--their homes, their families--in the northern hemisphere. In Towers' case, this means his home in Connecticut, and his wife and child. Indeed, this loss provokes his host in Australia, Peter Holmes, to take the precaution of arranging extra companionship--as distraction--for him when Holmes invites Towers to his home for dinner. But Towers does not seem to regard his family as lost. As he attends a church service, Shute grants us access to his thoughts about home:
He would be going back to them in September, home from his travels. He would see them all again in less than nine months time. They must not feel, when he rejoined them, that he was out of touch, or that he had forgotten things that were important in their lives. Junior must have grown quite a bit; kids did at that age.
Later, Shute does the same with Moira Davidson, Towers' new-found female friend in Melbourne, who has seen the photographs of his family in his cabin:
She had known for some time that his wife and family were very real to him, more real by far than the half-life in a far corner of the world that had been forced upon him since the war. The devastation of the northern hemisphere was not real to him, as it was not real to her. He had seen nothing of the destruction of the war, as she had not; in thinking of his wife and his home it was impossible for him to visualise them in any other circumstances than those in which he had left them. He had little imagination, and that formed a solid core for his contentment in Australia.
Towers makes this explicit:
"I suppose you think I'm nuts," he said heavily. "But that's the way I see it, and I can't seem to think about it any other way."
These reflections bring us, as should be evident, to the Clifford-James debate. I have taught that debate before--in introductory philosophy classes and in philosophy of religion. The discussions--and judgments--it provokes are often quite illuminating; Monday's was no exception. The novelistic embedding of these attitudes in the context of a post-apocalyptic situation also enabled a segue into the broader ethics of 'coping strategies' and escapism, like, for instance, Moira Davidson's palliative heavy drinking.
I expect this issue to recur during this semester's discussions; I look forward to seeing how my students respond to the varied treatments of it that my reading list will afford them.
Especially given the attention we've paid to the case here (see our new tag, and also Samir's posts here and here, and Eric Schwitzgebel's here), it is important to note that Steven Salaita had a press conference today, at which he issued the following statement.
The full audio of the statement and the press conference is here. And in addition, there's a short video (embedded below) of Salaita addressing two of the core questions that have been raised in the affair, that of the nature of his engagements on Twitter and that of his approach in the classroom.
[Update: here is the full video of the event, including Salaita's full statement and the press conference.]
Finally, as many of you surely know, the Board of Trustees at UIUC is meeting on Thursday. This is a crucial day, and it is important to produce as many visible expressions of support as possible in advance of the Trustees' meeting. If you have not already done so, there is still time for you to email the Trustees. Corey Robin's post on how to do so is here. Also, John Protevi is managing the philosophers' boycott statement (see here for info on how to add your name).
One of the few productive things that came out of the recent kerfuffle about ableism was a useful discussion of where we should draw the line between what seem like acceptable uses of terms like "blind review," on the one hand, and obviously offensive terms like "spaz," on the other. And if we can find that line, why does it fall where we think it does?
I can think of three factors that might go into such a decision:
1. One is whether the term is being used pejoratively. So, calling an argument lame is bad because I am disparaging the argument. Saying "Justice is blind" is ok, because this is a positive characteristic of justice. (The first example was given by Keith DeRose on Facebook in response to Eric S's proposal along these lines. The second was Mohan M's in a comment in a thread here.)
2. A second is whether the term has a non-metaphorical use that is not related to disability. I don't think the word "blind" is first and foremost a word for a disability. It is a word for being obscured from sight. Blindfold is not referencing a disability at all. The disability "blindness" is only one source of blinding. So, on this view, it's ok to say that someone is blind to important considerations.
3. A third thing we might cite is a long history of detachment. Calling an idea "crazy" might seem ok to you because it has referred to a colloquial category for so long in the absence of referring to a clinical condition.
What do people think? Are any or all of these principled reasons one could use to distinguish offensive terms from acceptable ones?
New APPS readers probably remember Helen De Cruz's excellent post on the polarized debate surrounding evolutionary science (which was picked up by NPR), as well as Roberta Millstein's follow-up post on the perhaps equally polarized debate concerning climate change. Both posts cite the work of Dan Kahan, who has a distinct take on these issues:
"I study risk perception and science communication. I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society. It's this: What people “believe” about global warming doesn’t reflect what they know; it expresses who they are."
I just attended a talk by Michael Ranney, who opposes Kahan's position. In Ranney's view, communicating the mechanism of global climate change is enough to change the minds of people on both sides of the political spectrum. (Check out the videos!) Ranney shows, surprisingly, that just about no one understands the mechanism of climate change (Study 1). Further, he shows that revealing that mechanism changes participants' minds about climate change (Study 2).