In the coming weeks I hope to be updating you with more details and analyses, but for now I am simply announcing that the final report for APDA is complete. Feel free to ask questions or comment below.
*Update: we noticed an error in one of the charts and some potentially confusing language in that section, so we have updated the report at the link.
What does philosophy have to say about difficult life decisions? Recently there has been considerable interest in what philosophers have to say on this; for example, Ruth Chang’s TED talk on how to make hard choices has had over 3.5 million views. And the new book by L.A. Paul, Transformative Experience, has been making quite a splash in the American mainstream media, with references in venues such as the New Yorker and the New York Times. A transformative experience is one that changes the person who undergoes it so profoundly that she acquires a new self altogether. (See here for the shorter, article version of this idea.) The quintessential transformative experience for Paul is becoming a parent; other examples include the death of a loved one and emigrating to a new country.
One of the upshots of this conception of transformative experience is that, for many of the most important decisions in life, we simply have no way of evaluating the pros and cons of each side, because we have no idea what we’re getting into. As the influential journalist David Brooks put it in the New York Times,
Paul’s point is that we’re fundamentally ignorant about many of the biggest choices of our lives and that it’s not possible to make purely rational decisions. “You shouldn’t fool yourself,” she writes. “You have no idea what you are getting into.”
A fully articulated philosophy of the novel does not really get going until Georg Lukács wrote Theory of the Novel during World War One, though it was not published until 1921, by which time Lukács’ political world view had changed. There may be some large-scale work on the philosophy of the novel before Lukács that I have missed, but there is nothing which has lasted as a point of reference.
Of course there is important work on the philosophy of the novel before Lukács in remarks by Friedrich Schlegel (as well as other Romantic ironists), G.W.F. Hegel, and F.W.J. Schelling. Going back further there is some work which indirectly addresses the novel. The New Science of Giambattista Vico is the most obviously relevant, since he gives great importance to epic, particularly the epics attributed to Homer. It is not only the ways in which the novel continues the epic that give Vico relevance: his placing of the epic in the context of a transition from a heroic-aristocratic world to a legal-democratic world sets up thinking about the novel, and might itself have been influenced by the early modern evolution of the novel as a literary form.
The whole development of aesthetic philosophy around ideas, sentiment, complexity, judgement, community, and sympathy in Alexander Baumgarten, Anthony Ashley Cooper (Shaftesbury), Francis Hutcheson, Edmund Burke, David Hume, and Immanuel Kant, even though these thinkers do not attach importance to the novel, can be seen in the context of the growing importance of the novel as a form of narrative which incorporates all of these in the prose of a world of changeable and plural points of view, in which the nature of subjective experience is always an issue.
As the general silence of eighteenth century philosophers on novelistic form suggests, the novel had limited status as an aesthetic type. The end of the century sees some discussion of the novel, with the Romantics tending towards an elevated view, while Hegel and Schelling take a more sceptical view, though Schelling does at least allow some greatness to Don Quixote.
Kierkegaard does not exactly suggest directly that the novel be taken as the aesthetic equal of any literary form, or even devote a major text to that topic, but the novel acquires considerable significance across his writing. Taking the early From the Papers of One Still Living first, Kierkegaard addresses the ‘adult’ novels of Hans Christian Andersen, little known today (certainly in the English-speaking world). They suggest to Kierkegaard a world of uncertainty, of uncertain judgements and changeable perspectives. This at least establishes the novel as important in the culture of the modern world.
Sometimes this work of Kierkegaard’s is regarded as merely judging contemporary novels by the standards of a previously established aesthetic; but since different commentators give the supposed aesthetic different sources, including Hegel and Ludwig Tieck, it is perhaps possible to say that Kierkegaard made an original contribution. That contribution continues a few years later in The Concept of Irony, which is more focused on Socrates than on the modern novel, but does contain notable discussion of the novel, suggesting amongst other things that thinking about Socratic irony may help us understand the novel. The discussion of the novel there is also a discussion of German Idealism and the Romantics, with Schlegel’s novel Lucinde bringing the theory into relation with aesthetic practice.
Again, this is sometimes seen as a not especially original discussion, since Kierkegaard often seems to be following Hegel’s Aesthetics, including Hegel’s relative endorsement of Karl Solger amongst the Romantics. The Kierkegaardian use of Hegelian references should not be confused with a Hegelian point of view, however, as Kierkegaard continues the earlier discussion of the nature of subjectivity, which is quite distinct from that in Hegel.
More indirect contributions to the understanding of the novel can be found in the slightly later Either/Or in discussions of tragedy and opera. Here Kierkegaard gives a view of the distinction between ancient and modern literary forms, also looking at Mozart’s Don Giovanni in terms related to his earlier understanding of the novel.
He returns to the novel as a form a bit later in A Literary Review, in which he discusses the contemporary novels of Thomasine Gyllembourg, which he touched upon in his earlier discussion of H.C. Andersen. Here Kierkegaard suggests both that the novel is a limited form in dealing with the absolute, but also that it does show important features of modernity including a longing for the absolute in the opposition between monarchist and revolutionary politics.
The issue of Kierkegaard and the philosophy of the novel must also take account of the novelistic nature of some of his own writing. This is most clearly the case in Repetition, which is a short novel, but also applies to Either/Or and its sequel Stages on Life’s Way. These are not unified narratives on the lines of Repetition, but they are presented as fictional products containing a large amount of narrative mixed in with the essays by characters in these works. ‘Diary of a Seducer’ in Either/Or can be taken as a discrete novel within the book as a whole and is indeed often published separately. The status of literary aesthetics is not the guiding issue of Kierkegaard’s writing, but it plays an important role, particularly with regard to the status of the novel as an object of philosophy and as a way of writing philosophy.
This is the second and final part of my 'brief introduction' to formal methods in philosophy to appear in the forthcoming Bloomsbury Philosophical Methodology Reader, being edited by Joachim Horvath. (Part I is here.) In this part I present in more detail the four papers included in the formal methods section, namely Tarski's 'On the concept of following logically', excerpts from Carnap's Logical Foundations of Probability, Hansson's 2000 'Formalization in philosophy', and a commissioned new piece by Michael Titelbaum focusing in particular (though not exclusively) on Bayesian epistemology.
Some of the pioneers in formal/mathematical approaches to philosophical questions had a number of interesting things to say on the issue of what counts as an adequate formalization, in particular Tarski and Carnap – hence the inclusion of pieces by each of them in the present volume. Indeed, both in his paper on truth and in his paper on logical consequence (in the 1930s), Tarski started out with an informal notion and then sought to develop an appropriate formal account of it. In the case of truth, the starting point was the correspondence conception of truth, which he claimed dated back to Aristotle. In the case of logical consequence, he was somewhat less precise and referred to the ‘common’ or ‘everyday’ notion of logical consequence.
These two conceptual starting points allowed Tarski to formulate what he described as ‘conditions of material adequacy’ for the formal accounts. He also formulated criteria of formal correctness, which pertain to the internal exactness of the formal theory. In the case of truth, the basic condition of material adequacy was the famous T-schema; in the case of logical consequence, the properties of necessary truth-preservation and of validity-preserving schematic substitution. Unsurprisingly, the formal theories he then went on to develop both passed the test of material adequacy he had formulated himself. But there is nothing particularly ad hoc about this, since the conceptual core of the notions he was after was presumably captured in these conditions, which thus could serve as conceptual ‘guides’ for the formulation of the formal theories.
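To fix ideas, the two material adequacy conditions can be rendered schematically as follows (the notation here is my own gloss, not a quotation from Tarski):

```latex
% T-schema (material adequacy for truth): every instance must be
% derivable from an adequate truth definition. Here \ulcorner p \urcorner
% names a sentence of the object language and p is its translation.
\mathrm{Tr}(\ulcorner p \urcorner) \leftrightarrow p

% Material adequacy for logical consequence (necessary truth-preservation):
% X follows from the class K of sentences iff every model of K
% is also a model of X.
K \models X \quad \text{iff every model of } K \text{ is also a model of } X
```

The point made at the end of the paragraph can then be restated: these schemata function as fixed targets, so a proposed formal theory counts as materially adequate just in case it delivers every instance of them.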
We don’t access the internet directly – it’s always through some sort of intermediary software. For that reason, it matters – a lot – what the intermediary does, and what kind of interactivity it promotes. Concern about this dates back at least to a 1996 paper (finally published in 2000) by Lucas Introna and Helen Nissenbaum called “Why the Politics of Search Engines Matters.” More recently, Tarleton Gillespie has emerged as a major voice in these debates: his book, Wired Shut, makes a strong case against Digital Rights Management techniques, and his more recent “Politics of Platforms” makes an argument analogous to Introna and Nissenbaum’s for programs like Facebook. Indeed, internal FB studies seem to bear these concerns out: Facebook discovered that it could influence voter participation with simple “get out the vote” reminders sent to some users but not others. The results could easily swing a tight election. Gillespie and Kate Crawford have a new paper out that makes the argument in the context of “flagging” content.
There is a Bloomsbury Philosophical Methodology Reader in the making, being edited by Joachim Horvath (Cologne). Joachim asked me to edit the section on formal methods, which will contain four papers: Tarski's 'On the concept of following logically', excerpts from Carnap's Logical Foundations of Probability, Hansson's 2000 'Formalization in philosophy', and a commissioned new piece by Michael Titelbaum focusing in particular (though not exclusively) on Bayesian epistemology. It will also contain a brief introduction to the topic by me, which I will post in two installments. Here is part I: comments welcome!
Since the inception of (Western) philosophy in ancient Greece, methods of regimentation and formalization, broadly understood, have been important items in the philosopher’s toolkit (Hodges 2009). The development of syllogistic logic by Aristotle and its extensive use in centuries of philosophical tradition as a formal tool for the analysis of arguments may be viewed as the first systematic application of formal methods to philosophical questions. In medieval times, philosophers and logicians relied extensively on logical tools other than syllogistic (which remained pervasive though) in their philosophical analyses (e.g. medieval theories of supposition, which come quite close to what is now known as formal semantics). But the level of sophistication and pervasiveness of formal tools in philosophy has increased significantly since the second half of the 19th century. (Frege is probably the first name that comes to mind in this context.)
It is commonly held that reliance on formal methods is one of the hallmarks of analytic philosophy, in contrast with other philosophical traditions. Indeed, the birth of analytic philosophy at the turn of the 20th century was marked in particular by Russell’s methodological decision to treat philosophical questions with the then-novel formal, logical tools developed for axiomatizations of mathematics (by Frege, Peano, Dedekind etc. – see (Awodey & Reck 2002) for an overview of these developments), for example in his influential ‘On denoting’ (1905). (Notice though that, from the start, there is an equally influential strand within analytic philosophy focusing on common sense and conceptual analysis, represented by Moore – see (Dutilh Novaes & Geerdink forthcoming).) This tradition was then continued by, among others, the philosophers of the Vienna Circle, who conceived of philosophical inquiry as closely related to the natural and exact sciences in terms of methods. Tarski, Carnap, Quine, Barcan Marcus, Kripke, and Putnam are some of those who have applied formal techniques to philosophical questions. Recently, there has been renewed interest in the use of formal, mathematical tools to treat philosophical questions, in particular with the use of probabilistic, Bayesian methods (e.g. formal epistemology). (See (Papineau 2012) for an overview of the main formal frameworks used for philosophical inquiry.)
Philosophers, and many thoughtful people more generally, pride themselves on having a healthy skepticism toward claims made by the media, by politicians, by scientists – by pretty much anyone. And rightly so. Many issues are complex and have not just two sides, but multiple sides. One ought not accept proffered claims without examining all of the evidence and without thinking about whether the evidence supports the claims being made. But are there times when a healthy skepticism becomes unhealthy?
In my field, philosophy of science, we often have meta-discussions about the extent to which we should accept scientific findings or question them. But even the most naturalistic philosopher of science thinks that we ought to be skeptical of scientific findings at least some of the time and under some circumstances.
I don't think it is. I don't think we should have the same skepticism towards claims about future political outcomes as we have towards scientific claims. The reason is simple: in evaluating scientific claims, we are evaluating existing evidence. As new evidence comes in, we might change our evaluation, but our beliefs, whether for or against a given claim, do not affect the evidence or the truth of the claim itself.
But when evaluating claims about future political events, the situation is different. To see this, let's suppose that Jane Voter likes Sanders's platform and agrees with his values and proposals, but, being of a skeptical bent, Jane decides that Sanders is too much of a long shot and doesn't really have a chance. This belief leads her not to support Sanders's campaign; it also leads her to suggest to her friends and family that it would be a waste of time to do so.
The more Janes there are in the world, and the more they convince their friends and family, the more their beliefs become a foregone conclusion. That is, unlike beliefs about scientific claims, beliefs about political outcomes actually change the outcomes. The skepticism becomes unhealthy.
We should act to bring about the outcomes that we find desirable, not sabotage those outcomes while brandishing the banner of skepticism.
There is probably an interesting post to be written on the moral standing of the scapegoat — on whether, that is, being put in the position to take a disproportionate share of the blame for something, or even simply to shield other guilty parties from blame, entitles one to claim that one has been treated unjustly. Interesting, that is, from the point of view of the universal seminar room.
But we’re not in seminar, and this is not that post. Instead, I want to do two things that seem more timely and important in the real context of the events that are unfolding this week.
First, I want to pick up on a point that Corey Robin has been making a lot recently, and to which he devoted a whole post this morning, namely that we would be making a major mistake were we to allow Phyllis Wise, now a fairly obvious scapegoat, to successfully plead for some measure of our sympathy—this despite the fact that she played a material role in the genuinely unjust treatment to which Steven Salaita has been subjected.
Intuitive physics works great for picking berries, throwing stones, and walking through light underbrush. It's a complete disaster when applied to the very large, the very small, the very energetic, or the very fast. Similarly for intuitive biology, intuitive cosmology, and intuitive mathematics: They succeed for practical purposes across long-familiar types of cases, but when extended too far they go wildly astray.
How about intuitive ethics?
I incline toward moral realism. I think that there are moral facts that people can get right or wrong. Hitler's moral attitudes were not just different from ours but actually mistaken. The twentieth century "rights revolutions" weren't just change but real progress. I worry that if artificial intelligence research continues to progress, intuitive ethics might encounter a range of cases for which it is as ill prepared as intuitive physics was for quantum entanglement and relativistic time dilation.
Yesterday brought two major developments relating to Steven Salaita's firing by the University of Illinois at Urbana-Champaign.
First came the news that the U.S. District Court in Chicago ruled to uphold the validity of all of Steven Salaita's key legal claims, rejecting the University's motion to dismiss them. This does not mean, of course, that the claims have been adjudicated in Salaita's favor. But it does mean, as Brian Leiter helpfully explains here, that "taking the facts as alleged by the plaintiff, [the claims] state legal causes of action." The claims in question allege promissory estoppel, breach of contract, and violation of Salaita's first amendment rights.
Secondly, UIUC Chancellor Phyllis Wise announced her resignation and return to the faculty, effective August 12. It is difficult to imagine that her resignation (with a transition time of less than one week) is not a more or less direct result of the above legal developments.
Corey Robin has a rundown of and commentary upon all these developments which is well worth reading. And of course, hearty congratulations to Dr. Salaita, his legal team, and his many supporters. Yesterday was a good day for academic freedom.
Last Friday (July 31st) my wife, my daughter, and I were to fly back from Vancouver to New York City after our vacation in Canada's Jasper and Banff National Parks. On arrival at Vancouver Airport, we began the usual check-in, got groped in security, and filled out customs forms. The US conducts all customs and passport checks in Canada itself for US-bound passengers; we waited in the line for US citizens. We were directed to a self-help kiosk, which issued a boarding pass for my wife with a black cross across it. I paid no attention to it at the time, but a few minutes later, when a US Customs and Border Protection officer directed us to follow him, I began to. We were directed to a waiting room, where I noticed a Muslim family--most probably from Indonesia or Malaysia--seated on benches. (The women wore headscarves; the man sported a beard but no moustache and wore a skull cap.)
I knew what was happening: once again, my wife had been flagged for the 'no-fly' list.
CS Lewis' Mere Christianity is rightly acknowledged as a masterpiece of Christian apologetics; it is entertaining, witty, well-written, clearly composed by a man of immense learning and erudition (who, as befits the author of the masterful Studies in Words, cannot restrain his delightful habit of providing impromptu lessons in etymology). Lewis is said to have induced conversions in "Francis Collins, Jonathan Aitken, Josh Caterer and the philosopher C. E. M. Joad" as a result of their reading Mere Christianity, and it is not hard to see why. The encounter of a certain kind of receptive mind with the explication of Christian doctrine that Lewis provides--laden with provocative analogies and metaphors--is quite likely to lead to the kind of experience conversion provides: an appeal to an emotional core harboring deeply experienced and felt needs and desires, which engenders a radical shift in perspective and self-conception. Christianity offers a means for conceptualizing one's existential and psychological crises--seeing them as manifestations of a kind of possession, by sin, by the Devil--and holds out the promise of radical self-improvement: the movement toward man--all men--becoming Christ, assuming a moral and spiritual perfection as they do so. All the sludge will fall away; man will rise and be welcomed into the bosom of God; if only he takes on faith in Christ and his teachings. This is powerful, heady stuff, and its intoxicating powers are underestimated only by those overly arrogant about the power and capacities of reason and ratiocination to address emotional longings and wants.
It is clear too, from reading Lewis, why Christianity provoked the ire of a philosopher like Nietzsche. For they are all here: the infantilization of man in the face of an all-powerful, all-seeing, all-knowing, all-good God, the terrible Godly wrath visible in notions such as damnation, the disdain for this life, this earth, this abode, its affairs and matters, in favor of another one; the notion of a 'fallen man' and a 'fall from grace' implying this world is corrupt, indeed, under 'occupation' by an 'enemy force.' There is considerable self-abnegation here; considerable opportunity for self-flagellation and diminishment. No wonder the Existential Stylist was driven to apoplectic fury.
Lewis takes Biblical doctrine seriously and literally; but like any good evangelical he is not above relying on metaphorical interpretation when it suits him. (This is evident throughout Mere Christianity but becomes especially prominent in the closing, more avowedly theological chapters.) Unsurprisingly for a man of his times (who supports the death penalty and thinks homosexuals are perverts), the seemingly retrograde demand that wives unquestioningly obey their husbands, which might have sparked alarms in a more suspicious mind about the sociological origins of such a hierarchy-preserving notion, is stubbornly, if ever so slightly apologetically, defended.
Lewis' arguments are, despite the apparent effort he takes to refute views contrary to Christian doctrine, just a little too quick. His infamous trilemma arguing for the Divinity of Jesus, and his dismissal of the notion that his supposed Natural or Universal Law of Morality can be traced to a social instinct, are notoriously weak (the former's weaknesses are amply referenced in the link above, while the latter simply pays no attention to history, class, and culture).
But Mere Christianity, even if deeply flawed, is still worth a read: you witness an agile mind at work; you encounter a masterful writer; you find yourself challenged to provide refutations and counter-arguments; you even feel an emotional tug or two, letting you empathize with those who do not think like you do. That's a pretty good catch for one book.
I'd been trying to grapple with the weeks and weeks of horrifying stories about the treatment of Black Americans at the hands of police, with Sandra Bland and Samuel DuBose only the latest victims, when the story about Cecil the Lion hit social media. Some reacted angrily, frustrated that one lion was getting more attention than all the black women and men whose lives had been lost. Lori Gruen, however, responded differently:
Rather than pointing fingers at each other about inadequate or disproportionate grief at the deaths of some and not others, social justice activists might instead work to develop what political theorist Claire Jean Kim calls an “ethics of avowal.” In contrast to disavowal, the act of rejection or dissociation that often leads to perpetuating patterns of social injury, she suggests that we recognize the ways that our struggles are linked and to be “open in a meaningful and sustained way to the suffering and claims of other subordinated groups, even or perhaps especially in the course of political battle.” We should empathize with the pain and indignities of others who are disempowered and avow, rather than belittle, their search for justice.
Foucault made a big deal in the lectures contained in Security, Territory, Population of the linkage between medieval pastoral power and modern governmentality. Although there have been skeptics – most notably Mika Ojakangas, who thinks Foucault reads the ancient sources nearly backwards: it was the Greeks and Romans who practiced eugenics, and Jewish and Christian authors who opposed them – it’s certainly a narrative that has the feel of doxa.
What is pastoral power? According to Foucault, during the Middle Ages, Christianity “is a religion that … lays claim to the daily government of men in their real life on the grounds of their salvation and on the scale of humanity, and we have no other example of this in the history of societies” (STP 148). Through an elaborate apparatus of confession, submission, and obedience, a “subtle economy of merit and fault” (STP 173), Christianity established a series of equivalences between the salvation of the pastor and that of his flock according to which the salvation of one was a function of the salvation of the other (STP 169-72). Although these techniques of power were historically specific, Foucault argues that analysis of pastoral power shows it to be the “embryonic point” of modern governmentality (STP 165). In sum:
"We can say that the idea of pastoral power is the idea of a power exercised on a multiplicity rather than on a territory. It is a power that guides towards an end and functions as an intermediary towards this end. It is therefore a power with a purpose for those on whom it is exercised, and not a purpose for some kind of superior unit like the city, territory, state or sovereign …. Finally, it is a power directed at all and each in their paradoxical equivalence, and not at the higher unity formed by the whole" (STP 129).
That’s the story.
The problem is that there is another Foucauldian narrative about governmentality. You see it in his Rio lectures of 1973 (“Truth and Juridical Forms,” in the Power anthology). But it’s even more evident in his “Lives of Infamous Men” (also in Power, the pagination to which I will refer) (these texts are both slightly before STP).
This is the final post in my series on reductio ad absurdum from a dialogical perspective. Here is Part I, here is Part II, here is Part III, here is Part IV, and here is Part V. I now return to the issues raised in the earlier posts equipped with the dialogical account of deduction, and of reductio ad absurdum in particular.
A general dialogical schema for reductio ad absurdum, following Proclus’ description but inspired by the Socratic elenchus, might look like this:
Interlocutor 1 commits to A (either prompted by a question from interlocutor 2, or spontaneously), which corresponds to assuming the initial hypothesis.
Interlocutor 2 leads the initial hypothesis to absurdity, typically by relying on additional discursive commitments of 1 (which may be elicited by 2 through questions).
Interlocutor 2 concludes ~A.
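Stripped of the two interlocutors, the monological core of this schema is just the standard rule of negation introduction. A minimal rendering in Lean (a sketch to fix ideas, not part of the dialogical account itself; the propositions A and B are placeholders for the commitments in the schema above):

```lean
-- Reductio ad absurdum as negation introduction:
-- if assuming A leads to absurdity (False), conclude ¬A.
example (A : Prop) (h : A → False) : ¬A :=
  h  -- in Lean, ¬A just is A → False, so h already has the right type

-- A concrete instance of the schema: interlocutor 1's commitments
-- (A → B and A → ¬B) are jointly incoherent once A is assumed,
-- so interlocutor 2 may conclude ¬A.
example (A B : Prop) (h₁ : A → B) (h₂ : A → ¬B) : ¬A :=
  fun a => (h₂ a) (h₁ a)
```

On the dialogical reading, the hypothesis `h` and the commitments `h₁`, `h₂` belong to interlocutor 1, while the term that assembles them into `False` corresponds to interlocutor 2's moves.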
The main difference between the monological and the dialogical versions of a reductio is thus that in the latter there is a kind of division of labor that is absent from the former (as noted above). The agent making the initial assumption is not the same agent who will lead it to absurdity, and then conclude its contradictory. And so, the perceived pragmatic awkwardness of making an assumption precisely with the goal of ‘destroying’ it seems to vanish. Moreover, the adversarial component provides a compelling rationale for the general idea of ‘destroying’ the initial hypothesis; indeed, while the adversarial component is present in all deductive arguments (in particular given the requirement of necessary truth preservation, as argued above), it is even more pronounced in the case of reductio arguments, that is, the procedure whereby someone’s discursive commitments are shown to be collectively incoherent since they lead to absurdity. There remains the question of why interlocutor 1 would want to engage in the dialogue at all, but presumably she simply wishes to voice a discursive commitment to A. From there on, the wheel begins to spin, mostly through 2’s actions.
This is the fifth installment of my series of posts on reductio ad absurdum from a dialogical perspective. Here is Part I, here is Part II, here is Part III, and here is Part IV. In this post I discuss a closely related argumentative strategy, namely dialectical refutation, and argue that it can be viewed as a genealogical ancestor of reductio ad absurdum.
Those familiar with Plato’s Socratic dialogues will undoubtedly recall the numerous instances in which Socrates, by means of questions, elicits a number of discursive commitments from his interlocutors, only to go on to show that, taken collectively, these commitments are incoherent. This is the procedure known as an elenchus, or dialectical refutation.
The ultimate purpose of such a refutation may range from ridiculing the opponent to nobler didactic goals. The etymology of elenchus is related to shame, and indeed at least in some cases it seems that Socrates is out to shame the interlocutor by exposing the incoherence of their beliefs taken collectively (for example, so as to exhort them to positive action, as argued in (Brickhouse & Smith 1991)). However, as noted by Socrates himself in the Gorgias (470c7-10), refuting is also what friends do to each other, a process whereby someone rids a friend of nonsense. An elenchus can also have pedagogical purposes, in interactions between masters and pupils.
There has been much discussion in the secondary literature on what exactly an elenchus is, as well as on whether there is a sufficiently coherent core of properties for what counts as an elenchus, beyond a motley of vaguely related argumentative strategies deployed by Socrates (Carpenter & Polansky 2002). (A useful recent overview is (Wolfsdorf 2013); see also (Scott 2002).) For our purposes, it will be useful to take as our starting point the description of the ‘Socratic method’ in an influential article by G. Vlastos (1983) (a much shorter version of the same argument is to be found in (Vlastos 1982), and I'll be referring to the shorter version). Vlastos distinguishes two kinds of elenchi, the indirect elenchus and the standard elenchus:
This is the fourth installment of my series of posts on reductio ad absurdum arguments from a dialogical perspective. Here is Part I, here is Part II, and here is Part III. In this post I offer a précis of the dialogical account of deduction which I have been developing over the last years, which will then allow me to return to the issue of reductio arguments equipped with a new perspective in the next installments. I have presented the basics of this conception in previous posts, but some details of the account have changed, and so it seems like a good idea to spell it out again.
In this post, I present a brief account of the general dialogical conception of deduction that I endorse. Its relevance for the present purposes is to show that a dialogical conception of reductio ad absurdum arguments is not in any way ad hoc; indeed, the claim is that this conception applies to deductive arguments in general, and thus a fortiori to reductio arguments. (But I will argue later on that the dialogical component is even more pronounced in reductio arguments than in other deductive arguments.)
Let us start with what can be described as functionalist questions pertaining to deductive arguments and deductive proofs. What is the point of deductive proofs? What are they good for? Why do mathematicians bother producing mathematical proofs at all? While these questions are typically ignored by mathematicians, they have been raised and addressed by so-called ‘maverick’ philosophers of mathematics, such as Hersh (1993) and Rav (1999). One promising vantage point to address these questions is the historical development of deductive proof in ancient Greek mathematics, and on this topic the most authoritative study remains (Netz 1999). Netz emphasizes the importance of orality and dialogue for the emergence of classical, ‘Euclidean’ mathematics in ancient Greece:
Greek mathematics reflects the importance of persuasion. It reflects the role of orality, in the use of formulae, in the structure of proofs… But this orality is regimented into a written form, where vocabulary is limited, presentations follow a relatively rigid pattern… It is at once oral and written… (Netz 1999, 297/8)
This is the third installment of my series of posts on reductio ad absurdum arguments from a dialogical perspective. Here is Part I, and here is Part II. In this post I discuss issues pertaining specifically to the last step in a reductio argument, namely that of going from reaching absurdity to concluding the contradictory of the initial hypothesis.
One worry we may have concerning reductio arguments is what could be described as ‘the culprit problem’. This is not a worry clearly formulated in the protocols previously described, but one which has been raised a number of times when I presented this material to different audiences. The basic problem is: we start with the initial assumption, which we intend to prove to be false, but along the way we avail ourselves of auxiliary hypotheses/premises. Now, it is the conjunction of all these premises and hypotheses that leads to absurdity, and it is not immediately clear whether we can single out one of them as the culprit to be rejected. For all we know, others may be to blame, and so there seems to be some arbitrariness involved in singling out one specific ingredient as responsible for things turning sour.
To be sure, in most practical cases this will not be a real concern; typically, the auxiliary premises we avail ourselves of are statements in which we have a high degree of epistemic confidence (for example, because they have been established by proofs that we recognize as correct). But it remains of philosophical significance that absurdity typically arises from the interaction between numerous elements, any of which can, in theory at least, be held responsible for the absurdity. A reductio argument, however, relies on the somewhat contentious assumption that we can isolate the culprit.
However, culprit considerations do not seem to be what motivates Fabio’s dramatic description of this last step as “an act of faith that I must do, a sacrifice I make”. Why is this step problematic then? Well, in the first instance, what is established by leading the initial hypothesis to absurdity is that it is a bad idea to maintain this hypothesis (assuming that it can be reliably singled out as the culprit, e.g. if the auxiliary premises are beyond doubt). How does one go from it being a bad idea to maintain the hypothesis to it being a good idea to maintain its contradictory?
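This asymmetry can in fact be made precise in proof-theoretic terms: deriving ¬P from the absurdity of P is just negation introduction, and is unproblematic even in intuitionistic logic; but concluding P from the absurdity of ¬P requires a classical principle. A minimal sketch in Lean 4 (the proposition P and the hypotheses are purely illustrative, not from any of the protocols discussed):

```lean
-- Negation introduction: if P leads to absurdity, conclude ¬P.
-- In Lean, ¬P is by definition P → False, so this direction is immediate
-- and holds even intuitionistically.
example (P : Prop) (h : P → False) : ¬P := h

-- The converse direction: if ¬P leads to absurdity, conclude P.
-- This step is not available intuitionistically; it requires the
-- classical principle of proof by contradiction.
example (P : Prop) (h : ¬P → False) : P :=
  Classical.byContradiction h
```

Intuitionists famously accept the first pattern but reject the second, which suggests that Fabio’s unease tracks a genuine logical distinction rather than a mere failure of training.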
A few weeks ago I posted some details about a new project: Academic Placement Data and Analysis (APDA) here. Readers may be interested in some updates to that project. Note: We are sending out emails to program representatives over the next few hours with much of this information, including an extended collection goal date of July 22nd, 2015. The original blogpost is quoted below.
1) Total Placement Records
"There are approximately 2300 total entries, with several categories of data."
As of noon on July 13th, we had 3078 placement records for 2444 people--that is 573 more placed candidates than we had in the database on June 23rd. (In comparison, PhilJobs, the next most comprehensive database, had 2307 placement records for junior hires at that same time.)
This is a series of posts with sections of the paper on reductio ad absurdum from a dialogical perspective that I am working on right now. This is Part II, here is Part I. In this post I discuss issues in connection with the first step in a reductio argument, that of assuming the impossible.
We can think of a reductio ad absurdum as having three main components, following Proclus’ description:
(i) Assuming the initial hypothesis.
(ii) Leading the hypothesis to absurdity.
(iii) Concluding the contradictory of the initial hypothesis.
I discuss two problems pertaining to (i) in this post, and two problems pertaining to (iii) in the next post. (ii) is not itself unproblematic, and we have seen for example that Maria worries whether the ‘usual’ rules for reasoning still apply once we’ve entered the impossible world established by (i). Moreover, the problematic status of (i) arises to a great extent from its perceived pragmatic conflict with (ii). But the focus will be on issues arising in connection with (i) and (iii).
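Proclus’ three-step schema can be rendered quite directly in a modern proof assistant. Here is a minimal illustration in Lean 4 (the propositions P and Q and the auxiliary premises h1 and h2 are my own, purely for illustration):

```lean
-- To refute P, given the auxiliary premises that P implies Q (h1)
-- and that Q is false (h2):
example (P Q : Prop) (h1 : P → Q) (h2 : ¬Q) : ¬P := by
  intro hp           -- (i) assume the initial hypothesis P
  exact h2 (h1 hp)   -- (ii) lead it to absurdity: Q and ¬Q together
                     -- (iii) the proof discharges P and concludes ¬P
```

Notice that the culprit problem is visible even here: the absurdity arises from the interaction of hp, h1, and h2 jointly, and it is only the bookkeeping of the proof system that marks hp, rather than one of the auxiliary premises, as the assumption to be discharged.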
A reductio proof starts with the assumption of precisely that which we want to prove is impossible (or false). As we’ve seen, this seems to create a feeling of cognitive dissonance in (some) reasoners: “I do not know what is true and what I pretend [to be] true.” (Maria) This may seem surprising at first sight: don’t we all regularly reason on the basis of false propositions, such as in counterfactual reasoning? (“If I had eaten a proper meal earlier today, I wouldn’t be so damn hungry now!”) However, as a matter of fact, there is considerable empirical evidence suggesting that dissociating one’s beliefs from reasoning is a very complex task, cognitively speaking (to ‘pretend that something is true’, in Maria’s terms). The belief bias literature, for example, has amply demonstrated the effect of belief on reasoning, even when participants are told to focus only on the connections between premises and conclusions. Moreover, empirical studies of reasoning behavior among adults with low to no schooling show their reluctance to reason with premises of which they have no knowledge (Harris 2000; Dutilh Novaes 2013). From this perspective, reasoning on the basis of hypotheses or suppositions may well be something that requires some sort of training (e.g. schooling) to be mastered.
As some readers may recall, I ran a couple of posts on reductio proofs from a dialogical perspective quite some time ago (here and here). I am now *finally* writing the paper where I systematize the account. In the coming days I'll be posting sections of the paper; as always, feedback is most welcome! The first part will focus on what seem to be the cognitive challenges that reasoners face when formulating reductio arguments.
For philosophers and mathematicians who have been suitably ‘indoctrinated’ in the relevant methodologies, the issues pertaining to reductio ad absurdum arguments may not be immediately apparent, given their familiarity with the technique. And so, to get a sense of what is problematic about these arguments, let us start with a somewhat dramatic but in fact quite accurate account of what we could describe as the ‘phenomenology’ of producing a reductio argument, in the words of math education researcher U. Leron:
We begin the proof with a declaration that we are about to enter a false, impossible world, and all our subsequent efforts are directed towards ‘destroying’ this world, proving it is indeed false and impossible. (Leron 1985, 323)
In other words, we are first required to postulate this impossible world (which we know to be impossible, given that our very goal is to refute the initial hypothesis), and then required to show that this impossible world is indeed impossible. The first step already raises a number of issues (to be discussed shortly), but the tension between the two main steps (postulating a world, as it were, and then proceeding towards destroying it) is perhaps even more striking. As it so happens, these are not the only two issues that arise once one starts digging deeper.
To obtain a better grasp of the puzzling nature of reductio arguments, let us start with a discussion of why these arguments appear to be cognitively demanding – that is, if we are to believe findings in the math education literature as well as anecdotal evidence (e.g. of those with experience teaching the technique to students). This will offer a suitable framework to formulate further issues later on.
The legal doctrine of substantive equality – roughly, that one look not just at the presence of stipulated, formal equality, but also treat outcomes as relevant to whether or not equality has been reached – strikes me as a biopolitical concept, whereas its more formal counterpart is more juridical. Consider the right to abortion: a formal declaration that a woman has the right to terminate a pregnancy prior to fetal viability exists whenever laws do not prohibit the termination. Recent state laws that ban all abortions after a gestational age of 20 weeks run afoul of that right, because a fetus at 20 gestational weeks is not viable. On the other hand, if the right is substantive, then it matters whether women can actually take advantage of the right. State laws that require spousal consent, for example, were declared by the Court in Planned Parenthood v. Casey to place an “undue burden” on the exercise of that right. That’s a decision based on substantive equality, and it treats women not (just) as juridical subjects possessing abstract rights, but as agents in the world trying to achieve the outcomes that such rights are (presumably) designed to allow. Current rounds of state restrictions on abortion, such as forced transvaginal ultrasounds (on the pretext of ensuring the woman is “fully informed”) or the demand that clinics look like hospitals (for the “safety of women”) seem designed to limit the substantive right to abortion, while preserving it formally. All of that is a rough-and-ready way of putting the distinction, and there may very well be any number of equality claims in particular where the substantive version doesn’t sound particularly biopolitical. That’s ok – in what follows, I want to look at education, and to propose that claims of substantive equality, even biopolitically-oriented ones, can differ dramatically in what they claim and how they claim it.
One of the notable features of Brown v. Board of Education is its reliance on social science evidence indicating the psychological harm of segregation to black children (this is the famous “footnote 11,” which cited a number of recent studies). In his reflections on Brown, Robert L. Carter, one of the attorneys who argued the case, noted that “we assumed … that educational equality in its strict educational connotations – with its emphasis on the quality of education – was the same as educational quality in its constitutional dimensions,” and that, in a series of earlier cases, “we turned to expert testimony for the first time,” and supported the argument with two kinds of claims: by “measuring the physical facilities of the proposed black law schools against the existing university holdings and by taking into account the adverse psychological detriment that we contended segregation inflicted on blacks – all of which resulted in a denial of equal education” (Bell, ed., Shades of Brown, p. 22). The three cases prior to Brown were Sweatt v. Painter, McLaurin v. Oklahoma, and Sipuel v. Oklahoma. Let’s take them in reverse chronological order.
As a fan of profane language judiciously employed, I fear that the best profanities of English are cheapening from overuse -- or worse, that our impulses to offend through profane language are beginning to shift away from harmless terms toward more harmful ones.
Roache distinguishes between objectionable slurs (especially racial slurs) and presumably harmless swear words like "fuck". The latter words, she suggests, should not be forbidden, although she acknowledges that in some contexts it might be inappropriate to use them. Roache also suggests that it's silly to forbid "fuck" while allowing obvious replacements like "f**k" or "the f-word". Roache says, "We should swear more, and we shouldn't use asterisks, and that's fine." (31:20).
I disagree. Overstating somewhat, I disagree because of this eCard:
"Fuck" is a treasure of the English language. Speakers of other languages will sometimes even reach across the linguistic divide to relish its profanity. "Fuck" is a treasure precisely because it is forbidden. Its being forbidden is the source of its profane power and emotional vivacity.
When I was growing up in California in the 1970s, "fuck" was considered the worst of the seven words you can't say on TV. You would never hear it in the media, or indeed -- in my posh little suburb -- from any adults, except maybe, very rarely, from some wild man from somewhere else. I don't think I heard my parents or any of their friends say the word even once, ever. It wasn't until fourth grade that I even learned that the word existed. What a powerful word, then, for a child to relish in the quiet of his room, or to suddenly drop on a friend!
Nominations are OPEN for the PSA Women's Caucus new Highlighted PhilosopHer feature, recognizing the work of the Caucus's membership. Nominations need not be from Caucus members (although nominees do), so this is your chance to crow about some of your outstanding colleagues! Maybe you saw a great talk from a woman philosopher of science during this summer conference season?
The nomination form is here. Highlighted PhilosopHers will be featured on the Caucus's blog, Science Visions.
It is encouraging to see that the percentage of women in PSA is higher among more junior members, reflecting trends in other fields of philosophy and in academia generally. I am surprised, however, that there has been no statistically significant increase in the percentage of women in PSA over the last 8 years.
Student evaluations can be flattering; they can be unfair; they can be good reminders to get our act together. A few weeks ago, I received my student evaluations for the 'Twentieth Century Philosophy' class I taught this past spring semester. As I read them, I came upon one that brought me up short, because it stung:
I appreciated the professor's enthusiasm about the early portion of the class, but I was annoyed that it resulted in the syllabus being rewritten so that the already extremely minimal number of female and minority voices was further reduced.
There were some interesting cases from the Supreme Court yesterday. No, not gay marriage or Obamacare. But the Court ruled in favor of business privacy (against blanket government intrusion) and in favor of a jail inmate who had been badly handled by deputies. There’s also a potentially important regulatory takings case. I want to look at the first one for now. Los Angeles v. Patel involved an LA ordinance that required that hotel owners keep records of specified information about hotel guests, and that hotel owners make these records “available to any officer of the Los Angeles Police Department for inspection” on demand. Several hotel owners sued, making a facial challenge to the ordinance on Fourth Amendment grounds. The Court ruled (5-4, opinion by Sotomayor) that the statute was facially unconstitutional because it provided no way to challenge an officer who showed up with a records demand.
Academic Placement Data and Analysis (APDA) is a new, collaborative research project on placement into academic jobs in philosophy. The current project members include myself, Patrice Cobb (psychology, UC Merced), Angelo Kyrilov (computer science, UC Merced), David Vinson (cognitive science, UC Merced), and Justin Vlasits (philosophy, UC Berkeley). This project is born out of earlier work on placement that was posted here and elsewhere over the past few years. Funding for this project from the American Philosophical Association has so far provided for the development of a website and database that can host the data for this project (thanks to the work of Angelo Kyrilov over the past two months). There are approximately 2300 total entries, with several categories of data. Most of these categories of data have been made publicly available, whereas any categories that have not been made public (e.g. name, gender, race/ethnicity) will be provided to researchers with IRB approval from their home institutions. You can see the website and database so far here:
Publishing in general, and for the visual arts in particular, has moved to what’s called a “permission culture,” which basically means that nobody will publish your work unless you get explicit permission from the rights owner. This is often an arduous process, since art often includes many copyrighted images or other materials. A documentary film producer, for example, has to worry if an interview subject has the TV on in the background. Permission culture means that the producer has to either remove whatever is on the TV, or secure permission to use it. It also means that scholars may not be able to publish articles that include images of the work they are discussing, either because the images are unavailable, or unaffordable.
On the surface of things, this seems odd: shouldn’t a lot of this fall under “fair use”? The copyright statute, after all, cites education as an example. An important paper in 2007 explained why fair use doesn’t matter in this context. Basically, fair use is an affirmative defense against an infringement claim: you sue me for infringement, I claim fair use, and that’s the argument that litigation resolves. Fair use guidelines are deliberately vague and left to a case-by-case judicial determination, and so it’s not always obvious what gets counted as fair use. Litigation is very, very expensive, and publishers are risk averse. They don’t want to pay for litigation, and if they lose, they lose not only all that money, but the work they were trying to publish gets enjoined. So publishers won’t publish without prior permission (fair use thus systematically favors rich claimants and defendants). In addition to the problems all of this directly creates, it indirectly creates a ratcheting effect, because one place courts look to determine whether a use is fair is industry practice. So the more publishers seek permission for everything, the narrower fair use becomes.