The idea of Homer is significantly tied up with the idea of Europe, which is not to say that there is one thing which is Europe and that it has some pure ideal beginning. It is to say that concepts like ‘Europe’ have origins and histories, and that some ways of thinking about origin and history are particularly influential. Though Homer is associated with the beginning of Europe, the word ‘Europe’ does not appear in the two epics, and neither does ‘Asia’.
The ambiguities around identity, history, and origin are very apparent in relation to the name ‘Homer’, which may or may not be the real name of an ‘author’ of The Iliad and The Odyssey – epics which may or may not have had a single author, and which certainly build on a very old tradition of recitation and singing of poetic narratives.
As Aristotle notes (NE III.1, 1110a), if the wind picks you up and blows you somewhere you don't want to go, your going there is involuntary, and you shouldn't be praised or blamed for it. Generally, we don't hold people morally responsible for events outside their control. The generalization has exceptions, though. You're still blameworthy if you've irresponsibly put yourself in a position where you lack control, such as through recreational drugs or through knowingly driving a car with defective brakes.
Spontaneous reactions and unwelcome thoughts are in some sense outside our control. Indeed, trying to vanquish them seems sometimes only to enhance them, as in the famous case of trying not to think of a pink elephant. A particularly interesting set of cases are unwelcome racist, sexist, and ableist thoughts and reactions: If you reflexively utter racist slurs silently to yourself, or if you imagine having sex with someone with whom you're supposed to be having a professional conversation, or if you feel flashes of disgust at someone's blameless disability, are you morally blameworthy for those unwelcome thoughts and reactions? Let's stipulate that you repudiate those thoughts and reactions as soon as they occur and even work to compensate for any bias.
To help fix ideas, let's consider a hypothetical. Hemlata, let's say, lacks the kind of muscular control that most people have, so that she has a disvalued facial posture, uses a wheelchair to get around, and speaks in a way that people who don't know her find difficult to understand. Let’s also suppose that Hemlata is a sweet, competent person and a good philosopher. If the psychological literature on implicit bias is any guide, it's likely that it will be more difficult for Hemlata to get credit for intelligence and philosophical skill than it will be for otherwise similar people without her disabilities.
Now suppose that Hemlata meets Kyle – at a meeting of the American Philosophical Association, say. Kyle’s first, uncontrolled reaction to Hemlata is disgust. But he thinks to himself that disgust is not an appropriate reaction, so he tries to suppress it. He is only partly successful: He keeps having negative emotional reactions when looking at Hemlata. He doesn’t feel comfortable around her. He dislikes the sound of her voice. He feels that he should be nice to her; he tries to be nice. But it feels forced, and it’s a relief when a good excuse arises for him to leave and chat with someone else. When Hemlata makes a remark about the talk that they’ve both just seen, Kyle is less immediately disposed to see the value of the remark than he would be if he were chatting with someone non-disabled. But then Kyle thinks he should try harder to appreciate the value of Hemlata's comments, given Hemlata's disability; so he makes an effort to do so. Kyle says to Hemlata that disabled philosophers are just as capable as non-disabled philosophers, and just as interesting to speak with – maybe more interesting! – and that they deserve fully equal treatment and respect. He says this quite sincerely. He even feels it passionately as he says it. But Kyle will not be seeking out Hemlata again. He thinks he will; he resolves to. But when the time comes to think about how he wants to spend the evening, he finds a good enough reason to justify hitting the pub with someone else instead.
Recent close reading and teaching of Homer, along with some long-standing interests, lead me to reflect on the Homeric epics as a beginning in literature and a beginning in philosophy, one which reappears in later beginnings. It requires no argument to suggest that The Iliad and The Odyssey are foundational texts in the history of European or western literature, even making all allowances for the impossibility of any pure beginning and the ways that Homeric poetry emerges from a broad east Mediterranean world including northern Africa and southwestern Asia, as well as Anatolia, Greece, and Italy. Even the most radical sceptic about the centrality of ancient Greece in antiquity would surely concede that the Homeric epics are at the beginning of Greek literature.
I've been watching a few episodes of the BBC drama series "Foyle's War." It's a decent show, but what interests me right now is just an expression, one I had never heard before, that the main character, Christopher Foyle, often uses. It works like this: someone will ask him if he thinks that they ought to ____, or if he wants them to ____, and he replies "I should!"
For example "Do you want me ask all the jewelery stores in town if they've seen this necklace before?"; "I should!" or "Do you think I should check the oil in my car?"; "I should!".
Americans would, in a similar situation, say "I would," which is short for, I take it, "If I were you I would do that." How to get from one expression to the other is obvious: you drop the "If I were you," and you drop the _pro-verb_ "do that." "Do that" is a pro-verb because it's anaphoric for the verb phrase from the previous sentence, e.g. "I would do that" is anaphoric for "I would ask all the jewelry stores in town."
So, first question: is this use of "I should" still idiomatic in British English? Or have y'all degenerated to "I would" too?
I say "degenerated" because it's pretty clear that what's being asserted is not just a subjunctive conditional--what I _would_ do if I were you--but a subjunctive conditional about a normative claim--what normative facts would obtain if I were you.
So the second question is: what is Foyle's "I should" actually short for? I take it to be something like:
"If I were you, it would be the case that I should call all the jewelry stores." (1)
But notice that you can't even express that without the awkward "it would be the case that." If you try to say "I would should do that," it becomes unparsable and horribly awkward. "Should" doesn't like being embedded under another modal, and in fact even in (1) the brain recoils at the embedding of the "should" inside the "would." I don't think our brains are at all comfortable with the very form of the construction.
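For what it's worth, here is one way to regiment the two readings; a sketch only, nothing the idiom itself dictates, and the atomic labels are just placeholders. Write □→ for a Lewis-style counterfactual conditional ("if it were that ..., it would be that ...") and O for the deontic operator "it ought to be that":

% A minimal sketch of the two logical forms, assuming a Lewis-style
% counterfactual connective and a standard deontic operator O.
% (Requires amsmath and amssymb; the labels are placeholders.)
\[
\text{``I would'':} \qquad \mathit{IfIWereYou} \mathrel{\Box\!\!\rightarrow} \mathit{CallTheStores}
\]
\[
\text{``I should'':} \qquad \mathit{IfIWereYou} \mathrel{\Box\!\!\rightarrow} O(\mathit{CallTheStores})
\]

On this regimentation the second form is perfectly well-formed: the "should" just sits inside the consequent of the counterfactual, as in (1). The trouble is purely on the surface; English won't let us stack the two modals ("I would should"), and Foyle's idiom looks like the language's workaround for pronouncing a form the syntax otherwise refuses.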
And yet, when Foyle says "_I_ should!" with the right emphasis on "_I_", we have no problem understanding what he means, even if we are Americans who have never heard that idiom before.
I have some ideas about what this kind of example shows about language, but they are fairly heretical, so I'll hold off on expressing them for now. "_You_ should."
I am deeply grateful for the wonderful feedback I received from readers along the way (also in the form of comments and discussions over at Facebook). I could never have written this paper if it wasn't for all this help, given that much of the material falls outside the scope of my immediate expertise. So, again, thanks all!
(And now, on to start working on a new paper, on the definition of the syllogism in Aristotle, Ockham and Buridan. In fact, it will be an application of the conceptual genealogy method, so it all ties together in the end.)
PhilJobs is collecting news about new hires in philosophy here: http://philjobs.org/appointments. Don't be shy -- if you have good news to share (and we all wish there were more good news to share, i.e., more jobs to go around) please share it! If sharing your good news is not enough of a motivation, then please share it because it allows us to better track what's going on in the profession.
Today is International Women’s Day, so to mark the date, here is a short post on what it means to me to be a feminist. Recently, a (male) friend asked me: “Why do you describe yourself as a ‘feminist’, and not as an ‘equalist’?” If feminism is about equality between women and men, why focus on the female side of the equation only? This question is of course related to the still somewhat widespread view that feminism is at heart a sexist doctrine: one that promotes the rights and wellbeing of women at the expense of the rights and wellbeing of men. Admittedly, the idea that it’s a zero-sum game is reminiscent of so-called second-wave feminism, in particular given the influence of Marxist ideas of class war. However, there is a wide range of alternative versions of feminism that focus on the rights and wellbeing of both men and women (as well as of those who do not identify as either), and move away from the zero-sum picture.
The much-watched TED talk by Chimamanda Ngozi Adichie, ‘We should all be feminists’ (bits of which were sampled in Beyoncé’s ‘Flawless’), offers precisely one such version, which I personally find very appealing. (After recently reading Americanah, I’ve been nurturing a crush on this woman; she is truly amazing.) The talk is worth watching in its entirety (also, it’s very funny!), and while she describes a number of situations that might be viewed as specific to their originating contexts (Nigeria in particular), the gist of it is entirely universal. It is towards the very end that Adichie provides her preferred definition of a feminist:
A feminist is a man or a woman* who says: yes, there is a problem with gender as it is today, and we must fix it. We must do better.
The Affordable Care Act was in the Supreme Court again today, this time for oral argument in King v. Burwell. For those who don’t follow the ACA’s legal woes, the challenge in Burwell is this: under the ACA, states are supposed to establish exchanges where citizens can purchase healthcare on the individual market. For states that don’t want to run their own exchanges, the federal government steps up and does it for them. Healthcare is expensive, and so the federal government heavily subsidizes the premiums (on a sliding scale) for those in the middle class. However, buried in the part of the law dealing with the tax code, the statute says that subsidies are available for those purchasing from an exchange “established by the state.” The challenge is basically a big, fat gotcha! moment. If you say you’ll pay me back for “dinner,” and I show up with pizza and beer, then I can plausibly expect you to pay me back for both. On the other hand, if you say that you will pay me back for buying you a pizza, then I should understand that you don’t have to pay me back for beer. By the same reasoning, if the law says subsidies flow to those whose exchanges are established by “the state,” then those subsidies are not available to those whose exchanges are established by “the federal government.” Gotcha!
If you haven't already, you should read yesterday's Stone article in the NYT by Justin McBrayer entitled "Why Our Children Don't Believe There Are Moral Facts." There, McBrayer bemoans the ubiquity of a certain configuration of the difference between "fact" and "opinion" assumed in most pre-college educational instruction (and, not insignificantly, endorsed by the Common Core curriculum). The basic presumption is that all value claims-- those that involve judgments of good and bad, right and wrong, better and worse-- are by definition "opinions" because they refer to what one "believes," in contradistinction to "facts," which are provable or disprovable, i.e., True or False. The consequence of this sort of instruction, McBrayer argues, is that our students come to us (post-secondary educators) not believing in moral facts, predisposed to reject moral realism out of hand. Though I may not be as quick to embrace the hard version of moral realism that McBrayer seems to advocate, I am deeply sympathetic with his concern. In my experience, students tend to be (what I have dubbed elsewhere on my own blog) "lazy relativists." It isn't the case, I find, that students do not believe their moral judgments are true--far from it, in fact-- but rather that they've been trained to concede that the truth of value judgments, qua "beliefs," is not demonstrable or provable. What is worse, in my view, they've also been socially- and institutionally-conditioned to think that even attempting to demonstrate/prove/argue that their moral judgments are True-- and, correspondingly, that the opposites of their judgments are False-- is très gauche at best and, at worst, unforgivably impolitic.
A few days ago the link to an interesting piece popped up in my Facebook newsfeed: ‘Three reasons why every woman should use a vibrator’, by Emily Nagoski. I wholeheartedly agree with the main claim, but what makes the piece particularly interesting for philosophers at large is a reference to Andy Clark and the extended mind framework:
Some women feel an initial resistance to the idea of using a vibrator because it feels like they “should” be able to have an orgasm without one. But there is no “should” in sex. There’s just what feels good. Philosopher Andy Clark (who’s the kind of philosopher who would probably not be surprised to find himself named-dropped in an article about vibrators) calls it “scaffolding,” or “augmentations which allow us to achieve some goal which would otherwise be beyond us.” Using paper and pencil to solve a math equation is scaffolding. So is using a vibrator to experience orgasm.
This is an intriguing suggestion, which deserves to be further explored. (As some readers may recall, I am always happy to find ways to bring together some of my philosophical interests with issues pertaining to sexuality – recall this post on deductive reasoning and the evolution of female orgasm.) Within the extended mind literature, the phenomena discussed as being given a ‘boost’ through the use of bits and pieces of the environment are typically what we could describe as quintessentially cognitive phenomena: calculations, finding your way to the MoMA etc. But why should the kind of scaffolding afforded by external devices and parts of the environment not affect other aspects of human existence, such as sexuality? Very clearly, they can, and do. (Relatedly, there is also some ongoing discussion on the ethics of neuroenhancement for a variety of emotional phenomena.)
The FCC decided today to treat the Internet as a public utility and to (therefore) enforce net neutrality. This means that ISPs won’t be able to favor one form of content over another by offering (for example) higher transmission rates for a fee. It also means that ISPs can’t interfere with the transmission of content they don’t like (say, by a competitor). Assuming it holds up in court (and the major telecom companies are prepared to spend a lot of money trying to get it overturned), this is a big deal.
Readers may recall that last December we co-hosted an open letter in opposition to a draconian law that had been instituted in Macedonia, substantially abridging the autonomy of the country's universities (more info here). The letter ended up with more than 100 signatures, of which more than 50 came through New APPS.
A little while ago, I received an email update about the situation from Katerina Kolozova, Professor of Philosophy, Gender Studies and Sociology at University American College-Skopje. The news is good: the law has been suspended, and negotiations have begun between the government and the 'Student Plenums' that have been organizing against the law.
Professor Kolozova writes:
Dear friends, Thank you so much for supporting us! And it hasn't been in vain. The plenums have won today: the law on higher education against which we have protested for months, against which we have been occupying universities, writing legal analyses we had no place to present except the social media, combating the Government propaganda through arguments presented on our blogs, Twitter and Facebook, the law which practically killed the university autonomy has been abolished. The Parliament voted a three year moratorium two weeks ago, and today the negotiations between the ministries and the plenums kicked off based on a concept proposed by the Plenums.
Thank you again for your signatures of support. They helped incredibly!
I would like to congratulate all the organizers, especially those on the ground who have worked for months to prevent this law from taking effect, but also those who have been working internationally to support them. And I would like to echo Professor Kolozova's thanks to all who signed on here and elsewhere in support of the campaign.
I'm teaching Wittgenstein this semester--for the first time ever--to my Twentieth-Century Philosophy class. My syllabus requires my students to read two long excerpts from the Tractatus Logico-Philosophicus and Philosophical Investigations; bizarrely enough, in my original version of that exalted contract with my students, I had allotted one class meeting to a discussion of the section from the Tractatus. Three classes later, we are still not done; as you can tell, it has been an interesting challenge thus far.
Cultural moral relativism is the view that what is morally right and wrong varies between cultures. According to normative cultural moral relativism, what varies between cultures is what really is morally right and wrong (e.g., in some cultures, slavery is genuinely permissible, in other cultures it isn't). According to descriptive cultural moral relativism, what varies is what people in different cultures think is right and wrong (e.g., in some cultures people think slavery is fine, in others they don't; but the position is neutral on whether slavery really is fine in the cultures that think it is). A strong version of descriptive cultural moral relativism holds that cultures vary radically in what they regard as morally right and wrong.
A case can be made for strong descriptive cultural moral relativism. Some cultures appear to regard aggressive warfare and genocide as among the highest moral accomplishments (consider the book of Joshua in the Old Testament); others (ours) think aggressive warfare and genocide are possibly the greatest moral wrongs of all. Some cultures celebrate slavery and revenge killing; others reject those things. Some cultures think blasphemy punishable by death; others take a more liberal attitude. Cultures vary enormously on women's rights and obligations.
However, I reject this view. My experience with ancient Chinese philosophy is the central reason.
In an earlier post, I took some initial steps toward reading Foucault’s last two lecture courses, The Government of Self and Others (GS) and The Courage of Truth (CT), in which he studies the ancient Greek concept of parrhesia. As I noted last time, one of the things Foucault finds is a concern on the part of the Greeks that philosophy achieve effects in the world, and not remain at the level of “mere logos.”
Here, I want to say more (warning: lots more. Long post coming!) about that framework, as it plays out in Foucault’s discussion of Plato in GS. In particular, I want to look at his reading of Plato’s Seventh Letter. I have to confess that I hadn’t read the Letter until this week, despite having read quite a bit of ancient Greek philosophy. I suspect that I’m not alone. This is in part because the authorship has been contested, but also no doubt because the text is completely at odds with most of the rest of Plato’s corpus. On the surface of things, the Letter is a sort of apologia: Plato is explaining his own conduct in relation to Dion and Dionysius of Syracuse, where he consents to offer advice – parrhesia – and becomes embroiled in the feuding between Dion and Dionysius by trying to mediate on Dion’s behalf. Why did he respond to the call? Because:
This is a moderated thread. So there can be no question that Leiter at least had to deliberately press ‘publish’ on this comment. It is less clear, as his own comment further down indicates, that he had fully thought through the implications of doing so.
Brian Leiter said...
Yes, I suppose I should not have approved #2, but I've been approving almost everything. On the other hand, Johnson is a very public and rather noxious presence in philosophy cyberspace, so I'm not surprised there is interest.
I’m sure we’re all glad to know that Brian has some standards (he didn’t approve everything, after all). Still, what he did approve seems to merit some comment.
The speculation about the reasons for Leigh’s ability to secure a second job in professional philosophy is untoward, given that a) she is non-tenured, b) she has not in any way been credibly accused or even suspected of professional misconduct, and c) the characterization of her current position is inaccurate. Publishing this comment and thereby generating a public sense that Leigh does not deserve her current employment is at the very least an obvious instance of bullying on Brian’s part (and fits his by now well-established pattern of directing this sort of attention toward junior, precariously employed members of the profession).
In what has to be one of the great whoppers of his entire blogging career, Brian goes on to justify leaving such a comment up by validating a more general interest in the question of why someone who is, in his view, “a very public and rather noxious presence in philosophy cyberspace” should have a job.
That the implicit standard in #2 risks implicating Brian himself is rather obvious. More interestingly, it seems to be perhaps as candid an admission as we are likely to get from Brian that he sees nothing wrong with harassing people he doesn’t like if he can possibly pull it off. And so we find him abusing the pretext of discussing ‘issues in the profession’ to pursue his own petty little vendetta.
Older data in sociology suggest that the prestige of the PhD-granting department is one of the main factors in hiring decisions (the other is the selectivity of the undergraduate institution). The authors conclude (rather dryly) that "job placement in sociology values academic origins over performance."
Some six years ago, shortly after I had been appointed to its faculty, the philosophy department at the CUNY Graduate Center began revising its long-standing curriculum; part of its expressed motivation for doing so was to bring its curriculum into line with those of "leading" and "top-ranked" programs. As part of this process, it invited feedback from its faculty members. As a graduate of the Graduate Center's Ph.D. program, I thought I was well-placed to offer some hopefully useful feedback on its curriculum, and so I wrote to the faculty mailing list, doing just that. Some of the issues raised in my email are, I think, still relevant to academic philosophy. Not everybody agreed with its contents; some of my cohort didn't, but in any case, perhaps this might provoke some discussion.
As you know, I was the gentleman that made that remark in a private facebook thread with a close friend. If I recall correctly, people in that thread were asking about whether certain kinds of thought experiments were typically referred to as “Gettier Cases”. I said that they were, despite how inaccurate or uninformative it might be to do so, in part because of the alternative traditions you cite. I’m sorry you interpreted my remark as silencing my friends on facebook. Personally I believe that philosophers should abandon the notion of “Gettier cases” and that the practice of labeling thought experiments in this way should be discouraged. If you are interested, I have recently argued for this in two articles here (http://philpapers.org/rec/BLOGCA) and here (http://philpapers.org/rec/TURKAL).
A few months ago, I noticed an interesting and telling interaction between a group of academic philosophers. A Facebook friend posted a little note about how one of her students had written to her about having encountered a so-called "Gettier case", i.e., the student had acquired a true belief for invalid reasons. In the email, the student described how he/she had been told the 'right time' by a broken clock. The brief discussion that broke out in response to my friend's note featured a comment from someone noting that the broken clock example is originally due to Bertrand Russell. A little later, a participant in the discussion offered the following comment:
Even though the clock case is due to Russell, it's worth noting that "Gettier" cases were present in Nyāya philosophy in India well before Russell, for instance in the work of Gaṅgeśa, circa 1325 CE. The example is of someone inferring that there is fire on a faraway mountain based on the presence of smoke (a standard case of inference in Indian philosophy), but the smoke is actually dust. As it turns out, though, there is a fire on the mountain. See the Tattva-cintā-maṇi or "Jewel of Reflection on the Truth of Epistemology." [links added]
We’ve all heard that regulations are bad, because they interfere with businesses doing what they want (rules about dumping toxic chemicals get in the way of dumping toxic chemicals. Laws against murder hamper the business model of assassins. And so on.). New North Carolina Senator Thom Tillis made the media rounds this week for some odd remarks he made on the topic. When asked to name a regulation he thought was bad, he came up with… the rule that restaurant employees wash their hands after visiting the toilet. He then proposed that it would be better to have restaurants state whether or not employees have to wash their hands, and then “let the market” take care of it.
There are two obvious problems here, both of which have been pointed out a lot. One is that there’s a public health issue. The other is that he hasn’t actually reduced regulation: he’s just replaced a public health rule with a rule about signage. I actually think the second point is interesting, well beyond the “gotcha!” treatment it got, because it perfectly illustrates something about neoliberalism: it doesn’t think regulations that create markets are regulations (or, if you prefer, regulating to create markets is good, other regulations are bad. This is the same mindset that concludes that the hyper-regulated Chicago futures markets are unregulated). The cleanliness of restaurant operations is not something consumers can know much about on their own, since they don’t do things like follow employees to the restroom. In this sense restaurant sanitation is a credence good (you have to believe the restaurant; you can’t inspect the product before you buy it). Since dirty food preparation can make people very sick, rational consumers should be willing to pay more for the knowledge that their food is safely prepared. But since they won’t be in any position to know about food safety, except (maybe) for places they’ve eaten before, we can expect market failure until some mechanism arrives to help consumers make their decisions.
I cringe, I wince, when I hear someone refer to me as a 'philosopher.' I never use that description for myself. Instead, I prefer locutions like, "I teach philosophy at the City University of New York", or "I am a professor of philosophy." This is especially the case if someone asks me, "Are you a philosopher?". In that case, my reply begins, "Well, I am a professor of philosophy...". Once, one of my undergraduate students asked me, "Professor, what made you become a philosopher?" And I replied, "Well, I don't know if I would go so far as to call myself a philosopher, though I did get a Ph.D in it, and...". You get the picture.
Yesterday, in my Twentieth Century Philosophy class, we worked our way through Bertrand Russell's essay on "Appearance and Reality" (excerpted, along with "The Value of Philosophy" and "Knowledge by Acquaintance and Knowledge by Description" from Russell's 'popular' work The Problems of Philosophy.) I introduced the class to Russell's notion of physical objects being inferences from sense-data, and then went on to his discussions of idealism, materialism, and realism as metaphysical responses to the epistemological problems created by such an understanding of objects. This discussion led to the epistemological stances--rationalism and empiricism--that these metaphysical positions might generate. (There was also a digression into the distinction between necessary and contingent truths.)
At one point, shortly after I had made a statement to the effect that science could be seen as informed by materialist, realist, and empiricist conceptions of its metaphysical and epistemological presuppositions, I blurted out, "Really, scientists who think philosophy is useless and irrelevant to their work are stupid and ungrateful." This was an embarrassingly intemperate remark to have made in a classroom, and sure enough, it provoked some amused twittering from my students, waking up many who were only paying partial attention at that time to my ramblings.
Sometimes, when I talk to friends, I hear them say things that to my ears sound like diminishments of themselves: "I don't have the--intellectual or emotional or moral--quality X" or "I am not as good as Y when it comes to X." They sound resigned to this self-description, this self-understanding. I think I see things differently; I think I see ample evidence of the very quality they seem to find lacking in themselves. Sometimes, I act on this differing assessment of mine, and rush to inform them they are mistaken. They are my friends; their lowered opinion of themselves must hurt them, in their relationships with others, in their ability to do the best they can for themselves. I should 'help.' It seems like the right thing to do. (This goes the other way too; sometimes my friends offer me instant correctives to putatively disparaging remarks I make about myself.)
Foucault’s last lecture courses at the Collège de France – recently published as The Government of Self and Others [GS] and The Courage of Truth [CT] – are interesting for a number of reasons. One is of course that they offer one of the best glimpses we have of where his thought was going at the very end of his life; he died only months after delivering the last seminar in CT, and there is every reason to believe that he both knew that he was dying, and why. There’s a lot to think about in them, at least some of which I hope to talk about here over a periodic series of posts. Here I want to say something introductory about the material, and look at Foucault’s critique of Derrida in it.
The lectures contain a sustained investigation of parrhesia, the ancient Greek ethical practice of truth-telling. “Truth to power” is the closest modern term we have for such a practice, though you don’t have to get very far into the lectures to realize how richly nuanced the topic is, and how many different ways it manifested itself in (largely pre-Socratic) Greek thought and literature. The lectures also contain a number of references to contemporary events and people (from the beginning: GS starts with Kant, before going back to the Greeks), and it’s hard to put CT down without a sense that, had there been another year of lectures, Foucault would have been more explicit in assessing the implications of the study of Greek parrhesia today.
In recent weeks, there has been much discussion on journal editorial practices at a number of philosophy blogs. Daily Nous ran an interesting post where different journal editors described (with varying degrees of detail) their editorial practices; many agree that the triple-anonymous system has a number of advantages and, when possible, should be adopted.* (And please, let us just stop calling it ‘triple-blind’ or ‘double-blind’, given that there is a perfectly suitable alternative!) Jonathan Ichikawa, however, pointed out (based on his experience with Phil Studies) that we must not take it for granted that a journal’s stated editorial policies are always de facto implemented. Jonathan (correctly, to my mind) defends the view that it is not desirable for a journal editor to act as a (let alone the sole) referee for a submission.
With this post, I want to bring up for discussion what I think is one of the main issues with the peer-reviewing system (I’ve expressed other reservations before: here, here, and here), namely the extreme difficulty journal editors encounter in finding competent referees willing to take up new assignments. Until two years ago, my experience with the peer-review system was restricted to the role of author (and I, like everybody else, got very frustrated with the months and months it often took journals to handle my submissions) and the role of referee (and I, like so many others, got very frustrated with the constant stream of referee requests reaching my inbox). Two years ago I became one of the editors of the Review of Symbolic Logic, and thus acquired a third perspective, that of the journal editor. I can confirm that it is one of the most thankless jobs I’ve ever had.