Saturday, August 31, 2019

Martha Nussbaum on the question, "What does it mean to be human today?" (With critical response)


Today's New York Times includes a mini-essay by the philosopher and legal scholar Martha Nussbaum. [Note: this post first appeared on 8/20/18 on a now-defunct Disqus channel.] It is the latest in a series of philosophical responses to the question, "What does it mean to be human today?" https://www.nytimes.com/spo... Nussbaum is a prodigious philosopher whose writings since the early 80s have, by turns, focused on classical Greek philosophy, Stoicism, feminist theory, human rights, globalization, cosmopolitanism, legal theory, quality of life in developing countries, "political emotions," and much more. I have great respect for Nussbaum, whose books and lectures are almost always interesting, and not infrequently quite original and insightful. I didn't have that response to her Times piece this morning, though. The topic she takes on and the issues she addresses in her essay are, however, important not only for philosophy but for the future of humankind and the biosphere as a whole. What follows is Nussbaum's mini-essay, followed by a very short response which I wrote for the NYT comments section earlier. Hopefully, the essay and response will generate a constructive discussion of the very timely issues involved.

--------------------------------------------------------------------------------------------------------------------------------------

What Does It Mean to Be Human? Don’t Ask:
(We don’t see the problem with our self-importance because our narcissism is so complete.)

by Martha Nussbaum 8/20/18
Source: NY Times

Over time, the idea of “being human” has surely meant — and will continue to mean — many things. There is not, and has never been, just one answer. But surely one thing it ought to involve today is the ability to recognize that the question itself is a problem.

We humans are very self-focused. We tend to think that being human is somehow very special and important, so we ask about that, instead of asking what it means to be an elephant, or a pig, or a bird. This failure of curiosity is part of a large ethical problem.

The question, “What is it to be human?” is not just narcissistic; it involves a culpable obtuseness. It is rather like asking, “What is it to be white?” It connotes unearned privileges that have been used to dominate and exploit. But we usually don’t recognize this because our narcissism is so complete.

We share a planet with billions of other sentient beings, and they all have their own complex ways of being whatever they are. All of our fellow animal creatures, as Aristotle observed long ago, try to stay alive and reproduce more of their kind. All of them perceive. All of them desire. And most move from place to place to get what they want and need. Aristotle proposed that we should strive for a common explanation of how animals, including human animals, perceive, desire and move.

We know Aristotle as a philosopher, but he also was a great biologist who studied shellfish and other creatures large and small. He encouraged his students not to turn away from studying animals that don’t seem glamorous, since there is something wonderful in all of them, not least the sheer fact that they all strive for continued life.

This sense of wonder, which should lead us to a fuller ethical concern, is a deep part of our humanity. But wonder is on the wane, and we humans now so dominate the globe that we rarely feel as if we need to live with other animals on reciprocal terms.

Domesticated animals occupy a privileged sphere, but even they are often treated cruelly (think of puppy mills, or abandoned feral cats). The factory farming of pigs, chickens and other animals is a relatively new form of hideous brutality. As for the creatures in the “wild,” we can see that our human crimes are having a devastating effect on them: the damages that come from lab research using animals; the manifold harms endemic to the confinement of apes and elephants in zoos; the depletion of whale stocks by harpooning; the confinement of orcas and dolphins in marine theme parks; the poaching of elephants and rhinoceroses to benefit the international black market; the illicit trafficking of African elephants to American zoos; the devastation of habitat for many large mammals that is resulting from climate change. It is now estimated that human activity has contributed to the extinction of more than 80 mammalian species. https://vcresearch.berkeley...

New issues arise constantly. The world needs an ethical revolution, a consciousness-raising movement of truly international proportions. But this revolution is impeded by the navel-gazing that is typically involved in asking, “What is it to be human?”

Let’s rekindle and extend our sense of wonder by asking instead: “What is it to be a whale?” Then let’s go observe whales as best we can, and read the thrilling research of scientists such as Hal Whitehead http://whiteheadlab.weebly.... and Luke Rendell http://biology.st-andrews.a... Let’s ask about elephants (my own most beloved species), and if we can’t go on safaris, let’s watch films of elephants simply living their lives, exhibiting communal devotion, compassion, grief and a host of other complex attitudes that we humans tend to believe belong to us alone.

And let’s do much more philosophical and legal work on theoretical approaches to protecting other animals and developing more reciprocity with them. We have gathered so much scientific information about the complexities of animal lives. Now let’s put it to use philosophically. Will Kymlicka http://post.queensu.ca/~kym... and Sue Donaldson https://philpapers.org/s/Su... have already done wonderful work on reciprocity and community with domesticated animals, but there’s more to do.

In the world of philosophy-influenced policy, the most significant general approach to animal entitlements until now has been that of the British utilitarian Jeremy Bentham https://plato.stanford.edu/... , courageously and ably developed by the philosopher Peter Singer https://plato.stanford.edu/... . This approach continues to have great importance because it focuses on animals’ suffering. If we were to simply stop inflicting gratuitous pain on animals, that would be a huge step forward.

We now know that animals need many things that don’t always cause suffering when they are absent: the chance to associate with others of their kind in normal groupings; the chance to sing or trumpet in their characteristic ways; the chance to breed; the chance to move freely through unobstructed space; the chance to pursue curiosity and make new discoveries. So we also need, I believe, an approach that focuses on a plurality of distinct “capabilities,” or freedoms, that each species requires to live a flourishing life.

I’m now writing a book that will use my prior work on the “capabilities approach” https://plato.stanford.edu/... to develop a new ethical framework to guide law and policy in this area. But mine is just one approach, and it will and should be contested by others developing their own models. Lawyers working for the good of animals under both domestic and international laws need sound theoretical approaches, and philosophers should be assisting them in their work. And there is so much work to do.

So let’s put aside the narcissism involved in asking only about ourselves. Let’s strive for an era in which being human means being concerned with the other species that try to inhabit this world.

Martha C. Nussbaum is the Ernst Freund Distinguished Service Professor at the University of Chicago Law School.

--------------------------------------------------------------------------------------------------------------------------------------

A Short Response:


There seems to be a false dichotomy at work here. Either we ask about our own species (narcissistic), or we cultivate a greater sense of wonder and ethical interest in the myriad animals and life-forms all around us (expansive/mature). Yet we can concern ourselves with both, and indeed they are interrelated more than ever in the current age, in which human beings exercise tremendous influence on the fate of the environment and all the animals she mentions. What matters, I think, is how we inquire into these things. It can be done in a self-congratulatory or narcissistic way, as Nussbaum fears, but it needn't be so.

I don't see how we can explore other animals and nature in light of ethical responsibilities without first exploring 'the human condition,' which is part of nature and the biosphere. To ask who and what we are and may become has been basic to philosophy from Plato, Aristotle, Augustine, and Kant to Nietzsche, Sartre, Simone de Beauvoir, et al. Not a bad pedigree! It's interesting to read Nussbaum lauding Aristotle for sidestepping the "narcissism" that, on her view, theories of human nature and purpose imply. As a sometimes-classicist, she must know that Aristotle's ethics and politics attempt to describe, even define, human nature. For Aristotle, we're political animals (zoon politikon), our function (ergon) is rational activity, and our ultimate end (telos) is to actualize our potentials and achieve well-being (eudaimonia) conducive to a harmonious society/polis.

But such answers, as intellectually valuable as they are, do not give guidance for "human beings today," which is the topic of the series of essays in the NYT. With all the powers we now have to destroy life on earth, we must ask afresh "who" we are and what we want.
-PD

---------------------------------------------------------------------------------------------------------------------------

Questions:

-Is it possible to radically alter the way we perceive and treat other animals without, in so doing, asking questions about our place in nature, our needs, desires and ethical responsibilities?

-Must questions about human nature take the form of "navel gazing," "narcissistic" self-importance, as Nussbaum claims?

-The title of the essay includes the exhortation "Don't Ask" in response to the question, "What does it mean to be human today?" (that's the NY Times' "big question" for the series, with the word "today" included). If we "don't ask" what it means to be human beings in the world today, how might we change "who" we are and what we do to be more compatible with the needs, or perhaps rights, of other beings and the environment?


Thursday, August 29, 2019

The Philosophy of Bertrand Russell: An Overview


                                                 


The following is a tribute to Bertrand Russell as a philosopher (he was also an activist, an essayist in matters of what he called "non-philosophical" ethics and politics, a writer of self-help books, and much more). The author, Andrew David Irvine, is very charitable to Russell, presenting what many see as an ever-changing set of philosophical positions and projects as a unified whole. In a certain sense there is thematic unity in Russell's overall body of work. As Irvine points out, Russell's animating question was always: what can we know, and with what degree of certainty? But the doctrines he embraced in his long efforts to wrestle with that question changed with some frequency, which led to criticisms of inconsistency. For my own part, I admire philosophers who can concede mistakes, change tack, and follow their best hunches and evidence rather than yield to the misguided need to appear "consistent" at all costs. Russell (with AN Whitehead) pioneered a great part of propositional and predicate logic, which is at the root of most formal logic to this day. He developed analytic philosophy in its early phase, illustrating the power of conceptual analysis in such classic essays as "On Denoting." For one reason or another, there has not been a post in his honor here, so hopefully this excerpt from the TLS series, "Footnotes to Plato," will remedy that oversight. It is an abridged version of an article that can be read here: https://www.the-tls.co.uk/a...
______________________________________________________________________________

Russell’s [overall] philosophy was motivated by a single question. As he explains in My Philosophical Development (1959), “There is only one constant preoccupation. I have throughout been anxious to discover how much we can be said to know and with what degree of certainty or doubtfulness.” Initially, Russell turned to logic and mathematics in the hope that they would help him discover a body of eternal, objective truths capable of being known with absolute certainty. When Ludwig Wittgenstein convinced him that logic was simply a mechanism for discovering new ways of saying the same thing using different words, he was intellectually devastated. Even so, this disappointment led eventually to a much more plausible account of human knowledge. At its core was what Russell called the liberal or scientific outlook, summarized in his essay of 1947, “Philosophy and Politics”:
The essence of the liberal outlook lies not in what opinions are held, but in how they are held: instead of being held dogmatically, they are held tentatively, and with a consciousness that new evidence may at any moment lead to their abandonment. This is the way in which opinions are held in science, as opposed to the way in which they are held in theology. The decisions of the Council of Nicaea are still authoritative, but in science fourth-century opinions no longer carry any weight. In the USSR the dicta of Marx on dialectical materialism are so unquestioned that they help to determine the views of geneticists on how to obtain the best breed of wheat, though elsewhere it is thought that experiment is the right way to study such problems. Science is empirical, tentative, and undogmatic; all immutable dogma is unscientific. The scientific outlook, accordingly, is the intellectual counterpart of what is, in the practical sphere, the outlook of Liberalism.

This view soon became a core feature of what is now known as analytic philosophy.

Before Russell, two views about the relationship between science and philosophy dominated. The first was that throughout history the terms “science” and “philosophy” had been used largely interchangeably. “Philosophia”, meaning love of wisdom, happened to be a Greek word. “Scientia”, meaning knowledge, happened to be a Latin word. Both referred, not to a single discipline among many, but to all organized knowledge. As Descartes wrote in 1644, “all Philosophy is like a tree, of which Metaphysics is the root, Physics the trunk, and all the other sciences are the branches that grow out of this trunk, which are reduced to three principal, namely, Medicine, Mechanics and Ethics”. It is in this sense that Isaac Newton titled his landmark work of 1687 not Mathematical Principles of Physics, but Mathematical Principles of Natural Philosophy (Philosophiae Naturalis Principia Mathematica). We still rely on this understanding of philosophy whenever we award those who have carried out advanced research, whether in the humanities, the social sciences or the bench sciences, a doctor of philosophy degree or PhD.

The second view is one that Russell himself initially accepted but later abandoned. This is the view that while science and philosophy both help us understand the world, each has its own independent area of investigation. On this view, it is the scientist’s job to discover contingent, empirical truths about the natural world. Philosophy has a different job, namely the discovery of truths that are somehow universal or necessary, especially those relating to logic and to normative theories such as ethics. On this view, science uses observation to discover that copper conducts electricity, that wood does not, that the speed of sound is less than the speed of light and that species evolve through natural selection. In contrast, philosophy relies on reason rather than observation. It is through reason that we discover that nothing can be in two places at once, that modus ponens and modus tollens are just two of many universally valid forms of inference, that murder is always wrong. On this view, science and philosophy each contribute to our understanding of the world, but they do so by focusing on different branches of Descartes’s tree of knowledge.

Russell introduced a third view. On this view, it is not the job of philosophers and scientists to investigate different branches of knowledge. Instead, philosophy and science contribute different tools to the same job. While science does the heavy lifting of empirical observation, working to discover what the Australian philosopher David Armstrong calls the “geography and history of the universe”, taking geography in this sense to include all space and history to include all time, including the future, philosophy works at clarifying the concepts science uses to generate these observations. As Jerry Fodor summarizes, “Philosophy is what you do to a problem until it’s clear enough to solve it by doing science”.

More precisely, on Russell’s view, philosophy’s task is to develop a logically perfect language, an ideal language through which rigorous, reliable scientific investigation becomes possible. Observations alone, without the proper concepts to express them, inevitably lead to Alice in Wonderland types of confusion in which we are misled by the surface grammar of natural language:
“I see nobody on the road,” said Alice. “I only wish I had such eyes,” the King remarked in a fretful tone. “To be able to see Nobody! And at that distance too! Why, it’s as much as I can do to see real people, by this light!”

As trivial as these types of confusion might seem, Russell’s insight was that they underlie many of the most fundamental intellectual errors in human history.

Developing in-principle accounts of how to avoid such errors is not easy. Everyone knows that from “Men are mortal” and “Socrates is a man” we can infer that “Socrates is mortal”. We also know that from “Men are numerous” and “Socrates is a man” we cannot infer that “Socrates is numerous”, but without an explanation of why these two cases are different we are left wondering about the more general case, about whether from “Men are x” and “Socrates is a man” we can infer reliably that “Socrates is x” for other arbitrary instances of x. On Russell’s view, the analysis needed to resolve these kinds of puzzles does not focus primarily on the study of language, at least not initially. Hence, Russell’s spirited opposition to the mid-twentieth century turn towards ordinary language philosophy.
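
The difference becomes visible once the two sentences are put into modern first-order notation (a standard textbook rendering, offered here purely for illustration; it is not Russell's own symbolism from Principia). "Men are mortal" distributes over individual men, so the predicate transfers to Socrates; "Men are numerous" predicates something of the class of men taken as a whole, so there is nothing to transfer:

\[ \forall x\,(\mathrm{Man}(x) \to \mathrm{Mortal}(x)),\ \mathrm{Man}(s)\ \vdash\ \mathrm{Mortal}(s) \]

\[ \mathrm{Numerous}(\{x : \mathrm{Man}(x)\}),\ \mathrm{Man}(s)\ \nvdash\ \mathrm{Numerous}(s) \]

The surface grammar of the two English sentences is identical; their logical forms are not, and that is exactly the kind of clarification Russell thought analysis owes to science.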

On Russell’s view, logical analysis is primarily a type of abstract investigation about the world. Only secondarily is it involved in developing a corresponding grammar and vocabulary. Without some principled account of why language and inference sometimes fail to mirror the underlying structure of the world, we inevitably have trouble placing even our most careful empirical observations into words. Without conceptual analysis, no scientific inference can be fully trusted. On this view, guidance comes, not from the surface features of natural language, but from the underlying structure of the world itself. As Einstein famously remarks, just as philosophy, when isolated from science, becomes intellectually empty, science without philosophy “is – insofar as it is thinkable at all – primitive and muddled”.

In a broad sense, this goal of having language mirror reality originates with Plato’s injunction to carve nature at its joints. In a more precise sense, it originates with Gottfried Leibniz, for it was Leibniz who first championed the use of a logically ideal language (or characteristica universalis), together with an effective calculus of deductive reasoning (or calculus ratiocinator). Only after developing these two tools are we able to reason about the world without fear of error. As Russell himself translates one of Leibniz’s most famous quotations in his book, A Critical Exposition of the Philosophy of Leibniz,
We should be able to reason in metaphysics and morals in much the same way as in geometry and analysis . . . If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate.
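
For the propositional fragment of logic, at least, Leibniz's "let us calculate" is no longer a fantasy: validity can be settled mechanically by checking every assignment of truth values. Below is a minimal sketch in Python (purely illustrative; the function names and examples are mine, and first-order logic, the system Leibniz and Russell really cared about, is provably not decidable this way):

from itertools import product

def is_valid(premises, conclusion, variables):
    # The inference is valid iff no assignment of truth values
    # makes every premise true while the conclusion is false.
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # countermodel found: the inference fails
    return True

# Modus ponens: from P -> Q and P, infer Q. (Valid.)
mp = [lambda e: (not e["P"]) or e["Q"],  # P -> Q
      lambda e: e["P"]]
print(is_valid(mp, lambda e: e["Q"], ["P", "Q"]))  # True

# Affirming the consequent: from P -> Q and Q, infer P. (Invalid.)
ac = [lambda e: (not e["P"]) or e["Q"],
      lambda e: e["Q"]]
print(is_valid(ac, lambda e: e["P"], ["P", "Q"]))  # False

Running it vindicates modus ponens and rejects the fallacy of affirming the consequent: two philosophers who disagreed about either could, in Leibniz's phrase, simply sit down and calculate.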

On this view there is no hard line separating the philosopher’s armchair from the scientist’s workbench. Conceptual analysis may not have about it the same air of empirical discovery as science, but it is just as central to the scientific enterprise as observation and experiment. Empirical observations reporting how the world in fact is may not immediately tell us how the world should be, but they too are essential for understanding even the most normative of projects. As Wittgenstein sums up Russell’s insight, “Philosophy aims at the logical clarification of thoughts. Philosophy is not a body of doctrine but an activity”. Something similar might be said of science as well.

This view of philosophy did not come quickly or easily to Russell. The idea that philosophy’s purpose is to discover a body of eternal, transcendent truths was deeply embedded, not just in Russell’s early philosophy, but in the philosophical culture of his day. Gradually, though, Russell came to champion the idea that advances in knowledge begin as much with conceptual analysis as with observation. We start with beliefs that seem to be plausible but that we cannot express with any high degree of precision. We then go through a process that involves both observation and analysis. We search for more accurate language, we collect more data, we look for inconsistencies, we investigate more fully. This process, says Russell in My Philosophical Development, “is just like that of watching an object approaching through a thick fog: at first it is only a vague darkness, but as it approaches articulations appear and one discovers that it is a man or a woman, or a horse or a cow or what not”. The result requires the thoughtful integration of both empirical investigation and logical analysis. This view about the importance of analysis, concludes Russell, “is my strongest and most unshakable prejudice as regards the methods of philosophical investigation”.

As Russell sees it, this view of philosophy leads not just to a more accurate understanding of the world. It also leads to an increased appreciation of the importance of objective knowledge quite generally, a lesson that should not be lost on those of us living almost half a century after Russell’s death as we reflect on the social, political and educational issues of our day. As Russell puts it at the end of his History of Western Philosophy (1945),
In the welter of conflicting fanaticisms, one of the few unifying forces is scientific truthfulness, by which I mean the habit of basing our beliefs upon observations and inferences as impersonal, and as much divested of local and temperamental bias, as is possible for human beings. To have insisted upon the introduction of this virtue into philosophy, and to have invented a powerful method by which it can be rendered fruitful, are the chief merits of the philosophical school of which I am a member. The habit of careful veracity acquired in the practice of this philosophical method can be extended to the whole sphere of human activity, producing, wherever it exists, a lessening of fanaticism with an increasing capacity of sympathy and mutual understanding. In abandoning a part of its dogmatic pretensions, philosophy does not cease to suggest and inspire a way of life.

Deepfakes Are Coming. What Happens When We Can No Longer Believe What We See?



The following is taken from a recent (6/10/19) NY Times op-ed piece, Deepfakes Are Coming; We Can No Longer Believe What We See, by the philosopher Regina Rini. Are the epistemological standards by which we separate fact from hearsay going to be inevitably lost when we can no longer rank visual information as the most reliable kind? Rini explores.

************************************************

On June 1, 2019, the Daily Beast published a story exposing the creator of a now infamous fake video that appeared to show House Speaker Nancy Pelosi drunkenly slurring her words. The video was created by taking a genuine clip, slowing it down, and then adjusting the pitch of her voice to disguise the manipulation. Judging by social media comments, many people initially fell for the fake, believing that Ms. Pelosi really was drunk while speaking to the media. (If that seems an absurd thing to believe, remember Pizzagate; people are happy to believe absurd things about politicians they don’t like.)

The video was made by a private citizen named Shawn Brooks, who seems to have been a freelance political operative producing a wealth of pro-Trump web content. (Mr. Brooks denies creating the video, though according to the Daily Beast, Facebook confirmed he was the first to upload it.) Some commenters quickly suggested that the Daily Beast was wrong to expose Mr. Brooks. After all, they argued, he’s only one person, not a Russian secret agent or a powerful public relations firm; and it feels like “punching down” for a major news organization to turn the spotlight on one rogue amateur. Seth Mandel, an editor at the Washington Examiner, asked, “Isn’t this like the third Daily Beast doxxing for the hell of it?” It’s a legitimate worry, but it misses an important point. There is good reason for journalists to expose the creators of fake web content, and it’s not just the glee of watching provocateurs squirm. We live in a time when knowing the origin of an internet video is just as important as knowing what it shows.

Digital technology is making it much easier to fabricate convincing fakes. The video that Mr. Brooks created is pretty simple; you could probably do it yourself after watching a few YouTube clips about video editing. But more complicated fabrications, sometimes called “deepfakes,” use algorithmic techniques to depict people doing things they’ve never done — not just slowing them down or changing the pitch of their voice, but making them appear to say things that they’ve never said at all. A recent research article suggested a technique to generate full-body animations, which could effectively make digital action figures of any famous person.

So far, this technology doesn’t seem to have been used in American politics, though it may have played some role in a political crisis in Gabon earlier this year. But it’s clear that current arguments about fake news are only a taste of what will happen when sounds and images, not just words, are open to manipulation by anyone with a decent computer.

Combine this point with an insight from epistemology — the branch of philosophy dealing with knowledge — and you’ll see why the Daily Beast was right to expose the creator of the fake video of Ms. Pelosi. Contemporary philosophers rank different types of evidence according to their reliability: How much confidence, they ask, can we reasonably have in a belief when it is supported by such-and-such information?

We ordinarily tend to think that perception — the evidence of your eyes and ears — provides pretty strong justification. If you see something with your own eyes, you should probably believe it. By comparison, the claims that other people make — which philosophers call “testimony” — provide some justification, but usually not quite as much as perception. Sometimes, of course, your senses can deceive you, but that’s less likely than other people deceiving you.

Until recently, video evidence functioned more or less like perception. Most of the time, you could trust that a camera captured roughly what you would have seen with your own eyes. So if you trust your own perception, you have nearly as much reason to trust the video. We all know that Hollywood studios, with enormous amounts of time and money, can use CGI to depict almost anything, but what are the odds that a random internet video came from Hollywood?

Now, with the emergence of deepfake technology, the ability to produce convincing fake video will be almost as widespread as the ability to lie. And once that happens, we ought to think of images as more like testimony than perception. In other words, you should only trust a recording if you would trust the word of the person producing it. Which means that it does matter where the fake Nancy Pelosi video, and others like it, come from. This time we knew the video was fake because we had access to the original. But with future deepfakes, there won’t be any original to compare them to. To know whether a disputed video is real, we’ll need to know who made it. It’s good for journalists to start getting in the habit of tracking down creators of mysterious web content. And it’s good for the rest of us to start expecting as much from the media. When deepfakes fully arrive, we’ll be glad we’ve prepared. For now, even if it’s not ideal to have amateur political operatives exposed to the ire of the internet, it’s better than carrying on as if we can still trust our lying videos.

Below is a 10-minute documentary on the problem of deepfakes (courtesy of the WSJ).




Comments/Thoughts:

In the tech sector, there have been attempts to combat the problem of deepfakes. For example, the photo and video verification company Truepic has designed technology which "fingerprints" videos and photographs at the moment they come off the image sensor. These fingerprints can be used later to establish the authenticity of the image. There is an interesting article about this in Fast Company:
“If I was a campaign manager for the next election, I would be videotaping every single one of my candidate’s speeches and putting them on Truepic,” says Hany Farid, a professor of computer science at Dartmouth College who has helped detect manipulation in preexisting images for organizations including DARPA and The New York Times. He now serves as a Truepic adviser. “That way, when the question of authenticity comes up, if somebody manipulates a video, [the user] has a provable original, and there’s no debate about what happened.” https://www.fastcompany.com...
Further, Truepic is partnering with Qualcomm, which manufactures a large portion of the smartphone processors in use today. The plan is to ensure that the Truepic tech is built into smartphones so that it isn't even necessary to download a photo/video authenticating app.
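
To make the idea concrete, here is a minimal sketch in Python of what capture-time fingerprinting could look like, assuming a simple hash-and-sign scheme with a per-device signing key. (Truepic's actual pipeline is proprietary; none of the names below are theirs.)

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stand-in for a key embedded in the camera hardware
public_key = device_key.public_key()       # published so that anyone can verify later

def fingerprint_at_capture(image_bytes: bytes) -> bytes:
    # Hash the raw sensor output and sign the digest the moment the image is captured.
    digest = hashlib.sha256(image_bytes).digest()
    return device_key.sign(digest)

def verify_later(image_bytes: bytes, signature: bytes) -> bool:
    # Anyone holding the public key can check that the file is bit-for-bit unaltered.
    digest = hashlib.sha256(image_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor data..."
sig = fingerprint_at_capture(photo)
print(verify_later(photo, sig))         # True: matches the signed original
print(verify_later(photo + b"x", sig))  # False: any manipulation breaks the fingerprint

Note what such a scheme does and does not buy: it can prove that a file is identical to what a trusted sensor emitted, but it cannot prove that the scene in front of the camera wasn't staged.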

But tech solutions only get us so far here. Just as surely as partial antidotes hit the market, so will more advanced versions of deepfake tech. More to the point, in terms of Rini's article, no technology guarantees that it will actually be used. To combat the current problem, we would need to be, as Rini states, "at the point where a camera that does *not* use the tech is suspicious by default." We are far from being at that point. Why?

Our epistemological norms have served us relatively well for many centuries. Eyewitness accounts are intuitively accorded much greater credibility than second-hand testimony or other sensory data such as audition without vision (e.g. claims to have recognized a voice without accompanying visuals). This hierarchy of reliable evidence informs everything from our methods of education, criminal procedure, news and journalism, right down to plain old small talk and gossip. To practice vigilance in the era of deepfakes, we would have to learn to habitually set such implicit schemes aside and start from the premise that all perceptual evidence is, in principle, deeply problematic. Such a change in cognitive and perceptual MO, if it were to come about, would take far longer than the time in which ever more refined deepfake programs are developed and disseminated widely and cheaply.

Thursday, August 22, 2019

Wittgenstein: On Certainty

(Note: This was written as the third and final post in a series of three devoted to Wittgenstein's late work, particularly Philosophical Investigations. The previous two posts are below in the archives, though this one can be read independently of them as well.)


After Philosophical Investigations was published posthumously in 1953, one last book, entitled On Certainty, appeared. The book takes the form of a response, or series of responses, to the common sense realist GE Moore. But the main line of Wittgenstein's response to Moore gives great insight into the later W's overall appraisal of philosophy and its quest for foundational knowledge.

The problem that faced GE Moore in the early 20th century was that of Idealists who said that there is no way to know that we are not actually immaterial beings in the mind of God, or something of that kind, despite all the appearances to the contrary. Thus we could not really know that our bodies exist; they could be mere images we see when God causes us to see them (or in this era a scenario like that in the Matrix might enable the same type of skepticism such that you think you are moving or eating, but you are really in a box with your brain connected to an elaborate virtual reality program). GE Moore thought that it was sheer nonsense to doubt such "self-evident knowledge" as being in possession of your own body. In sometimes-heated debates with skeptics and idealists Moore would raise his hand up, then raise his voice and exclaim "I KNOW that this is my hand." He was trying to say that knowledge of the real world does not really lead to an infinite regress (e.g. "How do you KNOW that is your hand, Prof. Moore?") but rather reaches a stopping point or foundation which is common-sense self-evident knowledge. "I know I am typing now" would be another such statement.

Why are these sorts of statements self-evidently the case? Because we cannot function in the everyday world if we seriously and sincerely start to doubt these basic articles of common sense, according to Moore. Only philosophers interested in abstract speculation sincerely argue about these things with counter-examples. In everyday life it would never come up. These common sense foundations are, said Moore, indubitable.

Moore's realism came to be seen as, and indeed called, "Naive Realism." But Wittgenstein brought Moore's defense of realism back in a very interesting way. He said, in On Certainty, that Moore was correct to say that there was no reason to doubt that "This is my hand" while holding it up. Why? Because there is no language game in which such a doubt ever arises except philosophy. This only shows how peculiar and detached from reality philosophy actually is, says W. So he agrees that it is "nonsense" to doubt whether or not the hand you are waving is really "your hand." But crucially it is also nonsense to say "I know this is my hand" while holding it up. Why? Because knowledge claims make no sense unless they can be doubted. If you cannot doubt something then the question of knowledge doesn't arise. The point of knowledge is to remove doubt. This was implicit in the PI when discussing sensations and pain. If I cannot doubt that I am in pain, then it makes no sense to say "I know I'm in pain." (Unless someone like a doctor doubts me, but that is not the same as me doubting my own sensations.) What would saying "I KNOW THAT I am in pain" add to the sentence "I am in pain"? To understand this rhetorical question is to understand why W thinks philosophy creates pseudo-problems rather than clarifying things.

Wittgenstein holds that a person who doesn't believe they are in pain when they are feeling pain, or doesn't believe they have a hand when they are using one, or doesn't believe it is raining while walking through the rain with an umbrella, does not need training in the intricacies of epistemology, but rather something more like "a slap in the face, or a conversion experience." His image of a situation in which someone acts like the skeptic or idealist is one where the listener grabs the doubter and shakes him while saying something like, "For God's sake, come to your senses! Snap out of it. You're acting crazy." The only exceptions would involve idle speculation among philosophers or creative imaginings for the production of fantasy novels or movies. In everyday life, it doesn't come up. His view is that epistemology and doubt go together. The quest for certainty tends to generate doubts that otherwise do not arise. In ordinary circumstances we settle for what W calls riverbeds, which are not beliefs or doubts, but the solid ground of pragmatic givens in everyday experience. I am here and you are there, and there's no problem, no intellectual issue at all. This is a lot like American Pragmatism but without any desire to convert it into a "theory" of experience. The experience is just fine as it is and requires no theory, thinks W. Any theory devised to reassure me that "my arm is 'really' my arm," or "I am in pain at T," or "It is raining" is absurdly superfluous, for W. There is no need to gild the lily of non-problematic experience with theories designed to "prove" that there are no problems. Anyone who is caught up in trying to prove that they are feeling pain while they are feeling it, for example, needs psychological help and not philosophy. They need the "Philosophical Therapy" of the Philosophical Investigations which, as covered in previous posts, is designed to convince the suffering philosopher that philosophy itself is a kind of illness from which s/he can only be cured by ceasing to engage in the activity. Perhaps W saw himself as something like "a philosopher in recovery!" And here we come to the end of the line of W's later work.
_____________________________________________________________________
 
Relevant Readings:

-On Certainty: Ludwig Wittgenstein, trans. Denis Paul & G.E.M. Anscombe, Oxford: Basil Blackwell, 1969

- Lectures & Conversations on Aesthetics, Psychology and Religious Belief: Ludwig Wittgenstein, ed. Cyril Barrett, Univ. of Cal. Press

-An Introduction to Wittgenstein's Philosophy of Religion:
Brian R. Clack, Edinburgh Univ. Press, 1999
-------------------------------------------------------------------------------------------------------------

Below is a humorous snippet from Derek Jarman's film, Wittgenstein, which deals with just the issues of this post. In it, Wittgenstein lectures with both Russell and Keynes in his audience, and storms out of the room in a state of vexation. (He did do this during his lectures at times.)



Following the original OP from 2016 (reposted above) was a dialogue between a reader/commenter (who now shows up only as "Guest" in the Disqus comments) and myself. I think it's worth including here, as it elaborates the content and meaning of W's On Certainty. It also takes up the possible relation of W's post-1929 thought to the earlier thought of American Pragmatists like Peirce.


Guest:

Maybe I missed it, but it seems we're leaving out sort of the most crucial part of Wittgenstein's view here. For Wittgenstein, radical doubt/skepticism is structurally incoherent. When we claim we know or doubt some proposition, we are submitting that proposition to an evaluation: we are testing it to see if it can hold up to the evidence. But in order to test or evaluate a proposition, we need to do so in terms of things which are taken for granted: if I'm testing whether there is an elephant on my couch by looking around my living room, I am taking it for granted, for instance, that my eyes are working properly. Without taking this for granted, testing "there is an elephant on my couch" by looking at my couch would not work. In order to evaluate some proposition, we must take other propositions for granted: else we don't have any standard or criteria to compare them to. Something must hold firm. And the things we take for granted, like that our eyes work properly under normal circumstances, that we have hands, that our body has never left the Earth, and so on, form our epistemic bedrock- the background framework of beliefs and assumptions against which we test epistemic claims.
On the other hand, the whole point of radical, universal doubt is that nothing is taken for granted. Universal doubt removes the very thing it requires to function, if Witt's epistemological picture here is correct: with no epistemic bedrock, we have no criteria or standard for comparison, and no basis for accepting or doubting anything. So universal doubt is incoherent and ultimately self-defeating, for Witt. And Moore is mistaken in saying that he knows he has two hands, for the same reason that the skeptic is mistaken in doubting he has hands- that we have hands is part of the background framework we use to evaluate knowledge and doubt claims; it cannot coherently be itself subjected to that framework. We don't know that we have hands- we take it for granted that we do, and such foundational assumptions provide the necessary starting point for epistemic evaluation, such as knowledge and doubt.

PD:

I don't think the points you raise were missed, at least I hope not, as I did my best to summarize them (and other points) in a short and accessible way. Anyway, your reading sounds right to me. The "background framework" of assumptions against which we check knowledge claims, whenever we do, is a good description of the "riverbeds," "banks," "scaffolds," etc., the metaphors W uses for the presuppositions you are discussing. Importantly, they are internal to, and embedded in, cultures (forms of life and associated language games). He uses metaphors like "beds and banks" to avoid any misinterpretation of him setting up yet another theory of knowledge, as I point out in the post. I thought the ideas you are mentioning were conveyed in passages like this one above:

" In ordinary circumstances we settle for what W calls "riverbeds"
which are not beliefs or doubts, but the solid ground of pragmatic
givens in everyday experience. I am here and you are there, and there's
no problem, no intellectual issue at all. This is a lot like American
Pragmatism but without any desire to convert it into a "theory" of
experience. The experience is fine as it is and needs no "theory." Any
theory devised to reassure me that "my arm is 'really' my arm," or "I am
in pain at T" or "It is raining" is absurdly superfluous, for W. There
is no need to gild the lily of non-problematic experience with theories
designed to "prove" that there are no problems."

Statements like "The Earth was here before I was" and "I know I have a hand before me" may be bedrock, but I'm not sure I interpret them as *epistemic* bedrock, if by that there is any implication of foundationalism. I think he sidesteps (like the Pragmatists) epistemology and ontology, predicating knowledge on shared norms within cultural-linguistic communities. I think you may have the same idea in mind, but because you used the phrase *epistemic bedrock* I wanted to check whether or not we can agree that if this is epistemology then it is social epistemology at bottom.

The question of how satisfactory such an account (and, of course, there's much more to On Certainty than either of us can encapsulate here) may or may not be is one that divides interpreters into myriad camps. There are a lot of different answers out there. But as far as the overview skipping your point, I don't think that happened. If it did, I'll edit in some stronger statement of the notion of presuppositions that allow language games to "go on."

Guest:

I don't say "epistemic" here because Witt was a foundationalist, but because the issue he's concerned with is justification (including of epistemic attitudes like knowing, doubting, believing, etc.) and the problem of regress. On the other hand, Witt's solution does share one striking feature with foundationalism: like foundationalists, Witt holds that the regress of justification eventually terminates- when we reach those framework assumptions which we do not, and even cannot, subject to evaluation, the chain of justification ends. But unlike in foundationalism, these terminal judgments are not intrinsically justified (due to self-evidence or immediate presence to our awareness or what have you)- being part of the background framework required for justification in general, they are themselves outside the scope of justification, and in this sense are non-epistemic. In any case I don't see anything wrong with construing this as "social" epistemology, given Witt's somewhat novel (at the time at least) emphasis on the communal nature of language and even judgment.

And though your quoted passage is perfectly consistent with what I've said, I'm still not sure that someone who wasn't already familiar with Witt's view on the matter would come away understanding why there are "no intellectual issues" with these framework judgments (i.e. because of their role in the structure of judgment and evaluation). But either way I don't think you need to edit your OP at this point: I imagine that if the point wasn't clear enough to begin with, it should be fairly clear with these additional comments.

PD:

Thanks. It's hard for me to take on the perspective of an outsider; maybe I'll add a sentence or two, though I think that after 15 days, and with comments like these, you're probably right that it's unnecessary.

Also, I pretty much figured you meant what you just explained when using the term "epistemic."
I'm not sure how new the communal conception of knowledge was, to be honest. As I wrote in the OP, I think he ends up in a very similar position to some of the American Pragmatists like Dewey and GH Mead, though they weren't against systems and theories, nor did they think philosophy was always misleading; rather they looked at the history of philosophy piecemeal.

Guest:

Sure, and I only say that Witt's emphasis on the social nature of knowledge/judgment was "somewhat novel", i.e. in the sense that it was waaaaaay outside the norm (which was to consider individual knowers in isolation from their environment, including/especially their social environment), and particularly so for the hardcore logical/analytic tradition Witt found himself in at Cambridge... not because Witt was the first to consider it. Maybe "unusual" would've been a better word for it.

PD:

Absolutely. I didn't mean to nitpick. I'm interested in the history of ideas, and it turns out that W's friend Frank Ramsey--before he died--discussed pragmatism with Wittgenstein. Ramsey was doing work on Peirce and was familiar with Dewey et al. What is interesting about all this is that you're right in suggesting that Wittgenstein was out of step with the analytic world. He was seen by some of his peers as having gone astray. Russell was particularly scornful, saying that W's later work was "invented to save W the trouble of thinking," or words very close to those.
Russell had heard some of these ideas before, though. His debates with John Dewey were sustained and the divisions sharp (in terms of theories; they met and actually became friends who disagreed about philosophical matters). Point is, there's been all this talk since Rorty about "neo-pragmatism," and I think in many ways Wittgenstein was already there in the mid-to-late 30s when he arrived at most of what ended up in PI. Interestingly it spawned a minor, if forgotten, revival of pragmatism at the time, and a very similar field in linguistics with Paul Grice and "Pragmatics." This leitmotif of finding meanings in the various uses of words in social contexts is definitely part of Mead and Dewey, who had also railed against the picture theory approach, the residual effects of Platonism (essential definitions etc.) and Cartesian dualism. I'd say there's a good bit of "family resemblance." But maybe I'm biased, as I did a lot of work on the pragmatists in the past.
Of course none of this means that either pragmatists or Witt are right about many of the things argued. And indeed, as you point out on another thread, there are many interpretations of W (and for that matter Dewey). Still, the general contours are strikingly similar imo.
Though I do not like her interpretations of Peirce, James or Dewey, the philosopher Cheryl Misak has written on the historical connections running from the pragmatists through Ramsey to Wittgenstein. She may overstate the case, but she examines letters and dusty manuscripts from Peirce, James, Dewey, Moore, Ramsey, Wittgenstein and others. Anyway, it's an interesting historical connection. https://global.oup.com/academic/pro...

PD: (Following up on last comment about Witt. and Pragmatism)

The following passage is from W's notebooks of 1930, which Misak published for the first time in its entirety, and is one of several intriguing documents re: the prag-Witt connection. It is Wittgenstein MS 107, from 1930 (written two days after Ramsey's death), and contains an explicit reference to "meaning as use" as an idea coming from pragmatism. 1930 was the year of Wittgenstein's so-called "U-turn," and these are among the notes that represent a decisive change of direction which culminated in PI and On Certainty.

-From Cambridge Pragmatism, pp. 239-40. Wittgenstein: MS 107-

"Sentences--what we ordinarily call sentences in everyday use-- seem to me to work differently from what in logic is meant by propositions, if there are such things at all. And this is due to their hypothetical character. Events do not seem to verify or falsify them in the sense I had originally intended--rather there is still, as it were, a door left open. Verification and its opposite are not the last word.

Is it possible that everything I believe to know for sure--such as that I had parents, I have brothers and sisters, that I am in England--that all of this should prove to be false? That is, could I ever acknowledge any evidence as sufficient to show this? And then could there be even more reliable evidence to show that the first kind of evidence was deceptive?

When I say, "There is a chair over there," this sentence refers to a series of expectations. I believe I could go over there, perceive the chair and sit on it, I believe it is made of wood, and I expect it to have a certain hardness, inflammability etc. If some of those expectations are disappointed, I will see it as proof for retaining that there was no chair there. [This sounds almost like a quick summary of the main arg. in Peirce's How To Make Our Ideas Clear.]

Here one sees how one may arrive at the pragmatist conception of true and false: a sentence is true as long as it proves to be useful. Every sentence we utter in everyday life appears to have the character of a hypothesis. A hypothesis is a logical structure. That is, a symbol for which certain rules of representation hold. The point of talking of sense-data and immediate experience is that we are looking for a non-hypothetical representation.

But now it seems the *representation* [sic] loses all its value if the hypothetical element is dropped, because then the proposition does not point to the future any more, but it is, as it were, self-satisfied, and hence without any value. Experience says something like, "It is nice elsewhere too, and this is where I am anyway." And it is through the telescope of expectation that we look into the future.

It makes no sense to speak of sentences if they have no instrumental value. The sense of a sentence is its purpose.

When I tell someone, "There is a chair over there," I want to produce in him certain expectations and ways of acting. It is terribly hard here not to get lost in questions that do not concern logic. Or, rather, it is terribly hard to find one's way out of this tangle of questions, in order to contemplate it whole, from the outside." -Ludwig Wittgenstein (Fr. 1930 Notebooks of Wittgenstein as reprinted in Cheryl Misak's book, Cambridge Pragmatism: pp. 229-30)

That's MS 107 from start to finish. I find all of this--though academic--just fascinating. The official story is that Pragmatism died with Dewey himself, if not sooner (around 1950). "Sloppy" and "crude" Pragmatism was replaced by the icy logic and rigor of Analytic Philosophy. But PI was published in 1953, and focused Anglo-analytic philosophy on decidedly social and pragmatic ideas and themes, leading to a cross-disciplinary set of movements emphasizing everyday or "ordinary" language, speech act theory, linguistic analysis of "pragmatics" or language *use*, and Wittgenstein-inspired social science, driven largely by his student, Peter Winch, among others. This is all years before Rorty's Consequences of Pragmatism, which brought the pragmatists back into dialogue with Wittgenstein, Heidegger, Quine, Goodman, et al. But had the divide ever been as sharp as many mainstream analytic philosophers had imagined? I wonder.

Any thoughts?
Guest:

Yeah that is really interesting- obviously one can find areas of overlap and agreement between pragmatism and the PI, but not quite like this: parts of this passage sound, as you note, as if they could have been written by Peirce himself! And for my part, I tend to think that these historical narratives we get about AP [i.e. Analytic Philosophy- ed.] killing off pragmatism, or the continental/analytic split, tend to be overly simplified and often just misleading- useful if you're trying to give a "history of philosophy in 60 seconds or less," but not that useful otherwise. There was a lot more common ground than is often supposed; one can even find hints of pragmatism in the positivists, and in some ways LP represents analytic philosophy taken to its logical extreme. I suspect if you looked at everyone's manuscripts, you'd find a lot more of this behind-the-scenes dialogue going on, like what we see here with Wittgenstein.

And I guess it's sort of fitting that Witt could be viewed as a kind of bridge between pragmatism and AP/OLP [OLP = Ordinary Language Philosophy - ed.], the same way he is sometimes viewed as a bridge between AP and continental philosophy- I remember Walter Kaufmann wrote about GE Moore and Soren Kierkegaard each being "half a Socrates," and Witt being the closest we get since Socrates to fusing these two disparate tendencies or philosophical spirits. The fact that this buys into/depends on the simplistic picture of the AP/continental split is sort of beside the point: Wittgenstein displayed qualities, and was concerned with topics, that were all over the philosophical map, and so is natural to view as a bridge between philosophers who were a little more domain-specific. And I suppose it shouldn't be overly surprising, when we consider how much of an odd duck Witt was to begin with: a Viennese occupying a chair at Cambridge, writing in German about characteristically British philosophical concerns!

Topic Suggestions


This is for discussing topic suggestions for future posts/discussions. Please leave your suggestions and/or feedback in comments section below.

Edward Said: Orientalism


Edward Said (1935-2003) was a Palestinian-American literary critic, political activist and social thinker whose work was profoundly influential in the area of post-colonial studies, a field he helped to define and develop. He is best known for his book, Orientalism (1978), which describes the "Orient" as Said believes Europeans and Americans understand it, and asks how this understanding came about, and in what ways it informs Western political domination of the Middle East and Asia. Orientalism here has at least two meanings. On the one hand, it denotes the field in which scholars of Eastern cultures, history and languages work. The book, in part, debunks that field. But the primary meaning the author has in mind is a history of strongly biased representations of Eastern (esp. Arabic and Middle Eastern) cultures produced by Westerners including-- but not limited to-- academics.

Whereas most books on colonialism before Orientalism had examined art, literature, philosophy and other cultural productions as expressions of preexisting prejudice, racism or negative ethnic stereotypes, Said turns that logic around and argues that it is through cultural discourses (e.g. travelogues, works of art, novels, films, philosophy etc.) that the very category "The Orient" is brought into being. The "Orient" is not a description of a real time and place, but a "way of seeing and understanding" all people who live in the East (Middle East, Central Asia, South Asia and Far Eastern Asia). It is a Western social construction. According to Said, we first construct this wholly "Other" culture and then define ourselves against it as both superior and warranted in controlling it (the "Orient") for its own good.

Said writes at a very general level. He does not get into many of the usual political and economic details that often come up when discussing the Middle East and Asia (and he concentrates mostly on the Middle East with an emphasis on Arabs, though he believes the "Orient" is a general construct which applies to India, China et al.). By examining cultural artifacts and texts, he argues that there is a remarkably static and monolithic set of traits that Westerners assign to the Orient, one which combines romantic and racist tropes at one and the same time. Here are a few examples of how he defines Orientalism as the "Other" and opposite of Western culture.

- The Orient is irrational; the West is rational.

- The Orient is mysterious and elusive; the West is clear-headed and direct.

- The Orient is feminine, more passive than active; the West is masculine, acting on nature and other cultures rather than being passively acted upon like the East.

- The Orient has a fixed, unchanging essence and an exotic aesthetic; the West is a dynamic, changing and progressive culture which is always advancing.

- Thus the Orient is a fascinating, adventurous, but often savage world, vastly inferior to our own, which needs to be guided, much as children need guidance and discipline. We can and should provide this "beneficial" domination.

Said is influenced profoundly by Michel Foucault, who holds that it is through discourse ("discursive formations") that we constitute the social forms and mindsets ("epistemes") we inhabit. Thus, Said thinks that texts and cultural discourses actually invent the Orient. It is not, once again, a real time or place existing objectively, but a way of interpreting and representing whole cultures and peoples which justifies Western oppression of them. The production of these representations is largely an unconscious process and not a deliberate activity.

*********

Because of the highly abstract and general descriptions of Orientalism provided, one might well wonder whether other cultures don't have equally unflattering myths about, say, Europeans. Intellectual historian Ian Buruma wrote a book along these lines, Occidentalism: The West in the Eyes of its Enemies https://www.amazon.com/dp/B... Is it not the case (one might ask) that in-groups typically consider themselves superior to out-groups and produce "discourses" or myths and stereotypes that reflect poorly on others? Herodotus (Histories, Bk. I) noted long ago that cultural groups tend to feel superior to other groups in proportion to their cultural and geographic distance. If we imagine other cultures in order of decreasing nearness and familiarity, said Herodotus, we tend to have the least sympathy for those which are most distant and least like our own. The ancient notion of the Chinese Empire as Tianxia (All that is under Heaven) articulated a similar world view that "centered on the Imperial court and went concentrically outward to major and minor officials and then the common citizens, tributary states, and finally ending with the fringe 'barbarians'." https://en.wikipedia.org/wi... Didn't Romans and Egyptians also conceive of themselves as the culmination of civilization? Did they not see their neighbors as inferiors? So, it may not be so unusual to produce prejudiced discourses about others that justify exploitation, hostilities, or at the very least chauvinism.

Another criticism that has been leveled at Said is that he does not go through the major works of any noted Orientalists. Bernard Lewis complained along such lines, generating a heated exchange with Said that continued until the latter's death in 2003. Because Said makes his case on the basis of biased representations in art, travelogues, novels, etc., and because he was a literary critic by training and not a historian, not only Lewis but other historians who were trained Orientalists felt that he unfairly turned the word that stands for their discipline into a term of abuse. Indeed, the term "Orientalism" is now used far less frequently in its descriptive sense than it was before the book appeared.

However, if some Orientalists and historians were unhappy about the book, many anticolonial and postcolonial writers received it very positively. Indeed, Said (along with a few others such as Gayatri Spivak) is considered one of the founders of Postcolonial Studies, with its heavy emphasis on the post-structuralist philosophies of Foucault and Derrida as well as neo-Marxian theorists like Antonio Gramsci; these are three of the strongest influences on Said's approach. In short, the book generated controversy, and to a lesser extent still does when it comes up today.

The following 3-minute video made by Al Jazeera nicely captures Said's concept of Orientalism.



Possible Question/Topic: Considering both the positive and negative responses to the book, what are your thoughts based on this short description?

Book Review: Islamism: What it Means For The Middle East And The World


After reading an excellent historical overview of Middle Eastern history from WWI to the close of the 20th century on the History Community Channel https://disqus.com/home/dis... I thought it would be a good idea to post a book review I wrote in 2016 that focuses on the ideological responses to that history within the same region during the same period. The book reviewed below also covers 21st-century developments through the Arab Spring and its disappointing aftermath.

********************************

Tarek Osman is an Egyptian economist, journalist and historian of the modern Middle East focused on contemporary issues and problems in that region. He has written a thoughtful and penetrating analysis of the rapid rise and decline of Political Islam/Islamism in the wake of the Arab Uprisings. He writes retrospectively ("How did we get here?"), prospectively ("Where might we be going?") and prescriptively ("Where *should* we try to go from here?"), all against the backdrop of the political and intellectual history of the past century in the Middle East. This is a lot to take on, and though his ambitions sometimes outstrip his answers, it is remarkable that he writes with such insight about these widely ranging topics. Additionally, he writes in a clear and accessible way even when he takes up aspects of intellectual history and politics that are usually discussed in academic contexts that can be forbidding to the interested lay-person. The book can be read profitably by those without any special background and serves as a fine overview.

Osman thinks that in order to understand the (mostly) unfortunate current events in the region, one must first come to terms with the historical, socio-political, economic and ideological forces that have shaped not just the events of the 21st century, but also the ideas and beliefs that have become predominant in the discourse of citizens, politicians, clerics and media pundits. He seeks to understand the main causes of today's hostile conflicts over the meaning and role of such fundamental categories as the state, political participation, civil society, religion, the Ulama (community of clerics), Sharia, secularism, tolerance, and economic and social justice. He breaks the history of the modern Middle East down into three general paradigms, each of which prevailed during a particular era over roughly the past century. In chronological order these are the ages of Liberalism, Nationalism and Islamism respectively. Of course these categories are not entirely mutually exclusive, but they are useful models for understanding the predominant trends in politics and culture.

The first, the Liberal age, was characterized by increased Westernization and secularism, both cultural and political. Examples might include Kemal Atatürk's extensive reforms in the 1920s and '30s stressing republicanism, popular sovereignty, enforced secularization, equality of the sexes and Westernization. Atatürk tried to liberalize Turkey by dictatorial means, and it will forever be argued by historians just how successful he was. Another example might be the Egyptian Constitution of 1923, drafted by the liberal Wafd Party, which adopted a parliamentary representative system. The era of Western-style liberal reform movements waned after the establishment of Israel in 1948, or rather after the Arab states lost the war they fought to dissolve Israel. As Arabs saw things, the Western powers supported the right of Jewish immigrants to determine their own fate in a sovereign nation-state which displaced Arabs, yet continued to administer exploitative colonies and "mandates" in the Islamic world. Thus it seemed that the European rhetoric of equality, freedom and self-determination was empty. So went the reasoning of many disillusioned liberal Arabs. Conclusion: European models of Liberalism are not the answer. (In recent times the Liberal tradition has been reevaluated by many pro-democracy movements and secularists in the region who advocate a less Eurocentric appropriation of Liberalism.)

Thinkers and politicians then turned to Arab Nationalism, as exemplified by Nasser and later such entities as the Baath party. Political self-determination and a decisive break from the late colonial mindset were at the heart of Nationalism, which was usually secular in tone. The basis of solidarity and identity in this age was "Arabness," which was to serve as an inspiration to modernize states like Egypt, Syria and Iraq while diminishing poverty and oppression. In this paradigm, all religions in the Arab world are (in theory) tolerated and none form the nucleus of collective identity. Muslims, Maronites, Jews, Druze, Copts, Shia and Sunni-- these distinctions should not be barriers to one's inclusion and participation in Arab states and society. Nasser's successful nationalization of the Suez Canal in defiance of France and Britain was seen as the high point of the Nationalist age. But despite the rhetoric, poverty was not diminished, and Islamists like the Muslim Brotherhood were jailed and tortured, leading to a more radical and politicized brand of Islamism. After Israel swiftly defeated Egypt, Jordan and Syria in 1967, annexing much of their territories, the Nationalist outlook lost much of its motivational sweep. Nationalist regimes tended towards dictatorships which were unresponsive to the economic and social ills affecting large swathes of their populations. After 1967 it was clear that these regimes were not nearly as strong or independent as they claimed to be. They tended increasingly to become client states of the great Western powers or the Soviet Union, and their nationalistic pretensions were belied. Since the 70s Islamism and secularism have both grown steadily, but Islamism has grown dramatically through organizations that are increasingly well funded and popular.

Osman maintains that the Middle East is currently in the age of Islamism, which by the time of the Arab Uprisings in 2011 had reached something of a peak. By 2011, groups like the MB and Hezbollah had been embraced by many citizens who rely upon them for social services including healthcare, education, job opportunities and professional networking. Such organizations function pragmatically, and increasingly are in step with the high-tech digital age, recruiting members with a broad range of professional, technological and entrepreneurial skills. They have a great influence on media outlets and are able to get Islamist messages across on satellite TV and of course the internet. Osman explains the sociological and economic factors that led to the growth and efficiency of Islamist organizations and political parties. He then discusses the ideological and political factors that have led to their rapid decline in post-"Arab Spring" countries where they recently had a firm hold on legitimate power (Morsi and the MB in Egypt and the Ennahda Party in Tunisia) but lost all of it (Egypt) or much of it (Tunisia) in the face of widespread popular resistance. Why were they rejected by large segments across the Middle East and North Africa?

From the standpoint of the relatively small but influential secularists and progressives who, in large measure, kicked off the uprisings (students, intellectuals, journalists, elites), Islamists hijacked the Uprisings, which were supposed to lead to greater rights and economic opportunity. They see the Islamists as even more oppressive than the military-based dictatorships they replaced, and they are deeply skeptical when Islamist politicians claim to promote democracy, tolerance and equality. They point, for example, to Morsi's telling move to bypass the judiciary and grant extraordinary powers to himself using the excuse of a transitional period of crisis. Appointees in Egypt and Tunisia were largely drawn from the Islamist parties' own networks, and Sharia was sometimes touted by both of those parties. All of this belied the representative, democratic veneer of the allegedly moderate Islamists who claimed to be committed to coalition politics in which many voices, including those of the liberals, would be registered. Aggravating things, there have been economic downturns rather than the promised growth and increased opportunities after 2011. Currently, according to Osman, the standoff between secularists and Islamists defines the situation. So does Islamism have a future? If so, is it compatible with liberal secularism?

Osman does not pretend to know the answer, but he discusses earlier "Islamists" (usually they are called Islamic Modernists) who did see just such a possibility. Such leading thinkers of the late 19th and early 20th century as Al-Afghani and Muhammad Abduh (a prestigious cleric) thought that Islamic civilization had become stagnant, and that true reform requires creative and flexible reasoning aimed at making Islam workable in the modern world. This was a scholarly bunch, and they often pointed out that the Golden Age of Islam was made possible by the reconciliation of reason and faith among philosophers and scholars of jurisprudence (fiqh) and divine law (Sharia) in ways that enabled them to address problems not mentioned or anticipated in scripture. For many reasons, the license to think originally in jurisprudence (ijtihad) was rejected as dangerous to "genuine" or "original" Islam. The works of philosophers like Avicenna were deemed heretical, and some heresies were codified as capital offenses. The modernists thought that Islam needed to recover its genuinely Islamic but creative and adaptable thinking in politics, law and education. Osman ends the book by praising their efforts and saying that in the end much will depend on whether today's Islamists become as thoughtful as these modernists, or whether they will continue to fall in line with the more extreme and rigid brand of Islamism associated with anti-Western thinkers like Sayyid Qutb, who has greatly influenced militant extremists. If the former, there is hope for finding common ground among secularists, Islamists and other groups, including those of minority religions. If not, the conflict will likely continue to escalate.

Osman also discusses the "Turkish model," in which the originally Islamist AKP (Erdogan's Justice and Development Party) showed itself capable of retaining its conservative principles while playing ball with other parties and interest groups. Many Turkish liberals and secularists might disagree with this sanguine interpretation, as they are uneasy with the AKP record. But it is true that Turkey, a European state and member of NATO, has been governed by a modified Islamist party and president in recent years. Though tensions related to this likely contributed to the failure of Turkey's bid for a temporary seat on the UN Security Council in 2014, the AKP is distinct from such groups as the MB. Osman, like Tariq Ramadan and other authors, uses the "Turkish Model" to encourage a flexible approach to Islamism-- one that might allow tolerant and modern political institutions to take root in the Middle East. He clearly sympathizes with liberal secularists on many points, but sees them as a bit insulated from the general population; a privileged group with many fine ideals but an inability to reckon with the role that religion plays in the lives of many ordinary people in their own countries. The idea of emulating the "Turkish Model" is intriguing, but it presupposes the stability of democratic or parliamentary institutions, which is not a given. As I write this review, Erdogan is trying to put limits on freedom of assembly, freedom of speech and the independence of the judiciary, all in the name of security, which is causing alarm among many outside observers and citizens in Turkey who fear a slippery slope leading to more extreme and authoritarian Islamism.

As for revisiting the Islamic Modernist thinkers and reformers of the 19th century, I am not sure that it would amount to much more than an interesting diversion. I suspect that nuanced and abstruse thinkers and reformers of Islam will not play a prominent role in the unfolding, crisis-ridden situation. Multiple emergencies would seem to overshadow any interest in scholarly Islamic philosophy and deep discussions of fiqh (jurisprudence). Pragmatism is the order of the day in an age of eroding nation-states (e.g. Syria, Iraq, Libya), refugee and humanitarian crises of the greatest magnitude, the growth of large and well-funded militant groups like ISIS who seize and control oil-rich provinces, deepening sectarian wars, and widespread lack of access to education, healthcare, stable job opportunities, and security of life and limb for so many Arabs. The stakes are high and the window of opportunity to attenuate these problems is limited.

Overall, Osman astutely synthesizes history, journalism, economics and intellectual history, making many important and illuminating connections along the way. His historical analysis is stronger than his prognosis, in my view. Still, it is hard to fault him if, like most others writing on these vexing topics, some of his answers and suggestions are vague and perhaps unrealistic. The book has more than adequate merit to compensate for that common shortcoming. I strongly recommend it both as an historical overview and a thinking person's guide to current events in the region.

Book Information:
Osman, Tarek
Islamism: What it Means for The Middle East and The World
Yale University Press; 1st Ed. (Feb. 2016)
------------------------------------------------------------------------------------------------------------------------

Below is a 3-minute video of the author, Tarek Osman, concisely describing some central themes contained in his book (from Yale University Press).

Wednesday, August 21, 2019

Language, Culture & Thought: Linguistic Relativity




In the thought-provoking sci-fi film, Arrival, we watch as a brilliant linguist learns the written language of aliens that have landed on Earth. In gradually mastering their language, she starts to have flashbacks... or are they flash-forwards? I won't spoil the plot, so suffice it to say that the language of these aliens is non-linear with respect to the direction of time. In learning it, the character's experience of time itself is profoundly changed, with surprising impacts on herself and those with whom she interacts. Arrival is based on a short story by Ted Chiang, a philosophical sci-fi writer in the mold of, say, Borges or PK Dick. Chiang, in the original short story, was largely inspired by anthropological and psycholinguistic work on what is called Linguistic Relativity or the Sapir-Whorf Hypothesis-- a misnomer linguists are stuck with, since neither Sapir nor Whorf proposed a hypothesis. Anyway, the main focus of such work is to assess the extent to which different languages influence, shape, or (in the "strong version") determine the structure of thought and experience; notably the way that space and time are represented and understood by speakers in disparate linguistic groups. No research has yet unearthed a language that makes time-travel possible, as in the science fiction, but recent empirical work has led to intriguing findings that make the once discarded hypothesis impossible simply to dismiss out of hand.

The idea that language shapes thought was an intellectual taboo during much of the latter half of the 20th century in linguistics and analytic philosophy. Noam Chomsky, Jerry Fodor, and other Universalists dominated the intellectual landscape. For these thinkers, written and spoken languages differ in only trivial ways. All natural languages are manifestations of a universal and innate language capacity, which Fodor called "Mentalese." Mentalese precedes and is the basis for all spoken and written languages, on this view. But in the past few decades interesting experimental work has made linguistic relativity quite a bit harder to brush aside. Perhaps the most important researchers have been Stephen Levinson in the 1990s and, more recently, Lera Boroditsky. Boroditsky espouses the view that language influences the manner in which we think and the way we understand such basic categories as time and space. She holds a "weak" version of relativity in which language has considerable influence on us but does not determine our modes of thought and understanding. In her own words: "It is concluded that (1) language is a powerful tool in shaping thought about abstract domains and (2) one's native language plays an important role in shaping habitual thought (e.g., how one tends to think about time) but does not entirely determine one's thinking in the strong Whorfian sense." (Boroditsky, 2001)

Understanding Time In Different Cultures:


In the 90s, Stephen Levinson studied an Aboriginal Australian language called Kuuk Thaayorre. Subsequently Boroditsky expanded on Levinson's experimental work, focusing on speakers' understandings of spatial and temporal relations. Before describing the Kuuk Thaayorre, it's important to describe our own ways of making sense of time. According to Levinson and Boroditsky, we do so by building up models of time that are based on spatial metaphors. Our (Western) framework for grasping time is based on the metaphor of journeying through space. Wherever "I" (any given speaker) happen to be, the future is understood as that which is in front of me and the past as that which is behind me. We naturally point forward when indicating the future, and when we talk and think about the past it seems intuitively right to say it is "behind" us. If we gesticulate while speaking, we indicate these directions literally. Our sense of temporal orientation depends on this cognitive framework, relative to the bodily position of each speaker. But the system gets more complex than that.

Suppose someone tells you that an appointment for next Wednesday has been "moved forward two days." What day, then, is the new appointment being changed to? Stop and think. What is your answer? Most likely, some readers will say "Friday" and others will say "Monday." This has been shown experimentally. But why this ambiguity? Well, if I take "moved forward" as a reference to (my) self or ego journeying forward (a metaphor, of course)- that is, me moving forward two days from next Wednesday- then I end my journey forward "on" Friday (as in the game of hopscotch). However, suppose I imagine the appointment itself moving forward by two days in my direction. In that case we picture the appointment, as the saying goes, "approaching me": "I" am static and the event moves toward me. The appointment then moves from Wednesday to Tuesday and ends up "on" Monday. Boroditsky claims that the cognitive metaphor of movement through time can be thought of in terms of (a) ego movement or (b) event movement. If I move into the future, I go forward toward the event in front of me. If the future is "approaching," then I am stationary and it moves toward the front of my body. Those who thought the appointment was changed to Friday in the above example were employing the agentive metaphor of the ego moving forward, while those who chose Monday employed the passive metaphor of being approached by the event (the appointment), much as we sometimes say "the deadline is approaching" or "the holidays are approaching."
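To make the two readings concrete, here is a minimal sketch in Python (my own illustration, not drawn from Boroditsky's studies; the function name and metaphor labels are hypothetical) showing how the very same instruction yields different dates under each metaphor:

    from datetime import date, timedelta

    def move_forward(appointment: date, days: int, metaphor: str) -> date:
        # Resolve "moved forward N days" under the two spatial metaphors.
        if metaphor == "ego-moving":
            # I journey forward through time, so the event lands later.
            return appointment + timedelta(days=days)
        if metaphor == "time-moving":
            # The event travels toward me, so it lands earlier.
            return appointment - timedelta(days=days)
        raise ValueError("metaphor must be 'ego-moving' or 'time-moving'")

    next_wednesday = date(2019, 9, 4)  # a Wednesday
    print(move_forward(next_wednesday, 2, "ego-moving"))   # 2019-09-06, a Friday
    print(move_forward(next_wednesday, 2, "time-moving"))  # 2019-09-02, a Monday

Same words, two metaphors, two answers-- which is exactly the ambiguity the experiments exploit.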

It is important to remember that in our own culture and language, forward and backward are not absolute or cardinal directions, but relative to the body of each speaker. So whether I face east or west does not matter. What is in front of me (the future) adjusts to my position. If I "look forward to seeing you," the spatial metaphor does not in any way hang on notions like "north" or "east." But things are very different for the Kuuk Thaayorre. They represent time, like the apparent movement of the Sun, as moving from east to west. If I speak of the future in their language, I will gesture to the west no matter which way my body is facing. The words used to make such future references are encoded in the memories of native speakers in terms of east, west and 14 other cardinal directions https://en.wikipedia.org/wiki/Cardi... Indeed, cardinal directions are central to the entire language, which carries a set of 16 of them https://en.wikipedia.org/wi... To make sense of which events in my memory happened before or after others, I must have the memories stored in such a way that I know which way I was facing at any given time. These speakers can "dead reckon," or immediately point in any of the sixteen absolute directions. When they request that you move, they specify not left, right, up or down, but perhaps southwest or northeast. All of this is rooted in a profoundly basic mode of spatial cognition that also structures their understanding of time. Even the word for "hello" is itself inclusive of one's direction of heading... literally. This means that you must know which way you are facing (north, south, east, west, etc.) to communicate in this language.
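The contrast between the two frames of reference can be captured in a toy Python sketch (again my own construction, purely illustrative; for brevity I assume an eight-point compass rather than the Kuuk Thaayorre's sixteen directions). An egocentric term like "left" only picks out a fixed place in the world once you know the speaker's heading:

    # Relative frame: directions anchored to the speaker's body.
    # Absolute frame: directions anchored to the world, as in Kuuk Thaayorre.
    COMPASS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

    def absolute_direction(facing: str, relative: str) -> str:
        # Translate an egocentric direction into a compass direction,
        # given which way the speaker is facing.
        offsets = {"ahead": 0, "right": 2, "behind": 4, "left": 6}
        i = COMPASS.index(facing)
        return COMPASS[(i + offsets[relative]) % len(COMPASS)]

    # "On your left" names a different place depending on your heading...
    print(absolute_direction("N", "left"))  # W
    print(absolute_direction("E", "left"))  # N
    # ...which is why an absolute-frame speaker must always track their heading.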

The fact that these people can "dead reckon" like birds that "know" how to fly south or north is a striking empirical finding. In the past, scientists supposed accurate dead reckoning to be biologically impossible for human beings https://en.wikipedia.org/wi.... Apparently, if one's language requires the skill, then it can be cultivated. This raises the question, "What other skills and cognitive capacities do different languages engender?" Language, it appears, is not just a window on the world, but a tool for augmenting and refining latent cognitive abilities and skills. But Boroditsky's claim is not that you would have to speak that language to be oriented in terms of absolute space. Rather, we could all be aware of our absolute spatial position without GPS if we practiced this orientation from early childhood because of some social function that made it necessary. It would not have to be an ancient Australian language. That is why her relativity is "weak" and not strong: linguistic constructions influence but do not determine our modes of thought and understanding.

Writing Systems & Time:


When it comes to representing time graphically, the general rule is that the direction of time follows the direction of the writing system. Indo-European languages usually run from left to right. Studies show that if you lay pictures of a baby, child, teenager, adult and old person down on a table and ask, say, a speaker of English to "put them in the right order," they will be arranged chronologically from left to right, i.e. baby, child, teen, adult and old person in that order. If you ask a speaker of Arabic or Hebrew to put the same pictures in "the right order," that order will be reversed, as those languages run from right to left. I have a print of Hieronymus Bosch's famous triptych, The Garden of Earthly Delights, on the wall, and have occasionally wondered whether Arabs, for example, would have a more optimistic interpretation of it. Would they tend to see it as "moving" towards a future Garden of Eden rather than Hell? https://en.wikipedia.org/wi... If anyone whose first tongue is Arabic or Hebrew is reading this, please let me know! The point is that writing systems also influence our perception of time and direction.

Boroditsky also did research showing that speakers of Mandarin are far more likely to think of the direction of time as vertical (up = earlier, down = later) due to the frequency of such spatio-temporal metaphors in the language. Like most East Asian languages, Mandarin originally was written and read from top to bottom, starting at the upper right side and moving down and then left. However, like other East Asian scripts it can be oriented vertically or horizontally, and from left to right or the opposite direction, since the written language is made up of disconnected syllabic and/or ideographic units, each occupying a block of space. Today it is seldom oriented right to left. So speakers of Mandarin, when asked, tended to arrange chronological pictures left to right like English speakers. But sometimes they also employed the top-to-bottom scheme, which was not found in speakers of English. When bilinguals were given items to arrange "in the right order," those who learned English at a younger age were less likely to think in terms of vertical patterns, indicating plasticity of cognition resulting from acculturation. Finally, speakers of English who were given exercises that reinforced associations of vertical time patterns produced results similar to those of the bilinguals, even though they only spoke English. That last point is important. It shows that language does not determine the structure of thought, but rather influences it (again, the "weak" linguistic relativity thesis). If we were to practice exercises for orienting ourselves in terms of "up = earlier, down = later," or the 16 cardinal directions used by the Aborigines, we could, in principle, acquire the same skills they already possess! In short, languages can be seen as a set of tools that alter and/or augment our cognitive capacities, perceptions and interpretations of reality in ways we are only beginning to understand.
___________________________________________________________________

Sources:

-Stephen C. Levinson, Space in Language and Cognition: Explorations in Cognitive Diversity (New York: Cambridge University Press, 2003)

-Lera Boroditsky, How Does Our Language Shape the Way We Think? (see here: https://www.edge.org/conver... )

-Lera Boroditsky, Does Language Shape Thought? Mandarin and English Speakers' Conceptions of Time, Cognitive Psychology 43, 1-22 (2001)