An AI startup is facing allegations of racism and discrimination after being accused of manipulating non-American accents to sound “more white.” The company uses speech recognition technology to change the user’s accent in near-real time. (Source)
Friction is an impediment to a perfect customer experience. Removing this friction is always welcome, but homogenisation by a dominant culture is a bit more sketchy. It’s laudable that someone aims to remove friction from communication. Raze that tower of Babel—or does it need constructing? I’m no biblical scholar. I’m all for fostering communication, but this control should be an option for the customer receiving the call, not the sender—press 1 if you don’t wish to hear a foreign accent.
When it comes down to it, translation services have the same challenge. Which accent comes out the other end? (I’ll guess it is similar to this one.)
And what American accent is being represented? The neutral accent of the flyover states, the Texas drawl, or the non-rhotic accent of Harvard Yard? I’m guessing it’s not California cool or urban Philadelphia or down on the bayou. Press 7 for Canadian English, eh?
It’s bad enough that US English, despite having a minority of speakers, is running roughshod over World English spelling and pronunciation, colonising the world via streaming services and infesting the internet.
The BBC relaxed its RP requirements in 1989 for the purpose of regional cultural inclusiveness. Which direction do we want to go?
In the end, this is another example of businesses being more concerned with business than customers and the human experience.
As for me, I prefer an accent I don’t have to work so hard to discern. But at the same time, I’ve worked with many people whose first language is not English, and though it does take a bit more effort, it’s really not that difficult. Besides, I’ve heard native English speakers with regional accents and dialects that are just as taxing.
I sent a survey a month or so ago asking which regional accent people preferred. As it turned out, not surprisingly, people preferred the English they are used to hearing. Continental Indians preferred continental English; Americans wanted neutral American English; Jamaicans preferred Jamaican English; and British speakers preferred modern RP. And so it goes.
As a Social Justice Warrior, I tend to favour diversity and inclusion as a principle. As such, I follow some people who share this interest. In fact, most of these people expend much more energy toward this end than I do. The challenge I am about to convey is that some people don’t read beyond the subject line, and don’t even attempt to assess the underlying claim, let alone the issue at hand.
I recently engaged in a nonsensical interaction that I am sharing and dissecting. It started with this share, an image of the border outline of Nigeria with an overlay caption that reads: “Nigeria becomes the first country to ban white and British models in all advertising”.
I’d like to point out two items in particular. Firstly, the caption is fabricated. I’ll get to the source reference presently. Secondly, the re-poster aptly corrects the caption when he shared it: “Well, all foreign models, but HELL YEAH!”
Nigeria recently passed a law that essentially imposes a tariff or levy on advertising content using non-Nigerian talent. There is no mention of ‘white’ models, though British models would fall under this umbrella. This protectionist law stems from nationalism. I’d guess that ‘white’ people comprise less than one per cent of the Nigerian population, but I could be wrong. This is well outside my area of expertise.
My response was to say “Down with Nationalism and the Promotion of Otherism.” I may be misinterpreting myself, but it feels to me that this is denouncing racism and other forms of otherness.
Sabrina responds, “Why is not having white models in advertising a bad thing?” and “Isn’t the whole point of advertising [for] people to…see themselves… ?” In response, I should have pointed out that the initiative had nothing to do with skin colour. Instead, I responded to the second question: the point of advertising is to sell product. Full stop. If people see themselves with the product, then great. Clearly, this comprises a fraction of successful adverts. More common is to make a connection to what they aspire to. It’s not about making a social statement—unless, of course, that social statement will sell more product. If an ad with a white model will sell more product, a business would be derelict not to employ one; conversely, if white models result in lower sales, a business would be foolish not to switch to the more successful vector.
Sabrina really goes off the reservation with her reply, somehow conflating Nigeria with the African continent. Attention to detail is not her forte.
At this point, I feed into her laziness and send her a link to an Al-Jazeera article addressing the law.
She leaves with a parting shot, and I quote: “Have you ever thought about the harm you might cause by playing devil’s advocate and ‘creating an argument’?”
She’s off course and then attempts to diminish my point by calling it ‘playing devil’s advocate’ rather than admitting that she hadn’t even considered the rationale and possible ramifications. She didn’t even grasp the main point, so I suppose I should forgive her for not noticing secondary and edge cases.
At this point, Dr Perkins adds her voice. Her initial question is valid, and as I responded, the answer is “No”. The race card was introduced by some narrator who didn’t know what game he was broadcasting. But then she goes on to “applaud Nigeria for making a [decision] centering [on] Blackness”—suffice it to say that was not what prompted the decision.
Notice, too, that other people “Liked” the other comments, a testament to the bystanders’ principle of least effort.
I recognise that the original post anchored the conversation off the actual topic, but it was also very easy to track down the reference and note the content discrepancy. Granted, this takes time and effort, but so does responding on a thread and then escalating commitment to a non-cause. And for one tilting at windmills to toss around accusations of playing devil’s advocate? It’s not a good sign.
But wait, there’s more. I commented on this post on a second thread.
In this case, Dr Anderson suggests that this is just “a country celebrating its own citizens by recognizing their beauty and knowing they can move product just as good, and probably better than white women”, to which I responded that this is a testable hypothesis. It’s either true that, on balance, white models sell more product or black models do. Again, don’t miss the point that none of this is about white versus black models.
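My ‘testable hypothesis’ remark can be made concrete. Here is a minimal sketch of how an advertiser might compare the conversion rates of two ad variants with a two-proportion z-test; the function name, sample sizes, and conversion counts are all hypothetical illustrations, and a real test would require randomised exposure.

```python
import math

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF, built from math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant A converts 60 of 1,000 views, variant B 40 of 1,000
z, p = two_proportion_z_test(60, 1000, 40, 1000)
```

With these made-up numbers, the difference clears the conventional p &lt; 0.05 threshold; with the small effect sizes typical of creative changes, far larger samples would be needed before declaring either variant the winner.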
Somehow, LinkedIn can’t seem to keep its threads in order, but Ms Rice takes my hypothesis-testing point as support for racism before descending into full-on troll mode.
It scares me to see two academic doctors participating in this thread, neither showing attention to detail nor even a fundamental commitment to evidence.
This is why it is difficult to engage with social media. You have no idea what level a commenter is coming in on. And even when spoon-fed information, they refuse to alter their position. In fact, they tend to double down on their wrongness. Moving on…
The Conspiracy against the Human Race is a work of non-fiction by horror author Thomas Ligotti. There is an audio podcast version and a YouTube video version. Feel free to leave comments in the space below or on YouTube.
In this segment, I’ll be reviewing a book by Thomas Ligotti, The Conspiracy Against the Human Race: A Contrivance of Horror.
I haven’t done any book reviews, but since I tend to read a lot of books, I figure why not share my take and see how it’s received? If you like these reviews, click the like button and I’ll consider creating more.
Let’s get started.
First, I’ll be providing a little background, and then I’ll summarise some of the content and main themes. I’ll close with my review and perspective.
The author is Thomas Ligotti. He is a published writer in the horror genre in the vein of Lovecraft’s atmospheric horror. I’ve not read any of his work and haven’t read much fiction in ages.
The Conspiracy Against the Human Race is Ligotti’s first work of non-fiction. The book was originally published in 2010. I read the 2018 paperback version published by Penguin Books.
The Conspiracy Against the Human Race falls into the category of Ethics and Moral Philosophy in a subcategory of pessimism. The main thesis of this book is that humans ought never to have been born. Following in the footsteps of anti-natalist David Benatar, who published Better Never to Have Been in 2006, Ligotti doubles down on Benatar’s position on the harm of coming into existence and argues that humans should just become extinct. Moreover, we should take out life in general.
In the book, Ligotti posits that consciousness was a blunder of nature and is the root of all suffering. He invokes the Buddhist position of dukkha, often translated as ‘life is suffering’. He establishes that most people are aware of this fact, but that we are nonetheless wired to be biased toward optimism through delusion and what a psychoanalyst might call repressed memories. Moreover, pessimists are a cohort not tolerated by a society that doesn’t want its delusions shattered.
Philosophically, Ligotti is a determinist. I’ve created content on this topic, but in a nutshell, determinism is the belief that all events are caused by antecedent events, leading to a chain of causes and effects stretching back to the beginning of time and bringing us to where we are now. If we were able to rewind time and restart the process, we would necessarily end up in the same place, and all future processes will unfold in a like manner.
Ligotti likes the metaphor of puppets. He employs puppets in two manners. Firstly, being the determinist he is, he reminds us that we are meat puppets with no free will. Our strings are controlled by something that is not us. This something ends up being Schopenhauer’s Will, reminding us that one can do what one wills, but one cannot will what one wills. This Will is the puppeteer. Secondly, puppets are soulless, lifeless homunculi that are employed in the horror genre to create unease by means of an uncanny association. He cites the work and philosophy of Norwegian author Peter Wessel Zapffe, who also elucidates human existence as a tragedy. Humans are born with one and only one right—the right to die. And death is the only certainty. The knowledge of this causes unnecessary suffering.
“Stringently considered, then, our only natural birthright is a right to die. No other right has ever been allocated to anyone except as a fabrication, whether in modern times or days past. The divine right of kings may now be acknowledged as a fabrication, a falsified permit for prideful dementia and impulsive mayhem. The inalienable rights of certain people, on the other hand, seemingly remain current: somehow we believe they are not fabrications because hallowed documents declare they are real.”
Ligotti reminds us that consciousness is a mystery. We don’t really know what it is or what causes it other than it exists and we seem to have it, to be cursed with it. He adopts Zapffe’s position that consciousness is also responsible for the false notion of the self.
As all life is, humans are the result of an evolutionary process. Consciousness was just the result of an evolutionary blunder. He cites Zapffe and conveys that “mutations must be considered blind. They work, are thrown forth, without any contact of interest with their environment.”
Whilst pessimists view consciousness as a curse, optimists such as Nicholas Humphrey think of it as a marvellous endowment.
He summarises the reason humans have it worse than the rest of nature:
“For the rest of the earth’s organisms, existence is relatively uncomplicated. Their lives are about three things: survival, reproduction, death—and nothing else. But we know too much to content ourselves with surviving, reproducing, dying—and nothing else. We know we are alive and know we will die. We also know we will suffer during our lives before suffering—slowly or quickly—as we draw near to death. This is the knowledge we “enjoy” as the most intelligent organisms to gush from the womb of nature. And being so, we feel shortchanged if there is nothing else for us than to survive, reproduce, and die. We want there to be more to it than that, or to think there is. This is the tragedy: Consciousness has forced us into the paradoxical position of striving to be unself-conscious of what we are—hunks of spoiling flesh on disintegrating bones.”
I’ll repeat that: Consciousness has forced us into the paradoxical position of striving to be unself-conscious.
He cites Zapffe’s four principal strategies to minimise our consciousness: isolation, anchoring, distraction, and sublimation.
Isolation is compartmentalising the dire facts of being alive. A coping mechanism, he argues, is to push our suffering out of sight, out of mind, shoved back into the unconscious so we don’t have to deal with it.
Anchoring is a stabilisation strategy by adopting fictions as truth. We conspire to anchor our lives in metaphysical and institutional “verities”—God, Morality, Natural Law, Country, Family—that inebriate us with a sense of being official, authentic, and safe in our beds.
Distraction falls into the realm of manufactured consent. People lose themselves in their television sets, their government’s foreign policy, their science projects, their careers, their place in society or the universe, et cetera. Anything not to think about the human condition.
Sublimation. This reminds me of Camus’ take on the Absurd. Just accept it. Embrace it and incorporate it into your routine. Pour it into your art or music. Ligotti invokes Camus’ directive that we must imagine Sisyphus happy, but he dismisses the quip as folly.
Ligotti underscores his thesis by referencing the works of other authors from David Benatar to William James.
Interestingly, he suggests that people who experience depression are actually in touch with reality and that psychology intervenes to mask it again with the preferred veil of delusion and self-deception. Society can’t operate if people aren’t in tune with the masquerade. Citing David Livingstone Smith’s 2007 publication, Why We Lie: The Evolution of Deception and the Unconscious Mind, Ligotti writes: “Psychiatry even works on the assumption that the ‘healthy’ and viable is at one with the highest in personal terms. Depression, ‘fear of life,’ refusal of nourishment and so on are invariably taken as signs of a pathological state and treated thereafter.”
Ligotti returns to the constructed notion of the self and presents examples of how a lack of self is an effective horror trope, citing John Carpenter’s The Thing and Invasion of the Body Snatchers.
He spends a good amount of time on ego-death and the illusion of self, a topic I’ve covered previously. He mentions Thomas Metzinger and his writings in several places, including Being No One, published in 2003, ostensibly reinforcing the rejection of naïve realism—the recognition that things are not knowable as they really are in themselves—something every scientist and philosopher knows.
He delves into Buddhism as a gateway to near-death experiences, where people have dissociated their sense of self, illustrating the enlightenment by accident of U. G. Krishnamurti, who after some calamity “was no longer the person he once was, for now he was someone whose ego had been erased. In this state, he had all the self-awareness of a tree frog. To his good fortune, he had no problem with his new way of functioning. He did not need to accept it, since by his report he had lost all sense of having an ego that needed to accept or reject anything.” Krishnamurti had become a veritable zombie. He also cites the examples of Tem Horwitz, John Wren-Lewis, and Suzanne Segal, but I won’t elaborate here.
Russian author Leo Tolstoy, famous for War and Peace and Anna Karenina, was another pessimist. He noticed coping approaches his associates had employed to deal with their mortality.
Ignorance is the first. As the saying goes, ignorance is bliss. For whatever reason, these people are simply blind to the inevitability of their mortal lives. As Tolstoy said, these people just did not know or understand that “life is an evil and an absurdity”.
Epicureanism comes next. The tactic here is to understand that no one gets out of here alive, so we might as well make the best of it and adopt a hedonistic lifestyle.
Following Camus’ cue, or rather Camus following Tolstoy and Schopenhauer, he suggests the approach of strength and energy, by which he means the strength and energy to commit suicide.
Finally, one can adopt the path of weakness. This is the category Tolstoy finds himself in, writing “People of this kind know that death is better than life, but not having the strength to act rationally—to end the deception quickly and kill themselves—they seem to wait for something.”
The last section of the book feels a bit orthogonal to the rest. I won’t bother with details, but essentially he provides the reader with examples of how horror works by exploring some passages, notably Radcliffe’s The Mysteries of Udolpho; Conrad’s Heart of Darkness; Poe’s The Fall of the House of Usher; Lovecraft’s The Call of Cthulhu; and contrasting Shakespeare’s Macbeth and Hamlet.
This has been a summary of Thomas Ligotti’s The Conspiracy Against the Human Race. Here’s my take. But first, some background, as it might be important to understand where I am coming from.
I am a Nihilist. I feel that life has no inherent meaning, but people employ existentialist strategies to create a semblance of meaning, much akin to Zapffe’s distraction theme or perhaps anchoring. This said, I feel that, similar to anarchism, people don’t understand nihilism. Technically, it’s considered a pessimistic philosophy because people are acculturated to expect meaning, but I find it liberating. People feel that without some constraints of meaning, chaos will ensue, as everyone will adopt Tolstoy’s Epicureanism or fall into despair and suicide. What they don’t know is that they’ve already fabricated some narrative and have adopted one of Zapffe’s first three offerings: isolation, which is to say repression; anchoring on God or country; or distracting themselves with work, sports, politics, social media, or reading horror stories.
Because of my background, I identify with Ligotti’s position. I do feel the suffering and anguish that he mentions, and perhaps I am weak and rationalising, but I don’t feel that things are so bad. I am more sympathetic to Benatar’s anti-natalism than to advocating a mass extinction event, though I feel that humans are already heading down that path. Perhaps this could be psychoanalysed as collective guilt, but I won’t go there.
I recommend reading this. I knocked it out in a few hours, and you could shorten this by skipping the last section altogether. If you are on the fence, I’d suggest reading David Benatar’s Better Never to Have Been. Perhaps I’ll review that if there seems to be interest. If you’ve got the time, read both.
So there you have it. That’s my summary and review of Thomas Ligotti’s The Conspiracy against the Human Race.
Before I end this, I’ll share a personal story about an ex-girlfriend of mine. Although she experienced some moments of happiness and joy, she saw life as a burden. Because she had been raised Catholic and embodied the teachings, she was afraid that committing suicide would relegate her to hell. In fact, on one occasion, she and her mum had been robbed at gunpoint, and her mum stepped between my girlfriend and the gun. They gave the gunmen what they wanted, so the situation came to an end.
My girlfriend laid into her mother, telling her that if she ever did something like that again and took a bullet that was her ticket out, she would never forgive her. As it turned out, my girlfriend died as collateral damage during the Covid debacle. She became ill, but because she was living with her elderly mum, she didn’t want to go to hospital and bring something back. One early morning, she was writhing in pain, and her mum called the ambulance. She died later that morning in hospital, having waited too long.
For me, I saw the mercy in it all. She got her ticket out and didn’t have to face the hell eventuality. Not that I believe in any of that, but she was able to exit in peace. Were it not for the poison of religion, she could have exited sooner. She was not, in Tolstoy’s words, weak, so much as she had been a victim of indoctrination. I feel this indoctrination borders on child abuse, but I’ll spare you the elaboration. So, what are your thoughts on this book? Is there a conspiracy against humanity? Are optimists ruining it for the pessimists? What do you think about anti-natalism or even extinction of all conscious beings or the extreme case of all life on earth? Is Ligotti on to something or just on something?
In pursuit of my travail intellectuel, I stumbled on a thought experiment proposed by Richard Taylor regarding an old crowd favourite, Sisyphus.
Of course, Albert Camus had famously published his Myth of Sisyphus essay (PDF), portraying Sisyphus’s life as analogous to that of the workaday human, absurdly plodding through existence like rinse-and-repeat clockwork—same gig on a different day.
Given my perspective on human agency and the causa sui argument, I felt commenting on Taylor’s essay, The Meaning of Life (PDF) would be apt.
The story of Sisyphus finds the namesake character fated by the gods to push a stone up a hill each day, only for it to roll back down, for him to push it back up again, ad infinitum. Camus leaves us with the prompt, ‘One must imagine Sisyphus happy’. But must we?
As Taylor puts it,
Sisyphus, it will be remembered, betrayed divine secrets to mortals, and for this he was condemned by the gods to roll a stone to the top of a hill, the stone then immediately to roll back down, again to be pushed to the top by Sisyphus, to roll down once more, and so on again and again, forever. Now in this we have the picture of meaningless, pointless toil, of a meaningless existence that is absolutely never redeemed.
Taylor wants us to consider an amended Sisyphus. He writes,
Let us suppose that the gods, while condemning Sisyphus to the fate just described, at the same time, as an afterthought, waxed perversely merciful by implanting in him a strange and irrational impulse; namely, a compulsive impulse to roll stones.
This significantly alters the dynamic. In this scenario, Sisyphus is not toiling; rather, he is pursuing his passion—following his heart. This is the athlete, artist, politician, or mass murderer following their passion. In fact, one might say that he is being his authentic self. He has no control over his self or his desire to roll stones, but he is in his element.
Taylor’s ultimate point is that in either case, the life of Sisyphus is just as devoid of meaning. Ostensibly, nothing can provide meaning. The best one can do is to have the perception of meaning. He writes,
Sisyphus’ existence would have meaning if there were some point to his labors, if his efforts ever culminated in something that was not just an occasion for fresh labors of the same kind. But that is precisely the meaning it lacks.
Although we cannot control what is without, contentment and happiness derive from perception, as we might be reminded by the quip attributed to Schopenhauer.
In the end, Taylor wants us to know that nothing out there can make us happy.
The meaning of life is from within us, it is not bestowed from without, and it far exceeds in both its beauty and permanence any heaven of which men have ever dreamed or yearned for.
Jordan Peterson is decidedly not my cup of tea. I can tolerate Pinker and Haidt. I agree with much of what they have to say, but in this video, the dissonance finally dawns on me. Interestingly, I can tolerate Peterson within the scope of this discussion.
I don’t agree with much of what these three are saying, but it is refreshing to hear Peterson outside of a philosophical domain, a place where he has no place. And although I don’t agree with him here, it is on the basis of his argumentation rather than his abject ineptitude.
I disagree with this trio. This video reveals these three people as Institutionalists. Peterson may be a political Conservative versus Pinker’s and Haidt’s enlightened Liberalism, but this is a common core value they defend with escalating commitment. Typically, we find these to be polar opposites, but here they have a common enemy: not necessarily anti-institutionalists or anarchists, but people who don’t understand venerable institutions and thereby risk upsetting the apple cart or toppling the Jenga tower because they just don’t understand. Not like them. Besides institutionalism, the common thread is Paternalism. They may disagree on the specifics, but one thing is true: We know more than you, and this knowledge is embedded in the sacred institutions. If only the others understood.
In this video, we hear these three commiserate about the diversity and inclusion forces in universities today, and where this movement is off base.
First, this is an extension of sorts of a prior post on No-Self, Selves & Self, but I wanted to create a short video for my YouTube channel to establish somewhat of a foundation for my intended video on the causa sui argument. Related content can be found on this one of the Theseus posts.
This video is under 8 minutes long and provides some touch-points. I had considered making it longer and more comprehensive, but since it is more of a bridge to a video I feel is more interesting, I cut some corners. This leaves openings for more in-depth treatment down the road.
As has become a routine, I share the transcript here for convenience and SEO relevance.
In this segment of free will scepticism, we’ll establish some perspectives on the notion of the self. Most of us in the West are familiar with the notion of the self. What’s your self? It’s me. For the more pedantic crowd, it is I.
We’re inundated with everything from self-help to self-awareness to self-esteem to selfies and self-love. We’ve got self-portraits, self-image, and self-harm. We’ve got self-ish and self-less. We’ve even got self-oriented psychological disorders like narcissism. Attending to the self is a billion-dollar industry.
And whilst psychology and pop-psychology seem to consider the self to be a nicely wrapped package fastened tightly with a bow, it’s a little more contentious within philosophy. But there are other perspectives that don’t include the self, from no-self to slices of discontiguous selves. Let’s shift gears and start from the notion of having no self, what Buddhism calls no-self.
Buddhism is an Eastern discipline, so it does not have the same foundations as the West. According to this system of belief, the notion of a personal identity is delusional, so there is no self at all. This obsession and clinging to this delusional self is a major cause of suffering.
In this view, all is one and indivisible, but self-deception leads us to believe we are individuals, each with a discrete self. In fact, the Buddhist notion of Enlightenment—as opposed to the Western notion of Enlightenment—is precisely this realisation that there is only one self, and this is the collective self. But, to be fair, except for the times when the self has yet to be developed—we’ll get to this in a bit—this notion of no-self is aspirational in the sense of losing one’s self in order to reduce suffering.
The concept of selflessness exists in language, but this is more aimed at sublimating the self in favour of a greater collective good.
The self is the central feature of many personality theories, from Sigmund Freud and Carl Jung to Rollo May and Abraham Maslow, from individuation to self-actualisation. The self is self-referenced as I and me. Historically, the self had been considered synonymous with some metaphysical soul. Nowadays, psychology has taken the reins on definitions.
One version of the self can be thought of as a single thread connecting beads of experience through time, time-slices of experience. We’ll come back to this. This sense of self extends backwards in time until now and contains aspirations projected forward in time as viewed from the perspective of now.
Whilst we use terms like ‘person’, ‘self’, and ‘individual’ somewhat synonymously, they each have different meanings: ‘individual’ is a biological term; ‘person’ is sociological or cultural; ‘self’ is psychological. The default position in the West is the adoption of the psychological notion, where each person has a self, but there is also a philosophical notion. The perspective of self is so ubiquitous, with people accepting it as obvious, that it feels like I shouldn’t even spend time producing content to fill this space. But for a sense of completeness, I shall.
Psychologist William James distinguished between the ‘I’ and ‘me’ sense of the self, but let’s not parse this and consider each a stand-in for the self as experienced by the self. In this view, the self is generally considered to be the aggregation of continuous phenomenological moments and how we interpret them into a sense of ‘identity’.
In the West, the notion of having a self is imposed by convention. To feel otherwise is considered to be a sign of mental illness. As much as I want to share Foucault’s perspective on how delineating mental illness operates to the benefit of power structures, let’s just consider this out of scope. The Diagnostic and Statistical Manual of Mental Disorders, DSM-5, notes that a key symptom of borderline personality disorder, BPD, is a ‘markedly and persistently unstable self-image or sense of self’. Become selfless at your own peril.
There are challenges with the notion of self even in psychology. In developmental psychology, the self—differentiating one’s self into an identity separate from the world—is not acquired until about the age of 18 months. Lacan had suggested that this so-called mirror stage developed at around 5 months as part of ego formation, but further research disputes this.
Although I won’t go into detail, individualist cultures experience the self differently than collectivist cultures. The origin of the concept of the individualistic view of self can be traced to early Christianity. In American culture, Protestantism seems to be a primary driver of the individualistic view of self. Let’s continue.
Heraclitus quipped, ‘No man ever steps in the same river twice, for it’s not the same river and he’s not the same man’. This is a nod to the impermanence of the self. Instead, there are selves.
Galen Strawson proposes that although he understands intellectually what others mean when they use the word self, he doesn’t share this experience emotionally. Unlike the phenomenological slices connected by a thread, he doesn’t feel he has a thread. He posits that he experiences life episodically, without the prevailing sense of narrativity or continuity.
A typical view of the self is that one feels narratively connected to past slices—the 5-year-old self with the 20-year-old self and with the 50-year-old self, whether that 50-year-old self is in the past, present, or future. Even though we are not the same person, there is some felt affinity.
As for me, I consider the self to be a constructed fiction that serves a heuristic function. I don’t feel as disconnected as it seems Strawson does, but I don’t feel very connected to my 7 or 8-year-old self. And I can’t even remember before that. I’m not even sure I’ve got one data point for each year between 8 and 12, and it doesn’t get much better until 18 or 20. From there, I may be able to cobble together some average of a dozen or so per year without prompting, but I don’t even feel like the same person. Many of my views and perspectives have changed as well.
I was in the military until I quit as a Conscientious Objector. During that time, I became aware of Buddhism, and I doubled down on my musical interests. I worked in the Entertainment industry until I became an undergrad student, transitioning to become a wage slave whilst also attending grad school until I graduated. I’ve had several career foci since then. With each change, I’ve had a different self with a different outlook.
Can I connect the dots? Sort of. But I can also create a thematic collage out of magazine clippings or create art with found objects. I can tell a disjointed story of how I transitioned from X to Y to Z. It may even contain some elements of truth. Given how memory operates, who can tell?
In any case, what about you? Do you feel like you have a self? Does your sense of self have any gaps or inconsistencies? Do you feel you don’t have a self at all?
In the next segment, I’ll discuss why we may not have free will, owing to a lack of agency based on a causa sui argument.
As a result of these recommendations, I’ve watched some six or more hours of video interviews with Iain, some of which are hosted on his own site, Channel McGilchrist, including this one. Before I get to the topic promised by the title of this post, I’ll say that I like Iain. I respect his intellect, his demeanour, and his approach. If you are a credentialist, he’s an Oxford-educated psychiatrist—so he’s no slouch.
Iain’s positions are well researched, informed, and articulated. I could listen to him for hours. In fact, I have. And yet I disagree with a fundamental position he takes on intuition. Allow me to build up to that.
My first recommendation came from a reaction I shared that depicting the left and right brain hemispheres as analytic versus creative was overly reductionist and quaint. McGilchrist was recommended because he disagreed. But it turns out his disagreement was more with the way the distinction was being portrayed. The answer was wrong because the question was wrong. In a nutshell, his contention is that we shouldn’t be asking what each hemisphere processes but how each goes about processing. I agree with this.
His point is that in cases where an experience (inputs) might be processed on one side versus the other, the interpretation (outputs) would necessarily differ. To make a false analogy, the left brain might be performing an exponential function whilst the right brain might be performing an arithmetic function. So, if ƒ(left) = xˣ and ƒ(right) = x + x, then an input of 3 would yield 27 and 6, respectively. There is nothing wrong with either side; they just produce different results. In context, this difference might matter: how many feet across is that chasm I must leap? I say, ‘Oops’, as I fall to my demise, having underestimated the distance by using the right rather than the left function.
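For the programmatically inclined, the false analogy above can be sketched in a few lines of Python. The function names are mine and purely illustrative—nothing here is meant to model actual hemispheric processing:

```python
# Two toy "hemispheres" applying different functions to the same input.

def f_left(x):
    # exponential-style processing: x raised to the power of x
    return x ** x

def f_right(x):
    # arithmetic-style processing: x plus x
    return x + x

x = 3
print(f_left(x), f_right(x))  # 27 6 — same input, different interpretations
```

Same chasm, two very different estimates of its width.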
So where is this showdown you are wittering on about? A little more setup.
Science is stereotypically an analytic function, which is to say it requires a lot of left-hemisphere processing. Psychology—and keep in mind that I cast psychology as pseudoscience, or para-science when I am being more charitable—elevates the notion of intuition as not only having value but as being largely ignored by science.
Those who have been following me for a while know that I am also critical of Scientism™, the blind-faith devotion to the current state of science as some infallible truth. But neither am I an advocate for metaphysical claims, which is what I feel Psychology™ is trying to make of intuition. It feels like they are not only trying to inject a metaphysical claim; they are simultaneously making a normative claim that you should have (and trust) intuition. Further, they are staking out territory to be able to say that an absence of this acceptance is pathological—so this is a power play. We’ve got the tea-leaf readers taking up arms against science.
Of course, I am being hyperbolic and polemic for effect, but this division exists. Iain is not the first to attempt to elevate intuition. A central idea that Jonathan Haidt tries to sell the reader on in his book, The Righteous Mind, is that we need to be more accepting and trusting of intuition. Even Malcolm Gladwell pushed this point in Blink: The Power of Thinking Without Thinking.
I do think this will escalate. Even if it doesn’t materialise into a full-scale war, people will take sides—they already have—and we’ll see more us-versus-them finger-pointing. Whilst I am not fully on the side of science, my propensity is to lean in that direction.
UPDATE: Even before I post this, I discover that I am behind the times with this prediction. In searching for a suitable image for this post, I find the book Science and Pseudoscience in Clinical Psychology, which calls out pseudoscience presented as fact not only in the obvious realm of pop psychology but in the offices of practising psychologists. I have not read it, so I am not in a position to recommend it. I may get a copy for myself, if only just to have it on hand.
Before I end this, I also wish to anticipate a point of disagreement. I’ve encountered practitioners of ‘scientific psychology’ who vehemently defend their vocation as science. Without addressing this directly, let’s just raise the point that applying the scientific method and maths to a discipline doesn’t graduate it to the status of a science. I can apply both to Tarot or haruspicy. In fact, this is how, in general, the social sciences became so-called soft sciences: ‘Look at me, mum. I’m using numbers’.
Where do you fall on the topic of intuition? Am I exaggerating and making mountains out of molehills?
Small Print (enlarged for texture): I’ve been approved as an Amazon.com Affiliate, so any links to Amazon are monetised. The purchase price to you remains unaffected, but I may be compensated for these purchases. If a purchase does not accrue to an affiliate, Amazon keeps the difference.
PSA: If you know of an affiliate you want to support—whether me or another—purchase through the affiliate. It costs you nothing and benefits the little guy.
I readily admit to being provocative, sometimes edgy, and polemic, but not without qualification. I keep coming across Strawson’s work, and I agree with much of it, though I feel he’s an edge case in the eyes of many. Even when discussing Strawson’s views with others, I get ‘the look’, this incredulous half-cocked quizzical glare.
In fact, I am reminded of an online altercation I had recently on the topic of identity. Cutting to the chase, here’s the big reveal:
You are intentionally being contrarian for no reason other than attempting to appear worldly and intellectually superior.
All living people have an identity. Every single human being. It’s not a philosophical argument. It’s basic vocabulary. Just because another culture has a different name for it, doesn’t make it untrue.
Out of courtesy, I’ll withhold the ‘identity’ of this individual, save to say they are an undergraduate.
I think it’s obvious that the notion of identity is self-referential. I am supposed to have a self with some concomitant identity, and so are you. According to the dictionary definition, shared by the student I engaged with, individuals possess some distinguishing character or personality. This is vague. Presumably, there needs to be some constellation of characteristics to make them distinguishing. I don’t suspect I’m allowed to be identified by non-distinguishing features.
I’m imagining a Ku Klux Klan meeting somewhere in America. Seeking ‘Sam’, I ask the doorman where I can find him. He knows Sam and conveys that he’s a white guy wearing a white sheet with a pillowcase with eye holes.
Never mind; perhaps I should have referenced penguins. I suppose that’s why they tag them. Is that their identity? It doesn’t feel right. I’m rambling.
Identity is predicated on the notion of the self. I’m partial to Strawson here, but I think I am somewhere in the middle. I understand that the standard narrative is that we construct a narrative to represent our self. This creates a heuristic. But life is not a story.
The problem with this concept is that people configure this narrative differently. Using video vocabulary as a reference, I can think of several approaches straightaway:
60 FPS (frames per second)
There is a memory component. I can also think of capturing only highlights, lowlights, or some combination. My event-triggered home security camera system captures certain movement and sound, but different cameras capture different frames under different conditions, for differing durations, at different intervals, and at different fidelity. Moreover, any given camera captures only certain aspects of any given frame.
Add to this false memories and misremembered content as well as conveyed narratives that you include in this composite. Examples from my life are stories I heard my mum telling her friends over and over as I was growing up. I have no native memory, but if I were to reconstruct a sense of self, I’d want to include them with native memories.
Memories of my early life are fragmented, and I don’t remember anything before age 5 or so. And even then, I can recall maybe 2 or 3 events unprompted. If asked if I remember this or that event, I may or may not, and it might be true or not.
Whilst I can’t claim the same lack of continuity as Strawson, I do feel that mine might be substantially weaker than that of the general public. Just reflecting off the top of my mind, I remember these select events:
Relating my judo lessons to my grade three classmates
Being bored to tears in grade four because I had been demoted to a ‘standard’ class, and being re-promoted to advanced placement classes when I ‘acted out’ due to sheer boredom
Being adopted by my stepdad and taking his last name in grade four
My mate, Carl, also being adopted and taking an entirely new name—not just the last name
Various domestic abuse episodes
Choosing the cornet as a grade five school band instrument because my dad wouldn’t allow me to play the drums
After this, I can start to remember more and more, but not significantly so. I can remember certain classmates and interactions, teachers, friends and neighbours. If I stitched it together as a single filmstrip, it would be underwhelming and wouldn’t likely make much sense to anyone else. And who could even identify dramatic effect?
In animation parlance, there is a notion of keyframes and tweening. The aforementioned events would serve as keyframes. Tweening is the interpolation between these frames that morphs them and creates the appearance of motion. This tweening never actually happened; it’s only realised during playback. How much of self and identity is this filler?
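The keyframe-and-tween idea can be sketched in a few lines of Python (my own illustrative example; the function name and values are invented). The keyframes are the real data points; everything between them is generated only at playback—much like the filler between remembered events:

```python
# Minimal tweening: linearly interpolate "in-between" frames from two keyframes.

def tween(key_a, key_b, steps):
    """Generate `steps` interpolated values strictly between two keyframe values."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # fraction of the way from key_a to key_b
        frames.append(key_a + (key_b - key_a) * t)
    return frames

# Two keyframes (0.0 and 1.0) and three invented in-between frames
print(tween(0.0, 1.0, 3))  # [0.25, 0.5, 0.75]
```

The in-between values look perfectly plausible, yet none of them was ever captured—which is rather the point.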
At this point, I am thinking that what I am doing is setting the stage to say that the self is incomplete and imperfect, but I am leaving room for its existence. And of course, it exists. It’s a phenomenon, and we’ve labelled it. What more can one ask for?
I’m still trying to put it all together, but my ‘I’ keeps changing. How can I tame Haidt’s elephant?
“The mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant.”
― Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion
This infographic helps to articulate various notions of consciousness. Not much more to add.
I think I am partial to emergent theories, though I favour property dualism over straight emergentism. Still, the dualism employed in property dualism doesn’t feel accurate. It’s not dual so much as it simply hasn’t been described yet.
I don’t think that physics can express or descriptively characterize everything that exists.
I want to accept the Buddhist notion, but I can’t seem to not differentiate.
I don’t feel I have enough information on the remainder of these. I could lean on the name and short description, but I feel this would necessarily establish me firmly in Dunning-Kruger territory. There may be even more hypotheses than are captured here.
Self and identity are cognitive heuristic constructions that allow us to make sense of the world and provide continuity, in the same way we create constellations from the positions of stars, imagining Ursa Major, the Little Dipper, or something else. The self and identity are essentially expressions of apophenia.
Consider this thought experiment about responsibility. Rob decides to rob a bank. He spends weeks casing the target location. He makes elaborate plans, drawing maps and noting routines and schedules. He gets a gun, and one day he follows through on his plans and successfully robs the bank, escaping with a large sum of money in a box with the name of the bank printed on it. Rob is not a seasoned criminal, so he leaves much incriminating evidence at the scene. To make it even more obvious, he drops his wallet at the scene of the crime, containing his driver’s licence with fingerprints and DNA on the licence and the other contents of his wallet. He leaves prints and DNA on the counter where he waited for the money. The wallet even contains a handwritten checklist of the steps to take to rob this bank—the address of the bank, the time, and the date. All of this leaves no doubt about who robbed the bank.
Using this evidence, the police show up at Rob’s apartment to arrest him. They knock on the door and identify themselves as law enforcement officers. Rob opens the door and invites them in. All of the purloined money is still in the box with the name of the bank printed on it. It’s on a table in plain sight next to the gun he used. All of his maps, plans, and surveillance notes are in the room, too. They read him his rights and arrest him. Things aren’t looking good for Rob.
Before I continue this narrative, ask yourself: is Rob responsible for robbing the bank? Let’s ignore the question of whether Rob has agency. For this example, I am willing to set aside my contention that no one has or can have agency. Besides, the court will continue to presume agency long after it’s been determined to be impossible, because agency is a necessary ingredient of law and jurisprudence.
Is Rob responsible? Should he be convicted of armed robbery and sentenced to incarceration? Let’s make it even easier. This isn’t Rob’s first offence. In fact, he’s been in prison before for some other crimes he committed. He’s no first-time offender. Why do you think that he’s responsible? More importantly, why should he be convicted and sentenced? What should his sentence be?
Consider that the money has been recovered, no one was injured, and Rob didn’t resist arrest. At first glance, we might consider both restorative and retributive justice. I’ve purposely made it easy to ignore restorative justice as all the money was recovered. This leaves us with retributive justice. What should happen to Rob? What would you do if you were the judge? Why? Hold that thought.
Let’s continue the narrative. All of the above happened, but I left out some details—because of course I did. After the heist, Rob returned home, lost his balance, and hit his head, rendering him an amnesiac—diagnosed with permanent retrograde and dissociative amnesia. Because of the retrograde amnesia, Rob can’t remember anything prior to hitting his head. Because of the dissociation, Rob has no recollection of anything about himself, not even his name. In fact, he now responds only to the name Ash. (This is where I debate whether to have Rob experience a gender-identity swap, but I convince myself to slow my roll and focus on one thought experiment at a time.)
To make this as obvious as I can consider, Ash has no recollection of Rob, robbing the bank, or anything about Rob. Ash doesn’t know Rob’s friends or family. Ostensibly Ash is a different person inhabiting former-Rob’s body. To make it even easier, Ash is not feigning this condition. So, let’s not try to use that as an out when I ask you to reconsider responsibility.
If my experience serves as a guide, were I to ask you again whether Rob was responsible and what his sentence should be, you would be committed to your same response, and for the same reasons, so I won’t ask again.
What I ask now is if Ash is responsible and what his sentence should be. Keep in mind that we should be able to ignore the restorative element and focus on the retributive aspect. What should happen to Ash? What would you do if you were the judge? Whether your response has changed or remained the same, why would you judge Ash this way?
Here are some considerations:
Retributive justice might serve as a lesson to other would-be offenders.
The public may not believe the amnesia excuse—even though you, as judge, are convinced thoroughly.
Ash does not believe he committed the crime and does not comprehend the charges.
Ash was surprised to discover the money and gun and was pondering how it got there and what to do with it when the police arrived at his apartment.
If released, Ash would not commit a crime in the future. (My thought experiment, my rules; the point being that Ash was no threat to society.)
From my perspective, Ash is a different person. Sentencing Ash is ostensibly the same as sentencing any person arbitrarily.
The purpose of this experiment is to exaggerate the concept of multiple selves. Some have argued that there is no self; there is just a constructed narrative stitching discrete selves together to create a continuous flow of self-ness.
I’m interested in hearing what you think. Is Ash responsible for Rob’s action, and why or why not? Let me know.