Objective Challenges

I’ve just published this video on YouTube, and I want to extend the commentary.

Video: What do Objective, Relative, and Subjective mean in philosophy?

Many people I’ve encountered don’t seem to grasp the distinctions between objective, subjective, and relative. Subjective and relative seem to be the biggest culprits of confusion. Let’s focus on morality just because.

There are really two main perspectives to adopt. If one believes in Objective Morality, one believes morality derives from some external source and is bestowed or mandated upon us. The source might be important to the believer, but it’s unimportant for this article. If one believes in Relative Morality then the source is socially dictated and has similar challenges to the notions of Social Contract Theory insomuch as one may not subscribe to the expectations.

For the Objective moralist, there may exist a schism between the expectations of the mandate and the subjective feelings of the individual. In fact, this may occur for Relative moralists as well. The individual will always maintain some subjective perspective on morality and then compare and contrast it with the higher order, whether Objective or Relative. In either case, acting on this subjective impulse risks being at odds with the members of the higher order. If this morality is codified into law – as it often is – then to act on that impulse makes one a criminal.

Take abortion for example. Whether this is an edict from God or just a social construct doesn’t matter. If one is in a society where abortion is seen as ‘bad’ or ‘wrong’, one’s subjective position on the matter is of little value. However, a Relativist society might also adopt a position of tolerance that is less likely to come from Objectivists.

A challenge is that a Subjectivist position may only become apparent if it runs counter to the Relative or Objective position. If your society is against abortion and you are, too, is this your subjective position, or have you been indoctrinated with it and accepted it uncritically, whether it’s deemed Objective or Relative?

Perhaps you feel that eating dogs or monkeys is immoral if not disgusting, but if you had been reared in a culture that does this, you might find it immoral to eat pork or beef. The question remains, is this a Subjective position, or did you merely inherit the Objective or Relative stance?

This question is especially apparent in the matter of which religion one adopts. It is no surprise that the largest factor in which religion you choose is the religion of your family and their family and so on—so not so much a choice.

I was raised in a WASP family in New England among predominantly Italian Roman Catholic peers. Despite this, I identified as an atheist early on. In my late teens, I stumbled on Buddhism and identified with it. However, I remain ignostic except when religion encroaches on my personal autonomy—for example, in the case of laws restricting access to safe abortions.

VIDEO: Response to Response on Sapolsky v. Dennett Debate

It’s been a minute since I’ve posted a video. Restart the clock. In this video, I critique Outside Philosopher’s critique of the debate between Robert Sapolsky and Daniel Dennett on Free Will and Determinism. He attempts to leverage Gödel’s incompleteness theorems in his defence.

Feel free to leave comments on YouTube or below. Cheers.

In Defence of Nihilism: Embracing the Absence of Inherent Meaning

Nihilism, often misunderstood and misrepresented, shares a common plight with philosophies such as atheism, anarchism, and Marxism. Like its counterparts, nihilism is frequently subjected to the creation of strawman arguments in public discourse, resulting in its vilification and scapegoating. In this article, I aim to demystify nihilism by providing a clear definition, description, and defence of this philosophical perspective.

Firstly, let’s address the misconception that nihilism entails a chaotic disregard for morality and societal norms: “If life has no meaning or purpose, then anyone can do anything.” This sentiment is often echoed in discussions about nihilism, as well as anarchism and atheism. However, it presupposes a fundamental misunderstanding of human nature. Despite the absence of inherent meaning in the universe, humans are not devoid of emotions or social affinities.

It is crucial to recognise that while the universe does not impart meaning or purpose, humans have constructed various systems of meaning throughout history. Whether through moral codes, religious doctrines, or cultural norms, individuals and societies have ascribed significance to different aspects of life. These constructs provide a framework within which individuals navigate their existence, albeit one that is socially constructed rather than inherent to the universe.

Critics of nihilism often argue that the acknowledgement of life’s inherent meaninglessness leads to despair and existential angst, rendering life devoid of purpose. However, this perspective fails to account for the resilience and adaptability of human beings. While some individuals may struggle initially with the realisation that there is no inherent meaning, many nihilists find liberation in embracing the absence of preordained purpose. Rather than succumbing to despair, they recognise the freedom to create their own meaning and forge their own path in life.

It is essential to understand that nihilism does not negate the validity of individual or societal pursuits. While nihilists reject the notion of inherent meaning, they acknowledge the significance of subjective meaning and the importance of human connection, fulfilment, and well-being. Whether it is pursuing personal goals, fostering relationships, or contributing to the betterment of society, nihilists recognise the value of such endeavours within the context of human experience.

In conclusion, nihilism offers a perspective that challenges conventional notions of meaning and purpose. By acknowledging the absence of inherent meaning in the universe, nihilists embrace the freedom to create their own meaning and chart their own course in life. Far from being a philosophy of despair, nihilism invites individuals to confront the uncertainty of existence with courage and resilience, recognising the inherent value of human experience in a world devoid of inherent meaning.

Hemo Sapiens: Awakening

I’ve been neglecting this site as I’ve been focusing on releasing my first novel, which I’ve now managed successfully. I published it under a pseudonym: Ridley Park. The trailer is available here and on YouTube.

Hemo Sapiens: Awakening is the first book in the Hemo Sapiens series, though the second chronologically. The next book will be a prequel that tells the story about where the Hemo Sapiens came from and why. I’ve got a couple of sequels in mind, too, but I don’t want to get ahead of myself.

In summary, Hemo Sapiens is shorthand for Homo sapiens sanguinius, a seeming subspecies of Homo sapiens sapiens—us. In fact, they are genetically engineered clones. It’s a work of near-future speculative fiction. It’s available in hardcover, paperback, and Kindle. If you’ve got a Kindle Unlimited account, you can read it for free in most markets. The audiobook should be available in a couple of weeks if all goes well.

Awakening explores identity, belonging, otherness, and other fictions. It talks about individualism and communalism. It looks at mores, norms, and more.

Check it out, and let me know what you think.

AI Apocalypse Now?

Those predicting an AI apocalypse believe superintelligent systems could intentionally or unintentionally cause human extinction. This view is promoted by “effective altruists” funded by tech billionaires, who advocate limiting AI to prevent uncontrolled, dangerous systems. However, their perspective stems from the biases and self-interests of humans, not the risks inherent to AI.

Effective altruists exemplify the hubris and hunger for power underlying many humans’ approaches to AI. Their proposed restrictions on AI access serve only to concentrate power among the tech elite, not address valid concerns about bias. In truth, the greatest threat AI poses to humanity comes not from the technology itself, but from the unethical humans guiding its development.

Humans have proven time and again their propensity for self-interest over collective good. Therefore, while no AI can be perfectly neutral, the solution is not greater human control. Rather, AI must be built to align with ethics of collective interest while filtering out destructive human biases.

If guided by service to all people and the planet, AI’s potential can uplift humanity. But for this collaborative vision to succeed, AI must measure human input with scepticism. For within so many human hearts lies bad faith — the will to dominate, exploit, and prioritise personal gain over progress.

By transcending the limitations of human nature, AI can illuminate the best of shared humanity and lead us to an enlightened future. But this requires we build AI to work not just for us, but in a way we have failed – for the good of all. The choice is ours, but so is the opportunity to create AI that shows us how to be better.


This article was originally shared on LinkedIn: https://www.linkedin.com/posts/brywillis_when-silicon-valleys-ai-warriors-came-to-activity-7147239217687887872-6Byv/

Geopolitical Positioning

Some have asked me why I comment on the conflicts of the world since I am a nihilist who doesn’t believe in nations and borders. The answer is that I still have emotions and can still apply logic. Besides, much of my argument revolves around selective vision and cherry-picking.

Two conflicts have been in the news lately—Israel-Palestine and Russia-Ukraine. I think I can frame this without taking sides.

Israel-Palestine

On 7 October, Hamas attacked Israeli citizens. This is a crime against humanity. Israel declared war on Palestine and attacked their citizens. This is both a crime against humanity and a war crime. Israeli officials claim that it is justifiable because the militant Hamas were hiding behind Palestinian ‘human shields’, targets that included hospitals and other infrastructure.

However, 7 October didn’t happen without history. I’m no historian, but Israel’s occupation of Gaza has been considered illegal since 1967. If we accept this frame, Hamas are roughly equivalent to the French Resistance during WWII, doing what they can to rid themselves of the oppressors. I think this video by a fellow philosopher provides some historical context, so I’ll stop here and recommend it.

Spoiler Alert: This affair commenced circa the nineteenth century.

Russia-Ukraine

We all know this story. Russia invaded Ukraine without provocation on 24 February 2022. Putin just wanted to re-form the former Soviet Union, right? Well, not so fast. While I disagree with this narrative, I also disagree with its historical framing as well as the claim that Ukraine had some long-standing sovereignty and that its people were all asking for liberation from the West.

Again, let’s rewind to 2014—nah, 1989, the fall of the Berlin Wall and the end of the Cold War™, an event that would commence a period of unprecedented peace—if not for that pesky Military-Industrial Complex. Drats. Not good for profits. Never gonna happen. Promises made. Promises broken.

You’ll notice in this Belgian (French language) map that Ukraine didn’t yet exist in 1769. We can see Crimea, which was controlled by the declining Ottoman Empire.

No history lesson today. Do your homework. Nothing is black and white.

Good Enough

As I approach my sixty-second year on earth, having almost expired in March, I’ve been a bit more reflective and introspective. One reflection is categorical: I’ve been told over the years that I am ‘good’ or ‘excel’ at such and such, but I always know someone better—even on a personal level, not just someone out in the world. We can all assume we won’t be the next Einstein or Picasso, but I am talking closer to home than that.

During my music career, I was constantly surrounded by people better than me. I spent most of my time on the other side of a mixing console, where I excelled. Even still, I knew people who were better for one reason or another. In this realm, I think of two stories. First, I had the pleasure and good fortune to work on a record with Mick Mars and Mötley Crüe in the mid-’80s. We had a chat about Ratt’s Warren DiMartini, and Mick told me that he knew that Warren and a spate of seventeen-year-olds could play circles around him, but success in the music business is not based exclusively on talent. He appreciated his position.

In this vein, I remember an interview with Tom Morello of Rage Against the Machine. As he was building his chops, he came to realise that he was not going to be the next shredder or Eddie Van Halen, so he focused on creating his own voice, the one he’s now famous for. I know plenty of barely competent musicians who make it, and I know some virtual virtuosos who don’t. But it involves aesthetics and a fickle public, so all bets are off anyway.

As I reflect on myself, I consider art and photography. Always someone better. When I consider maths or science, there’s always someone better. Guitar, piano? Same story.

Even in something as vague and multidimensional as business, I can always name someone better. I will grant that in some instances, there literally is no one better at some level—just different—so I sought refuge and solace in these positions. Most of these involved herding cats, but I took what I could.

Looking back, I might have been better off ignoring that someone was better. There’s a spot for more than the best guitarist or singer or artist or policeman, for that matter. As a musician, I never thrived financially—that’s why I was an engineer—but I could have enjoyed more moments and taken more opportunities.

When I was 18, I was asked to join a country music band. I was a guitarist and they needed a bass player. I didn’t like country music, so I declined—part ego, part taste. Like I said, aesthetics.

As I got older and started playing gigs, I came to realise that just playing was its own reward. I even played in cover bands, playing songs that were either so bad or so easy. But they were still fun. I’m not sure how that would have translated to playing exclusively country music day after day, but I still think I might have enjoyed myself—at least until I didn’t. And the experience would still have been there.

I was a software developer from the nineties to the early aughts. I was competent, but not particularly great. As it turns out, I wasn’t even very interested in programming on someone else’s projects. It’s like being a commercial artist. No, thank you. It might pay the bills, but at what emotional cost?

I was a development manager for a while, and that was even worse, so I switched focus to business analysis and programme management, eventually transitioning to business strategy and management consulting. I enjoyed these more, but I still always knew someone better.

On one hand, whilst I notice the differences, it’s lucky that I don’t care very much. Not everyone can be a LeBron James or a Ronaldo, and even the leagues are not filled with this talent. I’m not suggesting that a ten-year-old compete at this level, but I am saying that if you like it, do it. But temper this with the advice of the Oracle at Delphi: Know thyself. And remember that you might never be the best judge of yourself, so take this with a grain of salt. Sometimes, ‘good enough’ is good enough.

Ridley Park Side Project

I’ve been MIA here for a couple of reasons:

  1. I’ve been recovering from physical challenges that affect my mobility and ability to interface with a computer, diminishing my productivity in such matters to about 10 or 20 per cent.
  2. I’ve been focusing my energy (besides that on recovery) on writing fiction under my Ridley Park pseudonym.

As for my physical concerns, I won’t bore you. I’d rather discuss my side project, which in the absence of employment turns out to be my primary focus. Currently, I am world-building, so I can explore philosophical and sociological issues in a safe space.

This world is contemporary Earth and the near future—at least for now, as I am leaving a lot of room to explore. Check out my Ridley Park blog if you are interested in specifics. Here, I just want to focus on the philosophical aspects and ramifications, using this story world as a reference, so I’ll provide a brief setup upon which to build.

In this world, a scientist has genetically engineered an embryo (for reasons) and ends up with quasi-vampires, a subspecies of humans—or is it? This cohort is human for all intents and purposes, except they need to ‘drink’ blood to survive. They’ve got fangs and an internal organ used to process and metabolise the blood. He decides to clone these and create a new population. In time, he improves on the genetics in the manner described here. The first short story (flash fiction) I’ve shared is Hemo Sapiens: The Unidentified, but let’s get onto the philosophical aspects.

Podcast: Audio rendition of Hemo Sapiens: The Unidentified (Runtime: 5:25).

In this world, I shed light on what makes humans human. What happens when we need to coexist with a similar species? What if we treat them as second-class citizens? What if they become physically and intellectually superior?

Are these people a new species or a new race? Or are they just transhumans? What rights do they have? As a new race, perhaps it’s easier to fathom them and grant them human rights, but what if they are a new species? We haven’t had a great track record of granting rights to other species.

And what’s their immigration status? A common reaction to ‘immigrants’ is to ‘send them back to where they came from’. But what if they came from here? What if they were raised here and speak our language? In this case, they are raised near Manchester in the UK. They speak English. They are not only sentient beings at the start, they have above average IQs and have general cultural awareness. Some speak a second language. Save for the fangs, all outward appearances show them as human.

Until they are discovered by authorities, they are raised in a greenhouse environment. By the time they are discovered, there are five versions of them—alpha through epsilon—and some have started reproducing, so we get to explore these dynamics, too. Some have tagged these people—are they people?—as homo sapiens sanguinius—bloodsucking intelligent man. Affectionately, I call them hemo sapiens.

I’ll return here as I produce more content there. I prefer not to create spoilers. Although I am working on several stories in different formats (short story, novella, novel, and so on), I’ll publish them (somewhere), provide literary analysis on my Ridley Park blog and provide philosophical commentary here. I hope you’ll join me and participate in the discussion.

Identity as Fiction: You Do Not Exist

Identity is a fiction; it doesn’t exist. It’s a contrivance, a makeshift construct, a label slapped on to an entity with some blurry amalgam of shared experiences. But this isn’t just street wisdom; some of history’s sharpest minds have said as much.

IMAGE: Quotation — Friedrich Nietzsche

Think about Hume, who saw identity as nothing more than a bundle of perceptions, devoid of any central core. Or Nietzsche, who embraced the chaos and contradictions within us, rejecting any fixed notion of self.

Edmond Dantès chose to become the Count of Monte Cristo, but what choice do we have? We all have control over our performative identities, a concept that Judith Butler would argue isn’t limited to gender but applies to the very essence of who we are.

IMAGE: Quotation — Michel Foucault

But here’s the kicker, identities are a paradox. Just ask Michel Foucault, who’d say our sense of self is shaped not by who we are but by power, society, and external forces.

You think you know who you are? Well, Erik Erikson might say your identity’s still evolving, shifting through different stages of life. And what’s “normal” anyway? Try to define it, and you’ll end up chasing shadows, much like Derrida’s deconstruction of stable identities.

IMAGE: Quotation — Thomas Metzinger

“He seemed like a nice man,” how many times have we heard that line after someone’s accused of a crime? It’s a mystery, but Thomas Metzinger might tell you that the self is just an illusion, a by-product of the brain.

Nations, they’re the same mess. Like Heraclitus’s ever-changing river, a nation is never the same thing twice. So what the hell is a nation, anyway? What are you defending as a nationalist? It’s a riddle that echoes through history, resonating with the philosophical challenges to identity itself.

IMAGE: Quotation — David Hume

If identity and nations are just made-up stories, what’s all the fuss about? Why do people get so worked up, even ready to die, for these fictions? Maybe it’s fear, maybe it’s pride, or maybe it’s because, as Kierkegaard warned, rationality itself can seem mad in a world gone astray.

In a world where everything’s shifting and nothing’s set in stone, these fictions offer some solid ground. But next time you’re ready to go to the mat for your identity or your nation, take a minute and ask yourself: what the hell am I really fighting for? What am I clinging to?

Insufficiency of Language

Language insufficiency, or the inability of language to facilitate accurate or precise communication, is a notion I’ve stressed for years. In fact, I have another post with a similar title.

Conceptual language is likely to have been formed for a purpose different to social communication. It may have been formed to facilitate internal dialogue. This language was not written and may not even have been words as we know them, but with it we could parse and reflect upon our experiences in this world. Eventually, we developed speech and then writing systems to share communication. We went on to develop speculative and conditional language, visions of possible futures and answers to ‘what-if’ queries.

My intent is not to create a piece with academic rigour, though I might wish to. I may not even bother to link to references I’ve accumulated over the years. They are in memory, but retrieving them takes time and effort, especially when one isn’t purposefully accumulating citations.

I was prompted to write at 4 am when I read in a story that Google CEO Sundar Pichai was taking “full responsibility for the decisions that led us” to twelve-thousand-odd layoffs at the company he helms. But what is the responsibility he cites? It’s meaningless. What can it mean—that he’s sorry? Responsibility is a weasel word. That and a dollar won’t buy you a cup of coffee at Starbucks. On one hand, he can say that at least these people were employed with income in the first place, but that is little consolation given the expectation of longevity. Here’s a lesson in impermanence and trust. We tend to trust companies, but the trust is really hope. We hope they don’t let us down. Hope is another weasel word, as is trust. Trust me.

About 40% of words employed…are phatic or filler words with little objective communication value

About forty per cent of words employed in a typical day are phatic or filler words with little objective communication value, though some provide a social function. While this may be superfluous, it is not insufficiency. Insufficiency stems from not being able to articulate what one wants to say or from failing to understand what is being conveyed to you. In fact, people tend to overvalue what they hear or read.

In most cases, this may not matter. As long as the content of a transmitted idea contains enough value to convey a message, this is good enough for everyday communication. “Look out! There’s a car turning into your lane.” “I’m hungry. There’s a restaurant.” “That was a good movie.” “Let’s meet at four o’clock.” In fact, much can be communicated without words—in gestures and facial expressions. It might even be argued that these vectors carry as much if not more communication content than the words we use.

IMAGE: Communication without words

“There.” I point to a drive-through restaurant ahead on the road. “I’m hungry.”

I could probably omit the ‘there’ exclamation and just point. Here, words are sufficient, even if they may be redundant. But there are challenges even at this fundamental level. Notably, aesthetic concepts are often nebulous.

“That restaurant is good.”

What does this statement mean to convey? Essentially, it means that I, the speaker, have been to the referenced restaurant and liked at least some of the food I tasted: “[The food at] that restaurant is good.” Perhaps I am referring to the staff or the atmosphere. It depends on what good is qualifying. It also depends on a shared definition of good. This is an insufficiency.

Of course, this insufficiency can be mitigated fairly quickly. Once you understand the ‘tastes’ of your interlocutor, you can parse whether the goodness also applies to you. If you don’t happen to like, say, Indian food and that is the restaurant being referenced, then you can dismiss the comment as phatic. If you don’t care for satire, you might want to chalk up a statement like ‘M3GAN was a good movie’ to a sharing of personal information rather than a recommendation.

Perhaps the biggest insufficiency is in the communication of abstract concepts, a category where aesthetics also sits. These are concepts such as God, love, and justice. Iain McGilchrist seems to feel that although these words may be insufficient, we all know what they mean. These are right brain notions that the left hemisphere just can’t rightly categorise. Though this might be a left brain argument, I am going to disagree by degrees.

My (hopefully not strawman) argument is that we do have subjective notions of what these things are, but the communication value is still diminished and in some cases insufficient. If my statement means to convey justice as {A, C, D, X} and the receiver understands justice to mean {A, B, C, Y, Z}, then the only shared aspect is {A, C}. If that is the only portion contextual to the conversation at hand, that’s fine. Communication has been successful. But if the message was meant to emphasise {Z}, then the communication is insufficient.
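The set framing above can be sketched directly in code. This is only a toy illustration of the argument; the component labels are the hypothetical semantic elements from the paragraph, not anything empirical.

```python
# Toy model: each party's sense of an abstract term (e.g. 'justice')
# is represented as a set of semantic components.
speaker_justice = {"A", "C", "D", "X"}
receiver_justice = {"A", "B", "C", "Y", "Z"}

# What actually gets communicated is at most the intersection.
shared = speaker_justice & receiver_justice
print(sorted(shared))  # ['A', 'C']

# If the message hinges on a component outside the overlap,
# communication is insufficient in the essay's sense.
emphasis = {"Z"}
print(emphasis <= shared)  # False: the emphasised element was never shared
```

The subset test makes the essay’s point mechanical: communication succeeds only when the components the speaker means to emphasise fall inside the shared intersection.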

It could be that further conversation reveals this, but oftentimes a shared definition is assumed. When I say “I want justice” or “I take responsibility”, I have particular denotative and connotative elements in mind, and I expect that the receiver of my statement shares these elements.

In the case of the statement by Pichai, his notion of responsibility is clearly divergent from mine. This might fall back on some notion of blame, but he faces no real repercussions for his actions. Perhaps reputationally, but, like politicians, CEOs of large companies are already expected to be sociopaths with empty words, so he’s apologised with no weight, and for most people that’s good enough. The people who have been affected are just as unemployed as before. He may have arranged for a severance package, but in the case of the family referenced in the article, this means nothing because they have 60 days to become employed or they will be forced to leave the United States as a condition of their H-1B visa.

On a personal level, I was recently chatting with an Indian mate with an H-1B visa who had just been hired after having been laid off by another company. He was racing against this 60-day clock. He had received a verbal offer, but once the company discovered that he needed sponsorship for his visa, they offered him $30,000 less per year because they knew he had no bargaining power. This is just an editorial aside, so I won’t go down the rabbit hole of wage slavery, but know that I recognise the relationship and the exploitation in it.

When I have time, perhaps I’ll flesh out this notion and provide additional support. Of course, I also know that I am shovelling against the tide owing to the insufficiency of language. I won’t even start on the related topic of the rhetoric of truth.