Humans Ruin the Economy

Humans are ruining the economy.

This is the caption on the sign for this segment. The sign advertises a solution, which is to “Vote for DEMOCROBOT… The first party run by artificial intelligence”. It also promises to “give everyone a living wage of £1436.78 a week”.

I have been very vocal that humans governing humans is a bad idea from the start. By and large, humans are abysmal systems thinkers and easily get lost in complexity. This is why our governments and economies require so much external energy and course correction. Not only were they poorly designed and implemented, but they are also trying to manage a dynamic, complex system. It won't work.

What about bots and artificial intelligence? The above image was posted elsewhere, and a person commented that our governments are already filled with artificial intelligence. I argued that at best we've got pseudo-intelligence; at worst, we've got artificial pseudo-intelligence, API.

The challenge with AI is that it’s developed by humans with all of their faults and biases in-built. On the upside, at least in theory, rules could be created to afford consistency and escape political theatre. The same could be extended to the justice system, but I’ll not range there.

Part of the challenge is that the AI needs to optimise several factors at once, and not all of those factors are measurable or quantifiable. Any attempt to quantify them would tip the playing field one way or another. We might assume that an AI would at least be unreceptive to lobbying and meddling, but would this be the case? AI, or rather ML (machine learning) and DL (deep learning), relies on inputs. It wouldn't take long for interested think tanks to flood the sources of those inputs with misinformation. And if there is an information curator, we've got a principal-agent problem (who's watching the watcher?), and we may need to invoke Jeremy Bentham's Panopticon solution.
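To make the weighting problem concrete, here's a minimal sketch in Python. The policy names, factors, and scores are made up for illustration; the point is only that two equally defensible weightings of the same factors crown different winners before any "optimisation" has even begun.

```python
# Hypothetical sketch: scoring two policy options on factors that resist
# honest quantification. The choice of weights tips the playing field
# before the optimiser does any work at all.

policies = {
    "Policy A": {"gdp_growth": 0.9, "wellbeing": 0.3, "equality": 0.2},
    "Policy B": {"gdp_growth": 0.4, "wellbeing": 0.8, "equality": 0.7},
}

def score(policy, weights):
    """Collapse several factors into one number via a weighted sum."""
    return sum(weights[factor] * value for factor, value in policy.items())

# Two plausible weightings, two opposite winners.
growth_first = {"gdp_growth": 0.7, "wellbeing": 0.2, "equality": 0.1}
people_first = {"gdp_growth": 0.2, "wellbeing": 0.4, "equality": 0.4}

for weights in (growth_first, people_first):
    winner = max(policies, key=lambda name: score(policies[name], weights))
    print(weights, "->", winner)
```

Under the growth-first weighting, Policy A wins; under the people-first weighting, Policy B does. Whoever sets the weights has already decided the outcome.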

One might even argue that an open-source, independently audited system would work. But who would do the auditing, and whose interpretation and opinion would we trust? Then I think of Enron and WorldCom, where the auditors were paid and signed off on falsified results. I'd also argue that this would shift power from the political class to the tech class, but the political class already sits several tiers below the tech class, so the oligarchs still win.

This seems to be little more than a free-association rant, so I'll pile on one more reflection. Google and Facebook (or Meta) have ethics governance bodies that are summarily shunned or simply ignored when they point out that the parent company is behaving unethically or immorally. I wouldn't expect much difference here.

I need a bot to help write my posts. I’ll end here.

Man versus Machine

Human-designed systems seem to need a central orchestration mechanism, similar to the cognitive homunculus-observer that substance dualists can't seem to escape. Consciousness (for want of a better name) is more likely the result of an asynchronous web, with the brain operating as a predictive difference-and-categorisation engine rather than the coherent cognitive coalescence we attempt to model. Until we unblock our binary fixedness, we'll continue to fall short. Not even quantum computing will get us there if we can't escape our own cognitive limitations in this regard. Until then, this error-correcting mechanism will be as close to an approximation of an approximation as we can hope for.

The net-input function of this machine learning algorithm operates as a heuristic for human cognition. Human-created processes can't seem to reproduce that decoupled, asynchronous heuristic; instead, they end up with something that looks more like a railway switching terminal.
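For readers unfamiliar with the term, the net-input function is just the weighted sum an artificial neuron computes before its activation. A minimal sketch, using the generic textbook formulation rather than anything specific to the diagram referenced above:

```python
# Minimal sketch of a single artificial neuron's net-input function:
# a weighted sum of the inputs plus a bias, squashed by an activation.

import math

def net_input(inputs, weights, bias):
    """Net input: z = sum(w_i * x_i) + b."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def activation(z):
    """Sigmoid maps the net input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Everything funnels through one scalar, z, before a decision is made:
# the railway switching terminal rather than an asynchronous web.
z = net_input(inputs=[0.2, 0.7, 0.1], weights=[0.4, -0.6, 0.9], bias=0.05)
print(z, activation(z))
```

The funnelling of every signal through a single aggregated value is exactly the centralised bottleneck the paragraph above is gesturing at.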

Cover photo: Railroad tracks stretch toward Chicago's skyline at Metra's A2 switching station on March 29, 2019. (Antonio Perez/Chicago Tribune)