When I was writing my review of Elbow Room, this categorical syllogism came to mind:
P1: All agents are responsible
P2: I am an agent
C: Therefore, I am responsible
Now I want to unpack it.
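Before unpacking it in prose, the syllogism itself is formally valid, whatever one thinks of its premises. A minimal Lean sketch, with Agent and Responsible as hypothetical predicates over an assumed domain of persons, makes the validity explicit:

```lean
-- A sketch only: Agent and Responsible are placeholder predicates,
-- not commitments about what agency or responsibility actually are.
variable (Person : Type) (Agent Responsible : Person → Prop)

-- P1: all agents are responsible; P2: I am an agent; C: I am responsible.
example (me : Person)
    (P1 : ∀ x, Agent x → Responsible x)
    (P2 : Agent me) :
    Responsible me :=
  P1 me P2
```

The proof is one step of universal instantiation plus modus ponens; all the philosophical work therefore falls on whether P1 and P2 are true, which is exactly what the discussion below takes up.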
The first premise is that all agents are responsible. Of course, this hinges on how one defines agent and responsibility. It also depends on scope: especially the scope of the agent, but to some extent also the scope of responsibility.
Leveraging the causa sui argument, the agent is a social construct who can be responsible only to the extent that s/he has been programmed, and only insofar as s/he can maintain and process that programming effectively; without bugs, to continue the parlance.
If the agent is immature or defective, expectations of responsibility are diminished.
If certain inputs were not given, there is no reason to assume a related command would be executed. This is why so much time and energy is spent on programming and evaluating children.
This first premise is predicated on the pathological need to blame. Unwritten behind the responsibility claim is that I feel compelled to blame. Blame requires responsibility, so if I want to blame someone, they must be responsible. In any given circumstance, I may feel the urge to blame anyone, so all agents [eligible people] are worthy of blame. There is no particular reason to exclude myself, so I too am blameworthy. What’s good for the goose is good for the gander, eh?
As P. F. Strawson said, even if moral responsibility couldn’t possibly exist, it would be invented because people need to blame. This is in line with Voltaire’s quip that if God did not exist, it would be necessary to invent him.
We can all look around and see how pervasive the god delusion is. Moral responsibility is even more insidious. In principle, moral gods were invented for just this purpose. An omnipresent judge was needed to keep the big house in check.
Where I Stand
From my perspective, I do feel that a person in the space of Dennett’s elbow room can have responsibility. Being a non-cognitivist, I have more difficulty accepting the arbitrary imposition of morality, but I understand the motivation behind it.
The problem I have is that no mechanisms are in place to ensure that the inputs and processes are all in order and that there are no superseding instructions. Moreover, if a superseding instruction does not comport with the will of the power structure, it will be marginalised or ignored. This is a limitation of morality being a social construct, and none of it gets past the ex nihilo problem that causa sui invokes, so we end up cursing the computer we’ve invented. O! monster of Frankenstein. O! Pygmalion.