It appears you are interested in the workings of morality from a practical perspective.
I don’t feel there is another perspective. Morality is a human construct of language; in particular, it’s a social construct. By definition, language is arbitrary (though not capricious), and anything within language is necessarily arbitrary, too.
Morality and its cousin, ethics, are essentially normative rules of conduct, so they are, foundationally speaking, pragmatic: what is the best system for a society to flourish? This implies some goal or goals. In essence, it becomes an optimisation problem that involves systems thinking, and humans are particularly poor at systems thinking, especially when faced with complexity. Along our evolutionary path, managing complexity was not a significant survival factor. Kahneman and Tversky identified two cognitive systems: System I is the automatic, reflexive system, while System II handles deliberate, analytical functions. System II requires a lot of energy and upkeep, and for most humans it is not well developed or maintained. System I appears to be the arbiter, holding the right of first refusal, so it attempts to solve heuristically what ought to be assessed analytically.
Why the ramble? Morality is a complex system with boundary conditions and dynamics: precisely what humans are poor at. Just defining the boundaries is a challenge. Perhaps this is why some people simply define the boundary as universal, but this creates its own challenges. And many reject universal morality, opting for smaller moral domains: nations, states, municipalities, communities, schools, churches, families, peer groups, couples. For Hegel and Nietzsche, the main concern was authenticity for the individual, and Nietzsche didn’t believe there was any one-size-fits-all morality to begin with: I could operate under different moral imperatives than you. This is most evident in his master–herd distinction.
But since I reject the notion of objective morality, it is necessarily normative, relative, or subjective. Here, we end up trying to optimise an equilibrium model, but the trick is deciding what to optimise.
In the modern age, the consensus is to maximise happiness or utility, but, as I’ve mentioned before, we are still left in a quandary: do we optimise personal happiness or group happiness, and if the latter, which group? A nationalist might draw the boundary around the nation, maximising happiness within the borders without concern for whether ‘people are starving in Africa’.
Of course, there are multiple dimensions to any morality. If there are two people, one holding $100 and the other who would be happier to take it, a pure utility-optimisation model offers nothing to defend not taking it. Prospect theory, within behavioural economics, provides some rationale inasmuch as humans tend to weigh losses more heavily than gains, so the $100 gained is perceived as having less value than the same $100 lost. This is further complicated if the person receiving it is poor (perhaps they have no money at all) and the person relinquishing it is Jeff Bezos, to whom it is nothing more than a rounding error.
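The loss-aversion point can be made concrete with the value function Tversky and Kahneman estimated in 1992. The parameters below are their published estimates; this is a toy illustration of the asymmetry, not part of any argument about who should keep the $100:

```python
# Prospect-theory value function with Tversky & Kahneman's (1992) parameter estimates.
ALPHA, BETA, LAM = 0.88, 0.88, 2.25

def value(x):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** ALPHA            # concave over gains
    return -LAM * ((-x) ** BETA)     # convex and steeper over losses

gain = value(100)    # subjective value of receiving $100
loss = value(-100)   # subjective value of losing $100
# Losses loom larger than gains: |value(-100)| > value(100)
```

On these estimates the felt magnitude of losing $100 is more than double that of gaining it, which is the sense in which the transfer is not value-neutral.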
I mention these because we could have a positive-sum game (borrowing from game theory): the gain to the poor person would outweigh the loss, yielding a net gain overall. And yet a Pareto criterion disallows the transfer, since it leaves one party worse off. Of course, happiness and utility cannot be measured in the first place, and they are not persistent in the second.
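The tension between positive-sum and Pareto can be sketched numerically. Assume (purely for illustration) a logarithmic, diminishing-marginal utility of wealth and the hypothetical wealth levels below; the point is only that aggregate utility can rise while one party is still strictly worse off:

```python
import math

def utility(wealth):
    # Illustrative diminishing-marginal-utility function: an assumption, not a measurement.
    return math.log(wealth)

poor_before, rich_before = 10, 100_000_000_000   # hypothetical wealth levels
transfer = 100                                   # the $100 from the example

poor_after = poor_before + transfer
rich_after = rich_before - transfer

total_before = utility(poor_before) + utility(rich_before)
total_after = utility(poor_after) + utility(rich_after)

# Positive-sum: aggregate utility rises, because the poor person's utility gain
# dwarfs the rich person's utility loss.
# Not Pareto-improving: the rich person's utility still strictly decreases.
```

The transfer passes a utilitarian sum test and fails the Pareto test, which is exactly the clash described above.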
My point in all this is that humans are woefully ill-equipped to grasp these topics, and so most of this is not much more substantial than mental masturbation. And this leads me full circle to my original contention that morality and so-called moral truths are nothing more than rhetoric: the ability to persuade that your position is the truest truth.