Milan Tresch Stories
9. The Moral Escape Hatch

One of the least discussed consequences of using AI appears not in the quality of information or thinking, nor in the speed of decision-making, but in the nature of responsibility itself. And it is happening quietly, almost imperceptibly.
In the past, people made decisions. Not always well, not always thoughtfully, but in a way that was theirs to own. They knew the decision belonged to them, and that they would have to live with its consequences. Mistakes were personal. Responsibility was tangible.
With the arrival of AI, however, a new intermediate space has emerged. Formally, the decision still remains in human hands, but part of the path leading to that decision can now be outsourced. Analyses, recommendations, alternatives, optimized scenarios—decision-making based on these has become widely accessible. And alongside this, a new possibility has appeared: distance from responsibility.
This is not about AI making decisions for us. It is about how much easier it has become to decide without fully feeling the weight of the decision. After all, “this was the most logical option.” “This is what the analysis showed.” “This was the least risky choice.”
Behind these sentences lies a subtle separation. As if the decision were no longer entirely our own responsibility.
In psychological terms, AI has already become a shield. It protects us from the uncomfortable feeling of standing alone with the consequences. If something goes wrong, the option is always there: “this is what the data suggested,” or “this was decided based on the analysis.”
This becomes especially dangerous in situations where decisions affect other people’s lives, work, safety, or future. In leadership decisions, layoffs, medical judgments, high-risk professional choices—where decisions are never purely rational, but must rest on layers of complexity that AI alone cannot grasp.
Over time, responsibility begins to blur. It becomes unclear where analysis ends and where human decision-making begins. Who bears the consequences. Who can be held accountable when something goes wrong.
It is important to state this clearly: AI does not force anyone to do anything. It does not take control, it does not decide autonomously. Shifting responsibility is a voluntary, convenience-based choice. A form of moral easing.
Human nature inclines us to hand over the burden of responsibility. Because decisions hurt less that way. They feel less personal, less guilt-inducing.
This is not a technological issue. It is a human one. A moral temptation—the possibility that the consequences of our decisions do not press down on us with their full weight.
In this context, the real question is not how accurate, fast, or useful AI is. The question is whether we recognize the moment when we are no longer asking it for help, but for justification.
Because responsibility cannot be automated.
Only blurred.
And every act of blurring has a cost.