Milan Tresch Stories
10. When the Question Is No Longer AI, but Ourselves

By the end of this second series, we reached a point where it became almost impossible to talk about AI without talking about people. Because everything we examined along the way—comfort, acceleration, blurred responsibility, fading personal style, outsourced decisions—was never really a technological issue. It was human.
AI does not seduce us on its own. It does not coerce, it does not decide for us. It wants nothing, it strives for neither good nor evil. It simply exists, and it amplifies whatever we place in front of it. If it is given a task grounded in careful thinking, it accelerates the response to that thinking. If it receives uncertainty, it packages it back in confident form. If it is fed knowledge, it helps organize it. If what it gets is incomplete, it temporarily conceals that incompleteness.
The real fault line does not run between using AI and not using it. It runs through how we use it, and whether we are willing to keep control and responsibility in our own hands as we do. Whether doubt remains present. Whether we are willing to accept slowness, the need for verification, and responsibility for what goes out into the world under our name.
In the previous chapters, we saw what happens when people begin to confuse a tool with a decision-maker. When they stop questioning, stop checking, stop arguing. When the system’s answer is no longer a starting point, but a final destination. At that moment, it is not AI that becomes dangerous—it is the person who becomes passive. And that is where the weight of decisions begins to shift, invisibly but irreversibly, in the wrong direction.
Perhaps the most uncomfortable realization is that there is no one else to blame. One day, it will not be enough to say, “the system made a mistake.” You cannot hide behind the tool when consequences appear. The world will not hold the algorithm accountable. It will hold accountable the person who signed it, sent it, published it, approved it.
This series does not argue that AI should be feared. It argues that AI must be taken seriously. With the same seriousness we give a dangerous tool, a powerful drug, or anything capable of producing large effects. We need to know when it helps and when it harms, when it accelerates and when it conceals, when it supports and when it replaces—and that last one must not be allowed.
If these ten texts have a single takeaway, it is not a technical one. It is this: the freedom to think cannot be outsourced. Responsibility for understanding cannot be delegated. And human presence—with all its imperfections, hesitations, and slowness—is not a weakness, but the only real guarantee that we are still in control.
There is no return to the old world. AI will stay with us, integrate itself, reshape things. But what our work becomes, what our decisions become, what responsibility means—those remain our choice.
This was the end of the second series. Not as a conclusion, but as a boundary line. From this point on, the question is no longer what AI can do.
The question is who we choose to remain while standing beside it.