From Here On, the Tool Is No Longer the Question

In the previous chapters, we examined the impact of AI: how it reshapes decision-making, responsibility, our perception of time, and the rhythm of thinking. At this point, however, the perspective shifts. From here on, the crucial question is no longer what AI does to humans, but what humans do through AI.

AI in itself is neither good nor bad. It is not a human entity and has no intentions. It does not seek to help, nor does it seek to cause harm. Whatever it produces is always a reflection of the person who uses it.

As a tool, AI is capable of making knowledge accessible, structuring information, assisting decision-making, simplifying complex processes, and even supporting healing or restorative efforts. At the same time, the very same tool can be used to manipulate, mislead, distance, monitor, or exploit. The difference does not lie in the technology itself. It lies in the human being using it.

This is the point where the use of AI moves beyond the question of personal efficiency and enters the realm of ethics. Not because AI is ethical, but because human decisions are. AI does not make decisions on our behalf. However, it amplifies the decisions we are already inclined to make. It accelerates them, optimizes them, and often legitimizes them. In doing so, it amplifies human intentions as well, both constructive and destructive.

The real danger is not that AI might “develop in the wrong direction.” The real danger is that we avoid confronting what we are doing through it. When negative consequences emerge, it is easy to point to the tool. It is easy to say that the algorithm led us there, that the analysis justified the outcome, or that this was the optimal solution. Such statements do not remove responsibility; they merely conceal it.

The line between positive and negative use is rarely clear-cut. It is not a matter of black and white. Often, the same decision carries both supportive and harmful consequences at once. This is precisely why the issue cannot be resolved through technological debate alone. What is required is a human-centered reflection on where boundaries should be drawn—not only legally, and not solely through regulations, but internally.

AI does not impose a new value system on us, nor does it provide one. Instead, it reveals, faster and with greater precision, the moral framework by which we have already been operating. It exposes our existing standards rather than creating new ones.

From this point onward, the central question is no longer what AI is capable of doing, but what we choose to do with it. What we reinforce through its use, what we allow to unfold, and what we refuse to do, even when it would be possible.

With this chapter, the first level of analysis comes to a close. From here on, the focus shifts away from the tool itself and toward the human being behind it: toward benevolent and malicious use, toward power, temptation, and self-justification, and toward what it means to remain human in an era where there is a tool for almost everything, but not always the restraint or discernment required to use it responsibly.

Here, the next series begins.

© 2025 Milan Tresch Stories. All rights reserved worldwide.