Milan Tresch Stories
Introduction
In the previous series, we explored how AI changes the way people think, decide, and relate to responsibility. We looked at what this tool does to us as its use becomes increasingly natural.
Now we change direction.
From here on, we approach the subject not from the tool itself, but from the human being who uses it. The question is no longer what AI is capable of, but what we choose to use it for. What we want to use it for. And what we should not.
This series examines the beneficial and harmful uses of artificial intelligence, but not in a technical sense. Not through rules, legal frameworks, or developer intentions. Instead, through everyday human situations. Decisions. Effects. Consequences.
AI is neither moral nor immoral on its own. It amplifies. The real question is what it amplifies within us. Help or avoidance. Responsibility or excuse. Clarity or self-justification.
In the following chapters, we will look at concrete examples of where AI becomes genuine support, and where it quietly begins to cause harm. Not because it is a bad tool, but because human intention is never neutral.
This is where the next journey begins.