Milan Tresch Stories
9. When You Can No Longer Blame Everything on the Tool

At some point it became clear to me that working with AI does not become dangerous where most people look for the problem. Not at the moment of use, not in the act of asking questions. The real danger begins where a person starts handing over the responsibility for understanding itself.
AI works from data. This is not a metaphor, but a fact. It does not “think”; it matches patterns in the material it was trained on and returns statistically probable answers. That is precisely why it can appear so convincing - confident, coherent, authoritative. Sometimes far too much so.
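To make “statistically probable” concrete, here is a minimal toy sketch in Python. It is not how any real model is built, and the prompt, the counts, and the answers are invented for illustration. The point it shows is that the sampling step only knows frequency, not truth - if the source material is wrong, the most probable answer comes out wrong with exactly the same confidence.

```python
import random

# Toy "model": continuation counts gathered from some body of text.
# The numbers are hypothetical, purely for illustration.
continuations = {
    "the breaker rating is": {"16 A": 7, "10 A": 2, "16 V": 1},
}

def answer(prompt: str) -> str:
    """Draw a continuation in proportion to how often it appeared in the data."""
    counts = continuations[prompt]
    return random.choices(list(counts), weights=list(counts.values()), k=1)[0]

# Usually prints "16 A" - fluent and confident whether or not the data was right.
print(answer("the breaker rating is"))
```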
In Tamás’s case - someone who is vastly more prepared than most, a highly trained electrical engineer with a doctorate and decades of professional experience - it happened that the AI worked from incorrect source material. Not out of malice, not because of a “bug.” Simply because that was the material available to it.
At first glance, the answer looked correct. It was structured, logical, neatly assembled. A less prepared person would have accepted it without hesitation. Tamás did not. He stopped and checked. He questioned it, re-ran the reasoning, provided new context, and only then did a real solution emerge.
Not because the AI had bad intentions - it has none - but because AI does not know when it is wrong.
This is the difference between a tool and a professional. AI does not sense that “something is off.” It does not hesitate. It does not grow uncertain. It does not consider what its answer might cause once it leaves the screen and enters reality.
Only a human can do that.
And here is the point many people avoid saying out loud: an ignorant person gains very little from AI. In fact, they become more dangerous because of it. If you do not understand what you are discussing with the system, you are no longer using it as a partner but as a higher authority. From that moment on, control disappears. All that remains is blind acceptance, citation, and the sentence: “This isn’t my claim - the system says so.”
This is no longer a technological issue. It is a matter of character.
Tamás enjoys using AI. So do we. But precisely because we know what we are talking to it about. Because we have internal standards. Because we recognize when an answer is too simple, too finished, too confident for the complexity of the situation it claims to solve.
This is the point where a person can no longer hide behind the tool.
If AI is wrong, that is not an excuse. If it works from faulty data, that is not a justification. If it misleads you, that does not absolve you of responsibility. You are the one who released the material into the world - under your name, with your face and your professional past attached to it.
At this stage, you can no longer say, “the AI made a mistake.” The only honest sentence left is: I was not prepared enough.
This is a painful realization, but a necessary one.
Those who fail to understand this will eventually produce not only bad material but bad decisions as well - decisions whose consequences can no longer be undone.
If we are not willing to deepen and reclaim our knowledge through traditional means, then we should not be using AI for serious, complex work at all.