That kind of discussion is already everywhere these days, and many experts are talking about it.
For us, something recently clicked. We realized that when AI writes something for us, the result often lacks persuasiveness. And if someone asks questions about it, we might struggle to respond properly.
Not long ago, we used ChatGPT extensively to create a report. Before giving any instructions, we did our own detailed research, thought carefully about the content and key points, and then used AI to expand on those ideas. Of course, AI responses can contain mistakes, so we checked every part ourselves, verified the facts, and made edits or deletions as needed before using it as final material.
However, when we presented that document in a client meeting, even though it was neatly organized, it seemed to lack depth and clarity. We received many questions, and we couldn’t answer all of them on the spot.
If we had written everything ourselves from scratch, or simply summarized what we already knew, we probably could have answered on the spot. But because part of the writing wasn't fully ours, it had never truly become our own knowledge, even though we thought we understood it.
Then again, this isn’t unique to AI. The same thing happens when someone presents material written by another person. That’s why executives often bring the actual author to meetings—because they’re the ones who can answer detailed questions.
In the end, like any tool, the effectiveness of AI depends entirely on how we use it and who is using it.