I read Felienne Hermans' post about how all articles written with LLMs look the same and why this is bad.
Anyone who uses the internet, especially LinkedIn, is familiar with this uniformity: countless posts that all look alike, three times as long as necessary, padded with needless line breaks and smileys, and written in pompous language. These are the unmistakable signs of an AI-generated post. Here is a terrible example of the genre.
I used to believe it was possible to be authentic while still having an LLM write everything for you. My strategy had been to simply tell it to avoid all the tropes in the example above. But I was wrong. Looking back at my old blog posts, it's painfully obvious that much of the writing came from AI.
Writing is hard, especially in a foreign language. It's tempting to throw a few thoughts at an LLM and ask it to draft an article, because the result looks polished at first glance. The truth is that your ideas get diluted with slop, and the final piece is worse than what you would have written yourself.
In her post, Hermans points out that AI takes no responsibility and doesn't care about the truth. That responsibility is ours. We should stop legitimizing low-quality machine output by attaching our good names to it, or we risk lowering our standards and damaging our shared sense of meaning and truth.
Hermans is right, and I feel bad that I wasn't more careful. From now on, I will do my best to write every word on this blog myself. I'll also revisit my old posts and get rid of some of the low-quality material. My writing may become rougher, but it will be mine, and I will have thought about it more carefully.