ChatGPT has become my favorite way to search through the web. It redefined the way I engage with the internet: the www is the content, traditional search engines are the shelves & aisles, and the chatbot is the librarian.
It's a shame that so much debate revolves around validating our preconceived fears. So many people are looking for one gotcha to dismiss AI altogether. If you use a default chatbot to draft your meal plans or to answer math questions, you're setting it up for failure. Its strength lies in the ability to map natural language to the data used in its training. In a way, AI is the pinnacle of user interface: it demands nothing more from the user than the ability to speak or write.
I tried to list the reasons why I like it so much:
- The relationship between a chatbot and its user is transparent: you know in advance that you're dealing with generated content. Many other online services have devolved into murky territory where you never quite know whether you're dealing with users, bots, and/or shills.
- Asking the chatbot something filters out all the ads and sponsored content I would have seen on my way to the answer. Next time you google something, count all the ads you see between the search result page and what you're actually looking for. It's like walking through the forced-path layout of a mall.
- The chatbot's response is mostly devoid of images. If you want to be shown something, you have to ask for it. This is very different from browsing, where the average user is ONE typo, ONE click or ONE scroll away from traumatic material.
- No more sidebars, recommended links or elements poking from outside the viewport! As someone with compulsive issues, the lack of clutter is chef's kiss.
I would like to read your thoughts on the matter. Imho in a few years we'll look back at old-school "browsing" and wonder how the heck we managed to find anything useful online. We're living in a Goldilocks time where AI chatbots are amazing search assistants and they have yet to be tarnished by sponsored content. Let's enjoy it while we can.
byko3y•2mo ago
A few days ago I ran an experiment: I put my own article into Qwen and asked it to evaluate the text for LLM-generated content. To my amazement, it told me the article was 70-90% AI-generated. Which is even weirder considering I know I wrote all of it, top to bottom. I think I spend so much time in LLM conversations that I've actually started copying LLM style.
I mean, if you look at the article https://bykozy.me/blog/rust-is-a-disappointment/ — it's structured exactly the way LLMs' implicit templates (a.k.a. fine-tuning datasets) structure output: a short restatement of the question, a list of key points, each key point explained one by one, and a final summary. But why wouldn't I write the article this way? Just to make sure a person skimming it doesn't pattern-match it as "formatted like LLM output"?
If I wanted to disguise an AI-generated article as 100% human content, I could. But thanks for the suggestion — it might be worth doing anyway, because apparently I scare too many people away this way.