Quod natura non dat, artificialis intelligentia (AI) non praestat

1•vayllon•9mo ago
The title of this article paraphrases the old Latin proverb "Quod natura non dat, Salmantica non praestat", which means "What nature does not give, Salamanca University does not provide." In the same spirit, we could say that artificial intelligence cannot make up for what natural, biological intelligence lacks. We're talking about innate abilities like memory, comprehension, or the capacity to learn. Put simply: if someone lacks natural talent, not even ChatGPT can save them.

For those who are not familiar with the University of Salamanca: founded in 1218, it is one of the oldest universities in Europe. The proverb is carved in stone on one of its buildings, which has helped cement its popularity.

And that brings us to the real point of this article: AI won't make us smarter if we don't know how to use it. With large language models (LLMs), this comes down to prompt engineering and context: how we phrase our questions, what context and examples we provide to get meaningful answers, and how we decide whether or not to trust those answers.
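The ingredients just mentioned (instructions, context, examples, question) can be sketched as a small helper that assembles a few-shot prompt. This is a minimal illustration of the idea, not any particular vendor's API; the function and field names are my own invention:

```python
def build_prompt(instruction, context, examples, question):
    """Assemble a structured prompt from the classic ingredients:
    a task instruction, background context, a few worked examples
    (few-shot), and finally the actual question."""
    parts = [instruction.strip()]
    if context:
        # Background the model should rely on when answering.
        parts.append("Context:\n" + context.strip())
    for sample_q, sample_a in examples:
        # Each example demonstrates the desired answer format.
        parts.append(f"Q: {sample_q}\nA: {sample_a}")
    # The real question goes last, with an open "A:" for the model.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="You are a careful assistant. Answer briefly, "
                "and say 'I don't know' when you are not sure.",
    context="The University of Salamanca was founded in 1218.",
    examples=[("When was the University of Bologna founded?", "1088.")],
    question="When was the University of Salamanca founded?",
)
print(prompt)
```

The same structure works whether the final string is pasted into a chat window or sent through an API; what matters is that instruction, context, and examples are deliberate choices rather than an afterthought.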

Personally, I find that prompt engineering is starting to feel more and more like hypnosis.

When I write complex prompts filled with detailed instructions, I think of those stage magicians who hypnotize people from the audience, telling them how to behave or even who they are (a chicken, say).

With each new version of large language models, this "hypnotic engineering" seems to grow stronger. I wouldn't be surprised if, in the near future, we start seeing professional "suggesters": specialists in AI hypnosis through carefully crafted prompts. We might even get new job titles like LLM Hypnotist or AI Whisperer. Imagine movies like The LLM Whisperer, a sequel to The Horse Whisperer.

For instance, with GPT-4.1 we're already starting to see some highly suggestive prompts that point in this direction. Just one example:

“You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved. You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only,….”

Not only do we need the skill of a hypnotist to craft these instructions, we also need the instincts of a psychologist to interpret the responses, keep the conversation going, and even detect hallucinations. In other words, we must be smart enough to use these new tools effectively.

To paraphrase another popular saying: "You must first read, then reflect. Doing so in reverse order is dangerous." The point is that both reading without reflection and reflecting without a knowledge base lead to bad results.

The same applies when using tools like ChatGPT: we need to know how to ask the right questions, and just as importantly, how to think critically about the answers we get. That depends heavily on how much prior knowledge we have of the domain. If we know nothing about it, we'll probably believe whatever the chatbot tells us, and that's when things get really dangerous.

So, in an attempt to hypnotize the audience, I would suggest you cultivate your intelligence, your memory, and your comprehension skills. It's a daily task, like going to the gym. Because if you start delegating your intelligence to ChatGPT and its kin, you won't have the judgment needed to use them. It is well known that if you delegate a skill, you lose it, and you can see examples all around you. Please don't lose your ability to think; that would be very dangerous.