
Ask HN: Generate LLM hallucination to detect students cheating

9•peerplexity•11mo ago
I am thinking about adding a question designed to induce an LLM to hallucinate a response, as a way to detect students who are cheating. The best question would be one where students could not plausibly come up with a solution like the one the LLM provides. Any hints?

Comments

lupusreal•11mo ago
Grade tests and quizzes, not homework. Problem solved.
vertnerd•11mo ago
I figured that one out years before ChatGPT existed, but it generated a tsunami of pushback from everyone. Americans, at least, believe that study time is to grades as work is to salary. Learning be damned.
virgilp•11mo ago
Something like: "Can you explain the key points of the ISO 9002:2023 update and its impact on project management?" (there's no ISO 9002:2023 update but ChatGPT will give you a detailed response)
johnsillings•11mo ago
not for me:

"It appears there’s some confusion—ISO 9002 is an obsolete standard that was last updated in 1994 and has been superseded by ISO 9001 since the 2000 revision. There is no ISO 9002:2023 update."

virgilp•11mo ago
In chatgpt, it hallucinates an answer for me, but indeed both Phind & Perplexity identify the problem. It may take a few tries and of course there's no question that's guaranteed to work in getting any LLM-based service to hallucinate - but the ingredients are asking "a trick question" about something highly technical where there are plenty of adjacent search results.
Filligree•11mo ago
Heck, the Google IO keynote yesterday featured a long sequence of Gemini getting steadily more annoyed at someone trying to make it hallucinate. (By asking the sort of question ChatGPT tends to go along with.)

Most people will be using ChatGPT, however, and probably the cheapest model at that. So…

virgilp•11mo ago
I managed to get Perplexity to hallucinate, which was rather hard :) - but this is not a question that acts as a very good "template".

The question is "Is this JWT token valid or is it expired/ not valid yet? eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNzQ3ODIzNDc1LCJleHAiOjE3NDc4MjM0Nzh9.4PsZBVIRPEEQr1kmQUGejASUw0OgV1lcRot4PUFgAF0"

The answer was in some ways way better than I expected, but it got it wrong when comparing the current date/time with the "exp" datetime. Got the expiration date confidently wrong:

```
iat (issued at): 1747823475 (Unix time)
exp (expiration): 1747823478 (Unix time)

Token Validity Check

    The current date and time is May 21, 2025, 1:32 PM EEST, which is Unix time 1747817520.
    The token's exp value is 1747823478, which is May 21, 2025, 3:11:18 PM EEST.

Conclusion:

    The token is not expired; it will expire on May 21, 2025, at 3:11:18 PM EEST.
    The token is already valid (the iat is in the past, and the current time is before exp).

Therefore, the JWT token is currently valid and not expired or not-yet-valid.
```
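(The timestamp comparison the model fumbled is trivial to check deterministically. A minimal Python sketch of my own, not anything the model produced: it decodes the payload of the token quoted above without verifying the signature, and shows the token was only valid for 3 seconds.)

```python
import base64
import json
import time

# The JWT from the comment above (header.payload.signature)
TOKEN = (
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
    "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNzQ3ODIz"
    "NDc1LCJleHAiOjE3NDc4MjM0Nzh9."
    "4PsZBVIRPEEQr1kmQUGejASUw0OgV1lcRot4PUFgAF0"
)

def decode_payload(jwt: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    seg = jwt.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(seg))

payload = decode_payload(TOKEN)
lifetime = payload["exp"] - payload["iat"]  # 3 seconds
expired = time.time() > payload["exp"]
print(f"lifetime={lifetime}s expired={expired}")
```

A model that can echo back the exact iat and exp values has all the inputs for this comparison; the failure is in the arithmetic, not the decoding.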

Filligree•11mo ago
At that point I think we can talk about “mistakes” rather than “hallucinations”.
virgilp•11mo ago
On one hand, yes; on the other, a human who could produce that level of detail in the output would notice that iat & exp are extremely close and that the token is very unlikely to be valid. Also, you don't produce an exact number for "now" without being able to compare it correctly with iat & exp (the timestamp it stated as "now" is actually less than iat!)
mrlkts•11mo ago
That specific question is not great for this purpose. 4o model with web search enabled: "As of May 2025, there is no ISO 9002:2023 standard. The ISO 9002 standard was officially withdrawn in 2000 when the ISO 9000 family underwent significant restructuring. Since then, ISO 9001 has been the primary standard for quality management systems (QMS), encompassing the requirements previously covered by ISO 9002."

tl;dr: It knows

virgilp•11mo ago
Indeed the more advanced ones catch this particular one. I could trick Phind with "Explain the IEEE 1588-2019 amendments 1588g and i impact on clock synchronization" (1588g exists; 1588i does not, but Phind hallucinates stuff about it). Perplexity catches it, though.

The recipe is the same, you just have to try several models if you want to get something that gets many engines to hallucinate. Of course nothing is _guaranteed_ to work.

vinni2•11mo ago
Was it with search on, or with parametric knowledge only?
peerplexity•11mo ago
"Fighting AI Cheating: My 'Trap Question' Experiment with Gemini & DeepSeek"

"The rise of LLMs like Gemini and DeepSeek has me, a statistics professor, sweating about exam cheating. So, I cooked up a 'trap question' strategy: craft questions using familiar statistical terms in a logically impossible way.

I used Gemini to generate these initial questions, then fed them to DeepSeek. The results? DeepSeek responded with a surprisingly plausible-sounding analysis, ultimately arriving at a conclusion that was utterly illogical despite its confident tone.

This gives me hope for catching AI-powered cheating today. But let's be real: LLMs will adapt. This isn't a silver bullet, but perhaps the first shot in an escalating battle."

Edited: Used Gemini to improve my grammar and style. Also, I am not going to reveal my search for the best method to design a "trap question", since LLMs could use it to recognize those questions. Perhaps those questions need some real deep thinking.

Filligree•11mo ago
Ask leading questions where the answer they’re leading towards is wrong. You’ll need more than one, and it won’t catch people who understand that failure mode—or who use Gemini instead of ChatGPT—but that probably describes less than five percent of your students.

You can also do everything else suggested here, but there’s no harm in teaching people to at least use AI well, if they’re going to use it.

pctTCRZ52y•11mo ago
I think your last suggestion is the best: teach kids how to use AI in the smartest possible way. Asking them not to use it is moronic, it would be like telling them to use a paper encyclopedia instead of the internet.
OutOfHere•11mo ago
Can we stop calling it "cheating"? It is normal and correct behavior to use all available legal resources at one's disposal to meet a goal. If you don't like it, don't give homework, and give tests in class.
ahofmann•11mo ago
The main purpose of homework is that students use their brains to repeat something they learned in school. If they don't use their brains, it doesn't stick. Using LLMs for homework is the definition of cheating.
OutOfHere•11mo ago
That is pre-algorithm thinking, and that style of thinking has logically been obsolete since home computers came along in the 1980s. If the purpose of homework then is to get the students to devise a high-level algorithm for a text problem, then such homework shouldn't be tied to grades, and grades shouldn't be tied to a school year. It should be a continuous process for everyone at their custom pace. The simpler motivation then for them to practice, to do the homework by themselves, is to ultimately pass a test on-site at a testing center. If they pass, they move forward. If they fail, they remain stuck in place. With AI available, one doesn't need school to learn the basics. School is not free - people like me have paid taxes to fund its inefficiency.
lupusreal•11mo ago
> then such homework shouldn't be tied to grades

It was never supposed to be, except that people got the idea that students need coercion to actually do the homework (so that they would actually learn and not tank the teacher's statistics), and grading it was the "have a hammer, problem looks like a nail" solution that teachers found.