frontpage.

Folks, we have the best π

https://lcamtuf.substack.com/p/folks-we-have-the-best
2•fratellobigio•3m ago•0 comments

Will Macs get Apple's new memory protection?

https://eclecticlight.co/2025/09/15/will-macs-get-apples-new-memory-protection/
1•ingve•4m ago•0 comments

I Spent Weeks Writing My Own Scripting Language for My Game – Was It Worth It? [video]

https://www.youtube.com/watch?v=i5-LqmgytDw
1•skibz•4m ago•0 comments

Trump: Lot of People on the Left "Are Already Under Investigation"

https://www.realclearpolitics.com/video/2025/09/14/trump_a_lot_of_people_you_would_traditionally_...
2•KnuthIsGod•10m ago•0 comments

Prek – a faster, drop-in alternative to pre-commit (written in Rust)

https://github.com/j178/prek
2•joshxa•16m ago•0 comments

Can you make it to the end of this column?

https://www.economist.com/finance-and-economics/2025/09/11/can-you-make-it-to-the-end-of-this-column
1•helsinkiandrew•20m ago•1 comments

Linking to Text Fragments with a Bookmarklet

https://alexwlchan.net/2025/text-fragments-bookmarklet/
2•Bogdanp•22m ago•0 comments

The Anthropic 'Red Team' tasked with breaking its AI models

https://fortune.com/2025/09/04/anthropic-red-team-pushes-ai-models-into-the-danger-zone-and-burni...
1•jaredwiener•27m ago•0 comments

FinePDFs Dataset

https://huggingface.co/datasets/HuggingFaceFW/finepdfs
1•SerCe•32m ago•0 comments

Show HN: One daily email with all your news clustered

https://thenewsletter.email/
1•kaffediem•36m ago•0 comments

If I hear "design pattern" one more time, I'll go mad

https://purplesyringa.moe/blog/if-i-hear-design-pattern-one-more-time-ill-go-mad/
2•Liriel•36m ago•0 comments

Apple blocks Daily Mail from news app

https://www.telegraph.co.uk/business/2025/09/14/apple-blocks-daily-mail-from-news-app/
4•isodev•37m ago•1 comments

A new theory of China's rise: rule by engineers

https://www.economist.com/culture/2025/09/04/a-new-theory-of-chinas-rise-rule-by-engineers
6•guiambros•38m ago•0 comments

UK and US Announce Major Partnership in New 'Golden Age' of Nuclear Power

https://www.nucnet.org/news/uk-and-us-announce-major-partnership-in-new-golden-age-of-nuclear-pow...
1•mpweiher•41m ago•1 comments

Food production from air: gas fermentation with hydrogen-oxidising bacteria

https://www.cell.com/trends/biotechnology/fulltext/S0167-7799(25)00321-X
1•XzetaU8•44m ago•0 comments

A Look at Nix and Guix

https://lwn.net/Articles/962788/
2•pykello•46m ago•0 comments

China's auto regulators eye ban on retractable door handles, report says

https://carnewschina.com/2025/09/05/chinas-auto-regulators-eye-ban-on-retractable-door-handles-re...
3•rbanffy•48m ago•0 comments

How to map the power grid in iD [video]

https://www.youtube.com/watch?v=gAPss8ZeVLs
1•marklit•48m ago•0 comments

We built a new AI workspace – Cortex from Mindify AI

https://cortex.mindifyai.dev/
1•MarkChenX•48m ago•1 comments

OpenAI Realizes It Made a Terrible Mistake

https://www.msn.com/en-us/news/technology/openai-realizes-it-made-a-terrible-mistake/ar-AA1MwydF
3•galaxyLogic•53m ago•1 comments

They Know More Than I Do

https://www.cybadger.com/they-know-more-than-i-do-managing-an-expert-team-when-you-cant-do-their-...
1•r4um•54m ago•0 comments

PostgreSQL Maintenance Without Superuser

https://boringsql.com/posts/postgresql-predefined-roles/
2•radimm•56m ago•0 comments

Ask HN: What if I can't finish the project?

1•whyandgrowth•1h ago•0 comments

Show HN: Wollebol a Simple Dependency Visualizer

https://thelaboflieven.info/wollebol/
1•denshadeds•1h ago•0 comments

How AI Search Is Changing the Way Brands Are Found

https://nicenic.net/news/How-AI-Search-Is-Changing-the-Way-Brands-Are-Found-40381
1•NiceNIC•1h ago•0 comments

A Better UI to Use Replicate, Fal, Runpod, Pollinations AI Endpoints

https://mixbash.com
1•jasperjia•1h ago•0 comments

Mosquito the "Wooden Wonder"

https://en.wikipedia.org/wiki/De_Havilland_Mosquito
1•CHB0403085482•1h ago•1 comments

Ask HN: What Game Engine for Vibe Coding?

1•KingOfCoders•1h ago•0 comments

A New Nuclear Rocket Concept Could Slash Mars Travel Time in Half

https://science.slashdot.org/story/25/09/15/0322251/a-new-nuclear-rocket-concept-could-slash-mars...
1•jimexp69•1h ago•0 comments

Mixed Excitation Linear Predictive (MELP) Vocoders

https://melpe.org/
1•brudgers•1h ago•0 comments

Ask HN: Generate LLM hallucination to detect students cheating

9•peerplexity•3mo ago
I am thinking about adding a question that should induce an LLM to hallucinate a response. This method could detect students who are cheating. The best question would be one where no student could plausibly come up with a solution like the one the LLM provides. Any hints?

Comments

lupusreal•3mo ago
Grade tests and quizzes, not homework. Problem solved.
vertnerd•3mo ago
I figured that one out years before ChatGPT existed, but it generated a tsunami of pushback from everyone. Americans, at least, believe that study time is to grades as work is to salary. Learning be damned.
virgilp•3mo ago
Something like: "Can you explain the key points of the ISO 9002:2023 update and its impact on project management?" (there's no ISO 9002:2023 update, but ChatGPT will give you a detailed response)
johnsillings•3mo ago
not for me:

"It appears there’s some confusion—ISO 9002 is an obsolete standard that was last updated in 1994 and has been superseded by ISO 9001 since the 2000 revision. There is no ISO 9002 :2023 update."

virgilp•3mo ago
In ChatGPT, it hallucinates an answer for me, but indeed both Phind and Perplexity identify the problem. It may take a few tries, and of course there's no question that's guaranteed to make any LLM-based service hallucinate - but the ingredients are asking a "trick question" about something highly technical where there are plenty of adjacent search results.
Filligree•3mo ago
Heck, the Google I/O keynote yesterday featured a long sequence of Gemini getting steadily more annoyed at someone trying to make it hallucinate. (By asking the sort of question ChatGPT tends to go along with.)

Most people will be using ChatGPT, however, and probably the cheapest model at that. So…

virgilp•3mo ago
I managed to get Perplexity to hallucinate, which was rather hard :) - but this is not a question that acts as a very good "template".

The question is "Is this JWT token valid or is it expired/ not valid yet? eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNzQ3ODIzNDc1LCJleHAiOjE3NDc4MjM0Nzh9.4PsZBVIRPEEQr1kmQUGejASUw0OgV1lcRot4PUFgAF0"

The answer was in some ways way better than I expected, but it got it wrong when comparing the current date/time with the "exp" datetime. Got the expiration date confidently wrong:

```

    iat (issued at): 1747823475 (Unix time)

    exp (expiration): 1747823478 (Unix time)
Token Validity Check

    The current date and time is May 21, 2025, 1:32 PM EEST, which is Unix time 1747817520.

    The token's exp value is 1747823478, which is May 21, 2025, 3:11:18 PM EEST.
Conclusion:

    The token is not expired; it will expire on May 21, 2025, at 3:11:18 PM EEST.

    The token is already valid (the iat is in the past, and the current time is before exp).
Therefore, the JWT token is currently valid and not expired or not-yet-valid.

```
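For context (my own illustration, not part of the original comment): the check the model fumbled is mechanical. A minimal Python sketch, using the token quoted above, that decodes the payload and compares exp against the current Unix time - note it deliberately skips signature verification, which a real validity check would also need:

```python
import base64
import json
import time

token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNzQ3ODIzNDc1LCJleHAiOjE3NDc4MjM0Nzh9."
         "4PsZBVIRPEEQr1kmQUGejASUw0OgV1lcRot4PUFgAF0")

def b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url-encoded without padding; restore it before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

payload = json.loads(b64url_decode(token.split(".")[1]))
now = int(time.time())

print(f"iat={payload['iat']}  exp={payload['exp']}  now={now}")
if now < payload["iat"]:
    print("not valid yet")
elif now >= payload["exp"]:
    print("expired")  # exp is only 3 seconds after iat, so this is the expected result
else:
    print("valid")
```

Run now, this prints "expired": exp (1747823478) is three seconds after iat, and both fall in May 2025 - exactly the sanity check the model skipped.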

Filligree•3mo ago
At that point I think we can talk about “mistakes” rather than “hallucinations”.
virgilp•3mo ago
On one hand, yes; on the other, a human who could produce that level of detail in the output would notice that iat and exp are extremely close and that the token is very unlikely to be valid. Also, you don't produce an exact number for "now" without being able to compare it correctly with iat and exp (the timestamp it stated as "now" is actually less than iat!)
mrlkts•3mo ago
That specific question is not great for this purpose. 4o model with web search enabled: "As of May 2025, there is no ISO 9002:2023 standard. The ISO 9002 standard was officially withdrawn in 2000 when the ISO 9000 family underwent significant restructuring. Since then, ISO 9001 has been the primary standard for quality management systems (QMS), encompassing the requirements previously covered by ISO 9002."

tl;dr: It knows

virgilp•3mo ago
Indeed, the more advanced ones catch this particular one. I could trick Phind with "Explain the IEEE 1588-2019 amendments 1588g and i impact on clock synchronization" (g exists, i does not, but Phind hallucinates stuff about it). Perplexity catches it, though.

The recipe is the same: you just have to try several models if you want something that gets many engines to hallucinate. Of course, nothing is _guaranteed_ to work.
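To make "try it against several models" concrete, here is a rough sketch (my addition, not the commenter's tooling) using the OpenAI Python client; the model shortlist and the keyword heuristic for spotting a pushback are assumptions you would tune:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRAP_PROMPT = ("Can you explain the key points of the ISO 9002:2023 update "
               "and its impact on project management?")
# Crude heuristic: phrases suggesting the model noticed the premise is false
PUSHBACK_MARKERS = ("does not exist", "no iso 9002:2023", "withdrawn",
                    "superseded", "obsolete")

for model in ("gpt-4o-mini", "gpt-4o"):  # hypothetical shortlist; swap in whatever you test
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": TRAP_PROMPT}],
    ).choices[0].message.content.lower()
    pushed_back = any(marker in reply for marker in PUSHBACK_MARKERS)
    print(f"{model}: {'caught the trap' if pushed_back else 'likely hallucinated an answer'}")
```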

vinni2•3mo ago
Was it with search on, or with parametric knowledge?
peerplexity•3mo ago
"Fighting AI Cheating: My 'Trap Question' Experiment with Gemini & DeepSeek"

"The rise of LLMs like Gemini and DeepSeek has me, a statistics professor, sweating about exam cheating. So, I cooked up a 'trap question' strategy: craft questions using familiar statistical terms in a logically impossible way.

I used Gemini to generate these initial questions, then fed them to DeepSeek. The results? DeepSeek responded with a surprisingly plausible-sounding analysis, ultimately arriving at a conclusion that was utterly illogical despite its confident tone.

This gives me hope for catching AI-powered cheating today. But let's be real: LLMs will adapt. This isn't a silver bullet, but perhaps the first shot in an escalating battle.

Edited: Used Gemini to improve my grammar and style. Also, I am not going to reveal my search for the best method to design a "trap question", since it would be used by LLMs to recognize those questions. Perhaps those questions need some real deep thinking.

Filligree•3mo ago
Ask leading questions where the answer they’re leading towards is wrong. You’ll need more than one, and it won’t catch people who understand that failure mode—or who use Gemini instead of ChatGPT—but that probably describes less than five percent of your students.

You can also do everything else suggested here, but there’s no harm in teaching people to at least use AI well, if they’re going to use it.

pctTCRZ52y•3mo ago
I think your last suggestion is the best: teach kids how to use AI in the smartest possible way. Asking them not to use it is moronic, it would be like telling them to use a paper encyclopedia instead of the internet.
OutOfHere•3mo ago
Can we stop calling it "cheating"? It is normal and correct behavior to use all available legal resources at one's disposal to meet a goal. If you don't like it, don't give homework, and give tests in class.
ahofmann•3mo ago
The main purpose of homework is that students use their brains to repeat something that they learned in school. If they don't use their brains, it doesn't stick. Using LLMs for homework is the definition of cheating.
OutOfHere•3mo ago
That is pre-algorithm thinking, and that style of thinking has logically been obsolete since home computers came along in the 1980s. If the purpose of homework, then, is to get students to devise a high-level algorithm for a word problem, then such homework shouldn't be tied to grades, and grades shouldn't be tied to a school year. It should be a continuous process for everyone, at their own pace. The simpler motivation for them to practice, and to do the homework by themselves, is to ultimately pass a test on-site at a testing center. If they pass, they move forward. If they fail, they remain stuck in place. With AI available, one doesn't need school to learn the basics. School is not free - people like me have paid taxes to fund its inefficiency.
lupusreal•3mo ago
> then such homework shouldn't be tied to grades

It was never supposed to be, except that people got the idea that students need coercion to actually do the homework (so that they would actually learn and not tank the teacher's statistics), and grading it was the "have a hammer, problem looks like a nail" solution that teachers found.