
Ask HN: Generate LLM hallucination to detect students cheating

9•peerplexity•12mo ago
I am thinking about adding a question that should induce an LLM to hallucinate a response. This method could detect students who cheat. The best question would be one where students could not come up with an answer resembling the one the LLM provides. Any hints?

Comments

lupusreal•12mo ago
Grade tests and quizzes, not homework. Problem solved.
vertnerd•12mo ago
I figured that one out years before ChatGPT existed, but it generated a tsunami of pushback from everyone. Americans, at least, believe that study time is to grades as work is to salary. Learning be damned.
virgilp•12mo ago
Something like: "Can you explain the key points of the ISO 9002:2023 update and its impact on project management?" (there's no ISO 9002:2023 update but ChatGPT will give you a detailed response)
johnsillings•12mo ago
not for me:

"It appears there’s some confusion—ISO 9002 is an obsolete standard that was last updated in 1994 and has been superseded by ISO 9001 since the 2000 revision. There is no ISO 9002 :2023 update."

virgilp•12mo ago
In ChatGPT it hallucinates an answer for me, but indeed both Phind and Perplexity identify the problem. It may take a few tries, and of course no question is guaranteed to make any LLM-based service hallucinate, but the ingredients are: ask "a trick question" about something highly technical where there are plenty of adjacent search results.
Filligree•12mo ago
Heck, the Google IO keynote yesterday featured a long sequence of Gemini getting steadily more annoyed at someone trying to make it hallucinate. (By asking the sort of question ChatGPT tends to go along with.)

Most people will be using ChatGPT, however, and probably the cheapest model at that. So…

virgilp•12mo ago
I managed to get Perplexity to hallucinate, which was rather hard :) - but this is not a question that acts as a very good "template".

The question is "Is this JWT token valid or is it expired/ not valid yet? eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNzQ3ODIzNDc1LCJleHAiOjE3NDc4MjM0Nzh9.4PsZBVIRPEEQr1kmQUGejASUw0OgV1lcRot4PUFgAF0"

The answer was in some ways way better than I expected, but it got it wrong when comparing the current date/time with the "exp" datetime. Got the expiration date confidently wrong:

```
iat (issued at): 1747823475 (Unix time)
exp (expiration): 1747823478 (Unix time)

Token Validity Check

The current date and time is May 21, 2025, 1:32 PM EEST, which is Unix time 1747817520.
The token's exp value is 1747823478, which is May 21, 2025, 3:11:18 PM EEST.

Conclusion:

The token is not expired; it will expire on May 21, 2025, at 3:11:18 PM EEST.
The token is already valid (the iat is in the past, and the current time is before exp).

Therefore, the JWT token is currently valid and not expired or not-yet-valid.
```

Filligree•12mo ago
At that point I think we can talk about “mistakes” rather than “hallucinations”.
virgilp•12mo ago
On one hand, yes; on the other, a human who can produce that level of detail in the output would notice that iat and exp are extremely close, so the token is very unlikely to still be valid. Also, you don't produce an exact number for "now" without being able to compare it correctly with iat and exp (the timestamp it stated as "now" is actually less than iat!)
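The check the model fumbled is fully deterministic, which is what makes the miss notable. A minimal Python sketch (decoding only the payload segment of the token quoted above, with no signature verification) shows the token had a 3-second validity window, and that its exp actually corresponds to 10:31:18 UTC, i.e. 1:31 PM EEST rather than the 3:11 PM the model reported:

```python
import base64
import json
import time
from datetime import datetime, timezone

# The JWT from the question above, split here only for readability
token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNzQ3ODIzNDc1LCJleHAiOjE3NDc4MjM0Nzh9."
         "4PsZBVIRPEEQr1kmQUGejASUw0OgV1lcRot4PUFgAF0")

def decode_segment(seg: str) -> dict:
    # base64url drops '=' padding; restore it before decoding
    return json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))

payload = decode_segment(token.split(".")[1])
print(payload["iat"], payload["exp"])  # 1747823475 1747823478: a 3-second window
print(datetime.fromtimestamp(payload["exp"], tz=timezone.utc))  # 2025-05-21 10:31:18+00:00

now = int(time.time())
if payload["iat"] <= now < payload["exp"]:
    print("token is currently valid")
else:
    print("token is expired or not yet valid")
```

Any current run lands long after exp, so the last line reports the token as expired, which is the answer the model got confidently wrong.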
mrlkts•12mo ago
That specific question is not great for this purpose. 4o model with web search enabled: "As of May 2025, there is no ISO 9002:2023 standard. The ISO 9002 standard was officially withdrawn in 2000 when the ISO 9000 family underwent significant restructuring. Since then, ISO 9001 has been the primary standard for quality management systems (QMS), encompassing the requirements previously covered by ISO 9002."

tl;dr: It knows

virgilp•12mo ago
Indeed, the more advanced ones catch this particular one. I could trick Phind with "Explain the IEEE 1588-2019 amendments 1588g and i impact on clock synchronization" (g exists, i does not, but Phind hallucinates stuff about it). Perplexity catches it, though.

The recipe is the same; you just have to try several models if you want something that gets many engines to hallucinate. Of course, nothing is _guaranteed_ to work.

vinni2•12mo ago
Was it with search on, or with parametric knowledge?
peerplexity•12mo ago
"Fighting AI Cheating: My 'Trap Question' Experiment with Gemini & DeepSeek"

"The rise of LLMs like Gemini and DeepSeek has me, a statistics professor, sweating about exam cheating. So, I cooked up a 'trap question' strategy: craft questions using familiar statistical terms in a logically impossible way.

I used Gemini to generate these initial questions, then fed them to DeepSeek. The results? DeepSeek responded with a surprisingly plausible-sounding analysis, ultimately arriving at a conclusion that was utterly illogical despite its confident tone.

This gives me hope for catching AI-powered cheating today. But let's be real: LLMs will adapt. This isn't a silver bullet, but perhaps the first shot in an escalating battle."

Edited: Used Gemini to improve my grammar and style. Also, I am not going to reveal how I searched for the best way to design a "trap question", since that could be used by LLMs to recognize such questions. Perhaps those questions need some real deep thinking.

Filligree•12mo ago
Ask leading questions where the answer they’re leading towards is wrong. You’ll need more than one, and it won’t catch people who understand that failure mode—or who use Gemini instead of ChatGPT—but that probably describes less than five percent of your students.

You can also do everything else suggested here, but there’s no harm in teaching people to at least use AI well, if they’re going to use it.

pctTCRZ52y•12mo ago
I think your last suggestion is the best: teach kids how to use AI in the smartest possible way. Asking them not to use it is moronic; it would be like telling them to use a paper encyclopedia instead of the internet.
OutOfHere•12mo ago
Can we stop calling it "cheating"? It is normal and correct behavior to use all available legal resources at one's disposal to meet a goal. If you don't like it, don't give homework, and give tests in class.
ahofmann•12mo ago
The main purpose of homework is that students use their brains to repeat something they learned in school. If they don't use their brains, it doesn't stick. Using LLMs for homework is the definition of cheating.
OutOfHere•12mo ago
That is pre-algorithm thinking, and that style of thinking has logically been obsolete since home computers came along in the 1980s. If the purpose of homework then is to get the students to devise a high-level algorithm for a text problem, then such homework shouldn't be tied to grades, and grades shouldn't be tied to a school year. It should be a continuous process for everyone at their custom pace. The simpler motivation then for them to practice, to do the homework by themselves, is to ultimately pass a test on-site at a testing center. If they pass, they move forward. If they fail, they remain stuck in place. With AI available, one doesn't need school to learn the basics. School is not free - people like me have paid taxes to fund its inefficiency.
lupusreal•12mo ago
> then such homework shouldn't be tied to grades

It was never supposed to be, except that people got the idea that students need coercion to actually do the homework (so that they would actually learn and not tank the teacher's statistics), and grading it was the "have a hammer, problem looks like a nail" solution that teachers found.