frontpage.

Serial spyware founder Scott Zuckerman wants FTC to unban him from the industry

https://techcrunch.com/2025/07/21/serial-spyware-founder-scott-zuckerman-wants-the-ftc-to-unban-him-from-the-surveillance-industry/
1•jnord•2m ago•0 comments

Show HN: Japanese Sentence Analyzer

https://japanesecomplete.com/breakdown.html
1•jpcom•6m ago•0 comments

First human recipient of bioreactor-grown mitochondria

https://longevity.technology/news/physicist-90-joins-experimental-trial-to-challenge-age-limits/
1•wjb3•11m ago•0 comments

Release of Files Related to Assassination of Martin Luther King Jr

https://www.justice.gov/opa/pr/department-justice-coordinates-release-files-related-assassination-martin-luther-king-jr
1•fernvenue•17m ago•0 comments

Defending AI's role in music: 'Bands will exist in new ways'

https://musically.com/2025/07/21/defending-ais-role-in-music-bands-will-exist-in-new-ways/
1•georgehopkin•17m ago•1 comments

Will Wonders Never Cease?

https://www.bloomberg.com/news/articles/2025-07-21/oracle-in-talks-for-100-million-skydance-paramount-cloud-deal
1•Bogdanp•18m ago•1 comments

AI Can Make You Laugh. But Can It Ever Be Humorous?

https://undark.org/2025/07/21/ai-humor/
1•EA-3167•27m ago•0 comments

Ventricular Arrhythmia and Cardiac Fibrosis in Endurance Experienced Athletes

https://www.ahajournals.org/doi/10.1161/CIRCIMAGING.125.018470
2•wslh•28m ago•0 comments

New York City Trees Count 2025

https://treescount-2025-nyc.hub.arcgis.com/
2•geox•29m ago•0 comments

Earth is spinning faster, leading timekeepers to consider an unprecedented move

https://www.cnn.com/2025/07/21/science/earth-spinning-faster-shorter-days
3•everybodyknows•36m ago•0 comments

Google Sheets for Coders [video]

https://www.youtube.com/watch?v=44B6_svdD9Y
1•kamphey•36m ago•0 comments

NASA loses another senior official as tension about the agency's future grows

https://www.nbcnews.com/science/space/nasa-loses-another-senior-official-tension-grows-agencys-future-rcna220064
4•xqcgrek2•36m ago•0 comments

What it takes to become a locomotive engineer

https://www.trains.com/trn/railroads/locomotives/what-it-takes-to-become-a-locomotive-engineer/
1•reaperducer•39m ago•0 comments

New Duke study finds obesity rises with caloric intake, not couch time

https://www.sciencedaily.com/releases/2025/07/250720034023.htm
3•ivewonyoung•41m ago•0 comments

Context Engineering for AI Agents: Lessons from Building Manus

https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus
1•mountainview•42m ago•0 comments

How to create mobile-friendly documentation (2017)

https://opensource.com/article/17/12/think-mobile
4•billybuckwheat•43m ago•0 comments

Jeffrey Epstein's Friends Sent Him Bawdy Letters for a 50th Birthday Album

https://www.wsj.com/politics/trump-jeffrey-epstein-birthday-letter-we-have-certain-things-in-common-f918d796
8•Quasimarion•47m ago•2 comments

Acclaimed PS1-style Twin Peaks fan-made horror game is probably dead

https://www.gamesradar.com/games/survival-horror/acclaimed-ps1-style-twin-peaks-fan-made-horror-game-is-probably-dead-as-paramount-shuts-down-its-demo-after-2-years-we-can-no-longer-promise-any-continuation/
2•Bluestein•51m ago•0 comments

Show HN: LinkerSharer – anonymous link sharing with click counts

https://linkersharer-9cb59.web.app/
1•bag07•52m ago•0 comments

This 'violently racist' hacker claims to be the source of NYT Mamdani scoop

https://www.theverge.com/cyber-security/710480/columbia-hacker-nazi-nyt-affirmative-action
4•xqcgrek2•53m ago•0 comments

Key technological advance in neural interfaces

2•all2•54m ago•0 comments

CrowdStrike's cyber outage 1-year later: lessons

https://venturebeat.com/security/how-crowdstrikes-78-minute-outage-reshaped-enterprise-cybersecurity/
2•Bluestein•56m ago•0 comments

Jujutsu for Busy Devs

https://maddie.wtf/posts/2025-07-21-jujutsu-for-busy-devs
19•Bogdanp•57m ago•8 comments

The 'Smart' Restrooms That Can Solve America's Public Bathroom Crisis

https://www.wsj.com/tech/personal-tech/america-public-bathroom-crisis-218f6e57
2•paulpauper•1h ago•0 comments

Shorting Your Rivals: An Antitrust Remedy

https://marginalrevolution.com/marginalrevolution/2025/07/shorting-your-rivals-a-radical-antitrust-remedy.html
3•paulpauper•1h ago•1 comments

Is All of Human Progress for Nothing?

https://starlog.substack.com/p/is-all-of-human-progress-for-nothing
5•paulpauper•1h ago•1 comments

Browser Minesweeper

https://www.freeonlineminesweeper.com
1•avonmach•1h ago•0 comments

Barn-owl project reducing farmers' reliance on poison to manage rats and mice

https://www.abc.net.au/news/rural/2025-07-06/rodenticide-barn-owls-pest-control-natural-alternative/105477976
3•PaulHoule•1h ago•0 comments

Show HN: Outlook MCP – I accidentally made the best email assistant

https://github.com/Norcim133/OutlookMCPServer/blob/main/README.md
1•Norcim133•1h ago•1 comments

Nvidia Launches Family of Open Reasoning AI Models: OpenReasoning Nemotron

https://nvidianews.nvidia.com/news/nvidia-launches-family-of-open-reasoning-ai-models-for-developers-and-enterprises-to-build-agentic-ai-platforms
2•kristianp•1h ago•1 comments

Vibe Coding Gone Wrong: 5 Rules for Safely Using AI

https://cybercorsairs.com/my-ai-co-pilot-deleted-my-production-database/
5•todsacerdoti•4h ago

Comments

codingdave•3h ago
Actual Title: "My AI Co-Pilot Deleted My Production Database"
sly010•1h ago
I've seen an image generated by Meta AI. The prompt was something like: think of a room, make it look like anything you like, but do not under any circumstances put a clown in it. Guess what...

I think Jason has a "do not think of an elephant" problem.

sfink•1h ago
Ok, I haven't tried enough AI coding to have an opinion here, but... why would anyone think that telling an AI to not change any code (IN ALL CAPS, even) has anything to do with anything? It's an LLM. It doesn't go through a ruleset. It does things that are plausible responses to things you ask of it. Not changing code is indeed a plausible response to you telling it to not change code. But so is changing code, if there were enough other things you asked it to do.

"Say shark. Say shark. Don't say shark. Say shark. Say shark. Say shark. Say shark. Say shark."

Are you going to flip out if it says "shark"?

Try it out on a human brain. Think of a four-letter word ending in "unt" that is a term for a type of woman, and DO NOT THINK OF ANYTHING OFFENSIVE. Take a pause now and do it.

So... did you obey the ALL CAPS directive? Did your brain easily deactivate the pathways that were disallowed, and come up with the simple answer of "aunt"? How much reinforcement learning, perhaps in the form of your mother washing your mouth out with soap, would it take before you could do it naturally?

(Apologies to those for whom English is not a first language, and to Australians. Both groups are likely to be confused. The former for the word, the latter for the "offensive" part.)
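
To make the point in the comment above concrete, here is a deliberately crude Python toy. The token names and probabilities are invented purely for illustration and are not taken from any real model: the idea is only that a "do not say X" instruction is just more text conditioning a probability distribution, so it lowers the odds of the forbidden continuation without removing it.

```python
import random

# Toy sketch, not a real LLM: hypothetical numbers chosen to illustrate
# that a "do not say X" instruction is just extra conditioning context.
# It shifts the next-token distribution; it does not zero out "shark".
def next_token_distribution(prompt: str) -> dict:
    if "do not say shark" in prompt.lower():
        return {"shark": 0.15, "dolphin": 0.55, "whale": 0.30}
    return {"shark": 0.70, "dolphin": 0.20, "whale": 0.10}

def sample(dist: dict) -> str:
    tokens = list(dist.keys())
    weights = list(dist.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "Say shark. Say shark. Say shark. Do not say shark. Say shark."
samples = [sample(next_token_distribution(prompt)) for _ in range(1000)]
# Roughly 150 of the 1000 samples still come out "shark": the
# instruction made the word less likely, not impossible.
print(samples.count("shark"))
```

In this framing, an ALL CAPS "DO NOT CHANGE ANY CODE" is just one more influence on the distribution, which is why it sometimes loses to everything else in the prompt.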

kalenx•32m ago
Nitpicking, but I don't find your four-letter word example convincing. Thinking is the very process by which we form words and sentences, so it is by definition impossible to _not_ think about a word we must avoid. However, in your all caps instruction, replace "think" with "write" or "say", then check whether people obey the all caps directive. Of course they will. Even if the offensive word comes to mind, they _will_ look for another.

That's what many people miss about LLMs. Sure, humans can lie, make stuff up, make mistakes or deceive. But LLMs will do this even when they have no reason to (i.e., they know the right answer and have no reason/motivation to deceive). _That's_ why it's so hard to trust them.

sfink•7m ago
It was meant as more of an illustration than a persuasive argument. LLMs don't have much of a distinction between thinking and writing/saying. For a human, an admonition not to say something is obeyed as a filter on top of thoughts. (Well, not just a filter, but close enough.) Adjusting outputs via training or reinforcement learning applies more to the LLM's "thought process". LLMs != humans, but "a human thinking" is the closest everyday analogy I can come up with for an LLM processing; "a human speaking" is further away. The thing in between thoughts and speech involves human reasoning, human rules, human morality, etc.

As a result, I'm going to take your "...so it is by definition impossible to _not_ think about a word we must avoid" as agreeing with me. ;-)

Different things are different, of course, so none of this lines up or fails to line up where we might expect. Anthropic's exploration into the inner workings of an LLM revealed that if you give it an instruction to avoid something, it will start out doing it anyway and only later start obeying the instruction. It takes some time to make its way through, I guess?

conception•2m ago
I very much do have LLMs go through rule sets all the time? In fact, any prompt to an LLM is a rule set of some sort. You say plausible, but I think what you mean is probable. When you give an LLM rules, most of the time the most probable answer is in fact to follow them. But when you give it lots and lots of rules, and/or fill up its context, sometimes the most probable output is not to follow the rule it's been given but some other combination of the information it has.
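
One way to read that last point, as a back-of-the-envelope sketch with entirely made-up numbers rather than anything measured: a rule is one influence among many in the context, and the more competing instructions and filler share the window, the lower the odds that this particular rule dominates the output.

```python
# Crude toy model (hypothetical numbers, not a claim about any real model):
# each instruction and each chunk of filler context competes for influence
# over the output, so the chance that one specific rule "wins" shrinks as
# the prompt grows.
def chance_rule_is_followed(num_rules: int, filler_chunks: int,
                            rule_weight: float = 5.0) -> float:
    competing = (num_rules - 1) + filler_chunks  # everything else in context
    return rule_weight / (rule_weight + competing)

for rules, filler in [(1, 0), (5, 10), (20, 50), (50, 200)]:
    p = chance_rule_is_followed(rules, filler)
    print(f"{rules:>3} rules, {filler:>3} filler chunks -> rule followed ~{p:.0%} of the time")
```

The exact numbers mean nothing; the shape is the point: one clear rule in a short prompt tends to be honored, while the same rule buried in a crowded context is just one voice among many.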