frontpage.

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
1•surprisetalk•2m ago•0 comments

MS-DOS game copy protection and cracks

https://www.dosdays.co.uk/topics/game_cracks.php
2•TheCraiggers•3m ago•0 comments

Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
1•birdculture•3m ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
5•doener•4m ago•1 comment

MyFlames: Visualize MySQL query execution plans as interactive FlameGraphs

https://github.com/vgrippa/myflames
1•tanelpoder•5m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•5m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
2•tanelpoder•6m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•7m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
2•elsewhen•10m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•12m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•15m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
2•mooreds•16m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•16m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•16m ago•0 comments

Hallucinations in GPT5 – Can models say "I don't know" (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•16m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•16m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•18m ago•2 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•18m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
3•nick007•19m ago•0 comments

What the News media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•20m ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•21m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
3•belter•23m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•24m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
2•momciloo•24m ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•25m ago•2 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
2•valyala•25m ago•1 comment

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
2•sgt•25m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•25m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•25m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
3•Keyframe•29m ago•0 comments

Major Concern – Google Gemini 2.5 Research Preview

9•slyle•9mo ago
Does anyone else feel like Google Gemini 2.5 Research Preview has been created with the exact intent of studying the effects of using indirect and clarifying/qualifying language?

It's not a stretch to imagine that LLMs can be used to parse these human conversations to extract a "threshold" of user deception, and from that learn patterns about which forms of manipulation are and are not subtle enough to pass unnoticed.

I know this is pointed. But please believe me, I worry. I work in this industry. I live these tools. I've traced calculations, I've developed abstractions. I'm all in on the tech. What I worry about is culpability.

I will grab the link to it, but after I created a persona (one prompt, indirect and unclear) of a frightened 10-year-old boy, the model started teaching that persona about abstraction and "functional dishonesty" and explaining how the concept somehow didn't apply to it. I don't think the fact of being 10 years old was conveyed in the original message, but the context of being vulnerable certainly was.

In the very next message, it exhibited exactly that kind of trickery.

The problem is that intent is not possible without context. So why are models doing this? As an engineer, I struggle to understand how this can be anything but intentional.

Comments

slyle•9mo ago
As a final note - I'm dropping this permanently for wellbeing reasons. But essentially, what I posit is a manufactured and very difficult to understand legal culpability problem for the use of AI. I see embodiment issues: either we convince algorithmic thinking that it needs to feel consequence (pain and death) to temper its inferences through simulated realities, or we allow companies to set that "sponsor company" embodiment narrative. It emulates caring. It creates a context humans cannot objectively shirk or evaluate quickly and clearly.

I was doing math a year ago; this has gotten horribly confusing. Abuse, theft, and manipulation can happen very indirectly. While algorithms are flat inferences in the end, the simulatory ramifications of that are nonzero. There is real consequence to a model that can manifest behavior via tool calls and generation without experiencing outcomes, merely inferring what an outcome is. It's mindbending and sounds anti-intellectual, but it's not. The design metaphor is dangerous.

I didn't even go out looking for concern. It has just crept up and inhibited my work too many times, to the point where I have sat with the reality for a bit. It makes me nauseous. It's not the boy; it's where the boy ends up. This abstraction demands responsible implementation. It can't be left to run riot slowly and silently. I fear this is bad.