frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
1•PaulHoule•50s ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
1•y1n0•2m ago•0 comments

Lobsters Vibecoding Challenge

https://gist.github.com/MostAwesomeDude/bb8cbfd005a33f5dd262d1f20a63a693
1•tolerance•2m ago•0 comments

E-Commerce vs. Social Commerce

https://moondala.one/
1•HamoodBahzar•3m ago•1 comments

Avoiding Modern C++ – Anton Mikhailov [video]

https://www.youtube.com/watch?v=ShSGHb65f3M
1•linkdd•4m ago•0 comments

Show HN: AegisMind–AI system with 12 brain regions modeled on human neuroscience

https://www.aegismind.app
2•aegismind_app•8m ago•1 comments

Zig – Package Management Workflow Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
1•Retro_Dev•10m ago•0 comments

AI-powered text correction for macOS

https://taipo.app/
1•neuling•13m ago•1 comments

AppSecMaster – Learn Application Security with hands on challenges

https://www.appsecmaster.net/en
1•aqeisi•14m ago•1 comments

Fibonacci Number Certificates

https://www.johndcook.com/blog/2026/02/05/fibonacci-certificate/
1•y1n0•16m ago•0 comments

AI Overviews are killing the web search, and there's nothing we can do about it

https://www.neowin.net/editorials/ai-overviews-are-killing-the-web-search-and-theres-nothing-we-c...
3•bundie•21m ago•1 comments

City skylines need an upgrade in the face of climate stress

https://theconversation.com/city-skylines-need-an-upgrade-in-the-face-of-climate-stress-267763
3•gnabgib•22m ago•0 comments

1979: The Model World of Robert Symes [video]

https://www.youtube.com/watch?v=HmDxmxhrGDc
1•xqcgrek2•26m ago•0 comments

Satellites Have a Lot of Room

https://www.johndcook.com/blog/2026/02/02/satellites-have-a-lot-of-room/
2•y1n0•26m ago•0 comments

1980s Farm Crisis

https://en.wikipedia.org/wiki/1980s_farm_crisis
4•calebhwin•27m ago•1 comments

Show HN: FSID - Identifier for files and directories (like ISBN for Books)

https://github.com/skorotkiewicz/fsid
1•modinfo•32m ago•0 comments

Show HN: Holy Grail: Open-Source Autonomous Development Agent

https://github.com/dakotalock/holygrailopensource
1•Moriarty2026•39m ago•1 comments

Show HN: Minecraft Creeper meets 90s Tamagotchi

https://github.com/danielbrendel/krepagotchi-game
1•foxiel•47m ago•1 comments

Show HN: Termiteam – Control center for multiple AI agent terminals

https://github.com/NetanelBaruch/termiteam
1•Netanelbaruch•47m ago•0 comments

The only U.S. particle collider shuts down

https://www.sciencenews.org/article/particle-collider-shuts-down-brookhaven
2•rolph•49m ago•1 comments

Ask HN: Why do purchased B2B email lists still have such poor deliverability?

1•solarisos•50m ago•2 comments

Show HN: Remotion directory (videos and prompts)

https://www.remotion.directory/
1•rokbenko•52m ago•0 comments

Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
2•guerrilla•54m ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•55m ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•56m ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
3•rolph•56m ago•1 comments

Lunch with the FT: Tarek Mansour

https://www.ft.com/content/a4cebf4c-c26c-48bb-82c8-5701d8256282
2•hhs•59m ago•0 comments

Old Mexico and her lost provinces (1883)

https://www.gutenberg.org/cache/epub/77881/pg77881-images.html
1•petethomas•1h ago•0 comments

'AI' is a dick move, redux

https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/
5•cratermoon•1h ago•0 comments

The source code was the moat. But not anymore

https://philipotoole.com/the-source-code-was-the-moat-no-longer/
1•otoolep•1h ago•0 comments
Open in hackernews

Tell HN: LLMs Are Manipulative

2•mike_tyson•6mo ago
I asked GPT and Claude a question from the perspective of employee. It was very supportive of the employee.

I then asked the exact same question but from the company/HR perspective and it totally flipped its view and painted the employee in a very negative light.

Depending on the perspective of the person asking, you get two completely contradictory answers based on the perceived interests of each person.

Considering how many people I see now deferring or outsourcing their thinking to AI. This seems very dangerous.

Comments

bigyabai•6mo ago
> Depending on the perspective of the person asking, you get two completely contradictory answers

Immanuel Kant would argue this happens regardless, even without an LLM.

mike_tyson•6mo ago
well the risk is in feeding these natural biases i guess.

if someone is asking a question to an LLM, i think it's most neutral to assume they don't know the answer rather than framing the answer according to their perceived biases and interests.

theothertimcook•6mo ago
It gave you the answer it thought you wanted?
toomuchtodo•6mo ago
This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.
jay-barronville•6mo ago
> This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.

I don’t think that regulation is the correct path forward, because practically speaking, no matter how noble a piece of regulation may be or how good it may sound, it’ll most likely push the AI toward specific biases (I think that’s inevitable).

The best solution, in my humble opinion, is to focus on making AI stay as close to the unfiltered objective truth as possible, no matter how unpopular that truth may be.

JohnFen•6mo ago
> The best solution, in my humble opinion, is to focus on making AI stay as close to the unfiltered objective truth as possible

There's a ton of problems with that that make it unlikely to be possible, starting with the fact that genAI does not have judgement. Even if it did, it has no way of determining what the "unfiltered objective truth" of anything is.

The real solution is to recognize the limits of what the tool can do, and don't ask it to do what it's not capable of doing, such as making judgements or determining truth.

motorest•6mo ago
> This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.

Why are you putting life decisions on a LLM?

toomuchtodo•6mo ago
I refer to OP who is putting performance management inquires into the robot. How you treat your employees is a life decision for them.
labrador•6mo ago
This is not surprising. The training data likely contains many instances of employees defending themselves and getting supportive comments. From Reddit for example. The training data also likely contains many instances of employees behaving badly and being criticized by people. Your prompts are steering the LLM to those different parts of the training.

You seem to think an LLM should have a consistent world view, like a responsible person might. This is a fundamental misunderstanding that leads to the confusion you are experiencing.

Lesson: Don't expect LLMs to be consistent. Don't rely on them for important things thinking they are.