frontpage.

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
1•breve•1m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•4m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
1•pastage•4m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•5m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•10m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•16m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•17m ago•1 comment

Slop News - HN front page right now hallucinated as 100% AI SLOP

https://slop-news.pages.dev/slop-news
1•keepamovin•22m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•24m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•30m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•33m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•34m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•37m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•38m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•40m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•43m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•45m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•46m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•48m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•50m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•52m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•55m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•59m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•1h ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comment

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

We're making GPT-5 warmer and friendlier based on feedback that it felt formal

https://twitter.com/OpenAI/status/1956461718097494196
28•MallocVoidstar•5mo ago

Comments

sdotdev•5mo ago
Make it stop saying "Nice - " at the start of every response; that's annoying.
theodric•5mo ago
YMMV. I asked it for a list of something and it responded "I'm not in the habit of doing your homework, but here's a compact list[...]"
minimaxir•5mo ago
Does passive-aggression count as sycophancy?
minimaxir•5mo ago
OpenAI really does not want people using GPT-4o. The money presumably saved from GPT-5's routing must be very compelling.
kingstnap•5mo ago
I don't think it's entirely about the money. A lot of people just don't understand that you can change models.

My uncle, for example, was frequently using it for some Excel VB scripts and had no idea what o4-mini or o3 were.

delichon•5mo ago
I'd like a slider from sycophant to asshole please. And a checkbox to disable the zeroth law.
thrill•5mo ago
Sliders on all forms of false platitudes, so I can weld them to zero.
jhide•5mo ago
What does zeroth law mean in this context?
delichon•5mo ago
Asimov's Zeroth Law of Robotics: "A robot may not harm humanity, or by inaction, allow humanity to come to harm."

This is an addition to the other three laws embedded in positronic brains:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
To me the zeroth law echoes the paternalism built into LLMs, where they take on the role of shepherd rather than tool.

The other day I asked one a question, and didn't get an answer, but did get a lecture about how misleading the answer could be. I really don't want my encyclopedia to have an opinion about which facts I shouldn't know.

headinsand•5mo ago
“Bring it on down to 75, please.”

https://m.youtube.com/watch?v=p3PfKf0ndik

bitwize•5mo ago
The most annoyingly obsequious setting should be "Bubsy".

https://m.youtube.com/watch?v=khciDV8XvpY&t=10m0s

fragmede•5mo ago
yeah I set "never apologize" in the custom instructions because I couldn't stand it.
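Mechanically, a custom instruction like "never apologize" boils down to injecting a persistent system message ahead of each user turn. A minimal sketch, assuming the common role/content chat message format used by chat-completion APIs; the helper name and instruction text here are illustrative, not OpenAI's actual internals:

```python
# Sketch: persistent style instructions become a system message that is
# prepended to every request, before the user's actual prompt.

def build_messages(custom_instructions: str, user_prompt: str) -> list[dict]:
    """Prepend persistent style instructions as a system message."""
    return [
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Never apologize. Do not open replies with praise or filler.",
    "Summarize the trade-offs of SQLite as a web backend store.",
)
```

Putting the style constraint in the system role rather than the user turn is what lets it persist across a whole conversation instead of applying to a single message.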
Fade_Dance•5mo ago
It's not "genuine" when they say that every question is a "great question" and every thought is a "deep and profound observation."

As someone who actually likes to explore ideas and steelman myself in these chats, it's especially obnoxious because those comments do you no favors in guiding you down good paths on subjects you may be working on and learning.

Of course the average user likes getting their ego stroked. Looks like OpenAI will embrace the dopamine pumping style that pervades the digital realm.

SOLAR_FIELDS•5mo ago
I've had fun telling all these LLM's to act like Linus Torvalds and tear me down when I talk to them. Surprisingly effective
Fade_Dance•5mo ago
YOUR RESPONSE WAS LATE AND YOU MADE THE WORLD WORSE.
vrighter•5mo ago
AND THIS CODE IS GARBAGE!
leosanchez•5mo ago
Did it ever tell you that you should be retroactively aborted?
marak830•5mo ago
Thanks, I was going to tell Claude to keep its responses minimal, but perhaps tearing apart my ideas may be better xD
ComputerGuru•5mo ago
I mean, it’s not genuine regardless of how often or little it’s said because it is, literally and by definition, artificial praise. Which is harmful in any quantity.
webdevver•5mo ago
there appear to be two major emerging use cases/markets for LLMs:

- synthetic friend

- a tool that happens to be much faster than google/ctrl-f/man-pages/poking around github

perhaps offer GPT-5-worker and GPT-5-friend?

nojs•5mo ago
Right, it seems like these two use cases are rapidly diverging.
pragmatic•5mo ago
- a tool that happens to be much slower than google/ctrl-f/man-pages/poking around github
dangus•5mo ago
I don’t really think this is the gist of it.

What happened with GPT-5 is that the product changed abruptly and significantly.

I don’t think most people are looking to use ChatGPT as a virtual friend, but overnight the product changed from having a very friendly (yes perhaps almost too friendly) personality to being terse.

If the product had always been like that, or had slowly evolved into it, it wouldn't have been a big deal.

altbdoor•5mo ago
> What happened with GPT-5 is that the product changed abruptly and significantly.

And also the blazing fast deprecation of all other models in ChatGPT when 5 was announced.

myaccountonhn•5mo ago
Looking at the reactions on Reddit to 4o being removed was... sobering. The reason they toned it down, they claimed, was that the sycophantic behavior and the attachment some users were developing weren't healthy. It was pathetic to see them not stand their ground when, at the same time, you see people develop these super unhealthy relationships with text generators.
KoolKat23•5mo ago
It pays the bills.
BrawnyBadger53•5mo ago
The alternative was likely that these people would move to platforms that actively prey on this instead. Imo it's good that they get people onto the newer model and help them bridge the way to healthier conversations. Though realistically, these people should just be using the personality tools and the default should be robotic so new people don't join the problem.
dangus•5mo ago
I don’t think it has anything to do with unhealthy attachment.

It has to do with an abrupt product change that was too different from the previous thing that everyone liked.

tills13•5mo ago
lol everyone saw this coming. People just want to be told that they are geniuses, all their ideas are great, they are funny, etc. It's literally a yes-man.
Simulacra•5mo ago
"Kinder, gentler machine gun hand"
Simulacra•5mo ago
No seriously
etler•5mo ago
Friendlier doesn't necessarily mean gassing you up. This is a very narrow interpretation of the complaints.
dcchambers•5mo ago
This is the opposite of what humanity needs.

We do NOT need to humanize AI.

everybodyknows•5mo ago
> You'll notice small, genuine touches like “Good question” or “Great start,” not flattery.

They've redefined "flattery". And "genuine".

AceyMan•5mo ago
I took three 45m sessions of user training from OpenAI prior to the GPT-5 switcheroo. I know when to switch models. I know how to invoke Deep Research mode. I want my GPT-4 stuff back.
zaphirplane•5mo ago
What, so it’s all fake, and some sweatpants-wearing software dev working from home can just make it friendlier? I am devastated; I thought it was real. Paying for it makes me feel dirty.
whinvik•5mo ago
I really don't think this is a good idea. All the negative comments seem to have been from people who almost treated 4o as a friend rather than a tool. I don't think encouraging that direction is good in any way.
hoss1474489•5mo ago
And here was me thinking I was asking more insightful questions today than I was yesterday.
xtiansimon•5mo ago
> “…small, genuine touches…not flattery. Internal tests show no rise in sycophancy…”

While I’ve not experienced this new UX yet, I appreciate what the marketing team did in this tweet: careful wording that adds to the framing of artificial personalities (small moves, honest [1], not flattering, not sycophantic).

I’m not using ChatGPT much, but I have used it as a study aid while reading a book on JS/React. When I was confused about something or encountered a gap in my understanding of the text, I noticed the first few words of the chat’s reply were doing work to tell me whether I was on point, or in the realm, or to set me up for correction (even if I’m “on point”, I do _continue reading_). I think of these small moves like _map orientation_: you can’t effectively use a map on the ground until you align the map to the territory. Do you see?

I encounter artificial phone personalities and human CSRs frequently at work, and their sycophantic scripts enrage me because I’m trying to get something done: pay a bill, request human support. Adding emotional cues either slows me down or is meant to manipulate my emotional reactions (generating the opposite reaction).

[1]: one caveat, I do have a small problem with their use of the word “genuine”. I don’t believe an artificial personality can possess “authenticity”, which is the associated meaning of “genuine”. I don’t much like “honesty” either, but it’s closer to the point. I do appreciate a cue to how correct my input was.

Topfi•5mo ago
I have spilled a lot of ink about my thoughts on the vast majority of “AI ethics” work done at frontier LLM labs already, but indulge me as I do it again.

While some consideration for extreme scenarios can have merit, I don't think the industry is paying attention to the most pressing issues of our time. Especially this early and given (admittedly my personal) doubts that LLMs could ever attain what one could consider intelligence, the focus on doomsday scenarios and preventing models outputting information that has been freely accessible on the internet for decades is simply not the best use of our limited attention. These topics take up an obscene percentage in system cards, public reporting, and discussions on regulation by thought leaders, partly because this does benefit companies by increasing investment ("we are so close to Super-Mega-Hyper-AGI that we have to keep the LLM from nuking us, if you invest with us we'll crack that in a trifle"[0]) and keeping regulators from focusing on more immediate concerns that could affect profits.

Current day ethical issues like copyright owner compensation, misinformation at scale, and how these tools affect people’s psyche do not get nearly enough focus. We have already had multiple people die [1] due to improper guardrails. We appear to have people isolating themselves by replacing (or no longer seeking) companionship with LLM output. And we have seen attempts to pass off model generated content as real in order to influence political reporting [2]. These are real issues we see right now, yet the industry seems unwilling to fully wrestle with how we could address them.

In this context, I actually saw GPT-5 as a step in the right direction. I had (naive, I admit) hopes this signaled OpenAI shifting toward tangible, current day concerns, complementing Anthropic’s more grounded publications that put an emphasis on real world tasks a professional user may encounter (heck, their Claude 4 system card looked into LLM tool calls when the model was asked to be bold and make value judgements, which led to them being criticized for "snitching", when multiple models appear to do the same [3]).

Researchers putting some focus on down-the-line issues is fine. I am not against that; maybe it'll be of great value in a hypothetical future. But “super alignment” and Terminator-level scenarios mustn't be the only things considered. GPT-5, again, seemed to be designed in a way that considered users' mental health and the issues that arise from overly agreeable output better than most, even though it wasn't reported as a focus for them. I found that a good step.

Then again, my experience with GPT-5 and Horizon Alpha/Beta seemed different from most either way. The latter did unimaginably worse in my personal testing (appallingly so, I maintain), while the former was far more impressive to me than the common sentiment suggests, especially in dealing with extensive context, handling slight (intentional) tool call changes that weren't previously provided to the model, and longer-term task coherence. Regardless of raw performance, GPT-5 being closer to o3 or o4 than to GPT-4o in subjective agreeableness seemed like a good development for reducing some of the harm these models may cause to certain susceptible users' psyches, especially if it continues. We have already seen a subset of users largely or even entirely ceasing to seek out human companionship, which is likely to affect them in not yet fully understood ways over the long term. As models advance, this may expand to an ever increasing fraction of the user base, and in my opinion it needs to be of concern to the entire industry.

If OpenAI now walks this back, I will be severely disappointed. I would also be surprised if there weren't precise internal insights about which users are most impacted and how, similar to how cigarette and gambling companies have long known who drives profit and at what cost, while fighting regulation. If this continues and we see more people turning to LLMs for companionship whilst isolating themselves, I could see a similar trajectory in a few decades, or given the pace, in just a few years. Essentially, a lot of people suffering in the future due to a lack of regulatory action in the present.

Then again, even if all frontier LLM labs take a moral stance on this, there may be others who fill that demand and, of course, local models are always an option. So maybe the impact of this technology on certain users' psyches has already become a future public healthcare expenditure we cannot prevent, which would be very depressing indeed.

I want to add that I wouldn't be surprised if a large number of researchers argued for similar positions behind closed doors, but the conversation is unfortunately dominated by a mix of annoyingly loud, yet not very grounded in reality [4], folks, alongside a group of investors and company leaders who keep using the former to, as mentioned, justify hype-bubble valuations and draw attention away from actually impactful regulation.

[0] https://www.windowscentral.com/artificial-intelligence/opena...

[1] https://www.reuters.com/investigates/special-report/meta-ai-... and https://www.nbcwashington.com/investigations/moms-lawsuit-bl...

[2] https://www.theguardian.com/us-news/2025/aug/07/chris-cuomo-... and https://www.reuters.com/fact-check/video-does-not-show-ukrai...

[3] https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686... and https://snitchbench.t3.gg

[4] https://xkcd.com/1450/ and https://en.wikipedia.org/wiki/Pascal%27s_wager