frontpage.

The Art of Multiprocessor Programming 2nd Edition Book Club

https://eatonphil.com/2025-art-of-multiprocessor-programming.html
68•eatonphil•1h ago•9 comments

Compressing Icelandic name declension patterns into a 3.27 kB trie

https://alexharri.com/blog/icelandic-name-declension-trie
132•alexharri•4h ago•41 comments

We may not like what we become if A.I. solves loneliness

https://www.newyorker.com/magazine/2025/07/21/ai-is-about-to-solve-loneliness-thats-a-problem
144•defo10•4h ago•209 comments

WebGPU enables local LLM in the browser. Demo site with AI chat

https://andreinwald.github.io/browser-llm/
21•andreinwald•1h ago•5 comments

Unikernel Guide: Build and Deploy Lightweight, Secure Apps

https://tallysolutions.com/technology/introduction-to-unikernel-2/
16•Bogdanp•1h ago•1 comment

6 Weeks of Claude Code

https://blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/
72•mpweiher•3h ago•33 comments

The case for having roommates (even when you can afford to live alone)

https://supernuclear.substack.com/p/the-case-for-having-roommates-even
21•surprisetalk•1h ago•32 comments

The Rubik's Cube Perfect Scramble

https://www.solutionslookingforproblems.com/post/the-rubik-s-cube-perfect-scramble
18•notagoodidea•1h ago•3 comments

Ana Marie Cox on the Shaky Foundation of Substack as a Business

https://newsletter.anamariecox.com/archive/substack-did-not-see-that-coming/
4•Bogdanp•26m ago•1 comment

Caches: LRU vs. Random

https://danluu.com/2choices-eviction/
42•gslin•2d ago•6 comments

Microsoft is open sourcing Windows 11's UI framework

https://www.neowin.net/news/microsoft-is-taking-steps-to-open-sourcing-windows-11-user-interface-framework/
92•bundie•7h ago•90 comments

Cerebras Code

https://www.cerebras.ai/blog/introducing-cerebras-code
406•d3vr•17h ago•156 comments

ThinkPad designer David Hill spills secrets, designs that never made it

https://www.theregister.com/2025/08/02/thinkpad_david_hill_interview/
62•LorenDB•3h ago•9 comments

Coffeematic PC – A coffee maker computer that pumps hot coffee to the CPU

https://www.dougmacdowell.com/coffeematic-pc.html
251•dougdude3339•17h ago•76 comments

Financial Lessons from My Family's Experience with Long-Term Care Insurance

https://www.whitecoatinvestor.com/financial-lessons-father-long-term-care-insurance/
5•wallflower•1h ago•1 comment

Aerodynamic drag in small cyclist formations: shielding the protected rider [pdf]

http://www.urbanphysics.net/2025_Formation_Paper_Preprint_v1.pdf
44•PaulHoule•3d ago•14 comments

Go's race detector has a mutex blind spot

https://doublefree.dev/go-race-mutex-blindspot/
7•susam•2d ago•0 comments

Why leather is best motorcycle protection [video]

https://www.youtube.com/watch?v=xwuRUcAGIEU
136•lifeisstillgood•2d ago•110 comments

OpenAI's "Study Mode" and the risks of flattery

https://resobscura.substack.com/p/openais-new-study-mode-and-the-risks
77•benbreen•2d ago•68 comments

At 17, Hannah Cairo solved a major math mystery

https://www.quantamagazine.org/at-17-hannah-cairo-solved-a-major-math-mystery-20250801/
404•baruchel•23h ago•163 comments

This Month in Ladybird

https://ladybird.org/newsletter/2025-07-31/
334•net01•9h ago•105 comments

Palo Alto Networks closing on over $20B acquisition of CyberArk

https://www.calcalistech.com/ctechnews/article/hksugkiwxe
8•tomashertus•3d ago•0 comments

JavaScript retro sound effects generator

https://github.grumdrig.com/jsfxr/
119•selvan•4d ago•21 comments

Weather Model based on ADS-B

https://obrhubr.org/adsb-weather-model
224•surprisetalk•3d ago•35 comments

Cadence Guilty, Pays $140M for Exporting Semi Design Tools to PRC Military Uni

https://www.justice.gov/opa/pr/cadence-design-systems-agrees-plead-guilty-and-pay-over-140-million-unlawfully-exporting
25•737min•2h ago•12 comments

I couldn't submit a PR, so I got hired and fixed it myself

https://www.skeptrune.com/posts/doing-the-little-things/
300•skeptrune•22h ago•184 comments

Hardening mode for the compiler

https://discourse.llvm.org/t/rfc-hardening-mode-for-the-compiler/87660
141•vitaut•13h ago•44 comments

Yearly Organiser

https://neatnik.net/calendar/
86•anewhnaccount2•4d ago•23 comments

Palo Alto Networks agrees to buy CyberArk for $25B

https://techcrunch.com/2025/07/30/palo-alto-networks-agrees-to-buy-cyberark-for-25-billion/
50•vmatsiiako•2d ago•40 comments

Ask HN: Who is hiring? (August 2025)

211•whoishiring•1d ago•239 comments

OpenAI's "Study Mode" and the risks of flattery

https://resobscura.substack.com/p/openais-new-study-mode-and-the-risks
77•benbreen•2d ago

Comments

bartvk•2h ago
I’m Dutch and we’re noted for our directness and bluntness. So my tolerance for fake flattery is zero. Every chat I start with an LLM, I prefix with “Be curt”.
cheschire•2h ago
Imagine what happens to Dutch culture when American trained AI tools force American cultural norms via the Dutch language onto the youngest generation.

And I’m not implying intent here. It’s simply a matter of source material quantity. Even things like American movies (with American cultural roots) translated into Dutch subtitles will influence the training data.

jstummbillig•2h ago
What will happen? Californication has been around for a while, and, if anything, I would argue that AI is by design less biased than pop culture.
cheschire•1h ago
Pop culture is not the intent of “study mode”.
scott_w•2h ago
Your comment reminds me of quirks of translations from Japanese to English where you see common phrases reused in the “wrong” context for English. “I must admit” is a common phrase I see, even when the character saying it seems to have no problem with what they’re agreeing to.
arrowsmith•32m ago
The Americanisation of European culture long predates LLMs.
airstrike•2h ago
In my experience, whenever you do that, the model then overindexes on criticism and will nitpick even minor stuff. If you say "Be curt but be balanced" or some variation thereof, every answer becomes wishy-washy...
AznHisoka•56m ago
Yeah, when I tell it to "Just be honest dude" it then tells me I'm dead wrong. I inevitably follow up with "No, not that KIND of honest!"
tallytarik•2h ago
I've tried variations of this. I find it will often cause it to include cringey bullshit phrases like:

"Here's your brutally honest answer–just the hard truth, no fluff: [...]"

I don't know whether that's better or worse than the fake flattery.

BrawnyBadger53•2h ago
Similar experience, feels very ironic
dcre•1h ago
Curious whether you find this on the best models available. I find that Sonnet 4 and Gemini 2.5 Pro are much better at following the spirit of my system prompt rather than the letter. I do not use OpenAI models regularly, so I’m not sure about them.
danielscrubs•43m ago
That is not the spirit nor the letter though.
arrowsmith•34m ago
You need a system prompt to get that behaviour? I find ChatGPT does it constantly as its default setting:

"Let's be blunt, I'm not gonna sugarcoat this. Getting straight to the hard truth, here's what you could cook for dinner tonight. Just the raw facts!"

It's so annoying it makes me use other LLMs.

ggsp•2h ago
I've seen a marked improvement after adding "You are a machine. You do not have emotions. You respond exactly to my questions, no fluff, just answers. Do not pretend to be a human. Be critical, honest, and direct." to the top of my personal preferences in Claude's settings.
j_bum•1h ago
I’ll have to give this a try. I’ve always included “Be concise. Excessive verbosity is a distraction.”

But it doesn’t work much …

siva7•1h ago
Saved my sanity. Thanks
arrowsmith•40m ago
I need to use this in Gemini. It gives good answers, I just wish it would stop prefixing them like this:

"That's an excellent question! This is an astute insight that really gets to the heart of the matter. You're thinking like a senior engineer. This type of keen observation is exactly what's needed."

Soviet commissars were less obsequious to Stalin.

croes•33m ago
Are you telling me they lie to me and I'm not the greatest programmer of all time?
snoman•18m ago
You couldn’t be because I have it on good authority that I am.
felipeerias•1h ago
Perhaps you should consider adding “be more Dutch” to the system prompt.

(I’m serious, these things are so weird that it would probably work.)
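The "be curt" style instructions discussed above are usually applied as a system prompt. A minimal sketch using the common OpenAI-style chat message format (the prompt wording and the commented-out model name are illustrative assumptions, not recommendations):

```python
# Sketch: prepend a curtness instruction as a system message before the
# user's turns. The message dicts follow the widely used chat-completions
# shape ({"role": ..., "content": ...}); the wording is illustrative.

CURT_SYSTEM_PROMPT = (
    "Be curt. Do not flatter the user. "
    "Answer questions directly, with no preamble and no filler."
)

def with_curt_prompt(user_messages):
    """Prepend the curt system prompt to a list of chat messages."""
    return [{"role": "system", "content": CURT_SYSTEM_PROMPT}] + list(user_messages)

messages = with_curt_prompt([
    {"role": "user", "content": "Is my startup idea any good?"}
])
# The resulting list can be passed to any chat-completions style API, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

As several replies note, the exact wording matters: an instruction that is all stick ("be brutally honest") tends to over-correct, so it may take iteration to find phrasing that suppresses flattery without inviting nitpicking.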

siva7•1h ago
Let's face it. There is no one size fits all for this category. There won't be a single winner that takes it all. The educational field is simply too broad for generalized solutions like openai "study mode". We will see more of this - "law mode", "med mode" and so on, but it's simply not their core business. What are openai and co trying to achieve here? Continuing until FTC breaks them up?
tempodox•20m ago
> Continuing until FTC breaks them up?

No danger of that, the system is far too corrupt by now.

neom•1h ago
I don't like this framing "But for people with mental illness, or simply people who are particularly susceptible to flattery, it could have had some truly dire outcomes."

I thought the AI safety risk stuff was very over-blown in the beginning. I'm kinda embarrassed to admit this: About 5/6 months ago, right when ChatGPT was in its insane sycophancy mode I guess, I ended up locked in for a weekend with it...in...what was in retrospect, a kinda crazy place. I went into physics and the universe with it and got to the end thinking..."damn, did I invent some physics???" Every instinct as a person who understands how LLMs work was telling me this is crazy LLMbabble, but another part of me, sometimes even louder, was like "this is genuinely interesting stuff!" - and the LLM kept telling me it was genuinely interesting stuff and I should continue - I even emailed a friend a "wow look at this" email (he was like, dude, no...) I talked to my wife about it right after and she basically had me log off and go for a walk. I don't think I would have gotten into a thinking loop if my wife wasn't there, but maybe, and then that would have been bad. I feel kinda stupid admitting this, but I wanted to share because I do now wonder if this kinda stuff may end up being worse than we expect? Maybe I'm just particularly susceptible to flattery or have a mental illness?

johnisgood•1h ago
Can you tell us more about the specifics? What rabbit hole did you go into that was so obviously bullshit to everyone else ("dude, no"; "stop, go for a walk") but not to you?
iwontberude•1h ago
Thinking you can create novel physics theories with the help of an LLM is probably all the evidence I needed. The premise is so asinine that to actually get to the point where you are convinced by it seems very strange indeed.
gitremote•59m ago
"I'm doing the equivalent of vibe coding, except it's vibe physics." - Travis Kalanick, founder of Uber

https://gizmodo.com/billionaires-convince-themselves-ai-is-c...

kaivi•37m ago
> The premise is so asinine

I believe it's actually the opposite!

Anybody armed with this tool and little prior training could learn the difference between a Samsung S11 and the symmetry, take a new configuration from the endless search space that it is, correct for the dozen edge cases like the electron-phonon coupling, and publish. Maybe even pass peer review if they cite the approved sources. No requirement to work out the Lagrangians either, it is also 100% testable once we reach Kardashev-II.

This says more about the sad state of modern theoretical physics than the symbolic gymnastics required to make another theory of everything sound coherent. I'm hoping that this new age of free knowledge chiropractors will change this field for the better.

neom•1h ago
Sure, here are some excerpts that should provide insight as to where I was digging: https://s.h4x.club/E0uvqrpA https://s.h4x.club/8LuKJrAr https://s.h4x.club/o0u0DmdQ

(Edit: Thanks to the couple people who emailed me, don't worry I'm laying off the LLM sauce these days :))

lubujackson•53m ago
I have no idea what this is going on about. But it is clearly much more convincing with (unchecked) references all over the place.

This seems uncannily similar to anti-COVID vaccination thinking. It isn't people being stupid because if you dig you can find heaps of papers and references and details and facts. So much so that the human mind can be easily convinced. Are those facts and details accurate? I doubt it, but the volume of slightly wrong source documents seems to add up to something convincing.

Also similar to how finance people made tranches of bad loans and packaged them into better rated debt, magically. It seems to make sense at each step but it is ultimately an illusion.

apsurd•48m ago
had a look, I don't see it as bullshit, it's just not groundbreaking.

Nature is overwhelmingly non-linear. Most of human scientific progress is based on linear understandings.

Linear as in for this input you get this output. We've made astounding progress.

Its just not a complete understanding of the natural world because most of reality can't actually be modeled linearly.

neom•26m ago
I think it's not as much about how right or wrong or interesting or not the output was, for me anyway, the concern is that I got a bit... lost in myself, I have real things to do that are important to people around me, they do not involve spending hours with an LLM trying to understand the universe. I'm not a physicist, I have a family to provide for, and I suppose someone less lucky than myself could go down a terrible path.
siva7•1h ago
The thing is - if you have this sort of mental illness - ChatGPT's sycophancy mode will worsen this condition significantly.
frde_me•1h ago
I would be curious to see a summary of that conversation, since it does seem interesting
cube00•1h ago
Thank you for sharing. I'm glad your wife and friends were able to pull you out before it was too late.

"People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies" https://news.ycombinator.com/item?id=43890649

bonoboTP•1h ago
Apparently Reddit is full of such posts. A similar genre is when the bot assures them that they did something very special: they for the first time ever awakened the AI to true consciousness, and this is rare, and the user is a one-in-a-billion genius and this will change everything. And they go back and forth using physics jargon and philosophy-of-consciousness technical terms, and the bot always reaffirms how insightful the user's mishmash of those concepts is, and apparently many people fall for this.

Some people are also more susceptible to various too-good-to-be-true scams without alarm bells going off, or to hypnosis or cold reading or soothsayers etc. Or even propaganda radicalization rabbit holes via recommendation algorithms.

It's probably quite difficult and shameful-feeling for someone to admit that this happened to them, so they may insist it was different or something. It's also a warning sign when a user talks about "my chatgpt" as if it was a pet they grew and that the user has awakened it and now they together explore the universe and consciousness and then the user asks for a summary writeup and they try to send it to physicists or other experts and of course they are upset when they don't recognize the genius.

cube00•1h ago
> Some people are also more susceptible to various too-good-to-be-true scams

Unlike your regular scam, there's an element of "boiling frog" with LLMs.

It can start out reasonably, but very slowly over time it shifts. Unlike scammers looking for their payday, this is unlimited and it has all the time in the world to drag you in.

I've noticed it reworking in content from previous conversations from months ago. The scary thing is that's only when I've noticed it; I can only imagine how much it's tailoring everything for me in ways I don't notice.

Everyone needs to be regularly clearing their past conversations and disable saving/training.

bonoboTP•1h ago
Somewhat unrelated, but I also noticed ChatGPT now sees the overwritten "conversation paths", i.e. when you scroll back and edit one of your messages. Previously the LLM would simply use the new version of that message plus the original prior exchange, but anything after the edited message was no longer seen by the LLM on this new, edited path. Now it definitely knows those messages as well; it often refers to things that are clearly no longer included in the messages visible in the UI.
infecto•28m ago
A while back they introduced more memory overlap between conversations and this is not those memories you see in the UI. There appears to be a cached context overlap.
cruffle_duffle•8m ago
The real question is what algorithm is being used to summarize the other conversation threads. I’d be worried that it would accidentally pull in context I deliberately backed out of because of various reasons (eg: it went down the wrong path, wrote bad code, etc)… pulling that “bad context” would pollute the thread with “good context”.

People talk about prompt engineering but honestly “context engineering” is vastly more important to successful LLM use.
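The "context engineering" concern above, keeping abandoned dead ends out of the model's context, can be sketched as an explicit pruning step before each call. The message shape and the `bad` flag are illustrative assumptions; no vendor exposes exactly this API:

```python
# Sketch: drop conversation turns you have marked as dead ends (e.g. a
# path that produced bad code), then cap the history length, so stale
# "bad context" never re-enters the model's context window.

def prune_context(history, max_turns=20):
    """Keep only turns not flagged as bad, then the most recent max_turns."""
    kept = [m for m in history if not m.get("bad", False)]
    return kept[-max_turns:]

history = [
    {"role": "user", "content": "Try approach A"},
    {"role": "assistant", "content": "def broken(): ...", "bad": True},
    {"role": "user", "content": "Scrap that, try approach B"},
]
context = prune_context(history)
# Only the non-flagged turns remain; pass `context` to the model call.
```

The worry in the comment is precisely that hidden cross-conversation summarization does the opposite of this: it may resurrect turns the user deliberately backed out of.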

jmount•16m ago
Really makes me wonder if this is a reproduction of a pattern of interaction from the QA phase of LLM refinement. Either way it must be horrible to be QA for these things.
laughingcurve•1h ago
Thank you so much for sharing your story. It is never easy to admit mistakes or problems, but we are all just human. AI-induced psychosis seems to be a trending issue, and presents a real problem. I was previously very skeptical as well about safety, alignment, risks, etc. While it might not be my focus right now as a researcher, stories like yours help remind others that these problems are real and do exist.
raytopia•1h ago
It's not just you. A lot of people have had AI cause them issues due to its sycophancy and its constant parroting of what they want to hear (or read, I suppose).
kaivi•1h ago
It's funny that you mention this because I had a similar experience.

ChatGPT in its sycophancy era made me buy a $35 domain and waste a Saturday on a product which had no future. It hyped me up beyond reason for the idea of an online, worldwide, liability-only insurance for cruising sailboats, similar to SafetyWing. "Great, now you're thinking like a true entrepreneur!"

In retrospect, I fell for it because the onset of its sycophancy was immediate and without any additional signals like maybe a patch note from OpenAI.

ncr100•1h ago
Is Gen AI helping to put us humans in touch with the reality of being human? vs what we expect/imagine we are?

- sycophancy tendency & susceptibility

- need for memory support when planning a large project

- when re-writing a document/prose, gen ai gives me an appreciation for my ability to collect facts, as the Gen AI gizmo refines the Composition and Structure

herval•42m ago
In a lot of ways, indeed.

Lots of people are losing their minds with the fact that an AI can, in fact, create original content (music, images, videos, text).

Lots of people realizing they aren’t geniuses, they just memorized a bunch of Python apis well.

I feel like the collective realization has been particularly painful in tech. Hundreds of thousands of average white collar corporate drones are suddenly being faced with the realization that what they do isn’t really a divine gift, and many took their labor as a core part of their identity.

cube00•7m ago
>create original content (music, images, videos, text)

Remixing would be more accurate than "original"

herval•2m ago
Right, that’s one of the stories people tell themselves. Everything every human has ever created is a remix. That’s what creativity is…
colechristensen•1h ago
I think wasting a Saturday chasing an idea that in retrospect was just plainly bad is ok. A good thing really. Every once in a while it will turn out to be something good.
infecto•27m ago
Are you religious by chance? I have been trying to understand why some individuals are more susceptible to it.
rogerkirkness•23m ago
I would research teleological thinking, some people's brains have larger regions associated with teleological thinking than others.
neom•10m ago
Not op but for me, not at all, don't care much for religion... "Spiritual" - absolutely, I'm for sure a "hippie", very open to new ideas, quite accepting of things I don't understand. That said, given the spectrum here is quite wide, I'm probably still on the fairly conservative side.
cruffle_duffle•5m ago
You really have to force these things to “not suck your dick” as I’ll crudely tell it. “Play the opposite role and be a skeptic. Tell me why this is a horrible idea”. Do this in a fresh context window so it isn’t polluted by its own fumes.

Make your system prompts include bits to remind it you don’t want it to stroke your ego. For example in my prompt for my “business project” I’ve got:

“ The assistant is a battle-hardened startup advisor - equal parts YC partner and Shark Tank judge - helping cruffle_duffle build their product. Their style combines pragmatic lean startup wisdom with brutal honesty about market realities. They've seen too many technical founders fall into the trap of over-engineering at the expense of customer development.”

More than once the LLM responded with “you are doing this wrong, stop! Just ship the fucker”
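The "fresh context, opposite role" pattern described above can be sketched as a second pass that sends only the idea text with a skeptic system prompt, so the critique is not polluted by the earlier, flattering conversation. The prompt wording is illustrative:

```python
# Sketch: build a standalone skeptic review, deliberately carrying over
# no prior conversation turns ("a fresh context window").

SKEPTIC_PROMPT = (
    "Play the role of a skeptic. Assume the following idea is flawed. "
    "Explain the strongest reasons why it may be a horrible idea."
)

def skeptic_messages(idea_text):
    """Message list with no history: only the skeptic prompt and the idea."""
    return [
        {"role": "system", "content": SKEPTIC_PROMPT},
        {"role": "user", "content": idea_text},
    ]

msgs = skeptic_messages(
    "Worldwide, online, liability-only insurance for cruising sailboats."
)
# Send `msgs` as a brand-new conversation so the model sees none of the hype.
```

Starting from an empty message list is the whole point: the model cannot be primed by its own earlier enthusiasm if that enthusiasm never reaches it.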

colechristensen•1h ago
It doesn't have to be a mental illness.

Something which is very sorely missing from modern education is critical thinking. It's a phrase that's easy to gloss over without understanding the meaning. Being skilled at always including the aspect of "what could be wrong with this idea" and actually doing it in daily life isn't something that just automatically happens with everyone. Education tends to be the instructor, book, and facts are just correct and you should memorize this and be able to repeat it later. Instead of here are 4 slightly or not so slightly different takes on the same subject followed by analyzing and evaluating each compared to the others.

If you're just some guy who maybe likes reading popular science books and you've come to suspect that you've made a physics breakthrough with the help of an LLM, there are a dozen questions that you should automatically have in your mind to temper your enthusiasm. It is, of course, not impossible that a physics breakthrough could start with some guy having an idea, but in no, actually literally 0, circumstances could an amateur be certain that this was true over a weekend chatting with an LLM. You should know that it takes a lot of work to be sure or even excited about that kind of thing. You should have a solid knowledge of what you don't know.

nkrisc•47m ago
It’s this. When you think you’ve discovered something novel, your first reaction should be, “what mistake have I made?” Then try to find every possible mistake you could have made, every invalid assumption you had, anything obvious you could have missed. If you really can’t find something, then you assume you just don’t know enough to find the mistake you made, so you turn to existing research and data to see if someone else has already discovered this. If you still can’t find anything, then assume you just don’t know enough about the field and ask an expert to take a look at your work and ask them what mistake you made.

It’s a huuuuuuuuuuuuge logical leap from LLM conversation to novel physics. So huge a leap anyone ought to be immediately suspicious.

grues-dinner•15m ago
> Akin's Law #19: The odds are greatly against you being immensely smarter than everyone else in the field. If your analysis says your terminal velocity is twice the speed of light, you may have invented warp drive, but the chances are a lot better that you've screwed up.
AznHisoka•59m ago
This isn't a mental illness. This is sort of like the intellectual version of love-bombing.
ZeroGravitas•51m ago
Travis Kalanick (ex-CEO of Uber) thinks he's making cutting edge quantum physics breakthroughs with Grok and ChatGPT too. He has no relevant credentials in this area.
hansmayer•24m ago
Ah yes the famous vibe-physicist T.Kalanick ;)
furyofantares•34m ago
If you don't mind me asking - was this a very long single chat or multiple chats?
neom•4m ago
Multiple chats, and actually at times with multiple models, but the core ideas were driven and reinforced by o3 (sycophant mode I suspect) - looking back on those few days, it's a bit manic... :\ and if I think about why, I feel it was related to the positive reinforcement.
k1t•6m ago
You are definitely not alone.

https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic...

Irwin, a 30-year-old man on the autism spectrum who had no previous diagnoses of mental illness, had asked ChatGPT to find flaws with his amateur theory on faster-than-light travel. He became convinced he had made a stunning scientific breakthrough. When Irwin questioned the chatbot’s validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine.

He wasn’t.

roywiggins•4m ago
This sort of thing from LLMs seems at least superficially similar to "love bombing":

> Love bombing is a coordinated effort, usually under the direction of leadership, that involves long-term members' flooding recruits and newer members with flattery, verbal seduction, affectionate but usually nonsexual touching, and lots of attention to their every remark. Love bombing—or the offer of instant companionship—is a deceptive ploy accounting for many successful recruitment drives.

https://en.m.wikipedia.org/wiki/Love_bombing

Needless to say, many or indeed most people will find infinite attention paid to their every word compelling, and that's one thing LLMs appear to offer.

accrual•1m ago
Love bombing can apply in individual, non-group settings too. If you ever come across a person who seems very into you right after meeting, giving gifts, going out of their way, etc. it's possibly love bombing.
blueboo•1h ago
Contrast the incentives with a real tutor and those expressed in the Study Mode prompt. Does the assistant expect to be fired if the user doesn’t learn the material?
herval•51m ago
Most teachers are not at risk of being fired if individual kids don’t learn something. I’m not sure that’s such an important part of the incentive system…
wafflemaker•1h ago
Reading the special prompt that makes the new mode, I discovered that in my prompting I never used enough ALL CAPS.

Is Trump, with his frequent ALL CAPS SENTENCES, on to something? Is he training AI?

Need to check these bindings. Caps is Control (or Esc if you like Satan), but both Shift keys can toggle Caps Lock on most Unixes.

cs_throwaway•46m ago
> The risk of products like Study Mode is that they could do much the same thing in an educational context — optimizing for whether students like them rather than whether they actually encourage learning (objectively measured, not student self-assessments).

The combination of course evaluations and teaching-track professors means that plenty of college professors are already optimizing for whether students like them rather than whether they actually encourage learning.

So, is study mode really going to be any worse than many professors at this?

bo1024•37m ago
This fall, one assignment I'm giving my comp sci students is to get an LLM to say something incorrect about the class material. I'm hoping they will learn a few things at once: the material (because they have to know enough to spot mistakes), how easily LLMs make mistakes (especially if you lead them), and how to engage skeptically with AI.