frontpage.

Show HN: Pangolin – Open source alternative to Cloudflare Tunnels

https://github.com/fosrl/pangolin
110•miloschwartz•7h ago•16 comments

Postgres LISTEN/NOTIFY does not scale

https://www.recall.ai/blog/postgres-listen-notify-does-not-scale
352•davidgu•3d ago•134 comments

Batch Mode in the Gemini API: Process More for Less

https://developers.googleblog.com/en/scale-your-ai-workloads-batch-mode-gemini-api/
63•xnx•3d ago•18 comments

Australia is quietly introducing age checks for search engines like Google

https://www.abc.net.au/news/2025-07-11/age-verification-search-engines/105516256
45•ahonhn•1h ago•21 comments

The ChompSaw: A Benchtop Power Tool That's Safe for Kids to Use

https://www.core77.com/posts/137602/The-ChompSaw-A-Benchtop-Power-Tool-Thats-Safe-for-Kids-to-Use
122•surprisetalk•3d ago•81 comments

Series of posts on HTTP status codes

https://evertpot.com/http/
18•antonalekseev•1d ago•4 comments

What is Realtalk’s relationship to AI? (2024)

https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relationship_to_AI
242•prathyvsh•13h ago•81 comments

Show HN: Open source alternative to Perplexity Comet

https://www.browseros.com/
188•felarof•11h ago•65 comments

FOKS: Federated Open Key Service

https://foks.pub/
197•ubj•16h ago•43 comments

Apple-1 Computer, handmade by Steve Jobs [video]

https://www.youtube.com/watch?v=XdBKuBhdZwg
33•guiambros•2d ago•6 comments

Graphical Linear Algebra

https://graphicallinearalgebra.net/
205•hyperbrainer•13h ago•15 comments

Flix – A powerful effect-oriented programming language

https://flix.dev/
240•freilanzer•15h ago•96 comments

Measuring the impact of AI on experienced open-source developer productivity

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
556•dheerajvs•12h ago•357 comments

Red Hat Technical Writing Style Guide

https://stylepedia.net/style/
180•jumpocelot•14h ago•77 comments

America's fastest-growing suburbs are about to get expensive

https://www.vox.com/future-perfect/417892/suburbs-sunbelt-housing-affordability-yimby
6•littlexsparkee•28m ago•1 comment

Launch HN: Leaping (YC W25) – Self-Improving Voice AI

57•akyshnik•11h ago•27 comments

eBPF: Connecting with Container Runtimes

https://h0x0er.github.io/blog/2025/06/29/ebpf-connecting-with-container-runtimes/
44•forxtrot•9h ago•1 comment

Analyzing database trends through 1.8M Hacker News headlines

https://camelai.com/blog/hn-database-hype/
133•vercantez•3d ago•66 comments

Grok 4

https://simonwillison.net/2025/Jul/10/grok-4/
239•coloneltcb•9h ago•178 comments

AI coding tools can reduce productivity

https://secondthoughts.ai/p/ai-coding-slowdown
89•gk1•5h ago•59 comments

Diffsitter – A Tree-sitter based AST difftool to get meaningful semantic diffs

https://github.com/afnanenayet/diffsitter
108•mihau•16h ago•28 comments

Belkin ending support for older Wemo products

https://www.belkin.com/support-article/?articleNum=335419
68•apparent•10h ago•54 comments

Nerve pain drug gabapentin linked to increased dementia, cognitive impairment

https://medicalxpress.com/news/2025-07-nerve-pain-drug-gabapentin-linked.html
38•clumsysmurf•3h ago•24 comments

Researchers create 3D interactive digital room from simple video

https://news.cornell.edu/stories/2025/06/researchers-create-3d-interactive-digital-room-simple-video
5•rbanffy•3d ago•0 comments

Matt Trout has died

https://www.shadowcat.co.uk/2025/07/09/ripples-they-cause-in-the-world/
168•todsacerdoti•21h ago•47 comments

Regarding Prollyferation: Followup to "People Keep Inventing Prolly Trees"

https://www.dolthub.com/blog/2025-07-03-regarding-prollyferation/
47•ingve•3d ago•1 comment

The Lumina Probiotic May Cause Blindness in the Same Way as Methanol

https://substack.com/home/post/p-168042147
58•exolymph•1h ago•21 comments

Is Gemini 2.5 good at bounding boxes?

https://simedw.com/2025/07/10/gemini-bounding-boxes/
264•simedw•16h ago•58 comments

Foundations of Search: A Perspective from Computer Science (2012) [pdf]

https://staffwww.dcs.shef.ac.uk/people/J.Marshall/publications/SFR09_16%20Marshall%20&%20Neumann_PP.pdf
11•mooreds•3d ago•0 comments

Show HN: Cactus – Ollama for Smartphones

128•HenryNdubuaku•9h ago•48 comments

Grok: Searching X for "From:Elonmusk (Israel or Palestine or Hamas or Gaza)"

https://simonwillison.net/2025/Jul/11/grok-musk/
58•simonw•4h ago

Comments

rasengan•4h ago
In the future, there will need to be a lot of transparency about the data corpora and other inputs used to build these LLMs, lest we enter an era where 'authoritative' LLMs carry the bias of their owners, moving control of the narrative into those owners' hands.
mingus88•4h ago
Not much different than today’s media, tbh.
rideontime•3h ago
One interesting detail about the "Mecha-Hitler" fiasco that I noticed the other day - usually, Grok would happily provide its sources when requested, but when asked to cite its evidence for a "pattern" of behavior from people with Ashkenazi Jewish surnames, it would remain silent.
xnx•4h ago
> It’s worth noting that LLMs are non-deterministic,

This is probably better phrased as "LLMs may not provide consistent answers due to changing data and built-in randomness."

Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.

msgodel•4h ago
I run my local LLMs with a seed of one. If I re-run my "ai" command (which starts a conversation with its parameters as a prompt) I get exactly the same output every single time.
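A minimal sketch of that kind of setup, assuming llama-cpp-python (the model path and prompt are placeholders): with a fixed seed and temperature 0, repeated single-request runs return the identical completion.

  from llama_cpp import Llama  # assumes llama-cpp-python is installed

  # Placeholder model path; fixed seed so sampling is reproducible.
  llm = Llama(model_path="./model.gguf", seed=1, verbose=False)

  outputs = set()
  for _ in range(3):
      r = llm("Summarize this thread in one sentence.",
              max_tokens=64, temperature=0.0)
      outputs.add(r["choices"][0]["text"])

  print(len(outputs))  # expected: 1 -- the same text every run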
xnx•4h ago
Yes. This is what I was trying to say. Saying "It’s worth noting that LLMs are non-deterministic" is wrong and should be changed in the blog post.
boroboro4•3h ago
You're correct for batch size 1 (which is what local inference uses), but not for the production case, where multiple requests get batched together (and that's how all the providers run things).

With batching, the matrix shapes and a request's position within the batch aren't deterministic, and that leads to non-deterministic results regardless of sampling temperature/seed.
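A minimal NumPy illustration of the underlying numeric effect (the values are made up): float32 additions round differently depending on how they are grouped, and batching changes that grouping inside the matmuls, so logits can shift slightly between otherwise identical requests.

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.standard_normal(100_000).astype(np.float32)

  a = np.sum(x)            # one accumulation order
  b = np.sum(np.sort(x))   # same values, different order
  print(a == b)            # typically False in float32
  print(float(a), float(b))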

unsnap_biceps•3h ago
Isn't that true only if the batches are different? If you run exactly the same batch, you're back to a deterministic result.

With a black-box API, just because you don't know how the output is computed doesn't mean it's non-deterministic. That's determined by the underlying algorithm, and an LLM is deterministic.

boroboro4•3h ago
Providers never run the same batches, because they mix requests from different clients; otherwise the GPUs would be severely underutilized.

It's inherently non-deterministic because it reflects the reality of different requests arriving at the servers at the same time. And I don't believe there are any realistic workarounds if you want to keep costs reasonable.

Edit: there might be workarounds if matmul algorithms offered stronger guarantees than they do today (invariance under row/column swaps). I'm not expert enough to say how feasible that is, especially in the quantized case.

lgessler•3h ago
In my (poor) understanding, this can depend on hardware details. What are you running your models on? I haven't paid close attention to this with LLMs, but I've tried very hard to get non-deterministic behavior out of my training runs for other kinds of transformer models and was never able to on my 2080, 4090, or an A100. PyTorch docs have a note saying that in general it's impossible: https://docs.pytorch.org/docs/stable/notes/randomness.html

Inference on a generic LLM may not be subject to these non-determinisms even on a GPU though, idk
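For reference, these are the knobs the linked PyTorch note describes (a sketch, not a guarantee across hardware or library versions):

  import os
  os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # needed for some deterministic CUDA ops

  import random
  import numpy as np
  import torch

  random.seed(0)
  np.random.seed(0)
  torch.manual_seed(0)                      # seeds CPU and all CUDA devices

  torch.use_deterministic_algorithms(True)  # error instead of silently using nondeterministic kernels
  torch.backends.cudnn.deterministic = True
  torch.backends.cudnn.benchmark = False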

simonw•4h ago
I don't think those race conditions are rare. None of the big hosted LLMs provide a temperature=0 plus fixed seed feature which they guarantee won't return different results, despite clear demand for that from developers.
xnx•3h ago
Fair. I dislike "non-deterministic" as a blanket descriptor for all LLMs, since it implies some type of magic or quantum effect.
dekhn•2h ago
I see LLM inference as sampling from a distribution. Multiple details go into that sampling - everything from parameters like temperature, to numerical imprecision, to batch-mixing effects, as well as the next-token-selection approach (always pick the max, sample from the posterior distribution, etc.). But ultimately, if it were truly important to get stable outputs, everything listed above could be engineered away (temp=0, very good numerical control, no batching, and always picking the max-probability next token).

dekhn from a decade ago cared a lot about stable outputs. dekhn today thinks sampling from a distribution is a far more practical approach for nearly all use cases. I could see it mattering when the false negative rate of a medical diagnostic exceeded a reasonable threshold.
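A small sketch of those next-token-selection knobs with made-up logits: temperature rescales the distribution, and greedy argmax is the deterministic limit of it.

  import numpy as np

  def softmax(z):
      z = z - np.max(z)        # for numerical stability
      e = np.exp(z)
      return e / e.sum()

  logits = np.array([2.0, 1.5, 0.3, -1.0])   # hypothetical scores for 4 tokens

  greedy = int(np.argmax(logits))            # temp -> 0: same token every time

  temperature = 0.8
  probs = softmax(logits / temperature)
  sampled = int(np.random.default_rng().choice(len(logits), p=probs))  # varies run to run

  print(greedy, sampled, probs.round(3))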

kcb•3h ago
FP arithmetic is non-associative, so the result depends on the order of operations.
boroboro4•3h ago
That doesn't make it non-deterministic on its own, though.

But it does when coupled with non-deterministic request batching, which is the case here.

labrador•4h ago
Musk has a good understanding of what people expect from AI from a science, tech and engineering perspective, but it seems to me he has little understanding of what people expect from AI from a social, cultural, political or personal perspective. He seems to have trouble with empathy, which is necessary to understand the feelings of other people.

If he did have a sense of what people expect, he would know nobody wants Grok to give his personal opinion on issues. They want Grok to explain the emotional landscape of controversial issues, explaining the passion people feel on both sides and the reasons for their feelings. Asked to pick a side with one word, the expected response is "As an AI, I don't have an opinion on the matter."

He may be tuning Grok based on a specific ideological framework that prioritizes contrarian or ‘anti-woke’ narratives. That's turning out to be disastrous. He needs someone like Amanda Askell at Anthropic to help guide the tuning.

alfalfasprout•3h ago
> Musk has a good understanding of what people expect from AI from a science, tech and engineering perspective, but it seems to me he has little understanding of what people expect from AI from a social, cultural, political or personal perspective. He seems to have trouble with empathy, which is necessary to understand the feelings of other people.

Absolutely. That said, I'm not sure Sam Altman, Dario Amodei, and others are notably empathetic either.

labrador•3h ago
Dario Amodei has Amanda Askell and her team. Sam has a Model Behavior Team. Musk appears to be directing model behavior himself, with predictable outcomes.
dankai•4h ago
This is so in character for Musk, and shocking because he's incompetent across so many of the topics he likes to opine on. It's crazy that he would nerf his own AI company's model like that.
sorcerer-mar•4h ago
Megalomania is a hell of a drug
simonw•4h ago
I think the wildest thing about the story may be that it's possible this is entirely accidental.

LLM bugs are weird.

mac-attack•3h ago
Curious if there is a threshold/sign that would convince you that the last week of Grok snafus are features instead of bugs, or that Elon no longer warrants the benefit of the doubt.

Ignoring the context of the past month, in which he has repeatedly said he plans on 'fixing' the bot to align with his perspective, feels like the LLM world's equivalent of "to me it looked like he was waving awkwardly", no?

simonw•3h ago
He's definitely trying to make it less "woke". The way he's going about it reminds me of Sideshow Bob stepping on rakes.
wredcoll•1h ago
What do you mean, the way he's going about it? He wanted it to be less woke, it started praising Hitler, that's literally the definition of less woke.
bix6•4h ago
Why people use X is beyond me. I can’t imagine paying $20/mo for the privilege of being constantly turd walloped.
bananalychee•3h ago
For one, it's one of the few social networks where you don't get harassed or banned for not being a devout leftist. The fact that every remotely popular alternative is so hostile to opinions that deviate from the religious order helps it stay relevant. Not that I'd ever pay for it; that's not my preferred model.
philistine•3h ago
So … it’s a safe space to protect your feelings because you don’t like getting harassed.

It’s so fascinating that right-wing views are so similar to what is usually decried in the next sentence.

wredcoll•1h ago
It does give a very succinct answer to the question "what types of people are still using twitter" though.
pupppet•1h ago
Let me guess: your "opinions" amount to being awful to people, and you don't like anyone pushing back.
felineflock•3h ago
Wait... Elon Musk supports Israel? Weren't we all supposed to think Elon Musk was a Nazi because of the salute?
rideontime•3h ago
Consider why an ethnonationalist would support Israel.
felineflock•3h ago
Never heard of that word before in the media.
mac-attack•3h ago
The term was coined over 75 years ago, if 'the media' isn't your thing.
lr0•1h ago
> Never heard of that word before in the media.

Perhaps you should start looking for other methods to educate yourself.

senectus1•3h ago
Don't be naive; you can be an asshole of many different shapes and colors simultaneously.
marcusb•3h ago
This reminds me in a way of the old Noam Chomsky/Tucker Carlson exchange where Chomsky says to Carlson:

  "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting where you’re sitting."
Simon may well be right that xAI didn't directly instruct Grok to check what the boss thinks before responding, but that's not to say xAI wouldn't be more likely to release a model that agrees with the boss a lot and privileges what he has said when reasoning.
lr0•1h ago
Why is that flagged? The post doesn't take a stance on the ongoing genocide in Gaza; it's purely analyzing the LLM's response from a technical perspective.
MallocVoidstar•1h ago
It makes Musk/X look bad, so it gets flagged.
chambo622•1h ago
Not sure why this is flagged. Relevant analysis.