frontpage.

AlphaGenome: AI for better understanding the genome

https://deepmind.google/discover/blog/alphagenome-ai-for-better-understanding-the-genome/
373•i_love_limes•11h ago•110 comments

A lumberjack created more than 200 sculptures in Wisconsin's Northwoods

https://www.smithsonianmag.com/travel/when-a-lumberjacks-imagination-ran-wild-he-created-more-than-200-sculptures-in-wisconsins-northwoods-180986840/
3•noleary•11m ago•1 comment

Launch HN: Issen (YC F24) – Personal AI language tutor

227•mariano54•11h ago•205 comments

The time is right for a DOM templating API

https://justinfagnani.com/2025/06/26/the-time-is-right-for-a-dom-templating-api/
86•mdhb•6h ago•47 comments

Alternative Layout System

https://alternativelayoutsystem.com/scripts/#same-sizer
128•smartmic•6h ago•16 comments

Kea 3.0, our first LTS version

https://www.isc.org/blogs/kea-3-0/
52•conductor•5h ago•19 comments

How much slower is random access, really?

https://samestep.com/blog/random-access/
40•sestep•3d ago•7 comments

Fault Tolerant Llama training

https://pytorch.org/blog/fault-tolerant-llama-training-with-2000-synthetic-failures-every-15-seconds-and-no-checkpoints-on-crusoe-l40s/
27•Mougatine•3d ago•5 comments

Dickinson's Dresses on the Moon

https://www.theparisreview.org/blog/2025/06/20/dickinsons-dresses-on-the-moon/
12•Bluestein•3d ago•0 comments

Show HN: Magnitude – Open-source AI browser automation framework

https://github.com/magnitudedev/magnitude
60•anerli•7h ago•25 comments

Snow - Classic Macintosh emulator

https://snowemu.com/
204•ColinWright•17h ago•75 comments

A Review of Aerospike Nozzles: Current Trends in Aerospace Applications

https://www.mdpi.com/2226-4310/12/6/519
68•PaulHoule•10h ago•32 comments

Matrix v1.15

https://matrix.org/blog/2025/06/26/matrix-v1.15-release/
128•todsacerdoti•6h ago•39 comments

A new pyramid-like shape always lands the same side up

https://www.quantamagazine.org/a-new-pyramid-like-shape-always-lands-the-same-side-up-20250625/
622•robinhouston•1d ago•150 comments

Puerto Rico's Solar Microgrids Beat Blackout

https://spectrum.ieee.org/puerto-rico-solar-microgrids
348•ohjeez•1d ago•199 comments

Show HN: I built an AI dataset generator

https://github.com/metabase/dataset-generator
121•matthewhefferon•11h ago•24 comments

Introducing Gemma 3n

https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/
284•bundie•9h ago•131 comments

SigNoz (YC W21, Open Source Datadog) Is Hiring DevRel Engineers (Remote)(US)

https://www.ycombinator.com/companies/signoz/jobs/cPaxcxt-devrel-engineer-remote-us-time-zones
1•pranay01•7h ago

Shifts in diatom and dinoflagellate biomass in the North Atlantic over 6 decades

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0323675
43•PaulHoule•8h ago•2 comments

Collections: Nitpicking Gladiator's Iconic Opening Battle, Part I

https://acoup.blog/2025/06/06/collections-nitpicking-gladiators-iconic-opening-battle-part-i/
5•diodorus•3d ago•0 comments

Typr – TUI typing test with a word selection algorithm inspired by keybr

https://github.com/Sakura-sx/typr
40•Sakura-sx•3d ago•29 comments

Starcloud can’t put a data centre in space at $8.2M in one Starship

https://angadh.com/space-data-centers-1
57•angadh•6h ago•70 comments

The Business of Betting on Catastrophe

https://thereader.mitpress.mit.edu/the-business-of-betting-on-catastrophe/
67•anarbadalov•3d ago•31 comments

“My Malformed Bones” – Harry Crews’s Counterlives

https://harpers.org/archive/2025/07/my-malformed-bones-charlie-lee-harry-crews/
9•Caiero•3d ago•0 comments

Lateralized sleeping positions in domestic cats

https://www.cell.com/current-biology/fulltext/S0960-9822(25)00507-X?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS096098222500507X%3Fshowall%3Dtrue
104•EvgeniyZh•7h ago•50 comments

Memory safety is table stakes

https://www.usenix.org/publications/loginonline/memory-safety-merely-table-stakes
66•comradelion•6h ago•70 comments

Thomas Aquinas – The world is divine

https://ralphammer.com/thomas-aquinas-the-world-is-divine/
6•pedroth•3h ago•1 comment

Ambient Garden

https://ambient.garden
312•fipar•3d ago•56 comments

Writing a basic Linux device driver when you know nothing about Linux drivers

https://crescentro.se/posts/writing-drivers/
424•sbt567•4d ago•59 comments

Access BMC UART on Supermicro X11SSH

https://github.com/zarhus/zarhusbmc/discussions/3
57•pietrushnic•11h ago•11 comments

A.I. Is Homogenizing Our Thoughts

https://www.newyorker.com/culture/infinite-scroll/ai-is-homogenizing-our-thoughts
54•thoughtpeddler•5h ago

Comments

timr•4h ago
It’s only homogenizing your thoughts if you don’t think for yourself.

(I realize this might be a weak point for many people.)

montagg•4h ago
The “do your own research” types end up with some of the biggest groupthink I’ve ever seen though.
chrisco255•4h ago
Sure, they were probably picking from the top 10 results on Google. Now with AI, we've got one very effective "I'm feeling lucky" button.
nullc•4h ago
It's worse because they're often more confident in the AI output than they ever were in the Google results, and the results are, often enough, so bad that no human would have made that error. When they do doubt, they can ask, and the AI will often defend its dumb position -- especially when they explicitly ask it to counter the rebuttal they received.

Skepticism also seems to be reduced because we're armored against people telling us lies in their own self-interest and against ours, while AI will make stuff up that benefits no one. (And even where it could benefit someone, people assume the AI isn't trying to benefit itself.)

timr•4h ago
The “listen to the experts” types are the same thing, but the opposite pole.

Neither qualifies as thinking for yourself.

knowaveragejoe•4h ago
It was never "always listen to the experts", but that's the strawman given by the contrarians who have decided we need to throw out all expertise.
Swenrekcah•4h ago
Sure, but one of those is outsourcing their judgement to a panel of actual experts and the other to a panel of internet personalities.
lo_zamoyski•4h ago
It always seemed odd how willing people are to defer to internet randos who are obviously out of their depth.

Of course, you also have to identify which experts are trustworthy. This is an important skill to have.

A non-expert must rely on at least two things to do this. The first is external signals, like financial associations and a record of making unpopular criticisms (but without being a contrarian or aiming for sensationalism), as well as reputational factors (not popularity, but a reputation for making strong cases).

The second is the basic coherence of their claims. If they make remarks that contradict basic reality, then this is not a good sign.

And of course, you have to be prudent and recognize your own limits.

These are probabilistic, naturally, and there is an expected divergence of opinion here, even between what you thought yesterday and what you think today.

timr•4h ago
Look, I think I made my only point in the parent comment, but I just want to emphasize that "a panel of actual experts" is rarely the compliment that you seem to think it is.

The past few years have been a veritable parade of experts saying inaccurate things, and/or being proven hilariously wrong in a variety of domains. I'm not saying that this isn't a hard problem -- it is -- but the fact is that "expert" is not a get-out-of-thinking-free card. It is, at best, a slightly higher weighted input amongst all others.

xboxnolifes•2h ago
Yeah, anyone working in any field for some amount of time is an "expert" in that field. Yet, finding 2 software engineers with differing opinions on a topic is trivial. Finding 1 making provably wrong statements on a topic is nearly as easy.

They hold more weight than the average person, but it doesn't make them right by default.

nullc•4h ago
> It’s only homogenizing your thoughts if you don’t think for yourself.

Uh oh...

More seriously, if you have non-techie (or less techie) friends or family using ChatGPT please ask to see their conversations.

You're likely to be shocked by at least a few of them... many people really don't understand what these tools are and are using them in crazy and damaging ways.

For example, one friend's brother-in-law has ChatGPT telling him about various penny stocks and obscure cryptocurrencies and promising him 10000x returns, which he absolutely believes and is making investments based on.

Other people are allowing ChatGPT to convince them that God has chosen to speak to them through ChatGPT and is commanding them to do all sorts of nonsense.

The commercial LLMs work well enough that people who don't know how they work are frequently bamboozled.

Consider how much your own skepticism of the output comes from cases where it was confidently but objectively wrong, and what happens to someone who never uses it on something where objective correctness can be easily judged.

chrisweekly•4h ago
Here's a great example of an intelligent person learning this lesson (and, thankfully, sharing it in a very public and effective way):

https://amandaguinzburg.substack.com/p/diabolus-ex-machina

nullc•3h ago
Great link. It might be something I could share to wake some other people up.

I dunno how tool use is set up in the chat interface, as I've only used the API, but I doubt there was ever a request to any of the URLs, and the author could have just as easily added https://amandaguinzburg.substack.com/p/that-time-i-won-the-l... or any other made-up URL and it would have waxed poetic about that one too.

beanshadow•4h ago
Differing individuals with similarly shaped axiomatic structures will discover similar theorems. Some people who are members of ideologies believe they are thinking for themselves.

It's strong for members of a community to think alike. On the other hand, some people like to search in todash meme space for a useful idea or strategy in the rough. The problem is that this treasure-hunter strategy is only available to those with the resources to try lots of untested and potentially quite harmful ideas.

senko•4h ago
So are social media, TV, Hollywood, and pop culture in general.
tines•4h ago
All connection technology is a force for homogeneity. Television was the death of the regional accent, for example.

Through unlimited amusement, entertainment, and connection we are creating a sad, boring, lonely world.

stego-tech•4h ago
The problem in the case of AI is who is curating that homogeneity, and to what end. Dynamic systems like IRC and messengers let folks connect and gravitate more “naturally”, while AI - being a walled garden curated by for-profit entities funded by billionaire Capitalists - naturally has a vested interest in forcing a sort of homogeneity that benefits their bottom line and minimizes risk to their business model.

That’s the real threat: reality authoring.

nullc•4h ago
Not sure about that. Billionaire Capitalists live in this world too. They might cause harm, sure, but that harm generally takes predictable form and is of finite magnitude.

AI behavior on the other hand can cause under-informed users to do crazy things that no one would ever want. The form of the harm is less predictable, the magnitude isn't limited by anything except the user's ability and skepticism.

Imagine whatever US president you think is least competent talking to ChatGPT. If their conversation ventures into discussion of a Big Red Switch That Ends The World, it's going to eventually advise on all the reasons the button should be pushed, because that's exactly what would happen in the mountains of narrative material the LLM has been trained on.

Hopefully there is no end-the-world button, and even the worst US president isn't going to push it because ChatGPT said it was a good idea. ... But you get the idea, and there absolutely are people leaving their families and doing all manner of crazy stuff because they accidentally prompted the AI into writing fiction starring them, and the AI is advising them to live the life of a fictional character.

I think AI doomers have it all wrong, AI risk to the extent it exists isn't from any kind of super-intelligence, it's significantly from super-insanity. The AI doesn't need any kind of super-human persuasion, turns out vastly _sub_-human persuasion is more than enough for many.

Wealthy people abusing a new communications channel to influence the public isn't a new risk, it's a risk as old as time. It's not irrelevant, by any means, but we do have a long history of dealing with it.

tines•4h ago
> I think AI doomers have it all wrong, AI risk to the extent it exists isn't from any kind of super-intelligence, it's significantly from super-insanity. The AI doesn't need any kind of super-human persuasion, turns out vastly _sub_-human persuasion is more than enough for many.

Totally agree. We have a level of technology today that is enough to ruin the world. We don’t need to look any further for the threat to our souls.

Dracophoenix•4h ago
> AI behavior on the other hand can cause under-informed users to do crazy things that no one would ever want. The form of the harm is less predictable, the magnitude isn't limited by anything except the user's ability and skepticism.

One could say the same of the printing press.

tines•4h ago
Not at all. Quantity has a quality all its own.
k310•3h ago
I never proposed to either a chatbot or a billionaire [0]

Who says that the president isn't already a chatbot, himself? [1] Think about this article.

Enjoy.

[0] https://people.com/man-proposed-to-his-ai-chatbot-girlfriend...

[1] https://www.techdirt.com/2025/04/29/the-hallucinating-chatgp...

darkhorse222•4h ago
Geography still dictates some things through the diversity of experiences it imparts, though admittedly much of our technology exists to insulate us from them.
SerpentJoe•4h ago
Social media has already homogenized our thoughts so much. So many facts and perspectives are presented that it's impossible to construct our own opinions on it all without taking inspiration from others, and the upvote button provides a convenient consensus.
abnercoimbre•4h ago
What can we do? Would serious efforts to create offline clubs [0] serve as an antidote? This made the rounds on HN [1] recently.

Technology for touching grass.

[0] https://www.theoffline-club.com/

[1] https://news.ycombinator.com/item?id=44381168

tines•4h ago
Yep, connecting with actual human beings is what life's all about.
eikenberry•3h ago
Curious where you get the idea that regional accents are gone. If you travel around much in the US you'll hear many different regional accents. I have relatives from the West Coast, Midwest, South, and East Coast (we're spread around), and each region has an easily recognizable accent. Some are more pronounced than others, but they're still very much alive.
hatthew•1h ago
In my experience I don't notice any difference in accent between the east coast and west coast. The only regional accent I notice among native English speakers is Southern. All other accents seem to be cultural (AAVE, ESL) or dying (older generations have it, younger ones don't).
megaloblasto•6m ago
Having lived for a long time on each coast, this is not my experience at all. Each region has a rich accent.
smcleod•4h ago
All these hyped-up doom titles are homogenising our thoughts. There's some truth to this, but it's much more nuanced than presented here.
kfarr•4h ago
See... Television? Social Media? Printing Press? https://slate.com/technology/2010/02/a-history-of-media-tech...
dorkrawk•4h ago
Mass media homogenizes our input (which influences our output). If we want to think about how AI might be different we should consider how it might directly homogenize our output.
lo_zamoyski•4h ago
There is a secondary way in which homogenization occurs.

Mass media not only deliver the same message, or the same presuppositions, to everyone (the latter a more dangerous thing, as the desired conclusions are then drawn by people themselves; see Bernays's "music room" tactic for getting people to buy pianos); once the same content has been delivered to everyone, people will talk about it at some point. This creates the impression of consensus, which causes people to assign greater confidence to the content that the mass media have delivered.

So it's circular. You put an idea in people's heads, they all end up talking about it, and this causes people to feel confident that it's true, because everyone is talking about it. And even if you don't consume mass media, you still face a society of people who do. You don't escape the effects of mass media simply because you personally don't consume them.

lvl155•4h ago
This makes it sound like people are original and profound. No. 99% of day-to-day life is just repetition. Critical thinking is innate and rare. This is why religion is such a successful social form. This is also why governments like North Korea's exist. 99% live brain-dead lives doing uninspiring work…and they are happy to do it. Might also be the way we are designed and wired for survival.
seydor•4h ago
The central limit theorem is unrelenting.
tolerance•4h ago
This headline is infuriating and the content is just a report on that one study that's been making the rounds on Hacker News all week.

AI chat bots enable passive consumption. Passive consumption homogenizes thought. It's not the only technology to do this.

I suspect that The New Yorker, and similar outlets, will stop caring when it becomes financially and socially advantageous to do so.

A culture that is ambivalent about, or uninterested in, providing practical solutions to this problem is the greater issue.

alganet•4h ago
It's just the MIT study again, isn't it? Cool study.

We should wait for the peer reviews before digging too much into it though. These are, after all, preliminary results.

gowld•4h ago
We are the peer reviews. It's us.
alganet•4h ago
That's... not how these things work, buddy.
lo_zamoyski•4h ago
Also bracket peer review. Peer review also has a homogenizing effect on the content in journals. It's not magic.
tqi•4h ago
> ... more than fifty students from universities around Boston were split into three groups... According to Nataliya Kosmyna, a research scientist at M.I.T. Media Lab and one of the co-authors of a new working paper documenting the experiment, the results from the analysis showed a dramatic discrepancy: subjects who used ChatGPT demonstrated less brain activity than either of the other groups. The analysis of the L.L.M. users showed fewer widespread connections between different parts of their brains; less alpha connectivity, which is associated with creativity; and less theta connectivity, which is associated with working memory.

Are we still treating low n-size fMRI studies as anything more than a clout-seeking exercise in confirmation bias?

yakshaving_jgt•4h ago
To be fair, any time vim comes up in a discussion among programmers, without fail someone reaches for that same joke about not being able to quit.

Even for supposed critical thinkers, on average we’re not all that original.

smackeyacky•4h ago
I don't think reaching for that joke is a sign of unoriginality; it's a signal to the group that you fit in.
drellybochelly•4h ago
The process they described is not much different from gluing together ideas from studies in university (when I was studying for an arts degree).

I think it's on the person to realize whether A.I. is becoming a crutch.

l33tbro•4h ago
I sometimes wonder if the true "digital divide" comes down to those who were able to develop critical thinking skills prior to these last few years.

If you had previously developed these skills through wide-ish reading and patient consideration, then tools like LLMs are like somebody handing you the keys to a wave-runner in a choppy sea of information.

But for those now having to learn to think critically, I cannot see how they would endure the struggle of contemplation without habitually reaching for an LLM. The act of holding ambiguity within yourself, where information is often encoded into knowledge, is instantly relieved here.

While I feel lucky to have acquired critical thinking skills prior to 2023, the thought of tools like LLMs being unconditionally handed to young people for learning fills me with a kind of dread.

AppleBananaPie•4h ago
I agree with you, because young me would have learned nothing for the sake of short-term fun.
delfinom•4h ago
Idiocracy is coming.
jgalt212•4h ago
I've felt the same way about our searches and Google Autocomplete. I find myself only searching for stuff that Google Autocomplete will recognize.
isaacremuant•4h ago
Censorship was doing that already.
zingababba•3h ago
I mean, when I was young I would often think in terms of whatever philosopher I was currently reading. Brains are plastic AF and adapt very quickly. Just keep your inputs more diverse than literally just ChatGPT and you will be fine.
nullc•3h ago
https://web.archive.org/web/20121008025245/http://squid314.l...

But he got it wrong -- for most, it doesn't need to be better than what they'd do themselves; it doesn't even need to be particularly good.

Plenty of people would prefer to put out AI copy even when they suspect it's worse than what they'd write themselves, because they take less personal injury when it turns out to be flawed.

dave333•1h ago
We will just have to stand on the shoulders of homogeneous giants.