
Wine 11.0-rc3 Released with Another Week of Bug Fixing

https://www.phoronix.com/news/Wine-11.0-rc3-Released
1•Bender•44s ago•0 comments

LG TVs' unremovable Copilot shortcut is the least of smart TVs' AI problems

https://arstechnica.com/gadgets/2025/12/lg-tvs-unremovable-copilot-shortcut-is-the-least-of-smart...
1•Bender•3m ago•0 comments

'War and Power' review: Off the battlefield, another fight

https://www.wsj.com/arts-culture/books/war-and-power-review-off-the-battlefield-another-fight-82e...
1•hhs•4m ago•0 comments

I'm a Full Professor. I Don't Code for a Living. I Shipped Prod Software Anyway

1•socreins•5m ago•0 comments

Google Engineer on His Sentient AI Claim (2022) [video]

https://www.youtube.com/watch?v=kgCUn4fQTsc
2•kevin061•6m ago•0 comments

Who's Afraid of Black Love?

https://dengalove.com/posts-do-blog/quem-tem-medo-do-amor-preto/
2•filldorns•9m ago•1 comments

Prototaxites

https://astrobiology.com/2025/03/ancient-prototaxites-dont-belong-to-any-living-lineage-possibly-...
2•andsoitis•10m ago•0 comments

Olo (Imaginary Color)

https://en.wikipedia.org/wiki/Olo_(color)
1•aleda145•13m ago•0 comments

Poor Man's Productivity Trick

https://idiallo.com/blog/poormans-productivity-trick
1•shaunpud•13m ago•0 comments

Wood wide web – the underground network of microbes that connects trees

https://www.science.org/content/article/wood-wide-web-underground-network-microbes-connects-trees...
1•andsoitis•14m ago•0 comments

Why Current AI Won't Displace Lawyers

https://deepsub.substack.com/p/six-reasons-why-current-ai-wont-displace
1•dsubburam•15m ago•0 comments

Build Your Own React

https://pomb.us/build-your-own-react/
1•howToTestFE•17m ago•0 comments

Cloudflare's Resilience plan following recent outages (Code Orange)

https://blog.cloudflare.com/fail-small-resilience-plan/
2•sdko•18m ago•0 comments

Humongous Fungus — the largest single living organism on Earth

https://www.oregonencyclopedia.org/articles/humongous-fungus-armillaria-ostoyae/
1•andsoitis•19m ago•0 comments

End of Year Pay Report 2025

https://www.levels.fyi/2025/
1•cebert•19m ago•0 comments

Ask HN: Acceleration of a Drop Falling Through Mist?

2•AnimalMuppet•23m ago•1 comments

How has public history changed since 1951?

https://www.historytoday.com/archive/head-head/how-has-public-history-changed-1951
1•hhs•25m ago•0 comments

Kew: Simple Static Site Generator

https://github.com/uint23/kew
1•uint23•27m ago•0 comments

The Great Re-Aggregation: Vertical AI and the Battle for the Control Point

https://medium.com/@gp2030/the-great-re-aggregation-vertical-ai-service-as-software-and-the-battl...
1•light_triad•32m ago•1 comments

Ask HN: What app/website do you think should exist? (non-AI)

1•DinakarS•32m ago•2 comments

PBS News Hour West to go dark after ASU discontinues contract

https://www.statepress.com/article/2025/12/politics-pbs-newshour-west-closure
2•heavyset_go•33m ago•0 comments

Ask HN: Has your iPhone Safari been hijacked by ChatGPT?

1•dalemhurley•34m ago•2 comments

Evolution by Natural Induction

https://royalsocietypublishing.org/rsfs/article/15/6/20250025/366156/Evolution-by-natural-induction
1•Anon84•36m ago•0 comments

Computational prediction of human genetic variants in the mouse genome

https://www.nature.com/articles/s41587-025-02925-0
1•bookofjoe•36m ago•0 comments

The Most Dangerous Spot on Caltrain

https://valuetowndotjson.substack.com/p/the-most-dangerous-spot-on-caltrain
1•panic•36m ago•0 comments

The Present and Future of HPC Networking with Cornelis Networks CEO Lisa Spelman

https://chipsandcheese.com/p/sc25-the-present-and-future-of-hpc
1•rbanffy•39m ago•0 comments

The Model T of Agentic AI was Introduced in 2025

https://backnotprop.substack.com/p/the-model-t-of-agentic-ai-was-introduced
1•ramoz•48m ago•0 comments

Controversial Dakota Pipeline Gets a Big, Belated Government Boost

https://www.nytimes.com/2025/12/19/climate/dakota-access-pipeline-greenpeace-energy-transfer-army...
1•geox•48m ago•0 comments

What building a Package Manager taught me

1•xerrs•50m ago•0 comments

Humanoid Robots for War and Work: Startup Plans to Build 50K by End of 2027

https://www.forbes.com/sites/johnkoetsier/2025/12/16/humanoid-robots-for-war-and-work-startup-pla...
1•rmason•51m ago•0 comments

We ran Anthropic’s interviews through structured LLM analysis

https://www.playbookatlas.com/research/ai-adoption-explorer
32•jp8585•1h ago

Comments

jp8585•1h ago
Anthropic released 1,250 interviews about AI at work. Their headline: "predominantly positive sentiments." We ran the same interviews through structured LLM analysis, and the true story is a bit different.

  Key findings:
  • 85.7% have unresolved tensions (efficiency vs quality, convenience vs skill)
  • Creatives struggle MOST yet adopt FASTEST
  • Scientists have lowest anxiety despite lowest trust (they see AI as a tool, plain and simple)
  • 52% of creatives frame AI through "authenticity" (using it makes them feel like a fraud)
                                                                                                              
Same data, different lens. Full methodology at bottom of page. Analysis: https://www.playbookatlas.com/research/ai-adoption-explorer Dataset: https://huggingface.co/datasets/Anthropic/AnthropicInterview...
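For readers wondering what "structured LLM analysis" of interview transcripts can look like in practice, here is a minimal sketch using the Anthropic Python SDK. The coding schema, prompt, model name, and placeholder excerpts are all illustrative assumptions, not the authors' actual pipeline (their methodology page has the real details).

```
import json
import anthropic  # assumes the Anthropic Python SDK and ANTHROPIC_API_KEY in the environment

client = anthropic.Anthropic()

# Illustrative coding schema only -- not the schema used in the linked analysis.
PROMPT = """You are coding a workplace interview about AI use.
Return only valid JSON with exactly these fields:
  "occupation_group": one of "creative", "scientist", "engineer", "other"
  "sentiment": one of "positive", "negative", "mixed"
  "unresolved_tension": true or false (e.g. efficiency vs quality, convenience vs skill)
  "framing": one short phrase (e.g. "authenticity", "tool", "collaborator")

Interview excerpt:
{excerpt}
"""

def code_interview(excerpt: str) -> dict:
    """Ask the model for a structured coding of one interview excerpt."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=300,
        messages=[{"role": "user", "content": PROMPT.format(excerpt=excerpt)}],
    )
    return json.loads(response.content[0].text)

# Placeholder excerpts standing in for the released transcripts.
excerpts = [
    "Using it for first drafts feels like cheating, but I can't keep up without it.",
    "It's a tool. I check every number it gives me, same as any instrument.",
]

coded = [code_interview(e) for e in excerpts]
tension_rate = sum(c["unresolved_tension"] for c in coded) / len(coded)
print(f"Share with unresolved tensions: {tension_rate:.1%}")
```

Aggregating the coded rows by occupation group is then a simple groupby, which is roughly how a headline number like the 85.7% above could be produced under these assumptions.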
Terretta•40m ago
“Not X. Not Y. Z.” – Are you not willing to edit the tropes out?

Or maybe teach your LLM to fix itself. Starting rule set:

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

cmiles8•1h ago
The story that’s solidifying is that the tech is cool and useful for certain things (e.g., meeting note taking), but businesses have run a ton of “innovation lab” pilots that have returned little to no measurable value, with leaders getting frustrated at the invested red ink. In short, the substance isn't living up to the hype.

Everywhere I look the adoption metrics and impact metrics are a tiny fraction of what was projected/expected. Yes tech keynotes have their shiny examples of “success” but the data at scale tells a very different story and that’s increasingly hard to brush under the carpet.

Given the amount of financial engineering shenanigans and circular financing it’s unclear how much longer the present bonanza can continue before the financial and business reality playing out slams on the brakes.

blindhippo•57m ago
If anything, the AI bubble is reinforcing to me (and hopefully many more people) that the "markets" are anything but rational. None of the investments going on have followed any semblance of fundamentals - it's all pure instinct and chasing hype. I just hope it doesn't tear down the world for the 99% of us unable to actually reap any benefits from it.

AI is basically a toy for 99% of us. It's a long, long way from the productivity boost people love to claim to justify the sky-high valuations. It will fade to being a background tech employed strategically, I suspect - similar to other machine learning applications - and this is exactly where it belongs.

I'm forced to use it (literally, AI usage is now used as a talent review metric...) and frankly, it's maybe helped speed me up... 5-10%? I spend more time trying to get the tools to be useful than I would just doing the task myself. The only true benefit I've gotten has been unit test generation. Ask it to do any meaningful work on a mature code base and you're in for a wild ride. So there's my anecdotal "sentiment".

dionian•52m ago
I multitask much more now that I can farm off small coding assignments to agents. I pay hundreds per month in tokens. For my role personally it’s been a massive paradigm shift.
cmiles8•43m ago
There are absolutely folks like you out there and I don’t doubt the productivity increase. The challenge is you are not the norm and the hundreds per month from you and others like you are a drop in the bucket of what’s needed to pay for all this.
WhyOhWhyQ•42m ago
To each his own, but multi-tasking feels bad to me. I want to spend my life pursuing mastery of a craft, not lazily delegating. Not that everyone should have the same goals, but the mastery route feels like it's dying off. It makes me sad.

I get it that some people just want to see the thing on the screen. Or your priority is to be a high status person with a loving family etc.. etc... All noble goals. I just don't feel a sense of fulfillment from a life not in pursuit of something deeper. The AI can do it better than me, but I don't really care at the end of the day. Maybe super-corp wants the AI to do it then, but it's a shame.

Terretta•31m ago
> I want to spend my life pursuing mastery of a craft, not lazily delegating.

And yet, the Renaissance "grand masters" became known as masters through systematizing delegation:

https://smarthistory.org/workshop-italian-renaissance-art/

WhyOhWhyQ•29m ago
I have wondered about that actually. Thanks, I'll read that, looks interesting.

Surely Donald Knuth and John Carmack are genuine masters though? There's the Elon Musk theory of mastery where everyone says you're great, but you hire a guy to do it, and there's the <nobody knows this guy but he's having a blast and is really good> theory where you make average income but live a life fulfilled. On my deathbed I want to be the second. (Sorry this is getting off topic.)

fragmede•6m ago
Masters of what though?

Steve Jobs wrote code early on, but he was never a great programmer. That didn’t diminish his impact at all. Same with plenty of people we label as "masters" in hindsight. The mastery isn’t always in the craft itself.

What actually seems risky is anchoring your identity to being the best at a specific thing in a specific era. If you're the town’s horse whisperer, life is great right up until cars show up. Then what? If your value is "I'm the horse guy," you're toast. If your value is taste, judgment, curiosity, or building good things with other people, you adapt.

So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.

alehlopeh•6m ago
I like how you compare people to renaissance painters to inflate their egos
WhyOhWhyQ•4m ago
Inflate whose ego? Mine? It seemed more like a swipe than ego-inflation, but I was happy to see the article anyway.
fragmede•3m ago
The other surprising thing from this whole AI craze: it turns out that being able to social-engineer an LLM is a skill that transfers to getting humans to do what you want.
blindhippo•42m ago
Might work for you, but if I multitask too much, the quality of my output drops significantly. Where I work, that does not fly. I cannot trust any agent to handle anything without babysitting them to avoid going off the rails - but perhaps the tools I have access to just aren't good (the underlying model is Claude 4.5, so the model isn't the cause).

I've said this in the past and I'll continue to say it - until the tools get far better at managing context, they will be hard locked for value in most use cases. The moment I see "summarizing conversation" I know I'm about to waste 20 minutes fixing code.

fragmede•23m ago
If you can predict that hitting “summarize conversation” equals rework, what can you change upstream so you avoid triggering it? Are you relying on the agent to carry state instead of dumping it into .MD files? What happens if your computer crashes?
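For what it's worth, "dumping it into .md files" can be as simple as having the agent (or a small wrapper) append its plan and decisions to a file that survives context compaction and crashes. A minimal sketch; the file name and section labels are made up for illustration:

```
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("TASK_STATE.md")  # hypothetical file name

def log_state(section: str, text: str) -> None:
    """Append a timestamped note so the plan and decisions outlive the chat context."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    with STATE_FILE.open("a", encoding="utf-8") as f:
        f.write(f"\n## {section} ({stamp})\n{text}\n")

# Example usage: record state at each step so a fresh session can pick it up.
log_state("Plan", "Refactor the auth middleware; keep tests in tests/test_auth.py green.")
log_state("Decision", "Keep the session cookie format; only swap the signing key lookup.")
```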

> so the model isn't the cause

Thing is, the prompts, those stupid little bits of English that can't possibly matter all that much? It turns out they affect the model's performance a ton.

fragmede•21m ago
> AI is basically a toy for 99% of us.

So you're at the "first they laugh at us" stage then.

AnimalMuppet•19m ago
OK, but not everything that gets to that stage moves on to the next, let alone the stage after that.

But I will give you this, the "first they ignore us" stage is over, at least for many people.

jp8585•44m ago
I actually think things have improved substantially compared to last year. The latest batch of SOTA models is incredible (just ask any software engineer about what’s happening to their profession). It’s only a matter of time until other knowledge workers start getting the asphyxiating “vibe” coding treatment, and that drama is what really fascinates me.

People are absolutely torn. It seems that AI usage starts as a crutch, then it becomes an essential tool, and finally it takes over the essence of the profession itself. Not using it feels like a waste of time. There’s a sense of dread that comes from realizing that it’s not useful to “do work” anymore. That in order to thrive now, we need to outsource as much of our thinking to GPT as possible. If your sense of identity comes from “pure” intellectual pursuits, you are gonna have a bad time. The optimists will say “you will be able to do 10x the amount of work”. That might be true, but the nature of the work will be completely different. Managing a farm is not the same as planting a seed.

Terretta•36m ago
> There’s a sense of dread that comes from realizing that it’s not useful to “do work” anymore. That in order to thrive now, we need to outsource as much of our thinking to GPT as possible. If your sense of identity comes from “pure” intellectual pursuits, you are gonna have a bad time.

This is 180 degrees from how to think about it.

The higher your ratio of thinking to toil, the better. The more time you have to apply your intellect, with better machine execution to back it up, the more profit.

The Renaissance grand masters used ateliers of apprentices and journeymen while the grand masters conceived, directed, critiqued, and integrated their work into commissioned art; at the end signing their name: https://smarthistory.org/workshop-italian-renaissance-art/

This is how to leverage the machine. It's your own atelier in a box. Go be Leonardo.

jp8585•24m ago
I definitely understand that this is the rational way of viewing it. Leveraging these tools is an incredible feeling, but the sense of dread is always there in the corner. You can just feel a deep sense of angst in a lot of these interviews. In any case, I would rather have them and use them to their full extent than to become obsolete. Becoming Leonardo it is.
delusional•11m ago
> just ask any software engineer about what’s happening to their profession

I'm a professional developer, and nothing interesting is happening to the field. The people doing AI coding were already the weakest participants, and have not gained anything from it, except maybe optics.

The thing that's suffocating is the economics. The entire economy has turned its back on actual value in pursuit of silicon valley smoke.

Lerc•43m ago
The high usage and high anxiety track with what I have found from talking to artists IRL. There is a sense that any public expression that is not wholly against AI will draw vilification from a section of the artistic community.

There is a broad range of opinions, but their expression seems to have been extremely chilled.

huevosabio•39m ago
> Creatives have the highest struggle scores and the highest adoption rates.

Here is my guess for the puzzle: creative work is subjective and full of scaffolding. AI can easily generate this subjective scaffolding to a "good enough" level so it can get used without much scrutiny. This is very attractive for a creative to use on a day to day basis.

But, given the amount of content that wasn't created by the creative, the creative feels both a rejection of the work as foreign and a feeling of being replaced.

The path is less stark in more objective fields: the quality is objective, so it's harder to just accept a merely plausible solution, and the scaffolding is just scaffolding, so who cares if it does the job.

ctoth•28m ago
Possible confound (seems important):

"creatives" tend to have a certain political tribe, that political tribe is well-represented in places that have this precise type of authenticity/etc. language around AI use...

Basically a good chunk of this could be measuring whether or not somebody is on Bluesky/is discourse-pilled... and there's no way to know from the study.

layer8•24m ago
One issue with AI for creatives is that it’s virtually impossible to get AI to create a specific vision you have in mind. It creates something, but you just have to accept whatever that is, you can only steer it very roughly. It can be useful for getting inspiration, but not for getting exact results. If AI was better suited for realizing one’s own creative vision and working in a detail-oriented fashion, creators would likely embrace it more.
nphardon•20m ago
I'm a scientist and I mostly agree with the scientist part, but I am definitely collaborating with my bot; I don't view it as "just a tool". I know this because this morning I had to do a forced reboot and my VS Code wasn't connecting to our remote servers. It took over 5 minutes after the reboot to reload my bot chat, and from minutes 3-5 I had the distinct feeling of losing a valuable colleague.
malfist•17m ago
This article is rife with unedited LLM signals. This makes me question their methodology here. I want to believe what they found, but I don't trust this analysis. If they were this sloppy with the write-up, how sloppy were they with the science?
jp8585•5m ago
We have a full page on the methodology we used! Let me know if you’d like access to the dataset we created for this. The aim was not to be scientific but to flush out some deeper meanings from these interviews that typical NLP techniques struggle with. PS: Of course we used LLM tools as a writing aid, but I’d be willing to bet those “signals” come from my own writing and my appreciation of Tom Wolfe. I’ve been told it can be “sloppy” sometimes.
WhyOhWhyQ•15m ago
Another thing I might throw out there is that there are so many domains and niches that person A and person B are almost certainly having genuinely different experiences with the same tools. So when person A says "wow this is the best thing ever" and person B says "this thing is horrible", they might both be right.