frontpage.

What major works of literature were written after age of 85? 75? 65?

https://statmodeling.stat.columbia.edu/2026/03/25/what-major-works-of-literature-were-written-aft...
1•paulpauper•19s ago•0 comments

Learn Something Old Every Day, Part XVIII: How Does FPU Detection Work?

https://www.os2museum.com/wp/learn-something-old-every-day-part-xviii-how-does-fpu-detection-work/
1•kencausey•31s ago•0 comments

The Claim Upon the Training Data

https://www.jonadas.com/writing/essays/the-claim-upon-the-training-data
1•paulpauper•56s ago•0 comments

Seeing Like a Spreadsheet

https://davidoks.blog/p/how-the-spreadsheet-reshaped-america
1•paulpauper•1m ago•0 comments

The First Post-Reality Political Campaign

https://www.theatlantic.com/ideas/2026/03/hungary-first-post-reality-political-campaign/686565/
1•vrganj•2m ago•0 comments

The Explore-Exploit Tradeoff for AI Tools

https://www.normallydistributed.dev/the-explore-exploit-tradeoff-for-ai-tools/
1•jillcates•3m ago•0 comments

rpg.actor Game Jam

https://rpg.actor/jam
1•Kye•3m ago•0 comments

Agents for Security: The Tipping Point for Offensive AI

https://menlovc.com/perspective/agents-for-security-the-tipping-point-for-offensive-ai/
1•tcbrah•4m ago•0 comments

Circuit-level PDP-11/34 emulator

https://github.com/dbrll/ll-34
2•elvis70•6m ago•0 comments

Immich vs. ente photos – the photo backup showdown

https://alexandmanu.com/blog/immich-vs-ente-photos/
1•birdculture•12m ago•0 comments

Microsoft Set for Worst Quarter Since 2008

https://finance.yahoo.com/news/microsoft-set-worst-quarter-since-103556906.html
3•dvfjsdhgfv•15m ago•2 comments

In defense of social friction – Sycophantic AI distorts judgments and behaviors

https://www.science.org/doi/full/10.1126/science.aeg3145
1•tortilla•16m ago•0 comments

Lace Lithography raises $40M to replace chip-making light with helium atoms

https://thenextweb.com/news/lace-lithography-40m-series
2•shaicoleman•22m ago•0 comments

Designing a single-file MMAP-backed read-only hashed multi-table database

https://notes.volution.ro/v1/2026/03/notes/53ac09b0/
2•ciprian_craciun•24m ago•0 comments

Militarized snowflakes: The accidental beauty of Renaissance star forts

https://bigthink.com/strange-maps/star-forts/
10•Brajeshwar•26m ago•0 comments

PromptPaste – Voice Input for Claude Code and Codex CLI

https://www.promptpasteapp.com/
1•yanivnoema•27m ago•0 comments

How a Bill Gates-Backed Company Landed in a Fight Between Congo and Belgium

https://www.wsj.com/world/africa/congo-belgium-bill-gates-company-6d0e4be0
2•ViktorRay•27m ago•0 comments

Software Is Becoming Something You Invoke, Not Navigate

https://opuslabs.substack.com/p/the-agent-layer-is-rewriting-software
3•opuslabs•29m ago•0 comments

Phos-Chek Fire Retardant

https://en.wikipedia.org/wiki/Phos-Chek
2•laurensr•34m ago•0 comments

Explanation for why we don't see two-foot-long dragonflies anymore fails

https://arstechnica.com/science/2026/03/leading-explanation-for-ancient-giant-flying-insects-gets...
3•amichail•36m ago•0 comments

When Coupled Volcanoes Talk, These Researchers Listen

https://www.quantamagazine.org/when-coupled-volcanoes-talk-these-researchers-listen-20260327/
4•Brajeshwar•37m ago•0 comments

I Beat the Benchmark and Still Failed

https://www.tarc.blog/essays/beating_the_benchmark.html
2•_Tarik•39m ago•0 comments

Does Your Skill Earn Its Keep?

https://efexen.substack.com/p/does-your-skill-earn-its-keep
3•efexen•40m ago•0 comments

Leaked Anthropic Model Presents 'Unprecedented Cybersecurity Risks'

https://gizmodo.com/leaked-anthropic-model-presents-unprecedented-cybersecurity-risks-much-to-pen...
4•HiroProtagonist•41m ago•3 comments

I accidentally spammed a year of calendar invites

https://mattfarrugia.com/posts/i-accidentally-spammed-a-year-of-calendar-invites
3•mfarrugia•41m ago•1 comments

I Can't See Apple's Vision

https://matduggan.com/i-cant-see-apples-vision/
3•birdculture•42m ago•0 comments

Be careful: chatting with AI about your case is discoverable

https://harvardlawreview.org/blog/2026/03/united-states-v-heppner/
4•rogerallen•44m ago•2 comments

Show HN: Free, in-browser PDF editor

https://breezepdf.com/?v=2
17•philjohnson•44m ago•0 comments

The AI-assisted solo technical writer

https://buildwithfern.com/post/ai-assisted-technical-writer
2•ivanech•44m ago•0 comments

India's maternal mortality drops nearly 80% since 1990: Global study

https://economictimes.indiatimes.com/news/india/indias-maternal-mortality-drops-nearly-80-since-1...
4•pvsukale3•45m ago•1 comments

The risk of AI isn't making us lazy, but making "lazy" look productive

19•acmerfight•1h ago
I've been reflecting on how LLMs are changing our learning habits as engineers, and realized something worrying.

AI can now quickly help search and research information, distilling the core of a paper into a concise summary. It lets you pick up a term fast and have something to talk about.

But real learning requires deep reading, thinking, and practice. A polished summary is far from enough. Now that we have AI, how long has it been since you truly studied a paper, or deeply read through and implemented a technology? Has your ability to think, and your taste, improved or declined? Once that ability weakens, are you ready to let AI replace you entirely? Taste is never built by reading abstracts — it is forged through countless bad decisions and excellent practice.

To be honest, most people never seriously finished reading many papers before AI either. AI hasn't taken anything away — it has just made shallow learning more efficient and more deceptive. The real risk isn't that AI makes people lazy, but that AI makes "lazy" look like "productive." Spend ten minutes reading a summary, post it on social media, feel like you're keeping up with the frontier — but nothing actually sticks.

I am absolutely not against AI. What I advocate is using AI for deep work, not treating it as your TikTok of pretend learning. From "summarize it for me" to "debate it with me," from "do it for me" to "help me reason through it" — that is what matters.

Comments

nis0s•1h ago
What’s important: that bridges get built and stay up, or that they’re built only after X hours of toil? AI will change the nature of work, and it’s going to make a lot of people uncomfortable. But more importantly, it’s going to let people who understand things faster get the info they need to be productive.
bluefirebrand•35m ago
AI does not currently build bridges that stay up
phil21•31m ago
I have a feeling we would all be terrified if we knew how much AI had a role in building bridges at the moment.

TBD if they stay up, I suppose.

The stories I hear from various white collar professions not related to tech are... interesting, to say the least. There is a whole lot of unsanctioned shadow IT going on regardless of policy.

quater321•38m ago
So what is important is not that 10 or 20 times the work can be done, but that you are stressed out and exhausted while doing your work?
dsabanin•36m ago
I'm convinced that at some point, looking productive and being productive become the same thing.
potatoman22•23m ago
There's a point where they meet, but "faking it until you make it" doesn't work for productivity in the same way it doesn't work for getting rich.

But there's a secret: just buy my $399 masterclass and I'll teach you 17 simple productivity hacks to 100x your income.

elgertam•31m ago
I have nearly the opposite take. I can't tell you how many times I've read a book, a paper or something else and been confused by some ambiguity in the author's prose. Being able to drop the paper (or even the book!) into an LLM to dig into the precise meaning has been an unbelievable boost for me.

Now I can actually get beyond conceptual misunderstanding or even ignorance and get to practice, which is how skills actually develop, in a much more streamlined way.

The key is to use the tool with discipline, by going into it with a few inviolable rules. I have a couple in my list, now: embrace Popperian falsifiability; embrace Bertrand Russell's statement: “Everything is vague to a degree you do not realize till you have tried to make it precise.”

LLMs have become excellent teachers for me as a result.

acmerfight•20m ago
We actually don't disagree at all—you are perfectly illustrating my point.

Applying strict epistemic discipline (Popper, Russell) to resolve ambiguity and accelerate actual practice is the very definition of deep work. You aren't using AI as a shortcut to skip thinking; you're using it as a Socratic sparring partner to deepen it. This is exactly the paradigm shift I'm advocating for.

caprock•30m ago
I find value in learning some things deeply but not all things.

The ability to be more selective about where I attend deeply, while leveraging fast shallow learning to complete other tasks... That seems like a potential benefit and a nice choice to have in the toolbox.

functional_dev•25m ago
The trick is maintaining enough domain expertise that we can actually audit those shallow outputs.

If the baseline knowledge drops too low, we can't tell when the AI is being lazy or wrong.

atomicnumber3•28m ago
I don't think it's all that bad. There's definitely vibe coding that is "copy paste / throw away" programming on ultra steroids. But after vibe coding two products and then finding them essentially impossible to actually get to a quality bar I considered ready to launch, I've been working on a more measured approach that leverages AI, but in a way that simply speeds up traditional programming. I use it to save tons of time on "why is pylance mad about X", "X works from the docs example but my slightly modified X gives error Y", "how do I make a toggle switch in css and html", "how am I supposed to do Python context managers in 2026 (I didn't know about the generator wrapper thing)" — all that bullshit that constantly slows you down but needs to be right. AI is great at helping you kickstart and then keeping you unblocked.
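(For anyone who also didn't know: the "generator wrapper thing" is presumably `contextlib.contextmanager`, which turns a generator into a context manager. A minimal sketch, with function names of my own choosing:)

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, log):
    """Everything before `yield` is setup; everything after is teardown."""
    start = time.perf_counter()   # runs on entering the `with` block
    try:
        yield                     # the body of the `with` block runs here
    finally:                      # runs even if the body raises
        log.append((label, time.perf_counter() - start))

log = []
with timed("sum", log):
    total = sum(range(1000))
# log now holds one ("sum", elapsed_seconds) entry
```

This is the idiomatic alternative to writing a full class with `__enter__`/`__exit__` when the setup/teardown logic is short.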

I've been using Gemini chat for this, and specifically only giving it my code via copy paste. This sounds Luddite, but actually it's been pretty interesting. I can show it my couple "core" library files and then ask it to do the next thing. I can inspect the output and retool it to my satisfaction, then slot it into my program, or use it as an example to then hand code it.

This very intentional "me being the bridge" between AI and the code has helped so much in getting speed out of AI but then not letting it go insane and write a ton of slop.

And not to toot my own horn too much, but I think AI accelerates people more the wider their expertise is even if it's not incredibly deep. Eg I know enough CSS to spot slop and correct mistakes and verify the output. But I HATE writing CSS. So the AI and I pair really well there and my UIs look way better than they ever have.

acmerfight•16m ago
Pure 'vibe coding' is essentially technical 'tittytainment'. Using AI for the horizontal spread while you enforce vertical architectural depth is true deep work.
softwaredoug•26m ago
I have some algorithms I absolutely must know. So I’m hand coding them and asking the agent to critique me.

I do a very similar thing in writing - I need feedback, don’t rewrite this!

In both cases I need the struggle of editing / failing to arrive at a deeper understanding.

The future dev will need to know when to hand code vs when to not waste your time. And the advantage will still go to the person willing to experience struggle to understand what they need to.

imenani•14m ago
Agreed. LLMs have helped me achieve much deeper reading, _when directed to do so_. Asking an LLM to “Teach me Socratically about this paper/code. One question at a time”, usually allows me to get a much deeper reading of the material than I would otherwise.
al_borland•13m ago
This was the issue with some of the ads Apple was running when launching the iPhone 16. It showed the worst worker using Apple Intelligence to impress the boss and get promotions, while being generally lazy and terrible. I felt it was the wrong message to send. [0]

I don’t think AI is all bad for summaries though. I used to add stuff to a reading list with good intentions, but things went there to die. Hundreds of articles added, but with so much new content each day, I would never actually read any of it. Now, I use AI summaries to get more context on what the article is. If it sounds interesting and I want more info, I can read the whole thing in the moment. If I’m satisfied with the summary alone, I can move on with my life. No more pushing it off to a reading list that only generates guilt. I actually end up reading more articles due to this, not less.

[0] https://youtu.be/YP-ukrBVDH8 (this is sadly the best copy I can find)

skyberrys•8m ago
That's a different take from how I've been considering AI to be genuinely useful. I try not to use it for deep work; in fact, I try to use it minimally but frequently, for short checks on my own understanding.

Using your research paper reading example: I would read the research paper, but then ask an AI tool specific questions about the work, frequently in new chats. Then at the end I might ask it to implement my description of the paper. I guess it's your 'debate it with me' conclusion; the only difference is that I would try to have multiple short conversations.

peteforde•4m ago
Several weeks ago, I spent about a week fully reverse engineering a Stereomaker pedal. It accepts a mono signal and produces a stereo field using a 5-stage all-pass filter to mess with the phase without the use of delay (which sounds cheesy and creates a result that doesn't mix well back to mono).
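(A sketch of the general technique, not the Stereomaker's actual circuit; the function names and the a=0.5, 5-stage values are my own illustration. A first-order all-pass filter passes every frequency at unit gain but shifts its phase, so pairing the dry signal with an all-passed copy gives frequency-dependent channel differences that read as width, while degrading the mono sum far less than a delay line would:)

```python
def allpass_stage(x, a):
    """First-order digital all-pass: y[n] = -a*x[n] + x[n-1] + a*y[n-1].
    Unit magnitude at all frequencies; only the phase is shifted."""
    y, x_prev, y_prev = [], 0.0, 0.0
    for s in x:
        out = -a * s + x_prev + a * y_prev
        y.append(out)
        x_prev, y_prev = s, out
    return y

def pseudo_stereo(mono, a=0.5, stages=5):
    """Dry signal on the left, cascaded all-passed copy on the right."""
    wet = mono
    for _ in range(stages):
        wet = allpass_stage(wet, a)
    return list(zip(mono, wet))  # (left, right) sample pairs
```

Because each stage is all-pass, the filtered channel carries the same energy as the dry one; only the relative phase between left and right varies with frequency.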

I've not really worked with audio circuits previously, and I'd been intimidated to approach the domain. My journey was radically expedited by iterating through the entire process with a ChatGPT instance. I would share zoomed photos, grill it about how audio transformers work, got it to patiently explain JFET soft-switching using an inverter until the pattern was forced into my goopy brain.

Through the process of exploring every node of this circuit, I learned about configurable ground lifts, using a diode bridge to extract the desired voltage rail polarity, how to safely handle both TS and TRS cables with a transformer, that transformer outputs are 180 degrees out of phase, how to add a switch that will attenuate 10dB off a signal to switch line/instrument levels.

Eventually I transitioned from sharing PCB photos to implementing my own take on the cascade design in KiCAD, at which point I was copying and pasting chunks of netlist and reasoning about capacitor values with it.

In short, I gave myself a self-directed college-level intensive in about a week and since that's not generally a thing IRL, it's reasonable to conclude that I wouldn't have ever moved this from a "some day" to something I now understand deeply in the past tense without the ability to shamelessly interrogate an LLM at all hours of the day/night, on my schedule.

If you're lazy, perhaps you're just... lazy?

Anyhow, I highly recommend the Surfy Industries Stereomaker. It's amazing at what it does. https://www.surfyindustries.com/stereomaker

tanepiper•3m ago
I think the risk is this: when non-technical users who've never shipped software in their lives can dictate to a machine and get "instant results", it's going to bring back managers who don't understand that you don't just ship code. Especially these days, when one bad dependency can mean downtime or worse.