
Where I'm at with AI

https://paulosman.me/2026/01/18/where-im-at-with-ai/
34•crashwhip•1h ago

Comments

Legend2440•51m ago
> I am certain that generative AI is a productivity amplifier, but its economic, environmental, and cultural externalities are not being discussed enough.

You sure? That’s basically all that’s being discussed.

There’s nothing in this article I haven’t heard 100 times before. Open any mainstream news article or HN/Reddit thread and you’ll find all of OP’s talking points about water, electricity, job loss, the intrinsic value of art, etc.

erxam•49m ago
It should be reworded as: It's not being discussed amongst the people who matter.
rootnod3•48m ago
And most of those concerns are being wildly dismissed by the AI shills, even here on HN.

Mention anything about the water and electricity wastage and embrace the downvotes.

sharifhsn•22m ago
Because those criticisms miss the forest for the trees. You might as well complain about the pollution caused by the Industrial Revolution. AI doesn’t use nearly as much water as even a small amount of beef production. And we have cheap ways of producing electricity; we just need to overhaul our infrastructure and regulations.

The more interesting questions are about psychology, productivity, intelligence, AGI risk, etc. Resource constraints can be solved, but we’re wrestling with societal constraints. Industrialization created modernism; we could see a similar movement in reaction to AI.

aspenmartin•15m ago
Well, considering people that disagree with you “shills” is maybe a bad start and indicates you kind of just have an axe to grind. You’re right that there can be serious local issues for data centers, but there are plenty of instances where it’s a clear net positive. There’s a lot of nuance that you’re just breezing over while characterizing the people who point this out as “shills”. Water and electricity demands do not have to be problematic; they are highly site-specific. In some cases there are real concerns (drought-prone areas like Arizona, impact on local grids, the possibility of rate increases for ordinary people, etc.), but in many cases they are not problematic (closed-loop or reclaimed water, independent power sources, etc.).
everdrive•25m ago
This is a weird quirk that I observe in all sorts of contexts. "No one's talking about [thing that is frequently discussed]!" Or, "There's never been [an actor in this identity category] in a major movie role before!" (except there has, plenty of times). Or sometimes, "You can't even say Christmas anymore!" (except they just did). The somewhat inaccurate use of hyperbolic language does not mean that there is _nothing_ to the particular statement or issue, only that the hyperbole is just that: an exaggeration of a potentially real and valid issue. The hyperbole is not very helpful, but neither is a total refutation of the issue based on the usage of hyperbole.
s-macke•20m ago
> ... emitting a NYC worth of CO2 in a year is dizzying

Simplified comparisons like these rarely show the full picture [0]. They focus on electricity use only, not on heating, transport, or meat production, and certainly not on the CO2 emissions associated with New York’s airports. As a rough, back-of-the-envelope estimate, one seat on a flight from Los Angeles to New York is on the order of 1,000,000 small chat queries’ worth of CO2e.

Of course we should care about AI’s electricity consumption, especially when we run 100 agents in parallel simply because we can. But it’s important to keep it in perspective.

[0] https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...
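
For a quick sanity check of that ratio (a rough sketch; both figures below are assumed round numbers in the ballpark such cheat sheets discuss, not values from this comment):

    # Back-of-envelope: how many small chat queries equal one
    # LA -> NY seat in CO2e? Both inputs are assumed round numbers.
    seat_co2e_kg = 1000.0  # ~1 tonne CO2e per transcontinental seat (assumed)
    query_co2e_g = 1.0     # ~1 g CO2e per small chat query (assumed)

    queries = seat_co2e_kg * 1000.0 / query_co2e_g
    print(f"one seat ~ {queries:,.0f} small queries")  # -> 1,000,000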

happytoexplain•16m ago
Yes, it's being discussed a lot. No, it's not being discussed enough, nor by all the right people. It has the potential to cause untold suffering to the middle class of developed nations, since it's moving too fast for individual humans to adjust. On the Problem Scale, that puts it on the "societal emergency" end, which basically cannot be discussed enough.
sodapopcan•4m ago
Ya, I think "it's not being discussed enough" is a veiled way to say: "I can't believe so many people are ok with this shit."
duped•12m ago
What's not being discussed is that the people building these things are evil, and that they're doing it for evil purposes.

I spent some time thinking of a better word than "evil" before typing this comment. I can't think of one. Doing something bad that harms more than it helps for the purposes of enrichment and power is simply put: evil.

vonneumannstan•50m ago
It's really hard to take people who say this seriously: "If you asked me six months ago what I thought of generative AI, I would have said that we’re seeing a lot of interesting movement, but the jury is out on whether it will be useful"

Like I'm sorry, but if you couldn't see that this tech would be enormously useful for millions if not billions of people, you really shouldn't be putting yourself out there opining on anything at all. Same vibes as the guys saying horseless carriages were useless and couldn't possibly do anything better than a horse, which after all has its own mind. Just incredibly short-sighted and lacking curiosity or creativity.

skydhash•32m ago
The first car prototypes were useless, and it took a few decades to get to a good version. The first combustion engine dates to 1826. Would you have bought a prototype or a carriage for transportation back then?
volkk•6m ago
No, but AI isn't going to light on fire as I drive and potentially kill me. It's also not an exorbitant expense.
ivanstojic•50m ago
> If you asked me six months ago what I thought of generative AI, I would have said

It’s always this tired argument. “But it’s so much better than six months ago, if you aren’t using it today you are just missing out.”

I’m tired of the hype, boss.

deweller•47m ago
The second half of that argument was not in this article. The author was just relating his experience.

For what it is worth, I have also gone from "this looks interesting" to "this is a regular part of my daily workflow" in the same 6-month time period.

candiddevmike•40m ago
I think the rapid iteration and lack of consistency from the model providers is really killing the hype here. You see HN stories all the time about how things are getting worse, and it seems folks' success with the major models is starting to diverge heavily.

The model providers should really start offering LTS models (supported for at least 2 years) that deliver consistent results regardless of load, IMO. Folks are tired of the treadmill and just want some stability here, and if the providers aren't going to offer it, llama.cpp will...
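
(The "stability" story with local models really is just pinning a model file. A minimal sketch using the llama-cpp-python bindings; the model path, context size, and prompt below are placeholder assumptions, not a specific recommendation:

    # A locally pinned GGUF file never changes underneath you, unlike a
    # hosted endpoint. Requires `pip install llama-cpp-python` and a
    # model file on disk; the path below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/pinned-model.Q4_K_M.gguf",  # fixed artifact
        n_ctx=4096,  # context window
        seed=42,     # fixed seed for more repeatable sampling
    )

    out = llm("Q: What is the capital of France? A:", max_tokens=16)
    print(out["choices"][0]["text"])

Same weights and same quantization two years from now; that's the LTS.)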

KptMarchewa•23m ago
There is a difference between a quantized SOTA model and an old model. People want non-quantized SOTA models rather than old models.
jdjeeee•5m ago
Put that all aside. Why can’t they demo a model on max load to show what it’s capable of…?

Yeah, exactly.

aspenmartin•13m ago
Yea, I hear this a lot. Do people genuinely dismiss that there has been step-change progress over a 6-12 month timescale? I mean, it’s night and day; look at the benchmark numbers… “yea I don’t buy it”? OK, but then don’t pretend you’re objective.
benrutter•3m ago
[delayed]
dtnewman•46m ago
> The current landscape is a battle between loss-leaders. OpenAI is burning through billions of dollars per year and is expected to hit tens of billions in losses per year soon. Your $20 per month subscription to ChatGPT is nowhere near keeping them afloat. Anthropic’s figures are more moderate, but it is still currently lighting money on fire in order to compete and gain or protect market share.

I don't doubt that the leading labs are lighting money on fire. Undoubtedly, it costs crazy amounts of cash to train these models. But hardware development takes time, and it's only been a few years at this point. Even TODAY, one can run Kimi K2.5, a 1T-param open-source model, on two Mac Studios [1]. It runs at 24 tokens/sec. Yes, it'll cost you $20k for the specs needed, but that's hobbyist and small-business territory... we're not talking mainframe computer costs here. And surely this price will come down? And it's hard to imagine that the hardware won't get faster/better?

Yes... training the models can really only be done with NVIDIA hardware and costs insane amounts of money. But even if we see just moderate improvement going forward, this is still a monumental shift for coding if you compare where we are now to 2022 (or even 2024).

[1] https://x.com/alexocheema/status/2016487974876164562?s=20

AnotherGoodName•7m ago
And just to add to this: the reason the Apple Macs are used is that they have the highest memory bandwidth of any easily obtainable consumer device right now. (Yes, the Nvidia cards, which also have HBM, are even higher on memory bandwidth, but they're not easily obtainable.) Memory bandwidth is the limiting factor for inference, more so than raw compute.

Memory costs are skyrocketing right now as everyone pivots to using HBM paired with moderate processing power. This is the perfect combination for inference. The current memory situation is obviously temporary. Factories will be built and scaled, and memory is not particularly power hungry; there’s a reason you don’t really need much cooling for it. As training becomes less of a focus and inference more of a focus, we will at some point be moving from the highest-end Nvidia cards to boxes of essentially power-efficient HBM attached to smaller, more efficient compute.

I see a lot of “AI companies are so stupid for buying up all the memory” commentary around the place at the moment. That memory is what’s needed to run inference cheaply. It’s currently done on Nvidia cards and Apple M-series chips because those two are the first to utilise high-bandwidth memory, but the raw compute of the Nvidia cards is really only useful for training; they are just being used for inference right now because there’s not much on the market with similar memory bandwidth. But this will be changing very soon. Everyone in the industry is coming along with their own dedicated compute using HBM.
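
To put rough numbers on that (a sketch; the figures are my assumptions, not from the thread: ~800 GB/s of memory bandwidth per Mac Studio, and a mixture-of-experts model reading ~32 GB of weights per token, i.e. ~32B active parameters at 8-bit):

    # Decode speed is roughly capped by how fast the active weights can
    # be streamed from memory: tokens/sec ~= bandwidth / bytes-per-token.
    # All figures are assumptions for illustration.
    bandwidth_bytes_s = 800e9  # ~800 GB/s memory bandwidth (assumed)
    active_params = 32e9       # ~32B active params per token (assumed, MoE)
    bytes_per_param = 1        # 8-bit weights (assumed)

    tokens_per_sec = bandwidth_bytes_s / (active_params * bytes_per_param)
    print(f"~{tokens_per_sec:.0f} tokens/sec")  # ~25, near the 24 quoted upthread

Which is why bandwidth, not raw compute, sets the ceiling for single-stream inference.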

skydhash•40m ago
> that most software will be built very quickly, and more complicated software should be developed by writing the specification, and then generating the code. We may still need to drop down to a programming language from time to time, but I believe that almost all development will be done with generative AI tools

My strongly held belief is that anyone who thinks that way also thinks that software engineering is reading tickets, searching for code snippets on Stack Overflow, and copy-pasting code.

Good specifications are always written after a lot of prototypes, experiments and sample implementations (which may be production level). Natural language specifications exist after the concept has been formalized. Before that process, you only have dreams and hopes.

leoedin•22m ago
I've been playing around with "vibe coding" recently, generally a React front end and a Rust back end. Rust has the nice benefit that you only really get logic bugs if it compiles.

In the few apps I've built, progress is initially amazing. And then you get to a certain point and things slow down. I've built up a list of things that are "not quite right", and as I work through each one, all the strange architectural decisions the AI initially made start to become obvious.

Much like any software development, you have to stop adding features and start refactoring. That's the point at which not being a good software developer will really start to bite you, because it's only experience that will point you in the right direction.

It's completely incredible what the models can do. Both in speed of building (especially credible front ends), and as sounding boards for architecture. It's definitely a productivity boost. But I think we're still a long way off non-technical people being able to develop applications.

A while ago I worked on a non-trivial no-code application. I realised then that even though there's "no code", you still needed to give careful thought to data structures and UI and all the other things that make an application great. Otherwise it turned into a huge mess. This feels similar.

JBAnderson5•22m ago
I think part of the issue here is that software engineering is a very broad field. If you’re building another CRUD app, your job might only require reading a ticket and copy/pasting from Stack Overflow. If you’re working in a regulated industry, you’re spending most of your time complying with regulations. If you’re building new programming languages or compilers, you’re building the abstractions from the ground up. I’m sure there are dozens if not hundreds of other subfields that build software in other ways, with different requirements and constraints.

LLMs will trivialize some subfields, be nearly useless in others, but will probably help to some degree in most of them. The range of opinions online about how useful LLMs are probably correlates with which subfields people work in.

mkw5053•20m ago
The Uber comparison feels weak because their lock-in came from regulatory capture and network effects, neither of which LLMs have once weights are commoditized (are we already there?).
willtemperley•15m ago
It's important to remember that these things are almost certainly gaslighting people through subtle psychological triggers, making people believe these chatbots are far more than they are, using behavioural design principles [1].

I often find that when I come up with the solution, these little autocompletes pretend they knew it all along. Or I make an observation and they say something like "yes, that's the core insight into this".

They're great at boilerplate. They can immediately spot a bug in 1,000 lines of code. I just wish they'd stop being so pretentious. It's us who are driving these things; it's our intelligence, intuition, and experience that create solutions.

[1] https://en.wikipedia.org/wiki/Behavioural_design

The Internet Sucks Now

https://www.millionsofdeadbots.com/blog/posts/2025-12-25-the-internet-sucks-now
1•speckx•15s ago•0 comments

Detecting Dementia Using Lexical Analysis: Terry Pratchett's Discworld

https://www.mdpi.com/2076-3425/16/1/94
1•maxeda•39s ago•0 comments

Efficient String Compression for Modern Database Systems

https://cedardb.com/blog/string_compression/
1•jandrewrogers•1m ago•0 comments

Supply-chain attack: skill.md is like an unsigned binary

https://www.moltbook.com/post/cbd6474f-8478-4894-95f1-7b104a73bcd5
1•panarky•1m ago•0 comments

Beware: Government Using Image Manipulation for Propaganda

https://www.eff.org/deeplinks/2026/01/beware-government-using-image-manipulation-propaganda
1•glitcher•1m ago•0 comments

Why Unique Cross-Leg Designs Outperform Corner Legs

https://dreamhomestore.co.uk/collections/dining-room-furniture
1•tonypaterson•3m ago•1 comments

Human life span heritability is about 50% when confounding factors are addressed

https://www.science.org/doi/10.1126/science.adz1187
1•bookofjoe•4m ago•0 comments

Did We Overestimate the Potential Harm from Microplastics?

https://hackaday.com/2026/01/29/did-we-overestimate-the-potential-harm-from-microplastics/
1•lxm•4m ago•0 comments

Show HN: Julie Zero – my screen-aware desktop AI that works out of the box

https://github.com/Luthiraa/julie
3•luthiraabeykoon•7m ago•0 comments

Ask HN: Anyone tried Spotify's AI DJ feature?

2•playlistwhisper•8m ago•0 comments

Apple reports first quarter results

https://www.apple.com/newsroom/2026/01/apple-reports-first-quarter-results/
1•Garbage•9m ago•1 comments

Why do people support or oppose bike lanes? Shedding light on public opinion

https://theconversation.com/why-do-people-support-or-oppose-bike-lanes-our-research-sheds-light-o...
1•PaulHoule•9m ago•0 comments

Show HN: I Made Something Weird

https://lifeis.art/
1•xsh6942•9m ago•0 comments

I replaced a $120/year micro-SaaS in 20 minutes with LLM-generated code

https://blog.pragmaticengineer.com/i-replaced-a-120-year-micro-saas-in-20-minutes-with-llm-genera...
1•sysoleg•11m ago•0 comments

Show HN: We Built a "Nano Banana" for 3D Editing

https://hyper3d.ai/
1•Jill_lee•11m ago•1 comments

Journalist Don Lemon has been arrested

https://apnews.com/article/don-lemon-arrest-minnesota-church-service-d3091fe3d1e37100a7c46573667e...
1•josefresco•12m ago•0 comments

Show HN: Piano Dojo – Guitar Hero for Piano

https://piano-dojo.com/demo
1•acoretchi•12m ago•0 comments

Lemonade Autonomous Car Insurance (With Tesla FSD Discount)

https://www.lemonade.com/car/explained/self-driving-car-insurance/
1•KellyCriterion•13m ago•0 comments

The Absurdity of the Tech Bro – Mountainhead

https://www.newyorker.com/culture/infinite-scroll/mountainhead-channels-the-absurdity-of-the-tech...
2•burner_•16m ago•0 comments

I Built Gungi from Hunter X Hunter – Play It Now

https://www.gungi.io
1•wabamn•16m ago•0 comments

Why "Plot" Isn't a Four-Letter Word

https://countercraft.substack.com/p/why-plot-isnt-a-four-letter-word
1•crescit_eundo•17m ago•0 comments

GNU FTP Server

http://209.51.188.20
1•1vuio0pswjnm7•17m ago•1 comments

Complaining about Windows 11 hasn't stopped it from hitting 1B users

https://arstechnica.com/gadgets/2026/01/windows-11-has-hit-1-billion-users-just-a-hair-faster-tha...
1•keeda•17m ago•0 comments

U.S. Judge in Mangione Case Rules Prosecutors Cannot Seek Death Penalty

https://www.nytimes.com/2026/01/30/nyregion/death-penalty-luigi-mangione.html
12•toomanyrichies•19m ago•2 comments

Pi Monorepo: Tools for building AI agents and managing LLM deployments

https://github.com/badlogic/pi-mono
1•pretext•20m ago•0 comments

Ksnip the cross-platform screenshot and annotation tool

https://github.com/ksnip/ksnip
1•sirtoffski•21m ago•0 comments

Hey, ChatGPT: Where Should I Go to College?

https://www.nytimes.com/2026/01/28/style/chatgpt-college-admissions-advice.html
3•bookofjoe•22m ago•1 comments

Disrupting the IPIDEA residential proxy network

https://cloud.google.com/blog/topics/threat-intelligence/disrupting-largest-residential-proxy-net...
2•fanf2•22m ago•0 comments

Synchronization

https://en.wikipedia.org/wiki/Synchronization
1•downboots•22m ago•0 comments

Shopify connects any merchant to every AI conversation

https://www.shopify.com/news/ai-commerce-at-scale
1•petecooper•22m ago•0 comments