
LLMs are making me dumber

https://vvvincent.me/llms-are-making-me-dumber/
60•vincentcheng•4h ago

Comments

etblg•4h ago
> When I need to write an email, I often bullet-point what I want to write and ask the LLM to write out a coherent, cordial email. I’ve gotten worse at writing emails.

Think I'd rather just have the bullet points in the first place, to be honest; they'd have to be easier and quicker to read than an LLM soup of filler paragraphs.

Tempest1981•4h ago
For sure. If I get an email with 3 dense paragraphs, I'm more likely to mark it unread and come back to it later, after processing the other 20 emails in my inbox.
jebarker•4h ago
"While CS undergrads are still required to take classes on assembly, most productive SWEs never interact with assembly. Moving up the ladder of abstraction has consistently been good."

Gotta disagree. Adding abstraction has yielded benefits but it certainly hasn't been consistently good. For example, see the modern web.

zamderax•4h ago
It’s been overall good. Being able to access a web app or website by entering a URL is impressive!
PessimalDecimal•4h ago
Many browsers, especially Chrome, have abstracted away direct interaction with URLs. Would you also consider that good?
zdragnar•3h ago
You can still do that if you want to. Most people don't.

Back before I got a cell phone, I had many many phone numbers memorized. Once I got a cell phone with a contacts list, I just stopped. Now I have my parents and my wife's phone numbers memorized, and that's it.

URLs are much the same. On most websites, if I can see the domain is the one that I expect to be on, that's all I really care about. There's a few pages that I interact with the URL directly, but it's a minority.

jebarker•3h ago
You can serve web pages and render them in browsers all written in C. I'll concede that that's a useful level of abstraction over assembly.
nativeit•3h ago
A functioning URL is impressive?
PessimalDecimal•4h ago
The analogy likening LLMs to compilers is extremely specious. In both steps, the text written by the user/programmer is higher-level and thus "easier" but beyond that, the analogy doesn't hold.

- Natural language is not precise and has no spec, unlike programming languages.

- The translation from C (or another higher-level language) to assembly by a given compiler is deterministic in a way that the behavior of an LLM is not.

- On the flip side, the amount of control given to the tool versus what is specified by the programmer is wildly different between the two.
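The determinism point can be made concrete with a toy shell sketch (pure illustration, not real tooling: `tr` stands in for a compiler, `$RANDOM` for LLM sampling):

```shell
# A compiler is a function of its input: the same source always
# produces the same output (tr stands in for the "translation").
a=$(printf 'int main(){}' | tr 'a-z' 'A-Z')
b=$(printf 'int main(){}' | tr 'a-z' 'A-Z')
[ "$a" = "$b" ] && echo "same input, same output"

# An LLM samples from a distribution: the same prompt can yield a
# different completion on each run (emulated here with $RANDOM).
echo "completion variant $((RANDOM % 3))"
```

Both runs of the "compiler" agree byte-for-byte; the sampled line varies between invocations.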

rvz•3h ago
Exactly. The industry has encouraged mediocrity and inefficiency with over-abstraction and by abusing technologies in areas where they don't make sense for basic software.

This is what you have seen with the rise of some of the worst technologies (JavaScript) being used in places where they shouldn't be, because some engineers want to keep using one language for everything.

Which is how you end up with basic desktop apps written in Electron taking up 500MB each and using 1.2GB of memory. That doesn't scale well on a typical user's 8GB laptop.

Not saying it should be in assembly either (which also doesn't make sense), but the fact that a SWE is used to one language is a really poor excuse today.

Nothing wrong with using high-level compiled languages to write native desktop apps that compile to an executable.

lispisok•3h ago
>This is what you have seen with the rise of some of the worst technologies (JavaScript) being used in places where they shouldn't be, because some engineers want to keep using one language for everything.

NodeJS was the biggest mistake our industry made and I will die on this hill. It has taken the crown from null. People have been trying to claw it back with Typescript but the real solution was to drop JS altogether. JS becoming the language in the browser was an artifact of history, from when we didn't know where this internet thing was going. By the time NodeJS was invented we should have known better.

resonious•3h ago
Very subjective but IME, understanding assembly is correlated with being a skilled web developer. Even though you don't actually write assembly while doing web dev.
Swizec•4h ago
> When I need to write an email, I often bullet-point what I want to write and ask the LLM to write out a coherent, cordial email. I’ve gotten worse at writing emails

Just send the bullet points! Nobody wants the prose. It’s a business email not an art. This is a hill I will die on.

Prose has its uses when you want to transmit vibes/feelings/... For actionable info communication between busy people, terse and to the point is better and more polite.

It’s bad enough when I have to read people waffling. Please don’t make me read LLM waffle.

sshine•3h ago
> Just send the bullet points! Nobody wants the prose. […] It’s bad enough when I have to read people waffling. Please don’t make me read LLM waffle.

I use LLMs to shorten my emails.

seadan83•3h ago
I think this is the author's point. The ability to write short and concisely is a skill. So goes the saying: "If I had more time, I would have written a shorter letter."

Using LLMs to do that shortening is potentially hindering that practice.

The author's point, I think, is less about sending LLM waffle; it's more that they can't send anything distinguishable from LLM waffle anyway, due to a skills issue - because the LLM is so often used instead of building that skill.

I think the question is largely, can the LLM results be used for human learning and human training, or is it purely a shortcut for skills - in which case those skills never form or atrophy.

neom•3h ago
I think it's a fair hill to die on, I'll join you. I go so far as to say if I take a very direct tone with you after a formality and you keep up the formalities, it's a bit of a red flag. Gimmie the words with what you want only, please.
rahimnathwani•3h ago
The exception is when you're sending emails to people who don't have the same background knowledge or assumptions as you do.

Imagine:

  Write a coherent but succinct email to Ms Griffin, principal of the school where my 8yo son goes, explaining;
  - Quizlet good for short term recollection
  - no point memorising stuff if going to forget later
  - better to start using anki, and only memorize stuff where worth remembering forever
belorn•3h ago
That seems like an effective way to get Ms Griffin annoyed. Given the prevalence of cheating in education, they might be much more likely to identify that an LLM was used to generate the text, after which they label the email as spam and the parent as someone who would send them such spam.
KevinMS•3h ago
> Just send the bullet points! Nobody wants the prose.

But the recipient can just ask AI to convert the prose into bullet points.

roxolotl•2h ago
That’s what I can do with my newfound time now that LLMs write my emails for me: use LLMs to convert others' emails into bullet points!
MisterTea•2h ago
A long time ago I would write these stupidly long, wordy emails to my manager summing up my work week. He finally told me, "please, keep it short and sweet. I don't need to know every wire or line of code you touched. Just summarize it into a few sentences." Best conversation ever. Went from 2 hours of typing Friday afternoon to 10 minutes or so. I'm stumped as to why we went backwards.
the_arun•53m ago
Now it has gotten even more informal with Slack. Not sure how many still use email for internal communications.
holografix•4h ago
There’ll be a move to oral ability assessment across the board.

Oral exams, face to face interviews, etc.

If you think of the LLM as a tireless coach and instructor and not a junior employee you’ll have a wonderful opportunity. LLMs have taught me so so much in the last 12 months. They have also removed so many roadblocks and got me to where I wanted to be quicker. Eg: I don’t particularly care about learning Make atm but I do want to compile something to WASM.

handfuloflight•4h ago
Better check if that's really a "hearing aid", then.
bryan0•4h ago
I think another good historical analogy is the invention of writing. In Phaedrus[0] Plato argued that it may make people dumber.

0: https://en.m.wikipedia.org/wiki/Phaedrus_(dialogue)

idopmstuff•4h ago
I think the list of historical analogies is missing the biggest one - the internet.

Memorization used to be a much more important skill than it is today. I am probably worse at rote memorization than I was when I was 13. Am I dumber? I would say no - I've just adapted to the fact that memorization is much less important in a world where I have access to basically the entire recorded history of human knowledge anywhere, anytime.

LLMs are just another very powerful technology that changes what subdomains of intelligence matter. When you have an LLM that can write code better than any human being (and since I know I will get testy HN replies about how LLMs can't do that, I will clarify here that I mean this is a thing that is not true today but will be in the future), the skill that matters shifts from writing code to defining the problem or designing the product.

> Looking at historical examples, successful cases of offloading occurred because the skills are either easily contained (navigation) or we still know how to perform the tasks manually but simply don’t need to anymore (calculator). The difference this time is that intelligence cannot easily be confined.

This is true, but I think it just means we'll see a more extreme kind of the same change we've seen as we've created powerful new tools in the past. I think it's helpful to think of the tool less as intelligence and more as the products of intelligence that are relevant, like generating high quality code or doing financial analysis. You'll have tools that can do those things extremely well, and it'll be up to you to make use of them rather than worrying about the loss of now-irrelevant skills.

rvz•4h ago
> Assembly to C to Python. Almost no one writes assembly by hand anymore (unless you work at Deepseek)

Unless you are maintaining hardware or device drivers, which is done at any company that makes hardware: Apple, Google, Microsoft, Nvidia, SpaceX, Intel, AMD, ARM, Tesla, and the list goes on.

jebarker•4h ago
Yep. Or writing video codecs or other performance-critical software. It's amazing that people make blanket statements like this when really they're just not familiar with what other SWEs are doing.
the_snooze•3h ago
More broadly, there's a lot of value in knowing how to work with constrained systems: things that have to be offline, or radiation-hardened, or low-power, or low-spec, etc. Those tend to be resilient systems; i.e., things that people can quietly rely on instead of being subject to "move fast and break things."

Building web apps that you can update willy-nilly while running on arbitrarily powerful and always-available hardware isn't the entirety of software engineering.

NewsaHackO•1h ago
Correct, cream-of-the-crop software engineers doing bleeding-edge work will most likely never be supplanted by LLMs. I think the issue is that 90% of programmers do not do such work. The things most software engineers actually do (front-end web dev using a popular framework, MVC-like apps, gluing together APIs and libraries into a custom configuration of an otherwise commonly solved problem, etc.) are the things that LLMs excel at, and they will continue to improve as time goes on.
legohead•4h ago
I love LLMs, and actually feel they are making me smarter.

I'll be thinking of something in the car, like how do torque converters work? And then I start live talk session with GPT and we start talking about it. Unlike a Wikipedia article that just straight tells you how it works, I can dive down into each detail that is confusing to me until I fully understand it. It's incredible, for the curious.

steve_adams_86•3h ago
If you're curious about torque converters I suspect you're careful about this, but what's your information vetting process? I use LLMs via text, so I can verify info as it streams in. How do you verify what's spoken to you in a car?
0xEFF•3h ago
I do the same as GP on a regular 2-hour drive I take up I-5.

The vetting process is the same as if I were driving up I-5 with a gear head friend of mine having a conversation with them as we go.

legohead•3h ago
If something sounds off I just tell it I think it's wrong or to double check itself, similar to what I do with text.
spacemadness•3h ago
I would also rather use them as a tutor of sorts than for "please do things for me." I think they're quite useful in that regard, albeit I know not to trust them fully as the only source of information.
ZeWaka•4h ago
> While CS undergrads are still required to take classes on assembly, most productive SWEs never interact with assembly.

You may think this, but the principles are extremely relevant even in much 'higher tiers' of programming, such as the front-end. Performance optimization is always relevant, and understanding the concepts you learn from learning assembly is crucial.

Such courses also generally encourage a depth of understanding of the whole computing stack. This is more and more relevant in the modern age, where we have technologies such as Web*Assembly*.

pglevy•4h ago
The author mentions it at the end, but continuing to experience long-form content like reading books and listening to multi-hour podcasts — particularly in other knowledge domains — should counteract this.
000ooo000•3h ago
That's simply consumption and is by no means a stand-in for actual problem solving and learning.
kelsey98765431•4h ago
I have used a simple benchmark for productivity related personal workflow changes that has always served me very well and kept me feeling good about what i choose to use to do my job, and that's what i now call the airplane mode test. can i use this tool completely offline, isolated from the rest of the world? growing up wifi was not ubiquitous for me. i didn't live somewhere remote, i was just an early adopter of computers and carried a laptop in school when that was considered a rare piece of kit. learning programming i always kept the philosophy of can i do this on the train while going home in mind when selecting how i wrote code. i kept man pages handy and learned how to search them properly (man -k . | grep), learned how to access the gnu info pages on my machine, and even found the glibc manual tucked away safely in my /usr/share directory which i hadn't known was there. over the years i stayed away from stack overflow and google when i was writing code as much as possible, first looking at the resources available to me on my local machine.
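The `man -k . | grep` trick works like this: `man -k` (a.k.a. `apropos`) dumps the one-line description of every installed page, and `grep` turns that into an offline full-index search. A canned index stands in for real `man -k .` output below so the pipeline is reproducible anywhere:

```shell
# man -k emits "name (section) - description" lines for every page;
# piping through grep -i searches the whole index, fully offline.
printf '%s\n' \
  'grep (1) - print lines that match patterns' \
  'man (1) - an interface to the system reference manuals' \
  'info (1) - read Info documents' |
grep -i 'manual'
```

On a real system, replace the `printf` with `man -k .` itself.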

i now have qwen3. it runs locally on my machine. it can vibe code, it can oneshot, it can reason about complex non-code problems and give me tons of background information on the entire world. i used to keep a copy of wikipedia offline, only a few gigabytes for the text version, and even if that is too much there are reduced-selection versions available in kiwix.

i am fine with llms taking over a lot of that tedious work, because i am confident i will always have this tool the same as all my other tools and data that i backup and keep for myself. that means it's ok for me to cheat a bit here and there and reach out to the higher power models in the cloud the same way i would sometimes google an error message before even reading it if i am doing work to pay my bills. i have these rungs of the ladder to climb down from and feel like i am not falling into oblivion.

i think the phrase that sums this up best is work smarter not harder. im ok with accepting a smarter way of doing things, as long as i can always depend on it being there for me in an adverse situation.

tiffanyh•4h ago
I’ve been afraid of this as well.

Which is why I try to treat LLMs like a “calculator” to check my work.

I do things myself, then after I do it myself - ask an LLM to do the same.

That way, I’m still thinking critically, and as a result I actually get more benefit from the LLM, since I can be more specific in having it help me fill in gaps.

bgwalter•3h ago
For a critical article it lists quite a lot of pro-LLM analogies, which are false in my opinion.

The pocket calculator simplifies exactly one part of math and probably isn't even used that much by research mathematicians.

Chess programs are obviously forbidden in competitions. Should we forbid LLMs for programming? In line with the headline though, Magnus Carlsen said that he does not use chess programs that much and that they do make him dumber. His seconds of course use them in preparation for competitions.

LLMs destroy the essence of humanity: Free thought. We are fortunate that many people have come to the same conclusion and are fighting back.

AceJohnny2•3h ago
> Historical Analogies

I want to add another one to the author's list, which I think is even more relevant:

Writing.

Story goes, the Celtic druids relied on oral tradition and rejected writing, because they figured relying on writing was a crutch that made them weaker. They're gone now and, because of that choice, most of their culture with them.

Like Assembly to C to Python, as the author points out, LLMs allow us to move up the ladder of abstraction. There are obvious downsides to this, but I expect the evolution is inevitable.

The complaints about that evolution are also inevitable. We lose control, and expertise is less valued. We experts stand to lose a lot, especially if we cling to the old ways. We're in the midst of a sea-change, we know what we're losing, but we're not sure what we're gaining.

PessimalDecimal•3h ago
> Story goes, the Celtic druids relied on oral tradition and rejected writing, because they figured relying on writing was a crutch that made them weaker. They're gone now and, because of that choice, most of their culture with them.

Can you help me complete this analogy? By failing to rely on "writing" (read: LLMs), what will fail to be recorded and therefore remembered? Is the idea that if knowledge isn't encompassed by an LLM, in the future it will be inaccessible to anyone?

NewsaHackO•2h ago
Sure! I am not the OP, but it seems like the analogy is how being a Luddite and refusing to integrate modern tools leads to being left behind and becoming irrelevant. Another more contemporary example: when intravascular techniques were first being developed, many CT surgeons felt as though those procedures were beneath them and gladly let cardiologists take point for those while they continued to do open procedures. Because of this, they lost a lot of ground in billable procedures, and it negatively affected compensation and demand for CT surgeons. Now, cardiologists can do some minimally invasive valve repairs and ASD closures, which will continue to take business away from CT surgeons. If you refuse to adapt to new technologies, you will be left behind.
steve_adams_86•3h ago
"However… even this might still be too slow. Why understand every line of code deeply if you can just build and ship?"

Because the journey is the destination. Using AI extensively so far appears to be a path that mostly allows for a regression to the mean. Caring about what you're doing, being intentional, and having presence of mind is what leads to interesting outcomes, even if every step along the way isn't engaging or yielding the same output as telling an LLM to do it.

I suppose if you don't care about what you're doing, go ahead and get an LLM to do it. But if it isn't worth doing yourself... Why are you doing it?

Really, do you need those Chrome extensions?

Alternatively, though... If you do, but they aren't mission critical, maybe it's fine to have an LLM puke it out.

For something that really matters to you though, I'd recommend being deep in it and taking whatever time it takes.

Also the tutor approach seems great to me. I don't feel like it's making me dumber. Using LLMs to produce code seemed to make me lazy and dumber though, so I've largely backed off. I'll use it to scaffold narrow implementations, but that's it.

skywhopper•3h ago
The vibe coding examples are interesting to me. Okay, you can create chrome extensions and personal apps with these tools. The author seems to take it as a given that that’s the extent of useful programming. How do they work in maintaining huge applications over time that require interactions between dozens or hundreds or thousands of streams of work?
karaterobot•3h ago
> No one lamented the advent of calculators.

It's interesting that he lists a number of historical precedents, like the invention of the calculator, or the mechanization of labor in the industrial revolution, and explains how they are different than AI. With the exception of chess, I think he's wrong about the effects of all of them.

For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did. People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did. People (in aggregate) are worse at those skills now.

Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.

The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place? Is it crazy to think that not memorizing things because we can access printed (and digitized) material might have larger, unforeseen consequences on our brains, or our societies? Could mechanizing menial labor have induced some change in how we think, or have any long term effects on our bodies?

I think we're seeing—and will continue to see—that there are knock-on effects to technology that we can't predict beforehand. We think we're making a simple exchange of an old, difficult skill for a new, easy one, but we're actually causing a more far-reaching cascade of changes that nobody can warn us of in advance.

And, to me, the even scarier thing is that those of us who don't live through those changes will have no basis for comparison to know whether the trade-off was worth it.

drewcoo•3h ago
Calculators also ruined the ability to understand and use logarithms (slide rules).
aminsadeghi•3h ago
I'd argue that using calculators instead of learning how addition is done would hurt kids' ability to do mental arithmetic. It's an experiment we haven't tried, or at least not in places I've lived in. Sure, once you get how addition is done, feel free to free up your mind by skipping 2+ digit arithmetic using a calculator. Same as: sure, once you've learned what caching is and implemented a small prototype, feel free to ask Claude to implement caching for you.
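A "small prototype" of caching in the commenter's sense can be as little as a memoizing shell function (a sketch; the squaring stands in for any expensive computation):

```shell
# File-backed memoization: compute a value once, then serve every
# later request for the same key from the cache directory.
CACHE_DIR=$(mktemp -d)

cached_square() {
  key="$CACHE_DIR/$1"
  if [ ! -f "$key" ]; then
    # cache miss: do the "expensive" work and store the result
    echo $(( $1 * $1 )) > "$key"
  fi
  cat "$key"   # cache hit, or the value we just stored
}

cached_square 12   # computes and caches
cached_square 12   # served from the cache file
```

Once you've written something like this by hand, the cache-key/miss/hit vocabulary is yours, and delegating the production version is a different kind of decision.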
etblg•3h ago
> For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did.

> Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.

Don't kids still learn to do arithmetic in their head first? I haven't been in a school in decades but I remember doing it all sans calculator in elementary school. When you move on up to higher level stuff you end up using a calculator, but it's not like we skip that step entirely, do we?

aminsadeghi•3h ago
Exactly! Steph Ango (Obsidian creator) has said it well in his "Don't delegate understanding" essay: https://stephango.com/understand
threatofrain•2h ago
I wonder if in the place of many lower level skills one is then freed to explore higher order skills. We now have very fancy calculators, such as in the form of tools like notebooks that connect to data sources and run transformations and show visualizations.
jrapdx3•2h ago
> "People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did."

Thing is some people never were good at reading/using maps, much less creating them. Even with GPS at hand I still prefer seeing a map to know where I'm going. Anyway, retaining at least a modicum of "classic" skills is beneficial. After all, GPS isn't infallible. As with all complex technologies, possibility of failure warrants having alternatives.

I was recently on a cruise, someone asked the ship's navigator whether officers were trained on using old instruments like the sextant. He replied that they were, and continue to drill on their use. Sure, the ship has up-to-date equipment, but knowing the "old ways" is potentially still relevant.

> "The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place?"

Naturally, old skills fade with the advent of newer methods. There's a shortage of farriers, people who shoe horses. Very few people are being apprenticed in the trade. (Though I'm told the work pays very well.) Owning horses is a niche but robust interest, so farriers have full workloads; the occupation is not disappearing.

Point is that in real-world terms losing skills diminishes the richness of human lives because there's value in all constructive human endeavor. Similarly an individual's life is enriched by acquiring fundamental skills even if seldom used. Of course we have to parcel our time wisely, but sparing a bit of time to exercise basic capabilities is probably a good idea.

firejake308•3h ago
> GPS. It’s so reliable that I’m fine being unable to navigate. I’ve never gotten in a situation where I wish I had learned to navigate without Google Maps beforehand.

I feel like having a dying phone and needing to get back home from a new place late at night is entirely possible, so I think it's worth having at least a basic knowledge of the major highways in your locality and which direction each one goes.

kranke155•3h ago
Social media caused societal decay. Dating apps led to a loneliness epidemic. AI will make us dumber.

Digital applications lead to the opposite of what they were meant to do. This is a very reliable indicator for outcomes.

alganet•3h ago
From my perspective, the kind of loss expected with LLMs does not reveal itself in one generation.

What you described is more akin to laziness than loss of knowledge. It is also a trap. Your text almost reads as satire of the notion that AI could be harmful for learning, because we all know we can relearn those things. And we can, for now.

Several generations of it, when people start to forget simple things, is where the danger lies. We don't know if it will come to that or not.

CelestialMystic•3h ago
> My first response to most problems is to ask an LLM, and this might atrophy my ability to come up with better solutions since my starting point is already in the LLM-solution space.

People were doing this with Stack Overflow / Blogs / Forums. It doesn't matter if you look up pre-existing solutions. It matters whether you understand it properly. If you do that is fine, if you don't then you will produce poor code.

> Before the rise of LLMs, learning was a prerequisite for output. Learning by building projects has always been the best way to improve at coding, but now, you can build things without deeply understanding the implementation.

People completed whole projects all the time before LLMs without deeply understanding the code. I've had to work with large amounts of code where it was clear people never read the docs, never understood the libraries frameworks they were working with. Many people seem to do "Cargo Cult Programming", where they just follow what someone else has done and just adapt enough to solve their problem.

I've seen people take snippets from stack overflow wholesale and just fiddle until it works not really understanding it.

LLMs are just a continuation of this pattern. Many people just want to do their hours and get paid and are not interested and/or capable of actually understanding fully what they are working on.

> GPS. It’s so reliable that I’m fine being unable to navigate. I’ve never gotten in a situation where I wish I had learned to navigate without Google Maps beforehand. But this is also a narrow skill that isn’t foundational to other higher-order ones. Maybe software engineering will be something as obsolete as navigating where you can wholly offload it? However, that seems unlikely given the difference in complexity of the two tasks.

I think the author will learn the hard way. You shouldn't rely on Google Maps. Literally less than 2 weeks ago, Google Maps was non-functional for me (I ran out of data); I ended up using road signs and driving towards town names I recognised to navigate back. Learning basic navigational methods is a good idea.

comrade1234•3h ago
Hey man, it’s not necessarily the llm.
jasonthorsness•1h ago
I learn a lot from asking LLMs to do things especially in areas like front-end development where I don't know most features of CSS, HTML5, or React. All you have to do is read the code the LLM writes and ask it follow-up questions.

LLMs can accelerate learning. Everyone is optimistic about the idea of personalized tutors improving education. You can already use them like that while working on real-world projects.
