AI is like hyperprocessed foods for learning

https://blindsidenetworks.com/ai-is-like-hyperprocessed-food-for-learning/
25•ffdixon1•8mo ago

Comments

ffdixon1•8mo ago
Is overuse of generative AI by students acting like hyperprocessed foods for learning?

Quick dopamine hits. Immediate satisfaction. Long-term learning deficits.

How to break this cycle? I wrote this article to try to answer this question.

dtagames•8mo ago
It's a good one! I'm a lifelong fan of the leveling-up techniques you're talking about and I found they're essential when working with AI agents, especially.

I had the epiphany that all of the "AI's problems" were problems with my code or my understanding. This is my article[0] on that.

[0] https://levelup.gitconnected.com/mission-impossible-managing...

hackyhacky•8mo ago
Say what you will about Oreos and other processed foods, but they do actually contain calories. They are legitimately food.

Here's my experience as a professional educator: AI tools are used not as shortcuts in the learning process, but for avoiding the learning process entirely. The analogy is therefore not to junk food, but to GLP-1, insofar as it's something that you do instead of food.

Students can easily use AI tools to write a programming project or an essay. It's basically impossible to detect. And they can pass classes and graduate without ever having had to attempt to learn any of the material. AI is already as capable as a university student.

The only solution is also hundreds of years old: in person, proctored exams. On paper. And moreover: a willingness to fail those students who don't keep up their end of the bargain.

j7ake•8mo ago
On paper? Oral exams are much better in my opinion
hackyhacky•8mo ago
> On paper? Oral exams are much better in my opinion

I agree: they're great, if you have that luxury. But they don't scale.

petesergeant•8mo ago
I was talking to a high-school English teacher recently about building oral exams using ChatGPT voice-mode. Current models would struggle to provide a uniform experience across students, but it feels like it's within near-term reach.
hackyhacky•8mo ago
I don't think even-more-AI is the solution to AI.
dymk•8mo ago
It doesn't even have to be on paper. A computer science exam can be done on a monitored university computer. The only thing that needs enforcement is not using outside resources, to show that particular knowledge, and how to apply it, is actually in one's head.
trilbyglens•8mo ago
Here in Czechia they still do oral exams where you sit in a chair in front of an instructor and they ask you questions which you have to answer by speaking. I don't think there's any better way to show actual content mastery than that.
JohnKemeny•8mo ago
Not everyone does their best thinking under pressure—some students know the material well but struggle to perform in a high-stress oral setting.
hackyhacky•8mo ago
Yes, unfortunately that approach doesn't work as well with a larger number of students. Beyond the problem of finding time to examine each student in person, I also have to come up with entirely new questions for each one; otherwise an earlier student will reveal the exam's contents to those who follow. Perhaps classes are smaller in Czechia?
danpalmer•8mo ago
I think there is a future for AI tools in learning, and I think they could be hugely valuable. The problem is that we aren't teaching anyone (or learning as a society) how to use these tools well, and using them well will require discipline.

In education today there's a lot of focus on knowledge and testing, and therefore it's fairly natural for AI to be used to just answer questions instead of as a learning aid. If we had a focus more on understanding, I'd hope that use of AI would be more exploratory, with more back and forth to help students learn in a way that works for them. After all, if LLMs are basically just text calculators, every student having a concept explained to them in exactly the language they need would be pretty amazing.

ffdixon1•8mo ago
Oreos are food, but only good in controlled quantities. During covid, many of my co-workers cited putting on extra weight as they were unconsciously snacking on junk food while working at home. It was just too easy to have another bite when the plate of food was next to their mouse.

For learning, I think having an Oreo cookie (using AI) is OK once in a while, especially if you're hitting a wall and can't get through, but I think it's a very steep, slippery slope that leads to avoiding the learning process altogether.

I remember as a co-op student spending three days solving a particularly subtle bug in a C-based word processor. My grit was rewarded. On day three, I vividly remember staring at the code and the solution just popped into my head. That was one of the most formative experiences of my early years as a developer, and the feeling of elation never left me. I worry that AI will take away these moments, especially early in one's career.

Our brains have not changed in hundreds of years, and I agree that the in-person experience is actually the best. Humans learn best from humans. I'm trying to learn French, and Duo has been sad for a few weeks due to my absence, but that's not having the same effect on me as it would if a human French teacher were sad with me.

Regarding failing students, I personally had to take summer school twice and still ended up failing grade 12 and repeating the entire school year. Why? I was too focused on computers and nothing else. In retrospect, taking summer school and repeating grade 12 actually helped me catch up at a time when the stakes were low. If I hadn't, I would have definitely failed later in life when the costs were higher.

phillipcarter•8mo ago
Related is Claude for Education: https://www.anthropic.com/news/introducing-claude-for-educat...

It's adjusted to not just give answers, but (perhaps frustratingly for the student), force them to iterate through something to get an answer.

Like anything it's likely also jail-breakable, but as we've learned with all software, the defaults matter.

ffdixon1•8mo ago
> (perhaps frustratingly for the student), force them to iterate through something to get an answer.

IMHO, feeling frustration is the whole point -- it's how our brains rewire themselves, it's how we know we are learning, and it's how we build up the true grit to solve harder problems.

As we want to "feel the burn" in the gym, we want to "feel the frustration" when learning.

dfxm12•8mo ago
The issue would be with students who just want a certain grade. That's where the dopamine hit is. Maybe AI can write you a paper at home, but it can't fill out a blue book in a classroom. Maybe there needs to be an adjustment around the types of assignments or how to grade them, but the in-class exams have always held more weight anyway.

Just like we see posts here about how AI (at the very least, AI on its own) is ineffective at coding a product, these students eventually learn what the Wharton study proved: that AI is not effective at getting them the grade they want.

I know I'm lazy. I've tried shortcuts like AI, copying Wikipedia before that, hoping that just punching numbers into a TI-86 would solve my problems for me. They simply don't.

petesergeant•8mo ago
I feel like the article is not disciplined about maintaining its definitions of education and learning, but there's some interesting stuff. I've found (I think!) LLMs to be hyper-useful for enquiry-based learning: lots of "well does that mean that" and "isn't that the same as" and "but you said earlier that" and "could you use shorter answers and we'll do this step by step please".

I am curious to dig into "Generative AI Can Harm Learning"[0], referenced in the article. I think the summary in the article skips over some of the subtleties in the abstract though.

0: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486

ffdixon1•8mo ago
I re-read the abstract and they tried two different modes of ChatGPT-4, "base mode" and a "tutor mode". The tutor mode helped students more, but it cautioned at the end:

> Our results suggest that students attempt to use GPT-4 as a "crutch" during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills.

I think the caution is about the use of AI to short-circuit real learning, even when AI is in a tutor mode, and thereby avoid building up true grit.

Ultimately, in writing this article, my hope was that a student would read it and get angry -- angry that overuse of AI, using it as a crutch, is actually having a negative impact on their learning -- and resolve to use it only for efficiency and effectiveness, not as a substitute for true learning.

I was thinking of Richard Feynman's approach to learning when writing this article. He was a genius, so I didn't want the analogy to be unrelatable. However, he really enjoyed understanding first principles, and that enjoyment gave him such a solid foundation. He put in the necessary hours to learn, and what a remarkable life he enjoyed because of it.

BrenBarn•8mo ago
The article seems a little padded out, but I think the central metaphor is a good one.
fithisux•8mo ago
If AI is controllable, what students learn is controllable when it replaces "inefficient" humans.

I also agree with the title and its implications.

But hype is hype, and humans like to ride the hype.