
From Pokémon Red to Standardized Game-as-an-Eval

https://lmgame.org
1•Yuxuan_Zhang13•55s ago•1 comment

The Whole Country Is Starting to Look Like California

https://www.theatlantic.com/economy/archive/2025/06/zoning-sun-belt-housing-shortage/683352/
1•ryan_j_naughton•1m ago•0 comments

Eigenvalues of Generative Media

https://stackdiver.com/posts/eigenvalues-of-generative-media/
1•d0tn3t•1m ago•1 comment

Brazil's Supreme Court clears way to hold social media liable for user content

https://apnews.com/article/brazil-supreme-court-social-media-ruling-324b9d79caa9f9e063da8a4993e382e1
1•rbanffy•4m ago•0 comments

The New Skill in AI Is Not Prompting, It's Context Engineering

https://www.philschmid.de/context-engineering
2•robotswantdata•6m ago•0 comments

Ask HN: When will YC do a batch in Europe and/or Asia?

1•HSO•6m ago•1 comment

Repurposed Materials

https://www.repurposedmaterialsinc.com/view-all-products/
1•bookofjoe•7m ago•0 comments

Liberals, you must reclaim Adam Smith

https://davidbrin.blogspot.com/2013/11/liberals-you-must-reclaim-adam-smith.html
2•matthest•8m ago•1 comment

Symbients on Stage Coming Soon: Autonomous AI Entrepreneurs

https://www.forbes.com/sites/robertwolcott/2025/06/30/symbients-on-stage-coming-soon-autonomous-ai-entrepreneurs/
1•Bluestein•8m ago•0 comments

Can Large Language Models Help Students Prove Software Correctness?

https://arxiv.org/abs/2506.22370
1•elashri•12m ago•0 comments

Developing with GitHub Copilot Agent Mode and MCP

https://austen.info/blog/github-copilot-agent-mcp/
1•miltonlaxer•12m ago•0 comments

I got removed from GitHub for making open source stuff

2•Hasturdev•13m ago•2 comments

NASA plans to stream rocket launches on Netflix starting this summer

https://www.cnbc.com/2025/06/30/nasa-rocket-launches-netflix.html
1•rustoo•15m ago•1 comment

Large Language Model-Powered Agent for C to Rust Code Translation

https://arxiv.org/abs/2505.15858
1•elashri•17m ago•0 comments

Let's create a Tree-sitter grammar

https://www.jonashietala.se/blog/2024/03/19/lets_create_a_tree-sitter_grammar/
1•fanf2•18m ago•0 comments

Musk said to bet on Tesla delivering Robotaxi in June, those who did lost big

https://electrek.co/2025/06/30/elon-musk-bet-tesla-delivering-robotaxi-june-lost-big/
2•reaperducer•18m ago•1 comment

The story of how I acquired the domain name Onions.com

https://twitter.com/searchbound/status/1939658564420641064
1•eightturn•19m ago•1 comment

Offline-First AI Platform for Resilient Edge and IoT Applications

https://github.com/GlobalSushrut/mcp-zero
1•Global_Sushrut•21m ago•0 comments

Three-Dimensional Time: A Mathematical Framework for Fundamental Physics

https://www.worldscientific.com/doi/10.1142/S2424942425500045
1•haunter•22m ago•0 comments

Young job applicants fight fire (ATS systems) with fire (AI) – Global trends

https://www.coversentry.com/ai-job-search-statistics
2•coversentry•22m ago•0 comments

Google to buy fusion startup Commonwealth's power – if they can ever make it work

https://www.theregister.com/2025/06/30/google_fusion_commonwealth/
1•rntn•24m ago•0 comments

A Haaretz article on dispersing crowds became a story on the IDF shooting people

https://twitter.com/AdamRFisher/status/1938959933803728997
3•nailer•24m ago•4 comments

Apple Execs on what went wrong with Siri, iOS 26 and more [video]

https://www.youtube.com/watch?v=wCEkK1YzqBo
1•amai•24m ago•0 comments

Adding Text-to-Speech to Your Blog with OpenAI's TTS API

https://econoben.dev/posts/adding-text-to-speech-to-your-blog-openai-tts-pipeline
1•EconoBen•30m ago•1 comment

Do Car Buyers Care Which Engine Is Under the Hood? A Ford Exec Doesn't Think So

https://www.thedrive.com/news/do-car-buyers-care-which-engine-is-under-the-hood-a-ford-exec-doesnt-think-so
3•PaulHoule•33m ago•1 comment

CertMate – SSL Certificate Management System

https://github.com/fabriziosalmi/certmate
2•indigodaddy•35m ago•0 comments

Ask HN: How to build a LifeOS using vibe coding?

1•agcat•36m ago•0 comments

Show HN: On-chain Fund Administration Protocol

https://www.fume.finance/
1•fume_protocol•36m ago•0 comments

Portal, for the C64

https://www.jamiefuller.com/portal/
4•rbanffy•37m ago•0 comments

Defending Savannah from DDoS Attacks

https://www.fsf.org/bulletin/2025/spring/defending-savannah-from-ddos-attacks
3•HieronymusBosch•42m ago•0 comments

The role of the University is to resist AI

https://www.danmcquillan.org/cpct_seminar.html
61•milen•4h ago

Comments

tempodox•4h ago
> Society can't throw up its hands in shock as students outsource their thinking to simulation machines when fifty years of neoliberalism has masticated education into something homogenised, metricised and machinic. Meanwhile, so-called Ed Tech has claimed for decades that learning is informational rather than relational and ripe for technical disruption.

So the university cannot effectively resist AI without first resisting these ideas. I hope it can be done.

bachmeier•3h ago
I can kind of see the author's point, though I don't agree with it. AI is what made the problems obvious. Students didn't start cheating with the appearance of AI chatbots.

My view is that university classes should be taught in such a way that students can use AI as much or as little as they desire in order to learn the material. Evaluation should primarily be done in the classroom without access to AI. 90% of the grade in my undergraduate course comes from in-person exams. I don't care how they learned the material. This can be a problem for composition classes, for instance, but the problem existed long before the chatbots.

> AI is actually a giant material infrastructure with huge demands for energy, water and concrete, while the supply chain for specialised computer chips is entangled with geopolitical conflict. It also means that the AI industry will beg, borrow and steal, or basically just steal, all the text, images and audio that it can get its spidery hands on.

Sure. We don't know yet how the economics will play out. We don't know the actual cost of LLM and other AI services; we only know what companies are currently charging for them, and since they're competing for mindshare, the prices are most definitely being held low. To a large extent, the whole thing has demonstrated what can be done in the short term in the absence of copyright restrictions, and now we have to see the long-term effects of that removal.

I agree with many of the points in the article but don't understand how that turns into a recommendation to "resist".

trod1234•3h ago
Wouldn't we just be in the first phase of the long-term effects of removing copyright restrictions?

The second phase would be copyright being claimed on the models themselves and their derivative works, paradoxically expanding copyright to things it couldn't reach before the black box, and restricting use to those who own the models.

Initially things are always rosy; then they get cut back to make profit and create moats.

freed0mdox•3h ago
I absolutely agree with you. In the right hands, an LLM is a teaching tool, and the calls to resist it are as ill-founded as calls to resist the chalkboard would be.

One of my favourite uses of LLM is the reverse-dictionary, for example:

Give me one Saxon and one Romance word meaning "to write".

Saxon (Germanic origin): scratch — Old English scrætan, linked to marking or incising.

Romance (Latin origin): inscribe — from Latin inscribere, "to write on/in."

Genius!
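The pattern is trivial to script, too. A minimal sketch, assuming the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name is illustrative:

    # Reverse-dictionary lookup via an LLM: describe a meaning, get
    # etymologically contrasting candidate words back.
    # Assumes: `pip install openai` and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    def reverse_dictionary(meaning: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": "You are a reverse dictionary. Given a meaning, "
                            "return one Saxon (Germanic) and one Romance "
                            "(Latinate) word for it, with brief etymologies."},
                {"role": "user", "content": meaning},
            ],
        )
        return resp.choices[0].message.content

    print(reverse_dictionary("to write"))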

lawlessone•3h ago
You could literally google that question before LLMs.
zo1•2h ago
Yes, and get bombarded with 20 ads, go through a few blog-spam articles about "10 of the coolest old-Saxon words you never heard before but use every day", open the website and get old-school popups in the form of GDPR spam and an unnecessary Google account sign-in prompt, and close six ads before giving up. But you're insistent, so you repeat the search with Reddit added to the query, and maybe you land on some Old English-focused subreddit, or maybe, maybe, maybe you dig up a decent 2010s website that has the thing you want.

Or you just ask the damn AI that has gone through the useless corpus of the ad-ridden, VC-infested web and, somehow magically, through a lot of effort, math, and 150 gigakilowatts of electricity, has extracted the piece of info you want and simply gives it to you with a bit of annoying fluff.

My time is precious, and I want to see the useless web burn.

lawlessone•2h ago
>My time is precious, and I want to see the useless web burn

I'm guessing from your two-paragraph reverse Unabomber manifesto above that your time is not that precious.

add-sub-mul-div•1h ago
> Yes, and get bombarded with 20 ads

It's wild that people don't see that LLMs are following the same playbook as streaming etc., and in time will predatorily monetize in every way possible. If you think people are trapped as customers because they can't do without TV shows, imagine five years from now, when it's general thinking itself that people have become dependent on the tech giants for.

freed0mdox•1h ago
Maybe for simple cases, sure, but for complicated sentences the ability to map approximate/fuzzy meaning <-> words is super helpful, especially for ornamentation and ESL scenarios.

And an LLM doesn't completely remove the "burden" of reading the dictionary to make sure the meaning is indeed fitting, but it shortcuts the discovery by a lot. It also helps you learn new words, lol. I see it as a supercharged thesaurus.

IMHO this applies to all general research: one needs to be an utter monkey to copy LLM-generated references without checking them first, so if anything, it trains critical thinking for free.

lawlessone•4m ago
Fair reply. As long as you're confirming it all.
coffeefirst•2h ago
I have this idea, and I think you're landing on something similar, that LLMs can either be a bicycle for the mind (like your reverse dictionary) or an opiate for the mind (write my entire letter for me).

This isn't all that new, given that it's a play on a Jobs quote about computers. And regular old software can likewise both unleash creativity and create social media brainrot.

The AI algorithms aren't the problem; it's how they're primed, marketed, and used.

There's absolutely nothing stopping us from releasing a bot that's great at looking stuff up and citing sources but that, when asked to write an essay or make a decision for you, declines because that's not its job.
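What would that look like in practice? A minimal sketch of that priming, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the refusal policy lives entirely in the system prompt, and the model name is illustrative:

    # A "bicycle, not opiate" bot: answers lookup questions with sources,
    # declines to ghost-write. The policy is just a system prompt.
    # Assumes: `pip install openai` and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    POLICY = (
        "You help users look things up and you always cite sources. "
        "If asked to write an essay or a letter, or to make a decision "
        "for the user, politely decline and offer references instead."
    )

    def ask(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system", "content": POLICY},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

Whether anyone ships that as the default is a business decision, not a technical one.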

lawlessone•2h ago
>I have this idea, and I think you're landing on something similar, that LLMs can either be a bicycle for the mind

The Just Eat of the mind ;)

thanatropism•3h ago
> new pandemic is killing millions

> yes, but viruses existed long before it

nitwit005•2h ago
I had a freshman class where 11 people out of 113 got caught directly copying journals they were supposed to maintain over the course of the semester. That's a minimum cheating rate of about 10% (11/113 ≈ 9.7%).

You can find old surveys asking university students how often they cheat. Let's just say they don't paint a positive picture.

ToucanLoucan•3h ago
> Evaluation should primarily be done in the classroom without access to AI.

I grant that I have no evidence for this claim, but I don't see how it's reasonable to teach a subject with access to such a powerful tool and then remove that tool to assess what the student has learned. My own uses for LLMs, limited as they may be, are explicitly for things I don't care to know, and I find it hard to imagine ChatGPT helping me learn anything in a way where my understanding and use of that knowledge don't hinge directly on continuing to have access to it. More broadly, the student will presumably have access to it after the class ends, so banning it runs up against that old axiom that school is meant to prepare you for working life.

My math classes never interested me, I did the work on calculators whenever possible, and sure I have decent mental math skills, but I still pull out a calculator (app) for everything because... my meat brain just isn't as good at this task as this silicon one, and not only does every smartphone in existence have one, if you really don't want a touchscreen version, they can be had at any retailer in America for like $5-10.

PollardsRho•2h ago
Students shouldn't be treating class material as something they "do not care to know."

AI can be used in ways that lead to deeper understanding. If a student wants AI to give them practice problems, or essay feedback, or a different explanation of something that they struggle with, all of those methods of learning should translate to actual knowledge that can be the foundation of future learning or work and can be evaluated without access to AI.

That actual knowledge is really important. Literacy and numeracy are not the same thing as mental arithmetic. Someone who can't read literature in their field (whether that's a Nature paper or a business proposal or a marketing tweet) shouldn't rely on AI to think for them, and certainly universities shouldn't be encouraging that and endorsing it through a degree.

I think the most important thing about that kind of deeper knowledge is that it's "frictional", as the original essay says. The highest-rated professors aren't necessarily the ones I've learned the most from, because deep learning is hard and exhausting. Students, by definition, don't know what's important and what isn't. If someone has done that intellectual labor and then finds AI works well enough, great. But that's a far cry from being reliant on AI output and incapable of understanding its limitations.

ToucanLoucan•2h ago
> Students shouldn't be treating class material as something they "do not care to know."

> AI can be used in ways that lead to deeper understanding.

> all of those methods of learning should translate

Shouldn't be, can be, should. How can we assess if a student has used AI "correctly" to further their understanding vs. used it to bypass a course they don't believe adds value to their education?

> Someone who can't read literature in their field (whether that's a Nature paper or a business proposal or a marketing tweet) shouldn't rely on AI to think for them

That's exactly what tons of pro-AI people are doing. There's an argument to be made that that's the intended purpose of the tool: Artificial Intelligence, sold on the promise of augmenting your own mental acuity with that of a machine. Well, what if you're a person who doesn't have much acuity to augment? It's mean, but those people exist.

pebbly_bread•2h ago
The difficulty comes when you don't know to google, or to ask the LLM, because you don't realise that a particular challenge needs addressing. I can build a completely functional webapp that has absolutely no security, and there may be no clear "I should google how to do this" moment that would steer me toward tools that would save me from that mistake.
kevinventullo•1h ago
> I don't see how it's reasonable to teach a subject with access to such a powerful tool and then to remove that tool to assess what the student has learned

Isn't this basically the paradigm of a closed-book exam? I personally use LLMs for learning by treating them like a textbook or Wikipedia article I can ask follow-up questions of.

Though to be clear, I am disappointed with the experience about 50% of the time.

brailsafe•2h ago
> 90% of the grade in my undergraduate course comes from in-person exams

Tangent: I've never thought exams should be anything but in-person, but I've also never thought they should be so heavily weighted toward one or two lucky days, not that that's necessarily what you're suggesting. I recall failing my data structures and algorithms midterm, which largely consisted of writing syntactically correct Java by hand, mostly because exams don't really provoke a sense of panic in me; the three evening hours the course occupied weren't exactly prime productivity time, so I just kind of got bored and zoned out, since I knew it didn't really matter outside the scope of grades. I think I ended up with a C or something after getting a second shot at the final.

I'd later learn I have ADHD, but there were numerous courses where my profs told me they were straight-up disappointed I failed so hard, since I evidently stood out as the most engaged person in the classroom, handling the course material and assignments just fine, having returned to the classroom in my late twenties after years as a paid developer. I have no idea how doctors who clearly have a similar type of attention make it through med school; maybe the material is just sufficiently more difficult to stay engaging.

There's nothing I can do to provoke a sufficient stress response in an exam environment, and I've basically accepted that it comes down to a dice roll: whether the exam is engaging enough, whether I get a good sleep the night before, whether any number of other uncontrollable variables work out in my favor. Ironically, a persuasive essay in a history class turned out to be the perfect format.

In some sense the prospect of more heavily weighted analog exams does scare me a bit, but I don't really see much of a way around it, as long as we keep accepting that grades and academic performance are a sufficient measure of something worth measuring, rather than the somewhat arbitrary filtering mechanism they became. If my career in software fails, I might have to re-enter a system that's even more stacked against me than it was, unless it's a hands-on trade, presumably.

bachmeier•1h ago
> so heavily weighted towards like one or two lucky days

Yeah. I took some of those classes (they were more common back then) and didn't feel they were a great measure of how much I knew. I give four exams. The students will have seen related questions on the homework and in lecture before taking the exams. Anyone who's actually been learning the material will find the exams easy, and those who use AI, get the answers from someone else, or use whatever other method to collect the homework points will be lost on the exam. At least that's my goal. Teaching is definitely an imperfect art.

ChrisArchitect•3h ago
Previously: https://news.ycombinator.com/item?id=44353422
Akranazon•3h ago
Thank you. This author's writing is a careful exemplar of everything rotten about higher ed. The article is a great compilation of features, stylistic and material, that need to be crushed and discarded from the university.
lawlessone•2h ago
I'm not in academia myself. But why do some people (you) get white-hot with rage about the people who are?
Akranazon•2h ago
American universities are important institutions and do a lot of good work. However, look no further than this article for the source of the "white hot rage": it's bloviating, vacuous, empty fluff, with no meaningful point or persuasive argument, and the statements that come closest to resembling coherent points are wrong and bad (see https://www.danmcquillan.org/questions_for_anthropic.html). The author feels threatened because he realizes that an AI could produce his writing style much better than he does, given the absence of original thought.

The author is a kind of free-rider: he persists in his position having read just enough continental philosophy, and memorized the right incantations (buzzwords), to communicate the "aligned" political subtext. Academic left-wing jargon: you have to be against neoliberalism, the "tech broligarchy", Palantir (the military-industrial complex), technogenesis... Oh, and you have to shoehorn as many race-politics subjects into your article on AI as possible, even if they have virtually nothing to do with the topic (the KKK, white genocide, racial superiority, supremacy, eugenics). Points for creativity, I guess.

lawlessone•1h ago
I thought his reference to the AI being neutral on the KKK was relevant, as it demonstrated that the AI doesn't really understand what it talks about.
Akranazon•1m ago
Each of the terms used by the author, which I listed above, is fine and meaningful on its own. The problem is that it's jarring when academics try to shoehorn their pet issues into as many topics as possible, and combined with the raw density of academic verbiage, it gets quite grating. It tells you a lot about the sensibilities and priorities of the academic.
PeterStuer•3h ago
There is a serious discussion to be had about the impacts of AI: its effects on how we approach education, the role of academia in AI research, and so on.

Sadly, this article, or, let's be honest, rant, is not a contribution to that discussion.

4ndrewl•3h ago
Because?
fallinditch•3h ago
That's a lot of words for one practical recommendation: university councils for discussing "AI conviviality".

I think the author presents a one-dimensional "AI bad" view and fails to see the bigger picture, which is ironic considering all the fine words he uses.

I agree that AI tools can potentially weaken some of our lower-level cognitive functions, but on the flip side they also enable us to operate at higher levels of ability, planning, conceptualization, and execution.

This is undoubtedly a point of inflexion for universities: they should be working out how to strike a new deal with students and society, one far more nuanced and constructive than mere 'resistance' to AI.

Animats•3h ago
That's really the only tangible recommendation in the article.

The real problem for universities is this: much of what classical academia claims is important is not all that hard for an LLM. Writing "compare and contrast" student papers, doing research that consists of digging through existing texts and summarizing, and writing in a formal style are all things LLMs do, probably better than most undergraduates.

This shakes the philosophical foundations of academia. What are universities for now? Job training? Sorting the winners from the losers? Something else?

mariusor•2h ago
> but on the flip side the AI tools also enable us to operate on higher levels of ability, planning, conceptualization and execution.

Are there any concrete examples of this? Has any researcher, engineer, artist, etc. come forward and said: "yes, this work you esteem so much was created by me with the help of AI"?

To me, your words sound more like wishful thinking than a description of the current situation. I'm willing to be set right, though.

Imnimo•3h ago
>Generative AI's main impact on higher education has been to cause panic about students cheating, a panic that diverts attention from the already immiserated experience of marketised studenthood. It's also caused increasing alarm about staff cheating, via AI marking and feedback, which again diverts attention from their experience of relentless and ongoing precaritisation.

>The hegemonic narrative calls for universities to embrace these tools as a way to revitalise pedagogy, and because students will need AI skills in the world of work. A major flaw with this story is that the tools don't actually work, or at least not as claimed.

>AI summarisation doesn't summarise; it simulates a summary based on the learned parameters of its model. AI research tools don't research; they shove a lot of searched-up docs into the chatbot context in the hope that will trigger relevancy. For their part, so-called reasoning models ramp up inference costs while confabulating a chain of thought to cover up their glaring limitations.

If AI tools do not actually work, how are students able to cheat with them? It seems like that would be a problem that would solve itself - a student would attempt to use AI to cheat, it would fail to complete the assignment, and the student would get a bad grade.
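For concreteness, the "research tool" pattern the article dismisses is simple enough to sketch: retrieve documents for a query, stuff them into the model's context, and ask for an answer. A minimal version, assuming the OpenAI Python SDK; search_docs is a hypothetical stand-in for a real search index, and the model name is illustrative:

    # Minimal sketch of the "shove searched-up docs into the chatbot
    # context" pattern (retrieval-augmented generation).
    # Assumes: `pip install openai` and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    def search_docs(query: str) -> list[str]:
        # Hypothetical stand-in for a real search index or vector store.
        return ["...doc snippet 1...", "...doc snippet 2..."]

    def research(query: str) -> str:
        context = "\n\n".join(search_docs(query))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided documents and "
                            "say which snippet supports each claim."},
                {"role": "user",
                 "content": f"Documents:\n{context}\n\nQuestion: {query}"},
            ],
        )
        return resp.choices[0].message.content

Whether that counts as "research" is exactly the dispute, but it clearly works well enough for students to pass things off with it.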

lawlessone•3h ago
>If AI tools do not actually work, how are students able to cheat with them?

Cheating doesn't have to work for it to be cheating.

If you get caught robbing a bank, that doesn't un-rob the bank.

Imnimo•2h ago
But the students would quickly learn not to try to cheat in this way. It would be a non-issue.
lawlessone•2h ago
>But the students would quickly learn not to try to cheat in this way. It would be a non-issue.

We have police and prisons yet people still commit crimes.

trollbridge•3h ago
Teach, and require, students to handwrite (without a machine such as a smartphone or a PC nearby) whatever quizzes, homework, and exams they need to do. Of course, this would mean professors and TAs would have to go back to actually hand-scoring work, instead of lazily leaving it to their classroom-management software.

Beyond that written work, more of what universities examine should be students actually standing up and speaking about their work (without a machine assisting them).

karaterobot•3h ago
> The way this technology works means that generative AI applied to anything is a form of slopification, of turning things into slop. However, where AI is undoubtedly successful is as a shock doctrine, as a way to further precaritise workers and privatise services.

You can't claim both that AI produces worse results and that it will be used to manipulate the job market: savvy companies would outcompete by not adopting AI and hiring up the victims of AI layoffs. If either of his statements is true, the other is false.

This whole article, man. I don't know where to start with it. It definitely reminds me of grad school, in a bad way.

The university has been on a glide path toward irrelevance for quite a while—long before AI was a going concern—and the humanities and social sciences, in particular, have been skimming the treetops since at least when I was in school at the turn of the century. The role of the university is to teach and do research. AI can be a tremendous asset for both of those, and it's not going away, so deal with that reality.

dismalaf•2h ago
The role of the university is neither to resist nor to embrace.

When I was in university, I did finance + economics + a bunch of other random stuff from CS to archaeology to philosophy.

One subject that was interesting from a technology standpoint was statistics. I went through university at a point when ML was a thing but LLMs obviously weren't. R was established, Python was beginning to get popular in the domain, and you potentially had all sorts of tech to help with stats.

In the introduction to stats, no technology was allowed. Every single problem was done by hand; every quiz and test had no calculators and no multiple choice, just problems to work through by hand. If you cheated on assignments, you'd obviously fail the tests (which were >50% of the course). Problem solved: we had to learn without aids. In the second stats course, everything was allowed, and I did all my assignments with R. The point was simply learning: first the theory, then how it's done in the real world.

University absolutely should teach all the theory, concepts, and history before AI. And then it should also teach how to use AI, since it's a thing in the real world.

People just need to stop thinking about university as all about grades and check marks, and learn to learn.

waffletower•1h ago
We keep hearing the argument that AI datasets are built via "stealing", as if the fair use doctrine did not exist. Large copyright holders are the obvious beneficiaries of such denials, and perhaps the OP isn't just a mindless parrot on this point but an active participant in intentional subterfuge. Copyright infringement can occur at the output of AI models, people, not at the input. For the author to be ethically consistent on this point, they could never use a university library again.