
Paper2Code: Automating Code Generation from Scientific Papers

https://arxiv.org/abs/2504.17192
133•Jerry2•9mo ago

Comments

colkassad•9mo ago
It would be neat to run their pdf through their implementation[1] and compare results.

https://github.com/going-doer/Paper2Code

endofreach•9mo ago
Damn, I was hoping the link was your result of that. Please do that. I can't start another project currently, but I'd love the short result as an anecdote. If you don't do it, I might have to. Please let me know. Great idea, really.
omneity•9mo ago
If I were the paper author I would have done it and included the results as an appendix or a repo.
JackYoustra•9mo ago
haha would that itself be a product of the paper then?
omneity•9mo ago
Maybe by doing it enough times o3-mini will end up reimplementing itself?
endofreach•9mo ago
Imagine, this was actually the consequence— against all odds. The true power of AI is about to be discovered... through this silly experiment... and you are the one... all you gotta do— is do it. And imagine you don't do it, because you think it can't lead to such a serious result... and if we miss this great leap forward... go, throw your life away and do this. now. the universe is waiting on you, my friend.
barotalomey•9mo ago
That's how nothing ever works outside fiction.
colkassad•9mo ago
It'd be something like this that creates the Singularity, just you see.
protolyticmind•9mo ago
I thought it would be humorously ironic :D
polygot•9mo ago
I decided to do that, and made Paper2Code2Code: https://github.com/alexyorke/Paper2Code2Code/tree/main
IshKebab•9mo ago
And... does it work?
bjourne•9mo ago
It relies on OpenAI's o3-mini model which (I think) you have to pay for.
sitkack•9mo ago
I have had good results doing bidirectional programming in Tex <=> Python.
somethingsome•9mo ago
Can you give more details? I'm curious
sitkack•9mo ago
Give it a try, have it teach you tex for the summation notation, have it write the code, modify the code, have it translate back to tex. Repeat.

You can do a quick test to see which models have been trained on tex.

Keep a tex visualizer handy.
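
A toy round-trip of the kind described above, using the summation \sum_{i=1}^{n} i^2. The mapping between the TeX string and the Python function is hand-written here for illustration, not model output:

```python
# TeX: \sum_{i=1}^{n} i^2
tex = r"\sum_{i=1}^{n} i^2"

def sum_of_squares(n: int) -> int:
    # Direct Python rendering of the summation above.
    return sum(i ** 2 for i in range(1, n + 1))

# Closed form n(n+1)(2n+1)/6 cross-checks the translation.
assert sum_of_squares(10) == 10 * 11 * 21 // 6
print(sum_of_squares(10))  # 385
```

The cross-check against the closed form is the useful habit: when a model translates notation to code, verify the code against something the notation already tells you.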

somethingsome•9mo ago
I like the idea of having automatic code creation from papers, but I’m scared of it.

Suppose you get a paper, you automatically implement the code, then modify it a bit with a novel idea and publish your own paper. Then somebody else does that with your paper, and so on. At some point we will have a huge quantity of vibe-coded code on GitHub, and two similar papers will have very different underlying implementations, making them hard to reason about and hard to change.

From a learning perspective, you try to understand the code, and it's all spaghetti, and you lose more time understanding the code than it would take to just reimplement it. You also learn a lot by not only reading the paper but also reading the authors' code, where most of the small details reside.

And I'm not even talking about the reliability of the code, or the tests needed to know that it's the correct implementation. Authors try to make papers as close as possible to the implementation, but sometimes subtle steps are removed, sometimes out of inadvertence, sometimes because the number of pages is limited.

A paper and an implementation are not a one-to-one mapping.

tomrod•9mo ago
> we will have a huge quantity of vibe-coded code on GitHub

That may actually be an improvement over much of the code that is generated for papers.

Narew•9mo ago
Honestly, the code from my interns has greatly improved since they started using AI. And there is lots of really ugly, hard-to-read code from papers. So I don't think having code completely generated by AI will be an obvious loss of readability :)
somethingsome•9mo ago
Very interesting, do you have a specific approach to educate them in how to use LLMs? Or do they freewheel? Do you give advice? If so, what kind?

I would love to have a structured approach to help students learn to use LLMs better. What I have observed (for uni students) is that they produce better code overall, but have no idea how it works, and would not be able to reproduce it without LLMs. (This holds from first-year to final-year students.)

Narew•9mo ago
For the moment it's a bit freewheel. And I agree the code is better, but they could probably not reproduce it themselves. I honestly don't know how to "force" them to understand the code the LLM writes if the code is clean. But this happens when the code produced by the LLM is overcomplicated or bad, and we catch that by doing code review. I have the impression it will create even more disparity between students: those who just use LLM code and those who try to understand it.
exe34•9mo ago
> you try to understand the code, and it's all spaghetti, and you lose more time understanding the code than it would take to just reimplement it.

I agree with you in general, but maybe the jump would be similar to the one from hand-written punchcards/assembly to higher level compilers. Very few people worry about the asm generated from GHC for example. So maybe a lot of code would be like that. I also imagine at some point a better intermediate language for LLMs to generate will be discovered and suddenly that's how most programs will be written.

somethingsome•9mo ago
I would love that. I mostly work with ideas, and the code is an implementation detail for me, so yes, in some ways having automated code generation would allow me to be way more productive. I'm not against it; I'm just scared about the efficiency of the approach by an LLM (at the moment at least).

The example they give is implementing deep learning papers; I find those papers the easiest to implement, compared to, say, some obscure algorithm that can't rely on frameworks such as PyTorch and where speed is critical.

I can't find the essay, but I think it was Wolfram who wrote that we should let students use Mathematica and educate them in using it from a young age. The rationale: before, you had to use logarithmic tables, which took much time during education. Then, with the advent of the calculator, students could instantaneously compute logarithms, so they could focus on more advanced ideas that use them. With Mathematica they can automatically execute matrix operations, so they spend most of their time thinking about matrix operations instead of just learning how to manipulate a matrix by hand.

So with more powerful tools, you can expand the capabilities faster.

But the main difference I see here is that math is precise and well defined. Here you get a piece of software that is one sample from the space of possible programs that solve the problem (if you are lucky).

To get to the metaphorical punchcards -> GHC point, you need an LLM tool that always gives the same answer, and hopefully the optimal one, and where small changes in the paper move the software only a little within the space of viable programs. Maybe we will get there, but this is not yet what this paper proposes.

UltraSane•9mo ago
It would be very interesting to teach kids math using Mathematica starting from kindergarten.
dadoomer•9mo ago
> the jump would be similar to the one from hand-written punchcards/assembly to higher level compilers

I wouldn't. Compilers are not stochastic text models and they can be verified and reasoned about to a great extent.

ks2048•9mo ago
So who has a code2paper model that we can hook up in a loop?
brundolf•9mo ago
Not what the OP is about, but an idea I just had:

We should have the technology now to hand-write pseudocode on a piece of paper (or whiteboard or chalkboard), and have it translated and executed. Maybe you even hook up a projector, and project the output back onto the board
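
A minimal sketch of that loop, with the two hard steps stubbed out: `recognize_handwriting` is a hypothetical stand-in for a real handwriting-OCR or vision-model call, and `pseudocode_to_python` stands in for the LLM translation step. Only the execute-and-report part is real:

```python
def recognize_handwriting(image_path: str) -> str:
    # Hypothetical stand-in for an OCR / vision-model call;
    # here it just returns a canned transcription.
    return "for i in 1..5: total += i*i"

def pseudocode_to_python(pseudo: str) -> str:
    # Hypothetical translation step (in practice an LLM call).
    # This toy version only knows the canned example above.
    return (
        "total = 0\n"
        "for i in range(1, 6):\n"
        "    total += i * i\n"
    )

def run_from_whiteboard(image_path: str) -> dict:
    pseudo = recognize_handwriting(image_path)
    code = pseudocode_to_python(pseudo)
    namespace: dict = {}
    exec(code, namespace)  # execute the translated program
    # Return user-defined names only; this is what you'd project back.
    return {k: v for k, v in namespace.items() if not k.startswith("__")}

print(run_from_whiteboard("whiteboard.jpg"))  # {'total': 55, 'i': 5}
```

The projector part is then just rendering that result dict back onto the board.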

wzdd•9mo ago
I did this recently with a forward-mode AD paper, by just pasting the PDF into Claude. Like everyone, I've had mixed results with Claude coding, so I wouldn't bet my life on the output, but Claude was able to produce something for Pytorch that worked first go, had appropriate performance characteristics, and it was able to convincingly explain the connection between parts of the generated code and the paper. I was impressed.