frontpage.

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
1•init0•5m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•5m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•8m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•10m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•21m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•21m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•26m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•30m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•31m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•33m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•34m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•37m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•48m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•54m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
2•cwwc•58m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Paper2video: Automatic video generation from scientific papers

https://arxiv.org/abs/2510.05096
91•jinqueeny•3mo ago

Comments

ks2048•3mo ago
Project page (links to both github and arxiv): https://showlab.github.io/Paper2Video/
anothernewdude•3mo ago
This is the opposite of what I want. I'd rather turn videos into articles.
Lerc•3mo ago
People are different; I would prefer paper to video, but this implementation is not yet sufficient for what I would use it for. But as Doctorcarolorangyfaheer says, maybe a few more papers down the line.
ninesnines•3mo ago
Ah, I guess if you're very bad at presentations, then this could be beneficial. However, scientific presentations are meant to communicate science and make things stick with your audience (no matter whether you're presenting to scientists or children). This does not fix that problem at all. For anyone thinking of using this: please watch https://m.youtube.com/watch?v=Unzc731iCUY and maybe a talk from Jane Goodall on how to engagingly show your science. I would hate to see a lot of conference presentations made with this generator.

Another thing that improved my personal presentation skills was noting down why I liked a presentation or why I didn’t - what specific things a person did to make it engaging. Just paying attention to that improved my presentation skills enormously

sebastiennight•3mo ago
Very interesting project, and I found two things particularly smart and well executed in the demo:

1. Using a "painter commenter" feedback loop to make sure the slides are correctly laid out with no overflowing or overlapping elements (a rough sketch of this kind of loop follows after this list).

2. Having the audio/subtitles not read word-for-word the detailed contents that are added to the slides, but instead rewording that content to flow more naturally and be closer to how a human presenter would cover the slide.
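
For anyone curious what such a loop might look like, here is a minimal hypothetical sketch (not the authors' code; `render_slide`, `critique_layout`, and `revise_slide` are stand-ins for the actual renderer and vision-model calls):

  # Hypothetical painter/commenter loop: a "painter" renders the slide, a
  # "commenter" model flags overflow/overlap, and the painter revises until
  # the critique comes back clean or the retry budget runs out.
  def painter_commenter(slide, render_slide, critique_layout, revise_slide, max_rounds=3):
      image = render_slide(slide)
      for _ in range(max_rounds):
          issues = critique_layout(image)        # e.g. ["text overflows box 2", ...]
          if not issues:                         # commenter found no layout problems
              break
          slide = revise_slide(slide, issues)    # shrink fonts, split bullets, etc.
          image = render_slide(slide)
      return slide, image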

A couple of things might possibly be improved in the prompts for the reasoning features, eg. in `answer_question_from_image.yaml`:

  1. Study the poster image along with the "questions" provided.
  2. For each question:
     • Decide if the poster clearly supports one of the four options (A, B, C, or D). If so, pick that answer.
     • Otherwise, if the poster does not have adequate information, use "NA" for the answer.
  3. Provide a brief reference indicating where in the poster you found the answer. If no reference is available (i.e., your answer is "NA"), use "NA" for the reference too.
  4. Format your output strictly as a JSON object with this pattern:
     {
       "Question 1": {
         "answer": "X",
         "reference": "some reference or 'NA'"
       },
       "Question 2": {
         "answer": "X",
         "reference": "some reference or 'NA'"
       },
       ...
     }

I'd assume you would likely get better results by asking for the reference first and then the answer; otherwise you probably have quite a number of answers where the model just "knows" the answer and takes it from its own training rather than from the image, which would bias the benchmark.
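
As a concrete illustration of that suggestion (this is not the project's actual prompt, just a sketch of the reordering), the output-format portion could ask for the reference before the answer:

  3. For each question, first provide a brief reference indicating where in the
     poster the evidence appears (use "NA" if there is none), and only then give
     the answer ("A", "B", "C", "D", or "NA" if the poster lacks the information).
  4. Format your output strictly as a JSON object with this pattern:
     {
       "Question 1": {
         "reference": "some reference or 'NA'",
         "answer": "X"
       },
       ...
     }

Putting the reference first forces the model to commit to evidence from the image before it answers, which is the point of the reordering.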
fsh•3mo ago
The samples from the authors' GitHub are just some text vomited onto slides, and the AI voice reading them point by point. Exactly the opposite of a good presentation.
mattjenner•3mo ago
This will likely develop faster than your typical researcher's presentation skills. It could also increase access more generally. Science communication is a skill, and an interested reader's ability to get to a conference (or watch the recordings) is limited. If this expands access to science, I'm for it.

(and I generally think AI-produced content is slop).

davidsainez•3mo ago
IMO this seems like exactly the kind of use case where AI fails consistently: engaging storytelling and finding the simplest solution to a problem. For example, LLMs are really good at generating walls of code that will run, but they don't really have good taste in architecting a solution. When I use them for coding, I spend time thinking of a good high-level approach and then use LLMs to fill in the more boilerplate-style code.
hirenj•3mo ago
This is great - now I can get the authentic conference experience of a disengaged speaker reading out the slides in a monotone, without all the hassle of international travel and scheduling.

In all seriousness, there could be more utility in this if it helped explain the figures. I jumped ahead to one of the figures in the example video, and no real attention was given to it. In my experience, this is really where presentations live and die, in the clear presentation of datapoints, adding sufficient detail that you bring people along.

netsharc•3mo ago
There's a porn site (is it even porn if it's just nudity?) whose niche is women reading the news while taking off their clothes.

For papers, it doesn't have to go that far, but I imagine a polished AI girl (or guy) reading the summary would be more engaging.

Hah, "SteveGPT, present your PowerPoints like Steve Jobs did!"

a99c43f2d565504•3mo ago
Besides just porn or nudity, maybe we could also add violence into the arsenal of engagement. For example, maybe the viewer could use a virtual sword or shotgun on some key concepts in the presentation to initiate a tangent going on a deep dive on the concept, and then come back to the presentation once done with the rabbit hole.
rft•3mo ago
A VR interactive thesis defense/sword fighting crossover game sounds just weird enough to work. Maybe base it on the fight mechanics of Until You Fall [1], we could call it "Until You Graduate" (I will see myself out for that one) or "Thesis Offense" [2].

[1] https://store.steampowered.com/app/858260/Until_You_Fall/

[2] https://xkcd.com/1403/

anarticle•3mo ago
Feels like the theme of Videodrome coming back: https://www.youtube.com/watch?v=RxXkIGVwgB4

Add sex and violence to make your boring paper-reading sessions more exciting!

mtillman•3mo ago
I was just thinking about this movie on Friday while at a concert (Lorna Shore, awesome show). Anyway, the person in front of me was watching an overweight person (the point of the niche, I suspect, which is why I mention it) do their daily chore routine (laundry, cleaning, etc.) on TikTok. After the video finished, my fellow concert attendee quickly went to Amazon and purchased the iron from the video. No links clicked, just serious chore FOMO leading to a purchase. All while standing 3 feet from a circle pit/wall of death while Lorna Shore was playing 20 ft from their face.
sebastiennight•3mo ago
Upon first reading I thought you were suggesting a "polish" AI presenter for a second...
IanCal•3mo ago
If it doesn’t cram text at a tiny point size and introduce a slide with “you can’t see this but” then it’s likely better than the majority of scientific presentations I’ve seen.
tobwen•3mo ago
Hrhr, I'd love to have automatic CODE generation from scientific papers :D
anarticle•3mo ago
You're in luck! Paper2Agent + Paper2Code do just that: https://arxiv.org/abs/2504.17192 https://arxiv.org/abs/2509.06917
progbits•3mo ago
Damn, they automated Károly Zsolnai-Fehér
rhl314•3mo ago
Shameless plug: I have been working on a tool that lets you create whiteboard explainers.

It also works with research papers.

Here is an explainer of the famous "Attention Is All You Need" paper: https://www.youtube.com/watch?v=7x_jIK3kqfA

(You can try it here https://magnetron.ai)

alfonsodev•3mo ago
Wow! You are almost there. If you made a version that was only drawings, or drawings first and titles later, it would be awesome. Right now the titles take too long to write, acting as filler, and the pacing is lost relative to the narration; then it makes a cool drawing super fast. So it feels like with a bit of tweaking of the pacing you'll be able to get an outstanding result.

Congratulations on this cool idea and results.

Where can I follow the progress or get notified ?

rhl314•3mo ago
Thanks for the feedback. Working on making the video and narration sync better.

> Where can I follow the progress or get notified ?

I send out product updates once a week or so. Will keep you posted.

tummler•3mo ago
At last, they've come for Two Minute Papers.
ks2048•3mo ago
While the TTS sounds very good, it is interesting how some subtle prosody issues make it sound very unnatural.

example: Geoff Hinton saying "Forward-forward Algorithm" with a long pause after the first "forward".

(first few seconds in the first demo on https://showlab.github.io/Paper2Video/)