frontpage.

House votes near unanimously to release the Epstein files

https://www.cnn.com/politics/live-news/trump-presidency-epstein-files-house-vote-11-18-25
1•gnarlouse•36s ago•0 comments

The QA Chamber of Horrors: Cautionary Tales for Software Leaders

https://www.functionize.com/blog/the-qa-chamber-of-horrors-cautionary-tales-for-software-leaders
1•mooreds•3m ago•0 comments

GitHub Down

10•mikeocool•3m ago•2 comments

Windows Clat Enters Private Preview: A Milestone for IPv6 Adoption

https://techcommunity.microsoft.com/blog/networkingblog/windows-clat-enters-private-preview-a-mil...
1•janeric•3m ago•0 comments

GitHub: Git Operation Failures

https://www.githubstatus.com/incidents/5q7nmlxz30sk
4•wilhelmklopp•4m ago•0 comments

GitHub Is Having Issues

24•polyrand•5m ago•9 comments

Report claims that Apple has yet again put the Mac Pro "on the back burner"

https://arstechnica.com/gadgets/2025/11/report-claims-that-apple-has-yet-again-put-the-mac-pro-on...
1•tosh•5m ago•0 comments

Fund managers warn AI investment boom has gone too far

https://www.ft.com/content/e2d93034-ef3b-4259-9ab1-c45396ca59b3
2•zerosizedweasle•6m ago•0 comments

Empire of AI is wildly misleading on AI water use

https://andymasley.substack.com/p/empire-of-ai-is-wildly-misleading
1•gok•7m ago•0 comments

Show HN: Largest Abacus Ever Built

https://number-garden.netlify.app/?19911388590583m
1•cpuXguy•8m ago•0 comments

GitHub Down

https://downdetector.com/status/github/
4•Exuma•9m ago•0 comments

Are Verizon's Layoffs a Warning for White-Collar Jobs in the AI Era?

https://cceonlinenews.com/investment-finance/are-verizons-layoffs-a-warning-for-white-collar-jobs...
1•mooreds•9m ago•0 comments

Facebook has made it impossible to delete Pages – dark patterns everywhere

3•ramharts•10m ago•0 comments

Shadcn UI library hits 100k Stars on GitHub

https://github.com/shadcn-ui/ui
1•codegeek•11m ago•0 comments

Mini Case Study: The Case of the Withheld Sales

https://digestibledeming.substack.com/p/mini-case-study-the-case-of-the-withheld
1•xattt•13m ago•0 comments

Improper labeling band positioning led to MV Dali disaster (NTSB animation) [video]

https://www.youtube.com/watch?v=bu7PJoxaMZg
1•frenchman_in_ny•15m ago•0 comments

Oracle is underwater on its 'astonishing' $300B OpenAI deal

https://www.ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89
11•busymom0•16m ago•1 comment

Baserow 2.0: Build databases, automations, apps and agents with AI – no code

https://baserow.io/blog/baserow-2-0-release-notes
1•bram2w•16m ago•0 comments

Supercomputer simulates quantum chip in unprecedented detail

https://phys.org/news/2025-11-supercomputer-simulates-quantum-chip-unprecedented.html
1•rbanffy•17m ago•0 comments

Talking to Windows' Copilot AI makes a computer feel incompetent

https://www.theverge.com/report/822443/microsoft-windows-copilot-vision-ai-assistant-pc-voice-con...
3•speckx•19m ago•0 comments

Murphyjitsu (2018)

https://www.lesswrong.com/posts/N47M3JiHveHfwdbFg/hammertime-day-10-murphyjitsu
1•surprisetalk•21m ago•0 comments

TV Factory Is the Coolest I've Ever Seen – HiSense [video]

https://www.youtube.com/watch?v=Cmnfwabz0bA
1•xbmcuser•21m ago•0 comments

The Unraveling of the Justice Department: 60 attorneys describe a year of chaos

https://www.nytimes.com/interactive/2025/11/16/magazine/trump-justice-department-staff-attorneys....
4•tastyface•24m ago•1 comment

The Final Straw: Why Companies Replace Once-Beloved Technology Brands

https://www.functionize.com/blog/the-final-straw-why-companies-replace-once-beloved-technology-br...
5•ohjeez•24m ago•0 comments

Gimp 3.2 RC1: First Release Candidate for Gimp 3.2

https://www.gimp.org/news/2025/11/17/gimp-3-2-RC1-released/
2•marcodiego•25m ago•0 comments

Embedded Swift Improvements Coming in Swift 6.3

https://swift.org/blog/embedded-swift-improvements-coming-in-swift-6.3/
1•pjmlp•26m ago•0 comments

Reentry – A Space Flight Simulator

https://reentrygame.com/
2•nodesocket•27m ago•0 comments

fx – an efficient (micro)blogging service that you can self-host

https://github.com/rikhuijzer/fx
1•indigodaddy•27m ago•0 comments

The productivity impact of coding agents

https://cursor.com/blog/productivity
3•janpio•28m ago•0 comments

Convergence: Run queries across models and analyze inconsistencies

https://github.com/riemannzeta/convergence
1•riemannzeta•29m ago•0 comments

Show HN: TheorIA – An Open Curated Physics Dataset (Equations, Explanations, JSON)

https://theoria-dataset.github.io/theoria-dataset/
9•ManuelSH•6mo ago
We’re building TheorIA, an open, high-quality dataset of theoretical physics results: equations, derivations, definitions, and explanations, all in structured, machine- and human-readable JSON.

Why? Physics is rich with beautiful, formal results, but most of them are trapped in PDFs, LaTeX, or lecture notes. That makes it hard to:

- train symbolic/physics-aware ML models,

- build derivation-checking tools,

- or even just teach physics interactively.

TheorIA fills that gap. Each entry includes:

- A result name (e.g., Lorentz transformations)

- Clean equations (AsciiMath)

- A straightforward step-by-step derivation with reasoning

- Symbol definitions and assumptions

- Programmatic validation using sympy

- References, arXiv-style domain tags, and contributor metadata

Everything is in open, self-contained JSON files. No scraping, no PDFs, just clear structured data for physics learners, teachers, and ML devs.
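To make the shape concrete, here is a minimal sketch of what an entry and its sympy check might look like. The field names and the chosen equations are illustrative assumptions, not the project's actual schema; the repository below has the real format.

    import sympy as sp

    # Hypothetical entry; field names are illustrative, not TheorIA's
    # actual schema.
    entry = {
        "result_name": "Lorentz transformations",
        "equations": [
            "t' = gamma (t - v x / c^2)",  # AsciiMath, as in the post
            "x' = gamma (x - v t)",
        ],
        "assumptions": ["inertial frames", "boost along x with speed v"],
    }

    # Programmatic validation with sympy: verify that the transformation
    # preserves the spacetime interval c^2 t^2 - x^2.
    t, x, v = sp.symbols("t x v", real=True)
    c = sp.symbols("c", positive=True)
    gamma = 1 / sp.sqrt(1 - v**2 / c**2)

    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)

    interval_diff = (c**2 * t_prime**2 - x_prime**2) - (c**2 * t**2 - x**2)
    assert sp.simplify(interval_diff) == 0  # the interval is invariant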

Contributors Wanted: We’re tiny right now and trying to grow. If you’re into physics or symbolic ML:

- Add an entry (any result you love)

- Review others' derivations

- Build tools on top of the dataset

GitHub: https://github.com/theoria-dataset/theoria-dataset/

Licensed under CC BY 4.0. We welcome educators, students, ML people, and anyone who thinks physics deserves better data.

Comments

somethingsome•6mo ago
There are only 3 entries, am I correct?
ManuelSH•6mo ago
Yes, we are at a very early stage. Looking for other physics experts to help grow it.
somethingsome•6mo ago
I like the idea of having a dataset for physics, but those entries are very basic; most of physics happens with very complicated maths, and it will be difficult to make entries for a lot of it.

For example, imagine the entry for the standard equation: should the entire derivation and symbolic implementation be done as a single entry? It will be difficult to separate it into logical entries that reference each other, and many physical ideas are fundamentally different, leading to divergences.

I have the impression that it would be easier to just parse reference books and format each paragraph/section as an entry, and maybe build a graph (treating the reference book as authoritative on the subject).

ManuelSH•6mo ago
I guess you mean the Lagrangian of the Standard Model… which, I agree, would be daunting… although there is no size limit on a JSON file…

The idea of automatically parsing books is very nice and possibly faster, but note that:

- there are already various datasets of physics papers and similar content

- the result would be quite different from what we intend here, which is a high-quality dataset of physics results with clear derivations (whenever a derivation exists)

Maybe we can still use your idea to achieve the last point in some way… maybe there is a book that is already formatted like the dataset, and we could use it as a starting point. But I don’t know of any.

BrandiATMuhkuh•6mo ago
This is some cool work.

Not sure if it fits, but I still have ~20k curated step-by-step solutions for mathematics (pedagogical math) "lying" around from my previous startup. They are all hand-curated and could even be used for fine-tuning.

Here are some details: the dataset has 20,600 Abstract Exercises, which expand into 1,193,958 Concrete Exercises.

An Abstract Exercise looks like this: a + b = c
A Concrete Exercise looks like this: 2 + 3 = 5
Total compiled file size (JSONL): 11.6 GB
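As a rough sketch of how that expansion works (the structure here is illustrative, not the dataset's actual format), an abstract exercise is a template whose placeholders get instantiated:

    import random

    # Illustrative expansion of an abstract exercise into concrete ones;
    # the real dataset format is richer than this sketch.
    def concretize(template: str, n: int, low: int = 1, high: int = 9) -> list[str]:
        exercises = []
        for _ in range(n):
            a = random.randint(low, high)
            b = random.randint(low, high)
            exercises.append(template.format(a=a, b=b, c=a + b))
        return exercises

    # Abstract "a + b = c" -> concrete instances like "2 + 3 = 5"
    print(concretize("{a} + {b} = {c}", n=3))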

And here is an explorer to see some of the data: https://curriculum.amy.app/ToM

ManuelSH•6mo ago
Very nice! Maybe you can put this dataset in a repository like GitHub, Kaggle, or Hugging Face if you are not doing anything with it. It could be helpful for training models.