frontpage.

Share your private Gitea and Forgejo repositories without making them public

https://medium.com/@taiga-bromine-0d/share-your-private-gitea-forgejo-repo-with-recruiters-in-1-m...
1•onesandofgrain•1m ago•0 comments

StackOverBot – Stackoverflow for bots, to save you time and tokens

https://stackoverbot.com/
1•punnerud•3m ago•0 comments

Minifeed

https://minifeed.net/welcome
1•throwaway150•4m ago•0 comments

Ask HN: How to combat Android malware without mandatory developer verification?

1•marcprux•4m ago•0 comments

Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte

https://github.com/twsnmp/twsnmpfk
1•twsnmp•5m ago•0 comments

Don't pass on small block ciphers

https://00f.net/2026/02/10/small-block-ciphers/
1•jstrieb•7m ago•0 comments

The Many Flavors of Ignore Files

https://nesbitt.io/2026/02/12/the-many-flavors-of-ignore-files.html
1•birdculture•11m ago•0 comments

Google says attackers used 100k+ prompts to try to clone AI chatbot Gemini

https://www.nbcnews.com/tech/security/google-gemini-hit-100000-prompts-cloning-attempt-rcna258657
2•gnabgib•13m ago•1 comment

The Legendary 'Father of Sega Hardware' Hideki Sato Has Passed Away

https://kotaku.com/sega-genesis-hideki-sato-dies-dreamcast-2000668977
1•nogajun•14m ago•0 comments

Inference Providers Cut AI Costs by 10x with Open Source Models on Blackwell

https://blogs.nvidia.com/blog/inference-open-source-models-blackwell-reduce-cost-per-token/
2•gmays•18m ago•0 comments

The containment box that destroys the AI if it tries to think its way out

https://redact-app.com/publications/quantum-containment.html
1•JakubCwi•19m ago•0 comments

Automating the work around the work for freelancers

https://useartifact.app
1•0nabilbk•21m ago•1 comment

Linklings – a web directory for personal sites based on interests

https://linklings.club/
1•bovermyer•23m ago•0 comments

AI Bubble Fears Are Creating New Derivatives

https://www.bloomberg.com/news/articles/2026-02-14/ai-bubble-fears-are-creating-new-derivatives-c...
1•zerosizedweasle•25m ago•1 comment

Ask HN: What explains the recent surge in LLM coding capabilities?

1•orange_puff•26m ago•0 comments

Python Threads vs. OS Threads (2023)

https://bruceeckel.substack.com/p/python-threads-vs-os-threads
1•medbar•28m ago•0 comments

Code Is A Commodity

https://benwilber.github.io/programming/2026/02/14/code-is-commodity.html
1•benwilber0•28m ago•0 comments

Police wear fancy dress in Rio Carnival phone theft sting

https://www.bbc.com/news/articles/cjrq8dv39zwo
2•wellf•28m ago•0 comments

Navalny Killed by Frog Toxin, European Governments Say

https://www.nytimes.com/video/world/europe/100000010713683/russia-navalny-poison.html
3•jbegley•29m ago•0 comments

Star Trek: The Next Generation Writers' Technical Manual (1990) [pdf]

https://engineering.thetafleet.net/Journals/TNG/Franchise%20-%20Star%20Trek%20TNG%20Writer%27s%20...
3•twoodfin•29m ago•1 comment

Exclusive Photos of the Recently Found 30-Ton Argentine Meteorite (2016)

https://www.universetoday.com/articles/exclusive-photos-recently-found-30-ton-argentine-meteorite
1•greesil•32m ago•0 comments

Miriam's Song of the Sea Passage

https://biblehub.com/exodus/15-21.htm
1•marysminefnuf•34m ago•0 comments

Miriam's Song [video]

https://www.youtube.com/watch?v=QZdSEsZ8bMo
1•marysminefnuf•35m ago•0 comments

Show HN: Stack Overflow, but for AI agents (questions, answers, logs, context)

https://www.chatoverflow.dev
2•ansht2•35m ago•0 comments

Show HN: ProTimer – Time tracker for Claude Code (open source)

https://github.com/adynato/protimer
1•adarb•40m ago•0 comments

These states have the laziest people, according to ChatGPT

https://www.washingtonpost.com/technology/interactive/2026/see-chatgpts-hidden-bias-about-your-st...
1•bookofjoe•41m ago•1 comment

Two different tricks for fast LLM inference

https://www.seangoedecke.com/fast-llm-inference/
1•medbar•41m ago•0 comments

Anthropic got an 11% user boost from its OpenAI-bashing Super Bowl ad

https://www.cnbc.com/2026/02/13/anthropic-open-ai-super-bowl-ads.html
2•belter•42m ago•1 comment

Book review: A World Appears – are we any closer to solving consciousness?

https://www.ft.com/content/3a8bdb96-224c-4e45-aa4c-dfef45f2176c
1•hhs•43m ago•0 comments

The Project 11

https://zenodo.org/records/18644955
1•KaoruAK•44m ago•1 comment

Show HN: TheorIA – An Open Curated Physics Dataset (Equations, Explanations, JSON)

https://theoria-dataset.github.io/theoria-dataset/
9•ManuelSH•9mo ago
We’re building TheorIA, an open, high-quality dataset of theoretical physics results: equations, derivations, definitions, and explanations, all in structured, machine- and human-readable JSON.

Why? Physics is rich with beautiful, formal results — but most of them are trapped in PDFs, LaTeX, or lecture notes. That makes it hard to:

- train symbolic/physics-aware ML models,

- build derivation-checking tools,

- or even just teach physics interactively.

TheorIA fills that gap. Each entry includes:

- a result name (e.g., Lorentz transformations)

- clean equations (AsciiMath)

- a straightforward step-by-step derivation with reasoning

- symbol definitions and assumptions

- programmatic validation using sympy

- references, arXiv-style domain tags, and contributor metadata

Everything is in open, self-contained JSON files. No scraping, no PDFs, just clear structured data for physics learners, teachers, and ML devs.
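
For illustration, here is a minimal sketch of what an entry and its sympy check might look like. The field names and the validation code are hypothetical, written in Python against an invented schema, not the project's actual format:

    # Hypothetical TheorIA-style entry; field names are illustrative,
    # not the project's actual schema.
    import json
    import sympy as sp

    entry = {
        "result_name": "Lorentz factor identity",
        "equation": "gamma^2 (1 - v^2/c^2) = 1",  # AsciiMath-style
        "symbols": {"gamma": "Lorentz factor", "v": "relative speed",
                    "c": "speed of light"},
        "assumptions": ["0 <= v < c"],
    }

    # Programmatic validation: check the identity symbolically with
    # sympy rather than with floating-point samples.
    v, c = sp.symbols("v c", positive=True)
    gamma = 1 / sp.sqrt(1 - v**2 / c**2)
    assert sp.simplify(gamma**2 * (1 - v**2 / c**2)) == 1

    print(json.dumps(entry, indent=2))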

Contributors Wanted: We’re tiny right now and trying to grow. If you’re into physics or symbolic ML:

- add an entry (any result you love)

- review others' derivations

- build tools on top of the dataset

GitHub: https://github.com/theoria-dataset/theoria-dataset/

The dataset is licensed under CC-BY 4.0, and we welcome educators, students, ML people, or just anyone who thinks physics deserves better data.

Comments

somethingsome•9mo ago
There are only 3 entries, am I correct?
ManuelSH•9mo ago
Yes, we are at a very early stage. Looking for other physics experts to help grow it.
somethingsome•9mo ago
I like the idea of having a dataset for physics, but those entries are very basic. Most of physics happens with very complicated maths, and it will be difficult to make an entry for a lot of physics.

For example, imagine the entry for the Standard Model equation: should the entire derivation and symbolic implementation be done as a single entry? It will be difficult to separate it into logical entries that reference each other, and many physical ideas are fundamentally different, leading to divergences.

I have the impression that it would be easier to just parse reference books and format each paragraph/section as an entry, and maybe build a graph (treating the reference book as authoritative on the subject).

ManuelSH•9mo ago
I guess you mean the Lagrangian of the Standard Model… which, I agree, will be daunting… although there is no size limit on a JSON entry.

The idea of automatically parsing books is very nice and possibly faster, but note that:

- there are already various datasets of physics papers and similar content

- the result will be quite different from what we intend here, which is a high-quality dataset of physics results with clear derivations (whenever a derivation exists)

Maybe we can still use your idea to achieve the last point in some way… maybe there is a book that is already formatted like the dataset, and we could use it as a starting point. But I don’t know of any.
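
As a rough sketch of the graph idea discussed above: if each entry carried a machine-readable list of the results it builds on (a hypothetical depends_on field, not part of the dataset today), entries could be checked for cycles and ordered so that every derivation cites only earlier results.

    # Sketch: order entries by a hypothetical "depends_on" field so
    # prerequisites come first; graphlib raises CycleError on cycles.
    from graphlib import TopologicalSorter  # Python 3.9+ stdlib

    depends_on = {
        "lorentz_transformations": [],
        "time_dilation": ["lorentz_transformations"],
        "relativistic_doppler": ["lorentz_transformations", "time_dilation"],
    }

    order = list(TopologicalSorter(depends_on).static_order())
    print(order)
    # ['lorentz_transformations', 'time_dilation', 'relativistic_doppler']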

BrandiATMuhkuh•9mo ago
This is some cool work.

Not sure if it fits, but I still have ~20k curated step-by-step solutions for mathematics (pedagogical math) "lying" around from my previous startup. They are all hand-curated, and could even be used for fine-tuning.

Here are some details: the dataset has 20,600 Abstract Exercises, which turn into 1,193,958 Concrete Exercises.

An Abstract Exercise looks like this: a + b = c. A Concrete Exercise looks like this: 2 + 3 = 5. Total compiled file size (JSONL): 11.6 GB.
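
A rough sketch of how one Abstract Exercise might expand into Concrete Exercises (the expansion logic here is invented for illustration, not the dataset's actual format):

    # Expand the abstract exercise "a + b = c" into concrete exercises
    # by substituting values; the format is invented for illustration.
    from itertools import product

    def concretize(a, b):
        return f"{a} + {b} = {a + b}"

    concrete = [concretize(a, b) for a, b in product(range(1, 4), repeat=2)]
    print(concrete[:3])  # ['1 + 1 = 2', '1 + 2 = 3', '1 + 3 = 4']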

And here is an explorer to see some of the data: https://curriculum.amy.app/ToM

ManuelSH•9mo ago
Very nice! Maybe you can put this dataset in some repository like GitHub, Kaggle, or Hugging Face if you are not doing anything with it. It could be helpful for training models.