
Show HN: TheorIA – An Open Curated Physics Dataset (Equations, Explanations, JSON)

https://theoria-dataset.github.io/theoria-dataset/
9•ManuelSH•9mo ago
We're building TheorIA, an open, high-quality dataset of theoretical physics results: equations, derivations, definitions, and explanations, all in structured, machine- and human-readable JSON.

Why? Physics is rich with beautiful, formal results — but most of them are trapped in PDFs, LaTeX, or lecture notes. That makes it hard to:

- train symbolic/physics-aware ML models,

- build derivation-checking tools,

- or even just teach physics interactively.

TheorIA fills that gap. Each entry includes:

- A result name (e.g., Lorentz transformations)

- Clean equations (AsciiMath)

- A straightforward step-by-step derivation with reasoning

- Symbol definitions & assumptions

- Programmatic validation using sympy

- References, arXiv-style domain tags, and contributor metadata

Everything is in open, self-contained JSON files. No scraping, no PDFs, just clear structured data for physics learners, teachers, and ML devs.
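
To make the structure concrete, here is a minimal sketch of what an entry and its programmatic check might look like in Python. The field names and the sympy snippet are illustrative assumptions based on the description above, not the dataset's actual schema:

    # Hypothetical entry, illustrating the fields described above
    # (field names are assumptions, not the dataset's actual schema).
    entry = {
        "result_name": "Time dilation (special relativity)",
        "equations": ["Delta t = gamma * Delta t_0"],  # AsciiMath-style
        "derivation": [
            "gamma = 1/sqrt(1 - v^2/c^2) is the Lorentz factor.",
            "A clock at rest in the moving frame measures the proper time Delta t_0.",
            "The lab frame then measures Delta t = gamma * Delta t_0.",
        ],
        "symbols": {"gamma": "Lorentz factor", "v": "relative speed", "c": "speed of light"},
        "assumptions": ["inertial frames", "flat spacetime"],
        "references": ["Einstein (1905)"],
        "domain_tags": ["physics.class-ph"],
    }

    # Programmatic sanity check with sympy: the Lorentz factor reduces to 1 at v = 0.
    import sympy as sp

    v, c = sp.symbols("v c", positive=True)
    gamma = 1 / sp.sqrt(1 - v**2 / c**2)
    assert sp.simplify(gamma.subs(v, 0)) == 1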

Contributors Wanted: We’re tiny right now and trying to grow. If you’re into physics or symbolic ML:

- Add an entry (any result you love)

- Review others' derivations

- Build tools on top of the dataset

GitHub: https://github.com/theoria-dataset/theoria-dataset/

The dataset is licensed under CC BY 4.0, and we welcome educators, students, ML people, or anyone who thinks physics deserves better data.

Comments

somethingsome•9mo ago
There are only 3 entries, am I correct?
ManuelSH•9mo ago
Yes, we are at a very early stage and looking for other physics experts to help grow it.
somethingsome•9mo ago
I like the idea of having a dataset for physics, but those entries are very basic. Most of physics involves very complicated maths, and it will be difficult to make entries for a lot of it.

For example, imagine the entry for the standard equation: should the whole derivation and symbolic implementation be done as a single entry? It will be difficult to separate it into logical entries that reference each other, and many physical ideas are fundamentally different, leading to divergences.

I have the impression that it would be easier to just parse reference books and format each paragraph/section as an entry, and maybe build a graph (treating the reference book as authoritative on the subject).
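
One way to picture the "logical entries that reference each other" idea is as a dependency graph over entries. A minimal sketch in Python, with invented entry names and fields:

    # Hypothetical sketch: large results split into small entries that point at
    # their prerequisites, forming a dependency graph (names and fields invented).
    entries = {
        "lorentz_factor":    {"requires": []},
        "lorentz_transform": {"requires": ["lorentz_factor"]},
        "time_dilation":     {"requires": ["lorentz_factor"]},
        "velocity_addition": {"requires": ["lorentz_transform"]},
    }

    def prerequisites(name, seen=None):
        """Collect every entry that `name` transitively depends on."""
        seen = set() if seen is None else seen
        for dep in entries[name]["requires"]:
            if dep not in seen:
                seen.add(dep)
                prerequisites(dep, seen)
        return seen

    print(sorted(prerequisites("velocity_addition")))  # ['lorentz_factor', 'lorentz_transform']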

ManuelSH•9mo ago
I guess you mean the Lagrangian of the Standard Model… which, I agree, will be daunting… although there is no size limit on a JSON entry…

The idea of automatically parsing books is very nice and possibly faster, but note that:

- there are already various datasets of physics papers and similar content

- the result would be quite different from what we intend here, which is a high-quality dataset of physics results with clear derivations (wherever a derivation exists)

Maybe we can still use your idea to achieve that last point in some way… maybe there is a book that is already formatted like the dataset, and we could use it as a starting point. But I don't know of any.

BrandiATMuhkuh•9mo ago
This is some cool work.

Not sure if it fits, but I still have ~20k curated step-by-step solutions for mathematics (pedagogical math) lying around from my previous startup. They are all hand-curated and could even be used for fine-tuning.

Here are some details: the dataset has 20,600 Abstract Exercises, which expand into 1,193,958 Concrete Exercises.

An Abstract Exercise looks like this: a + b = c

A Concrete Exercise looks like this: 2 + 3 = 5

Total compiled file size (JSONL): 11.6 GB
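
As a rough illustration of the Abstract/Concrete split (the template format here is a guess, not the actual dataset schema), an abstract exercise can be read as a parameterized template that expands into many concrete exercises:

    # Hypothetical sketch: one abstract exercise expands into many concrete ones.
    # The template format is an assumption, not the dataset's actual schema.
    import random

    abstract_exercise = {"template": "{a} + {b} = {c}"}

    def concrete_exercises(n=3, seed=0):
        rng = random.Random(seed)
        out = []
        for _ in range(n):
            a, b = rng.randint(1, 9), rng.randint(1, 9)
            out.append(abstract_exercise["template"].format(a=a, b=b, c=a + b))
        return out

    print(concrete_exercises())  # three concrete instances of "a + b = c"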

And here is an explorer to see some of the data: https://curriculum.amy.app/ToM

ManuelSH•9mo ago
Very nice! Maybe you can put this dataset in a repository like GitHub, Kaggle, or Hugging Face if you are not doing anything with it. It could be helpful for training models.