ReKindle – web-based operating system designed specifically for E-ink devices

https://rekindle.ink
1•JSLegendDev•1m ago•0 comments

Encrypt It

https://encryptitalready.org/
1•u1hcw9nx•1m ago•0 comments

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•2m ago•1 comments

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•3m ago•0 comments

SpaceKit.xyz – a browser‑native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•4m ago•1 comments

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
1•byandrev•4m ago•1 comments

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•5m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•5m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
1•layer8•6m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•8m ago•1 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•8m ago•2 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•10m ago•0 comments

Shannon: Claude Code for Pen Testing: #1 on Github today

https://github.com/KeygraphHQ/shannon
1•hendler•10m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
2•Bender•14m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•15m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•16m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•16m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•17m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•17m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•18m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
4•Bender•18m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•20m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•20m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•23m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•25m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•25m ago•1 comments

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•27m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•29m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•33m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•33m ago•0 comments

Data Activation Thoughts

https://galsapir.github.io/sparse-thoughts/2026/01/17/data_activation/
21•galsapir•2w ago
I've been working with healthcare/biobank data and keep thinking about what "data moats" mean now that LLMs can ingest anything. An a16z piece from 2019 argued that data moats were eroding; now the question seems to be whether you can actually make your data useful to these systems, not just have it. There's some recent work (tables2traces, ehr-r1) showing you can convert structured medical data into reasoning traces that improve LLM performance, but the approaches are still rough and synthetic traces don't fully hold up to scrutiny. (Writing this to think through it, not because I have answers.)
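As a rough illustration of the "structured data to reasoning trace" idea, here is a toy Python sketch: it turns one tabular patient record into a step-by-step text trace that a model could be trained or prompted on. The field names, thresholds, and trace template are assumptions for illustration, not the tables2traces or ehr-r1 recipe.

# Toy illustration of converting a structured record into a textual
# "reasoning trace". Field names, thresholds, and the template are
# hypothetical; real pipelines are far more involved.

def record_to_trace(record: dict) -> str:
    """Turn one tabular patient record into a step-by-step trace string."""
    steps = []
    steps.append(f"Patient is {record['age']} years old with HbA1c {record['hba1c']}%.")
    if record["hba1c"] >= 6.5:
        steps.append("HbA1c >= 6.5% meets the conventional diabetes threshold.")
    if record["egfr"] < 60:
        steps.append("eGFR below 60 suggests reduced kidney function, which constrains drug choice.")
    steps.append(f"Documented outcome: {record['outcome']}.")
    return "\n".join(f"Step {i+1}: {s}" for i, s in enumerate(steps))

example = {"age": 58, "hba1c": 7.2, "egfr": 48, "outcome": "started second-line therapy"}
print(record_to_trace(example))  # trace text usable as a training or prompting example

The point is that the trace makes the dataset's implicit reasoning explicit, which is the asset the post argues might still be defensible.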

Comments

sgt101•2w ago
How do you know whether to fine-tune/pretrain or to RL/reasoning-train, given some data set?
galsapir•2w ago
I honestly don't think there's a simple yes/no answer there. The main considerations are things like "how costly is it to do", "how often do you think you'll need it", and so on. Traces are not as "ephemeral" as fine-tuned models, since you can reuse them to guide agent behaviour when a newer model is released (but still, they're not as evergreen as other assets: traces generated using, say, GPT4 would seem pale and outdated compared to ones created on the same dataset using Opus4.5, I reckon).
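A minimal sketch of that "traces are less ephemeral than fine-tunes" point, assuming a hypothetical trace store and prompt format of my own: stored traces can be replayed as few-shot guidance to whatever model is current, whereas a fine-tuned checkpoint stays bound to the old base weights.

# Sketch: reusing stored reasoning traces as few-shot guidance for a newer
# model. The trace store and prompt format are hypothetical placeholders.

stored_traces = [
    {"question": "Is patient 103 a likely responder?", "trace": "Step 1: ...\nStep 2: ..."},
    {"question": "Which cohort fits the trial criteria?", "trace": "Step 1: ...\nStep 2: ..."},
]

def build_prompt(new_question: str, traces: list, k: int = 2) -> str:
    """Prepend k stored traces as exemplars; works with whatever model is current."""
    shots = "\n\n".join(f"Q: {t['question']}\nReasoning:\n{t['trace']}" for t in traces[:k])
    return f"{shots}\n\nQ: {new_question}\nReasoning:"

prompt = build_prompt("Does patient 207 meet the inclusion criteria?", stored_traces)
# `prompt` can be sent to any newer model; a fine-tune of an older model cannot
# be carried forward the same way.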
armcat•2w ago
I've been working in the legaltech space and can definitely echo the sentiments there. There are some major legaltech/legal AI companies, but after speaking to dozens of law firms, none of them are finding these tools very valuable. They have signed contracts with many seats, they are busy people, and tech is not intrinsic to them, so they are not in the business of just switching tools and building things in-house (a handful of them are). And the problem is that despite massive amounts of internal data, all the solutions fall short on relevance and precision. When I sit down with actual legal associates, I can see how immensely complex these workflows are, and to fully utilize this data moat you need: (1) multi-step agentic retrieval, (2) a set of rules/heuristics to ground and steer everything per transaction/case "type", (3) adaptation/fine-tuning towards the "house language/style", and (4) integration with many different data sources and tools; and you need to wrap all of this with real-world evals (where the LLM-as-a-judge technique often fails).
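A compressed sketch of how points (1) and (2) might fit together: a retrieve-check-refine loop steered by per-transaction-type heuristics. The function names, the heuristics table, and the stopping rule are invented for illustration; they are not how any particular legaltech product works.

# Sketch of multi-step retrieval steered by per-transaction-type heuristics.
# search(), rewrite_query(), and the heuristics table are placeholders.

HEURISTICS = {
    "m&a_nordics": {"jurisdiction": "Nordics", "must_cover": ["locked box", "warranties"]},
    "m&a_us": {"jurisdiction": "US", "must_cover": ["closing accounts", "indemnities"]},
}

def agentic_retrieve(query, matter_type, search, rewrite_query, max_steps=3):
    """Retrieve, check coverage against matter-type rules, refine the query, repeat."""
    rules = HEURISTICS[matter_type]
    hits = []
    for _ in range(max_steps):
        # Hard-filter by market before anything else.
        hits = [d for d in search(query) if d["jurisdiction"] == rules["jurisdiction"]]
        covered = {t for d in hits for t in rules["must_cover"] if t in d["text"].lower()}
        missing = set(rules["must_cover"]) - covered
        if not missing:
            break  # grounded enough to start drafting
        query = rewrite_query(query, missing)  # e.g. ask explicitly for missing clause types
    return hits

House-style adaptation (3), integrations (4), and the eval wrapper are deliberately left out of the sketch.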
dennisy•2w ago
Could you please expand on “none of them find the tools very useful”?

I would love to know how big your sample is, in what way the tools fail, what features are missing etc.

armcat•2w ago
Sure! So to qualify: I've been working in contractual law, more specifically contract drafting. There are a tonne of other tools in the areas of document management, research, regulatory, timekeeping, etc., so I cannot speak for those.

Sample size: around 150 law firms across the UK, Nordics and DACH (and a smattering across the US). Some were actual month-long pilots with deeper interactions, whilst others were "just conversations". Let's say in each law firm it's 3-4 associates and 1-2 partners, so it's >600 lawyers.

Typically, the legal AI solutions in contract drafting involve the lawyer uploading "their database", i.e. dragging and dropping a folder or a zip file containing potentially 100s-1000s of contracts from previous transactions.

What's missing:

- Relevance: For the current transaction the lawyer is working on, the AI tools recommend irrelevant information. For example, if it's an M&A transaction in one market (e.g. Nordics), they suggest pricing mechanics from a different market practice (e.g. US) that are irrelevant or not desirable. The text semantics have the closest cosine (or whatever) distance, but the market characteristics are orthogonal (see the retrieval sketch after this list).

- Representation: as a lawyer you are always representing a specific party (e.g. a "buyer" purchasing another company or an asset from a "seller"). You want your side to be best represented - however the tools often fail to "understand" what/who you are representing, and tend to recommend the opposite of what you want for your client.

- Diversity: The same handful of documents keep being referenced all the time, even though there are other "better" documents that should be used to ground the responses and recommendations.

- Precision: Sometimes you want precise information, such as specific leverage ratios or very specific warranty clauses for a transaction of a particular size within a particular industry.

- Language/tonality: Lawyers talk to other lawyers and there is a specific tonality and language used - precision, eloquence, professionalism. Each law firm also has their "house style" in terms of how they put the words together. AI tools come across as "odd" in terms of how they respond (even when they are correct). It trips the lawyers up a bit and they lose the trust somewhat.

Etc.

(there are many others)
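To make the relevance and representation points concrete, here is a minimal sketch of one common mitigation: hard-filter candidates on jurisdiction and represented party before ranking by embedding similarity, so the closest cosine distance can no longer override orthogonal market characteristics. The document fields, vectors, and scoring are assumptions, not a description of any particular legal AI tool.

# Sketch: metadata filters (market, represented party) applied before cosine
# ranking. Document fields and query vectors are hypothetical placeholders.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, docs, jurisdiction, representing, k=5):
    """Filter on market and side first, then rank the survivors by similarity."""
    pool = [
        d for d in docs
        if d["jurisdiction"] == jurisdiction and d["representing"] == representing
    ]
    return sorted(pool, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]

# Example: a buyer-side Nordic M&A query never sees seller-side US precedents,
# however close their text embeddings happen to be.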