frontpage.

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•2m ago•0 comments

Japanese rice is the most expensive in the world

https://www.cnn.com/2026/02/07/travel/this-is-the-worlds-most-expensive-rice-but-what-does-it-tas...
1•mooreds•2m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•2m ago•1 comment

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•2m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•3m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•3m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•3m ago•1 comment

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•3m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•4m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•5m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•8m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•8m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•9m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•9m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•10m ago•1 comment

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•10m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•12m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•12m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•13m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•14m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•15m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•19m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•19m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•20m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•24m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•24m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•25m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
2•samuel246•28m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•28m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•28m ago•0 comments

Ask HN: How much better can the LLMs become assuming no AGI

3•yalogin•5mo ago
OpenAI, Anthropic, and every other company are putting a lot into training. How is the functionality pipeline for LLMs being filled out? What is missing in today's LLMs that they need to plan for? Just trying to get some insight into the planning process, and also into what the community sees as the north star for LLMs without saying AGI.

Comments

kacklekackle•5mo ago
Right now my thinking queries time out after 60 seconds of VM use, and as a result the responses are less than adequate because the model takes shortcuts to stay within the timeout limit. I can imagine that in the future there may be no timeout limit, which would greatly improve the quality of the responses. More recently, the model seems to get stuck reaffirming facts we already established and moved on from; for some reason it feels it needs to remind me. And even though we've moved on, it then applies those facts to whatever we've moved on to, so the context handling needs to be improved.
jqpabc123•5mo ago
For the most part, LLMs just remember.

They don't think, learn, or create on their own, at least not anywhere close to a human level. Otherwise, they wouldn't require so much "training".

Essentially, they are best characterized as a huge database with a natural language interface.

Once the internet has been consumed and indexed, this sort of approach starts to hit a wall. There is no more data readily available for "training".

I don't know what the next breakthrough will be but I firmly believe one will be required to push performance to any significantly higher level.

pillefitz•5mo ago
In terms of bits seen during training, LLM are more akin to a 3 year old. Robots roaming around and learning to interact with the in environment and sharing knowledge might be a game changer, assuming that the current methodologies are sufficient (LLM + RL)