frontpage.

Veo 3 and Imagen 4, and a new tool for filmmaking called Flow

https://blog.google/technology/ai/generative-media-models-io-2025/
370•youssefarizk•5h ago•228 comments

Litestream: Revamped

https://fly.io/blog/litestream-revamped/
165•usrme•3h ago•33 comments

Gemma 3n preview: Mobile-first AI

https://developers.googleblog.com/en/introducing-gemma-3n/
159•meetpateltech•5h ago•69 comments

The NSA Selector

https://github.com/wenzellabs/the_NSA_selector
144•anigbrowl•4h ago•38 comments

"ZLinq", a Zero-Allocation LINQ Library for .NET

https://neuecc.medium.com/zlinq-a-zero-allocation-linq-library-for-net-1bb0a3e5c749
13•cempaka•36m ago•2 comments

Deep Learning Is Applied Topology

https://theahura.substack.com/p/deep-learning-is-applied-topology
316•theahura•9h ago•151 comments

Semantic search engine for arXiv, bioRxiv and medRxiv

https://arxivxplorer.com/
15•0101111101•1h ago•0 comments

Red Programming Language

https://www.red-lang.org/p/about.html
77•hotpocket777•4h ago•28 comments

Show HN: 90s.dev – Game maker that runs on the web

https://90s.dev/blog/finally-releasing-90s-dev.html
208•90s_dev•8h ago•85 comments

Robin: A multi-agent system for automating scientific discovery

https://arxiv.org/abs/2505.13400
99•nopinsight•6h ago•15 comments

Show HN: A Tiling Window Manager for Windows, Written in Janet

https://agent-kilo.github.io/jwno/
184•agentkilo•7h ago•55 comments

My favourite fonts to use with LaTeX (2022)

https://www.lfe.pt/latex/fonts/typography/2022/11/21/latex-fonts-part1.html
33•todsacerdoti•3d ago•10 comments

Show HN: A Simple Server to Match Long/Lat to a TimeZone

https://github.com/LittleGreenViper/LGV_TZ_Lookup
15•ChrisMarshallNY•1h ago•9 comments

A disk is a bunch of bits (2023)

https://www.cyberdemon.org/2023/07/19/bunch-of-bits.html
13•rrampage•3d ago•3 comments

The Dawn of Nvidia's Technology

https://blog.dshr.org/2025/05/the-dawn-of-nvidias-technology.html
103•wmf•6h ago•30 comments

AI's energy footprint

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
84•pseudolus•12h ago•91 comments

Show HN: Juvio – UV Kernel for Jupyter

https://github.com/OKUA1/juvio
82•okost1•6h ago•19 comments

Ashby (YC W19) Is Hiring Engineering Managers

https://www.ashbyhq.com/careers?utm_source=hn&ashby_jid=933570bc-a3d6-4fcc-991d-dc399c53a58a
1•abhikp•6h ago

Google AI Ultra

https://blog.google/products/google-one/google-ai-ultra/
202•mfiguiere•4h ago•213 comments

The emoji problem (2022)

https://artofproblemsolving.com/community/c2532359h2760821_the_emoji_problem__part_i?srsltid=AfmBOor9TbMq_A7hGHSJGfoWaa2HNzducSYZu35d_LFlCSNLXpvt-pdS
301•mtsolitary•12h ago•52 comments

Magic of software; what makes a good engineer also makes a good engineering org

https://moxie.org/2024/09/23/a-good-engineer.html
4•kiyanwang•1d ago•1 comment

Launch HN: Opusense (YC X25) – AI assistant for construction inspectors on site

28•rcody•7h ago•13 comments

GPU-Driven Clustered Forward Renderer

https://logdahl.net/p/gpu-driven
73•logdahl•7h ago•18 comments

The Last Letter

https://aeon.co/essays/how-the-last-letters-of-the-condemned-can-teach-us-how-to-live
57•HR01•5h ago•16 comments

Gail Wellington, former Commodore executive, has died

https://www.legacy.com/us/obituaries/name/gail-wellington-obituary?id=58418580
57•erickhill•3d ago•19 comments

Google is giving Amazon a leg up in digital book sales

https://www.washingtonpost.com/technology/2025/05/16/google-amazon-ebooks-apps/
93•bookofjoe•3d ago•60 comments

A simple search engine from scratch

https://bernsteinbear.com/blog/simple-search/
227•bertman•13h ago•48 comments

Our Journey Through Linux/Unix Landscapes

https://blog.kalvad.com/our-journey-through-linux-unix-landscapes/
6•alekq•1h ago•3 comments

Reports of Deno's Demise Have Been Greatly Exaggerated

https://deno.com/blog/greatly-exaggerated
171•stephdin•11h ago•167 comments

The Lisp in the Cellar: Dependent types that live upstairs [pdf]

https://zenodo.org/records/15424968
78•todsacerdoti•9h ago•17 comments

The Fractured Entangled Representation Hypothesis

https://github.com/akarshkumar0101/fer
45•akarshkumar0101•7h ago

Comments

akarshkumar0101•6h ago
Tweet: https://x.com/kenneth0stanley/status/1924650124829196370 arXiv: https://arxiv.org/abs/2505.11581
pvg•6h ago
Sounds like you're one of the co-authors? Probably worth mentioning if that's the case, so people know they can discuss the work with one of the work-doers.
akarshkumar0101•6h ago
I mentioned that in the original post, but I don't see that text here anymore (that's why I added the links via comment)... I am new to Hacker News.
messe•5h ago
I believe they just mean that you should edit the comment where you added the links to mention that you are the author, to add that additional context.
pvg•4h ago
I just meant 'it's good for people to know one of the authors is in the thread because it makes for more interesting conversation'. Clearly I did not figure out how to do that without starting a bunch of meta!
macintux•5h ago
I believe this could (or should) have been a Show HN, which would have allowed you to include explanatory text. See the top of this page for the rules.

https://news.ycombinator.com/show

Welcome to the site. There are a lot of less obvious features, which you'll discover over time.

pvg•5h ago
Reading material usually can't be a Show HN but you can just post your work without that and say you're involved.
macintux•4h ago
The repo includes runnable code.

> Show HN is for something you've made that other people can play with… On topic: things people can run on their computers or hold in their hands

pvg•4h ago
A lot of writing includes runnable code and isn't a Show HN. It's a comparatively narrow category.
ipunchghosts•4h ago
I am interested in doing research like this. Is there any way I can be a part of it or a similar group? I have been fighting for funding from DoD for many years but to no avail, so I largely have to do this research on my own time, or solve my current grant's problems so that I can work on this. In my mind, this kind of research is the most interesting and important in the deep learning field right now. I am a hard worker and a high-throughput thinker... how can I get connected to others with a similar mindset?
scarmig•5h ago
Did you investigate other search processes besides SGD? I'm thinking of those often termed "biologically plausible" (e.g. forward-forward, FA). Are their internal representations closer to the fractured or unified representations?
timewizard•5h ago
> Much of the excitement in modern AI is driven by the observation that scaling up existing systems leads to better performance.

Scaling up almost always leads to better performance. If you're only getting linear gains, though, then there is absolutely nothing to be excited about. You are in a dead end.

goldemerald•4h ago
This is an interesting line of research, but it is missing a key aspect: there are (almost) no references to the linear representation hypothesis. Much recent work on neural network interpretability has shown that individual neurons are polysemantic, and therefore practically useless for explainability. My hypothesis is that fitting linear probes (or a sparse autoencoder) would reveal linearly semantic attributes.

It is unfortunate because they briefly mention Neel Nanda's Othello experiments, but not the wide array of related experiments, like the NeurIPS oral "Linear Representation Hypothesis in Language Models" or even Golden Gate Claude.
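
A minimal sketch of the kind of linear probe described above, run on synthetic data (the activation matrix, labels, and sizes here are illustrative stand-ins, not anything from the paper):

    # Fit a linear classifier on frozen activations to test whether a
    # concept is linearly decodable; the learned weights give a candidate
    # "linear direction" for the concept. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    acts = rng.normal(size=(1000, 128))        # stand-in for layer activations
    labels = acts @ rng.normal(size=128) > 0   # synthetic linearly encoded concept

    probe = LogisticRegression(max_iter=1000).fit(acts, labels)
    print("probe accuracy:", probe.score(acts, labels))

    # Unit-normalized probe weights: the candidate concept direction.
    direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])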

ipunchghosts•4h ago
Does what you're saying imply that there is a rotation matrix you can apply to each activation output to make it less entangled?
goldemerald•4h ago
Not quite. For an underlying semantic concept (e.g., a smiling face), you can go from a basis vector [0,1,0,...,0] to the original latent space via a single rotation. You could then induce said concept by moving the original latent point along that linear direction.
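
A toy numpy illustration of that picture, assuming an orthogonal matrix R mapping the one-hot concept basis into latent space (both R and the latent point are random stand-ins):

    # Map a one-hot concept basis vector into latent space with an
    # orthogonal (rotation-like) matrix, then steer a latent point by
    # traversing along the resulting linear direction.
    import numpy as np

    d = 8
    rng = np.random.default_rng(0)
    R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random orthogonal matrix

    e_k = np.zeros(d)
    e_k[1] = 1.0                      # concept basis vector [0,1,0,...,0]
    direction = R @ e_k               # the concept's direction in latent space

    z = rng.normal(size=d)            # some latent point
    z_steered = z + 3.0 * direction   # induce the concept by moving along it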
ipunchghosts•4h ago
I think we are saying the same thing. Please correct me where I am wrong, though. You could look at the maps the same way, but instead of the basis being one-hot dimensions (the standard basis), it could be rotated.
akarshkumar0101•4h ago
We mention this issue exactly in the fourth paragraph of Section 4 and in Appendix F!
goldemerald•3h ago
That is addressing the incomprehensibility of PCA and applying a transformation to the entire latent space. I've never found PCA to be meaningful for deep learning. As far as I can tell, the polysemanticity issue with neurons cannot be addressed with a single linear transformation. There is no sparse analysis (via linear probes or SAEs), and hence the issue remains unaddressed.
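
For concreteness, a compact sketch of the sparse-autoencoder analysis being contrasted with PCA here, in PyTorch (the architecture, sizes, and L1 coefficient are guesses, not taken from any cited work):

    # A sparse autoencoder over activations: an L1 penalty on the code
    # encourages sparse features, rather than the dense rotated directions
    # PCA produces. The activations here are random stand-ins.
    import torch
    import torch.nn as nn

    d_act, d_code = 128, 512  # overcomplete code, typical for SAEs

    class SparseAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Linear(d_act, d_code)
            self.dec = nn.Linear(d_code, d_act)

        def forward(self, x):
            code = torch.relu(self.enc(x))
            return self.dec(code), code

    sae = SparseAutoencoder()
    opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
    acts = torch.randn(1024, d_act)  # stand-in for collected activations

    for _ in range(100):
        recon, code = sae(acts)
        # Reconstruction loss plus L1 sparsity penalty on the code.
        loss = ((recon - acts) ** 2).mean() + 1e-3 * code.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()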
ipunchghosts•4h ago
I am glad they evaluated this hypothesis using weight decay, which is primarily thought to induce a structured representation. My first thought was that the entire paper would be useless if they didn't do this experiment.

I find it rather interesting that the structured representations go from sparse to full to sparse as a function of layer depth. I have noticed that applying the weight decay penalty as an exponential function of layer depth gives improved results over using a single global weight decay.
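One way to express that depth-dependent weight decay in PyTorch is via optimizer parameter groups; the exponential schedule and constants below are only a guess at what is meant:

    # Depth-dependent weight decay: each layer gets a decay coefficient
    # scaled exponentially with its depth, instead of one global value.
    # The model and constants are illustrative.
    import torch
    import torch.nn as nn

    model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(6)])

    base_wd, gamma = 1e-4, 1.5
    param_groups = [
        {"params": layer.parameters(), "weight_decay": base_wd * gamma ** depth}
        for depth, layer in enumerate(model)
    ]
    opt = torch.optim.AdamW(param_groups, lr=1e-3)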

cwmoore•50m ago
Isn't this simply mirroronic gravitation?