frontpage.

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•49s ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•4m ago•0 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•8m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•24m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•30m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•30m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•33m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•36m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•46m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•46m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•51m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•55m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•56m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•59m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•59m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•1h ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
4•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is fast and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
2•1vuio0pswjnm7•1h ago•0 comments

Is a new AI paradigm based on raw electromagnetic waves feasible?

5•sadpig70•4mo ago

Hi HN,

I’d like to propose a new, theoretical AI paradigm I'm calling wAI (Wave AI). Unlike traditional AI that learns from human-interpretable data (text, images, audio), wAI would learn directly from raw electromagnetic wave patterns.

The core vision is to unlock dimensions of reality and information that are invisible to human perception. By analyzing raw wave data, a wAI could potentially decode communication between animals and plants, detect hidden bio-signals for early disease diagnostics, or even explore new cosmic phenomena. This isn’t just about making a faster AI; it's about giving intelligence a completely new sensory dimension.

I know this is highly speculative. The main challenges are immense:

* How do we define "learning" from unstructured wave data without a predefined human model?
* How do we collect and process this information at scale?
* What theoretical framework would govern such a system?

This is more of a thought experiment than a technical proposal, and I'm genuinely curious to hear your thoughts. Do you think this is a plausible future direction for AI, or an interesting but ultimately unfeasible concept? What technical or philosophical hurdles do you see?

Looking forward to your insights.

Comments

PaulHoule•4mo ago
Electromagnetic waves are linear and can only do so much. General intelligence and communication require nonlinearity. You could have beams of light connecting some kind of optical neurons through free space or reflecting through a hologram, but you still need the neuron.

https://www.nature.com/articles/s41377-024-01590-3
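
A minimal numpy sketch of the linearity point (hypothetical, not from the linked paper): composing purely linear maps, like ideal optical elements, collapses to a single linear map, which is why the nonlinearity (the "neuron") is still needed.

    import numpy as np

    # Two purely linear "layers", like lossless optical elements.
    W1 = np.random.randn(8, 16)
    W2 = np.random.randn(4, 8)
    x = np.random.randn(16)

    # Stacking them is indistinguishable from a single linear map:
    assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

    # Insert a nonlinearity and the collapse no longer holds:
    relu = lambda v: np.maximum(v, 0.0)
    print(np.allclose(W2 @ relu(W1 @ x), (W2 @ W1) @ x))  # almost surely False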

sunscream89•4mo ago
Yes, all of the rules of conservation and expenditure of potential distribution over manifold surface areas may be explored, and possibly applied in part.

PH says electromagnetic waves are linear, though I believe he has mistaken his sensory dimensions for the extent of universal expansion.

It is exactly where there are vectors that dimensionality changes: it adds a new scalar coordinate system and allows more information (discernible disposition), etc.

Electromagnetic waves aren’t just intensity; like gravity, they extrapolate and create features in the spacetime of existential reality. We calculate these behaviors linearly, yet reality doesn’t calculate: it distributes potentials (such as EM) over surface areas (such as spacetime, or intensity).

Drawn further from fundamental forces, a priori aspects of reality (the existential aspect of universal potential distributing), information (the reduction of uncertainty, that is, the resolution of the potential/uncertainty of distribution), and intelligence (the mitigation of uncertainty, the forward determiners for determinant resolve) might be seen in new ways.

mikewarot•4mo ago
Training a model requires repetition: in the case of large language models, it's feeding it a trillion tokens while using gradient descent to improve its predictive power, then repeating that loop a trillion times.

Those tokens save a few orders of magnitude in training costs compared to doing it with raw streams of text. (But they also result in LLMs that suck at basic math, spelling, or rhyming.)

Doing the same thing with raw inputs from the world would likely add 6 more orders of magnitude to any given training task, as you would have to scale up the initial input fed into the AI to match the wider bandwidths you're talking about.
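
A rough back-of-envelope of that scaling, with assumed and purely illustrative numbers (one wideband receiver, one year of capture):

    # Assumed, illustrative numbers only.
    text_corpus_tokens = 1e12            # LLM-scale text corpus

    sample_rate_hz = 100e6               # one hypothetical 100 MHz receiver
    seconds_per_year = 3.15e7
    raw_samples = sample_rate_hz * seconds_per_year   # ~3e15 samples/year

    print(raw_samples / text_corpus_tokens)  # ~3,000x more raw positions; many
                                             # receivers and bands push this
                                             # toward the 6-orders range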

You also have to have some form of goal to compute a loss against, and it's unclear what that would be. I'd suggest using "surprise minimization" as the goal. Something that can just predict raw surprise might turn out to be useful.

To get the compute requirements down into the feasible range, I'd suggest starting with an autoencoder. Like we do with LLMs, you could take that raw input and just try to compress it to a much lower dimensionality. You could then try to predict that value in the future.
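
A minimal PyTorch sketch of that suggestion, assuming raw waves arrive as fixed-length windows of samples (all names and sizes here are hypothetical):

    import torch
    import torch.nn as nn

    WINDOW = 1024   # assumed: raw samples per training window
    LATENT = 32     # assumed: compressed dimensionality

    class WaveAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(WINDOW, 256), nn.ReLU(),
                nn.Linear(256, LATENT))
            self.decoder = nn.Sequential(
                nn.Linear(LATENT, 256), nn.ReLU(),
                nn.Linear(256, WINDOW))

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    model = WaveAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    batch = torch.randn(64, WINDOW)              # stand-in for real wave windows
    recon, z = model(batch)
    loss = nn.functional.mse_loss(recon, batch)  # reconstruction error as the loss
    opt.zero_grad(); loss.backward(); opt.step()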

mikewarot•4mo ago
Ugh... missed the 2 hour window.

Initially I was focused on the training and memory requirements, but as I thought about it while doing other things, it occurred to me that the same things that work for LLMs should work with your idea.

Use an autoencoder to try to reduce the dimensionality of the data while preserving as much information as possible. This gains you orders of magnitude of data compression while remaining useful, and reduces compute requirements for the next steps by that amount squared.

Once the autoencoder is sufficiently effective, you can try to predict the next state at some point in the future. If you have any tagging data, you can do the whole gradient-descent, repeat-for-a-trillion-iterations thing.
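
Continuing the autoencoder sketch above (same hypothetical model, WINDOW, and LATENT): a small predictor over latents gives the predict-the-next-state step, and its prediction error is one concrete reading of "surprise".

    import torch
    import torch.nn as nn

    # Predict the next latent state; prediction error doubles as "surprise".
    predictor = nn.Sequential(
        nn.Linear(LATENT, 64), nn.ReLU(),
        nn.Linear(64, LATENT))
    p_opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

    window_t  = torch.randn(64, WINDOW)   # consecutive windows (stand-in data)
    window_t1 = torch.randn(64, WINDOW)

    with torch.no_grad():                 # encoder already trained above
        z_t, z_t1 = model.encoder(window_t), model.encoder(window_t1)

    surprise = nn.functional.mse_loss(predictor(z_t), z_t1)
    p_opt.zero_grad(); surprise.backward(); p_opt.step()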

The thing is, trillions of cycles aren't really a barrier these days. Start with deliberately small systems, and work up.

Good luck!