
Is a new AI paradigm based on raw electromagnetic waves feasible?

5•sadpig70•4mo ago

Hi HN,

I’d like to propose a new, theoretical AI paradigm I'm calling wAI (Wave AI). Unlike traditional AI that learns from human-interpretable data (text, images, audio), wAI would learn directly from raw electromagnetic wave patterns.

The core vision is to unlock dimensions of reality and information that are invisible to human perception. By analyzing raw wave data, a wAI could potentially decode communication between animals and plants, detect hidden bio-signals for early disease diagnostics, or even explore new cosmic phenomena. This isn’t just about making a faster AI; it's about giving intelligence a completely new sensory dimension.

I know this is highly speculative. The main challenges are immense:

* How do we define "learning" from unstructured wave data without a predefined human model?
* How do we collect and process this information at scale?
* What theoretical framework would govern such a system?

This is more of a thought experiment than a technical proposal, and I'm genuinely curious to hear your thoughts. Do you think this is a plausible future direction for AI, or an interesting but ultimately unfeasible concept? What technical or philosophical hurdles do you see?

Looking forward to your insights.

Comments

PaulHoule•4mo ago
Electromagnetic waves are linear and can only do so much. General intelligence and communication require nonlinearity. You could have beams of light connecting some kind of optical neurons through free space, or reflecting through a hologram, but you still need the neuron.

https://www.nature.com/articles/s41377-024-01590-3
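
A minimal numpy sketch of that point: stacked linear maps collapse into a single matrix and obey superposition, so purely linear propagation adds no expressive power, while one nonlinearity between the layers breaks the collapse.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 8))   # first "layer" of linear propagation
    W2 = rng.normal(size=(8, 8))   # second "layer"
    x = rng.normal(size=8)
    y = rng.normal(size=8)

    # Two stacked linear maps collapse into one matrix...
    assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

    # ...and obey superposition: f(x + y) == f(x) + f(y).
    f = lambda v: W2 @ (W1 @ v)
    assert np.allclose(f(x + y), f(x) + f(y))

    # One ReLU between the layers breaks superposition -- that is the
    # "neuron" that free-space propagation alone doesn't give you.
    g = lambda v: W2 @ np.maximum(W1 @ v, 0.0)
    assert not np.allclose(g(x + y), g(x) + g(y))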

sunscream89•4mo ago
Yes, all of the rules of conservation and expenditure of potential distributed over manifold surface areas may be explored, and possibly applied in part.

PH says electromagnetic waves are linear, though I believe he has mistaken his sensory dimensions for the extent of universal expansion.

It is exactly where there are vectors that dimensionality changes: it adds a new scalar coordinate system and allows more information (discernible disposition), and so on.

Electromagnetic waves aren't just intensity; like gravity, they extrapolate and create features in the spacetime of existential reality. We calculate these behaviors linearly, yet reality doesn't calculate: it distributes potentials (such as EM) over surface areas (such as spacetime, or intensity).

Drawn further from fundamental forces, a priori aspects of reality (the existential aspect of universal potential distributing), of information (the reduction of uncertainty, that is, the resolve of the potential/uncertainty of distribution), and of intelligence (the mitigation of uncertainty, the forward determiners for determinant resolve) might be seen in new ways.

mikewarot•4mo ago
Training a model requires repetition. In the case of large language models, that means feeding it a trillion tokens while using gradient descent to improve its predictive power, then repeating that loop a trillion times.

Those tokens save a few orders of magnitude in training costs compared to doing it with raw streams of text (but they also result in LLMs that suck at basic math, spelling, or rhyming).

Doing the same thing with raw inputs from the world would likely add six more orders of magnitude to any given training task, as you would have to scale up the initial input fed into the AI to match the wider bandwidths you're talking about.
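
As a rough back-of-envelope (every number here is an illustrative assumption, not a measurement), comparing a text token stream against even one modest raw RF capture:

    import math

    # Text side: tokens per second a big training run might stream.
    token_bits = 17                # ~2**17-entry vocabulary per token
    tokens_per_second = 1e4
    text_bps = token_bits * tokens_per_second     # ~1.7e5 bits/s

    # Wave side: one mid-range SDR capturing a single band.
    sample_rate = 40e6             # 40 MS/s
    bits_per_sample = 2 * 16       # 16-bit I and Q components
    rf_bps = sample_rate * bits_per_sample        # ~1.28e9 bits/s

    print(f"raw RF / text = 10^{math.log10(rf_bps / text_bps):.1f}")
    # -> 10^3.9 for one narrow capture; wideband, multi-antenna, or
    # multi-site capture pushes the gap further toward six orders.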

You also have to have some form of goal to compute a loss against, and it's unclear what that would be. I'd suggest using "surprise minimization" as the goal: something that can just predict raw surprise might turn out to be useful.

To get the compute requirements down into the feasible range, I'd suggest starting with an autoencoder. As we do with LLMs, you could take that raw input and try to compress it to a much lower dimensionality. You could then try to predict that compressed value at future points in time.
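
A minimal sketch of that first step, assuming PyTorch and treating the raw wave as 2-channel (I/Q) float samples; every layer size and window length here is an arbitrary placeholder:

    import torch
    import torch.nn as nn

    class WaveAutoencoder(nn.Module):
        def __init__(self, latent_dim: int = 64):
            super().__init__()
            # 2 channels x 4096 raw samples in, latent_dim floats out
            # (~128x compression).
            self.encoder = nn.Sequential(
                nn.Conv1d(2, 16, kernel_size=9, stride=4, padding=4), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, stride=4, padding=4), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=9, stride=4, padding=4), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 64, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 64 * 64),
                nn.Unflatten(1, (64, 64)),
                nn.ConvTranspose1d(64, 32, kernel_size=8, stride=4, padding=2), nn.ReLU(),
                nn.ConvTranspose1d(32, 16, kernel_size=8, stride=4, padding=2), nn.ReLU(),
                nn.ConvTranspose1d(16, 2, kernel_size=8, stride=4, padding=2),
            )

        def forward(self, x):
            z = self.encoder(x)            # (batch, latent_dim)
            return self.decoder(z), z

    model = WaveAutoencoder()
    batch = torch.randn(8, 2, 4096)        # stand-in for captured I/Q windows
    recon, z = model(batch)
    loss = nn.functional.mse_loss(recon, batch)   # reconstruction objective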

mikewarot•4mo ago
Ugh... missed the 2 hour window.

Initially I was focused on the training and memory requirements, but as I thought about it while doing other things, it occurred to me that the same things that work for LLMs should work with your idea.

Use an autoencoder to try to reduce the dimensionality of the data while preserving as much information as possible. This gains you orders of magnitude of data compression while remaining useful, and reduces compute requirements for the next steps by that amount squared.

Once the autoencoder is sufficiently effective, you can try to predict the next state at some point in the future. If you have any tagging data, you can then do the whole gradient-descent, repeat-for-a-trillion-iterations thing.
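
Continuing the sketch above under the same assumptions: freeze the trained encoder, learn to predict the latent code one window ahead, and treat the prediction error as the "surprise" signal from the earlier comment. Here sample_consecutive_windows is a hypothetical data loader returning adjacent I/Q windows.

    import torch
    import torch.nn as nn

    # `model` is the WaveAutoencoder from the sketch above, assumed
    # already trained; its encoder is frozen here.
    predictor = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, 64),
    )
    opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

    for step in range(1000):
        # Hypothetical loader: two adjacent windows, each (batch, 2, 4096).
        window_t, window_t1 = sample_consecutive_windows()
        with torch.no_grad():              # encoder stays frozen
            z_t = model.encoder(window_t)
            z_t1 = model.encoder(window_t1)
        surprise = nn.functional.mse_loss(predictor(z_t), z_t1)
        opt.zero_grad()
        surprise.backward()
        opt.step()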

The thing is, trillions of cycles aren't really a barrier these days. Start with deliberately small systems, and work up.

Good luck!