frontpage.

We intercepted the White House app's traffic. 77% of requests go to 3rd parties

https://www.atomic.computer/blog/white-house-app-network-traffic-analysis/
153•donutpepperoni•2h ago•46 comments

The Claude Code Source Leak: fake tools, frustration regexes, undercover mode

https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/
933•alex000kim•14h ago•370 comments

Neanderthals survived on a knife's edge for 350k years

https://www.science.org/content/article/neanderthals-survived-knife-s-edge-350-000-years
38•Hooke•2h ago•4 comments

TinyLoRA – Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
107•sorenjan•4d ago•11 comments

TruffleRuby

https://chrisseaton.com/truffleruby/
70•tosh•3d ago•3 comments

U.S. exempts oil industry from protecting Gulf animals, for 'national security'

https://www.npr.org/2026/03/30/nx-s1-5745926/endangered-species-committee-hegseth-security
193•Jimmc414•2h ago•71 comments

Show HN: 1-Bit Bonsai, the First Commercially Viable 1-Bit LLMs

https://prismml.com/
150•PrismML•7h ago•65 comments

My son pleasured himself on Gemini Live. Entire family's Google accounts banned

https://old.reddit.com/r/LegalAdviceUK/comments/1s92fql/my_son_pleasured_himself_in_front_of_gemi...
140•samlinnfer•1h ago•63 comments

A dot a day keeps the clutter away

https://scottlawsonbc.com/post/dot-system
198•scottlawson•6h ago•66 comments

Ministack (Replacement for LocalStack)

https://ministack.org/
163•kerblang•7h ago•32 comments

OpenAI closes funding round at an $852B valuation

https://www.cnbc.com/2026/03/31/openai-funding-round-ipo.html
371•surprisetalk•7h ago•316 comments

4D Doom

https://github.com/danieldugas/HYPERHELL
151•chronolitus•4d ago•34 comments

Why Don't You Use String Views Instead of Passing Std:Wstring by Const&

https://giodicanio.com/2024/05/14/why-dont-you-use-string-views-like-std-wstring_view-instead-of-...
18•Orochikaku•2d ago•7 comments

Ordinary Lab Gloves May Have Skewed Microplastic Data

https://nautil.us/ordinary-lab-gloves-may-have-skewed-microplastic-data-1279386
77•WaitWaitWha•6h ago•19 comments

Analyzing Geekbench 6 under Intel's BOT

https://www.geekbench.com/blog/2026/03/analyzing-geekbench-6-under-intels-bot/
3•hajile•36m ago•0 comments

Slop is not necessarily the future

https://www.greptile.com/blog/ai-slopware-future
193•dakshgupta•13h ago•344 comments

Back to FreeBSD – Part 2 – Jails

https://hypha.pub/back-to-freebsd-part-2
61•vermaden•4d ago•11 comments

Open source CAD in the browser (Solvespace)

https://solvespace.com/webver.pl
302•phkahler•15h ago•99 comments

Teenage Engineering's PO-32 acoustic modem and synth implementation

https://github.com/ericlewis/libpo32
92•ericlewis•4d ago•22 comments

Cohere Transcribe: Speech Recognition

https://cohere.com/blog/transcribe
170•gmays•11h ago•54 comments

Bring Back MiniDV with This Raspberry Pi FireWire Hat

https://www.jeffgeerling.com/blog/2026/minidv-with-raspberry-pi-firewire-hat/
3•ingve•3d ago•0 comments

I Traced My Traffic Through a Home Tailscale Exit Node

https://tech.stonecharioteer.com/posts/2026/tailscale-exit-nodes/
91•stonecharioteer•8h ago•41 comments

OkCupid gave 3M dating-app photos to facial recognition firm, FTC says

https://arstechnica.com/tech-policy/2026/03/okcupid-match-pay-no-fine-for-sharing-user-photos-wit...
416•whiteboardr•10h ago•85 comments

Why the US Navy won't blast the Iranians and 'open' Strait of Hormuz

https://responsiblestatecraft.org/iran-strait-of-hormuz/
219•KoftaBob•18h ago•561 comments

Learn Something Old Every Day, Part XVIII: How Does FPU Detection Work?

https://www.os2museum.com/wp/learn-something-old-every-day-part-xviii-how-does-fpu-detection-work/
32•kencausey•3d ago•2 comments

Axios compromised on NPM – Malicious versions drop remote access trojan

https://www.stepsecurity.io/blog/axios-compromised-on-npm-malicious-versions-drop-remote-access-t...
1793•mtud•1d ago•729 comments

Inside the 'self-driving' lab revolution

https://www.nature.com/articles/d41586-026-00974-2
18•salkahfi•1d ago•2 comments

Show HN: Postgres extension for BM25 relevance-ranked full-text search

https://github.com/timescale/pg_textsearch
112•tjgreen•11h ago•34 comments

Show HN: Forkrun – NUMA-aware shell parallelizer (50×–400× faster than parallel)

https://github.com/jkool702/forkrun
124•jkool702•4d ago•30 comments

From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem

https://news.future-shock.ai/the-weight-of-remembering/
96•future-shock-ai•3d ago•7 comments

Show HN: Resonate – real-time high temporal resolution spectral analysis

https://alexandrefrancois.org/Resonate/
76•arjf•11mo ago

Comments

james_a_craig•11mo ago
For some reason the value of Pi given in the C++ code is wrong!

It's given in the source as 3.14159274101257324219 when the right value to the same number of digits is 3.14159265358979323846. Very weird. I noticed when I went to look at the C++ to see how this algorithm was actually implemented.

https://github.com/alexandrefrancois/noFFT/blob/main/src/Res... line 31.

pvg•11mo ago
That is a very 'childhood exposure to 8 digit calculators' thing to notice.
james_a_craig•11mo ago
Childhood exposure to pi generation algorithms; the correct version above was from memory.
pvg•11mo ago
Close enough! The wrong 7 jumped out at me instantly although I didn't remember more than a few after.
2YwaZHXV•11mo ago
seems since it's a float it's only 32-bits, and the representation of both 3.14159274101257324219 and 3.14159265358979323846 is the same in IEEE-754: 0x40490fdb

though I agree that it is odd to see, and not sure I see a reason why they wouldn't use 3.14159265358979323846
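This is easy to confirm: both literals round to the same 32-bit float, because the "wrong" constant is just pi as printed back from a float, digit for digit. A quick check with Python's struct module:

```python
import struct

wrong = 3.14159274101257324219  # the constant in the source
right = 3.14159265358979323846  # pi to the same number of digits

# Reinterpret each value's float32 encoding as an unsigned 32-bit int
bits = lambda v: struct.unpack('<I', struct.pack('<f', v))[0]

print(hex(bits(wrong)), hex(bits(right)))  # both 0x40490fdb
```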

james_a_craig•11mo ago
Yeah, it’s as if they wrote a program to calculate pi in a float and saved the output. Very strange choice given how many places the value of pi can be found.
arjf•11mo ago
Indeed... I honestly don't remember where or how I sourced the value, and why I did not use the "correct" one - I will correct in the next release of the package. Thanks for pointing it out!
pvg•11mo ago
You got off easy compared to this dude https://en.wikipedia.org/wiki/William_Shanks
phkahler•11mo ago
This is very much like doing a Fourier Transform without using recursion and the butterflies to reduce the computation. It would be even closer to that if a "moving average" of the right length was used instead of an IIR low-pass filter. This is something I've considered superior for decades but it does take a lot more computation. I guess we're there now ;-)
arjf•11mo ago
It only requires more computation if you really need to compute the full FFT with all the bins, in which case the FFT is more efficient... With this approach you only compute the bins you really need, without having to pre-filter your signal, or performing additional computations on the FFT result. Some sliding window FFT methods compute frequency bands independently, but they do require buffering and I really wanted to avoid that.
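The per-bin idea can be sketched as a complex oscillator whose product with the input is smoothed by a one-pole IIR low-pass. This is a minimal illustration of the general approach, not the package's exact formula; the function name and alpha value are my own choices:

```python
import cmath, math

def resonator(samples, freq_hz, sample_rate, alpha=0.01):
    # One bin: rotate a unit phasor at -freq_hz, multiply by the input
    # (demodulation), and track the product with an exponential moving
    # average (one-pole IIR low-pass). No buffering, no window.
    rot = cmath.exp(-2j * math.pi * freq_hz / sample_rate)
    phasor, acc = 1 + 0j, 0j
    for x in samples:
        phasor *= rot
        acc = (1 - alpha) * acc + alpha * phasor * x
    return abs(acc)  # steady-state magnitude at freq_hz

sr = 16000
tone = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]
strong = resonator(tone, 1000, sr)  # matched bin responds
weak = resonator(tone, 3000, sr)    # unmatched bin stays near zero
```

Each extra bin is just another phasor/accumulator pair, so you pay only for the frequencies you actually want.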
zevv•11mo ago
I might be mistaken, but I don't see how this is novel. As far as I know, this has been a proven DSP technique for ages, although it is usually only applied when a small number of distinct frequencies need to be detected - for example DTMF.

When the number of frequencies/bins grows, it is computationally much cheaper to use the well-known FFT algorithm instead, at the price of needing to handle input data in blocks instead of "streaming".
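The classic instance of this few-frequencies technique is the Goertzel algorithm, which computes the power at one frequency bin in O(N) without any FFT. A minimal sketch (my own illustration, not from the thread):

```python
import math

def goertzel_power(samples, freq, sample_rate):
    # Goertzel: a two-state IIR resonator whose final state pair
    # yields the signal power at one chosen frequency.
    w = 2 * math.pi * freq / sample_rate
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

sr = 8000
# DTMF digit "1" is the sum of 697 Hz and 1209 Hz tones
sig = [math.sin(2 * math.pi * 697 * n / sr) +
       math.sin(2 * math.pi * 1209 * n / sr) for n in range(400)]
present = goertzel_power(sig, 697, sr)  # row tone is in the signal
absent = goertzel_power(sig, 941, sr)   # another DTMF row tone, not present
```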

colanderman•11mo ago
The difference from FFT is this is a multiresolution technique, like the constant-Q transform. And, unlike CQT (which is noncausal), this provides a better match to the actual behavior of our ears (by being causal). It's also "fast" in the sense of FFT (which CQT is not).
zipy124•11mo ago
There exists the multiresolution FFT, and other forms of FFT based around sliding windows/SFFT techniques. CQT can also be implemented extremely quickly, utilising FFTs and kernels or other methods, like in the librosa library (dubbed pseudo-CQT).

I'm also not sure how this is causal? It has a weighted time window (biasing the more recent sound), which is fairly novel, but I wouldn't call that causal.

This is not to say I don't think this is cool; it certainly looks better than existing techniques like synchrosqueezing for pushing the limit of the Heisenberg uncertainty principle (technically, given ideal conditions, synchrosqueezing can outperform the principle, but only for a specific subset of signals).

waffletower•11mo ago
Curious if there is available math to show the gain scale properties of this technique across the spectrum -- in other words, its frequency response. The system doesn't appear to be LTI, so I don't believe we can use the Z-transform to do this. Phase response would also be important.
arjf•11mo ago
The Sliding Windowed Infinite Fourier Transform (SWIFT) has very similar math, and they provide some analysis in the paper. I use a different heuristic for alpha so I am not sure the analysis transfers directly. In my upcoming paper I have some numerical experiments and graphs that show resonator response across the range.
arjf•11mo ago
Actually, digging into SWIFT a bit more, the formulas differ by more than just the heuristic for alpha (unless I missed something), so the analysis in the SWIFT paper does not apply directly (or maybe not at all).
dr_dshiv•11mo ago
Thanks for your contribution! Reminds me of Helmholtz resonators.

I wrote this cross-disciplinary paper about resonance a few years ago. You may find it useful or at least interesting.

https://www.frontiersin.org/journals/neurorobotics/articles/...

arjf•11mo ago
Interesting - thanks for sharing!
colanderman•11mo ago
Nice! I've used a homegrown CQT-based visualizer for a while for audio analysis. It's far superior to the STFT-based view you get from e.g. Audacity, since it is multiresolution, which is a better match to how we actually experience sound. I have for a while wanted to switch my tool to a gammatone-filter-based method [1] but I didn't know how to make it efficient.

Actually I wonder if this technique can be adapted to use gammatone filters specifically, rather than simple bandpass filters.

[1] https://en.wikipedia.org/wiki/Gammatone_filter

mofeien•11mo ago
If you already have the implementation for the CQT, wouldn't you just be able to replace the Morlet wavelet used in the CQT with the gammatone wavelet without much of an efficiency hit? I'm just learning about the gammatone filter, and it sounds interesting since it apparently better models human hearing.
vessenes•11mo ago
Nice! Can any signals/AI folks comment on whether using this would improve vocoder outputs? The visuals look much higher res, which makes me think a vocoder using them would have more nuance. But, I'm a hobbyist.
drmikeando•11mo ago
You can view this result as the convolution of the signal with an exponentially decaying sine and cosine.

That is, `y(t') = integral e^kt x(t' - t) dt`, with k complex and negative real part.

If you discretize that using simple integration with t' = i dt, t = j dt, you get

    y_i = dt sum_j e^(k j dt) x_{i-j}
    y_{i+1} = dt sum_j e^(k j dt) x_{i+1-j}
            = e^(k dt) (dt sum_j' e^(k j' dt) x_{i-j'}) + dt x_{i+1}
            = e^(k dt) y_i + dt x_{i+1}
If we then scale this by some value A, such that A y_i = z_i, we can write this as

    z_{i+1} = e^(k dt) z_i + A dt x_{i+1}
Here `e^(k dt)` plays a similar role to (1-alpha) and A dt is similar to P alpha - the difference being that P changes over time, while A is constant.

We can write `z_i = e^(w dt i) r_i`, where w is the imaginary part of k:

    e^(w dt (i+1)) r_{i+1} = e^(k dt) e^(w dt i) r_i + A dt x_{i+1}
                   r_{i+1} = e^((k - w) dt) r_i + e^(-w dt (i+1)) A dt x_{i+1}
                           = (1-alpha) r_i + p_i x_{i+1}
where p_i = e^(-w dt (i+1)) A dt = e^(-w dt) p_{i-1}, which is exactly the result from the Resonate web page.

The neat thing about recognising this as a convolution integral is that we can use window shapes other than exponential decay - we can implement a box filter using only two states, or a triangular filter (this is a bit trickier and takes more states). While they're tricky to derive, they tend to run really quickly.
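The equivalence is easy to check numerically: the one-pole recursion (with the dt factors placed so it matches the sum exactly) reproduces the direct discretized convolution. The values of dt, k and the test signal below are arbitrary illustrative choices of mine:

```python
import cmath, math

dt = 0.001
k = complex(-200.0, 2000.0)  # decaying, oscillating kernel, Re(k) < 0
x = [math.sin(2 * math.pi * 300 * n * dt) for n in range(400)]

# Direct discretized convolution: y_i = dt * sum_j e^(k j dt) x_{i-j}  (O(N^2))
direct = [dt * sum(cmath.exp(k * j * dt) * x[i - j] for j in range(i + 1))
          for i in range(len(x))]

# Equivalent one-pole recursion: y_{i+1} = e^(k dt) y_i + dt x_{i+1}  (O(N))
rec, y = [], 0j
for xn in x:
    y = cmath.exp(k * dt) * y + dt * xn
    rec.append(y)

assert max(abs(a - b) for a, b in zip(direct, rec)) < 1e-9
```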

arjf•11mo ago
This formulation is close to that of the Sliding Windowed Infinite Fourier Transform (SWIFT), of which I became aware only yesterday.

For me the main motivation for developing Resonate was interactive systems: very simple, no buffering, no window... Also, no need to compute all the FFT bins, so in that sense more efficient!

arjf•11mo ago
Just want to call out the resources listed at the bottom of the Resonate website:

- The Oscillators app demonstrates real-time linear, log and Mel scale spectrograms, as well as derived audio features such as chromagrams and MFCCs https://alexandrefrancois.org/Oscillators/

- The Resonate YouTube playlist features video captures of real-time demonstrations. https://www.youtube.com/playlist?list=PLVcB_ABiKC_cbemxXUUJX...

- The open source Oscillators Swift package contains reference implementations in Swift and C++. https://github.com/alexandrefrancois/Oscillators

- The open source python module noFFT provides python and C++ implementations of Resonate functions and Jupyter notebooks illustrating their use in offline settings. https://github.com/alexandrefrancois/noFFT