frontpage.

The Dismal Failure of LLMs as EV Search Aids

http://scottmeyers.blogspot.com/2025/05/the-dismal-failure-of-llms-as-ev-search.html
1•ingve•2m ago•0 comments

Show HN: AI SVG editor built with Rust

https://svg.new
2•swazzy•2m ago•0 comments

Show HN: CustomerRipple – Customer influence networks from CSV uploads

https://www.customerripple.com/
1•ezhil•4m ago•0 comments

OpenBAO v2.3 now supports Namespaces (HashiCorp Vault fork)

https://openbao.org/blog/namespaces-announcement/
2•voigt•5m ago•0 comments

AutoGit-O-Matic: Your Git Sync Sidekick

https://github.com/FPGArtktic/AutoGit-o-Matic
1•mokulanis•6m ago•1 comment

Data Breach at LexisNexis Risk Solutions Impacts 364,000

https://www.securityweek.com/364000-impacted-by-data-breach-at-lexisnexis-risk-solutions/
2•susam•7m ago•0 comments

When Solutions Get Fixed

https://idiallo.com/blog/when-solutions-get-fixed
1•WhyNotHugo•9m ago•0 comments

LiveContainer: Run iOS apps without installing them

https://github.com/LiveContainer/LiveContainer
1•Lwrless•12m ago•0 comments

Making a Ribbon Microphone [video]

https://www.youtube.com/watch?v=jkF-g9pnBSg
1•artomweb•19m ago•1 comment

Sergey Brin suggests threatening AI for better results

https://www.theregister.com/2025/05/28/google_brin_suggests_threatening_ai/
1•beardyw•19m ago•0 comments

MCP Jupyter: AI-powered Jupyter collaboration

https://block.github.io/mcp-jupyter/
1•sebg•20m ago•0 comments

European Commission: Make Europe Great Again for Startups

https://www.theregister.com/2025/05/29/european_commission_wants_tech_startups/
2•rntn•24m ago•0 comments

A Library in New Zealand Replaces Dewey with System Rooted in Māori Tradition

https://magazine.1000libraries.com/this-library-in-new-zealand-is-replacing-dewey-with-a-system-rooted-in-maori-tradition/
3•Geekette•27m ago•1 comment

Tell HN: RedwoodSDK kicks off open source fellowship

https://rwsdk.com/blog/rwsdk-x-livestore
1•pistoriusp•27m ago•1 comment

LLM: The 'Generative Block' Joke of Human

https://dmf-archive.github.io/docs/posts/llm-and-msc/
1•NetRunnerSu•28m ago•0 comments

Gamer Games for Non-Gamers

https://www.hillelwayne.com/post/vidja-games/
2•MaXtreeM•31m ago•0 comments

Melting glacier destroys village in Swiss Alps

https://twitter.com/MeteoFrComtoise/status/1927736681852961118
1•dsnr•34m ago•0 comments

Show HN: I made a site where you can sell your techie side projects

https://www.microns.io
1•sweatC•35m ago•1 comment

Carefully Reading Programming Bitcoin

https://blog.yellowflash.in/posts/2025-05-29-programming-bitcoin-handwaved.html
1•yellowflash•35m ago•0 comments

Switzerland's 370,000 Nuclear Bunkers

1•samizdis•35m ago•0 comments

RAAF recruit had chilli in eyes and was set afire and choked in hazing ritual

https://www.dailytelegraph.com.au/nocookies
1•KnuthIsGod•37m ago•0 comments

Collaborative Agentic AI Needs Interoperability Across Ecosystems

https://arxiv.org/abs/2505.21550
1•devos50•38m ago•0 comments

A Rebuttal to "Against Life Extension"

https://domofutu.substack.com/p/a-rebuttal-to-against-life-extension
1•domofutu•39m ago•0 comments

Simple webpage to Markdown Chrome Extension

https://chromewebstore.google.com/detail/webpage-to-markdown/fgpepdeaaldghnmehdmckfibbhcjoljj
1•vinyasvi•39m ago•0 comments

A Song of “Full Self-Driving”: Elon Isn’t Tony Stark. He’s Michael Scott.

https://www.thebulwark.com/p/elon-musk-self-driving-fsd-tesla-tony-stark-michael-scott
1•latexr•40m ago•0 comments

Remote MCP Servers

https://www.stephendiehl.com/posts/remote_mcp_servers/
1•rwosync•41m ago•0 comments

Cory Doctorow on how we lost the internet

https://lwn.net/SubscriberLink/1021871/ffeed46818908c91/
1•udev4096•41m ago•0 comments

Show HN: Practical Ways to Strip EXIF Metadata and Protect Your Photo Privacy

https://slimimg.tools/blog/2025-05-26-remove-exif-metadata-privacy-guide
2•aaiiggcc000•42m ago•1 comment

Paper Houses

https://medium.com/luminasticity/paper-houses-442ea84be598
1•bryanrasmussen•44m ago•0 comments

Putting Rigid Bodies to Rest [pdf]

https://www.cs.cmu.edu/~kmcrane/Projects/RestingBodies/PuttingRigidBodiesToRest.pdf
1•murkle•46m ago•1 comment

High-quality OLED displays now enabling integrated thin and multichannel audio

https://www.sciencedaily.com/releases/2025/05/250521125055.htm
44•LorenDB•1d ago

Comments

walterbell•1h ago
Could such a display also function as a microphone?
jdranczewski•1h ago
Good point, piezos do also generate voltages when deformed, so this could conceivably be run in reverse... Microphone arrays are already used for directional sound detection, so this could have fun implications for that application given the claimed density of microphones.
orbital-decay•1h ago
Technically yes, piezo cells are reversible, just like about anything that can be used to emit sound. You can use the array for programmable directionality as well.
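
For a concrete sense of the "programmable directionality" both comments gesture at, here is a minimal delay-and-sum beamforming sketch that treats the piezo cells as a uniform line array of microphones. The element count, spacing, and sample rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Delay-and-sum beamforming for a uniform line array of N sensors.
# Geometry and sample rate are assumed for illustration.
C = 343.0    # speed of sound, m/s
FS = 48_000  # sample rate, Hz
N = 16       # piezo cells read as microphones
D = 0.005    # element spacing, m (pixel-like 5 mm pitch)

def arrival_delays(theta_deg):
    """Relative arrival time (s) of a plane wave from theta_deg at each element."""
    taus = np.arange(N) * D * np.sin(np.radians(theta_deg)) / C
    return taus - taus.min()  # shift so the earliest arrival is 0

def delay_and_sum(signals, theta_deg):
    """signals: (N, samples) array; returns the sum steered toward theta_deg."""
    taus = arrival_delays(theta_deg)
    comp = taus.max() - taus  # delay early elements to align the wavefront
    out = np.zeros(signals.shape[1])
    for ch, tau in zip(signals, comp):
        shift = int(round(tau * FS))  # integer-sample delay for brevity
        out[shift:] += ch[:signals.shape[1] - shift]
    return out / N
```

Sources off the steered angle sum incoherently and are attenuated, which is the directional pickup jdranczewski describes.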
GenshoTikamura•55m ago
A proper telescreen is not the one you watch and listen to, but the one that watches you and listens to you, %little_brother%
tuukkah•1h ago
> This breakthrough enables each pixel of an OLED display to simultaneously emit different sounds

> The display delivers high-quality audio

Are multiple pixels somehow combined to reproduce low frequencies?

GenshoTikamura•1h ago
Theoretically, any frequency can be produced by the interference of ultrasonic waves, but the amplitude is questionable, given that these emitters are embedded into a thin substrate.
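
A quick numerical illustration of that interference idea (the parametric-speaker effect): two inaudible ultrasonic carriers, passed through a quadratic nonlinearity standing in for air's nonlinear self-demodulation, yield their audible difference tone. All frequencies here are invented for the demo.

```python
import numpy as np

# Two ultrasonic carriers; a quadratic nonlinearity (a crude model of
# air's nonlinear self-demodulation) recovers the difference frequency.
FS = 192_000
t = np.arange(int(0.05 * FS)) / FS
f1, f2 = 40_000.0, 41_000.0  # inaudible carriers
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

y = x**2  # contains DC, 2*f1, 2*f2, f1+f2, and the audible f2 - f1
freqs = np.fft.rfftfreq(len(y), 1 / FS)
spectrum = np.abs(np.fft.rfft(y))
audible = (freqs > 100) & (freqs < 5_000)
print(freqs[audible][np.argmax(spectrum[audible])])  # ~1000.0 Hz
```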
lmpdev•1h ago
Would the vibration be detectable via touch?

It would be wild to integrate this into haptics

cubefox•1h ago
This is impressive. Though perhaps not very useful. Humans (and animals in general) are quite bad at precisely locating sound anyway. We only have two input channels, the right and the left ear, and any location information comes from a signal difference (loudness usually) between the two.
mjlm•1h ago
Localization of sound is primarily based on the time difference between the ears. Localization is also pretty precise, to within a few degrees under good conditions.
user_7832•1h ago
Nit: time difference, phase difference, amplitude difference, and head related transfer function (HRTF) all are involved. Different methods for different frequency localisation.

There's this excellent (German?) website that lets you play around and understand these via demos. I'll see if I can find it.

Edit: found it, it’s https://www.audiocheck.net/audiotests_stereophonicsound.php
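
To put numbers on the time-difference cue: a rough sketch using Woodworth's spherical-head approximation for interaural time difference (ITD); the head radius is a typical textbook value, not anyone's measurement.

```python
import numpy as np

# Interaural time difference (ITD) via Woodworth's spherical-head
# approximation: ITD = (a/c) * (theta + sin(theta)).
C = 343.0   # speed of sound, m/s
A = 0.0875  # head radius, m (typical textbook value)

def itd_us(theta_deg):
    """ITD in microseconds; theta = 0 is straight ahead, 90 fully lateral."""
    theta = np.radians(theta_deg)
    return (A / C) * (theta + np.sin(theta)) * 1e6

for deg in (1, 5, 45, 90):
    print(f"{deg:>2} deg -> {itd_us(deg):6.1f} us")
# ~9 us at 1 degree: the auditory system resolves sub-10-microsecond timing
```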

cubefox•1h ago
I think for stereo sound, media like music, TV, movies and video games use loudness difference instead of time difference to indicate location.
badmintonbaseba•1h ago
At least video games use way more complex models for that, AFAIK. It might be tricky to apply to mixes of recorded media, so loudness is commonly used there.
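
The loudness-difference panning cubefox describes is typically implemented as a constant-power pan law; a minimal sketch (the mapping is illustrative):

```python
import numpy as np

# Constant-power ("-3 dB center") stereo pan law: position is encoded
# purely as a loudness difference between the two channels.
def pan_gains(pan):
    """pan in [-1, 1]: -1 hard left, +1 hard right; returns (left, right)."""
    angle = (pan + 1) * np.pi / 4  # map [-1, 1] -> [0, pi/2]
    return np.cos(angle), np.sin(angle)  # L^2 + R^2 == 1 everywhere

print(pan_gains(0.0))   # (~0.707, ~0.707): center, each channel -3 dB
print(pan_gains(-1.0))  # (1.0, 0.0): hard left
```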
miguelnegrao•7m ago
Unreal Engine, the engine I'm most familiar with, implements VBAP, which is just amplitude panning when played through loudspeakers, for panning of 3D moving sources. It also allows Ambisonics recordings for ambient sound, which are then decoded into 7.1.

For headphone-based spatialization (binaural synthesis), usually virtual Ambisonics fed into HRTF convolution is used, which is not amplitude based; especially height is encoded using spectral filtering.

So loudspeakers -> mostly amplitude based; headphones -> not amplitude based.
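
For reference, 2D VBAP in the loudspeaker case reduces to solving a tiny linear system: find the pair of speaker gains whose weighted direction vectors point at the source, then normalize for constant power. The speaker angles below are illustrative, not Unreal's defaults.

```python
import numpy as np

# 2D vector base amplitude panning (VBAP) between two loudspeakers.
def unit(deg):
    r = np.radians(deg)
    return np.array([np.cos(r), np.sin(r)])

def vbap_2d(source_deg, spk_a_deg=-30.0, spk_b_deg=30.0):
    base = np.column_stack([unit(spk_a_deg), unit(spk_b_deg)])  # speaker base
    g = np.linalg.solve(base, unit(source_deg))  # gains whose sum points at source
    return g / np.linalg.norm(g)                 # normalize to constant power

print(vbap_2d(0.0))   # equal gains: phantom source dead center
print(vbap_2d(20.0))  # gain pulled toward the +30 degree speaker
```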

GenshoTikamura•47m ago
In music, simple panning works okay, but never exceeds the stereo base of a speaker arrangement. For a truly immersive listener experience, audio engineers always employ timing differences and separate spectral treatments of the stereo channels, HRTF being the cutting edge of that.
miguelnegrao•39m ago
Atmos as used in cinema rooms is, as far as I know, amplitude based (VBAP probably), and it is impressive and immersive. Immersion depends more on the number and placement of loudspeakers. Some systems do use Ambisonics, which can encode time differences as well, at least from microphone recordings.

HRTF as used in binaural synthesis is for headphones only, not relevant here.

miguelnegrao•43m ago
This is true, but a high density of loudspeakers allows the use of Wave Field Synthesis, which recreates a full physical sound field, where all 3 cues can be used.
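
The core of Wave Field Synthesis is simple to sketch, even though real systems need many more corrections: each element of a dense array replays the source signal with the propagation delay and distance attenuation from a virtual source to that element. The geometry here is invented for illustration.

```python
import numpy as np

# Minimal Wave Field Synthesis idea: per-element delays and gains that
# re-create the wavefront of a virtual point source behind the array.
C = 343.0
N, PITCH = 64, 0.01                  # 64 elements, 1 cm pitch
xs = (np.arange(N) - N / 2) * PITCH  # element x-positions along the array, m
src = np.array([0.2, -0.5])          # virtual source 0.5 m behind the array

dist = np.hypot(xs - src[0], src[1])  # element-to-source distances, m
delays = dist / C                     # replay delay per element, s
gains = 1.0 / np.sqrt(dist)           # amplitude decay toward distant elements
gains /= gains.max()
```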
badmintonbaseba•1h ago
The main utility isn't for the user to more precisely locate the sound source within the screen. Phased speaker arrays allow emitting sound in controlled directions, even multiple sound channels to different directions at the same time.
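
And the "multiple sound channels to different directions at the same time" part is just superposition: steer each program with its own per-element delay ramp and sum the drive signals. A sketch with invented parameters:

```python
import numpy as np

# Two independently steered beams on one transmit array: each beam gets
# its own per-element delay ramp; the element drives are simply summed.
C, FS = 343.0, 48_000
N, D = 16, 0.005  # elements and spacing (m)

def steered(source, theta_deg):
    """Delay `source` (1-D samples) per element to radiate toward theta_deg."""
    taus = np.arange(N) * D * np.sin(np.radians(theta_deg)) / C
    taus -= taus.min()  # keep all delays non-negative
    out = np.zeros((N, len(source)))
    for n, tau in enumerate(taus):
        shift = int(round(tau * FS))
        out[n, shift:] = source[:len(source) - shift]
    return out

t = np.arange(FS // 10) / FS
drive = steered(np.sin(2*np.pi*440*t), -30.0) + steered(np.sin(2*np.pi*880*t), +30.0)
```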
miguelnegrao•46m ago
I'm sorry, but this is not accurate at all. Using "only" two signals, humans are quite good at localizing sound sources in some directions:

> Concerning absolute localization, in frontal position, peak accuracy is observed at 1-2 degrees for localization in the horizontal plane and 3-4 degrees for localization in the vertical plane (Makous and Middlebrooks, 1990; Grothe et al., 2010; Tabry et al., 2013).

from https://www.frontiersin.org/journals/psychology/articles/10....

Humans are quite good at estimating distance too, inside rooms.

Humans use 3 cues for localization, time differences, amplitude differences and spectral cues from outer ears, head, torso, etc. They also use slight head movements to disambiguate sources where the signal differences would be the same (front and back, for instance).

I do agree that humans would not perceive the location difference between two pixels next to each other.

GenshoTikamura•41m ago
Yep, hearing is more akin to a hologram than mere stereo-pair imaging.
steelbrain•1h ago
This is a long shot but anyone know if there's an audio recording of the sound the display produced? Curious

Edit: Found it: https://advanced.onlinelibrary.wiley.com/doi/10.1002/advs.20...

Go to supporting information on that page and open up the mp4 files

IanCal•1h ago
Good find - the first video is a frequency sweep, video 2 has some music.

Edit - I'm not sure that's the same thing? The release talks about pixel based sound, the linked paper is about sticking an array of piezoelectric speakers to the back of a display.

Edit 2 - You're right, the press release is pretty poor at explaining this though. It is not the pixels emitting the sound. It's an array of something like 25 speakers arranged like pixels.

https://www.eurekalert.org/news-releases/1084704

jtthe13•1h ago
That’s super impressive. I guess that would work for a notification speaker. But for full sound I have doubts about the low frequencies. I would assume you would need a woofer anyway in a home setting.
timschmidt•1h ago
These are a thing: https://hackaday.com/2019/10/26/building-the-worlds-best-dml...
formerly_proven•1h ago
Many TFT and OLED panels today can produce sound unintentionally based on screen contents. This is mostly noticeable with repeating horizontal lines, which tend to produce whining at some fraction of the line frequency. Likely electrostriction.

This here seems to be about adding separate piezoelectric actuators to the display though, it doesn’t seem to use the panel itself.

> by embedding ultra-thin piezoelectric exciters within the OLED display frame. These piezo exciters, arranged similarly to pixels, convert electrical signals into sound vibrations without occupying external space.

amelius•1h ago
Can the video and audio be controlled independently?
atoav•2m ago
[delayed]
teekert•58m ago
I guess there are limits, like a pixel should never move more than its size, or you limit resolution (at least from some angles). So deep bass is out of the question?

It is getting very interesting: sound, possibly haptics. We already had touch, of course, including fingerprint (and visuals, of course). We are more and more able to produce rich sensory experiences for panes of glass.
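
teekert's excursion limit can be put in numbers with the baffled-piston relation: at a fixed peak displacement, on-axis pressure scales with frequency squared, so SPL drops about 12 dB per octave going down. The excursion, panel area, and distance below are guesses, not measurements.

```python
import numpy as np

# On-axis SPL of a baffled piston with fixed peak displacement XI:
# |p| = rho * omega^2 * XI * S / (2 * pi * r) -> falls 12 dB/octave downward.
RHO, R = 1.2, 0.5  # air density (kg/m^3), listening distance (m)
XI = 50e-6         # peak excursion: about one 50 um pixel pitch
S = 0.05           # radiating area: roughly a 14-inch panel (m^2)

def spl_db(f_hz):
    omega = 2 * np.pi * f_hz
    p = RHO * omega**2 * XI * S / (2 * np.pi * R)
    return 20 * np.log10(p / 20e-6)  # re 20 uPa

for f in (1000, 250, 100, 50):
    print(f"{f:>4} Hz -> {spl_db(f):5.1f} dB SPL")
# fixed excursion costs ~12 dB for each halving of frequency
```

Even if the whole panel moved coherently, the f^2 penalty is why deep bass would likely still need a conventional woofer.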

ra120271•14m ago
It would be interesting to learn in time what this means for the durability of the display. Do the vibrations induce stresses that increase component failure?

Also, how could differing parts of the screen generate different sound sources to create a soundscape tailored to the person in front of the screen (e.g. a laptop user)?

Interesting tech to watch!