frontpage.

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
1•Bender•2m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•2m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•4m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•4m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•5m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•5m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•5m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
2•Bender•6m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•8m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•8m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•10m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•13m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•13m ago•1 comments

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•14m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•17m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•21m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•21m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•22m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•22m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•24m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•26m ago•1 comments

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•26m ago•1 comments

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•31m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•32m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•33m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
2•Brajeshwar•33m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•34m ago•1 comments

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•35m ago•1 comments

NSA detected phone call between foreign intelligence and person close to Trump

https://www.theguardian.com/us-news/2026/feb/07/nsa-foreign-intelligence-trump-whistleblower
13•c420•35m ago•2 comments

How to Fake a Robotics Result

https://itcanthink.substack.com/p/how-to-fake-a-robotics-result
1•ai_critic•36m ago•0 comments

Show HN: Ossia score – A sequencer for audio-visual artists

https://github.com/ossia/score
95•jcelerier•7mo ago

Comments

btown•7mo ago
This is really cool! Live music, game shows, holiday light displays, and anything in between can hugely benefit from this kind of tech.

The whole Who Wants To Be a Millionaire sequence comes to mind (where, on an arbitrarily timed cue, the lights physically rotate downwards, synchronized with the electronic score and floor panel animations, to bring pressure onto the contestant). And from a bit of research, they needed to do a fair amount of work for that, which arguably could have been "orchestrated" from software like this: https://www.tpimeamagazine.com/robe-rig-lights-who-wants-to-...

> Synching the lighting consoles to receive MIDI triggers from the show’s gaming computer which activates specific commands for sound and video related to screen content was an intense task that took plenty of work and lateral thinking. Additionally, more signals from the lighting console were used to access the media server operating a series of pixel SMD effects inbuilt in the set – so there was a lot of synching happening!

I'm also aware of software like https://lightkeyapp.com/en - but ossia score seems to treat temporal flexibility/coding/behavior as its primary focus, whereas Lightkey centers on the physical layout of the lighting at any given time. Arguably the feature sets should merge - Blender's ability to have multiple views that emphasize or de-emphasize the timeline comes to mind!

These things shouldn't be blocked behind massive investments. Anyone who can put a few cheap tablets on stands and plug in a MIDI keyboard should have best-in-class visualization capabilities, and be able to iterate on that work as more professional hardware becomes available. It's one of the things I love about open source.

gsck•7mo ago
You wouldn't really use something like this to control the video and lighting. The power of a tool like this is the ability to generate content or modify it on the fly.

Judging by the Robe press release there's a good chance the lighting was controlled by an Avolites console; Avolites is owned by Robe, and their desks also have the ability to control video.

Lightkey is more hobbyist-level control software; it's a very visual application. If you want to see the more professional stuff, look up MA Lighting's GrandMA2/3 consoles, Avolites' Titan software or ETC's (Electronic Theatre Controls) Eos. That software is more akin to a glorified spreadsheet/command-line interface than a nice and approachable interface like Lightkey's.

HelloUsername•7mo ago
Previous Show HN, 13-sept-2018: https://news.ycombinator.com/item?id=17982771

Related discussion, 26-sept-2020: https://news.ycombinator.com/item?id=24600824

brcmthrowaway•7mo ago
Chataigne ftw
jcelerier•7mo ago
Chataigne is a really good software but I'm not sure they're too comparable...

Nowadays ossia is more about the content creation part, with a whole graphics pipeline amenable to VJ and real-time audioreactive visuals, where you can for instance have AI models like streamdiffusion & the like (https://streamable.com/zfrbo3) or just play with VST plug-ins and drum machines to make beats (https://streamable.com/fc02so)

All the recent artworks I've worked on involving ossia have used it exclusively, for instance for light, sound and video design, while if I'm not mistaken Chataigne is more commonly used in conjunction with software such as Live or TouchDesigner.
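For context, the tools mentioned in this thread (ossia, Chataigne, Live, TouchDesigner) commonly talk to each other over OSC (Open Sound Control). As a rough stdlib-only sketch of what that interop looks like on the wire — the address pattern, port, and parameter value here are made-up examples, not anything from ossia's actual namespace:

```python
import socket
import struct

def osc_string(s: str) -> bytes:
    """OSC strings are NUL-terminated and zero-padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message with float32 arguments.

    Layout: padded address pattern, padded type-tag string
    (',' plus one 'f' per float), then big-endian IEEE-754 payloads.
    """
    tags = "," + "f" * len(args)
    payload = b"".join(struct.pack(">f", a) for a in args)
    return osc_string(address) + osc_string(tags) + payload

# Fire-and-forget over UDP (host, port, and address are illustrative):
msg = osc_message("/dimmer/1/level", 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))
```

This is just the core message framing from the OSC 1.0 spec; real deployments would typically use a library (python-osc, or ossia's own bindings) rather than hand-rolling packets.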

brcmthrowaway•7mo ago
How does it compare to Millumin?
jcelerier•7mo ago
Well, Millumin does not run on Linux so I cannot really use it for starters aha.

It's very good for video mapping but for instance wouldn't allow you to do any kind of remotely advanced audio effects, such as loading a VST to apply to your sound or playing MIDI instruments. From the docs it only supports mono, stereo, 5.1 or 7.1; in contrast, ossia has been used to drive multiple many-channel spatialized sound artworks. You can trivially use the library of Faust spatialisation tools, for instance, to do ambisonics, VBAP etc. with either graphical utilities or through generative means with some simple scripting.

I'm not sure Millumin's timeline allows you to do something like this either, which has multiple kinds of interactions designed in a visual language, or state-machine-like behaviours: https://ossia.io/assets/feature-interaction.gif

rapjr9•7mo ago
I think this general class of software is called Show Control. There are commercial and open source projects that also do it in some form:

https://en.wikipedia.org/wiki/MIDI_Show_Control

https://v-control.com/

https://qlab.app/

https://troikatronix.com/

https://derivative.ca/

Plus a variety of Video DJ platforms like VDMX, Arkaos, GrandVJ, which have some of this functionality, and then a lot of free and commercial DMX-512 lighting control software and hardware that can be interfaced to these show control systems. Q-Lab is widely used in the stage show industry. Chataigne is one I hadn't heard of.
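For a concrete sense of the MIDI Show Control protocol linked above, here is a minimal Python sketch (stdlib only) that builds an MSC "GO" SysEx frame — the kind of cue trigger a show computer would send a lighting console. The device ID and cue number are arbitrary examples:

```python
def msc_go(device_id: int, cue: str, command_format: int = 0x01) -> bytes:
    """Build a MIDI Show Control GO message as a SysEx byte string.

    Frame layout per the MSC spec:
      F0 7F <device_id> 02 <command_format> <command> <cue as ASCII> F7
    where command_format 0x01 is "Lighting (General)" and command
    0x01 is GO.
    """
    if not 0 <= device_id <= 0x7F:
        raise ValueError("device_id must fit in 7 bits")
    header = bytes([0xF0, 0x7F, device_id, 0x02, command_format, 0x01])
    return header + cue.encode("ascii") + bytes([0xF7])

# Trigger cue 5.1 on the console listening as device ID 0:
print(msc_go(0x00, "5.1").hex(" "))  # f0 7f 00 02 01 01 35 2e 31 f7
```

Actually putting this on a MIDI port would go through something like python-rtmidi; the frame itself is the interoperable part.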

gsck•7mo ago
QLab is more akin to PowerPoint in its use; it can do some really basic "projection mapping" but it's mostly just keystoning and audio playback. (You can also do the usual OSC/MIDI/DMX stuff, but I've never seen it used or used it myself.)

Never actually heard of v-control; its website is rather light on details.

Isadora and TouchDesigner are very much in the same vein as ossia.

jcelerier•7mo ago
I'm posting here to celebrate a few fresh things from today:

1) ossia score 3.5.3 was released :)

2) We're going to give a lab on interactive graphics on embedded platforms at SIGGRAPH in Vancouver in August, which teaches how to do real-time visuals with interaction on Raspberry Pi:

https://s2025.conference-schedule.org/presentation/?id=gensu...

3) The Ars Electronica prize results were announced today and two works using ossia-max, our Max/MSP binding, got featured at Ars Electronica 2025:

- Organism + Excitable Chaos by Navid Navab and Garnet Willis got the Digital Musics & Sound Art Golden Nica

https://calls.ars.electronica.art/2025/prix/winners/16969/

- On Air by Peter van Haaften, Michael Montanaro and Garnet Willis got a Digital Musics & Sound Art honorary mention

https://calls.ars.electronica.art/2025/prix/winners/17358/

Earlier this year, ossia was also featured at the Venice Biennale, where it was used for the Pavilion of Ireland: https://www.innosonix.de/pavilion-of-ireland-at-the-venice-b...

merksoftworks•7mo ago
I love this sort of thing. I wish there were a better alternative to ISF[1]; it's quickly showing its age. The kind of GPU-sandboxed graph construction that this enables would be really powerful with the right "linker". I'm thinking about drafting a proposal for wesl[2] to have a more ergonomic reflection and metadata system to make this kind of quick and scrappy pipeline construction feel first class in shader tooling. Slang has something like this, and so do GDShader and the shader tooling for Unity.

[1]: https://isf.video/ [2]: https://github.com/wgsl-tooling-wg/wesl-rs

jcelerier•7mo ago
heya! actually this has been in the back of my mind for quite some time, as it's sorely needed. My current plan involves leveraging the C++ JIT support in ossia to implement shader compilation from C++ operations, as this is already something somewhat easy to do. If you want we can get in touch, I'd love to talk about it! jmcelerier at sat qc ca
spacechild1•7mo ago
Wow, ossia has come a long way! Pretty impressive for a solo-dev project I have to say :)