
Show HN: System to have Claude compose and perform a techno track end-to-end

https://github.com/hughes7370/AbletonComposer
2•digitcatphd•1w ago
I've been fascinated by a fundamental gap in AI music: current models (Suno, Udio) generate audio via sequence prediction. They pattern-match existing waveforms but don't "know" music theory, so you can't get stems, adjust the mix, or modify the arrangement logic.

I wanted to see if an LLM could compose music from first principles—understanding scales, chord progressions, and arrangement theory—and control a DAW to generate the audio.

Loom Demo: https://www.loom.com/share/8f55136085a24ed1bc79acb5cdda194c

The Stack

Ableton Live 12: The DAW engine.

Ableton MCP (Model Context Protocol): Forked and extended to allow Claude to manipulate MIDI, clips, and devices.

Claude 3.5 Sonnet: The "Composer," equipped with ~12 custom skill files covering arrangement, EQ, and sound design.

Gemini: The feedback loop. Used to analyze rendered audio (via stem separation) and provide critique for iteration.

Python: 1,700+ lines of performance scripts.

The Engineering Challenges

1. The Sample Library Problem

Techno relies on curated samples, not just synthesis. But LLMs can't "hear" a sample library to pick the right kick or hat.

I built a sample analysis system that pre-processes the library and generates JSON profiles. This allows Claude to query samples by spectral characteristics rather than just filenames.

JSON:

{
  "file_name": "001_Stab_Low.wav",
  "bpm": 126.0,
  "key": "N/A (atonal)",
  "spectral_centroid_mean": 297.2,
  "brightness": 0.04,
  "warmth": 1.0,
  "texture_tags": ["dark", "warm", "soft-attack", "distorted"],
  "category": "bass"
}
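To show the idea of querying by spectral characteristics rather than filenames, here is a minimal sketch. The field names follow the JSON profile above; the in-memory index, the second sample entry, and the `query_samples` function are illustrative assumptions, not the repo's actual API.

```python
# Hypothetical in-memory index over the pre-generated JSON profiles.
# The first entry mirrors the profile shown above; the second is invented.
PROFILES = [
    {"file_name": "001_Stab_Low.wav", "brightness": 0.04, "warmth": 1.0,
     "texture_tags": ["dark", "warm", "soft-attack", "distorted"], "category": "bass"},
    {"file_name": "014_Hat_Bright.wav", "brightness": 0.82, "warmth": 0.1,
     "texture_tags": ["bright", "crisp"], "category": "hats"},
]

def query_samples(category, min_brightness=0.0, max_brightness=1.0, tags=()):
    """Return file names matching spectral constraints instead of name patterns."""
    return [
        p["file_name"] for p in PROFILES
        if p["category"] == category
        and min_brightness <= p["brightness"] <= max_brightness
        and all(t in p["texture_tags"] for t in tags)
    ]

print(query_samples("bass", max_brightness=0.1, tags=("dark",)))
```

A query like this is what lets the LLM ask for "a dark, low-brightness bass stab" without ever hearing the audio.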

2. The Performance Layer (Polymetrics)

Ableton's Session View handles loops, but a track needs transitions. I didn't want static blocks; I wanted a live performance.

I wrote a Python performance engine that creates a real-time automation script. It handles volume fading, spectral carving (ducking frequencies when elements collide), and—most importantly—polymetric cycling to create hypnotic phasing:

Python:

# Polymetric cycle lengths in beats
POLY = {
    "STAB": 7,      # Cycles every 7 beats
    "RIDE": 5,      # Cycles every 5 beats
    "DING": 11,     # Cycles every 11 beats
    "ARPEGGIO": 13  # Cycles every 13 beats
}
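With coprime cycle lengths like these, deciding which elements restart on a given beat reduces to a modulo check. A sketch of the scheduling idea (the `active_elements` function is illustrative, not the actual performance engine):

```python
# Polymetric cycle lengths in beats, as in the dict above.
POLY = {"STAB": 7, "RIDE": 5, "DING": 11, "ARPEGGIO": 13}

def active_elements(beat):
    """Elements whose cycle restarts on this beat."""
    return [name for name, cycle in POLY.items() if beat % cycle == 0]

# Because 7, 5, 11 and 13 are pairwise coprime, all four cycles only
# realign every lcm(7, 5, 11, 13) = 5005 beats, which is what produces
# the slowly shifting, hypnotic phasing.
print(active_elements(35))  # STAB (7) and RIDE (5) coincide at beat 35
```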

The Pipeline

Planning: Claude analyzes target styles (e.g., Ben Klock, Surgeon) and generates an arrangement map (Intro -> Peak -> Outro).

Setup: Spawns 19+ tracks with specific instrument racks.

Generation: Python scripts generate MIDI patterns (e.g., 256 events following G minor with velocity curves).

Performance: The system "plays" the track, automating parameters in real-time based on the energy curve logic.
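The energy-curve logic driving that last step can be sketched as mapping arrangement position to a target energy level, then deriving per-parameter automation from it. The section boundaries, curve shape, and parameter mapping below are assumptions for illustration, not the repo's actual values:

```python
# Hypothetical energy curve over a 64-bar arrangement: Intro -> Peak -> Outro.
# Each entry is (start_bar, section_name, target_energy in [0, 1]).
SECTIONS = [(0, "intro", 0.3), (16, "build", 0.6), (32, "peak", 1.0), (48, "outro", 0.4)]

def energy_at(bar):
    """Piecewise-constant target energy for the section containing this bar."""
    level = SECTIONS[0][2]
    for start, _name, energy in SECTIONS:
        if bar >= start:
            level = energy
    return level

def automation(bar):
    """Derive parameter targets from the energy value (mapping is illustrative)."""
    e = energy_at(bar)
    return {"master_volume_db": -12 + 12 * e,    # quieter intro, full at peak
            "filter_cutoff_hz": 200 + 7800 * e}  # open the filter with energy

print(automation(40))  # peak section: full volume, open filter
```

In the real system the engine would interpolate between sections and write these values to Ableton parameters in real time rather than returning a dict.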

Results & Learnings

The output is recognizably techno. The mix is balanced, and the structure is logical. However, while the system creates music that is theoretically correct, it currently lacks the intuition to break rules in interesting ways—the "happy accidents" of human production are missing.

I suspect the next step for symbolic music generation is modeling "taste" as a constraint function rather than just adhering to theory.