frontpage.

European Council president warns US not to interfere in Europe's affairs

https://www.theguardian.com/world/2025/dec/08/europe-leaders-no-longer-deny-relationship-with-us-...
1•mohi-kalantari•2m ago•0 comments

A Government Shutdown and a 1913 Data Assumption Caused an Outage in 2025

https://heyoncall.com/blog/total-real-returns-outage-government-shutdown
1•compumike•2m ago•0 comments

Saturday Morning Network Outage

https://devnonsense.com/posts/saturday-morning-network-outage/
1•speckx•2m ago•0 comments

A New Approach to GPU Sharing: Deterministic, SLA-Based GPU Kernel Scheduling

1•medicis123•3m ago•0 comments

Health insur. premiums rose nearly 3x rate of worker earnings over past 25 years

https://theconversation.com/health-insurance-premiums-rose-nearly-3x-the-rate-of-worker-earnings-...
1•bikenaga•3m ago•0 comments

Gilt Futurism

https://planetocracy.org/p/gilt-futurism
1•rbanffy•6m ago•0 comments

Public libraries in TX, LA, and MS no longer protected by the First Amendment

https://lithub.com/public-libraries-in-tx-la-and-ms-are-no-longer-protected-by-the-first-amendment/
1•stopbulying•7m ago•0 comments

Binary patching live audio software to fix a show-stopping bug

https://jonathankeller.net/ctf/playback/
1•NobodyNada•9m ago•0 comments

Microsoft will no longer use engineers in China for Department of Defense work

https://techcrunch.com/2025/07/19/microsoft-says-it-will-no-longer-use-engineers-in-china-for-dep...
3•WaitWaitWha•9m ago•0 comments

Elon Musk appeared on EU Parliament employee list with internal email address

https://www.euractiv.com/news/mail-for-musk-elon-shows-up-on-parliament-employee-list/
4•giuliomagnifico•11m ago•0 comments

Missionary Accountants

https://postround.substack.com/p/missionary-accountants
1•akharris•12m ago•0 comments

The Area 51 of New England

https://www.nytimes.com/2025/12/06/movies/strange-arrivals-betty-barney-hill.html
4•bookofjoe•12m ago•1 comments

An MCP that lets you play DOOM in ChatGPT

https://old.reddit.com/r/mcp/comments/1pexcic/an_mcp_that_lets_you_play_doom_in_chatgpt/
1•the_arun•13m ago•0 comments

Oseberg longship, built by Vikings, completes its final voyage

https://www.npr.org/2025/10/02/nx-s1-5550149/viking-age-oseberg-longship-oslo
1•throwoutway•15m ago•0 comments

Add, delete and move data points to create a particular regression line

https://line-fitter--johnhorton.replit.app/
1•john_horton•15m ago•0 comments

The Zero Point of Narcissism: A Developmental Pathway Without Mirroring

https://zenodo.org/records/17857386
1•MyResearch•16m ago•0 comments

U.S. to allow export of H200 chips to China

https://www.semafor.com/article/12/08/2025/commerce-to-open-up-exports-of-nvidia-h200-chips-to-china
2•nextworddev•16m ago•0 comments

Ask HN: How valuable is a domain like messenger.new?

1•darkhorse13•18m ago•1 comments

Why UBI is a trap and Universal Basic Equity is the fix

https://medium.com/@augustinsayer/why-ai-may-be-marxs-inadvertent-vindication-85689d8ad733
2•gussayer•21m ago•0 comments

Thoughts of a Neopagan / the Reconstruction of Neolithic Religion

1•5wizard5•22m ago•0 comments

Pyversity with Thomas van Dongen (Springer Nature)

1•CShorten•22m ago•0 comments

Affordances: The Missing Layer in Front End Architecture

https://fractaledmind.com/2025/12/01/ui-affordances/
1•Kerrick•24m ago•0 comments

(mis)Translating the Buddha (2020)

http://neuroticgradientdescent.blogspot.com/2020/01/mistranslating-buddha.html
1•eatitraw•25m ago•1 comments

The History of Xerox – A Monochromatic Star

https://www.abortretry.fail/p/the-history-of-xerox
1•rbanffy•28m ago•0 comments

Deprecations via warnings don't work for Python libraries

https://sethmlarson.dev/deprecations-via-warnings-dont-work-for-python-libraries
2•scolby33•28m ago•1 comments

Even rentals in San Francisco have bidding wars

https://sfstandard.com/2025/12/08/sf-apartment-rentals-bidding-wars/
2•randycupertino•28m ago•1 comments

WSJ article on Tennessee munitions plant explosion exposes an industry

https://www.wsws.org/en/articles/2025/11/19/qvwk-n19.html
3•PaulHoule•28m ago•1 comments

A history of AI in two line paper summaries (part one)

https://xquant.substack.com/p/what-if-we-simply-a-history-of-ai
1•nb_quant•29m ago•0 comments

Show HN: Kernel-Cve

https://www.kernelcve.com/
1•letmetweakit•29m ago•1 comments

Is This the End of the Free World?

https://paulkrugman.substack.com/p/is-this-the-end-of-the-free-world
4•rbanffy•30m ago•1 comments

Collecting 10k hours of neuro data in our basement

https://condu.it/thought/10k-hours
13•nee1r•1h ago

Comments

ClaireBookworm•58m ago
Yoo this is sick!! Sometimes it might actually just be a data game, so huge props to them for actually collecting all that high-quality data.
ArjunPanicksser•54m ago
Makes sense that CL ends up being the best for recruiting first-time participants. Curious what other things you tried for recruitment and how useful they were?
n7ck•44m ago
The second most useful by far is Indeed, where we post an internship opportunity for participants interested in doing 10 sessions over 10 weeks. Other things that work pretty well are asking professors to send out emails to students at local universities, putting up ~300-500 fliers (mostly around universities and public transit), and posting on Nextdoor. We also just texted a lot of group chats, posted on LinkedIn, and gave out fliers and the signup link to kind of everyone we talked to in cafes and similar. We take on some participants as ambassadors as well, and pay them to refer their friends.

We tried Google/Facebook/Instagram ads, and we tried paying for some video placements. Basically none of the explicit advertising worked at all, and it wasn't worth the money. Though for what it's worth, none of us are experts in advertising, so we might have been going about it wrong -- we didn't put loads of effort into iterating once we realized it wasn't working.

mishajw•52m ago
Interesting dataset! I'm curious what kind of results you would get with just EEG, compared to multiple modalities? Why do multiple modalities end up being important?
n7ck•36m ago
EEG has very good temporal resolution but quite bad spatial resolution, and other modalities have different tradeoffs.
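
For intuition, here is a toy sketch of the kind of fusion that motivates using multiple modalities: each modality gets its own encoder and the features are concatenated before decoding. Channel counts, encoder choices, and the vocabulary size below are all hypothetical, not Conduit's actual architecture.

    # Toy multimodal fusion sketch (hypothetical, not Conduit's pipeline).
    # EEG: many time steps, few effective spatial channels (good temporal, poor spatial resolution).
    # "other": a slower modality with the opposite tradeoff.
    import torch
    import torch.nn as nn

    class FusionDecoder(nn.Module):
        def __init__(self, eeg_channels=32, other_channels=128, hidden=256, vocab=8000):
            super().__init__()
            self.eeg_enc = nn.GRU(eeg_channels, hidden, batch_first=True)      # fine temporal detail
            self.other_enc = nn.GRU(other_channels, hidden, batch_first=True)  # finer spatial detail
            self.head = nn.Linear(2 * hidden, vocab)  # decode fused features into text-token logits

        def forward(self, eeg, other):
            _, h_eeg = self.eeg_enc(eeg)        # final hidden state: (1, batch, hidden)
            _, h_other = self.other_enc(other)
            fused = torch.cat([h_eeg[-1], h_other[-1]], dim=-1)
            return self.head(fused)             # (batch, vocab)

    model = FusionDecoder()
    logits = model(torch.randn(4, 1000, 32), torch.randn(4, 50, 128))  # 1000 fast frames vs 50 slow ones
    print(logits.shape)  # torch.Size([4, 8000])
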
g413n•52m ago
What's the basis for conversion between hours of neural data and number of tokens? Is that counting the paired text tokens?
rio-popper•48m ago
Edit: oops, sorry, misread - the neural data is tokenised by our embedding model. The number of tokens per second of neural data varies and depends on the information content.
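
To make "depends on the information content" concrete, here is a toy change-detection tokenizer: it emits a token only when the signal has moved far enough from the last emitted frame, so busier stretches of data produce more tokens per second. The scheme and threshold are illustrative assumptions, not a description of the actual embedding model.

    # Toy variable-rate tokenizer (illustrative only; not Conduit's embedding model).
    import numpy as np

    def tokenize(signal, threshold=1.0):
        """signal: (time, channels) array; returns indices where tokens are emitted."""
        tokens, last = [], signal[0]
        for t, frame in enumerate(signal):
            if np.linalg.norm(frame - last) > threshold:  # enough new information since last token
                tokens.append(t)
                last = frame
        return tokens

    rng = np.random.default_rng(0)
    quiet = rng.normal(0, 0.1, size=(500, 32))  # low-variance stretch of "neural data"
    busy = rng.normal(0, 1.0, size=(500, 32))   # high-variance stretch
    print(len(tokenize(quiet)), len(tokenize(busy)))  # the busy stretch yields far more tokens
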
n7ck•51m ago
Hey I'm Nick, and I originally came to Conduit as a data participant! After my session, I started asking questions about the setup to the people working there, and apparently I asked good questions, so they hired me.

Since I joined, we've gone from <1k hours to >10k hours, and I've been really excited by how much our whole setup has changed. I've been implementing lots of improvements to the whole data pipeline and the operations side. Now that we train lots of models on the data, the model results also inform how we collect data (e.g. we care a lot less about noise now that we have more data).

We're definitely still improving the whole system, but at this point, we've learned a lot that I wish someone had told us when we started, so we thought we'd share it in case any of you are doing human data collection. We're all also very curious to get any feedback from the community!

Gormisdomai•45m ago
The example sentences generated “only from neural data” at the top of this article seem surprisingly accurate to me, like, not exact matches but much better than what I would expect even from 10k hours:

“the room seemed colder” -> “there was a breeze even a gentle gust”
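
One way to put a number on "surprisingly accurate" is to score the semantic similarity between the stimulus sentence and the decoded text with an off-the-shelf sentence-embedding model. The model and metric below are illustrative choices, not anything from the article.

    # Rough semantic-similarity check between target and decoded sentences.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any generic sentence encoder would do
    target = "the room seemed colder"
    decoded = "there was a breeze even a gentle gust"
    emb = model.encode([target, decoded])
    print(float(util.cos_sim(emb[0], emb[1])))  # higher = closer in meaning; an exact match would be ~1.0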

ninapanickssery•44m ago
Yeah, agreed
ninapanickssery•45m ago
This is very cool, thanks for writing about your setup in such detail! It’s impressive that you can predict stuff from this noninvasive data. Are there similar existing datasets, or is this the first of its kind?
ag8•39m ago
This is a cool setup, but naively it feels like it would require hundreds of thousands of hours of data to train a decent generalizable model that would be useful for consumers. Are there plans to scale this up, or is there reason to believe that tens of thousands of hours are enough?
richardfeynman•15m ago
This is an interesting dataset to collect, and I wonder whether there will be applications for it beyond what you're currently thinking.

A couple of questions: What's the relationship between the number of hours of neurodata you collect and the quality of your predictions? Does it help to get less data from more people, or more data from fewer people?

n7ck•2m ago
1. The predictions get better with more data - and we don't seem to be anywhere near diminishing returns.
2. The thing we care about is generalisation between people. For this, less data from more people is much better.
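
For anyone wondering what "not anywhere near diminishing returns" looks like operationally: one common check is to fit a power law to held-out error versus hours of data and see whether the curve is still dropping at the current scale. The numbers below are made up purely to show the shape of the calculation.

    # Sketch of a data-scaling check (all numbers hypothetical).
    # Power law error ≈ a * hours^b (with b < 0) is linear in log-log space.
    import numpy as np

    hours = np.array([100, 300, 1000, 3000, 10000], dtype=float)
    error = np.array([0.92, 0.81, 0.70, 0.61, 0.53])  # hypothetical held-out error at each scale

    b, log_a = np.polyfit(np.log(hours), np.log(error), 1)  # slope b, intercept log(a)
    print(f"fitted exponent: {b:.3f}")  # still clearly negative => still improving with data
    print(f"extrapolated error at 100k hours: {np.exp(log_a) * 100_000**b:.3f}")
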
wiwillia•14m ago
Really interested in how accuracy improves with the scale of the data set. Non-invasive thought-to-action would be a whole new interaction paradigm.
cpeterson42•6m ago
Wild world we live in