
Reverse engineering GitHub Actions cache to make it fast

https://www.blacksmith.sh/blog/cache
54•tsaifu•1h ago•12 comments

Using Radicle CI

https://radicle.xyz/2025/07/23/using-radicle-ci-for-development
40•aiw1nt3rs•2h ago•2 comments

Proxmox Donates €10k to the Perl and Raku Foundation

https://www.perl.com/article/proxmox-donates-to-tprf/
6•oalders•21m ago•0 comments

Cerebras launches Qwen3-235B, achieving 1.5k tokens per second

https://www.cerebras.ai/press-release/cerebras-launches-qwen3-235b-world-s-fastest-frontier-ai-model-with-full-131k-context-support
233•mihau•4h ago•88 comments

Stop Building AI Tools Backwards

https://hazelweakly.me/blog/stop-building-ai-tools-backwards/
4•eternalreturn•13m ago•0 comments

Manticore Search: Fast, efficient, drop-in replacement for Elasticsearch

https://github.com/manticoresoftware/manticoresearch
25•klaussilveira•1h ago•7 comments

Reverse Engineering the GHA Cache to Improve Performance

https://depot.dev/blog/github-actions-cache
3•bootlegbilly•5m ago•0 comments

Geocities Backgrounds

https://pixelmoondust.neocities.org/archives/archivedtiles
77•marcodiego•2d ago•16 comments

Cops say criminals use a Google Pixel with GrapheneOS – I say that's freedom

https://www.androidauthority.com/why-i-use-grapheneos-on-pixel-3575477/
166•pabs3•1h ago•101 comments

20 years of Linux on the Desktop (part 4)

https://ploum.net/2025-07-23-linux_desktop4.html
73•todsacerdoti•2h ago•42 comments

The Surprising gRPC Client Bottleneck in Low-Latency Networks

https://blog.ydb.tech/the-surprising-grpc-client-bottleneck-in-low-latency-networks-and-how-to-get-around-it-69d6977a1d02
25•eivanov89•1h ago•1 comment

QuestDB (YC S20) Is Hiring a Technical Content Lead

https://questdb.com/careers/technical-content-lead/
1•nhourcard•3h ago

Uber will let women drivers and riders request to avoid being paired with men

https://www.cnbc.com/2025/07/23/uber-women-drivers-riders.html
27•ortusdux•39m ago•23 comments

Reversing a Fingerprint Reader Protocol (2021)

https://blog.th0m.as/misc/fingerprint-reversing/
21•thejj100100•3d ago•3 comments

Qwen3-Coder: Agentic coding in the world

https://qwenlm.github.io/blog/qwen3-coder/
679•danielhanchen•17h ago•300 comments

AI groups spend to replace low-cost 'data labellers' with high-paid experts

https://www.ft.com/content/e17647f0-4c3b-49b4-a031-b56158bbb3b8
111•eisa01•3d ago•47 comments

SQL Injection as a Feature

https://idiallo.com/blog/sql-injection-as-a-feature
48•foxfired•1d ago•18 comments

Extending Emacs with Fennel (2024)

https://andreyor.st/posts/2024-12-20-extending-emacs-with-fennel/
117•Bogdanp•9h ago•22 comments

I'm Unsatisfied with Easing Functions

https://www.davepagurek.com/blog/easing-functions/
32•surprisetalk•3d ago•20 comments

When Is WebAssembly Going to Get DOM Support?

https://queue.acm.org/detail.cfm?id=3746174
100•jazzypants•9h ago•100 comments

SDR42E1 modulates Vitamin D absorption and cancer pathogenesis

https://www.frontiersin.org/journals/endocrinology/articles/10.3389/fendo.2025.1585859/full
24•bookofjoe•2h ago•2 comments

Rescuing two PDP-11s from a former British Telecom underground shelter (2023)

https://forum.vcfed.org/index.php?threads/rescuing-two-pdp-11-systems-in-uk-from-a-former-big-british-telecom-underground-shelter-in-central-london.1244723/page-2
91•mhh__•9h ago•14 comments

Checking Out CPython 3.14's remote debugging protocol

https://rtpg.co/2025/06/28/checking-out-sys-remote-exec/
36•ingve•5h ago•8 comments

Herringbone Tiles

https://nothings.org/gamedev/herringbone/herringbone_tiles.html
4•smusamashah•2d ago•0 comments

Mathematics for Computer Science (2024)

https://ocw.mit.edu/courses/6-1200j-mathematics-for-computer-science-spring-2024/
215•vismit2000•11h ago•37 comments

More than you wanted to know about how Game Boy cartridges work

https://abc.decontextualize.com/more-than-you-wanted-to-know/
372•todsacerdoti•19h ago•41 comments

AI coding agents are removing programming language barriers

https://railsatscale.com/2025-07-19-ai-coding-agents-are-removing-programming-language-barriers/
77•Bogdanp•11h ago•85 comments

Brave blocks Microsoft Recall by default

https://brave.com/privacy-updates/35-block-recall/
204•XzetaU8•5h ago•183 comments

Algorithms for Modern Processor Architectures

https://lemire.github.io/talks/2025/sea/sea2025.html
241•matt_d•16h ago•42 comments

Why you can't color calibrate deep space photos

https://maurycyz.com/misc/cc/
184•LorenDB•14h ago•82 comments

AI groups spend to replace low-cost 'data labellers' with high-paid experts

https://www.ft.com/content/e17647f0-4c3b-49b4-a031-b56158bbb3b8
111•eisa01•3d ago

Comments

aspenmayer•2d ago
https://archive.is/dkZVy
Melonololoti•7h ago
Yep, it continues the gathering of more and better data.

AI is not hype. We have actually started to do something with all the data, and this process will not stop soon.

The RL that is now happening through human feedback alone (thumbs up/down) is massive.

KaiserPro•6h ago
It was always the case. We only managed to make a decent model once we created a decent dataset.

This meant making a rich synthetic dataset first, to pre-train the model, before fine tuning on real, expensive data to get the best results.

But this was always the case.

rtrgrd•4h ago
I thought human preferences were typically considered a noisy reward signal.
TheAceOfHearts•6h ago
It would be great if some of these datasets were free and opened up for public use. Otherwise it seems like you end up duplicating a lot of busywork just for multiple companies to farm more money. Maybe some of the European initiatives related to AI will end up including the creation of more open datasets.

Then again, maybe we're still operating from a framework where the dataset is part of your moat. It seems like such a way of thinking will severely limit the sources of innovation to just a few big labs.

KaiserPro•6h ago
> operating from a framework where the dataset is part of your moat

Very much this. It's the dataset that shapes the model; the model is a product of the dataset, rather than the other way around (mind you, synthetic datasets are different...)

andy_ppp•6h ago
Why would companies paying top dollar to refine and create high quality datasets give them away for free?
flir•5h ago
Same reason they give open source contributions away for free. Hardware companies attempting to commoditize their complement. I think the org best placed to get strategic advantage from releasing high quality data sets might be Nvidia.
charlieyu1•5h ago
There are some good datasets available for free though, e.g. HLE. Although I'm not sure if they are marketing gimmicks.
murukesh_s•5h ago
Well, that was the idea of "open" AI, wasn't it? [1]

[1] https://web.archive.org/web/20190224031626/https://blog.open...

delfinom•3h ago
ClosedAI gonna ClosedAI
sumedh•3h ago
Check the date.

This was published before anyone knew that running an AI company would be very, very expensive.

some_random•2h ago
I feel like that was by far the most predictable part of running an AI company.
gexla•6h ago
Right, and they pay a lot of money for this data. I know someone who does this, and one prompt evaluation could go through multiple rounds and reviews that could end up generating $150+ in payouts, and that's just what the workers receive. But that's not quite what the article is talking about. Each of these companies does things a bit differently.
NitpickLawyer•4h ago
> Maybe some of the European initiatives related to AI will end up including the creation of more open datasets.

The EU has started the process of opening discussions aiming to set the stage for opportunities to arise on facilitating talks looking forward to identify key strategies of initiating cooperation between member states that will enable vast and encompassing meetings generating avenues of reaching top level multi-lateral accords on passing legislation covering the process of processing processes while preparing for the moment when such processes will become processable in the process of processing such processes.

#justeuthings :)

illegalmemory•2h ago
This could work with a Wikipedia-like model. It's very difficult to pull off, but a next-generation Wikipedia would look like this.
ripped_britches•4m ago
Don't worry - the labs will train on this expert data and then everyone will just distill their models. Or, now the model itself can be an expert annotator.
panabee•6h ago
This is long overdue for biomedicine.

Even Google DeepMind's relabeled MedQA dataset, created for MedGemini in 2024, has flaws.

Many healthcare datasets/benchmarks contain dirty data because accuracy incentives are absent and few annotators are qualified.

We had to pay Stanford MDs to annotate 900 new questions to evaluate frontier models and will release these as open source on Hugging Face for anyone to use. They cover VQA and specialties like neurology, pediatrics, and psychiatry.

If labs want early access, please reach out. (Info in profile.) We are finalizing the dataset format.

Unlike general LLMs, where noise is tolerable and sometimes even desirable, training on incorrect/outdated information may cause clinical errors, misfolded proteins, or drugs with off-target effects.

Complicating matters, shifting medical facts may invalidate training data and model knowledge. What was true last year may be false today. For instance, in April 2024 the U.S. Preventive Services Task Force reversed its longstanding advice and now urges biennial mammograms starting at age 40 -- down from the previous benchmark of 50 -- for average-risk women, citing rising breast-cancer incidence in younger patients.

empiko•5h ago
This is true for every subfield I have been working on for the past 10 years. The dirty secret of ML research is that Sturgeon's law applies to datasets as well: 90% of data out there is crap. I have seen NLP datasets with hundreds of citations that were obviously worthless as soon as you put in the "effort" and actually looked at the samples.
panabee•5h ago
100% agreed. I also advise you not to read many cancer papers, particularly ones investigating viruses and cancer. You would be horrified.

(To clarify: this is not the fault of scientists. This is a byproduct of a severely broken system with the wrong incentives, which encourages publication of papers and not discovery of truth. Hug cancer researchers. They have accomplished an incredible amount while being handcuffed and tasked with decoding the most complex operating system ever designed.)

briandear•1h ago
> this is not the fault of scientists. This is a byproduct of a severely broken system with the wrong incentives, which encourages publication of papers and not discovery of truth

Are scientists not writing those papers? There may be bad incentives, but scientists are responding to those incentives.

eszed•47m ago
That is axiomatically true, but both harsh and useless, given that (as I understand from HN articles and comments) the choice is "play the publishing game as it is" vs "don't be a scientist anymore".
edwardbernays•31m ago
Scientists are responding to the incentives of a) wanting to do science, and b) doing it for the public benefit. There was one game in town for this: the American public grant scheme.

This game is being undermined and destroyed by infamous anti-vaxxer, non-medical-expert, non-public-policy-expert RFK Jr.[1] The disastrous cuts to the NIH's public grant scheme are likely to amount to $8,200,000,000 ($8.2 billion USD) in terms of years of life lost.[2]

So, should scientists not write those papers? Should they not do science for public benefit? Those are the only ways not to respond to the structure of the American public grant scheme. It seems to me that, if we want better outcomes, we should make incremental improvements to the institutions surrounding the public grant scheme. That seems far more sensible than installing Bobby Brainworms to burn it all down.

[1] https://youtu.be/HqI_z1OcenQ?si=ZtlffV6N1NuH5PYQ

[2] https://jamanetwork.com/journals/jama-health-forum/fullartic...

panabee•14m ago
Valid critique, but one addressing a problem above the ML layer, at the human layer. :)

That said, your comment has an implication: in which fields can we trust data if incentives are poor?

For instance, many Alzheimer's papers were undermined after journalists unmasked foundational research as academic fraud. Which conclusions are reliable and which are questionable? Who should decide? Can we design model architectures and training to grapple with this messy reality?

These are hard questions.

ML/AI should help shield future generations of scientists from poor incentives by maximizing experimental transparency and reproducibility.

Apt quote from Supreme Court Justice Louis Brandeis: "Sunlight is the best disinfectant."

PaulHoule•45m ago
If you download datasets for classification from Kaggle or CIFAR, or for search ranking from TREC, it is the same. Typically 1-2% of the judgements in that kind of dataset are just wrong, so if you are aiming for the last few points of AUC you have to confront that.
panabee•5h ago
To elaborate, errors go beyond data and reach into model design. Two simple examples:

1. Nucleotides are a form of tokenization and encode bias. They're not as raw as people assume. For example, classic FASTA treats modified and canonical C as identical. Differences may alter gene expression -- akin to "polish" vs. "Polish".

2. Sickle-cell anemia and other diseases are linked to single-nucleotide differences. These single nucleotide polymorphisms (SNPs) mean that hard attention matters for DNA, and single-base resolution is non-negotiable for certain healthcare applications. Latent models have thrived in text-to-image and language, but researchers cannot blindly carry these assumptions into healthcare. (A toy sketch below illustrates both points.)

There are so many open questions in biomedical AI. In our experience, confronting them has prompted (pun intended) better inductive biases when designing other types of models.

We need way more people thinking about biomedical AI.

bjourne•3h ago
What if there is significant disagreement within the medical profession itself? For example, isotretinoin is prescribed for acne in many countries, but in other countries the drug is banned or access is restricted due to adverse side effects.
panabee•56m ago
If you agree that ML starts with philosophy, not statistics, this is but one example highlighting how biomedicine helps model development, LLMs included.

Every fact is born an opinion.

This challenge exists in most, if not all, spheres of life.

K0balt•2h ago
I think an often overlooked aspect of training-data curation is the value of accurate but oblique data. Much of the "emergent capabilities" of LLMs comes from information embedded in the data: implied or inferred semantic information that is not readily obvious. Extracting this highly useful information, in contrast to specific factoids, requires a lot of off-axis images of the problem space, like a CT scan of the field of interest. The value of adjacent, oblique datasets should not be underestimated.
TZubiri•2h ago
I noticed this when adding citations to Wikipedia.

You may find a definition of what a "skyscraper" is from some hyperfocused association, but you'll get a bias towards a definite measurement like "skyscrapers are buildings between 700m and 3500m tall", which might be useful for some data-mining project but is not at all what people mean by the word.

The actual definition is not in any single source but in the way the word is used across sources, like "the Manhattan skyscraper is one of the most iconic skyscrapers"; in the aggregate you learn what it is, but that isn't very citable on its own, which gives WP that pedantic bias.

TZubiri•2h ago
Isn't labelling medical data for AI illegal, as unlicensed medical practice?

Same thing with law data

bethekidyouwant•1h ago
Illegal?
iwontberude•54m ago
Paralegals and medical assistants don’t need licenses
mh-•15m ago
No.
techterrier•5h ago
The latest in a long tradition, it used to be that you'd have to teach the offshore person how to do your job, so they could replace you for cheaper. Now we are just teaching the robots instead.
skeezyboy•4h ago
the wageslave realises he's a wageslave
verisimi•5h ago
This is it - this is the answer to the AI takeover.

Get an AI to autogenerate lots of crap! Reddit, HN comments, false datasets, anything!

Cthulhu_•58m ago
That's just spam / more dead internet theory, and there are, or will be, companies that curate datasets and filter out generated stuff / spam, or hand-pick high-quality data.
vidarh•5h ago
I've done review and annotation work for two providers in this space, and so regularly get approached by providers looking for specialists with MScs or PhDs...

"High-paid" is an exaggeration for many of these, but certainly a small subset of people will make decent money on it.

At one provider I was, as an exception, paid 6x their going rate because they struggled to get people skilled enough at the high end to accept their regular rate, mostly to audit and review work done by others. I have no illusion I was the only one paid above their stated range. I got paid well, but even at 6x their regular rate I only got paid well because they estimated the number of tasks per hour and I was able to exceed that estimate by a considerable margin; if their estimate had matched my actual speed, I'd have just barely reached the low end of my regular rate.

But it's clear there's a pyramid of work, and a sustained effort to create processes that allow the bulk of the work to be done by low-cost labellers and then push smaller and smaller subsets of the data up to more expensive experts, as well as tooling to cut down the amount of time experts spend, e.g. by starting with synthetic data (including model-generated reviews of model-generated responses).

I don't think I was at the top of that pyramid: the provider I did work for didn't handle many prompts that required deep specialist knowledge (though I did get to exercise my long-dormant maths and physics knowledge, which doesn't say too much). I think most of what we addressed would at most need people with MSc-level skills in STEM subjects. And so I'm sure there are a few more layers on the pyramid handling PhD-level complexity data. But from what I'm seeing from hiring managers contacting me, I get the impression the pay scale for them isn't that much higher (with the obvious caveat, given what I mentioned above, that there almost certainly are people getting paid high multiples of the stated scale).

Some of these pipelines of work are highly complex, often including multiple stages of reviews, sometimes with multiple "competing" annotators in parallel feeding into selection and review stages.

charlieyu1•4h ago
I'll believe it when it happens. A major AI company got rid of an expert team last year because they thought it was too expensive.
quantum_state•4h ago
It is the expert system, evolved…
cryptokush•4h ago
welcome to macrodata refinement
joshdavham•1h ago
I was literally just reached out to this morning about a contract job for one of these "high quality datasets". They specifically wanted Python programmers who've contributed to popular repos (I maintain one repository with approx. 300 stars).

The rate they offered was between $50 and $90 per hour, so significantly higher than what I'd think low-cost data labellers are getting.

Needless to say, I marked them as spam. Harvesting emails through GitHub is dirty, imo. I was also sad that the recruiter was acting on behalf of a YC company.

apical_dendrite•18m ago
The latest offer I saw was $150-$210 an hour for 20hrs/week. I didn't pursue it so I don't know if that's what people actually make, but it's an interesting data point.
SoftTalker•54m ago
Isn't this ignoring the "bitter lesson?"

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

TrackerFF•44m ago
I don't know if it's related, but I've noticed an uptick in cold calls / approaches for consulting gigs related to data labelling and data QA in my field (I work as an analyst). I never got requests like that 2+ years ago.
the_brin92•15m ago
I've been doing this for one of the major companies in the space for a few years now. It has been interesting to watch how much more complex the projects have gotten over the last few years, and how many issues the models still have. I have a humanities background, which has actually served me well here, as what constitutes a "better" AI model response is often so subjective.

I can answer any questions people have about the experience (within code of conduct guidelines so I don't get in trouble...)