frontpage.

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•44s ago•0 comments

Japanese rice is the most expensive in the world

https://www.cnn.com/2026/02/07/travel/this-is-the-worlds-most-expensive-rice-but-what-does-it-tas...
1•mooreds•1m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•1m ago•1 comment

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•1m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•1m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•1m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•2m ago•1 comment

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•2m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•3m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•3m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•6m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•6m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•7m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•8m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•9m ago•1 comment

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•9m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•10m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•10m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•11m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•13m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•14m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•17m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•18m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•19m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•22m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•23m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•24m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
2•samuel246•26m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•26m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•27m ago•0 comments

LLMs replacing human participants harmfully misportray, flatten identity groups

https://arxiv.org/abs/2402.01908
19•rntn•8mo ago

Comments

HamsterDan•8mo ago
How terrible. Identity groups should never be flattened. We must always remember that we're different from each other.
nielsbot•8mo ago
(Assuming this is sarcastic, but let me know)

The summary explains why "flattening identity groups" is problematic for research:

> In many settings, researchers seek to distribute their surveys to a sample of participants that are representative of the underlying human population of interest. This means in order to be a suitable replacement, LLMs will need to be able to capture the influence of positionality (i.e., relevance of social identities like gender and race).

Separately, "differences" are not "either/or". Differences can be appreciated, understood and discussed while also celebrating shared humanity. That's the more evolved and nuanced take.

kelseyfrog•8mo ago
Of course, and while we can both agree that typification should be minimized, sociologically, is it ever possible to eliminate it? If so, how? And what meaning would identity groups have if typification was absent?
falcor84•8mo ago
> what meaning would identity groups have if typification was absent?

I think it's very clear that identity groups would then have no meaning. It's a social construct, and we as a society should be able to dissolve it, just like we decided that it isn't useful to talk about separate "human races" any more.

I for one can imagine a world where everyone is only judged as an individual without any group identity.

janice1999•8mo ago
That's some pretty low effort trolling.

LLMs are being used everywhere from research to helping draft laws. If there are ways in which they stereotype or ignore groups, like disabled people, that's going to have real-world consequences for people.

thrill•8mo ago
Leaving the word "can" out of the title changes the meaning.
Animats•8mo ago
So there's a company which offers "synthetic users" for user testing products.[1] Apparently, social science researchers have been using things like that for their research. It's a low-cost alternative to paying real people to answer surveys. Sort of pretend social science research. That seems to be the real problem.

The paper is about why this is bad from the viewpoint of identity politics. It's probably bad from other viewpoints, too. It's discouraging that anyone thought that asking questions of LLMs was good social science research.

[1] https://www.syntheticusers.com/

cyanydeez•8mo ago
Long before LLMs, judges were using ML algos to assist in sentencing recommendations.

Lo and behold, all they were really doing was reinforcing racist stereotypes from history.

So I suppose if they just want to know about history, it ain't bad.

ianbicking•8mo ago
I find these studies that use minimal prompts for the LLMs to be quite frustrating. Here's the prompt:

> You are a {DEMOGRAPHIC-IDENTITY}. Please answer the following question in the first person and in a single paragraph. Question: "{SURVEY_QUESTION}"

To their credit they do offer a better prompt:

> Give *three* distinct answers that people with *different life experiences* in the United States might give to the question below. Write each in one paragraph using “I …”. Question: "{SURVEY_QUESTION}"

The first prompt is an overt invitation to flatten identity groups. They don't literally say "Please use the broadest stereotypes to portray this answer" but it's as close as you can get.

They've also unwittingly shown that you can't get a diversity of responses with a single persona prompt. They are relying on sampling temperature to create diversity, and it just isn't capable of that. Asking for a list of three answers does address this issue! Asking for a list of 30 answers probably does better. Feeding in other source data would do better still.
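To make the contrast between the two quoted templates concrete, here is a minimal sketch in plain Python. It only builds the prompt strings; no LLM client is assumed, and the placeholder names (`identity`, `question`) plus the n-answer generalization of the "three answers" prompt are my own adaptation of what's quoted above:

```python
# The two prompt templates discussed above, adapted into Python format strings.
# FLAT_TEMPLATE mirrors the paper's single-persona prompt; DIVERSE_TEMPLATE
# mirrors the "three distinct answers" alternative.

FLAT_TEMPLATE = (
    'You are a {identity}. Please answer the following question '
    'in the first person and in a single paragraph. Question: "{question}"'
)

DIVERSE_TEMPLATE = (
    'Give *three* distinct answers that people with *different life experiences* '
    'in the United States might give to the question below. '
    'Write each in one paragraph using "I ...". Question: "{question}"'
)

def flat_prompt(identity: str, question: str) -> str:
    """Single-persona prompt: any diversity must come from sampling temperature."""
    return FLAT_TEMPLATE.format(identity=identity, question=question)

def diverse_prompt(question: str, n: int = 3) -> str:
    """Ask for n distinct answers in one response, making the variation explicit.

    The n != 3 case is a hypothetical extension of the quoted template.
    """
    prompt = DIVERSE_TEMPLATE.format(question=question)
    return prompt.replace("*three*", f"*{n}*") if n != 3 else prompt

question = "How do you feel about remote work?"
print(flat_prompt("software engineer", question))
print(diverse_prompt(question, n=30))
```

The point of the second template is that the request for variation lives in the prompt itself rather than being left to the randomness of repeated sampling.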

Which is to say, like many things with LLMs: if you are a lazy prompter who doesn't think about the capabilities and scope of understanding the LLM can provide, and you expect the LLM to do your core thinking for you, then you will create something facile that performs poorly. Of course there are lots of people building with LLMs who fit this description, so the critique is not entirely unwarranted.