Workflow Development Kit

https://vercel.com/blog/introducing-workflow
1•0xedb•1m ago•0 comments

Show HN: Production-ready rate limiter with Web Dashboard and 5 algorithms

https://github.com/uppnrise/distributed-rate-limiter
1•uppnrise•2m ago•1 comments

Laser enrichment technology moves to next level

https://www.world-nuclear-news.org/articles/laser-enrichment-technology-moves-to-next-level
1•philipkglass•2m ago•0 comments

Leash: Spreadsheet Based PagerDuty Alternative

https://github.com/autokitteh/kittehub/tree/main/leash
1•itayd•2m ago•1 comments

Marketing Feels Like Hell for Developers

https://www.clintmcmahon.com/Blog/marketing-feels-like-hell-for-developers
1•speckx•2m ago•0 comments

Locus of Control

https://en.wikipedia.org/wiki/Locus_of_control
1•1970-01-01•3m ago•0 comments

Show HN: Desponsorize – Gray out Amazon sponsored search results

https://github.com/candacelabs/desponsorize
1•kaashmonee•3m ago•0 comments

We Tracked Every Website That Launched in September 2025. The Data Is Wild

https://websitelaunches.com/blog/post.php?slug=september-2025-website-launch-data
1•antiochIst•7m ago•0 comments

Free Sleep – Jailbreak 8 Sleep Pod and Control Locally

https://github.com/throwaway31265/free-sleep
1•hrimfaxi•9m ago•1 comments

Starbuck v. Google LLC N25C-10-211 (Del.Super. Oct.22,2025) [pdf]

https://fingfx.thomsonreuters.com/gfx/legaldocs/mopadxyaeva/STARBUCKGOOGLEDEFAMATIONLAWSUITcompla...
1•1vuio0pswjnm7•12m ago•0 comments

AI Orchestration for Operational Real-Time Network Analysis

https://dimaggi.com
1•tenywan•12m ago•1 comments

Looking for an influencer to help with agentic e-commerce app for fashion

1•kuma0177•12m ago•0 comments

What caused the large AWS outage?

https://blog.pragmaticengineer.com/aws-outage-us-east-1/
3•robin_reala•14m ago•1 comments

How Immigration Has Remade Canada [video]

https://www.youtube.com/watch?v=uz-Sx8lXeXk
1•jjangkke•14m ago•0 comments

NBA player among 30 arrested for gambling scheme that included X-ray poker table

https://www.theguardian.com/sport/2025/oct/23/heats-rozier-and-blazers-coach-billups-reportedly-a...
1•whycome•15m ago•2 comments

Microsoft makes Copilot "human-centered" with a '90s-style animated assistant

https://arstechnica.com/gadgets/2025/10/microsoft-makes-copilot-human-centered-with-a-90s-style-a...
1•pseudolus•16m ago•1 comments

Zram Performance Analysis

https://notes.xeome.dev/notes/Zram
1•enz•17m ago•0 comments

Stone Tools: Exploring retro productivity software from the 8/16-bit era

https://stonetools.ghost.io/
1•PaulHoule•22m ago•0 comments

A Return to Discovery

https://analoghobbyist.bearblog.dev/a-return-to-discovery/
1•speckx•25m ago•0 comments

ADP stopped data sharing with Fed

https://prospect.org/2025/10/21/fed-making-key-economic-decisions-without-data/
2•jimmydoe•26m ago•0 comments

I built this AI photography app for small brands

https://pixelshot.ai/
1•ozgrozer•28m ago•2 comments

Bay Area tech startup will play the villain in a new TV drama

https://www.sfgate.com/sf-culture/article/bay-area-tech-startup-villain-tv-drama-21114640.php
2•jedberg•29m ago•2 comments

Show HN: Front end says back end changed again? Stop that with middlerok

https://www.middlerok.com/
1•rokontech•30m ago•0 comments

The Muscular Compassion of "Paper Girl"

https://www.newyorker.com/books/page-turner/the-muscular-compassion-of-paper-girl
6•mitchbob•32m ago•1 comments

Collatz Automata

https://gbragafibra.github.io/2025/10/23/collatz_automata.html
1•Fibra•32m ago•0 comments

What antidepressants do to your brain and body

https://www.telegraph.co.uk/health-fitness/wellbeing/mental-health/what-antidepressants-do-to-you...
2•wjb3•35m ago•0 comments

Linux Proposed Cache Aware Scheduling Benchmarks Show Big Potential on AMD Turin

https://www.phoronix.com/review/cache-aware-scheduling-amd-turin
2•rbanffy•36m ago•0 comments

Cyberthreats surge against US logistics infrastructure

https://www.freightwaves.com/news/cyberthreats-surge-against-us-logistics-infrastructure
1•crescit_eundo•37m ago•0 comments

Trump pauses federal surge to San Francisco

https://sfstandard.com/2025/10/23/lurie-trump-calls-off-federal-surge-san-francisco/
4•jzelinskie•37m ago•1 comments

Antislop: A framework for eliminating repetitive patterns in language models

https://arxiv.org/abs/2510.15061
63•Der_Einzige•3h ago

Comments

DarmokJalad1701•3h ago
You're absolutely right!
kridsdale1•3h ago
What a brilliant observation you’ve made.
asmor•3h ago
This is such a nuanced view on things!
mrbungie•3h ago
I don't know if not getting the idea right, but I'm pretty sure people refer to AI outputs as "slop" not due to (only) repetitiveness. According to some sources:

[1] Wikipedia

> AI slop is digital content made with generative artificial intelligence, specifically when perceived to show a lack of effort, quality or deeper meaning, and an overwhelming volume of production.[1][4][5] Coined in the 2020s, the term has a pejorative connotation similar to spam.[4]

[2] Urban dictionary

> Low-quality randomly generated AI content (images, accounts, text, etc) that has been flooding social media sites among other pages.

Yes, I know those may not be the best primary sources, but I'd say the main shared meaning of the word is lack of quality and effort, not repetitiveness itself.

[1] https://en.wikipedia.org/wiki/AI_slop

[2] https://www.urbandictionary.com/define.php?term=AI+slop

jsheard•3h ago
Yeah, what this actually achieves if anything is making it harder to quickly recognize slop for what it is, so readers are more likely to give it the benefit of the doubt and keep their eyeballs on it for longer. Which I suppose is desirable if you're in the slop-mongering business (e.g. doing SEO spam or other such methods of flooding the commons with sewage for the sake of profit).
mrbungie•3h ago
Yep, and their only reference to the word points to a survey that does not mention slop even once (A Survey on LLM-Generated Text Detection: Necessity, Methods, and Future Directions. Computational Linguistics, 51(1):275–338, 2025; https://arxiv.org/abs/2310.14724).

That's sloppy (hehe), if you are going to redefine a common word for the first time (i.e. references are not possible) at least do it explicitly.

moritzwarhier•2h ago
Fits into a broad pattern of deceptive LLM terminology, for example "Deep Research": a humble and honest moniker would be "Reflection" or "Recursive self-prompting".
the8472•3h ago
Gain-of-function research to create memetic-immune-system-evading AI variants.

> Ethics Statement

> Potential harms include: [...] (ii) attempts to evade AI-text detection.

And it's not clear to me how their mitigations would avoid fooling users (as opposed to algorithmic detection attempts).

yawnxyz•3h ago
Honestly "slop" should also be retroactively applied to e.g. Buzzfeed content; it shouldn't just be AI-centric
louthy•3h ago
It isn’t AI centric, it’s derived from poor quality wet food. Often given to pigs or used to describe prison food. It’s the origin of the term ‘sloppy’.

Colloquially it means ‘poor quality’ and always has done. So Buzzfeed is journalism slop, just like poor-quality AI content is AI slop.

iahds9uasd•2h ago
The LLM erotic roleplaying community's usage of "slop" aligns with the definition in this paper, so it's not without precedent. Several novel sampling methods have originated from that community trying to address this specific issue.
mrbungie•2h ago
Nothing wrong with that, but (1) at least reference it or (2) define it yourself explicitly.
Der_Einzige•2h ago
Yup. You see this in how the very first projects to get a new sampler were oobabooga's text-generation-webui and SillyTavern, circa early 2023 with min_p. Same with diffusion models: the first projects to get new denoising algorithms are ComfyUI, Automatic1111, etc.
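
For reference, the min_p sampler mentioned above is simple enough to sketch in a few lines. This toy numpy version is my own illustration of the idea, not the implementation from any of those projects:

```python
import numpy as np

def min_p_sample(logits, min_p=0.1, rng=None):
    """Min-p sampling sketch: keep only tokens whose probability is at
    least min_p times the most likely token's probability, then sample
    from the renormalized distribution."""
    rng = rng if rng is not None else np.random.default_rng(0)
    probs = np.exp(logits - np.max(logits))   # numerically stable softmax
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()       # cutoff scales with model confidence
    filtered = np.where(keep, probs, 0.0)
    filtered /= filtered.sum()
    return int(rng.choice(len(logits), p=filtered))
```

The appeal for creative writing: when the model is confident the cutoff is strict, and when the distribution is flat it lets many candidates through, unlike a fixed top-k.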
palmotea•1h ago
> I don't know if not getting the idea right, but I'm pretty sure people refer to AI outputs as "slop" not due to (only) repetitiveness. According to some sources:

Yeah, slop is low-effort use of AI output ("ChatGPT, write me a blog post about using AI in industry X. Copy. Paste. Publish."). If anything this should be called Stealthslop, and when slop is harder to detect we'll all waste more time on it.

calvinmorrison•3h ago
we're calling it compu-slop
sixwing•3h ago
i took a swing at an anti-slop skill for Claude Code, this week: https://github.com/rand/cc-experiments/tree/main/skills/anti...
growdark•1h ago
Does this actually work or would the slop just become more subtle?
atourgates•2h ago
I've been using ChatGPT fairly regularly for about a year. Mostly as an editor/brainstorming-partner/copy-reviewer.

Lots of things have changed in that year, but the things that haven't are:

* So, so many em-dashes. All over the place. (I've tried various ways to get it to stop. None of them have worked long term).

* Random emojis.

* Affirmations at the start of messages. ("That's a great idea!") With a brief pause when 5 launched. But it's back and worse than ever now.

* Weird adjectives it gets stuck on like "deep experience".

* Randomly bolded words.

Honestly, it's kind of helpful because it makes it really easy to recognize content that people have copied and pasted out of ChatGPT. But apart from that, it's wild to me that a $500bn company hasn't managed to fix those persistent challenges over the course of a year.

giancarlostoro•2h ago
I don't use ChatGPT very often, though Perplexity has it, but I find that going all caps and sounding really angry helps them fix things.
estimator7292•2h ago
Ah, you've hit a classic problem with <SUBJECT> :smile_with_sweat_drop:. Your intuition is right-- but let me clarify some subtleties...
gowld•2h ago
> Honestly, it's kind of helpful because it makes it really easy to recognize content that people have copied and pasted out of ChatGPT

Maybe it's intentional, like the "shiny" tone applied to "photorealistic" images of real people.

teeray•2h ago
You can take my em-dashes from my cold, dead hands—I use them all the time.
antoniojtorres•2h ago
The emoji thing is so bad. You can see it all over github docs and other long form docs. All section headers will have emojis and so on. Strange.
thraxil•1h ago
Obviously nothing solid to back this up, but I kind of feel like I was seeing emojis all over github READMEs on JS projects for quite a while before AI picked it up. I feel like it may have been something that bled over from Twitch streaming communities.
koakuma-chan•2h ago
ChatGPT is made for normies—they love sweatdrop emojis. I recommend https://ai.dev
noir_lord•1h ago
"normies" such a weird way to divide the world into them and "us".
BolexNOLA•2h ago
> Affirmations at the start of messages. ("That's a great idea!") With a brief pause when 5 launched. But it's back and worse than ever now.

What a great point! I also can’t stand it. I get it’s basically a meme to point it out - even South Park has mocked it - but I just cannot stand it.

In all seriousness it’s so annoying. It is a tool, not my friend, and considering we are already coming from a place of skepticism with many of the responses, buttering me up does not do anything but make me even more skeptical and trust it less. I don’t want to be told how smart I am or how much a machine “empathizes” with my problem. I want it to give me a solution that I can easily verify, that’s it.

Stop wasting my tokens and time with fake friendship!

lazide•2h ago
Meanwhile, 90% of the population is asking it to write love letters for their bf’s/gf’s
BolexNOLA•1h ago
Man it is truly difficult to overstate all the behavioral health issues that have been emerging.
shagie•1h ago
A modern Cyrano de Bergerac.
SoftTalker•1h ago
Drives me nuts too. All the stuff like "OK, let me do..." or "I agree...". Stop talking like a person.

I want the star trek experience. The computer just says "working" and then gives you the answer without any chit-chat. And it doesn't refer to itself as if it's a person.

What we have now is Hal 9000 before it went insane.

layer8•54m ago
Setting ChatGPT personality to “Robot” pretty much does that for me.
cyanydeez•53m ago
Guys. It's basically because, among all the well-researched data, the amount of garbage is infinitely greater.

If AI wants to be useful (it's not going to, atm), real people need to cull all the banalities that Facebook, Reddit & forums have generated.

Because what you're noticing is things we typically elide in discussions with actual humans.

BolexNOLA•29m ago
It is far more polite than any social media platform or forum I’ve ever seen lol
pimeys•2h ago
Or... How can you detect the usage of Claude models in a writeup? Look for the word comprehensive, especially if it's used multiple times throughout the article.
bakugo•1h ago
Don't forget the classic: "It's not just X—it's Y."
rogerkirkness•1h ago
This is the main thing that immediately tells me something is AI. This form of reasoning was much less common before ChatGPT.
layer8•58m ago
It’s a pity that em-dashes are being much more shunned due to their LLM association than emojis.
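
The tics called out in this subthread are regular enough to grep for. A hypothetical mini-detector might look like this (the pattern list is mine, assembled from phrases mentioned in the thread, not from the paper):

```python
import re

# Hypothetical patterns for a few tics named in this thread;
# real slop-phrase lists are far longer.
SLOP_PATTERNS = {
    "not-just-x-its-y": re.compile(r"not just [^.?!]{1,40}?[—–-]+\s*it['’]s", re.IGNORECASE),
    "buzzword": re.compile(r"\b(?:delve|comprehensive|tapestry)\b", re.IGNORECASE),
}

def slop_hits(text):
    """Count matches per pattern; a crude per-text slop heuristic."""
    return {name: len(p.findall(text)) for name, p in SLOP_PATTERNS.items()}
```

As teeray's em-dash point shows, any such heuristic also flags plenty of human writing, so it is a signal to weigh, not a verdict.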
WASDx•46m ago
You can customize it to get rid of all that. I set it to the "Robot" personality and a custom instruction to "No fluff and politeness. Be short and get straight to the point. Don't overuse bold font for emphasis."
neoCrimeLabs•13m ago
I am reasonably sure affirmations are a feature, not a bug. No matter how much I might disagree.
SoftTalker•2h ago
I honestly can’t always distinguish AI slop from the formulaic corp-speak used in emails and memos and brochure websites and other marketing. I’m guessing that must be a large component of the training matter.
cyanydeez•50m ago
I'd say the majority of the training data is reddit with zero care about whether it's from a "good" or "sarcastic" or "ironic" source.
Babkock•2h ago
No one gives a fuck, no one cares, no wants this AI shit.
artninja1988•1h ago
Seethe more, chud
jjangkke•2h ago
There will be an intersection where continued refinement at hiding the telltale signs of AI meets newer, more powerful models, and it becomes very time-consuming, expensive, and difficult to distinguish human-generated from AI-generated content.

We are already at a point where we can trick a large share of the population; it can without a doubt close the gap even further, to where we question anything and everything.

Beyond forensics, which require large capital investment and operating costs, the ability to detect AI vs. human content will be limited in access. It's not that we won't be able to detect AI content anymore; it's that most people won't be able to afford the service, and so they will lose interest.

This has the side effect of making live performances by humans scarce and invaluable.

teeray•1h ago
> This has the side effect of making live performances by humans scarce and invaluable.

RIP take-home coding assignments.

mrbungie•1h ago
Also RIP any take-home assignment that depends at least partially on writing prose/essays.

Schools will need to reinvent themselves in some ways.

sorokod•1h ago
That narrowing gap is where we humans find purpose and meaning.

If an impersonation of an opera singer can't be distinguished from the real thing, what would be the point of the real thing?

meowface•2h ago
Slop is a much more general concept than that. I wish they would've picked a different term. "LLM fluff phrases" or something.
layer8•52m ago
diction, phraseology
growdark•1h ago
I'd love to see a benchmark that tests different LLMs for slop, not necessarily limited to code. That might be even more interesting than ARC-AGI.
jampa•1h ago
Not a benchmark per se, but there is a "Not x, but y" Slop Leaderboard:

https://www.reddit.com/r/LocalLLaMA/comments/1lv2t7n/not_x_b...

Bolwin•24m ago
See the writing benchmarks here https://eqbench.com/creative_writing_longform.html
skywhopper•1h ago
That’s not what “slop” means. Slop is output produced by generative AI without regards to its quality, not the telltale tics that current models tend to exhibit.
tartoran•1h ago
Yep. Sanitized slop is still slop.
voldacar•2m ago
Instead of "surgically adjusting" logits within an existing model, couldn't you just build the slop detector into the loss function during the initial training stage?
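
For what "surgically adjusting logits" might mean concretely, here is a toy sketch. The word-level vocabulary and banned-phrase list are hypothetical, and the actual Antislop method in the paper also backtracks and resamples, which this omits:

```python
import numpy as np

# Toy word-level vocabulary; a real system works on tokenizer ids.
VOCAB = ["it", "is", "not", "just", "a", "tool", "tapestry", "."]
TOK = {w: i for i, w in enumerate(VOCAB)}

# Hypothetical phrases to suppress at inference time.
BANNED = [["not", "just"], ["tapestry"]]

def adjust_logits(context, logits, penalty=-1e9):
    """If the context ends with all but the last word of a banned phrase,
    push the logit of the completing word to -inf so it can't be sampled."""
    out = logits.copy()
    for phrase in BANNED:
        prefix, last = phrase[:-1], phrase[-1]
        if context[len(context) - len(prefix):] == prefix:
            out[TOK[last]] = penalty
    return out
```

A loss-time version, as the question suggests, would instead penalize these continuations during fine-tuning; the inference-time version's advantage is that it works on any frozen model without retraining.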