frontpage.

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
1•okaywriting•3m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
1•todsacerdoti•6m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•7m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•8m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•8m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•9m ago•0 comments

Expertise, AI and the Work of the Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•9m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•10m ago•1 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•14m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
1•bkls•14m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•15m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•15m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•24m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•24m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•26m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•26m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•26m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
3•pseudolus•27m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•27m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•28m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•28m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•29m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•30m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•31m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•33m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•34m ago•1 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•34m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•35m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
2•tusharnaik•36m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•36m ago•0 comments

Instagram AI Influencers Are Defaming Celebrities with Sex Scandals

https://www.404media.co/instagram-ai-influencers-are-defaming-celebrities-with-sex-scandals/
107•cdrnsf•3w ago

Comments

mandevil•3w ago
I love the Nicolas Maduro image in particular. That feels like it's a parody of this whole genre of ad?
dreadsword•3w ago
All the more reason to steer clear of big-brand social media, and protect spaces like this.
contagiousflow•3w ago
What's protecting smaller online spaces from AI?
MrLeap•3w ago
The fact it's text only means we only get AI text and not images, I suppose. lmao.
plastic-enjoyer•3w ago
Essentially, gatekeeping. Places that are hard to access without the knowledge or special software, places that are invite-only, places that need special hardware...
ninthcat•3w ago
Another important factor is whether the place is monetizable. Places where you can't make money are less likely to be infested with AI.
deathsentience•3w ago
Or a place that can influence a captive audience. Bots have been known to play a part in convincing people of one thing over another via the comments section. No direct money to be made there but shifting opinions can lead to sales, eventually. Or prevent sales for your competitors.
Analemma_•3w ago
Or places with a terminally uncool reputation. I'm still on Tumblr, and it's actually quite nice these days, mostly because "everyone knows" that Tumblr is passé, so all the clout-chasers, spammers and angry political discoursers abandoned it. It's nice living, under the radar.
jsheard•3w ago
Nothing is bulletproof, but more hands-on moderation tends to be better at making pragmatic judgement calls when someone is being disruptive without breaking the letter of the law, or breaks the rules in ways that take non-trivial effort to prove. That approach can only scale so far though.
notpachet•3w ago
Not enough financial upside for it to be worth the trouble.
cush•3w ago
Economics. Slop will only live where there's enough eyeballs and ad revenue to earn a profit from it
throwaway198846•3w ago
This space is not protected, anyone can sign up.
dreadsword•3w ago
Moderation that keeps it focused. Text only.
SoftTalker•3w ago
Text only, no ads, and aggressive downmodding of self-promotion.

Edit: On the other hand, here we are looking at it and talking about it. Some number of us followed links in that article. Some number of them followed those to an OnlyFans page.

giancarlostoro•3w ago
How long until OnlyFans just says, screw it, we make AI content too.
zxcvasd•3w ago
>no ads

There are ads here all the time. They're just text-based, instead of picture/video-based. "Show HN" is literally only for advertising.

nicce•3w ago
Being less known and niche is one kind of protection. Big carrots are missing for most.
add-sub-mul-div•3w ago
Another kind of protection is Reddit and Twitter remaining alive as quarantines, rather than collapsing and having the newer, better places absorb the refugees.
tartuffe78•3w ago
They used to shut down sign up when Reddit was down :)
expedition32•3w ago
It would be hilarious if AI enshittified the internet so much that people gave up on it.
_blk•3w ago
Not excusable in any way, shape or form, but an explanation clearly lies in the demand for trash news and the cult around Hollywood celebrities.
falloutx•3w ago
Why is it that whenever there is news about AI, it's either a new scam or something vile? All this harm being done to the environment and to people's sanity and lives, just so companies can pay their employees less. Great work.
knicholes•3w ago
Because news about scams or something vile using AI gets you to click and read.
falloutx•3w ago
Almost all of the good news, once you read a little more, turns out to be due to traditional ML, and nearly all of it is in the medical imaging field. Then OpenAI tries to take credit and says "Oh look, AI is doing that too", which is not true. Go ahead and read deeper into any of those stories and you will quickly find LLMs haven't done much good.
knicholes•3w ago
They helped me make some damn good brownies and be a better parent in the last month. Maybe I should write a blog for all of the great things LLMs are doing for me.

Oh yeah, and one rewrote the 7-minute-workout app for me without the porn ads before and after the workout so I can enjoy working out with one of my kids.

falloutx•3w ago
What makes you think you couldn't have made brownies without LLMs? Go to Google and just scroll 20cm and there it is, a recipe, the same one ChatGPT gave you. I won't comment on rewriting an app, because LLMs can definitely do that.
knicholes•3w ago
Because, "Why are the edges burnt and the middle is too soft? How are these supposed to actually look? I used a clear 8"x8" pan, and I'm in Utah, which is at 4,600 ft elevation"

Oh, it's a higher elevation, I need to change the recipe and lower the temperature. Oh, after it looked at the picture, the top is supposed to be crackly and shiny. Now I know what to look for. It's okay if it's a little soft while still in the oven because it'll firm up after taking them out? Great!

Another one, "Uh oh, I don't have Dutch-processed baking powder. Can I still use the normal stuff for this recipe?" Yeah, Google can answer that, but so can an LLM.

falloutx•3w ago
You make it sound like brownie making is a scientific endeavour. I wouldn't think it's hard, but I guess I haven't made brownies in all conditions.
knicholes•3w ago
All baking is a scientific endeavor in my house! You should try my brownies! :D
xmprt•3w ago
What makes you think you couldn't have made brownies without Google? Just go to your local library and grab the first baking cookbook you can find. And there it is, a better recipe than Google's, without all the SEO blog spam.

To avoid my comment just being snarky: I agree that there's a difference between comparing Google to LLMs and comparing the library to Google... but I still hope you can acknowledge that LLMs can do a lot more than Google, such as answering questions about recipe alterations or baking theory that a simple recipe website can't/won't.

pousada•3w ago
fwiw modern recipe sites are awful - you have to scroll for literal minutes before you get to the recipe. LLMs give you the answer you want in seconds.

I’m certainly no LLM enthusiast but pretending they are useless won’t make the issues with them go away

amlib•3w ago
I doubt this bonanza is gonna last... These chatbots, feeding from the very sources that can't seem to surface quality stuff, will likely degrade just like web search has over the last 20 years. There will be ads, there will be manipulation and deception, there will be pointless preambles, and they will spit out even more wrong instructions and unusable garbage. And on top of it all, it won't take 20 years to degrade this time; it's rather likely it will take less than 5.

Maybe open source models will hold these accountable, or maybe they will degrade too somehow. Or maybe the world will be going through too hard a collapse for any of us to care.

knicholes•3w ago
The model weights for the leading open source offerings have already been downloaded thousands, if not millions, of times. There's no unsqueezing that tube of toothpaste.
amitav1•3w ago
For me personally, LLMs have helped me learn 10x faster than I would be able to otherwise. IMO, in 15 years, teachers with university degrees will be as rare as teachers with PhDs are today, because the actual teaching will be left to the LLMs.
alfalfasprout•3w ago
What's worse is a significant number of folks here seem to be celebrating it. Or trivializing what makes us human. Or celebrating the death of human creativity.

What is it, do you think, that has attracted so many misanthropes into tech over the last decade?

add-sub-mul-div•3w ago
The era of new technologies being used to work for us rather than net against us is something we took for granted and it's in the past. Those who'd scam or enshittify have the most power now. This new era of AI isn't unique in that, but it's a powerful force multiplier, and more for the predatory than the good.
blibble•3w ago
there's literally an NFT/crypto scammer occupying the Oval Office

can't wait until he figures out AI

api•3w ago
Good news doesn't get clicks. Usually doesn't even get reported.
SecretDreams•3w ago
Also, saving me a bit of time in coding is objectively not a good trade if the same tool very easily emboldens pedophiles and other fringe groups.
venndeezl•3w ago
Media in the US is obsessed with fear mongering:

https://flowingdata.com/2025/10/08/mortality-in-the-news-vs-...

If they reported on heart disease, people might get healthy. But there's an instinctual understanding that people dying all over just improves journalists' odds in our society. Keep them anxious with crime stats!

Such an unserious joke of a society.

expedition32•3w ago
So do you dispute that this is happening? And it's all over my country too.

Expecting tech bros to take responsibility for what they have unleashed is asking too much I suppose.

api•3w ago
This is purely economic. Fear mongering gets clicks, which boosts ad revenues.

I've read statistics to the effect that bad news (fear or rage bait) often gets as much as 10,000X the engagement vs good news.

seanmcdirmid•3w ago
News is whatever people would care about reading.
EGreg•3w ago
Because it has a lot of potential for abuse.

BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized with a version of the comment:

“This has always been the case. AI doesn’t do anything new. This is a nothingburger, move on.”

You can probably see multiple versions in this thread or the sibling post just next to it on HN front page: https://news.ycombinator.com/item?id=46603535

It comes up so often as to be systematic. Both downvoting Web3 and upvoting AI. Almost like there is brigading, or even automation.

Why?

I kept saying for years that AI has far larger downsides than Web3, because in Web3 you can only lose what you voluntarily put in, but AI can cause many, many, many people to lose their jobs, their reputations, etc., and even their lives if weaponized. Web3 and blockchain can… enforce integrity?

blibble•3w ago
> Why?

https://www.ycombinator.com/companies?batch=Winter%202026

falloutx•3w ago
At this point I think HN is flooded with wannabe founders who think this is "their" gold rush, and any pushback against AI is against them personally, against their enterprise, against their code. This is exactly what happens on every vibe coding thread, every AI-adjacent thread.
ceejayoz•3w ago
> BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized…

It took a few years for that to happen.

Plenty of folks here were all-in on NFTs.

ronsor•3w ago
There are plenty of posts critical of AI on HN that reach the front page, and even more threads filled with AI criticism whether on-topic or not.

What you're noticing is a form of selection bias:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

grokgrok•3w ago
Mass participation in systems can create emergent effects larger than the net sum of the parts. I opt out because first movers are unfairly advantaged, and because, lacking proper safeguards, my participation would implicitly support those participants who profit from producing misery. I don't want to accidentally launder the profits from human trafficking, nor commit my labor to building my own prison. The rhetoric promoting Web3 as an engine of progress and freedom simply oversold the capabilities of its initial design. That underlying long-term vision may still be viable.

We can't rebuild the economy without also rebuilding the State, and that requires careful nuanced engineering and then the consent of the governed.

Almondsetat•3w ago
Sounds like confirmation bias you are not interested in challenging
vlan0•3w ago
Can be said about so many things in life. It's almost like we don't learn and just repeat in loops.
nonethewiser•3w ago
Are you suggesting people shouldn't develop AI because it basically just produces unemployment and scams? Like, that they should just be good people and stop, or that the government should ban the development of AI?

I mean you are clearly equating AI with unemployment and scams, which I think is a very incomplete picture. What do you think should be done in light of that?

falloutx•3w ago
>I mean you are clearly equating AI with unemployment and scams, which I think is a very incomplete picture.

What else? Let me guess: slop in software, AI psychosis, environmental concerns, growing wealth inequality. And yes, maybe we can write some crappy software faster. That should cover it.

I have no suggestions on how to solve it. The only way is to watch OpenAI/Claude lose more money and then hopefully the models become cheaper or completely useless.

nonethewiser•3w ago
>What else? Let me guess: slop in software

Are you a developer? If so, does this mean you have not been able to employ AI to increase the speed or quality of your work?

falloutx•3w ago
Yes, but I am talking about slop in all the software I use, not just what I make. Every app is trying to do everything. Everywhere there's a summarise button or some cobbled-together AI-gen feature. Software continuously fails, and companies provide no support because that has all been automated to save money.
nonethewiser•3w ago
I mean, from your perspective it just sounds like it should be stopped somehow. Either people collectively decide it's a waste of time, or something. I guess I'm very surprised to hear someone thinks it brings no value. I can relate to some of the negative outcomes, but to not see any significant value seems kind of crazy to me.
falloutx•3w ago
No, it's the best way to burn money, so no reason to stop it. But I would like it if people used it less.
erikerikson•3w ago
"Guns don't kill people, I do."

Blaming the technology for bad human behavior seems like an error, and it's not clear that the GP made it.

People could, and likely will, also increase economic activity and flexibility, and evolve how we participate in the world. The alternative would get pretty ugly pretty quickly. My pitchfork is sharp, and the powers that be would prefer it continues being used on straw.

croes•3w ago
People without guns kill less.
erikerikson•3w ago
The statistics are that our car use, pollution, and many other problems kill far more people.
croes•3w ago
But they have greater benefits, like mobility, and the bad things are a side effect of their use.

But for weapons, death is the result of their purpose.

croes•3w ago
If the harm outweighs the benefits, stopping should be an option, don't you think?
nonethewiser•3w ago
I don't think AI just brings scams and unemployment.
croes•3w ago
Therefore the "outweighs"
blibble•3w ago
> What do you think should be done in light of that?

you suggested it:

> government should ban the development of AI?

works for me!

surgical_fire•3w ago
Well, it's one thing AI actually revolutionized.
seanmcdirmid•3w ago
The early days of the internet were mostly about how it enabled porn, spam, and scams... just so people could order things online.

We are now talking about AI in terms of how it enables porn, spam, and scams...

polishdude20•3w ago
Don't discount the fact that bad news sells.
ofalkaed•3w ago
That is news in general, nothing special about AI.
cheald•3w ago
"Local man uses AI to try slightly different casserole recipe" just doesn't have that click-driving wow factor.
optimalsolver•3w ago
Unless the AI's suggestion to add glue kills him:

https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...

sanex•3w ago
Lots of kids used to eat glue; they turned into living adults eventually. No promises as to their status, though.
akomtu•3w ago
It's very similar to what Europeans did to American Indian tribes a few centuries ago: they gave them alcohol. A neutral substance by itself, which could be used for disinfection or to light fires. But it has a destructive side too, and if the populace isn't resistant to it, they stand no chance. AI is very similar: some positive potential paired with a powerful destructive potential. Human tribes, as we know, have loose morals and thus aren't resistant at all to AI's destructive side. We are like those American Indians now, unable to resist the temptation.
bofadeez•3w ago
"Defaming celebrities" that's a big concern lol. Let's also make sure billionaires securely obtain maximum luxury next.
cynicalsecurity•3w ago
Maybe it's time to stop getting hysterical over other people's sex lives? Then it won't be a "hot" topic to exploit any more.
SoftTalker•3w ago
I think most people know that these aren't real. They are just for laughs or titillation, and a way to get attention, followers, and (ultimately) paying customers. Celebrity impersonations in advertising are not at all new.
nonethewiser•3w ago
I think it's more a symptom of a culture with bad values. The safeguard against this behavior is people having shame.
Mordisquitos•3w ago
There's something about the way terminology is used in this article that feels off to me.

First of all, I'm not sure it makes sense to refer to these AI-generated characters as AI 'influencers'. Did these characters actually have followers prior to these fake videos being generated in December 2025? Do they even have followers now? I don't know, maybe they did or do, but I get the impression that they are just representing influencer-ish characteristics as part of the scheme. Don't get me wrong, the last thing I want is to gatekeep such an asinine term as 'influencer'. However, just like I would not be an influencer just by posting a video acting like one, neither do AI characters get a free pass at becoming one.

Second, there's the way the article is subjectifying the AI-generated characters. I can forgive the headline for impact, but by consistently using 'AI influencers' throughout the article as the subject of these actions, it is not only contributing to the general confusion as to what characters in AI-generated videos actually are, but also subtly removing the very real human beings who are behind this from the equation. See for instance these two sentences from the article, UPPERCASE mine:

1- 'One AI influencer even SHARED an image of HER in bed with Venezuela’s president Nicolás Maduro'

2- 'Sometimes, these AI influencers STEAL directly from real adult content creators by faceswapping THEMSELVES into their existing videos.'

No, there is no her sharing an image of herself in bed with anyone. No, there is no them stealing and faceswapping themselves onto videos of real people. The 'AI influencers' are not real. They are pure fictions, as fictional as the fictional Nicolás Maduro, Mike Tyson and Dwayne Johnson representations that appear in the videos. The sharing and the faceswapping are being done by real dishonest individuals and organisations out there in the real world.

ahmetomer•3w ago
The classic "bad thing always existed but AI made it worse" case.
tunesmith•3w ago
cultural problem too... like even before AI, in recent years there's been more of a societal push that it's fair game to just lie to people. Not that it didn't always happen, but it's more shameless now. Like... I don't know, just to pick one: actors pretending to be romantically involved for PR for their upcoming movie. That's something that seems way more common than I remember in the past.
Loughla•3w ago
While I agree with you, your example is not a great one. There are examples of fake relationships between stars dating back to the start of talkies.

But I do agree. It is more socially acceptable to just lie, as long as you're trying to make money or win an argument or something. It's out of hand.

vladms•3w ago
Do you have any data to back that "it is more socially acceptable to lie"? I looked a bit and could not find anything either way.

The impression can be a bias of growing up. Adults will generally teach and insist that children tell the truth. As one grows up, one is less constrained and can tell many "white lies" (low-impact lies).

Some people (well-known people, influencers, etc.) do have more impact than before because of network effects.

frm88•3w ago
There is this study that claims/proves that dishonesty/lying is socially transmittable and

The question of how dishonesty spreads through social networks is relevant to relationships, organizations, and society at large. Individuals may not consider that their own minor lies contribute to a broader culture of dishonesty. [0]

the effect of which would be massively amplified if you take into account that

Research has found that most people lie, on average, about once or twice per day [1]

where the most prolific liars manage upward of 200; you can then imagine that, with the rise and prevalence of social media, the acceptance/tolerance has also been socially transmitted

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC4198136/

[1] https://gwern.net/doc/psychology/2021-serota.pdf

vladms•3w ago
Interesting, but one assumption for "would be massively amplified" is that we are more connected. It seems that people are (at least feeling) more lonely (ref: https://www.gse.harvard.edu/ideas/usable-knowledge/24/10/wha...).

So, while dishonesty can spread through social networks, that does not address whether total dishonesty is higher than, lower than, or equal to, say, 100 years ago, because there are many factors involved.

frm88•3w ago
I'll link you a study that investigates this, but unfortunately it is paywalled if you don't have a paid Springer account: https://link.springer.com/chapter/10.1007/978-3-319-96334-1_...
breakpointalpha•3w ago
Let me introduce you to an actor named Rock Hudson coughs in black and white. /s
SecretDreams•3w ago
I think it's the speed at which it can do harm. Whatever efficiency gains we get from AI for good causes will also be seen by nefarious ones. Tools need safety mechanisms to ensure they aren't symmetrically supporting good and bad actors. If we can't sufficiently minimize the latter, the benefits the former group gains may not be worth it.
nonethewiser•3w ago
I'm honestly shocked at the reaction to this. I'm well aware of the culture we live in. Isn't everyone else?
TacticalCoder•3w ago
When you see what z-image turbo with some added LoRAs does in mere seconds on a 4090 locally, you know it's a lost fight. And that's not even the best model: just a very good one that everybody can run.

Not only is the cat out of the bag, but this is just the beginning. For example, porn videos where people can swap the actress for their favorite celebrity in real time are imminent.

There's no fighting this.
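
(For a sense of how little code "runs locally in seconds" implies: below is a minimal text-to-image sketch using the Hugging Face diffusers library. The checkpoint is sdxl-turbo as a stand-in, since it's unclear whether z-image turbo ships in diffusers format, and the LoRA path is a placeholder; the point is only how short a few-step local pipeline is.)

    import torch
    from diffusers import AutoPipelineForText2Image

    # Stand-in few-step ("turbo") checkpoint; swap in whatever
    # diffusers-compatible model you actually run locally.
    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    # Placeholder path: any style/subject LoRA in safetensors format.
    pipe.load_lora_weights("./my_style_lora.safetensors")

    # Turbo-style models need only a handful of steps and no CFG,
    # which is why a single consumer GPU renders an image in seconds.
    image = pipe(
        prompt="studio portrait photo of a golden retriever, film grain",
        num_inference_steps=4,
        guidance_scale=0.0,
    ).images[0]
    image.save("out.png")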

Teever•3w ago
This is a technique that will absolutely be used by those reputation management companies.

I predict that within three years we'll be discussing a story about how a celebrity hired a company to produce pictures of them doing intimate things with people, in order to head off the imminent release of sexual assault allegations.

knowitnone3•3w ago
I don't see a problem here. These celebrities can now sue and make boatloads of money.