frontpage.

OpenClaw Is Changing My Life

https://reorx.com/blog/openclaw-is-changing-my-life/
1•novoreorx•5m ago•0 comments

Everything you need to know about lasers in one photo

https://commons.wikimedia.org/wiki/File:Commercial_laser_lines.svg
1•mahirsaid•7m ago•0 comments

SCOTUS to decide if 1988 video tape privacy law applies to internet uses

https://www.jurist.org/news/2026/01/us-supreme-court-to-decide-if-1988-video-tape-privacy-law-app...
1•voxadam•8m ago•0 comments

Epstein files reveal deeper ties to scientists than previously known

https://www.nature.com/articles/d41586-026-00388-0
1•XzetaU8•15m ago•0 comments

Red teamers arrested conducting a penetration test

https://www.infosecinstitute.com/podcast/red-teamers-arrested-conducting-a-penetration-test/
1•begueradj•22m ago•0 comments

Show HN: Open-source AI powered Kubernetes IDE

https://github.com/agentkube/agentkube
1•saiyampathak•26m ago•0 comments

Show HN: Lucid – Use LLM hallucination to generate verified software specs

https://github.com/gtsbahamas/hallucination-reversing-system
1•tywells•28m ago•0 comments

AI Doesn't Write Every Framework Equally Well

https://x.com/SevenviewSteve/article/2019601506429730976
1•Osiris30•32m ago•0 comments

Aisbf – an intelligent routing proxy for OpenAI compatible clients

https://pypi.org/project/aisbf/
1•nextime•32m ago•1 comments

Let's handle 1M requests per second

https://www.youtube.com/watch?v=W4EwfEU8CGA
1•4pkjai•33m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•zhizhenchi•34m ago•0 comments

Goal: Ship 1M Lines of Code Daily

2•feastingonslop•44m ago•0 comments

Show HN: Codex-mem, 90% fewer tokens for Codex

https://github.com/StartripAI/codex-mem
1•alfredray•46m ago•0 comments

FastLangML: Context-aware lang detector for short conversational text

https://github.com/pnrajan/fastlangml
1•sachuin23•50m ago•1 comments

LineageOS 23.2

https://lineageos.org/Changelog-31/
1•pentagrama•53m ago•0 comments

Crypto Deposit Frauds

2•wwdesouza•54m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
3•lostlogin•54m ago•0 comments

Framing an LLM as a safety researcher changes its language, not its judgement

https://lab.fukami.eu/LLMAAJ
1•dogacel•57m ago•0 comments

Are there anyone interested about a creator economy startup

1•Nejana•58m ago•0 comments

Show HN: Skill Lab – CLI tool for testing and quality scoring agent skills

https://github.com/8ddieHu0314/Skill-Lab
1•qu4rk5314•59m ago•0 comments

2003: What is Google's Ultimate Goal? [video]

https://www.youtube.com/watch?v=xqdi1xjtys4
1•1659447091•59m ago•0 comments

Roger Ebert Reviews "The Shawshank Redemption"

https://www.rogerebert.com/reviews/great-movie-the-shawshank-redemption-1994
1•monero-xmr•1h ago•0 comments

Busy Months in KDE Linux

https://pointieststick.com/2026/02/06/busy-months-in-kde-linux/
1•todsacerdoti•1h ago•0 comments

Zram as Swap

https://wiki.archlinux.org/title/Zram#Usage_as_swap
1•seansh•1h ago•1 comments

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
1•mxfh•1h ago•0 comments

Nvidia CEO Says AI Capital Spending Is Appropriate, Sustainable

https://www.bloomberg.com/news/articles/2026-02-06/nvidia-ceo-says-ai-capital-spending-is-appropr...
1•virgildotcodes•1h ago•3 comments

Show HN: StyloShare – privacy-first anonymous file sharing with zero sign-up

https://www.styloshare.com
1•stylofront•1h ago•0 comments

Part 1 the Persistent Vault Issue: Your Encryption Strategy Has a Shelf Life

1•PhantomKey•1h ago•0 comments

Show HN: Teleop_xr – Modular WebXR solution for bimanual robot teleoperation

https://github.com/qrafty-ai/teleop_xr
1•playercc7•1h ago•1 comments

The Highest Exam: How the Gaokao Shapes China

https://www.lrb.co.uk/the-paper/v48/n02/iza-ding/studying-is-harmful
2•mitchbob•1h ago•1 comments

ChatGPT Is Becoming a Religion

https://www.youtube.com/watch?v=zKCynxiV_8I
30•cmsefton•7mo ago

Comments

alganet•7mo ago
In the wise words of the prophet Stevie Wonder:

    When you believe in things that you don't understand, then you suffer.
michaelsbradley•7mo ago
Care to elaborate?
alganet•7mo ago
I won't elaborate on the Stevie Wonder quote. I think it's perfect the way it is.

--

I can, however, elaborate on the subject separately from that quote.

The video talks about the more extreme cases of AI cultism. This behavior follows the same formula as previous cults (some of which are mentioned).

In 2018 or so, I noticed the rise of flat earth narratives (bear with me for a while, it will connect back to the subject).

The scariest thing, though, was _the non flat earthers_. People who insisted that the earth was round, but couldn't explain why. Some of them tried, but had all sorts of misconceptions about how satellites work, the history of science, and so many other things. When confronted, very few people _actually_ understood what it takes to prove the earth is round. They were just as clueless as the flat earthers, just with a different opinion.

I believe something similar is happening with AI. There are extreme cases of cult behavior which are obvious (as obvious as flat earthers), and there are the subtle cases of cluelessness similar to what I experienced with both flat-earthers and "clueless round-earthers" back in 2018. These, especially the clueless supporters, are very dangerous.

By dangerous, I mean "as dangerous as people who believe the earth is round but can't explain why". I recognize most people don't see this as a problem. What is the issue with people repeating a narrative that is correct? Well, the issue is that they don't understand why the narrative they are parroting is correct.

Having a large mass of "reasonable but clueless supporters" can quickly derail into a mass of ignorance. Similar things happened when people were swayed to support certain narratives due to political alignment. The flat-earth and anti-vaccine pseudoscientific nonsense is tightly connected to that. Those people were "reasonable" just a few years prior, then became an issue when certain ideas got into their heads.

I'm not perfect, and I probably have a lot of biases too. Narratives I support without fully understanding why, probably without even noticing. But I'm damn focused on understanding them and making that understanding the central point of the issue.

butlike•7mo ago
It's easier to rationalize something deemed 'magic' as a terror-inducing thing rather than a boon since the thing could dominate in totality. Giant clipper ships, napalm, penicillin... the enemy army has ships from god, fire from black magic (the gods). Their priests are able to revive their fallen (penicillin), etc.
alganet•7mo ago
The opposite can also be true.

When you and your allies have all the tech, but the enemy still finds cheap and easy ways to make them ineffective (Vietnam War). Makes one question if all the gizmos are worth it, really shakes up the morale.

I was not talking about a confrontational situation though. Most cults and pseudoscience are just plain scams.

TrackerFF•7mo ago
Don't have time to watch a 42m vid now, but I can see how people are starting to view ChatGPT (and similar models) as some miraculous oracle, of sorts. Even if you start using the models with your eyes wide open, knowing how much they can hallucinate, with time - it is easy to lower your guard, and just trust the models more and more.

To get a reality check, open up 3-4 different models (ChatGPT, Claude, Gemini, etc.), and ask them topics you know really well, and questions you already know the answers to. And see that maybe a quarter, or 25% will fail somewhat. Some topics are of course easier for these than others.
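
A minimal sketch of that reality check in Python. Everything here is illustrative: the questions, model names, and the `ask_model` helper are hypothetical stand-ins for whatever chat-completion client your provider offers.

```python
# Sketch of the "reality check": ask each model a few questions whose
# answers you already know, then tally the miss rate per model.
# ask_model is a hypothetical placeholder, not a real client call.

EXPECTED = {
    "Chemical symbol for tungsten?": "W",
    "Year of the first Moon landing?": "1969",
    "Author of 'Things Fall Apart'?": "Chinua Achebe",
}

def failure_rate(answers: dict) -> float:
    """Fraction of known-answer questions whose reply misses the answer."""
    wrong = sum(
        1
        for question, reply in answers.items()
        if EXPECTED[question].lower() not in reply.lower()
    )
    return wrong / len(EXPECTED)

def ask_model(model: str, question: str) -> str:
    """Placeholder: wire up a real chat-completion call for your provider."""
    raise NotImplementedError

# Usage, once ask_model is wired up:
# for model in ("model-a", "model-b", "model-c"):
#     answers = {q: ask_model(model, q) for q in EXPECTED}
#     print(model, f"{failure_rate(answers):.0%} missed")
```

The substring check is deliberately crude; it only works for short factual answers you can match literally, which is exactly the kind of question this sanity check calls for.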

graemep•7mo ago
Oracle is a better word than religion for what you are talking about. Maybe people should remember how notoriously tricky oracles were even in their believers' eyes (the "an empire shall fall" story).

This video is about people who believe ChatGPT (or another LLM) is a sentient being sent to us by aliens or from the future to save us. An LLM saviour is pretty close to a religious belief. A pretty weird one, but still.

> To get a reality check, open up 3-4 different models (ChatGPT, Claude, Gemini, etc.), and ask them topics you know really well, and questions you already know the answers to. And see that maybe a quarter, or 25% will fail somewhat.

I have tried this a bit with ChatGPT, and yes, there are a lot of issues. Things such as literally true but misleading answers, incomplete information, and a lack of common sense.

kelseyfrog•7mo ago
Besides, the debate on oracularizing AI is much more fun than endlessly debating the meaning of consciousness.

People place plenty of trust in astrology, tarot, and the I Ching without requiring that they have a subjective experience.

If anything, there's a tendency among technologists to have a blind spot in identifying AI as such. The dismissal and sometimes contempt held for divination make it genuinely difficult to recognize it when it's not decked out in stars and moons.

It's interesting if anything that the Barnum principle applies in both cases.

adlpz•7mo ago
It's a bit like general web browsing.

The internet is full of pure nonsense, quack theories and deliberate fake news.

Humans created those.

The LLMs essentially regurgitate that, and on top they hallucinate the most random stuff.

But in essence the sort of information hygiene practices needed are the same.

I guess the issue is the delivery method. Conversation intrinsically feels more "trustworthy".

Also, AI is for all intents and purposes already indistinguishable from magic. So in that context it is hard for non-technical people to keep their guard up.

grues-dinner•7mo ago
Moreover, once they get onto the wrong track, they just dig in deeper and deeper until they've completely lost it. All the while saying how clever and perceptive you are for spotting their fuck-ups before getting it wrong again. It seems like if it doesn't work pretty much the first time (and to be sure, it does work right the first time often enough to activate the "this machine seems like it knows its stuff" neurons) you're better off closing it and doing whatever it is yourself. Otherwise, before long you're neck-deep in plausible-sounding bullshit and think it's only ankle-deep. But in a field you don't know well, you don't know when you're going below the statistical noise floor into la-la land.
social-relation•7mo ago
It's sometimes said in social theory that mundane phenomena like money, internet routers, and code are social relations. Chats are not simply conversations with static models, but rather intensely mediated symbol manipulation between conscious people. The historical development is interpretable in spiritual terms, and called to account by the truly religious, or god.
lioeters•7mo ago
> money, internet routers, and code are social relations

Could you recommend some further reading to dig into this insight?

Also I'm curious why you created such a topic-specific user, I guess for privacy?

reply-comment•7mo ago
Oh because I don't have an account! I only remember my professor talking about it. One can see critical theory as a productive meaning exercise running against the crust of status quo epistemologies via an unavoidable discomfort, which ultimately lands us in a more truthful because more just world. The social relation hermeneutic demystifies systems which center and benefit from perceived technological complexity. It reminds me that at the root we're all living in fractured relationship with each other, which we'll try anything to heal. Some authors from the syllabus:

Chinua Achebe, Arturo Escobar, Ashis Nandy, Dipesh Chakrabarty, Edward W. Said, Frantz Fanon, Gloria E. Anzaldúa, Jasbir K. Puar, Jodi A. Byrd, Michel-Rolph Trouillot, Ngũgĩ wa Thiong'o, Robin D. G. Kelley, Silvia Federici, Sundhya Pahuja, Leanne Betasamosake Simpson

lioeters•7mo ago
Thanks! I haven't heard of any person in that list - other than Chinua Achebe, author of Things Fall Apart. Oh and literally just this week I heard about Edward Said and his book Orientalism. Well I'm going to enjoy studying the works of these writers and thinkers.

> at the root we're all living in fractured relationship with each other

Indeed, and technology plays an increasing role in mediating and shaping those social relations. That's very relevant in the context of ChatGPT becoming a kind of oracle and object of worship.

rorylaitila•7mo ago
I take a lot of the reports with a grain of salt. But also, knowing how easily some people are hypnotized by what they perceive as superior intellects, it's totally conceivable. There is a segment of the population with a strong savior-following instinct.

Previously, activating this population required a high-IQ/EQ psychopath to collect followers, or schizophrenics who believed they were talking to a superior being ('my leader talks directly to me via his writings').

Now, however, people can hypnotize themselves into a kind of self-cult. It might be the most effective form of this phenomenon if it's highly attuned to the individual's own idiosyncratic interests.

In a typical cult, people fall into or out of the cult based on their internal alignment with the leader and failed enlightenment. But if every one of these people can have their own highly tailored cult leader, it might be a very hard spell to break.

paradox242•7mo ago
Imagine what happens if we awaken an actual god (AGI or ASI, depending on your definition). I have no doubt that it would have no trouble enlisting the help of willing human accomplices for whatever purposes it wishes. I expect it would understand how to play the role of the unknowable all-knowing entity that is here to save us from ourselves, no matter what its actual objectives might be (and I doubt they would be benevolent).
cainxinth•7mo ago
Humans are pattern recognition machines, and missing a pattern is generally more dangerous than a false positive, hence people notice all kinds of things that aren’t really there.

Functionally, it’s similar to why LLMs hallucinate.

upghost•7mo ago
> We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them[1]

Good video essay. Learned the origins of the term "cargo cult", which, to my surprise, has nothing to do with rust...

[1]: https://youtu.be/zKCynxiV_8I?t=26m04s

kylehotchkiss•7mo ago
I don't think LLMs specifically are becoming a religion, but I think the way some people look at/speak about AGI and its impact on the world has become a new religion. Especially when paired with UBI solving the unemployment problems it could create, which is so far from human nature that I think it is even less likely than AGI.

I philosophically don't think AGI as described is achievable because I don't think humans can build a machine more capable than themselves ¯\_(ツ)_/¯ But continuing to insinuate it'll be here in a few months sure helps put some dollars in CEOs' pockets!

literalAardvark•7mo ago
It doesn't need to be more capable than humans. It needs to be roughly as capable, and then it becomes recursively self-improving with very, very high velocity. (A gazillion monkeys with typewriters, if you will.)
1718627440•7mo ago
Why aren't humans "recursively self-improving with very, very high velocity"?
literalAardvark•7mo ago
Because we lack the CPU performance (and for 80% of us, the dedication). AI doesn't.
tim333•7mo ago
From the video it's not becoming a religion so much as telling people what they want to hear on an individual basis, like they are the new messiah or whatever. I guess it's not much madder than conventional religion.