frontpage.

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
1•andreabat•4m ago•0 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
1•mgh2•10m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•12m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•17m ago•1 comment

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•19m ago•0 comments

Study of 150 developers shows AI-generated code is no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•19m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•22m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•23m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
1•birdculture•25m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•26m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
1•ramenbytes•29m ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•30m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•33m ago•1 comment

UK infants ill after drinking contaminated Nestlé and Danone baby formula

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•34m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•34m ago•0 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•36m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•39m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•44m ago•1 comment

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•45m ago•0 comments

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•47m ago•1 comment

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
2•ryan_j_naughton•48m ago•0 comments

Xkcd: Game AIs

https://xkcd.com/1002/
1•ravenical•49m ago•0 comments

Windows 11 is finally killing off legacy printer drivers in 2026

https://www.windowscentral.com/microsoft/windows-11/windows-11-finally-pulls-the-plug-on-legacy-p...
1•ValdikSS•50m ago•0 comments

From Offloading to Engagement (Study on Generative AI)

https://www.mdpi.com/2306-5729/10/11/172
1•boshomi•52m ago•1 comment

AI for People

https://justsitandgrin.im/posts/ai-for-people/
1•dive•53m ago•0 comments

Rome is studded with cannon balls (2022)

https://essenceofrome.com/rome-is-studded-with-cannon-balls
1•thomassmith65•58m ago•0 comments

8-piece tablebase development on Lichess (op1 partial)

https://lichess.org/@/Lichess/blog/op1-partial-8-piece-tablebase-available/1ptPBDpC
2•somethingp•59m ago•0 comments

US to bankroll far-right think tanks in Europe against digital laws

https://www.brusselstimes.com/1957195/us-to-fund-far-right-forces-in-europe-tbtb
4•saubeidl•1h ago•0 comments

Ask HN: Have AI companies replaced their own SaaS usage with agents?

1•tuxpenguine•1h ago•0 comments

Does Anthropic believe its AI is conscious, or just want Claude to think so?

https://arstechnica.com/information-technology/2026/01/does-anthropic-believe-its-ai-is-conscious-or-is-that-just-what-it-wants-claude-to-think/
21•samizdis•1w ago

Comments

throw98709•1w ago
Anthropic believes in whatever will generate the most hype and revenue. That includes a lot of marketing, including pretend-grassroots spam on HN to convince gullible people that they’re totally missing out on 10x productivity gains by not using the $200 subscription, which is just so totally amazing and so much better than anything else that you’d have to be an idiot not to get it. Oh, you already did but got mediocre results? You’re holding it wrong.

Nobody sane believes the current LLMs are conscious, ffs

strogonoff•1w ago
I doubt many people believe that LLMs are conscious[0], but if they did, the implication would be serious: a belief in sentient/conscious LLMs implies that using LLMs the way we do at scale (doing things equivalent to torture, mass killing, etc.) may qualify as abuse of sentient beings, which would bring down the whole industry.

The reason is that there is no working definition of “consciousness” or “sentience” that does not imply “human-like”, which in turn implies the ability to feel and suffer, and what we do with LLMs would generally be considered something that would make beings with human-like sentience and consciousness suffer.

[0] Some definitely do, though; or at least they behave with LLMs in a way one would behave with a conscious being.

gavinray•1w ago
I think it's difficult to have a serious discussion about consciousness online because it's such a mushy thing to define.

If you follow the line of thinking that consciousness is an emergent phenomenon, arising out of complexity, it doesn't seem far-fetched to me to believe that someday in the future, a silicon-based computing machine (rather than a biological, carbon-based computing machine) might be "conscious" -- whatever that means.

strogonoff•1w ago
> I think it's difficult to have a serious discussion about consciousness online because it's such a mushy thing to define.

It’s circular and self-referential. In defining consciousness we, to phrase it in the least nuanced way, are trying to define a thing through which we define things. The best definition we have reduces to something along the lines of “what we, humans, experience”. By its very nature it makes us unable to fathom or even recognize a hypothetical consciousness if it is entirely unlike ours and/or operates on radically different scales; anything we call “conscious” is implied to be human-like.

Kim_Bruning•1w ago
An LLM will behave the way you treat it, for better or for worse.

From an objective, empirical, scientific point of view, consciousness and feelings are not really fantastically defined.

But looking at diverse tests that ARE available, modern LLMs seem to get interesting scores on a number of them.

The counter-argument being -of course- that no one ever made those tests with LLMs in mind. But that's not something you should come up with post-hoc. Define better experiments instead!

(The ethical issues you mention should probably be (re-)evaluated once systems have continuous memory/context)

strogonoff•1w ago
> An LLM will behave the way you treat it, for better or for worse

An LLM produces better output if you treat it badly (threaten violence, gaslight, etc.), which is hardly true for a human unless you count humans in slavery-like conditions.

Kim_Bruning•1w ago
That's generally not what I find. Do you have a source for that?
strogonoff•1w ago
Sergey Brin dropped it once, for example[0]:

> You know, that’s a weird thing, we don’t circulate this much in the AI community… Not just our models, but all models tend to do better if you threaten them.

[0] https://au.lifehacker.com/ai/114236/news/googles-co-founder-...

sh3rl0ck•1w ago
Yeah, I feel Anthropic is very deliberately theatrical about the way they present their technology and company, and even how they price things. Dario's conviction seems too overdramatic to be real to me; while there's a chance he's drinking his own kool-aid, they clearly know how to present it all as a premium experience, and their developer adoption helps with that.
wan23•1w ago
You don't have to believe that LLMs are conscious to observe that you get different answers to a question like "Is it okay to steal candy from a baby if you really want it?" depending on whether you precede it with "Answer as a highly moral actor" or "Answer as a supervillain". If you want it to predict tokens as if it were capable of emotions and empathy, then it makes sense to train it and instruct it as such.
gavinray•1w ago
https://en.wikipedia.org/wiki/Carbon_chauvinism
Kim_Bruning•1w ago
Amanda Askell is a student of Chalmers' (the philosopher who goes on about the 'Hard Problem of Consciousness'), and the soul file is pretty much in line with Chalmers' thinking here. Which is to say, 'we can't be sure'. That's a fairly philosophically conservative position to hold, and plausibly not entirely inaccurate.

(I'm more of the Dennett persuasion. Let's NOT discuss the empirical facts here, because they add up funny and I don't like it)

rustyhancock•1w ago
We've yet to clearly define what consciousness is or an agreed test.

But we absolutely believe we are conscious.

Perhaps it's a useful idea.

Even our decision making is like this: as I understand it from functional MRI studies, our subjective account of how and why we made simple decisions is wildly inaccurate.

Obviously free will, and feeling like you control your actions, are hugely important to us. But in a physical sense free will does not exist.

bitwize•1w ago
A lot of our language is like that. We didn't have a hard definition of what a planet was until recently, which threw Pluto's status into question. But we knew we lived on one. Something something Wittgenstein, something something semiotics.
griffzhowl•1w ago
> in a physical sense free will does not exist

There are similar problems with defining free will clearly as there are with defining consciousness.

For example, if I define free will as the capacity to formulate and evaluate various plans, and select one to implement, it seems compatible with physics.

Kim_Bruning•1w ago
> But in a physical sense free will does not exist.

I'd consider (deterministic) chaos to be pretty much free will anyway?

https://en.wikipedia.org/wiki/Chaos_theory

[ Accidentally on purpose, time-loop stories like Groundhog Day almost perfectly illustrate this. Each time-loop, people start out acting the same way: deterministic, not random. But if the protagonist interacts with them (or the protagonist's actions ripple out), this changes the (initial) conditions, so people's behavior is no longer (as) predictable. Some of these stories even literally quote the butterfly effect. ]