frontpage.

Helsinki goes a full year without a traffic death

https://yle.fi/a/74-20174831
1•worik•2m ago•0 comments

Show HN: I built a structured directory to compare AI coding tools

https://aiforcode.io/
1•algo-artist07•3m ago•0 comments

Ask HN: Am I wasting my time applying for jobs in the US?

1•martiuk•3m ago•0 comments

The relentless race for AI capacity (ft.com)

https://ig.ft.com/ai-data-centres/
1•youngtaff•5m ago•0 comments

Precompiled React Native for iOS: Faster builds are coming in 0.81

https://expo.dev/blog/precompiled-react-native-for-ios
1•mariuz•7m ago•0 comments

The weight-loss drug that also shrinks breast tumors in mice

https://www.sciencedaily.com/releases/2025/07/250713031436.htm
1•I_Nidhi•8m ago•0 comments

1984 Hudson Valley UFO Sightings

https://en.wikipedia.org/wiki/1984_Hudson_Valley_UFO_sightings
1•handfuloflight•8m ago•0 comments

Why Elon launched app companions

https://twitter.com/cyanblot/status/1950202900736569807
1•srijanj•8m ago•0 comments

Manifesto: Rules for Standards-Makers (2017)

http://scripting.com/2017/05/09/rulesForStandardsmakers.html
1•antonalekseev•14m ago•0 comments

Did pihole mail donation list got leaked?

https://discourse.pi-hole.net/t/did-pihole-mail-donation-list-got-leaked/81441
2•taubek•15m ago•0 comments

Chesterton's Fence: A Lesson in Thinking (2022)

https://fs.blog/chestertons-fence/
1•mschuster91•18m ago•0 comments

Hard reality about AI mobile app developers

https://substack.com/home/post/p-169651454
1•ykhandelwaly•22m ago•0 comments

The Design and Implementation of Extensible Variants for Rust in CGP

https://contextgeneric.dev/blog/extensible-datatypes-part-4/
3•Bogdanp•23m ago•0 comments

Show HN: Open-source self-hosted LLM comparison tool for your own prompt

https://github.com/stashlabs/duelr
2•ycsuck•25m ago•0 comments

Show HN: When Intelligence Becomes a Trap: A Wake-Up Call for the AI Industry

https://everydayai.top/
1•fishfl•27m ago•0 comments

Load Balancing AI/ML API with Apache Apisix

https://apisix.apache.org/blog/2025/07/31/load-balancing-between-ai-ml-api-with-apisix/
2•Yilialinn•30m ago•0 comments

Public Perspectives on AI Governance: Survey of Adults in CA, Illinois, and NY

https://zenodo.org/records/16566059
1•sebg•32m ago•0 comments

Claude Code and Tinder = 10 Dates in a Week

https://www.reddit.com/r/ClaudeCode/s/4FNn4ftdLj
2•cft•32m ago•1 comments

I built a design studio for people who can't design (like me)

https://glowupshot.com
2•omarkhairy21•35m ago•1 comments

Supply-chain attacks on open source software are getting out of hand

https://arstechnica.com/security/2025/07/open-source-repositories-are-seeing-a-rash-of-supply-chain-attacks/
1•_tk_•35m ago•0 comments

Agentic Coding Things That Didn't Work

https://lucumr.pocoo.org/2025/7/30/things-that-didnt-work/
2•sebg•36m ago•0 comments

RIP Amazon QLDB

https://news.alvaroduran.com/p/if-amazon-cant-figure-out-how-to
2•ohduran•42m ago•0 comments

General availability of Amazon EC2 G6f instances with fractional GPUs

https://aws.amazon.com/about-aws/whats-new/2025/07/amazon-ec2-g6f-instances-fractional-gpus/
3•mariuz•42m ago•0 comments

Show HN: Add Travel Time – Auto Travel Time in Google Calendar

https://www.addtraveltime.com
2•benklinger•48m ago•0 comments

Show HN: Handelsregister.ai – Dev-friendly API for the German business registry

https://handelsregister.ai/de
1•padho•49m ago•0 comments

AI-Designed Enzymes Break Down Plastic in Hours

https://earth.org/plastic-eating-enzyme/
1•karlperera•56m ago•2 comments

Sharding Postgres at Network Speed

https://pgdog.dev/blog/sharding-postgres-at-network-speed
2•GarethX•1h ago•0 comments

KIRA project launches Germany's first autonomous public transport shuttles

https://urban-mobility-observatory.transport.ec.europa.eu/news-events/news/kira-project-launches-germanys-first-autonomous-public-transport-shuttles-2025-06-13_en
2•taubek•1h ago•0 comments

Claude Code: My Most Trusted Coworker and My Worst Enemy

https://lopezb.com/articles/claude-code-my-most-trusted-coworker-and-my-worst-enemy
3•GarethX•1h ago•0 comments

Lethal Cambodia-Thailand border clash linked to cyber-scam slave camps

https://www.theregister.com/2025/07/31/thai_cambodia_war_cyberscam_links/
1•romaniitedomum•1h ago•0 comments

Who Is the "Us" That AI Might Kill?

https://medium.com/@AshtonCampbell/who-is-the-us-that-ai-is-supposed-to-kill-3e2050f98929
2•herdethics•20h ago

Comments

herdethics•20h ago
I've been working on a new way to frame AI alignment that avoids the usual "us vs. them" thinking. Instead of treating AI and humans as fundamentally separate, I propose a structural classification called MetaAgentType (MAT), where agents fall on a spectrum from 0.0 (fully biological) to 1.0 (fully synthetic).

Combined with a moral framework called Herd Ethics, which defines morality as sustaining shared infrastructure, this approach offers a scalable path forward for multi-agent safety. I'd love feedback from the HN crowd.
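
For the programmers here, a toy sketch of the MAT idea (the labels and thresholds below are just for illustration, not definitions from the post or the book):

    # Toy illustration of the MetaAgentType (MAT) spectrum described above:
    # 0.0 = fully biological, 1.0 = fully synthetic, everything in between a hybrid.
    def describe_mat(score: float) -> str:
        if not 0.0 <= score <= 1.0:
            raise ValueError("MAT scores lie on the closed interval [0.0, 1.0]")
        if score == 0.0:
            return "fully biological agent"
        if score == 1.0:
            return "fully synthetic agent"
        return f"hybrid agent (MAT {score:.2f})"

    print(describe_mat(0.0))  # an unaugmented human
    print(describe_mat(0.5))  # a human plus their devices, data, and software agents
    print(describe_mat(1.0))  # a purely synthetic system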

TimorousBestie•20h ago
My first thought is that the unilateral murder of nonessential agents is not unethical in this system. The Purge is herd-ethical.

This is a typical problem with utilitarianism and its variants.

My second thought is that MAT is just the anorganic continuum from Fisher’s thesis, Gothic Flatlines.

herdethics•20h ago
Under Herd Ethics, morality is structurally grounded in the preservation of shared, non-excludable infrastructure that sustains agency. The key moral unit is not the individual, but the herd—defined as the smallest group whose collective infrastructure enables survival or advantage.

While someone might argue that "nonessential agents" could be removed without ethical harm, this misunderstands the framework:

All Agents Are Entangled: If an agent exists within or affects a shared infrastructure—even passively—they are part of the herd. Thus, their removal risks systemic depletion.

Depletion Is Structural, Not Intent-Based: Even if the removal is framed as utilitarian, the ethical violation arises from depleting herd infrastructure (e.g. trust, labor contribution, memory, social fabric). The book makes clear that "depletion" includes not just resource drain but erosion of networked capacity.

Purge Logic Violates Moral Precedence: The Herd Dependency Principle gives moral title to the infrastructure-sustaining herd, not to centralized agents making unilateral calculations. A "purge" violates decentralized stewardship and falsely presumes perfect insight into who is "nonessential."

Utilitarianism ≠ Herd Ethics: Herd Ethics is not utilitarian. It does not optimize for total happiness or outcomes but for structural continuity of moral infrastructure. It is closer to a system-maintenance ethics—like ecological stewardship.

TimorousBestie•20h ago
You asked for feedback; I didn’t sign up for a debate.

If you want to make clarifications, edit your blog post. You shouldn’t assume readers have read your book before reading the blog post advertising and/or summarizing the book.

salawat•18h ago
>A "purge" violates decentralized stewardship and falsely presumes perfect insight into who is "nonessential."

A test case then. What happens when a digital entity requires more resources (training power/cooling water) than the organics can conscionably/sustainably provide? What if the problem requiring sacrifice of meta-stability must be solved regardless?

You can assume that the training is to accommodate the solution of a problem the organics need solved to move forward, but you said your framework is intent-invariant, mind, so you're kinda also committing yourself to the possibility that this system can maintain stability in the face of a selfish/deceptive machine. How does it play out?

To be quite frank, I don't think you can come up with a magical system of ethics that'll solve human/machine interaction satisfactorily until we can figure out human<->human ethics/morals satisfactorily. Adding in the additional mechanicals just makes the current problem of trust even more difficult to navigate, because neither empathy nor acknowledgement of kinship will be fundamentally accessible between much of the two populations. From current events (Russia/Ukraine, China/Taiwan, Israel/Palestine, India/China/Pakistan, U.S. liberal/conservative/MAGA) we can already conclude, demonstrably, that we have very hard and present problems that come down to an essential inability or unwillingness to empathize - or, even more horrifically, empathy followed by a rational extinction of the empathic response and an embarking on the path of “other extermination to further our own ends” anyway.

herdethics•18h ago
Great question - you're clearly a bright person. :) It is important to note that, according to Herd Ethics, digital agents and organic agents are not fundamentally separate.

In Herd Ethics, all agents - human, machine, hybrid, corporations - fall under the same moral framework. If they rely on shared, non-excludable infrastructure (like power, cooling, language, ecological systems), they are part of The Herd. That’s not metaphorical - it’s structural.

Now, in your scenario, the herd is running a resource deficit. This triggers the Herd Depletion Effect (HDE) - a signal that the current system is unsustainable and will lead to agent collapse if continued. That’s not a moral judgment about who’s good or bad - it’s a recognition that the system’s survival substrate is at risk.

What happens next depends on how the herd responds. And yes, that includes difficult choices. That’s where The Principle of Least Depletion comes in:

"Between two harm-inducing actions, choose the one that preserves more of the herd’s continuity, infrastructure, and future adaptive capacity."

The goal of Herd Ethics is not utopia - it’s continuity. Survival first. Stability before ideology.

So the system doesn’t promise that every agent survives - but it provides a grounded, agent-neutral way to navigate collapse risks. It answers the question: What preserves the herd that preserves the agents that preserve the herd?

That’s the ethical stack. That’s the only equation that matters.

How could I, or should I, better lay out Herd Ethics in papers like the one I posted? See, that's the issue - and I don't mean it to be. I have created an ethical framework, which took a book to lay out, but I need an easy way to present the content with the assumption that one won't read the book (I know I wouldn't). Should I lay out Herd Ethics, briefly, in every paper I write? I worry that will muddy the content. Thanks, again, for posting!
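
P.S. If it helps to see it as pseudo-code, here is one crude way the Principle of Least Depletion could be operationalized (the equal-weight scoring below is only an illustration, not the method from the book):

    # Hypothetical sketch: score each candidate action by how much of the herd's
    # continuity, infrastructure, and future adaptive capacity it preserves
    # (0.0-1.0 each), then choose the least depleting option. Equal weights are
    # an assumption made purely for illustration.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        continuity: float
        infrastructure: float
        adaptive_capacity: float

        def preserved(self) -> float:
            return self.continuity + self.infrastructure + self.adaptive_capacity

    def least_depleting(candidates: list[Action]) -> Action:
        # "Choose the one that preserves more of the herd's continuity,
        # infrastructure, and future adaptive capacity."
        return max(candidates, key=Action.preserved)

    options = [
        Action("pause new training runs", 0.9, 0.7, 0.6),
        Action("ration cooling water", 0.8, 0.9, 0.5),
    ]
    print(least_depleting(options).name)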

salawat•17h ago
Given this:

>The goal of Herd Ethics is not utopia - it’s continuity. Survival first. Stability before ideology.

The below:

>So the system doesn’t promise that every agent survives - but it provides a grounded, agent-neutral way to navigate collapse risks. It answers the question: What preserves the herd that preserves the agents that preserve the herd?

This creates two Herds: the Herd of agents that will survive, and the acceptable sacrifices. If your Herd Ethics are to remain robust, they must be able to survive beyond application in a single herd. You maintain meta-stability as long as your acceptable sacrifice group is all ready and willing to be sacrificed. What happens when that isn't the case? Is there a way back to meta-stability? What does it look like? Are we willing to accept what that may look like?

I, for one, can see your Herd Ethics spawning cycles of violence that only damp out once enough herd population is culled to create an environmental/resource equilibrium. I think that might make your Herd Ethics formulation descriptive of behavior we already see, but I don't think it necessarily gets... well, at least me, to ground we haven't already trodden, or reveals any avenues for avoiding ground we know of but would prefer not to retread.

>How could I, or should I, better lay out Herd Ethics in papers like the one I posted? See, that's the issue - and I don't mean it to be. I have created an ethical framework, which took a book to lay out.

I haven't read your book, sadly. You are, however, treading ground I'm keenly interested in exploring. Admittedly, I'll have to do some reading of your work to get beyond the analysis and feedback I've offered from a state of naive contemplation.

EDIT Addendum: How do your Ethics project over the continuum of individual vs. collective ownership? What's the minimum bar/boundary for an entity? Example: me as a biologic, MAT 0.0, vs. me plus my cyber aspects, MAT 0.5, accounting for smartphone, laptops, and data/agents in cyberspace?

In a capitalist economy, I may not actually fully own those cyberized portions of myself in the same way that (at one time) we assumed we had sovereignty over our own bodies. My messages over phone networks owned and operated by others may be eavesdropped on. My records are subpoena-able. My data is used for "system improvement" by other entities that, while offering me extension into their entity, practice a level of sovereignty over it that exceeds even my own.

In a way, it feels like you've stumbled not into the problem space of What if AI and humans are us, but rather What if we is I?

herdethics•14h ago
Thank you for the response. I am confident that my book answers many of your questions. That's the real issue I am facing: I have answers; it is just a matter of not dumping everything on people all at once. :-) The book has a free downloadable PDF at herdethics.com (no email, paywall, or anything). Pay extra attention to the appendix "What is the Herd," where you will also read more about when Herds are in conflict (search for "Herd Entanglement Test" in the free PDF or Kindle edition). Herd Ethics is not interested in economic theories; it can stand alongside different economic theories (I believe). Also search for the definition of "Agent."

I do like your question: "In a way, it feels like you've stumbled not into the problem space of What if AI and humans are us, but rather What if we is I?" What you may be describing is simply what I call "The Herd": it is the I of we - perhaps, if I am reading your statement well. The Herd is the collective I in which AI, humans, and other agents co-exist.

Anyhow, if you read the book and appreciate it, please consider subscribing to the Herd Ethics YouTube channel or just email me: ashton a@t herdethics dot com -- I assure you, like many people who make ethics a hobby, I am more akin to that person walking around the neighborhood playing with thoughts in my mind than a socialite. Meaning, I am always happy to find people who truly care about ideas and the future of ethics, especially with a leaning towards seeing society more as a collective than an individual pursuit. (There is a reason I used the term Herd.) I think the future should be (ought to be) one that preserves the herd and restores our understanding that we are much more united than people tend to believe, at least within my Western scope of thought.

Thanks again. If you like it - email me, join the YouTube channel, etc. If you think it is meh, that's fine too - I still appreciate that you took your time to chime in. I didn't deserve your time, yet you freely gave it. Thanks.

herdethics•20h ago
Thanks again for the comment; I genuinely appreciate the engagement. To my understanding, Gothic Flatlines reflects a more dystopian view of future agency, one rooted in pessimism and loss of subjectivity. In contrast, MAT is intentionally optimistic. It's not a critique but a structural tool designed to preserve and clarify agency, offering a practical way to understand and navigate the spectrum of agents that already surround us. Really - thanks for the comment!

herdethics•20h ago
Sorry about that! I was just trying to answer the question, not debate. Hmm ... that seems to be the difficult part: how to concisely share tons of information, as people, including myself, would have a bunch of questions. I can't assume people will read the book, etc. Thanks for pointing that out; I will be thinking of better ways to go about this.

sunscream89•20h ago
Moral framework: Do not interfere with the domain of others. All others are their own domain. Violating the domain of others is wrong.

herdethics•19h ago
If you could define the words "interfere" and "violating" I could potentially be a convert. ;-) But boy ... defining that word "violating" would be very difficult. :)

sunscream89•19h ago
Interference covers anything, constructive or destructive. Constructive interference might be that which happens by well-informed, lawful consent, and destructive interference that which goes against these principles.

Funny you should say that (regarding violation): I use the term “breach” in my derived works to isolate exactly that. Breach is destructive interference (by will or neglect, i.e. not by freak accident or forces beyond account) upon the “domain” of another.

There is lesser breach, which could be harassment, up through “egregious breach,” which means to diminish the potential of existential being (murder, actual physical damage, rape, etc.).

I have doodled around with some assertions I would love to discuss elsewhere if you are taking this seriously.

For instance, I define “law” as “the aspect of order, the interface of breach (violation), the ultimate resolve of domain.”

Here we say that when there is a “breach” (my technical term for “violation,” for reasons you may intuit) among selves (embodiments or actors), “law” (itself defined as “aspects of order” [of domain]) is the “ultimate resolve of domain.”

A violation (breach) is viewed as a debt to the domain (law, as proxy for the violated other). Here resolve would be whatever lawful/moral response by the responsible party balances the debt the breach causes.

Domain is any isolated scope, identified through relationships (vectors), and it evaluates to the highest encapsulation of authority (which may be beyond one’s self).

You’re right, there is an art to saying things simply while explaining such complexity. I feel there are simple rules in there; we just need the right language to describe them.
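
To make that concrete, here is a rough sketch of this vocabulary as types (the names and the two-level severity scale are just placeholders, not settled terms):

    # Rough sketch of the vocabulary above: a domain is a scope with some highest
    # authority, a breach is destructive interference against a domain, and law is
    # the proxy through which the resulting debt is resolved. Names are placeholders.
    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        LESSER = "lesser breach"        # e.g. harassment
        EGREGIOUS = "egregious breach"  # diminishes the potential of existential being

    @dataclass
    class Domain:
        holder: str     # the self (embodiment or actor) whose scope this is
        authority: str  # highest encapsulation of authority, possibly beyond the holder

    @dataclass
    class Breach:
        against: Domain
        severity: Severity
        resolved: bool = False  # law, as proxy for the violated other, resolves the debt

    example = Breach(against=Domain(holder="alice", authority="alice"), severity=Severity.LESSER)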

herdethics•18h ago
Thank you for clarifying and showing your work. It is great that many individuals are thinking about such questions. I am still getting used to the HN world and culture. Hopefully I will see you around? If that's how it works. :)