frontpage.

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•3m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•5m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•6m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•7m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•9m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•10m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•15m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•16m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•16m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•17m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•19m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•22m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•25m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•31m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•33m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•38m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•39m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•40m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•43m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•44m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•46m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•47m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•50m ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•51m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•54m ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•55m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•55m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•57m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•1h ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•1h ago•1 comments

Legal Contracts Built for AI Agents

https://paid.ai/blog/ai-agents/paid-gitlaw-introducing-legal-contracts-built-for-ai-agents
72•arnon•4mo ago

Comments

ataha322•4mo ago
The question isn't just who's liable - it's whether traditional contract structures can even keep up with systems that learn and change behavior over time. Wonder if this becomes a bigger moat than the AI.
tuesdaynight•4mo ago
Probably a dumb question, but what do you mean with changing behavior over time? Contract with changing clauses? From my limited knowledge on the matter, the idea of a contract is getting rules that would not change without agreement from both parties.
candiddevmike•4mo ago
I encounter this all the time with GenAI projects. The idea of stability and "frozen" just doesn't exist with hosted models IMO. You can't bet that the model you're using will have the exact behavior a year from now, hell maybe not even 3 months. The model providers seem to be constantly tweaking things behind the scenes, or sunsetting old models very rapidly. It's a constant struggle of re-evaluating the results and tweaking prompts to stay on the treadmill.

Good for consultants, maybe, horrible for businesses that want to mark things as "done" and move them to limited maintenance/care and feeding teams. You're going to be dedicating senior folks to the project indefinitely.
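The re-evaluation treadmill described above can be made at least mechanical with a small regression harness that replays pinned prompts and flags divergence from previously approved answers. This is only a sketch: `call_model` is a stub standing in for whatever hosted-model client you use, and the golden prompts and 0.9 threshold are illustrative assumptions, not from any real project.

```python
# Sketch: detect hosted-model drift by replaying pinned prompts and
# comparing current outputs to answers captured at sign-off time.
from difflib import SequenceMatcher

GOLDEN = {
    # prompt -> answer captured when the pipeline was last signed off
    "Extract the year from 'Founded in 1998.'": "1998",
    "Is 7 prime? Answer yes or no.": "yes",
}

def call_model(prompt: str) -> str:
    """Placeholder for a real hosted-model API call."""
    return GOLDEN[prompt]  # stub: pretend nothing has drifted

def drift_report(threshold: float = 0.9) -> list[str]:
    """Return the prompts whose current output diverges from the golden answer."""
    drifted = []
    for prompt, expected in GOLDEN.items():
        actual = call_model(prompt)
        # Cheap string similarity; a real harness might use task-specific checks.
        score = SequenceMatcher(None, expected, actual).ratio()
        if score < threshold:
            drifted.append(prompt)
    return drifted

if __name__ == "__main__":
    print(drift_report())
```

Running this on a schedule turns "the model changed under us" from an anecdote into a diff you can act on before customers notice.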

htrp•4mo ago
You're gonna have to own the model weights, and there will be an entire series of providers dedicated to maintaining old models.
hodgesrm•4mo ago
This is a big motivation for running your own models locally. OpenAI's move to deprecate older models was an eye-opener to some but also typical behavior of the SaaS "we don't have any versions" style of deployment. [0] It will need to change for AI apps to go mainstream in many enterprises.

[0] https://simonwillison.net/2025/Aug/8/surprise-deprecation-of...

idiotsecant•4mo ago
This isn't a new problem. It's like if you built a business based on providing an interface to a google product 10 years ago and google deleted the product. The answer is you don't sell permanent access to something you don't own. Period.
avs733•4mo ago
I interpreted the comment as worrying about drift across many contracts not one contract changing.

Imagine I create a new agreement with a customer once a week. I’m no lawyer so might not notice the impact of small wording changes on the meaning or interpretation of each sequential contract.

Can I try and prompt engineer this out? Yeah, sure. Do I as a non-lawyer know I have fixed it? Not to a high level of confidence.

bryanrasmussen•4mo ago
humans.

Also it might be that with systems that learn and change behavior over time, some sort of contract structure is needed. Not sure if traditional is the answer though.

lazide•4mo ago
You literally don’t want contracts that ‘learn and change behavior over time’?

What is the stated use case here?

hodgesrm•4mo ago
No, at least not in all cases. Customers incur review costs and potentially new risks if you change contract terms unexpectedly. In my business many large customers will only adopt our ToS if we commit to it as a contract that does not change except by mutual agreement. This is pretty standard behavior.
lazide•4mo ago
I can’t think of any case where someone who cares about the contract (aka actual terms) would be okay with it just changing. Arguably, it violates the concept of a contract which in most legal systems requires a meeting of the minds.

Do you have any examples where it would be okay?

hodgesrm•4mo ago
Only a fraction of our customers insist on locking the contract terms. It's far less than half, and it's not correlated in an obvious way to the value of the contract.
n8m8•4mo ago
Can't scroll; the cookies disclaimer doesn't work in Firefox with uBlock Origin :(
aleatorianator•4mo ago
reader mode?
Neywiny•4mo ago
That's why I always incognito. Sure, I accept your cookies. They're gone in a few hours anyway
Neywiny•4mo ago
I'm not sure I understand why this is about agents. This feels more like contracting than SaaS. If I contract a company to build a house and it's upside down, I don't care if it was a robot that made the call, it's that company's fault not mine. I often write electronic hardware test automation code and my goodness if my code sets the power supply to 5000V instead of 5.000V (made up example), that's my fault. It's not the code's fault or the power supply's fault.

So, why would you use a SaaS contract for an agent in the first place? It should be like a subcontractor. I pay you to send 10k emails a day to all my clients. If you use an agent and it messes up, that's on you. If you use an agent and it saves you time, you get the reward.

nemomarx•4mo ago
To have that you need a human to take responsibility somewhere, right?

I think people want to assign responsibility to the "agent" to wash their hands in various ways. I can't see it working though

arnon•4mo ago
If I am a company that builds agents and I sell one to someone, and that someone then loses money because the agent did something it wasn't supposed to: who's responsible?

Me as the person who sold it? OpenAI who I use below? Anthropic who performs some of the work too? My customer responsible themselves?

These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.

Xylakant•4mo ago
> These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.

Why? You have a delivery and you entered into some guarantees as part of the contract. Whether you use an agent, or roll a dice - you are responsible for upholding the guarantees you entered into as part of the contract. If you want to offload that guarantee, then you need to state it in the contract. Basically, what the MIT Licenses do: "No guarantees, not even fitness for purpose". Whether someone is willing to pay for something where you enter no liability for anything is an open question.

mlinhares•4mo ago
Technically that's what you do when you google or ask chatgpt something, right? They make no explicit guarantees that any of what is provided back is true, correct or even reasonable. You are responsible for it.
idiotsecant•4mo ago
It's you. You contracted with someone to make them a product. Maybe you can go sue your subcontractors for providing bad components if you think you've got a case, but unless your contract specifies otherwise it's your fault if you use faulty components and deliver a faulty product.

If I make roller skates and I use a bearing that results in the wheels falling off at speed and someone gets hurt, they don't sue the ball bearing manufacturer. They sue me.

hobs•4mo ago
Yes they do. Adding "plus AI" changes nothing about contract law; OAI is not giving you indemnification for crap, and you can't assign liability like that anyway.
Neywiny•4mo ago
Agreeing with the others. It's you. Like my initial house example, if I make a contract with *you* to build the house, you provide me a house. If you don't, I sue you. If it's not your fault, you sue them. But that's not my problem. I'm not going to sue the person who planted the tree, harvested the tree, sawed the tree, etc etc if the house falls down. That's on you for choosing bad suppliers.

If you chose OpenAI to be the one running your model, that's your choice not mine. If your contract with them has a clause that they pay you if they mess up, great for you. Otherwise, that's the risk you took choosing them

hluska•4mo ago
In your first paragraph, you talk about general contractors and construction. In the construction industry, general contractors have access to commercial general liability insurance; CGL is required for most bids.

There’s nothing quite like CGL in software.

Neywiny•4mo ago
Maybe I'm not privy to the minutiae, but there are websites talking about insurance for software developers. Could be something. Never seen anyone talk about it though.
Gerardo1•4mo ago
Did you, the company who built and sold this SaaS product, offer and agree to provide the service your customers paid you for?

Did your product fail to render those services? Or do damage to the customer by operating outside of the boundaries of your agreement?

There is no difference between "Company A did not fulfill the services they agreed to fulfill" and "Company A's product did not fulfill the services they agreed to fulfill", and that holds just the same when the product in question happens to be an AI agent.

jacobr1•4mo ago
Well, that depends on what we are selling. Are you selling the service, black-box, to accomplish the outcome? Or are you selling a tool. If you sell a hammer you aren't liable as the manufacturer if the purchaser murders someone with it. You might be liable if when swinging back it falls apart and maims someone - due to the unexpected defect - but also only for a reasonable timeframe and under reasonable usage conditions.
Neywiny•4mo ago
I don't see how your analogy is relevant, even though I agree with it. If you sell hammers or rent them as a hammer providing service, there's no difference except likely the duration of liability
jacobr1•3mo ago
The difference isn't renting or selling a hammer. The difference is providing a hammer (rent/sell) vs. providing a handyman who will use the hammer.

In the first case the manufacturer is only liable for defects, for normal use of the tool. So the manufacturer is NOT liable for misuse.

In the second case, the service provider IS liable for misuse of the tool. If they, say, break down a whole wall for some odd reason while making a repair, they would be liable.

In both cases there is a separation between user/manufacturer liability - but the question relevant to AI and SaaS is just that. Are you providing the tool, or delivering the service in question? In many cases, the fact the product provided is SaaS doesn't help - what you are getting is "tool as a service."

seanhunter•4mo ago
These are absolutely questions that classic contracts cover.
pcrh•4mo ago
AI "agents" would be treated the same as machines that do or don't perform according to promise.

Otherwise, "agents" as a class in contracts are well covered by existing law:

https://en.wikipedia.org/wiki/Law_of_agency

nocoiner•4mo ago
Classic contracts cover liability and allocation of risk in, like, literally every contract ever written?
mort96•4mo ago
If I am a company that builds technical solutions and I sell one to someone, and that someone then loses money because the solution did something it wasn't supposed to: who's responsible?

Me as the person who sold it? The vendor of a core library I use? AWS who hosts it? Is my customer responsible themselves?

These are questions that classic contracts typically cover and the legal system is used to dealing with, because technical solutions have always had bugs and do unexpected things from time to time.

If your technical solution is inherently unreliable due to the nature of the problem it's solving (because it's an antivirus or firewall which tries its best to detect and stop malicious behavior but can't stop everything, because it's a DDoS protection service which can stop DDoS attacks up to a certain magnitude, because it's providing satellite Internet connectivity and your satellite network doesn't have perfect coverage, or because it uses a language model which by its nature can behave in unintended ways), then there will be language in the contract which clearly defines what you guarantee and what you do not guarantee.

Gerardo1•4mo ago
Exactly. I have said several times that the largest and most lucrative market for AI and agents in general is liability-laundering.

It's just that you can't advertise that, or you ruin the service.

And it already does work. See the sweet, sweet deal Anthropic got recently (and if you think $1.5B isn't a good deal, look at the range of compensation they could have been subject to had they gone to court and lost).

Remember the story about Replit's LLM deleting a production database? All the stories were AI goes rogue, AI deletes database, etc.

If Amazon RDS had just wiped a production DB out of nowhere, with no reason, the story wouldn't be "Rogue hosted database service deletes DB"; it would be "AWS randomly deletes production DB" (and AWS would take a serious reputational hit because of it).

Animats•4mo ago
These people want to assign it to the customer. See above.
IIAOPSW•4mo ago
People assign responsibility to "agents" in contracts to wash their hands of things in various ways all the time, and it usually works.

Wait is this still about AI?

jimbo808•4mo ago
This is lawyers buying the hype that LLMs are actually intelligent and capable of autonomous decision making.
mort96•4mo ago
Wellll...

LLMs are not actually intelligent, and absolutely should not be used for autonomous decision making. But they are capable of it... as in, if you set up a system where an LLM is asked about its "opinion" on what should be done, it will give a response, and you can make the system execute the LLM's "decision". Not a good idea, but it's possible, which means someone's gonna do it.
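The "system that executes the LLM's decision" setup described above is usually made less reckless by forcing the model's output through an explicit allowlist, so free-text output can never trigger an unapproved action. This is a sketch under invented names (`decide`, the action table), not any real framework:

```python
# Sketch: an LLM "decision" string only runs if it maps to an
# explicitly allow-listed action; anything else is rejected.
ALLOWED_ACTIONS = {
    "send_reminder": lambda: "reminder queued",
    "do_nothing": lambda: "no-op",
}

def decide(context: str) -> str:
    """Placeholder for asking an LLM what to do; returns an action name."""
    return "send_reminder"  # stub output for illustration

def execute_decision(context: str) -> str:
    action = decide(context)
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        # Unrecognized model output never executes: the allowlist,
        # not the model, is the safety boundary.
        raise ValueError(f"model proposed unapproved action: {action!r}")
    return handler()
```

The design point is that accountability stays with whoever curated `ALLOWED_ACTIONS`, which is exactly the liability question the thread is circling.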

BoorishBears•4mo ago
This is the birth of a new anthropomorphic mind virus around how LLMs operate, funded by a team looking desperately for distribution.

11/10 content marketing but it will be a shame if this gets any attention outside this comment section.

LiquidSky•4mo ago
I think it’s the opposite: people with lots of technical knowledge and little legal knowledge (but who believe the former grants them mastery of the latter) trying to create “one weird trick” workarounds to avoid legal responsibility, not understanding that the law doesn’t work that way.
jrm4•4mo ago
Sigh -- another not-even-thinly-veiled ducking of "A computer can never be held accountable, therefore a computer must never make a management decision."

This is not the way we want to be going.

binarysneaker•4mo ago
Which way should we be going?
jrm4•4mo ago
More accountability for humans and/or corporations, not less?
Animats•4mo ago
Legal contracts built for sellers of AI agents.

The contract establishes that your agent functions as a sophisticated tool, not an autonomous employee. When a customer's agent books 500 meetings with the wrong prospect list, the answer to "who approved that?" cannot be "the AI decided."

It has to be "the customer deployed the agent with these parameters and maintained oversight responsibility."

The MSA includes explicit language in Section 1.2 that protects you from liability for autonomous decisions while clarifying customer responsibility.

The alternative is that the service has financial responsibility for its mistakes. This is the norm in the gambling industry. Back when GTech was publicly held, their financial statements listed how much they paid out for their errors. It was about 3%-5% of revenue.

Since this kind of product is sold via large scale B2B deals, buyers can negotiate. Perhaps service responsibility for errors backed up by reinsurance above some limit.

nadis•4mo ago
> "The template uses CommonPaper's Software Licensing Agreement and AI Addendum as a foundation, adapted for the unique characteristics of AI agents. Nick and the GitLaw team built this based on patterns from reviewing hundreds of agent contracts. We contributed our research from working with dozens of agent companies on monetization challenges."

Unless I'm misunderstanding and GitLaw and CommonPaper are related or collaborating, I feel like this callout deserves to be mentioned earlier on and the changes / distinctions ought to be called out more explicitly. Otherwise, why not just use CommonPaper's version?