frontpage.

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•8m ago•0 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•11m ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•15m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•17m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•27m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•31m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•32m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•38m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•38m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•38m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•40m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•45m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•56m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•1h ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•1h ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
2•alexjplant•1h ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
3•akagusu•1h ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•1h ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
35•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long"(Sonnet73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•2h ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•2h ago•1 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•2h ago•1 comments

Do Users Verify SSH Keys? (2011) [pdf]

https://www.usenix.org/system/files/login/articles/105484-Gutmann.pdf
49•8organicbits•3mo ago

Comments

radial_symmetry•3mo ago
I appreciate the to-the-point abstract
tobinfricke•3mo ago
Also in compliance with Betteridge's law of headlines: "Any headline that ends in a question mark can be answered by the word no."
kqr•3mo ago
I feel like this is unnecessarily reductive. The initial handshake is always fraught with security problems. I struggle to see a scenario in which a bad actor is able to give me the address of a bad machine, yet not able to trick me into accepting their host key as the correct one.

I would, however, definitely spend effort verifying a host key that changes unexpectedly.

jrochkind1•3mo ago
I'll be honest, I have never spent effort on a host key that had changed unexpectedly, and at least a few have.
kragen•3mo ago
I've often called people on the phone and stuff. It depends somewhat on what's at stake. Authenticating users with SSH passwords puts much more at stake than using public keys, since an attacker who can get you to send your unencrypted password to a malicious server once can steal your account; deploying PAKE algorithms (successors to SRP, see https://eprint.iacr.org/2021/1492.pdf) could mitigate that, but I don't think any shipped SSH version has ever supported a PAKE algorithm.
amelius•3mo ago
We need a new protocol, where installing the OS of a new machine automatically installs a trusted key from an inserted USB drive, so that the machine automatically becomes part of the "enclave".

Or something like that.

organsnyder•3mo ago
This is common in corporate environments.
MaxMatti•3mo ago
The paper does mention that you can have your SSH keys signed by a CA, so in a company the IT staff could configure everybody's OS to only trust SSH keys signed by the organization.
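
For illustration only, a minimal OpenSSH sketch of that setup (hostnames, paths, and key material below are placeholders): the organization signs each server's host key with an internal CA, and clients trust that CA through a @cert-authority line in known_hosts.

    # CA machine: sign a server's host key (identity/principal are placeholders)
    ssh-keygen -s host_ca -I web01.corp.example -h -n web01.corp.example \
        -V +52w /etc/ssh/ssh_host_ed25519_key.pub

    # Server's sshd_config: present the resulting certificate
    HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

    # Every client's known_hosts: trust any host cert signed by that CA
    @cert-authority *.corp.example ssh-ed25519 AAAA...host_ca_public_key...
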
otabdeveloper4•3mo ago
> you can have your ssh keys signed by a ca

Good idea. That way when your CA private key leaks (the key which we never ever rotate, of course) the bad guys can compromise the whole fleet and not just one server. Bonus points if the same CA is also used for authenticating users.

waste_monk•3mo ago
>That way when your CA private key leaks (the key which we never ever rotate, of course)

As with X.509, any serious usage will involve a hardware security module, so that compromise of the CA host does not allow the key to be leaked. You'd still have a very bad day, but it can be mitigated.

I do think it's a fairly significant flaw that SSH CA doesn't support intermediate CAs (or at least didn't last time I looked into it) to enable an offline root CA.

>Bonus points if the same CA is also used for authenticating users.

The SSH CA mechanism can be used for both Host and User auth, yes.

Keeping in mind, in a real use case this would be tied to something like active directory / LDAP, so you can automate issuance of ssh keys to users and hosts.

Systems configured to trust the SSH CA can trust that the user logging in is who they say they are because the principal has already been authenticated and vouched for by the identity provider, no more manually managing known_hosts and authorized_keys, or having to deal with Trust On First Use or host key changed errors.

You can also set the CA's endorsement of the issued keys to fairly short lifetimes, so you can simplify your keymat lifecycle management a great deal - no worrying about old keys lying around forever if the CA only issues them as valid for an hour / day / etc.

Overall I think you still come out ahead on security.
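
As a hedged sketch of the short-lifetime point (names and paths here are made up): the CA signs a user key with a validity window of a few hours, and servers only need to trust the CA's public key rather than individual entries in authorized_keys.

    # Sign alice's key so the resulting certificate is only valid for 8 hours
    ssh-keygen -s user_ca -I alice@corp -n alice -V +8h ~/.ssh/id_ed25519.pub

    # Servers' sshd_config: accept any user certificate signed by the user CA
    TrustedUserCAKeys /etc/ssh/user_ca.pub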

thesuitonym•3mo ago
You are under no obligation to use the generated, self-signed key. Most people do because it's "good enough".
kragen•3mo ago
They might be able to spoof your DNS, for example if you're using their Wi-Fi, so you get the wrong IP address for the right hostname, but not your mail server's SSL. You could pass the host key fingerprint across an existing secure connection, such as in email or with ssh to a host that you already have the fingerprint of.
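
Concretely, that comparison can be done with stock OpenSSH tools (the hostname is a placeholder; the process substitution assumes a bash-like shell):

    # On the server, over the already-trusted channel: print the fingerprint
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

    # On the client, before trusting the host: fingerprint what the network serves
    ssh-keygen -lf <(ssh-keyscan -t ed25519 mail.example.com 2>/dev/null)
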
whatevaa•3mo ago
The $5 wrench xkcd applies here.
kragen•3mo ago
No, it really doesn't, as in most cases where people invoke it.
integralid•3mo ago
It does not. DNS attacks happen. They won't be used by an APT against you or me, but they may be used as an escalation mechanism inside a company, for example. It's also something a hacked router could do, though to be fair I've never heard of that happening.

I was actually personally the victim of such an (unsuccessful) attack on the Tor network. SSH login to my hidden service complained about a wrong fingerprint. The same thing happened when I tried again. After a Tor restart, the problem disappeared. I assume this was an attempt at an SSH MITM by one of the exit nodes?

kragen•3mo ago
That's possible. I've also had this happen with Wi-Fi captive portals; I assume redirecting all the port 22 traffic to port 22 on the router was an unintentional side effect of redirecting all the port 80 traffic to port 80 on the router.
kqr•3mo ago
Sure, but they'd have to do this on the first connection attempt. Rarely do I try to connect to new servers when I'm not using a trusted connection – among other things for this very reason.
dspillett•3mo ago
> I struggle to see a scenario in which a bad actor is able to give me the address of a bad machine, yet not be able to trick me into their host key being the correct one.

If you aren't bothering to verify then they do not need to trick you at all.

In DayJob we have a lot of clients send feeds and collect automated exports via SFTP, and a few to whom we operate the other way (us pulling data via SFTP or pushing it to their endpoint). HTTPS based APIs are very common and becoming more so, but SFTP is still big in this area (we offer some HTTPS APIs, few use them instead of SFTP).

One possible exploit route, for a malicious actor playing a long and targeted game, that could affect us:

1. Attacker somehow poisons our DNS, or that of a specific prospective client of ours, sending traffic for sftp.ourname.tld to their server, and has access to our mail.

2. Initially they just forward traffic so key verification works for existing users. They monitor for some time to record host addresses that already access the server, so when they start intervening they can keep just forwarding connections from those addresses, so those users see no warnings (and are unaffected by the hack).

3. When they do start intercepting connections from hosts not already on the list made above, instead of forwarding everything, existing users are unaffected¹, but new users coming in from entirely different addresses now go to the attacker's server and, if they are not verifying the key, will happily send information through it², authenticating with the initial user+pass we sent or with PKI using the public key they sent, while the malicious server connects through to ours to complete the transfers.

4. Now wait and collect data as no one realises there is a MitM, and later use any PII or other valuable information for ransom/extortion purposes.

Of course there are ways to mitigate this attack route. For one: source address whitelisting, which OpenSSH's key-based auth supports, since the acceptable source list can be included with the public key so only specific sources can use that key for auth (a sketch of such an entry follows the footnotes below). But the client would have to make the effort to do this, and if they aren't going to make the effort to verify the host key then they aren't going to make other efforts either.

We do have some clients who verify the host properly and/or give us source addresses to limit connections to when they provide a public key, we work with financial institutions who are appropriately paranoid about their data and the data of their customers, some even use PGP for data in transit (and in case it is ever stored where it shouldn't be) for an extra level of paranoia. But most do none of this. Most utterly ignore our strong suggestion that they use keys, or change passwords in case of email breach, instead using the password we mail them before first connection for eternity.

--------

[1] none of our clients are likely to be sending files from dynamic source addresses, at most the source might move around a v4/24 or v6/64, currently I don't think all of them connect from a single IPv4 address, I've had one recently let us know (months in advance) that their source address will be changing.

[2] it can connect to us and send the data
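
For reference, the source-restriction entry mentioned above looks roughly like this in OpenSSH's authorized_keys (the address range and key are placeholders):

    # authorized_keys: this key is only accepted from the client's known range
    from="203.0.113.0/24" ssh-ed25519 AAAA...clientkey... feeds@client.example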

zenmac•3mo ago
Servers should publish their key fingerprint to at least the authorized personnel of the group, so people know whether the server they are connecting to is actually that server.
otabdeveloper4•3mo ago
> ...a host key that changes unexpectedly.

Literally happens every single damn day and literally nobody on the face of this earth ever gives a shit.

Host keys are the stupidest idea in the history of computer so-called "security".

BenjiWiebe•3mo ago
Why are yours changing every day? If they always did that, then yes it would be a stupid idea. But they don't change on their own, or for no reason, so it isn't a stupid idea.

Mine change maybe once every couple of years, if I do a full reinstall without copying over the old host key. And then I know exactly why it changed.

otabdeveloper4•3mo ago
> Why are yours changing every day?

Nobody knows how the hell the host keys are generated in the first place. Don't worry about it.

> And then I know exactly why it changed.

Really? What is a "full" reinstall as opposed to a "non-full" reinstall, and exactly how much of a reinstall do I need for my host keys to change?

rcxdude•3mo ago
the only time the host keys should change is if you a) delete them (either by wiping the whole machine or just deleting the files), or b) explicitly regenerate them. If they're changing for any other reason you're doing something weird.
BenjiWiebe•3mo ago
Or they're getting MITM'd repeatedly by multiple different attackers...
otabdeveloper4•3mo ago
Probably not. MITM pretty much never happens in the real world.
otabdeveloper4•3mo ago
I don't think anybody actually generates host keys by hand. It's always some sort of "automation" script in your OS or SSH implementation.
BenjiWiebe•3mo ago
A system upgrade reinstalls every package, but does not regenerate host keys (Fedora). A full reinstall is wiping the drive completely, and running the installer from a LiveCD/LiveUSB. Nothing is retained, and new host keys are generated.

If my host keys were changing regularly, I would worry about it. There's no legitimate reason for that to be happening, since I'm not regularly wiping the drive and reinstalling, nor am I regularly manually deleting the host keys (the other way they get regenerated).

chasil•3mo ago
My corporate SFTP server, which became mandatory to use several years ago, presented multiple keys, apparently because it was behind DNS round-robin.

My attempts to convince them to use the same key came to naught, so instead I use one of the IP addresses.

I could alternatively erase the known_hosts entry on each transfer. That would probably have been preferable.

I also got a shell on it when I attempted ssh, so you can guess the care that is taken with it.
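
A sketch of the pin-one-address workaround in ~/.ssh/config (the names and address are invented); HostKeyAlias keeps the pinned key under a stable name in known_hosts even if the address has to change later:

    Host corp-sftp
        HostName 198.51.100.17          # one member of the round-robin pool
        HostKeyAlias corp-sftp.example  # stable known_hosts entry name
        User transfers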

dcminter•3mo ago
I guess the interesting question to me is: how often does this matter? How many successful mitm attacks on ssh connections are there and in what sort of circumstances do they occur?

It seems like it ought to matter, but if roughly nobody verifies and yet the sky has not fallen - does it?

1oooqooq•3mo ago
nobody robbed my house in years. i still lock the door.

it's so banal to check host keys.

marcosdumay•3mo ago
It's only for the first connection, and it's very rare that targets are valuable on the first connection.

On the other hand, we know of at least two suppliers of software that runs with elevated access everywhere (including the dev side of every advanced military) that have been breached by unknown parties for years. The most likely explanation, by far, is that the sky hasn't fallen yet only because nobody wants it to. And that leaves us vulnerable to somebody suddenly wanting it.

vbezhenar•3mo ago
It's really ridiculous that SSH does not use the standard PKI which is deployed everywhere. So insecure.
dspillett•3mo ago
To use the style of server identity management we use for, say, HTTPS, you need a “trusted” 3rd party involved to sign certificates. This is impractical for SSH in many (most?) cases for several reasons (SSH does support cert based identification and authentication, but there are not many circumstances where this is more practical than, or otherwise preferable to, TOFU for SSH).

In fact, many people who don't properly understand SSH's trust-on-first-use system (so don't actually verify server certificate fingerprints) argue for browsers to support it as an option alongside the current certificate signing & verification regimes.

kragen•3mo ago
By "standard PKI which is deployed everywhere" do you mean SSL certificates? That would make you vulnerable to dozens of poorly secured CAs throughout the world; any attacker who could penetrate one of them could then use that access to MITM any SSH connection in the world (if they could additionally spoof DNS).

SSL certificates are probably the best we can do for the "talk to a server you've never heard of" scenario, but we can do enormously better for the scenario where you're SSHing into a server you already have a pre-existing trust relationship with.

vbezhenar•3mo ago
It is good enough for websites; it will be good enough for SSH. A "pre-existing trust relationship" prevents key rotation, which is a standard security measure (unheard of in SSH, of course).
kbolino•3mo ago
Let us set aside the reasons why SSH adopted a different certificate format (namely, that X.509 is much more complex than they needed at the time).

WebPKI only realistically serves a small portion of the SSH hosts out there. This is quite different from the situation with HTTPS. Even so, this would still be very convenient and useful. As I said elsewhere, I think this is sub-5% of SSH servers.

X.509 more broadly could replace SSH certificates. Many institutional settings already have trust stores set up to include their in-house CAs. Public clouds and major hosting providers could also set up their own CAs, but they would have trouble distributing them (cf. AWS RDS, for example). Now we're probably up to 25% or so of deployed SSH servers. In the case of clouds, though, this adds a massive new exploitation vector (IP reassignment) and thus puts pressure on expiration/revocation.

The rest are going to need self-signed certs.

Between the non-WebPKI CA distribution problem and the probable predominance of self-signed certs, trust-on-first-use would still be the norm, and so relying on pre-existing trust relationships would still be necessary. We could augment TOFU/known-hosts with some kind of certificate or CA pinning rather than just key pinning, though.

So, again, while I think adopting X.509 isn't a bad idea, and makes a lot more sense today than it did in 2010 (pre-Heartbleed!) when SSH added certificates, it's not really solving the problem that SSH has much better than today's solutions, no matter how well it solves the problem that HTTPS has.

kragen•3mo ago
> It is good enough for websites, it will be good enough for ssh.

This is backwards. Breaking SSH authentication permits subverting most websites; the converse is not true.

> "pre-existing trust relationship" prevents from rotation keys

This is also false. Things like Signal and OTR rotate keys frequently and automatically within pre-existing trust relationships.

advisedwang•3mo ago
ssh does support certificate based authentication [1]

[1] https://docs.redhat.com/en/documentation/red_hat_enterprise_...

jon-wood•3mo ago
Worth noting this is similar to but not the same as the type of certificate based authentication used in web browsers. Most notably you can't chain CAs, so there is no root of trust beyond whoever operates the CA you care about telling you the public key out of band.

For SSH this is fine, because very rarely is anyone connecting to a random SSH server on the internet without being able to talk to the operators (hi Github, we see you there, being the exception).

vbezhenar•3mo ago
You can't just use letsencrypt certificates and make it work out of the box. Still insecure.
kbolino•3mo ago
The CA/Browser Forum does not want to support this either. They are only interested in public, domain-verified websites served over HTTPS. They forbid client certificates and dual-use CAs, they require certificate transparency and short expiration times, and their policies get stricter every year. Most SSH deployments would not want to accept these constraints.

So, even if SSH supported X.509 certificates, which isn't necessarily a bad idea, it would be completely detached from WebPKI, thus removing most of the benefit.

vbezhenar•3mo ago
A lot of servers do have domains associated with them. So that's not an issue.

There are CAs which will issue certificates for public IP addresses, so any public SSH server can also use these certificates.

There's no reason to detach ssh PKI from Web PKI. They can use exactly the same certificates and keys.

kbolino•3mo ago
There is no doubt some number of SSH servers which have public domain names and/or public IP addresses, can accept DNS verification or running a completely unrelated HTTP server for IP verification, don't mind having their existence published in certificate transparency logs, don't care about or can separately handle client certs, and don't mind the SSH server restarting every ~month (until this gets shortened again) when the certificate is rotated. However, I would estimate the share of such sites at less than 5% of deployed SSH servers. The primary use case I can see here is to reuse an existing HTTPS cert for SSH on a box that already hosts a website.

FWIW, there is an RFC for X.509 certificates in SSH, but it has not achieved wide adoption: https://www.rfc-editor.org/rfc/rfc6187

kbolino•3mo ago
(I have also responded to you in kragen's subthread, you may want to consolidate the discussion there)
vbezhenar•3mo ago
A more interesting statistical question is: how many connections worldwide are opened to servers with and without a public domain/IP?

I think that GitHub alone accounts for a sizeable chunk of these connections. So if there were some better mechanism to establish trust before the first handshake, it would benefit all of them.

One approach that I could envision is to simply host the SSH public key at some well-known path (github.com/.well-known/ssh.pub); the SSH client would grab it over HTTPS before the first connection and whenever the key suddenly changes.
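
No SSH client does this today, but the idea can be approximated by hand; a rough sketch, where the .well-known URL is the hypothetical path from the comment, not a real endpoint:

    # Hypothetical: fetch the published key over HTTPS, then compare fingerprints
    curl -sf https://github.com/.well-known/ssh.pub -o /tmp/published.pub
    ssh-keygen -lf /tmp/published.pub
    ssh-keygen -lf <(ssh-keyscan -t ed25519 github.com 2>/dev/null)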

kbolino•3mo ago
I think your proposed solution is a great idea, but we're evaluating it at the wrong layer of the software stack. The real problem is Git(Hub)'s (mis-)use of SSH, not SSH itself per se. Adding full TLS and HTTP stacks to SSH clients would be massive bloat and a security/maintenance burden that OpenSSH, PuTTY, etc. probably don't want to take on. Whereas, it would be much easier for Git to add this functionality to its client, since it already has HTTPS support.

I think we could also attack this problem from the angle of phasing out git+ssh protocol (or at least greatly reducing its use) by improving unattended/headless HTTPS user authentication.

kemotep•3mo ago
Thanks for sharing this! Yesterday I was just wondering about ssh key verification techniques for third party services.

SSH keys are amazing: portable and in some ways easier to use than passkeys. But for them to successfully replace passwords and account configuration, which works decently well for a service like pico.sh, the user experience needs to improve significantly. Not impossible, but verification does become a continuous and ongoing problem.

erikerikson•3mo ago
It fails to mention that you can paste in the expected key. Of course, if the source the key is copied from is compromised, that's no help, but that's a higher bar. Still easy, and it doesn't rely on human frailty.
jon-wood•3mo ago
Or you can use SSH certificates, where you work on the basis that if the host key is signed by the correct CA then it's legit. No more TOFU required beyond the need to trust whatever source you got your CA's public key from.
hk1337•3mo ago
Coupled with the fact that you have to have a matching private key that you created, and that you're using ssh-config, is this even necessary?
teeray•3mo ago
I kinda like the approach GitHub takes: they just publish their fingerprints here: https://docs.github.com/en/authentication/keeping-your-accou...

This is served over TLS, so it's no worse than TLS. You can also benefit from the paved road that LetsEncrypt has provided. It might not be as smooth as SSH CAs once they're set up, but setting those up and the Day 2 operations involved aren't nearly as straightforward.
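
In practice that check is just comparing the fingerprint shown at first connect against the published page before accepting it, e.g.:

    # Prompts with github.com's key fingerprint; compare it to the docs page
    # before answering "yes" (ask is the default StrictHostKeyChecking setting)
    ssh -o StrictHostKeyChecking=ask -T git@github.com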

beala•3mo ago
Terminal.shop lets you order coffee over ssh, which is kind of novel and fun. I did it, and the coffee was good! This post reminded me that they've gotten enough questions about security that they've added this to their FAQ:

> is ordering via ssh secure?# you bet it is. arguably more secure than your browser. ssh incorporates encryption and authentication via a process called public key cryptography. if that doesn’t sound secure we don’t know what does. [1]

I think this is wrong though for exactly the reasons described in this post. TLS verifies that the URL matches the cert through the chain of trust, whereas SSH leaves this up to the user to do out-of-band, which of course no one does.

But then the author of this article goes on to say (emphasis mine):

> This result represents good news for both the SSL/TLS PKI camps and the SSH non-PKI camps, since SSH advocates can rejoice over the fact that the expensive PKI-based approach is no better than the SSH one, while PKI advocates can rest assured that their solution is no less secure than the SSH one.

Which feels like it comes out of left field. Certainly the chain of trust adds some security, even if it's imperfect. I know many people just click through the warning, but I certainly don't.

[1] https://www.terminal.shop/faq

tw04•3mo ago
>TLS verifies that the URL matches cert through the chain of trust,

I think you need to point out that TLS utilizes the browser's cert store for that chain of trust. If a bad actor acquires an entity that has a trusted cert, or your cert store is compromised, that embedded cert store is almost entirely useless, which has happened on more than one occasion (the Chinese government and Symantec most recently).

https://expeditedsecurity.com/blog/control-the-ssl-cas-your-...

This is typically caught pretty quickly but there's almost nothing a user can do to defend against a chain of trust attack. With SSH, while nobody does it, at least you have the ability to protect yourself.

zie•3mo ago
In SSH, it's a two-way handshake: the client ordering the coffee also gets a cert to prove their identity.

In browser land, the client browser doesn't get a cert to prove their identity, it's one-way only.

Certainly TLS supports client certs, and browsers (at least some) technically even implement a version, but the UX is SOOOO horrible that nobody uses it. Some people have tried; the only ones that have ever seen any success with client-side authentication certificates in a web browser are webauthn/passkeys and the US Military (their ID cards have a cert in them).

webauthn/passkeys are not fully baked yet, so time will tell if they will actually be a success, but so far their usage is growing.

kbolino•3mo ago
I think webauthn/passkeys will be more successful (frankly I think they already have been) because they're not part of TLS. The problem with client certs, and other TLS client auth like TLS-SRP, is that it inherently operates at a different layer than the site itself. This cross-cutting through layers greatly complicates getting the UX right, not just on the browser side (1) but also on the server side (2). Whereas, webauthn is entirely in the application layer, though of course there's also some supporting browser machinery.

(1) = Most browsers defer to the operating system for TLS support, meaning there's not just a layer boundary but a (major) organizational one. A lot of the relevant standards are also stuck in the 1990s and/or focused on narrow uses like the aforementioned U.S. military and so they ossified.

(2) = The granularity of TLS configuration in web servers varies widely among server software and TLS libraries. Requesting client credentials only when needed meant tight, brittle coupling between backend applications and their load balancer configuration, which was also tricky to secure properly.

zie•3mo ago
So true, two-way certs with TLS have crappy implementations everywhere, not just in the browser.

I have 2 problems with webauthn/passkeys:

* You MUST run JavaScript, meaning you are executing random code in the browser, which is arguably unsafe. You can do things to make it safer; most of these things nobody does (never run 3rd-party code, Subresource Integrity, etc.).

* The implementations throughout the stack are not robust. Troubleshooting webauthn/passkey issues is an exercise in wasted time. About the only useful troubleshooting step you can do is delete the user passkey(s) and have them try again, and hope whatever broke doesn't break again.

zie•3mo ago
We transfer ACH files (i.e. paychecks) via SSH (SFTP) to several banks. You better believe I check keys. One of the banks forces key rotation every 2-ish years. I absolutely verify it every rotation and delete the old keys.

Occasionally it fails; almost always it's something unexpected happening, but occasionally we catch their errors (verified by connecting from various endpoints/DNS queries/etc.). We used to call them all the time whenever that happened. Now we just auto-retry on failure in an hour, and that fixes the issue all of the time (so far). We only re-try once and then fail with a ticket. Most of us like our paychecks, so we are pretty good about getting that ticket resolved quickly.

chuckadams•3mo ago
No, and expecting users to actually do so is a sign that something is very wrong about the process. TOFU turns out to be good enough for most purposes anyway, but if a key changes (perhaps the server was reimaged) then verifying it is about as friendly as a tax audit. Or using GPG.
SoftTalker•3mo ago
Yeah that is my experience. Users don't understand public key cryptography. You ask them for their public key and they send you the private one. They use the same key everywhere. They don't understand the difference between a host key and a login key. Ask them to do anything with their authorized_keys file and your next ticket will be "I'm locked out of my system."

They do understand passwords, and most can manage an SMS code as a second factor. That's about the limit of what you can count on.

chuckadams•3mo ago
I've been doing this for 30 years and sometimes I give the wrong key file on the command line by forgetting to add '.pub' to the end. Far as I remember, I've always caught it before I managed to send it somewhere public, and thankfully most of my keys nowadays have a passphrase that gets remembered in my OS's keychain. But the UX is really that bad.
franga2000•3mo ago
Users can understand asymmetric crypto, but the tools are so convoluted for no reason that they usually just give up. I've had no trouble explaining it to "average" computer users and they got it completely, but then actually using the tools for signing or authentication was the nearly impossible part.

Your key has two parts: public and private. You give your public part to the server so it knows it's talking to you, because only you have the private part. The server has its own pair and it gives you its public part so you know you're not talking to an impostor server. The private key is never sent; it stays on your computer, but it does some fancy math so the server can know you have it.

franga2000•3mo ago
This is entirely the fault of the software.

For planned key rotations, you could sign the new key with the old key and send that in the handshake, so the client could change the known_hosts file on its own.

For unplanned rotations (server got nuked), you could instruct your users to use a secure connection and run "ssh-replace-key server.example.com b3f620", which would re-run TOFU, with the last param being an optional truncated hash of the key for extra security.

You could also do a prompt like "DANGER!! The host key has changed. If you know this is expected or if your IT administrator told you to do so, type 'I know what I'm doing or the IT admin told me to do this'".
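
Part of the planned-rotation idea exists in OpenSSH already: the UpdateHostKeys client option lets a server you have already authenticated push additional or replacement host keys into known_hosts over that trusted connection. The unplanned case is still manual (ssh-replace-key above is the commenter's hypothetical, not a real tool); a sketch:

    # ~/.ssh/config: learn new host keys over an already-verified connection
    Host *
        UpdateHostKeys ask

    # Unplanned rotation: drop the stale entry, then re-verify on next connect
    ssh-keygen -R server.example.com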

foxyv•3mo ago
Until SSH servers implement PKIX based Host Key verification, it's always going to be fraught with issues like this. Users will just keep blindly accepting host keys because they "Don't got time for that."
EPendragon•3mo ago
The abstract for this paper is fire
egberts1•3mo ago
Yeah, and whatever you do, don't deploy SSHFP DNS records unless the DNS server is running DNSSEC AND ... AND your client is also using DNSSEC-verified query responses (either through a dedicated DNSSEC-enabled resolver and/or a specially modified /etc/resolv.conf).

Source: https://egbert.net/blog/tags/sshfp.html
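
For anyone deploying it anyway, the two moving parts are roughly these (hostname and key path are placeholders; as the comment says, this only helps if the resolution path is actually DNSSEC-validated):

    # Generate SSHFP resource records for a host key, to publish in the zone
    ssh-keygen -r host.example.com -f /etc/ssh/ssh_host_ed25519_key.pub

    # Client ~/.ssh/config: consult SSHFP records when checking host keys
    VerifyHostKeyDNS yes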

tptacek•3mo ago
SSHFP is baffling to me. The entire point of the DNS is to make introductions between parties with no preexisting relationship, which is exactly not how an SSH cluster works. SSH already has a (very good) certificate system that solves the same problem. Why would you include the global DNS among your trust anchors!?
egberts1•3mo ago
It was for me too. But when ordered by my boss to implement SSHFP, the hacker in me took it apart and made this checklist.

In hindsight, it isn't a very usable DNS record type.

And yes, it is preferable to use SSH certificates (as long as you are aware that private keys must be guarded jealously). We need PAKE capability in SSH, preferably a Signal-like protocol for authentication.

egberts1•3mo ago
Oh yeah, you are better off having a separate but hardened SSH authentication server that is to be consulted by such SSH gateways, just to protect the private keys.

And use DTLS (two sets of client and server PKI, one for each direction) to guard the link between the SSH gateway(s) and the SSH authentication (certificate-based) server.

The starting point of all this:

https://jpmens.net/2019/03/02/sshd-and-authorizedkeyscommand...