> But software that's fast changes behavior.
I wonder if the author stopped to consider why these opposing points make sense, instead of ignoring one to justify the other.
My opinion is that "fast" only becomes a boon when features are robust and reliable. If you prioritize going twice as "fast" over rooting out the problems, you get problems at twice the rate too.
Let's say the author has a machine that is a self-contained assembly line producing cans of soup. However, the machine has a problem: every few cans, one comes out sideways and jams the machine temporarily, forcing them to stop and unjam it.
The author suggests doubling the speed of the machine without solving that problem, giving them their soup cans twice as fast but requiring that they unjam the machine twice as often as well.
I believe that (with situational exceptions) this is a bad approach, and I would address the problem causing the jams before I doubled the speed of the machine.
That being said, this is a very simplistic view of the situation; in a real situation either of these solutions has a number of variables that may make it preferable over the other. My gripe with the piece is that the author suggests the "faster" approach is a good default that is "simple", "magical" and "fun". I believe it is shortsighted, causes compounding problems the more it is applied in sequence, and is only "magical" if you bury your head in the sand and tell yourself the problems are for someone else to figure out - which is exactly what the author handwaves away at the end, with a nebulous allusion to some future date when these tools, which we should accept because they are fast, will eventually be made good by some unknown person.
I have been asking about Latency-Free Computing for a very long time. Everything in computing now is slow.
Speed of all kinds is incredibly important. Give me all of it.
- Fast developers
- Fast test suites
- Fast feedback loops
- Fast experimentation
Someone (Napoleon?) is credited with saying "quantity has a quality all its own"; in software it is "velocity has a quality all its own".
As long as there is some rigor and you aren't shipping complete slop, consistently moving very quickly fixes almost every other deficiency.
- It makes engineering mistakes cheaper (just fix them fast)
- It makes product experimentation easy (we can test this fast and revert if needed)
- It makes developers ramp up quickly (shipping code increases confidence and knowledge)
- It actually makes rigor more feasible, as the most effective rigorous processes are lightweight and built in.
Every line of code is a liability, the system that enables it to change rapidly is the asset.
Side note: every time I encounter JVM test startup lag I think someday I am going to die and will have spent time doing _this_.
Joe Stalin, I believe. It's a grim metaphor regarding the USSR's army tactics in WW2.
https://www.goodreads.com/quotes/795954-quantity-has-a-quali...
Are you kidding me? My product owner and management ask me all the time to implement features "fast".
I disagree with the statement too, as people definitely ask for UX / products to be "snappy", but this isn't about speed of development.
I put it in a ticket to speed up the 40-minute build and was asked "How does this benefit the end user?" and I said "The end user would have had the product six months ago if the build was faster."
Speed is what made Google, which was a consumer product at the time. (I say this because it matters more in consumer products.)
Beautiful tools make you stretch to make better things with them.
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
Nah. React, for example, only garnered attention because it said "Look how much faster the virtual DOM is!". We could go on all day.
> People want features, people want compatibility
Yes, but under the assumption that it is already built to be as "fast" as possible. "Fast" is assumed. That's why "faster" is such a great marketing trick, as it tunes people into "Hold up. What I'm currently using actually sucks. I'd better reconsider."
"Fast" is deemed important, but it isn't asked for as it is considered inconceivable that you wouldn't always make things "fast". But with that said, keep in mind that the outside user doesn't know what "fast" is until there is something to compare it with. That is how some products can get away with not being "fast" — until something else comes along to show that it needn't be that way.
On one hand I like controlled components because there is a single source of truth for the data (a useState() somewhere in the app), but you are forced to re-render on each keypress. With uncontrolled components, on the other hand, there's the possible anarchy of having state both in React and in the actual form.
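For anyone who hasn't hit this trade-off, a minimal sketch of the two approaches (plain React, no form library; component names are made up):

    import { useState, useRef } from "react";

    // Controlled: the useState value is the single source of truth,
    // but every keypress re-renders this component.
    function ControlledName() {
      const [name, setName] = useState("");
      return <input value={name} onChange={(e) => setName(e.target.value)} />;
    }

    // Uncontrolled: the DOM holds the state; React only reads it on demand
    // (e.g. on submit), so typing doesn't re-render anything.
    function UncontrolledName({ onSubmit }: { onSubmit: (name: string) => void }) {
      const ref = useRef<HTMLInputElement>(null);
      return (
        <form onSubmit={(e) => { e.preventDefault(); onSubmit(ref.current?.value ?? ""); }}>
          <input ref={ref} defaultValue="" />
        </form>
      );
    }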
I really like this library, which has a rational answer to the problems that turn up with uncontrolled forms.
https://krausest.github.io/js-framework-benchmark/current.ht...
But React started a movement where frontend teams were isolated from backend teams (who tend to be more conservative and performance-minded), tons of the view was needlessly pushed into browser rendering, every page started using 20 different JSON endpoints that are often polling/pushing, adding overhead, etc. So by every measure it made the Web slower and more complicated, in exchange for slightly easier/more cohesive design management (that needs changing yearly).
The particulars of the vdom framework itself are probably not that important in the grand scheme, unless its design encourages doing less of those things (which many newer ones do, but React is flexible).
Yes, fast wins people over. And yet we live in a world where the actual experience of every day computing is often slow as molasses.
"Fast" is the feature people always wanted, but absent better information, they have to assume that is what they already got. That is why "fast" marketing works so well. It reveals that what they thought was pretty good actually wasn't. Adding the missing kitchen sink doesn't offer the same emotional reaction.
We don't have to assume. We know that JavaScript is slow in many cases, that shipping more bundles instead of less will decrease performance, and that with regard to the amount of content served generally less is more.
Whether the amount of baggage every web app seems to come with these days is "necessary" or not is subjective, but I would tend to agree that many developers are ignorant of different methods or dislike the idea of deviating from the implied norms.
This is what people are missing. Even those "slow" apps are faster than their alternatives. People demand and seek out "fast", and I think the OP article misses this.
Even the "slow" applications are faster than their alternatives or have an edge in terms of speed for why people use them. In other words, people here say "well wait a second, I see people using slow apps all the time! People don't care about speed!", without realizing that the user has already optimized for speed for their use case. Maybe they use app A which is 50% as fast as app B, but app A is available on their toolbar right now, and to even know that app B exists and to install it and learn how to use it would require numerous hours of ramp up time. If the user was presented with app A and app B side by side, all things equal, they will choose B every time. There's proficiency and familiarity; if B is only 5% faster than A, but switching to B has an upfront cost in days to able to utilize that speed, well that is a hidden speed cost and why the user will choose A until B makes it worth it.
Speed is almost always the universal characteristic people select for, all things equal. Just because something faster exists, and it's niche, and hard to use (not equal for comparison to the common "slow" option people are familiar with), it doesn't mean that people reject speed, they just don't want to spend time learning the new thing, because it is _slower_ to learn how to use the new thing at first.
And for that, we absolutely do have points of comparison, and yeah, pretty much all web apps have bad interactivity because they are limited too much by network round trip times. It's an absolute unicorn web app that does enough offline caching.
It's also absurd to assume that applications are as fast as they could be. There is basically always room for improvement, it's just not being prioritised. Which is the whole point here.
I’ve mentioned this before.
Quest Diagnostics, their internal app used by their phlebotomists.
I honestly don’t know how this app is done, I can only say it appears to run in the tab of a browser. For all I know it’s a VB app running in an ActiveX plugin, if they still do that on Windows.
The L&F looks like a classic Windows GUI app; it interfaces with a signature pad, a scanner, and a label printer.
And this app flies. Dialogs come and go, the operator rarely waits on the UI; when she is keying in data (and they key in quite a bit), the app is waiting for the operator.
Meanwhile, if I want to refill a prescription, it's fraught with beach balls, those shimmering boxes, and, of course, lots of friendly whitespace and scrolling. All to load a med name, a drugstore address, and ask 4 yes/no questions.
I look at that Quest app mouth agape, it’s so surprisingly fast for an app in this day and age.
Seriously though, you're so right- I often wonder why this is. If it's that people genuinely don't care, or that it's more that say ecommerce websites compete on so many things already (or in some cases maintain monopolies) that fast doesn't come into the picture.
They don't say they buy the iPhone because it has the fastest CPU and most responsive OS, they just say it "just works".
Like the "evergreen" things Amazon decided to focus on: faster delivery, greater selection, lower cost.
I'm already pissed I have to use the damn thing, please don't piss me off more.
Wait.
Wait for typing indicator.
Wait for cute text-streaming.
Skip through the paragraph of restating your question and being pointlessly sycophantic.
Finally get to the meat of the response.
It’s wrong.
Fast is _absolutely not_ the only thing we care about. Not even top 5. We are addicted to _convenience_.
Considering the current state of the Web and user application development, I tend to agree with regard to its developers, but HN seems to still abide by other principles.
I imagine a large chunk of us would gladly throw all that out the window and only write super fast efficient code structures, if only we could all get well paid jobs doing it.
Your example says it, people will go, this is like X (meaning it does/has the same features as X), but faster. And now people will flock from X to your X+faster thing.
Which tells us nothing about whether people would also move to an X+more-features, or an X+nicer-ux, or an X+cheaper, etc., without them being any faster than X, or even possibly slower.
C and C++ were and are the benchmark, it would have been revolutionary to be faster and offer memory safety.
Today, in some cases Rust can be faster, in others slower.
Assuming, like, three days, 6 minutes is 720x faster. For 6 minutes to be 10000x faster, the original would have to take something like a month and a half!
baba is fast.
I sometimes get calls like "You used to manage a server 6 years ago and we have an issue now" so I always tell the other person "type 'alias' and read me the output", this is how I can tell if this is really a server I used to work on.
fast is my copilot.
https://en.wikipedia.org/wiki/HTTP/3
https://en.wikipedia.org/wiki/QUIC
They don't require you to use QUIC to access Google, but it is one of the options. If you use a non-supporting browser (Safari prior to 2023, unless you enabled it), you'd access it with a standard TCP-based HTTP connection.
Hm, shell environment is fairly high on the list of things I'd expect the next person to change, even assuming no operational or functional changes to a server.
And of course they'd be using a different user account anyway.
(Throw tomatoes now but) Torvalds said the same thing about Git in his Google talk.
Genuinely hard to read this and think little more than, "oh look, another justification for low quality software."
He pays special attention to the speed of his applications. The Russian social network VK worked blazingly fast. The same is true of Telegram.
I always noticed it but not many people verbalized it explicitly.
But I am pretty sure that people realize it subconsciously and it affects user behaviour metrics positively.
These operations are near instant for me on telegram mobile and desktop.
It's the fastest IM app for me by an order of magnitude.
I used to play games on N64 with three friends. I didn’t even have a concept of input lag back then. Control inputs were just instantly respected by the game.
Meanwhile today, if I want to play rocket league with three friends on my Xbox series S (the latest gen, but the less powerful version), I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable.
It could be my unusual nervous system. I'm really good at rhythm games, often clearing a level on my first try and amazing friends who can beat me at other genres. But when I was playing League of Legends, which isn't very twitchy, it seemed like I would just get hit and there was nothing I could do about it when I played on a "gaming" laptop but found I could succeed at the game when I hooked up an external monitor. I ran a clock and took pictures showing that the external monitor was 30ms faster than the built-in monitor.
The lag is due to some software. So the problem is with how software engineering as a field functions.
Your experience is not normal.
If you’re seeing that much lag, the most likely explanation is your display. Many TVs have high latency for various processing steps that doesn’t matter when you’re watching a movie or TV, but becomes painful when you’re playing games.
It was either fast, or nothing. Image quality suffered, but speed was not a parameter.
With LCDs, lag became a trade-off parameter. Technology enabled something to become worse, so economically it was bound to happen.
Sounds reasonable, but no.
Almost everywhere I’ve worked, user-facing speed has been one of the highest priorities. From the smallest boring startup to the multi billion dollar company.
At companies that had us target metrics, speed and latency was always a metric.
I don’t think my experience has been all that unique. In fact, I’d be more surprised if I joined a company and they didn’t care about how responsive the product felt, how fast the pages loaded, and any weird lags that popped up.
I have found this true for myself as well. I changed back over to Go from Rust mostly for the iteration speed benefits. I would replace "fast" with "quick", however. It isn't so much I think about raw throughput as much as "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (ie Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (ie Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
In general, I like cargo a lot better than the Go tooling, but I do wish the Rust stdlib was a bit more "batteries included".
As much as Rust's strongest defenders like to claim otherwise, compilation speed and bloat just really weren't goals. That's cascaded down into most of the ecosystem's most-used dependencies, and so most Rust ecosystem projects just adopt the mindset of "just use the dependency". It's quite difficult to build a substantial project without pulling in hundreds of dependencies.
I went on a lengthy journey of building my own game engine tools to avoid bloat, but it's tremendously time consuming. I reinvented the Mac / Windows / Web bindings by manually extracting auto-generated bindings instead of using crates that had thousands of them, significantly cutting compile time. For things like derive macros and serialization I avoided using crates like Serde that have a massive parser library included and emit lots of code. For web bindings I sorted out simpler ways of interacting with Javascript that didn't require a heavier build step and separate build tool. That's just the tip of the iceberg I can remember off the top of my head.
In the end I had a little engine that could do 3D scenes, relatively complex games, and decent GPU driven UI across Mac, Windows, and Web that built in a fraction of the time of other Rust game engines. I used it to build a bunch of small game jam entries and some web demos. A clean release build on the engine on my older laptop was about 3-4 seconds, vastly faster than most Rust projects.
The problem is that it was just a losing battle. If I wanted Linux support, or to use pretty much any other crate in the Rust ecosystem, I'd have to pull in dependencies that alone would multiply the compile time.
In some ways that's an OK tradeoff for an ecosystem to make, but compile times do impede iteration loops and they do tend to reflect complexity. The more stuff you're building on top of the greater the chances are that bugs are hard to pin down, that maintainers will burn out and move on, or that you can't reasonably understand your stack deeply.
Looking completely past the languages themselves I think Zig may accrue advantages simply because its initial author so zealously defined a culture that cares about driving down compile times, and in turn complexity. Pardon the rant!
Meanwhile, compiler performance just didn't have a strong advocate with the right vision of what could be done. At least that's my read on the situation.
By comparison, Go doesn't have _that_ problem because it just doesn't have metaprogramming. It's easy to stay fast when you're dumb. Go is the Forrest Gump of programming languages.
Windows is the worst offender here; the entire desktop is sluggish even though there is no computational task that justifies those delays.
Yeah, that's not "a world" it's just the USA. Parts of the world - EU, UK etc have already moved on from that. Don't assume that the USA is leading edge in all things.
"In a world" is a figure of speech which acknowledges the non-universality of the statement being made. And no it is not "just the USA". Canada and Mexico are similarly slow to adopt real-time payments.
It is wild to tell someone "don't assume" when your entire comment relies on your own incorrect assumption about what someone meant.
It's a bit more substantial, and less complaints about the semantics of the wording.
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
First, efficient code is going to use less electricity, and thus, fewer resources will need to be consumed.
Second, efficient code means you don't need to be constantly upgrading your hardware.
[1]: https://en.wikipedia.org/wiki/Rebound_effect_(conservation)
So if we use cost as a proxy for environment impact it’s not saving much at all.
I think this is a meme to help a different audience care about performance.
Btw, cool site design.
This is the same in Switzerland. If you request an IBAN transfer, it's never instant. The solution there for fast payments is called TWINT, which works at almost any POS terminal (you take a picture of the displayed QR code).
I think BACS is similarly "slow" due to the settlement process.
[0] https://en.m.wikipedia.org/wiki/Faster_Payment_System_(Unite...
But the faster payments ceiling is large enough that buying a house falls under the limit.
"This is the largest reason why in-place upgrades to the U.S. financial system are slow. Coordinating the Faster ACH rollout took years, and the community bank lobby was loudly in favor of delaying it, to avoid disadvantaging themselves competitively versus banks with more capability to write software (and otherwise adapt operationally to the challenges same-day ACH posed)."
From the great blog Bits About Money: https://www.bitsaboutmoney.com/archive/community-banking-and...
https://real-timepayments.com/Banks-Real-Time-Payments.html
Prior to that, you could get instant transfers but it came with a small fee because they were routed through credit card networks. The credit card networks took a fee but credit card transactions also have different guarantees and reversibility (e.g. costing the bank more in cases of fraud)
Which means nobody can send me money.
FedNow on the backend is supported by fewer banks than Zelle is, which is probably why hardly any banks expose a front-end for it.
The bank had an opportunity to notify me precisely because ACH is not real time. And I had an opportunity to fix it because wire transfers are almost real time (they finish in minutes, not days). I appreciate that when companies pull money from my account I get days of notice, but if I need to move money quickly I can do that too.
You don't need slow transfers to get an opportunity to satisfy automatic payments. I don't know how it works but in the UK direct debits (an automatic "take money from my account for bills" system) gives the bank a couple of days notice so my banking app warns me if I don't have enough money. Bank transfers are still instant.
I believe this is because Ürs has to load my silver pieces onto the donkey and drive it to the other bank.
Again, no ill will intended at all, but I think it straddles the promotional angle here a bit and maybe people weren't aware
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, the 22 year old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quoting: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now 20+ years later it still counts as one of the top 5 highlights of my career.
P.S. Mentioning as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M.
Geez, life in my opinion is not so serious. It’s okay to say stupid things and not feel bad about it, as long as you are not trying to hurt anyone.
I bet they felt great and immediately forgot about this bad joke.
But I also concur with you that it is good to bring some levity to “serious” conversations!!
Required reading for internet comedians.
Of course the joke was silly. But perhaps I should have provided some context. We were making industrial automation software. This stuff runs in factories. Every saved second shrinks the manufacturing time of a part, increasing the total factory output. When extrapolating to absurd levels, zero time to manufacture means infinite output per factory (sans raw materials).
For example, looking up command flags within man pages is slooooow and becomes agonizingly frustrating when you're waking up and people are waiting for you so that they can also go back to sleep. But if you've spent the time to learn those flags beforehand, you'll be able to get back to sleep sooner.
S3 is “slow” at the level of a single request. It’s fast at the level of making as many requests as needed in parallel.
Being "fast" is sometimes critical, and often aesthetic.
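A rough sketch of the parallelism point, assuming you already have presigned object URLs and using plain fetch rather than the real S3 SDK:

    // Fetch many objects concurrently; total wall time is roughly the slowest
    // single request, not the sum of all of them.
    async function fetchAll(urls: string[]): Promise<ArrayBuffer[]> {
      return Promise.all(
        urls.map(async (url) => {
          const res = await fetch(url); // each presigned URL fetched independently
          if (!res.ok) throw new Error(`GET ${url} failed: ${res.status}`);
          return res.arrayBuffer();
        }),
      );
    }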
I contributed "whisperfile" as a result of this work:
* https://github.com/Mozilla-Ocho/llamafile/tree/main/whisper....
* https://github.com/cjpais/whisperfile
if you ever want to chat about making transcription virtually free or so cheap for everyone let me know. I've been working on various projects related to it for a while. including open source/cross-platform superwhisper alternative https://handy.computer
Woah, that's really cool, CJ! I've been toying with the idea of standing up a cluster of older iPhones to run Apple's Speech framework. [1] The inspiration came from this blog post [2] where the author is using it for OCR. A couple of things are holding me back: (1) the OSS models are better according to the current benchmarks and (2) I have customers all over the world, so geographical load-balancing is a real factor. With that said, I'll definitely spend some time checking out your work. Thanks for sharing!
[1] https://developer.apple.com/documentation/speech
[2] https://terminalbytes.com/iphone-8-solar-powered-vision-ocr-...
* developer insecurity and pattern lock in
* platform limitations. This is typically software execution context and tool chain related more than hardware related
* most developers refuse to measure things
Even really slow languages can result in fast applications.
That's because it's understood that things should work as quickly as possible, and not be slow on purpose (generally). No one asks that a modern language be used in the UI as opposed to Sanskrit or hieroglyphs, because it's understood.
C++ with no forward decls, no clang to give data about why the compile time is taking so long. 20-minute compiles. The only git tool I like (git-cola) is written in Python and slows to a crawl. gitk takes a good minute just to start up. The only environments are MSYS, which is slow due to Windows, and WSL, which isn't slow but can't do DPI scaling, so I squint at everything.
> Rarely in software does anyone ask for “fast.”
I don't think I can relate this article to what actually happened to the web. It went from being an unusable 3D platform to a usable 2D one. The web exploded with creativity and was out of control thanks to Flash and Director, but speeds were unacceptable. Once Apple stopped supporting it, the web became boring, and fast, very fast. A lot of time and money went into optimising the platform.
So the article is probably more about LLMs being the new Flash. I know that sounds like blasphemy, but they're both slow and melt CPUs.
As C++ devs we used to complain a lot about its compilation speed. Now, after moving to Rust, sometimes we wish we could just go back to C++ because of Rust's terrible compilation speeds! :-)
https://blog.superhuman.com/superhuman-is-being-acquired-by-...
Being fast helps, but is rarely a product.
It's obviously wrong if you think about it for more than a second. All it shows is that speed isn't the only thing that matters but who was claiming that?
Speed is important. Being slow doesn't guarantee failure. Being fast doesn't guarantee success. It definitely helps though!
>Being fast doesn't guarantee success.
Sometimes it can be a deciding factor though.
Also, sometimes speediness or responsiveness beyond the nominal is not as much of a "must have" as nominally fast performance in place of sluggishness.
I distinctly remember when Slashdot committed suicide. They had an interface that was very easy for me to scan and find high value comments, and in the name of "modern UI" or some other nonsense needed to keep a few designers employed, completely revamped it so that it had a ton of whitespace and made it basically impossible for me to skim the comments.
I think I tried it for about 3 days before I gave up, and I was a daily Slashdot reader before then.
If you can find what you want and read it you might not spend 5 extra seconds lost on their page and thus they can pad their stats for advertisers. Bonus points if the stupid page loads in such a way you accidentally click on something and give them a "conversion".
Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
HN might have over 100 chars per line of text. It could be better. I know I could do it myself, and I do. But "I fixed it for me" doesn't fix it for anyone else.
I think low density UIs are more beginner friendly but power users want high density.
Having 50 buttons and 10 tabs shoved in your face just makes for opaqueness, power user or not.
Different people process visual information differently, and people reading articles have different goals, different eyesight, and different hardware setups. And we already have a way for users to tell a website how wide they want its content to be: resizing their browser window. I set the width of my browser window based on how wide I want pages to be; and web designers who ignore this preference and impose unreadable narrow columns because they read about the "optimal" column width in some study or another infuriate me to no end. Optimal is not the same for everyone, and pretending otherwise is the antithesis of accessibility.
This means that I have a difficult time reading text with very large paragraphs. If a paragraph goes on for 10+ lines, I'll start to lose my place at the end of most lines. This is infuriating and drastically impairs my ability to read and comprehend the text.
It's interesting to me that you mention preferring a ragged right over justification, because I literally do not notice the difference. This suggests to me that we read in different ways -- perhaps you focus on the shape and boundaries of a line more than the shape of a paragraph. This makes intuitive sense to me as to why you would prefer narrower columns.
I don't think that I'm "right" for preferring wider columns or that you or anyone else are "wrong" for preferring narrower columns. I think it's just how my brain learned to process text.
I have pretty strong opinions on what's too wide of a column and what's too narrow of a column, so I won't fullscreen a browser window on anything larger than a laptop. Rather, I'll set it for a size that's comfortable for me. If some web designer decides "actually, your preferred text width is wrong, use mine instead" then I'm gonna be pretty annoyed, and I think rightfully so, because what those studies say is "optimal" for the average person is nigh unreadable for me. (Daring Fireball is the worst offender I can think of off the top of my head. I also find desktop Wikipedia's default view pretty hard to read, but the toggleable "wide" mode is excellent).
The text density, however, I rather like.
The north star should be user satisfaction. For some products that might be engagement (e.g. an entertainment service), while for others it is accomplishing a task as quickly as possible and exiting the app.
With IRC it's basically part of the task, but on every forum I read, it's rare that I ever consider who's saying what.
There are people who show up much less often and have less obvious usernames, Andrew Ayer is agwa for example, and I'm sure there are people I blank on entirely.
Once in a while I will read something and realise oh, that "coincidental" similarity of username probably isn't a coincidence, I believe the first time I realised it was Martin Uecker was like that for example. "Hey, this HN person who has strong opinions about the work by Uecker et al has the username... oh... Huh. I guess I should ask"
Doesn't really help a ton with recognizing but it makes it easier to track within a thread.
I blame HN switching to AWS. Downtime also increased after the switch.
(Those are trick questions, because we haven't switched to AWS. But I genuinely would like to hear the answers.)
(We did switch to AWS briefly when our hosting provider went down because of a bizarre SSD self-bricking incident a few years ago... but it was only for a day or two!)
bonus, these have both http & https endpoints if you needed a differential diagnosis or just a means to trip some shitty airline/hotel walled garden into saying hello.
It does happen less than it used to, but still.
I work at an e-waste recycling company. Earlier this week, I had to test a bunch of laptop docking stations, so I kept force refreshing my blog to see if the Ethernet port worked. Thing is, it loads so fast, I kept the dev tools open to see if it actually refreshed.
The site seemed to start to go downhill when it was sold, and got into a death spiral of less informed users, poor moderation, people leaving, etc. It's amazing that it's still around.
Plus Rusty just pushed out Kuro5hin and it felt like “my scene” kind of migrated over.
As an aside, Kuro5hin was the only “large” forum that I ever bothered remembering people’s usernames. Every other forum it’s all just random people. (That isn’t entirely true, but true enough)
It was interesting in a different way though.
Like Adequacy.
Did you also move over to MetaFilter ?
I wrote my take on an ideal UI (purely clientside, against the free HN firebase API, in Elm): https://seville.protostome.com/.
I actually never care about the vote count but have been on this site long enough to recognise the names worth paying attention to.
Also the higher contrast items are the click/tap targets.
Just goes to show that all of us reading HN don’t actually share with each other how we’re reading HN :)
Too funny… thank you!!
I know there are changes to the moderation that have taken place many times, but not to the UI. It's one of the most stable sites in terms of design that I can think of.
What other sites have lasted this long without giving in to their users' whims?
Over the last 4 years my whole design ethos has transformed to "WWHND" (What Would Hacker News Do?) every time I need to make any UI changes to a project.
I don't really get why so many websites are slow and bloated these days. There are tools like SpeedCurve which have been around for years yet hardly anyone I know uses them.
I agree, but I wonder how not knowing how to spell would affect that. The high school kids I work with are not great spellers (nor do they have good handwriting).
I know this is completely different scale, but compare: [1] https://github.com/git/git [2] https://gitpatch.com/gitpatch/git-demo
And there is no page cache. Sub 100ms is just completely different experience.
I should be able to create a Jira ticket in however long it takes me to type the acceptance criteria plus a second or two. Instead I’ve got slow loading pages, I’ve got spinners, I’ve got dropdowns populating asynchronously that steal focus from what I’m typing, I’ve got whatever I was typing then triggering god knows what shortcuts causing untold chaos.
For a system that is—at least how we use it at my job—a glorified todo list, it is infuriating. If I’m even remotely busy lately I just add “raise a ticket for x” to my actual todo list and do it some other time instead.
Figuring out the best way to present logically complex content, controls and state to users that don’t wrangle complexity for a living is difficult and specialized work, and for most users, it’s much more important than snappy click response.
Both of these things obviously exist on a spectrum and are certainly not mutually exclusive. (ILY McMaster-Carr) But projects rarely have enough time for either complete working features, or great thoughtful usability, and if you’re weighing the relative importance of these two factors, consider your audience, their goals, and their technical savvy before assuming good performance can make up for a suboptimal user workflow.
Thanks for enduring my indulgent rant.
It's a retroactively fixed thing. Like imagine forgetting to make a UI, shipping just an API to a customer then thinking "oh shit, they need a UI they are not programmers". And only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning, as there is only one user!
If you want to speed your web service up, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what the CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take actions.
Proxy metrics means you likely can't (and probably should not) check the speed at which Harold can sum his spreadsheet every minute, but you can check the latency of the major calls involved. If something is slow but the metrics look good, then profiling might be needed.
Sometimes there is an easy speed up. Sometimes you need a new architecture! But at least you know what's happening.
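A minimal sketch of such a proxy metric, assuming some observability client behind a hypothetical recordMetric function: wrap the major calls and record how long they take.

    // Record the duration of a named operation so it can be charted weekly.
    // `recordMetric` stands in for whatever observability client you use.
    declare function recordMetric(name: string, millis: number): void;

    async function timed<T>(name: string, fn: () => Promise<T>): Promise<T> {
      const start = performance.now();
      try {
        return await fn();
      } finally {
        recordMetric(`${name}.latency_ms`, performance.now() - start);
      }
    }

    // Usage: wrap the major calls behind Harold's spreadsheet, not the spreadsheet itself.
    // const rows = await timed("report.sum", () => fetchReportRows(query));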
As a result, performance (and a few other things) functionally never gets “requested”. Throw in the fact that for many mid-to-large orgs, software is not bought by the people who are forced to use it and you have the perfect storm for never hearing about performance complaints.
This in turn, justifies never prioritising performance.
Fast is indeed magical, that's why I exclusively browse Instagram from the website; it's so slow I dip out before they get me with their slot machine.
The greatest day of productivity in my life was a flight from Melbourne to New York via LAX. No wifi on either flight, but a bit in transit. Downloaded everything I needed in advance. Coded like a mofo for like 16 hours.
Fast internet is great for distractions.
I once used Notion at work and for personal note taking. I'd never felt it was "slow." Until later I moved my notes to Obsidian. Now when I have to use Notion at my job it feels sluggish.
Glad to hear Obsidian is better as I’ve been considering it as an alternative.
I've been working on making Obsidian real-time collaborative with the Relay [0] plugin. In combination with a few other plugins (and the core Bases plugin) you can build a pretty great Notion alternative.
I'm bullish on companies using Obsidian for knowledge management and AI driven workflows. It's pretty reasonable to build custom plugins for a specific vertical.
[0] https://relay.md
Working quickly is more important than it seems (2015) - https://news.ycombinator.com/item?id=36312295 - June 2023 (183 comments)
Speed matters: Why working quickly is more important than it seems (2015) - https://news.ycombinator.com/item?id=20611539 - Aug 2019 (171 comments)
Speed matters: Why working quickly is more important than it seems - https://news.ycombinator.com/item?id=10020827 - Aug 2015 (139 comments)
jsomers gets a lot of much-deserved love here!
For what it's worth, I built myself a custom Jira board last month so I could instantly search, filter and group tickets (by title, status, assignee, version, ...)
Motivation: Running queries and finding tickets on JIRA kills me sometimes.
The board is not perfect, but works fast and I made it superlightweight. In case anybody wants to give it a try:
https://jetboard.pausanchez.com/
Don't dare to try on mobile, use desktop. Unfortunately it uses a proxy and requires an API key, but doesn't store anything in backend (just proxies the request because of CORS). Maybe there is an API or a way to query jira cloud instance directly from browser, I just tried first approach and moved on. It even crossed my mind to add it somehow to Jira marketplace...
Anyway, caches stuff locally and refreshes often. Filtering uses several tricks to feel instant.
UI can be improved, but uses a minimalistic interface on purpose, like HN.
If anybody tries it, I'll be glad to hear your thoughts.
Here is some extensive advice for making complex websites load extremely quickly
https://community.qbix.com/t/qbix-websites-loading-quickly/2...
Here is also how to speed up APIs:
https://community.qbix.com/t/building-efficient-apis-with-qb...
One of the biggest things our framework does, as opposed to React, Angular, Vue, etc., is that we lazy-load all components as you need them. No need for tree-shaking or bundling files. Just render (static, cached) HTML and CSS, then start to activate JS on top of it. This also helps massively with time to first contentful paint.
https://community.qbix.com/t/designing-tools-in-qbix-platfor...
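Not the actual Qbix API, just a generic sketch of the lazy-activation idea: serve static HTML first, then import and attach a component's JS only when the element is about to be used (data-tool and attach are made-up names).

    // Activate a component's JS on demand instead of bundling everything up front.
    // The element is already rendered as cached HTML/CSS; we only attach behavior.
    async function activate(el: HTMLElement) {
      const name = el.dataset.tool;                  // e.g. <div data-tool="chat">
      if (!name) return;
      const mod = await import(`/tools/${name}.js`); // lazy-loaded, cached by the browser
      mod.attach(el);                                // assumed per-tool entry point
    }

    // Activate tools as they scroll into view, so first paint never waits on JS.
    const observer = new IntersectionObserver((entries) => {
      for (const e of entries) {
        if (e.isIntersecting) {
          activate(e.target as HTMLElement);
          observer.unobserve(e.target);
        }
      }
    });
    document.querySelectorAll<HTMLElement>("[data-tool]").forEach((el) => observer.observe(el));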
All this evolved from 2021 when I gave this talk:
It's gorgeous
The Google Webfont loader is (usually) non-blocking when done right, but the text should appear fine before it loads.
The page loaded instantly for me
I guess they were used to typing stuff then inspecting paperwork or other stuff waiting for a response. Plus, it avoided complaints when usage inevitably increased over time.
Is the most pressing problem facing the world that we are not doing enough things fast enough? Seems a bit off the mark, IMO.
Fast reading does not just enumerate examples.
Fast reading does not straw-man.
Fun conveys opportunity and emotion: "changing behavior", "signals simplicity", "fun". Fun creates an experience, a mode, and stickiness. It's good for marketing, but a drag on operations.
Fast is principles with methods that just work. "Got it."
Fast has a time-to-value of now.
Fast is transformative when it changes a background process requiring all the infrastructure of tracking contingencies to something that can just be done. It changes system-2 labor into system-1 activity -- like text reply vs email folders, authority vs discussion, or take-out vs. cooking.
When writers figure out how to monetize fast - how to get recurrent paying users (with out-of-band payment) just from delivering value - then we'll no longer be dragged through anecdotes and hand-waving and all the salience-stretching manipulations that tax our attention.
Imagine an AI paid by time to absorb and accept the answer instead of by the token.
Fast is better than fun -- assuming it's all good, of course :)
Microsoft needs to take heed, for example Explorer's search, Teams, make your computer seem extremely slow. VS Code on the other hand is fast enough, while slower than native editors such as Sublime Text.
Bookmarking this one
At least in B2B applications that rely heavily on relational data, the best developers are the ones who can optimize at the database level. Algorithmic complexity pretty much screams at me these days and is quickly addressed, but getting the damned query plan into the correct shape for a variety of queries remains a challenge.
Of course, knowing the correct storage medium to use in this space is just as important as writing good queries.
Nowadays it's a fancy touch display that requires concentration and is often sluggish, and the machine often feels cheap and makes a cheap sound when tapped on. I don't think the operators ever enjoy interacting with it, and the software is often slow across the network....
I'm all for fast. It shows that, no matter what, at least somebody cared enough to make it blazing fast.
It was unpopular, because devs love the shiny. But it worked - we had nice quick applications. Which was really important for user acceptance.
I didn't make this rule because I hated devs (though self-hatred is a thing ofc), or didn't want to spend the money on shiny dev machines. I made it because if a process worked acceptably quickly on a dev machine then it never got faster than that. If the users complained that a process was slow, but it worked fine on the dev's machine, then it proved almost impossible to get that process faster. But if the dev experience of a process when first coding it up was slow, then we'd work at making it faster while building it.
I often think of this rule when staring at some web app that's taking 5 minutes to do something that appears to be quite simple. Like maybe we should have dev servers that are deliberately throttled back, or introduce random delays into the network for dev machines, or whatever. Yes, it'll be annoying for devs, but the product will actually work.
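A minimal sketch of that throttling idea for a Node/Express dev server (the numbers are arbitrary): inject random latency in development so slow paths hurt the developer before they hurt users.

    import express from "express";

    const app = express();

    // In development only, add 100-400ms of random latency to every request
    // so anything chatty or N+1 feels as bad on the dev box as it will in production.
    if (process.env.NODE_ENV !== "production") {
      app.use((_req, _res, next) => {
        const delayMs = 100 + Math.random() * 300;
        setTimeout(next, delayMs);
      });
    }

    app.get("/health", (_req, res) => res.send("ok"));
    app.listen(3000);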
This is a good point. Often datasets are smaller in dev. If a reasonable copy of live data is used, devs would have an intuition of what is making things slow. Doesn't work for live data that is too big to replicate on a developer's setup though.
You make much better code, and much better products, if you strike "fast" from your vocabulary. Instead, set specific, concrete latency budgets (e.g. 99.99% of requests within x ms). You'll definitely end up with fewer errors and better maintainability than the people who tried to be "fast". You'll often end up faster than them too.
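For example, a budget like "99.99% of requests within 200 ms" can be checked mechanically against recorded samples; a small sketch with made-up numbers:

    // Return true if at least `quantile` of the samples are within `budgetMs`.
    function withinBudget(samplesMs: number[], budgetMs: number, quantile = 0.9999): boolean {
      if (samplesMs.length === 0) return true;
      const sorted = [...samplesMs].sort((a, b) => a - b);
      const idx = Math.min(sorted.length - 1, Math.ceil(quantile * sorted.length) - 1);
      return sorted[idx] <= budgetMs;
    }

    // e.g. fail CI or page someone when the budget is blown:
    // if (!withinBudget(lastHourLatencies, 200)) alertOncall("p99.99 latency budget exceeded");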
I don't want my code deployed in seconds or milliseconds. I'm happy to wait even an hour for my deployment to happen, as long as I don't have to babysit it.
I want my code deployed safely, rolled out with some kind of sane plan (like staging -> canary -> 5% -> 20% -> 50% -> 100%), ideally waiting long enough at each stage of the plan to ensure the code is likely being executed with enough time for alerts to fire (even with feature flags, I want to make sure there's no weird side effects), and for a rollback to automatically occur if anything went wrong.
I then want to enable the feature I'm deploying via a feature flag, with a plan that looks similar to the deployment. I want the enablement of the feature flag, to the configured target, to be as fast as possible.
I want rollbacks to be fast, in case things go wrong.
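To make the shape of that concrete, a hypothetical rollout plan expressed as plain data (stage names and bake times are invented):

    // Each stage gets time to "bake" so alerts can fire before the blast radius grows.
    type Stage = { target: string; trafficPercent: number; bakeMinutes: number };

    const rolloutPlan: Stage[] = [
      { target: "staging",    trafficPercent: 100, bakeMinutes: 30 },
      { target: "canary",     trafficPercent: 1,   bakeMinutes: 60 },
      { target: "production", trafficPercent: 5,   bakeMinutes: 60 },
      { target: "production", trafficPercent: 20,  bakeMinutes: 60 },
      { target: "production", trafficPercent: 50,  bakeMinutes: 60 },
      { target: "production", trafficPercent: 100, bakeMinutes: 0  },
    ];
    // An orchestrator walks the stages, watching alerts, and rolls back automatically on failure.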
Another good example is UI interactions. Adding short animations to actions makes the UI slower, but can considerably improve the experience, by making it more obvious that the action occurred and what it did.
So, no, fast isn't always better. Fast is better when the experience is directly improved by making it fast, and you should be able to back that up with data.
This is true, but I also think there's a backlash now and therefore some really nice mostly dev-focused software that is reeaaaaly fast. Just to name a few:
- Helix editor
- Ripgrep
- Astral Python tools (ruff, uv, ty)
That's a tiny subsection of the mostly bloated software that exists. But it makes me happy when I come across something like that!
Also, browsers seem to be really responsive despite being among the most feature-bloated products on earth, thanks to ever-expanding web standards. I'm not really counting this, though, because while Firefox and Chrome might rarely lag, the websites I view with them often do, so it's not really a fast experience.
Of course fast has downsides but it's interesting this pitch is here. Must have occurred many times in the past.
"Fast" was often labeled "tactical" (as opposed to "strategic" in institutions). At the time I remember thinking a lot about how delays plus uncertainty meant death (nothing gets done, or worse). Even though Fast is often at the cost of noise and error, there is some principle that it can still improve things if not "too far".
Anyone know deeper writings on this topic?
On interfaces:
It's not only the slowness of the software or machine we have to wait for, it's also the act of moving your limb that adds a delay. Navigating a button (mouse) adds more friction than having a shortcut (keyboard). It's a needless feedback loop. If you master your tool all menus should go away. People who live in the terminal know this.
As a personal anecdote, I use custom rofi menus (think raycast for Linux) extensively for all kinds of interaction with data or file system (starting scripts, opening notes, renaming/moving files). It's notable how your interaction changes if you remove friction.
Venerable tools in this vein: vim, i3, kitty (former tmux), ranger (on the brim), qutebrowser, visidata, nsxiv, sioyek, mpv...
Essence of these tools is always this: move fast, select fast and efficiently, ability to launch your tool/script/function seamlessly. Be able to do it blindly. Prefer peripheral feedback.
I wish more people saw what could be and built more bicycles for the mind.
There's never a reason to make a content website use heavyweight JS or CSS though.
I don't know, there is a sizeable subset of folks who value fast, and it's a big subset; it's not niche.
Search for topics like turning off animations or replacing core user space tools with various go and rust replacements, you'll find us easily enough.
I'm generally a pretty happy macOS user, especially since M1 came along. But I am seriously considering going back to Linux again. I maintain a parallel laptop with NixOS and I'm finding more and more niggles on the Mac side where I can prioritise lower friction on Linux.
Tell me "fast" again!
However, the capital required will probably never happen again in relation to the return for any investor involved in that product.
Props to them for pushing the envelope, but they did it in the zero-interest era, and it's a shame this is never highlighted by them. And now the outcome is pretty clear in terms of where the company has ended up.
As someone working on embedded audio DSP code I just had to laugh a little.
Yes, there is a ton of code that has a strict deadline. For audio that may be determined by your buffer size — don't write your samples to that buffer fast enough and you will hear it in potentially destructively loud fashion.
This changes the equation, since faster code now just means you are able to do more within that timeframe on the same hardware. Or you could do the same on cheaper hardware. Either way, it matters.
Similar things apply to shader coding, game engines, control code for electromechanical systems (there, missing the deadline can be even worse).
It is implicit, in the same way that in a modern car you expect electric windows and air-conditioning (yes, back in the day, those were premium extras)
I loathe working on it but don’t have the time to refactor legacy code.
———————-
I have another project that I am principal engineer and it uses Django, nextjs, docker compose for dev and ansible to deploy and it’s a dream to build in and push features to prod. Maybe I’m more invested so it’s more interesting to me but also not waiting 10 seconds to register and hot reload a react change is much more enjoyable.
PaulHoule•18h ago
I asked an agent to write an http endpoint at the end of the work day when I had just 30 min left -- my first thought was "it took 10 minutes to do what would have taken a day", but then I thought, "maybe it was 20 minutes for 4 hours worth of work". The next day I looked at it and found the logic was convoluted, it tried to write good error handling but didn't succeed. I went back and forth and ultimately wound up recoding a lot of stuff manually. In 5 hours I had it done for real, certainly with a better test suite than I would have written on my own and probably better error handling.
See https://www.reddit.com/r/programming/comments/1lxh8ip/study_...
citizenpaul•17h ago
A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
It has been pretty decent at these types of changes and saves time of poking though and finding all the places I would have updated manually in a way that find/replace never could. Though I've never tried this on a huge code base.
skydhash•16h ago
kfajdsl•16h ago
skydhash•14h ago
> cursor agent does it just fine in the background
That's for a very broad definition of fine. And you still need to review the diff and check the surrounding context of each chunk. I don't see the improvement in metrics like productivity and cognitive load, especially if you need to do several rounds.
kfajdsl•14h ago
Now, once you have that, to actually make edits, you have to record a macro to apply at each point or just manually do the edit yourself, no? I don't pretend LLMs are perfect, but I certainly think using one is a much better experience for this kind of refactoring than those two options.
skydhash•13h ago
For me, it's like having a moodboard with code listings.
Karrot_Kream•6h ago
I used to have more patience for doing it the grep/macro way in emacs. It used to feel a bit zen, like going through the code and changing all the call-sites to use my new refactor or something. But I've been coding for too long to feel this zen any longer, and my own expectations for output have gotten higher with tools like language-server and tree-sitter.
The kind of refactorings I turn to an LLM for are different, like creating interfaces/traits out of structs or joining two different modules together.
Karrot_Kream•16h ago
skydhash•14h ago
https://youtu.be/f2mQXNnChwc?t=2135
https://youtu.be/zxS3zXwV0PU
And for Vim
https://youtu.be/wOdL2T4hANk
Standard search and replace in other tools pales in comparison.
Karrot_Kream•13h ago
citizenpaul•10h ago
"Now duplicate this code but invert the logic for data flowing in the opposite direction."
I'm simplifying this whole example obviously but that was the basic task I was working on. It was able to spit out in a few seconds what would have taken me probably more than an hour and at least one tedium headache break. I'm not aware of any pre LLM way to do something like that.
Or a little while back I was implementing a basic login/auth for a website. I was experimenting with high-output-token LLMs (I'm not sure that's the technical term) and asked one to make a very comprehensive login handler. I had to stop it somewhere in the triple digits of cases and functions. Perhaps not a great "pro" example of an LLM, but even though it was a hilariously over-complex setup, it did give me some ideas I hadn't thought about. I didn't use any of the code though.
It's far from the magic the LLM sellers want us to believe in, but it can save time, the same as various emacs/vim tricks can for devs who want to learn them.
zahlman•14h ago
If I reached a point where I would find this helpful, I would take this as a sign that I have structured the code wrongly.
baq•5h ago
rtpg•5h ago
tomrod•16h ago
As much as I like agents, I am not convinced the human using them can sit back and get lazy quite yet!
michaelsalim•15h ago
tomrod•13h ago
A senior can write, test, deploy, and possibly maintain a scalable microservice or similar sized project without significant hand-holding in a reasonable amount of time.
A junior might be able to write a method used by a class but is still learning significant portions and concepts either in the language, workflow orchestration, or infrastructure.
A principal knows how each microservice fits into the larger domain they service, whether they understand all services and all domains they serve.
A staff has significant principal understanding across many or all domains an organization uses, builds, and maintains.
AI code assistance help increase breadth and, with oversight, improve depth. One can move from the "T" shape to "V" shape skillset far easier, but one must never fully trust AI code assistants.
stavros•15h ago
tomrod•13h ago
I think LLM assistants help you become functional across a more broad context -- and completely agree that testing and reviewing becomes much, much more important.
E.g. a front-end dev optimizing database queries, but also being given nonsensical query parameters that don't exist.
stavros•9h ago
toenail•4h ago
Karrot_Kream•16h ago
I was writing some URL canonicalization logic yesterday. Because we rolled this out as an MVP, customers put URLs in all sorts of ways and we stored them in the DB. My initial pass at the logic failed on some cases. Luckily URL canonicalization is pretty trivially testable. So I took the most-used customer URLs from our DB, sent them to Claude, and told Claude to come up with the "minimum spanning test cases" that cover this behavior. This took maybe 5-10 sec. I then told Zed's agent mode using Opus to make me a test file and use these test cases to call my function. I audited the test cases and ended up removing some silly ones. I iterated on my logic and that was that. Definitely faster than having to do this myself.
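A sketch of what that kind of canonicalization and its table-driven tests can look like; the rules and cases here are illustrative, not the ones from that codebase:

    // Normalize user-entered URLs to one canonical form.
    function canonicalizeUrl(input: string): string {
      const withScheme = /^[a-z][a-z0-9+.-]*:\/\//i.test(input) ? input : `https://${input}`;
      const u = new URL(withScheme);
      u.protocol = "https:";
      u.hostname = u.hostname.toLowerCase().replace(/^www\./, "");
      u.hash = "";
      // Drop a trailing slash on the path, but keep the root "/".
      if (u.pathname.length > 1 && u.pathname.endsWith("/")) u.pathname = u.pathname.slice(0, -1);
      return u.toString();
    }

    // Minimum spanning test cases: each exercises one rule above.
    const cases: Array<[string, string]> = [
      ["Example.com",                "https://example.com/"],
      ["http://www.example.com/a/",  "https://example.com/a"],
      ["https://example.com/a#frag", "https://example.com/a"],
    ];
    for (const [input, expected] of cases) {
      console.assert(canonicalizeUrl(input) === expected, `${input} -> ${canonicalizeUrl(input)}`);
    }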
cycomanic•15h ago
pron•15h ago
I think coding assistants would end up being more helpful if, instead of trying to do what they're asked, they would come back with questions that help us (or force us) to think. I wonder if a context prompt that says, "when I ask you to do something, assume I haven't thought the problem through, and before doing anything, ask me leading questions," would help.
I think Leslie Lamport once said that the biggest resistance to using TLA+ - a language that helps you, and forces you, to think - is that thinking is the last thing programmers want to do.
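For what it's worth, wiring in that kind of "ask me leading questions first" context prompt is only a few lines. A minimal sketch using the Anthropic TypeScript SDK, where the prompt wording and model id are just placeholders:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Placeholder wording for the "force me to think" system prompt.
const FORCE_THINKING_PROMPT = [
  "When I ask you to do something, assume I have not thought the problem through.",
  "Before writing any code, ask me leading questions about requirements,",
  "edge cases, and constraints. Only implement once I have answered them.",
].join(" ");

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function ask(task: string) {
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514", // placeholder model id
    max_tokens: 1024,
    system: FORCE_THINKING_PROMPT,
    messages: [{ role: "user", content: task }],
  });
  return response.content;
}
```

Whether the model's questions are actually good is a separate matter, but the mechanism itself is cheap to try.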
PaulHoule•15h ago
TypeScript can produce astonishingly complex error messages when types don't match up. I went through a couple of rounds of showing the errors to the assistant and getting suggested fixes that were wrong, but they gave me some ideas, and I did more experiments. Over the course of two days (making desired changes along the way) I figured out what was going wrong and cleaned up the use of types to the point where I was really happy with my code. After that, when I saw a red squiggle I usually knew right away what was wrong, and if I did ask the assistant, it would also get it right right away.
I think there's no way I would have understood what was going on without experimenting.
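To make the failure mode concrete, here is a tiny made-up illustration (nothing like the commenter's actual code) of why tightening up type usage makes the red squiggles readable; the flagged line is a deliberate type error:

```typescript
type Row = { id: number; tags: string[] };

function render(rows: Row[]): string {
  return rows.map(r => `${r.id}: ${r.tags.join(",")}`).join("\n");
}

// Relying on inference: the mistake (a Set where an array is expected) is only
// reported at the call site, as a long structural diff of the inferred shape.
const inferred = [{ id: 1, tags: ["a", "b"] }].map(r => ({ ...r, tags: new Set(r.tags) }));
// render(inferred); // uncommenting this produces the sprawling error, far from the mistake

// Annotating the callback's return type reports the mistake on the offending property.
const annotated = [{ id: 1, tags: ["a", "b"] }].map((r): Row => ({
  ...r,
  tags: new Set(r.tags), // deliberate error: Set<string> is not assignable to string[]
}));
```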
xyzzy123•10h ago
When you can see what goes wrong with the naive plan you then have all the specific context in front of you for making a better plan.
If something is wrong with the implementation then I can ask the agent to then make a plan which avoids the issues / smells I call out. This itself could probably be automated.
The main thing I feel I'm "missing" is, I think it would be helpful if there were easier ways to back up in the conversation such that the state of the working copy was restored also. Basically I want the agent's work to be directly integrated with git such that "turns" are commits and you can branch at any point.
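That integration doesn't exist off the shelf as far as I know, but a rough sketch of the "turns are commits" idea (hypothetical helper names, shelling out to plain git) might look like:

```typescript
import { execSync } from "node:child_process";

function git(args: string): string {
  return execSync(`git ${args}`, { encoding: "utf8" }).trim();
}

// Call this after each agent turn completes to snapshot the working copy.
function commitTurn(turnNumber: number, summary: string): string {
  git("add -A");
  git(`commit --allow-empty -m ${JSON.stringify(`agent turn ${turnNumber}: ${summary}`)}`);
  return git("rev-parse HEAD"); // remember the commit id for this turn
}

// "Back up" to an earlier turn on a fresh branch, restoring the working copy.
function branchFromTurn(commitId: string, branchName: string): void {
  git(`checkout -b ${branchName} ${commitId}`);
}
```

With turn ids stored alongside the conversation, rewinding the chat and rewinding the code become the same operation.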
ChrisMarshallNY•15h ago
I think the results are excellent, but I can hit a lot of dead ends, on the way. I just spent several days, trying out all sorts of approaches to PassKeys/WebAuthn. I finally settled on an approach that I think will work great.
I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
rablackburn•8h ago
To be fair it’s great advice when you’re dealing with atoms.
Mutable patterns of electrons, not so much (:
cruffle_duffle•13h ago
makeitdouble•12h ago
> "An hour of debugging/programming can save you minutes of thinking"
The trap so many devs fall into is assuming the code behaves the way they think it does. Or believing the documentation, or seemingly helpful comments. We really want to believe.
People's mental image is more often than not wrong, and debugging tremendously helps bridge the gap.
cycomanic•10h ago
This is such a great observation. I'm not quite sure why this is. I'm not a programmer but a signal-processing/systems engineer/researcher. The weird thing is that it seems to be the process of programming itself that causes the "not-thinking" behaviour. E.g., when I program a simulation and find that I must have a sign error somewhere in my implementation (sometimes you can see this from the results), I end up switching every possible sign around instead of taking pen and paper and comparing theory and implementation. If I do other work, e.g. theory, that's not the case. I suspect we try to avoid the cost of the context switch and try to stay in the "programming flow".
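To make that concrete with a toy (not the commenter's actual simulation): a single flipped sign in an explicit-Euler integrator turns exponential decay into exponential growth, which a pen-and-paper comparison with the theory catches immediately, while flipping signs at random mostly doesn't:

```typescript
// Exponential decay dx/dt = -k*x, integrated with explicit Euler.
// Theory: x_{n+1} = x_n - k * x_n * dt. The only difference between the two
// runs below is one sign, and the wrong one blows up instead of decaying.
function simulateDecay(k: number, x0: number, dt: number, steps: number, wrongSign = false): number {
  let x = x0;
  const s = wrongSign ? +1 : -1;
  for (let i = 0; i < steps; i++) {
    x = x + s * k * x * dt;
  }
  return x;
}

console.log(simulateDecay(0.5, 1.0, 0.01, 1000));        // ~exp(-5) ≈ 0.0067
console.log(simulateDecay(0.5, 1.0, 0.01, 1000, true));  // grows toward ~exp(+5) ≈ 148
```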
polskibus•7h ago
nine_k•7h ago
alfalfasprout•8h ago
It's definitely possible to adapt these tools to be more useful in that sense... but it definitely feels counter to what the hype bros are trying to push out.
panarky•7h ago
This is the essence of my workflow.
I dictate rambling, disorganized, convoluted thoughts about a new feature into a text file.
I tell Claude Code or Gemini CLI to read my slop, read the codebase, and write a real functional design doc in Markdown, with a section on open issues and design decisions.
I'll take a quick look at its approach and edit the doc to tweak its approach and answer a few open questions, then I'll tell it to answer the remaining open questions itself and update the doc.
When that's about 90% good, I'll tell the local agent to write a technical design doc to think through data flow, logic, API endpoints and params and test cases.
I'll have it iterate on that a couple more rounds, then tell it to decompose that work into a phased dev plan where each phase is about a week of work, and each task in the phase would be a few hours of work, with phases and tasks sequenced to be testable on their own in frequent small commits.
Then I have the local agent read all of that again, the codebase, the functional design, the technical design, and the entire dev plan so it can build the first phase while keeping future phases in mind.
It's cool because the agent isn't only a good coder, it's also a decent designer and planner too. It can read and write Markdown docs just as well as code and it makes surprisingly good choices on its own.
And I have complete control to alter its direction at any point. When it methodically works through a series of small tasks it's less likely to go off the rails at all, and if it does it's easy to restore to the last commit and run it again.
oblio•3h ago
2. Thank you for the detailed explanation, it makes a lot of sense. If AI is really a very junior dev that can move fast and has access to a lot of data, your approach is what I imagine works - and crucially - why there is such a difference in outcomes using it. Because what you're saying is, frankly, a lot of work. Now, based on that work you can probably double your output as a programmer, but considering the many code bases I've seen that have 0 documentation, 0 tests, I think there is a huge chunk of programmers that would never do what you're doing because "it's boring".
3. Can you share maybe an example of this, please:
> and write a real functional design doc in Markdown, with a section on open issues and design decisions.
Great comment, I've favorited it!
creamyhorror•6h ago
I get what you're referring to here, when it's tunnel-vision debugging. Personally I usually find that coding/writing/editing is thinking for me. I'm manipulating the logic on screen and seeing how to make it make sense, like a math problem.
LLMs help because they immediately think through a problem and start raising questions and points of uncertainty. Once I see those questions in the <think> output, I cancel the stream, think through them, and edit my prompt to answer the questions beforehand. This often causes the LLM's responses to become much faster and shorter, since it doesn't need to agonise over those decisions any more.
pjmlp•6h ago
Tools like Lean and Dafny are much more appreciated, as they generate code from the model.
pron•2h ago
TLA+ is for when you have a 1MLOC database written in Java or a 100KLOC GC written in C++ and you want to make sure your design doesn't lead to lost data or to memory corruption/leak (or for some easier things, too). You certainly can't do that with Dafny, and while I guess you could do it in Lean (if you're masochistic and have months to spare), it wouldn't be in a way that's verifiably tied to the code.
There is no tool that actually formally ties spec to code in any affordable way and at real software scale, and I think the reason people say they want what doesn't exist is precisely because they want to avoid the thinking that they'll have to do eventually anyway.
[1]: Lean and TLA+ are sort-of similar, but Dafny is something else altogether.
sirwhinesalot•1h ago
That is not the case for the TLA+ spec and your 1MLOC Java Database. You hope with fingers crossed that you've implemented the design, but have you?
I can measure that a physical wall has the same dimensions as specified in the blueprint. How do I know my program follows the TLA+ spec?
I'm not being facetious; I think this is a huge issue. While Dafny might not be the answer, we should strive to find a good way to do refinement.
And the thing is, we can do it for hardware! Software should actually be easier, not harder. But software is too much of a wild west.
That problem needs to be solved first.
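One partial (and admittedly unsatisfying) workaround that exists today is trace checking, i.e. model-based testing: keep a small executable model of the spec and check every observed implementation transition against it. It is testing, not a refinement proof, but it does tie the two artifacts together mechanically. A toy sketch, with everything made up:

```typescript
type State = { value: number };
type Action = { kind: "inc" } | { kind: "reset" };

// Executable "spec": which next-states are allowed from a given state.
function specAllows(before: State, action: Action, after: State): boolean {
  switch (action.kind) {
    case "inc":   return after.value === before.value + 1;
    case "reset": return after.value === 0;
  }
}

// Implementation under test (a 1MLOC system could hide behind this interface).
class Counter {
  private v = 0;
  state(): State { return { value: this.v }; }
  apply(a: Action): void { this.v = a.kind === "inc" ? this.v + 1 : 0; }
}

// Trace checker: replay random actions and flag any transition the spec forbids.
function checkTrace(steps: number): void {
  const impl = new Counter();
  for (let i = 0; i < steps; i++) {
    const action: Action = Math.random() < 0.8 ? { kind: "inc" } : { kind: "reset" };
    const before = impl.state();
    impl.apply(action);
    const after = impl.state();
    if (!specAllows(before, action, after)) {
      throw new Error(`spec violation at step ${i}: ${JSON.stringify({ before, action, after })}`);
    }
  }
}

checkTrace(10_000);
```

It only gives you confidence along the traces you happen to exercise, which is exactly the gap between testing and the refinement story hardware folks get.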
markasoftware•14h ago
quarkcarbon279•9h ago
If you're using off-the-shelf LLMs, their speed is always going to be a bottleneck.
cornfieldlabs•11h ago
resonious•11h ago
xyzzy123•10h ago
ChadNauseam•9h ago
I realized as I was doing it that I wouldn't be able to tell anyone about it because I would sound like the most obnoxious AI bro ever. But it worked! (For the simple requests I used it on.) The most annoying part was that I had to tell it to run rustfmt every time, because otherwise it would fail CI and I wouldn't be able to merge it. And then it would take forever to install a Rust toolchain and figure out how to run clippy and stuff. But it did feel crazy to be able to work on it from the beach. Anyway, I'm apparently not very good at taking vacations, lol
discordance•5h ago
And use something like ntfy to get notifications on your phone:
https://ntfy.sh/
I've also seen people assign Claude Code issues on GitHub and then use the GitHub mobile app on their phone to get notifications and review PRs.
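ntfy itself is just HTTP: publishing is a POST to a topic URL, so the notification hook can be a few lines. A minimal sketch, with a made-up topic name:

```typescript
// POST a message to an ntfy topic; any phone subscribed to the topic gets a push.
async function notifyDone(message: string): Promise<void> {
  await fetch("https://ntfy.sh/my-agent-runs", {  // topic name is made up
    method: "POST",
    headers: { Title: "Claude Code finished" },   // optional ntfy title header
    body: message,
  });
}

// e.g. call this when a long agent run completes
notifyDone("Refactor branch is ready for review").catch(console.error);
```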
resonious•5h ago
Edit: clarity
oblio•4h ago
resonious•3h ago
Aeolun•1h ago
SchemaLoad•6h ago
resonious•5h ago
saagarjha•3h ago
resonious•3h ago
kmacdough•3h ago
speed_spread•1h ago
The LLM workflow is competing with other ways of writing code: DIY, Stack Overflow, pairing, offshoring...
TimTheTinker•44m ago
I bought a bunch of poker chips and taught Texas Hold'em to my kids. We have a fantastic time playing with no money on the line, just winning or losing the game based on who wins all the chips.
speed_spread•29m ago
Aeolun•1h ago
roncesvalles•10h ago
LLMs are the antithesis of fast. In fact, being slow is a perceived virtue of LLM output. Some sites like Google and Quora (until recently) simulate the slow typed-output effect for their pre-cached LLM answers, just for credibility.
pjmlp•6h ago