struct S { int a; int b; int c; int d; int e; /* about 15 more members */ };
so I wrote
const auto match_a = s.a == 10;
const auto match_b = s.c == 20;
const auto match_c = s.e == 30;
/* about 15 more of these */
if (match_a && match_b && match_c) { return -1; }
Turns out compilers (I think because of the language's short-circuit semantics) totally shit the bed at this. They generate a chain of 20 compare-and-branch pairs instead of building a mask using SIMD or whatever. I KNOW this is possible, so I asked an LLM, and it was able to produce code that does use SIMD.
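Not the commenter's actual code, but a minimal sketch of the branchless shape involved (field names and constants invented): replacing short-circuit `&&` with bitwise `&` makes every comparison unconditional, which lets the compiler fold the chain into one flat mask reduction it can often auto-vectorize.

```cpp
// Hypothetical sketch. Bitwise & avoids 20 branches; the compiler sees a
// single dependency-free reduction it can turn into a mask / vectorize.
struct S { int a, b, c, d, e; /* about 15 more members */ };

bool matches(const S& s) {
    unsigned ok = 1;
    ok &= (s.a == 10);
    ok &= (s.c == 20);
    ok &= (s.e == 30);
    // ... about 15 more of these ...
    return ok != 0;
}
```

Whether the optimizer actually emits SIMD still depends on the struct layout and target, but this form at least removes the branch chain.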
I worked on a research topic in grad school and learned about holes in files, and how data isn’t removed until the last fd is closed. I use that systems knowledge in my job weekly.
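A quick demonstration of that unlink semantics (my own sketch, POSIX-only, not from the comment): removing a file's name does not free its data while a descriptor to it is still open.

```cpp
#include <stdlib.h>
#include <string>
#include <unistd.h>

// POSIX: unlink() removes the directory entry, but the file's data
// survives until the last open descriptor is closed.
std::string read_after_unlink() {
    char path[] = "/tmp/demoXXXXXX";
    int fd = mkstemp(path);            // create and open a temp file
    write(fd, "still here", 10);
    unlink(path);                      // the name is gone now...
    lseek(fd, 0, SEEK_SET);
    char buf[16] = {};
    read(fd, buf, sizeof buf - 1);     // ...but the data is still readable
    close(fd);                         // only now can the blocks be freed
    return buf;
}
```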
A tip. Kernel development can be lonely, share what you are working on and find others.
https://sebastianhetman.com/why-quantity-matters/
Do stuff, and you learn stuff. Go play.
The quantity group has a trivial way to "hack" the metric: I can just sit there snapping photos of everything. I could even set up a camera to automatically snap photos all day and night. To be honest, unless it's pointed at a stationary wall, there's a good chance I get a good photo, since even a tiny per-shot probability yields an expected success given enough samples.
But I think the real magic ingredient comes from the explanation
> The group never worried about the quality of their work, so they spent time experimenting with lighting, composition, and such.
The way I read this is: "The quantity group felt assured in their grade, so they used the time to be creative and without pressure." But I think if you modified the experiment so that students were graded on a curve, in proportion to the number of photos they took, then the results might differ. Their grade might not feel secure, as it could take just one person to figure out that you can simply snap photos all day as fast as possible. In this setting I think you'd have even less room to explore and experiment than the quality group.

I do think the message is right, though, and I think this is the right strategy in any creative or primarily mental endeavor (including coding). The more the process depends on creativity, the more time needs to be allocated to this type of exploration and freedom. In jobs like research, I think this should be the basis for how they are structured: effectively, you should mostly remove evaluation metrics and embrace the fundamentally ad hoc nature of the work. In things like coding, I think you need more of a mix, and the right mix depends highly on the actual objectives. But I wanted to make the above distinction because I think it is important if we're trying to figure out what those objectives are.
Do an experiment for me. Write down every bug you face today. Even small. I think you'll be surprised at how many there are and even more at how many are likely simple to solve.
I know it's not just me, as so many around me are getting increasingly frustrated with their devices. It's not the big things, it's a thousand paper cuts. But if you look at a paper cut in isolation, it isn't meaningful. That's the problem, and why they get ignored. They get created because we care more about speed than direction. I'd argue that's a good way to end up over a cliff.
I mean, lets leave phones out of this for a moment and look at PCs...
When was the last time your PC operating system crashed?
When was the last time your applications you use on your PC crashed?
When was the last time you could not find an application for your PC that did what you needed to accomplish?
The early days of software absolutely sucked. Crashes, data loss, and limitations were the name of the game. Big things like data corruption were constant problems. You didn't notice the small problems because you were fighting the big ones. The problem space software was solving for people was also relatively small. Now it's easy to find small bugs everywhere because software ate the world. It's harder to name something software has not expanded into than something it has, and yet we are still nowhere near the boundaries of what software can do. When a system has not discovered its boundaries, speed will almost always win over direction.
> When was the last time your PC operating system crashed?
I daily drive both Linux and Mac. Last year I had a Windows laptop from work.

For Linux? I'm not sure, but definitely longer than 6 months. The last crash I remember was my fault and quite a while ago.
For OSX? Last week. The most common one is when I close my laptop, go "oops, need to do X real quick", and when I open it back up the screen doesn't come on. Not sure if it's a backlight issue, but I don't think so: it doesn't seem to be logging in, and keys like volume don't respond where I'd have feedback. The flashlight trick doesn't work. If I wait, the laptop reboots itself, and half the time I get a crash report. (I suspect this happens more than I think, too, since occasionally I'll come back to my laptop and find it rebooted. Like back from lunch or a quick break, not overnight. Less frequently I find crash reports.) This has happened since day 1, even on the last MacBook I had. I get this error at least twice a month. It can also happen when disconnecting my monitor.
Windows? Jesus Christ, how do people live like that? Arch was more stable a decade ago.
> When was the last time your applications you use on your PC crashed?
On Linux, two weeks ago I had a crash while playing Cyberpunk (the most optimized game there ever was... except maybe Starfield). Last week Silksong had a soft error where the joystick stopped responding when my wireless controller issued the low-battery warning. Outside that, I can't think of anything other than when I accidentally run a sim calling for too many resources, but that's not common and I'm not sure it counts.

On Mac, at least every two weeks. I feel like 6 months ago it was more like once a month, though. I've been interviewing lately, and Teams is definitely a bigger problem than Zoom. I think Firefox has crashed twice in the last 6 months? I'm also a tab hoarder. But Mail crashed way more early on, so I switched back to Thunderbird, and while it doesn't crash, I'm sure there is a small memory leak: I'll restart it about once every 2 weeks because it'll start pushing a gig of RAM. I know, I'm picky, but my email client shouldn't ever use a gig of RAM. And while I was writing my PhD thesis a few months back, Preview would occasionally crash. Zathura had no issues. Slack crashes definitely more frequently than Firefox.
Windows? Just about every day. I was able to reduce errors once the IT guy informed me that Windows Hello being used for the login was why Outlook was constantly crashing. Switching to a typed password did a lot. But after that it still reminded me of the days where I was learning Linux and distro hopping.
> When was the last time you could not find an application for your PC that did what you needed to accomplish?
Daily? Okay, but probably weekly?

Less of a problem on Linux. Often it's small things, so I can write a quick shell script. I can find most things I want on the AUR, but often I'll build from source.
OSX? That depends. Do I have to pay? If so, IDK. I'm not willing to pay a subscription fee for what's a glorified shell script.
Windows? Work laptop so wasn't really looking.
I generally agree that things are getting better, but the problem wasn't that big 10-20 years ago, ime. I can only think of 2 instances where I had data loss, and both were on Linux laptops while distro hopping over 15 years ago. In both cases I was able to recover, too. Yeah, maybe the issue was bigger in the pre-Windows 95 days, but that's way back, and hardware has also made significant strides. Don't give software the credit for better hardware.
But I think you're missing an important question:
How often do programs have unintended or unexpected behavior?
That is not a crash, but it is a bug. The calendar issue I mentioned above is not unique to my phone. Later on I also mention how I had essentially no means to merge contacts despite identical names, nicknames, phone numbers, and birthdays (differing only on email and notes). WHO THE FUCK THINKS "FIND CONTACTS" IS A BETTER SOLUTION THAN CLICKING TWO ENTRIES AND THEN CLICKING "MERGE"? And guess what, it couldn't find the duplicates. This is a trivial database problem! I was eventually able to find a way to merge after some googling. But I discovered the duplication because my gf had 3 entries on my calendar for her birthday. When I merged, she ended up with 4! Again, that is a trivial database problem.

This category of problem is greatly increasing in frequency. I know you might think it's a comparison bias, and I think that's a reasonable guess, but it isn't. I've definitely "infected" my friends and family so they're more "sensitive" to bugs, but even years after that they agree that their devices are becoming harder to use for the same tasks they did before. Either through little bugs like this piling up or through new features being pushed on them that they don't want and that disrupt their workflow. That last one is really common.
So I don't want to dismiss you, because I don't think you're entirely wrong. But I also think you're being too quick to dismiss me. Your justification isn't a complete explanation of my experience. Nor does it account for how programs are getting slower and heavier. I mean God damn, how many seconds does it take for Microsoft Word to open (cached? not cached?) and how many gigabytes of RAM does it use? Those shouldn't even be the units of measurement we're discussing! Both are at least an order of magnitude too large. I'm absolutely certain programs are bloating. And it should be unsurprising that with bloat comes more opportunity for errors.
This is also the type of problem I expect average people to not notice as it happens more slowly and if you don't understand computers it's easy to believe it's just the way it is. But we're on HN, we do. We know better. We also know how the sausage is made and we've experienced the increasing pressure for speed and seen the change in how programs and programmers are evaluated. I do not think you can discount these effects.
It shouldn't be a surprise that moving faster correlates strongly with making more mistakes. You don't deny the optimization for speed, but do you really think you can accelerate for free? There's a balance and I'm confident we've sped through that equilibrium.
I'm talking code from people with no programming experience, trying to contribute to open-source mod projects by pattern matching words they see in the file. They see the keyword static a lot, so they just put static on random things.
That's why. If all the code in a project is stupid, then relatively speaking there's no stupid code.
Go read the Linux kernel mailing list.
And if an unacceptable patch made it to Linus's desk, someone downstream hasn't been doing their damn job. The submaintainers are supposed to filter the stupid out, perhaps by more gentle guidance toward noob coders. The reason why Linus gets so angry is because the people who let it through should know better.
Therefore, if you push yourself to the limit of your abilities to create the most clever code you can, you won't be able to debug it.
> Therefore, if you push yourself to the limit of your abilities to create the most clever code you can, you won't be able to debug it.
If only advocates of LLM-based code generation understood this lemma.
He said it with a very specific idea in mind, and like most of software engineering "laws", if you know enough to know when to apply it, you don't need the law.
Amusing coincidence. I also wanted to be a rock star, or at least a successful working musician. My mom also talked me out of it. Her argument was: If there's no way to learn it in school, then go to school anyway and learn something fun, like math. Then you can still be a rock star. Or a programmer, since I had already learned programming.
So I went to college as a math major, and eventually ended up with a physics degree.
I still play music, but not full time, and with the comfort of supporting myself with a day job.
I'm inspired by the quote from Pablo Casals when he was in his 90s. They asked him why he still needed to practice, and he said: "Because I'm finally beginning to see some improvement."
There was perhaps less of a distinction between "arts" and trades. People did all kinds of work on paintings, sculptures, etc., and expected to get paid for it. They rarely put their names on their works.
I've read a bit about Bach's life, and he was always concerned about making money.
One music history textbook I read identified the invention of printed music as the start of the "music industry." Before the recording era, people composed and published sheet music. There were pianos in middle class parlors, and people bought sheet music to play. Two names that come to mind were Scott Joplin and Jelly Roll Morton. Movie theaters hired pianists or organists, though that employment vanished when talking movies came out. The musicians of the jazz era were making their livings from music. One familiar name is Miles Davis. His father was a dentist, and his parents considered music to be a respectable middle class career for their son. People did make a living from recordings before the Internet era. Today, lucrative work in the arts still exists for things like advertising and sports broadcasting.
(Revealing my bias, I'm a jazz musician).
In fact the expectation that an artist should not earn a decent living is kind of a new thing.
But it's good for discovery, and artists generally don't make much off album sales either; concerts and merchandise is where it's at.
Never too late to try stuff out, of course, but very little beats structured higher-ed education in relatively small classes (I think there were only about 24 people in the robotics major?).
It shouldn't be hard to go beyond what almost all universities provide.
> I still play music, but not full time, and with the comfort of supporting myself with a day job.
Some people say:

    Pursue your dream or you will regret it.

This is said by people who regret their own choices.

Other people say:

    Don't make your dream a job, because all it will
    be is a job and no longer special.

This is said by people who had misconceptions about what pursuing their dream actually entailed.

I say:

    Happiness is found in neither a dream chased nor a chosen
    profession. It is instead a choice we make each day in
    what we do, in how we view same, and if we allow
    ourselves to possess it.
    What constitutes each day is immaterial.

But that's just me.

The rest of those sayings are just for us plebs who have to rationalize working 40-60 hours a week.
There is no sociopolitical statement, no call-to-arms, no pontification as to the measure of one's life, no generational implications. There is an existential consideration, but not of the nature your post implies.
Happiness is an individual choice, available to us all at any time.
Full stop.
What you're promoting is a deeply narcissistic worldview, and I hope either the cure or the consequences reach you soon.
Though maybe those are going to present as the same thing.
The unhinged part was to imply that people can just choose to be happy under any circumstance which is obviously magical thinking.
Worse, I'm afraid. It's ideological thinking of the basest sort.
Magical thinking at least lets a person see that their bullshit isn't working, potentially even walk it back, correct themselves.
In ideological thinking, you gotta act as if the impossible wish has already come true. Reality says otherwise? Well, wish harder - or else. That's ideological thinking for ya.
And those are only two of the cards in that deck. I've observed that with sufficient mental self-mutilation, people can in fact choose to be happy under any circumstance. Occasionally even at no cost to innocents. (Though rarely - who'd permit them a clean getaway?)
Woulda had a field day with figuring out what complexes are puppetting AdieuToLogic, if their most coherent argument wasn't "fuck off" - pardon, "full stop".
I never said nor implied that. It has only been the person with the account name "cardanome" who has applied absolutist determiners such as "all" and "any" to mischaracterize what I wrote.
> Worse, I'm afraid. It's ideological thinking of the basest sort.
Projection is a poor position to espouse, and one easily identified, as above; this is further supported by your previous assertion that "[w]hat you're promoting is a deeply narcissistic worldview."
Reread what I originally wrote in this thread objectively, if either you or "cardanome" can:
I say;
Happiness is found in neither a dream chased nor a chosen
profession. It is instead a choice we make each day in
what we do, in how we view same, and if we allow
ourselves to possess it.
What constitutes each day is immaterial.
But that's just me.
This is what is called a personal philosophy[0], specifically: 2 a : pursuit of wisdom
b : a search for a general understanding of values and
reality by chiefly speculative rather than
observational means
> Woulda had a field day with figuring out what complexes are puppetting AdieuToLogic, if their most coherent argument wasn't "fuck off" - pardon, "full stop".

I was directly replying to this[1] post, which contains phrases such as "I think PP's point was ..", "elites of your heretical society", and "at odds with your own inner values and moral compass".
If you and/or "cardanome" cannot comprehend why I finished with "full stop" in response to this post, then there is nothing I can do to help either or both of you understand.
There's a reason why you're finding it necessary to explain what a personal philosophy even is. Think about that before you go ooh wizdum (and if you have a spare dictionary, throw it up.)
>No, my point is happiness is a choice.
If happiness was a choice, there would be no point to happiness.
> There's a reason why you're finding it necessary to explain what a personal philosophy even is. Think about that before you go ooh wizdum (and if you have a spare dictionary, throw it up.)
All you have done in this thread is direct conversations into ad hominem attacks you launch and/or a meaningless non sequitur such as:
> If happiness was a choice, there would be no point to happiness.
As I alluded in a peer comment, I hope you reflect on why you choose this style of communication and have someone in your life you trust in which you confide regarding same.
I am not that person and shall no longer enable your vitriol.
I could of course explain exactly what my words "hinge" on, and what "provokes" them. I've found that this does not create understanding where previously it was lacking. So instead let's talk about what you said.
Two of your words I consider harmful and insulting:
>is seemingly
What the hell?!
...oh, right:
- If you say "X is Y", you gotta back it up. Scary!
- If you say "X seems to me Y", you gotta justify your perceptions. Nasty!
- But saying "X is, seemingly, Y", that's totally safe! Because it's bullshit. It posits a statement as true knowledge and elides the need for justification outright, on the syntactic level.
What's worse, you probably didn't even notice you were doing this. You just picked up the pattern from people who looked like they had what you wanted.
That cognitive habits like yours are so widely accepted as "normal" is exactly why I'm guessing that CBT (or, for that matter, the parent poster's wireheading suggestions) would probably be super effective on you, not kidding.
If you were to give those a shot, anyway. Instead of, you know, just stating existences of literatures at people. Also unless your current state of mind wasn't already achieved by similar methods. In any case, do report back!
Look at the multiple ad hominem attacks you put forth in the above comment alone. Imagine you are the person to whom you replied when reading it.
All because someone disagreed with a post you authored by writing in part:
... your comment is seemingly unhinged and unprovoked ...
I truly hope you reflect on this and can find a way to talk about it with someone.

This is pure magical thinking. There are many reasons to be not happy. Being in pain, having lost a loved one, not having your physical needs met and well simply having depression or a myriad of other problems.
And people shouldn't be happy with all circumstances. It is not healthy to be happy all the time. Sometimes accepting the negative emotions is important for growth.
I understand what both of you are saying, but I think it's disingenuous to assume that happiness is simply the opposite of depression.
You're welcome. (Sarcasm returned)
> This is pure magical thinking. There are many reasons to be not happy. Being in pain, having lost a loved one, not having your physical needs met and well simply having depression or a myriad of other problems.
Of course there are many life situations where "being happy" is not what a person can or needs to experience at that moment, where "moment" is defined as some period of time determined by each person. And there are medical conditions where trying to choose happiness is simply not possible, such as "having depression or a myriad of other problems."
> And people shouldn't be happy with all circumstances. It is not healthy to be happy all the time. Sometimes accepting the negative emotions is important for growth.
I never wrote anything to that effect. What I wrote was:
Happiness is an individual choice,
available to us all at any time.
Just because a choice is available does not mandate that it must be chosen immediately and unconditionally.

But you go ahead and mischaracterize what I wrote to serve whatever agenda you have, and I will reiterate what I posted earlier in this thread:
My key point is that happiness is a choice.
I hope everyone can find a way to choose it.Some want to live on the seas. They can be perfectly happy as a sailor, even if poor and single.
Some want a family, educated children, respect. They would likely need a nice house, enough resources to get a scholarship, a shot at retirement. This is obtainable working in public service, even without money.
But most have multiple dreams. That's what makes things complex. The man who wishes for a wife but also wishes to be on the seas will find far fewer paths available. Sailors also don't generally get respected by most in-laws.
To mix the two, they try to find the dream-job. Perhaps work for a big oil company and be 'forced' to go offshore.
Eventually people learn that desire is suffering in some form and cut down on the number of dreams. They may even see this as mature and try to educate others that this is the way. Those who have kids often are forced to pick kids as the dream. So there's a selection bias as well.
"Don't put one foot in your job and the other in your dream, Ed. Go ahead and quit, or resign yourself to this life. It's just too much of a temptation for fate to split you right up the middle before you've made up your mind which way to go".
They write a maze algo in any new language they learn just to learn bits of the language.
I used to get hung up on things like doing a loop when a ternary operator would work. "Somebody is going to see this and be rude about it." But sometimes you write code how you're thinking about the problem at the time. And if you think of it as a loop, or a series of if statements, or whatever, do it that way.
If it makes you feel better, note it in a comment to revisit later. And if somebody is rude about it, so what. It's not theirs, it's yours.
I used to try to think ahead, plan ahead and "architect", then I realized simply "getting something on paper" corrects many of the assumptions I had in my head. A colleague pushed me to "get something working" and iterate from there, and it completely changed how I build software. Even if that initial version is "stupid" and hack-ish!
I think it is common for a programmer to just start programming without coming up with any model, and just try to solve the problem by adding code on top of code.
There are also many programmers who go with their first “working” implementation, and never iterate.
These days, I think the pendulum has swung too far from thinking about the program, maybe mapping it out a bit on paper before writing code.
1. Get it working.
2. Get it working well.
3. Get it working fast.
This puts "just get it working" as the first priority. Don't care about quality, just make it. Then, and only once you have something working, do you care about quality. This is about getting the code into something reasonable that would pass review (e.g., architecturally sound). Finally, do an optimization pass.
This is the process I follow for PRs and projects alike. Sometimes you can mix all the steps into a single commit, if you understand the problem and solution domain well. But if you don't, you'll likely have to split it up.
Depending on how low-level your code is, this... may not work out in those terms.
In other words, I’d say that if you actually want good software—and that includes making sure its speed falls within a reasonable factor of the napkin-math theoretical maximum achievable on the platform—your three steps can easily constitute three entire rewrites or at least substantial refactors. You might well need to rearchitect if the “working well” version has multiple small loops split by domain-level concern when the hardware really wants a single large one, or if you’re doing a lot of pointer-chasing and need to flatten the whole thing into a single buffer in preorder, or if your interface assumes per-byte ops where SIMD can be applied.
This is not a condemnation of the strategy, mind you. Crap code is valuable and I wish I were better at it. I just disagree that the transition from step 2 to step 3 can be described as an optimization pass. If that’s what you limit yourself to, you’ll quite likely be forced to leave at least an order of magnitude’s worth of performance on the table.
And yes, most consumer software is very much not good by that definition.
(For instance, I’m expecting that the Ladybird devs will be able to get their browser to work well for daily tasks—which I would count a tremendous achievement—but I’m not optimistic about it then becoming any faster than the state of the art even ten or fifteen years ago.)
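As a toy illustration of the pointer-chasing point above (my own example, not from the comment): flattening a node-based tree into a single preorder buffer turns traversal into a linear, cache-friendly scan over one contiguous allocation.

```cpp
#include <memory>
#include <vector>

// Illustration of "flatten the whole thing into a single buffer in
// preorder": the node-based tree chases pointers on every step; the flat
// vector is one contiguous scan the hardware (and SIMD) handles far better.
struct Node {
    int value;
    std::unique_ptr<Node> left, right;
};

void flatten_preorder(const Node* n, std::vector<int>& out) {
    if (!n) return;
    out.push_back(n->value);                // parent first...
    flatten_preorder(n->left.get(), out);   // ...then left subtree
    flatten_preorder(n->right.get(), out);  // ...then right subtree
}

int sum_flat(const std::vector<int>& flat) {
    int s = 0;
    for (int v : flat) s += v;              // trivially vectorizable loop
    return s;
}
```

The catch, as the comment says, is that getting here from a "working well" version usually means rearchitecting interfaces, not just an optimization pass over the old loop.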
Sometimes, it might even be completely separate people working on each step... separated by time and space.
In any case, most software generally stops at (2) simply due to the fact that any effort towards (3) isn't worth the effort -- for example, there's very little point in spending two weeks optimizing a report generation that runs in the middle of the night, once a month. At some point, there may be, but usually not anytime soon.
I much prefer a code base that is readable and straightforward (maybe at the expense of some missed perf gains) over code that is highly performant but hard to follow/too clever.
I've used a similar mantra of "make it work, make it pretty, make it fast" for two decades.
I think I've had to get to step 3 once and that was because the specs went from "one device" to "20 devices and two factories" after step 1 :D
Again, this is something I see bite enterprise-style applications quite often, as they can be pushed out piecemeal: you can get things like the datastore/input APIs/UI to the customer quickly, then over the next months things like reporting, auditing, and fine-grained access controls get put in, and suddenly you find yourself stuck working around major issues where a little bit of up-front thinking about the later steps would have saved you a lot of heartache.
I once joined a team where they knew they were going to do translations at some point ... and the way they decided to "prepare" for it was absolutely nonsensical. It was clear none of them had ever done translations before, so when it came time to actually do the work, it was a disaster. They had told product/sales "it was ready" but it didn't actually work -- and couldn't ever actually work. It required redesigning half the architecture and months of effort across a whole team to get it working. Even then, some aspects were completely untranslatable that took an additional 6-8 months of refactoring.
So, another lesson is to not try to engineer something unless your goal is to "get it working". If you don't need it, it is probably still better to actually wait until you need it.
Sometimes I think about code structure like a sudoku where you have to eliminate two possibilities by following through what would happen. Writing the code is (to me) like placing the provisional numbers and finding where you have a conflict. I simply cannot do it by holding state in my head (ie without making marks on the paper).
It could definitely be a limitation of me rather than generally true.
But an easy example is "just build the single player version" (of an application) can be worse than just eating your vegetables. It can be very difficult to tack-on multiplayer, as opposed to building for this up front.
Business requirements != programming requirements/features.
Very often both the business requirements and programming requirements change a lot since unless you have already written this one thing, in the exact form that you are making it now, you will NEVER get it right the first time.
It is possible to build systems that can adapt to change, by decoupling and avoiding cross cutting concerns etc you can make a lot of big sweeping changes quite easily in a well designed system. It's just that most developers are bad at software development, they make a horrible mess and then they just keep making it worse while blaming deadlines and management etc.
I thought it was a masterpiece of abusing the C pre-processor to ensure that all variables used for player physics, game state, inputs, and position outputs to the graphics pipeline were guarded with macros to ensure as the (overwhelmingly) single-player titles continued to be developed that the code would remain clean for the two titles that we hoped to ship with split-screen support.
All the state was wrapped in ss_access() macros (“split screen access”) and compiled to plain variables for single-player titles, but with the variable name changed so writing plain access code wouldn’t compile.
I was proud of the technical use/abuse of macros. I was not proud that I’d done a lot of work and imposed a tax on the other teams all for a feature that producers wanted but that in the end we never shipped a single split-screen title. One console title was cancelled (Saturn) and one (mine) shipped single-player only (PlayStation).
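The original code isn't shown, but the mechanism described might have looked roughly like this (a hypothetical reconstruction, all names invented): the token-pasting rename is what makes plain, unguarded access fail to compile in either build.

```cpp
// Hypothetical reconstruction of the ss_access() idea, not the original
// code. Split-screen builds index per-player state; single-player builds
// compile to plain (but renamed) variables, so writing `health` directly
// fails to compile in both configurations.
#ifdef SPLIT_SCREEN
struct PlayerState { int ss_health; };
PlayerState g_players[2];
int g_current_player = 0;
#define ss_access(var) (g_players[g_current_player].ss_##var)
#else
int ss_health = 0;                     // plain variable, deliberately renamed
#define ss_access(var) (ss_##var)
#endif

void take_damage(int amount) {
    ss_access(health) -= amount;       // compiles in both builds
    // health -= amount;               // would not compile in either
}
```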
We should definitely have a plan before we start, and sketch out the broad strokes both in design and in actual code as a starting point. For smaller things it's fine to just start hacking away, but when we're designing an entire application I think the right way to approach it is to plan it out and then solve the big problems first. Like multiplayer.
They don't have to be completely solved, it's an iterative process but they should be part of the process from the beginning.
An example from my own work: I took over an app two other developers had started. The plan was to synchronize data from a third party to our own db, but they hadn't done that. They had just used the third party api directly. I don't know why. So when they left and I took over, I ended up deleting/refactoring everything because everything was built around this third party api and there was a whole bunch of problems related to that and how they were just using the third party's data structure directly rather than shaping the data the way we wanted it. The frontend took 30-60+ seconds to load a page because it was making like 7 serialized requests, waiting for a response before sending the next one and the backend did the same thing.
Now it's loading instantly, but it did require that I basically tear out everything they'd done and rewrite most of the system from scratch.
Yes and no, depending on how dependent you become on that first iteration, you might drown an entire project or startup in technical debt.
You should only ever just jump in if:
A) it's a one off for some quick results or a demo or whatever
B) it's easy enough to throw away and nobody will try to ship it and make you maintain it
That said, having so much friction and analysis paralysis that you never ship is also no good.
Then again, it takes quite a bit of practice to become good enough at refactoring to actually pull that off.
"Just make it exist first. You can make it good later."
You're going to write the "stupid code" to get things out the door, get promoted and move on to another job, and then some future engineer has to come along and fix the mess you made.
But management and the rest of the org won't understand why those future engineers are having such a hard time, why there's so much tech debt, and why any substantial improvements require major rework and refactoring.
So the people writing the stupid code get promoted and look good, but the people who have to deal with the mess end up looking bad.
A foundation that isn't useful to build atop is just a shitty foundation. Everyone is taking it for granted that building a good foundation is impossible if you haven't built a shitty foundation for the same building first, but that's not the only way to do things.
Depends on what you do. If you build a network protocol, you'd better architect it carefully before you start building upon it (and maybe others do, too).
The question is: "if I get this wrong, how much impact does it have?". Getting the API of a core service wrong will have a lot of impact, while writing a small mobile app won't affect anything other than itself.
But the thing is, if you think about that before you start iterating on your small app, then you've already taken an architectural decision :-).
Basically, it comes down to whether you have a good foundation to build from. With more experience, you can build a better foundation.
I didn't know about Deno and streams, but this looks fine
import { TextLineStream } from "jsr:@std/streams"; // TextLineStream isn't built in

const file = await Deno.open("huge-quotes.txt");
const quotes: string[] = [];
await file.readable
  .pipeThrough(new TextDecoderStream())
  .pipeThrough(new TextLineStream())
  .pipeTo(new WritableStream({
    write(line) {
      quotes.push(line);
    },
  }));

> “Enjoy writing it, it doesn’t have to be nice or pretty if it’s for you. Have fun, try out that new runtime or language.”
It doesn’t have to be nice or pretty EVEN if it’s NOT for you. The value in prototyping has always been there and it’s been very concrete: to refine mental models, validate assumptions, uncover gaps in your own thinking (or your team’s), you name it.
Unfortunately it feels like the pendulum has swung to the complete opposite extreme. There’s a lot of “theatre” in planning: writing endless tickets and refining them for WEEKS before actually starting to write code, in a way that’s actively harmful for building software. When you get stuck in planning mode you let wrong assumptions grow and get baked into the design, so the sunk cost keeps rising.
Simply have a BASIC and SHARED mental model of the end goal with your team and start prototyping. LLMs have made this RIDICULOUSLY CHEAP. But, the industry is still stuck in all the wrong ways.
Prototypes (start ups) rarely have the luxury of "getting it right", their actual goal is "getting it out there FAST to capture the market (and have it working enough to keep the market)"
(Some — apologies, but I'm not enough of a game dev to say which types this applies to) game devs are more or less build it, ship it, and be done with it; players tend to be forgiving of most bugs, and they move on to the next shiny thing long before it's time to fix all the things.
Once the product has traction in the market and you have paying customers, then it's time to deal with the load (scale up) and the bugs. I recall reading somewhere that it's probably best to drop the start-up team — they did their job (and are now free to move on to the next brilliant idea) — and replace them with a scale-up team, who will do the planning, architecting, and preparation for the long-term life of the software.
I think that approach would have worked for Facebook, for example: they had their PHP prototype that captured the market very quickly, and (IMO) they should have moved on to a scale-up team, who could have used the original code as a facade, strangling it to replace it with something funky. Java/C++ were what was available at the time, but Go would be what I'd suggest now.
> There’s a lot of “theatre” in planning, writing endless tickets and refining them for WEEKS before actually starting to write code, in a way that’s actively harmful for building software.
I'd love to have a "high paying job" where I am allowed to start prototyping and modelling the problem and then iteratively keep on improving it into fully functional solution.
I won't deny that the snowballing of improvements and functional completeness manifests as acceleration of "delivery speed" and as a code-producing experience is extremely enjoyable. Depth-first traversal into curiosity driven problem solving is a very pleasurable activity.
However, IME in the real world, someone up the chain is going to ask "when will you deliver this?". Only once have I been in a privileged enough position in a job to say "I am on it and I will finish it when I finish it... and it will be really cool".
Planning and task breakdown, as a developer, is pretty much my insurance policy. When someone up the chain (all the way down to my direct manager) comes asking "How much progress have you made?", I can say (or "present the data", as it's called at a certain company) "as per the agreed plan, out of the N things I have done k (< N) so far; however, at this (k+1)th thing I am slowing down or blocked, because during planning that-other-thing never got uncovered and we have a scope-creep/external-dependency/cattle-in-the-middle-of-the-road issue". At which point a certain type of person will also go all the way to push the blame onto another colleague to make themselves look better, and hence eligible for promotion.
I would highly encourage everyone to participate in the "planning theatre" and play your "role".
OR, if possible start something of your own and do it the way you always wanted to do it.
I'm curious who is in these kinds of jobs. Because I've never seen this in practice.
Put another way, refining tickets for weeks isn't the problem; the problem is when you do this without prototyping, chances are you aren't actually refining the tickets.
Planning stops when you take steps that cannot be reverted, and there IS value in delaying those steps as much as possible, because once you take them your project becomes exposed to outside risk. Long planning is valuable because of this; it's just that many who advocate for long planning would simply take a long time and not actually use that time for planning.
Also, 2010 was just yesterday my young friend :)
Oh, this sort of "dumb" code. That is just exercise. It bothers me that in this field we don't think we should rehearse and exercise and instead use production projects for that.
Actual dumb code is one that disregards edge cases or bets on things being guaranteed when they're not.
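A minimal sketch of what "betting on things being guaranteed when they're not" looks like in practice (the names and scenario here are made up for illustration):

```typescript
// Hypothetical example: code that bets on a guarantee vs. code that doesn't.

interface Order {
  total: number;
}

// The "actual dumb" version: works only while every caller happens to
// pass a non-empty array, and crashes the moment that assumption breaks.
function latestTotalDumb(orders: Order[]): number {
  return orders[orders.length - 1].total; // TypeError on []
}

// The edge case handled explicitly: the type signature now admits
// that there may be no orders at all.
function latestTotal(orders: Order[]): number | undefined {
  return orders.at(-1)?.total;
}

console.log(latestTotal([])); // undefined
console.log(latestTotal([{ total: 5 }, { total: 9 }])); // 9
```

The first function isn't "stupid but fine"; it's a latent bug waiting for the first empty input.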
Whether an idea is good or not can often only be judged when it becomes more concrete. The actual finished project is as concrete as it gets, but it takes time and work to get there. So the next best thing is to flesh it out as much as possible ahead and decide based on that whether it is worth doing it that way.
Most people have the bad habit of being too attached to their own ideas. Kill your darlings. Ideas are meant to be either done, shelved or thrown into the bin. It doesn't do any good to roll them around in your head forever.
The first one that comes to mind relates closely to naming. If we think about a program in terms of its user-facing domain, then we might start to name and structure our data, functions, and types too specifically for that domain. But it's almost always better to separate computational, generic data manipulation from domain language.
You only need a _little bit_ more time to move much of the domain specific stuff into your data model. Think of domain language as values rather than field names or types. This makes code easier to work with _very quickly_.
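A small sketch of the idea, with hypothetical names: instead of baking the domain word into a field or type name (say, `draftTotal` and `sentTotal` fields), the domain word becomes a value, and the surrounding code stays generic:

```typescript
// The domain word ("draft", "sent", "paid") lives in the data as a value,
// not in field names. All names here are illustrative assumptions.
type Status = "draft" | "sent" | "paid";

interface Invoice {
  status: Status;
  total: number;
}

// Generic manipulation: this works for any status without adding fields.
function totalByStatus(invoices: Invoice[], status: Status): number {
  return invoices
    .filter((i) => i.status === status)
    .reduce((sum, i) => sum + i.total, 0);
}

const invoices: Invoice[] = [
  { status: "draft", total: 100 },
  { status: "sent", total: 250 },
  { status: "sent", total: 50 },
];

console.log(totalByStatus(invoices, "sent")); // 300
```

Adding a new status is now a one-line change to the `Status` type instead of a new set of fields and functions.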
Another stupidity is to default to local state. Moving state up requires a little bit of planning and sometimes refactoring and one has to consider the overall data model in order to understand each part. But it goes a long way, because you don't end up with entangled, implicit coordination. This is very much true for anything UI related. I almost never regret doing this, but I have regretted not doing this very often.
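A rough illustration of "moving state up", assuming a hypothetical UI with a list and a detail pane: rather than each widget keeping its own copy of the selection and coordinating implicitly, the selection lives in one place that both read from.

```typescript
// Entangled version (sketched in comments): each widget caches its own copy
// and they must somehow stay in sync:
//   class ListView   { selectedId: string | null = null; ... }
//   class DetailView { selectedId: string | null = null; ... }

// Lifted version: one shared source of truth, one write point.
interface AppState {
  selectedId: string | null;
}

const state: AppState = { selectedId: null };

function select(id: string): void {
  state.selectedId = id; // the only place selection changes
}

function renderDetail(): string {
  return state.selectedId === null
    ? "nothing selected"
    : `item ${state.selectedId}`;
}

select("42");
console.log(renderDetail()); // "item 42"
```

The payoff is exactly the one described above: no implicit coordination, because there is nothing to coordinate.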
A third thing that is unnecessarily stupid is to spread around logic. Harder to explain, but everyone knows the easy feeling of putting an if statement (or any kind of branching, filtering etc.) that adds a bunch of variables somewhere, where it doesn't belong. If you feel pressed to do this, re-consider whether your data is rich enough (can it express the thing that I need here) and consistent enough.
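One way to read "is your data rich enough": if the same condition is re-derived with an `if` at several call sites, compute it once and carry it with the data. A hypothetical sketch (the discount rule here is invented):

```typescript
// Scattered version, repeated wherever a discount is needed:
//   if (user.signupYear < 2020 && user.plan === "pro") { ... }

interface User {
  signupYear: number;
  plan: "free" | "pro";
  legacyDiscount: boolean; // derived once at the boundary, carried with the data
}

function enrich(raw: { signupYear: number; plan: "free" | "pro" }): User {
  return {
    ...raw,
    legacyDiscount: raw.signupYear < 2020 && raw.plan === "pro",
  };
}

const u = enrich({ signupYear: 2018, plan: "pro" });
console.log(u.legacyDiscount); // true
```

Downstream code just reads `u.legacyDiscount` instead of smuggling the branching logic into places where it doesn't belong.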
I once worked on a Perl script that had to send an email to "Harry". (Name changed to protect the innocent). I stored Harry's email address in a variable called "$HARRY".
Later on a second person (with a different name) wanted to get the emails as well. No problem, just turn the scalar into an array, "@HARRIES".
I thought it was very funny but nobody else did.
Writing stupid code is like walking to the shop. You're not going to improve your marathon time, but that's not the point. It's just using an existing skill to do something you need to do.
But you should also study and get better at things. If you learnt to cycle you could get to the shop in a third of the time. Similarly, if you learn new languages, paradigms, features etc. you will become a more powerful programmer.
As the original put it: go ahead, write the "stupid" code, I dare ya!
But for strategic decisions, having a well-researched document (a PRD or similar) helps as a starting point for iteration, and the approach you take will be influenced by your team's culture.