I'll always have a soft spot in my heart for Armed Bear because that JVM library ecosystem is enormous https://github.com/armedbear/abcl
https://docs.racket-lang.org/guide/performance.html
As I understand it the difference between raco make and raco exe is that the latter bundles a VM.
I don't really care about these minutiae, it's a great platform for GUI development that consistently builds as well on Debian as Windows.
Can you elaborate on that? I'm interested in deciding on a good tech stack for desktop GUI app development for personal projects, so was interested in your comment.
The problem with that approach is that you need to figure out some parts on your own, like state management. If you need that flexibility it's still a good option; otherwise you can opt for gui-easy, a library on top of the GUI toolkit that adds observables for state management and a more declarative API: https://docs.racket-lang.org/gui-easy/index.html
I haven't managed to get cross-compilation going, but I've had no problem just copying my Racket files to another computer and building there. It's supposed to be possible, though; you'll probably manage to figure it out if it's important to you.
The gui-easy library makes it trivial to pack up some small tool in a GUI in a few tens of lines of code. I'm guessing there is a way to prune the binaries but don't really care about it myself, I just go with the default ~20 MB executables.
"Every definition or expression to be evaluated by Racket is compiled to an internal bytecode format, although “bytecode” may actually be native machine code. In interactive mode, this compilation occurs automatically and on-the-fly. Tools like raco make and raco setup marshal compiled bytecode to a file, so that you do not have to compile from source every time that you run a program. ... For the CS implementation of Racket, the main bytecode format is non-portable machine code."
There's more about what this entails here and how to view the generated assembly: https://docs.racket-lang.org/reference/compiler.html#(part._...
(Source: I'm one of Racket's core developers.)
https://minikanren.org/workshop/2020/minikanren-2020-paper7....
Though, Reddit eventually realized that javascript: URLs - in Markdown - were an XSS risk.
We’ve all heard about how “security through obscurity” isn’t real security, but so many simple anti-abuse measures are very effective as long as their exact mechanism isn’t revealed.
HN’s downvote and flagging mechanisms make for quick cleanup of anything that gets through, without putting undue fatigue on the users.
Security measures uphold invariants: absent cryptosystem breaks and implementation bugs, nobody is forging a TLS certificate. I need the private key to credibly present my certificate to the public. Hard guarantee, assuming my assumptions hold.
Likewise, if my OS is designed so sandboxed apps can't steal my browser cookies, that's a hard guarantee, modulo bugs. There's an invariant one can specify formally --- and it holds even if the OS source code leaks.
Abuse prevention? DDoS avoidance? Content moderation? EDR? Fuzzy. Best effort. Difficult to verify. That these things are sometimes called security products doesn't erase the distinction between them and systems that make firm guarantees about upholding formal invariants.
HN abuse prevention belongs to the security-adjacent but not real security category. HN's password hashing scheme would fall under the other category.
This is something that programmers enjoy repeating but it has never been true in the real world.
It is related to Kerckhoffs principle: "The design of a system should not require secrecy, and compromise of the system should not inconvenience the correspondents"
This means that all of the security must reside in the key and little or nothing in the method, as methods can be discovered and rendered ineffective otherwise. Keep in mind that this is for communication systems where it is certain that the messages will be intercepted by a hostile agent, and we want to prevent that agent from reading the messages.
When implementing modern cryptographic systems, it is very easy to misuse the libraries, or to try to reimplement cryptographic ideas without a deep understanding of the implications, and this leads to systems that are more vulnerable than intended.
Security by obscurity is the practice of some developers of reinventing cryptography by applying their cleverness to new, unknown cryptosystems. However, doing this correctly requires deep mathematical knowledge of finite fields, probability, linguistics, and so on. Most people have not spent the required decades learning this. The end result is that those "clever" systems with novel algorithms are much less secure than tried-and-true cryptosystems like AES and SSL. That's why we say "security by obscurity" is bad.
Now, going back to the main topic: Hacker News is not a cryptographic system where codified messages are going to be intercepted by a hostile actor. Therefore Kerckhoffs' principle doesn't apply.
EDIT: UltraLisp for QuickLisp.
> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them. Unfortunately, separating out the secret parts would by now be a lot of work. The time to do it will be if and when we eventually release the alternative Arc implementations we’ve been working on.
More here: https://news.ycombinator.com/item?id=44099560.
Open-sourcing HN wouldn't work because of the anti-abuse stuff, etc. But open-sourcing the Arc implementation (i.e. Clarc) would be much easier. The way to do it would be to port the original Arc release (http://arclanguage.org/) to Clarc. It includes a sample application which is an early version of HN, scrubbed of anything HN- or YC-specific.
If you're looking for volunteers... :)
Vincent (my full name appears in linked projects ;) )
The business logic is encoded into the original structure, making migration to anything different effectively impossible - without some massive redesign.
This, I think, more than any response, indicates why the philosophy of “it’s working, don’t touch it” will always win and new feature requests will be rejected.
HN didn’t depaginate based on user desires, it was based on internal tooling making that feature available within the context of the HN overall structure.
HN has zero financial or structural incentive to do anything but change as little as possible. That’s why this place, unfortunately unique on the internet at this point, has lasted.
HN is not *trying* to grow; it’s trying to do as little as possible while staying alive. So by default it stays coherent: its structure isn’t built for growth, and changing that structure would break the encoded rituals (anti-abuse measures).
Something to think about when you’re trying to solve many problems like “legacy code”, “scaling needs”, etc.: it all comes back to baseline incentives.
Mortality.
I use the HN Arc code, but the site is about retro computing and gaming.
Maybe I should find a way to have APOD every day again.
What makes HN work is the tight focus and heavy moderation.
Backend services in languages other than Hack do exist, of course. When I left Meta (then called Facebook) in 2019, they were almost exclusively in C++. Now I don’t know for sure but I think Rust is gaining a lot of popularity for non-Hack stuff.
One of the original motivating examples was Unix-like systems (simple implementation, few correctness guarantees in interfaces) vs. Lisp-based systems (often well-specified interfaces, but with complicated implementations as the cost).
https://dreamsongs.com/WorseIsBetter.html
> One day in Spring 1989, I was sitting out on the Lucid porch with some of the hackers, and someone asked me why I thought people believed C and Unix were better than Lisp. I jokingly answered, "because, well, worse is better." We laughed over it for a while as I tried to make up an argument for why something clearly lousy could be good.
He then wrote Lisp: Good News, Bad News, How to Win Big (https://www.dreamsongs.com/WIB.html) for his EuroPAL keynote speech.
> JWZ excerpted the worse-is-better sections [from Lisp: Good News, Bad News, How to Win Big] and sent them to his friends at CMU, who sent them to their friends at Bell Labs, who sent them to their friends everywhere.
The excerpt: https://www.dreamsongs.com/RiseOfWorseIsBetter.html
The lisp path won, Lispus instead of Linux, and we had AGI in 1997 due to code elegance.
Maybe now it's been ported to Common Lisp it'll be easier to add features.
The flag button?
Really? IIRC, Slashdot's moderation was garbage, remember penis-bird, GNAA, goatse?
But yes, I remember that to see that stuff you had to expand the down-modded comments.
That stuff was also a product of its time. Slashdot had the strong free speech ethos of the early internet, so CmdrTaco had a policy of never deleting comments unless they broke the site somehow or there was a legal process requiring it. Sometimes that meant very new stories would get these comments and they'd be visible before they got modded, but if you browsed stories that had been active for a little while you wouldn't see them.
One downside of a sophisticated moderation system on a site designed for programmers is that some people take it as a challenge. The reason Slashdot trolling was a bunch of dumb memes rather than e.g. commercial ads is because a lot of bored teenagers found spamming it a good way to learn web programming. The systematic nature of the moderation meant that it was a system to beat, a game to conquer. Hence the brief influx of "page widening posts" and other technical hacks. But I don't know if you'd see the same stuff today. The culture has changed, there are much better ways to learn programming and way more opportunities now. And you don't have to be fully automated. CmdrTaco had a strongly systems-oriented streak, but the problem on HN is hardly ever the actions of dang and the other paid moderators, it's really abuse of the overly simple system by other users that's a problem. You could have both good paid moderators and stricter controls on user moderation.
HN has ads? I've been on here since sometime in 2011 and I have never seen them...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Hacker News is the opposite and the better for it. If you're openly promoting your work, awesome! If you're doing anything to attempt to manipulate the platform for PR reasons, you can bet you will be punished for it.
I never understood why Reddit, which always tried to give off "for the little guy" vibes, was so rabidly against anyone promoting their own work.
In terms of paid advertising I guess the whole of HN kind of advertises YC who fund it.
Then again, I'm not in CS, so the job board posts are never interesting to me.
Edit: or as someone else who has phrased it better: "less is more".
At least, that's how my bash pager has it in the manpage.
I also thought Slashdot's moderation system was kind of fun. I am not sure it was useful but I enjoyed the annotations (+5 Funny when serious, +5 Insightful when inciteful, etc.) Meta-moderation was also neat?
In terms of Slashdot groupthink, no one used Windows and Microsoft was about to fall; but looking outside that at computer sales versus counted Linux installs, the picture was, and still is, very different. The reverse happened on the server, where Nadella was able to see outside the groupthink, bringing Azure to the success it enjoys today.
ETA: obligatory: /s
Slashdot's moderation system didn't lead people to think "no one used Windows"; the userbase just didn't like Microsoft.
Beyond that, having to re-debate every single idea every single time it's brought up is inefficient to the point of uselessness. We, as individuals, don't have time to verify every single theory from first principles, so we rely on tools like "moderation" as a heuristic to make progress.
HN has some very clear bubbles that probably wouldn't happen without a popularity system tied to its comments and submissions; maybe the janitorial duty of removing spam and so on is enough for a page like this. I'm not sure I see the merits of upvotes and downvotes at this point.
I completely disagree. That was a very coherent and well articulated comment. Having a useful vocabulary is not the equivalent of using a bunch of buzzwords.
Voting affects the presentation order of comments, which is especially significant when there are many responses sharing an immediate parent.
That's probably a bigger impact from voting than making points publicly viewable would be
(I think the best argument against the groupthink argument here is how inconsistent the positions are that are claimed to be the “groupthink” position by those claiming that.)
Aren’t you countering yourself by not providing the research requested above though?
I mean without objective evidence it’s all just a subjective opinion on either side
I'm not against memes and jokes, I like them. But I also like some actual intelligent discussion in between.
And that's why right now I visit Hacker News and it's been many years since I used Slashdot.
But I agree dark mode would be nice.
Being able to check a profile box would be a lot easier.
Then on the phone, Safari is again the default browser, but if I click a link from the gmail app it opens in mobile Chrome.
So to use a plugin to change anything, I'd have to have the plugin on all five browsers.
So there's still an element of who says it that matters
These go in your uBlock Origin "My filters" section. One enables dark mode through CSS, and another restricts the width of comments.
I don't mean this against designers specifically. I've seen plenty of software engineers who do the same thing. Hell, I've caught myself doing the same thing. It's just part of being human, but recognizing our human nature and not doing dumb things because of it is an ideal to shoot for, in my opinion.
* Triple ticks for code ```
* Bullet lists
Two spaces to monospace is somewhat offensive.

Not as clumsy or random as Markdown; an elegant weapon for a more civilized age.
Annoyingly enough it's been talked about for years but it never gets implemented, despite only three colors really needing a swap: background to dark sepia or just dark gray, and text to white and off-white.
For users without an account you just stick to prefers-color-scheme. For users with an account you add a setting 'disable dark mode'
Dark Reader has autodetection so those users won't be a problem either.
And if you really wanna keep to the identity of the site, the top bar doesn't even really need a color swap.
It really is less of a conundrum than you think.
The Browser also has controls. Good Browsers let you set your browser-wide choice differently from your OS-wide choice. Great Browsers let you pick per-site overrides directly, as a standard user setting in a consistent location in browser controls. I realize a lot of UX designers have come to much prefer the "add more controls" approach over the "teach a person to fish" / understand how your OS and browser controls work as the user of the site approach. I realize why a lot of UX designers will always prefer that approach, because teaching people is hard and it is easier to cut complaints off at the pass than answer complaints with "use your browser's settings".
But seriously, it should be fine to release a dark mode in 2025 that only responds to `prefers-color-scheme: dark` and leaves it to users to understand their OS and Browser tools. It irks me a lot more when sites like Wikipedia and Bing and Google ignore `prefers-color-scheme: dark` by default and makes you dig for some dumb website-specific control (that's in a different place on every website) just to set it to whatever they call "System default" that means "trust the Browser's prefers-color-scheme, I know what I'm doing". UX designers have taken something that should be natural and automatic and made it more complex and more confusing just because a small handful of users complain that they don't understand their OS and Browser Settings tools.
But I think HN built on what Reddit got right (at least old Reddit), and also on a context of more online, faster interactions, as opposed to Slashdot, which carried over some of the old forum structure and a context of slower, more meaningful (ahem, for the most part) interactions. Hence why moderation was more precise, upvotes had color, and you still had things like user signatures.
In a way, users and posts on HN are "cattle", not pets ;)
"worse is better" is people putting up with footguns like this in Python, because it's perceived as easier to find a Python job:
def fun(a = []):
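This is the classic mutable-default footgun; a minimal sketch of how it bites (`fun` is the name from the line above):

```python
def fun(a=[]):
    # the default list is created once, when `def` executes,
    # and shared by every call that omits `a`
    a.append(1)
    return a

print(fun())    # [1]
print(fun())    # [1, 1] -- the same list as the previous call
print(fun([]))  # [1]    -- an explicit argument gets a fresh list
```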
HN is very much "less is better", not "worse is better".

> It refers to the argument that software quality does not necessarily increase with functionality: that there is a point where less functionality ("worse") is a preferable option ("better") in terms of practicality and usability.
For that reason, I think I am applying the term precisely as it was defined.
The irony of my comment, which dang picked up, is that the original idea was a criticism against Lisp, suggesting that the bloat of features was a part of the reason its adoption had lagged behind languages like C.
Swiss army knives are not as good at being screwdrivers as screwdrivers are.
People write a lot of Python, because the language is easy to get into for a lot of non computer-science folks (e.g., engineers and scientists) and the ecosystem is massive with libraries for so many important things. It isn't as conceptually pure as lisp, but most probably don't care.
Maybe you were blessed with colleagues, for the past 14 years, that all know about how dangerous it is to do it in Python so they use workarounds? That doesn't negate the fact that it's a concern, though, does it?
The idea in Python is:
1. Statements are executed line by line in order (statement by statement).
2. One of the statements is "def", which executes a definition.
3. Whatever arguments you have are strictly evaluated. For example, in f(g(h([]))), it evaluates [] (yielding a new empty list), then h([]) (always, no matter whether g uses the result), then g(...), then f(...).
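That strict, inside-out order in point 3 can be traced with a sketch (hypothetical f, g, and h; note that h runs even though g ignores its result):

```python
order = []

def h(v):
    order.append('h')
    return v

def g(v):
    order.append('g')
    return 'something else'  # ignores its argument entirely

def f(v):
    order.append('f')
    return v

f(g(h([])))
print(order)  # ['h', 'g', 'f'] -- each argument is evaluated before its call
```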
So if you have
def foo(x = []): ...
that immediately defines
foo = (lambda x = []: ...)
For that, it has to immediately evaluate [] (like it always does anywhere!). So how is this not exactly what it should do?
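You can observe that the default is evaluated exactly once, at `def` time, via the function's `__defaults__` attribute (a sketch, reusing `foo` from above):

```python
def foo(x=[]):
    x.append('called')
    return x

# The default list already exists, before any call is made:
print(foo.__defaults__)   # ([],)

foo()
foo()

# ...and it is the very object the calls have been mutating:
print(foo.__defaults__)   # (['called', 'called'],)
```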
Some people complain about the following:
class A:
x = 3
y = x + 2
Here, x is a class variable (NOT an instance variable), and so is y, and the latter's value is 5. It doesn't try to second-guess whether you maybe mean some later value of x. No. The value of y is 5.

For example:
a = A()
assert a.__class__.x == 3
assert a.x == 3
a.__class__.x = 10
b = A()
assert b.x == 10
succeeds.

But it just evaluates each line in the class definition, statement by statement, when defining the class. Simple!
Complicating the Python evaluation model (that's in effect what you are implying) is not worth doing. And in any case, changing the evaluation model of the world's most used programming language (and in production in all countries of the world) in 2025 or any later date is a no go right there.
If you want a complicated (more featureful) evaluation model, just use C++ or Ruby. Sometimes they are the right choice.
When a linter warns me about such an expression, it usually means that even if it doesn't blow up, it increases the cognitive load for anyone reviewing or maintaining the code (including future me). And I'm not religious — if I can't easily rewrite the expression in an obviously safe way, I just concede that its safety is not 100% obvious and add a nolint comment with explanation.
> For that, it has to immediately evaluate [] (like it always does anywhere!). So how is this not exactly what it should do?
It has a lambda there. In many programming languages, and in the way human beings read this, "when there is a lambda, whatever is inside is evaluated only when you call it". Python evaluating default arguments at definition time is a clear footgun that leads to many bugs.
Now, there is no way of fixing it now, without probably causing other bugs and years of backwards compatibility problems. But it is good that people are aware that it is an error in design, so new programming languages don't fall into the same error.
For an equivalent error that did get fixed, many Lisps used to have dynamic scoping for variables instead of lexical scoping. It was people criticizing that decision that led pretty much all modern programming languages to use lexical scoping, including Python.
What is inside the lambda is to the right of the ":". That is indeed evaluated only when you call it.
>But it is good that people are aware that it is an error in design, so new programming languages don't fall into the same error.
Python didn't "fall" into that "error". That was a deliberate design decision and in my opinion it is correct. Scheme is the same way, too.
Note that you only have a "problem" if you mutate the list (instead of doing functional programming), which would be weird to do in 2025.
>For an equivalent error that did get fixed, many Lisps used to have dynamic scoping for variables instead of lexical scoping. It was people criticizing that decision that led pretty much all modern programming languages to use lexical scoping, including Python.
Both are pretty useful (and both are still there, especially in Python and Lisp!). I see what you mean, though: lexical scoping is a better default for local variables.
But having weird lazy-sometimes evaluation would NOT be a better default.
If you had it, when exactly would it force the lazy evaluation?
def g():
    print('HA')
    return 7

def f(x=lazy: [g()]):
    pass

^ Does that call g?

def f(x=lazy: [g()]):
    print(x)

^ How about now?

def f(x=lazy: [g()]):
    if False:
        print(x)

^ How about now?

def f(x=lazy: [g()]):
    if random() > 42:  # If random() returns a value from 0 to 1
        print(x)

^ How about now?

def f(x=lazy: [g()]):
    if random() > 42:
        print(x)
    else:
        print(x)
    print(x)

^ How about now? And how often?

def f(x=lazy: [g()]):
    x = 3
    if random() > 42:
        print(x)

^ How about now? Think about the implications of what you are suggesting.
Thankfully, we do have "lazy" and it's called "lambda" and it does what you would expect:
If you absolutely need it (you don't :P) you can do it explicitly:

def f(x=None, x_defaulter=lambda: []):
    x = x if x is not None else x_defaulter()

Or do it like a normal person:

def f(x=None):
    x = x if x is not None else []

Explicit is better than implicit.

Guido van Rossum would (correctly) veto anything that hid control flow from the user, like having a function call sometimes evaluate the defaulter and sometimes not.
Yes: the fact that most people learn very early the correct way to have a constant value of a mutable type used when an explicit argument is not given, and the fact that using a mutable value directly as a default gives you a mutable value shared between invocations (which is occasionally desirable), together mean that the way those two things are done in Python isn't a substantial problem.
(And, no, I don't think a constant mutable list is actually all that commonly needed as a default argument in most languages where mutable and immutable iterables share a common interface; if you are actually mutating the argument, it is probably not an optional argument, if you aren't mutating it, an immutable value -- like a python tuple -- works fine.)
Exactly because it's a footgun that everybody hits very early. I think the Python linters even flag this.
The fact that default arguments in Python get set to "None" is precisely because of this.
The bigger problem is with dicts and sets because they don't have the equivalent concise representation for the immutable alternative.
Arguably the even bigger problem is that Python collection literals produce mutable collections by default. And orthogonal to that but contributing to the problem is that the taxonomy of collections is very disorganized. For example, an immutable equivalent of set is frozenset - well and good. But then you'd expect the immutable equivalent of list to be frozenlist, except it's tuple! And the immutable equivalent of dict isn't frozendict, it... doesn't actually exist at all in the Python stdlib (there's types.MappingProxyType which provides a read-only wrapper around any mapping including dicts, but it will still reflect the changes done through the original dict instance, so to make an equivalent of frozenset you need to copy the dict first and then wrap it and discard all remaining references).
Most of this can be reasonably explained by piecemeal evolution of the language, but by now there's really no excuse not to have frozendict, nor to provide an equally concise syntax for all immutable collections, nor to provide better aliases and a more uniform API (e.g. why does frozenset have copy() but tuple does not?).
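A sketch of the taxonomy described above (using `types.MappingProxyType`, the proxy type's actual home; the variable names are just illustrative):

```python
from types import MappingProxyType

fs = frozenset({1, 2, 3})   # immutable set: gets its own "frozen" name
t = (1, 2, 3)               # the "frozen list" is spelled tuple

d = {'a': 1}
# Copy first, then wrap; wrapping d directly would still
# reflect later mutations made through d.
frozen_d = MappingProxyType(dict(d))

d['b'] = 2
print('b' in frozen_d)  # False: the copy is independent of d

try:
    frozen_d['c'] = 3
except TypeError:
    print('mappingproxy is read-only')
```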
So while it's a footgun you will be writing some weird code to actually trigger it.
Seems fine to me. If the default expression causes side effects, then that's what I would expect.
>This kind of function is fine in Python because it's unidiomatic to mutate your parameters, you do obj.mutate() not mutate(obj).
I first wrote Python over 10 years ago and I never learned this.
How would you idiomatically write a function/method which mutates >1 parameter?
If you want to mutate two parameters just pass them to a function like you normally would.
It's sloppy and a bad habit, I would not let it pass a PR in production code. Probably OK for a throwaway script.
A common case where you would have a free function that mutates its parameter is a function that takes a file handle, but in that case you also wouldn't have a mutable default for the value.
def fun(a = None):
_a = a if a is not None else []
But I don't value the look and feel of Hacker News, because it drives people away, as if those people are of lesser value. That is just an elitist, gatekeeper mentality.
This sounds good in theory until you realize just who it is that is being "gatekept".
Peruse through any sufficiently large Discord server or the comments on a YouTube Shorts / Instagram Reels video to see what our fellow "valued internet compatriots" are up to.
I, for one, have had enough of dealing with neuron-fried dopamine addicts and literal children from those aforementioned circles to last me a lifetime, I'd prefer HN doesn't become (more) like that.
There's always Reddit for those who prefer a community with the front gates blasted wide open.
I think it's more likely that most people (even most tech-adjacent people) simply don't know this place exists, or don't care, since no one is sharing links to Hacker News on mainstream social media and nothing goes viral here outside of already established HN-adjacent circles.
I don’t think there is heavy moderation in the traditional sense. It’s primarily user-driven, aside from obvious abusive behavior. The downvote and flagging mechanisms do the heavy lifting.
The heuristics that detect a high ratio of arguments to upvotes (as far as I can tell) can be frustrating at times, but they also do a good job of driving ragebait off the front page quickly.
The moderators are also very good at rescuing overlooked stories and putting them in the second chance pool for users to consider again, which feels infinitely better than moderators forcing things to the front page.
It also seems that sometimes moderators will undo some of the actions that push a story off the front page if it’s relevant. I’ve seen flagged stories come back from the dead, or flame-war comment sections get a second chance at the front page with a moderator note at the top.
Back in the Slashdot days I remember people rotating through multiple accounts for no reason other than to increase their chances of having one of them with randomly granted moderation points so they could use them as weapons in arguments. Felt like a different era.
It seems to be a combination of manual and automated moderation (mostly by dang but he has more help now), using the kind of over/under-engineered custom tools you'd expect from technophiles. I've wondered a lot about the kind of programming logic he and the others coded up that make HN as curious as it is, and I have half a mind to make a little forum (yet another HN clone, but not really) purely for the sake of trying to implement how I think their moderation probably works. If I went through with this, I'd have it solely be for Show HN style project sharing/discussion.
https://news.ycombinator.com/item?id=43558671 for those who missed it
* For every N=round(10) years software experience, you can click submit N10 times.
* You must provide a link and year proving your earliest project or employment.
* Max 256 submissions per day for everyone total.
Should be a fun experiment. Email me if you want an early invite.
I'd expect Slashdot's point systems and meta moderation to make a comeback in the LLM slop world we live in currently, but nobody knows about it anymore. Steam kinda rediscovered it in their reviews, perhaps even was inspired by it (I hope...)
All of this to say that one feature brings in a whole set of additional complications. Less is more.
As opposed to tearing through a thread and downvoting any and everything you disagree with.
Slashdot encouraged more positive moderation, unless you were obviously trolling.
The meta-moderators kept any moderation abuse in check.
It's sad to see we have devolved from this model, and conversations have become far more toxic and polarized as a direct result of it. (Dissenting opinions are quickly hidden, and those that reinforce existing norms bubble to the top.)
I believe HN papers over these problems by relying on a lot of manual hand-moderation and curation which sounds very labor intensive, whereas Slashdot was deliberately hands-off and left the power to the people.
Slashdot is struggling a bit these days. The lower the comment count, the worse the moderation, so it's a bit of a snowball effect. The UI could use some help; there are many who don't want it to change at all, but it would be nice if an alternate UI were available, hitting the same API.
I think HN leans towards deriding both MS and Musk (see any thread on MS and FOSS). In any case, I think that part of being well-spoken is that you speak out against severely bad actors often. It's never useful to reflexively criticize something, but people may contemplate and still decide they're right. Making a comment is the bare minimum of accountability for bad actors who should know better. It may not be to your taste that HN is such a platform, but that's not up to your decision any more than it is mine. There are many problems from a society that struggles to speak well or ill as a subject deserves, which is to say to speak the truth when it should be spoken, and not to speak mistruths except in exceptional circumstances. It would surely be best if one reasoned critique solved the problem and we never would hear of it again, but alas.
I don't expect HN commenters to change their minds necessarily, but I do wish they would elevate posts with more consideration and objectivity, and less low-effort outrage.
I don't really see it. /. had this basically every single thread and the criticism was very not substantive. Musk is unpopular here, but the criticism at least has a bit more meat to it and is not on every single post.
On HN Meta is one step away from going bankrupt and being sued into oblivion. Meta’s Earnings Reports tell a very different story.
I feel like HN fits the same shape in tech as Slashdot did and I’m not happy about it.
unsure why precisely it descended so much
not crazy about HN's approach but the quality of the discourse here is so high through whatever mechanism, I don't much care
To be spartan is to philosophize.
It's like they know somewhere deep inside that "mo tech" is not helping anyone.
- technologists and startup wannabes feeling like HN is "underground" because of the stripped down aesthetic and weird tech stack
- out of touch VCs who are successful because of money and connections but want to cosplay as technical
- the end users of the startups, who are fed the enshittified products funded by the VCs and created by the technologists
- Every Venture Capitalist Ever.
Here's the original essay -- https://www.dreamsongs.com/RiseOfWorseIsBetter.html
This is a good little overview entitled "Worse is Better Considered Harmful" -- https://cs.stanford.edu/people/eroberts/cs201/projects/2010-... -- in which the authors argue for "Growable Is Better".
In summary - it's about ease of implementation trumping all else. C and Unix are memorably labelled "the ultimate computer viruses".
This was all running on a single core??
It still puts into perspective what a big pile of dogshit consumer software has become, that stuff like this comes as a surprise. The last time I checked, Let's Encrypt also ran on a single system, as did the Diablo 2 server. (I love reading about these anecdotes.)
For every incremental change in HW performance, there is an order-of-magnitude regression in SW performance.
Also, re: I/O, the CPU usually also has to handle interrupts there, as well as whatever the application might be doing besides that I/O.
Interrupts? Interrupts? We don't need no stinking interrupts! https://docs.kernel.org/networking/napi.html#poll
I/O tends to be the bottleneck (disk IOPS and throughput, network connections, IOPS and throughput). HN only serves text so that's mostly an easy problem.
Just look at New Reddit, it's an insane GraphQL abomination.
No, what changed is the industry devolved into over-reliance on mountains of 'frameworks' and other garbage that no one person fully understands how it all works.
Things have gotten worse, not better.
It's really dumbfounding that most devs fell for it even as raw computing power has gotten drastically cheaper.
I took the bait once and analyzed a $5000 bill. IIRC, it worked out to about the compute provided by an RPi 4. “OK, but what about when your site explodes in popularity?” “I dunno, take the other $4900 and buy more RPis?”
This was many years ago on hardware several times slower than the current generation of servers.
Spawning new processes for every user is possible but would probably be less scalable than even thread-switching.
> Spawning new processes for every user is possible but would probably be less scalable than even thread-switching.
I’d just like to note/clarify that there is, in fact, multi-threading happening under the hood when running Node.js. libuv, the underlying library used for creating and managing the event loops, also creates and maintains thread pools that are used for some concurrent and parallelizable tasks. The fact that JavaScript (V8 in the case of Node.js) and the main event loop are single-threaded doesn’t mean that multi-threading isn’t involved. This is a common source of confusion.
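For what it's worth, Python's asyncio follows the same shape, so here's a small sketch of the idea (the pool size and names are illustrative, not anything from Node or libuv itself): a single-threaded event loop that hands blocking work off to a thread pool.

```python
import asyncio
import concurrent.futures
import threading

def blocking_task(n):
    # Runs on a worker thread from the pool, not on the event-loop thread.
    return threading.current_thread().name, n * n

async def main():
    loop = asyncio.get_running_loop()
    # libuv's default pool size is 4; we mirror that here (illustrative only).
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, blocking_task, i) for i in range(4))
        )
    # The event loop itself never left the main thread.
    main_name = threading.current_thread().name
    assert all(worker != main_name for worker, _ in results)
    return [square for _, square in results]

print(asyncio.run(main()))  # [0, 1, 4, 9]
```

Same deal as Node: your code sees one thread, but the runtime quietly uses several.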
HN is an island of sanity in a sad world.
(I assume that this update has removed that HN restriction, but haven't bothered to go look to verify this assumption.)
Originally on MzScheme, then later PLT Scheme. It was ported to Racket by the great kogir, IIRC.
(This conversation has turned unexpectedly ontological!)
HN runs now on SBCL, which is much faster and also multi-threaded.
Also, I believe pg started implementing Arc on Scheme48 based on mailing list activity at the time. I've always been curious about the switch to PLT!
At some point, when I was writing a lot of basic ecosystem code that I tested on many Scheme implementations, PLT Scheme (including MzScheme, DrScheme, and a few other big pieces), by Matthias Felleisen and grad students at Rice, appeared to be getting more resources and making more progress than most.
So I moved to be PLT-first rather than portable-Scheme-first, and a bunch of other people did, too.
After Matthias moved to Northeastern, and students graduated on to their own well-deserved professorships and other roles, some of them continued to contribute to what was soon called Racket (rather than PLT Scheme). With Matthew Flatt still doing highly-skilled and highly-productive systems programming on the core.
Eventually, no matter how good their intentions and how solid their platform for production work, the research-programs-first mindset of Racket started to be a barrier to commercial uptake. They should've brought at least one of the prolific non-professor Racketeers into the hooded circle of elders a lot sooner, and listened to that person.
One of the weaknesses of Racket for some purposes was lack of easy multi-core. The Racket "Places" concept (implementation?) didn't really solve it. You can work around it creatively, as I did for important production (e.g., the familiar Web interview load-balancing across application servers, and also offloading some tasks to distinct host processes on the same server), but using host multi-core more easily is much nicer.
As a language, I've used both Racket and CL professionally, and I prefer a certain style of Racket. But CL also has more than its share of top programmers, and CL also has some very powerful and solid tools, including strengths over Racket.
I don't mean that, of course. But there's a reason for the joke. When I did extensive work in Emacs Lisp (before they added lexical scope) I came to appreciate (1) how amazing a domain-specific language it is, for the domain of a programmable text editor—it's really one of the ultimate classics of a domain language; and (2) how everything being dynamically scoped was somehow closely allied with this domain. It made Elisp less useful as a general purpose language (lexical scope is a good thing!) but arguably more useful for making and extending a programmable text editor.
Based on the current id, about 45,000,000 items.
Assuming 1KB per item, about 45GB.
So with code and OS, probably it would fit on a $10 thumb drive without compression.
</back of napkin>
If I am within a couple of orders of magnitude, it is hard for me to see a benefit from compression.
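The napkin math, spelled out (the 1 KB/item figure is the assumption above, not a measured number):

```python
ITEMS = 45_000_000        # approximate current max HN item id
BYTES_PER_ITEM = 1_000    # assumed average size of a story or comment

total_bytes = ITEMS * BYTES_PER_ITEM
total_gb = total_bytes / 1_000_000_000

print(f"{total_gb:.0f} GB")  # 45 GB
```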
Is this a case where security through obscurity is good, or bad? Legit question. I am curious to read the responses it may prompt.
I found this though: https://news.ycombinator.com/item?id=27457350
> There are a lot of anti-abuse features, for example, that need to stay secret (yes we know, 'security by obscurity' etc., but nobody knows how to secure an internet forum from abuse, so we do what we know how to do). It would be a lot of work to disentangle those features from the backbone of the code.
The question still stands for curiosity!
Hacker News is small enough that obscurity would give moderators enough time to detect bad actors and update rules if necessary.
Literally any e-commerce site has larger and more critical infrastructure to protect.
To me, philosophically, and to a first approximation, all security is through obscurity.
For example encryption works for Alice so long as Bob can't see the key...
... or parking the Porsche in the garage, reduces the likelihood someone knows there is a Porsche and reduces the likelihood they know what challenges exist inside the garage. Now put a tall hedge and a fence around it and the average passerby has to stop and think "there's probably a garage behind that barrier."
To put it another way, out of sight has a positive correlation to out of mind.
Yes, of course, a determined, well-funded Bob means the obscurity has to match Bob's determination and budget. If Bob is willing to use a five-dollar wrench, Alice might just tell Bob the key.
"The design of a system should not require secrecy, and compromise of the system should not inconvenience the correspondents"
This means that all of the security must reside in the key and little or none in the method, since methods can be discovered and rendered ineffective otherwise. Keep in mind that this is for communication systems where it is certain that the messages will be intercepted by a hostile agent, and we want to prevent that agent from reading them.
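As a toy illustration of that principle (Python standard library only): the algorithm below, HMAC-SHA256, is completely public, and all of the secrecy lives in the key.

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # the only secret in the whole system
message = b"meet at dawn"

# Authenticate the message; the construction itself is publicly documented.
tag = hmac.new(key, message, hashlib.sha256).digest()

# An attacker who knows the algorithm but not the key cannot produce
# a valid tag for a tampered message.
forged = hmac.new(b"wrong key", b"meet at noon", hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)

# The legitimate recipient, holding the key, verifies easily.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
print("ok")
```

Publishing this code loses the defender nothing; leaking `key` loses everything. That asymmetry is the whole point of the principle.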
When implementing modern cryptographic systems, it is very easy to misuse the libraries, or to try to reimplement cryptographic ideas without a deep understanding of the implications, and this leads to systems that are more vulnerable than intended.
Security by obscurity is the practice among some developers of reinventing cryptography by applying their cleverness to new, unknown cryptosystems. Doing this correctly, however, requires deep mathematical knowledge of finite fields, probability, linguistics, and so on. Most people have not spent the required decades learning this. The end result is that those "clever" systems with novel algorithms are much less secure than tried-and-true cryptosystems like AES and SSL. That's why we say security by obscurity is bad.
Now, going back to the main topic: Hacker News is not a cryptographic system where coded messages are going to be intercepted by a hostile actor, so Kerckhoffs's principle doesn't apply. There's no secret key that can be changed so that the system recovers its functionality if the key is discovered.
There is a series of measures that have worked in the past, and are still working today despite a huge population of active spamming and disrupting agents, and they should be kept secret as long as they keep working.
https://news.ycombinator.com/item?id=43208973
In March this year, HN changed its pagination behavior. Previously, one had to page through to read more than a certain number of comments; now all comments are served at once.
A post having over a thousand comments is extremely rare so not a big deal.
I've been a part of many online communities as both a member and a moderator. However, Hacker News is the community I've been a part of the longest and the one that brings me the most joy.
Dang, is there anything random people like me can do for you? Can I at least buy you a coffee or something?
And HN founder and original author Paul Graham is (at least on paper) a billionaire, not merely the decamillionaire he used to be.
Though it's still good for it to be a self-funding project even if that means accepting donations.
Oh I have been on HN since 2008 and didn't know that.
Another text that might describe the HN community's understanding of the word is "How to Become a Hacker" by Eric S. Raymond. [1]
You can go back further in time towards the origins of the term, the MIT labs of the '50s and '60s; see the hacker ethic. [2] But it's not like the folks in the valley care that much for those values nowadays.
The wiki page for the term hacker is also quite helpful; the HN crowd is talking about the first kind of hacker. [3]
[0] https://paulgraham.com/gba.html
[1] http://www.catb.org/esr/faqs/hacker-howto.html
The earliest documented use of 'hack' is from "AN ABRIDGED DICTIONARY of the TMRC LANGUAGE", written in 1959 by Peter Samson. (TMRC was the Tech Model Railroad Club of MIT.) The definition was itself a playful example of what it was defining:
HACK: 1) something done without constructive end; 2) a project undertaken on bad self-advice; 3) an entropy booster; 4) to produce, or attempt to produce, a hack.
HACKER: one who hacks, or makes them.
Samson (2005): "I saw this as a term for an unconventional or unorthodox application of technology, typically deprecated for engineering reasons. There was no specific suggestion of malicious intent (or of benevolence, either). Indeed, the era of this dictionary saw some 'good hacks': using a room-sized computer to play music, for instance; or, some would say, writing the dictionary itself."

https://www.gricer.com/tmrc/dictionary1959.html
The 'malicious' connotation (e.g. breaking into someone else's system) dates from early 1960s phone phreaking. The claim that the malicious sense came earlier than the creative sense was made in 2003 by a researcher [1] who retracted it when this 1959 usage was pointed out:
"as soon as the 1959 citation was discovered I conceded that I was probably wrong about "hacker" originally having malicious connotations" (https://news.ycombinator.com/item?id=19416623)
I'm not sure Peter Samson would agree that it was "discovered", but never mind.
[1] https://web.archive.org/web/20051023131548/http://listserv.l...
---
Edit: The phone-phreaking instance dates from 1963: https://blog.historyofphonephreaking.org/2013/09/document-of...
The wrong idea that the so-called 'malicious' usage came first was widespread for a while—here's an example: https://imranontech.com/2008/04/01/the-origin-of-hacker/
need to check out what it adds to CL: http://arclanguage.org/
"since a few months" sounds wrong; it isn't idiomatic English. Consider replacing it with:
"HN has been running on top of SBCL for a few months now."
Rewriting it to "since a few months ago" seems to be the easiest way to fix this, though my favorite way to express the same thing is "as of a few months ago".
It should be noted that the author, like most people you're likely to interact with in this bubble, is not a native speaker of English. What matters is getting the message across - which they did.
You'll end up not being very productive if you spend your time pointing all of these little slips out.
---
[1] See what I did there? Eh? Eh?
English, where that construction sounds weird and at least needs some helpers around it to exist, is a bit of an odd one out. It is kind of odd that we can say "for three months" to say that something took three months, but we can't say "since three months" to refer to something that has been going on as of three months ago.
HN has a bunch of factors that make it amenable to a rewrite. It has gigantic scale, not a ton of complexity at a business level, and what it “is” is pretty slow moving at this point.
That means it’s not a great example to justify a rewrite at work :) that said the success does prove rewrites are possible. Bravo on shipping!
I'm not saying it's bad, or criticizing anyone. I mean it does what it does, and it works, and people like it. But no one should care what technology they're using because there's just nothing impressive going on from a technical perspective.
I'm not an infosec professional, or a competent LISP coder, I'm not in a position to say what's better. This is just what pros in the field say to me.
(It's mentioned in the article)
And surely HN is way less monetised (and therefore way more trustworthy) than virtually every other links site / every social media platform out there?
PG made an assertion once that websites (in contrast to desktop software) are free to use any stack of their choosing, as long as it can take in HTTP requests and output JSON or HTML. This intuitively seems to be true, especially so with how powerful modern machines can get, but it seems like it hasn't increased stack diversity much.
The advantages of boring technology and "resume-driven development" seem to outweigh whatever gains you may get from using something custom.
I'm reminded of definitely the most extreme writing on programming I've ever read, here: https://llthw.common-lisp.dev/introduction.html, including but in no way limited to claims such as:
> The mind is capable of unconsciously understanding the structure of the computer through the Lisp language, and as such, is able to interface with the computer as if it was an extension to its own nervous system. This is Lisp Consciousness, where programmer and computer are one and the same; they drink of each other, and drink deep; and at least as long as the Lisp Hacker is there in the flow, riding the current of pure creativity and genius with their trusty companions Emacs and SLIME, neither programmer nor computer know where one ends and the other begins. In a manner of speaking, Lispers already know machine intelligence---and it is beautiful.
Has any other language produced such thoughts in the minds of human beings? Maybe yes, but I don't know of one. Maybe Forth, or Haskell, or Prolog, but I haven't found similar writing. Please do share.
> there’s now an Arc-to-JS called Lilt, and an Arc-to-Common Lisp called Clarc.
> But Clarc’s code isn’t released, although it could be done:
> Releasing the new HN code base however wouldn’t work:
I'm not sure if I follow all that. If the Clarc is not released, then how does HN run on it?
The same person who wrote Clarc also deploys HN (an assumption, but it seems dang can do both :) ), so using unreleased software is just a matter of navigating to the right local directory.
A heavy lesson in that for other implementors of discussion-forum cum blog-comment systems.
rcarmo•1d ago
rwmj•1d ago
quotemstr•1d ago
SoftTalker•1d ago
Things like colors, contrast levels, font sizes, are often matters of personal preference, and the browser (in theory) is the common place to manage those. Each site should not have to reinvent this feature.
johnisgood•1d ago
altairprime•1d ago
randallsquared•1d ago
johnisgood•1d ago
SoftTalker•1d ago
saint_yossarian•21h ago
shwouchk•19h ago
vinceguidry•1d ago
This would send me into peals of laughter if I weren't already crying. The time to make this argument was 30 years ago, when the web wasn't fragmenting into a billion different pieces. The browser can make exactly zero assumptions about any given site, so it could never be a place where user preferences about them could be actionable. Downvoters should get work as web developers sometime. You really want the browser making assumptions about your web design?
All it can do is pass a header and let the website do what it will with it.
galaxyLogic•1d ago
A user's custom style-sheet might be good for one web-site, but not for every website.
The original web was much about self-expression of developer-users, but now the web is all about apps, which must not break because a user might want to use different colors.
And why should you need to customize colors? I can understand that different users need larger fonts which you can do by zooming in the browser. Colors should be good to go if the website is well-styled to begin with.
zzo38computer•1d ago
What if it is not well-styled? Or maybe some people think it is and others disagree and want something else. The end user should be able to customize fonts (not only larger but also smaller; I more often find the fonts on a web page too big and want them smaller; you might also prefer a specific typeface, not just a size), colours (for many reasons, including a monochrome display or printer that the page author might not have anticipated), animations (e.g. to disable them), margins, etc.
quotemstr•23h ago
Pretty sure AI-driven style derivation will finally deliver the dream of custom stylesheets in a robust and automatic way.
zzo38computer•1d ago
quotemstr•1d ago
That works today. No problem.
> on existing CSS commands

Not sure what you mean. Between the new selector combinators and attribute selectors, you can do a ton. You also have style-based container queries, which are probably close to what you want.
rcarmo•1d ago
rcarmo•1d ago
abdullahkhalids•23h ago
rcarmo•14h ago
jaoane•12h ago
rcarmo•9h ago
jaoane•9h ago
dang•1d ago
rcarmo•1d ago
The genius solution in there is probably this one:
...which you can try by doing this in the browser console:

But I get that there are a lot of opinions. Just try one, put up a vote over a week, do it over 4-6 weeks, settle on the one that has the best feedback...

brudgers•1d ago
This might be what we are up against:
https://norvig.com/21-days.html
https://paulgraham.com/hundred.html
rcarmo•14h ago
yyx•1d ago
rcarmo•1d ago
wvenable•1d ago
rcarmo•14h ago
Maybe I could fund a startup for that…
cess11•1d ago
rcarmo•14h ago
satiric•1d ago
krior•23h ago
satiric•22h ago
On the home page the text that tells you who the poster is, how many upvotes and comments, etc, is gray text on a gray background, at 7pt font. Again, Firefox and Chrome scale this up to 9.33pt, which again, is too small for me to read comfortably on a 24 inch desktop monitor without zoom.
(I accept that 120% would be fine; that brings the main font size up to 14.4pt. Wikipedia seems to use 14pt and that's totally fine for me. But still, neither I nor the browser should have to scale up the website.)
Even at 130% zoom, on the home page I can see 20 posts at once. I understand complaints that reddit went too far in the other direction, but that doesn't mean they should throw accessibility out the window for this site.
simoncion•8h ago
Only if you've told it to. My Firefox settings have "Minimum Font Size" set to "None". Perhaps scaling up to 12px is a default? (Edit: Also, are you sure you're not thinking of 12pt? IIRC, points are DPI-independent units and (AIUI) the traditional way of specifying font sizes in computerized typography.)
Despite my age, I still have eyes that are good enough to easily read the font sizes you're complaining about. A hugely important part of a User Agent is that it provide overrides for site design choices that the Agent's user has decided will benefit them. It's a good thing that UAs let folks like you choose a minimum-possible font size. It's an equally good thing that UAs let folks like me choose to see the choices that designer made that others criticize.
simoncion•6h ago
To be clear, this is a confusingly (and perhaps incorrectly) worded way of saying "At a given point size, a particular glyph from a particular font is supposed to be the same size on any output device, regardless of the device's physical size or number of pixels."
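For the record, the unit math (CSS pegs 1in at 96px and 1pt at 1/72in, so 1pt works out to 4/3 px at the reference DPI):

```python
def pt_to_px(pt, dpi=96):
    # 1pt = 1/72 inch; at the CSS reference 96 dpi, 1pt = 4/3 px
    return pt * dpi / 72

print(round(pt_to_px(7), 2))   # 9.33  (the figure mentioned upthread)
print(pt_to_px(12))            # 16.0
```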
satiric•3h ago
chuckadams•6m ago
ashwinsundar•21h ago
1718627440•12h ago
Tijdreiziger•1d ago
rcarmo•14h ago
justsomehnguy•1d ago
rcarmo•14h ago