Here's a lit-html template that sets a property:
html`<my-element .someProp=${x}></my-element>`
That the element may use a library for its implementation is basically irrelevant.
The goal of web components is to enable interoperable, encapsulated, reusable components, where how they're built is an implementation detail.
You can use a web component that was built with lit-html without knowing anything about lit-html, or even knowing that the component uses it.
Why bother when I can just write everything in a single framework?
```
class SomeElement extends HTMLElement {
  constructor() {
    super();
  }

  someMethod(x) {
    this.innerHTML = `<h1>${x}</h1>`;
  }
}

// ignore registry stuff
const customElement = document.getElementById('custom-element-id');
customElement.someMethod(42);
```
But you won't learn that from most mainstream custom element tutorials, for whatever reason.
You're right, it is just a method call from a class. Nothing interesting or new. And that's exactly why I like it! I like my FE code as boring, unimpressive, and as simple as possible.
Fine for x:string, but what about w:WebWorker?
If I haven't, then presumably I'll be satisfied with the inherited default behavior, which will probably look something like "<h1>[object Worker]</h1>".
If I care about this extremely contrived example case, in other words, I'll do something to handle it. If I don't, I won't. If I do, it's been easy for at least 25 years now; iirc .toString() was specified in ES3, which was published in March 2000.
Similarly, from the JS perspective these references would be a POJO with lots of seriously heavy implicit "render magic," so you can use them, as with any first-class JavaScript value, as function arguments, parallel to but a superset of what React does with its props. See the MDN documentation on Node.appendChild (and Node, Element, HTMLElement, etc.) for more: https://developer.mozilla.org/en-US/docs/Web/API/Node
If I want to represent the state of a worker thread in the UI, a problem I first recall solving over a weekend in 2016, the way I do it will end up closely resembling the "MVC pattern," with the Worker instance as "model," the DOM element structure as "view," and a "controller" that takes a Worker and returns an element tree. Even if I'm using React to build the UI - which I have also been mostly doing for about as long - I am still going to handle this translation with a library function, even if my component actually does accept a Worker as a prop, which it actually very likely will since that will enable me to easily dispatch effects and update the UI on changes of worker state. I might define that "business logic" function alongside the component which uses it, in the same module. But React or vanilla, I won't put that logic in the UI rendering code, unless it is trivial property mapping and no more (unlikely in this case, since any interesting worker thread state updates will arrive via message events requiring the parent to keep track in some way.)
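That translation layer can be sketched as a small pure function. This is only an illustration of the shape of the idea: the state fields and markup are invented for the example, and the parent is assumed to maintain `workerState` from the worker's message events.

```javascript
// Hypothetical "controller": maps tracked worker state to a view, kept out of
// the rendering code. The parent updates workerState from worker message events.
function renderWorkerStatus(workerState) {
  const { status, completed, total } = workerState;
  return [
    '<section class="worker-status">',
    `  <h2>Worker: ${status}</h2>`,
    `  <progress max="${total}" value="${completed}"></progress>`,
    '</section>',
  ].join('\n');
}
```

Because it is a plain function from data to markup, it can be unit-tested and reused whether the surrounding UI is React, a web component, or vanilla DOM code.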
Does that help clear up what I'm getting at?
<MyComponent worker={worker} />
and expect the worker instance to be passed as an object to `MyComponent` as a prop. But with web components, I can't do something like that. This:

this.innerHTML = `<my-component worker="${worker}">`

will just stringify the worker and pass that string to the `my-component`. To get the worker instance passed correctly, I'd need to do something like:

this.innerHTML = `<my-component>`;
this.firstChild.worker = worker;
Tbh, I'm not sure there's a way to do that. But why not just define a method in your target child component and pass the worker in there?
You can use properties (as opposed to attributes) as I demonstrated, and you can use methods like you suggest, but these are both verbose and limited, and add an extra "the component has been created but the props haven't been fully passed" state to the component you're writing. Imagine a component with maybe five different props, all of which are complex objects that need to be passed by property. That's a lot of boilerplate to work with.
You can set them declaratively with a template binding in most template systems.
https://plainvanillaweb.com/blog/articles/2024-10-07-needs-m... https://plainvanillaweb.com/blog/articles/2024-08-30-poor-ma...
I haven't tried signals yet, but I can't see why you couldn't pass in an object with multiple values.
el.worker = worker
Or declaratively: html`<my-component .worker=${worker}></my-component>`
That's using lit-html syntax, but there are a lot of other rendering libraries that work similarly. But once I decide to cross the "no dependencies" line, using something like Preact + htm as a no-build solution would also take most of the rest of the pain away, and solve many, many other problems Web Components have no solution, and no planned solution, for.
All HTML elements are JavaScript objects that have properties. You can pass arbitrary data to custom elements via those properties.
Look at any modern HTML template system and you'll see the ability to pass data to properties declaratively.
You do also have access to properties, but only in Javascript — you can't for example write something like `<my-custom-component date="new Date(2024, 02, 04)">`. That means that if you need to pass around complex data types, you need to either manipulate the DOM objects directly, or you need to include some sort of templating abstraction that will handle the property/attribute problem for you.
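The coercion is easy to demonstrate without any DOM at all, since interpolating a value into an attribute string runs it through ordinary string conversion first (the `Date` here is just an example value):

```javascript
// Attributes can only carry strings: any object interpolated into markup is
// coerced via its toString() before the component ever sees it.
const date = new Date(2024, 2, 4);   // months are zero-based, so this is March 4
const attributeValue = `${date}`;    // what date="${date}" would actually pass
// attributeValue is now a plain string; the Date object itself is gone, and the
// component would have to re-parse it to get a usable value back.
```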
This is my main criticism of web components. At the simplest levels, they're not useful — you could build this website very easily without them, they aren't providing a particularly meaningful abstraction at this level of complexity. But at the more complex levels, they're not sufficient by themselves — there's no state management concept, there's no templating, there's not even much reactivity, other than the stuff you could do with native JS event emitters.
As far as I can tell, the best use-case for web components is microfrontends, which is a pretty useful use-case (much better than iframes), but it's very niche. Apart from that, I really don't see why you wouldn't just write normal Javascript without worrying about web components at all.
You're holding web components to a higher standard here in expecting them to take arbitrary data in HTML when HTML itself doesn't support arbitrary data. Notably you can't assign arbitrary data to framework components from within HTML either, so how are web components any more limited?
The point of web components is that they create normal HTML elements. So it makes sense to consider what the value of them being normal HTML elements is. You can write them directly in your HTML source code, for example. But if you do that, you only get access to attributes and not to properties, and therefore everything needs to be strings. Alternatively, you can treat them as DOM nodes in Javascript, at which point you get access to properties and can use non-string values, but now you've got to deal with the DOM API, which is verbose and imperative, and makes declarative templating difficult.
Yes, we could compare them to components from other frameworks, but that's honestly an absurd comparison. They're simply trying to do different things. Frameworks aren't trying to create HTML elements. They're trying to reactively template the DOM. It's just a completely different goal altogether. The comparison doesn't make sense.
/s
- React-DOM has one runtime dependency (a cooperative scheduler)
- React itself has no runtime dependencies
It's possible the poster above is referring to build time dependencies. This is harder to assess because there are several options. For example, if compiling JSX using TypeScript, this adds only one more dependency.
The benchmarks I've seen actually show web components being slightly slower than the best frameworks/libraries.
The idea is: no build steps initially, minimal build steps later on, no dealing with a constant stream of CVEs in transitive dependencies, no slow down in CI/CD, much more readable stack traces and profiling graphs when investigating errors and performance, no massive node_modules folder, etc. Massive worlds of complexity and security holes and stupid janky bullshit all gone. This will probably also be easier to serve as part of the backend API and not require a separate container/app/service just running node.js and can probably just be served as static content, or at least the lack of Node build steps should make that more feasible.
It's a tradeoff some people want to make and others don't. There isn't a right and wrong answer.
React is the new Java.
But when you need something the framework can't provide, good luck. Yes, high performance is typically one of those things. But a well-engineered design is almost always another, and the lack of one costs you maintainability and flexibility.
This is why you almost always see frameworks slow to a crawl at a certain point in terms of fixing bugs, adding features, or improving performance. I'd guess React did this around like 2019 or so.
Overly hated on and conflated with frameworks, yes.
React requires very little from you. Even less if you don't want to use JSX. But because Facebook pushes puzzlingly heavy starter kits, everyone thinks React needs routers or NextJS. But what barebones React is, at the end of the day, is hardly what I'd call a framework, and is suitable for little JS "islands."
For me it's mostly about de-tooling your project.
For example - I have a fairly complex app, at least for a side project, but so far I've managed to keep everything without a single external dependency or any build tool. Now I have to convert it all to React and I'm not very happy about that: I will have to add a ton of tooling, and, most important for me, I won't be able to make any changes without deploying the tooling beforehand.
Cognitive burden is relevant to 100% of web apps.
When frameworks are used for that 99% of web apps for which they are overkill, performance actually drops.
> frameworks seem to be the lingua franca for a reason
Sure, but there's no evidence that the reason is what you think it is.
The reason could just be "legacy", ie. " this is the way we have always done it".
I'm sitting on two UIs at work that nobody can really do anything with, because the cutting-edge, mainstream-acceptable frameworks they were built in are now deprecated and very difficult to even reconstruct with all the library churn. As a result they're effectively frozen, because we can't practically tweak them without someone dedicating a week just to put all the pieces back together enough to rebuild the system... and then that week of work has to largely be done again in a year if we have to tweak it again.
Meanwhile the little website I wrote with just vanilla HTML, CSS, & JS is chugging along, and we can and have pushed in the occasional tweak to it without it blowing up the world or requiring someone to spend a week reconstructing some weird specific environment.
I'm actually not against web frameworks in the general sense, but they do need to pull their weight and I think a lot of people underestimate their long-term expense for a lot of sites. Doing a little bit of JS to run a couple of "fetch" commands is not that difficult. It is true that if you start building a large enough site that you will eventually reconstruct your own framework out of necessity, and it'll be inferior to React, but there's a lot of sites under that "large enough" threshold.
Perhaps the best way to think about it is that this is the "standard library" framework that ships in the browsers, and it's worth knowing about so that you can analyze when it is sufficient for your needs. Because if it is sufficient, it has a lot of advantages. If it isn't, then by all means go and get something else... again, I'm definitely not in the camp of "frameworks have no utility". But this should always be part of your analysis, because of the unique benefits it has that no other framework has, like its 0KB initial overhead and generally unbeatable performance (because all the other frameworks are built on top of this one).
Although I do think there is merit to "light" frameworks such as Astro, with its scoped styling and better component syntax. But at the same time, independence is also important.
I started something similar with https://webdev.bryanhogan.com/ which focuses on writing good and scalable HTML and CSS.
Noob question, but what happens behind the scenes in terms of these concepts for web components instead?
Edit: I just added a shadow root to a div in your comment, saving its content beforehand, moved its content inside the shadow root, and added a style with * { all: initial } and your comment text got Times New Roman.
Many of the 25 mentions of Shadow DOM in the Components Page and 14 mentions of it in the Styling page are about solving problems you now only have because the site recommended you use Shadow DOM, and the problem Shadow DOM itself is _trying_ to solve is one I don't have in components written specifically for my app, which will only be used in my app.
I know you’ll “write your own framework” but maybe that’s optimal in some situations - more than we might give credit to at the moment.
You'll eventually need `<link>` anyway for preloads, especially with web fonts, since that's the only way to prevent FOUC. Personally I lean on build steps to solve this. I don't like bundlers or frameworks, but I have grown accustomed to custom-building.
If you’re writing an SPA, and it isn’t Twitter or a content consumption platform, and it isn’t necessary to be API-driven, seriously just stop. You’re not clever, you’re not improving the experience, you’re only appealing to the tastes of other programmers. If billion-dollar companies can’t get them working perfectly, you certainly won’t.
If you think that’s interesting, send me your firm’s URL, I’ll consider applying. (Edit: or really anyone reading this who is interested - government of a foreign nation outside the US is not viable, unfortunately.)
Moving a navigation/menu from one app to another takes 3 people 2-3 weeks by now. Changing the "router" to update the TS version is just as time-consuming. Things that would have taken 1 day max now take 2-3 weeks. That is 1/14th of the productivity, and people get paid for that, more than I did. Oh, and did I mention the menu responsiveness was broken for months at a certain viewport width? I did report it, but apparently it was so hard to fix that it could not be done in a quiet hour, so it sat for months in the queue of things to fix.
It seems unintuitive, but traversing an object tree using reflection to generate JSON is just slower than using an HTML template, which is probably a rope data structure underneath.
It’s such a pain having clear separation in development, where everything runs off of the React server by default, but then it won't when deploying. So either you end up doing all sorts of weird stuff, or you need to be building and running production locally constantly, thus defeating the whole convenience of a dev server that can hot-reload changes.
I openly admit that I'd rather learn a new framework than touch anything to do with figuring out how the browser is intended to behave in practice. What an abomination and insult to humanity.
Edit: holy shit y'all do not like the earnest approach to technology
```
fetch('/my-content').then(async res => {
  if (res.ok) {
    document.getElementById('my-element').innerHTML = await res.text();
  }
});
```
Something like that. Doesn't get much easier. Gone are the days of browser inconsistencies, at least if you stick to "Baseline Widely available" APIs, a status now prominently displayed on MDN.
You do have to call `getElementById` on a document. There can be many documents in a window.
Although a very consistent convention, there are no guardrails put in place to prevent something from setting the same id on two or more elements.
getElementById will return the first element that it finds with the id, but you can't know for sure if it is the only one without additional checks
<textarea id="userNotes"></textarea>
<button type="button" id="copyButton">Copy</button>
<script>
  // Elements with an id are exposed as same-named globals, hence no getElementById
  copyButton.addEventListener('click', () => {
    navigator.clipboard.writeText(userNotes.value);
    copyButton.innerText = 'Copied';
    setTimeout(() => copyButton.innerText = 'Copy', 1000);
  });
</script>
function getId(v) {return document.getElementById(v)}
Dev tools allow `$` and `$$`. Dunno, make a macro?
We must not defy the system!
const g = (x) => document.getElementById(x);
But what's the problem again?
I get that huge apps probably need a lot more than that, but so many people these days reach for heavy frameworks for everything, because it’s all they know.
The author of this blog article is describing some more advanced SPA-like scenarios, but those are completely optional.
The web finally provides a widespread standard for deploying applications inexpensively. Unfortunately, the technology used to build user interfaces for the web remains somewhat mediocre.
It’s unfortunate that history took some wrong turns with X11, Java, Flash, Silverlight, .NET, and countless other alternatives that I haven’t personally lived through.
Hopefully, someone will eventually find a way to make developing web applications comfortable and the broader community will embrace it.
I do not understand why people hold this impression, especially in corporate environments.
Windows supports both system and per-user deployments; the latter so you don't even need administrator rights. And with Intune, deployments can be pulled or pushed.
Many desktop applications are written in .NET, so you don't even need to install the runtime because it's preinstalled on the operating system.
Even ClickOnce deployments -- which you can deploy on the web or on a file share -- pretty much make deployments painless.
EDIT: For the naysayers: please then explain to me why Steam is so successful at deploying large games on multiple platforms?
And is there even a guarantee that your deploy will be rolled out in X minutes?
Version skew remains one of the biggest sources of catastrophic bugs at the company I work for, and that's not even taking into account client app versions, just skew between the several services we have. Once you add client app versions, we have to support things for 3 years.
At my one-person company I just do a Kubernetes deploy; it goes out in a couple of minutes and everyone has the latest version whether they like it or not. I don't have the resources to maintain a dozen versions simultaneously.
The webapp doesn't care if someone's machine was down overnight, or if the paranoid lady in design managed to install some local "antivirus" which blocked the update rollout, or if the manager of sales has some unique setting on his machine which for some inscrutable reason does silly things to the new version. If their web browser works, the inventory database works for them, and they're on the latest version. If their web browser doesn't work, well, your support teams would have had to take care of that eventually anyway.
Not sure how to solve this problem on the Internet yet, though. How can we prevent uninformed masses from creating incentives for businesses that turn the web into a dystopia?
Not on my computers, at home or at work.
Weylus gives you a URL that you can visit on the device and instantly use it. Try doing that with native apps. They'd need native apps for Windows, Linux, Mac, iOS, Android... get them on the app stores too, support all the different Linux distros... or just a single URL that works instantly anywhere.
Steam works for the same reason the App Store works: it targets mostly a single platform (Windows) and all dependencies are bundled in. The Steam client itself is a web app running on the Chromium engine, though.
The last .NET version to be deployed this way has a 10 year old feature set. Nowadays you bundle the parts of .NET you need with the application.
https://learn.microsoft.com/en-us/dotnet/core/install/window...
Although, on re-read, maybe they meant there's a good chance another application already installed it? This I wouldn't agree with, as applications often insist on installing different versions of the system-wide runtime, even for the same major version.
To be specific, .NET install is version-aware and would manage those side by side, unless the destination folder is overridden.
Because Valve puts lots of time and money into making it work for their customers (https://github.com/ValveSoftware/Proton/graphs/contributors), time and money that the average small business can't afford.
- many people refuse to upgrade for various reasons so we have to support ancient versions (especially for important clients), for stuff like license activations etc.
- various misconfigurations (or OS updates) on Windows can make the app suddenly crash and burn - and you waste time investigating the problem. My favorite recent bug: the app works OK everywhere except on some Japanese systems where it just crashes with access violation (see the next bullet point)
- debugging is hard, because you don't have immediate access to the machine where the bug triggered
- originally it was built 100% for Windows but now we have people asking for a MacOS port and it's a lot of work
- people crack our protection 1 day after release and can use it without paying
SaaS has none of those problems:
- people are used to the fact that SaaS applications are regularly updated and can't refuse to upgrade
- modern browsers are already cross-platform
- browser incompatibilities are easier to resolve
- you debug your own, well-known environment
- if someone doesn't play by the rules, you just restrict access with 1 button
> EDIT: For the naysayers: please then explain to me why Steam is so successful at deploying large games on multiple platforms?
How many games are multi-platform on steam? Checking the top 20 current most played games, 14 of them are windows only. If it's so easy to be multi-platform, why would 14 of the top 20 games not be multi-platform? Oh, it's because it's NOT EASY. Conversely, web apps are cross platform by default.
Only 2 of the top 20 games supported 3 platforms (windows, mac, steam-deck). 1 was windows+steam-deck, 3 were windows+mac
if you look into the support forums on steam for any random game you'll find lots of complaints about stability and crashes, many of which are likely to be esoteric system-specific problems.
Games were made in flash so it can't be that slow.
Maybe slow in time-to-load. I think we got that down a bit, and this wasn't to render content but a utility for a signed-up user.
Modern day SPAs are far, far worse offenders when it comes to smothering resources.
I was so happy when it finally started to disappear. That kind of sheer disregard for my system resources is inexcusable.
As for the speed of the flash player itself it was quite good.
Edit: just tried it again and it seems they fixed it! The ads stay in a loading state forever, but the site is usable again. Wonder if they did something on their side or if they changed ad provider.
Though I think maybe if one is using it for a different purpose, like looking for apartments or roommates then that's probably shit.
Why not?
It's really irritating for your users. They might not consciously be able to point it out as the cause of their irritation, but they'll like your app a lot less.
My business doesn't use React anymore, and I'm so happy I don't have to do pointless maintenance.
A side note, included in my repository: you update your element's innerHTML from the constructor, not from the connectedCallback. MDN and the web standards seem to be inconsistent on this, and people elsewhere write about it, too.
People talk a lot about data and binding, etc. I think it's weird, it's like people have forgotten how to just use setters. You can call a render() method from a setter, and it does the same thing WebKit et al do when you call an internal setter on an HTML element.
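A minimal sketch of that setter-calls-render idea (the class and its names are invented for illustration; `render` builds a string here so the example is self-contained, where a real element would assign to `this.innerHTML` instead):

```javascript
// A property setter that re-renders on every assignment: no binding framework,
// just the same pattern native elements use with their internal setters.
class CounterView {
  #count = 0;
  get count() { return this.#count; }
  set count(value) {
    this.#count = value;
    this.render();                            // re-render on every change
  }
  render() {
    // In a custom element this line would assign to this.innerHTML instead
    this.markup = `<span>${this.#count}</span>`;
  }
}
```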
I don't see the value in these frameworks anymore.
Notably, Web Components. They're fantastic for distributing components - after all, a Web Component will work in every framework (even React, the IE of frameworks, finally added support), or even in vanilla HTML. So you can build a component once and everybody can use it. It's fantastic.
But for internally composing a web application, Web Components are simply awful. They add nearly no helpful features that vanilla JS doesn't already have, and they add bucketloads of extra complexity. You got attributes and properties, and they're mostly the same but kinda not always and oh by the way, attributes can only be strings so good luck keeping that in sync. You got shadow DOM which is great for distribution but super restrictive if you want any sort of consistent global styling (and a total pain in devtools, especially once you go 10 levels deep). You can turn that off, of course, but you gotta know to do that. And plenty more quirks like this. It's just.. It makes lots of easy things 5x harder and 1 hard thing 1.5x easier. Totally not worth it!
If you really want to not use a big clunky framework, but at the same time you have a complex enough app that composition and encapsulation is important, you'd be much better off just making your own object hierarchy (ie without extending HTMLElement), skipping all that awkward web component luggage and solely doing what they do best (tie an object to an element).
Or, better yet, get over yourself and just use a framework that does this for you. Your future self will thank you (and your colleagues even more so).
ps. rant time: If only the web browser people had gotten off their high horse and not proposed the word "Web Components"! If they would've just been named "Custom Elements", which is what they are, then we wouldn't have had 5+ years of people mistaking them for a React competitor.
Lit is amazing though. It's fast, lean, and easy to learn. In fact, to my experience, the only things about it that are un-ergonomic, are due to the fact that it's built around web components. Lit without web components would be so much better (again, except if you're building something to be distributed to other programmers on any framework). It wouldn't have a single global registry of components, it wouldn't have that attribute mess, and I bet it'd not need a compiler at all (it doesn't need it now either, but is easier/nicer with it).
Fair enough, but the way I read TFA, it doesn't dissuade developers from using tiny convenience libraries that leverage native browser capabilities. Based on your pushback, it sounds like my perception that Lit is mostly a convenient base class for Web Components is very incorrect. I'll dig into that more, thanks!
It has custom syntax, custom directives that look like regular JS functions but cannot be used like regular functions, a custom compiler in the works etc. etc.
They will keep telling you it's just a small library, and very vanilla, though.
It's a haphazard solution that they now fight against with "directives" and even a custom compiler "for when initial render needs to be fast". It's not bad, but it's far from genius. And honestly, the only reason it works is that browsers have spent ungodly amounts of time optimizing working with strings and making innerHTML fast.
Additionally, it's weird to ask for "close to html" in a 100% Javascript-driven library/framework.
As for "etc.", I don't even know what etc. stands for. lit is busy re-implementing all the things all other frameworks have had for years. Including signals, SSR etc.
> Additionally, it's weird to ask for "close to html" in a 100% Javascript-driven library/framework.
Fair point. I don't personally really care about that either. I guess I just meant to say that it's not all that custom :-)
E.g. classMap https://lit.dev/docs/templates/directives/#classmap
--- start quote ---
The classMap must be the only expression in the class attribute, but it can be combined with static values
--- end quote ---
So now you have to figure out which attribute this is called from, whether this particular call is allowed in this attribute etc.
So what they do is they parse (with regexes) their "close to HTML" code into a structure not dissimilar to React's, figure all this stuff out, reassemble actual HTML and dump it to the DOM
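Whatever the parsing strategy, the raw material such a library receives is the tagged template call itself, which already separates static chunks from dynamic values. A toy tag function makes that split visible (this illustrates the language mechanism, not lit's actual implementation):

```javascript
// A tag function receives the static strings once and the dynamic values per
// call, which is what lets libraries cache the parsed static parts.
function inspect(strings, ...values) {
  return { statics: [...strings], values };
}

const cls = 'active';
const result = inspect`<div class="${cls}">hi</div>`;
// result.statics is ['<div class="', '">hi</div>'], result.values is ['active']
```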
[0]: Table of contents element: https://github.com/cmaas/table-of-contents-element
[1]: SVG avatars (React-free fork of Boring Avatars): https://github.com/cmaas/playful-avatars
So at least in terms of minimal bases, all of these frameworks are much of a muchness.
The browser people literally promoted them as a React alternative. Then their goals and role changed every year, and now they are again promoted as an alternative
Perhaps my brain has been addled by a decade of React, but as the examples became more advanced, they just looked a lot noisier and more complex compared to JSX or Svelte components.
I run sites that serve hundreds of millions per day and we pour a ton of resources into these projects. We're trapped in a framework (React) prison of our own creation. All failures are those of complexity. Highly coupled state machines with hacked, weaved, minified, compiled, transpiled, shmanzpiled etc. into unreadable goop in production. Yes, I know, source maps, but only when needed, and they are yet another layer of complexity that can break - and they do. How I long for the good old days before frameworks.
Perhaps nostalgia and the principle of KISS (and a few others) are clouding my judgement here; after all, frameworks are made for a reason. But it's difficult to imagine a new engineer having any more difficulty with vanilla than with learning framework after framework.
I feel the same way. React and Angular (well an earlier version of Angular) were made prior to ES2015 being mainstream, so I do think they made sense to use when they were initially created. Since then, I've become burned out from the constant churn of new frontend frameworks to learn. Even within the world of React, there's different ways of writing components, and different state managers.
But you're not wrong about there being many ways to write components and manage state.
And RSC is a mess. But thankfully you can keep pretending that doesn't exist.
HTML was supposed to be just a slight markup layer to make it easier to transmit and render text documents; likewise, that's all HTTP was designed for. The ratio of text to markup should be fairly high (I'm sure today it's less than 1). But for some reason (probably the initial ease of publishing globally) we decided that the future of application development should be to entirely reinvent GUIs on top of this layer. If you look under the hood at what React is doing, we just have decades of hacks and tricks, well organized, to create the illusion that this is a rational way to create UIs.
Imagine a world where we decided we wanted to create applications by hacking Excel documents and Visual Basic (being from the before times, I had seen attempts at this), or have every app be a PDF file making clever use of PostScript? When one remembers how ridiculous it is to attempt to do what we have collectively done, it's not too surprising that a website can require megabytes of JS to render properly and, more importantly, to allow reasonable development processes allowing for some organization of the code base.
I also don't like the state of the web.
I too am from the before times, where I guess we built essentially our own frameworks to handle our own unique issues and took security into our own hands (versus having to trust external libraries and external resources...).
I sadly laugh when I hear that 20+ years later people are still fighting back-button navigation issues. I still see sites that can't handle double posts correctly or recover gracefully when post responses fail or time out.
I'm out of the game now, but for all the benefits of the web, it's been hacked to death to support stuff the underlying infrastructure was not designed for.
†I know this claim will rub some people the wrong way, but if you compare the capabilities of web tooling to build rich application UIs against desktop app tooling of even 20-30 years ago, there’s no comparison. The web is still primitive, and while JS and CSS are improving at a good pace, HTML is almost frozen in carbonite.
Not really - there are pretty big escape hatches - you can do pretty much anything in canvas, and custom elements allow you to define your own elements with their own behaviour.
I'd say the problem is the opposite - one of the reasons desktop apps from 20-30 years ago (say MacOS 7) were great from a user perspective is that pretty much all apps looked and worked the same way, using the same UI toolkit and the same UI principles. And from a developer perspective - a lot of the key decisions had already been made for you.
The web of today is a zoo of UI styles and interaction modes.
The problem isn't so much a lack of innovation or possibilities, but perhaps rather the opposite.
To me the killer app in the modern world is reactivity, i.e. making views that update in response to changes in the data model. Manually creating listeners, doing differential updates, and managing the removal and teardown of event listeners is akin to doing manual memory management. We used to do that with jQuery sometimes, and it’s the most error-prone thing you can do. It’s a stateful shithole.
Once they manage to find a way to modularize components in a way that is largely view-declarative, I would be happy to crawl up on the surface of vanilla JS again, but not before. It’s simply missing functionality, imo.
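For what it's worth, the core of that reactivity can be sketched in a few lines of vanilla JS. This is a hypothetical minimal signal, not any particular library's API; real implementations add dependency tracking, batching, and scheduling:

```javascript
// Minimal observable value: subscribers re-run whenever set() is called.
// Hypothetical sketch; real signal libraries also handle batching,
// automatic dependency tracking, and teardown on unmount.
function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get: () => value,
    set(next) {
      if (next === value) return; // skip no-op updates
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
    subscribe(fn) {
      subscribers.add(fn);
      fn(value); // run once immediately, like an initial render
      return () => subscribers.delete(fn); // teardown handle
    },
  };
}

// Usage: a "view" that tracks the model without manual diffing.
const count = createSignal(0);
let rendered = "";
const unsubscribe = count.subscribe((v) => {
  rendered = `Count: ${v}`;
});
count.set(1);
count.set(2);
unsubscribe();  // teardown is one call, not a scavenger hunt
count.set(3);   // no longer re-renders; rendered stays at "Count: 2"
```

The point is that the subscription and teardown bookkeeping lives in one place instead of being sprinkled across every jQuery handler.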
As someone who stepped away from web for a while and came back to it a couple of years ago, a straight React or Next.js application is so, so much nicer to work with than old-school webapps were. It feels like the web has finally been dragged kicking and screaming into a world of regular software development best practices. JS isn't the best programming language but it's a proper programming language and it does the job, I'm continually baffled that people would rather contort themselves with the sub-Turing tarpit of CSS and what have you instead of just using some libraries, writing some code, and actually solving their problems in what's usually an objectively easier and better way.
The boilerplate, the innerHTML strings, the long list of caveats, the extra code to make basic things like forms work, manually attaching styles etc. etc.
One recommendation is to change the mental model here. In my eyes it isn’t “don’t use a framework”, it’s “build the framework you need”. It’s tremendously easy to set up an extension of HTMLElement that has 90% of the functionality you’d want out of react. My SPA non.io is built like this - one master component that has class functions I want, and one script for generic functions.
The last 10%, OTOH...
That in itself undermines a lot of the author's message, as they were not able to reasonably de-framework themselves.
(And - it's not hard to get the navbar right, in any number of ways. But you do have to do a bit of work before you preach things to others.)
It’s not some sort of mystery, it’s just a global object holding your variables.
The example given clearly showed components backed by JavaScript. Which means you could implement a state management system in seconds.
But beyond that, state management is overrated. I remember when React came out and somebody from Facebook was describing updating some pending-message count, and how they had trouble with it. The issue was that they wrote shitty code, and React was a solution that hid that shitty code. I bet that if you profiled their example today, the bug would still be there - the state management system just silently turned it into a future performance problem.
Even something simple like a date picker. Where are you going to store which month you're looking at?
There's state everywhere, and we should not be sending it all back and forth to the server.
The main issue is that, typically, if you're storing data directly in the DOM, you're storing redundant data. For example, if you've got an input that should have a green background if the input is one string, and a purple background if the input is another string, what state should you store here? Fundamentally, there's only one thing that's interesting: the input's value. Anything else — say, the class used to set the background colour — is derived from that value. And therefore we always want our data to flow in one direction: the input value is the source, and any other state or UI updates are based on that.
But in practice, it's usually hard to maintain that strict flow of data. For example, you'll probably have multiple sources that combine in different ways to create different derived data. Or you might have multiple different ways of updating those sources that all need to perform the same updates to the derived data. Or you might have multiple levels of derived data.
It's definitely possible to build applications like this — I started out this way, and still use this for simple projects where the state is fairly easy to manage. But as projects get more complex — and as the state gets more complex — it almost invariably leads to weird glitches and broken states that shouldn't exist. Hence why state management is the core of most modern frameworks — it's the most important problem to solve.
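As a sketch of that one-way flow in plain JS (the values and class names here are invented for illustration): the input's value is the only stored state, and everything else is derived from it in a single update path.

```javascript
// Single source of truth: the input's value. Everything else (here,
// the background class) is derived from it, never stored separately.
// The values and class names are invented for illustration.
function classForValue(value) {
  if (value === "ok") return "bg-green";
  if (value === "warn") return "bg-purple";
  return "bg-default";
}

// One update path: set the source, then re-derive everything the view
// needs. In a browser, render() would assign input.className; here it
// just records the derived state so the flow stays visible.
function makeStore(render) {
  let value = "";
  return {
    setValue(next) {
      value = next; // update the source...
      render({ value, className: classForValue(value) }); // ...derive the rest
    },
  };
}

let view = null;
const store = makeStore((state) => { view = state; });
store.setValue("ok");   // derived class becomes "bg-green"
store.setValue("warn"); // derived class follows the source automatically
```

The glitches mentioned above tend to appear exactly when a second code path writes the class directly and skips this single update path.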
N.B. `useMemo` in React has nothing to do with the DOM — React will never thrash the DOM with unnecessary changes, that's the point of the diffing process. `useMemo` is instead about reducing the number of times that React builds its own VDOM nodes.
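Conceptually, the dependency-list caching that `useMemo` performs looks something like this sketch (not React's actual implementation, which also ties the cache to the component instance and hook call order):

```javascript
// Sketch of dependency-based memoization, the idea behind useMemo.
// Not React's code: this caches a single result and recomputes only
// when an entry in the dependency list changes (compared with Object.is).
function makeMemo() {
  let lastDeps = null;
  let lastResult = null;
  return function memo(compute, deps) {
    const unchanged =
      lastDeps !== null &&
      deps.length === lastDeps.length &&
      deps.every((d, i) => Object.is(d, lastDeps[i]));
    if (!unchanged) {
      lastResult = compute(); // recompute only when a dependency changed
      lastDeps = deps;
    }
    return lastResult;
  };
}

// Usage: the expensive compute runs once per distinct dependency list.
let calls = 0;
const memo = makeMemo();
const a = memo(() => { calls++; return "A"; }, [1, "x"]);
const b = memo(() => { calls++; return "B"; }, [1, "x"]); // cache hit: "A" again
const c = memo(() => { calls++; return "C"; }, [2, "x"]); // deps changed
```

This is why the savings are in skipping `compute()` (building VDOM nodes, in React's case), not in avoiding DOM writes.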
I find your version of history amusing, because the first project that I migrated away from vanilla JS was actually to Angular, because the team found React's JSX syntax too weird. And these weren't even the first wave of frameworks — people had been using state management systems long before then, but they had other flaws that made them difficult to work with in their own right. React became popular very quickly because it had a very simple rendering model, and because it was quick in comparison to a lot of other options at the time. Since then, the field has developed a lot, and at this point I wouldn't recommend React at all.
Out of curiosity, what would you recommend?
That said, it's kind of niche, which means you need to be more willing to write stuff yourself, so a good mainstream alternative is Vue. Vue is also signals-based and relatively lightweight, but still has a lot of batteries included and a wider ecosystem that is fairly strong. They're also great at taking the parts of other frameworks that work best and implementing them in Vue, but in a fairly considered way. I understand there's also some fairly battle-tested full-stack frameworks in the Vue ecosystem, although I've not had much personal experience of that.
Both libraries are signals-based, which is a completely different reactivity concept to React, which will take a bit of getting used to. But I suspect Vue will make that transition a bit easier - despite SolidJS's more superficial similarities to React, it behaves quite differently.
Rendering libraries like Lit handle declarative templates, and there are many state management solutions, the same ones you can use with React, etc.
> An explainer for doing web development using only vanilla techniques. No tools, no frameworks — just HTML, CSS, and JavaScript.
You're thinking of a lightweight site, which this isn't claiming to be.
I’m not convinced it’s worth it.
If you want something à la KISS[0][0], just use Svelte/SvelteKit[1][1].
Nowadays, the primary exception I see to my point here is if your goal is to better understand what’s going on under the hood of many libraries and frameworks.
[0]: https://en.wikipedia.org/wiki/KISS_principle
[1]: https://svelte.dev
I tend to rely on Bootstrap so the layout is reasonable on mobile- but that's purely through my own lack of experience.
But I guess someone with a more intimate knowledge can easily get the idea of building a framework. Then add on all the use cases and feature requests...
Some web pages, particularly traditional media websites are absolute hell holes of bloat, but I guess that's more about their ad serving than lean web development.
Interesting. Why do web components have this effect, but some JS frameworks apparently do not? Or do they all?
```
#include "header.htm"
<h1>Title</h1>
...
#include "footer.htm"
```
I'm still using it, and I wonder when I'll be able to get rid of PHP. I've been building many simple apps using this technique, and it works fine. I'm sure there is some kind of performance hit; it's almost never an issue. I have never seen any browser compatibility issues.
> Warning: This is a security risk if the string to be inserted might contain potentially malicious content. When inserting user-supplied data you should always consider using a sanitizer library, in order to sanitize the content before it is inserted.
https://developer.mozilla.org/en-US/docs/Web/API/Element/inn...
Or a simple escapeHTML function applied within the innerHTML template - but I prefer innerText in a separate pass, as using escapeHTML as a pattern gives an opportunity to forget to use it.
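A minimal version of such a helper, for illustration (your project may already have one, and assigning via `textContent` in a separate pass avoids needing it at all):

```javascript
// A typical escapeHTML helper: neutralize the characters that let user
// input break out of text context into markup. The & must go first so
// already-escaped entities aren't double-mangled.
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Usage with an innerHTML-style template:
const userInput = '<img src=x onerror="alert(1)">';
const markup = `<li>${escapeHTML(userInput)}</li>`;
```

The failure mode the parent describes is forgetting the `escapeHTML(...)` call in one of fifty templates, which is exactly why `element.textContent = userInput` is the safer habit.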
Users will be able to inject cross site scripting attacks into the forms of other users.
Not saying to never use it, just saying to be aware of injection attacks on the front end.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
Anything more is for the kids on my lawn.

My team and I built https://restofworld.org without any frameworks. The feedback, from surveys, outreach, and unsolicited emails, has been overwhelmingly positive, especially around usability and reading experience.
We might adopt a framework eventually (I do appreciate how their opinionated structure can cut down on endless debates), but for now, plain JavaScript serves us really well.
Any sharp edges to this approach?
And the reason for that is mainly to prevent edge cases and make sure people in CRUD apps see up-to-date content.
The experience with a no-cache max-age=0 server-rendered site would be very similar.
All of the headaches around custom routing code + restoration of state are pretty much obsolete with bfcache, introduced more than 5 years ago.
If you build a dynamic page and use HTTP headers to prevent caching, stop complaining about it. Most people making this argument want to offload dealing with external APIs to the frontend, and then complain about the result when no requirements around API response caching were defined, let alone time was given to define them.
Browser caching isn't a replacement as the first request for every endpoint is still going to be slow.
Yet somehow, APIs aren't treated with the same requirements for performance as web pages.
SPAs need to hold onto the data themselves, and correctly transition it between pages/components. Poorly-built ones will simply refresh it every time, or discard it if the data is too large.
Both mechanisms can allow for immediate page loads if they've been correctly implemented. They just require a different approach.
You get your lovely developer experience with components, with the same component API enabling layouts for the repeatable outer structure, easy composition of reusable parts of your content, ease of passing data around between these and implementing any logic you need before rendering or inside what you're rendering.
The user gets a snappy, static website.
10x longer than what?
> It’s just the developers that don’t care because deviating from the framework
You realize that many major modern (and legacy) web frameworks started inside companies?
On the one hand we have lots of people on here who are building full-featured web apps, not websites, on teams of 30+. These people look at frameworkless options and immediately have a dozen different questions about how your frameworkless design handles a dozen different features that their use case absolutely requires, and the answer is that it doesn't handle those features because it doesn't require them because it's a blog.
Meanwhile there are also a lot of people on here who have never worked on a large-scale web app and wonder why frameworks even exist when it's so obviously easy to build a blog without them.
It would be nice if we could just agree that the web hosts an enormous spectrum of different kinds of software and make it clear what kind of software we're talking about when we were opining about frameworks—whether for or against.
In this case: this is a WordPress blog.
If we're talking about Medium, then yes, Medium is a complete disaster zone that should not be emulated anywhere. Their reader-facing use case does not require JavaScript at all except as a light optional seasoning on top.
All I'm saying is that we need to actually talk about use cases whenever we're talking about frameworks, which we almost never do. Failing to specify the use cases turns the conversation incoherent because we end up having two different conversations without even realizing it.
Every time I've been at a company and suggested moving to vanilla css+html or a static generator, the reaction is like I brought up rewriting in assembly.
There needs to be a significant push for simplification of the default toolchain. I understand using React, but reaching for it by default at the beginning should be seen as unnecessary complication.
Especially because it's 100x easier to move from html+css to a framework than it is to move from a framework to literally anything else.
A sibling comment to yours described it very well.
There really isn't a good substitute for understanding early on whether you're going to be making a website or an application.
If you're building something with specifications, then what are we even talking about? You know what you need to build so just build that.
But this thread is about what to do when you don't know. "Start the simplest way" is not always the right answer, because you have some information about what you plan or want or hope to build, so you can use that information. Not everything is a set of hyperlinked webpages, and you often know that right away, even when you don't have many details sorted out at all.
—John Gall
Why use two incompatible languages (JSX and HTML) for different types of web pages, when you could just always use JSX? React can statically render to HTML if your page is really static.
I think what you’re really complaining about is that people use React for static content and don’t statically render it first. That is sloppy, I agree. But it’s not React itself that’s the problem but rather the developers who don’t care what they are doing.
The islands/RSC work seems to offer some improvements to this, but most of these websites still include a full copy of React on the page to essentially do what the speculation rules API and a `<dialog>` element could do just as easily.
Meanwhile, the original Space Jam website still renders perfectly.
The standards are superior to the frameworks built on top of them. Abstractions come and go, but web standards, once adopted, have a solid track record of sticking around.
I also write more complex webapps using vanilla webtech (with a strong server-side solution like Flask, Django, or similar). I checked out React, but it just hasn't clicked for me yet in combination with my server-side style.
It worked great but then the business grew and the software became bigger than what fits in 1 engineer’s head. We now have lots of issues that need fixing.
A good example are pages that take 5 seconds to load because they have to do so much, then you submit a form, and the page reload takes 5 seconds to go through but there is no UI feedback so you press the button a few more times because maybe it didn’t work? Then you get an error because the first action worked but the subsequent actions failed due to uniqueness constraints. Now as a user you’re confused: did you place the order or not? You got an error but the order also showed up in your history. Hmmm
As engineers we have issues understanding what is and isn’t in scope. Will this javascript file you touched affect only the feature you wanted, or another 50 pages you didn’t even know about? When you add a new thing in a template, how many N+1 issues across how many pages did you just cause? Where does the data even get loaded?? The query lives so far away from where you use the data that it’s really hard to track down. Etc
We started putting new things in React, especially the more interactive areas of the site. It’s pretty nice in comparison. Not perfect, but the framework pushing you to think in components helps a lot. I’m slowly introducing this mentality to other parts but the framework really wants to fight you on it so it’s gonna take a while.
However, I'm interested in how frameworks solve the developer experience problem you mentioned:
> Will this javascript file you touched affect only the feature you wanted, or another 50 pages you didn’t even know about?
> When you add a new thing in a template, how many N+1 issues across how many pages did you just cause?
> Where does the data even get loaded??
Doesn't this just change into
- "Will changing this React component affect only the feature I want, or do other pages interact with it differently?"
- "When adding a new component into a template, will I break any other pages which use this template?"
- "Where does the data even get loaded??"
Yes and no!
TypeScript helps a lot – you get quick traceability of where things are used and squiggly lines, if you break a contract. Yes a statically typed MVC framework would get you this, but in my experience the apps that get into this mess also don't use types "because types add too much complexity" (likely true for that company stage).
Componentization brings the other piece – self-contained components that declare their own data dependencies (load their own data), bring their own isolated styling, and generally handle all their internal behavior. This takes some skill/experience to get right and yes you can totally pull it off with every toolstack if you're good enough. The benefit is having a stack that encourages you to think about interfaces and contracts between components and hiding the messy internals from the outside world.
So for example in Flask I'm encouraging this pattern of tiny composable views: https://swizec.com/blog/a-pattern-for-composable-ui-in-flask... Once you have these, you can then move them in and out of the page with some JavaScript and an Ajax call. HTMX does this afaik and it's also how we used to build PHP+Ajax apps for a brief moment 20 years ago before client-side rendering took over for various reasons (smaller payloads mattered back then as did sharing an API between web and mobile)
edit: Point is that an approach based on composability guarantees that components won't break each other, can be moved around, and can live side-by-side without worry. The more your stack can guarantee this (as opposed to manual vigilance) the better.
Of course you could just have made up that example as a straw man as well.
Yes and that’s normal. Big ball of mud is the worlds most popular software architecture.
Original: http://www.laputan.org/mud/
My modern take based on the original: https://swizec.com/blog/big-ball-of-mud-the-worlds-most-popu...
> Well ... put it in "menu.js". Will it affect 50 other pages? No, it will affect the menu
MVC frameworks, used traditionally, don’t support this super well. If you want some dynamic value in the menu supplied by the backend, every view has to make sure it passes that value into the template layer. Change the menu, gotta go around changing every view to supply that new value.
If 1/few people are building a site so simple that the menu code is in "menu.js", then sure, separate your code and go about your day. But when 30+ FTEs are building a huge app with lots of tightly interconnected features, then the complexity is there no matter how you architect your code - it's part of the business requirements. Like GGP said, they're different domains, and stuff said about one doesn't necessarily apply to the other.
It's been standard practice for at least 25 years to disable the submit button once it is pressed. If you aren't doing this as a developer, you are practically begging for trouble, and it's really so easy to avoid that trouble.
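In the DOM that's one line in the submit handler (`submitButton.disabled = true`). The same guard as a plain function, sketched here without the DOM so the idea stands on its own (the names are invented):

```javascript
// Guard against double submission: wrap a handler so repeated calls are
// ignored until reset() is called (e.g. after the response lands or
// errors). Same effect as disabling the submit button while in flight.
function makeSubmitGuard(handler) {
  let inFlight = false;
  return {
    submit(...args) {
      if (inFlight) return false; // second click: do nothing
      inFlight = true;            // the "button.disabled = true" moment
      handler(...args);
      return true;
    },
    reset() {
      inFlight = false;           // re-enable after success or failure
    },
  };
}

// Usage: only the first of the rapid-fire clicks places the order.
let orders = 0;
const guard = makeSubmitGuard(() => { orders++; });
guard.submit();
guard.submit(); // ignored
guard.submit(); // ignored
guard.reset();  // response came back; allow a new submission
guard.submit();
```

Note the guard must be reset on failure too, or a timed-out request leaves the form permanently dead, which is the other half of the bug described upthread.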
https://www.joelonsoftware.com/2002/05/06/five-worlds/
His thesis was that before arguing about software development tools, practices, anything really, it's vital to establish what kind of development you're doing, because each "world" has its own requirements that in turn motivate different practices and tools.
The worlds he quoted were Shrink-wrap; Internal; Embedded; Games; and Throwaway. Shrink-wrap is no longer a thing for most developers, and we can probably name some others today that didn't exist then. But the basic advice he gave then matches what you're saying today:
We need to anchor arguments about tooling in a statement about the type of development we're trying to address, and we need to appreciate that the needs of each world are different.
Actually, I feel like many times the conversations about code are pretty shallow anyway with not much info. Maybe it's just difficult without properly established context but OTOH that can quickly get complicated and require a lot more effort.
And yet large teams of very smart people reach for NPM as just part of their initial setup process. It's so ubiquitous that I essentially had to learn to not write this way on my own, by asking myself "can I do this without JS"? Almost every time I asked that question, the answer was "yes", and the non-JS way was much easier. But for a long time I wondered if I was wrong--maybe the framework JS way is easier in some way I don't understand--because everyone around me was doing it the framework JS way. It's taken me years of experimentation to build up a preponderance of evidence to persuade myself that in fact, the framework JS way is just worse, most of the time.
Everybody wants to be Facebook or Google, but the fact is you probably just aren't. You probably don't have their problems and you probably don't need to use their tools.
Not saying frameworks are never the right answer of course, but it's as much a trade-off for complex apps as it is for blogs. Things like performance, bundle size, tooling complexity, easy of debugging and call stack depth, API stability, risk of hitting hard-to-work-around constraints all matter at scale too.
Forgot to say.. I very much admire and appreciate the aspect of "ejectability". More software should strive for this ideal.
https://thymer.com/ejectable-apps
I read that and nod all the way through. Hope you succeed!
I just pointed out that frameworks are not always necessary.
>> In this case: this is a WordPress blog.
No. It is not a "blog". It's a news site. It uses WP as a CMS. That does not make it a blog, and calling it one comes across as nothing more than an attempt to belittle it. The New Yorker, with its 90-year archive (at the time), was run on WP too for a while. It's used by many major publishers.
If you look at other news sites, be it NYTimes, Polygon, Verge, Wired, etc., most of them use frameworks to some degree. React, Preact, Svelte, whatever. It works for them. Underscore and Backbone are two frameworks that were developed at the NYTimes. It's not always necessary, and you can still do a lot without them.
If you're aware of requirements that a news site has that a blog doesn't (and I assume you would be, as the OP and creator of the above site), I'd love to hear it.
1. Something like a one-person blog published by committing Markdown to a GitHub repository and having that published automatically, all the way to; 2. A journalistic news site that has a full CMS back-end tracking multiple authors/bylines, some kind of editor/approval/review workflow, features for deciding what gets shown "above the fold," linters that enforce style guidelines, specialized search features, &c.
While some blogs can have complex requirements and some news sites might be simple, I hope we can appreciate that there will be some blogs that are much, much simpler than the NYT and thus have fewer and simpler requirements.
They're both "publishing words," but the back-end complexity reflects the complexity of the business processes and model more than the complexity of displaying articles on web pages.
In this particular site's case, the requirements are met by WordPress, so "a WordPress blog" is a simple description of what it is. It wasn't meant to include a value judgement.
This is where those people and teams are wrong.
Dogma is a hell of a drug.
> In this case: this is a WordPress blog.
I kind of think that if you’re discussing tech about the web, you need to include the scale and your experience.
Would you mind sharing one or two kinds of features that are required by these development teams?
Rather than opening up that can of worms I'm going to leave it where I left it.
I was genuinely curious because I never faced the kind of issues a web development team would have; it wasn't about making a counter-argument.
You could have just said "x and y".
Photopea is a full-featured web app, not website, written without frameworks by a single individual. It is absolutely possible to build feature rich applications without the framework du jour.
The only real viable alternative with a large team is a hand rolled framework, which isn't always worth the trade-off.
A team of 30 people is not needed to write a fully featured application.
A team of 30 people is needed to build, run, and maintain enterprise applications, where you might need 24/7 support (and such support requires at least 5 FTE), or need features changed while 3 people are on two- or three-week vacations. Maybe that single person will come up with something better to do with their life, so you need to have some people who can take over.
What the parent comment was about: people make broad statements without understanding that the world is a much bigger place.
Yea it is absolutely possible to build an app the way you describe but it doesn’t work for every app everywhere.
The same of course using React/Angular doesn’t work for everything, everywhere.
Those kinds of posts are the same as ORM vs. plain SQL or Scrum vs. whatever.
Time-wasting at best, misdirection of the inexperienced at worst.
A counterexample is Filestash [1], a full-fledged file manager that can open more than 100 file formats (parquet, arrow, geojson, shp, raw images, etc.). Since it was moved off React, the app performs better by every single metric than the original. While I agree there's no team of 30 behind it, the same is true of tons of software whose teams blindly default to React for the same reason people choose IBM: nobody gets fired for choosing IBM / React.
A blog is just one example of a place where frameworks don't help, chosen because the site OP shared is functionally the same as a blog. Other applications have different requirements and those requirements may also not benefit from a framework. Alternatively, the requirements themselves may actually have benefited from a framework and the authors chose to avoid them because the team preferred not to or because they felt strongly about avoiding frameworks because of personal preference.
In this case, this project has really one active contributor [0], so it's missing one of the key ingredients that in my experience really call for a framework: coordination between a large number of people.
(And lest I be mistaken: just because a project has a large number of people does not mean a framework would for sure be the best choice. I'm sure there are counterexamples to that too! It's just a hint in that direction, not an ironclad law.)
[0] https://github.com/mickael-kerjean/filestash/graphs/contribu...
Keep going, you might just end the whole debate in this thread.
Software engineers are great at providing “well, why don’t you just..” answers to problems they don’t actually have their brains wrapped around. This often leads to attempts to scale poorly scaling approaches (e.g., we’ll just bake every variant, since there are only four today - which becomes combinatorially intractable after a few years of development).
On the flip side, software engineers are also great at choosing how they’re going to solve a problem before they really have a handle on what they’re going to solve.
I use this maybe 5-10 times per year (and several engineers I work with have taken to repeating it): “tell me what you’re doing but, more importantly, tell me what you’re not doing.”
Forcing people to be explicit about what is not in scope helps to bring clarity to otherwise murky tangles of ambiguity.
Irony here is that OP did precisely this with a link to the precise thing he built. Meanwhile you haven’t offered even a rough description of what you built, much less a link.
I do find it a little annoying that when we have these discussions about vanilla vs frameworks people hold up an example and I go there and see a news site of articles rather than a complex application, and I think to myself yeah I wouldn't have used a framework in this situation anyway. It's annoying from 2 directions: the first that people ARE using frameworks for sites that shouldn't have ever had them in the first place, and I can bring to mind a couple of news sites I visit regularly where this is obviously the case because of the silly ways they work. Secondly that this is then held up as an example, because most of my work is on very interactive sites so I end up thinking well yeah but that's not going to work for me is it. My feeling is that I could do vanilla but would end up with something like a site specific framework.
My current approach is to use a framework and consistent patterns but to try and use as few other dependencies as possible.
Also, infinite scroll breaks the most basic usability feature - you can't track where you are in the article from the scrollbar position.
With most frameworks you don't need 100kB of JS code.
Heck, with Mithril you get the components, vdom, router, and http client in like 10kB gzip.
Shit, just show me how long it takes you to create a nice reactive table with search, or a form with proper labels, user interaction, validation, errors, etc.
Why would I implement all that from scratch when I can install Svelte, install a UI library, and add a couple of lines of code, all with a 25kb overhead, and get a better, nicer-looking, better-tested version of what would take you a week to do?
The WC ecosystem has grown a lot, and one of the great things about native web components is that they can be consumed by most major frameworks at this point. There are also many mature component sets you can pick from, like shoelace, etc... you can mix and match components from different offerings, something that's not practical with React/Angular (for example).
I can use a select from Ionic, a breadcrumb from Shoelace, and a date picker from Lightning and it'll all just work.
Doing vanilla JavaScript development doesn't mean "don't use any libraries" (well, it might to purists, but whatever); it means sticking to standards-based technologies, with libraries as accelerators.
For example, I use many of the techniques described by the site, but I prefer to use lit-html library (NOT lit framework) for rendering.
Sometimes I use the vaadin router for routing, sometimes the Ionic router, sometimes my home grown one.
I use a little library I wrote called ApplicationState for cross component shared state, because I don't like state being coupled to my DOM structure.
Sure, this requires a deeper understanding than "batteries included" frameworks, but the advantages outweigh the negatives IMHO.
And quite frankly, I don't see why I'd want to reinvent the wheel there.
{#each notes as note} {note.content} {/each}
is exactly what I'd want to implement, for example, if I were writing JS functionality to render a collection.
And Svelte for example doesnt restrict you from using vanilla JS whenever you want either
100kB would never be a consideration for any of the apps I worked on (as a contractor in the corporate world). I mostly work in large teams; sometimes we had multi-team monorepos with various line-of-business apps using React. 100kB is so completely irrelevant, as for LoB or B2B no-one cares about the initial page load. Your users use the app on a daily basis, so they get all the JS/CSS/static assets served from their browser cache almost all of the time, until we make a new prod deployment (usually weeks). Plus, all users are always on Ethernet or WiFi at least, not some 3G cellular network somewhere in the desert.
If you use some smarter framework like Next.js, you even get smartly chunked js parts on a per-route basis, that are immutable - they contain a sha256 hash of their content in the filename, and we configure the webserver to serve them via `Cache-Control: immutable`. There won't even be a roundtrip to check for newer versions.
Plus Next.js comes with tons of other stuff, like pre-loading chunks in the background, or when hovering over a link (i.e. using the time between the hover and the click to already load a chunk). Not to mention that the initial render is static HTML and hydration happens completely unnoticed by the user.
* Are exclusively loaded on high speed internet in corporate networks.
* Have a high hours-of-work-per-page-load count.
* Update infrequently.
We're engineers, or at least we like to call ourselves that. Engineering is the process of gathering requirements and building a system that meets those requirements on time and under budget. If your requirements include being able to serve users on 3G networks or users who only load your site in order to read a single piece of content per day, yeah, optimize your load times. But don't attack other engineers for failing to live up to your app's requirements.
Developer productivity, theoretically. Although some of these frameworks don’t help with that for me personally.
And you're using WordPress, so yeah you are using a framework. Turns out you do think they're necessary.
And as you can see, framework != slow, whether it's WordPress or anything else.
The static websites and beautiful designs
With a lot of iframes and widgets running some kind of service, like Disqus chat rooms.
It’s far more secure, too.
Because it was heavily pushed, advertised, and is the default way of writing web components (you have to explicitly specify mode: open)
The mode does not toggle the shadow DOM on and off. It just specifies whether or not the element's shadowRoot object is a public property. An open mode makes this possible: `document.querySelector('my-component').shadowRoot`.
You have to explicitly opt-in to the shadow DOM either imperatively by calling `attachShadow`, or declaratively in HTML by giving the element a template child element with a `shadowrootmode` attribute.
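Both opt-in forms side by side (a sketch; `my-card` and `my-note` are placeholder names):

```html
<!-- Declarative: a template child with shadowrootmode creates the root at parse time -->
<my-card>
  <template shadowrootmode="open">
    <p>shadow content</p>
  </template>
</my-card>

<script>
  // Imperative: call attachShadow yourself; with { mode: 'closed' },
  // element.shadowRoot is null from the outside.
  customElements.define('my-note', class extends HTMLElement {
    connectedCallback() {
      if (!this.shadowRoot) {
        this.attachShadow({ mode: 'open' }).innerHTML = '<p>shadow content</p>';
      }
    }
  });
</script>
```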
if (!this.#span)
What is that kind of magic syntax sugar?! A shorthand for document.getElementById?
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
The problem I found is that my full-SSR project doesn't use any Node.js at all, and it works fine for everything but CSS, because in order to use includes and variables I need a CSS compiler.
For example, I use a CSS library that defines a very nice style class "alib-link" for all links. I would want to apply it to all <a> elements without adding `class="alib-link"` all the time. It's easy with a CSS preprocessor, but how do I avoid using one?
a {
/* the code from the alib-link class */
}
If you need to style anchor links depending on where they point to, you could use an attribute selector: a[href*="news.ycombinator.com"] {
/* more css */
}
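And for the variables half of the preprocessor question, custom properties are native CSS now (alib-link and the color value are stand-ins for the commenter's hypothetical library):

```css
:root {
  --alib-link-color: #2a6496; /* define once */
}

a {
  color: var(--alib-link-color); /* reuse anywhere, no compile step */
  text-decoration-style: dotted;
}
```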
No preprocessing necessary.
I kept on wondering while making it if I was doing it wrong without any framework, because that's what everyone else seems to expect.
Splinter, splinter.almonit.club, if anyone cares.
The main reason is I want direct control of the DOM and despite having 10 years or more react experience, it is too frustrating to deal with the react abstractions when you want direct dom control.
The interesting thing is that it’s the LLM/AI that makes this possible for me. LLMs are incredibly good at plain JavaScript.
With an LLM as capable as Gemini you rarely even need libraries - whatever your goal, the LLM can often directly implement it for you, so you end up with few or no dependencies.
I ended up using hugo + isso (for commenting) and it works very well. I moved to this setup after the wordpress drama, and even if I miss something, freedom has no price: https://gioorgi.com/2024/liberta-come-aria/
https://go.rt.ht has other custom elements!
No web site is intrinsically valuable - the information and functionality it wraps is what holds its value. Developing the access to that information and function, enforcing correctness, and the timeliness of that development is what frameworks empower orgs to deliver at much lower cost in the present and future, vs. vanilla web dev.
In practice, it is sometimes true, and often not.
You can't overstate how often decisions at large orgs are driven by hype, follow-the-herd, or "use popular framework X because I won't get in trouble if I do" mentalities. The added complexity of tools can easily swamp productivity gains, especially with no one tracking these effects. And despite being terrible decisions for the business, they can align with the incentives of individual decision makers and teams. So "people wouldn't do it if it wasn't a net positive" is not an argument that always holds.
WebAssembly doesn't need to be bloated, I've written a C program that imports nothing, and the resulting WASM file is under 3KB in size, and there's under 10 lines of code to load it up.
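For scale, the loading side really is tiny. The sketch below skips the C toolchain entirely and hand-assembles an equivalent minimal module (a single exported i32 `add`) as raw bytes; a no-imports C function compiled for wasm32 produces essentially this, just delivered as a .wasm file:

```javascript
// A complete WebAssembly module, hand-encoded: exports add(a, b) -> a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
// instance.exports.add(2, 3) === 5
```

In a browser you'd fetch the .wasm file and use WebAssembly.instantiateStreaming instead, but the call count is about the same either way.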
Tinygo is also on my list to try for Wasm.
More importantly, the needs of the site are drop-dead simple so no need to install a Ferrari when all I need is a bicycle.
Plain vanilla site served from Github FTW!
Sites are way too bloated nowadays, a lot of the content being served can be static and cached.
Maybe it's time for me to crawl out of my systems-software-only hole and play around with the web again.
Meiosis Pattern: https://meiosis.js.org/docs/01-introduction.html
Mithril: https://mithril.js.org/
IMHO everyone building for the web should understand how it works and know how to use the foundational web stack, leveraging the web platform per se. Then, by all means use a build system and your preferred frameworks at your discretion, remaining aware of tradeoffs.
For my part, one of the things I like best about Remix (/React-router v7+) is its explicit philosophical alignment with leveraging web standards and doing more by doing less (e.g. mutate server data using... the HTML form tag).
I also highly recommend https://every-layout.dev which showcases eye-opening, mind-expanding techniques for implementing performant, accessible, flexible web layouts using only browser-native CSS.
Context: I've been professionally involved with web development since 1998.
No regrets, except I found myself also building a framework....
https://plainvanillaweb.com/pages/applications.html This example code does exactly one of those things: update the address bar by setting the fragment identifier. Middle-clicking and back/forward buttons work because of that. Error handling and loading indicators are not present (they would matter if the page weren't just static content but actually loaded something from a server, like a real website, where requests can fail).
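The fragment-based approach being described boils down to something like this (`parseRoute` is an illustrative name, not the site's code); error handling and loading states would have to be layered on top:

```javascript
// Hash routing: the browser owns the history, so back/forward and
// middle-click work for free; JS only maps the fragment to a view name.
function parseRoute(hash) {
  const path = hash.replace(/^#\/?/, '');
  return path === '' ? 'home' : path.split('/')[0];
}

// Browser wiring (commented out so the sketch stays environment-neutral):
// window.addEventListener('hashchange', () => render(parseRoute(location.hash)));
// render(parseRoute(location.hash)); // initial view
```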
Re-implementing browser code in JavaScript is not using the "plain" or "vanilla" web functionality. You're making your own framework at that point
I'd like to use lit-html but Jetbrains does not support it - very annoying.
1. Get the desired node.
2. Write the content.
That is it. The corresponding code is two lines.
But but but... what about state? State is an entirely unrelated issue from writing content. State is also simple though. State is raw MVC plus storage. The way I do it is to have a single state object, the model. I update that model as necessary on user and/or network interaction, the view. When the page loads I read from the state object and then execute the corresponding changes using the same event handlers that would normally execute in response to the user interactions that wrote the state changes in the first place, which is the control.
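That whole description fits in a few lines (a sketch with invented names; in the browser the render output would be assigned to an element, and persistence would slot into the handler):

```javascript
// Model: one plain state object.
const model = { count: 0 };

// View: derive markup from state.
function render(m) {
  return `<h1>Count: ${m.count}</h1>`;
}

// Control: the same handler runs for live user events and for
// replaying saved state on page load.
function onIncrement() {
  model.count += 1;
  return render(model);
}
```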
That is simple, but many people find that simple is not easy.
But really that isn't what this is about. Framework people were always falling back on this fail-to-scale argument, even in the early days of jQuery. The real issue is that some people cannot architect and cannot refactor. That isn't a code or application limitation. It's a person limitation.
getting a site up and running with Vercel is like 20x faster and simpler than hardcoding .html files man.
I've discovered that when you start getting really cynical about the actual need for a web application - especially in B2B SaaS - you may become surprised at how far you can take the business without touching a browser.
A vast majority of the hours I've spent building web sites & applications has been devoted to administrative-style UI/UX wherein we are ultimately giving the admin a way to mutate fields in a database somewhere such that the application behaves to the customer's expectations. In many situations, it is clearly 100x faster/easier/less bullshit to send the business a template of the configuration (Excel files) and then load+merge their results directly into the same SQL tables.
The web provides one type of UI/UX. It isn't the only way for users to interact with your product or business. Email and flat files are far more flexible than any web solution.
If you have even a small bandwidth to maintain it over time, quick and simple solutions like an Excel template and a few custom scripts work great and often end up being more flexible as your end user is mostly working with raw data.
I've purchased specialized woodworking tools online that simply involved filling out a form. I later received the parts with an invoice to send payment. You can simply not pay if you choose not to.
There are so many way to do commerce both on and offline and if you squint and look closely you'll find interesting people doing interesting things all around you.
Yep :)
I know the other way around is basically the norm: how does one know the company, the seller, will actually provide the product after paying. But the prevailing culture, currently, is that companies in this regard are trustworthy and customers are not. It's a bit debatable but it makes sense.
Yes, I believe it does. These are highly specific brass tacks for making oval Shaker boxes. There are very few manufacturers of a proper #2 brass tack these days, so I would very much want to stay on the good side of any suppliers.
I think they said in their faq that twice they haven't been paid in 20 years or something like that. Their online "store" appears to be down right now–hopefully they are still in business.
edit: It appears John passed in 2023 :( https://blog.lostartpress.com/2023/02/07/john-wilson-1939-20...
People romanticize businesses like this but there’s a reason you’re not posting the link. It only works when it’s for a small group of people who are in the know and refer trusted buyers.
It’s also trivial to set up any number of different online purchase options that would avoid all of the problems with this for a nearly trivial fee.
I guess I’ve spent enough time dealing with things like non-payment or doing work for trades that never arrive that I just don’t romanticize this stuff any more.
You are correct that I was worried about an HN hug of orders for brass tacks.
> I guess I’ve spent enough time dealing with things like non-payment or doing work for trades that never arrive that I just don’t romanticize this stuff any more.
In this case there isn't much of a choice. When the last manufacturer of brass tacks closed down, John bought the machinery, and his is the only place I know of to get proper Shaker-style brass tacks in the US. I wanted the tacks, so I had no choice in the method of payment.
I understand your sentiment, but it was perfectly normal to pump gas before paying in the U.S. for a very long time, and it still is in many places. In other cities it is unheard of. At restaurants we can still eat before paying, but not many other places give consumers that much trust anymore.
I had a friend who worked at a gas station in high school. Filing police reports for people who filled up and left without paying was a standard part of operations.
This was in a nice area, too. Often people just forgot and drove away. The station had recourse because they had security cameras and people had license plates. The cities where you're forced to pay inside first probably have police departments that don't respond to non-payment at gas stations.
If someone showed up with a gas can or something they were instructed to shut off the pump and make them come inside to pay. It wasn’t as much of an honor system as it seems if you’re not familiar with it.
You can’t have a pay-later business without an amount of non-payment, which has to be compensated by higher prices (which other customers shoulder).
These are all choices a vendor can make. Something like this usually lasts right up until the secret gets out to a wider audience where the people who don’t care about social norms have no problem abusing a system left wide open.
People who never experienced high-trust and customs societies cannot grasp why and how it works infinitely better than low-trust ones.
But granted, all it takes is a few determined bad faith actors to break high-trust, when they are not vehemently and swiftly rejected…
But we have created an environment where this kind of thing is unthinkable, not even because people won't do it, but because they will only create legal trouble for themselves if they try. So the modus operandi for your average citizen in Western societies in general and US in particular is to not get involved and leave it all to law enforcement.
The business can just make less money than they would if everyone paid (which is, as you say, impossible). Costs and prices are linked in some markets, but it's not a natural law.
An interesting story behind those:
https://blog.lostartpress.com/2023/02/07/john-wilson-1939-20...
While an e-commerce solution is not always needed, there’s a good chance that a very simple shop cart facility will convert more than an email link, for certain types of products.
Why do I want "interesting" ways to buy things? I want to be able to buy what I want quickly and reliably. I don't see the benefit of being made to figure out how to buy something I want.
Why wouldn't you appreciate getting out of the (often) dull (sometimes) frictionless select-order-pay-receive-use-store-forget-discard purchase experience?
It's only a matter of time before this seller falls victim to a scammer - once they're found. I used to work for a book publisher who started doing their own e-commerce, and at the time one of the payment methods was a "pay later" one that predated the internet ("acceptgiro"). It only took a few months (if that) after the first sites went live that someone placed an order for a few hundred euros worth of books and had it delivered at a storage unit address. We scrapped the pay later payment option for orders over a certain amount then, and I'm sure later on the pay later option was removed entirely in favor of pay in advance methods.
There's newer pay later schemes now (Klarna IIRC) but the risk and debt collection falls to this payment provider. Of course, they got in legal trouble recently because they're a loan / credit company but didn't present the user with the right legalese for that.
It's one of the earliest lessons you'll learn from starting a company. Another close one is to not waste time on failed sales and annoying customers, replacing them with new customers is usually more profitable and enjoyable.
Payment after receipt is very common in Switzerland, but fraud is presumably rare. Your name would probably go on the debtors register and that's the sort of bad credit history you don't want to have. At some point the police/debt collection is involved, they get sent to whatever the address is and so on. For the average person it's not worth burning your name and address for a free spokeshave.
Every business is basically a phone number that you can message. It does not matter if you buy a pizza or furniture, book a hotel or need someone to clean your sofa.
No website. No need to fill in forms. No platform fee.
Whilst cheap 'dumb' phones do still exist and are used, even low income earners in South East Asia tend to have a smart phone, and most of the business done as landgenoot says is conducted via Telegram, Facebook, or occasionally WhatsApp or Line.
There are also ways of paying using nothing but a phone number, but usually business is done on a smart phone where photos of products are shared, delivery is arranged, and, since COVID, most payments are done via QR codes that require a smart phone.
In my experience this is not only true of the cities, but also even out in the provinces.
But just because it works that way there, doesn't mean it's right. There's nothing about SEA that implies to me the pinnacle of operational efficiency.
Enabling customer self-administration/configuration of a "B2B SaaS" system mandates some form of interaction. I would be surprised at how many would not expect "touching a browser" to do so.
> A vast majority of the hours I've spent building web sites & applications has been devoted to administrative-style UI/UX wherein we are ultimately giving the admin a way to mutate fields in a database somewhere such that the application behaves to the customer's expectations.
If there is no validation of information given nor system rules enforcement, then I would question the correctness of said sites/applications.
> In many situations, it is clearly 100x faster/easier/less bullshit to send the business a template of the configuration (Excel files) and then load+merge their results directly into the same SQL tables.
This approach obviates the value added by a system enforcing core software engineering concepts such as fitness of purpose, error detection/reporting, business workflows, and/or role-based access control.
In short, while letting customers send Excel files "and then load+merge their results directly into the same SQL tables" might sound nice, this does not scale and will certainly result in a failure state at some point.
Much of US banking operates almost entirely on this premise and has done so forever.
> error detection/reporting, business workflows, and/or role-based access control.
I'd take a look at the Nacha (ACH) operating rules if you have any doubt that sophisticated business workflows can be built on top of flat files and asynchronous transmission.
I might be mistaken, but isn't banking famous for
(1) the long hours (i.e., processes suck), (2) the drudgery of updating said Excel files (i.e., processes suck), and (3) being horribly expensive to access (i.e., processes suck)?
I have never once in my life as a corporate lawyer thought of banking as a model of operational efficiency.
Granted the only real security was the FTP username and password, but it was all internal and at the time (1990s) that was good enough.
> Much of US banking operates almost entirely on this premise and has done so forever.
This is a disingenuous statement as it relates to at least credit/debit/gift card transactions. Bank-to-bank and select high-volume Merchants communicate in certain circumstances with secure file transfers, especially for daily settlements.
The vast majority of Merchants do not, and instead rely on secure web applications to authorize, capture, and settle their transactions.
Perhaps other banking domains rely on Excel and similar to do business. I cannot speak to those business areas.
> I'd take a look at the Nacha (ACH) operating rules if you have any doubt that sophisticated business workflows can be built on top of flat files and asynchronous transmission.
And I'd recommend you take a look at a different integration: online payment processing using Orbital, as described by Oracle.
https://docs.oracle.com/cd/E69185_01/cwdirect/pdf/180/cwdire...
I've had the same epiphany when I worked for a fintech startup that interacts with financial institutions. Having a website just isn't necessary for some of the day-to-day operations vs just sending CSV/Excel files back and forth for reconciliation, settlement, accounting purposes.
This doesn't end well when there are hundreds of thousands of "reconciliation, settlement, accounting purposes" to support.
Not literally. Banks have a lot of automations. The reasons they’re not real-time has more to do with various regulations and technicalities of recourse, not because it’s someone doing semi-manual processing of everything at each bank.
We execute Python in our PG database and do testing through PGTAP.
> Email and flat files are far more flexible than any web solution.
Sometimes I feel like I'm taking crazy pills when I read HN lately. Suggesting that we e-mail requests around and have a person on the other end manually do things, including merging data into SQL tables, is such a bizarre claim to see. Every once in a while I encounter some business that operates like this and it's inevitably a nightmare, either as a customer or a partner. Not to mention it's ripe for everything from fraud to the company falling apart because the person who normally reads those e-mails and does the things for 20 years suddenly leaves one day.
This feels like a mix of rose-tinted glasses reminiscing about the old-timey way of doing things combined with a layer of career burnout that makes everything modern feel bad. Dealing with a business that operates this way is not good.
> Yeah, I mean, it’s a it’s a very, very, very important question, the SaaS applications, or biz apps. So let me just speak of our own dynamics. The approach at least we’re taking is, I think, the notion that business applications exist, that’s probably where they’ll all collapse, right in the agent era, because if you think about it, right, they are essentially CRUD databases with a bunch of business logic. The business logic is all going to these agents, and these agents are going to be multi repo CRUD, right? So they’re not going to discriminate between what the back end is. They’re going to update multiple databases, and all the logic will be in the AI tier, so to speak. And once the AI tier becomes the place where all the logic is, then people will start replacing the back ends, right?
https://www.windowscentral.com/microsoft/hey-why-do-i-need-e...
7:40 - Bob was delayed by 5 minutes. ETA 7.45
7:35 - Bob is heading your way. ETA 7:40
7:20 - Bob has picked up your order from Pizza place
So you work in logistics support, but you pay to do it?
I originally planned to scrape the data and make my own website with better (imo) controls, but v0 turned into pumping the data into a Google sheet.
I've never needed v1. The Google sheets "UI" solves filtering (i.e., breakfast vs lunch), access control, and basic analytics (~7 other colleagues also use this daily).
That said, I do think plain vanilla approaches like this site describes are valuable—not because everyone should use them, but because they remind us what the browser can do without 20 layers of abstraction. That can be really useful when the goal isn't a full app, but something in-between email and interface.
It’s less about dogma, more about choosing from a wider spectrum of options—including “no UI at all.”
That said, there are b2b exchanges where a simple website is perfect. Supplier quality cases where people need to exchange short texts or general supplier information exchange.
Also b2b customers are far less concerned about UX style than the average retail customer. The former wants a system that just works efficiently and everything else is a waste of time. Sometimes productivity clashes with modern sensibilities and in the b2b case productivity still wins.
I hate mail for formal processes though. In that case a link to a simple website that takes in information in an orderly way is the better solution.
I also hate loading Excel files into tables. There is always something going wrong, especially with files created in many different languages, and it is still work overhead. But there are already solutions for general information exchange that aren't necessarily parasitic SaaS services.
Of course there are alternatives, but I wouldn't call the usual ERP/CRM software superior to web apps.
The fact that this all got hyperlinked is superb and convenient, but also a challenge from a tech perspective, and what FAANG did in the 30 years after 1992 led to this horror of entangled and super-complex technologies we have today. Even the vanilla web is quite complex if you take into consideration all the W3C standardization.
Security or not, you can have an appliance run much simpler software given longer product lifetimes. My only hope is that with LLM-assisted coding this vanilla approach comes back, as the boilerplate pain becomes more bearable (how I really hated HTML at some point...). Besides, it is much more pleasant to prompt yourself a website than to try to position some image on stupid fb/insta pages/events, which is one major reason to step back and own your content again.
With that title I didn't expect Javascript to be part of the equation. To me "vanilla" is CERN's HTTP+HTML.
The thing that happened is that FAANG redesigned the web for their own needs, then other companies used that to fulfill their own needs too. That's how we ended up with a lot of available content, but also user info mining, browser monopoly, and remote code execution (JS) as a daily normalization of deviance.
There are some secessionists - Gopher is still alive, Gemini too - but alternatives have a hard time competing with the content companies can apparently provide for free. Most of the content we want costs time and/or money. Content creators can be fine with contributing from their own pocket, but this is not really sustainable. User sponsorship (donations via PayPal, Patreon, Ko-fi, ...) doesn't work well either.
Also, since the supporters of the alternatives are generally supporters of freedom (who isn't? Well, people don't reject freedom, they are "just" making compromises), they have to deal with illegal content and other forms of abuse.
So there are 3 problems an alternative web must solve: protocols, moderation and repayment.
and most importantly - we lost our right to search the content that the community generates. it is now walled off behind FAANG services, which threw us directly into the dark ages of the internet, where even your own content is out of reach.
here's a simple example - a group of friends has been throwing parties for 20 years, like raves. all these are announced on FB and now and then on some other services. more than 400 events over 20 years. trying to find these again is impossible. google won't index them, fb won't allow you to scrape them, insta also. perhaps some obscure snapshot of it lives in the internet archive, perhaps not. so one reason to own the content you publish is to be able to actually use it yourself after a while.
Even with JavaScript in the equation, the vanilla web is a good option to reclaim all that, and honestly bringing a personal site up in 2025 takes... less than a day to set up, with all the VMs, DNS, CF tunnel, DB, FE/BE hassle that stands in the way. It's more available than ever; people just need to be brave and embrace this... but something tells me the majority will not do it.
All the framework abstractions we made for humans coding productivity will need to be re-visited! I support plain vanilla web for this reason.
https://vorticode.github.io/solarite/
I was planning to improve performance more before announcing it. But even as-is it beats React and approaches Lit on the JS framework benchmarks.
My main gripe with web components is that while they expose the first half of the tree traversal (from the outside in, via connectedCallback), they don't provide access to the second half, after child elements have been created. (akin to Initialize and Load events from ye olde .NET)
One can imagine a cross-compiler so you could write React, but have it compiled to React-free javascript - leaving no React dependency in production.
Would be a lift, but looks superficially possible.
What are the blockers to this?
Thanks for the pointer!
Yeah I don't need your shitty onclick hijacks. Thanks.
The site positions itself as advocating for simplicity and minimalism in web development—stressing plain HTML, CSS, and JavaScript—but then pivots into building the entire project using React. That’s a contradiction.
If the goal is truly "plain vanilla web," introducing React (with its build tools, dependencies, and abstraction layers) runs counter to that ethos. A truly minimalist approach would skip frameworks entirely or at most use small, native JS modules.
So yes, it's philosophically inconsistent. Want to dig into a better alternative stack that sticks to that principle?
At first, I learned and used plain HTML/CSS/PHP and I thought that was good. At college, they taught the .NET framework, and for some years that was my go-to tech stack. Then I started to learn more languages and frameworks. At some point, it's hard to switch between them.
Now I stick with one thing, unless the platform doesn't support it. This also allows me to be a lot more productive, since I know most of what needs to be done.
Sure, I could start with the vanilla web, or some new framework, but it would take a lot more time and just isn't worth it.
As an example, the examples on `connectedCallback()` don't guard against the fact that this callback gets called every time the element gets inserted into the DOM tree. This could happen because you've removed it to hide it and then added it later to show it again, or it could happen because you've moved it to a new location in the tree (different parent, or re-ordering with respect to its siblings). Or maybe you're not manipulating this object at all! Maybe someone else is moving around a parent element of your custom element. Whatever the case, if your element's ancestor path in any way gets disconnected and then reconnected from `documentElement`, `connectedCallback()` gets called again.
That means that you have to spend extra effort to make sure anything you do in `connectedCallback()` either gets completely undone in `disconnectedCallback()`, or has some way of checking to make sure it doesn't redo work that has already been done.
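One common defensive shape (a sketch, not from the site; the HTMLElement fallback exists only so the snippet also runs outside a browser, and `setupCount` exists only to make the behavior observable):

```javascript
// Make connectedCallback idempotent and mirror every bit of setup in
// disconnectedCallback, so insert/remove/move cycles stay safe.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class TickerElement extends Base {
  setupCount = 0;
  #timer = null;

  connectedCallback() {
    if (this.#timer !== null) return; // reconnected: work already done
    this.setupCount += 1;
    this.#timer = setInterval(() => this.tick?.(), 1000);
  }

  disconnectedCallback() {
    clearInterval(this.#timer); // undo exactly what setup did
    this.#timer = null;
  }
}
```

Moving such an element around the tree then triggers the callbacks repeatedly without duplicating timers or listeners.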
There are some other pitfalls involving attributes, child elements, styling, behavior with bundling, etc., that they never really get into. And generally speaking, you're probably only going to find out about them from experience. I don't think I've seen anywhere that goes into Web Component best practices.
Which is a shame, because they are incredibly powerful and I enjoy using them quite a bit. Now that my company has decided to go all in on React, I think they've only really seen the beginner path on both. Web Components as a beginner look harder than React as a beginner. Once you start actually building apps, I find they actually end up having to do largely the same level of manual shenanigans. But I find that I'm doing manual shenanigans to work around React, whereas with Web Components there isn't any magic to work around, but there also isn't a lot of "help" that you'd end up ignoring in most cases anyway.
The web is supposed to degrade gracefully if you are missing browser features, up to and including images turned off.
Now, web developers give you a blank page if you don’t run a megabyte of their shitty code.
No thank you.
I agree. (This should also include CSS, TLS, cookies, and many other things; I often disable CSS, and a page should work just as well with CSS disabled as with pictures, JavaScript, or cookies disabled.)
However, there are some cases where JavaScript may be helpful, e.g. if a web page implements calculations or something like that in JavaScript; but that is not an excuse to prevent the documentation from being displayed when JavaScript is disabled. Documentation, and as much other content as possible, should really work even with JavaScript disabled (and, depending on what the script does, the page could provide a description of the calculation or of the rules of the game being implemented, a link to API documentation, or something else like that).
Pictures also might be useful in some articles (though they are often overused); but even then, if a picture is not displayed you could use an external program to display it. However, if something can be explained in the text, then it should be, so that even if you do not have a suitable program to display the picture (or do not want to display it, e.g. because the file size is too big, or because you are using text-to-speech or a braille display or something else like that) the page will still work.
TLS also should not always be mandatory. For things that require user authentication, and for writing, making it mandatory can be useful (especially if you are using X.509 client authentication, which is better than cookies or other methods of authentication anyway); but for read-only access to public data, TLS should be optional (though the server should still allow it, in case the client wants to use TLS for that too).
I recommend starting with web components plus a well-thought-out design system, then moving on from there; you're pretty sure to have a solid base to which you can always add React, Lit, Vue, or whatever else cooks your noodle.
The other way around is near impossible.
Sure you don't need bundlers and compilers (such as TS to JS), but at some point you might need async updates on `fetch()` of components that also share state with other components. At this point you're into framework territory, whether using your own or someone else's.
Producing a framework with those features that still fits in a single small JS file would be great, especially if it can propagate component updates to shared state (without updating the DOM each turn, hence shadow DOM).
Vaadin lets you build full web UIs without touching HTML, CSS, or JavaScript, entirely in Java. The UI logic runs on the server, meaning: no API design, no JSON mapping, no Redux — just a Java object model.
Vaadin follows a true full-stack approach, where frontend and backend live in a single codebase. Combined with Spring Boot or other JVM frameworks, this creates a cohesive Java application—no complex build pipelines, no split repos, and no friction between frontend/backend roles.
What I personally enjoy most is the smooth developer experience: you see your changes reflected in the browser instantly — no manual builds, no reload fiddling, no sluggish toolchains, just Java and a bit of Maven. For many internal business apps, it feels just as "plain" as the old-school server-rendered apps, just with modern capabilities like Web Components and security by default.
(Full disclosure: I work at Vaadin, but I’m genuinely a fan of the approach because it saves developers a lot of headaches.)
Debugging the frontend is not trivial, but it can still be done with the appropriate setting in the application properties (https://vaadin.com/docs/latest/flow/configuration/developmen...)
The page https://plainvanillaweb.com/pages/sites.html uses custom components for all code examples and embedded content. Without JavaScript, it merely shows "enable scripting to view ../XYZ.html" in place of all code examples and demos. Better than having no fallback at all, I suppose, yet still not "navigable".
The fact that it does not even bother to build these custom components on any native element with a similar purpose—like, say, a link that users could follow to see the text document (*), or a plain old iframe (**)—is grim.
Web components are indeed useful for prototyping and personal testing, but are they really past the threshold where it is safe to serve them in the wild, at the risk of harming some users?
(*) I know, view-source: links and embeds are sadly blocked by browsers nowadays. Makes very little sense to me. Someone likely managed to exploit it for some nasty purposes, so now we are "protected", I suppose.
(**) See? In the olden days even iframes were said to have a link fallback inside, for user agents that did not support them.
The problems of FAANG are not our problems, yet they've somehow convinced the majority of software architects that complexity is good.
My only concern is that when I use HTMX/Hyperscript in my FOSS projects, others may not be comfortable contributing: the learning curve is small, but they have to empty their React cup first.
You have such a great community with big, very well thought out libraries like Tanstack Query that is pretty nice to work with. I can have backend and front end code in the same repository in a simple way and reuse code.
I also have the project in Phoenix LiveView, which is also a much nicer way of using components. The thing is, I don't really know which tech stack is going to win, so I made a small prototype in both to see the different advantages and disadvantages.
One thing is clear though: pretty much everything is better than using vanilla components, and it's really sad, because I really do want to use vanilla components and I want them to be good.