The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.
But it was a nice pattern to work with: for example if you made code changes you often got hot-reloading ‘for free’ because the client can just query the server again. And it was by definition infinitely flexible.
I’d be interested to hear from anyone with experience of both Datastar and Hotwire. Hotwire always seemed very similar to HTMX to me, but on reflection it’s arguably closer to Datastar because the target is denoted by the server. I’ve only used Hotwire for anything significant, and I’m considering rewriting the messy React app I’ve inherited using one of these, so it’s always useful to hear from others about how things pan out working at scale.
Also, custom actions [https://turbo.hotwired.dev/handbook/streams#custom-actions] are super powerful. We use them to emit browser events, update DOM classes and attributes, and so on; just be careful not to overuse them.
Which states some of the basic (great) functionality of Datastar has been moved to the Datastar Pro product (?!).
I’m eager to support an open source product financially and think the framework author is great, but the precedent this establishes isn’t great.
They focus on the practical solutions much more than on the typical bikeshedding.
My intent was to say that hobbyists have a different, refreshing approach to programming and its technologies that I appreciate.
I had been tracking Datastar for months, waiting for the 1.0.0 release.
But my enthusiasm for Datastar has now evaporated. I've been bitten by the open-source-but-not-really bait and switch too many times before.
HTMX is a single htmx.js file with like 4000 lines of pretty clearly written code.
It purports to add a couple of missing hypermedia features to HTML - and I think it succeeds.
It's not a "framework" - good
It's not serverside - good
Need to add a feature? Just edit htmx.js
My current thoughts lean towards a fully functional open source product with a HashiCorp style BSL and commercial licensing for teams above a size threshold.
[1]: https://data-star.dev/ [2]: https://data-star.dev/reference/datastar_pro#attributes
That said, the attitude of the guy in the article is really messed up. Saying "fuck you" to someone who gave you something amazing for free, because he's not giving you as much as you want for free -- it's entitled to a toxic degree, and poisons the well for anyone else who may want to do something open-source.
For such reasons, The Economist style guide advises against using fancy language when simpler language will suffice.
Bit like Pydantic. It's a JSON parsing library at the end of the day, and now suddenly that's got a corporate backer and they've built a new thing
Polars is similar. It's a faster Pandas and now suddenly it's no longer the same prospect.
FastAPI the same. That one I find even more egregious since it's effectively Starlette + Pydantic.
Edit: Add Plotly/Dash, SQLAlchemy, Streamlit to that list.
SQLAlchemy just has paid for support, I shouldn't have included it with the others, I must have confused it with something else.
Interestingly, this article pops up first page if you search "htmx vs datastar".
I've written customer-facing interfaces in HTMX and currently quite like it.
One comment: HTMX supports out-of-band responses, which make it possible to update multiple targets in one request. There are also ways for the server to redirect the target to something else.
I use this a lot, as well as HTMX's support for SSE. I'd have to check what Datastar offers here, because SSE is one thing that makes dashboarding in HTMX a breeze.
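For anyone who hasn't used it, htmx's SSE support lives in an extension. A minimal sketch, assuming the SSE extension script is loaded; the endpoint URL and event names are hypothetical:

```html
<!-- sse-connect opens the EventSource; each sse-swap element is
     replaced whenever the named SSE event arrives from the server -->
<div hx-ext="sse" sse-connect="/dashboard/events">
  <div sse-swap="cpu-stats">Waiting for CPU stats...</div>
  <div sse-swap="active-users">Waiting for user count...</div>
</div>
```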
Maybe I'm cynical, but fresh new accounts praising something that has only its core open source works as a negative ad. You have my sword!
The patch statements on the server injecting HTML seem absolutely awful in terms of separation of concerns, and it would undoubtedly become an unwieldy nightmare in an application of any size, once more and more HTML is being injected from the server.
Reimplementations tend to simplify some bits, but end up amassing complexity in various corners...
Let's say I'm intrigued and on the fence.
1. If the element is out-of-band, it MUST have `hx-swap-oob="true"` on it, or it may be discarded / cause unexpected results
2. If the element is not out-of-band, it MUST NOT have `hx-swap-oob="true"` on it, or it may be ignored.
This makes it hard to use the same server-side HTML rendering code for a component that may show up either OOB or not; you end up having to pass down "isOob" flags, which is ugly and annoying.
Interestingly, elements sent via the HTMX websocket extension [1] do use OOB by default.
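As a sketch of how a single response mixes the two, with hypothetical IDs and content (only the `hx-swap-oob` attribute is real htmx):

```html
<!-- The in-band element replaces the request's normal target -->
<form id="comment-form">...freshly reset form...</form>

<!-- OOB elements are swapped into the existing elements that share
     their IDs, anywhere else on the page -->
<span id="comment-count" hx-swap-oob="true">42 comments</span>
<div id="flash-message" hx-swap-oob="true">Comment posted!</div>
```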
The term was coined in 1965 by Ted Nelson in: https://dl.acm.org/doi/10.1145/800197.806036
Here's the exact sentence: "The hyperfilm-- a browsable or vari-sequenced movie-- is only one of the possible hypermedia that require our attention."
As for Datastar, all the signal and state stuff seems to me like a step in the wrong direction.
Going back to it is the point. HTMX lets you do that while still having that button refresh just a part of the page, instead of reloading the whole page. It's AJAX with a syntax that frees you from JS and manual DOM manipulation.
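As a concrete sketch of that "refresh just a part of the page" idea (the endpoint and IDs are made up for illustration):

```html
<!-- Clicking the button fetches an HTML fragment from the server and
     swaps it in place of #cart-summary; no hand-written JS involved -->
<div id="cart-summary">3 items</div>
<button hx-get="/cart/summary"
        hx-target="#cart-summary"
        hx-swap="outerHTML">
  Refresh cart
</button>
```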
I fairly recently developed an app in PHP, in the classic style, without frameworks. It brought back everything I remembered: the $annoyance $of $variable $prefixes, the wonky syntax, and a type system that makes JS look amazing -- but it still didn't make me scream in pain and confusion like React does. Getting the app done was way quicker than if any JS framework had been involved.
Having two separate but tightly integrated apps is annoying. HTMX or any other classic web-dev approaches like PHP and Django make you have one app, the backend. The frontend is the result of executing the backend.
This isn't true. HTMX has native support for "pushing" data to the browser with Websockets or SSE, without "custom" code.
And reading comments one would think this is some amazing piece of technology. Am I just old and cranky or something?
This feels... very hard to reason about. Disjoint.
You have a front-end with some hard-coded IDs on e.g. <div>s. A trigger on a <button> that black-box calls some endpoint. And then, on the backend, you use the SDK for your choice language to execute some methods like `patchElements()` on e.g. an SSE "framework" which translates your commands to some custom "event" headers and metadata in the open HTTP stream and then some "engine" on the front-end patches, on the fly, the DOM with whatever you sent through the pipe.
This feels to me like something that will very quickly become very hard to reason about globally.
Presentation logic ends up scattered in small functions all over the backend, plus whatever on-render logic you have in a classic template, because of course you want an initial on-load state.
I'm doing React 100% nowadays. I'm happy, I'm end-to-end type safe, I can create the fanciest shiny UIs I can imagine, I don't need an alternative. But if I needed it, if I had to go back to something lighter, I'd just go back to all in SSR with Rails or Laravel and just sprinkle some AlpineJS for the few dynamic widgets.
Anyway, I'm sure people will say that you can definitely make this work and organize your code well enough, and surely there are tons of successful projects using Datastar, but I just fail to understand why I would bother.
I've also onboarded interns and juniors onto a React codebase, and there are things about React that only really make sense if you're more old-school and know how different types behave, which you need in order to understand why certain things are necessary.
I remember explaining to an intern why passing an inlined object as a prop was causing the component to re-render, and they asked whether that's a codebase smell... That question kind of shocked me, because to me it was obvious why this happens, and it's not even directly a React issue. However, the fix is to write "un-JavaScripty" code in React. So this person's intro to JS was React, and their whole understanding of JS is now weirdly anchored around React.
So I totally understand the critique of hooks. They just don't seem to be in the spirit of the language, but do work really well in spite of the language.
As someone who survived the early JS wilderness, then found refuge in jQuery, and after trying a bunch of frameworks and libraries finally settled on React: I think React is great, but objectively parts of it suck, and it's not entirely its fault.
Something that's been an issue with our most junior dev: he's heard a lot of terminology but never really learned what some of those terms mean, so he'll use them in ways that don't really make sense. Your example here is just the kind of thing I'd expect from him if he's heard the phrase "code smell", assumed something incorrect about what it meant, and never actually looked up its meaning.
It is possible your co-worker was asking you this the other way around - that they'd just learned the term and were trying to understand it rather than apply it.
My dream was having a Go server churning out all this hypermedia and I could swerve using a frontend framework, but I quickly found the Go code I was writing was rigid and convoluted. It just wasn’t nice. In fact it’s the only time I’ve had an evening coding session and forgotten what the code was doing on the same evening I started.
I’m having a completely opposite experience with Elixir and Phoenix. That feels like an end to end fluid experience without excessive cognitive load.
[1]: https://templ.guide/
Granted, I’ve only used it for smaller projects, but I can almost feel my brain relax as the JS fades out, and suddenly making web apps is super fun again.
FWIW, the default config of Rails includes Turbo nowadays, which seems quite similar to Datastar in concept.
I may be a little biased because I've been writing webapps with htmx for 4 years now, but here are my first thoughts:
- The examples given in this blog post show what seems to be the main architectural difference between htmx and Datastar: htmx is HTML-driven, Datastar is server-driven. So yes, the client-side API is simpler, but that's because the other side has to be more complex: in the first example, if the HTML element doesn't hold the information about where to inject the fragment returned by the server, then the server has to know it, so you have to write it somewhere on that side instead. I guess it's a matter of personal preference then, but from an architectural point of view both approaches hold up
- The "fewer attributes" argument seems unfair when the htmx examples use optional attributes with their default values (yes, you can remove the hx-trigger="click" in the first example; that's 20% fewer attributes, and the argument is now 20% less strong)
- Minor but still: the blogpost would gain credibility and its arguments would be stronger if HTML was used more properly: who wants to click on <span> elements? <button> exists just for that, please use it, it's accessible ;-)
- In the end I feel that the main Datastar selling point is its integration of client-side features, as if Alpine or Stimulus features were natively included in htmx. And that's a great point!
Edit - rather than spam with multiple thank you comments, I'll say here to current and potential future repliers: thanks!
I assume it had backend scaling issues, but usually backend scaling is over-stated and over-engineered, meanwhile news sites load 10+ MB of javascript.
This reduces a lot of accidental complexity. If done well, you only need to care about the programming language and some core libraries. Everything else becomes orthogonal to everything else, so the cost of changes is greatly reduced.
I'm contemplating using HTMX in a personal project - do you know if there are any resources out there explaining why you might also need other libraries like Alpine or Stimulus?
I'm not too strong in frontend, but wouldn't this make for a lighter, faster front end? Especially added up over very many elements?
Well, not at all. The only compelling reason for me to use server-side rendering for apps (not blogs, obviously; those should be HTML) is metadata tags. That's why I switched away from pure React, and everything has been harder, slower for the user, and more difficult to debug than client-side rendering.
* Datastar sends all responses using SSE (Server-Sent Events). Usually SSE is employed to let the server push events to the client, and Datastar does this, but it also uses SSE encoding for responses to client-initiated actions like clicking a button (clicking the button sends a GET request, and the server responds with zero or more SSE events over a time period of the server's choosing).
* Whereas HTMX supports SSE as one of several extensions, and only for server-initiated events. It also supports Websockets for two-way interaction.
* Datastar has a concept of signals, which manages front-end state. HTMX doesn't do this and you'll need AlpineJS or something similar as well.
* HTMX supports something called OOB (out-of-band), where you can pick out fragments of the HTML response to be patched into various parts of the DOM, using the ID attribute. In Datastar this is the default behaviour.
* Datastar has a paid-for Pro edition, which is necessary if you want certain behaviours. HTMX is completely free.
I think the other differences are pretty minor:
* Datastar has a smaller library footprint, but both are tiny to begin with (11kb vs 14kb), which is splitting hairs.
* Datastar needs fewer attributes to achieve the same behaviours. I'm not sure about this, you might need to customise the behaviour which requires more and more attributes, but again, it's not a big deal.
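For reference, the SSE responses described in the first point look roughly like this on the wire. The `datastar-patch-elements` event name reflects the Datastar 1.0 docs but should be checked against the version you use; the element content here is made up:

```
HTTP/1.1 200 OK
Content-Type: text/event-stream

event: datastar-patch-elements
data: elements <div id="build-status">Running...</div>

event: datastar-patch-elements
data: elements <div id="build-status">Done</div>
```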
Returning html from a server is... just the WWW.
For those of you who don't think Datastar is good enough for realtime/collaborative/multiplayer and/or think you need any of the PRO features.
These three demos each run on a $5 VPS and don't use any of the Pro features. They have all survived the front page of HN. Datastar is a fantastic piece of engineering.
- https://checkboxes.andersmurphy.com/
- https://cells.andersmurphy.com/
- https://example.andersmurphy.com/ (game of life multiplayer)
On both the checkboxes/cells examples there's adaptive view rendering so you can zoom out a fair bit. There's also back pressure on the virtual scroll.
Pro features ? Now I see - it is open core, with a $299 license. I'll pass.
I don't use anything from pro and I use datastar at work. I do believe in making open source maintainable though so bought the license.
The pro stuff is mostly a collection of foot guns you shouldn't use and are a support burden for the core team. In some niche corporate context they are useful.
You can also implement your own plugins with the same functionality if you want it's just going to cost you time in instead of money.
I find that devs complaining about paying for things never gets old. A one-off lifetime license? How scandalous! Sustainable open source? Disgusting. Oh, a proprietary AI model built on others' work without their consent, which steals my data, for only $100 a month? Take my money!
Tbh that mental model seems so much simpler than any or all of the other datastar examples I see with convoluted client state tracking from the server.
Would you build complex apps this way as well? I'd assume this simple approach only works because the UI being rendered is also relatively simple. Is there any content I can read around doing this "immediate mode" approach when the user is navigating across very different pages with possibly complicated widget states needing to be tracked to rerender correctly?
how do you zoom out?
Also, even with your examples, wouldn't data-replace-url be a nice-to-have to auto update the url with current coordinates, e.g. ?x=123&y=456
<span hx-target="#rebuild-bundle-status-button" hx-select="#rebuild-bundle-status-button" hx-swap="outerHTML" hx-trigger="click" hx-get="/rebuild/status-button"></span>
Turn into:
<span data-on-click="@get('/rebuild/status-button')"></span>
The other examples are even more confusing. In the end, I don't understand why the author switched from HTMX to Datastar.
The Datastar code instead says: "when this span is clicked, fetch /rebuild/status-button and do whatever it says". Then, it's /rebuild/status-button's responsibility to provide the "swap the existing #rebuild-bundle-status-button element with this new one" instruction.
If /rebuild/status-button returns a bunch of elements with IDs, Datastar implicitly interprets that as a bunch of "swap the existing element with this new one" instructions.
This makes the resulting code look a bit simpler since you don't need to explicitly specify the "target", "select", or "swap" parts. You just need to put IDs on the elements and Datastar's default behavior does what you want (in this case).
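So a hypothetical response body from /rebuild/status-button only needs IDs that already exist in the page, assuming the default ID-based patching the parent comment describes:

```html
<!-- Each element is matched by ID against the live DOM and patched in;
     no target/select/swap attributes needed on the client side -->
<button id="rebuild-bundle-status-button">Rebuilding... 45%</button>
<span id="rebuild-last-run">Started 10 seconds ago</span>
```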
Datastar keeps the logic in the backend. Just like we used to do with basic html pages where you make a request, server returns html and your browser renders it.
With Datastar, you are essentially building a kind of PWA where you load the page once and then, as you interact with it, it keeps making backend requests and renders the desired changes instead of reloading the entire page. But you are getting back snippets of HTML, so the browser does not have to do much except the rendering itself.
This also means the state is back in the backend as well, unlike with SPA for example.
So again, Datastar goes back to the old request-response HTML model, which is perfectly fine, valid and tried, but it also allows you to have dynamic rendering, like you would have with JavaScript.
In other words, the front-end is purely visual and all the logic is delegated back to the backend server.
This is essentially the thin client vs. smart client question, a pendulum we keep swinging: we move logic from the backend to the frontend, then swing back and move it from the frontend to the backend.
We started with thin clients because computers did not have sufficient computing power back in the day, so backend servers did most of the heavy lifting while the thin clients did very little (essentially just rendering ready-made information). That changed over time: as computers got more capable, we moved more logic to the frontend, which allowed faster interaction since we no longer had to wait for the server to respond to every interaction. This is why there is so much JavaScript today, and why we have SPAs and state on the client.
So Datastar essentially gives us a good alternative: we can choose whether to process more data on the backend or the frontend, while still retaining a dynamic frontend. It is not just basic request-response where every page has to re-render and we have to wait for each request to finish; we can do this in parallel and still have the impression of a "live" page.
It smells of rigging.
Happy user of https://reflex.dev framework here.
I was tired of writing backend APIs with the only purpose that they get consumed by the same app's frontend (typically React). Leading to boilerplate code both backend side (provide APIs) and frontend side (consume APIs: fetch, cache, propagate, etc.).
Now I am running 3 different apps in production for which I no longer write APIs. I only define states and state updates in Python. The frontend code is written in Python too, and auto-transpiled into a React app, which keeps its states and views automagically in sync with the backend. I am only 6 months into Reflex, but so far it's been mostly a joy. Of course, you've got to learn a few small but important details, such as state dependencies and proper state caching, but the upsides of Reflex are a big win for my team and me. We write less code and ship faster.
PostgREST is great for this: https://postgrest.org
I run 6 React apps in prod, which used to consume APIs written with Falcon, Django, and FastAPI. For the past 2 years they have all consumed APIs from PostgREST. I define SQL views for the tables I want to expose, and optionally a bunch of SQL grants and policies on the tables if the app has different roles/permissions, and PostgREST automatically turns the views into endpoints, adds all the CRUD + UPSERT capabilities, and handles authorization, filtering, grouping, ordering, insert returning, pagination, and so on.
Htmx gives me bad vibes from having tons of logic _in_ your html. Datastar seems better in this respect but has limitations Hotwire long since solved.
Write some HTMX and you'll find that exactly the opposite is true
<div hx-get="{% url 'web-step-discussion-items-special-counters' object.bill_id object.pk %}?{{ request.GET.url...who knows how many characters long it is.
It's hard to tell whether they optimised the app, deleted a ton of noise, or just merged everything into those 300-character-long megalines.
of course it (should) lead to a lot less code! at the cost of completely foregoing most of the capabilities offered by having a well-defined API and a separate client-side application
... and of course this is happening as, over the last ~2 decades, we mostly figured out that most of that amazing client-side freedom is not worth it
... most clients are dumb devices (crawlers), most "interactions" are primitive read-only ones, and having a fast and simple site is a virtue (or at least it makes economic sense to shunt almost all complexity to the server-side, as we have fast and very capable PoPs close to users)
It's not that, at least in my opinion, it's that we love (what we perceive as) new and shiny things. For the last ten years with Angular, React, Vue et al., new waves of developers have forgotten that you can output stuff directly from the server to the browser outside of "APIs".
This implementation is "dumb" to me. Feels like the only innovation is using SSE; otherwise it's roughly `selector.addEventListener('click', async () => { selector.outerHTML = await (await fetch(endpoint)).text(); });`. That's most of the functionality right there. You can even use native HTML element attributes instead of wiring up the listener yourself: https://developer.mozilla.org/en-US/docs/Web/API/Element/cli....
I really don't see any benefit to using this.
There's also a slide in my talk that presents how many JS dependencies we dropped, while not adding any new Python. Retrospectively, that is a much more impressive achievement.
Or a backwards-incompatible HTMX v2 will finish it off, leaving the whole codebase obsolete. It's the circle of life.
I am not saying it is wrong. It's just a bit funny, looking at it from the perspective of how the pendulum is now swinging the other way.
Everything old is new again.
Having the backend aware of the IDs in the HTML leads to pain. The HTMX way seems a lot simpler and Rails + Turbo has gone in that direction as well.
This isn't really a criticism of Datastar, though: I think the popularity of OOB in HTMX indicates that the pure form of this is too idealistic for a lot of real-world cases. But it would be nice if we could come up with a design that gives the best of both worlds.
You send down the whole page on every change. The client just renders. It's immediate mode like in video games.
I'm seriously keen on trying it out. It's not like Htmx is bad, I've built a couple of projects in it with great success, but they all required some JS glue logic (I ended up not liking AlpineJS for various reasons) to handle events.
If Datastar can minimize that as well, even better!