This project seems more like something you'd do to demonstrate your skills with these tools. They do have uses in a business/for-profit context, working with groups, but they have absolutely no use or place hosting a personal static website. Unless you're doing it for kicks and enjoy useless complexity, which is fair. No accounting for taste in recreation.
Also, starting any comment with an unqualified "The best way..." is probably not the best way to engage in meaningful dialog.
This level of complexity would've been acceptable if this was about deploying one's own netlify type of service for personal use. Otherwise, it's just way too complicated.
I'm currently working on a Django app, complete with a database, a caching layer, a reverse-proxy, a separate API service, etc. and still much simpler to deploy than this.
It might have gotten better since, but back when I was running a Wordpress install it was a constant battle to keep bots out.
The tools selected are faster than their more mainstream counterparts — but since it's a static site anyway, the pre-build side of the toolchain is more about "nice dev ux" and the post-build is more about "really fast to load and read".
So I can't agree.
What does "helps contextual the blog" mean?
It appears, for the verb, you meant: "frobnicate".
Taken from the Coolify website (which OP uses for hosting):
> Brag About It. You can impress anyone by saying that you self-host in the Cloud. They will definitely be amazed.
This is the result of a hyper consumerist post Protestant culture in America and the rest of the English speaking countries.
My Ops brain says "taken in a vacuum, yes." However, if you make other things that are not static, put them into containers, and run those containers on a server, keeping the CI/CD process consistent makes absolute sense.
We run static sites in containers at my company for the same reason. We have a Kubernetes cluster with all the DNS updating, cert grabbing, and Prometheus monitoring, so we run static sites from an nginx container.
# copy all files
COPY . .
# install Python with uv
RUN uv python install 3.13
# run build process
RUN uv run --no-dev sus
This adds the entire repository to the first layer, then installs Python, then runs the build, which I assume will only then install the dependencies. This means that changing any file in the repository invalidates the first layer, triggering uv reinstalling Python and all the dependencies again. The correct Dockerfile would be something like:
# install Python with uv
RUN uv python install 3.13
# copy info for dependencies
COPY pyproject.toml uv.lock ./
# Install dependencies
RUN uv whatever
# Copy over everything else
COPY . .
# run build process
RUN uv run --no-dev sus
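The "uv whatever" step would presumably be `uv sync` with the project itself excluded, so the dependency layer is only rebuilt when the lockfile changes (the exact flags here are my guess, not from the original comment):

```dockerfile
FROM ghcr.io/astral-sh/uv:debian AS build
WORKDIR /src
# install Python with uv
RUN uv python install 3.13
# copy only the dependency metadata so this layer caches well
COPY pyproject.toml uv.lock ./
# install locked dependencies, but not the project itself
RUN uv sync --locked --no-dev --no-install-project
# copy the rest of the repository (invalidates only the layers below)
COPY . .
# run the build
RUN uv run --no-dev sus
```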
Same as how it's good to be able to easily run the exact same test suite in both dev and CI.
Even if you aren't an expert, it's trivial these days to copy/paste the Dockerfile into ChatGPT and ask it to optimize it or suggest improvements; it will then explain them to you.
Also while using Kubernetes, please use event-driven PubSub mechanisms to serve files for added state-of-the-art points.
/pun
html file -> ftp -> WWW
html file -> mv /var/www/public -> WWW
Possibly SSG -> html -> etc.
#!/usr/bin/env -S uv run --script
# -*- mode: python -*-
#
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "pyyaml", "flask", "markdown-it-py",
#     "linkify-it-py", "mdit-py-plugins",
# ]
# ///
The HTML templates & CSS are baked into the file, which is why it's so long. Flask is there so that I can have a live view locally while writing new notes. uv's easy dependency definition really made all of this much easier to manage. My previous site was org exported to HTML and took much more effort.
(With the conceit that the website is a "notebook" I call this file "bind").
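A stdlib-only sketch of the "templates baked into the file" idea (the author's actual script uses Flask and markdown-it-py; the template and function names here are made up for illustration):

```python
import string

# A minimal HTML page template "baked into" the script itself,
# rather than living in a separate templates/ directory.
PAGE = string.Template(
    "<!doctype html>\n"
    "<title>$title</title>\n"
    "<body>$body</body>\n"
)

def render_note(title: str, body_html: str) -> str:
    """Wrap already-rendered note HTML in the baked-in page template."""
    return PAGE.substitute(title=title, body=body_html)

print(render_note("Hello", "<p>first note</p>"))
```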
I had the opposite reaction when I read this post: I thought it was a very neat, clean and effective way to solve this particular problem - one that took advantage of an excellent stack of software - Caddy, Docker, uv, Plausible, Coolify - and used them all to their advantage.
Ignoring caching (which it sounds like the author is going to fix anyway, see their other comments) this is an excellent Dockerfile!
FROM ghcr.io/astral-sh/uv:debian AS build
WORKDIR /src
COPY . .
RUN uv python install 3.13
RUN uv run --no-dev sus
FROM caddy:alpine
COPY Caddyfile /etc/caddy/Caddyfile
COPY --from=build /src/output /srv/
8 lines is all it takes. Nice. And the author then did us the favor of writing up a detailed explanation of every one of them. I learned a few useful new tricks from this, particularly around using Caddy with Plausible. This one didn't strike me as over-engineering: I saw it as someone who has thought extremely carefully about their stack, figured out a lightweight pattern that uses each of the tools in that stack as effectively as possible, and then documented their setup in the perfect amount of detail.
FROM nginx:alpine
COPY . /usr/share/nginx/html
make && make deploy
where the default target is simply `uv run --no-dev sus` and the deploy target is simply `rsync -avz --delete ./dist/ host:/path/to/site/` is a hell of a lot more neat, clean, effective, and lightweight? (And if you care about atomic deployment, it's just another command in the deploy target.) I have ~60 static websites deployed on a single small machine at zero marginal cost. I use nginx but could use Caddy just the same. With this "lightweight pattern" I'd be running 60 and counting Docker containers for no reason.
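The whole Makefile is about this big (targets and rsync destination taken from the description above; `dist/` as the output directory is an assumption):

```make
# build the site with the SSG
all:
	uv run --no-dev sus

# sync the build output to the web root
deploy: all
	rsync -avz --delete ./dist/ host:/path/to/site/

.PHONY: all deploy
```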
Also their site isn't entirely static: they're using Caddy to proxy specific paths to plausible.io for analytics.
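The Caddy side of that proxying looks roughly like this (a sketch based on Plausible's standard script/event endpoints, not necessarily the author's exact config; the site address is a placeholder):

```caddyfile
example.com {
	root * /srv
	file_server

	# proxy Plausible's analytics script and event endpoint
	reverse_proxy /js/script.js https://plausible.io {
		header_up Host plausible.io
	}
	reverse_proxy /api/event https://plausible.io {
		header_up Host plausible.io
	}
}
```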
If you don't want to have multiple `COPY`s, you can add a `.dockerignore` file (https://docs.docker.com/build/concepts/context/#dockerignore...) with the `COPY . .` directive and effectively configure an allowlist of paths, e.g.,
*
!src/
!requirements.txt
Oh wait …
zahlman•1d ago
For making a static site that you're personally deploying, exactly why is Docker required? And if the Docker process will have to bring in an entire Linux image anyway, why is obtaining Python separately better than using a Python provided by the image? And given that we've created an entire isolated container with an explicitly installed Python, why is running a Python script via `uv` better than running it via `python`? Why are we also setting up a virtual environment if we have this container already?
Since we're already making a `pyproject.toml` so that uv knows what the dependencies are, we could just as well make a wheel, use e.g. pipx to install it locally (no container required) and run the program's own entry point. (Or use someone else's SSG, permanently installed the same way. Which is what I'm doing.)
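That is, something like this in `pyproject.toml` (the project and entry-point names here are hypothetical):

```toml
[project]
name = "mysite"            # hypothetical project name
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["markdown-it-py", "pyyaml"]

[project.scripts]
mysite-build = "mysite:main"   # console entry point instead of `uv run`
```

Then `pipx install .` once, and run `mysite-build` directly from anywhere.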
nkantar•20h ago
Broadly speaking, I explicitly wanted to stay in the Coolify world. Coolify is a self-hostable PaaS platform—though I use the Cloud service, as I mentioned—and I really like the abstraction it provides. I haven’t had to SSH into my server for anything since I set it up—I just add repos through the web UI and things deploy and show up in my browser.
Yes, static sites certainly could—and arguably even should—be done way simpler than this. But I have other things I want to deploy on the same infrastructure, things that aren’t static sites, and for which containers make a whole lot more sense. Simplicity can be “each thing is simple in isolation”, but it can also be “all things are consistent with each other”, and in this case I chose the latter.
If this standardization on this kind of abstraction weren’t a priority, this would indeed be a pretty inefficient way of doing this. In fact, I arrived at my current setup by doing what you suggested—setting up a server without containers, building sites directly on it, and serving them from a single reverse proxy instance—and the amount of automation I found myself writing was a bit tedious. The final nail in the coffin for that approach was realizing I’d have to solve web apps with multiple processes in some other way regardless.
divbzero•6h ago
I too was skeptical of the motivation until reading this. Given that Coolify requirement, your solution (build static files in one container, deploy with Caddy in another) seems quite sensible.
john-tells-all•1h ago
So what you're saying is that "Static sites with Python, uv, Caddy, and Docker" wasn't the overall goal. You want to stay in Coolify world, where most things are a container image.
It just so happens that a container can be just a statically-served site, and this is a pattern to do it.
By treating everything as a container, you get a lot of simplicity and flexibility.
Docker etc is overkill for the static case, but useful for the general case.
TZubiri•6h ago
Static sites with HTML, CSS, Apache and Linux.
TZubiri•5h ago
So you just solve all problems with advanced tools, no matter how simple the problem. You got into tech by learning how to use a chainsaw because it's so powerful and you wanted to cut down a tree; now you need to cut some butter for toast? Chainsaw!
busyant•5h ago
Using a Ferrari to deliver the milk is how I've heard it said.
pfranz•1h ago
I mostly work in a different domain than webdev, but feel strongly about trying to decouple base technologies of your OS and your application as much as possible.
It's one thing if you are using a Linux image and choose to grab their Python package and other if their boot system is built around the specific version of Python that ships with the OS. The goal being if you later need to update Python or the OS they're not tethered together.