I don’t think we need to foist such basic tasks onto a prediction machine, so much as we just need to go back to making competent software for end users again.
Like sure, it’s a neat concept, but man I just do not see the value of this as opposed to prior, lighter, and easier methods of static site generation. If anything, I see an author lamenting the lack of consumer options for site building that don’t involve extortionate subscriptions to overly powerful tools, and trying to reframe ChatGPT as some form of godsend of simplicity when it…kinda isn’t.
The ultimate resource is human time and effort. Why should I care about the efficiency of the underlying process? That's captured by the price. So if an LLM call like this costs a few pennies at most, and it saves me even a few seconds, then I'd say it's worth it.
There are costs to optimization, and we don't need to optimize everything. This is a nice general solution.
Because, as you just said yourself…
> The ultimate resource is human time and effort.
This is a disconnect I don’t think most people appreciate. Developers seem to operate as if Moore’s law will continue forever, that components are infinite, that cloud resources appear and disappear at the snap of their fingers, and that optimization is pointless in an era of excess.
Meanwhile, the actual engineers are building out new data centers to support these fancy prediction machines, supply chains are exploiting labor abroad for the raw materials necessary to power these guessing boxes, and we’re tearing our hair out at developers casually demanding dozens of CPU cores or terabytes of memory for their newest product specifically because they did no optimizations.
Actual humans - millions of them, in and outside of technology fields - are working in concert to support the least optimized software product in human history, just so you can squander water, energy, and land to run inference on a farm of GPUs to output a static website that Microsoft Word could generate in 2003 on a Pentium 4.
Jesus christ I am sick of this nonsensical argument that because something is cheap, it is somehow optimized and/or superior.
I guess you could write a script that reads your template and body, submits the whole thing to the ChatGPT API, and dumps out the file. But then you also have to double-check the whole thing to make sure it didn't mess anything up or change any text.
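Something like this, roughly; just a sketch assuming the official openai Python client, with the prompt, model name, and file names made up:

```python
# Rough sketch of the "script" idea: read a template and a body,
# ask the model to merge them, write the result out.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

template = Path("template.html").read_text()   # hypothetical file names
body = Path("post.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Insert the user's text into the HTML template. "
                    "Return only the finished HTML and do not change any wording."},
        {"role": "user", "content": f"TEMPLATE:\n{template}\n\nBODY:\n{body}"},
    ],
)

Path("index.html").write_text(response.choices[0].message.content)
# ...and you'd still want to diff the output by hand, since nothing
# guarantees the model didn't quietly alter the text.
```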
But then all you do is invent a shittier SSG that costs more money and more resources for literally no benefit.
Seems dumb
OP isn't going to save the world. Them saving a few minutes isn't going to free them up to discover unlimited energy. This is just excess consumption masked as something else.
This is just more excuses to justify laziness. "Its worth it cuz one day I'll solve world hunger."
Add it all up = trillions of hours saved.
Ergo, plenty more time to work on facilitating energy too cheap to meter.
(You won't understand. It's okay. You lack the imagination and mindset necessary to understand.)
This quote in particular sells me on the idea of satire:
> Almost anything that relies on structured input would be more convenient with an AI solution
You could take what you know about that structure and deterministically transform it, or you could just vibe it over to ChatGPT! That doesn't seem like a serious statement at all.
But this feels real:
> I looked into some other generators: Jekyll, Pelican, and so on. But everything seemed to have two problems. First, most of these seemed overkill for what I wanted to do. And second, despite the overhead, many didn’t seem to provide the kind of flexibility I wanted — they relied on writing strict Markdown or fitting within a predefined way of structuring the site.
That's an understandable problem. The purists will suggest, "Just write HTML", but markdown is convenient.
For anyone facing this issue, I'd suggest pandoc, which does a great job of transforming Markdown into HTML. One of the benefits of HTML was that browsers would work around what they considered "bad" input; you often didn't even need to close tags properly, and if you made an error you'd still get mostly correct output. Pandoc's Markdown isn't too fussy either. There's a markdown_strict variant if you want strict spec adherence, and a gfm (GitHub-flavoured Markdown) option for those of us used to the GitHub extensions.
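Something like the following would do it; just a sketch, assuming pandoc is installed and on your PATH, with placeholder file names:

```python
# Shell out to pandoc to turn a Markdown page into standalone HTML.
import subprocess

subprocess.run(
    ["pandoc", "page.md",       # placeholder input file
     "--from", "gfm",           # GitHub-flavoured Markdown input
     "--to", "html",
     "--standalone",            # emit a full HTML document, not a fragment
     "--output", "page.html"],  # placeholder output file
    check=True,
)
```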
Pandoc parses a small Markdown page into HTML in, well: 71 milliseconds.
So, certainly faster than any LLM response time.

Markdown is easier to write than raw HTML, and it transforms deterministically. But writing strict Markdown (or using any specific syntax) carries an efficiency penalty: how do I format a URL again? How do I insert an image? Guess I'll have to look that up. What if I want to use more advanced CSS formatting? I'll have to update the HTML afterwards by hand.
In contrast, using an LLM to format means I can put any kind of text in and the machine will just figure it out for me.
That's what I mean by "more convenient".
(Kompozer, Brackets, SeaMonkey, BlueGriffon, etc. did not work reliably for me. I found that TinyMCE etc. are WYSIWYG for text, but not for positioning [i.e. they are "rich text editors"])
Why did such free WYSIWYG apps for generating HTML/CSS die out? Anyone have theories?
There’s a healthy appetite for WYSIWYG editors still, but nobody wants to make anything that simple anymore. It’s all about building moats, using the latest frameworks, and ultra-slick designs with spy pixels and a deluge of cookies. Everything must be in a CMS, on a hosting provider, with load balancers and CDNs, and saddled with pop-ups, pop-ins, chat boxes, e-mail signups, metric collection, data hoarding, and third-party tie-ins.
It was literally just a case of going to (from memory, the details might be wrong) ftp://user:password@web.pipex.co.uk/, and then putting whatever you wanted into the ~/public_html directory there.
Browsers even started bringing out FTP support so you didn't need to find a client, although I found it more reliable to use WS_FTP.
5-20 MB of free hosting for every ISP customer was just something ISPs did back then. Everyone was more innocent (or naive, depending on how you look at it), but there wasn't much of a barrier to getting a static site up and running, and there really wasn't too much of a learning curve beyond, "Put your files here."
ChatGPT could develop your dynamic website too, per request.
```
<!-- Footer -->
<footer class="py-3" style="margin-top:5rem;">
```

(For what it's worth, that was part of the handwritten template already, not something the LLM generated on its own.)
Why use something that takes next to no CPU when you could instead use a massive, energy-sucking LLM?
Static site generation really should be as simple as “Export to HTML”, and upload to your web server. The fact it’s not anymore shows that none of these are about “democratizing” anything, but just locking people into prisons of their own making with highly specific tools.