This neatly demonstrates one of the problems with CGI: it adds synchronisation issues while removing synchronisation tooling.
Here's that code:
    let new = !Path::new(DB_PATH).exists();
    let conn = Connection::open(DB_PATH).expect("open db");
    // ...
    if new {
        conn.execute_batch(
            r#"
            CREATE TABLE guestbook(
So the bug here would occur only the very first time the script is executed, IF two processes run it at the same time such that one of them creates the file while the other one assumes the file did not exist yet and then tries to create the tables. That's pretty unlikely. In this case the losing script would return a 500 error to that single user when the CREATE TABLE fails.
Honestly if this was my code I wouldn't even bother fixing that.
(If I did fix it I'd switch to "CREATE TABLE IF NOT EXISTS...")
... but yeah, it's a good illustration of the point you're making about CGI introducing synchronization errors that wouldn't exist in app servers.
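For what it's worth, a minimal sketch of that idempotent fix (assuming the same rusqlite setup as the quoted snippet; the column definitions here are invented for illustration):

```rust
use rusqlite::Connection;

const DB_PATH: &str = "guestbook.db";

fn open_db() -> rusqlite::Result<Connection> {
    let conn = Connection::open(DB_PATH)?;
    // Idempotent: if two first-run processes race, the loser's CREATE TABLE
    // is a no-op instead of an error (SQLite's write lock still serialises them).
    conn.execute_batch(
        r#"
        CREATE TABLE IF NOT EXISTS guestbook(
            id      INTEGER PRIMARY KEY,
            name    TEXT NOT NULL,
            message TEXT NOT NULL
        );
        "#,
    )?;
    Ok(conn)
}
```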
Plus, honestly, even if you are relatively careful and configure everything perfectly, having the web server execute stuff in a specific folder inside the document root just seems like a recipe for problems.
And, sadly, there is no getting around the "configure everything perfectly" problem. :(
It’s also more secure because each request is isolated at the process level. Long-lived processes leak information to other requests.
I would turn it around and say it’s the ideal model for many applications. The only concern is performance. So it makes sense that we revisit this question given that we make all kinds of other performance tradeoffs and have better hardware.
Or, you know, not every site is about scaling requests. It’s another way you can simplify.
> but it is an outdated execution model
Not an argument.
The opposite trend of ignoring OS level security and hoping your language lib does it right seems like the wrong direction.
So the upshot of writing CGI scripts is that you can... ship broken, buggy code that leaks memory to your webserver and have it work mostly alright. I mean look, everyone makes mistakes, but if you are routinely running into problems shipping basic FastCGI or HTTP servers in the modern era you really need to introspect what's going wrong. I am no stranger to writing one-off Go servers for things and this is not a serious concern.
Plus, realistically, this only gives a little bit of insulation anyway. You can definitely still write CGI scripts that explode violently if you want to. The only way you can really prevent that is by having complete isolation between processes, which is not something you traditionally do with CGI.
> It’s also more secure because each request is isolated at the process level. Long lived processes leak information to other requests.
What information does this leak, and why should I be concerned?
> Or you know not every site is about scaling requests. It’s another way you can simplify.
> > but it is an outdated execution model
> Not an argument.
Correct. That's not the argument, it's the conclusion.
For some reason you ignored the imperative parts,
> It's cool that you can fork+exec 5000 times per second, but if you don't have to, isn't that significantly better?
> Plus, with FastCGI, it's trivial to have separate privileges for the application server and the webserver.
> [Having] the web server execute stuff in a specific folder inside the document root just seems like a recipe for problems.
Those are the primary reasons why I believe the CGI model of execution is outdated.
> The opposite trend of ignoring OS level security and hoping your language lib does it right seems like the wrong direction.
CGI is in the opposite direction, though. With CGI, the default behavior is that your CGI process is going to run with similar privileges to the web server itself, under the same user. On a modern Linux server it's relatively easy to set up a separate user with more specifically-tuned privileges and with various isolation options and resource limits (e.g. cgroups.)
Yes. The code is already shitty. That’s life. Let’s make the system more reliable and fault tolerant.
This argument sounds a lot like “garbage collection is for bad programmers who can’t manage their memory”.
But let me add another reason, using your framing. With fire-and-forget, programmers get used to crashing intentionally at the first sign of trouble. This makes it easy to detect failures and improve the code. The incentive for long-running processes is to avoid crashing, so programs get into bad states instead.
> The only way you can really prevent that is by having complete isolation between processes
Yes. That’s the idea. Web server forks, and execs. Separate memory spaces.
> What information does this leak
Anything that might be in a resource, or memory. Or even in the resource of a library.
> and why should I be concerned
Accessing leaked information from a prior run is a common attack.
> but if you don't have to, isn't that significantly better?
Long running processes are inherently more complex. The only benefit is performance.
> [Having] the web server execute stuff in a specific folder inside the document root just seems like a recipe for problems.
As opposed to? All processes have a working directory. What problems come from using the file system?
> cgroups
Yes, it’s the same amount of effort to configure.
> This argument sounds a lot like “garbage collection is for bad programmers who can’t manage their memory”.
This is not a "Simply don't make mistakes" type of argument, it's more like a "We've moved past this problem" type of argument. The choice of garbage collection as an example is a little funny, because actually I'd argue heavily in favor of using garbage collection if you're not latency-sensitive; after all, like I said, I use Go for a lot of one-off servers.
It'd be one thing if every person had to sit there and solve again the basic problems behind writing an HTTP server, but you don't anymore. Many modern platforms put a perfectly stable HTTP server right in the standard library, freeing you from even needing to install more dependencies to be able to handle HTTP requests effectively.
> > The only way you can really prevent that is by having complete isolation between processes
> Yes. That’s the idea. Web server forks, and execs. Separate memory spaces.
That's not complete isolation between processes. You can still starve the CPU or RAM, get into contention over global locks (e.g. an SQLite database), do conflicting file I/O inside the same namespace. I can go on but the point is that I don't consider two processes running on the same machine to be "isolated" with each other. ("Process isolation" is typically used to talk about isolation between processes, not isolation of workloads into processes.) If you do it badly, you can wind up with requests that sporadically fail or hang. If you do it worse, you can wind up with corruption/interleaving writes/etc.
Meanwhile, if you're running a typical Linux distro with systemd, you can slap cgroups and namespacing onto your service with the triviality of slapping some options into an INI file. (And if you're not because you hate systemd, well, all of the features are still there, you just may need to do more work to use them.)
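For concreteness, a hypothetical fragment of such a unit file (service name, user, and limits are all made up; the directives are standard systemd options):

```ini
# /etc/systemd/system/guestbook.service (illustrative only)
[Service]
User=guestbook            # dedicated low-privilege user
MemoryMax=256M            # cgroup memory cap
TasksMax=64               # cgroup pid/task cap
CPUQuota=50%              # cgroup CPU cap
ProtectSystem=strict      # read-only view of the OS
ProtectHome=yes
PrivateTmp=yes            # private /tmp via mount namespace
NoNewPrivileges=yes
```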
> > What information does this leak
> Anything that might be in a resource, or memory. Or even in the resource of a library you use.
> > and why should I be concerned
> Accessing leaked information from a prior run is a common attack.
I will grant you that you can't help it if one of your dependencies (or God help you, the standard library/runtime of your programming language) is buggy and leaks global state between instantiations. Practically speaking though, if you are already not sharing state between requests this is just not a huge issue.
Sometimes it feels like we're comparing "simple program written in CGI where it isn't a big deal if it fails or has some bugs" to "complex program written using a FastCGI or HTTP server where it is a big deal if it leaks a string between users".
> As opposed to? All processes have a working directory. What problems come from using the file system?
The problem isn't the working directory, it's the fact that anything in a cgi-bin directory (1) will be exec'd if it can be, and (2) exists under the document root, which the webserver typically has privileges to write to.
> Yes it’s the same amount of effort to configure this.
I actually really didn't read this before writing out how easy it was to use these with systemd, so I guess refer to the point above.
Sure, it is easy to view this as the process being somewhat sloppy with regards to how it did memory. But it can also be seen as just less work. If you can toss the entire allocated range of memory, what benefit is there to carefully walking back each allocated structure? (Notably, arenas and such are efforts to get this kind of behavior in longer lived processes.)
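A small sketch of what that looks like inside a long-lived process, using the bumpalo crate as one example of an arena (the request shape here is invented):

```rust
use bumpalo::Bump;

struct Entry<'a> {
    name: &'a str,
    message: &'a str,
}

fn handle_request(raw_name: &str, raw_message: &str) {
    // One arena per request: every allocation is a cheap pointer bump.
    let arena = Bump::new();
    let entry = arena.alloc(Entry {
        name: arena.alloc_str(raw_name),
        message: arena.alloc_str(raw_message),
    });
    println!("{}: {}", entry.name, entry.message);
    // No per-object teardown: dropping the arena frees the whole range at
    // once, which is what a CGI process gets for free at exit.
}
```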
Service management:
systemctl start service.app
or docker run --restart=unless-stopped --name=myservice myservice:version
If it isn't written as a service, then it doesn't need management. If it is written as a service, then service management tools make managing it easy.

> there is no graceful shutdown ... to worry about
Graceful shutdown:
kill -9
or docker kill -s KILL myservice
If your app/service can't handle that, then it's designed poorly.

But the world has changed. Modern systems are excellent for multiprocessing, CPUs are fast, cores are plentiful and memory bandwidth just continues getting better and better. Single-thread performance has stalled.
It really is time to reconsider the old mantras. Setting up highly complicated containerized environments to manage a fleet of anemic VMs because NodeJS' single threaded event loop chokes on real traffic is not the future.
Still, even when people run single-thread event loop servers, you can run an instance per CPU core; I recall this being common for WSGI/Python.
> The CGI model may still work fine, but it is an outdated execution model
The CGI model of one process per request is excellent for modern hardware and really should not be scoffed at anymore IMO.
It can both utilize big machines, scale to zero, is almost leak-proof as the OS cleans up all used memory and file descriptors, is language-independent, dead simple to understand, allows for finer granularity resource control (max mem, file descriptor count, chroot) than threads, ...
How is this execution model "outdated"?
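On the resource-control point, a rough sketch of what a per-request process can do for itself (Linux-specific, using the libc crate; the limits are arbitrary):

```rust
fn cap_resources() -> std::io::Result<()> {
    // Each CGI invocation is its own process, so these caps affect only
    // the current request, not a shared long-lived server.
    let mem = libc::rlimit { rlim_cur: 64 << 20, rlim_max: 64 << 20 }; // 64 MiB address space
    let fds = libc::rlimit { rlim_cur: 32, rlim_max: 32 };             // at most 32 open fds
    let ok = unsafe {
        libc::setrlimit(libc::RLIMIT_AS, &mem) == 0
            && libc::setrlimit(libc::RLIMIT_NOFILE, &fds) == 0
    };
    if ok { Ok(()) } else { Err(std::io::Error::last_os_error()) }
}

fn main() -> std::io::Result<()> {
    cap_resources()?;
    // ...handle the request: read env/stdin, emit headers and body...
    println!("Content-Type: text/plain\n");
    println!("hello");
    Ok(())
}
```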
> having the web server execute stuff in a specific folder inside the document root just seems like a recipe for problems
This is easy enough for non-technical people or school kids, and it's still how it works for many WordPress sites.
The modern way of deploying things is safer but the extra complexity has pushed many, many folks to just put their stuff on Facebook/Instagram instead of leveling up their devops skills.
Somehow we need to get the simplicity back, I think. Preferably without all the exploits.
That said, fork+exec isn't the best for throughput. Especially if the httpd doesn't isolate forking into a separate, barebones child process, fork+exec involves a lot of kernel work.
FastCGI or some other method to avoid forking for each request is valuable regardless of runtime. If you have a runtime with high startup costs, even more so.
What's the point of using FastCGI compared to a plain http server then? If you are going to have a persistent server running why not just use the protocol you are already using the semantics of?
Path traversal bugs allowing written files to land in the cgi-bin used to be a huge exploit vector. Interestingly, some software actually relied on being able to write executable files into the document root, so the simple answer of making the permissions more limited is actually not a silver bullet.
If you've never seen or heard of this, ¯\_(ツ)_/¯
> Unix doesn't have folders
Great and very important point. Someone should go fix all of these bugs:
https://github.com/search?q=repo%3Atorvalds%2Flinux%20folder...
Of course, disabling ExecCGI in one directory won't help if you do have path traversal holes in your upload-handling code. I'm not convinced that disabling CGI will help if attackers can use a path traversal hole to upload malicious executables to arbitrary paths you can write to. They can overwrite your .bashrc or your FastCGI backend program or whatever you're likely to execute. CGI seems like the wrong thing to blame for that.
Why are you linking me to a "Sign in to search code on GitHub" page?
GitHub is basically the only service I'm aware of that actually has the ability to grep over the Linux kernel. Most of the other "code search" systems either cost money to use or only search specific symbols (e.g. the one hosted on free-electrons.)
For a similar effect, grep the Linux kernel and be amazed as the term "folder" is actually used quite a lot to mean "directory" because the distinction doesn't matter anymore (and because when you're implementing filesystem drivers you have to contend with the fact that some of them do have "folders".)
A couple of years ago my (now) wife and I wrote a single-event Evite clone for our wedding invitations, using Django and SQLite. We used FastCGI to hook it up to the nginx on the server. When we pushed changes, we had to not just run the migrations (if any) but also remember to restart the FastCGI server, or we would waste time debugging why the problem we'd just fixed wasn't fixed. I forget what was supposed to start the FastCGI process, but it's not running now. I wish we'd used CGI, because it's not working right now, so I can't go back and check the wedding invitations until I can relogin to the server. I know that password is around here somewhere...
A VPS would barely have simplified any of these problems, and would have added other things to worry about keeping patched. Our wedding invitation RSVP did need its own database, but it didn't need its own IPv4 address or its own installation of Alpine Linux.
It probably handled less than 1000 total requests over the months that we were using it, so, no, it was not significantly better to not fork+exec for each page load.
You say "outdated", I say "boring". Boring is good. There's no need to make things more complicated and fragile than they need to be, certainly not in order to save 500 milliseconds of CPU time over months.
No, it's not.
CGI is Common Gateway Interface, a specific technology and protocol implemented by web servers and applications/scripts. The fact that you do a fork+exec for each request is part of the implementation.
"Serverless" is a marketing term for a fully managed offering where you give a PaaS some executable code and it executes it per-request for you in isolation. What it does per request is not defined since there is no standard and everything is fully managed. Usually, rather than processes, serverless platforms usually operate on the level of containers or micro VMs, and can "pre-warm" them to try to eliminate latency, but obviously in case of serverless the user gets a programming model and not a protocol. (It could obviously be CGI under the hood, but when none of the major platforms actually do that, how fair is it to call serverless a "marketing term for CGI"?)
CGI and serverless are only similar in exactly one way: your application is written "as-if" the process is spawned each time there is a request. Beyond that, they are entirely unrelated.
> A couple of years ago my (now) wife and I wrote a single-event Evite clone for our wedding invitations, using Django and SQLite. We used FastCGI to hook it up to the nginx on the server. When we pushed changes, we had to not just run the migrations (if any) but also remember to restart the FastCGI server, or we would waste time debugging why the problem we'd just fixed wasn't fixed. I forget what was supposed to start the FastCGI process, but it's not running now. I wish we'd used CGI, because it's not working right now, so I can't go back and check the wedding invitations until I can relogin to the server. I know that password is around here somewhere...
> A VPS would barely have simplified any of these problems, and would have added other things to worry about keeping patched. Our wedding invitation RSVP did need its own database, but it didn't need its own IPv4 address or its own installation of Alpine Linux.
> It probably handled less than 1000 total requests over the months that we were using it, so, no, it was not significantly better to not fork+exec for each page load.
> You say "outdated", I say "boring". Boring is good. There's no need to make things more complicated and fragile than they need to be, certainly not in order to save 500 milliseconds of CPU time over months.
To be completely honest with you, I actually agree with your conclusion in this case. CGI would've been better than Django/FastCGI/etc.
Hell, I'd go as far as to say that in that specific case a simple PHP-FPM setup seems like it would've been more than sufficient. Of course, that's FastCGI, but it has the programming model that you get with CGI for the most part.
But that's kind of the thing. I'm saying "why would you want to fork+exec 5000 times per second" and you're saying "why do I care about fork+exec'ing 1000 times in the total lifespan of my application". I don't think we're disagreeing in the way that you think we are disagreeing...
That's the sense in which I mean "Serverless is a marketing term for CGI." But you're right that it's not, strictly speaking, true, because (AFAIK, e.g.) AWS doesn't actually use the CGI protocol in between the parts of their setup, and I should have been clear about that.
PHP is great as a runtime, but it sucks as a language, so I didn't want to use it. Django in regular CGI would have been fine; I just didn't realize that was an option.
Honestly this isn't even the right terminology. The point of "serverless" is that you don't manage a server. You can, for example, have a "serverless" database, like Aurora Serverless or Neon; those do not follow the "CGI" model.
What you're talking about is "serverless functions". The point of that is still that you don't have to manage a server, not that your function runs once per request.
To make it even clearer, there is also Google Cloud Run, which is another "serverless" platform that runs request-oriented applications, except it actually doesn't use the function call model. Instead, it runs instances of a stateful server container on-demand.
Is "serverless functions" just a marketing term for CGI? Nope. Again, CGI is a non-overlapping term that refers to a specific technology. They have the same drawbacks as far as the programming model is considered. Serverless functions have pros and cons that CGI does not and vice versa.
> because it's spinning up (AFAIK) an entire VPS
For AWS Lambda, it is spinning up Firecracker instances. I think you could conceivably consider these to not be entire VPS instances, even though they are hardware virtualization domains.
But actually it can do things that CGI does not, since all that's prescribed is the programming model and not the execution model. For example, AWS Lambda can spin up multiple instances of your program and then freeze them right before the actual request is sent, then resume them right when the requests start flowing in. And like yeah, I suppose you could build something like that for CGI programs, or implement "serverless functions" that use CGI under the hood, but the point of "serverless" is that it abstracts the "server" away, and the point of CGI was that it let you run scripts to answer NCSA HTTPd requests.
Because the programming language models are compatible, it would be possible to adapt a CGI program to run under AWS Lambda. However, the reverse isn't necessarily true, since AWS Lambda also supports doing things that CGI doesn't, like servicing requests other than HTTP requests.
Saying that "serverless is just a marketing term for CGI" is wrong in a number of ways, and I really don't understand this point of contention. It is a return to a simpler CGI-like programming model, but it's pretty explicitly about the fact that you don't have to manage the server...
> PHP is great as a runtime, but it sucks as a language, so I didn't want to use it. Django in regular CGI would have been fine; I just didn't realize that was an option.
I'm starting to come back around to PHP. I can't deny that it has some profound ugliness, but they've sincerely cleaned things up a lot and made life generally better. I like what they've done with PHP 7 and PHP 8 and think that it is totally suitable for simple one-off stuff. And package management with Composer seems straightforward enough for me.
To be completely clear, I still haven't actually started a new project in PHP in over 15 years, but my opinion has gradually shifted and I fear I may see the day where I return.
I used to love Django, because I thought it was a very productive way to write apps. There are things that Django absolutely gets right, like the built-in admin panel; it's just amazing to have for a lot of things.

That said, I've fallen off with Django and Python. Python may not have as butt-ugly a past as PHP, but it has aged poorly for me. I feel like it is an easy language to write bugs in. Whereas most people agree that TypeScript is a big improvement for JavaScript development, I think many would argue that the juice just isn't worth the squeeze with gradual typing in Python, and I'd have to agree: the type checking and the ecosystem around it in Python just make it not worth the effort. Surprisingly, PHP actually pulled ahead here, adding type annotations with some simple run-time checking, making it much easier to catch a lot of bugs that were once very common in PHP.

Django has probably moved on and improved since I was last using it, but I definitely lost some of my appreciation for it. For one thing, while it has a decent ecosystem, it feels like that ecosystem is just constantly breaking. I recall running into so many issues migrating across Django versions, and dealing with things like static files. Things that really should be simple...
I think you might not be very familiar with how people typically used CGI in the 01990s and 02000s, because you say "[serverless] is a return to a simpler CGI-like programming model, but it's pretty explicitly about the fact that you don't have to manage the server..." when that was the main reason to use CGI rather than something custom at the time; you could use a server that someone else managed. But you seem to think it was a difference rather than a similarity.
Why do you suppose we were running our CGI scripts under NCSA httpd before Apache came out? It wasn't because the HTTP protocol was super complicated to implement. I mean, CGI is a pretty thin layer over HTTP! But you can implement even HTTP/1.1 in an afternoon. It was because the guys in the computer center had a server (machine) and we didn't. Not only didn't we have to manage the server; they wouldn't let us!
As for Python, yeah, I'm pretty disenchanted with Python right now too, precisely because the whole Python ecosystem is just constantly breaking. And static files are kind of a problem for Django; it's optimized for serving them from a separate server.
It is not strictly limited to the CGI protocol, of course, but it is the marketing term for the concept of the application not acting as the server, which would include CGI applications. CGI applications, like all serverless applications, outsource the server role to another process, such as Apache or nginx. Hence the literal name.
> "Serverless" is a marketing term for a fully managed offering where you give a PaaS
Fully managed offerings are most likely to be doing the marketing, so it is understandable how you might reach that conclusion, but the term is being used to sell to developers. It communicates to them, quite literally, that they don't have to make their application a server, which has been the style for networked applications for a long time now. But if you were writing a CGI application to run on your own systems, it would also be serverless.
The point isn't really that the application is unaware of the server, it's that the server is entirely abstracted away from you. CGI vs serverless is apples vs oranges.
> [...] but the term is being used to sell to developers. It communicates to them, quite literally, that they don't have to make their application a server [...]
I don't agree. It is being sold to businesses, that they don't have to manage a server. The point is that you're paying someone else to be the sysadmin and getting all of the details abstracted away from you. Appealing to developers by making their lives easier is definitely a perk, but that's not why the term "serverless" exists. Before PaaSes I don't think I've ever seen anyone once call CGI "serverless".
Do you mean a... computer? Server is a software term. It is a process that listens for network requests.
At least since CGI went out of fashion, embedding a server right in your application has been the style. Serverless sees a return to the application being less a server, pushing the networking bits somewhere else. Modern solutions may not use CGI specifically, but the idea is the same.
If you did mistakenly type "server" when you really meant "computer", PaaS offerings already removed the need for businesses to manage computers long before serverless came around. "Serverless" appeared specifically in reference to the CGI-style execution model, it being the literal description of what it is.
Between this and the guy arguing that UNIX doesn't have "folders" I can see that these kinds of threads bring out the most insane possible lines of rhetoric. Are you sincerely telling me right now you've never seen the term "server" used to refer to computers that run servers? Jesus Christ.
Pedantry isn't a contest, and I'm not trying to win it. I'm not sitting here saying that "Serverless is not a marketing term for CGI" to pull some epic "well, actually..." I'm saying it because God damnit, it's true. Serverless was a term invented specifically by providers of computers-that-aren't-yours to give people options to not need to manage the computers-that-aren't-yours. They actually use this term serverless for many things, again including databases, where you don't even write an application or a server in the first place; we're just using "serverless" as a synonym for "serverless function", which I am fine to do, but pointing that out is important for more than just pedantry reasons because it helps extinguish the idea that "serverless" was ever meant to have anything to do with application design. It isn't and doesn't. Serverless is not a marketing term for CGI. Not even in a relaxed way, it's just not. The selling point of Serverless functions is "you give us your request handler and we'll handle running it and scaling it up".
This has nothing to do with the rise of embedding a server into your application.
No. "Cloud" was the term invented for that, inherited from networking diagrams where it was common to represent the bits you don't manage as cloud figures. Usage of "Serverless" emerged from AWS Lamba, which was designed to have an execution model much like CGI. "Severless" refers to your application being less a server. Lamba may not use CGI specifically, but the general idea is very much the same.
That was the selling point of CGI hosting though. Except that the "scaling it up" part was pretty rare. There were server farms that ran CGI scripts (NCSA had a six-server cluster with round-robin DNS when they first published a paper describing how they did it, maybe 01994) but the majority of CGI scripts were almost certainly on single-server hosting platforms.
Usually somebody else is managing the server, or servers, so you don't have to think about it. That's been how it's worked for 30 years.
> Before PaaSes I don't think I've ever seen anyone once call CGI "serverless".
No, because "serverless" was a marketing term invented to sell PaaSes because they thought that it would sell better than something like "CloudCGI" (as in FastCGI or SpeedyCGI, which also don't use the CGI protocol). But CGI hosting fits cleanly within the roomy confines of the term.
Having a guy named Steve manage your servers is not "serverless" by my definition, because it's not about you personally having to manage the server, it's about anyone personally having to manage it. AWS Lambda is managed by Amazon as a singular giant computer spawning micro VMs. And sure yes, some human has to sit here and do operations, but the point is that they've truly abstracted the concept of a running server from both their side and yours. It's abstracted to the degree that even asking "what machine am I running on?" doesn't even have a meaningful answer and if you did have the answer you couldn't do anything with it.
Shared hosting with a cgi-bin is closer to this, but it falls short of fully abstracting the details. You're still running on a normal-ish server with shared resources and a web server configuration and all that jazz, it's just that you don't personally have to manage it... But someone really does personally have to manage it. It's not an autonomous system.
And anyway, there's no reason to think that serverless platforms are limited to things that don't actually run a server. On the contrary there are "serverless" platforms that run servers! Yes, truly, as far as I know containers running under cloud run are in fact normal HTTP servers. I'm actually not an expert on serverless despite having to be on this end of the argument, but I'll let Google speak for what it means for Cloud Run to be "serverless":
> Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.
These PaaSes popularized the term to mean this from the get-go; just because you have passionately formed a belief that it ever meant something else doesn't change a thing.
The performance numbers seem to show how bad it is in the real world.
For testing I converted the CGI script into a FastAPI script and benchmarked it on my MacBook Pro M3. I'm getting super impressive performance numbers:
Read:

```
Statistics     Avg        Stdev       Max
  Reqs/sec     2019.54    1021.75     10578.27
  Latency      123.45ms   173.88ms    1.95s
  HTTP codes:
    1xx - 0, 2xx - 30488, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:  30.29MB/s
```

Write (shown in the graph of the OP):

```
Statistics     Avg        Stdev       Max
  Reqs/sec     931.72     340.79      3654.80
  Latency      267.53ms   443.02ms    2.02s
  HTTP codes:
    1xx - 0, 2xx - 0, 3xx - 13441, 4xx - 0, 5xx - 215
    others - 572
  Errors:
    timeout - 572
  Throughput:  270.54KB/s
```
At this point, the contention might be the single SQL database. Throwing a beefy server like in the original post would increase the read performance numbers pretty significantly, but wouldn't do much on the write path.
I'm also thinking that in this day and age, one needs to go out of their way to do something with CGI. All macro and micro web frameworks come with an HTTP server, and there are plenty of options. I wouldn't do this for anything apart from fun.
FastAPI-guestbook.py https://gist.github.com/rajaravivarma-r/afc81344873791cb52f3...
I’d also be interested in getting a concrete reason though.
Another reason to use CGI is if you have a very small and simple system. Say, a Web UI on a small home router or appliance. You're not going to want the 200 NPM packages, transpilers and build tools and 'environment' managers, Linux containers, Kubernetes, and 4 different observability platforms. (Resume-driven-development aside.)
A disheartening thing about my most recent Web full-stack project was that I'd put a lot of work into wrangling it the way Svelte and SvelteKit wanted, but upon finishing, I wasn't happy with the complicated and surprisingly inefficient runtime execution. I realized that I could've done it in a fraction of the time and complexity -- in any language with convenient HTML generation, a SQL DB library, and an HTTP/CGI/SCGI-ish library, plus a little client-side JS.
Most of the chore work is done by ChatGPT, and the mental model needed to understand what it wrote is very light, often a single file. It is also easily embedded in static file generators.
By contrast, Vue/React require a lot of context to understand and mentally parse. In React, useCallback/useEffect/useMemo make me manage dependencies manually, which really reminds me of manual memory management in C, with perhaps even more pitfalls. In Vue, it's the difference between computed, props, and vanilla variables. I am amazed that the supposedly more approachable part of tech is actually more complex than regular library/script programming.
I used jQuery in a project recently where I just needed some interactivity for an internal dashboard/testing solution. I didn't have a bunch of time to setup a whole toolchain for Vue (and Pinia, Vue Router, PrimeVue, PrimeIcons, PrimeFlex and the automated component imports) because while I like using all of them and the developer experience is quite nice, the setup still takes a bit of time unless you have a nice up to date boilerplate project that's ready to go.
Not having a build step was also really pleasant: I didn't need to do complex multi-stage builds or worry that copying assets would somehow slow down a Maven build for the back end (relevant for those cases when you package your front end and back end together in the same container and use the back end to serve the front end assets, vs two separate containers where one is just a web server).
Only problem was that jQuery doesn't compose as nicely; I missed the ability to nest a bunch of components. Might just have to look at Lit or something.
But in the end efficiency isn't my concern, as I have almost no visitors; what turns out to be more important is that Go has a lot of useful stuff in the standard library, especially the HTML templates, that allows me to write safe code easily. To test the statement, I'll even provide the link and invite anyone to try and break it: https://wwwcip.cs.fau.de/~oj14ozun/guestbook.cgi (the worst I anticipate happening is that someone could use up my storage quota, but even that should take a while).
However this still requires a lockfile because while rename(2) is an atomic store it's not a full CAS, so you can have two processes reading the file concurrently, doing their internal update, writing to a temp file, then rename-ing to the target. There will be no torn version of the reference file, but the process finishing last will cancel out the changes of the other one.
The lockfile can be the "scratch" file, as open(O_CREAT | O_EXCL) is also guaranteed to be atomic; however, now you need a way to wait for that path to disappear before retrying.
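A rough sketch of that scheme (file names are arbitrary); the scratch file doubles as the lock because create_new maps to O_CREAT | O_EXCL:

```rust
use std::fs::{self, OpenOptions};
use std::io::Write;

fn update_guestbook(update: impl Fn(&str) -> String) -> std::io::Result<()> {
    loop {
        // O_CREAT | O_EXCL: only one process can hold the scratch/lock file.
        match OpenOptions::new().write(true).create_new(true).open("guestbook.tmp") {
            Ok(mut tmp) => {
                let old = fs::read_to_string("guestbook.txt").unwrap_or_default();
                tmp.write_all(update(&old).as_bytes())?;
                tmp.sync_all()?;
                // rename(2) is atomic, so readers never see a torn file.
                return fs::rename("guestbook.tmp", "guestbook.txt");
            }
            Err(e) if e.kind() == std::io::ErrorKind::AlreadyExists => {
                // Someone else is mid-update; wait for the path to disappear.
                // (A crashed process would leave a stale lock, which this
                // naive retry loop does not handle.)
                std::thread::sleep(std::time::Duration::from_millis(10));
            }
            Err(e) => return Err(e),
        }
    }
}
```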
I agree that it still requires a lockfile if write conflicts are not acceptable.
Actually shell scripting is the perfect language for CGI on embedded devices. Bash is ~500k and other shells are 10x smaller. It can output headers and html just fine, you can call other programs to do complex stuff. Obviously the source compresses down to a tiny size too, and since it's a script you can edit it or upload new versions on the fly. Performance is good enough for basic work. Just don't let the internet or unauthenticated requests at it (use an embedded web server with basic http auth).
Summary:
- 60 virtual AMD Genoa CPUs with 240 GB (!!!) of RAM
- bash guestbook CGI: 40 requests per second (and a warning not to do such a thing)
- Perl guestbook CGI: 500 requests per second
- JS (Node) guestbook CGI: 600 requests per second
- Python guestbook CGI: 700 requests per second
- Golang guestbook CGI: 3400 requests per second
- Rust guestbook CGI: 5700 requests per second
- C guestbook CGI: 5800 requests per second
https://github.com/Jacob2161/cgi-bin
I wonder if the gohttpd web server he was using was actually the bottleneck for the Rust and C versions?
I struggled for _15 mins_ on yet another f#@%ng-Javascript-based-ui-that-does-not-need-to-be-f#@%ng-Javascript, simply trying to reset my password for Venmo.
Why... oh why... do we have to have 9.1megabytes of f#@*%ng scripts just to reset a single damn password? This could be literally 1kb of HTML5 and maybe 100kb of CSS?
Anyway, this was a long way of saying I welcome FastCGI and server-side rendering. JS needs to be put back into the toy bin... er, trash bin, where it belongs.
What is a modern python-friendly alternative?
- wsgiref.handlers.CGIHandler, which is not deprecated yet. gvalkov provided example code for Flask at https://news.ycombinator.com/item?id=44479388
- use a language that isn't Python so you don't have to debug your code every year to make it work again when the language maintainers intentionally break it
- install the old cgi module for new Python from https://github.com/jackrosenthal/legacy-cgi
- continue using Python 3.12, where the module is still in the standard library, until mid-02028
It's so simple and it can run anything, and it was also relatively easy to have the CGI script run inside a Docker container provided by the extension.
In other words, it's so flexible that it means the extension developers would be able to use any language they want and wouldn't have to learn much about Disco.
I would probably not push to use it to serve big production sites, but I definitely think there's still a place for CGI.
In case anyone is curious, it's happening mostly here: https://github.com/letsdiscodev/disco-daemon/blob/main/disco...
So I'd say per day is not very meaningful.
As other commenters have pointed out, peak traffic is actually more important.