Many popular C projects do really well. Projects that you probably use.
Memory-safe languages eliminate vulnerability classes, but well-engineered C has proven viable for security-critical <insert whatever you want> infrastructure. The real question is whether this framework maintains that standard, not whether C is inherently unsuitable. The security concerns are legitimate, but they are not absolute.
I think you are being a bit too dismissive, and your comment puts nothing concrete on the table.
"Can write safe code" does not mean "always writes safe code". A web server needs to be safe code, always.
> One of the highest priorities for the HN algorithm is to promote good interactions and discourage bad interactions. The logic is if you have a lot of people bickering with each other, regardless of the topic, it normalizes bad behavior. HN is trying to sustain itself as a forum with great discussions.
If any of the above is incorrect, I'm interested in learning more.
However, it's just not constructive, and it's repetitive. You're basically walking into a bar and yelling that alcohol is unhealthy.
It's true that the repliers crossed into the red as well, but fundamentally that's a healthy immune response going a little too far.
Rather than flaming someone for not responding in the intended HN spirit, and invoking their recent post as a gotcha, it would be better to take that post (https://news.ycombinator.com/item?id=45340298) as evidence that they want the same things that you (and we!) do, and base your response on that.
Nearly everyone here wants great discussions; the problem is that we all underestimate the provocations in our own comments, or even just don't see them at all. Meanwhile the provocations in other people's comments often land much harder on us as readers. Say the skew is 10x in each direction—that leads to a 100x distortion. This "100x problem" is probably at the root of most interpersonal glitches here (and not only here). Unfortunately, it seems to be a deep and universal bias.
Indeed, a web server needs to be "safe". How do you know this project is not safe? Have you even tried it, let alone reviewed it, or did you just see "in C" and automatically assume it is not a safe web framework?
I am pretty sure the author of this project would be thrilled for you to submit issues or even PRs.
The fact that you CAN write memory safe code in C does not mean all maintenance programmers of your project will always write memory safe code in all their commits.
Memory managed languages unquestionably reduce the surface area of bugs one has to worry about, and in particular they eliminate the class of vulnerabilities that was most prevalent in web servers prior to the widespread adoption of memory safe languages.
Yes, memory-safe languages eliminate vulnerability classes. I said that in my first reply. Yes, people make mistakes in C. Obvious. None of this tells us anything about this specific web framework.
You dismissed it as "a terrible idea" based on the language alone. That's lazy analysis. Either review the actual code and find the bugs, or admit you're just cargo-culting the "C bad" narrative without looking at the implementation.
Have you actually examined this codebase or not?
Given that you don't disagree that C is an inherently riskier language to write in than memory-managed languages, I can only conclude that what you're opposed to is the fact that I'm saying the quiet parts out loud. I'm not saying the quiet parts for you; you know them. I'm saying them for the junior devs who stumble across this page of senior devs waxing poetic about how wonderful C is. Every dev should know how to write C, but it's equally important to know it's generally the wrong language to choose for a web server, which is what's being discussed here.
I'm sorry, but it's like scratching your left ear with your right hand. But for fun, yeah, there are worse things people do. Good luck and have fun. This is the part where most of us would probably get sarcastic, but it's certainly a good way to explore whatever others consider bullshit.
Edit: Please read the following comment. I would hire him/her, because I consider this a waste of OP's skills; he/she would be useful on many more projects.
TL;DR: it was not hate. I am sorry if it sounded that way.
I wish OP good luck. That was not sarcastic, I really do, and I would like to hire him/her for the skills. But for mankind, this project is almost useless… I apologize if this sounds harsh.
You have great potential if you can "see code" and have deep logical thinking. Not too many people have it.
Elon Musk once said that all those innovations are redeemed by the tremendous efforts of all the engineers. So I appreciate everyone who can build something.
I am not doing anything special, but I do inform our community ("mankind"), and have for 25 years… And I feel useful because I am good at it.
“Mankind” can be a group of other people.
Edit: What do people value the most? Compliments. So if you are useful and receive compliments, you will eventually be happy. But ofc you can be happy without being useful, for sure.
Fun fact: I've built something very much like this that powered a number of programs I sold over the years, and it was written when I wasn't nearly as good a programmer as I am now (take off 30 years of additional experience). If I look at OP's code there is a whole raft of nitpicks, but there isn't anything immediately and obviously wrong with it. Speaking just for myself, that is surprising, because most people's C code is - and I'm being generous here - absolutely terrible. This has potential, but I'd have to really dig in to see how solid it is, and I don't have time for that right now. Still, I've seen far worse code than this.
Least surprising thing you've said so far.
> but I do inform our community
Inform them what? "You there, this project you've worked on, learned from, gotten joy from, been complimented on by your peers...I, with my 25 years of peerless wisdom, find it useless and you should feel bad for doing it"?
Sure...you're a real people person.
My point is that life is too short to spend on things that are almost nonsense.
Jeez, you must be a real joy for the folks unfortunate enough to have to be around you. /s
- I've done CSS frameworks that replicate most of the parts of Bootstrap that I use.
- I've made client-side reactive web-components (kind of) that almost replaced the parts of react that I like.
- I've built bespoke HTTP servers countless times since the VB6 days.
- And I've written my own MVC engines probably a half dozen times, just to learn a new language or library.
All of that to say, it isn't web devs who are threatened, it is developers who don't want to learn the underlying technologies that power the libraries and frameworks they use.
I actually see no fault in being that way. I've known tons of decent-to-good developers who have no desire to understand HTTP or vanilla JavaScript, and they still do great work tying systems together. It's all about the kind of learner you are. Do you want depth, breadth, or a mixture of both (but always lacking in both - aka me)?
This was years ago (20 years ago?)
It's been a long time since I've used C, so maybe it's using some syntax that I'm unaware of?
I.e.: what defines "home", which is referenced as an argument to the "appRoute" function and then passed to the "get" function to set up the "/home" route? Is "home" defined in lavandula.h, or is this really pseudocode?
#define appRoute(name) HttpResponse name(AppContext ctx)
The 'appRoute' is a macro that expands to a function signature.
The macro is '#define appRoute(name) HttpResponse name(AppContext ctx)', and the parameter I passed as 'home' is expanded into the function name. The reason is that all controller function signatures are the same, so just writing 'appRoute' lets the developer save time writing endpoints!
It is a tradeoff between readability and development speed. And one of the ideas behind the framework is succinct and minimal code.
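For anyone skimming, here's roughly what that expansion looks like (the handler body below is hypothetical, not taken from the repo):

// Writing this with the macro:
appRoute(home) {
    // ...build and return an HttpResponse...
}

// after preprocessing, is the same as writing by hand:
HttpResponse home(AppContext ctx) {
    // ...build and return an HttpResponse...
}

// which is why the bare identifier 'home' can later be passed
// to a route-registration call as an ordinary function pointer.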
Makes sense, thanks!
- Web framework: inherently hard to maintain due to evolving communication standards. Check.
- AI-written code where nobody knows how/what/when/why!? Check.
- Written in C. Check.
bwahahahaha!
edit: semi-joking, as I actually like the simplicity of pure C. But the combination of AI-written, network-facing, and C makes me shudder.
Every agent I know of or use will always say they built "Production ready, secure, fast package for X" if you ask them to build that, but they rarely actually will. It takes enormous time and effort to actually do that, and any first iteration of "production ready" is definitely aspirational until it actually hits the real world and survives. I'm speaking from experience, fwiw.
Still, I'll probably continue learning Golang for most things, because that's where the money is (i.e. job offers), but I will create a hobby project based on your framework.
--- EDIT ---
> 5 hours ago
Ohh, it's fresh. I can almost smell the freshly baked buns in my mind.
I'd love to hear about your project when you get round to it.
The "contrarian dynamic" (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...) is the tendency for reflexively negative comments to show up early with shallow/generic objections to a submission, followed by a later wave of comments objecting to the objections and defending the submission.
The latter tend to get upvoted—rightly so, since they are more positive and usually more substantive. This puts the thread in the paradoxical-but-common state where the top comments are objecting to how prominent the bottom comments are! (Or, rather, were.) That's odd, but at least it's better than having the negative ones at the top.
In this case, these 5 comments all appear higher in the thread:
https://news.ycombinator.com/item?id=45528218
https://news.ycombinator.com/item?id=45527967
https://news.ycombinator.com/item?id=45527886
https://news.ycombinator.com/item?id=45527879
https://news.ycombinator.com/item?id=45527728
... than the negative(ish) ones that were posted earlier:
https://news.ycombinator.com/item?id=45527887
https://news.ycombinator.com/item?id=45527480
https://news.ycombinator.com/item?id=45527387
https://news.ycombinator.com/item?id=45527278
https://news.ycombinator.com/item?id=45527259
Some of those were only slightly negative and probably not meant that way, but yeah, the early impact of running into a bunch of these leads to a WTF feeling.
Ultimately I think this has to do with the reflexive/reflective distinction: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor.... That's probably the clearest way of describing the difference between the kind of comments we want on this site vs. the kind we don't want.
You're being too kind, but I suppose your work requires diplomacy. When multiple top-level comments basically say "I can't believe the amount of hate in this thread" without adding any other thought, the thread starts to feel like a simulacrum of a discussion, an absurd comedy of nonsense, a deeply layered shaggy dog story.
Come on, people. Want to counterbalance negative comments? Find something positive and interesting to say. Yes, this will require actual effort, as with many things in life. You can do it!
I feel like there is too much positive reinforcement for this project (see the comments saying how clean the code is, that this is how C should be written, etc.).
This project is an exceptionally poor example of well-written C. I know the author states that AI was used for the JSON bits, but I rather doubt it based on how over-engineered the env file parsing is.
When C is written in this way it becomes harder to spot bugs.
Mad props for building this. It's hard and it's fun!
As to other comments in the thread about the "why": why not. For the love of the craft.
I also love the BSD C CGI Postgres stack. I'm just a CRUD-monkey with mostly Python skills, so getting to explore low-level language and memory concepts is a lot of fun for me.
People will whine and moan about how this is not practical, but as embedded devices become more ubiquitous I think a clear value add may actually emerge.
I've been playing with the pico calc, and if I was building something as a "mobile app" for that I would much rather reach for C for my framework code.
Cheers, great work
I've also never seen tests written this way in C. Great work.
C was the first programming language I learned when I was still in middle/high school, raising the family PC out of the grave by installing free software - which I learned was mostly built in C. I never had many options for coursework in compsci until I was in college, where we did data structures and algorithms in C++, so I had a leg up as I'd already understood pointers. :-)
Happy to see C appreciated for what it is, a very clean and nice/simple language if you stay away from some of the nuts and bolts. Of course, the accessibility of the underlying nuts and bolts is one of the reasons for using C, so there's a balance.
Appreciate you saying that!
I'm busy writing some of the most optimized-but-still-portable code that I've ever written and it is very interesting to see how even a slight difference in how you express something can cause a massive difference in execution speed (especially, obviously, in inner loops). Your code is clearly written from what your comfort zone with C is and I'm really impressed by the restraint on display. At the same time, some of the code feels a bit repetitive and would benefit from more universal mechanisms. But that would require more effort and I'm not even sure if that is productive. One part where I see this is in the argument parsing code as well as in the way you handle strings, it is all coded very explicitly, which substantially increases the chance of making a mistake.
Another limitation is that using AI to help you write the code means you don't actually understand what it does. This in turn may expose you to side effects that you are not able to eliminate, because you did not consider them while writing. It is as if someone else gave you that code and asked you to trust that they did not make any mistakes.
OK, I hear this all the time. Are pointers really that hard for so many people to understand? I'm not trying to brag, but it took me maybe 15 minutes to grok them the first time I learned about them. I'm sure it took me longer to become proficient, but I don't get the legendary aura of difficulty that seems to surround their existence.
Also yes nice project.
Job application complete, project archived and abandoned in 3... 2... 1... :) I hope not.
I still feel like this argument could be transferred to nearly any concept in CS, though. Abstract enough anywhere and you will always start exceeding the brain's working memory.
We are just pretending, there is nothing to understand?
They aren't even numbers. They're voltage-high and voltage-low signals.
Numbers don't even exist! You'll never find a 2 in nature. You'll find two things, but you'll never find the 2 itself.
And all 2s are the same 2. But every voltage signal representing a 2 is a completely different voltage signal. Sometimes they aren't even voltage signals! Sometimes they're magnetic flux signals! Sometimes they're electrical field signals! Sometimes they're photons modulated down a wire made of glass!
But the 2 they represent? Not even that is 2. It's 10!
Like we pretend it is high for convenience while we really mean higher. For all practical purposes our imaginary world works! hurray!
Yes, anyone who has taken an algorithms and data structures class in C knows that some people just don't get it.
Also, the way people teach it tends to be bad; before teaching pointers, you need to teach the stack and the heap at a conceptual level.
Apparently they are; I believe it's the indirection that gets people.
Most learners aren't really taught basics properly - they learn that a variable "contains" a value, when instead they should learn that all values have a type, and some variables hold values.
> I'm not trying to brag it took me I think like 15 minutes to grok them from learning about them the first time.
I can't remember not knowing pointers, so I can't really tell you how long it took to click, but I do know that I had done a non-trivial amount of assembly before I used C, so maybe that helped.
It seems a lot of people assume that pointers don't actually consume any memory and then get confused trying to understand it that way.
I came at C after doing 6502 and 8086 assembler. Pointers just made sense because working with indirect addressing and understanding how things were stored in memory already made sense.
What does academia believe, then? I don't know what the negation of "If you know something you can teach it" is.
That if you don't know something, then you can't teach it?
That if you know something you can't teach it?
That if you don't know something you can teach it?
It's obviously "If you don't know something, you can't teach it."
Now dependency injection, that's some magical bullshit right there.
That's all there is to it.
You can do DI in your own startup code and have some logic in there that substitutes mocks when running under test. Or you could change the logging when debug is enabled. Hardly rocket science. If you can write code, you can write the startup code.
If your team likes patterns, don't mention dependency injection unless you're confident it won't get replaced with the framework of the day.
See https://www.jamesshore.com/v2/blog/2023/the-problem-with-dep...
Frameworks turn your DI code into highly complicated configuration. The end result is a giant lump of code whose only achievement is hiding the new operator and possibly also someone's job security.
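For the record, hand-rolled DI doesn't need a framework in any language. A minimal sketch in C (all names here are hypothetical):

#include <stdio.h>

// The dependency: anything that can log.
typedef struct {
    void (*log)(const char *msg);
} Logger;

void realLog(const char *msg) { printf("[app] %s\n", msg); }
void nullLog(const char *msg) { (void)msg; }  // silent mock for tests

// The component receives its dependency instead of constructing it.
void handleRequest(Logger *logger) {
    logger->log("handling request");
}

int main(void) {
    Logger logger = { realLog };  // the "wiring": swap in { nullLog } under test
    handleRequest(&logger);
    return 0;
}

That's the whole trick: the object graph is built in startup code you can read and debug.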
> Now dependency injection, that's some magical bullshit right there.
I see you there! Joking aside, I also struggled a lot with DI when I first saw it in Java. The ridiculous frameworks that hid all of the details drove me crazy. Even Google Guice was supposed to be more clear, but it was never as clear as... Eventually, I settled on hand-writing the wiring in one giant 1,000+ line function that builds the entire object graph on start-up. Then I could really understand DI, because you could actually read (and debug) the code doing the "wiring" (building the object graph).

I think a lot of noobs learning C struggle with pointers especially because there are no good error messages besides "segmentation fault" :D
The * vs & always gets me, not to mention when I have to deal with pointer math.
int * p;
int *p;
int* p;
Now remember that the type is a memory address. I'm sure that's semantically wrong for whatever reason somebody will explain, but it helps to think about it that way. So you can do:

int my_number = 6;
int* p = &my_number;

Both sides of the "=" are the same type (int* is an address, and &my_number is also an address: the address of my_number). Now p is a pointer (or an int*, or an address), and *p is... an int! So this is totally valid:

printf("%d\n", *p);

And for anything you allocate on the heap you need malloc, so you will see a lot of:

my_struct* s = malloc(sizeof(my_struct));

which makes sense, because malloc returns an address (the beginning address of the content of s; yet again somebody will tell me I'm wrong to call it "the content of s", so sorry for that).

my_struct* // is the type of s, it is an address
my_struct  // is the type of *s (some custom type of size sizeof(my_struct))
I don't like that syntax, because it confuses people. It might be sensible to think of the type as (int *), but C just doesn't work that way. You might never declare more than a single variable in a statement, but it still gives people the wrong intuition.
I very much prefer that syntax, because the '*' is part of the type, not part of the variable name.
> You might never declare more than a single variable in a statement
int a, *b, c, *d;
Yes, you can do that, and in fact if you want to declare multiple pointers on the same line, you are required to put a '*' in front of every pointer variable name. Personally, I've always considered this to be a defect of the language. Would it really be so awful to have to write instead:
// Both are of type 'int'. Pretty straightforward.
int a, c;
// In my fantasy world, both of these would be of type 'pointer to int',
// but IRL b is a pointer and d is an int. fun!
int* b, d;
But of course that's not the language we have. I'd be very curious to know the motivation for requiring a '*' in front of each pointer variable name, as opposed to once with the type. So far the only ones I've thought of (not mutually exclusive) are:
a) we REALLY love terseness, and
b) ooh look at this clever hack I came up with for variable declaration syntax.
I don't really know C, but I personally prefer your version. However, I can also get behind considering the * to be part of the variable rather than the type: "var is a pointer that happens to hold an int". I mean, maybe they could have used the & operator, meaning "var is an address holding an int"? Honestly, it just feels like there's too much tradition and legacy and so on, and what one considers intuitive is the thing they're most used to.
I agree with you that this is an obvious mental model, and it might be true for other languages, but this isn't the model that the C language has, which is revealed by the fact that:
int* a, b;
does not declare two pointers. You can see it like this: C does not have a way to declare a variable to be a pointer to int. In C you can only declare that an expression involving the variable has the type int.
That's why I don't like this syntax, especially for beginners. It is deceiving. It leads to people thinking it works differently than it really does, and to coming up with weird mental models. For example, that the unary '*' operator has two meanings: dereference and declaring a pointer. Then they say a pointer should better be denoted by '&', because that's the address-of operator. But that's wrong: unary '*' always means dereference. You don't declare 'a' to have type 'int *', you declare '*a' to have the type 'int'!
It's the same with array syntax (and also with function pointers and really every declaration):
Java: int[size] a;
C: int a[size];
I agree that it doesn't cause problems for people who are experienced in the declaration rules of C, and it might never cause confusion if you never declare multiple variables on a line (I never do, because of diffability), but when teaching, it leads to the wrong user model. Show 'int* a, b;' and people are confused; show 'int *a, b;' and it is obvious how it works.

I still don't understand this decision. I think it should've been like int^ p = &i; ... or ... int i = *p;
Everything clicked, ironically, when I went even deeper and studied assembly language. Then following pointers to data vs. just reading pointers becomes very clear and explicit.
Variable declaration `T v;` means "declare `v` such that the expression `v` has type `T`". Variable declaration `T *p;` means "declare `p` such that the expression `*p` has type `T`". Etc.
This is the most confusing concept of pointers. I feel this could have been easily avoided with a different character, like ~ or ^.
float * (*foo(int *)) (int);
foo is something that can be called with an 'int *', which results in a pointer to something that can be called with an 'int', which results in something that can be dereferenced, which is a float.

Then, when you start using pointers, it makes sense. If a variable is a pointer, that means it's a memory location. *variable is the way to get at that data. Then arrays are just a wrapper around pointer arithmetic.
Whereas with CS, you learn about variables first, which is an abstraction on top of memory, and pointers don't make sense in this regard.
This is why EE/ECE grads are often much better developers than CS grads: once you understand the fundamentals, everything built on top of them makes sense.
This is largely not the case in my experience. They probably understand the lower level details of manipulating memory better, but there's a lot more to developing software than understanding that memory is a specific place.
Hah, like fuck they are. The worst code I regularly have to review is written by EE grads. They have less of an understanding of pointers than the people in the office with a CS background.
Can't agree with this enough. The moment I finally understood what pointers are was when I landed an embedded job, and during a debugging session I looked at a memory browser that showed my variable at an exact address in memory. After that, everything about pointer arithmetic and even function pointers became clear as day. Something at least 3 teachers weren't able to explain clearly enough.
The problem arises when you start to mix memory management with more complex structures.
It’s extremely easy to make mistakes, and you must be very careful about ownership and clean up. Those things are not strictly related to pointers, but in C, it’s inevitable to use pointers to handle them. That's why people say pointers are hard.
When I first started learning to program, it was in C, with one of those "Sam's Learn Yerself a C++ in Eleventy-Three Days" books. I was, like, 15 or something. This was long enough ago and my family was just barely poor enough that we didn't even have the Internet yet.
The book explained memory. That was not hard to understand. But we had been using stack-allocated variables through several chapters of the book so far. I didn't get why you would ever want anything as a pointer. If I wanted to write a program to add 3 and 5, why wouldn't I just say "int x = 3;"? Why bother with this stupid dereferencing syntax? Given that the author chose to explain pointers by first explaining the address-of operator on stack-allocated variables, it felt particularly perverse. The regular ol' variables were right there! Why not just use them?
I didn't have a concept yet of what one could even do with programming. Hell, just a few years prior to that point, I was pretty sure all of the other programs on my computer were written by massive teams of wizards manually typing out 1s and 0s.
I still managed to bungle on and write code. But my programs in C never really worked well. Though, they still worked better than my classmates' versions! Then, in my 2nd year of college, I transferred universities and the new place taught in Java.
Java was disgusting. It was so verbose. Why did we need all these weird, multisyllabic, period-infested function calls to do anything? Why was it so slow? Why couldn't I write ASCII-art graphics with it? Why couldn't I run my programs on my friend's computer?
It wasn't until I had taken computer architecture that I gained a much better understanding of what any of all these computer things were meant to do. I ended up implementing a basic scripting language in Java. And then, suddenly, I understood pointers.
Yes. Especially pointer to pointer to ...
The big problem is that arrays are conflated with pointers because C doesn't do slices. If C had slices, people would naturally avoid pointers except in the cases where they were genuinely necessary. That would make teaching pointers vastly easier.
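For what it's worth, nothing stops you from building a slice out of a struct today; a minimal sketch (type and function names made up):

#include <stddef.h>

// A slice: pointer and length travel together, so callers
// never juggle a bare pointer plus a separate count.
typedef struct {
    int *ptr;
    size_t len;
} IntSlice;

int sliceSum(IntSlice s) {
    int total = 0;
    for (size_t i = 0; i < s.len; i++)
        total += s.ptr[i];
    return total;
}

// Usage: int xs[] = {1, 2, 3}; IntSlice s = { xs, 3 }; sliceSum(s) == 6

The language just doesn't push you toward it, which is the parent's point.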
http.c around line 398, that looks wrong.
How would it be slower? Isn't it simply bumping the stack pointer?
Using alloca will result in stack overflows if you use more than a couple megabytes, so it isn't a very good idea.
I will add it to the backlog of things to do :)
This project is an awful example of how to write C. No checking of return values, leaking memory with realloc, over-engineered parsing (what should be 8 lines is +200).
I can understand it as a learning project, and even if it wasn't, I can sorta understand that sometimes bugs creep in ("oops, forgot to use a tmp variable for realloc in one out of 10 places") but this is not what is happening: This is not how you write C!
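For readers who haven't seen the realloc pitfall being referenced, the classic pattern looks like this (variable names illustrative):

// Buggy: if realloc fails it returns NULL, the assignment
// clobbers 'buf', and the original allocation is leaked.
buf = realloc(buf, newSize);

// Correct: assign to a temporary and check before overwriting.
char *tmp = realloc(buf, newSize);
if (tmp == NULL) {
    free(buf);  // the old block is still valid and still ours to free
    // ...handle the allocation failure...
} else {
    buf = tmp;
}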
Good job OP. Now if you can add HTML templating, this may become a complete framework :)
Yes it's on the backlog and will be fun to implement :)
As an aside, I don't see any support for parallelization. That's fine for an initial implementation, but web servers do benefit from threading off requests. If you go that route (pun intended) you might consider using something like libuv [2].
[1] https://github.com/ashtonjamesd/lavandula/blob/51d86a284dc7d...
I did intend to implement parallelization as a later feature so it's good to bring it up.
I had such a bad experience with GWT back in the Java days of my life that I've steered clear of any "server" language for web frameworks since. I'd love for that to change though. I definitely will be trying this out.
Thanks for sharing, this looks amazing
I’ve been building out my C standard library replacement in earnest for a little while. If you like this framework, check it out.
Thank you, I will implement that :)
- dropping the prefix "test_"
- substituting the "_" characters in the function name with whitespace
- uppercasing the first letter of each word
So `test_tokenize_simple_model` becomes "Tokenize Simple Model".
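A sketch of that transformation in C, in case it's useful (the function name is made up; the caller must size `out` appropriately):

#include <ctype.h>
#include <string.h>

// "test_tokenize_simple_model" -> "Tokenize Simple Model"
void prettyTestName(const char *name, char *out) {
    if (strncmp(name, "test_", 5) == 0) name += 5;  // drop the prefix
    int newWord = 1;
    for (; *name; name++) {
        if (*name == '_') {
            *out++ = ' ';  // underscores become spaces
            newWord = 1;
        } else {
            *out++ = newWord ? toupper((unsigned char)*name) : *name;
            newWord = 0;
        }
    }
    *out = '\0';
}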
Your work is a nice reference, it is neat to see someone else working in this space!
The translation should be much faster while giving a lot of the efficiency benefits of C (but by no means all).
I understand other auth schemes are more complicated, and maybe there's no desire to pull in big libraries. It's just that if there's no TLS or proper auth, you can also just skip basic auth. Its only use would be to trick someone who's not familiar (unlikely with such a repo, but not impossible) into a false sense of security.
ofc, not really an issue with the code, and it's an excellent base to look into how this stuff works and, if you want, to expand upon, since it's pretty clean and easy to read. well done! love ppl churning out good ol' C projects. respect!
Maybe to have some "basic" auth for an embedded device web interface or something like that? I suppose it's better than nothing. I've got devices which prompt for a username and password with no TLS either.
Yeah, I know those languages have the frameworks, but nothing really beats the understanding you get from building something like this from the ground up on your own.
https://github.com/ashtonjamesd/lavandula/blob/2dbefe6da16bf... - is it intended?
https://github.com/ashtonjamesd/lavandula/blob/2dbefe6da16bf... - pain....
That honestly sounds like it would be an amazing book: just walking through all the ways it's horrible, chapter by chapter, and how to structure the code instead, slowly. Like an accelerated history of how such a mature HTTP library comes to be.
Additionally, the .env file parser is quite clean.
https://github.com/ashtonjamesd/lavandula/blob/main/src/dote...
However, it doesn't seem that the parser supports comments. I guess a "good first issue" for anyone interested in contributing would be extending the `skipWhitespace` function to detect `#` tokens and skip the rest of the line when present.
Would also need to handle edge cases like env vars having values containing `#` tokens inside (but these will be quoted, so it's probably not too tricky to handle.)
So then you need to implement escaping, which can go from a very simple implementation to an actual lookahead parser.
EDIT:
Actually, I agree: this parser is already very overbuilt and should be able to handle comments. Generally an env parser is a few lines at best… you read a line and look for the first instance of the separator; there's generally no reason to build a full parser for that. env is an absurdly simple format if you don't want features like comments.
Hell even an ini format parser is simple to implement in the same style.
I didn't find it clean; it's so over-engineered that you won't easily be able to spot bugs in it.
What you want is (assuming you have a string-trimming function):
while (fgets (name, sizeof name, inputf)) {
if (!(value = strchr (name, '='))) { continue; }
*value++ = 0;
strtrim(name);
strtrim(value);
if (!*name) { continue; }
// Now store `name` and `value`
}
> I guess a "good first issue" for anyone interested in contributing would be extending the `skipWhitespace` function to detect `#` tokens and skip the rest of the line when present.

Or do it before processing:
// First statement of while loop
char *comment = strchr (name, '#');
if (comment) *comment = 0;
// Continue with processing `name`
The way it's done in the linked code raises a ton of red flags.

A couple of notes: you'll want to use non-blocking I/O and an event loop to prevent one slow client from locking up the whole server. You should also check for partial read and write calls, so that if a client sends a couple of bytes at a time, you can buffer up their full request and still respond to it. A fixed-size buffer for requests isn't ideal either, since POST requests can easily blow through your 4096-byte buffer.
You might also want to look into using an AF_INET6 socket. You can still accept IPv4 connections, but you'll also gain IPv6 basically for free, and in 2025, you really should support IPv6.
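To make the partial-read point concrete, a minimal sketch (assuming a blocking socket 'fd', and only detecting the header block; real code also needs Content-Length handling):

#include <string.h>
#include <unistd.h>

// A single read() may return only part of the request, so loop
// until the header terminator shows up or the buffer fills.
char buf[8192];
size_t used = 0;
while (used < sizeof buf - 1) {
    ssize_t n = read(fd, buf + used, sizeof buf - 1 - used);
    if (n <= 0) break;  // error, or the client closed the connection
    used += (size_t)n;
    buf[used] = '\0';
    if (strstr(buf, "\r\n\r\n")) break;  // full header block received
}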
I think the appRoute macro obfuscates the types and signatures, and introduces some unnecessary indirection. I would get rid of it.
Related: the AppContext type could be renamed RequestContext or ControllerContext or something, as it's App + HTTP Request + DB and not just the App.
Otherwise, I agree with other commenters that this is some of the cleanest C code I’ve seen in a while! Great effort!
Hope hardware vendors adopt it so their management web pages are less ass than they currently are.
Is it a one-thread-per-request model?
Does it support async await type of architecture?
- check the return value from malloc();
- consider using your own arena allocator (one that grabs a larger block of memory with a one-time call to malloc, then hands out parts of that block in-process; see the sketch after this list);
- use a library prefix, e.g. Lavandula_, before API functions like get() or runApp() to avoid name collisions;
- the JSON function is not spec-compliant; why not use an existing library? (I understand external dependencies may introduce unwanted bloat, but in this case there are many compact and efficient options.)
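On the arena suggestion, a minimal sketch of what that can look like (all names hypothetical; fixed capacity, no individual frees):

#include <stdlib.h>
#include <stddef.h>

typedef struct {
    char *base;
    size_t cap, used;
} Arena;

int arenaInit(Arena *a, size_t cap) {
    a->base = malloc(cap);  // the one up-front malloc; check its result
    if (a->base == NULL) return 0;
    a->cap = cap;
    a->used = 0;
    return 1;
}

void *arenaAlloc(Arena *a, size_t n) {
    n = (n + 15) & ~(size_t)15;  // keep returned pointers 16-byte aligned
    if (a->used + n > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

// Reset between requests: frees everything at once, no leaks to chase.
void arenaReset(Arena *a) { a->used = 0; }

A per-request arena fits a web framework well, since everything allocated while handling a request can be dropped in one go.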
Great project. I remember using Mongoose a while back; that's also written in C. Personally, the more library-independent and self-sufficient it is, the more likely I am to use it. Like, if you can, even avoid using the stdlib! Even though that sounds crazy (but a server in C is a bit crazy anyway?). The more standalone it is, the more transformable and embeddable it can be.