If you're new to Rust and you want to "just make a web app", a first look at the async Rust landscape can be a turnoff. I speak from experience, having started a couple of Rust projects in Python/C++ teams. After writing Rust for 3+ years I can navigate async concepts without trouble, but for someone coming from the usual popular languages (Python/C#/Java/C++), there are simply too many new things to learn when jumping straight into an async Rust codebase.
IMO this framework is going in a good direction, assuming that it will only be used for small/educational projects.
As for the async Rust landscape, things are improving every year; IMO we're around 5-10 years away from tooling that will feel intuitive to complete newcomers.
use feather::{App, AppContext, MiddlewareResult, Request, Response};

fn main() {
    let mut app = App::new();
    app.get("/", |_req: &mut Request, res: &mut Response, _ctx: &mut AppContext| {
        res.send_text("Hello, world!");
        MiddlewareResult::Next
    });
    app.listen("127.0.0.1:3000");
}
Given my lack of experience, I'm sure it's needed; it's just unclear to me what purpose it would serve in a server app. I'm mostly naive to Rust and web server frameworks, so this is a naive thought that may be completely contraindicated by other issues, but I don't expect to see meaningless(?)/repetitive code in the advertisement for a framework that bills itself as lightweight.
Most web frameworks in Rust don't make the response a side effect; they make it the handler's return type, since that's better devex and much less boilerplate.
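For contrast, here's a rough sketch of the return-value style in a framework like axum (assuming roughly the axum 0.7 API with tokio; exact signatures vary by version):

use axum::{routing::get, Router};

// The handler just returns a value; the framework converts it into a response.
async fn hello() -> &'static str {
    "Hello, world!"
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/", get(hello));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}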
$ cat src/main.rs
use feather::{App, AppContext, MiddlewareResult, Request, Response};
use std::{thread, time};

fn main() {
    let mut app = App::new();
    app.get(
        "/",
        |_req: &mut Request, res: &mut Response, _ctx: &mut AppContext| {
            res.send_text("Hello, world!\n");
            thread::sleep(time::Duration::from_secs(2));
            MiddlewareResult::Next
        },
    );
    app.listen("127.0.0.1:3000");
}
$ cargo run -q &
[1] 119407
Feather Listening on : http://127.0.0.1:3000
$ curl localhost:3000 & curl localhost:3000 & time wait -n && time wait -n
[2] 119435
[3] 119436
Hello, world!
[2]- Done curl localhost:3000
real 2.008s
Hello, world!
[3]+ Done curl localhost:3000
real 2.001s
That is: when the request handler takes 2 seconds and you fire two requests simultaneously, one of them returns in 2 seconds, but the other one takes 4 seconds, because it has to wait for the first request to finish before it can begin.

It feels like this has to be the behavior given the API: if two threads ran `ctx.get_mut_state::<T>()` to get a `&mut T` reference to the same state value, only one of those references would be allowed to be live at once.
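A toy illustration of that aliasing rule (plain Rust, nothing Feather-specific): the compiler rejects a second live `&mut` borrow of the same value.

fn main() {
    let mut state = 0u32;
    let first = &mut state; // first exclusive borrow
    // let second = &mut state; // error[E0499]: cannot borrow `state` as mutable
    //                          // more than once at a time
    *first += 1;
    println!("{state}");
}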
It doesn't quite seem fair to call this "designed for Rust’s performance and safety". One of the main goals of Rust is to facilitate safe concurrency. But this library just throws any hope of concurrency away altogether.
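For what it's worth, the usual way Rust code keeps shared mutable state without giving up concurrency is to put it behind synchronization, so each handler locks it only briefly instead of holding a `&mut` for the whole request. A toy sketch using only the standard library (not Feather's API):

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: each thread locks it only for the moment it mutates.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}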
Feather seems fundamentally single-threaded and requires more boilerplate for something pretty simple, so I'm not sure the claim about developer experience holds up to scrutiny here either.
[0]: https://rocket.rs/guide/v0.5/upgrading/#stable-and-async-sup...
Not all async Rust web frameworks let you do away with async and futures entirely in your business logic.
If you're averse to touching async fn's or tokio APIs _at all_, it's nice devex.
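That said, even in an async framework you can usually keep the business logic itself synchronous and confine the async to a thin shim. A hedged sketch, again assuming something like axum + tokio (the name `expensive_report` is made up for illustration):

use axum::{routing::get, Router};

// Plain synchronous business logic: no async, no futures.
fn expensive_report() -> String {
    // imagine blocking work here (DB driver, file IO, CPU crunching)
    "report".to_string()
}

// Thin async shim that hands the blocking work to tokio's blocking pool.
async fn report_handler() -> String {
    tokio::task::spawn_blocking(expensive_report)
        .await
        .expect("blocking task panicked")
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/report", get(report_handler));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}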
What does "pure raw metal" performance mean? Go has a garbage collector, which I usually hear blamed for GC pauses that hurt performance compared to C/C++/Rust.
It means exactly what it means. If I get a pure bare metal server, will that computer simply handle more requests than a Go or a Node server (assuming the same single-threaded paradigm)? That's the only reason I'd ever consider moving away from the ergonomics of something like Node or Python: if my bare metal server can save me money by simply handling more requests with less CPU/memory.
Edit:
Thanks for that link though, just got turned onto this:
https://github.com/uNetworking/uWebSockets/blob/master/misc/...
… but what does the "bare metal server" have to do with it? Presumably, Occam's Razor would suggest that a Rust framework that outperforms Go on a VM would likely continue to outperform it on bare metal. The bare metal machine might outperform a VM, but these are mostly two orthogonal, unrelated axes: bare metal vs. VM, and a Rust framework vs. a Go or Node framework…
It’s for the same reason that some people are leaving Rust behind in game development once the initial excitement fades and the problems start.
Now, I do understand that there are cases where this might be viable (for example, if you already have a team of experienced Rust developers), but I think in the majority of cases you would not want to use Rust for web development.