Minecraft is famously under-optimized and needy in terms of CPU frequency. If you're running a vanilla (no server mods) setup, something optimized like PaperMC is a better idea for datacenter VMs. (Until you need to dupe sand or something.)
The other route is installing a bunch of optimization mods - some really do help.
However, they may be a problem if players are sensitive to possible non-vanilla behaviour (as you mentioned, and it's not limited to cheaty duping). Thankfully, spinning up a server with a selection of performance mods is very easy these days. Various tricks like pre-generating chunks also help.
Paper is good enough for anyone but very technical players pushing the limits of redstone tick timing, entity behavior, chunk loading mechanics, etc. Those edge cases don't matter even for advanced players doing normal things.
No C# in Bedrock. No Java unless you're talking about the Android versions. Very little C.
It's mostly C++.
strogonoff•10h ago
Promoting a telemetry solution for a hobby server, which you host for yourself and which can't bankrupt you by running up a massive AWS bill, doesn't seem to make much sense when simply bottling it up in Docker and being able to restart or recreate it at will is enough (mount volumes for logs and persistent data, back them up, and you're good).
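For illustration, a minimal sketch of that approach (the image name and container paths are placeholders; adjust for whatever image you actually use):

    # auto-restart on crash or reboot, keep world data and logs on the host
    docker run -d --name minecraft \
      --restart unless-stopped \
      -p 25565:25565 \
      -v /srv/minecraft/data:/data \
      -v /srv/minecraft/logs:/logs \
      some-minecraft-server-image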
With games like Minecraft in particular there's value in being able to have multiple servers with different worlds, perhaps different mods, etc. If you decide not to have more servers because they are snowflakes you do not have time to set up monitoring for, then you rob yourself and your players of the opportunity to have more fun.
Furthermore, containerizing it allows you to upgrade quickly as new game versions come out, by simply spinning up a new container with your pre-existing world as a test, and you get basic system resource usage monitoring built in.
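Testing a new game version against a copy of the world is roughly this (again, paths, port, and image tag are made up for illustration):

    # clone the world, then run the new version against the copy on a spare port
    cp -r /srv/minecraft/data /srv/minecraft/data-test
    docker run --rm -p 25566:25565 \
      -v /srv/minecraft/data-test:/data \
      some-minecraft-server-image:new-version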
What I think could be a more interesting exercise is a dashboard for friends or family that lets them manage the lifetime and configuration of their respective containers.
dpe82•9h ago
strogonoff•9h ago
The goal of my comment was to highlight opportunities for more fun and less what seems like toil.
Furthermore, this is an article about a telemetry solution posted on a site of that telemetry solution. They make money from this.
dewey•7h ago
strogonoff•7h ago
dewey•7h ago
strogonoff•6h ago
> So, the Minecraft server should work reliably and, if it goes down, I should know well before they do
How are metrics helpful? There is so much fun that could be had in setting up an actually resilient system instead.
Why worry over metrics and alerts when you could orchestrate an infrastructure that grants you the superpower of being able to spin up a server with a copy of the world on a whim instead (or even a system that auto-starts one whenever there is demand)?
dewey•6h ago
As you said, "There is so much fun that could be had in setting up an actually resilient system instead." Maybe the author simply has more fun setting up alerts and metrics, the way you have fun building a resilient system?
The truth is that in most real-world scenarios getting alerts and metrics is much more important than building a fully resilient system (expensive, maybe overengineering for an early stage, etc.).
> However, the author is not really procrastinating—he gets paid for this. As the first sentence in the blog post says "One of the secret pleasures of life is to be paid for things you would do for free.", which I can very much understand as I often work or play with things I could use at work in my free time.
mmanciop•4h ago
Adding backups for the world files, on top of Systemd already bringing back a crashing server, makes the setup rather resilient. Sure, there are infinitely more things that can go wrong, but with swiftly decreasing likelihood.
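For anyone curious, the auto-restart part really is just a few lines of a Systemd unit; a rough sketch, with the paths, user, and JVM flags as placeholders:

    [Unit]
    Description=Minecraft server
    After=network-online.target

    [Service]
    User=minecraft
    WorkingDirectory=/srv/minecraft
    ExecStart=/usr/bin/java -Xmx4G -jar server.jar nogui
    # bring the server back automatically if the process dies
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target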
> The truth is that in most real-world scenarios getting alerts, metrics is much more important than building a fully resilient system (Expensive, maybe overengieering for early stage etc.).
This, very much this.
> However, the author is not really procrastinating—he gets paid for this. As the first sentence in the blog post says "One of the secret pleasures of life is to be paid for things you would do for free.", which I can very much understand as I often work or play with things I could use at work in my free time.
Yes :-)
strogonoff•2h ago
Funny, because I have the opposite opinion. Build for failure first; if it's critical/production, then also monitor. But if an earthquake takes down an EC2 availability zone and you have no ability to spin it up exactly the way it was, then the avalanche of alerts and metrics falling off a cliff[0] isn't exactly going to help you (or your mental well-being).
Generally speaking, if you build for failure first, then monitoring becomes much more useful and actionable; and simultaneously it becomes much less important for a hobby project.
[0] That's assuming you gather them from a different zone that wasn't affected by the same downtime in the first place; speaking of which, how are you monitoring your monitors? And so on.
dewey•1h ago
Of course for engineers that's a nice challenge, but that's also why engineers without business sense have a hard time building their own companies: prioritizing perfect code and overengineered infrastructure over talking to customers or building the business.
strogonoff•1h ago
mmanciop•4h ago
Metrics are a means to the end of alerting. And by alerting I mean getting pinged on my phone when something important breaks. Like, you know, the server going down.
> Why worry over metrics and alerts when you could orchestrate an infrastructure that grants you the superpower of being able to spin up a server with a copy of the world on a whim instead (or even a system that auto-starts one whenever there is demand)?
As somebody who has run cloud and enterprise software for almost two decades now, I can bet that needs monitoring too. The more moving parts there are, the more things go wrong. The more things go wrong, and the more you care that they get fixed, the more monitoring you need :-)
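To make the "getting pinged when the server goes down" part concrete: in a plain Prometheus-style setup it boils down to a single alerting rule along these lines (the job name and threshold are made up, and this is not necessarily the exact stack the post uses):

    groups:
      - name: minecraft
        rules:
          - alert: MinecraftServerDown
            expr: up{job="minecraft"} == 0
            for: 2m
            labels:
              severity: page
            annotations:
              summary: "Minecraft server target is not responding to scrapes"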
strogonoff•1h ago
> As somebody who has run cloud and enterprise software for almost two decades now, I can bet that needs monitoring too
To be clear, I strongly believe that if you run anything seriously in production, you must monitor it—but first you need to be able to spin it back up with minimal effort. It may take a while to get there if you just inherited a rusty legacy snowflake monolith that no one dares to breathe the wrong way near, but if you are starting anew it is a bad mistake to not have that down first considering how straightforward it is nowadays.
Then, for hobby projects of low criticality (because people in this thread mistakenly assume I mean any personal project, I have to reiterate: nothing controlling points of ingress into your house or the like), you may find that once you can recreate the thing at will, the monitoring becomes optional and not really that interesting anymore.
mmanciop•4h ago
I am also a massive observability nerd, so YMMV :-)
strogonoff•1h ago
mmanciop•59m ago
ajmurmann•9h ago
Am I misguided?
strogonoff•9h ago
jauntywundrkind•9h ago
strogonoff•9h ago
jauntywundrkind•8h ago
Limiting yourself to only naive senses is a wild proposition to me. The scientific mindset compels us to see further: it is a wild privilege to see more, to build and use tools that expand how we can see.
strogonoff•7h ago
Furthermore, to me useless or excessive data is very much a reality (if you do not agree that it is a possibility and a thing that happens, we clearly have no way of understanding each other), and by my criteria this scenario produces exactly that sort of data.
mmanciop•3h ago
About excessive telemetry: that depends on what you want to achieve. Using facilities in the OpenTelemetry Collector like [2], you can easily drop all the telemetry you have no use for (a rough sketch of such a config follows the links below). At the risk of tooting my own horn, we actually provide super easy ways of doing the same dropping at no charge whatsoever to the end user in Dash0 [3].
[1] https://thenewstack.io/is-otel-the-last-observability-agent-...
[2] https://github.com/open-telemetry/opentelemetry-collector-co...
[3] https://www.dash0.com/changelog/spam-filters
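As mentioned, a rough sketch of dropping unwanted metrics in the Collector; the filter processor itself is real, but the metric names here are made up and the exact configuration syntax depends on your Collector version:

    processors:
      filter/drop-noise:
        metrics:
          exclude:
            match_type: regexp
            metric_names:
              # example: drop JVM buffer metrics you never look at
              - jvm\.buffer\..*
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: [filter/drop-noise]
          exporters: [otlphttp]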
ajmurmann•9h ago
Also, I've had phone tech support sessions with family that were more stressful than calls with large banks who were worried about losing very large amounts of money in case of an outage. Different stressful, but nonetheless...
strogonoff•9h ago
Telemetry does not address this, though. Shoving it into a container and assigning it a simple “restart if down” rule does. Minecraft is a flaky beast if you run snapshots and/or mods; metrics or not, often “start again” is all you need.
Furthermore, this is a game that adds new gameplay features multiple times per month. If you do not update it frequently and your kid misses out on a new mob, you run into the same stress. Containerizing it makes the upgrade very straightforward, and once you run a couple of containerized instances… Do you not struggle to see the value of detailed system monitoring?
mmanciop•3h ago
A Systemd unit as shown in [1] does it too, without containers and with fewer moving parts. I use containers every day at $work; I have been using containers since before Docker was a thing. In this case they're entirely overkill: Systemd units already use the important bits, like cgroups.
For the upgrade: it depends. You do need a container image regardless, and I have not seen official ones. Upgrading the server in Minecraft requires upgrading clients to match, and my kids prefer playing over upgrading. (Unless a biome is released. Then it must be immediately available to them.) But then again, I just need to download the binary with a cURL call. And if the configurations change, Docker won't help me there one bit anyhow.
[1] https://github.com/dash0hq/minecraft-server/blob/main/drople...
strogonoff•1h ago
I found that the vanilla server is insufficient, and the ability to declaratively define mods, the seed, OP players, etc. through the container environment is very important for iterative evolution; but of course this is individual.
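For context, this is the kind of declarative setup I mean. The example assumes the widely used community itzg/minecraft-server image (not an official one), and the seed and OP names are made up; mods/plugins can be declared through further environment variables in the same spirit:

    # everything that defines the server lives in the run command / compose file
    docker run -d --name mc-modded \
      --restart unless-stopped \
      -p 25565:25565 \
      -v /srv/mc-modded:/data \
      -e EULA=TRUE \
      -e TYPE=PAPER \
      -e SEED=-123456789 \
      -e OPS=player_one,player_two \
      itzg/minecraft-server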
mmanciop•3h ago
My personal definition of a nanosecond is the time passing between the Minecraft server having a hiccup and the first scream piercing the air.
The printer not printing is DEFCON 5 material.
koinedad•9h ago
strogonoff•9h ago
For a game, a solution that simply restarts the container if it's down solves the issue. You can mount game logs in a volume if you want, and you can see resource usage in the container host's dashboard. What value do detailed system metrics bring?
Furthermore, you don’t care what software you run to make your garage door system Siri-enabled, as long as it does its job and is not vulnerable; whereas with a game that adds new gameplay features multiple times per month, you do want to update it frequently. Babysitting a snowflake server makes it way more difficult than it should be.
gmuslera•9h ago
In any case, the fun starts when the system has more interdependent components.
strogonoff•9h ago
mmanciop•4h ago
harrall•5h ago
I have Dockerfiles from 10 years ago for Grafana and a time-series DB so basically you learn it once and you can bang out basic telemetry infra in an hour afterwards.
And I still actually use InfluxDB and Grafana for my hobby stuff. My current Dockerfiles just look like my old ones…
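Along those lines, the whole "basic telemetry infra" really is only a handful of lines of Compose; a sketch using the official images (tags, ports, and data paths are illustrative):

    services:
      influxdb:
        image: influxdb:2.7
        ports:
          - "8086:8086"
        volumes:
          - influxdb-data:/var/lib/influxdb2
      grafana:
        image: grafana/grafana:latest
        ports:
          - "3000:3000"
        volumes:
          - grafana-data:/var/lib/grafana
    volumes:
      influxdb-data:
      grafana-data: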
strogonoff•1h ago
jeroenhd•5h ago
Luckily, all of the interesting components are existing third party libraries so if you don't want to use their SaaS service, you can build your own Minecraft dashboard pretty easily.
mmanciop•4h ago
Alerting is specific to Dash0. I know of no other monitoring solution that lets you run real PromQL on logs. But there will be similar ways of accomplishing the same alerting logic.
mmanciop•4h ago