[1] https://www.bbc.com/future/article/20221011-how-space-weathe...
I am not an OS developer, so I take my own conclusion with a grain of salt.
Any OS this game engine ran on would experience this crash.
It's a router.. oh my god that made me laugh
Update:
After the recent Hacker News "invasion", I have now determined that the page can handle up to 1536 users before running out of RAM, meaning that the IP camera is surprisingly fully sufficient for its purpose. In other words, I will not be moving the forum in the near future, as 32 MB of RAM seems to be enough to run it.
Which is fine unless you get to the HN frontpage.
Source: https://lenowo.org/viewtopic.php?t=28
badass
yep
The problem is that many microcontrollers and cheaply designed, non-interfaceable computers/devices/machines might not follow the standards and could therefore be susceptible, although your iPhone, laptop and fridge should all be fine.
I know the OpenFirmware in my old SunServer 600MP had the issue. Unfortunately I don’t have to worry about that.
"See this crash?
I predicted it years ago.
Don't ask me how, I couldn't tell you."
p.s. I had an old iPaq that I wouldn't have trusted to run for longer than a day and stay stable, kudos for that at the very minimum.
Left for 2.26 years, it will overflow.
When it does finally overflow, we get "minus" time and the game breaks in funny ways. I did a video about it: https://youtu.be/f7ZzoyVLu58
https://finalfantasy.fandom.com/wiki/Excalibur_II_(Final_Fan...
AFAIK the source for FF9 PSX (and all the PSX FF games) has been lost, as Square just used short-term archives.
Also, FF9 does not run at a constant framerate. Like all the PSX FF games it runs at various rates, sometimes multiple at a time (example: model animations are 15fps vs 30 for the UI).
In terms of timers, the BIOS does grant you access to root timers, but these are largely modulated by a hardware oscillator.
(Incidentally, the hardware timing component is the reason a chipped PAL console cannot produce good NTSC video. Only a Yaroze can support full multiregion play.)
FFIX for PSX would have been written in C (or possibly C++) with PSY-Q. It will not be one program - those games were composed of multiple overlays that are banked in / out over the PlayStation's limited memory.
From what I know the PC release was a port to a new framework, which supports the same script engines, but otherwise is fresh code. This is how it can support mobile, widescreen, Steam achievements etc.
There are a number of timers and things used. But the claim that it runs slower is absolutely false. It’s just perceived that way because it’s “drawn” slower.
Secondly, it absolutely will run slower. Animations will take longer to complete; FMVs will play at a different rate; controller sampling will be reduced.
My scepticism isn't coming from hearsay or ignorance: I have written PlayStation software, and PSX software is not parallelised, even though it can support threading and cooperative concurrency. The control flow of the title is very locked into the VSync loop, from your first ResetGraph(0) right to your final DrawOTable(*p).
In addition, I have done a bunch of reversing work on the other two PSX games, and they are not monolithic programs. They can't be because there simply isn't enough RAM to store the .TEXT of the entire thing at once. So when you say "the source code", I'm inclined to ask - for which module? The kernel or one of the overlays?
It’s not lost, except maybe to Square Enix’s corporate side, who don’t know where anything is.
(I also never managed to get it)
I'm guessing the game probably streams FMV cutscenes off the disc as they play, and the fallback behaviour if it can't find them is to skip rather than crash.
"The perfect racing car crosses the finish line first and subsequently falls into its component parts."
Games fit this philosophy, compared to many other pieces of software that are expected to be long-lived, to receive a lot of maintenance and changes, and to evolve.
But plenty of people will find complaints when they try to drive their car beyond its design specs and more or less everything starts failing at once.
I sous vide now and then, about twice a week for 6 hours each, so around 12 hours a week. That works out to roughly 15 years of usable machine time for the average person.
Not bad at all.
And yes, 15 years is bad. I don't want to replace my entire household every 15 years FFS.
- the product doesn't break and you don't buy a replacement from them because you still have a working product
- the product breaks and there is a greater than 0% chance that you will buy a replacement product from them
Of course in practice it's more complicated but I wouldn't be so quick to declare that the math doesn't work out.
This is a fine example of what I meant about people complaining when they use products beyond their design parameters.
I'm on a mostly carnivore, mostly ruminant meat diet, and for cost reasons I tend to do a lot of ground beef... I sous vide a bunch of burgers in 1/2lb ring molds, refrigerate and sear off when hungry. This lets me have safer burgers that aren't overcooked. I do 133F for 2.5+ hours.
I also do steaks about once or twice a week. I have to say it's probably the best kitchen investment I could have made in terms of impact on the output quality.
Personally, for most of my headphones I look for metal mechanical connections instead of plastic, and I buy refurbished when I can. I think I pay about as much as he does or less, but we haven’t really hashed out the numbers together. I’m typing this while wearing a HyperX gaming headset I bought refurbished that’s old enough that I’ve replaced the earpads while everything else continues to work.
In my experience, computers and computer parts often have a better reliability record when competently refurbished than when they first leave the factory. I wonder if sous vide cookers would too.
Sous vide is generally not a bacterial growth risk above 140F. At 150F throughout, you get decent pasteurization in under two minutes. Two days of that is such extreme overkill that I'm concerned about the nutritional effect of overcooking.
The Food Saver style vacuum sealers fail fast for me, so I bought a $400 chamber sealer, and I'm on year 5 with it.
Short rib prices are shocking where I am. Even chuck is pushing past $15 a pound.
What are you doing for sides/sauce? Generally when I think braise/sous-vide I think some rich, flavourful sauce, but that seems impractical for daily consumption.
I crisp it up in an air fryer before serving. Here's the full ingredient list: meat, butter, salt. After five years I still look forward to every repeat.
I just replaced an air fryer that lasted two years of daily use, a personal record. I was ready to replace it anyway, because they accumulate grease where you can't clean, and the smell gets interesting.
Or if you want something even beefier: https://sammic.com/en/smartvide-xl
https://www.justice.gov/archives/opa/pr/aluminum-extrusion-m...
Also a correction to GP: They were payload deployment failures, they didn't blow up on the pad. More here: https://arstechnica.com/science/2019/05/nasa-finally-conclud...
I'm chuckling at the thought of barely building something. (All in good fun, thank you.)
After two weeks, the Infrastructure department changed the sign to allow up to 45t.
Weren't F1 teams basically doing this, replacing their engines and transmissions, until the rules introduced penalties for component swaps in 2014?
It still does. New Zealand has a crop of tobacco funded politicians.
when they leave politics do they just rapidly age and dissolve like that guy in the Indiana Jones film?
The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it.
They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap.
After which the engine was basically unusable, and so they'd put in a new one for the race.
You can see this really on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.
"Verily, verily, I say unto you, Except a corn of wheat fall into the ground and die, it abideth alone: but if it die, it bringeth forth much fruit.”
(John 12:24)
Regardless I can still complain about how intrusive the ads are.
Edit: ah, only works on Safari
Edit: ah, forgot my VPN was off; usually that clears all that up for me. Much better now
These types of comments are always very unhelpful.
My choice of device is irrelevant when assessing their crappy site.
Who knew at the time that they were creating games that would be disassembled, deconstructed, reverse engineered? Do any of us think about that regarding any program we write?
Although for old games released before the internet was widespread among the general population, it might not have been this obvious.
Looking at the various comments, there might even be some kind of weird appeal to leaving such things in your game :D for people to find and chuckle about. It doesn't really disrupt the game normally, does it?
Pretty much doable even without resorting to VM migrations or ksplice. My last one had an uptime in the 1700s (days). Basically I leased it, put Debian on it, and that was that until I didn't need it anymore.
Error: game running for two years, rebooting so you can't cheese a timer.
Does this make the bug any better handled? Bugs like this annoy me because they aren't easily answered.
I wonder if any sense that this is criticism (or any actual criticism) comes from implementers of SaaS who have it so deeply ingrained that “haha, what if the users of this software did this really extreme thing” is more like “oh shit, what if the users of this software did this really extreme thing”.
When I worked on Google Cloud Storage, I once shipped a feature that briefly broke single-shot uploads of more than 2GB. I didn’t consider this use case because it was so absurd - anything larger than 2MB is recommended to go through a resumable/retryable flow, not a one-shot that either sends it all correctly the first time or fails. Client libraries enforced this, but not the APIs! It was an easy fix with that knowledge, but the lesson stayed with me that whatever extreme behaviors you allow in your API will be found, so you have to be very paranoid about what you allow if you don’t want to support it indefinitely (which we tried to do, it was hard).
Anyway, in this case that level of paranoia would make no sense. The programmers of that era made amazing, highly choreographed programs that ran exactly as intended on the right hardware and timing.
Anyway, in answer to the question, I would guess the reason was signed/unsigned type promotion.
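For anyone who hasn't hit it, a minimal illustration of that kind of promotion surprise (a generic sketch, not the actual Doom code):

  #include <stdio.h>

  int main(void) {
      int tics = -1;              /* e.g. a signed counter that has gone negative */
      unsigned int limit = 100;

      /* Usual arithmetic conversions: tics is converted to unsigned and
         becomes a huge value, so the "obviously true" comparison is false. */
      if (tics < limit)
          printf("below the limit\n");
      else
          printf("somehow above the limit\n");   /* this branch runs */
      return 0;
  }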
Doom is actually such a good game, I always go back to it every few years. The 2016 reboot is also pretty fun, but the later two in the series didn’t do it for me.
So Valve next?
Old age can make him give that up before death.
Not everything, but they do invest in it.
They've been working on Linux support since at least around the time that Microsoft introduced the Windows Store... so for the last twelve years or so.
And, man, a couple of months ago I figured out how to run Steam as a separate user on my Xorg system. Not-at-all-coincidentally, I haven't booted into Windows in a couple of months. Not every game runs [0], but nearly every game in my library does.
I'm really gladdened by the effort put in to making this work.
[0] Aside from the obvious ones with worryingly-intrusive kernel-level anticheat, sometimes there are weird failures like Highfleet just detonating on startup.
And now I'm reluctant to move back to Linux for gaming, even though they've clearly come so far. I guess I should just go ahead and give it another shot.
ProtonDB has a feature where you can give it access to your Steam account for reading and it'll give you a full report based on your personal library: https://www.protondb.com/profile
And I find, if anything, it tends to be conservative. I've encountered a few things where it was overoptimistic, but it's outweighed by the stuff that was supported even better than ProtonDB said.
In the late 2000s, I played a few things, but I went in with the assumption it either wouldn't work, or wouldn't work without tweaking. Now I go in with the assumption that it will work unless otherwise indicated. Except multiplayer shooters and VR.
You should absolutely revisit. Proton has changed the game. Literally the only game I've tried that was remotely difficult to play in SteamOS is Minecraft, likely because Microsoft owns it now. But I was able to get that working too (if anyone's wondering: you want Minecraft Bedrock Launcher, which is in the Discover store if you're on the Steam Deck and here[1] if you're somewhere else; basically it downloads and runs the Android version of Minecraft through a small translation layer, which is essentially identical to the Windows version).
Speed also is greatly improved from previous solutions. Games played through Proton are often very close in terms of performance to playing them natively.
And Gabe won't be around forever and the guy is already over sixty. Statistically he's got about two decades left to live and not all of that will be at a level where he can lead Valve.
It's like the game industry got a fake memo saying no one wanted linear story-based games anymore. I ended up buying two more Teyon games because I was so happy with their formula and they are playable in a dozen or so hours. Tight, compact, linear, fun story and game play... No MTX or always online BS and they don't waste my time with busy work.
It's even worse in multiplayer games like COD and BF. As soon as I need to figure out combinations of 5x attachments to guns I lose all my interest in playing the game. That's why I'm still on CS I guess lol.
Especially the 'mech scale' stuff was just boring. I don't remember what they call it in-universe, but essentially the parts of the game where you're playing from a giant robot and just walking over tanks and fighting supersized demons.
I'll be honest, I don't like this part. I'm a rabid collector. If the game gives a metric to an item, I must have all of the items. I end up killing the flow by scouring the level looking for secrets. This is entirely my fault of course
I also dislike this trend. As a sibling comment noted, boomer shooters are generally closer to the old-school Doom gameplay, although some are adopting the newer design too.
Sadly it appears that archive.org didn't capture all of the site formatting, but at least the text is there.
Maybe I need my morning coffee. :)
@ID_AA_Carmack Are you going to write a patch to fix this?
I had read an article about how DOOM's engine works and noticed how a variable for tracking the demo kept being incremented even after the next demo started. This variable was compared with a second one storing its previous value.
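I don't know which variables the article meant, but here's a sketch of how that "compare against the saved value" pattern goes wrong once the counter wraps (hypothetical names, not the real Doom globals, and the increment itself is already undefined behavior for a signed int):

  #include <stdio.h>
  #include <limits.h>

  int main(void) {
      /* Hypothetical: a tick counter that keeps incrementing across demos,
         and the value saved when the current demo started. */
      int gametic = INT_MAX;            /* counter about to overflow */
      int demostart = INT_MAX - 100;

      gametic++;   /* signed overflow: UB in C; in practice it usually wraps to INT_MIN */

      if (gametic > demostart)
          printf("counter has advanced past the saved value\n");
      else
          printf("comparison runs backwards after the wrap\n");   /* what you actually get */
      return 0;
  }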
Doesn't sound like something that would crash; I wonder what the actual crash was.
An actual analysis would be needed to understand the actual cause of the crash.
Another super easy way to enter UB land by assuming an integer is nonnegative is array indexing.
int foo[5] = { … };
foo[i % 5] = bar;
Everything is fine as long as i isn't negative. But if it is… (note that negative % positive == negative in C)
Doom4CE (this port) was based on WinDoom, which only creates the program window once at startup, then switches the graphical mode, and proceeds to draw on screen independently, processing the keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch at the load of each level? Then why do we see the old window?
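Going back to the indexing snippet above: a minimal defensive variant (same hypothetical foo/i/bar names, wrapped into a complete program) that keeps the index in range even when i has gone negative:

  #include <stdio.h>

  int main(void) {
      int foo[5] = { 0 };
      int i = -7, bar = 42;

      int idx = i % 5;        /* -2 here: in C the remainder takes the sign of i */
      if (idx < 0)
          idx += 5;           /* shift back into 0..4 */
      foo[idx] = bar;

      printf("wrote to foo[%d]\n", idx);   /* foo[3] */
      return 0;
  }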
There are various 32 bit integer counters in Doom code. I find it quite strange that the author neither names the specific one, nor what it does, nor tries to debug what happens by simply initialising it with some big value.
Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives us a little less than 2.5 years. However, Doom gameplay tick (or “tic”), on which everything else is based, famously happens only 35 times a second, and is detached from frame rendering rate on both systems that are too slow (many computers at the time of release), or too fast (most systems that appeared afterwards). 2^32 divided by 35, 60 seconds, etc. gives us about 4 years until overflow.
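Just to make that arithmetic concrete (nothing Doom-specific, only the divisions the comment above describes, using its 30-day months and 12-month years):

  #include <stdio.h>

  int main(void) {
      double ticks = 4294967296.0;                            /* 2^32 */
      double years_at_60hz = ticks / 60.0 / 86400.0 / 360.0;  /* 60 ticks/s, 86400 s/day, 360 days/year */
      double years_at_35hz = ticks / 35.0 / 86400.0 / 360.0;
      printf("60 Hz: %.2f years, 35 Hz: %.2f years\n",
             years_at_60hz, years_at_35hz);                   /* ~2.30 and ~3.94 */
      return 0;
  }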
Would be hilarious if it really is such an easy mistake.
It's a shame the source code for doom isn't available, and that the author couldn't just link directly to a specific line in a gitweb repository. /s
But I specifically said that it doesn’t look like SOUB in this particular case, and proposed an alternative mechanism for crashing. What’s almost certain is that some type of UB is involved because "crashing" is not any behavior defined by the standard, except if it was something like an assertion failing, leading to an intentional `abort`.
After a few hours precision errors accumulate and the texture becomes stretched and noisy, but since explosions are generally short-lived it's never a problem.
Yet this keeps bothering me...
UPDATE: Apparently it was 49.7 days in NT, same timer bug as 9x. I only remember that this was a server OS. https://www.reddit.com/r/sysadmin/comments/86jxva/anyone_rem...
That, or the Reddit poster and I have the same wrong memory of the bug. I do know my boss at the time made us make the scheduled task to reboot because he understood it at the time to happen on NT 4.