A set of libraries in our codebase had grown to account for 20% of response time through years of accretion. A couple of months of work cut that in half, with no architectural or cache changes. It was just about the largest, and definitely the most cost-effective, initiative we completed on that team.
Looking at flame charts is only step one. You also need to look at invocation counts, for things that seem to be getting called far more often than they should be. Profiling tools frequently (dare I say consistently) misattribute the costs of functions due to pressure on the CPU subsystems. And most of the time I've found optimizations that were substantially larger improvements than expected, it's been from cumulative call count, not run time.
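As a rough sketch of what that can look like in practice (TypeScript, with made-up helper names): wrap suspect functions so a run also produces a call-count table, then sort by count rather than by total time.

    const counts = new Map<string, number>();

    // Wrap a function so every call bumps a named counter.
    function counted<T extends (...args: any[]) => any>(name: string, fn: T): T {
      return ((...args: any[]) => {
        counts.set(name, (counts.get(name) ?? 0) + 1);
        return fn(...args);
      }) as T;
    }

    // After a run, inspect counts sorted by invocations, not time:
    const byCount = [...counts.entries()].sort((a, b) => b[1] - a[1]);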
(Depending on your IMGUI API you might be setting tooltip text in advance as a constant on every visible control, but that's probably a lot fewer than 38000 controls, I'd hope.)
It's interesting that every control previously had its own dedicated tooltip component, instead of having all controls share a single system wide tooltip. I'm curious why they designed it that way.
It seems like those libraries do what IMGUI does, but in a more structured way.
React requires knowing about your state because it wants to monitor all of it for changes, so it can skip work when nothing changed. This ends up infecting every part of your code. It's the number 1 frustration I have using React. I haven't used Flutter or SwiftUI, so I don't know if they are analogous.
AFAIK, UE relies on a retained mode GUI, but I never got far enough into that version of Narnia to experience it first hand.
With immediate mode you don't have to construct any widgets or objects. You just render them via code every frame, which gives you more freedom in how you tackle each UI element. You're not forced into one widget system across the entire application. For example, if you detect your tooltip code is slow, you could memcpy all the strings into a block of memory and then have tooltips use an index into that memory, or have them load on demand from disk, or the cloud, or space, or whatever. The point being you can optimise the UI piecemeal.
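A minimal sketch of that string-pool idea (TypeScript here purely for illustration; all names are made up): pack every tooltip string into one contiguous buffer up front, so a control carries only an integer index instead of a widget or even a string object.

    const encoder = new TextEncoder();
    const decoder = new TextDecoder();
    const spans: Array<[number, number]> = []; // (offset, length) per tooltip
    const chunks: Uint8Array[] = [];
    let total = 0;

    // Called once per tooltip at startup; returns the index a control stores.
    function internTooltip(text: string): number {
      const bytes = encoder.encode(text);
      spans.push([total, bytes.length]);
      chunks.push(bytes);
      total += bytes.length;
      return spans.length - 1;
    }

    // Concatenate once after startup into a single block of memory.
    function buildPool(): Uint8Array {
      const pool = new Uint8Array(total);
      let cursor = 0;
      for (const c of chunks) { pool.set(c, cursor); cursor += c.length; }
      return pool;
    }

    // Only the hovered control ever resolves its index back to text.
    function tooltipText(pool: Uint8Array, index: number): string {
      const [start, length] = spans[index];
      return decoder.decode(pool.subarray(start, start + length));
    }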
Immediate mode has its own challenges, but I do find it interesting to at least see how the different approaches would tackle the problem.
It took the whole afternoon
It's no wonder UE5 games have a reputation for being poorly optimized; you need an insane machine just to run the editor.
State-of-the-art graphics pipeline, but webdev levels of bloat when it comes to the software. I'd even argue Electron is a smoother experience than the Unreal Engine editor.
Insanity
To get UE games that run well, you either need your own engine team to optimise it or you drop all the fancy new features.
I am kinda sad we have reached the point where native resolution is not the standard for high-mid-tier/low-high-tier GPUs. Surely games should run natively at non-4K resolution on my 700€+ GPU...
By games I mean modern AAA first- or third-person games. 2D and other genres will typically still run at full resolution.
New monitors default to 60hz, but folks looking to game are convinced by ads that the reason they lost that last round was not the SBMM algorithm, but because the other player undoubtedly had a 240hz 4K monitor rendering the player coming around the corner a tick faster.
Competitive gaming and Twitch are what pushed the current priorities, and the hardware makers were only too happy to oblige.
For me, it's not quite as big of a jump as, say, when we went from SD to HD TV, but it's still a big enough leap that I don't consider it gimmicky.
Gaming in 4K, on the other hand, I don't really care for. QHD is plenty, but I do find 4K makes for slightly nicer desktop use.
Edit: I'll add that I almost always limit FPS anyway because my GPU turns into a jet engine under high load and I hate fan noise, but that's a different problem.
For a bit of background: modern games tend to do game processing and rendering in parallel, but that means the frame being processed by the rendering system is the previous frame, and once rendering has been submitted to the "graphics card" it can take one or more additional frames before it's actually visible on the monitor. So you end up with a lag of 3+ frames rather than the single frame you had in old DOS games and the like. So having a faster monitor, and being able to render frames at that faster rate, will give you some benefit.
This is also why using frame generation can actually hurt the gaming experience: instead of waiting 3+ frames to see your input reflected on the screen, you end up with something like 7+ frames, because the fake in-between frames don't actually deal with any input.
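To put rough numbers on those frame counts (taking the 3 and 7 above at face value): at 60 Hz a frame lasts ~16.7 ms, so 3 frames is ~50 ms of input lag and 7 frames is ~117 ms; at 240 Hz the same counts drop to ~12.5 ms and ~29 ms.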
It’s only when LCDs appeared that 60 Hz started being a thing on PCs, and 60 fps followed as a consequence, because the display can’t show more anyway.
It’s true that competitive gaming has pushed the priority of performance, but this happened already in the 90s with Quake II. There’s nothing fake about it either. At the time, a lot of playing happened at LANs, not online. The person with the better PC got better results, repeatedly reproduced by rotating people around on the available PCs.
I recall playing games at 100 FPS on my 100 Hz CRT. People seriously interested in multiplayer shooters at the time turned vsync off and aimed for even higher frame rates. It was with this in mind I was quick to upgrade to a 144 Hz display when they got cheap enough: I was taking back territory from when the relatively awful (but much more convenient) flat screens took over.
> because the other player undoubtedly had a 240hz 4K monitor rendering the player coming around the corner a tick faster.
I play 99% single player games and in most of those, response time differences at that scale seem inconsequential. The important difference to me is in motion clarity. It's much easier to track moving objects and anticipate where they will be when you get more frames of animation along their path. This makes fast-paced games much more fun to play, especially first person games where you're always rapidly shifting your view around.
And now antialiasing is so good you can start from lower resolutions and still fake even higher quality.
It's really the same problem as in synthesizing audio. 44.1 kHz is adequate for most audio purposes, but if you are generating sounds with content past the Nyquist frequency, it's going to alias and fold back in undesirable ways, causing distortion in the audible content. So you oversample, filter to remove the high-frequency content, and downsample in order to antialias (roughly equivalent to SSAA), or you build the audio from band-limited impulses or steps.
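A minimal sketch of the oversample-filter-decimate approach (TypeScript; the 4-sample average below is a crude stand-in for a proper low-pass filter):

    // Naive sawtooth oscillator: rich in harmonics past Nyquist, so it aliases.
    function renderSaw(freq: number, sampleRate: number, n: number): Float32Array {
      const out = new Float32Array(n);
      let phase = 0;
      for (let i = 0; i < n; i++) {
        out[i] = 2 * phase - 1;
        phase += freq / sampleRate;
        if (phase >= 1) phase -= 1;
      }
      return out;
    }

    // Render at 4x the target rate, filter, decimate back down (the SSAA analogue).
    function renderSawAntialiased(freq: number, sampleRate: number, n: number): Float32Array {
      const hi = renderSaw(freq, sampleRate * 4, n * 4);
      const out = new Float32Array(n);
      for (let i = 0; i < n; i++) {
        // Boxcar average as a toy low-pass; a real resampler would use a proper FIR.
        out[i] = (hi[4 * i] + hi[4 * i + 1] + hi[4 * i + 2] + hi[4 * i + 3]) / 4;
      }
      return out;
    }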
Wouldn't taking the whole afternoon be because it's downloading and installing assets, creating caches, indexing, etc.?
Like with IDEs, it really doesn't matter much once they're up and running, and the performance of the product ultimately has little to do with the tools used in making it. Poorly optimized games have that reputation because they're poorly optimized; that's rarely down to the engine. Maybe down to the complete package, where it's too easy to just plop down assets from the internet without tweaking for performance or having a performance budget per scene.
Care to give an example?
I find UE games to be not only the most optimized, but also capable of running everywhere. Take X-COM, which I can play on my 14-year-old Linux laptop with its i915 excuse-for-a-gfx-card, whereas Unity stuff doesn't work here, and on my Windows gaming rig it always makes everything red-hot without even approaching the quality and fidelity of UE games.
To me UE is like SolidWorks, whereas Unity is like FreeCAD... Which I guess is actually very close to what the differences are :-)
Or is this "reputation of being poorly optimized" only specific to UE version 5 (as compared to older versions of UE, perhaps)?
It also has a terrible reputation because a bunch of the visual effects have a hard dependency on temporal anti-aliasing, which is a form of AA which typically results in a blurry-looking picture with ghosting as soon as anything is moving.
"Firstly, despite its name, the function doesn’t just set the text of a tooltip; it spawns a full tooltip widget, including sub-widgets to display and layout the text, as well as some helper objects. This is not ideal from a performance point of view. The other problem? Unreal does this for every tooltip in the entire editor, and there are a lot of tooltips in Unreal. In fact, up to version 5.6, the text for all the tooltips alone took up around 1 GB of storage space."
But I assume the 1 GB of storage for all tooltips includes boilerplate. I doubt it is 1 GB of raw text.
The user can only ever see a single tooltip at a time. (Or maybe more if you have tooltips for tooltips, but I don't think Unreal has that; the point is, it's a limited number.)
So initialize a single tooltip object. When the user mouses over an element with a tooltip, set the appropriate text, move the tooltip widget to the right position, and show it. If the user moves away, hide it.
Simple and takes nearly no memory. Seems like some people still suffer from 90s OOP brain rot.
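A minimal sketch of that single-shared-tooltip pattern, in DOM/TypeScript terms since that's what I know (Unreal's Slate would look different, of course):

    // One tooltip element for the whole app, created once.
    const tip = document.createElement("div");
    tip.className = "tooltip";
    tip.style.position = "fixed";
    tip.style.display = "none";
    document.body.appendChild(tip);

    function showTooltip(text: string, x: number, y: number): void {
      tip.textContent = text; // swap the text, never the widget
      tip.style.left = `${x}px`;
      tip.style.top = `${y}px`;
      tip.style.display = "block";
    }

    function hideTooltip(): void {
      tip.style.display = "none";
    }

    // Controls carry only a string (here, a data attribute), not a widget each.
    document.addEventListener("mouseover", (e) => {
      const text = (e.target as HTMLElement).dataset?.tooltip;
      if (text) showTooltip(text, e.clientX + 12, e.clientY + 12);
      else hideTooltip();
    });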
I struggle with UE over others for any project that doesn't demand an HDRP equivalent and nanometric mesh resolution. Unity isn't exactly a walk in the park either but the iteration speed tends to be much higher if you aren't a AAA wizard with an entire army at your disposal. I've never once had a UE project on my machine that made me feel I was on a happy path.
Godot and Unity are like cheating by comparison. ~Instant play mode and a trivial debugging experience make a huge difference for solo and small teams. Any experienced .NET developer can become productive on a Unity project in <1 day with reasonable mentorship. The best strategy I had for UE was to just use Blueprints, but those are really painful at source-control and code-review time.
OBJECTIVE: Any project that demands HDRP and Nanometric Mesh
BONUS: Find the happy path
We're a team with < 10 employees. He's paying very handsomely, so even if his Unreal foray is an absolute disaster, I'll have the savings to find something else.
With a bit of experience you can achieve global illumination results that are competitive with Pixar films by using static scene elements, URP, area lighting, baked GI and 5~10 minutes on a 5700XT. The resulting output will run at hundreds of FPS on most platform targets. If this means pegging vsync, it may also be representative of a power savings on those platforms.
Lights in video games certainly use real electricity, but the power spent on baked lights is amortized across every unique target that runs the game. The biggest advantage of baking is that you can use an unlimited # of lights, so emulation of a physical scene is possible. There are also types of lights that aren't even available in real time (area/volumetric). These produce the most compelling visual results, avoiding problems that other approaches create, such as hotspots in reflection probes and hard shadowing.
Lightmap baking is quickly becoming a lost art because realtime lighting is so simple by comparison (at first!). It also handles a lot of edge cases automagically, the most important ones being things like dynamic scene elements and foliage. Approximately half of the editor overlays in Unity are dedicated to visualizing the various aspects of baked lighting. It is one of the more difficult things to polish, but if you have the discipline to do so, it will make your game highly competitive in the AAA arena.
The crazy thing to me about baked GI is that it used to be incredibly crippling to iteration time. Working at a studio back in 2014, I recall wild schemes to bake lights in AWS so we could iterate faster. Each scene would take hours to fully bake. Today, you can iterate on GI in a fixed viewport multiple times per second with a progressive GPU lightmapper, and each scene can be fully baked in <10 minutes. There has never been a better time to build games using technology like this. If you took a game studio from a decade ago and gave them the technology we have today, they would wipe the floor with every other studio on earth right now.
This tech doesn't have to be all-or-nothing either. Most well engineered AAA games utilize a mixture of baked & real time. The key is to make as many lights baked as possible, to the extent that you are kind of a constraining asshole about it, even though you can support 8+ dynamic lights per scene object. I look at real time lighting as a bandaid, not a solution.
If you want to attack this from a business perspective: bleeding-edge lighting tech is a nightmare if you want to ship to a large # of customers on a wide range of platforms.
Game engines have always been a problem. They're very tricky to make while covering everyone's use cases, and I don't think they've ever been in as good a state as right now.
I absolutely love both Unreal and Unity. Unreal is amazing from a technical perspective, and having worked with a talented team in Unreal, the stuff we were able to make was mind-blowing given the resources we had.
Unity is way easier to work with if you aren't focused on high fidelity graphics. In fact, I've never tried any engine that felt as easy to work with as Unity. I would absolutely not call it a monster. Even with a fairly green team you can hit the ground running and get really productive with Unity.
So yeah, Unity is an easy on-ramp. But unfortunately, I think it puts people in a bad market that doesn't serve them well.
Why has it been changed? The number of tooltips improves the title and is accurate to the post.
I like Godot primarily because of GDScript; you are not compiling anything so iteration time is greatly reduced. Unigine is also worth a mention. It has all the modern features you could want like double precision, global illumination etc but without the bloat and complexity. It's easy to use as much or as little of the engine as you need; in every project you write the main() function. Similar license options to Unity/Unreal.
adithyassekhar•21h ago
Similarly, adding a modal like this
{isOpen && <Modal isOpen={isOpen} onClose={onClose} />}
instead of
<Modal isOpen={isOpen} onClose={onClose} />
seems to make the app smoother the more modals we have. Rendering the UI only when you need it (not downloading the code on demand; it's still part of the bundle) seems to be low-hanging fruit for optimizing performance.
pathartl•20h ago
The tradeoff is for more complicated components, first renders can be slower.
trylist•20h ago
You basically have a global part of the component and a local part. The global part is what actually gets rendered when necessary and manages the current state; the local part defines what content will be rendered inside the global part for a particular trigger, and interacts with the global part when a trigger condition happens (e.g. a hover timeout for a tooltip).
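A minimal sketch of that split in React (all names hypothetical):

    import { createContext, useContext, useState } from "react";
    import type { ReactNode } from "react";

    const TooltipContext = createContext<(text: string | null) => void>(() => {});

    // Global part: rendered once, owns the single tooltip's state.
    function TooltipHost({ children }: { children: ReactNode }) {
      const [text, setText] = useState<string | null>(null);
      return (
        <TooltipContext.Provider value={setText}>
          {children}
          {text && <div className="tooltip">{text}</div>}
        </TooltipContext.Provider>
      );
    }

    // Local part: only wires trigger events to the global part.
    function WithTooltip({ text, children }: { text: string; children: ReactNode }) {
      const setText = useContext(TooltipContext);
      return (
        <span onMouseEnter={() => setText(text)} onMouseLeave={() => setText(null)}>
          {children}
        </span>
      );
    }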
high_priest•12h ago
This is, in general, the idea that gets solved by native interaction with the DOM. It stores the graphic, so it doesn't have to be re-instantiated every time; it gets hidden with "display:none" or something. When it needs to display something, just the content gets swapped and the object gets 'unhidden'.
Good luck.
jitl•9h ago
Excessive nodes - hidden or not - cost memory. On midrange Android it’s scarce and even if you’re not pushing against overall device memory limit, the system is more likely to kill your tab in the background if you’ve got a lot going on.
abdusco•15h ago
https://developer.mozilla.org/en-US/docs/Web/CSS/@starting-s...
chamomeal•5h ago
And the “react way” is to have the UI reflect state. If the state says the modal is not being rendered, it should not be rendered
jitl•3h ago
There are many high quality third party tools to help with this, such as Motion’s <AnimatePresence> (https://motion.dev/docs/react-animate-presence). I haven’t used the library you mentioned, but it seems somewhat unmaintained and isn’t compatible with react-dom@19.
First party support is coming to React with the new <ViewTransition> component (https://react.dev/reference/react/ViewTransition).
If you insist that only the React maintainers are allowed to diverge DOM state from the render tree or write code you don’t understand, you can adopt it today from react{,-dom}@experimental. It’s been available there since April (https://react.dev/blog/2025/04/23/react-labs-view-transition...).
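For reference, the AnimatePresence pattern looks roughly like this (per Motion's docs; treat it as a sketch, not a drop-in):

    import { AnimatePresence, motion } from "motion/react";

    function ModalMount({ isOpen, onClose }: { isOpen: boolean; onClose: () => void }) {
      return (
        <AnimatePresence>
          {isOpen && (
            <motion.div
              key="modal"
              initial={{ opacity: 0 }}
              animate={{ opacity: 1 }}
              // Runs when isOpen flips false; AnimatePresence keeps the element
              // mounted until the exit animation finishes.
              exit={{ opacity: 0 }}
              onClick={onClose}
            />
          )}
        </AnimatePresence>
      );
    }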