Isn't this just trying to find a Hamiltonian cycle? Isn't that NP-hard? That's when I would give up, especially because you put so many constraints on it to make it human-walkable.
Edit: Of course you don't have to give up, but it's good to know what you're getting yourself into.
"The important part is figuring out what you should be optimizing for in the first place.
Most of the time, we don’t even ask that question. We just optimize for whatever’s easy to measure and hope it works out.
Spoiler: it probably doesn’t."
Hell, given that there is a social safety net, and that doing the job has costs (food, public transport, ...), you're probably even better off doing worse than that and getting fired when the manager is "tired of your shit" or whatever.
Then you'll get unemployment, which pays slightly less, but you can invest the time in cooking at home, so you'll eat better and have more money left over.
In a way, the computer science student may or may not have realized that he has stumbled upon one of the biggest problems in software development--the arrogance of ignorance.
Especially with lawn mowers, turns are weighted much more heavily than distance. Also, if you mow regularly, it's not obvious what has been mowed and what hasn't. So regularizing and simplifying the path matters even more than minimizing turns, because it lets you discard whole plots from the to-do list.
Roombas (RIP) don't have the same memory and turn-weight function that humans do, of course.
>You can write perfect code. You can build flawless systems. You can optimize the sh*t out of your cost function. And you can still end up with something that sucks.
>The important part isn’t the optimization algorithm. The important part is figuring out what you should be optimizing for in the first place.
>Most of the time, we don’t even ask that question. We just optimize for whatever’s easy to measure and hope it works out.
>Spoiler: it probably doesn’t.
As tech people, it's kinda hard to admit it, but it's totally correct. Sometimes you actually have to optimize for X, sometimes you don't. It's totally ok to optimize stuff just for passion or to try out new stuff, but if you expect external validation you should do this for things people actually care about.
As an aside, this is also related to the way random companies carry out technical interviews, cargo-culting FAANG practices.
FAANGs tend to care about optimizing a lot of stuff, because when you have billions of customers even the smallest things can add up to a lot of money.
If you are a random company, even a random tech company, in many domains you can go a long way with minimal tuning before you have to turn to crazy optimization tricks.
For example, one day has almost 100k seconds (86,400), so if you have 100k daily requests (which is still a lot!), then even with 10x peaks during the day you are looking at roughly 10 requests per second. It's not that much.
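A quick back-of-the-envelope sketch of that math (the 10x peak factor is the assumption from above):

    # Back-of-envelope: daily request volume -> requests per second.
    SECONDS_PER_DAY = 24 * 60 * 60               # 86,400, i.e. "almost 100k"

    daily_requests = 100_000
    avg_rps = daily_requests / SECONDS_PER_DAY   # ~1.16 req/s on average
    peak_rps = 10 * avg_rps                      # assumed 10x peak -> ~11.6 req/s
    print(f"avg {avg_rps:.2f} req/s, peak {peak_rps:.1f} req/s")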
You can take it one step further: imagine you live in a smallish country (10 million people).
If your market share is 10% of the population and they make 1 request per day, that is just 10 requests per second.
10% is a large market share for an everyday product, so take 1% instead, with 10 requests per user per day: still roughly 10 req/s.
In fact, with a 1% share of 10 million people (100k users), the number of requests each user makes per day is, give or take, the number of requests per second your server will see on average (100,000 / 86,400 ≈ 1.16).
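The same rule of thumb as a sketch (population and share are the figures above; requests per user is an arbitrary stand-in):

    SECONDS_PER_DAY = 86_400

    population = 10_000_000
    market_share = 0.01                    # 1% -> 100k users
    requests_per_user_per_day = 5          # pick any number

    users = population * market_share
    rps = users * requests_per_user_per_day / SECONDS_PER_DAY
    # 100_000 / 86_400 ~= 1.16, so rps ~= requests_per_user_per_day
    print(f"{rps:.1f} req/s")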
There is a lot of business in small countries that never needs to scale (or business in narrow sectors, e.g. a lot of B2B).
Or you could still use multiple instances, not for scaling but for patching without downtime and so on.
Availability can be way more important than sheer performance or number of concurrent requests.
I would just never write code that struggles with n.
And adding a hashmap here and there, because I know how this stuff works, doesn't cost me anything.
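A minimal sketch of what that near-free hashmap buys you (timings are illustrative, not a benchmark):

    import time

    n = 100_000
    as_list = list(range(n))
    as_set = set(as_list)            # the "free" hashmap

    def time_membership(container):
        start = time.perf_counter()
        for probe in range(0, n, 101):
            probe in container       # O(n) for the list, O(1) average for the set
        return time.perf_counter() - start

    print("list:", time_membership(as_list))
    print("set: ", time_membership(as_set))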
Sure if it costs nothing, go for it.
With that said,
1) time complexity is just one kind of complexity. In real life you may care about space complexity too. Hashmaps tend to use more "space" than plain arrays, which can be an issue in some cases. Also, some key distributions produce a lot of hash collisions, which may not be ideal.
A well-thought-out design with respect to performance and scalability relies on a few assumptions like these, which can point you toward one solution or another.
2) a real-world application is not necessarily constrained (in space or time) by traversing an array or a hashmap. Unless your application mostly processes and sorts data structures, this is probably not the case.
For example, consider a simple application which lets users click and reserve a seat at a theater/conference/stadium/train/whatever.
The application is essentially a button that triggers a database write and returns a 'Success' message (or maybe a PDF). In this case you are mostly constrained by the time needed to write to that database, plus perhaps the time a PDF generation library needs to do its thing. You are in fact interacting with two "APIs" (not necessarily Web, REST APIs!): the database API and the third-party PDF library API. I don't have any special knowledge of PDF libraries, but I suspect their performance depends on the amount of data you convert to PDF, which is more or less the same for every user. And when it comes to databases, your performance is mostly limited by database size and the number of concurrent requests.
If you think this is too simple, consider additional features like authentication, email notifications, or choosing between different seat categories. In most cases your code does very little processing on its own; it mostly asks other libraries/endpoints for things and waits for an answer.
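A minimal sketch of such an I/O-bound handler (sqlite3 as a stand-in database; the schema and function are made up for illustration):

    import sqlite3

    conn = sqlite3.connect("reservations.db")
    conn.execute("CREATE TABLE IF NOT EXISTS reservations (user_id TEXT, seat TEXT)")

    def reserve_seat(user_id: str, seat: str) -> str:
        # Nearly all of this request's wall-clock time is spent inside
        # the database call, not in our own code.
        with conn:
            conn.execute("INSERT INTO reservations VALUES (?, ?)", (user_id, seat))
        return "Success"

    print(reserve_seat("alice", "12A"))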
Consider another example. You want to find the distance between a user (who is assumed to have a GPS receiver) and a known place like Times Square. What you have is a mobile app that gets the GPS position from the phone and computes the distance between the user and the known coordinates using a known formula. The input size is always the same (the size of the data structure holding GPS coordinates) and the formula is always the same, so processing time is essentially constant.
Now let's say you have a bunch of well-known places, say N of them. The app computes the distance to all N places, populating an array (or an array of dicts, or whatever) of length N. Maybe the app also sorts that data structure to find the 5 closest places. How long will that take? How many places do you need to compute and sort before a user notices the app is kinda slow (say, above 200 milliseconds) or painfully slow (above 1 second, or even 500 ms)?
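You can get a feel for it with a rough sketch like this (haversine formula for great-circle distance; the random coordinates are made-up test data):

    import math, random, time

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) pairs, in km.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    user = (40.7580, -73.9855)  # Times Square
    for n in (1_000, 10_000, 100_000):
        places = [(random.uniform(-90, 90), random.uniform(-180, 180)) for _ in range(n)]
        start = time.perf_counter()
        closest5 = sorted(haversine_km(*user, lat, lon) for lat, lon in places)[:5]
        print(f"n={n:>7}: {time.perf_counter() - start:.3f} s")

Even this naive pure-Python version typically stays well under a second for tens of thousands of places; the data structure is rarely the bottleneck at this scale.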
There are a lot of scenarios and real-world applications where, using modern hardware and/or external APIs and reasonable expectations about clients (users don't care about microseconds, sending an email or push notification in one second is totally acceptable in most cases,...), you are not constrained by the data structure you are using unless you're working at a large scale.
Optimize for human satisfaction: NP-hardest
So, good for the author that they spent the time to learn path optimization. Now onto 3D bin packing!
If you have non-zero inertia, then refusing to model inertia isn't technically correct or optimal.
The turn penalty is a way to avoid having an explicit dynamics model, which is a nice hack that will probably return pretty good results.
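A sketch of what such a turn-penalized cost might look like (the unit step and turn weight are arbitrary assumptions, not the author's actual objective):

    def path_cost(path, turn_weight=3.0):
        # Cost of a grid path (a list of (x, y) cells): distance walked plus
        # a penalty per change of direction -- a cheap stand-in for inertia.
        cost, prev_dir = 0.0, None
        for a, b in zip(path, path[1:]):
            direction = (b[0] - a[0], b[1] - a[1])
            cost += 1.0                      # unit step on the grid
            if prev_dir is not None and direction != prev_dir:
                cost += turn_weight          # turning is expensive
            prev_dir = direction
        return cost

    print(path_cost([(0, 0), (1, 0), (2, 0), (3, 0)]))  # straight: 3.0
    print(path_cost([(0, 0), (1, 0), (1, 1), (2, 1)]))  # 2 turns: 9.0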
There is also a missing reference to liquidity preference. People don't make elaborate, costly plans, because a plan represents a commitment and an obligation to follow it through. Contrary to what economists assume, people don't make lifelong commitments at birth and then just execute them; they usually decide on the spot, with the information they've picked up over the course of their life.
My Roomba takes less time when it dead-reckons than when it tries to avoid visiting tiles twice. I guess humans and Roombas alike spend energy and time poorly when they follow the "optimal" plan.
1. Why a grid system, and what justifies the grid size? As someone who has worked on algorithmic optimization for decades (god, I am old), I would have chosen a different approach: since we know the shortest distance between two points is a straight line, I would only consider points at junctions for path decisions. Basically this can be seen as either a one-way road system (start to end of an aisle) or a two-way road system with only one lane per direction. With this system your graph is much smaller and can be traversed more easily.
2. I wouldn't have chosen simulated annealing for this job. It is guaranteed to give you a solution in a short period of time, but the chance of getting stuck in a local minimum is very high, as opposed to finding the global minimum (which you couldn't even prove you had found). Path finding is well documented and there are heuristics (if you need them) to solve this problem reasonably without such a complicated algorithm. I would probably have built a Dijkstra distance matrix between all the points (from above); then you know all distances going from A to C via B, for example (a minimal sketch of this idea follows after this list).
3. I think the problem is a bit similar to the Traveling Salesman Problem (for which exact solvers exist these days, AFAIK); however, one needs to make sure that each path is traversed at least once (we don't want to skip an aisle).
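Here is the sketch mentioned in point 2 (a hand-rolled Dijkstra to keep it self-contained; the junction graph and aisle lengths are toy values):

    import heapq

    def dijkstra(graph, source):
        # graph: {node: [(neighbor, weight), ...]}
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry
            for v, w in graph[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    # Toy junction graph: nodes are aisle junctions, weights are aisle lengths.
    graph = {
        "A": [("B", 4.0), ("C", 2.0)],
        "B": [("A", 4.0), ("C", 1.0), ("D", 5.0)],
        "C": [("A", 2.0), ("B", 1.0), ("D", 8.0)],
        "D": [("B", 5.0), ("C", 8.0)],
    }

    # All-pairs distance matrix: one Dijkstra run per junction.
    matrix = {u: dijkstra(graph, u) for u in graph}
    print(matrix["A"]["D"])  # 8.0, via A -> C -> B -> D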
I often have the feeling that people try to solve problems by aimlessly throwing data at (AI) algorithms, which is computationally expensive, without realizing that the data needs to be filtered and maybe converted first (i.e. feature extraction). Furthermore, the right algorithm needs to be chosen for the job (e.g.: do you need a quick solution that's not optimal? and so on). Having said that, of course, any simulation (optimization) is just a simplified, idealized view of the real world. To stay with the author's example, some aisles might need sweeping more often than others, depending on the groceries on the shelves, etc.
Reminded me of a pattern I keep seeing in business software - teams spend months optimizing the wrong abstraction. They'll build an incredibly efficient data pipeline that turns out to process information nobody actually needs, or an algorithm that minimizes compute time when the real bottleneck is waiting for a human approval that takes 3 days.
The simulated annealing approach wasn't wrong per se - it's just that "minimise distance walked" was never actually the objective function that mattered to the humans doing the walking.
Put on headphones with music, a podcast, or even YouTube, and take 1-2 or even 4 hours longer than you normally would. Have an accident every few days.
Oh, and here's another problem with the best solution. Managers are idiots. They actually have zero clue what a good time/performance for floor sweeping is, since they'd never stoop to doing it themselves, even once. If you're running close to the best possible, any slight disturbance will make a large difference in your performance, and that's what the manager actually measures. If you're doing a piss-poor job but can absorb anything that inevitably goes wrong by changing your speed a bit, so that you deliver about the same quality in about the same time every time, your manager will be happier. Hell, it'll make it easier for him to organize the store, so it may actually be better economically too.
The better you optimize this, the worse off you are. Hell, given that many consider you better off on unemployment, you should do worse than the worst the manager will accept (since you get unemployment if the manager fires you, but not if you quit). Then you get a bit less money, but you have no costs: food and public transport cost you money every day, but only while working; unemployed, you can stay at home and cook.
So ... does this optimization tool have a way to accomplish the actual best outcome?
One option is e.g. https://fields2cover.github.io/ but that doesn't work too well if there are lots of obstacles in the field, like in this case. I'm having the same issue at work right now with agricultural robots, covering the area between rows and rows of trees. Some implements on our robot hang off to one side, so paths can't be bidirectional, etc. Lots of interesting constraints.
See also https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart....
- And you didn't try to escape?
- I couldn't, the hallway has only two doors; the attacker came through the front one, and at the back... I had just finished sweeping.