https://www.ibm.com/think/topics/linear-regression
A proven way to scientifically and reliably predict the future
Business and organizational leaders can make better decisions by using linear regression techniques. Organizations collect masses of data, and linear regression helps them use that data to better manage reality, instead of relying on experience and intuition. You can take large amounts of raw data and transform it into actionable information.
You can also use linear regression to provide better insights by uncovering patterns and relationships that your business colleagues might have previously seen and thought they already understood.
For example, performing an analysis of sales and purchase data can help you uncover specific purchasing patterns on particular days or at certain times. Insights gathered from regression analysis can help business leaders anticipate times when their company’s products will be in high demand.
Linear regression, for all its faults, forces you to be very selective about the parameters you believe to be meaningful, and offers simple tools to validate the fit (e.g. examining residuals, or posterior predictive simulations if you want to be fancy).
ML and beyond, on the other hand, throws you into a whirl of hyperparameters that you no longer understand, which traps even clever people in overfitting they don't recognize.
Obligatory xkcd: https://xkcd.com/1838/
So a better critique, in my view, would be something that J. W. Tukey wrote in his famous 1962 paper (paraphrasing because I'm lazy):
"better to have an approximate answer to a precise question rather than an answer to an approximate question, which can always be made arbitrarily precise".
So our problem is not the tools, it's that we fool ourselves by applying the tools to the wrong problems because they are easier.
This can be seen as another occurrence of the "bitter lesson": http://www.incompleteideas.net/IncIdeas/BitterLesson.html
I indeed find the lesson it describes unbearably bitter. Searching and learning, as the article uses the terms, may discover patterns and results (thanks to the endless scaling of computation) that we humans are physically incapable of discovering -- however, all those learnings will have no meaning, they will not expose any causality. This is what I find unbearable: it implies that the real world must ultimately remain impervious to human cognizance; it implies that our meaning- and causality-based human reasoning ultimately falls short of modeling the world, while general, computation-only methods (given ever-growing computing power) at least "converge" to a faithful (but meaningless) description of the world.
See examples like protein folding, medical research, AI-assisted diagnosis, and self-driving cars. We're going to rely on their results, but we'll never know why those results work. We're not going to reject self-driving cars if those cars save lives per distance and/or time driven; however, we're going to sit in, and drive, those cars blind. To me that's an unbearable thought, even apart from the possibility that at some point the system might break down and cause a huge accident inexplicably. An inexplicable misbehavior of the system is of course catastrophic, but to me even the inexplicable proper behavior of the system is unsettling -- because it is inexplicable.
Edited to add: I think the phrase "how we think we think" in the essay is awesome. We don't even know how our own reasoning works, so trying to "machinize" those misconceptions is likely bound to fail.
The notion of predicting the mean can be extended to other properties of the conditional distribution of the target variable, such as the median or other quantiles [0]. This comes with interesting implications, such as the well-known property of the median being more robust to outliers than the mean. In fact, the absolute loss function mentioned in the article can be shown to give a conditional median prediction (using the mid-point in case of non-uniqueness). So in the OP example, if the data set is known to contain outliers like properties that have extremely high or low value due to idiosyncratic reasons (e.g. former celebrity homes or contaminated land) then the absolute loss could be a wiser choice than least squares (of course, there are other ways to deal with this as well).
Worth mentioning here, I think, because the OP seems to hold a particular grudge against the absolute loss function. It's not perfect, but it has its virtues and some advantages over least squares. It's a trade-off, like so many things.
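A tiny sketch of that mean-vs-median point, with made-up numbers (the prices and the grid search are assumptions purely for illustration): for a constant prediction, squared error is minimized by the mean, which gets dragged by the outlier, while absolute error is minimized by the median, which barely moves.

    # Sketch: constant prediction under squared vs absolute loss, with one outlier.
    # The prices below are made up (think "former celebrity home" at the end).
    import numpy as np

    prices = np.array([290., 300., 305., 310., 315., 320., 5000.])  # in $1000s

    candidates = np.linspace(250, 1000, 7501)                      # candidate constant predictions
    sq_loss  = ((prices[:, None] - candidates) ** 2).sum(axis=0)   # sum of squared errors
    abs_loss = np.abs(prices[:, None] - candidates).sum(axis=0)    # sum of absolute errors

    print(candidates[sq_loss.argmin()], prices.mean())      # ~977: the outlier dominates
    print(candidates[abs_loss.argmin()], np.median(prices)) # 310: the median shrugs it off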
For gradients, Stanford CS229 [1] jumps right into it.
[0] https://www.stat.cmu.edu/~cshalizi/mreg/15/lectures/06/lectu...
[1] https://cs229.stanford.edu/lectures-spring2022/main_notes.pd...
And for introductory content there's always the risk that if you provide too much information you overwhelm the reader and make them feel like maybe this is too hard for them.
Personally I find the process of building a model is a great way of learning all this.
I think a course is probably helpful, but the problem with things like DataCamp is that they are overly repetitive and they don't do a great job of helping you look up earlier content, unless you want to scroll through a bunch of videos where the formula is on screen for 5 seconds.
Would definitely recommend just getting a book for that stuff. I found "All of Statistics" good; I just wouldn't recommend trying to read it cover to cover, but it works well as a manual where I can look up the bits I need when I need them. Though the book may be a bit intimidating if you're unfamiliar with integration and derivatives (as it often expresses the PDF/CDF of random variables in those terms).
There are plenty of error formulations that give a smooth loss function, and many even a convex one, but most don't have analytical solutions so they are solved via numerical optimization like GD.
The main message is IMHO correct though: squared error (and its implicit Gaussian noise assumption) is all too often used just out of convenience and tradition.
And in any case nobody uses GD for regressions for statistical analysis purposes. In practice Newton-Raphson or other more complicated schemes (with much higher computation, memory and IO demands) with much nicer convergence properties are used.
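A rough sketch of that workflow (the Huber loss and the synthetic data here are my choice, not something from the thread): a smooth, convex loss with no closed-form solution, handed to a quasi-Newton solver instead of plain GD, next to the analytical least-squares fit.

    # Sketch: a smooth convex loss (Huber) minimized numerically with a quasi-Newton
    # method, compared against closed-form OLS. Data is synthetic, for illustration only.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 300)
    y = 2.0 * x + 1.0 + rng.standard_t(df=2, size=300)   # heavy-tailed noise
    X = np.column_stack([np.ones_like(x), x])

    def huber_loss(beta, delta=1.0):
        r = y - X @ beta
        quad = 0.5 * r ** 2                      # quadratic near zero
        lin = delta * (np.abs(r) - 0.5 * delta)  # linear in the tails
        return np.where(np.abs(r) <= delta, quad, lin).sum()

    beta_huber = minimize(huber_loss, x0=np.zeros(2), method="BFGS").x  # quasi-Newton
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)                    # closed form

    print("Huber fit:", beta_huber)
    print("OLS fit  :", beta_ols)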
Squared error is used because minimizing it gives the maximum likelihood estimate under the assumption that the observation noise is normally distributed, not because it has an analytical solution.
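Spelling that out (standard textbook algebra, nothing specific to this thread): under the model y_i = x_i'β + ε_i with ε_i ~ N(0, σ²), the log-likelihood is

    \log L(\beta) = \sum_{i=1}^{n} \log\!\left[ \frac{1}{\sqrt{2\pi\sigma^2}}
                    \exp\!\left( -\frac{(y_i - x_i^\top \beta)^2}{2\sigma^2} \right) \right]
                  = -\frac{n}{2}\log(2\pi\sigma^2)
                    - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - x_i^\top \beta)^2,

so maximizing it over β is exactly minimizing the sum of squared errors, whatever σ is.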
I think that as a field, Machine Learning is the exception rather than the norm, in that people start off with, or proceed rapidly to, non-linear models, huge datasets and (stochastic) gradient-based solvers.
Gaussianity of errors is more of a post-hoc justification (which is often not even tested) for fitting with OLS.
Even the most popular more complicated models, like multilevel (linear) regression, make use of the mathematical convenience of the squared error, even though the solutions aren't fully analytical.
Squared error indeed gives maximum likelihood estimates under normally distributed noise, but as I said, this assumption is quite often implicit, and not even really well understood by many practitioners.
Analytical solutions for squared errors have a long history in more or less all fields that use regression and related models, and there's a lot of inertia behind them. E.g. ANOVA is still the default method in many fields (although it is being replaced by multilevel regression). This history is mainly due to analytical convenience, since the solutions used to be computed on paper. That doesn't mean the normality assumption is not often justifiable. And when it isn't directly, the traditional fix is to transform the variables to get (approximately) normally distributed ones, so the analytical solutions still apply.
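A quick sketch of that check-and-transform step, on synthetic skewed data (everything here is assumed for illustration; a QQ plot would do the same job visually):

    # Sketch: residual normality before and after a log transform of the response.
    # The data is synthetic with multiplicative noise, so the raw-scale fit is skewed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.uniform(1, 10, 200)
    y = np.exp(0.5 * x + rng.normal(0, 0.4, 200))
    X = np.column_stack([np.ones_like(x), x])

    def residuals(response):
        beta, *_ = np.linalg.lstsq(X, response, rcond=None)
        return response - X @ beta

    # Shapiro-Wilk: a small p-value means the residuals look non-normal.
    _, p_raw = stats.shapiro(residuals(y))
    _, p_log = stats.shapiro(residuals(np.log(y)))
    print("raw scale p =", p_raw, " log scale p =", p_log)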
I did the Stats I -> II -> III pipeline at uni, but you should be fitting basic linear models by the end of Stats I.
And for actual gradient descent code, here is an older example of mine in PyTorch: https://github.com/stared/thinking-in-tensors-writing-in-pyt...
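Not the code from that repo, just a minimal sketch of the same idea: a one-variable linear model fitted by gradient descent in PyTorch, with autograd doing the differentiation.

    # Minimal sketch: linear regression by gradient descent in PyTorch (synthetic data).
    import torch

    torch.manual_seed(0)
    x = torch.rand(100, 1) * 10
    y = 3.0 * x + 5.0 + torch.randn(100, 1)          # true slope 3, intercept 5, plus noise

    w = torch.zeros(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    optimizer = torch.optim.SGD([w, b], lr=0.01)

    for step in range(3000):
        optimizer.zero_grad()
        loss = ((x * w + b - y) ** 2).mean()         # mean squared error
        loss.backward()                              # autograd: d(loss)/dw, d(loss)/db
        optimizer.step()

    print(w.item(), b.item())                        # should land near 3 and 5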
It's interesting to continue the analysis into higher dimensions, which have stationary points that require looking at the matrix properties of a specific type of second-order derivative, the Hessian: https://en.wikipedia.org/wiki/Saddle_point
In general it's super powerful to convert data problems like linear regression into geometric considerations.
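For a concrete (and deliberately tiny) example of that second-order check, here is the classic saddle f(x, y) = x² - y², classified by the eigenvalues of its Hessian; the function is my choice, not from the comment.

    # Sketch: classify the stationary point of f(x, y) = x^2 - y^2 at the origin
    # via the eigenvalues of its (constant) Hessian.
    import numpy as np

    H = np.array([[ 2.0,  0.0],
                  [ 0.0, -2.0]])

    eigvals = np.linalg.eigvalsh(H)
    print(eigvals)
    # all positive  -> local minimum
    # all negative  -> local maximum
    # mixed signs   -> saddle point (as here: the origin is a saddle of x^2 - y^2)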