What is included in "statistical physics" that is not included in "statistical mechanics"?
“Statistical mechanics” is also used in a broad sense, just like “quantum mechanics” is often used for anything “quantum”.
I think it’s frequent. For example: https://teach-me-codes.github.io/computational-physics/the_p...
When you try to do that for real problems, it can sometimes be difficult to sample from complex probability distributions/models efficiently in a way that is representative. There are lots of tricks around that; like most topics, it's a black hole of details. But it still boils down to randomly testing options.
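To make "randomly testing options" concrete, here's a toy sketch in R (my own illustration, not taken from any of the linked code): it approximates the integral of exp(-x^2) on [0, 1] by averaging the function over uniform random draws.

    # Plain Monte Carlo: approximate E[f(X)] by averaging f over random draws.
    # With X ~ Uniform(0, 1), mean(exp(-x^2)) approximates the integral of
    # exp(-x^2) on [0, 1], which is about 0.7468.
    set.seed(42)
    n <- 1e5
    x <- runif(n)
    fx <- exp(-x^2)
    mean(fx)              # Monte Carlo estimate
    sd(fx) / sqrt(n)      # standard error shrinks like 1/sqrt(n)

The tricks mentioned above (importance sampling, MCMC and friends) are for the cases where you can't draw from the target distribution this directly.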
Look at the source code, even in C it's really short and simple: https://github.com/msuzen/isingLenzMC/blob/master/src/isingL...
Statisticians like to do this kind of intellectual inflation; there are many such scary terms with simple meanings: a "Markov chain" is a process whose next state depends only on the current state, "stochastic" is a straight-up synonym for "random"... Illegitimi non carborundum!
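If you want to see both of those plain-English definitions in action, here's a throwaway R snippet (my own toy example): a two-state Markov chain where each step is random and depends only on the current state.

    # A two-state Markov chain: each next state is drawn at random ("stochastic")
    # and depends only on the current state, nothing earlier.
    set.seed(1)
    P <- matrix(c(0.9, 0.1,    # transition probabilities out of state 1
                  0.2, 0.8),   # transition probabilities out of state 2
                nrow = 2, byrow = TRUE)
    state <- 1
    path <- integer(10000)
    for (i in seq_along(path)) {
      state <- sample(1:2, size = 1, prob = P[state, ])
      path[i] <- state
    }
    table(path) / length(path)   # long-run fractions, roughly 2/3 and 1/3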
It's not published yet, but already a classic. (Might be more intermediate than beginner, though.)
For something a bit more gentle, I also recommend chapter 29 of this book: https://www.inference.org.uk/mackay/itila/book.html
Once you understand and use this approach, you can figure out most other approaches you need to use.
https://archive.org/details/TheMonte-carloMethodlittleMathem...
Because if it isn’t in the “hype” it’s worthless, obsolete, trash…
Welcome to the new world of tech, that warms its hands by burning the old world.
Enjoy the vibes…
I don't quite agree, and it's rather melodramatic, but it really paints a picture.
Also, older papers can be of interest, but they don't usually make it to the front page of a general-audience news site unless something bigger is going on that gives them renewed interest.
Apart from intellectual appeal,
(1) There was a new paper from Google about quantum ergodicity, see https://doi.org/10.48550/arXiv.2506.10191 . So the tech community in general can benefit a lot from understanding ergodicity via this package and seeing hands-on how it is implemented; see the vignette as well, https://cran.r-project.org/web/packages/isingLenzMC/vignette...
(2) The repo is part of ergodicity research that is now being revisited from a classical point of view. The new commits are actually significant: a new dataset has been generated. See https://zenodo.org/records/17151290 , so reproducibility is amazing even after so many years.
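For anyone who wants the hands-on feel without installing anything, here is a rough base-R sketch of the kind of single-flip Metropolis update the package implements for the 1D Ising-Lenz chain (my own simplified illustration, not the isingLenzMC API; see the vignette above for the real functions):

    # Toy single-flip Metropolis for a 1D Ising chain (periodic boundary,
    # J = 1, zero external field). Illustration only, not the package code.
    set.seed(123)
    N <- 100; beta <- 0.5
    spins <- sample(c(-1L, 1L), N, replace = TRUE)
    n_sweeps <- 2000
    m <- numeric(n_sweeps)
    for (t in 1:n_sweeps) {
      for (k in 1:N) {
        i <- sample.int(N, 1)
        left  <- spins[if (i == 1) N else i - 1]
        right <- spins[if (i == N) 1 else i + 1]
        dE <- 2 * spins[i] * (left + right)             # energy change of a flip
        if (dE <= 0 || runif(1) < exp(-beta * dE)) spins[i] <- -spins[i]
      }
      m[t] <- mean(spins)                               # magnetisation per spin
    }
    mean(abs(m[-(1:500)]))   # time average after burn-in; ergodicity is what lets
                             # this stand in for the ensemble average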
The upvoting scheme cannot distinguish a hot topic of interest to 10% of readers, which delivers 10 upvotes in the first 10 minutes, from a more niche topic of interest to 5% of readers, which also gets 10 upvotes in 10 minutes.
At least it can't distinguish at that time. So things go to the front page, and future votes determine what happens!
But that initial "on the front page" boost is a nonlinearity that many good posts do not get through.
Personally, I really liked this post, and was merely asking because I was very surprised others liked it too!
rjdj377dhabsn•4mo ago
It has some really great statistical and data science packages that were well ahead of the competition 10-15 years ago. The web frameworks were good enough for dashboards and what most people were using R for.
But if you wanted to write fast and elegant non-vectorized code, R is really lacking. I left it for Julia for that reason.
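To show what I mean by vectorized vs. non-vectorized, here's a toy example of my own (timings will vary by machine):

    # The same cumulative sum as an interpreted loop vs. the vectorised builtin.
    x <- rnorm(1e6)
    slow_cumsum <- function(v) {
      out <- numeric(length(v)); acc <- 0
      for (i in seq_along(v)) {
        acc <- acc + v[i]
        out[i] <- acc
      }
      out
    }
    system.time(a <- slow_cumsum(x))   # explicit R loop: noticeably slower
    system.time(b <- cumsum(x))        # vectorised builtin backed by C
    all.equal(a, b)                    # same result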
rjdj377dhabsn•4mo ago
I'm not a fan of pandas, so I'd say Julia and R beat python at basic dataframe manipulation. Nothing beats kdb+/q at dataframes though imo.
teruakohatu•4mo ago
This makes R seem a bit disjointed, in a way that other languages aren't.
The R community should have anointed one object system and made tidyverse a core part of R.
All that said, R is fantastic and the depth of libraries is extensive. Libraries are often written by the original researchers who developed the method. At some academic institutions an R package counts as a paper.
paddleon•4mo ago
> and made tidyverse a core part of R.
Not a tidyverse fan. It doesn't scale well.
Learn data.table, which has a much more R-like interface and is fast, fast, fast even for large data sizes. It's more powerful and more expressive than pandas, and again, faster.
See https://cran.r-project.org/web/packages/data.table/vignettes...
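A minimal taste of the DT[i, j, by] idiom, assuming data.table is installed (toy data, my own example):

    library(data.table)
    dt <- data.table(id = rep(c("a", "b"), each = 5),
                     x  = 1:10,
                     y  = rnorm(10))
    # DT[i, j, by]: filter rows and compute grouped summaries in one call
    dt[x > 2, .(mean_y = mean(y), n = .N), by = id]
    dt[, z := x * 2]    # := adds/updates a column by reference, no copy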
teruakohatu•4mo ago
It's hard to generalise for all data scientists everywhere, but that is not my experience.
Data transformation (80% of the job) is very functional, so object systems don't matter much.
But when you are training neural nets in Python, you are probably using a framework of some type. Torch in R looks very object-orientation-y.
The issue is not that object orientation is fundamentally needed for data science, but that when you install a random object-oriented R library you get a random R object system or pseudo-object system that needs to be reasoned about.
It is a pity R didn't just ditch object systems or adopt a single simple system like Lua's table approach.
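To illustrate how lightweight one of those systems is, here's a tiny S3 example (my own toy, and S3 is only one of the systems in play alongside S4, reference classes and R6):

    # S3 in a nutshell: a class is just an attribute, and a "method" is any
    # function that happens to follow the generic.class naming convention.
    summary.labresult <- function(object, ...) {
      cat("lab result:", object$value, "\n")
    }
    r <- structure(list(value = 42), class = "labresult")
    summary(r)         # dispatches to summary.labresult
    class(r) <- NULL
    summary(r)         # same data, now handled by the default list summary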
shoo•4mo ago
None of this was forced by R the language; it was purely a library design choice by the folks writing the libraries. In contrast, you simply wouldn't and didn't get such library design in mainstream general-purpose programming languages (in C++ or Java, for example, some of this stuff wouldn't even type check). Similarly in Python: even though its dynamism was fertile ground for developing completely bonkers and unautomatable numeric and scientific libraries, the customs for how libraries should work were different.
This is maybe just a reflection that R and R's libraries were designed for interactive use by humans doing exploratory data analysis, model fitting, etc., unlike other programming languages, which are used to automate things or build software products that can be shipped.
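A concrete example of the kind of design I mean (base R only, my own illustration): bare column names and formulas get captured unevaluated, which is wonderful interactively and hard to imagine type-checking in C++ or Java.

    df <- data.frame(x = 1:10, y = (1:10) * 2 + rnorm(10))
    subset(df, x > 5)             # "x > 5" is evaluated inside df, not the caller
    fit <- lm(y ~ x, data = df)   # the formula y ~ x is itself a first-class object
    coef(fit)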
analog31•4mo ago
For instance in my own case, my first use of Python was outside of mainstream scientific computing. I needed something to install on lab computers, for data acquisition and automation. And it needed to be free because my employer was under a spending freeze after the 2008 financial meltdown. Oh, and I also wanted something for hobby projects, that would be equally at home on Windows or Linux.
So I think the quality of the language came first.