My impression was that it's pretty easy to do straightforward things like the examples described in the article. But when you have to do complicated or unusual things with your data, I found it very frustrating to work with. Access to the underlying data was often opaque, and at times it was difficult for me to figure out what was happening under the hood.
Does anyone here know any research areas still using R?
That's when I realised that the "modern" approach was the one taken in the article - which, obviously, I had not looked at.
1. Wrap the complicated bits in functions, then force it into the tidyverse model by abusing summarize and mutate.
2. Use data.table. It's very adaptable and handles arbitrary multiline expressions (returning a data.table if the last expression returns a list, otherwise returning the object as-is).
3. Use base R. It's not as bad as people make it out to be. You'll need to learn it anyway, if you want to do anything beyond the basics.
Also, I recommend trying Ibis. It was originally created by the creator of pandas and solves so many of these issues.
It seems like Ibis uses DuckDB as its backend by default and has Polars support as well. Given that, maybe see if Ibis works better for you than Polars. If you very specifically need Polars, using it directly will for sure be better. DuckDB is faster than Polars and has great Polars support, so depending on how Ibis is implemented it might be "better" than Polars as a data frame lib.
https://pypi.org/project/narwhals/#description
I tried really hard to use Ibis but I ran into issues where it was way easier to do some stuff in pandas/polars and had to keep coming out of Ibis to make it work so I gave up on it for the time being.
What Python desperately needs is a coordinated effort for a core data science / scientific computing stack with a unified framework.
In my opinion, if it weren't for Python's extensive use in industry and its package ecosystem, Julia would be the language of choice for nearly all data science and scientific computing uses.
That's my impression as well. Going back to the topic of the original post, pandas only partially implements the idioms of the tidyverse, so you have to mix in a lot of different forms of syntax (with lambdas to boot) to get things done. Julia is much nicer, but I find myself using PythonCall more often than I'd like.
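As a concrete illustration of that syntax mixing, here is a hypothetical dplyr-style pipeline written in pandas - note how `mutate`/`filter`/`summarise` each map to a different pandas construct, two of them needing lambdas:

```python
import pandas as pd

# Made-up data standing in for a typical tidyverse workflow.
df = pd.DataFrame({
    "species": ["a", "a", "b", "b"],
    "mass": [1.0, 2.0, 3.0, 4.0],
})

# dplyr equivalent:
#   df |> mutate(mass2 = mass * 2) |> filter(mass2 > 2) |>
#     group_by(species) |> summarise(mean_mass = mean(mass2))
result = (
    df.assign(mass2=lambda d: d["mass"] * 2)   # mutate -> assign + lambda
      .loc[lambda d: d["mass2"] > 2]           # filter -> .loc + lambda
      .groupby("species", as_index=False)      # group_by
      .agg(mean_mass=("mass2", "mean"))        # summarise -> named aggregation
)
print(result)
```

Each step works, but you are juggling keyword-argument assignment, boolean indexing, and named-tuple aggregation syntax in a single chain, where dplyr uses one verb style throughout.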
Scipy was originally supposed to provide the scientific computing stack, but then many offshoots in the direction of pandas / ibis / JAX, etc. happened. I guess that's what you get with a community-based language. MATLAB has its warts but MathWorks does manage to present a coherent stack on that end.
A few years ago I made a package called "redframes" that tried to "solve" all of my frustrations with pandas, make data wrangling feel more like R, while retaining all the best bits of Python...
Alas, it never really took off. For those curious: https://github.com/maxhumber/redframes
There is so much hype and luck to widespread adoption, you never know with these things.
(I've never used R myself, but certainly have some very strong opinions about Pandas after having written 3 books about it.)
But, recently I’ve been working with much larger scale data than R can handle (thanks to R’s base int32 limitation) and have been needing to use Python instead.
Polars feels much more intuitive and similar to `dplyr` to me for table processing than Pandas does.
I often ask my LLM of choice to “translate this dplyr call to Polars” as I’ve been learning the Polars syntax.
This is one of those decisions that I just do not understand. In your mind, why do you imagine a set of improvements won’t be made?
Otherwise, for now, working with Python and R using the reticulate package in Quarto is perfect for my needs.
If the Positron IDE could get in-line plot visualization in Quarto documents like the RStudio IDE has, I’d be the happiest camper.