> Many Computer Science (CS) programs still center around problems that AI can now solve competently.
Yeah. No, they do not. Competent CS programs focus on fundamentals, not your ability to invert a binary tree on a whiteboard. [1]
Replacing linear algebra and discrete mathematics with courses called "Baby's First LLM" and "Prompt Engineering for Hipster Doofuses" is as vapid as proposing that CS should include an entire course on how to use git.
Knowing how to swap two variables and traverse data structures is fundamental.
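For what it's worth, here's a trivial sketch of those two fundamentals in Python (the tree and values are made up):

```python
# Swapping two variables (Python's tuple unpacking makes this a one-liner).
a, b = 1, 2
a, b = b, a  # now a == 2, b == 1

# A minimal binary tree and a recursive in-order traversal.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def in_order(node):
    """Visit the left subtree, then the node, then the right subtree."""
    if node is None:
        return
    in_order(node.left)
    print(node.value)
    in_order(node.right)

in_order(Node(2, Node(1), Node(3)))  # prints 1, 2, 3
```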
But there are still a lot of very important concepts in CS that people should learn: performance engineering, security analysis, reliability, data structures and algorithms, and enough knowledge of how the layers below your program work that you can understand how your program runs and write code that lives in harmony with the system.
This knowledge is way more useful than a lot of people claim, especially in the era of ChatGPT.
If you’re weak on this stuff, you can easily be a liability to your team. If your whole team is weak on this stuff, you’ll collectively write terrible software.
You can teach fundamentals all day long, but on their first day of work they are going to be asked to adhere to some internal corporate process so far removed from their academic experience that they will feel like they should have just taught themselves online.
80% of software development boils down to the following (see the sketch after this list):
1. Get JSON(s) from API(s)
2. Read some fields from each JSON
3. Create a new JSON
4. Send it to other API(s)
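For the sake of argument, here's a minimal Python sketch of that loop, assuming the requests library and made-up placeholder URLs and field names:

```python
import requests

# 1. Get JSON(s) from API(s) -- placeholder endpoint.
orders = requests.get("https://api.example.com/orders").json()

# 2. Read some fields from each JSON.
# 3. Create a new JSON -- these field names are hypothetical.
payload = [
    {"id": o["order_id"], "total": o["amount"], "currency": o["currency"]}
    for o in orders
]

# 4. Send it to other API(s).
resp = requests.post("https://internal.example.com/ingest", json=payload)
resp.raise_for_status()
```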
Eventually people stopped pretending that you need a CS degree for this, and it spawned the coding bootcamp phenomenon. Alas, it was short-lived: ZIRP was killed, and as of late we've realized we don't even need humans for this kind of work!
We no longer hire junior engineers because it just isn't worth the time to train them anymore.
Most CS grads will end up in a position that has more in common with being an electrician or a plumber than an electrical engineer. The difference is that we can't really automate installing wires and pipes to the same degree that we have automated service integration and making API calls.
Really the problem is there are too many CS grads. There should be a software engineering degree.
These are just a couple of examples of things that I see juniors really struggle with: day-one basics of the profession that are consistently missed by interview processes focused on academic knowledge.
Nobody will teach you how to solve these problems online, but you will learn how to solve them by teaching yourself.
The point is that what is deemed important in academic circles is rarely important in practice, and when it is, I find it easier to explain a theory or an algorithm than to teach a developer how to use an industry-standard tool set.
We should be training devs like welders and plumbers instead of like mathematicians because, practically speaking, the vast majority of them will never use that knowledge and will have to develop an entirely new skill set the day they graduate.
Also, btw, I did eventually learn how to use Docker. I actually vaguely knew how it worked for a while, but I didn't want a Linux VM anywhere near my computer; eventually I capitulated, provided I didn't have to keep a Linux VM running all the time.
I had to take calculus, and while I think it's good at teaching problem solving, that's probably the best thing I can say about it. Statistics, which was not required, would also check that box and is far more applicable on a regular basis.
Yes, calculus is involved in the machine learning research that some PhDs will do, but heck, so is statistics.
I'm all for criticizing a lack of scientific rigor, but this bit pretty clearly shows that the author knows even less about sample sizes than the GitHub guy, so it seems a bit like the pot calling the kettle black. You certainly don't need to sample more than 25% of any population in order to draw statistical information from it.
The bit about running the study multiple times also seems kinda random.
I'm sure this study of 22 people has a lot of room for criticism but this criticism seems more ranty than 'proper analysis' to me.
Certainly? Now, who is ranting?
Reproducibility? But knowing it comes from the CEO of GitHub, who has a vested interest in the matter because AI is one of the things that will allow GitHub to maintain its position in the market (or increase revenue from their paid plans, once everyone is hooked on vibe coding, etc.), anyone would take it with a grain of salt anyway. It's like studies funded by big pharma.
It's interesting to note that for a billion people this number changes to a whopping ... 385. Doesn't change much.
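For context, that 385 is what the standard sample-size calculation gives (Cochran's formula with a finite-population correction, at 95% confidence, a ±5% margin of error, and worst-case p = 0.5), and it is nearly flat in population size. A quick check:

```python
import math

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    # Cochran's formula, then the finite-population correction.
    n0 = z**2 * p * (1 - p) / margin**2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(required_sample_size(10_000))         # 370
print(required_sample_size(1_000_000_000))  # 385
```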
I was curious: with a sample size of 22 (assuming an unbiased sample, yada yada), when estimating the proportion of people satisfying a criterion, the margin of error is roughly 21% at a 95% confidence level.
While bad, if done properly, it may still be insightful.
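A quick back-of-the-envelope check of that figure (simple random sampling, worst-case p = 0.5, 95% confidence):

```python
import math

n, z, p = 22, 1.96, 0.5  # sample size, 95% z-score, worst-case proportion
margin = z * math.sqrt(p * (1 - p) / n)
print(f"{margin:.1%}")  # 20.9% -- roughly the 21% quoted above
```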
> To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself.
He never said this. It is just false, and it seems like the author didn't even fact-check whether Hayao Miyazaki ever said it.
But yeah, sensationalism and all; people don't do the research, so unless you remember the original clip well, the misquote sticks.
It was also partly lost in translation from Japanese to English. The work the engineers sampled depicted some kind of zombie-like figures in very rough form, hence the "insult to life itself", meant quite literally.
Miyazaki was repulsed by an AI-trained zombie animation that reminded him of a friend with disabilities, so the oft-quoted part is about that zombie animation.
When the team tells him that they want to build a machine that can draw pictures like humans do, he doesn't say anything; he just stares.
Spot on; I think this every time I see AI art on my LinkedIn feed.
This seems to be the fundamental guiding ideology of LLM boosterism; the output doesn't actually _really_ matter, as long as there's lots of it. It's a truly baffling attitude.
If the AI tool generates a 30-line function that doesn't work, and you spend your time testing and fixing the 3 lines of broken logic, then by line count 27 of the 30 lines (90%) were "AI generated", even if it didn't save you any time.
That's crazy; it should really be the opposite. If someone released weights promising "X% fewer lines generated compared to Y", I'd jump on it in an instant. Most LLMs are way too verbose by default, and some are really hard to prompt into being more concise (looking at you, various Google models).
Fundamentals don't matter anymore, just say whatever you need to say to secure the next round of funding.
Looking at the original blog post, it's marketing copy, so there's no point in even reading it. The conclusion is in the headline, and the methodology is starting with what you want to say and working backward to supporting information. If it were in a more academic setting, it would be the equivalent of doing a meta-analysis and p-hacking your way to the predefined conclusion you wanted.
Applying any kind of rigour to it is pointless, but thanks for the effort.