I implemented a basic MLP that supports different activation and loss functions and is trained via mini-batch gradient descent. I wrote it from scratch, using no external libraries except Eigen (for linear algebra).
I learned how a neural network learns (all the math): how the forward pass works, how learning via backpropagation works, and how to convert all that math into code.
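For anyone curious what that math-to-code translation can look like, here is a minimal sketch (not the author's repo code) of one training loop with Eigen: a single hidden layer, sigmoid activation, MSE loss, and a mini-batch gradient descent update. The layer sizes, learning rate, and toy XOR data are all illustrative assumptions.

```cpp
#include <Eigen/Dense>
#include <iostream>

using Eigen::MatrixXd;
using Eigen::VectorXd;

// Elementwise sigmoid: a = 1 / (1 + exp(-z)).
MatrixXd sigmoid(const MatrixXd& z) {
    return (1.0 + (-z.array()).exp()).inverse().matrix();
}

// Sigmoid derivative expressed via the activation itself: a * (1 - a).
MatrixXd sigmoid_grad(const MatrixXd& a) {
    return (a.array() * (1.0 - a.array())).matrix();
}

int main() {
    // Illustrative sizes and hyperparameters, not taken from the repo.
    const int n_in = 2, n_hidden = 4, n_out = 1, batch = 8;
    const double lr = 0.5;

    // Weights and biases with small random initialization.
    MatrixXd W1 = MatrixXd::Random(n_hidden, n_in) * 0.5;
    VectorXd b1 = VectorXd::Zero(n_hidden);
    MatrixXd W2 = MatrixXd::Random(n_out, n_hidden) * 0.5;
    VectorXd b2 = VectorXd::Zero(n_out);

    // Toy mini-batch: each column is one sample; target is XOR of the inputs.
    MatrixXd X = (MatrixXd::Random(n_in, batch).array() > 0.0).cast<double>().matrix();
    MatrixXd Y = (X.row(0).array() != X.row(1).array()).cast<double>().matrix();

    for (int step = 0; step < 2000; ++step) {
        // Forward pass: z = W * a_prev + b, a = sigmoid(z).
        MatrixXd A1 = sigmoid((W1 * X).colwise() + b1);
        MatrixXd A2 = sigmoid((W2 * A1).colwise() + b2);

        // MSE loss over the batch: L = 1/(2m) * sum((a2 - y)^2).
        double loss = (A2 - Y).squaredNorm() / (2.0 * batch);

        // Backward pass: chain rule, layer by layer (the 1/m factor is
        // folded into the gradient averages below).
        MatrixXd dZ2 = (A2 - Y).cwiseProduct(sigmoid_grad(A2));
        MatrixXd dW2 = dZ2 * A1.transpose() / batch;
        VectorXd db2 = dZ2.rowwise().mean();
        MatrixXd dZ1 = (W2.transpose() * dZ2).cwiseProduct(sigmoid_grad(A1));
        MatrixXd dW1 = dZ1 * X.transpose() / batch;
        VectorXd db1 = dZ1.rowwise().mean();

        // Mini-batch gradient descent update.
        W1 -= lr * dW1;  b1 -= lr * db1;
        W2 -= lr * dW2;  b2 -= lr * db2;

        if (step % 500 == 0) std::cout << "step " << step << "  loss " << loss << "\n";
    }
}
```

Storing samples as columns keeps each layer a single matrix product per batch, which is the main payoff of writing this on top of Eigen rather than looping over individual samples.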
I’ll write a blog post soon explaining how MLPs work in plain English. My dream is to get into MIT/Harvard one day by following my passion for understanding and building intelligent systems. I’ve attached a link to my GitHub repo. Feedback is much appreciated!!
rvz•2h ago
You should also keep going and continue learning the technologies most here are petrified of learning: C++ (and Rust). You will get very far by mastering both of them.
The things you should REALLY avoid in 2025:
1. Do not fall for the hype around web frameworks, especially the JavaScript/TypeScript ecosystem.
That ecosystem has set the tech industry back 15 years, and I guarantee you it will be heavily automated by LLMs.
muchlakshay•25m ago