There is no shortage of AI tutorials available. Yet, through my personal learning journey, I found that many resources fell short in a few critical areas:
1. Comprehensive and systematic: I wanted a structured roadmap that shows where I am and where I'm going.
2. Up-to-date: Some resources are reasonably comprehensive but cover only the basics. I needed something that reflects the latest techniques, so I can keep up with current developments.
3. Practical: The content should be deep enough to help with papers, projects, or jobs, but not overwhelming.
4. Accessible anywhere: I wanted something paperless and easy to browse whenever and wherever.
Rather than constantly piecing together scattered information, I'm consolidating my learning into a single, living document, complete with references and fact-checkable sources wherever I can provide them.
What's in it?
Currently, the focus is on Large Language Models (LLMs), broken into multiple detailed sections. Each section introduces key concepts and dives deeper into technical details where necessary — especially when mathematics is essential for understanding. For example:
1.5 Positional Encoding: A comprehensive tutorial covering the most widely used encoding methods, from absolute and relative PE to modern approaches such as RoPE and YaRN.
3.2 Reinforcement Learning: A mathematically heavier section, covering concepts crucial for understanding methods such as Reinforcement Learning from Human Feedback (RLHF).
5.3 Retrieval-Augmented Generation (RAG): A practical section that ends with hands-on exercises on Colab using LangChain and LangSmith.
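As a taste of the kind of material covered, here is a minimal sketch of rotary positional embedding (RoPE): each pair of dimensions is rotated by a position-dependent angle, so that the dot product between a rotated query and a rotated key depends only on their relative offset. This is an illustrative NumPy implementation, not code from the resource itself; the function name and the choice of base are assumptions.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary positional embedding to vector(s) x at integer position `pos`.

    Pairs of dimensions (2i, 2i+1) are rotated by pos * theta_i, where
    theta_i = base^(-2i/d). Illustrative sketch only.
    """
    d = x.shape[-1]
    assert d % 2 == 0, "RoPE needs an even embedding dimension"
    half = np.arange(d // 2)
    theta = base ** (-2.0 * half / d)     # per-pair frequencies, shape (d/2,)
    angles = pos * theta                  # rotation angle for each pair
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]   # even / odd components of each pair
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin  # standard 2-D rotation per pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

The defining property is relative: rotating a query to position m and a key to position n gives the same dot product as positions m + k and n + k, which is why attention scores under RoPE depend only on token distance.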
There may be minimal ads in the future to help support the time and effort involved in maintaining and expanding the resource, but my goal remains the same: to make advanced AI knowledge freely accessible and practical for anyone who needs it.
Thanks for reading — and I hope this resource can help you on your own AI journey!