Reinforcement learning (RL) has become the engine behind some of the most significant advances in modern artificial intelligence, from defeating world champions in Go to aligning large language models with human preferences. Yet despite its central role, RL remains poorly understood by many of the practitioners who work with these systems daily. Reinforcement Learning in Action: From Foundations to Frontiers bridges the gap between classical RL theory and the cutting-edge techniques driving today’s AI breakthroughs.

The book traces a complete path from Markov Decision Processes and Bellman equations through deep RL methods (DQN, REINFORCE, Actor-Critic, PPO) to the modern landscape of LLM alignment (RLHF, DPO, SimPO, KTO), reasoning optimization (GRPO, VinePPO, MCTS), and agentic systems with tool use, memory, and multi-turn planning.

A distinguishing feature is the book’s consistent five-layer pedagogical structure: each algorithm is presented with its key characteristics, a full mathematical derivation, an honest assessment of its advantages and limitations, a complete from-scratch Python/PyTorch implementation in which variable names match the equations, and a hands-on case study with reproducible experiments. Case studies progress from Grid World navigation and CartPole control to fine-tuning language models with DPO in the Hugging Face ecosystem, training reasoning models with GRPO on mathematical benchmarks, and building a full agentic customer support system.

Written for ML engineers, researchers, and advanced students, the book provides both the conceptual depth and the implementation fluency needed to understand, build, and extend the RL systems shaping the future of AI.