Once you are familiar with the basic concepts and history, it is time to understand how neural networks work. Some courses go very light on this, but I strongly believe that, in contrast to coding, taking a look at the simple mathematics of neural networks is a great investment. For example, understanding that machines learn by multiplying vectors and matrices explains why GPUs are important, and why electricity demand is growing so fast.
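To make the "multiplying vectors and matrices" point concrete, here is a minimal sketch of a single neural-network layer in Python with NumPy; the sizes and random weights are arbitrary, chosen only for illustration:

```python
import numpy as np

# A single layer of a neural network is essentially one matrix multiplication:
# an input vector times a weight matrix, plus a bias, then a nonlinearity.
rng = np.random.default_rng(0)

x = rng.normal(size=4)        # 4 input features
W = rng.normal(size=(4, 3))   # weights connecting 4 inputs to 3 neurons
b = np.zeros(3)               # biases

h = np.maximum(0, x @ W + b)  # ReLU(x·W + b): the layer's output, shape (3,)
```

Real models stack thousands of such layers with millions of weights, and GPUs exist precisely because they can do these multiplications massively in parallel.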

Realizing that neural networks are just statistical tools that work with probabilities helps explain issues like bias and hallucinations. For this, I recommend a short book by J.D. Kelleher titled Deep Learning (2019) from MIT Press, which requires only high-school math.
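The probabilistic point can also be shown in a few lines. A language model ends in a softmax, which turns raw scores (logits) into a probability distribution over the vocabulary; the four-word vocabulary and logit values below are hypothetical:

```python
import numpy as np

def softmax(logits):
    # Subtracting the max is a standard trick for numerical stability.
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical logits for a tiny 4-word vocabulary.
logits = np.array([2.0, 1.0, 0.5, -1.0])
probs = softmax(logits)  # always sums to 1
```

Because the output is always a valid probability distribution, the model must assign some probability to every word and pick something, even when no option is actually correct. That is one intuition behind hallucinations: the machinery produces confident-looking probabilities, not verified facts.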

Link to book (purchase required)