• AlexNet paper - Krizhevsky, Sutskever & Geoffrey Hinton
    • Welch Labs - The moment we stopped understanding AI [AlexNet]
    • scaled-up LeNet-5 (LeCun)
    • activation maps: visualizing how strongly each learned kernel responds at each position of an input image (sketch after this list)
    • activation atlas: a low-dimensional, human-understandable map of the features the model has learned and of how it organises them
    • embedding space: the vectors between features (direction and distance) carry semantic meaning, e.g. king - man ≈ queen - woman (toy example after this list)
    • the input image has 3 channels (RGB); each kernel in the first conv layer spans all 3 channels and produces one output channel, and those output channels become the input depth for the next layer (so 96 kernels in layer 1 give layer 2 a 96-channel input - sketch after this list)
    • modern language models are a series of attention layers and MLPs (minimal block sketch after this list)
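
A few sketches follow, elaborating on the bullets above; they are illustrations under stated assumptions, not code from the paper or the video.

First, activation maps: assuming PyTorch and torchvision are installed, a forward hook on the first conv layer of torchvision's AlexNet variant (which uses 64 first-layer kernels rather than the paper's 96) captures one activation map per kernel.

```python
import torch
import torchvision

# Untrained AlexNet keeps the example self-contained; loading pretrained
# weights would give the actual learned feature maps discussed in the video.
model = torchvision.models.alexnet().eval()

activations = {}

def save_activation(module, inputs, output):
    # output shape: (batch, 64, 55, 55) - one activation map per kernel
    activations["conv1"] = output.detach()

# features[0] is the first Conv2d layer in torchvision's AlexNet
model.features[0].register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)  # stand-in for a real RGB image
with torch.no_grad():
    model(image)

print(activations["conv1"].shape)  # torch.Size([1, 64, 55, 55])
```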
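
The king/queen analogy as a toy calculation; the 4-dimensional vectors below are made up for illustration, not real learned embeddings.

```python
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.1, 0.8, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.8, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# if directions carry meaning, king - man + woman should land near queen
target = emb["king"] - emb["man"] + emb["woman"]
for word, vec in emb.items():
    print(word, round(cosine(target, vec), 3))  # queen scores highest
```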
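
The kernel/channel bookkeeping, again assuming PyTorch; kernel counts loosely follow the AlexNet paper (3 -> 96 -> 256), with the pooling and normalization layers between conv layers omitted.

```python
import torch
import torch.nn as nn

# each of the 96 kernels in conv1 spans all 3 RGB channels and emits 1 channel,
# so conv2 has to declare 96 input channels
conv1 = nn.Conv2d(in_channels=3, out_channels=96, kernel_size=11, stride=4)
conv2 = nn.Conv2d(in_channels=96, out_channels=256, kernel_size=5, padding=2)

x = torch.randn(1, 3, 227, 227)  # a 227x227 RGB image
h1 = conv1(x)
h2 = conv2(h1)
print(h1.shape)  # torch.Size([1, 96, 55, 55])
print(h2.shape)  # torch.Size([1, 256, 55, 55])
```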
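
Finally, a minimal attention-plus-MLP block, the unit that modern language models stack many times; the sizes here are arbitrary, chosen only to show the structure.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        h = self.norm1(x)
        a, _ = self.attn(h, h, h)        # self-attention sublayer
        x = x + a                        # residual connection
        x = x + self.mlp(self.norm2(x))  # MLP sublayer + residual
        return x

tokens = torch.randn(1, 10, 64)   # 1 sequence of 10 token embeddings
print(Block()(tokens).shape)      # torch.Size([1, 10, 64])
```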