| Topic | Content Overview | Resources |
|---|---|---|
| Python intro, JetBrains Academy plugin in PyCharm | 1. Main goals of the AI club 2. Gentle intro to Python 3. Object representation in Python 4. PyCharm, JetBrains Academy plugin | 📺 Video · 📝 Assignment: Python practice |
| NumPy & Pandas | 1. NumPy: intro and ndarray 2. Slicing and indexing 3. Concatenation and broadcasting 4. Pandas DataFrames, Titanic dataset | 📺 Video · 📝 NumPy practice |
| Intro to Machine Learning: task types, examples, quality evaluation | 1. Types of ML tasks, mathematical formulation 2. Linear regression in sklearn 3. Overfitting, train/test split 4. Significant events in machine learning history | 📺 Video · 📝 Pandas practice |
| Linear models and stochastic gradient descent | 1. Examples of loss functions 2. Linear classifier 3. Stochastic gradient descent 4. Regularization | 📺 Video · 📝 Gradient Descent |
| Logical rules and decision trees | 1. Logical rules 2. Decision trees 3. ID3 algorithm 4. Measures of uncertainty | 📺 Video · 📝 LR and DT in Cogniterra |
| Ensembles, gradient boosting and random forest | 1. Simple and weighted voting, mixture of experts 2. Boosting, bagging, RSM 3. XGBoost, CatBoost, LightGBM 4. Random forest | 📺 Video · 📝 Kaggle Competition |
| Intro to neural networks and backpropagation | 1. Rise of neural networks 2. Expressive power of neural networks 3. Backpropagation algorithm | 📺 Video · 📝 Simple Neural Network |
| Recurrent neural network basics | 1. Disadvantages of feed-forward neural networks 2. Architectures of recurrent networks 3. Vanilla RNN, LSTM, GRU | 📺 Video · 📝 Contest 1 in Cogniterra |
| PyTorch tutorial | 1. Working with tensors 2. Autograd and a neural network example 3. Demo model for word classification | 📺 Video · 📝 PyTorch Introduction |
| Language modeling: bigrams and multilayer perceptron | 1. Makemore at the bigram level 2. Adding a prior for more robust predictions 3. Multilayer perceptron | 📺 Video · 📝 Intro to LM in Cogniterra |
| Attention and transformers | 1. Disadvantages of RNNs 2. Attention mechanism 3. Transformers, BERT | 📺 Video · 📝 Backpropagation and MLP |
| Building GPT-2 from scratch | 1. "Attention Is All You Need" 2. Math trick in self-attention 3. Layer normalization and dropout | 📺 Video · 📝 Transformers |
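
As a taste of the NumPy & Pandas session, here is a minimal sketch (our own illustration, not club material) of the slicing and broadcasting topics listed above:

```python
import numpy as np

# A 3x4 array of sample values.
a = np.arange(12).reshape(3, 4)

# Slicing: rows 0-1, columns 1-2 (end-exclusive on both axes).
sub = a[:2, 1:3]

# Broadcasting: subtract each column's mean from every row.
# col_means has shape (4,), which broadcasts against shape (3, 4).
col_means = a.mean(axis=0)
centered = a - col_means

print(centered.mean(axis=0))  # every column now has mean 0
```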
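
The "Linear models and stochastic gradient descent" session can be previewed with a short sketch (synthetic data and hyperparameters are our own choices): fitting y ≈ wx + b by updating the parameters one sample at a time along the negative gradient of the squared loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 plus a little noise.
X = rng.uniform(-1, 1, size=(200, 1))
y = 2 * X[:, 0] + 1 + 0.1 * rng.normal(size=200)

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(50):
    for i in rng.permutation(len(X)):   # one random sample at a time
        err = (w * X[i, 0] + b) - y[i]  # gradient of 0.5 * err**2
        w -= lr * err * X[i, 0]
        b -= lr * err

print(w, b)  # should land close to the true 2 and 1
```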
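
The bigram-level language modeling session ("Makemore at the bigram level" plus "adding a prior") can likewise be sketched in a few lines; the toy corpus here is our own, not from the course:

```python
import numpy as np

# Toy corpus; "." marks word boundaries, as in makemore.
words = ["emma", "ava", "mia"]
chars = sorted(set("".join(words)) | {"."})
stoi = {c: i for i, c in enumerate(chars)}

# Count character-bigram transitions.
counts = np.zeros((len(chars), len(chars)))
for w in words:
    padded = "." + w + "."
    for c1, c2 in zip(padded, padded[1:]):
        counts[stoi[c1], stoi[c2]] += 1

# Add-one smoothing (the "prior") avoids zero probabilities for
# unseen bigrams; each row then normalizes to a distribution over
# the next character.
probs = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)
```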