Neural Network Programming with Python - An Introduction
Duration: 5 Days
Course Overview, Intended Audience and Prerequisites
This course is aimed at those who have some Python programming experience and want an introduction to Neural Networks and Machine Learning. They may, for example, wish to prepare demonstrations and teaching materials, or to gain an overview of Machine Learning and Neural Networks with a view to embarking on a machine learning project. This course also serves as a foundation for a course in Deep Learning using Python.
As well as teaching some more advanced Python programming techniques, the course uses the Scikit-Learn framework to demonstrate how to create and train neural networks, and how to evaluate the quality of the learning.
Course Contents
- Neural Networks and Machine Learning - an Introduction
- Neurons, Neural layers, Activation functions
- Neural Networks
- Learning abilities of Neural Networks
- Perceptrons and Supervised Learning
- Supervised learning and multi-layer perceptrons
- Learning in multi-layer perceptrons
- Limitations of multi-layer perceptrons
- Essential Mathematical Underpinnings and Concepts
- Linear prediction
- Maximum likelihood
- Regularisers
- Basis functions
- Cross-validation
- Optimisation
- Logistic regression
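To give a flavour of how these mathematical ideas appear in practice, here is a minimal logistic-regression sketch in scikit-learn. The toy data and parameter values are illustrative assumptions, not course material; `C` is the inverse of the regularisation strength discussed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary classification data: label is 1 when the point
# lies above the line y = x (illustrative assumption)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(int)

# L2 regularisation is the default; C is the inverse regularisation strength
clf = LogisticRegression(C=1.0)
clf.fit(X, y)

train_acc = clf.score(X, y)
print(train_acc)
```

Because the classes here are linearly separable, the fitted model recovers the decision boundary almost exactly.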
- Intensive Overview of Scikit-Learn, Numpy and Matplotlib
- Loading an example dataset
- Classification
- k-Nearest neighbors classifier
- Support vector machine (SVM) classification
- K-means clustering
- Principal Component Analysis
- Sparse models
- Model selection - choosing estimators and their parameters
- Grid-search and cross-validated estimators
- Simple face recognition case study
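The classification workflow this section introduces can be sketched in a few lines. This is a hedged illustration, not course material: the iris dataset, the train/test split ratio and `k = 5` are arbitrary choices made for the example.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a bundled example dataset: 150 samples, 4 features, 3 classes
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A k-nearest-neighbours classifier with k = 5
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

test_acc = knn.score(X_test, y_test)
print(test_acc)
```

The same `fit`/`score` pattern carries over to the SVM, clustering and PCA estimators listed above, which is what makes scikit-learn convenient for comparing methods.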
- Multi-layer Neural Networks
- Sigmoid neurons
- Input layer + Hidden layer + Output layer architecture
- Backpropagation and layer design of neural networks
- Supervised vs. Unsupervised Learning
- Introduction to Restricted Boltzmann Machines
- Implementing a Multi-Layer Perceptron and Simple Back Propagation using Scikit-Learn
- Multi-layer Perceptron - Maths and Architecture
- Classification - MLPClassifier
- Regression - MLPRegressor
- Regularisation
- Algorithms - Stochastic Gradient Descent, Adam, L-BFGS
- Performance Limitations
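A multi-layer perceptron of the kind this section builds might look as follows. The digits dataset, the single 50-unit hidden layer and the `max_iter` value are illustrative assumptions; the Adam solver and the `alpha` regularisation parameter are among those listed above.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the inputs first: MLP training is sensitive to feature scaling
scaler = StandardScaler().fit(X_train)

# One hidden layer of 50 units, trained with the Adam optimiser;
# alpha is the L2 regularisation strength
mlp = MLPClassifier(hidden_layer_sizes=(50,), solver='adam',
                    alpha=1e-4, max_iter=300, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)

test_acc = mlp.score(scaler.transform(X_test), y_test)
print(test_acc)
```

Swapping `MLPClassifier` for `MLPRegressor` gives the regression counterpart with the same interface.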
- Background to Unsupervised Learning
- Gaussian mixture models
- Manifolds and learning manifolds
- Clustering
- Principal component analysis
- Covariance estimation
- Novelty and outlier detection
- Unsupervised Neural Network Models
- Restricted Boltzmann Machines
- Stochastic Maximum Likelihood Learning
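Scikit-learn's `BernoulliRBM`, which is trained by stochastic maximum likelihood (persistent contrastive divergence), can be exercised on a small binary dataset like this. The toy patterns and hyperparameter values are illustrative assumptions for the sketch.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Binary toy data: 500 samples drawn from two 6-bit patterns
# (illustrative assumption, not course material)
rng = np.random.default_rng(0)
patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]])
X = patterns[rng.integers(0, 2, size=500)]

# A small RBM with 2 hidden units, fitted by SML/PCD
rbm = BernoulliRBM(n_components=2, learning_rate=0.05,
                   n_iter=20, random_state=0)
rbm.fit(X)

print(rbm.components_.shape)  # (n_hidden_components, n_visible_features)
```

The learned `components_` are the weights connecting the visible layer to the hidden layer, one row per hidden unit.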
- Model Selection and Evaluation
- Cross-validation: evaluating estimator performance
- Tuning parameters of an estimator
- Model Evaluation - Cross Validation
- Evaluation Curves
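Cross-validated evaluation and parameter tuning, as covered in this section, follow a standard scikit-learn pattern. The iris dataset, the RBF-kernel SVM and the grid of `C` values are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: five train/test splits, five scores
scores = cross_val_score(SVC(kernel='rbf'), X, y, cv=5)
print(scores.mean())

# Grid search over C, itself scored by cross-validation on each candidate
grid = GridSearchCV(SVC(kernel='rbf'), {'C': [0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```

`grid.best_estimator_` is then a fitted model using the best parameter found, ready for use on new data.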
- Dataset processing and transformations
- Dataset preparation and loading
- Feature extraction
- Preprocessing data
- Pipeline and FeatureUnion - combining estimators
- Unsupervised dimensionality reduction
- Random Projection
- Pairwise metrics, Affinities and Kernels
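The `Pipeline` mechanism this section covers chains preprocessing, dimensionality reduction and an estimator into one object. The breast-cancer dataset and the choice of 10 principal components are illustrative assumptions for the sketch.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Chain scaling, unsupervised dimensionality reduction and a classifier;
# calling fit() runs each step in order on the training data
pipe = Pipeline([
    ('scale', StandardScaler()),
    ('pca', PCA(n_components=10)),
    ('clf', LogisticRegression(max_iter=1000)),
])
pipe.fit(X, y)

train_acc = pipe.score(X, y)
print(train_acc)
```

Because the pipeline is itself an estimator, it can be dropped directly into cross-validation or a grid search, with step parameters addressed as, for example, `pca__n_components`.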
- Scaling and Performance
- Strategies for handling large data sets
- Prediction latency issues
- Prediction throughput issues
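One strategy for large datasets is out-of-core learning: streaming the data in mini-batches through `partial_fit` rather than loading it all at once. The synthetic stream and batch size below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # must be declared on the first partial_fit call

# Simulate a data stream in mini-batches of 64 samples;
# partial_fit updates the model incrementally (out-of-core learning)
for _ in range(100):
    Xb = rng.normal(size=(64, 2))
    yb = (Xb[:, 0] + Xb[:, 1] > 0).astype(int)
    clf.partial_fit(Xb, yb, classes=classes)

# Evaluate on fresh samples from the same distribution
Xt = rng.normal(size=(200, 2))
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(int)
test_acc = clf.score(Xt, yt)
print(test_acc)
```

The full dataset never needs to fit in memory, which is the key property when training on data larger than RAM.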
Call us:
Technical enquiries: 020 8669 0769
Sales enquiries: 020 8647 1939, 020 77681 40786