1 line
7.7 KiB
Plaintext
1 line
7.7 KiB
Plaintext
{"cells":[{"cell_type":"markdown","metadata":{"id":"Habepm5vJCD-"},"source":["# Lab 5 - Perceptron (single layer)"]},{"cell_type":"markdown","metadata":{"id":"9FN7V0bJJCD_"},"source":["The purpose of this lab is for you to familiarise yourself with the Python toolbox for implementing a Perceptron (single layer).\n","\n","You will use the `Perceptron` function from the package `sklearn.linear_model`. Here is a link to the documentation, which you will need to refer to frequently as you work through this lab:\n","\n","https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Perceptron.html\n","\n","As usual, we will also import `numpy` and `matplotlib.pyplot`.\n","\n","\n"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"7oL7NSl0JCD_"},"outputs":[],"source":["import numpy as np\n","import matplotlib.pyplot as plt\n","\n","from sklearn.linear_model import Perceptron"]},{"cell_type":"markdown","metadata":{"id":"u77FZsFDJCEA"},"source":["# 1. Linearly separable data"]},{"cell_type":"markdown","metadata":{"id":"VzXItPEIJCEA"},"source":["Generate 100 instances of the normal random variables `X` and `Y`, where the mean of $(X,Y)=(2,0)$, $X$ and $Y$ both have variance 1, and $X$ and $Y$ are uncorrelated (covariance is 0). (You may refer back to Lab 2.)"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"d4PVVVgaJCEA"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"wz5jMVGNJCEA"},"source":["Generate 100 instances of the normal random variables `U` and `V`, where the mean of $(U,V)=(10,0)$, $U$ and $V$ both have variance 1, and $U$ and $V$ are uncorrelated."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"hE0HRWd1JCEA"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"gfN-eI_UJCEA"},"source":["You should have two 100 x 2 arrays. Concatenate these two arrays into one 200 x 2 array and create a corresponding array of class labels: 100 zeros, followed by 100 ones."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"QWtinaXMJCEA"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"Un0xdPSUJCEA"},"source":["Make a scatter plot of your data, showing the $(X,Y)$ data in red and the $(U,V)$ data in blue."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"vBIbGS1hJCEA"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"wkRhn_MBJCEA"},"source":["We are considering the $(X,Y)$ data to be from one class and the $(U,V)$ data to be from another class. These classes should be linearly separable, that is you should be able to draw a line that has all of class 1 on the left and all of class 2 on the right (in the unlikely scenario that this is not the case, then generate a new random sample!)."]},{"cell_type":"markdown","metadata":{"id":"L69aXKGWJCEA"},"source":["Now have a go at training a perceptron to be able to classify the datapoints. (For simplicity here, do not split the data into a training and testing dataset - just train and test on the whole dataset.) Begin by just running the `Perceptron` model with default settings."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"MnA3VGkfJCEA"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"Tz5B1z4mJCEA"},"source":["Batch learning will have been carried out. Have a look through what some of these default settings are that have been applied. (Note we have not discussed regularisation in the context of the perceptron.) Print out how many iterations of batch learning were performed to train the perceptron."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"YxbMAiRpJCEA"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"_Rh2ez8hJCEA"},"source":["Now run the `predict` function to obtain predictions of the class of each data point - see Methods at https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Perceptron.html"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"B9OQl2OTJCEC"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"Sn3tolE3JCEC"},"source":["Import from `sklearn.metrics` functions to compute the *accuracy* and the *confusion matrix*, and display these for your data. https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"nxrabK8VJCEC"},"outputs":[],"source":["from sklearn.metrics import accuracy_score, confusion_matrix\n","\n"]},{"cell_type":"markdown","metadata":{"id":"dXokepU-JCEC"},"source":["You should have obtained 100% accuracy!"]},{"cell_type":"markdown","metadata":{"id":"yCuTg7wHJCEC"},"source":["Plot the decision boundary and a scatter plot of the data again, on the same graph. The decision boundary is the line:\n","\n","\\begin{equation}\n","y=-\\frac{w_1}{w_2}x - \\frac{w_0}{w_2}\n","\\end{equation}\n","\n","where $w_1$ and $w_2$ are the 2 weights (stored in an array called *`coef_`*), and $w_0$ is the bias (stored in a variable called *`intercept_`*) of your trained perceptron.\n","\n","[NB: although `sklearn` calls the bias here the intercept, this is not the same thing as the $y$-intercept of the decision boundary, which is $-w_0/w_2$.]"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"RIkAzpklJCEC"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"hiuLwq6mJCEC"},"source":["## Optional extension"]},{"cell_type":"markdown","metadata":{"id":"Zl9uQ_7GJCEC"},"source":["Implement sequential learning (update the weights based on presentation of a single data point, rather than only after presentation of all 200 data points). For this you will need the `partial_fit` function from your `Perceptron`. Ideally, you should present the data in a novel random order for each epoch."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"jzh4LZJnJCEC"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"Hry60kXcJCEC"},"source":["# 2. Non-linearly separable data"]},{"cell_type":"markdown","metadata":{"id":"s1AJGCY9JCEC"},"source":["Repeat the above exercise with data from two non-linearly separable classes (simply bring the means closer or make the\n","standard deviations larger, so that the data from the two classes overlaps)."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"HJPeSaSIJCEC"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"D3zd7tt3JCEC"},"source":["Play around with some of the hyperparameters, including the learning rate, and the criterion for stopping training (get training to stop when accuracy is no longer improving very much). Play around also with how much overlap there is in the data from the two classes (vary how close the means are to each other).\n","\n","What do you observe? Do you always find the best decision boundary possible? When do you get the best, or most efficient result?"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"1_9rx_8rJCEC"},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{"id":"Z2zRhsRpJCEC"},"source":["## Optional extensions"]},{"cell_type":"markdown","metadata":{"id":"SsJcinEnJCEC"},"source":["1. Play around with sequential learning on the non-linearly separable data.\n","\n","2. Train a perceptron to distinguish the two non-linearly separable species of iris, *versicolor* and *virginica*, on Fisher's iris dataset (see Lab 1)."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"1ICmfsXeJCEC"},"outputs":[],"source":[]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.7.4"},"colab":{"provenance":[]}},"nbformat":4,"nbformat_minor":0} |