Google Colaboratory

Theory Problem 1

Answer: 4. "Unsupervised learning can be trained even when the data has no training set." -> More precisely, it is the case where the training set has no target (labels).
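A minimal sketch of that point: an unsupervised model such as KMeans still trains on a dataset (the features), it just never sees `iris.target`. The choice of KMeans here is illustrative, not part of the original problem.

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

iris = load_iris()

# Fit on the features only -- iris.target is never passed,
# so no label information is used during training.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(iris.data)

print(clusters[:10])  # cluster ids assigned without any targets
```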

Theory Problem 2

3. "Splitting the dataset Train set : Test set = 1 : 1 can cause overfitting." -> Debatable: training on only half the data makes underfitting, rather than overfitting, the more likely problem.

5. "The purpose of the validation set is the same as that of the test set." -> The validation set is used to evaluate the model during development and to tune it based on those results. The test set is used once, at the very end, to confirm roughly how well the model will perform in practice.
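The distinction above can be made concrete with a three-way split (a sketch; the 60/20/20 proportions are an arbitrary choice, not from the problem):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()

# First split off the test set, then carve a validation set
# out of the remainder.
X_tmp, X_test, y_tmp, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, random_state=0)  # 0.25 * 0.8 = 0.2 of the total

# The validation set is for tuning; the test set is touched once, at the end.
print(len(X_train), len(X_val), len(X_test))  # 90 30 30
```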

Practice Problem 1

from sklearn.datasets import load_iris
import matplotlib.pyplot as plt

iris = load_iris()

# Scatter plot of the first two features: sepal length vs. sepal width.
plt.scatter(iris.data[:, 0], iris.data[:, 1], c='r')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()

# Scatter plot of the last two features: petal length vs. petal width.
plt.scatter(iris.data[:, 2], iris.data[:, 3], c='b')
plt.xlabel(iris.feature_names[2])
plt.ylabel(iris.feature_names[3])
plt.show()

[Figure: sepal length (cm) vs. sepal width (cm) scatter plot]

[Figure: petal length (cm) vs. petal width (cm) scatter plot]

Practice Problem 2

from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import numpy as np

iris = load_iris()

# Standardize each feature to zero mean and unit variance.
mean = np.mean(iris.data, axis=0)
std = np.std(iris.data, axis=0)
iris_scaled = (iris.data - mean) / std

# Same scatter plots as before, on the standardized features.
plt.scatter(iris_scaled[:, 0], iris_scaled[:, 1], c='r')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()

plt.scatter(iris_scaled[:, 2], iris_scaled[:, 3], c='b')
plt.xlabel(iris.feature_names[2])
plt.ylabel(iris.feature_names[3])
plt.show()

[Figure: standardized sepal length vs. sepal width scatter plot]

[Figure: standardized petal length vs. petal width scatter plot]
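As a sanity check, the manual `(x - mean) / std` standardization above matches what scikit-learn's `StandardScaler` computes, since both use the per-column mean and population standard deviation:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler

iris = load_iris()

# Manual standardization, as in the exercise above.
manual = (iris.data - np.mean(iris.data, axis=0)) / np.std(iris.data, axis=0)

# StandardScaler applies the same (x - mean) / std transform per column.
scaled = StandardScaler().fit_transform(iris.data)

print(np.allclose(manual, scaled))  # True
```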

Practice Problem 3

from sklearn import datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

wine = datasets.load_wine(as_frame=True)

# Standardize the features before distance-based classification.
x_scaled = preprocessing.scale(wine.data)

# Hold out 25% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(x_scaled, wine.target, test_size=0.25)

# k-nearest neighbors classifier with k = 5.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
knn.score(X_test, y_test)

Role of test_size: specifies the proportion (or absolute count) of samples assigned to the test set; if not specified, it defaults to 0.25.

Role of n_neighbors: in the k-nearest neighbors algorithm, this parameter sets how many of the closest data points are considered when predicting the value for a given data point.
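The effect of `n_neighbors` can be seen by rerunning the classifier above for a few values of k (a sketch; the particular k values and `random_state=0` are arbitrary choices for reproducibility):

```python
from sklearn import datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

wine = datasets.load_wine()
x_scaled = preprocessing.scale(wine.data)
X_train, X_test, y_train, y_test = train_test_split(
    x_scaled, wine.target, test_size=0.25, random_state=0)

# Score the classifier for several neighborhood sizes.
scores = {}
for k in (1, 3, 5, 11):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    scores[k] = knn.score(X_test, y_test)

print(scores)
```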