I'm not sure why, but Nowcoder (牛客网) suddenly seems to be everywhere lately. Curiosity got the better of me, so I clicked around… and it's actually pretty good.

Machine learning is a newly added section under Python; in fact, there are only 5 problems, haha.

PS: The problems are simple and fundamental, genuinely well suited for machine learning beginners to check their early progress.

While there are still so few problems, solving each one as it comes out should feel pretty rewarding.

Iris Classification 1

Original problem: 鸢尾花分类_1_牛客题霸_牛客网 (nowcoder.com)

With a Gaussian naive Bayes classifier, accuracy on this problem's fixed train/test split comes out to 100%.

# Naive Bayes: Gaussian class-conditional densities fit the 4 continuous iris features
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

def train_and_predict(train_input_features, train_outputs, prediction_features):
    G = GaussianNB()
    G.fit(train_input_features, train_outputs)  # fit per-class feature means/variances
    y_pred = G.predict(prediction_features)     # classify the held-out samples
    return y_pred

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
                                                    test_size=0.3, random_state=0)
y_pred = train_and_predict(X_train, y_train, X_test)
if y_pred is not None:
    print(accuracy_score(y_pred, y_test))
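
The 100% is specific to this particular 70/30 split with random_state=0. For a quick sanity check that GaussianNB is genuinely strong on iris rather than just lucky, here is a minimal cross-validation sketch (my own addition, using sklearn's cross_val_score; the 5-fold choice is arbitrary):

from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

iris = datasets.load_iris()
# 5-fold cross-validation: five different train/test splits instead of one
scores = cross_val_score(GaussianNB(), iris.data, iris.target, cv=5)
print(scores.mean())  # averaged over folds, typically a bit below 1.0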

Iris Classification 2

Original problem: 鸢尾花分类_2_牛客题霸_牛客网 (nowcoder.com)

I used a decision tree model; with default parameters, this binary classification problem also scores 100% accuracy.

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

def transform_three2two_cate():
    # Drop class 2 so the 3-class iris problem becomes binary (classes 0 and 1)
    data = datasets.load_iris()
    new_data = np.hstack([data.data, data.target[:, np.newaxis]])
    new_feat = new_data[new_data[:, -1] != 2][:, :4]
    new_label = new_data[new_data[:, -1] != 2][:, -1]
    return new_feat, new_label

def train_and_evaluate():
    data_X, data_Y = transform_three2two_cate()
    train_x, test_x, train_y, test_y = train_test_split(data_X, data_Y, test_size=0.2)
    DT = DecisionTreeClassifier()
    DT.fit(train_x, train_y)
    y_predict = DT.predict(test_x)
    print(accuracy_score(y_predict, test_y))

if __name__ == "__main__":
    train_and_evaluate()
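
One caveat: train_test_split above is called without random_state, so the split (and, in principle, the printed accuracy) can change between runs. A minimal tweak if you want reproducible output, reusing the names from the function above:

# Fixing the seed makes the split, and therefore the score, deterministic
train_x, test_x, train_y, test_y = train_test_split(data_X, data_Y,
                                                    test_size=0.2, random_state=0)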

Computing Information Entropy

Original problem: 决策树的生成与训练-信息熵的计算_牛客题霸_牛客网 (nowcoder.com)

This one is very simple. My approach: read the provided data into a NumPy ndarray, take out the last column (the class labels), and apply the entropy formula directly.
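
For reference, the formula being applied is the Shannon entropy of the label distribution, where $p_k$ is the fraction of samples belonging to class $k$:

$$H(D) = -\sum_k p_k \log_2 p_k$$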

import numpy as np
import pandas as pd
from collections import Counter

dataSet = pd.read_csv('dataSet.csv', header=None).values[:, -1]  # keep only the label column

def calcInfoEnt(dataSet):
    numEntres = len(dataSet)
    cnt = Counter(dataSet)  # count how many times each label value occurs
    probability_lst = [1.0 * cnt[i] / numEntres for i in cnt]
    return -np.sum([p * np.log2(p) for p in probability_lst])  # H = -sum(p * log2 p)

if __name__ == '__main__':
    print(calcInfoEnt(dataSet))
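
A quick sanity check, assuming calcInfoEnt is in scope: a 50/50 label split should come out to exactly 1 bit.

import numpy as np
# Two classes, equally likely: entropy should be 1.0
print(calcInfoEnt(np.array([0, 0, 1, 1])))  # -> 1.0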

Computing Information Gain

Original problem: 决策树的生成与训练-信息增益_牛客题霸_牛客网 (nowcoder.com)
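
The idea: the information gain of a feature $a$ is the total entropy of the labels minus the expected entropy remaining after splitting on $a$, where $D^v$ is the subset of samples taking value $v$ on feature $a$:

$$\mathrm{Gain}(D, a) = H(D) - \sum_v \frac{|D^v|}{|D|} H(D^v)$$

The code below computes this for each of the four features and returns the one with the largest gain.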

import numpy as np
import pandas as pd
from collections import Counter

dataSet = pd.read_csv('dataSet.csv', header=None).values.T  # transpose into a 5x15 array: 4 feature rows + 1 label row

def entropy(data):  # data: 1-D array of labels
    numEntres = len(data)
    cnt = Counter(data)  # count occurrences of each value, e.g. Counter({1: 8, 0: 5})
    probability_lst = [1.0 * cnt[i] / numEntres for i in cnt]
    return -np.sum([p * np.log2(p) for p in probability_lst])  # Shannon entropy

def calc_max_info_gain(dataSet):
    label = np.array(dataSet[-1])
    total_entropy = entropy(label)
    max_info_gain = [0, 0]
    for feature in range(4):  # the 4 features, indexed 0 1 2 3
        f_index = {}
        for idx, v in enumerate(dataSet[feature]):
            if v not in f_index:
                f_index[v] = []
            f_index[v].append(idx)
        f_impurity = 0
        for k in f_index:
            # Use the row indices for this feature value to pull out the matching labels,
            # i.e. how many positives/negatives land in each branch
            f_l = label[f_index[k]]
            f_impurity += entropy(f_l) * len(f_l) / len(label)  # weighted sum = expected entropy after the split
        gain = total_entropy - f_impurity  # information gain IG
        if gain > max_info_gain[1]:
            max_info_gain = [feature, gain]
    return max_info_gain

if __name__ == '__main__':
    info_res = calc_max_info_gain(dataSet)
    print("Feature index with the largest information gain: {0}, corresponding gain: {1}".format(info_res[0], info_res[1]))
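
To convince yourself the function behaves, here is a toy check on a hypothetical 4-sample dataset (my own example, not the problem's dataSet.csv): feature 0 determines the label exactly, so it should win with a gain of one full bit.

import numpy as np

toy = np.array([
    [0, 0, 1, 1],  # feature 0: identical to the labels -> gain 1.0
    [0, 1, 0, 1],  # feature 1: independent of the labels -> gain 0.0
    [0, 0, 0, 0],  # feature 2: constant -> gain 0.0
    [1, 1, 0, 0],  # feature 3: also perfectly predictive
    [0, 0, 1, 1],  # label row
])
print(calc_max_info_gain(toy))  # -> [0, 1.0] (feature 3 ties, but feature 0 is found first)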

Training Logistic Regression with Gradient Descent

Original problem: 使用梯度下降对逻辑回归进行训练_牛客题霸_牛客网 (nowcoder.com)
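
There isn't much to the algorithm itself: with hypothesis $h = \sigma(Xw)$, each full-batch iteration nudges the weights against the gradient of the log-loss,

$$w \leftarrow w - \alpha \, X^{\top}\left(\sigma(Xw) - y\right)$$

which is exactly the two lines inside the for loop below.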

import numpy as np
import pandas as pd

def generate_data():
    datasets = pd.read_csv('dataSet.csv', header=None).values.tolist()
    labels = pd.read_csv('labels.csv', header=None).values.tolist()
    return datasets, labels

def sigmoid(X):
    hx = 1 / (1 + np.exp(-X))
    return hx

def gradientDescent(dataMatIn, classLabels):
    alpha = 0.001  # learning rate, the α from the problem statement
    iteration_nums = 100  # number of iterations, i.e. how many times the for loop runs
    dataMatrix = np.mat(dataMatIn)
    labelMat = np.mat(classLabels).transpose()
    m, n = np.shape(dataMatrix)  # dataMatrix dimensions: m rows, n columns
    weight_mat = np.ones((n, 1))  # initialize the weight matrix
    for i in range(iteration_nums):
        hx = sigmoid(dataMatrix * weight_mat)
        weight_mat -= alpha * dataMatrix.transpose() * (hx - labelMat)  # batch gradient step
    return weight_mat

if __name__ == '__main__':
    dataMat, labelMat = generate_data()
    print(gradientDescent(dataMat, labelMat))
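
A side note: np.mat / np.matrix is deprecated in recent NumPy releases. For anyone who would rather avoid it, here is an equivalent sketch of the same update with plain ndarrays and the @ operator (same math, my own restructuring):

import numpy as np

def gradient_descent_nd(X, y, alpha=0.001, iters=100):
    X = np.asarray(X, dtype=float)                 # (m, n) feature matrix
    y = np.asarray(y, dtype=float).reshape(-1, 1)  # (m, 1) label column
    w = np.ones((X.shape[1], 1))                   # same all-ones initialization
    for _ in range(iters):
        hx = 1 / (1 + np.exp(-(X @ w)))            # sigmoid(Xw)
        w -= alpha * X.T @ (hx - y)                # batch gradient step
    return w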