
Logistic Regression

The Logistic Regression Model

Model assumption: the data follow a Bernoulli distribution. The model takes the probability of the positive class to be the sigmoid of a linear function of the input:

$$p = P(y = 1 \mid x) = \sigma(w^\top x) = \frac{1}{1 + e^{-w^\top x}}$$

where the bias term is absorbed into $w$ by appending a constant feature.
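As a quick illustration, here is a minimal sketch (the weights and feature vector are made up) of how the model maps a linear score to a probability:

import numpy as np

def sigmoid(z):
    # Logistic function: maps any real score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights (bias folded in as w[0]) and one feature vector.
w = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 0.3, 0.8])   # x[0] = 1 plays the role of the bias input

p = sigmoid(np.dot(w, x))       # P(y = 1 | x) under the model
print(p)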

Loss Function

For a single sample with label $y \in \{0, 1\}$ and predicted positive-class probability $p$, the likelihood of the observed label is

$$P(y \mid x) = \begin{cases} p, & y = 1 \\ 1 - p, & y = 0 \end{cases}$$

which is equivalent to:

$$P(y \mid x) = p^{y} (1 - p)^{1 - y}$$

If we collect a data set of $N$ samples, the total probability is:

$$P = \prod_{i=1}^{N} p_i^{y_i} (1 - p_i)^{1 - y_i}$$

Maximizing this probability is the same as minimizing the negative log loss:

$$L = -\sum_{i=1}^{N} \left[ y_i \log p_i + (1 - y_i) \log (1 - p_i) \right]$$
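A minimal NumPy sketch of this loss (the arrays below are made-up toy data; labels are assumed to be in {0, 1}):

import numpy as np

def negative_log_loss(w, X, y):
    # p_i = sigmoid(w . x_i) for every sample.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    p = np.clip(p, 1e-12, 1 - 1e-12)          # avoid log(0)
    # L = -sum_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ]
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Tiny made-up example; the first column acts as the bias feature.
X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5]])
y = np.array([1, 0, 1])
w = np.zeros(2)
print(negative_log_loss(w, X, y))             # equals 3 * log(2) at w = 0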

Gradient Computation

The formula for $p$ in the expression above is:

$$p = \sigma(w^\top x) = \frac{1}{1 + e^{-w^\top x}}$$

The formula for $1 - p$:

$$1 - p = \frac{e^{-w^\top x}}{1 + e^{-w^\top x}} = \frac{1}{1 + e^{w^\top x}}$$

The derivative of $p$ with respect to $w$:

$$\frac{\partial p}{\partial w} = p(1 - p)\, x$$

The derivative of $1 - p$ with respect to $w$:

$$\frac{\partial (1 - p)}{\partial w} = -p(1 - p)\, x$$

Substituting these into the loss, the gradient of the loss function is:

$$\frac{\partial L}{\partial w} = -\sum_{i=1}^{N} \left[ \frac{y_i}{p_i} - \frac{1 - y_i}{1 - p_i} \right] p_i (1 - p_i)\, x_i = \sum_{i=1}^{N} (p_i - y_i)\, x_i$$
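This gradient translates directly into a few lines of NumPy. The sketch below (made-up data, fixed learning rate, no regularization, all assumptions) runs plain batch gradient descent with it:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=1000):
    # w starts at zero; each step moves against the gradient sum_i (p_i - y_i) x_i.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y)
        w -= lr * grad
    return w

# Tiny made-up data set: first column is a constant bias feature.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.5], [1.0, 3.0]])
y = np.array([0, 0, 1, 1])
w = fit_logistic(X, y)
print(w)                          # learned [bias, weight]
print(sigmoid(X @ w))             # fitted probabilities, pushed toward the labels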

Decision Boundary of Logistic Regression

The decision boundary of logistic regression is the set of points where the predicted probability equals 0.5:

$$\frac{1}{1 + e^{-w^\top x}} = 0.5$$

Simplifying this equation gives:

$$w^\top x = 0$$

so the decision boundary is linear in the features.
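A quick numerical check (hypothetical weights and points) that thresholding the probability at 0.5 is the same as checking which side of the hyperplane $w^\top x = 0$ a point lies on:

import numpy as np

w = np.array([1.0, -2.0, 0.5])                 # hypothetical weights
X = np.array([[ 0.2,  0.1, 0.4],
              [-1.0,  0.5, 2.0],
              [ 3.0,  1.0, 1.0]])              # hypothetical points

scores = X @ w
probs = 1.0 / (1.0 + np.exp(-scores))
print(np.all((probs > 0.5) == (scores > 0)))   # True: both rules agree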

Code

Logistic regression with L2-norm regularization:

import numpy as np
from scipy.optimize import fmin_bfgs
from pylab import plot, ylim, subplot, ylabel, title, show


def sigmoid(x):
    """ Logistic sigmoid function. """
    return 1.0 / (1.0 + np.exp(-x))


class SyntheticClassifierData:
    """ Minimal synthetic data generator (assumed interface): exposes
    X_train, Y_train, X_test, Y_test with labels in {-1, +1}. """

    def __init__(self, N, d):
        # Two slightly separated class means in d dimensions.
        means = 0.05 * np.random.randn(2, d)

        def make_split(n):
            X = np.zeros((n, d))
            Y = np.zeros(n)
            for i in range(n):
                y = np.random.randint(2)       # class 0 or 1
                X[i, :] = np.random.random(d) + means[y, :]
                Y[i] = 2.0 * y - 1             # map to {-1, +1}
            return X, Y

        self.X_train, self.Y_train = make_split(N)
        self.X_test, self.Y_test = make_split(N)


class LogisticRegression:
    """ A simple logistic regression model with L2 regularization (zero-mean
    Gaussian priors on parameters). Labels are expected in {-1, +1}. """

    def __init__(self, x_train=None, y_train=None, x_test=None, y_test=None,
                 alpha=.1):
        # L2 regularization strength.
        self.alpha = alpha
        # Store the data.
        self.set_data(x_train, y_train, x_test, y_test)
        # Initialize parameters to zero, for lack of a better choice.
        self.betas = np.zeros(self.x_train.shape[1])

    def negative_lik(self, betas):
        return -1 * self.lik(betas)

    def lik(self, betas):
        """ Penalized log-likelihood of the data under the given parameters. """
        # Data log-likelihood: log sigmoid(y_i * beta . x_i).
        ll = 0
        for i in range(self.n):
            ll += np.log(sigmoid(self.y_train[i] *
                                 np.dot(betas, self.x_train[i, :])))
        # Gaussian prior on all betas except the bias term (index 0).
        for k in range(1, self.x_train.shape[1]):
            ll -= (self.alpha / 2.0) * betas[k] ** 2
        return ll

    def train(self):
        """ Define the gradient and hand it off to a scipy gradient-based
        optimizer. """
        # Derivative of the *negative* likelihood with respect to beta_k
        # (we minimize, hence the sign flip relative to the likelihood).
        dB_k = lambda B, k: (k > 0) * self.alpha * B[k] - np.sum([
            self.y_train[i] * self.x_train[i, k] *
            sigmoid(-self.y_train[i] * np.dot(B, self.x_train[i, :]))
            for i in range(self.n)])

        # The full gradient is just an array of componentwise derivatives.
        dB = lambda B: np.array([dB_k(B, k)
                                 for k in range(self.x_train.shape[1])])
        # Optimize with BFGS.
        self.betas = fmin_bfgs(self.negative_lik, self.betas, fprime=dB)

    def set_data(self, x_train, y_train, x_test, y_test):
        """ Take data that's already been generated. """
        self.x_train = x_train
        self.y_train = y_train
        self.x_test = x_test
        self.y_test = y_test
        self.n = y_train.shape[0]

    def training_reconstruction(self):
        """ P(y = 1 | x) for every training point. """
        p_y1 = np.zeros(self.n)
        for i in range(self.n):
            p_y1[i] = sigmoid(np.dot(self.betas, self.x_train[i, :]))
        return p_y1

    def test_predictions(self):
        """ P(y = 1 | x) for every test point. """
        n_test = self.x_test.shape[0]
        p_y1 = np.zeros(n_test)
        for i in range(n_test):
            p_y1[i] = sigmoid(np.dot(self.betas, self.x_test[i, :]))
        return p_y1

    def plot_training_reconstruction(self):
        plot(np.arange(self.n), .5 + .5 * self.y_train, 'bo')
        plot(np.arange(self.n), self.training_reconstruction(), 'rx')
        ylim([-.1, 1.1])

    def plot_test_predictions(self):
        n_test = self.x_test.shape[0]
        plot(np.arange(n_test), .5 + .5 * self.y_test, 'yo')
        plot(np.arange(n_test), self.test_predictions(), 'rx')
        ylim([-.1, 1.1])


if __name__ == "__main__":
    # Create a 20-dimensional data set with 25 points -- this will be
    # susceptible to overfitting.
    data = SyntheticClassifierData(25, 20)

    # Run for a variety of regularization strengths.
    alphas = [0, .001, .01, .1]
    for j, a in enumerate(alphas):
        # Create a new learner, but use the same data for each run.
        lr = LogisticRegression(x_train=data.X_train, y_train=data.Y_train,
                                x_test=data.X_test, y_test=data.Y_test, alpha=a)

        print("Initial likelihood:")
        print(lr.lik(lr.betas))

        # Train the model.
        lr.train()

        # Display execution info.
        print("Final betas:")
        print(lr.betas)
        print("Final lik:")
        print(lr.lik(lr.betas))

        # Plot the results.
        subplot(len(alphas), 2, 2 * j + 1)
        lr.plot_training_reconstruction()
        ylabel("Alpha=%s" % a)
        if j == 0:
            title("Training set reconstructions")

        subplot(len(alphas), 2, 2 * j + 2)
        lr.plot_test_predictions()
        if j == 0:
            title("Test set predictions")
    show()

Why Logistic Regression Uses Log Loss Instead of Squared Loss

For logistic regression, the log loss described here is the same thing as maximum likelihood. The reason squared loss is not used is that when the sigmoid function gives the probability of the positive class and squared loss is taken as the objective, the resulting loss function is non-convex in the parameters: it is hard to optimize and gradient methods can get stuck in local optima. With maximum likelihood, the objective is the log-likelihood, which is a convex function of the unknown parameters with continuous higher-order derivatives, so the global optimum can be found reliably.
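A small sketch (entirely made-up 1-D toy data) that evaluates both losses over a grid of weights for the model p = sigmoid(w * x); plotting the two curves shows the convex bowl of the log loss versus the flat, non-convex shape of the squared loss:

import numpy as np
import matplotlib.pyplot as plt

x = np.array([-2.0, -0.5, 1.0, 3.0])   # toy inputs (assumption)
y = np.array([0, 0, 1, 1])             # toy labels in {0, 1}

ws = np.linspace(-10, 10, 401)
log_loss, sq_loss = [], []
for w in ws:
    p = 1.0 / (1.0 + np.exp(-w * x))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    log_loss.append(-np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))
    sq_loss.append(np.sum((y - p) ** 2))

# The log-loss curve is a convex bowl; the squared-loss curve flattens out
# where the sigmoid saturates and has regions of negative curvature.
plt.plot(ws, log_loss, label="log loss")
plt.plot(ws, sq_loss, label="squared loss")
plt.legend()
plt.show()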

Softmax

The cost function is:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} 1\{y^{(i)} = k\} \log \frac{e^{\theta_k^\top x^{(i)}}}{\sum_{j=1}^{K} e^{\theta_j^\top x^{(i)}}}$$

where $1\{x\}$ is the indicator function: it equals 1 when $x$ is true and 0 otherwise.
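A minimal NumPy sketch of this cost (shapes and data below are assumptions: theta is (K, d), X is (m, d), and y holds integer class labels in {0, ..., K-1}):

import numpy as np

def softmax_cost(theta, X, y):
    logits = X @ theta.T                          # (m, K) scores theta_k . x_i
    logits -= logits.max(axis=1, keepdims=True)   # stabilize the exponentials
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)     # softmax probabilities
    m = X.shape[0]
    # The indicator 1{y_i = k} just picks out the probability of the true class.
    return -np.mean(np.log(probs[np.arange(m), y]))

# Example with random data: 3 classes, 3 features, 5 samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = np.array([0, 2, 1, 2, 0])
theta = rng.normal(size=(3, 3))
print(softmax_cost(theta, X, y))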
