Team member responsibilities:

刘豪: data preprocessing, neural network model, K-nearest neighbors, Huber, PassiveAggressive

李佳钰: ElasticNet, ElasticNetCV, Lasso, LassoCV

齐若岩: AdaBoost, GradientBoosting, XGB, LGBM

李挺: Ridge, RidgeCV, BayesianRidge, KernelRidge

王佳佳: RandomForest, ExtraTrees, SVR, DecisionTree

1. Importing the Data

In [1]:
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
%matplotlib inline
import matplotlib.pyplot as plt  # Matlab-style plotting
import matplotlib.gridspec as gridspec
import matplotlib.style as style
import seaborn as sns
color = sns.color_palette()
sns.set_style('darkgrid')
from scipy import stats
from scipy.stats import norm, skew #for some statistics
from subprocess import check_output
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
import torch.nn.functional as F
from torch.nn import init
import warnings

warnings.filterwarnings("ignore")

pd.set_option('display.float_format', lambda x: '{:.3f}'.format(x)) #Limiting floats output to 3 decimal points
path = "./data/house-prices-advanced-regression-techniques"

print(check_output(["ls", path]).decode("utf8")) #check the files available in the directory

train = pd.read_csv(path + '/train.csv')
test = pd.read_csv(path + '/test.csv')
train.drop('Id', axis=1, inplace=True)
test.drop('Id', axis=1, inplace=True)
print(train.shape)
print(test.shape)
train.head(5)
data_description.txt
sample_submission.csv
test.csv
train.csv

(1460, 80)
(1459, 79)
Out[1]:
MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape LandContour Utilities LotConfig ... PoolArea PoolQC Fence MiscFeature MiscVal MoSold YrSold SaleType SaleCondition SalePrice
0 60 RL 65.000 8450 Pave NaN Reg Lvl AllPub Inside ... 0 NaN NaN NaN 0 2 2008 WD Normal 208500
1 20 RL 80.000 9600 Pave NaN Reg Lvl AllPub FR2 ... 0 NaN NaN NaN 0 5 2007 WD Normal 181500
2 60 RL 68.000 11250 Pave NaN IR1 Lvl AllPub Inside ... 0 NaN NaN NaN 0 9 2008 WD Normal 223500
3 70 RL 60.000 9550 Pave NaN IR1 Lvl AllPub Corner ... 0 NaN NaN NaN 0 2 2006 WD Abnorml 140000
4 60 RL 84.000 14260 Pave NaN IR1 Lvl AllPub FR2 ... 0 NaN NaN NaN 0 12 2008 WD Normal 250000

5 rows × 80 columns

In [2]:
test.head()
Out[2]:
MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape LandContour Utilities LotConfig ... ScreenPorch PoolArea PoolQC Fence MiscFeature MiscVal MoSold YrSold SaleType SaleCondition
0 20 RH 80.000 11622 Pave NaN Reg Lvl AllPub Inside ... 120 0 NaN MnPrv NaN 0 6 2010 WD Normal
1 20 RL 81.000 14267 Pave NaN IR1 Lvl AllPub Corner ... 0 0 NaN NaN Gar2 12500 6 2010 WD Normal
2 60 RL 74.000 13830 Pave NaN IR1 Lvl AllPub Inside ... 0 0 NaN MnPrv NaN 0 3 2010 WD Normal
3 60 RL 78.000 9978 Pave NaN IR1 Lvl AllPub Inside ... 0 0 NaN NaN NaN 0 6 2010 WD Normal
4 120 RL 43.000 5005 Pave NaN IR1 HLS AllPub Inside ... 144 0 NaN NaN NaN 0 1 2010 WD Normal

5 rows × 79 columns

2. Data Preprocessing

2.1 Transforming the Target Variable Distribution

Our target variable is SalePrice. Let us first look at its distribution.

In [3]:
def plotting_3_chart(df, feature): 
    ## Create a customized figure with a fixed size. 
    fig = plt.figure(constrained_layout=True, figsize=(12,8))
    ## Creating a grid of 3 cols and 3 rows. 
    grid = gridspec.GridSpec(ncols=3, nrows=3, figure=fig)
    #gs = fig3.add_gridspec(3, 3)

    ## Customizing the histogram grid. 
    ax1 = fig.add_subplot(grid[0, :2])
    ## Set the title. 
    ax1.set_title('Histogram')
    ## plot the histogram. 
    sns.distplot(df.loc[:,feature], norm_hist=True, ax = ax1)

    # customizing the QQ_plot. 
    ax2 = fig.add_subplot(grid[1, :2])
    ## Set the title. 
    ax2.set_title('QQ_plot')
    ## Plotting the QQ_Plot. 
    stats.probplot(df.loc[:,feature], plot = ax2)

    ## Customizing the Box Plot. 
    ax3 = fig.add_subplot(grid[:, 2])
    ## Set title. 
    ax3.set_title('Box Plot')
    ## Plotting the box plot. 
    sns.boxplot(df.loc[:,feature], orient='v', ax = ax3 );
 

print('Skewness: '+ str(train['SalePrice'].skew())) 
print("Kurtosis: " + str(train['SalePrice'].kurt()))
plotting_3_chart(train, 'SalePrice')
Skewness: 1.8828757597682129
Kurtosis: 6.536281860064529

The skewness and kurtosis are both high, so the target needs a transformation; we apply the commonly used log(1+x) transform.

In [4]:
train["SalePrice"] = np.log1p(train["SalePrice"])
print('Skewness: '+ str(train['SalePrice'].skew()))   
print("Kurtosis: " + str(train['SalePrice'].kurt()))
plotting_3_chart(train, 'SalePrice')
Skewness: 0.12134661989685333
Kurtosis: 0.809519155707878

We now have a much more normally distributed target variable. Note that at the end we must apply the inverse transform (expm1) to recover the predictions on the original price scale.

2.2 Correlation Analysis

Let us look at the correlation between every variable in the training set and the sale price.

In [5]:
style.use('ggplot')
sns.set_style('whitegrid')
plt.subplots(figsize = (30,20))
## Plotting heatmap. 

# Generate a mask for the upper triangle (taken from seaborn example gallery)
mask = np.zeros_like(train.corr(), dtype=bool)
mask[np.triu_indices_from(mask)] = True


sns.heatmap(train.corr(), 
            cmap=sns.diverging_palette(255, 133, l=60, n=7), 
            mask = mask, 
            annot=True, 
            center = 0, 
           );
## Give title. 
plt.title("Heatmap of all the Features", fontsize = 30);

The heatmap shows that some features are very strongly correlated with each other; in the missing-value handling that follows we can drop features that are both highly correlated with others and have many missing values. We also find features that are highly correlated with the price, namely OverallQual and GrLivArea; we can use these two features for outlier detection and remove some anomalous samples.

2.3 Outlier Detection

GrLivArea is a continuous feature; we detect outliers from its relationship with the sale price.

In [6]:
#scatter plot grlivarea/saleprice
var = 'GrLivArea'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice');

Samples with a very large living area but a very low sale price are outliers and should be removed. The plot after removal is shown below:

In [7]:
#Deleting outliers
train = train.drop(train[(train['GrLivArea']>4000) & (train['SalePrice']<12.5)].index)

#Check the graphic again
fig, ax = plt.subplots()
ax.scatter(train['GrLivArea'], train['SalePrice'])
plt.ylabel('SalePrice')
plt.xlabel('GrLivArea')
plt.show()

2.4 Visualization Analysis

Let us look at the relationship between the house price and the features most strongly correlated with it.

First, the relationship between price and overall quality, shown as a box plot:

In [8]:
#box plot overallqual/saleprice
var = 'OverallQual'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)

We can see that the higher the overall quality rating, the higher the house price.

Next, we draw scatter plots of all the features highly correlated with price against the sale price, as shown below.

In [9]:
#scatterplot
sns.set()
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
sns.pairplot(train[cols], height = 2.5)
plt.show();

The first row shows a fairly strong positive correlation between the sale price and each of these features, consistent with the earlier correlation analysis.

2.5 Handling Missing Values

Both the training set and the test set contain missing values. We need to process them together, otherwise the feature dimensions may end up inconsistent.

In [10]:
print(train.shape)
dataset =  pd.concat(objs=[train, test], axis=0,sort=False).reset_index(drop=True)
dataset.head()
(1458, 80)
Out[10]:
MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape LandContour Utilities LotConfig ... PoolArea PoolQC Fence MiscFeature MiscVal MoSold YrSold SaleType SaleCondition SalePrice
0 60 RL 65.000 8450 Pave NaN Reg Lvl AllPub Inside ... 0 NaN NaN NaN 0 2 2008 WD Normal 12.248
1 20 RL 80.000 9600 Pave NaN Reg Lvl AllPub FR2 ... 0 NaN NaN NaN 0 5 2007 WD Normal 12.109
2 60 RL 68.000 11250 Pave NaN IR1 Lvl AllPub Inside ... 0 NaN NaN NaN 0 9 2008 WD Normal 12.317
3 70 RL 60.000 9550 Pave NaN IR1 Lvl AllPub Corner ... 0 NaN NaN NaN 0 2 2006 WD Abnorml 11.849
4 60 RL 84.000 14260 Pave NaN IR1 Lvl AllPub FR2 ... 0 NaN NaN NaN 0 12 2008 WD Normal 12.429

5 rows × 80 columns

In [11]:
total = dataset.isnull().sum().sort_values(ascending=False)
total.drop("SalePrice",inplace=True)
percent = (dataset.isnull().sum()/dataset.isnull().count()).sort_values(ascending=False)
percent.drop("SalePrice",inplace=True)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(25)
Out[11]:
Total Percent
PoolQC 2908 0.997
MiscFeature 2812 0.964
Alley 2719 0.932
Fence 2346 0.804
FireplaceQu 1420 0.487
LotFrontage 486 0.167
GarageCond 159 0.055
GarageYrBlt 159 0.055
GarageQual 159 0.055
GarageFinish 159 0.055
GarageType 157 0.054
BsmtCond 82 0.028
BsmtExposure 82 0.028
BsmtQual 81 0.028
BsmtFinType2 80 0.027
BsmtFinType1 79 0.027
MasVnrType 24 0.008
MasVnrArea 23 0.008
MSZoning 4 0.001
Utilities 2 0.001
Functional 2 0.001
BsmtFullBath 2 0.001
BsmtHalfBath 2 0.001
GarageCars 1 0.000
BsmtFinSF2 1 0.000
In [12]:
remove_columns=percent[percent>0.002]
columns=pd.DataFrame(remove_columns)
print("我们会舍弃下列特征,因为它们的缺失率高于 "+str(0.002*100)+"%: ")
print(remove_columns)
dataset=dataset.drop(columns.index,axis=1)
The following features are dropped because their missing rate exceeds 0.2%: 
PoolQC         0.997
MiscFeature    0.964
Alley          0.932
Fence          0.804
FireplaceQu    0.487
LotFrontage    0.167
GarageCond     0.055
GarageYrBlt    0.055
GarageQual     0.055
GarageFinish   0.055
GarageType     0.054
BsmtCond       0.028
BsmtExposure   0.028
BsmtQual       0.028
BsmtFinType2   0.027
BsmtFinType1   0.027
MasVnrType     0.008
MasVnrArea     0.008
dtype: float64

Some features have missing rates above 50%; such features are best dropped outright in the subsequent processing. Features with fewer missing values can be analysed case by case. The Garage-prefixed features share the same number of missing values; visualization and correlation analysis show that the important garage features are GarageCars and GarageArea, while the others have little influence on the price, so, combined with domain experience, these garage features can be dropped. Similarly, the remaining features in the list above appear relatively unimportant, and since there are nearly 80 features to analyse, we simply drop every feature whose missing rate exceeds the threshold. For the features with very few missing values we impute the mode (categorical columns) or the median (numeric columns).

In [13]:
cat=dataset.select_dtypes("object")
for column in cat:
    dataset[column].fillna(dataset[column].mode()[0], inplace=True)


fl=dataset.select_dtypes(["float64","int64"]).drop("SalePrice",axis=1)
for column in fl:
    dataset[column].fillna(dataset[column].median(), inplace=True)
    
print(dataset.shape)
dataset.drop("SalePrice",axis=1).isnull().values.any()
(2917, 62)
Out[13]:
False

The result is False: no missing values remain. We save the processed dataset for later use.

In [14]:
dataset.to_csv('dataset.csv', index=False)

3. Training and Prediction

3.1 Dataset Splitting and Encoding

In [15]:
dataset = pd.read_csv('dataset.csv')
dataset.head()
Out[15]:
MSSubClass MSZoning LotArea Street LotShape LandContour Utilities LotConfig LandSlope Neighborhood ... EnclosedPorch 3SsnPorch ScreenPorch PoolArea MiscVal MoSold YrSold SaleType SaleCondition SalePrice
0 60 RL 8450 Pave Reg Lvl AllPub Inside Gtl CollgCr ... 0 0 0 0 0 2 2008 WD Normal 12.248
1 20 RL 9600 Pave Reg Lvl AllPub FR2 Gtl Veenker ... 0 0 0 0 0 5 2007 WD Normal 12.109
2 60 RL 11250 Pave IR1 Lvl AllPub Inside Gtl CollgCr ... 0 0 0 0 0 9 2008 WD Normal 12.317
3 70 RL 9550 Pave IR1 Lvl AllPub Corner Gtl Crawfor ... 272 0 0 0 0 2 2006 WD Abnorml 11.849
4 60 RL 14260 Pave IR1 Lvl AllPub FR2 Gtl NoRidge ... 0 0 0 0 0 12 2008 WD Normal 12.429

5 rows × 62 columns

Next we convert the categorical features into numerical ones with one-hot encoding and scale all features to [0, 1] with MinMaxScaler.

In [16]:
dataset = pd.get_dummies(dataset, dummy_na=True, drop_first=True)
sale_price = dataset['SalePrice']
scaler = MinMaxScaler()
dataset = pd.DataFrame(scaler.fit_transform(dataset), columns = dataset.columns)
dataset['SalePrice'] = sale_price
dataset.head()
Out[16]:
MSSubClass LotArea OverallQual OverallCond YearBuilt YearRemodAdd BsmtFinSF1 BsmtFinSF2 BsmtUnfSF TotalBsmtSF ... SaleType_New SaleType_Oth SaleType_WD SaleType_nan SaleCondition_AdjLand SaleCondition_Alloca SaleCondition_Family SaleCondition_Normal SaleCondition_Partial SaleCondition_nan
0 0.235 0.033 0.667 0.500 0.949 0.883 0.176 0.000 0.064 0.168 ... 0.000 0.000 1.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000
1 0.000 0.039 0.556 0.875 0.754 0.433 0.244 0.000 0.122 0.248 ... 0.000 0.000 1.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000
2 0.235 0.047 0.667 0.500 0.935 0.867 0.121 0.000 0.186 0.181 ... 0.000 0.000 1.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000
3 0.294 0.039 0.667 0.500 0.312 0.333 0.054 0.000 0.231 0.148 ... 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
4 0.235 0.061 0.778 0.500 0.928 0.833 0.163 0.000 0.210 0.225 ... 0.000 0.000 1.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000

5 rows × 220 columns

Split the data into training and validation sets with an 80/20 ratio.

In [17]:
pytrain=dataset[dataset["SalePrice"].notnull()].copy()
pytest=dataset[dataset["SalePrice"].isna()].copy()
print(pytrain.shape)
print(pytest.shape)
pytest.drop('SalePrice', axis=1, inplace=True)
X_train, X_val, y_train, y_val = train_test_split(pytrain.drop('SalePrice', axis=1), pytrain['SalePrice'], test_size=0.2, random_state=42)
(1458, 220)
(1459, 220)

3.2 Predicting Prices with a Neural Network

First we define the network: a multi-layer perceptron with ReLU activations and four hidden layers is used to predict the sale price.

In [18]:
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(219, 144)
        self.fc2 = nn.Linear(144, 72)
        self.fc3 = nn.Linear(72, 36)
        self.fc4 = nn.Linear(36, 18)
        self.fc5 = nn.Linear(18, 1)

    def forward(self, x):

        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = F.relu(self.fc5(x))

        return x

The training loss is defined from the evaluation metric used for the whole task: we take the logarithm of both the predicted and the true prices and compute the root mean squared error of the difference, which removes the effect of the absolute price level on the score. This gives us the loss function. The learning rate is set to 0.001 and divided by 10 at epochs 180 and 200; we iterate over the full training set for 220 epochs. The results are shown below.
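Written out, this metric (and hence the per-batch loss computed in the code below) is the root mean squared error between the logarithms of the predicted and true prices:

$$ \mathrm{RMSE}_{\log} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log \hat{y}_i - \log y_i\right)^2} $$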

In [19]:
def adjust_learning_rate(epoch, learning_rate, lr_decay_epochs, optimizer):
    steps = np.sum(epoch > np.asarray(lr_decay_epochs))
    if steps > 0:
        new_lr = learning_rate * (0.1 ** steps)
        for param_group in optimizer.param_groups:
            param_group['lr'] = new_lr
            
def pred_by_nnet(X_train, X_val, y_train, y_val, model):
    y_train = np.expm1(y_train)
    y_val = np.expm1(y_val)
    train_batch = np.array_split(X_train, 50)
    label_batch = np.array_split(y_train, 50)
    for i in range(len(train_batch)):
        train_batch[i] = torch.from_numpy(train_batch[i].values).float()
    for i in range(len(label_batch)):
        label_batch[i] = torch.from_numpy(label_batch[i].values).float().view(-1, 1)

    X_val = torch.from_numpy(X_val.values).float()
    y_val = torch.from_numpy(y_val.values).float().view(-1, 1)
    
    ps = model(train_batch[0])
    print(ps.shape)
    criterion = nn.MSELoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    lr_decay_epochs = [180, 200]

    epochs = 220

    train_losses, test_losses = [], []
    for e in range(epochs):
        model.train()  # switch back to train mode (model.eval() is called during validation below)
        train_loss = 0
        adjust_learning_rate(e, 0.001, lr_decay_epochs, optimizer)
        for i in range(len(train_batch)):
            optimizer.zero_grad()
            output = model(train_batch[i])
            loss = torch.sqrt(criterion(torch.log(output), torch.log(label_batch[i])))
            loss.backward()
            optimizer.step()

            train_loss += loss.item()

        else:
            # for/else with no break above: this validation block runs after every epoch
            test_loss = 0

            with torch.no_grad():
                model.eval()
                predictions = model(X_val)
                test_loss += torch.sqrt(criterion(torch.log(predictions), torch.log(y_val)))

            train_losses.append(train_loss/len(train_batch))
            test_losses.append(test_loss)

            print("Epoch: {}/{}.. ".format(e+1, epochs),
                  "Training Loss: {:.3f}.. ".format(train_loss/len(train_batch)),
                  "Test Loss: {:.3f}.. ".format(test_loss))
    
    return train_losses, test_losses
In [20]:
model = MLP()
nntrain_losses, nntest_losses = pred_by_nnet(X_train, X_val, y_train, y_val, model)
torch.Size([24, 1])
Epoch: 1/220..  Training Loss: 12.058..  Test Loss: 9.614.. 
Epoch: 2/220..  Training Loss: 8.158..  Test Loss: 7.004.. 
Epoch: 3/220..  Training Loss: 6.236..  Test Loss: 5.549.. 
Epoch: 4/220..  Training Loss: 5.003..  Test Loss: 4.505.. 
Epoch: 5/220..  Training Loss: 4.092..  Test Loss: 3.710.. 
Epoch: 6/220..  Training Loss: 3.364..  Test Loss: 3.033.. 
Epoch: 7/220..  Training Loss: 2.727..  Test Loss: 2.444.. 
Epoch: 8/220..  Training Loss: 2.176..  Test Loss: 1.928.. 
Epoch: 9/220..  Training Loss: 1.692..  Test Loss: 1.480.. 
Epoch: 10/220..  Training Loss: 1.273..  Test Loss: 1.092.. 
Epoch: 11/220..  Training Loss: 0.908..  Test Loss: 0.757.. 
Epoch: 12/220..  Training Loss: 0.604..  Test Loss: 0.502.. 
Epoch: 13/220..  Training Loss: 0.406..  Test Loss: 0.377.. 
Epoch: 14/220..  Training Loss: 0.339..  Test Loss: 0.353.. 
Epoch: 15/220..  Training Loss: 0.329..  Test Loss: 0.346.. 
Epoch: 16/220..  Training Loss: 0.323..  Test Loss: 0.341.. 
Epoch: 17/220..  Training Loss: 0.317..  Test Loss: 0.335.. 
Epoch: 18/220..  Training Loss: 0.311..  Test Loss: 0.329.. 
Epoch: 19/220..  Training Loss: 0.305..  Test Loss: 0.322.. 
Epoch: 20/220..  Training Loss: 0.298..  Test Loss: 0.315.. 
Epoch: 21/220..  Training Loss: 0.291..  Test Loss: 0.308.. 
Epoch: 22/220..  Training Loss: 0.284..  Test Loss: 0.301.. 
Epoch: 23/220..  Training Loss: 0.277..  Test Loss: 0.294.. 
Epoch: 24/220..  Training Loss: 0.270..  Test Loss: 0.287.. 
Epoch: 25/220..  Training Loss: 0.263..  Test Loss: 0.280.. 
Epoch: 26/220..  Training Loss: 0.256..  Test Loss: 0.273.. 
Epoch: 27/220..  Training Loss: 0.249..  Test Loss: 0.266.. 
Epoch: 28/220..  Training Loss: 0.243..  Test Loss: 0.260.. 
Epoch: 29/220..  Training Loss: 0.237..  Test Loss: 0.254.. 
Epoch: 30/220..  Training Loss: 0.232..  Test Loss: 0.248.. 
Epoch: 31/220..  Training Loss: 0.227..  Test Loss: 0.243.. 
Epoch: 32/220..  Training Loss: 0.222..  Test Loss: 0.238.. 
Epoch: 33/220..  Training Loss: 0.217..  Test Loss: 0.233.. 
Epoch: 34/220..  Training Loss: 0.213..  Test Loss: 0.229.. 
Epoch: 35/220..  Training Loss: 0.209..  Test Loss: 0.225.. 
Epoch: 36/220..  Training Loss: 0.205..  Test Loss: 0.221.. 
Epoch: 37/220..  Training Loss: 0.201..  Test Loss: 0.218.. 
Epoch: 38/220..  Training Loss: 0.198..  Test Loss: 0.214.. 
Epoch: 39/220..  Training Loss: 0.195..  Test Loss: 0.211.. 
Epoch: 40/220..  Training Loss: 0.192..  Test Loss: 0.208.. 
Epoch: 41/220..  Training Loss: 0.188..  Test Loss: 0.205.. 
Epoch: 42/220..  Training Loss: 0.185..  Test Loss: 0.203.. 
Epoch: 43/220..  Training Loss: 0.183..  Test Loss: 0.200.. 
Epoch: 44/220..  Training Loss: 0.180..  Test Loss: 0.198.. 
Epoch: 45/220..  Training Loss: 0.177..  Test Loss: 0.196.. 
Epoch: 46/220..  Training Loss: 0.175..  Test Loss: 0.193.. 
Epoch: 47/220..  Training Loss: 0.172..  Test Loss: 0.191.. 
Epoch: 48/220..  Training Loss: 0.170..  Test Loss: 0.190.. 
Epoch: 49/220..  Training Loss: 0.167..  Test Loss: 0.188.. 
Epoch: 50/220..  Training Loss: 0.165..  Test Loss: 0.186.. 
Epoch: 51/220..  Training Loss: 0.163..  Test Loss: 0.184.. 
Epoch: 52/220..  Training Loss: 0.160..  Test Loss: 0.183.. 
Epoch: 53/220..  Training Loss: 0.158..  Test Loss: 0.181.. 
Epoch: 54/220..  Training Loss: 0.156..  Test Loss: 0.180.. 
Epoch: 55/220..  Training Loss: 0.154..  Test Loss: 0.179.. 
Epoch: 56/220..  Training Loss: 0.152..  Test Loss: 0.177.. 
Epoch: 57/220..  Training Loss: 0.150..  Test Loss: 0.176.. 
Epoch: 58/220..  Training Loss: 0.148..  Test Loss: 0.175.. 
Epoch: 59/220..  Training Loss: 0.147..  Test Loss: 0.174.. 
Epoch: 60/220..  Training Loss: 0.145..  Test Loss: 0.173.. 
Epoch: 61/220..  Training Loss: 0.143..  Test Loss: 0.172.. 
Epoch: 62/220..  Training Loss: 0.141..  Test Loss: 0.171.. 
Epoch: 63/220..  Training Loss: 0.140..  Test Loss: 0.170.. 
Epoch: 64/220..  Training Loss: 0.138..  Test Loss: 0.169.. 
Epoch: 65/220..  Training Loss: 0.136..  Test Loss: 0.168.. 
Epoch: 66/220..  Training Loss: 0.135..  Test Loss: 0.167.. 
Epoch: 67/220..  Training Loss: 0.133..  Test Loss: 0.166.. 
Epoch: 68/220..  Training Loss: 0.132..  Test Loss: 0.165.. 
Epoch: 69/220..  Training Loss: 0.131..  Test Loss: 0.164.. 
Epoch: 70/220..  Training Loss: 0.129..  Test Loss: 0.163.. 
Epoch: 71/220..  Training Loss: 0.128..  Test Loss: 0.162.. 
Epoch: 72/220..  Training Loss: 0.127..  Test Loss: 0.162.. 
Epoch: 73/220..  Training Loss: 0.126..  Test Loss: 0.161.. 
Epoch: 74/220..  Training Loss: 0.125..  Test Loss: 0.160.. 
Epoch: 75/220..  Training Loss: 0.124..  Test Loss: 0.159.. 
Epoch: 76/220..  Training Loss: 0.123..  Test Loss: 0.159.. 
Epoch: 77/220..  Training Loss: 0.122..  Test Loss: 0.158.. 
Epoch: 78/220..  Training Loss: 0.121..  Test Loss: 0.157.. 
Epoch: 79/220..  Training Loss: 0.120..  Test Loss: 0.156.. 
Epoch: 80/220..  Training Loss: 0.119..  Test Loss: 0.156.. 
Epoch: 81/220..  Training Loss: 0.118..  Test Loss: 0.155.. 
Epoch: 82/220..  Training Loss: 0.117..  Test Loss: 0.155.. 
Epoch: 83/220..  Training Loss: 0.117..  Test Loss: 0.154.. 
Epoch: 84/220..  Training Loss: 0.116..  Test Loss: 0.153.. 
Epoch: 85/220..  Training Loss: 0.115..  Test Loss: 0.153.. 
Epoch: 86/220..  Training Loss: 0.115..  Test Loss: 0.152.. 
Epoch: 87/220..  Training Loss: 0.114..  Test Loss: 0.151.. 
Epoch: 88/220..  Training Loss: 0.113..  Test Loss: 0.151.. 
Epoch: 89/220..  Training Loss: 0.113..  Test Loss: 0.150.. 
Epoch: 90/220..  Training Loss: 0.112..  Test Loss: 0.150.. 
Epoch: 91/220..  Training Loss: 0.112..  Test Loss: 0.149.. 
Epoch: 92/220..  Training Loss: 0.111..  Test Loss: 0.149.. 
Epoch: 93/220..  Training Loss: 0.110..  Test Loss: 0.148.. 
Epoch: 94/220..  Training Loss: 0.110..  Test Loss: 0.148.. 
Epoch: 95/220..  Training Loss: 0.109..  Test Loss: 0.147.. 
Epoch: 96/220..  Training Loss: 0.109..  Test Loss: 0.147.. 
Epoch: 97/220..  Training Loss: 0.108..  Test Loss: 0.146.. 
Epoch: 98/220..  Training Loss: 0.107..  Test Loss: 0.146.. 
Epoch: 99/220..  Training Loss: 0.107..  Test Loss: 0.145.. 
Epoch: 100/220..  Training Loss: 0.106..  Test Loss: 0.145.. 
Epoch: 101/220..  Training Loss: 0.106..  Test Loss: 0.144.. 
Epoch: 102/220..  Training Loss: 0.105..  Test Loss: 0.144.. 
Epoch: 103/220..  Training Loss: 0.105..  Test Loss: 0.143.. 
Epoch: 104/220..  Training Loss: 0.104..  Test Loss: 0.143.. 
Epoch: 105/220..  Training Loss: 0.104..  Test Loss: 0.143.. 
Epoch: 106/220..  Training Loss: 0.103..  Test Loss: 0.142.. 
Epoch: 107/220..  Training Loss: 0.103..  Test Loss: 0.142.. 
Epoch: 108/220..  Training Loss: 0.102..  Test Loss: 0.141.. 
Epoch: 109/220..  Training Loss: 0.102..  Test Loss: 0.141.. 
Epoch: 110/220..  Training Loss: 0.101..  Test Loss: 0.141.. 
Epoch: 111/220..  Training Loss: 0.101..  Test Loss: 0.140.. 
Epoch: 112/220..  Training Loss: 0.100..  Test Loss: 0.140.. 
Epoch: 113/220..  Training Loss: 0.099..  Test Loss: 0.140.. 
Epoch: 114/220..  Training Loss: 0.099..  Test Loss: 0.139.. 
Epoch: 115/220..  Training Loss: 0.098..  Test Loss: 0.139.. 
Epoch: 116/220..  Training Loss: 0.098..  Test Loss: 0.139.. 
Epoch: 117/220..  Training Loss: 0.097..  Test Loss: 0.138.. 
Epoch: 118/220..  Training Loss: 0.097..  Test Loss: 0.138.. 
Epoch: 119/220..  Training Loss: 0.096..  Test Loss: 0.138.. 
Epoch: 120/220..  Training Loss: 0.096..  Test Loss: 0.138.. 
Epoch: 121/220..  Training Loss: 0.095..  Test Loss: 0.137.. 
Epoch: 122/220..  Training Loss: 0.095..  Test Loss: 0.137.. 
Epoch: 123/220..  Training Loss: 0.095..  Test Loss: 0.137.. 
Epoch: 124/220..  Training Loss: 0.094..  Test Loss: 0.136.. 
Epoch: 125/220..  Training Loss: 0.094..  Test Loss: 0.136.. 
Epoch: 126/220..  Training Loss: 0.093..  Test Loss: 0.136.. 
Epoch: 127/220..  Training Loss: 0.093..  Test Loss: 0.136.. 
Epoch: 128/220..  Training Loss: 0.092..  Test Loss: 0.135.. 
Epoch: 129/220..  Training Loss: 0.092..  Test Loss: 0.135.. 
Epoch: 130/220..  Training Loss: 0.091..  Test Loss: 0.135.. 
Epoch: 131/220..  Training Loss: 0.091..  Test Loss: 0.135.. 
Epoch: 132/220..  Training Loss: 0.090..  Test Loss: 0.135.. 
Epoch: 133/220..  Training Loss: 0.090..  Test Loss: 0.134.. 
Epoch: 134/220..  Training Loss: 0.089..  Test Loss: 0.134.. 
Epoch: 135/220..  Training Loss: 0.089..  Test Loss: 0.134.. 
Epoch: 136/220..  Training Loss: 0.088..  Test Loss: 0.134.. 
Epoch: 137/220..  Training Loss: 0.088..  Test Loss: 0.134.. 
Epoch: 138/220..  Training Loss: 0.088..  Test Loss: 0.134.. 
Epoch: 139/220..  Training Loss: 0.087..  Test Loss: 0.133.. 
Epoch: 140/220..  Training Loss: 0.087..  Test Loss: 0.133.. 
Epoch: 141/220..  Training Loss: 0.086..  Test Loss: 0.133.. 
Epoch: 142/220..  Training Loss: 0.086..  Test Loss: 0.133.. 
Epoch: 143/220..  Training Loss: 0.085..  Test Loss: 0.133.. 
Epoch: 144/220..  Training Loss: 0.085..  Test Loss: 0.132.. 
Epoch: 145/220..  Training Loss: 0.085..  Test Loss: 0.132.. 
Epoch: 146/220..  Training Loss: 0.084..  Test Loss: 0.132.. 
Epoch: 147/220..  Training Loss: 0.084..  Test Loss: 0.132.. 
Epoch: 148/220..  Training Loss: 0.083..  Test Loss: 0.132.. 
Epoch: 149/220..  Training Loss: 0.083..  Test Loss: 0.132.. 
Epoch: 150/220..  Training Loss: 0.083..  Test Loss: 0.132.. 
Epoch: 151/220..  Training Loss: 0.082..  Test Loss: 0.132.. 
Epoch: 152/220..  Training Loss: 0.082..  Test Loss: 0.131.. 
Epoch: 153/220..  Training Loss: 0.081..  Test Loss: 0.131.. 
Epoch: 154/220..  Training Loss: 0.081..  Test Loss: 0.131.. 
Epoch: 155/220..  Training Loss: 0.081..  Test Loss: 0.131.. 
Epoch: 156/220..  Training Loss: 0.080..  Test Loss: 0.131.. 
Epoch: 157/220..  Training Loss: 0.080..  Test Loss: 0.131.. 
Epoch: 158/220..  Training Loss: 0.080..  Test Loss: 0.131.. 
Epoch: 159/220..  Training Loss: 0.079..  Test Loss: 0.131.. 
Epoch: 160/220..  Training Loss: 0.079..  Test Loss: 0.131.. 
Epoch: 161/220..  Training Loss: 0.078..  Test Loss: 0.131.. 
Epoch: 162/220..  Training Loss: 0.078..  Test Loss: 0.131.. 
Epoch: 163/220..  Training Loss: 0.078..  Test Loss: 0.131.. 
Epoch: 164/220..  Training Loss: 0.077..  Test Loss: 0.130.. 
Epoch: 165/220..  Training Loss: 0.077..  Test Loss: 0.130.. 
Epoch: 166/220..  Training Loss: 0.077..  Test Loss: 0.130.. 
Epoch: 167/220..  Training Loss: 0.076..  Test Loss: 0.130.. 
Epoch: 168/220..  Training Loss: 0.076..  Test Loss: 0.130.. 
Epoch: 169/220..  Training Loss: 0.076..  Test Loss: 0.130.. 
Epoch: 170/220..  Training Loss: 0.075..  Test Loss: 0.130.. 
Epoch: 171/220..  Training Loss: 0.075..  Test Loss: 0.130.. 
Epoch: 172/220..  Training Loss: 0.075..  Test Loss: 0.130.. 
Epoch: 173/220..  Training Loss: 0.075..  Test Loss: 0.130.. 
Epoch: 174/220..  Training Loss: 0.074..  Test Loss: 0.130.. 
Epoch: 175/220..  Training Loss: 0.074..  Test Loss: 0.130.. 
Epoch: 176/220..  Training Loss: 0.074..  Test Loss: 0.130.. 
Epoch: 177/220..  Training Loss: 0.073..  Test Loss: 0.130.. 
Epoch: 178/220..  Training Loss: 0.073..  Test Loss: 0.130.. 
Epoch: 179/220..  Training Loss: 0.073..  Test Loss: 0.130.. 
Epoch: 180/220..  Training Loss: 0.073..  Test Loss: 0.130.. 
Epoch: 181/220..  Training Loss: 0.072..  Test Loss: 0.130.. 
Epoch: 182/220..  Training Loss: 0.072..  Test Loss: 0.130.. 
Epoch: 183/220..  Training Loss: 0.072..  Test Loss: 0.130.. 
Epoch: 184/220..  Training Loss: 0.072..  Test Loss: 0.130.. 
Epoch: 185/220..  Training Loss: 0.072..  Test Loss: 0.130.. 
Epoch: 186/220..  Training Loss: 0.072..  Test Loss: 0.130.. 
Epoch: 187/220..  Training Loss: 0.072..  Test Loss: 0.130.. 
Epoch: 188/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 189/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 190/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 191/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 192/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 193/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 194/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 195/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 196/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 197/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 198/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 199/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 200/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 201/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 202/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 203/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 204/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 205/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 206/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 207/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 208/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 209/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 210/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 211/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 212/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 213/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 214/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 215/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 216/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 217/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 218/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 219/220..  Training Loss: 0.071..  Test Loss: 0.130.. 
Epoch: 220/220..  Training Loss: 0.071..  Test Loss: 0.130.. 

Use the trained model to make predictions on the test set.

In [21]:
pytest = torch.from_numpy(pytest.values).float()

with torch.no_grad():
    model.eval()
    output = model(pytest)  # calling the module directly is equivalent to forward() here

output.shape
Out[21]:
torch.Size([1459, 1])
In [22]:
np.random.seed(seed=42)

from sklearn.metrics import mean_squared_error,mean_absolute_error
from sklearn.ensemble import GradientBoostingRegressor,RandomForestRegressor,AdaBoostRegressor,ExtraTreesRegressor
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from sklearn.linear_model import Ridge,RidgeCV,BayesianRidge,LinearRegression,Lasso,LassoCV,ElasticNet,RANSACRegressor,HuberRegressor,PassiveAggressiveRegressor,ElasticNetCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import VotingRegressor
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge

Next we use a number of other models to predict house prices:

3.3 Boosting

Boosting is an ensemble-learning idea: a series of models is trained sequentially, and each model learns from the mistakes made by the previous one. The resulting base learners are then combined linearly into a strong learner.

3.4 AdaBoost

AdaBoost learns from the previous model's mistakes by increasing the weights of the mis-predicted samples. The algorithm works as follows (a simplified sketch follows the list):

1. Initialize the weight distribution over the training samples so that every sample has equal weight.

2. Train a base learner on the weighted data.

3. Compute the learner's error on the weighted training data.

4. Compute the learner's coefficient in the final linear combination.

5. Update the sample weights for this round: increase the weights of the mis-predicted samples so that the next model can learn from the previous model's mistakes.

6. Repeat from step 2 until the required number of learners is reached.

7. Linearly combine all base learners.
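The following is a highly simplified, illustrative sketch of this reweighting loop in the spirit of AdaBoost.R2; it is not the scikit-learn implementation used later, and the helper names toy_adaboost_r2 / toy_adaboost_predict are made up for illustration.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def toy_adaboost_r2(X, y, n_estimators=10):
    """Illustrative AdaBoost.R2-style loop following steps 1-7 above (simplified)."""
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                      # step 1: uniform sample weights
    models, model_weights = [], []
    for _ in range(n_estimators):
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, y, sample_weight=w)          # step 2: fit a weak learner on weighted data
        loss = np.abs(tree.predict(X) - y)
        loss = loss / (loss.max() + 1e-12)       # step 3: per-sample loss scaled to [0, 1]
        eps = np.sum(w * loss)                   # weighted average loss
        if eps <= 1e-12 or eps >= 0.5:           # perfect fit or too weak: stop early
            break
        beta = eps / (1.0 - eps)                 # step 4: coefficient of this learner
        w = w * beta ** (1.0 - loss)             # step 5: shrink weights of well-predicted samples,
        w = w / w.sum()                          #         so mis-predicted samples gain relative weight
        models.append(tree)
        model_weights.append(np.log(1.0 / beta))
    return models, np.array(model_weights)

def toy_adaboost_predict(models, model_weights, X):
    # step 7: weighted combination (the real algorithm uses a weighted median instead of a mean)
    preds = np.array([m.predict(X) for m in models])
    return np.average(preds, axis=0, weights=model_weights)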

3.5 GradientBoosting

Gradient boosting learns from the previous model's mistakes by fitting the residuals directly (a minimal sketch follows the list):

1. Train a base learner.

2. Compute the difference between the predictions and the true values, and use this residual as the target for the next round of training.

3. Repeat step 1 until the target number of learners is reached.

4. Make the final prediction by summing the learners' contributions.
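A minimal illustrative sketch of this residual-fitting loop for squared loss (toy_gradient_boosting and toy_gb_predict are hypothetical helper names, not the library implementation used later):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def toy_gradient_boosting(X, y, n_estimators=100, learning_rate=0.1):
    """Each tree is fit to the residuals left by the current ensemble (steps 1-3 above)."""
    y = np.asarray(y)
    base = y.mean()                                    # start from a constant prediction
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_estimators):
        residual = y - pred                            # step 2: residuals become the new target
        tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
        pred = pred + learning_rate * tree.predict(X)  # add the shrunken correction
        trees.append(tree)
    return base, trees

def toy_gb_predict(base, trees, X, learning_rate=0.1):
    # step 4: final prediction = initial constant plus all shrunken corrections
    return base + learning_rate * sum(tree.predict(X) for tree in trees)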

3.6 XGBoost

XGBoost is an optimized, distributed gradient-boosted tree library. Built on the gradient-boosting framework, it implements parallel tree boosting and can solve many data-science problems quickly and accurately. XGBoost runs on common single-machine operating systems (Windows, Linux, and OS X) and also supports distributed computation on clusters, but its time and memory requirements are relatively high. Its exact split finding works as follows:

1. Pre-sort all features by their values.

2. At each split, find the best split point of every feature at a cost of O(#data).

3. Choose the best feature and split point and split the data into left and right child nodes.

This pre-sorting algorithm finds exact split points, but it is expensive in both space and time: the features must be pre-sorted and the sorted index arrays stored (so that split points can be evaluated quickly later), which requires roughly twice the memory of the training data, and the split gain has to be computed for every candidate split point, which is costly.
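As a small, hedged usage sketch of this tradeoff (not part of the original experiment): XGBoost exposes the split-finding strategy through the tree_method parameter.

from xgboost import XGBRegressor

# "exact" uses the pre-sorted exact greedy algorithm described above;
# "hist" switches to a histogram-based approximation (similar in spirit to LightGBM)
# that trades a little split precision for much lower memory and time cost.
xgb_exact = XGBRegressor(tree_method="exact", n_estimators=500, learning_rate=0.05, random_state=42)
xgb_hist = XGBRegressor(tree_method="hist", n_estimators=500, learning_rate=0.05, random_state=42)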

3.7 LightGBM

LightGBM (Light Gradient Boosting Machine) is likewise a distributed gradient-boosting framework based on decision trees. LightGBM uses a histogram algorithm, which uses less memory and makes data splitting cheaper. The idea is to discretize each continuous feature into k discrete values and build a histogram of width k; the training data is then traversed once to accumulate statistics for each bin. When choosing a split, only the k discrete bin values of the histogram need to be scanned to find the best split point.
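A minimal sketch of how the bin count k is controlled (the hyperparameter values mirror the LGBMRegressor configuration used later in the notebook; this snippet is illustrative, not part of the original run):

from lightgbm import LGBMRegressor

# max_bin is the k described above: each continuous feature is discretized into at most
# max_bin histogram bins, and split points are searched only over the bin boundaries.
lgbm = LGBMRegressor(max_bin=200, num_leaves=4, learning_rate=0.01,
                     n_estimators=5000, random_state=42)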

3.8 Lasso

Lasso (Least Absolute Shrinkage and Selection Operator) is a shrinkage estimation method built around the idea of reducing the variable set (dimensionality). By constructing a penalty function it shrinks the coefficients and drives some of them exactly to zero, thereby performing variable selection. Lasso was introduced to address two problems of linear regression: overfitting, and the non-invertibility of X^T X when solving for the parameters via the normal equation; it handles both by adding a regularization term to the loss function. Because Lasso shrinks some coefficients and sets small ones exactly to zero, it improves the model's generalization ability. It is suited to high-dimensional feature data, especially when the linear relationship is sparse, or when the goal is to pick out the main features from a large set.
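For reference, the objective minimized by scikit-learn's Lasso can be written as

$$ \min_{w}\ \frac{1}{2n}\lVert y - Xw\rVert_2^2 + \alpha\,\lVert w\rVert_1 $$

where the L1 penalty term is what drives some coefficients exactly to zero.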

3.9 ElasticNet

ElasticNet combines Lasso and Ridge into a single model with two penalty terms: one proportional to the L1 norm and the other proportional to the L2 norm. A model obtained this way can be as sparse as pure Lasso while keeping the regularization behaviour of ridge regression. ElasticNetCV uses cross-validation over the hyperparameters alpha and l1_ratio to help choose suitable values. ElasticNet is useful when Lasso goes too far (too many coefficients shrunk to zero) while Ridge does not regularize enough (the coefficients decay too slowly).
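For reference, the ElasticNet objective combines both penalties (with ρ denoting l1_ratio):

$$ \min_{w}\ \frac{1}{2n}\lVert y - Xw\rVert_2^2 + \alpha\rho\,\lVert w\rVert_1 + \frac{\alpha(1-\rho)}{2}\,\lVert w\rVert_2^2 $$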

3.10 Ridge Regression

Ridge regression can be seen as an improvement on ordinary linear regression, and further regression algorithms can be derived from it, such as kernel ridge regression (combined with kernel functions) and Bayesian ridge regression (combined with Bayesian inference). Ridge regression is essentially a modified least-squares estimator: it trades some information and precision for regression coefficients that are more realistic and more reliable. Its basic idea is to limit the size of the parameters by adding a penalty term that regularizes the original model; in ridge regression the penalty is the L2 norm of the parameter vector w.
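For reference, the ridge objective is ordinary least squares plus an L2 penalty on w:

$$ \min_{w}\ \lVert y - Xw\rVert_2^2 + \alpha\,\lVert w\rVert_2^2 $$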

3.11 Decision Tree and Random Forest

A decision tree builds a model with a tree structure: each internal node corresponds to an attribute, and a sample is routed to one of the node's children according to its value of that attribute, until a leaf is reached. Each leaf represents a class (or a value), which is how the prediction is made.

Common decision-tree algorithms include ID3, C4.5, and CART. When growing the tree we must choose which feature to split on; the usual principle is to pick the split that most increases purity, measured by information gain, gain ratio, or the Gini index. For a single tree, pruning is also needed to avoid overfitting by removing nodes that would increase the validation error.

A random forest is in fact a special kind of bagging that uses decision trees as the base model. First, m training sets are generated by bootstrap sampling; then a decision tree is built on each set, and when splitting a node the best split is searched not over all features but over a randomly drawn subset of them. Because of bagging, i.e. the ensemble idea, a random forest effectively samples both the rows and the columns of the training matrix, which helps it avoid overfitting.

At prediction time the usual bagging strategy is applied: majority vote for classification, averaging for regression. A brief sketch of the corresponding scikit-learn parameters follows.
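A brief, illustrative sketch of how these two sources of randomness map onto scikit-learn parameters (the values here are examples, not the defaults used in the model comparison below):

from sklearn.ensemble import RandomForestRegressor

# n_estimators = number of bootstrap-sampled trees (row sampling);
# max_features < n_features gives the per-split random feature subset (column sampling).
rf = RandomForestRegressor(n_estimators=300, max_features="sqrt", random_state=42)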

3.12 SVR

The support vector machine (SVM) was originally proposed for binary classification, and SVR (support vector regression) is an important application branch of SVM. The difference from SVM classification is that in SVR the samples ultimately belong to a single class, and the optimal hyperplane sought is not the one that separates two or more classes as widely as possible, but the one that minimizes the total deviation of all samples from the hyperplane.
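One common way to write the (linear, unconstrained) ε-insensitive SVR objective is

$$ \min_{w,b}\ \frac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\max\left(0,\ \lvert y_i - (w^{\top}x_i + b)\rvert - \varepsilon\right) $$

where deviations smaller than ε are not penalized; C and ε correspond to the C and epsilon parameters of the SVR model used later (C=20, epsilon=0.008).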

3.13 Extra-Trees

The Extra-Trees (ET) algorithm is very similar to the random forest: both consist of many decision trees. The main differences between extremely randomized trees and random forests are:

Random forests use the bagging model, whereas Extra-Trees uses all of the samples and only randomizes the choice of features; because the splits themselves are random, it can in some cases give better results than a random forest.

A random forest finds the best split attribute within a random feature subset, whereas ET chooses the split value completely at random in order to split the tree.

3.14 K-Nearest-Neighbor Regression

K-nearest-neighbor regression is a non-parametric model: the regression value of a query sample is decided from the target values of its K nearest training samples, i.e. the prediction is based on sample similarity. The prediction can be the plain arithmetic mean of the K neighbors' targets, or a weighted average that takes the distance differences into account.
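A small illustrative sketch of the two weighting choices (example values only, not the configuration used in the comparison below):

from sklearn.neighbors import KNeighborsRegressor

# weights="uniform" takes the plain arithmetic mean of the K neighbours' targets;
# weights="distance" uses inverse-distance weighting, as described above.
knn_mean = KNeighborsRegressor(n_neighbors=5, weights="uniform")
knn_weighted = KNeighborsRegressor(n_neighbors=5, weights="distance")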

3.15 Huber Regression

Huber regression applies a linear loss to samples classified as outliers; a sample whose absolute error is below a certain threshold is classified as an inlier (and receives a quadratic loss). Its advantage is that the influence of outliers is not ignored but down-weighted, which ultimately reduces their impact on the regression result.
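For reference, the classic Huber loss on a residual r with threshold δ is

$$ L_{\delta}(r) = \begin{cases} \frac{1}{2}r^2, & \lvert r\rvert \le \delta \\ \delta\left(\lvert r\rvert - \frac{1}{2}\delta\right), & \lvert r\rvert > \delta \end{cases} $$

(scikit-learn's HuberRegressor uses this loss together with an estimated scale parameter; its epsilon plays the role of δ.)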

3.16 PassiveAggressive

Passive-Aggressive regression belongs to a family of algorithms designed for large-scale (online) learning. Like the perceptron, they do not require a learning rate, but they do include a regularization parameter.

In [23]:
r_s = 42
my_regressors=[               
               ElasticNet(alpha=0.001,l1_ratio=0.70,max_iter=100,tol=0.01, random_state=r_s),
               ElasticNetCV(l1_ratio=0.9,max_iter=100,tol=0.01,random_state=r_s),
               Lasso(alpha=0.00047,random_state=r_s),
               LassoCV(),
               AdaBoostRegressor(random_state=r_s),
               GradientBoostingRegressor(n_estimators=3000, learning_rate=0.05, max_depth=4, max_features='sqrt',
                                         min_samples_leaf=15, min_samples_split=10, loss='huber',random_state =r_s),
               XGBRegressor(random_state=r_s),           
               LGBMRegressor(objective='regression', num_leaves=4,learning_rate=0.01, n_estimators=5000,max_bin=200, 
                             bagging_fraction=0.75,bagging_freq=5,bagging_seed=7,feature_fraction=0.2,feature_fraction_seed=7,
                             verbose=-1,random_state=r_s),               
                              
               RandomForestRegressor(random_state=r_s),
               ExtraTreesRegressor(random_state=r_s),
               SVR(C= 20, epsilon= 0.008, gamma=0.0003),
               DecisionTreeRegressor(),
               Ridge(alpha=6),
               RidgeCV(),
               BayesianRidge(),
               KernelRidge(),     
               KNeighborsRegressor(),
               HuberRegressor(),
               PassiveAggressiveRegressor(random_state=r_s),
              ]

regressors=[]

for my_regressor in my_regressors:
    regressors.append(my_regressor)


scores_val=[]
scores_train=[]
RMSE=[]


for regressor in regressors:
    scores_val.append(regressor.fit(X_train,y_train).score(X_val,y_val))
    scores_train.append(regressor.fit(X_train,y_train).score(X_train,y_train))
    y_pred=regressor.predict(X_val)
    RMSE.append(np.sqrt(mean_squared_error(np.log(np.expm1(y_val)),np.log(np.expm1(y_pred)))))

    
results=zip(scores_val,scores_train,RMSE)
results=list(results)
results_score_val=[item[0] for item in results]
results_score_train=[item[1] for item in results]
results_RMSE=[item[2] for item in results]


df_results=pd.DataFrame({"Algorithms":my_regressors,"Training Score":results_score_train,"Validation Score":results_score_val,"RMSE":results_RMSE})
df_results
Out[23]:
Algorithms Training Score Validation Score RMSE
0 ElasticNet(alpha=0.001, l1_ratio=0.7, max_iter... 0.928 0.904 0.127
1 ElasticNetCV(l1_ratio=0.9, max_iter=100, rando... 0.939 0.911 0.123
2 Lasso(alpha=0.00047, random_state=42) 0.932 0.907 0.125
3 LassoCV() 0.940 0.912 0.122
4 (DecisionTreeRegressor(max_depth=3, random_sta... 0.852 0.818 0.175
5 ([DecisionTreeRegressor(criterion='friedman_ms... 0.985 0.909 0.124
6 XGBRegressor(base_score=0.5, booster='gbtree',... 0.999 0.890 0.136
7 LGBMRegressor(bagging_fraction=0.75, bagging_f... 0.964 0.911 0.122
8 (DecisionTreeRegressor(max_features='auto', ra... 0.982 0.873 0.146
9 (ExtraTreeRegressor(random_state=1608637542), ... 1.000 0.883 0.140
10 SVR(C=20, epsilon=0.008, gamma=0.0003) 0.913 0.877 0.144
11 DecisionTreeRegressor() 1.000 0.728 0.213
12 Ridge(alpha=6) 0.931 0.897 0.132
13 RidgeCV(alphas=array([ 0.1, 1. , 10. ])) 0.943 0.908 0.124
14 BayesianRidge() 0.944 0.908 0.124
15 KernelRidge() 0.765 0.569 0.269
16 KNeighborsRegressor() 0.804 0.695 0.227
17 HuberRegressor() 0.755 0.703 0.224
18 PassiveAggressiveRegressor(random_state=42) 0.881 0.776 0.194

Sort the models by RMSE in ascending order to find the best one.

In [24]:
best_models=df_results.sort_values(by="RMSE")
best_model=best_models.iloc[0][0]
best_stack=best_models["Algorithms"].values
best_models
Out[24]:
Algorithms Training Score Validation Score RMSE
3 LassoCV() 0.940 0.912 0.122
7 LGBMRegressor(bagging_fraction=0.75, bagging_f... 0.964 0.911 0.122
1 ElasticNetCV(l1_ratio=0.9, max_iter=100, rando... 0.939 0.911 0.123
5 ([DecisionTreeRegressor(criterion='friedman_ms... 0.985 0.909 0.124
14 BayesianRidge() 0.944 0.908 0.124
13 RidgeCV(alphas=array([ 0.1, 1. , 10. ])) 0.943 0.908 0.124
2 Lasso(alpha=0.00047, random_state=42) 0.932 0.907 0.125
0 ElasticNet(alpha=0.001, l1_ratio=0.7, max_iter... 0.928 0.904 0.127
12 Ridge(alpha=6) 0.931 0.897 0.132
6 XGBRegressor(base_score=0.5, booster='gbtree',... 0.999 0.890 0.136
9 (ExtraTreeRegressor(random_state=1608637542), ... 1.000 0.883 0.140
10 SVR(C=20, epsilon=0.008, gamma=0.0003) 0.913 0.877 0.144
8 (DecisionTreeRegressor(max_features='auto', ra... 0.982 0.873 0.146
4 (DecisionTreeRegressor(max_depth=3, random_sta... 0.852 0.818 0.175
18 PassiveAggressiveRegressor(random_state=42) 0.881 0.776 0.194
11 DecisionTreeRegressor() 1.000 0.728 0.213
17 HuberRegressor() 0.755 0.703 0.224
16 KNeighborsRegressor() 0.804 0.695 0.227
15 KernelRidge() 0.765 0.569 0.269

We can see that the best of these models outperforms the neural network, so we use the LassoCV model for the final prediction.

In [25]:
print(best_model)
best_model.fit(pytrain.drop('SalePrice', axis=1), pytrain['SalePrice'])
y_test=best_model.predict(pytest)
submission = pd.read_csv('data/house-prices-advanced-regression-techniques/sample_submission.csv')
submission['SalePrice'] = np.expm1(y_test)
submission.to_csv('submission.csv', index=False)
LassoCV()

We predict with LassoCV and submit the results. The final score is 0.13583.