Similar to the gradient boosting tree classification method, gradient boosting tree regression applies the gradient boosting tree model to a regression problem, that is, to predicting a continuous value.
In this method, a gradient boosting tree regression model is trained on the training data; the resulting model captures the characteristics of the data and can then be used for prediction.
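To illustrate the idea behind the method (this is a teaching sketch, not the tool's actual implementation), the following pure-Python code boosts one-split "stumps" against the residuals of a single explanatory variable, using the squared loss; all function names and values here are illustrative:

```python
# Illustrative gradient boosting for regression with squared loss:
# each round fits a one-split "stump" to the current residuals and adds
# a damped copy of it to the ensemble. Real tools use full decision trees.
def fit_stump(x, residuals):
    """Find the single split that best fits the residuals (least squares)."""
    best = None
    for t in sorted(set(x))[:-1]:  # candidate thresholds; both sides non-empty
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi: lmean if xi <= t else rmean

def gbt_fit(x, y, n_trees=20, learning_rate=0.5):
    """Train the ensemble; returns a prediction function."""
    base = sum(y) / len(y)              # start from the mean of y
    pred = [base] * len(x)
    stumps = []
    for _ in range(n_trees):            # corresponds to "Maximum Iterations"
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + learning_rate * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + learning_rate * sum(s(xi) for s in stumps)

# Example: a step-shaped target is recovered after a few boosting rounds.
predict = gbt_fit([1, 2, 3, 4, 5, 6], [1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
# predict(2) approaches 1.0 and predict(5) approaches 5.0 as rounds increase.
```

Each iteration only has to explain what the previous trees got wrong, which is why the residuals shrink round by round.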
Training Dataset: a required parameter. Connection information for the dataset to be trained, including the data type, connection parameters, dataset name, etc. HBase data, DSF data, and local data are supported.
Data Query Conditions: an optional parameter. Filters the data to be analyzed according to the specified query conditions; both attribute conditions and spatial queries are supported, e.g. SmID < 100 and BBOX(the_geom, 120, 30, 121, 31).
Explanatory Fields: a required parameter. One or more fields of the training dataset used as the independent variables of the model; they help predict the value of the modeling field.
Modeling Field: a required parameter. The field used to train the model, i.e., the dependent variable. This field holds the known (training) values of the variable that will be predicted at unknown locations.
Depth of the Tree: an optional parameter; the maximum number of splits along any branch of a tree. The value range is 0-30 and the default is 30. A larger maximum depth creates more splits, which may increase the likelihood of overfitting the model.
Maximum Iterations: the maximum number of boosting iterations; must be greater than 0. The default value is 100.
Percent of Data Used During Training: an optional parameter. The percentage of features used to train each gradient boosting tree. The value range is 0-1.0 and the default is 1.0, meaning 100% of the data. Using a lower percentage of the input data per tree can speed up the tool on large datasets.
Loss Function Type: determines how residuals are measured; the smaller the residual, the better the fit. The supported values are SQUARED and ABSOLUTE. SQUARED is the usual choice; select ABSOLUTE when the data contains many outliers, because the squared loss amplifies large error values.
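A small numeric illustration (the residual values are made up) of why the squared loss amplifies outliers while the absolute loss does not:

```python
# One large outlier among the residuals dominates the squared loss
# but shifts the absolute loss only moderately.
residuals = [1.0, -1.0, 0.5, 100.0]   # the last residual is an outlier

squared_loss = sum(r ** 2 for r in residuals) / len(residuals)
absolute_loss = sum(abs(r) for r in residuals) / len(residuals)

print(squared_loss)   # 2500.5625 -- dominated by 100**2
print(absolute_loss)  # 25.625
```

Because squaring weights the outlier by its own magnitude, a model trained with the squared loss will bend toward outliers; the absolute loss treats every unit of error equally.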
Leaf Node Splitting Threshold: an optional parameter; the minimum number of observations required to retain a leaf (i.e., a terminal node that is not split further). The minimum value is 0 and the default is 1. For very large datasets, increasing this value reduces the tool's runtime.
Model Save Directory: an optional parameter; the path where the trained model is saved. If empty, the model is not saved.
gbtModelCharacteristics: properties of the gradient boosting tree regression model.
Variable: the field array of the gradient boosting tree regression model, i.e., the independent-variable fields used in training.
featureImportances: the importance of each field, i.e., the degree of influence of each independent variable on the dependent variable.
mse: mean squared error, the mean of the squared errors between predicted and true values.
rmse: root mean squared error, the square root of the mean squared error between predicted and true values.
mae: mean absolute error, the mean of the absolute errors between predicted and true values.
r2: coefficient of determination, used to judge model quality. The value range is [0,1]; generally, the larger r2 is, the better the model fits. Note that r2 tends to increase as explanatory variables are added, so it indicates goodness of fit only roughly rather than quantifying accuracy exactly.
explainedVariance: the explained variance, which measures how much of the variation in the dependent variable the model accounts for.
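The evaluation metrics above can be reproduced by hand. The sketch below computes them for a made-up set of true values and predictions (the numbers are illustrative, not from the tool):

```python
import math

# Hypothetical true values and model predictions.
y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.8, 5.4, 2.9, 6.6]
errors = [t - p for t, p in zip(y_true, y_pred)]
n = len(errors)

mse = sum(e ** 2 for e in errors) / n    # mean squared error
rmse = math.sqrt(mse)                    # root mean squared error
mae = sum(abs(e) for e in errors) / n    # mean absolute error

mean_y = sum(y_true) / n
ss_tot = sum((t - mean_y) ** 2 for t in y_true)   # total sum of squares
r2 = 1 - sum(e ** 2 for e in errors) / ss_tot     # coefficient of determination

# Explained variance: like r2, but using the variance of the errors,
# so a constant bias in the predictions is not penalized.
mean_e = sum(errors) / n
explained_variance = 1 - (sum((e - mean_e) ** 2 for e in errors) / n) / (ss_tot / n)
```

For these values, mse is 0.13, mae is 0.35, and r2 is about 0.96, i.e., the model explains most of the variation in the dependent variable.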