
Feature Selection — (Exploration Study)

There is a common belief these days that more features always give a model better discriminating power, but this does not hold in every situation. In practice, model performance typically improves only up to a point and then degrades as the number of features keeps growing, roughly like this:

[Figure: model performance versus number of features, rising to a peak and then falling off as more features are added]

In distance-based methods such as k-means (and instance-based learners more generally), every feature enters the distance calculation, so extra irrelevant features simply add noise to it. To handle this we should avoid irrelevant features during feature selection, because they only add noise to model learning. These extra features hurt performance especially when the training dataset is limited. In the data science world, this is known as the curse of dimensionality.

The curse of dimensionality, in simple words, is that as the number of features grows, the data becomes increasingly sparse in the feature space, so we need far more training examples and computational resources to learn reliably.
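To make this concrete, here is a small Python sketch (an assumed illustration, not from the original post) showing how distances between random points concentrate as the number of dimensions grows, which is why extra noisy features blur distance-based methods like k-means:

```python
# Assumed illustration: as the number of features grows, distances between
# random points become nearly indistinguishable, one face of the curse of
# dimensionality for distance-based methods.
import numpy as np

rng = np.random.default_rng(0)

for d in [2, 10, 100, 1000]:
    # 500 random points and one query point in d dimensions
    points = rng.random((500, d))
    query = rng.random(d)
    dists = np.linalg.norm(points - query, axis=1)
    # The relative gap between the farthest and nearest point shrinks as d grows,
    # so extra (noisy) features make distance comparisons less discriminative.
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:5d}  relative distance contrast = {contrast:.3f}")
```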

The standard remedy for this problem is feature reduction. There are two ways to do feature reduction:

  1. Feature Selection
  2. Feature Extraction

In this post, we explore the feature selection approach to feature reduction. Feature selection means choosing a subset of the original features and discarding the rest, rather than transforming them into new features.

If n is the number of features, there are 2^n possible feature subsets, so exhaustive search quickly becomes infeasible. Before starting, it also helps to prepare the dataset so that the candidate features are not highly correlated with each other.

Feature Selection:

There are two common ways to carry out the search:

  1. Forward Selection: start with an empty feature set and, at each step, add the single feature that improves the model the most, stopping when no addition helps (or when the desired number of features is reached).
  2. Backward Selection: start with the full feature set and, at each step, remove the single feature whose removal hurts the model the least. A minimal sketch of both directions follows this list.
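As a rough sketch of both directions, assuming scikit-learn's SequentialFeatureSelector, a LogisticRegression estimator, and the iris dataset purely for illustration:

```python
# Sketch of greedy forward and backward selection with scikit-learn.
# The estimator, dataset, and number of features kept are assumptions for illustration.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
estimator = LogisticRegression(max_iter=1000)

# Forward: start from an empty set and greedily add the feature that
# improves the cross-validated score the most at each step.
forward = SequentialFeatureSelector(
    estimator, n_features_to_select=2, direction="forward", cv=5
).fit(X, y)

# Backward: start from the full set and greedily drop the least useful feature.
backward = SequentialFeatureSelector(
    estimator, n_features_to_select=2, direction="backward", cv=5
).fit(X, y)

print("forward keeps features:", forward.get_support(indices=True))
print("backward keeps features:", backward.get_support(indices=True))
```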

So far we have been trying multiple combinations of features and optimizing the model over them. Alternatively, we can score each feature individually and keep only those that clear a cutoff; the surviving features then form the set used to build the model.

There are several ways to score features one at a time, which helps in picking the best ones:

  1. Pearson correlation coefficient
  2. F-score
  3. Chi-square
  4. Signal-to-noise ratio
  5. Mutual information

These scores help you identify the features that are genuinely useful.
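As a minimal sketch of such single-feature scoring, assuming scikit-learn and the iris dataset purely for illustration (F-score, chi-square, and mutual information via SelectKBest, plus Pearson correlation computed directly):

```python
# Sketch of univariate (filter) scoring with scikit-learn; the dataset and the
# choice of k are assumptions for illustration only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2, f_classif, mutual_info_classif

X, y = load_iris(return_X_y=True)

# Score each feature on its own and keep the k highest-scoring ones.
for name, score_func in [("F-score", f_classif),
                         ("Chi-square", chi2),          # needs non-negative features
                         ("Mutual information", mutual_info_classif)]:
    selector = SelectKBest(score_func=score_func, k=2).fit(X, y)
    print(name, "keeps features:", selector.get_support(indices=True))

# Pearson correlation of each feature with the target, computed directly.
pearson = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
print("Pearson |r| per feature:", np.round(np.abs(pearson), 3))
```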

I hope all of this helps you choose a good feature set and build a better model.
