
Feature Selection — (Exploration Study)

These days there is a common belief that having more features automatically means better discrimination in a model, but this does not hold in every situation. Beyond a point, adding features actually hurts: as the number of features grows, model performance first improves and then starts to fall, roughly as shown below.

[Figure: model performance versus number of features; performance rises to a peak and then declines as more features are added]

Instance-based, distance-driven methods (k-means, for example) are especially sensitive to this, because every feature enters the distance calculation, so irrelevant features simply add noise to it. To handle this problem we should avoid irrelevant features when selecting the feature set, since they only add noise to the model's learning. The effect is most pronounced when the training dataset is limited. In the data science world, this is known as the curse of dimensionality.
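As a quick, hedged illustration (the dataset, feature counts, and the k-NN classifier below are arbitrary choices for demonstration, not from the article), appending pure-noise columns to a small dataset visibly drags down the cross-validated accuracy of a distance-based model:

```python
# Sketch: irrelevant features add noise to distance calculations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)

# Small dataset with 5 genuinely informative features.
X, y = make_classification(n_samples=200, n_features=5, n_informative=5,
                           n_redundant=0, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
print("informative only:", cross_val_score(knn, X, y, cv=5).mean())

# Append 50 pure-noise features; the distance metric now mixes in noise.
X_noisy = np.hstack([X, rng.randn(200, 50)])
print("with noise features:", cross_val_score(knn, X_noisy, y, cv=5).mean())
```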

In simple words, the curse of dimensionality means that as the number of features grows, the data becomes increasingly sparse in the feature space, so you need far more training examples and far more computational resources to learn reliably.

The main remedy for this problem is feature reduction, and there are two ways to do it:

  1. Feature Selection
  2. Feature Extraction

In this module, we are going to explore the feature selection approach to feature reduction. Feature selection means choosing, from the full set of features, a smaller subset that still captures the signal in the data.

If n is the size of the feature set, there are 2^n possible feature subsets, so exhaustive search quickly becomes impractical. A sensible first step is to remove features that are highly correlated with one another, so that the remaining candidates are largely uncorrelated.
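A minimal sketch of that pre-filtering step, assuming a pandas DataFrame and an illustrative 0.9 correlation threshold (both are assumptions, not from the article):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(100, 4), columns=["f1", "f2", "f3", "f4"])
df["f4"] = df["f1"] * 0.98 + 0.02 * np.random.rand(100)  # nearly duplicates f1

n = df.shape[1]
print("possible feature subsets:", 2 ** n)  # 2^n grows fast with n

# Keep only the upper triangle of the absolute correlation matrix,
# then drop any feature that is highly correlated with an earlier one.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
reduced = df.drop(columns=to_drop)
print("dropped:", to_drop)
```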

Feature Selection:

There are two common ways to carry out the search:

  1. Forward Selection: start with an empty feature set and, at each step, add the single feature that improves the model the most (see the sketch after this list).
  2. Backward Selection: start with the full feature set and, at each step, remove the single feature whose removal hurts the model the least.
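Both strategies are available in scikit-learn's SequentialFeatureSelector (scikit-learn >= 0.24). The estimator, the wine dataset, and the target subset size in this sketch are illustrative assumptions:

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Forward: start empty, greedily add the feature that improves CV score most.
forward = SequentialFeatureSelector(model, n_features_to_select=5,
                                    direction="forward", cv=5).fit(X, y)

# Backward: start with all 13 features, greedily drop the least useful one.
backward = SequentialFeatureSelector(model, n_features_to_select=5,
                                     direction="backward", cv=5).fit(X, y)

print("forward keeps columns:", forward.get_support(indices=True))
print("backward keeps columns:", backward.get_support(indices=True))
```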

Everything so far evaluates combinations of features by repeatedly refitting and optimizing the model, which can be expensive. A cheaper alternative is to score each feature individually and keep only those that clear a cutoff; the surviving features form the candidate set for the final model.

There are several common ways to score features one at a time (a code sketch follows this list):

  1. Pearson correlation coefficient
  2. F-score
  3. Chi-square
  4. Signal-to-noise ratio
  5. Mutual information
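As a hedged sketch using scikit-learn's SelectKBest: f_classif corresponds to the F-score, chi2 to the chi-square test, and mutual_info_classif to mutual information listed above; the "keep the top 5" cutoff and the wine dataset are illustrative assumptions:

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import (SelectKBest, chi2, f_classif,
                                       mutual_info_classif)

X, y = load_wine(return_X_y=True)  # all features non-negative, so chi2 is valid

for name, score_func in [("F-score", f_classif),
                         ("Chi-square", chi2),
                         ("Mutual information", mutual_info_classif)]:
    # Score each feature independently and keep the 5 best-scoring ones.
    selector = SelectKBest(score_func=score_func, k=5).fit(X, y)
    print(name, "keeps columns:", selector.get_support(indices=True))
```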

These scores help you keep only the features that genuinely carry information about the target.

I hope all this helps you prune your feature space and build a better model.
