Classification Algorithms MCQs

1. Which machine learning technique is primarily used for classification tasks and involves assigning data points to distinct, predefined categories?
a) Regression analysis
b) Decision trees
c) Clustering
d) Dimensionality reduction

Answer: b) Decision trees
Explanation: Decision trees are a popular method for classification tasks as they recursively split the data based on features to create a tree-like structure where each leaf node represents a class label.
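
For example, a minimal sketch using scikit-learn's DecisionTreeClassifier on its bundled iris dataset (the dataset and tree depth are illustrative assumptions, not part of the original question):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Load a small labelled dataset with three flower classes
X, y = load_iris(return_X_y=True)

# The tree recursively splits on feature thresholds; each leaf ends up holding a class label
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))  # predicted class labels for the first five samples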

2. What classification algorithm assumes independence among features and calculates the probability of a sample belonging to a certain class using Bayes’ theorem?
a) Decision tree
b) Naïve Bayes
c) Logistic regression
d) Support vector machine

Answer: b) Naïve Bayes
Explanation: Naïve Bayes classifiers assume that the presence of a particular feature in a class is unrelated to the presence of any other feature, making them simple yet effective for classification tasks.
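
A minimal Gaussian Naïve Bayes sketch with scikit-learn (the dataset is an illustrative assumption):

from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# GaussianNB treats each feature as conditionally independent given the class
nb = GaussianNB()
nb.fit(X, y)
print(nb.predict_proba(X[:1]))  # per-class probabilities obtained via Bayes' theorem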

3. Logistic regression is used for:
a) Linear regression
b) Binary classification
c) Clustering
d) Dimensionality reduction

Answer: b) Binary classification
Explanation: Logistic regression is commonly used for binary classification problems, where the target variable has only two possible outcomes.
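
A minimal binary-classification sketch with scikit-learn's LogisticRegression (the breast-cancer dataset is simply an assumed example of a two-class problem):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# A dataset with exactly two outcomes (malignant / benign)
X, y = load_breast_cancer(return_X_y=True)

# Logistic regression models the probability of the positive class
lr = LogisticRegression(max_iter=5000)
lr.fit(X, y)
print(lr.predict(X[:5]))        # predicted 0/1 labels
print(lr.predict_proba(X[:1]))  # probabilities for the two classes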

4. Support Vector Machine (SVM) is used for:
a) Regression analysis
b) Clustering
c) Classification
d) Dimensionality reduction

Answer: c) Classification
Explanation: SVM is primarily used for classification tasks, aiming to find the optimal hyperplane that separates different classes in the feature space.
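
A minimal SVM classification sketch with scikit-learn's SVC (the linear kernel and the dataset are illustrative choices):

from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# A linear-kernel SVM searches for the maximum-margin separating hyperplane
svm = SVC(kernel="linear", C=1.0)
svm.fit(X, y)
print(svm.predict(X[:5]))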

5. Random Forest is an ensemble learning method based on:
a) Singular decision trees
b) Boosting algorithms
c) Bagging algorithms
d) Gradient descent

Answer: c) Bagging algorithms
Explanation: Random Forest is a bagging algorithm that constructs multiple decision trees during training and outputs the class that is the mode of the classes of the individual trees.
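
A minimal Random Forest sketch with scikit-learn (the dataset and the number of trees are assumed for illustration):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# 100 trees, each fitted on a bootstrap sample of the data (bagging);
# the forest predicts the majority vote (mode) of the individual trees
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, y)
print(rf.predict(X[:5]))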

6. The K Nearest Neighbour classifier assigns a test sample the majority class of:
a) The nearest data point
b) The farthest data point
c) K nearest data points
d) All data points

Answer: c) K nearest data points
Explanation: The K Nearest Neighbour classifier assigns the test sample the majority class among its K nearest data points in the training set.
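
A minimal K Nearest Neighbour sketch with scikit-learn (K = 5 and the dataset are illustrative assumptions):

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Each query point receives the majority class of its 5 nearest training points
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
print(knn.predict(X[:5]))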

7. Which approach aims to reduce the computational complexity of nearest neighbor classification by selecting only a subset of prototypes from the training set?
a) Condensed nearest neighbor
b) Edited nearest neighbor
c) CNN-based classification
d) Feature selection

Answer: a) Condensed nearest neighbor
Explanation: The condensed nearest neighbor approach selects a subset of prototypes from the training set that adequately represents the entire dataset, reducing computational complexity while maintaining classification accuracy.
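
A from-scratch sketch of Hart's condensed nearest neighbour rule (the helper name and dataset are illustrative; a similar prototype-selection routine also ships with the imbalanced-learn package):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

def condensed_nearest_neighbor(X, y):
    # Hart's rule: keep only the prototypes needed for 1-NN to classify the training set correctly
    keep = [0]                      # seed the prototype store with the first sample
    changed = True
    while changed:
        changed = False
        nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
        for i in range(len(X)):
            if i in keep:
                continue
            if nn.predict(X[i:i + 1])[0] != y[i]:
                keep.append(i)      # store any point the current prototypes misclassify
                nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
                changed = True
    return np.array(keep)

X, y = load_iris(return_X_y=True)
idx = condensed_nearest_neighbor(X, y)
print(len(idx), "prototypes kept out of", len(X))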

8. Which technique combines the predictions from multiple classifiers to improve classification performance?
a) Feature extraction
b) Ensemble learning
c) Dimensionality reduction
d) Regularization

Answer: b) Ensemble learning
Explanation: Ensemble learning combines the predictions from multiple models to produce a more accurate and robust prediction compared to individual models.
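
A minimal ensemble sketch using scikit-learn's VotingClassifier to combine three different classifiers (the choice of base models is an assumption for illustration):

from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Combine three different classifiers by majority (hard) voting
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=5000)),
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))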

9. In machine learning, the training set is used for:
a) Evaluating the model’s performance
b) Tuning hyperparameters
c) Training the model
d) Validating the model’s generalization

Answer: c) Training the model
Explanation: The training set is used to train the machine learning model by adjusting its parameters to minimize the error between predicted and actual outcomes.
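
A minimal sketch of the usual split between training data and held-out data (the 70/30 split is an illustrative choice):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hold out 30% of the data; the remaining 70% is the training set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)         # parameters are learned from the training set only
print(clf.score(X_test, y_test))  # accuracy on held-out data estimates generalisation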

10. What is the process of scaling features so that they have a mean of 0 and a standard deviation of 1?
a) Standardization
b) Normalization
c) Regularization
d) Dimensionality reduction

Answer: a) Standardization
Explanation: Standardization rescales features to have properties of a standard normal distribution, making it easier for machine learning algorithms to learn the optimal parameters.
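
A minimal standardization sketch, both by hand and with scikit-learn's StandardScaler (the toy matrix is an assumed example):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# By hand: z = (x - mean) / std, computed per feature (column)
X_manual = (X - X.mean(axis=0)) / X.std(axis=0)

# Equivalent with scikit-learn
X_scaled = StandardScaler().fit_transform(X)

print(np.allclose(X_manual, X_scaled))              # True
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))  # ~0 and 1 per feature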
