
Class 10th Unit-3: Evaluating Models

Introduction to Evaluating Models (20 MCQs with Answers)


1. What is the main purpose of evaluating an AI model?
a) To design the dataset
b) To measure the model’s performance
c) To increase dataset size
d) To collect new data
Answer: b) To measure the model’s performance


2. Which of the following is NOT an evaluation step in AI?
a) Measuring accuracy
b) Checking predictions
c) Collecting raw data
d) Comparing results with actual values
Answer: c) Collecting raw data


3. In AI, a “model” refers to:
a) A dataset
b) A trained system that makes predictions
c) A data label
d) A storage unit
Answer: b) A trained system that makes predictions


4. Why is evaluation important in AI projects?
a) To ensure the model is working correctly
b) To waste time
c) To reduce dataset size
d) To change the domain
Answer: a) To ensure the model is working correctly


5. Which of the following is a common way to evaluate a model?
a) Confusion Matrix
b) Random guessing
c) Data collection
d) File storage
Answer: a) Confusion Matrix


6. What does “ground truth” mean in evaluation?
a) The correct, actual outcome
b) The predicted output
c) The wrong prediction
d) The dataset size
Answer: a) The correct, actual outcome


7. A model that always predicts correctly is said to have:
a) Low accuracy
b) High accuracy
c) High error
d) Poor evaluation
Answer: b) High accuracy


8. Which metric is often used in classification problems?
a) Accuracy
b) Loss function
c) Precision & Recall
d) All of the above
Answer: d) All of the above


9. Evaluating a model helps in:
a) Improving the model
b) Identifying errors
c) Comparing with other models
d) All of the above
Answer: d) All of the above


10. If a model performs well on training data but poorly on testing data, it is called:
a) Underfitting
b) Overfitting
c) Balanced
d) Correct
Answer: b) Overfitting


11. Which of the following is NOT a model evaluation metric?
a) Precision
b) Recall
c) Accuracy
d) Data Collection
Answer: d) Data Collection


12. In model evaluation, “test data” is used to:
a) Train the model
b) Evaluate the model’s performance
c) Clean the dataset
d) Change labels
Answer: b) Evaluate the model’s performance


13. Which dataset is mainly used to check model accuracy?
a) Training dataset
b) Testing dataset
c) Raw dataset
d) Input dataset only
Answer: b) Testing dataset


14. What does “precision” measure in classification?
a) Correct positive predictions out of all positive predictions
b) Correct predictions out of all data
c) Correct negative predictions only
d) None of the above
Answer: a) Correct positive predictions out of all positive predictions


15. Recall measures:
a) Correct negatives
b) Correct positives out of all actual positives
c) Number of datasets
d) Wrong predictions
Answer: b) Correct positives out of all actual positives


16. Which one is a graphical tool to evaluate classification models?
a) Confusion matrix
b) Histogram
c) Scatter plot
d) Pie chart
Answer: a) Confusion matrix


17. A good evaluation method should be:
a) Reliable and accurate
b) Biased
c) Random
d) Incomplete
Answer: a) Reliable and accurate


18. Evaluating models helps to detect:
a) Overfitting and underfitting
b) Dataset collection errors only
c) Programming syntax errors
d) Hardware issues
Answer: a) Overfitting and underfitting


19. In evaluation, the difference between predicted and actual value is called:
a) Bias
b) Error
c) Accuracy
d) Weight
Answer: b) Error


20. The final goal of evaluating AI models is to:
a) Ensure better predictions and decision making
b) Store data safely
c) Collect more images
d) Remove datasets
Answer: a) Ensure better predictions and decision making

Importance of Model Evaluation (20 MCQs with Answers)


1. Why is model evaluation important in AI?
a) To measure how well a model performs
b) To increase dataset size
c) To reduce storage space
d) To remove labels
Answer: a) To measure how well a model performs


2. Model evaluation helps to check if a model is:
a) Biased or fair
b) Good or bad
c) Overfitting or underfitting
d) All of the above
Answer: d) All of the above


3. Which of the following is a benefit of evaluating models?
a) Helps improve accuracy
b) Helps select the best model
c) Helps understand limitations
d) All of the above
Answer: d) All of the above


4. Without evaluation, an AI model may lead to:
a) Wrong decisions
b) Ethical risks
c) Bias in predictions
d) All of the above
Answer: d) All of the above


5. Which dataset is most useful in model evaluation?
a) Training data
b) Testing data
c) Raw unclean data
d) Sample data only
Answer: b) Testing data


6. The importance of evaluation lies in identifying:
a) Model errors and weaknesses
b) Data storage format
c) Internet speed
d) Programming language
Answer: a) Model errors and weaknesses


7. Evaluating models helps in comparing:
a) Different datasets
b) Different algorithms or models
c) Different users
d) Different storage devices
Answer: b) Different algorithms or models


8. If a model performs well only on training data but not on test data, evaluation will show:
a) Underfitting
b) Overfitting
c) Balanced fitting
d) Error-free learning
Answer: b) Overfitting


9. Which of the following is a direct outcome of model evaluation?
a) Understanding model accuracy
b) Reducing dataset size
c) Collecting more raw data
d) Changing project domain
Answer: a) Understanding model accuracy


10. Evaluating a model ensures that it can:
a) Generalize to unseen data
b) Only memorize training data
c) Avoid predictions
d) Work without datasets
Answer: a) Generalize to unseen data


11. Which statement is TRUE about model evaluation?
a) It helps to monitor performance over time
b) It is only needed during training
c) It makes the dataset smaller
d) It removes labels from data
Answer: a) It helps to monitor performance over time


12. Model evaluation is important in healthcare AI to:
a) Ensure correct diagnosis predictions
b) Save internet speed
c) Reduce number of patients
d) Replace doctors fully
Answer: a) Ensure correct diagnosis predictions


13. A fair evaluation process reduces:
a) Model bias
b) Dataset collection
c) Internet usage
d) Coding errors
Answer: a) Model bias


14. Model evaluation is required before:
a) Deploying the model into real-world use
b) Collecting data
c) Choosing a domain
d) Writing project report
Answer: a) Deploying the model into real-world use


15. Why is accuracy alone not enough in evaluation?
a) Because it may ignore false positives and negatives
b) Because accuracy is always 100%
c) Because accuracy cannot be measured
d) Because accuracy removes labels
Answer: a) Because it may ignore false positives and negatives


16. Model evaluation is important in finance to:
a) Reduce financial risks from wrong predictions
b) Improve internet speed
c) Collect more coins
d) Eliminate all banks
Answer: a) Reduce financial risks from wrong predictions


17. A good evaluation ensures that the AI model is:
a) Reliable, accurate, and fair
b) Only fast but not accurate
c) Biased and overfitted
d) Random and inconsistent
Answer: a) Reliable, accurate, and fair


18. Evaluating models helps in deciding whether to:
a) Use the model in real life
b) Delete the dataset
c) Change computer hardware
d) Stop learning AI
Answer: a) Use the model in real life


19. In which step of the AI project cycle is evaluation important?
a) Only data collection
b) At the end before deployment
c) Only at the beginning
d) Never required
Answer: b) At the end before deployment


20. The main importance of model evaluation is to ensure:
a) Trust in AI decisions
b) More storage of data
c) Faster internet connection
d) Fewer datasets
Answer: a) Trust in AI decisions

Need for Model Evaluation (20 MCQs with Answers)


1. Why do we need model evaluation in AI?
a) To measure how well a model performs
b) To increase dataset size
c) To reduce memory usage
d) To write longer codes
Answer: a) To measure how well a model performs


2. The need for model evaluation arises because models can:
a) Overfit or underfit data
b) Run without data
c) Work without training
d) Always give 100% accuracy
Answer: a) Overfit or underfit data


3. Model evaluation ensures that a model can:
a) Generalize to unseen data
b) Memorize only training data
c) Always predict correctly
d) Work without testing
Answer: a) Generalize to unseen data


4. Which of the following best explains the need for evaluation?
a) To test model reliability and accuracy
b) To increase dataset collection cost
c) To remove unwanted features
d) To reduce coding steps
Answer: a) To test model reliability and accuracy


5. If a model is not evaluated, it may lead to:
a) Wrong decisions
b) Bias in predictions
c) Ethical issues
d) All of the above
Answer: d) All of the above


6. The need for model evaluation is highest before:
a) Deploying the model into real-world use
b) Collecting raw data
c) Choosing an AI domain
d) Writing a project report
Answer: a) Deploying the model into real-world use


7. Which dataset highlights the need for evaluation?
a) Training dataset
b) Testing dataset
c) Random dataset
d) Unlabeled dataset
Answer: b) Testing dataset


8. Evaluation helps to identify if the model is:
a) Fair and unbiased
b) Storing more data
c) Reducing file size
d) Increasing speed only
Answer: a) Fair and unbiased


9. Why is evaluation needed in sensitive domains like healthcare?
a) To ensure safe and accurate predictions
b) To reduce number of patients
c) To replace doctors entirely
d) To stop human involvement
Answer: a) To ensure safe and accurate predictions


10. The need for model evaluation can be linked to:
a) Improving model accuracy
b) Comparing different models
c) Building trust in AI
d) All of the above
Answer: d) All of the above


11. Without evaluation, a model may:
a) Perform poorly on unseen data
b) Still work perfectly everywhere
c) Not require testing data
d) Never make errors
Answer: a) Perform poorly on unseen data


12. Model evaluation is needed in finance because:
a) Wrong predictions may cause financial loss
b) Banks need more storage
c) People want faster internet
d) Models don’t require testing
Answer: a) Wrong predictions may cause financial loss


13. Model evaluation checks whether a model is:
a) Reliable and accurate
b) Larger in size
c) Free from datasets
d) Only faster in execution
Answer: a) Reliable and accurate


14. Why is model evaluation needed for comparing models?
a) To select the best performing one
b) To delete other models
c) To avoid training
d) To reduce dataset size
Answer: a) To select the best performing one


15. Which of the following is NOT a need for model evaluation?
a) Improving prediction quality
b) Reducing model bias
c) Checking fairness
d) Increasing internet speed
Answer: d) Increasing internet speed


16. The need for evaluation arises because accuracy alone:
a) May ignore false positives and negatives
b) Is always 100%
c) Cannot be measured
d) Removes labels
Answer: a) May ignore false positives and negatives


17. In AI project cycle, evaluation is needed to:
a) Verify performance before deployment
b) Stop model development
c) Delete unnecessary data
d) Write smaller codes
Answer: a) Verify performance before deployment


18. Why is evaluation needed in autonomous vehicles?
a) To ensure safety and prevent accidents
b) To save storage
c) To reduce car speed
d) To stop human drivers
Answer: a) To ensure safety and prevent accidents


19. Model evaluation is necessary to build:
a) Trust in AI predictions
b) Larger datasets only
c) Faster computers
d) Smaller projects
Answer: a) Trust in AI predictions


20. The key need for evaluating models is to:
a) Ensure accurate, fair, and ethical outcomes
b) Reduce dataset size
c) Increase programming languages
d) Avoid real-world use
Answer: a) Ensure accurate, fair, and ethical outcomes

Evaluating a Model’s Performance (20 MCQs with Answers)


1. What is the main purpose of evaluating a model’s performance?
a) To test model speed only
b) To check how well it predicts outcomes
c) To reduce dataset size
d) To increase training time
Answer: b) To check how well it predicts outcomes


2. Which dataset is mainly used to evaluate a model’s performance?
a) Training dataset
b) Testing dataset
c) Raw dataset
d) Unlabeled dataset
Answer: b) Testing dataset


3. Which of the following is a common metric for evaluating classification models?
a) Accuracy
b) Precision
c) Recall
d) All of the above
Answer: d) All of the above


4. Accuracy is defined as:
a) Correct predictions ÷ Total predictions
b) Correct predictions × Total predictions
c) Wrong predictions ÷ Total predictions
d) Predictions ÷ Features
Answer: a) Correct predictions ÷ Total predictions
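The accuracy formula in Q4 can be checked in a few lines of Python; the label lists below are hypothetical examples, not from any real dataset:

```python
# Hypothetical ground-truth labels and model predictions (1 = positive, 0 = negative).
actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Count positions where prediction matches the ground truth.
correct = sum(a == p for a, p in zip(actual, predicted))
accuracy = correct / len(actual)  # correct predictions ÷ total predictions
print(correct, accuracy)  # 8 correct out of 10, so accuracy is 0.8
```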


5. Which metric measures the proportion of correctly predicted positive cases?
a) Precision
b) Recall
c) Accuracy
d) F1-Score
Answer: a) Precision


6. Which metric measures how many actual positive cases were identified correctly?
a) Precision
b) Recall
c) Accuracy
d) Loss
Answer: b) Recall


7. The F1-score is useful because it:
a) Balances precision and recall
b) Only measures accuracy
c) Ignores wrong predictions
d) Works only on large datasets
Answer: a) Balances precision and recall


8. What is a confusion matrix?
a) A table showing correct and incorrect predictions
b) A matrix to store dataset
c) A way to reduce model confusion
d) A loss function
Answer: a) A table showing correct and incorrect predictions


9. Which of the following values appear in a confusion matrix?
a) True Positive, False Positive, True Negative, False Negative
b) Training Loss, Testing Loss
c) Dataset Size, Features
d) Weights and Biases
Answer: a) True Positive, False Positive, True Negative, False Negative
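The four confusion-matrix values in Q9 can be counted directly from paired labels; the lists below are hypothetical:

```python
# Hypothetical paired labels; 1 = positive class, 0 = negative class.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 1, 0, 1, 0, 0, 1, 0]

pairs = list(zip(actual, predicted))
tp = sum(a == 1 and p == 1 for a, p in pairs)  # predicted positive, truly positive
fp = sum(a == 0 and p == 1 for a, p in pairs)  # predicted positive, truly negative
tn = sum(a == 0 and p == 0 for a, p in pairs)  # predicted negative, truly negative
fn = sum(a == 1 and p == 0 for a, p in pairs)  # predicted negative, truly positive
print(tp, fp, tn, fn)  # 3 1 3 1
```

The four counts always sum to the total number of samples, which is a quick sanity check when filling in a confusion matrix by hand.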


10. Why is accuracy not always the best metric?
a) It ignores class imbalance
b) It is too complex
c) It needs more datasets
d) It always gives 0%
Answer: a) It ignores class imbalance


11. Which metric is most important in spam detection when the goal is to catch as many spam emails as possible?
a) Recall (to catch most spam emails)
b) Accuracy only
c) Dataset size
d) Training time
Answer: a) Recall (to catch most spam emails)


12. Precision is important when:
a) False positives must be minimized
b) False negatives must be maximized
c) Data is imbalanced
d) Model is too simple
Answer: a) False positives must be minimized


13. Recall is important when:
a) False negatives are costly
b) False positives don’t matter
c) Accuracy is high
d) Model is complex
Answer: a) False negatives are costly


14. Which performance metric is best for medical diagnosis (like cancer detection)?
a) High Recall
b) Low Recall
c) High False Negatives
d) High Dataset Size
Answer: a) High Recall


15. In evaluating regression models, which metrics are often used?
a) Mean Absolute Error (MAE)
b) Mean Squared Error (MSE)
c) Root Mean Squared Error (RMSE)
d) All of the above
Answer: d) All of the above


16. A model gives high accuracy on training data but poor accuracy on testing data. This is an example of:
a) Overfitting
b) Underfitting
c) Perfect model
d) Balanced model
Answer: a) Overfitting


17. If a model performs poorly on both training and testing data, it is an example of:
a) Underfitting
b) Overfitting
c) Perfect fitting
d) Balanced learning
Answer: a) Underfitting


18. Evaluating model performance helps in:
a) Identifying model weaknesses
b) Improving future predictions
c) Choosing the best algorithm
d) All of the above
Answer: d) All of the above


19. ROC curve is used in model evaluation to:
a) Show the trade-off between true positive rate and false positive rate
b) Store data in a matrix
c) Train regression models
d) Reduce dataset imbalance
Answer: a) Show the trade-off between true positive rate and false positive rate


20. Which of the following statements is TRUE about evaluating model performance?
a) It ensures the model is accurate, fair, and reliable
b) It increases dataset size automatically
c) It reduces programming effort
d) It avoids the need for testing data
Answer: a) It ensures the model is accurate, fair, and reliable

Accuracy and Error (20 MCQs with Answers)


1. What does accuracy measure in a model?
a) The speed of predictions
b) The proportion of correct predictions
c) The number of features in the dataset
d) The time taken for training
Answer: b) The proportion of correct predictions


2. Accuracy is calculated as:
a) (Correct predictions ÷ Total predictions) × 100
b) (Wrong predictions ÷ Total predictions) × 100
c) (Training size ÷ Testing size) × 100
d) (Features ÷ Records) × 100
Answer: a) (Correct predictions ÷ Total predictions) × 100


3. Which of the following is considered an error in model prediction?
a) Correct classification
b) Wrong classification
c) Dataset labeling
d) Data splitting
Answer: b) Wrong classification


4. Error rate is defined as:
a) Correct predictions ÷ Total predictions
b) Wrong predictions ÷ Total predictions
c) Training data ÷ Testing data
d) Features ÷ Labels
Answer: b) Wrong predictions ÷ Total predictions


5. If a model makes 80 correct predictions out of 100, its accuracy is:
a) 60%
b) 70%
c) 80%
d) 90%
Answer: c) 80%


6. If a model makes 20 wrong predictions out of 100, its error rate is:
a) 10%
b) 20%
c) 25%
d) 80%
Answer: b) 20%
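The worked numbers in Q5 and Q6 can be reproduced with the two formulas, which also shows why accuracy and error rate always sum to 100%:

```python
# Counts taken from Q5 and Q6: 80 correct predictions out of 100.
total = 100
correct = 80
wrong = total - correct

accuracy = correct / total * 100    # (correct ÷ total) × 100 = 80.0%
error_rate = wrong / total * 100    # (wrong ÷ total) × 100 = 20.0%
print(accuracy, error_rate, accuracy + error_rate)  # the two always sum to 100
```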


7. Accuracy is a good measure only when:
a) Classes are balanced
b) Dataset is unlabeled
c) Only training data is used
d) Model is overfitted
Answer: a) Classes are balanced


8. Which of the following problems reduces the usefulness of accuracy?
a) Class imbalance
b) Model speed
c) Training size
d) Feature selection
Answer: a) Class imbalance


9. High accuracy always means a good model. True or False?
a) True
b) False
Answer: b) False


10. In medical diagnosis, which is more important than just accuracy?
a) Precision and Recall
b) Dataset size
c) Training speed
d) Model complexity
Answer: a) Precision and Recall


11. Which of the following is NOT an error type in classification models?
a) True Positive
b) False Positive
c) False Negative
d) Dataset Split
Answer: d) Dataset Split


12. A model correctly classifies 900 samples out of 1000. What is the error rate?
a) 5%
b) 9%
c) 10%
d) 15%
Answer: c) 10%


13. What does zero error rate mean?
a) Model is underfitted
b) Model predictions are 100% correct
c) Model has no dataset
d) Model cannot predict
Answer: b) Model predictions are 100% correct


14. Which of the following formulas is correct for error rate?
a) Error Rate = Wrong Predictions ÷ Total Predictions
b) Error Rate = Correct Predictions ÷ Total Predictions
c) Error Rate = Correct ÷ Wrong Predictions
d) Error Rate = Dataset Size ÷ Features
Answer: a) Error Rate = Wrong Predictions ÷ Total Predictions


15. If accuracy is 85%, then error rate is:
a) 10%
b) 15%
c) 20%
d) 25%
Answer: b) 15%


16. Which scenario shows misleading accuracy?
a) When dataset has balanced classes
b) When dataset is small
c) When dataset is highly imbalanced
d) When model is trained properly
Answer: c) When dataset is highly imbalanced


17. In a dataset of 95 negatives and 5 positives, a model predicts all as negative. What is its accuracy?
a) 95%
b) 50%
c) 5%
d) 0%
Answer: a) 95%


18. Why is the above model (Q17) not good despite high accuracy?
a) It failed to identify positive cases
b) It has fewer features
c) It trained too fast
d) It has low dataset size
Answer: a) It failed to identify positive cases
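The scenario in Q17 and Q18 can be verified with a short sketch: with 95 negatives and 5 positives, a model that predicts everything as negative scores 95% accuracy yet identifies no positive case at all.

```python
# 95 negative samples and 5 positive samples, as in Q17.
actual = [0] * 95 + [1] * 5
# The model predicts every sample as negative.
predicted = [0] * 100

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
recall = sum(a == 1 and p == 1 for a, p in zip(actual, predicted)) / 5
print(accuracy)  # 0.95, which looks impressive
print(recall)    # 0.0, because every positive case is missed
```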


19. Accuracy + Error rate together will always equal:
a) 50%
b) 75%
c) 100%
d) Depends on dataset
Answer: c) 100%


20. Which statement is TRUE?
a) Accuracy shows correct predictions, error shows wrong predictions
b) Accuracy and error mean the same
c) Accuracy is only useful for regression
d) Error is ignored in evaluation
Answer: a) Accuracy shows correct predictions, error shows wrong predictions

Evaluation Metrics for Classification (20 MCQs with Answers)


1. Which of the following is NOT a classification evaluation metric?
a) Accuracy
b) Precision
c) Recall
d) Regression Line
Answer: d) Regression Line


2. Precision is defined as:
a) Correct positive predictions ÷ Total actual positives
b) Correct positive predictions ÷ Total predicted positives
c) Correct negatives ÷ Total negatives
d) Correct predictions ÷ Total predictions
Answer: b) Correct positive predictions ÷ Total predicted positives


3. Recall is also known as:
a) Specificity
b) Sensitivity
c) Accuracy
d) Error Rate
Answer: b) Sensitivity


4. Recall is calculated as:
a) True Positives ÷ (True Positives + False Negatives)
b) True Positives ÷ (True Positives + False Positives)
c) True Negatives ÷ (True Negatives + False Positives)
d) Wrong Predictions ÷ Total Predictions
Answer: a) True Positives ÷ (True Positives + False Negatives)


5. F1-Score is the:
a) Average of Precision and Recall
b) Maximum of Precision and Recall
c) Harmonic Mean of Precision and Recall
d) Difference of Precision and Recall
Answer: c) Harmonic Mean of Precision and Recall
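The precision, recall, and F1 formulas above can be combined in a short sketch; the confusion-matrix counts are hypothetical. Note that when precision and recall are equal, their harmonic mean equals that same value, which matches Q19.

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion-matrix counts.
tp, fp, fn = 45, 5, 15
precision = tp / (tp + fp)   # TP ÷ (TP + FP) = 45/50 = 0.9
recall = tp / (tp + fn)      # TP ÷ (TP + FN) = 45/60 = 0.75
print(round(f1_score(precision, recall), 3))  # 0.818, pulled toward the lower value
print(round(f1_score(0.9, 0.9), 2))           # equal precision and recall give 0.9
```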


6. Which of the following metrics is most useful in case of imbalanced datasets?
a) Accuracy
b) F1-Score
c) Dataset size
d) Training loss
Answer: b) F1-Score


7. In a confusion matrix, which value represents correctly predicted positive cases?
a) False Positive
b) True Negative
c) True Positive
d) False Negative
Answer: c) True Positive


8. In a confusion matrix, False Negative means:
a) Model predicted positive, but it was negative
b) Model predicted negative, but it was positive
c) Model predicted correctly
d) Model predicted randomly
Answer: b) Model predicted negative, but it was positive


9. Which metric answers the question: “Out of all actual positive cases, how many did the model correctly identify?”
a) Precision
b) Recall
c) Accuracy
d) F1-Score
Answer: b) Recall


10. Which metric answers the question: “Out of all predicted positive cases, how many are actually positive?”
a) Recall
b) Accuracy
c) Precision
d) Error rate
Answer: c) Precision


11. Specificity is defined as:
a) True Negatives ÷ (True Negatives + False Positives)
b) True Positives ÷ (True Positives + False Negatives)
c) False Positives ÷ (True Negatives + True Positives)
d) Wrong predictions ÷ Total predictions
Answer: a) True Negatives ÷ (True Negatives + False Positives)


12. If a model has high recall but low precision, it means:
a) It identifies almost all positives but with many false alarms
b) It predicts very few positives but mostly correct ones
c) It is highly accurate
d) It has a balanced performance
Answer: a) It identifies almost all positives but with many false alarms


13. If a model has high precision but low recall, it means:
a) It misses many positives but predictions made are mostly correct
b) It predicts all cases correctly
c) It predicts negatives only
d) It has poor accuracy
Answer: a) It misses many positives but predictions made are mostly correct


14. Which of the following is best used when False Negatives are very costly (e.g., medical tests)?
a) Precision
b) Recall
c) Accuracy
d) Specificity
Answer: b) Recall


15. Which of the following is best used when False Positives are very costly (e.g., spam detection)?
a) Precision
b) Recall
c) Accuracy
d) Sensitivity
Answer: a) Precision


16. F1-score balances between:
a) Accuracy and Error
b) Precision and Recall
c) Recall and Specificity
d) Precision and Accuracy
Answer: b) Precision and Recall


17. What does a confusion matrix represent?
a) Training speed of a model
b) Correct and incorrect predictions categorized into classes
c) Number of features used in training
d) Dataset size
Answer: b) Correct and incorrect predictions categorized into classes


18. Which evaluation metric becomes misleading in highly imbalanced datasets?
a) Precision
b) Recall
c) Accuracy
d) F1-score
Answer: c) Accuracy


19. If a model has Precision = 0.9 and Recall = 0.9, then F1-score is:
a) 0.81
b) 0.90
c) 0.95
d) 1.0
Answer: b) 0.90


20. Which of the following statements is TRUE?
a) Precision measures false positives, Recall measures false negatives
b) Precision measures true negatives, Recall measures true positives
c) Precision measures correct positives out of predicted positives, Recall measures correct positives out of actual positives
d) Precision and Recall mean the same thing
Answer: c) Precision measures correct positives out of predicted positives, Recall measures correct positives out of actual positives

Ethical Concerns around Model Evaluation (20 MCQs with Answers)


1. Which of the following is a major ethical concern in model evaluation?
a) Data bias
b) High accuracy
c) Low training speed
d) Use of large datasets
Answer: a) Data bias


2. Bias in AI models can lead to:
a) Fair and equal predictions
b) Unfair treatment of certain groups
c) Faster training time
d) Lower computation cost
Answer: b) Unfair treatment of certain groups


3. Which ethical concern arises when a model discriminates against people based on gender, race, or age?
a) Transparency
b) Fairness
c) Privacy
d) Accuracy
Answer: b) Fairness


4. Privacy concerns in model evaluation are related to:
a) Use of sensitive personal data
b) Model complexity
c) Dataset size
d) Evaluation speed
Answer: a) Use of sensitive personal data


5. If a medical AI system wrongly predicts that a patient does not have a disease, it raises an ethical issue related to:
a) False Negatives
b) Precision
c) Training size
d) Overfitting
Answer: a) False Negatives


6. Which of the following best describes the ethical concern of “Transparency” in model evaluation?
a) Keeping model results secret
b) Explaining clearly how the model makes decisions
c) Reducing accuracy for fairness
d) Hiding training data
Answer: b) Explaining clearly how the model makes decisions


7. Lack of explainability in AI models is also called:
a) Black-box problem
b) White-box problem
c) Training issue
d) Data leakage
Answer: a) Black-box problem


8. An AI model showing biased hiring decisions is an example of:
a) Accuracy problem
b) Ethical concern in fairness
c) Random error
d) High precision
Answer: b) Ethical concern in fairness


9. Which of the following is NOT an ethical concern in model evaluation?
a) Bias and fairness
b) Privacy
c) Transparency
d) Dataset normalization
Answer: d) Dataset normalization


10. Why is fairness important in model evaluation?
a) To ensure faster predictions
b) To reduce the number of datasets
c) To avoid discrimination and unfair treatment
d) To reduce storage needs
Answer: c) To avoid discrimination and unfair treatment


11. If a model performs well on one group but poorly on another, it violates:
a) Accuracy
b) Generalization
c) Fairness
d) Privacy
Answer: c) Fairness


12. Which ethical issue occurs if a user’s personal data is used without consent in AI evaluation?
a) Transparency
b) Fairness
c) Privacy violation
d) Accuracy drop
Answer: c) Privacy violation


13. Over-reliance on model accuracy without considering fairness can lead to:
a) More balanced datasets
b) Ethical issues in decision-making
c) Better transparency
d) Higher recall
Answer: b) Ethical issues in decision-making


14. If an AI admission system favors only students from certain schools, this shows:
a) Bias in training data
b) High accuracy
c) Strong privacy protection
d) Transparency in results
Answer: a) Bias in training data


15. Which metric alone cannot guarantee ethical evaluation of a model?
a) Accuracy
b) Precision
c) Recall
d) All of the above
Answer: d) All of the above


16. Which ethical principle ensures that AI systems do not harm people?
a) Fairness
b) Accountability
c) Non-maleficence
d) Transparency
Answer: c) Non-maleficence


17. A biased dataset leads to:
a) Ethical model evaluation
b) Unfair predictions
c) Balanced results
d) Transparent decisions
Answer: b) Unfair predictions


18. Why is accountability important in model evaluation?
a) To ensure someone is responsible for model outcomes
b) To make models run faster
c) To reduce dataset size
d) To increase memory usage
Answer: a) To ensure someone is responsible for model outcomes


19. What is the ethical concern if an AI system hides the reasoning behind its decision?
a) Privacy
b) Transparency
c) Accuracy
d) Bias
Answer: b) Transparency


20. Ethical evaluation of AI models should include:
a) Only accuracy measurement
b) Both technical performance and social impact
c) Only recall values
d) Only dataset size
Answer: b) Both technical performance and social impact
