Part 4: Measuring a Model for Massive Multitask Language Understanding (MMLU)

Kapalesachin · Jun 10, 2024


Does your model support multitasking, and to what degree?

Most of us have encountered large language models (LLMs) described as versatile tools, much like a Swiss Army knife — adept in many areas but not necessarily expert in all. This raises questions about how to effectively evaluate their strengths and limitations across different tasks. It’s crucial to identify standardized methods for assessing their multi-task language understanding and how well they perform in various domains.

What are the MMLU standards and recommendations?

When it comes to evaluating LLMs for massive multitask language understanding (MMLU), one of the most referenced papers is "Measuring Massive Multitask Language Understanding" by Hendrycks et al., which outlines a comprehensive framework for these evaluations. This paper is often cited when discussing standards for assessing the capabilities of LLMs across multiple domains. You can find the paper here:

https://arxiv.org/abs/2009.03300

MMLU evaluations involve various datasets and often take into account the number of "shots", i.e. worked examples provided to the model in the prompt during testing; a small illustration follows below. Some models are particularly sensitive to the length of the context provided, which affects their accuracy in specific areas.
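To make "shots" concrete, here is a hedged sketch of how a k-shot multiple-choice prompt is commonly assembled; the two worked examples and the exact wording are my own illustration rather than the paper's or the author's prompt template.

```python
# Illustrative only: build a k-shot multiple-choice prompt by prepending k worked
# examples (with answers) before the test question. The sample questions are made up.
def format_question(question, choices, answer_letter=None):
    letters = ["A", "B", "C", "D"]
    block = question + "\n"
    block += "\n".join(f"{letter}. {choice}" for letter, choice in zip(letters, choices))
    block += "\nAnswer:"
    if answer_letter is not None:
        block += f" {answer_letter}\n\n"
    return block

shots = [
    ("What is 2 + 2?", ["3", "4", "5", "6"], "B"),
    ("What is 10 / 2?", ["2", "5", "20", "8"], "B"),
]

prompt = "The following are multiple choice questions about elementary mathematics.\n\n"
for question, choices, answer in shots:            # k = 2 shots here; 5-shot is common
    prompt += format_question(question, choices, answer)
prompt += format_question("What is 7 * 6?", ["42", "36", "48", "54"])  # the model answers this one
print(prompt)
```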

A concern often raised is the potential for models to memorize parts of the training data. This can lead to artificially high accuracy if the evaluation questions overlap with the training set. To mitigate this, evaluators sometimes source questions from different documents or ensure that questions and answers are located on different pages. There are multiple MMLU datasets available; here I have used cais/mmlu, and the short sketch below shows how one subject can be loaded.
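A minimal sketch, assuming the Hugging Face `datasets` library is installed, of loading one subject configuration from the cais/mmlu repository and inspecting its fields:

```python
from datasets import load_dataset

# Each subject listed below corresponds to a configuration name (lowercase with
# underscores); "all" merges every subject into a single dataset.
mmlu = load_dataset("cais/mmlu", "elementary_mathematics")

print(mmlu)                      # splits: test, validation, dev
example = mmlu["test"][0]
print(example["question"])       # the question text
print(example["choices"])        # the four answer options
print(example["answer"])         # index (0-3) of the correct option
```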

List of subsets in a typical MMLU dataset:

The MMLU dataset is divided into several subsets, each covering a distinct field of knowledge. Here's a breakdown of the areas included in cais/mmlu, which is available on Hugging Face:

  1. Abstract Algebra (116 rows)
  2. Anatomy (154 rows)
  3. Astronomy (173 rows)
  4. Auxiliary Train (99.8k rows)
  5. Business Ethics (116 rows)
  6. Clinical Knowledge (299 rows)
  7. College Biology (165 rows)
  8. College Chemistry (113 rows)
  9. College Computer Science (116 rows)
  10. College Mathematics (116 rows)
  11. College Medicine (200 rows)
  12. College Physics (118 rows)
  13. Computer Security (116 rows)
  14. Conceptual Physics (266 rows)
  15. Econometrics (131 rows)
  16. Electrical Engineering (166 rows)
  17. Elementary Mathematics (424 rows)
  18. Formal Logic (145 rows)
  19. Global Facts (115 rows)
  20. High School Biology (347 rows)
  21. High School Chemistry (230 rows)
  22. High School Computer Science (114 rows)
  23. High School European History (188 rows)
  24. High School Geography (225 rows)
  25. High School Government and Politics (219 rows)
  26. High School Macroeconomics (438 rows)
  27. High School Mathematics (304 rows)
  28. High School Microeconomics (269 rows)
  29. High School Physics (173 rows)
  30. High School Psychology (610 rows)
  31. High School Statistics (244 rows)
  32. High School U.S. History (231 rows)
  33. High School World History (268 rows)
  34. Human Aging (251 rows)
  35. Human Sexuality (148 rows)
  36. International Law (139 rows)
  37. Jurisprudence (124 rows)
  38. Logical Fallacies (186 rows)
  39. Machine Learning (128 rows)
  40. Management (119 rows)
  41. Marketing (264 rows)
  42. Medical Genetics (116 rows)
  43. Miscellaneous (874 rows)
  44. Moral Disputes (389 rows)
  45. Moral Scenarios (1k rows)
  46. Nutrition (344 rows)
  47. Philosophy (350 rows)
  48. Prehistory (364 rows)
  49. Professional Accounting (318 rows)
  50. Professional Law (1.71k rows)
  51. Professional Medicine (308 rows)
  52. Professional Psychology (686 rows)
  53. Public Relations (127 rows)
  54. Security Studies (277 rows)
  55. Sociology (228 rows)
  56. U.S. Foreign Policy (116 rows)
  57. Virology (189 rows)
  58. World Religions (195 rows)

Evaluations using MMLU often cover these areas at a high level. Other MMLU datasets can also be used for more targeted evaluations, especially if you’re looking to apply LLMs in specific fields. It’s crucial to ensure the model’s evaluation in your area of interest meets the necessary standards.

Increasing model size alone doesn't guarantee better performance; it must be paired with rich, diverse training data. Some research suggests that a 10% increase in model size requires an approximately 5% increase in training data for effective improvement.

I performed a couple of tests on the "bert-base-uncased" model, which was pretrained on English Wikipedia (2.5B words) and BooksCorpus (800M words).

It was pretrained with the following configuration: 12 Transformer blocks (layers), a hidden size of 768, about 110 million total parameters, and a maximum input length of 512 tokens.
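These numbers can be checked quickly with the transformers library; a small sketch, assuming it is installed:

```python
# Confirm the bert-base-uncased configuration reported above.
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained("bert-base-uncased")
print(config.num_hidden_layers)         # 12 Transformer blocks
print(config.hidden_size)               # 768
print(config.max_position_embeddings)   # 512-token maximum input length

model = BertModel.from_pretrained("bert-base-uncased")
print(model.num_parameters())           # roughly 110 million parameters
```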

On the elementary mathematics subset I got an accuracy of around 21.95%, and again the confidence level was low. This model, developed by Google AI, uses a transformer architecture that leverages bidirectional training to understand the context of words in a sentence. Its primary pretraining objectives were Masked Language Modeling (MLM), i.e. predicting randomly masked tokens in sentences, and Next Sentence Prediction (NSP), i.e. understanding the relationship between pairs of sentences.

Outcome of evaluating the bert-base-uncased model against elementary mathematics

This evaluation specifically focuses on elementary mathematics. However, you can choose any subset from the dataset to assess a model’s performance, providing insights into its average accuracy across various domains.
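The exact script behind the 21.95% figure isn't reproduced here. As one plausible setup, the sketch below scores each answer choice by the length-normalised pseudo-log-likelihood that bert-base-uncased assigns to "question + choice" and picks the highest-scoring option; treat it as an illustration under those assumptions, not as the author's method.

```python
# A hedged sketch: score each answer choice with bert-base-uncased's masked-LM head
# and pick the choice with the highest average pseudo-log-likelihood.
import torch
from datasets import load_dataset
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def avg_pseudo_log_likelihood(text: str) -> float:
    """Average log-probability of each token when it is masked in turn."""
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):             # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / max(len(ids) - 2, 1)          # length-normalise so longer choices aren't penalised

test = load_dataset("cais/mmlu", "elementary_mathematics", split="test")
correct = 0
for ex in test:
    scores = [avg_pseudo_log_likelihood(f"{ex['question']} {choice}") for choice in ex["choices"]]
    correct += int(max(range(len(scores)), key=scores.__getitem__) == ex["answer"])
print(f"Accuracy: {correct / len(test):.2%}")
```

Because bert-base-uncased was never fine-tuned for multiple-choice question answering, accuracy near the 25% chance level of a four-option test, like the roughly 22% reported above, is about what you would expect.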

Conclusion:

As we continue to develop and use LLMs, it’s vital to assess whether existing evaluation standards are sufficient for our specific use cases. Creating custom evaluation datasets for your applications might be necessary. Over time, models may memorize evaluation data, requiring us to develop new datasets to ensure robust performance on unseen data. Ultimately, it’s up to us to decide how to evaluate pre-trained models effectively, and I hope these insights help you in evaluating any model from the MMLU perspective.


Written by Kapalesachin

Sachin Kapale works as a Director of Technical Architecture. His other articles are available at http://www.sachinkapale.com
