---EZMCQ Online Courses---
- Experience (E)
- Task (T)
- Performance Measure (P)

The classical definition of Machine Learning by Tom M. Mitchell (1997) provides a foundational and operational way of understanding what it means for a system to "learn." In his widely cited book Machine Learning, Mitchell defines learning as follows:
"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
— Tom M. Mitchell, 1997
This definition breaks machine learning into three critical components: Experience (E), Task (T), and Performance (P). It frames learning not as a vague human-like concept, but as a measurable improvement in task performance driven by data (experience). It's a practical definition, directly applicable in designing learning systems.
For example, if you are building a spam email filter:
- E: the email messages the system has processed
- T: the task of classifying messages as spam or not spam
- P: accuracy or precision-recall on a test dataset
This definition is domain-agnostic and scalable: it can be applied to supervised, unsupervised, and reinforcement learning, and even modern deep learning systems. It's a central concept for students to internalize, as it bridges algorithmic design, evaluation, and learning outcomes into one cohesive framework.
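The spam-filter mapping of E, T, and P can be sketched in a few lines of code. This is a toy illustration only: the messages, the keyword-based "learner," and the held-out test set below are all hypothetical, and a real spam filter would use a statistical model rather than keyword matching.

```python
# A toy illustration of Mitchell's E/T/P framing for a spam filter.
# All data and the keyword rule are hypothetical, for illustration only.

# E: experience -- (message, label) pairs the system has processed
experience = [
    ("win money now", "spam"),
    ("meeting at noon", "ham"),
    ("free prize inside", "spam"),
    ("lunch tomorrow?", "ham"),
]

# T: the task -- classify a message as spam or not spam
def classify(message, spam_words):
    return "spam" if any(w in message for w in spam_words) else "ham"

# "Learning": derive spam keywords from the experience (a crude rule learner)
spam_words = {w for msg, label in experience if label == "spam"
              for w in msg.split()}

# P: performance measure -- accuracy on a held-out test set
test_set = [("free money offer", "spam"), ("see you at lunch", "ham")]
correct = sum(classify(msg, spam_words) == label for msg, label in test_set)
accuracy = correct / len(test_set)
print(f"accuracy: {accuracy:.2f}")
```

The point of the sketch is the separation of concerns: the experience feeds the learner, the task fixes what the system must do, and the performance measure quantifies whether more experience actually helped.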
- Experience (E)
Experience (E) refers to the data or interactions a machine learning system uses to improve its performance. This could include labeled datasets in supervised learning, patterns in raw data in unsupervised learning, or environmental feedback in reinforcement learning. For instance, in a handwriting recognition task, the experience might be thousands of labeled images of handwritten digits.
Experience is crucial because machine learning algorithms do not possess inherent knowledge; they rely entirely on data to learn patterns. The quality, diversity, and size of the dataset greatly affect the model's ability to generalize well to unseen instances. In reinforcement learning, experience takes the form of state-action-reward sequences obtained through interaction with the environment.
Mitchell's definition emphasizes that learning must be data-driven: there is no learning without experience. In deep learning, this experience is typically enormous datasets (e.g., ImageNet), enabling models with millions of parameters to detect abstract and complex features. Thus, "experience" defines the foundation and context from which the model derives its knowledge and improves over time.
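A minimal sketch of why experience matters, under an assumed toy setup: the "pattern" to be learned is a coin's true bias (a hypothetical value of 0.7), and the learner's only knowledge is an estimate formed from observed flips. More experience generally yields a better estimate.

```python
import random

# Toy setup: the underlying pattern is a coin bias of 0.7 (hypothetical).
# The learner's "knowledge" is its estimate of that bias from observed flips.
random.seed(0)
TRUE_BIAS = 0.7

def learn_bias(n_samples):
    """Experience E = n_samples coin flips; return the estimated bias."""
    flips = [1 if random.random() < TRUE_BIAS else 0 for _ in range(n_samples)]
    return sum(flips) / n_samples

# With more experience, the estimate tends to move closer to the true bias.
for n in (10, 100, 10_000):
    estimate = learn_bias(n)
    print(f"E = {n:6d} flips -> estimate {estimate:.3f}, "
          f"error {abs(estimate - TRUE_BIAS):.3f}")
```

The same principle scales up: a deep network's parameters play the role of the estimate, and a large dataset plays the role of the flips.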
- Task (T)
The task (T) defines the specific activity or problem that the learning system is designed to perform. This could be classification, regression, clustering, control, or prediction. In supervised learning, the task might be classifying images, predicting house prices, or detecting fraudulent transactions.
Defining the task clearly is essential because it informs the choice of algorithms, model architecture, evaluation metrics, and training strategies. For instance, a neural network trained to detect objects in images has a different task from one trained to generate text. Even with the same dataset, tasks can vary: one model might cluster data (unsupervised), while another might predict a specific label (supervised).
Mitchell's inclusion of a "task" anchors learning in purposeful performance. It avoids ambiguous definitions of learning and ensures that performance can be concretely assessed. In reinforcement learning, the task might involve finding a policy to maximize cumulative reward in a game-playing scenario. Across all paradigms, clarity in defining the task is foundational to building effective and meaningful machine learning systems.
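The point that the same dataset can serve different tasks can be shown with a sketch. The 1-D points and labels below are hypothetical: task 1 learns a supervised threshold classifier from the labels, while task 2 clusters the identical data without ever looking at the labels.

```python
# Same data, two different tasks T -- a sketch with hypothetical 1-D points.
data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
labels = ["low", "low", "low", "high", "high", "high"]  # used only by task 1

# Task 1 (supervised): learn a threshold midway between the two label groups
threshold = (max(x for x, y in zip(data, labels) if y == "low")
             + min(x for x, y in zip(data, labels) if y == "high")) / 2

def predict(x):
    return "high" if x > threshold else "low"

# Task 2 (unsupervised): cluster the same points into two groups, no labels
def two_means(points, iters=10):
    c0, c1 = min(points), max(points)  # initialize centroids at the extremes
    for _ in range(iters):
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

print(predict(2.0), two_means(data))
```

Both programs consume the same experience, but because the tasks differ, so do the algorithms, the outputs, and the appropriate performance measures.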
- Performance Measure (P)
The performance measure (P) evaluates how well the learning system accomplishes its task. It provides a quantitative metric to assess whether learning has occurred. Examples include accuracy, precision, recall, F1-score, mean squared error, or cumulative reward, depending on the nature of the task.
Performance measures are critical because they offer an objective standard to compare models and guide optimization during training. Without a clearly defined performance metric, one cannot assess improvement, convergence, or generalization. For example, in a medical diagnosis task, the performance measure might prioritize recall (sensitivity) to avoid missing positive cases.
In deep learning, models are often trained to optimize a loss function, which indirectly relates to the performance measure. For instance, minimizing cross-entropy loss often correlates with increased classification accuracy. In reinforcement learning, performance is usually tied to the long-term reward signal, reflecting how effectively an agent achieves its goal over time.
Mitchell's inclusion of performance measures emphasizes that learning must be measurable. It distinguishes between change (e.g., random behavior) and improvement (e.g., optimized behavior), anchoring the definition of learning in both effectiveness and progress.
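The classification metrics named above (accuracy, precision, recall, F1) can all be computed from the four confusion-matrix counts. The true and predicted labels below are hypothetical, with 1 marking a positive case (e.g., a positive diagnosis).

```python
# Computing common performance measures P from predictions -- a sketch
# with hypothetical true/predicted labels (1 = positive case).
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)  # of predicted positives, how many were real
recall = tp / (tp + fn)     # of real positives, how many were caught
f1 = 2 * precision * recall / (precision + recall)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

In the medical-diagnosis scenario mentioned above, one would watch recall in particular, since a false negative (fn) means a missed positive case.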
- Mitchell, Tom M. Machine Learning. New York: McGraw-Hill, 1997.
- Bishop, Christopher M. Pattern Recognition and Machine Learning. New York: Springer, 2006.
- Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. Cambridge: MIT Press, 2016.
- Murphy, Kevin P. Machine Learning: A Probabilistic Perspective. Cambridge: MIT Press, 2012.
- Biswas, Milon, M. Shamim Kaiser, Mufti Mahmud, Shamim Al Mamun, Md Shahadat Hossain, and Muhammad Arifur Rahman. "An XAI based autism detection: the context behind the detection." In International Conference on Brain Informatics, pp. 448-459. Cham: Springer International Publishing, 2021.