ULMFiT: Universal Language Model Fine-Tuning for Text Classification
Shenson Joseph1, Herat Joshi2

1Shenson Joseph, Department of Computer Engineering, University of North Dakota, Houston (Texas), United States of America (USA).

2Herat Joshi, Department of Analytics & Decision Support, Great River Health Systems, Burlington (Iowa), United States of America (USA).

Manuscript received on 02 October 2024 | Revised Manuscript received on 11 October 2024 | Manuscript Accepted on 15 October 2024 | Manuscript published on 30 October 2024 | PP: 1-9 | Volume-4 Issue-6 October 2024 | Retrieval Number: 100.1/ijamst.E304904061024 | DOI: 10.54105/ijamst.E3049.04061024

© The Authors. Published by Lattice Science Publication (LSP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Abstract: While inductive transfer learning has revolutionized computer vision, existing approaches in natural language processing still require training from scratch and task-specific modifications. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method applicable to any task in NLP, and introduce key techniques for fine-tuning a language model. Our method significantly outperforms the state of the art on six text classification tasks, reducing error by 18–24% on most datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100 times more data. We have made our pretrained models and code publicly available.
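To make the fine-tuning strategies concrete: ULMFiT relies on discriminative fine-tuning (a separate learning rate per layer group, decayed by a factor of 2.6 from the top layer downwards) and gradual unfreezing (unfreezing one layer group per epoch, starting from the classifier head). The sketch below illustrates both ideas in plain PyTorch; it is not the authors' released code, and the placeholder architecture, layer sizes, optimizer choice, and dummy training step are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Placeholder classifier standing in for the AWD-LSTM used in ULMFiT;
# the layer sizes here are illustrative only.
model = nn.Sequential(
    nn.Linear(100, 64),   # lowest layer group
    nn.ReLU(),
    nn.Linear(64, 32),    # middle layer group
    nn.ReLU(),
    nn.Linear(32, 2),     # task-specific classifier head
)
layer_groups = [model[0], model[2], model[4]]

# Discriminative fine-tuning: each layer group gets its own learning rate,
# decayed by a factor of 2.6 from the head downwards (the factor reported for ULMFiT).
base_lr = 1e-2
param_groups = [
    {"params": g.parameters(), "lr": base_lr / (2.6 ** (len(layer_groups) - 1 - i))}
    for i, g in enumerate(layer_groups)
]
optimizer = torch.optim.SGD(param_groups, lr=base_lr)

# Gradual unfreezing: freeze everything except the head, then unfreeze one
# additional layer group per epoch, working from the top of the model down.
for g in layer_groups[:-1]:
    for p in g.parameters():
        p.requires_grad_(False)

for epoch, g in enumerate(reversed(layer_groups)):
    for p in g.parameters():
        p.requires_grad_(True)
    # One dummy fine-tuning step on random data, in place of a real epoch.
    x, y = torch.randn(8, 100), torch.randint(0, 2, (8,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, these strategies are applied on top of a language model first pretrained on a large general corpus and then fine-tuned on the target-domain text before the classifier is trained.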

Keywords: ULMFiT, Transfer Learning, Language Model Fine-Tuning, Text Classification, NLP.
Scope of the Article: Natural Language Processing