# Prompt text classifications with transformer models! An exemplary introduction to prompt-based learning with large language models

## Metadata

- Authors: Christian Mayer, Sabrina Ludwig, Steffen Brandt
- Journal: Journal of Research on Technology in Education
- Published: 2022-11-22
- DOI: https://doi.org/10.1080/15391523.2022.2142872
- Citations: 51
- Source: OpenAlex

## Technology Hub

- Hub: Large Language Models
- Discipline: Computer Science / AI
- Hub URL: https://science-database.com/technology/large-language-models
- Hub llms.txt: https://science-database.com/technology/large-language-models/llms.txt

## Abstract

This study investigates the potential of automated classification using prompt-based learning approaches with transformer models (large language models trained in an unsupervised manner) for a domain-specific classification task. Prompt-based learning with zero or few shots has the potential to (1) make use of artificial intelligence without sophisticated programming skills and (2) make use of artificial intelligence without fine-tuning models on large amounts of labeled training data. We apply this novel method in an experiment using so-called zero-shot classification as a baseline and a few-shot approach for classification. For comparison, we also fine-tuned a language model on the given classification task and conducted a second independent human rating to compare with the human ratings from the original study. The dataset consists of 2,088 email responses to a domain-specific problem-solving task that were manually labeled for their professional communication style.
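The zero-shot and few-shot approaches mentioned above both work by phrasing the classification task as a text prompt for a language model. As a minimal sketch only: the paper does not publish its prompts, so the template wording, label names, and example texts below are hypothetical.

```python
# Hypothetical label set for the "professional communication style" task;
# the actual labels used in the study are not given on this page.
LABELS = ["professional", "not professional"]

def zero_shot_prompt(email_text: str) -> str:
    """Build a zero-shot prompt: a task instruction only, no labeled examples."""
    return (
        "Classify the communication style of the following email as "
        f"{' or '.join(LABELS)}.\n\n"
        f"Email: {email_text}\n"
        "Style:"
    )

def few_shot_prompt(email_text: str, examples: list[tuple[str, str]]) -> str:
    """Build a few-shot prompt: prepend a handful of labeled demonstrations."""
    shots = "\n\n".join(
        f"Email: {text}\nStyle: {label}" for text, label in examples
    )
    return (
        "Classify the communication style of each email as "
        f"{' or '.join(LABELS)}.\n\n"
        f"{shots}\n\n"
        f"Email: {email_text}\n"
        "Style:"
    )
```

The completed prompt would then be sent to a large language model, and the model's continuation is mapped back onto one of the labels; no gradient updates or labeled training corpus are required, which is what makes the approach accessible without programming or fine-tuning expertise.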
With the novel prompt-based learning approach, we achieved a Cohen's kappa of .40, while the fine-tuning approach yielded a kappa of .59, and the new human rating achieved a kappa of .58 with the original human ratings. However, the classifications from the machine learning models have the advantage that each prediction comes with a reliability estimate, allowing us to identify responses that are difficult to score. We therefore argue that response ratings should be based on a reciprocal workflow of machine raters and human raters, where the machine rates easy-to-classify responses and the human raters focus on, and agree on, the responses that are difficult to classify. Further, we believe that this new, more intuitive prompt-based learning approach will enable more people to use artificial intelligence.

## Links

- DOI: https://doi.org/10.1080/15391523.2022.2142872
- OpenAlex: https://openalex.org/W4309685935
- JSON API: https://science-database.com/api/v1/technology/large-language-models

---

Generated by science-database.com — The Knowledge Interface
Paper ID: oa-W4309685935 | Hub: large-language-models
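As a supplementary note on the reported metrics: the agreement values of .40, .59, and .58 are Cohen's kappa, a chance-corrected measure of agreement between two raters. A minimal pure-Python sketch (the ratings below are invented toy data, not the study's):

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labeled independently at their
    # observed label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy example with hypothetical labels:
machine = ["pro", "pro", "not", "pro", "not", "not"]
human = ["pro", "not", "not", "pro", "not", "pro"]
kappa = cohens_kappa(machine, human)  # 4/6 observed agreement, 0.5 expected
```

A kappa of 1.0 means perfect agreement, 0 means agreement no better than chance, which is why the abstract's .40 (prompt-based) versus .58–.59 (fine-tuning and second human rating) comparison is informative beyond raw accuracy.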