How can LLMs be fine-tuned for summarization?

LLMs (Large Language Models) like GPT-3 can be fine-tuned for summarization using the following approaches; a minimal code sketch of each appears after this overview.

Supervised training - The simplest approach is to fine-tune the LLM on a large dataset of text-summary pairs: given the input text, the model is trained to generate the corresponding summary. This requires a sizable supervised dataset, which can be expensive to create; public datasets like CNN/DailyMail can be used instead.

Self-supervised training - The LLM is trained using the original text as input and its first few sentences as the "summary", creating weak supervision from the data itself. The model is then fine-tuned on a smaller set of human-written summaries to improve accuracy. This approach requires less labeled data.

Reinforcement learning - The LLM is first trained in an autoencoding fashion to reproduce the input text. Rewards are then given based on the quality and conciseness of the generated summary, and the model learns to produce better summaries through trial and error to maximize these rewards. The difficulty is that this requires defining a good reward function.

Filtering and post-processing - Summaries generated by the LLM can be filtered and refined using techniques such as:
• Extracting the sentences with the highest similarity to human references
• Removing repetitive sentences
• Combining overlapping content into a single sentence
This requires minimal fine-tuning of the base LLM but provides less control over the summary style.

Prompting - The LLM can be "prompted" to generate a summary using natural language instructions, for example: "In 2-3 short paragraphs, summarize the main points of the following text:". This relies more on the pre-trained abilities of the LLM and requires less labeled data, but accuracy tends to be lower.

In short, there is a variety of approaches to fine-tune LLMs for summarization, from fully supervised to minimally supervised. The choice depends on the available data, the required accuracy, and customization needs.
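First, a minimal sketch of the supervised approach, assuming the Hugging Face transformers and datasets libraries; the t5-small checkpoint, sequence lengths, and training hyperparameters are illustrative placeholders, not recommendations from the text above.

```python
# Supervised fine-tuning on CNN/DailyMail text-summary pairs.
# Assumptions: transformers>=4.21 and datasets are installed; t5-small
# and all hyperparameters below are illustrative, not prescriptive.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

dataset = load_dataset("cnn_dailymail", "3.0.0")
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def preprocess(batch):
    # Inputs are the articles (with T5's task prefix); labels are the
    # human-written reference highlights.
    inputs = tokenizer(["summarize: " + a for a in batch["article"]],
                       max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["highlights"],
                       max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="summarizer",
                                  per_device_train_batch_size=8,
                                  num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```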
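For the self-supervised approach, weak text-summary pairs can be manufactured directly from unlabeled documents. In the sketch below, make_pseudo_pair is a hypothetical helper and the regex sentence splitter is deliberately naive; the lead-3 heuristic mirrors the "first few sentences as the summary" idea.

```python
# Building weakly supervised pairs from raw text: the lead sentences act
# as a pseudo-summary. A real pipeline would use a proper sentence
# tokenizer instead of this naive regex split.
import re

def make_pseudo_pair(document: str, lead_sentences: int = 3) -> dict:
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    pseudo_summary = " ".join(sentences[:lead_sentences])
    return {"article": document, "highlights": pseudo_summary}

# Pre-train on pairs like this, then fine-tune on a small human-written
# summary set to correct the lead-sentence bias.
pair = make_pseudo_pair("A opens. B follows. C concludes. D is detail.")
print(pair["highlights"])  # -> "A opens. B follows. C concludes."
```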
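For reinforcement learning, the crux is the reward function. The toy stand-in below scores quality as unigram coverage of the source and conciseness as a length penalty; the 0.7/0.3 weights and 60-token target are arbitrary assumptions, and a production system would more likely use ROUGE against references or a learned reward model.

```python
# A toy reward for RL fine-tuning: trade off quality (here, unigram
# coverage of the source text) against conciseness (a length penalty).
# The weights and target length are arbitrary assumptions.
def summary_reward(source: str, summary: str, target_len: int = 60) -> float:
    src_tokens = set(source.lower().split())
    sum_tokens = summary.lower().split()
    if not sum_tokens:
        return 0.0
    coverage = sum(t in src_tokens for t in sum_tokens) / len(sum_tokens)
    brevity = min(1.0, target_len / len(sum_tokens))
    return 0.7 * coverage + 0.3 * brevity

# During RL, candidate summaries are sampled from the model and this
# score is maximized, e.g. with a policy-gradient method such as PPO.
```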
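For filtering and post-processing, the sketch below implements the "removing repetitive sentences" step using Jaccard word overlap; the 0.6 threshold is an assumption, and embedding-based similarity is a common alternative.

```python
# Post-processing a generated summary: drop sentences that are near
# duplicates of sentences already kept, measured by Jaccard overlap.
# The 0.6 threshold is an assumption.
import re

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def drop_repetitive(summary: str, threshold: float = 0.6) -> str:
    kept, kept_tokens = [], []
    for sent in re.split(r"(?<=[.!?])\s+", summary.strip()):
        tokens = set(re.findall(r"\w+", sent.lower()))
        if all(jaccard(tokens, prev) < threshold for prev in kept_tokens):
            kept.append(sent)
            kept_tokens.append(tokens)
    return " ".join(kept)

print(drop_repetitive("The deal closed. The deal closed today. Costs fell."))
# -> "The deal closed. Costs fell."
```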
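Finally, a sketch of the prompting approach, assuming the OpenAI Python client (openai>=1.0); the model name is an illustrative placeholder, and any instruction-following LLM could be substituted. No fine-tuning happens here; the instruction alone steers the output.

```python
# Prompt-based summarization via the OpenAI chat completions API.
# Assumptions: OPENAI_API_KEY is set in the environment, and the model
# name is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()
text = "..."  # the document to summarize

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": ("In 2-3 short paragraphs, summarize the main points "
                    "of the following text:\n\n" + text),
    }],
)
print(response.choices[0].message.content)
```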