LLM Overview Slides | LLM & RAG Guide

AGENDA

LLM TOPICS
LLM OVERVIEW
APPLICATIONS OF LLM
NLP APPLICATIONS
SOFTWARE APPLICATIONS
EVOLVING APPLICATIONS OF LLMS
OPEN SOURCE VS CLOSED
BUILDING BLOCKS OF LLMS
MULTIMODAL LLMS
COMMON TERMINOLOGY LLMS
AI ASSISTANT TYPES
TYPES OF AI ASSISTANTS
FEATURES OF AI ASSISTANTS
EXAMPLE FEATURES OF AI ASSISTANTS
AI ASSISTANT EVALUATION METRICS
ASSISTANT BOT METRICS
METRICS TO EVALUATE AI ASSISTANTS
TECHNICAL METRICS AI ASSISTANTS
METRICS FOR SEARCH BOT
METRICS FOR RECOMMENDATION BOT
BEHAVIORAL METRICS FOR RECOMMENDATION BOT
CRITERIA TO COMPARE LLMS
LLM TECHNOLOGY SLIDES
AI ASSISTANT TECH STACK
AI ASSISTANT ARCHITECTURE
CONSIDERATIONS FOR BOT ARCHITECTURE
RAG SLIDES
WHEN TO USE RAG
WHEN NOT TO USE RAG
AI ASSISTANT WRAPPER
AI ASSISTANT ON YOUR DATA
AI ASSISTANT FINETUNE MODEL
AI ASSISTANT CUSTOM MODEL
AI ASSISTANT BUILDING BLOCKS
LLM FUNCTION CALLING
RAG OVERVIEW SLIDES
RAG ARCHITECTURE SLIDE
RAG RETRIEVER OPTIONS
RAG NODE PROCESSOR
RAG NODE POST PROCESSOR
HOW TO FORM RESPONSE IN AI ASSISTANT
LLM ARCHITECTURE FOR BOT
LLM CONCERNS SLIDES
LLM THREATS SLIDES
LLM CHALLENGES SLIDE
LLM ETHICAL CONCERNS SLIDES
LLM UNCONTROLLED BEHAVIOR SLIDES
LLM ETHICAL ISSUES
DATA OWNERSHIP ISSUES LLM
LLM GENERATED OUTPUT ISSUES
LLM ENVIRONMENT ISSUES
ENTERPRISE GRADE ANSWERS AI ASSISTANT
APPROACHES TO VERIFY AI ASSISTANT
EXAMPLE BOT ASSISTANTS
CUSTOMER ONBOARDING BOT
CUSTOMER ONBOARDING AI ASSISTANT
ARCHITECTURE SLIDE FOR CUSTOMER ONBOARDING
LLM TRAINING STEPS
ARCHITECTURE LLM TRAINING

Large Language Models (LLMs)


LLMs are a type of artificial intelligence (AI) capable of processing and generating human-like text in response to a wide range of prompts and questions. Trained on massive datasets of text and code, they can perform tasks such as:

Generating creative content in many formats: stories, poems, code, scripts, musical pieces, emails, letters, etc., often indistinguishable from human-written content.
Answering open-ended, challenging, or unusual questions in an informative way, drawing on the knowledge absorbed during training.
Translating languages: converting text from one language to another.

Retrieval Augmented Generation (RAG)


RAG is an approach that combines the strengths of LLMs with external knowledge sources. It works in three stages:

Retrieval: When given a prompt, RAG searches through an external database of relevant documents to find information related to the query.
Augmentation: The retrieved information is then used to enrich the context provided to the LLM. This can be done by incorporating facts, examples, or arguments into the prompt.
Generation: Finally, the LLM uses the enhanced context to generate a response that is grounded in factual information and tailored to the specific query.
RAG offers several advantages over traditional LLM approaches:

Improved factual accuracy: By anchoring responses in real-world data, RAG reduces the risk of generating false or misleading information.
Greater adaptability: As external knowledge sources are updated, RAG can access the latest information, making it more adaptable to changing circumstances.
Transparency: RAG facilitates a clear understanding of the sources used to generate responses, fostering trust and accountability.
However, RAG also has its challenges:

Data quality: The accuracy and relevance of RAG's outputs depend heavily on the quality of the external knowledge sources.
Retrieval efficiency: Finding the most relevant information from a large database can be computationally expensive.
Integration complexity: Combining two different systems (retrieval and generation) introduces additional complexity in terms of design and implementation.
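The retrieval-augmentation-generation loop described above can be sketched end to end. This is a toy illustration, not a production pipeline: the keyword-overlap scoring stands in for a real vector search, and the final call to an LLM is omitted, since that depends on whichever model or API you use.

```python
import re

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Retrieval step: rank documents by word overlap with the query.
    A real system would use embeddings and a vector index instead."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:top_k]

def augment(query: str, passages: list[str]) -> str:
    """Augmentation step: fold the retrieved passages into the prompt
    that will be sent to the LLM for the generation step."""
    context = "\n".join(passages)
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )

docs = [
    "The Eiffel Tower is located in Paris, France.",
    "Python is a popular programming language.",
]
query = "Where is the Eiffel Tower?"
prompt = augment(query, retrieve(query, docs))
print(prompt)
```

The prompt now grounds the model in the retrieved passage, which is what gives RAG its factual-accuracy and transparency advantages; swapping the toy retriever for a proper embedding search changes only the `retrieve` function.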

Prompt Engineering


Prompt engineering is a crucial technique for guiding LLMs towards generating desired outputs. It involves crafting prompts that:

Clearly define the task: Specify what the LLM should do with the provided information.
Provide context: Give the LLM enough background knowledge to understand the prompt and generate an appropriate response.
Use appropriate language: Frame the prompt in a way that aligns with the LLM's capabilities and training data.
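The three guidelines above translate naturally into a prompt template. The helper below is a hypothetical sketch (the function name and fields are illustrative, not from any library): it makes the task explicit, supplies context, and phrases the request in plain language.

```python
def build_prompt(task: str, context: str, question: str) -> str:
    """Assemble a prompt that states the task, provides background
    context, and asks the question in clear, direct language."""
    return (
        f"Task: {task}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer concisely, using only the context above."
    )

prompt = build_prompt(
    task="Answer the customer's question from the policy text.",
    context="Our refund policy allows returns within 30 days of purchase.",
    question="How long do customers have to request a refund?",
)
print(prompt)
```

Keeping the template in one place also makes prompts easy to version and A/B test, which is where prompt engineering pays off in practice.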





When to Fine-Tune an LLM




Fine-Tuning Steps




Verify LLM and AI Assistant Answers





How to Evaluate LLMs


Perplexity: Measures how well a language model predicts a sample of text. Lower perplexity indicates better performance.
BLEU Score: BLEU (Bilingual Evaluation Understudy) evaluates the quality of machine-translated text by comparing it to human-generated reference translations.
ROUGE Score: ROUGE (Recall-Oriented Understudy for Gisting Evaluation) evaluates the quality of model-generated summaries by comparing them to reference summaries.
Human Evaluation: Human judges assess the quality of generated text against criteria such as fluency, coherence, and relevance.
Word Error Rate (WER): Measures the word-level difference between the model's output and a reference text. Lower WER indicates better performance.
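The WER metric in the list above can be made concrete: it is the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, divided by the number of reference words. A minimal stdlib implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference words,
    computed with the classic dynamic-programming edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between first i ref words and first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,          # deletion
                dp[i][j - 1] + 1,          # insertion
                dp[i - 1][j - 1] + cost,   # substitution or match
            )
    return dp[-1][-1] / len(ref)

# One inserted word against a 3-word reference:
print(word_error_rate("the cat sat", "the cat sat on"))  # → 0.333… (1/3)
```

Note that WER can exceed 1.0 when the hypothesis is much longer than the reference, which is why it is usually reported alongside other metrics from the list.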




