Generative AI 101 Slides | GenAI Guide

GenAI enable - Creatvity, Automation and Personalization.

We have seen evolution of product development from traditional coding to data-driven machine learning. Today, generative AI is capable of generating both data and code. This shift is set to transform product business models and how software AI driven product will be built. This will change product business model and also product management.

20 Latest Update of Generative AI

Apple announced chatpt integration with SIRI. Nvidia announced a data pipeline to generate synthehic data for LLM training.
Anthropic released Sonnet 3.
Meta Releasd Llama 3.
Microsoft announced investment in G42
AWS invested in Anthropic Claude
Tesla announce RoboTaxi for Aug 2024

Gen AI for IT Systems

CIOs and CTOs can implement GenAI use cases first in IT.
Automate - IT support
AI Driven Analytics - Ask questions from Data, Generate narrative from datasets
Infrastructure Moniotring
automate Compliance Thru Gen AI
Personalized Corporate Education, Use Ge AI in hiring

Here is guide for CIO and CTO - how to build budget for GenAI

Gen AI offering/update from Vendors

Google has Gemini, Vertext AI and also building semi conductor

Microsoft has partned with OpenAI. Now Microsoft is also working on next generation model.



AWS has released AWS Q and bedrock for Gen AI.

Gen AI offering fron Non Hyper Scalers

Open AI is leading GPT research and development. They are focusing on building state of art API and assistant.

Hugging Face is providing Open source small LLM in user friendly manner.



Meta is providing tool for ad creation as well as very advance and big open source LLM.

GenAI vs Predictive AI

Large Language Model

  • LLMs are changing the way how documents are analyzed, summarized
  • LLMs are influential in creating new text and getting used in content generation
  • Code generation is area that will significantly getreefined by LLMs
  • With Generative AI progress - one should reconsider how software should be developed.

    How to use LLMs and build applications

  • LLMs building blocks are Data, Architecture, Training, Inference
  • While making decisions about Closed vs Open model you should consider above factors, control needed, IP risks etc
  • Lllama and Hugging Face are 2 famous open source model. ChatGPT, Bard, Palm , Claude are closed models.
  • LLM Comparision Criteria

  • Model parameter indicate complexity. Bigger paramter size indicate ability to handle complexity
  • Top most criteria should be - accuracy for given task
  • In many cases ability to fine tune a model, reasoning capabilities are impotant too.
  • Guardrails for GenAI LLM and Chatbots

    Guardrails are essentially guidelines and controls that steer the LLM's outputs in the desired direction.

    Here are some ways to keep your LLM on track:



    Input Validation: Set criteria for what kind of information the LLM can process, preventing nonsensical or malicious inputs.
    Output Filtering: Review and potentially edit the LLM's outputs before they are used, catching any biases or factual errors.
    Real-time Monitoring: Continuously track how the LLM is being used and intervene if it generates harmful content.
    Human oversight: Ensure humans are always involved in the LLM interaction

    Foundation Model

  • Foundation model work out of box for universal scenarios.
  • Specific models can be trained on these.
  • Reduce the labeling requirements
  • As FMs are exposed to internet scale data and various forms and myriad of patterns, FMs learn to apply their knowledge within a wide range of contexts.
  • Consideration to Extend Foundation Model

  • Extend and Fine tune for specific task
  • Train custom model for specific domain
  • Build datasets on which you can train model
  • Prompt Engineering

  • Give instruction , Give persona, add delimeter
  • Add examples, provide steps to complete task
  • Specify output format such as HTML, JSON, Table to produce
  • Retrieval-Augmented Generation (RAG)

    Retrieval-Augmented Generation. It's a technique that helps large language models. Before answering your question, the LLM uses RAG to search a vast external knowledge base for relevant information.

    With this extra knowledge, the LLM can provide more accurate and informative answers. RAG combines the vast knowledge of LLMs with your data, enhancing AI's ability to provide contextually rich responses.



    RAG is extremly ueful when you need to answer question on specific domain, latest and up to date information or use internal knowledge. Use RAG is
    Domain-specific knowledge: If your assistant needs to be an expert in a specific domain, RAG can be used to integrate relevant databases or knowledge repositories to enhance its understanding.
    Accuracy is crucial: If your LLM assistant needs to provide highly accurate information, especially on factual topics like science, history, or specific procedures, RAG can ensure responses are grounded in real-world knowledge.
    Combating hallucinations: LLMs can sometimes make up information, called hallucination. RAG combats this by providing verifiable evidence to support the response.
    Building trust: By allowing users to see where the information comes from (think footnotes!), RAG fosters trust and transparency in the assistant's responses.

    AI Assistant Tradeoff Factors

    Trade Off Factors For AI Assistants

    3 Dimensions


    Accuracy : Lead toUser Trust and Effectiveness: Higher accuracy in understanding and responding to user queries builds trust and reliability. Accurate AI assistants can effectively handle complex tasks, providing precise information and solutions.
    Performance : Better Speed and Scale lear to quick response. High-performance AI assistants provide quick responses, improving user experience and efficiency. Efficient performance ensures that the AI can handle numerous requests simultaneously, essential for scaling operations.
    Cost : Lead to adoption and ROI generation. Lower costs can make AI assistants accessible to a broader range of users and businesses. Affordable solutions encourage wider adoption, enabling more industries to benefit from AI.

    Unleash the power of Similarity Search

    Vector DB store date in vector embeddings which are high dimension vectors that represent features or attributes of data. Traditional DB store dat in rows and columns. In traditional db each columen has signle field and each row is a record.

    Traditional DBs are good for extact match vs vector dbs are great for similarity search.


    Vector Dbs have great use in many use cases like:

    Text Search, Question Answer Bot
    Vision search
    Find similar design
    Efficient analysis of audio
    Find visually similar images e.g. blue water

    OpenAI Fine tuning

  • Finetuning and building custom model give you edge
  • With Fine tuning you have your own model, your IP based on data
  • With Fine tune model, you can handle complex scenarios
  • Before Fine tuning - try prompt engineering, few shot learnings.

    Fine tuning Steps

  • One Fine tune - evaluate the results
  • Make a inference calls/li>
  • Do Error Analysis.
  • Tech Stack and modeling architectures

    Generative AI is based on "comprehend existing" data and determine trajectories data can take. It uses it for geenration

  • Diffusion architecture is suitable for generation
  • Transformer are suitable for language gentration in sequence
  • How to Evaluate Gen AI

    Traditional machine learning model has evaluaition metrics like accuracy. Generative AI creates new data. It evaluation is based on subjecive measures like diversity of data, realism, nobalty, creativity. It is hard to evaluate or benchmark geenrative models.

    Generative AI adoption framework

    Use above dimensions to identify quardant of your use case(s). Low risk and applicability of generic data e.g content writing for sales,travel guide are easy to adopt.

  • Areas where universal data is available but risk of geenrating wrong results are high - it is opportunity for companies that want to train and sell custom models
  • Areas where task specifci fine tuning is needed, is opprtunity for services companies
  • Companies that want to build defendable IP will focus on create new dataset and model training on these dataset for areas where risk is high and universal dataset are not available.
  • Trade off and Conflicts

    Generative AI has potential, but there are many challenges and open questions. One should consider these before using geenrative AI in enterprise(s)

    Uncontrolled data production

    Generative AI is based on "comprehend existing" data and determine trajectories data can take. It uses it for creating new data. However the method of producing new variation of data makes it controllable.

  • Generative Model output is unpredicatble and uncontrollable. Main issue is - how to get confidence if you want to use it in mission critical envioronment.
  • Large language Model (LLM) inherit bias from data they are trained on.
  • There are open questions on who have copy right on generated content. In future there may be new laws that will impact consumers of generative AI
  • LLM or Image model that are trained on universal data and produce new data are compute intensive. There are impact of enviornment one need to consider.
  • Enviornment concern and legal question

    There is significant amount of compute, energy usage. There is significant amount of carbon emission. One need to ensure it is for good cause and not for producing variation(s)

  • Dataknobs has created set of controls to handle this
  • GenAI Project Management

    GENAI PROGRAM MANAGEMENT FRAMEWORK

    4 AREAS


    THE GENAI PROJECT MANAGEMENT FRAMEWORK INVOLVES IDENTIFYING OPPORTUNITIES FOR AI INTEGRATION, DEVELOPING A PROOF OF CONCEPT, AND PROGRESSING THROUGH STAGES OF MATURITY INCLUDING PROTOTYPING, PILOTING, SCALING, AND OPTIMIZING. A ROBUST GOVERNANCE STRUCTURE ENSURES SUCCESSFUL IMPLEMENTATION THROUGH RIGOROUS TESTING, COMPLIANCE, CONTINUOUS MONITORING, AND STAKEHOLDER ENGAGEMENT.

    GenAI Budget Planning

    GENAI - How CIOs should allocate Budget

  • Identify Use Cases
  • Understand Tech stack
  • Understanding Costing - Training, Inference, APP
  • Understand how cost change with scale and prod rollout
  • Divide budget - People, Data, Development, API, Off the shelf, Licensing
  • -->

    Governace for GenAI LLM and Chatbots | governance Framework for GenAI

    We recognize the immense potential of LLMs to revolutionize various aspects of our lives. However, we also acknowledge the critical need to ensure their development and deployment are guided by ethical principles and safeguard human values. Above are guiding principles and framework for AI. It is further extended for GenAI. Click to see detail slides for personalziation, automation and creative scenario to specific governance items

    Security Framework - For Gen AI

    Establish AI Governance across enterprise. Have action plan to secure data, infrastructure and Model. See more details by clicking on slides

    GPT 4o vs GPT 4 Turbo vs GPT 3.5

    GPT 4o is latest model released by Open AI. It accept text and image as input and produce text output. It can reason across vision, text and audio

    Use GPT 3.5 Turbo for ordinary task. It is cheapest. Use GPT 4o for complex task that need high quality

    DALLE, TTS, Whisper are model for images, sound

    GPT (Generative Pre-Trained Transformer) is family of large language model. developed by OPEN AI. GPT4 is latest generaral purpose LLM released by OpenAI. ChatGPT4 is chatbot focused LLM.

  • ChatGPT4 has large token length compared to GPT3.5 ChatGPT4 can process 25000 words of context. It is 8 times higher than chatGPT3.5
  • ChatGPT4 can understand and process visual input.
  • ChatGPT 4 has better programming capabilities compared to ChatGPT 3.5
  • ChatGPT 4 has fewer hallunication compared to ChatGPT 3.5
  • Digital Human

    Here is framework for Digital Agent, Virtual Assistant, Digital Inluencer and Digital Human.

    Virtual Agent to Digital Human

  • Virtual Agent/Digital Agent are for one off task.
  • Virtual assistant carry context and are ongoing engagement
  • Digital influencer add experience and emotion into interaction
  • Digtial Human provide experience/emotion for ongoing engagement.
  • AI Assistant vs Digital Human vs Robots

    AI Assistant vs Digital Human vs Robot

    Similarties and differences

    AI Assistant do not have appearance. They are well suited for tasks. These interact thru text and voice. Digital Human has physical appearance but they exist in digital world. They use non verbal cues such as facial expression in addition to text and voice. Robot exist in physical world. They have mobility and can interact with real world enviornment to move things. Digital human can only do digial task such as give information, schedule meeting. Robot on the other hand can do task in physical enviornment.

    GenAI for Technology Domain

    Understand user intent better and improve search with embeddings. With Generative AI - do persona development and create personalized responsed. With GenAI do Proactive threat detection. Generative AI can also simulate cyber attack and help prepare model and advance technqies to handle such attacks.

    Generative AI Applications in Security

  • Generative AI is extremly useful for Cyber Security
  • Simulate phishing attack to check robustmess of security solution
  • Automate the analysis of security logs
  • Generative AI Applications in Payments Industry

  • Payments is highly regulated area
  • Customer Service can be made effective in Payment industry
  • Audit can be made more robust in payment industry
  • Check validation, Fraud detection can be improved
  • Ad

    Gen AI for Mortgage Industry

    GenAI is useful for Mortgage Industry

    It can do document analysis in loan origination process and streamline it. It can help in customer service. Most important it can automate compliance in mortgage industry

    Gen AI for Mobile Development

    GenAI can revolutionze mobile app development.

    New apps willemerge for creativity. Better personalization will be provided in future apps.

    Generative AI Vendors

  • Vendors: OpenAI, Microsoft, GCP, AWS, Anthropic, Dataknobs, Snorkel and more
  • Evaluation criteria : Features, accuracy, flexibility, ability to fine tune model, cost of inference, how reliable reults are
  • li>
  • Open AI and Micorosoft model is most used
  • li>
  • GCP bard provide up to date information. GCP also has TPU
  • AWS has large cloud share
  • Anthropic released caluse with 100K tokens
  • Hugging Face provide many smaller models
  • ChatGPT3.5 ChatGPT4 and Bard

  • Open AI has chatGPT3.5 and chatGPT4.0
  • Microsoft Azure provide OPEN AI services integrated with Azure
  • Google has Bard, Vertex AI with generative AI studio. In addition google has TPUs
  • AWS provide cloud to use existing capabilities

  • AWS bedrock enable using Hugging face, anthropic or other model on AWS cloud
  • Hugging Face is providing various small model like BERT, GPT-3, ROBERTA, XLNET
  • Anthropic has build Claude. Available to use at "poe . com". There are 3 flavors even with 100K tokens.
  • build Data Products using GenAI

    Dataknobs capablities - KREATE, KONTROLS and KNOBS.
    KREATE focus on creatibvity and generation
    KONTROLS provide guardrails, lineage, compliance, privacy and security.
    KNOBS enable experimentation and diagnosis