
Prompt Engineering and Generative AI - Fundamentals

voska89


Free Download Prompt Engineering and Generative AI - Fundamentals
Published 3/2024
Created by Sathish Jayaraman
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | Duration: 19 Lectures (1h 31m) | Size: 813 MB

Large Language Models, GPT, Gemini, LLM fine-tuning, Few-Shot, Chain-of-Thought, Tree-of-Thoughts, Guardrails, Langchain
What you'll learn:
Fundamentals of Prompt Engineering and Generative AI
Prompt Engineering Techniques: Zero-Shot, Few-Shot, Chain-of-Thought, and Tree-of-Thoughts
Retrieval Augmented Generation fundamentals
RAGAS evaluation framework for LLMs, and LangSmith
Fine-tuning a Large Language Model
Guardrails for validating LLM responses
Requirements:
Basic knowledge of data science and ML principles will be helpful
Familiarity with Python
A computer with an internet connection to access the course material
Description:
This course delves into the fundamental concepts of Prompt Engineering and Generative AI. It is organized into sections on the Fundamentals of Prompt Engineering, Retrieval Augmented Generation, Fine-tuning a Large Language Model (LLM), and Guardrails for LLMs.

Section on Prompt Engineering Fundamentals: The first segment defines prompt engineering, covers best practices, and walks through an example prompt given to the Gemini-Pro model, with references for further reading. The second segment explains what streaming a response from a large language model means, with examples of providing specific instructions to the Gemini-Pro model and of the temperature and token-count parameters. The third segment explains the Zero-Shot prompting technique with examples using the Gemini model. The fourth segment explains the Few-Shot and Chain-of-Thought prompting techniques with examples using the Gemini model. Subsequent segments in this section discuss setting up a Google Colab notebook to work with the GPT model from OpenAI and provide examples of the Tree-of-Thoughts prompting technique, including the Tree-of-Thoughts implementation from Langchain used to solve a 4x4 Sudoku puzzle.

Section on Retrieval Augmented Generation (RAG): The first segment defines the Retrieval Augmented Generation prompting technique, discusses its merits, and applies it to a CSV file using the Langchain framework. The second segment presents a detailed example involving the Arxiv Loader, the FAISS vector database, and a Conversational Retrieval Chain as part of a RAG pipeline built with Langchain. The third segment explains evaluating responses from a Large Language Model (LLM) using the RAGAS framework. The fourth segment shows how LangSmith complements the RAGAS framework for evaluating LLM responses. The fifth segment explains using the Gemini model to create text embeddings and perform document search.

Section on Large Language Model Fine-tuning: The first segment summarizes prompting techniques with examples involving LLMs from the Hugging Face repository and explains the differences between prompting an LLM and fine-tuning an LLM. The second segment defines fine-tuning an LLM, covers the types of LLM fine-tuning, and extracts the data to perform EDA (including data cleaning) prior to fine-tuning. The third segment explains in detail fine-tuning a pre-trained large language model on a task-specific labeled dataset.

Section on Guardrails for Large Language Models: The first segment defines Guardrails and gives examples of Guardrails from OpenAI. The second segment discusses open-source Guardrail implementations, with a specific focus on GuardrailsAI for extracting information from text. The third segment explains using GuardrailsAI to generate structured data and interfacing GuardrailsAI with a chat model. Each of these segments has a Google Colab notebook included.
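To give a feel for the prompting segments (this is not taken from the course notebooks), here is a minimal Few-Shot and Chain-of-Thought sketch against Gemini-Pro, assuming the google-generativeai Python SDK and your own API key; the model name, example prompts, temperature and token limit are placeholders:
Code:
# Minimal Few-Shot / Chain-of-Thought sketch, assuming the google-generativeai SDK
# (pip install google-generativeai) and a valid API key. Values are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # replace with your own key
model = genai.GenerativeModel("gemini-pro")

# Few-Shot prompt: a handful of labeled examples followed by the new input.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts two full days." Sentiment: Positive
Review: "The screen cracked within a week." Sentiment: Negative
Review: "Setup took five minutes and everything just worked." Sentiment:"""

response = model.generate_content(
    few_shot_prompt,
    generation_config=genai.types.GenerationConfig(temperature=0.2, max_output_tokens=64),
)
print(response.text)

# Chain-of-Thought variant: ask the model to reason step by step, streamed chunk by chunk.
cot_prompt = "A shop sells pens at 3 for $2. How much do 12 pens cost? Think step by step."
for chunk in model.generate_content(cot_prompt, stream=True):
    print(chunk.text, end="")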
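The RAG section describes a pipeline built from the Arxiv Loader, a FAISS vector database, and a Conversational Retrieval Chain. Below is a rough sketch of that kind of pipeline, assuming LangChain 0.1-style packages (langchain, langchain-community, langchain-openai) plus faiss-cpu, arxiv, pymupdf and an OpenAI API key; the paper ID, chunk sizes, and question are illustrative, not the course's own values:
Code:
# Minimal RAG pipeline sketch: load an arXiv paper, index it in FAISS, and query it
# through a Conversational Retrieval Chain. All parameters are illustrative.
from langchain_community.document_loaders import ArxivLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import ConversationalRetrievalChain

# 1. Load a paper from arXiv and split it into chunks.
docs = ArxivLoader(query="1706.03762", load_max_docs=1).load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks and index them in a FAISS vector store.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. Wire the retriever and a chat model into a Conversational Retrieval Chain.
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)

result = chain.invoke({"question": "What architecture does the paper propose?", "chat_history": []})
print(result["answer"])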
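For the evaluation segments, RAGAS scores an LLM's answers against the retrieved contexts. A small sketch of how such an evaluation is typically wired up, assuming the ragas and datasets packages and an OpenAI key for the judge model; the sample record is invented for illustration:
Code:
# Sketch of a RAGAS evaluation over one invented question/answer/contexts record.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

records = {
    "question": ["What architecture does the 'Attention Is All You Need' paper propose?"],
    "answer": ["It proposes the Transformer, based entirely on attention mechanisms."],
    "contexts": [[
        "We propose a new simple network architecture, the Transformer, "
        "based solely on attention mechanisms, dispensing with recurrence and convolutions."
    ]],
}

# Each metric is computed per record; the result holds the aggregate scores.
result = evaluate(Dataset.from_dict(records), metrics=[faithfulness, answer_relevancy])
print(result)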
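The fine-tuning section works with models from the Hugging Face repository, but this listing does not name the exact model or dataset, so the following generic sketch uses the Hugging Face Trainer API with distilbert-base-uncased on a small slice of IMDB purely as a stand-in for a task-specific labeled dataset:
Code:
# Generic fine-tuning sketch with the Hugging Face Trainer API; model and dataset
# are stand-ins, not the course's own choices.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate/pad reviews to a fixed length so they can be batched.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="finetune-out",
                         per_device_train_batch_size=8,
                         num_train_epochs=1,
                         logging_steps=50)

# Train on a small shuffled slice to keep the sketch cheap to run.
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)))
trainer.train()
trainer.save_model("finetune-out")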
Who this course is for:
This course is suited for anyone interested in Natural Language Processing, Large Language Models, Prompt Engineering, Generative AI, and Data Science.
Homepage
Code:
https://www.udemy.com/course/prompt-engineering-and-generative-ai-fundamentals/





Recommended Download Link: High Speed | Please Say Thanks to Keep the Topic Alive
No Password - Links are Interchangeable
 
