Have you been bombarded with AI and Generative AI news lately? Everywhere you look, on the internet, at conferences, on TV, and across media outlets, it is the talk of the year: the ease of use, how it will change the world, our daily lives, and how we conduct business.
The truth of the matter is, it is not as simple as opening a window of some kind, asking your question, and voila, the genie gives you an answer that is accurate and relevant. There is a lot you need to know, especially as a responsible person in an enterprise that has to adhere to security, scalability, content safety, and performance standards, not to mention compliance at the corporate level.
This full-day workshop will give you the big picture, but it will also dive into the nitty-gritty so you can practice infusing Generative AI at enterprise scale.
What will be covered:
An in-depth view of what an LLM is and what its purpose is, including how it is built and made available for consumption.
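As a taste of what the LLM segment covers: at its core, an LLM is trained to predict the next token in a sequence. The toy bigram model below (plain Python, purely illustrative, nothing like a real billion-parameter model) shows that core task in miniature.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction, the task an LLM is trained
# on. A real LLM learns billions of parameters over huge corpora; this
# bigram model just counts which token follows which in a tiny corpus.
corpus = "the model predicts the next token and the next token again".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "next" follows "the" twice, "model" once
```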
An in-depth view of Prompt Engineering as a tool that helps AI chatbots generate relevant and coherent responses in real-time conversations: it helps ensure the AI understands user queries and provides meaningful answers.
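One practical prompt-engineering technique the workshop touches on is templating: pinning down the model's role, the allowed context, and the output format. The sketch below is a minimal illustration; the template wording and the product name are made up, not a recommended standard.

```python
# A minimal prompt-engineering sketch: a template that fixes role,
# context, and output format so the model's answer stays on topic.
PROMPT_TEMPLATE = (
    "You are a support assistant for {product}.\n"
    "Answer using only the context below; if the answer is not there, "
    "say you don't know.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer in at most two sentences."
)

prompt = PROMPT_TEMPLATE.format(
    product="Contoso CRM",  # hypothetical product name
    context="Password resets are handled at the self-service portal.",
    question="How do I reset my password?",
)
print(prompt)
```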
An in-depth discussion of Orchestrators, the important role they play, how to differentiate between the popular ones, and when to use each:
- Semantic Kernel - an open-source AI Software Development Kit (SDK) from Microsoft. It allows developers to integrate large language models (LLMs) from providers such as Hugging Face, Azure OpenAI, and OpenAI with conventional programming languages like C#, Java, and Python. We will demonstrate how SK can make working with LLMs more effective by marshaling and orchestrating inputs and functions. It also lets users sanitize inputs, guiding the LLM to produce useful outputs.
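To give a feel for what that marshaling and sanitizing means in practice, here is a conceptual sketch in plain Python. It deliberately does not use the Semantic Kernel SDK itself (whose API is covered in the lab); the skill names and sanitization rule are invented for illustration.

```python
import re

# Conceptual sketch of what an orchestrator like Semantic Kernel does:
# sanitize user input, route it to a registered function ("skill"), and
# marshal the result back. Plain Python, not the actual SDK API.
def sanitize(text: str) -> str:
    """Strip characters that could break out of a prompt template."""
    return re.sub(r"[{}<>]", "", text).strip()

# Registered skills: in a real kernel these could be prompt functions
# backed by an LLM, or native code. These stand-ins are hypothetical.
skills = {
    "summarize": lambda s: f"Summary of: {s[:40]}",
    "translate": lambda s: f"Translation of: {s[:40]}",
}

def run(skill_name: str, user_input: str) -> str:
    clean = sanitize(user_input)
    if skill_name not in skills:
        raise KeyError(f"unknown skill: {skill_name}")
    return skills[skill_name](clean)

print(run("summarize", "{injection attempt} quarterly revenue grew 12%"))
```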
- LangChain - an open-source framework for developing applications using large language models (LLMs). LLMs are deep-learning models pre-trained on large amounts of data; they can generate responses to user queries, such as answering questions or creating images from text-based prompts. We will demonstrate its use in orchestrating the communication and negotiation between your data and the LLM of choice through the prompt.
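The core idea behind LangChain is the "chain": small steps composed into a pipeline that carries data from prompt, to model, to output parser. The plain-Python sketch below mimics that composition pattern without the library; `fake_llm` is a stand-in for a real model call, not a LangChain API.

```python
from functools import reduce

# Sketch of the "chain" composition pattern: prompt -> model -> parser.
# fake_llm is a placeholder for an API call to a real LLM provider.
def build_prompt(question: str) -> str:
    return f"Answer concisely: {question}"

def fake_llm(prompt: str) -> str:
    return f"MODEL OUTPUT for [{prompt}]"

def parse(raw: str) -> str:
    return raw.removeprefix("MODEL OUTPUT for ").strip("[]")

def chain(*steps):
    """Compose steps left to right, in the spirit of prompt | llm | parser."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

qa = chain(build_prompt, fake_llm, parse)
print(qa("What is RAG?"))  # Answer concisely: What is RAG?
```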
Hands-on lab covering the use of Semantic Kernel from Microsoft
Hands-on lab covering the use of LangChain
A deep dive into the Retrieval Augmentation process in an enterprise: giving access to company data as part of the prompt, enriching the LLM so it gives more accurate responses.
- RALM – Retrieval-Augmented Language Modeling.
- RAG – Retrieval Augmented Generation.
- ReAct – Reasoning and Acting pattern.
- CoT – Chain-of-Thought pattern.
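Before the lab, it helps to see the shape of a RAG pipeline end to end: score company documents against the question, stuff the best matches into the prompt, and hand that enriched prompt to the model. The sketch below uses naive word-overlap scoring and made-up documents; real systems use embeddings and a vector index, which the later sections cover.

```python
# Minimal sketch of Retrieval Augmented Generation. Documents and the
# overlap scorer are illustrative only; production RAG retrieves by
# embedding similarity from a vector database.
documents = [
    "Expense reports are due on the 5th of each month.",
    "The VPN gateway requires multi-factor authentication.",
    "New hires get laptops from the IT service desk.",
]

def score(question: str, doc: str) -> int:
    """Naive relevance: count shared lowercase words."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, k: int = 2) -> list[str]:
    return sorted(documents, key=lambda d: score(question, d), reverse=True)[:k]

def build_rag_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("When are expense reports due?"))
```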
Hands-on lab to exercise the concepts using FlowiseAI and VectorShift
An in-depth view of Embedding Models and how they work as algorithms that convert high-dimensional data into low-dimensional vectors. This process, called dimensionality reduction, simplifies the data and makes it easier for machine learning algorithms to process.
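What makes those vectors useful is that semantically similar inputs land close together, which is typically measured with cosine similarity. The vectors below are made up for illustration (real embedding models produce hundreds or thousands of dimensions), but the comparison works the same way.

```python
import math

# Toy embedding vectors, invented for illustration: semantically close
# texts should score a higher cosine similarity than unrelated ones.
embeddings = {
    "cat":     [0.90, 0.80, 0.10],
    "kitten":  [0.85, 0.75, 0.20],
    "invoice": [0.10, 0.05, 0.95],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(round(cosine(embeddings["cat"], embeddings["kitten"]), 3))
print(round(cosine(embeddings["cat"], embeddings["invoice"]), 3))
```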
A view into the concept of Vectorization, also known as word embedding, demonstrating the process of converting text data into numerical vectors. These vectors are used to build machine learning models for word prediction, similarity, and natural language tasks.
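The simplest text-to-vector scheme, a bag-of-words count vector, shows the mechanics of "text in, numeric vector out". Learned word embeddings are dense and far more expressive, but the interface looks the same from the outside; the corpus below is invented for the example.

```python
# Bag-of-words vectorization: each position in the vector counts one
# vocabulary word. Purely illustrative; modern embeddings are learned.
def build_vocab(texts: list[str]) -> list[str]:
    return sorted({w for t in texts for w in t.lower().split()})

def vectorize(text: str, vocab: list[str]) -> list[int]:
    words = text.lower().split()
    return [words.count(v) for v in vocab]

docs = ["the cat sat", "the cat and the dog"]
vocab = build_vocab(docs)
print(vocab)                         # ['and', 'cat', 'dog', 'sat', 'the']
print(vectorize("the cat sat", vocab))  # [0, 1, 0, 1, 1]
```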
A look into the life of Vector Databases: a type of database that stores data as mathematical representations known as vector embeddings. These are high-dimensional vectors, each with a certain number of dimensions, ranging from tens to thousands depending on the complexity and granularity of the data. We will discuss some of the available options, such as Azure AI Search, Pinecone, Chroma, Faiss, Azure PostgreSQL, and others.
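At its core, a vector database stores (id, embedding) pairs and answers nearest-neighbor queries. The toy in-memory store below illustrates exactly that contract with a brute-force search; the document ids and vectors are made up, and real products add approximate-nearest-neighbor indexes to scale to millions of vectors.

```python
import math

# Toy in-memory vector store: brute-force nearest-neighbor search over
# (id, embedding) pairs. Production vector databases add ANN indexes.
class TinyVectorStore:
    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, doc_id: str, vec: list[float]) -> None:
        self.items.append((doc_id, vec))

    def search(self, query: list[float], k: int = 1) -> list[str]:
        """Return the ids of the k vectors closest to the query."""
        ranked = sorted(self.items, key=lambda it: math.dist(query, it[1]))
        return [doc_id for doc_id, _ in ranked[:k]]

store = TinyVectorStore()
store.add("refund-policy", [0.10, 0.90])  # hypothetical document ids
store.add("vpn-setup", [0.80, 0.20])
print(store.search([0.15, 0.85], k=1))  # ['refund-policy']
```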
Hands-on lab for using Embeddings and Vectorization databases.
Attendee Requirements:
- You must provide your own laptop computer (Windows or Mac) for this hands-on lab.
- All other requirements will be posted 2 weeks prior to the conference.