Open-Source Generative AI

Open-Source Generative AI for the Enterprise is a comprehensive 4-day course that teaches practical applications of AI in the business environment. The course combines lectures with hands-on labs, giving participants a solid understanding of AI concepts and the skills to design and implement AI solutions.


Throughout the course, you will learn about transformer-based AI architectures, the fundamentals of Python programming for AI deployments, and the deployment of open-source transformer models. You will also explore the role of hardware in AI performance, comparing different GPU architectures and learning how to match AI requirements with suitable hardware. The course delves into training techniques, including backpropagation and gradient descent, as well as AI tasks such as classification, regression, and clustering.


You will gain practical experience through hands-on exercises with open-source LLM (Large Language Model) frameworks, allowing you to work with fine-tuned models and run workloads on different models to understand their strengths and weaknesses. Additionally, the course covers the conversion of model formats and provides an in-depth exploration of AI programming environments such as PyTorch with the transformers library, including low-level interactive inspection of transformer models.


Towards the end of the course, you will delve into advanced topics such as context extension through fine-tuning and quantization for specific target environments. Upon completion, you will have the opportunity to earn an AI certification from Alta3 Research, further enhancing your credentials in the field of Artificial Intelligence. This course is ideal for Python Developers, DevSecOps Engineers, and Managers or Directors seeking a practical overview of AI and its application in the enterprise.

Course Information

Price: $2,595.00
Duration: 4 days
Certification: Alta3 Research AI Certification
Exam: 
Learning Credits:
Continuing Education Credits:
Course Delivery Options

Check out our full list of training locations and learning formats. Please note that the location you choose may be an Established HD-ILT location with a virtual live instructor.

  • Train face-to-face with a live instructor.
  • Access on-demand training content anytime, anywhere.
  • Attend the live class from the comfort of your home or office.
  • Interact with a live, remote instructor from a specialized, HD-equipped classroom near you. An SLI sales rep will confirm location availability prior to registration confirmation.

All Sunset Learning dates are guaranteed to run!


Prerequisites:

Previous exposure to any programming language, preferably Python


Target Audience:

  • Python Developers
  • DevSecOps Engineers
  • Managers and Directors seeking a practical overview of AI


Course Objectives:

  • Understand transformer-based generative AI architectures
  • Apply Python programming fundamentals to AI deployments
  • Deploy open-source transformer models
  • Match AI workloads to suitable GPU hardware
  • Explain training techniques such as backpropagation and gradient descent
  • Run workloads on multiple fine-tuned models to compare their strengths and weaknesses
  • Convert model formats and quantize models for target environments
  • Extend model context through fine-tuning


Course Outline:

The Mechanics of Deep Learning – Gain an intuitive understanding of the most current generative AI architecture.

  • Choosing the latest AI models
    • What is current and what has already gone extinct?
  • Neural network architecture essentials
    • Tokenization
    • Embedding
    • Parameters: weights and bias
    • Nodes
    • Fully connected/Partially connected
    • Prompts and prompt engineering
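
To make these building blocks concrete, here is a minimal sketch of tokenization and embedding lookup using the Hugging Face transformers library; the gpt2 checkpoint is purely illustrative, and any causal language model works the same way:

    # Tokenization turns text into integer IDs; the embedding layer maps
    # each ID to a learned vector. "gpt2" is an illustrative checkpoint.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    ids = tokenizer("Open-source models run on your own hardware.",
                    return_tensors="pt")
    print(tokenizer.convert_ids_to_tokens(ids["input_ids"][0]))

    with torch.no_grad():
        emb = model.get_input_embeddings()(ids["input_ids"])
    print(emb.shape)   # (batch, sequence_length, hidden_size)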

The Transformer Model – Develop an intuitive understanding of the transformer model, without math.

  • Neural Network Architectures
  • Word embeddings
    • Importance in NLP
    • Word2Vec and GloVe embeddings
    • Contextualized embeddings (BERT, ELMo, etc.)
  • Self-attention mechanism and multi-head attention (a minimal code sketch follows this section)
    • Input Representation (Query, Key, and Value)
    • Computing Similarities
    • Attention Weights
    • Weighted Sum
    • Final Output
  • Positional encoding for attention
  • Transformer layers and stacking
  • Quantization
  • Effective LLM selection criteria and use cases
    • Models: size, datasets, quantization, etc.
    • Libraries
    • Frameworks
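
As a companion to the attention walkthrough above, here is a minimal sketch of single-head scaled dot-product self-attention in PyTorch; the tensor sizes and random inputs are illustrative:

    import math
    import torch

    def self_attention(q, k, v):
        # Computing similarities: dot products of queries with keys,
        # scaled by sqrt(d_k) for numerical stability.
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        weights = torch.softmax(scores, dim=-1)  # attention weights
        return weights @ v                       # weighted sum -> final output

    x = torch.randn(1, 5, 16)   # one sequence of 5 tokens, d_model = 16
    wq, wk, wv = (torch.nn.Linear(16, 16) for _ in range(3))
    print(self_attention(wq(x), wk(x), wv(x)).shape)   # torch.Size([1, 5, 16])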

Hardware Requirements

  • The GPU's role in AI performance (CPU vs. GPU)
  • GPU architecture history
    • Pascal, Turing, Ampere
  • Tensor core vs older GPU architectures
  • GPU vocabulary and the transformer
    • SM, tensors, tensor core, CUDA core, FP and INT cores, warp, threads
  • Current GPUs and cost vs value
    • Analysis of GPU specification sheets
  • GPU selection for models and workload
  • Quantizing for hardware performance and cost
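
A quick back-of-the-envelope calculation shows why quantization drives GPU selection: the VRAM needed just to hold model weights is roughly parameter count times bytes per parameter. A sketch of that arithmetic, using a 7B-parameter model as the example:

    # Rough VRAM needed for weights alone (excludes activations and KV cache).
    params = 7e9   # a 7B-parameter model, as an example
    for name, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("4-bit", 4)]:
        gib = params * bits / 8 / 2**30
        print(f"{name}: {gib:.1f} GiB")   # FP16 ~13.0 GiB, 4-bit ~3.3 GiB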

Pre-trained LLM Model Essentials

  • Select and use pre-trained models to produce immediate results
  • Model ratings, metrics, and leaderboards
  • Model licensing and commercial use
  • Numbers every AI developer should know (cost, time, dataset size, etc.)
  • Synthetic data generation for model training
  • Model training through feature extraction
    • Derive features from the provided dataset
    • Use annotated data for fine-tuning training
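
To illustrate feature extraction, the sketch below uses a frozen pre-trained encoder to turn text into fixed vectors and trains only a small classification head on top; the distilbert-base-uncased checkpoint and the two example strings are illustrative:

    # Feature extraction: freeze the pre-trained encoder, train only the head.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    encoder = AutoModel.from_pretrained("distilbert-base-uncased")
    encoder.requires_grad_(False)   # derive features; do not update the encoder

    def features(texts):
        batch = tok(texts, padding=True, return_tensors="pt")
        with torch.no_grad():
            hidden = encoder(**batch).last_hidden_state
        return hidden.mean(dim=1)   # mean-pool token features into one vector

    head = torch.nn.Linear(encoder.config.hidden_size, 2)  # trainable classifier
    print(head(features(["great product", "terrible service"])).shape)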

Pre-trained LLM hands-on – Run inference on multiple models with diverse prompts

  • Frameworks
    • llama.cpp, exllama, GPT4All
  • Evaluate multiple models across fine-tuning and quantization variants
    • Falcon, Orca Mini, OpenLlama, Alpaca, MPT
  • Parameters
  • Fine-tuned models
  • Prompts
    • Model expectations (instruct, chat, etc.)
    • Extension improvement
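
As a taste of the lab environment, here is a minimal inference sketch using the llama-cpp-python bindings for llama.cpp; the model path is a placeholder for whichever local quantized model you are evaluating:

    # Minimal local inference with llama-cpp-python. The model path is a
    # placeholder; match the prompt format to the model type (instruct
    # models expect instruction formatting, chat models a chat template).
    from llama_cpp import Llama

    llm = Llama(model_path="./models/model.gguf", n_ctx=2048)
    out = llm("### Instruction:\nName three enterprise uses of LLMs.\n\n"
              "### Response:\n",
              max_tokens=128, temperature=0.7)
    print(out["choices"][0]["text"])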

Transformer Model Training Essentials

  • Overfitting
  • Regularization (Avoiding Overfitting)
  • Backpropagation
  • Gradient Descent
  • Embedding
  • Learning Rate
  • Perplexity
  • Batch Normalization
  • Warm-up and Learning Rate Decay
  • Loss Functions
  • Data Augmentation
  • Training Strategies
  • Evaluation Metrics
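
Several of these concepts fit in a few lines of PyTorch; the toy loop below shows a forward pass, a loss function, backpropagation, and a gradient-descent step with a fixed learning rate (the model and data are random placeholders):

    # Toy training loop: forward pass, loss, backpropagation, update.
    import torch

    model = torch.nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)   # learning rate
    loss_fn = torch.nn.MSELoss()                        # loss function

    x, y = torch.randn(32, 4), torch.randn(32, 1)
    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)   # forward pass and loss
        loss.backward()               # backpropagation computes gradients
        opt.step()                    # gradient descent updates the weights
    print(loss.item())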

Conversion of Model Formats

  • PyTorch to ggml
  • JAX to ggml
  • F16 to 4bit quantization
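
The 4-bit step is easier to reason about with a toy version of block-wise quantization (the idea behind ggml's q4_0 format). This sketch illustrates the math only, not the actual ggml file conversion:

    # Toy block-wise 4-bit quantization: store one FP scale per block of
    # weights plus small integer codes, then reconstruct approximate values.
    import torch

    def quantize_4bit(w, block=32):
        w = w.reshape(-1, block)
        scale = w.abs().amax(dim=1, keepdim=True) / 7   # signed 4-bit range
        q = torch.clamp((w / scale).round(), -8, 7)     # integer codes
        return q, scale

    w = torch.randn(1024)
    q, scale = quantize_4bit(w)
    err = ((q * scale).reshape(-1) - w).abs().mean()
    print(f"mean absolute error: {err:.4f}")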

Hands-on with AI programming environments

  • PyTorch and Transformers
  • Transformers low-level interactive inspection
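
Low-level inspection here means running a forward pass and examining the model's internals directly; the sketch below pulls per-layer hidden states and attention weights out of an illustrative gpt2 checkpoint:

    # Inspect a transformer from the inside: hidden states and attention maps.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    inputs = tok("Inspect the model from the inside.", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True, output_attentions=True)

    print(len(out.hidden_states))      # embeddings + one entry per layer
    print(out.attentions[0].shape)     # (batch, heads, seq_len, seq_len)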

Model Fine-Tuning

  • Perform fine-tuning in a hands-on environment
  • Demonstration project with good fine-tuning data to highlight frameworks
    • with llama.cpp
    • with PyTorch
  • Understanding fine-tuning dataset formats
  • Data cleaning skills
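
Fine-tuning dataset formats vary by framework; one common layout is the Alpaca-style instruction record sketched below (field names and content are illustrative):

    # One instruction-tuning record per line (JSONL). Field names follow
    # the Alpaca convention; other frameworks use different keys.
    import json

    record = {
        "instruction": "Summarize the text.",
        "input": "Open-source LLMs can run on local hardware...",
        "output": "A short summary of local open-source LLM deployment.",
    }
    with open("train.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")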

Application Interfacing and Augmentation with LangChain and Guidance

  • LangChain
  • Guidance for structured LLM output
  • Prompt engineering
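
As a preview, here is a minimal LangChain sketch following the classic PromptTemplate-plus-LLMChain pattern; LangChain's API has shifted across versions, so treat the class names as illustrative, and the model path is a placeholder:

    # Chain a prompt template to a local llama.cpp model via LangChain.
    # Class and method names follow the classic (pre-0.1) LangChain API.
    from langchain.llms import LlamaCpp
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    llm = LlamaCpp(model_path="./models/model.gguf", temperature=0.2)
    prompt = PromptTemplate.from_template("Answer in one sentence: {question}")
    chain = LLMChain(llm=llm, prompt=prompt)
    print(chain.run(question="What is quantization?"))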

Advanced Topics

  • Optimize your model through fine-tuning and quantization
  • Achieve quality and speed of inference on your specific workload
  • Context extension through fine-tuning
  • LLM deployment

Use Llama to perform Natural Language Processing tasks

  • Rewrite a classic poem by a well-known author in the tone of another well-known author
  • Use Named Entity Recognition (NER) to identify Cajun food in recipes
  • Use domain-specific models to improve project accuracy

Deploy a Natural Language Model Capstone

  • Build a full NLP project
  • Download, install, and implement a trained NLP model
  • Deploy your NLP model to perform the following tasks:
    • text classification
    • sentiment analysis
    • machine translation
    • chatbots
    • question answering
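
For a sense of scale, a single capstone task such as sentiment analysis can start from a one-liner with the transformers pipeline API (the default checkpoint it downloads is illustrative):

    # Sentiment analysis with the transformers pipeline; pipelines exist
    # for the other capstone tasks as well (e.g. "text-classification",
    # "translation", "question-answering").
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("The fine-tuned model exceeded expectations."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]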
