Key Features
Completion Certificate
Internship
Internship Certificate
7-Day Refund Policy
Expert Instructors
One-to-One Session
What Will You Learn?
Accelerate your learning journey with our comprehensive course, designed to equip you with essential skills and practical knowledge of Generative AI in the cloud using AWS, Azure & Google Cloud.
- Generative AI Fundamentals
- AWS Cloud Services
- AWS Bedrock & SageMaker
- Azure OpenAI Service
- Google Cloud Vertex AI
- Cloud AI Model Deployment
- Fine-Tuning Foundation Models
- API Integration with Lambda & API Gateway
- Building AI-powered Applications in Cloud
Requirements
Before getting started with this course, it's beneficial to have the following:
- A laptop with a reliable internet connection
- Basic knowledge of cloud services (optional but preferred)
- A passion for learning
- No prior experience needed
- Willingness to dedicate time
- Curiosity about working with cloud-based AI tools
Course Completion Certificate: Yes

Curriculum
- Exploring Amazon Bedrock - An Overview
- Hands-on with Amazon Bedrock Console
- Architecture of Amazon Bedrock
- Exploring Bedrock Foundation Models
- Understanding Bedrock Embeddings
- Using Amazon Bedrock Chat Playgrounds
- Amazon Bedrock - A Look at Inference Parameters
- Pricing and Cost Structure of Amazon Bedrock
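As a concrete illustration of the inference parameters covered in this module, here is a minimal sketch of calling a Bedrock text model through boto3. The model ID (`amazon.titan-text-express-v1`) and the request-body fields follow Amazon Titan's schema and are examples only; other model families on Bedrock use different body formats. The boto3 import is deferred so the request-building helper works even without the SDK installed.

```python
import json


def build_titan_body(prompt, temperature=0.5, top_p=0.9, max_tokens=512):
    """Build the JSON request body for an Amazon Titan text model.

    The field names (inputText, textGenerationConfig, ...) follow
    Titan's request schema; other Bedrock model families differ.
    """
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "temperature": temperature,   # randomness of sampling
            "topP": top_p,                # nucleus-sampling cutoff
            "maxTokenCount": max_tokens,  # cap on generated tokens
        },
    })


def invoke_titan(prompt, region="us-east-1"):
    """Send the prompt to Bedrock and return the generated text."""
    import boto3  # deferred: requires AWS credentials with Bedrock access

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",
        contentType="application/json",
        accept="application/json",
        body=build_titan_body(prompt),
    )
    return json.loads(response["body"].read())["results"][0]["outputText"]


# invoke_titan("Explain embeddings in one sentence.")  # needs AWS access
```

Raising `temperature` or `topP` makes sampling more varied; `maxTokenCount` bounds both output length and cost, which ties directly into the pricing topic above.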
- Overview of AWS SageMaker
- Step-by-Step AWS SageMaker Walk-through
- Exploring AWS SageMaker Studio
- Hands-on with SageMaker Studio Walk-through
- Choosing Pre-trained Models with SageMaker
- Creating SageMaker Endpoints
- Accessing the SageMaker Console
- Creating a SageMaker Domain
- Opening SageMaker Studio
- Exploring SageMaker JumpStart
- Deploying Models with AWS SageMaker
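The JumpStart deployment flow above can be sketched with the SageMaker Python SDK. The model ID and instance type below are illustrative choices, not the course's exact selections, and the payload shape follows the Hugging Face text-generation (TGI) container convention.

```python
def build_tgi_payload(prompt, max_new_tokens=128, temperature=0.6):
    """Request payload for Hugging Face text-generation (TGI) containers."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }


def deploy_and_query(prompt):
    """Deploy a JumpStart model to a real-time endpoint and query it once.

    Deployment provisions a paid instance, so the endpoint is deleted
    afterwards. Assumes AWS credentials and SageMaker permissions.
    """
    from sagemaker.jumpstart.model import JumpStartModel  # SageMaker SDK

    model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")
    predictor = model.deploy(initial_instance_count=1,
                             instance_type="ml.g5.2xlarge")
    try:
        return predictor.predict(build_tgi_payload(prompt))
    finally:
        predictor.delete_endpoint()  # avoid idle-instance charges
```

The same endpoint can also be invoked later from any client via the `sagemaker-runtime` service, which is how the Lambda integration in the next module reaches deployed models.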
- Creating AWS Lambda Functions and Upgrading Boto3
- Writing the AWS Lambda Function to Connect to Bedrock Service - Part 1
- Writing the AWS Lambda Function to Connect to Bedrock Service - Part 2
- Creating REST APIs with AWS API Gateway and Lambda Integration
- Building an End-to-End Application with AWS Lambda and API Gateway
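A minimal sketch of the Lambda handler pattern this module builds: API Gateway's Lambda proxy integration expects a dict with `statusCode`, `headers`, and `body`, and the handler forwards the request's prompt to Bedrock. The model ID is an example; the curriculum's boto3-upgrade step exists because the boto3 version bundled in some Lambda runtimes predates the `bedrock-runtime` service.

```python
import json


def _proxy_response(status, payload):
    """Shape a dict into the Lambda proxy format API Gateway expects."""
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }


def lambda_handler(event, context):
    """Handle an API Gateway request by forwarding the prompt to Bedrock."""
    try:
        prompt = json.loads(event.get("body") or "{}")["prompt"]
    except (KeyError, json.JSONDecodeError):
        return _proxy_response(400, {"error": "request body must include 'prompt'"})

    import boto3  # may need a layer with a Bedrock-aware version on old runtimes

    client = boto3.client("bedrock-runtime")
    result = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # example model choice
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": prompt}),
    )
    output = json.loads(result["body"].read())["results"][0]["outputText"]
    return _proxy_response(200, {"completion": output})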
- Demo of What We Will Build
- Overview of Vectors, Embedding, Vector DB, and Use Case Architecture
- Setting Up the Environment Before Coding
- Data Ingestion Process
- Data Transformation and Processing
- Embedding, Vector Store, and Indexing
- Hands-on LLM Creation with Context
- Retrieval QA Techniques
- Frontend Development and Final Demo
- End-to-End Demo Implementation
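The embedding, vector store, and indexing steps above can be illustrated with a dependency-free toy. A real pipeline would call an embedding model (for example, Amazon Titan Embeddings) and a vector database such as FAISS or OpenSearch, but the retrieval mechanics are the same cosine-similarity ranking sketched here; the keyword "embedding" below is a deliberate stand-in for a learned one.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


class TinyVectorStore:
    """In-memory stand-in for a vector DB such as FAISS or OpenSearch."""

    def __init__(self, embed):
        self.embed = embed  # callable: text -> list[float]
        self.items = []     # (vector, text) pairs

    def add(self, texts):
        for t in texts:
            self.items.append((self.embed(t), t))

    def search(self, query, k=2):
        qv = self.embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]


def toy_embed(text):
    """Toy embedding: counts of a few keywords. A real pipeline would
    call an embedding model such as Amazon Titan Embeddings here."""
    words = text.lower().split()
    vocab = ["cloud", "model", "vector", "price"]
    return [float(words.count(w)) for w in vocab]


store = TinyVectorStore(toy_embed)
store.add(["Bedrock hosts each model in the cloud",
           "A vector index speeds up search",
           "Price depends on tokens"])
print(store.search("which vector model runs in the cloud", k=1))
# -> ['Bedrock hosts each model in the cloud']
```

In the retrieval-QA step, the top-k chunks returned by `search` become the context that is stuffed into the LLM prompt.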
- Introduction to Azure OpenAI Service
- Setting Up Your Azure OpenAI Service Account
- Understanding System Message Framework and Templates
- Deploying a Model on Your Own Azure Server
- Chatting with Your Model Using the Completions API
- Using Function Calling with Azure OpenAI
- Chat Completions API with Tokens, Temperature, and Other Parameters
- Using DALL-E, GPT-4, and GPT-4V Models
- Deploying a Web App (Chat Engine) with Your Own Data on Azure Cloud
- Choosing the Right Foundation Model from Azure OpenAI
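The chat-completions topics above can be sketched with the `openai` Python package (v1.x). The environment-variable names, system message, and parameter values are illustrative; note that Azure OpenAI addresses a *deployment* name you chose when deploying the model, not the base model name.

```python
import os


def build_messages(system_message, user_prompt, history=()):
    """Assemble the messages list: system message first, then prior turns."""
    msgs = [{"role": "system", "content": system_message}]
    msgs.extend(history)
    msgs.append({"role": "user", "content": user_prompt})
    return msgs


def chat(deployment, user_prompt):
    """One chat-completion round trip against an Azure OpenAI deployment."""
    from openai import AzureOpenAI  # deferred: openai>=1.0 required

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )
    resp = client.chat.completions.create(
        model=deployment,  # the deployment name, not the base model name
        messages=build_messages("You are a concise cloud-AI tutor.",
                                user_prompt),
        temperature=0.7,   # sampling randomness
        max_tokens=300,    # cap on generated tokens
    )
    return resp.choices[0].message.content


# chat("my-gpt4-deployment", "What is an embedding?")  # needs Azure creds
```

The system message at index 0 is what the "System Message Framework and Templates" lesson tunes: it constrains tone and scope for every subsequent turn in the conversation.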
- Fine-Tuning Foundation Models - Overview
- Fine-Tuning Foundation Models - Architecture
- Fine-Tuning Foundation Models - Hands-On
- Identifying the Right Parameters for Fine-Tuning
- PEFT (Parameter Efficient Fine-Tuning)
- Data Compliance & Data Privacy Challenges
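The point of PEFT can be made with simple arithmetic: LoRA, one common PEFT method, freezes the base weight matrix and trains two low-rank factors instead, so a d_out × d_in layer needs only r · (d_in + d_out) trainable parameters rather than d_in · d_out. The 4096 × 4096 projection size below is an illustrative example, not a figure from the course.

```python
def lora_trainable_params(d_in, d_out, rank):
    """Trainable parameters when a d_out x d_in weight gets a rank-r adapter.

    LoRA freezes W and learns B @ A with A: r x d_in and B: d_out x r,
    so only rank * (d_in + d_out) parameters are updated.
    """
    return rank * (d_in + d_out)


def full_params(d_in, d_out):
    """Trainable parameters when the full weight matrix is fine-tuned."""
    return d_in * d_out


# One 4096 x 4096 attention projection with a rank-8 adapter:
full = full_params(4096, 4096)               # 16,777,216 weights
lora = lora_trainable_params(4096, 4096, 8)  # 65,536 weights
print(f"trainable fraction: {lora / full:.4%}")  # well under 1%
```

Updating a fraction of a percent of the weights per layer is what makes fine-tuning affordable on modest hardware, and it also narrows the data-compliance surface, since the frozen base model never changes.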
- Working with the Codey (PaLM 2) Model
- Working with Text and Chat Prompts
- Generating Code and Unit Tests with the Codechat-Bison Model
- Translating Text with the Translation LLM
- Summarization (Language Model Use Case)
- Classification (Language Model Use Case)
- Vision Models
- Speech to Text & Text to Speech
- Multimodal Prompts
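The summarization and classification use cases above can be sketched with the `vertexai` SDK's PaLM 2 text model (`text-bison`), which this module covers. The project ID and prompt templates are illustrative assumptions; Google has since positioned Gemini models as the successors to the PaLM 2 family.

```python
def summarization_prompt(text):
    """Wrap raw text in a summarization instruction for a text model."""
    return f"Summarize the following passage in two sentences:\n\n{text}"


def classification_prompt(text, labels):
    """Classification prompt that constrains the output to the given labels."""
    return (f"Classify the text into exactly one of: {', '.join(labels)}.\n"
            f"Text: {text}\nLabel:")


def predict(prompt, temperature=0.2, max_output_tokens=256):
    """Send the prompt to a PaLM 2 text model on Vertex AI."""
    import vertexai  # deferred: requires google-cloud-aiplatform + GCP creds
    from vertexai.language_models import TextGenerationModel

    vertexai.init(project="my-gcp-project",  # placeholder project ID
                  location="us-central1")
    model = TextGenerationModel.from_pretrained("text-bison")
    return model.predict(prompt,
                         temperature=temperature,
                         max_output_tokens=max_output_tokens).text


# predict(classification_prompt("Loved it!", ["positive", "negative"]))
```

Both use cases ride on the same `predict` call; only the prompt template changes, which is why the curriculum treats summarization and classification as use cases of one language model rather than separate services.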