Natural Language Processing: NLP With Transformers in Python

Learn next-generation NLP with transformers for sentiment analysis, Q&A, similarity search, NER, and more

Rating: 4.25 (2,005 reviews)
Platform: Udemy
Language: English
Category: Data Science
Students: 27,401
Content: 11.5 hours
Last update: Aug 2022
Regular price: $84.99

What you will learn

Industry standard NLP using transformer models

Build full-stack question-answering transformer models

Perform sentiment analysis with transformer models in PyTorch and TensorFlow (see the sketch after this list)

Advanced search technologies like Elasticsearch and Facebook AI Similarity Search (FAISS)

Create fine-tuned transformer models for specialized use-cases

Measure performance of language models using advanced metrics like ROUGE

Vector building techniques like BM25 or dense passage retrievers (DPR)

An overview of recent developments in NLP

Understand attention and other key components of transformers

Learn about key transformer models such as BERT

Preprocess text data for NLP

Named entity recognition (NER) using spaCy and transformers

Fine-tune language classification models
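
As a taste of the transformer-based sentiment analysis listed above, here is a minimal sketch using the Hugging Face transformers pipeline; the checkpoint named below is an assumption for illustration, not necessarily the one used in the course:

```python
# Minimal transformer sentiment-analysis sketch (Hugging Face transformers).
# The checkpoint is an assumption for illustration; the course also
# fine-tunes its own models.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Transformers beat every benchmark we threw at them."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```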

Why take this course?

Transformer models are the de facto standard in modern NLP. They have proven themselves to be the most expressive and powerful models for language by a large margin, beating all major language benchmarks time and time again.

In this course, we cover everything you need to get started building cutting-edge NLP applications using transformer models like Google AI's BERT or Facebook AI's DPR.

We cover several key NLP frameworks including:

  • HuggingFace's Transformers

  • TensorFlow 2

  • PyTorch

  • spaCy

  • NLTK

  • Flair

And learn how to apply transformers to some of the most popular NLP use-cases:

  • Language classification/sentiment analysis

  • Named entity recognition (NER), sketched after this list

  • Question answering (Q&A)

  • Similarity/comparative learning
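
For instance, a minimal NER sketch with spaCy (assuming the small English model has been installed with `python -m spacy download en_core_web_sm`):

```python
# Minimal NER sketch with spaCy's small English pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Facebook AI released DPR, and Google AI released BERT.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Facebook AI ORG"
```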

Throughout each of these use-cases we work through a variety of examples to ensure that we understand what transformers are, how they work, and why they are so important. Alongside these sections we also work through two full-size NLP projects: one for sentiment analysis of financial Reddit data, and another covering a fully-fledged open-domain question-answering application.

All of this is supported by several other sections that help us learn how to better design, implement, and measure the performance of our models, such as:

  • History of NLP and where transformers come from

  • Common preprocessing techniques for NLP

  • The theory behind transformers

  • How to fine-tune transformers

We cover all this and more. I look forward to seeing you in the course!

Content

Introduction

Introduction
Course Overview
Environment Setup
CUDA Setup
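
After the setup lectures, a quick sanity check (a minimal sketch, assuming PyTorch is installed) confirms whether CUDA is visible:

```python
# Verify the environment: PyTorch version and CUDA availability.
import torch

print(torch.__version__)
print(torch.cuda.is_available())  # True if a CUDA GPU is usable
```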

NLP and Transformers

The Three Eras of AI
Pros and Cons of Neural AI
Word Vectors
Recurrent Neural Networks
Long Short-Term Memory
Encoder-Decoder Attention
Self-Attention
Multi-head Attention
Positional Encoding
Transformer Heads
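
As a preview of the positional-encoding lecture, here is a minimal NumPy sketch of the sinusoidal encoding from "Attention Is All You Need" (the dimensions are arbitrary illustration values):

```python
# Sinusoidal positional encoding:
#   PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
#   PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]     # (1, d_model // 2)
    angles = pos / np.power(10000, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)             # even dimensions
    pe[:, 1::2] = np.cos(angles)             # odd dimensions
    return pe

print(positional_encoding(seq_len=4, d_model=8).shape)  # (4, 8)
```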

Preprocessing for NLP

Stopwords
Tokens Introduction
Model-Specific Special Tokens
Stemming
Lemmatization
Unicode Normalization - Canonical and Compatibility Equivalence
Unicode Normalization - Composition and Decomposition
Unicode Normalization - NFD and NFC
Unicode Normalization - NFKD and NFKC
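
The four Unicode normal forms covered here can be explored with nothing but Python's standard library; a minimal sketch:

```python
# NFD/NFC split or compose canonically equivalent sequences;
# NFKD/NFKC additionally fold compatibility characters.
import unicodedata

s = "Å"                                         # single code point U+00C5
decomposed = unicodedata.normalize("NFD", s)    # "A" + combining ring above
composed = unicodedata.normalize("NFC", decomposed)

print(len(s), len(decomposed), len(composed))   # 1 2 1
print(unicodedata.normalize("NFKC", "ﬁ"))       # "fi" (compatibility fold)
```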

Attention

Attention Introduction
Alignment With Dot-Product
Dot-Product Attention
Self Attention
Bidirectional Attention
Multi-head and Scaled Dot-Product Attention
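
This section builds up to scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V; a minimal NumPy sketch of that single formula:

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)       # alignment scores
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)            # softmax over keys
    return weights @ v                                   # weighted sum of values

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(q, k, v).shape)       # (3, 4)
```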

Language Classification

Introduction to Sentiment Analysis
Prebuilt Flair Models
Introduction to Sentiment Models With Transformers
Tokenization And Special Tokens For BERT
Making Predictions
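
The special-tokens lecture centers on how BERT's tokenizer wraps every input; a minimal sketch (assuming the transformers library and the bert-base-uncased checkpoint):

```python
# BERT's tokenizer adds [CLS] at the start and [SEP] at the end.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer("hello world", return_tensors="pt")

print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]))
# ['[CLS]', 'hello', 'world', '[SEP]']
```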

[Project] Sentiment Model With TensorFlow and Transformers

Project Overview
Getting the Data (Kaggle API)
Preprocessing
Building a Dataset
Dataset Shuffle, Batch, Split, and Save
Build and Save
Loading and Prediction

Long Text Classification With BERT

Classification of Long Text Using Windows
Window Method in PyTorch
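
The window method slices a long token sequence into overlapping chunks that each fit BERT's 512-token limit; a minimal sketch of just the slicing logic (the course's version also re-adds special tokens and padding per window):

```python
# Split a long sequence of token ids into overlapping windows.
import torch

def windows(input_ids: torch.Tensor, size: int = 510, stride: int = 255):
    for start in range(0, len(input_ids), stride):
        yield input_ids[start:start + size]
        if start + size >= len(input_ids):
            break

ids = torch.arange(1200)                 # stand-in for a long tokenized document
print([len(w) for w in windows(ids)])    # [510, 510, 510, 435]
```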

Named Entity Recognition (NER)

Introduction to spaCy
Extracting Entities
Authenticating With The Reddit API
Pulling Data With The Reddit API
Extracting ORGs From Reddit Data
Getting Entity Frequency
Entity Blacklist
NER With Sentiment
NER With RoBERTa
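
The heart of the Reddit project is counting how often each ORG entity appears; a minimal sketch (entity labels will vary with the spaCy model used):

```python
# Count ORG entity frequency across a batch of posts.
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")
posts = ["Tesla and Apple rallied today.", "Apple dipped after earnings."]

orgs = Counter()
for doc in nlp.pipe(posts):
    orgs.update(ent.text for ent in doc.ents if ent.label_ == "ORG")

print(orgs.most_common(3))  # e.g. [('Apple', 2), ('Tesla', 1)]
```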

Question Answering

Open Domain and Reading Comprehension
Retrievers, Readers, and Generators
Intro to SQuAD 2.0
Processing SQuAD Training Data
(Optional) Processing SQuAD Training Data with Match-Case
Our First Q&A Model
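
A first extractive Q&A model can be stood up in a few lines with the transformers pipeline; the checkpoint below is an assumption for illustration, not necessarily the model trained in the course:

```python
# Extractive (reading-comprehension) Q&A with a SQuAD-tuned model.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = qa(
    question="Which models dominate modern NLP?",
    context="Transformer models are the de facto standard in modern NLP.",
)

print(result["answer"], result["score"])
```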

Metrics For Language

Q&A Performance With Exact Match (EM)
ROUGE in Python
Applying ROUGE to Q&A
Recall, Precision and F1
Longest Common Subsequence (LCS)
Q&A Performance With ROUGE
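
Exact Match and token-level F1 are simple enough to write from scratch; a minimal sketch of both SQuAD-style metrics:

```python
# Exact Match (EM) and token-level F1 for Q&A evaluation.
def exact_match(prediction: str, truth: str) -> int:
    return int(prediction.strip().lower() == truth.strip().lower())

def token_f1(prediction: str, truth: str) -> float:
    pred, true = prediction.lower().split(), truth.lower().split()
    common = sum(min(pred.count(t), true.count(t)) for t in set(true))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(true)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the transformer", "The Transformer"))                  # 1
print(round(token_f1("the original transformer", "the transformer"), 2))  # 0.8
```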

Reader-Retriever QA With Haystack

Intro to Retriever-Reader and Haystack
What is Elasticsearch?
Elasticsearch Setup (Windows)
Elasticsearch Setup (Linux)
Elasticsearch in Haystack
Sparse Retrievers
Cleaning the Index
Implementing a BM25 Retriever
What is FAISS?
FAISS in Haystack
What is DPR?
The DPR Architecture
Retriever-Reader Stack
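
Wiring Elasticsearch into Haystack as a sparse (BM25) retriever looks roughly like this; a minimal sketch, assuming the Haystack 1.x (farm-haystack) API and an Elasticsearch instance running on localhost:9200:

```python
# Sparse BM25 retrieval over an Elasticsearch-backed document store
# (Haystack 1.x API assumed).
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import BM25Retriever

document_store = ElasticsearchDocumentStore(host="localhost", index="documents")
document_store.write_documents([{"content": "Transformers power modern NLP."}])

retriever = BM25Retriever(document_store=document_store)
for doc in retriever.retrieve(query="What powers modern NLP?", top_k=3):
    print(doc.score, doc.content)
```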

[Project] Open-Domain QA

ODQA Stack Structure
Creating the Database
Building the Haystack Pipeline

Similarity

Introduction to Similarity
Extracting The Last Hidden State Tensor
Sentence Vectors With Mean Pooling
Using Cosine Similarity
Similarity With Sentence-Transformers
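
With sentence-transformers, mean-pooled sentence vectors and cosine similarity come ready-made; a minimal sketch (the checkpoint is an assumption for illustration):

```python
# Sentence embeddings and cosine similarity with sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([
    "Transformers dominate NLP benchmarks.",
    "Modern NLP is dominated by transformer models.",
    "I had pasta for lunch.",
])

print(util.cos_sim(embeddings[0], embeddings[1]))  # high similarity
print(util.cos_sim(embeddings[0], embeddings[2]))  # low similarity
```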

Fine-Tuning Transformer Models

Visual Guide to BERT Pretraining
Introduction to BERT For Pretraining Code
BERT Pretraining - Masked-Language Modeling (MLM)
BERT Pretraining - Next Sentence Prediction (NSP)
The Logic of MLM
Fine-tuning with MLM - Data Preparation
Fine-tuning with MLM - Training
Fine-tuning with MLM - Training with Trainer
The Logic of NSP
Fine-tuning with NSP - Data Preparation
Fine-tuning with NSP - DataLoader
The Logic of MLM and NSP
Fine-tuning with MLM and NSP - Data Preparation
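
The logic of MLM boils down to masking roughly 15% of the real tokens and asking the model to recover them; a minimal sketch of that masking step (the training loop itself is omitted):

```python
# Core MLM data preparation: randomly replace ~15% of non-special
# tokens with [MASK], keeping the originals as labels.
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer("the quick brown fox jumps over the lazy dog",
                return_tensors="pt")

input_ids = enc["input_ids"].clone()
labels = enc["input_ids"].clone()          # targets are the unmasked ids

# Never mask special tokens such as [CLS] and [SEP].
special = torch.tensor(tokenizer.get_special_tokens_mask(
    input_ids[0].tolist(), already_has_special_tokens=True)).bool()
mask = (torch.rand(input_ids.shape) < 0.15) & ~special

input_ids[mask] = tokenizer.mask_token_id
print(tokenizer.convert_ids_to_tokens(input_ids[0]))
```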

Reviews

Aziz
August 9, 2023
I watched the section multiple times and I still don't quite understand how transformers work. I gained a general idea of the components, but I am not sure I understand which component does what, and why. Hopefully it will become clearer in the coming sections. Update 1: As you proceed with the course, you actually start to comprehend some NLP terminology. I am halfway there, and I feel more confident about my NLP knowledge now. Not my transformers knowledge yet...
Shivam
July 22, 2023
Could be more in-depth by walking through the code of the base models that we are just using directly through APIs.
Francisco
July 13, 2023
I learned the algorithms using transformers, but I would have liked a bit more mathematical theory to understand what certain functions actually do.
Nesim
July 8, 2023
Just plain reading of a boring textbook :/ and the delivery sounds lifeless :( I wish he used some metaphors to simplify things; it's difficult to understand the context without prior knowledge of the topic, but thanks to ChatGPT I can ask it to make things simpler for me.
Ren
June 29, 2023
Not really a beginner's course IMO; it could use more examples of decoder-style models such as GPT-2 / GPT-Neo, and the lectures on the transformer architecture (attention) were quite rushed.
Khrystyna
June 13, 2023
Some more theoretical coverage would be beneficial, because in order to modify and apply this yourself you need a deeper understanding. However, overall it's a good course, especially considering there aren't many courses on transformers here.
Martin
May 25, 2023
In-Depth explanations of the most important aspects of transformers + example projects with real world application
Santhosh
May 4, 2023
Yes, this is exactly what I was expecting, and it is very well explained in a gradual and methodical manner, with illustrations, code, and justifications.
Harrison
May 2, 2023
Really clear explanations. Easy to get right into the concepts with limited understanding of neural networks beforehand.
Felipe
April 25, 2023
Very good content. Really enjoyed how the instructor explained many concepts by implementing from scratch the components before diving into existing libraries. Also really enjoyed how deep the course went into QA and retrieval.
Henry
April 17, 2023
The description of the prerequisites is incorrect. This training lacks pedagogy and immediately starts with high-level concepts that are not explained.
Przemyslaw
April 16, 2023
Overall really cool, but I had issues understanding positional encoding and had to do a lot of additional digging into why sin/cos are used.
Francisco
April 10, 2023
Dear James, I hope you're doing well. I wanted to express my appreciation for the 4 homeworks you've provided in the course. I find that the hands-on approach, with real-world applications, helps me better understand the concepts and techniques taught in the lessons. I believe it would be beneficial for us students to have access to detailed solutions for the homework assignments. This would allow us to compare our approaches, identify potential areas for improvement, and ultimately reinforce our understanding of the material. Providing well-explained solutions can help clarify any misconceptions and give us additional insights into problem-solving strategies and best practices. Once again, thank you for your dedication to teaching and for creating such an engaging learning environment. I look forward to continuing to build my skills in this course. Francisco
Arik
March 24, 2023
It seems that this course was inadequately derived from another more comprehensive course, as the chapters are not properly aligned. Additionally, there are some concepts that are not explained at all, while some simple concepts are explained in excessive detail. I would like to obtain access to the original course.
Pranay
March 9, 2023
One of the finest courses available online on transformers (NLP). Very well explained, with hands-on experience of state-of-the-art models for sequence modelling.

Coupons

Date        Discount    Status
6/5/2021    100% OFF    expired

Charts

Price: [price history chart]

Rating: [ratings chart]

Enrollment distribution: [distribution chart]
Udemy ID: 3754106
Course created: 1/6/2021
Course indexed: 6/5/2021
Submitted by: Bot