COS 484: Natural Language Processing

What is this course about?

Recent advances have ushered in exciting developments in natural language processing (NLP), resulting in systems that can translate text, answer questions and even hold spoken conversations with us. This course will introduce students to the basics of NLP, covering standard frameworks for dealing with natural language as well as algorithms and techniques to solve various NLP problems, including recent deep learning approaches. Topics covered include language modeling, representation learning, text classification, sequence tagging, syntactic parsing, machine translation, question answering and others.

Information

Course staff:

Time/location:

(All times are in EST.)

Grading

Prerequisites:

Reading:

There is no required textbook for this class; you should be able to learn everything from the lectures and assignments. However, if you would like to pursue more advanced topics or get another perspective on the same material, here are some books (all of which can be read free online):

Previous offerings:

Schedule

The lecture schedule is tentative and subject to change. All assignments are due at 9:30am EST before Monday lectures.

Week Date Topics Readings Assignments
1 Mon (1/30) Introduction to NLP 1. Advances in natural language processing
2. Human Language Understanding & Reasoning
A0 out
Wed (2/1) n-gram language models J&M 3.1-3.5
Fri (2/3) Precept 1
2 Mon (2/6) Text classification Naive Bayes: J&M 4.1-4.6
Logistic regression: J&M 5.1-5.8
A0 due, A1 out
Wed (2/8) Word embeddings 1 J&M 6.2-6.4, 6.6
Fri (2/10) Precept 2
3 Mon (2/13) Word embeddings 2 1. J&M 6.8, 6.10-6.12
2. Efficient Estimation of Word Representations in Vector Space (original word2vec paper)
3. Distributed representations of words and phrases and their compositionality (negative sampling)
Wed (2/15) Sequence models 1 1. J&M 8.1-8.4
2. Michael Collins's notes on HMMs
Fri (2/17) Precept 3
4 Mon (2/20) Sequence models 2 1. Michael Collins's notes on MEMMs
2. Michael Collins's notes on CRFs
A1 due, A2 out
Wed (2/22) Constituency parsing 1. Notes from Michael Collins: PCFGs, Lexicalized PCFGs
2. J&M 17.1-17.6
Fri (2/24) Precept 4
5 Mon (2/27) Dependency parsing J&M 18.1-18.2
Wed (3/1) Neural networks for NLP J&M 7.3-7.5
Fri (3/3) Precept 5
6 Mon (3/6) Midterm review: part 1
Midterm review: part 2
A2 due, A3 out
Wed (3/8) Midterm
7 Mon (3/13) Spring Recess (no class)
Wed (3/15) Spring Recess (no class)
8 Mon (3/20) Recurrent neural networks 1 1. J&M 9.1-9.3
2. The Unreasonable Effectiveness of Recurrent Neural Networks
Wed (3/22) Recurrent neural networks 2
PyTorch basics
1. J&M 9.5
2. Understanding LSTM Networks
3. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (GRUs)
4. Simple Recurrent Units for Highly Parallelizable Recurrence (SRUs)
Fri (3/24) Project proposal due
9 Mon (3/27) Machine translation + seq2seq models 1. J&M 13.2
2. Michael Collins's notes on IBM Models 1 and 2
3. Sequence to Sequence Learning with Neural Networks
4. Machine Translation. From the Cold War to Deep Learning.
Wed (3/29) Seq2seq models + attention 1. Neural Machine Translation by Jointly Learning to Align and Translate
2. Effective Approaches to Attention-based Neural Machine Translation
3. Blog post: Visualizing A Neural Machine Translation Model
4. Blog post: Sequence to Sequence (seq2seq) and Attention
A3 due, A4 out
Fri (3/31) Precept 6
10 Mon (4/3) Transformers 1 1. J&M 10.1
2. Attention Is All You Need
3. The Annotated Transformer
4. The Illustrated Transformer
Wed (4/5) Transformers 2 1. Efficient Transformers: A Survey
2. Vision Transformer
Fri (4/7) Precept 7
11 Mon (4/10) Contextualized representations and pre-training 1. Deep contextualized word representations (ELMo)
2. Improving Language Understanding by Generative Pre-Training (GPT)
3. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
4. The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning)
Wed (4/12) Large language models 1. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (ELECTRA)
2. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (T5)
3. Language Models are Few-Shot Learners (GPT-3)
4. Training language models to follow instructions with human feedback (InstructGPT)
5. GPT-4 Technical Report (GPT-4)
12 Mon (4/17) Question answering 1. SQuAD: 100,000+ Questions for Machine Comprehension of Text
2. Reading Wikipedia to Answer Open-Domain Questions
3. Learning Dense Representations of Phrases at Scale
4. How Much Knowledge Can You Pack Into the Parameters of a Language Model?
Wed (4/19) Natural language generation (guest lecture: Prof. He He from NYU) A4 due
13 Mon (4/24) Grounding (guest lecture: Prof. Karthik Narasimhan)
Wed (4/26) Project feedback session (no lecture; individual times to be set)
Fri (5/5) Project poster session (Location: Friend Center upper atrium, time: 9:30-11:30)
Tue (5/9) Dean's date Final project report due
Coursework

Assignments

Late policy: All assignments are due at 9:30am before the Monday lecture. You have 96 free late hours (~4 days) in total across all assignments. Once you have used up your free late hours, late submissions incur a penalty of 10% per day, up to a maximum of 3 days, beyond which submissions will not be accepted. The only exception to this rule is if you have a note from your Dean of Studies; in that case, you must notify the instructors via email. For students with a dean's note, the weight of the missed/penalized assignment will be added to the midterm and your midterm score will be scaled accordingly (for homeworks 0, 1, and 2); e.g., if you are penalized 2 points overall, your midterm will be worth 27 points and your score will be multiplied by 27/25. Missing homework 3 or 4 (after the midterm) can only be compensated for by arranging an oral exam on the pertinent material.
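To make the arithmetic above concrete, here is a minimal Python sketch of the late-penalty and midterm-rescaling rules. It is purely illustrative, not official course code: the function names are made up, and it assumes the 10%-per-day penalty, the 3-day cap, and a 25-point midterm (as implied by the 27/25 example).

    # Illustrative sketch only -- not official course code.

    def late_penalty(hours_late: float, free_hours_remaining: float) -> float:
        """Fractional penalty on one assignment (0.0 means no penalty)."""
        billable_hours = max(0.0, hours_late - free_hours_remaining)
        if billable_hours == 0.0:
            return 0.0                        # fully covered by free late hours
        late_days = -(-billable_hours // 24)  # ceiling: a partial day counts
        if late_days > 3:
            raise ValueError("more than 3 late days: submission not accepted")
        return 0.10 * late_days               # 10% penalty per late day

    def rescaled_midterm(raw_score: float, missed_points: float) -> float:
        """Dean's-note rescaling for homeworks 0-2: the missed assignment's
        weight is added to the (assumed 25-point) midterm, e.g. 27/25."""
        return raw_score * (25.0 + missed_points) / 25.0

    print(late_penalty(30, 10))       # 20 billable hours -> 1 late day -> 0.1
    print(rescaled_midterm(20.0, 2))  # 20 * 27/25 = 21.6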
Writeups: Homeworks should be written up clearly and succinctly; you may lose points if your answers are unclear or unnecessarily complicated. Using LaTeX is recommended (here's a template), but not a requirement. If you've never used LaTeX before, refer to this introductory guide on Working with LaTeX to get started. Hand-written assignments must be scanned and uploaded as a PDF.
Programming: For each assignment, we provide a Google Colab file with the programming questions included. You'll need to make a copy of this file, fill in the necessary parts, run your code, and upload the code and results as a PDF file. If you've never used Google Colab before, refer to this introductory guide on Working with Google Colab to get started.
Collaboration policy and honor code: You are free to form study groups and discuss homeworks and projects. However, you must write up homeworks and code from scratch independently, and you must acknowledge in your submission all the students you discussed with. The following are considered to be honor code violations (in addition to the Princeton honor code):
  • Looking at the writeup or code of another student.
  • Showing your writeup or code to another student.
  • Discussing homework problems in such detail that your solution (writeup or code) is almost identical to another student's answer.
  • Uploading your writeup or code to a public repository (e.g. GitHub, Bitbucket, Pastebin) so that it can be accessed by other students.
When debugging code together, you are only allowed to look at the input-output behavior of each other's programs (so you should write good test cases!). It is important to remember that even if you didn't copy but just gave another student your solution, you are still violating the honor code, so please be careful. If you feel like you made a mistake (it can happen, especially under time pressure!), please reach out to Danqi; the consequences will be much less severe than if we approach you.
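As a purely hypothetical illustration of debugging via input-output behavior alone, two students might agree on shared test cases like the following and run them against their own, independently written implementations; `tokenize` here is a made-up stand-in for any assignment function.

    # Hypothetical shared test cases: compare only inputs and outputs,
    # never each other's code.
    import re

    def tokenize(text: str) -> list[str]:
        """A made-up example function; each student writes their own."""
        return re.findall(r"[a-z]+|[.,!?]", text.lower())

    def test_tokenize():
        assert tokenize("The cat sat.") == ["the", "cat", "sat", "."]
        assert tokenize("") == []

    test_tokenize()
    print("all tests passed")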

Final Project

The final project offers you the chance to apply your newly acquired skills towards an in-depth NLP application. Students are required to complete the final project in teams of 3 (find your teammates early!).

There are two options for the final project: (a) reproducing an ACL/NAACL/EMNLP 2020-2022 paper (encouraged); or (b) completing a research project (for this option, you need to discuss your proposal and get prior approval from the instructor).
Deliverables: The final project is worth 35% of your course grade. The deliverables include the project proposal, the poster session, and the final project report (see the schedule above for due dates).
Policy and honor code:
  • Final projects must be implemented in Python. You can use any deep learning framework, such as PyTorch or TensorFlow.
  • You are free to discuss ideas and implementation details with other teams. However, under no circumstances may you look at another team's code, or incorporate their code into your project.
  • Do not share your code publicly (e.g. in a public GitHub repo) until after the class has finished.

Submission

Electronic Submission: Assignments and the project proposal/paper are to be submitted as PDF files through Gradescope. If you need to sign up for a Gradescope account, please use your @princeton.edu email address. You can submit as many times as you'd like until the deadline; we will only grade the last submission. Submit early to make sure your submission uploads/runs properly on the Gradescope servers. If anything goes wrong, please ask a question on Ed or contact a TA. Do not email us your submission. Partial work is better than not submitting any work. For more detailed information on submitting your assignment solutions, see this guide on assignment submission logistics.

For assignments with a programming component, we may automatically sanity-check your code with some basic test cases, but we will grade your code on additional, hidden test cases. Important: passing the basic test cases by no means guarantees full credit on the hidden test cases, so test your program thoroughly yourself!

Regrades: If you believe that the course staff made an objective error in grading, then you may submit a regrade request. Remember that even if the grading seems harsh to you, the same rubric was used for everyone for fairness, so this is not sufficient justification for a regrade. It is also helpful to cross-check your answer against the released solutions. If you still choose to submit a regrade request, click the corresponding question on Gradescope, then click the "Request Regrade" button at the bottom. Any requests submitted over email or in person will be ignored. Regrade requests for a particular assignment are due one week after the grades are returned. Note that we may regrade your entire submission, so depending on your submission you may actually lose more points than you gain.

FAQ