Based on 20 Users
Prof. Chang was a good teacher. Slides were very informative, but also a bit overwhelming, especially towards the end of the quarter. Some of the math formulas and derivations for loss functions could get quite complicated and confusing. However, he was a very kind professor, and clearly cared about our learning. He was always willing to stop for questions.
Assignments were very easy, but also interesting. There were 6 online 24-hour quizzes, essentially a free 20% grade boost and good review for exams, and 3 python notebook coding assignments. Lectures were mostly conceptual, so these 3 HWs were crucial to understanding the applications of what we learned, as well as a good introduction to sklearn.
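The actual course notebooks aren't public, but a minimal sklearn workflow of the kind the reviewer describes (fit a model, score it on held-out data) might look like this; the dataset and classifier choice here are illustrative assumptions, not the course's:

```python
# A minimal sklearn workflow (hypothetical; not the actual course HW):
# load data, split it, fit a classifier, and measure test accuracy.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000)  # a simple linear classifier
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

The whole fit/predict/score pattern is the same for nearly every sklearn estimator, which is why one quarter of notebook assignments is enough to get comfortable with the library.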
The midterm was easy, timed and on gradescope, with a mean of 90. The final was much harder, as the concepts at the end of the course were a bit more complex, with harder math as well. The mean was around a 70. However, the rest of the class was pretty easy, and the final was only 30% of the grade, so it's not the end of the world if you stay on top of everything else.
Overall, the class was very beneficial to me. I'm very interested in machine learning, and received a great introduction to many concepts, as well as some basic applications. Prof. Chang was very fair, kind, and a pretty good lecturer. I highly recommend this as an elective if you're at all interested in AI/ML.
I think people who like math would do well in this class, but there are derivations involved and not enough time to go over them slowly in lecture, so I was mostly lost the entire time on the math side. The workload of homework and quizzes isn't bad. The prof is really nice, but it was hard to stay focused since he had a lot of material to get through.
This class was hard. It was very math-heavy and filled with proofs. If, like me, you are not already familiar with the material, prepare to spend a lot of time outside of class trying to understand what is going on. Kai-Wei didn't do a great job explaining the concepts we went over and could be quite confusing when explaining the math behind different models. There are only four problem sets in the course, and while they can take a bit of time, the workload isn't too bad for the most part. The exam questions are generally a little easier than the homework questions, but there are a lot of probability-related questions on exams. The curve for the class this quarter was as follows (he won't curve down, only up): A+ 11, A 33, A- 33, B+ 35, B 22, B- 23, C+ 14, C 14, Other 6 (number of people per grade, not %). Although the material was pretty challenging and Kai-Wei didn't explain it especially well, I found the class to be semi-interesting. I thought it was cool to understand what machine learning really is and what goes into making a model. PAC theory was especially interesting in my opinion!
For this quarter, we were assigned 5 projects. The midterm and final were both 24-hour exams. The material was more in-depth (you can compare the slides with Fall '19's) but doable. We were not required to read any textbook, but I spent a huge amount of time studying the sklearn documentation. The projects are definitely a plus.
Overall, this class is super useful. It actually teaches you a lot of the material that comes up in ML interviews (SVM, boosting vs. bagging, ...) and in research-track internships (MSRA, Google Research, etc.).
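The bagging-vs-boosting contrast mentioned above can be made concrete with a short sketch (this is my illustration, not course material): bagging trains learners independently on bootstrap samples and averages them, while boosting trains them sequentially, with each new learner focusing on the examples the previous ones got wrong.

```python
# Hypothetical sketch of bagging vs. boosting on synthetic data,
# using sklearn's default base learners for each ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

bag = BaggingClassifier(n_estimators=50, random_state=0)    # parallel ensemble
boost = AdaBoostClassifier(n_estimators=50, random_state=0) # sequential ensemble

bag_score = cross_val_score(bag, X, y, cv=5).mean()
boost_score = cross_val_score(boost, X, y, cv=5).mean()
print(f"bagging: {bag_score:.2f}, boosting: {boost_score:.2f}")
```

Being able to explain *why* the two differ (variance reduction vs. bias reduction) is exactly the kind of question that comes up in ML interviews.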
I would recommend starting with theoretical classes like this one and then progressing to more application-based classes like Stats231A.
He probably shouldn't use a really light marker in a 200+ person lecture hall, but here are my major complaints about this class:
I heard the homeworks are copied wholesale from a different machine learning course at UIUC. The midterm and final questions are also heavily influenced by Dan Roth's course.
Overall, the course wasn't conceptually hard. However, a lot of the test questions gave almost no partial credit. For example, if you had the right answer when the question was "Show X concept," almost no partial credit was given; they were looking for proofs more than anything else.
The final was only two hours, and the test questions can be written quite poorly and unclearly. If you're a text learner, this is a great class! If you're a visual learner and want diagrams on your test, expect to be disappointed.
He also doesn't curve his class at all; rather, he scales the margins (e.g., an A- at 88% or something similar, with each letter grade following). However, people did well on his midterm (86-ish median), so he made his final a lot harder to artificially push down the grade distribution. IMO this would have been fine if he curved the class, but he doesn't.
Also, he doesn't really consider test statistics that much. Everything is a straight scale, so your percentile on each test doesn't matter.
Overall, he's a good lecturer, though I sometimes found him hard to understand in class; BruinCast helped out a lot (1.5x speed FTW). However, his testing and grading schemes could be a lot better. My suggestions: make the midterm and final more consistent in difficulty, test on more concepts (almost no SVM questions on the final, IIRC), include more diagrams, and write the tests more consistently. (E.g., telling students not to ask questions about the test questions on the final because of unclear language seems wrong, IMO, especially on a test that ends up having a typo.)
Prof. Chang uses slides with minions on them, which is great. The lectures are on the dry side, but the examples he works through do pop up on the final. There were 5(?) quizzes in lieu of a midterm, and a similar number of homeworks. The quizzes are pretty alright. The first homework is the hardest (lots of coding, etc.), and the later ones become much easier, with the coding devolving into calling a few libraries. If you don't mess up and forget to include one of the many figures the HW asks for, the HW is pretty simple.
All in all, I would say he's alright. I wasn't inspired by the course, but I wasn't in pain either.
Professor Chang was a pretty good professor overall. I felt that his lectures were informative and gave a good overview of a variety of topics in machine learning. The only issue I had with the class was that it felt like you were assumed to already have some exposure to ML and data science. The lectures were fairly conceptual, but the homeworks and final were math-heavy. I watched every lecture top to bottom, and it still felt like the homeworks and final had a lot more calculations than what was taught in lecture.
The projects felt the same way. The TAs and professor are very generous with hints on the projects, but I felt like I was stumbling in the dark trying to figure out the coding, as I had never used Python for machine learning before.
Unless you have some prior exposure to ML and data science and/or you're just really good at math and Python, you're probably going to spend a lot of your time googling and figuring out what to do, like I did.
Other than that, I think the class was pretty good overall. You learn a lot of relevant and important topics and Professor Chang does a good job explaining the concepts and intuition.
To be honest, I'm not sure how I ended up with an A. Most of the material in this class confused me all quarter, and doing every homework assignment would have felt like swimming in the dark if not for the TAs (s/o to Johnson) and study groups. The lecture material was just not very engaging to me, but Kai-Wei was quite prompt with helping students and answering questions. There were 4 homework assignments; some were repeated from previous quarters, but over half of the problems were original. Each assignment took me 5-7 hours. There was also a multiple-choice quiz every week with 3 attempts allowed, so you're almost guaranteed to get 100% on all of them. There was no midterm this quarter, most likely due to the remote situation. The final was 24-hour open note, a mixture of multiple-choice and short answer, with some math/derivation as well. It was said to take around 3 hours, but we all know that overestimates our ability to concentrate and recall information. I doubted myself a lot, and it ended up taking me around 6 hours. The grading scale was not curved because the median was usually 90 or above on the homeworks and 80-ish on the final. If you're not into derivations and statistics/probability, I would not recommend this class.
IMHO, Prof. Kai-Wei really understands ML and is really good at explaining it. Also, his effort to facilitate the best learning experience he can is evident in his gentle promotion of participation, eagerness to adopt feedback, and quirky slides.
The class is definitely not easy, though, and is mathematically involved, significantly more so in the latter half.
This class is an introduction to NLP and covers tasks such as part-of-speech tagging, word representation, syntactic parsing, semantic parsing, coreference resolution, machine translation, and more. The models and algorithms used for these tasks are a mixture of classical ones (e.g., Hidden Markov Models) and modern ones (e.g., Transformer neural nets), with the class focusing more on the latter.
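The course materials themselves aren't reproduced here, but the classical side of that list can be illustrated with a toy example: Viterbi decoding for a tiny Hidden Markov Model POS tagger. All tags and probabilities below are invented for illustration; a real tagger would estimate them from a tagged corpus.

```python
# Toy Viterbi decoder for an HMM part-of-speech tagger.
# The probability tables are made up for illustration only.

def viterbi(words, tags, start, trans, emit):
    """Return the most probable tag sequence for `words`."""
    # best[t] = (probability, path) of the best tag path ending in tag t
    best = {t: (start[t] * emit[t].get(words[0], 1e-6), [t]) for t in tags}
    for w in words[1:]:
        new = {}
        for t in tags:
            p, path = max(
                (best[s][0] * trans[s][t] * emit[t].get(w, 1e-6), best[s][1])
                for s in tags
            )
            new[t] = (p, path + [t])
        best = new
    return max(best.values())[1]

tags = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans = {
    "DET": {"DET": 0.05, "NOUN": 0.9, "VERB": 0.05},
    "NOUN": {"DET": 0.1, "NOUN": 0.3, "VERB": 0.6},
    "VERB": {"DET": 0.5, "NOUN": 0.3, "VERB": 0.2},
}
emit = {
    "DET": {"the": 0.9},
    "NOUN": {"dog": 0.5, "barks": 0.1},
    "VERB": {"barks": 0.7, "dog": 0.05},
}
print(viterbi(["the", "dog", "barks"], tags, start, trans, emit))
```

The modern Transformer-based approaches the class emphasizes replace these hand-specified probability tables with learned neural scoring, but the decoding-over-sequences idea carries over.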
Generally, I am very happy with Prof. Chang's delivery of this material. The lectures are well-prepared and interactive and are updated regularly to include new concepts, interesting papers, etc. I especially like the quality of the lecture slides, which are almost good enough to learn from entirely on their own.
One issue I had with the class is that it is fairly work-intensive. Here is the list of assignments in the class:
-Weekly quizzes (5 in total)
-1 midterm group project
-1 paper group presentation
-1 final group project
-1 final exam
-Various peer reviews
While there are quite a few, I did like the hands-on nature of these assignments. We could implement a range of different approaches for each project and even had the opportunity to peer-review other students' work. I found the latter especially useful, as it gives you a better way to compare and learn than receiving a grade alone.
Overall, I can really recommend this class to anyone interested in NLP. Its material is current, and the instructors genuinely want to help you learn about the field.