Machine Learning by Andrew Ng at coursera
tags: 2013, machine learning, Stanford
In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you’ll not only learn the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you’ll learn about some of Silicon Valley’s best practices in innovation as it pertains to machine learning and AI.
Mining of Massive Data Sets at coursera
This class teaches algorithms for extracting models and other information from very large amounts of data. The emphasis is on techniques that are efficient and that scale well.
The Analytics Edge at edx
tags: MIT, machine learning
Through inspiring examples and stories, discover the power of data and use analytics to provide an edge to your career and your life.
Interacting with Data at Princeton University with David Blei
tags: clustering, classification, regression
Computers have made it possible, even easy, to collect vast amounts of data from a wide variety of sources. It is not always clear, however, how to use that data, and how to extract useful information from it. This problem is faced in a tremendous range of business, medical and scientific applications. The purpose of this course is to teach some of the best and most general approaches to this broad problem of how to get the most out of data. The course will explore both theoretical foundations and practical applications. Students will gain experience analyzing several kinds of data, including document collections, biological data, and natural images.
Topics will include: classification, clustering, regression, dimensionality reduction, and advanced topics and applications.
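As a minimal taste of the regression topic listed above, an ordinary least-squares line fit can be written in a few lines of Python (the toy data points below are an illustrative assumption, not course material):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b via the closed-form solution."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = cov(x, y) / var(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2x + 1, so the fit recovers a = 2, b = 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```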
Data Science of Johns Hopkins University at coursera
A sequence of courses: learn to be a data scientist and apply your skills in a capstone project. In this course you will learn to formulate context-relevant questions and hypotheses to drive data-scientific research; identify, obtain, and transform a data set to make it suitable for the production of statistical evidence communicated in written form; and build models based on new data types, experimental design, and statistical inference.
In-depth introduction to machine learning in 15 hours of expert videos, Stanford University
tags: 2014, machine learning, Stanford University
The course will cover most of the material in the book An Introduction to Statistical Learning: with Applications in R (2013), which the instructors coauthored with Gareth James and Daniela Witten. Each chapter ends with an R lab in which the examples are developed. By January 1st, 2014, an electronic version of the book will be available for free from the instructors’ websites.
instructor: Yann LeCun
tags: deep learning, 2015, kaggle
course material: video
This is a graduate course on deep learning, one of the hottest topics in machine learning and AI at the moment.
instructor: Richard Socher
tags: deep learning, nlp, 2015
course material: syllabus (with slide and videos)
Natural language processing (NLP) is one of the most important technologies of the information age. Understanding complex language utterances is also a crucial part of artificial intelligence. Applications of NLP are everywhere because people communicate almost everything in language: web search, advertisement, emails, customer service, language translation, radiology reports, etc. There is a large variety of underlying tasks and machine learning models powering NLP applications. Recently, deep learning approaches have obtained very high performance across many different NLP tasks. These models can often be trained with a single end-to-end model and do not require traditional, task-specific feature engineering. In this spring quarter course students will learn to implement, train, debug, visualize and invent their own neural network models. The course provides a deep excursion into cutting-edge research in deep learning applied to NLP. The final project will involve training a complex recurrent neural network and applying it to a large-scale NLP problem. On the model side we will cover word vector representations, window-based neural networks, recurrent neural networks, long short-term memory (LSTM) models, recursive neural networks, convolutional neural networks, as well as some very novel models involving a memory component. Through lectures and programming assignments students will learn the necessary engineering tricks for making neural networks work on practical problems.
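Word vector representations, the first model family the description mentions, reduce word similarity to vector geometry. A minimal cosine-similarity sketch in Python (the tiny 3-dimensional "embeddings" below are made-up illustrative values; real models learn hundreds of dimensions from large corpora):

```python
import math

def cosine(u, v):
    """Cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors (illustrative assumption): related words point in similar directions.
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.1, 0.9],
}
assert cosine(vecs["king"], vecs["queen"]) > cosine(vecs["king"], vecs["apple"])
```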
Deep Learning 2015 at Oxford
tags: 2015, deep learning
Nando de Freitas taught a deep learning course at the University of Oxford. All of the videos are freely available. The playlist is a bit out of order, but starting with Lecture 1 is probably the best approach.
Cluster Analysis in Data Mining, Jiawei Han, University of Illinois at Urbana-Champaign
tags: 2015, clustering
Discover the basic concepts of cluster analysis, and then study a set of typical clustering methodologies, algorithms, and applications. This includes partitioning methods such as k-means, hierarchical methods such as BIRCH, density-based methods such as DBSCAN/OPTICS, probabilistic models, and the EM algorithm. Learn clustering and methods for clustering high dimensional data, streaming data, graph data, and networked data. Explore concepts and methods for constraint-based clustering and semi-supervised clustering. Finally, see examples of cluster analysis in applications.
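Of the partitioning methods the description names, k-means is the simplest: alternate between assigning each point to its nearest centroid and moving each centroid to the mean of its cluster. A minimal sketch in Python (the two-blob toy data and parameter choices are illustrative assumptions, not course material):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: alternate assignment and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data points
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centroids, clusters

# Two well-separated blobs; k-means recovers one centroid per blob.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
        (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centroids, clusters = kmeans(data, k=2)
```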
Machine Learning: Supervised, Unsupervised & Reinforcement, Georgia Tech, udacity
tags: supervised, unsupervised, reinforcement learning, MOOC
Machine Learning is a graduate-level course covering the area of Artificial Intelligence concerned with computer programs that modify and improve their performance through experience. The first part of the course covers Supervised Learning, a machine learning task that makes it possible for your phone to recognize your voice, your email to filter spam, and for computers to learn a bunch of other cool stuff. In part two, you will learn about Unsupervised Learning. Ever wonder how Netflix can predict what movies you’ll like? Or how Amazon knows what you want to buy before you do? Such answers can be found in this section! Finally, can we program machines to learn like humans? This Reinforcement Learning section will teach you the algorithms for designing self-learning agents like us!
Stanford NLP Open Course, Stanford
tags: nlp, 2012, youtube
instructors: Dan Jurafsky, Christopher Manning
This course covers a broad range of topics in natural language processing, including word and sentence tokenization, text classification and sentiment analysis, spelling correction, information extraction, parsing, meaning extraction, and question answering. We will also introduce the underlying theory from probability, statistics, and machine learning that is crucial for the field, and cover fundamental algorithms like n-gram language modeling, naive Bayes and maxent classifiers, sequence models like Hidden Markov Models, probabilistic dependency and constituent parsing, and vector-space models of meaning.
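Of the fundamental algorithms the course covers, the multinomial naive Bayes classifier is small enough to sketch directly: pick the label maximizing log P(label) plus the summed log-likelihoods of the tokens. A minimal Python sketch with add-one (Laplace) smoothing (the toy sentiment corpus below is an illustrative assumption, not course material):

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(docs):
    """docs: list of (tokens, label). Returns log-priors, per-class counts, vocab."""
    label_counts = Counter(label for _, label in docs)
    token_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        token_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {l: math.log(c / len(docs)) for l, c in label_counts.items()}
    return priors, token_counts, vocab

def classify(tokens, priors, token_counts, vocab):
    """argmax over labels of log P(label) + sum log P(token | label), smoothed."""
    def score(label):
        total = sum(token_counts[label].values())
        return priors[label] + sum(
            math.log((token_counts[label][t] + 1) / (total + len(vocab)))
            for t in tokens if t in vocab)
    return max(priors, key=score)

# Toy training corpus (illustrative assumption).
train = [
    (["great", "fun", "great"], "pos"),
    (["boring", "awful"], "neg"),
    (["fun", "enjoyable"], "pos"),
    (["awful", "dull", "boring"], "neg"),
]
priors, counts, vocab = train_naive_bayes(train)
print(classify(["great", "fun"], priors, counts, vocab))    # → pos
print(classify(["boring", "dull"], priors, counts, vocab))  # → neg
```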