【CS224N Summary 1】The Past and Present of Word Vectors

This post reviews the following topics (all figures and results shown come from the Stanford open course CS224N): the past of word vectors - a brief history, WordNet, discrete symbols (one-hot vectors), distributional semantics; the present of word vectors - what does a general Word2Vec model look like? How...
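Not part of the original excerpt: a minimal sketch, assuming a toy vocabulary, of the contrast the post goes on to draw between one-hot discrete symbols and dense distributional vectors such as those learned by Word2Vec. The words and dimensions below are illustrative only.

```python
import numpy as np

# Toy vocabulary -- purely illustrative, not from the post.
vocab = ["hotel", "motel", "king", "queen"]
index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    """Discrete-symbol representation: a sparse vector with a single 1."""
    v = np.zeros(len(vocab))
    v[index[word]] = 1.0
    return v

# One-hot vectors of distinct words are always orthogonal,
# so they encode no notion of similarity:
print(one_hot("hotel") @ one_hot("motel"))  # 0.0

# Distributional (dense) vectors, e.g. learned by Word2Vec, place similar
# words close together; random vectors stand in for learned embeddings
# here just to show the data structure.
dim = 5
embeddings = {w: np.random.randn(dim) for w in vocab}
print(embeddings["hotel"])  # a dense real-valued vector
```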

Natural Language Processing Projects Summary

Hi, I am Yuji. Here are some NLP projects I have implemented. Always updating :) 1. Commenter-Based Prediction on the Helpfulness of Online Product Reviews, code available ...

Basic Machine Learning Models: Summary & Implementation

SVM: the detailed theory of SVM was already written up last year in: [Big Data Algorithm Course Notes] Lesson 6/7 - Support Vector Machine Theorem; [Big Data Algorithm Course Notes] Lesson 8 - Optimal Condition & Dual SVM; [Big Data Algorithm Course Notes] Lesson 9 - SVM & Algorithm (ADMM...

[MLAPP] Chapter 3: Generative Models for Discrete Data

Learning Notes on the book Machine Learning: A Probabilistic Perspective

3.1 Introduction: Generative models aim to model $P(X, y)$, while discriminative models aim to directly model $P(y|X)$. Applying Bayes' rule to a generative classifier of the form: ...
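The excerpt is truncated at the formula; as a reference point (the standard form of Bayes' rule for a generative classifier, not necessarily the exact equation from the book), the class posterior can be written as:

$$
p(y = c \mid \mathbf{x}, \boldsymbol{\theta})
= \frac{p(y = c \mid \boldsymbol{\theta})\, p(\mathbf{x} \mid y = c, \boldsymbol{\theta})}
       {\sum_{c'} p(y = c' \mid \boldsymbol{\theta})\, p(\mathbf{x} \mid y = c', \boldsymbol{\theta})}
$$

Here the generative model supplies the class-conditional density $p(\mathbf{x} \mid y = c, \boldsymbol{\theta})$, and the prior $p(y = c \mid \boldsymbol{\theta})$ weights the classes.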

Moments


[MLAPP] Chapter 2: Probability

Learning Notes on the book Machine Learning: A Probabilistic Perspective

2.1 Introduction: What is probability? The first interpretation is called the frequentist interpretation. In this view, probabilities represent long-run frequencies of events. Th...
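Not from the notes: a minimal sketch of the frequentist reading, treating the probability of heads as the long-run fraction of heads in repeated simulated flips of a fair coin.

```python
import random

random.seed(0)

# Frequentist interpretation: the probability of heads is the fraction
# of heads observed as the number of flips grows large.
flips = 100_000
heads = sum(random.random() < 0.5 for _ in range(flips))
print(heads / flips)  # close to 0.5 for a fair coin
```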

Big Data Algorithm Lesson 1: About Kernel

Why do we use kernels?

I have recently been auditing a big data algorithms course at Fudan University. The instructor focuses on the underlying optimization algorithms, and the lectures are detailed and interesting, so I organized my notes on Zhihu. Since the formulas are too tedious to retype here, I will just put the links below: [Big Data Algorithm Course Notes] Lesson 1 - Why Kernel; [Big Data Algorithm Course Notes] Lesson 2 - Kernel K-means; [Big Data Algorithm Course Notes] Lesson 3 - Kernel...
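Not from the linked notes: a minimal sketch of the kernel idea those lessons cover, computing a Gaussian (RBF) kernel matrix so that similarity in an implicit high-dimensional feature space is evaluated directly from the inputs. The function name, toy data, and `gamma` value are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2).

    The kernel trick: each entry equals an inner product in an implicit
    high-dimensional feature space, computed without forming the features.
    """
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

# Toy data: kernel methods (kernel k-means, SVMs) only ever need this matrix.
X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
print(rbf_kernel(X, gamma=0.5))
```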