Statistical Ranking Problem
Slide 1: Statistical Ranking Problem
Tong Zhang, Statistics Department, Rutgers University
Slide 2: Ranking Problems
Rank a set of items and display them to users in the corresponding order. Two issues: performance at the top, and dealing with a large search space.
- Web-page ranking: rank pages for a query; theoretical analysis with an error criterion focusing on the top.
- Machine translation: rank possible (English) translations for a given input (Chinese) sentence; an algorithm handling a large search space.
Slide 3: Web-Search Problem
A user types a query, and the search engine returns a result page:
- selects from billions of pages.
- assigns a score to each page, and returns pages ranked by the scores.
Quality of a search engine:
- relevance (whether returned pages are on topic and authoritative).
- other issues:
  - presentation (diversity, perceived relevance, etc.).
  - personalization (predicting user-specific intention).
  - coverage (size and quality of the index).
  - freshness (whether contents are timely).
  - responsiveness (how quickly the search engine responds to the query).
Slide 4: Relevance Ranking: Statistical Learning Formulation
Training:
- randomly select queries q, and web pages p for each query.
- use editorial judgment to assign a relevance grade y(p, q).
- construct a feature vector x(p, q) for each query/page pair.
- learn a scoring function \hat f(x(p, q)) that preserves the order of y(p, q) for each q.
Deployment:
- a query q comes in.
- return pages p_1, ..., p_m in descending order of \hat f(x(p, q)).
Slide 5: Measuring Ranking Quality
Given a scoring function \hat f, return an ordered page list p_1, ..., p_m for a query q.
- only the order information is important.
- should focus on the relevance of the returned pages near the top.
DCG (discounted cumulative gain) with decreasing weights c_i:

    DCG(\hat f, q) = \sum_{i=1}^m c_i r(p_i, q)

where c_i reflects the effort (or likelihood) of the user clicking on the i-th position.
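As a concrete illustration, DCG can be computed directly from the relevance grades of the returned pages. The sketch below uses the common position weight c_i = 1/log2(i + 1); this particular choice is an assumption, since the slides only require the c_i to be decreasing.

```python
import math

def dcg(relevances, k=None):
    """Discounted cumulative gain: sum_i c_i * r_i over the returned order.

    `relevances` are the relevance grades r(p_i, q) of the pages, listed in
    the order produced by the scoring function.  The position weight
    c_i = 1 / log2(i + 1) is a common (assumed) choice of decreasing c_i."""
    if k is not None:
        relevances = relevances[:k]  # truncate: focus on the top k positions
    return sum(r / math.log2(i + 2) for i, r in enumerate(relevances))
```

Placing a more relevant page higher increases the score, so for the same set of grades `dcg([3, 2, 0]) > dcg([0, 2, 3])`.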
Slide 6: Subset Ranking Model
- x ∈ X: feature (x(p, q) ∈ X).
- S ∈ S: a subset of X ({x_1, ..., x_m} = {x(p, q) : p} ∈ S); each subset corresponds to a fixed query q. Assume each subset has size m for convenience; m is large.
- y: quality grade of each x ∈ X (y(p, q)).
- scoring function f : X × S → R.
- ranking function r_f(S) = {j_i}: an ordering of S ∈ S based on the scoring function f.
- quality:

    DCG(f, S) = \sum_{i=1}^m c_i E_{y_{j_i} | (x_{j_i}, S)} y_{j_i}
Slide 7: Some Theoretical Questions
Learnability:
- the subset size m is huge: do we need many samples (rows) to learn?
- focusing quality on the top.
Learning method: regression? pair-wise learning? other methods?
Limited goal addressed here: can we learn ranking by using regression when m is large?
- massive data size (more than 20 billion pages).
- want to derive error bounds independent of m.
- what are some feasible algorithms and their statistical implications?
Slide 8: Bayes Optimal Scoring
Given a set S ∈ S, for each x_j ∈ S, define the Bayes scoring function as

    f_B(x_j, S) = E_{y_j | (x_j, S)} y_j

The optimal Bayes ranking function r_{f_B} that maximizes the DCG induced by f_B returns a rank list J = [j_1, ..., j_m] in descending order of f_B(x_{j_i}, S).
- not necessarily unique (depending on the c_i).
- the function is subset dependent: it requires appropriate result-set features.
Slide 9: Simple Regression
Given subsets S_i = {x_{i,1}, ..., x_{i,m}} and corresponding relevance scores {y_{i,1}, ..., y_{i,m}}, we can estimate f_B(x_j, S) using regression in a family F:

    \hat f = \arg\min_{f \in F} \sum_{i=1}^n \sum_{j=1}^m (f(x_{i,j}, S_i) - y_{i,j})^2

Problem: m is massive (> 20 billion):
- computationally inefficient.
- statistically slow convergence: the ranking error is bounded by O(\sqrt{m}) times the root-mean-squared error.
Solution: should emphasize estimation quality at the top.
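The simple-regression baseline can be sketched in a few lines: fit a scoring function by least squares on the pooled (feature, grade) pairs, then rank one query's items by descending score. The linear model and NumPy's `lstsq` solver are illustrative choices, not part of the original formulation.

```python
import numpy as np

def fit_and_rank(X_train, y_train, X_query):
    """Fit a linear scoring function f(x) = beta . x by plain least squares
    on pooled (feature, grade) pairs, then rank the items of one query
    subset by descending score.  Returns row indices of X_query, best
    first."""
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    scores = X_query @ beta
    return np.argsort(-scores)  # descending order of the fitted scores
```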
Slide 10: Importance Weighted Regression
Some samples are more important than other samples (focus on the top). A revised formulation:

    \hat f = \arg\min_{f \in F} \frac{1}{n} \sum_{i=1}^n L(f, S_i, \{y_{i,j}\}_j),

with

    L(f, S, \{y_j\}_j) = \sum_{j=1}^m w(x_j, S) (f(x_j, S) - y_j)^2 + u \sup_j w'(x_j, S) (f(x_j, S) - \delta(x_j, S))_+^2

- weight w: importance weighting focusing the regression error on the top; zero for irrelevant pages.
- weight w': large for irrelevant pages, for which f(x_j, S) should be less than the threshold \delta.
- importance weighting can be implemented through importance sampling.
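A minimal sketch of evaluating the importance-weighted loss L(f, S, {y_j}_j) for one subset, given precomputed scores, grades, weights w and w', and thresholds δ; the vectorized NumPy form is an illustrative choice.

```python
import numpy as np

def weighted_ranking_loss(scores, grades, w, w_prime, delta, u):
    """Importance-weighted loss for one subset S:
        sum_j w_j (f_j - y_j)^2  +  u * sup_j w'_j (f_j - delta_j)_+^2
    where (t)_+ = max(t, 0).  All arguments are per-item NumPy arrays,
    except the scalar trade-off u."""
    square_term = np.sum(w * (scores - grades) ** 2)
    hinge_sq = np.maximum(scores - delta, 0.0) ** 2  # penalize f above delta
    return float(square_term + u * np.max(w_prime * hinge_sq))
```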
Slide 11: Relationship of Regression and Ranking
Let Q(f) = E_S L(f, S), where

    L(f, S) = E_{\{y_j\}_j | S} L(f, S, \{y_j\}_j)
            = \sum_{j=1}^m w(x_j, S) E_{y_j | (x_j, S)} (f(x_j, S) - y_j)^2 + u \sup_j w'(x_j, S) (f(x_j, S) - \delta(x_j, S))_+^2

Theorem 1. Assume that c_i = 0 for all i > k. Under appropriate parameter choices, with some constants u and \gamma, for all f:

    DCG(r_B) - DCG(r_f) \le C(\gamma, u) (Q(f) - \inf_{f'} Q(f'))^{1/2}

Key point: focus on the relevant documents at the top; \sum_j w(x_j, S) is much smaller than m.
Slide 12: Generalization Performance with Square Regularization
Consider scoring f_{\hat\beta}(x, S) = \hat\beta^T \psi(x, S), with feature vector \psi(x, S):

    \hat\beta = \arg\min_{\beta \in H} \left[ \frac{1}{n} \sum_{i=1}^n L(\beta, S_i, \{y_{i,j}\}_j) + \lambda \beta^T \beta \right],    (1)

    L(\beta, S, \{y_j\}_j) = \sum_{j=1}^m w(x_j, S) (f_\beta(x_j, S) - y_j)^2 + u \sup_j w'(x_j, S) (f_\beta(x_j, S) - \delta(x_j, S))_+^2

Theorem 2. Let M = \sup_{x,S} \|\psi(x, S)\|_2 and W = \sup_S [\sum_{x_j \in S} w(x_j, S) + u \sup_{x_j \in S} w'(x_j, S)]. Let f_{\hat\beta} be the estimator defined in (1). Then we have

    DCG(r_B) - E_{\{S_i, \{y_{i,j}\}_j\}_{i=1}^n} DCG(r_{f_{\hat\beta}}) \le C(\gamma, u) \left[ \left(1 + \frac{W M^2}{2 \lambda n}\right) \inf_{\beta \in H} (Q(f_\beta) + \lambda \beta^T \beta) - \inf_f Q(f) \right]^{1/2}
Slide 13: Interpretation of Results
The result does not depend on m, but on the much smaller quantity

    W = \sup_S [\sum_{x_j \in S} w(x_j, S) + u \sup_{x_j \in S} w'(x_j, S)]

- emphasizes relevant samples at the top: w is small for irrelevant documents.
- a refined analysis can replace the sup over S by some p-norm over S.
Can control generalization for the top portion of the rank list even with large m:
- the learning complexity does not depend on the majority of items near the bottom of the rank list.
- the bottom items are usually easy to estimate.
Slide 14: Key Points
- Ranking quality near the top is most important: statistical analysis to deal with this scenario.
- Regression-based algorithm to handle a large search space: importance weighting of the regression terms; error bounds independent of the massive web size.
Slide 15: Statistical Translation and Algorithm Challenge
Problem: conditioned on a source sentence in one language, generate a target sentence in another language.
General approach:
- scoring function: measures the quality of a translated sentence based on the source sentence (similar to web search).
- search strategy: effectively generate target-sentence candidates; search for the optimal score; structure used in the scoring function (through a block model).
Main challenge: exponential growth of the search space.
Slide 16: Graphical illustration
[Figure: block alignment b_1, ..., b_4 between a romanized Arabic source sentence and the English translation "Israeli warplanes violate Lebanese airspace".]
Slide 17: Block Sequence Decoder
Database: a set of possible translation blocks, e.g. block "a b" translates into potential block "z x y".
Scoring function:
- a candidate translation is a block sequence (b_1, ..., b_n).
- map each sequence of blocks to a non-negative score:

    s_w(b_1^n) = \sum_{i=1}^n w^T F(b_{i-1}, b_i; o)

Input: a source sentence. Translation: generate block sequences consistent with the source sentence; find the sequence with the largest score.
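The block-sequence score can be sketched as a sum of linear feature scores over adjacent block pairs. Here `feature_fn` is a hypothetical stand-in for the decoder's feature map F(b_{i-1}, b_i; o), which is not specified on the slide.

```python
import numpy as np

def sequence_score(w, feature_fn, blocks):
    """Score a block sequence as on the slide:
        s_w(b_1^n) = sum_i w^T F(b_{i-1}, b_i).
    `feature_fn(prev, cur)` is a stand-in for F(b_{i-1}, b_i; o); the first
    block has no predecessor, so it receives prev=None."""
    score, prev = 0.0, None
    for b in blocks:
        score += float(np.dot(w, feature_fn(prev, b)))
        prev = b
    return score
```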
Slide 18: Decoder Training
Given source/target sentence pairs {(S_i, T_i)}_{i=1,...,n} and a decoding scheme implementing

    \hat z(w, S) = \arg\max_{z \in V(S)} s_w(z)

find a parameter vector w such that, on the training data, \hat z(w, S_i) achieves a high BLEU score on average.
Key difficulty: large search space (similar issues as in MLR).
Approaches: traditional tuning; our methods: local training and global training.
Slide 19: Traditional scoring function
A bunch of local statistics gathered from the training data, and beyond:
- different language models of the target, P(T_i | T_{i-1}, T_{i-2}, ...); these can depend on statistics beyond the training data (Google benefited significantly from huge language models).
- orientation (swap) frequency.
- block frequency.
- block quality score, e.g. a normalized in-block unigram translation probability: for (S, T) = ({s_1, ..., s_p}, {t_1, ..., t_q}),

    P(S | T) = \prod_j \frac{1}{n_j} \sum_i p(s_j | t_i)

A linear combination of log-frequencies (five or six features):

    s_w(b_1^n) = \sum_i \sum_j w_j \log f_j(b_i; b_{i-1}, ...)

Tuning: hand tuning; gradient-descent adjustment to optimize the BLEU score.
Slide 20: Large scale training
Motivation:
- want: the ability to incorporate a large number of features.
- require: an automated training method to optimize millions of parameters.
- similar to the transition from Inktomi rank-function tuning to MLR.
Challenges: the search space V(S) is too large; it needs to be broken down.
Direct global training using the relevant-set model: treats the decoder as a black box.
Slide 21: Global training of decoder parameter
Treat the decoder as a black box implementing

    \hat z(w, S) = \arg\max_{z \in V(S)} s_w(z)

There is no need to know V(S).
- Generate the truth set as the block sequences with the K largest BLEU scores: V_K(S).
- Generate alternatives as a subset of V(S) that is most relevant: if w does well on the relevant alternatives, then it does well on the whole set V(S).
- Related to the active sampling procedure in MLR; some related work in parsing exists.
Slide 22: Learning method
Minimize the following regularized empirical risk:

    \hat w = \arg\min_w \left[ \frac{1}{N} \sum_{i=1}^N \Phi(w, V_K(S_i), V(S_i)) + \lambda \|w\|^2 \right]

    \Phi(w, V_K, V) = \frac{1}{K} \sum_{z \in V_K} \max_{z' \in V \setminus V_K} \psi(w, z, z'),

    \psi(w, z, z') = \phi(s_w(z), Bl(z); s_w(z'), Bl(z'))

Relevant set: let \xi_i(w, z) = \max_{z' \in V(S_i) \setminus V_K(S_i)} \psi(w, z, z'); then

    V^{(r)}(\hat w, S_i) = \{z' \in V(S_i) : \exists z \in V_K(S_i), \xi_i(\hat w, z) \ge 0 \text{ and } \psi(\hat w, z, z') = \xi_i(\hat w, z)\}

Key observation: V can be replaced by V^{(r)} without changing the solution.
Slide 23: Example loss functions
Truth z ∈ V_K, alternative z' ∈ V \ V_K (or in V^{(r)}). When Bl(z) > Bl(z'), we want to penalize the case s_w(z) ≤ s_w(z').
Estimate s_w(z) using least squares (consistent):

    \phi(s_w(z), Bl(z); s_w(z'), Bl(z')) = \alpha (s_w(z) - Bl(z))^2 + \alpha' (s_w(z') - Bl(z'))^2

This does not do well in our experiments. Possible reasons: we did not implement it correctly (re-weighting); s_w(z) cannot approximate Bl(z) very well.
Estimate s_w(z) using a pair-wise loss (inconsistent):

    \phi(s_w(z), Bl(z); s_w(z'), Bl(z')) = (Bl(z) - Bl(z')) \max(1 - (s_w(z) - s_w(z')), 0)^2
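The pair-wise squared-hinge loss above translates directly into code; the function below takes the scores and BLEU values of a truth/alternative pair.

```python
def pairwise_loss(s_true, bleu_true, s_alt, bleu_alt):
    """Pair-wise squared-hinge loss from the slide:
        (Bl(z) - Bl(z')) * max(1 - (s_w(z) - s_w(z')), 0)^2
    It is zero once the truth out-scores the alternative by a margin of 1,
    and grows with the BLEU gap between the pair."""
    margin = max(1.0 - (s_true - s_alt), 0.0)
    return (bleu_true - bleu_alt) * margin ** 2
```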
Slide 24: Approximate Relevant Set Method
Observation: the relevant set depends on w, and w can be calculated based on an approximate relevant set.
Iterate to find both (similar to the active learning procedure in MLR):
- fix V^{(r)}, update w.
- fix w, update V^{(r)}.
Slide 25: Table 1: Generic Relevant Set Algorithm

    divide the training points into m blocks J_1, ..., J_m
    initialize the weight vector w <- w_0
    for each data point S: initialize the truth V_K(S) and the alternative set V^{(r)}(S)
    for l = 1, ..., L
      for j = 1, ..., m
        for each S in J_j
          select relevant points {z'_k} for S                        (*)
          update V^{(r)}(S) <- V^{(r)}(S) ∪ {z'_k}
        update w by approximately solving                            (**)
          min_w (1/N) \sum_{i=1}^N \Phi(w, V_K(S_i), V^{(r)}(S_i)) + \lambda \|w\|^2
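A skeletal Python rendering of Table 1, with the block structure over training points flattened into a single loop. `decode_top_alt` (step (*)) and `update_w` (step (**)) are hypothetical stand-ins: in the real algorithm they are the decoder call and the approximate regularized minimization.

```python
def relevant_set_train(data, decode_top_alt, update_w, w0, n_outer=3):
    """Skeleton of the generic relevant-set algorithm (Table 1).

    `decode_top_alt(w, S)` stands in for step (*): return the currently
    highest-scoring alternative for S.  `update_w(w, relevant)` stands in
    for step (**): approximately re-fit w on the accumulated relevant
    sets (the truth sets V_K(S) are assumed to be folded into it)."""
    w = w0
    relevant = {S: set() for S in data}   # approximate V^(r)(S) per point
    for _ in range(n_outer):
        for S in data:
            relevant[S].add(decode_top_alt(w, S))  # (*) grow V^(r)(S)
        w = update_w(w, relevant)                  # (**) re-fit the weights
    return w, relevant
```

With a toy decoder whose best alternative changes with the current weights, each outer iteration adds one new alternative to the relevant set, mirroring the alternation described on the previous slide.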
Slide 26: Convergence analysis
Approximate relevant-set size: the relevant-set update rule (*) selects z'_k ∈ V(S) \ V_K(S) for each z_k ∈ V_K(S) (k = 1, ..., K) such that

    \psi(w, z_k, z'_k) = \max_{z' \in V(S) \setminus V_K(S)} \psi(w, z_k, z')

- convergence bound independent of the size of V(S).
- generalization bound independent of the size of V(S).
- real implementation uses the decoder output: z' = \arg\max_{z'' \in V(S)} s_w(z'').
The weight update rule (**) can be implemented using stochastic gradient descent.
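For a linear scoring function s_w(z) = w · f(z), one stochastic gradient step on the pair-wise squared-hinge loss (plus an optional square regularizer) can be sketched as follows; the learning rate and the feature vectors are illustrative, not from the slides.

```python
import numpy as np

def sgd_step(w, f_true, f_alt, bleu_true, bleu_alt, lr=0.1, lam=0.0):
    """One SGD step on the pair-wise squared-hinge loss
        (Bl(z) - Bl(z')) * max(1 - w.(f(z) - f(z')), 0)^2 + lam * ||w||^2
    for a linear score s_w(z) = w . f(z).  f_true / f_alt are the feature
    vectors of the truth and of the alternative."""
    diff = f_true - f_alt
    margin = 1.0 - float(np.dot(w, diff))
    grad = 2.0 * lam * w                  # gradient of the regularizer
    if margin > 0:                        # hinge active: push truth's score up
        grad = grad - 2.0 * (bleu_true - bleu_alt) * margin * diff
    return w - lr * grad
```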
Slide 27: Experiments (MT03 Arabic-English DARPA evaluation)
[Figure 1: BLEU score on test and training data as a function of the number of training iterations, for the models dj363 and dl363.]
Slide 28: Translation results (25 training iterations)
[Table: training and test BLEU scores for the models MON-PHR, MON, and SWAP; the numeric entries were lost in transcription.]
Using only simple binary features, without language models, etc. (never done before in SMT).
- MON-PHR: phrase-id-based features; MON/SWAP: plus internal word features, without and with swapping.
Context: many published grammar-based methods score in the .20s; the state of the art (with much additional engineering tuning): Google with a huge language model (around .50), IBM (around .45).
Slide 29: Remarks
A method to handle a large search space for an arbitrary decoding procedure.
Key: restrict the search-space dependency through a loss function of the form \max_{z \in V(S) \setminus A} \phi(z, \cdot):
- the max is achieved at a small relevant set (the non-relevant points can be ignored).
- use the decoder output to approximate the relevant set.
- automatic construction of the most relevant alternatives.
Classification Classification is similar to regression in that the goal is to use covariates to predict on outcome. We still have a vector of covariates X. However, the response is binary (or a few classes),
More informationToday. Calculus. Linear Regression. Lagrange Multipliers
Today Calculus Lagrange Multipliers Linear Regression 1 Optimization with constraints What if I want to constrain the parameters of the model. The mean is less than 10 Find the best likelihood, subject
More informationLecture 7 Introduction to Statistical Decision Theory
Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7
More informationCSE546 Machine Learning, Autumn 2016: Homework 1
CSE546 Machine Learning, Autumn 2016: Homework 1 Due: Friday, October 14 th, 5pm 0 Policies Writing: Please submit your HW as a typed pdf document (not handwritten). It is encouraged you latex all your
More informationCS 6375 Machine Learning
CS 6375 Machine Learning Nicholas Ruozzi University of Texas at Dallas Slides adapted from David Sontag and Vibhav Gogate Course Info. Instructor: Nicholas Ruozzi Office: ECSS 3.409 Office hours: Tues.
More informationCoordinate Descent and Ascent Methods
Coordinate Descent and Ascent Methods Julie Nutini Machine Learning Reading Group November 3 rd, 2015 1 / 22 Projected-Gradient Methods Motivation Rewrite non-smooth problem as smooth constrained problem:
More informationRegML 2018 Class 2 Tikhonov regularization and kernels
RegML 2018 Class 2 Tikhonov regularization and kernels Lorenzo Rosasco UNIGE-MIT-IIT June 17, 2018 Learning problem Problem For H {f f : X Y }, solve min E(f), f H dρ(x, y)l(f(x), y) given S n = (x i,
More informationWarm up: risk prediction with logistic regression
Warm up: risk prediction with logistic regression Boss gives you a bunch of data on loans defaulting or not: {(x i,y i )} n i= x i 2 R d, y i 2 {, } You model the data as: P (Y = y x, w) = + exp( yw T
More informationECS289: Scalable Machine Learning
ECS289: Scalable Machine Learning Cho-Jui Hsieh UC Davis Sept 29, 2016 Outline Convex vs Nonconvex Functions Coordinate Descent Gradient Descent Newton s method Stochastic Gradient Descent Numerical Optimization
More informationMachine Learning Basics: Stochastic Gradient Descent. Sargur N. Srihari
Machine Learning Basics: Stochastic Gradient Descent Sargur N. srihari@cedar.buffalo.edu 1 Topics 1. Learning Algorithms 2. Capacity, Overfitting and Underfitting 3. Hyperparameters and Validation Sets
More informationLinear Models for Regression CS534
Linear Models for Regression CS534 Example Regression Problems Predict housing price based on House size, lot size, Location, # of rooms Predict stock price based on Price history of the past month Predict
More informationLecture 11. Linear Soft Margin Support Vector Machines
CS142: Machine Learning Spring 2017 Lecture 11 Instructor: Pedro Felzenszwalb Scribes: Dan Xiang, Tyler Dae Devlin Linear Soft Margin Support Vector Machines We continue our discussion of linear soft margin
More informationSVAN 2016 Mini Course: Stochastic Convex Optimization Methods in Machine Learning
SVAN 2016 Mini Course: Stochastic Convex Optimization Methods in Machine Learning Mark Schmidt University of British Columbia, May 2016 www.cs.ubc.ca/~schmidtm/svan16 Some images from this lecture are
More informationMachine Learning for NLP
Machine Learning for NLP Linear Models Joakim Nivre Uppsala University Department of Linguistics and Philology Slides adapted from Ryan McDonald, Google Research Machine Learning for NLP 1(26) Outline
More informationLinear Classification. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington
Linear Classification CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 Example of Linear Classification Red points: patterns belonging
More informationMaximum Margin Matrix Factorization for Collaborative Ranking
Maximum Margin Matrix Factorization for Collaborative Ranking Joint work with Quoc Le, Alexandros Karatzoglou and Markus Weimer Alexander J. Smola sml.nicta.com.au Statistical Machine Learning Program
More informationLow-Dimensional Discriminative Reranking. Jagadeesh Jagarlamudi and Hal Daume III University of Maryland, College Park
Low-Dimensional Discriminative Reranking Jagadeesh Jagarlamudi and Hal Daume III University of Maryland, College Park Discriminative Reranking Useful for many NLP tasks Enables us to use arbitrary features
More informationFORMULATION OF THE LEARNING PROBLEM
FORMULTION OF THE LERNING PROBLEM MIM RGINSKY Now that we have seen an informal statement of the learning problem, as well as acquired some technical tools in the form of concentration inequalities, we
More informationEE 381V: Large Scale Optimization Fall Lecture 24 April 11
EE 381V: Large Scale Optimization Fall 2012 Lecture 24 April 11 Lecturer: Caramanis & Sanghavi Scribe: Tao Huang 24.1 Review In past classes, we studied the problem of sparsity. Sparsity problem is that
More informationDATA MINING LECTURE 13. Link Analysis Ranking PageRank -- Random walks HITS
DATA MINING LECTURE 3 Link Analysis Ranking PageRank -- Random walks HITS How to organize the web First try: Manually curated Web Directories How to organize the web Second try: Web Search Information
More information