Latent Geographic Feature Extraction from Social Media
1 Latent Geographic Feature Extraction from Social Media Christian Sengstock* Michael Gertz Database Systems Research Group Heidelberg University, Germany November 8, 2012
2 Social Media is a huge and increasing source of unstructured and uncertain geographic information. Efforts to make the data usable: (Structured) Information Extraction: place/event extraction from Flickr [Rattenbury SIGIR'07], event trajectory extraction from Twitter [Sakaki WWW'10]. Spatial Analysis: spatio-temporal forecasting using Flickr [Jin MM'12], studying ecological phenomena [Zhang WWW'12].
3 General Motivation: Extract spatial variables from unstructured and noisy geographic information sources (Flickr, Twitter, Wikipedia). (Figure: illustration of signals Φ(l) over a two-dimensional location space (l1, l2).) This work: a framework for unsupervised extraction of informative spatial variables (dimensions of geographic semantics) from Social Media.
4 Outline 1 Definitions and Problem Statement 2 Data Characteristics and Normalization 3 Latent Geographic Feature Extraction 4 Experiments 5 Conclusions
5 Outline 1 Definitions and Problem Statement: Geographic Feature Signal Estimation, Problem Statement 2 Data Characteristics and Normalization: Distribution Characteristics, Normalization, Geographic Feature Types 3 Latent Geographic Feature Extraction: Dimensionality Reduction, Framework 4 Experiments: Technique Comparison, Normalization Influence, Exploration Task 5 Conclusions
6 Terminology Geographic Feature f: a dimension representing some semantics of a location (e.g., temperature, population, number of restaurants); sampled (measured) at any location l in geographic space W (a spatial variable). Geographic Feature Sensor φ and Signal φ(l) of f: φ : W → R+, l ↦ φ(l)
7 Terminology A set of geographic features f_1, ..., f_p defines a multivariate Geographic Feature Sensor Φ := (φ_1, ..., φ_p)^T. A spatial sampling scheme (measurements) L = (l_1, ..., l_n) defines a Location Sampling Matrix X_{n×p} = (Φ(l_1), ..., Φ(l_n))^T, i.e., [X]_{ij} = φ_j(l_i).
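To make the sampling-matrix construction concrete, here is a minimal sketch (my own illustration, not the authors' code) that builds X from a list of sensors and sampled locations; the sensor functions and locations are hypothetical.

```python
import numpy as np

# Hypothetical feature sensors: each maps a location (x, y) to a non-negative signal value.
def phi_beach(loc):
    x, y = loc
    return max(0.0, 5.0 - abs(y))   # stronger near y = 0 (a "coast")

def phi_mountain(loc):
    x, y = loc
    return max(0.0, y - 2.0)        # stronger for large y

sensors = [phi_beach, phi_mountain]                 # Phi = (phi_1, ..., phi_p)^T
locations = [(0.0, 0.5), (1.0, 3.0), (2.0, 4.5)]    # sampling scheme L = (l_1, ..., l_n)

# Location sampling matrix: X[i, j] = phi_j(l_i), shape n x p.
X = np.array([[phi(l) for phi in sensors] for l in locations])
print(X)
```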
8 Terminology A Social Media Collection D consists of documents d_i = (X, u, l, t), where X is a bag of document features (terms, tags, image features, ...), u the user, l the location, and t the timestamp. Assumption: features with geographic meaning aggregate in subsets of geographic space → high signal.
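As a concrete (hypothetical) data structure, a document d_i = (X, u, l, t) could be represented as below; the field names are illustrative, not from the talk, and the feature bag is simplified to a set of tags.

```python
from dataclasses import dataclass

@dataclass
class Document:
    features: set[str]              # X: bag of document features (terms, tags, ...)
    user: str                       # u: user id
    location: tuple[float, float]   # l: e.g. (longitude, latitude)
    timestamp: float                # t: e.g. a Unix timestamp

d = Document({"beach", "sunset"}, "user_42", (-118.49, 34.01), 1352332800.0)
```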
9 Signal Estimation Every document feature f_1, ..., f_p is a possibly meaningful or meaningless geographic feature. Intuition of the geographic feature signal φ_i(l): the number of users using feature f_i around location l ∈ W (motivated in the next section). Estimation of φ_i by a non-parametric 2D-histogram estimator on a regular grid C of bandwidth w: small w captures small-scale variation/phenomena, large w captures large-scale variation/phenomena.
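A minimal sketch of such a user-based grid estimator (my own illustration, not the authors' implementation): for each feature, count the number of distinct users whose documents fall into each cell of a regular grid with bandwidth w.

```python
import numpy as np
from collections import defaultdict

def estimate_signals(docs, bbox, w):
    """docs: iterable of (features, user, (x, y)) tuples; bbox: (xmin, ymin, xmax, ymax);
    w: grid bandwidth (cell size). Returns {feature: 2D array of distinct-user counts}."""
    xmin, ymin, xmax, ymax = bbox
    nx = int(np.ceil((xmax - xmin) / w))
    ny = int(np.ceil((ymax - ymin) / w))
    users_per_cell = defaultdict(set)                 # (feature, i, j) -> set of users
    for features, user, (x, y) in docs:
        i = min(int((x - xmin) / w), nx - 1)
        j = min(int((y - ymin) / w), ny - 1)
        for f in features:
            users_per_cell[(f, i, j)].add(user)
    signals = defaultdict(lambda: np.zeros((nx, ny)))
    for (f, i, j), users in users_per_cell.items():
        signals[f][i, j] = len(users)                 # distinct users, not documents
    return dict(signals)

docs = [({"beach"}, "u1", (0.2, 0.1)), ({"beach"}, "u2", (0.3, 0.2)),
        ({"desert"}, "u1", (3.5, 2.8))]
print(estimate_signals(docs, bbox=(0, 0, 4, 4), w=1.0)["beach"])
```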
10 Problem Statement Problem: Given a high-dimensional geographic feature signal Φ from a Social Media collection (all terms/tags); features might be meaningless, redundant, or noisy. Goal: Unsupervised extraction of a small number of informative geographic features. Applications: prepare data for learning tasks that cannot handle high-dimensional data; discover hidden spatial variables in the data.
12 Dataset Two Flickr datasets covering the US and LA. Document features: tags (pre-filtered by minimum user frequency). Spatial resolution: US (1.0 degree), LA (0.01 degree).
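The pre-filtering step could look roughly like this sketch (assumed logic, not the paper's code): keep only tags used by at least `min_users` distinct users.

```python
from collections import defaultdict

def filter_tags(docs, min_users=10):
    """docs: iterable of (tags, user, location) tuples. Returns the set of tags
    used by at least `min_users` distinct users."""
    users_per_tag = defaultdict(set)
    for tags, user, _loc in docs:
        for tag in tags:
            users_per_tag[tag].add(user)
    return {t for t, users in users_per_tag.items() if len(users) >= min_users}
```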
13 Spatial Distribution Characteristics Figure: F(l): number of features, D(l): number of documents, U(l): number of users, Fd(l): number of distinct features. Exponential characteristics of the spatial feature distribution; users ~ distinct features / documents ~ features.
14 Spatial Feature Distribution: 'beach' Figure: F(l, f): number of occurrences of feature f = 'beach', U(l, f): number of users using f = 'beach'. Some users contribute a large number of documents; estimating the signal on the basis of users is less biased (more robust).
15 Normalization Exponential distribution characteristics: few locations dominate the signals' spatial distribution. Normalization transforms the signal into a more natural domain. Logging: φ'_i(l) := log(φ_i(l) + 1). Binarization: φ'_i(l) := 1{φ_i(l) > 0}.
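In code, the two normalizations amount to element-wise transforms of the sampling matrix X (a sketch; the log(x + 1) form is assumed so that zero counts stay defined):

```python
import numpy as np

def normalize(X, strategy="log"):
    """X: n x p location sampling matrix of non-negative signal values."""
    if strategy == "log":
        return np.log(X + 1.0)           # logging: compresses the heavy tail
    if strategy == "binary":
        return (X > 0).astype(float)     # binarization: presence/absence only
    return X                             # no normalization

X = np.array([[0.0, 120.0], [3.0, 0.0]])
print(normalize(X, "log"))
print(normalize(X, "binary"))
```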
16 Geographic Feature Types Geographic Feature Types: classes of geographic features with similar geographic semantics [Sengstock ACMGIS'11]. Global: same intensity as the baseline distribution (number of users); not interesting to discriminate between locations. Regional: widely spread in geographic space but different from the baseline; interesting to discriminate between large subsets of geographic space. Landmark: occurring only in small subsets of geographic space; interesting to discriminate between a single small subset and the rest. Depends on the area of interest W and the spatial resolution w.
17 Geographic Feature Types Entropy over locations of the spatial signal X_i as a geographic feature type statistic for f_i: large entropy → signal widely spread / smoothly distributed; small entropy → signal peaky / occurring in small areas. Figure: ordered entropies H[X_i] for tag features of the US Flickr dataset.
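A sketch of the entropy statistic (my own illustration): normalize a feature's signal over locations to a probability distribution and compute its Shannon entropy; small values indicate landmark-like features, large values global/regional-like features.

```python
import numpy as np

def signal_entropy(x, eps=1e-12):
    """x: 1D array of non-negative signal values of one feature over all grid cells."""
    p = x / max(x.sum(), eps)                # normalize to a distribution over locations
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())     # Shannon entropy in nats

peaky = np.array([0.0, 0.0, 9.0, 1.0])       # landmark-like: small entropy
spread = np.array([2.0, 3.0, 2.0, 3.0])      # regional/global-like: large entropy
print(signal_entropy(peaky), signal_entropy(spread))
```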
19 Dimensionality Reduction Describe high-dimensional data by k << p dimensions while preserving statistical properties of the data. General formulation (generative latent factor model): X_{n×p} = S_{n×k} A_{k×p}, where S holds the latent factor (component) values for each record and A describes each latent factor as a combination of the original features. Latent Geographic Feature Sensor/Signal: Φ̃(l)_{k×1} := A_{k×p} Φ(l)_{p×1}.
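As a sketch of the generic factor model (using a plain truncated SVD as one stand-in, not any specific technique from the talk), X is factorized into S and A, and the latent signal at a location is obtained by projecting Φ(l) with A; the data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 50, 5
X = rng.poisson(1.0, size=(n, p)).astype(float)    # toy location sampling matrix

# Truncated SVD as one instance of X ≈ S A with S: n x k, A: k x p.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
S = U[:, :k] * s[:k]        # latent factor values per sampled location
A = Vt[:k, :]               # latent factors expressed over the original features

phi_l = rng.poisson(1.0, size=p).astype(float)     # signal vector Φ(l) at a new location
latent_signal = A @ phi_l                          # Φ̃(l) = A Φ(l), length k
print(latent_signal)
```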
20 Dimensionality Reduction Spatial distributions in the data reflect spatial phenomena; the statistical structure in X depends on the spatial co-occurrence of features. Reducing the location sampling matrix X: latent factors represent dominant spatial distributions (correlated features collapse, non-dominant features diminish); latent factors describe distinct spatial distributions; latent geographic features describe signals of dominant and distinct geographic phenomena.
21 Framework
23 Technique Comparison Study of dimensionality reduction techniques preserving different statistical properties. Principal Component Analysis (PCA): components are statistically uncorrelated. Sparse Principal Component Analysis (SPCA): components are statistically uncorrelated and sparse (α << p non-zero entries). Independent Component Analysis (ICA): components are statistically independent.
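A sketch of how the three techniques could be run side by side with scikit-learn (assuming a precomputed, normalized sampling matrix X; data and parameters here are illustrative, not the paper's setup):

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA, FastICA

rng = np.random.default_rng(1)
X = np.log(rng.poisson(1.0, size=(300, 80)) + 1.0)   # toy normalized sampling matrix
k = 20

models = {
    "PCA": PCA(n_components=k),                       # uncorrelated components
    "SPCA": SparsePCA(n_components=k),                # uncorrelated, sparse loadings
    "ICA": FastICA(n_components=k, max_iter=1000),    # statistically independent components
}
for name, model in models.items():
    S = model.fit_transform(X)       # latent factor values, n x k
    A = model.components_            # loadings over the original features, k x p
    print(name, S.shape, A.shape)
```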
24 Technique Comparison Qualitative evaluation: (a) extraction of k = 20 latent geographic features, (b) manual labeling of extracted features on the basis of component weights and spatial signal distribution, (c) identification of 'informative' latent features, (d) selection of similar latent features of the other techniques on the basis of highest component weights, (e) comparison of component weights and signal distributions.
25 Technique Comparison SPCA features: 'landscape' (top), 'beach' (center), 'desert' (bottom).
26 Technique Comparison Feature 'beach': PCA (top), ICA (center), SPCA (bottom).
27 Normalization Influence Extracted latent geographic features can be of different types (global, landmark, regional). Calculation of entropy over k = 20 extracted features for each technique and each normalization strategy (none, logging, binarization). SPCA and ICA shift towards more regional features under stronger normalization.
28 Normalization Influence Normalization: none (top), logging (center), binarization (bottom).
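The normalization study above could be reproduced roughly as below (a sketch under the same assumptions as before, with PCA as a stand-in technique): extract k components per normalization strategy, treat each component's signal magnitude over locations as a distribution, and compare the entropies.

```python
import numpy as np
from sklearn.decomposition import PCA

def component_entropies(X, k=20):
    """Entropy of each extracted component's signal over locations (PCA as a stand-in)."""
    S = PCA(n_components=k).fit_transform(X)
    entropies = []
    for j in range(S.shape[1]):
        m = np.abs(S[:, j])              # magnitude of the latent signal per location
        p = m / m.sum()
        p = p[p > 0]
        entropies.append(float(-(p * np.log(p)).sum()))
    return np.array(entropies)

rng = np.random.default_rng(2)
raw = rng.poisson(1.0, size=(300, 80)).astype(float)
for name, Xn in [("none", raw), ("log", np.log(raw + 1)), ("binary", (raw > 0).astype(float))]:
    print(name, component_entropies(Xn).mean().round(3))
```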
29 Exploration Task Extraction of informative geographic features for Los Angeles using Flickr. Exploration setting: SPCA feature extraction; normalization as the exploration parameter.
30 Exploration Task Los Angeles Landmark Features
31 Exploration Task Los Angeles Regional Features
33 Conclusions Summary: a general framework for unsupervised extraction of informative spatial variables (dimensions of geographic semantics) from Social Media (PCA, ICA, SPCA); transformation of informative geographic feature extraction into a problem of high-dimensional statistics in geographic feature space; extraction of spatial variables of different types (landmark, regional) by normalization. Ongoing Work: extrinsic evaluation of parametrization and techniques (e.g., a spatial classification task); (semi-)supervised feature extraction.