SKA machine learning perspectives for imaging, processing and analysis


1 SKA machine learning perspectives for imaging, processing and analysis. Slava Voloshynovskiy, Stochastic Information Processing Group, University of Geneva, Switzerland. With contributions from D. Kostadinov, S. Ferdowsi, M. Diephuis, O. Taran and T. Holotyak. Swiss SKA Day, EPFL.

2 Outline. Machine learning challenges in SKA. Challenge 1: imaging-reconstruction. Challenge 2: data compression for transfer and storage. Challenge 3: analytics and processing of big data. Conclusions.

3 Remark. This is a compilation of expertise from the Computer Science department, the Section of Mathematics and the Observatory.

4 Machine learning realities and SKA. New perspectives for machine-learning-based image processing arise due to: the large amount of collected observations (training data), new powerful computational facilities, modern phased antenna arrays, and optimisation algorithms.

5 Main SKA challenges. Challenge 1: Imaging-reconstruction. Huge amount of computation for pair-wise correlations, calibration and reconstruction. Challenge 2: Data transfer and storage. Data transfer from correlators to reconstruction servers, data centers, the SDP and end users. Challenge 3: Analytics. Automatic processing of produced data (recognition, mining, search, tracking, ...).

6 Challenge 1: Imaging, generic approach. [Figure: restoration pipeline with prior p(x) on the image, plus priors on H and on the noise z.] Observation model: y = Hx + z, with likelihood p(y|x). MAP restoration: \hat{x}_{MAP} = \arg\max_x p(y|x)\,p(x). Main issue: how to model p(x) to obtain an accurate, tractable and low-complexity solution? A minimal numerical sketch follows.
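To make the MAP formulation concrete, here is a minimal sketch under the simplest assumptions: i.i.d. Gaussian noise and a Gaussian prior on x, for which the MAP estimate has a closed form. H, the dimensions and the noise levels are illustrative toys, not SKA quantities.

```python
# A minimal sketch of MAP restoration for y = Hx + z, assuming (hypothetically)
# i.i.d. Gaussian noise z ~ N(0, sigma^2 I) and a Gaussian prior x ~ N(0, tau^2 I).
# Under these assumptions the MAP estimate has the closed form
#   x_hat = (H^T H + lam I)^{-1} H^T y,  with lam = sigma^2 / tau^2.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 48                      # signal and measurement dimensions (toy sizes)
H = rng.standard_normal((m, n))    # stand-in for the instrument response
x = rng.standard_normal(n)         # unknown image (vectorized)
sigma, tau = 0.1, 1.0
y = H @ x + sigma * rng.standard_normal(m)

lam = (sigma / tau) ** 2
x_map = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
print("relative error:", np.linalg.norm(x_map - x) / np.linalg.norm(x))
```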

7 Challenge 1: Imaging, hand-crafted approach. Traditional approaches to the definition of p(x) are statistical/deterministic.

Direct domain (smoothness of the solution, local correlations, ...): \hat{x} = \arg\min_x \|y - Hx\|^2 + \lambda \Omega(x), with \Omega(x) = -\ln p(x).

Transform domain (decorrelation, energy compaction, directivity, ...): \hat{a} = \arg\min_a \|y - HDa\|^2 + \lambda \Omega(a), \hat{x} = D\hat{a}, where D is fixed and signal-independent (DCT, DWT, ...) and \Omega(a) = -\ln p(a) for i.i.d. GGD, i.i.d. Student, i.i.d. mixture-of-Gaussians priors, etc.

Sparsity-based approach: \hat{a} = \arg\min_a \|y - H\Psi a\|^2 + \lambda \Omega(a), \hat{x} = \Psi\hat{a}, where \Psi is overcomplete and can be learned. With \Omega(a) = \|a\|_0 the problem is nonconvex and NP-hard, motivating relaxation (\Omega(a) = \|a\|_1) or greedy approaches. Complex optimization tools (proximal algorithms, primal-dual methods, augmented Lagrangian, ...) lead to parallel and distributed solutions; see the sketch below.
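A minimal sketch of the l1-relaxed sparsity approach, solved with ISTA as one representative of the proximal algorithms listed above; Psi, lam and the problem sizes are illustrative choices, not those of the talk.

```python
# Solve min_a 0.5 * ||y - Psi a||^2 + lam * ||a||_1 with ISTA
# (iterative soft-thresholding, a basic proximal-gradient method).
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, A, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + step * A.T @ (y - A @ a), step * lam)
    return a

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5
Psi = rng.standard_normal((m, n)) / np.sqrt(m)   # overcomplete operator (H absorbed)
a_true = np.zeros(n)
a_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Psi @ a_true + 0.01 * rng.standard_normal(m)

a_hat = ista(y, Psi, lam=0.01)
print("recovered support:", np.nonzero(np.abs(a_hat) > 1e-3)[0])
```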

8 Challenge 1: Imaging, machine learning approach. Physical model x(\theta_1, \theta_2, \ldots, \theta_L). Given: a lot of training data. Learn: a statistical model p(x). The physical phenomenon x is observed in various imaging configurations \{x(\lambda_1)\}, \ldots, \{x(\lambda_6)\}: radio waves, microwaves, infrared, visible, ultraviolet, X-ray. Training data come from ALMA, EVLA, LOFAR, VLBI, ..., SKA, plus simulation tools (Faraday, ASKAP, CASA, ...).

9 Challenge 1: Imaging as a learning problem. Given: training data (y_j, x_j), 1 \le j \le K. Learn a mapper \varphi(\cdot) from y to x: the reconstruction algorithm is the mapper. [Figure: mapper \varphi(\cdot) with intermediate representation a, regularizers \Omega(a) and \Omega_{ab}(a, b), transcoding step, and decoder D_d.]

10 Challenge 1: Imaging, learning as encoding-decoding. Given: training data (y_j, x_j), 1 \le j \le K. Split the mapper in two parts: an encoder that maps y_j to an intermediate representation, and a decoder D_d that maps it to x_j.

11 Challenge 1: Imaging, learning as encoding-decoding. Given: training data (y_j, x_j), 1 \le j \le K, with intermediate representations a and b linked by a transcoding step and regularizers \Omega(a) and \Omega_{ab}(a, b).

Encoder-decoder training:
(\hat{D}_e, \hat{D}_d) = \arg\min_{D_e, D_d} \sum_{j=1}^{K} \|a_j - D_e y_j\|^2 + \lambda_1 \Omega(a_j) + \lambda_2 \Omega_{ab}(a_j, b_j) + \lambda_3 \|x_j - D_d b_j\|^2 + \Upsilon(D_e, D_d).

Encoder (projection problem): \hat{a} = \arg\min_a \|a - D_e y\|^2 + \gamma_1 \Omega(a) + \gamma_2 \Omega_{ab}(a, \hat{b}), i.e. \hat{a} = \psi(D_e y, \hat{b}).

Decoder (reconstruction problem): \hat{b} = \arg\min_b \|y - D_d b\|^2 + \lambda \Omega_{ab}(\hat{a}, b), followed by \hat{x} = D_d \hat{b}. A toy sketch of this alternating training follows.
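The following toy sketch illustrates the alternating flavour of such encoder-decoder training. It collapses the two-stage code (a, b) into a single sparse code with an l1 penalty for Omega and uses closed-form least-squares updates, so it is a simplified stand-in for the slide's objective, not the authors' algorithm.

```python
# Alternate between (1) computing sparse codes from the encoder output and
# (2) refitting linear encoder/decoder matrices by least squares.
import numpy as np

rng = np.random.default_rng(2)
n, m, d, K = 32, 24, 48, 200          # image dim, observation dim, code dim, #pairs
H = rng.standard_normal((m, n))
X = rng.standard_normal((n, K))        # training images x_j (columns)
Y = H @ X + 0.01 * rng.standard_normal((m, K))   # observations y_j

lam = 0.1
D_e = 0.1 * rng.standard_normal((d, m))   # encoder
D_d = 0.1 * rng.standard_normal((n, d))   # decoder

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(30):
    # Code step: a_j = argmin_a ||a - D_e y_j||^2 + lam ||a||_1
    # (decoder coupling dropped here to keep the step closed-form).
    A = soft(D_e @ Y, lam / 2)
    # Encoder step: least-squares fit of D_e mapping y_j -> a_j.
    D_e = A @ Y.T @ np.linalg.inv(Y @ Y.T + 1e-6 * np.eye(m))
    # Decoder step: least-squares fit of D_d mapping a_j -> x_j.
    D_d = X @ A.T @ np.linalg.inv(A @ A.T + 1e-6 * np.eye(d))

X_hat = D_d @ soft(D_e @ Y, lam / 2)
print("train reconstruction error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```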

12 Challenge 1: Imaging, learning optimal imaging configurations. Objective: joint optimization of reconstruction and imaging (CS random sampling). The imaging operator H maps x_j to y_j, followed by the encoder D_e, the intermediate representations a and b with regularizers \Omega(a) and \Omega_{ab}(a, b), the transcoding step, and the decoder D_d.

Imaging-system-encoder-decoder training:
(\hat{H}, \hat{D}_e, \hat{D}_d) = \arg\min_{H, D_e, D_d} \sum_{j=1}^{K} \|a_j - D_e H x_j\|^2 + \lambda_1 \Omega(a_j) + \lambda_2 \Omega_{ab}(a_j, b_j) + \lambda_3 \|x_j - D_d b_j\|^2 + \Upsilon(D_e, D_d) + \Phi(H),

where \Phi(H) imposes constraints on the number and possible geometry of the synthesized arrays; a toy sketch of such a constrained configuration search follows.
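As a toy illustration of optimizing the imaging configuration under a Phi(H)-style constraint, the sketch below greedily selects a budget-limited subset of measurement rows (a crude stand-in for synthesized-array number and geometry) so as to minimize the training reconstruction error of a fixed linear reconstructor; all names and sizes here are hypothetical.

```python
# Greedy selection of measurement rows under a budget constraint,
# scoring each candidate by ridge-regularized reconstruction error.
import numpy as np

rng = np.random.default_rng(7)
n, m_full, budget, K = 32, 64, 12, 100
H_full = rng.standard_normal((m_full, n))   # all candidate "baselines"
X = rng.standard_normal((n, K))             # training images

def recon_error(rows):
    # Minimum-norm (ridge) reconstruction x_hat = H^T (H H^T + eps I)^{-1} H x.
    H = H_full[rows]
    R = np.linalg.solve(H @ H.T + 1e-3 * np.eye(len(rows)), H @ X)
    return np.linalg.norm(X - H.T @ R) / np.linalg.norm(X)

selected = []
for _ in range(budget):
    remaining = [r for r in range(m_full) if r not in selected]
    best = min(remaining, key=lambda r: recon_error(selected + [r]))
    selected.append(best)

print("selected rows:", sorted(selected))
print("training reconstruction error:", recon_error(selected))
```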

13 Challenge 1: Imaging, learning for adaptive imaging. Objective: minimize the load on the correlators via adaptive, light-weight imaging. [Figure: a trained estimation configuration \tilde{H}_i produces observations y_i, from which the dominant components \tilde{x} of x are estimated; an adaptive configuration \tilde{H}_j then produces y_j for the final reconstruction \hat{x}.]

14 Challenge 2: Compression for transfer, storage and distribution. Traditional approach to compression: restore \hat{x} from y = Hx + z, then compress with JPEG/JPEG2K, which is generic and image-independent. Alternative: compressive sensing and quantized compressive sensing, where the observations themselves are quantized (even to several bits): y = Q(Hx + z), and reconstruction \hat{x} = \arg\min_x \|y - Q(Hx)\|^2 + \lambda \Omega(x) is an inverse problem; a toy sketch follows.
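A minimal sketch of quantized compressive sensing: observations are quantized to a few bits and reconstruction treats the quantization error as extra noise inside an l1-regularized inverse problem. This dequantize-then-ISTA baseline is only an illustration of the inverse-problem view, not the talk's reconstruction method.

```python
# Reconstruct a sparse x from few-bit quantized measurements y = Q(Hx + z).
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
n, m, k = 128, 64, 6
H = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

bits, half_range = 3, 1.0                     # 3-bit uniform quantizer on [-1, 1]
delta = 2 * half_range / (2 ** bits)
y = H @ x + 0.01 * rng.standard_normal(m)
y_q = delta * (np.floor(y / delta) + 0.5)     # quantized observations Q(Hx + z)

# ISTA on min_x 0.5 * ||y_q - Hx||^2 + lam * ||x||_1, absorbing quantization
# error into the noise term.
step = 1.0 / np.linalg.norm(H, 2) ** 2
x_hat, lam = np.zeros(n), 0.02
for _ in range(500):
    x_hat = soft(x_hat + step * H.T @ (y_q - H @ x_hat), step * lam)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```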

15 Challenge 2: Compression, machine learning approach. Option 1: replace generic JPEG/JPEG2K codecs by special algorithms trained on RI images; here we suppose that the image is already reconstructed and the problem is how to deliver it to the end users. Option 2: a sparse code representation between the encoder and the decoder; the encoder-decoder pairs used in reconstruction are trained with an entropy-constrained sparse code, and efficient coding is based on structured codebooks versus random ones. [Figure: encoder D_e, intermediate representations a and b with regularizers \Omega(a) and \Omega_{ab}(a, b), transcoding step, decoder D_d.]

16 Challenge 2: Compression, information-theoretic approach. Codebook training (as in Shannon rate-distortion theory) from p(x): a codebook C of M = 2^{nR} codewords of length n, evaluated on the rate-distortion plot. The information-theoretic approach gives the limits, but: p(x) is not known; the codebook size is exponential in n; the codebook is unstructured; compression complexity and memory are high. The sketch below illustrates this.
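A minimal sketch of unstructured codebook training in the Shannon R(D) spirit: Lloyd (k-means) quantization of n-dimensional source blocks. The point to notice is that the codebook needs M = 2^{nR} entries, exponential in the block length n, which is exactly the complexity problem the slide names; the Gaussian source and the sizes are illustrative.

```python
# Lloyd/k-means training of an unstructured vector-quantizer codebook.
import numpy as np

rng = np.random.default_rng(4)
n, R = 4, 1.0                          # block length and rate (bits/sample)
M = int(2 ** (n * R))                  # codebook size: exponential in n
train = rng.standard_normal((10000, n))

C = train[rng.choice(len(train), M, replace=False)].copy()   # init codewords
for _ in range(20):                    # Lloyd iterations
    d2 = ((train[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(1)                 # nearest-codeword assignment
    for i in range(M):                 # centroid update
        if np.any(idx == i):
            C[i] = train[idx == i].mean(0)

mse = ((train - C[idx]) ** 2).mean()
print(f"rate {R} bit/sample, distortion {mse:.3f}, "
      f"Shannon bound {2.0 ** (-2 * R):.3f}")
```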

17 Challenge 2: Compression, machine learning approach. Structured codebooks with successive refinement: L trained (structured) codebooks C_1, \ldots, C_L of sizes M_1, \ldots, M_L, evaluated on the rate-distortion plot. This yields polynomial complexity, approaches the Shannon lower bound, and outperforms JPEG/JPEG2K at very low bit rates; see the sketch below.
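A minimal sketch of successive refinement with structured codebooks: L small codebooks quantize stage-wise residuals, so encoding costs M_1 + ... + M_L nearest-codeword searches instead of the product an equivalent unstructured codebook would require. This residual-quantization scheme is one possible instance of the idea, not the trained codebooks of the talk.

```python
# Multi-stage (residual) vector quantization: each stage refines the
# residual left by the previous one.
import numpy as np

rng = np.random.default_rng(5)
n, L, M = 4, 3, 16                     # block length, #stages, codewords/stage
train = rng.standard_normal((5000, n))

codebooks, resid = [], train.copy()
for _ in range(L):                     # train each stage on the current residual
    C = resid[rng.choice(len(resid), M, replace=False)].copy()
    for _ in range(15):                # Lloyd iterations per stage
        idx = ((resid[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for i in range(M):
            if np.any(idx == i):
                C[i] = resid[idx == i].mean(0)
    codebooks.append(C)
    resid = resid - C[idx]             # pass the residual to the next stage

# Encoding a new block = L cheap nearest-codeword searches.
x = rng.standard_normal(n)
x_hat, r = np.zeros(n), x.copy()
for C in codebooks:
    c = C[((r - C) ** 2).sum(-1).argmin()]
    x_hat, r = x_hat + c, r - c
print("quantization error:", np.linalg.norm(x - x_hat))
```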

18 Challenge 3: Analytics, automatic processing of Big Data. Main issue: the dimensionality and the amount of images for automatic processing. [Figure: encoder D_e, intermediate representations a and b with regularizers \Omega(a) and \Omega_{ab}(a, b), decoder D_d.] A feature vector is a compact and discriminative image representation. Supervised hashing enables recognition, data mining, search and indexing: automatic tools for Big Data analysis; a sketch follows.
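As a toy stand-in for the hashing-based analytics pipeline, the sketch below maps feature vectors to compact binary codes and searches by Hamming distance. Random-hyperplane LSH is used for simplicity; the supervised hashing mentioned on the slide would instead learn the projections from labeled data.

```python
# Hashing-based search over compact binary codes (random-hyperplane LSH).
import numpy as np

rng = np.random.default_rng(6)
dim, n_bits, n_db = 256, 32, 10000
W = rng.standard_normal((n_bits, dim))          # random projection hyperplanes

def hash_codes(X):
    # Map feature vectors to n_bits-bit binary codes.
    return (X @ W.T > 0).astype(np.uint8)

database = rng.standard_normal((n_db, dim))
db_codes = hash_codes(database)

query = database[42] + 0.1 * rng.standard_normal(dim)   # noisy copy of item 42
q_code = hash_codes(query[None])[0]

hamming = (db_codes != q_code).sum(1)           # Hamming distance to all codes
print("nearest by Hamming distance:", hamming.argmin())  # expect 42
```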

19 Conclusions. SKA is a great tool for research, and not only in astronomy and astrophysics. It is a test platform for many ideas in machine-learning-based image processing, thanks to big data and flexible imaging. In turn, it will lead to new alternative approaches to the three SKA challenges covering: imaging and reconstruction algorithms, compression algorithms, and automatic processing of big data.
