e-companion ONLY AVAILABLE IN ELECTRONIC FORM
OPERATIONS RESEARCH, doi /opre, pp. ec1–ec5. ©2007 INFORMS

Electronic Companion: "A Learning Approach for Interactive Marketing to a Customer Segment" by Dimitris Bertsimas and Adam J. Mersereau, Operations Research, doi /opre.

This document accompanies "A Learning Approach for Interactive Marketing to a Customer Segment" by Bertsimas and Mersereau. We provide proofs of some results from that paper and some additional computational results. References point to that document, and we use notation specified there.

A. Proofs of Results

Proposition 1 can be extended to the case in which the decisions $x$ are not fixed but are decided in stages.

Corollary 1. For $\tau \ge 0$, and where $x_i$ may depend on $s_t + \sum_{j=t}^{i-1} y_j$ and $f_t + \sum_{j=t}^{i-1} (x_j - y_j)$ for $i > t$,

$$E_{y_t}\Big[ E_{y_{t+1}}\Big[ \cdots E_{y_{t+\tau}}\Big[ J_t\Big(s_t + \sum_{i=t}^{t+\tau} y_i,\; f_t + \sum_{i=t}^{t+\tau} (x_i - y_i)\Big) \,\Big|\, x_{t+\tau},\, s_t + \sum_{i=t}^{t+\tau-1} y_i,\, f_t + \sum_{i=t}^{t+\tau-1} (x_i - y_i) \Big] \cdots \,\Big|\, x_{t+1},\, s_t + y_t,\, f_t + x_t - y_t \Big] \,\Big|\, x_t,\, s_t,\, f_t \Big] \ge J_t(s_t, f_t).$$

Proof. We can apply Proposition 1 to the inner expectation to show that the left-hand side of the inequality is greater than or equal to

$$E_{y_t}\Big[ E_{y_{t+1}}\Big[ \cdots E_{y_{t+\tau-1}}\Big[ J_t\Big(s_t + \sum_{i=t}^{t+\tau-1} y_i,\; f_t + \sum_{i=t}^{t+\tau-1} (x_i - y_i)\Big) \,\Big|\, x_{t+\tau-1},\, s_t + \sum_{i=t}^{t+\tau-2} y_i,\, f_t + \sum_{i=t}^{t+\tau-2} (x_i - y_i) \Big] \cdots \,\Big|\, x_{t+1},\, s_t + y_t,\, f_t + x_t - y_t \Big] \,\Big|\, x_t,\, s_t,\, f_t \Big].$$

The same argument applied $\tau - 1$ more times yields the desired result.

Proof of Proposition 2. Observe that a feasible policy for the $(N_A + N_B)$-stage-size problem is to set the decision $x_t$ at stage $t$ equal to the optimal decision for the $N_A$-stage-size problem plus the optimal decision for the $N_B$-stage-size problem. Let $\tilde J_t(s_t, f_t, N_A, N_B, T)$ denote the cost-to-go function corresponding to this policy beginning in stage $t$ and state $(s_t, f_t)$. Observe that $J_{T-1}(s_{T-1}, f_{T-1}, N, T)$ is linear in $N$, so

$$\tilde J_{T-1}(s_{T-1}, f_{T-1}, N_A, N_B, T) = J_{T-1}(s_{T-1}, f_{T-1}, N_A, T) + J_{T-1}(s_{T-1}, f_{T-1}, N_B, T).$$

Now consider stage $t < T - 1$.
Assume for any $s_{t+1}$ and $f_{t+1}$ that

$$\tilde J_{t+1}(s_{t+1}, f_{t+1}, N_A, N_B, T) \ge J_{t+1}(s_{t+1}, f_{t+1}, N_A, T) + J_{t+1}(s_{t+1}, f_{t+1}, N_B, T),$$

and let

$$x^A(s_t, f_t) = \arg\max_{x : |x| = N_A} \Big\{ \frac{s_t}{s_t + f_t} \cdot x + E_y\big[ J_{t+1}(s_t + y, f_t + x - y, N_A, T) \,\big|\, x, s_t, f_t \big] \Big\},$$
$$x^B(s_t, f_t) = \arg\max_{x : |x| = N_B} \Big\{ \frac{s_t}{s_t + f_t} \cdot x + E_y\big[ J_{t+1}(s_t + y, f_t + x - y, N_B, T) \,\big|\, x, s_t, f_t \big] \Big\}.$$
Then

$$\tilde J_t(s_t, f_t, N_A, N_B, T) = \frac{s_t}{s_t + f_t} \cdot (x^A + x^B) + E_y\big[ \tilde J_{t+1}(s_t + y, f_t + x^A + x^B - y, N_A, N_B, T) \,\big|\, x^A + x^B, s_t, f_t \big]$$
$$\ge \frac{s_t}{s_t + f_t} \cdot (x^A + x^B) + E_y\big[ J_{t+1}(s_t + y, f_t + x^A + x^B - y, N_A, T) \,\big|\, x^A + x^B, s_t, f_t \big] + E_y\big[ J_{t+1}(s_t + y, f_t + x^A + x^B - y, N_B, T) \,\big|\, x^A + x^B, s_t, f_t \big]$$
$$= \frac{s_t}{s_t + f_t} \cdot (x^A + x^B) + E_{y^A}\big[ E_{y^B}\big[ J_{t+1}(s_t + y^A + y^B, f_t + x^A + x^B - y^A - y^B, N_A, T) \,\big|\, x^B, s_t + y^A, f_t + x^A - y^A \big] \,\big|\, x^A, s_t, f_t \big] + E_{y^B}\big[ E_{y^A}\big[ J_{t+1}(s_t + y^A + y^B, f_t + x^A + x^B - y^A - y^B, N_B, T) \,\big|\, x^A, s_t + y^B, f_t + x^B - y^B \big] \,\big|\, x^B, s_t, f_t \big]$$
$$\ge \frac{s_t}{s_t + f_t} \cdot (x^A + x^B) + E_{y^A}\big[ J_{t+1}(s_t + y^A, f_t + x^A - y^A, N_A, T) \,\big|\, x^A, s_t, f_t \big] + E_{y^B}\big[ J_{t+1}(s_t + y^B, f_t + x^B - y^B, N_B, T) \,\big|\, x^B, s_t, f_t \big]$$
$$= J_t(s_t, f_t, N_A, T) + J_t(s_t, f_t, N_B, T),$$

where the equality in the third-to-last line follows from Lemma 1 and the inequality in the second-to-last line follows from Proposition 1. The desired result follows from induction and the fact that $J_0(s_t, f_t, N_A + N_B, T) \ge \tilde J_0(s_t, f_t, N_A, N_B, T)$.

Proof of Proposition 3. The optimal $(T_A + T_B)$-horizon policy yields at least as much expected reward as the policy that uses the optimal $T_A$-horizon policy for times $0, 1, \ldots, T_A - 1$, then uses the optimal $T_B$-horizon policy for times $T_A, T_A + 1, \ldots, T_A + T_B - 1$. If we denote the decisions and outcomes in periods $0, \ldots, T_A - 1$ under this policy as the vectors $x^A$ and $y^A$ respectively, then this statement is equivalent to

$$J_0(s_0, f_0, N, T_A + T_B) \ge J_0(s_0, f_0, N, T_A) + E_{x^A, y^A}\big[ J_0(s_0 + y^A, f_0 + x^A - y^A, N, T_B) \big],$$

where the expectation is over the decisions and outcomes of the $T_A$-stage problem and is shorthand for the expression on the left-hand side of the inequality in Corollary 1 with $t = 0$ and $\tau = T_A - 1$. By that corollary, we then have

$$E_{x^A, y^A}\big[ J_0(s_0 + y^A, f_0 + x^A - y^A, N, T_B) \big] \ge J_0(s_0, f_0, N, T_B).$$

This gives the desired result, $J_0(s_0, f_0, N, T_A + T_B) \ge J_0(s_0, f_0, N, T_A) + J_0(s_0, f_0, N, T_B)$.
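The superadditivity in Proposition 2 can be checked numerically on a toy instance. The sketch below is our own reading of the model, not the paper's implementation: a two-message dynamic program with Beta-Bernoulli beliefs and beta-binomial predictive outcomes, used to confirm that a stage size of $N_A + N_B$ earns at least as much as the sum of the $N_A$ and $N_B$ problems. All function and parameter names are ours.

```python
from functools import lru_cache
from math import comb, exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bb_pmf(y, x, s, f):
    # Beta-binomial predictive: probability of y successes out of x
    # messages under a Beta(s, f) belief about the response rate.
    return comb(x, y) * exp(log_beta(s + y, f + x - y) - log_beta(s, f))

def value(N, T, s0, f0):
    # Optimal expected total successes for a toy two-message problem
    # with stage size N and horizon T, solved by exact dynamic programming.
    @lru_cache(maxsize=None)
    def J(t, s1, f1, s2, f2):
        if t == T:
            return 0.0
        best = 0.0
        for x1 in range(N + 1):                      # split N messages
            x2 = N - x1
            immediate = x1 * s1 / (s1 + f1) + x2 * s2 / (s2 + f2)
            cont = 0.0                               # expected future value
            for y1 in range(x1 + 1):
                p1 = bb_pmf(y1, x1, s1, f1)
                for y2 in range(x2 + 1):
                    p2 = bb_pmf(y2, x2, s2, f2)
                    cont += p1 * p2 * J(t + 1, s1 + y1, f1 + x1 - y1,
                                        s2 + y2, f2 + x2 - y2)
            best = max(best, immediate + cont)
        return best
    return J(0, s0[0], f0[0], s0[1], f0[1])

s0, f0 = (1, 1), (1, 2)
lhs = value(4, 2, s0, f0)                            # stage size N_A + N_B = 4
rhs = value(2, 2, s0, f0) + value(2, 2, s0, f0)      # N_A = N_B = 2
# Proposition 2 predicts lhs >= rhs
```

On this instance the combined problem is slightly better than the sum of the split problems, consistent with the proposition.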
Proof of Proposition 4. The problem with stage size $mN$ and horizon $T/m$ is equivalent to a problem with horizon $T$ where stage sizes are $0$ when the stage $t$ is not divisible by $m$, and $mN$ when $t$ is divisible by $m$. Denote the optimal value function of this modified problem by $\bar J_t(s_t, f_t, mN, T)$. Adopt the convention $J_T(s_T, f_T, N, T) = \bar J_T(s_T, f_T, mN, T) = 0$ for all $s_T$, $f_T$. We proceed by induction. Assume for some $t$ divisible by $m$ that $\bar J_{t+m}(s_{t+m}, f_{t+m}, mN, T) \le J_{t+m}(s_{t+m}, f_{t+m}, N, T)$ for all $s_{t+m}$, $f_{t+m}$. Fix $s_t$, $f_t$ and let

$$\bar x = \arg\max_{x : |x| = mN} \Big\{ \frac{s_t}{s_t + f_t} \cdot x + E_y\big[ \bar J_{t+m}(s_t + y, f_t + x - y, mN, T) \,\big|\, x, s_t, f_t \big] \Big\},$$

and arbitrarily assign $x_t, x_{t+1}, \ldots, x_{t+m-1}$ such that $\bar x = \sum_{\tau=t}^{t+m-1} x_\tau$ and $|x_t| = |x_{t+1}| = \cdots = |x_{t+m-1}| = N$. This represents a feasible (non-Markov) policy for stages $t, t+1, \ldots, t+m-1$ of the $T$-stage, $N$-stage-size problem.
By the definition of $\bar x$, we can write

$$\bar J_t(s_t, f_t, mN, T) = \max_{x : |x| = mN} \Big\{ \frac{s_t}{s_t + f_t} \cdot x + E_y\big[ \bar J_{t+m}(s_t + y, f_t + x - y, mN, T) \,\big|\, x, s_t, f_t \big] \Big\}$$
$$= \frac{s_t}{s_t + f_t} \cdot \bar x + E_y\big[ \bar J_{t+m}(s_t + y, f_t + \bar x - y, mN, T) \,\big|\, \bar x, s_t, f_t \big]$$
$$\le \frac{s_t}{s_t + f_t} \cdot \bar x + E_y\big[ J_{t+m}(s_t + y, f_t + \bar x - y, N, T) \,\big|\, \bar x, s_t, f_t \big],$$

where the inequality follows from the induction assumption. We introduce the notation $\bar x(a, b) = \sum_{\tau=a}^{b} x_\tau$. Using this notation, we can write $\bar x = \bar x(t, t+m-1) = \bar x(t, t+m-2) + x_{t+m-1}$. Substituting yields

$$\bar J_t(s_t, f_t, mN, T) \le \frac{s_t}{s_t + f_t} \cdot \bar x(t, t+m-1) + E_y\big[ J_{t+m}(s_t + y, f_t + \bar x(t, t+m-1) - y, N, T) \,\big|\, \bar x(t, t+m-1), s_t, f_t \big] \quad \text{(EC.1)}$$
$$= \frac{s_t}{s_t + f_t} \cdot \bar x(t, t+m-2) + E_y\Big[ \frac{s_t + y}{s_t + f_t + \bar x(t, t+m-2)} \cdot x_{t+m-1} + E_{y'}\big[ J_{t+m}(s_t + y + y', f_t + \bar x(t, t+m-2) + x_{t+m-1} - y - y', N, T) \,\big|\, x_{t+m-1}, s_t + y, f_t + \bar x(t, t+m-2) - y \big] \,\Big|\, \bar x(t, t+m-2), s_t, f_t \Big]$$
$$\le \frac{s_t}{s_t + f_t} \cdot \bar x(t, t+m-2) + E_y\Big[ \max_{x : |x| = N} \Big\{ \frac{s_t + y}{s_t + f_t + \bar x(t, t+m-2)} \cdot x + E_{y'}\big[ J_{t+m}(s_t + y + y', f_t + \bar x(t, t+m-2) + x - y - y', N, T) \,\big|\, x, s_t + y, f_t + \bar x(t, t+m-2) - y \big] \Big\} \,\Big|\, \bar x(t, t+m-2), s_t, f_t \Big]$$
$$= \frac{s_t}{s_t + f_t} \cdot \bar x(t, t+m-2) + E_y\big[ J_{t+m-1}(s_t + y, f_t + \bar x(t, t+m-2) - y, N, T) \,\big|\, \bar x(t, t+m-2), s_t, f_t \big], \quad \text{(EC.2)}$$

where the first equality follows from Lemma 1 and the fact that

$$E_y\Big[ \frac{s + y}{s + f + x} \,\Big|\, x, s, f \Big] = \frac{s + E_y[y \mid x, s, f]}{s + f + x} = \frac{s + x s/(s + f)}{s + f + x} = \frac{s(s + f + x)}{(s + f)(s + f + x)} = \frac{s}{s + f}.$$

We can repeat the arguments (EC.1)–(EC.2) $m - 2$ more times to get

$$\bar J_t(s_t, f_t, mN, T) \le \frac{s_t}{s_t + f_t} \cdot x_t + E_y\big[ J_{t+1}(s_t + y_t, f_t + x_t - y_t, N, T) \,\big|\, x_t, s_t, f_t \big]$$
$$\le \max_{x : |x| = N} \Big\{ \frac{s_t}{s_t + f_t} \cdot x + E_y\big[ J_{t+1}(s_t + y, f_t + x - y, N, T) \,\big|\, x, s_t, f_t \big] \Big\} = J_t(s_t, f_t, N, T).$$

The desired result follows by induction.

B. Some Properties of $J^\lambda_0(s_0, f_0)$

The propositions proven in this section support the assertions made in §4.1 that the function $J^\lambda_0(s_0, f_0)$ is convex as a function of $\lambda$ and an upper bound for the true value function $J_0(s_0, f_0)$.

Proposition 6. $J^\lambda_0(s_0, f_0) \ge J_0(s_0, f_0)$ for all $\lambda$, $s_0$, and $f_0$.

Proof. First, we use induction to show $J^\lambda_t(s_t, f_t) \ge J_t(s_t, f_t)$ for all $t$, $\lambda$, $s_t$, and $f_t$. Fix $\lambda$, and consider stage $T - 1$. Let $\bar m = \arg\max_m s_{m,T-1}/(s_{m,T-1} + f_{m,T-1})$; then $J_{T-1}(s_{T-1}, f_{T-1}) = N s_{\bar m,T-1}/(s_{\bar m,T-1} + f_{\bar m,T-1})$.
If $s_{\bar m,T-1}/(s_{\bar m,T-1} + f_{\bar m,T-1}) \ge \lambda_{T-1}$, then

$$J^\lambda_{T-1}(s_{T-1}, f_{T-1}) = N\lambda_{T-1} + \sum_{m=1}^{M} \hat J^\lambda_{m,T-1}(s_{m,T-1}, f_{m,T-1}) \ge N\lambda_{T-1} + \hat J^\lambda_{\bar m,T-1}(s_{\bar m,T-1}, f_{\bar m,T-1}) = J_{T-1}(s_{T-1}, f_{T-1}).$$
If $s_{\bar m,T-1}/(s_{\bar m,T-1} + f_{\bar m,T-1}) < \lambda_{T-1}$, then

$$J_{T-1}(s_{T-1}, f_{T-1}) = N \frac{s_{\bar m,T-1}}{s_{\bar m,T-1} + f_{\bar m,T-1}} \le N\lambda_{T-1} = J^\lambda_{T-1}(s_{T-1}, f_{T-1}).$$

Now for some $t$ assume $J^\lambda_{t+1}(s_{t+1}, f_{t+1}) \ge J_{t+1}(s_{t+1}, f_{t+1})$ for all $\lambda$, $s_{t+1}$, and $f_{t+1}$. Let $x_t$ be feasible and achieve the maximum in the optimization of Equation (5). Then $x_t$ is feasible in the optimization problem of Equation (7),

$$E_{y_t}\big[ J_{t+1}(s_t + y_t, f_t + x_t - y_t) \,\big|\, x_t, s_t, f_t \big] \le E_{y_t}\big[ J^\lambda_{t+1}(s_t + y_t, f_t + x_t - y_t) \,\big|\, x_t, s_t, f_t \big],$$

and $\lambda_t \big(N - \sum_{m=1}^{M} x_{m,t}\big) = 0$; thus, by comparison of Equations (5) and (7), we have $J^\lambda_t(s_t, f_t) \ge J_t(s_t, f_t)$. By induction we thus have $J^\lambda_1(s_1, f_1) \ge J_1(s_1, f_1)$ for all $\lambda$, $s_1$, and $f_1$; then a comparison of Equations (5) and (11) gives us $J^\lambda_0(s_0, f_0) \ge J_0(s_0, f_0)$.

Proposition 7. $J^\lambda_0(s_0, f_0)$ is convex as a function of $\lambda$ for all $s_0$ and $f_0$.

Proof. First, we use induction to show that $\hat J^\lambda_{m,t}(s_{m,t}, f_{m,t})$ is convex in $\lambda$ for all $t$, $s_{m,t}$, and $f_{m,t}$. For all $s_{m,T-1}$ and $f_{m,T-1}$, $\hat J^\lambda_{m,T-1}(s_{m,T-1}, f_{m,T-1})$ is the maximum of linear functions of $\lambda$ and is thus convex in $\lambda$. Now for arbitrary $t < T - 1$ assume $\hat J^\lambda_{m,t+1}(s_{m,t+1}, f_{m,t+1})$ is convex in $\lambda$ for all $s_{m,t+1}$ and $f_{m,t+1}$. Then fix $x_{m,t}$ and observe that $E_y[\hat J^\lambda_{m,t+1}(s_{m,t} + y, f_{m,t} + x_{m,t} - y) \mid x_{m,t}, s_{m,t}, f_{m,t}]$ is a positively weighted sum of convex functions of $\lambda$ and is thus convex. $\hat J^\lambda_{m,t}(s_{m,t}, f_{m,t})$ is then a maximum over a finite set of convex functions and is thus convex as a function of $\lambda$ for all $s_{m,t}$ and $f_{m,t}$. $J^\lambda_t(s_t, f_t)$ is the sum of convex functions and is thus convex for all $s_t$ and $f_t$. Thus $E[J^\lambda_1(s_1, f_1)]$ is convex as a function of $\lambda$ for all $s_1$ and $f_1$. Finally, $J^\lambda_0(s_0, f_0)$ is a maximum of convex functions and is thus convex as a function of $\lambda$ for all $s_0$ and $f_0$.

C. Alternate Algorithms for Selecting $\lambda$

Here we present support for our choice of method for selecting the parameter $\lambda$. The results in this section make use of the subproblem approximations of §4.2 with $H = 2$ and $B = N/10$, and compare the following methods for selecting $\lambda$:

ADP: This is the method used to generate the results in §6.
$\lambda$ is assumed constant and is chosen using binary search to identify the $\lambda$ for which the constraint $\sum_{m=1}^{M} x_{m,0} = N$ is satisfied in the relaxed problem. After 7 iterations of binary search, the constrained problem (11) is used to determine a feasible solution.

ADP_min: This approach assumes $\lambda$ constant and attempts to select a $\lambda$ that minimizes the value $J^\lambda_0(s_0, f_0)$. The numerical minimization relies on the convexity of the relaxed-problem value function (see Proposition 7). Specifically, we begin with a known interval for $\lambda$ (initially, $[0, 1]$) and subdivide the interval into 4 evenly spaced subintervals. By evaluating and comparing the relaxed value function at each of the subinterval boundaries, we can narrow the interval to at most one half the original interval. This procedure is iterated 7 times.

ADP_$\lambda$: This implements a version of the algorithm with $\lambda$ retaining a component for each future time stage. We include variables $\lambda_1$ through $\lambda_H$, using $\lambda_H$ for estimating the relaxed value functions beyond the lookahead horizon $H$. The component parameters are chosen in a minimization procedure that performs local search on a discretized grid of $\lambda$ values. The discretization we use is 0.05, which we note is coarser than the precision of ADP_min.

Table EC.1 gives results for a few selected problems for each of the methods described. We note that the results do not give evidence that our assumption of constant $\lambda$ is a poor one, nor does it seem that significant gains can be achieved by using a minimization procedure to choose $\lambda$.

TABLE EC.1. Simulation results comparing ADP, ADP_min, and ADP_$\lambda$. Numbers represent average numbers of successes over 2,000 simulated problems. [Columns: $(s, f)$, $T$, $N$, $k$, Ideal, Greedy, Intval., ADP, ADP_min, ADP_$\lambda$; the numeric entries are not recoverable from this transcription.]
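For illustration, the binary-search step in ADP can be sketched as follows. Here `relaxed_allocation` is a hypothetical stand-in for solving the relaxed problem at a given $\lambda$ and returning $\sum_m x_{m,0}$, and we assume, as the binary search implicitly does, that this allocation is non-increasing in $\lambda$.

```python
def choose_lambda(relaxed_allocation, N, lo=0.0, hi=1.0, iters=7):
    # Bisect on lambda until the relaxed problem's first-stage allocation
    # matches the budget N. `relaxed_allocation(lam)` is a hypothetical
    # stand-in returning sum_m x_{m,0}; it is assumed non-increasing in
    # lam (a larger penalty discourages sending messages).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if relaxed_allocation(mid) > N:
            lo = mid      # allocation too large: increase the penalty
        else:
            hi = mid      # allocation within budget: decrease the penalty
    return 0.5 * (lo + hi)

# Toy stand-in whose allocation decreases linearly in lambda;
# the budget N = 40 is met exactly at lambda = 0.6.
alloc = lambda lam: 100.0 * (1.0 - lam)
lam_star = choose_lambda(alloc, N=40.0)
```

After 7 iterations the interval has width $2^{-7} \approx 0.008$, which is the "precision of ADP_min" referred to above.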
TABLE EC.2. Simulation results for some randomly generated multi-segment problems. Computation times represent average CPU time per stage on an Intel Xeon 2.4 GHz processor. The Greedy and Dynamic algorithms took negligible time per stage (< 0.005 seconds). [Columns: $S$, $T$, $M$, $N_i$, $(s, f)$, $k$, Greedy, Dynamic, Info, Decomp, and CPU times for Info and Decomp; the numeric entries are not recoverable from this transcription.]

D. Computational Results for Multiple Segments with Migrating Customers

Through simulated experiments, we evaluate the effectiveness of the method described in §4 for accounting for the migration of customers among segments. For purposes of comparison, we have implemented the following heuristics for the problem described in §4:

Greedy: Sends to all customers in state $i$ the available message offering the greatest expected reward in the current stage. Thus, this method accounts for neither the customer migration dynamics nor the effects of information accumulation.

Dynamic: This heuristic fixes all reward probabilities at their expected values, then solves a simple dynamic program for each customer. In the case with known purchase probabilities, solving for each customer independently produces an optimal policy for the overall problem. This method accounts for customer migration dynamics but ignores information effects.

Info: This ignores customer dynamics entirely and makes decisions using the dynamic-programming-based adaptive sampling heuristic of §4.

Decomp: This is the decomposition-based approximation described in §4.

We test the algorithm on a few randomly generated examples. True reward probabilities and prior distributions are generated as in §6. Transition probabilities for each segment and message are chosen by selecting an $S$-vector of uniform random deviates and normalizing so that $\sum_{j=1}^{S} P_{ij} = 1$. Average results over 2,000 randomly generated problems for a few cases are presented in Table EC.2.
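The transition-probability generation just described can be sketched directly; the function name is ours.

```python
import random

def random_transition_matrix(S, rng=random):
    # For each origin segment, draw an S-vector of uniform random
    # deviates and normalize so that the row sums to one, as in the
    # experiments above.
    P = []
    for _ in range(S):
        row = [rng.random() for _ in range(S)]
        total = sum(row)
        P.append([w / total for w in row])
    return P

P = random_transition_matrix(3)
```

Each row of the returned matrix is a valid probability distribution over destination segments.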
We observe that the Decomp heuristic outperforms all the other methods for each set of problems tried. Moreover, the improvement afforded by the Decomp heuristic is statistically significant in each case. Most notably, the decomposition approach performs as well as the best of the Dynamic and Info techniques in all of the examples, suggesting it is adequately accounting for both information value and customer dynamics. We also note that all three of the Dynamic, Info, and Decomp methods are preferable to the Greedy heuristic in all the examples. Computation times are reasonable and comparable to those observed for the single-segment problems of §6, although we point out that we have chosen instances with fewer customers per stage than in §6.
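As a concrete reference point, the Greedy benchmark above reduces to an argmax per segment. In this sketch, `expected_reward[i][m]`, the current-belief mean reward of message $m$ for a customer in segment $i$, is a hypothetical input, since the full belief state is not reproduced here.

```python
def greedy_messages(expected_reward):
    # Greedy benchmark: in each segment i, send every customer the
    # message with the greatest expected reward in the current stage,
    # ignoring both migration dynamics and the value of information.
    return {i: max(range(len(rewards)), key=rewards.__getitem__)
            for i, rewards in enumerate(expected_reward)}

# Two segments, three messages: segment 0 gets message 1, segment 1 gets 2.
choice = greedy_messages([[0.10, 0.30, 0.20], [0.25, 0.05, 0.40]])
```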
More informationEMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS
EMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS Jochen Till, Sebastian Engell, Sebastian Panek, and Olaf Stursberg Process Control Lab (CT-AST), University of Dortund,
More informationSequence Analysis, WS 14/15, D. Huson & R. Neher (this part by D. Huson) February 5,
Sequence Analysis, WS 14/15, D. Huson & R. Neher (this part by D. Huson) February 5, 2015 31 11 Motif Finding Sources for this section: Rouchka, 1997, A Brief Overview of Gibbs Sapling. J. Buhler, M. Topa:
More informationStochastic Optimization of Product-Machine Qualification in a Semiconductor Back-end Facility
Stochastic Optiization of Product-Machine Qualification in a Seiconductor Back-end Facility Mengying Fu, Ronald Askin, John Fowler, Muhong Zhang School of Coputing, Inforatics, and Systes Engineering,
More informationShannon Sampling II. Connections to Learning Theory
Shannon Sapling II Connections to Learning heory Steve Sale oyota echnological Institute at Chicago 147 East 60th Street, Chicago, IL 60637, USA E-ail: sale@athberkeleyedu Ding-Xuan Zhou Departent of Matheatics,
More informationOn Poset Merging. 1 Introduction. Peter Chen Guoli Ding Steve Seiden. Keywords: Merging, Partial Order, Lower Bounds. AMS Classification: 68W40
On Poset Merging Peter Chen Guoli Ding Steve Seiden Abstract We consider the follow poset erging proble: Let X and Y be two subsets of a partially ordered set S. Given coplete inforation about the ordering
More informationConstrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008
LIDS Report 2779 1 Constrained Consensus and Optiization in Multi-Agent Networks arxiv:0802.3922v2 [ath.oc] 17 Dec 2008 Angelia Nedić, Asuan Ozdaglar, and Pablo A. Parrilo February 15, 2013 Abstract We
More informationE0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis
E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds
More informationSymbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm
Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat
More informationPattern Recognition and Machine Learning. Artificial Neural networks
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial
More informationPattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition
More informationLeast Squares Fitting of Data
Least Squares Fitting of Data David Eberly, Geoetric Tools, Redond WA 98052 https://www.geoetrictools.co/ This work is licensed under the Creative Coons Attribution 4.0 International License. To view a
More informationWhen Short Runs Beat Long Runs
When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long
More informationSupplementary to Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data
Suppleentary to Learning Discriinative Bayesian Networks fro High-diensional Continuous Neuroiaging Data Luping Zhou, Lei Wang, Lingqiao Liu, Philip Ogunbona, and Dinggang Shen Proposition. Given a sparse
More informationBlock designs and statistics
Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent
More informationHandwriting Detection Model Based on Four-Dimensional Vector Space Model
Journal of Matheatics Research; Vol. 10, No. 4; August 2018 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Handwriting Detection Model Based on Four-Diensional Vector
More informationA Theoretical Analysis of a Warm Start Technique
A Theoretical Analysis of a War Start Technique Martin A. Zinkevich Yahoo! Labs 701 First Avenue Sunnyvale, CA Abstract Batch gradient descent looks at every data point for every step, which is wasteful
More informationOn Constant Power Water-filling
On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives
More informationProjectile Motion with Air Resistance (Numerical Modeling, Euler s Method)
Projectile Motion with Air Resistance (Nuerical Modeling, Euler s Method) Theory Euler s ethod is a siple way to approxiate the solution of ordinary differential equations (ode s) nuerically. Specifically,
More informationBest Procedures For Sample-Free Item Analysis
Best Procedures For Saple-Free Ite Analysis Benjain D. Wright University of Chicago Graha A. Douglas University of Western Australia Wright s (1969) widely used "unconditional" procedure for Rasch saple-free
More informationA MESHSIZE BOOSTING ALGORITHM IN KERNEL DENSITY ESTIMATION
A eshsize boosting algorith in kernel density estiation A MESHSIZE BOOSTING ALGORITHM IN KERNEL DENSITY ESTIMATION C.C. Ishiekwene, S.M. Ogbonwan and J.E. Osewenkhae Departent of Matheatics, University
More informationOn the Analysis of the Quantum-inspired Evolutionary Algorithm with a Single Individual
6 IEEE Congress on Evolutionary Coputation Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-1, 6 On the Analysis of the Quantu-inspired Evolutionary Algorith with a Single Individual
More informationHybrid System Identification: An SDP Approach
49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The
More informationKonrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf
Konrad-Zuse-Zentru für Inforationstechnik Berlin Heilbronner Str. 10, D-10711 Berlin - Wilersdorf Folkar A. Borneann On the Convergence of Cascadic Iterations for Elliptic Probles SC 94-8 (Marz 1994) 1
More informationNew Slack-Monotonic Schedulability Analysis of Real-Time Tasks on Multiprocessors
New Slack-Monotonic Schedulability Analysis of Real-Tie Tasks on Multiprocessors Risat Mahud Pathan and Jan Jonsson Chalers University of Technology SE-41 96, Göteborg, Sweden {risat, janjo}@chalers.se
More informationPattern Recognition and Machine Learning. Artificial Neural networks
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016 Lessons 7 14 Dec 2016 Outline Artificial Neural networks Notation...2 1. Introduction...3... 3 The Artificial
More informationOptimum Value of Poverty Measure Using Inverse Optimization Programming Problem
International Journal of Conteporary Matheatical Sciences Vol. 14, 2019, no. 1, 31-42 HIKARI Ltd, www.-hikari.co https://doi.org/10.12988/ijcs.2019.914 Optiu Value of Poverty Measure Using Inverse Optiization
More informationIntelligent Systems: Reasoning and Recognition. Artificial Neural Networks
Intelligent Systes: Reasoning and Recognition Jaes L. Crowley MOSIG M1 Winter Seester 2018 Lesson 7 1 March 2018 Outline Artificial Neural Networks Notation...2 Introduction...3 Key Equations... 3 Artificial
More informationSupplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators
Suppleentary Inforation for Design of Bending Multi-Layer Electroactive Polyer Actuators Bavani Balakrisnan, Alek Nacev, and Elisabeth Sela University of Maryland, College Park, Maryland 074 1 Analytical
More information