LambdaMF: Learning Nonsmooth Ranking Functions in Matrix Factorization Using Lambda


2015 IEEE International Conference on Data Mining

LambdaMF: Learning Nonsmooth Ranking Functions in Matrix Factorization Using Lambda

Guang-He Lee, Department of Computer Science and Information Engineering, National Taiwan University
Shou-De Lin, Department of Computer Science and Information Engineering, National Taiwan University

Abstract—This paper emphasizes optimizing ranking measures in a recommendation problem. Since ranking measures are non-differentiable, previous works have dealt with this problem via approximations or lower/upper bounds of the loss. However, such mismatch between ranking measures and approximations/bounds can lead to non-optimal ranking results. To solve this problem, we propose to model the gradient of a non-differentiable ranking measure based on the idea of a virtual gradient, called lambda in learning to rank. In addition, noting the differences between learning to rank and recommendation models, we prove that under certain circumstances the existence of popular items can lead to unlimited norm growth of the latent factors in a matrix factorization model. We further create a novel regularization term to remedy this concern. Finally, we demonstrate that our model, LambdaMF, outperforms several state-of-the-art methods. We further show in experiments that in all cases our model achieves the global optimum of normalized discounted cumulative gain during training. Detailed implementation and supplementary material can be found at ( b /).

I. INTRODUCTION

With the prosperous emergence of e-commerce, recommendation systems have come to play a big role in online stores. In general, users might prefer a recommendation system that selects a subset of candidate items for them to choose from. In this sense, the top-N recommended list is commonly adopted to evaluate a ranking-oriented recommendation system. Such evaluation consists of an Information Retrieval (IR) measure, i.e. a ranking measure, and a cut-off value N.
For example, Precision@5 specifies the precision value given the top 5 recommended items. However, the commonly used ranking measures are either indiscriminate or discontinuous over certain regions of the model space. Fig. 1 illustrates this behavior: adding a small Δs to the score s(c) of document c cannot alter the ranking of c, so a derivative equal to 0 appears in the measured objective function. However, adding a larger value Δs to the score of c can cause the ranking of document c to immediately jump from 3rd to 2nd, which leads to a discontinuity in the ranking measure due to the swapping of two documents. The property of zero or discontinuous gradients discourages the usage of common gradient descent optimization techniques, which usually leads to non-optimal solutions. To handle this drawback, researchers have designed novel loss functions for optimization, usually derived from a smooth approximation [1] or a bound [2] of the original ranking measures.

Figure 1. Ranking based on different model scores

However, there is a concern that the mismatch between the loss function and the ranking measure would result in a gap between the optimal and learned models [3]. In the field of learning to rank, lambda-based methods [4] have been proposed to bypass converting challenging ranking measures into loss functions, by using a proxy of the gradient of the ranking measure. The required ranking and sorting in ranking measures are then incorporated into the model by calculating the virtual gradient, a.k.a. lambda, after sorting. Inheriting the idea of lambda, we design a new recommendation model named Lambda Matrix Factorization (LambdaMF) to combine the idea of lambda with the highly successful matrix factorization model in collaborative filtering. To our knowledge, LambdaMF is the first ranking matrix factorization model to achieve the global training optimum of a ranking measure: the Normalized Discounted Cumulative Gain (NDCG). Our experiments also show that LambdaMF directly optimizes NDCG 98.4% of the time during the optimization process.
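The zero-then-jump behavior of rank-based measures can be reproduced numerically. A minimal sketch (not from the paper; the scores are hypothetical):

```python
def rank_of(scores, idx):
    """1-based rank of item idx when items are sorted by descending score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order.index(idx) + 1

scores = [3.0, 2.0, 1.0]   # model scores of documents a, b, c
r0 = rank_of(scores, 2)    # c starts at rank 3
scores[2] += 0.5           # small delta: no rank change -> zero gradient
r1 = rank_of(scores, 2)
scores[2] += 1.0           # larger delta: c jumps past b -> rank changes discontinuously
r2 = rank_of(scores, 2)
print(r0, r1, r2)
```

Any measure computed from ranks inherits this piecewise-constant shape, which is exactly what blocks plain gradient descent.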
Moreover, we have observed an interesting effect while learning LambdaMF. Sometimes the norm of some latent parameters can grow unlimitedly due to the lack of suppression power to constrain the effect of popular items during learning. In this paper we prove that this phenomenon can always happen in the presence of certain popular items for ranking-oriented matrix factorization models that satisfy certain criteria. We further propose a novel regularization term to combat this norm growth and empirically show the numerical stability of our solution.

Finally, we conduct experiments with the MovieLens and Netflix datasets. Empirical comparison shows that our method outperforms several state-of-the-art models. In a nutshell, our main contributions in this paper can be listed as follows:
- We create a new matrix factorization algorithm modeling the gradient of a ranking measure. To the best of our knowledge, LambdaMF is the first model shown to empirically achieve the global optimum on training data.
- We prove that some ranking-oriented matrix factorization models suffer from unlimited norm growth due to the popular item effect.
- We create a novel regularization term to deal with the popular item effect in recommendation systems.
- We conduct experiments to show that the model is scalable to large recommendation datasets and outperforms several state-of-the-art models.

The remaining sections are organized as follows. Section II presents several related works and a comparative discussion. In Section III, the proposed LambdaMF and some related discussion are presented. Afterwards, experimental results and conclusions are given in Section IV and Section V.

II. RELATED WORK

The designed model is strongly related to research on both collaborative filtering and learning to rank. We discuss them briefly below.

A. Learning to Rank

In the context of Learning to Rank (LTR), the problem is usually formulated with queries Q, documents D, and relevance levels R for documents with respect to each query. A feature vector is often given for each document-query pair, as the focus is on optimizing the ranking measure for each query. The essential difference from a recommendation model is thus revealed: there are no queries or features in a collaborative filtering task. Nevertheless, people usually evaluate the results of top-N recommendation by correlating users to queries and items to documents. For generality, we use the same user-to-query and item-to-document analogy to describe the LTR task in the remaining paragraphs.
There are several different LTR measures, which can be simply divided into explicit feedback and implicit relevance settings. For explicit feedback, users explicitly provide a specific score for each observed item. In this scenario, the performance is often evaluated using normalized discounted cumulative gain (NDCG) with a cut-off value K. Given the relevance level R_i and predicted ranking l_i of every recommended item i in the list sorted by model score, NDCG@K is computed as follows:

NDCG@K = DCG@K / max(DCG@K),
DCG@K = Σ_{i: l_i ≤ K} (2^{R_i} − 1) / log_2(l_i + 1)

where max(DCG@K) is the DCG@K of the ideal ordering.

B. Collaborative Filtering

Classically, the recommendation task is equivalent to the task of rating prediction. Well-known examples can be found in competitions such as the Netflix Prize. Common methods can be divided into memory-based and model-based methods; the major distinction lies in whether a model-learning process is conducted. Both can be regarded as collaborative filtering methods, as they meet the basic assumption that similar users tend to share similar interests. Recently, however, ranking-oriented CF methods [1], [5]-[8] have attracted more attention, because ranking prediction better captures the activity of personal preference ordering of items. Examples can be found in CLiMF [1], optimizing an approximation of mean reciprocal rank (MRR), and CoFiRank [2], optimizing a bound of NDCG. However, in CLiMF, the ranking of an item is approximated as the sigmoid function of its model score; it only considers the actual value of the model score rather than the ordinal ranking of the model score. In contrast, LambdaMF directly deals with the actual ranking due to its incorporation of sorting. For CoFiRank, the loss function is derived from a bound of NDCG, but there is no guarantee about the distance between the bound and the actual ranking function.
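The NDCG@K definition above can be computed in a few lines. A sketch (not from the paper; `relevances` and `scores` are hypothetical per-item arrays aligned by index):

```python
import math

def ndcg_at_k(relevances, scores, k):
    """NDCG@K for one user: relevances[i] and scores[i] belong to item i."""
    # Rank items by descending model score; position in `order` gives l_i - 1.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    dcg = sum((2 ** relevances[i] - 1) / math.log2(rank + 2)
              for rank, i in enumerate(order[:k]))
    # Ideal DCG: the same gains placed in the best possible order.
    ideal = sorted(relevances, reverse=True)
    idcg = sum((2 ** r - 1) / math.log2(rank + 2)
               for rank, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([3, 2, 1], [0.1, 0.9, 0.5], k=3))
```

A perfect ordering yields NDCG@K = 1.0, which is the global-optimum condition referenced later in the paper.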
Conversely, LambdaMF directly formulates the ranking function into its gradient, so there is no need to worry about a similar approximation or bounding gap. Compared to the above models, LambdaMF is more general: it presents a template model which can be fine-tuned for different types of ranking functions given sparse data.

III. LAMBDA MATRIX FACTORIZATION

This section first introduces the background of lambda, then discusses how the lambda gradient can be incorporated into a matrix factorization model. To reflect the difference between learning to rank and recommendation systems, a novel regularization term is created to address the concern of overflow arising from the use of lambda. Finally, a time complexity analysis is presented to demonstrate the efficiency of the model.

A. RankNet and LambdaRank

In a series of studies on learning to rank [4], RankNet [9] was first proposed to handle the problem using gradients. In this model, given a query q, for every pair of documents i and j with different relevance levels R_{q,i} > R_{q,j}, RankNet aims to promote the relative predicted probability of this pair in terms of their model scores s_i and s_j. The relative probability, generated by the model, that document i is ranked higher than document j is defined as (1):

P^q_{i,j} = 1 / (1 + exp(−σ(s_i − s_j)))   (1)

where σ is simply a parameter controlling the shape of the sigmoid function. The loss function is then defined as the cross entropy between the pairwise preference ordering of the training data and the relative probability of the model. Given the same assumption of relevance levels R_{q,i} > R_{q,j}, the

probability of the pairwise real preference ordering between i and j is defined as P̄^q_{i,j} = 1. The loss function and its gradient for query q and documents i, j are then defined as (2), (3):

C^q_{i,j} = −P̄^q_{i,j} log P^q_{i,j} − (1 − P̄^q_{i,j}) log(1 − P^q_{i,j})   (2)

C^q_{i,j} = log(1 + exp(−σ(s_i − s_j))),
∂C^q_{i,j}/∂s_i = −σ / (1 + exp(σ(s_i − s_j))) = −∂C^q_{i,j}/∂s_j   (3)

Based on the idea of RankNet, which formulates the pairwise ranking problem as gradient descent over pairs of documents, LambdaRank [3] was proposed to formulate the gradient of a list-wise ranking measure, called λ. Several possible λs were tested in [3]; the authors report the best λ for optimizing NDCG as (4), with |ΔNDCG| defined as the absolute NDCG gain from swapping documents i and j:

∂C^q_{i,j}/∂s_i = −λ^q_{i,j} = −σ |ΔNDCG| / (1 + exp(σ(s_i − s_j))) = −∂C^q_{i,j}/∂s_j   (4)

∂C^q_{i,j}/∂w = (∂C^q_{i,j}/∂s_i)(∂s_i/∂w) + (∂C^q_{i,j}/∂s_j)(∂s_j/∂w) = −λ^q_{i,j} ∂s_i/∂w + λ^q_{i,j} ∂s_j/∂w

B. Lambda in Matrix Factorization

To introduce λ into a collaborative filtering model, the first thing to notice is that we do not assume the existence of user or item features, due to privacy concerns. Hence, we adopt MF as our latent factor model representing features for users and items. Given M users and N items, there is a sparse rating matrix R_{M,N}, in which each row represents a user and each column represents an item. The values in the matrix are only available where the corresponding ratings exist in the training dataset, and the task of MF is to guess the remaining ratings. Similar to Singular Value Decomposition, the idea behind MF is to factorize the rating matrix R_{M,N} into two matrices U_{M,d} and V_{N,d} with a pre-specified latent factor dimension d. Given a ∈ [1, M] and b ∈ [1, N], U_a represents the latent factor of the a-th user, while V_b represents the latent factor of the b-th item. Finally, the rating prediction can be formulated as the matrix product below, where the inner product of U_u and V_i represents the predicted score for each (u, i) pair:

R̂ = U V^T

Using MF as the basis of our latent model, we are now ready to introduce LambdaMF.
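The MF prediction R̂ = U V^T can be sketched as follows (the dimensions and the `predict` helper are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 4, 6, 3                       # users, items, latent dimension
U = rng.normal(scale=0.1, size=(M, d))  # user latent factors, one row per user
V = rng.normal(scale=0.1, size=(N, d))  # item latent factors, one row per item

def predict(u, i):
    # Predicted score for the (u, i) pair: inner product of U_u and V_i.
    return U[u] @ V[i]

R_hat = U @ V.T                         # full predicted rating matrix (M x N)
```

Every entry of `R_hat` agrees with the per-pair inner product, which is the score the lambda gradients below act on.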
Suppose the goal is to optimize a ranking measure function f. We first construct user and item latent factors of dimension d as our model basis. Then we assume that there is a virtual cost function C optimizing f, whose gradient with respect to the model score is defined as λ:

∂C^u_{i,j}/∂s^u_i = λ^u_{i,j}   (5)

However, to obtain the gradient with respect to the model parameters, we first need to discuss the differences between LambdaRank and LambdaMF. First, given a user u, when a pair of items (i, j) is chosen, unlike in LTR where only the model weight vector needs to be updated, in a recommendation task we have the item latent factors of i and j and the user latent factor of u to update. Second, in the original LambdaRank model, the score of the pair (u, i) is generated from the prediction outputs of a neural network, while in MF this score is generated from the inner product of U_u and V_i. Hence, we have ∂s^u_i/∂U_u = V_i and ∂s^u_i/∂V_i = U_u. Now we are ready to describe how to apply SGD to learn the model parameters. Given the first difference described above, it is apparent that there are 2 item latent factors and 1 user latent factor to be considered when generating the stochastic gradient. Moreover, given the second difference, we can exploit the chain rule during the derivation of the gradient. Finally, given a user u and a pair of items i and j with R_{u,i} > R_{u,j}, the gradient can be computed as (6):

∂C^u_{i,j}/∂V_i = (∂C^u_{i,j}/∂s^u_i)(∂s^u_i/∂V_i) = λ^u_{i,j} U_u
∂C^u_{i,j}/∂V_j = (∂C^u_{i,j}/∂s^u_j)(∂s^u_j/∂V_j) = −λ^u_{i,j} U_u
∂C^u_{i,j}/∂U_u = (∂C^u_{i,j}/∂s^u_i)(∂s^u_i/∂U_u) + (∂C^u_{i,j}/∂s^u_j)(∂s^u_j/∂U_u) = λ^u_{i,j} (V_i − V_j)   (6)

The specific definition of λ^u_{i,j} is not given in this section. Instead, we want to convey that the design of λ^u_{i,j} can be simple and generic. In [10], several λs have been proposed for optimizing the corresponding LTR metrics. In our opinion, any function with a positive range can be viewed as a legitimate lambda as long as it promotes the preferred item and penalizes the less relevant one.
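The per-pair gradients in (6) translate directly into a stochastic update step. A sketch under the ascent convention used above, assuming some positive `lam` value supplied by the chosen lambda (`lambda_pair_update` and its arguments are hypothetical, not the authors' code):

```python
import numpy as np

def lambda_pair_update(U, V, u, i, j, lam, eta=0.001):
    """One ascent step on the pair (i, j) for user u (R_ui > R_uj), per (6)."""
    gVi = lam * U[u]             # dC/dV_i: pull V_i toward U_u
    gVj = -lam * U[u]            # dC/dV_j: push V_j away from U_u
    gUu = lam * (V[i] - V[j])    # dC/dU_u: move U_u toward the preferred item
    V[i] += eta * gVi
    V[j] += eta * gVj
    U[u] += eta * gUu

rng = np.random.default_rng(1)
U = rng.normal(size=(2, 3))
V = rng.normal(size=(4, 3))
gap_before = U[0] @ (V[1] - V[2])        # s_u,i - s_u,j before the step
lambda_pair_update(U, V, u=0, i=1, j=2, lam=0.5, eta=0.1)
gap_after = U[0] @ (V[1] - V[2])         # the score gap widens
```

Each step strictly enlarges the score gap between the preferred and less-relevant item, which is exactly the behavior a positive lambda is meant to guarantee.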
Here we emphasize the positive value of lambda: a positive gradient always enlarges the gap between each pair of items with different relevance levels and leads to faster optimization.

C. Popular Item Effect

As will be shown later, we found that the existence of popular items can cause serious problems when applying the concept of lambda in MF. Moreover, we will show that the phenomenon is universal for MF models with positive gradient λ^u_{î,j} > 0 for all j with R_{u,î} > R_{u,j}, such as BPR-MF [5]. Suppose there exists a popular item î liked by all users who have rated it. Then for each such user u, the predicted score of item î keeps climbing during the optimization process, since the gradient is positive: it pushes the score U_u V_î^T upwards. Moreover, if the latent factors of all users observing the item are similar, increasing the predicted score of î for one such user does not decrease the predicted score of î for the other users who observe î. Hence, the latent factor of item î keeps growing and soon causes overflow. Formally, let us denote the set of users observing î in the training set as Rel(î), the rating of î by user k as R_{k,î}, and the latent factors at iteration t as U^t_k / V^t_î. The above discussion then yields Theorem 1, whose proof is given in the supplementary materials.

Theorem 1 (Popular Item Theorem): Suppose there exists an item î such that for all users k ∈ Rel(î), R_{k,î} ≥ R_{k,j} for every other observed item j of user k. Furthermore, suppose that after a certain iteration τ, the latent factors of all users k ∈ Rel(î) converge to a certain extent; that is, there exists a vector Ū such that for all

k ∈ Rel(î) and all iterations t > τ, inner-product(U^t_k, Ū) > 0. Then the norm of V_î will eventually grow to infinity for any MF model satisfying the constraint λ^u_{î,j} > 0 (i.e. ∂C^u_{î,j}/∂s^u_î > 0) for all j with R_{u,î} > R_{u,j}, as shown below:

lim_{t→∞} ‖V^t_î‖_2 = ∞

Note that the existence of such an item î is easily satisfied, since the set Rel(î) can be of arbitrary size (even a single user). The other condition in Theorem 1, namely the convergence of the latent factors of users observing î, also happens frequently in CF, since we generally believe that users with similar transaction histories, i.e. buying the same item î, should have similar latent factors. Furthermore, since the theorem only requires λ^u_{î,j} > 0 for all j with R_{u,î} > R_{u,j}, it is not restricted to LambdaMF but applies to all other matrix factorization models satisfying this constraint. Empirical evidence is given in the experiment section.

D. Regularization in LambdaMF

To address the above concern, an intuitive solution is to add a regularization term to the original cost C (we call this new cost Ċ). For example, the commonly used L2 regularization term, i.e. weight decay, can confine the norms of the matrices U and V and thus restrict the complexity of the model. The new cost function Ċ can be formulated as

Ċ = Σ_{(u,i,j) ∈ data} C^u_{i,j} − (α/2) (‖U‖²_2 + ‖V‖²_2)   (7)

However, since the norm of V_î can grow very fast, the L2 regularization term may not effectively stop it from going to infinity when the effect of weight decay is not significant. At the other extreme, if α is large, the capability of the model is strongly restricted even when the norms of all parameters are small. Hence, we argue that such norm-restriction-based regularization is blind: it cannot adapt its intensity to different magnitudes of parameter norms.
Here, we propose another regularization that confines the inner product U_u V_i^T (the prediction outcome of the model) to be as close as possible to the actual rating R_{u,i} for every observed rating (u, i), as defined in (8):

Ĉ = Σ_{(u,i,j) ∈ data} C^u_{i,j} − (α/2) Σ_{(u,i) ∈ data} (R_{u,i} − U_u V_i^T)²   (8)

∂Ĉ^u_{i,j}/∂V_i = λ^u_{i,j} U_u + α (R_{u,i} − U_u V_i^T) U_u
∂Ĉ^u_{i,j}/∂V_j = −λ^u_{i,j} U_u + α (R_{u,j} − U_u V_j^T) U_u
∂Ĉ^u_{i,j}/∂U_u = λ^u_{i,j} (V_i − V_j) + α (R_{u,i} − U_u V_i^T) V_i + α (R_{u,j} − U_u V_j^T) V_j   (9)

We choose the squared error between R_{u,i} and U_u V_i^T because it is differentiable and thus can be optimized easily. This quantity is known as mean squared error (MSE), which is often adopted as the objective function in rating prediction tasks [11]. There are two advantages to adopting the MSE regularization term.

1) Intensity-adaptive regularization: It can be derived from (8) that the gradient of Ĉ with respect to the parameters is (9). This regularization adapts its intensity to the difference between the predicted score and the real rating through (R_{u,i} − U_u V_i^T); when the predicted score varies, the regularization power also adjusts linearly. In short, the adaptation power of MSE regularization is automatic, with no manipulation of the value α needed.

2) Inductive transfer with MSE: Inductive transfer [12] is a technique that uses a source task T_S to help the learning of a target task T_T. In our scenario, T_S corresponds to MSE while T_T corresponds to the ranking measure f. Moreover, MSE presents a way to model learning to rank with point-wise learning: an optimal MSE = 0 also implies an optimal NDCG = 1. Hence, the proposed regularization can be viewed as adopting an inductive transfer strategy to enhance the performance of LambdaMF.

E. Algorithm and Complexity Analysis

Once the lambda gradient and regularization term are derived, we are ready to define the algorithm and analyze its time complexity. The previous sections provided a general definition of λ, but how to choose λ remains unsolved.
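The regularized gradients in (9) can likewise be sketched as a single pair update; `pair_update_mse_reg` and the `alpha`/`eta` values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pair_update_mse_reg(U, V, R, u, i, j, lam, alpha=0.5, eta=0.1):
    """One ascent step of (9): lambda term plus the MSE regularization term."""
    err_i = R[u, i] - U[u] @ V[i]   # rating residual for item i
    err_j = R[u, j] - U[u] @ V[j]   # rating residual for item j
    gVi = lam * U[u] + alpha * err_i * U[u]
    gVj = -lam * U[u] + alpha * err_j * U[u]
    gUu = lam * (V[i] - V[j]) + alpha * err_i * V[i] + alpha * err_j * V[j]
    V[i] += eta * gVi
    V[j] += eta * gVj
    U[u] += eta * gUu

# With lam = 0 the update reduces to plain MSE steps, pulling each prediction
# toward its observed rating — the intensity-adaptive behavior described above.
U = np.array([[1.0, 0.0]])
V = np.array([[0.5, 0.0], [0.2, 0.0]])
R = np.array([[2.0, 1.0]])
err_before = R[0, 0] - U[0] @ V[0]
pair_update_mse_reg(U, V, R, u=0, i=0, j=1, lam=0.0)
err_after = R[0, 0] - U[0] @ V[0]
```

The regularizing force scales with the residual itself, so it is strong exactly when a score (such as that of a runaway popular item) drifts far from its observed rating.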
In [9], λ is defined as in (4); however, we argue that the multiplication by the RankNet gradient is irrelevant to the target function. In addition, the computation of the RankNet gradient involves the exponential function, which is time-consuming. Therefore, given a user u, a ranking measure f, and an item ranking list ξ, for every pair of items i and j with R_{u,i} > R_{u,j}, we adopt the absolute ranking measure difference between ξ and ξ', where ξ' is the same list as ξ except that the rankings of i and j are swapped. That is,

λ^u_{i,j} = |f(ξ) − f(ξ')|   (10)

This definition can be instantiated for various ranking measures. Concretely, applying NDCG as the ranking measure and denoting the ranking of item i as l_i, (10) becomes:

|f(ξ) − f(ξ')| = |2^{R_i} − 2^{R_j}| · |1/log_2(1 + l_i) − 1/log_2(1 + l_j)| / max(DCG)   (11)

To simplify the following derivation, assume that the inner product of latent factors takes O(1) time. Given N̄ observed items for a user, the computation of ξ takes O(N̄ log N̄) time since it requires sorting, and so does the computation of a single λ^u_{i,j}. Hence, if plain SGD is applied, updating one pair of items for a user takes O(N̄ log N̄) time in total, which is expensive considering that only a single pair of items is updated. To overcome this deficiency, we propose to conduct mini-batch gradient descent: after sorting the observed item list of a user, we compute the gradients for all observed pairs at once. As a result, O(N̄ log N̄) time is still needed to do

the sorting and max(DCG). The computation of each λ^u_{i,j} then takes constant time, with O(N̄²) pairs in total, so updating all O(N̄²) pairs takes O(N̄²) time. Effectively, updating each pair of items takes O(1) time, as the sorting cost is amortized within the mini-batch learning structure. To sum up, the whole procedure is depicted in Algorithm 1.

Algorithm 1: Learning LambdaMF
  input: R_{M,N}, learning rate η, α, latent factor dimension d, number of iterations n_iter, target ranking measure f
  Initialize U_{M,d}, V_{N,d} randomly
  for t = 1 to n_iter do
    for u = 1 to M do
      ΔU_u ← 0
      for all observed i ∈ R_u do ΔV_i ← 0
      ξ ← list of observed items of u, sorted by predicted score
      for all observed pairs (i, j) ∈ R_u with R_{u,i} > R_{u,j} do
        ξ' ← ξ with i, j swapped
        accumulate ΔU_u, ΔV_i, ΔV_j following Equations (9), (10)
      U_u += η ΔU_u
      for all observed i ∈ R_u do V_i += η ΔV_i
  return U, V

Since there is an enormous number of items in a recommender system, the O(N̄²) time complexity may seem non-negligible. However, the sparsity of the data compensates for this drawback: due to data sparsity, the number of observed items per user N̄ is negligible compared to the total number of items N. For example, with 99% sparsity, the computation over all pairs takes 10⁻⁴ N² ≪ N² time. In [7], by contrast, each λ is computed over all N items, which makes applying lambda infeasible.

IV. EXPERIMENT

In this section, we evaluate the performance of LambdaMF against other state-of-the-art recommendation methods. Furthermore, we demonstrate the optimality of LambdaMF and the stability of the regularization.

A. Comparison Methods

In this work, we adopt NDCG as the target ranking function to demonstrate the performance of LambdaMF; applying LambdaMF to other ranking functions is also possible but not included in this paper. We compare LambdaMF with several state-of-the-art LTR recommendation methods.
Table I. STATISTICS OF DATASETS

              Netflix    MovieLens
# of users    480,189    943
# of items    17,770     1,682
sparsity      98.82%     93.70%

PopRec: Popular Recommendation (PopRec) is a strong non-CF baseline. The idea of PopRec is to give users a non-personalized recommendation according to the popularity of items in the training set: the more times an item appears in training, the higher the item is ranked.

CoFiRank: CoFiRank is a state-of-the-art MF method minimizing a convex upper bound of (1 − NDCG) [2]. CoFiRank represents the choice of formulating an upper bound as the learning function, which differs from our choice of formulating the gradient. In their work, several other loss functions are also evaluated for NDCG. We compare with the NDCG loss function (denoted CoFi NDCG) and the loss function with the best performance reported in the original paper (denoted CoFi Best).

ListRank-MF: ListRank-MF is another state-of-the-art MF method optimizing the cross entropy of top-one probabilities [6], implemented using the softmax function. Though the top-one probability seems unrelated to NDCG, the experiments show its superiority over CoFiRank in some settings.

B. Experiment Results

We conduct experiments on the MovieLens 100K and Netflix datasets as in [2]; the statistics are shown in Table I. For each dataset, we apply weak generalization [2], [6] to produce the experimental data. That is, we first remove users who rated fewer than 20, 30, and 60 items (such as movies). Then, for the remaining users, we sample N = 10, 20, 50 ratings per user for the training dataset and put the remaining ratings in the testing dataset. As a result, every user has at least 10 relevant items in the testing set, which favors the usage of NDCG@10 as our evaluation function. Similar to [2], [6], which fixed all model parameters in their experiments, we tune 5 sets of parameters on the MovieLens 100K N = 20 dataset and set η = 0.001, α = 0.5, and n_iter = 250 for our experiments.
For N = 10, 20, 50, we construct the experimental datasets 10 times for each N to avoid bias. In each experimental dataset, we run LambdaMF 10 times for MovieLens 100K and once for Netflix, following the same setting as [2]. Finally, we report the average NDCG@10 across the 10 experimental datasets for N = 10, 20, 50. The experimental results for MovieLens 100K and Netflix are shown in Table II and Table III, respectively. Note that for LambdaMF, we report results for two different regularization terms, L2 and MSE. For L2 regularization, we apply weight decay once to each related latent factor while updating a single user. For all experiments,

Table II. (MEAN, STANDARD DEVIATION) OF NDCG@10 ON MOVIELENS 100K

                  N=10               N=20               N=50
PopRec            (0.5995, 0.0000)   (0.6202, 0.0000)   (0.6310, 0.0000)
CoFi NDCG         (0.6400, 0.0061)   (0.6307, 0.0062)   (0.6076, 0.0077)
CoFi Best         (0.6420, 0.0252)   (0.6686, 0.0058)   (0.7169, 0.0059)
ListRank MF       (0.6943, 0.0050)   (0.6940, 0.0036)   (0.6881, 0.0052)
LambdaMF L2       (0.5518, 0.0066)   (0.5813, 0.0074)   (0.6471, 0.0074)
LambdaMF MSE      (0.7119, 0.0005)   (0.7126, 0.0008)   (0.7172, 0.0013)

Table III. MEAN NDCG@10 ON NETFLIX; THE RESULTS OF COFIRANK FOR N=50 AND OF LISTRANK MF ARE NOT AVAILABLE, SINCE THE AUTHORS DO NOT REPORT THEM OR DISCLOSE THE COMPLETE PARAMETERS

                    N=10     N=20     N=50
PopRec              0.5175   0.5163   0.5293
CoFi NDCG           0.6081   0.6204   n/a
CoFi Best           0.6082   0.6287   n/a
LambdaMF L2         0.7550   0.7586   0.7464
LambdaMF MSE        0.7171   0.7163   0.7218
LambdaMF MSE best   0.7558   0.7586   0.7664

we found that LambdaMF with MSE regularization achieves superior performance compared to all other competitors. On Netflix, LambdaMF with L2 regularization yields a better result than MSE regularization, which seems to be due to the tolerability of strong regularization power on the large Netflix dataset; still, MSE regularization performs better in general across all experiments. Moreover, we found that LambdaMF with MSE regularization using fewer iterations achieves the best result in all Netflix experimental settings with the same parameters (LambdaMF MSE best). Finally, we conduct an unpaired t-test comparing LambdaMF MSE with the other models; results with a two-tailed p-value below the significance threshold are marked as extremely significant improvements. Besides, we also ran experiments for LambdaMF without regularization, and it led to overflow after a certain number of iterations in all cases.

C. Optimality of LambdaMF

Even with superior performance, one may still argue that the learning of LambdaMF is only an approximation of optimizing the ranking measure, not a real optimization of the ranking measure.
The question was once addressed in [10] using Monte Carlo hypothesis testing of the weight vectors of the underlying model; the testing showed that LambdaRank achieved a local optimum with 99% confidence. In this work, however, we observe that LambdaMF with MSE regularization is capable of achieving a global optimum on all our experimental datasets given enough iterations; that is, the training NDCG@10 reaches 1.0. Furthermore, NDCG@10 is non-decreasing over all iterations until convergence on the Netflix dataset, and non-decreasing for at least 98.4% of the iterations on the MovieLens 100K dataset before reaching an optimum.

V. CONCLUSION

In this work, we propose a novel framework for lambda in a recommendation scenario. After deriving a model incorporating MF and lambda, we further prove that ranking-based MF models can be ineffective due to overflow in certain circumstances. To deal with this problem, an adjustment of our model using regularization is proposed; note that such regularization is not emphasized in a typical LTR problem. In addition, the update step for each pair of items takes effectively O(1) time in LambdaMF. We empirically show that LambdaMF outperforms the state of the art in terms of NDCG@10. Extended experiments demonstrate that LambdaMF exhibits global optimality and directly optimizes the target ranking function. The insensitivity of MSE regularization in the experiments also demonstrates its stability.

ACKNOWLEDGMENT

This work is supported by Taiwan government MOST funding numbers E MY2 and E MY2.

REFERENCES

[1] Y. Shi, A. Karatzoglou, L. Baltrunas, M. Larson, N. Oliver, and A. Hanjalic, "CLiMF: learning to maximize reciprocal rank with collaborative less-is-more filtering," in Proceedings of the Sixth ACM Conference on Recommender Systems. ACM, 2012.
[2] M. Weimer, A. Karatzoglou, Q. V. Le, and A. Smola, "Maximum margin matrix factorization for collaborative ranking," Advances in Neural Information Processing Systems, 2007.
[3] C. J. C. Burges, R. Ragno, and Q. V. Le, "Learning to rank with nonsmooth cost functions," Proceedings of the Advances in Neural Information Processing Systems, vol. 19, 2006.
[4] C. J. Burges, "From RankNet to LambdaRank to LambdaMART: An overview," Learning, vol. 11, 2010.
[5] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, "BPR: Bayesian personalized ranking from implicit feedback," in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009.
[6] Y. Shi, M. Larson, and A. Hanjalic, "List-wise learning to rank with matrix factorization for collaborative filtering," in Proceedings of the Fourth ACM Conference on Recommender Systems. ACM, 2010.
[7] W. Zhang, T. Chen, J. Wang, and Y. Yu, "Optimizing top-n collaborative filtering via dynamic negative item sampling," in Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2013.
[8] P. Cremonesi, Y. Koren, and R. Turrin, "Performance of recommender algorithms on top-n recommendation tasks," in Proceedings of the Fourth ACM Conference on Recommender Systems. ACM, 2010.
[9] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender, "Learning to rank using gradient descent," in Proceedings of the 22nd International Conference on Machine Learning. ACM, 2005.
[10] P. Donmez, K. M. Svore, and C. J. Burges, "On the local optimality of LambdaRank," in Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2009.
[11] P.-L. Chen, C.-T. Tsai, Y.-N. Chen, K.-C. Chou, C.-L. Li, C.-H. Tsai, K.-W. Wu, Y.-C. Chou, C.-Y. Li, W.-S. Lin et al., "A linear ensemble of individual and blended models for music rating prediction," KDD Cup, 2011.
[12] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, 2010.


Decision Making in Complex Environments. Lecture 2 Ratings and Introduction to Analytic Network Process

Decision Making in Complex Environments. Lecture 2 Ratings and Introduction to Analytic Network Process Decision Making in Complex Environments Lectre 2 Ratings and Introdction to Analytic Network Process Lectres Smmary Lectre 5 Lectre 1 AHP=Hierar chies Lectre 3 ANP=Networks Strctring Complex Models with

More information

1 Undiscounted Problem (Deterministic)

1 Undiscounted Problem (Deterministic) Lectre 9: Linear Qadratic Control Problems 1 Undisconted Problem (Deterministic) Choose ( t ) 0 to Minimize (x trx t + tq t ) t=0 sbject to x t+1 = Ax t + B t, x 0 given. x t is an n-vector state, t a

More information

A Model-Free Adaptive Control of Pulsed GTAW

A Model-Free Adaptive Control of Pulsed GTAW A Model-Free Adaptive Control of Plsed GTAW F.L. Lv 1, S.B. Chen 1, and S.W. Dai 1 Institte of Welding Technology, Shanghai Jiao Tong University, Shanghai 00030, P.R. China Department of Atomatic Control,

More information

Chapter 4 Supervised learning:

Chapter 4 Supervised learning: Chapter 4 Spervised learning: Mltilayer Networks II Madaline Other Feedforward Networks Mltiple adalines of a sort as hidden nodes Weight change follows minimm distrbance principle Adaptive mlti-layer

More information

Essentials of optimal control theory in ECON 4140

Essentials of optimal control theory in ECON 4140 Essentials of optimal control theory in ECON 4140 Things yo need to know (and a detail yo need not care abot). A few words abot dynamic optimization in general. Dynamic optimization can be thoght of as

More information

Step-Size Bounds Analysis of the Generalized Multidelay Adaptive Filter

Step-Size Bounds Analysis of the Generalized Multidelay Adaptive Filter WCE 007 Jly - 4 007 London UK Step-Size onds Analysis of the Generalized Mltidelay Adaptive Filter Jnghsi Lee and Hs Chang Hang Abstract In this paper we analyze the bonds of the fixed common step-size

More information

Sensitivity Analysis in Bayesian Networks: From Single to Multiple Parameters

Sensitivity Analysis in Bayesian Networks: From Single to Multiple Parameters Sensitivity Analysis in Bayesian Networks: From Single to Mltiple Parameters Hei Chan and Adnan Darwiche Compter Science Department University of California, Los Angeles Los Angeles, CA 90095 {hei,darwiche}@cs.cla.ed

More information

Nonlinear parametric optimization using cylindrical algebraic decomposition

Nonlinear parametric optimization using cylindrical algebraic decomposition Proceedings of the 44th IEEE Conference on Decision and Control, and the Eropean Control Conference 2005 Seville, Spain, December 12-15, 2005 TC08.5 Nonlinear parametric optimization sing cylindrical algebraic

More information

SQL-Rank: A Listwise Approach to Collaborative Ranking

SQL-Rank: A Listwise Approach to Collaborative Ranking SQL-Rank: A Listwise Approach to Collaborative Ranking Liwei Wu Depts of Statistics and Computer Science UC Davis ICML 18, Stockholm, Sweden July 10-15, 2017 Joint work with Cho-Jui Hsieh and James Sharpnack

More information

UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL

UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL 8th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING - 19-1 April 01, Tallinn, Estonia UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL Põdra, P. & Laaneots, R. Abstract: Strength analysis is a

More information

Lecture Notes On THEORY OF COMPUTATION MODULE - 2 UNIT - 2

Lecture Notes On THEORY OF COMPUTATION MODULE - 2 UNIT - 2 BIJU PATNAIK UNIVERSITY OF TECHNOLOGY, ODISHA Lectre Notes On THEORY OF COMPUTATION MODULE - 2 UNIT - 2 Prepared by, Dr. Sbhend Kmar Rath, BPUT, Odisha. Tring Machine- Miscellany UNIT 2 TURING MACHINE

More information

Stability of Model Predictive Control using Markov Chain Monte Carlo Optimisation

Stability of Model Predictive Control using Markov Chain Monte Carlo Optimisation Stability of Model Predictive Control sing Markov Chain Monte Carlo Optimisation Elilini Siva, Pal Golart, Jan Maciejowski and Nikolas Kantas Abstract We apply stochastic Lyapnov theory to perform stability

More information

System identification of buildings equipped with closed-loop control devices

System identification of buildings equipped with closed-loop control devices System identification of bildings eqipped with closed-loop control devices Akira Mita a, Masako Kamibayashi b a Keio University, 3-14-1 Hiyoshi, Kohok-k, Yokohama 223-8522, Japan b East Japan Railway Company

More information

Move Blocking Strategies in Receding Horizon Control

Move Blocking Strategies in Receding Horizon Control Move Blocking Strategies in Receding Horizon Control Raphael Cagienard, Pascal Grieder, Eric C. Kerrigan and Manfred Morari Abstract In order to deal with the comptational brden of optimal control, it

More information

E ect Of Quadrant Bow On Delta Undulator Phase Errors

E ect Of Quadrant Bow On Delta Undulator Phase Errors LCLS-TN-15-1 E ect Of Qadrant Bow On Delta Undlator Phase Errors Zachary Wolf SLAC Febrary 18, 015 Abstract The Delta ndlator qadrants are tned individally and are then assembled to make the tned ndlator.

More information

Fast Path-Based Neural Branch Prediction

Fast Path-Based Neural Branch Prediction Fast Path-Based Neral Branch Prediction Daniel A. Jiménez http://camino.rtgers.ed Department of Compter Science Rtgers, The State University of New Jersey Overview The context: microarchitectre Branch

More information

Path-SGD: Path-Normalized Optimization in Deep Neural Networks

Path-SGD: Path-Normalized Optimization in Deep Neural Networks Path-SGD: Path-Normalized Optimization in Deep Neral Networks Behnam Neyshabr, Rslan Salakhtdinov and Nathan Srebro Toyota Technological Institte at Chicago Department of Compter Science, University of

More information

1. Tractable and Intractable Computational Problems So far in the course we have seen many problems that have polynomial-time solutions; that is, on

1. Tractable and Intractable Computational Problems So far in the course we have seen many problems that have polynomial-time solutions; that is, on . Tractable and Intractable Comptational Problems So far in the corse we have seen many problems that have polynomial-time soltions; that is, on a problem instance of size n, the rnning time T (n) = O(n

More information

Lecture 8: September 26

Lecture 8: September 26 10-704: Information Processing and Learning Fall 2016 Lectrer: Aarti Singh Lectre 8: September 26 Note: These notes are based on scribed notes from Spring15 offering of this corse. LaTeX template cortesy

More information

Decision Oriented Bayesian Design of Experiments

Decision Oriented Bayesian Design of Experiments Decision Oriented Bayesian Design of Experiments Farminder S. Anand*, Jay H. Lee**, Matthew J. Realff*** *School of Chemical & Biomoleclar Engineering Georgia Institte of echnology, Atlanta, GA 3332 USA

More information

An Investigation into Estimating Type B Degrees of Freedom

An Investigation into Estimating Type B Degrees of Freedom An Investigation into Estimating Type B Degrees of H. Castrp President, Integrated Sciences Grop Jne, 00 Backgrond The degrees of freedom associated with an ncertainty estimate qantifies the amont of information

More information

Discontinuous Fluctuation Distribution for Time-Dependent Problems

Discontinuous Fluctuation Distribution for Time-Dependent Problems Discontinos Flctation Distribtion for Time-Dependent Problems Matthew Hbbard School of Compting, University of Leeds, Leeds, LS2 9JT, UK meh@comp.leeds.ac.k Introdction For some years now, the flctation

More information

Applying Fuzzy Set Approach into Achieving Quality Improvement for Qualitative Quality Response

Applying Fuzzy Set Approach into Achieving Quality Improvement for Qualitative Quality Response Proceedings of the 007 WSES International Conference on Compter Engineering and pplications, Gold Coast, stralia, Janary 17-19, 007 5 pplying Fzzy Set pproach into chieving Qality Improvement for Qalitative

More information

Theoretical and Experimental Implementation of DC Motor Nonlinear Controllers

Theoretical and Experimental Implementation of DC Motor Nonlinear Controllers Theoretical and Experimental Implementation of DC Motor Nonlinear Controllers D.R. Espinoza-Trejo and D.U. Campos-Delgado Facltad de Ingeniería, CIEP, UASLP, espinoza trejo dr@aslp.mx Facltad de Ciencias,

More information

Classify by number of ports and examine the possible structures that result. Using only one-port elements, no more than two elements can be assembled.

Classify by number of ports and examine the possible structures that result. Using only one-port elements, no more than two elements can be assembled. Jnction elements in network models. Classify by nmber of ports and examine the possible strctres that reslt. Using only one-port elements, no more than two elements can be assembled. Combining two two-ports

More information

Second-Order Wave Equation

Second-Order Wave Equation Second-Order Wave Eqation A. Salih Department of Aerospace Engineering Indian Institte of Space Science and Technology, Thirvananthapram 3 December 016 1 Introdction The classical wave eqation is a second-order

More information

Technical Note. ODiSI-B Sensor Strain Gage Factor Uncertainty

Technical Note. ODiSI-B Sensor Strain Gage Factor Uncertainty Technical Note EN-FY160 Revision November 30, 016 ODiSI-B Sensor Strain Gage Factor Uncertainty Abstract Lna has pdated or strain sensor calibration tool to spport NIST-traceable measrements, to compte

More information

4 Exact laminar boundary layer solutions

4 Exact laminar boundary layer solutions 4 Eact laminar bondary layer soltions 4.1 Bondary layer on a flat plate (Blasis 1908 In Sec. 3, we derived the bondary layer eqations for 2D incompressible flow of constant viscosity past a weakly crved

More information

On the tree cover number of a graph

On the tree cover number of a graph On the tree cover nmber of a graph Chassidy Bozeman Minerva Catral Brendan Cook Oscar E. González Carolyn Reinhart Abstract Given a graph G, the tree cover nmber of the graph, denoted T (G), is the minimm

More information

FRTN10 Exercise 12. Synthesis by Convex Optimization

FRTN10 Exercise 12. Synthesis by Convex Optimization FRTN Exercise 2. 2. We want to design a controller C for the stable SISO process P as shown in Figre 2. sing the Yola parametrization and convex optimization. To do this, the control loop mst first be

More information

Assignment Fall 2014

Assignment Fall 2014 Assignment 5.086 Fall 04 De: Wednesday, 0 December at 5 PM. Upload yor soltion to corse website as a zip file YOURNAME_ASSIGNMENT_5 which incldes the script for each qestion as well as all Matlab fnctions

More information

Solving a System of Equations

Solving a System of Equations Solving a System of Eqations Objectives Understand how to solve a system of eqations with: - Gass Elimination Method - LU Decomposition Method - Gass-Seidel Method - Jacobi Method A system of linear algebraic

More information

FREQUENCY DOMAIN FLUTTER SOLUTION TECHNIQUE USING COMPLEX MU-ANALYSIS

FREQUENCY DOMAIN FLUTTER SOLUTION TECHNIQUE USING COMPLEX MU-ANALYSIS 7 TH INTERNATIONAL CONGRESS O THE AERONAUTICAL SCIENCES REQUENCY DOMAIN LUTTER SOLUTION TECHNIQUE USING COMPLEX MU-ANALYSIS Yingsong G, Zhichn Yang Northwestern Polytechnical University, Xi an, P. R. China,

More information

Optimal Control of a Heterogeneous Two Server System with Consideration for Power and Performance

Optimal Control of a Heterogeneous Two Server System with Consideration for Power and Performance Optimal Control of a Heterogeneos Two Server System with Consideration for Power and Performance by Jiazheng Li A thesis presented to the University of Waterloo in flfilment of the thesis reqirement for

More information

Discussion of The Forward Search: Theory and Data Analysis by Anthony C. Atkinson, Marco Riani, and Andrea Ceroli

Discussion of The Forward Search: Theory and Data Analysis by Anthony C. Atkinson, Marco Riani, and Andrea Ceroli 1 Introdction Discssion of The Forward Search: Theory and Data Analysis by Anthony C. Atkinson, Marco Riani, and Andrea Ceroli Søren Johansen Department of Economics, University of Copenhagen and CREATES,

More information

Stair Matrix and its Applications to Massive MIMO Uplink Data Detection

Stair Matrix and its Applications to Massive MIMO Uplink Data Detection 1 Stair Matrix and its Applications to Massive MIMO Uplink Data Detection Fan Jiang, Stdent Member, IEEE, Cheng Li, Senior Member, IEEE, Zijn Gong, Senior Member, IEEE, and Roy S, Stdent Member, IEEE Abstract

More information

arxiv:quant-ph/ v4 14 May 2003

arxiv:quant-ph/ v4 14 May 2003 Phase-transition-like Behavior of Qantm Games arxiv:qant-ph/0111138v4 14 May 2003 Jiangfeng D Department of Modern Physics, University of Science and Technology of China, Hefei, 230027, People s Repblic

More information

Adaptive Fault-tolerant Control with Control Allocation for Flight Systems with Severe Actuator Failures and Input Saturation

Adaptive Fault-tolerant Control with Control Allocation for Flight Systems with Severe Actuator Failures and Input Saturation 213 American Control Conference (ACC) Washington, DC, USA, Jne 17-19, 213 Adaptive Falt-tolerant Control with Control Allocation for Flight Systems with Severe Actator Failres and Inpt Satration Man Wang,

More information

Lecture Notes: Finite Element Analysis, J.E. Akin, Rice University

Lecture Notes: Finite Element Analysis, J.E. Akin, Rice University 9. TRUSS ANALYSIS... 1 9.1 PLANAR TRUSS... 1 9. SPACE TRUSS... 11 9.3 SUMMARY... 1 9.4 EXERCISES... 15 9. Trss analysis 9.1 Planar trss: The differential eqation for the eqilibrim of an elastic bar (above)

More information

Information Source Detection in the SIR Model: A Sample Path Based Approach

Information Source Detection in the SIR Model: A Sample Path Based Approach Information Sorce Detection in the SIR Model: A Sample Path Based Approach Kai Zh and Lei Ying School of Electrical, Compter and Energy Engineering Arizona State University Tempe, AZ, United States, 85287

More information

Section 7.4: Integration of Rational Functions by Partial Fractions

Section 7.4: Integration of Rational Functions by Partial Fractions Section 7.4: Integration of Rational Fnctions by Partial Fractions This is abot as complicated as it gets. The Method of Partial Fractions Ecept for a few very special cases, crrently we have no way to

More information

Random Walks on the Bipartite-Graph for Personalized Recommendation

Random Walks on the Bipartite-Graph for Personalized Recommendation International Conference on Compter Science and Artificial Intelligence (ICCSAI 213) ISBN: 978-1-6595-132-4 Random Wals on the Bipartite-Graph for Personalized Recommendation Zhong-yo Pei, Chn-heng Chiang,

More information

Optimal search: a practical interpretation of information-driven sensor management

Optimal search: a practical interpretation of information-driven sensor management Optimal search: a practical interpretation of information-driven sensor management Fotios Katsilieris, Yvo Boers and Hans Driessen Thales Nederland B.V. Hengelo, the Netherlands Email: {Fotios.Katsilieris,

More information

Creating a Sliding Mode in a Motion Control System by Adopting a Dynamic Defuzzification Strategy in an Adaptive Neuro Fuzzy Inference System

Creating a Sliding Mode in a Motion Control System by Adopting a Dynamic Defuzzification Strategy in an Adaptive Neuro Fuzzy Inference System Creating a Sliding Mode in a Motion Control System by Adopting a Dynamic Defzzification Strategy in an Adaptive Nero Fzzy Inference System M. Onder Efe Bogazici University, Electrical and Electronic Engineering

More information

Estimating models of inverse systems

Estimating models of inverse systems Estimating models of inverse systems Ylva Jng and Martin Enqvist Linköping University Post Print N.B.: When citing this work, cite the original article. Original Pblication: Ylva Jng and Martin Enqvist,

More information

Downloaded 07/06/18 to Redistribution subject to SIAM license or copyright; see

Downloaded 07/06/18 to Redistribution subject to SIAM license or copyright; see SIAM J. SCI. COMPUT. Vol. 4, No., pp. A4 A7 c 8 Society for Indstrial and Applied Mathematics Downloaded 7/6/8 to 8.83.63.. Redistribtion sbject to SIAM license or copyright; see http://www.siam.org/jornals/ojsa.php

More information

Scheduling parallel jobs to minimize the makespan

Scheduling parallel jobs to minimize the makespan J Sched (006) 9:433 45 DOI 10.1007/s10951-006-8497-6 Schedling parallel jobs to minimize the makespan Berit Johannes C Springer Science + Bsiness Media, LLC 006 Abstract We consider the NP-hard problem

More information

Discussion Papers Department of Economics University of Copenhagen

Discussion Papers Department of Economics University of Copenhagen Discssion Papers Department of Economics University of Copenhagen No. 10-06 Discssion of The Forward Search: Theory and Data Analysis, by Anthony C. Atkinson, Marco Riani, and Andrea Ceroli Søren Johansen,

More information

Setting The K Value And Polarization Mode Of The Delta Undulator

Setting The K Value And Polarization Mode Of The Delta Undulator LCLS-TN-4- Setting The Vale And Polarization Mode Of The Delta Undlator Zachary Wolf, Heinz-Dieter Nhn SLAC September 4, 04 Abstract This note provides the details for setting the longitdinal positions

More information

Andrew W. Moore Professor School of Computer Science Carnegie Mellon University

Andrew W. Moore Professor School of Computer Science Carnegie Mellon University Spport Vector Machines Note to other teachers and sers of these slides. Andrew wold be delighted if yo fond this sorce material sefl in giving yor own lectres. Feel free to se these slides verbatim, or

More information

The Linear Quadratic Regulator

The Linear Quadratic Regulator 10 The Linear Qadratic Reglator 10.1 Problem formlation This chapter concerns optimal control of dynamical systems. Most of this development concerns linear models with a particlarly simple notion of optimality.

More information

Modelling by Differential Equations from Properties of Phenomenon to its Investigation

Modelling by Differential Equations from Properties of Phenomenon to its Investigation Modelling by Differential Eqations from Properties of Phenomenon to its Investigation V. Kleiza and O. Prvinis Kanas University of Technology, Lithania Abstract The Panevezys camps of Kanas University

More information

Chapter 2 Difficulties associated with corners

Chapter 2 Difficulties associated with corners Chapter Difficlties associated with corners This chapter is aimed at resolving the problems revealed in Chapter, which are cased b corners and/or discontinos bondar conditions. The first section introdces

More information

Safe Manual Control of the Furuta Pendulum

Safe Manual Control of the Furuta Pendulum Safe Manal Control of the Frta Pendlm Johan Åkesson, Karl Johan Åström Department of Atomatic Control, Lnd Institte of Technology (LTH) Box 8, Lnd, Sweden PSfrag {jakesson,kja}@control.lth.se replacements

More information

Bayes and Naïve Bayes Classifiers CS434

Bayes and Naïve Bayes Classifiers CS434 Bayes and Naïve Bayes Classifiers CS434 In this lectre 1. Review some basic probability concepts 2. Introdce a sefl probabilistic rle - Bayes rle 3. Introdce the learning algorithm based on Bayes rle (ths

More information

AN IMPROVED VARIATIONAL MODE DECOMPOSITION METHOD FOR INTERNAL WAVES SEPARATION

AN IMPROVED VARIATIONAL MODE DECOMPOSITION METHOD FOR INTERNAL WAVES SEPARATION 3rd Eropean Signal Processing Conference (EUSIPCO AN IMPROVED VARIATIONAL MODE DECOMPOSITION METHOD FOR INTERNAL WAVES SEPARATION Jeremy Schmitt, Ernesto Horne, Nelly Pstelni, Sylvain Jobad, Philippe Odier

More information

Imprecise Continuous-Time Markov Chains

Imprecise Continuous-Time Markov Chains Imprecise Continos-Time Markov Chains Thomas Krak *, Jasper De Bock, and Arno Siebes t.e.krak@.nl, a.p.j.m.siebes@.nl Utrecht University, Department of Information and Compting Sciences, Princetonplein

More information

A Single Species in One Spatial Dimension

A Single Species in One Spatial Dimension Lectre 6 A Single Species in One Spatial Dimension Reading: Material similar to that in this section of the corse appears in Sections 1. and 13.5 of James D. Mrray (), Mathematical Biology I: An introction,

More information

Ted Pedersen. Southern Methodist University. large sample assumptions implicit in traditional goodness

Ted Pedersen. Southern Methodist University. large sample assumptions implicit in traditional goodness Appears in the Proceedings of the Soth-Central SAS Users Grop Conference (SCSUG-96), Astin, TX, Oct 27-29, 1996 Fishing for Exactness Ted Pedersen Department of Compter Science & Engineering Sothern Methodist

More information

CHANNEL SELECTION WITH RAYLEIGH FADING: A MULTI-ARMED BANDIT FRAMEWORK. Wassim Jouini and Christophe Moy

CHANNEL SELECTION WITH RAYLEIGH FADING: A MULTI-ARMED BANDIT FRAMEWORK. Wassim Jouini and Christophe Moy CHANNEL SELECTION WITH RAYLEIGH FADING: A MULTI-ARMED BANDIT FRAMEWORK Wassim Joini and Christophe Moy SUPELEC, IETR, SCEE, Avene de la Bolaie, CS 47601, 5576 Cesson Sévigné, France. INSERM U96 - IFR140-

More information

A Characterization of Interventional Distributions in Semi-Markovian Causal Models

A Characterization of Interventional Distributions in Semi-Markovian Causal Models A Characterization of Interventional Distribtions in Semi-Markovian Casal Models Jin Tian and Changsng Kang Department of Compter Science Iowa State University Ames, IA 50011 {jtian, cskang}@cs.iastate.ed

More information

Control Performance Monitoring of State-Dependent Nonlinear Processes

Control Performance Monitoring of State-Dependent Nonlinear Processes Control Performance Monitoring of State-Dependent Nonlinear Processes Lis F. Recalde*, Hong Ye Wind Energy and Control Centre, Department of Electronic and Electrical Engineering, University of Strathclyde,

More information

i=1 y i 1fd i = dg= P N i=1 1fd i = dg.

i=1 y i 1fd i = dg= P N i=1 1fd i = dg. ECOOMETRICS II (ECO 240S) University of Toronto. Department of Economics. Winter 208 Instrctor: Victor Agirregabiria SOLUTIO TO FIAL EXAM Tesday, April 0, 208. From 9:00am-2:00pm (3 hors) ISTRUCTIOS: -

More information

Model Discrimination of Polynomial Systems via Stochastic Inputs

Model Discrimination of Polynomial Systems via Stochastic Inputs Model Discrimination of Polynomial Systems via Stochastic Inpts D. Georgiev and E. Klavins Abstract Systems biologists are often faced with competing models for a given experimental system. Unfortnately,

More information

Convergence analysis of ant colony learning

Convergence analysis of ant colony learning Delft University of Technology Delft Center for Systems and Control Technical report 11-012 Convergence analysis of ant colony learning J van Ast R Babška and B De Schtter If yo want to cite this report

More information

A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model

A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model entropy Article A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model Hyenkyn Woo School of Liberal Arts, Korea University of Technology and Edcation, Cheonan

More information

Control of a Power Assisted Lifting Device

Control of a Power Assisted Lifting Device Proceedings of the RAAD 212 21th International Workshop on Robotics in Alpe-Adria-Danbe Region eptember 1-13, 212, Napoli, Italy Control of a Power Assisted Lifting Device Dimeas otios a, Kostompardis

More information

3.1 The Basic Two-Level Model - The Formulas

3.1 The Basic Two-Level Model - The Formulas CHAPTER 3 3 THE BASIC MULTILEVEL MODEL AND EXTENSIONS In the previos Chapter we introdced a nmber of models and we cleared ot the advantages of Mltilevel Models in the analysis of hierarchically nested

More information

Nonparametric Identification and Robust H Controller Synthesis for a Rotational/Translational Actuator

Nonparametric Identification and Robust H Controller Synthesis for a Rotational/Translational Actuator Proceedings of the 6 IEEE International Conference on Control Applications Mnich, Germany, October 4-6, 6 WeB16 Nonparametric Identification and Robst H Controller Synthesis for a Rotational/Translational

More information

A Regulator for Continuous Sedimentation in Ideal Clarifier-Thickener Units

A Regulator for Continuous Sedimentation in Ideal Clarifier-Thickener Units A Reglator for Continos Sedimentation in Ideal Clarifier-Thickener Units STEFAN DIEHL Centre for Mathematical Sciences, Lnd University, P.O. Box, SE- Lnd, Sweden e-mail: diehl@maths.lth.se) Abstract. The

More information

Universal Scheme for Optimal Search and Stop

Universal Scheme for Optimal Search and Stop Universal Scheme for Optimal Search and Stop Sirin Nitinawarat Qalcomm Technologies, Inc. 5775 Morehose Drive San Diego, CA 92121, USA Email: sirin.nitinawarat@gmail.com Vengopal V. Veeravalli Coordinated

More information

Math 116 First Midterm October 14, 2009

Math 116 First Midterm October 14, 2009 Math 116 First Midterm October 14, 9 Name: EXAM SOLUTIONS Instrctor: Section: 1. Do not open this exam ntil yo are told to do so.. This exam has 1 pages inclding this cover. There are 9 problems. Note

More information

Development of Second Order Plus Time Delay (SOPTD) Model from Orthonormal Basis Filter (OBF) Model

Development of Second Order Plus Time Delay (SOPTD) Model from Orthonormal Basis Filter (OBF) Model Development of Second Order Pls Time Delay (SOPTD) Model from Orthonormal Basis Filter (OBF) Model Lemma D. Tfa*, M. Ramasamy*, Sachin C. Patwardhan **, M. Shhaimi* *Chemical Engineering Department, Universiti

More information

The Cryptanalysis of a New Public-Key Cryptosystem based on Modular Knapsacks

The Cryptanalysis of a New Public-Key Cryptosystem based on Modular Knapsacks The Cryptanalysis of a New Pblic-Key Cryptosystem based on Modlar Knapsacks Yeow Meng Chee Antoine Jox National Compter Systems DMI-GRECC Center for Information Technology 45 re d Ulm 73 Science Park Drive,

More information

Elements of Coordinate System Transformations

Elements of Coordinate System Transformations B Elements of Coordinate System Transformations Coordinate system transformation is a powerfl tool for solving many geometrical and kinematic problems that pertain to the design of gear ctting tools and

More information

Optimization via the Hamilton-Jacobi-Bellman Method: Theory and Applications

Optimization via the Hamilton-Jacobi-Bellman Method: Theory and Applications Optimization via the Hamilton-Jacobi-Bellman Method: Theory and Applications Navin Khaneja lectre notes taken by Christiane Koch Jne 24, 29 1 Variation yields a classical Hamiltonian system Sppose that

More information

* Matrix Factorization and Recommendation Systems

* Matrix Factorization and Recommendation Systems Matrix Factorization and Recommendation Systems Originally presented at HLF Workshop on Matrix Factorization with Loren Anderson (University of Minnesota Twin Cities) on 25 th September, 2017 15 th March,

More information

Research Article Permanence of a Discrete Predator-Prey Systems with Beddington-DeAngelis Functional Response and Feedback Controls

Research Article Permanence of a Discrete Predator-Prey Systems with Beddington-DeAngelis Functional Response and Feedback Controls Hindawi Pblishing Corporation Discrete Dynamics in Natre and Society Volme 2008 Article ID 149267 8 pages doi:101155/2008/149267 Research Article Permanence of a Discrete Predator-Prey Systems with Beddington-DeAngelis

More information

4.2 First-Order Logic
