GT-SEER: Geo-Temporal SEquential Embedding Rank for Point-of-interest Recommendation


Shenglin Zhao, Tong Zhao, Irwin King, and Michael R. Lyu
Department of Computer Science & Engineering
The Chinese University of Hong Kong, Shatin, N.T., Hong Kong

arXiv: v1 [cs.IR] 19 Jun 2016

Abstract—Point-of-interest (POI) recommendation is an important application in location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. However, previous POI recommendation systems model check-in sequences based on either tensor factorization or the Markov chain model, which cannot capture contextual check-in information in sequences. The contextual check-in information implies the complementary functions among POIs that compose an individual's daily check-in sequence. In this paper, we exploit the embedding learning technique to capture the contextual check-in information and further propose the SEquential Embedding Rank (SEER) model for POI recommendation. In particular, the SEER model learns user preferences via a pairwise ranking model under the sequential constraint modeled by the POI embedding learning method. Furthermore, we incorporate two important factors, i.e., temporal influence and geographical influence, into the SEER model to enhance the POI recommendation system. Due to the temporal variance of sequences on different days, we propose a temporal POI embedding model and incorporate the temporal POI representations into a temporal preference ranking model to establish the Temporal SEER (T-SEER) model. In addition, we incorporate the geographical influence into the T-SEER model and develop the Geo-Temporal SEER (GT-SEER) model. To verify the effectiveness of our proposed methods, we conduct elaborate experiments on two real-life datasets. Experimental results show that our proposed methods outperform state-of-the-art models. Compared with the best baseline competitor, the GT-SEER model improves at least 28% on both datasets for all metrics.

I. INTRODUCTION

Location-based social networks (LBSNs) such as Foursquare have become popular services that attract users to share their check-in behaviors, make friends, and write comments on points-of-interest (POIs). For example, Foursquare has attracted over 50 million people worldwide and recorded over 8 billion check-ins until now.1 To improve user experience in LBSNs by suggesting favorite locations, POI recommendation has emerged, which mines users' check-in sequences to recommend places where an individual has not been. POI recommendation not only helps users explore new interesting places in a city, but also helps business owners launch advertisements. Due to the significance of POI recommendation, a number of methods have been proposed to enhance the POI recommendation system [2], [6], [13], [35], [36]. In general, researchers learn the user preference and the sequence information to recommend POIs [19], [34], [4].

1 https://foursquare.com/about

Collaborative filtering techniques are used to learn the user preference [3], [13]-[15], [35]. In addition, tensor factorization and Markov chain models are employed to capture the sequential pattern of check-ins. For instance, researchers in [19], [34] exploit the category transition pattern in sequential check-ins to recommend POIs. Zhang et al. [4] propose an additive Markov chain model to explore the influence of the whole past sequence. Moreover, researchers in [3], [5] learn the transition probability of two successive check-ins in a latent feature space via a tensor factorization model to recommend the next new POIs. Although all previous studies have improved POI recommendation from the sequential modeling perspective, they cannot capture contextual check-in information from the whole sequence. In fact, POIs within a check-in sequence that traces an individual's daily activities always demonstrate a contextual and complementary property. For example, users often check in at a restaurant, a gym, and an office within the same sequence of one day. The three types of POIs compose a user's daily life: dining, work, and entertainment after work. Hence, POIs in a sequence are complementary from the function perspective and are highly correlated with such a contextual property. These facts motivate us to come up with an embedding method to capture the contextual information.

We exploit the embedding learning technique to capture the contextual check-in information and further propose the SEquential Embedding Rank (SEER) model for POI recommendation. Specifically, we learn the POI embeddings based on a popular neural language model, word2vec [23]. We treat each user as a document, a check-in sequence in one day as a sentence, and each POI as a word. Then, we learn the POI representations from check-in sequences in the embedding space. On the other hand, we treat the check-in activity as a kind of feedback and learn user preferences through a pairwise ranking model. In other words, we assume that a user prefers a checked-in POI to an unchecked one, and learn this kind of pairwise preference via a ranking model. On the basis of the POI embedding model and the pairwise preference ranking model, we propose the SEER model to combine them together. Moreover, we incorporate two important factors, i.e., temporal influence and geographical influence, into the SEER model to enhance system performance and propose the Temporal SEER (T-SEER) model and the Geo-Temporal SEER (GT-SEER) model.

Because user check-ins in LBSNs are time-sensitive, sequences on different days exhibit temporal variance. For example, users tend to check in at POIs around offices on weekdays while visiting shopping malls on weekends. Therefore, check-in sequences on different days naturally exhibit variant temporal characteristics: work on weekdays and entertainment on weekends. To this end, we define the temporal POI, which refers to a POI taking a specific temporal state (i.e., the day type, weekday or weekend) as context. Then, we learn the temporal POI embedding given the concatenation of the context POI and the temporal state. We incorporate the temporal POI embeddings into a temporal preference ranking model to establish the T-SEER model. In addition, we observe that users prefer to visit POIs that are geographically adjacent to their checked-in POIs. This geographical characteristic inspires us to advance the preference ranking model through more sophisticated pairwise preference relations that discriminate the unchecked POIs according to geographical information. Hence, we incorporate the geographical influence into the T-SEER model and develop the GT-SEER model.

The contributions of this paper are summarized as follows:
- By projecting every POI into one object in an embedding space, we learn POIs' contextual relations from check-in sequences through the word2vec framework. Our proposed SEER model better captures the sequential pattern, learning not only the transition probability of consecutive check-ins but also POIs' intrinsic relations represented in sequences. Compared with previous sequential models, the SEER model achieves more than 5% improvement.
- We propose the T-SEER model, which is the first work capturing the variant temporal features in sequences on different days. In addition, our model jointly learns the user preference and the sequential pattern. By incorporating the temporal influence, the T-SEER model improves the SEER model by about 10%.
- By exploiting a new way to incorporate the geographical influence, we develop the GT-SEER model, which improves the T-SEER model by about 15%.
From the model perspective, we advance the pairwise preference ranking method through discriminating the unchecked POIs according to geographical information.

The rest of this paper is organized as follows. In Section II, we review the related work. In Section III, we introduce two real-world datasets and report the empirical data analysis that motivates our methods. Next, we introduce our proposed methods, the SEER, T-SEER, and GT-SEER models, in Section IV. Then, we evaluate our proposed models in Section V. Finally, we conclude this paper and point out possible future work in Section VI.

II. RELATED WORK

In this section, we first demonstrate the recent progress of POI recommendation. Then, we report how the prior work exploits the sequential influence, temporal influence, and geographical influence to improve POI recommendation. Since our proposed methods adopt an embedding learning method, word2vec, to model check-in sequences, we also review the literature on the word2vec framework and its applications.

POI Recommendation. POI recommendation has attracted intensive academic attention recently. Most of the proposed methods are based on Collaborative Filtering (CF) techniques to learn user preferences on POIs. Researchers in [35], [37], [38] employ user-based CF to recommend POIs, while other researchers [2], [6], [7], [13], [15] leverage model-based CF, i.e., Matrix Factorization (MF) [11]. Furthermore, some researchers [16], [21] observe that it is better to treat check-ins as implicit feedback rather than explicit feedback; they utilize weighted regularized MF [1] to model this kind of implicit feedback. Other researchers model the implicit feedback through pairwise learning techniques, which assume that users prefer the checked-in POIs to the unchecked ones. Researchers in [3], [44] learn the pairwise preference via the Bayesian personalized ranking (BPR) loss [28]. Li et al. [14] propose a ranking-based CF model to recommend POIs, which measures the pairwise preference through the WARP loss [33].

Sequential Influence.
Sequential influence is mined for POI recommendation. Existing studies employ the Markov chain property in consecutive check-ins to capture the sequential pattern. Specifically, most successive POI recommendation systems depend on the sequential correlations in successive check-ins [3], [5], [18], [42]. Researchers in [3], [5] recommend the successive POIs on the basis of the Factorized Personalized Markov Chain (FPMC) model [29]. Liu et al. [18] employ a recurrent neural network (RNN) to find the sequential correlations. In addition, researchers in [19], [34] learn the category transition pattern in sequential check-ins. Zhang et al. [4] predict the sequential transition probability through an additive Markov chain model. However, all previous sequential models cannot capture contextual check-in information from the whole sequence. Hence, we propose a POI embedding method to learn sequential POI representations, which captures the contextual relations of check-ins in a sequence.

Temporal Influence. Temporal influence is mined for POI recommendation in prior work [3], [4], [6], [37]. Temporal characteristics can be summarized as periodicity, non-uniformness, and consecutiveness. Periodicity is first proposed in [4], depicting the periodic pattern of user check-in activities. For instance, people tend to stay in their offices and surrounding places on weekdays while going to shopping malls on weekends. Non-uniformness is first proposed in [6], demonstrating that a user's check-in preferences may change at different times. For example, weekday and weekend imply different check-in preferences: work and entertainment. In addition, consecutiveness is used in [3], [6], capturing the correlations of consecutive check-ins to improve performance. In our model, the consecutiveness is depicted in the sequential modeling. Moreover, we propose the temporal POI embedding model to capture the periodicity and non-uniformness between weekday and weekend.

Geographical Influence.
Geographical influence plays an important role in POI recommendation, since the check-in activity in LBSNs is limited by geographical conditions. To capture the geographical influence, researchers in [2], [4], [43] propose Gaussian-distribution-based models. Researchers

in [35], [37] employ the power-law distribution model. In addition, researchers in [38], [39], [41] leverage the kernel density estimation model. Moreover, researchers in [16], [21] incorporate the geographical influence into a weighted regularized MF model [1], [26] and learn the geographical influence jointly with the user preference. Similar to [16], [21], we model the check-ins as implicit feedback; yet we learn it through a Bayesian pairwise ranking method [28]. Furthermore, we propose a geographical pairwise ranking model, which captures the geographical influence by discriminating the unchecked POIs according to their geographical information.

Embedding Learning. Word2vec [23] is an effective method to learn embedding representations from word sequences. It models the words' contextual correlations in sentences, showing better performance than the perspectives of word transitivity in sentences and word similarity. It is widely used in natural language processing [22], [24]. Afterwards, the paragraph vector [12] and other variants [17], [2] are proposed to enhance the word2vec framework for specific purposes. Owing to the efficacy of the framework in capturing the correlations of items, word2vec has been employed in network embedding [1], user modeling [31], as well as in item modeling [3] and item recommendation [9], [25]. These successes persuade us to exploit the word2vec framework to model POI representations in check-in sequences. Our POI embedding model is similar to the prod2vec model in [9] and the KNI model in [25]. However, we incorporate the temporal variance into the word2vec framework to develop the temporal POI embedding, a variant matching the POI recommendation task.

TABLE I: Data statistics

                                Dataset 1    Dataset 2
#users                          1,34         3,24
#POIs                           16,561       33,578
#check-ins                      865,...      ...,453
Avg. #check-ins of each user    ...          ...
Density                         ...          ...

Fig. 1: POI correlation in sequences. (a) Sequence vs. Random. (b) Consecutive vs. Nonconsecutive.

III.
DATA DESCRIPTION AND ANALYSIS

In this section, we first introduce two real-world LBSN datasets, and then conduct empirical analysis on them to explore the properties of check-in sequences of one day.

Fig. 2: Day-of-week check-in pattern at different hours.

A. Data Description

We use two check-in datasets crawled from real-world LBSNs: the data provided in [8] and the data in [43]. We preprocess the data by filtering out the POIs checked in by fewer than five users and the users with fewer than ten check-ins. Then we keep the remaining users' check-in records from January 1, 2011 to July 31, 2011. After the preprocessing, the datasets contain the statistical properties shown in Table I.

B. Empirical Analysis

We conduct data analysis to answer the following two questions: 1) how do POIs in sequences of one day correlate with each other? 2) how do check-in sequences behave on different days?

We investigate the correlations of POIs in sequences of one day, as shown in Figure 1. To calculate the correlation between two POIs, we construct the user-POI matrix according to the check-in records. Then, we measure the correlation of a POI pair in terms of the Jaccard similarity of those users who have checked in at the two POIs. In Figure 1(a), we calculate the average correlation value of POI pairs in sequences for all users, and compare it with the average correlation value of 5,000 random POI pairs. We observe that the correlation of POIs in sequences is much higher than that of random pairs, by about 10 times on one dataset and 5 times on the other, which motivates the sequential modeling. In Figure 1(b), we compare

TABLE II: Symbol notations

u          user name
l          POI name
t_s        temporal state for a sequence
k          context window size
-          negative sample size for embedding learning
m          negative sample size for preference learning
d          latent vector dimension
C          the set of check-ins
U          the set of users
L          the set of POIs
S_u        a sequence for user u
S          the set of sequences
D_{S_u}    the set of preference relations for S_u
T          temporal state feature matrix
U          user latent feature matrix
L          POI latent feature matrix

the correlation of consecutive pairs with that of nonconsecutive pairs in sequences. Take a sequence (l_1, l_2, l_3) as an example: (l_1, l_2) and (l_2, l_3) are consecutive pairs, and (l_1, l_3) is a nonconsecutive pair. We also calculate the average value over all sequences of all users to make the comparison. We observe that the nonconsecutive pairs show correlation comparable to that of the consecutive pairs. Hence, not only are consecutive POIs highly correlated [3], [44]; all POIs in a sequence are highly correlated with a contextual property. Accordingly, it is not satisfactory to only model the transition probability of consecutive check-ins by a Markov chain model or the correlation of consecutive check-ins by tensor factorization. This observation motivates us to model the whole sequence through the word2vec framework.

We explore how the variant temporal characteristics on different days affect the user's check-in behavior. Figure 2 demonstrates the number of cumulated check-ins of all users at different hours on different days of a week, from Monday to Sunday. From the statistics of cumulated check-ins in Figure 2, we observe the day-of-week check-in pattern at different hours: Saturday and Sunday show a similar pattern, while Monday to Friday show an intra-similar pattern that is different from the weekends. We may infer that weekday and weekend exert two types of effects on the user's check-in behavior. Therefore, modeling the sequence pattern should incorporate this temporal feature.

IV.
MODEL

In this section, we first demonstrate how to capture the sequential pattern through our POI embedding model. Then, we propose the SEER model for POI recommendation. Next, we propose the temporal POI embedding model and the T-SEER model to incorporate the temporal influence. Further, we incorporate the geographical influence into the T-SEER model and propose the GT-SEER model. Finally, we report how to learn the proposed models. To help understand the paper, we list some important notations in Table II.

A. POI Embedding

Fig. 3: POI embedding model.

We propose a POI embedding method to learn the sequential pattern, which captures POIs' contextual information from user check-in sequences. Our model is based on the word2vec framework, i.e., the Skip-Gram model [23]. In order to learn the POI representations, we treat each user as a document, a check-in sequence in a day as a sentence, and each POI as a word. To better describe the model, we present some basic concepts as follows.

Definition 1 (Check-in). A check-in is a triple <u, l, t> that depicts a user u visiting POI l at time t.

Definition 2 (Check-in sequence). A check-in sequence is the set of check-ins of user u in one day, denoted as S_u = {<l_1, t_1>, ..., <l_n, t_n>}, where t_1 to t_n belong to the same day. For simplicity, we denote S_u = {l_1, ..., l_n}.

Definition 3 (Target POI and context POI). In a sequence S_u, the chosen l_i is the target POI and the other POIs in S_u are context POIs.

The POI embedding model learns the representations from check-in sequences as shown in Figure 3. We treat each POI as a unique continuous vector, and then represent the context POIs in a sliding window from l_{i-k} to l_{i+k} given a target POI l_i. In other words, the vector of a target POI l_i is used as a feature to predict the context POIs from l_{i-k} to l_{i+k}.
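The sliding-window scheme above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the sequence and window size are made up.

```python
# Sketch of extracting (target, context) POI pairs with a sliding window
# of size k, as in the Skip-Gram-style POI embedding described above.

def context_pairs(seq, k):
    """Yield (target, context) pairs: every POI within k positions of l_i."""
    pairs = []
    for i, target in enumerate(seq):
        for j in range(max(0, i - k), min(len(seq), i + k + 1)):
            if j != i:
                pairs.append((target, seq[j]))
    return pairs

# A toy daily check-in sequence; with k = 1 only adjacent POIs are context.
pairs = context_pairs(["restaurant", "gym", "office"], k=1)
```

With k large enough to cover the whole daily sequence, every other POI in the sequence becomes context, which is how the model captures the whole-sequence contextual property rather than only consecutive transitions.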
Formally, given a POI sequence S_u = {l_1, ..., l_n}, the objective of the POI embedding model is to maximize the average log probability,

L(S_u) = (1/|S_u|) Σ_{l_i ∈ S_u} Σ_{-k ≤ c ≤ k, c ≠ 0} log Pr(l_{i+c} | l_i),    (1)

where l_i is the target POI, l_{i+c} is the context POI, and k is the context size controlling the sliding window. Here, we formulate the probability Pr(l_{i+c} | l_i) using a softmax function. Denote by l'_c, l_i ∈ R^d the vector representations of the output-layer context POI l_{i+c} and the target POI l_i, respectively, where d is the vector dimension. Then, the probability Pr(l_{i+c} | l_i) is formulated as

Pr(l_{i+c} | l_i) = exp(l'_c · l_i) / Σ_{l ∈ L} exp(l' · l_i),    (2)

where L is the POI set and (·) is the inner product operator.
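A minimal sketch of the softmax in Eq. (2), assuming toy 2-dimensional POI vectors (the vectors and POI names below are illustrative, not learned):

```python
# Sketch of Eq. (2): Pr(context | target) is a softmax over inner products
# of the target vector with the output-layer vectors of all candidate POIs.
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def context_prob(target_vec, output_vecs, c):
    """Pr(c | target) via softmax over all POIs' output-layer vectors."""
    scores = {l: math.exp(dot(v, target_vec)) for l, v in output_vecs.items()}
    total = sum(scores.values())
    return scores[c] / total

# Toy output-layer vectors for three POIs.
output_vecs = {"gym": [1.0, 0.0], "office": [0.0, 1.0], "mall": [-1.0, 0.0]}
p = context_prob([1.0, 0.0], output_vecs, "gym")
```

The denominator sums over the whole POI set L, which is why the negative-sampling approximation of the next subsection is needed in practice.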

In order to make the model efficient to learn, Mikolov et al. [23] propose two methods to train the word2vec model: hierarchical softmax and negative sampling. In this paper, we employ the negative sampling technique, which avoids computing the softmax function directly. We attempt to maximize the occurrence of the context POI and minimize the occurrence of the negative samples. Then, the objective function can be formulated in a new form that is easier to optimize. Following [23], we define L(S_u) through the negative sampling technique,

L(S_u) = (1/|S_u|) Σ_{l_i ∈ S_u} Σ_{-k ≤ c ≤ k, c ≠ 0} [ log σ(l'_c · l_i) + Σ_k E_{l_k ∼ P_nc} log σ(-l'_k · l_i) ],    (3)

where l_k is a sampled negative POI, the inner sum runs over the sampled negatives, P_nc denotes the distribution of POIs not in S_u, and σ(·) is the sigmoid function. E_{l_k ∼ P_nc}[·] denotes the expectation over negative samples l_k generated from the distribution P_nc. Here we adopt the same strategy as [23] to draw the negative samples, namely using the unigram distribution raised to the power 3/4 to construct P_nc.

B. SEquential Embedding Rank (SEER) Model

We model the user preference in POI recommendation through pairwise ranking. User check-ins not only contain the sequential pattern, but also imply the user preference. We observe that the check-in activity is a kind of implicit feedback, which has been modeled to capture users' preferences on POIs [14], [16], [21]. To learn this kind of implicit feedback, we leverage the Bayesian personalized ranking criterion [28] to model the user check-in activity. Formally, for each check-in <u, l_i>, we define the pairwise preference order as

l_i >_u l_n,    (4)

where l_i is the checked-in POI and l_n is any other unchecked POI. The pairwise preference order means that user u prefers the checked-in POI l_i to the unchecked POI l_n.
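The preference order above induces, for each sequence, a set of (user, checked-in, unchecked) triples. A minimal sketch with toy POI names (illustrative, not from the datasets):

```python
# Sketch of enumerating the pairwise preference triples implied by Eq. (4):
# each checked-in POI in the sequence is preferred over every unchecked POI.

def preference_pairs(user, sequence, all_pois):
    """Return (user, checked-in, unchecked) triples for one daily sequence."""
    unchecked = set(all_pois) - set(sequence)
    return [(user, l_i, l_n) for l_i in sequence for l_n in sorted(unchecked)]

pairs = preference_pairs("u1", ["gym", "office"], ["gym", "office", "mall", "bar"])
```

In practice the unchecked POIs are sampled rather than fully enumerated, since |L| is large; the sampled set corresponds to D_{S_u} in Eq. (8).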
Supposing that the function f(·) represents the user check-in preference score, we model the pairwise preference order by

Pr(l_i >_u l_n) = σ(f(u, l_i) - f(u, l_n)),    (5)

where Pr(l_i >_u l_n) denotes the probability that user u prefers POI l_i to l_n, and σ(·) is the sigmoid function. Furthermore, we employ the matrix factorization (MF) model [11] to formulate the preference score function. In other words, we use the latent vector inner product to define the score function,

f(u, l) = u · l,    (6)

where u, l ∈ R^d are the latent vectors of user u and POI l, respectively. Thus, the pairwise preference probability can be formulated as

Pr(l_i >_u l_n) = σ(u · l_i - u · l_n).    (7)

Fig. 4: Temporal POI embedding model.

Suppose C is the set containing all check-ins, S is the set containing all sequences, L is the set of POIs, and L_u is the set of checked-in POIs of user u. To model the pairwise preference of check-ins in S_u, we sample unchecked POIs from L \ L_u and construct a pairwise preference set,

D_{S_u} = {(u, l_i, l_n) | l_i ∈ S_u, l_n ∈ L \ L_u}.    (8)

Hence, learning the pairwise preference relations in S_u is equivalent to maximizing the log probability of the preference pairs in D_{S_u},

L(D_{S_u}) = Σ_{(u, l_i, l_n) ∈ D_{S_u}} log σ(u · (l_i - l_n)).    (9)

Moreover, we propose the SEER model to learn the user preference and the sequential pattern together for POI recommendation. Learning the SEER model is equivalent to maximizing L(S_u) in Eq. (3) and L(D_{S_u}) in Eq. (9) together. Therefore, the objective function of the SEER model can be formulated as

O = arg max Σ_{S_u ∈ S} (α L(S_u) + β L(D_{S_u})),    (10)

where α and β are hyperparameters to trade off the sequential influence and the user preference. Substituting L(S_u) and L(D_{S_u}) with Eq. (3) and Eq. (9) respectively, we can learn the SEER model through the following objective function,

arg max Σ_{S_u ∈ S} Σ_{l_i ∈ S_u} ( Σ_{-k ≤ c ≤ k, c ≠ 0} [ α log σ(l'_c · l_i) + α Σ_k E_{l_k ∼ P_nc} log σ(-l'_k · l_i) ] + Σ_{D_{S_u}} β log σ(u · (l_i - l_n)) ).    (11)

C.
Temporal SEER (T-SEER) Model

To model the temporal variance of sequences on different days, we propose the T-SEER model. As shown in Figure 2, user check-ins demonstrate different patterns on weekdays and weekends. Thus, we should model the sequences on weekdays and weekends differently. The POI embedding model in Figure 3 only learns the contextual information of POIs from the check-in sequences, but ignores the variant

temporal characteristics among sequences. To this end, we propose the temporal POI embedding model to learn POI representations.

The temporal POI embedding represents a POI in sequences with a specific temporal state. In our case, we want to discriminate weekday from weekend, hence the temporal state t_s has two options: weekday and weekend. As shown in Figure 4, we learn the representations of the context POIs from l_{i-k} to l_{i+k} given a target POI l_i and the sequence temporal state t_s. Formally, given a sequence S_u and its temporal state t_s, our model attempts to learn the temporal POI embeddings by maximizing the following probability,

L(S_u) = (1/|S_u|) Σ_{l_i ∈ S_u} Σ_{-k ≤ c ≤ k, c ≠ 0} log Pr(l_{i+c} | l_i, t_s).    (12)

Similarly, we formulate the probability Pr(l_{i+c} | l_i, t_s) using a softmax function. For better description, we introduce two symbols, defined as follows: l̂_c = l'_c ⊕ l'_c and l^t_i = l_i ⊕ t_s, where ⊕ is the concatenation operator, and l'_c, l_i, and t_s are the latent vectors of the output-layer context POI, the target POI, and the temporal state, respectively. Thus, we get l̂_c · l^t_i = l'_c · l_i + l'_c · t_s. Therefore, the probability Pr(l_{i+c} | l_i, t_s) can be formulated as

Pr(l_{i+c} | l_i, t_s) = exp(l̂_c · l^t_i) / Σ_{l ∈ L} exp(l̂ · l^t_i).    (13)

Furthermore, we define L(S_u) through the negative sampling technique,

L(S_u) = (1/|S_u|) Σ_{l_i ∈ S_u} Σ_{-k ≤ c ≤ k, c ≠ 0} [ log σ(l̂_c · l^t_i) + Σ_k E_{l_k ∼ P_nc} log σ(-l̂_k · l^t_i) ].    (14)

The key to deducing the temporal pairwise preference ranking is the preference score function. We use l^t_i = l_i ⊕ t_s to represent the temporal POI latent vector, which is consistent with the temporal POI embedding model. In addition, we define û = u ⊕ u; then the score function can be formulated as

f(u, t_s, l_i) = û · l^t_i.    (15)

Denote the temporal pairwise preference order as l_i >_{u,t_s} l_n. Substituting Eq. (15) into Eq. (5) and eliminating the common term u · t_s, we get the pairwise preference probability function,

Pr(l_i >_{u,t_s} l_n) = σ(u · (l_i - l_n)).    (16)

Because Eq. (16) is equivalent to Eq.
(7), the objective function L(D_{S_u}) for the temporal pairwise preference ranking stays the same. Therefore, the objective for the T-SEER model can be formulated as follows,

arg max Σ_{S_u ∈ S} Σ_{l_i ∈ S_u} ( Σ_{-k ≤ c ≤ k, c ≠ 0} [ α log σ(l̂_c · l^t_i) + α Σ_k E_{l_k ∼ P_nc} log σ(-l̂_k · l^t_i) ] + Σ_{D_{S_u}} β log σ(u · (l_i - l_n)) ).    (17)

D. Geo-Temporal SEER (GT-SEER) Model

We propose the GT-SEER model by incorporating the geographical influence. According to Tobler's first law of geography, "Everything is related to everything else, but near things are more related than distant things" [32]. It implies that POIs adjacent to each other are more correlated, which is verified by observations in prior work [2], [37], [43]. Because of the observation that users prefer POIs near the checked-in ones to POIs far away, we can discriminate the unchecked POIs and reconstruct the pairwise preference set for better preference modeling.

Definition 4 (Neighboring POI and non-neighboring POI). For each check-in <u, l_i>, a neighboring POI is a POI whose distance from l_i is less than or equal to a threshold s, while a non-neighboring POI is a POI whose distance is greater than s. The threshold distance s is measured in kilometers.

Considering the geographical influence, each check-in <u, l_i> implies two kinds of pairwise preference relations: the user prefers the checked-in POI l_i to an unchecked neighboring POI l_ne, and prefers an unchecked neighboring POI l_ne to an unchecked non-neighboring POI l_nn. Denoting d(l_i, l_j) as the distance between two POIs l_i and l_j, we represent the pairwise preferences for check-in <u, l_i> as

l_i >_{u, d(l_i, l_ne) ≤ s} l_ne,    l_ne >_{u, d(l_i, l_nn) > s} l_nn.    (18)

Further, we reconstruct the pairwise preference set,

D'_{S_u} = {(u, l_i, l_ne) ∪ (u, l_ne, l_nn) | <u, l_i> ∈ C, d(l_i, l_ne) ≤ s, d(l_i, l_nn) > s, l_ne, l_nn ∈ L \ L_u}.    (19)

Finally, we substitute the pairwise preference set in Eq.
(17) to incorporate the geographical influence and formulate the objective function of GT-SEER,

O = arg max Σ_{S_u ∈ S} Σ_{l_i ∈ S_u} ( Σ_{-k ≤ c ≤ k, c ≠ 0} [ α log σ(l̂_c · l^t_i) + α Σ_k E_{l_k ∼ P_nc} log σ(-l̂_k · l^t_i) ] + Σ_{D'_{S_u}} β log σ(u · (l_i - l_n)) ),    (20)

where we substitute the preference set D_{S_u} with the geographical preference set D'_{S_u}; the other symbols stay the same as in Eq. (17).

E. Learning

We use an alternate iterative update procedure and employ stochastic gradient descent to learn the objective function. The objective function of our model optimizes two parts together, O = arg max Σ_{S_u ∈ S} (α L(S_u) + β L(D_{S_u})). To learn the model, for each sampled training instance, we separately calculate the derivatives of L(S_u) and L(D_{S_u}) and update the corresponding parameters along the ascending gradient direction,

Θ_{t+1} = Θ_t + η ∂O(Θ)/∂Θ,    (21)
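The construction of the geographical preference set D'_{S_u} from Definition 4 and Eq. (19) can be sketched as follows. This is a simplified illustration: distances are 1-D positions rather than kilometers on a sphere (a real system would use e.g. the haversine formula), and all names are made up.

```python
# Sketch of splitting unchecked POIs into neighboring (within s of the
# checked-in POI) and non-neighboring ones, and emitting the two kinds of
# preference triples of Eq. (18).

def geo_pairs(user, l_i, unchecked, coords, s):
    """Return (u, l_i, l_ne) and (u, l_ne, l_nn) preference triples."""
    near = [l for l in unchecked if abs(coords[l] - coords[l_i]) <= s]
    far = [l for l in unchecked if abs(coords[l] - coords[l_i]) > s]
    first = [(user, l_i, l_ne) for l_ne in near]
    second = [(user, l_ne, l_nn) for l_ne in near for l_nn in far]
    return first, second

# Toy 1-D positions: l2 is near l1, l3 is far away.
coords = {"l1": 0.0, "l2": 1.0, "l3": 50.0}
first, second = geo_pairs("u1", "l1", ["l2", "l3"], coords, s=10.0)
```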

Algorithm 1: Model learning of GT-SEER
Input: S
Output: U, L, T
 1  Initialize U, L, L', and T (uniformly at random)
 2  for each iteration do
 3    for S_u ∈ S do
 4      for <u, l_i> ∈ S_u do
 5        for each context POI l_c do
 6          Update parameters according to Eq. (22)
 7          for each negative sample l_k ∼ P_nc do
 8            Update parameters according to Eq. (23)
 9          end
10        end
11        Uniformly sample m unchecked POIs
12        for (u, l_i, l_ne) ∈ D_m do
13          δ = 1 - σ(u · l_i - u · l_ne)
14          u ← u + βηδ(l_i - l_ne)
15          l_i ← l_i + βηδu ;  l_ne ← l_ne - βηδu
16        end
17        for (u, l_ne, l_nn) ∈ D_m do
18          δ = 1 - σ(u · l_ne - u · l_nn)
19          u ← u + βηδ(l_ne - l_nn)
20          l_ne ← l_ne + βηδu ;  l_nn ← l_nn - βηδu
21        end
22      end
23    end
24  end

where Θ is the training parameter and η is the learning rate. Specifically, for a check-in <u, l_i>, we calculate the stochastic gradient for L(S_u). First, we get the updating rules for a context POI l_c,

l_i ← l_i + αη(1 - σ(l̂_c · l^t_i)) l'_c,
t_s ← t_s + αη(1 - σ(l̂_c · l^t_i)) l'_c,
l'_c ← l'_c + αη(1 - σ(l̂_c · l^t_i))(l_i + t_s).    (22)

Then, we update for a negative sample l_k as follows,

l_i ← l_i - αη σ(l̂_k · l^t_i) l'_k,
t_s ← t_s - αη σ(l̂_k · l^t_i) l'_k,
l'_k ← l'_k - αη σ(l̂_k · l^t_i)(l_i + t_s).    (23)

To update L(D_{S_u}), we calculate the stochastic gradient for each pair (u, l_i, l_n). Denoting δ = 1 - σ(u · l_i - u · l_n), we update the parameters as follows,

u ← u + βηδ(l_i - l_n),
l_i ← l_i + βηδu,
l_n ← l_n - βηδu.    (24)

Algorithm 1 shows the details of learning the GT-SEER model. S is the set of all sequences, and S_u is a sequence of user u. U, L, and T are the feature matrices of users, POIs, and temporal states. L', an auxiliary learning parameter, is the output-layer POI matrix in the Skip-Gram model. We use the standard way [23] to learn the POI representations in the sequences, as shown from line 5 to line 10 in Algorithm 1. Next, we exploit Bootstrap sampling to generate m unchecked POIs and then classify the unchecked POIs into neighboring POIs and non-neighboring POIs according to their distances from the checked-in POI l_i. Then, we establish the pairwise preference set D_m for each check-in <u, l_i>.
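The pairwise update of Eq. (24) (lines 13-15 of Algorithm 1) can be sketched as follows. The vectors, learning rate, and β below are illustrative, not tuned values.

```python
# Sketch of one gradient-ascent step on log sigma(u . (l_i - l_n)):
# delta = 1 - sigma(u . l_i - u . l_n), then move u towards (l_i - l_n),
# move the preferred POI towards u, and the other POI away from u.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def bpr_update(u, l_i, l_n, eta=0.1, beta=1.0):
    """One pairwise update step; returns the new (u, l_i, l_n) vectors."""
    delta = 1.0 - sigmoid(dot(u, l_i) - dot(u, l_n))
    u2 = [a + beta * eta * delta * (b - c) for a, b, c in zip(u, l_i, l_n)]
    li2 = [a + beta * eta * delta * b for a, b in zip(l_i, u)]
    ln2 = [a - beta * eta * delta * b for a, b in zip(l_n, u)]
    return u2, li2, ln2

u, l_i, l_n = [0.1, 0.2], [0.3, 0.1], [0.2, 0.4]
u2, li2, ln2 = bpr_update(u, l_i, l_n)
```

After the step, the score gap f(u, l_i) - f(u, l_n) increases, i.e., the model becomes more confident that the checked-in POI is preferred.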
Here D_m = {(u, l_i, l_ne) ∪ (u, l_ne, l_nn) | d(l_i, l_ne) ≤ s, d(l_i, l_nn) > s, l_ne, l_nn ∈ L \ L_u}. We then learn the parameters for each instance in D_m, as shown from line 12 to line 21 in Algorithm 1.

Here, we have shown the detailed updating rules for the GT-SEER model. The SEER model and the T-SEER model are special cases of the GT-SEER model, so we can learn them by similar means. After learning the GT-SEER model, we get the latent feature representations of users, POIs, and temporal states. Then we can estimate the check-in possibility of user u for a candidate POI l at temporal state t_s according to the preference score function. For the SEER model, we use Eq. (6) to estimate the check-in possibility. For the T-SEER and GT-SEER models, we use Eq. (15) for score estimation. Finally, we rank the candidate POIs and select the top N POIs with the highest estimated possibility values for each user.

Scalability. With the sampling techniques above, the complexity of our model is linear in O(|C|), where C is the set of all check-ins. Hence, the proposed algorithm is scalable. Specifically, the parameter update in Eq. (22) and Eq. (23) is in O(d), where d is the latent vector dimension. Hence, for each context, the update procedure is in O(d) plus O(d) per negative sample. Because the context sliding window size is k, the POI embedding learning for each check-in <u, l_i> from line 5 to 10 scales with k, the number of negative samples, and d. For the pairwise preference learning from line 11 to 21, we sample m unchecked POIs, which generate at most O(m^2) pairwise preference tuples. For each tuple, the update procedure is in O(d). As a result, the parameter update from line 11 to 21 is in O(m^2 · d). Because we employ embedding learning and pairwise preference learning for each check-in, the complexity of our model is O((k + m^2) · d · |C|), where C is the set of all check-ins. Since k, the number of negative samples, m, and d are fixed hyperparameters, the proposed model can be treated as linear in O(|C|).
Furthermore, to make our model more efficient, we turn to the asynchronous version of stochastic gradient descent (ASGD) [27]. The check-in frequency distribution of POIs in LBSNs follows a power law [35], resulting in a long tail of infrequent POIs; this sparsity makes it safe to employ ASGD to parallelize the parameter updates.

V. EXPERIMENTAL EVALUATION

We conduct experiments to answer the following questions: 1) How do the proposed models perform compared with other state-of-the-art recommendation methods? 2) How does each component (i.e., sequential modeling, temporal effect, and geographical influence) affect the model performance? 3) How do the parameters affect the model performance?

A. Experimental Setting

Two real-world datasets are used in the experiments: one is from [8] and the other is from [43]. Table I demonstrates the statistical information of

TABLE III: Model feature demonstration

                Pairwise     Sequential   Temporal   Geographical
                Preference   Modeling     Effect     Influence
    SEER           ✓             ✓
    T-SEER         ✓             ✓            ✓
    GT-SEER        ✓             ✓            ✓            ✓

the datasets. To make our model fit the scenario of recommending future check-ins, we choose the first 80% of each user's check-ins as training data and the remaining 20% as test data, following [3], [4].

B. Performance Metrics

In this work, we compare the model performance through precision and recall, which are generally used to evaluate a POI recommendation system [6], [14]. To evaluate a top-N recommendation system, we denote the precision and recall as P@N and R@N, respectively. Supposing L_visited denotes the set of correspondingly visited POIs in the test data, and L_{N,rec} denotes the set of recommended POIs, the definitions of P@N and R@N are formulated as follows,

    P@N = (1/|U|) Σ_{u∈U} |L_visited ∩ L_{N,rec}| / N,    (25)

    R@N = (1/|U|) Σ_{u∈U} |L_visited ∩ L_{N,rec}| / |L_visited|.    (26)

C. Model Comparison

In this paper, we propose three models: SEER, T-SEER, and GT-SEER, with features shown in Table III. The SEER model captures the sequential influence and user preference, showing the advantages of our embedding method. Temporal influence and geographical influence are important for POI recommendation and are usually modeled to improve performance. Hence, we incorporate the temporal and geographical influence into the SEER model to establish the T-SEER and GT-SEER models. We compare our proposed models with state-of-the-art collaborative filtering models for implicit feedback and POI recommendation methods.

BPRMF [28]: Bayesian Personalized Ranking Matrix Factorization (BPRMF) is a popular pairwise ranking method that models implicit feedback data to recommend top-k items.

WRMF [10], [26]: The Weighted Regularized Matrix Factorization (WRMF) model is designed for the implicit feedback ranking problem.
We set the weight mapping function of user u_i at POI l_j as w_{i,j} = (1 + 10 C_{i,j})^{0.5}, where C_{i,j} is the check-in count, following the setting in [21].

LRT [6]: The Location Recommendation framework with Temporal effects (LRT) is a state-of-the-art POI recommendation method, which captures the temporal effect in POI recommendation.

LORE [40]: LORE is a state-of-the-art model that exploits the sequential influence for location recommendation. Compared with other work [3], [34], LORE employs the whole sequence's contribution, not only the successive check-ins' sequential influence.

Rank-GeoFM [14]: Rank-GeoFM is a ranking-based geographical factorization method, which incorporates the geographical and temporal influence in a latent ranking model.

[Fig. 5: Model comparison. Panels (a) and (c) show precision (P@5, P@10); panels (b) and (d) show recall (R@5, R@10) on the two datasets.]

D. Experimental Results

In the following, we demonstrate the experimental results on P@N and R@N. Since the models' performances are consistent for different values of N, e.g., 1, 5, 10, and 20, we show representative results at 5 and 10, following [6], [7]. For the MF-based baseline methods (i.e., BPRMF, WRMF, LRT, and Rank-GeoFM) and our proposed models, the recommendation performance and the computation cost consistently increase with the latent vector dimension. To be fair, we set the same dimension for all these models. In our experiments, we set the latent vector dimension to 50 as a trade-off between computation cost and model performance.

1) Performance Comparison: From the experimental results, we discover that our proposed models achieve better performance than the baselines, as shown in Figure 5. Rank-GeoFM is the best baseline competitor. Since Rank-GeoFM has incorporated the geographical influence and temporal influence, to make the comparison fair, we compare GT-SEER with Rank-GeoFM.
Experimental results show that GT-SEER attains improvements over Rank-GeoFM of at least 28% on both datasets for all metrics. This verifies the effectiveness of our sequential modeling, as well as the validity of our means of incorporating temporal influence and geographical influence. In addition, we observe that models perform better

on one dataset than on the other for precision, but worse for recall. The reason lies in the difference of each user's test data size: as shown in Table I, the average number of check-ins per user in the former is about two times that of the latter. According to the metrics in Eq. (25) and Eq. (26), this result is reasonable.

2) Comparison Discussion: Through the model comparison in Figure 5, we verify the strategy of our proposed models and show the contribution of each component, including sequential modeling, temporal effect, and geographical influence.

SEER vs. BPRMF. BPRMF is a special case of the SEER model when the sequential influence is not considered. The SEER model gains more than 15% improvement over BPRMF on both datasets for all metrics. This implies that the sequential influence is important for POI recommendation and that our embedding method performs excellently for sequential modeling.

SEER vs. LORE. The SEER model outperforms LORE by more than 5%, which indicates that our model better captures the sequential pattern. Compared with LORE, the SEER model has two advantages: the word2vec framework captures the POI contextual information in sequences, and the sequential correlations and the pairwise preference are jointly learned rather than separately modeled.

T-SEER vs. SEER. The T-SEER model captures not only the POIs' correlation in a sequence but also the temporal variance across sequences. We observe that the T-SEER model improves SEER by at least about 1% on both datasets for all metrics.

GT-SEER vs. T-SEER. GT-SEER improves the T-SEER model by at least about 15% on both datasets for all metrics. This means our strategy of incorporating geographical influence by discriminating the unchecked POIs is valid.

3) Parameter Effect: In this section, we show how the three important hyperparameters, α, β, and s, affect the model performance. α and β balance the sequential influence and the user preference; s shows the sensitivity of our geographical model. We tune α and β to see how to trade off the sequential influence and user preference, as shown in Figure 6 (only part of the results are shown due to the space limit).
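The test-set-size effect noted above follows directly from the metric definitions. As a quick illustration, here is a minimal Python sketch of P@N and R@N (Eqs. (25) and (26)); the toy user and POI names below are made up, not drawn from the paper's datasets.

```python
def precision_recall_at_n(visited, recommended, n):
    """Average P@N and R@N over users, following Eqs. (25) and (26)."""
    p = r = 0.0
    for u in visited:
        hits = len(set(visited[u]) & set(recommended[u][:n]))
        p += hits / n                # Eq. (25): normalize by N
        r += hits / len(visited[u])  # Eq. (26): normalize by |L_visited|
    return p / len(visited), r / len(visited)

# Toy data: u1 has a test set twice as large as u2's.
visited = {"u1": ["a", "b", "c", "d"], "u2": ["e", "f"]}
recs = {"u1": ["a", "x", "c", "y"], "u2": ["e", "z", "q", "w"]}
p4, r4 = precision_recall_at_n(visited, recs, 4)  # -> (0.375, 0.5)
```

With the same number of hits, a larger per-user test set leaves P@N unchanged but shrinks R@N, which matches the precision/recall asymmetry observed between the two datasets.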
Both α and β appear together with the learning rate η in the parameter update procedures, so it is not necessary to tune the three parameters separately: we absorb the learning rate η into α and β, i.e., we set α ← αη and β ← βη. We thus avoid tuning the learning rate η and instead control the update step size through tuning α and β. Hence α and β should be small enough to guarantee convergence. We fix α and change β to see how the model performance varies with β/α. SEER and T-SEER attain the best performance for β/α ∈ [1, 2], while GT-SEER attains the best performance for β/α ∈ [0.05, 0.5]. For GT-SEER, more preference pairs are leveraged to train the model, so we need a smaller β to rebalance the sequential influence and user preference.

In the GT-SEER model, we classify the unchecked POIs into neighboring POIs and non-neighboring POIs to constitute a new preference set according to a threshold distance s. Here we choose different values of s to see how this parameter affects the model performance, as shown in Figure 7 (only part of the results are shown due to the space limit).

[Fig. 6: Parameter effect on α and β; panels (a) to (f) show the results for SEER, T-SEER, and GT-SEER.]

[Fig. 7: Parameter effect on distance threshold s (x-axis in log scale, base 10).]

We observe that the GT-SEER model achieves the best performance at s = 1. Furthermore, when s is extremely small or extremely large, we cannot classify the unchecked POIs; hence the GT-SEER model degenerates to the T-SEER model without the consideration of geographical influence.

VI. CONCLUSION AND FUTURE WORK

We study the problem of POI recommendation in this paper. To capture the contextual check-in information hidden in the sequences, we propose the POI embedding model to learn POI representations. Next, we propose the SEER model to recommend POIs, which learns user preferences via a pairwise ranking model under the sequential representation constraint modeled by the POI embeddings. Moreover, we establish

the temporal POI embedding model to capture the temporal variance of sequences on different days and propose the T-SEER model to incorporate this kind of temporal influence. Finally, we propose the GT-SEER model, which improves the recommendation performance by incorporating geographical influence into the T-SEER model. Experimental results on two real-world datasets show that our sequential embedding rank model better captures the sequential pattern, outperforming the previous sequential model LORE by more than 5%. In addition, the proposed GT-SEER model improves on the best baseline competitor by at least 28% on both datasets for all metrics.

Our future work may be carried out as follows: 1) Since we only consider the sequence of one day in this paper, we may discuss other scenarios in the future, for instance, sequences consisting of consecutive check-ins whose interval is under a fixed time threshold, e.g., four hours or eight hours. 2) We may subsume more information, e.g., users' comments and social relations, into the system to improve performance.

REFERENCES

[1] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In SIGKDD, 2014.
[2] Chen Cheng, Haiqin Yang, Irwin King, and Michael R. Lyu. Fused matrix factorization with geographical and social influence in location-based social networks. In AAAI, 2012.
[3] Chen Cheng, Haiqin Yang, Michael R. Lyu, and Irwin King. Where you like to go next: Successive point-of-interest recommendation. In IJCAI, 2013.
[4] Eunjoon Cho, Seth A. Myers, and Jure Leskovec. Friendship and mobility: User movement in location-based social networks. In SIGKDD, 2011.
[5] Shanshan Feng, Xutao Li, Yifeng Zeng, Gao Cong, Yeow Meng Chee, and Quan Yuan. Personalized ranking metric embedding for next new POI recommendation. In IJCAI, 2015.
[6] Huiji Gao, Jiliang Tang, Xia Hu, and Huan Liu. Exploring temporal effects for location recommendation on location-based social networks. In RecSys, 2013.
[7] Huiji Gao, Jiliang Tang, Xia Hu, and Huan Liu.
Content-aware point of interest recommendation on location-based social networks. In AAAI, 2015.
[8] Huiji Gao, Jiliang Tang, and Huan Liu. gSCorr: Modeling geo-social correlations for new check-ins on location-based social networks. In CIKM, 2012.
[9] Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. E-commerce in your inbox: Product recommendations at scale. In SIGKDD, 2015.
[10] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In ICDM, 2008.
[11] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 2009.
[12] Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, 2014.
[13] Huayu Li, Richang Hong, Shiai Zhu, and Yong Ge. Point-of-interest recommender systems: A separate-space perspective. In ICDM, 2015.
[14] Xutao Li, Gao Cong, Xiao-Li Li, Tuan-Anh Nguyen Pham, and Shonali Krishnaswamy. Rank-GeoFM: A ranking based geographical factorization method for point of interest recommendation. In SIGIR, 2015.
[15] Defu Lian, Yong Ge, Fuzheng Zhang, Nicholas Jing Yuan, Xing Xie, Tao Zhou, and Yong Rui. Content-aware collaborative filtering for location recommendation based on human mobility data. In ICDM, 2015.
[16] Defu Lian, Cong Zhao, Xing Xie, Guangzhong Sun, Enhong Chen, and Yong Rui. GeoMF: Joint geographical modeling and matrix factorization for point-of-interest recommendation. In SIGKDD, 2014.
[17] Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. Learning context-sensitive word embeddings with neural tensor skip-gram model. In IJCAI, 2015.
[18] Qiang Liu, Shu Wu, Liang Wang, and Tieniu Tan. Predicting the next location: A recurrent model with spatial and temporal contexts. In AAAI, 2016.
[19] Xin Liu, Yong Liu, Karl Aberer, and Chunyan Miao. Personalized point-of-interest recommendation by mining users' preference transition. In CIKM, 2013.
[20] Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. Topical word embeddings. In AAAI, 2015.
[21] Yong Liu, Wei Wei, Aixin Sun, and Chunyan Miao. Exploiting geographical neighborhood characteristics for location recommendation. In CIKM, 2014.
[22] Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. arXiv preprint, 2013.
[23] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
[24] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In HLT-NAACL, 2013.
[25] Makbule Gulcin Ozsoy. From word embeddings to item recommendation. arXiv preprint, 2016.
[26] Rong Pan, Yunhong Zhou, Bin Cao, Nathan Nan Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. One-class collaborative filtering. In ICDM, 2008.
[27] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[28] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In UAI, 2009.
[29] Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. Factorizing personalized Markov chains for next-basket recommendation. In WWW, 2010.
[30] Duyu Tang, Bing Qin, and Ting Liu. Learning semantic representations of users and products for document level sentiment classification. In ACL, 2015.
[31] Duyu Tang, Bing Qin, Ting Liu, and Yuekui Yang. User modeling with neural network for review rating prediction. In IJCAI, 2015.
[32] Waldo R. Tobler. A computer movie simulating urban growth in the Detroit region. Economic Geography, 1970.
[33] Jason Weston, Chong Wang, Ron Weiss, and Adam Berenzweig. Latent collaborative retrieval. In ICML, 2012.
[34] Jihang Ye, Zhe Zhu, and Hong Cheng. What's your next move: User activity prediction in location-based social networks.
In SDM, 2013.
[35] Mao Ye, Peifeng Yin, Wang-Chien Lee, and Dik-Lun Lee. Exploiting geographical influence for collaborative point-of-interest recommendation. In SIGIR, 2011.
[36] Hongzhi Yin, Yizhou Sun, Bin Cui, Zhiting Hu, and Ling Chen. LCARS: A location-content-aware recommender system. In SIGKDD, 2013.
[37] Quan Yuan, Gao Cong, Zongyang Ma, Aixin Sun, and Nadia Magnenat Thalmann. Time-aware point-of-interest recommendation. In SIGIR, 2013.
[38] Jia-Dong Zhang and Chi-Yin Chow. iGSLR: Personalized geo-social location recommendation: A kernel density estimation approach. In SIGSPATIAL, 2013.
[39] Jia-Dong Zhang and Chi-Yin Chow. GeoSoCa: Exploiting geographical, social and categorical correlations for point-of-interest recommendations. In SIGIR, 2015.
[40] Jia-Dong Zhang, Chi-Yin Chow, and Yanhua Li. LORE: Exploiting sequential influence for location recommendations. In SIGSPATIAL, 2014.
[41] Jia-Dong Zhang, Chi-Yin Chow, and Yu Zheng. ORec: An opinion-based point-of-interest recommendation framework. In CIKM, 2015.
[42] Wei Zhang and Jianyong Wang. Location and time aware social collaborative retrieval for new successive point-of-interest recommendation. In CIKM, 2015.
[43] Shenglin Zhao, Irwin King, and Michael R. Lyu. Capturing geographical influence in POI recommendations. In ICONIP, 2013.
[44] Shenglin Zhao, Tong Zhao, Haiqin Yang, Michael R. Lyu, and Irwin King. STELLAR: Spatial-temporal latent ranking for successive point-of-interest recommendation. In AAAI, 2016.


More information

Spike train entropy-rate estimation using hierarchical Dirichlet process priors

Spike train entropy-rate estimation using hierarchical Dirichlet process priors publised in: Advances in Neural Information Processing Systems 26 (23), 276 284. Spike train entropy-rate estimation using ierarcical Diriclet process priors Karin Knudson Department of Matematics kknudson@mat.utexas.edu

More information

How to Find the Derivative of a Function: Calculus 1

How to Find the Derivative of a Function: Calculus 1 Introduction How to Find te Derivative of a Function: Calculus 1 Calculus is not an easy matematics course Te fact tat you ave enrolled in suc a difficult subject indicates tat you are interested in te

More information

A Multiaxial Variable Amplitude Fatigue Life Prediction Method Based on a Plane Per Plane Damage Assessment

A Multiaxial Variable Amplitude Fatigue Life Prediction Method Based on a Plane Per Plane Damage Assessment American Journal of Mecanical and Industrial Engineering 28; 3(4): 47-54 ttp://www.sciencepublisinggroup.com/j/ajmie doi:.648/j.ajmie.2834.2 ISSN: 2575-679 (Print); ISSN: 2575-66 (Online) A Multiaxial

More information

Bounds on the Moments for an Ensemble of Random Decision Trees

Bounds on the Moments for an Ensemble of Random Decision Trees Noname manuscript No. (will be inserted by te editor) Bounds on te Moments for an Ensemble of Random Decision Trees Amit Durandar Received: Sep. 17, 2013 / Revised: Mar. 04, 2014 / Accepted: Jun. 30, 2014

More information

Effect of the Dependent Paths in Linear Hull

Effect of the Dependent Paths in Linear Hull 1 Effect of te Dependent Pats in Linear Hull Zenli Dai, Meiqin Wang, Yue Sun Scool of Matematics, Sandong University, Jinan, 250100, Cina Key Laboratory of Cryptologic Tecnology and Information Security,

More information

LAPLACIAN MATRIX LEARNING FOR SMOOTH GRAPH SIGNAL REPRESENTATION

LAPLACIAN MATRIX LEARNING FOR SMOOTH GRAPH SIGNAL REPRESENTATION LAPLACIAN MATRIX LEARNING FOR SMOOTH GRAPH SIGNAL REPRESENTATION Xiaowen Dong, Dorina Tanou, Pascal Frossard and Pierre Vandergeynst Media Lab, MIT, USA xdong@mit.edu Signal Processing Laboratories, EPFL,

More information

Bounds on the Moments for an Ensemble of Random Decision Trees

Bounds on the Moments for an Ensemble of Random Decision Trees Noname manuscript No. (will be inserted by te editor) Bounds on te Moments for an Ensemble of Random Decision Trees Amit Durandar Received: / Accepted: Abstract An ensemble of random decision trees is

More information

CS522 - Partial Di erential Equations

CS522 - Partial Di erential Equations CS5 - Partial Di erential Equations Tibor Jánosi April 5, 5 Numerical Di erentiation In principle, di erentiation is a simple operation. Indeed, given a function speci ed as a closed-form formula, its

More information

A Spatial-Temporal Probabilistic Matrix Factorization Model for Point-of-Interest Recommendation

A Spatial-Temporal Probabilistic Matrix Factorization Model for Point-of-Interest Recommendation Downloaded 9/13/17 to 152.15.112.71. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php Abstract A Spatial-Temporal Probabilistic Matrix Factorization Model

More information

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Statistica Sinica 24 2014, 395-414 doi:ttp://dx.doi.org/10.5705/ss.2012.064 EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Jun Sao 1,2 and Seng Wang 3 1 East Cina Normal University,

More information

Analysis of Solar Generation and Weather Data in Smart Grid with Simultaneous Inference of Nonlinear Time Series

Analysis of Solar Generation and Weather Data in Smart Grid with Simultaneous Inference of Nonlinear Time Series Te First International Worksop on Smart Cities and Urban Informatics 215 Analysis of Solar Generation and Weater Data in Smart Grid wit Simultaneous Inference of Nonlinear Time Series Yu Wang, Guanqun

More information

MATH1131/1141 Calculus Test S1 v8a

MATH1131/1141 Calculus Test S1 v8a MATH/ Calculus Test 8 S v8a October, 7 Tese solutions were written by Joann Blanco, typed by Brendan Trin and edited by Mattew Yan and Henderson Ko Please be etical wit tis resource It is for te use of

More information

Minimizing D(Q,P) def = Q(h)

Minimizing D(Q,P) def = Q(h) Inference Lecture 20: Variational Metods Kevin Murpy 29 November 2004 Inference means computing P( i v), were are te idden variables v are te visible variables. For discrete (eg binary) idden nodes, exact

More information

Finding and Using Derivative The shortcuts

Finding and Using Derivative The shortcuts Calculus 1 Lia Vas Finding and Using Derivative Te sortcuts We ave seen tat te formula f f(x+) f(x) (x) = lim 0 is manageable for relatively simple functions like a linear or quadratic. For more complex

More information

Improved Algorithms for Largest Cardinality 2-Interval Pattern Problem

Improved Algorithms for Largest Cardinality 2-Interval Pattern Problem Journal of Combinatorial Optimization manuscript No. (will be inserted by te editor) Improved Algoritms for Largest Cardinality 2-Interval Pattern Problem Erdong Cen, Linji Yang, Hao Yuan Department of

More information

Financial Econometrics Prof. Massimo Guidolin

Financial Econometrics Prof. Massimo Guidolin CLEFIN A.A. 2010/2011 Financial Econometrics Prof. Massimo Guidolin A Quick Review of Basic Estimation Metods 1. Were te OLS World Ends... Consider two time series 1: = { 1 2 } and 1: = { 1 2 }. At tis

More information

Fast Explicit and Unconditionally Stable FDTD Method for Electromagnetic Analysis Jin Yan, Graduate Student Member, IEEE, and Dan Jiao, Fellow, IEEE

Fast Explicit and Unconditionally Stable FDTD Method for Electromagnetic Analysis Jin Yan, Graduate Student Member, IEEE, and Dan Jiao, Fellow, IEEE Tis article as been accepted for inclusion in a future issue of tis journal. Content is final as presented, wit te exception of pagination. IEEE TRANSACTIONS ON MICROWAVE THEORY AND TECHNIQUES 1 Fast Explicit

More information

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator. Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative

More information

3.1 Extreme Values of a Function

3.1 Extreme Values of a Function .1 Etreme Values of a Function Section.1 Notes Page 1 One application of te derivative is finding minimum and maimum values off a grap. In precalculus we were only able to do tis wit quadratics by find

More information

HARMONIC ALLOCATION TO MV CUSTOMERS IN RURAL DISTRIBUTION SYSTEMS

HARMONIC ALLOCATION TO MV CUSTOMERS IN RURAL DISTRIBUTION SYSTEMS HARMONIC ALLOCATION TO MV CUSTOMERS IN RURAL DISTRIBUTION SYSTEMS V Gosbell University of Wollongong Department of Electrical, Computer & Telecommunications Engineering, Wollongong, NSW 2522, Australia

More information

2.8 The Derivative as a Function

2.8 The Derivative as a Function .8 Te Derivative as a Function Typically, we can find te derivative of a function f at many points of its domain: Definition. Suppose tat f is a function wic is differentiable at every point of an open

More information

Deep Belief Network Training Improvement Using Elite Samples Minimizing Free Energy

Deep Belief Network Training Improvement Using Elite Samples Minimizing Free Energy Deep Belief Network Training Improvement Using Elite Samples Minimizing Free Energy Moammad Ali Keyvanrad a, Moammad Medi Homayounpour a a Laboratory for Intelligent Multimedia Processing (LIMP), Computer

More information

Technology-Independent Design of Neurocomputers: The Universal Field Computer 1

Technology-Independent Design of Neurocomputers: The Universal Field Computer 1 Tecnology-Independent Design of Neurocomputers: Te Universal Field Computer 1 Abstract Bruce J. MacLennan Computer Science Department Naval Postgraduate Scool Monterey, CA 9393 We argue tat AI is moving

More information

Pre-Calculus Review Preemptive Strike

Pre-Calculus Review Preemptive Strike Pre-Calculus Review Preemptive Strike Attaced are some notes and one assignment wit tree parts. Tese are due on te day tat we start te pre-calculus review. I strongly suggest reading troug te notes torougly

More information

Physically Based Modeling: Principles and Practice Implicit Methods for Differential Equations

Physically Based Modeling: Principles and Practice Implicit Methods for Differential Equations Pysically Based Modeling: Principles and Practice Implicit Metods for Differential Equations David Baraff Robotics Institute Carnegie Mellon University Please note: Tis document is 997 by David Baraff

More information

5.1 We will begin this section with the definition of a rational expression. We

5.1 We will begin this section with the definition of a rational expression. We Basic Properties and Reducing to Lowest Terms 5.1 We will begin tis section wit te definition of a rational epression. We will ten state te two basic properties associated wit rational epressions and go

More information

Flavius Guiaş. X(t + h) = X(t) + F (X(s)) ds.

Flavius Guiaş. X(t + h) = X(t) + F (X(s)) ds. Numerical solvers for large systems of ordinary differential equations based on te stocastic direct simulation metod improved by te and Runge Kutta principles Flavius Guiaş Abstract We present a numerical

More information

Linearized Primal-Dual Methods for Linear Inverse Problems with Total Variation Regularization and Finite Element Discretization

Linearized Primal-Dual Methods for Linear Inverse Problems with Total Variation Regularization and Finite Element Discretization Linearized Primal-Dual Metods for Linear Inverse Problems wit Total Variation Regularization and Finite Element Discretization WENYI TIAN XIAOMING YUAN September 2, 26 Abstract. Linear inverse problems

More information

. If lim. x 2 x 1. f(x+h) f(x)

. If lim. x 2 x 1. f(x+h) f(x) Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value

More information

Bootstrap prediction intervals for Markov processes

Bootstrap prediction intervals for Markov processes arxiv: arxiv:0000.0000 Bootstrap prediction intervals for Markov processes Li Pan and Dimitris N. Politis Li Pan Department of Matematics University of California San Diego La Jolla, CA 92093-0112, USA

More information

Investigating Euler s Method and Differential Equations to Approximate π. Lindsay Crowl August 2, 2001

Investigating Euler s Method and Differential Equations to Approximate π. Lindsay Crowl August 2, 2001 Investigating Euler s Metod and Differential Equations to Approximate π Lindsa Crowl August 2, 2001 Tis researc paper focuses on finding a more efficient and accurate wa to approximate π. Suppose tat x

More information

A Spectral Algorithm For Latent Junction Trees - Supplementary Material

A Spectral Algorithm For Latent Junction Trees - Supplementary Material A Spectral Algoritm For Latent Junction Trees - Supplementary Material Ankur P. Parik, Le Song, Mariya Isteva, Gabi Teodoru, Eric P. Xing Discussion of Conditions for Observable Representation Te observable

More information

Chapter 5 FINITE DIFFERENCE METHOD (FDM)

Chapter 5 FINITE DIFFERENCE METHOD (FDM) MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

Basic Nonparametric Estimation Spring 2002

Basic Nonparametric Estimation Spring 2002 Basic Nonparametric Estimation Spring 2002 Te following topics are covered today: Basic Nonparametric Regression. Tere are four books tat you can find reference: Silverman986, Wand and Jones995, Hardle990,

More information

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set WYSE Academic Callenge 00 Sectional Matematics Solution Set. Answer: B. Since te equation can be written in te form x + y, we ave a major 5 semi-axis of lengt 5 and minor semi-axis of lengt. Tis means

More information

Cubic Functions: Local Analysis

Cubic Functions: Local Analysis Cubic function cubing coefficient Capter 13 Cubic Functions: Local Analysis Input-Output Pairs, 378 Normalized Input-Output Rule, 380 Local I-O Rule Near, 382 Local Grap Near, 384 Types of Local Graps

More information

1. Which one of the following expressions is not equal to all the others? 1 C. 1 D. 25x. 2. Simplify this expression as much as possible.

1. Which one of the following expressions is not equal to all the others? 1 C. 1 D. 25x. 2. Simplify this expression as much as possible. 004 Algebra Pretest answers and scoring Part A. Multiple coice questions. Directions: Circle te letter ( A, B, C, D, or E ) net to te correct answer. points eac, no partial credit. Wic one of te following

More information

Differentiation in higher dimensions

Differentiation in higher dimensions Capter 2 Differentiation in iger dimensions 2.1 Te Total Derivative Recall tat if f : R R is a 1-variable function, and a R, we say tat f is differentiable at x = a if and only if te ratio f(a+) f(a) tends

More information

IEOR 165 Lecture 10 Distribution Estimation

IEOR 165 Lecture 10 Distribution Estimation IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat

More information

Dedicated to the 70th birthday of Professor Lin Qun

Dedicated to the 70th birthday of Professor Lin Qun Journal of Computational Matematics, Vol.4, No.3, 6, 4 44. ACCELERATION METHODS OF NONLINEAR ITERATION FOR NONLINEAR PARABOLIC EQUATIONS Guang-wei Yuan Xu-deng Hang Laboratory of Computational Pysics,

More information

word2vec Parameter Learning Explained

word2vec Parameter Learning Explained word2vec Parameter Learning Explained Xin Rong ronxin@umich.edu Abstract The word2vec model and application by Mikolov et al. have attracted a great amount of attention in recent two years. The vector

More information

HOMEWORK HELP 2 FOR MATH 151

HOMEWORK HELP 2 FOR MATH 151 HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,

More information

Convergence and Descent Properties for a Class of Multilevel Optimization Algorithms

Convergence and Descent Properties for a Class of Multilevel Optimization Algorithms Convergence and Descent Properties for a Class of Multilevel Optimization Algoritms Stepen G. Nas April 28, 2010 Abstract I present a multilevel optimization approac (termed MG/Opt) for te solution of

More information

Fast Exact Univariate Kernel Density Estimation

Fast Exact Univariate Kernel Density Estimation Fast Exact Univariate Kernel Density Estimation David P. Hofmeyr Department of Statistics and Actuarial Science, Stellenbosc University arxiv:1806.00690v2 [stat.co] 12 Jul 2018 July 13, 2018 Abstract Tis

More information

Estimating Peak Bone Mineral Density in Osteoporosis Diagnosis by Maximum Distribution

Estimating Peak Bone Mineral Density in Osteoporosis Diagnosis by Maximum Distribution International Journal of Clinical Medicine Researc 2016; 3(5): 76-80 ttp://www.aascit.org/journal/ijcmr ISSN: 2375-3838 Estimating Peak Bone Mineral Density in Osteoporosis Diagnosis by Maximum Distribution

More information

Simulation and verification of a plate heat exchanger with a built-in tap water accumulator

Simulation and verification of a plate heat exchanger with a built-in tap water accumulator Simulation and verification of a plate eat excanger wit a built-in tap water accumulator Anders Eriksson Abstract In order to test and verify a compact brazed eat excanger (CBE wit a built-in accumulation

More information

Handling Missing Data on Asymmetric Distribution

Handling Missing Data on Asymmetric Distribution International Matematical Forum, Vol. 8, 03, no. 4, 53-65 Handling Missing Data on Asymmetric Distribution Amad M. H. Al-Kazale Department of Matematics, Faculty of Science Al-albayt University, Al-Mafraq-Jordan

More information