AUTOMATIC DETECTION OF RWIS SENSOR MALFUNCTIONS (PHASE II) Northland Advanced Transportation Systems Research Laboratories Project B: Fiscal Year 2006


AUTOMATIC DETECTION OF RWIS SENSOR MALFUNCTIONS (PHASE II)

FINAL REPORT

Northland Advanced Transportation Systems Research Laboratories Project B: Fiscal Year 2006

Carolyn J. Crouch
Donald B. Crouch
Richard M. Maclin
Aditya Polumetla

Department of Computer Science
University of Minnesota Duluth
1114 Kirby Drive
Duluth, Minnesota

Objective

The overall goal of this project (Phases I & II) was to develop computerized procedures that detect Road Weather Information System (RWIS) sensor malfunctions. In the first phase of the research we applied two classes of machine learning techniques to data generated by RWIS sensors in order to predict sensor malfunctions and thereby improve accuracy in forecasting temperature, precipitation, and other weather-related data. We built models using machine learning (ML) methods that employ data from nearby sensors in order to predict likely values of the sensors being monitored. A sensor that begins to deviate noticeably from the values inferred from nearby sensors gives an indication that the sensor has begun to fail. We used both classification and regression algorithms in Phase I; in particular, we used three classification algorithms: J48 decision trees, naïve Bayes, and Bayesian networks, and six regression algorithms: linear regression, least median squares (LMS), M5P, multilayer perceptron (MLP), radial basis function network (RBF), and the conjunctive rule algorithm (CR). We performed a series of experiments to determine which of these models can be used to detect malfunctions in RWIS sensors, comparing the values predicted by the various ML methods to the actual values observed at an RWIS sensor.¹ The accuracy of an algorithm in predicting values plays a major role in determining the accuracy with which malfunctions can be identified. From the experiments performed to predict temperature at an RWIS sensor, we concluded that the regression algorithms LMS and M5P gave results accurate to ±1°F and had low standard deviation across sites. Both models were found capable of detecting sensor malfunctions accurately. RBF networks and CR failed to predict temperature values.
A threshold distance of 2°F between actual and predicted temperature values is sufficient to identify a sensor malfunction when J48 is used to predict temperature class values. The use of precipitation as an additional source of decision information produced no significant improvement in the accuracy of these algorithms in predicting temperature. J48, naïve Bayes, and Bayes nets exhibited mixed results when predicting the presence or absence of precipitation. However, a combination of J48 and Bayes nets can be used to detect precipitation sensor malfunctions, with J48 being used to predict the absence of precipitation and Bayesian networks, the presence of precipitation. Visibility was best classified using M5P. When the difference between the prediction error and the mean absolute error for M5P is greater than 1.96 standard deviations, one can reasonably conclude that the sensor is malfunctioning.

In Phase II, we investigated the use of hidden Markov models to predict RWIS sensor values. The hidden Markov model (HMM) is a technique used to model a sequence of temporal events. For example, suppose we have the sequence of values produced by a given sensor over a fixed time period. An HMM can be built to produce this sequence and then used to determine the probability of the occurrence of another sequence of values, such as that produced by the given sensor for a subsequent time period. If the actual values produced by the sensor deviate from the predicted sequence, then a malfunction may have occurred.

¹ Detailed information regarding our approach to solving the overall objective of this project is contained in the final report for Phase I. [1]

Hidden Markov Models

A discrete process taking place in the real world generates what can be viewed as a symbol at each step. For example, the temperature at a given site is a result of existing weather conditions and the sensor's location. Over a period of time the process generates a temporal sequence of symbols. Based on the outcome of the process, these symbols can be discrete (e.g., precipitation type) or continuous (e.g., temperature). Hidden Markov models may be used to analyze a sequence of observed symbols and to produce a statistical model that describes the workings of the process and the generation of symbols over time. Such a model can then be employed to identify or classify other sequences from the same process.

Hidden Markov models capture temporal data in the form of a directed graph. Each node represents a state used in recognizing a sequence of events. The arcs between nodes represent transition probabilities from one state to another; these probabilities can be combined to determine the probability that a given sequence will be produced by the HMM. Transitions between states are triggered by events in the domain. HMM states are referred to as hidden because the system we wish to model may have underlying causes that cannot be observed. For example, the factors that determine the position and velocity of an object are non-observable when the only information available is its position. The hidden process taking place in a system can be inferred from the events that generate the sequence of observable events. State transition networks can be used to represent knowledge about a system. HMM state transition networks have an initial state and an accepting or end state. The network recognizes a sequence of events if these events start at the initial state and end in the accepting state. A Markov model is a probabilistic model over a finite set of states, where the probability of being in a state at a time t depends only on the previous state visited at time t-1.
An HMM is a model in which the system being modeled is assumed to be a Markovian process with unknown parameters. The components that constitute an HMM are a finite set of states, a set of transitions between these states with their corresponding probability values, and a set of output symbols emitted by the states. An initial state distribution gives the probability of being in a particular state at the start of the path. After each time interval a new state is entered, depending on the transition probability from the previous state. This follows a simple Markov model in which the probability of entering a state depends only on the previous state. The state sequence obtained over time is called the path. The transition probability a_kl of moving from a state k to a state l is given by the probability of being in state l at time t when at time t-1 we were in state k:

a_kl = P(path_t = l | path_t-1 = k).

After a transition is made to a new state, a symbol is emitted from the new state. Thus, as the state path is followed, we get a sequence of symbols (x_1, x_2, x_3, ...). The symbol observed at a state is based on a probability distribution that depends on the state. The probability that a symbol b is emitted on transition to state k, called the emission probability, is determined by

e_k(b) = P(x_t = b | path_t = k).

Consider the HMM example in Figure 1, in which there are only three possible output symbols, p, q, and r, and four states, 1, 2, 3 and 4. In this case the state transition probability matrix will be of size 4x4 and the emission probability matrix of size 3x4. At each state we can observe any of the three symbols p, q, and r based on their respective emission probabilities at that state. The transition from one state to another is associated with the transition probability between the respective states; for example, the transition probability of moving from state 1 to state 2 is represented by a_12. Assuming a sequence length of 24, we will observe 24 transitions, including a transition from the start

[Figure 1: An HMM with four states and three possible outputs]

state and an ending at the end state after the 24th symbol is seen. The HMM in Figure 1 allows all possible state transitions. Rabiner and Juang [2] list three basic questions that must be answered satisfactorily in order for an HMM to be used effectively: (1) What is the probability that an observed sequence is produced by the given model? A model can generate many sequences, based on the transitions taking place and the symbols emitted each time a state is visited. The probability of the observed sequence being reproduced by the model gives an estimate of how good the model is; a low probability indicates that this model probably did not produce it. The Forward/Backward algorithms [3] can be used to find the probability of observing a sequence in a given model. (2) How do we find an optimal state sequence (path) for a given observation sequence? HMMs may contain multiple paths that yield a given sequence of symbols. In order to select the optimal state path, we need to identify the path with the highest probability of occurrence. The Viterbi algorithm [4,5] is a procedure that finds the single best path in the model for a given sequence of symbols. (3) How do we adjust the model parameters, that is, the transition probabilities, emission probabilities, and the initial state probability, to generate a model that best represents the training set? Model parameters need to be adjusted so as to maximize the probability of the observed sequence. The Baum-Welch algorithm [2] uses an iterative approach to solve this problem. It starts with preassigned probabilities and adjusts them dynamically based on the

observed sequences in the training set. The Forward/Backward algorithms, the Baum-Welch algorithm, and the Viterbi algorithm are used extensively in our development of HMM models. They are relatively efficient algorithms that have been proposed to solve these three problems. [6] We describe these algorithms in detail in the following subsections.

Forward/Backward Algorithms

A sequence of observations can be generated by taking different paths through an HMM. To find the probability of a sequence we need to total the probabilities obtained for each possible path the sequence can take. The Forward/Backward algorithms use dynamic programming to calculate this probability; the major advantage of these algorithms is that all possible paths do not have to be enumerated. Since HMMs obey the Markov property, the probability of a path including a particular state depends only on the state that was entered before it. The Forward Algorithm uses this property to find the probability of the symbol sequence. Consider an HMM in which i-1 symbols of a sequence x of length L have been observed and the incoming symbol at position i in the sequence is x_i. The probability that symbol x_i is observed in state l can be determined by first summing, over all possible states k, the product of two probabilities (the probability of being in state k when the last symbol was observed, and the transition probability of moving from state k to state l), and then multiplying this summation by the emission probability of x_i at state l. If a transition is not possible, then its transition probability is taken as 0. This formula for calculating the probability of observing x_i at position i in state l is referred to as the forward probability and is expressed as

f_l(i) = e_l(x_i) sum_k f_k(i-1) a_kl.

The pseudocode for the Forward Algorithm is given in Table 1. [7]

Forward Algorithm

Initialization (i = 0): f_0(0) = 1; f_k(0) = 0 for k > 0.
Iteration (i = 1...L): f_l(i) = e_l(x_i) sum_k f_k(i-1) a_kl
Termination: P(x) = sum_k f_k(L) a_k0

f_k(i) - probability of observing the first i symbols and being in state k
e_l(x_i) - probability of emitting the symbol x_i in state l
a_kl - probability of a transition from state k to state l
P(x) - probability of observing the entire sequence x
L - length of the sequence

Table 1: Forward Algorithm
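To make the recurrence in Table 1 concrete, here is a minimal Python sketch of the Forward Algorithm. The two-state model, its transition matrix a, and its emission table e below are invented for illustration (state 0 is the silent start/end state); they are not parameters from the report.

```python
def forward(x, a, e, n_states):
    """Probability P(x) that the HMM emits the symbol sequence x."""
    L = len(x)
    # f[i][k]: probability of emitting x[0..i-1] and ending in state k
    f = [[0.0] * (n_states + 1) for _ in range(L + 1)]
    f[0][0] = 1.0  # initialization: start in the silent state 0
    for i in range(1, L + 1):
        for l in range(1, n_states + 1):
            f[i][l] = e[l][x[i - 1]] * sum(
                f[i - 1][k] * a[k][l] for k in range(n_states + 1))
    # termination: add the transition from every state to the end state 0
    return sum(f[L][k] * a[k][0] for k in range(1, n_states + 1))

# Toy two-state HMM over the symbols 'p' and 'q' (numbers are illustrative).
a = {0: {0: 0.0, 1: 0.5, 2: 0.5},   # a[k][l]: transition probabilities
     1: {0: 0.1, 1: 0.5, 2: 0.4},
     2: {0: 0.1, 1: 0.3, 2: 0.6}}
e = {1: {'p': 0.9, 'q': 0.1},       # e[k][b]: emission probabilities
     2: {'p': 0.2, 'q': 0.8}}
```

With these toy numbers, forward(['p'], a, e, 2) evaluates to 0.055, and summing over both one-symbol sequences recovers the total probability of all length-1 paths.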

The Backward Algorithm can also be used to calculate the probability of an observed sequence. It works analogously to the Forward Algorithm; however, instead of calculating the probability values from the start state, the Backward Algorithm starts at the end state and moves towards the start state. In each backward step, it calculates the probability of being at that state, taking into account that the sequence from the current state to the end state has already been observed. The backward probability b_k(i), the probability of observing the remainder of the sequence given that the current state is k after observing i symbols, is defined as

b_k(i) = P(x_i+1 ... x_L | path_i = k).

The pseudocode for the Backward Algorithm is given in Table 2. [7] The final probability generated by either the Forward or the Backward Algorithm is the same.

Viterbi Algorithm

The Viterbi algorithm is used to find the most probable path taken across the states in an HMM. It uses dynamic programming and a recursive approach to find the path. The algorithm checks all possible paths leading to a state and selects the most probable one. Calculations are done using induction, in an approach similar to the Forward Algorithm, but instead of a summation the Viterbi algorithm uses maximization. The probability v_l(i+1) of the most probable path ending at state l after observing the first i+1 characters of the sequence is defined as

v_l(i+1) = e_l(x_i+1) max_k (v_k(i) a_kl).

The pseudocode for the Viterbi Algorithm is shown in Table 3. [7] It may be noted that v_0(0) is initialized to 1. The algorithm uses pointers to keep track of the states during a transition. The pointer ptr_i(l) stores the state that leads to state l after observing i symbols in the given sequence. It is determined as follows:

ptr_i(l) = argmax_k (v_k(i) a_kl).
Backward Algorithm

Initialization (i = L): b_k(L) = a_k0 for all k
Iteration (i = L-1...1): b_k(i) = sum_l a_kl e_l(x_i+1) b_l(i+1)
Termination: P(x) = sum_l a_0l e_l(x_1) b_l(1)

b_k(i) - probability of observing the rest of the sequence when in state k and having already seen i symbols
e_l(x_i) - probability of emitting the symbol x_i in state l
a_kl - probability of a transition from state k to state l
P(x) - probability of observing the entire sequence

Table 2: Backward Algorithm
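A matching Python sketch of the Backward Algorithm in Table 2, again on an invented two-state model (state 0 is the silent start/end state; the numbers are illustrative, not from the report):

```python
def backward(x, a, e, n_states):
    """Probability P(x), computed from the end state toward the start."""
    L = len(x)
    b = [[0.0] * (n_states + 1) for _ in range(L + 1)]
    for k in range(1, n_states + 1):
        b[L][k] = a[k][0]  # initialization: b_k(L) = a_k0
    for i in range(L - 1, 0, -1):
        for k in range(1, n_states + 1):
            # x[i] is the symbol at 1-based position i+1, i.e. x_{i+1}
            b[i][k] = sum(a[k][l] * e[l][x[i]] * b[i + 1][l]
                          for l in range(1, n_states + 1))
    # termination: leave the silent start state 0 and emit x_1
    return sum(a[0][l] * e[l][x[0]] * b[1][l] for l in range(1, n_states + 1))

a = {0: {0: 0.0, 1: 0.5, 2: 0.5},
     1: {0: 0.1, 1: 0.5, 2: 0.4},
     2: {0: 0.1, 1: 0.3, 2: 0.6}}
e = {1: {'p': 0.9, 'q': 0.1},
     2: {'p': 0.2, 'q': 0.8}}
```

On this toy model the backward pass returns the same probabilities as a forward pass would, as the text states.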

The most probable path is found by moving backwards through the pointers from the end state to the start state. By this method it is possible to obtain more than one path as the most probable one; in this case, one of the paths is randomly selected.

Viterbi Algorithm

Initialization (i = 0): v_0(0) = 1; v_k(0) = 0 for k > 0
Iteration (i = 1...L): v_l(i) = e_l(x_i) max_k (v_k(i-1) a_kl)
                       ptr_i(l) = argmax_k (v_k(i-1) a_kl)
Termination: P(x, path*) = max_k (v_k(L) a_k0)
             path*_L = argmax_k (v_k(L) a_k0)
Traceback (i = L...1): path*_(i-1) = ptr_i(path*_i)

v_k(i) - probability of the most probable path that ends at state k after observing the first i characters of the sequence
ptr_i(l) - pointer that stores the state leading to state l after observing i symbols
path*_i - state visited at position i in the sequence

Table 3: Viterbi Algorithm

Baum-Welch Algorithm

The Baum-Welch algorithm is used to adjust the HMM parameters when the path taken by each training sequence is not known. The algorithm counts the number of times each parameter is used when the observed set of symbols in the training sequence is input to the current HMM. The algorithm consists of two steps: the Expectation Step (E Step) and the Maximization Step (M Step). In the E Step, the forward and backward probabilities at each position in the sequence are calculated. The probability that the symbol at position i of the sequence x was observed in state k is obtained by combining these probabilities as follows:

P(path_i = k | x) = f_k(i) b_k(i) / f_N(L),

where f_N(L) = P(x) is the probability of the entire sequence. Using the training set of sequences, the expected number of times a symbol c is emitted at state k is defined as

n_k,c = sum_j (1 / f_N^j(L)) sum_{i : x_i^j = c} f_k^j(i) b_k^j(i).
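The Viterbi recurrence and traceback in Table 3 can be sketched in Python as follows, on the same kind of invented two-state model used for the forward pass (state 0 is the silent start/end state; all numbers are illustrative):

```python
def viterbi(x, a, e, n_states):
    """Most probable state path for sequence x, with its probability."""
    L = len(x)
    v = [[0.0] * (n_states + 1) for _ in range(L + 1)]
    ptr = [[0] * (n_states + 1) for _ in range(L + 1)]
    v[0][0] = 1.0  # initialization: the silent start state
    for i in range(1, L + 1):
        for l in range(1, n_states + 1):
            best = max(range(n_states + 1), key=lambda k: v[i - 1][k] * a[k][l])
            v[i][l] = e[l][x[i - 1]] * v[i - 1][best] * a[best][l]
            ptr[i][l] = best  # state that leads to l after i symbols
    # termination, then traceback through the stored pointers
    last = max(range(1, n_states + 1), key=lambda k: v[L][k] * a[k][0])
    path = [last]
    for i in range(L, 1, -1):
        path.append(ptr[i][path[-1]])
    path.reverse()
    return path, v[L][last] * a[last][0]

a = {0: {0: 0.0, 1: 0.5, 2: 0.5},
     1: {0: 0.1, 1: 0.5, 2: 0.4},
     2: {0: 0.1, 1: 0.3, 2: 0.6}}
e = {1: {'p': 0.9, 'q': 0.1},
     2: {'p': 0.2, 'q': 0.8}}
```

Because state 1 strongly favors emitting 'p' in this toy model, the best path for the sequence ['p', 'p'] stays in state 1.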

The number of times a transition from state k to state l occurs is given by

n_k->l = sum_j (1 / f_N^j(L)) sum_i f_k^j(i) a_kl e_l(x_i+1^j) b_l^j(i+1),

where the superscript j refers to an instance in the training set. In order to maximize performance, the Maximization Step uses the number of times a symbol is seen at a state and the number of times a transition occurs between two states (from the E Step) to update the transition and emission probabilities. The updated emission probability is

e_k(c) = (n_k,c + n'_k,c) / sum_c' (n_k,c' + n'_k,c'),

and the transition probability is updated using

a_kl = (n_k->l + n'_k->l) / sum_l' (n_k->l' + n'_k->l').

The pseudo-counts n'_k,c and n'_k->l (for emissions and transitions, respectively) are included because they prevent the numerator or denominator from assuming a value of zero, which happens when a particular state is not used in the given set of observed sequences or a transition does not take place. Table 4 contains the pseudocode of the Baum-Welch algorithm.

Baum-Welch Algorithm

Initialize the parameters of the HMM and the pseudocounts n'_k,c and n'_k->l
Iterate until convergence or for a fixed number of times
- E Step: for each training sequence j = 1...n
    calculate the forward probability f_k(i) for the sequence j
    calculate the backward probability b_k(i) for the sequence j
    add the contribution of sequence j to n_k,c and n_k->l
- M Step: update the HMM parameters using the expected counts n_k,c and n_k->l and the pseudocounts n'_k,c and n'_k->l

Table 4: Baum-Welch Algorithm

In the next section, we describe the data transformation procedures we employed in order to use hidden Markov models for weather data modeling. We then describe our approach to the use of HMMs as predictors of RWIS sensor values.
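The role of the pseudo-counts is easiest to see in the M Step update itself. The sketch below implements just that update from the formulas above; the function name and the dictionary layout of the counts are our own, and the E Step that would produce the expected counts is omitted.

```python
def m_step(n_emit, n_trans, pseudo_emit, pseudo_trans):
    """Turn expected counts from the E Step into updated probabilities.

    n_emit[k][c]  : expected number of times state k emitted symbol c
    n_trans[k][l] : expected number of k -> l transitions
    The pseudo-counts keep every numerator and denominator nonzero."""
    e = {}
    for k in n_emit:
        total = sum(n_emit[k][c] + pseudo_emit[k][c] for c in n_emit[k])
        e[k] = {c: (n_emit[k][c] + pseudo_emit[k][c]) / total
                for c in n_emit[k]}
    a = {}
    for k in n_trans:
        total = sum(n_trans[k][l] + pseudo_trans[k][l] for l in n_trans[k])
        a[k] = {l: (n_trans[k][l] + pseudo_trans[k][l]) / total
                for l in n_trans[k]}
    return e, a
```

Note that a symbol never observed in a state (count 0) still receives a small nonzero probability, which is exactly why the pseudo-counts are included.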

Weather Data Modeling

Minnesota has 91 RWIS meteorological measurement stations positioned alongside major highways to collect local pavement and atmospheric data. The State is laid out in a fixed grid format and RWIS stations are located within each grid; the stations themselves are positioned some 60 km apart. Each RWIS station utilizes various sensing devices, which are placed both below the highway surface and on towers above the roadway. The sensor data, that is, the variables that will be used in determining sensor malfunctions, depend on the particular type of sensor being evaluated. Each sensor data item consists of a site identifier, sensor identification, date, time, and the raw sensor values.

As in Phase I, a set of 13 RWIS stations was chosen for the project. The selected sites are not subject to microclimatic conditions and are contained in the same or adjacent grids located in the northern part of the State. Each RWIS station is located near one or more regional airports so that the data generated by their Automated Weather Observing System (AWOS) sensors can be compared with the data generated by the RWIS site. Airport AWOS sites are located no more than 30 miles from each of the selected RWIS sites (a distance ensuring that each RWIS site is linked to at least one regional airport while keeping to a minimum the number of airports lying within the 30-mile radius of any given RWIS site). Figure 2 illustrates the location of each site and its adjacent airports. As may be noted, the 13 sites cluster naturally into three groups; this clustering was used to advantage as we refined our analysis of the sensor data. Grouping the sites prevents comparisons between two sites located in totally different climatological regions. Each set can be compared to a simple climatological regime, and the climatic changes at one site are reflected at nearby sites, not necessarily at that instant but after a certain duration of time.
Clustering helps in predicting the weather condition at a site when that condition is known at other sites in the set.

Figure 2: Location of RWIS/AWOS sites.

Along with the weather information from the RWIS sites, we use meteorological data gathered from the AWOS sites to help with the prediction of values at an RWIS site. The location of an RWIS site with respect to the site's topography is variable, as these sensors sit near a roadway and are sometimes located on bridges. AWOS sites are located at airports on flat surrounding topography, which leads to better comparability between weather data obtained from these sites. Including AWOS information generally proved beneficial in predicting values at an RWIS site. Our overall research project focuses on a subset of sensors that includes precipitation, air temperature, and visibility. These sensors were identified by the Minnesota Department of Transportation as being of particular interest. However, data from other sensors are used to determine whether these particular sensors are operating properly. For HMMs, we concentrated on precipitation and air temperature, since the M5P algorithm performed so well in detecting visibility sensor malfunctions. [1]

RWIS Sensor Data

RWIS sensors report observed values every 10 minutes, resulting in 6 records per hour. Greenwich Mean Time (GMT) is used for recording the values. The meteorological conditions reported by RWIS sensors are air temperature, surface temperature, dew point, visibility, precipitation, the amount of precipitation accumulation, wind speed, and wind direction. Following is a short description of these variables along with the format they follow.

(1) Air Temperature. Air temperature is recorded in Celsius in increments of one hundredth of a degree, with values ranging up to 9999 and a designated value indicating an error. For example, a temperature of 10.5 degrees Celsius is reported as 1050.

(2) Surface Temperature. Surface temperature is the temperature measured near the road surface. It is recorded in the same format as air temperature.

(3) Dew Point. Dew point is defined as the temperature at which dew forms. It is recorded in the same format as air temperature.
(4) Visibility. Visibility is the maximum distance at which it is possible to see without any aid. The visibility reported is the horizontal visibility, recorded in tenths of a meter; a value of -1 indicates an error.

(5) Precipitation. Precipitation is the amount of water in any form that falls to earth. Precipitation is reported using three different variables: precipitation type, precipitation intensity, and precipitation rate. A coded approach is used for reporting precipitation type and intensity. The codes used and the information they convey are given in Table 5. The precipitation type gives the form of the water that reaches the earth's surface. A type code of 0 indicates no precipitation, a code of 1 indicates the presence of some amount of precipitation that the sensor fails to identify, a code of 2 represents rain, and the codes 3, 41, and 42 represent snowfall with increasing intensity.
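The type codes just described can be captured in a small lookup table. The sketch below restates that mapping and collapses it to a simple presence/absence flag of the kind used later when comparing RWIS and AWOS precipitation reports (the names are ours):

```python
# RWIS precipitation-type codes as described above (see Table 5).
RWIS_PRECIP_TYPE = {
    0: "no precipitation",
    1: "precipitation detected, not identified",
    2: "rain",
    3: "snow",
    41: "moderate snow",
    42: "heavy snow",
}

def precipitation_present(type_code):
    """Collapse the type codes into a single presence/absence flag."""
    return type_code != 0  # any nonzero type code means some precipitation
```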

Code  Precipitation Type                        Code  Precipitation Intensity
0     No precipitation                          0     None
1     Precipitation detected, not identified    1     Light
2     Rain                                      2     Slight
3     Snow                                      3     Moderate
41    Moderate Snow                             4     Heavy
42    Heavy Snow                                5     Other
                                                6     Unknown

Table 5: RWIS Precipitation Codes

The precipitation intensity indicates how strong the precipitation is when present. When no precipitation is present, the intensity is given by code 0. Codes 1 to 4 indicate increasing intensity of precipitation. Code 5 indicates an intensity that cannot be classified into any of the previous codes, and code 6 is used when the sensor is unable to measure the intensity. A value of 255 for precipitation intensity indicates either an error or the absence of this type of sensor. Precipitation rate is measured in millimeters per hour, with values ranging from 000 to 999, except for a value of -1 that indicates either an error or the absence of this type of sensor.

(6) Precipitation Accumulation. Precipitation accumulation reports the amount of water falling, in millimeters; a value of -1 indicates an error. Precipitation accumulation is reported for the last 1, 3, 6, 12, and 24 hours.

(7) Wind Speed. Wind speed is recorded in tenths of meters per second, with values ranging from 0000 to 9999 and a value of -1 indicating an error. For example, a wind speed of 2.05 meters/second is reported as 205.

(8) Wind Direction. Wind direction is reported as an angle, with values ranging from 0 to 360 degrees. A value of -1 indicates an error.

(9) Air Pressure. Air pressure is defined as the force exerted by the atmosphere at a given point. The pressure reported is the pressure when reduced to sea level. It is measured in tenths of a millibar, and the

values reported fall within a fixed range; a value of -1 indicates an error.

AWOS Sensor Data

AWOS is a collection of systems, including meteorological sensors, a data collection system, a centralized server, and displays for human interaction, used to observe and report weather conditions at airports in order to help pilots in maneuvering aircraft and to help airport authorities ensure the proper working of airports and runways. AWOS sensors report values on an hourly basis; local time is used in reporting. AWOS sensors report air temperature, dew point, visibility, weather conditions in a coded manner, air pressure, and wind information. Following is a description of these variables.

(1) Air Temperature. The air temperature is reported as integer values ranging from -140°F to 140°F.

(3) Dew Point. The dew point temperature is reported in the same format as air temperature.

(4) Visibility. The visibility is reported as a set of values ranging from 3/16 mile to 10 miles. These values can be converted into floating point numbers for simplicity.

(5) Weather Code. AWOS uses a coded approach to represent the current weather condition. 80 different possible codes are listed to indicate various conditions. Detailed information about these codes is given in the document titled Data Documentation for Data Set 3283: ASOS Surface Airways Hourly Observations, published by the National Climatic Data Center [NCDC, 2005].

(6) Air Pressure. The air pressure is reported in tenths of a millibar.

(7) Wind Speed and Direction. AWOS encodes wind speed and direction into a single variable. Wind speed is measured in knots, ranging from 0 knots to 999 knots. Wind direction is measured in degrees, ranging from 0 to 360 in increments of 10. The single variable for wind has five digits, with the first two representing direction and the last three representing speed.
For example, the value for a wind speed of 90 knots and a direction of 80 degrees will be 08090. As noted above, the data formats used for reporting RWIS and AWOS sensor values differ. To make the data from these two sources compatible, the data need to be transformed into a common format. In some cases, additional preprocessing is necessary in order to discretize the data. We describe the data transformations we utilized in the following subsection.
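Unpacking the combined wind variable can be sketched as follows. The report states that the first two digits carry direction and the last three carry speed; since direction is reported in 10-degree increments, we assume here that the two direction digits store degrees divided by 10 (an assumption, as is the function name):

```python
def decode_awos_wind(value):
    """Split the five-digit AWOS wind variable into (direction, speed).

    Assumes the two direction digits store degrees divided by 10,
    since direction is reported in 10-degree increments."""
    digits = f"{value:05d}"
    direction = int(digits[:2]) * 10  # e.g. '08' -> 80 degrees
    speed = int(digits[2:])           # e.g. '090' -> 90 knots
    return direction, speed
```

Under that assumption, the report's example of 90 knots at 80 degrees corresponds to the value 08090.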

Sensor Data Transformation

RWIS sensors report data every 10 minutes, whereas AWOS provides hourly reports. Thus, RWIS data need to be averaged when used in conjunction with AWOS data. The changes made to the RWIS data in order to produce a common format are as follows. RWIS uses Greenwich Mean Time (GMT) and AWOS uses Central Time (CT); the reporting time in RWIS is changed from GMT to CT. Variables such as air temperature, surface temperature, and dew point that are reported in Celsius by RWIS are converted to Fahrenheit, the format used by AWOS. To convert six readings per hour to a single hourly reading in RWIS, a simple average is taken. Distance, which is measured in kilometers by RWIS for visibility and wind speed, is converted into miles, the format used by AWOS. To obtain hourly averages for the precipitation type and intensity reported by RWIS, we use the most frequently reported code for that hour; for precipitation rate, a simple average is used.

RWIS sites report precipitation type and intensity as separate variables, whereas AWOS combines them into a single weather code. [8] Mapping the precipitation type and intensity reported by RWIS to the AWOS weather codes is not feasible without compromises, so we keep these variables in their original formats. Of the three features selected for use in predictions, precipitation type is the one for which a direct comparison between RWIS and AWOS values cannot be made, because each uses a different format for reporting. For broader comparison, we combine all the codes that report different forms of precipitation, in both RWIS and AWOS, into a single code indicating the presence of some form of precipitation.

Apart from the features obtained from the RWIS and AWOS sites, we also made use of historical information to represent our training data. This increases the amount of weather information we have for a given location or region.
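The common-format conversions above can be sketched in a few lines of Python (the function names are ours; the mean/mode choices follow the rules stated in the text):

```python
from statistics import mean, mode

def celsius_to_fahrenheit(c):
    """Convert an RWIS Celsius reading to the Fahrenheit format used by AWOS."""
    return c * 9.0 / 5.0 + 32.0

def hourly_value(readings):
    """Collapse six 10-minute readings into one hourly value by simple average."""
    return mean(readings)

def hourly_code(codes):
    """Hourly precipitation type/intensity: the most frequently reported code."""
    return mode(codes)
```

For example, six 10-minute temperatures are averaged into one hourly reading, while six precipitation-type codes are reduced to the single code that occurred most often.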
We collected hourly temperature values for the 14 AWOS sites (see Figure 2) from the Weather Underground website² for a duration of seven years, namely, 1997 to 2004. For many locations the temperature was reported more than once an hour; in such cases the average of the temperature across the hour was taken. As we already have readings for temperature from two different sources, RWIS and AWOS, we use the information gathered from the website to augment our dataset by deriving values such as the projected hourly temperature.

To calculate the projected hourly temperatures for an AWOS site, we use past temperature information obtained from the website for this location. The projected hourly temperature for an hour of a day is defined as the sum of the average temperature reported for that day of the year and the monthly average difference in temperature for that hour of the day in the respective month. The steps followed to calculate the projected hourly temperature for an AWOS site are as follows:

- Obtain the hourly temperatures for the respective AWOS site.
- Calculate the average temperature for a day as the mean of the hourly readings across the day.
- Calculate the hourly difference as the difference between the actual temperature seen at that hour and the average daily temperature.
- Calculate the average difference in temperature per hour for a particular month (the monthly average difference) as the average of all hourly differences in a month for that hour, obtained

from all the years in the data collected.
- Calculate the projected hourly temperature for a day as the sum of the average temperature for that day and the monthly average difference for that hour of the day.

For example, suppose the temperature value at the AWOS site KORB for the first hour of January 1st for the year 1997 is 32°F. The temperature values for all 24 hours on January 1st are averaged; assume this value is 30°F. The hourly difference for this day for the first hour will then be 2°F. Now assume the average of all the first-hour difference values for the month of January for KORB from the years 1997 to 2004 is 5°F. Then the projected temperature value for the first hour on January 1st, 1997 will be 35°F, which is the sum of the average temperature seen on January 1st, 1997 and the monthly average difference for the first hour in the month of January. The projected hourly temperature was used as a feature in the datasets used for predicting weather variables at an RWIS site and is also used in the process of discretizing temperature.

Discretization of the Data

Hidden Markov models require the data to be discrete. If the data are continuous, discretization involves finding a set of values that split the continuous sequence into intervals; each interval is then assigned a single discrete value. Discretization can be accomplished using unsupervised or supervised methods. In unsupervised discretization, the data are divided into a fixed number of equal intervals, without any prior knowledge of the target class values of the instances in the given dataset. In supervised discretization, the splitting point is determined at a location that increases the information gain with respect to the given training dataset. [9] Dougherty [10] gives a brief description of the process and a comparison of unsupervised and supervised discretization. WEKA [11] provides a wide range of options to discretize any continuous variable, using supervised and unsupervised mechanisms.
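The projected-hourly-temperature steps described above can be sketched as follows. The data layout and function names are ours; each "day" is simply a list of hourly readings (the toy days in the usage example have only two readings to keep the arithmetic clean):

```python
from statistics import mean

def monthly_avg_diff(days, hour):
    """Average over the supplied days of (actual temp at `hour` - daily mean)."""
    return mean(temps[hour] - mean(temps) for temps in days)

def projected_hourly_temp(day, hour, days_in_month):
    """Daily average plus the monthly average difference for that hour."""
    return mean(day) + monthly_avg_diff(days_in_month, hour)
```

For instance, with two toy days whose hour-0 deviations from their daily means are +5 and +3, the monthly average difference at hour 0 is +4, and a flat day averaging 30 projects to 34 at that hour.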
We propose a new method for discretization of RWIS temperature sensor values that uses data obtained from other sources. Namely, using the projected hourly temperature for an AWOS site along with the current reported temperature for the closest RWIS site, we determine a class value for the current RWIS temperature value. The projected hourly temperature for the AWOS site closest to the RWIS site for that specific hour is subtracted from the actual reported temperature at the RWIS site. This difference is then divided by the standard deviation of the projected hourly temperature for that AWOS site. The result indicates how much the actual value deviates from the projected value, that is, the number of standard deviations from the projected value (or the mean):

num_stdev = (actual_temp − proj_temp) / std_dev

The number of standard deviations determines the class that the temperature value is assigned, as shown in Table 6. The ranges and the number of classes can be determined by the user or based on requirements of the algorithm. For example, to convert an actual temperature of 32ºF at an RWIS site, we calculate the projected temperature for that hour at the associated AWOS site, say 30ºF, divide the 2ºF difference by the standard deviation of the projected hourly temperatures for that site, and use the resulting number of standard deviations to assign the reading to a class; here, 32ºF is assigned to class 6. The advantage of determining classes in this manner is that the effect of season and time of day is at least partially removed from the data.
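The class assignment can be sketched as below. The cut points listed here are illustrative placeholders only; the actual class boundaries are those of Table 6.

```python
def temperature_class(actual_temp, proj_temp, std_dev, cuts):
    """Assign a temperature class from the number of standard deviations
    the actual reading deviates from the projected hourly temperature.
    `cuts` is an ascending list of num_stdev cut points; n cut points
    give n + 1 classes, numbered from 1."""
    num_stdev = (actual_temp - proj_temp) / std_dev
    return 1 + sum(1 for c in cuts if num_stdev > c)

# Eight illustrative cut points giving nine classes; the exact boundaries
# used in the report are those of Table 6.
CUTS = [-2.0, -1.0, -0.5, -0.25, 0.25, 0.5, 1.0, 2.0]
```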

Table 6: RWIS Temperature Classes. Each class value from 1 through 9 corresponds to a range of num_stdev, from large negative deviations (class 1), through values near zero, up to large positive deviations (class 9).

Feature Vectors

To build models that can predict values reported by an RWIS sensor, we require a training set and a test set that contain data related to the weather conditions. Each member of these sets is represented by a so-called feature vector for the dataset, consisting of a set of input attributes and an output attribute, with the output attribute corresponding to the RWIS sensor of interest. The input attributes for a feature vector in the test set are applied to the model built from the training set to predict the output attribute value. This predicted value is compared with the sensor's actual value to estimate model performance. The feature vector has the form

input_1, ..., input_n, output

where input_i is an input attribute and output is the output attribute. The performance of a model is evaluated using cross-validation. The data is divided repeatedly into training and testing groups using 10-fold cross-validation, repeated 10 times. In 10-fold cross-validation the data is randomly permuted and then divided into 10 equal-sized groups. Each group is then in turn used as test data while the other 9 groups are used to create a model (in this way every data point is used as a test point exactly once). The results for the ten groups are summed, and the overall result for the test data is reported. Predicting a value for an RWIS sensor corresponds to predicting the weather condition being reported by that sensor. Since variations in a weather condition, like air temperature, are dependent to some degree on other weather conditions, such as wind, precipitation, and air pressure, many different types of sensors are used to predict a single sensor's value at a given location.
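The repeated 10-fold cross-validation procedure can be sketched as follows; the function names and the model/evaluation callback interface are assumptions for illustration.

```python
import random

def repeated_kfold_cv(data, build_model, evaluate, repeats=10, folds=10, seed=0):
    """Repeated k-fold cross-validation: permute the data, split it into
    `folds` equal groups, use each group in turn as test data while the
    remaining groups train the model, sum the per-fold results, and
    average the totals over the repeats."""
    rng = random.Random(seed)
    repeat_totals = []
    for _ in range(repeats):
        perm = list(data)
        rng.shuffle(perm)           # random permutation before splitting
        size = len(perm) // folds
        total = 0.0
        for k in range(folds):
            test = perm[k * size:(k + 1) * size]
            train = perm[:k * size] + perm[(k + 1) * size:]
            total += evaluate(build_model(train), test)
        repeat_totals.append(total)
    return sum(repeat_totals) / len(repeat_totals)
```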
Earlier readings (such as those during the previous 24-hour period) from a sensor can be used to predict present conditions, as changes in weather conditions follow a pattern and are not totally random. In addition, as noted in Figure 2, we divided the RWIS-AWOS sites into clusters. Any site within a single cluster is assumed to have climatic conditions similar to any other site in the cluster. This assumption allows us to use both RWIS and AWOS data obtained from the nearby sites (those in the same cluster) to predict variables at a given RWIS site in the cluster. In Hidden Markov models, the instance in a dataset used for predicting a class value of a variable consists of a string of symbols that form the feature vector. The symbols that constitute the feature

vector are unique and are generated from the training set. Predicting a weather variable value at an RWIS site that is based on hourly readings for a day yields a string of length 24. This string forms the feature vector which is used in the dataset. When information from a single site is used for predictions, the class values taken by the variable form the symbol set. When the variable information from two or more sites is used together, a class value is obtained by appending the variable's value from each site together. For example, to predict a temperature class of site 19 using the Northeast cluster in Figure 2, we include temperature data from sites 27 and 67. The combination of class values from these three sites for a particular hour forms a class string. For example, if the temperature class values at sites 19, 27, and 67 are 3, 5, and 4, respectively, then the class string formed by appending the class values is 354. The sequence of hourly class strings for a day forms an instance in the dataset. All unique class strings that are seen in the training set are arranged in ascending order and each class string is assigned a symbol. For hourly readings taken during a day we arrive at a string which is 24 symbols long, which forms the feature vector used by the HMM.

Hidden Markov Approach

To build an HMM model for a given number of states and symbols, the Baum-Welch algorithm is first applied to the training set to determine the initial state probabilities, the transition probabilities, and the emission probabilities. Each symbol generated using the training set is emitted by a state. Instances from the test set are then passed to the Viterbi algorithm, which yields the most probable state path. To predict a variable's class value, we need the symbol that would be observed across the most probable state path found by the Viterbi algorithm. In order to obtain such symbols, we had to modify the Viterbi algorithm. Table 7 contains the modified algorithm.
The algorithm works similarly to the original algorithm (Table 3). It calculates the probability of the most probable path obtained after observing the first i characters of the given sequence and ending at state l using

v_l(i) = e_l(x_i) · max_k (v_k(i−1) · a_kl)

where e_l(x_i) is the emission probability of the symbol x_i at state l and a_kl is the transition probability of moving from state k to state l. However, in the modified Viterbi algorithm, in place of e_l(x_i) we use the emission probability value that is the maximum over all of the possible symbols that can be seen at state l when the symbol x_i is present in the actual symbol sequence. We calculate v_l(i) in the modified Viterbi algorithm as

v_l(i) = max_k (v_k(i−1) · a_kl) · max_s (e_l(symbolset_s))

The actual symbol seen in the symbol string at that respective time is taken and all its possible symbols are found. Of the possible symbols or class strings, only those that are seen in the training instances are considered in the set of possible symbols. The symbol from the set of possible symbols that has the highest emission probability is selected as the symbol observed for the given state. To retain the symbol that was observed at a state, the modified Viterbi algorithm uses the pointer bestsymbol_i(l), which contains the position of the selected symbol in the set of possible symbols seen at state l for a position i in the sequence. It is given by

bestsymbol_i(l) = argmax_s (e_l(symbolset_s))

Modified Viterbi Algorithm

Initialization (i = 0):
    v_0(0) = 1, v_k(0) = 0 for k > 0

Iteration (i = 1 ... L):
    symbolset = allpossiblesymbols(x_i)
    v_l(i) = max_k (v_k(i−1) · a_kl) · max_s (e_l(symbolset_s))
    ptr_i(l) = argmax_k (v_k(i−1) · a_kl)
    bestsymbol_i(l) = argmax_s (e_l(symbolset_s))

Termination:
    P(x, path*) = max_k (v_k(L) · a_k0)
    path*_L = argmax_k (v_k(L) · a_k0)

Traceback (i = L ... 1):
    path*_(i−1) = ptr_i(path*_i)

Symbol observed (i = 1 ... L):
    symbol(i) = bestsymbol_i(path*_i)

v_l(i) – probability of the most probable path obtained after observing the first i characters of the sequence and ending at state l
ptr_i(l) – pointer that stores the state that leads to state l after observing i symbols
path*_i – state visited at position i in the sequence
bestsymbol_i(l) – the most probable symbol seen at state l at position i in the sequence
symbolset_s – the set of all possible symbols that can be emitted from a state when a certain symbol is actually seen
allpossiblesymbols(x_i) – function that generates all possible symbols for the given symbol x_i
symbol(i) – symbol observed at the ith position in the string

Table 7: Modified Viterbi Algorithm

The most probable path generated by the algorithm is visited from the start, and by retaining the symbols that were observed at each state for a particular time, we can determine the symbols with the highest emission probability at each state in the most probable state path. This new symbol sequence gives the observed sequence for that given input sequence; in other words, it is the predicted sequence for the given actual sequence. To predict the class values for a variable at an RWIS site for a given instance, we pass the instance through the modified Viterbi algorithm, which then gives the symbols that have the highest emission probabilities along the most probable path. The symbol found is then converted back into a class string, and the class value for the RWIS site of interest is regarded as the predicted value.
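A minimal sketch of the modified Viterbi recurrence follows, assuming probabilities are stored in plain dictionaries (`init`, `trans`, `emit`) and a callback `possible(x)` plays the role of allpossiblesymbols; the initialization is folded into the first iteration step for brevity.

```python
def modified_viterbi(obs, states, init, trans, emit, possible):
    """Sketch of the modified Viterbi algorithm of Table 7: along the most
    probable state path, the symbol emitted at each step is the candidate
    symbol (from possible(x_i)) with the highest emission probability at
    that state. Plain probability products are used, which is adequate
    for the short (length-24) sequences considered here."""
    L = len(obs)
    v = [{} for _ in range(L)]          # v[i][l]: best path probability
    ptr = [{} for _ in range(L)]        # back-pointers ptr_i(l)
    best_sym = [{} for _ in range(L)]   # bestsymbol_i(l)
    for i, x in enumerate(obs):
        candidates = possible(x)        # symbols seen in training for x
        for l in states:
            s = max(candidates, key=lambda c: emit[l].get(c, 0.0))
            best_sym[i][l] = s
            if i == 0:
                v[i][l] = init[l] * emit[l].get(s, 0.0)
                ptr[i][l] = None
            else:
                k = max(states, key=lambda k: v[i - 1][k] * trans[k][l])
                v[i][l] = v[i - 1][k] * trans[k][l] * emit[l].get(s, 0.0)
                ptr[i][l] = k
    # Termination and traceback of the most probable state path.
    path = [max(states, key=lambda l: v[L - 1][l])]
    for i in range(L - 1, 0, -1):
        path.append(ptr[i][path[-1]])
    path.reverse()
    # The symbols retained along the path form the predicted sequence.
    return [best_sym[i][path[i]] for i in range(L)], path
```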

For example, to predict temperature class values at site 19 in Figure 2, we use the class string for temperature class values from sites 27 and 67 appended to the value observed at site 19. Suppose the class string sequence for a given day is

343, 323, 243, 544, ..., 324

a feature vector of length 24, one class string for each hour in the day. This sequence is passed to the modified Viterbi algorithm, which yields the observed sequence along with the most probable state path. Now suppose the output sequence for the given sequence is

433, 323, 343, 334, ..., 324

As we are interested in predicting the class values at site 19, taking only the first class value from each class string we get for the actual values

3, 3, 2, 5, ..., 3

The observed (or predicted) values for the day are

4, 3, 3, 3, ..., 3

The distance between these class values gives the accuracy of the predictions made by the model. The performance of the HMM on the given test set, on precipitation type and discretized temperature, can be evaluated using the same methods that were used in the general classification approach of Phase I [1].

Empirical Analysis

Based on the instance given, an HMM tries to predict the most probable path taken across the states. For weather data the path yields the predicted values of the weather sensor, using the surrounding sensors for additional information. HMMs can be used to classify discrete attributes. When data is present in the form of a time series, such as hourly precipitation, HMMs can be used to identify the path of predicted values from which we can further obtain the values observed along the path. We used the modified Viterbi algorithm to predict the symbol observed when a state in the most probable path is reached for the given instance. HMMs require a training dataset to build the model, that is, to estimate the initial state, transition probabilities, and emission probabilities. The test dataset was used to evaluate the model. Multiple n-fold cross-validation was used to estimate model performance.
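Extracting the site-of-interest class values and the actual-to-predicted distances from class strings, as in the example above, can be sketched as (the function name is illustrative):

```python
def site_distances(actual_strings, predicted_strings, pos=0):
    """Take the class value of the site of interest (character `pos` of
    each hourly class string) and return the actual values, the predicted
    values, and their absolute distances."""
    actual = [int(s[pos]) for s in actual_strings]
    predicted = [int(s[pos]) for s in predicted_strings]
    return actual, predicted, [abs(a - p) for a, p in zip(actual, predicted)]
```

A distance of zero at an hour is a perfectly classified instance; larger distances indicate how far the predicted class drifted from the actual one.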
As HMMs require the presence of all symbols in the symbol sequence, any time sequence with missing data (due to a sensor not reporting or transmission problems) was omitted from the dataset used for predictions. HMMs require class value information. For the RWIS sites that were used for predictions, we discretized the temperature following the method previously described; namely, the temperature was broken down into nine classes based on the number of standard deviations the actual temperature value differs from the projected value (Table 6). For nearby sites, we used a broader classification of temperature values, as shown in Table 8; such sites required less precise differentiation since they are providing additional information for making predictions. For example, to predict temperature values at site 19, we used temperature values of the nearby sites 27 and 67 for additional information. Each temperature value at site 19 was represented as one of nine possible classes, and temperatures at sites 27 and 67 were represented as one of five classes.

Table 8: RWIS Temperature Classes for Nearby Sites. Each class value from 1 through 5 corresponds to a broader range of num_stdev than the ranges of Table 6.

In the following experiments, we trained the Hidden Markov Model using the methods available in the HMM Toolbox for MATLAB developed by Murphy [12]. The modified Viterbi algorithm was used for testing. We performed ten iterations of the Baum-Welch algorithm during the training of the HMM. We set the number of states in the model to 24, and each state was allowed to emit all possible symbols obtained from the training set. Since we are using hourly readings for a day, each instance in the dataset is a string of symbols with a length of 24.

Training the HMM

Two different methods were devised to train the HMMs to predict the temperature class value reported at an RWIS site. The temperature class information from the RWIS site and the other RWIS sites in its group is used to form the feature symbols. The results obtained from predicting temperature class values were used to compare the two methods.

(1) Method A

In this approach we merged all the data from a group and trained the HMM using this data. As noted earlier, the site being used for prediction has 9 temperature classes (Table 6) and the other sites in the group have 5 temperature classes (Table 8). Thus, each site has its own merged dataset. We create the dataset in which each instance is a string of symbols representing hourly readings per day, where a symbol for an hour is obtained by appending that hour's temperature class values for all of the RWIS sites in a group. For example, to predict temperature class values at site 19, we used the temperature classes from sites 27 and 67. If the temperature class values at time t seen at sites 19, 27 and 67 are 6, 3 and 2, then the class string generated for time t will be 632. We estimated the initial state, transition, and emission probabilities by applying the Baum-Welch algorithm to this dataset with combined class information.
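Method A's per-hour class strings, and the symbol table built from the unique class strings in the training set, can be sketched as (names are illustrative):

```python
def hourly_class_strings(site_classes, sites):
    """Form one class string per hour by appending each site's class value,
    with the site being predicted listed first (e.g. sites 19, 27, 67 with
    classes 6, 3, 2 at some hour give the string "632")."""
    hours = len(site_classes[sites[0]])
    return ["".join(str(site_classes[s][h]) for s in sites) for h in range(hours)]

def symbol_table(training_days):
    """Assign each unique class string seen in the training set a symbol
    (here an integer index), in ascending order of the strings."""
    unique = sorted({cs for day in training_days for cs in day})
    return {cs: i for i, cs in enumerate(unique)}
```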
(2) Method B

In this approach the data from the sites is not merged. Instead, data from each site is used for training the model, and the emission probabilities of all sites in the group are multiplied to obtain a single emission matrix for the final model. We created a separate dataset for each RWIS site in which each instance is a string of symbols representing hourly temperature class values per day. We applied the Baum-Welch algorithm to the dataset for each RWIS site to estimate the initial state, transition probabilities, and emission

probabilities. The emission probability matrices obtained for each RWIS site in a group were then joined together by multiplying the respective values in each cell of the matrix. The resultant emission matrix was made stochastic, that is, the values in each row sum to 1. In order to increase probabilities with small values, we applied the m-estimate to the probability values in the matrix, using a value of 20 for m and the inverse of the number of classes as the value of p. In this case the number of classes denotes the total number of class values taken by the variable we are trying to predict. By applying the m-estimate, we get new values in each cell of the matrix as defined by

matrix(row,col) = (matrix(row,col) + m·p) / (1 + m)

We used the initial state and transition probabilities of the RWIS site whose temperature value is to be predicted in the model. We used the temperature values at the RWIS sites for the years 2002 and 2003 in the training set, while the test set contained temperature information from January 2004 to April 2004. We discretized the temperature values by allocating them into nine classes. Training was done using the two methods mentioned above, and the temperature class value was predicted for instances in the test data using the modified Viterbi algorithm.

Results

We used the absolute distance between the actual and predicted class values for temperature as a measure to estimate the performance of the algorithm. Figure 3 shows the percentage of instances in the test set for each distance between the actual and predicted class, averaged across the results from the different RWIS sites. The HMM was trained using Method A and Method B. A distance of zero indicates a perfectly classified instance.

Figure 3: Percentage of Instances in Test Set with Differences between Actual and Predicted Values.
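Method B's combination of per-site emission matrices, with the m-estimate applied, can be sketched as follows; matrices are plain nested lists here, rows are normalized before smoothing, and (with p equal to the inverse of the number of classes) the m-estimate preserves the row sums.

```python
def combine_emissions(matrices, m=20):
    """Method B: multiply the per-site emission matrices cell by cell,
    make each row stochastic, then apply the m-estimate with
    p = 1 / (number of classes) to lift small probabilities."""
    rows, cols = len(matrices[0]), len(matrices[0][0])
    p = 1.0 / cols   # inverse of the number of class values being predicted
    combined = [[1.0] * cols for _ in range(rows)]
    for mat in matrices:
        for r in range(rows):
            for c in range(cols):
                combined[r][c] *= mat[r][c]
    for r in range(rows):
        total = sum(combined[r])   # normalize so the row sums to 1
        combined[r] = [v / total for v in combined[r]]
        # m-estimate smoothing; with p = 1/cols the row still sums to 1
        combined[r] = [(v + m * p) / (1 + m) for v in combined[r]]
    return combined
```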


More information

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat

More information

Bayes Decision Rule and Naïve Bayes Classifier

Bayes Decision Rule and Naïve Bayes Classifier Bayes Decision Rule and Naïve Bayes Classifier Le Song Machine Learning I CSE 6740, Fall 2013 Gaussian Mixture odel A density odel p(x) ay be ulti-odal: odel it as a ixture of uni-odal distributions (e.g.

More information

Estimating Parameters for a Gaussian pdf

Estimating Parameters for a Gaussian pdf Pattern Recognition and achine Learning Jaes L. Crowley ENSIAG 3 IS First Seester 00/0 Lesson 5 7 Noveber 00 Contents Estiating Paraeters for a Gaussian pdf Notation... The Pattern Recognition Proble...3

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

Kinematics and dynamics, a computational approach

Kinematics and dynamics, a computational approach Kineatics and dynaics, a coputational approach We begin the discussion of nuerical approaches to echanics with the definition for the velocity r r ( t t) r ( t) v( t) li li or r( t t) r( t) v( t) t for

More information

National 5 Summary Notes

National 5 Summary Notes North Berwick High School Departent of Physics National 5 Suary Notes Unit 3 Energy National 5 Physics: Electricity and Energy 1 Throughout the Course, appropriate attention should be given to units, prefixes

More information

Reduced Length Checking Sequences

Reduced Length Checking Sequences Reduced Length Checing Sequences Robert M. Hierons 1 and Hasan Ural 2 1 Departent of Inforation Systes and Coputing, Brunel University, Middlesex, UB8 3PH, United Kingdo 2 School of Inforation echnology

More information

Data-Driven Imaging in Anisotropic Media

Data-Driven Imaging in Anisotropic Media 18 th World Conference on Non destructive Testing, 16- April 1, Durban, South Africa Data-Driven Iaging in Anisotropic Media Arno VOLKER 1 and Alan HUNTER 1 TNO Stieltjesweg 1, 6 AD, Delft, The Netherlands

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee227c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee227c@berkeley.edu October

More information

List Scheduling and LPT Oliver Braun (09/05/2017)

List Scheduling and LPT Oliver Braun (09/05/2017) List Scheduling and LPT Oliver Braun (09/05/207) We investigate the classical scheduling proble P ax where a set of n independent jobs has to be processed on 2 parallel and identical processors (achines)

More information

1 Proof of learning bounds

1 Proof of learning bounds COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #4 Scribe: Akshay Mittal February 13, 2013 1 Proof of learning bounds For intuition of the following theore, suppose there exists a

More information

Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need for Parameterization

Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need for Parameterization Use of PSO in Paraeter Estiation of Robot Dynaics; Part One: No Need for Paraeterization Hossein Jahandideh, Mehrzad Navar Abstract Offline procedures for estiating paraeters of robot dynaics are practically

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

When Short Runs Beat Long Runs

When Short Runs Beat Long Runs When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Time-Varying Jamming Links

Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Time-Varying Jamming Links Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Tie-Varying Jaing Links Jun Kurihara KDDI R&D Laboratories, Inc 2 5 Ohara, Fujiino, Saitaa, 356 8502 Japan Eail: kurihara@kddilabsjp

More information

Bootstrapping Dependent Data

Bootstrapping Dependent Data Bootstrapping Dependent Data One of the key issues confronting bootstrap resapling approxiations is how to deal with dependent data. Consider a sequence fx t g n t= of dependent rando variables. Clearly

More information

Identical Maximum Likelihood State Estimation Based on Incremental Finite Mixture Model in PHD Filter

Identical Maximum Likelihood State Estimation Based on Incremental Finite Mixture Model in PHD Filter Identical Maxiu Lielihood State Estiation Based on Increental Finite Mixture Model in PHD Filter Gang Wu Eail: xjtuwugang@gail.co Jing Liu Eail: elelj20080730@ail.xjtu.edu.cn Chongzhao Han Eail: czhan@ail.xjtu.edu.cn

More information

Multilayer Neural Networks

Multilayer Neural Networks Multilayer Neural Networs Brain s. Coputer Designed to sole logic and arithetic probles Can sole a gazillion arithetic and logic probles in an hour absolute precision Usually one ery fast procesor high

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

Randomized Recovery for Boolean Compressed Sensing

Randomized Recovery for Boolean Compressed Sensing Randoized Recovery for Boolean Copressed Sensing Mitra Fatei and Martin Vetterli Laboratory of Audiovisual Counication École Polytechnique Fédéral de Lausanne (EPFL) Eail: {itra.fatei, artin.vetterli}@epfl.ch

More information

a a a a a a a m a b a b

a a a a a a a m a b a b Algebra / Trig Final Exa Study Guide (Fall Seester) Moncada/Dunphy Inforation About the Final Exa The final exa is cuulative, covering Appendix A (A.1-A.5) and Chapter 1. All probles will be ultiple choice

More information

NBN Algorithm Introduction Computational Fundamentals. Bogdan M. Wilamoswki Auburn University. Hao Yu Auburn University

NBN Algorithm Introduction Computational Fundamentals. Bogdan M. Wilamoswki Auburn University. Hao Yu Auburn University NBN Algorith Bogdan M. Wilaoswki Auburn University Hao Yu Auburn University Nicholas Cotton Auburn University. Introduction. -. Coputational Fundaentals - Definition of Basic Concepts in Neural Network

More information

Convolutional Codes. Lecture Notes 8: Trellis Codes. Example: K=3,M=2, rate 1/2 code. Figure 95: Convolutional Encoder

Convolutional Codes. Lecture Notes 8: Trellis Codes. Example: K=3,M=2, rate 1/2 code. Figure 95: Convolutional Encoder Convolutional Codes Lecture Notes 8: Trellis Codes In this lecture we discuss construction of signals via a trellis. That is, signals are constructed by labeling the branches of an infinite trellis with

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

A Smoothed Boosting Algorithm Using Probabilistic Output Codes

A Smoothed Boosting Algorithm Using Probabilistic Output Codes A Soothed Boosting Algorith Using Probabilistic Output Codes Rong Jin rongjin@cse.su.edu Dept. of Coputer Science and Engineering, Michigan State University, MI 48824, USA Jian Zhang jian.zhang@cs.cu.edu

More information

Using a De-Convolution Window for Operating Modal Analysis

Using a De-Convolution Window for Operating Modal Analysis Using a De-Convolution Window for Operating Modal Analysis Brian Schwarz Vibrant Technology, Inc. Scotts Valley, CA Mark Richardson Vibrant Technology, Inc. Scotts Valley, CA Abstract Operating Modal Analysis

More information

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition Upper bound on false alar rate for landine detection and classification using syntactic pattern recognition Ahed O. Nasif, Brian L. Mark, Kenneth J. Hintz, and Nathalia Peixoto Dept. of Electrical and

More information

Tracking using CONDENSATION: Conditional Density Propagation

Tracking using CONDENSATION: Conditional Density Propagation Tracking using CONDENSATION: Conditional Density Propagation Goal Model-based visual tracking in dense clutter at near video frae rates M. Isard and A. Blake, CONDENSATION Conditional density propagation

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Ocean 420 Physical Processes in the Ocean Project 1: Hydrostatic Balance, Advection and Diffusion Answers

Ocean 420 Physical Processes in the Ocean Project 1: Hydrostatic Balance, Advection and Diffusion Answers Ocean 40 Physical Processes in the Ocean Project 1: Hydrostatic Balance, Advection and Diffusion Answers 1. Hydrostatic Balance a) Set all of the levels on one of the coluns to the lowest possible density.

More information

Optimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks

Optimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks 1 Optial Resource Allocation in Multicast Device-to-Device Counications Underlaying LTE Networks Hadi Meshgi 1, Dongei Zhao 1 and Rong Zheng 2 1 Departent of Electrical and Coputer Engineering, McMaster

More information

Introduction to Machine Learning. Recitation 11

Introduction to Machine Learning. Recitation 11 Introduction to Machine Learning Lecturer: Regev Schweiger Recitation Fall Seester Scribe: Regev Schweiger. Kernel Ridge Regression We now take on the task of kernel-izing ridge regression. Let x,...,

More information

A multi-model multi-analysis limited area ensemble: calibration issues

A multi-model multi-analysis limited area ensemble: calibration issues - European Centre for Mediu-Range Weather Forecasts - Third International Workshop on Verification Methods Reading 31 Jan 2 Feb. 2007 A ulti-odel ulti-analysis liited area enseble: calibration issues M.

More information

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes Graphical Models in Local, Asyetric Multi-Agent Markov Decision Processes Ditri Dolgov and Edund Durfee Departent of Electrical Engineering and Coputer Science University of Michigan Ann Arbor, MI 48109

More information

Measures of average are called measures of central tendency and include the mean, median, mode, and midrange.

Measures of average are called measures of central tendency and include the mean, median, mode, and midrange. CHAPTER 3 Data Description Objectives Suarize data using easures of central tendency, such as the ean, edian, ode, and idrange. Describe data using the easures of variation, such as the range, variance,

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016/2017 Lessons 9 11 Jan 2017 Outline Artificial Neural networks Notation...2 Convolutional Neural Networks...3

More information

Generalized Queries on Probabilistic Context-Free Grammars

Generalized Queries on Probabilistic Context-Free Grammars IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 20, NO. 1, JANUARY 1998 1 Generalized Queries on Probabilistic Context-Free Graars David V. Pynadath and Michael P. Wellan Abstract

More information

Feedforward Networks

Feedforward Networks Feedforward Networks Gradient Descent Learning and Backpropagation Christian Jacob CPSC 433 Christian Jacob Dept.of Coputer Science,University of Calgary CPSC 433 - Feedforward Networks 2 Adaptive "Prograing"

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space

Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space Journal of Machine Learning Research 3 (2003) 1333-1356 Subitted 5/02; Published 3/03 Grafting: Fast, Increental Feature Selection by Gradient Descent in Function Space Sion Perkins Space and Reote Sensing

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

General Properties of Radiation Detectors Supplements

General Properties of Radiation Detectors Supplements Phys. 649: Nuclear Techniques Physics Departent Yarouk University Chapter 4: General Properties of Radiation Detectors Suppleents Dr. Nidal M. Ershaidat Overview Phys. 649: Nuclear Techniques Physics Departent

More information

Best Procedures For Sample-Free Item Analysis

Best Procedures For Sample-Free Item Analysis Best Procedures For Saple-Free Ite Analysis Benjain D. Wright University of Chicago Graha A. Douglas University of Western Australia Wright s (1969) widely used "unconditional" procedure for Rasch saple-free

More information

INNER CONSTRAINTS FOR A 3-D SURVEY NETWORK

INNER CONSTRAINTS FOR A 3-D SURVEY NETWORK eospatial Science INNER CONSRAINS FOR A 3-D SURVEY NEWORK hese notes follow closely the developent of inner constraint equations by Dr Willie an, Departent of Building, School of Design and Environent,

More information

Boosting with log-loss

Boosting with log-loss Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the

More information

Stochastic Subgradient Methods

Stochastic Subgradient Methods Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate The Siplex Method is Strongly Polynoial for the Markov Decision Proble with a Fixed Discount Rate Yinyu Ye April 20, 2010 Abstract In this note we prove that the classic siplex ethod with the ost-negativereduced-cost

More information

Maximum a Posteriori Decoding of Turbo Codes

Maximum a Posteriori Decoding of Turbo Codes Maxiu a Posteriori Decoing of Turbo Coes by Bernar Slar Introuction The process of turbo-coe ecoing starts with the foration of a posteriori probabilities (APPs) for each ata bit, which is followe by choosing

More information

Feedforward Networks

Feedforward Networks Feedforward Neural Networks - Backpropagation Feedforward Networks Gradient Descent Learning and Backpropagation CPSC 533 Fall 2003 Christian Jacob Dept.of Coputer Science,University of Calgary Feedforward

More information