CREDIBILITY IN THE REGRESSION CASE REVISITED (A LATE TRIBUTE TO CHARLES A. HACHEMEISTER)

H. BÜHLMANN, ETH Zürich
A. GISLER, Winterthur-Versicherungen

ASTIN BULLETIN, Vol. 27, No. 1, 1997, pp. 83-98

ABSTRACT

Many authors have observed that Hachemeister's regression model for credibility - if applied to simple linear regression - leads to unsatisfactory credibility matrices: they typically 'mix up' the regression parameters and in particular lead to regression lines that seem 'out of range' compared with both individual and collective regression lines. We propose to amend these shortcomings by an appropriate definition of the regression parameters:

- intercept
- slope

Contrary to standard practice, the intercept should however not be defined as the value at time zero but as the value of the regression line at the barycenter of time. With these definitions, regression parameters which are uncorrelated in the collective can be estimated separately by standard one-dimensional credibility techniques. A similar convenient reparametrization can also be achieved in the general regression case. The good choice of the regression parameters is the one which turns the design matrix into an array with orthogonal columns.

1. THE GENERAL MODEL

In his pioneering paper presented at the Berkeley Credibility Conference 1974, Charlie Hachemeister introduced the following General Regression Case Credibility Model:

a) Description of individual risk $r$:

risk quality: $\theta_r$

observations (random vector): $X_r = (X_{1r}, X_{2r}, \ldots, X_{nr})'$

with distribution $dP(x \mid \theta_r)$, where $X_{ir}$ = observation of risk $r$ at time $i$.

b) Description of collective:

$\{\theta_r;\ r = 1, 2, \ldots, N\}$ are i.i.d. with structure function $U(\theta)$.

We are interested in the (unknown) individually correct pure premiums

\[ \mu_i(\theta_r) = E[X_{ir} \mid \theta_r] \quad (i = 1, 2, \ldots, n), \]

where $\mu_i(\theta)$ = individual pure premium at time $i$, and we suppose that these individual pure premiums follow a regression pattern

\[ \mu(\theta_r) = Y_r\, \beta(\theta_r), \tag{R} \]

where $\mu(\theta_r)$ ~ $n$-vector, $\beta(\theta_r)$ ~ $p$-vector and $Y_r$ ~ $n \times p$ matrix (= design matrix).

Remark: The model is usually applied for $p < n$ and maximal rank of $Y_r$; in practice $p$ is much smaller than $n$ (e.g. $p = 2$).

The goal is to find the credibility estimator $\hat\beta(\theta_r)$ for $\beta(\theta_r)$, which by linearity leads to the credibility estimator $\hat\mu(\theta_r)$ for $\mu(\theta_r)$.

2. THE ESTIMATION PROBLEM AND ITS RELEVANT PARAMETERS AND SOLUTION (GENERAL CASE)

We look for an estimator of the form

\[ \hat\beta(\theta_r) = a + A X_r, \qquad a \sim p\text{-vector}, \quad A \sim p \times n \text{ matrix}. \]

The following quantities are the "relevant parameters" for finding this estimator:

\[ E\left[\operatorname{Cov}[X_r, X_r' \mid \theta_r]\right] = \Phi_r, \qquad \Phi_r \sim n \times n \text{ matrix (regular)} \tag{1} \]
\[ \operatorname{Cov}[\beta(\theta_r), \beta(\theta_r)'] = \Lambda, \qquad \Lambda \sim p \times p \text{ matrix (regular)} \tag{2} \]
\[ E[\beta(\theta_r)] = b, \qquad b \sim p\text{-vector} \tag{3} \]

We find the credibility formula

\[ \hat\beta(\theta_r) = (I - Z_r)\, b + Z_r\, b_r, \tag{4} \]

where

\[ Z_r = (I + W_r^{-1} \Lambda^{-1})^{-1} = (W_r + \Lambda^{-1})^{-1} W_r = \Lambda (\Lambda + W_r^{-1})^{-1} \quad \sim \text{credibility matrix } (p \times p) \tag{5} \]
\[ W_r = Y_r' \Phi_r^{-1} Y_r \quad \sim \text{auxiliary matrix } (p \times p) \tag{6} \]
\[ b_r = W_r^{-1} Y_r' \Phi_r^{-1} X_r \quad \sim \text{individual estimate } (p \times 1) \tag{7} \]

Discussion: The generality under which formula (4) can be proved is impressive, but this generality is also its weakness. Only by specialisation is it possible to understand how the formula can be used for practical applications. Following the route of Hachemeister's original paper, we hence use it now for the special case of simple linear regression.
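Before specialising, it may help to see the machinery of (4)-(7) as code. The following is a minimal NumPy sketch (our own illustration, not part of the original paper; the function and variable names are ours):

```python
import numpy as np

def regression_credibility(Y, Phi, Lam, b, X):
    """Credibility estimator of formula (4) for a single risk.

    Y   : (n, p) design matrix Y_r
    Phi : (n, n) within-risk covariance E[Cov[X_r | theta_r]], formula (1)
    Lam : (p, p) collective covariance Cov[beta(theta_r)], formula (2)
    b   : (p,)   collective mean E[beta(theta_r)], formula (3)
    X   : (n,)   observation vector of the risk
    """
    Phi_inv = np.linalg.inv(Phi)
    W = Y.T @ Phi_inv @ Y                            # auxiliary matrix, formula (6)
    b_ind = np.linalg.solve(W, Y.T @ Phi_inv @ X)    # individual estimate, formula (7)
    Z = np.linalg.solve(W + np.linalg.inv(Lam), W)   # credibility matrix, formula (5)
    return (np.eye(len(b)) - Z) @ b + Z @ b_ind      # credibility formula (4)
```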

3. SIMPLE LINEAR REGRESSION

Let

\[ Y_r = Y = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ \vdots & \vdots \\ 1 & n \end{pmatrix}, \]

and hence (R) becomes

\[ \mu_k(\theta_r) = \beta_0(\theta_r) + \beta_1(\theta_r)\, k, \tag{8} \]

which is one of the most frequently applied regression cases. Assume further that $\Phi_r$ is diagonal, i.e. that the observations $X_{ir}, X_{jr}$ given $\theta_r$ are uncorrelated for $i \neq j$. To simplify notation, we drop in the following the index $r$, i.e. we write $\Phi$ instead of $\Phi_r$, $W$ instead of $W_r$, and $Z$ instead of $Z_r$. Hence

\[ \Phi = \operatorname{diag}\!\left(\sigma_1^2, \sigma_2^2, \ldots, \sigma_n^2\right), \tag{9} \]

e.g. $\sigma_i^2 = \sigma^2 / V_i$, where $V_i$ = "volume" of the observation at time $i$. Let

\[ \Lambda = \begin{pmatrix} \tau_0^2 & \tau_{01} \\ \tau_{01} & \tau_1^2 \end{pmatrix}. \]

We find

\[ W = \begin{pmatrix} \sum_{k=1}^n \frac{1}{\sigma_k^2} & \sum_{k=1}^n \frac{k}{\sigma_k^2} \\[4pt] \sum_{k=1}^n \frac{k}{\sigma_k^2} & \sum_{k=1}^n \frac{k^2}{\sigma_k^2} \end{pmatrix}. \tag{10} \]

It is convenient to write $\frac{1}{\sigma_k^2} = \frac{V_k}{\sigma^2}$ (which is always possible for diagonal $\Phi$). Hence we have

\[ W = \frac{V_\cdot}{\sigma^2} \begin{pmatrix} \sum_k \frac{V_k}{V_\cdot} & \sum_k \frac{V_k}{V_\cdot}\, k \\[4pt] \sum_k \frac{V_k}{V_\cdot}\, k & \sum_k \frac{V_k}{V_\cdot}\, k^2 \end{pmatrix}, \qquad V_\cdot = \sum_{k=1}^n V_k. \]

Think of $\frac{V_k}{V_\cdot}$ as sampling weights; then we have

\[ W = \frac{V_\cdot}{\sigma^2} \begin{pmatrix} 1 & E^{(s)}[k] \\ E^{(s)}[k] & E^{(s)}[k^2] \end{pmatrix}, \tag{11} \]

where $E^{(s)}$, $\operatorname{Var}^{(s)}$ denote the moments with respect to this sampling distribution. One then also finds (see (7))

\[ b_r = W^{-1} Y' \Phi^{-1} X_r = \frac{1}{\operatorname{Var}^{(s)}[k]} \begin{pmatrix} E^{(s)}[k^2]\, E^{(s)}[X_{kr}] - E^{(s)}[k]\, E^{(s)}[k X_{kr}] \\[4pt] E^{(s)}[k X_{kr}] - E^{(s)}[k]\, E^{(s)}[X_{kr}] \end{pmatrix}, \tag{12} \]

where $E^{(s)}[k X_{kr}] = \sum_{k=1}^n \frac{V_k}{V_\cdot}\, k\, X_{kr}$, etc.

Remark: It is instructive to verify by direct calculation that the values given by (12) for $b_{0r}$, $b_{1r}$ are identical with those obtained from

\[ \min_{b_0,\, b_1} \sum_{k=1}^n V_k \left( X_{kr} - b_0 - b_1 k \right)^2. \tag{13} \]
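This check is quickly done numerically. The sketch below (our illustration; the volumes and observations are made up) confirms that the moment form (12) and the volume-weighted least-squares fit (13) coincide:

```python
import numpy as np

n = 5
k = np.arange(1, n + 1, dtype=float)
V = np.array([1.0, 2.0, 1.0, 3.0, 1.0])          # volumes V_k (made up)
X = np.array([80.0, 95.0, 90.0, 110.0, 105.0])   # observations X_kr (made up)

w = V / V.sum()                                  # sampling weights V_k / V.
Ek, Ek2, EX, EkX = w @ k, w @ k**2, w @ X, w @ (k * X)
vark = Ek2 - Ek**2                               # Var^(s)[k]

b0 = (Ek2 * EX - Ek * EkX) / vark                # intercept from (12)
b1 = (EkX - Ek * EX) / vark                      # slope from (12)

# direct volume-weighted least squares, i.e. the minimization (13)
A = np.column_stack([np.ones(n), k])
b_ls = np.linalg.solve(A.T @ (V[:, None] * A), A.T @ (V * X))
assert np.allclose([b0, b1], b_ls)
```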

The calculations to obtain the credibility matrix $Z$ (see (5)) are as follows:

\[ \Lambda^{-1} = \frac{1}{\tau_0^2 \tau_1^2 - \tau_{01}^2} \begin{pmatrix} \tau_1^2 & -\tau_{01} \\ -\tau_{01} & \tau_0^2 \end{pmatrix} = \frac{1}{1-\rho_{01}^2} \begin{pmatrix} 1/\tau_0^2 & -\rho_{01}/(\tau_0\tau_1) \\ -\rho_{01}/(\tau_0\tau_1) & 1/\tau_1^2 \end{pmatrix}, \qquad \rho_{01} = \frac{\tau_{01}}{\tau_0 \tau_1}. \]

Abbreviate

\[ h_0 = \frac{\sigma^2}{V_\cdot\, \tau_0^2 (1-\rho_{01}^2)}, \qquad h_1 = \frac{\sigma^2}{V_\cdot\, \tau_1^2 (1-\rho_{01}^2)}, \qquad h_{01} = \frac{\sigma^2 \rho_{01}}{V_\cdot\, \tau_0 \tau_1 (1-\rho_{01}^2)}, \]

so that $\Lambda^{-1} = \frac{V_\cdot}{\sigma^2}\begin{pmatrix} h_0 & -h_{01} \\ -h_{01} & h_1 \end{pmatrix}$. Hence

\[ Z = \frac{1}{D} \begin{pmatrix} \operatorname{Var}^{(s)}[k] + h_1 + h_{01} E^{(s)}[k] & h_1 E^{(s)}[k] + h_{01} E^{(s)}[k^2] \\[4pt] h_0 E^{(s)}[k] + h_{01} & \operatorname{Var}^{(s)}[k] + h_0 E^{(s)}[k^2] + h_{01} E^{(s)}[k] \end{pmatrix}, \tag{14} \]

with $D = (1+h_0)\left(E^{(s)}[k^2]+h_1\right) - \left(E^{(s)}[k]-h_{01}\right)^2$.

Discussion: The credibility matrix obtained is not satisfactory from a practical point of view:

a) the individual weights are not always between zero and one;

b) both the intercept $\hat\beta_0(\theta_r)$ and the slope $\hat\beta_1(\theta_r)$ of the credibility line may fail to lie between the intercept and slope of the individual line and those of the collective line.

Numerical examples: $n = 5$, $V_k = 1$;
collective regression line: $b_0 = 100$, $b_1 = 10$;
individual regression line: $b_0^* = 70$, $b_1^* = 7$.

Example 1: $\sigma = 20$, $\tau_0 = 10$, $\tau_1 = 5$, $\tau_{01} = 0$;
resulting credibility line: $\hat\beta_0(\theta_r) = 88.8$, $\hat\beta_1(\theta_r) = 3.7$.

Example 2: $\sigma = 20$, $\tau_0 = 100'000$, $\tau_1 = 5$, $\tau_{01} = 0$;
resulting credibility line: $\hat\beta_0(\theta_r) = 64.5$, $\hat\beta_1(\theta_r) = 8.8$.

Example 3: $\sigma = 20$, $\tau_0 = 10$, $\tau_1 = 100'000$, $\tau_{01} = 0$;
resulting credibility line: $\hat\beta_0(\theta_r) = 94.4$, $\hat\beta_1(\theta_r) = 0.3$.
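The three examples are easy to reproduce; the following NumPy sketch (our own check, not from the paper) evaluates formulas (4) and (5) directly:

```python
import numpy as np

n, sigma2 = 5, 20.0**2
k = np.arange(1, n + 1)
Y = np.column_stack([np.ones(n), k])       # design matrix of (8)
W = Y.T @ Y / sigma2                       # formula (10) with all V_k = 1

b_coll = np.array([100.0, 10.0])           # collective regression line
b_ind = np.array([70.0, 7.0])              # individual regression line

for tau0, tau1 in [(10, 5), (100_000, 5), (10, 100_000)]:   # examples 1-3
    Lam_inv = np.diag([1 / tau0**2, 1 / tau1**2])            # tau_01 = 0
    Z = np.linalg.solve(W + Lam_inv, W)                      # formula (5)
    print(np.round((np.eye(2) - Z) @ b_coll + Z @ b_ind, 1))
    # examples 1-3: (88.8, 3.7), (64.5, 8.8), (94.4, 0.3)
```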

Comments: In none of the three examples do both the intercept and the slope of the credibility line lie between the collective and the individual values.

In example 2 there is a great prior uncertainty about the intercept ($\tau_0$ very big). One would expect the credibility estimator to give full weight to the intercept of the individual regression line, so that $\hat\beta_0(\theta_r)$ nearly coincides with $b_0^*$. But $\hat\beta_0(\theta_r)$ is even smaller than both $b_0$ and $b_0^*$.

In example 3 there is a great prior uncertainty about the slope, and one would expect that $\hat\beta_1(\theta_r) \approx b_1^*$. But $\hat\beta_1(\theta_r)$ is much smaller than both $b_1$ and $b_1^*$.

For this reason many actuaries have either considered Hachemeister's regression model as not usable or have tried to impose additional artificial constraints (e.g. De Vylder (1981) or De Vylder (1985)). Dannenburg (1996) discusses the effects of such constraints and shows that they have serious drawbacks.

This paper shows that by an appropriate reparametrization the defects of the Hachemeister model can be made to disappear, and that hence no additional constraints are needed.

[Figure: collective, individual and credibility regression lines (Example 1).]

4. SIMPLE LINEAR REGRESSION WITH BARYCENTRIC INTERCEPT

The idea of choosing the time scale in such a way as to have the intercept at the barycenter of time is already mentioned in Hachemeister's paper, although it is there not used to make the appropriate model assumptions. Choosing the intercept at the barycenter of the time scale means formally that our design matrix is chosen as

\[ Y = \begin{pmatrix} 1 & 1 - E^{(s)}[k] \\ 1 & 2 - E^{(s)}[k] \\ \vdots & \vdots \\ 1 & n - E^{(s)}[k] \end{pmatrix}. \]

Remark: It is well known that any linear transformation of the time scale (or, more generally, of the covariates) does not change the credibility estimates. But what we do in the following changes the original model: we assume that the matrix $\Lambda$ is now the covariance matrix of the 'new' vector $\beta(\theta_r)$, $\beta_0(\theta_r)$ now being the intercept at the barycenter of time instead of the intercept at time zero.

In the general formulae obtained in section 3 we simply have to replace $k$ by $k - E^{(s)}[k]$. It is also important that sample variances and covariances are not changed by this shift of the time scale. We immediately obtain

\[ b_r = \begin{pmatrix} E^{(s)}[X_{kr}] \\[4pt] \dfrac{\operatorname{Cov}^{(s)}(k, X_{kr})}{\operatorname{Var}^{(s)}[k]} \end{pmatrix} \]

and

\[ Z = \frac{1}{D} \begin{pmatrix} \operatorname{Var}^{(s)}[k] + h_1 & h_{01} \operatorname{Var}^{(s)}[k] \\[4pt] h_{01} & (1+h_0)\operatorname{Var}^{(s)}[k] \end{pmatrix}, \qquad D = (1+h_0)\left(\operatorname{Var}^{(s)}[k]+h_1\right) - h_{01}^2. \tag{14^{bar}} \]

These formulae are now very well understandable; in particular, the cross-effect between the credibility formulae for intercept and slope is only due to their correlation in the collective (the off-diagonal elements of the matrix $\Lambda$). In the case of no correlation between the regression parameters in the collective we have

\[ Z = \begin{pmatrix} \dfrac{1}{1+h_0} & 0 \\[4pt] 0 & \dfrac{\operatorname{Var}^{(s)}[k]}{\operatorname{Var}^{(s)}[k]+h_1} \end{pmatrix}, \tag{14^{sep}} \]

which separates our credibility matrix into two one-dimensional credibility formulae with credibility weights

\[ Z_{11} = \frac{1}{1+h_0} = \frac{V_\cdot}{V_\cdot + \sigma^2/\tau_0^2}, \qquad Z_{22} = \frac{\operatorname{Var}^{(s)}[k]}{\operatorname{Var}^{(s)}[k]+h_1} = \frac{V_\cdot \operatorname{Var}^{(s)}[k]}{V_\cdot \operatorname{Var}^{(s)}[k] + \sigma^2/\tau_1^2}. \tag{15} \]

Remark: Observe the classical form of the credibility weights in (15), with volume $V_\cdot$ for $Z_{11}$ and volume $V_\cdot \operatorname{Var}^{(s)}[k]$ for $Z_{22}$.

Numerical examples: The model assumptions of the following three examples, numbered 4-6, are exactly the same as in the examples numbered 1-3 of the previous section, with the only difference that the first element of the vector $\beta(\theta_r)$ now represents the intercept at the barycenter. Thus we have:

collective regression line: $b_0 = 130$, $b_1 = 10$;
individual regression line: $b_0^* = 91$, $b_1^* = 7$.

The resulting credibility lines are:

Example 4: $\hat\beta_0(\theta_r) = 108.3$, $\hat\beta_1(\theta_r) = 8.8$
Example 5: $\hat\beta_0(\theta_r) = 91.0$, $\hat\beta_1(\theta_r) = 8.8$
Example 6: $\hat\beta_0(\theta_r) = 108.3$, $\hat\beta_1(\theta_r) = 7.0$

Comments: Intercept and slope of the credibility lines always lie between the values of the individual and of the collective regression line. In example 5 (respectively example 6) the intercept $\hat\beta_0(\theta_r)$ (respectively the slope $\hat\beta_1(\theta_r)$) coincides with $b_0^*$ (resp. $b_1^*$). It is also interesting to note that the credibility line of example 5 is exactly the same as that of example 2.

[Figure: collective, individual and credibility regression lines.]
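With the separated weights, these examples become a two-line computation; the sketch below (ours) reproduces them from formula (15):

```python
import numpy as np

n, sigma2 = 5, 20.0**2
k = np.arange(1, n + 1)
V_tot = float(n)                             # V. with all volumes V_k = 1
vark = (k**2).mean() - k.mean()**2           # Var^(s)[k] = 2

b_coll = np.array([130.0, 10.0])             # collective line, barycentric intercept
b_ind = np.array([91.0, 7.0])                # individual line, barycentric intercept

for tau0, tau1 in [(10, 5), (100_000, 5), (10, 100_000)]:        # examples 4-6
    z = np.array([V_tot / (V_tot + sigma2 / tau0**2),            # Z_11 of (15)
                  V_tot * vark / (V_tot * vark + sigma2 / tau1**2)])  # Z_22 of (15)
    print(np.round((1 - z) * b_coll + z * b_ind, 1))
    # examples 4-6: (108.3, 8.8), (91.0, 8.8), (108.3, 7.0)
```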

5. HOW TO CHOOSE THE BARYCENTER?

Unfortunately the barycenter of each risk shifts depending on the individual sampling distribution. There is usually no way to bring - simultaneously for all risks - the matrices $Y$, $W$, $Z$ into the convenient form discussed in the last section. This discussion however suggests that the most reasonable parametrization is the one using the intercept at the barycenter of the collective. This has two advantages: it is the point to which the individual barycenters are (in the least-squares sense) closest, and the orthogonality property of the parameters still holds for the collective.

In the following we work with this parametrization and assume that the regression parameters in this parametrization are uncorrelated. Hence we work from now on with the regression line

\[ \mu_k(\theta_r) = \beta_0(\theta_r) + \beta_1(\theta_r)(k - \kappa), \]

where $\kappa$ is the barycenter of the collective, i.e. $\kappa = \sum_{r,k} k\, V_{kr} \big/ \sum_{r,k} V_{kr}$. We assume also that the collective parameters are uncorrelated, i.e.

\[ \Lambda = \begin{pmatrix} \tau_0^2 & 0 \\ 0 & \tau_1^2 \end{pmatrix}. \]

If we shift to the individual barycenter $E^{(s)}[k]$ we obtain the line

\[ \mu_k(\theta_r) = \tilde\beta_0(\theta_r) + \tilde\beta_1(\theta_r)\left(k - E^{(s)}[k]\right). \]

Hence, with $\Delta = E^{(s)}[k] - \kappa$,

\[ \tilde\beta(\theta_r) = \begin{pmatrix} 1 & \Delta \\ 0 & 1 \end{pmatrix} \beta(\theta_r). \tag{16} \]

For the $\tilde\beta$-line we have further

\[ \tilde\tau_0^2 = \tau_0^2 + \Delta^2 \tau_1^2, \qquad \tilde\tau_{01} = \Delta\, \tau_1^2, \qquad \tilde\tau_1^2 = \tau_1^2. \]

Formula (14bar) then leads to

\[ \tilde Z = \frac{1}{D} \begin{pmatrix} \operatorname{Var}^{(s)}[k] + h_1^{(a)} + \Delta^2 h_0^{(a)} & \Delta\, h_0^{(a)} \operatorname{Var}^{(s)}[k] \\[4pt] \Delta\, h_0^{(a)} & \left(1+h_0^{(a)}\right)\operatorname{Var}^{(s)}[k] \end{pmatrix}, \tag{17} \]

where $h_0^{(a)} = \sigma^2/(V_\cdot \tau_0^2)$, $h_1^{(a)} = \sigma^2/(V_\cdot \tau_1^2)$ and $D = \left(1+h_0^{(a)}\right)\left(\operatorname{Var}^{(s)}[k] + h_1^{(a)} + \Delta^2 h_0^{(a)}\right) - \Delta^2 \left(h_0^{(a)}\right)^2$.

Remarks:

a) Obviously formula (17) is not as attractive as (14sep), but the two are similar.

b) If $\Delta$ is small (or $h_0^{(a)}$ is small) the two differ by little. The case of small $\Delta$ is very often encountered in practice. For instance, in the data used by Hachemeister, with 5 groups (states) and an observation period of 12 time units (quarters), $|\Delta|_{\max}$ is only 0.17 time units.

c) An interesting case is $h_0^{(a)} = 0 \iff \tau_0^2 = \infty$ (great prior uncertainty about the intercept). Then for arbitrary $\Delta$ we have

\[ \tilde Z = \begin{pmatrix} 1 & 0 \\[4pt] 0 & \dfrac{\operatorname{Var}^{(s)}[k]}{\operatorname{Var}^{(s)}[k] + h_1^{(a)}} \end{pmatrix}. \]

This case is also studied by Hachemeister under the heading "mixed model". It also explains why the credibility lines coincide in our numerical examples 2 and 5.
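Remark b) is easy to check numerically. The sketch below (ours, reusing the variance parameters of the earlier examples) builds the covariance of the shifted line via (16) and computes the resulting credibility matrix for several values of Δ:

```python
import numpy as np

sigma2, V_tot, vark = 400.0, 5.0, 2.0        # sigma^2, V., Var^(s)[k] as before
Lam0 = np.diag([100.0, 25.0])                # Lambda at the collective barycenter
W = (V_tot / sigma2) * np.diag([1.0, vark])  # W in individual-barycentric form

for Delta in [0.0, 0.17, 1.0]:               # Delta = E^(s)[k] - kappa
    T = np.array([[1.0, Delta], [0.0, 1.0]])
    Lam = T @ Lam0 @ T.T                     # covariance of the beta-tilde line, (16)
    Z = np.linalg.solve(W + np.linalg.inv(Lam), W)
    print(Delta, np.round(Z, 3))             # off-diagonal terms grow with Delta
```

For Δ = 0 this is exactly the separated form (14sep); for Δ = 0.17 (the order of magnitude in the Hachemeister data) the matrix is still nearly diagonal.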

6. BARYCENTRIC CREDIBILITY - GENERAL CASE

6.1 Motivation

The basic idea which we have pursued in the case of simple linear regression can be summarized as follows:

Step 1: Reparametrize the regression problem conveniently.
Step 2: In the reparametrized model, assume that the regression parameters are uncorrelated.

It turns out that for simple linear regression the good choice of the parameters is the following:
- intercept at the barycenter of time
- slope

You should observe that for this choice the design matrix

\[ Y = \begin{pmatrix} 1 & 1 - E^{(s)}[k] \\ 1 & 2 - E^{(s)}[k] \\ \vdots & \vdots \\ 1 & n - E^{(s)}[k] \end{pmatrix} \]

has two orthogonal columns (using the weights of the sampling distribution). This is the clue for the general regression case: the good choice of the regression parameters is such as to render the design matrix an array with orthogonal columns.

6.2 The Barycentric Model

Let

\[ Y = \begin{pmatrix} Y_{11} & \cdots & Y_{1p} \\ Y_{21} & \cdots & Y_{2p} \\ \vdots & & \vdots \\ Y_{n1} & \cdots & Y_{np} \end{pmatrix}, \]

assume volumes $V_1, V_2, \ldots, V_n$ and let $V_\cdot = \sum_{k=1}^n V_k$. We think of column $j$ of $Y$ as a random variable $Y_j$ which assumes the value $Y_{kj}$ with sampling weight $V_k/V_\cdot$, in short

\[ P^{(s)}[Y_j = Y_{kj}] = \frac{V_k}{V_\cdot}, \]

where $P^{(s)}$ stands for the sampling distribution. As in the case of simple linear regression, it turns out that also in the general case this sampling distribution allows a concise and convenient notation. We have from (9)

\[ \Phi^{-1} = \frac{1}{\sigma^2} \operatorname{diag}(V_1, V_2, \ldots, V_n) \]

and from (10)

\[ W = (w_{ij}), \qquad w_{ij} = \frac{V_\cdot}{\sigma^2}\, E^{(s)}[Y_i Y_j]. \]

Under the barycentric condition ($E^{(s)}[Y_i Y_j] = 0$ for $i \neq j$) we find

\[ W = \frac{V_\cdot}{\sigma^2} \operatorname{diag}\!\left(E^{(s)}[Y_1^2], \ldots, E^{(s)}[Y_p^2]\right), \tag{18} \]

i.e. a matrix of diagonal form.

Assuming non-correlation for the corresponding parametrization, we have

\[ \Lambda = \operatorname{diag}\!\left(\tau_1^2, \ldots, \tau_p^2\right) \qquad \text{with } \tau_j^2 = \operatorname{Var}[\beta_j(\theta_r)]. \]

Hence

\[ \Lambda^{-1} = \operatorname{diag}\!\left(\tau_1^{-2}, \ldots, \tau_p^{-2}\right), \]

and finally

\[ Z = (W + \Lambda^{-1})^{-1} W = \operatorname{diag}(z_1, \ldots, z_p). \tag{19} \]

(19) shows that our credibility matrix is of diagonal form. Hence the multidimensional credibility formula breaks down into $p$ one-dimensional formulae with credibility weights

\[ z_j = \frac{V_\cdot\, E^{(s)}[Y_j^2]}{V_\cdot\, E^{(s)}[Y_j^2] + \sigma^2/\tau_j^2}. \tag{20} \]

Observe the "volume" $V_\cdot\, E^{(s)}[Y_j^2]$ for the $j$-th component.
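For a design matrix satisfying the barycentric condition, the weights (20) can be computed column by column; a minimal sketch (ours, assuming NumPy):

```python
import numpy as np

def credibility_weights(Y_star, V, sigma2, tau2):
    """Componentwise credibility weights z_j of formula (20).

    Y_star : (n, p) design matrix with orthogonal columns under P^(s)
    V      : (n,) volumes V_k; sigma2 : scale in Phi = diag(sigma^2 / V_k)
    tau2   : (p,) collective variances tau_j^2 = Var[beta_j(theta_r)]
    """
    V = np.asarray(V, dtype=float)
    w = V / V.sum()                                 # sampling weights V_k / V.
    volume = V.sum() * (w @ np.asarray(Y_star)**2)  # V. E^(s)[Y_j^2], per column
    return volume / (volume + sigma2 / np.asarray(tau2, dtype=float))
```

For the simple linear case with columns $1$ and $k - E^{(s)}[k]$, this reproduces the two weights of (15).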

6.3 The Summary Statistics for the Barycentric Model

From (7) we have

\[ b_r = W^{-1} Y' \Phi^{-1} X_r = C X_r, \]

where the elements of $C$ are

\[ c_{jk} = \frac{1}{E^{(s)}[Y_j^2]}\; Y_{kj}\; \frac{V_k}{V_\cdot}, \tag{21} \]

or

\[ b_{jr} = \frac{E^{(s)}[Y_j X_r]}{E^{(s)}[Y_j^2]}, \qquad j = 1, 2, \ldots, p, \tag{22} \]

with $E^{(s)}[Y_j X_r] = \sum_{k=1}^n \frac{V_k}{V_\cdot}\, Y_{kj}\, X_{kr}$.

6.4 How to find the Barycentric Reparametrization

We start with the design matrix $Y$ and its column vectors $Y_1, Y_2, \ldots, Y_p$ and want to find a new design matrix $Y^*$ with orthogonal column vectors $Y_1^*, Y_2^*, \ldots, Y_p^*$. The construction of the vectors $Y_k^*$ is obtained recursively:

i) Start with $Y_1^* = Y_1$.

ii) If you have constructed $Y_1^*, Y_2^*, \ldots, Y_{k-1}^*$, you find $Y_k^*$ as follows:

a) Solve
\[ E^{(s)}\!\left[\left(Y_k - a_1 Y_1^* - a_2 Y_2^* - \ldots - a_{k-1} Y_{k-1}^*\right)^2\right] = \min! \]
over all values of $a_1, a_2, \ldots, a_{k-1}$.

b) Define $Y_k^* := Y_k - a_1^* Y_1^* - a_2^* Y_2^* - \ldots - a_{k-1}^* Y_{k-1}^*$.

Remarks:

i) Obviously this leads to $Y_k^*$ such that
\[ E^{(s)}[Y_k^* \cdot Y_l^*] = 0 \quad \text{for all } l < k. \tag{23} \]

ii) This procedure of orthogonalisation is called weighted Gram-Schmidt in Numerical Analysis.

iii) The result of the procedure depends on the order of the columns of the original matrix. Hence there might be several feasible solutions.
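The recursion above is ordinary Gram-Schmidt orthogonalisation under the weighted inner product $E^{(s)}[uv]$; a compact sketch (ours, assuming NumPy):

```python
import numpy as np

def weighted_gram_schmidt(Y, V):
    """Orthogonalize the columns of Y with respect to the sampling
    distribution P^(s)[k] = V_k / V., so that E^(s)[Y_i* Y_j*] = 0
    for i != j, cf. (23)."""
    w = np.asarray(V, dtype=float)
    w = w / w.sum()
    Y_star = np.asarray(Y, dtype=float).copy()
    for j in range(Y_star.shape[1]):
        for l in range(j):
            # a_l* = E^(s)[Y_j Y_l*] / E^(s)[(Y_l*)^2], step ii a)
            a = (w @ (Y_star[:, j] * Y_star[:, l])) / (w @ Y_star[:, l]**2)
            Y_star[:, j] -= a * Y_star[:, l]         # step ii b)
    return Y_star

# with unit volumes, the column (1, ..., n) is re-centered to k - E^(s)[k]:
k = np.arange(1.0, 6.0)
print(weighted_gram_schmidt(np.column_stack([np.ones(5), k]), np.ones(5)))
```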

With the new design matrix $Y^*$ we can now also find the new parameters $\beta_j^*(\theta_r)$, $j = 1, 2, \ldots, p$. The regression equation becomes

\[ \mu(\theta_r) = Y^* \beta^*(\theta_r), \tag{R*} \]

which reads componentwise

\[ \mu_k(\theta_r) = \sum_{j=1}^p Y_{kj}^*\, \beta_j^*(\theta_r). \]

Multiplying both sides by $\frac{V_k}{V_\cdot} Y_{kl}^*$ and summing over $k$ leads to

\[ E^{(s)}[Y_l^*\, \mu(\theta_r)] = E^{(s)}\!\left[(Y_l^*)^2\right] \beta_l^*(\theta_r), \tag{24} \]

where, on the right-hand side, we have used the orthogonality of $Y_l^*$ and $Y_j^*$ for $j \neq l$. Hence

\[ \beta_l^*(\theta_r) = \frac{E^{(s)}[Y_l^*\, \mu(\theta_r)]}{E^{(s)}[(Y_l^*)^2]}, \tag{25} \]

which defines our new parameters in the barycentric model. You should observe that this transformation of the regression parameters $\beta_j(\theta_r)$ may lead to new parameters $\beta_j^*(\theta_r)$ which are sometimes difficult to interpret. In each application one therefore has to decide whether the orthogonality property of the design matrix or the interpretability of the regression parameters is more important. Luckily - as we have seen - there is no problem of interpretation in the case of simple linear regression, and interpretability is also not decisive if we are interested in prediction only.

6.5 An example

Suppose that we want to model $\mu_k(\theta_r)$ as depending on time in a quadratic manner, i.e.

\[ \mu_k(\theta_r) = \beta_0(\theta_r) + \beta_1(\theta_r)\, k + \beta_2(\theta_r)\, k^2. \]

Our design matrix is hence of the following form:

\[ Y = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ \vdots & \vdots & \vdots \\ 1 & k & k^2 \\ \vdots & \vdots & \vdots \\ 1 & n & n^2 \end{pmatrix}. \]

Let us construct the design matrix $Y^*$ with orthogonal columns. Following the procedure outlined in 6.4, we obviously have for the first two columns those obtained in the case of simple linear regression (measuring time from its barycenter), and we only have to construct $Y_3^*$. Formally:

\[ Y^* = \begin{pmatrix} 1 & 1 - E^{(s)}[k] & Y_{13}^* \\ 1 & 2 - E^{(s)}[k] & Y_{23}^* \\ \vdots & \vdots & \vdots \\ 1 & n - E^{(s)}[k] & Y_{n3}^* \end{pmatrix}. \]

To find $Y_3^*$ we must solve

\[ E^{(s)}\!\left[\left(k^2 - a_1 - a_2 \left(k - E^{(s)}[k]\right)\right)^2\right] = \min! \]

Using relation (23) we obtain

\[ a_1^* = E^{(s)}[k^2], \qquad a_2^* = \frac{E^{(s)}\!\left[k^2\left(k - E^{(s)}[k]\right)\right]}{\operatorname{Var}^{(s)}[k]}. \]

Hence we get

\[ Y_{k3}^* = k^2 - E^{(s)}[k^2] - a_2^* \left(k - E^{(s)}[k]\right), \qquad k = 1, 2, \ldots, n, \tag{26} \]

and from (R*) we get both

- the interpretation of $\beta_j^*(\theta_r)$ (use (25)), and
- the prediction $\hat\mu_k(\theta_r) = \sum_{j=1}^p Y_{kj}^*\, \hat\beta_j^*(\theta_r)$, where $\hat\beta_j^*(\theta_r)$ is the credibility estimator. Due to the orthogonality of $Y^*$ it can be obtained componentwise.
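As a quick numerical check (our sketch, with n = 5 and unit volumes), the closed form (26) indeed produces a third column that is $E^{(s)}$-orthogonal to the first two:

```python
import numpy as np

n = 5
k = np.arange(1, n + 1, dtype=float)
w = np.full(n, 1.0 / n)                     # uniform sampling weights V_k / V.

Ek, Ek2 = w @ k, w @ k**2
vark = Ek2 - Ek**2
a2 = (w @ (k**2 * (k - Ek))) / vark         # a_2* of section 6.5
Y3 = k**2 - Ek2 - a2 * (k - Ek)             # third orthogonal column, formula (26)

Y_star = np.column_stack([np.ones(n), k - Ek, Y3])
G = Y_star.T @ (w[:, None] * Y_star)        # Gram matrix E^(s)[Y_i* Y_j*]
assert np.allclose(G, np.diag(np.diag(G)))  # off-diagonals vanish, cf. (23)
```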

7. FINAL REMARKS

Our whole discussion of the general case is based on a particular fixed sampling distribution. As this distribution typically varies from risk to risk, $Y^*$, $\beta^*$ and $Z^*$ depend on the risk $r$, and we cannot achieve orthogonality of $Y^*$ simultaneously for all risks $r$. This is the problem which we have already discussed in section 5. The observations made there apply also to the general case, and the basic lesson is the same: you should construct the orthogonal $Y^*$ for the sampling distribution of the whole collective, which will then often lead to "nearly orthogonal" design matrices for the individual risks, which again "nearly separates" the credibility formula into componentwise procedures.

The question not addressed in this paper is that of the choice of the number of regression parameters. In the case of simple linear regression this question would be: should one use a linear regression function, a quadratic one, or a higher-order polynomial? Generally the question is: how should one choose the design matrix to start with? We hope to address this question in a forthcoming paper.

ACKNOWLEDGEMENT

The authors wish to thank Peter Bühlmann, University of California, Berkeley, for very stimulating ideas and discussions which have led to the formulation of the general-case barycentric credibility.

REFERENCES

HACHEMEISTER, C. (1975) Credibility for regression models with application to trend. In: Credibility: Theory and Applications (ed. P.M. Kahn), 129-163. Academic Press, New York.

DANNENBURG, D. (1996) Basic actuarial credibility models. PhD Thesis, University of Amsterdam.

DE VYLDER, F. (1981) Regression model with scalar credibility weights. Mitteilungen der Vereinigung Schweizerischer Versicherungsmathematiker, Heft 1, 27-39.

DE VYLDER, F. (1985) Non-linear regression in credibility theory. Insurance: Mathematics and Economics 4, 163-172.

PROF. H. BÜHLMANN
Mathematics Department
ETH Zürich
CH-8092 Zürich

DR. A. GISLER
"Winterthur", Swiss Insurance Co.
P.O. Box 357
CH-8401 Winterthur