Notes on Cellwise Data Interpolation for Visualization
Xavier Tricoche, Purdue University
CS530 - Spring 2019


While the data (computed or measured) used in visualization is only available in discrete form, it typically corresponds to continuous quantities that we want to visualize as such. Transforming discrete data into a continuous representation is the role of interpolation. In the following, we first define the general interpolation problem before reviewing the methods most commonly used in practice, namely (affine) linear, bilinear, and trilinear interpolation, and consider the typical queries that are performed on these various forms of interpolation.

1 Notations

In the following, R designates the set of real numbers. A real number (also referred to as a scalar value) is denoted by a cursive letter: a ∈ R. The d-dimensional Euclidean space is denoted by R^d. Elements of R^d are d-dimensional real vectors that we write a ∈ R^d. Positions in the corresponding d-dimensional affine space (see Def. 1) are simply denoted by bold letters: a ∈ E^d. The coordinates of a d-dimensional vector a (resp. of the point a ∈ E^d) are denoted by the scalars a_1, ..., a_d, and a (resp. a) can alternatively be represented by a column vector, which one usually prefers to write in its more compact form (a_1, a_2, ..., a_d)^T, where the transpose operator (·)^T turns the row vector into a column vector. The scalar (or dot) product between two d-dimensional vectors a and b is written a · b and corresponds to the scalar expression Σ_{i=1}^{d} a_i b_i. Finally, we designate d-dimensional square matrices by bold capital letters, A, and refer to their value in row i and column j as A_ij.

2 Mathematical Foundations

Before proceeding, we must introduce some fundamental mathematical concepts. The following discussion assumes some familiarity with basic notions from calculus. A good reference is the classical book by Spivak [?].
The informal presentation below is only meant to keep the discussion self-contained.

2.1 Euclidean Space

We start by introducing the concept of affine space, which offers a mathematical model for the physical space in visualization. Affine spaces enable the interaction of points and vectors.

Definition 1: Affine Space
An affine space is a geometric structure that combines a point set A and a vector space V through an addition operator + : A × V → A that satisfies the following properties:
1. ∀p ∈ A, ∀v, w ∈ V, (p + v) + w = p + (v + w);
2. ∀p, q ∈ A, there exists a unique v ∈ V such that q = p + v.

The sum operator in the definition above can be geometrically interpreted as a translation: one obtains a point

q = p + v by translating the point p by the vector v, and one defines the difference v = q − p as the vector that translates p to q. Practically, we are primarily interested in a special type of affine space, namely the Euclidean space.

Definition 2: Euclidean Space
The Euclidean space of dimension d is the affine space where the point set A is R^d and the associated vector space V is R^d itself, equipped with its vector space structure.

2.2 Global and Local Coordinates

A coordinate system is a means to represent the position of a point through a series of numbers. The definition of the Euclidean space given above suggests a natural (or canonical) choice for the coordinate system.

Definition 3: Standard Coordinate System
The standard coordinate system of the d-dimensional Euclidean space is defined as (O, e_0, e_1, ..., e_{d−1}), where O = (0, ..., 0) ∈ R^d is the origin of the coordinate system and the vectors e_i, i = 0, ..., d−1, form the canonical basis of R^d: e_i(j) = δ_ij, i ∈ [0, ..., d−1], where δ is the Kronecker symbol (δ_ii = 1, δ_ij = 0 if i ≠ j). The coordinates of a point p ∈ R^d are simply the coordinates of Op = p − O in the canonical basis.

While the interpolation equations (discussed below) can be solved in the standard coordinate system, it is often simpler to use a local coordinate system that is tuned to the vertices to interpolate. Different geometric configurations will call for different coordinate systems but they will all share the following general form.

Definition 4: Local Coordinates
The local coordinates of a point x ∈ R^d with respect to N vertices p_0, ..., p_{N−1} ∈ R^d are functions ω_i : R^d → R, i = 0, ..., N−1, that satisfy the following constraints:

x = Σ_{i=0}^{N−1} ω_i p_i, with Σ_{i=0}^{N−1} ω_i = 1. (1)

Local coordinates can be thought of as weights assigned to each vertex. To illustrate the behavior of these coordinates, let us consider a few special cases. If ω_0 = ω_1 = ... = ω_{N−1} = 1/N, then x is the center of gravity of the vertices, also called the barycenter. If we have ω_j = 1 for some index j in [0, N−1] and ω_i = 0, ∀i ≠ j, then Equation (1) leads to x = p_j. Finally, if we have ω_j ≥ 0, ω_k ≥ 0, and ω_i = 0, ∀i ≠ j, k, then Equation (1) simplifies to

x = ω_j p_j + ω_k p_k, with ω_j + ω_k = 1.

If we set u = ω_k, we finally obtain x = (1 − u) p_j + u p_k = p_j + u (p_k − p_j), which is the equation of a line connecting p_j and p_k. These various cases are illustrated in the 2D case in Figure 1.

Figure 1: Local coordinates with respect to 9 vertices in the plane. The shaded area corresponds to the convex hull of the vertices.

2.3 Simplices

The geometry of the cells that compose a mesh constrains the type of interpolation strategy that can be applied to them. Among the various kinds of cells used in practice, the simplex (plural simplices) constitutes the simplest case. To define a simplex, we first introduce the concept of linear dependence.

Definition 5: Linear Dependence
A set of vectors v_0, v_1, ..., v_{d−1} is said to be linearly dependent if there exist coefficients α_0, α_1, ..., α_{d−1} ∈ R, not all zero, such that Σ_{i=0}^{d−1} α_i v_i = 0. Similarly, the d + 1 points p_0, p_1, ..., p_d are said to be linearly dependent if the d vectors v_1 := p_1 − p_0, ..., v_d := p_d − p_0 are linearly dependent.

For instance, in 2D, 3 points are linearly dependent if they are aligned (or collinear): in this case the triangle formed by these three points is degenerate. Similarly, in 3D, 4 points are linearly dependent if they are co-planar. Note that d + 2 points are always linearly dependent in R^d, since the d + 1 vectors that they form cannot be linearly independent in R^d, a vector space of dimension d. Hence there can be at most d + 1 points in a linearly independent set in R^d, which leads to the definition of a simplex.

Definition 6: Simplex
A d-dimensional simplex in R^d is the convex hull of d + 1 linearly independent points p_0, p_1, ..., p_d.
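In 2D, the linear dependence test of Definition 5 amounts to checking whether the determinant of the two edge vectors vanishes, i.e., whether the triangle spanned by the three points has zero area. A minimal Python sketch (illustrative, not part of the notes; the function name and tolerance are ours):

```python
def is_degenerate_triangle(p0, p1, p2, eps=1e-12):
    # Points are linearly dependent (Def. 5) iff the edge vectors
    # v1 = p1 - p0 and v2 = p2 - p0 are linearly dependent, i.e. the
    # determinant of the 2x2 matrix [v1 v2] vanishes.
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p0[0], p2[1] - p0[1])
    det = v1[0] * v2[1] - v1[1] * v2[0]
    return abs(det) < eps
```

The same idea extends to 3D with a 3x3 determinant testing the co-planarity of 4 points.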
In other words, the simplex is the set

{ x ∈ R^d | ∃ ω_0, ω_1, ..., ω_d with ω_i ≥ 0 ∀i, Σ_{i=0}^{d} ω_i = 1, and x = Σ_{i=0}^{d} ω_i p_i }. (2)

For example, simplices in 1D are line segments, simplices in 2D are triangles, and simplices in 3D are tetrahedra.

2.4 Calculus

The central objects of study in scientific visualization are functions defined over a subset of the Euclidean space in one or several dimensions.

Definition 7: Functions
A function f from Ω ⊂ R^n to R^p is a relation (or correspondence, or mapping) between the positions in Ω and their associated values in R^p. This relation is written f : Ω ⊂ R^n → R^p, and f(x) = y means that y is the unique value in R^p that f associates with x ∈ Ω ⊂ R^n. The set of all values y in R^p that correspond to the image by f of some point x in R^n is called the image of f: Image(f) = { y ∈ R^p | ∃x ∈ Ω ⊂ R^n, f(x) = y }. Ω itself is called the domain of definition (or simply domain) of f.

Note that we made no particular assumptions so far on the dimension p of the image of the function f. In scientific visualization, however, one is primarily concerned with the cases p = 1 (scalar functions), p = 2 (resp. p = 3) (vector functions in R^2, resp. R^3), and p = 4 (resp. p = 9) (tensor functions in R^2, resp. R^3). Similarly, most relevant functions in scientific visualization are continuous.

Definition 8: Continuity
A function f : Ω ⊂ R^n → R^p is said to be continuous at x_0 ∈ Ω ⊂ R^n if the following property is satisfied:

∀ε > 0, ∃δ > 0, x ∈ B_δ(x_0) ∩ Ω ⇒ ||f(x) − f(x_0)|| < ε,

where B_δ(x_0) := { x ∈ R^n | ||x − x_0|| < δ } is the n-dimensional ball of radius δ centered at x_0. Further, f is said to be continuous over Ω (or simply continuous) if it is continuous at every point in Ω.

In some cases, one is interested in functions that are not only continuous but also continuously differentiable, that is, they have continuous derivatives up to some order.
Definition 9: Differentiability
The derivative of a scalar function f : Ω ⊂ R → R at some value x ∈ Ω is noted f'(x) and is defined as

f'(x) = lim_{h→0} ( f(x + h) − f(x) ) / h. (3)

If this limit exists, the function is said to be differentiable at x. If f is differentiable at every value in Ω ⊂ R and its derivative f' is continuous, f is said to be continuously differentiable over Ω. Note that the differentiability of f over an interval implies that f is continuous over that interval. The converse is not true, however.

Definition 10: Functions of multiple variables
The partial derivative of a differentiable function f : Ω ⊂ R^n → R with respect to its i-th variable (0 ≤ i ≤ n − 1) is defined as:

∂f/∂x_i = lim_{h→0} ( f(x + h e_i) − f(x) ) / h, (4)

where e_i is the i-th basis vector of the Euclidean space R^n. The vector formed by the partial derivatives of f with respect to all its variables is called the gradient vector:

∇f := ( ∂f/∂x_0, ∂f/∂x_1, ..., ∂f/∂x_{n−1} )^T. (5)

2.5 Interpolation

In the following, we consider different interpolation approaches, each applicable to a particular situation. Regardless of the considered solution, the general interpolation problem can be defined as follows.

Definition 11: Interpolation
Let {(p_i, f_i)}_{0 ≤ i ≤ N−1} be a set of N vertices p_i in a region Ω ⊂ R^d associated with scalar values f_i. The interpolation problem consists in finding a function f : Ω ⊂ R^d → R that satisfies the following constraints:

∀i ∈ [0, ..., N−1], f(p_i) = f_i. (6)

Note that the function f is typically chosen to be continuous. One may also impose some additional smoothness constraints on the function f, such as to require that it be continuously differentiable once (C^1), twice (C^2), or for all degrees (C^∞). For instance, all polynomial functions are C^∞.
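The limits in Definitions 9 and 10 also suggest a simple numerical approximation: replace the limit by a small finite step. A Python sketch (illustrative, not from the notes; it uses central differences rather than the one-sided quotient of Eq. (4), and the function name and step size are ours):

```python
def gradient_fd(f, x, h=1e-6):
    # Approximate the gradient (Eq. (5)) of f : R^n -> R at x.
    # Each partial derivative (Eq. (4)) is estimated by the
    # central difference (f(x + h e_i) - f(x - h e_i)) / (2h).
    n = len(x)
    grad = []
    for i in range(n):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad
```

For smooth functions the central difference is second-order accurate in h, which is why it is preferred over the one-sided quotient in numerical practice.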

3 Linear Interpolation

We consider in this section the simplest form of interpolation, namely affine linear interpolation.

Definition 12: Affine Linear
An affine linear scalar function f : R^d → R has the general form:

f(x) = a · x + b, (7)

where a and x are vectors in R^d: x = O + x is the position where the function is evaluated, a is a constant vector, and b is a scalar value.

Together, a and b uniquely define the affine linear function f and provide d + 1 degrees of freedom to satisfy the interpolation of d + 1 data points (2 points in 1D, 3 points in 2D, 4 points in 3D, ...). In this case the interpolation constraints (Equation (6)) correspond to the following system of d + 1 equations:

[ p_{0,1}  p_{0,2}  ...  p_{0,d}  1 ] [ a_1 ]   [ f_0 ]
[ p_{1,1}  p_{1,2}  ...  p_{1,d}  1 ] [ a_2 ]   [ f_1 ]
[   ...                             ] [ ... ] = [ ... ]   (8)
[ p_{d,1}  p_{d,2}  ...  p_{d,d}  1 ] [ a_d ]   [ f_d ]
                                      [  b  ]

3.1 One-dimensional Case

Assume we have two positions x_0 and x_1 on the real axis associated with two values f_0 and f_1, respectively. We wish to determine the value at some other point x located between x_0 and x_1 using linear interpolation. The situation is illustrated in Figure 2.

Figure 2: Linear interpolation in 1D between the value f_0 at x_0 and the value f_1 at x_1.

Equation (7) takes the form f(x) = a x + b, and our task consists in determining the values of a and b that satisfy the system of equations (cf. Equation (8)):

[ x_0  1 ] [ a ]   [ f_0 ]
[ x_1  1 ] [ b ] = [ f_1 ]   (9)

It is straightforward to show that

a = (f_1 − f_0) / (x_1 − x_0),   b = (f_0 x_1 − f_1 x_0) / (x_1 − x_0)

are the solutions in this case, and we finally obtain:

f(x) = ( (f_1 − f_0) x + f_0 x_1 − f_1 x_0 ) / (x_1 − x_0). (10)

Note that an alternative, more geometric approach to solving this problem, which will prove powerful as we consider higher dimensions, consists in introducing the local coordinates of x with respect to x_0 and x_1, which we define as:

x = (1 − u) x_0 + u x_1, with u = (x − x_0) / (x_1 − x_0). (11)

Again, refer to Figure 2. The value f(x) at x can now be computed as

f(x) = (1 − u) f_0 + u f_1. (12)

Equation (12) is equivalent to the following expression:

f(x) = f_0 + u (f_1 − f_0) = f_0 + (x − x_0) (f_1 − f_0) / (x_1 − x_0),

and we obtain Equation (10).

3.2 Two-dimensional Case

In the two-dimensional case, we have 3 points p_0, p_1, and p_2 in R^2 associated with values f_0, f_1, and f_2, respectively. These points span a triangle, that is, a 2-simplex.

Figure 3: Linear interpolation in a triangle.

In this case Equation (7) can be written as:

f(x) = a · x + b (13)
     = a_1 x + a_2 y + b, (14)

and the interpolation equations (8) lead to:

[ p_0x  p_0y  1 ] [ a_1 ]   [ f_0 ]
[ p_1x  p_1y  1 ] [ a_2 ] = [ f_1 ]   (15)
[ p_2x  p_2y  1 ] [  b  ]   [ f_2 ]

We can solve this system directly.
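For the one-dimensional case, both routes above, solving the 2x2 system (9) by Cramer's rule and using the local coordinate u of Equation (11), are easy to code and must agree. A Python sketch (illustrative, not from the notes; the function names are ours):

```python
def linear_interp_1d(x0, f0, x1, f1, x):
    # Solve the 2x2 system (Eq. (9)) for the coefficients of
    # f(x) = a x + b by Cramer's rule, then evaluate f at x.
    det = x0 - x1                     # determinant of [[x0, 1], [x1, 1]]
    a = (f0 - f1) / det               # Eq. (10) numerator terms
    b = (x0 * f1 - x1 * f0) / det
    return a * x + b

def lerp_1d(x0, f0, x1, f1, x):
    # Geometric route: local coordinate u (Eq. (11)),
    # then the convex combination of Eq. (12).
    u = (x - x0) / (x1 - x0)
    return (1.0 - u) * f0 + u * f1
```

As claimed in the derivation, the two functions return the same value for any x (up to floating-point round-off).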

Applying Cramer's rule¹ to the system (15), we obtain:

a_1 = | f_0  p_0y  1 ; f_1  p_1y  1 ; f_2  p_2y  1 | / Δ,
a_2 = | p_0x  f_0  1 ; p_1x  f_1  1 ; p_2x  f_2  1 | / Δ,
b   = | p_0x  p_0y  f_0 ; p_1x  p_1y  f_1 ; p_2x  p_2y  f_2 | / Δ,
with Δ := | p_0x  p_0y  1 ; p_1x  p_1y  1 ; p_2x  p_2y  1 |, (16)

where the straight brackets designate the determinant of the respective 3×3 matrices (rows separated by semicolons). Expanding these determinants finally leads to the following expressions:

a_1 = ( f_0 (p_1y − p_2y) + f_1 (p_2y − p_0y) + f_2 (p_0y − p_1y) ) / Δ,
a_2 = ( f_0 (p_2x − p_1x) + f_1 (p_0x − p_2x) + f_2 (p_1x − p_0x) ) / Δ,
b   = ( f_0 (p_1x p_2y − p_2x p_1y) + f_1 (p_2x p_0y − p_0x p_2y) + f_2 (p_0x p_1y − p_1x p_0y) ) / Δ, (17)

with Δ = p_0x (p_1y − p_2y) + p_1x (p_2y − p_0y) + p_2x (p_0y − p_1y).

Arguably, this method is a recipe for mistakes that we should certainly avoid. Fortunately, there is a more elegant (and less error-prone) solution to this problem, which involves the concept of barycentric coordinates.

Definition 13: Barycentric Coordinates
The barycentric coordinates β_0, β_1, ..., β_d of a point p ∈ R^d associated with the vertices p_0, p_1, ..., p_d of a d-dimensional simplex are defined by:

Σ_{i=0}^{d} β_i p_i = p, with Σ_{i=0}^{d} β_i = 1. (18)

Notice that barycentric coordinates are a special case of local coordinates (Definition 4) that applies to a simplex (Definition 6). In the two-dimensional case, the barycentric coordinates satisfy the following equations:

β_0 x_0 + β_1 x_1 + β_2 x_2 = x (19)
β_0 y_0 + β_1 y_1 + β_2 y_2 = y (20)
β_0 + β_1 + β_2 = 1 (21)

Since β_0 = 1 − β_1 − β_2, the system can be simplified into a system of two unknowns β_1 and β_2:

β_1 (x_1 − x_0) + β_2 (x_2 − x_0) = x − x_0 (22)
β_1 (y_1 − y_0) + β_2 (y_2 − y_0) = y − y_0 (23)

For convenience, we define p̃_i := p_i − p_0 = (x̃_i, ỹ_i)^T and p̃ := p − p_0 = (x̃, ỹ)^T, so that the system becomes:

β_1 x̃_1 + β_2 x̃_2 = x̃ (24)
β_1 ỹ_1 + β_2 ỹ_2 = ỹ (25)

This 2×2 linear system of equations can be solved using Cramer's rule to yield:

β_1 = ( x̃ ỹ_2 − x̃_2 ỹ ) / Δ (26)
β_2 = ( x̃_1 ỹ − x̃ ỹ_1 ) / Δ (27)
β_0 = 1 − β_1 − β_2, (28)

where Δ is the determinant of the system:

Δ = x̃_1 ỹ_2 − x̃_2 ỹ_1 = (x_1 − x_0)(y_2 − y_0) − (x_2 − x_0)(y_1 − y_0). (29)

3.3 Three-dimensional Case

The barycentric coordinates can be similarly applied to the linear interpolation over the three-dimensional simplex, called a tetrahedron. In that case, the cell admits 4 vertices p_0, p_1, p_2, p_3 with associated values f_0, f_1, f_2, and f_3.

Figure 4: Linear interpolation in a tetrahedron.

β_0 x_0 + β_1 x_1 + β_2 x_2 + β_3 x_3 = x (30)
β_0 y_0 + β_1 y_1 + β_2 y_2 + β_3 y_3 = y (31)
β_0 z_0 + β_1 z_1 + β_2 z_2 + β_3 z_3 = z (32)
β_0 + β_1 + β_2 + β_3 = 1 (33)

Since β_0 = 1 − β_1 − β_2 − β_3, the system can be simplified into a system of three unknowns β_1, β_2, and β_3:

β_1 (x_1 − x_0) + β_2 (x_2 − x_0) + β_3 (x_3 − x_0) = x − x_0 (34)
β_1 (y_1 − y_0) + β_2 (y_2 − y_0) + β_3 (y_3 − y_0) = y − y_0 (35)
β_1 (z_1 − z_0) + β_2 (z_2 − z_0) + β_3 (z_3 − z_0) = z − z_0 (36)

¹ See http://en.wikipedia.org/wiki/Cramer's_rule
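Returning briefly to the two-dimensional case, Equations (24) to (29) translate directly into code, and the barycentric coordinates then give the interpolated value as the weighted sum Σ β_i f_i. A minimal Python sketch (illustrative, not part of the notes; the names are ours), assuming a non-degenerate triangle:

```python
def barycentric_2d(p0, p1, p2, p):
    # Solve the 2x2 system (Eqs. (24)-(25)) with Cramer's rule.
    x1, y1 = p1[0] - p0[0], p1[1] - p0[1]   # p~1
    x2, y2 = p2[0] - p0[0], p2[1] - p0[1]   # p~2
    x, y = p[0] - p0[0], p[1] - p0[1]       # p~
    det = x1 * y2 - x2 * y1                 # Eq. (29)
    b1 = (x * y2 - x2 * y) / det            # Eq. (26)
    b2 = (x1 * y - x * y1) / det            # Eq. (27)
    return 1.0 - b1 - b2, b1, b2            # Eq. (28)

def interp_triangle(p0, f0, p1, f1, p2, f2, p):
    # Linear interpolation in the triangle: f(p) = sum_i beta_i f_i.
    b0, b1, b2 = barycentric_2d(p0, p1, p2, p)
    return b0 * f0 + b1 * f1 + b2 * f2
```

At each vertex the corresponding barycentric coordinate is 1 and the others vanish, so the interpolant reproduces the data values as required by Definition 11.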

We define p̃_i := p_i − p_0 = (x̃_i, ỹ_i, z̃_i)^T and p̃ := p − p_0 = (x̃, ỹ, z̃)^T, so that the system becomes:

β_1 x̃_1 + β_2 x̃_2 + β_3 x̃_3 = x̃ (38)
β_1 ỹ_1 + β_2 ỹ_2 + β_3 ỹ_3 = ỹ (39)
β_1 z̃_1 + β_2 z̃_2 + β_3 z̃_3 = z̃ (40)

In this case we obtain a 3×3 linear system of equations whose solution, again by Cramer's rule, is:

β_1 = | x̃  x̃_2  x̃_3 ; ỹ  ỹ_2  ỹ_3 ; z̃  z̃_2  z̃_3 | / Δ (42)
β_2 = | x̃_1  x̃  x̃_3 ; ỹ_1  ỹ  ỹ_3 ; z̃_1  z̃  z̃_3 | / Δ (43)
β_3 = | x̃_1  x̃_2  x̃ ; ỹ_1  ỹ_2  ỹ ; z̃_1  z̃_2  z̃ | / Δ (44)
β_0 = 1 − β_1 − β_2 − β_3, (45)

where the straight brackets designate 3×3 determinants (rows separated by semicolons) and Δ is the determinant of the system:

Δ = | x̃_1  x̃_2  x̃_3 ; ỹ_1  ỹ_2  ỹ_3 ; z̃_1  z̃_2  z̃_3 |. (46)

4 Multilinear Interpolation

While linear interpolation is applicable to simplices (in any dimension), it cannot be used to interpolate the other major cell types encountered in visualization, namely quadrilaterals in 2D and hexahedra in 3D. The solution in this case is provided by a combination of linear interpolations.

4.1 Bilinear Interpolation in 2D

We first consider the simplest case: bilinear interpolation in a unit square. In this case the values at the four corners correspond to f_0, f_1, f_2, and f_3. We consider a position p = (u, v)^T. The auxiliary values f_4, ..., f_7 defined on the edges can be obtained by linear interpolation along each edge. For instance, f_4 = (1 − u) f_0 + u f_1 and f_6 = (1 − u) f_3 + u f_2. Bilinear interpolation now consists in interpolating the value at p through a linear interpolation between f_4 and f_6: f = f(u, v) = (1 − v) f_4 + v f_6. By substituting f_4 and f_6 by their values and reordering the terms, we obtain the expression of the bilinear interpolation:

f(u, v) = (1 − u)(1 − v) f_0 + u(1 − v) f_1 + uv f_2 + (1 − u)v f_3. (47)

Figure 5: Bilinear interpolation in a unit square. The corners (0,0), (1,0), (1,1), (0,1) carry the values f_0, f_1, f_2, f_3; the edge points (u,0), (1,v), (u,1), (0,v) carry the auxiliary values f_4, f_5, f_6, f_7.
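Equation (47) is a one-liner in code. A Python sketch (illustrative, not from the notes; the function name is ours), using the corner ordering of Figure 5:

```python
def bilinear(f0, f1, f2, f3, u, v):
    # Eq. (47): f0 at (0,0), f1 at (1,0), f2 at (1,1), f3 at (0,1);
    # (u, v) are the local coordinates of the query point in the unit square.
    return ((1.0 - u) * (1.0 - v) * f0 + u * (1.0 - v) * f1
            + u * v * f2 + (1.0 - u) * v * f3)
```

At the four corners the weights reduce to a single 1, so the formula reproduces the data values, as required by Definition 11; at the center (u, v) = (1/2, 1/2) it returns the average of the four values.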
Note that the result does not depend on the order in which we choose to perform the horizontal and vertical interpolations. Indeed, if instead of considering f_4 and f_6 we compute f_5 = (1 − v) f_1 + v f_2 and f_7 = (1 − v) f_0 + v f_3, we obtain f(u, v) = (1 − u) f_7 + u f_5, which after substitution yields the expression in Equation (47).

As the name suggests, a bilinear function is not linear. Indeed, the terms of the expression above can be rearranged to yield the following formula:

f(u, v) = f_0 + (f_1 − f_0) u + (f_3 − f_0) v + (f_0 − f_1 + f_2 − f_3) uv. (48)

The term uv is the nonlinear part of the expression.

In the general case, a quadrilateral cell is not a square and it can have an arbitrary shape. The application of the bilinear formula is not directly possible since the coordinates (u, v)^T of the considered point p are not readily available.

Figure 6: Arbitrary quadrilateral cell with vertices p_0, p_1, p_2, p_3 carrying the values f_0, f_1, f_2, f_3.

To determine these coordinates we need a mapping from the quadrilateral to the unit square, which we call G. Indeed, the position in the unit square that p maps to is in fact its local coordinates: G(p) = (u, v)^T. This mapping corresponds to the inverse of the transformation T : R^2 → R^2 from unit square to quadrilateral. The key observation here is that T is in fact a bilinear transformation. By definition, this mapping satisfies T(0, 0) = p_0, T(1, 0) = p_1, T(1, 1) = p_2, and T(0, 1) = p_3. Using Equation (48), we obtain:

p = T(u, v) = p_0 + (p_1 − p_0) u + (p_3 − p_0) v + (p_0 − p_1 + p_2 − p_3) uv. (49)

To simplify notations, we set p̃_i := p_i − p_0 = (x̃_i, ỹ_i)^T for i = 1, 3. Similarly, we set p̃ := p − p_0 = (x̃, ỹ)^T. Finally, we introduce p̄ := p_0 − p_1 + p_2 − p_3 = (x̄, ȳ)^T. The equation above then yields a

nonlinear system of equations in u and v:

x̃_1 u + x̃_3 v + x̄ uv = x̃ (50)
ỹ_1 u + ỹ_3 v + ȳ uv = ỹ (51)

The second equation can be used to express v as a function of u:

v = ( ỹ − ỹ_1 u ) / ( ỹ_3 + ȳ u ). (52)

If we substitute this expression into the first equation, we obtain the following quadratic equation in u:

( x̃_3 ỹ − x̃ ỹ_3 ) + ( x̃_1 ỹ_3 − x̃_3 ỹ_1 + x̄ ỹ − x̃ ȳ ) u + ( x̃_1 ȳ − x̄ ỹ_1 ) u² = 0, (53)

which can be written more compactly as follows:

⟨p̃_3, p̃⟩ + ( ⟨p̃_1, p̃_3⟩ + ⟨p̄, p̃⟩ ) u + ⟨p̃_1, p̄⟩ u² = 0, (54)

where the notation ⟨p, q⟩ designates the determinant of the matrix whose columns are the vectors p and q, respectively. The solutions of this equation are given by the following equations:

Δ = ( ⟨p̃_1, p̃_3⟩ + ⟨p̄, p̃⟩ )² − 4 ⟨p̃_3, p̃⟩ ⟨p̃_1, p̄⟩ (55)
u_{1,2} = ( −( ⟨p̃_1, p̃_3⟩ + ⟨p̄, p̃⟩ ) ± √Δ ) / ( 2 ⟨p̃_1, p̄⟩ ). (56)

If Δ > 0, the equation admits two solutions. If the quadrilateral is convex, however, only one of these solutions will correspond to coordinates u and v(u) (Equation (52)) that satisfy 0 ≤ u, v ≤ 1.

4.2 Trilinear Interpolation in 3D

The bilinear approach can be extended to the 3D case. The interpolation is then called trilinear and it applies to a hexahedron. This cell has 8 vertices p_0, ..., p_7.

Figure 7: Trilinear interpolation in a cuboid. The vertices 0 to 7 sit at the corners (0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1); p_B = (u,v,0) and p_T = (u,v,1) denote the corresponding points on the bottom and top faces.

Figure 8: Trilinear interpolation in an arbitrary hexahedron.

Using the notations of Figure 8, the value at an arbitrary position p = (u, v, w) can be expressed as the linear interpolation between the values at p_B = (u, v, 0) and at p_T = (u, v, 1): f(p) = (1 − w) f(p_B) + w f(p_T). Since p_B lies in a quadrilateral face, its value can be obtained by bilinear interpolation of the values at p_0, p_1, p_2, and p_3:

f(p_B) = (1 − u)(1 − v) f_0 + u(1 − v) f_1 + uv f_2 + (1 − u)v f_3.

Similarly, the value at p_T can be obtained by bilinear interpolation in the quadrilateral face p_4, p_5, p_6, and p_7:

f(p_T) = (1 − u)(1 − v) f_4 + u(1 − v) f_5 + uv f_6 + (1 − u)v f_7.

Putting everything together, one obtains the expression of the trilinear interpolation:

f(p) = (1 − u)(1 − v)(1 − w) f_0 + u(1 − v)(1 − w) f_1 + uv(1 − w) f_2 + (1 − u)v(1 − w) f_3 + (1 − u)(1 − v)w f_4 + u(1 − v)w f_5 + uvw f_6 + (1 − u)vw f_7. (57)

The general form of a trilinear function is

f(x, y, z) = a + bx + cy + dz + exy + fxz + gyz + hxyz. (58)

Here a = f_0, b = f_1 − f_0, c = f_3 − f_0, d = f_4 − f_0, e = f_0 − f_1 + f_2 − f_3, f = f_0 − f_1 + f_5 − f_4, g = f_0 − f_3 + f_7 − f_4, and h = −f_0 + f_1 − f_2 + f_3 + f_4 − f_5 + f_6 − f_7.

In the case of an arbitrarily shaped hexahedron, the challenge consists in finding the reference coordinates (u, v, w) of an arbitrary position p = (x, y, z)^T to apply the above formula. Similar to the application of bilinear interpolation in an arbitrarily shaped quadrilateral, the coordinates (u, v, w) satisfy the following equation:

T(u, v, w) = (1 − u)(1 − v)(1 − w) p_0 + u(1 − v)(1 − w) p_1 + uv(1 − w) p_2 + (1 − u)v(1 − w) p_3 + (1 − u)(1 − v)w p_4 + u(1 − v)w p_5 + uvw p_6 + (1 − u)vw p_7 = p. (59)

However, contrary to the bilinear case, this system of three nonlinear equations (one equation per spatial dimension) does not admit a closed-form solution. Instead, a numerical search must be performed. Practically, a Newton-Raphson method is used to iteratively converge toward the solution. An initial guess u_0 = (u_0, v_0, w_0)^T must first be provided. Then, at each step i, the current approximate solution u_i = (u_i, v_i, w_i)^T is updated to bring it closer to the solution. If we consider the function d : u ↦ T(u) − p, where T is the function defined in Equation (59), then, by definition, the solution u = (u, v, w)^T

satisfies d(u) = 0. From u_i, the required correction step δ_i should satisfy

d(u_i + δ_i) = 0. (60)

Using a Taylor expansion up to first order, this can be rewritten as

d(u_i) + ∇d(u_i) δ_i = d(u_i) + ∇T(u_i) δ_i = 0, (61)

which suggests to select

δ_i = −( ∇T(u_i) )^{−1} d(u_i). (62)

By defining

u_{i+1} = u_i + δ_i = u_i − ( ∇T(u_i) )^{−1} d(u_i), (63)

one defines a sequence that converges quadratically toward the solution u, provided the initial guess u_0 is selected close enough to the actual solution. Observe that ∇T in the expression above is a matrix whose rows are the gradient vectors of the x, y, and z components of the vector-valued function T (see Equation (59)):

∇T = [ ∂T_x/∂u  ∂T_x/∂v  ∂T_x/∂w ]
     [ ∂T_y/∂u  ∂T_y/∂v  ∂T_y/∂w ]   (64)
     [ ∂T_z/∂u  ∂T_z/∂v  ∂T_z/∂w ]

The numerical search requires computing the inverse of that matrix (or, equivalently, solving a 3×3 linear system) at each iteration.
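The Newton-Raphson iteration (63) can be sketched in Python. This is an illustrative sketch, not code from the notes: the Jacobian (64) is approximated here by central differences rather than evaluated analytically, the 3×3 solve uses Cramer's rule, and all function names (`trilinear_point`, `solve3`, `inverse_trilinear`) are ours. Vertices follow the corner numbering of Figure 7.

```python
def trilinear_point(verts, u, v, w):
    # Evaluate the trilinear map T (Eq. (59)) componentwise.
    wts = [(1-u)*(1-v)*(1-w), u*(1-v)*(1-w), u*v*(1-w), (1-u)*v*(1-w),
           (1-u)*(1-v)*w,     u*(1-v)*w,     u*v*w,     (1-u)*v*w]
    return [sum(wt * p[k] for wt, p in zip(wts, verts)) for k in range(3)]

def solve3(J, r):
    # Solve the 3x3 system J x = r by Cramer's rule.
    def det3(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det3(J)
    sol = []
    for c in range(3):
        Jc = [row[:] for row in J]
        for i in range(3):
            Jc[i][c] = r[i]
        sol.append(det3(Jc) / d)
    return sol

def inverse_trilinear(verts, p, guess=(0.5, 0.5, 0.5),
                      tol=1e-10, max_iter=50, h=1e-6):
    # Newton-Raphson iteration (Eq. (63)) on d(u) = T(u) - p.
    u = list(guess)
    for _ in range(max_iter):
        d = [ti - pi for ti, pi in zip(trilinear_point(verts, *u), p)]
        if max(abs(c) for c in d) < tol:
            break
        # Central-difference approximation of the Jacobian (Eq. (64)).
        J = [[0.0] * 3 for _ in range(3)]
        for j in range(3):
            up, um = u[:], u[:]
            up[j] += h
            um[j] -= h
            tp = trilinear_point(verts, *up)
            tm = trilinear_point(verts, *um)
            for i in range(3):
                J[i][j] = (tp[i] - tm[i]) / (2.0 * h)
        step = solve3(J, d)                  # Newton correction (Eq. (62))
        u = [ui - si for ui, si in zip(u, step)]
    return u
```

For the unit cube, T is the identity and Newton converges in one step; for a mildly distorted hexahedron the default guess (1/2, 1/2, 1/2) is typically close enough for the quadratic convergence noted above.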