Evaluation and Validation


Evaluation and Validation. Jian-Jia Chen (slides are based on Peter Marwedel), Informatik 12, TU Dortmund, Germany. Springer, 2010. January 5, 2016. These slides use Microsoft clip arts; Microsoft copyright restrictions apply.

Structure of this course: application knowledge feeds chapters 2 (Specification), 3 (ES-hardware), 4 (System software: RTOS, middleware, ...), 6 (Application mapping), 7 (Optimization), 5 (Evaluation & validation: energy, cost, performance, ...), and 8 (Test); a design repository connects the design steps. Numbers denote the sequence of chapters.

Validation and Evaluation. Definition: Validation is the process of checking whether or not a certain (possibly partial) design is appropriate for its purpose, meets all constraints, and will perform as expected (yes/no decision). Definition: Validation with mathematical rigor is called (formal) verification. Definition: Evaluation is the process of computing quantitative information about some key characteristics of a certain (possibly partial) design.

How to evaluate designs according to multiple criteria? Many different criteria are relevant for evaluating designs: average and worst-case delay, power/energy consumption, thermal behavior, reliability, safety, security, cost, size, weight, EMC characteristics, radiation hardness, environmental friendliness, etc. How to compare different designs? (Some designs are better than others.)

Definitions. Let X be the m-dimensional solution space of the design problem. Example: dimensions correspond to the number of processors, size of memories, type and width of busses, etc. Let F be the n-dimensional objective space of the design problem. Example: dimensions correspond to average and worst-case delay, power/energy consumption, size, weight, reliability, ... Let f(x) = (f_1(x), ..., f_n(x)) with x ∈ X be an objective function. We assume that f(x) is used for evaluating designs; f maps the solution space into the objective space.

Pareto points. We assume that, for each objective, an order < and the corresponding order ≤ are defined. Definition: Vector u = (u_1, ..., u_n) ∈ F dominates vector v = (v_1, ..., v_n) ∈ F iff u is better than v with respect to at least one objective and not worse than v with respect to all other objectives: ∀i ∈ {1, ..., n}: u_i ≤ v_i and ∃i ∈ {1, ..., n}: u_i < v_i. Definition: Vector u ∈ F is indifferent with respect to vector v ∈ F iff neither u dominates v nor v dominates u.

Pareto points. A solution x ∈ X is called Pareto-optimal with respect to X iff there is no solution y ∈ X such that u = f(x) is dominated by v = f(y); x is then called a Pareto point. Definition: Let S ⊆ F be a subset of objective vectors. v ∈ F is called a non-dominated solution with respect to S iff v is not dominated by any element of S. v is called Pareto-optimal iff v is non-dominated with respect to all of F. The Pareto set is the set of all Pareto-optimal solutions. Pareto sets define a Pareto front (the boundary of the dominated subspace).
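The dominance test and the non-dominated-set definition above can be sketched directly in code. This is a minimal illustration, assuming minimization of all objectives; the example objective vectors are made up.

```python
# Sketch of the dominance relation and Pareto filtering defined above,
# assuming minimization of all objectives. Example vectors are illustrative.

def dominates(u, v):
    """u dominates v iff u is <= v in every objective
    and strictly < in at least one."""
    return all(ui <= vi for ui, vi in zip(u, v)) and \
           any(ui < vi for ui, vi in zip(u, v))

def pareto_set(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

designs = [(3, 9), (5, 4), (4, 4), (7, 2), (8, 3)]  # (energy, run time)
front = pareto_set(designs)
# front == [(3, 9), (4, 4), (7, 2)]: (5, 4) is dominated by (4, 4),
# and (8, 3) is dominated by (7, 2).
```

Note that (3, 9), (4, 4), and (7, 2) are mutually indifferent: each is better in one objective and worse in the other.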

[Figure: a Pareto point in a two-objective space (objective 1: e.g. energy consumption; objective 2: e.g. run time). Relative to the Pareto point, the space splits into regions that are better, worse, or indifferent; minimization of objectives is assumed.]

[Figure: a Pareto set in the same two-objective space (objective 1: e.g. energy consumption; objective 2: e.g. run time). The Pareto set is the set of all Pareto-optimal solutions; the remaining region is dominated. Minimization of objectives is assumed.]

[Figure: one more time — Pareto points and the resulting Pareto front.]

Design space evaluation. Design space evaluation (DSE) based on Pareto points is the process of finding and returning a set of Pareto-optimal designs to the user, enabling the user to select the most appropriate design.


Average delays (execution times). Estimated average execution times: it is difficult to generate sufficiently precise estimates; there is a trade-off between run time and precision. Accurate average execution times: only as precise as the input data. We need to compute average and worst-case execution times.

Worst case execution time (1). [Figure (adapted from R. Wilhelm): distribution of possible execution times between BCET and WCET; the estimates BCET_EST and WCET_EST bound them from below and above (worst-case guarantee vs. worst-case performance), the spread of possible times determines timing predictability, and the time constraint lies beyond WCET_EST.] The estimate WCET_EST must be 1. safe (i.e. WCET_EST ≥ WCET) and 2. tight (WCET_EST − WCET ≪ WCET_EST).

Worst case execution times (2). Complexity: in the general case, it is undecidable whether a bound exists; for restricted programs, analysis is simple for old architectures but very complex for new architectures with pipelines, caches, interrupts, virtual memory, etc. Approaches: for hardware, detailed timing behavior is required; for software, availability of machine programs and complex analysis are required (see, e.g., www.absint.de).

Structure of this course: application knowledge feeds chapters 2 (Specification & modeling), 3 (ES-hardware), 4 (System software: RTOS, middleware, ...), 6 (Application mapping), 7 (Optimization), 5 (Evaluation & validation: energy, cost, performance, ...), and 8 (Test); a design repository connects the design steps. Numbers denote the sequence of chapters. Appendix: Standard Optimization Techniques.

Integer linear programming models. Ingredients: a cost function and constraints, both involving linear expressions of integer variables from a set X. Cost function: C = Σ_{x_i ∈ X} a_i x_i with a_i ∈ ℝ, x_i ∈ ℕ (1). Constraints: ∀j ∈ J: Σ_{x_i ∈ X} b_{i,j} x_i ≥ c_j with b_{i,j}, c_j ∈ ℝ (2). Def.: The problem of minimizing (1) subject to the constraints (2) is called an integer linear programming (ILP) problem. If all x_i are constrained to be either 0 or 1, the ILP problem is said to be a 0/1 integer linear programming problem.

Example: minimize C = 5x_1 + 6x_2 + 4x_3 subject to x_1 + x_2 + x_3 ≥ 2, with x_1, x_2, x_3 ∈ {0, 1}. Optimal: C = 9 (x_1 = x_3 = 1, x_2 = 0).
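For a 0/1 ILP this small, the optimum can be verified by exhaustive enumeration; this is only a sketch to make the model concrete, not how real solvers work (they use branch-and-bound and relaxations).

```python
# Brute-force check of the 0/1 ILP example: minimize 5x1 + 6x2 + 4x3
# subject to x1 + x2 + x3 >= 2, with xi in {0, 1}. With three binary
# variables there are only 2^3 = 8 candidate assignments.
from itertools import product

def solve_01_ilp():
    best = None
    for x1, x2, x3 in product((0, 1), repeat=3):
        if x1 + x2 + x3 >= 2:              # constraint (2)
            c = 5 * x1 + 6 * x2 + 4 * x3   # cost function (1)
            if best is None or c < best[0]:
                best = (c, (x1, x2, x3))
    return best

print(solve_01_ilp())  # (9, (1, 0, 1)): pick x1 and x3
```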

Remarks on integer programming. Maximizing the cost function: just minimize C' = −C. Integer programming is NP-complete: running times depend exponentially on the problem size, but problems with more than 1000 variables are solvable with a good solver (depending on the size and structure of the problem). The case x_i ∈ ℝ is called linear programming (LP); it has polynomial complexity, and although most practical algorithms are exponential in the worst case, they are still faster than those for ILP problems. The case of some x_i ∈ ℝ and some x_i ∈ ℕ is called mixed integer-linear programming. ILP/LP models are a good starting point for modeling, even if heuristics are used in the end. Solvers: lp_solve (public), CPLEX (commercial), ...

Petri Net: matrix N describing all changes of markings. For a transition t, define

t(p) = −W(p, t) if p ∈ •t \ t•
t(p) = +W(t, p) if p ∈ t• \ •t
t(p) = −W(p, t) + W(t, p) if p ∈ •t ∩ t•
t(p) = 0 otherwise

Def.: The matrix N of net N is a mapping N: P × T → ℤ (integers) such that ∀t ∈ T: N(p, t) = t(p). The component in column t and row p indicates the change of the marking of place p if transition t fires. For pure nets, (N, M_0) is a complete representation of the net.

Petri Net: place invariants. A standardized technique for proving properties of system models. For any transition t_j ∈ T we are looking for sets R ⊆ P of places for which the accumulated marking is constant: Σ_{p ∈ R} t_j(p) = 0.

Characteristic vector. For the condition Σ_{p ∈ R} t_j(p) = 0, let c_R(p) = 1 if p ∈ R and c_R(p) = 0 if p ∉ R. Then 0 = Σ_{p ∈ R} t_j(p) = Σ_{p ∈ P} t_j(p) · c_R(p) = t_j · c_R (scalar product).

Condition for place invariants. Σ_{p ∈ R} t_j(p) = Σ_{p ∈ P} t_j(p) · c_R(p) = t_j · c_R = 0. The accumulated marking is constant for all transitions iff t_1 · c_R = ... = t_n · c_R = 0, which is equivalent to Nᵀ c_R = 0, where Nᵀ is the transpose of N.

More detailed view of the constraints: for places p_1, ..., p_m and transitions t_1, ..., t_n, the condition Nᵀ c_R = 0 spells out as Σ_{k=1}^{m} t_j(p_k) · c_R(p_k) = 0 for j = 1, ..., n. This is a system of linear equations; solution vectors must consist of zeros and ones.

Put things together: maximum place invariants. Maximize Σ_{p ∈ P} x_p such that Σ_{p ∈ P} t_j(p) · x_p = 0 for all t_j ∈ T, with x_p ∈ {0, 1} for all p ∈ P.
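A minimal sketch of this 0/1 program, solved by enumeration: we search for the 0/1 vector x with the largest support satisfying Nᵀx = 0, i.e. every transition leaves the accumulated marking of the selected places unchanged. The example net is hypothetical.

```python
# Brute-force maximum place invariant: enumerate all 0/1 vectors x and
# keep the largest-support x with N^T x = 0. Exponential in |P|, so only
# suitable for tiny nets; an ILP solver would be used in practice.
from itertools import product

def max_place_invariant(n_matrix):
    """n_matrix[p][t] = change of the marking of place p when t fires."""
    places = len(n_matrix)
    transitions = len(n_matrix[0])
    best = None
    for x in product((0, 1), repeat=places):
        if all(sum(n_matrix[p][t] * x[p] for p in range(places)) == 0
               for t in range(transitions)):
            if best is None or sum(x) > sum(best):
                best = x
    return best

# Toy net: t0 moves a token from p0 to p1, t1 moves it back.
N = [[-1, +1],   # row p0
     [+1, -1]]   # row p1
print(max_place_invariant(N))  # (1, 1): p0 + p1 carries a constant count
```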

Another example: the Knapsack problem. Example IP formulation: I wish to select items to put in my backpack. There are m items available; item i weighs w_i kg and has value v_i, and I can carry Q kg. Let x_i = 1 if I select item i and 0 otherwise. Then: maximize Σ_i x_i v_i subject to Σ_i x_i w_i ≤ Q, with x_i ∈ {0, 1}.
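Besides the IP formulation, 0/1 knapsack can be solved by the classic dynamic-programming recurrence over capacities (pseudopolynomial in Q). A minimal sketch with made-up weights and values:

```python
# 0/1 knapsack by dynamic programming: best[q] holds the maximum value
# achievable with capacity q after processing a prefix of the items.

def knapsack(values, weights, capacity):
    """Return the maximum total value achievable within the capacity."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for q in range(capacity, w - 1, -1):
            best[q] = max(best[q], best[q - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[1, 2, 3], capacity=5))
# prints 220: items 2 and 3 fill the capacity exactly
```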

A variant of the Knapsack problem. Given: a set of periodic tasks with implicit deadlines; task τ_i has period T_i. Options: execution without or with scratchpad memory (SPM). Without SPM: worst-case execution time C_{i,1}; with SPM: required scratchpad memory size m_i and worst-case execution time C_{i,2}. Utilizations: U_{i,1} = C_{i,1}/T_i, U_{i,2} = C_{i,2}/T_i. Objective: select the tasks to be put into the SPM so as to minimize the required SPM size while keeping the task set schedulable by EDF: minimize Σ_i x_i m_i subject to Σ_i (x_i U_{i,2} + (1 − x_i) U_{i,1}) ≤ 1, with x_i ∈ {0, 1}.
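This variant can also be checked by enumeration for small task sets. The sketch below uses the fact that an implicit-deadline task set is EDF-schedulable iff its total utilization is at most 1; all task parameters are invented for illustration.

```python
# SPM selection by enumeration: each task is either in the SPM
# (x_i = 1, utilization U_{i,2}, memory cost m_i) or not (x_i = 0,
# utilization U_{i,1}). Feasible iff total utilization <= 1 (EDF).
from itertools import product

def min_spm(tasks):
    """tasks: list of (m_i, U_i1, U_i2). Return minimal SPM size or None."""
    best = None
    for x in product((0, 1), repeat=len(tasks)):
        util = sum(u2 if xi else u1
                   for xi, (m, u1, u2) in zip(x, tasks))
        if util <= 1.0:
            size = sum(m for xi, (m, u1, u2) in zip(x, tasks) if xi)
            if best is None or size < best:
                best = size
    return best

tasks = [(4, 0.5, 0.3), (2, 0.4, 0.2), (8, 0.3, 0.1)]
print(min_spm(tasks))  # 2: putting only the second task into the SPM suffices
```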

Evolutionary Algorithms (1) Evolutionary Algorithms are based on the collective learning process within a population of individuals, each of which represents a search point in the space of potential solutions to a given problem. The population is arbitrarily initialized, and it evolves towards better and better regions of the search space by means of randomized processes of selection (which is deterministic in some algorithms), mutation, and recombination (which is completely omitted in some algorithmic realizations). [Bäck, Schwefel, 1993] - 28 -

Evolutionary Algorithms (2) The environment (given aim of the search) delivers a quality information (fitness value) of the search points, and the selection process favours those individuals of higher fitness to reproduce more often than worse individuals. The recombination mechanism allows the mixing of parental information while passing it to their descendants, and mutation introduces innovation into the population [Bäck, Schwefel, 1993] - 29 -

Evolutionary Algorithms. [Figure (Thiele): the principles of evolution — selection, cross-over, and mutation.]

An Evolutionary Algorithm in Action. [Figure (Thiele): a population in a two-objective space (maximize y2, minimize y1) approaching a hypothetical trade-off front.]


A Generic Multiobjective EA. [Figure (Thiele): the main loop — evaluate the population, update the archive with non-dominated solutions, truncate the archive, sample parents from the archive, and vary them to obtain the new population.]
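The loop in the figure (evaluate → update archive → sample → vary) can be sketched in a few lines. This is a minimal illustration assuming minimization of both objectives; the toy problem, encoding, and parameters are all made up, and a full EA would additionally truncate the archive and use recombination.

```python
# Minimal generic multiobjective EA: a population of candidate solutions,
# an archive of non-dominated objective vectors, and mutation as the only
# variation operator. The bi-objective toy problem trades f1 against f2.
import random

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and u != v

def evaluate(x):
    # Toy problem on integers 0..10: minimize f1 = x and f2 = 10 - x.
    return (x, 10 - x)

def moea(generations=30, pop_size=8, seed=1):
    rng = random.Random(seed)
    population = [rng.randint(0, 10) for _ in range(pop_size)]
    archive = {}  # solution -> objective vector, non-dominated only
    for _ in range(generations):
        # evaluate the population and update the archive
        for x in population:
            f = evaluate(x)
            if not any(dominates(g, f) for g in archive.values()):
                archive = {y: g for y, g in archive.items()
                           if not dominates(f, g)}
                archive[x] = f
        # sample parents from the archive and vary them by mutation;
        # the remainder of the population is random immigrants
        parents = rng.sample(list(archive), min(pop_size, len(archive)))
        population = ([min(10, max(0, p + rng.choice((-1, 1))))
                       for p in parents] +
                      [rng.randint(0, 10)
                       for _ in range(pop_size - len(parents))])
    return sorted(archive.values())

print(moea())  # archive of mutually non-dominated (f1, f2) points
```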


Summary. Integer (linear) programming: integer programming is NP-complete; linear programming is faster; both are a good starting point even if solutions are generated with different techniques. Simulated annealing: modeled after the cooling of liquids; overcomes local minima. Evolutionary algorithms: maintain a set of solutions; include selection, mutation, and recombination.

Simulated Annealing. A general method for solving combinatorial optimization problems, based on the model of slowly cooling crystal liquids. Some configuration is subject to changes. Special property of simulated annealing: changes leading to a poorer configuration (with respect to some cost function) are accepted with a certain probability. This probability is controlled by a temperature parameter: the probability is smaller for smaller temperatures.

Simulated Annealing Algorithm:

procedure SimulatedAnnealing;
var i, T: integer;
begin
  i := 0; T := MaxT;
  Configuration := <some initial configuration>;
  while not Terminate(i, T) do begin
    while InnerLoop do begin
      NewConfig := Variation(Configuration);
      Delta := Evaluation(NewConfig, Configuration);
      if Delta < 0 then Configuration := NewConfig
      else if SmallEnough(Delta, T, Random(0, 1)) then
        Configuration := NewConfig;
    end;
    T := NewT(i, T);
    i := i + 1;
  end;
end;

Explanation. Initially, some random initial configuration is created and the current temperature is set to a large value. Outer loop: the temperature is reduced in each iteration; the loop terminates if the temperature reaches its lower limit or the number of iterations reaches its upper limit. Inner loop: in each iteration, a new configuration is generated from the current configuration; it is accepted if its cost is at most the cost of the current configuration, and accepted with a temperature-dependent probability if its cost is higher.
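The loops just described translate directly into a runnable sketch. The geometric cooling schedule and the Metropolis acceptance rule exp(−Δ/T) are one common choice; the cost function and all parameters below are purely illustrative.

```python
# Simulated annealing on a toy 1-D cost function with several local
# minima: outer loop cools the temperature, inner loop explores at the
# current temperature, and worsening moves are accepted with
# probability e^(-delta / T).
import math
import random

def simulated_annealing(cost, start, t_max=10.0, t_min=1e-3,
                        alpha=0.95, inner_steps=50, seed=0):
    rng = random.Random(seed)
    config, temp = start, t_max
    while temp > t_min:                       # outer loop: cool down
        for _ in range(inner_steps):          # inner loop: explore
            new = config + rng.uniform(-1.0, 1.0)
            delta = cost(new) - cost(config)
            # accept improvements; accept worsenings probabilistically
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                config = new
        temp *= alpha                         # geometric cooling schedule
    return config

# Toy cost: a parabola with a superimposed ripple creating local minima.
f = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)
x = simulated_annealing(f, start=8.0)
```

Starting far from the optimum, the ripple traps pure greedy descent in a local minimum, while the probabilistic acceptance lets the search escape while the temperature is still high.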

Behavior for actual functions. [Figures: snapshots of a simulated-annealing run after 130 and 200 steps. Sources: people.equars.com/~marco/poli/phd/node57.html, http://foghorn.cadlab.lafayette.edu/cadapplets/fp/fpintro.html]

Performance. This class of algorithms has been shown to outperform others in certain cases [Wegener, 2005]. Simulated annealing demonstrated excellent results in the TimberWolf layout generation package [Sechen]. There are many other applications.