Computational Intelligence in Product-line Optimization



Simulations and Applications. Peter Kurz, Head of Research & Development, Applied Marketing Science, KANTAR TNS, June 2017, peter.kurz@tns-global.com. Portfolio optimization is an important topic in the context of new product development. In market research, discrete choice models are one of the main techniques for determining the preference for new products. This poster compares particle swarm optimization, genetic algorithms, ant colony optimization, simulated annealing, and the multi-verse optimizer with respect to their performance and their ability to find the optimal portfolio. The results are based on simulated datasets and one real empirical dataset collected in the German mobile phone market. All datasets have nested attribute structures, which results in high complexity. The number of possible concepts (the search space) varies between 4.2×10^4 and 8.3×10^42 for our simulated datasets and between 2.12×10^15 and 4.57×10^17 in our empirical data.

Motivation. Product-line optimization is one of the most important problems in marketing (Green/Krieger 1985). New product development is key for the profitability of a company (Hauser 2011). Globalization, fast technological development, and ever shorter product lifecycles put growing pressure on companies to improve their product lines (Tsafarikis et al. 2011). In new product development, 65-79% of all new products brought to the market fail (Herrmann 2006, Tacke et al. 2014).

Aims and Resulting Models. Product line: a bundle of at least two products of the same style that differ in at least one feature (one different attribute level). Product-line optimization problems based on discrete choice experiment data are NP-hard (Kohli/Krishnamurti 1989). Consequently, a polynomial algorithm for any NP-hard problem would yield polynomial algorithms for all problems in NP, which is unlikely, as many of them are considered hard. One normally ends up with millions of possible combinations of attribute levels, so the search space is too large for exhaustive product searches; brute-force techniques usually fail to find the optimum within an acceptable time.

Maximizing Market Shares

Maximizing Market Shares (2). Under the following restriction: the number of possible product lines is the binomial coefficient C(N, E) = N! / (E! (N − E)!) (N: number of all possible products; E: number of products within one product line). Example: products consisting of 6 attributes with 5 levels each (N = 5^6 = 15,625) and 5 products per product line give C(15625, 5) ≈ 7.75×10^18 possible product lines.
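As a quick check of these magnitudes, the count can be evaluated directly; a minimal sketch in Python (the attribute and line sizes are those of the example above):

```python
import math

# One product = one level chosen for each of 6 attributes with 5 levels each
n_products = 5 ** 6                 # 15,625 possible products
n_lines = math.comb(n_products, 5)  # choose 5 products per product line
print(f"{n_lines:.2e}")             # ~7.75e+18 possible product lines
```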

Selected Algorithms Inspired by Nature. Gradient-free optimization algorithms mimic mechanisms observed in nature or use heuristics. Gradient-free methods are not guaranteed to find the true global optimum, but they are able to find many good solutions. Genetic algorithm (Holland 1975): simulation of selection (survival of the fittest), recombination (crossover), and mutation (variation), as in evolution. Particle swarm optimization (Kennedy/Eberhart 1995): a stochastic, population-based algorithm that applies swarm intelligence (simulating fish or bird swarms). Ant colony optimization (Colorni et al. 1991): motivated by the search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. Simulated annealing (Kirkpatrick et al. 1983): inspired by annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. Multi-verse optimizer (Mirjalili et al. 2015): its main inspirations are three concepts from cosmology: white holes, black holes, and wormholes.

Genetic Algorithm. A typical genetic algorithm requires a genetic representation of the solution domain, a fitness function to evaluate it, and a termination criterion. Each chromosome represents one solution and is constructed from genes (attributes and levels). Chromosomes are crossbred with a certain probability, and each chromosome mutates with a certain probability. The higher the objective-function value of a chromosome, the higher its probability of reproducing. Balakrishnan/Jacob (1996), Tsafarakis et al. (2011)
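A minimal sketch of such a genetic algorithm in Python; the encoding (one level index per attribute) and the names fitness, n_attributes, and n_levels are illustrative assumptions, not details from the poster:

```python
import random

def genetic_search(fitness, n_attributes=6, n_levels=5, pop_size=50,
                   generations=200, p_crossover=0.8, p_mutation=0.05):
    """GA sketch: a chromosome is one product, one level index per attribute."""
    pop = [[random.randrange(n_levels) for _ in range(n_attributes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional selection: assumes fitness() returns a
        # positive value, e.g. a simulated market share.
        weights = [fitness(c) for c in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < p_crossover:        # one-point crossover
                cut = random.randrange(1, n_attributes)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [a[:], b[:]]
        for c in nxt:                                # per-gene mutation
            for i in range(n_attributes):
                if random.random() < p_mutation:
                    c[i] = random.randrange(n_levels)
        pop = nxt
    return max(pop, key=fitness)
```

With a fitness function that returns the simulated market share of a concept, genetic_search returns the best product found over all generations.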

Particle Swarm Optimization. Each agent (particle) represents a design point and moves through n-dimensional space looking for the best solution; each agent represents one solution of the problem. Each agent adjusts its movement according to its own experience and to social interaction. The speed and position of an agent within the solution space determine the quality of its solution. Each particle updates its velocity and position using its own best solution and the best overall solution of the swarm:

v_id(t+1) = v_id(t) + c1·rand1·(p_id − s_id(t)) + c2·rand2·(p_gd − s_id(t))
s_id(t+1) = s_id(t) + v_id(t+1)

where i indexes the particle, d the dimension, and t the iteration. Tsafarakis et al. (2011)
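These two update rules translate almost line by line into code; a minimal continuous-space sketch in Python (the maximized fitness function is an assumed placeholder, and a discrete product encoding would additionally need rounding or a binary PSO variant):

```python
import random

def pso(fitness, dim, n_particles=30, iters=200, c1=2.0, c2=2.0):
    """Sketch of the v_id / s_id updates above; maximizes fitness()."""
    s = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p_best = [row[:] for row in s]            # personal best positions p_id
    g_best = max(s, key=fitness)[:]           # swarm-best position p_gd
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):              # velocity update, then position
                v[i][d] += (c1 * random.random() * (p_best[i][d] - s[i][d])
                            + c2 * random.random() * (g_best[d] - s[i][d]))
                s[i][d] += v[i][d]
            if fitness(s[i]) > fitness(p_best[i]):
                p_best[i] = s[i][:]
                if fitness(s[i]) > fitness(g_best):
                    g_best = s[i][:]
    return g_best
```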

Ant Colony Optimization. Ant behavior was the inspiration for this optimization technique: the dynamics of the pheromone trail from the ant colony to the food source, and the path of a single ant through the decision network. (Source: Albritton/McMullen 2007)

Ant Colony Optimization (2). With an ACO algorithm, the shortest path in a graph between two points A and B is built from a combination of several paths. It is not easy to give a precise definition of which algorithms are or are not ant colony algorithms, because the definition varies with authors and uses. Each solution is represented by an ant moving in the search space. Ants mark the best solutions with pheromone and take account of previous markings to optimize their search. They can be seen as probabilistic multi-agent algorithms that use a probability distribution to make the transition between iterations. In their versions for combinatorial problems, they construct solutions iteratively. The best solution may eventually be found even though no single ant traverses it (see graphic).
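A minimal sketch of these pheromone mechanics in Python; treating each attribute as one decision with a pheromone weight per level is an illustrative simplification of the decision-network formulation in Albritton/McMullen (2007), and all names and parameter values are assumptions:

```python
import random

def aco(fitness, n_attributes=6, n_levels=5, n_ants=20, iters=100,
        evaporation=0.1, deposit=1.0):
    """Each ant builds a product level by level, guided by pheromone weights."""
    tau = [[1.0] * n_levels for _ in range(n_attributes)]  # pheromone matrix
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        solutions = []
        for _ in range(n_ants):
            # Probabilistic transition: each level is drawn with probability
            # proportional to its pheromone weight.
            ant = [random.choices(range(n_levels), weights=tau[a])[0]
                   for a in range(n_attributes)]
            solutions.append((fitness(ant), ant))
        for a in range(n_attributes):                      # evaporation
            tau[a] = [t * (1 - evaporation) for t in tau[a]]
        f, ant = max(solutions)                            # reinforce best ant
        for a, level in enumerate(ant):
            tau[a][level] += deposit
        if f > best_fit:
            best, best_fit = ant, f
    return best
```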

Simulated Annealing. A metaheuristic, probabilistic technique for approximating the global optimum of a given function in a large search space. Simulated annealing maps the minimization of a function with a large number of variables onto the statistical mechanics of equilibration (annealing) of a mathematically equivalent artificial multi-atomic system. The slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored. It minimizes an energy function E over a cooling schedule with temperatures T: all changes of level that improve the energy function are accepted, and solutions with a worse energy level are also accepted at random, with probability P(accept change) = exp(−ΔE / T). Belloni et al. (2008)
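A minimal sketch of this acceptance rule under a geometric cooling schedule; the neighbor move (changing one attribute level) and all parameter values are illustrative assumptions:

```python
import math
import random

def simulated_annealing(energy, n_attributes=6, n_levels=5,
                        t_start=1.0, t_end=1e-3, alpha=0.995):
    """Minimizes energy(); worse moves accepted with probability exp(-dE/T)."""
    x = [random.randrange(n_levels) for _ in range(n_attributes)]
    t = t_start
    while t > t_end:
        y = x[:]
        y[random.randrange(n_attributes)] = random.randrange(n_levels)  # neighbor
        d_e = energy(y) - energy(x)
        if d_e <= 0 or random.random() < math.exp(-d_e / t):
            x = y                     # accept all improvements, some worse moves
        t *= alpha                    # geometric cooling schedule
    return x
```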

Multiverse Optimizer (1). The multi-verse theory is well known among physicists. It holds that there has been more than one big bang, and that each big bang causes the birth of a universe. The term multi-verse stands in opposition to universe and refers to the existence of other universes in addition to the one we all live in. In the multi-verse theory, multiple universes interact and might even collide with each other, and different physical laws might hold in each universe. Three main concepts of the multi-verse theory were chosen as the inspiration for the MVO algorithm: white holes, black holes, and wormholes.

Multiverse Optimizer (2). A white hole has never been seen in our universe, but physicists think that the big bang can be considered a white hole and may be the main component of the birth of a universe. The cyclic model of multi-verse theory also argues that big bangs/white holes are created where collisions between parallel universes occur. Black holes, which have been observed frequently, behave in complete contrast to white holes: they attract everything, including light beams, with their extremely high gravitational force. Wormholes are holes that connect different parts of a universe; in the multi-verse theory they act as time/space travel tunnels through which objects can travel instantly between any corners of a universe (or even from one universe to another). Every universe has an inflation rate that causes its expansion through space. The inflation speed of a universe is very important for forming stars, planets, asteroids, black holes, white holes, wormholes, physical laws, and suitability for life.

Multiverse Optimizer (3). During optimization, the following rules are applied to the universes of MVO: 1. The higher the inflation rate, the higher the probability of having white holes. 2. The higher the inflation rate, the lower the probability of having black holes. 3. Universes with a higher inflation rate tend to send objects through white holes. 4. Universes with a lower inflation rate tend to receive more objects through black holes. 5. The objects in all universes may move randomly towards the best universe via wormholes, regardless of the inflation rate. The MVO algorithm has two main factors: the wormhole existence probability (WEP) and the travelling distance rate (TDR). WEP defines the probability that wormholes exist in universes; it is increased linearly over the iterations in order to emphasize exploitation as the optimization progresses. TDR defines the variation by which an object can be teleported by a wormhole around the best universe obtained so far. In contrast to WEP, TDR is decreased over the iterations, giving an ever more precise exploitation/local search around the best universe obtained so far. (Source: Mirjalili et al. 2015)
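The two schedules can be written down compactly; a minimal sketch following the formulas in Mirjalili et al. (2015), where the bounds wep_min = 0.2 and wep_max = 1.0 and the exploitation accuracy exponent p = 6 are defaults taken from that paper (assumptions, not values from this poster):

```python
def mvo_schedules(iteration, max_iters, wep_min=0.2, wep_max=1.0, p=6):
    """WEP grows linearly over the iterations; TDR shrinks, so wormhole
    jumps land ever closer to the best universe found so far."""
    wep = wep_min + iteration * (wep_max - wep_min) / max_iters
    tdr = 1 - (iteration ** (1 / p)) / (max_iters ** (1 / p))
    return wep, tdr
```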

Application Example

Simulation Study (1). Simulation of part-worth utilities from different discrete choice experiments, based on normal distributions. Factors of the experiment: see Table "Factors and Levels of the Simulation Study". Generation of a random status-quo market. Number of datasets used for the simulation: 20 × 3 × 3 × 2 × 2 = 720. Each of the CI approaches generates 20 solutions: 720 × 20 = 14,400 comparisons.

Simulation Study (2). The table shows three blocks, each containing 12 datasets (six with 100 respondents and six with 500 respondents). Each dataset was replicated 20 times (runs), and each algorithm provides 20 solutions (iterations). best: the best solution of each algorithm over 20 runs and 20 iterations for each of the 12 datasets, divided by the globally optimal solution (known from the simulation). avg: the arithmetic mean of the solutions over 20 runs × 20 iterations for the 12 datasets per block, divided by the number of datasets. time: the average computational time each algorithm needs for one run of one iteration.

Simulation Study (3). Results: Simulated annealing (SA) performed best in our simulation study, followed by the multiverse optimizer (MVO) and the genetic algorithm (GA). On our small problems (< 3.8×10^10) all algorithms were able to find a globally optimal solution. On our large problems (> 1.9×10^23), ant colony optimization (ACO) and particle swarm optimization (PSO) failed to find acceptable solutions close to the global optimum. The computational times for ACO and PSO were much larger than for the other algorithms. The fairly simple SA algorithm performed best both in computational time and in finding the globally optimal solution.

Conjoint Study: Mobile Phone Tariffs. Study overview: conducted in 2014 by KANTAR TNS/Lightspeed Germany; 3,677 respondents; 24 attributes (94 levels); 2,256 parameters per respondent; 11 attributes for the pre-paid tariff, 12 attributes for the post-paid tariff, 1 ASC (pre-/post-paid); 2,941 respondents prefer pre-paid, 1,295 respondents prefer post-paid. Attribute structure: Pre-paid: duration, data speed, SMS, data tariff, costs (≥500MB), costs (<500MB), flatrate, provider, calls to fixed line, anytime minutes, calls off-net. Post-paid: duration, data speed, SMS, data, costs (≥500MB), costs (<500MB), flatrate, provider, calls to fixed line, anytime minutes, calls off-net, calls on-net.

Structure of the Product-Portfolio Optimization. Two different branches of the market: pre-paid and post-paid. The client searches for the product portfolio with the highest market share; the portfolio should consist of three pre-paid and three post-paid tariff concepts. Which product line maximizes the client's market share? Number of possible solutions: pre-paid market 2.12×10^15, post-paid market 4.57×10^17. Status-quo market: 7 competitors with 3 post-paid and 3 pre-paid tariffs each; the actual market (the starting point) consists of 48 concepts. Competitive products were obtained by generating 600 random product concepts and keeping the 42 best.

Results

          post-paid                          pre-paid
          best in %  avg in %  time in s     best in %  avg in %  time in s
GA        12.9696    12.9696     26.85       13.0321    13.0321     14.65
PSO       12.9696    12.9508    297.90       13.0253    12.9932    165.20
ACO       12.9680    12.8993    359.90       13.0242    12.9195    180.15
SA        12.9696    12.9505     29.85       13.0163    12.9841     13.15
MVO       13.0142    12.9994    370.80       13.0422    13.0420    195.85

Our empirical data show a completely different picture than the simulated data; the complex structure of the choice model seems to stress the algorithms more. The clear winner is the newly developed multiverse optimizer (MVO), with the highest market shares over all runs and all iterations. Its disadvantage is a computational time 14 to 15 times longer than that of the fastest algorithm. Since the global optimum is unknown in a real study, performance is measured by the highest market share for the client's portfolio.

Findings. MVO performed very well over the 20 iterations for each portfolio; the MVO algorithm failed only once (200MB data volume instead of 300MB). The four other algorithms show a more diversified picture. Differences between the portfolios are related to two attributes: data volume (200MB vs. 300MB or 1GB) and contract duration (12 months vs. 24 months). Another difference that occurs frequently relates to the pre-paid data speed attribute (7.2 Mbit/s instead of 19.4 Mbit/s). The results are largely the same, but we can show that at least four of the five algorithms get stuck in local optima. Of course we cannot know whether the best algorithm (MVO) really reached the global optimum, but MVO performed best in our computations.

Conclusion. Portfolio optimization is a rather complex problem, and we can show that it is hard to decide whether we have really solved the problem or ended up in a local optimum (owing to the nature of gradient-free algorithms). Most of the algorithms have tuning parameters that must be set sensibly, and one can never be sure whether one of the algorithms would perform better with different parameters. In the case of MVO it is important to set the wormhole existence probability (WEP) and the travelling distance rate (TDR) correctly to reach good results. Compared to the four other tested algorithms MVO needs more computational time, which can be a disadvantage, especially for large problems. On our simulated data all five algorithms perform well. On the complex, alternative-specific structure of the empirical study, MVO performed better than the other tested algorithms. This could be a result of the wormholes that suddenly appear during the iterations and open a tunnel from one branch to another: MVO thereby gives slightly different concepts a better opportunity to switch from one branch of the tree structure to the other. Further research is needed to see whether these findings hold under different conditions of complexity and for other empirical data.