Elliptic entropy of uncertain random variables


Lin Chen^a, Zhiyong Li^a, Isnaini Rosyida^b

^a College of Management and Economics, Tianjin University, Tianjin 300072, China
^b Department of Mathematics, Universitas Negeri Semarang, Semarang 50229, Indonesia

Abstract: Entropies of uncertain random variables are introduced to provide a quantitative measurement of uncertainty. In chance theory, a concept of entropy for uncertain random variables has been defined using the logarithm function. However, that entropy fails to measure the degree of uncertainty of some uncertain random variables. To solve this problem, this paper first defines the elliptic entropy to characterize the uncertainty of uncertain random variables. Second, some properties of the elliptic entropy are discussed. Finally, triangular and radical entropies of uncertain random variables are investigated.

Keywords: Chance theory; uncertain random variable; entropy; elliptic entropy

1 Introduction

Entropy, as a measure of information quantity, was first proposed by Shannon [22] for a discrete random variable in the form of a logarithm in 1949. Then, Jaynes [10] proposed the maximum entropy principle, which is to choose, among the random variables with the same expected value and variance, the one with the maximum entropy. To deal with human uncertainty, Liu [14] founded uncertainty theory based on an uncertain measure that satisfies the normality, duality, subadditivity, and product axioms. Liu [15] proposed a concept of entropy for uncertain variables in the form of a logarithm function. Then, Dai and Chen [6] gave a formula to calculate the entropy of a function of uncertain variables, and Chen and Dai [4] proposed the maximum entropy principle for uncertain variables. After that, the properties of entropy for uncertain variables were further studied by Chen et al. [5], Ning et al. [21], Tang and Gao [24], and Yao et al. [25].
As systems become more complex, uncertainty and randomness often need to be considered simultaneously. In order to describe this phenomenon, chance theory was pioneered by Liu [16] via the concepts of uncertain random variable and chance measure. Some basic concepts of chance theory, such as chance distribution, expected value, and variance, were also proposed by Liu [16]. As an important contribution to chance theory, Liu [16] presented an operational law of uncertain random variables. After that, chance theory was developed steadily and applied widely. In 2015, Sheng et al. [23] provided a definition of entropy to characterize the uncertainty of uncertain random variables.

However, the concept of entropy for uncertain random variables proposed by Sheng et al. [23] has a drawback: it cannot answer the question of how much of the entropy of an uncertain random variable is associated with its uncertain variables. To solve this problem, Ahmadzade et al. [1] employed a new approach to define the concept of entropy for uncertain random variables. After that, Ahmadzade et al. [2] provided the quadratic entropy of uncertain random variables and verified its properties, such as translation invariance and positive linearity. In order to explore diverse definitions of entropy for uncertain random variables and to provide more mathematical tools to engineers, this paper provides a new type of entropy, called elliptic entropy, and presents some of its properties.

The remainder of this paper is organized as follows. Section 2 presents some basic concepts and theorems of uncertainty theory and chance theory. In Section 3, a definition of the elliptic entropy of uncertain random variables is proposed and some of its properties are presented. As supplements, other types of entropies are studied in Section 4. At the end of the paper, a brief summary is presented.

2 Preliminary

Liu [16] proposed chance theory, a mathematical methodology for modeling complex systems with both uncertainty and randomness. Ahmadzade et al. [2] obtained several formulas to calculate the moments of an uncertain random variable. After that, Ahmadzade et al. [3] studied some properties of uncertain random sequences. Gao and Sheng [7] studied the law of large numbers for uncertain random variables. Liu and Yao [20] proposed an uncertain random logic to deal with uncertain random knowledge. Besides, chance theory has been applied in many fields, such as uncertain random programming (Liu [17], Ke et al. [13], Ke et al. [12]), uncertain random systems (Gao et al. [8], Gao and Yao [9]), uncertain random risk analysis (Liu and Ralescu [18], Liu and Ralescu [19]), and uncertain random networks (Ke et al. [11]).

Assume that (Γ, L, M) is an uncertainty space and (Ω, A, Pr) is a probability space. The product (Γ, L, M) × (Ω, A, Pr) is called a chance space. Each element Θ ∈ L × A is called an event in the chance space. The chance measure of an event Θ is defined by Liu [16] as

Ch{Θ} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ} ≥ x} dx.

Liu [17] provided the following operational law to determine the chance distribution of an uncertain random variable. Assume that η_1, η_2, …, η_m are independent random variables with probability distributions Ψ_1, Ψ_2, …, Ψ_m, respectively, and assume that τ_1, τ_2, …, τ_n are uncertain variables (not necessarily independent). Then the uncertain random variable

ξ = f(η_1, η_2, …, η_m, τ_1, τ_2, …, τ_n)

has chance distribution

Φ(x) = ∫_{ℝ^m} F(x; y_1, y_2, …, y_m) dΨ_1(y_1) ⋯ dΨ_m(y_m),

where F(x; y_1, y_2, …, y_m) is the uncertainty distribution of f(y_1, y_2, …, y_m, τ_1, τ_2, …, τ_n) for any real numbers y_1, y_2, …, y_m.

3 Elliptic Entropy of Uncertain Random Variables

In chance theory, Sheng et al. [23] put forward the concept of entropy for uncertain random variables in 2015. However, as noted above, that concept cannot answer how much of the entropy of an uncertain random variable is associated with its uncertain variables. To solve this problem, Ahmadzade et al. [1] used a new approach to define the concept of entropy for uncertain random variables as follows.

Definition 3.1 (Ahmadzade et al. [1]) Suppose that η_1, η_2, …, η_m are independent random variables with probability distributions Ψ_1, Ψ_2, …, Ψ_m, respectively, and τ_1, τ_2, …, τ_n are uncertain variables. The entropy of the uncertain random variable ξ = f(η_1, …, η_m, τ_1, …, τ_n) is defined as

H[ξ] = ∫_{ℝ^m} ∫_{−∞}^{+∞} S(F(x; y_1, …, y_m)) dx dΨ_1(y_1) ⋯ dΨ_m(y_m),

where S(t) = −t ln t − (1 − t) ln(1 − t) and F(x; y_1, …, y_m) is the uncertainty distribution of the uncertain variable f(y_1, …, y_m, τ_1, …, τ_n) for any real numbers y_1, …, y_m.

Following Ahmadzade et al. [1], we define the elliptic entropy of an uncertain random variable as follows.

Definition 3.2 Suppose that η_1, η_2, …, η_m are independent random variables with probability distributions Ψ_1, Ψ_2, …, Ψ_m, respectively, and τ_1, τ_2, …, τ_n are uncertain variables. The elliptic entropy of the uncertain random variable ξ = f(η_1, …, η_m, τ_1, …, τ_n) is defined as

H[ξ] = ∫_{ℝ^m} ∫_{−∞}^{+∞} g(F(x; y_1, …, y_m)) dx dΨ_1(y_1) ⋯ dΨ_m(y_m),

where g(t) = 2k√(t(1 − t)) and F(x; y_1, …, y_m) is the uncertainty distribution of the uncertain variable f(y_1, …, y_m, τ_1, …, τ_n) for any real numbers y_1, …, y_m.
Note that g(t) = 2k√(t(1 − t)) is the upper half of an ellipse and is symmetric with respect to t = 0.5; it is strictly increasing on [0, 0.5] and strictly decreasing on [0.5, 1]. Moreover, k is the semi-axis of the ellipse and takes values in the interval (0, +∞). Besides, the ellipse is closely related to elliptic functions, elliptic integrals, and elliptic curves, which are widely applied in the information sciences. So elliptic entropy can provide a useful quantitative measurement of uncertainty.
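As a quick sanity check on Definition 3.2, consider the simplest case of a single linear uncertain variable with no random part. The example below (the choice of a linear distribution L(a, b), the parameter values, and the closed form are our own illustration, not part of the paper) evaluates the definitional integral numerically:

```python
import numpy as np

# Sanity check of Definition 3.2 on the simplest case (our own example,
# not from the paper): a single linear uncertain variable tau ~ L(a, b)
# with uncertainty distribution Phi(x) = (x - a)/(b - a) on [a, b] and
# no random part, so H[tau] = integral of g(Phi(x)) dx with
# g(t) = 2k * sqrt(t * (1 - t)).

def elliptic_entropy_linear(a, b, k, n=200_001):
    x = np.linspace(a, b, n)                      # g(Phi(x)) = 0 outside [a, b]
    phi = (x - a) / (b - a)                       # uncertainty distribution
    g = 2.0 * k * np.sqrt(phi * (1.0 - phi))      # upper half of an ellipse
    return np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(x))   # trapezoidal rule

a, b, k = 1.0, 5.0, 2.0
H_num = elliptic_entropy_linear(a, b, k)
# Substituting t = Phi(x) gives the closed form (our own calculation):
# H = 2k(b - a) * int_0^1 sqrt(t(1 - t)) dt = 2k(b - a) * pi/8.
H_closed = np.pi * k * (b - a) / 4.0
print(round(H_num, 4), round(H_closed, 4))        # both print 6.2832
```

The agreement suggests that, for a linear uncertain variable, the definitional integral reduces to πk(b − a)/4, growing linearly in both the semi-axis k and the spread b − a.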

Theorem 3.3 Let η_1, η_2, …, η_m be independent random variables with probability distributions Ψ_1, Ψ_2, …, Ψ_m, respectively, and let τ_1, τ_2, …, τ_n be independent uncertain variables. If f is a measurable function, then the uncertain random variable ξ = f(η_1, …, η_m, τ_1, …, τ_n) has elliptic entropy

H[ξ] = k ∫_{ℝ^m} ∫_0^1 F^{-1}(α; y_1, …, y_m) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) ⋯ dΨ_m(y_m),

where F^{-1}(α; y_1, …, y_m) is the inverse uncertainty distribution of f(y_1, …, y_m, τ_1, …, τ_n) for any real numbers y_1, …, y_m.

Proof. Let g(α) = 2k√(α(1 − α)). Then g is differentiable on (0, 1) with

g′(α) = k(1 − 2α)/√(α(1 − α)),

and g(0) = g(1) = 0. Since g(F(x; y_1, …, y_m)) = ∫_0^{F(x; y_1, …, y_m)} g′(α) dα = −∫_{F(x; y_1, …, y_m)}^1 g′(α) dα, we have

H[ξ] = ∫_{ℝ^m} ∫_{−∞}^{+∞} g(F(x; y_1, …, y_m)) dx dΨ_1(y_1) ⋯ dΨ_m(y_m)
     = ∫_{ℝ^m} ( ∫_{−∞}^0 ∫_0^{F(x; y_1, …, y_m)} g′(α) dα dx − ∫_0^{+∞} ∫_{F(x; y_1, …, y_m)}^1 g′(α) dα dx ) dΨ_1(y_1) ⋯ dΨ_m(y_m).

It follows from the Fubini theorem that

H[ξ] = ∫_{ℝ^m} ( −∫_0^{F(0; y_1, …, y_m)} F^{-1}(α; y_1, …, y_m) g′(α) dα − ∫_{F(0; y_1, …, y_m)}^1 F^{-1}(α; y_1, …, y_m) g′(α) dα ) dΨ_1(y_1) ⋯ dΨ_m(y_m)
     = −∫_{ℝ^m} ∫_0^1 F^{-1}(α; y_1, …, y_m) g′(α) dα dΨ_1(y_1) ⋯ dΨ_m(y_m)
     = k ∫_{ℝ^m} ∫_0^1 F^{-1}(α; y_1, …, y_m) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) ⋯ dΨ_m(y_m).

Thus the proof is finished.

Theorem 3.4 Let τ be an uncertain variable with uncertainty distribution Φ, and let η be a random variable with probability distribution Ψ. If ξ = τη, then H[ξ] = H[τ]E[η].

Proof. It is clear that F^{-1}(α; y) = Φ^{-1}(α) y. Therefore, by Theorem 3.3, we obtain

H[ξ] = k ∫_ℝ ∫_0^1 Φ^{-1}(α) y (2α − 1)/√(α(1 − α)) dα dΨ(y)
     = k ∫_0^1 Φ^{-1}(α) (2α − 1)/√(α(1 − α)) dα · ∫_ℝ y dΨ(y)
     = H[τ]E[η].
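Theorems 3.3 and 3.4 can be checked numerically. In the sketch below, all concrete choices (τ ~ L(1, 3) with inverse distribution Φ^{-1}(α) = 1 + 2α, η uniform on [1, 2], k = 1) and the substitution α = (1 + sin t)/2 are our own illustration, not the paper's:

```python
import numpy as np

# Numerical check of Theorems 3.3 and 3.4 (all concrete choices here are
# ours): tau ~ L(1, 3) with inverse distribution Phi^{-1}(alpha) = 1 + 2*alpha,
# eta ~ U(1, 2) with E[eta] = 1.5, xi = tau * eta, and k = 1.
# The substitution alpha = (1 + sin t)/2 turns the Theorem 3.3 weight
# (2*alpha - 1)/sqrt(alpha*(1 - alpha)) d(alpha) into sin(t) dt, which
# removes the integrable singularity at alpha = 0 and alpha = 1.

k = 1.0
t = np.linspace(-np.pi / 2, np.pi / 2, 20_001)

def entropy_from_inverse(inv):
    """k * int_0^1 inv(alpha) * (2a - 1)/sqrt(a(1 - a)) da via t-substitution."""
    alpha = (1.0 + np.sin(t)) / 2.0
    f = inv(alpha) * np.sin(t)
    return k * np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(t))

inv_tau = lambda a: 1.0 + 2.0 * a                 # inverse distribution of L(1, 3)
H_tau = entropy_from_inverse(inv_tau)             # entropy of tau alone

# For xi = tau * eta, F^{-1}(alpha; y) = y * Phi^{-1}(alpha) for y > 0, and
# the outer integral averages over Psi = U(1, 2) (trapezoid over y below).
ys = np.linspace(1.0, 2.0, 201)
H_y = np.array([entropy_from_inverse(lambda a, y=y: y * inv_tau(a)) for y in ys])
H_xi = np.sum((H_y[1:] + H_y[:-1]) / 2.0) / (len(ys) - 1)

print(round(H_tau, 4), round(H_xi, 4), round(1.5 * H_tau, 4))
# H_xi agrees with H[tau] * E[eta], as Theorem 3.4 predicts.
```

The sine substitution is a standard trick for Chebyshev-type weights: it turns the endpoint-singular dα-integral into a smooth one, so the plain trapezoidal rule converges quickly.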

Theorem 3.5 Let τ be an uncertain variable with uncertainty distribution Φ, and let η be a random variable with probability distribution Ψ. If ξ = η + τ, then H[ξ] = H[τ].

Proof. It is clear that F^{-1}(α; y) = Φ^{-1}(α) + y. Therefore, by Theorem 3.3, we obtain

H[ξ] = k ∫_ℝ ∫_0^1 (Φ^{-1}(α) + y)(2α − 1)/√(α(1 − α)) dα dΨ(y)
     = k ∫_0^1 Φ^{-1}(α)(2α − 1)/√(α(1 − α)) dα · ∫_ℝ dΨ(y) + k ∫_ℝ y dΨ(y) · ∫_0^1 (2α − 1)/√(α(1 − α)) dα
     = H[τ] + 0
     = H[τ],

where the last integral vanishes because (2α − 1)/√(α(1 − α)) is antisymmetric about α = 0.5. Thus the proof is complete.

Theorem 3.6 Let η_1, η_2, …, η_n be independent random variables with probability distributions Ψ_1, Ψ_2, …, Ψ_n, respectively, and let τ_1, τ_2, …, τ_n be independent uncertain variables. Also, suppose that ξ_1 = f_1(η_1, τ_1), ξ_2 = f_2(η_2, τ_2), …, ξ_n = f_n(η_n, τ_n). If f(x_1, x_2, …, x_n) is strictly increasing with respect to x_1, x_2, …, x_m and strictly decreasing with respect to x_{m+1}, x_{m+2}, …, x_n, then ξ = f(ξ_1, ξ_2, …, ξ_n) has elliptic entropy

H[ξ] = k ∫_{ℝ^n} ∫_0^1 f(F_1^{-1}(α; y_1), …, F_m^{-1}(α; y_m), F_{m+1}^{-1}(1 − α; y_{m+1}), …, F_n^{-1}(1 − α; y_n)) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) ⋯ dΨ_n(y_n),

where F_i^{-1}(α; y_i) is the inverse uncertainty distribution of the uncertain variable f_i(y_i, τ_i) for any real number y_i, i = 1, 2, …, n.

Proof. It follows immediately from the operational law and Theorem 3.3; we omit it.

Corollary 3.7 Let η_1, η_2, …, η_n be independent random variables, and let τ_1, τ_2, …, τ_n be independent uncertain variables. Also, suppose that ξ_1 = f_1(η_1, τ_1), ξ_2 = f_2(η_2, τ_2), …, ξ_n = f_n(η_n, τ_n). If f(x_1, x_2, …, x_n) is strictly increasing with respect to x_1, x_2, …, x_n, then ξ = f(ξ_1, ξ_2, …, ξ_n) has elliptic entropy

H[ξ] = k ∫_{ℝ^n} ∫_0^1 f(F_1^{-1}(α; y_1), F_2^{-1}(α; y_2), …, F_n^{-1}(α; y_n)) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) ⋯ dΨ_n(y_n).
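The translation invariance of Theorem 3.5 can also be confirmed straight from Definition 3.2, since the uncertainty distribution of y + τ is just Φ(x − y). The sketch below uses our own example values (τ ~ L(0, 2), η ~ U(0, 1), k = 1), not values from the paper:

```python
import numpy as np

# Check of Theorem 3.5 straight from Definition 3.2 (example values ours):
# tau ~ L(0, 2) with Phi(x) = x/2 on [0, 2], eta ~ U(0, 1), xi = eta + tau,
# k = 1.  The uncertainty distribution of y + tau is Phi(x - y), so the
# inner dx-integral is just a translated copy of the one for tau.

k = 1.0
g = lambda u: 2.0 * k * np.sqrt(u * (1.0 - u))    # elliptic kernel g(t)
Phi = lambda x: np.clip(x / 2.0, 0.0, 1.0)        # distribution of L(0, 2)

def trap(fx, x):
    return np.sum((fx[1:] + fx[:-1]) / 2.0 * np.diff(x))

x = np.linspace(-1.0, 4.0, 50_001)    # covers the support of Phi(x - y)
H_tau = trap(g(Phi(x)), x)            # H[tau]: Definition 3.2 with m = 0

ys = np.linspace(0.0, 1.0, 501)       # outer integral over Psi = U(0, 1)
H_y = np.array([trap(g(Phi(x - y)), x) for y in ys])
H_xi = trap(H_y, ys)

print(round(H_tau, 4), round(H_xi, 4))   # equal: adding eta leaves H unchanged
```

Every inner integral has the same value regardless of y, so averaging over Ψ changes nothing, which is exactly the content of Theorem 3.5.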

Corollary 3.8 Let η_1, η_2, …, η_n be independent random variables, and let τ_1, τ_2, …, τ_n be independent uncertain variables. Also, suppose that ξ_1 = f_1(η_1, τ_1), ξ_2 = f_2(η_2, τ_2), …, ξ_n = f_n(η_n, τ_n). If f(x_1, x_2, …, x_n) is strictly decreasing with respect to x_1, x_2, …, x_n, then ξ = f(ξ_1, ξ_2, …, ξ_n) has elliptic entropy

H[ξ] = k ∫_{ℝ^n} ∫_0^1 f(F_1^{-1}(1 − α; y_1), F_2^{-1}(1 − α; y_2), …, F_n^{-1}(1 − α; y_n)) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) ⋯ dΨ_n(y_n).

Theorem 3.9 Let η_1 and η_2 be random variables with probability distributions Ψ_1 and Ψ_2, respectively, and let τ_1 and τ_2 be uncertain variables with uncertainty distributions Φ_1 and Φ_2, respectively. If ξ_1 = η_1 + τ_1 and ξ_2 = η_2 + τ_2, then

H[ξ_1 ξ_2] = H[τ_1 τ_2] + H[τ_2]E[η_1] + H[τ_1]E[η_2].

Proof. It is clear that F_1^{-1}(α; y_1) = y_1 + Φ_1^{-1}(α) and F_2^{-1}(α; y_2) = y_2 + Φ_2^{-1}(α). By Theorem 3.6, we have

H[ξ_1 ξ_2] = k ∫_ℝ ∫_ℝ ∫_0^1 F_1^{-1}(α; y_1) F_2^{-1}(α; y_2) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) dΨ_2(y_2)
           = k ∫_ℝ ∫_ℝ ∫_0^1 (y_1 + Φ_1^{-1}(α))(y_2 + Φ_2^{-1}(α)) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) dΨ_2(y_2)
           = k ∫_0^1 Φ_1^{-1}(α) Φ_2^{-1}(α) (2α − 1)/√(α(1 − α)) dα
             + k ∫_0^1 Φ_2^{-1}(α) (2α − 1)/√(α(1 − α)) dα · ∫_ℝ y_1 dΨ_1(y_1)
             + k ∫_0^1 Φ_1^{-1}(α) (2α − 1)/√(α(1 − α)) dα · ∫_ℝ y_2 dΨ_2(y_2)
             + k ∫_ℝ y_1 dΨ_1(y_1) · ∫_ℝ y_2 dΨ_2(y_2) · ∫_0^1 (2α − 1)/√(α(1 − α)) dα
           = H[τ_1 τ_2] + H[τ_2]E[η_1] + H[τ_1]E[η_2],

where the last term vanishes because ∫_0^1 (2α − 1)/√(α(1 − α)) dα = 0.

Theorem 3.10 Let η_1 and η_2 be random variables with probability distributions Ψ_1 and Ψ_2, respectively, and let τ_1 and τ_2 be uncertain variables with uncertainty distributions Φ_1 and Φ_2, respectively. Also, suppose that ξ_1 = f_1(η_1, τ_1) and ξ_2 = f_2(η_2, τ_2). Then for any real numbers a and b, we have

H[aξ_1 + bξ_2] = |a| H[ξ_1] + |b| H[ξ_2].

Proof. We prove the theorem in three steps.

Step 1: We prove H[aξ_1] = |a| H[ξ_1]. If a > 0, then a f_1(τ_1, y_1) has inverse uncertainty distribution a F_1^{-1}(α; y_1), where F_1^{-1}(α; y_1) is the inverse uncertainty distribution of f_1(τ_1, y_1). It follows from Theorem 3.3 that

H[aξ_1] = a k ∫_ℝ ∫_0^1 F_1^{-1}(α; y_1) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) = a H[ξ_1].

If a < 0, then a f_1(τ_1, y_1) has inverse uncertainty distribution a F_1^{-1}(1 − α; y_1). It follows from Theorem 3.3 and the substitution α → 1 − α that

H[aξ_1] = a k ∫_ℝ ∫_0^1 F_1^{-1}(1 − α; y_1) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1)
        = −a k ∫_ℝ ∫_0^1 F_1^{-1}(α; y_1) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1)
        = −a H[ξ_1] = |a| H[ξ_1].

If a = 0, then we immediately have H[aξ_1] = 0 = |a| H[ξ_1]. Thus we always have H[aξ_1] = |a| H[ξ_1].

Step 2: We prove H[ξ_1 + ξ_2] = H[ξ_1] + H[ξ_2]. The inverse uncertainty distribution of f_1(τ_1, y_1) + f_2(τ_2, y_2) is

F^{-1}(α; y_1, y_2) = F_1^{-1}(α; y_1) + F_2^{-1}(α; y_2).

It follows from Theorem 3.6 that

H[ξ_1 + ξ_2] = k ∫_ℝ ∫_ℝ ∫_0^1 (F_1^{-1}(α; y_1) + F_2^{-1}(α; y_2)) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) dΨ_2(y_2)
             = k ∫_ℝ ∫_0^1 F_1^{-1}(α; y_1) (2α − 1)/√(α(1 − α)) dα dΨ_1(y_1) + k ∫_ℝ ∫_0^1 F_2^{-1}(α; y_2) (2α − 1)/√(α(1 − α)) dα dΨ_2(y_2)
             = H[ξ_1] + H[ξ_2].

Step 3: For any real numbers a and b, it follows from Steps 1 and 2 that

H[aξ_1 + bξ_2] = H[aξ_1] + H[bξ_2] = |a| H[ξ_1] + |b| H[ξ_2].

The theorem is proved.

4 Other Types of Entropies for Uncertain Random Variables

Definition 4.1 Suppose that η_1, η_2, …, η_m are independent random variables with probability distributions Ψ_1, Ψ_2, …, Ψ_m, respectively, and τ_1, τ_2, …, τ_n are uncertain variables. The triangular entropy of the uncertain random variable ξ = f(η_1, …, η_m, τ_1, …, τ_n) is defined as

H[ξ] = ∫_{ℝ^m} ∫_{−∞}^{+∞} S(F(x; y_1, …, y_m)) dx dΨ_1(y_1) ⋯ dΨ_m(y_m),

where S(t) = 1 − |1 − 2t| and F(x; y_1, …, y_m) is the uncertainty distribution of the uncertain variable f(y_1, …, y_m, τ_1, …, τ_n) for any real numbers y_1, …, y_m.

Definition 4.2 Suppose that η_1, η_2, …, η_m are independent random variables with probability distributions Ψ_1, Ψ_2, …, Ψ_m, respectively, and τ_1, τ_2, …, τ_n are uncertain variables. The radical entropy of the uncertain random variable ξ = f(η_1, …, η_m, τ_1, …, τ_n) is defined as

H[ξ] = ∫_{ℝ^m} ∫_{−∞}^{+∞} S(F(x; y_1, …, y_m)) dx dΨ_1(y_1) ⋯ dΨ_m(y_m),

where S(t) = k√(t(1 − t)) with k > 0, and F(x; y_1, …, y_m) is the uncertainty distribution of the uncertain variable f(y_1, …, y_m, τ_1, …, τ_n) for any real numbers y_1, …, y_m.

5 Conclusions

The concepts of entropy for uncertain random variables are important and necessary for measuring degrees of uncertainty. This paper put forward the elliptic entropy of uncertain random variables. We first introduced a definition of elliptic entropy for uncertain random variables; based on this definition, several properties were derived. In the future, we plan to further investigate other types of entropies of uncertain random variables, such as the triangular and radical entropies, and to study their mathematical properties and possible applications.

Acknowledgments

This work was supported by the Projects of the Humanity and Social Science Foundation of Ministry of Education of China (No. 13YJA6365) and the Key Project of Hubei Provincial Natural Science Foundation, China (No. 2015CFA144).

References

[1] H. Ahmadzade, R. Gao, M. Dehghan and Y. Sheng, Partial entropy of uncertain random variables, Journal of Intelligent and Fuzzy Systems, DOI: 10.3233/JIFS-161161.

[2] H. Ahmadzade, R. Gao and H. Zarei, Partial quadratic entropy of uncertain random variables, Journal of Uncertain Systems 10(4)(2016), 292-301.

[3] H. Ahmadzade, Y. Sheng and M. Esfahani, On the convergence of uncertain random sequences, Fuzzy Optimization and Decision Making, DOI: 10.1007/s10700-016-9242-z.

[4] X. Chen and W. Dai, Maximum entropy principle for uncertain variables, International Journal of Fuzzy Systems 13(3)(2011), 232-236.

[5] X. Chen, S. Kar and D. Ralescu, Cross-entropy measure of uncertain variables, Information Sciences 210(2012), 53-60.

[6] W. Dai and X. Chen, Entropy of function of uncertain variables, Mathematical and Computer Modelling 55(3-4)(2012), 754-760.

[7] R. Gao and Y. Sheng, Law of large numbers for uncertain random variables with different chance distributions, Journal of Intelligent and Fuzzy Systems 31(3)(2016), 1227-1234.

[8] R. Gao, Y. Sun and D. Ralescu, Order statistics of uncertain random variables with application to k-out-of-n system, Fuzzy Optimization and Decision Making, DOI: 10.1007/s10700-016-9245-9.

[9] R. Gao and K. Yao, Importance index of components in uncertain random systems, Knowledge-Based Systems 109(2016), 208-217.

[10] E. Jaynes, Information theory and statistical mechanics, Physical Review 106(4)(1957), 620-630.

[11] H. Ke, Uncertain random time-cost trade-off problem, Journal of Uncertainty Analysis and Applications 2(23)(2014), 1-10.

[12] H. Ke, H. Liu and G. Tian, An uncertain random programming model for project scheduling problem, International Journal of Intelligent Systems 30(1)(2015), 66-79.

[13] H. Ke, T. Su and Y. Ni, Uncertain random multilevel programming with application to product control problem, Soft Computing 19(6)(2015), 1739-1746.

[14] B. Liu, Uncertainty Theory, 2nd edn, Springer-Verlag, Berlin, 2007.

[15] B. Liu, Some research problems in uncertainty theory, Journal of Uncertain Systems 3(1)(2009), 3-10.

[16] Y. Liu, Uncertain random variables: a mixture of uncertainty and randomness, Soft Computing 17(4)(2013), 625-634.

[17] Y. Liu, Uncertain random programming with applications, Fuzzy Optimization and Decision Making 12(2)(2013), 153-169.

[18] Y. Liu and D. Ralescu, Risk index in uncertain random risk analysis, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 22(4)(2014), 491-504.

[19] Y. Liu and D. Ralescu, Value-at-risk in uncertain random risk analysis, Information Sciences 391(2017), 1-8.

[20] Y. Liu and K. Yao, Uncertain random logic and uncertain random entailment, Journal of Ambient Intelligence and Humanized Computing, DOI: 10.1007/s12652-017-0465-9.

[21] Y. Ning, H. Ke and Z. Fu, Triangular entropy of uncertain variables with application to portfolio selection, Soft Computing 19(8)(2015), 2203-2209.

[22] C. Shannon, The Mathematical Theory of Communication, The University of Illinois Press, Urbana, 1949.

[23] Y. Sheng, G. Shi and D. Ralescu, Entropy of uncertain random variables with application to minimum spanning tree problem, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, to be published.

[24] W. Tang and W. Gao, Triangular entropy of uncertain variables, Information: An International Interdisciplinary Journal 16(2)(2013), 1279-1282.

[25] K. Yao, J. Gao and W. Dai, Sine entropy for uncertain variable, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21(5)(2013), 743-753.