diff --git "a/GtAzT4oBgHgl3EQfHftK/content/tmp_files/load_file.txt" "b/GtAzT4oBgHgl3EQfHftK/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/GtAzT4oBgHgl3EQfHftK/content/tmp_files/load_file.txt" @@ -0,0 +1,1285 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf,len=1284 +page_content='Risk-Averse MDPs under Reward Ambiguity Haolin Ruan School of Data Science, City University of Hong Kong, Kowloon Tong, Hong Kong haolin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='ruan@my.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='cityu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='hk Zhi Chen Department of Management Sciences, College of Business, City University of Hong Kong, Kowloon Tong, Hong Kong zhi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='chen@cityu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='hk Chin Pang Ho School of Data Science, City University of Hong Kong, Kowloon Tong, Hong Kong clint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='ho@cityu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='hk We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' In particular, we show that that the return-risk model can also account for risk from uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level.' 
A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.

1. Introduction

Markov decision processes (MDPs) provide a powerful modeling framework for sequential decision-making problems and reinforcement learning in stochastic dynamic environments (Puterman 2014). Obtaining model parameters of MDPs that perfectly reflect the environment, however, has always been a challenge in practice, as these parameters are estimated from limited data that are potentially contaminated (Mannor et al. 2007). Moreover, these parameters, such as the transition kernel and the reward function, are often time-dependent or even uncertain, yet they are approximated as fixed values in an overly simplified setting (Mannor et al. 2016). Therefore, the output policies of MDPs are often disappointing in practice. Robust MDPs address the aforementioned issues of parameter ambiguity by allowing the unknown values of transition kernels and reward functions to lie in a given ambiguity set (Behzadian et al. 2021, Chen et al. 2019, Clement and Kroer 2021a, Delgado et al. 2016). Robust MDPs then seek policies that maximize the worst-case expected return over all transition kernels and reward functions in the ambiguity sets.
By specifying ambiguity sets that contain the unknown transition kernels with high confidence, the optimal policies of robust MDPs are robust to parameter ambiguity (Iyengar 2005). In this paper, we focus on the case where the reward function is ambiguous, which is sometimes referred to as imprecise-reward MDPs (Alizadeh et al. 2015, Regan and Boutilier 2010, 2011a,b, 2012). This particular setting is also closely related to imitation learning, which trains an agent to learn a certain behavior of an expert when only some demonstrated trajectories of the expert are available (Chen et al. 2020, Ho and Ermon 2016, Osa et al. 2018, Rashidinejad et al. 2021). When applying an inverse reinforcement learning approach to learn a reward function that completely represents the expert's preference (Brown et al. 2020, Choi and Kim 2012, Ng et al. 2000), the resulting policies, which suffer from reward ambiguity, may perform poorly in practice. To handle reward ambiguity, we utilize techniques from distributionally robust optimization (DRO) (Derman and Mannor 2020) and distributionally robust chance-constrained programming (Chen et al. 2007, Postek et al. 2018), assuming that the true reward distribution resides in an ambiguity set. This approach does not require the reward function to be precisely specified. Instead, only descriptions of common distributional information such as support, moments and shape are needed in the ambiguity set, and these are often much easier to obtain or estimate (Hanasusanto et al.
2015, 2017, Zymler et al. 2013). In this paper, we consider a Wasserstein ambiguity set for our distributionally robust models, as in Abdullah et al. (2019), Calafiore and Ghaoui (2006), Xie (2021). Unlike phi-divergence ambiguity sets, which may contain overly extreme member distributions, Wasserstein sets incorporate the closeness between points in the support set, so their member distributions tend to be more reasonable (Gao and Kleywegt 2022); on the other hand, Wasserstein sets are often a better choice than moment-based ambiguity sets when the number of samples is too small to obtain a reliable estimate of the moments (Yang 2020). We choose Wasserstein sets for these reasons, although other types of ambiguity sets, such as nested ambiguity sets (Xu and Mannor 2010, 2012) and ambiguity sets based on the Prohorov metric (Erdoğan and Iyengar 2006), are also considered in the literature. For our distributionally robust chance-constrained MDPs, we furthermore show their equivalence with the nominal counterparts at an adjusted risk level. To the best of our knowledge, this is the first result in MDPs that establishes the mutual transformation between distributional ambiguity and risk. Our return-risk model (RR) is a risk-averse MDP model that not only takes into account reward ambiguity, but also considers both the average and the risk of the return. MDPs that optimize a risk measure of the return instead of its expectation are called risk-aware MDPs (also called risk-sensitive or risk-averse MDPs) (Ahmadi et al. 2021, Bäuerle and Rieder 2017, Carpin et al. 2016, Haskell and Jain 2015, Huang and Haskell 2017).
In risk-aware optimization, the objective function is taken to be a risk measure, such as value-at-risk (VaR) (Delage and Mannor 2007, 2010, Gilbert et al. 2017), conditional value-at-risk (CVaR) (Bäuerle and Ott 2011, Chow et al. 2017, Huang and Guo 2016) and other spectral risk measures (Bäuerle and Glauner 2021), or a variant of expected utility (Bernard et al. 2022, Jaimungal et al. 2022, Pflug and Wozabal 2007). Among these risk measures, VaR and CVaR are arguably the most popular ones and have attracted the attention of many researchers (Bäuerle and Ott 2011, Chow et al. 2017, Delage and Mannor 2007, 2010, Gilbert et al. 2017, Huang and Guo 2016). By using CVaR, one aims to give a precise depiction of the extreme tail of the distribution of the uncertain rewards, whereas VaR does not reflect the extreme scenarios beyond VaR. It is well known that CVaR is a coherent risk measure, which can be efficiently optimized by convex optimization tools (Chen and Xie 2021); in contrast, VaR is a more challenging risk measure because it is not coherent. One remarkable advantage of VaR is the stability of its estimation (especially under fat-tailed reward distributions (Sarykalin et al. 2008)), which is particularly important in data-driven settings where the number of samples is limited and decision makers evaluate models based on their out-of-sample performance (Bertsimas and Thiele 2006, van de Berg et al. 2022, Zheng et al. 2016).
To demonstrate, we provide an example in which we consider a one-step MDP with only one state s and two actions a_1 and a_2 (Sutton and Barto 2018). In this one-step MDP, the decision maker makes only one decision in each episode, and she aims to maximize her VaR/CVaR of rewards over these episodes. We consider uncertain rewards r̃_{s,a_1} ∼ P_t-dist and r̃_{s,a_2} = r̃_{s,a_1} + ρ|s̄|, where P_t-dist is a Student's t-distribution whose degrees of freedom δ we vary over {2, 3, 4}. We set the shift ratios ρ ∈ {0.05i}_{i∈[5]}, and for testing the estimation accuracy with respect to VaR (resp., CVaR), where we choose the risk threshold 10%, we set the shift quantity s̄ to P_t-dist-VaR_0.1[r̃_{s,a_1}] (resp., P_t-dist-CVaR_0.1[r̃_{s,a_1}]); both risk measures can be efficiently calculated (see Appendix B for more details). We evaluate the decision maker's accuracy rate as the proportion of testing samples in which she chooses the action with the higher VaR/CVaR of rewards (i.e., action a_2); for each pair of accuracy rate and shift ratio, following Yamai et al. (2002), 1000 random reward samples for each state-action pair are available to the decision maker, and we test her accuracy rate on 10000 testing samples.
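To make the setup concrete, the following is a minimal sketch of this estimation experiment, assuming SciPy's Student's t sampler and plain empirical VaR/CVaR estimators; the helper names (var, cvar, accuracy, s_bar) are ours, and the demo loop uses fewer trials than the 10000 testing samples described above purely to keep the run short.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def var(sample, eps=0.1):
    # Empirical VaR at threshold eps: the eps-quantile of the reward (lower tail).
    return np.quantile(sample, eps)

def cvar(sample, eps=0.1):
    # Empirical CVaR at threshold eps: the average of the worst eps-tail.
    v = var(sample, eps)
    return sample[sample <= v].mean()

def accuracy(delta, rho, measure, n_train=1000, n_trials=10000, eps=0.1):
    t = stats.t(df=delta)
    if measure == "var":
        s_bar, crit = t.ppf(eps), var
    else:
        # Lower-tail CVaR of the t-distribution: E[X | X <= VaR_eps(X)].
        s_bar, crit = t.expect(lambda x: x, ub=t.ppf(eps)) / eps, cvar
    correct = 0
    for _ in range(n_trials):
        r1 = t.rvs(n_train, random_state=rng)                      # observed rewards of a1
        r2 = t.rvs(n_train, random_state=rng) + rho * abs(s_bar)   # observed rewards of a2
        correct += crit(r2, eps) > crit(r1, eps)                   # a2 is the correct choice
    return correct / n_trials

for rho in [0.05 * i for i in range(1, 6)]:
    print(rho, accuracy(3, rho, "var", n_trials=2000), accuracy(3, rho, "cvar", n_trials=2000))
```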
As illustrated in Figure 1, the accuracy rate increases with the shift ratio ρ. As δ decreases, the reward distribution becomes more fat-tailed, and the accuracy rate of VaR is remarkably higher than that of CVaR, which indicates that the statistical inference on VaR would be more accurate than that on CVaR. Therefore, VaR may be a preferable choice when only small sample sets are available.

Figure 1. The accuracy rates of the decision maker choosing the correct action (so that the VaR/CVaR of her rewards is maximized): δ = 4 (left), δ = 3 (middle) and δ = 2 (right).

Our return-risk model is motivated by the soft-robust criterion/model, which optimizes a convex combination of the mean and a robust performance in the optimization literature (Ben-Tal et al. 2010). MDPs with soft-robustness are also popular in recent years, where decision makers aim to maximize a weighted average of the mean and percentile performances (Brown et al. 2020, Lobo et al. 2020). Unlike these existing soft-robust MDPs, however, the proposed return-risk model is fundamentally different in two aspects: first, these existing soft-robust models have no consideration for reward ambiguity, while we utilize distributional robustness to account for reward ambiguity, by which we can hedge against the most adversarial realization of the reward distribution within the ambiguity set; our model is thus more robust to reward uncertainty (Chen et al.
2019, Xu and Mannor 2010); second, we choose VaR as the risk measure, which has a direct interpretation in terms of percentile performance and, as illustrated above, tends to be more advantageous in data-driven optimization. Our work concentrates on the model-based setting, where our proposed models are motivated by the classical (dual formulation of) nominal MDPs (Puterman 2014) and the chance-constrained MDPs (Delage and Mannor 2010). It is worth noting that, beyond the model-based setting, there is other inspiring and innovative research on robust reinforcement learning, such as robust TDC algorithms and robust Q-learning (Roy et al. 2017, Wang and Zou 2021), robust policy gradient (Wang and Zou 2022), least squares policy iteration (Lagoudakis and Parr 2003) and sample complexity analysis (Panaganti and Kalathil 2022). Note that, although model-free reinforcement learning can be used to learn satisfactory policies in complex environments, the requirement of large amounts of interaction with the environment may render the learning process slow (Kaiser et al. 2019), while high sample efficiency is one strong advantage of model-based learning (Sutton and Barto 2018). We also note that MDPs with transition-kernel ambiguity are another active research line where distributional robustness is widely employed (Clement and Kroer 2021b, Shapiro 2016, 2021, Xu and Mannor 2012). We summarize our contributions as follows (and we also compare our contributions to those of related works in Table 2 in Appendix I).
(i) We show that the distributionally robust model of optimizing expected rewards can be reformulated as a convex conic program, which is equivalent to the nominal MDP with a convex regularization in the objective function.
(ii) For distributionally robust chance-constrained MDPs (DCC), we show that they can be reformulated as nominal chance-constrained MDPs at adjusted risk levels.
This observation bridges the gap between risk and parameter ambiguity.
(iii) Combining the proposed models in (i) and (ii), we propose the return-risk MDP that maximizes the weighted average of the expectation and the VaR of reward (both with distributional robustness to reward uncertainty), which is flexible and can perform well under the criteria of mean and percentile returns.
(iv) When considering only deterministic policies, we show that our return-risk model can also account for risk from an uncertain transition kernel, and we derive its equivalent reformulation as a mixed-integer second-order cone program (MISOCP).
(v) To solve the proposed return-risk model, we design a first-order method that is more scalable than the MOSEK solver and is thus faster on large-scale problems.
(vi) In the simulation and empirical experiments, we adopt a data-driven setting, where the decision maker aims at maximizing the expectation and the VaR of the random reward. We compare the performances of distributionally robust MDPs (DRMDPs), DCC, RR, robust MDPs (RMDPs) (Delage and Mannor 2010) and BROIL (Brown et al. 2020), and the results show that RR performs the best under both the expectation and different VaRs (with risk thresholds 5%, 10% and 15%), which showcases its advantages and its adjustability to decision makers' changing preferences between return and risk.
The remainder of this paper is organized as follows. We introduce the background in Section 2. In Sections 3 and 4, we study DRMDPs and the DCC model, respectively, and we derive their tractable reformulations. Combining these proposed models, we propose the RR model in Section 5. The designed first-order algorithm for the RR model is detailed in Section 6.
We compare the performances of DRMDP, DCC, RR, RMDP and BROIL, and demonstrate the advantage of our proposed algorithm in Section 7. Conclusions are drawn in Section 8.

2. Background

We consider an infinite-horizon MDP with a finite state space S = {1, …, S} and a finite action space A = {1, …, A}. Let P ∈ R^{S×A×S} be the transition probability kernel, where p_{s,a,s′} denotes the probability of transitioning to state s′ ∈ S when action a ∈ A is chosen in state s ∈ S; thus, p_{s,a} ∈ Δ_S is the transition probability distribution for every (s,a) ∈ S × A. Given the state-action pair (s,a), an agent receives an expected reward r_{s,a} ∈ R. To simplify our notation, we denote the reward function as a vector r = {r_{s,a}}_{(s,a)∈S×A}. We seek an optimal stationary randomized policy π = {π_s}_{s∈S} with π_s ∈ Δ_A for all s ∈ S, where an action a ∈ A is taken in state s ∈ S with probability π_{s,a}. A nominal MDP that maximizes the expected reward can be formulated (Puterman 2014) as

ℓ_N = max_{x ∈ X} r⊤x,    (1)

where the feasible set X is given by

X = { x ∈ R^{SA}_+ : (E − γ·P̄)x = p_0 }.

Here γ ∈ (0,1) is a discount factor and p_0 ∈ R^S_{++} is the distribution of the initial state. The coefficient matrices are E = diag(e⊤, …, e⊤) ∈ R^{S×SA} with S all-ones vectors e ∈ R^A, and P̄ = (p̄_1, …, p̄_S)⊤ ∈ R^{S×SA} with p̄_s = {p_{s′,a,s}}_{(s′,a)∈S×A} for all s ∈ S. For each s ∈ S, we denote the sth sub-vector of x by x_s = {x_i}_{i∈{(s−1)A+1,…,sA}}; its ath component x_{s,a} can be interpreted as the total discounted probability of occupying state s and choosing action a when applying the policy π⋆_{s,a} = x⋆_{s,a} / (∑_{a∈A} x⋆_{s,a}) for all (s,a) ∈ S × A (Puterman 2014). Indeed, any x ∈ X admits such an interpretation, so the policies of all models proposed in this paper can be retrieved in this way.
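To illustrate the dual formulation (1) and the policy retrieval just described, here is a minimal sketch on a randomly generated instance, assuming SciPy's linprog; the instance data (S, A, the random kernel and rewards, and the uniform initial distribution) are arbitrary stand-ins rather than anything from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
S, A, gamma = 4, 3, 0.9

# A random MDP instance: kernel p_{s,a,.} in the simplex and expected rewards r_{s,a}.
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
r = rng.random(S * A)
p0 = np.full(S, 1.0 / S)                     # uniform initial distribution

# E = diag(e', ..., e') and P_bar stacks p_bar_s = {p_{s',a,s}} as defined above.
E = np.kron(np.eye(S), np.ones(A))           # shape (S, SA)
P_bar = P.reshape(S * A, S).T                # row s collects p_{s',a,s} over (s', a)

# Nominal MDP (1): maximize r'x over X = {x >= 0 : (E - gamma * P_bar) x = p0}.
res = linprog(-r, A_eq=E - gamma * P_bar, b_eq=p0, bounds=(0, None))
x = res.x.reshape(S, A)

# Retrieve the stationary randomized policy pi_{s,a} = x_{s,a} / sum_a x_{s,a}.
pi = x / x.sum(axis=1, keepdims=True)
print("optimal expected return:", -res.fun)
print("policy:\n", np.round(pi, 3))
```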
Problem (1) is a linear program that can be efficiently solved by the simplex method or an interior-point method (Nocedal and Wright 2006). One can also compute the optimal policy efficiently by applying value iteration or policy iteration to solve the associated Bellman equation of this problem (Bertsekas and Tsitsiklis 1995, Puterman 2014). The nominal MDP (1) does not account for uncertainty in either the rewards or the transition kernel. To account for reward uncertainty, Delage and Mannor (2010) assume that the random reward vector r̃ follows a known Gaussian distribution P and propose a chance-constrained MDP model as follows:

ℓ_CC(ε) =  max   y
           s.t.  P[r̃⊤x ≥ y] ≥ 1 − ε
                 x ∈ X, y ∈ R.    (2)

In fact, the above chance-constrained model maximizes the VaR (at the risk level 1 − ε) of the reward with respect to the distribution P. Since P is assumed Gaussian, by Theorem 10.4.1 in Prékopa (2013), one can reformulate problem (2) as the second-order cone program

ℓ_CC(ε) = max_{x ∈ X} E_P[r̃⊤x] − ∥F^{-1}(1 − ε)Σ^{1/2}x∥_2,

where F^{-1}(·) is the inverse of the cumulative distribution function of the Gaussian distribution P and Σ is the covariance matrix of P. Second-order cone programs allow efficient solutions by state-of-the-art commercial solvers such as CPLEX, Gurobi and MOSEK (see, e.g., Ben-Tal and Nemirovski (2001)).
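As a concrete illustration, a minimal sketch of this second-order cone reformulation under a synthetic Gaussian reward model follows, assuming cvxpy and taking F^{-1} to be the standard normal quantile; the MDP instance is generated as in the previous sketch, and the mean vector and covariance matrix are arbitrary.

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

rng = np.random.default_rng(1)
S, A, gamma, eps = 4, 3, 0.9, 0.1

P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
E = np.kron(np.eye(S), np.ones(A))
P_bar = P.reshape(S * A, S).T
p0 = np.full(S, 1.0 / S)

# Illustrative Gaussian reward model: mean mu and a random positive definite covariance.
mu = rng.random(S * A)
B = rng.random((S * A, S * A)); Sigma = B @ B.T / (S * A) + 0.1 * np.eye(S * A)
L = np.linalg.cholesky(Sigma)                  # Sigma = L L', so ||Sigma^{1/2} x||_2 = ||L' x||_2

# Chance-constrained MDP (2) as an SOCP: max mu'x - F^{-1}(1 - eps) * ||Sigma^{1/2} x||_2 over X.
x = cp.Variable(S * A, nonneg=True)
objective = mu @ x - norm.ppf(1 - eps) * cp.norm(L.T @ x, 2)
prob = cp.Problem(cp.Maximize(objective), [(E - gamma * P_bar) @ x == p0])
prob.solve()
print("optimal VaR of the reward:", prob.value)
```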
Despite its tractability, the chance-constrained MDP (2) requires the precise underlying reward distribution as input. Moreover, the above reformulation does not hold for a generic distribution P.

3. Distributionally Robust MDPs

In many real-world situations, the true distribution of the uncertain reward is hard (if not impossible) to obtain. Instead, we may have some firm knowledge about it, such as its moments and shape. As one of the most efficacious treatments for such situations, the DRO approach models uncertainty as a random variable governed by an unknown probability distribution residing in an ambiguity set. Facing distributional ambiguity, a decision maker seeks solutions that hedge against the most adversarial distribution from within the ambiguity set. To be specific, in our context, we assume that the true distribution of the uncertain reward resides in a Wasserstein ball of radius θ ≥ 0 around some reference distribution P̂:

F(θ) = { P ∈ P(R^{SA}) : d_W(P, P̂) ≤ θ }.    (3)

Here P(R^{SA}) is the set of all probability distributions on R^{SA}, and the Wasserstein distance between two distributions P_1 and P_2, equipped with a general norm ∥·∥ on R^{SA}, is given by d_W(P_1, P_2) = inf_{Q ∈ Q(P_1,P_2)} E_Q[∥r̃_1 − r̃_2∥], where Q(P_1, P_2) is the set of all joint distributions with marginal distributions P_1 and P_2 governing r̃_1 and r̃_2, respectively.
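For two empirical (discrete) distributions, the infimum in this definition reduces to a small transportation linear program; the sketch below computes it with SciPy's linprog under the L2 ground norm on synthetic samples.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m, d = 5, 6, 3                                   # two empirical reward samples in R^d

r1, r2 = rng.normal(size=(n, d)), rng.normal(loc=0.3, size=(m, d))
a, b = np.full(n, 1 / n), np.full(m, 1 / m)         # marginal probabilities of P1 and P2

# Ground cost c_ij = || r1_i - r2_j ||_2.
C = np.linalg.norm(r1[:, None, :] - r2[None, :, :], axis=2)

# d_W(P1, P2) = min_{Q >= 0} sum_ij Q_ij c_ij  s.t.  Q e = a  and  Q' e = b.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1                  # row sums of Q equal a
for j in range(m):
    A_eq[n + j, j::m] = 1                           # column sums of Q equal b
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]), bounds=(0, None))
print("Wasserstein distance between the two empirical distributions:", res.fun)
```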
The random parameter in the nominal MDP (1) is the expectation of the reward, which in practice is often estimated by the average of historical samples. However, when the sample size is small, such a sample average is not close to the expectation but rather is known to be optimistically biased (see, e.g., Smith and Winkler (2006)). Hence, the nominal MDP (1) based on samples may yield an unsatisfactory policy that does not perform well out-of-sample. For this reason, a possible alternative is to maximize instead the worst-case expected reward as in the following distributionally robust MDP:

ℓ_DRMDP(θ) = max_{x ∈ X} inf_{P ∈ F(θ)} E_P[r̃⊤x].    (4)

The following proposition offers an equivalent conic program for (4).

Proposition 1. The distributionally robust MDP (4) can be reformulated as the conic program

ℓ_DRMDP(θ) = max_{x ∈ X} E_P̂[r̃⊤x] − θ·∥x∥_*.

It is not hard to observe that the distributionally robust MDP can be viewed as a convex regularization of the nominal MDP (1) under the reference distribution P̂. In particular, the convex regularizing term in the distributionally robust MDP is θ∥x∥_*, which is sized by the Wasserstein radius θ. Interestingly, we have also found that a (distributionally) optimistic MDP can be reformulated as a reverse conic program with a (concave) regularization term −θ∥x∥_*. We relegate this result to Appendix D.
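A minimal sketch of this regularized program follows, assuming cvxpy, an empirical reference distribution P̂ (so that E_P̂[r̃] is simply the sample mean) and the L2 norm in the Wasserstein distance, which is self-dual; the MDP instance and reward samples are synthetic.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
S, A, gamma, theta = 4, 3, 0.9, 0.05

P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
E = np.kron(np.eye(S), np.ones(A))
P_bar = P.reshape(S * A, S).T
p0 = np.full(S, 1.0 / S)

# Empirical reference distribution P_hat from a few synthetic reward samples.
samples = rng.normal(loc=1.0, scale=0.5, size=(30, S * A))
r_hat = samples.mean(axis=0)                 # E_{P_hat}[r]

# Proposition 1 with the L2 norm (self-dual): max E_{P_hat}[r'x] - theta * ||x||_2 over X.
x = cp.Variable(S * A, nonneg=True)
prob = cp.Problem(cp.Maximize(r_hat @ x - theta * cp.norm(x, 2)),
                  [(E - gamma * P_bar) @ x == p0])
prob.solve()
print("worst-case expected return:", prob.value)
```

Setting theta = 0 recovers the nominal LP (1) with sample-average rewards, so the Wasserstein radius directly controls the amount of regularization.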
We remark that problem (4) is indeed a special case of the robust optimization problem considered in Jaimungal et al. (2022), where we consider the expected utility framework. Compared to the policy gradient methods provided in Jaimungal et al. (2022), where convergence is not established, we have derived an equivalent reformulation as a tractable conic program, which can be efficiently solved by state-of-the-art commercial solvers such as Gurobi, MOSEK and CPLEX, and can also be seamlessly incorporated in the tractable reformulation of our proposed return-risk model in Section 5.

4. Distributionally Robust Chance-Constrained MDPs

In this section, we turn from optimizing the expectation of the reward to its tail performance by exploring chance-constrained MDPs. In particular, we still consider Wasserstein ambiguity sets (3) to account for distributional ambiguity, while specifying the reference distribution P̂ and the norm ∥·∥ in the definition of the Wasserstein distance. For the former, we focus on an elliptical reference distribution P̂ = P(µ,Σ,g) throughout this section (the results in Section 3, in contrast, hold for a general reference distribution); its probability density function is given by

f(r) = k·g( (1/2)(r − µ)⊤Σ^{-1}(r − µ) ),

where k is a positive normalization scalar, µ is a mean vector, Σ is a positive definite matrix and g is a generating function. We emphasize that this assumption on P̂ is mild, as P̂ is only the center of the ambiguity set. In particular, our proposed distributionally robust chance-constrained MDPs can account for all types of distributions (as long as they are inside the ambiguity set), which are not restricted to be elliptical. As we shall see, such specifications lead to tractable reformulations of our proposed models. Preliminaries on elliptical distributions are relegated to Appendix C. For the latter, we adopt the Mahalanobis norm associated with the positive definite matrix Σ, given by ∥x∥_Σ = √(x⊤Σ^{-1}x). Note that the dual norm of a Mahalanobis norm ∥·∥_Σ is another Mahalanobis norm ∥·∥_{Σ^{-1}}, defined by the inverse matrix Σ^{-1}.
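The dual-norm relation can also be verified numerically; the small sketch below, assuming cvxpy, maximizes z⊤x over the unit ball of ∥·∥_Σ and compares the optimal value with ∥z∥_{Σ^{-1}} = √(z⊤Σz), using an arbitrary positive definite Σ.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
d = 5
B = rng.normal(size=(d, d)); Sigma = B @ B.T + 0.5 * np.eye(d)    # positive definite
z = rng.normal(size=d)

# ||x||_Sigma = sqrt(x' Sigma^{-1} x) = ||L' x||_2 with L the Cholesky factor of Sigma^{-1}.
L = np.linalg.cholesky(np.linalg.inv(Sigma))

# Dual norm of ||.||_Sigma at z: maximize z'x subject to ||x||_Sigma <= 1.
x = cp.Variable(d)
prob = cp.Problem(cp.Maximize(z @ x), [cp.norm(L.T @ x, 2) <= 1])
prob.solve()

print("dual norm by optimization:", prob.value)
print("||z|| under Sigma^{-1}, i.e. sqrt(z' Sigma z):", np.sqrt(z @ Sigma @ z))
```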
Figure 2. Values of ε̄ with respect to different values of θ: ε = 0.05 (left), ε = 0.1 (middle), and ε = 0.15 (right).

In a distributionally robust chance-constrained MDP, we require that, even in the worst case, the reward is no less than a lower bound with high confidence, and we aim at maximizing such a lower bound by solving

ℓ_DCC(θ,ε) =  max   y
              s.t.  inf_{P ∈ F(θ)} P[r̃⊤x ≥ y] ≥ 1 − ε
                    x ∈ X, y ∈ R.    (5)

Quite notably, the worst-case chance constraint in the pessimistic chance-constrained MDP (5) is equivalent to a nominal chance constraint as in (2) at an adjusted, more stringent risk level.

Lemma 1.
Suppose that in the Wasserstein ambiguity set (3) the reference distribution is an elliptical distribution P̂ = P(µ,Σ,g) and the Wasserstein distance is equipped with the Mahalanobis norm associated with the positive definite matrix Σ. The distributionally robust chance constraint

∀ P ∈ F(θ) : P[r̃⊤x ≥ y] ≥ 1 − ε    (6)

is satisfied if and only if P(µ,Σ,g)[r̃⊤x ≥ y] ≥ 1 − ε̄, where ε̄ = 1 − Φ(η̄⋆) ≤ ε and η̄⋆ can be computed via a bisection method that searches for the smallest η ≥ Φ^{-1}(1 − ε) satisfying

η(Φ(η) − (1 − ε)) − ∫_{(Φ^{-1}(1−ε))²/2}^{η²/2} k·g(z) dz ≥ θ.

Equipped with Lemma 1, it then turns out that the distributionally robust chance-constrained MDP (5) is equivalent to a nominal chance-constrained MDP (2) at an adjusted, more stringent risk level. Consequently, the distributionally robust chance-constrained MDP (5) can be reformulated into a conic program, or more precisely, a second-order cone program, owing to our choice of the Mahalanobis norm.

Proposition 2. Suppose that in the Wasserstein ambiguity set (3) the reference distribution is an elliptical distribution P̂ = P(µ,Σ,g) and the Wasserstein distance is equipped with the Mahalanobis norm associated with the positive definite matrix Σ. If the risk threshold satisfies ε < 0.5, then the distributionally robust chance-constrained MDP (5) is equivalent to the second-order cone program

ℓ_DCC(θ,ε) = max_{x ∈ X} µ⊤x − ∥Φ^{-1}(1 − ε̄)Σ^{1/2}x∥_2,

where ε̄ = 1 − Φ(η̄⋆) ≤ ε with η̄⋆ being the smallest η ≥ Φ^{-1}(1 − ε) that satisfies η(Φ(η) − (1 − ε)) − ∫_{(Φ^{-1}(1−ε))²/2}^{η²/2} k·g(z) dz ≥ θ.

Similar to the distributionally robust MDPs in Section 3, the distributionally robust chance-constrained MDPs also admit an optimistic counterpart, which is equivalent to the nominal chance-constrained MDPs with a larger risk threshold. We relegate this result to Appendix E.
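The adjusted level ε̄ in Lemma 1 and Proposition 2 is easy to compute by bisection. The sketch below assumes a Gaussian reference distribution, for which Φ is the standard normal CDF and the univariate generator can be taken as k·g(z) = exp(−z)/√(2π); these specializations are our illustrative assumptions, and another elliptical family would only change the kg function passed in.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def adjusted_eps(eps, theta, kg=lambda z: np.exp(-z) / np.sqrt(2 * np.pi),
                 tol=1e-10, eta_max=20.0):
    """Return eps_bar = 1 - Phi(eta_star), where eta_star is the smallest eta >= Phi^{-1}(1 - eps)
    such that eta * (Phi(eta) - (1 - eps)) minus the integral of kg over
    [(Phi^{-1}(1 - eps))^2 / 2, eta^2 / 2] is at least theta."""
    a = norm.ppf(1 - eps)

    def h(eta):
        integral, _ = quad(kg, a ** 2 / 2, eta ** 2 / 2)
        return eta * (norm.cdf(eta) - (1 - eps)) - integral - theta

    lo, hi = a, eta_max          # h(lo) = -theta <= 0 and h is nondecreasing on [a, inf)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) >= 0:
            hi = mid
        else:
            lo = mid
    return 1 - norm.cdf(hi)

for theta in [0.0, 2e-4, 5e-4, 1e-3]:
    print(theta, adjusted_eps(eps=0.1, theta=theta))
```

Plugging the resulting ε̄ into the second-order cone program of Proposition 2 (or replacing ε by ε̄ in the chance-constrained sketch of Section 2) then solves the distributionally robust problem (5).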
Indeed, for any fixed ε, there is a one-to-one correspondence between the adjusted risk level ε̄ and the Wasserstein radius θ. Following from this fact, for the chance-constrained model in our numerical experiments (Section 7), we only calibrate the risk threshold rather than the Wasserstein radius.

5. Return-Risk MDP
For rational decision makers, two types of rewards are of chief concern: the average and the worst-case rewards. However, risk-averse models often cannot achieve a decent average return, on which they place no emphasis (Carpin et al. 2016, Delage and Mannor 2010, Jiang and Powell 2018). To take both concerns into consideration, we leverage the DRMDPs and the DCC model established in Sections 3 and 4 as ingredients and propose the return-risk MDP, which maximizes a weighted average of the worst-case expectation and the worst-case VaR of the reward:

ℓ_RR(α, θ, ε) = max_{x ∈ X}  α·inf_{P ∈ F(θ)} E_P[r̃⊤x] + (1 − α)·inf_{P ∈ F′(θ)} P-VaR_ε[r̃⊤x].                (7)

Here the Wasserstein ball F(θ) is equipped with a general reference distribution and an L2-norm in the definition of the Wasserstein distance, while an elliptical reference distribution P̂ = P(µ, Σ, g) and the Mahalanobis norm associated with the positive definite matrix Σ are assumed for F′(θ). It is not hard to see that the return-risk MDP (7) includes the distributionally robust MDP (4) and the distributionally robust chance-constrained MDP (5) as special cases by varying ε, θ and α ∈ {0, 1}. Furthermore, by choosing a fractional α, the return-risk model enables one to tailor the balance between risk and return. Proposition 3 below provides an equivalent second-order cone program for the return-risk MDP (7) under these assumptions.

Proposition 3. Suppose that in (7) the Wasserstein ball F(θ) (resp., F′(θ)) is equipped with a general reference distribution (resp., an elliptical reference distribution P̂ = P(µ, Σ, g)), and that the norms in the definitions of the Wasserstein distances of F(θ) and F′(θ) are an L2-norm and the Mahalanobis norm associated with Σ ≻ 0, respectively. If the risk threshold satisfies ε < 0.5, then the return-risk MDP (7) is equivalent to the second-order cone program

ℓ_RR(α, θ, ε) = max_{x ∈ X}  µ⊤x − αθ·∥x∥₂ − (1 − α)·∥Φ⁻¹(1 − ε̄)Σ^{1/2}x∥₂,                (8)

where ε̄ = 1 − Φ(η̄⋆) ≤ ε and η̄⋆, the smallest η ≥ Φ⁻¹(1 − ε) that satisfies η(Φ(η) − (1 − ε)) − ∫_{(Φ⁻¹(1−ε))²/2}^{η²/2} k·g(z) dz ≥ θ, can be computed via the bisection method.
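As a quick illustration of how compact the resulting problem is, the sketch below solves (8) with cvxpy, assuming (consistently with the constraints of the reformulation (10) and Figure 3 below) that X is the occupation-measure polytope {x ≥ 0 : (E − γ·P̄)x = p₀}; the data arrays and the precomputed ε̄ are inputs, and this reading of X is our own assumption.

```python
import cvxpy as cp
from scipy.stats import norm

def solve_return_risk_socp(mu, Sig_half, E_mat, P_bar, p0, gamma, alpha, theta, eps_bar):
    """Sketch of the SOCP (8): maximize mu'x - alpha*theta*||x||_2
    - (1 - alpha)*||Phi^{-1}(1 - eps_bar) * Sig_half x||_2 over
    X = {x >= 0 : (E - gamma * P_bar) x = p0} (our reading of X)."""
    kappa = norm.ppf(1.0 - eps_bar)   # Phi^{-1}(1 - eps_bar) >= 0 since eps_bar < 0.5
    x = cp.Variable(mu.shape[0], nonneg=True)
    objective = cp.Maximize(mu @ x
                            - alpha * theta * cp.norm(x, 2)
                            - (1.0 - alpha) * kappa * cp.norm(Sig_half @ x, 2))
    constraints = [(E_mat - gamma * P_bar) @ x == p0]
    problem = cp.Problem(objective, constraints)
    problem.solve()                   # any SOCP solver, e.g. ECOS or MOSEK
    return x.value, problem.value
```

Setting α = 1 or α = 0 in this sketch recovers the two special cases discussed above.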
5.1. Risk-Awareness for Uncertain Transition Kernel
By adopting the static soft-robust framework of Lobo et al. (2020), one can also account for uncertainty in the transition kernel within our return-risk model. As in Lobo et al. (2020), suppose we have finitely many samples of the transition kernel {P̂^i}_{i∈[N]}, with weights w ∈ ∆_N := {w ∈ ℝ^N_+ | e⊤w = 1}, generated by MCMC (see, e.g., Kruschke (2010)). Our proposed model is then as follows:

max_{π ∈ (∆_A)^S}  ψ·E_P̂[g(π, P̃)] + (1 − ψ)·P̂-CVaR_ι[g(π, P̃)].                (9)
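Before stating the reformulation in Figure 3 below, it helps to spell out the CVaR term: under the weighted empirical distribution P̂, the CVaR at level ι averages the worst 1 − ι probability mass of g(π, P̃), which is what the auxiliary variables η and y_i in Figure 3 encode via the Rockafellar–Uryasev representation. The short sketch below (our own illustration) evaluates the objective of (9) for a fixed policy, assuming the values g_i = g(π, P̂^i) have already been computed and that ι < 1.

```python
import numpy as np

def soft_robust_objective(g, w, psi, iota):
    """psi * E[g] + (1 - psi) * CVaR_iota[g] under P(g = g_i) = w_i, where the
    CVaR of a reward is the weighted average of its worst (1 - iota) mass."""
    g, w = np.asarray(g, float), np.asarray(w, float)
    order = np.argsort(g)                          # worst outcomes first
    g_sorted, w_sorted = g[order], w[order]
    tail = 1.0 - iota                              # probability mass in the tail
    cum_before = np.cumsum(w_sorted) - w_sorted    # mass strictly worse than each outcome
    take = np.clip(tail - cum_before, 0.0, w_sorted)
    cvar = float(take @ g_sorted) / tail
    return psi * float(w @ g) + (1.0 - psi) * cvar
```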
max  (1 − ψ)(η − (1/(1 − ι))·Σ_{i∈[N]} y_i) + ψ·Σ_{i∈[N]} (µ⊤x^i − αθ·∥x^i∥₂ − (1 − α)·∥Φ⁻¹(1 − ε̄)Σ^{1/2}x^i∥₂)
s.t. y_i − w_i·η ≥ αθ·∥x^i∥₂ + (1 − α)·∥Φ⁻¹(1 − ε̄)Σ^{1/2}x^i∥₂ − µ⊤x^i        ∀ i ∈ [N]
     (E − γ·P̄^i)x^i = w_i·p₀        ∀ i ∈ [N]
     x^i ≤ (w_i/(1 − γ))·π        ∀ i ∈ [N]
     x^i_{s,a} ≥ (w_i/(1 − γ))·(π_{s,a} − 1) + Σ_{a′∈A} x^i_{s,a′}        ∀ (i, s, a) ∈ [N] × S × A
     π ∈ (∆_A)^S ∩ {0,1}^{S×A}, η ∈ ℝ, x^i ∈ ℝ^{SA}_+ ∀ i ∈ [N], y ∈ ℝ^N_+.

Figure 3  Reformulation of (9) as an MISOCP.

Here the objective function in (9) is again soft-robust against the uncertainty (in the transition kernel), where the weight ψ ∈ [0, 1] controls the degree of robustness and ι ∈ [0, 1] is the risk threshold (with respect to the uncertain transition kernel). The weighted empirical distribution satisfies P̂[P̃ = P̂^i] = w_i for all i ∈ [N], and the function

g(π, P) =  max  µ⊤x − αθ·∥x∥₂ − (1 − α)·∥Φ⁻¹(1 − ε̄)Σ^{1/2}x∥₂
           s.t. x_{s,a} = π_{s,a}·Σ_{a′∈A} x_{s,a′}        ∀ (s, a) ∈ S × A
                (E − γ·P̄)x = p₀
                x ∈ ℝ^{SA}_+

represents the optimal value of the return-risk model under the additional constraint that the optimal policy equals the input π ∈ (∆_A)^S, where P̄ is the coefficient matrix corresponding to the input transition kernel P. Quite notably, when focusing on deterministic policies, one can reformulate (9) as an MISOCP.

Proposition 4. If π is restricted to be a deterministic policy (i.e., π ∈ (∆_A)^S ∩ {0,1}^{S×A}), then problem (9) admits an equivalent MISOCP reformulation, given in Figure 3.
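For concreteness, the sketch below transcribes the MISOCP of Figure 3 with cvxpy. The data matrices E, P̄^i, p₀, µ, Σ^{1/2}, the weights w and the scalar parameters are assumed to be given as NumPy arrays, the occupation-measure vectors are stacked state-major (x[s·A + a] = x_{s,a}), and a mixed-integer SOCP solver such as MOSEK is required; all of these are our own assumptions for illustration.

```python
import cvxpy as cp
import numpy as np
from scipy.stats import norm

def solve_figure3_misocp(E_mat, P_bar, p0, mu, Sig_half, w, gamma,
                         alpha, theta, psi, iota, eps_bar):
    """Sketch of the MISOCP in Figure 3 for the soft-robust model (9)."""
    S, SA = E_mat.shape
    A = SA // S
    N = len(P_bar)
    kappa = norm.ppf(1.0 - eps_bar)
    rho = [w[i] / (1.0 - gamma) for i in range(N)]

    pi = cp.Variable((S, A), boolean=True)            # deterministic policy
    eta = cp.Variable()
    y = cp.Variable(N, nonneg=True)
    xs = [cp.Variable(SA, nonneg=True) for _ in range(N)]

    cons = [cp.sum(pi, axis=1) == 1]                  # exactly one action per state
    avg_terms = []
    for i in range(N):
        xi = xs[i]
        risk_i = alpha * theta * cp.norm(xi, 2) \
                 + (1.0 - alpha) * kappa * cp.norm(Sig_half @ xi, 2)
        avg_terms.append(mu @ xi - risk_i)
        cons += [y[i] - w[i] * eta >= risk_i - mu @ xi,
                 (E_mat - gamma * P_bar[i]) @ xi == w[i] * p0]
        for s in range(S):                            # per-state linking constraints
            sl = slice(s * A, (s + 1) * A)
            cons += [xi[sl] <= rho[i] * pi[s, :],
                     xi[sl] >= rho[i] * (pi[s, :] - 1) + cp.sum(xi[sl])]

    objective = cp.Maximize((1.0 - psi) * (eta - cp.sum(y) / (1.0 - iota))
                            + psi * sum(avg_terms))
    problem = cp.Problem(objective, cons)
    problem.solve(solver=cp.MOSEK)
    return pi.value, problem.value
```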
We remark that, although deterministic policies may seem restrictive compared to randomized ones, they are actually preferred in some situations; for example, they may be a more suitable choice in some medical domains where randomized policies are unworkable for practical and philosophical reasons (Rosen et al. 2006). Also, randomized policies may be difficult to evaluate after they have been deployed and may have poor reproducibility (Lobo et al. 2020).

6. First-Order Method
In this section, we introduce an efficient first-order algorithm to solve the equivalent formulation (8) of our return-risk model. Our algorithm is based on an alternating direction linearized proximal method of multipliers (AD-LPMM) algorithm (Beck 2017, Shefi and Teboulle 2014), which is a variant of the alternating direction method of multipliers (ADMM) and also has a convergence rate of O(1/N) (here N is the number of iterations), as proved by Beck (2017). The proposed splitting allows an efficient update of the variables in AD-LPMM (where the solutions are analytical or can be retrieved by an efficient bisection method). For the primal update of the ADMM algorithm, one needs to solve minimization problems with a quadratic term involved in the objective function; in AD-LPMM, this quadratic term can be linearized by adding a proximity term to the objective function, which renders the primal update much easier.
To implement our AD-LPMM algorithm, we first introduce auxiliary variables and rewrite (8) as the following minimization problem:

min  αθ·∥x∥₂ + (1 − α)·∥Φ⁻¹(1 − ε̄)Σ^{1/2}y∥₂ − µ⊤z
s.t. (E − γ·P̄)x = p₀
     x = y
     x = z                                                                (10)
     x ∈ ℝ^{SA}, y ∈ ℝ^{SA}, z ∈ ℝ^{SA}_+,

where, in the spirit of AD-LPMM, we can split the decision variables into two groups and update them separately. The augmented Lagrangian function of (10) is

L(x, y, z; λ, ξ, η) = αθ·∥x∥₂ + (1 − α)Φ⁻¹(1 − ε̄)·∥Σ^{1/2}y∥₂ − µ⊤z + λ⊤((E − γ·P̄)x − p₀) + ξ⊤(x − y) + η⊤(x − z) + (c/2)·∥((E − γ·P̄)x − p₀; x − y; x − z)∥₂².

Based on our splitting method, we update the two groups of variables (y, z) and x separately. For the update of (y, z), we define the two primal update operators

P_y(x, ξ; c) = argmin_y  (1 − α)Φ⁻¹(1 − ε̄)·∥Σ^{1/2}y∥₂ − ξ⊤y + (c/2)·∥x − y∥₂²

and

P_z(x, η; c) = argmin_{z ≥ 0}  −z⊤(µ + η) + (c/2)·∥x − z∥₂²,
while for the update of x (i.e., the second group of variables), we define

P_x(y, z, λ, ξ, η; c, ν, x̂) = argmin_x  αθ·∥x∥₂ + x⊤((E − γ·P̄)⊤λ + ξ + η) + (c/2)·∥((E − γ·P̄)x − p₀; x − y; x − z)∥₂² + (1/2)·ℓ²_{Q(c,ν)}(x − x̂),

where Q(c, ν) = c·((ν − 2)·I − (E − γ·P̄)⊤(E − γ·P̄)) and ℓ_Q(·) (equipped with a positive semidefinite matrix Q) is the weighted vector norm ℓ_Q(x) = √(x⊤Qx).
As we shall see in Section 6.3, with the proximity term (1/2)·ℓ²_{Q(c,ν)}(x − x̂) added, the update of x is fast (an analytical solution is available). Note that when Q(c, ν) ≡ 0, the update in AD-LPMM reduces to that of ADMM. We now introduce our AD-LPMM in Algorithm 1.

Algorithm 1: AD-LPMM for Problem (10)
Input: Frobenius norm ν = ∥(E − γ·P̄)⊤(E − γ·P̄) + 2·I∥_F, initial stepsize c₀ > 0, stepsize growth rate β > 0, desired precision δ, initial iterates x⁰, y⁰, z⁰, λ⁰, ξ⁰, η⁰; set k ← 0.
while ∥((E − γ·P̄)x^k − p₀; x^k − y^k; x^k − z^k)∥_∞ ≥ δ do
    // Primal update
    step 1: y^{k+1} ← P_y(x^k, ξ^k; c_k);
    step 2: z^{k+1} ← P_z(x^k, η^k; c_k);
    step 3: x^{k+1} ← P_x(y^{k+1}, z^{k+1}, λ^k, ξ^k, η^k; c_k, ν, x^k);
    // Dual update
    step 4: λ^{k+1} ← λ^k + c_k·((E − γ·P̄)x^{k+1} − p₀);
    step 5: ξ^{k+1} ← ξ^k + c_k·(x^{k+1} − y^{k+1});
    step 6: η^{k+1} ← η^k + c_k·(x^{k+1} − z^{k+1});
    // Increase stepsize
    step 7: c_{k+1} ← c_k + β·c₀;
    step 8: k ← k + 1;
end
Output: solution x^k.

Basically, the most time-consuming computations lie in the primal update phase, where the updates are carried out by solving a minimization problem with the other variables fixed at the values from their last updates. As detailed below, owing to our variable splitting, the primal updates are also quite fast: analytical solutions, or solutions obtained by bisection, are available. Here we choose a stepsize that increases in every iteration (with growth rate β > 0), which in practice accelerates convergence.
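A direct Python transcription of the main loop may also be helpful; it is a sketch under our own conventions, in which M = E − γ·P̄ and the three subproblem solvers prox_y, prox_z and prox_x (detailed in Sections 6.1–6.3 below) are passed in as callables, e.g. as closures over the problem data.

```python
import numpy as np

def ad_lpmm(M, p0, prox_y, prox_z, prox_x, c0=1.0, beta=1.0, delta=1e-6, max_iter=10000):
    """AD-LPMM (Algorithm 1) for problem (10), with M = E - gamma * P_bar.
    prox_y(x, xi, c), prox_z(x, eta, c) and prox_x(y, z, lam, xi, eta, c, nu, x_hat)
    solve the subproblems of steps 1-3."""
    n = M.shape[1]
    nu = np.linalg.norm(M.T @ M + 2.0 * np.eye(n), 'fro')   # as in the Input line
    x = np.zeros(n); y = np.zeros(n); z = np.zeros(n)
    lam = np.zeros(M.shape[0]); xi = np.zeros(n); eta = np.zeros(n)
    c = c0
    for _ in range(max_iter):
        residual = np.concatenate([M @ x - p0, x - y, x - z])
        if np.max(np.abs(residual)) < delta:                # stopping criterion
            break
        y = prox_y(x, xi, c)                                # steps 1-3: primal updates
        z = prox_z(x, eta, c)
        x_hat = x
        x = prox_x(y, z, lam, xi, eta, c, nu, x_hat)
        lam = lam + c * (M @ x - p0)                        # steps 4-6: dual updates
        xi = xi + c * (x - y)
        eta = eta + c * (x - z)
        c = c + beta * c0                                   # step 7: increase stepsize
    return x
```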
6.1. Subproblem in Step 1: Proximal Mapping and Projection
To solve P_y(x, ξ; c), we first utilize the technique of proximal mapping and establish the following equivalences:

P_y(x, ξ; c) = Prox_{((1−α)Φ⁻¹(1−ε̄)/c)·∥·∥_Σ}(x + (1/c)·ξ)
             = x + (1/c)·ξ − ((1 − α)Φ⁻¹(1 − ε̄)/c)·Proj_{B_{ℓ_{Σ⁻¹}(·)}}( (c·x + ξ)/((1 − α)Φ⁻¹(1 − ε̄)) ),                (11)

where Prox_{f(·)}(x) = argmin_v f(v) + (1/2)·∥v − x∥₂² is the proximal mapping operator and

Proj_{B_{ℓ_Σ(·)}}(x) = argmin_{v: ℓ_Σ(v) ≤ 1}  (1/2)·∥v − x∥₂²                (12)

is the operator of projection onto the unit ball B_{ℓ_Σ(·)} = {x ∈ ℝ^{SA} | ℓ_Σ(x) ≤ 1}. Here, the first equality in (11) holds by the definition of the proximal mapping operator, and the second equality follows from, e.g., Example 6.4.7 in Beck (2017). Indeed, problem (12) admits an efficient solution obtained by a bisection method that locates its optimal dual solution λ⋆ ≥ 0 (after which the optimal primal solution can be retrieved immediately), where the upper bound for the bisection is provided in Lemma 2, relegated to Appendix A.4. The time complexity of the solution process (11), as well as the pseudocode for the bisection method, are provided in the following proposition.

Proposition 5. Problem P_y(x, ξ; c) can be solved in time O(SA·log(1/δ′)), where δ′ is the desired precision of the bisection method.
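The sketch below is one way to implement (11)–(12) in Python, under our reading that ∥·∥_Σ = ∥Σ^{1/2}·∥₂, so that the dual-norm unit ball B_{ℓ_{Σ⁻¹}(·)} is the ellipsoid {v : v⊤Σ⁻¹v ≤ 1}. It locates the KKT multiplier of the projection by bisection, using a (precomputable) eigendecomposition Σ⁻¹ = U·diag(d)·U⊤; the argument q stands for the precomputed quantile Φ⁻¹(1 − ε̄). All of these choices are our own assumptions for illustration.

```python
import numpy as np

def project_ellipsoid(u, d, U, tol=1e-10):
    """Projection of u onto {v : v' Sigma^{-1} v <= 1}, with Sigma^{-1} = U diag(d) U'.
    KKT: v(lam) = U diag(1/(1 + lam*d)) U'u; bisect on lam >= 0 until the
    constraint v' Sigma^{-1} v = 1 holds (when u lies outside the ellipsoid)."""
    u_t = U.T @ u
    if np.sum(d * u_t ** 2) <= 1.0:          # u is already inside the ellipsoid
        return u
    g = lambda lam: np.sum(d * (u_t / (1.0 + lam * d)) ** 2) - 1.0
    lo, hi = 0.0, 1.0
    while g(hi) > 0.0:                       # expand the bracket
        hi *= 2.0
    while hi - lo > tol:                     # bisection on the monotone function g
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return U @ (u_t / (1.0 + lam * d))

def prox_y(x, xi, c, d, U, alpha, q):
    """Update (11), with kappa = (1 - alpha) * q and q = Phi^{-1}(1 - eps_bar)."""
    kappa = (1.0 - alpha) * q
    if kappa <= 0.0:                         # alpha = 1: the norm term vanishes
        return x + xi / c
    return x + xi / c - (kappa / c) * project_ellipsoid((c * x + xi) / kappa, d, U)
```

When plugged into the ad_lpmm loop above, the extra data arguments (d, U, alpha, q) would be bound in advance, for example with functools.partial.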
6.2. Subproblem in Step 2: Componentwise Update
Problem P_z(x, η; c) can be decomposed into SA single-variable quadratic programming problems, each admitting an analytical solution. We summarize the time complexity and the details in the following proposition.

Proposition 6. Problem P_z(x, η; c) can be solved in time O(SA).

6.3. Subproblem in Step 3: Linearization and Proximal Mapping
Compared to the update in ADMM, in our AD-LPMM a proximity term (1/2)·ℓ²_{Q(c,ν)}(x − x̂) is added to the objective function of the update in step 3. By choosing Q(·,·) as specified in Section 6, we linearize all the quadratic terms in P_x(y, z, λ, ξ, η; c, ν, x̂), so that the solution can be obtained analytically via the technique of proximal mapping (while guaranteeing the positive semidefiniteness of Q(c_k, ν) in every iteration of Algorithm 1). This solution process, as well as its time complexity, is provided in the following proposition.

Proposition 7. Problem P_x(y, z, λ, ξ, η; c, ν, x̂) can be solved in time O(SA).
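For completeness, here is one concrete reading of these two analytical updates (the official formulas are in the proofs of Propositions 6 and 7, which we do not reproduce; the versions below simply follow from the first-order conditions and from completing the square with the chosen Q(c, ν), so that P_x reduces to the proximal operator of the Euclidean norm).

```python
import numpy as np

def prox_z(x, eta, c, mu):
    """P_z(x, eta; c): minimize -z'(mu + eta) + (c/2)||x - z||^2 over z >= 0.
    The unconstrained minimizer x + (mu + eta)/c is clipped componentwise at 0."""
    return np.maximum(x + (mu + eta) / c, 0.0)

def prox_x(y, z, lam, xi, eta, c, nu, x_hat, M, p0, alpha, theta):
    """P_x with Q(c, nu) = c((nu - 2)I - M'M): completing the square gives
        alpha*theta*||x||_2 + (c*nu/2)*||x - b/(c*nu)||^2 + const,
    with b = c*(M'p0 + y + z) + Q x_hat - (M'lam + xi + eta), so the minimizer is
    the block soft-thresholding of b/(c*nu) at level alpha*theta/(c*nu)."""
    w = M.T @ lam + xi + eta
    Qx_hat = c * ((nu - 2.0) * x_hat - M.T @ (M @ x_hat))
    b = c * (M.T @ p0 + y + z) + Qx_hat - w
    b_norm = np.linalg.norm(b)
    if b_norm <= alpha * theta:
        return np.zeros_like(b)
    return (1.0 - alpha * theta / b_norm) * b / (c * nu)
```

Both updates are in closed form, in line with Propositions 6 and 7.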
Figure 4  Empirical study. Models DRMDP (4), CC (2), RR (7), RMDP and BROIL evaluated by the VaR (risk threshold ε′ = 15%) and the mean of the reward. The upper and lower edges of the shaded areas are respectively the 95% and 5% percentiles of the 100 performances, while the solid lines are the medians.

7. Numerical Experiments
In this section, we conduct two numerical experiments to compare the performances of DRMDPs (4), CC (2)³, RR (7), RMDPs (Delage and Mannor 2010) and BROIL (Brown et al. 2020) (please see Appendices F and G for more details on the last two models). In both experiments, we train our reward functions with different sample sizes (100, 200, 300, 400, 500). For each sample size, the performance of each model is evaluated 100 times. The performance of each model is evaluated by the expectation and the VaR with risk thresholds ε′ ∈ {5%, 10%, 15%}. Cross-validation is conducted for parameter selection (please see Appendix H.1 for details). In Section 7.1, we conduct a simulation study where MDPs are generated randomly as in Regan and Boutilier (2012); in Section 7.2, we study a machine replacement problem introduced in Delage and Mannor (2010).
As implied in our proofs, in this section the Wasserstein ambiguity set of DRMDPs (4) will be equipped with a general reference distribution and an L2-norm for the Wasserstein distance; as for RR (7), we use a general reference distribution and an L2-norm in the definition of the Wasserstein distance for the Wasserstein ambiguity set F(θ), while for F′(θ) we use an elliptical reference distribution P̂ = P(µ, Σ, g) and the Mahalanobis norm associated with the positive definite matrix Σ for the Wasserstein distance. All optimization problems are solved by MOSEK on a 2.3GHz processor with 32GB memory.

Figure 5  Simulation. Models DRMDP (4), CC (2), RR (7), RMDP and BROIL evaluated by the VaR (risk threshold ε′ = 15%) and the mean of the reward. The upper and lower edges of the shaded areas are respectively the 95% and 5% percentiles of the 100 performances, while the solid lines are the medians.

7.1. Simulation Study
In this experiment, we follow the experiment setup in Regan and Boutilier (2012), where the number of reachable next-states and the transition kernel are randomly generated (both of which are known to decision makers). More details of the experiment setting are relegated to Appendix H.2.
As illustrated in Figures 5 and 7 (where the latter, for the VaR with ε′ ∈ {5%, 10%}, is relegated to Appendix H.4), when the decision maker aims to optimize her tail performance, CC is a preferable choice compared with DRMDPs; on the contrary, when the goal is to optimize the average return, DRMDPs perform much better than CC. Observe that the RR model, which includes both DRMDPs and the DCC model as special cases, remains the best model under all criteria. In particular, one can observe that RR achieves higher percentile returns than BROIL (which is a model without robustness), which demonstrates the benefit of distributional robustness and the advantage of the risk measure VaR for percentile performance optimization. As expected, RMDPs end up yielding over-conservative policies; as a result, they perform poorly in most instances under all criteria.

7.2. Machine Replacement Problem
In this experiment, we follow the experiment setup in Delage and Mannor (2010) and consider the case where a factory holds an extensive number of machines, each of which is subject to the same underlying MDP (more details of the experiment setting can be found in Appendix H.2). Our setting is similar to that of Delage and Mannor (2010) except for the following: we use a data-driven setting as described above, and we evaluate the policies of our models by looking at the various performance measures as in Section 7.1. We report the overall performances of the five models in Figures 4 and 8 (where the latter, for the VaR with ε′ ∈ {5%, 10%}, is relegated to Appendix H.5).

³ As we demonstrated in Section 4, a DCC is equivalent to a nominal chance-constrained MDP with an adjusted risk level, so here we simply choose the latter as the benchmark.
Similar to the previous experiment, RR always performs at least as well as the better of CC and DRMDPs, and it provides the best performance under all criteria, which again manifests the merit of accounting for both the expected and the worst-case performance, together with distributional robustness.

7.3. Computation Times of Different Algorithms

Table 1  Average runtimes (in seconds) of the MOSEK solver and of the AD-LPMM algorithm, and the relative gaps (%) of AD-LPMM to the optimal values computed by MOSEK.

S = A    MOSEK    AD-LPMM    Relative gap
  40       0.60      2.79      < 0.1%
  70       5.58      4.81      < 0.1%
 100      25.50     19.98        0.2%
 130      93.54     66.17      < 0.1%
 160     444.06    168.34        0.4%
In this section, we compare the computation times of our AD-LPMM algorithm with those of the state-of-the-art solver MOSEK. Table 1 reports the runtimes of AD-LPMM and MOSEK when solving problem (8) at different problem sizes. The results indicate that, although our AD-LPMM is slower than the MOSEK solver when the problem size is small, it exhibits strong scalability and becomes much faster than MOSEK on large problems (while always maintaining high solution quality), and the advantage is more notable as the problem scales up.

8. Conclusion
We consider risk-aware MDPs with ambiguous reward functions and propose the return-risk model, which is versatile and can optimize any weighted combination of the average and quantile performances of a policy. This model generalizes and combines the advantages of distributionally robust MDPs and distributionally robust chance-constrained MDPs, and is thus powerful for optimizing both average and percentile performance. In particular, risk from an uncertain transition kernel can also be captured by the return-risk model when the output policies are deterministic. Tractable reformulations are provided for all our proposed models, and we design an AD-LPMM algorithm for the return-risk model, which scales well and is faster than the MOSEK solver on large-scale problems. Experimental results showcase the versatility of the return-risk model as well as the scalability of the algorithm. In the future, we believe it would be important to explore more efficient methods for solving RR, where function approximation and policy gradient methods (Sutton and Barto 2018) are possible ways to achieve this.

References
Abdullah, Mohammed Amin, Hang Ren, Haitham Bou Ammar, Vladimir Milenkovic, Rui Luo, Mingtian Zhang, Jun Wang. 2019. Wasserstein robust reinforcement learning. arXiv preprint arXiv:1907.13196.
Ahmadi, Mohamadreza, Ugo Rosolia, Michel Ingham, Richard Murray, Aaron Ames. 2021. Constrained risk-averse Markov decision processes. The 35th AAAI Conference on Artificial Intelligence (AAAI-21).
Alizadeh, Pegah, Yann Chevaleyre, Jean-Daniel Zucker. 2015. Approximate regret based elicitation in Markov decision process. The 2015 IEEE RIVF International Conference on Computing & Communication Technologies - Research, Innovation, and Vision for Future (RIVF). IEEE, 47–52.
Bäuerle, Nicole, Ulrich Rieder. 2017. Partially observable risk-sensitive Markov decision processes. Mathematics of Operations Research 42(4) 1180–1196.
Bäuerle, Nicole, Alexander Glauner. 2021. Minimizing spectral risk measures applied to Markov decision processes. Mathematical Methods of Operations Research 94(1) 35–69.
Bäuerle, Nicole, Jonathan Ott. 2011. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research 74(3) 361–379.
Beck, Amir. 2017. First-order methods in optimization. SIAM.
Behzadian, Bahram, Reazul Russel, Marek Petrik, Chin Pang Ho. 2021. Optimizing percentile criterion using robust MDPs. International Conference on Artificial Intelligence and Statistics 1009–1017.
Ben-Tal, Aharon, Dimitris Bertsimas, David B. Brown. 2010. A soft robust model for optimization under ambiguity. Operations Research 58(4-part-2) 1220–1234.
Ben-Tal, Aharon, Arkadi Nemirovski. 2001. Lectures on modern convex optimization: analysis, algorithms, and engineering applications. SIAM.
Bernard, Carole, Silvana M. Pesenti, Steven Vanduffel. 2022. Robust distortion risk measures. arXiv preprint arXiv:2205.08850.
Bertsekas, Dimitri, John Tsitsiklis. 1995. Neuro-dynamic programming: an overview. Proceedings of 1995 34th IEEE Conference on Decision and Control, vol. 1. IEEE, 560–564.
Bertsimas, Dimitris, Aurélie Thiele. 2006. Robust and data-driven optimization: modern decision making under uncertainty. Models, Methods, and Applications for Innovative Decision Making. INFORMS, 95–122.
Blanchet, Jose, Karthyek Murthy. 2019. Quantifying distributional model risk via optimal transport. Mathematics of Operations Research 44(2) 565–600.
Brown, Daniel, Scott Niekum, Marek Petrik. 2020. Bayesian robust optimization for imitation learning. Advances in Neural Information Processing Systems 33 2479–2491.
Calafiore, Carlo, L. El Ghaoui. 2006. On distributionally robust chance-constrained linear programs. Journal of Optimization Theory and Applications 130(1) 1–22.
Carpin, Stefano, Yin-Lam Chow, Marco Pavone. 2016. Risk aversion in finite Markov decision processes using total cost criteria and average value at risk. 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 335–342.
Chen, Xin, Melvyn Sim, Peng Sun. 2007. A robust optimization perspective on stochastic programming. Operations Research 55(6) 1058–1071.
Chen, Xinyue, Zijian Zhou, Zheng Wang, Che Wang, Yanqiu Wu, Keith Ross. 2020. BAIL: Best-action imitation learning for batch deep reinforcement learning. Advances in Neural Information Processing Systems 33 18353–18363.
Chen, Zhi, Daniel Kuhn, Wolfram Wiesemann. 2018. Data-driven chance constrained programs over Wasserstein balls. arXiv preprint arXiv:1809.00210.
Chen, Zhi, Melvyn Sim, Huan Xu. 2019. Distributionally robust optimization with infinitely constrained ambiguity sets. Operations Research 67(5) 1328–1344.
Chen, Zhi, Weijun Xie. 2021. Sharing the value-at-risk under distributional ambiguity. Mathematical Finance 31(1) 531–559.
Choi, Jaedeug, Kee-Eung Kim. 2012. Nonparametric Bayesian inverse reinforcement learning for multiple reward functions. Advances in Neural Information Processing Systems 25.
Chow, Yinlam, Mohammad Ghavamzadeh, Lucas Janson, Marco Pavone. 2017. Risk-constrained reinforcement learning with percentile risk criteria. The Journal of Machine Learning Research 18(1) 6070–6120.
Clement, Julien, Christian Kroer. 2021a. First-order methods for Wasserstein distributionally robust MDP. International Conference on Machine Learning. PMLR, 2010–2019.
Clement, Julien Grand, Christian Kroer. 2021b. First-order methods for Wasserstein distributionally robust MDP. International Conference on Machine Learning. PMLR, 2010–2019.
Delage, Erick, Shie Mannor. 2007. Percentile optimization in uncertain Markov decision processes with application to efficient exploration. Proceedings of the 24th International Conference on Machine Learning. PMLR, 225–232.
Delage, Erick, Shie Mannor. 2010. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research 58(1) 203–213.
Delgado, Karina, Leliane De Barros, Daniel Dias, Scott Sanner. 2016. Real-time dynamic programming for Markov decision processes with imprecise probabilities. Artificial Intelligence 230 192–223.
Derman, Esther, Shie Mannor. 2020. Distributional robustness and regularization in reinforcement learning. arXiv preprint arXiv:2003.02894.
Erdoğan, Emre, Garud Iyengar. 2006. Ambiguous chance constrained problems and robust optimization. Mathematical Programming 107(1) 37–61.
Gao, Rui, Anton Kleywegt. 2016. Distributionally robust stochastic optimization with Wasserstein distance. arXiv preprint arXiv:1604.02199.
Gao, Rui, Anton Kleywegt. 2022. Distributionally robust stochastic optimization with Wasserstein distance. Mathematics of Operations Research.
Gilbert, Hugo, Paul Weng, Yan Xu. 2017. Optimizing quantiles in preference-based Markov decision processes. Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31.
Hanasusanto, Grani, Vladimir Roitch, Daniel Kuhn, Wolfram Wiesemann. 2015. A distributionally robust perspective on uncertainty quantification and chance constrained programming. Mathematical Programming 151(1) 35–62.
Hanasusanto, Grani, Vladimir Roitch, Daniel Kuhn, Wolfram Wiesemann. 2017. Ambiguous joint chance constraints under mean and dispersion information. Operations Research 65(3) 751–767.
Haskell, William, Rahul Jain. 2015. A convex analytic approach to risk-aware Markov decision processes. SIAM Journal on Control and Optimization 53(3) 1569–1598.
Ho, Jonathan, Stefano Ermon. 2016. Generative adversarial imitation learning. Advances in Neural Information Processing Systems 29.
Hogg, Robert V, Allen T Craig. 1995. Introduction to Mathematical Statistics (5th edition). Englewood Hills, New Jersey.
Huang, Wenjie, William Haskell. 2017. Risk-aware Q-learning for Markov decision processes. 2017 IEEE 56th Annual Conference on Decision and Control (CDC). IEEE, 4928–4933.
Huang, Yonghui, Xianping Guo. 2016. Minimum average value-at-risk for finite horizon semi-Markov decision processes in continuous time. SIAM Journal on Optimization 26(1) 1–28.
Iyengar, Garud. 2005. Robust dynamic programming. Mathematics of Operations Research 30(2) 257–280.
Jaimungal, Sebastian, Silvana M Pesenti, Ye Sheng Wang, Hariom Tatsat. 2022. Robust risk-aware reinforcement learning. SIAM Journal on Financial Mathematics 13(1) 213–226.
Jiang, Daniel R, Warren B Powell. 2018. Risk-averse approximate dynamic programming with quantile-based risk measures. Mathematics of Operations Research 43(2) 554–579.
Kaiser, Lukasz, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, et al. 2019. Model-based reinforcement learning for Atari. arXiv preprint arXiv:1903.00374.
Kruschke, John K. 2010. Bayesian data analysis. Wiley Interdisciplinary Reviews: Cognitive Science 1(5) 658–676.
Lagoudakis, Michail G, Ronald Parr. 2003. Least-squares policy iteration. The Journal of Machine Learning Research 4 1107–1149.
Lobo, Elita A, Mohammad Ghavamzadeh, Marek Petrik. 2020. Soft-robust algorithms for batch reinforcement learning. arXiv preprint arXiv:2011.14495.
Mannor, Shie, Ofir Mebel, Huan Xu. 2016. Robust MDPs with k-rectangular uncertainty. Mathematics of Operations Research 41(4) 1484–1509.
Mannor, Shie, Duncan Simester, Peng Sun, John Tsitsiklis. 2007. Bias and variance approximation in value function estimates. Management Science 53(2) 308–322.
Ng, Andrew Y, Stuart J Russell, et al. 2000. Algorithms for inverse reinforcement learning. ICML, vol. 1, 2.
Nocedal, Jorge, Stephen Wright. 2006. Numerical Optimization. Springer Science & Business Media.
Osa, Takayuki, Joni Pajarinen, Gerhard Neumann, J Andrew Bagnell, Pieter Abbeel, Jan Peters. 2018. An algorithmic perspective on imitation learning. arXiv preprint arXiv:1811.06711.
Panaganti, Kishan, Dileep Kalathil. 2022. Sample complexity of robust reinforcement learning with a generative model. International Conference on Artificial Intelligence and Statistics. PMLR, 9582–9602.
Petrik, Marek. 2010. Optimization-based approximate dynamic programming. University of Massachusetts Amherst.
Petrik, Marek, Ronny Luss. 2016. Interpretable policies for dynamic product recommendations. UAI.
Pflug, Georg, David Wozabal. 2007. Ambiguity in portfolio selection. Quantitative Finance 7(4) 435–442.
Postek, Krzysztof, Aharon Ben-Tal, Dick Den Hertog, Bertrand Melenberg. 2018. Robust optimization with ambiguous stochastic constraints under mean and dispersion information. Operations Research 66(3) 814–833.
Prékopa, András. 2013. Stochastic Programming, vol. 324. Springer Science & Business Media.
Puterman, Martin. 2014. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons.
Rashidinejad, Paria, Banghua Zhu, Cong Ma, Jiantao Jiao, Stuart Russell. 2021. Bridging offline reinforcement learning and imitation learning: A tale of pessimism. Advances in Neural Information Processing Systems 34.
Regan, Kevin, Craig Boutilier. 2010. Robust policy computation in reward-uncertain MDPs using nondominated policies. Proceedings of the AAAI Conference on Artificial Intelligence, vol. 24.
Regan, Kevin, Craig Boutilier. 2011a. Eliciting additive reward functions for Markov decision processes. Twenty-Second International Joint Conference on Artificial Intelligence.
Regan, Kevin, Craig Boutilier. 2011b. Robust online optimization of reward-uncertain MDPs. Twenty-Second International Joint Conference on Artificial Intelligence.
Regan, Kevin, Craig Boutilier. 2012. Regret-based reward elicitation for Markov decision processes. arXiv preprint arXiv:1205.2619.
Rosen, Laura, Orly Manor, Dan Engelhard, David Zucker. 2006. In defense of the randomized controlled trial for health promotion research. American Journal of Public Health 96(7) 1181–1186.
Roy, Aurko, Huan Xu, Sebastian Pokutta. 2017. Reinforcement learning under model mismatch. Advances in Neural Information Processing Systems 30.
Sarykalin, Sergey, Gaia Serraino, Stan Uryasev. 2008. Value-at-risk vs. conditional value-at-risk in risk management and optimization. State-of-the-Art Decision-Making Tools in the Information-Intensive Age. INFORMS, 270–294.
Shapiro, Alexander. 2016. Rectangular sets of probability measures. Operations Research 64(2) 528–541.
Shapiro, Alexander. 2021. Distributionally robust optimal control and MDP modeling. Operations Research Letters 49(5) 809–814.
Shefi, Ron, Marc Teboulle. 2014. Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization. SIAM Journal on Optimization 24(1) 269–297.
Smith, James, Robert L Winkler. 2006. The optimizer's curse: skepticism and postdecision surprise in decision analysis. Management Science 52(3) 311–322.
Sutton, Richard S, Andrew G Barto. 2018. Reinforcement Learning: An Introduction. MIT Press.
van de Berg, Damien, Thomas Savage, Panagiotis Petsagkourakis, Dongda Zhang, Nilay Shah, Ehecatl Antonio del Rio-Chanona. 2022. Data-driven optimization for process systems engineering applications. Chemical Engineering Science 248 117135.
Wang, Yue, Shaofeng Zou. 2021. Online robust reinforcement learning with model uncertainty. Advances in Neural Information Processing Systems 34 7193–7206.
Wang, Yue, Shaofeng Zou. 2022. Policy gradient method for robust reinforcement learning. arXiv preprint arXiv:2205.07344.
Xie, Weijun. 2021. On distributionally robust chance constrained programs with Wasserstein distance. Mathematical Programming 186(1) 115–155.
Xu, Huan, Shie Mannor. 2010. Distributionally robust Markov decision processes. Advances in Neural Information Processing Systems 23 2505–2513.
Xu, Huan, Shie Mannor. 2012. Distributionally robust Markov decision processes. Mathematics of Operations Research 37(2).
Yamai, Yasuhiro, Toshinao Yoshiba, et al. 2002. Comparative analyses of expected shortfall and value-at-risk: their estimation error, decomposition, and optimization. Monetary and Economic Studies 20(1) 87–121.
Yang, Insoon. 2020. Wasserstein distributionally robust stochastic control: A data-driven approach. IEEE Transactions on Automatic Control 66(8) 3863–3870.
Yu, Pengqian, Huan Xu. 2015. Distributionally robust counterpart in Markov decision processes. IEEE Transactions on Automatic Control 61(9) 2538–2543.
Zheng, Kan, Zhe Yang, Kuan Zhang, Periklis Chatzimisios, Kan Yang, Wei Xiang. 2016. Big data-driven optimization for mobile networks toward 5G. IEEE Network 30(1) 44–51.
Zymler, Steve, Daniel Kuhn, Berç Rustem. 2013. Distributionally robust joint chance constraints with second-order moment information. Mathematical Programming 137(1) 167–198.

A. Proof of Results

A.1. Proofs of Results in Section 3
Proof of Proposition 1. It is sufficient to rewrite the objective of (4) as follows:
\[
\begin{aligned}
\inf_{\mathbb{P}\in\mathcal{F}(\theta)}\mathbb{E}_{\mathbb{P}}[\tilde{r}^\top x]
&= -\sup_{\mathbb{P}\in\mathcal{F}(\theta)}\mathbb{E}_{\mathbb{P}}[-\tilde{r}^\top x]\\
&= -\min_{\lambda\ge 0}\left\{\lambda\theta - \int_{\mathbb{R}^{SA}}\inf_{\xi\in\mathbb{R}^{SA}}\left(\lambda\|\xi-r\| + \xi^\top x\right)\mathrm{d}\hat{\mathbb{P}}_r\right\}\\
&= -\min_{\lambda\ge\|x\|_*}\left\{\lambda\theta - \int_{\mathbb{R}^{SA}} r^\top x\,\mathrm{d}\hat{\mathbb{P}}_r\right\}\\
&= \mathbb{E}_{\hat{\mathbb{P}}}[\tilde{r}^\top x] - \theta\|x\|_*,
\end{aligned}
\]
where the second identity follows from Theorem 1 in Gao and Kleywegt (2016) and the third identity follows from the strong conic duality
\[
\inf_{\xi\in\mathbb{R}^{SA}}\left(\lambda\|\xi-r\| + \xi^\top x\right) =
\begin{cases}
r^\top x & \lambda\ge\|x\|_*,\\
-\infty & \lambda\in[0,\|x\|_*).
\end{cases}
\]
Substituting the above re-expression then concludes the proof. Q.E.D.
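As a quick numerical sanity check of the conic-duality step above (separate from the proof), the following sketch evaluates the inner minimization for the Euclidean norm, whose dual norm is again the Euclidean norm; the vectors r and x below are arbitrary illustrative data rather than quantities from the model.

```python
# Check: inf_xi { lam*||xi - r||_2 + xi^T x } equals r^T x when lam >= ||x||_2,
# and is unbounded below when lam < ||x||_2.  Illustrative data only.
import numpy as np

rng = np.random.default_rng(0)
n = 6
r, x = rng.normal(size=n), rng.normal(size=n)

def obj(xi, lam):
    return lam * np.linalg.norm(xi - r) + xi @ x

lam_feasible = np.linalg.norm(x) + 0.3          # lambda >= ||x||_2
samples = rng.normal(scale=5.0, size=(20000, n)) + r
vals = np.array([obj(xi, lam_feasible) for xi in samples])
# The infimum equals r^T x and is attained at xi = r; random samples never go below it.
print(vals.min() >= r @ x - 1e-9, obj(r, lam_feasible), r @ x)

lam_small = 0.5 * np.linalg.norm(x)             # lambda < ||x||_2
for t in (1e2, 1e4, 1e6):                        # move along xi = r - t*x/||x||_2
    print(obj(r - t * x / np.linalg.norm(x), lam_small))  # decreases without bound
```

For $\lambda \ge \|x\|_2$ the sampled objective never falls below $r^\top x$, while for $\lambda < \|x\|_2$ the printed values decrease without bound, matching the two cases of the duality above.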
A.2. Proofs of Results in Section 4

Proof of Lemma 1. Notice that (6) is equivalent to
\[
\sup_{\mathbb{P}\in\mathcal{F}(\theta)}\mathbb{P}\left[\tilde{r}^\top x < y\right]\le\varepsilon
\iff
\sup_{\mathbb{P}\in\mathcal{F}(\theta)}\mathbb{P}\left[\tilde{r}^\top x \le y\right]\le\varepsilon,
\]
that is, the constraint remains equivalent if we replace the strict inequality on the left-hand side with a weak one on the right-hand side; see Proposition 3 in Gao and Kleywegt (2016). By the definition of VaR, we note that
\[
\sup_{\mathbb{P}\in\mathcal{F}(\theta)}\mathbb{P}\left[\tilde{r}^\top x \le y\right]\le\varepsilon
\iff
\sup_{\mathbb{P}\in\mathcal{F}(\theta)}\mathbb{P}\text{-VaR}_{1-\varepsilon}\left[y-\tilde{r}^\top x\right]\le 0.
\]
By Corollary 4.9 in Chen and Xie (2021) and the assumption of a Mahalanobis norm, it holds that
\[
\sup_{\mathbb{P}\in\mathcal{F}(\theta)}\mathbb{P}\text{-VaR}_{1-\varepsilon}\left[y-\tilde{r}^\top x\right]
= \mathbb{P}(\mu,\Sigma,g)\text{-VaR}_{1-\bar{\varepsilon}}\left[y-\tilde{r}^\top x\right].
\]
In other words, the worst-case VaR around the elliptical distribution $\mathbb{P}(\mu,\Sigma,g)$ with risk threshold $\varepsilon$ is equal to the nominal elliptical VaR with a smaller risk threshold $\bar{\varepsilon}\le\varepsilon$ (which would correspond to a higher risk level). We thus obtain
\[
\sup_{\mathbb{P}\in\mathcal{F}(\theta)}\mathbb{P}\text{-VaR}_{1-\varepsilon}\left[y-\tilde{r}^\top x\right]\le 0
\iff \mathbb{P}(\mu,\Sigma,g)\text{-VaR}_{1-\bar{\varepsilon}}\left[y-\tilde{r}^\top x\right]\le 0
\iff \mathbb{P}(\mu,\Sigma,g)\left[\tilde{r}^\top x \le y\right]\le\bar{\varepsilon}
\iff \mathbb{P}(\mu,\Sigma,g)\left[\tilde{r}^\top x \ge y\right]\ge 1-\bar{\varepsilon},
\]
where the last equivalence follows from $\mathbb{P}(\mu,\Sigma,g)$ being a continuous distribution. Q.E.D.

Proof of Proposition 2. By Lemma 1, the first constraint in (5) is the same as $\mathbb{P}(\mu,\Sigma,g)[\tilde{r}^\top x \ge y]\ge 1-\bar{\varepsilon}$, where $\bar{\varepsilon}=1-\Phi(\bar{\eta}^\star)\le\varepsilon$ and $\bar{\eta}^\star$ is the smallest $\eta\ge\Phi^{-1}(1-\varepsilon)$ that satisfies
\[
\eta\left(\Phi(\eta)-(1-\varepsilon)\right) - \int_{(\Phi^{-1}(1-\varepsilon))^2/2}^{\eta^2/2} k_g(z)\,\mathrm{d}z \ge \theta.
\]
The constraint can then be further written as
\[
\mathbb{P}(\mu,\Sigma,g)\left[\tilde{r}^\top x \ge y\right]\ge 1-\bar{\varepsilon}
\iff \Phi\!\left(\frac{\mu^\top x - y}{\sqrt{x^\top\Sigma x}}\right)\ge 1-\bar{\varepsilon}
\iff \mu^\top x - y \ge \Phi^{-1}(1-\bar{\varepsilon})\sqrt{x^\top\Sigma x}
\iff \mu^\top x - y \ge \left\|\Phi^{-1}(1-\bar{\varepsilon})\Sigma^{1/2}x\right\|_2,
\]
where the first equivalence holds by the linearity of elliptical distributions, the second because $\Phi(\cdot)$ is non-decreasing, and the last because $1-\bar{\varepsilon}\ge 0.5$ (which follows from $\bar{\varepsilon}\le\varepsilon<0.5$). Observe that the optimum is achieved at $y^\star=\mu^\top x-\|\Phi^{-1}(1-\bar{\varepsilon})\Sigma^{1/2}x\|_2$; plugging this into the objective of problem (5) then concludes our proof. Q.E.D.
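To illustrate the reformulated constraint, the following Monte Carlo sketch considers the Gaussian member of the elliptical family (so that $\Phi$ is the standard normal CDF) and checks that at $y^\star=\mu^\top x-\|\Phi^{-1}(1-\bar{\varepsilon})\Sigma^{1/2}x\|_2$ the return $\tilde{r}^\top x$ exceeds $y^\star$ with probability $1-\bar{\varepsilon}$; the values of $\mu$, $\Sigma$, $x$ and $\bar{\varepsilon}$ below are illustrative only.

```python
# Monte Carlo check of the chance-constraint reformulation in the Gaussian case.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, eps_bar = 4, 0.05
mu = rng.normal(size=n)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)          # an illustrative positive definite covariance
x = rng.uniform(0.1, 1.0, size=n)        # an illustrative nonnegative occupancy-type vector

# y* = mu^T x - Phi^{-1}(1 - eps_bar) * sqrt(x^T Sigma x)
y_star = mu @ x - norm.ppf(1 - eps_bar) * np.sqrt(x @ Sigma @ x)

samples = rng.multivariate_normal(mu, Sigma, size=500_000)
print((samples @ x >= y_star).mean(), 1 - eps_bar)   # both roughly 0.95
```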
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' By Proposition 1 and Proposition 2, we have inf P∈F(θ)EP[˜r⊤x] = −θ∥x∥2 + EˆP[˜r⊤x] and inf P∈F′(θ)P-VaR1−ε[˜r⊤x] = µ⊤x − ∥Φ−1(1 − ε)Σ1/2x∥2 with ε as claimed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' Substituting the above two equations into (7) and rearranging the terms then concludes our proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' Proof of Proposition 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' By the definition of ˆT, problem (9) can be rewritten as: max π∈(∆A)S ψ � i∈[N] wi · g(π, ˆP i) + (1 − ψ)max η∈R � � �η − 1 1 − ι � i∈[N] wi(η − g(π, ˆP i))+ � � �.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' By introducing auxiliary decision variables y ∈ RN, it can be further reformulated as: max ψ � i∈[N] wi · g(π, ˆP i) + (1 − ψ) � �η − 1 1 − ι � i∈[N] yi � � s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' yi ≥ wi(η − g(π, ˆP i)) ∀i ∈ [N] π ∈ (∆A)S,y ∈ RN +,η ∈ R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' (13) Ruan, Chen, Ho: Risk-Averse MDPs under Reward Ambiguity 26 Here we can express wi · g(π,P ) = max µ⊤x − αθ · ∥x∥2 − (1 − α) · ∥Φ−1(1 − ε)Σ1/2x∥2 s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' xs,a = πs,a · � a′∈A xs,a′ ∀(s,a) ∈ S × A (E − γ · ¯P )x = wi · p0 x ∈ RSA + (14) as in Lobo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content=' We can then, by combining (13) and (14), reformulate problem (9) as: max ψ � i∈[N] (µ⊤xi − αθ · ∥xi∥2 − (1 − α) · ∥Φ−1(1 − ε)Σ1/2xi∥2) + (1 − ψ)(η − 1 1 − ι � i∈[N] yi) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GtAzT4oBgHgl3EQfHftK/content/2301.01045v1.pdf'} +page_content='t.' 
\[
\begin{array}{cll}
\max & \displaystyle \psi\sum_{i\in[N]}\Big(\mu^\top x^i - \alpha\theta\cdot\|x^i\|_2 - (1-\alpha)\cdot\big\|\Phi^{-1}(1-\underline{\varepsilon})\Sigma^{1/2}x^i\big\|_2\Big) + (1-\psi)\Big(\eta - \frac{1}{1-\iota}\sum_{i\in[N]} y_i\Big) &\\[2mm]
\text{s.t.} & y_i - w_i\eta \ge \alpha\theta\cdot\|x^i\|_2 + (1-\alpha)\cdot\big\|\Phi^{-1}(1-\underline{\varepsilon})\Sigma^{1/2}x^i\big\|_2 - \mu^\top x^i & \forall i\in[N]\\
& x^i_{s,a} = \pi_{s,a}\cdot\sum_{a'\in\mathcal{A}} x^i_{s,a'} & \forall i\in[N],\,(s,a)\in\mathcal{S}\times\mathcal{A}\\
& (E-\gamma\cdot\bar{P}^i)\,x^i = w_i\cdot p_0 & \forall i\in[N]\\
& \pi\in(\Delta_A)^S,\;\eta\in\mathbb{R},\;x^i\in\mathbb{R}^{SA}_+\ \forall i\in[N],\;y\in\mathbb{R}^N_+.
\end{array}
\]
Now it is sufficient to focus on the second set of constraints
\[
x^i_{s,a} = \pi_{s,a}\cdot\sum_{a'\in\mathcal{A}} x^i_{s,a'} \quad \forall i\in[N],\,(s,a)\in\mathcal{S}\times\mathcal{A}. \tag{15}
\]
Since we only consider deterministic policies $\pi\in\{0,1\}^{SA}$ and $\sum_{a\in\mathcal{A}} x^i_{s,a}\in[0,\,w_i/(1-\gamma)]$ (see, e.g., lemma C.10 in Petrik (2010)), we have the McCormick relaxation (see, e.g., Petrik and Luss (2016)) of (15):
\[
\left.
\begin{array}{l}
x^i_{s,a} \le \sum_{a'\in\mathcal{A}} x^i_{s,a'}\\[1mm]
x^i_{s,a} \le \dfrac{w_i}{1-\gamma}\,\pi_{s,a}\\[2mm]
x^i_{s,a} \ge 0\\[1mm]
x^i_{s,a} \ge \dfrac{w_i}{1-\gamma}\,(\pi_{s,a}-1) + \sum_{a'\in\mathcal{A}} x^i_{s,a'}
\end{array}
\right\}
\quad \text{for all } i\in[N],\,(s,a)\in\mathcal{S}\times\mathcal{A}.
\]
Our conclusion then follows from the fact that the McCormick relaxation is exact when $\pi_{s,a}\in\{0,1\}$ (i.e., at the extreme values of the interval $[0,1]$). Q.E.D.
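For concreteness, the following is a minimal CVXPY sketch of these envelope constraints for a Boolean policy variable; the sizes, weights and variable names are illustrative placeholders (a mixed-integer-capable solver would be assumed downstream), not the authors' implementation.

```python
import cvxpy as cp
import numpy as np

S, A, N = 4, 3, 2                                # illustrative sizes
gamma = 0.95
w = np.ones(N) / N                               # illustrative weights w_i

pi = cp.Variable((S, A), boolean=True)           # deterministic policy
x = [cp.Variable((S, A), nonneg=True) for _ in range(N)]

constraints = [cp.sum(pi, axis=1) == 1]          # one action per state
for i in range(N):
    ub = w[i] / (1 - gamma)                      # upper bound on sum_{a'} x_{s,a'}
    row_sum = x[i] @ np.ones((A, 1))             # (S,1): sum_{a'} x_{s,a'}
    sums = row_sum @ np.ones((1, A))             # broadcast to (S,A)
    constraints += [
        x[i] <= sums,                            # x_{s,a} <= sum_{a'} x_{s,a'}
        x[i] <= ub * pi,                         # x_{s,a} <= (w_i/(1-gamma)) pi_{s,a}
        x[i] >= ub * (pi - 1) + sums,            # x_{s,a} >= (w_i/(1-gamma))(pi_{s,a}-1) + sum_{a'} x_{s,a'}
    ]
# x[i] >= 0 is enforced via nonneg=True; together these are the four McCormick inequalities.
```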
A.4. Proofs of Results in Section 6

Proof of Proposition 5. By (11), it is sufficient to focus on solving $\mathrm{Proj}_{B_{\ell_\Sigma(\cdot)}}(x)$. By eigenvalue decomposition, we have $\Sigma = G^\top D G$ with $D = \mathrm{diag}(d_1,\cdots,d_{SA})$ (this eigenvalue decomposition is not counted in the time complexity of the bisection method or of the AD-LPMM algorithm, since it is already carried out for computing $\Sigma^{1/2}$ in (8), before we solve (8)); thus we have
\[
\mathrm{Proj}_{B_{\ell_\Sigma(\cdot)}}(x) = \arg\min\Big\{\tfrac{1}{2}\,\|v-x\|_2^2 \;\Big|\; v^\top G^\top D G\, v \le 1,\; v\in\mathbb{R}^{SA}\Big\}.
\]
By the change of variable $u = Gv$ and letting $b = Gx$, it is sufficient to focus on the equivalent problem
\[
\arg\min\Big\{\tfrac{1}{2}\,\|u-b\|_2^2 \;\Big|\; u^\top D u \le 1,\; u\in\mathbb{R}^{SA}\Big\}, \tag{16}
\]
from which we can retrieve $v^\star = G^\top u^\star$. The Lagrangian function of (16) (with the introduced dual variable $\zeta\in\mathbb{R}_+$) is
\[
L(u;\zeta) = \tfrac{1}{2}\,\|u-b\|_2^2 + \zeta\,(u^\top D u - 1).
\]
Since (16) is a convex optimization problem, the KKT conditions are sufficient for the optimality of the primal and dual solutions:
\[
u^\top D u \le 1,\qquad \zeta \ge 0,\qquad \zeta\,(u^\top D u - 1) = 0,\qquad \nabla_u L(u;\zeta) = u - b + 2\zeta\cdot D u = 0,
\]
where for $\zeta = 0$ we have $u^\top D u \le 1$ and $u - b = 0$, while for $\zeta > 0$ we have $u^\top D u = 1$ and $(I + 2\zeta\cdot D)\,u - b = 0$. Therefore, if $b^\top D b \le 1$, we have $u^\star = b$; if $b^\top D b > 1$, it is sufficient to solve the equation $g(\zeta) = 1$, where
\[
g(\zeta) = \sum_{i\in[SA]} \frac{d_i b_i^2}{(1+2\zeta d_i)^2}.
\]
The function $g$ is monotonically decreasing on $[0,+\infty)$ with $\lim_{\zeta\to+\infty} g(\zeta) = 0$; thus we can apply the bisection method on the interval $[0,\bar{\zeta}]$ (where $\bar{\zeta}$ with $g(\bar{\zeta}) \le 1$ is the upper bound for the search, provided in Lemma 2) to locate $\zeta^\star$ and retrieve $u^\star_i = b_i/(1+2\zeta^\star d_i)$ for all $i\in[SA]$.
The pseudocode is provided in Algorithm 2. The time complexity of solving $P_y(x,\xi;c)$ is dominated by the bisection method, which has time complexity $O(\log(1/\delta'))$. Our conclusion follows from the fact that the computation in each iteration of the bisection takes time $O(SA)$. Q.E.D.

Algorithm 2: Bisection for Problem (16)
    Input: desired precision $\delta'$, initial lower bound $\underline{\zeta}\leftarrow 0$ and upper bound $\bar{\zeta}>0$
    if $g(0)\le 1$ then
        $u \leftarrow b$
    else
        while $|\bar{\zeta}-\underline{\zeta}| \ge \delta'$ do
            $\zeta \leftarrow 0.5\,(\underline{\zeta}+\bar{\zeta})$
            if $g(\zeta) \ge 1$ then $\underline{\zeta} \leftarrow \zeta$ else $\bar{\zeta} \leftarrow \zeta$
        end
        for $i = 1,\cdots,SA$ do $u_i \leftarrow b_i/(1+2\zeta d_i)$ end
    end
    Output: solution $u$

Lemma 2. The inequality $g(\zeta)\le 1$ holds for all $\zeta \ge \frac{1}{2 d_{i''}}\big(b_{i'}\sqrt{SA\, d_{i'}} - 1\big)$, where $i'\in\arg\max_{i\in[SA]} d_i b_i^2$ and $i''\in\arg\min_{i\in[SA]} d_i$.

Proof. Observe that
\[
g(\zeta) \le \sum_{i\in[SA]} \frac{d_{i'} b_{i'}^2}{(1+2\zeta d_i)^2} \le \frac{SA\, d_{i'} b_{i'}^2}{(1+2\zeta d_{i''})^2},
\]
from which we have
\[
\frac{SA\, d_{i'} b_{i'}^2}{(1+2\zeta d_{i''})^2} \le 1 \;\Longrightarrow\; g(\zeta) \le 1.
\]
Our conclusion thus follows by rearranging the terms of the inequality on the left-hand side. Q.E.D.

By Lemma 2, one can choose $\bar{\zeta} = \frac{1}{2 d_{i''}}\big(b_{i'}\sqrt{SA\, d_{i'}} - 1\big)$, where $i'\in\arg\max_{i\in[SA]} d_i b_i^2$ and $i''\in\arg\min_{i\in[SA]} d_i$, for Algorithm 2.
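The following is a minimal NumPy sketch of Algorithm 2, assuming the factorization $\Sigma = G^\top D G$ is available (e.g., from an eigendecomposition); function and variable names are illustrative.

```python
import numpy as np

def project_onto_ellipsoid(x, G, d, tol=1e-9):
    """Sketch of Algorithm 2: project x onto {v : v^T Sigma v <= 1} with Sigma = G^T diag(d) G."""
    b = G @ x
    g = lambda zeta: np.sum(d * b**2 / (1.0 + 2.0 * zeta * d)**2)
    if g(0.0) <= 1.0:                        # x already lies in the ellipsoid
        return x
    i1 = int(np.argmax(d * b**2))            # index i' of Lemma 2
    i2 = int(np.argmin(d))                   # index i'' of Lemma 2
    zeta_lb = 0.0
    zeta_ub = (abs(b[i1]) * np.sqrt(b.size * d[i1]) - 1.0) / (2.0 * d[i2])  # Lemma 2 bound
    while zeta_ub - zeta_lb >= tol:          # bisection on the decreasing function g
        zeta = 0.5 * (zeta_lb + zeta_ub)
        if g(zeta) >= 1.0:
            zeta_lb = zeta
        else:
            zeta_ub = zeta
    u = b / (1.0 + 2.0 * zeta_ub * d)        # u*_i = b_i / (1 + 2 zeta d_i)
    return G.T @ u                           # map back: v* = G^T u*

# illustrative usage: recover (D, G) from Sigma via eigendecomposition
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); Sigma = A @ A.T + np.eye(4)
d, V = np.linalg.eigh(Sigma)                 # Sigma = V diag(d) V^T, so G = V^T
v_star = project_onto_ellipsoid(3.0 * rng.normal(size=4), V.T, d)
print(v_star @ Sigma @ v_star)               # approximately 1 when the input was outside
```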
Proof of Proposition 6. Notice that it is sufficient to solve the $i$th subproblem
\[
\arg\min_{z\ge 0}\; \frac{c}{2}\,z^2 - (c x_i + \mu_i + \eta_i)\,z \;=\; \max\Big\{0,\; \frac{1}{c}\,\big(c x_i + \mu_i + \eta_i\big)\Big\}
\]
for all $i\in[SA]$, from which our conclusion follows. Q.E.D.

Proof of Proposition 7. By the definition of $Q(\cdot,\cdot)$, we have
\[
\begin{aligned}
P_x(y,z,\lambda,\xi,\eta;c,\nu,\hat{x})
&= \arg\min_x\; \alpha\theta\cdot\|x\|_2 + x^\top\big((E-\gamma\cdot\bar{P})^\top\lambda + \xi + \eta\big)
+ \frac{c}{2}\cdot\left\|\begin{pmatrix} (E-\gamma\cdot\bar{P})(x-\hat{x}) + (E-\gamma\cdot\bar{P})\hat{x} - p_0\\ x-\hat{x}+\hat{x}-y\\ x-\hat{x}+\hat{x}-z \end{pmatrix}\right\|_2^2
+ \frac{1}{2}\cdot\ell^2_{Q(c,\nu)}(x-\hat{x})\\
&= \arg\min_x\; \alpha\theta\cdot\|x\|_2 + x^\top\big((E-\gamma\cdot\bar{P})^\top\lambda + \xi + \eta\big)
+ \frac{c}{2}\cdot\left\|\begin{pmatrix} (E-\gamma\cdot\bar{P})(x-\hat{x})\\ x-\hat{x}\\ x-\hat{x} \end{pmatrix}\right\|_2^2
+ c\cdot x^\top\Big((E-\gamma\cdot\bar{P})^\top\big((E-\gamma\cdot\bar{P})\hat{x} - p_0\big) + 2\cdot\hat{x} - y - z\Big)
+ \frac{1}{2}\cdot\ell^2_{Q(c,\nu)}(x-\hat{x})\\
&= \arg\min_x\; \frac{\alpha\theta}{c\nu}\cdot\|x\|_2 + x^\top w + \frac{1}{2}\cdot\|x-\hat{x}\|_2^2\\
&= \arg\min_x\; \frac{\alpha\theta}{c\nu}\cdot\|x\|_2 + \frac{1}{2}\cdot\big\|x-(\hat{x}-w)\big\|_2^2\\
&= \left(1 - \frac{\alpha\theta}{c\nu\,\max\big\{\|\hat{x}-w\|_2,\; \tfrac{\alpha\theta}{c\nu}\big\}}\right)(\hat{x}-w),
\end{aligned}
\]
where we denote
\[
w = \frac{1}{c\nu}\Big((E-\gamma\cdot\bar{P})^\top\lambda + \xi + \eta\Big) + \frac{1}{\nu}\Big((E-\gamma\cdot\bar{P})^\top\big((E-\gamma\cdot\bar{P})\hat{x} - p_0\big) + 2\cdot\hat{x} - y - z\Big),
\]
and the last equality holds by, e.g., example 6.19 in Beck (2017). The computation time is dominated by computing $\|\hat{x}-w\|_2$, which is $O(SA)$. Q.E.D.
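For concreteness, a minimal sketch of these two closed-form updates follows, assuming the vectors and scalars ($x$, $\mu$, $\eta$, $\hat{x}$, $w$, $\alpha$, $\theta$, $c$, $\nu$) have been assembled as in the proofs above; this is an illustration, not the authors' code.

```python
import numpy as np

def prox_nonneg_quadratic(x, mu, eta, c):
    """Proposition 6: elementwise argmin_{z >= 0} (c/2) z^2 - (c x_i + mu_i + eta_i) z."""
    return np.maximum(0.0, (c * x + mu + eta) / c)

def prox_scaled_l2(a, tau):
    """Prox of tau * ||.||_2 at a (tau > 0), used for the x-update in Proposition 7."""
    return (1.0 - tau / max(np.linalg.norm(a), tau)) * a

# hypothetical x-update of Proposition 7, with w assembled as in the proof:
# x_new = prox_scaled_l2(x_hat - w, alpha * theta / (c * nu))
```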
B. Evaluation of VaR and CVaR of Student's t-Distribution

The VaR of a Student's t-distribution at threshold $\varepsilon$ is the lower $\varepsilon$-quantile of the distribution, which can be looked up in a table in, e.g., Hogg and Craig (1995) (for some common values of $\varepsilon < 0.5$). We provide the calculation of the CVaR as follows (with degrees of freedom $\delta > 1$ and $v := \mathbb{P}_{t\text{-dist}}\text{-VaR}_\varepsilon(\tilde{r})$ assumed known):
\[
\mathbb{P}_{t\text{-dist}}\text{-CVaR}_\varepsilon(\tilde{r})
= \frac{1}{\varepsilon}\cdot\frac{\Gamma\big(\frac{\delta+1}{2}\big)}{(\pi\delta)^{1/2}\,\Gamma\big(\frac{\delta}{2}\big)}\int_{-\infty}^{v}\frac{r}{\big(1+\frac{r^2}{\delta}\big)^{\frac{\delta+1}{2}}}\,\mathrm{d}r
= \frac{1}{\varepsilon}\cdot\frac{\delta^{1/2}\,\Gamma\big(\frac{\delta+1}{2}\big)}{2\pi^{1/2}\,\Gamma\big(\frac{\delta}{2}\big)}\int_{+\infty}^{1+\frac{v^2}{\delta}} u^{-\frac{\delta+1}{2}}\,\mathrm{d}u
= -\frac{\delta^{1/2}\,\Gamma\big(\frac{\delta+1}{2}\big)}{\varepsilon\,\pi^{1/2}(\delta-1)\,\Gamma\big(\frac{\delta}{2}\big)}\cdot\Big(1+\frac{v^2}{\delta}\Big)^{-\frac{\delta-1}{2}},
\]
where the first equality follows from the definition of the CVaR and the PDF of the t-distribution herein, and the second equality holds by integration by substitution (with $u = 1+r^2/\delta$).
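The closed form can be checked numerically. Below is a minimal SciPy sketch that computes the lower-$\varepsilon$ VaR via the t quantile and the CVaR via the expression above, and compares against a Monte Carlo estimate; the values of $\varepsilon$ and $\delta$ are illustrative.

```python
import numpy as np
from scipy.stats import t as t_dist
from scipy.special import gammaln

def t_var_cvar(eps, dof):
    """Lower-eps VaR and CVaR of a standard Student's t variable with dof > 1 degrees of freedom."""
    v = t_dist.ppf(eps, dof)                              # VaR: the lower eps-quantile
    log_coef = (0.5 * np.log(dof) + gammaln((dof + 1) / 2)
                - np.log(eps) - 0.5 * np.log(np.pi)
                - np.log(dof - 1) - gammaln(dof / 2))
    cvar = -np.exp(log_coef) * (1 + v**2 / dof) ** (-(dof - 1) / 2)
    return v, cvar

# illustrative sanity check against a Monte Carlo estimate of E[r | r <= VaR]
eps, dof = 0.05, 5.0
v, cvar = t_var_cvar(eps, dof)
samples = t_dist.rvs(dof, size=1_000_000, random_state=0)
print(v, cvar, samples[samples <= v].mean())
```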
C. Preliminaries on Elliptical Distributions

The probability density function of an elliptical reference distribution $\mathbb{P}_{(\mu,\Sigma,g)}$ is given by
\[
f(r) = k\cdot g\Big(\tfrac{1}{2}\,(r-\mu)^\top\Sigma^{-1}(r-\mu)\Big),
\]
where $k$ is a positive normalization scalar, $\mu$ is a mean vector, $\Sigma$ is a positive definite matrix and $g$ is a generating function. The elliptical family is a broad class of distributions that includes, for example, the multivariate normal distribution, the multivariate t-distribution and the multivariate logistic distribution as special cases. One notable property of elliptical distributions is linearity: any linear combination of elliptically distributed random variables still follows an elliptical distribution. That is, for any random vector $\tilde{r}\sim\mathbb{P}_{(\mu,\Sigma,g)}$, it holds that $\tilde{r}^\top x \sim \mathbb{P}_{(\mu_x,\sigma_x^2,g)}$ with $\mu_x = \mu^\top x$ and $\sigma_x = \sqrt{x^\top\Sigma x}$. Indeed, we can express the combination as $\tilde{r}^\top x = \mu_x + \sigma_x\tilde{z}$, where $\tilde{z}\sim\mathbb{P}_{(0,1,g)}$ is a standard elliptically distributed random variable whose probability density function and cumulative distribution function are $\phi(z) = k\cdot g(z^2/2)$ and $\Phi(x) = \int_{-\infty}^{x} k\cdot g(z^2/2)\,\mathrm{d}z$, respectively. For a concrete example, consider the standard normal distribution, for which the normalization scalar and generating function are $k = 1/\sqrt{2\pi}$ and $g(x) = \exp(-x)$, respectively.
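As a quick illustration of the linearity property for the Gaussian member of the family, the following Monte Carlo sketch uses randomly generated $\mu$, $\Sigma$ and $x$ (all illustrative) and checks the induced mean, standard deviation and a quantile of $\tilde{r}^\top x$.

```python
import numpy as np
from scipy.stats import norm

# If r ~ N(mu, Sigma), then r^T x ~ N(mu^T x, x^T Sigma x); data below are illustrative.
rng = np.random.default_rng(0)
n = 6
mu = rng.normal(size=n)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + np.eye(n)                    # positive definite
x = rng.normal(size=n)

proj = rng.multivariate_normal(mu, Sigma, size=200_000) @ x
mu_x, sigma_x = mu @ x, np.sqrt(x @ Sigma @ x)
print(proj.mean(), mu_x)                                  # sample mean vs mu^T x
print(proj.std(ddof=1), sigma_x)                          # sample std vs sqrt(x^T Sigma x)
print((proj <= mu_x + sigma_x * norm.ppf(0.9)).mean())    # should be close to 0.9
```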
D. Distributionally Optimistic MDPs

In contrast to the robust model, the decision maker sometimes prefers exploration over exploitation when she would like to learn more information about the MDP. As such, we could instead adopt an optimistic counterpart that focuses on the best case, motivating the following distributionally optimistic MDP:
\[
\ell_{\mathrm{O}}(\theta) = \max_{x\in\mathcal{X}}\;\sup_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{E}_{\mathbb{P}}\big[\tilde{r}^\top x\big]. \tag{17}
\]
In contrast to the robust case, here our decision depends instead on the best possible (expected) outcome, which exactly embodies optimism. We summarize the reformulation of (17) as follows.

Proposition 8. The distributionally optimistic MDP (17) is equivalent to the optimization problem
\[
\ell_{\mathrm{O}}(\theta) = \max_{x\in\mathcal{X}}\; \mathbb{E}_{\hat{\mathbb{P}}}\big[\tilde{r}^\top x\big] + \theta\|x\|_*.
\]

Proof. It is sufficient to rewrite the objective of (17) as follows:
\[
\sup_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{E}_{\mathbb{P}}\big[\tilde{r}^\top x\big]
= -\inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{E}_{\mathbb{P}}\big[-\tilde{r}^\top x\big]
= -\Big(\mathbb{E}_{\hat{\mathbb{P}}}\big[-\tilde{r}^\top x\big] - \theta\|x\|_*\Big)
= \mathbb{E}_{\hat{\mathbb{P}}}\big[\tilde{r}^\top x\big] + \theta\|x\|_*,
\]
where the second identity follows similar lines as in the proof of Proposition 1. Q.E.D.

The reformulation in Proposition 8 is a reverse conic program that is, in general, non-convex. However, it can be recast as a mixed-integer linear program, provided that $\|\cdot\|_*$ is the commonly used $L_1$-norm or $L_\infty$-norm. Such a mixed-integer linear program can be solved by state-of-the-art approaches.

E. Distributionally Optimistic Chance-Constrained Model

In a distributionally optimistic chance-constrained MDP model, we focus on the best case in which, with high probability, the reward is no smaller than some lower bound that we maximize. Formally, the distributionally optimistic chance-constrained MDP model is formulated as follows:
\[
\ell_{\mathrm{DOCC}}(\theta,\varepsilon) = \left\{
\begin{array}{cl}
\max & y\\[1mm]
\text{s.t.} & \displaystyle\sup_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[\tilde{r}^\top x \ge y\big] \ge 1-\varepsilon\\[2mm]
& x\in\mathcal{X},\; y\in\mathbb{R}.
\end{array}\right. \tag{18}
\]
The optimistic chance-constrained model (18) is also equivalent to a nominal chance-constrained model, however at a less risky level. Before formally establishing this argument, we introduce the following two lemmas.
Lemma 3. The worst-case (largest) probability of the random vector $\tilde{r}$ attaining a value in the set $\mathcal{R}$,
\[
\sup_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[\tilde{r}\in\mathcal{R}\big], \tag{19}
\]
is equal to
\[
\min_{\lambda\ge 0}\Big\{\lambda\theta + \int_{r\in\mathbb{R}^{SA}}\big(\lambda\cdot\mathrm{dist}(r,\mathcal{R}) - 1\big)^-\,\mathrm{d}\hat{\mathbb{P}}_r\Big\}.
\]
Here, we use $\mathrm{dist}(r,\mathcal{R}) = \inf\{\|r-\hat{r}\| \mid \hat{r}\in\mathcal{R}\}$ to denote the distance from the vector $r\in\mathbb{R}^{SA}$ to the set $\mathcal{R}\subseteq\mathbb{R}^{SA}$.

Proof. Using theorem 1 in Gao and Kleywegt (2016) or theorem 1 in Blanchet and Murthy (2019), the uncertainty quantification problem (19) is equal to
\[
\min_{\lambda\ge 0}\Big\{\lambda\theta - \int_{r\in\mathbb{R}^{SA}}\inf_{w\in\mathbb{R}^{SA}}\big\{\lambda\|w-r\| - \mathbb{I}[w\in\mathcal{R}]\big\}\,\mathrm{d}\hat{\mathbb{P}}_r\Big\}, \tag{20}
\]
where $\mathbb{I}$ is the 0-1 indicator function. Considering the inner infimum in the objective of the above minimization problem, we have
\[
\inf_{w\in\mathbb{R}^{SA}}\big\{\lambda\|w-r\| - \mathbb{I}[w\in\mathcal{R}]\big\} = -\big(\lambda\cdot\mathrm{dist}(r,\mathcal{R}) - 1\big)^-. \tag{21}
\]
Indeed, if $r\in\mathcal{R}$ (for which $\mathrm{dist}(r,\mathcal{R}) = 0$), then by choosing $w = r$ it holds that
\[
\inf_{w\in\mathbb{R}^{SA}}\big\{\lambda\|w-r\| - \mathbb{I}[w\in\mathcal{R}]\big\} = -1 = -\big(\lambda\cdot\mathrm{dist}(r,\mathcal{R}) - 1\big)^-,
\]
whereas if $r\notin\mathcal{R}$, then it holds that
\[
\inf_{w\in\mathbb{R}^{SA}}\big\{\lambda\|w-r\| - \mathbb{I}[w\in\mathcal{R}]\big\}
= \min\Big\{\inf_{w\in\mathcal{R}}\{\lambda\|w-r\| - 1\},\; \inf_{w\notin\mathcal{R}}\lambda\|w-r\|\Big\}
= \min\Big\{\inf_{w\in\mathcal{R}}\{\lambda\|w-r\| - 1\},\; 0\Big\}
= -\big(\lambda\cdot\mathrm{dist}(r,\mathcal{R}) - 1\big)^-.
\]
Plugging expression (21) into problem (20) gives the desired result, which, by proposition 3 in Gao and Kleywegt (2016), holds regardless of whether $\mathcal{R}$ is open or closed. Q.E.D.

Lemma 4. The distributionally optimistic chance constraint
\[
\inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[\tilde{r}\in\mathcal{R}\big] \le \varepsilon \tag{22}
\]
with a risk threshold $\varepsilon\in(0,1)$ is satisfiable if and only if
\[
\hat{\mathbb{P}}\text{-CVaR}_\varepsilon\big[-\mathrm{dist}(\tilde{r},\bar{\mathcal{R}})\big] \ge -\frac{\theta}{1-\varepsilon},
\]
where $\bar{\mathcal{R}} = \mathbb{R}^{SA}\setminus\mathcal{R}$ is the complement of the set of undesired events $\mathcal{R}$.

Proof.
We first re-express (22) as
\[
\sup_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[\tilde{r}\in\bar{\mathcal{R}}\big] \ge 1-\varepsilon.
\]
Using Lemma 3, the above constraint is equivalent to
\[
\min_{\lambda\ge 0}\Big\{\lambda\theta + \int_{r\in\mathbb{R}^{SA}}\big(\lambda\cdot\mathrm{dist}(r,\bar{\mathcal{R}}) - 1\big)^-\,\mathrm{d}\hat{\mathbb{P}}_r\Big\} \ge 1-\varepsilon. \tag{23}
\]
The left-hand side can be represented as
\[
\min\Big\{\min_{\lambda>0}\Big\{\lambda\theta + \int_{r\in\mathbb{R}^{SA}}\big(\lambda\cdot\mathrm{dist}(r,\bar{\mathcal{R}}) - 1\big)^-\,\mathrm{d}\hat{\mathbb{P}}_r\Big\},\; 1\Big\}.
\]
Since $1\ge 1-\varepsilon$, this re-expression implies that constraint (23) is equivalent to
\[
\min_{\lambda>0}\Big\{\lambda\theta + \int_{r\in\mathbb{R}^{SA}}\big(\lambda\cdot\mathrm{dist}(r,\bar{\mathcal{R}}) - 1\big)^-\,\mathrm{d}\hat{\mathbb{P}}_r\Big\} \ge 1-\varepsilon.
\]
Multiplying both sides by $(\lambda(1-\varepsilon))^{-1} > 0$ and applying the change of variable $\tau = -1/\lambda$, we arrive at
\[
\min_{\tau<0}\Big\{\frac{1}{1-\varepsilon}\int_{r\in\mathbb{R}^{SA}}\big(-\mathrm{dist}(r,\bar{\mathcal{R}}) - \tau\big)^+\,\mathrm{d}\hat{\mathbb{P}}_r + \tau\Big\} \ge -\frac{\theta}{1-\varepsilon},
\]
which, together with the fact that
\[
\min_{\tau\ge 0}\Big\{\frac{1}{1-\varepsilon}\int_{r\in\mathbb{R}^{SA}}\big(-\mathrm{dist}(r,\bar{\mathcal{R}}) - \tau\big)^+\,\mathrm{d}\hat{\mathbb{P}}_r + \tau\Big\} \ge 0 \ge -\frac{\theta}{1-\varepsilon},
\]
is equivalent to
\[
\min_{\tau\in\mathbb{R}}\Big\{\frac{1}{1-\varepsilon}\int_{r\in\mathbb{R}^{SA}}\big(-\mathrm{dist}(r,\bar{\mathcal{R}}) - \tau\big)^+\,\mathrm{d}\hat{\mathbb{P}}_r + \tau\Big\} \ge -\frac{\theta}{1-\varepsilon},
\]
where the left-hand side is precisely $\hat{\mathbb{P}}\text{-CVaR}_\varepsilon[-\mathrm{dist}(\tilde{r},\bar{\mathcal{R}})]$. Q.E.D.

Now we are ready to establish the equivalence between the chance-constrained model and its optimistic counterpart (with an adjusted risk threshold).

Lemma 5. Suppose that, in the Wasserstein ambiguity set (3), the reference distribution is an elliptical distribution $\hat{\mathbb{P}} = \mathbb{P}_{(\mu,\Sigma,g)}$ and the Wasserstein distance is equipped with a Mahalanobis norm associated with the positive definite matrix $\Sigma$. The distributionally optimistic robust chance constraint
\[
\exists\, \mathbb{P}\in\mathcal{F}(\theta):\; \mathbb{P}\big[\tilde{r}^\top x \ge y\big] \ge 1-\varepsilon
\]
is satisfiable if and only if $\mathbb{P}_{(\mu,\Sigma,g)}[\tilde{r}^\top x \ge y] \ge 1-\bar{\varepsilon}$, where $\bar{\varepsilon} = 1-\Phi(\eta^\star) \ge \varepsilon$ with $\eta^\star$ being the smallest $\eta \le \Phi^{-1}(1-\varepsilon)$ that satisfies
\[
\eta\big(\Phi(\eta)-(1-\varepsilon)\big) + \int_{\eta^2/2}^{(\Phi^{-1}(1-\varepsilon))^2/2} k\,g(z)\,\mathrm{d}z \le \theta.
\]

Proof.
We first look at the individual distributionally optimistic robust chance constraint
\[
\exists\, \mathbb{P}\in\mathcal{F}(\theta):\; \mathbb{P}\big[\tilde{r}^\top x \ge y\big] \ge 1-\varepsilon
\]
for some generic coefficient vector $x\in\mathbb{R}^{SA}$. The above chance constraint is equivalent to
\[
\sup_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[\tilde{r}^\top x \ge y\big] \ge 1-\varepsilon
\iff \sup_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[\tilde{r}^\top x > y\big] \ge 1-\varepsilon
\iff \inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[\tilde{r}^\top x \le y\big] \le \varepsilon,
\]
where, for the first equivalence, by proposition 3 in Gao and Kleywegt (2016), it is indifferent whether the inequality inside the probability is strict or weak. Exploiting the definition of VaR, we note that
\[
\inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[\tilde{r}^\top x \le y\big] \le \varepsilon
\iff \inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\text{-VaR}_{1-\varepsilon}\big[y-\tilde{r}^\top x\big] \le 0.
\]
Hence, with the translation invariance of VaR, it is sufficient to focus on
\[
\inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\text{-VaR}_{1-\varepsilon}\big[-\tilde{r}^\top x\big]
\triangleq \inf_{v\in\mathbb{R}}\Big\{v \;\Big|\; \inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[-\tilde{r}^\top x > v\big] \le \varepsilon\Big\}. \tag{24}
\]
By Lemma 4 and the assumption of the Mahalanobis norm, we have
\[
\inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\big[-\tilde{r}^\top x > v\big] \le \varepsilon
\iff \mathbb{P}_{(\mu,\Sigma,g)}\text{-CVaR}_\varepsilon\big[-\mathrm{dist}(\tilde{r},\bar{\mathcal{R}})\big] \ge -\frac{\theta}{1-\varepsilon}
\iff -\mathbb{P}_{(\mu,\Sigma,g)}\text{-CVaR}_\varepsilon\big[-(-\tilde{r}^\top x - v)^+\big] \le \frac{\theta\|x\|_{\Sigma^{-1}}}{1-\varepsilon},
\]
where $\bar{\mathcal{R}} = \{r \mid -r^\top x \le v\}$ and we leverage the closed-form expression $\mathrm{dist}(\tilde{r},\bar{\mathcal{R}}) = (-\tilde{r}^\top x - v)^+/\|x\|_{\Sigma^{-1}}$; see, e.g., lemma 2 in Chen et al. (2018). Let $\mathbb{P}_S = \mathbb{P}_{(\mu,\Sigma,g)}$ for simplicity. By the property of elliptical distributions, for $\tilde{r}\sim\mathbb{P}_S$ and any real vector $x$, we have $-\tilde{r}^\top x \sim \mathbb{P}_{(\mu_S,\sigma_S^2,g)} = \mathbb{P}_{(-\mu^\top x,\,x^\top\Sigma x,\,g)}$. We denote its probability density function by
\[
h(z) = \frac{k}{\sigma_S}\,g\Big(\frac{(z-\mu_S)^2}{2\sigma_S^2}\Big).
\]
The left-hand side of the constraint can be further transformed as
\[
\begin{aligned}
-\mathbb{P}_S\text{-CVaR}_\varepsilon\big[-(-\tilde{r}^\top x - v)^+\big]
&= -\mathbb{E}_{\mathbb{P}_S}\Big[-(-\tilde{r}^\top x - v)^+ \;\Big|\; -(-\tilde{r}^\top x - v)^+ \ge \mathbb{P}_S\text{-VaR}_\varepsilon\big[-(-\tilde{r}^\top x - v)^+\big]\Big]\\
&= -\frac{1}{1-\varepsilon}\int_{-\infty}^{\sup\{z \,\mid\, -(z-v)^+ \ge \mathbb{P}_S\text{-VaR}_\varepsilon[-(-\tilde{r}^\top x - v)^+]\}} -(z-v)^+\,h(z)\,\mathrm{d}z\\
&= \frac{1}{1-\varepsilon}\int_{v}^{\sup\{z \,\mid\, -(z-v)^+ \ge \mathbb{P}_S\text{-VaR}_\varepsilon[-(-\tilde{r}^\top x - v)^+]\}} (z-v)\,h(z)\,\mathrm{d}z\\
&= \frac{1}{1-\varepsilon}\int_{v}^{\mathbb{P}_S\text{-VaR}_{1-\varepsilon}[-\tilde{r}^\top x]} (z-v)\,h(z)\,\mathrm{d}z,
\end{aligned}
\]
in which the last equality holds since
\[
\begin{aligned}
\sup\big\{z \mid -(z-v)^+ \ge \mathbb{P}_S\text{-VaR}_\varepsilon[-(-\tilde{r}^\top x - v)^+]\big\}
&= \sup\big\{z \mid \min\{v-z,0\} \ge \mathbb{P}_S\text{-VaR}_\varepsilon[\min\{v+\tilde{r}^\top x,0\}]\big\}\\
&= \sup\big\{z \mid \min\{-z,-v\} \ge \mathbb{P}_S\text{-VaR}_\varepsilon[\min\{\tilde{r}^\top x,-v\}]\big\}\\
&= \sup\big\{z \mid -z \ge \mathbb{P}_S\text{-VaR}_\varepsilon[\min\{\tilde{r}^\top x,-v\}]\big\}\\
&= \sup\big\{z \mid z \le \mathbb{P}_S\text{-VaR}_{1-\varepsilon}[\max\{-\tilde{r}^\top x,v\}]\big\}\\
&= \sup\big\{z \mid z \le \mathbb{P}_S\text{-VaR}_{1-\varepsilon}[-\tilde{r}^\top x]\big\}\\
&= \mathbb{P}_S\text{-VaR}_{1-\varepsilon}\big[-\tilde{r}^\top x\big].
\end{aligned}
\]
Here, the second equality is due to the translation invariance of VaR, the third one follows from $-v \ge \mathbb{P}_S\text{-VaR}_\varepsilon[\min\{\tilde{r}^\top x,-v\}]$, and the fifth one holds because, for any $\varepsilon\in(0,1)$, the distributionally optimistic robust VaR satisfies
\[
v = \inf_{\mathbb{P}\in\mathcal{F}(\theta)} \mathbb{P}\text{-VaR}_{1-\varepsilon}\big[-\tilde{r}^\top x\big] \le \mathbb{P}_S\text{-VaR}_{1-\varepsilon}\big[-\tilde{r}^\top x\big], \tag{25}
\]
so that the $(1-\varepsilon)$-quantiles of $-\tilde{r}^\top x$ and $\max\{-\tilde{r}^\top x,v\}$ coincide. Let us denote $q_{1-\varepsilon} = \mathbb{P}_S\text{-VaR}_{1-\varepsilon}[-\tilde{r}^\top x]$, which, by its definition, satisfies
\[
\frac{q_{1-\varepsilon}-\mu_S}{\sigma_S} = \mathbb{P}_S\text{-VaR}_{1-\varepsilon}\Big[\frac{-\tilde{r}^\top x-\mu_S}{\sigma_S}\Big] = \mathbb{P}_{(0,1,g)}\text{-VaR}_{1-\varepsilon}[\tilde{z}] = \Phi^{-1}(1-\varepsilon).
\]
Here, the first equality holds by the translation invariance and the positive homogeneity of VaR, while the last one follows from the definition of VaR under the standard elliptical distribution $\mathbb{P}_{(0,1,g)}$. Following the last reformulation of the constraint, we further have
\[
\frac{1}{1-\varepsilon}\int_{v}^{q_{1-\varepsilon}} (z-v)\,h(z)\,\mathrm{d}z
= \frac{1}{1-\varepsilon}\int_{v}^{q_{1-\varepsilon}} z\cdot\frac{k}{\sigma_S}\,g\Big(\frac{(z-\mu_S)^2}{2\sigma_S^2}\Big)\mathrm{d}z
- \frac{v}{1-\varepsilon}\int_{v}^{q_{1-\varepsilon}} \frac{k}{\sigma_S}\,g\Big(\frac{(z-\mu_S)^2}{2\sigma_S^2}\Big)\mathrm{d}z.
\]
For its first component, we have
\[
\begin{aligned}
\frac{1}{1-\varepsilon}\int_{v}^{q_{1-\varepsilon}} z\cdot\frac{k}{\sigma_S}\,g\Big(\frac{(z-\mu_S)^2}{2\sigma_S^2}\Big)\mathrm{d}z
&= \frac{1}{1-\varepsilon}\int_{v}^{q_{1-\varepsilon}} \frac{z-\mu_S}{\sigma_S}\cdot k\, g\Big(\frac{(z-\mu_S)^2}{2\sigma_S^2}\Big)\mathrm{d}z
+ \frac{1}{1-\varepsilon}\int_{v}^{q_{1-\varepsilon}} \frac{\mu_S}{\sigma_S}\cdot k\, g\Big(\frac{(z-\mu_S)^2}{2\sigma_S^2}\Big)\mathrm{d}z\\
&= \frac{\sigma_S}{1-\varepsilon}\int_{v}^{q_{1-\varepsilon}} \frac{z-\mu_S}{\sigma_S}\cdot k\, g\Big(\frac{(z-\mu_S)^2}{2\sigma_S^2}\Big)\,\mathrm{d}\Big(\frac{z-\mu_S}{\sigma_S}\Big)
+ \frac{\mu_S}{1-\varepsilon}\Big(\Phi\Big(\frac{q_{1-\varepsilon}-\mu_S}{\sigma_S}\Big) - \Phi\Big(\frac{v-\mu_S}{\sigma_S}\Big)\Big)\\
&= \frac{\sigma_S}{1-\varepsilon}\int_{\frac{v-\mu_S}{\sigma_S}}^{\frac{q_{1-\varepsilon}-\mu_S}{\sigma_S}} t\cdot k\, g\Big(\frac{t^2}{2}\Big)\,\mathrm{d}t
+ \frac{\mu_S}{1-\varepsilon}\Big(\Phi\Big(\frac{q_{1-\varepsilon}-\mu_S}{\sigma_S}\Big) - \Phi\Big(\frac{v-\mu_S}{\sigma_S}\Big)\Big)\\
&= \frac{\sigma_S}{1-\varepsilon}\int_{\frac{(v-\mu_S)^2}{2\sigma_S^2}}^{\frac{(q_{1-\varepsilon}-\mu_S)^2}{2\sigma_S^2}} k\, g(z)\,\mathrm{d}z
+ \frac{\mu_S}{1-\varepsilon}\Big(\Phi\Big(\frac{q_{1-\varepsilon}-\mu_S}{\sigma_S}\Big) - \Phi\Big(\frac{v-\mu_S}{\sigma_S}\Big)\Big),
\end{aligned}
\]
while for the second component,
it holds that
\[
\frac{v}{1-\varepsilon}\int_{v}^{q_{1-\varepsilon}} \frac{k}{\sigma_S}\,g\Big(\frac{(z-\mu_S)^2}{2\sigma_S^2}\Big)\mathrm{d}z
= \frac{v}{1-\varepsilon}\int_{\frac{v-\mu_S}{\sigma_S}}^{\frac{q_{1-\varepsilon}-\mu_S}{\sigma_S}} k\cdot g\Big(\frac{z^2}{2}\Big)\mathrm{d}z
= \frac{v}{1-\varepsilon}\Big(\Phi\Big(\frac{q_{1-\varepsilon}-\mu_S}{\sigma_S}\Big) - \Phi\Big(\frac{v-\mu_S}{\sigma_S}\Big)\Big).
\]
Hence, combining the constraint with (25), we obtain the following equivalent expression for problem (24):
\[
\begin{array}{cl}
\inf & v\\[1mm]
\text{s.t.} & \displaystyle \int_{\frac{(v-\mu_S)^2}{2\sigma_S^2}}^{\frac{(q_{1-\varepsilon}-\mu_S)^2}{2\sigma_S^2}} k\cdot g(z)\,\mathrm{d}z + \frac{\mu_S - v}{\sigma_S}\Big(\Phi\Big(\frac{q_{1-\varepsilon}-\mu_S}{\sigma_S}\Big) - \Phi\Big(\frac{v-\mu_S}{\sigma_S}\Big)\Big) \le \frac{\theta\|x\|_{\Sigma^{-1}}}{\sigma_S} = \theta\\[3mm]
& v \le \mathbb{P}_S\text{-VaR}_{1-\varepsilon}\big[-\tilde{r}^\top x\big]\\[1mm]
& v\in\mathbb{R},
\end{array}
\]
where the equality follows from the definition of the Mahalanobis norm. Letting $\eta = (v-\mu_S)/\sigma_S$, the best-case VaR now becomes
\[
\begin{array}{cl}
\inf & \mu_S + \sigma_S\,\eta\\[1mm]
\text{s.t.} & \displaystyle \int_{\eta^2/2}^{(\Phi^{-1}(1-\varepsilon))^2/2} k\cdot g(z)\,\mathrm{d}z - \eta\cdot\big(1-\varepsilon-\Phi(\eta)\big) \le \theta\\[2mm]
& \eta \le \Phi^{-1}(1-\varepsilon)\\[1mm]
& \eta\in\mathbb{R}.
\end{array}
\tag{26}
\]
The function
\[
V(\eta) \triangleq \int_{\eta^2/2}^{(\Phi^{-1}(1-\varepsilon))^2/2} k\cdot g(z)\,\mathrm{d}z - \eta\cdot\big(1-\varepsilon-\Phi(\eta)\big)
\]
is monotonically decreasing on $(-\infty,\Phi^{-1}(1-\varepsilon))$ since, for any $\eta < \Phi^{-1}(1-\varepsilon)$, it holds that
\[
V'(\eta) = -\eta\cdot k\cdot g\Big(\frac{\eta^2}{2}\Big) - (1-\varepsilon) + \Phi(\eta) + \eta\,\phi(\eta) = \Phi(\eta) - (1-\varepsilon) < 0.
\]
Thus problem (26) can be efficiently solved by a bisection algorithm, and the optimal $\eta^\star$ as claimed can be obtained. Finally, the result can be obtained as follows:
\[
\exists\, \mathbb{P}\in\mathcal{F}(\theta):\; \mathbb{P}\big[\tilde{r}^\top x \ge y\big] \ge 1-\varepsilon
\iff -y \ge \sigma_S\,\eta^\star + \mu_S
\iff \frac{-y-\mu_S}{\sigma_S} \ge \eta^\star
\iff \Phi\Big(\frac{-y-\mu_S}{\sigma_S}\Big) \ge \Phi(\eta^\star)
\iff \mathbb{P}_{(\mu,\Sigma,g)}\Big[\frac{\tilde{r}^\top x-\mu_S}{\sigma_S} \ge \frac{y-\mu_S}{\sigma_S}\Big] \ge 1-\bar{\varepsilon}
\iff \mathbb{P}_{(\mu,\Sigma,g)}\big[\tilde{r}^\top x \ge y\big] \ge 1-\bar{\varepsilon}.
\]
Q.E.D.
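For the Gaussian member of the elliptical family, $k\,g(z^2/2)$ is the standard normal density, so the integral in $V(\eta)$ evaluates to $\phi(\eta)-\phi(\Phi^{-1}(1-\varepsilon))$ and the bisection can be coded in a few lines. The sketch below (with illustrative $\varepsilon$ and $\theta$) is an assumption-laden specialization of this step, not a general routine.

```python
import numpy as np
from scipy.stats import norm

def adjusted_risk_level(eps, theta, tol=1e-10):
    """eta* and eps_bar = 1 - Phi(eta*) of Lemma 5, specialized to a Gaussian reference."""
    c = norm.ppf(1.0 - eps)
    # For the Gaussian case, int_{eta^2/2}^{c^2/2} k g(z) dz = phi(eta) - phi(c).
    V = lambda eta: norm.pdf(eta) - norm.pdf(c) - eta * (1.0 - eps - norm.cdf(eta))
    lo = c - 1.0
    while V(lo) <= theta:             # expand downward until V(lo) > theta
        lo -= 1.0
    hi = c                            # V(c) = 0 <= theta, so c is feasible
    while hi - lo > tol:              # V is decreasing on (-inf, c]
        mid = 0.5 * (lo + hi)
        if V(mid) <= theta:
            hi = mid
        else:
            lo = mid
    return hi, 1.0 - norm.cdf(hi)

eta_star, eps_bar = adjusted_risk_level(eps=0.05, theta=0.01)
print(eta_star, eps_bar)              # eps_bar >= eps, consistent with Lemma 5
```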
With $\bar\varepsilon$ as in Lemma 5, we are now ready to derive a second-order cone reformulation of the distributionally optimistic chance-constrained model (18).

Proposition 9. Suppose that in the Wasserstein ambiguity set (3), the reference distribution is an elliptical distribution $\hat{\mathbb{P}} = \mathbb{P}_{(\mu,\Sigma,g)}$ and the Wasserstein distance is equipped with a Mahalanobis norm associated with the positive definite matrix $\Sigma$. If the risk threshold satisfies $\varepsilon \le \bar\varepsilon < 0.5$, then the distributionally optimistic chance-constrained MDP (18) is equivalent to the second-order cone program
\[
\ell_{\mathrm{DOCC}}(\theta,\varepsilon) = \max_{x\in\mathcal{X}}\ \mu^\top x - \|\Phi^{-1}(1-\bar\varepsilon)\,\Sigma^{1/2}x\|_2,
\]
where $\bar\varepsilon = 1 - \Phi(\eta^\star) \ge \varepsilon$ with $\eta^\star$ being the smallest $\eta \le \Phi^{-1}(1-\varepsilon)$ that satisfies
\[
\eta\,(\Phi(\eta)-(1-\varepsilon)) + \int_{\eta^2/2}^{(\Phi^{-1}(1-\varepsilon))^2/2} k\,g(z)\,\mathrm{d}z \le \theta.
\]

Proof. By Lemma 5, the first constraint in (18) is equivalent to $\mathbb{P}_{(\mu,\Sigma,g)}[\tilde r^\top x \ge y] \ge 1-\bar\varepsilon$, where $\bar\varepsilon = 1 - \Phi(\eta^\star) \ge \varepsilon$ with $\eta^\star$ being the smallest $\eta \le \Phi^{-1}(1-\varepsilon)$ that satisfies $\eta(\Phi(\eta)-(1-\varepsilon)) + \int_{\eta^2/2}^{(\Phi^{-1}(1-\varepsilon))^2/2} k\,g(z)\,\mathrm{d}z \le \theta$. This constraint can be further transformed as follows:
\[
\mathbb{P}_{(\mu,\Sigma,g)}[\tilde r^\top x \ge y] \ge 1-\bar\varepsilon
\iff \Phi\big((\mu^\top x - y)/\sqrt{x^\top\Sigma x}\big) \ge 1-\bar\varepsilon
\iff \mu^\top x - y \ge \Phi^{-1}(1-\bar\varepsilon)\sqrt{x^\top\Sigma x}
\iff \mu^\top x - y \ge \|\Phi^{-1}(1-\bar\varepsilon)\,\Sigma^{1/2}x\|_2,
\]
where the first equivalence holds by the linearity of elliptical distributions, the second holds because the cumulative distribution function $\Phi(\cdot)$ is non-decreasing, and the third holds since $\bar\varepsilon < 0.5$. Since the optimal value is attained at $y = \mu^\top x - \|\Phi^{-1}(1-\bar\varepsilon)\,\Sigma^{1/2}x\|_2$, plugging this expression into the objective of (18) concludes the proof. Q.E.D.
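As an illustration of Proposition 9, the following sketch solves the resulting second-order cone program with CVXPY, assuming the feasible set is expressed as a generic polytope $\{x \ge 0 : Ax = b\}$; the constraint data A and b, as well as the function name, are placeholders of ours rather than objects defined in the paper.

```python
import cvxpy as cp
import numpy as np
from scipy.stats import norm


def solve_docc(mu, Sigma, A, b, eps_bar):
    """max_x  mu^T x - || Phi^{-1}(1 - eps_bar) Sigma^{1/2} x ||_2  over a polytope.

    Any square root of Sigma works since the norm equals
    Phi^{-1}(1 - eps_bar) * sqrt(x^T Sigma x); here we use the Cholesky factor.
    """
    L = np.linalg.cholesky(Sigma)              # Sigma = L L^T
    kappa = norm.ppf(1.0 - eps_bar)            # positive whenever eps_bar < 0.5
    x = cp.Variable(mu.shape[0], nonneg=True)
    objective = mu @ x - kappa * cp.norm(L.T @ x, 2)
    prob = cp.Problem(cp.Maximize(objective), [A @ x == b])
    prob.solve()
    return x.value, prob.value
```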
F. Additional Details on Robust MDPs

As introduced in Delage and Mannor (2010), robust MDPs maximize the total expected return under the worst-case realization of the uncertain parameters within a predefined ambiguity set:
\[
\max_{\pi\in\Pi}\ \min_{r_0\in\mathcal{R},\,r_1\in\mathcal{R},\,\cdots}\ \mathbb{E}\bigg[\sum_{t=0}^{\infty} \gamma^t r_t(s_t) \,\Big|\, s_0 \sim p_0,\ \pi\bigg], \tag{27}
\]
where $\Pi$ is the set of all stationary randomized policies, and $r_t$ and $s_t$ are the reward and state at time stage $t$, respectively. As in Delage and Mannor (2010), we set $\mathcal{R}$, the uncertainty set, to be the 99% confidence ellipsoid of the random reward vector.

G. Additional Details on BROIL

Similar to our return-risk model, BROIL (Brown et al. 2020) also seeks a policy that maximizes a weighted average of the mean and percentile performances:
\[
\max_{\pi\in\Pi}\ \lambda\cdot\mathbb{E}\bigg[\sum_{t=0}^{\infty}\gamma^t r_t(s_t)\,\Big|\, s_0\sim p_0,\ \pi\bigg] + (1-\lambda)\cdot\mathrm{CVaR}_{\varepsilon}\bigg[\sum_{t=0}^{\infty}\gamma^t r_t(s_t)\,\Big|\, s_0\sim p_0,\ \pi\bigg], \tag{28}
\]
where $\lambda\in[0,1]$ is the weight. Given $R\in\mathbb{R}^{SA\times n}$ as the matrix of $n$ reward samples, BROIL can be expressed as a linear program:
\[
\max_{x\in\mathcal{X},\,y\in\mathbb{R}}\ \lambda\cdot\frac{1}{n}e^\top R^\top x + (1-\lambda)\cdot\Big(y - \frac{1}{\varepsilon}\cdot\frac{1}{n}e^\top\big(y\cdot e - R^\top x\big)^+\Big).
\]
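To illustrate how the sample-based objective above can be implemented, here is a minimal CVXPY sketch of BROIL's Rockafellar–Uryasev form; the feasible set is again written as a placeholder polytope $\{x \ge 0 : Ax = b\}$, which is our own assumption rather than the paper's formulation of $\mathcal{X}$.

```python
import cvxpy as cp


def solve_broil(R, A, b, lam, eps):
    """max over x in X, y in R of
       lam * mean(R^T x) + (1 - lam) * (y - mean((y - R^T x)_+) / eps)."""
    SA, n = R.shape
    x = cp.Variable(SA, nonneg=True)       # occupation-measure-type variable
    y = cp.Variable()                      # auxiliary VaR-level variable
    returns = R.T @ x                      # one total return per reward sample
    cvar = y - cp.sum(cp.pos(y - returns)) / (eps * n)
    obj = lam * cp.sum(returns) / n + (1.0 - lam) * cvar
    prob = cp.Problem(cp.Maximize(obj), [A @ x == b])
    prob.solve()
    return x.value, prob.value
```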
Observe that there are two major differences between BROIL and our return-risk model: first, BROIL uses CVaR as its risk measure, whereas our return-risk model adopts VaR; second, our objective function accounts for distributional robustness (in both the mean and the VaR of the return), whereas BROIL only computes the nominal mean and CVaR of the return.
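To make the distinction between the two risk measures concrete, the following small sketch computes the empirical VaR and CVaR of a vector of sampled returns; the sample data are purely illustrative.

```python
import numpy as np


def empirical_var_cvar(returns, eps):
    """Lower-tail VaR and CVaR of sampled returns at level eps."""
    var = np.quantile(returns, eps)            # eps-quantile of the return
    cvar = returns[returns <= var].mean()      # mean of the eps worst outcomes
    return var, cvar


rng = np.random.default_rng(0)
samples = rng.normal(loc=1500.0, scale=60.0, size=10_000)
print(empirical_var_cvar(samples, eps=0.05))
```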
H. Additional Details and Results on the Experiments

H.1. Additional Details of Parameter Selection

We use cross-validation for parameter selection in both the simulation and empirical studies. For DRMDPs (4), the candidate set for θ is {0, 2, ..., 18}; for CC (2), the candidate set for ε is {iε′/5}_{i∈[5]}; for RR (7), we select θ such that ε varies among {iε′/5}_{i∈[5]}, and we select α ∈ {0, 0.25, 0.5, 0.75, 1}; for BROIL (28), we select λ × ε ∈ {0, 0.25, 0.5, 0.75, 1} × {0.05, 0.1, 0.15}; for RMDPs (27), as in Delage and Mannor (2010), we set R to be the 99% confidence ellipsoid of the random reward vector as the uncertainty set.

Figure 6. A machine replacement problem with fixed Gaussian rewards.

H.2. Additional Details of the Simulation Study

We consider S = 10 states, A = 10 actions, a uniform initial state distribution, and a discount factor γ = 0.95. For each state s ∈ [S], the number of reachable next states is ⌈log S⌉. We sample the true reward from a multivariate normal distribution N(µ′, Σ′), where for each k ∈ [SA], µ′_k is generated as follows: we first draw a number from the discrete uniform distribution on {0, 1}; if the result is 0, we generate µ′_k from the normal distribution N(50, 100), and otherwise from N(90, 100). Standard deviations of the rewards are generated in the same manner with two other normal distributions, N(3, 9) and N(18, 9). Both means and standard deviations are trimmed to be non-negative after this procedure. The correlation matrix of the rewards is generated as follows: we first sample a matrix R ∈ R^{SA×SA} with all entries drawn independently and uniformly from [0.25, 1], and then set the correlation matrix to diag(d) V diag(d), where V = R⊤R and d = {d_i}_{i∈[SA]} = {1/√V_ii}_{i∈[SA]}.
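A sketch of this generation procedure is given below (NumPy); the function name is ours, and we interpret the second argument of each N(·, ·) above as a variance.

```python
import numpy as np


def generate_true_reward_model(S=10, A=10, seed=0):
    """Mean vector and covariance matrix of the true reward (sketch of the procedure above)."""
    rng = np.random.default_rng(seed)
    n = S * A
    # Means: for each entry flip a fair coin between N(50, 100) and N(90, 100).
    low_mean = rng.integers(0, 2, size=n) == 0
    mu = np.where(low_mean, rng.normal(50, 10, n), rng.normal(90, 10, n))
    # Standard deviations: same construction with N(3, 9) and N(18, 9).
    low_std = rng.integers(0, 2, size=n) == 0
    sig = np.where(low_std, rng.normal(3, 3, n), rng.normal(18, 3, n))
    mu, sig = np.maximum(mu, 0.0), np.maximum(sig, 0.0)   # trim to non-negative
    # Correlation matrix: entries of R uniform on [0.25, 1], V = R^T R,
    # then rescale so that the diagonal equals one.
    R = rng.uniform(0.25, 1.0, size=(n, n))
    V = R.T @ R
    d = 1.0 / np.sqrt(np.diag(V))
    corr = np.diag(d) @ V @ np.diag(d)
    Sigma = np.outer(sig, sig) * corr      # covariance from std's and correlation
    return mu, Sigma
```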
H.3. Additional Details of the Empirical Study

In this experiment, each machine is governed by the same underlying MDP with state set S = [S], where S = 50, and an action set with only two actions: repair the machine or not. The transitions are deterministic and the discount factor is 0.8. The reward depends on both the current state and the action, and all rewards are independently and normally distributed. Figure 6 illustrates the true underlying distributions that generate the random rewards.

[Figure 6 diagram: a chain of machine states whose action-dependent reward distributions include, e.g., N(−130, 1), N(−130, 20), N(0, 10⁻⁴), and N(−100, 800) under the repair and not-repair actions.]
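For concreteness, a minimal sketch of such a machine-replacement MDP is given below; the deterministic transition structure follows the description above, while the specific reward means and variances are placeholders loosely inspired by Figure 6, not the exact values used in the paper.

```python
import numpy as np


def machine_replacement_mdp(S=50, gamma=0.8, seed=0):
    """Deterministic machine-replacement MDP with Gaussian rewards (a sketch).

    Action 0 ("not repair") lets the machine deteriorate by one state; action 1
    ("repair") resets it to state 0.  All reward parameters below are
    placeholders, not the values from Figure 6.
    """
    rng = np.random.default_rng(seed)
    next_state = np.zeros((S, 2), dtype=int)
    next_state[:, 0] = np.minimum(np.arange(S) + 1, S - 1)   # deteriorate
    next_state[:, 1] = 0                                      # reset to new
    reward_mean = np.zeros((S, 2))
    reward_std = np.full((S, 2), 1e-2)
    reward_mean[-2:, 0] = -130.0          # heavy penalty near the worst states
    reward_std[-1, 0] = np.sqrt(20.0)
    reward_mean[:, 1] = -100.0            # repair cost
    reward_std[:, 1] = np.sqrt(800.0)

    def sample_reward(s, a):
        return rng.normal(reward_mean[s, a], reward_std[s, a])

    return next_state, sample_reward, gamma
```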
H.4. Additional Results of the Simulation Study

[Both Figure 7 and Figure 8 plot VaR against the training sample size, which ranges from 100 to 500.]

Figure 7. Simulation. Models DRMDP (4), CC (2), RR (7), RMDP and BROIL evaluated by VaR (risk threshold ε′ ∈ {5%, 10%}). The upper and lower edges of the shaded areas are respectively the 95% and 5% percentiles of the 100 performances, while the solid lines are the medians.

H.5. Additional Results of the Empirical Study

Figure 8. Empirical. Models DRMDP (4), CC (2), RR (7), RMDP and BROIL evaluated by VaR (risk threshold ε′ ∈ {5%, 10%}). The upper and lower edges of the shaded areas are respectively the 95% and 5% percentiles of the 100 performances, while the solid lines are the medians.

I. Related Works

Table 2 summarizes the literature related to our work. We remark that, compared with the related works in Table 2, our return-risk model is the only one that considers risk ambiguity, and we have also designed a fast first-order algorithm to obtain its solution, which enhances the practicality of our model for large-scale problems.
Table 2. Related works.

Paper | Uncertainty | Robustness | Ambiguity set | Risk measure | Soft-robustness
Delage and Mannor (2010) | Rewards and transition kernel | -- | -- | VaR | No
Xu and Mannor (2010) | Rewards and transition kernel | DRO | Nested | -- | No
Yu and Xu (2015) | Rewards and transition kernel | DRO | (General) Nested | -- | No
Brown et al. (2020) | Rewards | -- | -- | CVaR | Yes
Gilbert et al. (2017) | Rewards | -- | -- | VaR | No
Lobo et al. (2020) | Transition kernel | -- | -- | CVaR | Yes
Yang (2020) | Transition kernel | DRO | Wasserstein | -- | No
This paper | Rewards | DRO | Wasserstein | VaR | Yes