Uncertainty-aware Path Planning Using Reinforcement Learning and Deep Learning Methods
Computer and Knowledge Engineering
Article in press, accepted, published online from 11 Azar 1399 (1 December 2020)
Article type: Machine learning-Sadoghi
DOI: 10.22067/cke.2020.39287
Authors
Nematollah Ab azar
Imam Khomeini International University
Abstract
This paper proposes new algorithms that improve Reinforcement Learning (RL) and Deep Q Network (DQN) methods for path planning under uncertainty in the perception of the environment. The authors formulate and solve the path planning optimization problem by minimizing the path length, avoiding obstacles, and minimizing the associated uncertainty. To this end, a reward function is constructed from a weighted feature of the environment images, the robot and environment constraints, and a path optimality criterion, serving as the objective function of the optimization problem. Deep Learning (DL) is used for two purposes: first, to perceive a real environment and find the state transition matrix of the mobile robot path planning problem, and second, to extract state features directly from an image of the environment so that appropriate actions can be selected. The path planning problem is cast as an RL problem, and a Convolutional Neural Network (CNN) is used to approximate Q-values as a linear parameterized function. This approach yields improved versions of the Q-learning, SARSA, and DQN algorithms, called POQL, POSARSA, and PODQN. The learning results show that the newly improved algorithms increase path planning performance by more than 20%, 21%, and 5% compared to Q-learning, SARSA, and DQN, respectively.
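The abstract's core idea of augmenting an RL reward with an uncertainty penalty can be illustrated with a minimal sketch. The snippet below is not the paper's POQL algorithm; it is a hedged stand-in that runs tabular Q-learning on a grid world whose per-step reward combines path length (a step cost), obstacle avoidance (a collision penalty), and a weighted uncertainty term, mirroring the weighted reward structure described above. All names, values, and the grid setup are assumptions for illustration.

```python
import numpy as np

def q_learning_path(grid, uncertainty, start, goal,
                    episodes=2000, alpha=0.5, gamma=0.95,
                    eps=0.2, lam=1.0, seed=1):
    """Tabular Q-learning with an uncertainty-penalized reward.

    grid: 2-D array, 1 = obstacle, 0 = free cell.
    uncertainty: 2-D array of per-cell perception-uncertainty costs.
    Reward per step: -1 (path length) - lam * uncertainty, with a
    collision penalty and a goal bonus. Illustrative only; parameters
    are assumptions, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid.shape
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    Q = np.zeros((rows, cols, 4))

    def step(s, a):
        r, c = s[0] + moves[a][0], s[1] + moves[a][1]
        if not (0 <= r < rows and 0 <= c < cols) or grid[r, c] == 1:
            return s, -5.0, False                # blocked: penalty, stay put
        if (r, c) == goal:
            return (r, c), 10.0, True            # goal reached
        return (r, c), -1.0 - lam * uncertainty[r, c], False

    for _ in range(episodes):
        s, done = start, False
        for _ in range(4 * rows * cols):         # cap episode length
            a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            Q[s][a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s][a])
            s = s2
            if done:
                break

    # Greedy rollout of the learned policy.
    path, s = [start], start
    for _ in range(rows * cols):
        if s == goal:
            break
        s, _, _ = step(s, int(np.argmax(Q[s])))
        path.append(s)
    return path
```

Raising `lam` makes the greedy policy detour around high-uncertainty cells even when that lengthens the path, which is the trade-off the paper's objective function expresses; the paper itself replaces the tabular Q with a CNN approximator operating on environment images.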