Introduction

Path planning has been studied extensively over the last few decades. In the real world, however, not every navigation task is map-based, even though map-based navigation algorithms such as A* and uniform-cost search can compute optimal paths quickly. Other approaches such as SLAM can build a map of the environment, but not every robot is equipped with LiDAR sensors, and the computational cost is high. In this paper, we focus on the problem of item searching in a map-less environment using only visual input. Our agent is equipped with a convolutional neural network (CNN)-based object detector trained to recognize a certain set of objects. The agent then exploits reinforcement learning algorithms to learn efficient policies that reach the destination object from random spawn points with minimal collisions. In our approach, object detection is used to perceive the surrounding environment and to compute the reward, which is then fed into the reinforcement learning models to generate navigation policies. We evaluate our method in the iGibson environment to examine its effectiveness.
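
To make this pipeline concrete, the following is a minimal sketch, in Python, of how a detection-based reward signal could drive a reinforcement learning update. The detector output format, the reward-shaping constants, the action names, and the tabular Q-learning agent are illustrative assumptions for exposition, not the implementation evaluated in this paper.

import random
from collections import defaultdict

def detection_reward(detections, target_label, collided):
    """Shape a scalar reward from object-detector output.

    `detections` is assumed to be a list of (label, confidence, area_fraction)
    tuples produced by a CNN-based detector on the current camera frame, where
    area_fraction is the share of the image covered by the bounding box.
    """
    if collided:
        return -1.0                       # penalize collisions
    for label, confidence, area_fraction in detections:
        if label == target_label and confidence > 0.5:
            # The target filling more of the view suggests the agent is closer.
            return 0.1 + area_fraction
    return -0.01                          # small per-step cost to encourage progress

class TabularQAgent:
    """Minimal epsilon-greedy Q-learning agent over a discretized observation."""

    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = defaultdict(float)       # unseen (state, action) pairs default to 0.0
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)                       # explore
        return max(self.actions, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

# One interaction step: the detector reports the target ("mug" is a
# hypothetical label) and the resulting reward updates the policy.
agent = TabularQAgent(actions=["forward", "turn_left", "turn_right"])
reward = detection_reward([("mug", 0.9, 0.04)], target_label="mug", collided=False)
agent.update(state="s0", action="forward", reward=reward, next_state="s1")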