When performing reinforcement learning with a robot arm in a real environment, it is important that learning proceed safely and quickly, because unexpected behaviors during learning and prolonged training can damage the robot arm or surrounding objects. In this study, a trajectory-based imitation learning method is proposed that suppresses unexpected situations and quickly learns policies suited to the robot by limiting the workspace to be explored using a single human demonstration. Trajectory-based imitation learning consists of two stages. First, a reference trajectory is generated from the expert trajectory in the human demonstration, taking the position of the target object into account. Second, the target task is trained by performing reinforcement learning based on the generated reference trajectory. Experiments were conducted in both simulation and real environments to verify the proposed imitation learning algorithm. In simulation, a laptop-folding task was performed with a success rate of 97% to verify the performance of the algorithm. In addition, safe and fast learning was shown to be possible with only one demonstration video on a drawer-arrangement task in a real environment.
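The two stages above can be sketched in a minimal form. This is an illustrative assumption, not the paper's actual method: here the reference trajectory is obtained by translating the expert trajectory so it ends at the target object's position, and the reinforcement-learning reward penalizes deviation of the end effector from the current reference waypoint. All function and variable names are hypothetical.

```python
import numpy as np

def generate_reference_trajectory(expert_traj, target_pos):
    # Hypothetical stage 1: shift the demonstrated trajectory so its
    # final waypoint coincides with the target object's position.
    expert_traj = np.asarray(expert_traj, dtype=float)
    offset = np.asarray(target_pos, dtype=float) - expert_traj[-1]
    return expert_traj + offset

def tracking_reward(ee_pos, reference_traj, step):
    # Hypothetical stage 2: shaped reward that penalizes distance from
    # the reference waypoint at the current step, which confines
    # exploration to a region around the reference trajectory.
    waypoint = reference_traj[min(step, len(reference_traj) - 1)]
    return -float(np.linalg.norm(np.asarray(ee_pos, dtype=float) - waypoint))

# Toy 3-D end-effector trajectory from one demonstration, plus a target.
expert = [[0.0, 0.0, 0.2], [0.1, 0.0, 0.1], [0.2, 0.0, 0.0]]
target = [0.5, 0.3, 0.0]

ref = generate_reference_trajectory(expert, target)
# The reward is maximal (zero) when the end effector tracks the reference.
on_track = tracking_reward(ref[1], ref, 1)
off_track = tracking_reward([0.0, 0.0, 0.0], ref, 1)
```

Under this scheme, the RL agent is never encouraged to explore far from the demonstrated motion, which is one plausible way to realize the workspace limitation described above.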