Both Evolutionary Algorithms (EAs) and Reinforcement Learning Algorithms (RLAs) have proven successful in policy optimisation tasks; however, literature comparing their strengths and weaknesses is scarce, making it difficult to determine which family of algorithms is best suited to a given task. This paper presents a comparison of two EAs and two RLAs on EvoMan, a video game playing benchmark. We test the algorithms both with and without noise introduced into the initialisation of multiple video game environments. We demonstrate that EAs reach a performance similar to that of RLAs in the static environments, but when noise is introduced the performance of EAs drops drastically, while the performance of RLAs is much less affected.