This paper presents a safe reinforcement learning system for automated driving that benefits from multimodal future trajectory predictions. We propose a safety system composed of two components: a rule-based safety module and a multimodal learning-based safety module. The rule-based module encodes common driving rules, whereas the learning-based module is data-driven and learns safety patterns from historical driving data. Specifically, it uses a mixture density recurrent neural network (MD-RNN) to predict multimodal future trajectories, mimicking the potential behaviors of the autonomous agent and thereby accelerating the learning process. Our simulation results demonstrate that the proposed safety system outperforms previously reported results in both average reward and collision frequency.
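To make the multimodal prediction idea concrete, the sketch below shows a minimal mixture-density output head: a hidden state is mapped to mixture weights, means, and standard deviations over future (x, y) positions, from which distinct plausible trajectories can be sampled. All sizes, weight names, and the untrained random projections are illustrative assumptions, not the paper's actual MD-RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN, MODES, HORIZON = 16, 3, 5  # assumed sizes (hypothetical)

# Random stand-in projection weights; a real model would learn these.
W_pi = rng.normal(size=(HIDDEN, MODES))
W_mu = rng.normal(size=(HIDDEN, MODES * HORIZON * 2))
W_sigma = rng.normal(size=(HIDDEN, MODES * HORIZON * 2))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mdn_head(h):
    """Map an RNN hidden state to mixture parameters over future (x, y)."""
    pi = softmax(h @ W_pi)                                  # mode weights, sum to 1
    mu = (h @ W_mu).reshape(MODES, HORIZON, 2)              # per-mode mean trajectory
    sigma = np.exp(h @ W_sigma).reshape(MODES, HORIZON, 2)  # positive std devs
    return pi, mu, sigma

def sample_trajectory(h):
    """Draw one plausible future: pick a mode, then sample each step."""
    pi, mu, sigma = mdn_head(h)
    k = rng.choice(MODES, p=pi)
    return mu[k] + sigma[k] * rng.normal(size=(HORIZON, 2))

h = rng.normal(size=HIDDEN)  # stand-in for an RNN hidden state
pi, mu, sigma = mdn_head(h)
trajectory = sample_trajectory(h)
```

Each mixture component captures one behavior mode (e.g. keep lane, brake, change lane), which is what lets a single forward pass represent several possible futures at once.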