Collisions between workers and operating vehicles are the leading source of fatal incidents in the construction industry. One of the most prevalent factors behind contact hazards is the decline in construction workers' auditory situational awareness caused by hearing loss and the complex nature of construction noise. A computational technique that augments a worker's auditory perception could therefore significantly improve safety performance. Because construction machines often generate distinct sound patterns while operating on construction sites, audio signal processing offers a promising route to this goal. The current body of knowledge on automated surveillance in construction, however, still lacks such advanced methods. This paper presents a newly developed auditory surveillance framework based on convolutional neural networks (CNNs) that detects collision hazards by processing acoustic signals on construction sites. The study makes two primary contributions: (1) a new labeled dataset of normal and abnormal sound events related to collision hazards on construction sites, and (2) a novel audio-based machine learning model for automated detection of collision hazards. The model was trained with different network architectures, and its performance was evaluated using several measures, including accuracy, recall, precision, and the combined F-measure. The research is expected to increase the auditory situational awareness of construction workers and consequently enhance construction safety.
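
The evaluation measures named above (accuracy, precision, recall, and the combined F-measure) can be computed from a classifier's binary predictions as follows. This is a minimal illustrative sketch, not the paper's actual evaluation code; it assumes abnormal (hazard-related) sound events are labeled 1 and normal events 0.

```python
def evaluate(y_true, y_pred):
    """Compute accuracy, precision, recall, and F-measure for a
    binary (normal = 0 / abnormal = 1) sound-event classifier."""
    # Tally the confusion-matrix cells over paired labels/predictions.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F-measure: harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1
```

The F-measure is included because accuracy alone can be misleading when abnormal (hazard) events are much rarer than normal construction sounds.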