Computer vision and robotics present tremendous opportunities for automating routine inspections of reinforced concrete bridges. One of the most critical aspects of these inspections is delamination assessment, as delaminations pose immediate safety concerns due to falling concrete. Current delamination assessment methods, such as hammer sounding and chain dragging, are time-consuming and difficult to apply when accessibility is limited. Infrared technology offers an alternative means of assessing delaminations. In this work, a novel inspection method is proposed that combines an infrared camera with a convolutional neural network to automatically assess delaminations in infrared images. MobileNetV2 is implemented as an encoder with DeepLabV3 to perform pixel-wise segmentation of delaminations in infrared images. The results show a 74.5% mean intersection over union (mIoU) for predicting delaminated areas, which is comparable to the performance of this network architecture on benchmark data sets. Review of the predicted delamination areas also shows that the model accurately predicts delamination locations, with accuracy limitations arising primarily in the fine outline details of each delamination. The automated delamination assessment method was also tested by mounting an upward-facing thermal camera on a mobile ground robot to perform a bridge soffit inspection. The robotic scanning data set yielded an mIoU of 79.5% for delamination assessment; this increase is likely due to the image data being more consistently structured in the robotic images. These results demonstrate the ability to combine infrared imagery, convolutional neural networks, and unmanned mobile robots to meet case-specific accessibility needs for more accurate and time-efficient delamination assessment.
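The mIoU figures reported above are the standard evaluation metric for pixel-wise segmentation: for each class, IoU is the count of pixels where prediction and ground truth agree on that class, divided by the count of pixels where either assigns it, averaged over classes. The paper does not provide its evaluation code; the following is a minimal illustrative sketch of the metric on flattened label arrays (the function name and two-class setup are assumptions for illustration, not the authors' implementation).

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection over union for pixel-wise segmentation.

    pred, target: flat sequences of integer class labels, one per pixel
    (e.g. 0 = sound concrete, 1 = delamination in a two-class setup).
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        # Pixels labeled class c in both maps (intersection) or either map (union).
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes with no pixels in either map
            ious.append(inter / union)
    return sum(ious) / len(ious)


# Toy example: 4 pixels, 2 classes.
# Class 0: intersection 1, union 2 -> IoU 0.5
# Class 1: intersection 2, union 3 -> IoU 2/3
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```

A per-image score like this would then be averaged over a test set to obtain figures such as the 74.5% and 79.5% reported above.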