Social media has recently become a digital lifeline used to relay information and locate survivors in disaster situations. Currently, officials and volunteers manually scour social media for any valuable information; however, this approach is infeasible as millions of posts are shared every minute. Our goal is to automate the extraction of actionable information from social media posts in order to direct relief resources efficiently. Identifying damage and human casualties allows first responders to allocate resources effectively and save as many lives as possible. Since social media posts contain text, images, and videos, we propose a multimodal deep learning framework to identify damage-related information. This framework combines multiple pretrained unimodal convolutional neural networks that extract features from raw text and images independently, before a final classifier labels each post based on both modalities. Experiments on a home-grown database of labeled social media posts showed promising results and validated the merits of the proposed approach.
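The late-fusion design described above can be sketched minimally as follows. The two feature extractors here are simple stand-ins for the pretrained unimodal CNNs (a random embedding with pooling for text, a random projection for images), and all names, dimensions, and the three-way label set are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def text_features(token_ids, dim=128):
    # Stand-in for a pretrained text CNN: a fixed random embedding
    # lookup followed by max-pooling over the token axis.
    emb = rng.standard_normal((10_000, dim))
    return emb[token_ids].max(axis=0)

def image_features(pixels, dim=256):
    # Stand-in for a pretrained image CNN: a fixed random projection
    # of the flattened pixel array.
    proj = rng.standard_normal((pixels.size, dim))
    return pixels.flatten() @ proj

def fuse_and_classify(text_vec, image_vec, weights, bias):
    # Late fusion: concatenate the unimodal feature vectors, then
    # apply a linear classifier with a softmax over the labels
    # (e.g., damage / casualties / irrelevant -- hypothetical).
    fused = np.concatenate([text_vec, image_vec])
    logits = fused @ weights + bias
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy post: a few token ids and a small grayscale "image".
tokens = np.array([5, 42, 7])
image = rng.standard_normal((8, 8))

t = text_features(tokens)
v = image_features(image)
W = rng.standard_normal((t.size + v.size, 3)) * 0.01  # 3 labels
b = np.zeros(3)
probs = fuse_and_classify(t, v, W, b)  # class probabilities, shape (3,)
```

In a real system the classifier weights would be trained on the labeled posts while the unimodal extractors stay frozen or are fine-tuned; the key point the sketch captures is that each modality is encoded independently and fused only at the classification stage.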