Places365 CNNs

Convolutional neural networks (CNNs) trained on the Places2 Database can be used for scene recognition and as generic deep scene feature extractors for visual recognition. We share the following pre-trained CNNs for Caffe and PyTorch.


Here we release the data of Places365-Standard and the data of Places365-Challenge to the public. Places365-Standard is the core set of the Places2 Database and was used to train the Places365-CNNs. We will add other kinds of annotation to Places365-Standard in the future. Places365-Challenge is the competition set of the Places2 Database, containing 6.2 million extra images compared to Places365-Standard. Places365-Challenge will be used for the Places Challenge 2016.

Data of Places365-Standard

There are 1.8 million training images from 365 scene categories in Places365-Standard, which are used to train the Places365-CNNs. There are 50 images per category in the validation set and 900 images per category in the testing set.


Data of Places365-Challenge 2016

Compared to the training set of Places365-Standard, the training set of Places365-Challenge has 6.2 million extra images, for a total of 8 million training images for the Places365 Challenge 2016. The validation set and the testing set are the same as those of Places365-Standard.
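The training-set sizes quoted above are consistent; a one-line arithmetic check:

```python
# Sanity check of the training-set sizes quoted in the text.
standard_train = 1_800_000    # Places365-Standard training images
challenge_extra = 6_200_000   # extra images in Places365-Challenge
challenge_train = standard_train + challenge_extra
print(challenge_train)  # 8000000
```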


Data of Places-Extra69

Besides the 365 scene categories released in Places365 above, here we release the image data for 69 extra scene categories (in total, 434 scene categories are included in the Places Database) as Places-Extra69. The category list of Places-Extra69 is here. The train and test splits are included in the compressed file. For each category, we leave 100 images out as test images. There are 98,721 images for training and 6,600 images for testing.
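The hold-out protocol above can be sketched as follows. The per-category image list here is a hypothetical placeholder; the released Places-Extra69 archive ships its own fixed train/test splits, so this is only an illustration of the leave-100-out idea.

```python
# Sketch of the hold-out protocol: for each category, set aside 100
# images for testing and keep the rest for training. The image paths
# below are dummy placeholders, not real Places-Extra69 files.
import random

def split_category(image_paths, test_per_category=100, seed=0):
    """Return (train, test) lists, leaving `test_per_category` images out."""
    rng = random.Random(seed)
    shuffled = image_paths[:]
    rng.shuffle(shuffled)
    return shuffled[test_per_category:], shuffled[:test_per_category]

# Example with a dummy category of 1,200 images.
images = [f"category_x/{i:08d}.jpg" for i in range(1200)]
train, test = split_category(images)
print(len(train), len(test))  # 1100 100
```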


Evaluation Server of Places365

You can register to submit your predictions on the test set of Places365 via the evaluation server.



Terms of use: by downloading the image data you agree to the following terms:

  1. You will use the data only for non-commercial research and educational purposes.
  2. You will NOT distribute the above images.
  3. Massachusetts Institute of Technology makes no representations or warranties regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose.
  4. You accept full responsibility for your use of the data and shall defend and indemnify Massachusetts Institute of Technology, including its employees, officers and agents, against any and all claims arising from your use of the data, including but not limited to your use of any copies of copyrighted images that you may create from the data.

Please email Bolei Zhou if you have any questions or comments.