Tiny ImageNet Visual Recognition Challenge

Welcome to the Tiny ImageNet evaluation server. The Tiny ImageNet Challenge is the default course project for Stanford CS231N. It runs similarly to the ImageNet challenge (ILSVRC). The goal of the challenge is to do as well as possible on the image classification problem. You will submit your final predictions on a test set to this evaluation server, and we will maintain a class leaderboard.

Tiny ImageNet has 200 classes. Each class has 500 training images, 50 validation images, and 50 test images. We have released the training and validation sets with images and annotations. We provide both class labels and bounding boxes as annotations; however, you are asked only to predict the class label of each image, without localizing the objects. The test set is released without labels. You can download the whole Tiny ImageNet dataset here.

We measure performance by the test set error rate: the fraction of test images that the model classifies incorrectly. To submit your predictions on the test set, name your submission file <your SUNetID>.txt and upload it from your local machine. Your submission should be a two-column file with 10,000 lines, each pairing a test image filename with its predicted class ID. One sample line might look like:

test_9925.JPEG n01910747
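For concreteness, the error rate described above can be sketched in a few lines of Python (the function name and signature are our own illustration, not part of the evaluation server):

```python
def error_rate(predictions, labels):
    """Fraction of test images whose predicted class ID differs from the truth."""
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

# Toy check: two of four predictions are wrong, so the error rate is 0.5.
print(error_rate(["n01910747", "n02123045", "n01910747", "n02123045"],
                 ["n01910747", "n02123045", "n02123045", "n01910747"]))
```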

This sample comes from a random-guessing submission, which yields a chance accuracy of 0.005 (1/200). Note that the class IDs correspond to synsets in ImageNet. For example, you can browse images and metadata of class ID n01910747 using this link.
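A random-guessing submission file like this can be produced with a short script. This is a minimal sketch: the synset IDs below are fabricated placeholders (the real 200 WordNet IDs ship with the dataset), and we assume test images are named test_0.JPEG through test_9999.JPEG, as the sample line suggests:

```python
import random

# Placeholder synset IDs for illustration only; substitute the 200 real
# WordNet IDs distributed with the Tiny ImageNet dataset.
wnids = ["n%08d" % i for i in range(200)]

random.seed(0)
# One line per test image: "<filename> <predicted class id>".
lines = ["test_%d.JPEG %s" % (i, random.choice(wnids)) for i in range(10000)]

with open("submission.txt", "w") as f:
    f.write("\n".join(lines) + "\n")
```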

Your rank will be updated on the leaderboard once your submission is accepted. Ill-formatted files will be automatically rejected. To discourage wild guessing, you may make a new submission only 2 hours after your last one. We strongly recommend saving your best submission file on disk for grading purposes, since every submission overwrites your previous record on the server. By default, your full name is displayed on the leaderboard; you are welcome to contact us if you would like to use a funny nickname instead. Please contact us via email if you have any questions about the evaluation server. Good luck!


 #   Name                          Error Rate   # Submissions
 1   Dasgupta, Saumitro            0.438        1
 2   Yang, Xuan                    0.444        8
 3   Feng, Shaoming                0.505        3
 4   Banerjee, Arijit              0.530        7
 5   Boin, Jean-Baptiste           0.560        5
 6   Hansen, Lucas                 0.562        6
 7   Miller, John                  0.567        10
 8   Pour Ansari, Mohammad Hadi    0.568        14
 9   Ghili, Saman                  0.568        15
10   Wang, Keven                   0.574        11
11   Yao, Leon                     0.600        3
12   Au, Benjamin                  0.705        5
13   Bodington, Dash               0.808        11
14   Random Guesser                0.995        7