maq20 committed
Commit 1a5624c · 1 Parent(s): 98c4093
Files changed (3)
  1. README.md +190 -0
  2. dataset_infos.json +1 -0
  3. test-dataset.py +120 -0
README.md ADDED
@@ -0,0 +1,190 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ language:
+ - en
+ license:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-nist
+ task_categories:
+ - image-classification
+ task_ids:
+ - multi-class-image-classification
+ paperswithcode_id: mnist
+ pretty_name: MNIST
+ dataset_info:
+   features:
+   - name: image
+     dtype: image
+   - name: label
+     dtype:
+       class_label:
+         names:
+           0: '0'
+           1: '1'
+           2: '2'
+           3: '3'
+           4: '4'
+           5: '5'
+           6: '6'
+           7: '7'
+           8: '8'
+           9: '9'
+   config_name: mnist
+   splits:
+   - name: train
+     num_bytes: 17470848
+     num_examples: 60000
+   - name: test
+     num_bytes: 2916440
+     num_examples: 10000
+   download_size: 11594722
+   dataset_size: 20387288
+ ---
+ 
+ # Dataset Card for MNIST
+ 
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** http://yann.lecun.com/exdb/mnist/
+ - **Repository:**
+ - **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
+ - **Leaderboard:**
+ - **Point of Contact:**
+ 
+ ### Dataset Summary
+ 
+ The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training set and 10,000 images in the test set, with one class per digit (10 classes in total) and 7,000 images (6,000 training images and 1,000 test images) per class.
+ Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed across the training and test sets).
+ 
+ ### Supported Tasks and Leaderboards
+ 
+ - `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing the integer values 0 to 9, inclusive. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist).
+ 
+ ### Languages
+ 
+ English
+ 
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ A data point comprises an image and its label:
+ 
+ ```
+ {
+   'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>,
+   'label': 5
+ }
+ ```
+ 
+ ### Data Fields
+ 
+ - `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that accessing the image column (`dataset[0]["image"]`) decodes the image file automatically. Decoding a large number of image files can take a significant amount of time, so it is important to index the sample before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
+ - `label`: an integer between 0 and 9 representing the digit.
+ 
+ ### Data Splits
+ 
+ The data is split into a training set and a test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.
+ 
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. Images in the original dataset (NIST) were in two groups: one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images from the Census Bureau employees, and the test set was built by grouping the images from the high school students.
+ The goal in building MNIST was to have training and test sets following the same distribution, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.
+ 
+ ### Source Data
+ 
+ #### Initial Data Collection and Normalization
+ 
+ The original images from NIST were size-normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black or white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels and translating the image so as to position this point at the center of the 28x28 field.
+ 
+ #### Who are the source language producers?
+ 
+ Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.
+ 
+ #### Who are the annotators?
+ 
+ Same as the source data creators.
+ 
+ ### Personal and Sensitive Information
+ 
+ [More Information Needed]
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [More Information Needed]
+ 
+ ### Discussion of Biases
+ 
+ [More Information Needed]
+ 
+ ### Other Known Limitations
+ 
+ [More Information Needed]
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ Chris Burges, Corinna Cortes and Yann LeCun
+ 
+ ### Licensing Information
+ 
+ MIT License
+ 
+ ### Citation Information
+ 
+ ```
+ @article{lecun2010mnist,
+   title={MNIST handwritten digit database},
+   author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
+   journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
+   volume={2},
+   year={2010}
+ }
+ ```
+ 
+ ### Contributions
+ 
+ Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset.
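
The indexing caveat in the Data Fields section (prefer `dataset[0]["image"]` over `dataset["image"][0]`) can be illustrated with a toy stand-in that counts decode calls. `LazyImageColumn` below is a hypothetical class written for this sketch, not part of the `datasets` API:

```python
class LazyImageColumn:
    """Toy stand-in for a lazily decoded image column (not the real `datasets` API)."""

    def __init__(self, n):
        self.n = n
        self.decode_calls = 0  # how many images have been decoded so far

    def __getitem__(self, idx):
        self.decode_calls += 1  # each element access decodes one image
        return ("image", idx)

    def __len__(self):
        return self.n


column = LazyImageColumn(1000)

# dataset[0]["image"] style: touch one row, decode one image.
_ = column[0]
after_row_access = column.decode_calls  # 1 decode

# dataset["image"][0] style: materialize the whole column, then take index 0,
# i.e. decode every image before discarding all but the first.
_ = [column[i] for i in range(len(column))][0]
after_column_access = column.decode_calls  # 1 + 1000 decodes
```

The asymmetry grows with dataset size, which is why row-first indexing matters for the 60,000-image train split.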
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"mnist": {"description": "The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digit), with 7,000\nimages per class. There are 60,000 training images and 10,000 test images.\n", "citation": "@article{lecun2010mnist,\n title={MNIST handwritten digit database},\n author={LeCun, Yann and Cortes, Corinna and Burges, CJ},\n journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},\n volume={2},\n year={2010}\n}\n", "homepage": "http://yann.lecun.com/exdb/mnist/", "license": "", "features": {"image": {"id": null, "_type": "Image"}, "label": {"num_classes": 10, "names": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "image", "output": "label"}, "task_templates": [{"task": "image-classification", "image_column": "image", "label_column": "label", "labels": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]}], "builder_name": "mnist", "config_name": "mnist", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 17470848, "num_examples": 60000, "dataset_name": "mnist"}, "test": {"name": "test", "num_bytes": 2916440, "num_examples": 10000, "dataset_name": "mnist"}}, "download_checksums": {"https://storage.googleapis.com/cvdf-datasets/mnist/train-images-idx3-ubyte.gz": {"num_bytes": 9912422, "checksum": "440fcabf73cc546fa21475e81ea370265605f56be210a4024d2ca8f203523609"}, "https://storage.googleapis.com/cvdf-datasets/mnist/train-labels-idx1-ubyte.gz": {"num_bytes": 28881, "checksum": "3552534a0a558bbed6aed32b30c495cca23d567ec52cac8be1a0730e8010255c"}, "https://storage.googleapis.com/cvdf-datasets/mnist/t10k-images-idx3-ubyte.gz": {"num_bytes": 1648877, "checksum": "8d422c7b0a1c1c79245a5bcf07fe86e33eeafee792b84584aec276f5a2dbc4e6"}, "https://storage.googleapis.com/cvdf-datasets/mnist/t10k-labels-idx1-ubyte.gz": {"num_bytes": 4542, "checksum": "f7ae60f92e00ec6debd23a6088c31dbd2371eca3ffa0defaefb259924204aec6"}}, "download_size": 11594722, "post_processing_size": null, "dataset_size": 20387288, "size_in_bytes": 31982010}}
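
The `download_checksums` entries above pair each archive URL with its size and SHA-256 digest. A minimal sketch of how such a digest could be verified locally (the helper name is ours; the `datasets` library performs an equivalent check internally):

```python
import hashlib
import tempfile


def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large archives need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Self-check on a tiny stand-in file (not one of the MNIST archives).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    path = tmp.name

digest = sha256_of_file(path)
# A mismatch against the recorded checksum would indicate a corrupted download.
```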
test-dataset.py ADDED
@@ -0,0 +1,120 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ 
+ # Lint as: python3
+ """MNIST Data Set"""
+ 
+ 
+ import struct
+ 
+ import numpy as np
+ 
+ import datasets
+ from datasets.tasks import ImageClassification
+ 
+ 
+ _CITATION = """\
+ @article{lecun2010mnist,
+   title={MNIST handwritten digit database},
+   author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
+   journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
+   volume={2},
+   year={2010}
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digit), with 7,000
+ images per class. There are 60,000 training images and 10,000 test images.
+ """
+ 
+ _URL = "https://storage.googleapis.com/cvdf-datasets/mnist/"
+ _URLS = {
+     "train_images": "train-images-idx3-ubyte.gz",
+     "train_labels": "train-labels-idx1-ubyte.gz",
+     "test_images": "t10k-images-idx3-ubyte.gz",
+     "test_labels": "t10k-labels-idx1-ubyte.gz",
+ }
+ 
+ 
+ class MNIST(datasets.GeneratorBasedBuilder):
+     """MNIST Data Set"""
+ 
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="mnist",
+             version=datasets.Version("1.0.0"),
+             description=_DESCRIPTION,
+         )
+     ]
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "image": datasets.Image(),
+                     "label": datasets.features.ClassLabel(names=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]),
+                 }
+             ),
+             supervised_keys=("image", "label"),
+             homepage="http://yann.lecun.com/exdb/mnist/",
+             citation=_CITATION,
+             task_templates=[
+                 ImageClassification(
+                     image_column="image",
+                     label_column="label",
+                 )
+             ],
+         )
+ 
+     def _split_generators(self, dl_manager):
+         urls_to_download = {key: _URL + fname for key, fname in _URLS.items()}
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": [downloaded_files["train_images"], downloaded_files["train_labels"]],
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": [downloaded_files["test_images"], downloaded_files["test_labels"]],
+                     "split": "test",
+                 },
+             ),
+         ]
+ 
+     def _generate_examples(self, filepath, split):
+         """This function returns the examples in the raw form."""
+         # Images
+         with open(filepath[0], "rb") as f:
+             # 16-byte header: magic number, image count, row count and column count, all big-endian uint32
+             _ = f.read(4)
+             size = struct.unpack(">I", f.read(4))[0]
+             _ = f.read(8)
+             images = np.frombuffer(f.read(), dtype=np.uint8).reshape(size, 28, 28)
+ 
+         # Labels
+         with open(filepath[1], "rb") as f:
+             # 8-byte header: magic number and label count, both big-endian uint32
+             _ = f.read(8)
+             labels = np.frombuffer(f.read(), dtype=np.uint8)
+ 
+         for idx in range(size):
+             yield idx, {"image": images[idx], "label": str(labels[idx])}
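
The header arithmetic in `_generate_examples` follows the IDX layout used by the MNIST archives. A small sketch that builds a two-image IDX buffer in memory and parses it with the same steps (0x00000803 is the documented magic number for uint8 image files; the tiny buffer here is a made-up stand-in for the real archives):

```python
import struct

import numpy as np

num_images, rows, cols = 2, 28, 28
pixels = (np.arange(num_images * rows * cols) % 256).astype(np.uint8)

# 16-byte header: magic number, image count, row count, column count,
# all big-endian uint32s, followed by the raw uint8 pixel data.
buffer = struct.pack(">IIII", 0x00000803, num_images, rows, cols) + pixels.tobytes()

# Same parsing steps as the loading script: skip the 4-byte magic number,
# read the image count, skip rows/cols, then reshape the remaining bytes.
size = struct.unpack(">I", buffer[4:8])[0]
images = np.frombuffer(buffer[16:], dtype=np.uint8).reshape(size, rows, cols)
```

Because the pixel data is a flat dump in row-major order, a single `reshape` recovers the `(count, 28, 28)` array without any per-image framing.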