Datasets: Update README.md

README.md (changed)

## Dataset Description

- **Homepage:** https://sites.google.com/view/v-lol/home
- **Repository:** https://github.com/ml-research/vlol-dataset-gen
- **Paper:** https://arxiv.org/abs/2306.07743
- **Point of Contact:** lukashenrik.helff@stud.tu-darmstadt.de, wolfgang.stammer@cs.tu-darmstadt.de

### Dataset Summary

This diagnostic dataset is specifically designed to evaluate the visual logical learning capabilities of machine learning models.
It offers a seamless integration of visual and logical challenges, providing 2D images of complex visual trains, where the classification is derived from underlying logical rules.
The fundamental idea of V-LoL remains to integrate the explicit logical learning tasks of classic symbolic AI benchmarks into visually complex scenes, creating a unique visual input that retains the challenges and versatility of explicit logic.
In doing so, V-LoL bridges the gap between symbolic AI challenges and contemporary deep learning datasets, offering various visual logical learning tasks that pose challenges for AI models across a wide spectrum of AI research, from symbolic to neural and neuro-symbolic AI.
Moreover, we provide a flexible [dataset generator](https://github.com/ml-research/vlol-dataset-gen) that empowers researchers to easily exchange or modify the logical rules, thereby enabling the creation of new datasets incorporating novel logical learning challenges.
By combining visual input with logical reasoning, this dataset serves as a comprehensive benchmark for assessing the ability of machine learning models to learn and apply logical reasoning within a visual context.

### Supported Tasks and Leaderboards

We offer a diverse set of datasets that present challenging AI tasks targeting various reasoning abilities. The following provides an overview of the available datasets.

Logical complexity (a minimal executable sketch of these rules follows the list):

- Theory X: The train either has a short, closed car, or a car with a barrel load somewhere behind a car with a golden vase load. This rule was originally introduced as "Theory X" in the new East-West Challenge.
- Numerical rule: The train has a car whose position equals its number of payloads, which in turn equals its number of wheel axles.
- Complex rule: Either there is a car whose car number is smaller than its number of wheel axles and smaller than its number of loads, or there is a short and a long car of the same colour where the position number of the short car is smaller than the number of wheel axles of the long car, or the train has three differently coloured cars. We refer to Tab. 3 in the supplementary material for more insights on the reasoning properties required by each rule.

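As a purely illustrative sketch (not the dataset's official evaluation code), the three rules can be written as predicates over a hypothetical symbolic train encoding; the attribute names (`position`, `length`, `roof`, `colour`, `n_axles`, `loads`) and the reading of "behind" as a larger car position are assumptions made for this example:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Car:
    position: int        # 1-based car position, counted from the locomotive
    length: str          # "short" or "long"
    roof: str            # "open" or a closed roof type
    colour: str
    n_axles: int         # number of wheel axles
    loads: List[str] = field(default_factory=list)  # e.g. ["barrel", "golden_vase"]


def theory_x(train: List[Car]) -> bool:
    """Short closed car, or a barrel load somewhere behind a golden-vase load."""
    short_closed = any(c.length == "short" and c.roof != "open" for c in train)
    barrel_behind_vase = any(
        barrel.position > vase.position
        for vase in train if "golden_vase" in vase.loads
        for barrel in train if "barrel" in barrel.loads
    )
    return short_closed or barrel_behind_vase


def numerical_rule(train: List[Car]) -> bool:
    """Some car whose position equals its number of loads and its number of axles."""
    return any(c.position == len(c.loads) == c.n_axles for c in train)


def complex_rule(train: List[Car]) -> bool:
    """Disjunction of the three alternatives listed above."""
    cond1 = any(c.position < c.n_axles and c.position < len(c.loads) for c in train)
    cond2 = any(
        short.position < long_car.n_axles
        for short in train if short.length == "short"
        for long_car in train if long_car.length == "long" and long_car.colour == short.colour
    )
    cond3 = len({c.colour for c in train}) >= 3
    return cond1 or cond2 or cond3


# Example: a single short, closed car already satisfies Theory X.
print(theory_x([Car(1, "short", "closed", "blue", 2, ["barrel"])]))  # True
```
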
Visual complexity:

- Realistic train representations.
- Block representation.

OOD trains:

- A train carrying 2-4 cars.
- A train carrying 7 cars.

Train attribute distributions:

- Michalski attribute distribution.
- Random attribute distribution.

### Languages

English

### Data Instances

An example instance of the dataset looks as follows:

```
{
  'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=480x270 at 0x1351D0EE0>,
  'label': 1
}
```

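A minimal loading sketch with the Hugging Face `datasets` library; the repository id below is a placeholder assumption, substitute the actual Hub id of this dataset:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual Hub id.
ds = load_dataset("AIML-TUDA/v-lol-trains", split="train")

sample = ds[0]                       # {'image': <PIL.Image.Image ...>, 'label': 1}
sample["image"].save("example.png")  # the rendered train scene
print(sample["label"])               # 0 = Westbound, 1 = Eastbound
```
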
### Data Fields

The data instances have the following fields:

- `image`: a `PIL.Image.Image` object containing the image. Note that accessing the image column (e.g. `dataset[0]["image"]`) automatically decodes the image file. Decoding a large number of image files can take a significant amount of time, so always query the sample index before the `"image"` column, i.e. prefer `dataset[0]["image"]` over `dataset["image"][0]`.
- `label`: an `int` classification label.

Class label mapping:

| ID | Class |
| --- | --- |
| 0 | Westbound |
| 1 | Eastbound |

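The access-order advice above, shown as code (same placeholder Hub id as in the loading sketch):

```python
from datasets import load_dataset

ds = load_dataset("AIML-TUDA/v-lol-trains", split="train")  # placeholder Hub id

# Row first, then column: only the requested image is decoded.
image = ds[0]["image"]

# Column first, then row: decodes every image in the split before indexing -- avoid.
# image = ds["image"][0]

label_names = {0: "Westbound", 1: "Eastbound"}
print(label_names[ds[0]["label"]])
```
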
### Data Splits

|              | Train | Validation |
| ------------ | ----- | ---------- |
| # of samples | 10000 | 2000       |

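For completeness, a small check of the split sizes (again with the placeholder Hub id):

```python
from datasets import load_dataset

splits = load_dataset("AIML-TUDA/v-lol-trains")  # placeholder Hub id; loads all splits
print({name: split.num_rows for name, split in splits.items()})
# Expected, according to the table above: {'train': 10000, 'validation': 2000}
```
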
## Dataset Creation

### Curation Rationale

Despite the successes of recent developments in visual AI, different shortcomings still exist: from missing exact logical reasoning, to abstract generalization abilities, to understanding complex and noisy scenes.
Unfortunately, existing benchmarks were not designed to capture more than a few of these aspects.
Whereas deep learning datasets focus on visually complex data but simple visual reasoning tasks, inductive logic datasets involve complex logical learning tasks but lack the visual component.
To address this, we propose the visual logical learning dataset, V-LoL, which seamlessly combines visual and logical challenges.
Notably, we introduce the first instantiation of V-LoL, V-LoL-Train: a visual rendition of a classic benchmark in symbolic AI, the Michalski train problem.
By incorporating intricate visual scenes and flexible logical reasoning tasks within a versatile framework, V-LoL-Train provides a platform for investigating a wide range of visual logical learning challenges.
To create new V-LoL challenges, we provide a comprehensive guide and resources in our [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).
The repository offers a collection of tools and code that enable researchers and practitioners to easily generate new V-LoL challenges based on their specific requirements, including the necessary documentation, code samples, and instructions to create and customize their own challenges.

### Source Data

#### Initial Data Collection and Normalization

The individual datasets are generated using the V-LoL-Train generator. See the [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).

#### Who are the source language producers?

See the [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).

### Annotations

#### Annotation process

The images are generated in two steps: first sampling a valid symbolic representation of a train and then visualizing it within a 3D scene.

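Purely as an illustration of this two-step idea (and not the actual generator, which samples trains via Prolog and renders them with Blender; see the repository linked above), a toy sketch could look as follows; all names and attribute choices here are invented for the example:

```python
import random
from PIL import Image  # only a stand-in for the Blender rendering step


def sample_symbolic_train(rng: random.Random) -> list:
    """Step 1: sample a symbolic train, i.e. a list of cars with discrete attributes."""
    return [
        {
            "position": i + 1,
            "length": rng.choice(["short", "long"]),
            "roof": rng.choice(["open", "closed"]),
            "loads": [rng.choice(["barrel", "golden_vase", "box"])
                      for _ in range(rng.randint(0, 2))],
        }
        for i in range(rng.randint(2, 4))
    ]


def render(train: list) -> Image.Image:
    """Step 2 stand-in: the real pipeline renders the symbolic train as a 3D Blender scene."""
    return Image.new("RGBA", (480, 270))


rng = random.Random(0)
symbolic_train = sample_symbolic_train(rng)
# Label according to a chosen rule (here: only the short-closed-car part of Theory X).
label = int(any(c["length"] == "short" and c["roof"] == "closed" for c in symbolic_train))
sample = {"image": render(symbolic_train), "label": label}
```
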
#### Who are the annotators?

Annotations are derived automatically by a Python, Prolog, and Blender pipeline. See the [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).

### Personal and Sensitive Information

The dataset contains neither personal nor sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset is purely synthetic and has no direct social impact.

### Discussion of Biases

Please refer to our paper.

### Dataset Curators

Lukas Helff

### Licensing Information

MIT License

### Citation Information

@misc{helff2023vlol,
      title={V-LoL: A Diagnostic Dataset for Visual Logical Learning},
      author={Lukas Helff and Wolfgang Stammer and Hikaru Shindo and Devendra Singh Dhami and Kristian Kersting},
      journal={Dataset available from https://sites.google.com/view/v-lol},
      year={2023},
      eprint={2306.07743},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

### Contributions