asenppopov committed
Commit e6571d9 · verified · 1 Parent(s): 4649e7e

Add dataset README with TAR format documentation
Files changed (1): README.md (+357 -0)

README.md ADDED

---
task_categories:
- reinforcement-learning
- robotics
tags:
- robotics
- libero
- manipulation
- semantic-action-chunking
- vision-language
- imitation-learning
size_categories:
- 100K<n<1M
---

# GATE-VLAP Datasets

**Grounded Action Trajectory Embeddings with Vision-Language Action Planning**

This repository contains preprocessed datasets from the LIBERO benchmark suite, specifically designed for training vision-language-action models with semantic action segmentation.

## Why Raw Format?

We provide datasets in **raw PNG + JSON format** rather than pre-packaged TAR/WebDataset files for several reasons:

### Advantages of Raw Format

1. **Easy Inspection**: Browse and visualize individual demonstrations directly on HuggingFace
2. **Maximum Flexibility**:
   - Load with any framework (PyTorch, TensorFlow, JAX)
   - Convert to your preferred format (TAR, RLDS, LeRobot, custom)
   - Cherry-pick specific demos or subtasks
3. **Better Debugging**:
   - Inspect problematic frames without extracting archives
   - Verify data quality visually
   - Check action sequences frame-by-frame
4. **Transparent**: See the exact file structure and metadata organization
5. **Version Control**: Git LFS handles individual files better than large archives

### Converting to TAR/WebDataset

If you need TAR format for efficient streaming during training, you can easily convert:

```python
import webdataset as wds
from pathlib import Path
import json
from PIL import Image

def convert_to_tar(input_dir, output_pattern, maxcount=1000):
    """
    Convert raw PNG+JSON format to WebDataset TAR shards.

    Args:
        input_dir: Path to subtask directory (e.g., "libero_10/pick_up_the_black_bowl")
        output_pattern: Output pattern (e.g., "output/shard-%06d.tar")
        maxcount: Max samples per shard (default: 1000 frames per TAR)
    """
    with wds.ShardWriter(output_pattern, maxcount=maxcount) as sink:
        subtask_path = Path(input_dir)

        # Iterate through demos
        for demo_dir in sorted(subtask_path.iterdir()):
            if not demo_dir.is_dir():
                continue

            # Iterate through timesteps
            for json_file in sorted(demo_dir.glob("*.json")):
                png_file = json_file.with_suffix(".png")

                if not png_file.exists():
                    continue

                # Load data
                with open(json_file) as f:
                    data = json.load(f)

                # Create WebDataset sample
                sample = {
                    "__key__": f"{demo_dir.name}/{json_file.stem}",
                    "png": Image.open(png_file),
                    "json": data,
                    "action.pyd": data["action"],  # pickled list; convert with np.asarray when loading
                    "robot_state.pyd": data["robot_state"],
                }

                sink.write(sample)

# Example: Convert a subtask to TAR
convert_to_tar(
    "libero_10/pick_up_the_black_bowl",
    "tar_output/pick_up_the_black_bowl-%06d.tar"
)
```

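The shards written above can then be streamed back with `webdataset` at training time. A minimal sketch using the default decoders; the shard range below is only a placeholder and should be adjusted to whatever `convert_to_tar()` actually wrote:

```python
import webdataset as wds

# Adjust the brace range to the shard files convert_to_tar() actually wrote.
shards = "tar_output/pick_up_the_black_bowl-{000000..000003}.tar"

dataset = (
    wds.WebDataset(shards)
    .decode("rgb")                                      # PNG bytes -> float32 HxWx3 arrays in [0, 1]
    .to_tuple("png", "action.pyd", "robot_state.pyd")   # keys written by convert_to_tar()
)

for image, action, robot_state in dataset:
    print(image.shape, len(action))
    break
```
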
### Loading Raw Data

```python
from pathlib import Path
import json
from PIL import Image
import numpy as np

def load_demo(demo_dir):
    """Load a single demonstration."""
    frames = []
    demo_path = Path(demo_dir)

    for json_file in sorted(demo_path.glob("*.json")):
        # Load metadata
        with open(json_file) as f:
            data = json.load(f)

        # Load image
        png_file = json_file.with_suffix(".png")
        data["image"] = np.array(Image.open(png_file))

        frames.append(data)

    return frames

# Load a specific demo
demo = load_demo("libero_10/pick_up_the_black_bowl/demo_0")
print(f"Demo length: {len(demo)} frames")
print(f"Action dim: {len(demo[0]['action'])}")  # actions are stored as plain lists in the JSON
```

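Each frame is a plain dict, so a demonstration can be stacked into arrays for batching with a few lines. A small sketch building on `load_demo` above (field names follow the per-timestep JSON documented below):

```python
import numpy as np

demo = load_demo("libero_10/pick_up_the_black_bowl/demo_0")

# Stack per-frame fields into (T, ...) arrays.
images = np.stack([frame["image"] for frame in demo])                         # (T, 128, 128, 3) uint8
actions = np.asarray([frame["action"] for frame in demo], dtype=np.float32)   # (T, 7)
stops = np.asarray([frame["is_stop_signal"] for frame in demo])               # (T,) segment-boundary flags

print(images.shape, actions.shape, int(stops.sum()))
```
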
## Datasets Included

### LIBERO-10 (Long-Horizon Tasks)

- **Task Type**: 10 complex, long-horizon manipulation tasks
- **Segmentation Method**: Semantic Action Chunking using Gemini Vision API
- **Demos**: 1,354 demonstrations across 29 subtasks
- **Frames**: 103,650 total frames
- **Subtasks**: Tasks are automatically segmented into atomic subtasks

**Example Tasks**:
- `pick_up_the_black_bowl` → Segmented into pick and place subtasks
- `close_the_drawer` → Segmented into approach, grasp, close subtasks
- `put_the_bowl_in_the_drawer` → Multi-step pick, open, place, close sequence

### LIBERO-Object (Object Manipulation)

- **Task Type**: 10 object-centric manipulation tasks
- **Segmentation Method**: Rule-based gripper detection with stop signals
- **Demos**: 875 demonstrations across 20 subtasks
- **Frames**: 66,334 total frames
- **Subtasks**: Pick and place variations for 10 different objects

**Example Tasks**:
- `pick_up_the_alphabet_soup` → Approach, grasp, lift
- `place_the_alphabet_soup_on_the_basket` → Move, position, place, release

## Dataset Structure

```
gate-institute/GATE-VLAP-datasets/
├── libero_10/                                # Long-horizon tasks
│   ├── close_the_drawer/
│   │   ├── demo_0/
│   │   │   ├── demo_0_timestep_0000.png      # RGB observation (128x128)
│   │   │   ├── demo_0_timestep_0000.json     # Action + metadata
│   │   │   ├── demo_0_timestep_0001.png
│   │   │   ├── demo_0_timestep_0001.json
│   │   │   └── ...
│   │   ├── demo_1/
│   │   └── ...
│   ├── pick_up_the_black_bowl/
│   └── ... (29 subtasks total)
│
├── libero_object/                            # Object manipulation tasks
│   ├── pick_up_the_alphabet_soup/
│   │   ├── demo_0/
│   │   │   ├── demo_0_timestep_0000.png
│   │   │   ├── demo_0_timestep_0000.json
│   │   │   └── ...
│   │   └── ...
│   └── ... (20 subtasks total)
│
└── metadata/                                 # Dataset statistics & segmentation
    ├── libero_10_complete_stats.json
    ├── libero_10_all_segments.json
    ├── libero_object_complete_stats.json
    └── libero_object_all_segments.json
```

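Because the layout is just nested directories, it can be traversed with `pathlib` alone. For example, a quick sketch that counts demos and frames per subtask, assuming the repository has been downloaded locally with the structure shown above:

```python
from pathlib import Path

root = Path("libero_10")  # or Path("libero_object")

for subtask_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    demo_dirs = [d for d in subtask_dir.iterdir() if d.is_dir()]
    # One PNG per timestep, so counting PNGs gives the frame count.
    frame_count = sum(len(list(d.glob("*.png"))) for d in demo_dirs)
    print(f"{subtask_dir.name}: {len(demo_dirs)} demos, {frame_count} frames")
```
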
## Data Format

### JSON Metadata (per timestep)

Each `.json` file contains:

```json
{
  "action": [0.1, -0.2, 0.0, 0.0, 0.0, 0.0, 1.0],  // 7-DOF action (xyz, rpy, gripper)
  "robot_state": [...],                            // Joint positions, velocities
  "demo_id": "demo_0",
  "timestep": 42,
  "subtask": "pick_up_the_black_bowl",
  "parent_task": "LIBERO_10",
  "is_stop_signal": false                          // Segment boundary marker
}
```

### Action Space

- **Dimensions**: 7-DOF
  - `[0:3]`: End-effector position delta (x, y, z)
  - `[3:6]`: End-effector orientation delta (roll, pitch, yaw)
  - `[6]`: Gripper action (0.0 = close, 1.0 = open)
- **Range**: Normalized to [-1, 1]
- **Control**: Delta actions (relative to current pose)

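To make the layout concrete, here is how a single action vector can be unpacked; the values are taken from the JSON example above:

```python
import numpy as np

# Example action taken from the JSON sample above.
action = np.asarray([0.1, -0.2, 0.0, 0.0, 0.0, 0.0, 1.0], dtype=np.float32)

delta_pos = action[0:3]  # end-effector position delta (x, y, z)
delta_rpy = action[3:6]  # end-effector orientation delta (roll, pitch, yaw)
gripper = action[6]      # gripper command (0.0 = close, 1.0 = open)

print(delta_pos, delta_rpy, gripper)
```
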
### Image Format

- **Resolution**: 128×128 pixels
- **Channels**: RGB (3 channels)
- **Format**: PNG (lossless compression)
- **Camera**: Front-facing agentview camera

## Metadata Files Explained

### 1. `libero_10_complete_stats.json`

**Purpose**: Overview statistics for the entire LIBERO-10 dataset

```json
{
  "dataset": "LIBERO-10",
  "total_parent_tasks": 10,
  "total_subtasks": 29,
  "total_demos": 1354,
  "total_frames": 103650,
  "parent_task_mapping": {
    "LIBERO_10": {
      "frames": 103650,
      "demos": 1354,
      "subtasks": ["pick_up_the_black_bowl", "close_the_drawer", ...]
    }
  },
  "subtask_details": {
    "pick_up_the_black_bowl": {
      "demo_count": 48,
      "frame_count": 3516,
      "avg_frames_per_demo": 73.25,
      "parent_task": "LIBERO_10"
    },
    ...
  }
}
```

**Use Case**:
- Understand dataset composition
- Plan training splits
- Check demo/frame distribution across tasks

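For example, a short sketch that reads the stats file and prints the per-subtask distribution (key names follow the excerpt above; the path assumes the `metadata/` directory shown in the dataset structure):

```python
import json

with open("metadata/libero_10_complete_stats.json") as f:
    stats = json.load(f)

print(f"{stats['total_demos']} demos / {stats['total_frames']} frames "
      f"across {stats['total_subtasks']} subtasks")

# Sort subtasks by demo count to inspect the distribution before making splits.
for name, info in sorted(stats["subtask_details"].items(),
                         key=lambda kv: kv[1]["demo_count"], reverse=True):
    print(f"{name}: {info['demo_count']} demos, {info['frame_count']} frames")
```
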
### 2. `libero_10_all_segments.json`

**Purpose**: Detailed segmentation metadata for each demonstration

```json
{
  "demo_0": {
    "subtask": "pick_up_the_black_bowl",
    "parent_task": "LIBERO_10",
    "segments": [
      {
        "segment_id": 0,
        "start_frame": 0,
        "end_frame": 35,
        "description": "Approach the black bowl",
        "action_type": "reach"
      },
      {
        "segment_id": 1,
        "start_frame": 36,
        "end_frame": 45,
        "description": "Grasp the black bowl",
        "action_type": "grasp"
      },
      ...
    ],
    "segmentation_method": "gemini_vision_api",
    "total_segments": 3
  },
  ...
}
```

**Use Case**:
- Train with semantic action chunks
- Implement hierarchical policies
- Analyze action primitives
- Filter by segment type

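As an illustration, the segment boundaries can be used to slice a demonstration (loaded with `load_demo` above) into semantic chunks. In this sketch `end_frame` is treated as inclusive, which matches the contiguous boundaries in the excerpt above:

```python
import json

with open("metadata/libero_10_all_segments.json") as f:
    all_segments = json.load(f)

demo = load_demo("libero_10/pick_up_the_black_bowl/demo_0")  # see "Loading Raw Data"
segments = all_segments["demo_0"]["segments"]

for seg in segments:
    # end_frame treated as inclusive, matching the contiguous example boundaries.
    chunk = demo[seg["start_frame"] : seg["end_frame"] + 1]
    print(f"{seg['action_type']:>6}: {len(chunk):3d} frames - {seg['description']}")
```
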
### 3. `libero_object_complete_stats.json`

**Purpose**: Statistics for the LIBERO-Object dataset (same structure as LIBERO-10)

**Key Differences**:
- Fewer, simpler subtasks (20 vs 29)
- Object-centric task naming
- Rule-based segmentation instead of vision-based

### 4. `libero_object_all_segments.json`

**Purpose**: Segmentation for LIBERO-Object demonstrations

**Segmentation Method**: Rule-based gripper detection
- Segments identified by gripper state changes
- Stop signals mark task completion
- More consistent segment boundaries than vision-based segmentation

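The exact rules live in the metadata rather than in this README, but the idea can be sketched in a few lines: watch the gripper channel of the action (index 6, see Action Space above) and start a new segment when it flips, with `is_stop_signal` marking completion. This is only a simplified illustration of the approach, not the implementation used to produce the segment files:

```python
def gripper_change_boundaries(frames, threshold=0.5):
    """Illustrative sketch only: propose segment boundaries where the gripper command
    crosses a threshold or a stop signal is raised. Not the exact rule used here."""
    boundaries = [0]
    prev_closed = frames[0]["action"][6] < threshold
    for i, frame in enumerate(frames[1:], start=1):
        closed = frame["action"][6] < threshold
        if closed != prev_closed or frame.get("is_stop_signal", False):
            boundaries.append(i)
            prev_closed = closed
    return boundaries
```
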
## Citation

If you use this dataset, please cite:

```bibtex
@article{gateVLAP2024,
  title={GATE-VLAP: Grounded Action Trajectory Embeddings with Vision-Language Action Planning},
  author={[Your Name]},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2024}
}

@inproceedings{liu2023libero,
  title={LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning},
  author={Liu, Bo and Zhu, Yifeng and Gao, Chongkai and Feng, Yihao and Liu, Qiang and Zhu, Yuke and Stone, Peter},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2023}
}
```

## Related Resources

- **Model Checkpoints**: [gate-institute/GATE-VLAP](https://huggingface.co/gate-institute/GATE-VLAP) *(coming soon)*
- **Original LIBERO**: [https://github.com/Lifelong-Robot-Learning/LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO)
- **Paper**: [arXiv:XXXX.XXXXX](https://arxiv.org) *(coming soon)*

## Acknowledgments

- **LIBERO Benchmark**: Original dataset by Liu et al. (2023)
- **Segmentation**: Gemini Vision API for LIBERO-10 semantic chunking
- **Infrastructure**: Processed on GATE Institute infrastructure

## Contact

For questions or issues, please open an issue on our [GitHub repository](https://github.com/your-repo) or contact [[email protected]].

---

**Dataset Version**: 1.0
**Last Updated**: December 2025
**Maintainer**: GATE Institute