salma-remyx committed verified commit 9cb2773 (1 parent: e56f5fb)

Update README.md

Files changed (1): README.md (+44 -0)
---
license: llama3.1
datasets:
- remyxai/vqasynth_spacellava
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/Z7kEAxSxvpYkKNjBLm6GY.png)

# Model Card for SpaceLLaVA

**SpaceLlama3.1** uses [llama3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) as the LLM backbone, along with the fused DINOv2+SigLIP features of [prismatic-vlms](https://github.com/TRI-ML/prismatic-vlms), for a full fine-tune on a [dataset](https://huggingface.co/datasets/remyxai/vqasynth_spacellava) designed with [VQASynth](https://github.com/remyxai/VQASynth/tree/main) to enhance spatial reasoning, as in [SpatialVLM](https://spatial-vlm.github.io/).
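Since the checkpoint follows the prismatic-vlms architecture, inference presumably goes through that library's `load()` / `generate()` interface rather than plain `transformers`. Below is a minimal sketch based on the prismatic-vlms documentation; the repo id, image URL, question, and sampling settings are illustrative assumptions, and you may need to download the checkpoint locally first and pass its path to `load()`.

```python
# Minimal inference sketch via the prismatic-vlms library (https://github.com/TRI-ML/prismatic-vlms).
# The repo id, image URL, and question below are illustrative assumptions, not part of this card.
import requests
import torch
from PIL import Image
from prismatic import load

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the VLM (assumed repo id; a local checkpoint directory also works)
vlm = load("remyxai/SpaceLlama3.1")
vlm.to(device, dtype=torch.bfloat16)

# Fetch an image and ask a spatial-reasoning question
image_url = "https://example.com/warehouse.jpg"  # placeholder image URL
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
question = "How far is the forklift from the pallet of boxes?"

# Build the chat-style prompt expected by the Llama 3.1 backbone
prompt_builder = vlm.get_prompt_builder()
prompt_builder.add_turn(role="human", message=question)
prompt_text = prompt_builder.get_prompt()

# Generate an answer
answer = vlm.generate(
    image,
    prompt_text,
    do_sample=True,
    temperature=0.4,
    max_new_tokens=128,
)
print(answer)
```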

## Model Details

### Model Description

This model uses data synthesis techniques and publicly available models to reproduce the work described in SpatialVLM, enhancing the spatial reasoning of multimodal models.
With a pipeline of expert models, we can infer spatial relationships between objects in a scene to create a VQA dataset for spatial reasoning.
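To make that synthesis step concrete, here is a schematic sketch of such an expert-model pipeline. It is not VQASynth's actual API: every function below is a placeholder stub standing in for a real component (open-vocabulary detection, metric depth estimation, lifting to 3D), and the returned objects are dummies for illustration only.

```python
# Schematic sketch of a SpatialVLM-style synthesis loop (placeholder stubs, NOT the VQASynth API).
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    center_xyz: tuple[float, float, float]  # metric 3D position lifted from a depth map

def detect_and_lift(image_path: str) -> list[SceneObject]:
    """Placeholder for the expert-model stage: detect objects, estimate depth, lift to 3D."""
    return [SceneObject("chair", (0.4, 0.0, 1.2)), SceneObject("table", (1.1, 0.0, 1.5))]

def distance_m(a: SceneObject, b: SceneObject) -> float:
    # Euclidean distance between object centers, in meters
    return sum((p - q) ** 2 for p, q in zip(a.center_xyz, b.center_xyz)) ** 0.5

def to_vqa_pairs(objects: list[SceneObject]) -> list[dict]:
    """Turn pairwise metric relations into templated question/answer pairs."""
    pairs = []
    for i, a in enumerate(objects):
        for b in objects[i + 1:]:
            pairs.append({
                "question": f"How far is the {a.name} from the {b.name}?",
                "answer": f"The {a.name} is about {distance_m(a, b):.1f} meters from the {b.name}.",
            })
    return pairs

print(to_vqa_pairs(detect_and_lift("scene.jpg")))
```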

- **Developed by:** remyx.ai
- **Model type:** Multimodal Model, Vision-Language Model, Prismatic VLM, Llama 3.1
- **License:** Apache-2.0
- **Finetuned from model:** Llama 3.1

### Model Sources

- **Dataset:** [SpaceLLaVA](https://huggingface.co/datasets/remyxai/vqasynth_spacellava)
- **Repository:** [VQASynth](https://github.com/remyxai/VQASynth/tree/main)
- **Paper:** [SpatialVLM](https://arxiv.org/abs/2401.12168)

## Citation

```
@article{chen2024spatialvlm,
  title   = {SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities},
  author  = {Chen, Boyuan and Xu, Zhuo and Kirmani, Sean and Ichter, Brian and Driess, Danny and Florence, Pete and Sadigh, Dorsa and Guibas, Leonidas and Xia, Fei},
  journal = {arXiv preprint arXiv:2401.12168},
  year    = {2024},
  url     = {https://arxiv.org/abs/2401.12168},
}

@inproceedings{karamcheti2024prismatic,
  title     = {Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models},
  author    = {Siddharth Karamcheti and Suraj Nair and Ashwin Balakrishna and Percy Liang and Thomas Kollar and Dorsa Sadigh},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2024},
}
```