meta-llama/llama-2-7b-chat-hf, fine-tuned for 215 steps on meta-math/MetaMathQA-40K. Training loss: 0.756800.
Source code
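
A minimal inference sketch is given below. The repo id is a placeholder (the actual checkpoint location is not stated above), and loading assumes access to the gated Llama-2 base weights plus the `transformers`, `torch`, and `accelerate` packages:

```python
# Hypothetical usage sketch; "your-username/llama-2-7b-chat-metamath" is a
# placeholder repo id, not the actual location of the fine-tuned weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/llama-2-7b-chat-metamath"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

# Prompt format follows the question/answer style of MetaMathQA.
prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```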