Update README.md

README.md
@@ -25,7 +25,9 @@ pipeline_tag: text-generation
 ### Used Datasets
 - Orca-style dataset
 - Alpaca-style dataset
-- No other
+- No other dataset was used except for the datasets mentioned above
+- No benchmark test sets or training sets were used
+
 
 ### Prompt Template
 ```
@@ -41,7 +43,7 @@ pipeline_tag: text-generation
 
 ## Usage
 
--
+- The following was tested on an A100 80GB GPU
 - Our model can handle up to 10k+ input tokens, thanks to the `rope_scaling` option
 
 ```python
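
The body of the usage snippet is elided from this diff; only its closing `tokenizer.decode` line surfaces as hunk context below. As a minimal sketch of the `rope_scaling` usage the new bullet describes, assuming the `upstage/Llama-2-70b-instruct-v2` checkpoint, a float16 dtype, and a dynamic scaling factor of 2 (none of these values are confirmed by this commit):

```python
# Minimal sketch of loading the model with RoPE scaling enabled.
# The checkpoint name, dtype, and scaling factor below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "upstage/Llama-2-70b-instruct-v2"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    rope_scaling={"type": "dynamic", "factor": 2},  # allows 10k+ input tokens
)

prompt = "### User:\nWhat is a large language model?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```

With `"type": "dynamic"`, the RoPE frequencies are only stretched once an input exceeds the base context length, which is why longer inputs remain usable without retraining.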
@@ -74,7 +76,7 @@ output_text = tokenizer.decode(output[0], skip_special_tokens=True)
 ## Evaluation Results
 
 ### Overview
-- We conducted a performance evaluation
+- We conducted a performance evaluation following the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
 We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`
 We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
 - We used [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions, to evaluate the models
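
The hunk below pins the harness to the commit linked above. As a hedged sketch of scoring one of the four tasks through the harness's Python API at that commit, mirroring the leaderboard's 25-shot setting for `ARC-Challenge` (the model id is an assumption):

```python
# Hedged sketch: score ARC-Challenge with lm-evaluation-harness pinned to
# commit b281b092... The other leaderboard tasks would be hellaswag (10-shot),
# the hendrycksTest-* MMLU subjects (5-shot), and truthfulqa_mc (0-shot).
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=upstage/Llama-2-70b-instruct-v2",  # assumed model id
    tasks=["arc_challenge"],
    num_fewshot=25,  # Open LLM Leaderboard convention for ARC-Challenge
    batch_size=1,
)
print(results["results"]["arc_challenge"])
```

Each benchmark keeps its own few-shot count, so the four tasks are run as separate calls.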
@@ -102,12 +104,9 @@ git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
 cd lm-evaluation-harness
 ```
 
-## Ethical Issues
-
-### Ethical Considerations
-- There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process
-
 ## Contact Us
 
-###
-- [Upstage](https://en.upstage.ai)
+### About Upstage
+- [Upstage](https://en.upstage.ai) is a company specializing in Large Language Models (LLMs) and AI. We can help you build private LLMs and related applications.
+If you have a dataset for building domain-specific LLMs or LLM applications, please contact us at ► [click here to contact](https://www.upstage.ai/private-llm?utm_source=huggingface&utm_medium=link&utm_campaign=privatellm)
+- As of August 1st, our 70B model has reached the top spot in the Open LLM Leaderboard rankings, making it the current leading performer globally.