| modelId (string, 6–118 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–223M) | likes (int64, 0–7.74k) | library_name (264 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (52 classes) | createdAt (unknown) | card (string, 1–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
damgomz/ft_16_4e6_base_x8 | damgomz | "2024-06-22T15:08:12" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-23T11:11:20" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
    We imagine, design and commercialize innovative off-grid systems that aim to generate
    power at sea, stabilize and collect data. The success of our low-power WAVEPEAL
    platforms enabled us to scale up the device to WAVEGEM, the 150-kW capacity
    platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 86142.15871238708 |
| Emissions (CO2eq in kg)  | 0.0521259050887542 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.0169533443964198 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0897302735470235 |
| Consumed energy (kWh) | 1.1066836179434458 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
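As a consistency check (not part of the original card), the energy figures above follow directly from energy = power × duration. A minimal sketch reproducing the table values:

```python
# Energy [kWh] = power [W] * duration [s] / 3.6e6; values taken from the table above.
duration_s = 86142.15871238708  # Duration (in seconds)
cpu_power_w = 42.5              # CPU power (W)
ram_power_w = 3.75              # RAM power (W)

cpu_energy_kwh = cpu_power_w * duration_s / 3.6e6  # ~1.0170, matches "CPU energy"
ram_energy_kwh = ram_power_w * duration_s / 3.6e6  # ~0.0897, matches "RAM energy"
print(cpu_energy_kwh + ram_energy_kwh)             # ~1.1067, matches "Consumed energy"
```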
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16582365552134512 |
| Emissions (CO2eq in kg)  | 0.0337390121623516 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_4e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 4e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
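The configuration above maps onto a standard `transformers` fine-tuning setup. The sketch below is a hypothetical reconstruction, not the author's training script; the dataset objects and trainer wiring are placeholders:

```python
# Hypothetical reconstruction of the config table with the transformers Trainer.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")   # checkpoint
model = AutoModelForSequenceClassification.from_pretrained("albert-base-v2")

args = TrainingArguments(
    output_dir="ft_16_4e6_base_x8",   # model_name
    num_train_epochs=6,               # num_epoch
    learning_rate=4e-06,              # learning_rate
    per_device_train_batch_size=16,   # batch_size
    weight_decay=0.0,                 # weight_decay
    warmup_ratio=0.0,                 # warm_up_prop
)
# Texts would be tokenized with max_length=400 (sequence_length) and split
# 80/20 (train_test_split) before being passed to the Trainer:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
```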
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.724495 | 0.424528 |
| 1 | 0.355208 | 0.249139 | 0.917037 |
| 2 | 0.224822 | 0.240320 | 0.933497 |
| 3 | 0.184909 | 0.222563 | 0.904180 |
| 4 | 0.153819 | 0.218022 | 0.915023 |
| 5 | 0.119465 | 0.246321 | 0.914588 |
| 6 | 0.090303 | 0.268250 | 0.914945 |
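For reference, the F-beta score reported above is the weighted harmonic mean of precision and recall (the card does not state which value of beta was used):

```latex
F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}
```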
|
risolmayo/4c44a1ff-f68c-47b7-b6c0-1e2d7bc753eb | risolmayo | "2025-02-07T01:03:00" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | "2025-02-07T00:51:09" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4c44a1ff-f68c-47b7-b6c0-1e2d7bc753eb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
  - 90559523681be0ab_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/90559523681be0ab_train_data.json
  type:
    field_input: abstract
    field_instruction: prompt
    field_output: y_true
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: risolmayo/4c44a1ff-f68c-47b7-b6c0-1e2d7bc753eb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 7.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/90559523681be0ab_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b3e5bfbd-fea9-49bb-bacf-9d1430738190
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b3e5bfbd-fea9-49bb-bacf-9d1430738190
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4c44a1ff-f68c-47b7-b6c0-1e2d7bc753eb
This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0006
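A minimal usage sketch (assuming the standard `peft` adapter-loading API; this is not part of the auto-generated card):

```python
# Load the LoRA adapter on top of its base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B")
model = PeftModel.from_pretrained(base, "risolmayo/4c44a1ff-f68c-47b7-b6c0-1e2d7bc753eb")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B")
```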
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.95) and epsilon=1e-8, set via optimizer_args (overriding the defaults betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1267 | 0.0022 | 1 | 0.3185 |
| 0.021 | 0.1079 | 50 | 0.0043 |
| 0.0652 | 0.2157 | 100 | 0.0009 |
| 0.0285 | 0.3236 | 150 | 0.0008 |
| 0.0014 | 0.4315 | 200 | 0.0006 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammoss/ddpm-128-epsilon-1000-sigmoid-run2 | adammoss | "2023-06-11T08:20:30" | 4 | 0 | diffusers | [
"diffusers",
"diffusers:DDPMConditionPipeline",
"region:us"
] | null | "2023-06-11T01:21:44" | Entry not found |
EPFL-VILAB/4M_tokenizers_DINOv2-B14-global_8k_16_224 | EPFL-VILAB | "2024-06-14T08:22:25" | 50 | 1 | ml-4m | [
"ml-4m",
"safetensors",
"arxiv:2312.06647",
"arxiv:2406.09406",
"license:other",
"region:us"
] | null | "2024-06-12T08:49:11" | ---
license: other
license_name: sample-code-license
license_link: LICENSE
library_name: ml-4m
---
# 4M: Massively Multimodal Masked Modeling
*A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.*
[`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation)
Official implementation and pre-trained models for:
[**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br>
*[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*
[**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**](https://arxiv.org/abs/2406.09406), arXiv 2024 <br>
*[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*
4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities.
Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models.
We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21).
## Installation
For install instructions, please see https://github.com/apple/ml-4m.
## Usage
The DINOv2-B/14 global embedding tokenizer can be loaded from Hugging Face Hub as follows:
```python
from fourm.vq.vqvae import VQVAE
tok_dinov2_global = VQVAE.from_pretrained('EPFL-VILAB/4M_tokenizers_DINOv2-B14-global_8k_16_224')
```
Please see https://github.com/apple/ml-4m/blob/main/README_TOKENIZATION.md for more detailed instructions and https://github.com/apple/ml-4m for other tokenizer and 4M model checkpoints.
## Citation
If you find this repository helpful, please consider citing our work:
```
@inproceedings{4m,
title={{4M}: Massively Multimodal Masked Modeling},
author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
}
@article{4m21,
title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities},
author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir},
journal={arXiv 2024},
year={2024},
}
```
## License
The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file. |
huggingartists/the-beatles | huggingartists | "2022-02-27T11:47:43" | 7 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/the-beatles",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05" | ---
language: en
datasets:
- huggingartists/the-beatles
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c771d3ee1c0969503cdaf34edf76f38a.400x400x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Beatles</div>
<a href="https://genius.com/artists/the-beatles">
<div style="text-align: center; font-size: 14px;">@the-beatles</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from The Beatles.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/the-beatles)
and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-beatles")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2p2c5864/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The Beatles's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/286vzjah) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/286vzjah/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingartists/the-beatles')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
# Note: AutoModelWithLMHead is deprecated in recent transformers releases;
# AutoModelForCausalLM is the modern equivalent for this GPT-2 model.
tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-beatles")
model = AutoModelWithLMHead.from_pretrained("huggingartists/the-beatles")
```
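A possible continuation (not from the original card) showing generation with the tokenizer and model loaded above:

```python
# Sample a short lyric continuation from the prompt "I am".
inputs = tokenizer("I am", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```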
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)
[![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
|
ssharm87/t5-small-finetuned-xsum-ss | ssharm87 | "2022-09-18T17:13:52" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-09-18T07:18:08" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-ss
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 26.3663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-ss
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5823
- Rouge1: 26.3663
- Rouge2: 6.4727
- Rougel: 20.538
- Rougelsum: 20.5411
- Gen Len: 18.8006
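A hypothetical usage sketch (not part of the auto-generated card) with the summarization pipeline:

```python
# Summarize a document with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="ssharm87/t5-small-finetuned-xsum-ss")
article = "Insert a news article here..."  # placeholder input
print(summarizer(article, max_length=60)[0]["summary_text"])
```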
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.8125 | 0.25 | 3189 | 2.5823 | 26.3663 | 6.4727 | 20.538 | 20.5411 | 18.8006 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Arshik/mistral-chat-model | Arshik | "2024-05-13T18:22:56" | 137 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-13T18:08:23" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Wonder-Griffin/TraXLMistralv1 | Wonder-Griffin | "2024-09-10T13:34:20" | 48 | 0 | transformers | [
"transformers",
"safetensors",
"TraXLMistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-10T13:34:00" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danielkty22/bert-base-uncased-finetune-squad-ep-1.0-lr-1e-05-wd-0.001-dp-0.2-ss-0-st-False-fh-False-hs-700 | danielkty22 | "2023-09-28T21:38:05" | 132 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-09-28T21:33:40" | Entry not found |
mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF | mradermacher | "2024-12-15T00:01:10" | 31 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:EpistemeAI/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent",
"base_model:quantized:EpistemeAI/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-14T23:15:15" | ---
base_model: EpistemeAI/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EpistemeAI/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
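As an illustration (not from the original card), one of the files below can be loaded with `llama-cpp-python`; the chosen quant file name is just an example:

```python
# Run a quantized GGUF file locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q4_K_M.gguf")
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```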
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent-GGUF/resolve/main/Polypsyche-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-auto-divergent.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bond005/ruT5-ASR-large | bond005 | "2024-05-02T16:30:24" | 106 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"PyTorch",
"Transformers",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-22T13:59:14" | ---
language: ru
license: apache-2.0
tags:
- PyTorch
- Transformers
widget:
- text: краеугольным камнем любышь алгоритных машиного обучения является преждес его обобщающая способности тогда мы обучаем некоторую модель у нас есть обучающая выборка унаситькюмся ошибки и наша задачи сводится вообщем такомптиминационной задачи мы минимизируем в функцию ошибки по параметрам нашей модели на обучающие выбрать но на самом деле хотим там и не этого мы не обучающую ошибку хотим минимизировать
- text: это ашибкараспознаваня
---
# ruT5-ASR-large
Model was trained by [bond005](https://scholar.google.ru/citations?user=3AJKH38AAAAJ) to correct errors, restore punctuation and capitalization in the ASR output (in particular, output of [Wav2Vec2-Large-Ru-Golos](https://huggingface.co/bond005/wav2vec2-large-ru-golos)). The model is based on [ruT5-large](https://huggingface.co/ai-forever/ruT5-large).
## Usage
To correct ASR outputs the model can be used as a standalone sequence-to-sequence model as follows:
```python
from transformers import T5ForConditionalGeneration
from transformers import GenerationConfig
from transformers import T5Tokenizer
import torch


def restore_text(text: str, tokenizer: T5Tokenizer, config: GenerationConfig,
                 model: T5ForConditionalGeneration) -> str:
    if len(text) == 0:  # if an input text is empty, then we return an empty text too
        return ''
    x = tokenizer(text, return_tensors='pt', padding=True).to(model.device)
    max_size = int(x.input_ids.shape[1] * 2.0 + 10)
    min_size = 3
    if x.input_ids.shape[1] <= min_size:
        return text
    out = model.generate(**x, generation_config=config, max_length=max_size)
    res = tokenizer.decode(out[0], skip_special_tokens=True).strip()
    return ' '.join(res.split())


# load model and tokenizer
tokenizer_for_restoring = T5Tokenizer.from_pretrained('bond005/ruT5-ASR-large')
model_for_restoring = T5ForConditionalGeneration.from_pretrained('bond005/ruT5-ASR-large')
config_for_restoring = GenerationConfig.from_pretrained('bond005/ruT5-ASR-large')

if torch.cuda.is_available():
    model_for_restoring = model_for_restoring.cuda()

input_examples = [
    'краеугольным камнем любышь алгоритных машиного обучения является преждес его '
    'обобщающая способности тогда мы обучаем некоторую модель у нас есть обучающая '
    'выборка унаситькюмся ошибки и наша задачи сводится вообщем такомптиминационной '
    'задачи мы минимизируем в функцию ошибки по параметрам нашей модели на обучающие '
    'выбрать но на самом деле хотим там и не этого '
    'мы не обучающую ошибку хотим минимизировать',  # 0
    'максимально ухучать идеальную систему в воде туда какие то элементы или условия '
    'чтобы итоговое результат должен быть такой мы должны в двадцать два раза '
    'замедлить нашу разработку'  # 1
]

for idx, val in enumerate(input_examples):
    restored = restore_text(val, tokenizer_for_restoring,
                            config_for_restoring, model_for_restoring)
    print('==========')
    print(f'Example {idx + 1}')
    print('==========')
    print('')
    print('ASR output before restoring:')
    print('')
    print(val)
    print('')
    print('After restoring:')
    print('')
    print(restored)
    print('')
```
```text
==========
Example 1
==========
ASR output before restoring:
краеугольным камнем любышь алгоритных машиного обучения является преждес его обобщающая способности тогда мы обучаем некоторую модель у нас есть обучающая выборка унаситькюмся ошибки и наша задачи сводится вообщем такомптиминационной задачи мы минимизируем в функцию ошибки по параметрам нашей модели на обучающие выбрать но на самом деле хотим там и не этого мы не обучающую ошибку хотим минимизировать
After restoring:
Краеугольным камнем любого алгоритма машинного обучения является прежде всего его общая способность. Тогда мы обучаем некоторую модель, у нас есть обучающая выборка, у нас есть критическая ошибка, и наша задача сводится в общем к компенсационной задаче. Мы минимизируем функцию ошибки по параметрам нашей модели на обучающую выборку, но на самом деле хотим там и не этого. Мы не обучающую ошибку хотим минимизировать.
==========
Example 2
==========
ASR output before restoring:
максимально ухучать идеальную систему в воде туда какие то элементы или условия чтобы итоговое результат должен быть такой мы должны в двадцать два раза замедлить нашу разработку
After restoring:
Максимально ухудшать идеальную систему, вводить туда какие-то элементы или условия. Чтобы итоговый результат должен быть такой, мы должны в 22 раза замедлить нашу разработку.
``` |
vwxyzjn/ppo_mistral_vllm_1e-6_kl_0.15 | vwxyzjn | "2024-06-04T06:59:00" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-04T06:57:55" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- generated_from_trainer
model-index:
- name: ppo_mistral_vllm_1e-6_kl_0.15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ppo_mistral_vllm_1e-6_kl_0.15
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 64
- total_train_batch_size: 448
- total_eval_batch_size: 56
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
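The effective batch size above follows from the per-device batch size, gradient accumulation, and device count; a one-line consistency check (not part of the generated card):

```python
# total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
assert 1 * 64 * 7 == 448
```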
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
kostiantynk/485cd22e-1e21-409d-9063-3ccd7cbd9d94 | kostiantynk | "2025-01-20T20:26:48" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-01-20T20:24:02" | ---
library_name: peft
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 485cd22e-1e21-409d-9063-3ccd7cbd9d94
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - ec6cbf6a5412fc88_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/ec6cbf6a5412fc88_train_data.json
  type:
    field_input: ar
    field_instruction: en
    field_output: eg
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/485cd22e-1e21-409d-9063-3ccd7cbd9d94
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ec6cbf6a5412fc88_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 653b74b9-69fc-4955-a291-8ce0203e80d5
wandb_project: Birthday-SN56-7-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 653b74b9-69fc-4955-a291-8ce0203e80d5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 485cd22e-1e21-409d-9063-3ccd7cbd9d94
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7927 | 0.0009 | 1 | nan |
| 0.0 | 0.0027 | 3 | nan |
| 0.0 | 0.0055 | 6 | nan |
| 1.522 | 0.0082 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
stevennguyen/jankgpt | stevennguyen | "2025-02-05T15:05:39" | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-31T15:40:09" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: jankgpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jankgpt
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
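The cosine schedule with warmup used above can be instantiated directly from `transformers`; a minimal illustration (the tiny model is a stand-in, not the actual GPT-2 checkpoint):

```python
# Cosine learning-rate schedule with 1000 warmup steps, as in the list above.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)  # learning_rate from above
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=1000, num_training_steps=3700)  # 3700 = final step
```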
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1021 | 0.08 | 100 | 3.0845 |
| 3.0967 | 0.16 | 200 | 3.0801 |
| 3.0836 | 0.24 | 300 | 3.0788 |
| 3.0587 | 0.32 | 400 | 3.0834 |
| 3.0541 | 0.4 | 500 | 3.0850 |
| 3.0305 | 0.48 | 600 | 3.0981 |
| 3.0016 | 0.55 | 700 | 3.1095 |
| 2.9926 | 0.63 | 800 | 3.1197 |
| 2.9714 | 0.71 | 900 | 3.1251 |
| 2.9671 | 0.79 | 1000 | 3.1399 |
| 2.9813 | 0.87 | 1100 | 3.1221 |
| 3.0625 | 0.95 | 1200 | 3.0943 |
| 3.035 | 1.03 | 1300 | 3.0655 |
| 2.9603 | 1.11 | 1400 | 3.0517 |
| 2.9562 | 1.19 | 1500 | 3.0356 |
| 2.927 | 1.27 | 1600 | 3.0243 |
| 2.936 | 1.35 | 1700 | 3.0045 |
| 2.9404 | 1.43 | 1800 | 2.9877 |
| 2.9226 | 1.51 | 1900 | 2.9740 |
| 2.9015 | 1.59 | 2000 | 2.9549 |
| 2.8955 | 1.66 | 2100 | 2.9432 |
| 2.8738 | 1.74 | 2200 | 2.9272 |
| 2.8663 | 1.82 | 2300 | 2.9138 |
| 2.8515 | 1.9 | 2400 | 2.8980 |
| 2.8433 | 1.98 | 2500 | 2.8859 |
| 2.7071 | 2.06 | 2600 | 2.8801 |
| 2.6697 | 2.14 | 2700 | 2.8721 |
| 2.6623 | 2.22 | 2800 | 2.8631 |
| 2.6561 | 2.3 | 2900 | 2.8551 |
| 2.6604 | 2.38 | 3000 | 2.8465 |
| 2.6372 | 2.46 | 3100 | 2.8402 |
| 2.6279 | 2.54 | 3200 | 2.8320 |
| 2.6209 | 2.62 | 3300 | 2.8264 |
| 2.6192 | 2.69 | 3400 | 2.8226 |
| 2.605 | 2.77 | 3500 | 2.8204 |
| 2.6054 | 2.85 | 3600 | 2.8187 |
| 2.6183 | 2.93 | 3700 | 2.8181 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+rocm5.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
umm-maybe/StarCoder-1B-StackStar2 | umm-maybe | "2024-04-27T01:30:35" | 138 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-27T01:29:18" | Entry not found |
ravikumar101/mistral-7b-v3 | ravikumar101 | "2023-12-22T08:06:11" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-22T08:00:02" | ---
license: apache-2.0
---
|
JuniperChinenye/missu12 | JuniperChinenye | "2025-01-16T09:12:27" | 44 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-16T09:09:47" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
roleplaiapp/MN-12B-Mag-Mell-R1-IQ3_M-GGUF | roleplaiapp | "2025-01-27T13:05:13" | 5 | 0 | transformers | [
"transformers",
"gguf",
"12b",
"IQ3_M",
"iq3",
"llama-cpp",
"mag",
"mell",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-27T13:04:29" | ---
library_name: transformers
pipeline_tag: text-generation
tags:
- 12b
- IQ3_M
- gguf
- iq3
- llama-cpp
- mag
- mell
- text-generation
---
# roleplaiapp/MN-12B-Mag-Mell-R1-IQ3_M-GGUF
**Repo:** `roleplaiapp/MN-12B-Mag-Mell-R1-IQ3_M-GGUF`
**Original Model:** `MN-12B-Mag-Mell-R1`
**Quantized File:** `MN-12B-Mag-Mell-R1.IQ3_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `IQ3_M`
## Overview
This is a GGUF IQ3_M quantized version of MN-12B-Mag-Mell-R1.
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).
|
juhoon01/ko_llama3_model_shinhan_2 | juhoon01 | "2024-11-04T01:30:30" | 76 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-11-04T01:16:59" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
allenai/ivila-block-layoutlm-finetuned-s2vl-v2 | allenai | "2022-10-03T21:12:49" | 115 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlm",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-04-29T22:23:38" | ---
language: en
---
|
mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF | mradermacher | "2024-10-31T23:04:40" | 8 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted",
"base_model:quantized:sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-31T22:50:33" | ---
base_model: sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
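As a concrete starting point, here is a sketch for fetching a single quant with the huggingface_hub Python API (the filename matches the Q4_K_M row in the Provided Quants table below):
```python
# Minimal sketch, assuming huggingface_hub is installed
# (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF",
    filename="OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded quant
```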
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted-GGUF/resolve/main/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Sailor-14B-Chat-GGUF | mradermacher | "2024-05-17T01:36:29" | 88 | 0 | transformers | [
"transformers",
"gguf",
"multilingual",
"sea",
"sailor",
"sft",
"chat",
"instruction",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"dataset:CohereForAI/aya_dataset",
"dataset:CohereForAI/aya_collection",
"dataset:Open-Orca/OpenOrca",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:openbmb/UltraFeedback",
"base_model:sail/Sailor-14B-Chat",
"base_model:quantized:sail/Sailor-14B-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-17T00:46:37" | ---
base_model: sail/Sailor-14B-Chat
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
- HuggingFaceH4/ultrachat_200k
- openbmb/UltraFeedback
language:
- en
- zh
- id
- th
- vi
- ms
- lo
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/sail/Sailor-14B-Chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
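As a concrete starting point, here is a minimal chat sketch with llama-cpp-python (assumed installed); the filename matches the Q4_K_M row in the table below and the message is illustrative:
```python
# Minimal chat sketch; llama-cpp-python handles the chat template.
from llama_cpp import Llama

llm = Llama(model_path="Sailor-14B-Chat.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Halo! Apa kabar?"}],  # "Hello! How are you?" (Indonesian)
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```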
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q2_K.gguf) | Q2_K | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.IQ3_XS.gguf) | IQ3_XS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.IQ3_S.gguf) | IQ3_S | 6.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q3_K_S.gguf) | Q3_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.IQ3_M.gguf) | IQ3_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q3_K_L.gguf) | Q3_K_L | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.IQ4_XS.gguf) | IQ4_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q4_K_M.gguf) | Q4_K_M | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q5_K_S.gguf) | Q5_K_S | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q6_K.gguf) | Q6_K | 12.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-Chat-GGUF/resolve/main/Sailor-14B-Chat.Q8_0.gguf) | Q8_0 | 15.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kanh1/kanha-0.1-2.5-Mistral-7B | kanh1 | "2024-01-25T09:48:31" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-25T09:46:01" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
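In the meantime, a minimal sketch is given below, assuming this is a standard Mistral-architecture causal LM on the Hub (repo id from the card header; the prompt and settings are illustrative):
```python
# Minimal sketch; fp16 weights on GPU are assumed (requires accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanh1/kanha-0.1-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Tell me about AI", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```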
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ayushdh96/Sarcasm_Detection_Distill_Bert_Fine_Tuned | ayushdh96 | "2025-01-11T02:52:18" | 15 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"feature-extraction",
"text-classification",
"sarcasm-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-11T02:41:40" | ---
library_name: transformers
tags: [text-classification, sarcasm-detection]
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
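In the meantime, a minimal sketch is given below, assuming the checkpoint works with the standard text-classification pipeline (repo id from the card header; the input is illustrative):
```python
# Minimal sketch for sarcasm detection with the standard pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ayushdh96/Sarcasm_Detection_Distill_Bert_Fine_Tuned",
)
print(classifier("Oh great, another Monday. Just what I needed."))
```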
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheBloke/Synthia-7B-GPTQ | TheBloke | "2023-09-27T12:45:55" | 18 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-08-22T14:39:41" | ---
license: llama2
model_name: Synthia 7b
base_model: migtissera/Synthia-7b
inference: false
model_creator: Migel Tissera
model_type: llama
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Synthia 7b - GPTQ
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Synthia 7b](https://huggingface.co/migtissera/Synthia-7b)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Migel Tissera's Synthia 7b](https://huggingface.co/migtissera/Synthia-7b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-7B-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
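For illustration, here is how those parameters map onto transformers' `GPTQConfig` when producing a quant like the `main`-branch file (per-branch values are in the table below). This is a sketch of the general recipe, assuming `optimum` and `auto-gptq` are installed, not the exact pipeline used for these files.
```python
# Sketch of quantizing the original model with the parameters
# from the main branch row below.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

tokenizer = AutoTokenizer.from_pretrained("migtissera/Synthia-7b")
gptq_config = GPTQConfig(
    bits=4,               # Bits
    group_size=128,       # GS
    desc_act=False,       # Act Order
    damp_percent=0.1,     # Damp %
    dataset="wikitext2",  # GPTQ calibration dataset
    tokenizer=tokenizer,
)
model = AutoModelForCausalLM.from_pretrained(
    "migtissera/Synthia-7b",
    quantization_config=gptq_config,
    device_map="auto",
)
```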
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Synthia-7B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Synthia-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Synthia-7B-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Synthia-7B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Synthia-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Synthia-7B-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Synthia-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Synthia-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Migel Tissera's Synthia 7b
|
l3utterfly/open-llama-3b-v2-layla | l3utterfly | "2023-12-19T07:49:27" | 1,499 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-18T12:22:53" | ---
license: apache-2.0
---
# Model Card
### Model Description
OpenLlama 3B fine-tuned using ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** OpenLlama
- **Language(s) (NLP):** English
- **License:** Llama2
- **Finetuned from model:** OpenLlama 3B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
User:
Assistant:
```
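For illustration, a minimal sketch that applies this prompt format with the transformers pipeline (generation settings are illustrative):
```python
# Minimal sketch of the User:/Assistant: prompt format above.
from transformers import pipeline

pipe = pipeline("text-generation", model="l3utterfly/open-llama-3b-v2-layla")
prompt = "User: How are you today?\nAssistant:"
print(pipe(prompt, max_new_tokens=64)[0]["generated_text"])
```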
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_l3utterfly__open-llama-3b-v2-layla)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 35.63 |
| ARC (25-shot) | 38.23 |
| HellaSwag (10-shot) | 66.43 |
| MMLU (5-shot) | 28.56 |
| TruthfulQA (0-shot) | 44.4 |
| Winogrande (5-shot) | 62.83 |
| GSM8K (5-shot) | 1.06 |
| DROP (3-shot) | 7.88 |
|
Sumaia/bert-base-uncased-finetuned-wnli | Sumaia | "2023-03-07T04:19:38" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-06T22:04:58" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6913
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
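For reference, a sketch of how the hyperparameters above would look as transformers `TrainingArguments`; this is a reconstruction for illustration, not the exact training script.
```python
# Reconstruction of the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-wnli",
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```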
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 73 | 0.7008 | 0.4366 |
| No log | 2.0 | 146 | 0.6943 | 0.5211 |
| No log | 3.0 | 219 | 0.6943 | 0.4789 |
| No log | 4.0 | 292 | 0.6913 | 0.5634 |
| No log | 5.0 | 365 | 0.6932 | 0.5634 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
|
myhaaaaaaa/f6af0049-c578-4edd-809f-670abae8fbad | myhaaaaaaa | "2025-01-24T11:31:35" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-24T10:55:03" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f6af0049-c578-4edd-809f-670abae8fbad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 030ea58a91f4640e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/030ea58a91f4640e_train_data.json
type:
field_input: boe_text_cleaned
field_instruction: tweet_text_cleaned
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: myhaaaaaaa/f6af0049-c578-4edd-809f-670abae8fbad
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/030ea58a91f4640e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9b8ffe7d-1b3f-4e1e-ba21-152b3efb3b6b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9b8ffe7d-1b3f-4e1e-ba21-152b3efb3b6b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f6af0049-c578-4edd-809f-670abae8fbad
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0261
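For reference, a minimal sketch for loading this LoRA adapter on top of its base model with PEFT (repo ids from this card; dtype and device settings are illustrative):
```python
# Minimal sketch; assumes peft, transformers, and accelerate are installed.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Pro-Mistral-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "myhaaaaaaa/f6af0049-c578-4edd-809f-670abae8fbad")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")
```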
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1034 | 0.4884 | 200 | 0.0261 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
denbeo/c5c3c23a-a56f-487c-9193-75edc15dafa5 | denbeo | "2025-01-17T18:37:38" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-17T18:14:10" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c5c3c23a-a56f-487c-9193-75edc15dafa5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d3b1d835749b83ad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d3b1d835749b83ad_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/c5c3c23a-a56f-487c-9193-75edc15dafa5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d3b1d835749b83ad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0d2293ac-9f59-4571-af11-389887041d31
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0d2293ac-9f59-4571-af11-389887041d31
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c5c3c23a-a56f-487c-9193-75edc15dafa5
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8332
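For reference, a minimal sketch using PEFT's `AutoPeftModelForCausalLM`, which resolves the base model from the adapter config automatically (repo id from this card):
```python
# Minimal sketch; assumes peft and transformers are installed.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("denbeo/c5c3c23a-a56f-487c-9193-75edc15dafa5")
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama-chat")
```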
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0545 | 0.0101 | 200 | 1.8332 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yunjinchoi/klue-bert-base-q-encoder | yunjinchoi | "2023-06-10T15:08:31" | 32 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | "2023-06-10T15:06:03" | Entry not found |
randheerk/my_awesome_wnut_model | randheerk | "2025-01-02T13:26:36" | 115 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:arrow",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-01-02T12:46:49" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- arrow
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: arrow
type: arrow
config: default
split: test
args: default
metrics:
- name: Precision
type: precision
value: 0.5615615615615616
- name: Recall
type: recall
value: 0.34661723818350326
- name: F1
type: f1
value: 0.42865329512893985
- name: Accuracy
type: accuracy
value: 0.9448505835577786
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the arrow dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2758
- Precision: 0.5616
- Recall: 0.3466
- F1: 0.4287
- Accuracy: 0.9449
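For reference, a minimal inference sketch with the transformers pipeline (repo id from this card; the aggregation strategy and input are illustrative):
```python
# Minimal sketch for named-entity tagging with this checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="randheerk/my_awesome_wnut_model",
    aggregation_strategy="simple",
)
print(ner("The Golden State Warriors are an American professional basketball team."))
```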
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2658 | 0.5435 | 0.3355 | 0.4149 | 0.9437 |
| No log | 2.0 | 426 | 0.2758 | 0.5616 | 0.3466 | 0.4287 | 0.9449 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cpu
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lielbin/BabyBERTa-CHILDES_2.5-with-Masking_run2-finetuned-SQuAD | lielbin | "2024-05-30T14:12:58" | 114 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-05-30T13:12:46" | ---
tags:
- generated_from_trainer
model-index:
- name: BabyBERTa-CHILDES_2.5-with-Masking_run2-finetuned-SQuAD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BabyBERTa-CHILDES_2.5-with-Masking_run2-finetuned-SQuAD
This model was trained from scratch on an unknown dataset.
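A minimal inference sketch, assuming the checkpoint works with the standard question-answering pipeline (repo id from the card header; the example is illustrative):
```python
# Minimal extractive QA sketch.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="lielbin/BabyBERTa-CHILDES_2.5-with-Masking_run2-finetuned-SQuAD",
)
print(qa(
    question="Where do babies learn language?",
    context="Babies learn language at home from their caregivers.",
))
```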
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mradermacher/AceGPT-v2-32B-i1-GGUF | mradermacher | "2024-08-02T10:17:53" | 197 | 0 | transformers | [
"transformers",
"gguf",
"ar",
"zh",
"en",
"base_model:FreedomIntelligence/AceGPT-v2-32B",
"base_model:quantized:FreedomIntelligence/AceGPT-v2-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-06-23T17:44:21" | ---
base_model: FreedomIntelligence/AceGPT-v2-32B
language:
- ar
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FreedomIntelligence/AceGPT-v2-32B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AceGPT-v2-32B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
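As a sketch of the concatenation step in Python, assuming mradermacher's usual `*.partNofM` naming for oversized quants (the filenames below are illustrative and only apply when a quant actually ships in parts):
```python
# Join multi-part GGUF files into one; filenames are assumptions.
import shutil

parts = [
    "AceGPT-v2-32B.i1-Q6_K.gguf.part1of2",
    "AceGPT-v2-32B.i1-Q6_K.gguf.part2of2",
]
with open("AceGPT-v2-32B.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```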
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-i1-GGUF/resolve/main/AceGPT-v2-32B.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF | mradermacher | "2025-02-03T02:00:04" | 524 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:nbeerbower/GreatFirewall-DPO",
"dataset:nbeerbower/Schule-DPO",
"dataset:nbeerbower/Purpura-DPO",
"dataset:nbeerbower/Arkhaios-DPO",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:flammenai/Date-DPO-NoAsterisks",
"dataset:flammenai/Prude-Phi3-DPO",
"dataset:Atsunori/HelpSteer2-DPO",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/Dumpling-Qwen2.5-7B-1k-r64-2e-5",
"base_model:quantized:nbeerbower/Dumpling-Qwen2.5-7B-1k-r64-2e-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-02T21:45:07" | ---
base_model: nbeerbower/Dumpling-Qwen2.5-7B-1k-r64-2e-5
datasets:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-7B-1k-r64-2e-5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
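As a concrete starting point, here is a minimal Python sketch using the `llama-cpp-python` bindings; the file name is one of the quants from the table below, and the context size and sampling settings are illustrative assumptions rather than recommendations.

```python
from llama_cpp import Llama

# Load one of the imatrix quants from the table below
# (Q4_K_M is the "fast, recommended" pick).
llm = Llama(
    model_path="Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust for your RAM/VRAM budget
)

# One-shot completion; chat templating is left to the caller.
out = llm("Write a short note on weighted/imatrix quantization.", max_tokens=128)
print(out["choices"][0]["text"])
```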
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-7B-1k-r64-2e-5-i1-GGUF/resolve/main/Dumpling-Qwen2.5-7B-1k-r64-2e-5.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
macadeliccc/Polyglot-8x7b-v0.1 | macadeliccc | "2024-01-16T16:42:08" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"zh",
"ja",
"de",
"id",
"vi",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-15T20:24:51" | ---
license: cc-by-nc-nd-4.0
language:
- en
- zh
- ja
- de
- id
- vi
library_name: transformers
---
# Polyglot-8x7b-v0.1
![polyglot](polyglot-8x7b.png)
Polyglot-8x7b is a Mixture of Experts approach to a multilingual model.
The model is capable of producing quality content in 6 languages.
The advantage of this approach is being able to repurpose English models in other languages.
For example, you can ask the model to output something you would normally get from a math model trained in English, but in the desired language of your choice.
This formula allows for very powerful combinations of models. It could be 2 language models and 6 task-based models, or vice versa.
# Evaluations (4-bit bnb)
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|----------|------:|------|-----:|--------|-----:|---|-----:|
|arc_easy | 1|none | 0|acc |0.8552|± |0.0072|
| | |none | 0|acc_norm|0.8018|± |0.0082|
|boolq | 2|none | 0|acc |0.8691|± |0.0059|
|hellaswag | 1|none | 0|acc |0.6649|± |0.0047|
| | |none | 0|acc_norm|0.8375|± |0.0037|
|openbookqa| 1|none | 0|acc |0.3740|± |0.0217|
| | |none | 0|acc_norm|0.4680|± |0.0223|
|piqa | 1|none | 0|acc |0.8286|± |0.0088|
| | |none | 0|acc_norm|0.8297|± |0.0088|
|winogrande| 1|none | 0|acc |0.7451|± |0.0122|
# Code Example
Inference [Colab](https://colab.research.google.com/drive/1tYSb63IKZDsiQ5BIJU8Oc92phxugAmB3?usp=sharing)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
def generate_response(prompt):
"""
Generate a response from the model based on the input prompt.
Args:
prompt (str): Prompt for the model.
Returns:
str: The generated response from the model.
"""
# Tokenize the input prompt
inputs = tokenizer(prompt, return_tensors="pt")
# Generate output tokens
outputs = model.generate(**inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id)
# Decode the generated tokens to a string
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
return response
# Load the model and tokenizer
model_id = "macadeliccc/Polyglot-8x7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,load_in_4bit=True)
# Example prompts in different languages
english_prompt = "Write a quicksort algorithm in python"
chinese_prompt = "用Python写一个快速排序算法"
japanese_prompt = "Pythonでクイックソートアルゴリズムを書いてください"
vietnamese_prompt = "Viết thuật toán quicksort trong python"
indonesian_prompt = "Tulis algoritma quicksort dalam python"
german_prompt = "Schreiben Sie einen Quicksort-Algorithmus in Python"
# Generate and print responses for each language
print("English Response:")
print(generate_response(english_prompt), "\n")
print("Chinese Response:")
print(generate_response(chinese_prompt), "\n")
print("Japanese Response:")
print(generate_response(japanese_prompt), "\n")
print("Vietnamese Response:")
print(generate_response(vietnamese_prompt), "\n")
print("Indonesian Response:")
print(generate_response(indonesian_prompt), "\n")
print("German Response:")
print(generate_response(german_prompt), "\n")
```
#### English Response:
Write a quicksort algorithm in python
Here is a quicksort algorithm implemented in Python:
```python
def quicksort(array):
if len(array) <= 1:
return array
else:
pivot = array[0]
left = [x for x in array[1:] if x <= pivot]
right = [x for x in array[1:] if x > pivot]
return quicksort(left) + [pivot] + quicksort(right)
# Test the algorithm
array = [5, 2, 9, 1, 7, 8, 3, 6]
print(quicksort(array))
```
This implementation of quicksort uses recursion to divide the array into two smaller subarrays, one containing elements less than or equal to the pivot and the other containing elements greater than the pivot. The pivot is chosen as the first element in the array. The function then recursively calls itself on the left and right subarrays, and concatenates the sorted subarrays with the pivot in the middle.
The time complexity of quicksort is O(n log n) on average and O(n²) in the worst case.
#### Chinese Response:
用Python写一个快速排序算法
快速排序是一种选择排序法,它在每次迭代中选择一个基准数并将其数组分为两个子数组:一个数组中的数字都小于基准数,另一个数组中的数字都大于基准数。这种分区过程递归地应用于两个子数组,直到数组为空或只含一个数字为止。这种排序法的时间复杂度为O(nlogn)。
```python
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[0]
left = [x for x in arr[1:] if x <= pivot]
right = [x for x in arr[1:] if x > pivot]
return quicksort(left) + [pivot] + quicksort(right)
if __name__ == '__main__':
arr = [5, 2, 9, 1, 7, 8, 3]
    print(quicksort(arr))
```
#### Japanese Response:
Pythonでクイックソートアルゴリズムを書いてください。
```python
def quicksort(array):
    if len(array) <= 1:
        return array
    pivot = array[0]
    left = [x for x in array[1:] if x <= pivot]
    right = [x for x in array[1:] if x > pivot]
    return quicksort(left) + [pivot] + quicksort(right)

if __name__ == '__main__':
    array = [5, 2, 9, 1, 7, 8, 3]
    print(quicksort(array))
```
このコードはPythonでクイックソートアルゴリズムを実装しています。
quicksort() 関数は、配列を受け取り、配列の要素をソートして返す。
この実装は基本的なクイックソートです。
配列の最初の要素をピボットとして、それ以外の要素を
#### Vietnamese Response:
*So far, Vietnamese may have an issue with token generation. I fixed the Vietnamese token generation, but it still doesn't work reliably, as the sample below shows.*
Viết thuật toán quicksort trong python có inductuve proof.
QuickSort là algoortihm có time complexity O(log₂nim) trong trung bnh. QuickSort có 3 step:Partition Sort, Choose Pivot, Sort Two Partitions. QuickSort có recursive call.
+ Partition Sort là step sort partition data into two partitions based on pivot element. Data partitioned based on pivot element là sorted and unsorted.
+ Partition Sort có 4 step:Partition, Choose Pivot, Choose Left and Right Pivot, and Swap Pivot.
+ Partition Sort có recursive call.
+ Partition Sort có time complexity O(|A|).Partition Sort có not stable.Partition Sort có not inplace.Partition Sort có not randomized.Partition Sort có not adaptive.Partition Sort có not online.Partition Sort có not parallelizable.Partition Sort có not work with duplicate keys.Partition Sort có not work with negative keys.Partition Sort có not work with floating point numbers.Partition Sort có not work with strings.Partition Sort có not work with unsorted data.Partition Sort có not work with sorted data.Partition Sort có not work with partially sorted data.Partition Sort có not work with ordered data.Partition Sort có not work with
```python
def quicksort(arr, left=0, right=None):
if right is None:
right = len(arr) - 1
if left < right:
pivot_index = partition(arr, left, right)
quicksort(arr, left, pivot_index - 1)
quicksort(arr, pivot_index + 1, right)
return arr
def partition(arr, left, right):
pivot = arr[right]
i = left - 1
for j in range(left, right):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[right] = arr[right], arr[i + 1]
    return i + 1
```
#### German Response:
Schreiben Sie einen Quicksort-Algorithmus in Python, der die georderten Elemente in einer Liste sortiert.
Der Quicksort-Algorithmus ist ein effizienter Sortieralgorithmus, der in der Regel eine O(n log n) Komplexität hat. Er funktioniert wie folgt:
1. Wähle ein Element aus der Liste als Pivot-Element.
2. Teile die Liste in zwei Teile: Elemente, die kleiner als das Pivot-Element, und Elemente, die größer oder gleich als das Pivot-Element.
3. Rekursiv sortiere die beiden Teile.
4. Verbinde die sortierten Teile.
Hier ist ein Python-Code, der den Quicksort-Algorithmus implementiert:
```python
def quicksort(lst):
if len(lst) <= 1:
return lst
else:
pivot = lst[0]
less = [x for x in lst[1:] if x < pivot]
greater = [x for x in lst[1:] if x >= pivot]
        return quicksort(less) + [pivot] + quicksort(greater)
```
#### Indonesian Response
Tulis algoritma quicksort dalam python
QuickSort adalah salah satu algoritma pengurutan yang paling populer dan efisien. Ini adalah algoritma pengurutan in-place, yang berarti bahwa data tidak perlu disalvage ke lokasi lain. Algoritme bekerja dengan memilih tumpukan yang diurutkan sebagai pivot, dan memecah tumpukan menjadi dua bagian yang lebih kecil. Setiap bagian ini kemudian diurutkan dengan cara yang sama.
Berikut adalah implementasi QuickSort dalam bahasa Python:
```python
def quicksort(arr):
if len(arr) <= 1:
return arr
else:
pivot = arr[0]
less = [x for x in arr[1:] if x <= pivot]
greater = [x for x in arr[1:] if x > pivot]
        return quicksort(less) + [pivot] + quicksort(greater)
``` |
LoneStriker/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-6.0bpw-h6-exl2 | LoneStriker | "2024-01-13T06:52:08" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-13T06:36:56" | ---
license: cc-by-nc-4.0
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/vwcJfOnL-2QDJ0ShfxRJ5.png)
---
# Disclaimer:
## This model is experimental, do not expect everything to work.
This model uses the Chatml **prompting format**
---
Beeg noromaid on ***steroids***. Suitable for RP, ERP.
This model was trained on the Zloss fork of Charles, and should fix the issues the previous model had.
Use the ChatML prompt format, but without the special tokens.
The reason is that Axolotl merges the finetune with the base model at essentially 1.0 weight, which is too much, so I used another script (available [HERE](https://github.com/DocShotgun/LLM-notebooks/blob/main/weighted-lora-merge.ipynb)) to merge at a lower weight; sadly, it doesn't take the special ChatML tokens. It's like Orca2 in that regard.
## Credits:
- Undi
- IkariDev
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on Discord and we'll put up a screenshot of it here. Discord names are "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
### Prompt format: Chatml
```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
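For example, a minimal Python sketch of building this prompt by plain string construction (avoiding the special ChatML tokens, as advised above); the system prompt and user input below are placeholders:

```python
def chatml_prompt(sysprompt: str, user_input: str) -> str:
    # Plain-string ChatML scaffold; no tokenizer special tokens involved.
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_input}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a roleplay assistant.", "Describe the tavern we enter."))
```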
## Datasets used:
- Aesir 1, 2 & 3 modified by us, credit to ([MinervaAI](https://huggingface.co/MinervaAI) / [Gryphe](https://huggingface.co/Gryphe))
- [LimaRP-20231109](https://huggingface.co/datasets/lemonilia/LimaRP) ([Lemonilia](https://huggingface.co/lemonilia))
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal) ([NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet))
- [No-robots-ShareGPT](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) ([Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun))
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
henryj18/xlm-roberta-base-finetuned-panx-fr | henryj18 | "2022-12-02T06:16:33" | 86 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-12-02T05:57:45" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8493752110773387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2992
- F1: 0.8494
## Model description
More information needed
## Intended uses & limitations
More information needed
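As a sketch of one likely use, French NER inference with the `pipeline` API could look like the following; the aggregation strategy and example sentence are illustrative choices, not part of this card:

```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned PAN-X French checkpoint.
ner = pipeline(
    "token-classification",
    model="henryj18/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Emmanuel Macron s'est rendu à Marseille avec une délégation de l'ONU."))
```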
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.537 | 1.0 | 382 | 0.3279 | 0.8002 |
| 0.2603 | 2.0 | 764 | 0.2987 | 0.8356 |
| 0.1589 | 3.0 | 1146 | 0.2992 | 0.8494 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Docfile/my_awesome_opus_books_model | Docfile | "2023-11-23T21:41:47" | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-23T21:27:09" | Entry not found |
lesso11/2bf5948a-e2fb-4699-83de-c035322e72d7 | lesso11 | "2025-01-20T04:09:30" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-20T04:08:20" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2bf5948a-e2fb-4699-83de-c035322e72d7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: true
chat_template: llama3
datasets:
- data_files:
- 303dfabc72680414_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/303dfabc72680414_train_data.json
type:
field_instruction: question
field_output: model_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso11/2bf5948a-e2fb-4699-83de-c035322e72d7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/303dfabc72680414_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 961f8bc9-fae7-4c17-bb15-183b7daea2f3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 961f8bc9-fae7-4c17-bb15-183b7daea2f3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2bf5948a-e2fb-4699-83de-c035322e72d7
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9845
## Model description
More information needed
## Intended uses & limitations
More information needed
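A minimal, untested sketch of loading this LoRA adapter for inference with PEFT, assuming the base model and adapter ids shown on this card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-0.5B-Chat"
adapter_id = "lesso11/2bf5948a-e2fb-4699-83de-c035322e72d7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Question: What does this adapter change?\nAnswer:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```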
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0774 | 0.0144 | 1 | 4.2488 |
| 3.4806 | 0.0719 | 5 | 3.6374 |
| 2.8445 | 0.1439 | 10 | 2.9566 |
| 2.222 | 0.2158 | 15 | 2.3208 |
| 2.2446 | 0.2878 | 20 | 2.0328 |
| 1.9766 | 0.3597 | 25 | 1.9845 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yuan-sf63/chenyu_label_0.5_96 | yuan-sf63 | "2023-03-03T18:59:51" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-03T18:46:58" | Entry not found |
abaddon182/dd1ea20e-e580-4f8e-b656-7e5915039c97 | abaddon182 | "2025-01-29T15:50:32" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"region:us"
] | null | "2025-01-29T15:41:41" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dd1ea20e-e580-4f8e-b656-7e5915039c97
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama-chat
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- d505230510882b7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d505230510882b7c_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/dd1ea20e-e580-4f8e-b656-7e5915039c97
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/d505230510882b7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: afd22e6e-10d0-4a21-b12b-899cde9bb283
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: afd22e6e-10d0-4a21-b12b-899cde9bb283
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dd1ea20e-e580-4f8e-b656-7e5915039c97
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7882
## Model description
More information needed
## Intended uses & limitations
More information needed
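If a standalone checkpoint is preferred over attaching the adapter at runtime, one possible approach (a sketch based on the ids above; the output directory name is arbitrary) is merging the LoRA into the base weights with PEFT:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "abaddon182/dd1ea20e-e580-4f8e-b656-7e5915039c97"

# AutoPeftModelForCausalLM resolves the base model from the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

merged.save_pretrained("dd1ea20e-merged")
AutoTokenizer.from_pretrained("unsloth/tinyllama-chat").save_pretrained("dd1ea20e-merged")
```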
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2977 | 0.0119 | 1 | 1.3630 |
| 0.9552 | 0.5935 | 50 | 0.8742 |
| 0.72 | 1.1869 | 100 | 0.8315 |
| 0.8087 | 1.7804 | 150 | 0.8086 |
| 0.5844 | 2.3739 | 200 | 0.7882 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ahsan30/FINAL_Reservoir_llama3.1-8b-q8_0 | ahsan30 | "2025-01-27T21:13:22" | 21 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-27T21:11:49" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ahsan30
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
joyfine/vicuna-7b-fine-tuning_QA_SST_20_2 | joyfine | "2023-11-14T21:54:25" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-14T21:48:34" | Entry not found |
bane5631/223c1cc3-1336-4dc6-b490-d9a9d6342a1d | bane5631 | "2025-02-01T04:44:52" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-01T04:25:08" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 223c1cc3-1336-4dc6-b490-d9a9d6342a1d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8391d58e45127793_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8391d58e45127793_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: bane5631/223c1cc3-1336-4dc6-b490-d9a9d6342a1d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/8391d58e45127793_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d2c0770-ec7d-41e1-a607-5367ecb0940b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2d2c0770-ec7d-41e1-a607-5367ecb0940b
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 223c1cc3-1336-4dc6-b490-d9a9d6342a1d
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.0665 | 0.6809 | 200 | 1.8341 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sercetexam9/afro-xlmr-base-hau-finetuned-augmentation-LUNAR | sercetexam9 | "2025-01-28T13:52:28" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-28T13:36:24" | ---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: afro-xlmr-base-hau-finetuned-augmentation-LUNAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-hau-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3766
- F1: 0.7024
- Roc Auc: 0.8104
- Accuracy: 0.5513
## Model description
More information needed
## Intended uses & limitations
More information needed
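The reported F1/ROC-AUC alongside accuracy suggests a multi-label setup, so the sketch below assumes independent sigmoid scores with a 0.5 threshold; the Hausa example sentence is a placeholder:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "sercetexam9/afro-xlmr-base-hau-finetuned-augmentation-LUNAR"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Ina jin dadin wannan rana!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label reading: an independent sigmoid per class, thresholded at 0.5.
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```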
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4359 | 1.0 | 144 | 0.4259 | 0.1730 | 0.5612 | 0.1965 |
| 0.3509 | 2.0 | 288 | 0.3506 | 0.4649 | 0.6710 | 0.3670 |
| 0.3062 | 3.0 | 432 | 0.3230 | 0.5924 | 0.7446 | 0.4452 |
| 0.2507 | 4.0 | 576 | 0.3000 | 0.6340 | 0.7629 | 0.4748 |
| 0.2156 | 5.0 | 720 | 0.2989 | 0.6716 | 0.7870 | 0.5235 |
| 0.1632 | 6.0 | 864 | 0.3159 | 0.6645 | 0.7827 | 0.5009 |
| 0.1665 | 7.0 | 1008 | 0.3168 | 0.6817 | 0.7973 | 0.5217 |
| 0.1289 | 8.0 | 1152 | 0.3148 | 0.6807 | 0.7958 | 0.5357 |
| 0.1166 | 9.0 | 1296 | 0.3261 | 0.6850 | 0.7946 | 0.5217 |
| 0.0927 | 10.0 | 1440 | 0.3268 | 0.6828 | 0.7910 | 0.5496 |
| 0.0693 | 11.0 | 1584 | 0.3387 | 0.6982 | 0.8028 | 0.5496 |
| 0.0571 | 12.0 | 1728 | 0.3544 | 0.6938 | 0.8050 | 0.5374 |
| 0.0568 | 13.0 | 1872 | 0.3439 | 0.6959 | 0.8037 | 0.5565 |
| 0.0467 | 14.0 | 2016 | 0.3673 | 0.6940 | 0.8054 | 0.5391 |
| 0.043 | 15.0 | 2160 | 0.3642 | 0.7015 | 0.8066 | 0.5548 |
| 0.0356 | 16.0 | 2304 | 0.3685 | 0.6998 | 0.8069 | 0.5530 |
| 0.0395 | 17.0 | 2448 | 0.3766 | 0.7024 | 0.8104 | 0.5513 |
| 0.0353 | 18.0 | 2592 | 0.3766 | 0.7005 | 0.8088 | 0.5478 |
| 0.0352 | 19.0 | 2736 | 0.3753 | 0.7008 | 0.8092 | 0.5478 |
| 0.0327 | 20.0 | 2880 | 0.3745 | 0.6985 | 0.8074 | 0.5461 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
ReadyArt/Beeper-King-22B_EXL2_4.5bpw_H8 | ReadyArt | "2025-01-26T21:49:36" | 5 | 0 | null | [
"safetensors",
"mistral",
"exl2",
"region:us"
] | null | "2025-01-26T21:05:09" | # Beeper-King-22B
<figure>
<img src="https://huggingface.co/ToastyPigeon/Beeper-King-22B/resolve/main/_f5a298d7-14af-4bc5-a651-99373a63ef31.jpg" width="550">
<figcaption>A beeping kingfisher.</figcaption>
</figure>
This is another slightly-adjusted version of [MS-Meadowlark-22B](https://huggingface.co/allura-org/MS-Meadowlark-22B).
The models used in this merge are:
- [nbeerbower/Mistral-Small-Gutenberg-Doppel-22B](https://huggingface.co/nbeerbower/Mistral-Small-Gutenberg-Doppel-22B)
- [crestf411/MS-sunfall-v0.7.0](https://huggingface.co/crestf411/MS-sunfall-v0.7.0)
- [Alfitaria/mistral-small-fujin-qlora](https://huggingface.co/Alfitaria/mistral-small-fujin-qlora)
- [ToastyPigeon/mistral-small-springdragon-qlora](https://huggingface.co/ToastyPigeon/mistral-small-springdragon-qlora)
- [concedo/Beepo-22B](https://huggingface.co/concedo/Beepo-22B)
Specifically, changing the instruct portion and the base that the QLoRAs are applied to from Mistral-Small-Instruct-2409 to Beepo-22B made the model a little more wild in a fun way. It still does a good job of adhering to prompts and character descriptions, because Beepo is a very compliant model.
Instruct format is Mistral V2 & V3, but thanks to the inclusion of Beepo this model should also work with Alpaca. |
Josephgflowers/Cinder-Phi-2-STEM-2.94B-Test | Josephgflowers | "2024-02-19T01:55:04" | 173 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"phi",
"text-generation",
"custom_code",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-18T17:37:25" | ---
license: mit
widget:
- text: >
<|system|>
You are a helpful assistant</s>
<|user|>
Can you explain to me how quantum computing works?</s>
<|assistant|>
---
Modified version of Phi 2 with 2 added layers.
More details coming soon.
## Model Overview
Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/obCyZSvfUefEWrOXaeB3o.png) |
AhmedTaha012/gptneo-txt2arxml-ppo | AhmedTaha012 | "2023-04-30T23:45:34" | 174 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-30T23:44:50" | Entry not found |
PrunaAI/mattshumer-Hermes-2-Pro-11B-bnb-4bit | PrunaAI | "2024-08-02T15:41:29" | 6 | 2 | pruna-engine | [
"pruna-engine",
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-03-27T11:02:29" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/vb6SmA3hxu)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by using bitsandbytes.
- ***How does the model quality change?*** The quality of the model output will slightly degrade.
- ***What is the model format?*** We use the standard safetensors format.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
## Usage
``` python
from transformers import AutoTokenizer
import transformers

model = "PrunaAI/mattshumer-Hermes-2-Pro-11B-bnb-4bit"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# The checkpoint is stored in bitsandbytes 4-bit, so it loads quantized as-is.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
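Alternatively, a sketch of loading the checkpoint directly with an explicit 4-bit config; since the repo is already serialized in bnb 4-bit this should be redundant, and the compute dtype here is an illustrative assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "PrunaAI/mattshumer-Hermes-2-Pro-11B-bnb-4bit"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,  # explicit 4-bit load
    device_map="auto",
)
```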
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model mattshumer/Hermes-2-Pro-11B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |