| Column | Type | Length / Range |
|:--|:--|:--|
| modelId | string | length 6 to 118 |
| author | string | length 2 to 42 |
| last_modified | unknown | n/a |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 7.74k |
| library_name | string | 264 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 52 classes |
| createdAt | unknown | n/a |
| card | string | length 1 to 1.01M |
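Each row below lists these fields in order, ending with the raw model-card text. As a quick orientation, here is a minimal sketch of loading and filtering such a metadata dump with the `datasets` library; the dataset id is a hypothetical placeholder, not something confirmed by this preview:

```python
# Minimal sketch, assuming a Hub dataset with the columns shown above.
# "example-org/model-cards-dump" is a hypothetical id; substitute the real one.
from datasets import load_dataset

ds = load_dataset("example-org/model-cards-dump", split="train")
print(ds.column_names)  # modelId, author, last_modified, downloads, likes, ...

# Keep rows with a pipeline tag and show the five most-downloaded models.
tagged = ds.filter(lambda row: row["pipeline_tag"] is not None)
for row in tagged.sort("downloads", reverse=True).select(range(5)):
    print(row["modelId"], row["pipeline_tag"], row["downloads"])
```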
DopeBearmine/weather
DopeBearmine
"2025-01-24T20:13:22Z"
6
0
null
[ "tensorboard", "safetensors", "vit", "region:us" ]
null
"2025-01-24T18:50:10Z"
Entry not found
Spacyzipa/sanjeev_07_02_24
Spacyzipa
"2024-02-07T10:12:47Z"
4
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "endpoints_compatible", "region:us" ]
image-text-to-text
"2024-02-07T07:07:41Z"
Entry not found
franco-rojas/bloom-1b1-finetuned-tfmviu
franco-rojas
"2023-09-30T16:31:26Z"
152
0
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "generated_from_trainer", "base_model:bigscience/bloom-1b1", "base_model:finetune:bigscience/bloom-1b1", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-09-29T04:41:50Z"
--- license: bigscience-bloom-rail-1.0 base_model: bigscience/bloom-1b1 tags: - generated_from_trainer model-index: - name: bloom-1b1-finetuned-tfmviu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloom-1b1-finetuned-tfmviu This model is a fine-tuned version of [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.5185 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 222 | 3.1300 | | No log | 2.0 | 444 | 3.2264 | | 2.3093 | 3.0 | 666 | 3.5185 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
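The card above documents the training run but omits a usage snippet. A hedged, untested sketch of loading the checkpoint for generation, with the repo id taken from this row's modelId:

```python
# Hedged sketch: text generation with the fine-tuned BLOOM checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "franco-rojas/bloom-1b1-finetuned-tfmviu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```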
mradermacher/LN-Korean-14B-v0.1-GGUF
mradermacher
"2024-07-31T04:26:27Z"
17
0
transformers
[ "transformers", "gguf", "ko", "zh", "base_model:CjangCjengh/LN-Korean-14B-v0.1", "base_model:quantized:CjangCjengh/LN-Korean-14B-v0.1", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-07-30T19:24:37Z"
--- base_model: CjangCjengh/LN-Korean-14B-v0.1 language: - ko - zh library_name: transformers license: cc-by-nc-sa-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/CjangCjengh/LN-Korean-14B-v0.1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q2_K.gguf) | Q2_K | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.IQ3_XS.gguf) | IQ3_XS | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.IQ3_S.gguf) | IQ3_S | 6.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.IQ3_M.gguf) | IQ3_M | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q3_K_L.gguf) | Q3_K_L | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.IQ4_XS.gguf) | IQ4_XS | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q4_K_M.gguf) | Q4_K_M | 9.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q5_K_S.gguf) | Q5_K_S | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q6_K.gguf) | Q6_K | 12.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.1-GGUF/resolve/main/LN-Korean-14B-v0.1.Q8_0.gguf) | Q8_0 | 15.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
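For readers who want something more concrete than the linked READMEs, a minimal sketch of running one of the quants above with `llama-cpp-python` (an assumption; any GGUF-capable runtime works, and the quant file must be downloaded first):

```python
# Minimal sketch, assuming llama-cpp-python (pip install llama-cpp-python)
# and the Q4_K_S file from the table above in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="LN-Korean-14B-v0.1.Q4_K_S.gguf", n_ctx=4096)
out = llm("Translate to Korean: Hello, how are you?", max_tokens=128)
print(out["choices"][0]["text"])
```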
lesso13/c4692f38-8597-4602-95e2-041a9441b18f
lesso13
"2025-01-29T19:12:34Z"
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:Intel/neural-chat-7b-v3-3", "base_model:adapter:Intel/neural-chat-7b-v3-3", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-29T17:18:53Z"
--- library_name: peft license: apache-2.0 base_model: Intel/neural-chat-7b-v3-3 tags: - axolotl - generated_from_trainer model-index: - name: c4692f38-8597-4602-95e2-041a9441b18f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Intel/neural-chat-7b-v3-3 bf16: auto chat_template: llama3 datasets: - data_files: - 50647f9e6e89cbb7_train_data.json ds_type: json format: custom path: /workspace/input_data/50647f9e6e89cbb7_train_data.json type: field_input: ingredients_processed field_instruction: title field_output: directions format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso13/c4692f38-8597-4602-95e2-041a9441b18f hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/50647f9e6e89cbb7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c53eddb1-5a0f-4d15-bd00-9389024c7d94 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c53eddb1-5a0f-4d15-bd00-9389024c7d94 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # c4692f38-8597-4602-95e2-041a9441b18f This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0080 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
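Since this repository holds a PEFT adapter rather than full weights, inference requires attaching it to the base model. A hedged, untested sketch of the mechanics (note the card reports a NaN eval loss, so the adapter may not be usable as trained):

```python
# Untested sketch: attach the LoRA adapter to its base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Intel/neural-chat-7b-v3-3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "lesso13/c4692f38-8597-4602-95e2-041a9441b18f")
```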
ISTNetworks/new_arabic_LLama3_8B
ISTNetworks
"2024-05-29T13:17:14Z"
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-29T13:06:59Z"
--- tags: - merge - mergekit - lazymergekit base_model: - LLama3-8B --- # new_arabic_LLama3_8B ## 💻 Usage ```python # pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "ISTNetworks/new_arabic_LLama3_8B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
MayBashendy/ArabicNewSplits4_FineTuningAraBERT_run1_AugV5_k3_task5_organization
MayBashendy
"2024-12-09T19:51:58Z"
164
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-09T19:50:02Z"
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits4_FineTuningAraBERT_run1_AugV5_k3_task5_organization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits4_FineTuningAraBERT_run1_AugV5_k3_task5_organization This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3044 - Qwk: 0.5633 - Mse: 1.3044 - Rmse: 1.1421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:| | No log | 0.1667 | 2 | 2.1878 | 0.0370 | 2.1878 | 1.4791 | | No log | 0.3333 | 4 | 1.3588 | 0.3083 | 1.3588 | 1.1657 | | No log | 0.5 | 6 | 1.3346 | 0.2084 | 1.3346 | 1.1553 | | No log | 0.6667 | 8 | 1.6107 | 0.2842 | 1.6107 | 1.2691 | | No log | 0.8333 | 10 | 1.6034 | 0.2389 | 1.6034 | 1.2663 | | No log | 1.0 | 12 | 1.5060 | 0.2877 | 1.5060 | 1.2272 | | No log | 1.1667 | 14 | 1.4421 | 0.3513 | 1.4421 | 1.2009 | | No log | 1.3333 | 16 | 1.3840 | 0.3438 | 1.3840 | 1.1764 | | No log | 1.5 | 18 | 1.3343 | 0.3598 | 1.3343 | 1.1551 | | No log | 1.6667 | 20 | 1.3586 | 0.4430 | 1.3586 | 1.1656 | | No log | 1.8333 | 22 | 1.3225 | 0.4597 | 1.3225 | 1.1500 | | No log | 2.0 | 24 | 1.2590 | 0.4709 | 1.2590 | 1.1221 | | No log | 2.1667 | 26 | 1.2039 | 0.4997 | 1.2039 | 1.0972 | | No log | 2.3333 | 28 | 1.2501 | 0.4709 | 1.2501 | 1.1181 | | No log | 2.5 | 30 | 1.2475 | 0.4885 | 1.2475 | 1.1169 | | No log | 2.6667 | 32 | 1.2880 | 0.4736 | 1.2880 | 1.1349 | | No log | 2.8333 | 34 | 1.0968 | 0.5342 | 1.0968 | 1.0473 | | No log | 3.0 | 36 | 0.8683 | 0.5672 | 0.8683 | 0.9318 | | No log | 3.1667 | 38 | 0.8042 | 0.5095 | 0.8042 | 0.8968 | | No log | 3.3333 | 40 | 0.8048 | 0.4901 | 0.8048 | 0.8971 | | No log | 3.5 | 42 | 0.8573 | 0.6161 | 0.8573 | 0.9259 | | No log | 3.6667 | 44 | 1.0560 | 0.5638 | 1.0560 | 1.0276 | | No log | 3.8333 | 46 | 1.2548 | 0.5055 | 1.2548 | 1.1202 | | No log | 4.0 | 48 | 1.5072 | 0.4628 | 1.5072 | 1.2277 | | No log | 4.1667 | 50 | 1.5551 | 0.4474 | 1.5551 | 1.2470 | | No log | 4.3333 | 52 | 1.5773 | 0.4520 | 1.5773 | 1.2559 | | No log | 4.5 | 54 | 1.4398 | 0.4954 | 1.4398 | 1.1999 | | No log | 4.6667 | 56 | 1.1994 | 0.5388 | 1.1994 | 1.0952 | | No log | 4.8333 | 58 | 1.0994 | 0.5831 | 1.0994 | 1.0485 | | No log | 5.0 | 60 | 1.0631 | 0.5953 | 1.0631 | 1.0311 | | No log | 5.1667 | 62 | 1.2124 | 0.5407 | 1.2124 | 1.1011 | | No log | 5.3333 | 64 | 1.3983 | 0.5243 | 1.3983 | 1.1825 | | No log | 5.5 | 66 | 1.5422 | 0.4885 | 1.5422 | 1.2418 | | No log | 5.6667 | 68 | 1.5609 | 0.4771 | 1.5609 | 1.2494 | | No log | 5.8333 | 70 | 1.4322 | 0.5184 | 1.4322 | 1.1967 | | No log | 6.0 | 72 | 1.1673 | 0.5812 | 1.1673 | 1.0804 | | No log | 6.1667 | 74 | 1.0331 | 0.6231 | 1.0331 | 
1.0164 | | No log | 6.3333 | 76 | 1.0521 | 0.6252 | 1.0521 | 1.0257 | | No log | 6.5 | 78 | 1.1825 | 0.5605 | 1.1825 | 1.0874 | | No log | 6.6667 | 80 | 1.3724 | 0.5252 | 1.3724 | 1.1715 | | No log | 6.8333 | 82 | 1.4427 | 0.5238 | 1.4427 | 1.2011 | | No log | 7.0 | 84 | 1.4253 | 0.5279 | 1.4253 | 1.1938 | | No log | 7.1667 | 86 | 1.3821 | 0.5311 | 1.3821 | 1.1756 | | No log | 7.3333 | 88 | 1.3205 | 0.5274 | 1.3205 | 1.1491 | | No log | 7.5 | 90 | 1.2603 | 0.5786 | 1.2603 | 1.1226 | | No log | 7.6667 | 92 | 1.2001 | 0.6118 | 1.2001 | 1.0955 | | No log | 7.8333 | 94 | 1.1598 | 0.6193 | 1.1598 | 1.0769 | | No log | 8.0 | 96 | 1.1141 | 0.6202 | 1.1141 | 1.0555 | | No log | 8.1667 | 98 | 1.1151 | 0.6217 | 1.1151 | 1.0560 | | No log | 8.3333 | 100 | 1.1161 | 0.6160 | 1.1161 | 1.0565 | | No log | 8.5 | 102 | 1.1698 | 0.6220 | 1.1698 | 1.0816 | | No log | 8.6667 | 104 | 1.2404 | 0.6049 | 1.2404 | 1.1137 | | No log | 8.8333 | 106 | 1.3207 | 0.5661 | 1.3207 | 1.1492 | | No log | 9.0 | 108 | 1.3870 | 0.5521 | 1.3870 | 1.1777 | | No log | 9.1667 | 110 | 1.4139 | 0.5521 | 1.4139 | 1.1891 | | No log | 9.3333 | 112 | 1.4059 | 0.5521 | 1.4059 | 1.1857 | | No log | 9.5 | 114 | 1.3793 | 0.5521 | 1.3793 | 1.1744 | | No log | 9.6667 | 116 | 1.3430 | 0.5507 | 1.3430 | 1.1589 | | No log | 9.8333 | 118 | 1.3125 | 0.5580 | 1.3125 | 1.1456 | | No log | 10.0 | 120 | 1.3044 | 0.5633 | 1.3044 | 1.1421 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
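A hedged sketch of querying this classifier through the `pipeline` API; the label names are whatever the Trainer saved (likely generic `LABEL_i`), so they must be interpreted against the ordinal scale the Qwk metric above implies:

```python
# Untested sketch: score a passage with the fine-tuned AraBERT classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits4_FineTuningAraBERT_run1_AugV5_k3_task5_organization",
)
print(clf("example passage to score"))  # e.g. [{'label': 'LABEL_3', 'score': 0.41}]
```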
LoneStriker/gemma-2b-it-4.0bpw-h6-exl2
LoneStriker
"2024-02-22T15:27:59Z"
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-22T15:26:48Z"
--- library_name: transformers tags: [] widget: - text: | <start_of_turn>user How does the brain work?<end_of_turn> <start_of_turn>model inference: parameters: max_new_tokens: 200 extra_gated_heading: "Access Gemma on Hugging Face" extra_gated_prompt: "To access Gemma on Hugging Face, you're required to review and agree to Google's usage license. To do this, please ensure you're logged in to Hugging Face and click below. Requests are processed immediately." extra_gated_button_content: "Acknowledge license" license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 2B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [7B instruct model](https://huggingface.co/google/gemma-7b-it). **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-it-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case. #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto") input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.float16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. 
In this example, we'll start with a single user interaction: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "google/gemma-2b-it" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype, ) chat = [ { "role": "user", "content": "Write a hello world program" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety, in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. 
TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. 
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 | | [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **54.0** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, and large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
| Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. 
### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, with input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives.
JiaxiJiang/textual_inversion_clock
JiaxiJiang
"2024-03-22T08:17:14Z"
36
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-03-22T07:52:45Z"
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion - diffusers-training base_model: runwayml/stable-diffusion-v1-5 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Textual inversion text2image fine-tuning - JiaxiJiang/textual_inversion_clock These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
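The card's usage snippet is still a TODO; a plausible filling, assuming the standard diffusers textual-inversion flow (the learned token name is not stated in the card, so `<clock>` below is a guess):

```python
# Hedged sketch: load the textual-inversion embedding and generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("JiaxiJiang/textual_inversion_clock")
image = pipe("a photo of a <clock> on a desk").images[0]  # "<clock>" is assumed
image.save("clock.png")
```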
bobox/DeBERTa2-0.9B-ST-v1-checkpoints-tmp
bobox
"2024-09-02T22:42:45Z"
5
0
null
[ "pytorch", "tensorboard", "deberta-v2", "region:us" ]
null
"2024-08-30T14:33:38Z"
Entry not found
LahiruProjects/criminal-case-classifier1
LahiruProjects
"2024-04-02T15:32:23Z"
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-02T15:07:04Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: criminal-case-classifier1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # criminal-case-classifier1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8530 - Accuracy: 0.5077 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9563 | 0.31 | 10 | 1.1314 | 0.3385 | | 1.1275 | 0.62 | 20 | 1.0607 | 0.4769 | | 1.0692 | 0.94 | 30 | 1.0871 | 0.2923 | | 1.0717 | 1.25 | 40 | 1.1759 | 0.4154 | | 1.0113 | 1.56 | 50 | 1.1322 | 0.3538 | | 0.8463 | 1.88 | 60 | 1.1809 | 0.3846 | | 0.8573 | 2.19 | 70 | 1.0676 | 0.4154 | | 0.8711 | 2.5 | 80 | 1.0690 | 0.3846 | | 0.809 | 2.81 | 90 | 1.1253 | 0.4154 | | 0.7148 | 3.12 | 100 | 1.0913 | 0.4769 | | 0.5847 | 3.44 | 110 | 1.0920 | 0.5077 | | 0.5486 | 3.75 | 120 | 1.0597 | 0.5538 | | 0.5184 | 4.06 | 130 | 1.1016 | 0.4769 | | 0.2637 | 4.38 | 140 | 1.1908 | 0.4923 | | 0.3562 | 4.69 | 150 | 1.0238 | 0.5385 | | 0.3292 | 5.0 | 160 | 1.1011 | 0.5692 | | 0.1333 | 5.31 | 170 | 1.3049 | 0.5385 | | 0.1256 | 5.62 | 180 | 1.2819 | 0.5538 | | 0.1415 | 5.94 | 190 | 1.4929 | 0.5231 | | 0.0942 | 6.25 | 200 | 1.5290 | 0.5538 | | 0.0548 | 6.56 | 210 | 1.4844 | 0.5538 | | 0.0457 | 6.88 | 220 | 1.6174 | 0.5077 | | 0.0226 | 7.19 | 230 | 1.6499 | 0.5538 | | 0.032 | 7.5 | 240 | 1.7371 | 0.5077 | | 0.0158 | 7.81 | 250 | 1.8099 | 0.5385 | | 0.0244 | 8.12 | 260 | 1.9706 | 0.4769 | | 0.0134 | 8.44 | 270 | 1.8825 | 0.5231 | | 0.0117 | 8.75 | 280 | 1.8414 | 0.5077 | | 0.0111 | 9.06 | 290 | 1.8478 | 0.5077 | | 0.0107 | 9.38 | 300 | 1.8530 | 0.5077 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
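For completeness, a hedged inference sketch that exposes the class probabilities directly rather than going through `pipeline` (untested; the input text is illustrative):

```python
# Untested sketch: manual forward pass with softmax over the logits.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "LahiruProjects/criminal-case-classifier1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The defendant was charged with burglary.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print(model.config.id2label[int(probs.argmax())], float(probs.max()))
```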
nirmaldhara/gita-text-generation-gpt2
nirmaldhara
"2024-09-14T17:14:41Z"
127
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-14T17:13:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ZhangShenao/math_math-gemma-2-9b-it-rs-sample_7500_tp
ZhangShenao
"2025-01-16T07:24:50Z"
6
0
null
[ "safetensors", "gemma2", "region:us" ]
null
"2025-01-16T07:21:28Z"
Entry not found
VERSIL91/4ff71c75-3979-442e-a77f-e53f56142bf9
VERSIL91
"2024-12-13T15:36:03Z"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-3B", "base_model:adapter:unsloth/Qwen2.5-3B", "license:other", "region:us" ]
null
"2024-12-13T15:21:36Z"
--- library_name: peft license: other base_model: unsloth/Qwen2.5-3B tags: - axolotl - generated_from_trainer model-index: - name: 4ff71c75-3979-442e-a77f-e53f56142bf9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml accelerate_config: dynamo_backend: inductor mixed_precision: bf16 num_machines: 1 num_processes: auto use_cpu: false adapter: lora base_model: unsloth/Qwen2.5-3B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 7b99c633142b08f3_train_data.json ds_type: json format: custom path: /workspace/input_data/7b99c633142b08f3_train_data.json type: field_input: prompt_option field_instruction: prompt_question field_output: country format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: VERSIL91/4ff71c75-3979-442e-a77f-e53f56142bf9 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_memory: 0: 70GiB max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/7b99c633142b08f3_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true quantization_config: llm_int8_enable_fp32_cpu_offload: true load_in_8bit: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 4056 strict: false tf32: false tokenizer_type: AutoTokenizer torch_compile: true train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4ff71c75-3979-442e-a77f-e53f56142bf9 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 4ff71c75-3979-442e-a77f-e53f56142bf9 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 4ff71c75-3979-442e-a77f-e53f56142bf9 This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0078 | 1 | nan | | 0.0 | 0.1017 | 13 | nan | | 0.0 | 0.2033 | 26 | nan | | 0.0 | 0.3050 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
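Adapters like this one can also be folded into the base weights for standalone deployment; a sketch using PEFT's `merge_and_unload` (untested, and the NaN losses above suggest the adapter itself may be degenerate):

```python
# Untested sketch: merge the LoRA adapter into the base model and export it.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-3B")
model = PeftModel.from_pretrained(base, "VERSIL91/4ff71c75-3979-442e-a77f-e53f56142bf9")
merged = model.merge_and_unload()  # folds adapter deltas into the base weights
merged.save_pretrained("qwen2.5-3b-merged")
```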
lesso02/57b6696f-95a7-421e-8777-44b3b2751d79
lesso02
"2025-01-21T07:00:41Z"
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-21T06:35:12Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 57b6696f-95a7-421e-8777-44b3b2751d79 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B bf16: true chat_template: llama3 datasets: - data_files: - 8b781c1617481132_train_data.json ds_type: json format: custom path: /workspace/input_data/8b781c1617481132_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso02/57b6696f-95a7-421e-8777-44b3b2751d79 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/8b781c1617481132_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e45dac16-2243-42f5-8ac6-226d8e694661 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: e45dac16-2243-42f5-8ac6-226d8e694661 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 57b6696f-95a7-421e-8777-44b3b2751d79 This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0001 | 1 | nan | | 0.0 | 0.0007 | 5 | nan | | 0.0 | 0.0014 | 10 | nan | | 0.0 | 0.0021 | 15 | nan | | 0.0 | 0.0028 | 20 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
tanoManzo/nucleotide-transformer-v2-500m-multi-species_ft_BioS45_1kbpHG19_DHSs_H3K27AC_one_shot
tanoManzo
"2024-10-29T19:48:14Z"
147
0
transformers
[ "transformers", "safetensors", "esm", "text-classification", "generated_from_trainer", "custom_code", "base_model:InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", "base_model:finetune:InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-10-29T19:47:00Z"
--- library_name: transformers license: cc-by-nc-sa-4.0 base_model: InstaDeepAI/nucleotide-transformer-v2-500m-multi-species tags: - generated_from_trainer model-index: - name: nucleotide-transformer-v2-500m-multi-species_ft_BioS45_1kbpHG19_DHSs_H3K27AC_one_shot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nucleotide-transformer-v2-500m-multi-species_ft_BioS45_1kbpHG19_DHSs_H3K27AC_one_shot This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-v2-500m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-500m-multi-species) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.46.0.dev0 - Pytorch 2.4.1+cu121 - Datasets 2.18.0 - Tokenizers 0.20.0
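A minimal inference sketch (not from the original card; the short DNA fragment and the class semantics are assumptions — the repo name suggests 1 kbp inputs, and the base model requires `trust_remote_code=True`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "tanoManzo/nucleotide-transformer-v2-500m-multi-species_ft_BioS45_1kbpHG19_DHSs_H3K27AC_one_shot"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True)

# Illustrative short DNA fragment; real inputs would be ~1 kbp sequences
inputs = tokenizer("ATTCTGGATCGCGATTACGGCATT", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class meanings are not documented in the card
```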
aslez123/segmentation-train
aslez123
"2024-02-27T11:04:50Z"
34
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-02-27T10:31:47Z"
--- license: other base_model: nvidia/mit-b0 tags: - generated_from_trainer model-index: - name: segmentation-train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segmentation-train This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
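The card documents neither the dataset nor the label set, so only a shape-level inference sketch is possible (an assumption on top of the standard SegFormer API; the image URL is reused from elsewhere in this dump, purely for illustration):

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "aslez123/segmentation-train"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, H/4, W/4)
pred_mask = logits.argmax(dim=1)[0]  # per-pixel class ids at reduced resolution
```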
abhishek/wf85-h28o-tffz-0
abhishek
"2023-12-14T18:33:51Z"
8
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain", "dataset:abhishek/autotrain-data-wf85-h28o-tffz", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-12-14T18:33:46Z"
--- tags: - autotrain - image-classification widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace datasets: - abhishek/autotrain-data-wf85-h28o-tffz --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.06153846153846154 f1_micro: 0.18181818181818182 f1_weighted: 0.055944055944055944 precision_macro: 0.03636363636363636 precision_micro: 0.18181818181818182 precision_weighted: 0.03305785123966942 recall_macro: 0.2 recall_micro: 0.18181818181818182 recall_weighted: 0.18181818181818182 accuracy: 0.18181818181818182
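A minimal usage sketch (not from the original card; it assumes the checkpoint loads with the standard pipeline — and note the near-random validation metrics above before relying on its predictions):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="abhishek/wf85-h28o-tffz-0")
# The pipeline accepts a URL, local path, or PIL image
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)  # list of {label, score} dicts
```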
VERSIL91/89337416-e71c-47d6-b1e1-5bfaea333a89
VERSIL91
"2024-12-26T13:06:19Z"
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:JackFram/llama-160m", "base_model:adapter:JackFram/llama-160m", "license:apache-2.0", "region:us" ]
null
"2024-12-26T13:04:46Z"
--- library_name: peft license: apache-2.0 base_model: JackFram/llama-160m tags: - axolotl - generated_from_trainer model-index: - name: 89337416-e71c-47d6-b1e1-5bfaea333a89 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml accelerate_config: dynamo_backend: inductor mixed_precision: bf16 num_machines: 1 num_processes: auto use_cpu: false adapter: lora base_model: JackFram/llama-160m bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 562043a1fdc4c9f9_train_data.json ds_type: json format: custom path: /workspace/input_data/562043a1fdc4c9f9_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: VERSIL91/89337416-e71c-47d6-b1e1-5bfaea333a89 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_memory: 0: 70GiB max_steps: 5 micro_batch_size: 2 mlflow_experiment_name: /tmp/562043a1fdc4c9f9_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true quantization_config: llm_int8_enable_fp32_cpu_offload: true load_in_8bit: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer torch_compile: true train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 89337416-e71c-47d6-b1e1-5bfaea333a89 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 89337416-e71c-47d6-b1e1-5bfaea333a89 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 89337416-e71c-47d6-b1e1-5bfaea333a89 This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 5.5794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.3029 | 0.0019 | 1 | 5.6038 | | 5.3121 | 0.0037 | 2 | 5.5934 | | 5.3087 | 0.0074 | 4 | 5.5794 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Roamify/finetuned-attraction_summarization_t5
Roamify
"2024-06-20T15:37:07Z"
6
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-06-20T15:36:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF
mradermacher
"2024-12-12T02:10:30Z"
38
0
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/airoboros-gpt4-1.4", "base_model:jondurbin/airoboros-7b-gpt4-1.4", "base_model:quantized:jondurbin/airoboros-7b-gpt4-1.4", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix" ]
null
"2024-12-12T00:21:21Z"
--- base_model: jondurbin/airoboros-7b-gpt4-1.4 datasets: - jondurbin/airoboros-gpt4-1.4 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | | | 
[GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-7b-gpt4-1.4-i1-GGUF/resolve/main/airoboros-7b-gpt4-1.4.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
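A minimal local-inference sketch via `llama-cpp-python` (an assumption — the card itself defers to TheBloke's READMEs; the file name matches the Q4_K_M row above, and the USER/ASSISTANT prompt shape is a guess at the airoboros chat format):

```python
from llama_cpp import Llama

# Assumes the quant file was downloaded from this repo to the working directory
llm = Llama(model_path="airoboros-7b-gpt4-1.4.i1-Q4_K_M.gguf", n_ctx=2048)
out = llm("USER: Explain what an imatrix quant is.\nASSISTANT:", max_tokens=128)
print(out["choices"][0]["text"])
```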
cespalv/sem-segmentor
cespalv
"2024-07-08T15:36:52Z"
30
0
transformers
[ "transformers", "safetensors", "mask2former", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-08T15:31:32Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stvhuang/rcr-run-bvz9qc57-65884-master-0_20240102T164836-ep08
stvhuang
"2024-01-08T21:47:43Z"
90
0
transformers
[ "transformers", "safetensors", "deberta-v2", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-01-08T21:46:28Z"
Entry not found
Harsh202/new_opt_finetune_MTL_ecoc_Minimal_alpaca_lora_groupFalse_low_LR_BCE_128intermediate_2epoch
Harsh202
"2024-09-10T08:57:34Z"
145
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-10T08:55:32Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
callmesan/indic-sentence-bert-nli-roman-urdu-binary
callmesan
"2024-12-03T18:21:42Z"
6
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:l3cube-pune/indic-sentence-bert-nli", "base_model:finetune:l3cube-pune/indic-sentence-bert-nli", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-03T17:51:44Z"
--- library_name: transformers license: cc-by-4.0 base_model: l3cube-pune/indic-sentence-bert-nli tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: indic-sentence-bert-nli-roman-urdu-binary results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # indic-sentence-bert-nli-roman-urdu-binary This model is a fine-tuned version of [l3cube-pune/indic-sentence-bert-nli](https://huggingface.co/l3cube-pune/indic-sentence-bert-nli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2789 - Accuracy: 0.9061 - Precision: 0.9058 - Recall: 0.9055 - F1: 0.9057 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.4984 | 0.9912 | 56 | 0.4611 | 0.8452 | 0.8486 | 0.8489 | 0.8452 | | 0.3582 | 2.0 | 113 | 0.3373 | 0.8826 | 0.8843 | 0.8802 | 0.8816 | | 0.2724 | 2.9912 | 169 | 0.2869 | 0.8901 | 0.8894 | 0.8901 | 0.8897 | | 0.2093 | 4.0 | 226 | 0.2754 | 0.8926 | 0.8922 | 0.8920 | 0.8921 | | 0.1622 | 4.9912 | 282 | 0.2980 | 0.8989 | 0.9016 | 0.8961 | 0.8978 | | 0.1235 | 6.0 | 339 | 0.3167 | 0.8889 | 0.8883 | 0.8884 | 0.8884 | | 0.1125 | 6.9912 | 395 | 0.3369 | 0.8939 | 0.8973 | 0.8907 | 0.8926 | | 0.0811 | 8.0 | 452 | 0.3535 | 0.8914 | 0.8906 | 0.8918 | 0.8911 | | 0.0797 | 8.9912 | 508 | 0.3833 | 0.8914 | 0.8919 | 0.8898 | 0.8906 | | 0.0585 | 9.9115 | 560 | 0.3809 | 0.8926 | 0.8924 | 0.8918 | 0.8920 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
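A minimal usage sketch (the input sentence is illustrative Roman Urdu; the card does not name the two classes, so expect generic LABEL_0/LABEL_1 outputs):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="callmesan/indic-sentence-bert-nli-roman-urdu-binary",
)
print(clf("yeh cheez bohat achi hai"))
```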
morturr/flan-t5-base-amazon-text-classification-2024-06-25-seed-16
morturr
"2024-06-25T09:20:16Z"
6
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-25T08:58:20Z"
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: flan-t5-base-amazon-text-classification-2024-06-25-seed-16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-amazon-text-classification-2024-06-25-seed-16 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.2 - Pytorch 2.3.1+cu121 - Datasets 2.10.1 - Tokenizers 0.15.2
FremyCompany/rl-bert-oscar-nl-step1
FremyCompany
"2023-04-17T10:00:35Z"
116
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-04-17T09:47:35Z"
Entry not found
melisa/get_linear_approximation_last_meta-llama_Meta-Llama-3-8B-Instruct_cut_0
melisa
"2024-05-25T15:33:20Z"
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-25T15:27:48Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Martha-987/whisper-small-ArMarthaFikryTest
Martha-987
"2023-06-19T09:13:43Z"
79
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ar", "dataset:Martha-987/MyOwnData", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-06-19T07:21:55Z"
--- language: - ar license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - Martha-987/MyOwnData model-index: - name: Whisper Small Ar- Martha results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Ar- Martha This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the MyOwnData dataset. It achieves the following results on the evaluation set: - eval_loss: 2.6636 - eval_wer: 48.2981 - eval_runtime: 6533.5828 - eval_samples_per_second: 0.866 - eval_steps_per_second: 0.866 - epoch: 0.01 - step: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5655 ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
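A minimal transcription sketch (the audio path is a placeholder; the ASR pipeline resamples input audio to the 16 kHz Whisper expects):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Martha-987/whisper-small-ArMarthaFikryTest",
)
result = asr("arabic_sample.wav")  # placeholder path to a local audio file
print(result["text"])
```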
etetet/my_awesome_eli5_mlm_model
etetet
"2023-07-29T20:34:10Z"
178
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-07-29T20:07:20Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_mlm_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_mlm_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2578 | 1.0 | 1145 | 2.0618 | | 2.1775 | 2.0 | 2290 | 2.0267 | | 2.1086 | 3.0 | 3435 | 2.0174 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
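A minimal usage sketch (the model inherits distilroberta-base's `<mask>` token; the sentence is illustrative):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="etetet/my_awesome_eli5_mlm_model")
for pred in fill("The Milky Way is a <mask> galaxy."):
    print(pred["token_str"], round(pred["score"], 3))
```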
timm/vit_medium_patch32_clip_224.tinyclip_laion400m
timm
"2024-12-27T02:01:55Z"
174
0
open_clip
[ "open_clip", "pytorch", "safetensors", "clip", "zero-shot-image-classification", "license:mit", "region:us" ]
zero-shot-image-classification
"2024-03-20T21:37:47Z"
--- tags: - clip library_name: open_clip pipeline_tag: zero-shot-image-classification license: mit --- # Model card for vit_medium_patch32_clip_224.tinyclip_laion400m
DMFZ/marian-finetuned-kde4-en-to-fr
DMFZ
"2024-07-28T16:09:58Z"
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2024-07-28T08:29:52Z"
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.91210143343284 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8554 - Bleu: 52.9121 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.42.4 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
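A minimal usage sketch (the example string is an arbitrary English UI phrase in the spirit of the KDE4 domain):

```python
from transformers import pipeline

translator = pipeline("translation", model="DMFZ/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```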
azugarini/clue-instruct-llama-7b
azugarini
"2024-07-11T13:12:05Z"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:azugarini/clue-instruct", "arxiv:2404.06186", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-27T20:15:20Z"
--- license: llama2 datasets: - azugarini/clue-instruct language: - en metrics: - rouge --- ## Pre-print More details about the model are available [here](https://arxiv.org/abs/2404.06186) ## Citation If you find it useful, please cite us: ``` @inproceedings{zugarini2024clue, title={Clue-Instruct: Text-Based Clue Generation for Educational Crossword Puzzles}, author={Zugarini, Andrea and Zeinalipour, Kamyar and Kadali, Surya Sai and Maggini, Marco and Gori, Marco and Rigutini, Leonardo}, booktitle={Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, pages={3347--3356}, year={2024} } ```
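A minimal generation sketch (an assumption — the card does not specify a prompt template, and the clue-instruct paper defines its own instruction format, so treat the raw prompt below as illustrative; `device_map="auto"` requires accelerate):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "azugarini/clue-instruct-llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Generate three crossword clues for the keyword: photosynthesis"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```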
surathisin/surathisin-model-test
surathisin
"2023-10-12T14:05:19Z"
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-10-12T13:48:10Z"
Entry not found
pantelis-ninja/unsloth-Qwen2.5-3B-Instruct_gas-1_dtype-bfloat16_r-8_lr-0.0005_ms-100_gas-1_max-steps-100
pantelis-ninja
"2024-11-30T08:14:57Z"
75
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-30T08:13:59Z"
--- base_model: unsloth/qwen2.5-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** pantelis-ninja - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-3b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
johnsutor/mixture-of-llamas-linear
johnsutor
"2024-05-30T16:36:28Z"
49
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:merge:DeepMount00/Llama-3-8b-Ita", "base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct", "base_model:merge:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct", "base_model:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3", "base_model:merge:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3", "base_model:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0", "base_model:merge:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:nbeerbower/llama-3-gutenberg-8B", "base_model:merge:nbeerbower/llama-3-gutenberg-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-30T16:19:13Z"
--- base_model: - VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct - nbeerbower/llama-3-gutenberg-8B - jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0 - meta-llama/Meta-Llama-3-8B-Instruct - DeepMount00/Llama-3-8b-Ita - failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # linear This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base. ### Models Merged The following models were included in the merge: * [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) * [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B) * [jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0](https://huggingface.co/jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0) * [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) * [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: meta-llama/Meta-Llama-3-8B-Instruct parameters: density: 0.5 weight: 1.0 - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 parameters: density: 0.5 weight: 1.0 - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct parameters: density: 0.5 weight: 1.0 - model: DeepMount00/Llama-3-8b-Ita parameters: density: 0.5 weight: 1.0 - model: nbeerbower/llama-3-gutenberg-8B parameters: density: 0.5 weight: 1.0 - model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0 parameters: density: 0.5 weight: 1.0 merge_method: linear tokenizer_source: union base_model: meta-llama/Meta-Llama-3-8B-Instruct parameters: int8_mask: true dtype: bfloat16 ```
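To reproduce the merge, the YAML above can be fed to mergekit; a sketch following the usage example in mergekit's README (the output path is arbitrary, and the Python API may have shifted between mergekit versions):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# The YAML configuration above, saved to disk as config.yml
with open("config.yml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    "./merged-llama",  # arbitrary output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```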
AlignmentResearch/robust_llm_pythia-imdb-31m-mz-ada-v3-nd
AlignmentResearch
"2024-03-25T18:03:29Z"
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "base_model:finetune:EleutherAI/pythia-31m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-03-25T18:03:21Z"
--- tags: - generated_from_trainer base_model: EleutherAI/pythia-31m model-index: - name: robust_llm_pythia-imdb-31m-mz-ada-v3-nd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-imdb-31m-mz-ada-v3-nd This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
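A minimal usage sketch (the card leaves the dataset "unknown", though the repo name suggests IMDB sentiment; label names are not documented):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-imdb-31m-mz-ada-v3-nd",
)
print(clf("This movie was a complete waste of time."))
```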
isenbek/llama-2-7b-chat-hf-local
isenbek
"2023-08-23T06:15:13Z"
6
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-08-23T05:00:48Z"
Entry not found
briannlongzhao/10
briannlongzhao
"2024-01-29T21:40:36Z"
4
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:stabilityai/stable-diffusion-2-1", "base_model:adapter:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-01-27T18:21:25Z"
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1 instance_prompt: a photo of <new1> American lobster tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - custom-diffusion inference: true --- # Custom Diffusion - briannlongzhao/10 These are Custom Diffusion adaption weights for stabilityai/stable-diffusion-2-1. The weights were trained on a photo of <new1> American lobster using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following. For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
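A minimal inference sketch following the diffusers Custom Diffusion docs (assumption: the repo ships the default weight file names produced by the training script, `pytorch_custom_diffusion_weights.bin` and `<new1>.bin`):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
# Load the Custom Diffusion cross-attention weights and the <new1> token embedding
pipe.unet.load_attn_procs("briannlongzhao/10", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("briannlongzhao/10", weight_name="<new1>.bin")

image = pipe("a photo of <new1> American lobster", num_inference_steps=50).images[0]
image.save("lobster.png")
```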
youdiniplays/tl-ceb-model
youdiniplays
"2024-01-14T16:43:55Z"
89
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-01-14T08:26:22Z"
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - bleu model-index: - name: tl-ceb-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tl-ceb-model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5272 - Bleu: 2.9334 - Gen Len: 18.2954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 0.9668 | 1.0 | 6516 | 0.8034 | 2.2949 | 18.3327 | | 0.8082 | 2.0 | 13032 | 0.6691 | 2.6324 | 18.3182 | | 0.7297 | 3.0 | 19548 | 0.5954 | 2.7526 | 18.2929 | | 0.6745 | 4.0 | 26064 | 0.5474 | 2.886 | 18.308 | | 0.6319 | 5.0 | 32580 | 0.5272 | 2.9334 | 18.2954 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
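A minimal usage sketch (the card does not document a task prefix, so the raw Tagalog input below is an assumption; t5-small fine-tunes often expect one):

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="youdiniplays/tl-ceb-model")
print(translator("Magandang umaga sa inyong lahat.")[0]["generated_text"])
```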
seprised/llama_fine_tuned
seprised
"2024-12-29T14:30:07Z"
137
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-29T13:37:11Z"
---
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** seprised
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
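For completeness, a loading sketch (not part of the original upload; it assumes the tokenizer ships with a standard Llama-3 chat template):

```python
# Hedged usage sketch, assuming a chat template is bundled with the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "seprised/llama_fine_tuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "In one sentence, what is instruction tuning?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```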
juan-glez29/BERTuit-ideologiamul-none
juan-glez29
"2024-02-19T18:17:54Z"
8
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-19T12:02:43Z"
Entry not found
timm/resnetrs420.tf_in1k
timm
"2025-01-21T21:43:16Z"
395
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "transformers", "arxiv:2103.07579", "arxiv:1512.03385", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-05T18:54:04Z"
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
- transformers
---
# Model card for resnetrs420.tf_in1k

A ResNetRS-B image classification model. This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample

Trained on ImageNet-1k by paper authors in Tensorflow.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 191.9
  - GMACs: 64.2
  - Activations (M): 126.6
  - Image size: train = 320 x 320, test = 416 x 416
- **Papers:**
  - Revisiting ResNets: Improved Training and Scaling Strategies: https://arxiv.org/abs/2103.07579
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/resnet

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below (missing from the original snippet)

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('resnetrs420.tf_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnetrs420.tf_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output, e.g.:
    #  torch.Size([1, 64, 160, 160])
    #  torch.Size([1, 256, 80, 80])
    #  torch.Size([1, 512, 40, 40])
    #  torch.Size([1, 1024, 20, 20])
    #  torch.Size([1, 2048, 10, 10])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnetrs420.tf_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 10, 10) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |

## Citation

```bibtex
@article{bello2021revisiting,
  title={Revisiting ResNets: Improved Training and Scaling Strategies},
  author={Irwan Bello and William Fedus and Xianzhi Du and Ekin D. Cubuk and Aravind Srinivas and Tsung-Yi Lin and Jonathon Shlens and Barret Zoph},
  journal={arXiv preprint arXiv:2103.07579},
  year={2021}
}
```

```bibtex
@article{He2015,
  author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
  title = {Deep Residual Learning for Image Recognition},
  journal = {arXiv preprint arXiv:1512.03385},
  year = {2015}
}
```

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
SzilviaB/Daredevil-Aura-8B_uncensored_OAS_abliterated
SzilviaB
"2024-10-03T20:00:47Z"
7
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:mlabonne/NeuralDaredevil-8B-abliterated", "base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated", "base_model:saishf/Aura-Uncensored-OAS-8B-L3", "base_model:merge:saishf/Aura-Uncensored-OAS-8B-L3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-03T19:55:36Z"
---
base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- saishf/Aura-Uncensored-OAS-8B-L3
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
* [saishf/Aura-Uncensored-OAS-8B-L3](https://huggingface.co/saishf/Aura-Uncensored-OAS-8B-L3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/NeuralDaredevil-8B-abliterated
  - model: saishf/Aura-Uncensored-OAS-8B-L3
merge_method: slerp
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
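For readers unfamiliar with the method, here is a minimal sketch of spherical linear interpolation between two weight tensors. This is illustrative only, not mergekit's implementation, which applies the per-layer `t` schedule from the config above:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between the tensors
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```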
mradermacher/Kosmos-EVAA-gamma-8B-GGUF
mradermacher
"2024-12-28T13:56:34Z"
47
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:jaspionjader/Kosmos-EVAA-gamma-8B", "base_model:quantized:jaspionjader/Kosmos-EVAA-gamma-8B", "endpoints_compatible", "region:us" ]
null
"2024-12-28T13:49:15Z"
---
base_model: jaspionjader/Kosmos-EVAA-gamma-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jaspionjader/Kosmos-EVAA-gamma-8B

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-gamma-8B-GGUF/resolve/main/Kosmos-EVAA-gamma-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
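Beyond the linked READMEs, here is a minimal loading sketch; the choice of the `llama-cpp-python` bindings and of the Q4_K_M file are assumptions on my part, and any GGUF-capable runtime works just as well:

```python
# Sketch: load one of the quant files listed above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Kosmos-EVAA-gamma-8B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Briefly, what does Q4_K_M quantization trade off?", max_tokens=64)
print(out["choices"][0]["text"])
```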
yiyanghkust/finbert-tone
yiyanghkust
"2022-10-17T00:35:39Z"
4,186,465
166
transformers
[ "transformers", "pytorch", "tf", "text-classification", "financial-sentiment-analysis", "sentiment-analysis", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language: "en"
tags:
- financial-sentiment-analysis
- sentiment-analysis
widget:
- text: "growth is strong and we have plenty of liquidity"
---

`FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice. It is trained on the following three financial communication corpora, totaling 4.9B tokens:

- Corporate Reports 10-K & 10-Q: 2.5B tokens
- Earnings Call Transcripts: 1.3B tokens
- Analyst Reports: 1.1B tokens

More technical details on `FinBERT`: [Click Link](https://github.com/yya518/FinBERT)

This released `finbert-tone` model is the `FinBERT` model fine-tuned on 10,000 manually annotated (positive, negative, neutral) sentences from analyst reports. This model achieves superior performance on the financial tone analysis task. If you are simply interested in using `FinBERT` for financial tone analysis, give it a try.

If you use the model in your academic work, please cite the following paper:

Huang, Allen H., Hui Wang, and Yi Yang. "FinBERT: A Large Language Model for Extracting Information from Financial Text." *Contemporary Accounting Research* (2022).

# How to use

You can use this model with the Transformers pipeline for sentiment analysis.

```python
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline

finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-tone', num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-tone')

nlp = pipeline("sentiment-analysis", model=finbert, tokenizer=tokenizer)

sentences = ["there is a shortage of capital, and we need extra financing",
             "growth is strong and we have plenty of liquidity",
             "there are doubts about our finances",
             "profits are flat"]
results = nlp(sentences)
print(results)  # LABEL_0: neutral; LABEL_1: positive; LABEL_2: negative
```
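Since the pipeline reports raw `LABEL_*` names, a small follow-up using the mapping from the comment above makes the output readable:

```python
# Map raw labels to tones, per the card's own LABEL_0/1/2 comment.
label_map = {"LABEL_0": "neutral", "LABEL_1": "positive", "LABEL_2": "negative"}
for sentence, result in zip(sentences, results):
    print(f"{label_map[result['label']]:8s} ({result['score']:.3f})  {sentence}")
```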
BatsResearch/bonito-v1
BatsResearch
"2024-06-11T12:10:55Z"
670
94
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "data generation", "text2text-generation", "en", "dataset:BatsResearch/ctga-v1", "arxiv:2402.18334", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-02-26T10:29:04Z"
---
datasets:
- BatsResearch/ctga-v1
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- data generation
license: apache-2.0
---

# Model Card for bonito

Bonito is an open-source model for conditional task generation: the task of converting unannotated text into task-specific training datasets for instruction tuning.

![Bonito](https://raw.githubusercontent.com/BatsResearch/bonito/main/assets/workflow.png)

## Model Details

### Model Description

Bonito can be used to create synthetic instruction tuning datasets to adapt large language models on users' specialized, private data. In our [paper](https://arxiv.org/abs/2402.18334), we show that Bonito can be used to adapt both pretrained and instruction-tuned models to tasks without any annotations.

- **Developed by:** Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach
- **Model type:** MistralForCausalLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** `mistralai/Mistral-7B-v0.1`

### Model Sources

- **Repository:** [https://github.com/BatsResearch/bonito](https://github.com/BatsResearch/bonito)
- **Paper:** [Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation](https://arxiv.org/abs/2402.18334)

## Uses

### Direct Use

To easily generate synthetic instruction tuning datasets, we recommend using the [bonito](https://github.com/BatsResearch/bonito) package built using the `transformers` and the `vllm` libraries.

```python
from bonito import Bonito
from vllm import SamplingParams
from datasets import load_dataset

# Initialize the Bonito model
bonito = Bonito("BatsResearch/bonito-v1")

# load dataset with unannotated text
unannotated_text = load_dataset(
    "BatsResearch/bonito-experiment",
    "unannotated_contract_nli"
)["train"].select(range(10))

# Generate synthetic instruction tuning dataset
sampling_params = SamplingParams(max_tokens=256, top_p=0.95, temperature=0.5, n=1)
synthetic_dataset = bonito.generate_tasks(
    unannotated_text,
    context_col="input",
    task_type="nli",
    sampling_params=sampling_params
)
```

### Out-of-Scope Use

Our model is trained to generate the following task types: summarization, sentiment analysis, multiple-choice question answering, extractive question answering, topic classification, natural language inference, question generation, text generation, question answering without choices, paraphrase identification, sentence completion, yes-no question answering, word sense disambiguation, paraphrase generation, textual entailment, and coreference resolution. The model might not produce accurate synthetic tasks beyond these task types.

## Bias, Risks, and Limitations

**Limitations**

Our work relies on the availability of large amounts of unannotated text. If only a small quantity of unannotated text is present, the target language model, after adaptation, may experience a drop in performance.
While we demonstrate positive improvements on pretrained and instruction-tuned models, our observations are limited to the three task types (yes-no question answering, extractive question answering, and natural language inference) considered in our paper.

**Risks**

Bonito poses risks similar to those of any large language model. For example, our model could be used to generate factually incorrect datasets in specialized domains. Our model can exhibit the biases and stereotypes of the base model, Mistral-7B, even after extensive supervised fine-tuning. Finally, our model does not include safety training and can potentially generate harmful content.

### Recommendations

We recommend users thoroughly inspect the generated tasks and benchmark performance on critical datasets before deploying the models trained with the synthetic tasks into the real world.

## Training Details

### Training Data

To train Bonito, we create a new dataset called conditional task generation with attributes by remixing existing instruction tuning datasets. See [ctga-v1](https://huggingface.co/datasets/BatsResearch/ctga-v1) for more details.

### Training Procedure

#### Training Hyperparameters

We train the model using [Q-LoRA](https://github.com/artidoro/qlora) by optimizing the cross-entropy loss over the output tokens. The model is trained for 100,000 steps. The training takes about 4 days on four GPUs to complete.

We use the following hyperparameters (a sketch of the resulting `peft` configuration follows the citation below):
- Q-LoRA rank (r): 64
- Q-LoRA scaling factor (alpha): 4
- Q-LoRA dropout: 0
- Optimizer: Paged AdamW
- Learning rate scheduler: linear
- Max. learning rate: 1e-04
- Min. learning rate: 0
- Weight decay: 0
- Dropout: 0
- Max. gradient norm: 0.3
- Effective batch size: 16
- Max. input length: 2,048
- Max. output length: 2,048
- Num. steps: 100,000

## Citation

```
@inproceedings{bonito:aclfindings24,
  title = {Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation},
  author = {Nayak, Nihal V. and Nan, Yiyang and Trost, Avi and Bach, Stephen H.},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2024},
  year = {2024}
}
```
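As a rough illustration of the Q-LoRA hyperparameters listed under Training Procedure above (a sketch, not the authors' training script; the 4-bit quantization config is an assumption):

```python
# Sketch mirroring the stated Q-LoRA settings.
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
lora_config = LoraConfig(
    r=64,            # Q-LoRA rank
    lora_alpha=4,    # Q-LoRA scaling factor
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)
```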
crybit/role_172840396119
crybit
"2024-10-08T16:15:25Z"
5
0
null
[ "safetensors", "llama", "region:us" ]
null
"2024-10-08T16:12:41Z"
Entry not found
maddes8cht/h2oai-h2ogpt-gm-oasst1-en-2048-falcon-7b-v2-gguf
maddes8cht
"2023-11-22T20:26:16Z"
207
1
transformers
[ "transformers", "gguf", "gpt", "llm", "large language model", "h2o-llmstudio", "conversational", "en", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "region:us" ]
text-generation
"2023-10-23T06:28:36Z"
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
  https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
pipeline_tag: conversational
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

# h2ogpt-gm-oasst1-en-2048-falcon-7b-v2 - GGUF

- Model creator: [h2oai](https://huggingface.co/h2oai)
- Original model: [h2ogpt-gm-oasst1-en-2048-falcon-7b-v2](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2)

# K-Quants in Falcon 7b models

New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (while Falcon 40B is, and always has been, fully compatible with K-quantization). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.

For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size, or smaller file sizes with comparable performance. This solution therefore ensures improved performance and efficiency over the legacy Q4_0, Q4_1, Q5_0 and Q5_1 quantizations.

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library. A growing list of software is using it and can therefore use this model. The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.

# Quantization variants

There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types. Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.

## Note:

Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions. (This mainly refers to Falcon 7b and Starcoder models)

# K-quants

K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load. So, if possible, use K-quants. With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger quality differences between the two answers.

---

# Original Model Card:

# Model Card

## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).

- Base model: [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, `torch` and `einops` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```

```python
import torch
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2",
    use_fast=False,
    padding_side="left",
    trust_remote_code=True,
)

generate_text = pipeline(
    model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2",
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    use_fast=False,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:

```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```

Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2",
    use_fast=False,
    padding_side="left",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2",
    torch_dtype=torch.float16,
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=False, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float16, device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` RWForCausalLM( (transformer): RWModel( (word_embeddings): Embedding(65024, 4544) (h): ModuleList( (0-31): 32 x DecoderLayer( (input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True) (self_attention): Attention( (maybe_rotary): RotaryEmbedding() (query_key_value): Linear(in_features=4544, out_features=4672, bias=False) (dense): Linear(in_features=4544, out_features=4544, bias=False) (attention_dropout): Dropout(p=0.0, inplace=False) ) (mlp): MLP( (dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False) (act): GELU(approximate='none') (dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False) ) ) ) (ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True) ) (lm_head): Linear(in_features=4544, out_features=65024, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. ***End of original Model File*** --- ## Please consider supporting my work **Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community. <center> [![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io) [![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911) [![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht) [![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht) [![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966) </center>
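--- **A quick usage sketch (added for convenience):** the quantized files in this repository can be loaded with the `llama-cpp-python` bindings. The file name below is an illustrative assumption (substitute whichever quantization variant you downloaded), and the generation settings are assumptions as well; the prompt format is the one documented in the original model card above.

```python
from llama_cpp import Llama

# Illustrative file name - pick the quantization variant that suits your hardware.
llm = Llama(model_path="h2ogpt-gm-oasst1-en-2048-falcon-7b-v2.Q4_K_M.gguf", n_ctx=2048)

# Prompt format from the original model card: <|prompt|>...<|endoftext|><|answer|>
out = llm(
    "<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>",
    max_tokens=256,
    stop=["<|endoftext|>"],
)
print(out["choices"][0]["text"])
```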
ssh1419/deplot-batch-1-token-freeze-curri-update-loss
ssh1419
"2024-09-19T03:02:24Z"
34
0
transformers
[ "transformers", "safetensors", "pix2struct", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
"2024-09-19T03:01:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
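Based on the repository tags (`pix2struct`, `image-text-to-text`) and the model name, this appears to be a DePlot-style chart-to-table fine-tune. While the card above remains unfilled, here is a hedged, minimal usage sketch with the standard Pix2Struct classes; the input file is hypothetical and the header text is the prompt commonly used with DePlot, which may not match this particular fine-tune.

```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model = Pix2StructForConditionalGeneration.from_pretrained("ssh1419/deplot-batch-1-token-freeze-curri-update-loss")
processor = Pix2StructProcessor.from_pretrained("ssh1419/deplot-batch-1-token-freeze-curri-update-loss")

image = Image.open("chart.png")  # hypothetical input chart image
# Standard DePlot-style header; this fine-tune may expect a different prompt.
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```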
SakanaAI/Evo-Ukiyoe-v1
SakanaAI
"2024-07-19T04:51:55Z"
132
33
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "ja", "license:apache-2.0", "region:us" ]
text-to-image
"2024-07-11T08:37:41Z"
--- library_name: diffusers license: apache-2.0 language: - ja pipeline_tag: text-to-image tags: - stable-diffusion --- # 🐟 Evo-Ukiyoe-v1 🤗 [Models](https://huggingface.co/SakanaAI/Evo-Ukiyoe-v1/) | 📝 [Blog](https://sakana.ai/evo-ukiyoe/) | 🐦 [Twitter](https://twitter.com/SakanaAILabs) **Evo-Ukiyoe-v1** is an experimental, education-purpose image generation model that produces images in the style of Japanese woodblock prints (Ukiyoe). The model was trained on top of Sakana AI's [Evo-SDXL-JP](https://huggingface.co/SakanaAI/EvoSDXL-JP-v1). All of the data used to train Evo-Ukiyoe comes from Ukiyoe images belonging to [Ritsumeikan University, Art Research Center](https://www.arc.ritsumei.ac.jp/). Please refer to our [blog](https://sakana.ai/evo-ukiyoe/) for more details. ## Usage Use the code below to get started with the model. <details> <summary> Click to expand </summary> 1. Git clone this model card ``` git clone https://huggingface.co/SakanaAI/Evo-Ukiyoe-v1 ``` 2. Install git-lfs if you don't have it yet. ``` sudo apt install git-lfs git lfs install ``` 3. Create conda env ``` conda create -n evo-ukiyoe python=3.11 conda activate evo-ukiyoe ``` 4. Install packages ``` cd Evo-Ukiyoe-v1 pip install -r requirements.txt ``` 5. Run ```python from evo_ukiyoe_v1 import load_evo_ukiyoe prompt = "着物を着ている猫が庭でお茶を飲んでいる。" # "A cat wearing a kimono is drinking tea in the garden." pipe = load_evo_ukiyoe(device="cuda") images = pipe(prompt + "輻の浮世絵。超詳細。", negative_prompt='', guidance_scale=8.0, num_inference_steps=40).images images[0].save("image.png") ``` </details> ## Model Details <!-- Provide a longer summary of what this model is. --> - **Developed by:** [Sakana AI](https://sakana.ai/) - **Model type:** Diffusion-based text-to-image generative model - **Language(s):** Japanese - **Blog:** https://sakana.ai/evo-ukiyoe/ ## License The Python script included in this repository and the LoRA weights are licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). Please note that the license for the model/pipeline generated by this script is inherited from the source models. ## Uses This model is provided for research and development purposes only and should be considered as an experimental prototype. It is not intended for commercial use or deployment in mission-critical environments. Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed. Sakana AI shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained. Users must fully understand the risks associated with the use of this model and use it at their own discretion. ## Acknowledgement Evo-Ukiyoe was trained on top of Evo-SDXL-JP. We would like to thank the developers of the Evo-SDXL-JP source models for their contributions and for making their work available. - [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) - [Juggernaut-XL-v9](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9) - [SDXL-DPO](https://huggingface.co/mhdang/dpo-sdxl-text2image-v1) - [JSDXL](https://huggingface.co/stabilityai/japanese-stable-diffusion-xl) ## Citation @misc{Evo-Ukiyoe, url = {[https://huggingface.co/SakanaAI/Evo-Ukiyoe-v1](https://huggingface.co/SakanaAI/Evo-Ukiyoe-v1)}, title = {Evo-Ukiyoe}, author = {Clanuwat, Tarin and Shing, Makoto and Imajuku, Yuki and Kitamoto, Asanobu and Akama, Ryo} }
oldiday/515af5ec-b1ab-4762-b938-ed21aacc8db4
oldiday
"2025-02-04T01:59:16Z"
16
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
"2025-02-04T01:57:55Z"
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: 515af5ec-b1ab-4762-b938-ed21aacc8db4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 8b74be9ab0373a6f_train_data.json ds_type: json format: custom path: /workspace/input_data/8b74be9ab0373a6f_train_data.json type: field_input: references field_instruction: question field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: oldiday/515af5ec-b1ab-4762-b938-ed21aacc8db4 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/8b74be9ab0373a6f_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: ab2cd1d3-a8f2-4277-a76f-00c40e9d7b71 wandb_project: Gradients-On-Six wandb_run: your_name wandb_runid: ab2cd1d3-a8f2-4277-a76f-00c40e9d7b71 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 515af5ec-b1ab-4762-b938-ed21aacc8db4 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0008 | 1 | 10.3801 | | 10.3801 | 0.0068 | 9 | 10.3798 | | 10.3788 | 0.0135 | 18 | 10.3793 | | 10.3781 | 0.0203 | 27 | 10.3787 | | 10.3786 | 0.0270 | 36 | 10.3781 | | 10.3774 | 0.0338 | 45 | 10.3774 | | 10.3774 | 0.0405 | 54 | 10.3768 | | 10.3757 | 0.0473 | 63 | 10.3762 | | 10.376 | 0.0540 | 72 | 10.3757 | | 10.3751 | 0.0608 | 81 | 10.3754 | | 10.3751 | 0.0675 | 90 | 10.3753 | | 10.3757 | 0.0743 | 99 | 10.3753 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
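Since the usage sections above are unfilled, here is a minimal sketch for loading this LoRA adapter on top of its base model with `peft`; the prompt and generation settings are illustrative assumptions, not part of the original card.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model taken from the card; the adapter is this repository.
base = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-llama-fast-tokenizer")
model = PeftModel.from_pretrained(base, "oldiday/515af5ec-b1ab-4762-b938-ed21aacc8db4")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-llama-fast-tokenizer")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```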
ClarenceDan/cccaae51-e179-43bf-b398-7687ff139333
ClarenceDan
"2025-01-19T03:05:10Z"
10
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "region:us" ]
null
"2025-01-19T03:00:48Z"
--- library_name: peft license: other base_model: facebook/opt-1.3b tags: - axolotl - generated_from_trainer model-index: - name: cccaae51-e179-43bf-b398-7687ff139333 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-1.3b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5b1d7d2de5370ca4_train_data.json ds_type: json format: custom path: /workspace/input_data/5b1d7d2de5370ca4_train_data.json type: field_input: context field_instruction: question field_output: answers format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: ClarenceDan/cccaae51-e179-43bf-b398-7687ff139333 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/5b1d7d2de5370ca4_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 46dd44b3-e4e4-4c44-8605-e8e8b6dd956e wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 46dd44b3-e4e4-4c44-8605-e8e8b6dd956e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # cccaae51-e179-43bf-b398-7687ff139333 This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.5734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 16.1647 | 0.0001 | 1 | 4.0757 | | 15.6047 | 0.0003 | 3 | 4.0171 | | 17.3113 | 0.0005 | 6 | 3.5892 | | 11.4373 | 0.0008 | 9 | 2.5734 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
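As with any PEFT adapter, the LoRA weights can also be folded into the base model for standalone inference. A minimal sketch (the output directory is an illustrative assumption):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
model = PeftModel.from_pretrained(base, "ClarenceDan/cccaae51-e179-43bf-b398-7687ff139333")

# Merge the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("opt-1.3b-merged")  # illustrative output directory
```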
liujunshi/my_awesome_wnut_model
liujunshi
"2023-06-05T12:14:08Z"
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:wnut_17", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-06-05T12:11:41Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wnut_17 metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: - task: name: Token Classification type: token-classification dataset: name: wnut_17 type: wnut_17 config: wnut_17 split: test args: wnut_17 metrics: - name: Precision type: precision value: 0.5543672014260249 - name: Recall type: recall value: 0.2882298424467099 - name: F1 type: f1 value: 0.37926829268292683 - name: Accuracy type: accuracy value: 0.9402761745970672 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2761 - Precision: 0.5544 - Recall: 0.2882 - F1: 0.3793 - Accuracy: 0.9403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2888 | 0.4658 | 0.2020 | 0.2818 | 0.9364 | | No log | 2.0 | 426 | 0.2761 | 0.5544 | 0.2882 | 0.3793 | 0.9403 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
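The card does not include an inference example; a minimal sketch using the `token-classification` pipeline (the input sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="liujunshi/my_awesome_wnut_model",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)
print(ner("My name is Sarah and I live in London."))
```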
goldfish-models/kor_hang_10mb
goldfish-models
"2024-08-26T16:50:41Z"
8
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "goldfish", "arxiv:2408.10441", "kor", "dataset:oscar-corpus/OSCAR-2109", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-08-13T08:19:42Z"
--- license: apache-2.0 language: - kor datasets: - oscar-corpus/OSCAR-2109 library_name: transformers pipeline_tag: text-generation tags: - goldfish - arxiv:2408.10441 --- # kor_hang_10mb Goldfish is a suite of monolingual language models trained for 350 languages. This model is the <b>Korean</b> (Hangul script) model trained on 10MB of data, after accounting for an estimated byte premium of 1.29; content-matched text in Korean takes on average 1.29x as many UTF-8 bytes to encode as English. The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs). Note: kor_hang is an [individual language](https://iso639-3.sil.org/code_tables/639/data) code. It is not covered by any of the macrolanguage codes included in Goldfish (for the Hang script). All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://www.arxiv.org/abs/2408.10441). Training code and sample usage: https://github.com/tylerachang/goldfish Sample usage also in this Google Colab: [link](https://colab.research.google.com/drive/1rHFpnQsyXJ32ONwCosWZ7frjOYjbGCXG?usp=sharing) ## Model details: To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json. All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences. For best results, make sure that [CLS] is prepended to your input sequence (see sample usage linked above)! Details for this model specifically: * Architecture: gpt2 * Parameters: 39087104 * Maximum sequence length: 512 tokens * Training text data (raw): 12.93MB * Training text data (byte premium scaled): 10.005MB * Training tokens: 2388480 (x10 epochs) * Vocabulary size: 50000 * Compute cost: 1804686144307200.0 FLOPs or ~0.2 NVIDIA A6000 GPU hours Training datasets (percentages prior to deduplication): * 100.00000%: [OSCAR 2021/09](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109) ## Citation If you use this model, please cite: ``` @article{chang-etal-2024-goldfish, title={Goldfish: Monolingual Language Models for 350 Languages}, author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.}, journal={Preprint}, year={2024}, url={https://www.arxiv.org/abs/2408.10441}, } ```
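To make the [CLS] note above concrete, here is a minimal generation sketch. Prepending the token manually is an assumption based on that note (the linked Colab is the canonical reference, and a tokenizer that already inserts [CLS] would not need the manual prepend); the Korean prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/kor_hang_10mb")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/kor_hang_10mb")

# Prepend [CLS] explicitly, per the note above ("안녕하세요" = "hello").
inputs = tokenizer("[CLS]" + "안녕하세요", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```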
exala/db_fe2_1.1
exala
"2024-11-19T19:34:35Z"
107
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-11-19T17:07:33Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
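While the card is unfilled, the repository tags (`distilbert`, `text-classification`) suggest a sequence classifier. A hedged, minimal sketch (the label set depends on the unpublished training setup, and the input sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="exala/db_fe2_1.1")
print(classifier("An example sentence to classify."))  # label names/meanings are not documented
```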
band2001/stolaf-angora-3200
band2001
"2024-04-25T15:42:41Z"
5
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "dataset:band2001/stolaf-angora", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-10T02:24:54Z"
--- license: mit datasets: - band2001/stolaf-angora --- # Model Card for Angora-3200 <!-- Provide a quick summary of what the model is/does. --> This model has been created to help computer science students at St. Olaf College (Northfield, MN) answer questions about fundamental CS principles as well as questions about the specific technical stacks and procedures St. Olaf Computer Science uses. ## Angora-3200 Details This model is built off of [Google's Gemma 7b-it](https://huggingface.co/google/gemma-7b-it) model. It was fine-tuned on a dataset created to address St. Olaf-specific computer science questions. Some of these questions reference the specific instance of git the institution uses or address steps to declare the computer science major. This model was fine-tuned using MLX on an Apple M3 Max chip. This model was trained for 3200 iterations using LoRA as the method for finetuning. - **Developed by:** Ben Anderson & Keegan Murray - **Funded by:** St. Olaf College MSCS Department - **Model type:** Generative - **License:** MIT - **Finetuned from model:** [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) <!-- Provide the basic links for the model. --> - **Repository:** See the GitHub repository [here](https://github.com/band2001/stolaf-angora) - **Paper:** Coming soon... - **Demo:** A video demo is available [here](https://drive.google.com/file/d/1iwThVj88FTgLNANZdv2NineRcBXAqtZp/view?usp=sharing). ## Uses This is intended to be used by computer science students at St. Olaf College. While it can be used broadly for general computer science questions, it has been finetuned to answer questions specific to the St. Olaf Computer Science program. ## How to Get Started with the Model Use the code below to get started with the model. ### Direct Use With Transformers Library #### Use a pipeline as a high-level helper ```python from transformers import pipeline pipe = pipeline("text-generation", model="band2001/stolaf-angora-3200") ``` #### Load model directly ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("band2001/stolaf-angora-3200") model = AutoModelForCausalLM.from_pretrained("band2001/stolaf-angora-3200", device_map="auto") input_ids = tokenizer("YOUR PROMPT HERE", return_tensors="pt").to("YOUR DEVICE IF USING GPU ACCELERATION") outputs = model.generate(**input_ids, max_new_tokens=256) decoded_output = tokenizer.decode(outputs[0]) ``` ### Direct Use With MLX Library Note MLX can only be used with Apple Silicon Macs. It is also recommended to use one of their Max series chips or higher. ```python from mlx_lm import load, generate def format_prompt(prompt, system_prompt = "YOUR SYSTEM PROMPT"): return """<bos><start_of_turn>user ## Instructions {} ## User {}<end_of_turn> <start_of_turn>model """.format(system_prompt, prompt) model, tokenizer = load("band2001/stolaf-angora-3200") prompt = format_prompt("YOUR PROMPT HERE") decoded_output = generate( model, tokenizer, prompt=prompt, verbose=True, temp=0.0, max_tokens=256, ) ``` ### Out-of-Scope Use Outside of using this model to ask questions about computer science topics (generally and specific to St. Olaf College), this model should not be used for other inference. Asking questions about other topics will likely yield answers; however, those topics were not part of the fine-tuning, so the responses will most likely contain errors and could potentially include offensive content. 
## Bias, Risks, and Limitations As we created the fine-tuning dataset from scratch, it is relatively limited compared to the overall size of the model. Our dataset has about 2000 observations, while the model has roughly 8.5B parameters. So while our dataset had a noticeable effect on the tuning of this model, it will still fall back on other knowledge occasionally and provide partially incorrect answers for St. Olaf specific questions. Also note the limitations present in the [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) model and assume they are present in this model as well. ## Training Details ### Training Data The training data can be found in the St. Olaf Angora Dataset ([band2001/stolaf-angora](https://huggingface.co/datasets/band2001/stolaf-angora)). ### Training Procedure To train the model, the data needs to be in the following format. Note the data in [band2001/stolaf-angora](https://huggingface.co/datasets/band2001/stolaf-angora) already is. ``` <bos><start_of_turn>user ## Instructions system prompt goes here ## User prompt/query goes here<end_of_turn> <start_of_turn>model model response here (put a response here for tuning purposes)<end_of_turn><eos> ``` Once the data is in the correct format, QLoRA is recommended. The model can be fine-tuned either using mlx-lm and mps (to tune on an Apple Silicon machine) or a bitsandbytes configuration and cuda (to tune on a machine with Nvidia GPUs). #### Preprocessing To preprocess your data to be in the correct format outlined above, you can use the following helper function: ```python def generate_prompt(entry, system_prompt = SYSTEM_PROMPT): ''' This function formats a question/answer pair to gemma's chat template. :param: entry - a dictionary with an instruction and a response :param: system_prompt: the system prompt to be used :return: the formatted string for gemma's chat template ''' return """<bos><start_of_turn>user ## Instructions {} ## User {}<end_of_turn> <start_of_turn>model {}<end_of_turn><eos>""".format(system_prompt, entry["instruction"], entry["response"]) ``` When trying to use inference with this model, you can format the user's query using this helper function: ```python def format_prompt(prompt, system_prompt = SYSTEM_PROMPT): ''' This function formats a question to gemma's chat template. :param: prompt - a string with the user's query :param: system_prompt: the system prompt to be used :return: the formatted string for gemma's chat template ''' return """<bos><start_of_turn>user ## Instructions {} ## User {}<end_of_turn> <start_of_turn>model """.format(system_prompt, prompt) ``` #### Training Process The MLX LoRA fine-tuning approach was used. You can learn more about [MLX LoRA here](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md). The Gemma 7b-it model was loaded without any conversion. The default `batch_size = 16` was used, and the model reached 3200 iterations by being tuned in four consecutive runs of 800 iterations each. Once the fine-tuned weights were created, the model was fused using MLX's fuse functionality. You can learn more about [fusing with MLX here](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md#Fuse-and-Upload). One important change made when fusing with MLX was to change some of the MLX package code to include `"format":"pt"` in the metadata so this model can be used with the transformers library. 
To do that, tweak the library code in <path_to_your_site-packages>/mlx_lm/utils.py, replacing `mx.save_safetensors(str(shard_path), shard, metadata={"format":"mlx"})` with `mx.save_safetensors(str(shard_path), shard, metadata={"format":"pt"})` so that the fused weights are written with the metadata attribute the transformers library expects. Special thanks to [Alexweberk's guide on GitHub](https://gist.github.com/alexweberk/635431b5c5773efd6d1755801020429f) for helping solve this issue. Finally, the fused model was uploaded to this HuggingFace repo! If you look at the GitHub repo for this project, mlx_lora.sh includes the command used for the LoRA fine-tuning, mlx_fuse.sh includes the command for the model fusing, and mlx_upload.sh includes the upload command. There is additionally an optional mlx_convert.sh for converting the Google Gemma 7b-it model before fine-tuning if desired. ## Evaluation Testing loss and perplexity were the two metrics used to evaluate the Angora models. A summary of the results for all the different iteration models is included below. ### Results | Number of iterations | Testing Loss | Perplexity | |:----------|:----------|:---------| | 800 | 0.569 | 1.766 | | 1600 | 0.302 | 1.352 | | 2400 | 0.225 | 1.252 | | 3200 | 0.185 | 1.203 | | 4000 | 0.170 | 1.185 | ### Testing Data The testing data is available [here](https://huggingface.co/datasets/band2001/stolaf-angora/viewer/default/test). ## Model Card Contact Ben Anderson - [ander6@stolaf.edu](mailto:ander6@stolaf.edu) Keegan Murray - [murray7@stolaf.edu](mailto:murray7@stolaf.edu)
paultimothymooney/DeepSeek-R1-Distill-Qwen-1.5B-Q8_0-GGUF
paultimothymooney
"2025-01-21T23:38:54Z"
21
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-01-21T23:38:39Z"
--- library_name: transformers base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B tags: - llama-cpp - gguf-my-repo --- # paultimothymooney/DeepSeek-R1-Distill-Qwen-1.5B-Q8_0-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo paultimothymooney/DeepSeek-R1-Distill-Qwen-1.5B-Q8_0-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo paultimothymooney/DeepSeek-R1-Distill-Qwen-1.5B-Q8_0-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo paultimothymooney/DeepSeek-R1-Distill-Qwen-1.5B-Q8_0-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo paultimothymooney/DeepSeek-R1-Distill-Qwen-1.5B-Q8_0-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q8_0.gguf -c 2048 ```
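Once `llama-server` is running as shown above, it can also be queried over HTTP. A minimal Python sketch against the OpenAI-compatible endpoint; the port and endpoint path assume a recent llama.cpp build started with default settings.

```python
import requests

# Assumes `llama-server` from above is listening on the default port 8080.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What is 7 * 6?"}],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```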
ray1031/esm2_t12_35M_UR50D-pretrained-evaluation
ray1031
"2023-12-22T05:37:29Z"
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-12-22T05:25:28Z"
Entry not found
joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF
joshnader
"2024-07-02T07:24:14Z"
5
0
null
[ "gguf", "nlp", "math", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:microsoft/rho-math-7b-interpreter-v0.1", "base_model:quantized:microsoft/rho-math-7b-interpreter-v0.1", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-02T07:23:43Z"
--- base_model: microsoft/rho-math-7b-interpreter-v0.1 language: - en license: mit pipeline_tag: text-generation tags: - nlp - math - llama-cpp - gguf-my-repo --- # joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF This model was converted to GGUF format from [`microsoft/rho-math-7b-interpreter-v0.1`](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF --hf-file rho-math-7b-interpreter-v0.1-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF --hf-file rho-math-7b-interpreter-v0.1-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF --hf-file rho-math-7b-interpreter-v0.1-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF --hf-file rho-math-7b-interpreter-v0.1-q8_0.gguf -c 2048 ```
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-26543668
fine-tuned
"2024-05-28T18:56:30Z"
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-26543668", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-05-28T18:55:59Z"
--- license: apache-2.0 datasets: - fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-26543668 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-26543668', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
MaziyarPanahi/calme-2.2-qwen2-7b
MaziyarPanahi
"2024-09-19T11:22:48Z"
30,254
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "qwen", "finetune", "chatml", "OpenHermes-2.5", "HelpSteer2", "Orca", "SlimOrca", "conversational", "en", "dataset:nvidia/HelpSteer2", "dataset:teknium/OpenHermes-2.5", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Open-Orca/SlimOrca", "base_model:Qwen/Qwen2-7B", "base_model:finetune:Qwen/Qwen2-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-27T08:57:21Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - chat - qwen - qwen2 - finetune - chatml - OpenHermes-2.5 - HelpSteer2 - Orca - SlimOrca base_model: Qwen/Qwen2-7B datasets: - nvidia/HelpSteer2 - teknium/OpenHermes-2.5 - microsoft/orca-math-word-problems-200k - Open-Orca/SlimOrca model_name: calme-2.2-qwen2-7b pipeline_tag: text-generation inference: false model_creator: MaziyarPanahi quantized_by: MaziyarPanahi model-index: - name: calme-2.2-qwen2-7b results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 35.97 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-qwen2-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 33.11 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-qwen2-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 19.34 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-qwen2-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 5.48 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-qwen2-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 13.28 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-qwen2-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 32.21 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-qwen2-7b name: Open LLM Leaderboard --- <img src="./qwen2-fine-tunes-maziyar-panahi.webp" alt="Qwen2 fine-tune" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # MaziyarPanahi/calme-2.2-qwen2-7b This is a fine-tuned version of the `Qwen/Qwen2-7B` model. It aims to improve the base model across all benchmarks. # โšก Quantized GGUF All GGUF models are available here: [MaziyarPanahi/calme-2.2-qwen2-7b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-7b-GGUF) # ๐Ÿ† [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.2-qwen2-7b) | Metric |Value| |-------------------|----:| |Avg. 
|23.23| |IFEval (0-Shot) |35.97| |BBH (3-Shot) |33.11| |MATH Lvl 5 (4-Shot)|19.34| |GPQA (0-shot) | 5.48| |MuSR (0-shot) |13.28| |MMLU-PRO (5-shot) |32.21| # Prompt Template This model uses the `ChatML` prompt template: ``` <|im_start|>system {System} <|im_end|> <|im_start|>user {User} <|im_end|> <|im_start|>assistant {Assistant} ``` # How to use ```python # Use a pipeline as a high-level helper from transformers import pipeline messages = [ {"role": "user", "content": "Who are you?"}, ] pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.2-qwen2-7b") pipe(messages) # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-qwen2-7b") model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-qwen2-7b") ```
RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf
RichardErkhov
"2024-07-02T19:22:36Z"
25
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
"2024-07-02T19:13:41Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyDolphin-2.8.2-1.1b-laser - GGUF - Model creator: https://huggingface.co/cognitivecomputations/ - Original model: https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyDolphin-2.8.2-1.1b-laser.Q2_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyDolphin-2.8.2-1.1b-laser.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyDolphin-2.8.2-1.1b-laser.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyDolphin-2.8.2-1.1b-laser.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyDolphin-2.8.2-1.1b-laser.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyDolphin-2.8.2-1.1b-laser.Q3_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q3_K.gguf) | Q3_K | 0.51GB | | [TinyDolphin-2.8.2-1.1b-laser.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyDolphin-2.8.2-1.1b-laser.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [TinyDolphin-2.8.2-1.1b-laser.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyDolphin-2.8.2-1.1b-laser.Q4_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyDolphin-2.8.2-1.1b-laser.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyDolphin-2.8.2-1.1b-laser.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyDolphin-2.8.2-1.1b-laser.Q4_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_K.gguf) | Q4_K | 0.62GB | | [TinyDolphin-2.8.2-1.1b-laser.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | 
[TinyDolphin-2.8.2-1.1b-laser.Q4_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q4_1.gguf) | Q4_1 | 0.65GB | | [TinyDolphin-2.8.2-1.1b-laser.Q5_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_0.gguf) | Q5_0 | 0.71GB | | [TinyDolphin-2.8.2-1.1b-laser.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [TinyDolphin-2.8.2-1.1b-laser.Q5_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_K.gguf) | Q5_K | 0.73GB | | [TinyDolphin-2.8.2-1.1b-laser.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [TinyDolphin-2.8.2-1.1b-laser.Q5_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q5_1.gguf) | Q5_1 | 0.77GB | | [TinyDolphin-2.8.2-1.1b-laser.Q6_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q6_K.gguf) | Q6_K | 0.84GB | | [TinyDolphin-2.8.2-1.1b-laser.Q8_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf/blob/main/TinyDolphin-2.8.2-1.1b-laser.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- license: apache-2.0 datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - teknium/openhermes language: - en --- # TinyDolphin-2.8.2-1.1b-laser ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/x8c5Ue58EAHRl1cp2Wwk1.webp) Join Our Discord! https://discord.gg/cognitivecomputations This is version 3 of a model trained on three RTX 3090s by Kearm on the new Dolphin 2.8 dataset by Eric Hartford https://erichartford.com/dolphin 🐬 This model uses our laser technique from https://github.com/cognitivecomputations/laserRMT to denoise the model! For this version we increased the epochs as well as refined the datasets used. ## Example Outputs TBD Support my efforts! https://ko-fi.com/kearm # Original Model Card Below # TinyLlama-1.1B https://github.com/jzhang38/TinyLlama The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01. We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into and used in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint. #### This Collection This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval | Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg | |-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----| | Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 | | TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 | | TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 | | TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 | | TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 | | TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 | | TinyLlama-1.1B-intermediate-step-1195k-2.5T | 2.5T | 58.96 | 34.40 | 58.72 | 31.91 | 56.78 | 63.21 | 73.07 | 53.86 | | TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
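A minimal sketch of running one of the quants listed above; the packages (`huggingface_hub`, `llama-cpp-python`) and the prompt are assumptions, not something the card prescribes:

```python
# Minimal sketch: fetch the "Q4_K_M" quant from the table above and run it.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/cognitivecomputations_-_TinyDolphin-2.8.2-1.1b-laser-gguf",
    filename="TinyDolphin-2.8.2-1.1b-laser.Q4_K_M.gguf",  # 0.62GB per the table
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Q: What does GGUF quantization trade away? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The same call works for any row of the table; smaller quants trade answer quality for a smaller memory footprint.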
mradermacher/Llama-3-70B-Synthia-v3.5-GGUF
mradermacher
"2024-05-27T04:49:46Z"
28
0
transformers
[ "transformers", "gguf", "en", "base_model:migtissera/Llama-3-70B-Synthia-v3.5", "base_model:quantized:migtissera/Llama-3-70B-Synthia-v3.5", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-05-27T00:36:49Z"
--- base_model: migtissera/Llama-3-70B-Synthia-v3.5 language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/migtissera/Llama-3-70B-Synthia-v3.5 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
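The Q6_K and Q8_0 rows above ship as two `.part` files. These parts are a plain byte split, so reassembly is simple in-order concatenation (the shell equivalent is `cat part1 part2 > whole`); a minimal sketch:

```python
# Minimal sketch: stitch a two-part GGUF back together by streaming
# concatenation, so the ~58 GB file never has to fit in RAM.
import shutil

parts = [
    "Llama-3-70B-Synthia-v3.5.Q6_K.gguf.part1of2",
    "Llama-3-70B-Synthia-v3.5.Q6_K.gguf.part2of2",
]
with open("Llama-3-70B-Synthia-v3.5.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```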
furrutiav/bert_qa_extractor_cockatiel_2022_ef_mixtral_v2_linear_weight_it_807
furrutiav
"2024-03-09T23:19:29Z"
91
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-03-09T23:19:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/BeisenAI-7B-Chat-GGUF
mradermacher
"2024-12-14T20:39:20Z"
56
0
transformers
[ "transformers", "gguf", "beisen", "train", "zh", "base_model:maxosai/BeisenAI-7B-Chat", "base_model:quantized:maxosai/BeisenAI-7B-Chat", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-14T20:09:08Z"
--- base_model: maxosai/BeisenAI-7B-Chat language: - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - beisen - train --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/maxosai/BeisenAI-7B-Chat <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q3_K_S.gguf) | Q3_K_S | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/BeisenAI-7B-Chat-GGUF/resolve/main/BeisenAI-7B-Chat.f16.gguf) | f16 | 15.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
lesso/f20bd0af-6478-4f25-9389-467721618ed2
lesso
"2025-02-07T12:30:17Z"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-0.5B", "base_model:adapter:Qwen/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
"2025-02-07T12:23:15Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: f20bd0af-6478-4f25-9389-467721618ed2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-0.5B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 8b6e906fbaabc6a1_train_data.json ds_type: json format: custom path: /workspace/input_data/8b6e906fbaabc6a1_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/f20bd0af-6478-4f25-9389-467721618ed2 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/8b6e906fbaabc6a1_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 91b08c31-c935-462e-b1e5-3c3de26cc56f wandb_project: new-02 wandb_run: your_name wandb_runid: 91b08c31-c935-462e-b1e5-3c3de26cc56f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f20bd0af-6478-4f25-9389-467721618ed2 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.5930 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8849 | 0.0006 | 1 | 1.7108 | | 1.4593 | 0.0284 | 50 | 1.6532 | | 1.3904 | 0.0569 | 100 | 1.6134 | | 1.2375 | 0.0853 | 150 | 1.5988 | | 1.3273 | 0.1138 | 200 | 1.5930 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1 A minimal sketch of attaching an adapter like this one to its base model with `peft` follows below.
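The repo ids in the sketch come from the card above and the prompt is a placeholder:

```python
# Minimal sketch: load the Qwen2.5-0.5B base, then apply the LoRA adapter.
# Assumes `pip install transformers peft`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
model = PeftModel.from_pretrained(base, "lesso/f20bd0af-6478-4f25-9389-467721618ed2")

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```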
whiteapple8222/e06c3bcf-5c44-42a2-8326-a1481dd0642d
whiteapple8222
"2025-02-06T10:39:54Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:sethuiyer/Medichat-Llama3-8B", "base_model:adapter:sethuiyer/Medichat-Llama3-8B", "license:other", "region:us" ]
null
"2025-02-06T10:31:28Z"
--- library_name: peft license: other base_model: sethuiyer/Medichat-Llama3-8B tags: - axolotl - generated_from_trainer model-index: - name: e06c3bcf-5c44-42a2-8326-a1481dd0642d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: sethuiyer/Medichat-Llama3-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a1b281c6d31a7336_train_data.json ds_type: json format: custom path: /workspace/input_data/a1b281c6d31a7336_train_data.json type: field_instruction: passage field_output: question format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: whiteapple8222/e06c3bcf-5c44-42a2-8326-a1481dd0642d hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 1358 micro_batch_size: 2 mlflow_experiment_name: /tmp/a1b281c6d31a7336_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 12e67bb2-1212-4109-94de-02222dc25293 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 12e67bb2-1212-4109-94de-02222dc25293 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e06c3bcf-5c44-42a2-8326-a1481dd0642d This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 94 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.2662 | 0.0107 | 1 | 2.6563 | | 1.6499 | 0.2574 | 24 | 1.5020 | | 1.6759 | 0.5147 | 48 | 1.4368 | | 1.5255 | 0.7721 | 72 | 1.3962 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
LHRuig/erickrom4
LHRuig
"2025-02-02T03:20:08Z"
9
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-02-02T03:19:48Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: suit output: url: images/suit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: erickrom4 --- # erickrom4 <Gallery /> ## Model description erickrom4 lora ## Trigger words You should use `erickrom4` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LHRuig/erickrom4/tree/main) them in the Files & versions tab.
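A minimal sketch of using this LoRA through `diffusers`; a CUDA GPU with enough memory for FLUX.1-dev is assumed, and per the card the trigger word must appear in the prompt:

```python
# Minimal sketch: FLUX.1-dev with the erickrom4 LoRA applied.
# Assumes a CUDA GPU and `pip install diffusers`.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("LHRuig/erickrom4")

# The trigger word `erickrom4` comes from the card above.
image = pipe("erickrom4 wearing a suit", num_inference_steps=28).images[0]
image.save("erickrom4_suit.png")
```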
mradermacher/EinBase-70B-v0.1-full-GGUF
mradermacher
"2024-05-06T06:01:22Z"
15
0
transformers
[ "transformers", "gguf", "en", "base_model:SF-Foundation/EinBase-70B-v0.1-full", "base_model:quantized:SF-Foundation/EinBase-70B-v0.1-full", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-03-24T04:14:10Z"
--- base_model: SF-Foundation/EinBase-70B-v0.1-full language: - en library_name: transformers quantized_by: mradermacher --- ## About static quants of https://huggingface.co/SF-Foundation/EinBase-70B-v0.1-full <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q2_K.gguf) | Q2_K | 25.9 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.IQ3_XS.gguf) | IQ3_XS | 28.7 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.IQ3_S.gguf) | IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q3_K_S.gguf) | Q3_K_S | 30.3 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.IQ3_M.gguf) | IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q3_K_M.gguf) | Q3_K_M | 33.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q3_K_L.gguf) | Q3_K_L | 36.6 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.IQ4_XS.gguf) | IQ4_XS | 37.6 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q4_0.gguf) | Q4_0 | 39.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.IQ4_NL.gguf) | IQ4_NL | 39.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q5_K_S.gguf) | Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q5_K_M.gguf) | Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q6_K.gguf.part2of2) | Q6_K | 57.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EinBase-70B-v0.1-full-GGUF/resolve/main/EinBase-70B-v0.1-full.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And 
here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Helsinki-NLP/opus-mt-pon-sv
Helsinki-NLP
"2023-08-16T12:02:52Z"
115
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pon", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-pon-sv * source languages: pon * target languages: sv * OPUS readme: [pon-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pon.sv | 26.4 | 0.436 |
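Like other OPUS-MT Marian checkpoints, this one runs through the standard `transformers` translation pipeline; a minimal sketch, where the input sentence is a placeholder greeting:

```python
# Minimal sketch: Pohnpeian -> Swedish through the translation pipeline.
from transformers import pipeline

pon_sv = pipeline("translation", model="Helsinki-NLP/opus-mt-pon-sv")
print(pon_sv("Kaselehlie!")[0]["translation_text"])
```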
PSW/bart-base-dialogsumgen-xsum-conv-samsum
PSW
"2022-08-26T05:05:13Z"
5
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-08-24T13:17:22Z"
# **PSW/bart-base-dialogsumgen-xsum-conv-samsum** 1. reverse-trained on dialogsum 2. generate from xsum 3. train on the synthetic data 4. fine-tune on samsum
team-nave/xlm-roberta-base-finetuned-panx-all
team-nave
"2022-11-17T17:36:59Z"
110
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-11-17T17:05:26Z"
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1583 - F1: 0.8563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 358 | 0.1748 | 0.8282 | | 0.2366 | 2.0 | 716 | 0.1580 | 0.8434 | | 0.2366 | 3.0 | 1074 | 0.1583 | 0.8563 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1 - Datasets 1.16.1 - Tokenizers 0.10.3
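PAN-X fine-tunes like this one are named-entity taggers; a minimal sketch of querying it, where `aggregation_strategy` merges word pieces back into whole entity spans and the example sentence is a placeholder:

```python
# Minimal sketch: multilingual NER with the fine-tune above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="team-nave/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Ada Lovelace described the Analytical Engine in London."))
```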
tchen175/llama3.1-8b-financial-news-sentiment
tchen175
"2024-11-12T08:23:44Z"
75
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-11-09T12:06:30Z"
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** tchen175 - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
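The repo tags mark this checkpoint as stored in bitsandbytes 4-bit, so a plain `from_pretrained` should pick up the saved quantization config; a minimal sketch assuming a CUDA GPU with `accelerate` and `bitsandbytes` installed (the prompt is a placeholder):

```python
# Minimal sketch: run the 4-bit checkpoint above for sentiment-style completion.
# Assumes a CUDA GPU and `pip install transformers accelerate bitsandbytes`.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tchen175/llama3.1-8b-financial-news-sentiment"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Headline: Company X beats quarterly earnings estimates. Sentiment:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0]))
```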
gokuls/whisper-base.en-speech-commands-h
gokuls
"2024-10-06T01:50:06Z"
8
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "dataset:speech_commands", "base_model:openai/whisper-base.en", "base_model:finetune:openai/whisper-base.en", "license:apache-2.0", "model-index", "region:us" ]
null
"2024-10-06T01:21:09Z"
--- license: apache-2.0 base_model: openai/whisper-base.en tags: - generated_from_trainer datasets: - speech_commands metrics: - accuracy model-index: - name: whisper-base.en-speech-commands-h results: - task: name: Audio Classification type: audio-classification dataset: name: speech_commands type: speech_commands config: v0.02 split: None args: v0.02 metrics: - name: Accuracy type: accuracy value: 0.7922661870503597 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base.en-speech-commands-h This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en) on the speech_commands dataset. It achieves the following results on the evaluation set: - Loss: 1.3313 - Accuracy: 0.7923 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3859 | 1.0 | 412 | 1.3474 | 0.7707 | | 0.2732 | 2.0 | 824 | 1.2471 | 0.7599 | | 0.2373 | 3.0 | 1236 | 1.2114 | 0.7729 | | 0.1694 | 4.0 | 1648 | 1.1600 | 0.7914 | | 0.1495 | 5.0 | 2060 | 1.1535 | 0.7914 | | 0.1931 | 6.0 | 2472 | 1.1446 | 0.7860 | | 0.1329 | 7.0 | 2884 | 1.3313 | 0.7923 | | 0.0731 | 8.0 | 3296 | 1.2812 | 0.7860 | | 0.0702 | 9.0 | 3708 | 1.2134 | 0.7873 | | 0.0828 | 10.0 | 4120 | 1.6292 | 0.7887 | | 0.08 | 11.0 | 4532 | 1.4677 | 0.7797 | | 0.0481 | 12.0 | 4944 | 1.3770 | 0.7909 | ### Framework versions - Transformers 4.43.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
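A minimal sketch of classifying a spoken keyword with this checkpoint; the pipeline decodes and resamples the audio itself, and the file path is a placeholder:

```python
# Minimal sketch: keyword classification with the fine-tune above.
from transformers import pipeline

keyword = pipeline(
    "audio-classification",
    model="gokuls/whisper-base.en-speech-commands-h",
)
print(keyword("my_recording.wav"))  # e.g. [{"label": "yes", "score": ...}, ...]
```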
Arekku21/vivit-b-16x2-kinetics400-finetuned-MSL_40_classes_14
Arekku21
"2023-12-18T22:43:57Z"
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "vivit", "video-classification", "endpoints_compatible", "region:us" ]
video-classification
"2023-12-18T17:47:12Z"
Entry not found
chauhoang/8ef11a2a-75d7-4dd0-a3b0-833cbd226075
chauhoang
"2025-01-13T22:15:07Z"
12
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-7b-it", "base_model:adapter:unsloth/gemma-7b-it", "license:apache-2.0", "region:us" ]
null
"2025-01-13T22:00:52Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: 8ef11a2a-75d7-4dd0-a3b0-833cbd226075 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-7b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 7d44a6e484287958_train_data.json ds_type: json format: custom path: /workspace/input_data/7d44a6e484287958_train_data.json type: field_input: question_en field_instruction: question field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: chauhoang/8ef11a2a-75d7-4dd0-a3b0-833cbd226075 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/7d44a6e484287958_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b35a3c79-9110-4937-b6f0-28ad104fc718 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b35a3c79-9110-4937-b6f0-28ad104fc718 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 8ef11a2a-75d7-4dd0-a3b0-833cbd226075 This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0010 | 1 | 2.7354 | | 1.8612 | 0.0098 | 10 | 1.4138 | | 0.8931 | 0.0196 | 20 | 0.8393 | | 0.716 | 0.0294 | 30 | 0.7320 | | 0.6916 | 0.0392 | 40 | 0.6936 | | 0.6612 | 0.0490 | 50 | 0.6873 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Panis8362/Llama3_2_Finetome
Panis8362
"2025-02-09T04:39:18Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-02-09T04:39:10Z"
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Panis8362 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Aleereza/tinnyllama_prtokenizer_sum
Aleereza
"2024-01-05T18:01:54Z"
16
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-01-05T18:01:08Z"
Entry not found
miugod/bibert-iwslt14ende
miugod
"2023-03-30T14:03:27Z"
106
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-03-30T13:55:20Z"
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: bibert-ende results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bibert-ende This model is a fine-tuned version of [jhu-clsp/bibert-ende](https://huggingface.co/jhu-clsp/bibert-ende) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8713 - Accuracy: 0.6310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.10.1 - Tokenizers 0.13.2
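The checkpoint is tagged `fill-mask` with a RoBERTa-style tokenizer, so its masked-LM head can be probed directly; a minimal sketch that reads the mask token off the tokenizer rather than hard-coding it:

```python
# Minimal sketch: top completions from the masked-LM head of the model above.
from transformers import pipeline

fill = pipeline("fill-mask", model="miugod/bibert-iwslt14ende")
mask = fill.tokenizer.mask_token  # avoids guessing <mask> vs [MASK]
for candidate in fill(f"Berlin is the capital of {mask}.")[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))
```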
distily/distily_verify_update9
distily
"2024-09-04T16:46:37Z"
5
0
Distily
[ "Distily", "tensorboard", "safetensors", "gpt2", "generated_from_trainer", "dataset:wikimedia/wikipedia", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:creativeml-openrail-m", "region:us" ]
null
"2024-09-04T16:43:39Z"
--- base_model: gpt2 datasets: - wikimedia/wikipedia library_name: Distily license: creativeml-openrail-m tags: - generated_from_trainer - Distily base_model_relation: finetune model-index: - name: distily_verify_update9 results: [] --- # Summary Distilled with [Distily](https://github.com/lapp0/distily) library using teacher model [gpt2](https://huggingface.co/gpt2) on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia). <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. # Model description More information needed # Intended uses & limitations More information needed --> # Model Architecture: - **Architecture**: `GPT2LMHeadModel` - **Total Parameters**: 81,912,576 - **Data Type (dtype)**: torch.bfloat16 - **Model Size**: 0.16 GB <details> <summary>Student Model Details</summary> ``` GPT2LMHeadModel( (transformer): GPT2Model( (wte): Embedding(50257, 768) (wpe): Embedding(1024, 768) (drop): Dropout(p=0.1, inplace=False) (h): ModuleList( (0-5): 6 x GPT2Block( (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (attn): GPT2FlashAttention2( (c_attn): Conv1D() (c_proj): Conv1D() (attn_dropout): Dropout(p=0.1, inplace=False) (resid_dropout): Dropout(p=0.1, inplace=False) ) (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (mlp): GPT2MLP( (c_fc): Conv1D() (c_proj): Conv1D() (act): NewGELUActivation() (dropout): Dropout(p=0.1, inplace=False) ) ) ) (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True) ) (lm_head): Linear(in_features=768, out_features=50257, bias=False) ) ``` </details> <br/> # Resource Usage - Max Train VRAM Use: 15.7096 GB - Available VRAM: 23.6497 GB - GPUs: - 1x NVIDIA GeForce RTX 4090 - CPUs: 28 - CPU Memory: 62.6429 GB - CPU Memory Bandwidth: 700 GB/s # Distillation (Teacher -> Student) Architecture Difference: - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel` - **Total Parameters**: 124,439,808 -> 81,912,576 - **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16 - **Model Size**: 0.24 GB -> 0.16 GB <details> <summary>Module Diff Details</summary> ```diff --- teacher model modules +++ student model modules @@ -4,7 +4,7 @@ (wpe): Embedding(1024, 768) (drop): Dropout(p=0.1, inplace=False) (h): ModuleList( - (0-11): 12 x GPT2Block( + (0-5): 6 x GPT2Block( (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (attn): GPT2FlashAttention2( (c_attn): Conv1D() ``` </details> <br/> # Train Dataset Trained on 3,221,668 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. 
- Num Samples: `3,960` - Subset: `20231101.en` - Split: `train` # Training Objective ``` DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5.0, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=orthogonal)) ``` # Hyperparameters The following hyperparameters were used during training: <details> <summary>Expand</summary> - learning_rate: `0.0002` - train_batch_size: `16` - eval_batch_size: `8` - seed: `42` - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08` - lr_scheduler_type: `polynomial` - num_epochs: `1.0` - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5.0, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=orthogonal))` - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fd8ec2cb7c0>` - student_model_name_or_path: `None` - student_config_name_or_path: `distilbert/distilgpt2` - student_model_config: `None` - reinitialize_weights: `None` - copy_teacher_modules: `[('lm_head', False)]` - student_model_as_bitnet: `False` - teacher_model_name_or_path: `gpt2` - teacher_load_in_8bit: `False` - teacher_load_in_4bit: `False` - dataset_uri: `wikimedia/wikipedia` - dataset_subset: `20231101.en` - dataset_split: `train` - dataset_column_name: `text` - dataset_sample_size: `4000` - dataset_test_size: `0.01` - gradient_accumulation_steps: `1` - weight_decay: `0.0` - max_grad_norm: `1.0` - warmup_ratio: `0.0` - warmup_steps: `0` - gradient_checkpointing: `True` </details> <br/> # Framework Versions - Distily 0.5.0 - Transformers 4.44.2 - Pytorch 2.3.0 - Datasets 2.21.0
farzadab/test-uv-pipeline
farzadab
"2024-07-10T21:49:39Z"
16
0
transformers
[ "transformers", "safetensors", "ultravox", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
"2024-07-09T23:06:09Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
spneshaei/distilbert-base-uncased-imdb
spneshaei
"2023-06-14T09:15:30Z"
104
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-06-14T09:14:44Z"
Entry not found
crystantine/Vellfire2024
crystantine
"2024-10-01T05:54:18Z"
17
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2024-09-29T20:21:32Z"
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora base_model: black-forest-labs/FLUX.1-dev instance_prompt: VELLF1RE40 license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # VELLF1RE40 Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `VELLF1RE40` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/crystantine/Vellfire2024/tree/main) them in the Files & versions tab.
shaoleen00/detr-finetuned-cef-v1
shaoleen00
"2024-08-26T20:11:25Z"
5
0
transformers
[ "transformers", "safetensors", "table-transformer", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
"2024-08-26T20:11:15Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ys7yoo/binary-inference_roberta-base_lr1e-05_wd1e-03_ep10_plant_fold4
ys7yoo
"2023-11-25T18:03:39Z"
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-11-25T17:57:20Z"
Entry not found
xukp20/Llama-3-8B-Instruct-SPPO-score-Iter3_gp_8b-table-0.002
xukp20
"2024-09-29T02:40:23Z"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-28T14:46:45Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
areegtarek/idefics-9b-split1-v1-split1.2-v1
areegtarek
"2024-03-27T21:52:12Z"
63
0
transformers
[ "transformers", "safetensors", "idefics", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
"2024-03-27T21:49:24Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
ivangrapher/22dc9ad1-0d9a-4065-9dd2-1457b5e3040c
ivangrapher
"2025-01-20T20:53:59Z"
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:adapter:NousResearch/Meta-Llama-3-8B", "license:other", "region:us" ]
null
"2025-01-20T20:48:11Z"
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 22dc9ad1-0d9a-4065-9dd2-1457b5e3040c
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d78b4b1a225214cf_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d78b4b1a225214cf_train_data.json
  type:
    field_input: choices
    field_instruction: question
    field_output: messages
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ivangrapher/22dc9ad1-0d9a-4065-9dd2-1457b5e3040c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/d78b4b1a225214cf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
special_tokens:
  pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e87876a2-cd2e-4969-9e61-d7434bbaf1de
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e87876a2-cd2e-4969-9e61-d7434bbaf1de
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# 22dc9ad1-0d9a-4065-9dd2-1457b5e3040c

This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0784

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0011 | 1    | 3.0148          |
| 2.9469        | 0.0054 | 5    | 2.8384          |
| 1.9054        | 0.0108 | 10   | 1.8545          |
| 1.2705        | 0.0162 | 15   | 1.2298          |
| 1.1792        | 0.0216 | 20   | 1.1131          |
| 1.0892        | 0.0270 | 25   | 1.0844          |
| 1.0961        | 0.0324 | 30   | 1.0784          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
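### Usage sketch

A minimal inference sketch, assuming the adapter loads with PEFT on top of the base checkpoint named above; the dtype, device placement, and prompt are illustrative defaults, not settings taken from this training run.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this repo's LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Meta-Llama-3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
model = PeftModel.from_pretrained(base, "ivangrapher/22dc9ad1-0d9a-4065-9dd2-1457b5e3040c")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```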
zelk12/MT4-Gen5-GMA-gemma-2-9B
zelk12
"2024-12-28T19:49:37Z"
7
1
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT4-Gen5-GP-gemma-2-MTMMT3g4-9B", "base_model:merge:zelk12/MT4-Gen5-GP-gemma-2-MTMMT3g4-9B", "base_model:zelk12/MT4-Gen5-MA-gemma-2-MT4g2MT3g4-9B", "base_model:merge:zelk12/MT4-Gen5-MA-gemma-2-MT4g2MT3g4-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-28T19:43:14Z"
---
base_model:
- zelk12/MT4-Gen5-GP-gemma-2-MTMMT3g4-9B
- zelk12/MT4-Gen5-MA-gemma-2-MT4g2MT3g4-9B
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [zelk12/MT4-Gen5-GP-gemma-2-MTMMT3g4-9B](https://huggingface.co/zelk12/MT4-Gen5-GP-gemma-2-MTMMT3g4-9B)
* [zelk12/MT4-Gen5-MA-gemma-2-MT4g2MT3g4-9B](https://huggingface.co/zelk12/MT4-Gen5-MA-gemma-2-MT4g2MT3g4-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
- model: zelk12/MT4-Gen5-GP-gemma-2-MTMMT3g4-9B
- model: zelk12/MT4-Gen5-MA-gemma-2-MT4g2MT3g4-9B
merge_method: slerp
base_model: zelk12/MT4-Gen5-GP-gemma-2-MTMMT3g4-9B
dtype: bfloat16
parameters:
  t: 0.25
```
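### Reproducing the merge (sketch)

A minimal sketch of how a config like the one above is typically executed with mergekit's CLI; the config filename and output directory below are placeholders, not paths from this repo.

```bash
pip install mergekit
# Run the SLERP merge described by the YAML above; ./merged is an arbitrary output directory.
mergekit-yaml config.yaml ./merged --cuda
```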
sentence-transformers/bert-base-nli-mean-tokens
sentence-transformers
"2024-11-05T15:50:46Z"
1216444
36
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "jax", "rust", "onnx", "safetensors", "openvino", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---

**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**

# sentence-transformers/bert-base-nli-mean-tokens

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/bert-base-nli-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-mean-tokens')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-base-nli-mean-tokens)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```
Heber77/paraquantizar
Heber77
"2023-07-31T17:53:26Z"
106
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-07-31T17:34:54Z"
Entry not found
mini1013/master_cate_bc0
mini1013
"2025-01-23T19:15:16Z"
168
0
setfit
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us" ]
text-classification
"2025-01-23T19:14:52Z"
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ์˜จ๊ฐ€์กฑ ๋ณด๋“œ๊ฒŒ์ž„ ์˜์–ด ์›Œ๋“œ์˜จ๋”์ŠคํŠธ๋ฆฌํŠธ ์œ ์•„๊ต์œก๊ธฐ๊ด€ ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๋ณด๋“œ๊ฒŒ์ž„
- text: ์œ ์•„ ์‚ฌ๊ณ ๋ ฅ๋ฐœ๋‹ฌ ์ปค๋„ฅํŠธ 4๋ชฉ๊ฒŒ์ž„ ๋ผ์ง€ ๊ฐ€์กฑ๊ฒŒ์ž„ ๋‘๋‡Œ๊ฒŒ์ž„ ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๋ณด๋“œ๊ฒŒ์ž„
- text: ์‹ ๋น„์•„ํŒŒํŠธ ํ•œ์ž ๊ท€์‹  1-20 ๊ถŒ ์–ด๋ฆฐ์ด ์‹ ๋น„์•„ํŒŒํŠธ ํ•œ์ž ๊ท€์‹  5 ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๊ต๊ตฌ > ๊ธฐํƒ€๊ต๊ตฌ
- text: ์–ด๋ฆฐ์ด ํ•œ๊ธ€ ์Œ์ ˆ, ์ˆซ์ž,์•ŒํŒŒ๋ฒณ,๊ตฌ๊ตฌ๋‹จ ์Šคํ‹ฐ์ปค ์•ŒํŒŒ๋ฒณ ์†Œ๋ฌธ์ž ์†Œ ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๊ต๊ตฌ > ๊ธฐํƒ€๊ต๊ตฌ
- text: ์„ค๋ฏผ์„์˜ ์„ธ๊ณ„์‚ฌ ๋Œ€๋ชจํ—˜ 1-17๊ถŒ ์ดˆ๋“ฑ ์–ด๋ฆฐ์ด ์—ญ์‚ฌ ์„ค๋ฏผ์„์˜ ์„ธ๊ณ„์‚ฌ ๋Œ€๋ชจํ—˜ 18 ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๊ต๊ตฌ > ๊ธฐํƒ€๊ต๊ตฌ
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: mini1013/master_domain
model-index:
- name: SetFit with mini1013/master_domain
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 1.0
      name: Accuracy
---

# SetFit with mini1013/master_domain

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 2.0   | <ul><li>'๋ณด์•ฝ๊ฒŒ์ž„ ์ด๊ฒŒ ์™œ ์˜ค๋ฆฌ๋„ˆ๊ตฌ๋ฆฌ, 1๊ฐœ TS 687769 ํฌ๋ ˆ์ŠคํŠธ ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๋ณด๋“œ๊ฒŒ์ž„'</li><li>'๋ชจ๋‹๊ธ€๋กœ๋ฆฌ 15000 ์žฅ๊ธฐ ์ž์„ํƒ€์ž… ์žฅ๊ธฐ์•Œ ์žฅ๊ธฐํŒ ํด๋”ํ˜• ์ ‘์ดํ˜• ๋ณด๋“œ๊ฒŒ์ž„ 77 ์ฒด์Šค ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๋ณด๋“œ๊ฒŒ์ž„'</li><li>'๊ณ ํ”ผ์‰ฌ ํ•œ๊ธ€3 ์‰ฌ์šด ๋ฐ›์นจ ๊ธ€์ž ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๋ณด๋“œ๊ฒŒ์ž„'</li></ul> |
| 0.0   | <ul><li>'๋ฆฝํ”„๋กœ๊ทธ ์„ ํƒ ๊ตฌ๋งค (ํ’€์„ธํŠธ ๊ตฌ๋งค์‹œ ๋ฆฝํ”„๋กœ๊ทธ ์•ŒํŒŒ๋ฒณ ์นด๋“œ 27์ข… ) 2์ง‘ (DVD7+CD7+๋Œ€๋ณธ6๊ถŒ) ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ๋น„๋””์˜ค/DVD'</li><li>'๊ณ ๊ตํ† ๋ก ,ํŒ ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ๋น„๋””์˜ค/DVD'</li><li>'์‹œ๊ฐ„์˜์ˆฒ ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ๋น„๋””์˜ค/DVD'</li></ul> |
| 1.0   | <ul><li>'์ดˆ๋“ฑ ๋น„์ฆˆ ๋ณด์„์‹ญ์ž์ˆ˜ ์•„ํฌ๋ฆด ํ‚ค๋ง ๊ฐ€๋ฐฉ๊ณ ๋ฆฌ ๋งŒ๋“ค๊ธฐ 10์ธ ์ง€๋Šฅ๋ฐœ๋‹ฌ ์†์ž‘์—… ํ˜‘์—… ์–ด๋ฆฐ์ด์ง‘ ์ƒํ’ˆ ์„ ํƒ_์„ ์ธ์žฅ ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๊ต๊ตฌ > ๋ฏธ์ˆ ๊ต๊ตฌ'</li><li>'๋””์ฆˆ๋‹ˆ ์Œ์•…์ด๋ก  1-12๊ถŒ ์œ ์•„ ์–ด๋ฆฐ์ด ํ”ผ์•„๋…ธ ์Œ์•… ๊ต์žฌ ์ฑ… ๋””์ฆˆ๋‹ˆ ์Œ์•… ์ด๋ก  6 ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๊ต๊ตฌ > ๊ธฐํƒ€๊ต๊ตฌ'</li><li>'๋ฌด์ง€๊ฐœ๋กค / ํŽ ํŠธ๊ต๊ตฌ ๊ฒ€์ • ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๊ต๊ตฌ > ์˜์–ด๊ต๊ตฌ'</li></ul> |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 1.0      |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_bc0")
# Run inference
preds = model("์˜จ๊ฐ€์กฑ ๋ณด๋“œ๊ฒŒ์ž„ ์˜์–ด ์›Œ๋“œ์˜จ๋”์ŠคํŠธ๋ฆฌํŠธ ์œ ์•„๊ต์œก๊ธฐ๊ด€ ์ถœ์‚ฐ/์œก์•„ > ๊ต๊ตฌ > ํ•™์Šต๋ณด๋“œ๊ฒŒ์ž„")
```

<!-- ### Downstream Use

*List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 7   | 14.4143 | 34  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0.0   | 70                    |
| 1.0   | 70                    |
| 2.0   | 70                    |

### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 50
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch   | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0238  | 1    | 0.4941        | -               |
| 1.1905  | 50   | 0.465         | -               |
| 2.3810  | 100  | 0.0367        | -               |
| 3.5714  | 150  | 0.0           | -               |
| 4.7619  | 200  | 0.0           | -               |
| 5.9524  | 250  | 0.0           | -               |
| 7.1429  | 300  | 0.0           | -               |
| 8.3333  | 350  | 0.0           | -               |
| 9.5238  | 400  | 0.0           | -               |
| 10.7143 | 450  | 0.0           | -               |
| 11.9048 | 500  | 0.0           | -               |
| 13.0952 | 550  | 0.0           | -               |
| 14.2857 | 600  | 0.0           | -               |
| 15.4762 | 650  | 0.0           | -               |
| 16.6667 | 700  | 0.0           | -               |
| 17.8571 | 750  | 0.0           | -               |
| 19.0476 | 800  | 0.0           | -               |
| 20.2381 | 850  | 0.0           | -               |
| 21.4286 | 900  | 0.0           | -               |
| 22.6190 | 950  | 0.0           | -               |
| 23.8095 | 1000 | 0.0           | -               |
| 25.0    | 1050 | 0.0           | -               |
| 26.1905 | 1100 | 0.0           | -               |
| 27.3810 | 1150 | 0.0           | -               |
| 28.5714 | 1200 | 0.0           | -               |
| 29.7619 | 1250 | 0.0           | -               |

### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
october-sd/pegasus-xsum-finetuned-en-sum-2
october-sd
"2024-03-14T02:16:10Z"
97
0
transformers
[ "transformers", "tensorboard", "safetensors", "pegasus", "text2text-generation", "summarization", "generated_from_trainer", "base_model:october-sd/pegasus-xsum-finetuned-en-sum", "base_model:finetune:october-sd/pegasus-xsum-finetuned-en-sum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2024-03-13T19:00:56Z"
---
base_model: october-sd/pegasus-xsum-finetuned-en-sum
tags:
- summarization
- generated_from_trainer
model-index:
- name: pegasus-xsum-finetuned-en-sum-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pegasus-xsum-finetuned-en-sum-2

This model is a fine-tuned version of [october-sd/pegasus-xsum-finetuned-en-sum](https://huggingface.co/october-sd/pegasus-xsum-finetuned-en-sum) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4990
- eval_runtime: 242.6814
- eval_samples_per_second: 20.603
- eval_steps_per_second: 2.575
- epoch: 2.0
- step: 1015

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.2
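### Usage sketch

A minimal summarization example, assuming the standard transformers pipeline API; the input text and generation lengths below are illustrative and were not part of this card.

```python
from transformers import pipeline

# Load this checkpoint as a standard summarization pipeline.
summarizer = pipeline("summarization", model="october-sd/pegasus-xsum-finetuned-en-sum-2")
article = "Replace this with the long English text you want summarized..."
# max_length / min_length are illustrative defaults, not tuned values.
print(summarizer(article, max_length=64, min_length=8, do_sample=False)[0]["summary_text"])
```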
prxy5604/07f4e296-6c7a-4f44-a8fe-802ea4743427
prxy5604
"2025-01-24T18:11:34Z"
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:llama3", "region:us" ]
null
"2025-01-24T17:24:21Z"
---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 07f4e296-6c7a-4f44-a8fe-802ea4743427
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 1a9b97378fcbebe1_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/1a9b97378fcbebe1_train_data.json
  type:
    field_input: captions
    field_instruction: raw_sentences
    field_output: raw_anns
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/07f4e296-6c7a-4f44-a8fe-802ea4743427
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/1a9b97378fcbebe1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 24718b6c-9560-45d6-8f4f-62368e0b0d98
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 24718b6c-9560-45d6-8f4f-62368e0b0d98
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# 07f4e296-6c7a-4f44-a8fe-802ea4743427

This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3565

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6272        | 0.0007 | 1    | 2.0187          |
| 1.3778        | 0.0339 | 50   | 1.4875          |
| 1.3254        | 0.0679 | 100  | 1.4757          |
| 1.3392        | 0.1018 | 150  | 1.4507          |
| 1.3384        | 0.1358 | 200  | 1.3565          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
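### Merging the adapter (sketch)

Since this repo holds a PEFT LoRA adapter rather than full weights, one common follow-up is folding it into the base model for standalone serving. A minimal sketch, assuming PEFT's merge-and-unload path; the dtype and output path are illustrative, not taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and attach this repo's adapter.
base = AutoModelForCausalLM.from_pretrained(
    "MLP-KTLim/llama-3-Korean-Bllossom-8B", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "prxy5604/07f4e296-6c7a-4f44-a8fe-802ea4743427")

merged = model.merge_and_unload()  # bake the LoRA deltas into the base weights
merged.save_pretrained("./merged-model")  # placeholder output path
```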
BoyaWu10/bunny-stablelm-2-siglip-lora
BoyaWu10
"2024-02-13T13:58:36Z"
9
2
transformers
[ "transformers", "safetensors", "bunny-stablelm", "text-generation", "custom_code", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2024-02-06T02:04:47Z"
---
inference: false
license: apache-2.0
---

# Model Card

Bunny is a family of lightweight multimodal models. Bunny-stablelm-2-siglip-lora leverages StableLM-2 as the language model backbone and SigLIP as the vision encoder. It is pretrained on LAION-2M and finetuned on Bunny-695K.

More details about this model can be found in [GitHub](https://github.com/BAAI-DCAI/Bunny).

# License

This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses. The content of this project itself is licensed under the Apache license 2.0.
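### Loading sketch

The card defers usage details to GitHub; the following is a rough sketch of the loading path only, assuming the standard `trust_remote_code` mechanism for custom-architecture checkpoints. Whether this LoRA repo loads standalone or must be combined with its base checkpoint follows the instructions in the Bunny repo; prompt format and image preprocessing are model-specific and are not shown here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: Bunny ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    "BoyaWu10/bunny-stablelm-2-siglip-lora", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "BoyaWu10/bunny-stablelm-2-siglip-lora", trust_remote_code=True
)
```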
nm-testing/TinyLlama-1.1B-Chat-v1.0-gsm8k-pruned.2of4-tensor_wts_tensor_act_int8-BitM
nm-testing
"2024-12-17T03:28:32Z"
6
0
null
[ "safetensors", "llama", "8-bit", "compressed-tensors", "region:us" ]
null
"2024-12-17T03:28:09Z"
Entry not found
THUDM/glm-edge-1.5b-chat-gguf
THUDM
"2024-11-28T08:56:06Z"
252
1
null
[ "gguf", "glm", "edge", "text-generation", "zh", "en", "license:other", "region:us", "conversational" ]
text-generation
"2024-11-27T13:25:26Z"
---
license: other
license_name: glm-4
license_link: LICENSE
language:
- zh
- en
pipeline_tag: text-generation
tags:
- glm
- edge
inference: false
---

# Glm-Edge-Chat-1.5B-GGUF

To read this page in Chinese, click [here](README_zh.md).

## Inference with llama.cpp

### Installation

The code for adapting this model is actively being integrated into the official `llama.cpp`. You can test it using the following adapted version:

```bash
git clone https://github.com/piDack/llama.cpp -b support_glm_edge_model
cmake -B build -DGGML_CUDA=ON  # Or enable other acceleration hardware
cmake --build build -- -j
```

### Inference

After installation, you can start the GLM-Edge Chat model with the following command:

```shell
llama-cli -m <path>/model.gguf -p "<|user|>\nhi<|assistant|>\n" -ngl 999
```

In the command-line interface, you can interact with the model by entering your requests, and the model will provide the corresponding responses.

## License

The usage of this model's weights is subject to the terms outlined in the [LICENSE](LICENSE).
notlober/gpt2custom
notlober
"2024-08-28T13:05:44Z"
105
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-08-28T13:05:21Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Helsinki-NLP/opus-mt-en-sq
Helsinki-NLP
"2023-08-16T11:31:12Z"
1677
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "sq", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-sq

* source languages: en
* target languages: sq
* OPUS readme: [en-sq](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sq/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.eval.txt)

## Benchmarks

| testset       | BLEU | chr-F |
|:--------------|:-----|:------|
| Tatoeba.en.sq | 46.5 | 0.669 |
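## Usage sketch

A minimal English-to-Albanian translation example, assuming the standard Marian interface in transformers; the input sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-sq"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a batch of English sentences into Albanian.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```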