---
license: apache-2.0
---

# Model Card for PLLaMa-7b-base

This model is optimized for plant science by continued pretraining of LLaMa-2 on over 1.5 million plant science academic articles.

- **Developed by:** UCSB
- **Language(s) (NLP):** [More Information Needed]
- **License:** apache-2.0
- **Finetuned from model:** LLaMa-2
- **Paper:** [PLLaMa: An Open-source Large Language Model for Plant Science](https://arxiv.org/pdf/2401.01600.pdf)
- **Demo:** [More Information Needed]
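The exact continued-pretraining recipe is described in the paper linked above. Purely as a rough sketch of what such a run can look like with the Hugging Face `Trainer`, the snippet below trains a causal LM on a plain-text corpus; the corpus file name, sequence length, and hyperparameters are illustrative placeholders, not the values used to train PLLaMa.

```python
from transformers import (
    LlamaTokenizer,
    LlamaForCausalLM,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from datasets import load_dataset

tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# LLaMa tokenizers ship without a pad token; reuse EOS so batches can be padded.
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical corpus: one plant-science article per line in a text file.
corpus = load_dataset("text", data_files={"train": "plant_articles.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pllama-continued",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        fp16=True,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```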

## How to Get Started with the Model

```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

# Load the tokenizer and the model in fp16 on GPU.
tokenizer = LlamaTokenizer.from_pretrained("Xianjun/PLLaMa-7b-base")
model = LlamaForCausalLM.from_pretrained("Xianjun/PLLaMa-7b-base").half().to("cuda")

# Tokenize the instruction and sample a completion.
instruction = "How to ..."
batch = tokenizer(instruction, return_tensors="pt", add_special_tokens=False).to("cuda")
with torch.no_grad():
    output = model.generate(**batch, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(output[0], skip_special_tokens=True)
```
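Note that for decoder-only models `generate` returns the prompt tokens followed by the continuation, so `response` above contains the instruction as well. To keep only the model's answer, one option is to slice off the prompt tokens before decoding:

```python
# Drop the prompt tokens and decode only the newly generated ones.
new_tokens = output[0][batch["input_ids"].shape[1]:]
completion = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(completion)
```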

## Citation

```bibtex
@inproceedings{Yang2024PLLaMaAO,
  title={PLLaMa: An Open-source Large Language Model for Plant Science},
  author={Xianjun Yang and Junfeng Gao and Wenxin Xue and Erik Alexandersson},
  year={2024},
  url={https://api.semanticscholar.org/CorpusID:266741610}
}
```