This is a Gemma model uploaded with the KerasHub library. It can be run on the JAX, TensorFlow, or PyTorch backend and is intended for the CausalLM (causal language modeling) task.
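A minimal usage sketch, assuming the standard KerasHub loading path for Hub-hosted presets (the `hf://` URI scheme and `GemmaCausalLM.from_preset` are stock KerasHub APIs; the prompt and generation length are illustrative):

```python
# Choose a backend before Keras is imported: "jax", "tensorflow", or "torch".
import os
os.environ["KERAS_BACKEND"] = "jax"

import keras_hub

# Load the fine-tuned checkpoint directly from the Hugging Face Hub.
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://harishnair04/gemma_instruct_medtr_2b"
)

# Generate up to `max_length` total tokens (prompt included).
print(gemma_lm.generate("Summarize the patient's chief complaint:", max_length=128))
```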

Model config:

  • name: gemma_backbone
  • trainable: True
  • vocabulary_size: 256000
  • num_layers: 18
  • num_query_heads: 8
  • num_key_value_heads: 1
  • hidden_dim: 2048
  • intermediate_dim: 32768
  • head_dim: 256
  • layer_norm_epsilon: 1e-06
  • dropout: 0
  • query_head_dim_normalize: True
  • use_post_ffw_norm: False
  • use_post_attention_norm: False
  • final_logit_soft_cap: None
  • attention_logit_soft_cap: None
  • sliding_window_size: 4096
  • use_sliding_window_attention: False
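For reference, this configuration maps one-to-one onto the `keras_hub.models.GemmaBackbone` constructor. A sketch of building an architecturally equivalent backbone from the values above (weights randomly initialized; use `from_preset` as shown earlier to get the fine-tuned weights):

```python
import keras_hub

# Backbone with the exact architecture listed above; note the multi-query
# attention setup (8 query heads sharing 1 key/value head).
backbone = keras_hub.models.GemmaBackbone(
    vocabulary_size=256000,
    num_layers=18,
    num_query_heads=8,
    num_key_value_heads=1,
    hidden_dim=2048,
    intermediate_dim=32768,
    head_dim=256,
    layer_norm_epsilon=1e-6,
    dropout=0.0,
    query_head_dim_normalize=True,
    use_post_ffw_norm=False,
    use_post_attention_norm=False,
    final_logit_soft_cap=None,
    attention_logit_soft_cap=None,
    use_sliding_window_attention=False,
    sliding_window_size=4096,
)
```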
Base model: google/gemma-2-2b