# CosmoGemma_2b_en
---
library_name: keras-nlp
pipeline_tag: text-generation
---

Hey, I am CosmoGemma 👋 I can answer cosmology questions from arXiv astro-ph.CO research articles.

This is Gemma_2b_en fine-tuned on 3.5k QA pairs generated from Cosmology and Nongalactic Astrophysics articles (arXiv astro-ph.CO) published between 2018 and 2022, and evaluated on 1k QA pairs generated from 2023 articles, on which it scores over 75% accuracy.

To generate an answer for a given question using this model, please use:

```python
import keras
import keras_nlp

gemma_lm = keras_nlp.models.CausalLM.from_preset("hf://sultan-hassan/CosmoGemma_2b_en")
template = "Instruction:\n{instruction}\n\nResponse:\n{response}"

Question = "write your question here"

prompt = template.format(
    instruction=Question,
    response="",
)
out = gemma_lm.generate(prompt, max_length=1024)
ind = out.index('Response') + len('Response') + 2
print("Question:", Question)
print("Answer:", out[ind:])
```

This is a Gemma model uploaded using the KerasNLP library, and it can be used with the JAX, TensorFlow, and PyTorch backends. The model is intended for the CausalLM task.

Model config:

  • name: gemma_backbone
  • trainable: True
  • vocabulary_size: 256000
  • num_layers: 18
  • num_query_heads: 8
  • num_key_value_heads: 1
  • hidden_dim: 2048
  • intermediate_dim: 32768
  • head_dim: 256
  • layer_norm_epsilon: 1e-06
  • dropout: 0
  • query_head_dim_normalize: True
  • use_post_ffw_norm: False
  • use_post_attention_norm: False
  • final_logit_soft_cap: None
  • attention_logit_soft_cap: None
  • sliding_window_size: 4096
  • use_sliding_window_attention: False
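A quick arithmetic check on these dimensions (a standalone sketch, not part of the model card): the 8 query heads of 256 dimensions each tile the 2048-dimensional hidden state exactly, and with `num_key_value_heads: 1` all query heads share a single key/value head (multi-query attention):

```python
# Relevant entries copied from the config above.
config = {
    "num_query_heads": 8,
    "num_key_value_heads": 1,
    "hidden_dim": 2048,
    "head_dim": 256,
}

# Query heads tile the hidden dimension exactly: 8 * 256 == 2048.
assert config["num_query_heads"] * config["head_dim"] == config["hidden_dim"]

# Multi-query attention: all query heads attend through one shared KV head.
queries_per_kv = config["num_query_heads"] // config["num_key_value_heads"]
print(queries_per_kv)
```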

This model card has been generated automatically and should be completed by the model author. See Model Cards documentation for more information.