---
library_name: keras-nlp
pipeline_tag: text-generation
---

Hey, I am CosmoGemma 👋 I can answer cosmology questions from astro-ph.CO research articles.

This is a Gemma_2b_en model fine-tuned on 3.5k QA pairs generated from Cosmology and Nongalactic Astrophysics articles (arXiv astro-ph.CO)
published between 2018 and 2022, and tested on 1k QA pairs generated from 2023 articles, scoring over 75% accuracy.
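
The card does not spell out the training recipe. The sketch below shows one plausible way to fine-tune Gemma_2b_en on such instruction/response pairs with KerasNLP LoRA fine-tuning; the `qa_pairs` list, LoRA rank, and hyperparameters are illustrative placeholders, not the settings actually used.

```python
import keras
import keras_nlp

# Illustrative only: the real training data and hyperparameters are not
# documented in this card.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Parameter-efficient fine-tuning: attach LoRA adapters to the backbone.
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.preprocessor.sequence_length = 512

# Placeholder (question, answer) pairs; the actual data came from
# astro-ph.CO articles.
qa_pairs = [
    ("What does sigma_8 measure?",
     "The amplitude of matter fluctuations on 8 Mpc/h scales."),
]

# Format each QA pair with the same template used at inference time.
template = "Instruction:\n{instruction}\n\nResponse:\n{response}"
data = [template.format(instruction=q, response=a) for q, a in qa_pairs]

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(data, epochs=1, batch_size=1)
```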


To generate an answer for a given question using this model, please use:

```python
import keras
import keras_nlp

# Load the fine-tuned model directly from the Hugging Face Hub.
gemma_lm = keras_nlp.models.CausalLM.from_preset("hf://sultan-hassan/CosmoGemma_2b_en")

# Prompt template matching the instruction/response format used for fine-tuning.
template = "Instruction:\n{instruction}\n\nResponse:\n{response}"

Question = "write your question here"

prompt = template.format(
    instruction=Question,
    response="",
)

# Generate, then keep only the text after the "Response:\n" marker.
out = gemma_lm.generate(prompt, max_length=1024)
ind = out.index("Response") + len("Response") + 2
print("Question:", Question)
print("Answer:", out[ind:])
```




This is a [`Gemma` model](https://keras.io/api/keras_nlp/models/gemma) uploaded using the KerasNLP library, and it can be used with the JAX, TensorFlow, and PyTorch backends.
This model is intended for the `CausalLM` task.
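
Keras 3 picks the backend from the `KERAS_BACKEND` environment variable, which has to be set before `keras` (or `keras_nlp`) is imported:

```python
import os

# Select the compute backend before importing keras / keras_nlp:
# "jax", "tensorflow", or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras_nlp

gemma_lm = keras_nlp.models.CausalLM.from_preset("hf://sultan-hassan/CosmoGemma_2b_en")
```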

Model config:
* **name:** gemma_backbone
* **trainable:** True
* **vocabulary_size:** 256000
* **num_layers:** 18
* **num_query_heads:** 8
* **num_key_value_heads:** 1
* **hidden_dim:** 2048
* **intermediate_dim:** 32768
* **head_dim:** 256
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0
* **query_head_dim_normalize:** True
* **use_post_ffw_norm:** False
* **use_post_attention_norm:** False
* **final_logit_soft_cap:** None
* **attention_logit_soft_cap:** None
* **sliding_window_size:** 4096
* **use_sliding_window_attention:** False
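
If needed, these values can be cross-checked on the loaded model; `get_config()` is a standard Keras method on the backbone, though the exact keys may differ between KerasNLP versions:

```python
# Inspect the configuration of the loaded backbone.
config = gemma_lm.backbone.get_config()
print(config["num_layers"], config["hidden_dim"], config["head_dim"])

# Full layer-by-layer summary of the causal LM.
gemma_lm.summary()
```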

This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.