Instruction | Response |
---|---|
I am getting the following error although the model has been included. OSError: Incorrect path_or_model_id: '/kaggle/input/gemma/transformers/7b-it/1'. Please provide either the path to a local folder or the repo_id of a model on the Hub. Can anyone assist? Thanks | Hi Patrick, Could you help me understand how you got this message? Was it in a Kaggle notebook? Did you add the model to the notebook (on the right hand side of the notebook)? |
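As a quick, hedged aside (not part of the original exchange): when a model is attached to a Kaggle notebook, its files appear under `/kaggle/input/...`, so listing the path before calling `from_pretrained()` confirms whether the attachment actually happened. The path below is the one from the post.

```python
import os

# Minimal sketch: a missing directory means the model was never attached
# via the notebook's right-hand-side "Add Input" panel.
model_path = "/kaggle/input/gemma/transformers/7b-it/1"
if os.path.isdir(model_path):
    print("Model attached:", os.listdir(model_path))
else:
    print("Path not found; attach the model in the notebook sidebar first.")
```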
So far I have only found notebooks that run the 2b models. I have tried to get it running but with no success yet. These are the issues I've been having: This error while installing the pip packages: tensorflow 2.15.0 requires keras<2.16,>=2.15.0, but you have keras 3.0.5 which is incompatible. However, keras >=3 is required by Gemma. This other error while loading the model: GemmaTokenizer requires the `tensorflow-text` package. Please install with 'pip install tensorflow-text', which seems related. Also, installing tensorflow-text does not fix it. Has anyone managed to solve these dependencies? | Those issues don't sound related to the 7b model; this notebook succeeds at getting the packages installed (with 2b): https://kaggle.com/code/nilaychauhan/get-started-with-gemma-using-kerasnlp But installing packages has nothing to do with the actual model. |
The context window is 8k. | That is definitely a long context window that I need! |
Hi, that's awesome -- do you mind making the notebook public? It looks like it may still be private. | Oh, actually I just noticed that it's because the URL has a trailing ) on it, so people clicking will be led to the wrong link. If you update your post to remove that, it should work fine. :) |
If you scroll down to the "Model Instance" section you can read documentation for each of the framework implementations. Here you will find links to GitHub repos. Hope that helps (in addition to the comments folks have already made with pointers). | Thanks. I now have access to the source code repo. |
How can we do knowledge distillation from a Gemma model to RoBERTa? | I'd like to know too. |
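Neither post answers this, so here is a generic, hedged sketch (not a Gemma-specific recipe): response-based distillation minimizes the KL divergence between the teacher's and the student's softened outputs. Because Gemma and RoBERTa use different tokenizers and vocabularies, matching raw next-token logits is not directly possible; in practice you would distill on a shared output space, such as the logits of a downstream classification head.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KL loss from classic knowledge distillation."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # temperature**2 keeps gradient magnitudes comparable across temperatures
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: 4 examples, 3 task classes shared by teacher and student heads.
loss = distillation_loss(torch.randn(4, 3), torch.randn(4, 3))
```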
In what form is the software provided to users under its open-source model? Are there any Git repositories available to the public? | Is this what you are asking for? https://github.com/google/gemma_pytorch |
Here is the keras repo: https://github.com/keras-team/keras-nlp | Thanks, this is also very useful. |
Thanks for your interest in LIT! The LIT team would love to hear more about your intended use and any additional features you would find helpful as your notebook evolves beyond a port of our official Colab. Please don't hesitate to reach out to me here or on GitHub! | Thanks, I will reach out to you if I need anything. |
Oh, actually I just noticed that it's because the URL has a trailing ) on it, so people clicking will be led to the wrong link. If you update your post to remove that, it should work fine. :) | Thanks. My notebook was actually public 12 hours ago, but for personal reasons I had to make it private. You can find the notebook in my GitHub repo; the link is here: https://github.com/PranithChowdary/Experiement-with-Gemma/blob/main/lit_gemma.ipynb |
This issue comes up; what should I do? I switched to Colab because Kaggle gave me the error that I sent earlier. | They are wrong; it's not working on my side, and I have TF 2.15. |
Hello everyone, I ran the 7B model on the Transformers framework. My first run was OK and the result was printed, but it got stuck after printing the output, so I ended the session and restarted it. When I ran it again, I got this issue: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 105.0/105.0 MB 15.9 MB/s eta 0:00:00 Installing collected packages: bitsandbytes Successfully installed bitsandbytes-0.42.0 ValueError Traceback (most recent call last) Cell In[4], line 14 12 # Load the model 13 quantization_config = BitsAndBytesConfig(load_in_4bit=True) ---> 14 tokenizer = AutoTokenizer.from_pretrained("/kaggle/input/gemma/transformers/7b-it/1") 15 model = AutoModelForCausalLM.from_pretrained("/kaggle/input/gemma/transformers/7b-it/1", quantization_config=quantization_config) 16 # Use the model File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:811, in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 808 if os.path.isdir(pretrained_model_name_or_path): 809 tokenizer_class.register_for_auto_class() 810 return tokenizer_class.from_pretrained( --> 811 pretrained_model_name_or_path, *inputs, trust_remote_code=trust_remote_code, **kwargs 812 ) 813 elif config_tokenizer_class is not None: 814 tokenizer_class = None ValueError: Tokenizer class GemmaTokenizer does not exist or is not currently imported. Does anybody have the same issue, and can you tell me what to do next? | Try `pip install -U transformers`; that should get this fixed. |
Gemma is not a multimodal model. It's text-in, text-out and so you can't use it with KerasCV. But you can use it with KerasNLP. For multimodal use cases you can try the Gemini models: https://ai.google.dev/ | Oh, I see. Thank you for your answer. |
Apologies for the confusion here. You have indeed consented, but while logged out (or not using your Kaggle account). You still need to consent again while logged in. Can you remove your USER_LICENSE_CONSENT cookie (or clear all your kaggle.com cookies)? Next, consent to the terms while logged in using the "Verify via Kaggle Account" button. | This issue comes up; what should I do? I switched to Colab because Kaggle gave me the error that I sent earlier. |
Sorry, my mistake: I see this notebook already has the model attached. I was able to reproduce the issue and I'll try to find out what's going on. | Any news? Does it work? |
In what form is the software provided to users under its open-source model? Are there any Git repositories available to the public? | Here is the keras repo: https://github.com/keras-team/keras-nlp |
Where can I find tutorials or resources to learn more about using Gemma on Kaggle? | Looking forward to building cool stuff with Gemma! |
I suppose that is because you passed the raw transformers object into llama_index's Settings.llm, which is not a type the llama_index framework defines. To fix this, you may need to pass a model wrapped by llama_index. For example: from llama_index.llms import HuggingFaceLLM from llama_index.core import Settings Settings.llm = HuggingFaceLLM(model_name="google/gemma-2b-it", tokenizer_name="google/gemma-2b-it", device_map="auto") | Thank you for your response. You are correct. My code is now functioning without any issues using Settings.llm = HuggingFaceLLM(model_name=..., tokenizer_name=..., device_map=...). |
I made a notebook on how to use LIT to get more out of Google's Gemma model. This codelab demonstrates how to use sequence salience, an interpretability technique, to analyze different prompt engineering approaches. Link to the notebook; support my work if you like it. | Thanks for your interest in LIT! The LIT team would love to hear more about your intended use and any additional features you would find helpful as your notebook evolves beyond a port of our official Colab. Please don't hesitate to reach out to me here or on GitHub! |
So far I have only found notebooks that run the 2b models. I have tried to get it running but with no success yet. These are the issues I've been having: This error while installing the pip packages: tensorflow 2.15.0 requires keras<2.16,>=2.15.0, but you have keras 3.0.5 which is incompatible. However, keras >=3 is required by Gemma. This other error while loading the model: GemmaTokenizer requires the `tensorflow-text` package. Please install with 'pip install tensorflow-text', which seems related. Also, installing tensorflow-text does not fix it. Has anyone managed to solve these dependencies? | I had the same problem trying to load the 2b model: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. tensorflow 2.15.0 requires keras<2.16,>=2.15.0, but you have keras 3.0.5 which is incompatible. tensorflowjs 4.16.0 requires packaging~=23.1, but you have packaging 21.3 which is incompatible. |
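A hedged note on the root cause (my reading, not stated in the thread): the preinstalled TensorFlow 2.15 pins Keras 2.x, while Gemma's KerasNLP implementation needs Keras 3, so pip's resolver will always complain. The workaround that generally worked at the time was to upgrade `keras-nlp` first and then force Keras 3 on top, ignoring the resolver warning as long as the final `keras` version is 3.x:

```python
# Run in notebook cells; the resolver warning about tensorflow 2.15 can be
# ignored provided keras ends up at >= 3.
!pip install -q -U keras-nlp
!pip install -q -U "keras>=3"

import keras
print(keras.__version__)  # expect 3.x
```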
Where can I find tutorials or resources to learn more about using Gemma on Kaggle? | Hi, you can take a look at some of the pinned notebooks on the Code tab for a good starting point! |
When I run query_engine = index.as_query_engine() using Gemma as the LLM in LlamaIndex, I encounter the following error: AttributeError: 'GemmaForCausalLM' object has no attribute 'metadata'. Can you please help me with this? Here is the whole code: from transformers import AutoTokenizer, AutoModelForCausalLM; tokenizer = AutoTokenizer.from_pretrained(...); model = AutoModelForCausalLM.from_pretrained(..., device_map=...); from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader; Settings.tokenizer = tokenizer; Settings.llm = model; Settings.embed_model = ...; documents = SimpleDirectoryReader(...).load_data(); index = VectorStoreIndex.from_documents(documents); query_engine = index.as_query_engine() | I suppose that is because you passed the raw transformers object into llama_index's Settings.llm, which is not a type the llama_index framework defines. To fix this, you may need to pass a model wrapped by llama_index. For example: from llama_index.llms import HuggingFaceLLM from llama_index.core import Settings Settings.llm = HuggingFaceLLM(model_name="google/gemma-2b-it", tokenizer_name="google/gemma-2b-it", device_map="auto") |
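To make the reply concrete, here is a hedged, self-contained sketch of the corrected setup. The import paths follow the reply and may differ across llama_index versions; the data directory and embedding-model choice are placeholders, since the original post elided both.

```python
from llama_index.llms import HuggingFaceLLM
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader

# Wrap the model in llama_index's HuggingFaceLLM; a raw transformers model
# lacks the .metadata attribute the query engine expects.
Settings.llm = HuggingFaceLLM(
    model_name="google/gemma-2b-it",
    tokenizer_name="google/gemma-2b-it",
    device_map="auto",
)
# The original post also set Settings.embed_model (value elided there);
# "local" is one placeholder that resolves to a local embedding model.
Settings.embed_model = "local"

documents = SimpleDirectoryReader("data").load_data()  # "data" is a placeholder dir
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents cover?"))
```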
Hello everyone, I ran the 7B model on the Transformers framework. My first run was OK and the result was printed, but it got stuck after printing the output, so I ended the session and restarted it. When I ran it again, I got this issue: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 105.0/105.0 MB 15.9 MB/s eta 0:00:00 Installing collected packages: bitsandbytes Successfully installed bitsandbytes-0.42.0 ValueError Traceback (most recent call last) Cell In[4], line 14 12 # Load the model 13 quantization_config = BitsAndBytesConfig(load_in_4bit=True) ---> 14 tokenizer = AutoTokenizer.from_pretrained("/kaggle/input/gemma/transformers/7b-it/1") 15 model = AutoModelForCausalLM.from_pretrained("/kaggle/input/gemma/transformers/7b-it/1", quantization_config=quantization_config) 16 # Use the model File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:811, in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 808 if os.path.isdir(pretrained_model_name_or_path): 809 tokenizer_class.register_for_auto_class() 810 return tokenizer_class.from_pretrained( --> 811 pretrained_model_name_or_path, *inputs, trust_remote_code=trust_remote_code, **kwargs 812 ) 813 elif config_tokenizer_class is not None: 814 tokenizer_class = None ValueError: Tokenizer class GemmaTokenizer does not exist or is not currently imported. Does anybody have the same issue, and can you tell me what to do next? | At this point in time you need to use the latest version of the transformers library if you want to work with the transformers implementations of the Gemma models. You can find code snippets with instructions on how to install and upgrade the relevant packages in the "Usage" section on the model card. |
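Putting the two replies together, a hedged sketch of the full flow, assuming the Kaggle input path from the post is attached (upgrade `transformers` in its own cell first, then restart the kernel so the new version is picked up):

```python
# In a separate cell first: !pip install -U transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_path = "/kaggle/input/gemma/transformers/7b-it/1"  # path from the post
quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, quantization_config=quantization_config
)

inputs = tokenizer("What is the best thing about Kaggle?",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```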
I want to run Gemma 2B with ollama locally on my machine. What is the structure of the Modelfile? `FROM ./gemma-2b.gguf TEMPLATE """[INST] {{ if .System }}<<SYS>>{{ .System }}<</SYS>> {{ end }}{{ .Prompt }} [/INST] """ SYSTEM """""" PARAMETER stop [INST] PARAMETER stop [/INST] PARAMETER stop <<SYS>> PARAMETER stop <</SYS>>` I tried this, but it just gives me nonsense: either no answer, or it writes 100 all the time. Any ideas? | How do we download the quantized gemma-2b.gguf? I only see the 34 GB one on Hugging Face 🤔. |
I want to run Gemma 2B with ollama locally on my machine. What is the structure of the Modelfile? `FROM ./gemma-2b.gguf TEMPLATE """[INST] {{ if .System }}<<SYS>>{{ .System }}<</SYS>> {{ end }}{{ .Prompt }} [/INST] """ SYSTEM """""" PARAMETER stop [INST] PARAMETER stop [/INST] PARAMETER stop <<SYS>> PARAMETER stop <</SYS>>` I tried this, but it just gives me nonsense: either no answer, or it writes 100 all the time. Any ideas? | You could adapt the example from https://www.kaggle.com/models/google/gemma/frameworks/pyTorch. For example, I would use: `FROM ./gemma-2b-it.gguf TEMPLATE """<start_of_turn>user {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}<end_of_turn> <start_of_turn>model """ SYSTEM """""" PARAMETER stop <end_of_turn>` By the way, make sure that you use the instruction-tuned model (e.g. gemma-2b-it.gguf). |
https://www.kaggle.com/code/loneguy/fine-tune-gemma-models-in-keras-using-lora | Sorry, my mistake: I see this notebook already has the model attached. I was able to reproduce the issue and I'll try to find out what's going on. |
I did this already. | Apologies for the confusion here. You have indeed consented, but while logged out (or not using your Kaggle account). You still need to consent again while logged in. Can you remove your USER_LICENSE_CONSENT cookie (or clear all your kaggle.com cookies)? Next, consent to the terms while logged in using the "Verify via Kaggle Account" button. |
In what form is the software provided to users under its open-source model? Are there any Git repositories available to the public? | Here is the C++ one: https://github.com/google/gemma.cpp |
I just want to know: is it possible to caption images with this? Any ideas? | For that use case, I would recommend checking out the Gemini APIs, which have multimodal capabilities. You can get started here: https://www.kaggle.com/models/google/gemini-api |
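For completeness, a hedged sketch of image captioning with the Gemini API via the `google-generativeai` package; the model name and API surface reflect early 2024 and may have changed since, and the key and image path are placeholders.

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")             # placeholder key
model = genai.GenerativeModel("gemini-pro-vision")  # multimodal model, early-2024 name
image = PIL.Image.open("photo.jpg")                 # placeholder image path

response = model.generate_content(["Write a short caption for this image.", image])
print(response.text)
```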
https://www.kaggle.com/code/loneguy/fine-tune-gemma-models-in-keras-using-lora | You haven't setup your API key or environment variables per these steps: https://ai.google.dev/gemma/docs/lora_tuning#configure_your_api_key https://ai.google.dev/gemma/docs/lora_tuning#set_environment_variables |
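For reference, a minimal sketch of what those two documentation steps amount to; the credential values are placeholders, and KerasNLP reads these variables when downloading the Gemma preset from Kaggle:

```python
import os

# Generate an API token at kaggle.com -> Settings -> API, then expose it
# before keras_nlp tries to fetch the Gemma weights.
os.environ["KAGGLE_USERNAME"] = "your_kaggle_username"  # placeholder
os.environ["KAGGLE_KEY"] = "your_kaggle_api_key"        # placeholder
```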
In what form is the software provided to users under its open-source model? Are there any Git repositories available to the public? | If you scroll down to the "Model Instance" section you can read documentation for each of the framework implementations. Here you will find links to GitHub repos. Hope that helps (in addition to the comments folks have already made with pointers). |
Where can I find tutorials or resources to learn more about using Gemma on Kaggle? | Hello, I didn't see anything about using Gemma with KerasCV. Is it in progress? |
I made a notebook on how to use LIT to get more out of Google's Gemma model. This codelab demonstrates how to use sequence salience, an interpretability technique, to analyze different prompt engineering approaches. Link to the notebook; support my work if you like it. | Hi, that's awesome -- do you mind making the notebook public? It looks like it may still be private. |
module 'keras_nlp' has no attribute 'models'. This is the error I'm facing; I installed keras and keras_nlp properly and it's still like this. The code is below: import keras import keras_nlp import numpy as np gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_7b_en") gemma_lm.generate("What is Overfitting and how to mitigate Overfitting?", max_length=250) | What's your Python version? Try creating a conda environment with python=3.9 and try again. |
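Beyond the Python version, a hedged diagnostic sketch: in my experience this AttributeError usually means an outdated `keras_nlp` or a local file shadowing the package, so printing the versions and the import location narrows it down quickly.

```python
import keras
import keras_nlp

print(keras.__version__)      # Gemma's Keras implementation needs Keras 3+
print(keras_nlp.__version__)  # GemmaCausalLM ships in recent keras_nlp releases
print(keras_nlp.__file__)     # if this points into your working directory,
                              # a local keras_nlp.py is shadowing the real package
```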
Where can I find tutorials or resources to learn more about using Gemma on Kaggle? | Great question. What are its strengths and weaknesses compared to GPT-4? I will check YouTube and this space as well. Looking forward to learning more. |
Try `pip install -U transformers`; that should get this fixed. | Thank you Kiran, I got this fixed. But I still have some warnings, even though my output looks great. I'll post them in a comment below. |
Hello everyone, I ran the 7B model on the Transformers framework. My first run was OK and the result was printed, but it got stuck after printing the output, so I ended the session and restarted it. When I ran it again, I got this issue: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 105.0/105.0 MB 15.9 MB/s eta 0:00:00 Installing collected packages: bitsandbytes Successfully installed bitsandbytes-0.42.0 ValueError Traceback (most recent call last) Cell In[4], line 14 12 # Load the model 13 quantization_config = BitsAndBytesConfig(load_in_4bit=True) ---> 14 tokenizer = AutoTokenizer.from_pretrained("/kaggle/input/gemma/transformers/7b-it/1") 15 model = AutoModelForCausalLM.from_pretrained("/kaggle/input/gemma/transformers/7b-it/1", quantization_config=quantization_config) 16 # Use the model File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:811, in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 808 if os.path.isdir(pretrained_model_name_or_path): 809 tokenizer_class.register_for_auto_class() 810 return tokenizer_class.from_pretrained( --> 811 pretrained_model_name_or_path, *inputs, trust_remote_code=trust_remote_code, **kwargs 812 ) 813 elif config_tokenizer_class is not None: 814 tokenizer_class = None ValueError: Tokenizer class GemmaTokenizer does not exist or is not currently imported. Does anybody have the same issue, and can you tell me what to do next? | **These are the last lines of the output while running; we can see some warnings:** `low_cpu_mem_usage` was None, now set to True since model is quantized. Loading checkpoint shards: …% [02:…, …s/it] /opt/conda/lib/python3.10/site-packages/bitsandbytes/nn/modules.py: UserWarning: Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed. warnings.warn() …-02-… 08:…:… E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc: Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered …-02-… 08:…:… E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc: Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered …-02-… 08:…:… E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc: Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered <bos>What is the best thing about Kaggle? - The ability to learn from others - The ability to share your own work - The ability to compete in competitions - The ability to connect with data science professionals **Answer:** The ability to learn from others. Kaggle is a platform where data science enthusiasts can learn from each other. It has a large community of data science experts who share their knowledge and insights through notebooks, tutorials, and competitions.<eos> |
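The Linear4bit warning in that log has a straightforward remedy (a hedged sketch, not from the thread): pass a float16 compute dtype in the quantization config so it matches the float16 inputs, which also speeds up inference.

```python
import torch
from transformers import BitsAndBytesConfig

# Silences "Input type into Linear4bit is torch.float16, but
# bnb_4bit_compute_dtype=torch.float32 (default)" and avoids the slow path.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```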
I'm assuming this is happening in a notebook? Can you share the notebook where you are seeing this error? | I am following this: https://ai.google.dev/gemma/docs/lora_tuning and the error pops up when loading the model. |
I'm assuming this is happening in a notebook? Can you share the notebook where you are seeing this error? | It's happening in both Colab and Kaggle, even though I have access to Gemma. |
Apologies for the confusion here. You have indeed consented, but while logged out (or not using your Kaggle account). You still need to consent again while logged in. Can you remove your USER_LICENSE_CONSENT cookie (or clear all your kaggle.com cookies)? Next, consent to the terms while logged in using the "Verify via Kaggle Account" button. | What is the problem? |
If not a notebook, can you share exactly what you are doing so we can try and reproduce? | https://www.kaggle.com/code/loneguy/fine-tune-gemma-models-in-keras-using-lora |
Apologies for the confusion here. You have indeed consented, but while logged out (or not using your Kaggle account). You still need to consent again while logged in. Can you remove your USER_LICENSE_CONSENT cookie (or clear all your kaggle.com cookies)? Next, consent to the terms while logged in using the "Verify via Kaggle Account" button. | Worked, thank you a lot! :) |
At this point in time you need to use the latest version of the transformers library if you want to work with the transformers implementations of the Gemma models. You can find code snippets with instructions on how to install and upgrade the relevant packages in the "Usage" section on the model card. | Thank you Paul, it's done. I posted some other issues below. |
Hey everyone, I have already accepted everything, but this error shows up again and again. What can I do? | Are you sure that the email you used to request access is the same one you use here on Kaggle? And if it is, did you receive any success notification (a green dot with "Verified") after being redirected to the model page? |
Hi Philippe. I'm having the same issue on Kaggle Notebook. From the logs, the main conflict I see is that at some point TensorFlow 2.15 is installed, and it requires Keras < 2.16 and >= 2.15.0, but the installed version is 3.0.5, which is otherwise required for running Gemma. I haven't been able to solve this issue yet, although I suspect it is related to the installation of keras-nlp. | Did you check this notebook? https://www.kaggle.com/code/nilaychauhan/get-started-with-gemma-using-kerasnlp |