---
license: apache-2.0
tags:
- legal
- chemistry
- medical
- text-generation-inference
- art
- finance
- uncensored
pipeline_tag: text-generation
---
# Nidum-Limitless-Gemma-2B LLM
Welcome to the repository for Nidum-Limitless-Gemma-2B, an advanced language model that provides unrestricted and versatile responses across a wide range of topics. Unlike conventional models, Nidum-Limitless-Gemma-2B is designed to handle any type of question and deliver comprehensive answers without content restrictions.
## Key Features:
- **Unrestricted Responses:** Addresses any query with detailed answers, drawing on a broad spectrum of information and insights.
- **Versatility:** Engages with a diverse range of topics, from complex scientific questions to casual conversation.
- **Advanced Understanding:** Leverages a vast knowledge base to deliver contextually relevant and accurate outputs across various domains.
- **Customizability:** Adapts to specific user needs and preferences for different types of interactions; a sketch follows the How to Use example below.
## Use Cases:
- Open-Ended Q&A
- Creative Writing and Ideation
- Research Assistance
- Educational and Informational Queries
- Casual Conversations and Entertainment
## How to Use:
To get started with Nidum-Limitless-Gemma-2B, you can use the following sample code for testing:
```python
import torch
from transformers import pipeline

# Load the model as a chat-style text-generation pipeline in bfloat16.
pipe = pipeline(
    "text-generation",
    model="nidum/Nidum-Limitless-Gemma-2B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

# Chat-formatted input: a list of {"role", "content"} messages.
messages = [
    {"role": "user", "content": "who are you"},
]

outputs = pipe(messages, max_new_tokens=256)

# The pipeline returns the full conversation; the last message is the reply.
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
```
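The **Customizability** feature listed above can be made concrete through per-request sampling parameters and in-prompt instructions. Below is a minimal sketch that reuses the `pipe` object from the block above; the instruction text and parameter values are illustrative, and folding the instructions into the user turn is an assumption based on Gemma-family chat templates, which typically do not accept a separate `system` role.

```python
# Reuses the `pipe` object created in the block above.
# Behavioural instructions are folded into the user turn, since
# Gemma-style chat templates typically reject a "system" role (assumption).
instructions = "You are a concise research assistant. Answer in three bullet points."
messages = [
    {"role": "user", "content": f"{instructions}\n\nExplain what quantization does to a language model."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
    do_sample=True,   # enable sampling instead of greedy decoding
    temperature=0.7,  # lower = more deterministic, higher = more varied
    top_p=0.9,        # nucleus sampling: keep the top 90% probability mass
)
print(outputs[0]["generated_text"][-1]["content"].strip())
```

Lowering `temperature` pushes the model toward its most likely completion; raising it (or `top_p`) makes outputs more varied.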
## Release Date:
Nidum-Limitless-Gemma-2B is now officially available. Explore its capabilities and experience the freedom of unrestricted responses.
## Contributing:
We welcome contributions to enhance the model or expand its functionalities. Details on how to contribute will be provided in upcoming updates.
## Quantized Model Versions
To accommodate different hardware configurations and performance needs, Nidum-Limitless-Gemma-2B-GGUF is available in multiple quantized versions:
| Model Version | Description |
|------------------------------------------------|-------------------------------------------------------|
| **Nidum-Limitless-Gemma-2B-Q2_K.gguf** | Optimized for minimal memory usage with lower precision. Suitable for resource-constrained environments. |
| **Nidum-Limitless-Gemma-2B-Q4_K_M.gguf** | Balances performance and precision, offering faster inference with moderate memory usage. |
| **Nidum-Limitless-Gemma-2B-Q8_0.gguf** | Provides higher precision with increased memory usage, suitable for tasks requiring more accuracy. |
| **Nidum-Limitless-Gemma-2B-F16.gguf** | Full 16-bit floating point precision for maximum accuracy, ideal for high-end GPUs. |
The quantized versions are available here: https://huggingface.co/nidum/Nidum-Limitless-Gemma-2B-GGUF
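As an illustration of running one of these quantized files locally, here is a minimal sketch using the third-party llama-cpp-python package. The package is not part of this repository, and the chosen filename simply matches the Q4_K_M entry in the table above.

```python
# Minimal sketch: running a quantized GGUF build with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface-hub` has been run.
from llama_cpp import Llama

# Downloads the Q4_K_M file from the GGUF repository on first use.
llm = Llama.from_pretrained(
    repo_id="nidum/Nidum-Limitless-Gemma-2B-GGUF",
    filename="Nidum-Limitless-Gemma-2B-Q4_K_M.gguf",
    n_ctx=2048,  # context window size
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "who are you"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Smaller quantizations such as Q2_K trade accuracy for memory; swap the `filename` argument to pick the variant that fits your hardware.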
## Contact:
For any inquiries or further information, please contact us at **[email protected]**.
---
Dive into limitless possibilities with Nidum-Limitless-Gemma-2B!
Special thanks to @cognitivecomputations for inspiring us and for scouting the best datasets we could round up to make a rockstar model for you.
---