---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- bunnycore/QandoraExp-7B
- trollek/Qwen2.5-7B-CySecButler-v0.1
base_model:
- bunnycore/QandoraExp-7B
- trollek/Qwen2.5-7B-CySecButler-v0.1
library_name: transformers
---

# Qwen2.5-7B-Qandora-CySec

ZeroXClem/Qwen2.5-7B-Qandora-CySec is a merge of two Qwen2.5-7B fine-tunes built with the mergekit framework, combining the general question-answering strength of QandoraExp-7B with the cybersecurity focus of CySecButler-v0.1. It aims to handle both general Q&A tasks and specialized cybersecurity topics.

## 🚀 Model Components

- **[bunnycore/QandoraExp-7B](https://huggingface.co/bunnycore/QandoraExp-7B)**: strong general Q&A capabilities
- **[trollek/Qwen2.5-7B-CySecButler-v0.1](https://huggingface.co/trollek/Qwen2.5-7B-CySecButler-v0.1)**: specialized cybersecurity knowledge

## 🧩 Merge Configuration

The models are merged with spherical linear interpolation (SLERP):

```yaml
slices:
  - sources:
      - model: bunnycore/QandoraExp-7B
        layer_range: [0, 28]
      - model: trollek/Qwen2.5-7B-CySecButler-v0.1
        layer_range: [0, 28]
merge_method: slerp
base_model: bunnycore/QandoraExp-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

### Key Parameters

- **Self-attention (`self_attn`)**: gradient of interpolation weights applied to the self-attention sub-layers
- **MLP (`mlp`)**: gradient of interpolation weights applied to the feed-forward (MLP) sub-layers
- **Global weight (`t.value`)**: 0.5, an equal contribution from both models for all remaining tensors
- **Data type**: `bfloat16` for memory efficiency while retaining float32's dynamic range

An illustrative sketch of how these gradient lists expand into per-layer weights appears at the end of this card.

## 🎯 Applications

1. General Q&A tasks
2. Cybersecurity analysis
3. Hybrid scenarios combining general knowledge with cybersecurity context

## 🛠 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ZeroXClem/Qwen2.5-7B-Qandora-CySec"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "What are the fundamentals of Python programming?"
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# max_new_tokens bounds only the generated continuation, not the prompt
output = model.generate(input_ids, max_new_tokens=100)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```

A chat-template variant of this example is sketched at the end of this card.

## 📜 License

This model inherits the licenses of its base models. Refer to bunnycore/QandoraExp-7B and trollek/Qwen2.5-7B-CySecButler-v0.1 for usage terms.

## 🙏 Acknowledgements

- bunnycore (QandoraExp-7B)
- trollek (Qwen2.5-7B-CySecButler-v0.1)
- the mergekit project

## 📚 Citation

If you use this model, please cite this repository and the original base models.

## 💡 Tags

merge, mergekit, lazymergekit, bunnycore/QandoraExp-7B, trollek/Qwen2.5-7B-CySecButler-v0.1, cybersecurity, Q&A
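
## 📐 Gradient Expansion (illustrative sketch)

For intuition about the `t` gradients in the merge configuration, the sketch below shows one plausible way a short value list such as `[0, 0.5, 0.3, 0.7, 1]` expands into a per-layer interpolation weight via piecewise-linear interpolation over the 28 layers. This is an illustration only, not mergekit's internal code; `t = 0` keeps the base model (QandoraExp-7B) and `t = 1` takes CySecButler-v0.1.

```python
import numpy as np

NUM_LAYERS = 28  # matches layer_range: [0, 28] in the config above


def expand_gradient(values, num_layers=NUM_LAYERS):
    """Spread a short list of t values over all layers by linear interpolation.

    Illustrative only: this approximates how gradient lists behave, it is not
    mergekit's actual implementation.
    """
    anchors = np.linspace(0, num_layers - 1, num=len(values))
    return np.interp(np.arange(num_layers), anchors, values)


self_attn_t = expand_gradient([0, 0.5, 0.3, 0.7, 1])  # self-attention sub-layers
mlp_t = expand_gradient([1, 0.5, 0.7, 0.3, 0])        # MLP sub-layers

for layer, (a, m) in enumerate(zip(self_attn_t, mlp_t)):
    # t = 0 -> weights stay with the base model (QandoraExp-7B)
    # t = 1 -> weights come from CySecButler-v0.1
    print(f"layer {layer:2d}: self_attn t={a:.2f}  mlp t={m:.2f}")
```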
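
## 💬 Chat-Template Usage (sketch)

The parent models are Qwen2.5-7B-based, so the merged tokenizer will usually ship a chat template, and conversational prompts tend to behave better when routed through it. A minimal sketch, assuming the chat template is present and `accelerate` is installed for `device_map="auto"`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen2.5-7B-Qandora-CySec"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",           # requires accelerate
)

messages = [
    {"role": "system", "content": "You are a helpful cybersecurity assistant."},
    {"role": "user", "content": "Explain the difference between XSS and CSRF."},
]

# Assumes the tokenizer provides a chat template; fall back to plain prompts otherwise.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```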