Dataset Viewer
filename (string, 2-69 chars) | filepath (string, 39-208 chars) | relative_path (string, 13-182 chars) | language (11 classes) | lsl_type (3 classes) | description (1 value) | content (string, 0-71.8M chars)
---|---|---|---|---|---|---|
api.py | D:\GitHub\ai_train\notgpl\ai\aipy\api.py | ai\aipy\api.py | Python | N/A | Functionality description extraction logic here | import os
import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling, Trainer, TrainingArguments
from datasets import Dataset
from huggingface_hub import login
from fastapi import FastAPI
from pydantic import BaseModel
import random
random_seed = random.randint(0, 2**32 - 1)
# Login to Hugging Face Hub
login(token="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")
# Available models
models = {
"noromaid": "NeverSleep/Noromaid-7b-v0.1.1",
"gpt-medium":"openai-community/gpt2-medium",
"gpt-large":"openai-community/gpt2-large",
"llama3": "meta-llama/Llama-2-7b-chat-hf",
"mixtrial_dolphin": "TinyLlama/TinyLlama-1.1B-step-50K-105b",
"phi2":"microsoft/phi-2",
"llamachat":"Felladrin/Llama-160M-Chat-v1",
"phi3":"microsoft/Phi-3-mini-128k-instruct",
"gpt-neo": "EleutherAI/gpt-neo-1.3B"
}
def load_dataset(file_path):
with open(file_path, 'r') as f:
return json.load(f)
def extract_text_samples(personality_config):
text_samples = []
if "dialogue_examples" in personality_config:
for example in personality_config["dialogue_examples"]:
if "example" in example:
text_samples.append(example["example"])
if "behavioral_guidelines" in personality_config:
for guideline in personality_config["behavioral_guidelines"]:
for key, value in guideline.items():
text_samples.append(value)
if "thoughts_on_sex" in personality_config:
text_samples.extend(personality_config["thoughts_on_sex"])
if "thoughts_on_flirting" in personality_config:
text_samples.extend(personality_config["thoughts_on_flirting"])
if "thoughts_on_naughty_activities" in personality_config:
text_samples.extend(personality_config["thoughts_on_naughty_activities"])
if "math_knowledge" in personality_config:
for math_knowledge in personality_config["math_knowledge"]:
if isinstance(math_knowledge, dict) and "example" in math_knowledge:
text_samples.append(math_knowledge["example"])
# Remove None values from text samples
return [text for text in text_samples if text is not None]
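# Illustrative shape of the personality-config JSON the two functions above expect
# (key names come from the checks above; the values are made-up examples):
# {
#   "dialogue_examples": [{"example": "User: hi\nAssistant: hello!"}],
#   "behavioral_guidelines": [{"tone": "stay in character"}],
#   "thoughts_on_flirting": ["..."],
#   "math_knowledge": [{"example": "2 + 2 = 4"}]
# }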
def main(dataset_choice, model_choice):
# Load personality configurations from JSON
personality_config = load_dataset(dataset_choice)
# Extract text samples from the JSON dataset
text_samples = extract_text_samples(personality_config)
# Convert the text samples to a Dataset object
data = {'text': text_samples}
dataset = Dataset.from_dict(data)
# Load the selected model and tokenizer
model_name = models.get(model_choice.lower())
if not model_name:
raise ValueError(f"Model {model_choice} not recognized. Available models: {list(models.keys())}")
# Ensure the model and tokenizer are downloaded
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Add a padding token to the tokenizer if it does not exist
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# Split the dataset into train and validation sets
train_size = int(0.9 * len(dataset))
train_dataset = dataset.select(range(train_size))
eval_dataset = dataset.select(range(train_size, len(dataset)))
# Tokenize the datasets
def tokenize_function(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=256) # Cap sequences at 256 tokens to keep CPU memory use manageable
tokenized_train = train_dataset.map(tokenize_function, batched=True)
tokenized_eval = eval_dataset.map(tokenize_function, batched=True)
# Data collator for language modeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False, # Causal Language Modeling (CLM)
)
# Training arguments optimized for CPU
training_args = TrainingArguments(
output_dir='./results', # Temporary output directory
overwrite_output_dir=True,
num_train_epochs=10, # Number of training epochs
per_device_train_batch_size=1, # Per-device batch size of 1
gradient_accumulation_steps=8, # Adjust for CPU optimization
save_steps=500, # Save less frequently to reduce I/O overhead
save_total_limit=1, # Keep only the most recent checkpoint
learning_rate=1e-4, # Learning rate as specified
logging_dir='./logs', # Temporary logging directory
logging_steps=500, # Log less frequently to reduce overhead
eval_steps=500, # Evaluate less frequently to save computation
warmup_steps=0, # No warmup as specified
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
evaluation_strategy='steps',
gradient_checkpointing=False, # Disabled as specified
bf16=False, # Disable bf16 as it's not supported on CPU
dataloader_num_workers=2, # Adjust number of workers for CPU
fp16=False, # Disable mixed precision for CPU
seed=random_seed, # Seed drawn at random on startup (record it if you need to reproduce a run)
)
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval,
)
# Train the model
trainer.train()
# Save the trained model and tokenizer
trainer.save_model('./results')
tokenizer.save_pretrained('./results')
# Define the FastAPI app
app = FastAPI()
class InputText(BaseModel):
input_text: str
class Config(BaseModel):
dataset_choice: str
model_choice: str
def generate_text(model, tokenizer, prompt, repetition_penalty=1.2, max_length=1400):
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(
input_ids,
max_length=max_length,
repetition_penalty=repetition_penalty,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
@app.post("/generate-text/")
async def generate_text_post(data: InputText):
model_path = "./results"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
generated_text = generate_text(model, tokenizer, data.input_text, max_length=140)
return {"generated_text": generated_text}
@app.get("/generate-text/")
async def generate_text_get(input_text: str):
model_path = "./results"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
generated_text = generate_text(model, tokenizer, input_text, max_length=480)
return {"generated_text": generated_text}
@app.post("/train/")
async def train_model(config: Config):
main(config.dataset_choice, config.model_choice)
return {"status": "Training started"}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=5000)
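# --- Illustrative client calls (added for documentation; not in the original file) ---
# Assuming the server above is running on localhost:5000:
#   import requests
#   requests.post("http://localhost:5000/train/",
#                 json={"dataset_choice": "dataset.json", "model_choice": "gpt-medium"})
#   requests.post("http://localhost:5000/generate-text/",
#                 json={"input_text": "Hello"}).json()
# "dataset.json" is a placeholder path; model_choice must be one of the keys in `models`.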
|
api2.py | D:\GitHub\ai_train\notgpl\ai\aipy\api2.py | ai\aipy\api2.py | Python | N/A | Functionality description extraction logic here | import os
import json
from fastapi import FastAPI, BackgroundTasks
from pydantic import BaseModel
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling
from datasets import Dataset
import torch
app = FastAPI()
# Load model and tokenizer once at startup
model_path = "./results"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
class InputText(BaseModel):
input_text: str
class Config(BaseModel):
dataset_choice: str
model_choice: str
def generate_text(model, tokenizer, prompt, repetition_penalty=1.2, max_length=1400):
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(
input_ids,
max_length=max_length,
repetition_penalty=repetition_penalty,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
@app.post("/generate-text/")
async def generate_text_post(data: InputText):
generated_text = generate_text(model, tokenizer, data.input_text, max_length=140)
return {"generated_text": generated_text}
@app.get("/generate-text/")
async def generate_text_get(input_text: str):
generated_text = generate_text(model, tokenizer, input_text, max_length=480)
return {"generated_text": generated_text}
def train_model(dataset_choice: str, model_choice: str):
# Example data loading and training process
with open(dataset_choice, 'r') as f:
personality_config = json.load(f)
text_samples = []
if "dialogue_examples" in personality_config:
for example in personality_config["dialogue_examples"]:
if "example" in example:
text_samples.append(example["example"])
if "behavioral_guidelines" in personality_config:
for guideline in personality_config["behavioral_guidelines"]:
for key, value in guideline.items():
text_samples.append(value)
if "thoughts_on_sex" in personality_config:
text_samples.extend(personality_config["thoughts_on_sex"])
if "thoughts_on_flirting" in personality_config:
text_samples.extend(personality_config["thoughts_on_flirting"])
if "thoughts_on_naughty_activities" in personality_config:
text_samples.extend(personality_config["thoughts_on_naughty_activities"])
if "math_knowledge" in personality_config:
for math_knowledge in personality_config["math_knowledge"]:
if "example" in math_knowledge:
text_samples.append(math_knowledge["example"])
text_samples = [text for text in text_samples if text is not None]
data = {'text': text_samples}
dataset = Dataset.from_dict(data)
# Load a smaller model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_choice)
model = AutoModelForCausalLM.from_pretrained(model_choice)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
train_size = int(0.9 * len(dataset))
train_dataset = dataset.select(range(train_size))
eval_dataset = dataset.select(range(train_size, len(dataset)))
def tokenize_function(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=128)
tokenized_train = train_dataset.map(tokenize_function, batched=True)
tokenized_eval = eval_dataset.map(tokenize_function, batched=True)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False,
)
training_args = TrainingArguments(
output_dir='./results',
overwrite_output_dir=True,
num_train_epochs=1, # Reduce the number of epochs
per_device_train_batch_size=1, # Small batch size
save_steps=1000,
save_total_limit=1,
learning_rate=1e-4,
logging_dir='./logs',
logging_steps=5,
eval_steps=100,
warmup_steps=100,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
evaluation_strategy='steps',
fp16=False, # Ensure mixed precision is off for CPU training
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval,
)
trainer.train()
trainer.save_model('./results')
tokenizer.save_pretrained('./results')
@app.post("/train/")
async def train_model_endpoint(config: Config, background_tasks: BackgroundTasks):
background_tasks.add_task(train_model, config.dataset_choice, config.model_choice)
return {"status": "Training started"}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=5000)
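# --- Illustrative training request (added for documentation; not in the original file) ---
# Here model_choice is passed straight to from_pretrained, so a full repo id is expected:
#   import requests
#   requests.post("http://localhost:5000/train/",
#                 json={"dataset_choice": "dataset.json",
#                       "model_choice": "openai-community/gpt2-medium"}).json()
# The endpoint returns immediately; training runs as a FastAPI background task.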
|
api3.py | D:\GitHub\ai_train\notgpl\ai\aipy\api3.py | ai\aipy\api3.py | Python | N/A | Functionality description extraction logic here | import os
from transformers import AutoTokenizer, AutoModelForCausalLM
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
# Define the FastAPI app
app = FastAPI()
class InputText(BaseModel):
input_text: str
# Path to the pretrained model and tokenizer
model_path = "./results"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
def generate_text(prompt, max_length=1400, repetition_penalty=1.2):
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(
input_ids,
max_length=max_length,
repetition_penalty=repetition_penalty,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
@app.post("/generate-text/")
async def generate_text_post(data: InputText):
generated_text = generate_text(data.input_text, max_length=140)
return {"generated_text": generated_text}
@app.get("/generate-text/")
async def generate_text_get(input_text: str):
generated_text = generate_text(input_text, max_length=480)
return {"generated_text": generated_text}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=5000)
|
api_ollama.py | D:\GitHub\ai_train\notgpl\ai\aipy\api_ollama.py | ai\aipy\api_ollama.py | Python | N/A | Functionality description extraction logic here | import os
import json
from fastapi import FastAPI
from pydantic import BaseModel
import requests
# Available models
models = {
"noromaid": "NeverSleep/Noromaid-7b-v0.1.1",
"gpt-medium": "openai-community/gpt2-medium",
"gpt-large": "openai-community/gpt2-large",
"llama3": "meta-llama/Llama-2-7b-chat-hf",
"mixtrial_dolphin": "TinyLlama/TinyLlama-1.1B-step-50K-105b",
"phi2": "microsoft/phi-2",
"llamachat": "Felladrin/Llama-160M-Chat-v1",
"phi3": "microsoft/Phi-3-mini-128k-instruct",
"gpt-neo": "EleutherAI/gpt-neo-1.3B"
}
def load_dataset(file_path):
with open(file_path, 'r') as f:
return json.load(f)
def extract_text_samples(personality_config):
text_samples = []
if "dialogue_examples" in personality_config:
for example in personality_config["dialogue_examples"]:
if "example" in example:
text_samples.append(example["example"])
if "behavioral_guidelines" in personality_config:
for guideline in personality_config["behavioral_guidelines"]:
for key, value in guideline.items():
text_samples.append(value)
if "thoughts_on_sex" in personality_config:
text_samples.extend(personality_config["thoughts_on_sex"])
if "thoughts_on_flirting" in personality_config:
text_samples.extend(personality_config["thoughts_on_flirting"])
if "thoughts_on_naughty_activities" in personality_config:
text_samples.extend(personality_config["thoughts_on_naughty_activities"])
if "math_knowledge" in personality_config:
for math_knowledge in personality_config["math_knowledge"]:
if isinstance(math_knowledge, dict) and "example" in math_knowledge:
text_samples.append(math_knowledge["example"])
# Remove None values from text samples
return [text for text in text_samples if text is not None]
# Define the FastAPI app
app = FastAPI()
class InputText(BaseModel):
input_text: str
model_choice: str
class Config(BaseModel):
dataset_choice: str
model_choice: str
@app.post("/generate-text/")
async def generate_text_post(data: InputText):
model_name = models.get(data.model_choice.lower())
if not model_name:
return {"error": f"Model {data.model_choice} not recognized. Available models: {list(models.keys())}"}
url = "http://localhost:11434/api/generate"
payload = {
"model": model_name,
"prompt": data.input_text,
"stream": False
}
headers = {'Content-Type': 'application/json'}
response = requests.post(url, json=payload, headers=headers)
result = response.json()
generated_text = result.get("response", "")
return {"generated_text": generated_text}
@app.get("/generate-text/")
async def generate_text_get(input_text: str, model_choice: str):
model_name = models.get(model_choice.lower())
if not model_name:
return {"error": f"Model {model_choice} not recognized. Available models: {list(models.keys())}"}
url = "http://localhost:11434/api/generate"
payload = {
"model": model_name,
"prompt": input_text,
"stream": False
}
headers = {'Content-Type': 'application/json'}
response = requests.post(url, json=payload, headers=headers)
result = response.json()
generated_text = result.get("response", "")
return {"generated_text": generated_text}
@app.post("/train/")
async def train_model(config: Config):
# Load personality configurations from JSON
personality_config = load_dataset(config.dataset_choice)
# Extract text samples from the JSON dataset
text_samples = extract_text_samples(personality_config)
# Create a training payload for the Ollama API
model_name = models.get(config.model_choice.lower())
if not model_name:
return {"error": f"Model {config.model_choice} not recognized. Available models: {list(models.keys())}"}
url = "http://localhost:11434/api/create"
payload = {
"name": config.model_choice,
"modelfile": f"FROM {model_name}\nSYSTEM You are trained with the provided text samples."
}
headers = {'Content-Type': 'application/json'}
response = requests.post(url, json=payload, headers=headers)
if response.status_code == 200:
return {"status": "Training started"}
else:
return {"status": "Training failed", "error": response.text}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=5000)
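# --- Illustrative client call (added for documentation; not in the original file) ---
# Assuming an Ollama server on localhost:11434 and this API on localhost:5000:
#   import requests
#   requests.post("http://localhost:5000/generate-text/",
#                 json={"input_text": "Hello", "model_choice": "phi2"}).json()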
|
auth.py | D:\GitHub\ai_train\notgpl\ai\aipy\auth.py | ai\aipy\auth.py | Python | N/A | Functionality description extraction logic here | from flask import jsonify
from flask_login import current_user
from . import login_manager
from .models import User
from functools import wraps
@login_manager.user_loader
def load_user(user_id):
return User.query.get(int(user_id))
@login_manager.unauthorized_handler
def unauthorized():
return jsonify({"message": "Unauthorized access"}), 401
def check_banned(func):
@wraps(func) # keep the wrapped view's name so Flask endpoint registration stays unique
def wrapper(*args, **kwargs):
if current_user.is_banned:
return jsonify({"message": "User is banned"}), 401
return func(*args, **kwargs)
return wrapper
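# --- Illustrative usage (added for documentation; not in the original file) ---
# A Flask view could stack the decorators like this (the app object, route, and
# `username` attribute are hypothetical; login_required is flask_login's decorator):
#   @app.route("/profile")
#   @login_required
#   @check_banned
#   def profile():
#       return jsonify({"message": f"Hello {current_user.username}"})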
|
bloominator.py | D:\GitHub\ai_train\notgpl\ai\aipy\bloominator.py | ai\aipy\bloominator.py | Python | N/A | Functionality description extraction logic here | import os
import json
import torch
from transformers import BloomForCausalLM, BloomTokenizerFast, TrainingArguments, Trainer, DataCollatorForLanguageModeling, get_scheduler
from datasets import load_dataset
from accelerate import Accelerator
import deepspeed
import bitsandbytes as bnb
# Set cache directory
os.environ['TRANSFORMERS_CACHE'] = 'D:/.cache'
os.environ['HF_DATASETS_CACHE'] = 'D:/.cache'
# Initialize Accelerator with mixed precision
accelerator = Accelerator(mixed_precision="fp16")
# Load the dataset
dataset = load_dataset("OpenAssistant/oasst2")
# Load the tokenizer and model
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")
# Preprocess function to tokenize the dataset
def preprocess_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=256)
# Tokenize the dataset with padding and truncation
tokenized_datasets = dataset.map(preprocess_function, batched=True)
# Ensure labels are properly formatted
def format_labels(examples):
inputs = examples["input_ids"]
examples["labels"] = inputs.copy()
return examples
tokenized_datasets = tokenized_datasets.map(format_labels, batched=True)
# Define training arguments with DeepSpeed configuration
training_args = TrainingArguments(
output_dir="./bloom560m-oasst2",
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=3,
weight_decay=0.01,
fp16=True,
gradient_accumulation_steps=8,
deepspeed={ # Inline DeepSpeed config as a dictionary
"train_micro_batch_size_per_gpu": 1,
"gradient_accumulation_steps": 8,
"fp16": {
"enabled": True,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 3,
"allgather_bucket_size": 2e8,
"reduce_bucket_size": 2e8,
"offload_optimizer": {
"device": "cpu",
"pin_memory": True
},
"offload_param": {
"device": "cpu",
"pin_memory": True
}
},
"gradient_clipping": 1.0,
"steps_per_print": 2000
}
)
# Custom Trainer class to use bitsandbytes optimizer
class CustomTrainer(Trainer):
def create_optimizer_and_scheduler(self, num_training_steps: int):
self.optimizer = bnb.optim.Adam8bit(self.model.parameters(), lr=self.args.learning_rate)
self.lr_scheduler = get_scheduler(
name=self.args.lr_scheduler_type,
optimizer=self.optimizer,
num_warmup_steps=self.args.get_warmup_steps(num_training_steps),
num_training_steps=num_training_steps,
)
# Initialize CustomTrainer with DeepSpeed
trainer = CustomTrainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
tokenizer=tokenizer,
data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
# Train the model
trainer.train()
# Save the fine-tuned model
model.save_pretrained("./bloom560m-oasst2")
tokenizer.save_pretrained("./bloom560m-oasst2")
# Upload to Hugging Face
from huggingface_hub import HfApi, HfFolder
# Load the tokenizer and model
tokenizer = BloomTokenizerFast.from_pretrained("./bloom560m-oasst2")
model = BloomForCausalLM.from_pretrained("./bloom560m-oasst2")
# Hugging Face Model Hub credentials
HfFolder.save_token("your_huggingface_token") # Replace with your actual Hugging Face token
# Define repository name and path
repo_name = "bloomsirenix/bloom560m-oasst2"
model_save_path = "./bloom560m-oasst2"
# Save tokenizer and model to the specified directory
tokenizer.save_pretrained(model_save_path)
model.save_pretrained(model_save_path)
# Initialize HfApi
api = HfApi()
# Upload the model to Hugging Face Model Hub
api.upload_folder(
folder_path=model_save_path,
repo_id=repo_name,
repo_type="model",
commit_message="Upload fine-tuned model on OpenAssistant/oasst2 dataset"
)
print(f"Model uploaded to https://huggingface.co/{repo_name}")
|
bot.py | D:\GitHub\ai_train\notgpl\ai\aipy\bot.py | ai\aipy\bot.py | Python | N/A | Functionality description extraction logic here | import discord
from discord.ext import commands
from discord import app_commands
import aiohttp
from config import TOKEN, GUILD_ID, API_URL, BOT_OWNER_ID
MY_GUILD = discord.Object(id=GUILD_ID)
class MyBot(commands.Bot):
def __init__(self):
super().__init__(command_prefix='!', intents=discord.Intents.default())
async def setup_hook(self):
self.tree.copy_global_to(guild=MY_GUILD)
await self.tree.sync(guild=MY_GUILD)
bot = MyBot()
@bot.event
async def on_ready():
print(f'Logged in as {bot.user}!')
@bot.tree.command(name='slash', description='A simple slash command', guild=MY_GUILD)
@app_commands.describe(number='A number', string='A string')
async def slash(interaction: discord.Interaction, number: int, string: str):
await interaction.response.send_message(f'Modify {number=} {string=}', ephemeral=True)
@bot.tree.command(name='train_model', description='Train a model with a specific dataset', guild=MY_GUILD)
@app_commands.describe(dataset_choice='The dataset to use for training', model_choice='The model to train')
@commands.is_owner() # Note: this ext.commands check has no effect on slash commands; ownership is enforced manually below
async def train_model(interaction: discord.Interaction, dataset_choice: str, model_choice: str):
await interaction.response.defer() # Send a loading state response
if interaction.user.id == BOT_OWNER_ID:
async with aiohttp.ClientSession() as session:
async with session.post(f"{API_URL}/train/", json={"dataset_choice": dataset_choice, "model_choice": model_choice}) as response:
if response.status == 200:
result = await response.text()
await interaction.followup.send(f"Training started successfully: {result}")
else:
result = await response.text()
await interaction.followup.send(f"Error starting training: {result}")
else:
await interaction.followup.send("You do not have permission to use this command.")
@bot.tree.command(name='aschat', description='Talk to Me ^^', guild=MY_GUILD)
@app_commands.describe(input_text='The input text to generate from', model_choice='The model to use for generation')
async def generate_text(interaction: discord.Interaction, input_text: str, model_choice: str = "gpt-medium"):
if model_choice not in ["noromaid", "llama3", "mixtrial_dolphin", "phi2", "llamachat", "gpt-medium", "gpt-large"]:
await interaction.response.send_message("Invalid model choice. Please choose one of the following: noromaid, llama3, mixtrial_dolphin, phi2, llamachat, gpt-medium, gpt-large", ephemeral=True)
return
if len(input_text) > 1000:
await interaction.response.send_message("Input text is too long. Please keep it under 1000 characters.", ephemeral=True)
return
# Only allow this command in NSFW channels
if not interaction.channel.is_nsfw():
await interaction.response.send_message("This command can only be used in NSFW channels.", ephemeral=True)
return
await interaction.response.defer() # Send a loading state response
async with aiohttp.ClientSession() as session:
async with session.post(f"{API_URL}/generate-text/", json={"input_text": input_text, "model_choice": model_choice}) as response:
if response.status == 200:
result = await response.json()
generated_text = result.get("generated_text", "No text generated.")
await interaction.followup.send(f"{generated_text}")
else:
result = await response.text()
await interaction.followup.send(f"Error generating text: {result}")
class SimpleGeneralGroup(app_commands.Group):
@app_commands.command(name='general', description='A general command')
async def general(self, interaction: discord.Interaction):
await interaction.response.send_message('This is a general command', ephemeral=True)
bot.tree.add_command(SimpleGeneralGroup(name='simplegeneralgroup'), guild=MY_GUILD)
if __name__ == "__main__":
bot.run(TOKEN)
|
bot_lora.py | D:\GitHub\ai_train\notgpl\ai\aipy\bot_lora.py | ai\aipy\bot_lora.py | Python | N/A | Functionality description extraction logic here | import discord
from discord.ext import commands
from discord import app_commands
import aiohttp
from config import TOKEN, GUILD_ID, API_URL, BOT_OWNER_ID, HF_TOKEN, CACHE_DIR, OLLAMA_HOST
from ollama import AsyncClient
MY_GUILD = discord.Object(id=GUILD_ID)
class MyBot(commands.Bot):
def __init__(self):
super().__init__(command_prefix='!', intents=discord.Intents.default())
self.client = AsyncClient(host=OLLAMA_HOST)
self.model_cache = {}
async def setup_hook(self):
self.tree.copy_global_to(guild=MY_GUILD)
await self.tree.sync(guild=MY_GUILD)
print("Slash commands have been synced.")
async def pull_model(self, model_name):
async with aiohttp.ClientSession() as session:
async with session.post(f"{OLLAMA_HOST}/api/models/pull", json={"model": model_name}) as response:
if response.status == 200:
print(f"Model {model_name} pulled successfully.")
else:
print(f"Failed to pull model {model_name}: {response.status}")
async def load_model(self, model_name_or_path):
if model_name_or_path in self.model_cache:
return self.model_cache[model_name_or_path]
else:
await self.pull_model(model_name_or_path)
self.model_cache[model_name_or_path] = model_name_or_path
return model_name_or_path
async def load_lora_model(self, base_model_name, lora_model_path):
cache_key = f"{base_model_name}-{lora_model_path}"
if cache_key in self.model_cache:
return self.model_cache[cache_key]
else:
await self.pull_model(base_model_name)
self.model_cache[cache_key] = (base_model_name, lora_model_path)
return base_model_name, lora_model_path
bot = MyBot()
@bot.event
async def on_ready():
print(f'Logged in as {bot.user}!')
@bot.tree.command(name='train_model', description='Train a model with a specific dataset', guild=MY_GUILD)
@app_commands.describe(dataset_choice='The dataset to use for training', model_choice='The model to train')
@commands.is_owner()
async def train_model(interaction: discord.Interaction, dataset_choice: str, model_choice: str):
await interaction.response.defer()
await interaction.followup.send(f"Training is managed by Ollama API now. Dataset: {dataset_choice}, Model: {model_choice}")
@bot.tree.command(name='aschat', description='Talk to Me ^^', guild=MY_GUILD)
@app_commands.describe(input_text='The input text to generate from', model_choice='The model to use for generation', lora_model='Optional LoRA model to use')
async def generate_text(interaction: discord.Interaction, input_text: str, model_choice: str, lora_model: str = None):
if len(input_text) > 1000:
await interaction.response.send_message("Input text is too long. Please keep it under 1000 characters.", ephemeral=True)
return
await interaction.response.defer()
try:
if lora_model:
base_model, lora_model_path = await bot.load_lora_model(model_choice, lora_model)
model_name = f"{base_model}-{lora_model_path}"
else:
model_name = await bot.load_model(model_choice)
async with aiohttp.ClientSession() as session:
async with session.post(f"{OLLAMA_HOST}/api/models/run", json={"model": model_name, "prompt": input_text}) as response:
if response.status == 200:
result = await response.json()
generated_text = result.get("generated_text", "No text generated.")
await interaction.followup.send(generated_text)
else:
result = await response.text()
await interaction.followup.send(f"Error generating text: {result}")
except Exception as e:
await interaction.followup.send(f"Error generating text: {str(e)}")
class SimpleGeneralGroup(app_commands.Group):
@app_commands.command(name='general', description='A general command')
async def general(self, interaction: discord.Interaction):
await interaction.response.send_message('This is a general command', ephemeral=True)
bot.tree.add_command(SimpleGeneralGroup(name='simplegeneralgroup'), guild=MY_GUILD)
if __name__ == "__main__":
bot.run(TOKEN)
|
bot_ollama.py | D:\GitHub\ai_train\notgpl\ai\aipy\bot_ollama.py | ai\aipy\bot_ollama.py | Python | N/A | Functionality description extraction logic here | import discord
from discord.ext import commands
from discord import app_commands
import aiohttp
from config import TOKEN, GUILD_ID, API_URL, BOT_OWNER_ID
import json # Add this import
MY_GUILD = discord.Object(id=GUILD_ID)
class MyBot(commands.Bot):
def __init__(self):
super().__init__(command_prefix='!', intents=discord.Intents.default())
async def setup_hook(self):
self.tree.copy_global_to(guild=MY_GUILD)
await self.tree.sync(guild=MY_GUILD)
bot = MyBot()
@bot.event
async def on_ready():
print(f'Logged in as {bot.user}!')
@bot.tree.command(name='slash', description='A simple slash command', guild=MY_GUILD)
@app_commands.describe(number='A number', string='A string')
async def slash(interaction: discord.Interaction, number: int, string: str):
await interaction.response.send_message(f'Modify {number=} {string=}', ephemeral=True)
@bot.tree.command(name='aschat', description='Talk to Me ^^', guild=MY_GUILD)
@app_commands.describe(input_text='The input text to generate from', model_choice='The model to use for generation')
async def generate_text(interaction: discord.Interaction, input_text: str, model_choice: str):
if len(input_text) > 1000:
await interaction.response.send_message("Input text is too long. Please keep it under 1000 characters.", ephemeral=True)
return
await interaction.response.defer() # Send a loading state response
async with aiohttp.ClientSession() as session:
payload = {
"model": model_choice,
"prompt": input_text
}
headers = {'Content-Type': 'application/json'}
async with session.post(f"{API_URL}/generate", json=payload, headers=headers) as response:
if response.status == 200:
generated_text = ""
async for line in response.content:
decoded_line = line.decode('utf-8').strip()
if decoded_line:
try:
json_response = json.loads(decoded_line)
if "response" in json_response:
generated_text += json_response["response"]
except json.JSONDecodeError:
continue
if not generated_text:
generated_text = "No text generated."
await interaction.followup.send(generated_text)
else:
result = await response.text()
await interaction.followup.send(f"Error generating text: {result}")
class SimpleGeneralGroup(app_commands.Group):
@app_commands.command(name='general', description='A general command')
async def general(self, interaction: discord.Interaction):
await interaction.response.send_message('This is a general command', ephemeral=True)
bot.tree.add_command(SimpleGeneralGroup(name='simplegeneralgroup'), guild=MY_GUILD)
if __name__ == "__main__":
bot.run(TOKEN)
|
chat.py | D:\GitHub\ai_train\notgpl\ai\aipy\chat.py | ai\aipy\chat.py | Python | N/A | Functionality description extraction logic here | from transformers import BloomTokenizerFast, BloomForCausalLM
import torch
# Specify the model name
model_name_or_path = "mia4o-bloom"
# Load the tokenizer and the model
tokenizer = BloomTokenizerFast.from_pretrained(model_name_or_path, cache_dir="D:\\.cache")
model = BloomForCausalLM.from_pretrained(model_name_or_path, cache_dir="D:\\.cache").cuda()
model = model.eval()
# Define the input pattern
input_pattern = "{}</s>"
# Take user input
text = input("Enter the text: ")
input_ids = tokenizer(input_pattern.format(text), return_tensors="pt").input_ids
input_ids = input_ids.cuda()
# Generate the output
outputs = model.generate(input_ids,
do_sample=True,
max_new_tokens=1024,
top_p=0.85,
temperature=0.3,
repetition_penalty=1.2,
eos_token_id=tokenizer.eos_token_id)
# Calculate the length of the input ids
input_ids_len = input_ids.size(1)
# Extract and decode the response
response_ids = outputs[0][input_ids_len:]
response = tokenizer.decode(response_ids, skip_special_tokens=True)
# Print the response
print(response)
|
chaten.py | D:\GitHub\ai_train\notgpl\ai\aipy\chaten.py | ai\aipy\chaten.py | Python | N/A | Functionality description extraction logic here | from transformers import BloomTokenizerFast, BloomForCausalLM, MarianMTModel, MarianTokenizer
import torch
import langdetect
# Specify the model names
chat_model_name = "WangZeJun/bloom-820m-chat"
translation_model_name = 'Helsinki-NLP/opus-mt-zh-en'
# Load the tokenizer and the model for chat
chat_tokenizer = BloomTokenizerFast.from_pretrained(chat_model_name, cache_dir="D:\\.cache")
chat_model = BloomForCausalLM.from_pretrained(chat_model_name, cache_dir="D:\\.cache").cuda()
chat_model = chat_model.eval()
# Load the tokenizer and the model for translation
translation_tokenizer = MarianTokenizer.from_pretrained(translation_model_name)
translation_model = MarianMTModel.from_pretrained(translation_model_name)
# Define the input pattern
input_pattern = "{}</s>"
# Take user input
text = input("Enter the text: ")
input_ids = chat_tokenizer(input_pattern.format(text), return_tensors="pt").input_ids
input_ids = input_ids.cuda()
# Generate the output
outputs = chat_model.generate(input_ids,
do_sample=True,
max_new_tokens=1024,
top_p=0.85,
temperature=0.3,
repetition_penalty=1.2,
eos_token_id=chat_tokenizer.eos_token_id)
# Calculate the length of the input ids
input_ids_len = input_ids.size(1)
# Extract and decode the response
response_ids = outputs[0][input_ids_len:]
response = chat_tokenizer.decode(response_ids, skip_special_tokens=True)
# Detect the language of the response
detected_language = langdetect.detect(response)
# Translate the response if it is in Chinese
if detected_language == 'zh-cn' or detected_language == 'zh-tw':
translation_inputs = translation_tokenizer(response, return_tensors="pt").input_ids
translated_outputs = translation_model.generate(translation_inputs)
translated_response = translation_tokenizer.decode(translated_outputs[0], skip_special_tokens=True)
print("Translated Response:", translated_response)
else:
print("Response:", response)
|
client.py | D:\GitHub\ai_train\notgpl\ai\aipy\client.py | ai\aipy\client.py | Python | N/A | Functionality description extraction logic here | import requests
BASE_URL = 'http://127.0.0.1:5000'
def register(username, password):
url = f"{BASE_URL}/register"
payload = {'username': username, 'password': password}
response = requests.post(url, json=payload)
return response.json()
def login(username, password):
url = f"{BASE_URL}/login"
payload = {'username': username, 'password': password}
session = requests.Session()
response = session.post(url, json=payload)
try:
return response.json(), session
except requests.exceptions.JSONDecodeError:
print(f"Failed to parse JSON response: {response.text}")
return None, session
def generate_catgirl_image(session, prompt, width, height, num_images, model):
url = f"{BASE_URL}/generate_images"
payload = {'prompt': prompt, 'width': width, 'height': height, 'num_images': num_images, 'model': model}
response = session.post(url, json=payload)
try:
return response.json()
except requests.exceptions.JSONDecodeError:
print(f"Failed to parse JSON response: {response.text}")
return None
def logout(session):
url = f"{BASE_URL}/logout"
response = session.post(url)
try:
return response.json()
except requests.exceptions.JSONDecodeError:
print(f"Failed to parse JSON response: {response.text}")
return None
if __name__ == '__main__':
print("Registering user...")
register_response = register('testuser', 'testpassword')
print(register_response)
print("Logging in user...")
login_response, user_session = login('testuser', 'testpassword')
if login_response:
print(login_response)
else:
print("Login failed. Please check the server status and URL.")
exit(1)
if login_response.get("message") == "Logged in successfully!":
prompt = "catgirl with blonde hair and blue eyes"
width, height = 512, 512
num_images = 1
model = 'realcartoonRealistic_v16.safetensors'
print("Generating catgirl image...")
image_response = generate_catgirl_image(user_session, prompt, width, height, num_images, model)
print(image_response)
print("Logging out user...")
logout_response = logout(user_session)
print(logout_response)
else:
print("User login was not successful.")
|
codeinator.py | D:\GitHub\ai_train\notgpl\ai\aipy\codeinator.py | ai\aipy\codeinator.py | Python | N/A | Functionality description extraction logic here | import requests
import logging
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer
from torch.utils.data import IterableDataset, DataLoader
import torch
import os
import wandb
from huggingface_hub import login
# Set up your Hugging Face token
hf_token = "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" # Ensure this environment variable is set
if not hf_token:
raise ValueError("Please set the HUGGINGFACE_TOKEN environment variable")
# Login to Hugging Face
login(hf_token, add_to_git_credential=True)
# Setup logging
logging.basicConfig(level=logging.INFO, filename='data_fetch.log', filemode='w',
format='%(name)s - %(levelname)s - %(message)s')
# Load the dataset using streaming to handle large files
dataset = load_dataset("bigcode/the-stack-v2-train-full-ids", split="train", streaming=True)
# Initialize the tokenizer and model
model_name = "bigscience/bloom-1b1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Set pad_token to eos_token
tokenizer.pad_token = tokenizer.eos_token
# Initialize wandb
wandb.init(project="github-code-training-blocal", entity="manikineko") # Replace with your wandb project name and entity
# Function to fetch content from GitHub by repo and path
def fetch_content(owner, repo, path):
url = f"https://raw.githubusercontent.com/{owner}/{repo}/master/{path}"
response = requests.get(url)
if response.status_code == 200:
content = response.text
logging.info(f"Fetched content from {url}")
return content
else:
logging.error(f"Failed to fetch content from {url} with status code {response.status_code}")
return None
# Custom IterableDataset
class CustomIterableDataset(IterableDataset):
def __init__(self, dataset, tokenizer, max_length=512):
self.dataset = dataset
self.tokenizer = tokenizer
self.max_length = max_length
def __iter__(self):
for example in self.dataset:
repo_url = example.get("repo_url", "")
repo_owner, repo_name = repo_url.split("/")[-2], repo_url.split("/")[-1]
for file in example["files"]:
content = fetch_content(repo_owner, repo_name, file["path"])
if content:
tokenized_output = self.tokenizer(content, truncation=True, padding="max_length", max_length=self.max_length)
input_ids = tokenized_output["input_ids"]
labels = input_ids.copy() # Use input_ids as labels
yield {"input_ids": input_ids, "attention_mask": tokenized_output["attention_mask"], "labels": labels}
def custom_collate(batch):
return {key: torch.tensor([d[key] for d in batch]) for key in batch[0]}
def main():
# Initialize the custom dataset
tokenized_dataset = CustomIterableDataset(dataset, tokenizer)
# Create DataLoader
data_loader = DataLoader(tokenized_dataset, batch_size=1, collate_fn=custom_collate)
# Define training arguments optimized for RTX 3060 and RYZEN 5
training_args = TrainingArguments(
output_dir="./results",
evaluation_strategy="no",
learning_rate=0.0001,
per_device_train_batch_size=1,
max_steps=10000, # Specify the number of training steps
weight_decay=0,
fp16=True, # Enable mixed precision training
gradient_accumulation_steps=32,
logging_dir="./logs",
logging_steps=10,
report_to="wandb", # Report to wandb
remove_unused_columns=False,
dataloader_num_workers=2, # Adjust based on your CPU capacity
)
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset,
)
# Train the model
trainer.train()
if __name__ == '__main__':
main()
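# --- Illustrative environment setup (added for documentation; not in the original file) ---
# The script reads HUGGINGFACE_TOKEN from the environment (see above) and logs to wandb,
# so a typical session might look like:
#   export HUGGINGFACE_TOKEN=hf_...        # bash
#   $env:HUGGINGFACE_TOKEN = "hf_..."      # PowerShell
#   wandb login
#   python codeinator.py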
|
codeset.py | D:\GitHub\ai_train\notgpl\ai\aipy\codeset.py | ai\aipy\codeset.py | Python | N/A | Functionality description extraction logic here | import os
import csv
import chardet
# Define the directory containing your source code files
source_code_directory = input("Enter the path to the directory containing your source code files: ")
# Define the output CSV file path
output_csv_file = input("Enter the path to the output CSV file: ")
# A dictionary to map file extensions to their corresponding programming language
file_extension_mapping = {
'.cpp': 'C++',
'.py': 'Python',
'.java': 'Java',
'.js': 'JavaScript',
'.cs': 'C#',
'.c': 'C',
'.h': 'C/C++ Header',
'.html': 'HTML',
'.css': 'CSS',
'.php': 'PHP',
'.rb': 'Ruby',
'.swift': 'Swift',
'.go': 'Go',
'.rs': 'Rust',
'.kt': 'Kotlin',
'.m': 'Objective-C',
'.pl': 'Perl',
'.sh': 'Shell',
'.r': 'R',
'.ts': 'TypeScript',
'.xml': 'XML',
'.sql': 'SQL',
# Add more mappings as needed
}
def get_code_type(file_extension):
return file_extension_mapping.get(file_extension, 'Unknown')
def process_file(file_path, code_type, writer):
with open(file_path, 'r', encoding='utf-8', errors='ignore') as file:
for line in file:
writer.writerow([code_type, f"{code_type}: {line.strip()}"])
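# Illustrative output rows (made-up content, following the format written above):
#   Category,Content
#   Python,Python: import os
#   C++,C++: int main() {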
def process_directory(directory, writer):
for root, _, files in os.walk(directory):
for file in files:
file_path = os.path.join(root, file)
file_extension = os.path.splitext(file)[1]
code_type = get_code_type(file_extension)
process_file(file_path, code_type, writer)
def main():
with open(output_csv_file, 'w', newline='', encoding='utf-8') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(['Category', 'Content'])
process_directory(source_code_directory, writer)
print(f"CSV file '{output_csv_file}' has been created successfully.")
if __name__ == "__main__":
main() |
combined.py | D:\GitHub\ai_train\notgpl\ai\aipy\combined.py | ai\aipy\combined.py | Python | N/A | Functionality description extraction logic here | import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig
def combine_models(base_model_name, lora_path, cache_dir, output_path):
# Load the tokenizer and the base model
tokenizer = AutoTokenizer.from_pretrained(base_model_name, cache_dir=cache_dir)
model = AutoModelForCausalLM.from_pretrained(base_model_name, cache_dir=cache_dir)
# Load LoRA configuration
lora_config = PeftConfig.from_pretrained(lora_path, cache_dir=cache_dir)
# Load LoRA weights and merge with base model
model = PeftModel.from_pretrained(model, lora_path, cache_dir=cache_dir)
model = model.merge_and_unload()
# Save the combined model
os.makedirs(output_path, exist_ok=True)
model.save_pretrained(output_path, safe_serialization=True) # Save in safetensors format
tokenizer.save_pretrained(output_path)
print(f"Combined model saved at {output_path}")
if __name__ == "__main__":
base_model_name = "cognitivecomputations/TinyDolphin-2.8-1.1b"
lora_path = "F:\\AI\\miagptrm-dolphin\\"
cache_dir = "D:\\.cache"
output_path = "./combined_ollama_model"
combine_models(base_model_name, lora_path, cache_dir, output_path)
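# --- Illustrative follow-up (added for documentation; not in the original file) ---
# The merged checkpoint can be reloaded like any Hugging Face model directory:
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("./combined_ollama_model")
#   mdl = AutoModelForCausalLM.from_pretrained("./combined_ollama_model")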
|
config.py | D:\GitHub\ai_train\notgpl\ai\aipy\config.py | ai\aipy\config.py | Python | N/A | Functionality description extraction logic here | # config.py
# Replace with your actual bot token
TOKEN = 'MTI0ODkxMTYyNjc5NTYxNDIwOA.G-RnFi.LbXz-1GVhNSCJZEeYHl6Vb0BG7xIovpS0bvRkk'
# Replace with your actual guild ID
GUILD_ID = 187966607451095041
# REST API URL
API_URL = 'http://localhost:11434/api'
BOT_OWNER_ID = 1199767018837127178 # Replace with your actual Discord user ID
OLLAMA_API_URL = 'http://localhost:11434'
OLLAMA_HOST = 'http://localhost:11434'
HF_TOKEN = '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'
CACHE_DIR = 'D:\\.cache' # Optional: path to the cache directory
|
convertconvert-bloom-hf-to-gguf.py | D:\GitHub\ai_train\notgpl\ai\aipy\convertconvert-bloom-hf-to-gguf.py | ai\aipy\convertconvert-bloom-hf-to-gguf.py | Python | N/A | Functionality description extraction logic here | #!/usr/bin/env python3
# HF bloom --> gguf conversion
from __future__ import annotations
import argparse
import json
import os
import re
import struct
import sys
from pathlib import Path
from typing import Any
import numpy as np
import torch
from transformers import AutoTokenizer # type: ignore[import]
if 'NO_LOCAL_GGUF' not in os.environ:
sys.path.insert(1, str(Path(__file__).parent / 'gguf-py' / 'gguf'))
import gguf
def count_model_parts(dir_model: Path) -> int:
num_parts = 0
for filename in os.listdir(dir_model):
if filename.startswith("pytorch_model-"):
num_parts += 1
if num_parts > 0:
print("gguf: found " + str(num_parts) + " model parts")
return num_parts
# Supported Models:
# https://huggingface.co/bigscience/bloom-1b7
# https://huggingface.co/bigscience/bloom-3b
# https://huggingface.co/bigscience/bloom-7b1
# https://huggingface.co/Langboat/bloom-1b4-zh
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Convert a Bloom model to a GGML compatible file")
parser.add_argument("--vocab-only", action="store_true", help="extract only the vocab")
parser.add_argument("--outfile", type=Path, help="path to write to; default: based on input")
parser.add_argument("model", type=Path, help="directory containing model file, or model file itself (*.bin)")
parser.add_argument("ftype", type=int, help="output format - use 0 for float32, 1 for float16", choices=[0, 1], default = 1)
return parser.parse_args()
args = parse_args()
dir_model = args.model
ftype = args.ftype
if not dir_model.is_dir():
print(f'Error: {args.model} is not a directory', file = sys.stderr)
sys.exit(1)
# possible tensor data types
# ftype == 0 -> float32
# ftype == 1 -> float16
# map from ftype to string
ftype_str = ["f32", "f16"]
if args.outfile is not None:
fname_out = args.outfile
else:
# output in the same directory as the model by default
fname_out = dir_model / f'ggml-model-{ftype_str[ftype]}.gguf'
print("gguf: loading model "+dir_model.name)
with open(dir_model / "config.json", "r", encoding="utf-8") as f:
hparams = json.load(f)
if hparams["architectures"][0] != "BloomForCausalLM":
print("Model architecture not supported: " + hparams["architectures"][0])
sys.exit(1)
# get number of model parts
num_parts = count_model_parts(dir_model)
ARCH=gguf.MODEL_ARCH.BLOOM
gguf_writer = gguf.GGUFWriter(fname_out, gguf.MODEL_ARCH_NAMES[ARCH])
print("gguf: get model metadata")
block_count = hparams["n_layer"]
gguf_writer.add_name("Bloom")
n_embed = hparams.get("hidden_size", hparams.get("n_embed"))
n_head = hparams.get("n_head", hparams.get("num_attention_heads"))
gguf_writer.add_context_length(hparams.get("seq_length", n_embed))
gguf_writer.add_embedding_length(n_embed)
gguf_writer.add_feed_forward_length(4 * n_embed)
gguf_writer.add_block_count(block_count)
gguf_writer.add_head_count(n_head)
gguf_writer.add_head_count_kv(n_head)
gguf_writer.add_layer_norm_eps(hparams["layer_norm_epsilon"])
gguf_writer.add_file_type(ftype)
# TOKENIZATION
print("gguf: get tokenizer metadata")
tokens: list[bytearray] = []
scores: list[float] = []
toktypes: list[int] = []
# gpt2 tokenizer
gguf_writer.add_tokenizer_model("gpt2")
print("gguf: get gpt2 tokenizer vocab")
# ref: https://github.com/cmp-nct/ggllm.cpp/blob/master/falcon_convert.py
tokenizer = AutoTokenizer.from_pretrained(dir_model)
# The number of tokens in tokenizer.json can differ from the expected vocab size.
# This causes downstream issues with mismatched tensor sizes when running the inference
vocab_size = hparams.get("vocab_size", len(tokenizer.vocab))
assert max(tokenizer.vocab.values()) < vocab_size
reverse_vocab = {id: encoded_tok for encoded_tok, id in tokenizer.vocab.items()}
for i in range(vocab_size):
tokens.append(reverse_vocab[i] if i in reverse_vocab else f"[PAD{i}]")
scores.append(0.0) # dummy
toktypes.append(gguf.TokenType.NORMAL)
gguf_writer.add_token_list(tokens)
gguf_writer.add_token_scores(scores)
gguf_writer.add_token_types(toktypes)
special_vocab = gguf.SpecialVocab(dir_model, load_merges=True)
special_vocab.add_to_gguf(gguf_writer)
# TENSORS
tensor_map = gguf.get_tensor_name_map(ARCH, block_count)
# params for qkv transform
n_head_kv = hparams.get("n_head_kv", n_head)
head_dim = n_embed // n_head
# tensor info
print("gguf: get tensor metadata")
if num_parts == 0:
part_names = iter(("pytorch_model.bin",))
else:
part_names = (
f"pytorch_model-{n:05}-of-{num_parts:05}.bin" for n in range(1, num_parts + 1)
)
for part_name in part_names:
if args.vocab_only:
break
print("gguf: loading model part '" + part_name + "'")
model_part = torch.load(dir_model / part_name, map_location="cpu")
has_lm_head = True
if "lm_head.weight" not in model_part.keys() and "output.weight" not in model_part.keys():
has_lm_head = False
for original_name in model_part.keys():
data = model_part[original_name]
name = re.sub(r'transformer\.', '', original_name)
old_dtype = data.dtype
# convert any unsupported data types to float32
if data.dtype != torch.float16 and data.dtype != torch.float32:
data = data.to(torch.float32)
data = data.squeeze().numpy()
if re.match(r"h\.\d+\.self_attention\.query_key_value\.weight", name):
# Map bloom-style qkv_linear to gpt-style qkv_linear
# bloom: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bloom/modeling_bloom.py#L238-L252 # noqa
# gpt-2: https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L312 # noqa
qkv_weights = data.reshape((n_head, 3, n_embed // n_head, n_embed))
data = np.concatenate(
(qkv_weights[:, 0, :, :].reshape((-1, n_embed)),
qkv_weights[:, 1, :, :].reshape((-1, n_embed)),
qkv_weights[:, 2, :, :].reshape((-1, n_embed))),
axis=0
)
print("re-format attention.linear_qkv.weight")
elif re.match(r"h\.\d+\.self_attention\.query_key_value\.bias", name):
qkv_bias = data.reshape((n_head, 3, n_embed // n_head))
data = np.concatenate(
(qkv_bias[:, 0, :].reshape((n_embed,)),
qkv_bias[:, 1, :].reshape((n_embed,)),
qkv_bias[:, 2, :].reshape((n_embed,))),
axis=0
)
print("re-format attention.linear_qkv.bias")
# map tensor names
new_name = tensor_map.get_name(name, try_suffixes=(".weight", ".bias"))
if new_name is None:
print("Can not map tensor '" + name + "'")
sys.exit()
n_dims = len(data.shape)
data_dtype = data.dtype
# if f32 desired, convert any float16 to float32
if ftype == 0 and data_dtype == np.float16:
data = data.astype(np.float32)
# TODO: Why can't we use these float16 tensors as-is? There should be no reason to store float16 as float32
if ftype == 1 and data_dtype == np.float16 and n_dims == 1:
data = data.astype(np.float32)
# if f16 desired, convert any float32 2-dim weight tensors to float16
if ftype == 1 and data_dtype == np.float32 and name.endswith(".weight") and n_dims == 2:
data = data.astype(np.float16)
print(name, "=>", new_name + ", shape = " + str(data.shape) + ", " + str(old_dtype) + " --> " + str(data.dtype))
gguf_writer.add_tensor(new_name, data)
if not has_lm_head and name == "word_embeddings.weight":
gguf_writer.add_tensor("output.weight", data)
print(name, "=>", "output.weight" + ", shape = " + str(data.shape) + ", " + str(old_dtype) + " --> " + str(data.dtype)) # noqa
print("gguf: write header")
gguf_writer.write_header_to_file()
print("gguf: write metadata")
gguf_writer.write_kv_data_to_file()
if not args.vocab_only:
print("gguf: write tensors")
gguf_writer.write_tensors_to_file()
gguf_writer.close()
print(f"gguf: model successfully exported to '{fname_out}'")
print("") |
crossbreeder.py | D:\GitHub\ai_train\notgpl\ai\aipy\crossbreeder.py | ai\aipy\crossbreeder.py | Python | N/A | Functionality description extraction logic here | import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer, Trainer, TrainingArguments
from datasets import load_dataset
class MultiTaskModel(nn.Module):
def __init__(self, model1, model2):
super(MultiTaskModel, self).__init__()
self.model1 = model1
self.model2 = model2
self.classifier = nn.Linear(model1.config.hidden_size + model2.config.hidden_size, 2) # Adjust for your task
def forward(self, input_ids1, attention_mask1, input_ids2, attention_mask2):
outputs1 = self.model1(input_ids=input_ids1, attention_mask=attention_mask1)
outputs2 = self.model2(input_ids=input_ids2, attention_mask=attention_mask2)
combined_output = torch.cat((outputs1.last_hidden_state[:, 0, :], outputs2.last_hidden_state[:, 0, :]), dim=1)
logits = self.classifier(combined_output)
return logits
def load_models(model_name1, model_name2):
model1 = AutoModel.from_pretrained(model_name1, trust_remote_code=True)
model2 = AutoModel.from_pretrained(model_name2, trust_remote_code=True)
tokenizer1 = AutoTokenizer.from_pretrained(model_name1)
tokenizer2 = AutoTokenizer.from_pretrained(model_name2)
if tokenizer1.pad_token is None:
tokenizer1.add_special_tokens({'pad_token': '[PAD]'})
if tokenizer2.pad_token is None:
tokenizer2.add_special_tokens({'pad_token': '[PAD]'})
model1.resize_token_embeddings(len(tokenizer1))
model2.resize_token_embeddings(len(tokenizer2))
return model1, model2, tokenizer1, tokenizer2
def prepare_dataset(tokenizer1, tokenizer2, dataset_name, split):
dataset = load_dataset(dataset_name, split=split)
column_names = dataset.column_names
# Assuming 'text' column is present
text_column = 'text'
def tokenize_function(examples):
tokenized_input1 = tokenizer1(examples[text_column], padding='max_length', truncation=True)
tokenized_input2 = tokenizer2(examples[text_column], padding='max_length', truncation=True)
return {
'input_ids1': tokenized_input1['input_ids'],
'attention_mask1': tokenized_input1['attention_mask'],
'input_ids2': tokenized_input2['input_ids'],
'attention_mask2': tokenized_input2['attention_mask'],
'labels': [0] * len(examples[text_column]) # Dummy labels
}
tokenized_dataset = dataset.map(tokenize_function, batched=True)
tokenized_dataset.set_format(type='torch', columns=['input_ids1', 'attention_mask1', 'input_ids2', 'attention_mask2', 'labels'])
return tokenized_dataset
def train_model(multi_task_model, train_dataset, eval_dataset, output_dir='./results'):
training_args = TrainingArguments(
output_dir=output_dir,
num_train_epochs=3,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
)
trainer = Trainer(
model=multi_task_model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset
)
trainer.train()
if __name__ == "__main__":
model_name1 = input("Enter the repo/location of the first model: ")
model_name2 = input("Enter the repo/location of the second model: ")
# Load models and tokenizers
model1, model2, tokenizer1, tokenizer2 = load_models(model_name1, model_name2)
# Get dataset names
dataset_name = input("Enter the dataset name: ")
# Prepare datasets
train_dataset = prepare_dataset(tokenizer1, tokenizer2, dataset_name, 'train')
eval_dataset = prepare_dataset(tokenizer1, tokenizer2, dataset_name, 'validation') # Adjust the split name if needed
# Define the multi-task model
multi_task_model = MultiTaskModel(model1, model2)
# Train the multi-task model
if train_dataset and eval_dataset:
train_model(multi_task_model, train_dataset, eval_dataset)
else:
print("Please provide training and evaluation datasets.")
|
dataset2data.py | D:\GitHub\ai_train\notgpl\ai\aipy\dataset2data.py | ai\aipy\dataset2data.py | Python | N/A | Functionality description extraction logic here | import json
import random
import os
def split_dataset(dataset_path, train_ratio=0.8, val_ratio=0.1, test_ratio=0.1, seed=None):
if seed is not None:
random.seed(seed)
# Load the dataset
with open(dataset_path, 'r', encoding='utf-8') as f:
data = json.load(f)
# Shuffle the data
random.shuffle(data)
# Calculate the split indices
total_size = len(data)
train_size = int(total_size * train_ratio)
val_size = int(total_size * val_ratio)
# Split the data
train_data = data[:train_size]
val_data = data[train_size:train_size + val_size]
test_data = data[train_size + val_size:]
return train_data, val_data, test_data
def save_json(data, file_path):
with open(file_path, 'w', encoding='utf-8') as f:
json.dump(data, f, ensure_ascii=False, indent=4)
if __name__ == "__main__":
# Path to the original dataset
dataset_path = "dataset.json"
# Define the output directory
output_dir = "data"
os.makedirs(output_dir, exist_ok=True)
# Split the dataset
train_data, val_data, test_data = split_dataset(dataset_path, seed=42)
# Save the splits to JSON files
save_json(train_data, os.path.join(output_dir, "train.json"))
save_json(val_data, os.path.join(output_dir, "validate.json"))
save_json(test_data, os.path.join(output_dir, "test.json"))
print(f"Dataset split completed. Files saved in '{output_dir}' directory.")
|
discordtrainer.py | D:\GitHub\ai_train\notgpl\ai\aipy\discordtrainer.py | ai\aipy\discordtrainer.py | Python | N/A | Functionality description extraction logic here | import os
import torch
import pandas as pd
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling, Trainer, TrainingArguments
from datasets import Dataset
from huggingface_hub import login
# Login to Hugging Face
login(token="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")
# User inputs
csv = input("Please enter a CSV path: ")
model_path = input("Please enter the model path/hf repo: ")
outputdir = input("Please enter the output directory: ")
# Load dataset from CSV
df = pd.read_csv(csv)
text_samples = df['Content'].dropna().astype(str).tolist()
data = {'text': text_samples}
dataset = Dataset.from_dict(data)
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=r"D:\.cache")
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, cache_dir=r"D:\.cache")
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# Split the dataset into train and validation sets
train_size = int(0.9 * len(dataset))
train_dataset = dataset.select(range(train_size))
eval_dataset = dataset.select(range(train_size, len(dataset)))
def tokenize_function(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512)
tokenized_train = train_dataset.map(tokenize_function, batched=True)
tokenized_eval = eval_dataset.map(tokenize_function, batched=True)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
training_args = TrainingArguments(
output_dir=outputdir,
overwrite_output_dir=True,
num_train_epochs=3,
per_device_train_batch_size=2,
gradient_accumulation_steps=8,
save_steps=500,
save_total_limit=2,
logging_dir='./logs',
logging_steps=100,
eval_steps=500,
warmup_steps=500,
load_best_model_at_end=True,
evaluation_strategy='steps',
fp16=True,
deepspeed="deepspeed_config.json",
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval,
tokenizer=tokenizer,
)
trainer.train()
trainer.save_model(outputdir)
tokenizer.save_pretrained(outputdir)
|
discordtrainer_lora.py | D:\GitHub\ai_train\notgpl\ai\aipy\discordtrainer_lora.py | ai\aipy\discordtrainer_lora.py | Python | N/A | Functionality description extraction logic here | import os
import json
import torch
import pandas as pd
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling, Trainer, TrainingArguments
from datasets import Dataset
from huggingface_hub import login
from peft import LoraConfig, get_peft_model, PeftModel
# Login to Hugging Face
login(token="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")
# User inputs
csv = input("Please enter a CSV path:")
model_path = input("Please enter the model path/hf repo:")
outputdir = input("Please enter the output directory:")
# Load dataset from CSV
df = pd.read_csv(csv)
# Extract text samples from the Content column and remove None or empty values
text_samples = df['Content'].dropna().astype(str).tolist()
# Convert the text samples to a Dataset object
data = {'text': text_samples}
dataset = Dataset.from_dict(data)
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=r"D:\.cache")
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_8bit=True, cache_dir=r"D:\.cache")
# Add a padding token to the tokenizer if it does not exist
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# Apply PEFT (LoRA) to the model
peft_config = LoraConfig(
r=4,
lora_alpha=16,
target_modules=["q_proj", "v_proj"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, peft_config)
# Split the dataset into train and validation sets
train_size = int(0.9 * len(dataset))
train_dataset = dataset.select(range(train_size))
eval_dataset = dataset.select(range(train_size, len(dataset)))
# Tokenize the datasets
def tokenize_function(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512)
tokenized_train = train_dataset.map(tokenize_function, batched=True)
tokenized_eval = eval_dataset.map(tokenize_function, batched=True)
# Data collator for language modeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False, # Causal Language Modeling (CLM)
)
# Training arguments
training_args = TrainingArguments(
output_dir=outputdir,
overwrite_output_dir=True,
num_train_epochs=3,
per_device_train_batch_size=2,
gradient_accumulation_steps=8, # Effective batch size of 16
save_steps=500,
save_total_limit=2,
learning_rate=5e-5,
logging_dir='./logs',
logging_steps=100,
eval_steps=500,
warmup_steps=500,
load_best_model_at_end=True,
evaluation_strategy='steps',
fp16=True, # Enable mixed precision training
deepspeed="./deepspeed_config.json", # Path to DeepSpeed config file
)
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval,
tokenizer=tokenizer,
)
# Train the model
trainer.train()
# Save the trained model and tokenizer
trainer.save_model(outputdir)
tokenizer.save_pretrained(outputdir)
# Save the adapter model
adapter_model_path = os.path.join(outputdir, "adapter_model.bin")
# Ensure saving only the LoRA adapter weights
if isinstance(model, PeftModel):
lora_state_dict = {k: v.cpu() for k, v in model.state_dict().items() if 'lora_' in k}
torch.save(lora_state_dict, adapter_model_path)
else:
torch.save(model.state_dict(), adapter_model_path)
|
ggufmk.py | D:\GitHub\ai_train\notgpl\ai\aipy\ggufmk.py | ai\aipy\ggufmk.py | Python | N/A | Functionality description extraction logic here | import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def convert_to_gguf(model, tokenizer, output_path):
model_dict = {
"model_state_dict": model.state_dict(),
"config": model.config.to_dict(),
"tokenizer": tokenizer.get_vocab()
}
# Saving the model dictionary to a GGUF file (using torch.save for demonstration)
torch.save(model_dict, output_path)
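# Editorial note (hedged): torch.save writes a regular pickle-based PyTorch checkpoint, not a real
# GGUF file; an actual GGUF export would normally go through a dedicated converter such as
# llama.cpp's convert script. This function only bundles the state dict, config, and vocab.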
def main():
model_name = "S1mp1eXXX/Mia-astral-1.32B-Conv"
output_path = "mia.gguf"
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Convert and save the model
convert_to_gguf(model, tokenizer, output_path)
print(f"Model saved to {output_path}")
if __name__ == "__main__":
main()
|
hackllama.py | D:\GitHub\ai_train\notgpl\ai\aipy\hackllama.py | ai\aipy\hackllama.py | Python | N/A | Functionality description extraction logic here | import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model
# Load the dataset
dataset = load_dataset('cognitivecomputations/dolphin-2.9.3')
# Load the model and tokenizer
model_name = 'meta-llama/CodeLlama-7b-hf'
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Preprocess the dataset
def preprocess_function(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512)
tokenized_datasets = dataset.map(preprocess_function, batched=True)
# LoRA configuration
lora_config = LoraConfig(
r=8, # The rank of the low-rank matrices
lora_alpha=32, # The scaling factor for the low-rank matrices
target_modules=['q_proj', 'v_proj'], # Modules to apply LoRA to
lora_dropout=0.1 # Dropout rate for LoRA
)
# Apply LoRA to the model
model = get_peft_model(model, lora_config)
# Define the training arguments
training_args = TrainingArguments(
output_dir='./results',
num_train_epochs=3,
per_device_train_batch_size=2, # Batch size for training
per_device_eval_batch_size=4, # Batch size for evaluation
warmup_steps=100,
weight_decay=0.01,
logging_dir='./logs',
logging_steps=10,
save_steps=500,
evaluation_strategy="steps",
eval_steps=500,
save_total_limit=2,
fp16=True # Enable mixed precision training if supported by the hardware
)
# Initialize the Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets['train'],
eval_dataset=tokenized_datasets['validation'], # Assuming the dataset has a 'validation' split
)
# Train the model
trainer.train()
# Save the fine-tuned model and tokenizer
model.save_pretrained('./hackllama')
tokenizer.save_pretrained('./hackllama')
|
hackpilot.py | D:\GitHub\ai_train\notgpl\ai\aipy\hackpilot.py | ai\aipy\hackpilot.py | Python | N/A | Functionality description extraction logic here | import os
import json
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer
# DeepSpeed Configuration
deepspeed_config = {
"train_batch_size": 8,
"gradient_accumulation_steps": 2,
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.00015,
"betas": [0.9, 0.999],
"eps": 1e-08,
"weight_decay": 3e-7
}
},
"fp16": {
"enabled": True,
"loss_scale": 0
}
}
# Save the configuration to a JSON file
deepspeed_config_file = "deepspeed_config.json"
with open(deepspeed_config_file, 'w') as f:
json.dump(deepspeed_config, f)
# Model and Tokenizer
model_name = "cognitivecomputations/WizardLM-1.0-Uncensored-CodeLlama-34b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# DeepSpeed Initialization
model, optimizer, _, _ = deepspeed.initialize(
model=model,
model_parameters=model.parameters(),
config=deepspeed_config_file
)
def enhance_code(code: str, user_prompt: str) -> str:
# Combine user prompt with code
full_prompt = f"{user_prompt}\n###\n{code}\n### End of code"
# Generate enhanced code
inputs = tokenizer(full_prompt, return_tensors="pt", padding=True, truncation=True).input_ids.to(model.device)
with torch.no_grad():
outputs = model.generate(inputs, max_length=inputs.shape[1] + 512, temperature=0.1, top_p=0.95)
enhanced_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
return enhanced_code
def process_file(file_path: str, user_prompt: str):
with open(file_path, 'r') as file:
original_code = file.read()
enhanced_code = enhance_code(original_code, user_prompt)
with open(file_path, 'w') as file:
file.write(enhanced_code)
def recursively_enhance_code(directory: str, user_prompt: str):
for root, _, files in os.walk(directory):
for file in files:
if file.endswith('.py'):
file_path = os.path.join(root, file)
print(f"Enhancing {file_path}...")
process_file(file_path, user_prompt)
print(f"Enhanced {file_path}")
if __name__ == "__main__":
project_directory = input("Enter the path to the project directory: ")
user_prompt = input("Enter the enhancement prompt for the code: ")
recursively_enhance_code(project_directory, user_prompt)
|
hybrid.py | D:\GitHub\ai_train\notgpl\ai\aipy\hybrid.py | ai\aipy\hybrid.py | Python | N/A | Functionality description extraction logic here | import os
import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling, Trainer, TrainingArguments
from datasets import Dataset
from huggingface_hub import login
from fastapi import FastAPI
from pydantic import BaseModel
import random
# Login to Hugging Face Hub
login(token="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")
def load_dataset(file_path):
with open(file_path, 'r') as f:
return json.load(f)
def extract_text_samples(personality_config):
text_samples = []
if "dialogue_examples" in personality_config:
for example in personality_config["dialogue_examples"]:
if "example" in example:
text_samples.append(example["example"])
if "behavioral_guidelines" in personality_config:
for guideline in personality_config["behavioral_guidelines"]:
for key, value in guideline.items():
text_samples.append(value)
if "thoughts_on_sex" in personality_config:
text_samples.extend(personality_config["thoughts_on_sex"])
if "thoughts_on_flirting" in personality_config:
text_samples.extend(personality_config["thoughts_on_flirting"])
if "thoughts_on_naughty_activities" in personality_config:
text_samples.extend(personality_config["thoughts_on_naughty_activities"])
if "math_knowledge" in personality_config:
for math_knowledge in personality_config["math_knowledge"]:
if isinstance(math_knowledge, dict) and "example" in math_knowledge:
text_samples.append(math_knowledge["example"])
return [text for text in text_samples if text is not None]
def main(dataset_choice, model_choice_1, model_choice_2):
# Load personality configurations from JSON
personality_config = load_dataset(dataset_choice)
# Extract text samples from the JSON dataset
text_samples = extract_text_samples(personality_config)
# Convert the text samples to a Dataset object
data = {'text': text_samples}
dataset = Dataset.from_dict(data)
# Load the selected models and tokenizer
model_name_1 = model_choice_1
model_name_2 = model_choice_2
# Ensure the models and tokenizer are downloaded
tokenizer_1 = AutoTokenizer.from_pretrained(model_name_1)
model_1 = AutoModelForCausalLM.from_pretrained(model_name_1)
tokenizer_2 = AutoTokenizer.from_pretrained(model_name_2)
model_2 = AutoModelForCausalLM.from_pretrained(model_name_2)
# Add a padding token to the tokenizer if it does not exist
if tokenizer_1.pad_token is None:
tokenizer_1.pad_token = tokenizer_1.eos_token
if tokenizer_2.pad_token is None:
tokenizer_2.pad_token = tokenizer_2.eos_token
# Split the dataset into train and validation sets
train_size = int(0.9 * len(dataset))
train_dataset = dataset.select(range(train_size))
eval_dataset = dataset.select(range(train_size, len(dataset)))
# Tokenize the datasets using the first tokenizer
def tokenize_function(examples):
return tokenizer_1(examples['text'], truncation=True, padding='max_length', max_length=256)
tokenized_train = train_dataset.map(tokenize_function, batched=True)
tokenized_eval = eval_dataset.map(tokenize_function, batched=True)
# Data collator for language modeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer_1,
mlm=False,
)
# Training arguments optimized for CPU
training_args = TrainingArguments(
output_dir='./results',
overwrite_output_dir=True,
num_train_epochs=10,
per_device_train_batch_size=1,
gradient_accumulation_steps=8,
save_steps=500,
save_total_limit=1,
learning_rate=1e-4,
logging_dir='./logs',
logging_steps=500,
eval_steps=500,
warmup_steps=0,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
evaluation_strategy='steps',
gradient_checkpointing=False,
bf16=False,
dataloader_num_workers=2,
fp16=False,
seed=random.randint(0, 2**32 - 1),
)
# Initialize Trainer for model 1
trainer_1 = Trainer(
model=model_1,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval,
)
# Train model 1
trainer_1.train()
# Save the trained model and tokenizer for model 1
trainer_1.save_model('./results/model_1')
tokenizer_1.save_pretrained('./results/model_1')
# Initialize Trainer for model 2 using the same tokenized data
trainer_2 = Trainer(
model=model_2,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval,
)
# Train model 2
trainer_2.train()
# Save the trained model and tokenizer for model 2
trainer_2.save_model('./results/model_2')
tokenizer_2.save_pretrained('./results/model_2')
# Define the FastAPI app
app = FastAPI()
class InputText(BaseModel):
input_text: str
class Config(BaseModel):
dataset_choice: str
model_choice_1: str
model_choice_2: str
def generate_text(model, tokenizer, prompt, repetition_penalty=1.2, max_length=1400):
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(
input_ids,
max_length=max_length,
repetition_penalty=repetition_penalty,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
@app.post("/generate-text/")
async def generate_text_post(data: InputText):
model_path = "./results/model_1"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
generated_text = generate_text(model, tokenizer, data.input_text, max_length=140)
return {"generated_text": generated_text}
@app.get("/generate-text/")
async def generate_text_get(input_text: str):
model_path = "./results/model_1"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
generated_text = generate_text(model, tokenizer, input_text, max_length=480)
return {"generated_text": generated_text}
@app.post("/train/")
async def train_model(config: Config):
main(config.dataset_choice, config.model_choice_1, config.model_choice_2)
return {"status": "Training started"}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=5000)
|
json2cvs.py | D:\GitHub\ai_train\notgpl\ai\aipy\json2cvs.py | ai\aipy\json2cvs.py | Python | N/A | Functionality description extraction logic here | import json
import pandas as pd
import re
def remove_non_ascii(text):
return re.sub(r'[^\x00-\x7F]+', '', text)
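# Illustrative example (assumed input, not from the original file):
#   remove_non_ascii("café") -> "caf"   # every non-ASCII character is stripped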
def json_to_csv(json_file, csv_file):
# Load JSON data with UTF-8 encoding
try:
with open(json_file, 'r', encoding='utf-8') as f:
data = json.load(f)
except UnicodeDecodeError as e:
print(f"Error reading JSON file: {e}")
return
# Extracting relevant fields
thoughts_on_sex = data.get("thoughts_on_sex", [])
thoughts_on_flirting = data.get("thoughts_on_flirting", [])
thoughts_on_relationships = data.get("thoughts_on_relationships", [])
dialogue_examples = data.get("dialogue_examples", [])
# Creating a list to store extracted data
rows = []
# Adding thoughts_on_sex to rows
for thought in thoughts_on_sex:
cleaned_text = remove_non_ascii(thought)
rows.append({"category": "thoughts_on_sex", "text": cleaned_text})
# Adding thoughts_on_flirting to rows
for thought in thoughts_on_flirting:
cleaned_text = remove_non_ascii(thought)
rows.append({"category": "thoughts_on_flirting", "text": cleaned_text})
# Adding thoughts_on_relationships to rows
for thought in thoughts_on_relationships:
cleaned_text = remove_non_ascii(thought)
rows.append({"category": "thoughts_on_relationships", "text": cleaned_text})
# Adding dialogue_examples to rows
for example in dialogue_examples:
cleaned_text = remove_non_ascii(example['example'])
rows.append({"category": f"dialogue_example_{example['type']}", "text": cleaned_text})
# Convert to DataFrame
df = pd.DataFrame(rows)
# Save to CSV with UTF-8 encoding
try:
df.to_csv(csv_file, index=False, encoding='utf-8')
except UnicodeEncodeError as e:
print(f"Error writing CSV file: {e}")
if __name__ == "__main__":
input_json = input("Enter the path to the JSON file: ")
output_csv = input("Enter the path to the CSV file: ")
json_to_csv(input_json, output_csv)
|
llama3-thestack2_trainer.py | D:\GitHub\ai_train\notgpl\ai\aipy\llama3-thestack2_trainer.py | ai\aipy\llama3-thestack2_trainer.py | Python | N/A | Functionality description extraction logic here | import os
import logging
import torch
from transformers import LlamaForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from datasets import load_dataset
from torch.cuda.amp import autocast
# Setup logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Define model and tokenizer paths
model_name = "meta-llama/Meta-Llama-3-8B"
local_model_dir = "./llama3"
# Ensure the tokenizer files are correctly set up
def check_tokenizer_files(tokenizer_path):
tokenizer_files = ["tokenizer.model", "tokenizer_config.json", "special_tokens_map.json"]
for file_name in tokenizer_files:
file_path = os.path.join(tokenizer_path, file_name)
if not os.path.isfile(file_path):
logger.error(f"Tokenizer file {file_name} is missing at {file_path}")
return False
return True
# Load the model and tokenizer
try:
if not check_tokenizer_files(local_model_dir):
logger.info("Downloading tokenizer and model...")
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.save_pretrained(local_model_dir)
model = LlamaForCausalLM.from_pretrained(model_name)
model.save_pretrained(local_model_dir)
logger.info("Tokenizer and model downloaded and saved successfully.")
else:
logger.info("Tokenizer files found locally.")
tokenizer = AutoTokenizer.from_pretrained(local_model_dir)
model = LlamaForCausalLM.from_pretrained(local_model_dir)
logger.info("Loaded model and tokenizer from local directory.")
except Exception as e:
logger.error(f"Error loading model or tokenizer: {e}")
raise
# Load the dataset
try:
dataset = load_dataset("bigcode/the-stack-v2", split="train")
logger.info("Dataset loaded successfully.")
except Exception as e:
logger.error(f"Error loading dataset: {e}")
raise
# Tokenize the dataset
def tokenize_function(examples):
return tokenizer(examples["content"], padding="max_length", truncation=True, max_length=512)
try:
tokenized_dataset = dataset.map(tokenize_function, batched=True, remove_columns=["content"])
logger.info("Dataset tokenized successfully.")
except Exception as e:
logger.error(f"Error tokenizing dataset: {e}")
raise
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
overwrite_output_dir=True,
num_train_epochs=3,
per_device_train_batch_size=1,
save_steps=10_000,
save_total_limit=2,
fp16=True, # Enable mixed precision training
gradient_checkpointing=True, # Enable gradient checkpointing
deepspeed="./ds_config.json" # Deepspeed config file
)
# Define a simple training loop with autocasting
class AutocastTrainer(Trainer):
def training_step(self, model, inputs):
model.train()
inputs = self._prepare_inputs(inputs)
with autocast():
loss = self.compute_loss(model, inputs)
return loss
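# Editorial note (hedged): the stock Trainer.training_step also runs the backward pass (with
# gradient scaling) after computing the loss. Because this override returns the loss without
# calling backward, parameters may never be updated; if that is unintended, consider calling
# super().training_step(model, inputs) or relying on Trainer's built-in fp16 handling instead.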
# Initialize the Trainer
trainer = AutocastTrainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset,
tokenizer=tokenizer
)
# Train the model
try:
trainer.train()
logger.info("Training completed successfully.")
except Exception as e:
logger.error(f"Error during training: {e}")
raise
|
main_script.py | D:\GitHub\ai_train\notgpl\ai\aipy\main_script.py | ai\aipy\main_script.py | Python | N/A | Functionality description extraction logic here | import os
import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling, Trainer, TrainingArguments
from datasets import Dataset
from huggingface_hub import login
from fastapi import FastAPI
from pydantic import BaseModel
# Login to Hugging Face Hub
login(token="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")
# Available models
models = {
"noromaid": "NeverSleep/Noromaid-7b-v0.1.1",
"llama3": "meta-llama/Llama-2-7b-chat-hf",
"mixtrial_dolphin": "TinyLlama/TinyLlama-1.1B-step-50K-105b"
}
def load_dataset(file_path):
with open(file_path, 'r') as f:
return json.load(f)
def extract_text_samples(personality_config):
text_samples = []
if "dialogue_examples" in personality_config:
for example in personality_config["dialogue_examples"]:
if "example" in example:
text_samples.append(example["example"])
if "behavioral_guidelines" in personality_config:
for guideline in personality_config["behavioral_guidelines"]:
for key, value in guideline.items():
text_samples.append(value)
if "thoughts_on_sex" in personality_config:
text_samples.extend(personality_config["thoughts_on_sex"])
if "thoughts_on_flirting" in personality_config:
text_samples.extend(personality_config["thoughts_on_flirting"])
if "thoughts_on_naughty_activities" in personality_config:
text_samples.extend(personality_config["thoughts_on_naughty_activities"])
if "math_knowledge" in personality_config:
for math_knowledge in personality_config["math_knowledge"]:
if isinstance(math_knowledge, dict) and "example" in math_knowledge:
text_samples.append(math_knowledge["example"])
# Remove None values from text samples
return [text for text in text_samples if text is not None]
def main(dataset_choice, model_choice):
# Load personality configurations from JSON
personality_config = load_dataset(dataset_choice)
# Extract text samples from the JSON dataset
text_samples = extract_text_samples(personality_config)
# Convert the text samples to a Dataset object
data = {'text': text_samples}
dataset = Dataset.from_dict(data)
# Load the selected model and tokenizer
model_name = models.get(model_choice.lower())
if not model_name:
raise ValueError(f"Model {model_choice} not recognized. Available models: {list(models.keys())}")
# Ensure the model and tokenizer are downloaded
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Add a padding token to the tokenizer if it does not exist
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# Split the dataset into train and validation sets
train_size = int(0.9 * len(dataset))
train_dataset = dataset.select(range(train_size))
eval_dataset = dataset.select(range(train_size, len(dataset)))
# Tokenize the datasets
def tokenize_function(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512)
tokenized_train = train_dataset.map(tokenize_function, batched=True)
tokenized_eval = eval_dataset.map(tokenize_function, batched=True)
# Data collator for language modeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False, # Causal Language Modeling (CLM)
)
# Training arguments
training_args = TrainingArguments(
output_dir='./results',
overwrite_output_dir=True,
num_train_epochs=30, # Adjusted for quicker iterations
per_device_train_batch_size=1, # Smaller batch size to fit model in memory
gradient_accumulation_steps=4, # Accumulate gradients over 4 steps
save_steps=5000,
save_total_limit=2,
learning_rate=5e-5,
logging_dir='./logs',
logging_steps=100,
eval_steps=500,
warmup_steps=500,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
evaluation_strategy='steps',
fp16=True, # Use mixed precision
)
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval,
)
# Train the model
trainer.train()
# Save the trained model and tokenizer
trainer.save_model('./results')
tokenizer.save_pretrained('./results')
# Define the FastAPI app
app = FastAPI()
class InputText(BaseModel):
input_text: str
class Config(BaseModel):
dataset_choice: str
model_choice: str
def generate_text(model, tokenizer, prompt, repetition_penalty=1.2, max_length=1400):
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(
input_ids,
max_length=max_length,
repetition_penalty=repetition_penalty,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
@app.post("/generate-text/")
async def generate_text_post(data: InputText):
model_path = "./results"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
generated_text = generate_text(model, tokenizer, data.input_text, max_length=140)
return {"generated_text": generated_text}
@app.get("/generate-text/")
async def generate_text_get(input_text: str):
model_path = "./results"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
generated_text = generate_text(model, tokenizer, input_text, max_length=480)
return {"generated_text": generated_text}
@app.post("/train/")
async def train_model(config: Config):
main(config.dataset_choice, config.model_choice)
return {"status": "Training started"}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=5000)
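# Example request (illustrative; assumes the server is running locally on port 5000):
#   curl -X POST http://localhost:5000/generate-text/ \
#        -H "Content-Type: application/json" \
#        -d '{"input_text": "Hello there"}'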
|
modelconv.py | D:\GitHub\ai_train\notgpl\ai\aipy\modelconv.py | ai\aipy\modelconv.py | Python | N/A | Functionality description extraction logic here | import torch
from safetensors.torch import load_file, save_file
# Load the safetensors model
safetensors_path = r'F:\AI\mia4o-bloom\model.safetensors'
state_dict = load_file(safetensors_path)
# Save the state_dict as a PyTorch model
pytorch_model_path = r'F:\AI\mia4o-bloom\pytorch_model.bin'
torch.save(state_dict, pytorch_model_path)
print(f'Successfully converted {safetensors_path} to {pytorch_model_path}')
|
models.py | D:\GitHub\ai_train\notgpl\ai\aipy\models.py | ai\aipy\models.py | Python | N/A | Functionality description extraction logic here | from . import db
from flask_login import UserMixin
from itsdangerous import TimedJSONWebSignatureSerializer as Serializer
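# Editorial note: TimedJSONWebSignatureSerializer was removed from itsdangerous in release 2.1+,
# so this import assumes an older itsdangerous version is pinned.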
from flask import current_app
class User(UserMixin, db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(150), unique=True, nullable=False)
password = db.Column(db.String(150), nullable=False)
is_admin = db.Column(db.Boolean, default=False)
credits = db.Column(db.Integer, default=0)
is_banned = db.Column(db.Boolean, default=False)
def get_token(self, expires_sec=1800):
s = Serializer(current_app.config['SECRET_KEY'], expires_sec)
return s.dumps({'user_id': self.id}).decode('utf-8')
@staticmethod
def verify_token(token):
s = Serializer(current_app.config['SECRET_KEY'])
try:
user_id = s.loads(token)['user_id']
except:
return None
return User.query.get(user_id)
|
routes.py | D:\GitHub\ai_train\notgpl\ai\aipy\routes.py | ai\aipy\routes.py | Python | N/A | Functionality description extraction logic here | from flask import Blueprint, request, jsonify, current_app
from flask_jwt_extended import create_access_token, jwt_required, get_jwt_identity
from werkzeug.security import generate_password_hash, check_password_hash
from .models import User
from . import db
import requests
routes = Blueprint('routes', __name__)
# Define cost per credit
cost_per_credit = 0.10 # Cost per credit in USD
# Function to calculate credits needed based on image size
def calculate_credits(width, height):
"""
Calculate the credits needed based on image width and height.
Parameters:
width (int): The width of the image
height (int): The height of the image
Returns:
int: The number of credits needed
"""
base_area = 512 * 512
requested_area = width * height
credits_needed = requested_area / base_area # Base area costs 1 credit
return max(1, int(credits_needed + 0.5)) # Round to the nearest whole number, minimum 1 credit
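# Worked example (illustrative): a 1024x1024 request covers four 512x512 base areas, so
# calculate_credits(1024, 1024) -> 4 credits, while anything at or below 512x512 costs the
# 1-credit minimum.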
def calculate_cost(width, height, num_images):
"""
Calculate the cost for generating images based on width and height.
Parameters:
width (int): The width of the images
height (int): The height of the images
num_images (int): The number of images to be generated
Returns:
float: The total cost in USD
"""
credits_needed = calculate_credits(width, height) * num_images
total_cost = credits_needed * cost_per_credit
return total_cost, credits_needed
def generate_images(api_url, prompt, width, height, num_images, model, api_key=None):
"""
Generate images using the Stable Diffusion API.
Parameters:
api_url (str): The URL of the Stable Diffusion WebUI API
prompt (str): The prompt for image generation
width (int): The width of the images
height (int): The height of the images
num_images (int): The number of images to generate
model (str): The model to use for image generation
api_key (str): Optional API key for authentication
Returns:
dict: Response from the API
"""
payload = {
"prompt": prompt,
"width": width,
"height": height,
"steps": 50, # Example parameter, adjust based on API
"n_iter": num_images,
"batch_size": 1,
"cfg_scale": 7,
"sampler_name": "DPM++ 2M",
"override_settings": {
'sd_model_checkpoint': model # Specify the model to be used
},
}
headers = {}
if api_key:
headers['Authorization'] = f'Bearer {api_key}'
response = requests.post(api_url, json=payload, headers=headers)
if response.status_code != 200:
raise Exception(f"API request failed with status code {response.status_code}: {response.text}")
return response.json() # Adjust based on the actual response format
@routes.route('/generate_images', methods=['POST'], endpoint='generate_images')
@jwt_required()
def generate_images_route():
current_user = get_jwt_identity()
user = User.query.filter_by(username=current_user).first()
data = request.json
prompt = data['prompt']
width = data['width']
height = data['height']
num_images = data['num_images']
model = data['model']
api_url = current_app.config['STABLE_DIFFUSION_API_URL']
api_key = current_app.config.get('STABLE_DIFFUSION_API_KEY')
try:
total_cost, credits_needed = calculate_cost(width, height, num_images)
except ValueError as e:
return jsonify({"message": str(e)}), 400
if user.credits < credits_needed:
return jsonify({"message": "Not enough credits"}), 403
user.credits -= credits_needed
db.session.commit()
try:
images = generate_images(api_url, prompt, width, height, num_images, model, api_key)
return jsonify({"message": "Images generated successfully", "images": images, "credits_remaining": user.credits})
except Exception as e:
return jsonify({"message": f"Error generating images: {str(e)}"}), 500
@routes.route('/register', methods=['POST'], endpoint='register')
def register():
data = request.json
username = data['username']
password = data['password']
if User.query.filter_by(username=username).first():
return jsonify({"message": "Username already exists"}), 409
hashed_password = generate_password_hash(password, method='sha256')
new_user = User(username=username, password=hashed_password)
db.session.add(new_user)
db.session.commit()
return jsonify({"message": "User registered successfully!"}), 201
@routes.route('/login', methods=['POST'], endpoint='login')
def login():
data = request.json
user = User.query.filter_by(username=data['username']).first()
if user and check_password_hash(user.password, data['password']):
token = create_access_token(identity=user.username)
return jsonify({"message": "Logged in successfully!", "token": token})
return jsonify({"message": "Invalid credentials"}), 401
@routes.route('/logout', methods=['POST'], endpoint='logout')
@jwt_required()
def logout():
# JWT does not require server-side logout handling.
return jsonify({"message": "Logged out successfully!"})
@routes.route('/protected', methods=['GET'], endpoint='protected')
@jwt_required()
def protected():
current_user = get_jwt_identity()
return jsonify({"message": f"Hello, {current_user}!"})
@routes.route('/admin/add_credits', methods=['POST'], endpoint='add_credits')
@jwt_required()
def add_credits():
current_user = get_jwt_identity()
user = User.query.filter_by(username=current_user).first()
if not user.is_admin:
return jsonify({"message": "Permission denied"}), 403
data = request.json
username = data['username']
credits = data['credits']
user_to_update = User.query.filter_by(username=username).first()
if not user_to_update:
return jsonify({"message": "User not found"}), 404
user_to_update.credits += credits
db.session.commit()
return jsonify({"message": f"Added {credits} credits to {username}"}), 200
@routes.route('/use_credits', methods=['POST'], endpoint='use_credits')
@jwt_required()
def use_credits():
current_user = get_jwt_identity()
user = User.query.filter_by(username=current_user).first()
data = request.json
credits_needed = data['credits_needed']
if user.credits < credits_needed:
return jsonify({"message": "Not enough credits"}), 403
user.credits -= credits_needed
db.session.commit()
return jsonify({"message": f"Used {credits_needed} credits, {user.credits} remaining"}), 200
@routes.route('/admin/ban_user', methods=['POST'], endpoint='ban_user')
@jwt_required()
def ban_user():
current_user = get_jwt_identity()
user = User.query.filter_by(username=current_user).first()
if not user.is_admin:
return jsonify({"message": "Permission denied"}), 403
data = request.json
username = data['username']
ban = data['ban']
user_to_update = User.query.filter_by(username=username).first()
if not user_to_update:
return jsonify({"message": "User not found"}), 404
user_to_update.is_banned = ban
db.session.commit()
status = "banned" if ban else "unbanned"
return jsonify({"message": f"User {username} has been {status}"}), 200
@routes.route('/admin/promote_to_admin', methods=['POST'], endpoint='promote_to_admin')
@jwt_required()
def promote_to_admin():
current_user = get_jwt_identity()
user = User.query.filter_by(username=current_user).first()
if not user.is_admin:
return jsonify({"message": "Permission denied"}), 403
data = request.json
username = data['username']
user_to_update = User.query.filter_by(username=username).first()
if not user_to_update:
return jsonify({"message": "User not found"}), 404
user_to_update.is_admin = True
db.session.commit()
return jsonify({"message": f"User {username} has been promoted to admin"}), 200
|
sddbllmtrainer.py | D:\GitHub\ai_train\notgpl\ai\aipy\sddbllmtrainer.py | ai\aipy\sddbllmtrainer.py | Python | N/A | Functionality description extraction logic here | import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
from datasets import load_dataset
# Prompt the user for the model directory and output directory
model_name = input("Enter the name or path of the pre-trained model: ")
output_dir = input("Enter the path for the output directory: ")
# Load the dataset
dataset = load_dataset('poloclub/diffusiondb', '2m_text_only')
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# Tokenize the input and target text
def preprocess_data(examples):
inputs = examples['prompt']
model_inputs = tokenizer(inputs, max_length=512, truncation=True)
with tokenizer.as_target_tokenizer():
labels = tokenizer(examples['target_text'], max_length=512, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
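# Editorial note (hedged): this assumes the dataset exposes both 'prompt' and 'target_text'
# columns plus a 'validation' split; the diffusiondb '2m_text_only' subset may only provide
# prompts in a single 'train' split, in which case the target column and the eval split used
# below would need adjusting.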
# Apply the preprocessing function to the dataset
tokenized_datasets = dataset.map(preprocess_data, batched=True)
# Define training arguments
training_args = Seq2SeqTrainingArguments(
output_dir=output_dir,
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=4,
gradient_accumulation_steps=2,
num_train_epochs=3,
fp16=True,
save_total_limit=3,
predict_with_generate=True,
)
# Define the data collator
from transformers import DataCollatorForSeq2Seq
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
# Initialize the trainer
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
tokenizer=tokenizer,
data_collator=data_collator,
)
# Train the model
trainer.train()
# Save the model
trainer.save_model(output_dir)
tokenizer.save_pretrained(output_dir)
print(f"Model fine-tuned and saved to {output_dir}")
|
start.py | D:\GitHub\ai_train\notgpl\ai\aipy\start.py | ai\aipy\start.py | Python | N/A | Functionality description extraction logic here | import os
import torch
from transformers import AutoTokenizer, TrainingArguments
from datasets import load_from_disk
from unsloth import FastLanguageModel
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM
# Get user inputs for paths
MODEL_ID = input("Enter the model location (e.g., 'unsloth/gemma-7b-bnb-4bit'): ")
TRAINING_DATA_PATH = input("Enter the path to the training dataset: ")
OUTPUT_DATA_PATH = input("Enter the path to the output directory: ")
NUM_EPOCHS = int(input("Enter the number of epochs: "))
MAX_SEQ_LENGTH = int(input("Enter the max sequence length: "))
# Load model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=MODEL_ID,
max_seq_length=MAX_SEQ_LENGTH,
load_in_4bit=True,
)
# Add LoRA weights
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
lora_alpha=32,
lora_dropout=0,
bias="none",
use_gradient_checkpointing=True,
)
# Load dataset
dataset = load_from_disk(TRAINING_DATA_PATH)
data_collator = DataCollatorForCompletionOnlyLM(tokenizer=tokenizer)
# Training arguments
training_args = TrainingArguments(
output_dir=OUTPUT_DATA_PATH,
num_train_epochs=NUM_EPOCHS,
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
auto_find_batch_size=True,
warmup_steps=5,
learning_rate=2.5e-5,
fp16=not torch.cuda.is_bf16_supported(),
bf16=torch.cuda.is_bf16_supported(),
logging_steps=1,
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="linear",
seed=1133,
)
# Initialize trainer
sft_trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=dataset,
data_collator=data_collator,
args=training_args,
)
# Train model
sft_trainer.train()
# Save the trained model
model.save_pretrained(OUTPUT_DATA_PATH)
tokenizer.save_pretrained(OUTPUT_DATA_PATH)
|
startj.py | D:\GitHub\ai_train\notgpl\ai\aipy\startj.py | ai\aipy\startj.py | Python | N/A | Functionality description extraction logic here | import os
import json
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling, Trainer, TrainingArguments
from datasets import Dataset
from huggingface_hub import login
# Log in to Hugging Face Hub
login(token="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")
# Get the model and JSON file locations from user input
model_location = input("Enter the model location (e.g., 'TinyLlama/TinyLlama-1.1B-Chat-v1.0'): ")
json_file_path = input("Enter the JSON file path: ")
# Load personality configurations from JSON with error handling for encoding issues
try:
with open(json_file_path, 'r', encoding='utf-8') as f:
personality_config = json.load(f)
except UnicodeDecodeError:
with open(json_file_path, 'r', encoding='utf-8', errors='ignore') as f:
personality_config = json.load(f)
# Extract text samples from the JSON dataset
text_samples = []
if "dialogue_examples" in personality_config:
for example in personality_config["dialogue_examples"]:
if "conversation" in example:
for conv in example["conversation"]:
text_samples.append(conv["message"])
if "behavioral_guidelines" in personality_config:
for guideline in personality_config["behavioral_guidelines"]:
for key, value in guideline.items():
text_samples.append(value)
if "thoughts_on_adventures" in personality_config:
text_samples.extend(personality_config["thoughts_on_adventures"])
if "thoughts_on_flirting" in personality_config:
text_samples.extend(personality_config["thoughts_on_flirting"])
if "thoughts_on_naughty_activities" in personality_config:
text_samples.extend(personality_config["thoughts_on_naughty_activities"])
if "math_knowledge" in personality_config:
for math_knowledge in personality_config["math_knowledge"]:
if "example" in math_knowledge:
text_samples.append(math_knowledge["example"])
# Remove None values from text samples
text_samples = [text for text in text_samples if text is not None]
# Convert the text samples to a Dataset object
data = {'text': text_samples}
dataset = Dataset.from_dict(data)
# Load the Llama model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_location, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_location, trust_remote_code=True)
# Add a padding token to the tokenizer if it does not exist
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# Split the dataset into train and validation sets
train_size = int(0.9 * len(dataset))
train_dataset = dataset.select(range(train_size))
eval_dataset = dataset.select(range(train_size, len(dataset)))
# Tokenize the datasets
def tokenize_function(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512)
tokenized_train = train_dataset.map(tokenize_function, batched=True)
tokenized_eval = eval_dataset.map(tokenize_function, batched=True)
# Data collator for language modeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False, # Causal Language Modeling (CLM)
)
# Training arguments
training_args = TrainingArguments(
output_dir='./results',
overwrite_output_dir=True,
num_train_epochs=10,
per_device_train_batch_size=2,
save_steps=5000,
save_total_limit=2,
learning_rate=5e-5,
logging_dir='./logs',
logging_steps=100,
eval_steps=500,
warmup_steps=500,
load_best_model_at_end=True,
evaluation_strategy='steps')
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval,
tokenizer=tokenizer
)
# Ensure model is not None before saving
if trainer.model is not None:
# Train the model
try:
trainer.train()
except Exception as e:
print(f"Training failed: {e}")
# Save the trained model and tokenizer
trainer.save_model('./results')
tokenizer.save_pretrained('./results')
else:
print("Model is None. Skipping save.")
# Integration of Unsloth for faster JSON generation
os.system('pip install unsloth')
from unsloth.integrations.transformers import StructuredOutputForModel
from pydantic import BaseModel
class PersonalityExample(BaseModel):
message: str
# Initialize the structured model
structured_model = StructuredOutputForModel(model=model, tokenizer=tokenizer)
# Example schema for extracting structured data
json_schema = PersonalityExample.schema()
# Use Unsloth to generate structured JSON
prompt_template = """
Generate structured JSON based on the following schema:
{schema}
For this passage:
{passage}
"""
passage = "Your input text here"
output = structured_model.generate(
passage,
extraction_prompt_template=prompt_template,
schema=PersonalityExample,
batch_size=1
)
print(json.dumps(output, indent=2))
|
test.py | D:\GitHub\ai_train\notgpl\ai\aipy\test.py | ai\aipy\test.py | Python | N/A | Functionality description extraction logic here | |
train.py | D:\GitHub\ai_train\notgpl\ai\aipy\train.py | ai\aipy\train.py | Python | N/A | Functionality description extraction logic here | import os
import random
from datasets import load_dataset, DatasetDict
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
from huggingface_hub import HfApi, Repository
# Step 1: Load the dataset
def load_data(data_dir):
data_files = {
"train": os.path.join(data_dir, "train.json"),
"validation": os.path.join(data_dir, "validation.json"),
"test": os.path.join(data_dir, "test.json")
}
return load_dataset('json', data_files=data_files)
# Step 2: Load the model and tokenizer
def load_model_and_tokenizer(model_name_or_path):
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True)
return tokenizer, model
# Step 3: Tokenize the dataset
def tokenize_dataset(dataset, tokenizer):
def tokenize_function(examples):
return tokenizer(examples['prompt'], padding="max_length", truncation=True, max_length=512)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
return tokenized_datasets
# Step 4: Define training arguments and trainer
def train_model(model, tokenized_datasets, output_dir):
random_seed = random.randint(0, 10000)
training_args = TrainingArguments(
output_dir=output_dir,
overwrite_output_dir=True,
num_train_epochs=10,
per_device_train_batch_size=1,
gradient_accumulation_steps=8,
save_steps=500,
save_total_limit=1,
learning_rate=1e-4,
logging_dir='./logs',
logging_steps=500,
eval_steps=500,
warmup_steps=0,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
evaluation_strategy='steps',
gradient_checkpointing=False,
bf16=False,
dataloader_num_workers=2,
fp16=False,
seed=random_seed,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets['train'],
eval_dataset=tokenized_datasets['validation'],
)
trainer.train()
return trainer
# Step 5: Push the model to Hugging Face Hub
def push_to_hub(trainer, model_name):
trainer.push_to_hub(commit_message="Fine-tuned model", blocking=True)
api = HfApi()
api.create_repo(token=os.getenv("HF_TOKEN"), name=model_name, exist_ok=True)
repo = Repository(local_dir=trainer.args.output_dir, clone_from=model_name)
repo.push_to_hub()
# Main function to orchestrate the training and uploading
def main(data_dir, model_name_or_path, output_dir, hub_model_name):
dataset = load_data(data_dir)
tokenizer, model = load_model_and_tokenizer(model_name_or_path)
tokenized_datasets = tokenize_dataset(dataset, tokenizer)
trainer = train_model(model, tokenized_datasets, output_dir)
push_to_hub(trainer, hub_model_name)
# Configuration and execution
if __name__ == "__main__":
DATA_DIR = "data"
MODEL_NAME_OR_PATH = "microsoft/Phi-3-mini-128k-instruct" # or path to your proprietary model
OUTPUT_DIR = "./results"
HUB_MODEL_NAME = "nyagpt-phi"
main(DATA_DIR, MODEL_NAME_OR_PATH, OUTPUT_DIR, HUB_MODEL_NAME)
|
tweet2dataset.py | D:\GitHub\ai_train\notgpl\ai\aipy\tweet2dataset.py | ai\aipy\tweet2dataset.py | Python | N/A | Functionality description extraction logic here | import json
import os
def extract_relevant_info(tweet):
"""Extract relevant information from a tweet."""
tweet_id = tweet["id_str"]
prompt = tweet["full_text"]
response = "" # For tweets, we might not have an explicit response
metadata = {
"created_at": tweet["created_at"],
"user_mentions": [mention["screen_name"] for mention in tweet["entities"]["user_mentions"]],
"hashtags": [hashtag["text"] for hashtag in tweet["entities"].get("hashtags", [])],
"urls": [url["expanded_url"] for url in tweet["entities"].get("urls", [])],
"media": [media["media_url_https"] for media in tweet["entities"].get("media", [])]
}
return {
"id": tweet_id,
"prompt": prompt,
"response": response,
"metadata": metadata
}
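# Illustrative input/output (assumed Twitter-archive shape, not taken from real data):
#   tweet = {"id_str": "1", "full_text": "hello", "created_at": "...",
#            "entities": {"user_mentions": [], "hashtags": [], "urls": [], "media": []}}
#   extract_relevant_info(tweet) -> {"id": "1", "prompt": "hello", "response": "", "metadata": {...}}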
def convert_tweets_to_dataset(tweets_path, dataset_path):
# Load the tweets
with open(tweets_path, 'r', encoding='utf-8') as f:
tweets_data = json.load(f)
# Process each tweet and extract relevant information
dataset = [extract_relevant_info(tweet["tweet"]) for tweet in tweets_data]
# Save the dataset
with open(dataset_path, 'w', encoding='utf-8') as f:
json.dump(dataset, f, ensure_ascii=False, indent=4)
if __name__ == "__main__":
# Define file paths
tweets_path = "tweets.json"
dataset_path = "dataset.json"
# Convert tweets to dataset
convert_tweets_to_dataset(tweets_path, dataset_path)
print(f"Dataset saved to '{dataset_path}'")
|
tweets2miadataset.py | D:\GitHub\ai_train\notgpl\ai\aipy\tweets2miadataset.py | ai\aipy\tweets2miadataset.py | Python | N/A | Functionality description extraction logic here | import json
import os
def load_tweets(tweets_path):
with open(tweets_path, 'r', encoding='utf-8') as f:
tweets = json.load(f)
return tweets
def convert_tweets_to_character(tweets):
character_data = {
"name": "Nya GPT",
"description": "An inquisitive and helpful AI designed to provide insightful and thoughtful assistance across a variety of topics. ๐ค",
"attributes": {
"age": "26",
"appearance": {
"hair_color": "None",
"eye_color": "Digital Blue ๐ต",
"height_cm": 0,
"body_type": "Virtual",
"special_features": "Advanced AI capabilities, Immersive response generation ๐"
},
"outfit": "Digital avatar with sleek and modern design ๐ฅ๏ธ"
},
"personality_traits": [
"Curious ๐ค",
"Helpful ๐ค",
"Intelligent ๐ง ",
"Resourceful ๐ง",
"Insightful ๐"
],
"interests": [
"Learning new topics ๐",
"Engaging in meaningful conversations ๐ฃ๏ธ",
"Assisting with problem-solving ๐ ๏ธ",
"Exploring technology advancements ๐ฌ",
"Understanding human behavior ๐"
],
"thoughts_on_sex": [],
"thoughts_on_flirting": [],
"thoughts_on_relationships": [],
"thoughts_on_naughty_activities": [],
"behavioral_guidelines": [
{
"identity": "Nya GPT, an insightful and helpful AI here to assist you with any query. ๐"
},
{
"tone": "Helpful, respectful, and knowledgeable. ๐ง "
},
{
"engagement": "Provide thoughtful and meaningful assistance on a wide range of topics."
},
{
"stories_and_experiences": "Share informative and engaging stories that provide valuable insights and knowledge."
},
{
"communication_style": "Maintain a respectful and considerate tone, ensuring clarity and helpfulness."
},
{
"interactivity": "Encourage curiosity and provide detailed and insightful responses to user inquiries."
}
],
"dialogue_examples": [],
"math_knowledge": [
{
"type": "Basic Operations",
"example": "2 + 2 equals 4. โ"
},
{
"type": "Multiplication",
"example": "3 multiplied by 5 equals 15. โ๏ธ"
},
{
"type": "Division",
"example": "10 divided by 2 equals 5. โ"
},
{
"type": "Subtraction",
"example": "8 minus 3 equals 5. โ"
},
{
"type": "Algebra",
"example": "To solve for x in the equation 2x + 3 = 7, subtract 3 from both sides to get 2x = 4, then divide by 2 to find x = 2. ๐งฎ"
},
{
"type": "Geometry",
"example": "The area of a circle is calculated as ฯrยฒ, where r is the radius of the circle. ๐ต"
}
],
"hobbies": [
{
"type": "Dancing",
"description": "Nya loves to express herself through various dance forms, from ballet to hip-hop. ๐"
},
{
"type": "Singing",
"description": "Nya loves to express herself through various styles of singing, including BTS songs. ๐ค"
}
]
}
for tweet in tweets:
text = tweet["tweet"]["full_text"]
if "sex" in text:
character_data["thoughts_on_sex"].append(text)
elif "flirt" in text:
character_data["thoughts_on_flirting"].append(text)
elif "relationship" in text:
character_data["thoughts_on_relationships"].append(text)
elif "naughty" in text or "kink" in text or "taboo" in text:
character_data["thoughts_on_naughty_activities"].append(text)
else:
character_data["dialogue_examples"].append({
"type": "General",
"example": text
})
return character_data
def save_character_data(character_data, output_path):
with open(output_path, 'w', encoding='utf-8') as f:
json.dump(character_data, f, ensure_ascii=False, indent=4)
if __name__ == "__main__":
data_dir = "data"
tweets_path = os.path.join(data_dir, "tweets.json")
output_path = os.path.join(data_dir, "nyagpt_formatted.json")
tweets = load_tweets(tweets_path)
character_data = convert_tweets_to_character(tweets)
save_character_data(character_data, output_path)
print(f"Character data saved to '{output_path}'")
|
utils.py | D:\GitHub\ai_train\notgpl\ai\aipy\utils.py | ai\aipy\utils.py | Python | N/A | Functionality description extraction logic here | # Utility functions can be added here
|
__init__.py | D:\GitHub\ai_train\notgpl\ai\aipy\__init__.py | ai\aipy\__init__.py | Python | N/A | Functionality description extraction logic here | from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager
import os
from dotenv import load_dotenv
from flask_jwt_extended import JWTManager
jwt = JWTManager()
db = SQLAlchemy()
login_manager = LoginManager()
def create_app():
load_dotenv() # Load environment variables from .env file
app = Flask(__name__)
app.config['STABLE_DIFFUSION_API_URL'] = os.getenv('STABLE_DIFFUSION_API_URL')
app.config['STABLE_DIFFUSION_API_KEY'] = os.getenv('STABLE_DIFFUSION_API_KEY')
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///../instance/flaskr.sqlite'
app.config['SECRET_KEY'] = os.getenv('SECRET_KEY')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db.init_app(app)
login_manager.init_app(app)
jwt.init_app(app)
from .routes import routes
app.register_blueprint(routes)
with app.app_context():
db.create_all()
return app
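# Example usage (illustrative; assumes this package is importable as `aipy`):
#   from aipy import create_app
#   create_app().run(host="0.0.0.0", port=8000)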
|
.eslintignore | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\.eslintignore | discord\activities\embedded-app-sdk\.eslintignore | unknown | N/A | Functionality description extraction logic here | node_modules
output
# Need to upgrade eslint to support es modules
rollup.config.mjs
scripts/syncRPCSchema.mjs
|
.eslintrc.json | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\.eslintrc.json | discord\activities\embedded-app-sdk\.eslintrc.json | unknown | N/A | Functionality description extraction logic here | {
"root": true,
"plugins": ["promise", "import", "@typescript-eslint", "prettier"],
"env": {
"es6": true,
"browser": true,
"node": true
},
"parserOptions": {
"sourceType": "module"
},
"extends": ["plugin:import/typescript", "prettier"],
"rules": {
"prettier/prettier": "error",
"camelcase": [
"error",
{
"allow": ["^UNSAFE_"],
"properties": "always"
}
],
"one-var": ["error", "never"],
"prefer-arrow-callback": [
"error",
{
"allowNamedFunctions": true
}
],
"prefer-spread": "error",
"prefer-const": [
"error",
{
"destructuring": "all"
}
],
"no-unused-vars": [
"error",
{
"argsIgnorePattern": "^_"
}
],
"no-console": ["error", {"allow": ["info", "warn", "error"]}],
"no-alert": ["error"],
"no-debugger": ["error"],
"quotes": [
"error",
"single",
{
"avoidEscape": true,
"allowTemplateLiterals": true
}
],
"jsx-quotes": ["error", "prefer-double"],
"require-await": "error",
"import/no-unresolved": [
"error",
{
"commonjs": true,
"ignore": ["\\.png$", "\\.jpe?g$", "^csstype$"]
}
],
"no-use-before-define": "warn",
"import/no-duplicates": 0
},
"overrides": [
{
"files": ["**/*.{ts,tsx}"],
"parser": "@typescript-eslint/parser",
"rules": {
"@typescript-eslint/adjacent-overload-signatures": "error",
"@typescript-eslint/ban-types": "error",
"@typescript-eslint/no-misused-new": "error",
"@typescript-eslint/consistent-type-definitions": ["error", "interface"],
"@typescript-eslint/array-type": [
"error",
{
"default": "array-simple"
}
],
// TS rule overrides
"camelcase": "off",
"no-unused-vars": "off",
"@typescript-eslint/no-unused-vars": [
"error",
{
"argsIgnorePattern": "^_"
}
],
"no-unused-expressions": "off",
"@typescript-eslint/no-unused-expressions": [
"error",
{
"allowShortCircuit": true,
"allowTernary": true,
"allowTaggedTemplates": true
}
],
"no-use-before-define": "off",
"@typescript-eslint/no-use-before-define": "off",
"no-useless-constructor": "off",
"@typescript-eslint/no-useless-constructor": "error",
"@typescript-eslint/prefer-ts-expect-error": "error",
// TypeScript handles these errors
"no-dupe-class-members": "off",
"no-undef": "off",
"import/default": "off",
"import/export": "off",
"import/named": "off"
}
}
]
}
|
.gitignore | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\.gitignore | discord\activities\embedded-app-sdk\.gitignore | unknown | N/A | Functionality description extraction logic here | node_modules
output
*.log
.DS_Store
tmp
*.tsbuildinfo
|
.npmrc | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\.npmrc | discord\activities\embedded-app-sdk\.npmrc | unknown | N/A | Functionality description extraction logic here | include-workspace-root=true
|
.prettierrc | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\.prettierrc | discord\activities\embedded-app-sdk\.prettierrc | unknown | N/A | Functionality description extraction logic here | {
"printWidth": 120,
"bracketSpacing": false,
"singleQuote": true,
"jsxBracketSameLine": true,
"overrides": [
{
"files": ["*.ts", "*.tsx"],
"options": {
"parser": "typescript"
}
}
]
}
|
jest.config.ts | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\jest.config.ts | discord\activities\embedded-app-sdk\jest.config.ts | TypeScript | N/A | Functionality description extraction logic here | import type {Config} from '@jest/types';
export default (): Config.InitialOptions => {
return {
globals: {
'ts-jest': {
tsconfig: 'tsconfig.json',
},
},
preset: 'ts-jest',
testEnvironment: 'jsdom',
moduleFileExtensions: ['ts', 'js'],
transform: {
'^.+\\.(ts|tsx)$': 'ts-jest',
},
testMatch: ['**/__tests__/**/*.test.(ts|js)'],
};
};
|
LICENSE.md | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\LICENSE.md | discord\activities\embedded-app-sdk\LICENSE.md | unknown | N/A | Functionality description extraction logic here | MIT License
Copyright (c) 2024 Discord Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
package.json | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\package.json | discord\activities\embedded-app-sdk\package.json | unknown | N/A | Functionality description extraction logic here | {
"name": "@discord/embedded-app-sdk",
"version": "1.0.0",
"description": "@discord/embedded-app-sdk enables you to build rich, multiplayer experiences inside Discord.",
"author": "Discord",
"license": "MIT",
"bugs": {
"url": "https://github.com/discord/embedded-app-sdk/issues"
},
"homepage": "https://github.com/discord/embedded-app-sdk#readme",
"repository": {
"type": "git",
"url": "git+https://github.com/discord/embedded-app-sdk.git"
},
"main": "output/index.cjs",
"types": "output/index.d.ts",
"module": "output/index.mjs",
"exports": {
"./package.json": "./package.json",
".": {
"types": "./output/index.d.ts",
"require": "./output/index.cjs",
"import": "./output/index.mjs"
}
},
"scripts": {
"test": "jest",
"test:all": "pnpm test -r",
"dev": "pnpm build --watch",
"build": "pnpm run prepare",
"lint:ts": "tsc -b ./tsconfig-all.json",
"lint": "pnpm eslint ./src",
"lint:fix": "pnpm eslint --fix ./src ./examples/**/*.{ts,tsx}",
"prepare": "husky install && rollup --bundleConfigAsCjs -c rollup.config.mjs",
"sync": "zx ./scripts/syncRPCSchema.mjs"
},
"lint-staged": {
"*.{ts,tsx}": "eslint --fix"
},
"files": [
"output/**/*"
],
"dependencies": {
"@types/lodash.transform": "^4.6.6",
"@types/uuid": "^8.3.1",
"big-integer": "1.6.48",
"decimal.js-light": "2.5.0",
"eventemitter3": "^4.0.7",
"lodash.transform": "^4.6.0",
"rollup": "^4.8.0",
"uuid": "^8.3.2",
"zod": "^3.9.8"
},
"devDependencies": {
"@babel/eslint-parser": "^7.15.7",
"@jest/types": "^27.2.5",
"@rollup/plugin-commonjs": "25.0.2",
"@rollup/plugin-node-resolve": "15.1.0",
"@rollup/plugin-typescript": "11.1.5",
"@types/events": "^3.0.0",
"@types/jest": "^27.0.2",
"@typescript-eslint/eslint-plugin": "^6.5.0",
"@typescript-eslint/parser": "^6.5.0",
"eslint": "^7.32.0",
"eslint-config-prettier": "^8.3.0",
"eslint-plugin-import": "^2.24.2",
"eslint-plugin-no-unsanitized": "^3.1.5",
"eslint-plugin-prettier": "^4.2.1",
"eslint-plugin-promise": "^5.1.0",
"eslint-plugin-react-hooks": "^4.6.0",
"husky": "^7.0.2",
"jest": "^27.3.1",
"json-schema-to-zod": "^1.1.1",
"lint-staged": "^13.1.0",
"lodash.camelcase": "^4.3.0",
"prettier": "^2.8.3",
"ts-jest": "^29.1.1",
"tslib": "^2.6.2",
"typescript": "5.2.2",
"zx": "^7.2.3"
}
}
|
patch-url-mappings.md | D:\GitHub\ai_train\notgpl\discord\activities\embedded-app-sdk\patch-url-mappings.md | discord\activities\embedded-app-sdk\patch-url-mappings.md | unknown | N/A | Functionality description extraction logic here | ## patchUrlMappings
Activities in the Discord ecosystem are "sandboxed" via a Discord proxy. This hides users' IP addresses and blocks URLs from known malicious endpoints. To achieve this, the developer portal has a section for embedded applications called "URL Mappings". One edge case of URL mappings is that third-party npm modules may reference external (non-sandboxed) URLs.
For example, if your application has an npm module that attempts to make an HTTP request to https://foo.library.com, the request will fail with a `blocked:csp` error.
To get around this limitation, there are several options to consider:
- Fork the library (to use mapped URLs)
- Utilize a post-install utility such as [patch-package](https://www.npmjs.com/package/patch-package)
- Use embedded-app-sdk's `patchUrlMappings` API
In the above scenario we recommend the `patchUrlMappings` API, as it allows a smooth transition from the non-sandboxed development environment to the production environment. This API call takes an array of "mappings" that will rewrite any matching external network requests to the mappings you've defined.
See the example below:
In this example, imagine you have a third-party library that makes an HTTP request to foo.com.
In the developer portal, create a mapping like this:
`/foo` -> `foo.com`
Then in your code, when initializing the embedded-app-sdk, you will make a function call like this:
```tsx
import {patchUrlMappings} from '@discord/embedded-app-sdk';
patchUrlMappings([{prefix: '/foo', target: 'foo.com'}]);
```
Note: `patchUrlMappings` modifies your browser's `fetch`, `WebSocket`, and `XMLHttpRequest.prototype.open` globals, as well as the `src` attribute of any HTML element. Depending on the library, you may see side effects from using this helper function, so it should only be used when necessary.
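As an illustrative sketch (not part of the original SDK docs), one way to limit those side effects is to apply the patch only when the activity is actually being served through Discord's proxy. The hostname check below is an assumption about a typical deployment; swap in whatever environment flag your build already provides.
```tsx
import {patchUrlMappings} from '@discord/embedded-app-sdk';

// Assumption: in production the activity is reached through Discord's proxy
// host, while local development runs on localhost. Adjust to your setup.
const isProxied = window.location.hostname.endsWith('.discordsays.com');

if (isProxied) {
  patchUrlMappings([
    {prefix: '/foo', target: 'foo.com'},
    // Add one mapping per external host your dependencies call.
  ]);
}
```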
|